10,512,032
https://en.wikipedia.org/wiki/BLAT%20%28bioinformatics%29
BLAT (BLAST-like alignment tool) is a pairwise sequence alignment algorithm that was developed by Jim Kent at the University of California Santa Cruz (UCSC) in the early 2000s to assist in the assembly and annotation of the human genome. It was designed primarily to decrease the time needed to align millions of mouse genomic reads and expressed sequence tags against the human genome sequence. The alignment tools of the time were not capable of performing these operations in a manner that would allow a regular update of the human genome assembly. Compared to pre-existing tools, BLAT was ~500 times faster with performing mRNA/DNA alignments and ~50 times faster with protein/protein alignments. Overview BLAT is one of multiple algorithms developed for the analysis and comparison of biological sequences such as DNA, RNA and proteins, with a primary goal of inferring homology in order to discover biological function of genomic sequences. It is not guaranteed to find the mathematically optimal alignment between two sequences like the classic Needleman-Wunsch and Smith-Waterman dynamic programming algorithms do; rather, it first attempts to rapidly detect short sequences which are more likely to be homologous, and then it aligns and further extends the homologous regions. It is similar to the heuristic BLAST family of algorithms, but each tool has tried to deal with the problem of aligning biological sequences in a timely and efficient manner by attempting different algorithmic techniques. Uses of BLAT BLAT can be used to align DNA sequences as well as protein and translated nucleotide (mRNA or DNA) sequences. It is designed to work best on sequences with great similarity. The DNA search is most effective for primates and the protein search is effective for land vertebrates. In addition, protein or translated sequence queries are more effective for identifying distant matches and for cross-species analysis than DNA sequence queries. Typical uses of BLAT include the following: Alignment of multiple mRNA sequences onto a genome assembly in order to infer their genomic coordinates; Alignment of a protein or mRNA sequence from one species onto a sequence database from another species to determine homology. Provided the two species are not too divergent, cross-species alignment is generally effective with BLAT. This is possible because BLAT does not require perfect matches, but rather accepts mismatches in alignments; BLAT can be used for alignments of two protein sequences. However, it is not the tool of choice for these types of alignments. BLASTP, the Standard Protein BLAST tool, is more efficient at protein-protein alignments; Determination of the distribution of exonic and intronic regions of a gene; Detection of gene family members of a specific gene query; Display of the protein-coding sequence of a specific gene. BLAT is designed to find matches between sequences of length at least 40 bases that share ≥95% nucleotide identity or ≥80% translated protein identity. Process BLAT is used to find regions in a target genomic database which are similar to a query sequence under examination. The general algorithmic process followed by BLAT is similar to BLAST's in that it first searches for short segments in the database and query sequences which have a certain number of matching elements. These alignment seeds are then extended in both directions of the sequences in order to form high-scoring pairs. 
However, BLAT uses a different indexing approach from BLAST, which allows it to rapidly scan very large genomic and protein databases for similarities to a query sequence. It does this by keeping an indexed list (hash table) of the target database in memory, which significantly reduces the time required for the comparison of the query sequences with the target database. This index is built by taking the coordinates of all the non-overlapping k-mers (words with k letters) in the target database, except for highly repeated k-mers. BLAT then builds a list of all overlapping k-mers from the query sequence and searches for these in the target database, building up a list of hits where there are matches between the sequences (Figure 1 illustrates this process). Search stage There are three different strategies used in order to search for candidate homologous regions: The first method requires single perfect matches between the query and database sequences i.e. the two k-mer words are exactly the same. This approach is not considered the most practical. This is because a small k-mer size is necessary in order to achieve high levels of sensitivity, but this increases the number of false positive hits, thus increasing the amount of time spent in the alignment stage of the algorithm. The second method allows at least one mismatch between the two k-mer words. This decreases the amount of false positives, allowing larger k-mer sizes which are less computationally expensive to handle than those produced from the previous method. This method is very effective in identifying small homologous regions. The third method requires multiple perfect matches which are in close proximity to each other. As Kent shows, this is a very effective technique capable of taking into consideration small insertions and deletions within the homologous regions. When aligning nucleotides, BLAT uses the third method requiring two perfect word matches of size 11 (11-mers). When aligning proteins, the BLAT version determines the search methodology used: when the client/server version is used, BLAT searches for three perfect 4-mer matches; when the stand-alone version is used, BLAT searches for a single perfect 5-mer between the query and database sequences. BLAT vs. BLAST Some of the differences between BLAT and BLAST are outlined below: BLAT indexes the genome/protein database, retains the index in memory, and then scans the query sequence for matches. BLAST, on the other hand, builds an index of the query sequences and searches through the database for matches. A BLAST variant called MegaBLAST indexes 4 databases to speed up alignments. BLAT can extend on multiple perfect and near-perfect matches (default is 2 perfect matches of length 11 for nucleotide searches and 3 perfect matches of length 4 for protein searches), while BLAST extends only when one or two matches occur close together. BLAT connects each homologous area between two sequences into a single larger alignment, in contrast to BLAST which returns each homologous area as a separate local alignment. The result of BLAST is a list of exons with each alignment extending just past the end of the exon. BLAT, however, correctly places each base of the mRNA onto the genome, using each base only once and can be used to identify intron-exon boundaries (i.e. splice sites). BLAT is less sensitive than BLAST. Program usage BLAT can be used either as a web-based server-client program or as a stand-alone program. 
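To make the indexing-and-search idea above concrete, the following is a minimal Python sketch, not Kent's implementation: it stores the positions of non-overlapping k-mers of the target in a hash table (dropping highly repeated words), looks up every overlapping k-mer of the query, and keeps only hits supported by a second nearby hit on the same diagonal, loosely mirroring the two-perfect-11-mer strategy described for nucleotide searches. The function names, repeat cutoff, and gap threshold are illustrative assumptions.

```python
from collections import defaultdict

def build_index(target: str, k: int = 11, max_occurrences: int = 10):
    """Index the coordinates of non-overlapping k-mers in the target,
    discarding k-mers that occur too often (highly repeated words)."""
    index = defaultdict(list)
    for pos in range(0, len(target) - k + 1, k):       # non-overlapping words
        index[target[pos:pos + k]].append(pos)
    return {kmer: hits for kmer, hits in index.items()
            if len(hits) <= max_occurrences}            # drop repeated k-mers

def find_hits(query: str, index, k: int = 11):
    """Scan all overlapping k-mers of the query against the index and
    return (query_pos, target_pos) hit pairs."""
    hits = []
    for qpos in range(len(query) - k + 1):              # overlapping words
        for tpos in index.get(query[qpos:qpos + k], []):
            hits.append((qpos, tpos))
    return hits

def two_hit_seeds(hits, max_gap: int = 100):
    """Keep only hits supported by a second nearby hit on the same diagonal,
    mimicking the 'multiple perfect matches in close proximity' strategy."""
    by_diag = defaultdict(list)
    for qpos, tpos in hits:
        by_diag[tpos - qpos].append(tpos)
    seeds = []
    for diag, positions in by_diag.items():
        positions.sort()
        for a, b in zip(positions, positions[1:]):
            if b - a <= max_gap:
                seeds.append((diag, a, b))
    return seeds
```

Real BLAT then extends and stitches such seeds into full alignments, which this sketch omits.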
Server-client The web-based application of BLAT can be accessed from the UCSC Genome Bioinformatics Site. Building the index is a relatively slow procedure. Therefore, each genome assembly used by the web-based BLAT is associated with a BLAT server, in order to have a pre-computed index available for alignments. These web-based BLAT servers keep the index in memory for users to input their query sequences. Once the query sequence is uploaded/pasted into the search field, the user can select various parameters such as which species' genome to target (there are currently over 50 species available) and the assembly version of that genome (for example, the human genome has four assemblies to select from), the query type (i.e. whether the sequence relates to DNA, protein etc.), and output settings (i.e. how to sort and visualise the output). The user can then run the search by either submitting the query or using the BLAT "I'm feeling lucky" search. Bhagwat et al. provide step-by-step protocols for how to use BLAT to: Map an mRNA/cDNA sequence to a genomic sequence; Map a protein sequence to the genome; Perform homology searches. Input BLAT can handle long database sequences; however, it is more effective with short query sequences than with long query sequences. Kent recommends a maximum query length of 200,000 bases. The UCSC browser limits query sequences to less than 25,000 letters (i.e. nucleotides) for DNA searches and less than 10,000 letters (i.e. amino acids) for protein and translated sequence searches. The BLAT Search Genome tool available on the UCSC website accepts query sequences as text (cut and pasted into the query box) or uploaded as text files. It can accept multiple sequences of the same type at once, up to a maximum of 25. For multiple sequences, the total number of letters must not exceed 50,000 for DNA searches or 25,000 for protein or translated sequence searches. An example of searching a target database with a DNA query sequence is shown in Figure 2. Output A BLAT search returns a list of results ordered by decreasing score. The following information is returned: the score of the alignment, the region of the query sequence that matches the database sequence, the size of the query sequence, the level of identity as a percentage of the alignment, and the chromosome and position to which the query sequence maps. Bhagwat et al. describe how the BLAT "Score" and "Identity" measures are calculated. For each search result, the user is provided with a link to the UCSC Genome Browser so they can visualise the alignment on the chromosome. This is a major benefit of the web-based BLAT over the stand-alone BLAT. The user is able to obtain biological information associated with the alignment, such as information about the gene to which the query may match. The user is also provided with a link to view the alignment of the query sequence with the genome assembly. The matches between the query and genome assembly are blue and the boundaries of the alignments are lighter in colour. These exon boundaries indicate splice sites. The "I'm feeling lucky" search returns the highest-scoring alignment for the first query sequence, based on the output sort option selected by the user. Stand-alone Stand-alone BLAT is more suitable for batch runs, and more efficient than the web-based BLAT. It is more efficient because it is able to store the genome in memory, unlike the web-based application, which only stores the index in memory.
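Because batch submissions are rejected when they exceed the limits quoted above, it can help to check a set of query sequences locally before pasting or uploading them. The sketch below hard-codes the limits as stated in this section (25,000 nucleotides or 10,000 amino acids per query, at most 25 sequences, and 50,000 or 25,000 letters in total); the function name and error messages are illustrative and are not part of any UCSC tool.

```python
# Rough pre-submission check against the UCSC web BLAT limits quoted above.
# The numeric limits come from the text; the helper itself is illustrative.
LIMITS = {
    "dna":     {"per_query": 25_000, "total": 50_000},
    "protein": {"per_query": 10_000, "total": 25_000},
}
MAX_SEQUENCES = 25

def check_blat_batch(sequences: dict, query_type: str = "dna") -> bool:
    limits = LIMITS[query_type]
    if len(sequences) > MAX_SEQUENCES:
        raise ValueError(f"web BLAT accepts at most {MAX_SEQUENCES} sequences per batch")
    total = 0
    for name, seq in sequences.items():
        if len(seq) > limits["per_query"]:
            raise ValueError(f"{name}: {len(seq)} letters exceeds the per-query limit")
        total += len(seq)
    if total > limits["total"]:
        raise ValueError(f"batch totals {total} letters, above the {limits['total']} limit")
    return True

# Example: a single short DNA query passes the check.
check_blat_batch({"seq1": "ACGT" * 100}, "dna")
```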
License Both the source and precompiled binaries of BLAT are freely available for academic and personal use. Commercial license of stand-alone BLAT is distributed by Kent Informatics, Inc. See also BLAST Basic Local Alignment Search Tool Sequence alignment software References External links UCSC BLAT Search Genome Kent Informatics, Inc. BLAT source code BLAT FAQ — by UCSC BLAT Suite Program Specifications and User Guide Human BLAT Search Bioinformatics software Laboratory software
BLAT (bioinformatics)
Biology
2,254
53,276,317
https://en.wikipedia.org/wiki/C7H5FO2
The molecular formula C7H5FO2 (molar mass: 140.11 g/mol) may refer to: Fluorobenzoic acids 2-Fluorobenzoic acid 3-Fluorobenzoic acid 4-Fluorobenzoic acid
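The quoted molar mass of 140.11 g/mol can be reproduced from standard atomic weights; the short sketch below uses rounded IUPAC values.

```python
# Recompute the molar mass of C7H5FO2 from standard atomic weights (g/mol).
atomic_weight = {"C": 12.011, "H": 1.008, "F": 18.998, "O": 15.999}
composition   = {"C": 7, "H": 5, "F": 1, "O": 2}

molar_mass = sum(atomic_weight[el] * n for el, n in composition.items())
print(f"{molar_mass:.2f} g/mol")   # ≈ 140.11 g/mol, matching the value above
```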
C7H5FO2
Chemistry
72
39,904,365
https://en.wikipedia.org/wiki/Upptalk
Upptalk (formerly known as Yuilop) was a proprietary voice-over-IP service and software application that provided mobile phone numbers in the cloud and allowed users to call or text any phone for free, whether or not the device receiving the calls and texts had the Yuilop application. The service was discontinued in 2017 and even its domain was abandoned; the company then officially transitioned into an Edtech company. Yuilop-provided phone numbers could be accessed across any device on the internet (IP) and were reachable via regular phone calls (PSTN) from any landline or mobile phone, and via SMS. Calls and chats to other users within the Yuilop network were free of charge. Unlike most other VoIP services, Yuilop did not charge for off-network communication. Calls and SMS to landline telephones and mobile phones used virtual credits that were earned for free through participating in promotional activities and using the app. Yuilop did not require credit for national calls and SMS in the U.S. Yuilop had additional features, including instant messaging, group chat, location sharing, and photo sharing. Competitors included Skype, Viber and Google Voice. Overview Yuilop was created by former simyo CEO Jochen Doppelhammer in November 2010. Yuilop's headquarters were located in Barcelona, Spain, and the app was available in over 200 countries including the United States, United Kingdom, Germany, Italy, Spain, Israel, and Mexico. The app had approximately 5 million users and was available on Android, iOS, Windows Phone, and BlackBerry OS. As of July 2013, Yuilop had raised approximately $9 million in funding from investors such as Nauta Capital, Shortcut Ventures GmbH, Bright Capital, and the Spanish government. In 2014, the company UppTalk entered insolvency procedures. As of June 2020, as an Edtech company, it focused on developing cutting-edge technology and engaging content to enhance learning experiences for students of all ages. Features Yuilop provided free calling and texting between Yuilop users as well as to mobile and landline numbers not in the Yuilop network. Yuilop allowed these registered users to communicate through instant messaging, SMS, voice chat, and calling with yuilop.me. Yuilop's text chat client allowed group chats, emoticons, photo sharing, and location sharing. Yuilop.me Yuilop.me was an option to receive a virtual mobile phone number. The number could be used across devices with access to the internet and was not tied to a SIM card or an operator. Yuilop.me was available to Yuilop users in the United States, the United Kingdom, Germany and Spain. Credits Calls and instant messages made to other Yuilop users via the app were free and unlimited. Calls and SMS made to mobile phones and land lines used "credits". Credits were acquired by: Inviting friends and having them sign up to become users of Yuilop. Completing an online offer such as downloading apps, filling out a survey, shopping, or signing up for a trial period of a listed product. Redeeming voucher codes shared on Yuilop's Facebook and Twitter accounts. Using the app, i.e. receiving chats from other Yuilop users, receiving calls, and receiving SMS to a yuilop.me number. Version 1.9 of the iOS app also allowed users to purchase credits. The number of credits used to call mobile phones and land lines was determined by the location to which the call or SMS was sent. Sending calls or SMS to mobile phones required no credit in or between some countries, such as the United States.
Security Yuilop used 2048-bit encryption to encrypt data traffic and call signalling over any transport medium (Wi-Fi, 2G, 3G, 4G LTE). A STARTTLS extension was used over the Transport Layer Security (TLS) protocol. See also Comparison of instant messaging clients Comparison of VoIP software Mobile VoIP Presence information Secure communication Unified communications References External links 2011 software Android (operating system) software Cross-platform mobile software Cross-platform software Freeware Instant messaging clients IOS software Portable software VoIP services VoIP companies of Spain VoIP software Windows Phone software Defunct instant messaging clients
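For readers unfamiliar with STARTTLS, the pattern mentioned above is that a connection begins in plaintext and is then upgraded to TLS in place. The sketch below illustrates that general pattern in Python; the host, port, upgrade command, and acknowledgement are placeholders and do not reflect Yuilop's actual protocol or key sizes.

```python
# Generic illustration of the STARTTLS pattern: a plaintext connection is
# upgraded to TLS in place. Host, port, and the upgrade exchange are
# placeholders, not Yuilop's real protocol details.
import socket
import ssl

def starttls_connect(host: str, port: int) -> ssl.SSLSocket:
    raw = socket.create_connection((host, port))
    raw.sendall(b"STARTTLS\r\n")                 # ask the server to switch to TLS
    if not raw.recv(1024).startswith(b"OK"):     # hypothetical acknowledgement
        raise RuntimeError("server refused to start TLS")
    context = ssl.create_default_context()       # negotiates certificates and keys
    return context.wrap_socket(raw, server_hostname=host)
```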
Upptalk
Technology
892
55,731,085
https://en.wikipedia.org/wiki/Boletus%20pyrrhosceles
Boletus pyrrhosceles is a species of bolete (pored mushroom) native to Colombia. It was described by Roy Halling in 1992 from material collected on 20 November 1988 near the highway between Pasto and Chachagüí in Nariño Department in the country's southwest, at an altitude of 2700 m. It was classified in section Luridii, and thought most similar to Boletus austrinus, Boletus flammans and Boletus rubroflammeus. The species name is derived from the Ancient Greek words pyrrhos "red" and skelos "legs", referring to its stem. Description The shape of the cap is convex to broadly convex, flattening with age, and reaching a diameter of . The margin of the cap is inrolled in young specimens before flattening out. The cap is a red-brown color that becomes more orange-brown with age. The flesh is thick, and yellow, with no detectable taste nor odor. On the underside of the cap, the spore-bearing surface comprises vertically arranged minute yellow tubes with brownish pore-like openings. The tubes are deep, adnate (fused) or subdecurrent to the stem, and the individual pores are round and small (about 1 per mm). The stem is long, thick at the apex and thick elsewhere. The upper stem surface is covered with reticulations, and the stem is a dark red-brown and furry. The mycelium is yellow. The pore surface quickly turns blue with injury, as does the stem. Unlike similar species, its cap is never sticky, even after wet weather, and its pores are much shallower. Boletus pyrrhosceles grows in association with Colombian oak (Quercus humboldtii). References pyrrhosceles Fungi of Colombia Fungi described in 1992 Fungus species Taxa named by Roy Halling
Boletus pyrrhosceles
Biology
404
188,517
https://en.wikipedia.org/wiki/Micropsia
Micropsia is a condition affecting human visual perception in which objects are perceived to be smaller than they actually are. Micropsia can be caused by optical factors (such as wearing glasses), by distortion of images in the eye (such as optically, via swelling of the cornea or from changes in the shape of the retina such as from retinal edema, macular degeneration, or central serous retinopathy), by changes in the brain (such as from traumatic brain injury, epilepsy, migraines, or drugs), and from psychological factors. Dissociative phenomena are linked with micropsia, which may be the result of brain-lateralization disturbance. Micropsia is also commonly reported when the eyes are fixating at (convergence), or focusing at (accommodation), a distance closer than that of the object in accord with Emmert's law. Specific types of micropsia include hemimicropsia, a form of micropsia that is localized to one half of the visual field and can be caused by brain lesions in one of the cerebral hemispheres. Related visual distortion conditions include macropsia, a less common condition with the reverse effect, and Alice in Wonderland syndrome, a condition that has symptoms that can include both micropsia and macropsia. Signs and symptoms Micropsia causes affected individuals to perceive objects as being smaller or more distant than they actually are. The majority of individuals with micropsia are aware that their perceptions do not mimic reality. Many can imagine the actual sizes of objects and distances between objects. It is common for patients with micropsia to be able to indicate true size and distance despite their inability to perceive objects as they actually are. One specific patient was able to indicate the dimensions of specific objects with her hands. She was also able to estimate the distances between two objects and between an object and herself. She succeeded in indicating horizontal, vertical, and 45 degree positions and did not find it difficult to search for an object in a cluttered drawer, indicating that her figure-ground discrimination was intact despite having micropsia. Individuals experiencing hemimicropsia often complain that objects in their left or right visual field appear to be shrunken or compressed. They may also have difficulty appreciating the symmetry of pictures. When drawing, patients often have a tendency to compensate for their perceptual asymmetry by drawing the left or right half of objects slightly larger than the other. In a case of one person with hemimicropsia asked to draw six symmetrical objects, the size of the picture on the left half was on average 16% larger than the corresponding right half. Diagnosis EEG testing can diagnose patients with medial temporal lobe epilepsy. Epileptiform abnormalities including spikes and sharp waves in the medial temporal lobe of the brain can diagnose this condition, which can in turn be the cause of an epileptic patient's micropsia. The Amsler grid test can be used to diagnose macular degeneration. For this test, patients are asked to look at a grid, and distortions or blank spots in the patient's central field of vision can be detected. A positive diagnosis of macular degeneration may account for a patient's micropsia. A controlled size comparison task can be employed to evaluate objectively whether a person is experiencing hemimicropsia. For each trial, a pair of horizontally aligned circles is presented on a computer screen, and the person being tested is asked to decide which circle is larger. 
After a set of trials, the overall pattern of responses should display a normal distance effect where the more similar the two circles, the higher the number of errors. This test is able to effectively diagnose micropsia and confirm which hemisphere is being distorted. Due to the large range of causes that lead to micropsia, diagnosis varies among cases. Computed tomography (CT) and magnetic resonance imaging (MRI) may find lesions and hypodense areas in the temporal and occipital lobes. MRI and CT techniques are able to rule out lesions as the cause for micropsia, but are not sufficient to diagnose the most common causes. Definition Micropsia is the most common visual distortion, or dysmetropsia. It is categorized as an illusion in the positive phenomena grouping of abnormal visual distortions. Convergence-accommodative micropsia is a physiologic phenomenon in which an object appears smaller as it approaches the subject. Psychogenic micropsia can present itself in individuals with certain psychiatric disorders. Retinal micropsia is characterized by an increase in the distance between retinal photoreceptors and is associated with decreased visual acuity. Cerebral micropsia is a rare form of micropsia that can arise in children with chronic migraines. Hemimicropsia is a type of cerebral micropsia that occurs within one half of the visual field. Differential diagnosis Of all of the visual distortions, micropsia has the largest variety of causes. Migraines Micropsia can occur during the aura phase of a migraine attack, a phase that often precedes the onset of a headache and is commonly characterized by visual disturbances. Micropsia, along with hemianopsia, quadrantopsia, scotoma, phosphene, teicopsia, metamorphopsia, macropsia, teleopsia, diplopia, dischromatopsia, and hallucination disturbances, is a type of aura that occurs immediately before or during the onset of a migraine headache. The symptom usually occurs less than thirty minutes before the migraine headache begins and lasts for five to twenty minutes. Only 10-20% of children with migraine headaches experience auras. Visual auras such as micropsia are most common in children with migraines. Seizures The most frequent neurological origin of micropsia is a result of temporal lobe seizures. These seizures affect the entire visual field of the patient. More rarely, micropsia can be part of purely visual seizures. This in turn only affects one half of the visual field and is accompanied by other cerebral visual disturbances. The most common cause of seizures which produce perceptual disturbances such as micropsia and macropsia is medial temporal lobe epilepsy in which the seizures originate in the amygdala-hippocampus complex. Micropsia often occurs as an aura signalling a seizure in patients with medial temporal lobe epilepsy. Most auras last for a very short period, ranging from a few seconds to a few minutes. Drug use Micropsia can result from the action of mescaline and other hallucinogenic drugs. Although drug-induced changes in perception usually subside as the chemical leaves the body, long-term cocaine use can result in the chronic residual effect of micropsia. Micropsia can be a symptom of Hallucinogen Persisting Perception Disorder, or HPPD, in which a person can experience hallucinogenic flashbacks long after ingesting a hallucinogen. A majority of these flashbacks are visual distortions which include micropsia, and 15-80% of hallucinogen users may experience these flashbacks. 
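The size-comparison task described under Diagnosis predicts a "distance effect": more errors as the two circles become more similar in size. The following is a schematic simulation of that pattern; the Gaussian noise model and all numbers are illustrative and do not represent a clinical protocol.

```python
# Schematic simulation of the two-circle size-comparison task described in
# the Diagnosis section. The noise model and numbers are illustrative only.
import random

def simulate_trial(left_diam: float, right_diam: float, noise_sd: float = 2.0) -> bool:
    """Return True if the simulated observer picks the truly larger circle."""
    perceived_left  = left_diam  + random.gauss(0.0, noise_sd)
    perceived_right = right_diam + random.gauss(0.0, noise_sd)
    chose_left = perceived_left > perceived_right
    return chose_left == (left_diam > right_diam)

def error_rate(size_difference: float, trials: int = 1_000) -> float:
    base = 40.0                                   # arbitrary reference diameter
    wrong = sum(not simulate_trial(base, base + size_difference)
                for _ in range(trials))
    return wrong / trials

for diff in (1, 2, 4, 8):                         # smaller difference -> more errors
    print(f"size difference {diff}: error rate ≈ {error_rate(diff):.2f}")
```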
Micropsia can also be a rare side effect of zolpidem, a prescription medication used to temporarily treat insomnia. Psychological factors Psychiatric patients may experience micropsia in an attempt to distance themselves from situations involving conflict. Micropsia may also be a symptom of psychological conditions in which patients visualize people as small objects as a way to control others in response to their insecurities and feelings of weakness. In some adults who experienced loneliness as children, micropsia may arise as a mirror of prior feelings of separation from people and objects. Epstein-Barr virus infection Micropsia can be caused by swelling of the cornea due to infection by the Epstein-Barr virus (EBV) and can therefore present as an initial symptom of EBV mononucleosis, a disease caused by Epstein-Barr virus infection. Retinal edema Micropsia can result from retinal edema causing a dislocation of the receptor cells. Photoreceptor misalignment seems to occur following the surgical re-attachment for macula-off rhegmatogenous retinal detachment. After surgery, patients may experience micropsia as a result of larger photoreceptor separation by edematous fluid. Macular degeneration Macular degeneration typically produces micropsia due to the swelling or bulging of the macula, an oval-shaped yellow spot near the center of the retina in the human eye. The main factors leading to this disease are age, smoking, heredity, and obesity. Some studies show that consuming spinach or collard greens five times a week cuts the risk of macular degeneration by 43%. Central serous chorioretinopathy CSCR is a disease in which a serous detachment of the neurosensory retina occurs over an area of leakage from the choriocapillaris through the retinal pigment epithelium (RPE). The most common symptoms that result from the disease are a deterioration of visual acuity and micropsia. Brain lesions Micropsia is sometimes seen in individuals with brain infarctions. The damaged side of the brain conveys size information that contradicts the size information conveyed by the other side of the brain. This causes a contradiction to arise between the true perception of an object's size and the smaller perception of the object, and micropsic bias ultimately causes the individual to experience micropsia. Lesions affecting other parts of the extracerebral visual pathways can also cause micropsia. Treatment Treatment varies for micropsia due to the large number of different causes for the condition. Treatments involving the occlusion of one eye and the use of a prism fitted over an eyeglass lens have both been shown to provide relief from micropsia. Micropsia that is induced by macular degeneration can be treated in several ways. A study called AREDS (age-related eye disease study) determined that taking dietary supplements containing high-dose antioxidants and zinc produced significant benefits with regard to disease progression. This study was the first ever to prove that dietary supplements can alter the natural progression and complications of a disease state. Laser treatments also look promising but are still in clinical stages. Epidemiology Episodes of micropsia or macropsia occur in 9% of adolescents. 10-35% of those with migraines experience auras, with 88% of these patients experiencing both visual auras (which include micropsia) and neurological auras. Micropsia seems to be slightly more common in boys than in girls among children who experience migraines. 
Approximately 80% of temporal lobe seizures produce auras that may lead to micropsia or macropsia. They are a common feature of simple partial seizures and usually precede complex partial seizures of temporal lobe origin. Central Serous Chorioretinopathy (CSCR) which can produce micropsia predominantly affects persons between the ages of 20 and 50. Women appear to be affected more than men by a factor of almost 3 to 1. Society and culture Comparison with Alice's Adventures in Wonderland Alice in Wonderland Syndrome, a neurological condition associated with both micropsia and macropsia, is named after Lewis Carroll's famous 19th century novel Alice's Adventures in Wonderland. In the story, the title character, Alice, experiences numerous situations similar to those of micropsia and macropsia. Speculation has arisen that Carroll may have written the story using his own direct experience with episodes of micropsia resulting from the numerous migraines he was known to have. It has also been suggested that Carroll may have had temporal lobe epilepsy. Comparison with Gulliver's Travels Micropsia has also been related to Jonathan Swift's novel Gulliver's Travels. It has been referred to as "Lilliput sight" and "Lilliputian hallucination," a term coined by British physician Raoul Leroy in 1909, based on the small people that inhabited the island of Lilliput in the novel. Research Current experimental evidence focuses on the involvement of the occipitotemporal pathway in both the perceptual equivalence of objects across translations of retinal position and also across size modifications. Recent evidence points to this pathway as a mediator for an individual's perception of size. Even further, numerous cases suggest that size perception may be dissociated from other aspects of visual perception such as color and movement. However, more research is called for to correctly relate the condition to defined physiological conditions. Current research is being done on macular degeneration which could help prevent cases of micropsia. A variety of drugs that block vascular endothelial growth factors (VEGFs) are being evaluated as a treatment option. These treatments for the first time have produced actual improvements in vision, rather than simply delaying or arresting the continued loss of vision characteristic of macular degeneration. A number of surgical treatments are also being investigated for macular degeneration lesions that may not qualify for laser treatment, including macular translocation to a healthier area of the eye, displacement of submacular blood using gas, and removing membranes by surgery. See also Alice in Wonderland syndrome Convergence micropsia Dysmetropsia Macropsia References External links Medical Dictionary: Micropsia Web-Md: Migraines in Children Neurological disorders Optical illusions Eye diseases Visual disturbances and blindness es:Micropsia
Micropsia
Physics
2,837
220,050
https://en.wikipedia.org/wiki/Philanthropy
Philanthropy is a form of altruism that consists of "private initiatives for the public good, focusing on quality of life". Philanthropy contrasts with business initiatives, which are private initiatives for private good, focusing on material gain; and with government endeavors that are public initiatives for public good, such as those that focus on the provision of public services. A person who practices philanthropy is a philanthropist. Etymology The word philanthropy comes from the Ancient Greek philanthrôpía, from philein 'to love, be fond of' and ánthrôpos 'humankind, mankind'. Plutarch used the Greek concept of philanthrôpía to describe superior human beings. During the Middle Ages, philanthrôpía was superseded in Europe by the Christian virtue of charity (Latin: caritas) in the sense of selfless love, valued for salvation and escape from purgatory. Thomas Aquinas held that "the habit of charity extends not only to the love of God, but also to the love of our neighbor". Sir Francis Bacon considered philanthrôpía to be synonymous with "goodness", correlated with the Aristotelian conception of virtue as consciously instilled habits of good behaviour. Samuel Johnson simply defined philanthropy as "love of mankind; good nature". This definition still survives today and is often cited more gender-neutrally as the "love of humanity." Europe Great Britain In London, prior to the 18th century, parochial and civic charities were typically established by bequests and operated by local church parishes (such as St Dionis Backchurch) or guilds (such as the Carpenters' Company). During the 18th century, however, "a more activist and explicitly Protestant tradition of direct charitable engagement during life" took hold, exemplified by the creation of the Society for the Promotion of Christian Knowledge and Societies for the Reformation of Manners. In 1739, Thomas Coram, appalled by the number of abandoned children living on the streets of London, received a royal charter to establish the Foundling Hospital to look after these unwanted orphans in Lamb's Conduit Fields, Bloomsbury. This was "the first children's charity in the country, and one that 'set the pattern for incorporated associational charities' in general." The hospital "marked the first great milestone in the creation of these new-style charities." Jonas Hanway, another notable philanthropist of the era, established The Marine Society in 1756 as the first seafarer's charity, in a bid to aid the recruitment of men to the navy. By 1763, the society had recruited over 10,000 men and it was incorporated in 1772. Hanway was also instrumental in establishing the Magdalen Hospital to rehabilitate prostitutes. These organizations were funded by subscriptions and run as voluntary associations. They raised public awareness of their activities through the emerging popular press and were generally held in high social regard; some charities received state recognition in the form of the Royal Charter. 19th century Philanthropists, such as anti-slavery campaigner William Wilberforce, began to adopt active campaigning roles, where they would champion a cause and lobby the government for legislative change. This included organized campaigns against the ill-treatment of animals and children and the campaign that succeeded in ending the slave trade throughout the Empire starting in 1807. Although there were no slaves allowed in Britain itself, many rich men owned sugar plantations in the West Indies, and resisted the movement to buy them out until it finally succeeded in 1833.
Financial donations to organized charities became fashionable among the middle class in the 19th century. By 1869 there were over 200 London charities with an annual income, all together, of about . By 1885, rapid growth had produced over 1000 London charities, with an income of about . They included a wide range of religious and secular goals, with the American import, YMCA, as one of the largest, and many small ones, such as the Metropolitan Drinking Fountain Association. In addition to making annual donations, increasingly wealthy industrialists and financiers left generous sums in their wills. A sample of 466 wills in the 1890s revealed a total wealth of , of which was bequeathed to charities. By 1900 London charities enjoyed an annual income of about . Led by the energetic Lord Shaftesbury (1801–1885), philanthropists organized themselves. In 1869 they set up the Charity Organisation Society. It was a federation of district committees, one in each of the 42 Poor Law divisions. Its central office had experts in coordination and guidance, thereby maximizing the impact of charitable giving to the poor. Many of the charities were designed to alleviate the harsh living conditions in the slums. such as the Labourer's Friend Society founded in 1830. This included the promotion of allotment of land to labourers for "cottage husbandry" that later became the allotment movement. In 1844 it became the first Model Dwellings Company—an organization that sought to improve the housing conditions of the working classes by building new homes for them, while at the same time receiving a competitive rate of return on any investment. This was one of the first housing associations, a philanthropic endeavor that flourished in the second half of the nineteenth century, brought about by the growth of the middle class. Later associations included the Peabody Trust, and the Guinness Trust. The principle of philanthropic intention with capitalist return was given the label "five per cent philanthropy." Switzerland In 1863, the Swiss businessman Henry Dunant used his fortune to fund the Geneva Society for Public Welfare, which became the International Committee of the Red Cross. During the Franco-Prussian War of 1870, Dunant personally led Red Cross delegations that treated soldiers. He shared the first Nobel Peace Prize for this work in 1901. The International Committee of the Red Cross (ICRC) played a major role in working with POWs on all sides in World War II. It was in a cash-starved position when the war began in 1939, but quickly mobilized its national offices to set up a Central Prisoner of War Agency. For example, it provided food, mail and assistance to 365,000 British and Commonwealth soldiers and civilians held captive. Suspicions, especially by London, of ICRC as too tolerant or even complicit with Nazi Germany led to its side-lining in favour of the UN Relief and Rehabilitation Administration (UNRRA) as the primary humanitarian agency after 1945. France The French Red Cross played a minor role in the war with Germany (1870–71). After that, it became a major factor in shaping French civil society as a non-religious humanitarian organization. It was closely tied to the army's Service de Santé. By 1914 it operated one thousand local committees with 164,000 members, 21,500 trained nurses, and over in assets. 
The Pasteur Institute had a monopoly of specialized microbiological knowledge, allowing it to raise money for serum production from private and public sources, walking the line between a commercial pharmaceutical venture and a philanthropic enterprise. By 1933, at the depth of the Great Depression, the French wanted a welfare state to relieve distress but did not want new taxes. War veterans devised a solution: the new national lottery proved highly popular to gamblers while generating the cash needed without raising taxes. American money proved invaluable. The Rockefeller Foundation opened an office in Paris and helped design and fund France's modern public health system under the National Institute of Hygiene. It also set up schools to train physicians and nurses. Germany The history of modern philanthropy on the European continent is especially important in the case of Germany, which became a model for others, especially regarding the welfare state. The princes and the various imperial states continued traditional efforts, funding monumental buildings, parks, and art collections. Starting in the early 19th century, the rapidly emerging middle classes made local philanthropy a way to establish their legitimate role in shaping society, pursuing ends different from the aristocracy and the military. They concentrated on support for social welfare, higher education, and cultural institutions, as well as working to alleviate the hardships brought on by rapid industrialization. The bourgeoisie (upper-middle class) was defeated in its effort to gain political control in 1848, but it still had enough money and organizational skills that could be employed through philanthropic agencies to provide an alternative power base for its worldview. Religion was divisive in Germany, as Protestants, Catholics, and Jews used alternative philanthropic strategies. The Catholics, for example, continued their medieval practice of using financial donations in their wills to lighten their punishment in purgatory after death. The Protestants did not believe in purgatory, but made a strong commitment to improving their communities there and then. Conservative Protestants raised concerns about deviant sexuality, alcoholism, and socialism, as well as illegitimate births. They used philanthropy to try to eradicate what they considered as "social evils" that were seen as utterly sinful. All the religious groups used financial endowments, which multiplied in number and wealth as Germany grew richer. Each was devoted to a specific benefit to that religious community, and each had a board of trustees; laymen donated their time to public service. Chancellor Otto von Bismarck, an upper class Junker, used his state-sponsored philanthropy, in the form of his invention of the modern welfare state, to neutralize the political threat posed by the socialistic labor unions. The middle classes, however, made the most use of the new welfare state, in terms of heavy use of museums, gymnasiums (high schools), universities, scholarships, and hospitals. For example, state funding for universities and gymnasiums covered only a fraction of the cost; private philanthropy became essential. 19th-century Germany was even more oriented toward civic improvement than Britain or the United States, when measured in voluntary private funding for public purposes. Indeed, such German institutions as the kindergarten, the research university, and the welfare state became models copied by the Anglo-Saxons. 
The heavy human and economic losses of the First World War, the financial crises of the 1920s, as well as the Nazi regime and other devastation by 1945, seriously undermined and weakened the opportunities for widespread philanthropy in Germany. The civil society so elaborately built up in the 19th century was dead by 1945. However, by the 1950s, as the "economic miracle" was restoring German prosperity, the old aristocracy was defunct, and middle-class philanthropy started to return to importance. War and postwar: Belgium and Eastern Europe The Commission for Relief in Belgium (CRB) was an international (predominantly American) organization that arranged for the supply of food to German-occupied Belgium and northern France during the First World War. It was led by Herbert Hoover. Between 1914 and 1919, the CRB operated entirely with voluntary efforts and was able to feed eleven million Belgians by raising money, obtaining voluntary contributions of money and food, shipping the food to Belgium and controlling it there. For example, the CRB shipped 697,116,000 pounds of flour to Belgium. Biographer George Nash finds that by the end of 1916, Hoover "stood preeminent in the greatest humanitarian undertaking the world had ever seen." Biographer William Leuchtenburg adds, "He had raised and spent millions of dollars, with trifling overhead and not a penny lost to fraud. At its peak, his organization fed nine million Belgians and French daily." When the war ended in late 1918, Hoover took control of the American Relief Administration (ARA), with the mission of supplying food to Central and Eastern Europe. The ARA fed millions. U.S. government funding for the ARA expired in the summer of 1919, and Hoover transformed the ARA into a private organization, raising millions of dollars from private donors. Under the auspices of the ARA, the European Children's Fund fed millions of starving children. When attacked for distributing food to Russia, which was under Bolshevik control, Hoover snapped, "Twenty million people are starving. Whatever their politics, they shall be fed!" United States The first corporation founded in the Thirteen Colonies was Harvard College (1636), designed primarily to train young men for the clergy. A leading theorist was the Puritan theologian Cotton Mather (1662–1728), who in 1710 published a widely read essay, "Bonifacius, or an Essay to Do Good". Mather worried that the original idealism had eroded, so he advocated philanthropic benefaction as a way of life. Though his context was Christian, his idea was also characteristically American and explicitly Classical, on the threshold of the Enlightenment. Benjamin Franklin (1706–1790) was an activist and theorist of American philanthropy. He was much influenced by Daniel Defoe's An Essay upon Projects (1697) and Cotton Mather's Bonifacius: an essay upon the good (1710). Franklin attempted to motivate his fellow Philadelphians into projects for the betterment of the city: examples included the Library Company of Philadelphia (the first American subscription library), the fire department, the police force, street lighting, and a hospital. A world-class physicist himself, he promoted scientific organizations including the Philadelphia Academy (1751) – which became the University of Pennsylvania – as well as the American Philosophical Society (1743), to enable scientific researchers from all 13 colonies to communicate.
By the 1820s, newly rich American businessmen were initiating philanthropic work, especially with respect to private colleges and hospitals. George Peabody (1795–1869) is the acknowledged father of modern philanthropy. A financier based in Baltimore and London, he began in the 1860s to endow libraries and museums in the United States and also funded housing for poor people in London. His activities became a model for Andrew Carnegie and many others. Andrew Carnegie Andrew Carnegie (1835–1919) was the most influential leader of philanthropy on a national (rather than local) scale. After selling his steel company in 1901 he devoted himself to establishing philanthropic organizations and to making direct contributions to many educational, cultural, and research institutions. He financed over 2,500 public libraries built across the United States and abroad. He also funded Carnegie Hall in New York City and the Peace Palace in the Netherlands. His final and largest project was the Carnegie Corporation of New York, founded in 1911 with an endowment that was later enlarged further. Carnegie Corporation has endowed or otherwise helped to establish institutions that include the Russian Research Center at Harvard University (now known as the Davis Center for Russian and Eurasian Studies), the Brookings Institution and the Sesame Workshop. In all, Andrew Carnegie gave away 90% of his fortune. John D. Rockefeller Other prominent American philanthropists of the early 20th century included John D. Rockefeller (1839–1937), Julius Rosenwald (1862–1932) and Margaret Olivia Slocum Sage (1828–1918). Rockefeller retired from business in the 1890s; he and his son John D. Rockefeller Jr. (1874–1960) made large-scale national philanthropy systematic, especially with regard to the study and application of modern medicine, higher education, and scientific research. Much of what the elder Rockefeller gave away went to medicine. Their leading advisor Frederick Taylor Gates launched several large philanthropic projects staffed by experts who sought to address problems systematically at the roots rather than let the recipients deal only with their immediate concerns. By 1920, the Rockefeller Foundation was opening offices in Europe. It launched medical and scientific projects in Britain, France, Germany, Spain, and elsewhere. It supported the health projects of the League of Nations. By the 1950s, it was investing heavily in the Green Revolution, especially the work by Norman Borlaug that enabled India, Mexico, and many poor countries to upgrade their agricultural productivity dramatically. Ford Foundation With the acquisition of most of the stock of the Ford Motor Company in the late 1940s, the Ford Foundation became the largest American philanthropy, splitting its activities between the United States and the rest of the world. Outside the United States, it established a network of human rights organizations, promoted democracy, gave large numbers of fellowships for young leaders to study in the United States, and invested heavily in the Green Revolution, whereby poor nations dramatically increased their output of rice, wheat, and other foods. Both Ford and Rockefeller were heavily involved. Ford also gave heavily to build up research universities in Europe and worldwide. For example, in Italy in 1950 the foundation sent a team to help the Italian ministry of education reform the nation's school system, based on meritocracy (rather than political or family patronage) and democratisation (with universal access to secondary schools).
It reached a compromise between the Christian Democrats and the Socialists to help promote uniform treatment and equal outcomes. The success in Italy became a model for Ford programs in many other nations. The Ford Foundation in the 1950s wanted to modernize the legal systems in India and Africa by promoting the American model. The plan failed because of India's unique legal history and traditions, as well as its economic and political conditions. Ford, therefore, turned to agricultural reform. The success rate in Africa was no better, and that program closed in 1977. Asia While charity has a long history in Asia, philanthropy or a systematic approach to doing good remains nascent. Chinese philosopher Mozi developed the concept of "universal love", a reaction against perceived over-attachment to family and clan structures within Confucianism. Other interpretations of Confucianism see concern for others as an extension of benevolence. Muslims in countries such as Indonesia are bound by zakat (almsgiving), while Buddhists and Christians throughout Asia may participate in philanthropic activities. In India, corporate social responsibility is now mandated, with 2% of net profits to be directed towards charity. Asia is home to most of the world's billionaires, surpassing the United States and Europe in 2017. Wikipedia's list of countries by number of billionaires shows four Asian economies in the top ten: 495 in China, 169 in India, 66 in Hong Kong, and 52 in Taiwan. While the region's philanthropy practices are relatively under-researched compared to those of the United States and Europe, the Centre for Asian Philanthropy and Society (CAPS) produces a study of the sector every two years. In 2020, its research found that if Asia were to donate the equivalent of two percent of its GDP, the same as the United States, it would unleash an annual sum more than 11 times the foreign aid flowing into the region every year and one-third of the annual amount needed globally to meet the sustainable development goals by 2030. Australia Structured giving in Australia through foundations is slowly growing, although public data on the philanthropic sector is sparse. There is no public registry of philanthropic foundations as distinct from charities more generally. Two foundation types for which some data is available are Private Ancillary Funds (PAFs) and Public Ancillary Funds (PubAFs). Private Ancillary Funds have some similarities to private family foundations in the US and Europe, and do not have a public fundraising requirement. Public Ancillary Funds include community foundations, some corporate foundations, and foundations that solely support single organisations such as hospitals, schools, museums, and art galleries. They must raise funds from the general public. Differences between traditional and new philanthropy Impact investment versus traditional philanthropy Traditional philanthropy and impact investment can be distinguished by how they serve society. Traditional philanthropy is usually short-term, where organizations obtain resources for causes through fund-raising and one-off donations. The Rockefeller Foundation and the Ford Foundation are examples of this approach; they focus more on financial contributions to social causes and less on actions and processes of benevolence. Impact investment, on the other hand, focuses on the interaction between individual wellbeing and broader society by promoting sustainability.
Stressing the importance of impact and change, they invest in different sectors of society, including housing, infrastructure, healthcare and energy. A suggested explanation for the preference for impact investment over traditional philanthropy is the growing prominence of the Sustainable Development Goals (SDGs) since 2015. Almost every SDG is linked to environmental protection and sustainability because of rising concerns about how globalisation, consumerism, and population growth may affect the environment. As a result, development agencies have seen increased demands for accountability as they face greater pressure to fit with current developmental agendas. Traditional philanthropy versus philanthrocapitalism Philanthrocapitalism differs from traditional philanthropy in how it operates. Traditional philanthropy is about charity, mercy, and selfless devotion improving recipients' wellbeing. Philanthrocapitalism is philanthropy transformed by business and the market, in which profit-oriented business models are designed to work for the good of humanity. Shared value companies are an example. They help develop and deliver curricula in education, strengthen their own businesses and improve the job prospects of people. Firms improve social outcomes, but while they do so, they also benefit themselves. The rise of philanthrocapitalism can be attributed to global capitalism. Therefore, philanthropy has been seen as a tool to sustain economic and firm growth, based on human capital theory. Through education, specific skills are taught that enhance people's capacity to learn and their productivity at work. Intel invests in science, technology, engineering, and mathematics (STEM) curricular standards in the US and provides learning resources and materials for schools, which in turn supports its own innovation and revenue. The New Employment Opportunities initiative in Latin America is a regional collaboration to train one million youth by 2022 to raise employment standards and ultimately provide a talented pool of labour for companies. Promoting equity through science and health philanthropy Philanthropy has the potential to foster equity and inclusivity in various fields, such as scientific research, development, and healthcare. Addressing systemic inequalities in these sectors can lead to more diverse perspectives, innovations, and better overall outcomes. Scholars have examined the importance of philanthropic support in promoting equity in different areas. For example, Christopherson et al. highlight the need to prioritize underrepresented groups, promote equitable partnerships, and advocate for diverse leadership within the scientific community. In the healthcare sector, Thompson et al. emphasize the role of philanthropy in empowering communities to reduce health disparities and address the root causes of these disparities. Research by Chandra et al. demonstrates the potential of strategic philanthropy to tackle health inequalities through initiatives that focus on prevention, early intervention, and building community capacity. Similarly, a report by the Bridgespan Group suggests that philanthropy can create systemic change by investing in long-term solutions that address the underlying causes of social issues, including those related to science and health disparities.
To advance equity in science and healthcare, philanthropists can adopt several key strategies: Prioritize underrepresented groups: Support scientists and health professionals from diverse backgrounds to help address historical injustices and foster diversity. Encourage equitable partnerships: Facilitate collaborations between institutions from different backgrounds to promote knowledge exchange and a fair distribution of resources. Advocate for diverse leadership: Support initiatives that emphasize diversity and inclusion in leadership positions within scientific and health institutions. Invest in early-career professionals: Help create a more equitable pipeline for future leaders in science and healthcare by investing in early-career researchers and health professionals. Influence policy changes: Utilize philanthropic influence to advocate for policy changes that address systemic inequalities in science and health. Through these approaches, philanthropy can significantly promote equity within scientific and health communities, leading to more inclusive and effective advancements. Types of philanthropy Philanthropy is defined differently by different groups of people; many define it as a means to alleviate human suffering and advance the quality of life. There are many forms of philanthropy, allowing for different impacts by different groups in different settings. Celebrity philanthropy Celebrity philanthropy refers to celebrity-affiliated charitable and philanthropic activities. It is a scholarship topic in studies of "the popular" vis-à-vis the modern and post-modern world. Structured and systematised charitable giving by celebrities is a relatively new phenomenon. Although charity and fame are associated historically, it was only in the 1990s that entertainment and sports celebrities from affluent western societies became involved with a particular type of philanthropy. Celebrity philanthropy in contemporary western societies is not isolated to large one-off monetary donations. It involves celebrities using their publicity, brand credibility, and personal wealth to promote not-for-profit organisations, which are increasingly business-like in form. This is sometimes termed as "celanthropy"—the fusion of celebrity and cause as a representation of what the organisation advocates. Implications on government and governance The advent of celebrity philanthropy has coincided with the contraction of government involvement in areas such as welfare support and foreign aid to name a few. This can be identified from the proliferation of neoliberal policies. Public interest groups, not-for-profit organisations and the United Nations now budget extensive amounts of time and money to use celebrity endorsers in their campaigns. An example of this is the People's Climate March of 2014. The demonstration was part of the larger People's Climate Movement, which aims to raise awareness of climate change and environmental issues more generally. Notable celebrities who were part of this campaign included actors Leonardo DiCaprio, Mark Ruffalo, and Edward Norton. Examples The Concert for Bangladesh Band Aid LiveAid NetAid Danny Thomas and St. 
Jude Children's Research Hospital Geena Davis Institute on Gender in Media Jerry Lewis and the MDA Telethon List of UNICEF Goodwill Ambassadors Newman's Own Tiger Woods Foundation Richard Gere Activism Remote Area Medical Diaspora philanthropy Diaspora philanthropy is philanthropy conducted by diaspora populations either in their country of residence or in their countries of origin. Diaspora philanthropy is a newly established term with many variations, including migrant philanthropy, homeland philanthropy, and transnational giving. In diaspora philanthropy, migrants and their descendants are frontline distributors of aid and enablers of development. For many countries, diaspora philanthropy is a prominent way in which members of the diaspora invest back into their homeland countries. Along with diaspora-led foreign direct investment, diaspora philanthropy is a force in the development of a country. Members of a diaspora are familiar with their community's needs and the social, political, and economic factors that influence the delivery of those needs. Studies show that those who are part of the diaspora are more aware of the pressing and neglected issues of their community than outsiders or other well-wishers. Given their deep ties to their country of origin, diaspora philanthropies also tend to have greater longevity than other international philanthropies, and diaspora philanthropists are often more willing than local philanthropists to address controversial issues in their country of origin. African American philanthropists have made significant contributions across various fields, including mental health, education, entrepreneurship, and disaster relief. Taraji P. Henson's Boris Lawrence Henson Foundation focuses on mental health awareness and support for those affected by mental illness, particularly within the African American community. Shawn Carter's Shawn Carter Foundation provides scholarships and educational opportunities to underserved youth, aiming to improve access to higher education and support students in achieving their academic goals. Daymond John's FUBU Foundation promotes entrepreneurship by offering mentorship and resources to aspiring business owners. Additionally, Rihanna's Clara Lionel Foundation provides disaster relief and humanitarian aid, helping communities in need during crises and supporting global emergency response efforts. While there are dozens more examples, each of these foundations reflects the African American community's commitment to addressing critical issues and improving the lives of individuals in diverse and impactful ways. Trust-based philanthropy Trust-based philanthropy is an approach which aims to give greater decision-making power to the leaders of non-profits, as opposed to the donor. This differs from the often stringent restrictions placed on donations in traditional philanthropy. Criticism Philanthropy has been used by ultra-high-net-worth individuals to offset their larger tax liabilities through charitable contribution deductions enabled by the tax code. In Winners Take All: The Elite Charade of Changing the World, Anand Giridharadas asserts that various philanthropic initiatives by the wealthy elite in practice function to entrench the power structures and special interests of the wealthy elite. For example, despite Robert F.
Smith's generosity by paying off the student debt incurred by the Morehouse class of 2019, he simultaneously fought against changes to the tax code that could have made more money available to help low-income students pay for college. As a result, Giridharadas argues, Smith's philanthropic giving functions to reinforce the prevailing status quo and perpetuates income inequality, instead of addressing the root cause of social issues. Jane Mayer highlights how wealthy donors, like the Koch brothers, use philanthropy to promote policies that serve their financial interests. Their donations, targeting think tanks and educational programs, influence public opinion on issues like tax cuts for the rich, deregulation, slashing the welfare state, and climate change denial, shaping American politics without being traditional campaign contributions. Mayer criticizes the anonymity of such donations, made through organizations like Donors Trust, which are not required to disclose their sources, enabling hidden political influence. The ability of wealthy people to deduct a significant amount of their tax liabilities in the form of philanthropic giving, as noted by the late German billionaire shipping magnate and philanthropist Peter Kramer, functioned as "a bad transfer of power", from democratically elected politicians to unelected billionaires, whereby it is no longer "the state that determines what is good for the people, but rather the rich who decide". The Global Policy Forum, an independent policy watchdog which functions to monitor the activities of the United Nations General Assembly, warned governments and international organisations that they should "assess the growing influence of major philanthropic foundations, and especially the Bill & Melinda Gates Foundation… and analyse the intended and unintended risks and side-effects of their activities" prior to accepting money from rich donors. In 2015, Global Policy Forum also warned elected politicians that they should be particularly concerned about "the unpredictable and insufficient financing of public goods, the lack of monitoring and accountability mechanisms, and the prevailing practice of applying business logic to the provision of public goods". Giridharadas also argues that philanthropy distracts the public from some of the immoral and exploitative tactics used to derive profit. For example, the Sackler family were known for their generous philanthropic giving to various cultural institutions worldwide. However, their philanthropic giving functioned as a distraction and propaganda to the public, as their legacy of generosity was tainted by the subsequent exposure of Purdue Pharma's role in encouraging and exacerbating the opioid epidemic. As a result of their exposed ill-gotten gains from the social issues caused by the philanthropic donors, the British institutions of the National Portrait Gallery, London and the Tate, along with the American institution Solomon R. Guggenheim Museum, announced their rejection of charitable giving from the Sackler family trusts. Thus, some argue that philanthropy is merely a distraction and temporary relief, both physically and spiritually for those who receive it, in replacement of facing the true causes of the issues that it attempts to relieve, such as high housing costs and economic inequality, as philanthropy typically offers no long-term solutions. According to Harvard Political Review, philanthropy currently "is only a band-aid to a much larger and deeper structural issue[s]." 
See also References Further reading (3 vol.) Examines philanthropy in Buddhist, Islamic, Hindu, Jewish, and Native American religious traditions and in cultures from Latin America, Eastern Europe, the Middle East, Africa, and Asia. External links A History of Modern Philanthropy, 1601–present compiled and edited by National Philanthropic Trust
Philanthropy
Biology
6,351
39,583,234
https://en.wikipedia.org/wiki/%C3%89tale%20homotopy%20type
In mathematics, especially in algebraic geometry, the étale homotopy type is an analogue of the homotopy type of topological spaces for algebraic varieties. Roughly speaking, for a variety or scheme X, the idea is to consider an étale covering U → X and to replace each connected component of U, and of the higher "intersections", i.e., the fiber products of n + 1 copies of U over X, by a single point. This gives a simplicial set which captures some information related to X and its étale topology. Slightly more precisely, it is in general necessary to work with étale hypercovers instead of the simplicial scheme determined by a single étale cover as above. Taking finer and finer hypercoverings (which is technically accomplished by working with the pro-object in simplicial sets determined by taking all hypercoverings), the resulting object is the étale homotopy type of X. As in classical topology, it recovers much of the usual data attached to the étale topology, in particular the étale fundamental group of the scheme and the étale cohomology of locally constant étale sheaves. References External links http://ncatlab.org/nlab/show/étale+homotopy Homotopy theory Algebraic geometry
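As a rough illustration of the construction sketched above (a sketch only, in the style of the Artin–Mazur construction, suppressing pointedness and functoriality details): for an étale cover U → X, the Čech nerve and its levelwise connected components give a simplicial set, and ranging over all hypercovers gives the pro-object that is the étale homotopy type:

\[
  \check{N}(U/X)_n \;=\; \underbrace{U \times_X U \times_X \cdots \times_X U}_{n+1\ \text{factors}},
  \qquad
  \pi_0\bigl(\check{N}(U/X)\bigr) \in \mathbf{sSet},
\]
\[
  \Pi_{\text{\'et}}(X) \;=\; \bigl\{\, \pi_0(V_\bullet) \,\bigr\}_{V_\bullet \to X \ \text{a hypercover}} \;\in\; \mathrm{pro}\text{-}\mathbf{sSet}.
\]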
Étale homotopy type
Mathematics
273
35,307,783
https://en.wikipedia.org/wiki/Medium%20error
In digital storage, a Medium Error is a class of errors that a storage device can experience, which imply that a physical problem was encountered when trying to access the device. The word "medium" refers to the physical storage layer, the medium on which the data is stored; as opposed to errors related to e.g. protocol, device/controller/driver state, etc. Medium errors are most commonly detected by checking the read data against a checksum – itself being most commonly also stored on the same device. The mismatch of data to its supposed checksum is assumed to be caused by the data being corrupted. Locations of medium errors can be either temporary (as in the case of bit rot – there is no damage to the medium, the data was simply lost), or permanent (as in the case of scratching – the physical location is unusable from that point onwards). Devices can sometimes recover from medium errors, either by retrying or by managing to reconcile the data with the checksum. If the medium has incurred permanent damage, the device might remap the logical address where the error occurred to a different, undamaged physical location. Medium errors are often associated with long latency for the IOs. This is due to the device retrying and attempting to recover from the error. Examples of conditions that might cause medium errors Bad Blocks: These are damaged or defective areas on the storage medium where data cannot be reliably read or written. Media Degradation: Over time, storage media like optical discs or magnetic tapes can degrade, leading to read/write errors. Physical Damage: Physical shocks, drops, or other mechanical damage to the storage device can result in Medium Errors. Media Wear: In the case of hard drives, the read/write heads may wear out or come into contact with the platters, causing errors. Manufacturing Defects: Occasionally, storage media can have manufacturing defects that become evident over time. Environmental Factors: Extreme temperatures, humidity, or exposure to magnetic fields can also lead to Medium Errors. See also Bad sector Bit rot Computer storage devices
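The read path described above can be made concrete with a small, hypothetical sketch. The names (read_block, MediumError, the dictionaries standing in for the medium) are invented for illustration; real devices implement this in firmware with per-sector error-correcting codes rather than a plain CRC, but the shape is the same: verify against the stored checksum, retry, consult a remap table, and surface a medium error if the data cannot be trusted.

```python
import zlib

MAX_RETRIES = 3

class MediumError(Exception):
    """Raised when read data cannot be reconciled with its stored checksum."""

def read_block(medium, checksums, remap_table, lba):
    """Return the data at logical block `lba`, verified against its checksum."""
    physical = remap_table.get(lba, lba)       # follow any earlier remapping
    for _ in range(MAX_RETRIES):               # retries explain the long latency
        data = medium[physical]
        if zlib.crc32(data) == checksums[physical]:
            return data                        # checksum matches: good read
    raise MediumError(f"unrecoverable medium error at LBA {lba}")

# Example: block 1 has silently rotted, so its checksum no longer matches.
medium = {0: b"intact data", 1: b"corrupted!"}
checksums = {0: zlib.crc32(b"intact data"), 1: zlib.crc32(b"original data")}
print(read_block(medium, checksums, {}, 0))    # b'intact data'
# read_block(medium, checksums, {}, 1) would raise MediumError
```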
Medium error
Technology
423
68,486,117
https://en.wikipedia.org/wiki/WASP-159
WASP-159 is a faint star located in the southern constellation Caelum. With an apparent magnitude of 12.84, the star requires a powerful telescope to be seen. Its distance has been determined from its parallax, and it is drifting away with a heliocentric radial velocity of +35.16 km/s. Properties WASP-159 is an F-type subgiant with 1.41 times the Sun's mass and double the Sun's radius. It radiates at 4.78 times the Sun's luminosity from its photosphere at an effective temperature of 6,120 K. WASP-159 is about 3 billion years old, and is metal-rich like many other planetary hosts. Planetary system In 2019, SuperWASP discovered an inflated "hot Jupiter" orbiting the star. References F-type subgiants Caelum Planetary systems with one confirmed planet Hot Jupiters
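The quoted luminosity follows, to within rounding, from the Stefan–Boltzmann relation applied to the quoted radius and temperature. A quick check (the inputs are the article's rounded values, so the result only roughly reproduces the quoted 4.78 solar luminosities):

```python
T_SUN = 5772.0          # K, nominal solar effective temperature
radius_ratio = 2.0      # "double the Sun's radius" (rounded)
t_eff = 6120.0          # K, quoted effective temperature

# L/L_sun = (R/R_sun)^2 * (T/T_sun)^4
lum_ratio = radius_ratio**2 * (t_eff / T_SUN)**4
print(f"L/L_sun ~ {lum_ratio:.2f}")   # ~5.1 with these rounded inputs
```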
WASP-159
Astronomy
188
27,287,672
https://en.wikipedia.org/wiki/International%20Harvester%201066
The International Harvester 1066 is a farm tractor that was made by International Harvester from 1971 to 1976. The 1066 has a six-cylinder diesel engine and about 105 drawbar and 125 PTO horsepower. The 1066 is significant for its popularity, with over 50,000 units having been built in its six-year run. Features DT-414 engine. Turbocharged six-cylinder direct-start diesel engine. Hydrostatic power steering. Dry disc brakes. Hydraulic assist clutch. Dual PTOs (540/1000). Non-synchronized 4-speed transmission with a non-synchronized 2-speed range transmission, giving 16 total forward speeds when equipped with the Torque Amplifier. Dual hydraulics. Engine horsepower of 105; a common myth holds that some tractors came off the factory floor with over 150 hp. The 414 is sometimes known as a factory hot rod; it was built and engineered by Jerry Lagod, who later formed Hypermax Engineering and set the milestone for the ultimate super stock diesel engine. Cabs When introduced in 1971, two cabs were available: the "Custom" cab carried over from the previous series and the "Deluxe" cab. Both cabs could be equipped with air conditioning, heat and an AM radio. Updates In 1975, after the introduction of the John Deere 30 series, IH decided that rather than replacing the 66 series it would give the line a "tune-up", starting with an increase of about five horsepower. IH also altered the "Deluxe" cab: it now had just one window on each door instead of two, the rear window opened farther, and the lower rear window was enlarged. The cab was now painted all red, with the roof still being white. In 1973, the "Custom" cab was dropped along with the optional hydrostatic transmission version. References Tractors International Harvester vehicles
International Harvester 1066
Engineering
363
39,477,390
https://en.wikipedia.org/wiki/Xanthoconium%20stramineum
Xanthoconium stramineum is a species of bolete fungus and the type species of the genus Xanthoconium. First described as a species of Gyroporus by William Alphonso Murrill in 1940, it was placed in its current genus by Rolf Singer in 1944. See also List of North American boletes References External links Boletaceae Fungi described in 1940 Taxa named by William Alphonso Murrill Fungus species
Xanthoconium stramineum
Biology
96
164,346
https://en.wikipedia.org/wiki/Society%20of%20Motion%20Picture%20and%20Television%20Engineers
The Society of Motion Picture and Television Engineers (SMPTE) (, rarely ), founded in 1916 as the Society of Motion Picture Engineers or SMPE, is a global professional association of engineers, technologists, and executives working in the media and entertainment industry. As an internationally recognized standards organization, SMPTE has published more than 800 technical standards and related documents for broadcast, filmmaking, digital cinema, audio recording, information technology (IT), and medical imaging. SMPTE also publishes the SMPTE Motion Imaging Journal, provides networking opportunities for its members, produces academic conferences and exhibitions, and performs other industry-related functions. SMPTE membership is open to any individual or organization with an interest in the subject matter. In the US, SMPTE is a 501(c)3 non-profit charitable organization. History An informal organizational meeting was held in April 1916 at the Astor Hotel in New York City. Enthusiasm and interest increased, and meetings were held in New York and Chicago, culminating in the founding of the Society of Motion Picture Engineers in the Oak Room of the Raleigh Hotel, Washington DC on the 24th of July. Ten industry stakeholders attended and signed the Articles of Incorporation. Papers of incorporation were executed on 24 July 1916, were filed on 10 August in Washington DC. With a second meeting scheduled, invitations were telegraphed to Jenkin’s industry friends, i.e., key players and engineering executives in the motion picture industry. Three months later, 26 attended the first “official” meeting of the Society, the SMPE, at the Hotel Astor in New York City, on 2 and 3 October 1916. Jenkins was formally elected president, a constitution was ratified, an emblem for the Society was approved, and six committees were established. At the July 1917 Society Convention in Chicago, a set of specifications including the dimensions of 35 mm film, 16 frames per second, etc. were adopted. SMPE set and issued a formal document reached by consensus, its first as an accredited Standards Development Organization (SDO), registering the specifications with the United States Bureau of Standards. The SMPTE Centennial Gala took place on Friday, 28 October 2016, following the annual Conference and Exhibition; James Cameron and Douglas Trumbull received SMPTE’s top honors. SMPTE officially bestowed Honorary Membership, the Society’s highest honor, upon Avatar and Titanic director Cameron in recognition of his work advancing visual effects (VFX), motion capture, and stereoscopic 3D photography, as well as his experimentation in HFR. Presented by Oscar-winning special effects cinematographer Richard Edlund, SMPTE honored Trumbull, who was responsible for the VFX in 2001: A Space Odyssey and Blade Runner, with the Society’s most prestigious medal award, the Progress Medal. The award recognized Trumbull’s contributions to VFX, stereoscopic 3D, and HFR cinema, including his current work to enable stereoscopic 3D with his 120-frames-per-second Magi system. Educational and professional development activities SMPTE's educational and professional development activities include technical presentations at regular meetings of its local Sections, annual and biennial conferences in the US and Australia and the SMPTE Motion Imaging Journal. The society sponsors many awards, the oldest of which are the SMPTE Progress Medal, the Samuel Warner Memorial Medal, and the David Sarnoff Medal. 
SMPTE also has a number of Student Chapters and sponsors scholarships for college students in the motion imaging disciplines. Standards SMPTE standards documents are copyrighted and may be purchased from the SMPTE website, or other distributors of technical standards. Standards documents may be purchased by the general public. Significant standards promulgated by SMPTE include: All film and television transmission formats and media, including digital. Physical interfaces for transmission of television signals and related data (such as SMPTE timecode and the serial digital interface) (SDI) SMPTE color bars Test card patterns and other diagnostic tools The Material Exchange Format (MXF) SMPTE 2110 SMPTE ST 421:2013 (VC-1 video codec) Film format SMP(T)E'S first standard was to get everyone using 35-mm film width, four sprocket holes per frame, 1.37:1 picture ratio. Until then, there were competing film formats. With the standard, theaters could all run the same films. Film frame rate SMP(T)E's standard in 1927 was for speed at which sound film is shown, 24 frames per second. 3D television SMPTE's taskforce on "3D to the home" produced a report on the issues and challenges and suggested minimum standards for the 3D home master that would be distributed after post-production to the ingest points of distribution channels for 3D video content. A group within the standards committees has begun to work on the formal definition of the SMPTE 3D Home Master. Digital cinema In 1999, SMPTE established the DC28 technology committee, for the foundations of Digital Cinema. Membership SMPTE Fellows Terry Adams, NBC Olympics, LLC Andy Beale, BT Sport Lynn D. Claudy, National Association of Broadcasters Lawrence R. Kaplan, CEO of SDVI Honors and awards program The SMPTE presents awards to individuals for outstanding contributions in fields of the society. Honorary membership and the honor roll Recipients include: Renville "Ren" H. McMann Jr. (2017) James Cameron (2016) Oscar B. "O.B." Hanson (2015) George Lucas (2014) John Logie Baird (2014) Philo Taylor Farnsworth (1996) Ray M. Dolby (1992) Linwood G. Dunn (1984) Herbert T. Kalmus (1958) Walt Disney (1955) Vladimir K. Zworykin (1950) Samuel L. Warner (1946) George Eastman (1928) Thomas Alva Edison (1928) Louis Lumiere (1928) C. Francis Jenkins (1926) Progress Medal The Progress Medal, instituted in 1935, is SMPTE's oldest and most prestigious medal, and is awarded annually for contributions to engineering aspects of the film and/or television industries. Recipients include: Douglas Trumbull (2016) Ioan Allen (2014) David Wood (2012) Edwin Catmull (2011) Birney Dayton (2008) Clyde D. Smith (2007) Roderick Snell (2006) S. Merrill Weiss (2005) Dr. Kees Immink (2004) Stanley N. Baron (2003) William C. Miller (2002) Bernard J. Lechner (2001) Edwin E. Catmall (1996) Ray Dolby (1983) Harold E. Edgerton (1959) Fred Waller (1953) Vladimir K. Zworykin (1950) John G. Frayne (1947) Walt Disney (1940) Herbert Kalmus (1938) Edward W. Kellogg (1937) Kenneth Mees (1936) David Sarnoff Gold Medal Chuck Pagano (2013) James M. DeFilippis (2012) Bernard J. Lechner (1996) Stanley N. Baron (1991) William F. Schreiber (1990) Adrian Ettlinger (1976) Joseph A. Flaherty, Jr. (1974) Peter C. Goldmark (1969) W. R. G. Baker (1959) Albert Rose (1958) Charles Ginsburg (1957) Robert E. Shelby (1956) Arthur V. Loughren (1953) Otto H. 
Schade (1951) Eastman Kodak Gold Medal The Eastman Kodak Gold Medal, instituted in 1967, recognizes outstanding contributions that lead to new or unique educational programs utilizing motion pictures, television, high-speed and instrumentation photography or other photography sciences. Recent recipients are Andrew Laszlo (2006) James MacKay (2005) Dr. Roderick T. Ryan (2004) George Spiro Dibie (2003) Jean-Pierre Beauviala (2002) Related organizations Related organizations include Advanced Television Systems Committee (ATSC) Audio Engineering Society (AES) BBC Research Department Digital Video Broadcasting European Broadcasting Union (EBU) ITU Radiocommunication Sector (formerly known as the CCIR) ITU Telecommunication Sector (formerly known as the CCITT) Institute of Electrical and Electronics Engineers (IEEE) Joint Photographic Experts Group (JPEG) Moving Picture Experts Group (MPEG) See also Digital Picture Exchange General Exchange Format (GXF) Glossary of video terms Outline of film (Extensive alphabetical listing) Media Dispatch Protocol SMPTE 2032 parts 1, 2 and 3 Video tape recorder (VTR) standards defined by SMPTE References Bibliography Charles S. Swartz (editor). Understanding Digital Cinema. A Professional Handbook. Elsevier, 2005. Philip J. Cianci (Editorial Content Director), The SMPTE Chronicle, Vol. I 1916 – 1949 Motion Pictures, Vol. II 1950 – 1989 Television, Vol III. 1990 – 2016 Digital Media, SMPTE, 2022. Philip J. Cianci (Editorial Content Director), Magic and Miracles - 100 Years of Moving Image Science and Technology - The Work of the Society of Motion Picture and Television Engineers, SMPTE, 2017. Philip J. Cianci (Editorial Content Director), The Honor Roll and Honorary Members of The Society of Motion Picture and Television Engineers, SMPTE, 2016 1916 establishments in the United States 3D imaging Broadcast engineering Economy of Westchester County, New York Film and video technology Organizations awarded an Academy Honorary Award Organizations based in New York (state) Science and technology in New York (state) Television terminology White Plains, New York
Society of Motion Picture and Television Engineers
Engineering
1,959
3,396,759
https://en.wikipedia.org/wiki/Pooper-scooper
A pooper-scooper, or poop scoop, is a device used to pick up animal feces from public places and yards, particularly those of dogs. Pooper-scooper devices often have a bag or bag attachment. 'Poop bags' are alternatives to pooper scoopers, and are simply a bag, usually turned inside out, to carry the feces to a proper disposal area. Sometimes, the person performing the cleanup is also known as the pooper-scooper. History The invention is credited to Brooke Miller, of Anaheim, California. The design she patented is a metal bin with a rake-like edge attached to a wooden stick. It also includes a rake-like device to scoop the poop into the scooper and a hatch that can be attached to a garbage bag that fits onto the base. The generic term pooper-scooper has been included in dictionaries since the early 1970s. Legislation Around 1935, "Curb Your Dog" signs started appearing in NYC, initiating discussions and correspondence with the Department of Sanitation. The Village of Great Neck Estates was one of the earliest communities to enact a local ordinance, in 1975, requiring residents to remove pollution on private and public property caused by dogs. Murray Seeman, Jay S. Goodman and Howard Zelikow, advocated in the face of heated opposition. In 1978, New York State passed the Pooper-Scooper Law. It was so controversial that Mayor Koch needed the New York State Legislature to pass it, after being unable to convince the New York City Council. The New York Times called actress and consumer advocate Fran Lee "New York's foremost fighter against dog dirt". October 20, 1978, KQED San Francisco news footage featured scenes from a Harvey Milk press conference in Duboce Park in which he discussed the city's new "pooper scooper law" with a how-to demonstration. Marking the 25th anniversary of the Pooper-scooper law, NYC Mayor Ed Koch was quoted saying, "If you’ve ever stepped in dog doo, you know how important it is to enforce the canine waste law. New Yorkers overwhelmingly do their duty and self-enforce. Those who don’t are not fit to call friend." In 2018, the City of San Francisco allocated budget funds in the amount of $830,977 to address this issue. A number of jurisdictions, including New York City, San Francisco and Chicago have laws requiring pet owners to clean up after their pets: a) A person who owns, possesses or controls a dog, cat or other animal shall not permit the animal to commit a nuisance on a sidewalk of any public place, on a floor, wall, stairway or roof of any public or private premises used in common by the public, or on a fence, wall or stairway of a building abutting on a public . Authorized employees of New York City Departments of Health (including Animal Care & Control), of Sanitation, or of Parks and Recreation can issue tickets. Such laws are often nicknamed "pooper-scooper laws", though the laws only stipulate that dog owners remove their dogs' feces, not the method or device used (thus using a hand-held plastic bag to remove feces complies with these laws). Some apartment complexes, condos, and neighborhoods require residents to pick up dog poop and use DNA testing on poop to fine people who did not pick up after their pet. Health concerns Dog droppings are one of the leading sources of E. coli (fecal coliforms) bacterial pollution, Toxocara canis and Neospora caninum helminth parasite pollution. One gram of dog feces contains over 20,000,000 E. coli cells. 
While an individual animal's deposit of feces will not measurably affect the environment, the cumulative effect of thousands of dogs and cats in a metropolitan area can create serious problems due to contamination of soil and water supplies. The runoff from neglected pet waste contaminates water, creating health hazards for people, fish, ducks, etc. In Germany an estimated of feces are deposited daily on public property. A citizen commission (2005) overwhelmingly recommended a plan that would break even at about seven months. DNA samples would be required when pet licenses come up for renewal. Within a year, a database of some 12,500 registration-required canine residents would be available to sanitation workers with sample-test kits. Evidence would be submitted to a forensics laboratory where technicians could readily match the waste to its dog. The prospect of a prompt fine equivalent to $600 US (at 2005 exchange rate) would help assure preventive compliance, as well as cover costs. In adult dogs, the infection by Toxocara canis is usually asymptomatic but can be fatal in puppies. A number of various vertebrates, including humans, and some invertebrates can become infected by Toxocara canis. Humans are infected, like other paratenic hosts, by ingestion of embryonated T. canis eggs. The disease caused by migrating T. canis larvae (toxocariasis) results in visceralis larva migrans and ocularis larva migrans. Clinically infected people have helminth infection and rarely blindness. See also Motocrotte – motorcycle-based solution for cleaning the streets of Paris Mutt Mitt – a plastic mitt used to pick up waste from pets References Sources ROMP (Responsible Owners of Mannerly Pets), metropolitan Twin Cities recreation-advocacy group; June 1996 in Roseville, MN, nonprofit incorporation April 2000 [About ROMP] New York attorney and dog lawyer; External links Sanitation Pet equipment Waste collection Dog equipment Feces
Pooper-scooper
Biology
1,176
21,729,932
https://en.wikipedia.org/wiki/Playboy.co.uk
Playboy.co.uk is an internet web address owned by the PLBY Group. Since 2012 it has redirected to playboy.com, but prior to that a separate website was maintained at the address. Playboy.co.uk was originally operated on a paid-for-content basis. It was re-launched in February 2009 as a free-access website funded by advertising, with the bulk of its content free to users. The re-launched website had editorial and video content that was advertiser-funded, including entertainment channels featuring movies, music, games, TV shows and sports along with a branded social networking element. Its "Life & Style" section included editorial and video content on grooming, fashion, food, gadgets and opinions, all reflecting the Playboy lifestyle. Additionally, it also produced some original content that differed from print editions of the publications produced by Playboy Publishing. See also Home Video Channel References External links Playboy Internet properties established in 1994 1994 establishments in the United Kingdom
Playboy.co.uk
Technology
201
42,361,280
https://en.wikipedia.org/wiki/%28532037%29%202013%20FY27
(provisional designation ) is a trans-Neptunian object and binary system that belongs to the scattered disc (like Eris). Its discovery was announced on 31 March 2014. It has an absolute magnitude (H) of 3.2. is a binary object, with two components approximately and in diameter. It is the ninth-intrinsically-brightest known trans-Neptunian system, and is approximately tied with and (to within measurement uncertainties) as the largest unnamed object in the Solar System. Orbit orbits the Sun once every 449 years. It will come to perihelion around November 2202, at a distance of about 35.6 AU. It is currently near aphelion, 80 AU from the Sun, and, as a result, it has an apparent magnitude of 22. Its orbit has a significant inclination of 33°. The sednoid and the scattered-disc object were discovered by the same survey as and were announced within about a week of one another. Physical properties has a diameter of about , placing it at a transition zone between medium-sized and large TNOs. Using the Atacama Large Millimeter Array and Magellan Telescopes, its albedo was found to be 0.17, and its colour to be moderately red. is one of the largest moderately red TNOs. The physical processes that lead to a lack of such moderately red TNOs larger than are not yet well understood. The brightness of varies by less than over hours and days, suggesting that it either has a very long rotation period, an approximately spheroidal shape, or a rotation axis pointing towards Earth. Brown estimated, prior to the discovery of its satellite, that was very likely to be a dwarf planet, due to its large size. However, Grundy et al. calculate that bodies such as , less than about 1000 km in diameter, with albedos less than ≈0.2 and densities of ≈1.2 g/cm3 or less, may retain a degree of porosity in their physical structure, having never collapsed into fully solid bodies. The surface area of asteroid 532037 (2013 FY27) is similar to the area of the state of Texas. Satellite Using Hubble Space Telescope observations taken in January 2018, Scott Sheppard found a satellite around , that was 0.17 arcseconds away and fainter than its primary. The discovery was announced on 10 August 2018. The satellite does not have a provisional designation nor a proper name. Assuming the two components have equal albedos, they are about and in diameter, respectively. Follow-up observations were taken between May and July 2018 in order to determine the orbit of the satellite, but the results of these observations remain yet to be published . Once the orbit is known, the mass of the system can be determined. See also List of Solar System objects most distant from the Sun List of Solar System objects by size Notes References External links 2013 FY27, Minor planets with Satellites Database, Johnston's Archive Celestia Files of the recent Dwarf Planet finds (Ian Musgrave: 6 April 2014) Gaggle of dwarf planets found by Dark Energy Camera (Aviva Rutkin: 2 April 2014) 532037 Discoveries by Scott S. Sheppard Discoveries by Chad Trujillo 532037 20130317 532037 20130317 20140331
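The size scale of the system can be recovered from the quoted absolute magnitude and geometric albedo using the standard diameter relation for minor planets. This is a back-of-the-envelope check only; because the absolute magnitude refers to the combined light of both components, the result is an effective diameter for the pair rather than for either body alone.

```python
import math

H = 3.2        # absolute magnitude quoted above
p = 0.17       # geometric albedo quoted above

# Standard relation: D (km) = 1329 / sqrt(p) * 10^(-H/5)
diameter_km = 1329.0 / math.sqrt(p) * 10 ** (-H / 5)
print(f"effective diameter ~ {diameter_km:.0f} km")   # roughly 740 km
```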
(532037) 2013 FY27
Physics,Astronomy
689
76,876,760
https://en.wikipedia.org/wiki/Weather%20of%202001
The following is a list of weather events that occurred on Earth in the year 2001. There were several natural disasters around the world from various types of weather, including tornadoes, floods and tropical cyclones. The deadliest disaster was Typhoon Lingling in November, which caused 379 fatalities. The costliest event of the year was Hurricane Michelle, which caused $2.43 billion in damages. 2001 was the second hottest year on record at the time behind 1998, which was amplified by the end of a years-long La Niña. The Atlantic and Pacific tropical storm seasons were both unusually active. Many Winter storms and cold waves In January, a winter storm hit parts of the northern United States, causing an injury but no fatalities. Droughts, heat waves, and wildfires In May, a severe drought affected portions of the United States, but caused no injuries or fatalities. 2001 had a relatively low amount of droughts and heat waves. Large wildfires took place in California in 2001, killing over 2 people, destroying over 390 buildings, and causing US$196 million (2001 USD) in damages. The Observation Fire was the largest fire to take place during the season, burning over 67,000 acres of land. The Poe Fire in September was the most destructive wildfire of 2001, injuring over 23 people and destroying more than 133 buildings in parts of north-central California. No fatalities were reported. Floods In April, a historic flood occurred in portions of the Upper Mississippi River, rising to the highest water levels for the river since 1965. Many homes were washed away, and an unknown number of injuries were reported. On May 21 a large flood in Lensk, Russia washed away 400+ homes and left over 2,000 people homeless. On June 4, the 2001 Southeastern United States floods, were triggered by Tropical Storm Allison, killed over 30 people in the Houston, Texas area and left over 40,000 people homeless. Other smaller floods were also triggered as a result of Allison, but none were significant. Tornadoes There were 1,215 tornadoes in the United States, resulting in 40 deaths. In February, a tornado outbreak caused $35 million in damage, and one tornado killed 6 people. In April a large tornado outbreak killed 4 people and injured 18. In September, the tornado outbreak of September 24, 2001 killed 2 people, injured 57 others, and caused $105.157 million (2001 USD) in damages. In November, the Tornado outbreak of November 23–24, 2001 impacted the southern United States, killing 13 and injuring 219. Tropical cyclones In 2001, tropical cyclones and hurricanes formed in various parts of the Atlantic, Pacific and Indian Oceans. A total of 128 tropical cyclones formed within tropical cyclone basins, and 83 of them were named by weather agencies when they attained maximum sustained winds of 35 knots (65 km/h; 40 mph). Typhoon Faxai is the strongest tropical cyclone throughout the year, peaking with a pressure of 915 hPa (27.02 inHg) and attaining 10-minute sustained winds of 195 km/h (120 mph). The deadliest tropical cyclone of the year was Lingling in the West Pacific which caused 379 fatalities in total as it struck the Philippines and Vietnam, while the costliest storm of the year was Michelle, with a damage cost of around $2.43 billion as it catastrophically affected the Greater Antilles and the Bahamas in late October. 23 Category 3 tropical cyclones formed, and 2 Category 5 tropical cyclones formed. The accumulated cyclone energy (ACE) index for the 2001, as calculated by Colorado State University was 672.4 units. 
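For context on the ACE figure quoted above, the index is accumulated from squared maximum sustained winds. A simplified sketch follows; the wind history is made up for illustration, and the 672.4-unit figure for 2001 is the sum of such per-storm contributions across all storms and basins.

```python
def accumulated_cyclone_energy(six_hourly_winds_kt):
    """ACE contribution of one storm: 1e-4 times the sum of the squares of its
    6-hourly maximum sustained winds (in knots), counted only while the system
    is at tropical-storm strength or stronger (>= 35 kt)."""
    return 1e-4 * sum(v ** 2 for v in six_hourly_winds_kt if v >= 35)

hypothetical_storm = [30, 35, 45, 60, 80, 95, 90, 70, 50, 40, 30]  # knots
print(accumulated_cyclone_energy(hypothetical_storm))   # 3.9375 for this track
```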
References Weather by year Weather-related lists 2001-related lists
Weather of 2001
Physics
738
518,211
https://en.wikipedia.org/wiki/Cloaca
A cloaca (plural: cloacae), or vent, is the rear orifice that serves as the only opening for the digestive (rectum), reproductive, and urinary tracts (if present) of many vertebrate animals. All amphibians, reptiles, birds, and a few mammals (monotremes, afrosoricids, and marsupial moles, etc.) have this orifice, from which they excrete both urine and feces; this is in contrast to most placental mammals, which have separate orifices for evacuation and reproduction. Excretory openings with an analogous purpose in some invertebrates are also sometimes called cloacae. Mating through the cloaca is called cloacal copulation and cloacal kissing. The cloacal region is also often associated with a secretory organ, the cloacal gland, which has been implicated in the scent-marking behavior of some reptiles, marsupials, amphibians, and monotremes. Etymology The word is from the Latin verb cluo, "(I) cleanse", thus the noun cloaca, "sewer, drain". Birds Birds reproduce using their cloaca; this occurs during a cloacal kiss in most birds. Birds that mate using this method touch their cloacae together, in some species for only a few seconds, sufficient time for sperm to be transferred from the male to the female. For palaeognaths and waterfowl, the males do not use the cloaca for reproduction, but have a phallus. One study has looked into birds that use their cloaca for cooling. Among falconers, the word vent is also a verb meaning "to defecate". Fish Among fish, a true cloaca is present only in elasmobranchs (sharks and rays) and lobe-finned fishes. In lampreys and in some ray-finned fishes, part of the cloaca remains in the adult to receive the urinary and reproductive ducts, although the anus always opens separately. In chimaeras and most teleosts, however, all three openings are entirely separated. Mammals With a few exceptions noted below, mammals have no cloaca. Even in the marsupials that have one, the cloaca is partially subdivided into separate regions for the anus and urethra. Monotremes The monotremes (egg-laying mammals) possess a true cloaca. Marsupials In marsupials, the genital tract is separate from the anus, but a trace of the original cloaca does remain externally. This is one of the features of marsupials (and monotremes) that suggest their basal nature, as the amniotes from which mammals evolved had a cloaca, and probably so did the earliest mammals. Unlike other marsupials, marsupial moles have a true cloaca. This fact has been used to argue that they are not marsupials. Placentals Most adult placentals have no cloaca. In the embryo, the embryonic cloaca divides into a posterior region that becomes part of the anus, and an anterior region that develops depending on sex: in males, it forms the penile urethra, while in females, it develops into the vestibule or urogenital sinus that receives the urethra and vagina. However, some placentals retain a cloaca as adults: those are members of the order Afrosoricida (small mammals native to Africa) as well as pikas, beavers, and some shrews. Being placental animals, humans have an embryonic cloaca which divides into separate tracts during the development of the urinary and reproductive organs. However, a few human congenital disorders result in persons being born with a cloaca, including persistent cloaca and sirenomelia (mermaid syndrome). Reptiles In reptiles, the cloaca consists of the urodeum, proctodeum, and coprodeum. Some species have modified cloacae for increased gas exchange (see reptile respiration and reptile reproduction).
This is where reproductive activity occurs. Cloacal respiration in animals Some turtles, especially those specialized in diving, are highly reliant on cloacal respiration during dives. They accomplish this by having a pair of accessory air bladders connected to the cloaca, which can absorb oxygen from the water. Sea cucumbers use cloacal respiration. The constant flow of water through it has allowed various fish, polychaete worms and even crabs to specialize to take advantage of it while living protected inside the cucumber. At night, many of these species emerge through the anus of the sea cucumber in search of food. See also Cloaca (embryology) References Animal anatomy Bird anatomy Digestive system Sex organs Animal reproductive system Urinary system
Cloaca
Biology
1,040
18,171,647
https://en.wikipedia.org/wiki/Index%20of%20telephone-related%20articles
These are some of the links to articles that are telephone related. 0-9 116 telephone number 800 number A-F Alexander Graham Bell Answering machine Antonio Meucci Area code Bell labs Bell System Call Login Systems Carterfone Cell site Cellular network Charles Bourseul Cordless telephone Martin Cooper Demon Dialing Dial tone Elisha Gray Elisha Gray and Alexander Bell telephone controversy Emergency phone Emile Berliner Fax Federal telephone excise tax Francis Blake (telephone) G-L Geographic number Harmonised service of social value History of mobile phones History of the telephone Telephone in United States history Hybrid routing Innocenzo Manzetti Invention of the telephone Jipp curve Local loop M-R Mobile phone Philipp Reis Phreaking Plain old telephone service (POTS) Private branch exchange Public switched telephone network Rate center Regional Bell Operating Company Ringaround S-Z Satellite phone Sidetone Telecommunications Telephone Telephone call Telephone directory Telephone exchange Telephone line Telephone newspaper Telephone number Telephone switchboard Telephone tapping Telephony Thomas Edison Timeline of the telephone Tip and ring (Wiring terminology) Toll-free telephone number Zone Usage Measurement See also Telephony Telecommunications equipment Telephone connectors Telephone directory publishing companies Telephone directory publishing companies of the United States Telephone exchanges Telephone numbers
Index of telephone-related articles
Mathematics
244
4,566,237
https://en.wikipedia.org/wiki/X-PLOR
X-PLOR is a computer software package for computational structural biology originally developed by Axel T. Brunger at Yale University. It was first published in 1987 as an offshoot of CHARMM - a similar program that ran on supercomputers made by Cray Inc. It is used in the fields of X-ray crystallography and nuclear magnetic resonance spectroscopy of proteins (NMR) analysis. X-PLOR is a highly sophisticated program that provides an interface between theoretical foundations and experimental data in structural biology, with specific emphasis on X-ray crystallography and nuclear magnetic resonance spectroscopy in solution of biological macro-molecules. It is intended mainly for researchers and students in the fields of computational chemistry, structural biology, and computational molecular biology. See also Comparison of software for molecular mechanics modeling Molecular mechanics References External links The program's reference manual hosted at Oxford University Molecular dynamics software Computer libraries
X-PLOR
Chemistry,Technology,Biology
182
37,212,381
https://en.wikipedia.org/wiki/Aureoboletus%20auriflammeus
Aureoboletus auriflammeus, commonly known as the flaming gold bolete, is a species of bolete fungus in the family Boletaceae. Described as new to science in 1872, it is found in eastern North America, where it grows in a mycorrhizal association with oaks. The caps of the fruit bodies are golden orange, with a yellow pore surface on the underside, and a reticulated (network-like) stem. The edibility of the mushroom is not known. Taxonomy The species was first described scientifically by English mycologist Miles Joseph Berkeley in 1872, based on specimens collected in North Carolina and sent to him by Moses Ashley Curtis. Berkeley and Curtis named it Boletus auriflammeus. Berkeley called it "a lovely species", and thought it to be related to two other boletes he described in the same publication: Boletus hemichrysus and Boletus ravenelii. It was later transferred to Ceriomyces by William Alphonso Murrill in 1909, a genus that has since been folded into Boletus. Because the fruit bodies stain the collector's hands yellow, Rolf Singer in 1947 placed the species in Pulveroboletus, despite the lack of a partial veil characteristic of that genus. Singer interpreted the powdery surface as lingering remnants of a powdery partial veil. In 2016, it was moved to Aureoboletus genus. The specific epithet auriflammeus means "flaming gold". Similarly, its common name is "flaming gold bolete". Description The cap is initially convex before becoming broadly convex to flattened in age, and attains a diameter of . The cap surface is dry, and, in young individuals, has a powdery coating that will stain hands yellow if handled. Later, the cap becomes tomentose (hairy), and sometimes develops small cracks. The cap color is bright orange-yellow, sometimes mixed with olive tints. The flesh is white to cream, and does not bruise blue when injured or exposed to air by cutting. Its odor is not distinctive, and its taste either not distinctive or acidic. The pore surface is initially yellow to yellow orange, becoming olive-yellow to greenish yellow in age, sometimes developing bright crimson to crimson-orange tints. The tube attachment to the stem is adnate to subdecurrent (running slightly down the length of the stem) and often depressed near the stem at maturity. The pores are angular, radially elongated near the stem, and typically more than 1 mm wide in maturity. Tubes are up to deep. The stem is long, thick, and either nearly equal in width throughout or slightly enlarged in either end. The stem surface is usually reticulate at least on the upper portion of mature specimens, although this characteristic is less pronounced or absent in young individuals. The mycelium at the base of the stem is white. The stem has neither a partial veil nor an annulus. Aureoboletus auriflammeus produces an olive-brown to ochre-brown spore print. It is not known if the fruit bodies are edible. Spores are roughly elliptical to somewhat spindle-shaped, smooth, nearly hyaline (translucent), and measure 8–12 by 3–5 μm. The cheilocystidia (cystidia on the tube edge) are abundant, thin-walled, broadly club-shaped to sphaeropedunculate (rounded and with a short stalk). The pleurocystidia (cystidia on the tube face) are abundant, think-walled, broadly ventricose (swollen in the middle) or sometimes club-shaped. The cap cuticle is made of hyphae with bright yellow encrusted crystals in water that dissolve in potassium hydroxide to produce a diffuse lemon-yellow pigment. 
Similar species Boletus aurantiosplendens is somewhat similar in appearance to A. auriflammeus, but several features of the former species can distinguish it from the latter: an orange to brownish-orange or brownish-yellow cap; a yellow to apricot or orange stem with tawny to reddish-brown streaks that do not stain fingers when handled; yellow flesh that darkens when exposed or injured; ventricose or ventricose-rostrate (swollen in the middle with a narrow tip) cheilocystidia and pleurocystidia; and hyphae in the cap cuticle that lack bright yellow encrusted crystals. B. roxanae is another lookalike, but with less bright coloring, brownish tones on its cap, and no reticulation on the stem. Retiboletus ornatipes has a more robust stature and does not have orange tones on the stem. Habitat and distribution The fruit bodies of Aureoboletus auriflammeus grow singly, scattered, or in groups on the ground in woods in a mycorrhizal association with oaks. The fruiting season is between July and November. An occasional species, its range covers New York south to Florida and west to Ohio and Tennessee. See also List of Boletus species List of North American boletes References External links auriflammeus Fungi of North America Fungus species Taxa named by Miles Joseph Berkeley Taxa named by Moses Ashley Curtis
Aureoboletus auriflammeus
Biology
1,111
10,606,078
https://en.wikipedia.org/wiki/Source%E2%80%93sink%20dynamics
Source–sink dynamics is a theoretical model used by ecologists to describe how variation in habitat quality may affect the population growth or decline of organisms. Since quality is likely to vary among patches of habitat, it is important to consider how a low quality patch might affect a population. In this model, organisms occupy two patches of habitat. One patch, the source, is a high quality habitat that on average allows the population to increase. The second patch, the sink, is a very low quality habitat that, on its own, would not be able to support a population. However, if the excess of individuals produced in the source frequently moves to the sink, the sink population can persist indefinitely. Organisms are generally assumed to be able to distinguish between high and low quality habitat, and to prefer high quality habitat. However, ecological trap theory describes the reasons why organisms may actually prefer sink patches over source patches. Finally, the source–sink model implies that some habitat patches may be more important to the long-term survival of the population, and considering the presence of source–sink dynamics will help inform conservation decisions. Theory development Although the seeds of a source–sink model had been planted earlier, Pulliam is often recognized as the first to present a fully developed source–sink model. He defined source and sink patches in terms of their demographic parameters, or BIDE rates (birth, immigration, death, and emigration rates). In the source patch, birth rates were greater than death rates, causing the population to grow. The excess individuals were expected to leave the patch, so that emigration rates were greater than immigration rates. In other words, sources were a net exporter of individuals. In contrast, in a sink patch, death rates were greater than birth rates, resulting in a population decline toward extinction unless enough individuals emigrated from the source patch. Immigration rates were expected to be greater than emigration rates, so that sinks were a net importer of individuals. As a result, there would be a net flow of individuals from the source to the sink (see Table 1). Pulliam's work was followed by many others who developed and tested the source–sink model. Watkinson and Sutherland presented a phenomenon in which high immigration rates could cause a patch to appear to be a sink by raising the patch's population above its carrying capacity (the number of individuals it can support). However, in the absence of immigration, the patches are able to support a smaller population. Since true sinks cannot support any population, the authors called these patches "pseudo-sinks". Definitively distinguishing between true sinks and pseudo-sinks requires cutting off immigration to the patch in question and determining whether the patch is still able to maintain a population. Thomas et al. were able to do just that, taking advantage of an unseasonable frost that killed off the host plants for a source population of Edith's checkerspot butterfly (Euphydryas editha). Without the host plants, the supply of immigrants to other nearby patches was cut off. Although these patches had appeared to be sinks, they did not become extinct without the constant supply of immigrants. They were capable of sustaining a smaller population, suggesting that they were in fact pseudo-sinks. 
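A minimal numerical sketch of the two-patch model described above, in the spirit of Pulliam's BIDE framing, is given below. The per-capita rates are invented for illustration: in the source, births exceed deaths and the surplus emigrates; in the sink, deaths exceed births, so the sink persists only while immigrants keep arriving.

```python
def simulate(years=50, immigration_on=True):
    """Two-patch source-sink sketch with made-up per-capita rates."""
    source, sink = 100.0, 100.0
    for _ in range(years):
        source *= 1.0 + 0.30 - 0.10      # source: births (0.30) > deaths (0.10)
        emigrants = 0.10 * source        # surplus individuals leave the source
        source -= emigrants
        sink *= 1.0 + 0.05 - 0.25        # sink: deaths (0.25) > births (0.05)
        if immigration_on:
            sink += emigrants            # net import of individuals from the source
    return round(source), round(sink)

print(simulate(immigration_on=True))     # sink population is maintained
print(simulate(immigration_on=False))    # sink population collapses toward zero
```

Cutting off immigration in the second run is the numerical analogue of the frost event described above: a true sink goes extinct without immigrants, whereas a pseudo-sink would settle at a smaller but nonzero population.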
Watkinson and Sutherland's caution about identifying pseudo-sinks was followed by Dias, who argued that differentiating between sources and sinks themselves may be difficult. She asserted that a long-term study of the demographic parameters of the populations in each patch is necessary. Otherwise, temporary variations in those parameters, perhaps due to climate fluctuations or natural disasters, may result in a misclassification of the patches. For example, Johnson described periodic flooding of a river in Costa Rica which completely inundated patches of the host plant for a rolled-leaf beetle (Cephaloleia fenestrata). During the floods, these patches became sinks, but at other times they were no different from other patches. If researchers had not considered what happened during the floods, they would not have understood the full complexity of the system. Dias also argued that an inversion between source and sink habitat is possible so that the sinks may actually become the sources. Because reproduction in source patches is much higher than in sink patches, natural selection is generally expected to favor adaptations to the source habitat. However, if the proportion of source to sink habitat changes so that sink habitat becomes much more available, organisms may begin to adapt to it instead. Once adapted, the sink may become a source habitat. This is believed to have occurred for the blue tit (Parus caeruleus) 7500 years ago as forest composition on Corsica changed, but few modern examples are known. Boughton described a source—pseudo-sink inversion in butterfly populations of E. editha. Following the frost, the butterflies had difficulty recolonizing the former source patches. Boughton found that the host plants in the former sources senesced much earlier than in the former pseudo-sink patches. As a result, immigrants regularly arrived too late to successfully reproduce. He found that the former pseudo-sinks had become sources, and the former sources had become true sinks. One of the most recent additions to the source–sink literature is by Tittler et al., who examined wood thrush (Hylocichla mustelina) survey data for evidence of source and sink populations on a large scale. The authors reasoned that emigrants from sources would likely be the juveniles produced in one year dispersing to reproduce in sinks in the next year, producing a one-year time lag between population changes in the source and in the sink. Using data from the Breeding Bird Survey, an annual survey of North American birds, they looked for relationships between survey sites showing such a one-year time lag. They found several pairs of sites showing significant relationships 60–80 km apart. Several appeared to be sources to more than one sink, and several sinks appeared to receive individuals from more than one source. In addition, some sites appeared to be a sink to one site and a source to another (see Figure 1). The authors concluded that source–sink dynamics may occur on continental scales. One of the more confusing issues involves identifying sources and sinks in the field. Runge et al. point out that in general researchers need to estimate per capita reproduction, probability of survival, and probability of emigration to differentiate source and sink habitats. If emigration is ignored, then individuals that emigrate may be treated as mortalities, thus causing sources to be classified as sinks. 
This issue is important if the source–sink concept is viewed in terms of habitat quality (as it is in Table 1) because classifying high-quality habitat as low-quality may lead to mistakes in ecological management. Runge et al. showed how to integrate the theory of source–sink dynamics with population projection matrices and ecological statistics in order to differentiate sources and sinks. Modes of dispersal Why would individuals ever leave high quality source habitat for a low quality sink habitat? This question is central to source–sink theory. Ultimately, it depends on the organisms and the way they move and distribute themselves between habitat patches. For example, plants disperse passively, relying on other agents such as wind or water currents to move seeds to another patch. Passive dispersal can result in source–sink dynamics whenever the seeds land in a patch that cannot support the plant's growth or reproduction. Winds may continually deposit seeds there, maintaining a population even though the plants themselves do not successfully reproduce. Another good example for this case are soil protists. Soil protists also disperse passively, relying mainly on wind to colonize other sites. As a result, source–sink dynamics can arise simply because external agents dispersed protist propagules (e.g., cysts, spores), forcing individuals to grow in a poor habitat. In contrast, many organisms that disperse actively should have no reason to remain in a sink patch, provided the organisms are able to recognize it as a poor quality patch (see discussion of ecological traps). The reasoning behind this argument is that organisms are often expected to behave according to the "ideal free distribution", which describes a population in which individuals distribute themselves evenly among habitat patches according to how many individuals the patch can support. When there are patches of varying quality available, the ideal free distribution predicts a pattern of "balanced dispersal". In this model, when the preferred habitat patch becomes crowded enough that the average fitness (survival rate or reproductive success) of the individuals in the patch drops below the average fitness in a second, lower quality patch, individuals are expected to move to the second patch. However, as soon as the second patch becomes sufficiently crowded, individuals are expected to move back to the first patch. Eventually, the patches should become balanced so that the average fitness of the individuals in each patch and the rates of dispersal between the two patches are even. In this balanced dispersal model, the probability of leaving a patch is inversely proportional to the carrying capacity of the patch. In this case, individuals should not remain in sink habitat for very long, where the carrying capacity is zero and the probability of leaving is therefore very high. An alternative to the ideal free distribution and balanced dispersal models is when fitness can vary among potential breeding sites within habitat patches and individuals must select the best available site. This alternative has been called the "ideal preemptive distribution", because a breeding site can be preempted if it has already been occupied. For example, the dominant, older individuals in a population may occupy all of the best territories in the source so that the next best territory available may be in the sink. 
As the subordinate, younger individuals age, they may be able to take over territories in the source, but new subordinate juveniles from the source will have to move to the sink. Pulliam argued that such a pattern of dispersal can maintain a large sink population indefinitely. Furthermore, if good breeding sites in the source are rare and poor breeding sites in the sink are common, it is even possible that the majority of the population resides in the sink. Importance in ecology The source–sink model of population dynamics has made contributions to many areas in ecology. For example, a species' niche was originally described as the environmental factors required by a species to carry out its life history, and a species was expected to be found only in areas that met these niche requirements. This concept of a niche was later termed the "fundamental niche", and described as all of the places a species could successfully occupy. In contrast, the "realized niche" was described as all of the places a species actually did occupy, and was expected to be less than the extent of the fundamental niche as a result of competition with other species. However, the source–sink model demonstrated that the majority of a population could occupy a sink which, by definition, did not meet the niche requirements of the species, and was therefore outside the fundamental niche (see Figure 2). In this case, the realized niche was actually larger than the fundamental niche, and ideas about how to define a species' niche had to change. Source–sink dynamics has also been incorporated into studies of metapopulations, a group of populations residing in patches of habitat. Though some patches may go extinct, the regional persistence of the metapopulation depends on the ability of patches to be re-colonized. As long as there are source patches present for successful reproduction, sink patches may allow the total number of individuals in the metapopulation to grow beyond what the source could support, providing a reserve of individuals available for re-colonization. Source–sink dynamics also has implications for studies of the coexistence of species within habitat patches. Because a patch that is a source for one species may be a sink for another, coexistence may actually depend on immigration from a second patch rather than the interactions between the two species. Similarly, source–sink dynamics may influence the regional coexistence and demographics of species within a metacommunity, a group of communities connected by the dispersal of potentially interacting species. Finally, the source–sink model has greatly influenced ecological trap theory, a model in which organisms prefer sink habitat over source habitat. Beyond acting as ecological traps, sink habitats may also differ in how they respond to major disturbances; colonization of sink habitat can allow a species to survive even if the source population goes extinct in a catastrophic event, which can substantially increase metapopulation stability. Conservation Land managers and conservationists have become increasingly interested in preserving and restoring high quality habitat, particularly where rare, threatened, or endangered species are concerned. As a result, it is important to understand how to identify or create high quality habitat, and how populations respond to habitat loss or change. Because a large proportion of a species' population could exist in sink habitat, conservation efforts may misinterpret the species' habitat requirements. 
Similarly, without considering the presence of a trap, conservationists might mistakenly preserve trap habitat under the assumption that an organism's preferred habitat was also good quality habitat. Simultaneously, source habitat may be ignored or even destroyed if only a small proportion of the population resides there. Degradation or destruction of the source habitat will, in turn, impact the sink or trap populations, potentially over large distances. Finally, efforts to restore degraded habitat may unintentionally create an ecological trap by giving a site the appearance of quality habitat, but which has not yet developed all of the functional elements necessary for an organism's survival and reproduction. For an already threatened species, such mistakes might result in a rapid population decline toward extinction. In considering where to place reserves, protecting source habitat is often assumed to be the goal, although if the cause of a sink is human activity, simply designating an area as a reserve has the potential to convert current sink patches to source patches (e.g. no-take zones). Either way, determining which areas are sources or sinks for any one species may be very difficult, and an area that is a source for one species may be unimportant to others. Finally, areas that are sources or sinks currently may not be in the future as habitats are continually altered by human activity or climate change. Few areas can be expected to be universal sources, or universal sinks. While the presence of source, sink, or trap patches must be considered for short-term population survival, especially for very small populations, long-term survival may depend on the creation of networks of reserves that incorporate a variety of habitats and allow populations to interact. See also Conservation biology Ecological trap Ecology Landscape ecology List of ecology topics Metapopulation Perceptual trap Population dynamics Population ecology Population viability analysis Refuge (ecology) References Further reading Landscape ecology Ecological theories Population Conservation biology Disease ecology Behavioral ecology Ecological connectivity
Source–sink dynamics
Biology
2,988
15,419,078
https://en.wikipedia.org/wiki/PHLDB2
Pleckstrin homology-like domain family B member 2 is a protein that in humans is encoded by the PHLDB2 gene. Interactions PHLDB2 has been shown to interact with FLNC. References Further reading
PHLDB2
Chemistry
48
18,069
https://en.wikipedia.org/wiki/Lubricant
A lubricant (sometimes shortened to lube) is a substance that helps to reduce friction between surfaces in mutual contact, which ultimately reduces the heat generated when the surfaces move. It may also have the function of transmitting forces, transporting foreign particles, or heating or cooling the surfaces. The property of reducing friction is known as lubricity. In addition to industrial applications, lubricants are used for many other purposes. Other uses include cooking (oils and fats in use in frying pans and baking to prevent food sticking), to reduce rusting and friction in machinery, through the use of motor oil and grease, bioapplications on humans (e.g., lubricants for artificial joints), ultrasound examination, medical examination, and sexual intercourse. It is mainly used to reduce friction and to contribute to a better, more efficient functioning of a mechanism. History Lubricants have been in some use for thousands of years. Calcium soaps have been identified on the axles of chariots dated to 1400 BC. Building stones were slid on oil-impregnated lumber in the time of the pyramids. In the Roman era, lubricants were based on olive oil and rapeseed oil, as well as animal fats. The growth of lubrication accelerated in the Industrial Revolution with the accompanying use of metal-based machinery. Relying initially on natural oils, needs for such machinery shifted toward petroleum-based materials early in the 1900s. A breakthrough came with the development of vacuum distillation of petroleum, as described by the Vacuum Oil Company. This technology allowed the purification of very non-volatile substances, which are common in many lubricants. Properties A good lubricant generally possesses the following characteristics: A high boiling point and low freezing point (in order to stay liquid within a wide range of temperature) A high viscosity index Thermal stability Hydraulic stability Demulsibility Corrosion prevention A high resistance to oxidation Pour Point (the minimum temperature at which oil will flow under prescribed test conditions) Formulation Typically lubricants contain 90% base oil (most often petroleum fractions, called mineral oils) and less than 10% additives. Vegetable oils or synthetic liquids such as hydrogenated polyolefins, esters, silicones, fluorocarbons and many others are sometimes used as base oils. Additives deliver reduced friction and wear, increased viscosity, improved viscosity index, resistance to corrosion and oxidation, aging or contamination, etc. Non-liquid lubricants include powders (dry graphite, PTFE, molybdenum disulphide, tungsten disulphide, etc.), PTFE tape used in plumbing, air cushion and others. Dry lubricants such as graphite, molybdenum disulphide and tungsten disulphide also offer lubrication at temperatures (up to 350 °C) higher than liquid and oil-based lubricants are able to operate. Limited interest has been shown in low friction properties of compacted oxide glaze layers formed at several hundred degrees Celsius in metallic sliding systems; however, practical use is still many years away due to their physically unstable nature. Additives A large number of additives are used to impart performance characteristics to the lubricants. Modern automotive lubricants contain as many as ten additives, comprising up to 20% of the lubricant, the main families of additives are: Pour point depressants are compounds that prevent crystallization of waxes. Long chain alkylbenzenes adhere to small crystallites of wax, preventing crystal growth. 
Anti-foaming agents are typically silicone compounds which lower surface tension in order to discourage foam formation. Viscosity index improvers (VIIs) are compounds that allow lubricants to remain viscous at higher temperatures. Typical VIIs are polyacrylates and butadiene. Antioxidants suppress the rate of oxidative degradation of the hydrocarbon molecules within the lubricant. At low temperatures, free radical inhibitors such as hindered phenols are used, e.g. butylated hydroxytoluene. At temperatures >90 °C, where the metals catalyze the oxidation process, dithiophosphates are more useful. In the latter application the additives are called metal deactivators. Detergents ensure the cleanliness of engine components by preventing the formation of deposits on contact surfaces at high temperatures. Corrosion inhibitors (rust inhibitors) are usually alkaline materials, such as alkylsulfonate salts, that absorb acids that would corrode metal parts. Anti-wear additives form protective 'tribofilms' on metal parts, suppressing wear. They come in two classes depending on the strength with which they bind to the surface. Popular examples include phosphate esters and zinc dithiophosphates. Extreme pressure (anti-scuffing) additives form protective films on sliding metal parts. These agents are often sulfur compounds, such as dithiophosphates. Friction modifiers reduce friction and wear, particularly in the boundary lubrication regime where surfaces come into direct contact. In 1999, an estimated 37,300,000 tons of lubricants were consumed worldwide. Automotive applications (including electric vehicles) dominate, but other industrial, marine, and metalworking applications are also big consumers of lubricants. Although air and other gas-based lubricants are known (e.g., in fluid bearings), liquid lubricants dominate the market, followed by solid lubricants. Lubricants are generally composed of a majority of base oil plus a variety of additives to impart desirable characteristics. Although generally lubricants are based on one type of base oil, mixtures of the base oils also are used to meet performance requirements. Mineral oil The term "mineral oil" is used to refer to lubricating base oils derived from crude oil. The American Petroleum Institute (API) designates several types of lubricant base oil: Group I – Saturates < 90% and/or sulfur > 0.03%, and Society of Automotive Engineers (SAE) viscosity index (VI) of 80 to 120 Manufactured by solvent extraction, solvent or catalytic dewaxing, and hydro-finishing processes. Common Group I base oils are 150SN (solvent neutral), 500SN, and 150BS (brightstock) Group II – Saturates > 90% and sulfur < 0.03%, and SAE viscosity index of 80 to 120 Manufactured by hydrocracking and solvent or catalytic dewaxing processes. Group II base oil has superior anti-oxidation properties since virtually all hydrocarbon molecules are saturated. It has water-white color. Group III – Saturates > 90%, sulfur < 0.03%, and SAE viscosity index over 120 Manufactured by special processes such as hydroisomerization. Can be manufactured from base oil or slack wax from the dewaxing process. Group IV – Polyalphaolefins (PAO) Group V – All others not included above, such as naphthenics, polyalkylene glycols (PAG), and polyesters. 
The lubricant industry commonly extends this group terminology to include: Group I+ with a viscosity index of 103–108 Group II+ with a viscosity index of 113–119 Group III+ with a viscosity index of at least 140 Mineral oils can also be classified into three categories depending on the prevailing composition: Paraffinic Naphthenic Aromatic Synthetic oils Petroleum-derived lubricants can also be produced using synthetic hydrocarbons (derived ultimately from petroleum), known as "synthetic oils". These include: Polyalpha-olefin (PAO) Synthetic esters Polyalkylene glycols (PAG) Phosphate esters Perfluoropolyether (PFPE) Alkylated naphthalenes (AN) Silicate esters Ionic fluids Multiply alkylated cyclopentanes (MAC) Solid lubricants PTFE: polytetrafluoroethylene (PTFE) is typically used as a coating layer on, for example, cooking utensils to provide a non-stick surface. Its usable temperature range up to 350 °C and chemical inertness make it a useful additive in special greases, where it can function both as a thickener and a lubricant. Under extreme pressures, PTFE powder or solids are of little value as the material is soft and flows away from the area of contact. Ceramic or metal or alloy lubricants must be used then. Inorganic solids: Graphite, hexagonal boron nitride, molybdenum disulfide and tungsten disulfide are examples of solid lubricants. Some retain their lubricity to very high temperatures. The use of some such materials is sometimes restricted by their poor resistance to oxidation (e.g., molybdenum disulfide degrades above 350 °C in air, but at 1100 °C in reducing environments). Metal/alloy: Metal alloys, composites and pure metals can be used as grease additives or the sole constituents of sliding surfaces and bearings. Cadmium and gold are used for plating surfaces, which gives them good corrosion resistance and sliding properties. Lead, tin, and zinc alloys and various bronze alloys are used as sliding bearings, or their powder can be used to lubricate sliding surfaces alone. Aqueous lubrication Aqueous lubrication is of interest in a number of technological applications. Strongly hydrated brush polymers such as PEG can serve as lubricants at liquid–solid interfaces. By continuous rapid exchange of bound water with other free water molecules, these polymer films keep the surfaces separated while maintaining a high fluidity at the brush–brush interface at high compressions, thus leading to a very low coefficient of friction. Biolubricant Biolubricants are derived from vegetable oils and other renewable sources. They usually are triglyceride esters (fats obtained from plants and animals). For lubricant base oil use, the vegetable derived materials are preferred. Common ones include high oleic canola oil, castor oil, palm oil, sunflower seed oil and rapeseed oil from vegetable sources, and tall oil from tree sources. Many vegetable oils are often hydrolyzed to yield the acids which are subsequently combined selectively to form specialist synthetic esters. Other naturally derived lubricants include lanolin (wool grease, a natural water repellent). Whale oil was a historically important lubricant, with some uses up to the latter part of the 20th century as a friction modifier additive for automatic transmission fluid. In 2008, the biolubricant market was around 1% of UK lubricant sales in a total lubricant market of 840,000 tonnes/year. 
Researchers at Australia's CSIRO have been studying safflower oil as an engine lubricant, finding superior performance and lower emissions than petroleum-based lubricants in applications such as engine-driven lawn mowers, chainsaws and other agricultural equipment. Grain-growers trialling the product have welcomed the innovation, with one describing it as needing very little refining, being biodegradable, and serving as a bioenergy source and biofuel. The scientists have reengineered the plant using gene silencing, creating a variety that produces oil of up to 93% purity, the highest currently available from any plant. Researchers at Montana State University’s Advanced Fuel Centre in the US, studying the oil’s performance in a large diesel engine and comparing it with conventional oil, have described the results as a "game-changer". Greases Greases are solid or semi-solid lubricants produced by blending thickening agents into a liquid lubricant. Greases are typically composed of about 80% lubricating oil, around 5% to 10% thickener, and approximately 10% to 15% additives. In most common greases, the thickener is a light or alkali metal soap, forming a sponge-like structure that encapsulates the oil droplets. Beyond lubrication, greases are generally expected to provide corrosion protection, typically achieved through additives. To prevent drying out at higher temperatures, dry lubricants are also added. By selecting appropriate oils, thickeners, and additives, the properties of greases can be optimized for a wide range of applications. There are greases suited for high or extremely low temperatures, vacuum applications, water-resistant and weatherproof greases, highly pressure-resistant or creeping types, food-grade, or exceptionally adhesive greases. Functions of lubricants One of the largest applications for lubricants, in the form of motor oil, is protecting the internal combustion engines in motor vehicles and powered equipment. Lubricant vs. anti-tack coating Anti-tack or anti-stick coatings are designed to reduce the adhesive condition (stickiness) of a given material. The rubber, hose, and wire and cable industries are the largest consumers of anti-tack products but virtually every industry uses some form of anti-sticking agent. Anti-sticking agents differ from lubricants in that they are designed to reduce the inherently adhesive qualities of a given compound while lubricants are designed to reduce friction between any two surfaces. Keep moving parts apart Lubricants are typically used to separate moving parts in a system. This separation has the benefit of reducing friction, wear and surface fatigue, together with reduced heat generation, operating noise and vibrations. Lubricants achieve this in several ways. The most common is by forming a physical barrier, i.e., a thin layer of lubricant separates the moving parts. This is analogous to hydroplaning, the loss of friction observed when a car tire is separated from the road surface by moving through standing water. This is termed hydrodynamic lubrication. In cases of high surface pressures or temperatures, the fluid film is much thinner and some of the forces are transmitted between the surfaces through the lubricant. Reduce friction Typically the lubricant-to-surface friction is much less than surface-to-surface friction in a system without any lubrication. Thus use of a lubricant reduces the overall system friction. Reduced friction has the benefit of lower heat generation, reduced formation of wear particles, and improved efficiency. 
Lubricants may contain polar additives known as friction modifiers that chemically bind to metal surfaces to reduce surface friction even when there is insufficient bulk lubricant present for hydrodynamic lubrication, e.g. protecting the valve train in a car engine at startup. The base oil itself might also be polar in nature and as a result inherently able to bind to metal surfaces, as with polyolester oils. Transfer heat Both gas and liquid lubricants can transfer heat. However, liquid lubricants are much more effective on account of their high specific heat capacity. Typically the liquid lubricant is constantly circulated to and from a cooler part of the system, although lubricants may be used to warm as well as to cool when a regulated temperature is required. This circulating flow also determines the amount of heat that is carried away in any given unit of time. High flow systems can carry away a lot of heat and have the additional benefit of reducing the thermal stress on the lubricant. Thus lower cost liquid lubricants may be used. The primary drawback is that high flows typically require larger sumps and bigger cooling units. A secondary drawback is that a high flow system that relies on the flow rate to protect the lubricant from thermal stress is susceptible to catastrophic failure during sudden system shut downs. An automotive oil-cooled turbocharger is a typical example. Turbochargers get red hot during operation and the oil that is cooling them only survives as its residence time in the system is very short (i.e. high flow rate). If the system is shut down suddenly (pulling into a service area after a high-speed drive and stopping the engine) the oil that is in the turbo charger immediately oxidizes and will clog the oil ways with deposits. Over time these deposits can completely block the oil ways, reducing the cooling with the result that the turbo charger experiences total failure, typically with seized bearings. Non-flowing lubricants such as greases and pastes are not effective at heat transfer although they do contribute by reducing the generation of heat in the first place. Carry away contaminants and debris Lubricant circulation systems have the benefit of carrying away internally generated debris and external contaminants that get introduced into the system to a filter where they can be removed. Lubricants for machines that regularly generate debris or contaminants such as automotive engines typically contain detergent and dispersant additives to assist in debris and contaminant transport to the filter and removal. Over time the filter will get clogged and require cleaning or replacement, hence the recommendation to change a car's oil filter at the same time as changing the oil. In closed systems such as gear boxes the filter may be supplemented by a magnet to attract any iron fines that get created. It is apparent that in a circulatory system the oil will only be as clean as the filter can make it, thus it is unfortunate that there are no industry standards by which consumers can readily assess the filtering ability of various automotive filters. Poor automotive filters significantly reduce the life of the machine (engine) as well as make the system inefficient. Transmit power Lubricants known as hydraulic fluid are used as the working fluid in hydrostatic power transmission. Hydraulic fluids comprise a large portion of all lubricants produced in the world. 
The automatic transmission's torque converter is another important application for power transmission with lubricants. Protect against wear Lubricants prevent wear by reducing friction between two parts. Lubricants may also contain anti-wear or extreme pressure additives to boost their performance against wear and fatigue. Prevent corrosion and rusting Many lubricants are formulated with additives that form chemical bonds with surfaces or that exclude moisture, to prevent corrosion and rust. Such a lubricant reduces corrosion between two metallic surfaces and prevents contact between those surfaces, helping to avoid immersed corrosion. Seal for gases Lubricants occupy the clearance between moving parts through capillary force, thus sealing the clearance. This effect can be used to seal pistons and shafts. Fluid types Automotive Motor oils Petrol (gasoline) engine oils Diesel engine oils Automatic transmission fluid Gearbox fluids Brake fluids Hydraulic fluids Air conditioning compressor oils Tractor (one lubricant for all systems) Universal Tractor Transmission Oil – UTTO Super Tractor Oil Universal – STOU – includes engine Other motors 2-stroke engine oils Industrial Hydraulic oils Air compressor oils Food-grade lubricant Gas Compressor oils Gear oils Bearing and circulating system oils Refrigerator compressor oils Steam and gas turbine oils Aviation Gas turbine engine oils Piston engine oils Marine Crosshead cylinder oils Crosshead Crankcase oils Trunk piston engine oils Stern tube lubricants "Glaze" formation (high-temperature wear) A further phenomenon that has undergone investigation in relation to high-temperature wear prevention and lubrication is that of a compacted oxide layer glaze formation. Such glazes are generated by sintering a compacted oxide layer. Such glazes are crystalline, in contrast to the amorphous glazes seen in pottery. The required high temperatures arise from metallic surfaces sliding against each other (or a metallic surface against a ceramic surface). Due to the elimination of metallic contact and adhesion by the generation of oxide, friction and wear are reduced. Effectively, such a surface is self-lubricating. As the "glaze" is already an oxide, it can survive to very high temperatures in air or oxidising environments. However, a drawback is that the base metal (or ceramic) must first undergo some wear to generate sufficient oxide debris. Disposal and environmental impact It is estimated that about 50% of all lubricants are released into the environment. Common disposal methods include recycling, burning, landfill and discharge into water, though typically disposal in landfill and discharge into water are strictly regulated in most countries, as even a small amount of lubricant can contaminate a large amount of water. Most regulations permit a threshold level of lubricant that may be present in waste streams and companies spend hundreds of millions of dollars annually in treating their waste waters to get to acceptable levels. Burning the lubricant as fuel, typically to generate electricity, is also governed by regulations mainly on account of the relatively high level of additives present. Burning generates both airborne pollutants and ash rich in toxic materials, mainly heavy metal compounds. Thus lubricant burning takes place in specialized facilities that have incorporated special scrubbers to remove airborne pollutants and have access to landfill sites with permits to handle the toxic ash. 
Unfortunately, most lubricant that ends up directly in the environment is due to the general public discharging it onto the ground, into drains, and directly into landfills as trash. Other direct contamination sources include runoff from roadways, accidental spillages, natural or man-made disasters, and pipeline leakages. Improvement in filtration technologies and processes has now made recycling a viable option (with the rising price of base stock and crude oil). Typically various filtration systems remove particulates, additives, and oxidation products and recover the base oil. The oil may get refined during the process. This base oil is then treated much the same as virgin base oil; however, there is considerable reluctance to use recycled oils as they are generally considered inferior. Basestock fractionally vacuum distilled from used lubricants has superior properties to all-natural oils, but cost-effectiveness depends on many factors. Used lubricant may also be used as refinery feedstock to become part of crude oil. Again, there is considerable reluctance toward this use, as the additives, soot, and wear metals will seriously poison/deactivate the critical catalysts in the process. Cost prohibits carrying out both filtration (soot and additive removal) and re-refining (distilling, isomerization, hydrocracking, etc.); however, the primary hindrance to recycling remains the collection of fluids, as refineries need a continuous supply in quantities measured in cisterns or rail tanks. Occasionally, unused lubricant requires disposal. The best course of action in such situations is to return it to the manufacturer where it can be processed as a part of fresh batches. Environment: Lubricants, both fresh and used, can cause considerable damage to the environment, mainly due to their high potential for serious water pollution. Further, the additives typically contained in lubricant can be toxic to flora and fauna. In used fluids, the oxidation products can be toxic as well. Lubricant persistence in the environment largely depends upon the base fluid; however, if very toxic additives are used, they may negatively affect its persistence. Lanolin lubricants are non-toxic, making them an environmentally friendly alternative that is safe for both users and the environment. Societies and industry bodies American Petroleum Institute (API) Society of Tribologists and Lubrication Engineers (STLE) National Lubricating Grease Institute (NLGI) Society of Automotive Engineers (SAE) Independent Lubricant Manufacturer Association (ILMA) European Automobile Manufacturers Association (ACEA) Japanese Automotive Standards Organization (JASO) Petroleum Packaging Council (PPC) Major publications Peer reviewed ASME Journal of Tribology Tribology International Tribology Transactions Journal of Synthetic Lubricants Tribology Letters Lubrication Science Trade periodicals Tribology and Lubrication Technology Fuels & Lubes International Oiltrends Lubes n' Greases Compoundings Chemical Market Review Machinery lubrication See also References Notes Sources API 1509, Engine Oil Licensing and Certification System, 15th Edition, 2002. Appendix E, API Base Oil Interchangeability Guidelines for Passenger Car Motor Oils and Diesel Engine Oils (revised) Boughton and Horvath, 2003, Environmental Assessment of Used Oil Management Methods, Environmental Science and Technology, V38 I.A. Inman. Compacted Oxide Layer Formation under Conditions of Limited Debris Retention at the Wear Interface during High Temperature Sliding Wear of Superalloys, Ph.D. 
Thesis (2003), Northumbria University Mercedes-Benz oil recommendations, extracted from factory manuals and personal research Measuring reserve alkalinity and evaluation of wear dependence Testing used oil quality, list of possible measurements External links SAE-ISO-AGMA viscosity conversion chart Chart of API Gravity and Specific gravity Petroleum products Tribology
Lubricant
Chemistry,Materials_science,Engineering
5,165
4,938,776
https://en.wikipedia.org/wiki/Gerotor
A gerotor is a positive displacement pump. The name gerotor is derived from "generated rotor." A gerotor unit consists of an inner and an outer rotor. The inner rotor has n teeth, while the outer rotor has n + 1 teeth, with n defined as a natural number greater than or equal to 2. The axis of the inner rotor is offset from the axis of the outer rotor and both rotors rotate on their respective axes. The geometry of the two rotors partitions the volume between them into n different dynamically-changing volumes. During the assembly's rotation cycle, each of these volumes changes continuously, so any given volume first increases, and then decreases. An increase creates a vacuum. This vacuum creates suction, and hence, this part of the cycle is where the inlet is located. As a volume decreases, compression occurs. During this compression period, fluids can be pumped or, if they are gaseous fluids, compressed. Gerotor pumps are generally designed using a trochoidal inner rotor and an outer rotor formed by a circle with intersecting circular arcs. A gerotor can also function as a pistonless rotary engine. High-pressure gas enters the intake and pushes against the inner and outer rotors, causing both to rotate as the volume between the inner and outer rotor increases. During the compression period, the exhaust is pumped out. History At the most basic level, a gerotor is essentially a rotor that is moved via fluid power. Originally, this fluid was water; today, the wider use is in hydraulic devices. Myron F. Hill, who might be called the father of the gerotor, in his booklet "Kinematics of Ge-rotors", lists efforts by Galloway in 1787, by Nash and Tilden in 1879, by Cooley in 1900, by Professor Lilly of Dublin University in 1915, and by Feuerheerd in 1918. These men were all working to perfect an internal gear mechanism with a one-tooth difference to provide displacement. Myron Hill made his first efforts in 1906, then, in 1921, gave his entire time to developing the gerotor. He developed a great deal of geometric theory bearing upon these rotors, coined the word GE-ROTOR (meaning generated rotor), and secured basic patents on GE-ROTOR. Gerotors are widely used today throughout industry, and are produced in a variety of shapes and sizes by a number of different methods. Uses Engine Fuel pump Gas compressor Hydraulic motor Limited-slip differential Oil pump (internal combustion engine) Power steering units See also Conical screw compressor Gear pump Quasiturbine Wankel engine References https://www.academia.edu/10200507/Gerotor_Modeling_with_NX3 External links Cascon Inc. Nichols Portland LLC Pump School - Gerotor pump description and animation Step by step drawing Engine technology Gas compressors Pumps
Gerotor
Physics,Chemistry,Technology
592
568,248
https://en.wikipedia.org/wiki/Paralanguage
Paralanguage, also known as vocalics, is a component of meta-communication that may modify meaning, give nuanced meaning, or convey emotion, by using techniques such as prosody, pitch, volume, intonation, etc. It is sometimes defined as relating to nonphonemic properties only. Paralanguage may be expressed consciously or unconsciously. The study of paralanguage is known as paralinguistics and was invented by George L. Trager in the 1950s, while he was working at the Foreign Service Institute of the U.S. Department of State. His colleagues at the time included Henry Lee Smith, Charles F. Hockett (working with him on using descriptive linguistics as a model for paralanguage), Edward T. Hall developing proxemics, and Ray Birdwhistell developing kinesics. Trager published his conclusions in 1958, 1960 and 1961. His work has served as a basis for all later research, especially studies investigating the relationship between paralanguage and culture (since paralanguage is learned, it differs by language and culture). A good example is the work of John J. Gumperz on language and social identity, which specifically describes paralinguistic differences between participants in intercultural interactions. The film Gumperz made for BBC in 1982, Multiracial Britain: Cross talk, does a particularly good job of demonstrating cultural differences in paralanguage and their impact on relationships. Paralinguistic information, because it is phenomenal, belongs to the external speech signal (Ferdinand de Saussure's parole) but not to the arbitrary conventional code of language. Even vocal language has some paralinguistic as well as linguistic properties that can be seen (lip reading, McGurk effect), and even felt, e.g. by the Tadoma method. Aspects of the speech signal Perspectival aspects Speech signals arrive at a listener's ears with acoustic properties that may allow listeners to identify the location of the speaker (sensing distance and direction, for example). Sound localization functions in a similar way also for non-speech sounds. The perspectival aspects of lip reading are more obvious and have more drastic effects when head turning is involved. Organic aspects The speech organs of different speakers differ in size. As children grow up, their organs of speech become larger, and there are differences between male and female adults. The differences concern not only size, but also proportions. They affect the pitch of the voice and to a substantial extent also the formant frequencies, which characterize the different speech sounds. The organic quality of speech has a communicative function in a restricted sense, since it is merely informative about the speaker. It will be expressed independently of the speaker's intention. Expressive aspects Paralinguistic cues such as loudness, rate, pitch, pitch contour, and to some extent formant frequencies of an utterance, contribute to the emotive or attitudinal quality of an utterance. Typically, attitudes are expressed intentionally and emotions without intention, but attempts to fake or to hide emotions are not unusual. Consequently, paralinguistic cues relating to expression have a moderate effect on semantic marking. That is, a message may be made more or less coherent by adjusting its expressive presentation. For instance, an utterance such as "I drink a glass of wine every night before I go to sleep" is coherent when made by a speaker identified as an adult, but registers a small semantic anomaly when made by a speaker identified as a child. 
This anomaly is significant enough to be measured through electroencephalography, as an N400. Autistic individuals have a reduced sensitivity to this and similar effects. Emotional tone of voice, itself paralinguistic information, has been shown to affect the resolution of lexical ambiguity. Some words have homophonous partners; some of these homophones appear to have an implicit emotive quality, for instance, the sad "die" contrasted with the neutral "dye"; uttering the sound /dai/ in a sad tone of voice can result in a listener writing the former word significantly more often than if the word is uttered in a neutral tone. Linguistic aspects Ordinary phonetic transcriptions of utterances reflect only the linguistically informative quality. The problem of how listeners factor out the linguistically informative quality from speech signals is a topic of current research. Some of the linguistic features of speech, in particular of its prosody, are paralinguistic or pre-linguistic in origin. A most fundamental and widespread phenomenon of this kind is described by John Ohala as the "frequency code". This code works even in communication across species. It has its origin in the fact that the acoustic frequencies in the voice of small vocalizers are high, while they are low in the voice of large vocalizers. This gives rise to secondary meanings such as "harmless", "submissive", "unassertive", which are naturally associated with smallness, while meanings such as "dangerous", "dominant", and "assertive" are associated with largeness. In most languages, the frequency code also serves the purpose of distinguishing questions from statements. It is universally reflected in expressive variation, and it is reasonable to assume that it has phylogenetically given rise to the sexual dimorphism that lies behind the large difference in pitch between average female and male adults. In text-only communication such as email, chatrooms and instant messaging, paralinguistic elements can be displayed by emoticons, font and color choices, capitalization and the use of non-alphabetic or abstract characters. Nonetheless, paralanguage in written communication is limited in comparison with face-to-face conversation, sometimes leading to misunderstandings. Specific forms of paralinguistic respiration Gasps A gasp is a kind of paralinguistic respiration in the form of a sudden and sharp inhalation of air through the mouth. A gasp may indicate difficulty breathing and a panicked effort to draw air into the lungs. Gasps also occur from an emotion of surprise, shock or disgust. Like a sigh, a yawn, or a moan, a gasp is often an automatic and unintentional act. Gasping is closely related to sighing, and the inhalation characterizing a gasp induced by shock or surprise may be released as a sigh if the event causing the initial emotional reaction is determined to be less shocking or surprising than the observer first believed. As a symptom of physiological problems, apneustic respirations (a.k.a. apneusis), are gasps related to the brain damage associated with a stroke or other trauma. Sighs A sigh is a kind of paralinguistic respiration in the form of a deep and especially audible, single exhalation of air out of the mouth or nose, that humans use to communicate emotion. It is a voiced pharyngeal fricative, sometimes associated with a guttural glottal breath exuded in a low tone. It often arises from a negative emotion, such as dismay, dissatisfaction, boredom, or futility. 
A sigh can also arise from positive emotions such as relief, particularly in response to some negative situation ending or being avoided. Like a gasp, a yawn, or a moan, a sigh is often an automatic and unintentional act. Scientific studies show that babies sigh after 50 to 100 breaths. This serves to improve the mechanical properties of lung tissue, and it also helps babies to develop a regular breathing rhythm. Behaviors equivalent to sighing have also been observed in animals such as dogs, monkeys, and horses. In text messages and internet chat rooms, or in comic books, a sigh is usually represented with the word itself, 'sigh', possibly within asterisks, *sigh*. Sighing is also a reflex, governed by a few neurons. Moans and groans Moaning and groaning both refer to an extended sound emanating from the throat, which is typically made by engaging in sexual activity. Moans and groans are also noises traditionally associated with ghosts, and their supposed experience of suffering in the afterlife. They are sometimes used to indicate displeasure. Throat clearing Throat clearing is a metamessaging nonverbal form of communication used in announcing one's presence upon entering the room or approaching a group. It is done by individuals who perceive themselves to be of higher rank than the group they are approaching and utilize the throat-clear as a form of communicating this perception to others. It can convey nonverbalized disapproval. In chimpanzee social hierarchy, this utterance is a sign of rank, directed by alpha males and higher-ranking chimps to lower-ranking ones and signals a mild warning or a slight annoyance. As a form of metacommunication, the throat-clear is acceptable only to signal that a formal business meeting is about to start. It is not acceptable business etiquette to clear one's throat when approaching a group on an informal basis; the basis of one's authority has already been established and requires no further reiteration by this ancillary nonverbal communication. Mhm The utterance "mhm" sits between literal language and movement: making a noise such as "hmm" or "mhm" creates a pause in the conversation or gives a chance to stop and think. The "mhm" utterance is often used in narrative interviews, such as an interview with a disaster survivor or sexual violence victim. In this kind of interview, it is better for the interviewers or counselors not to intervene too much when an interviewee is talking. The "mhm" assures the interviewee that they are being heard and can continue their story. Observing emotional differences and taking care of an interviewee's mental status is an important way to find slight changes during conversation. Huh? "Huh?", meaning "what?" (that is, used when an utterance by another is not fully heard or requires clarification), is an essentially universal expression, but may be a normal word (learned like other words) and not paralanguage. If it is a word, it is a rare (or possibly even unique) one, being found with basically the same sound and meaning in almost all languages. Physiology of paralinguistic comprehension fMRI studies Several studies have used the fMRI paradigm to observe brain states brought about by adjustments of paralinguistic information. One such study investigated the effect of interjections that differed along the criteria of lexical index (more or less "wordy") as well as neutral or emotional pronunciation; a higher hemodynamic response in auditory cortical gyri was found when more robust paralinguistic data was available. 
Some activation was found in lower brain structures such as the pons, perhaps indicating an emotional response. See also Business communication Intercultural competence Kinesics Meta message Meta-communication Metacommunicative competence Prosody (linguistics) Proxemics References Further reading Cook, Guy (2001) The Discourse of Advertising. (second edition) London: Routledge. (chapter 4 on paralanguage and semiotics). Robbins, S. and Langton, N. (2001) Organizational Behaviour: Concepts, Controversies, Applications (2nd Canadian ed.). Upper Saddle River, NJ: Prentice-Hall. Traunmüller, H. (2005) "Paralinguale Phänomene" (Paralinguistic phenomena), chapter 76 in: SOCIOLINGUISTICS An International Handbook of the Science of Language and Society, 2nd ed., U. Ammon, N. Dittmar, K. Mattheier, P. Trudgill (eds.), Vol. 1, pp. 653–665. Walter de Gruyter, Berlin/New York. Matthew McKay, Martha Davis, Patrick Fanning [1983] (1995) Messages: The Communication Skills Book, Second Edition, New Harbinger Publications, , , pp. 63–67. Human communication Nonverbal communication Sociological terminology Social philosophy Online chat
Paralanguage
Biology
2,486
18,110,071
https://en.wikipedia.org/wiki/HR%204049
HR 4049, also known as HD 89353 and AG Antliae, is a binary post-asymptotic-giant-branch (post-AGB) star in the constellation Antlia. A very metal-poor star, it is surrounded by a unique, thick circumbinary disk enriched in several molecules. With an apparent magnitude of about 5.5, the star can readily be seen with the naked eye under ideal conditions. It is located approximately distant. HR 4049 has a peculiar spectrum. The star appears, based on its spectrum in the Balmer series, to be a blue supergiant, although in reality it is an old low-mass star on the post-AGB phase of its life. Its atmosphere is extremely deficient in heavy elements, with a metallicity over 30,000 times lower than the Sun's. It also shows a strong infrared excess, corresponding closely to a blackbody produced by a disk of material surrounding the star. The star is also undergoing intense mass-loss. HR 4049 has an unseen companion, detected from variations in the Doppler shift of its spectral lines. The properties of the companion can only be estimated by making certain assumptions about the inclination of the orbit and the mass function. Given those assumptions, it is thought to be a low luminosity main sequence star. HR 4049 was discovered to be a variable star by Christoffel Waelkens and Fredy Rufener in 1983. It was given the variable star designation AG Antliae in 1987, but is still more commonly referred to as HR 4049. HR 4049 is an unusual variable star, ranging between magnitudes 5.29 and 5.83 with a period of 429 days. It has been described as pulsating in a similar fashion to an RV Tauri variable, although the preferred interpretation is that the variations are caused by variable extinction from the material around the star and that the period is the same as the orbital period. Although HR 4049 apparently has the spectrum of a blue supergiant, it is an old low-mass star which has exhausted nuclear fusion and is losing its outer layers as it transitions towards a white dwarf and possibly a planetary nebula. During this phase it has a luminosity several thousand times that of the Sun, although a mass around half that of the Sun. The mass can only be guessed from the expected mass of the white dwarf that it is becoming. References External links The Spatial Distribution of Grains Around the Dual Chemistry Post-AGB Star Synthetic post-AGB evolution Non-linear radiative models of post-AGB stars: Application to HD 56126 Post-AGB stars as testbeds of nucleosynthesis in AGB stars The post-AGB evolution of AGB mass loss variations Antlia Post-asymptotic-giant-branch stars 089353 J10180758-2859308 DENIS objects Antliae, AG 4049 B-type supergiants 050456 Durchmusterung objects Spectroscopic binaries
HR 4049
Astronomy
629
31,222,291
https://en.wikipedia.org/wiki/Morchella%20tomentosa
Morchella tomentosa, commonly called the gray, fuzzy foot, or black foot morel, is a species of fungus in the family Morchellaceae. M. tomentosa is a fire-associated species described from western North America, formally described as new to science in 2008. Morchella tomentosa is identified by its post-fire occurrence, fine hairs on the surface of young fruit bodies, and a thick, "double-walled" stem. It also has unique sclerotia-like underground parts. Color can range from black and "sooty" to gray, brown, yellow, or white, although color tends to progress from darker to lighter with age of the fruiting body. Three other wildfire-adapted morels were described from western North America in 2012: M. capitata, M. septimelata, and M. sextelata. None of these three new species share the hairy surface texture of M. tomentosa. Phylogeny Based on studies of DNA, M. tomentosa is clearly a distinct species apart from the yellow morels (M. esculenta & ssp.) and black morels (M. elata & ssp.). Mushroom collectors also use the common name "gray morel" for M. esculenta-type morels in eastern North America. References External links tomentosa Edible fungi Fungi of North America Fungi described in 2008 Fungus species
Morchella tomentosa
Biology
299
34,406,020
https://en.wikipedia.org/wiki/SAV001
SAV001-H is the first candidate preventive HIV vaccine using a killed or "dead" version of the HIV-1 virus (inactivated vaccine). The vaccine was developed by Dr. Chil-Yong Kang and his research team at Western University’s Schulich School of Medicine & Dentistry in Canada. The results of the Phase I clinical trial, completed in August 2013, showed no serious adverse effects in 33 participants. Vaccine design The SAV001-H vaccine is considered to be the first whole killed genetically modified HIV-1 vaccine. According to Dr. Kang, the HIV-1 strain was genetically engineered such that first, “the gene responsible for pathogenicity, known as nef” is removed to make it non-pathogenic. Then, the signal peptide gene is replaced with a honey bee toxin (melittin) signal peptide to make the virus production much higher and faster. In the signal peptide exchange process, another gene called vpu is lost due to an overlap. Finally, this genetically modified version of HIV-1 (i.e., HIV-1 virus with nef negative, vpu negative and the signal peptide gene replaced with that of a honey bee) is grown in human T-lymphocytes (A3.01 cell line), collected, purified and inactivated by AT-2 (aldrithiol-2 or 2,2'-Dipyridyldisulfide) chemical treatment and gamma irradiation. AT-2 chemical treatment is used because it does not affect the viral structure and immunogens. The killed virus vaccine approach successfully prevents polio, influenza, cholera, mumps, rabies, typhoid fever and hepatitis A. At the moment, there are also 16 animal vaccines using the killed virus design. A vaccine against feline immunodeficiency virus (a virus related to HIV which infects cats) used the killed virus design; that vaccine was discontinued from production for multiple reasons, including commercial non-viability, protection not covering all FIV strains, and concerns over sarcoma at the injection site caused by adjuvants. Clinical trials Phase I clinical trial (NCT01546818) in HIV-infected individuals Funded by Sumagen Canada, the government of Canada and the Bill and Melinda Gates Foundation, it started in March 2012 to assess its safety, tolerability, and immune responses. This was a randomized, double-blind, placebo-controlled trial, administering vaccine intramuscularly to 33 chronically HIV-1-infected individuals being treated with HAART. The trial was completed in August 2013. It reported no serious adverse effects. The vaccine induced antibodies in participants. Antibodies against gp120 surface antigen and P24 capsid antigen increased up to 6-fold and 64-fold, respectively, and the increased antibody levels were maintained throughout the 52-week study period. Broadly neutralizing antibodies were found in some blood samples of the participants. Phase II clinical trial The Phase II clinical trial was expected to begin in 2018 in the United States to measure immune responses. The researchers planned to recruit about 600 HIV-negative volunteers in the high-risk category for HIV infection, such as commercial sex workers, men who have sex with men (MSM), injecting drug users, and people who have unsafe sex with multiple partners. Therapeutic HIV vaccine status Dr. Kang has also developed a therapeutic HIV vaccine employing recombinant vesicular stomatitis viruses carrying HIV-1 gag, pol and/or env genes. Researchers reported that the therapeutic vaccine induced robust cellular immune responses in recently conducted animal tests. 
History of killed HIV vaccine Although the whole killed virus vaccine strategy is successfully used worldwide to prevent diseases like polio, influenza, cholera, mumps, rabies, typhoid fever and hepatitis A, it did not receive serious attention in HIV vaccine development, for scientific, economic and technical reasons. First, there are risks associated with inadequately inactivated or incompletely killed HIV remaining in vaccines. Second, massive production of HIV is economically difficult, if not impossible. Third, many researchers believe that inactivating/killing HIV by chemical treatment also removes its antigenicity, so that it fails to induce both neutralizing antibodies and cytotoxic T-lymphocyte or CD8+ T cells (CTL). Fourth, early studies with monkeys using the killed simian immunodeficiency virus (SIV) vaccine showed some promise, but it turned out that the protection was attributable to responses against cellular proteins on both the SIV vaccine and the challenge virus, which had been grown not in monkey cells but in human cells. Fifth, lab-adapted HIV-1 seemed to lose envelope glycoprotein, gp120, during preparation. Nonetheless, many scientists and researchers believe that the whole killed virus vaccine strategy is a feasible option for an HIV vaccine. Jonas Salk had developed a therapeutic whole killed HIV vaccine in 1987, called Remune, which is being developed by Immune Response BioPharma, Inc. The Remune vaccine completed over 25 clinical studies and showed a robust mechanism of action, restoring white blood cell counts in CD4 and CD8 T cells, reducing viral load and increasing immunity. Developer and organizers The developer of SAV001-H, Dr. Chil-yong Kang, has been a professor of Virology in the Department of Microbiology and Immunology, Schulich School of Medicine & Dentistry at the University of Western Ontario since 1992. In addition to HIV preventive and therapeutic vaccine candidates, Dr. Kang is developing second-generation vaccines against the hepatitis B and hepatitis C viruses. The patents related to the SAV001 vaccine were registered in more than 70 countries, including the U.S., the European Union, China, India, and South Korea. References External links Clinical Trial Site for SAV001-H Dr. Chil-yong Kang's Lab Killed HIV Vaccine Advocate Phase I Trial Details IAVI Report: Whole Killed AIDS Vaccines US Patent: HIV COMBINATION VACCINE AND PRIME BOOST Sumagen Canada Homepage Curocom Homepage Schulich School of Medicine & Dentistry HIV vaccine research
SAV001
Chemistry
1,273
10,905,663
https://en.wikipedia.org/wiki/Asia%20and%20South%20Pacific%20Design%20Automation%20Conference
The Asia and South Pacific Design Automation Conference, or ASP-DAC is the international conference on VLSI design automation in Asia and South Pacific regions, the most active region of design, CAD and fabrication of silicon chips in the world. The ASP-DAC is a high-quality and premium conference on electronic design automation (EDA) like other sister conferences such as Design Automation Conference (DAC), International Conference on Computer Aided Design (ICCAD), Design, Automation & Test in Europe (DATE). Founded in 1995, the conference aims to provide a platform for researchers and designers to exchange ideas and understand the latest technologies in the areas of LSI design and design automation. See also Design Automation Conference International Conference on Computer-Aided Design Design Automation and Test in Europe References External links Main web page for the ASP-DAC conference IEEE conferences Electronic design automation conferences
Asia and South Pacific Design Automation Conference
Technology
181
36,448,428
https://en.wikipedia.org/wiki/Date%20windowing
Date windowing is a method by which dates with two-digit years are converted to and from dates with four-digit years. The year at which the century changes is called the pivot year of the date window. Date windowing was one of several techniques used to resolve the year 2000 problem in legacy computer systems. Reasoning For organizations and institutions with data that is only decades old, a "date windowing" solution was considered easier and more economical than the massive conversions and testing required when converting two-digit years into four-digit years. Windowing methods There are three primary methods used to determine the date window: Fixed pivot year: simplest to code, works for most business dates. Sliding pivot year: determined by subtracting some constant from the current year, typically used for birth dates. Closest date: Three different interpretations (last century, this century, and next century) are compared to the current date, and the closest date is chosen from the three. FOCUS Information Builders's FOCUS "Century Aware" implementation allowed the user to focus on field-specific and file-specific settings. This flexibility gave the best of all three major mechanisms: A school could have file RecentDonors set a field named BirthDate to use DEFCENT=19 YRTHRESH=31, covering those born 1931-2030. Those born 2031 are not likely to be donating before 2049, by which time those born 1931 would be 118 years old, and unlikely current donors. DEFCENT and YRTHRESH for a file containing present students and recent graduates would use different values. Examples Below is a typical example of COBOL code that establishes a fixed date window, used to figure the century for ordinary business dates. IF RECEIPT-DATE-YEAR >= 60 MOVE 19 TO RECEIPT-DATE-CENTURY ELSE MOVE 20 TO RECEIPT-DATE-CENTURY END-IF. The above code establishes a fixed date window of 1960 through 2059. It assumes that none of the receipt dates are before 1960, and should work until January 1, 2060. Some systems have environment variables that set the fixed pivot year for the system. Any year after the pivot year will belong to this century (the 21st century), and any year before or equal to the pivot year will belong to last century (the 20th century). Some products, such as Microsoft Excel 95 used a window of years 1920–2019 which had the potential to encounter a windowing bug reoccurring only 20 years after the year 2000 problem had been addressed. The IBM i operating system uses a window of 1940-2039 for date formats with a two-digit year. In the 7.5 release of the operating system, an option was added to use a window of 1970-2069 instead. See also Serial number arithmetic, a form of windowing for sequential counters References Units of time
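To make the windowing rules described above concrete, here is a minimal Python sketch of the three methods (the function names and any pivot or window values other than the COBOL example's 1960–2059 window are illustrative assumptions, not part of any standard):

def expand_fixed(yy, pivot=60):
    # Fixed pivot: two-digit years >= pivot map to 19xx, earlier years to 20xx,
    # mirroring the COBOL example's 1960-2059 window.
    return (1900 if yy >= pivot else 2000) + yy

def expand_sliding(yy, current_year, years_back=80):
    # Sliding pivot: the window covers `years_back` years before the current
    # year and the remainder of the century after it (often used for birth dates).
    window_start = current_year - years_back
    candidate = window_start - window_start % 100 + yy
    return candidate + 100 if candidate < window_start else candidate

def expand_closest(yy, current_year):
    # Closest date: compare last-, this- and next-century readings and keep
    # the one nearest the current year.
    base = current_year - current_year % 100
    return min((base - 100 + yy, base + yy, base + 100 + yy),
               key=lambda y: abs(y - current_year))

print(expand_fixed(59), expand_fixed(60))                   # 2059 1960
print(expand_sliding(30, 2025), expand_sliding(50, 2025))   # 2030 1950
print(expand_closest(99, 2025), expand_closest(10, 2025))   # 1999 2010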
Date windowing
Physics,Mathematics
579
58,623,400
https://en.wikipedia.org/wiki/Aspergillus%20pachycristatus
Aspergillus pachycristatus is a species of fungus in the genus Aspergillus. It is from the Nidulantes section. The species was first described in 2012. It has been isolated from soil in Xinjiang in China. It has been reported to produce echinocandins. Growth and morphology A. pachycristatus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below. References miraensis Fungi described in 2012 Fungus species
Aspergillus pachycristatus
Biology
132
3,482,688
https://en.wikipedia.org/wiki/Tetramethrin
Tetramethrin is a potent synthetic insecticide in the pyrethroid family. It is a white crystalline solid with a melting point of 65–80 °C. The commercial product is a mixture of stereoisomers. It is commonly used as an insecticide and affects the insect's nervous system. It is found in many household insecticide products. Tetramethrin has an expected half-life of 12.5–14 days in soil and 13–25 days in water. Tetramethrin was classified as a Category 2 carcinogen in 2018 by the Directorate-General for the Environment of the European Commission. References External links Pyrethrins and Pyrethroids Fact Sheet - National Pesticide Information Center Pyrethrins and Pyrethroids Pesticide Information Profile - Extension Toxicology Network Maleimides Chrysanthemate esters Household chemicals Isoindoles
Tetramethrin
Chemistry
187
77,930,248
https://en.wikipedia.org/wiki/Body%20roundness%20index
Body roundness index (BRI) is a calculated geometric index used to quantify an aspect of a person's individual body shape. Based on the principle of body eccentricity, it provides a rapid visual and anthropometric tool for health evaluation. Introduced in 2013, the BRI calculation can be used to estimate total and visceral body fat. Ranges of healthy body roundness have been established to accurately classify people with healthy fat mass (weight) compared to obese people who are at risk for morbidities. Compared to traditional metrics, such as the body mass index (BMI), which uses weight and height, BRI may improve predictions of the amount of body fat and the volume of visceral adipose tissue. Despite its common use, BMI can misclassify individuals as obese because it does not distinguish between a person's lean body mass and fat mass. Instead, BRI quantifies body girth as well as height, potentially providing more accurate estimates of fat mass. BRI scores range from 1 to 16, with most people between 1 and 10, although people with scores of 6.9 and up – indicating wider, rounder bodies – were found to have a risk of all-cause mortality that was increased by up to 49% compared to people having a medium BRI of 5. In a 2020 review, high BRI was associated with increased risk of metabolic syndrome and several other diseases. Typical American adult BRI values range from 3 or less (midsection leanness) to 7 or more (midsection roundness), with a medium index of about 5. As a relatively newer predictive metric, BRI has a smaller research record compared to long-established indices like the BMI and waist-to-hip ratio, so its accuracy and applications remain to be fully established. Conversely, the simple waist-to-height ratio (which uses the same measurements and is simpler to calculate) has a better research base, leading to its adoption as the preferred guideline in some countries. History BRI was first reported in 2013 by the mathematician Diana Thomas and colleagues in an analysis of three databases from studies of demographics, anthropometrics, fat mass, and visceral fat volume. Thomas visualized the human body shape as an egg or ellipse rather than as the cylinder model that is envisioned in the concept of the BMI. The degree of circularity of an ellipse is quantified by eccentricity, with values between 0 and 1, where 0 is a perfect circle (waist circumference same as height) and 1 is a vertical line. To accommodate human shape data in a greater range, Thomas and colleagues mapped eccentricity in a range of 1 to 20 by using the equation: Body Roundness Index = 364.2 − 365.5 × Eccentricity Range of body roundness Body roundness shapes vary across a range of people who are lean (BRI less than 3) to severely obese (BRI more than 12). According to the authors who developed BRI and subsequent research, overlap between adjacent BRI categories may occur. Relationship to other anthropometric indices Using human body and fat mass data from the United States National Health and Nutrition Examination Survey (NHANES) database, the Thomas group found that BRI was never a negative value, and that larger BRI values were associated with people having a round shape, while shape values closer to 1 were related to people with narrow, lean bodies. The maximum observed BRI value in the NHANES data was 16. BRI had similar accuracy in predicting percentage body fat and percentage fat volume as existing indices, such as the BMI.
As the conventional index associated with obesity research, the BMI has numerous drawbacks, as it is unable to distinguish between muscle and fat, is inaccurate in predicting body fat percentage, and has poor ability to predict the risk of heart attack, stroke or death. In a comparison study with BMI and five other metrics – a body shape index, conicity index, body adiposity index, waist–hip ratio, and abdominal volume index (AVI) – BRI and AVI proved most effective at predicting risk of developing nonalcoholic fatty liver disease (NAFLD). BRI and AVI also accurately stratified diagnosis of NAFLD by race, age, and gender. Clinical research The BRI has proved effective as an index for identifying risk of death from different diseases, disorders of metabolic syndrome, liver disease, cardiovascular diseases in association with sarcopenia, and bone mineral density. BRI was also a better indicator than the BMI and body shape index for predicting the risk of hypertension, dyslipidemia, and hyperuricemia in Chinese women. Limitations Other indices of body and fat mass, such as BMI and waist-to-height ratio, have undergone more research evaluation and longitudinal clinical applications than BRI, and may be better predictors of fat distribution (e.g., visceral vs. subcutaneous fat) for estimating health risks. Two measurements used with the BRI, waist circumference and hip circumference, are subject to high variability in standing obese people. Such variability may indicate differences in fat distribution in people with excessive visceral fat, causing errors in BRI. Diagnostic factors for diseases associated with obesity, such as ethnicity, family history, dietary habits, and physical activity, are not factored into the BRI, nor are other outcomes, such as organ health status and duration of disease. Calculation The BRI models the human body shape as an ellipse (an oval), with the intent to relate body girth with height to determine body roundness. A simple tape measure suffices to obtain waist circumference and height. Waist circumference and height can be in any unit of length, as long as they both use the same one. BRI is calculated as BRI = 364.2 − 365.5 × sqrt(1 − (WC / (π × H))^2), where WC is the waist circumference and H is the height, which can be broken down into three steps: compute the waist-to-height ratio, WHtR = WC / H; compute the eccentricity (e) of the vertical ellipse around the body, e = sqrt(1 − (WHtR / π)^2); compute BRI = 364.2 − 365.5 × e. Predictions of % total body fat and % visceral adipose tissue apply a different eccentricity equation using waist and hip circumferences, age, height, gender, ethnicity, and body weight as inputs. See also References Anthropometry Body shape Classification of obesity Human body weight Human height Mathematics in medicine Medical signs Ratios
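As an illustration of the three calculation steps above, the following Python sketch (a minimal example, not a clinical tool; the function name and the sample measurements are chosen only for illustration) computes BRI from waist circumference and height given in the same unit.

import math

def body_roundness_index(waist, height):
    # Waist circumference and height must use the same length unit.
    whtr = waist / height                             # step 1: waist-to-height ratio
    ecc = math.sqrt(1 - (whtr / math.pi) ** 2)        # step 2: eccentricity of the body ellipse
    return 364.2 - 365.5 * ecc                        # step 3: BRI

# Example: waist 90 cm, height 175 cm gives a BRI of roughly 3.6.
print(round(body_roundness_index(90, 175), 2))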
Body roundness index
Mathematics
1,356
56,542,129
https://en.wikipedia.org/wiki/Hydraulis%20of%20Dion
The Hydraulis of Dion () is a unique exhibit of the Archaeological Museum of Dion. It is the earliest archaeological example of a pipe organ found to date. Excavation history At the beginning of the 1980s, the area east of the main road of ancient Dion was drained; the neighboring river had permanently flooded parts of the archaeological site. In this area, east of the main road, excavations were carried out in the summer of 1992 under the direction of Dimitrios Pandermalis. Opposite the villa of Dionysus, the foundations of a building were uncovered. On the morning of August 19, 1992, archaeologists found pieces of small copper tubes, as well as a larger, rectangular copper plate. The individual finds were partially held together by the compacted soil. Once the significance of the find was recognized, the surrounding earth was removed over a wide area and sent to the workshops for further processing. After the items were cleaned, they were recognized as a musical instrument, a hydraulis. The find was dated to the 1st century BC. The instrument The pipe organ is considered the oldest keyboard instrument in the world; it was invented in the 3rd century BC by the engineer Ctesibius in Alexandria. The instrument is 120 cm high and 70 cm wide. The organ pipes are arranged in two stops and consist of 24 pipes with a diameter of 18 mm and 16 narrow pipes of about 10 mm diameter. They were decorated with silver rings. The body of the organ was decorated with silver stripes and multicolored, rectangular glass ornaments. Valves were opened from the keyboard, and the air flowing through the organ pipes generated the sound. The instrument is structurally classified between the water organ described by Hero of Alexandria and that described by Vitruvius. Spreading as a musical instrument After its invention in Egyptian Alexandria, the organ arrived in Greece in the Hellenistic period. After the Roman conquest of Greece, it spread throughout the Roman Empire, where it was used for musical accompaniment at competitions in the arenas and played by the wealthy as a domestic musical instrument. At the Byzantine court the hydraulis was a prestigious object. In 757, Constantine V sent an organ as a gift to the Frankish king Pippin the Short. In this way it reached Central Europe, where it was adopted by the Catholic Church and eventually developed into the church organ. Replica of the hydraulis of Dion With the support of the Greek Ministry of Culture and Sport and the help of Professor Pandermalis, a reconstruction of a water organ was begun at the European Cultural Center of Delphi in 1995. The builders kept to ancient accounts and to the original excavated at Dion. The instrument was completed in 1999. Literature Dimitrios Pandermalis: Η Ύδραυλις του Δίου. In: Ministry for culture, Ministry for Macedonia and Thrace, Aristotle-University Thessaloniki: Το Αρχαιολογικό Έργο στη Μακεδονία και Θράκη. Volume 6, 1992, Thessaloniki 1995, pages 217–222. (Greek language) Dimitrios Pandermalis: Dion. The archaeological site and the museum. Athens 1997. Hellenic Republic, Ministry of culture and sports, Onassis Foundation USA: Gods and Mortals at Olympus. S. 26, Edited by Dimitrios Pandermalis, . Free Travel Guide about the Olympus region Title: Mount Olympus - Ancient Sites, Museums, Monasteries and Churches References External links Ministry of Culture and Sports Archaeological Museum of Dion The ancient Hydraulis Hydraulis 24/40 - Órgano de Dión - Prof.
Manuel Lafarga Professor Pantermalis talks about the Hydraulis of Dion (Dion, 10 August 2018, in Greek language) Ancient Greek musical instruments Ancient Roman musical instruments Greek inventions Hellenistic engineering Organs (music) Pipe organ Ancient inventions Water Archaeological discoveries in Macedonia (Greece) 1992 archaeological discoveries
Hydraulis of Dion
Environmental_science
792
24,312,430
https://en.wikipedia.org/wiki/Sheath%20current%20filter
Sheath current filters are electronic components that can prevent noise signals travelling in the sheath of sheathed cables, which can cause interference. Using sheath current filters, ground loops causing mains hum and high frequency common-mode signals can be prevented. Depending on the type, sheath current filters can remove or ameliorate hum in audio equipment, scanning frequencies in AV equipment and unwanted common-mode signals in coaxial cables. Types There are various types of sheath current filter. Different types have different characteristics and are used to combat different forms of sheath current. Isolation transformer Isolation transformers are transformers for low frequency analog and digital audio connections or, more rarely, for high frequencies in antenna cables between TV outlets and devices (tuner, VCR, TV, etc.). This filter suppresses low-frequency ground loop currents on the sheath and core of coaxial cables, which can result from multiple grounds at different potentials. They affect the signal because of their upper and lower frequency limits and therefore cannot transmit DC. In addition, analog signals can suffer from nonlinear distortion, especially near the frequency limits of the device. Capacitive coupler The propagation of (low-frequency) ripple current through antenna cables may be prevented by capacitive coupling of the two conductors. Such elements are available as adapters called braid-breakers or ground breakers and have, in both the signal and ground connection, coupling capacitors (with a capacitance of approximately 1 nF). They are generally only capable of passing frequencies greater than approximately 50 MHz, so ripple current cannot flow. Capacitive coupling adapters have an upper limit frequency of around 1 GHz, so UHF signals can pass through. A passband of approximately 50 MHz to 1 GHz makes the devices useful for analog and digital television reception, and broadcast FM radio reception. Such ground breakers cannot be used in commercial satellite receivers, since low-frequency control signals and the supply voltage for the low-noise block converter have to be transferred. Ferrite chokes Ferrite sheath current filters consist of a ferrite sleeve around the line or cable bundle. These are common mode chokes, damping high-frequency common-mode noise on cables. They block high-frequency common-mode currents above about 50 MHz and do not affect the signal or the ground connection in terms of their low-frequency properties or protective function. Ferrite sheath current filters cannot effectively attenuate ground loop noise. Cables for connection of computer peripherals often have a ferrite bead. To increase the inductance, the cable can also be passed repeatedly through a ferrite core. Ferrite sheath current filters can only work effectively if a common-mode signal can flow on a line. This is generally the case when a cable bundle or a coaxial cable has a ground connection at both ends to the grounded equipment. For a cable bundle between two devices that is grounded at only one device, a ferrite sheath current filter is generally not effective. With such an arrangement, a ferrite bead would only be effective to reduce sheath current standing waves. When used to eliminate standing waves, the ferrite sheath current filter must be placed at a current antinode, not at a standing wave node. Ferrite beads are available for different frequency ranges and power capacities.
Application Transformer sheath current filters are used in low-frequency signal lines where a ground loop cannot otherwise be prevented. They provide galvanic isolation. Capacitive coupling filters can be used to prevent hum loops in antenna and radio frequency cables; they also provide galvanic separation. Ferrite sheath current filters are used for noise suppression, combating noise such as radio frequency interference. They provide no electrical isolation and cannot prevent ground loops. See also Sheath current External links How to build a homemade sheath current filter. (German) Filters
Sheath current filter
Chemistry,Engineering
789
8,504,807
https://en.wikipedia.org/wiki/Theta%20Columbae
Theta Columbae, also named Elkurud, is a solitary star in the southern constellation of Columba. It is faintly visible to the naked eye, having an apparent visual magnitude of 5.02. Based upon parallax measurements taken during the Hipparcos mission, it is roughly distant from the Sun. At its present distance, the visual magnitude of the star is reduced by an interstellar extinction factor of 0.11. It is currently moving away from the Sun with a radial velocity of 45.3 km/s. The star made its closest approach about 4.7 million years ago when it underwent perihelion passage at a distance of . This is an evolving B-type subgiant star with a stellar classification of B8 IV, having recently left the main sequence. It is spinning rapidly with a projected rotational velocity of 249 km/s. The star has an estimated four times the mass of the Sun. It radiates 472 times the solar luminosity from its outer atmosphere at an effective temperature of 9,916 K. Nomenclature θ Columbae, Latinised to Theta Columbae, is the star's Bayer designation. Early Arab poets referred to a number of anonymous stars as الفرود al-furūd, "the solitary ones". Later Arabian astronomers attempted to identify this name with particular stars, principally in the modern constellations Centaurus and Columba. Allen (1899) noted the accepted etymology but suggested that al-furūd might have been an old transcriber's error for القرود al-qurūd "the apes", which he rendered "Al Ḳurūd", though this suggestion has not received scholarly support. In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN approved the name Elkurud for this star on 1 June 2018, and it is now so included in the List of IAU-approved Star Names. (The historical form Furud was chosen for Zeta Canis Majoris.) In Chinese, (), meaning Grandson, refers to an asterism consisting of Theta Columbae and Kappa Columbae. Consequently, Theta Columbae itself is known as (, ). References B-type subgiants Columba (constellation) Columbae, Theta Durchmusterung objects 042167 029034 02177
Theta Columbae
Astronomy
508
46,420,664
https://en.wikipedia.org/wiki/Chandrashekhar%20Joshi
Chandrashekhar Janardan Joshi (born July 22, 1953, at Wai, Maharashtra, India) is an Indian–American experimental plasma physicist. He is known for his pioneering work in plasma-based particle acceleration techniques, for which he won the 2006 James Clerk Maxwell Prize for Plasma Physics and the 2023 Hannes Alfvén Prize (with Pisin Chen and James Rosenzweig). Joshi was elected a member of the National Academy of Engineering in 2014 for contributions to the development of laser- and beam-driven plasma accelerators. He is currently Distinguished Professor of Electrical Engineering, the director of the Center for High Frequency Electronics and the head of the Neptune Laboratory for Advanced Accelerator Research at UCLA. Early life and education Joshi had his primary education at Dravid High school, Wai. While in 9th grade, he was selected by the 'Pestalozzi Children's village Trust' in England and went to England for his further studies. He received his B.Sc. (1974) in nuclear engineering from the University of London and Ph.D. (1978) in applied physics from the University of Hull, both in the United Kingdom. Following a two-year stint as a research associate at the National Research Council of Canada, where he worked on laser-plasma interactions, he joined UCLA, first as a researcher, and has been a faculty member since 1988. Scientific contributions At UCLA, Joshi has built a strong research group that has done pioneering work in the areas of laser-plasma instabilities, plasma-based light sources, laser fusion and basic plasma experiments. Joshi has made many fundamental contributions to the understanding of extremely nonlinear optical effects in plasmas. Most notable are his first experimental demonstrations of four-wave mixing, the stimulated Raman forward instability, resonant self-focusing, frequency upshifting by ionization fronts and nonlinear coupling between electron-plasma waves. His group is best known, however, for developing the field of plasma-based particle accelerators over the past three decades. Honors and awards Joshi is a Fellow of the APS, IEEE and UK Institute of Physics. He is also the recipient of the 1996 John Dawson Award for Excellence in Plasma Physics Research (jointly awarded with Christopher E. Clayton) as well as the 1997 USPAS Prize for Achievement in Accelerator Physics and Technology. He was the APS Centennial Speaker (1999) and a Distinguished Lecturer in Plasma Physics (2001). He was elected to the National Academy of Engineering in 2014. Citations John Dawson Award for Excellence in Plasma Physics Research (1996): "For their pioneering experiments in Plasma Based Accelerator Concepts; particularly for their unambiguous experimental demonstration that electrons can be accelerated to relativistic energies by the beating of two laser beams in a plasma with their frequency difference equal to the plasma frequency." USPAS Prize for Achievement in Accelerator Physics and Technology (1997): "For pioneering experiments on high gradient, laser-driven, plasma beat-wave acceleration." James Clerk Maxwell Prize for Plasma Physics (2006): "For his insight and leadership in applying plasma concepts to high energy electron and positron acceleration, and for his creative exploration of related aspects of plasma physics."
Hannes Alfvén Prize (2023, with Pisin Chen and James Rosenzweig): "For proposing, demonstrating and conducting impressive ground-breaking experiments on plasma wakefield accelerators driven by particle beams, thus firmly establishing the new concept of plasma acceleration and their applications in the scientific community." References Publications 1953 births Living people Alumni of the University of London Fellows of the American Physical Society Plasma physicists Indian physicists American people of Indian descent Fellows of the IEEE Fellows of the Institute of Physics Alumni of the University of Hull American plasma physicists Naturalized citizens of the United States
Chandrashekhar Joshi
Physics
742
35,292,139
https://en.wikipedia.org/wiki/BareMetal
BareMetal is an exokernel-based single address space operating system (OS) created by Return Infinity. It is written in assembly to achieve high-performance computing with minimal footprint with a "just enough operating system" (JeOS) approach. The operating system is primarily targeted towards virtualized environments for cloud computing, or HPCs due to its design as a lightweight kernel (LWK). It could be used as a unikernel. It was inspired by another OS written in assembly, MikeOS, and it is a recent example of an operating system that is not written in C or C++, nor based on Unix-like kernels. Overview Hardware requirements AMD/Intel based 64-bit computer Memory: 4 MB (plus 2 MB for every additional core) Hard Disk: 32 MB One task per core Multitasking on BareMetal is unusual for modern operating systems. BareMetal uses an internal work queue that all CPU cores poll. A task added to the work queue will be processed by any available CPU core in the system and will execute until completion, which results in no context switch overhead. Programming API An API is documented but, in line with its philosophy, the OS does not enforce entry points for system calls (e.g.: no call gates or other safety mechanisms). C BareMetal OS has a build script to pull the latest code, make the needed changes, and then compile C code using the Newlib C standard library. C++ A mostly-complete C++11 Standard Library was designed and developed for working in ring 0. The main goal of such library is providing, on a library level, an alternative to hardware memory protection used in classical OSes, with help of carefully designed classes. Rust A Rust program demonstration was added to the programs in November 2014, demonstrating the ability to write Rust programs for BareMetal OS. Networking TCP/IP stack A TCP/IP stack was the #1 feature request. A port of lwIP written in C was announced in October 2014. minIP, a minimalist IP stack in ANSI C able to provide enough functionalities to serve a simple static webpage, is being developed as a proof of concept to learn the fundamentals in preparation for an x86-64 assembly re-write planned for the future. References External links BareMetal OS Google Group discussion forum Free software operating systems Hobbyist operating systems Microkernels Software using the BSD license Assembly language software X86-64 operating systems
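The run-to-completion scheduling described above can be illustrated with a small conceptual model. The following Python sketch is purely illustrative: BareMetal itself is written in assembly, and this code does not reflect its internal implementation or API. It simply shows several simulated "cores" polling one shared work queue and running each task to completion, so no context switches occur.

import queue
import threading

work_queue = queue.Queue()

def core(core_id):
    # Each simulated core polls the shared queue and runs a task to completion.
    while True:
        task = work_queue.get()
        if task is None:              # sentinel: no more work for this core
            break
        task(core_id)                 # run to completion, no preemption

def example_task(n):
    def run(core_id):
        print(f"core {core_id} finished task {n}")
    return run

cores = [threading.Thread(target=core, args=(i,)) for i in range(4)]
for t in cores:
    t.start()
for n in range(8):
    work_queue.put(example_task(n))
for _ in cores:
    work_queue.put(None)
for t in cores:
    t.join()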
BareMetal
Technology
516
67,112,408
https://en.wikipedia.org/wiki/Empowerment%20%28artificial%20intelligence%29
Empowerment in the field of artificial intelligence formalises and quantifies (via information theory) the potential an agent perceives that it has to influence its environment. An agent which follows an empowerment maximising policy, acts to maximise future options (typically up to some limited horizon). Empowerment can be used as a (pseudo) utility function that depends only on information gathered from the local environment to guide action, rather than seeking an externally imposed goal, thus is a form of intrinsic motivation. The empowerment formalism depends on a probabilistic model commonly used in artificial intelligence. An autonomous agent operates in the world by taking in sensory information and acting to change its state, or that of the environment, in a cycle of perceiving and acting known as the perception-action loop. Agent state and actions are modelled by random variables () and time (). The choice of action depends on the current state, and the future state depends on the choice of action, thus the perception-action loop unrolled in time forms a causal bayesian network. Definition Empowerment () is defined as the channel capacity () of the actuation channel of the agent, and is formalised as the maximal possible information flow between the actions of the agent and the effect of those actions some time later. Empowerment can be thought of as the future potential of the agent to affect its environment, as measured by its sensors. In a discrete time model, Empowerment can be computed for a given number of cycles into the future, which is referred to in the literature as 'n-step' empowerment. The unit of empowerment depends on the logarithm base. Base 2 is commonly used in which case the unit is bits. Contextual Empowerment In general the choice of action (action distribution) that maximises empowerment varies from state to state. Knowing the empowerment of an agent in a specific state is useful, for example to construct an empowerment maximising policy. State-specific empowerment can be found using the more general formalism for 'contextual empowerment'. is a random variable describing the context (e.g. state). Application Empowerment maximisation can be used as a pseudo-utility function to enable agents to exhibit intelligent behaviour without requiring the definition of external goals, for example balancing a pole in a cart-pole balancing scenario where no indication of the task is provided to the agent. Empowerment has been applied in studies of collective behaviour and in continuous domains. As is the case with Bayesian methods in general, computation of empowerment becomes computationally expensive as the number of actions and time horizon extends, but approaches to improve efficiency have led to usage in real-time control. Empowerment has been used for intrinsically motivated reinforcement learning agents playing video games, and in the control of underwater vehicles. References Artificial intelligence Cognitive science Robotics engineering
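For a small discrete environment with a known transition table, n-step empowerment can be estimated by building the channel from action sequences to resulting states and computing its capacity, for example with the Blahut–Arimoto algorithm. The sketch below is an illustrative Python example only (the function name, the toy transition table and the iteration count are assumptions, not taken from any published implementation); it computes 1-step empowerment in bits for a single state.

import numpy as np

def channel_capacity(p_y_given_x, iters=500):
    # Blahut-Arimoto estimate of the capacity (in bits) of a discrete channel.
    # p_y_given_x[x, y] = probability of reaching successor state y after action x.
    n_x = p_y_given_x.shape[0]
    r = np.full(n_x, 1.0 / n_x)                       # distribution over actions
    for _ in range(iters):
        joint = r[:, None] * p_y_given_x              # p(x, y)
        post = joint / (joint.sum(axis=0, keepdims=True) + 1e-30)   # p(x | y)
        log_c = (p_y_given_x * np.log(post + 1e-30)).sum(axis=1)
        r = np.exp(log_c)
        r /= r.sum()
    q_y = r @ p_y_given_x                             # marginal over successor states
    cap = 0.0
    for x in range(n_x):
        cap += r[x] * (p_y_given_x[x] *
                       (np.log2(p_y_given_x[x] + 1e-30) - np.log2(q_y + 1e-30))).sum()
    return cap

# Toy actuation channel: actions 0 and 1 lead reliably to distinct states,
# action 2 is unreliable; 1-step empowerment is about 1 bit.
p = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
print(channel_capacity(p))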
Empowerment (artificial intelligence)
Technology,Engineering
566
42,128,108
https://en.wikipedia.org/wiki/Surgical%20stress
Surgical stress is the systemic response to surgical injury and is characterized by activation of the sympathetic nervous system, endocrine responses as well as immunological and haematological changes. Measurement of surgical stress is used in anaesthesia, physiology and surgery. Analysis of the surgical stress response can be used for evaluation of surgical techniques and comparisons of different anaesthetic protocols. Moreover, they can be performed both in the intraoperative or postoperative period. If there is a choice between different techniques for a surgical procedure, one method to evaluate and compare the surgical techniques is to subject one group of patients to one technique, and the other group of patients to another technique, after which the surgical stress responses triggered by the procedures are compared. Absent any other difference, the technique with the least surgical stress response is considered the best for the patient. Similarly, a group of patients can be subjected to a surgical procedure where one anaesthetic protocol is used, and another group of patients are subjected to the same surgical procedure but with a different anaesthetic protocol. The anaesthetic protocol that yields the least stress response is considered the most suitable for that surgical procedure. It is generally considered or hypothesized that a more invasive surgery, with extensive tissue trauma and noxious stimuli, triggers a more significant stress response. However, duration of surgery may affect the stress response which therefore may make comparisons of procedures that differ in time difficult. Methods Examples of used parameters are blood pressure, heart rate, heart rate variability, photoplethysmography and skin conductance. Essentially, physiologic parameters are measured in order to assess sympathetic tone as a surrogate measure of stress. Intraoperative neurophysiological monitoring can also be used. Examples of commonly used biomarkers are adrenaline, cortisol, interleukins, noradrenaline and vasopressin. History Loss of nitrogen (urea) was observed already in the 1930s in fracture patients by the Scottish physician David Cuthbertson. The reason for the patients' catabolic response was not understood at the time, but later attention was turned to the stress reaction caused by the surgery. The evolutionary background is believed to be that a wounded animal increases its chance of survival by using stored energy reserves. The stress reaction thus initiates a catabolic state by an increased release of catabolic hormones. Additionally immunosuppressive hormones are also released. In a surgery patient, the stress reaction is considered detrimental for wound healing. However, surgical stress reduced mortality from endotoxin shock. Today, development of new surgical techniques and anaesthetic protocols aim to minimise the surgical stress reaction. References Stress (biology) Anesthesia
Surgical stress
Biology
548
60,444,111
https://en.wikipedia.org/wiki/Fracture%20of%20soft%20materials
The fracture of soft materials involves large deformations and crack blunting before propagation of the crack can occur. Consequently, the stress field close to the crack tip is significantly different from the traditional formulation encountered in linear elastic fracture mechanics. Therefore, fracture analysis for these applications requires special attention. Linear elastic fracture mechanics (LEFM) and the K-field (see Fracture Mechanics) are based on the assumption of infinitesimal deformation, and as a result are not suitable for describing the fracture of soft materials. However, the general LEFM approach can be applied to understand the basics of fracture in soft materials. The solution for the deformation and crack stress field in soft materials considers large deformation and is derived from the finite strain elastostatics framework and hyperelastic material models. Soft materials (soft matter) are a class of materials that includes, for example, soft biological tissues as well as synthetic elastomers, and that is very sensitive to thermal variations. Hence, soft materials can become highly deformed before crack propagation. Hyperelastic material models Hyperelastic material models are utilized to obtain the stress–strain relationship through a strain energy density function. Relevant models for deriving stress-strain relations for soft materials are the Mooney-Rivlin solid, Neo-Hookean, exponentially hardening material and Gent hyperelastic models. On this page, the results will be primarily derived from the Neo-Hookean model. Generalized neo-Hookean (GNH) The Neo-Hookean model is generalized to account for the hardening factor: where b>0 and n>1/2 are material parameters, and I1 is the first invariant of the Cauchy-Green deformation tensor: I1 = λ1^2 + λ2^2 + λ3^2, where λ1, λ2, λ3 are the principal stretches. Specific Neo-Hookean model Setting n=1, the specific stress-strain function for the neo-Hookean model is derived: . Finite strain crack tip solutions (under large deformation) Since LEFM is no longer applicable, alternative methods are adapted to capture large deformations in the calculation of stress and deformation fields. In this context the method of asymptotic analysis is of relevance. Method of asymptotic analysis The method of asymptotic analysis consists of analyzing the crack tip asymptotically to find a series expansion of the deformed coordinates capable of characterizing the solution near the crack tip. The analysis is reducible to a nonlinear eigenvalue problem. The problem is formulated based on a crack in an infinite solid, loaded at infinity with uniform uni-axial tension under conditions of plane strain (see Fig.1). As the crack deforms and progresses, the coordinates in the current configuration are represented by and in a Cartesian basis and and in a polar basis. The coordinates and are functions of the undeformed coordinates () and near the crack tip, as r→0, can be specified as: where , are unknown exponents, and , are unknown functions describing the angular variation. In order to obtain the eigenvalues, the equation above is substituted into the constitutive model, which yields the corresponding nominal stress components. Then, the stresses are substituted into the equilibrium equations (the same formulation as in LEFM theory) and the boundary conditions are applied. The most dominant terms are retained, resulting in an eigenvalue problem for and .
Deformation and stress field in a plane strain crack For the case of a homogeneous neo-Hookean solid (n=1) under Mode I conditions the deformed coordinates for a plane strain configuration are given by where a and are unknown positive amplitudes that depend on the applied loading and specimen geometry. The leading terms for the nominal stress (or first Piola–Kirchhoff stress, denoted by on this page) are: Thus, and are bounded at the crack tip and and have the same singularity. The leading terms for the true stress (or Cauchy stress, denoted by on this page), The only true stress component completely defined by a is . It also presents the most severe singularity. With that, it is clear that the singularity differs depending on whether the stress is given in the current or reference configuration. Additionally, in LEFM, the true stress field under Mode I has a singularity of , which is weaker than the singularity in . While in LEFM the near tip displacement field depends only on the Mode I stress intensity factor, it is shown here that for large deformations, the displacement depends on two parameters (a and for a plane strain condition). Deformation and stress field in a plane stress crack The crack tip deformation field for a Mode I configuration in a homogeneous neo-Hookean solid (n=1) is given by where a and c are positive independent amplitudes determined by far field boundary conditions. The dominant terms of the nominal stress are And the true stress components are Analogously, the displacement depends on two parameters (a and c for a plane stress condition) and the singularity is stronger in the term. The distribution of the true stress in the deformed coordinates (as shown in Fig. 1B) can be relevant when analyzing crack propagation and the blunting phenomenon. Additionally, it is useful when verifying experimental results of the deformation of the crack. J-integral The J-integral represents the energy that flows to the crack; hence, it is used to calculate the energy release rate, G. Additionally, it can be used as a fracture criterion. This integral is found to be path independent as long as the material is elastic and no damage to the microstructure occurs. Evaluating J on a circular path in the reference configuration yields for plane strain Mode I, where a is the amplitude of the leading order term of and A and n are material parameters from the strain-energy function. For plane stress Mode I in a neo-Hookean material J is given by where b and n are material parameters of GNH solids. For the specific case of a neo-Hookean model, where n=1, b=1 and , the J-integral for plane stress and plane strain in Mode I are the same: J-integral in the pure-shear experiment The J-integral can be determined by experiments. One common experiment is pure shear of an infinitely long strip, as shown in Fig. 2. The upper and lower edges are clamped by grips and the loading is applied by pulling the grips vertically apart by ± ∆. This setup generates a condition of plane stress. Under these conditions, the J-integral is evaluated, therefore, as where and is the height of the strip in the undeformed state. The function is determined by measuring the nominal stress acting on the strip stretched by : Therefore, from the imposed displacement of each grip, ± ∆, it is possible to determine the J-integral for the corresponding nominal stress. With the J-integral, the amplitude (parameter a) of some true stress components can be found.
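As a numerical illustration of the pure-shear evaluation of J described above, the following Python sketch integrates the nominal stress over the imposed stretch to obtain the strain energy density and multiplies by the undeformed strip height. It is a minimal example under the assumption of an incompressible neo-Hookean strip; the shear modulus, strip height and applied stretch are arbitrary illustrative values, and in a real experiment the stress-stretch curve would be measured rather than computed from a model.

import numpy as np

mu = 0.5e6          # shear modulus [Pa], illustrative value
h0 = 0.01           # undeformed strip height [m], illustrative value
lam_applied = 2.0   # imposed stretch of the strip

# Nominal stress for an incompressible neo-Hookean solid in pure shear,
# s(lam) = mu * (lam - lam**-3); here it stands in for the measured curve.
lams = np.linspace(1.0, lam_applied, 1000)
s = mu * (lams - lams**-3)

# Strain energy density W = integral of s d(lam), then J = h0 * W.
W = np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(lams))
J = h0 * W
print(f"J is approximately {J:.1f} J/m^2")   # about 5625 J/m^2 for these inputs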
The amplitudes of some other stress components, however, depend on other parameters such as c (e.g. under plane stress conditions) and cannot be determined by the pure shear experiment. Nevertheless, the pure shear experiment is very important because it allows the fracture toughness of soft materials to be characterized. Interface cracks To approach the problem of adhesion between soft adhesives and rigid substrates, the asymptotic solution for an interface crack problem between a GNH material and a rigid substrate is specified. The interface crack configuration considered here is shown in Fig.3, where lateral slip is disregarded. For the special neo-Hookean case with n=1, and , the solution for the deformed coordinates is which is equivalent to According to the above equation, the crack on this type of interface is found to open with a parabolic shape. This is confirmed by plotting the normalized coordinates vs for different ratios (see Fig. 4). For the analysis of the interface between two GNH sheets with the same hardening characteristics, refer to the model described by Gaubelle and Knauss. See also Fracture mechanics Soft matter J-integral Neo-Hookean solid Gent (hyperelastic model) Mooney–Rivlin solid Fracture of Biological Materials References Soft matter Fracture mechanics
Fracture of soft materials
Physics,Materials_science,Engineering
1,673
31,858,593
https://en.wikipedia.org/wiki/DC%20distribution%20system%20%28ship%20propulsion%29
The DC distribution system has been proposed as a replacement for the present AC power distribution system for ships with electric propulsion. This concept represents a new way of distributing energy for low-voltage installations on ships. It can be used for any electrical ship application up to 20 megawatts and operates at a nominal voltage of 1000 V DC. The DC distribution system is simply an extension of the multiple DC links that already exist in all propulsion and thruster drives, which usually account for more than 80 percent of the electrical power consumption on electric propulsion vessels. Benefits In addition to boosting efficiency by up to 20 percent, other benefits include space and weight savings of up to 30 percent and flexible placement of electrical equipment. This allows for significantly more cargo space and a more functional vessel layout where the electrical system is designed around the vessel functions and not vice versa. The efficiency improvement is mainly achieved from the system no longer being locked at a specific frequency (usually 60 Hz on ships), even though a 60 Hz power source can also be connected to the grid. This new freedom of being able to control each power source totally independently opens up numerous ways of optimizing fuel consumption. The reduced weight and footprint of the installed electrical equipment will vary depending on the ship type and application. One comparison using the DC distribution system instead of the traditional AC system for a Platform Supply Vessel (PSV) reduced the weight of the electrical system components from to . Another comparison showed fuel savings of 15–30%. On land, the solar panels on several buildings in Sweden are connected via DC to smooth production and consumption, bypassing the AC grid and its inverters. Fuel savings The biggest potential for fuel savings lies in the ease with which energy storage devices, such as batteries or super capacitors, can be added to the system. Energy storage will help the engines level out load variations from the thrusters and other large loads. Operational optimization The DC distribution system allows for new ways of thinking regarding operational optimization. The system is flexible and can combine different energy sources such as engines, turbines, and fuel cells. This means that there is the potential to implement an energy management system that takes into account varying fuel prices and the availability of different fuels. Challenges Because the main AC switchboard with its AC circuit breakers and protection relays is omitted from the new design, a new protection philosophy that fulfills class requirements is needed for selectivity and equipment protection. ABB has proposed a solution for protecting the DC distribution system using a combination of fuses and controlled turn-off semiconductor power devices. Because all energy-producing components have controllable switching devices, the fault current can be blocked much faster than is possible with traditional circuit breakers with associated protection relays. Although this approach offers a faster response during a short circuit, it does not fit well in system independent building philosophies. Safety and selectivity The electrical power requirements of vessels are expanding as systems are expected to support power converters capable of integrating alternative sources and storage systems – including wind and solar power – and battery storage with a range of voltages, frequencies and power levels.
DC links are ideal for this, but cannot be safely deployed without the necessary protection. Proper selection of protective devices (such as a DC breaker switch, high-speed fuse, or a circuit breaker) and their allocation according to distribution protection zones enables system integrators to achieve protection selectivity. The protection device(s) closest to the fault location should isolate the fault before the protection devices at healthy zones are triggered. That is, they operate only on faults within their zone of protection and do not ordinarily sense faults outside that zone. If a fault occurs outside the zone, fault current can flow through, but the protection device(s) will not operate for this through-fault. As a result, the fault location is isolated, enabling the unaffected zones to remain operable. Protection selectivity is achieved once the correct type of device has been chosen and installed at the correct location within the distribution protection levels. Selectivity between two protection devices can be complete (the load-side device provides protection without making the other device trip) or partial (the load-side device provides protection up to a given level of over-current, without making the other device trip). These protection devices come with a certain price tag, but the cost is justified thanks to the mitigation of any potential damage to a critical piece of equipment, or expensive system downtime and losses in production resulting from a fault. Fast fault interruption with solid state technology A solid-state DC breaker switch is able to interrupt the full short-circuit current in microseconds. With such a time constraint, an autonomous switch control system must ensure local fault protection, without the need for external control or fault detection. This technology provides maximum flexibility for onboard DC grids and provides protection against short-circuit currents in any part of the grid. In addition to rapid over-current protection, the breaker should be programmed to open to a time-current profile in case of an overshoot. This enables the overall system to reconfigure the behavior of the DC breaker switch within certain predefined boundaries and according to applied ship rules. The fast opening time of a solid-state breaker limits the fault current considerably and minimizes the negative impact on the load. The current does not reach damaging levels and can be interrupted without forming an arc. Voltage reversal is therefore not required. Safe and redundant closed bus operations Traditional dynamic positioning (DP) systems are often designed for open bus mode, meaning completely separated power systems. A closed bus system is a more complex and tightly integrated system, which is demanding to build, verify and operate safely. Solid state switching technology enables system integrators to design smarter solutions with equivalent safety. It contributes to savings in fuel and maintenance costs and reduces the environmental footprint. It also enables a significant reduction in engine hours. Approval of a closed bus requires validation of the fault tolerance of the connected system, including live short-circuit testing of worst-case failure modes. See also Electric boat Diesel-electric transmission References External links Shipbuilding Electric power distribution
DC distribution system (ship propulsion)
Engineering
1,231
24,397,233
https://en.wikipedia.org/wiki/C15H13NO3
{{DISPLAYTITLE:C15H13NO3}} The molecular formula C15H13NO3 (molar mass: 255.27 g/mol) may refer to: Amfenac, also known as 2-amino-3-benzoylbenzeneacetic acid Dinoxyline Ketorolac Polyfothine Pranoprofen Molecular formulas
C15H13NO3
Physics,Chemistry
81
29,181,673
https://en.wikipedia.org/wiki/MOSCED
MOSCED (short for "modified separation of cohesive energy density" model) is a thermodynamic model for the estimation of limiting activity coefficients (also known as activity coefficients at infinite dilution). From a historical point of view MOSCED can be regarded as an improved modification of the Hansen method and the Hildebrand solubility model, adding higher-order interaction terms such as polarity, induction and separate hydrogen-bonding terms. This allows the prediction of polar and associating compounds, which most solubility parameter models have been found to do poorly. In addition to making quantitative predictions, MOSCED can be used to understand fundamental molecular-level interactions for intuitive solvent selection and formulation. Beyond infinite dilution, MOSCED can be used to parameterize excess Gibbs free energy models such as NRTL, WILSON and Mod-UNIFAC to map out the vapor–liquid equilibria of mixtures. This was demonstrated briefly by Schreiber and Eckert, who used infinite dilution data to parameterize the WILSON equation. The first publication is from 1984 and a major revision of the parameters was done in 2005. This revised version is described here. Basic principle MOSCED uses component-specific parameters describing electronic properties of a compound. These five properties are partly derived from experimental values and partly fitted to experimental data. In addition to the five electronic properties the model uses the molar volume for every component. These parameters are then entered in several equations to obtain the limiting activity coefficient of an infinitely diluted solute in a solvent. These equations have further parameters which have been found empirically. The authors found an average absolute deviation of 10.6% against their database of experimental data. The database contains limiting activity coefficients of binary systems of non-polar, polar and hydrogen-bonding compounds, but no water. As can be seen in the deviation chart, the systems with water deviate significantly. Due to such a large deviation for water as a solute, as seen in the chart, new water parameters were regressed to improve results. All the data for the regression were taken from the Yaws Handbook of Properties for Aqueous System. Using the old water parameters, for water in organics, the Root Mean Square Deviation (RMSD) for ln (γ∞) was found to be around 2.864% and the Average Absolute Error (AAE) for (γ∞) around 3056.2%. That is a significant error which might explain the deviation as seen from the graph. With the new water parameters for water in organics, the RMSD for ln (γ∞) decreased to 0.771% and the AAE for (γ∞) also decreased to 63.2%. The revised water parameters can be found in the table below titled "Revised water". Equations , , with Important note: The value 3.4 in the equation for ξ is different from the value 3.24 in the original publication. The 3.24 has been verified to be a typing error. The activity coefficient of the solute and solvent can be extended to other concentrations by applying the principle of the Margules equation. This gives: where is the volume fraction and the mole fraction of compound i. The activity coefficient of the solvent is calculated with the same equations, but interchanging indices 1 and 2. Model parameters The model uses five component-specific properties to characterize the interaction forces between a solute and its solvent. Some of these properties are derived from other known component properties and some are fitted to experimental data obtained from data banks.
Liquid molar volume The molar liquid volume ν is given in cm³/mol and assumed to be temperature-independent. Dispersion parameter The dispersion parameter λ describes the polarizability of a molecule. Polarity parameter The polarity parameter τ describes the fixed dipole of a molecule. Induction parameter The induction parameter q describes the effects of induced dipoles (induced by fixed dipoles). For structures with an aromatic ring the value is set to 0.9; for aliphatic rings and chains this value is set to 1. For some compounds the q-parameter is optimized between 0.9 and 1 (e.g. hexene, octene). Acidity and basicity parameters These parameters describe the effects of hydrogen bonding during solvation and association. Parameter table References Further reading External links Online Calculation of limiting activity coefficients with MOSCED Desktop Application for MOSCED property calculations. https://sites.google.com/view/mosced Thermodynamic models
MOSCED
Physics,Chemistry
924
43,515,506
https://en.wikipedia.org/wiki/Laguerre%20formula
The Laguerre formula (named after Edmond Laguerre) provides the acute angle between two proper real lines, as follows: where: is the principal value of the complex logarithm is the cross-ratio of four collinear points and are the points at infinity of the lines and are the intersections of the absolute conic, having equations , with the line joining and . The expression between vertical bars is a real number. The Laguerre formula can be useful in computer vision, since the absolute conic has an image on the retinal plane which is invariant under camera displacements, and the cross ratio of four collinear points is the same for their images on the retinal plane. Derivation It may be assumed that the lines go through the origin. Any isometry leaves the absolute conic invariant; this allows us to take the x axis as the first line, with the second line lying in the plane z=0. The homogeneous coordinates of the above four points are respectively. Their nonhomogeneous coordinates on the infinity line of the plane z=0 are , , 0, . (Exchanging and changes the cross ratio into its inverse, so the formula for gives the same result.) Now from the formula of the cross ratio we have References O. Faugeras. Three-dimensional computer vision. MIT Press, Cambridge, London, 1999. Equations Geometry in computer vision
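The formula can be checked numerically. The Python sketch below is an illustrative verification only (the function names are hypothetical): it considers lines through the origin in the plane z=0 parameterized by their slopes, uses the circular points of the absolute conic, which on the line at infinity correspond to the parameters +i and −i, and recovers the acute angle as the absolute value of the principal logarithm of the cross-ratio divided by 2i.

import cmath
import math

def cross_ratio(a, b, c, d):
    # Cross-ratio (a, b; c, d) of four points given by complex parameters on a line.
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def laguerre_angle(m1, m2):
    # Acute angle between the lines y = m1*x and y = m2*x (both in the plane z=0).
    # The points at infinity of the lines have parameter m; the circular points
    # of the absolute conic have parameters +i and -i.
    cr = cross_ratio(m1, m2, 1j, -1j)
    return abs(cmath.log(cr).imag) / 2     # |(1/2i) Log(cross-ratio)|, already in [0, pi/2]

print(math.degrees(laguerre_angle(0.0, 1.0)))    # 45.0 degrees
print(math.degrees(laguerre_angle(1.0, -1.0)))   # 90.0 degrees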
Laguerre formula
Mathematics
281
21,708,896
https://en.wikipedia.org/wiki/Still%20engine
The Still engine was a piston engine that simultaneously used both steam power from an external boiler, and internal combustion from gasoline or diesel, in the same unit. The waste heat from the cylinder and internal combustion exhaust was directed to the steam boiler, resulting in claimed fuel savings of up to 10%. History The inventor, William Joseph Still, patented his device in 1917 and on 26 May 1919 in London he and his collaborator Captain Francis Acland (1857–1943, a consulting engineer formerly of the Royal Artillery) announced it at a meeting, chaired by steam turbine inventor Charles Algernon Parsons, at the Royal Society of Arts. Acland described a continuous process by which a double-acting cylinder is powered on one side by internal combustion and on the other by steam from a boiler heated principally by the waste heat from the water jacket and exhaust gases. He explained how the reserve of energy represented by the steam pressure in the boiler provided for any occasional overload which would defeat a standard internal combustion engine of the same power. Independent heating of the boiler was occasionally used, to provide extra power for exceptional conditions, and in the first stage of operation to allow the engine to start itself from steam power alone, even against a load. Still was not the first in this field; a similar system, whereby compressed air (instead of gearing) was to transfer the power from an internal combustion engine and steam recovered from its cooling system was to augment the compressed air, had been patented in 1903 by Captain Paul Lucas-Girardville (a French military aviator) and Louis Mékarski. Development Marine In 1924 Scotts Shipbuilding and Engineering Company of Greenock, Scotland, put a diesel-fuelled marine version, the Scott-Still regenerative engine, into production, with the first pair of engines installed in the twin-screw M. V. Dolius, of the Blue Funnel Line. The trial was successful and in 1928 Blue Funnel commissioned a larger and faster ship, the Eurybates, with this propulsion system. However the requirement to carry marine engineering officers certified with both steam and motor qualifications, meaning extra crew members and wages, and the extra complexity with consequent higher maintenance costs, offset the fuel savings and conventional diesel engines were later installed in their place. Railway In 1926 Kitson and Company, locomotive builders of Leeds, England, produced a steam–diesel hybrid locomotive, the Kitson Still locomotive. This was loaned for trials to the London and North Eastern Railway and used successfully to haul heavy coal trains, but the difference in the cost of coal used by a conventional locomotive, against the fuel oil used by the hybrid, was not great. When Kitson's failed in 1934, a failure to which the development costs of the hybrid locomotive had contributed, the receivers sold the machine for scrap. Decline Developments of larger diesel engines in the 1930s, with improved methods of power transmission, meant that the principal advantages of the Still engine – the ability to provide for direct-drive starts from rest and additional power at times of temporary high load – was lost, and further development ended. References Steam engines Diesel engines Marine propulsion
Still engine
Engineering
628
38,707,224
https://en.wikipedia.org/wiki/Snub%20octaoctagonal%20tiling
In geometry, the snub octaoctagonal tiling is a uniform tiling of the hyperbolic plane. It has Schläfli symbol of sr{8,8}. Images Drawn in chiral pairs, with edges missing between black triangles: Symmetry A higher symmetry coloring can be constructed from [8,4] symmetry as s{8,4}, . In this construction there is only one color of octagon. Related polyhedra and tiling References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) See also Square tiling Tilings of regular polygons List of uniform planar tilings List of regular polytopes External links Hyperbolic and Spherical Tiling Gallery KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings Hyperbolic Planar Tessellations, Don Hatch Hyperbolic tilings Isogonal tilings Snub tilings Uniform tilings
Snub octaoctagonal tiling
Physics
218
35,142,051
https://en.wikipedia.org/wiki/New%20York%20Digital%20District
Founded in 2010, the New York Digital District (NYDD) is an initiative located in the DUMBO area of Brooklyn, NY, USA. Representing more than 80 digital businesses, the District promotes the area's ability to support an array of businesses, from technology to startups to agencies. The NYDD is an organized body of professionals dedicated to advocacy supporting the viability of the area as a location for a vast array of businesses. History The NYDD was announced at the area's monthly event, Digital DUMBO, in January 2010. The announcement was made by Mike Germano of Carrot Creative, Brian Lemond of Brooklyn United and the Brooklyn Digital Foundry, and Sam Lessin formerly of Drop.io. Other members include Big Spaceship, Etsy, and Huge Inc. The DUMBO section of Brooklyn, known as such for its position down under the Manhattan Bridge overpass, a former manufacturing and light industrial section of the city, has over the past decade experienced a renaissance as the new home for digital business. Attracted to affordable rents and large, light-filled spaces, early digital pioneers helped local residents and artists recast the neighborhood in a positive light. The NYDD works closely with the DUMBO Business Improvement District and is home to one of the most recognized New York gatherings for the digital industry, Digital Dumbo. The organization centers around a series of initiatives and committees, through which volunteers from local businesses rally to share ideas and incite progress on issues as diverse as welcoming new businesses, improving technology infrastructure and support, and drawing attention to the urgent need for additional real estate for the expanding business community. Initiatives Digital Census One key initiative is the NYDD Annual Digital Census, a survey that provides aggregate, anonymous, year-over-year data on the needs and profiles of the area's businesses. Internship Fair Another initiative is the NYDD's bi-annual internship fair, launching in April 2012. The fair will create a centralized arena for emerging professionals and students to meet leaders from the digital industry and to learn about employment prospects. References External links Organizations based in New York City Dumbo, Brooklyn Information technology places 2010 establishments in New York City
New York Digital District
Technology
440
58,644,759
https://en.wikipedia.org/wiki/Information%20engineering
Information engineering is the engineering discipline that deals with the generation, distribution, analysis, and use of information, data, and knowledge in electrical systems. The field first became identifiable in the early 21st century. The components of information engineering include more theoretical fields such as electromagnetism, machine learning, artificial intelligence, control theory, signal processing, and microelectronics, and more applied fields such as computer vision, natural language processing, bioinformatics, medical image computing, cheminformatics, autonomous robotics, mobile robotics, and telecommunications. Many of these originate from computer engineering, as well as other branches of engineering such as electrical engineering, computer science and bioengineering. The field of information engineering is based heavily on engineering and mathematics, particularly probability, statistics, calculus, linear algebra, optimization, differential equations, variational calculus, and complex analysis. Information engineers often hold a degree in information engineering or a related area, and are often part of a professional body such as the Institution of Engineering and Technology or Institute of Measurement and Control. They are employed in almost all industries due to the widespread use of information engineering. History In the 1980s and 1990s, the term information engineering referred to an area of software engineering which has come to be known as data engineering in the 2010s/2020s. Elements Machine learning and statistics Machine learning is the field that involves the use of statistical and probabilistic methods to let computers "learn" from data without being explicitly programmed. Data science involves the application of machine learning to extract knowledge from data. Subfields of machine learning include deep learning, supervised learning, unsupervised learning, reinforcement learning, semi-supervised learning, and active learning. Causal inference is another related component of information engineering. Control theory Control theory refers to the control of (continuous) dynamical systems, with the aim being to avoid delays, overshoots, or instability. Information engineers tend to focus more on control theory than on the physical design of control systems and circuits (which tends to fall under electrical engineering). Subfields of control theory include classical control, optimal control, and nonlinear control. Signal processing Signal processing refers to the generation, analysis and use of signals, which could take many forms such as image, sound, electrical, or biological. Information theory Information theory studies the analysis, transmission, and storage of information. Major subfields of information theory include coding and data compression. Computer vision Computer vision is the field that deals with getting computers to understand image and video data at a high level. Natural language processing Natural language processing deals with getting computers to understand human (natural) languages at a high level. This usually means text, but also often includes speech processing and recognition. Bioinformatics Bioinformatics is the field that deals with the analysis, processing, and use of biological data. This usually means topics such as genomics and proteomics, and sometimes also includes medical image computing. Cheminformatics Cheminformatics is the field that deals with the analysis, processing, and use of chemical data.
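As a minimal illustration of the machine-learning element described above (a program that "learns" a relationship from data rather than being explicitly programmed with it), the following Python sketch fits a straight line to noisy observations by ordinary least squares. The data and variable names are invented for the example; it is a sketch of the idea, not a description of any particular information-engineering system.

```python
import numpy as np

# Hypothetical data: noisy observations of an underlying relationship y = 2x + 1
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=50)

# Design matrix with a bias column, then ordinary least squares
X = np.column_stack([x, np.ones_like(x)])
coeffs, residuals, rank, singular_values = np.linalg.lstsq(X, y, rcond=None)
slope, intercept = coeffs

print(f"learned model: y ~ {slope:.2f} * x + {intercept:.2f}")
```

The recovered slope and intercept approximate the underlying values used to generate the data, which is the sense in which the program has "learned" them.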
Robotics Robotics in information engineering focuses mainly on the algorithms and computer programs used to control robots. As such, information engineering tends to focus more on autonomous, mobile, or probabilistic robots. Major subfields studied by information engineers include control, perception, SLAM, and motion planning. Tools In the past some areas in information engineering such as signal processing used analog electronics, but nowadays most information engineering is done with digital computers. Many tasks in information engineering can be parallelized, and so nowadays information engineering is carried out using CPUs, GPUs, and AI accelerators. There has also been interest in using quantum computers for some subfields of information engineering such as machine learning and robotics. See also References Engineering disciplines Information
Information engineering
Engineering
760
12,357,222
https://en.wikipedia.org/wiki/Transition%20metal%20dinitrogen%20complex
Transition metal dinitrogen complexes are coordination compounds that contain transition metals as ion centers and dinitrogen molecules (N2) as ligands. Historical background Transition metal complexes of N2 have been studied since 1965, when the first complex was reported by Allen and Senoff. This diamagnetic complex, [Ru(NH3)5(N2)]2+, was synthesized from hydrazine hydrate and ruthenium trichloride and consists of a [Ru(NH3)5]2+ centre attached to one end of N2. The existence of N2 as a ligand in this compound was identified by its IR spectrum, which shows a strong band around 2170–2100 cm−1. In 1966, the molecular structure of [Ru(NH3)5(N2)]Cl2 was determined by Bottomly and Nyburg by X-ray crystallography. The dinitrogen complex trans-[IrCl(N2)(PPh3)2] is made by treating Vaska's complex with aromatic acyl azides. It has a planar geometry. The first preparation of a metal-dinitrogen complex using dinitrogen itself was reported in 1967 by Yamamoto and coworkers. They obtained [Co(H)(N2)(PPh3)3] by reduction of Co(acac)3 with AlEt2OEt under an atmosphere of N2. Containing both hydrido and N2 ligands, the complex was of potential relevance to nitrogen fixation. From the late 1960s, a variety of transition metal-dinitrogen complexes were made, including those with iron, molybdenum and vanadium as metal centers. Interest in such complexes arises because N2 comprises the majority of the atmosphere and because many useful compounds contain nitrogen. Biological nitrogen fixation probably occurs via the binding of N2 to those metal centers in the enzyme nitrogenase, followed by a series of steps that involve electron transfer and protonation. Bonding modes In terms of its bonding to transition metals, N2 is related to CO and acetylene, as all three species have triple bonds. A variety of bonding modes have been characterized. Based on whether the N2 molecule is shared by two or more metal centers, the complexes can be classified into mononuclear and bridging. Based on the geometric relationship between the N2 molecule and the metal center, the complexes can be classified into end-on or side-on modes. In the end-on bonding modes of transition metal-dinitrogen complexes, the N-N vector can be considered in line with the metal ion center, whereas in the side-on modes, the metal-ligand bond is perpendicular to the N-N vector. Mononuclear, end-on As a ligand, N2 usually binds to metals as an "end-on" ligand, as illustrated by [Ru(NH3)5N2]2+. Such complexes are usually analogous to related CO derivatives. This relationship is illustrated by the pair of complexes IrCl(CO)(PPh3)2 and IrCl(N2)(PPh3)2. In these mononuclear cases, N2 acts both as a σ-donor and a π-acceptor. The M-N-N bond angles are close to 180°. N2 is a weaker π-acceptor than CO, reflecting the nature of the π* orbitals on CO vs N2. For this reason, few examples exist of complexes containing both CO and N2 ligands. Transition metal-dinitrogen complexes can contain more than one N2 as "end-on" ligands, such as mer-[Mo(N2)3(PPrn2Ph)3], which has octahedral geometry. In another example, the dinitrogen ligand in Mo(N2)2(Ph2PCH2CH2PPh2)2 can be reduced to produce ammonia. Because many nitrogenases contain Mo, there has been particular interest in Mo-N2 complexes. Bridging, end-on N2 also serves as a bridging ligand with "end-on" bonding to two metal centers, as illustrated by {[Ru(NH3)5]2(μ-N2)}4+. These complexes are also called multinuclear dinitrogen complexes.
In contrast to their mononuclear counterparts, they can be prepared for both early and late transition metals. In 2006, a study of iron-dinitrogen complexes by Holland and coworkers showed that the N–N bond is significantly weakened upon complexation with iron atoms with a low coordination number. The complex involved bidentate chelating ligands attached to the iron atoms in the Fe–N–N–Fe core, in which N2 acts as a bridging ligand between two iron atoms. Increasing the coordination number of iron by modifying the chelating ligands and adding another ligand per iron atom showed an increase in the strength of the N–N bond in the resulting complex. It is thus suspected that Fe in a low-coordination environment is a key factor in the fixation of nitrogen by the nitrogenase enzyme, since its Fe–Mo cofactor also features Fe with low coordination numbers. The average bond length of those bridging-end-on dinitrogen complexes is about 1.2 Å. In some cases, the bond length can be as long as 1.4 Å, which is similar to those of N-N single bonds. Hasanayn and co-workers have shown that the Lewis structures of end-on bridging complexes can be assigned based on π-molecular-orbital occupancy, in analogy with simple tetratomic organic molecules. For example, the cores of N2-bridged complexes with 8, 10, or 12 π-electrons can generally be formulated, respectively, as M≡N-N≡M, M=N=N=M, and M-N≡N-M, in analogy with the 8-, 10-, and 12-π-electron organic molecules HC≡C-C≡CH, O=C=C=O, and F-C≡C-F. Mononuclear, side-on In comparison with their end-on counterparts, mononuclear side-on dinitrogen complexes are usually higher in energy, and examples of them are rare. Dinitrogen acts as a π-donor in this type of complex. Fomitchev and Coppens reported the first crystallographic evidence for side-on coordination of N2 to a single metal center in a photoinduced metastable state. When treated with UV light, the transition metal-dinitrogen complex [Os(NH3)5(N2)]2+ in the solid state can be converted into a metastable state, [Os(NH3)5(η2-N2)]2+, in which the vibration of dinitrogen has shifted from 2025 to 1831 cm−1. Some other examples are considered to exist in the transition states of intramolecular linkage isomerizations. Armor and Taube reported these isomerizations using 15N-labelled dinitrogen as the ligand. Bridging, side-on In a second mode of bridging, bimetallic complexes are known wherein the N-N vector is perpendicular to the M-M vector, which can be considered a side-on fashion. One example is [(η5-C5Me4H)2Zr]2(μ2,η2,η2-N2). The dimetallic complex can react with H2 to achieve artificial nitrogen fixation by reducing N2. A related ditantalum tetrahydride complex could also reduce N2. Reactivity Cleavage to nitrides When metal nitrido complexes are produced from N2, the intermediacy of a dinitrogen complex is assumed. Some Mo(III) complexes also cleave N2: 2Mo(NR2)3 + N2 → (R2N)3Mo-N2-Mo(NR2)3 (R2N)3Mo-N2-Mo(NR2)3 → 2N≡Mo(NR2)3 Attack by electrophiles Some electron-rich metal dinitrogen complexes are susceptible to attack by electrophiles on nitrogen. When the electrophile is a proton, the reaction is of interest in the context of abiological nitrogen fixation. Some metal-dinitrogen complexes even catalyze the hydrogenation of N2 to ammonia in a cycle that involves N-protonation of a reduced M-N2 complex. See also Abiological nitrogen fixation Main-group element-mediated activation of dinitrogen Transition metal nitrido complex References Coordination complexes Nitrogen compounds
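The weakening of the N–N bond discussed above is usually followed experimentally through the N2 stretching band, for example the 2170–2100 cm−1 band noted in the historical background or the shift from 2025 to 1831 cm−1 on photoisomerization. As a rough illustration only, assuming a simple harmonic-oscillator model and a free-N2 stretch near 2331 cm−1 (a textbook value, not a figure taken from this article), the following Python sketch converts a stretching wavenumber into an approximate force constant.

```python
import math

C_CM_PER_S = 2.99792458e10   # speed of light in cm/s
AMU_KG = 1.66053907e-27      # atomic mass unit in kg
MU_N2 = (14.003074 ** 2 / (2 * 14.003074)) * AMU_KG  # reduced mass of a 14N-14N oscillator

def force_constant(wavenumber_cm: float) -> float:
    """Harmonic-oscillator estimate of the N-N force constant (N/m) from a band in cm^-1."""
    omega = 2 * math.pi * C_CM_PER_S * wavenumber_cm  # angular frequency in rad/s
    return MU_N2 * omega ** 2

for label, band in [("free N2 (approx. 2331 cm^-1)", 2331.0),
                    ("end-on coordinated N2 (approx. 2100 cm^-1)", 2100.0)]:
    print(f"{label}: ~{force_constant(band):.0f} N/m")
```

The lower force constant obtained for the coordinated ligand is consistent with the σ-donation/π-back-donation bonding picture described above, in which population of the N2 π* orbitals weakens the N–N bond.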
Transition metal dinitrogen complex
Chemistry
1,842
1,989,599
https://en.wikipedia.org/wiki/Euclidean%20plane%20isometry
In geometry, a Euclidean plane isometry is an isometry of the Euclidean plane, or more informally, a way of transforming the plane that preserves geometrical properties such as length. There are four types: translations, rotations, reflections, and glide reflections (see below ). The set of Euclidean plane isometries forms a group under composition: the Euclidean group in two dimensions. It is generated by reflections in lines, and every element of the Euclidean group is the composite of at most three distinct reflections. Informal discussion Informally, a Euclidean plane isometry is any way of transforming the plane without "deforming" it. For example, suppose that the Euclidean plane is represented by a sheet of transparent plastic sitting on a desk. Examples of isometries include: Shifting the sheet one inch to the right. Rotating the sheet by ten degrees around some marked point (which remains motionless). Turning the sheet over to look at it from behind. Notice that if a picture is drawn on one side of the sheet, then after turning the sheet over, we see the mirror image of the picture. These are examples of translations, rotations, and reflections respectively. There is one further type of isometry, called a glide reflection (see below under classification of Euclidean plane isometries). However, folding, cutting, or melting the sheet are not considered isometries. Neither are less drastic alterations like bending, stretching, or twisting. Formal definition An isometry of the Euclidean plane is a distance-preserving transformation of the plane. That is, it is a map such that for any points p and q in the plane, where d(p, q) is the usual Euclidean distance between p and q. Classification It can be shown that there are four types of Euclidean plane isometries. (Note: the notations for the types of isometries listed below are not completely standardised.) Reflections Reflections, or mirror isometries, denoted by Fc,v, where c is a point in the plane and v is a unit vector in R2. (F is for "flip".) have the effect of reflecting the point p in the line L that is perpendicular to v and that passes through c. The line L is called the reflection axis or the associated mirror. To find a formula for Fc,v, we first use the dot product to find the component t of p − c in the v direction, and then we obtain the reflection of p by subtraction, The combination of rotations about the origin and reflections about a line through the origin is obtained with all orthogonal matrices (i.e. with determinant 1 and −1) forming orthogonal group O(2). In the case of a determinant of −1 we have: which is a reflection in the x-axis followed by a rotation by an angle θ, or equivalently, a reflection in a line making an angle of θ/2 with the x-axis. Reflection in a parallel line corresponds to adding a vector perpendicular to it. Translations Translations, denoted by Tv, where v is a vector in R2 have the effect of shifting the plane in the direction of v. That is, for any point p in the plane, or in terms of (x, y) coordinates, A translation can be seen as a composite of two parallel reflections. Rotations Rotations, denoted by Rc,θ, where c is a point in the plane (the centre of rotation), and θ is the angle of rotation. In terms of coordinates, rotations are most easily expressed by breaking them up into two operations. First, a rotation around the origin is given by These matrices are the orthogonal matrices (i.e. each is a square matrix G whose transpose is its inverse, i.e. 
), with determinant 1 (the other possibility for orthogonal matrices is −1, which gives a mirror image, see below). They form the special orthogonal group SO(2). A rotation around c can be accomplished by first translating c to the origin, then performing the rotation around the origin, and finally translating the origin back to c. That is, or in other words, Alternatively, a rotation around the origin is performed, followed by a translation: A rotation can be seen as a composite of two non-parallel reflections. Rigid transformations The set of translations and rotations together form the rigid motions or rigid displacements. This set forms a group under composition, the group of rigid motions, a subgroup of the full group of Euclidean isometries. Glide reflections Glide reflections, denoted by Gc,v,w, where c is a point in the plane, v is a unit vector in R2, and w is non-null a vector perpendicular to v are a combination of a reflection in the line described by c and v, followed by a translation along w. That is, or in other words, (It is also true that that is, we obtain the same result if we do the translation and the reflection in the opposite order.) Alternatively we multiply by an orthogonal matrix with determinant −1 (corresponding to a reflection in a line through the origin), followed by a translation. This is a glide reflection, except in the special case that the translation is perpendicular to the line of reflection, in which case the combination is itself just a reflection in a parallel line. The identity isometry, defined by I(p) = p for all points p is a special case of a translation, and also a special case of a rotation. It is the only isometry which belongs to more than one of the types described above. In all cases we multiply the position vector by an orthogonal matrix and add a vector; if the determinant is 1 we have a rotation, a translation, or the identity, and if it is −1 we have a glide reflection or a reflection. A "random" isometry, like taking a sheet of paper from a table and randomly laying it back, "almost surely" is a rotation or a glide reflection (they have three degrees of freedom). This applies regardless of the details of the probability distribution, as long as θ and the direction of the added vector are independent and uniformly distributed and the length of the added vector has a continuous distribution. A pure translation and a pure reflection are special cases with only two degrees of freedom, while the identity is even more special, with no degrees of freedom. Isometries as reflection group Reflections, or mirror isometries, can be combined to produce any isometry. Thus isometries are an example of a reflection group. Mirror combinations In the Euclidean plane, we have the following possibilities. [d  ] Identity Two reflections in the same mirror restore each point to its original position. All points are left fixed. Any pair of identical mirrors has the same effect. [db] Reflection As Alice found through the looking-glass, a single mirror causes left and right hands to switch. (In formal terms, topological orientation is reversed.) Points on the mirror are left fixed. Each mirror has a unique effect. [dp] Rotation Two distinct intersecting mirrors have a single point in common, which remains fixed. All other points rotate around it by twice the angle between the mirrors. Any two mirrors with the same fixed point and same angle give the same rotation, so long as they are used in the correct order. 
[dd] Translation Two distinct mirrors that do not intersect must be parallel. Every point moves the same amount, twice the distance between the mirrors, and in the same direction. No points are left fixed. Any two mirrors with the same parallel direction and the same distance apart give the same translation, so long as they are used in the correct order. [dq] Glide reflection Three mirrors. If they are all parallel, the effect is the same as a single mirror (slide a pair to cancel the third). Otherwise we can find an equivalent arrangement where two are parallel and the third is perpendicular to them. The effect is a reflection combined with a translation parallel to the mirror. No points are left fixed. Three mirrors suffice Adding more mirrors does not add more possibilities (in the plane), because they can always be rearranged to cause cancellation. Recognition We can recognize which of these isometries we have according to whether it preserves hands or swaps them, and whether it has at least one fixed point or not, as shown in the following table (omitting the identity). Group structure Isometries requiring an odd number of mirrors — reflection and glide reflection — always reverse left and right. The even isometries — identity, rotation, and translation — never do; they correspond to rigid motions, and form a normal subgroup of the full Euclidean group of isometries. Neither the full group nor the even subgroup are abelian; for example, reversing the order of composition of two parallel mirrors reverses the direction of the translation they produce. Since the even subgroup is normal, it is the kernel of a homomorphism to a quotient group, where the quotient is isomorphic to a group consisting of a reflection and the identity. However the full group is not a direct product, but only a semidirect product, of the even subgroup and the quotient group. Composition Composition of isometries mixes kinds in assorted ways. We can think of the identity as either two mirrors or none; either way, it has no effect in composition. And two reflections give either a translation or a rotation, or the identity (which is both, in a trivial way). Reflection composed with either of these could cancel down to a single reflection; otherwise it gives the only available three-mirror isometry, a glide reflection. A pair of translations always reduces to a single translation; so the challenging cases involve rotations. We know a rotation composed with either a rotation or a translation must produce an even isometry. Composition with translation produces another rotation (by the same amount, with shifted fixed point), but composition with rotation can yield either translation or rotation. It is often said that composition of two rotations produces a rotation, and Euler proved a theorem to that effect in 3D; however, this is only true for rotations sharing a fixed point. Translation, rotation, and orthogonal subgroups We thus have two new kinds of isometry subgroups: all translations, and rotations sharing a fixed point. Both are subgroups of the even subgroup, within which translations are normal. Because translations are a normal subgroup, we can factor them out leaving the subgroup of isometries with a fixed point, the orthogonal group. Nested group construction The subgroup structure suggests another way to compose an arbitrary isometry: Pick a fixed point, and a mirror through it. If the isometry is odd, use the mirror; otherwise do not. If necessary, rotate around the fixed point. If necessary, translate. 
This works because translations are a normal subgroup of the full group of isometries, with quotient the orthogonal group; and rotations about a fixed point are a normal subgroup of the orthogonal group, with quotient a single reflection. Discrete subgroups The subgroups discussed so far are not only infinite, they are also continuous (Lie groups). Any subgroup containing at least one non-zero translation must be infinite, but subgroups of the orthogonal group can be finite. For example, the symmetries of a regular pentagon consist of rotations by integer multiples of 72° (360° / 5), along with reflections in the five mirrors which perpendicularly bisect the edges. This is a group, D5, with 10 elements. It has a subgroup, C5, of half the size, omitting the reflections. These two groups are members of two families, Dn and Cn, for any n > 1. Together, these families constitute the rosette groups. Translations do not fold back on themselves, but we can take integer multiples of any finite translation, or sums of multiples of two such independent translations, as a subgroup. These generate the lattice of a periodic tiling of the plane. We can also combine these two kinds of discrete groups — the discrete rotations and reflections around a fixed point and the discrete translations — to generate the frieze groups and wallpaper groups. Curiously, only a few of the fixed-point groups are found to be compatible with discrete translations. In fact, lattice compatibility imposes such a severe restriction that, up to isomorphism, we have only 7 distinct frieze groups and 17 distinct wallpaper groups. For example, the pentagon symmetries, D5, are incompatible with a discrete lattice of translations. (Each higher dimension also has only a finite number of such crystallographic groups, but the number grows rapidly; for example, 3D has 230 groups and 4D has 4783.) Isometries in the complex plane In terms of complex numbers, the isometries of the plane are either of the form or of the form for some complex numbers and with |ω| = 1. This is easy to prove: if and and if one defines then is an isometry, , and . It is then easy to see that g is either the identity or the conjugation, and the statement being proved follows from this and from the fact that . This is obviously related to the previous classification of plane isometries, since: functions of the type are translations; functions of the type are rotations (when |ω| = 1); the conjugation is a reflection. Note that a rotation about complex point p is obtained by complex arithmetic with where the last expression shows the mapping equivalent to rotation at 0 and a translation. Therefore, given direct isometry one can solve to obtain as the center for an equivalent rotation, provided that , that is, provided the direct isometry is not a pure translation. As stated by Cederberg, "A direct isometry is either a rotation or a translation." See also Beckman–Quarles theorem, a characterization of isometries as the transformations that preserve unit distances Congruence (geometry) Coordinate rotations and reflections Hjelmslev's theorem, the statement that the midpoints of corresponding pairs of points in an isometry of lines are collinear References External links Plane Isometries Crystallography Euclidean plane geometry Euclidean symmetries Group theory Articles containing proofs
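To make the complex-number description above concrete, the following Python sketch (a minimal illustration, not part of the article) writes a direct isometry as z -> omega*z + alpha with |omega| = 1 and an opposite isometry as z -> omega*conj(z) + alpha, composes two parallel reflections into a translation, and recovers the centre p = alpha/(1 - omega) of the rotation equivalent to a direct isometry with omega != 1.

```python
import cmath

def reflect_x0(z):
    """Reflection in the vertical line x = 0 (the imaginary axis)."""
    return -z.conjugate()

def reflect_x1(z):
    """Reflection in the vertical line x = 1."""
    return 2 - z.conjugate()

# Two parallel mirrors compose to a translation by twice the distance between them
print(reflect_x1(reflect_x0(1 + 1j)))      # (3+1j): every point shifted by +2 along the real axis

# A direct isometry z -> omega*z + alpha with omega != 1 is a rotation about p = alpha/(1 - omega)
omega = cmath.exp(1j * cmath.pi / 2)       # |omega| = 1: a rotation by 90 degrees
alpha = 2 + 0j
p = alpha / (1 - omega)
print(abs(omega * p + alpha - p) < 1e-9)   # True: p is fixed, i.e. it is the centre of the rotation
```

The determinant-based classification carries over directly: the direct form (no conjugation) corresponds to determinant +1, the opposite form (with conjugation) to determinant -1.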
Euclidean plane isometry
Physics,Chemistry,Materials_science,Mathematics,Engineering
2,938
16,153,473
https://en.wikipedia.org/wiki/Device%20fingerprint
A device fingerprint or machine fingerprint is information collected about the software and hardware of a remote computing device for the purpose of identification. The information is usually assimilated into a brief identifier using a fingerprinting algorithm. A browser fingerprint is information collected specifically by interaction with the web browser of the device. Device fingerprints can be used to fully or partially identify individual devices even when persistent cookies (and zombie cookies) cannot be read or stored in the browser, the client IP address is hidden, or one switches to another browser on the same device. This may allow a service provider to detect and prevent identity theft and credit card fraud, but also to compile long-term records of individuals' browsing histories (and deliver targeted advertising or targeted exploits) even when they are attempting to avoid tracking – raising a major concern for internet privacy advocates. History Basic web browser configuration information has long been collected by web analytics services in an effort to measure real human web traffic and discount various forms of click fraud. Since its introduction in the late 1990s, client-side scripting has gradually enabled the collection of an increasing amount of diverse information, with some computer security experts starting to complain about the ease of bulk parameter extraction offered by web browsers as early as 2003. In 2005, researchers at the University of California, San Diego showed how TCP timestamps could be used to estimate the clock skew of a device, and consequently to remotely obtain a hardware fingerprint of the device. In 2010, Electronic Frontier Foundation launched a website where visitors can test their browser fingerprint. After collecting a sample of 470161 fingerprints, they measured at least 18.1 bits of entropy possible from browser fingerprinting, but that was before the advancements of canvas fingerprinting, which claims to add another 5.7 bits. In 2012, Keaton Mowery and Hovav Shacham, researchers at University of California, San Diego, showed how the HTML5 canvas element could be used to create digital fingerprints of web browsers. In 2013, at least 0.4% of Alexa top 10,000 sites were found to use fingerprinting scripts provided by a few known third parties. In 2014, 5.5% of Alexa top 10,000 sites were found to use canvas fingerprinting scripts served by a total of 20 domains. The overwhelming majority (95%) of the scripts were served by AddThis, which started using canvas fingerprinting in January that year, without the knowledge of some of its clients. In 2015, a feature to protect against browser fingerprinting was introduced in Firefox version 41, but it has been since left in an experimental stage, not initiated by default. The same year a feature named Enhanced Tracking Protection was introduced in Firefox version 42 to protect against tracking during private browsing by blocking scripts from third party domains found in the lists published by Disconnect Mobile. At WWDC 2018 Apple announced that Safari on macOS Mojave "presents simplified system information when users browse the web, preventing them from being tracked based on their system configuration." In 2019, starting from Firefox version 69, Enhanced Tracking Protection has been turned on by default for all users also during non-private browsing. The feature was first introduced to protect private browsing in 2015 and was then extended to standard browsing as an opt-in feature in 2018. 
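The entropy figures quoted above (at least 18.1 bits for a full browser fingerprint, with canvas fingerprinting claimed to add roughly another 5.7 bits) come from measuring how unevenly attribute values are distributed across a sample of browsers. The following Python sketch shows that calculation on an invented sample; the attribute values and counts are hypothetical and serve only to illustrate the measure.

```python
import math
from collections import Counter

def entropy_bits(values):
    """Shannon entropy, in bits, of the empirical distribution of the observed values."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical observations of a single fingerprint attribute (e.g. a User-Agent string)
observed = ["UA-1"] * 50 + ["UA-2"] * 30 + ["UA-3"] * 15 + ["UA-4"] * 5
print(f"{entropy_bits(observed):.2f} bits of identifying information in this attribute")
```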
Diversity and stability Motivation for the device fingerprint concept stems from the forensic value of human fingerprints. In order to uniquely distinguish over time some devices through their fingerprints, the fingerprints must be both sufficiently diverse and sufficiently stable. In practice neither diversity nor stability is fully attainable, and improving one has a tendency to adversely impact the other. For example, the assimilation of an additional browser setting into the browser fingerprint would usually increase diversity, but it would also reduce stability, because if a user changes that setting, then the browser fingerprint would change as well. A certain degree of instability can be compensated by linking together fingerprints that, although partially different, might probably belong to the same device. This can be accomplished by a simple rule-based linking algorithm (which, for example, links together fingerprints that differ only for the browser version, if that increases with time) or machine learning algorithms. Entropy is one of several ways to measure diversity. Sources of identifying information Applications that are locally installed on a device are allowed to gather a great amount of information about the software and the hardware of the device, often including unique identifiers such as the MAC address and serial numbers assigned to the machine hardware. Indeed, programs that employ digital rights management use this information for the very purpose of uniquely identifying the device. Even if they are not designed to gather and share identifying information, local applications might unwillingly expose identifying information to the remote parties with which they interact. The most prominent example is that of web browsers, which have been proved to expose diverse and stable information in such an amount to allow remote identification, see . Diverse and stable information can also be gathered below the application layer, by leveraging the protocols that are used to transmit data. Sorted by OSI model layer, some examples of protocols that can be utilized for fingerprinting are: OSI Layer 7: SMB, FTP, HTTP, Telnet, TLS/SSL, DHCP OSI Layer 5: SNMP, NetBIOS OSI Layer 4: TCP (see TCP/IP stack fingerprinting) OSI Layer 3: IPv4, IPv6, ICMP OSI Layer 2: IEEE 802.11, CDP Passive fingerprinting techniques merely require the fingerprinter to observe traffic originated from the target device, while active fingerprinting techniques require the fingerprinter to initiate connections to the target device. Techniques that require interaction with the target device over a connection initiated by the latter are sometimes addressed as semi-passive. Browser fingerprint The collection of a large amount of diverse and stable information from web browsers is possible for most part due to client-side scripting languages, which were introduced in the late 1990s. Today there are several open-source browser fingerprinting libraries, such as FingerprintJS, ImprintJS, and ClientJS, where FingerprintJS is updated the most often and supersedes ImprintJS and ClientJS to a large extent. Browser version Browsers provide their name and version, together with some compatibility information, in the User-Agent request header. Being a statement freely given by the client, it should not be trusted when assessing its identity. 
Instead, the type and version of the browser can be inferred from the observation of quirks in its behavior: for example, the order and number of HTTP header fields is unique to each browser family and, most importantly, each browser family and version differs in its implementation of HTML5, CSS and JavaScript. Such differences can be remotely tested by using JavaScript. A Hamming distance comparison of parser behaviors has been shown to effectively fingerprint and differentiate a majority of browser versions. Browser extensions A combination of extensions or plugins unique to a browser can be added to a fingerprint directly. Extensions may also modify how any other browser attributes behave, adding additional complexity to the user's fingerprint. Adobe Flash and Java plugins were widely used to access user information before their deprecation. Hardware properties User agents may provide system hardware information, such as phone model, in the HTTP header. Properties about the user's operating system, screen size, screen orientation, and display aspect ratio can be also retrieved by using JavaScript to observe the result of CSS media queries. Browsing history The fingerprinter could determine which sites the browser had previously visited within a list it provided, by querying the list using JavaScript with the CSS selector . Typically, a list of 50 popular websites were sufficient to generate a unique user history profile, as well as provide information about the user's interests. However, browsers have since then mitigated this risk. Font metrics The letter bounding boxes differ between browsers based on anti-aliasing and font hinting configuration and can be measured by JavaScript. Canvas and WebGL Canvas fingerprinting uses the HTML5 canvas element, which is used by WebGL to render 2D and 3D graphics in a browser, to gain identifying information about the installed graphics driver, graphics card, or graphics processing unit (GPU). Canvas-based techniques may also be used to identify installed fonts. Furthermore, if the user does not have a GPU, CPU information can be provided to the fingerprinter instead. A canvas fingerprinting script first draws text of specified font, size, and background color. The image of the text as rendered by the user's browser is then recovered by the ToDataURL Canvas API method. The hashed text-encoded data becomes the user's fingerprint. Canvas fingerprinting methods have been shown to produce 5.7 bits of entropy. Because the technique obtains information about the user's GPU, the information entropy gained is "orthogonal" to the entropy of previous browser fingerprint techniques such as screen resolution and JavaScript capabilities. Hardware benchmarking Benchmark tests can be used to determine whether a user's CPU utilizes AES-NI or Intel Turbo Boost by comparing the CPU time used to execute various simple or cryptographic algorithms. Specialized APIs can also be used, such as the Battery API, which constructs a short-term fingerprint based on the actual battery state of the device, or OscillatorNode, which can be invoked to produce a waveform based on user entropy. A device's hardware ID, which is a cryptographic hash function specified by the device's vendor, can also be queried to construct a fingerprint. 
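As noted at the start of the article, the attributes described in this section are usually assimilated into a brief identifier by a fingerprinting algorithm. A minimal, hypothetical sketch of that final step is simply to hash a canonical serialization of whatever was collected; real libraries combine and weight attributes in more elaborate ways, so the code below only illustrates the principle.

```python
import hashlib
import json

def fingerprint(attributes: dict) -> str:
    """Hash a canonical JSON serialization of collected attributes into a short identifier."""
    canonical = json.dumps(attributes, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Hypothetical attribute values gathered from one browser
attrs = {
    "user_agent": "ExampleBrowser/1.0",
    "screen": "1920x1080",
    "timezone": "UTC+10",
    "fonts": ["Arial", "Helvetica"],
}
print(fingerprint(attrs))   # a 16-hex-character device identifier
```

Because the identifier changes whenever any input attribute changes, this simple scheme also illustrates the diversity-versus-stability trade-off discussed above.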
Mitigation methods for browser fingerprinting Different approaches exist to mitigate the effects of browser fingerprinting and improve users' privacy by preventing unwanted tracking, but there is no ultimate approach that can prevent fingerprinting while keeping the richness of a modern web browser. Offering a simplified fingerprint Users may attempt to reduce their fingerprintability by selecting a web browser which minimizes the availability of identifying information, such as browser fonts, device ID, canvas element rendering, WebGL information, and local IP address. As of 2017 Microsoft Edge is considered to be the most fingerprintable browser, followed by Firefox and Google Chrome, Internet Explorer, and Safari. Among mobile browsers, Google Chrome and Opera Mini are most fingerprintable, followed by mobile Firefox, mobile Edge, and mobile Safari. Tor Browser disables fingerprintable features such as the canvas and WebGL API and notifies users of fingerprint attempts. In order to reduce diversity, Tor browser doesn't allow the width and height of the window available to the webpage to be any number of pixels, but allows only some given values. The result is that the webpage is windowboxed: it fills a space that is slightly smaller than the browser window. Offering a spoofed fingerprint Spoofing some of the information exposed to the fingerprinter (e.g. the user agent) may create a reduction in diversity, but the contrary could be also achieved if the spoofed information differentiates the user from all the others who do not use such a strategy more than the real browser information. Spoofing the information differently at each site visit, for example by perturbating the sound and canvas rendering with a small amount of random noise, allows a reduction of stability. This technique has been adopted by the Brave browser in 2020. Blocking scripts Blindly blocking client-side scripts served from third-party domains, and possibly also first-party domains (e.g. by disabling JavaScript or using NoScript) can sometimes render websites unusable. The preferred approach is to block only third-party domains that seem to track people, either because they are found on a blacklist of tracking domains (the approach followed by most ad blockers) or because the intention of tracking is inferred by past observations (the approach followed by Privacy Badger). Using multiple browsers Different browsers on the same machine would usually have different fingerprints, but if both browsers are not protected against fingerprinting, then the two fingerprints could be identified as originating from the same machine. See also Anonymous web browsing Browser security Browser sniffing Evercookie Fingerprint (computing) Internet privacy Web tracking References Further reading External links Panopticlick, by the Electronic Frontier Foundation, gathers some elements of a browser's device fingerprint and estimates how identifiable it makes the user Am I Unique, by INRIA and INSA Rennes, implements fingerprinting techniques including collecting information through WebGL. Computer network security Internet privacy Internet fraud Fingerprinting algorithms Web analytics
Device fingerprint
Technology,Engineering
2,626
43,624,349
https://en.wikipedia.org/wiki/Denaturing%20high%20performance%20liquid%20chromatography
Denaturing High Performance Liquid Chromatography (DHPLC) is a method of chromatography for the detection of base substitutions, small deletions or insertions in DNA. Due to its speed and high resolution, this method is particularly useful for finding polymorphisms in DNA. In practice, the analysis begins with a standard polymerase chain reaction (PCR) in order to amplify the fragment of interest. If the amplified region that exhibits the polymorphism(s) is heterozygous, two kinds of fragments, corresponding to the wild-type allele and the polymorphic allele, will be present in the PCR product. This first step is followed by a step of denaturation–renaturation to create hetero- and homoduplexes from the two allele populations in the PCR. To find a homozygous polymorphism, one proceeds in the same way, premixing a wild-type DNA population with a population of polymorphic DNA to obtain heteroduplexes after the denaturation–renaturation step. Heteroduplexes are double strands of DNA containing a strand from the wild-type allele and a strand from the polymorphic allele. The formation of such DNA fragments then causes the appearance of a "mismatch", or mispairing, where the polymorphism is located. These "mismatches" in the heteroduplex are the basis for polymorphism detection by DHPLC. Heteroduplexes are thermally less stable than their corresponding homoduplexes, and their DNA strands will therefore begin to separate during chromatography when subjected to a sufficiently high temperature. The consequence of this double-strand instability is a local opening of the two DNA strands in the region of the polymorphism when the DNA is heated to its melting temperature. This partial denaturation decreases the interaction with the column and results in a reduced retention time compared to the homoduplexes in the chromatographic separation process. To observe this separation, the DHPLC method uses a column with a non-grafted porous stationary phase composed of alkylated polystyrene-divinylbenzene. The stationary phase is electrically neutral and hydrophobic. The DNA, however, is negatively charged at its phosphate groups and therefore cannot adsorb onto the column by itself. In order to make the adsorption possible, triethylammonium acetate (TEAA) is used. The positively charged ammonium ion of these molecules interacts with the DNA, and the alkyl chain with the hydrophobic surface of the solid phase. Therefore, when heteroduplexes are partially denatured by heating, the negative charges undergo partial relocation and the interaction force between the DNA heteroduplexes and the column decreases in comparison to the strength of interaction of the homoduplexes. The heteroduplexes are therefore eluted more rapidly than the homoduplexes by the mobile phase (which contains acetonitrile). References Chromatography
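Because the method hinges on heteroduplexes carrying a mismatch at the polymorphic position, the underlying comparison can be illustrated with a trivial sketch that checks two allele sequences base by base. The sequences below are invented, and the snippet only illustrates where a heteroduplex mismatch would arise; it is not part of the DHPLC protocol itself.

```python
def mismatch_positions(wild_type: str, variant: str):
    """Return the 0-based positions where two equal-length allele sequences differ."""
    if len(wild_type) != len(variant):
        raise ValueError("sequences must be the same length for this simple comparison")
    return [i for i, (a, b) in enumerate(zip(wild_type, variant)) if a != b]

# Invented example: a single base substitution, giving one mismatch in the heteroduplex
wt = "ACGTGGCTAACGT"
var = "ACGTGGCTGACGT"
print(mismatch_positions(wt, var))   # [8]
```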
Denaturing high performance liquid chromatography
Chemistry
625
18,424,177
https://en.wikipedia.org/wiki/Wavelength%20selective%20switching
Wavelength selective switching components are used in WDM optical communications networks to route (switch) signals between optical fibres on a per-wavelength basis. What is a WSS A WSS comprises a switching array that operates on light that has been dispersed in wavelength without the requirement that the dispersed light be physically demultiplexed into separate ports. This is termed a ‘disperse and switch’ configuration. For example, an 88 channel WDM system can be routed from a “common” fiber to any one of N fibers by employing 88 1 x N switches. This represents a significant simplification of a demux and switch and multiplex architecture that would require (in addition to N +1 mux/demux elements) a non-blocking switch for 88 N x N channels which would test severely the manufacturability limits of large-scale optical cross-connects for even moderate fiber counts. A more practical approach, and one adopted by the majority of WSS manufacturers is shown schematically in Figure 1 (to be uploaded). The various incoming channels of a common port are dispersed continuously onto a switching element which then directs and attenuates each of these channels independently to the N switch ports. The dispersive mechanism is generally based on holographic or ruled diffraction gratings similar to those used commonly in spectrometers. It can be advantageous, for achieving resolution and coupling efficiency, to employ a combination of a reflective or transmissive grating and a prism – known as a GRISM. The operation of the WSS can be bidirectional so the wavelengths can be multiplexed together from different ports onto a single common port. To date, the majority of deployments have used a fixed channel bandwidth of 50 or 100 GHz and 9 output ports are typically used. Microelectromechanical Mirrors (MEMS) The simplest and earliest commercial WSS were based on movable mirrors using Micro-Electro-Mechanical Systems (MEMS). The incoming light is broken into a spectrum by a diffraction grating (shown at RHS of Figure) and each wavelength channel then focuses on a separate MEMS mirror. By tilting the mirror in one dimension, the channel can be directed back into any of the fibers in the array. A second tilting axis allows transient crosstalk to be minimized, otherwise switching (eg) from port 1 to port 3 will always involve passing the beam across port 2. The second axis provides a means to attenuate the signal without increasing the coupling into neighboring fibers. This technology has the advantage of a single steering surface, not necessarily requiring polarization diversity optics. It works well in the presence of a continuous signal, allowing the mirror tracking circuits to dither the mirror and maximise coupling. MEMS based WSS typically produce good extinction ratios, but poor open loop performance for setting a given attenuation level. The main limitations of the technology arise from the channelization that the mirrors naturally enforce. During manufacturing, the channels must be carefully aligned with the mirrors, complicating the manufacturing process. Post-manufacturing alignment adjustments have been mainly limited to adjusting the gas pressure within the hermetic enclosure. This enforced channelization has also proved, so far, an insurmountable obstacle to implementing flexible channel plans where different channel sizes are required within a network. 
Additionally the phase of light at the mirror edge is not well controlled in a physical mirror so artefacts can arise in the switching of light near the channel edge due to interference of the light from each channel. Binary Liquid Crystal (LC) Liquid crystal switching avoids both the high cost of small volume MEMS fabrication and potentially some of its fixed channel limitations. The concept is illustrated in Figure 3 (to be uploaded). A diffraction grating breaks the incoming light into a spectrum. A software controlled binary liquid crystal stack, individually tilts each optical channel and a second grating (or a second pass of the first grating) is used to spectrally recombine the beams. The offsets created by the liquid crystal stack cause the resulting spectrally recombined beams to be spatially offset, and hence to focus, through a lens array, into separate fibers. Polarization diversity optics ensures low Polarization Dependent Losses (PDL). This technology has the advantages of relatively low cost parts, simple electronic control and stable beam positions without active feedback. It is capable of configuring to a flexible grid spectrum by the use of a fine pixel grid. The inter-pixel gaps must be small compared to the beam size, to avoid perturbing the transmitted light significantly. Furthermore, each grid must be replicated for each of the switching stages creating the requirement of individually controlling thousands of pixels on different substrates so the advantages of this technology in terms of simplicity are negated as the wavelength resolution becomes finer. The main disadvantage of this technology arises from the thickness of the stacked switching elements. Keeping the optical beam tightly focused over this depth is difficult and has, so far, limited the ability of high port count WSS to achieve very fine (12.5 GHz or less) granularity. Liquid Crystal on Silicon (LCoS) Liquid Crystal on Silicon LCoS is particularly attractive as a switching mechanism in a WSS because of the near continuous addressing capability, enabling much new functionality. In particular the bands of wavelengths which are switched together (channels) need not be preconfigured in the optical hardware but can be programmed into the switch through the software control. Additionally, it is possible to take advantage of this ability to reconfigure channels while the device is operating. A schematic of an LCoS WSS is shown in Figure 4 (to be uploaded). LCoS technology has enabled the introduction of more flexible wavelength grids which help to unlock the full spectral capacity of optical fibers. Even more surprising features rely on the phase matrix nature of the LCoS switching element. Features in common use include such things as shaping the power levels within a channel or broadcasting the optical signal to more than one port. LCoS-based WSS also permit dynamic control of channel centre frequency and bandwidth through on-the-fly modification of the pixel arrays via embedded software. The degree of control of channel parameters can be very fine-grained, with independent control of the centre frequency and either upper- or lower-band-edge of a channel with better than 1 GHz resolution possible. This is advantageous from a manufacturability perspective, with different channel plans being able to be created from a single platform and even different operating bands (such as C and L) being able to use an identical switch matrix. 
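The fine-grained control of centre frequency and bandwidth described above is normally expressed through a flexible-grid channel plan. Assuming the common convention of a 193.1 THz anchor frequency, 6.25 GHz centre-frequency granularity and 12.5 GHz slot-width granularity (an assumption made for this illustration rather than something stated in the article; the relevant ITU-T recommendation should be checked before relying on these numbers), the following Python sketch snaps a requested channel to the nearest grid parameters with which an LCoS-based WSS controller might be programmed.

```python
ANCHOR_THZ = 193.1        # conventional anchor frequency
CENTER_STEP_GHZ = 6.25    # centre-frequency granularity
WIDTH_STEP_GHZ = 12.5     # slot-width granularity

def flexgrid_params(center_thz: float, width_ghz: float):
    """Snap a requested channel to flexible-grid parameters (n, m) and the snapped values."""
    n = round((center_thz - ANCHOR_THZ) * 1e3 / CENTER_STEP_GHZ)   # centre offset in grid steps
    m = max(1, round(width_ghz / WIDTH_STEP_GHZ))                  # width in slot-width units
    snapped_center_thz = ANCHOR_THZ + n * CENTER_STEP_GHZ / 1e3
    return n, m, snapped_center_thz, m * WIDTH_STEP_GHZ

# Example: a 75 GHz super-channel centred near 193.4 THz
print(flexgrid_params(193.4, 75.0))   # approximately (48, 6, 193.4, 75.0)
```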
Products have been introduced allowing switching between 50 GHz channels and 100 GHz channels, or a mix of channels, without introducing any errors or “hits” to the existing traffic. More recently, this has been extended to support the whole concept of Flexible or Elastic networks under ITU G.654.2 through products such as Finisar's Flexgrid™ WSS. For more detailed information on the applications of LCoS in telecommunications and, in particular, Wavelength Selective Switches, see chapter 16 in Optical Fiber Telecommunications VIA, edited by Kaminov, Li and Wilner, Academic Press . MEMS Arrays A further array-based switch engine uses an array of individual reflective MEMS mirrors to perform the necessary beam steering (Figure 5 (to be uploaded). These arrays are typically a derivative of the Texas Instruments DLP range of spatial light modulators. In this case, the angle of the MEMs mirrors is changed to deflect the beam. However, current implementations only allow the mirrors to have two possible states, giving two potential beam angles. This complicates the design of multi-port WSS and has limited their application to relatively low-port-count devices. Future Developments Dual WSS It is likely that in future two WSS could use the same optical module utilizing different wavelength processing regions of a single matrix switch such as LCoS, provided that the issues associated with device isolation are able to be appropriately addressed. Channel selectivity ensures only wavelengths required to be dropped locally (up to the maximum number of transceivers in the bank) are presented to any mux/demux module through each fiber, which in turn reduces the filtering and extinction requirements on the mux/demux module. Contentionless WSS This provides cost and performance benefits for next generation colorless, directionless, contentionless (CDC) reconfigurable optical add-drop multiplexer (ROADM) networks, resulting from improved scalability of add/drop ports and removal of erbium-doped fiber amplifier (EDFA) arrays (which are required to overcome splitting losses in multicast switches). Advanced Spatial Light Modulators The technical maturity of spatial light modulators based on consumer driven applications has been highly advantageous to their adoption in the telecommunications arena. There are developments in MEMs phased arrays and other electro-optic spatial light modulators that could be envisaged in the future to be applicable to telecom switching and wavelength processing, perhaps bringing faster switching or having an advantage in simplicity of optical design through polarisation-independent operation. For example, the design principles developed for LCoS could be applied to other phase-controllable arrays in a straightforward fashion if a suitable phase stroke (greater than 2π at 1550 nm) can be achieved. However the requirements for low electrical crosstalk and high fill factor over very small pixels required to allow switching in a compact form factor remain serious practical impediments to achieving these goals. References External links Lumentum WSS Products Finisar WSS Products II-VI WSS Products Calient WSS Products Optical devices Photonics
Wavelength selective switching
Materials_science,Engineering
1,984
9,363,315
https://en.wikipedia.org/wiki/Australian%20Aboriginal%20astronomy
Australian Aboriginal astronomy has been passed down orally, through ceremonies, and in many kinds of artwork. The astronomical systems passed down in this way show a depth of understanding of the movement of celestial objects, which allowed these objects to be used as a practical means for creating calendars and for navigating across the continent and waters of Australia. There is a diversity of astronomical traditions in Australia, each with its own particular expression of cosmology. However, there appear to be common themes and systems between the groups. Due to the long history of Australian Aboriginal astronomy, the Aboriginal peoples have been described as the "world's first astronomers" on several occasions. Many of the constellations were given names based on their shapes, just as in traditional western astronomy, such as the Pleiades, Orion and the Milky Way, with others, such as the Emu in the Sky, describing the dark patches rather than the points lit by the stars. Contemporary Indigenous Australian art often references astronomical subjects and their related lore, such as the Seven Sisters. Records of Aboriginal astronomy One of the earliest written records of Aboriginal astronomy was made by William Edward Stanbridge, an Englishman who emigrated to Australia in 1841 and befriended the local Boorong people. Interpreting the sky Emu in the sky A constellation used almost everywhere in Australian Aboriginal culture is the "Emu in the Sky", which consists of dark nebulae (opaque clouds of dust and gas in outer space) that are visible against the (centre and other sectors of the) Milky Way background. The Emu's head is the very dark Coalsack nebula, next to the Southern Cross; the body and legs are the extension of the Great Rift trailing out to Scorpius. In Ku-ring-gai Chase National Park, north of Sydney, are extensive rock engravings of the Guringai people who lived there, including representations of the creator-hero Daramulan and his emu-wife. An engraving near the Elvina Track shows an emu in the same pose and orientation as the Emu in the Sky constellation. To the Wardaman, however, the Coalsack is the head of a lawman. Bruce Pascoe's book Dark Emu takes its title from one of the Aboriginal names for the constellation, known as Gugurmin to the Wiradjuri people. In May 2020, the Royal Australian Mint launched a limited edition commemorative one-dollar coin, as the first in its "Star Dreaming" series celebrating Indigenous Australians' astronomy. Canoe in Orion The Yolŋu people of northern Australia say that the constellation of Orion, which they call Julpan (or Djulpan), is a canoe. They tell the story of three brothers who went fishing, and one of them ate a sawfish that was forbidden under their law. Seeing this, the Sun-woman, Walu, made a waterspout that carried him and his two brothers and their canoe up into the sky. The three stars in a line at the constellation's centre, which form Orion's Belt in Western mythology, are the three brothers; the Orion Nebula above them is the forbidden fish; and the bright stars Betelgeuse and Rigel are the bow and stern of the canoe. This is an example of astronomical legends underpinning the ethical and social codes that people use on Earth. Seven Sisters The Pleiades constellation figures in the Dreamings and songlines of several Aboriginal Australian peoples, usually referred to as the Seven Sisters.
The story, which has been described as "one of the most defining and predominant meta-narratives chronicled in ancient mainland Australia", tells of a male ancestral being (with names including Wati Nyiru, Yurlu and others) who pursues seven sisters across the middle of the Australian continent from west to east, where the sisters turn into stars. Told by a number of peoples across the country, using varying names for the characters, it starts in Martu country in the Pilbara region of Western Australia (specifically, Roebourne), and travels across the lands of the Ngaanyatjarra (WA) to the Anangu Pitjantjatjara Yankunytjatjara (APY) lands of South Australia, where the Pitjantjatjara and Yankunytjatjara peoples live. The story also takes in the Warlpiri lands of the Tanami Desert in the Northern Territory. The Yamatji people of the Wajarri language group, of the Murchison region in Western Australia, call the sisters Nyarluwarri. When the constellation is close to the horizon as the sun is setting, the people know that it is the right time to harvest emu eggs, and they also use the brightness of the stars to predict seasonal rainfall. In the Kimberley region of Western Australia, the eagle hawk chases the seven sisters up into the sky, where they become the star cluster and he becomes the Southern Cross. In the Western Desert cultural bloc in central Australia, they are said to be seven sisters fleeing from the unwelcome attentions of a man represented by some of the stars in Orion, the hunter. In these stories, the man is called Nyiru or Nirunja, and the Seven Sisters is a songline known as Kungkarangkalpa. The seven sisters story often features in the artwork of the region, such as the 2017 painting by Tjungkara Ken, Kaylene Whiskey's 2018 work "Seven Sistas", and the large-scale installation by the Tjanpi Desert Weavers commissioned as a feature of the National Gallery of Australia's 2020 Know My Name Exhibition. The Museum of Contemporary Art Australia in Sydney holds a 2013 work by the Tjanpi Desert Weavers called Minyma Punu Kungkarangkalpa (Seven Sisters Tree Women). In March 2013, senior desert dancers from the APY Lands (South Australia), in collaboration with an Australian National University ARC Linkage project and mounted by artistic director Wesley Enoch, performed Kungkarangkalpa: The Seven Sisters Songline on the shores of Lake Burley Griffin in Canberra. In the Warlpiri version of the story, the Napaljarri sisters are often represented carrying a man called Wardilyka, who is in love with the women. But the morning star, Jukurra-jukurra, a man from a different skin group who is also in love with the sisters, chases them across the sky. Each night they launch themselves into the sky, and each night he follows them. This story is known as the Napaljarri-warnu Jukurrpa. The people of the Lake Eyre region in South Australia tell how the ancestor male is prevented from capturing one of the seven sisters by a great flood. The Wirangu people of the west coast of South Australia have a creation story embodied in a songline of great significance based on the Pleiades. In the story, the hunter (the Orion constellation) is named Tgilby. Tgilby, after falling in love with the seven sisters, known as Yugarilya, chases them out of the sky, onto and across the earth. He chases them as the Yugarilya chase a snake, Dyunu.
The Boonwurrung people of the Kulin nation of Victoria tell the Karatgurk story, which tells of how a crow robbed the seven sisters of their secret of how to make fire, thus bringing the skill to the people on earth. In another story, told by peoples of New South Wales, the seven sisters are beautiful women known as the Maya-Mayi, two of whom are kidnapped by a warrior, Warrumma, or Warunna. They eventually escape by climbing a pine tree that continually grows up into the sky, where they join their other sisters. In 2017, a major exhibition entitled Songlines: Tracking the Seven Sisters was mounted at the National Museum of Australia, afterwards travelling to Berlin (2022) and Paris (2023). In September 2020, the Royal Australian Mint issued its second commemorative one-dollar coin in its "Star Dreaming" series celebrating Indigenous Australians' astronomy (see Emu in the sky above). The Milky Way The Kaurna people of the Adelaide Plains of South Australia called the (centre and other sectors of) the Milky Way wodliparri in the Kaurna language, meaning "house river". They believed that Karrawirra Parri (the River Torrens) was a reflection of wodliparri. The Yolŋu people believe that when they die, they are taken by a mystical canoe, Larrpan, to the spirit-island Baralku in the sky, where their camp-fires can be seen burning along the edge of the great river of the Milky Way. The canoe is sent back to Earth as a shooting star, letting their family on Earth know that they have arrived safely in the spirit-land. The canoe itself was also thought of as a god. The Boorong people see in the Southern Cross a possum in a tree. Sun and Moon Many traditions have stories of a female Sun and a male Moon. The Yolŋu say that Walu, the Sun-woman, lights a small fire each morning, which we see as the dawn. She paints herself with red ochre, some of which spills onto the clouds, creating the sunrise. She then lights a torch and carries it across the sky from east to west, creating daylight. At the end of her journey, as she descends from the sky, some of her ochre paint again rubs off onto the clouds, creating the sunset. She then puts out her torch, and throughout the night travels underground back to her starting camp in the east. Other Aboriginal peoples of the Northern Territory call her Wuriupranili. Other stories about the Sun involve Wala, Yhi, and Gnowee. The Yolŋu tell that Ngalindi, the Moon-man, was once young and slim (the waxing Moon), but grew fat and lazy (the full Moon). His wives chopped bits off him with their axes (the waning Moon); to escape them he climbed a tall tree towards the Sun, but died from the wounds (the new Moon). After remaining dead for three days, he rose again to repeat the cycle, and continues doing so to this day. The Kuwema people in the Northern Territory say that he grows fat at each full Moon by devouring the spirits of those who disobey the tribal laws. Another story, from the Aboriginal peoples of Cape York, involves the making of a giant boomerang that is thrown into the sky and becomes the Moon. A story from Southern Victoria concerns a beautiful woman who is forced to live by herself in the sky after a number of scandalous affairs. The Yolŋu also associated the Moon with the tides. Eclipses The Warlpiri people explain a solar eclipse as being the Sun-woman being hidden by the Moon-man as he makes love to her. This explanation is shared by other groups, such as the Wirangu.
In the Ku-ring-gai Chase National Park there are a number of engravings showing a crescent shape, with sharp horns pointing down, and below it a drawing of a man in front of a woman. While the crescent shape has been assumed by most researchers to represent a boomerang, some argue that it is more easily interpreted as a solar eclipse, with the mythical man-and-woman explanation depicted below it. Venus The rising of Venus marks an important ceremony of the Yolŋu, who call it Barnumbirr ("Morning Star and Evening Star"). They gather after sunset to await the rising of the planet. As she reappears (or in other nearby weeks appears only) in the early hours before dawn, the Yolŋu say that she draws behind her a rope of light attached to the island of Baralku on Earth, and along this rope, with the aid of a richly decorated "Morning Star Pole", the people are able to communicate with their dead loved ones, showing that they still love and remember them. Jupiter The Dja Dja Wurrung call Jupiter "Bunjil's campfire". The planet features in the Dja Dja Wurrung Aboriginal Clans Corporation logo, as a symbol of the Creator Spirit. Eta Carinae In 2010, astronomers Duane Hamacher and David Frew from Macquarie University in Sydney showed that the Boorong Aboriginal people of northwestern Victoria, Australia, witnessed the outburst of Eta Carinae in the 1840s and incorporated it into their oral traditions as Collowgulloric War, the wife of War (Canopus, the Crow – ). This is the only definitive indigenous record of Eta Carinae's outburst identified in the literature to date. Astronomical calendars Aboriginal calendars tend to differ from European calendars: many groups in northern Australia use a calendar with six seasons, and some groups mark the seasons by the stars which are visible during them. For the Pitjantjatjara, for example, the rising of the Pleiades at dawn (in May) marks the start of winter. It is not known to what extent Aboriginal people were interested in the precise motion of the sun, moon, planets or stars. However, it likely that some of the stone arrangements in Victoria such as Wurdi Youang near Little River, Victoria, may have been used to predict and confirm the equinoxes and/or solstices. The arrangement is aligned with the setting sun at the solstices and equinox, but its age is unknown. There are rock engravings by the Nganguraku people at Ngaut Ngaut which, according to oral tradition, represent lunar cycles. Most of their culture (including their language) has been lost because of the banning of such things by Christian missionaries before 1913. Stories enrich a custom-linked calendar whereby the heliacal rising or setting of stars or constellations indicates to Aboriginal Australians when it is time to move to a new place and/or look for a new food source. For example, the Boorong people in Victoria know that when the Malleefowl (Lyra) disappears in October, to "sit with the Sun", it is time to start gathering her eggs on Earth. Other groups know that when Orion first appears in the sky, the dingo puppies are about to be born. When Scorpius appears, the Yolŋu know that the Macassan fisherman would soon arrive to fish for trepang. In contemporary culture A great deal of contemporary Aboriginal art has an astronomical theme, reflecting the astronomical elements of the artists' cultures. Prominent examples are Gulumbu Yunupingu, Bill Yidumduma Harney, and Nami Maymuru, all of whom have won awards or been finalists in the Telstra Indigenous Art Awards. 
In 2009 an exhibition of Indigenous Astronomical Art from WA, named "Ilgarijiri", was launched at AIATSIS in Canberra in conjunction with a Symposium on Aboriginal Astronomy. Other contemporary painters include the daughters of the late Clifford Possum Tjapaltjarri, who have the seven sisters as one of their Dreamings. Gabriella Possum and Michelle Possum paint the Seven Sisters Dreaming in their paintings. They inherited this Dreaming through their maternal line. See also Australian Aboriginal Astronomy Project Archaeoastronomy Indigenous Australian art List of archaeoastronomical sites by country Pleiades in folklore and literature References Further reading } ABC Message Stick program on Aboriginal Astronomy The Emu in the Sky story at Questacon ABC Radio National Artworks piece on "The First Astronomers" Cairns, H. & Yidumduma Harney, B. (2003). Dark Sparklers: Yidumduma's Aboriginal Astronomy. Hugh Cairns, Sydney. Fredrick, S. (2008). The Sky of Knowledge: A Study of the Ethnoastronomy of the Aboriginal People of Australia. Master of Philosophy Thesis. Department of Archaeology and Ancient History, University of Leicester, UK. Fuller, R.S.; Hamacher, D.W. & Norris, R.P. (2013). Astronomical Orientations of Bora Ceremonial Grounds in Southeast Australia. Australian Archaeology, No. 77, pp. 30–37. Hamacher, D.W. (2013). Aurorae in Australian Aboriginal Traditions." Journal of Astronomical History & Heritage", Vol. 16(2), pp. 207–219. Hamacher, D.W. (2012). On the Astronomical Knowledge and Traditions of Aboriginal Australians. Doctor of Philosophy Thesis. Department of Indigenous Studies, Macquarie University, Sydney, Australia. Hamacher, D.W. & Norris, R.P. (2011). Bridging the Gap through Australian Cultural Astronomy. In Archaeoastronomy & Ethnoastronomy: building bridges between cultures, edited by C. Ruggles. Cambridge University Press, pp. 282–290. Haynes, R.F., et al. (1996). Dreaming the Stars. In Explorers of the Southern Sky, edited by R. Haynes. Cambridge University Press, pp. 7–20. Johnson, D. (1998). Night skies of Aboriginal Australia: a Noctuary. University of Sydney Press. Morieson, J. (1996). The Night Sky of the Boorong. Master of Arts Thesis, Australian Centre, University of Melbourne. Morieson, J. (2003). The Astronomy of the Boorong. World Archaeological Congress, June 2003. Norris, R.P. & Hamacher, D.W. (2013). Australian Aboriginal Astronomy: An Overview. In Handbook of Cultural Astronomy, edited by C. Ruggles. Springer, in press. Norris, R.P. & Hamacher, D.W. (2009). The Astronomy of Aboriginal Australia. In The Role of Astronomy in Society and Culture, edited by D. Valls-Gabaud & A. Boksenberg. Cambridge University Press, pp. 39–47. Norris, R.P. & Norris, P.M. (2008). Emu Dreaming: An Introduction to Aboriginal Astronomy. Emu Dreaming, Sydney. Norris, R. P., (2016) External links Website created by Kokatha artist Darryl Milika, designer of the Yerrakartarta art installation in Adelaide. Australian Aboriginal mythology Archaeoastronomy Astronomy in Australia
Australian Aboriginal astronomy
Astronomy
3,800
9,516,673
https://en.wikipedia.org/wiki/RNA%20activation
RNA activation (RNAa) is a small RNA-guided and Argonaute (Ago)-dependent gene regulation phenomenon in which promoter-targeted short double-stranded RNAs (dsRNAs) induce target gene expression at the transcriptional/epigenetic level. RNAa was first reported in a 2006 PNAS paper by Li et al., who also coined the term "RNAa" as a contrast to RNA interference (RNAi) to describe this gene activation phenomenon. dsRNAs that trigger RNAa have been termed small activating RNA (saRNA). Since the initial discovery of RNAa in human cells, many other groups have made similar observations in different species, including mammals (human, non-human primates, rat and mouse), plants and C. elegans, suggesting that RNAa is an evolutionarily conserved mechanism of gene regulation. RNAa can be generally classified into two categories: exogenous and endogenous. Exogenous RNAa is triggered by artificially designed saRNAs which target non-coding sequences such as the promoter and the 3’ terminus of a gene; these saRNAs can be chemically synthesized or expressed as short hairpin RNA (shRNA). In endogenous RNAa, by contrast, upregulation of gene expression is guided by naturally occurring endogenous small RNAs such as miRNAs in mammalian cells and C. elegans, and 22G RNAs in C. elegans. Mechanism The molecular mechanism of RNAa is not fully understood. Similar to RNAi, it has been shown that mammalian RNAa requires members of the Ago clade of Argonaute proteins, particularly Ago2, but possesses kinetics distinct from RNAi. In contrast to RNAi, promoter-targeted saRNAs induce prolonged activation of gene expression associated with epigenetic changes. It is currently suggested that saRNAs are first loaded and processed by an Ago protein to form an Ago-RNA complex which is then guided by the RNA to its promoter target. The target can be a non-coding transcript overlapping the promoter or the chromosomal DNA. The RNA-loaded Ago then recruits other proteins such as RHA, also known as nuclear DNA helicase II, and CTR9 to form an RNA-induced transcriptional activation (RITA) complex. RITA can directly interact with RNAP II to stimulate transcription initiation and productive transcription elongation, which is associated with increased ubiquitination of H2B. Endogenous RNAa In 2008, Place et al. identified targets for the miRNA miR-373 on the promoters of several human genes and found that introduction of miR-373 mimics into human cells induced the expression of its predicted target genes. This study provided the first example of RNAa being mediated by a naturally occurring non-coding RNA (ncRNA). In 2011, Huang et al. further demonstrated in mouse cells that endogenous RNAa mediated by miRNAs functions in a physiological context and is possibly exploited by cancer cells to gain a growth advantage. Since then, a number of miRNAs have been shown to upregulate gene expression by targeting gene promoters or enhancers, thereby exerting important biological roles. A good example is miR-551b-3p, which is overexpressed in ovarian cancer due to amplification. By targeting the promoter of STAT3 to increase its transcription, miR-551b-3p confers on ovarian cancer cells resistance to apoptosis and a proliferative advantage. In C. elegans hypodermal seam cells, the transcription of lin-4 miRNA is positively regulated by lin-4 itself, which binds to a conserved lin-4 complementary element in its promoter, constituting a positive autoregulatory loop. In C.
elegans, Argonaute CSR-1 interacts with 22G small RNAs derived from RNA-dependent RNA polymerase and antisense to germline-expressed transcripts to protect these mRNAs from Piwi-piRNA mediated silencing via promoting epigenetic activation. It is currently unknown how widespread gene regulation by endogenous RNAa is in mammalian cells. Studies have shown that both miRNAs and Ago proteins (Ago1) bind to numerous sites in human genome, especially promoter regions, to exert a largely positive effect on gene transcription. Applications RNAa has been used to study gene function in lieu of vector-based gene overexpression. Studies have demonstrated RNAa in vivo and its potential therapeutic applications in treating cancer and non-cancerous diseases. In June 2016, UK-based MiNA Therapeutics announced the initiation of a phase I trial of the first-ever saRNA drug MTL-CEBPA in patients with liver cancer, in an attempt to activate CEBPA gene. References Further reading External links RNAa FAQs Li Lab, University of California San Francisco How to get your genes switched on. New Scientist 16 November 2006 RNA Gene expression
RNA activation
Chemistry,Biology
1,020
1,438,662
https://en.wikipedia.org/wiki/Saha%20ionization%20equation
In physics, the Saha ionization equation is an expression that relates the ionization state of a gas in thermal equilibrium to the temperature and pressure. The equation is a result of combining ideas of quantum mechanics and statistical mechanics and is used to explain the spectral classification of stars. The expression was developed by physicist Meghnad Saha in 1920. It is discussed in many textbooks on statistical physics and plasma physics. Description For a gas at a high enough temperature (here measured in energy units, i.e. keV or J) and/or density, the thermal collisions of the atoms will ionize some of the atoms, making an ionized gas. When several or more of the electrons that are normally bound to the atom in orbits around the atomic nucleus are freed, they form an independent electron gas cloud co-existing with the surrounding gas of atomic ions and neutral atoms. With sufficient ionization, the gas can become the state of matter called plasma. The Saha equation describes the degree of ionization for any gas in thermal equilibrium as a function of the temperature, density, and ionization energies of the atoms. For a gas composed of a single atomic species, the Saha equation is written:

\frac{n_{i+1} n_e}{n_i} = \frac{2}{\lambda^3} \frac{g_{i+1}}{g_i} \exp\left(-\frac{\epsilon_{i+1}-\epsilon_i}{k_B T}\right)

where:
n_i is the density of atoms in the i-th state of ionization, that is with i electrons removed,
g_i is the degeneracy of states for the i-ions,
\epsilon_i is the energy required to remove i electrons from a neutral atom, creating an i-level ion,
n_e is the electron density,
k_B is the Boltzmann constant,
\lambda is the thermal de Broglie wavelength of an electron, \lambda = h / \sqrt{2\pi m_e k_B T},
m_e is the mass of an electron,
T is the temperature of the gas,
h is the Planck constant.
The expression \epsilon_{i+1} - \epsilon_i is the energy required to remove the (i+1)-th electron. In the case where only one level of ionization is important, we have n_1 = n_e and, defining the total density n as n = n_0 + n_1, the Saha equation simplifies to:

\frac{n_e^2}{n - n_e} = \frac{2}{\lambda^3} \frac{g_1}{g_0} \exp\left(-\frac{\epsilon}{k_B T}\right)

where \epsilon = \epsilon_1 is the energy of ionization. We can define the degree of ionization x = n_e / n and find

\frac{x^2}{1-x} = \frac{2}{n \lambda^3} \frac{g_1}{g_0} \exp\left(-\frac{\epsilon}{k_B T}\right) \equiv A.

This gives a quadratic equation that can be solved in closed form:

x = \frac{-A + \sqrt{A^2 + 4A}}{2}.

For small x, x \approx \sqrt{A}, so that the ionization decreases with density. Note that except for weakly ionized plasmas, the plasma environment affects the atomic structure with the subsequent lowering of the ionization potentials and the "cutoff" of the partition function. Therefore, g_i and \epsilon_i depend, in general, on n_e and T, and solving the Saha equation is only possible iteratively. As a simple example, imagine a gas of monatomic hydrogen atoms, take the statistical-weight factor to be of order unity and let \epsilon = 13.6 eV, the ionization energy of hydrogen from its ground state. Let n equal the Loschmidt constant, the particle density of Earth's atmosphere at standard pressure and temperature. At ordinary atmospheric temperatures the ionization is essentially none, and there would almost certainly be no ionized atoms in the volume of Earth's atmosphere. The degree of ionization x increases rapidly with T, reaching 0.35 at a temperature for which k_B T is still much less than the ionization energy (although this depends somewhat on density). This is a common occurrence. Physically, it stems from the fact that at a given temperature, the particles have a distribution of energies, including some with several times the mean thermal energy. These high-energy particles are much more effective at ionizing atoms. In Earth's atmosphere, ionization is actually governed not by the Saha equation but by very energetic cosmic rays, largely muons. These particles are not in thermal equilibrium with the atmosphere, so they are not at its temperature and the Saha logic does not apply.
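As a rough numerical illustration of the single-ionization form above, the following Python sketch solves the quadratic for the degree of ionization x of atomic hydrogen. The statistical-weight ratio is set to 1 and the ionization energy to 13.6 eV purely for illustration, and the temperatures and density in the loop are arbitrary example values rather than figures taken from the article.

import math

# Physical constants (SI units)
k_B = 1.380649e-23      # Boltzmann constant, J/K
h = 6.62607015e-34      # Planck constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg
eV = 1.602176634e-19    # joules per electron-volt

def degree_of_ionization(T, n, eps_eV=13.6, g_ratio=1.0):
    """Return x = n_e/n from x**2/(1-x) = A, where
    A = (2 / (n * lam**3)) * g_ratio * exp(-eps / (k_B*T))
    and lam is the electron's thermal de Broglie wavelength."""
    lam = h / math.sqrt(2.0 * math.pi * m_e * k_B * T)
    A = (2.0 / (n * lam ** 3)) * g_ratio * math.exp(-eps_eV * eV / (k_B * T))
    # Quadratic x**2 + A*x - A = 0; keep the positive root.
    return (-A + math.sqrt(A * A + 4.0 * A)) / 2.0

n = 2.69e25  # about the Loschmidt constant: particles per m^3 of air at standard conditions
for T in (300.0, 5000.0, 10000.0, 20000.0):
    print(f"T = {T:8.0f} K  ->  x = {degree_of_ionization(T, n):.3e}")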
Particle densities The Saha equation is useful for determining the ratio of particle densities for two different ionization levels. The most useful form of the Saha equation for this purpose is

\frac{n_{i+1}}{n_i} = \frac{2}{n_e \lambda^3} \frac{Z_{i+1}}{Z_i} \exp\left(-\frac{\epsilon_{i+1}-\epsilon_i}{k_B T}\right)

where Z denotes the partition function. The Saha equation can be seen as a restatement of the equilibrium condition for the chemical potentials:

\mu_i = \mu_{i+1} + \mu_e

This equation simply states that the potential for an atom of ionization state i to ionize is the same as the potential for an electron and an atom of ionization state i+1; the potentials are equal, therefore the system is in equilibrium and no net change of ionization will occur. Stellar atmospheres In the early 1920s Ralph H. Fowler (in collaboration with Charles Galton Darwin) developed a new method in statistical mechanics permitting a systematic calculation of the equilibrium properties of matter. He used this to provide a rigorous derivation of the ionization formula which Saha had obtained, by extending to the ionization of atoms the theorem of Jacobus Henricus van 't Hoff, used in physical chemistry for its application to molecular dissociation. Also, a significant improvement in the Saha equation introduced by Fowler was to include the effect of the excited states of atoms and ions. A further important step forward came in 1923, when Edward Arthur Milne and R.H. Fowler published a paper in the Monthly Notices of the Royal Astronomical Society, showing that the criterion of the maximum intensity of absorption lines (belonging to subordinate series of a neutral atom) was much more fruitful in giving information about physical parameters of stellar atmospheres than the criterion employed by Saha, which consisted in the marginal appearance or disappearance of absorption lines. The latter criterion requires some knowledge of the relevant pressures in the stellar atmospheres, and Saha, following the generally accepted view at the time, assumed a value of the order of 1 to 0.1 atmosphere. Milne wrote: Saha had concentrated on the marginal appearances and disappearances of absorption lines in the stellar sequence, assuming an order of magnitude for the pressure in a stellar atmosphere and calculating the temperature where increasing ionization, for example, inhibited further absorption of the line in question owing to the loss of the series electron. As Fowler and I were one day stamping round my rooms in Trinity and discussing this, it suddenly occurred to me that the maximum intensity of the Balmer lines of hydrogen, for example, was readily explained by the consideration that at the lower temperatures there were too few excited atoms to give appreciable absorption, whilst at the higher temperatures there are too few neutral atoms left to give any absorption. ... That evening I did a hasty order of magnitude calculation of the effect and found that to agree with a temperature of 10000° [K] for the stars of type A0, where the Balmer lines have their maximum, a pressure of the order of 10−4 atmosphere was required. This was very exciting, because standard determinations of pressures in stellar atmospheres from line shifts and line widths had been supposed to indicate a pressure of the order of one atmosphere or more, and I had begun on other grounds to disbelieve this. The generally accepted view at the time assumed that the composition of stars was similar to that of Earth.
However, in 1925 Cecilia Payne used Saha's ionization theory to calculate that the composition of stellar atmospheres is as we now know it: mostly hydrogen and helium, expanding the knowledge of stars. Stellar coronae Saha equilibrium prevails when the plasma is in local thermodynamic equilibrium, which is not the case in the optically thin corona. Here the equilibrium ionization states must be estimated by detailed statistical calculation of collision and recombination rates. Early universe Equilibrium ionization, described by the Saha equation, explains evolution in the early universe. After the Big Bang, all atoms were ionized, leaving mostly protons and electrons. According to Saha's approach, when the universe had expanded and cooled such that the temperature dropped to a few thousand kelvin, electrons recombined with protons, forming hydrogen atoms. At this point, the universe became transparent to most electromagnetic radiation. That surface, red-shifted by a factor of about 1,000, generates the 3 K cosmic microwave background radiation, which pervades the universe today. References External links Derivation & Discussion by Hale Bradt A detailed derivation from the University of Utah Physics Department Lecture notes from the University of Maryland Department of Astronomy Atomic physics Eponymous equations of physics Plasma physics equations
Saha ionization equation
Physics,Chemistry
1,603
1,705,111
https://en.wikipedia.org/wiki/Gamboge
Gamboge is a deep-yellow pigment derived from a species of tree that primarily grows in Cambodia. Popular in East Asian watercolor works, it has been used across a number of media dating back to the 8th century. Easy to transport and manipulate into a durable watercolor paint, gamboge is notable for its versatility as a pigment, having been used in paintings, the printing of books, and garment dyes, including the robes of Buddhist monks. Gamboge is toxic to humans, and is potentially deadly in larger doses. Due to its toxicity and poor lightfastness, gamboge is no longer used in paints, though limited use continues in other contexts. Though used in a number of different contexts, gamboge is known to react poorly with lime surfaces, making it unsuitable for frescos, and with white lead. Despite its popularity, gamboge has not been extensively identified in works of art from any time period; the few instances in which art historians have attempted to determine whether or not the pigment was used in a given work have confirmed its widespread use and its longevity as a staple within watercolor painting, particularly in eastern art. History Gamboge's first recorded use dates back to the 8th century, during which time it appeared in Japanese art. Some historians speculate that small shipments of the pigment reached Europe via occasional overland trade journeys from Asia. Gamboge would become much more accessible beginning in the 17th century as shipping grew in popularity as a method of transporting goods from Asia to Europe. It was around this time that gamboge's popularity in watercolors grew tremendously. The pigment is derived from the gum of a species of evergreen of the family Guttiferae which grows in southeast Asia, primarily Cambodia and Thailand. In fact, gamboge gets its name from a now-antiquated name for Cambodia, Camboja, though it was also referred to as gama gitta in some 17th-century European color manuals. The pigment is extracted from Guttiferae trees by cutting several deep incisions into the tree's trunk, allowing the resin to bleed out into pre-mounted bamboo canes, which serve as receptacles to initially catch and later transport the product. The practice of collecting gamboge in bamboo cane was so widespread and recognizable that the pigment was often referred to as “pipe gamboge” for how it conformed to the cylindrical shape of the receptacle. There was a brief shortage of the pigment during the 1970s and 1980s due to trade restrictions placed on the Khmer Rouge regime. It was during this time that shipments of the gum from which gamboge is made were found to contain bullet casings and other impurities which tainted the pigment. While gamboge is best known for its use in artwork, it does have a secondary function as a laxative. In small doses it can cause watery feces, while large doses have been reported to cause death. Production Gamboge is most often extracted by tapping latex (sometimes incorrectly referred to as sap) from various species of evergreen trees of the family Clusiaceae (also known as Guttiferae). The tree most commonly used is the gamboge tree (genus Garcinia), including G. hanburyi (Cambodia and Thailand), G. morella (India and Sri Lanka), and G. elliptica and G. heterandra (Myanmar). The orange fruit of Garcinia gummi-gutta (formerly called G. cambogia) is also known as gamboge or gambooge. The trees must be at least ten years old before they are tapped.
The resin is extracted by making spiral incisions in the bark, and by breaking off leaves and shoots and letting the milky yellow resinous gum drip out. The resulting latex is collected in hollow bamboo canes. After the resin has congealed, the bamboo is broken away and large rods of raw gamboge remain. Visual characteristics Once extracted from the trees in which it is found, gamboge resin has a brownish-yellow color. Once ground, the resin takes on a deep yellow color. Artists sometimes combined Prussian blue with gamboge to create green; they also mixed it with burnt sienna to make orange. Gamboge was most commonly used in watercolor painting. Permanence Gamboge is highly sensitive to light. It is known to react poorly with lime surfaces and, as such, is deemed unsuitable for frescos. Gamboge is also known to react with white lead. Notable occurrences Gamboge has not been extensively identified in works of art from any time period; many analyses of paintings simply identify the presence of "organic yellow" without distinguishing between different organic yellow pigments. Gamboge has been identified as the underlying gold paint in the Maitepnimit Temple in Thailand, as well as featuring in the medieval Armenian Glajor Gospel. Though not authenticated, it has also been noted that Rembrandt may have used the pigment in a few of his works and that it appears in J. M. W. Turner's color box. Etymology The word comes from the Latin name for the pigment, which in turn derives from the Latin name for Cambodia. Its first recorded use as a colour name in English was in 1634. Notes References External links https://web.archive.org/web/20090410075328/http://www.sewanee.edu/chem/chem%26art/Detail_Pages/Pigments/Gamboge Organic pigments Pigments Resins Shades of yellow Shades of orange Plant dyes
Gamboge
Physics
1,170
4,704,503
https://en.wikipedia.org/wiki/Sikhye
Sikhye (also spelled shikhye or shikeh; also occasionally termed dansul or gamju) is a traditional sweet Korean rice beverage, usually served as a dessert. It is a popular beverage in South Korea, often found in the beverage sections of convenience stores. It is a drink made by fermenting rice with malt to give it a sweet taste. In addition to its liquid ingredients, sikhye contains grains of cooked rice and in some cases pine nuts. It is similar to the Chinese jiuniang and Japanese amazake. Preparation Sikhye is made by pouring malt water onto cooked rice. The malt water steeps in the rice, typically at 62 degrees Celsius, until grains of rice appear on the surface. The liquid is filtered and boiled until it gets sweet enough (no sugar is added to this drink). In South Korea and in overseas Korean grocery stores, sikhye is readily available in cans or plastic bottles. One of the largest South Korean producers of sikhye is the Vilac company of Busan. Most canned sikhye typically has a residue of cooked rice at the bottom. Homemade sikhye is often served after a meal in a Korean restaurant. One method of making sikhye is to first measure out the malt, steep it in warm water, wash it, strain it through a fine sieve, and then let the water settle. Regional variations There are several regional variations of sikhye. These include Andong sikhye and yeonyeop sikhye or yeonyeopju, a variety of sikhye made in Gangwon province. Andong sikhye differs in that it includes radishes, carrots, and powdered red pepper. Also, it is fermented for several days as opposed to being boiled. The crunchy texture of the radish is kept despite the longer fermentation process; a soft texture would indicate an inferior product. Whereas the sweet canned or restaurant sikhye is enjoyed as a dessert beverage, Andong sikhye is appreciated as a digestive aid, containing lactobacillus. Names Sikhye is also referred to by the names dansul and gamju (감주; 甘酒). Both of these names mean "sweet wine." However, they are also used to refer to a different, slightly alcoholic rice drink called gamju. Hobak-sikhye Hobak-sikhye (pumpkin sikhye) is a water-boiled broth with pumpkin, steamed rice, and malt. It is fermented for several days at a proper temperature. Some sugar is added to sweeten it. Andong sikhye Andong sikhye is the original sikhye of Andong, South Korea, and is a little different from other sikhyes; its color is light red from the added red pepper. Though also made with rice, it is left to ferment naturally rather than rushed through the process using the boiling method. Sikhye, especially the type enjoyed in this city but also the most common variety, is high in probiotic bacteria. Yeonyeop-sikhye Yeonyeop-sikhye is made by wrapping hot glutinous rice, sake, and honey in a lotus leaf. Before drinking, a few pine nuts are floated on top. Effects Sikhye is believed to aid digestion, as it contains dietary fiber and anti-oxidants. It was regularly served to royalty after meals to help digestion. Sikhye is said to help people with a "cold" constitution feel warmer and those with an overly "warm" constitution feel cooler. It is also believed to be very helpful for relieving hangovers. Origin of the word Sikhye is a word that does not exist in Chinese or Japanese; it is a native Korean word, and "shikhye" is a variant with the same pronunciation and meaning. Sik (or sak) is related to ripening or maturing, and hye to making alcohol or a sweet liquor.
These two words were combined to form the name. However, there is not yet a solid literary basis for this etymology. Preparation Barley is sprouted in water, then ground, filtered, and fermented. Gallery See also Gamju Korean cuisine Korean tea Plant milk Rice milk Sujeonggwa Sungnyung References External links Picture Naver Encyclopedia article Netcooks recipe Lifeinkorea recipe Fermented drinks Korean drinks Rice drinks Andong
Sikhye
Biology
917
20,590
https://en.wikipedia.org/wiki/Mathematical%20model
A mathematical model is an abstract description of a concrete system using mathematical concepts and language. The process of developing a mathematical model is termed mathematical modeling. Mathematical models are used in applied mathematics and in the natural sciences (such as physics, biology, earth science, chemistry) and engineering disciplines (such as computer science, electrical engineering), as well as in non-physical systems such as the social sciences (such as economics, psychology, sociology, political science). It can also be taught as a subject in its own right. The use of mathematical models to solve problems in business or military operations is a large part of the field of operations research. Mathematical models are also used in music, linguistics, and philosophy (for example, intensively in analytic philosophy). A model may help to explain a system and to study the effects of different components, and to make predictions about behavior. Elements of a mathematical model Mathematical models can take many forms, including dynamical systems, statistical models, differential equations, or game theoretic models. These and other types of models can overlap, with a given model involving a variety of abstract structures. In general, mathematical models may include logical models. In many cases, the quality of a scientific field depends on how well the mathematical models developed on the theoretical side agree with results of repeatable experiments. Lack of agreement between theoretical mathematical models and experimental measurements often leads to important advances as better theories are developed. In the physical sciences, a traditional mathematical model contains most of the following elements: Governing equations Supplementary sub-models Defining equations Constitutive equations Assumptions and constraints Initial and boundary conditions Classical constraints and kinematic equations Classifications Mathematical models are of different types: Linear vs. nonlinear. If all the operators in a mathematical model exhibit linearity, the resulting mathematical model is defined as linear. A model is considered to be nonlinear otherwise. The definition of linearity and nonlinearity is dependent on context, and linear models may have nonlinear expressions in them. For example, in a statistical linear model, it is assumed that a relationship is linear in the parameters, but it may be nonlinear in the predictor variables. Similarly, a differential equation is said to be linear if it can be written with linear differential operators, but it can still have nonlinear expressions in it. In a mathematical programming model, if the objective functions and constraints are represented entirely by linear equations, then the model is regarded as a linear model. If one or more of the objective functions or constraints are represented with a nonlinear equation, then the model is known as a nonlinear model.Linear structure implies that a problem can be decomposed into simpler parts that can be treated independently and/or analyzed at a different scale and the results obtained will remain valid for the initial problem when recomposed and rescaled.Nonlinearity, even in fairly simple systems, is often associated with phenomena such as chaos and irreversibility. Although there are exceptions, nonlinear systems and models tend to be more difficult to study than linear ones. 
A common approach to nonlinear problems is linearization, but this can be problematic if one is trying to study aspects such as irreversibility, which are strongly tied to nonlinearity. Static vs. dynamic. A dynamic model accounts for time-dependent changes in the state of the system, while a static (or steady-state) model calculates the system in equilibrium, and thus is time-invariant. Dynamic models typically are represented by differential equations or difference equations. Explicit vs. implicit. If all of the input parameters of the overall model are known, and the output parameters can be calculated by a finite series of computations, the model is said to be explicit. But sometimes it is the output parameters which are known, and the corresponding inputs must be solved for by an iterative procedure, such as Newton's method or Broyden's method. In such a case the model is said to be implicit. For example, a jet engine's physical properties such as turbine and nozzle throat areas can be explicitly calculated given a design thermodynamic cycle (air and fuel flow rates, pressures, and temperatures) at a specific flight condition and power setting, but the engine's operating cycles at other flight conditions and power settings cannot be explicitly calculated from the constant physical properties. Discrete vs. continuous. A discrete model treats objects as discrete, such as the particles in a molecular model or the states in a statistical model; while a continuous model represents the objects in a continuous manner, such as the velocity field of fluid in pipe flows, temperatures and stresses in a solid, and the electric field that applies continuously over the entire model due to a point charge. Deterministic vs. probabilistic (stochastic). A deterministic model is one in which every set of variable states is uniquely determined by parameters in the model and by sets of previous states of these variables; therefore, a deterministic model always performs the same way for a given set of initial conditions. Conversely, in a stochastic model—usually called a "statistical model"—randomness is present, and variable states are not described by unique values, but rather by probability distributions. Deductive, inductive, or floating. A deductive model is a logical structure based on a theory. An inductive model arises from empirical findings and generalization from them. The floating model rests on neither theory nor observation, but is merely the invocation of expected structure. Application of mathematics in social sciences outside of economics has been criticized for unfounded models. Application of catastrophe theory in science has been characterized as a floating model. Strategic vs. non-strategic. Models used in game theory are different in the sense that they model agents with incompatible incentives, such as competing species or bidders in an auction. Strategic models assume that players are autonomous decision makers who rationally choose actions that maximize their objective function. A key challenge of using strategic models is defining and computing solution concepts such as Nash equilibrium. An interesting property of strategic models is that they separate reasoning about rules of the game from reasoning about behavior of the players. Construction In business and engineering, mathematical models may be used to maximize a certain output. The system under consideration will require certain inputs.
The system relating inputs to outputs depends on other variables too: decision variables, state variables, exogenous variables, and random variables. Decision variables are sometimes known as independent variables. Exogenous variables are sometimes known as parameters or constants. The variables are not independent of each other as the state variables are dependent on the decision, input, random, and exogenous variables. Furthermore, the output variables are dependent on the state of the system (represented by the state variables). Objectives and constraints of the system and its users can be represented as functions of the output variables or state variables. The objective functions will depend on the perspective of the model's user. Depending on the context, an objective function is also known as an index of performance, as it is some measure of interest to the user. Although there is no limit to the number of objective functions and constraints a model can have, using or optimizing the model becomes more involved (computationally) as the number increases. For example, economists often apply linear algebra when using input–output models. Complicated mathematical models that have many variables may be consolidated by use of vectors where one symbol represents several variables. A priori information Mathematical modeling problems are often classified into black box or white box models, according to how much a priori information on the system is available. A black-box model is a system of which there is no a priori information available. A white-box model (also called glass box or clear box) is a system where all necessary information is available. Practically all systems are somewhere between the black-box and white-box models, so this concept is useful only as an intuitive guide for deciding which approach to take. Usually, it is preferable to use as much a priori information as possible to make the model more accurate. Therefore, the white-box models are usually considered easier, because if you have used the information correctly, then the model will behave correctly. Often the a priori information comes in forms of knowing the type of functions relating different variables. For example, if we make a model of how a medicine works in a human system, we know that usually the amount of medicine in the blood is an exponentially decaying function, but we are still left with several unknown parameters; how rapidly does the medicine amount decay, and what is the initial amount of medicine in blood? This example is therefore not a completely white-box model. These parameters have to be estimated through some means before one can use the model. In black-box models, one tries to estimate both the functional form of relations between variables and the numerical parameters in those functions. Using a priori information we could end up, for example, with a set of functions that probably could describe the system adequately. If there is no a priori information we would try to use functions as general as possible to cover all different models. An often used approach for black-box models are neural networks which usually do not make assumptions about incoming data. Alternatively, the NARMAX (Nonlinear AutoRegressive Moving Average model with eXogenous inputs) algorithms which were developed as part of nonlinear system identification can be used to select the model terms, determine the model structure, and estimate the unknown parameters in the presence of correlated and nonlinear noise. 
The advantage of NARMAX models compared to neural networks is that NARMAX produces models that can be written down and related to the underlying process, whereas neural networks produce an approximation that is opaque. Subjective information Sometimes it is useful to incorporate subjective information into a mathematical model. This can be done based on intuition, experience, or expert opinion, or based on convenience of mathematical form. Bayesian statistics provides a theoretical framework for incorporating such subjectivity into a rigorous analysis: we specify a prior probability distribution (which can be subjective), and then update this distribution based on empirical data. An example of when such approach would be necessary is a situation in which an experimenter bends a coin slightly and tosses it once, recording whether it comes up heads, and is then given the task of predicting the probability that the next flip comes up heads. After bending the coin, the true probability that the coin will come up heads is unknown; so the experimenter would need to make a decision (perhaps by looking at the shape of the coin) about what prior distribution to use. Incorporation of such subjective information might be important to get an accurate estimate of the probability. Complexity In general, model complexity involves a trade-off between simplicity and accuracy of the model. Occam's razor is a principle particularly relevant to modeling, its essential idea being that among models with roughly equal predictive power, the simplest one is the most desirable. While added complexity usually improves the realism of a model, it can make the model difficult to understand and analyze, and can also pose computational problems, including numerical instability. Thomas Kuhn argues that as science progresses, explanations tend to become more complex before a paradigm shift offers radical simplification. For example, when modeling the flight of an aircraft, we could embed each mechanical part of the aircraft into our model and would thus acquire an almost white-box model of the system. However, the computational cost of adding such a huge amount of detail would effectively inhibit the usage of such a model. Additionally, the uncertainty would increase due to an overly complex system, because each separate part induces some amount of variance into the model. It is therefore usually appropriate to make some approximations to reduce the model to a sensible size. Engineers often can accept some approximations in order to get a more robust and simple model. For example, Newton's classical mechanics is an approximated model of the real world. Still, Newton's model is quite sufficient for most ordinary-life situations, that is, as long as particle speeds are well below the speed of light, and we study macro-particles only. Note that better accuracy does not necessarily mean a better model. Statistical models are prone to overfitting which means that a model is fitted to data too much and it has lost its ability to generalize to new events that were not observed before. Training, tuning, and fitting Any model which is not pure white-box contains some parameters that can be used to fit the model to the system it is intended to describe. If the modeling is done by an artificial neural network or other machine learning, the optimization of parameters is called training, while the optimization of model hyperparameters is called tuning and often uses cross-validation. 
In more conventional modeling through explicitly given mathematical functions, parameters are often determined by curve fitting. Evaluation and assessment A crucial part of the modeling process is the evaluation of whether or not a given mathematical model describes a system accurately. This question can be difficult to answer as it involves several different types of evaluation. Prediction of empirical data Usually, the easiest part of model evaluation is checking whether a model predicts experimental measurements or other empirical data not used in the model development. In models with parameters, a common approach is to split the data into two disjoint subsets: training data and verification data. The training data are used to estimate the model parameters. An accurate model will closely match the verification data even though these data were not used to set the model's parameters. This practice is referred to as cross-validation in statistics. Defining a metric to measure distances between observed and predicted data is a useful tool for assessing model fit. In statistics, decision theory, and some economic models, a loss function plays a similar role. While it is rather straightforward to test the appropriateness of parameters, it can be more difficult to test the validity of the general mathematical form of a model. In general, more mathematical tools have been developed to test the fit of statistical models than models involving differential equations. Tools from nonparametric statistics can sometimes be used to evaluate how well the data fit a known distribution or to come up with a general model that makes only minimal assumptions about the model's mathematical form. Scope of the model Assessing the scope of a model, that is, determining what situations the model is applicable to, can be less straightforward. If the model was constructed based on a set of data, one must determine for which systems or situations the known data is a "typical" set of data. The question of whether the model describes well the properties of the system between data points is called interpolation, and the same question for events or data points outside the observed data is called extrapolation. As an example of the typical limitations of the scope of a model, in evaluating Newtonian classical mechanics, we can note that Newton made his measurements without advanced equipment, so he could not measure properties of particles traveling at speeds close to the speed of light. Likewise, he did not measure the movements of molecules and other small particles, but macro particles only. It is then not surprising that his model does not extrapolate well into these domains, even though his model is quite sufficient for ordinary life physics. Philosophical considerations Many types of modeling implicitly involve claims about causality. This is usually (but not always) true of models involving differential equations. As the purpose of modeling is to increase our understanding of the world, the validity of a model rests not only on its fit to empirical observations, but also on its ability to extrapolate to situations or data beyond those originally described in the model. One can think of this as the differentiation between qualitative and quantitative predictions. One can also argue that a model is worthless unless it provides some insight which goes beyond what is already known from direct investigation of the phenomenon being studied. 
An example of such criticism is the argument that the mathematical models of optimal foraging theory do not offer insight that goes beyond the common-sense conclusions of evolution and other basic principles of ecology. It should also be noted that while mathematical modeling uses mathematical concepts and language, it is not itself a branch of mathematics and does not necessarily conform to any mathematical logic, but is typically a branch of some science or other technical subject, with corresponding concepts and standards of argumentation. Significance in the natural sciences Mathematical models are of great importance in the natural sciences, particularly in physics. Physical theories are almost invariably expressed using mathematical models. Throughout history, more and more accurate mathematical models have been developed. Newton's laws accurately describe many everyday phenomena, but at certain limits theory of relativity and quantum mechanics must be used. It is common to use idealized models in physics to simplify things. Massless ropes, point particles, ideal gases and the particle in a box are among the many simplified models used in physics. The laws of physics are represented with simple equations such as Newton's laws, Maxwell's equations and the Schrödinger equation. These laws are a basis for making mathematical models of real situations. Many real situations are very complex and thus modeled approximately on a computer, a model that is computationally feasible to compute is made from the basic laws or from approximate models made from the basic laws. For example, molecules can be modeled by molecular orbital models that are approximate solutions to the Schrödinger equation. In engineering, physics models are often made by mathematical methods such as finite element analysis. Different mathematical models use different geometries that are not necessarily accurate descriptions of the geometry of the universe. Euclidean geometry is much used in classical physics, while special relativity and general relativity are examples of theories that use geometries which are not Euclidean. Some applications Often when engineers analyze a system to be controlled or optimized, they use a mathematical model. In analysis, engineers can build a descriptive model of the system as a hypothesis of how the system could work, or try to estimate how an unforeseeable event could affect the system. Similarly, in control of a system, engineers can try out different control approaches in simulations. A mathematical model usually describes a system by a set of variables and a set of equations that establish relationships between the variables. Variables may be of many types; real or integer numbers, Boolean values or strings, for example. The variables represent some properties of the system, for example, the measured system outputs often in the form of signals, timing data, counters, and event occurrence. The actual model is the set of functions that describe the relations between the different variables. Examples One of the popular examples in computer science is the mathematical models of various machines, an example is the deterministic finite automaton (DFA) which is defined as an abstract mathematical concept, but due to the deterministic nature of a DFA, it is implementable in hardware and software for solving various specific problems. 
For example, the following is a DFA M with a binary alphabet, which requires that the input contains an even number of 0s: M = (Q, Σ, δ, q0, F), where the set of states is Q = {S1, S2}, the alphabet is Σ = {0, 1}, the start state is q0 = S1, the set of accepting states is F = {S1}, and the transition function δ is defined by the following state-transition table:

          0     1
  S1      S2    S1
  S2      S1    S2

The state S1 represents that there has been an even number of 0s in the input so far, while S2 signifies an odd number. A 1 in the input does not change the state of the automaton. When the input ends, the state will show whether the input contained an even number of 0s or not. If the input did contain an even number of 0s, M will finish in state S1, an accepting state, so the input string will be accepted (a short implementation sketch of this automaton is given after these examples). The language recognized by M is the regular language given by the regular expression 1*( 0 (1*) 0 (1*) )*, where "*" is the Kleene star, e.g., 1* denotes any non-negative number (possibly zero) of symbols "1". Many everyday activities carried out without a thought are uses of mathematical models. A geographical map projection of a region of the earth onto a small, plane surface is a model which can be used for many purposes such as planning travel. Another simple activity is predicting the position of a vehicle from its initial position, direction and speed of travel, using the equation that distance traveled is the product of time and speed. This is known as dead reckoning when used more formally. Mathematical modeling in this way does not necessarily require formal mathematics; animals have been shown to use dead reckoning. Population Growth. A simple (though approximate) model of population growth is the Malthusian growth model. A slightly more realistic and largely used population growth model is the logistic function, and its extensions. Model of a particle in a potential-field. In this model we consider a particle as being a point of mass m which describes a trajectory in space which is modeled by a function x(t) giving its coordinates in space as a function of time. The potential field is given by a function V(x) and the trajectory, that is a function x(t), is the solution of the differential equation:

m \frac{d^2 x(t)}{dt^2} = -\nabla V(x(t))

that can be written also as

m \ddot{x}(t) + \nabla V(x(t)) = 0.

Note this model assumes the particle is a point mass, which is certainly known to be false in many cases in which we use this model; for example, as a model of planetary motion. Model of rational behavior for a consumer. In this model we assume a consumer faces a choice of n commodities labeled 1, 2, ..., n, each with a market price p_1, p_2, ..., p_n. The consumer is assumed to have an ordinal utility function U (ordinal in the sense that only the sign of the differences between two utilities, and not the level of each utility, is meaningful), depending on the amounts of commodities x_1, x_2, ..., x_n consumed. The model further assumes that the consumer has a budget B which is used to purchase a vector x = (x_1, x_2, ..., x_n) in such a way as to maximize U(x_1, x_2, ..., x_n). The problem of rational behavior in this model then becomes a mathematical optimization problem, that is:

\max U(x_1, x_2, \ldots, x_n)

subject to:

\sum_{i=1}^{n} p_i x_i \leq B, \qquad x_i \geq 0 \text{ for all } i.

This model has been used in a wide variety of economic contexts, such as in general equilibrium theory to show existence and Pareto efficiency of economic equilibria. The neighbour-sensing model is a model that explains the mushroom formation from the initially chaotic fungal network. In computer science, mathematical models may be used to simulate computer networks. In mechanics, mathematical models may be used to analyze the movement of a rocket model.
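Returning to the automaton described above, here is a minimal implementation sketch in Python; the dictionary encoding of the transition table and the state names S1/S2 are simply one convenient way of writing the model down.

def accepts_even_zeros(s):
    """Run the DFA M on the string s and report whether it ends in the accepting state."""
    # Transition table: delta[(state, symbol)] -> next state
    delta = {
        ("S1", "0"): "S2", ("S1", "1"): "S1",
        ("S2", "0"): "S1", ("S2", "1"): "S2",
    }
    state = "S1"  # start state, which is also the only accepting state
    for symbol in s:
        state = delta[(state, symbol)]
    return state == "S1"

assert accepts_even_zeros("1001")      # two 0s -> accepted
assert accepts_even_zeros("")          # zero 0s -> accepted
assert not accepts_even_zeros("0111")  # one 0 -> rejected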
See also Agent-based model All models are wrong Cliodynamics Computer simulation Conceptual model Decision engineering Grey box model International Mathematical Modeling Challenge Mathematical biology Mathematical diagram Mathematical economics Mathematical modelling of infectious disease Mathematical finance Mathematical psychology Mathematical sociology Microscale and macroscale models Model inversion Resilience (mathematics) Scientific model Sensitivity analysis Statistical model Surrogate model System identification References Further reading Books Aris, Rutherford [ 1978 ] ( 1994 ). Mathematical Modelling Techniques, New York: Dover. Bender, E.A. [ 1978 ] ( 2000 ). An Introduction to Mathematical Modeling, New York: Dover. Gary Chartrand (1977) Graphs as Mathematical Models, Prindle, Webber & Schmidt Dubois, G. (2018) "Modeling and Simulation", Taylor & Francis, CRC Press. Gershenfeld, N. (1998) The Nature of Mathematical Modeling, Cambridge University Press . Lin, C.C. & Segel, L.A. ( 1988 ). Mathematics Applied to Deterministic Problems in the Natural Sciences, Philadelphia: SIAM. Models as Mediators: Perspectives on Natural and Social Science edited by Mary S. Morgan and Margaret Morrison, 1999. Mary S. Morgan The World in the Model: How Economists Work and Think, 2012. Specific applications Papadimitriou, Fivos. (2010). Mathematical Modelling of Spatial-Ecological Complex Systems: an Evaluation. Geography, Environment, Sustainability 1(3), 67–80. An Introduction to Infectious Disease Modelling by Emilia Vynnycky and Richard G White. External links General reference Patrone, F. Introduction to modeling via differential equations, with critical remarks. Plus teacher and student package: Mathematical Modelling. Brings together all articles on mathematical modeling from Plus Magazine'', the online mathematics magazine produced by the Millennium Mathematics Project at the University of Cambridge. Philosophical Frigg, R. and S. Hartmann, Models in Science, in: The Stanford Encyclopedia of Philosophy, (Spring 2006 Edition) Griffiths, E. C. (2010) What is a model? Applied mathematics Conceptual modelling Knowledge representation Mathematical terminology Mathematical and quantitative methods (economics)
Mathematical model
Mathematics
4,945
72,075,093
https://en.wikipedia.org/wiki/Time%20in%20El%20Salvador
El Salvador observes Central Standard Time (UTC−6) year-round. IANA time zone database In the IANA time zone database, El Salvador is given one zone in the file zone.tab, America/El_Salvador. "SV" refers to the country's ISO 3166-1 alpha-2 country code. Data for El Salvador is taken directly from zone.tab of the IANA time zone database. References External links Current time in El Salvador at Time.is Time in El Salvador at TimeAndDate Time by country Time in North America Geography of El Salvador
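As a practical illustration of the zone entry described above, the following sketch (assuming Python 3.9 or later, whose standard zoneinfo module reads the IANA database) looks up America/El_Salvador and prints the current UTC offset, which should be −6 hours year-round.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # backed by the IANA time zone database

# Look up the single zone defined for El Salvador in zone.tab.
sv = ZoneInfo("America/El_Salvador")

now = datetime.now(timezone.utc).astimezone(sv)
print("Local time in El Salvador:", now.isoformat())
print("UTC offset:", now.utcoffset())  # expected: -1 day, 18:00:00, i.e. UTC-6
```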
Time in El Salvador
Physics
130
906,193
https://en.wikipedia.org/wiki/Native%20chemical%20ligation
Native Chemical Ligation (NCL) is an important extension of the chemical ligation concept for constructing a larger polypeptide chain by the covalent condensation of two or more unprotected peptide segments. Native chemical ligation is the most effective method for synthesizing native or modified proteins of typical size (i.e., proteins of < ~300 amino acids). Reaction In native chemical ligation, the ionized thiol group of an N-terminal cysteine residue of an unprotected peptide attacks the C-terminal thioester of a second unprotected peptide, in an aqueous buffer at pH 7.0 and room temperature. This transthioesterification step is reversible in the presence of an aryl thiol catalyst, rendering the reaction both chemoselective and regioselective, and leads to formation of a thioester-linked intermediate. The intermediate rapidly and spontaneously rearranges by an intramolecular S,N-acyl shift that results in the formation of a native amide ('peptide') bond at the ligation site. Remarks: Thiol additives: The initial transthioesterification step of the native chemical ligation reaction is catalyzed by thiol additives. The most effective and commonly used thiol catalyst is 4-mercaptophenylacetic acid (MPAA). Regioselectivity: The key feature of native chemical ligation of unprotected peptides is the reversibility of the first step, the thiol(ate)–thioester exchange reaction. Native chemical ligation is exquisitely regioselective because that thiol(ate)–thioester exchange step is freely reversible in the presence of an added arylthiol catalyst. The high yields of final ligation product obtained, even in the presence of internal Cys residues in either or both segments, are the result of the irreversibility of the second (S-to-N acyl shift) amide-forming step under the reaction conditions used. Chemoselectivity of NCL: No side products are formed from reaction with the other functional groups present in either peptide segment (e.g. Asp, Glu side chain carboxylic acids; Lys epsilon amino group; Tyr phenolic hydroxyl; Ser, Thr hydroxyls, etc.). Historical context In 1992, Stephen Kent and Martina Schnölzer at The Scripps Research Institute developed the "Chemical Ligation" concept, the first practical method to covalently condense unprotected peptide segments; the key feature of chemical ligation is formation of an unnatural bond at the ligation site. Just two years later, in 1994, Philip Dawson, Tom Muir and Stephen Kent reported "Native Chemical Ligation", an extension of the chemical ligation concept to the formation of a native amide ('peptide') bond: the initial nucleophilic condensation forms a thioester-linked condensation product designed to rearrange spontaneously to the native amide bond at the ligation site. Theodor Wieland and coworkers had reported the S-to-N acyl shift as early as 1953, when the reaction of a valine thioester and the amino acid cysteine in aqueous buffer was shown to yield the dipeptide valine-cysteine. The reaction proceeded through the intermediacy of a thioester containing the sulfur of the cysteine residue. However, Wieland's work did not lead to the development of the native chemical ligation reaction. Rather, the study of amino acid thioester reactions led Wieland and others to develop the 'active ester' method for the synthesis of protected peptide segments by conventional chemical methods carried out in organic solvents. 
Features Native chemical ligation forms the basis of modern chemical protein synthesis, and has been used to prepare numerous proteins and enzymes by total chemical synthesis. The payoff in the native chemical ligation method is that coupling long peptides by this technique is typically near quantitative and provides synthetic access to large peptides and proteins otherwise impossible to make, due to their large size, decoration by post-translational modification, and containing non-coded amino acid or other chemical building blocks. Native chemical ligation is inherently 'Green' in its atom economy and its use of benign solvents. It involves the reaction of an unprotected peptide thioester with a second, unprotected peptide that has an N-terminal cysteine residue. It is carried out in aqueous solution at neutral pH, usually in 6 M guanidine.hydrochloride, in the presence of an arylthiol catalyst and typically gives near-quantitative yields of the desired ligation product. Peptide-thioesters can be directly prepared by Boc chemistry SPPS; however, thioester-containing peptides are not stable to treatment with a nucleophilic base, thus preventing direct synthesis of peptide thioesters by Fmoc chemistry SPPS. Fmoc chemistry solid phase peptide synthesis techniques for generating peptide-thioesters are based on the synthesis of peptide hydrazides that are converted to peptide thioesters post-synthetically. Polypeptide C-terminal thioesters can also be produced in situ, using so-called N,S-acyl shift systems. Bis(2-sulfanylethyl)amido group, also called SEA group, belongs to this family. Polypeptide C-terminal bis(2-sulfanylethyl)amides (SEA peptide segments) react with Cys peptide to give a native peptide bond as in NCL. This reaction, which is called SEA Native Peptide Ligation, is a useful variant of native chemical ligation. In making peptide segments that contain an N-terminal cysteine residue, exposure to ketones should be avoided since these may cap the N-terminal cysteine. Do not use protecting groups that release aldehydes or ketones. For the same reason, the use of acetone should be avoided, particularly in washing glassware used for lyophilization. A feature of the native chemical ligation technique is that the product polypeptide chain contains cysteine at the site of ligation. The cysteine at the ligation site can be desulfurized to alanine, thus extending the range of possible ligation sites to include alanine residues. Other beta-thiol containing amino acids can be used for native chemical ligation, followed by desulfurization. Alternatively, thiol-containing ligation auxiliaries can be used that mimic an N-terminal cysteine for the ligation reaction, but which can be removed after synthesis. The use of thiol-containing auxiliaries may not be as effective as ligation at a Cys residue. Native chemical ligation can also be performed with an N-terminal selenocysteine residue. Polypeptide C-terminal thioesters produced by recombinant DNA techniques can be reacted with an N-terminal Cys containing polypeptide by the same native ligation chemistry to provide very large semi-synthetic proteins. Native chemical ligation of this kind using a recombinant polypeptide segment is known as Expressed Protein Ligation. Similarly, a recombinant protein containing an N-terminal Cys can be reacted with a synthetic polypeptide thioester. Thus, native chemical ligation can be used to introduce chemically synthesized segments into recombinant proteins, regardless of size. 
See also Intein KAHA Ligation Peptide synthesis Protein synthesis SEA Native Peptide Ligation References Further reading Peptides
Native chemical ligation
Chemistry
1,618
43,553,152
https://en.wikipedia.org/wiki/Michelle%20Coote
Michelle Louise Coote FRSC FAA is an Australian polymer chemist. She has published extensively in the fields of polymer chemistry, radical chemistry and computational quantum chemistry. She is an Australian Research Council (ARC) Future Fellow, Fellow of the Royal Society of Chemistry (FRSC) and Fellow of the Australian Academy of Science (FAA). Coote is a professor of chemistry in the Australian National University (ANU) College of Physical and Mathematical Sciences. She is a member of the ARC Centre of Excellence for Electromaterials Science and past chief investigator in the ARC Centre of Excellence for Free-Radical Chemistry and Biotechnology. Education and early career Professor Michelle Coote completed a B.Sc. (Hons) in Industrial Chemistry at the University of New South Wales in 1995. During her degree she spent 15 months working in the chemical industry, "but it made me realise that my real interest was in a career in pure chemical research. So, I went back to university and ended up graduating in 1995 with the university medal." Graduating in 2000 with a PhD in Polymer Chemistry from UNSW, Coote took out major awards including the Cornforth Medal from the Royal Australian Chemical Institute (RACI) and the prize for young scientists from the International Union of Pure and Applied Chemistry (IUPAC) for her PhD thesis 'The origin of the penultimate unit effect in free-radical copolymerisation'. Coote left Australia for the UK in September 1999 to take up a Post Doctoral Research Role in polymer physics focusing on neutron reflectivity within the Polymer Interdisciplinary Research Centre at The University of Durham. Academic career in Australia Coote returned to Australia in 2001 and joined the Research School of Chemistry, Australian National University as a postdoctoral fellow with Leo Radom. It was during this time that she began to build a reputation in computational chemistry, and she established an independent research group on the computer-aided chemical design at ANU in 2004. Awarded an ARC Future Fellowship in 2010, Coote focused on a computer-guided experimental approach to understand and control the stereochemistry of free-radical polymerisation. Since then, Coote has received a number of grants from the Australian Research Council, including the Georgina Sweet Australian Laureate Fellowship in 2017. Today, her research interests span several broad areas of fundamental and applied chemistry: stereocontrol in free-radical polymerisation, polymer degradation and stabilisation, radical stability and, most recently, electric field effects on chemical reactivity. Coote became the first female Professor of Chemistry at ANU in 2011. Awards, prizes and recognition Coote received numerous awards, including the Rennie Memorial Medal (2006), David Sangster Polymer Science and Technology Achievement Award (2011) and H.G. Smith medal (2016) from the Royal Australian Chemical Institute, the Le Fevre Memorial Prize of the Australian Academy of Science (2010) and the Pople Medal of the Asia-Pacific Association for Theoretical and Computational Chemistry (2015). She was also named the 2019 Schleyer lecturer, becoming the first female and the second Australian since the series beginning in 2001. Coote was elected a Fellow of the Royal Society of Chemistry in March 2013. She was elected a Fellow of the Australian Academy of Science in 2014 for developing and applying accurate computational chemistry for modelling radical polymerization processes. 
Coote gave her New Fellows' Presentation in July 2014. Coote was recognised by ANU in 2012 as part of their International Women's day celebrations for her achievements as a role model as the first female professor of chemistry at ANU and for inspiring, mentoring and motivating female undergraduate and postgraduate students in the sciences. In December 2016, Coote was appointed as the first Australian Associate Editor of the premier Chemistry journal, the Journal of the American Chemical Society. See also List of chemists References Australian chemists Australian women chemists Australian women academics University of New South Wales alumni Academics of Durham University Academic staff of the Australian National University Fellows of the Australian Academy of Science Fellows of the Royal Society of Chemistry Living people Year of birth missing (living people) Computational chemists
Michelle Coote
Chemistry
821
73,411,485
https://en.wikipedia.org/wiki/Americium%20hexafluoride
Americium hexafluoride is an inorganic chemical compound of americium metal and fluorine with the chemical formula AmF6. It is still a hypothetical compound. Synthesis by fluorination of americium tetrafluoride was unsuccessfully attempted in 1990. A thermochromatographic identification in 1986 remains inconclusive. Calculations suggest that it may be distorted from octahedral symmetry. Synthesis It has been proposed that AmF6 could be prepared, in both the condensed and gaseous states, by the reaction of americium tetrafluoride with a strong fluorinating agent in anhydrous HF at 313–333 K. References Americium compounds Metal halides Hexafluorides Hypothetical chemical compounds Theoretical chemistry Actinide halides
Americium hexafluoride
Chemistry
144
25,732,074
https://en.wikipedia.org/wiki/Plastigauge
Plastigauge is a measuring tool used to determine plain bearing clearances, such as in engines. Other uses include marine drive shaft bearings, turbine housing bearings, pump and pressure system bearings, shaft end-float, and flatness and clearance in pipe flanges and cylinder heads, and generally wherever it is required to determine the separation between hidden surfaces. Plastigauge is a registered trademark of Plastigauge Ltd., West Sussex, United Kingdom. Plastigauge was introduced to US retail sales in 1948. Plastigauge consists of a strip of soft material with precisely known dimensions and deformation characteristics. This is sandwiched between a clean bearing surface on a shaft and the bearing shell itself. The Plastigauge strip flattens as the bearing cap is tightened. The dimensional clearance is then determined by comparing the width of the flattened gauge material against a template. A letter designation describes the measurement range of each gauge. References External links Manufacturer's website USA distributor website Russian distributor website Plastigauge in use How to use Plastigage (in Russian) Dimensional instruments
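The reading itself amounts to a width-to-clearance lookup against the supplied template. The sketch below illustrates that principle with linear interpolation over an entirely hypothetical calibration table; the widths and clearances shown are placeholders for illustration only, not values from any real Plastigauge template.

```python
# Illustrative only: the calibration pairs below are hypothetical placeholders,
# not data from a real Plastigauge template. Real gauges ship with a printed
# scale matched to their letter designation (measurement range).
CALIBRATION = [  # (flattened strip width in mm, bearing clearance in mm)
    (1.0, 0.10),
    (2.0, 0.05),
    (3.0, 0.03),
]

def clearance_from_width(width_mm: float) -> float:
    """Linearly interpolate a clearance from the flattened strip width."""
    pts = sorted(CALIBRATION)
    if width_mm <= pts[0][0]:
        return pts[0][1]
    if width_mm >= pts[-1][0]:
        return pts[-1][1]
    for (w0, c0), (w1, c1) in zip(pts, pts[1:]):
        if w0 <= width_mm <= w1:
            t = (width_mm - w0) / (w1 - w0)
            return c0 + t * (c1 - c0)

print(clearance_from_width(2.5))  # ~0.04 mm with the placeholder table
```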
Plastigauge
Physics,Mathematics
227
24,122,756
https://en.wikipedia.org/wiki/Reisekamera
The Reisekamera, meaning a "travel camera", is a large-format wooden bellows tailboard view camera of almost standardised design, unlike the much lighter and more flexible field camera, but not as cumbersome as the studio camera. A sturdy tripod is always brought along, but it might just as well be placed on a tabletop. It has equally sized rectangular front and back panels on a full-width double-extension baseboard that is hinged near the front. The front panel, holding the lens plate, has horizontal and vertical movements, while the back is tilt-suspended on brass standards running on brass tracks on either side of the baseboard, providing rack and pinion focusing on the film plane. An almost non-tapering calico double-extension bellows is employed; allowing the projected image to freely reach the photographic plate regardless of lens offset position. The camera folds flat, after the back panel is brought forward to the lens panel, by folding the hinged base board up, and thus conveniently protecting the focusing screen. For insertion of the wooden dark slide plate cassette, the hinged focusing screen is swung up and away. The Reisekamera was made available for several plate sizes; most common are the 13×18 cm, 18×24 cm and 24×30 cm versions. Shutter and lens were normally not part of the original delivery. However, some were made available with a spectacular focal-plane shutter, recognisable by the brass mechanisms either side of the back panel. The Reisekamera might be considered the portable version of the studio camera, to be used outside the professional photographic studio, on assignment at location, for architectural work or documentation away from the studio, in homes or museums. It is collapsible, yet not small, and it is much too cumbersome and large for the travelling amateur to bring along. The bellows extension would allow a selection of focal length lenses to be employed covering the required field of view. The back panel may be tilted and turned slightly, and at the front, the lens plate may slide vertically and horizontally to control perspective and distortion. The Reisekamera would be one of several cameras suitable for field assignments. A black cloth is required to keep stray light out while observing the image on the ground glass focusing screen. When the image is composed and focused, the focusing screen is replaced by the plate holder. The lens cap is placed on the lens, the dark slide removed from the plate holder, and the lens cap is removed for the required exposure time and replaced. If an auxiliary shutter is present, it may perform the function of the lens cap. After the exposure the dark slide is replaced to protect the exposed plate from stray light. The plate holder is either turned for a second photo, or brought to the darkroom for plate development and copying. The camera would be supplied with a number of plate holders, each usually holding two plates, one on either side. The Reisekamera was a popular wooden bellows view camera of the tailboard design, manufactured in large quantities in specialised cabinetmaker's workshops of the eastern regions of Germany from about 1860, but reaching peak popularity in the decades around 1900. These cameras would be distributed through well-known camera manufacturers situated in cities like Görlitz or Dresden, but would also emerge elsewhere and abroad, sometimes nameless and sometimes with the distributor or warehouse names attached, a fact causing considerable confusion with regard to origin. 
The Reisekamera is, regardless of the maker, quite similarly built and of almost standardised design, often without maker's identification. The collector's usual practice to identify some of these cameras by its name plate, if present, or even by the lens attached, results in almost identical cameras being attributed to more than one brand name. The original designer or manufacturer has not been ascertained, but it is thought to originate in the Saxony region of Germany around 1860, and in fact, many Reisekameras came from Dresden or Görlitz in particular. This region had a highly specialised woodwork industry well suited for camera manufacture. As the type became popular, manufacture of similar cameras took place worldwide, but these are more often than not clearly marked with the manufacturer's name. Possibly the best known Reisekamera is the Globus, manufactured by Ernst Herbst & Firl, Görlitz for the camera manufacturer Heinrich Ernemann in Dresden. References Cameras
Reisekamera
Technology
907
38,310
https://en.wikipedia.org/wiki/Cannabis
Cannabis () is a genus of flowering plants in the family Cannabaceae that is widely accepted as being indigenous to and originating from the continent of Asia. However, the number of species is disputed, with as many as three species being recognized: Cannabis sativa, C. indica, and C. ruderalis. Alternatively, C. ruderalis may be included within C. sativa, or all three may be treated as subspecies of C. sativa, or C. sativa may be accepted as a single undivided species. The plant is also known as hemp, although this term is usually used to refer only to varieties cultivated for non-drug use. Hemp has long been used for fibre, seeds and their oils, leaves for use as vegetables, and juice. Industrial hemp textile products are made from cannabis plants selected to produce an abundance of fibre. Cannabis also has a long history of being used for medicinal purposes, and as a recreational drug known by several slang terms, such as marijuana, pot or weed. Various cannabis strains have been bred, often selectively to produce high or low levels of tetrahydrocannabinol (THC), a cannabinoid and the plant's principal psychoactive constituent. Compounds such as hashish and hash oil are extracted from the plant. More recently, there has been interest in other cannabinoids like cannabidiol (CBD), cannabigerol (CBG), and cannabinol (CBN). Etymology Cannabis is a Scythian word. The ancient Greeks learned of the use of cannabis by observing Scythian funerals, during which cannabis was consumed. In Akkadian, cannabis was known as qunubu (). The word was adopted in to the Hebrew language as qaneh bosem (). Description Cannabis is an annual, dioecious, flowering herb. The leaves are palmately compound or digitate, with serrate leaflets. The first pair of leaves usually have a single leaflet, the number gradually increasing up to a maximum of about thirteen leaflets per leaf (usually seven or nine), depending on variety and growing conditions. At the top of a flowering plant, this number again diminishes to a single leaflet per leaf. The lower leaf pairs usually occur in an opposite leaf arrangement and the upper leaf pairs in an alternate arrangement on the main stem of a mature plant. The leaves have a peculiar and diagnostic venation pattern (which varies slightly among varieties) that allows for easy identification of Cannabis leaves from unrelated species with similar leaves. As is common in serrated leaves, each serration has a central vein extending to its tip, but in Cannabis this originates from lower down the central vein of the leaflet, typically opposite to the position of the second notch down. This means that on its way from the midrib of the leaflet to the point of the serration, the vein serving the tip of the serration passes close by the intervening notch. Sometimes the vein will pass tangentially to the notch, but often will pass by at a small distance; when the latter happens a spur vein (or occasionally two) branches off and joins the leaf margin at the deepest point of the notch. Tiny samples of Cannabis also can be identified with precision by microscopic examination of leaf cells and similar features, requiring special equipment and expertise. Reproduction All known strains of Cannabis are wind-pollinated and the fruit is an achene. Most strains of Cannabis are short day plants, with the possible exception of C. sativa subsp. sativa var. spontanea (= C. ruderalis), which is commonly described as "auto-flowering" and may be day-neutral. 
Cannabis is predominantly dioecious, having imperfect flowers, with staminate "male" and pistillate "female" flowers occurring on separate plants. "At a very early period the Chinese recognized the Cannabis plant as dioecious", and the (c. 3rd century BCE) Erya dictionary defined xi 枲 "male Cannabis" and fu 莩 (or ju 苴) "female Cannabis". Male flowers are normally borne on loose panicles, and female flowers are borne on racemes. Many monoecious varieties have also been described, in which individual plants bear both male and female flowers. (Although monoecious plants are often referred to as "hermaphrodites", true hermaphrodites – which are less common in Cannabis – bear staminate and pistillate structures together on individual flowers, whereas monoecious plants bear male and female flowers at different locations on the same plant.) Subdioecy (the occurrence of monoecious individuals and dioecious individuals within the same population) is widespread. Many populations have been described as sexually labile. As a result of intensive selection in cultivation, Cannabis exhibits many sexual phenotypes that can be described in terms of the ratio of female to male flowers occurring in the individual, or typical in the cultivar. Dioecious varieties are preferred for drug production, where the fruits (produced by female flowers) are used. Dioecious varieties are also preferred for textile fiber production, whereas monoecious varieties are preferred for pulp and paper production. It has been suggested that the presence of monoecy can be used to differentiate licit crops of monoecious hemp from illicit drug crops, but sativa strains often produce monoecious individuals, which is possibly as a result of inbreeding. Sex determination Cannabis has been described as having one of the most complicated mechanisms of sex determination among the dioecious plants. Many models have been proposed to explain sex determination in Cannabis. Based on studies of sex reversal in hemp, it was first reported by K. Hirata in 1924 that an XY sex-determination system is present. At the time, the XY system was the only known system of sex determination. The X:A system was first described in Drosophila spp in 1925. Soon thereafter, Schaffner disputed Hirata's interpretation, and published results from his own studies of sex reversal in hemp, concluding that an X:A system was in use and that furthermore sex was strongly influenced by environmental conditions. Since then, many different types of sex determination systems have been discovered, particularly in plants. Dioecy is relatively uncommon in the plant kingdom, and a very low percentage of dioecious plant species have been determined to use the XY system. In most cases where the XY system is found it is believed to have evolved recently and independently. Since the 1920s, a number of sex determination models have been proposed for Cannabis. Ainsworth describes sex determination in the genus as using "an X/autosome dosage type". The question of whether heteromorphic sex chromosomes are indeed present is most conveniently answered if such chromosomes were clearly visible in a karyotype. Cannabis was one of the first plant species to be karyotyped; however, this was in a period when karyotype preparation was primitive by modern standards. Heteromorphic sex chromosomes were reported to occur in staminate individuals of dioecious "Kentucky" hemp, but were not found in pistillate individuals of the same variety. Dioecious "Kentucky" hemp was assumed to use an XY mechanism. 
Heterosomes were not observed in analyzed individuals of monoecious "Kentucky" hemp, nor in an unidentified German cultivar. These varieties were assumed to have sex chromosome composition XX. According to other researchers, no modern karyotype of Cannabis had been published as of 1996. Proponents of the XY system state that Y chromosome is slightly larger than the X, but difficult to differentiate cytologically. More recently, Sakamoto and various co-authors have used random amplification of polymorphic DNA (RAPD) to isolate several genetic marker sequences that they name Male-Associated DNA in Cannabis (MADC), and which they interpret as indirect evidence of a male chromosome. Several other research groups have reported identification of male-associated markers using RAPD and amplified fragment length polymorphism. Ainsworth commented on these findings, stating, Environmental sex determination is known to occur in a variety of species. Many researchers have suggested that sex in Cannabis is determined or strongly influenced by environmental factors. Ainsworth reviews that treatment with auxin and ethylene have feminizing effects, and that treatment with cytokinins and gibberellins have masculinizing effects. It has been reported that sex can be reversed in Cannabis using chemical treatment. A polymerase chain reaction-based method for the detection of female-associated DNA polymorphisms by genotyping has been developed. Chemistry Cannabis plants produce a large number of chemicals as part of their defense against herbivory. One group of these is called cannabinoids, which induce mental and physical effects when consumed. Cannabinoids, terpenes, terpenoids, and other compounds are secreted by glandular trichomes that occur most abundantly on the floral calyxes and bracts of female plants. Genetics Cannabis, like many organisms, is diploid, having a chromosome complement of 2n=20, although polyploid individuals have been artificially produced. The first genome sequence of Cannabis, which is estimated to be 820 Mb in size, was published in 2011 by a team of Canadian scientists. Taxonomy The genus Cannabis was formerly placed in the nettle family (Urticaceae) or mulberry family (Moraceae), and later, along with the genus Humulus (hops), in a separate family, the hemp family (Cannabaceae sensu stricto). Recent phylogenetic studies based on cpDNA restriction site analysis and gene sequencing strongly suggest that the Cannabaceae sensu stricto arose from within the former family Celtidaceae, and that the two families should be merged to form a single monophyletic family, the Cannabaceae sensu lato. Various types of Cannabis have been described, and variously classified as species, subspecies, or varieties: plants cultivated for fiber and seed production, described as low-intoxicant, non-drug, or fiber types. plants cultivated for drug production, described as high-intoxicant or drug types. escaped, hybridised, or wild forms of either of the above types. Cannabis plants produce a unique family of terpeno-phenolic compounds called cannabinoids, some of which produce the "high" which may be experienced from consuming marijuana. There are 483 identifiable chemical constituents known to exist in the cannabis plant, and at least 85 different cannabinoids have been isolated from the plant. The two cannabinoids usually produced in greatest abundance are cannabidiol (CBD) and/or Δ9-tetrahydrocannabinol (THC), but only THC is psychoactive. 
Since the early 1970s, Cannabis plants have been categorized by their chemical phenotype or "chemotype", based on the overall amount of THC produced, and on the ratio of THC to CBD. Although overall cannabinoid production is influenced by environmental factors, the THC/CBD ratio is genetically determined and remains fixed throughout the life of a plant. Non-drug plants produce relatively low levels of THC and high levels of CBD, while drug plants produce high levels of THC and low levels of CBD. When plants of these two chemotypes cross-pollinate, the plants in the first filial (F1) generation have an intermediate chemotype and produce intermediate amounts of CBD and THC. Female plants of this chemotype may produce enough THC to be utilized for drug production. Whether the drug and non-drug, cultivated and wild types of Cannabis constitute a single, highly variable species, or the genus is polytypic with more than one species, has been a subject of debate for well over two centuries. This is a contentious issue because there is no universally accepted definition of a species. One widely applied criterion for species recognition is that species are "groups of actually or potentially interbreeding natural populations which are reproductively isolated from other such groups." Populations that are physiologically capable of interbreeding, but morphologically or genetically divergent and isolated by geography or ecology, are sometimes considered to be separate species. Physiological barriers to reproduction are not known to occur within Cannabis, and plants from widely divergent sources are interfertile. However, physical barriers to gene exchange (such as the Himalayan mountain range) might have enabled Cannabis gene pools to diverge before the onset of human intervention, resulting in speciation. It remains controversial whether sufficient morphological and genetic divergence occurs within the genus as a result of geographical or ecological isolation to justify recognition of more than one species. Early classifications The genus Cannabis was first classified using the "modern" system of taxonomic nomenclature by Carl Linnaeus in 1753, who devised the system still in use for the naming of species. He considered the genus to be monotypic, having just a single species that he named Cannabis sativa L. Linnaeus was familiar with European hemp, which was widely cultivated at the time. This classification was supported by Christiaan Hendrik Persoon (in 1807), Lindley (in 1838) and De Candollee (in 1867). These first classification attempts resulted in a four group division: Kif (southern hemp - psychoactive) Vulgaris (intermediate - psychoactive and fiber) Pedemontana (northern hemp - fiber) Chinensis (northern hemp - fiber) In 1785, evolutionary biologist Jean-Baptiste de Lamarck published a description of a second species of Cannabis, which he named Cannabis indica Lam. Lamarck based his description of the newly named species on morphological aspects (trichomes, leaf shape) and geographic localization of plant specimens collected in India. He described C. indica as having poorer fiber quality than C. sativa, but greater utility as an inebriant. Also, C. indica was considered smaller, by Lamarck. Also, woodier stems, alternate ramifications of the branches, narrow leaflets, and a villous calyx in the female flowers were characteristics noted by the botanist. In 1843, William O’Shaughnessy, used "Indian hemp (C. indica)" in a work title. 
The author claimed that this choice wasn't based on a clear distinction between C. sativa and C. indica, but may have been influenced by the choice to use the term "Indian hemp" (linked to the plant's history in India), hence naming the species as indica. Additional Cannabis species were proposed in the 19th century, including strains from China and Vietnam (Indo-China) assigned the names Cannabis chinensis Delile, and Cannabis gigantea Delile ex Vilmorin. However, many taxonomists found these putative species difficult to distinguish. In the early 20th century, the single-species concept (monotypic classification) was still widely accepted, except in the Soviet Union, where Cannabis continued to be the subject of active taxonomic study. The name Cannabis indica was listed in various Pharmacopoeias, and was widely used to designate Cannabis suitable for the manufacture of medicinal preparations. 20th century In 1924, Russian botanist D.E. Janichevsky concluded that ruderal Cannabis in central Russia is either a variety of C. sativa or a separate species, and proposed C. sativa L. var. ruderalis Janisch, and Cannabis ruderalis Janisch, as alternative names. In 1929, renowned plant explorer Nikolai Vavilov assigned wild or feral populations of Cannabis in Afghanistan to C. indica Lam. var. kafiristanica Vav., and ruderal populations in Europe to C. sativa L. var. spontanea Vav. Vavilov, in 1931, proposed a three species system, independently reinforced by Schultes et al (1975) and Emboden (1974): C. sativa, C. indica and C. ruderalis. In 1940, Russian botanists Serebriakova and Sizov proposed a complex poly-species classification in which they also recognized C. sativa and C. indica as separate species. Within C. sativa they recognized two subspecies: C. sativa L. subsp. culta Serebr. (consisting of cultivated plants), and C. sativa L. subsp. spontanea (Vav.) Serebr. (consisting of wild or feral plants). Serebriakova and Sizov split the two C. sativa subspecies into 13 varieties, including four distinct groups within subspecies culta. However, they did not divide C. indica into subspecies or varieties. Zhukovski, in 1950, also proposed a two-species system, but with C. sativa L. and C. ruderalis. In the 1970s, the taxonomic classification of Cannabis took on added significance in North America. Laws prohibiting Cannabis in the United States and Canada specifically named products of C. sativa as prohibited materials. Enterprising attorneys for the defense in a few drug busts argued that the seized Cannabis material may not have been C. sativa, and was therefore not prohibited by law. Attorneys on both sides recruited botanists to provide expert testimony. Among those testifying for the prosecution was Dr. Ernest Small, while Dr. Richard E. Schultes and others testified for the defense. The botanists engaged in heated debate (outside of court), and both camps impugned the other's integrity. The defense attorneys were not often successful in winning their case, because the intent of the law was clear. In 1976, Canadian botanist Ernest Small and American taxonomist Arthur Cronquist published a taxonomic revision that recognizes a single species of Cannabis with two subspecies (hemp or drug; based on THC and CBD levels) and two varieties in each (domesticated or wild). The framework is thus: C. sativa L. subsp. sativa, presumably selected for traits that enhance fiber or seed production. C. sativa L. subsp. sativa var. sativa, domesticated variety. C. sativa L. subsp. sativa var. 
spontanea Vav., wild or escaped variety. C. sativa L. subsp. indica (Lam.) Small & Cronq., primarily selected for drug production. C. sativa L. subsp. indica var. indica, domesticated variety. C. sativa subsp. indica var. kafiristanica (Vav.) Small & Cronq, wild or escaped variety. This classification was based on several factors including interfertility, chromosome uniformity, chemotype, and numerical analysis of phenotypic characters. Professors William Emboden, Loran Anderson, and Harvard botanist Richard E. Schultes and coworkers also conducted taxonomic studies of Cannabis in the 1970s, and concluded that stable morphological differences exist that support recognition of at least three species, C. sativa, C. indica, and C. ruderalis. For Schultes, this was a reversal of his previous interpretation that Cannabis is monotypic, with only a single species. According to Schultes' and Anderson's descriptions, C. sativa is tall and laxly branched with relatively narrow leaflets, C. indica is shorter, conical in shape, and has relatively wide leaflets, and C. ruderalis is short, branchless, and grows wild in Central Asia. This taxonomic interpretation was embraced by Cannabis aficionados who commonly distinguish narrow-leafed "sativa" strains from wide-leafed "indica" strains. McPartland's review finds the Schultes taxonomy inconsistent with prior work (protologs) and partly responsible for the popular usage. Continuing research Molecular analytical techniques developed in the late 20th century are being applied to questions of taxonomic classification. This has resulted in many reclassifications based on evolutionary systematics. Several studies of random amplified polymorphic DNA (RAPD) and other types of genetic markers have been conducted on drug and fiber strains of Cannabis, primarily for plant breeding and forensic purposes. Dutch Cannabis researcher E.P.M. de Meijer and coworkers described some of their RAPD studies as showing an "extremely high" degree of genetic polymorphism between and within populations, suggesting a high degree of potential variation for selection, even in heavily selected hemp cultivars. They also commented that these analyses confirm the continuity of the Cannabis gene pool throughout the studied accessions, and provide further confirmation that the genus consists of a single species, although theirs was not a systematic study per se. An investigation of genetic, morphological, and chemotaxonomic variation among 157 Cannabis accessions of known geographic origin, including fiber, drug, and feral populations showed cannabinoid variation in Cannabis germplasm. The patterns of cannabinoid variation support recognition of C. sativa and C. indica as separate species, but not C. ruderalis. C. sativa contains fiber and seed landraces, and feral populations, derived from Europe, Central Asia, and Turkey. Narrow-leaflet and wide-leaflet drug accessions, southern and eastern Asian hemp accessions, and feral Himalayan populations were assigned to C. indica. In 2005, a genetic analysis of the same set of accessions led to a three-species classification, recognizing C. sativa, C. indica, and (tentatively) C. ruderalis. Another paper in the series on chemotaxonomic variation in the terpenoid content of the essential oil of Cannabis revealed that several wide-leaflet drug strains in the collection had relatively high levels of certain sesquiterpene alcohols, including guaiol and isomers of eudesmol, that set them apart from the other putative taxa. 
A 2020 analysis of single-nucleotide polymorphisms reports five clusters of cannabis, roughly corresponding to hemps (including folk "Ruderalis") folk "Indica" and folk "Sativa". Despite advanced analytical techniques, much of the cannabis used recreationally is inaccurately classified. One laboratory at the University of British Columbia found that Jamaican Lamb's Bread, claimed to be 100% sativa, was in fact almost 100% indica (the opposite strain). Legalization of cannabis in Canada () may help spur private-sector research, especially in terms of diversification of strains. It should also improve classification accuracy for cannabis used recreationally. Legalization coupled with Canadian government (Health Canada) oversight of production and labelling will likely result in more—and more accurate—testing to determine exact strains and content. Furthermore, the rise of craft cannabis growers in Canada should ensure quality, experimentation/research, and diversification of strains among private-sector producers. Popular usage The scientific debate regarding taxonomy has had little effect on the terminology in widespread use among cultivators and users of drug-type Cannabis. Cannabis aficionados recognize three distinct types based on such factors as morphology, native range, aroma, and subjective psychoactive characteristics. "Sativa" is the most widespread variety, which is usually tall, laxly branched, and found in warm lowland regions. "Indica" designates shorter, bushier plants adapted to cooler climates and highland environments. "Ruderalis" is the informal name for the short plants that grow wild in Europe and Central Asia. Mapping the morphological concepts to scientific names in the Small 1976 framework, "Sativa" generally refers to C. sativa subsp. indica var. indica, "Indica" generally refers to C. sativa subsp. i. kafiristanica (also known as afghanica), and "Ruderalis", being lower in THC, is the one that can fall into C. sativa subsp. sativa. The three names fit in Schultes's framework better, if one overlooks its inconsistencies with prior work. Definitions of the three terms using factors other than morphology produces different, often conflicting results. Breeders, seed companies, and cultivators of drug type Cannabis often describe the ancestry or gross phenotypic characteristics of cultivars by categorizing them as "pure indica", "mostly indica", "indica/sativa", "mostly sativa", or "pure sativa". These categories are highly arbitrary, however: one "AK-47" hybrid strain has received both "Best Sativa" and "Best Indica" awards. Phylogeny Cannabis likely split from its closest relative, Humulus (hops), during the mid Oligocene, around 27.8 million years ago according to molecular clock estimates. The centre of origin of Cannabis is likely in the northeastern Tibetan Plateau. The pollen of Humulus and Cannabis are very similar and difficult to distinguish. The oldest pollen thought to be from Cannabis is from Ningxia, China, on the boundary between the Tibetan Plateau and the Loess Plateau, dating to the early Miocene, around 19.6 million years ago. Cannabis was widely distributed over Asia by the Late Pleistocene. The oldest known Cannabis in South Asia dates to around 32,000 years ago. Uses Cannabis is used for a wide variety of purposes. History According to genetic and archaeological evidence, cannabis was first domesticated about 12,000 years ago in East Asia during the early Neolithic period. 
The use of cannabis as a mind-altering drug has been documented by archaeological finds in prehistoric societies in Eurasia and Africa. The oldest written record of cannabis usage is the Greek historian Herodotus's reference to the central Eurasian Scythians taking cannabis steam baths. His () Histories records, "The Scythians, as I said, take some of this hemp-seed [presumably, flowers], and, creeping under the felt coverings, throw it upon the red-hot stones; immediately it smokes, and gives out such a vapour as no Greek vapour-bath can exceed; the Scyths, delighted, shout for joy." Classical Greeks and Romans also used cannabis. In China, the psychoactive properties of cannabis are described in the Shennong Bencaojing (3rd century AD). Cannabis smoke was inhaled by Daoists, who burned it in incense burners. In the Middle East, use spread throughout the Islamic empire to North Africa. In 1545, cannabis spread to the western hemisphere where Spaniards imported it to Chile for its use as fiber. In North America, cannabis, in the form of hemp, was grown for use in rope, cloth and paper. Cannabinol (CBN) was the first compound to be isolated from cannabis extract in the late 1800s. Its structure and chemical synthesis were achieved by 1940, followed by some of the first preclinical research studies to determine the effects of individual cannabis-derived compounds in vivo. Globally, in 2013, 60,400 kilograms of cannabis were produced legally. Recreational use Cannabis is a popular recreational drug around the world, only behind alcohol, caffeine, and tobacco. In the U.S. alone, it is believed that over 100 million Americans have tried cannabis, with 25 million Americans having used it within the past year. As a drug it usually comes in the form of dried marijuana, hashish, or various extracts collectively known as hashish oil. Normal cognition is restored after approximately three hours for larger doses via a smoking pipe, bong or vaporizer. However, if a large amount is taken orally the effects may last much longer. After 24 hours to a few days, minuscule psychoactive effects may be felt, depending on dosage, frequency and tolerance to the drug. Cannabidiol (CBD), which has no intoxicating effects by itself (although sometimes showing a small stimulant effect, similar to caffeine), is thought to attenuate (i.e., reduce) the anxiety-inducing effects of high doses of THC, particularly if administered orally prior to THC exposure. According to Delphic analysis by British researchers in 2007, cannabis has a lower risk factor for dependence compared to both nicotine and alcohol. However, everyday use of cannabis may be correlated with psychological withdrawal symptoms, such as irritability or insomnia, and susceptibility to a panic attack may increase as levels of THC metabolites rise. Cannabis withdrawal symptoms are typically mild and are not life-threatening. Risk of adverse outcomes from cannabis use may be reduced by implementation of evidence-based education and intervention tools communicated to the public with practical regulation measures. In 2014 there were an estimated 182.5 million cannabis users worldwide (3.8% of the global population aged 15–64). This percentage did not change significantly between 1998 and 2014. Medical use Medical cannabis (or medical marijuana) refers to the use of cannabis and its constituent cannabinoids, in an effort to treat disease or improve symptoms. 
Cannabis is used to reduce nausea and vomiting during chemotherapy, to improve appetite in people with HIV/AIDS, and to treat chronic pain and muscle spasms. Cannabinoids are under preliminary research for their potential to affect stroke. Evidence is lacking for depression, anxiety, attention deficit hyperactivity disorder, Tourette syndrome, post-traumatic stress disorder, and psychosis. Two extracts of cannabis – dronabinol and nabilone – are approved by the FDA as medications in pill form for treating the side effects of chemotherapy and AIDS. Short-term use increases both minor and major adverse effects. Common side effects include dizziness, feeling tired, vomiting, and hallucinations. Long-term effects of cannabis are not clear. Concerns including memory and cognition problems, risk of addiction, schizophrenia in young people, and the risk of children taking it by accident. Industrial use (hemp) The term hemp is used to name the durable soft fiber from the Cannabis plant stem (stalk). Cannabis sativa cultivars are used for fibers due to their long stems; Sativa varieties may grow more than six metres tall. However, hemp can refer to any industrial or foodstuff product that is not intended for use as a drug. Many countries regulate limits for psychoactive compound (THC) concentrations in products labeled as hemp. Cannabis for industrial uses is valuable in tens of thousands of commercial products, especially as fibre ranging from paper, cordage, construction material and textiles in general, to clothing. Hemp is stronger and longer-lasting than cotton. It also is a useful source of foodstuffs (hemp milk, hemp seed, hemp oil) and biofuels. Hemp has been used by many civilizations, from China to Europe (and later North America) during the last 12,000 years. In modern times novel applications and improvements have been explored with modest commercial success. In the US, "industrial hemp" is classified by the federal government as cannabis containing no more than 0.3% THC by dry weight. This classification was established in the 2018 Farm Bill and was refined to include hemp-sourced extracts, cannabinoids, and derivatives in the definition of hemp. Ancient and religious uses The Cannabis plant has a history of medicinal use dating back thousands of years across many cultures. The Yanghai Tombs, a vast ancient cemetery (54 000 m2) situated in the Turfan district of the Xinjiang Uyghur Autonomous Region in northwest China, have revealed the 2700-year-old grave of a shaman. He is thought to have belonged to the Jushi culture recorded in the area centuries later in the Hanshu, Chap 96B. Near the head and foot of the shaman was a large leather basket and wooden bowl filled with 789g of cannabis, superbly preserved by climatic and burial conditions. An international team demonstrated that this material contained THC. The cannabis was presumably employed by this culture as a medicinal or psychoactive agent, or an aid to divination. This is the oldest documentation of cannabis as a pharmacologically active agent. The earliest evidence of cannabis smoking has been found in the 2,500-year-old tombs of Jirzankal Cemetery in the Pamir Mountains in Western China, where cannabis residue were found in burners with charred pebbles possibly used during funeral rituals. Settlements which date from c. 
2200–1700 BCE in the Bactria and Margiana contained elaborate ritual structures with rooms containing everything needed for making drinks containing extracts from poppy (opium), hemp (cannabis), and ephedra (which contains ephedrine). Although there is no evidence of ephedra being used by steppe tribes, they engaged in cultic use of hemp. Cultic use ranged from Romania to the Yenisei River and had begun by 3rd millennium BC Smoking hemp has been found at Pazyryk. Cannabis is first referred to in Hindu Vedas between 2000 and 1400 BCE, in the Atharvaveda. By the 10th century CE, it has been suggested that it was referred to by some in India as "food of the gods". Cannabis use eventually became a ritual part of the Hindu festival of Holi. One of the earliest to use this plant in medical purposes was Korakkar, one of the 18 Siddhas. The plant is called Korakkar Mooli in the Tamil language, meaning Korakkar's herb. In Buddhism, cannabis is generally regarded as an intoxicant and may be a hindrance to development of meditation and clear awareness. In ancient Germanic culture, Cannabis was associated with the Norse love goddess, Freya. An anointing oil mentioned in Exodus is, by some translators, said to contain Cannabis. In modern times, the Rastafari movement has embraced Cannabis as a sacrament. Elders of the Ethiopian Zion Coptic Church, a religious movement founded in the U.S. in 1975 with no ties to either Ethiopia or the Coptic Church, consider Cannabis to be the Eucharist, claiming it as an oral tradition from Ethiopia dating back to the time of Christ. Like the Rastafari, some modern Gnostic Christian sects have asserted that Cannabis is the Tree of Life. Other organized religions founded in the 20th century that treat Cannabis as a sacrament are the THC Ministry, Cantheism, the Cannabis Assembly and the Church of Cognizance. Since the 13th century CE, cannabis has been used among Sufis – the mystical interpretation of Islam that exerts strong influence over local Muslim practices in Bangladesh, India, Indonesia, Turkey, and Pakistan. Cannabis preparations are frequently used at Sufi festivals in those countries. Pakistan's Shrine of Lal Shahbaz Qalandar in Sindh province is particularly renowned for the widespread use of cannabis at the shrine's celebrations, especially its annual Urs festival and Thursday evening dhamaal sessions – or meditative dancing sessions. See also Cannabis drug testing Cannabis edible Cannabis flower essential oil Hash, Marihuana & Hemp Museum Indian Hemp Drugs Commission Legal history of cannabis in the United States Legality of cannabis by U.S. jurisdiction List of books about cannabis List of celebrities who own cannabis businesses Occupational health concerns of cannabis use Notes References Further reading External links International Plant Names Index (IPNI) Biopiracy Cannabis Rosales genera Dioecious plants Entheogens Euphoriants Herbs Medicinal plants Soma (drink) Taxa named by Carl Linnaeus Invasive plant species in Japan
Cannabis
Biology
7,323
6,773,580
https://en.wikipedia.org/wiki/Residual%20%28numerical%20analysis%29
Loosely speaking, a residual is the error in a result. To be precise, suppose we want to find x such that f(x) = b. Given an approximation x0 of x, the residual is b − f(x0), that is, "what is left of the right hand side" after subtracting f(x0) (thus, the name "residual": what is left, the rest). On the other hand, the error is x − x0. If the exact value of x is not known, the residual can be computed, whereas the error cannot. Residual of the approximation of a function Similar terminology is used dealing with differential, integral and functional equations. For the approximation f_a of the solution f of the equation T(f)(x) = g(x), the residual can either be the function g(x) − T(f_a)(x), or can be said to be the maximum of the norm of this difference, max over x in the domain X of |g(x) − T(f_a)(x)|, where the function f_a is expected to approximate the solution f, or some integral of a function of the difference, for example the integral over X of |g(x) − T(f_a)(x)|² dx. In many cases, the smallness of the residual means that the approximation is close to the solution, i.e., f_a is close to f. In these cases, the initial equation is considered as well-posed; and the residual can be considered as a measure of deviation of the approximation from the exact solution. Use of residuals When one does not know the exact solution, one may look for the approximation with small residual. Residuals appear in many areas in mathematics, including iterative solvers such as the generalized minimal residual method, which seeks solutions to equations by systematically minimizing the residual. References Numerical analysis
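As a small numerical illustration of the distinction drawn above, the sketch below approximates the solution of x² = 2 and compares the residual, which is computable without knowing the exact solution, with the error, which is not.

```python
import math

# Solve f(x) = b with f(x) = x**2 and b = 2; the exact solution is sqrt(2).
f = lambda x: x ** 2
b = 2.0

x0 = 1.4                      # an approximation of the solution
residual = b - f(x0)          # computable without the exact solution
error = math.sqrt(2.0) - x0   # requires the exact solution

print(f"residual = {residual:.6f}")  # 0.040000
print(f"error    = {error:.6f}")     # 0.014214
```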
Residual (numerical analysis)
Mathematics
303
24,509,228
https://en.wikipedia.org/wiki/Gymnopilus%20norfolkensis
Gymnopilus norfolkensis is a species of mushroom in the Hymenogastraceae family. See also List of Gymnopilus species External links Gymnopilus norfolkensis at Index Fungorum norfolkensis Fungi of North America Fungus species
Gymnopilus norfolkensis
Biology
53
2,421,481
https://en.wikipedia.org/wiki/Isotropic%20radiator
An isotropic radiator is a theoretical point source of waves which radiates the same intensity of radiation in all directions. It may be based on sound waves or electromagnetic waves, in which case it is also known as an isotropic antenna. It has no preferred direction of radiation, i.e., it radiates uniformly in all directions over a sphere centred on the source. Isotropic radiators are used as reference radiators with which other sources are compared, for example in determining the gain of antennas. A coherent isotropic radiator of electromagnetic waves is theoretically impossible, but incoherent radiators can be built. An isotropic sound radiator is possible because sound is a longitudinal wave. The term isotropic radiation means a radiation field which has the same intensity in all directions at each point; thus an isotropic radiator does not produce isotropic radiation. Physics In physics, an isotropic radiator is a point radiation or sound source. At a distance, the Sun is an isotropic radiator of electromagnetic radiation. Radiation pattern The radiation field of an isotropic radiator in empty space can be found from conservation of energy. The waves travel in straight lines away from the source point, in the radial direction. Since it has no preferred direction of radiation, the power density of the waves at any point does not depend on the angular direction, but only on the distance r from the source. Assuming it is located in empty space where there is nothing to absorb the waves, the power striking a spherical surface enclosing the radiator, with the radiator at center, regardless of the radius r, must be the total power P in watts emitted by the source. Since the power density S in watts per square meter striking each point of the sphere is the same, it must equal the radiated power divided by the surface area of the sphere: S = P / (4πr²). Thus the power density radiated by an isotropic radiator decreases with the inverse square of the distance from the source. The term isotropic radiation is not usually used for the radiation from an isotropic radiator because it has a different meaning in physics. In thermodynamics it refers to the electromagnetic radiation pattern which would be found in a region at thermodynamic equilibrium, as in a black thermal cavity at a constant temperature. In a cavity at equilibrium the power density of radiation is the same in every direction and every point in the cavity, meaning that the amount of power passing through a unit surface is constant at any location, and with the surface oriented in any direction. This radiation field is different from that of an isotropic radiator, in which the direction of power flow is everywhere away from the source point, and decreases with the inverse square of distance from it. Antenna theory In antenna theory, an isotropic antenna is a hypothetical antenna radiating the same intensity of radio waves in all directions. It thus is said to have a directivity of 0 dBi (dB relative to isotropic) in all directions. Since it is entirely non-directional, it serves as a hypothetical worst-case against which directional antennas may be compared. In reality, a coherent isotropic radiator of linear polarization can be shown to be impossible. Its radiation field could not be consistent with the Helmholtz wave equation (derived from Maxwell's equations) in all directions simultaneously. 
Consider a large sphere surrounding the hypothetical point source, in the far field of the radiation pattern so that at that radius the wave over a reasonable area is essentially planar. In the far field the electric (and magnetic) field of a plane wave in free space is always perpendicular to the direction of propagation of the wave. So the electric field would have to be tangent to the surface of the sphere everywhere, and continuous along that surface. However, the hairy ball theorem shows that a continuous vector field tangent to the surface of a sphere must fall to zero at one or more points on the sphere, which is inconsistent with the assumption of an isotropic radiator with linear polarization. Incoherent isotropic antennas are possible and do not violate Maxwell's equations. Even though an exactly isotropic antenna cannot exist in practice, it is used as a base of comparison to calculate the directivity of actual antennas. Antenna gain $G$, which is equal to the antenna's directivity multiplied by the antenna efficiency, is defined as the ratio of the intensity $S$ (power per unit area) of the radio power received at a given distance from the antenna (in the direction of maximum radiation) to the intensity $S_{\text{iso}}$ received from a perfect lossless isotropic antenna at the same distance. This is called isotropic gain: $G = \frac{S}{S_{\text{iso}}}$. Gain is often expressed in logarithmic units called decibels (dB). When gain is calculated with respect to an isotropic antenna, these are called decibels isotropic (dBi): $G_{\text{dBi}} = 10 \log_{10} G$. The gain of any perfectly efficient antenna averaged over all directions is unity, or 0 dBi. Isotropic receiver In EMF measurement applications, an isotropic receiver (also called isotropic antenna) is a calibrated radio receiver with an antenna which approximates an isotropic reception pattern; that is, it has close to equal sensitivity to radio waves from any direction. It is used as a field measurement instrument to measure electromagnetic sources and calibrate antennas. The isotropic receiving antenna is usually approximated by three orthogonal antennas or sensing devices with a radiation pattern of the omnidirectional type such as short dipoles or small loop antennas. The parameter used to define accuracy in the measurements is called isotropic deviation. Optics In optics, an isotropic radiator is a point source of light. The Sun approximates an (incoherent) isotropic radiator of light. Certain munitions such as flares and chaff have isotropic radiator properties. Whether a radiator is isotropic is independent of whether it obeys Lambert's law. As radiators, a spherical black body is both, a flat black body is Lambertian but not isotropic, a flat chrome sheet is neither, and by symmetry the Sun is isotropic, but not Lambertian on account of limb darkening. Sound An isotropic sound radiator is a theoretical loudspeaker radiating equal sound volume in all directions. Since sound waves are longitudinal waves, a coherent isotropic sound radiator is feasible; an example is a pulsing spherical membrane or diaphragm, whose surface expands and contracts radially with time, pushing on the air. Derivation of aperture of an isotropic antenna The aperture of an isotropic antenna can be derived by a thermodynamic argument, which follows. Suppose an ideal (lossless) isotropic antenna A, located within a thermal cavity CA, is connected via a lossless transmission line through a band-pass filter F to a matched resistor R in another thermal cavity CR (the characteristic impedance of the antenna, line and filter are all matched).
Both cavities are at the same temperature $T$. The filter F only allows through a narrow band of frequencies from $\nu$ to $\nu + \Delta\nu$. Both cavities are filled with blackbody radiation in equilibrium with the antenna and resistor. Some of this radiation is received by the antenna. The amount of this power, $P_A$, within the band of frequencies $\Delta\nu$ passes through the transmission line and filter F and is dissipated as heat in the resistor. The rest is reflected by the filter back to the antenna and is reradiated into the cavity. The resistor also produces Johnson–Nyquist noise current due to the random motion of its molecules at the temperature $T$. The amount of this power, $P_R$, within the frequency band $\Delta\nu$ passes through the filter and is radiated by the antenna. Since the entire system is at the same temperature it is in thermodynamic equilibrium; there can be no net transfer of power between the cavities, otherwise one cavity would heat up and the other would cool down in violation of the second law of thermodynamics. Therefore, the power flows in both directions must be equal: $P_A = P_R$. The radio noise in the cavity is unpolarized, containing an equal mixture of polarization states. However any antenna with a single output is polarized, and can only receive one of two orthogonal polarization states. For example, a linearly polarized antenna cannot receive components of radio waves with electric field perpendicular to the antenna's linear elements; similarly a right circularly polarized antenna cannot receive left circularly polarized waves. Therefore, the antenna only receives the component of power density in the cavity matched to its polarization, which is half of the total power density. Suppose $B_\nu$ is the spectral radiance per hertz in the cavity; the power of black-body radiation per unit area (m²) per unit solid angle (steradian) per unit frequency (hertz) at frequency $\nu$ and temperature $T$ in the cavity. If $A(\theta, \phi)$ is the antenna's aperture, the amount of power in the frequency range $\Delta\nu$ the antenna receives from an increment of solid angle $d\Omega$ in the direction $(\theta, \phi)$ is $dP_A = \frac{1}{2} A(\theta, \phi)\, B_\nu\, \Delta\nu\, d\Omega$. To find the total power in the frequency range $\Delta\nu$ the antenna receives, this is integrated over all directions (a solid angle of $4\pi$): $P_A = \frac{1}{2} \Delta\nu \int_{4\pi} A(\theta, \phi)\, B_\nu\, d\Omega$. Since the antenna is isotropic, it has the same aperture $A$ in any direction. So the aperture can be moved outside the integral. Similarly the radiance $B_\nu$ in the cavity is the same in any direction: $P_A = \frac{1}{2} A\, B_\nu\, \Delta\nu \int_{4\pi} d\Omega = 2\pi A\, B_\nu\, \Delta\nu$. Radio waves are low enough in frequency so the Rayleigh–Jeans formula gives a very close approximation of the blackbody spectral radiance: $B_\nu = \frac{2\nu^2 k T}{c^2}$. Therefore $P_A = \frac{4\pi \nu^2 k T A\, \Delta\nu}{c^2}$. The Johnson–Nyquist noise power produced by a resistor at temperature $T$ over a frequency range $\Delta\nu$ is $P_R = k T\, \Delta\nu$. Since the cavities are in thermodynamic equilibrium $P_A = P_R$, so $A = \frac{c^2}{4\pi \nu^2} = \frac{\lambda^2}{4\pi}$. See also Radiation pattern E-plane and H-plane Footnotes References External links Isotropic Radiators, Matzner and McDonald, arXiv Antennas Antennas D.Jefferies isotropic radiator AMS Glossary U.S. Patent 4,130,023 – Method and apparatus for testing and evaluating loudspeaker performance Non Lethal Concepts – Implications for Air Force Intelligence Published Aerospace Power Journal, Winter 1994 Glossary Cosmic Microwave Background – Introduction Isotropic Radiators Holon Academic Institute of Technology Radiation Radio frequency antenna types Antennas (radio)
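The inverse-square law and the aperture result derived above can be illustrated numerically. The following short Python sketch is only an illustration; the function names and the sample power and frequency values are invented for the example and are not part of the article.
import math

C = 299_792_458.0  # speed of light in m/s


def power_density(p_source_watts: float, r_meters: float) -> float:
    """Power density (W/m^2) of an isotropic radiator at distance r:
    the source power spread evenly over a sphere of area 4*pi*r^2."""
    return p_source_watts / (4.0 * math.pi * r_meters ** 2)


def isotropic_aperture(frequency_hz: float) -> float:
    """Effective aperture (m^2) of an ideal isotropic antenna,
    A = lambda^2 / (4*pi), the result of the thermodynamic argument above."""
    wavelength = C / frequency_hz
    return wavelength ** 2 / (4.0 * math.pi)


# Doubling the distance quarters the power density (inverse-square law).
for r in (1.0, 2.0, 4.0):
    print(r, power_density(100.0, r))

# At 300 MHz the wavelength is about 1 m, so the aperture is roughly 0.08 m^2.
print(isotropic_aperture(300e6))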
Isotropic radiator
Physics,Chemistry
2,107
17,187,970
https://en.wikipedia.org/wiki/Principle%20of%20least%20motion
In organic chemistry, the principle of least motion is the hypothesis that, when multiple species with different nuclear structures could theoretically form as products of a given chemical reaction, the product most likely to form tends to be the one requiring the least change in nuclear structure, that is, the smallest change in nuclear positions. References Organic chemistry
Principle of least motion
Chemistry
63
61,099,017
https://en.wikipedia.org/wiki/Gestalt%20pattern%20matching
Gestalt pattern matching, also Ratcliff/Obershelp pattern recognition, is a string-matching algorithm for determining the similarity of two strings. It was developed in 1983 by John W. Ratcliff and John A. Obershelp and published in Dr. Dobb's Journal in July 1988. Algorithm The similarity of two strings $S_1$ and $S_2$ is determined by this formula: twice the number of matching characters $K_m$ divided by the total number of characters of both strings. The matching characters are defined as some longest common substring plus recursively the number of matching characters in the non-matching regions on both sides of the longest common substring: $D_{ro} = \frac{2 K_m}{|S_1| + |S_2|}$, where the similarity metric can take a value between zero and one: $0 \le D_{ro} \le 1$. The value of 1 stands for the complete match of the two strings, whereas the value of 0 means there is no match and not even one common letter. Sample For the strings WIKIMEDIA and WIKIMANIA, the longest common substring is WIKIM with 5 characters. There is no further substring on the left. The non-matching substrings on the right side are EDIA and ANIA. They again have a longest common substring IA with length 2. The similarity metric is determined by: $D_{ro} = \frac{2 (5 + 2)}{9 + 9} = \frac{14}{18} \approx 0.78$. Properties The Ratcliff/Obershelp matching characters can be substantially different from each longest common subsequence of the given strings. For example and have as their only longest common substring, and no common characters right of its occurrence, and likewise left, leading to . However, the longest common subsequence of and is , with a total length of . Complexity The execution time of the algorithm is in a worst case and in an average case. By changing the computing method, the execution time can be improved significantly. Commutative property The Python library implementation of the gestalt pattern matching algorithm is not commutative: Sample For the two strings and the metric result for is with the substrings GESTALT P, A, T, E and for the metric is with the substrings GESTALT P, R, A, C, I. Applications The Python difflib library, which was introduced in version 2.1, implements a similar algorithm that predates the Ratcliff-Obershelp algorithm. Due to the unfavourable runtime behaviour of this similarity metric, three methods have been implemented. Two of them return an upper bound in a faster execution time. The fastest variant, $D_{rqr}$, only compares the lengths of the two strings: $D_{rqr} = \frac{2 \min(|S_1|, |S_2|)}{|S_1| + |S_2|}$. The second upper bound, $D_{qr}$, calculates twice the number of characters of $S_1$ that also occur in $S_2$ (counted with multiplicity, i.e., a multiset intersection) divided by the length of both strings, but the sequence of the characters is ignored.
# Dqr Implementation in Python
import collections


def quick_ratio(s1: str, s2: str) -> float:
    """Return an upper bound on ratio() relatively quickly."""
    length = len(s1) + len(s2)
    if not length:
        return 1.0
    intersect = collections.Counter(s1) & collections.Counter(s2)
    matches = sum(intersect.values())
    return 2.0 * matches / length
Trivially the following applies: $D_{ro} \le D_{qr}$ and $D_{ro} \le D_{rqr}$. References Further reading See also Pattern matching Search algorithms Information theory Quantitative linguistics Recursion String metrics Articles with example Python (programming language) code
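Since the article notes that Python's difflib implements a closely related metric, a brief usage sketch may help; the example strings are arbitrary and the printed values are only approximate.
from difflib import SequenceMatcher

s1, s2 = "WIKIMEDIA", "WIKIMANIA"
m = SequenceMatcher(None, s1, s2)

# ratio() applies the full matching-character recursion described above;
# quick_ratio() and real_quick_ratio() return successively cheaper upper bounds.
print(m.ratio())             # about 0.78 for these strings
print(m.quick_ratio())
print(m.real_quick_ratio())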
Gestalt pattern matching
Mathematics,Technology,Engineering
686
17,601,646
https://en.wikipedia.org/wiki/Restoration%20of%20the%20Everglades
An ongoing effort to remedy damage inflicted during the 20th century on the Everglades, a region of tropical wetlands in southern Florida, is the most expensive and comprehensive environmental repair attempt in history. The degradation of the Everglades became an issue in the United States in the early 1970s after a proposal to construct an airport in the Big Cypress Swamp. Studies indicated the airport would have destroyed the ecosystem in South Florida and Everglades National Park. After decades of destructive practices, both state and federal agencies are looking for ways to balance the needs of the natural environment in South Florida with urban and agricultural centers that have recently and rapidly grown in and near the Everglades. In response to floods caused by hurricanes in 1947, the Central and Southern Florida Flood Control Project (C&SF) was established to construct flood control devices in the Everglades. The C&SF built of canals and levees between the 1950s and 1971 throughout South Florida. Their last venture was the C-38 canal, which straightened the Kissimmee River and caused catastrophic damage to animal habitats, adversely affecting water quality in the region. The canal became the first C&SF project to revert when the canal began to be backfilled, or refilled with the material excavated from it, in the 1980s. When high levels of phosphorus and mercury were discovered in the waterways in 1986, water quality became a focus for water management agencies. Costly and lengthy court battles were waged between various government entities to determine who was responsible for monitoring and enforcing water quality standards. Governor Lawton Chiles proposed a bill that determined which agencies would have that responsibility, and set deadlines for pollutant levels to decrease in water. Initially the bill was criticized by conservation groups for not being strict enough on polluters, but the Everglades Forever Act was passed in 1994. Since then, the South Florida Water Management District (SFWMD) and the U.S. Army Corps of Engineers have surpassed expectations for achieving lower phosphorus levels. A commission appointed by Governor Chiles published a report in 1995 stating that South Florida was unable to sustain its growth, and the deterioration of the environment was negatively affecting daily life for residents in South Florida. The environmental decline was predicted to harm tourism and commercial interests if no actions were taken to halt current trends. Results of an eight-year study that evaluated the C&SF were submitted to the United States Congress in 1999. The report warned that if no action was taken the region would rapidly deteriorate. A strategy called the Comprehensive Everglades Restoration Plan (CERP) was enacted to restore portions of the Everglades, Lake Okeechobee, the Caloosahatchee River, and Florida Bay to undo the damage of the past 50 years. It would take 30 years and cost $7.8 billion to complete. Though the plan was passed into law in 2000, it has been compromised by political and funding problems. Background The Everglades are part of a very large watershed that begins in the vicinity of Orlando. The Kissimmee River drains into Lake Okeechobee, a lake with an average depth of . During the wet season when the lake exceeds its capacity, the water leaves the lake in a very wide and shallow river, approximately long and wide. This wide and shallow flow is known as sheetflow. 
The land gradually slopes toward Florida Bay, the historical destination of most of the water leaving the Everglades. Before drainage attempts, the Everglades comprised , taking up a third of the Florida peninsula. Since the early 19th century the Everglades have been a subject of interest for agricultural development. The first attempt to drain the Everglades occurred in 1882 when Pennsylvania land developer Hamilton Disston constructed the first canals. Though these attempts were largely unsuccessful, Disston's purchase of land spurred tourism and real estate development of the state. The political motivations of Governor Napoleon Bonaparte Broward resulted in more successful attempts at canal construction between 1906 and 1920. Recently reclaimed wetlands were used for cultivating sugarcane and vegetables, while urban development began in the Everglades. The 1926 Miami Hurricane and the 1928 Okeechobee Hurricane caused widespread devastation and flooding which prompted the Army Corps of Engineers to construct a dike around Lake Okeechobee. The four-story wall cut off water from the Everglades. Floods from hurricanes in 1947 motivated the US Congress to establish the Central and Southern Florida Flood Control Project (C&SF), responsible for constructing of canals and levees, hundreds of pumping stations and other water control devices. The C&SF established Water Conservation Areas (WCAs) in 37% of the original Everglades, which acted as reservoirs providing excess water to the South Florida metropolitan area, or flushing it into the Atlantic Ocean or the Gulf of Mexico. The C&SF also established the Everglades Agricultural Area (EAA), which grows the majority of sugarcane crops in the United States. When the EAA was first established, it encompassed approximately 27% of the original Everglades. By the 1960s, urban development and agricultural use had decreased the size of the Everglades considerably. The remaining 25% of the Everglades in its original state is protected in Everglades National Park, but the park was established before the C&SF, and it depended upon the actions of the C&SF to release water. As Miami and other metropolitan areas began to intrude on the Everglades in the 1960s, political battles took place between park management and the C&SF when insufficient water in the park threw ecosystems into chaos. Fertilizers used in the EAA began to alter soil and hydrology in Everglades National Park, causing the proliferation of exotic plant species. A proposition to build a massive jetport in the Big Cypress Swamp in 1969 focused attention on the degraded natural systems in the Everglades. For the first time, the Everglades became a subject of environmental conservation. Everglades as a priority Environmental protection became a national priority in the 1970s. Time magazine declared it the Issue of the Year in January 1971, reporting that it was rated as Americans' "most serious problem confronting their community—well ahead of crime, drugs and poor schools". When South Florida experienced a severe drought from 1970 to 1975, with Miami receiving only of rain in 1971— less than average—media attention focused on the Everglades. With the assistance of governor's aide Nathaniel Reed and U.S. Fish and Wildlife Service biologist Arthur R. Marshall, politicians began to take action. 
Governor Reubin Askew implemented the Land Conservation Act in 1972, allowing the state to use voter-approved bonds of $240 million to purchase land considered to be environmentally unique and irreplaceable. Since then, Florida has purchased more land for public use than any other state. In 1972 President Richard Nixon declared the Big Cypress Swamp—the intended location for the Miami jetport in 1969—to be federally protected. Big Cypress National Preserve was established in 1974, and Fakahatchee Strand State Preserve was created the same year. In 1976, Everglades National Park was declared an International Biosphere Reserve by UNESCO, which also listed the park as a World Heritage Site in 1979. The Ramsar Convention designated the Everglades a Wetland of International Importance in 1987. Only three locations on Earth have appeared on all three lists: Everglades National Park, Lake Ichkeul in Tunisia, and Srebarna Lake in Bulgaria. Kissimmee River In the 1960s, the C&SF came under increased scrutiny from government overseers and conservation groups. Critics maintained its size was comparable to the Tennessee Valley Authority's dam-building projects during the Great Depression, and that the construction had run into the billions of dollars without any apparent resolution or plan. The projects of the C&SF have been characterized as part of "crisis and response" cycles that "ignored the consequence for the full system, assumed certainty of the future, and succeeded in solving the momentary crisis, but set in motion conditions that exaggerate future crises". The last project, to build a canal to straighten the winding floodplain of the Kissimmee River that had historically fed Lake Okeechobee which in turn fed the Everglades, began in 1962. Marjory Stoneman Douglas later wrote that the C&SF projects were "interrelated stupidity", crowned by the C-38 canal. Designed to replace a meandering river with a channel, the canal was completed in 1971 and cost $29 million. It supplanted approximately of marshland with retention ponds, dams, and vegetation. Loss of habitat has caused the region to experience a drastic decrease of waterfowl, wading birds, and game fish. The reclaimed floodplains were taken over by agriculture, bringing fertilizers and insecticides that washed into Lake Okeechobee. Even before the canal was finished, conservation organizations and sport fishing and hunting groups were calling for the restoration of the Kissimmee River. Arthur R. Marshall led the efforts to undo the damage. According to Douglas, Marshall was successful in portraying the Everglades from the Kissimmee Chain of Lakes to Florida Bay—including the atmosphere, climate, and limestone—as a single organism. Rather than remaining the preserve of conservation organizations, the cause of restoring the Everglades became a priority for politicians. Douglas observed, "Marshall accomplished the extraordinary magic of taking the Everglades out of the bleeding-hearts category forever". At the insistent urging of Marshall, newly elected Governor Bob Graham announced the formation of the "Save Our Everglades" campaign in 1983, and in 1985 Graham lifted the first shovel of backfill for a portion of the C-38 canal. Within a year the area was covered with water returning to its original state. Graham declared that by the year 2000, the Everglades would resemble its predrainage state as much as possible. The Kissimmee River Restoration Project was approved by Congress in the Water Resources Development Act of 1992. 
The project was estimated to cost $578 million to convert only of the canal; the cost was designed to be divided between the state of Florida and the U.S. government, with the state being responsible for purchasing land to be restored. A project manager for the Army Corps of Engineers explained in 2002, "What we're doing on this scale is going to be taken to a larger scale when we do the restoration of the Everglades". The entire project was originally estimated to be completed by 2011, but was completed in July 2021. In all, about of the Kissimmee River was restored, plus 20,000 acres of wetlands. Water quality Attention to water quality was focused in South Florida in 1986 when a widespread algal bloom occurred in one-fifth of Lake Okeechobee. The bloom was discovered to be the result of fertilizers from the Everglades Agricultural Area. Although laws stated in 1979 that the chemicals used in the EAA should not be deposited into the lake, they were flushed into the canals that fed the Everglades Water Conservation Areas, and eventually pumped into the lake. Microbiologists discovered that, although phosphorus assists plant growth, it destroys periphyton, one of the basic building blocks of marl in the Everglades. Marl is one of two types of Everglades soil, along with peat; it is found where parts of the Everglades are flooded for shorter periods of time as layers of periphyton dry. Most of the phosphorus compounds also rid peat of dissolved oxygen and promote algae growth, causing native invertebrates to die, and sawgrass to be replaced with invasive cattails that grow too tall and thick to allow nesting for birds and alligators. Tested water showed 500 parts per billion (ppb) of phosphorus near sugarcane fields. State legislation in 1987 mandated a 40% reduction of phosphorus by 1992. Attempts to correct phosphorus levels in the Everglades met with resistance. The sugarcane industry, dominated by two companies named U.S. Sugar and Flo-Sun, was responsible for more than half of the crop in the EAA. They were well represented in state and federal governments by lobbyists who enthusiastically protected their interests. According to the Audubon Society, the sugar industry, nicknamed "Big Sugar", donated more money to political parties and candidates than General Motors. The sugar industry attempted to block government-funded studies of polluted water, and when the federal prosecutor in Miami faulted the sugar industry in legal action to protect Everglades National Park, Big Sugar tried to get the lawsuit withdrawn and the prosecutor fired. A costly legal battle ensued from 1988 to 1992 between the State of Florida, the U.S. government, and the sugar industry to resolve who was responsible for water quality standards, the maintenance of Everglades National Park and the Arthur R. Marshall Loxahatchee National Wildlife Refuge. A different concern about water quality arose when mercury was discovered in fish during the 1980s. Because mercury is damaging to humans, warnings were posted for fishermen that cautioned against eating fish caught in South Florida, and scientists became alarmed when a Florida panther was found dead near Shark River Slough with mercury levels high enough to be fatal to humans. When mercury is ingested it adversely affects the central nervous system, and can cause brain damage and birth defects. 
Studies of mercury levels found that it is bioaccumulated through the food chain: animals that are lower on the chain have decreased amounts, but as larger animals eat them, the amount of mercury is multiplied. The dead panther's diet consisted of small animals, including raccoons and young alligators. The source of the mercury was found to be waste incinerators and fossil fuel power plants that expelled the element in the atmosphere, which precipitated with rain, or in the dry season, dust. Naturally occurring bacteria in the Everglades that function to reduce sulfur also transform mercury deposits into methylmercury. This process was more dramatic in areas where flooding was not as prevalent. Because of requirements that reduced power plant and incinerator emissions, the levels of mercury found in larger animals decreased as well: approximately a 60% decrease in fish and a 70% decrease in birds, though some levels still remain a health concern for people. Everglades Forever Act In an attempt to resolve the political quagmire over water quality, Governor Lawton Chiles introduced a bill in 1994 to clean up water within the EAA that was being released to the lower Everglades. The bill stated that the "Everglades ecosystem must be restored both in terms of water quality and water quantity and must be preserved and protected in a manner that is long term and comprehensive". It ensured the Florida Department of Environmental Protection (DEP) and the South Florida Water Management District (SFWMD) would be responsible for researching water quality, enforcing water supply improvement, controlling exotic species, and collecting taxes, with the aim of decreasing the levels of phosphorus in the region. It allowed for purchase of land where pollutants would be sent to "treat and improve the quality of waters coming from the EAA". Critics of the bill argued that the deadline for meeting the standards was unnecessarily delayed until 2006—a period of 12 years—to enforce better water quality. They also maintained that it did not force sugarcane farmers, who were the primary polluters, to pay enough of the costs, and increased the threshold of what was an acceptable amount of phosphorus in water from 10 ppb to 50 ppb. Governor Chiles initially named it the Marjory Stoneman Douglas Act, but Douglas was so unimpressed with the action it took against polluters that she wrote to Chiles and demanded her name be stricken from it. Despite criticism, the Florida legislature passed the Act in 1994. The SFWMD stated that its actions have exceeded expectations earlier than anticipated, by creating Stormwater Treatment Areas (STA) within the EAA that contain a calcium-based substance such as lime rock layered between peat, and filled with calcareous periphyton. Early tests by the Army Corps of Engineers revealed this method reduced phosphorus levels from 80 ppb to 10 ppb. The STAs are intended to treat water until the phosphorus levels are low enough to be released into the Loxahatchee National Wildlife Refuge or other WCAs. Wildlife concerns The intrusion of urban areas into wilderness has had a substantial impact on wildlife, and several species of animals are considered endangered in the Everglades region. One animal that has benefited from endangered species protection is the American alligator (Alligator mississippiensis), whose holes give refuge to other animals, often allowing many species to survive during times of drought. 
Once abundant in the Everglades, the alligator was listed as an endangered species in 1967, but a combined effort by federal and state organizations and the banning of alligator hunting allowed it to rebound; it was pronounced fully recovered in 1987 and is no longer an endangered species. However, alligators' territories and average body masses have been found to be generally smaller than in the past, and because populations have been reduced, their role during droughts has become limited. The American Crocodile (Crocodylus acutus) is also native to the region and has been designated as endangered since 1975. Unlike their relatives the alligators, crocodiles tend to thrive in brackish or salt-water habitats such as estuarine or marine coasts. Their most significant threat is disturbance by people. Too much contact with humans causes females to abandon their nests, and males in particular are often victims of vehicle collisions while roaming over large territories and attempting to cross U.S. 1 and Card Sound Road in the Florida Keys. There are an estimated 500 to 1,000 crocodiles in southern Florida. The most critically endangered of any animal in the Everglades region is the Florida panther (Puma concolor coryi), a species that once lived throughout the southeastern United States: there were only 25–30 in the wild in 1995. The panther is most threatened by urban encroachment, because males require approximately for breeding territory. A male and two to five females may live within that range. When habitat is lost, panthers will fight over territory. After vehicle collisions, the second most frequent cause of death for panthers is intra-species aggression. In the 1990s urban expansion crowded panthers from southwestern Florida as Naples and Ft. Myers began to expand into the western Everglades and Big Cypress Swamp. Agencies such as the Army Corps of Engineers and the U.S. Fish and Wildlife Service were responsible for maintaining the Clean Water Act and the Endangered Species Act, yet still approved 99% of all permits to build in wetlands and panther territory. A limited genetic pool is also a danger. Biologists introduced eight female Texas cougars (Puma concolor) in 1995 to diversify genes, and there are between 80 and 120 panthers in the wild . Perhaps the most dramatic loss of any group of animals has been to wading birds. Their numbers were estimated by eyewitness accounts to be approximately 2.5 million in the late 19th century. However, snowy egrets (Egretta thula), roseate spoonbills (Platalea ajaja), and reddish egrets (Egretta rufescens) were hunted to the brink of extinction for the colorful feathers used in women's hats. After about 1920 when the fashion passed, their numbers returned in the 1930s, but over the next 50 years actions by the C&SF further disturbed populations. When the canals were constructed, natural water flow was restricted from the mangrove forests near the coast of Florida Bay. From one wet season to the next, fish were unable to reach traditional locations to repopulate when water was withheld by the C&SF. Birds were forced to fly farther from their nests to forage for food. By the 1970s, bird numbers had decreased 90%. Many of the birds moved to smaller colonies in the WCAs to be closer to a food source, making them more difficult to count. Yet they remain significantly fewer in number than before the canals were constructed. Invasive species Around 6 million people moved to South Florida between 1940 and 1965. 
With a thousand people moving to Miami each week, urban development quadrupled. As the human population grew rapidly, the problem of exotic plant and animal species also grew. Many species of plants were brought into South Florida from Asia, Central America, or Australia as decorative landscaping. Exotic animals imported by the pet trade have escaped or been released. Biological controls that keep invasive species smaller in size and fewer in number in their native lands often do not exist in the Everglades, and they compete with the embattled native species for food and space. Of imported plant species, melaleuca trees (Melaleuca quinquenervia) have caused the most problems. Melaleucas grow on average in the Everglades, as opposed to in their native Australia. They were brought to southern Florida as windbreaks and deliberately seeded in marsh areas because they absorb vast amounts of water. In a region that is regularly shaped by fire, melaleucas are fire-resistant and their seeds are more efficiently spread by fire. They are too dense for wading birds with large wingspans to nest in, and they choke out native vegetation. Costs of controlling melaleucas topped $2 million in 1998 for Everglades National Park. In Big Cypress National Preserve, melaleucas covered at their most pervasive in the 1990s. Brazilian pepper (Schinus terebinthifolius) was brought to Southern Florida as an ornamental shrub and was dispersed by the droppings of birds and other animals that ate its bright red berries. It thrives on abandoned agricultural land growing in forests too dense for wading birds to nest in, similar to melaleucas. It grows rapidly especially after hurricanes and has invaded pineland forests. Following Hurricane Andrew, scientists and volunteers cleared damaged pinelands of Brazilian pepper so the native trees would be able to return to their natural state. The species that is causing the most impediment to restoration is the Old World climbing fern (Lygodium microphyllum), introduced in 1965. The fern grows rapidly and thickly on the ground, making passage for land animals such as black bears and panthers problematic. The ferns also grow as vines into taller portions of trees, and fires climb the ferns in "fire ladders" to scorch portions of the trees that are not naturally resistant to fire. Several animal species have been introduced to Everglades waterways. Many tropical fish are released, the most detrimental being the blue tilapia (Oreochromis aureus), which builds large nests in shallow waters. Tilapia also consume vegetation which would normally be used by young native fishes for cover and protection. Reptiles have a particular affinity for the South Florida ecosystem. Virtually all lizards appearing in the Everglades have been introduced, such as the brown anole (Anolis sagrei) and the tropical house gecko (Hemidactylus mabouia). The herbivorous green iguana (Iguana iguana) can reproduce rapidly in wilderness habitats. However, the reptile that has earned media attention for its size and potential to harm children and domestic pets is the Burmese python (Python bivittatus), which has spread quickly throughout the area. The python can grow up to long and competes with alligators for the top of the food chain. Though exotic birds such as parrots and parakeets are also found in the Everglades, their impact is negligible. Conversely, perhaps the animal that causes the most damage to native wildlife is the domestic or feral cat. 
Across the U.S., cats are responsible for approximately a billion bird deaths annually. They are estimated to number 640 per square mile; cats living in suburban areas have devastating effects on migratory birds and marsh rabbits. Homestead Air Force Base Hurricane Andrew struck Miami in 1992, with catastrophic damage to Homestead Air Force Base in Homestead. A plan to rejuvenate the property in 1993 and convert it into a commercial airport was met with enthusiasm from local municipal and commercial entities hoping to recoup $480 million and 11,000 jobs lost in the local community by the destruction and subsequent closing of the base. On March 31, 1994, the base was designated as a reserve base, functioning only part-time. A cursory environmental study performed by the Air Force was deemed insufficient by local conservation groups, who threatened to sue in order to halt the acquisition when estimates of 650 flights a day were projected. Groups had previously been alarmed in 1990 by the inclusion of Homestead Air Force Base on a list of the U.S. Government's most polluted properties. Their concerns also included noise, and the inevitable collisions with birds using the mangrove forests as rookeries. The Air Force base is located between Everglades National Park and Biscayne National Park, giving it the potential to cause harm to both. In 2000, Secretary of the Interior Bruce Babbitt and the director of the U.S. Environmental Protection Agency expressed their opposition to the project, despite other Clinton Administration agencies previously working to ensure the base would be turned over to local agencies quickly and smoothly as "a model of base disposal". Although attempts were made to make the base more environmentally friendly, in 2001 local commercial interests promoting the airport lost federal support. Comprehensive Everglades Restoration Plan Sustainable South Florida Despite the successes of the Everglades Forever Act and the decreases in mercury levels, the focus intensified on the Everglades in the 1990s as quality of life in the South Florida metropolitan areas diminished. It was becoming clear that urban populations were consuming increasingly unsustainable levels of natural resources. A report entitled "The Governor's Commission for a Sustainable South Florida", submitted to Lawton Chiles in 1995, identified the problems the state and municipal governments were facing. The report remarked that the degradation of the natural quality of the Everglades, Florida Bay, and other bodies of water in South Florida would cause a significant decrease in tourism (12,000 jobs and $200 million annually) and income from compromised commercial fishing (3,300 jobs and $52 million annually). The report noted that past abuses and neglect of the environment had brought the region to "a precipitous juncture" where the inhabitants of South Florida faced health hazards in polluted air and water; furthermore, crowded and unsafe urban conditions hurt the reputation of the state. It noted that though the population had increased by 90% over the previous two decades, registered vehicles had increased by 166%. On the quality and availability of water, the report stated, "[The] frequent water shortages ... create the irony of a natural system dying of thirst in a subtropical environment with over 53 inches of rain per year". Restoration of the Everglades, however, briefly became a bipartisan cause in national politics. 
A controversial penny-a-pound (2 cent/kg) tax on sugar was proposed to fund some of the necessary changes to be made to help decrease phosphorus and make other improvements to water. State voters were asked to support the tax, and environmentalists paid $15 million to encourage the issue. Sugar lobbyists responded with $24 million in advertising to discourage it and succeeded; it became the most expensive ballot issue in state history. How restoration might be funded became a political battleground and seemed to stall without resolution. However, in the 1996 election year, Republican senator Bob Dole proposed that Congress give the State of Florida $200 million to acquire land for the Everglades. Democratic Vice President Al Gore promised the federal government would purchase of land in the EAA to turn it over for restoration. Politicking reduced the number to , but both Dole's and Gore's gestures were approved by Congress. Central and South Florida Project Restudy As part of the Water Resources Development Act of 1992, Congress authorized an evaluation of the effectiveness of the Central and Southern Florida Flood Control Project. A report known as the "Restudy", written by the U.S. Army Corps of Engineers and the South Florida Water Management District, was submitted to Congress in 1999. It cited indicators of harm to the system: a 50% reduction in the original Everglades, diminished water storage, harmful timing of water release, an 85 to 90% decrease in wading bird populations over the past 50 years, and the decline of output from commercial fisheries. Bodies of water including Lake Okeechobee, the Caloosahatchee River, St. Lucie estuary, Lake Worth Lagoon, Biscayne Bay, Florida Bay, and the Everglades reflected drastic water level changes, hypersalinity, and dramatic changes in marine and freshwater ecosystems. The Restudy noted the overall decline in water quality over the past 50 years was caused by loss of wetlands that act as filters for polluted water. It predicted that without intervention the entire South Florida ecosystem would deteriorate. Canals took roughly of water to the Atlantic Ocean or Gulf of Mexico daily, so there was no opportunity for water storage, yet flooding was still a problem. Without changes to the current system, the Restudy predicted water restrictions would be necessary every other year, and annually in some locations. It also warned that revising some portions of the project without dedicating efforts to an overall comprehensive plan would be insufficient and probably detrimental. After evaluating ten plans, the Restudy recommended a comprehensive strategy that would cost $7.8 billion over 20 years. The plan advised taking the following actions: Create surface water storage reservoirs to capture of water in several locations taking up . Create water preserve areas between Miami-Dade and Palm Beach and the eastern Everglades to treat runoff water. Manage Lake Okeechobee as an ecological resource to avoid the drastic rise and fall of water levels in the lake that are harmful to aquatic plant and animal life and disturb the lake sediments. Improve water deliveries to estuaries to reduce the rapid discharge of excess water to the Caloosahatchee and St. Lucie estuaries that upset nutrient balances and cause lesions on fish. Stormwater discharge would be sent instead to reservoirs. 
Increase underground water storage to hold a day in wells, or reservoirs in the Floridan Aquifer, to be used later in dry periods, in a method called Aquifer Storage and Recovery (ASR). Construct treatment wetlands as Stormwater Treatment Areas throughout , that would decrease the amount of pollutants in the environment. Improve water deliveries to the Everglades by increasing them at a rate of approximately 26% into Shark River Slough. Remove barriers to sheetflow by destroying or removing of canals and levees, specifically removing the Miami Canal and reconstructing the Tamiami Trail from a highway to culverts and bridges to allow sheetflow to return to a more natural rate of water flow into Everglades National Park. Store water in quarries and reuse wastewater by employing existing quarries to supply the South Florida metropolitan area as well as Florida Bay and the Everglades. Construct two wastewater treatment plants capable of discharging a day to recharge the Biscayne Aquifer. The implementation of all of the advised actions, the report stated, would "result in the recovery of healthy, sustainable ecosystems throughout south Florida". The report admitted that it did not have all the answers, though no plan could. However, it predicted that it would restore the "essential defining features of the pre-drainage wetlands over large portions of the remaining system", that populations of all animals would increase, and animal distribution patterns would return to their natural states. Critics expressed concern over some unused technology; scientists were unsure if the quarries would hold as much water as was being suggested, and whether the water would harbor harmful bacteria from the quarries. Overtaxing the aquifers was another concern—it was not a technique that had been previously attempted. Though it was optimistic, the Restudy noted, It is important to understand that the 'restored' Everglades of the future will be different from any version of the Everglades that has existed in the past. While it certainly will be vastly superior to the current ecosystem, it will not completely match the pre-drainage system. This is not possible, in light of the irreversible physical changes that have made (sic) to the ecosystem. It will be an Everglades that is smaller and somewhat differently arranged than the historic ecosystem. But it will be a successfully restored Everglades, because it will have recovered those hydrological and biological patterns which defined the original Everglades, and which made it unique among the world's wetland systems. It will become a place that kindles the wildness and richness of the former Everglades. The report was the result of many cooperating agencies that often had conflicting goals. An initial draft was submitted to Everglades National Park management who asserted not enough water would be released to the park quickly enough—that the priority went to delivering water to urban areas. When they threatened to refuse to support it, the plan was rewritten to provide more water to the park. However, the Miccosukee Indians have a reservation in between the park and water control devices, and they threatened to sue to ensure their tribal lands and a $50 million casino would not be flooded. Other special interests were also concerned that businesses and residents would take second priority after nature. The Everglades, however, proved to be a bipartisan cause. 
The Comprehensive Everglades Restoration Plan (CERP) was authorized by the Water Resources Development Act of 2000 and signed into law by President Bill Clinton on December 11, 2000. It approved the immediate use of $1.3 billion for implementation to be split by the federal government and other sources. Implementation The State of Florida reports that it has spent more than $2 billion on the various projects since CERP was signed. More than of Stormwater Treatment Areas (STA) have been constructed to filter of phosphorus from Everglades waters. An STA covering was constructed in 2004, making it the largest environmental restoration project in the world. Fifty-five percent of the land necessary for restoration, totaling , has been purchased by the State of Florida. A plan named "Acceler8", to hasten the construction and funding of the project, was put into place, spurring the start of six of eight construction projects, including that of three large reservoirs. Despite the bipartisan goodwill and declarations of the importance of the Everglades, the region still remains in danger. Political maneuvering continues to impede CERP: sugar lobbyists promoted a bill in the Florida legislature in 2003 that increased the acceptable amount of phosphorus in Everglades waterways from 10 ppb to 15 ppb and extended the deadline for the mandated decrease by 20 years. A compromise of 2016 was eventually reached. Environmental organizations express concern that attempts to speed up some of the construction through Acceler8 are politically motivated; the six projects Acceler8 focuses on do not provide more water to natural areas in desperate need of it, but rather to projects in populated areas bordering the Everglades, suggesting that water is being diverted to make room for more people in an already overtaxed environment. Though Congress promised half the funds for restoration, after the War in Iraq began and two of CERP's major supporters in Congress retired, the federal role in CERP was left unfulfilled. According to a story in The New York Times, state officials say the restoration is lost in a maze of "federal bureaucracy, a victim of 'analysis paralysis' ". In 2007, the release of $2 billion for Everglades restoration was approved by Congress, overriding President George W. Bush's veto of the entire Water Development Project the money was a part of. Bush's rare veto went against the wishes of Florida Republicans, including his brother, Governor Jeb Bush. A lack of subsequent action by the Congress prompted Governor Charlie Crist to travel to Washington D.C. in February 2008 and inquire about the promised funds. By June 2008, the federal government had spent only $400 million of the $7.8 billion legislated. Carl Hiaasen characterized George W. Bush's attitude toward the environment as "long-standing indifference" in June 2008, exemplified when Bush stated he would not intervene to change the Environmental Protection Agency's (EPA) policy allowing the release of water polluted with fertilizers and phosphorus into the Everglades. Reassessment of CERP Florida still receives a thousand new residents daily and lands slated for restoration and wetland recovery are often bought and sold before the state has a chance to bid on them. The competitive pricing of real estate also drives it beyond the purchasing ability of the state.  Because the State of Florida is assisting with purchasing lands and funding construction, some of the programs under CERP are vulnerable to state budget cuts. 
In June 2008 Governor Crist announced that the State of Florida will buy U.S. Sugar for $1.7 billion. The idea came when sugar lobbyists were trying to persuade Crist to relax restriction of U.S. Sugar's practice of pumping phosphorus-laden water into the Everglades. According to one of the lobbyists who characterized it as a "duh moment", Crist said, "If sugar is polluting the Everglades, and we're paying to clean the Everglades, why don't we just get rid of sugar?" The largest producer of cane sugar in the U.S. will continue operations for six years, and when ownership transfers to Florida, of the Everglades will remain undeveloped to allow it to be restored to its pre-drainage state. In September 2008 the National Research Council (NRC), a nonprofit agency providing science and policy advice to the federal government, submitted a report on the progress of CERP. The report noted "scant progress" in restoration because of problems in budgeting, planning, and bureaucracy. The NRC report called the Everglades one of the "world's treasured ecosystems" that is being further endangered by lack of progress: "Ongoing delay in Everglades restoration has not only postponed improvements—it has allowed ecological decline to continue". It cited the shrinking tree islands, and the negative population growth of the endangered Rostrhamus sociabilis or Everglades snail kite, and Ammodramus maritimus mirabilis, the Cape Sable seaside sparrow. The lack of water reaching Everglades National Park was characterized as "one of the most discouraging stories" in implementation of the plan. The NRC recommended improving planning on the state and federal levels, evaluating each CERP project annually, and further acquisition of land for restoration. Everglades restoration was earmarked $96 million in federal funds as part of the American Recovery and Reinvestment Act of 2009 with the intention of providing civil service and construction jobs while simultaneously implementing the legislated repair projects. In January 2010, work began on the C-111 canal, built in the 1960s to drain irrigated farmland, to reconstruct it to keep from diverting water from Everglades National Park. Two other projects focusing on restoration were also scheduled to start in 2010. Governor Crist announced the same month that $50 million would be earmarked for Everglades restoration. In April of the same year, a federal district court judge sharply criticized both state and federal failures to meet deadlines, describing the cleanup efforts as being slowed by "glacial delay" and government neglect of environmental law enforcement "incomprehensible". See also Draining and development of the Everglades Everglades National Park Geography and ecology of the Everglades History of Miami, Florida Indigenous people of the Everglades region Notes and references Bibliography Barnett, Cynthia (2007). Mirage: Florida and the Vanishing Water of the Eastern U.S., University of Michigan Press. Douglas, Marjory; Rothchild, John (1987). Marjory Stoneman Douglas: Voice of the River. Pineapple Press. Grunwald, Michael (2006). The Swamp: The Everglades, Florida, and the Politics of Paradise, Simon & Schuster. Lodge, Thomas E. (1994). The Everglades Handbook: Understanding the Ecosystem. CRC Press. U.S. Army Corps of Engineers and South Florida Water Management District (April 1999). "Summary", Central and Southern Florida Project Comprehensive Review Study. Further reading Alderson, Doug. 2009. New Dawn for the Kissimmee River. 
Gainesville, FL: University Press of Florida. The Everglades in the Time of Marjory Stoneman Douglas Photo exhibit created by the State Archives of Florida External links CERP: A Visual Explanation of the Comprehensive Everglades Restoration Project (SFWMD) C-44 Reservoir Storm Water Treatment Area Project (SFWMD/CERP) Everglades Everglades History of sugar Constructed wetlands Sugar industry of Florida
Restoration of the Everglades
Chemistry,Engineering,Biology
8,391
4,743,665
https://en.wikipedia.org/wiki/Scrum%20%28software%20development%29
Scrum is an agile team collaboration framework commonly used in software development and other industries. Scrum prescribes for teams to break work into goals to be completed within time-boxed iterations, called sprints. Each sprint is no longer than one month and commonly lasts two weeks. The scrum team assesses progress in time-boxed, stand-up meetings of up to 15 minutes, called daily scrums. At the end of the sprint, the team holds two further meetings: one sprint review to demonstrate the work for stakeholders and solicit feedback, and one internal sprint retrospective. A person in charge of a scrum team is typically called a scrum master. Scrum's approach to product development involves bringing decision-making authority to an operational level. Unlike a sequential approach to product development, scrum is an iterative and incremental framework for product development. Scrum allows for continuous feedback and flexibility, requiring teams to self-organize by encouraging physical co-location or close online collaboration, and mandating frequent communication among all team members. The flexible approach of scrum is based in part on the notion of requirement volatility, that stakeholders will change their requirements as the project evolves. History The use of the term scrum in software development came from a 1986 Harvard Business Review paper titled "The New New Product Development Game" by Hirotaka Takeuchi and Ikujiro Nonaka. Based on case studies from manufacturing firms in the automotive, photocopier, and printer industries, the authors outlined a new approach to product development for increased speed and flexibility. They called this the rugby approach, as the process involves a single cross-functional team operating across multiple overlapping phases in which the team "tries to go the distance as a unit, passing the ball back and forth". The authors later developed scrum in their book, The Knowledge Creating Company. In the early 1990s, Ken Schwaber used what would become scrum at his company, Advanced Development Methods. Jeff Sutherland, John Scumniotales, and Jeff McKenna developed a similar approach at Easel Corporation, referring to the approach with the term scrum. Sutherland and Schwaber later worked together to integrate their ideas into a single framework, formally known as scrum. Schwaber and Sutherland tested scrum and continually improved it, leading to the publication of a research paper in 1995, and the Manifesto for Agile Software Development in 2001. Schwaber also collaborated with Babatunde Ogunnaike at DuPont Research Station and the University of Delaware to develop Scrum. Ogunnaike believed that software development projects could often fail when initial conditions change if product management was not rooted in empirical practice. In 2002, Schwaber with others founded the Scrum Alliance and set up the Certified Scrum accreditation series. Schwaber left the Scrum Alliance in late 2009 and subsequently founded Scrum.org, which oversees the parallel Professional Scrum accreditation series. Since 2009, a public document called The Scrum Guide has been published and updated by Schwaber and Sutherland. It has been revised six times, with the most recent version having been published in November 2020. Scrum team A scrum team is organized into at least three categories of individuals: the product owner, developers, and the scrum master. 
The product owner liaises with stakeholders, those who have an interest in the project's outcome, to communicate tasks and expectations with developers. Developers in a scrum team organize work by themselves, with the facilitation of a scrum master. Product owner Each scrum team has one product owner. The product owner focuses on the business side of product development and spends the majority of time liaising with stakeholders and the team. The role is intended to primarily represent the product's stakeholders, the voice of the customer, or the desires of a committee, and bears responsibility for the delivery of business results. Product owners manage the product backlog and are responsible for maximizing the value that a team delivers. They do not dictate the technical solutions of a team but may instead attempt to seek consensus among team members. As the primary liaison of the scrum team towards stakeholders, product owners are responsible for communicating announcements, project definitions and progress, RIDAs (risks, impediments, dependencies, and assumptions), funding and scheduling changes, the product backlog, and project governance, among other responsibilities. Product owners can also cancel a sprint if necessary, without the input of team members. Developers In scrum, the term developer or team member refers to anyone who plays a role in the development and support of the product and can include researchers, architects, designers, programmers, etc. Scrum master Scrum is facilitated by a scrum master, whose role is to educate and coach teams about scrum theory and practice. Scrum masters have differing roles and responsibilities from traditional team leads or project managers. Some scrum master responsibilities include coaching, objective setting, problem solving, oversight, planning, backlog management, and communication facilitation. On the other hand, traditional project managers often have people management responsibilities, which a scrum master does not. Scrum teams do not involve project managers, so as to maximize self-organisation among developers. Workflow Sprint A sprint (also known as a design sprint, iteration, or timebox) is a fixed period of time wherein team members work on a specific goal. Each sprint is normally between one week and one month, with two weeks being the most common. The outcome of the sprint is a functional deliverable, or a product which has received some development in increments. When a sprint is abnormally terminated, the next step is to conduct new sprint planning, where the reason for the termination is reviewed. Each sprint starts with a sprint planning event in which a sprint goal is defined. Priorities for planned sprints are chosen out of the backlog. Each sprint ends with two events: A sprint review (progress shown to stakeholders to elicit their feedback) A sprint retrospective (identifying lessons and improvements for the next sprints) The suggested maximum duration of sprint planning is eight hours for a four-week sprint. Daily scrum Each day during a sprint, the developers hold a daily scrum (often conducted standing up) with specific guidelines, and which may be facilitated by a scrum master. Daily scrum meetings are intended to be less than 15 minutes in length, taking place at the same time and location daily. The purpose of the meeting is to announce progress made towards the sprint goal and issues that may be hindering the goal, without going into any detailed discussion. 
Once over, individual members can go into a 'breakout session' or an 'after party' for extended discussion and collaboration. Scrum masters are responsible for ensuring that team members use daily scrums effectively or, if team members are unable to use them, providing alternatives to achieve similar outcomes. Post-sprint events Conducted at the end of a sprint, a sprint review is a meeting that has a team share the work they've completed with stakeholders and liaise with them on feedback, expectations, and upcoming plans. At a sprint review completed deliverables are demonstrated to stakeholders. The recommended duration for a sprint review is one hour per week of sprint. A sprint retrospective is a separate meeting that allows team members to internally analyze the strengths and weaknesses of the sprint, future areas of improvement, and continuous process improvement actions. Backlog refinement Backlog refinement is a process by which team members revise and prioritize a backlog for future sprints. It can be done as a separate stage done before the beginning of a new sprint or as a continuous process that team members work on by themselves. Backlog refinement can include the breaking down of large tasks into smaller and clearer ones, the clarification of success criteria, and the revision of changing priorities and returns. Artifacts Artifacts are a means by which scrum teams manage product development by documenting work done towards the project. There are seven scrum artifacts, with three of them being the most common: product backlog, sprint backlog, and increment. Product backlog The product backlog is a breakdown of work to be done and contains an ordered list of product requirements (such as features, bug fixes and non-functional requirements) that the team maintains for a product. The order of a product backlog corresponds to the urgency of the task. Common formats for backlog items include user stories and use cases. The product backlog may also contain the product owner's assessment of business value and the team's assessment of the product's effort or complexity, which can be stated in story points using the rounded Fibonacci scale. These estimates try to help the product owner gauge the timeline and may influence the ordering of product backlog items. The product owner maintains and prioritizes product backlog items based on considerations such as risk, business value, dependencies, size, and timing. High-priority items at the top of the backlog are broken down into more detail for developers to work on, while tasks further down the backlog may be more vague. Sprint backlog The sprint backlog is the subset of items from the product backlog intended for developers to address in a particular sprint. Developers fill this backlog with tasks they deem appropriate to fill the sprint, using past performance to assess their capacity for each sprint. The scrum approach has tasks on the sprint backlog not assigned to developers by any particular individual or leader. Team members self organize by pulling work as needed according to the backlog priority and their own capabilities and capacity. Increment An increment is a potentially releasable output of a sprint, which meets the sprint goal. It is formed from all the completed sprint backlog items, integrated with the work of all previous sprints. Other artifacts Burndown chart Often used in scrum, a burndown chart is a publicly displayed chart showing remaining work. It provides quick visualizations for reference. 
The horizontal axis of the burndown chart shows the days remaining, while the vertical axis shows the amount of work remaining each day. During sprint planning, the ideal burndown chart is plotted. Then, during the sprint, developers update the chart with the remaining work. Release burnup chart Updated at the end of each sprint, the release burn-up chart shows progress towards delivering a forecast scope. The horizontal axis of the release burnup chart shows the sprints in a release, while the vertical axis shows the amount of work completed at the end of each sprint. Velocity Some project managers believe that a team's total capability effort for a single sprint can be derived by evaluating work completed in the last sprint. The collection of historical "velocity" data is a guideline for assisting the team in understanding their capacity. Limitations Some have argued that scrum events, such as daily scrums and scrum reviews, hurt productivity and waste time that could be better spent on actual productive tasks. Scrum has also been observed to pose difficulties for part-time or geographically distant teams; those that have highly specialized members who would be better off working by themselves or in working cliques; and those that are unsuitable for incremental and development testing. Adaptations Scrum is frequently tailored or adapted in different contexts to achieve varying aims. A common approach to adapting scrum is the combination of scrum with other software development methodologies, as scrum does not cover the whole product development lifecycle. Various scrum practitioners have also suggested more detailed techniques for how to apply or adapt scrum to particular problems or organizations. Many refer to these techniques as 'patterns', an analogous use to design patterns in architecture and software. Scrumban Scrumban is a software production model based on scrum and kanban. To illustrate each stage of work, teams working in the same space often use post-it notes or a large whiteboard. Kanban models allow a team to visualize work stages and limitations. Scrum of scrums Scrum of scrums is a technique to operate scrum at scale for multiple teams coordinating on the same product. Scrum-of-scrums daily scrum meetings involve ambassadors selected from each individual team, who may be either a developer or scrum master. As a tool for coordination, scrum of scrums allows teams to collectively work on team-wide risks, impediments, dependencies, and assumptions (RIDAs), which may be tracked in a backlog of their own. Large-scale scrum Large-scale scrum is an organizational system for product development that scales scrum with varied rules and guidelines, developed by Bas Vodde and Craig Larman. There are two levels to the framework: the first level, designed for up to eight teams; and the second level, known as 'LeSS Huge', which can accommodate development involving hundreds of developers. Criticism A systematic review found "that Distributed Scrum has no impact, positive or negative on overall project success" in distributed software development. 
Martin Fowler, one of the authors of the Manifesto for Agile Software Development, has criticised what he calls "faux-agile" practices that are "disregarding agile's values and principles", and "the Agile Industrial Complex imposing methods upon people" contrary to the Agile principle of valuing "individuals and interactions over processes and tools" and allowing the individuals doing the work to decide how the work is done, changing processes to suit their needs. In September 2016, Ron Jeffries, a signatory to the Agile Manifesto, described what he called "Dark Scrum", saying that "Scrum can be very unsafe for programmers." See also Agile software development Agile testing Agile learning Disciplined agile delivery Comparison of scrum software High-performance teams Lean software development Project management Unified process Citations General and cited references Verheyen, Gunther (2013). Scrum – A Pocket Guide (A Smart Travel Companion). . External links Agile Alliance's Scrum library A scrum process description by the Eclipse process framework (EPF) project Agile software development Software development Software development philosophies Software project management
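As a rough numerical illustration of the velocity guideline described above, the sketch below (a hypothetical helper written for this article, not part of any scrum tool) averages the story points completed in recent sprints to suggest a planning capacity for the next sprint; the numbers are illustrative assumptions.

    # Sketch: estimating sprint capacity from historical velocity.
    def forecast_capacity(completed_points, window=3):
        """Average the story points completed in the last `window` sprints."""
        recent = completed_points[-window:]
        return sum(recent) / len(recent)

    history = [21, 34, 29, 25]          # points completed in past sprints (made-up data)
    print(forecast_capacity(history))   # -> 29.33..., a guideline, not a commitment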
Scrum (software development)
Technology,Engineering
2,879
44,004,087
https://en.wikipedia.org/wiki/Apparent%20longitude
Apparent longitude is celestial longitude corrected for aberration and nutation, as opposed to mean longitude. Apparent longitude is used in the definition of equinox and solstice. At an equinox, the apparent geocentric celestial longitude of the Sun is 0° or 180°; at a solstice, it is 90° or 270°. These instants do not coincide exactly with the moments of zero declination or of extreme declination, because the celestial latitude of the Sun, while less than 1.2 arcseconds, is not exactly zero. Sources Astronomical coordinate systems
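As a minimal numerical sketch of the correction described above, the snippet below adds nutation in longitude and the (approximately constant) annual aberration of about 20.5 arcseconds to a true solar longitude. The nutation value and the sample longitude are assumed inputs for illustration; a real computation would take them from an ephemeris or nutation theory.

    # Sketch: apparent longitude of the Sun from its true (geometric) longitude.
    ABERRATION_ARCSEC = 20.5   # approximate constant of aberration for the Sun

    def apparent_longitude(true_longitude_deg, delta_psi_arcsec):
        """Apply nutation in longitude (delta_psi) and annual aberration, both in arcseconds."""
        correction_deg = (delta_psi_arcsec - ABERRATION_ARCSEC) / 3600.0
        return (true_longitude_deg + correction_deg) % 360.0

    # Near an equinox the true longitude is close to 0 or 180 degrees:
    print(apparent_longitude(180.004, -13.0))  # slightly shifted apparent value (illustrative)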
Apparent longitude
Astronomy,Mathematics
117
13,387,024
https://en.wikipedia.org/wiki/Self-consolidating%20concrete
Self-consolidating concrete or self-compacting concrete (SCC) is a concrete mix which has a low yield stress, high deformability, good segregation resistance (prevents separation of particles in the mix), and moderate viscosity (necessary to ensure uniform suspension of solid particles during transportation, placement (without external compaction), and thereafter until the concrete sets). In everyday terms, when poured, SCC is an extremely fluid mix with the following distinctive practical features – it flows very easily within and around the formwork, can flow through obstructions and around corners ("passing ability"), is close to self-levelling (although not actually self-levelling), does not require vibration or tamping after pouring, and follows the shape and surface texture of a mold (or form) very closely once set. As a result, pouring SCC is also much less labor-intensive compared to standard concrete mixes. Once poured, SCC is usually similar to standard concrete in terms of its setting and curing time (gaining strength), and strength. SCC does not use a high proportion of water to become fluid – in fact SCC may contain less water than standard concretes. Instead, SCC gains its fluid properties from an unusually high proportion of fine aggregate, such as sand (typically 50%), combined with superplasticizers (additives that ensure particles disperse and do not settle in the fluid mix) and viscosity-enhancing admixtures (VEA). Ordinarily, concrete is a dense, viscous material when mixed, and when used in construction, requires the use of vibration or other techniques (known as compaction) to remove air bubbles (cavitation), and honeycomb-like holes, especially at the surfaces, where air has been trapped during pouring. This kind of air content (unlike that in aerated concrete) is not desired and weakens the concrete if left. However, it is laborious and takes time to remove by vibration, and improper or inadequate vibration can lead to undetected problems later. Additionally, some complex forms cannot easily be vibrated. Self-consolidating concrete is designed to avoid this problem, and not require compaction, therefore reducing labor, time, and a possible source of technical and quality control issues. SCC was conceptualized in 1986 by Prof. Okamura at Kochi University, Japan, at a time when skilled labor was in limited supply, causing difficulties in concrete-related industries. The first generation of SCC used in North America was characterized by the use of a relatively high content of binder as well as high dosages of chemical admixtures, usually superplasticizers, to enhance flowability and stability. Such high-performance concrete had been used mostly in repair applications and for casting concrete in restricted areas. The first generation of SCC was therefore characterized and specified for specialized applications. SCC can be used for casting heavily reinforced sections, places where there can be no access to vibrators for compaction and in complex shapes of formwork which may otherwise be impossible to cast, giving a far superior surface finish than conventional concrete. The relatively high cost of the materials used in such concrete continues to hinder its widespread use in various segments of the construction industry, including commercial construction; however, the productivity and performance benefits make SCC economical in the precast industry. 
The incorporation of powder, including supplementary cementitious materials and filler, can increase the volume of the paste, hence enhancing deformability, and can also increase the cohesiveness of the paste and stability of the concrete. The reduction in cement content and increase in packing density of materials finer than 80 μm, such as fly ash, can reduce the water-cement ratio and the high-range water reducer (HRWR) demand. The reduction in free water can reduce the concentration of viscosity-enhancing admixture (VEA) necessary to ensure proper stability during casting and thereafter until the onset of hardening. It has been demonstrated that a total fine aggregate content ("fines", usually sand) of about 50% of total aggregate is appropriate in an SCC mix. There are many studies on different types of SCC which address its fresh properties, strength, durability and microstructural properties. Types of self-consolidating concrete include low-fines SCC (LF-SCC) and semi-flowable SCC (SF-SCC). SCC can be produced using various industrial wastes as cement-replacing materials, and such mixes can be used for pavement construction (see references 2–6 below). Overview SCC is measured using the flow table test (slump-flow test) rather than the usual concrete slump test, as it is too fluid to keep its shape when the cone is removed. A typical SCC mix will have a slump-flow of around 500–700 mm. SCC is weakened, not strengthened, by vibration. As vibration is not needed for compacting the mix, all that it achieves is to separate and segregate it. See also Concrete slump test Flow table test References 2. Low-fines self-consolidating concrete using rice husk ash for road pavement: An environment-friendly and sustainable approach. https://doi.org/10.1016/j.conbuildmat.2022.130036 3. Kannur, B., Chore, H. S. Utilization of sugarcane bagasse ash as cement-replacing materials for concrete pavement: an overview. Innov. Infrastruct. Solut. 6, 184 (2021). https://doi.org/10.1007/s41062-021-00539-4 4. Kannur, B., Chore, H. S. Strength and durability study of low-fines self-consolidating concrete as a pavement material using fly ash and bagasse ash. https://doi.org/10.1080/19648189.2022.2140207 5. Kannur, B., Chore, H. S. Semi-flowable self-consolidating concrete using industrial wastes for construction of rigid pavements in India: An overview. https://doi.org/10.1016/j.jtte.2023.01.001 6. Kannur, B., Chore, H. S. Assessing Semiflowable Self-Consolidating Concrete with Sugarcane Bagasse Ash for Application in Rigid Pavement. Journal of Materials in Civil Engineering 35 (10), 04023358, 2023. https://doi.org/10.1061/JMCEE7.MTENG-16355 External links Proportioning of self-compacting concrete – the UCL method – paper summarizing common mixes, uses, choices of additives, properties, and extensive information on SCCs. Working With SCC Needn’t Be Hit or Miss – precast concrete makers' experience with SCC / what to do and not do. Concrete
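As a small arithmetic sketch of two of the rules of thumb mentioned above (a fine-aggregate share of roughly 50% of total aggregate, and a slump-flow in the 500–700 mm range), the snippet below checks a trial mix; the function names and sample quantities are illustrative assumptions, not part of any standard.

    # Sketch: checking a trial SCC mix against two common guidelines.
    def fines_fraction(fine_aggregate, coarse_aggregate):
        """Fine aggregate as a fraction of total aggregate (by mass)."""
        return fine_aggregate / (fine_aggregate + coarse_aggregate)

    def slump_flow_ok(slump_flow_mm, low=500, high=700):
        """True if the measured slump-flow lies in the typical SCC range."""
        return low <= slump_flow_mm <= high

    # Illustrative trial quantities, kg per cubic metre:
    print(round(fines_fraction(fine_aggregate=880, coarse_aggregate=830), 2))  # ~0.51
    print(slump_flow_ok(650))  # True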
Self-consolidating concrete
Engineering
1,497
3,189
https://en.wikipedia.org/wiki/Ascending%20chain%20condition
In mathematics, the ascending chain condition (ACC) and descending chain condition (DCC) are finiteness properties satisfied by some algebraic structures, most importantly ideals in certain commutative rings. These conditions played an important role in the development of the structure theory of commutative rings in the works of David Hilbert, Emmy Noether, and Emil Artin. The conditions themselves can be stated in an abstract form, so that they make sense for any partially ordered set. This point of view is useful in abstract algebraic dimension theory due to Gabriel and Rentschler. Definition A partially ordered set (poset) P is said to satisfy the ascending chain condition (ACC) if no infinite strictly ascending sequence a_1 < a_2 < a_3 < … of elements of P exists. Equivalently, every weakly ascending sequence a_1 ≤ a_2 ≤ a_3 ≤ … of elements of P eventually stabilizes, meaning that there exists a positive integer n such that a_n = a_(n+1) = a_(n+2) = …. Similarly, P is said to satisfy the descending chain condition (DCC) if there is no infinite strictly descending chain a_1 > a_2 > a_3 > … of elements of P. Equivalently, every weakly descending sequence of elements of P eventually stabilizes. Comments Assuming the axiom of dependent choice, the descending chain condition on (possibly infinite) poset P is equivalent to P being well-founded: every nonempty subset of P has a minimal element (also called the minimal condition or minimum condition). A totally ordered set that is well-founded is a well-ordered set. Similarly, the ascending chain condition is equivalent to P being converse well-founded (again, assuming dependent choice): every nonempty subset of P has a maximal element (the maximal condition or maximum condition). Every finite poset satisfies both the ascending and descending chain conditions, and thus is both well-founded and converse well-founded. Example Consider the ring Z of integers. Each ideal of Z consists of all multiples of some number n. For example, the ideal 2Z consists of all multiples of 2. Let I = 18Z be the ideal consisting of all multiples of 18. The ideal I is contained inside the ideal 6Z, since every multiple of 18 is also a multiple of 6. In turn, the ideal 6Z is contained in the ideal 2Z, since every multiple of 6 is a multiple of 2, and 2Z is contained in Z itself, the ideal generated by 1. At this point there is no larger ideal; we have "topped out" at Z. In general, if I_1, I_2, I_3, … are ideals of Z such that I_1 is contained in I_2, I_2 is contained in I_3, and so on, then there is some n for which I_n = I_(n+1) = I_(n+2) = …. That is, after some point all the ideals are equal to each other. Therefore, the ideals of Z satisfy the ascending chain condition, where ideals are ordered by set inclusion. Hence Z is a Noetherian ring. See also Artinian Ascending chain condition for principal ideals Krull dimension Maximal condition on congruences Noetherian Notes Citations References External links Commutative algebra Order theory Wellfoundedness
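To make the ideals example above concrete in code, the sketch below represents the ideal nZ by its non-negative generator n, using the fact that aZ is contained in bZ exactly when b divides a, and checks that an ascending chain of such ideals stabilizes. This is an illustration written for this article, not standard library functionality.

    # Sketch: ideals of Z represented by their non-negative generators.
    # nZ is contained in mZ exactly when m divides n (with 0Z = {0} inside everything).
    def contained_in(n, m):
        """Is the ideal nZ a subset of mZ?"""
        if m == 0:
            return n == 0
        return n % m == 0

    chain = [18, 6, 2, 1, 1, 1]   # 18Z <= 6Z <= 2Z <= Z = Z = Z ...
    assert all(contained_in(a, b) for a, b in zip(chain, chain[1:]))
    # The chain stabilizes: from some index on, all generators (hence all ideals) coincide.
    print(chain.index(1))  # 3 -> the chain has "topped out" at Z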
Ascending chain condition
Mathematics
564
3,638,822
https://en.wikipedia.org/wiki/Ilfak%20Guilfanov
Ilfak Guilfanov (born 12 November 1966) is a Russian software developer, computer security researcher and blogger. He became well known when he issued a free hotfix for the Windows Metafile vulnerability on 31 December 2005. His unofficial patch was favorably reviewed and widely publicized because no official patch was initially available from Microsoft. Microsoft released an official patch on 5 January 2006. Guilfanov was born in a small village in the Tatarstan Region of Russia into a Volga Tatar family. He graduated from Moscow State University in 1987 with a Bachelor of Science in Mathematics. He is the systems architect and main developer of IDA Pro, Hex-Rays' commercial version of the Interactive Disassembler that Guilfanov created. A freeware version of this reverse engineering tool is also available. He currently lives in Liège, Belgium. He formerly worked for DataRescue. In 2005, Guilfanov founded Hex-Rays. By 2020, the company's annual revenue had surpassed 20 million euros. In 2022, a consortium of investors (Smartfin, SFPIM and SRIW) acquired Hex-Rays for 81 million euros. References External links http://www.hex-rays.com Interview on CNET https://www.helpnetsecurity.com/2022/10/21/hex-rays-smartfin 1966 births Living people Russian computer programmers Moscow State University alumni Computer security specialists
Ilfak Guilfanov
Technology
301
2,389,749
https://en.wikipedia.org/wiki/Pedotransfer%20function
In soil science, pedotransfer functions (PTF) are predictive functions of certain soil properties using data from soil surveys. The term pedotransfer function was coined by Johan Bouma as translating data we have into what we need. The most readily available data comes from a soil survey, such as the field morphology, soil texture, structure and pH. Pedotransfer functions add value to this basic information by translating them into estimates of other more laborious and expensively determined soil properties. These functions fill the gap between the available soil data and the properties which are more useful or required for a particular model or quality assessment. Pedotransfer functions utilize various regression analysis and data mining techniques to extract rules associating basic soil properties with more difficult to measure properties. Although not formally recognized and named until 1989, the concept of the pedotransfer function has long been applied to estimate soil properties that are difficult to determine. Many soil science agencies have their own (unofficial) rule of thumb for estimating difficult-to-measure soil properties. Probably because of the particular difficulty, cost of measurement, and availability of large databases, the most comprehensive research in developing PTFs has been for the estimation of water retention curve and hydraulic conductivity. History The first PTF came from the study of Lyman Briggs and McLane (1907). They determined the wilting coefficient, which is defined as percentage water content of a soil when the plants growing in that soil are first reduced to a wilted condition from which they cannot recover in an approximately saturated atmosphere without the addition of water to the soil, as a function of particle-size: Wilting coefficient = 0.01 sand + 0.12 silt + 0.57 clay With the introduction of the field capacity (FC) and permanent wilting point (PWP) concepts by Frank Veihmeyer and Arthur Hendricksen (1927), research during the period 1950-1980 attempted to correlate particle-size distribution, bulk density and organic matter content with water content at field capacity (FC), permanent wilting point (PWP), and available water capacity (AWC). In the 1960s various papers dealt with the estimation of FC, PWP, and AWC, notably in a series of papers by Salter and Williams (1965 etc.). They explored relationships between texture classes and available water capacity, which are now known as class PTFs. They also developed functions relating the particle-size distribution to AWC, now known as continuous PTFs. They asserted that their functions could predict AWC to a mean accuracy of 16%. In the 1970s more comprehensive research using large databases was developed. A particularly good example is the study by Hall et al. (1977) from soil in England and Wales; they established field capacity, permanent wilting point, available water content, and air capacity as a function of textural class, and as well as deriving continuous functions estimating these soil-water properties. In the USA, Gupta and Larson (1979) developed 12 functions relating particle-size distribution and organic matter content to water content at potentials ranging from -4 kPa to -1500 kPa. With the flourishing development of models describing soil hydraulic properties and computer modelling of soil-water and solute transport, the need for hydraulic properties as inputs to these models became more evident. 
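As a minimal illustration of a continuous pedotransfer function of the kind described above, the sketch below evaluates the Briggs and McLane wilting-coefficient relation quoted earlier from the sand, silt and clay percentages. The function name and the sample texture are illustrative; real PTF software wraps many such regressions.

    # Sketch: the Briggs & McLane (1907) wilting-coefficient pedotransfer function.
    # Inputs are particle-size fractions in percent; they should sum to roughly 100.
    def wilting_coefficient(sand_pct, silt_pct, clay_pct):
        """Estimated wilting coefficient (% water content) from soil texture."""
        return 0.01 * sand_pct + 0.12 * silt_pct + 0.57 * clay_pct

    # A loam-like texture: 40% sand, 40% silt, 20% clay.
    print(wilting_coefficient(40, 40, 20))  # -> 16.6 (approximate % water content)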
Clapp and Hornberger (1978) derived average values for the parameters of a power-function water retention curve, sorptivity and saturated hydraulic conductivity for different texture classes. In probably the first research of its kind, Bloemen (1977) derived empirical equations relating parameters of the Brooks and Corey hydraulic model to particle-size distribution. Jurgen Lamp and Kneib (1981) from Germany introduced the term pedofunction, while Bouma and van Lanen (1986) used the term transfer function. To avoid confusion with the term transfer function used in soil physics and in many other disciplines, Johan Bouma (1989) later called it pedotransfer function. (A personal anecdote hinted that Arnold Bregt from Wageningen University suggested this term). Since then, the development of hydraulic PTFs has become a boom research topic, first in the US and Europe, South America, Australia and all over the world. Although most PTFs have been developed to predict soil hydraulic properties, they are not restricted to hydraulic properties. PTFs for estimating soil physical, mechanical, chemical and biological properties have also been developed. Software There are several available programs that aid determining hydraulic properties of soils using pedotransfer functions, among them are SOILPAR – By Acutis and Donatelli ROSETTA – By Schaap et al. of the USDA, uses artificial neural networks Soil inference systems McBratney et al. (2002) introduced the concept of a soil inference system, SINFERS, where pedotransfer functions are the knowledge rules for soil inference engines. A soil inference system takes measurements with a given level of certainty (source) and by means of logically linked pedotransfer functions (organiser) infers data that is not known with minimal inaccuracy (predictor). See also Moisture equivalent Nonlimiting water range Soil functions References Pedology Soil physics
Pedotransfer function
Physics
1,084
45,641,381
https://en.wikipedia.org/wiki/TeamNote
TeamNote is a mobile-first business communication and collaboration application developed by the Hong Kong–based technology company TeamNote Limited. TeamNote is provided as a white label solution to corporations and is deployed in a private cloud or on an on-premises server. It allows users to send text messages and voice messages, and to share images, documents, user locations, and other content. It is not available for download on the iOS App Store or in Google Play. New users are added by sending out links or through manual deployment. Features TeamNote offers standardized communication features, customizable workflow modules and system integration. The primary features of TeamNote are instant messaging (text and voice), individual and group chat modes, and news announcements organized by top management. It also offers GPS location tracking, polling or voting, task assignments, photo reporting, sales reporting in chat rooms, and the sharing of training manuals. In addition, TeamNote has customized features such as form filling, HR tasks, job dispatch, and duty rosters. Platforms TeamNote provides Android and iOS mobile apps for end users, and a web portal for web clients including end users and superusers. Business model TeamNote uses a subscription business model and has claimed to charge US$5 per user per month. The fee is adjusted for additional features, and a custom rate can apply if deeper integration is required. History Research and development on TeamNote started in 2012 under its then-parent company Apptask Limited, a project-led mobile applications development company. TeamNote was originally developed for a Hong Kong local real estate conglomerate as a customized corporate communication app, which inspired its founder Roy Law and the team to develop TeamNote as a product. TeamNote Limited was founded as an independent company in July 2013, after spinning off from its now-sister company Apptask Limited, and TeamNote was officially launched to the Asian market in the first quarter of 2014. In January 2015, TeamNote Limited was shortlisted for Y Combinator's three-month accelerator programme, received $120,000 in seed money, and later raised an angel round of approximately US$1 million. TeamNote announced its global launch during an interview with TechCrunch in March 2015. The original TeamNote app focused on secure messaging. This included password-protected conversations, the ability to send a message out to a group and get private replies, and even a feature to make sensitive messages disappear after a specific expiration date. As it expanded, the application gained features for managing shifts for workers in the field, who can send messages and photographs related to their work back to their company's home base to complete tasks. There are also mobile training modules, letting teams quickly bring new field workers up to speed without making them sit down and watch an entire training session. Awards In 2014, TeamNote won the Red Herring Asia Top 100 Technology Award in Hong Kong and the Global Top 100 Technology Award in Los Angeles. In 2015, TeamNote won the Best Mobile Apps Grand Award and the Best Mobile Apps (Business and Enterprise Solution) Gold Award in the Hong Kong ICT Awards. TeamNote also won a merit award at the Asia Pacific ICT Alliance Awards (APICTA). 
See also Comparison of cross-platform instant messaging clients List of collaborative software Secure communication Sources Emily Goh, 25 startups in Asia that caught our eye (8/3/2015), 來自香港的Slack對手 – TeamNote (8/03/2015), Josh Horwitz, Y Combinator-backed TeamNote is a Hong Kong–born answer to Slack and HipChat (4/03/2015), Kyle Russell, YC-Backed TeamNote Provides Enterprise Communications For Companies With People Out In The Field (3/03/2015), 當辦公也移動:YC 孵化的“團信”欲打造企業版WhatsApp (5/02/2015), 羅國明, 矽谷啟示錄(一):闖矽谷需要盲公竹 (30/01/2015), 港產通訊App 闖矽谷:「創業很型!」(27/01/2015), Y Combinator casts its eye towards Asia and what that means for you (18/03/2015), Here Are The Companies That Presented At Y Combinator Demo Day 1 (24/03/2015), 深入矽谷精煉技術TeamNote向國際進發 (24/03/2015), 2015 香港資訊及通訊科技獎得獎者巡禮 (二) (15/05/2015), Startups to Watch: Hong Kong, The Wall Street Journal (19/05/2015), 我從矽谷加速器回來,學到的是…(30/05/2015), 矽谷啟示錄(二):知識分享並非「教路」(02/06/2015), 2015香港資訊及通訊科技獎得獎巡禮(三) (04/06/2015), 香港初創科企:走出創業迷霧 (05/06/2015), References External links Business software Project management software Collaborative software Instant messaging
TeamNote
Technology
1,081
18,368,163
https://en.wikipedia.org/wiki/KCNF1
Potassium voltage-gated channel subfamily F member 1 is a protein that in humans is encoded by the KCNF1 gene. The protein encoded by this gene is a voltage-gated potassium channel subunit. References Further reading External links Ion channels
KCNF1
Chemistry
51
433,827
https://en.wikipedia.org/wiki/Pepper%27s%20ghost
Pepper's ghost is an illusion technique, used in the theatre, cinema, amusement parks, museums, television, and concerts, in which an image of an object off-stage is projected so that it appears to be in front of the audience. The technique is named after the English scientist John Henry Pepper, who popularised the effect during an 1862 Christmas Eve theatrical production of the Charles Dickens novella, The Haunted Man and the Ghost's Bargain, which caused a sensation among those in attendance at the Regent Street theatre in London. An instant success, the production was moved to a larger theatre and continued to be performed throughout the whole of 1863, with the Prince of Wales (future King Edward VII) bringing his new bride (later Queen Alexandra) to see the illusion, and launched an international vogue for ghost-themed plays which used this novel stage effect during the 1860s and subsequent decades. The illusion is widely used for entertainment and publicity purposes. These include the Girl-to-Gorilla trick found in old carnival sideshows and the appearance of "ghosts" at the Haunted Mansion and the "Blue Fairy" in Pinocchio's Daring Journey, both at Disneyland in California. Teleprompters are a modern implementation of Pepper's ghost. The technique was used to display a life-size illusion of Kate Moss at the 2006 runway show for the Alexander McQueen collection The Widows of Culloden. In the 2010s, the technique has been used to make virtual artists appear onstage in apparent "live" concerts, with examples including Elvis Presley, Tupac Shakur, and Michael Jackson. It is often wrongly described as "holographic". Such setups can involve custom projection media server software and specialized stretched films. The installation may be a site-specific one-off, or a use of a commercial system such as the Cheoptics360 or Musion Eyeliner. Products have been designed using a clear plastic pyramid and a smartphone screen to generate the illusion of a 3D object. Effect The core illusion involves a stage specially arranged into two rooms or areas, one into which audience members can see, and a second (sometimes referred to as the "blue room") that is hidden to the side. A plate of glass (or Plexiglas or plastic film) is placed somewhere in the main room at an angle that reflects the view of the blue room towards the audience. Generally, this is arranged with the blue room to one side of the stage, and the plate on the stage rotated around its vertical axis at 45 degrees. Care must be taken to make the glass as invisible as possible, normally hiding the lower edge in patterning on the floor and ensuring lights do not reflect off it. The plate catches a reflection from a brightly lit actor in an area hidden from the audience. Not noticing the glass screen, the audience mistakenly perceive this reflection as a ghostly figure located among the actors on the main stage. The lighting of the actor in the hidden area can be gradually brightened or dimmed to make the ghost image fade in and out of visibility. When the lights are bright in the main room and dark in the blue room, the reflected image cannot be seen. When the lighting in the blue room is increased, often with the main room lights dimming to make the effect more pronounced, the reflection becomes visible and the objects within the blue/hidden room seem to appear, from thin air, in the space visible to the audience. 
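The geometry behind the effect can be illustrated with a small calculation: the "ghost" appears to stand at the mirror image of the hidden, brightly lit actor reflected across the plane of the glass. The sketch below reflects a point across a plane through the origin, with the pane rotated 45 degrees about the vertical axis as described above; the coordinates are illustrative assumptions, not measurements of any actual stage.

    import math

    # Sketch: where the reflected image of a hidden actor appears to be.
    def reflect_across_plane(point, normal):
        """Mirror a 3D point across a plane through the origin with unit normal."""
        dot = sum(p * n for p, n in zip(point, normal))
        return tuple(p - 2 * dot * n for p, n in zip(point, normal))

    # Unit normal of a glass pane rotated 45 degrees about the vertical (z) axis:
    normal = (math.cos(math.radians(45)), math.sin(math.radians(45)), 0.0)
    hidden_actor = (-2.0, 1.0, 0.0)            # brightly lit actor in the "blue room"
    print(reflect_across_plane(hidden_actor, normal))  # apparent position of the ghost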
A common variation uses two blue/hidden rooms, one behind the glass in the main room, and one to the side, the contents of which can be switched between "visible" and "invisible" states by manipulating the lighting therein. The hidden room may be an identical mirror-image of the main room, so that its reflected image exactly matches the layout of the main room; this approach is useful in making objects seem to appear or disappear. This illusion can also be used to make an object, or person—reflected in, say, a mirror—appear to morph into another (or vice versa). This is the principle behind the Girl-to-Gorilla trick found in old carnival sideshows. Another variation: the hidden room may itself be painted black, with only light-coloured objects in it. In this case, when light is cast on the room, only the light objects strongly reflect that light, and therefore appear as ghostly, translucent images on the (invisible) pane of glass in the room visible to the audience. This can be used to make objects appear to float in space. The type of theatre use of the illusion which John Henry Pepper pioneered and repeatedly staged in the 1860s were short plays featuring a ghostly apparition which interacts with other actors. An early favourite showed an actor attempting to use a sword against an ethereal ghost, as in the illustration. To choreograph other actors' dealings with the ghost, Pepper used concealed markings on the stage floor for where they should place their feet, since they could not see the ghost image's apparent location. Pepper's 1890 book includes such detailed explanation of his stagecraft secrets, disclosed in his 1863 joint application with co-inventor Henry Dircks to patent this ghost illusion technique. The hidden area is typically below the visible stage but in other Pepper's Ghost set-ups it can be above or, quite commonly, adjacent to the area visible to the viewers. The scale can be very much smaller, for instance small peepshows, even hand-held toys. The illustration shows Pepper's initial arrangement for making a ghost image visible anywhere throughout a theatre. Many effects can be produced via Pepper's Ghost. Since glass screens are less reflective than mirrors, they do not reflect matte black objects in the area hidden from the audience. Thus Pepper's Ghost showmen sometimes used an invisible black-clad actor in the hidden area to manipulate brightly lit, light-coloured objects, which can thus appear to float in air. Pepper's very first public ghost show used a seated skeleton in a white shroud which was being manipulated by an unseen actor in black velvet robes. Hidden actors, whose heads were powdered white for reflection but whose clothes were matte black, could appear as disembodied heads when strongly lit and reflected by the angled glass screen. Pepper's Ghost can be adapted to make performers apparently materialise from nowhere or disappear into empty space. Pepper would sometimes greet an audience by suddenly materialising in the middle of the stage. The illusion can also apparently transform one object or person into another. For instance, Pepper sometimes suspended on stage a basket of oranges which then "transformed" into jars of marmalade. Another 19th century Pepper's Ghost entertainment featured a figure flying around a theatre backcloth painted as the sky. The hidden actor, lying under bright lights on a rotating, matte black table, wore a costume with metallic spangles to maximise reflection on the hidden glass screen. 
This foreshadows some 20th century cinema special effects. History Precursors Giambattista della Porta was a 16th-century Neapolitan scientist and scholar who is credited with a number of scientific innovations. His 1589 work Magia Naturalis (Natural Magic) includes a description of an illusion, titled "How we may see in a Chamber things that are not" that is the first known description of the Pepper's ghost effect. Porta's description, from the 1658 English language translation (page 370), is as follows. From the mid-19th century, the illusion, today known as Pepper's Ghost, became widely developed for money-making stage entertainments, amid bitter argument, patent disputes, and legal action concerning the technique's authorship. A popular genre of entertainment was stage demonstrations of scientific novelties. Simulations of ghostly phenomena through innovative optical technology fitted these well. Phantasmagoria shows, which simulated supernatural effects, were also familiar public entertainments. Previously, these had made much use of complex magic lantern techniques, like the multiple projectors, mobile projectors, and projection on mirrors and smoke, which had been perfected by Étienne-Gaspard Robert/Robertson in Paris early in the century. The new illusion, soon to be labeled Pepper's Ghost, offered a completely different and more convincing way to produce ghost effects, using reflections not projection. A claim to be the first user of the new illusion in theatres came from the Dutch-born stage magician Henrik Joseph Donckel, who became famous in France under the stage name Henri Robin. Robin said he had spent two years developing the illusion before trying it in 1847 during his regular shows of stage magic and the supernatural in Lyons. However, he found this early rendering of the ghost effect made little impression on the audience. He wrote: "The ghosts failed to achieve the full illusory effect which I have subsequently perfected." The shortcomings of his original techniques "caused me great embarrassment, I found myself forced to put them aside for a while." While Robin later became famous for many effective, imaginative, and complex applications of "Pepper's Ghost" at Robin's own theatre in Paris, such shows only began mid-1863 after John Henry Pepper had demonstrated his own method for staging the illusion at the London Polytechnic in December 1862. Jean-Eugene Robert-Houdin, contemporary French grand master of stage magic, regarded Robin's performances and other 1863 ghost shows in Paris as "plagiarists" of Pepper's innovation. Jim Steinmeyer, a modern technical and historical authority on Pepper's Ghost, has expressed doubts as to the reliability of Robin's claims for his 1847 performances. Whatever Robin did in 1847, by his own account it produced nothing like the stage effect whereby Pepper, and later Robin himself, astonished and thrilled audiences during 1863. In October 1852 Pierre Séguin, an artist, patented in France a portable peepshow-like toy for children, which he named the "polyoscope". This used the very same illusion, based on reflection, which ten years later Pepper and Dircks would patent in Britain under their own names. Although creating illusory images within a small box is appreciably different from delivering an illusion on stage, Séguin's 1852 patent was eventually to lead to the defeat of Pepper's 1863 attempt to control and license the "Pepper's Ghost" technique in France as well as in Britain. 
Pepper described Séguin's polyoscope: "It consisted of a box with a small sheet of glass, placed at an angle of forty-five degrees, and it reflected a concealed table, with plastic figures, the spectre of which appeared behind the glass, and which young people who possessed the toy invited their companions to take out of the box, when it melted away, as it were, in their hands and disappeared." In 1863, Henri Robin maintained that Séguin's polyoscope had been inspired by his own original version of the stage illusion, which Séguin had witnessed while painting magic lantern slides for another part of Robin's show. Dircks and Pepper Henry Dircks was an English engineer and practical inventor who from 1858 strove to find theatres which would implement his vision of a sensational new genre of drama featuring apparitions which interacted with actors on stage. He constructed a peepshow-like model which demonstrated how reflections on a glass screen could produce convincing illusions. He also outlined a series of plays featuring ghost effects, which his apparatus could enable, and worked out how complex illusions, like image transformations, could be achieved through the technique. But in terms of applying the effect in theatres, Dircks seemed unable to think beyond remodelling theatres to resemble his peepshow model. He produced a design for theatres which required costly, impractical rebuilding of an auditorium to host the illusion. The theatres, which he approached, were not interested. In another bid to attract interest, he advertised his models for sale and in late 1862 the models' manufacturer invited John Henry Pepper to view one. John Henry Pepper was a scientific all-rounder who was both an effective public educator in science and an astute, publicity-conscious, commercial showman. In 1854, he became the director and sole lessee of the Royal Polytechnic where he held the title of Professor. The Polytechnic ran a mix of science education courses and eye-catching public displays of scientific innovations. After seeing Dircks' peepshow model in 1862, Pepper quickly devised an ingenious twist whereby, through adding an angled sheet of glass and a screened-off orchestra pit, almost any theatre or hall could make the illusion visible to a large audience. First public performance in December 1862—a scene from Charles Dickens's The Haunted Man—produced rapturous responses from audience and journalists. A deal was struck between Pepper and Dircks whereby they jointly patented the illusion. Dircks agreed to waive any share of profits for the satisfaction of seeing his idea implemented so effectively. Their joint patent was obtained provisionally in February 1863 and ratified in October 1863. Before Dircks' partnership with Pepper was a full year old, Dircks published a book which accused Pepper of plotting to systematically stamp Pepper's name alone on their joint creation. According to Dircks, while Pepper took care to credit Dircks in any communications to the scientific community, everything which reached the general public—like newspaper reports, advertisements and theatre posters—mentioned Pepper alone. Whenever Dircks complained, he said, Pepper would blame careless journalists or theatre managers. However, the omission had occurred so repeatedly that Dircks believed that Pepper was deliberately striving to fix his name alone in the minds of the general public. 
A good half of Dircks' 106-page book, The Ghost, comprises such recriminations with detailed examples of how Pepper hid Dircks' name. An earlier 1863 Spectator article had presented the Dircks/Pepper partnership thus: "This admirable ghost is the offspring of two fathers…. To Mr. Dircks belongs the honour of having invented him…. and Professor Pepper has the merit of having improved him considerably, fitting him for the intercourse of mundane society, and even educating him for the stage." Popularity Short plays using the new ghost illusion swiftly became sensationally popular. Pepper staged many dramatic and profitable demonstrations, notably in the lecture theatre of London's Royal Polytechnic. By late 1863, the illusion's fame had spread extensively with ghost-centred plays performed at multiple London venues, Manchester, Glasgow, Paris, and New York. Royalty attended. There was even a shortage of plate glass because of demand from theatres for glass screens. A popular song from 1863 celebrated the "Patent Ghost": By his own account, Pepper, who was entitled to all profits, made considerable earnings from the patent. He ran his own performances and licensed other operators for money. In Britain, he was initially successful in suing some unlicensed imitators, deterring others by legal threats, and defeating a September 1863 court action by music-hall proprietors who challenged the patent. However, while in Paris in summer 1863 to assist a licensed performance, Pepper had proved unable to stop Henri Robin and several others who were already performing unlicensed versions there. Robin successfully cited Séguin's pre-existing patent of the polyoscope, of which Pepper had been ignorant. During the next four years Robin developed spectacular and original applications of the illusion in Paris. One famous Robin show depicted the great violinist Paganini being troubled in his sleep by a demon violinist, who repeatedly appeared and disappeared. During the next two decades, performances using the illusion spread to several countries. In 1877 a patent was registered for the United States. In Britain, theatre productions using Pepper's Ghost toured far outside major cities. The performers travelled with their own glass screens and became known as "spectral opera companies". Around a dozen such specialist theatre companies existed in Britain. A typical performance would comprise a substantial play where apparitions were central to the plot, like an adaption of Dickens' A Christmas Carol, followed by a short comic piece which also used ghost effects. One company, for instance, "The Original Pepper's Ghost and Spectral Opera Company" had 11 ghost-themed plays in its repertoire. Another such company during a single year, 1877, performed at 30 different places in Britain, usually for a week but sometimes for as long as six weeks. By the 1890s, however, novelty had faded and the vogue for such theatre was in steep decline. Pepper's Ghost remained in use however at sensational entertainments comparable to "dark rides" or "ghost trains" at modern funfairs and amusement parks: a detailed account survives of audience participation in two macabre entertainments, which both used Pepper's Ghost, within a "Tavern of the Dead" show which visited Paris and New York in the 1890s. Since the 1860s, "Pepper's Ghost" has become a universal term for any illusion produced via a reflection on an unnoticed glass screen. 
It is routinely applied to all versions of the illusion, which are now quite common in 21st century displays, peepshows, and installations in museums and amusement parks. However, the specific optics in these modern displays often follow Séguin's or Dircks' earlier designs rather than the modification for theatres which first brought Pepper's name into enduring usage. Modern uses Systems Several proprietary systems produce modern Pepper's ghost effects. The "Musion Eyeliner" uses thin metalized film placed across the front of the stage at an angle of 45 degrees towards the audience; recessed below the screen is a bright image supplied by an LED screen or powerful projector. When viewed from the audience's perspective, the reflected images appear to be on the stage. The "Cheoptics360" displays revolving 3D animations or special video sequences inside a four-sided transparent pyramid. This system is often used for retail environments and exhibitions. Amusement parks The world's largest implementation of this illusion can be found at The Haunted Mansion and Phantom Manor attractions at several Walt Disney Parks and Resorts. There, a -long scene features multiple Pepper's ghost effects, brought together in one scene. Guests travel along an elevated mezzanine, looking through a -tall pane of glass into an empty ballroom. Animatronic ghosts move in hidden black rooms beneath and above the mezzanine. A more advanced variation of the Pepper's Ghost effect is also used at The Twilight Zone Tower of Terror. The walk-through attraction Turbidite Manor in Nashville, Tennessee, employs variations of the classic technique, enabling guests to see various spirits that also interact with the physical environment, viewable at a much closer proximity. The House at Haunted Hill, a Halloween attraction in Woodland Hills, California, employs a similar variation in its front window to display characters from its storyline. An example that combines the Pepper's ghost effect with a live actor and film projection can be seen in the Mystery Lodge exhibit at the Knott's Berry Farm theme park in Buena Park, California, and the Ghosts of the Library exhibit at the Abraham Lincoln Presidential Library and Museum in Springfield, Illinois, as well as the depiction of Maori legends called A Millennium Ago at the Museum of Wellington City & Sea in New Zealand. The Hogwarts Express attraction at Universal Studios Florida uses the Pepper's ghost effect, such that guests entering "Platform " seem to disappear into a brick wall when viewed from those further behind in the queue. The Curse at Alton Manor, an attraction at the Alton Towers theme park in Staffordshire, England, uses multiple Pepper's ghost effects. These include the ride's preshow, where characters are projected inside an empty doll's house before disappearing as the room is bathed in ultraviolet light, and a scene where Emily Alton, the attraction's central antagonist, appears in a corporeal form before vanishing, in a similar fashion to effects used at the Disney parks. The effect was also used in the ride's previous iterations, The Haunted House and Duel: The Haunted House Strikes Back; where Emily Alton and her cat Snowy could be seen as small corporeal ghosts inside a doll's house in the attraction's queue, similar to the preshow in the current iteration of the attraction. Museums Museums increasingly use Pepper's ghost exhibits to create attractions that appeal to visitors. 
In the mid-1970s James Gardener designed the Changing Office installation in the London Science Museum, consisting of a 1970s-style office that transforms into an 1870s-style office as the audience watches. It was designed and built by Will Wilson and Simon Beer of Integrated Circles. Another particularly intricate Pepper's ghost display is the Eight Stage Ghost built for the British Telecom Showcase Exhibition in London in 1978. This display follows the history of electronics in a number of discrete transitions. More modern examples of Pepper's ghost effects can be found in various museums in the United Kingdom and Europe. Examples of these in the United Kingdom are the ghost of Annie McLeod at the New Lanark World Heritage Site, the ghost of John McEnroe at the Wimbledon Lawn Tennis Museum, which reopened in new premises in 2006, and one of Sir Alex Ferguson, which opened at the Manchester United Museum in 2007. Other examples include the ghost of Sarah (who picks up a candle and walks through the wall) and also the ghost of the Eighth Duke at Blenheim Palace. In October 2008 a life-sized Pepper's ghost of Shane Warne was opened at the National Sports Museum in Melbourne, Australia. The effect was also used at the Dickens World attraction at Chatham Maritime, Kent, United Kingdom. Both the York Dungeon and the Edinburgh Dungeon use the effect in the context of their "Ghosts" shows. Another example can be found at Our Planet Centre in Castries, St Lucia, which opened in May 2011, where a life-size Charles III and Governor-General of the island appear on stage talking about climate change. German company Musion installed a holostage in the German Football Museum in Dortmund in 2016. Television, film and video The 1940 film Beyond Tomorrow uses the technique to show the three ghosts in the second half of the film. Teleprompters are a modern implementation of Pepper's ghost used by the television industry. They reflect a speech or script and are commonly used for live broadcasts such as news programmes. A 1985 episode of Mr. Wizard's World demonstrates Pepper's ghost in one of its educational segments. On 1 June 2013, ITV broadcast Les Dawson: An Audience With That Never Was. The program featured a Pepper's ghost projection of Les Dawson, presenting content for a 1993 edition of An Audience with... to be hosted by Dawson but unused due to his death two weeks before recording. In the 1990 movie Home Alone, the technique is used to show Harry with his head in flames, as the result of a blowtorch from a home invasion gone bad. CGI was not able to produce the desired results. The James Bond movie Diamonds are Forever features the girl-to-gorilla trick in one scene. Early electro-mechanical arcade machines, such as Midway's "Stunt Pilot" and Bally's "Road Runner," both made in 1971, use the effect to allow player-controlled moving vehicles to appear to share the same space as various obstacles within a diorama. Electrical contacts, connected to the control linkages, sense the position of the vehicle and obstacles, simulating collisions in the games' logic circuits without the models physically touching each other. Various arcade games, most notably Taito's 1978 video game Space Invaders and SEGA's 1991 video game Time Traveler, used a mirror-based variation of the illusion to make the game's graphics appear against an illuminated backdrop. Concerts An illusion based on Pepper's ghost involving projected images has been featured at music concerts (often erroneously marketed as "holographic"). 
At the 2006 Grammy Awards, the Pepper's ghost technique was used to project Madonna with the virtual members of the band Gorillaz onto the stage in a "live" performance. This type of system consists of a projector (usually DLP) or LED screen, with a resolution of 1280×1024 or higher and brightness of at least 5,000 lumens, a high-definition video player, a stretched film between the audience and the acting area, a 3D set/drawing that encloses three sides, plus lighting, audio, and show control. During Dr. Dre and Snoop Dogg's performance at the 2012 Coachella Valley Music and Arts Festival, a projection of deceased rapper Tupac Shakur appeared and performed "Hail Mary" and "2 of Amerikaz Most Wanted". The use of this approach was repeated in 2013 at west coast Rock the Bells dates, featuring projections of Eazy-E and Ol' Dirty Bastard. On 18 May 2014, during the Billboard Music Awards, an illusion of deceased pop star Michael Jackson, other dancers, and the entire stage set was projected onto the stage for a performance of the song "Slave to the Rhythm" from the posthumous Xscape album. On 21 September 2017, the Frank Zappa estate announced plans to conduct a reunion tour with the Mothers of Invention that would make use of Pepper's ghosts of Frank Zappa and the settings from his studio albums. Initially scheduled to run through 2018, the tour was later pushed back to 2019. A projection of Ronnie James Dio performed at the Wacken Open Air festival in 2016. Swedish supergroup ABBA returned to the stage in May 2022 as "digital avatars". This was referred to as a use of Pepper's ghost, though it actually consisted of a huge opaque flat LED screen, with careful integration of show effects inside the theatre to create a realistic illusion of depth. Political speeches NChant 3D telecast, live, a 55-minute speech by Narendra Modi, Chief Minister of Gujarat, to 53 locations across Gujarat on 10 December 2012 during the assembly elections. In April 2014, they projected Narendra Modi again at 88 locations across India. In 2014, Turkish Prime Minister Recep Tayyep Erdogan delivered a speech via Pepper's ghost in Izmir. In 2017, French Presidential candidate Jean-Luc Mélenchon gave a speech using Pepper's ghost at a campaign event in Aubervilliers. See also Camera lucida Camera obscura Front projection effect Head-up display Magic lantern Optical illusion Reflector sight Schüfftan process Catadioptric telescope References Further reading Pepper, John Henry (1890). The True History of the Ghost. London: Cassell & Co. Gbur, Gregory J. (2016). Dircks and Pepper: a Tale of Two Ghosts. Skulls in the Stars website] Hopkins, Albert A. (1897). Magic, Stage Illusions, Special Effects and Trick Photography. New York: Dover Publications. Dircks, Henry (1863). The Ghost, London: E & F.N. Spon. Robert-Houdin, Jean-Eugene (1881). The Secrets of Stage Conjuring. London: George Routledge. External links 1862 introductions Articles containing video clips Magic tricks Optical illusions Phantasmagoria Haunted Mansion
Pepper's ghost
Physics
5,545
72,635,333
https://en.wikipedia.org/wiki/Nicolson%E2%80%93Ross%E2%80%93Weir%20method
Nicolson–Ross–Weir method is a measurement technique for determination of complex permittivities and permeabilities of material samples for microwave frequencies. The method is based on insertion of a material sample with a known thickness inside a waveguide, such as a coaxial cable or a rectangular waveguide, after which the dispersion data is extracted from the resulting scattering parameters. The method is named after A. M. Nicolson and G. F. Ross, and W. B. Weir, who developed the approach in 1970 and 1974, respectively. The technique is one of the most common procedures for material characterization in microwave engineering. Method The method uses scattering parameters of a material sample embedded in a waveguide, namely S11 and S21, to calculate permittivity and permeability data. S11 and S21 correspond to the cumulative reflection and transmission coefficients of the sample, referenced to the respective sample ends: these parameters account for the multiple internal reflections inside the sample, which is considered to have a thickness of L. The reflection coefficient of the bulk sample is: Γ = X ± √(X² − 1), where X = (S11² − S21² + 1) / (2 S11). The sign of the root for the reflection coefficient is chosen appropriately to ensure its passivity (|Γ| ≤ 1). Similarly, the transmission coefficient of the bulk sample can be written as: T = (S11 + S21 − Γ) / (1 − (S11 + S21) Γ). Thus, the effective permeability (μr) and permittivity (εr) of the material can be written as: μr = (1 + Γ) λg / [Λ (1 − Γ)] and εr = (λ0² / μr)(1/λc² + 1/Λ²), where 1/Λ² = −[ln(1/T) / (2πL)]², λ0 is the free-space wavelength, λg = 1/√(1/λ0² − 1/λc²) is the guided mode wavelength of the unfilled transmission line, and λc is the cutoff wavelength of the unfilled transmission line. The relation for 1/Λ admits an infinite number of solutions due to the branches of the complex logarithm. The ambiguity regarding its result can be resolved by taking the group delay into account. Limitations and extensions In the case of low material loss, the Nicolson–Ross–Weir method is known to be unstable for sample thicknesses at integer multiples of one half wavelength due to resonance phenomena. Improvements over the standard algorithm have been presented in the engineering literature to alleviate this effect. Furthermore, complete filling of a waveguide with sample material may pose a particular challenge: presence of gaps during the filling of the waveguide section would excite higher-order modes, which may yield errors in scattering parameter results. In such cases, more advanced methods based on the rigorous modal analysis of partially-filled waveguides or optimization methods can be used. A modification of the method for single-port measurements was also reported. In addition to homogeneous materials, an extension of the method was developed to obtain constitutive parameters of isotropic and bianisotropic metamaterials. See also Fourier-transform spectroscopy Microwave radiometer Reflection seismology Spectroscopy Time-domain reflectometer Vector network analyzer References Further reading Microwave technology Spectroscopy Electric and magnetic fields in matter
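The extraction described above maps directly onto a few lines of NumPy. The sketch below is a minimal illustration of the standard NRW equations under my own naming assumptions (nrw_extract and its arguments are not from any published implementation); it evaluates only the principal branch of the complex logarithm, so the branch ambiguity and half-wavelength instability discussed under "Limitations and extensions" are not resolved.

```python
import numpy as np

def nrw_extract(s11, s21, L, lam0, lamc=np.inf):
    """Minimal single-frequency NRW extraction (principal log branch only).

    s11, s21 : complex S-parameters referenced to the sample faces (scalars)
    L        : sample thickness (m)
    lam0     : free-space wavelength (m)
    lamc     : cutoff wavelength of the empty guide (m); np.inf for a TEM line
    """
    X = (s11**2 - s21**2 + 1.0) / (2.0 * s11)
    gamma = X + np.sqrt(X**2 - 1.0 + 0j)
    if abs(gamma) > 1.0:                      # pick the root with |Gamma| <= 1 (passivity)
        gamma = X - np.sqrt(X**2 - 1.0 + 0j)
    T = (s11 + s21 - gamma) / (1.0 - (s11 + s21) * gamma)

    inv_Lam2 = -(np.log(1.0 / T) / (2.0 * np.pi * L))**2   # 1/Lambda^2 (branch-dependent)
    inv_Lam = np.sqrt(inv_Lam2 + 0j)
    inv_lamg = np.sqrt(1.0 / lam0**2 - 1.0 / lamc**2)      # 1/lambda_g of the empty guide
    mu_r = (1.0 + gamma) * inv_Lam / ((1.0 - gamma) * inv_lamg)
    eps_r = lam0**2 * (inv_Lam2 + 1.0 / lamc**2) / mu_r
    return eps_r, mu_r
```

For thicker or low-loss samples the correct logarithm branch has to be selected separately, for example from the measured group delay as noted above.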
Nicolson–Ross–Weir method
Physics,Chemistry,Materials_science,Engineering
564
43,862,507
https://en.wikipedia.org/wiki/Peter%20Bradshaw%20%28aeronautical%20engineer%29
Peter Bradshaw FRS (26 December 1935 – 27 July 2024) was an aeronautical engineer specialising in fluid mechanics. He was educated at Torquay Grammar School and Cambridge University, where he was awarded a B.A. in Aeronautical Engineering in 1957. Career He worked at the National Physical Laboratory in the Aerodynamics Division until 1969. Following this he was Professor of Experimental Aerodynamics at the Department of Aeronautics, Imperial College, London University until 1988. He was then appointed to the Thomas V. Jones Chair of Engineering at Stanford University, retiring as Emeritus Professor in 1995. He was the author or co-author of a number of textbooks on fluid dynamics. Bradshaw died on 27 July 2024, at the age of 88. Works Experimental Fluid Mechanics, (Pergamon, Oxford 1964, 1970) An Introduction to Turbulence and Its Measurement, (Pergamon, Oxford 1971, 1975) Momentum Transfer in Boundary Layers, (with T. Cebeci; Hemisphere, New York 1977) Engineering Calculation Methods for Turbulent Flows, (with T. Cebeci and J.H. Whitelaw; Academic Press, London 1981, 1985) Physical and Computational Aspects of Convective Heat Transfer, (with T. Cebeci; Springer, New York 1984, 1988) Honours and awards 1971 Bronze Medal of the Royal Aeronautical Society 1981 Fellow of the Royal Society 1990 Hon. D.Sc., Exeter University 1994 American Institute of Aeronautics and Astronautics (AIAA) Fluid Dynamics Award References 1935 births People educated at Torquay Boys' Grammar School Alumni of St John's College, Cambridge Academics of Imperial College London Stanford University School of Engineering faculty English aerospace engineers Fellows of the Royal Society Fluid dynamicists 2024 deaths
Peter Bradshaw (aeronautical engineer)
Chemistry
343
58,401,614
https://en.wikipedia.org/wiki/Aspergillus%20sclerotioniger
Aspergillus sclerotioniger is a species of fungus in the genus Aspergillus. It belongs to the group of black Aspergilli which are important industrial workhorses. A. sclerotioniger belongs to the Nigri section. The species was first described in 2004. It has been found in green coffee beans from India. It is a very effective producer of ochratoxin A and ochratoxin B, and produces aurasperone B, pyranonigrin A, corymbiferan lactone-like exometabolites, and some cytochalasins. The genome of A. sclerotioniger was sequenced and published in 2014 as part of the Aspergillus whole-genome sequencing project – a project dedicated to performing whole-genome sequencing of all members of the genus Aspergillus. The genome assembly size was 36.72 Mbp. Growth and morphology Aspergillus sclerotioniger has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below. References sclerotioniger Fungi described in 2004 Fungus species
Aspergillus sclerotioniger
Biology
265
623,560
https://en.wikipedia.org/wiki/Henry%20Draper%20Medal
The Henry Draper Medal is awarded every 4 years by the United States National Academy of Sciences "for investigations in astronomical physics". Named after Henry Draper, the medal is awarded with a gift of USD $15,000. The medal was established under the Draper Fund by his widow, Anna Draper, in honor of her husband, and was first awarded in 1886 to Samuel Pierpont Langley "for numerous investigations of a high order of merit in solar physics, and especially in the domain of radiant energy". It has since been awarded 45 times. The medal has been awarded to multiple individuals in the same year: in 1977 it was awarded to Arno Allan Penzias and Robert Woodrow Wilson "for their discovery of the cosmic microwave radiation (a remnant of the very early universe), and their leading role in the discovery of interstellar molecules"; in 1989 to Riccardo Giovanelli and Martha P. Haynes "for the first three-dimensional view of some of the remarkable large-scale filamentary structures of our visible universe"; in 1993 to Ralph Asher Alpher and Robert Herman "for their insight and skill in developing a physical model of the evolution of the universe and in predicting the existence of a microwave background radiation years before this radiation was serendipitously discovered" and in 2001 to R. Paul Butler and Geoffrey Marcy "for their pioneering investigations of planets orbiting other stars via high-precision radial velocities". List of recipients Source: National Academy of Sciences See also List of astronomy awards List of physics awards References Astronomy prizes Awards established in 1886 Awards of the United States National Academy of Sciences 1886 establishments in the United States
Henry Draper Medal
Astronomy,Technology
337
28,487,427
https://en.wikipedia.org/wiki/Witsenhausen%27s%20counterexample
Witsenhausen's counterexample, shown in the figure below, is a deceptively simple toy problem in decentralized stochastic control. It was formulated by Hans Witsenhausen in 1968. It is a counterexample to a natural conjecture that one can generalize a key result of centralized linear–quadratic–Gaussian control systems—that in a system with linear dynamics, Gaussian disturbance, and quadratic cost, affine (linear) control laws are optimal—to decentralized systems. Witsenhausen constructed a two-stage linear quadratic Gaussian system where two decisions are made by decision makers with decentralized information and showed that for this system, there exist nonlinear control laws that outperform all linear laws. The problem of finding the optimal control law remains unsolved. Statement of the counterexample The statement of the counterexample is simple: two controllers attempt to control the system by attempting to bring the state close to zero in exactly two time steps. The first controller observes the initial state x0. There is a cost on the input u1 of the first controller, and a cost on the state x2 after the input u2 of the second controller. The input of the second controller is free, but it is based on noisy observations of the state x1 after the first controller's input. The second controller cannot communicate with the first controller and thus cannot observe either the original state x0 or the input u1 of the first controller. Thus the system dynamics are x1 = x0 + u1 and x2 = x1 − u2, with the second controller's observation equation y = x1 + z. The objective is to minimize an expected cost function, E[k² u1² + x2²], where the expectation is taken over the randomness in the initial state x0 and the observation noise z, which are distributed independently. The observation noise z is assumed to be distributed in a Gaussian manner, while the distribution of the initial state value x0 differs depending on the particular version of the problem. The problem is to find control functions u1 = γ1(x0) and u2 = γ2(y) that give at least as good a value of the objective function as do any other pair of control functions. Witsenhausen showed that the optimal functions γ1 and γ2 cannot be linear. Specific results of Witsenhausen Witsenhausen obtained the following results: An optimum exists (Theorem 1). The optimal control law of the first controller is such that (Lemma 9). The exact solution is given for the case in which both controllers are constrained to be linear (Lemma 11). If x0 has a Gaussian distribution and if at least one of the controllers is constrained to be linear, then it is optimal for both controllers to be linear (Lemma 13). The exact nonlinear control laws are given for the case in which x0 has a two-point symmetric distribution (Lemma 15). If x0 has a Gaussian distribution, for some values of the preference parameter k a non-optimal nonlinear solution for the control laws is given which gives a lower value for the expected cost function than does the best linear pair of control laws (Theorem 2). The significance of the problem The counterexample lies at the intersection of control theory and information theory. Due to its hardness, the problem of finding the optimal control law has also received attention from the theoretical computer science community. The importance of the problem was reflected upon in the 47th IEEE Conference on Decision and Control (CDC) 2008, Cancun, Mexico, where an entire session was dedicated to understanding the counterexample 40 years after it was first formulated. 
The problem is of conceptual significance in decentralized control because it shows that it is important for the controllers to communicate with each other implicitly in order to minimize the cost. This suggests that control actions in decentralized control may have a dual role: those of control and communication. The hardness of the problem The hardness of the problem is attributed to the fact that information of the second controller depends on the decisions of the first controller. Variations considered by Tamer Basar show that the hardness is also because of the structure of the performance index and the coupling of different decision variables. It has also been shown that problems of the spirit of Witsenhausen's counterexample become simpler if the transmission delay along an external channel that connects the controllers is smaller than the propagation delay in the problem. However, this result requires the channels to be perfect and instantaneous, and hence is of limited applicability. In practical situations, the channel is always imperfect, and thus one can not assume that decentralized control problems are simple in presence of external channels. A justification of the failure of attempts that discretize the problem came from the computer science literature: Christos Papadimitriou and John Tsitsiklis showed that the discrete version of the counterexample is NP-complete. Attempts at obtaining a solution A number of numerical attempts have been made to solve the counterexample. Focusing on a particular choice of problem parameters , researchers have obtained strategies by discretization and using neural networks. Further research (notably, the work of Yu-Chi Ho, and the work of Li, Marden and Shamma) has obtained slightly improved costs for the same parameter choice. The best known numerical results for a variety of parameters, including the one mentioned previously, are obtained by a local search algorithm proposed by S.-H. Tseng and A. Tang in 2017. The first provably approximately optimal strategies appeared in 2010 (Grover, Park, Sahai) where information theory is used to understand the communication in the counterexample. The optimal solution of the counterexample is still an open problem. References Control theory Stochastic control
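As a rough numerical illustration of the linear-versus-nonlinear comparison discussed above, the Monte Carlo sketch below evaluates the best affine strategy against a simple two-point signaling strategy in the spirit of Witsenhausen's Theorem 2 construction. The parameter values, the choice a = σ0, and all variable names are my own illustrative assumptions; this is not the benchmark or any of the strategies from the cited papers, and neither strategy shown is optimal.

```python
import numpy as np

rng = np.random.default_rng(0)
k, sigma0, n = 0.2, 5.0, 200_000            # illustrative parameters, unit-variance noise
x0 = rng.normal(0.0, sigma0, n)
z = rng.normal(0.0, 1.0, n)

def cost(x1, u2_of_y):
    """Expected cost k^2*E[u1^2] + E[x2^2] for x1 = x0 + u1, y = x1 + z, x2 = x1 - u2(y)."""
    x2 = x1 - u2_of_y(x1 + z)
    return k**2 * np.mean((x1 - x0)**2) + np.mean(x2**2)

# Best affine strategy: u1 = lam*x0 (so x1 = (1+lam)*x0), u2 = linear MMSE estimate of x1.
def affine_cost(lam):
    x1 = (1.0 + lam) * x0
    v = np.var(x1)
    return cost(x1, lambda y: v / (v + 1.0) * y)

best_affine = min(affine_cost(lam) for lam in np.linspace(-1.0, 0.5, 301))

# Two-point "signaling" strategy: force x1 = a*sign(x0); the second controller then
# applies the conditional-mean estimator a*tanh(a*y) for that two-point distribution.
a = sigma0
signaling = cost(a * np.sign(x0), lambda y: a * np.tanh(a * y))

print(f"best affine cost : {best_affine:.3f}")
print(f"signaling cost   : {signaling:.3f}")
```

With these values the signaling strategy typically comes out well below the best affine cost, which illustrates, without reproducing, the kind of gap established by Theorem 2.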
Witsenhausen's counterexample
Mathematics
1,126
31,314,347
https://en.wikipedia.org/wiki/Hardy%E2%80%93Littlewood%20zeta%20function%20conjectures
In mathematics, the Hardy–Littlewood zeta function conjectures, named after Godfrey Harold Hardy and John Edensor Littlewood, are two conjectures concerning the distances between zeros and the density of zeros of the Riemann zeta function. Conjectures In 1914, Godfrey Harold Hardy proved that the function ζ(1/2 + it) has infinitely many real zeros. Let N(T) be the total number of real zeros, and N0(T) be the total number of zeros of odd order, of the function ζ(1/2 + it) lying on the interval (0, T]. Hardy and Littlewood claimed two conjectures. These conjectures – on the distance between consecutive real zeros of ζ(1/2 + it) and on the density of its zeros on intervals (T, T + H] for sufficiently large T, with H = T^(a + ε) for as small a value of the exponent a as possible, where ε > 0 is an arbitrarily small number – open two new directions in the investigation of the Riemann zeta function. 1. For any ε > 0 there exists T0 = T0(ε) > 0 such that for T ≥ T0 and H = T^(1/4 + ε) the interval (T, T + H] contains a zero of odd order of the function ζ(1/2 + it). 2. For any ε > 0 there exist T0 = T0(ε) > 0 and c = c(ε) > 0, such that for T ≥ T0 and H = T^(1/2 + ε) the inequality N0(T + H) − N0(T) ≥ cH is true. Status In 1942, Atle Selberg studied problem 2 and proved that for any ε > 0 there exist T0 = T0(ε) > 0 and c = c(ε) > 0, such that for T ≥ T0 and H = T^(1/2 + ε) the inequality N0(T + H) − N0(T) ≥ cH is true. Selberg in turn conjectured that it is possible to decrease the value of the exponent a = 1/2 in H = T^(a + ε), which was proved 42 years later by A.A. Karatsuba. References Conjectures Zeta and L-functions
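For a concrete, non-asymptotic look at the quantity N0(T + H) − N0(T), the snippet below counts zeros of ζ(1/2 + it) in a short interval using mpmath. The window (T, T + H] is an arbitrary small example of my choosing and says nothing about the conjectures themselves, which concern behaviour as T grows.

```python
from mpmath import zetazero

# Count zeros of zeta(1/2 + it) with T < t <= T + H (all known zeros are simple,
# hence of odd order).  zetazero(n) returns the n-th zero, ordered by ordinate.
T, H = 100.0, 10.0
count, n = 0, 1
while True:
    t = zetazero(n).imag
    if t > T + H:
        break
    if t > T:
        count += 1
    n += 1
print(f"zeros with {T} < t <= {T + H}: {count}")
```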
Hardy–Littlewood zeta function conjectures
Mathematics
285
38,763,736
https://en.wikipedia.org/wiki/CU%20Virginis
CU Virginis is a single star in the equatorial constellation of Virgo. It has an apparent visual magnitude of 4.99, which is bright enough to be faintly visible to the naked eye. The distance to this star can be estimated from its annual parallax shift of , yielding a distance of 234 light years. This is one of the best studied Ap stars. It has a stellar classification of Ap Si with strong lines of silicon and weak helium lines. The star is a fast rotator with a period of 0.52 days and an axis that is inclined by to the line of sight from the Earth. Both the spectrum and luminosity of the star vary with the rotation, and it is classified as an α2 Canum Venaticorum variable with the designation CU Virginis (CU Vir). There is some evidence that the rotation period may vary slightly over a timescale measured in decades. Such changes have been observed to occur in glitches, rather than varying constantly. CU Virginis has three times the mass of the Sun and double the Sun's radius. It is radiating 100 times the Sun's luminosity from its photosphere at an effective temperature of 12,750 K. The star has a strong magnetic field, placing it in the class of magnetic chemically peculiar stars. The polar magnetic field has a strength of about . The magnetic pole may be displaced by 87° from the axis of rotation, and the effective magnetic field is seen to vary over the course of a rotation. The mean surface magnetic field varies over the range . This star is a radio emitter, with the emission being modulated by the rotational phase. This emission is believed to be gyrosynchrotron radiation emitted by mildly relativistic (Lorentz factor of γ ≤ 2) electrons trapped in the magnetosphere. Two pulses of 100% circularly polarized radio energy are detected each rotation, which may be produced via an electron cyclotron maser process. These polarized beams are then refracted as they pass through cold plasma in the star's magnetosphere. References Virgo (constellation) B-type main-sequence stars Ap stars Alpha2 Canum Venaticorum variables Durchmusterung objects 124224 069389 5313 Virginis, CU
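The parallax-to-distance step mentioned above uses the standard relation d[pc] = 1/p[arcsec]. The measured parallax itself is not quoted in the text, but as a worked check the stated distance corresponds under this relation to a parallax of roughly 14 milliarcseconds:

```latex
d \approx 234~\mathrm{ly} \approx \frac{234}{3.26}~\mathrm{pc} \approx 71.7~\mathrm{pc},
\qquad
p \approx \frac{1}{71.7}~\mathrm{arcsec} \approx 0.014'' \approx 14~\mathrm{mas}.
```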
CU Virginis
Astronomy
475
34,864,428
https://en.wikipedia.org/wiki/Parkerioideae
Parkerioideae, synonym Ceratopteridoideae, is one of the five subfamilies in the fern family Pteridaceae. It includes only the two genera Acrostichum and Ceratopteris. The following diagram shows a likely phylogenic relationship between the two Parkerioideae genera and the other Pteridaceae subfamilies. References Pteridaceae Plant subfamilies
Parkerioideae
Biology
91
31,747,442
https://en.wikipedia.org/wiki/Von%20Babo%27s%20law
Von Babo's law (sometimes styled Babo's law) is an experimentally determined scientific law formulated by German chemist Lambert Heinrich von Babo in 1857. It states that the vapor pressure of a solution decreases according to the concentration of solute. The law is related to other laws concerning the vapor pressure of solutions, such as Henry's law and Raoult's law. References See also Henry's Law Raoult's Law Empirical laws Eponymous laws of physics Solutions
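One way to make the statement quantitative is via Raoult's law, which the article names as a related law; the ideal-solution form below is my illustration, not a formula given in the text. Here p0 is the vapor pressure of the pure solvent, p that of the solution, and x a mole fraction:

```latex
\frac{p_0 - p}{p_0} = x_{\text{solute}}
\qquad\Longleftrightarrow\qquad
p = x_{\text{solvent}}\, p_0 .
```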
Von Babo's law
Physics,Chemistry
97
44,047,930
https://en.wikipedia.org/wiki/Ulrich%20Frank
Ulrich Frank (born 1958) is a German Business informatician and Professor of Business informatics at the University of Duisburg-Essen, known for his work on the state of the art in information systems research and the development of the Multi-Perspective Enterprise Modeling (MEMO) meta modelling framework. Life and work After studying business administration at the University of Cologne, Frank in 1988 received his doctorate from the University of Mannheim with a dissertation, entitled "Expertensysteme – Neue Automatisierungspotentiale im Büro- und Verwaltungsbereich" (Expert systems - New Potentials for automation in office and administration area). Frank wrote his habilitation in 1993 at the University of Marburg, and was appointed Professor of computer science at the University of Koblenz-Landau in 1994. In 2004 Frank moved to the University of Duisburg-Essen, where he was appointed Professor of computer science and business enterprise modeling. The focus of Frank's work is on multi-perspective enterprise modeling. He has developed and researched the (meta-) method Multi-Perspective Enterprise Modeling (MEMO), which propagates a scientific approach of enterprise modeling and adds value in the discourse of integrative modeling, across the functional areas of business functions. Selected publications Frank published several books and articles. Books: Ulrich Frank. Expertensysteme: Neue Automatisierungspotentiale im Büro- und Verwaltungsbereich?, Gabler, 1988. Ulrich Frank and J. Kronen. Kommunikationsanalyseverfahren als Grundlage der Gestaltung von Informationssystemen: Konzeptioneller Bezugsrahmen, Anwendungspraxis und Perspektiven, Vieweg, 1991. Ulrich Frank. Multiperspektivische Unternehmensmodellierung: Theoretischer Hintergrund und Entwurf einer objektorientierten Entwicklungsumgebung, Oldenbourg Verlag, 1994. Articles, a selection: Frank, Ulrich. "Multi-perspective enterprise modeling: foundational concepts, prospects and future research challenges." Software & Systems Modeling, July 2014, Volume 13, Issue 3, pp 941–962. Frank, Ulrich. "Conceptual modelling as the core of the information systems discipline-perspectives and epistemological challenges." AMCIS 1999 Proceedings (1999): 240. Frank, Ulrich. "Multi-perspective enterprise modeling (memo) conceptual framework and modeling languages." System Sciences, 2002. HICSS. Proceedings of the 35th Annual Hawaii International Conference on. IEEE, 2002. Frank, Ulrich. "Evaluation of reference models." Reference modeling for business systems analysis (2007): 118-140. Frank, Ulrich. Towards a pluralistic conception of research methods in information systems research. No. 7. ICB-research report, 2006. Hubert Österle, Jörg Becker, Ulrich Frank, Thomas Hess, Dimitris Karagiannis, Helmut Krcmar, Peter Loos, Peter Mertens, Andreas Oberweis, and Elmar J. Sinz (2011). "Memorandum on design-oriented information systems research." European Journal of Information Systems, 20(1), 7-10. References External links Ulrich Frank homepage at the University of Duisburg-Essen Multi-Perspective Enterprise Modelling Homepage at the University of Duisburg-Essen Ulrich Frank im Gespräch über Sprache, Abstraktion und konzeptuelle Modelle in: Perspektiven | Wirtschaftsinformatik-Podcast, Folge 2, 18.05.2016. (in German) 1958 births Living people German computer scientists Enterprise modelling experts Information systems researchers Academic staff of the University of Duisburg-Essen University of Cologne alumni University of Mannheim alumni
Ulrich Frank
Technology
811
30,088,610
https://en.wikipedia.org/wiki/Pervasive%20informatics
Pervasive informatics is the study of how information affects interactions with the built environments they occupy. The term and concept were initially introduced by Professor Kecheng Liu during a keynote speech at the SOLI 2008 international conference. The built environment is rich with information which can be utilised by its occupants to enhance the quality of their work and life. By introducing ICT systems, this information can be created, managed, distributed and consumed more effectively, leading to more advanced interactions between users and the environment. The social interactions in these spaces are of additional value, and Informatics can effectively capture the complexities of such information rich activities. Information literally pervades, or spreads throughout, these socio-technical systems, and pervasive informatics aims to study, and assist in the design of, pervasive information environments, or pervasive spaces, for the benefit of their stakeholders and users. Pervasive computing Pervasive informatics may be initially viewed as simply another branch of pervasive, or ubiquitous computing. However, pervasive informatics places a greater emphasis on the ICT-enhanced socio-technical pervasive spaces, as opposed to the technology driven direction of pervasive computing. This distinction between fields is analogous to that of informatics and computing, where Informatics focuses on the study of information, while the primary concern of computing is the processing of information. Pervasive informatics aims to analyse the pervasive nature of information, examining its various representations and transformations in pervasive spaces, which are enabled by pervasive computing technologies e.g. smart devices and intelligent control systems. Pervasive spaces A pervasive space is characterised by the physical and informational interaction between the occupants and the built environment e.g. the act of controlling the building is a physical interaction, while the space responding to this action/user instruction is an informational interaction. Intelligent pervasive spaces are those that display intelligent behaviour in the form of adaptation to user requirements or the environment itself. Such intelligent behaviour can be implemented using artificial intelligence algorithms and agent-based technologies. These intelligent spaces aim to provide communication and computing services to their occupants in such a way that the experience is almost transparent e.g. automated control of heating and ventilation based on occupant preference profiles. The term first appeared in an IBM Research Report but was not properly defined or discussed until later. An intelligent pervasive space is a “social and physical space with enhanced capability through ICT for human to interact with the built environments” An alternative definition is “an adaptable and dynamic area that optimises user services and management processes using information systems and networked ubiquitous technologies”. A common point between these definitions is that pervasive computing technologies are the means by which intelligence and interactions are achieved in pervasive spaces, with the purpose of enhancing a users experience. Theories and techniques Historically, there have been few attempts to consolidate approaches to studying the complex interplay between occupants and the built environment, and to assist in the design of pervasive information environments. 
Many theoretical interdisciplinary approaches are relevant to the design of effective pervasive spaces. A core concept in pervasive informatics is the range of interactions that may occur in pervasive spaces: people to people, people to the physical and the physical space to technological artefacts such as sensors. In order to study these interactions it is necessary to have an understanding of what information is being created and exchanged. In light of this, a series of theories which enable us to consider both social and technological interactions together form the foundations of pervasive informatics STS Socio-technical systems provide an approach which assists in understanding and supporting the use of pervasive technologies. The space could be considered as a network of artefacts, information, technology and occupants. By adopting STS approaches, a means for dynamically investigating and mapping such networks becomes possible. Distributed cognition Distributed cognition can be used to explain how information is passed and processed, with a focus on both interactions between people, in addition to their interactions with the environment. These interactions are analysed in terms of the trajectories of information. CSCW Human interactions with a space, and its effect on coordination mechanisms have been examined in the field of computer supported cooperative work (CSCW). The concepts of media spaces and awareness have also emerged from CSCW which are of relevance to pervasive informatics. Semiotics Semiotics, the study of signs, can be used to assess the effectiveness of a built environment from six different levels: physical, empirical, syntactical, semantic, pragmatic and social. Semiotics enables us to understand the nature and characteristics of sign-based interactions in pervasive spaces. Trend and future research The current technology-centred view of pervasive computing is no longer sufficient for studying the information in the built environment. Socio-technical approaches are required to direct attention to the interaction between the built environment and its occupants. The concept of pervasive informatics then captures this shift, and enables current research efforts in different fields to converge their focus and consolidate their methods under one label, leading to a better direction and understanding of this complex domain. Research issues identified for further study in pervasive informatics: Understanding the impacts of intelligent pervasive spaces and enabling technologies on occupants Designing organisations as pervasive information systems—the role of information and artefacts in communication and interaction. Context-dependent information and knowledge management, towards effective decision support in pervasive spaces. Service-oriented design of intelligent buildings as adaptive and learning information spaces with regards to norms and emerging practices in intelligent pervasive spaces. Through-life intelligent support in building management, with a better understanding of the lifecycle of pervasive spaces from the conception, design, implementation, utilisation till recycling to achieve the building performance and sustainability. The list, of course, is not exhaustive, but they all address the issues that lie on the boundaries between the physical, informational and social-capturing the essence of pervasive spaces. 
See also Smart city References External links Informatics Research Centre, University of Reading Schools of informatics Computing and society Interdisciplinary subfields of sociology
Pervasive informatics
Technology
1,256
15,682,872
https://en.wikipedia.org/wiki/Naval%20Materials%20Research%20Laboratory
Naval Materials Research Laboratory (NMRL) is an Indian defence laboratory of the Defence Research and Development Organisation (DRDO). Located at Ambernath, in Thane district, Maharashtra. It develops materials and alloys for Naval use, and is a single-window agency for all materials requirement of the Indian Navy. NMRL is organized under the Naval Research & Development Directorate of DRDO. The present director of NMRL is Shri Prashant T Rojatkar. History NMRL was established in 1953 as the Naval Chemical and Metallurgical Laboratory, an in-house laboratory of the Navy, located at the Naval Dockyard, Mumbai. It was brought under the administrative control of DRDO in the early 1960s. The laboratory is located in its own technical-cum residential complex at Ambernath, Maharashtra. The laboratory still has its erstwhile infrastructure intact in Naval Dockyard, Mumbai, without any physical scientific or administrative presence. Areas of work Fuel Cell Power Pack Technology Advanced Protection Technology in Marine Environment Electrochemistry & Electrochemical Processes Polymer and Elastomer Science and Technology including Stealth Material Processing Technologies for Speciality Metallic and Non-metallic Materials Chemical and Biological Control of Marine Environment Facilities Projects and products Technologies for military use Air independent propulsion (AIP) - for use in submarines of the Indian Navy. Technologies for civilian use Bio-emulsifier - for Bio-remediation of floating oil. Arsenic removal kit - NMRL has developed a low-cost arsenic removal filter to remove arsenic from contaminated drinking water. The filter is made of stainless steel, and the filter medium is a processed waste of the steel industry. The filter works on the principle of co-precipitation and adsorption, which is followed by filtration through treated sand. The complete filter costs Rs. 500, has a life of 5 years and does not require any electricity to run. After six months of testing in 24 Paraganas District in West Bengal, the technology was given to NGOs for productionizing. References External links NMRL Home Page See also Defence Research and Development Organisation laboratories Research institutes in Maharashtra Materials science institutes Research and development in India Ambarnath 1953 establishments in Bombay State Research institutes established in 1953
Naval Materials Research Laboratory
Materials_science
442
49,387,073
https://en.wikipedia.org/wiki/LG%20G5
The LG G5 is an Android smartphone developed by LG Electronics as part of the LG G series. It was announced during Mobile World Congress as the successor to the 2015 LG G4. The G5 is distinguished from its predecessors by its aluminum chassis and a focus on modularity. Its lower housing, which holds the user-replaceable battery, can be slid from the bottom of the device, and replaced by alternative add-on modules that provide additional functions. Two modules are available: a camera grip, and a high-fidelity audio module with DAC. A lower-spec variation, dubbed the LG G5 SE, is available in some markets. The G5 received mixed reviews. The device was praised for its shift to all-metal construction, while maintaining its removable battery. However, the modular accessory system was criticized for its limited use cases and for its inability to perform hot swapping. LG's software, too, was panned for the quality of its customizations. Specifications Hardware The G5 is constructed with an aluminum unibody chassis; a "micro-dizing" process utilizing a plastic primer was used to conceal seams required for the antenna. A rounded rectangular protrusion houses the camera components. The bottom houses a USB-C connector; this connector supports USB 2.0 data transfer (compatible with USB 3) and Quick Charge 3 fast charging. Unlike previous LG G models, which had volume buttons on the back, the G5's volume controls are on its side bezel, but the circular power button—which also contains a fingerprint reader—remains on the rear. The lower "chin" can be detached to remove or replace the battery and to install add-on modules. The battery plugs into these modules, which is reinserted to replace the stock "chin". The G5 includes a Qualcomm Snapdragon 820 system-on-chip and 4 GB of LPDDR4 RAM. It also includes 32 GB of internal storage, expandable via MicroSD card. The G5 has a 5.3-inch 1440p IPS display. It has two rear-facing cameras; a 16-megapixel primary camera and an 8-megapixel 135-degree wide-angle camera. The rear camera also offers color spectrum sensor and infrared autofocus features. The G5 supports DisplayPort, HDMI, and Miracast. A lower-specification variation, LG G5 SE, is sold in some markets including Latin America and China. It includes a Snapdragon 652 system-on-chip instead of the Snapdragon 820, and has 3 GB of LPDDR3 RAM (instead of 4 GB of LPDDR4). Software The LG G5 shipped with the Android 6.0.1 "Marshmallow" operating system. Citing confusion between removing shortcuts to apps and uninstalling them entirely, the G5's home screen does not have an "app drawer"; instead, it places all apps on pages of the main home screen, like iOS does. However, there is a setting to enable the app drawer on the home screen. The LG software includes an "always-on display" feature, which persistently displays a clock and notifications on-screen when the device is in standby. The G5 does not support Android Marshmallow's "adoptable storage" feature. An upgrade to Android 7.0 "Nougat" was released in November 2016, followed by a final upgrade to Android 8.1.0 "Oreo" in September 2018. LG supports unlocking the bootloader of certain G5 models. This allows them to be rooted, and allows custom ROM images to be installed. Several independent custom ROMs continued to be developed for some G5 versions independently of LG; for example LineageOS 20 (based on Android 13) continued to be developed as of August 2023. 
Accessories The "Quick Cover" accessory was unveiled prior to the unveiling of the device itself; it is semi-translucent and features a window for the always-on portion of the screen. Touch inputs can be made through the cover and semi-translucent screen for actions such as accepting calls. A line of accessories for the G5 branded as "Friends" were initially unveiled alongside the phone itself, including a wired head-mounted display known as the LG 360 VR (which attaches via the device's USB-C port), the LG 360 Cam virtual reality camera, and the LG Rolling Bot; these ultimately did not become available. These accessories are all managed via the LG Friends Manager application on the device, which automatically pairs and synchronizes with these devices. Two accessories utilizing the expansion slot system were unveiled; the "LG Cam Plus" accessory adds a grip to the rear of the device that incorporates physical camera controls, a jog wheel for zoom, and a supplemental battery. The "LG Hi-Fi Plus" accessory is a collaboration with Bang & Olufsen which adds a DAC, an amplifier, Direct Stream Digital audio support and upsampling, and is bundled with B&O Play H3 earbuds. LG stated that it would allow the co-development of third-party "Friends" to integrate with the G5, although none were produced. Reception The LG G5's overall design was praised for its shift to a metal construction. The Next Web was critical of its design, arguing that the rear of the phone looked too "boring" because it was simply a rounded rectangle with a camera enclosure and power button that "protrude in an oddly wart-like manner" and a visible seam for the chin, and noting that the lack of curvature and its "hollow" feel made the design of the G5 "less premium" than that of the G4. TechRadar was also mixed on LG's decision to re-locate the volume keys back to the bezel but maintain the rear-mounted power button as a fingerprint reader, noting that front and side-mounted fingerprint readers were easier to use—especially if the device is sitting flat. Due to its use of the Qualcomm Snapdragon 820 system-on-chip, the G5's specifications were considered to be more competitive than other flagships, unlike the G4, which used a model with reduced core count to avoid the overheating issues of the Snapdragon 810. TechRadar felt that the G5's performance was on par with the Snapdragon 820-based version of the Samsung Galaxy S7 sold in the United States, and that "in day-to-day use, and when not directly compared to its rivals' performance in a lab, it feels super slick." The modular accessory system received mixed reviews due to the limited number of modules designed for it, as well as the inability to hot swap modules due to the design of the system, which requires the removal of the battery. The accessories themselves also received mixed reviews; TechRadar felt that the Cam Plus and the Hi-Fi Plus did not justify their high price, and affected the device's size, but that it was "satisfying to set autofocus by half shutter key" with the Cam Plus. However, The Next Web praised the design for "solving" the historic exclusion of user-replaceable batteries from metal phones, noting that "at least having the option for customizability is pretty awesome, not to mention replacing your battery after its capacity drops in a year or two. That's something no other metal smartphone can claim." However, the metal case does not permit wireless charging, supported by earlier "G" series phones. 
The display was praised for its brightness, although The Next Web felt that its color temperature was too cool, producing a "distractingly blue-green hue" that was exacerbated by the prominent use of white in its user interface. The software of the G5 also received mixed reviews, with particular criticism directed to the removal of the app drawer from LG's default home screen. The always-on display on the G5 was praised for being more useful than that of the Galaxy S7 due to its support for displaying all notifications, and not just specific types. The Next Web lamented LG's removal of the Dual Window feature—albeit believing the removal may have been due to the inclusion of native dual window functionality on Android Nougat, and that LG's customizations were "all around less useful than Samsung's". Issues LG smartphone bootloop issues that mainly plagued the G4 line were also present in the G5, and the G4 lawsuit against LG was amended to include the G5, among other models. The Canadian model is not part of this suit, leaving Canadian uses without redress for defective units. References Further reading User guides This guide describes how to use the LG G5 and change some of its settings. It also includes safety and warranty information. This larger-format guide is composed of 8.5" by 11" pages. This guide includes a detailed table of contents. T-Mobile guides Miscellaneous The post must be expanded, using its "Click to expand" link, to be viewed in full. The expanded post includes: Descriptions of the official "LG Bridge" and "LG VPInput" software utilities. An extensive list of LG G5 submodels, plus a footnote describing how the lower-end "LG G5 SE" devices differ from the original LG G5 devices. Information related to rooting and bootloader unlocking. Tips on how to enable Android's "adoptable storage" feature on the G5. And other information. External links Android (operating system) devices G5 Modular smartphones Mobile phones introduced in 2016 Mobile phones with multiple rear cameras Mobile phones with user-replaceable battery Mobile phones with 4K video recording Discontinued flagship smartphones Mobile phones with infrared transmitter
LG G5
Technology,Engineering
2,069
2,902,596
https://en.wikipedia.org/wiki/9%20Andromedae
9 Andromedae, abbreviated 9 And by convention, is a variable binary star system in the northern constellation Andromeda. 9 Andromedae is the Flamsteed designation, while it bears the variable star designation AN Andromedae, or AN And. The maximum apparent visual magnitude of the system is 5.98, which places it near the lower limit of visibility to the human eye. Based upon an annual parallax shift of , it is located 460 light years from the Earth. This system was determined to be a single-lined spectroscopic binary in 1916 by American astronomer W. S. Adams, and the initial orbital elements were computed by Canadian astronomer R. K. Young in 1920. The pair orbit each other with a period of 3.2196 days and an eccentricity of 0.03. It is an eclipsing binary, which means the orbital plane is inclined close to the line of sight and, from the perspective of the Earth, the stars pass in front of each other, causing two partial eclipses every orbit. During the transit of the secondary in front of the primary, the visual magnitude drops to 6.16, while the eclipse of the secondary by the primary lowers the net magnitude to 6.09. References External links Image 9 Andromedae AN Andromedae 9 Andromedae Durchmusterung objects Andromedae, 09 219815 115065 8864 Andromedae, AN Eclipsing binaries A-type main-sequence stars
9 Andromedae
Astronomy
314
5,533,026
https://en.wikipedia.org/wiki/Interferon%20regulatory%20factors
Interferon regulatory factors (IRF) are proteins which regulate transcription of interferons (see regulation of gene expression). Interferon regulatory factors contain a conserved N-terminal region of about 120 amino acids, which folds into a structure that binds specifically to the IRF-element (IRF-E) motifs located upstream of the interferon genes. The remaining parts of the interferon regulatory factor sequence vary depending on the precise function of the protein. Some viruses have evolved defense mechanisms that regulate and interfere with IRF functions to escape the host immune system. For instance, the Kaposi sarcoma herpesvirus, KSHV, is a cancer virus that encodes four different IRF-like genes, including vIRF1, which is a transforming oncoprotein that inhibits type 1 interferon activity. In addition, the expression of IRF genes is under epigenetic regulation by promoter DNA methylation. Role in IFN signaling IRFs primarily regulate type I IFNs in the host after pathogen invasion and are considered the crucial mediators of an antiviral response. Following a viral infection, pathogens are detected by Pattern Recognition Receptors (PRRs), including various types of Toll-like Receptors (TLR) and cytosolic PRRs, in the host cell. The downstream signaling pathways from PRR activation phosphorylate ubiquitously expressed IRFs (IRF1, IRF3, and IRF7) through IRF kinases, such as TANK-binding kinase 1 (TBK1). Phosphorylated IRFs are translocated to the nucleus where they bind to IRF-E motifs and activate the transcription of Type I IFNs. In addition to IFNs, IRF1 and IRF5 have been found to induce transcription of pro-inflammatory cytokines. Some IRFs, such as IRF2 and IRF4, regulate the activation of IFNs and pro-inflammatory cytokines through inhibition. IRF2 contains a repressor region that downregulates expression of type I IFNs. IRF4 competes with IRF5, and inhibits its sustained activity. Role in immune cell development In addition to the signal transduction functions of IRFs in innate immune responses, multiple IRFs (IRF1, IRF2, IRF4, and IRF8) play essential roles in the development of immune cells, including dendritic, myeloid, natural killer (NK), B, and T cells. Dendritic cells (DC) are a group of heterogeneous cells that can be divided into different subsets with distinct functions and developmental programs. IRF4 and IRF8 specify and direct the differentiation of different subsets of DCs by stimulating subset-specific gene expression. For example, IRF4 is required for the generation of CD4+ DCs, whereas IRF8 is essential for CD8α+ DCs. In addition to IRF4 and IRF8, IRF1 and IRF2 are also involved in DC subset development. IRF8 has also been implicated in the promotion of macrophage development from common myeloid progenitors (CMPs) and the inhibition of granulocytic differentiation during the divergence of granulocytes and monocytes. IRF8 and IRF4 are also involved in the regulation of B and T-cell development at multiple stages. IRF8 and IRF4 function redundantly to drive common lymphoid progenitors (CLPs) to the B-cell lineage. IRF8 and IRF4 are also required in the regulation of germinal center (GC) B cell differentiation. Role in diseases IRFs are critical regulators of immune responses and immune cell development, and abnormalities in IRF expression and function have been linked to numerous diseases. 
Due to their critical role in IFN type I activation, IRFs are implicated in autoimmune diseases that are linked to activation of IFN type I system, such as systemic lupus erythematosus (SLE). Accumulating evidence also indicates that IRFs play a major role in the regulation of cellular responses linked to oncogenesis. In addition to autoimmune diseases and cancers, IRFs are also found to be involved in the pathogenesis of metabolic, cardiovascular, and neurological diseases, such as hepatic steatosis, diabetes, cardiac hypertrophy, atherosclerosis, and stroke. Genes IRF1 IRF2 IRF3 IRF4 IRF5 IRF6 IRF7 IRF8 IRF9 See also Interferon References External links Transcription factors Protein families
Interferon regulatory factors
Chemistry,Biology
967
6,043,076
https://en.wikipedia.org/wiki/Herbrand%20quotient
In mathematics, the Herbrand quotient is a quotient of orders of cohomology groups of a cyclic group. It was invented by Jacques Herbrand. It has an important application in class field theory. Definition If G is a finite cyclic group acting on a G-module A, then the cohomology groups Hn(G,A) have period 2 for n≥1; in other words Hn(G,A) = Hn+2(G,A), an isomorphism induced by cup product with a generator of H2(G,Z). (If instead we use the Tate cohomology groups then the periodicity extends down to n=0.) A Herbrand module is an A for which the cohomology groups are finite. In this case, the Herbrand quotient h(G,A) is defined to be the quotient h(G,A) = |H2(G,A)|/|H1(G,A)| of the order of the even and odd cohomology groups. Alternative definition The quotient may be defined for a pair of endomorphisms of an Abelian group, f and g, which satisfy the condition fg = gf = 0. Their Herbrand quotient q(f,g) is defined as q(f,g) = [ker f : im g] / [ker g : im f], if the two indices are finite. If G is a cyclic group with generator γ acting on an Abelian group A, then we recover the previous definition by taking f = 1 − γ and g = 1 + γ + γ2 + ... + γ^(n−1) (the norm element), where n is the order of G. Properties The Herbrand quotient is multiplicative on short exact sequences. In other words, if 0 → A → B → C → 0 is exact, and any two of the quotients are defined, then so is the third and h(G,B) = h(G,A)h(G,C). If A is finite then h(G,A) = 1. If A is a submodule of the G-module B of finite index, and either quotient is defined, then so is the other and they are equal: more generally, if there is a G-morphism A → B with finite kernel and cokernel then the same holds. If Z is the integers with G acting trivially, then h(G,Z) = |G|. If A is a finitely generated G-module, then the Herbrand quotient h(A) depends only on the complex G-module C⊗A (and so can be read off from the character of this complex representation of G). These properties mean that the Herbrand quotient is usually relatively easy to calculate, and is often much easier to calculate than the orders of either of the individual cohomology groups. See also Class formation Notes References See section 8. Algebraic number theory Abelian group theory
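A worked instance of the property h(G,Z) = |G|, computed with the alternative definition above (a sketch; [A : B] denotes the index of B in A):

```latex
\text{Let } G=\langle\gamma\rangle \text{ of order } n \text{ act trivially on } \mathbb{Z}.
\text{ Then } f = 1-\gamma = 0 \text{ and } g = 1+\gamma+\cdots+\gamma^{n-1} = n \text{ on } \mathbb{Z},\ \text{so}
\quad
q(f,g)=\frac{[\ker f : \operatorname{im} g]}{[\ker g : \operatorname{im} f]}
      =\frac{[\mathbb{Z} : n\mathbb{Z}]}{[0 : 0]}
      =\frac{n}{1}=|G| .
```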
Herbrand quotient
Mathematics
604
22,457,411
https://en.wikipedia.org/wiki/Isophote
In geometry, an isophote is a curve on an illuminated surface that connects points of equal brightness. One supposes that the illumination is done by parallel light and the brightness is measured by the following scalar product: b(P) = n(P) · l, where n(P) is the unit normal vector of the surface at point P and l is the unit vector of the light's direction. If b(P) = 0, i.e. the light is perpendicular to the surface normal, then point P is a point of the surface silhouette observed in direction l. Brightness 1 means that the light vector is perpendicular to the surface. A plane has no isophotes, because every point has the same brightness. In astronomy, an isophote is a curve on a photo connecting points of equal brightness. Application and example In computer-aided design, isophotes are used for checking optically the smoothness of surface connections. For a surface (implicit or parametric), which is differentiable enough, the normal vector depends on the first derivatives. Hence, the differentiability of the isophotes and their geometric continuity is 1 less than that of the surface. If at a surface point only the tangent planes are continuous (i.e. G1-continuous), the isophotes have a kink there (i.e. are only G0-continuous). In the following example (see diagram), two intersecting Bezier surfaces are blended by a third surface patch. For the left picture, the blending surface has only G1-contact to the Bezier surfaces and for the right picture the surfaces have G2-contact. This difference cannot be recognized from the picture. But the geometric continuity of the isophotes shows: on the left side, they have kinks (i.e. G0-continuity), and on the right side, they are smooth (i.e. G1-continuity). Determining points of an isophote On an implicit surface For an implicit surface with equation f(x) = 0 the isophote condition is (∇f(x) · l) / |∇f(x)| = c. That means: points of an isophote with given parameter c are solutions of the nonlinear system f(x) = 0, ∇f(x) · l − c |∇f(x)| = 0, which can be considered as the intersection curve of two implicit surfaces. Using the tracing algorithm of Bajaj et al. (see references) one can calculate a polygon of points. On a parametric surface In case of a parametric surface x = x(s,t) the isophote condition is ((x_s × x_t) · l) / |x_s × x_t| = c (here x_s, x_t denote the partial derivatives), which is equivalent to (x_s(s,t) × x_t(s,t)) · l − c |x_s(s,t) × x_t(s,t)| = 0. This equation describes an implicit curve in the s-t-plane, which can be traced by a suitable algorithm (see implicit curve) and transformed by x(s,t) into surface points. See also Contour line References J. Hoschek, D. Lasser: Grundlagen der geometrischen Datenverarbeitung, Teubner-Verlag, Stuttgart, 1989, p. 31. Z. Sun, S. Shan, H. Sang et al.: Biometric Recognition, Springer, 2014, p. 158. C.L. Bajaj, C.M. Hoffmann, R.E. Lynch, J.E.H. Hopcroft: Tracing Surface Intersections, (1988) Comp. Aided Geom. Design 5, pp. 285–307. C. T. Leondes: Computer Aided and Integrated Manufacturing Systems: Optimization methods, Vol. 3, World Scientific, 2003, p. 209. External links Patrikalakis-Maekawa-Cho: Isophotes (engl.) A. Diatta, P. Giblin: Geometry of Isophote Curves Jin Kim: Computing Isophotes of Surface of Revolution and Canal Surface Curves
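The parametric-surface case above can be visualised with a few lines of NumPy/Matplotlib: the brightness b(s,t) is evaluated on a parameter grid and its level curves are traced with a generic contour routine, a simple stand-in for the curve-tracing algorithms cited above. The ellipsoid, light direction and isophote levels are arbitrary illustrative choices, not taken from the article.

```python
import numpy as np
import matplotlib.pyplot as plt

# Isophotes of an ellipsoid x(s,t) = (2 cos s cos t, 1.5 sin s cos t, sin t),
# traced as level curves of the brightness b(s,t) = n(s,t) . light in the s-t-plane.
light = np.array([0.0, 0.0, 1.0])                   # unit vector of the light's direction

def X(s, t):
    return np.stack([2.0 * np.cos(s) * np.cos(t),
                     1.5 * np.sin(s) * np.cos(t),
                     np.sin(t)], axis=-1)

s, t = np.meshgrid(np.linspace(0.0, 2.0 * np.pi, 400),
                   np.linspace(-1.4, 1.4, 200))     # stay away from the degenerate poles

h = 1e-5                                            # finite-difference partial derivatives
xs = (X(s + h, t) - X(s - h, t)) / (2.0 * h)
xt = (X(s, t + h) - X(s, t - h)) / (2.0 * h)
n = np.cross(xs, xt)
n /= np.linalg.norm(n, axis=-1, keepdims=True)      # unit normal vectors

b = n @ light                                       # brightness; the level b = 0 is the silhouette
plt.contour(s, t, b, levels=[-0.75, -0.5, 0.0, 0.5, 0.75])
plt.xlabel("s"); plt.ylabel("t"); plt.title("Isophotes in the parameter plane")
plt.show()
```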
Isophote
Engineering
727
23,090,341
https://en.wikipedia.org/wiki/List%20of%20Mir%20visitors
This is a list of visitors to the Mir space station in alphabetical order. Station crew names are in bold. The suffix (twice) refers to the individual's number of Mir visits, not their total number of space flights. Entries without a flag symbol indicate that the person was a citizen from the bloc of countries comprising the former Soviet Union at launch. Statistics Between 1986 and 2000, 104 individuals visited the Mir space station. Note that this list does not double count for individuals with dual citizenship (for example, the British-American astronaut Michael Foale is only listed under the United States). By nationality By agency A Viktor Afanasyev (thrice) Thomas D. Akers Toyohiro Akiyama (tourist) Aleksandr Aleksandrov Aleksandr Aleksandrov Michael P. Anderson Jerome Apt Anatoly Artsebarsky Toktar Aubakirov Sergei Avdeyev (thrice) B Ellen S. Baker Michael A. Baker Aleksandr Balandin Yuri Baturin Ivan Bella John E. Blaha Michael J. Bloomfield Nikolai Budarin (twice) C Kenneth D. Cameron Franklin R. Chang-Diaz Kevin P. Chilton Jean-Loup Chrétien (twice) Jean-François Clervoy Michael R. Clifford Eileen M. Collins D Vladimir Dezhurov Bonnie J. Dunbar (twice) E Joe F. Edwards Reinhold Ewald Léopold Eyharts F Muhammed Faris Klaus-Dietrich Flade Michael Foale G Robert L. Gibson Yuri Gidzenko Linda M. Godwin John M. Grunsfeld H Chris A. Hadfield James D. Halsell Claudie Haigneré Jean-Pierre Haigneré (twice) Gregory J. Harbaugh I Marsha S. Ivins J Brent W. Jett K Aleksandr Kaleri (thrice) Janet L. Kavandi Leonid Kizim Yelena Kondakova (twice) Sergei Krikalev (twice) Valery Korzun L Aleksandr Laveykin Wendy B. Lawrence (twice) Aleksandr Lazutkin Anatoli Levchenko Jerry M. Linenger Edward T. Lu Shannon W. Lucid Vladimir Lyakhov M Yuri Malenchenko Gennadi Manakov (twice) Musa Manarov (twice) William S. McArthur Ulf Merbold Abdul Ahad Mohmand Talgat Musabayev (twice) N Carlos I. Noriega O Yuri Onufrienko P Gennady Padalka Scott E. Parazynski Charles J. Precourt (thrice) Aleksandr Poleshchuk Valeri Polyakov (twice) Dominic L. Pudwill Gorie R William F. Readdy James F. Reilly Thomas Reiter Yuri Romanenko Jerry L. Ross Valery Ryumin S Viktor Savinykh Richard A. Searfoss Ronald M. Sega Aleksandr Serebrov (twice) Salizhan Sharipov Helen Sharman Anatoly Solovyev (five times) Vladimir Solovyov Gennady Strekalov (twice) T Norman E. Thagard Andrew S.W. Thomas Vladimir Titov (twice) Michel Tognini Vasili Tsibliyev (twice) U Yury Usachev (twice) V Franz Viehböck Aleksandr Viktorenko (four times) Pavel Vinogradov Aleksandr Volkov (twice) W Carl E. Walz James D. Wetherbee Terrence W. Wilcutt (twice) Peter J.K. Wisoff David A. Wolf Z Sergei Zalyotin See also List of astronauts by name References External links NASA Shuttle-Mir overview Mir Space Station Astronauts by space program Mir visitors Mir visitors
List of Mir visitors
Engineering
758