source | text |
|---|---|
https://en.wikipedia.org/wiki/Woods%E2%80%93Saxon%20potential | The Woods–Saxon potential is a mean field potential for the nucleons (protons and neutrons) inside the atomic nucleus, which is used to describe approximately the forces applied on each nucleon, in the nuclear shell model for the structure of the nucleus. The potential is named after Roger D. Woods and David S. Saxon.
The form of the potential, in terms of the distance r from the center of the nucleus, is:
$$V(r) = -\frac{V_0}{1 + \exp\!\left(\frac{r - R}{a}\right)}$$
where V0 (having dimension of energy) represents the potential well depth,
a is a length representing the "surface thickness" of the nucleus, and $R = r_0 A^{1/3}$ is the nuclear radius, where $r_0 \approx 1.25$ fm and A is the mass number.
Typical values for the parameters are: $V_0 \approx 50$ MeV, $a \approx 0.5$ fm.
For large atomic number A this potential is similar to a potential well. It has the following desired properties
It is monotonically increasing with distance, i.e. attracting.
For large A, it is approximately flat in the center.
Nucleons near the surface of the nucleus (i.e. those with $r - R$ within a distance of order a) experience a large force towards the center.
It rapidly approaches zero as r goes to infinity (for $r - R \gg a$), reflecting the short-distance nature of the strong nuclear force.
The Schrödinger equation of this potential can be solved analytically, by transforming it into a hypergeometric differential equation. The radial part of the wavefunction solution is given by
where , , , and . Here is the hypergeometric function.
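A minimal Python sketch, assuming the typical parameter values given above, illustrates the shape of the potential:

```python
import math

def woods_saxon(r, A, V0=50.0, a=0.5, r0=1.25):
    """Woods-Saxon mean-field potential in MeV.

    r  : distance from the centre of the nucleus in fm
    A  : mass number
    V0 : well depth in MeV, a : surface thickness in fm,
    r0 : radius parameter in fm (so R = r0 * A**(1/3)).
    """
    R = r0 * A ** (1.0 / 3.0)          # nuclear radius
    return -V0 / (1.0 + math.exp((r - R) / a))

# Example: lead-208. The potential is flat and deep inside the nucleus
# and falls rapidly to zero within a few surface thicknesses of R.
for r in (0.0, 3.0, 6.0, 7.4, 9.0, 12.0):
    print(f"r = {r:5.1f} fm   V = {woods_saxon(r, A=208):8.3f} MeV")
```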
See also
Finite potential well
Quantum harmonic oscillator
Particle in a box
Yukawa potential
Nuclear force
Nuclear structure
Shell model |
https://en.wikipedia.org/wiki/Jean-Marie%20De%20Koninck | Jean-Marie De Koninck, (born 1948) is a Canadian mathematician. He has served as a professor at Université Laval since 1972 and is the creator of the road safety program Opération Nez Rouge, or "Red Nose Operation", a system preventing people from drinking and driving.
Biography
He is the son of the philosopher and theologian Charles De Koninck and the brother of the geographer Rodolphe De Koninck, the psychologist Joseph De Koninck, the philosopher Thomas De Koninck and the sociologist Maria De Koninck.
Birthdate:
April 29, 1948, Quebec City.
Occupation:
Professor of Mathematics at Université Laval
University diplomas:
1970: Baccalauréat ès Sciences, Université Laval
1972" Master's degree in mathematics, Temple University
1973: Ph.D. in mathematics, Temple University
Professional career at Université Laval:
1972-1977: assistant professor at the Mathematics Department
1976-1980: assistant director of the Mathematics Department and head of graduate studies
1977-1982: associate professor at the Mathematics Department
1982–present: professor at the Mathematics and Statistics Department
1988-2002: responsible for the collaboration between Université Laval and colleges
1999-2003: assistant director at the Mathematics and Statistics Department and director of programs for the 2nd and 3rd cycles in mathematics and statistics
2005–present: director of SMAC (Science and Mathematics in Action) program
Scientific realizations
Author of nine books:
Topics in Arithmetical Functions, North-Holland, 1980
Approche élémentaire de l'étude des fonctions arithmétiques, Les Presses de l'Université Laval, 1982
Introduction à la théorie des nombres, Modulo, Montréal, 1994
1001 problèmes en théorie classique des nombres (with Armel Mercier), Ellipses, Paris, 2004
Mathématiques de l'ingénieur, Éditions Loze, Montréal, 2004
1001 Problems in Classical Number Theory (with Armel Mercier), American Mathematical Society, 2007 (English |
https://en.wikipedia.org/wiki/Alessandra%20Giliani | Alessandra Giliani (1307–1326) was thought to be an Italian natural historian, best known as the first woman to be recorded in historical documents as practicing anatomy and pathology. However, the historical evidence for her existence is limited. Some scholars consider her to be a fiction invented by Alessandro Machiavelli (1693–1766), whilst others hold that the participation of a woman in anatomy at that time was so shocking that she has been edited out of history.
Giliani is believed to have been born in 1307, in San Giovanni in Persiceto, in the Italian province of Emilia-Romagna. The chronicle of her life holds that she died in 1326, possibly from a septic wound, at the age of 19. Celebrated as the first female anatomist of the Western World, she is reputed to have been a brilliant prosector (preparer of corpses for anatomical dissection). She is said to have worked as the surgical assistant to Mondino de' Liuzzi (d. 1326), a world-renowned professor at the medical school of the University of Bologna. (Credited with being the father of modern anatomy, de' Liuzzi published a seminal text on the subject in 1316.)
Giliani is said to have carried out her own anatomical investigations, developing a method of draining the blood from a corpse and replacing it with a hardening coloured dye—and possibly adding to our understanding of the coronary-pulmonary circulatory system. (All evidence of her work was either lost or destroyed.)
Alessandra Giliani's short life was honoured by Otto Angenius, also one of Mondino's assistants and probably her fiancé, with a plaque at the "San Pietro e Marcellino degli Spedolari di Santa Maria di Mareto, o d'Ulmareto" which describes her work.
Legacy
She is mentioned by the nineteenth-century historian Michele Medici, who published a history of the Bolognese school of anatomy in 1857.
Barbara Quick's novel, A Golden Web, published by HarperTeen in 2010, is a fictional re-imagining of Alessandra Giliani's life and times. |
https://en.wikipedia.org/wiki/International%20Journal%20of%20Wireless%20Information%20Networks | The International Journal of Wireless Information Networks is a quarterly peer-reviewed scientific journal covering research on wireless networks, including sensor networks, mobile ad hoc networks, wireless personal area networks, wireless LANs, indoor positioning systems, wireless health, body area networking, cyber-physical systems, and RFID techniques. The journal is abstracted and indexed in Scopus. |
https://en.wikipedia.org/wiki/Predictive%20state%20representation | In computer science, a predictive state representation (PSR) is a way to model a state of controlled dynamical system from a history of actions taken and resulting observations. PSR captures the state of a system as a vector of predictions for future tests (experiments) that can be done on the system. A test is a sequence of action-observation pairs and its prediction is the probability of the test's observation-sequence happening if the test's action-sequence were to be executed on the system. One of the advantage of using PSR is that the predictions are directly related to observable quantities. This is in contrast to other models of dynamical systems, such as partially observable Markov decision processes (POMDPs) where the state of the system is represented as a probability distribution over unobserved nominal states. |
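A minimal sketch of the linear-PSR state update may make this concrete; the weight vectors below are purely hypothetical and are not taken from any particular system:

```python
import numpy as np

# Minimal sketch of a *linear* PSR with hypothetical toy parameters.
# The state is a vector of predictions for a set of "core tests" Q; for an
# action-observation pair (a, o) and a test t, the model stores a weight
# vector m_t such that P(t | history) = m_t . state(history).

core_tests = ["q1", "q2"]                    # tests whose predictions form the state
state = np.array([0.6, 0.3])                 # P(q1 | h), P(q2 | h) for the current history h

# Hypothetical weight vectors for one action-observation pair (a, o):
m_ao = np.array([0.5, 0.5])                  # predicts P(ao | h)
M_ao = np.array([[0.4, 0.2],                 # row i predicts P(ao followed by q_i | h)
                 [0.1, 0.6]])

def update(state, m_ao, M_ao):
    """Bayes-style PSR update after taking action a and observing o."""
    p_ao = m_ao @ state                      # probability of observing o after doing a
    return (M_ao @ state) / p_ao             # new prediction vector for history h.ao

print("P(o | h, a) =", m_ao @ state)
print("new state  =", update(state, m_ao, M_ao))
```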
https://en.wikipedia.org/wiki/Gestational%20pemphigoid | Gestational pemphigoid (GP) is a rare autoimmune variant of the skin disease bullous pemphigoid, and first appears in pregnancy. It presents with tense blisters, small bumps, hives and intense itching, usually starting around the navel before spreading to limbs in mid-pregnancy or shortly after delivery. The head, face and mouth are not usually affected.
It may flare after delivery before resolving around three to six months after the pregnancy. It can be triggered by subsequent pregnancies, menstrual periods and the oral contraceptive pill. A molar pregnancy and choriocarcinoma can provoke it. In some people, it persists long-term. It is associated with premature delivery of a small baby, a few of whom may be born with blisters and urticaria, which generally resolve within six weeks. It does not spread from one person to another, and does not run in families.
Diagnosis is by visualization, biopsy and immunofluorescence. It can resemble pruritic urticarial papules and plaques of pregnancy, erythema multiforme, drug reactions and blistering scabies.
Around 1 in 20,000 to 50,000 pregnancies are affected. It was originally called herpes gestationis because of the blistering appearance, although it is not associated with the herpes virus.
Signs and symptoms
Diagnosis of GP becomes clear when skin lesions progress to tense blisters during the second or third trimester. The face and mucous membranes are usually spared. GP typically starts as a blistering rash in the navel area and then spreads over the entire body. It is sometimes accompanied by raised, hot, painful welts called plaques. After one to two weeks, large, tense blisters typically develop on the red plaques, containing clear or blood-stained fluid. GP creates a histamine response that causes extreme relentless itching (pruritus). GP is characterized by flaring and remission during the gestational and sometimes post partum period. Usually after delivery, lesions will heal within months, but may reoccur during men |
https://en.wikipedia.org/wiki/Charles%20de%20Bovelles | Charles de Bovelles (; born c. 1475 at Saint-Quentin, died at Ham, Somme after 1566) was a French mathematician and philosopher, and canon of Noyon. His Géométrie en françoys (1511) was the first scientific work to be printed in French.
Bovelles authored a number of philological, theological and mystical treatises, and has been reckoned to be "perhaps the most remarkable French thinker of the 16th century."
Life
Joseph Victor has written the best intellectual biography of Charles de Bovelles, but got the date of his death wrong.
He studied arithmetic under Jacques Lefèvre d'Étaples. His contemporaries knew him as widely travelled in Europe.
It is known that he made a rebus for the year (1509) of the building of the hôtel de ville in Saint-Quentin. He gave a stained glass window in the town in 1521. In 1547, in the preface of La Geometrie practique, Bovelles acknowledges help from Oronce Fine with the engravings.
Maupin (end of the nineteenth century) assigned the dates 1470-1553, which confused many people subsequently. S. Musial, in a careful study published in the Actes of the 500th anniversary of his birth in Noyon in 1979 (published 1982), fixes the date of his death at 1567 (see also Margolin's Letters and Poems of Charles de Bovelles, 2002).
Works
In artem oppositorum introductio, 1501;
Metaphysicae introductorium, 1503;
De constitutione et utilitate artium, ca. 1510;
Quae in hoc volumine continentur: Liber de intellectu. Liber de sensibus. Liber de generatione. Libellus de nihilo. Ars oppositorum. Liber de sapiente. Liber de duodecim numeris. Philosophicae epistulae. Liber de perfectis numeris. Libellus de mathematicis rosis. Liber de mathematicis corporibus. Libellus de mathematicis supplementis, 1510 (Repr. 1970);
Dominica Oratio tertrinis ecclesiastice hierarchie ordinibus particulatim attributa et facili explanata commentario, 1511;
In hoc opere contenta: Commentarius in primordiale Evangelium divi Joannis. Vita Remundi eremitae. Philosophicae aliquot Epistol |
https://en.wikipedia.org/wiki/Social%20television | Social television is the union of television and social media. Millions of people now share their TV experience with other viewers on social media such as Twitter and Facebook using smartphones and tablets. TV networks and rights holders are increasingly sharing video clips on social platforms to monetise engagement and drive tune-in.
The social TV market covers the technologies that support communication and social interaction around TV as well as companies that study television-related social behavior and measure social media activities tied to specific TV broadcasts – many of which have attracted significant investment from established media and technology companies. The market is also seeing numerous tie-ups between broadcasters and social networking players such as Twitter and Facebook. The market is expected to be worth $256bn by 2017.
Social TV was named one of the 10 most important emerging technologies of 2010 by the MIT Technology Review. In 2011, David Rowan, the editor of Wired magazine, ranked Social TV third of six in his preview of 2011 and the tech trends expected to gain traction. Ynon Kreiz, CEO of the Endemol Group, told the audience at the Digital Life Design (DLD) conference in January 2011: "Everyone says that social television will be big. I think it's not going to be big—it's going to be huge".
Much of the investment in the earlier years of social TV went into standalone social TV apps. The industry believed these apps would provide an appealing and complementary consumer experience which could then be monetized with ads. These apps featured TV listings, check-ins, stickers and synchronised second-screen content but struggled to attract users away from Twitter and Facebook. Most of these companies have since gone out of business or been acquired amid a wave of consolidation and the market has instead focused on the activities of the social media channels themselves – such as Twitter Amplify, Facebook Suggested Vid
https://en.wikipedia.org/wiki/Wireless%20Washtenaw | The Wireless Washtenaw project was originally an ambitious plan to provide free wireless broadband access throughout Washtenaw County, Michigan by April 2008 "without a burden on taxpayers". To accomplish this, it was to rely upon a public/ private sector partnership between the Washtenaw County government and 20/20 Communications. In March 2010, due to a failure to qualify for a certain anticipated federal stimulus grant, 20/20 Communications sold most of its operations to 123Net.
20/20 Communications however continues to be 123.net's sales representative for the Washtenaw County area via its website and sales office.
123.net has continued to maintain the Wireless Washtenaw network, and in the downtown Ann Arbor area has significantly expanded its transmission capabilities to include the 4G WiMAX microwave band. Their 4G WiMAX service is a business class product offered outside of the original Wireless Washtenaw project. It has also upgraded some of the network equipment of the project as well.
As of November 2010, the network provided wireless internet access options to downtown Ann Arbor, Manchester, Saline, Chelsea, and Dexter.
From 2008 through 2010 it became increasingly clear that the original goals of the Wireless Washtenaw program were not being achieved by the deadlines originally stipulated in the 20/20 Communications contract. Since the acquisition by 123.net, and unless an additional source of significant funding for the program is found, 20/20 Communications, under 123.net, has restated the goals of the plan more realistically as merely hoping "to revisit the possibility of slowly expanding the Wireless Washtenaw network sometime next summer (2011)."
One estimate for the amount of additional funding needed to provide full coverage to the county is $10,000,000.
New 'free' subscriptions to the service are no longer offered on the 20/20 website. 20/20 also no longer advertises any pricing on its website (November 2010).
Original proje |
https://en.wikipedia.org/wiki/Venus%20Engine | The Venus Engine is an image-processing engine for digital cameras. It is developed by the company Panasonic. Almost all of their Lumix cameras use a version of the Venus Engine. It is based on the Panasonic MN103/MN103S.
All image processors operate in four steps. Firstly, they receive data from the CCD sensor. Secondly, they create the Y-color difference signal (image processing). Thirdly, they perform JPEG compression. Finally, they save the image data. Panasonic claims that its VENUS II processing engine performs all of these simultaneously.
Venus
Image-processing engines are categorized below by generation. The Venus Engine is a chip based on UniPhier products. The image-processing engine of the bundled RAW-development software is made by Ichikawa Soft Laboratory and produces images with a different rendering character (SILKYPIX style) from those developed by the Venus Engine in the camera.
Venus – 2002
Developed to achieve both high image quality and high-speed processing, Venus Engine Plus is based on this chip.
Improvement in resolution in oblique directions (about 1.5 times that of a conventional engine)
Concurrent processing by multitasking image processing
Venus Engine II – 2004
This chip was limited to the top range models, such as the DMC-FZ7. It was developed mainly aiming at high image quality. After this chip (excluding Venus Engine · Plus) the camera shake correction is hardware processed.
Enhanced camera shake correction (480 to 4,000 times per second)
Improvement in resolution in the vertical and horizontal directions (about 10%)
Improvement of color reproducibility (4 to 12 axis color correction)
2DNR strengthening (reduction of dark noise and skin-smoothing processing)
Magnification chromatic aberration correction
Free consecutive shooting
Production process: 130 nm
Maximum operating frequency: 50 MHz
Venus Engine Plus – 2005
This chip was developed for mounting in an entry model. Reproducibility is inferior to Venus Engine II, but it realizes power saving and hi |
https://en.wikipedia.org/wiki/Speech%20recognition%20software%20for%20Linux | As of the early 2000s, several speech recognition (SR) software packages exist for Linux. Some of them are free and open-source software and others are proprietary software. Speech recognition usually refers to software that attempts to distinguish thousands of words in a human language. Voice control may refer to software used for communicating operational commands to a computer.
Linux native speech recognition
History
In the late 1990s, a Linux version of ViaVoice, created by IBM, was made available to users for no charge. In 2002, the free software development kit (SDK) was removed by the developer.
Development status
In the early 2000s, there was a push to get a high-quality Linux native speech recognition engine developed. As a result, several projects dedicated to creating Linux speech recognition programs were begun, such as Mycroft, which is similar to Microsoft Cortana, but open-source.
Speech sample crowdsourcing
It is essential to compile a speech corpus to produce acoustic models for speech recognition projects. VoxForge is a free speech corpus and acoustic model repository that was built to collect transcribed speech to be used in speech recognition projects. VoxForge accepts crowdsourced speech samples and corrections of recognized speech sequences. It is licensed under a GNU General Public License (GPL).
Speech recognition concept
The first step is to begin recording an audio stream on a computer. The user has two main processing options:
Discrete speech recognition (DSR) – processes information on a local machine entirely. This refers to self-contained systems in which all aspects of SR are performed entirely within the user's computer. This is becoming critical for protecting intellectual property (IP) and avoiding unwanted surveillance (2018).
Remote or server-based SR – transmits an audio speech file to a remote server to convert the file into a text string file. Due to recent cloud storage schemes and data mining, this method more easily |
https://en.wikipedia.org/wiki/Huntington%27s%20Disease%20Society%20of%20America | The Huntington's Disease Society of America is a US non-profit organization dedicated to improving the lives of those affected by Huntington's disease, an incurable, genetically transmitted degenerative disease of the nervous system that affects movement, thinking, and some aspects of personality.
The Huntington's Disease Society of America is the largest non-profit volunteer organization dedicated to improving the lives of everyone affected by Huntington's Disease. Founded in 1967 by Marjorie Guthrie, wife of folk legend Woody Guthrie who died of HD, the Society works to provide the family services, education, advocacy and research for the more than 41,000 people diagnosed with HD in the United States.
HDSA supports and participates in the HD Drug Research Pipeline, which develops potential therapies to treat and eventually cure HD; and HDSA also supports 50+ HDSA Centers of Excellence at major medical facilities throughout the U.S., where people with HD and their families receive comprehensive medical, psychological and social services, in addition to physical and occupational therapy and genetic testing and counseling. The Society comprises 50+ volunteer-led local chapters and affiliates across the country with its headquarters in New York City. Additionally, HDSA hosts more than 200 support groups for people with HD, their families, caregivers and people at-risk, and is a resource on Huntington's Disease for medical professionals and the general public. |
https://en.wikipedia.org/wiki/Pachycaul | Pachycauls are plants with a disproportionately thick trunk for their height, and few branches. In contrast, trees with thin twigs such as Oak (Quercus), Maple (Acer) and Eucalyptus are called leptocauls while those with moderately thick twigs like Plumeria are called mesocauls. Pachycauly can be the product of exceptional primary growth (as with palms and cycads) or disproportionate secondary growth as with the Baobabs (Adansonia). The word is derived from the Greek pachy- meaning thick or stout, and Latin caulis meaning the stem. All of the tree (and treelike) species of cactus are pachycauls, as are most palms, Cycads and pandans. The most extreme pachycauls are the floodplains, or riverbottom variety of the African Palmyra (Borassus aethiopum) with primary growth up to in thickness, and the Coquito Palm (Jubaea chilensis) with primary growth up to thick. The most pachycaulous cycad is Cycas thouarsii at up to in diameter. The tallest pachycaul is the Andean Wax Palm (Ceroxylon quindiuense) at up to and about in diameter. The most pachycaulous cactus is the Bisnaga (Echinocactus platyacanthus) with primary growth up to in diameter. The largest caudex type pachycaul is the African Baobab (Adansonia digitata). One called the Glencoe Baobab at Hoedspruit, South Africa has a basal diameter (not girth) of . This tree suffered a severe trauma and is dying.
Examples occur in the genera
Pachycormus (Anacardiaceae)
Adenium
Pachypodium (Apocynaceae)
Dendrosenecio (Asteraceae)
Bursera (Burseraceae)
Cyanea
Lobelia (Campanulaceae)
Dendrosicyos (Cucurbitaceae)
Givotia (Euphorbiaceae)
Delonix (Fabaceae)
Fouquieria (Fouquieriaceae)
Adansonia
Bombax
Brachychiton
Cavanillesia
Ceiba (Malvaceae)
Dorstenia (Moraceae)
Cyphostemma (Vitaceae).
See also
Caudex |
https://en.wikipedia.org/wiki/Reg%20Hill | Reginald Eric Hill (16 May 1914 – 1999) was an English model-maker, art director, producer, and freelance storyboard artist. He is most prominently associated with the work of Gerry Anderson.
Early life
Born on 16 May 1914, Hill started his working life during the 1930s in the display department of a London wholesale grocer before progressing to a role of advertising designer. He obtained a private pilot's licence in June 1939. Hill served in the Royal Air Force during the Second World War, spending time at Benson in Oxfordshire as an airframe fitter instructor. After the war ended, he was posted to Germany and, on his return, flew an Avro Lancaster from Germany to England.
Post-war
After returning to England, Hill joined National Interest Picture Productions as a designer for British Army, RAF and other government-made films, working as a model maker and animator. He also used his artistic and design skills as a commercial artist creating paper cut-out model books (three-dimensional flight aircraft and other working models), jigsaw puzzles, greeting cards, the gunfire featured in the film The Dam Busters (1955), and more.
Involvement with Gerry Anderson
In 1954, while working as an artist at Pentagon Films, Hill met Gerry Anderson, who had just formed, in partnership with Arthur Provis, the production company Anderson-Provis (AP) Films. Hill became the company's production designer. Initially based in Taplow, the new company produced a range of adverts for TV, including the "Blue Cars" advert starring Nicholas Parsons. During quiet periods, Reg worked on a number of other projects, including the TV series The Adventures of Robin Hood (1957), made at Walton Studios.
AP Films was approached by Roberta Leigh to produce animated programmes for TV, a collaboration that resulted in The Adventures of Twizzle and Torchy the Battery Boy towards the end of the 1950s. Hill worked in all things artistic, from set and puppet design to special effects. The collaboration with |
https://en.wikipedia.org/wiki/Attenuated%20vaccine | An attenuated vaccine (or a live attenuated vaccine, LAV) is a vaccine created by reducing the virulence of a pathogen, but still keeping it viable (or "live"). Attenuation takes an infectious agent and alters it so that it becomes harmless or less virulent. These vaccines contrast to those produced by "killing" the pathogen (inactivated vaccine).
Attenuated vaccines stimulate a strong and effective immune response that is long-lasting. In comparison to inactivated vaccines, attenuated vaccines produce a stronger and more durable immune response with a quick immunity onset. They are generally avoided in patients with severe immunodeficiencies. Attenuated vaccines function by encouraging the body to create antibodies and memory immune cells in response to the specific pathogen which the vaccine protects against. Common examples of live attenuated vaccines are measles, mumps, rubella, yellow fever, and some influenza vaccines.
Development
Attenuated viruses
Viruses may be attenuated using the principles of evolution via serial passage of the virus through a foreign host species, such as:
Tissue culture
Embryonated eggs (often chicken)
Live animals
The initial virus population is applied to a foreign host. Through natural genetic variability or induced mutation, a small percentage of the viral particles should have the capacity to infect the new host. These strains will continue to evolve within the new host and the virus will gradually lose its efficacy in the original host, due to lack of selection pressure. This process is known as "passage" in which the virus becomes so well adapted to the foreign host that it is no longer harmful to the subject that is to receive the vaccine. This makes it easier for the host immune system to eliminate the agent and create the immunological memory cells which will likely protect the patient if they are infected with a similar version of the virus in "the wild".
Viruses may also be attenuated via reverse genetics. Attenuat |
https://en.wikipedia.org/wiki/Brascamp%E2%80%93Lieb%20inequality | In mathematics, the Brascamp–Lieb inequality is either of two inequalities. The first is a result in geometry concerning integrable functions on n-dimensional Euclidean space . It generalizes the Loomis–Whitney inequality and Hölder's inequality. The second is a result of probability theory which gives a concentration inequality for log-concave probability distributions. Both are named after Herm Jan Brascamp and Elliott H. Lieb.
The geometric inequality
Fix natural numbers m and n. For 1 ≤ i ≤ m, let $n_i \in \mathbb{N}$ and let $c_i > 0$ so that
$$\sum_{i=1}^{m} c_i n_i = n.$$
Choose non-negative, integrable functions $f_i : \mathbb{R}^{n_i} \to [0, +\infty)$
and surjective linear maps $B_i : \mathbb{R}^{n} \to \mathbb{R}^{n_i}$.
Then the following inequality holds:
$$\int_{\mathbb{R}^{n}} \prod_{i=1}^{m} f_i\!\left(B_i x\right)^{c_i} \, \mathrm{d}x \;\le\; D^{-1/2} \prod_{i=1}^{m} \left( \int_{\mathbb{R}^{n_i}} f_i(y) \, \mathrm{d}y \right)^{c_i},$$
where D is given by
$$D = \inf \left\{ \frac{\det \left( \sum_{i=1}^{m} c_i B_i^{\mathrm{T}} A_i B_i \right)}{\prod_{i=1}^{m} (\det A_i)^{c_i}} \;:\; A_i \text{ is a positive-definite } n_i \times n_i \text{ matrix} \right\}.$$
Another way to state this is that the constant D is what one would obtain by restricting attention to the case in which each $f_i$ is a centered Gaussian function, namely $f_i(y) = \exp\!\left(-y^{\mathrm{T}} A_i y\right)$.
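A crude numerical sketch of this Gaussian characterization of D, for the simple illustrative datum n = 1, m = 2, B1 = B2 = id, c1 = c2 = 1/2 (for which the inequality reduces to the Cauchy–Schwarz inequality and the exact constant is D = 1), is the following:

```python
import numpy as np

# Estimate the Brascamp-Lieb constant D by minimising the Gaussian determinant
# ratio over randomly drawn positive-definite matrices A_i.  The datum below is
# illustrative: n = 1, m = 2, both maps are the identity, c_1 = c_2 = 1/2, so the
# ratio is (a1 + a2) / (2 * sqrt(a1 * a2)) >= 1 and the infimum D equals 1.

rng = np.random.default_rng(0)
c = [0.5, 0.5]
B = [np.eye(1), np.eye(1)]           # surjective linear maps R^1 -> R^1

def gaussian_ratio(As):
    """det(sum_i c_i B_i^T A_i B_i) / prod_i det(A_i)^{c_i}."""
    num = np.linalg.det(sum(ci * Bi.T @ Ai @ Bi for ci, Bi, Ai in zip(c, B, As)))
    den = np.prod([np.linalg.det(Ai) ** ci for ci, Ai in zip(c, As)])
    return num / den

estimate = min(gaussian_ratio([np.diag(rng.uniform(0.1, 10.0, 1)) for _ in B])
               for _ in range(20000))
print("estimated D:", estimate)       # approaches the exact value 1 from above
```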
Alternative forms
Consider a probability density function $f(x)$. This probability density function is said to be a log-concave measure if the function $-\log f(x)$ is convex. Such probability density functions have tails which decay exponentially fast, so most of the probability mass resides in a small region around the mode of $f$. The Brascamp–Lieb inequality gives another characterization of the compactness of $f$ by bounding the mean of any statistic $S(x)$.
Formally, let $S(x)$ be any differentiable function. The Brascamp–Lieb inequality reads:
where H is the Hessian and is the Nabla symbol.
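One commonly cited form of this bound, written here with the density expressed as $f(x) = e^{-\varphi(x)}$ for a strictly convex $\varphi$ (a notational assumption rather than the excerpt's own symbols), controls the fluctuation of the statistic $S$:
$$\operatorname{Var}_f\!\big(S(X)\big) \;\le\; \mathbb{E}_f\!\left[ \nabla S(X)^{\mathrm{T}} \, \big(\nabla^{2} \varphi(X)\big)^{-1} \, \nabla S(X) \right],$$
where $\nabla^{2} \varphi$ is the Hessian of $\varphi$ and $\nabla$ is the gradient (Nabla) operator.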
BCCT inequality
The inequality was generalized in 2008 to account for both continuous and discrete cases, and for all linear maps, with precise estimates on the constant.
Definition: the Brascamp-Lieb datum (BL datum)
.
.
.
are linear surjections, with zero common kernel: .
Call a Brascamp-Lieb datum (BL datum).
For any with , define
Now define the Brascamp-Lieb constant for the BL datum:
Discrete case
Setup:
Finitely generated abelian groups .
Group homomorphisms .
BL datum defined as
is the torsion subgroup, that is, the subgroup of finite-order elements |
https://en.wikipedia.org/wiki/CDK7%20pathway | CDK7 is a cyclin-dependent kinase that is not easily classified. CDK7 is both a CDK-activating kinase (CAK) and a component of the general transcription factor TFIIH.
Introduction
An intricate network of cyclin-dependent kinases (CDKs) is organized in a pathway to ensure that each cell accurately replicates its DNA and segregates it equally between the two daughter cells. One CDK, the CDK7 complex, cannot be so easily classified. CDK7 is both a CDK-activating kinase (CAK), which phosphorylates cell-cycle CDKs within the activation segment (T-loop), and a component of the general transcription factor TFIIH, which phosphorylates the C-terminal domain (CTD) of the largest subunit of Pol II. A proposed mode of CDK7 inhibition is the phosphorylation of cyclin H by CDK7 itself or by another kinase.
CDK7 has been observed as a prerequisite to S phase entry and mitosis. CDK7 is activated by the binding of cyclin H and its substrate specificity is altered by the binding of MAT1. The free form of the complex formed, CDK7-cycH-MAT1, operates as CDK-activating kinase (CAK). In vivo, CDK7 forms a stable complex with cyclin H and MAT1 only when its T-loop is phosphorylated on either Ser164 or Thr170 residues.
The T-loop
To be active, most CDKs require not only a cyclin partner but also phosphorylation at one particular site, which corresponds to Thr161 in human CDK1, and which is located within the so-called T-loop (or activation loop) of kinase subdomain VIII. CDK1, CDK2 and CDK4 all require T-loop phosphorylation for maximum activity.
The free form of CDK7-cycH-MAT1 phosphorylates the T-loops of CDK1, CDK2, CDK4 and CDK6. For all CDK substrates of CDK7, phosphorylation by CDK7 occurs following the binding of the substrate kinase to its associated cyclin. This two-step process has been observed in CDK2, where the association of CDK2 with cyclin A results in a conformational change that primes the catalytic site for binding of its ATP substrate and phosphorylation by CDK7 o |
https://en.wikipedia.org/wiki/Surfactant%20protein%20B | Surfactant protein B is an essential lipid-associated protein found in pulmonary surfactant. Without it, the lung would not be able to inflate after a deep breath out. It rearranges lipid molecules in the fluid lining the lung so that tiny air sacs in the lung, called alveoli, can more easily inflate.
Gene
SP-B is encoded by SFTPB, a single, 11425-nucleotide-long gene on chromosome 2. Mutations in this gene are the basis for several of the lung conditions mentioned above. Both frameshift mutations and several single nucleotide polymorphisms (SNPs) have been found correlated to a variety of lung conditions. A frameshift mutation responsible for congenital alveolar proteinosis (CAP) was identified by Kattan et al. Many SNPs have been identified in relation to lung conditions. They have been correlated to severe influenza, neonatal respiratory distress syndrome, mechanical ventilation necessity, and more.
Protein
Surfactant protein B (SP-B) is a small protein, weighing about 8 kDa. Proteins are composed of building blocks called amino acids, and SP-B is composed of 79 of them (valine, alanine, phenylalanine, leucine, isoleucine, and tryptophan being found in the highest levels). Nine of these carry a positive charge, and two carry a negative charge, leaving a protein with a net (total) charge of +7. In the body, two molecules of SP-B stick together and form what is called a homodimer. These are found embedded in membranes and other lipid structures; SP-B is highly hydrophobic, avoiding contact with water.
SP-B is the mature form of a large precursor protein called proSP-B. Synthesized in the endoplasmic reticulum of type II pneumocytes, proSP-B weighs approximately 40 kDa and is cut down to the size of mature SP-B in the golgi apparatus through a process called post-translational modification. ProSP-B is also created in another type of lung cell called a Club cell, but these cells are unable to edit proSP-B into SP-B.
SP-B is a saposin-like p |
https://en.wikipedia.org/wiki/Surfactant%20protein%20C | Surfactant protein C (SP-C), is one of the pulmonary surfactant proteins. In humans this is encoded by the SFTPC gene.
It is a membrane protein.
Structure
SFTPC is a 197-residue protein made up of two halves: a unique N-terminal propeptide domain and a C-terminal BRICHOS domain. The around 100-aa long propeptide domain actually contains not only the cleaved part, but also the mature peptide. It can be further broken down into a 23-aa helical transmembrane propeptide proper, the mature secreted SP-C (24-58), and a linker (59-89) that connects to the BRICHOS domain.
The propeptide of pulmonary surfactant C has an N-terminal alpha-helical segment whose suggested function was stabilization of the protein structure, since the mature peptide can irreversibly transform from its native alpha-helical structure to beta-sheet aggregates and form amyloid fibrils. The correct intracellular trafficking of proSP-C has also been reported to depend on the propeptide.
The structure of the BRICHOS domain has been solved. Mutations in this domain also lead to amyloid fibrils made up of the mature peptide, suggesting a chaperone activity.
Clinical significance
Mutations are associated with surfactant metabolism dysfunction type 2.
Humans and animals born lacking SP-C tend to develop progressive interstitial lung disease.
Recombinant SP-C is used in Venticute, an artificial lung surfactant.
A process to mass-produce an analogue called rSP-C33Le by fusion with spidroin has been described. |
https://en.wikipedia.org/wiki/Collectin-10 | Collectin-10, also known as collectin liver 1, is a collectin protein that in humans is encoded by the COLEC10 gene. Its structure is similar to mannan-binding lectin (MBL).
Collectin liver 1 (CL-L1) shows very similar carbohydrate selectivity to MBL. Two other discovered collectins are collectin placenta 1 (CL-P1) and collectin kidney 1 (CL-K1). CL-L1 is located on chromosome 8 at q23-24.1. Research has shown CL-L1 to be a serum protein. |
https://en.wikipedia.org/wiki/Expectation%20value%20%28quantum%20mechanics%29 | In quantum mechanics, the expectation value is the probabilistic expected value of the result (measurement) of an experiment. It can be thought of as an average of all the possible outcomes of a measurement as weighted by their likelihood, and as such it is not the most probable value of a measurement; indeed the expectation value may have zero probability of occurring (e.g. measurements which can only yield integer values may have a non-integer mean). It is a fundamental concept in all areas of quantum physics.
Operational definition
Consider an operator $A$. The expectation value is then $\langle A \rangle = \langle \psi \mid A \mid \psi \rangle$ in Dirac notation, with $|\psi\rangle$ a normalized state vector.
Formalism in quantum mechanics
In quantum theory, an experimental setup is described by the observable $A$ to be measured, and the state $\sigma$ of the system. The expectation value of $A$ in the state $\sigma$ is denoted as $\langle A \rangle_\sigma$.
Mathematically, $A$ is a self-adjoint operator on a Hilbert space. In the most commonly used case in quantum mechanics, $\sigma$ is a pure state, described by a normalized vector $\psi$ in the Hilbert space. The expectation value of $A$ in the state $\psi$ is defined as
$$\langle A \rangle_\psi = \langle \psi \mid A \mid \psi \rangle.$$
If dynamics is considered, either the vector or the operator is taken to be time-dependent, depending on whether the Schrödinger picture or Heisenberg picture is used. The evolution of the expectation value does not depend on this choice, however.
If $A$ has a complete set of eigenvectors $\phi_j$, with eigenvalues $a_j$, then $\langle A \rangle_\psi$ can be expressed as
$$\langle A \rangle_\psi = \sum_j a_j \, \left|\langle \phi_j \mid \psi \rangle\right|^2 .$$
This expression is similar to the arithmetic mean, and illustrates the physical meaning of the mathematical formalism: The eigenvalues are the possible outcomes of the experiment, and their corresponding coefficient is the probability that this outcome will occur; it is often called the transition probability.
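A small numerical sketch, using an illustrative operator and state chosen for the example, shows the direct Dirac-notation formula and the eigenvalue-weighted sum agreeing:

```python
import numpy as np

# Illustrative operator and state (chosen for this example, not from the article):
# <A> = <psi|A|psi> equals the eigenvalue-weighted sum  sum_j a_j |<phi_j|psi>|^2.

A = np.array([[1.0, 0.5],
              [0.5, -1.0]])                  # a self-adjoint operator on C^2
psi = np.array([2.0, 1.0j]) / np.sqrt(5.0)   # a normalized state vector

direct = np.vdot(psi, A @ psi).real          # <psi| A |psi>

eigvals, eigvecs = np.linalg.eigh(A)         # spectral decomposition of A
probs = np.abs(eigvecs.conj().T @ psi) ** 2  # transition probabilities |<phi_j|psi>|^2
weighted = np.sum(eigvals * probs)           # sum_j a_j |<phi_j|psi>|^2

print(direct, weighted)                      # the two values agree
assert np.isclose(direct, weighted)
```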
A particularly simple case arises when $A$ is a projection, and thus has only the eigenvalues 0 and 1. This physically corresponds to a "yes-no" type of experiment. In this case, the expectation value is the probability that the experiment resul |
https://en.wikipedia.org/wiki/HLT%20%28x86%20instruction%29 | In the x86 computer architecture, HLT (halt) is an assembly language instruction which halts the central processing unit (CPU) until the next external interrupt is fired. Interrupts are signals sent by hardware devices to the CPU alerting it that an event occurred to which it should react. For example, hardware timers send interrupts to the CPU at regular intervals.
Most operating systems execute a HLT instruction when there is no immediate work to be done, putting the processor into an idle state. In Windows NT, for example, this instruction is run in the "System Idle Process". On x86 processors, the opcode of HLT is 0xF4.
History on x86
All x86 processors from the 8086 onward had the HLT instruction, but it was not used by MS-DOS prior to 6.0 and was not specifically designed to reduce power consumption until the release of the Intel DX4 processor in 1994. MS-DOS 6.0 provided a POWER.EXE that could be installed in CONFIG.SYS and in Microsoft's tests it saved 5%. Some of the first 100 MHz DX chips had a buggy HLT state, prompting the developers of Linux to implement a "no-hlt" option for use when running on those chips, but this was fixed in later chips.
Intel has since introduced additional processor-yielding instructions. These include:
PAUSE in SSE2, intended for spin loops. Available to userspace (low-privilege rings).
MONITOR/MWAIT in SSE3, for thread synchronization.
TPAUSE (timed pause) and UMONITOR/UMWAIT (userspace monitor/mwait). Available to userspace.
Process
Almost every modern processor instruction set includes an instruction or sleep mode which halts the processor until more work needs to be done. In interrupt-driven processors, this instruction halts the CPU until an external interrupt is received. On most architectures, executing such an instruction allows the processor to significantly reduce its power usage and heat output, which is why it is commonly used instead of busy waiting for sleeping and idling. In most processors, halting (instead of looping) also reduces the latency |
https://en.wikipedia.org/wiki/Everglades%20virus | Everglades virus (EVEV) is an alphavirus included in the Venezuelan equine encephalitis virus complex. The virus circulates among rodents and vector mosquitoes and sometimes infects humans, causing a febrile illness with occasional neurological manifestations. Although it is said to be rare in humans, whether this is the case is still debated because of the possibility of underdiagnosis, as well as its potential role as an unrecognized cause of other illnesses. The virus is named after the Everglades, a region of subtropical wetlands in southern Florida. The virus is endemic to the U.S. state of Florida, where its geographic range mirrors that of the mosquito species Culex cedecei. The hispid cotton rat and cotton mouse are considered important reservoir hosts of Everglades virus. Most clinical cases of infection occur in and around the city of Miami. The concentration of clinical cases in certain parts of Florida reflects factors such as population density and proximity to the hosts and their ecosystem.
Signs and symptoms
Symptoms of infection include:
Enlarged, tender lymph nodes
Fever
Headache
Malaise
Myalgia
Pharyngitis
Transmission
The virus is transmitted by the bite of infected mosquitoes of the genus Culex, specifically Culex cedecei. |
https://en.wikipedia.org/wiki/Mooers%27s%20law | Mooers's law is a comment about the use of information retrieval systems made by the American computer scientist Calvin Mooers in 1959:
Original interpretation
Mooers argued that information is at risk of languishing unused due not only to the effort required to assimilate it but also to any implications of the information that may conflict with the user's prior information. In learning new information, a user may end up proving their work incorrect or irrelevant. Mooers argued that users prefer to remain in a state of safety in which new information is ignored in an attempt to avoid potential embarrassment or reprisal from supervisors.
Out-of-context interpretation
The more common interpretation of Mooers's law is similar to Zipf's principle of least effort. It emphasizes the amount of effort needed to use and understand an information retrieval system before the information seeker gives up; it is often paraphrased to increase the focus on the retrieval system:
In this interpretation, "painful and troublesome" comes from using the retrieval system.
See also
Availability heuristic
Cognitive dissonance
Confirmation bias
Satisficing |
https://en.wikipedia.org/wiki/We%20Are%20Our%20Mountains | We Are Our Mountains (, Menk' enk' mer leṙnerə) is a monument north of Stepanakert (Khankendi) in the disputed territory of Nagorno-Karabakh in Azerbaijan. The sculpture, completed in 1967 by Sargis Baghdasaryan, is widely regarded as a symbol of the Armenian heritage of Nagorno-Karabakh, and even Armenian identity as a whole. The monument is made from volcanic tuff and depicts an old man and woman hewn from rock, representing the mountain people of Karabakh. It is known colloquially as "tatik-papik" (տատիկ-պապիկ) in Armenian and "Dedo-Babo" (Դեդո-Բաբո) in the Karabakh dialect, which translates as "Grandmother and Grandfather". The sculpture is prominent in Artsakh's coat of arms.
Following the Azerbaijani offensive of September 19–20, 2023, Nagorno-Karabakh was dissolved and nearly all of its population fled to Armenia. On 29 September 2023, the day Azerbaijani forces took control of Stepanakert, Azerbaijani officials placed the flag of Azerbaijan on the monument.
Eurovision 2009 image controversy
During the Eurovision Song Contest 2009, We Are Our Mountains was included, among other local symbols, in the introductory "postcard" preceding the Armenian performance. Representatives from Azerbaijan complained to the European Broadcasting Union about the use of the monument in the Armenian intro, since the territory of Nagorno-Karabakh is de jure part of Azerbaijan. In response to the complaint, the image was edited out of the video in the finals. However, Armenia retaliated for the decision by having images of the monument on a video screen in the background, and on the back of the clipboard held by its spokesperson Sirusho.
In popular culture
This monument is featured in the artwork of the songs "Protect the Land" and "Genocidal Humanoidz" of the American band System of a Down to draw attention to the Second Nagorn |
https://en.wikipedia.org/wiki/Cattell%E2%80%93Horn%E2%80%93Carroll%20theory | The Cattell–Horn–Carroll theory (commonly abbreviated to CHC), is a psychological theory on the structure of human cognitive abilities. Based on the work of three psychologists, Raymond B. Cattell, John L. Horn and John B. Carroll, the Cattell–Horn–Carroll theory is regarded as an important theory in the study of human intelligence. Based on a large body of research, spanning over 70 years, Carroll's Three Stratum theory was developed using the psychometric approach, the objective measurement of individual differences in abilities, and the application of factor analysis, a statistical technique which uncovers relationships between variables and the underlying structure of concepts such as 'intelligence' (Keith & Reynolds, 2010). The psychometric approach has consistently facilitated the development of reliable and valid measurement tools and continues to dominate the field of intelligence research (Neisser, 1996).
The Cattell–Horn–Carroll theory is an integration of two previously established theoretical models of intelligence: the theory of fluid and crystallized intelligence (Gf-Gc) (Cattell, 1941; Horn 1965), and Carroll's three-stratum theory (1993), a hierarchical, three-stratum model of intelligence. Due to substantial similarities between the two theories they were amalgamated to form the Cattell–Horn–Carroll theory (Willis, 2011, p. 45). However, some researchers, including John Carroll, have questioned not only the need but also the empirical basis for the theory.
In the late 1990s the CHC model was expanded by McGrew, later revised with the help of Flanagan. Later extensions of the model are detailed in McGrew (2011) and Schneider and McGrew (2012). There are a fairly large number of distinct individual differences in cognitive ability, and CHC theory holds that the relationships among them can be derived by classifying them into three different strata: stratum I, "narrow" abilities; stratum II, "broad abilities"; and stratum III, consisting of a single
https://en.wikipedia.org/wiki/Mathematical%20Geosciences | Mathematical Geosciences (formerly Mathematical Geology) is a scientific journal published semi-quarterly by Springer Science+Business Media on behalf of the International Association for Mathematical Geosciences. It contains original papers in mathematical geosciences. The journal focuses on quantitative methods and studies of the Earth and its natural resources and environment. Its impact factor is 1.909. |
https://en.wikipedia.org/wiki/Binomial%20approximation | The binomial approximation is useful for approximately calculating powers of sums of 1 and a small number x. It states that
It is valid when and where and may be real or complex numbers.
The benefit of this approximation is that is converted from an exponent to a multiplicative factor. This can greatly simplify mathematical expressions (as in the example below) and is a common tool in physics.
The approximation can be proven several ways, and is closely related to the binomial theorem. By Bernoulli's inequality, the left-hand side of the approximation is greater than or equal to the right-hand side whenever and .
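A quick numerical check, with illustrative values of $x$ and $\alpha$, shows how rapidly the error shrinks as $x$ does:

```python
# Numerical illustration (values chosen only for demonstration):
# (1 + x)**alpha is well approximated by 1 + alpha*x when |x| and |alpha*x| are small.
alpha = 1.5
for x in (0.1, 0.01, 0.001):
    exact = (1 + x) ** alpha
    approx = 1 + alpha * x
    print(f"x={x:7.3f}  exact={exact:.6f}  approx={approx:.6f}  error={exact - approx:.2e}")
```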
Derivations
Using linear approximation
The function
$$f(x) = (1 + x)^{\alpha}$$
is a smooth function for x near 0. Thus, standard linear approximation tools from calculus apply: one has
$$f'(x) = \alpha (1 + x)^{\alpha - 1}$$
and so
$$f'(0) = \alpha.$$
Thus
$$f(x) \approx f(0) + f'(0)(x - 0) = 1 + \alpha x.$$
By Taylor's theorem, the error in this approximation is equal to $\frac{\alpha(\alpha - 1)(1 + \zeta)^{\alpha - 2}}{2} x^{2}$ for some value of $\zeta$ that lies between 0 and $x$. For example, if and , the error is at most . In little o notation, one can say that the error is $o(|x|)$, meaning that $\lim_{x \to 0} \frac{\mathrm{error}}{|x|} = 0$.
Using Taylor series
The function
$$f(x) = (1 + x)^{\alpha},$$
where $x$ and $\alpha$ may be real or complex, can be expressed as a Taylor series about the point zero:
$$(1 + x)^{\alpha} = 1 + \alpha x + \frac{\alpha(\alpha - 1)}{2!} x^{2} + \frac{\alpha(\alpha - 1)(\alpha - 2)}{3!} x^{3} + \cdots$$
If $|x| < 1$ and $|\alpha x| \ll 1$, then the terms in the series become progressively smaller and it can be truncated to
$$(1 + x)^{\alpha} \approx 1 + \alpha x.$$
This result from the binomial approximation can always be improved by keeping additional terms from the Taylor series above. This is especially important when starts to approach one, or when evaluating a more complex expression where the first two terms in the Taylor series cancel (see example).
Sometimes it is wrongly claimed that $|x| \ll 1$ is a sufficient condition for the binomial approximation. A simple counterexample is to let and . In this case but the binomial approximation yields . For small $|x|$ but large $|\alpha x|$, a better approximation is:
$$(1 + x)^{\alpha} \approx e^{\alpha x}.$$
Example
The binomial approximation for the square root, $\sqrt{1 + x} \approx 1 + \tfrac{x}{2}$, can be applied for the following expression,
where and are real but .
The mathematical form for the binomial approximation |
https://en.wikipedia.org/wiki/Principal%20axis%20theorem | In geometry and linear algebra, a principal axis is a certain line in a Euclidean space associated with an ellipsoid or hyperboloid, generalizing the major and minor axes of an ellipse or hyperbola. The principal axis theorem states that the principal axes are perpendicular, and gives a constructive procedure for finding them.
Mathematically, the principal axis theorem is a generalization of the method of completing the square from elementary algebra. In linear algebra and functional analysis, the principal axis theorem is a geometrical counterpart of the spectral theorem. It has applications to the statistics of principal components analysis and the singular value decomposition. In physics, the theorem is fundamental to the studies of angular momentum and birefringence.
Motivation
The equations in the Cartesian plane R2:
define, respectively, an ellipse and a hyperbola. In each case, the x and y axes are the principal axes. This is easily seen, given that there are no cross-terms involving products xy in either expression. However, the situation is more complicated for equations like
Here some method is required to determine whether this is an ellipse or a hyperbola. The basic observation is that if, by completing the square, the quadratic expression can be reduced to a sum of two squares then the equation defines an ellipse, whereas if it reduces to a difference of two squares then the equation represents a hyperbola:
Thus, in our example expression, the problem is how to absorb the coefficient of the cross-term 8xy into the functions u and v. Formally, this problem is similar to the problem of matrix diagonalization, where one tries to find a suitable coordinate system in which the matrix of a linear transformation is diagonal. The first step is to find a matrix in which the technique of diagonalization can be applied.
The trick is to write the quadratic form as
where the cross-term has been split into two equal parts. The matrix A in the above d |
https://en.wikipedia.org/wiki/CIP-Tool | CIP-Tool (Communicating Interacting Processes) is a software tool for the modelling and implementation of event-driven applications. It is especially relevant for the development of software components of embedded systems.
History
The underlying mathematical formalisms of CIP were first proposed by the physicist, Prof. Dr. Hugo Fierz. The tool was subsequently developed at the Swiss Federal Institute of Technology (Zurich) in a series of research projects during the 1990s. Development and distribution has since been transferred to a commercially operating spin-off company, CIP-Tool, based in Solothurn, Switzerland.
CIP Tool was taken over by Actifsource GmbH in the summer of 2011. Actifsource has integrated the CIP Tool into the Actifsource workbench.
Methodology
The CIP-model is basically a finite state machine, or more precisely, an extended finite state machine (processes can store and modify variables and can use these to enable or disable transitions).
In CIP, a desired system behaviour is broken down into distinct processes, each of which is a set of states interconnected by transitions. One state in every process is tagged as active state. This active status can be transferred to another state through the execution of a transition. Such transitions are triggered by events (from external sources, e.g. sensors) or in-pulses (from other processes). Transitions can in turn send one or several out-pulses (to other processes) or actions (to external receivers, e.g. effectors).
The CIP-model is sometimes confused with petri nets. This may be because to beginners, the notation looks similar. The similarities should not be over-stressed, however. For example, CIP allows only (and exactly) one active state per process and processes are neither started nor terminated during run-time.
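A minimal sketch of such a process, with invented state, event, and pulse names (illustrative only, not generated by CIP-Tool itself), might look like this:

```python
# Sketch of a CIP-style process: a set of states with exactly one active state,
# transitions triggered by events or in-pulses, stored variables that enable or
# disable transitions, and out-pulses/actions emitted when a transition fires.

class Process:
    def __init__(self):
        self.state = "IDLE"          # exactly one active state per process
        self.count = 0               # extended-state variable

    def handle(self, event):
        """Return the out-pulses/actions emitted by the fired transition."""
        if self.state == "IDLE" and event == "start":
            self.state = "RUNNING"
            return ["motor_on"]                      # action to an external effector
        if self.state == "RUNNING" and event == "tick":
            self.count += 1
            if self.count >= 3:                      # variable enables this transition
                self.state = "IDLE"
                self.count = 0
                return ["motor_off", "done_pulse"]   # action plus pulse to another process
            return []
        return []                                    # event ignored in this state

p = Process()
for ev in ["start", "tick", "tick", "tick"]:
    print(ev, "->", p.handle(ev), "state:", p.state)
```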
Code generation
CIP-Tool permits models to be automatically converted to executable code. This greatly facilitates testing, documentation and final implementation. Currently the lang |
https://en.wikipedia.org/wiki/John%20McConnell%20%28peace%20activist%29 | John McConnell (March 22, 1915 – October 20, 2012) was the founder and creator of Earth Day, and The Earth Society Foundation. He was known for designing the Earth Flag, pursuing causes relating to peace, religion, and science.
Early years
John McConnell was born on March 22, 1915, in Davis City, Iowa, United States. He was the son of a Pentecostal preacher and traveling doctor. His first interest in the Earth began in 1939 while partnering with Albert Nobell, a chemist, in the Nobell Research Laboratory in Los Angeles that built a factory for the manufacture of plastics. Realizing how much the manufacture of plastic polluted the Earth, his concern for ecology grew. Afterward, he was a lifetime believer in care of the environment, founded on his Christian beliefs. He stated that, leading into World War II, he believed that love and prayer could be more powerful than bombs.
On October 31, 1957, soon after the first successful Sputnik launch, McConnell wrote an editorial entitled, "Make Our Satellite A Symbol Of Hope", calling for peaceful cooperation in the exploration of Space with a visible "Star of Hope" satellite. This led him to create a "Star of Hope" organization to foster international cooperation in space.
Major actions and campaigns
Peace activism
In 1959 to pursue his dream of peace, John McConnell moved to California where he and his co-publisher, Erling Toness, founded the "Mountain View". Along with the "Mountain View", he organized a campaign in San Francisco in 1962 called "Meals for Millions". It was used to feed thousands of Hong Kong refugees. In 1963, after the "Meals for Millions" campaign, McConnell worked on another campaign called "Minute for Peace" for seven years following "Meals for Millions". He began his "Minute for Peace" campaign with a broadcast on December 22, 1963, ending the mourning period for the late president, John F. Kennedy. On June 26, 1965, McConnell spoke at the National Education Association Convention in Madison Squa |
https://en.wikipedia.org/wiki/Architectural%20animation | Architectural animation is a short architectural movie created on a computer. A computer-generated building is created along with landscaping and sometimes moving people and vehicles. Unlike an architectural rendering, which is a single image from a single point of view, an architectural animation is generally a series of hundreds or even thousands of still images played in sequence in order to produce a video. When these images are assembled and played back, they produce a movie effect much like a real movie camera except all images are artificially created by computer. It is possible to add a computer-created environment around the building to enhance reality and to better convey its relationship to the surrounding area; this can all be done before the project is built, giving designers and stakeholders a realistic view of the completed project. Architectural renderings are often used along with architectural animation.
History
The first use of a 3D hidden-line-removal movie depicting an architectural street scene was in 1976 by Jonathan Ingram. It shows the planned Crown Courts in Hobart and was used for planning approval. The buildings exist today.
Usage
Commercial demand for computer-generated rendering is on the rise. There is a large growing demand of architectural visualization services worldwide. This has mainly been accelerated by the advancements in computing technology and allowing architectural animations to become cheaper. There are numerous real-time rendering engines that differ from the traditional method of multiple stitched still images together. This allows architectural animation to be far cheaper and less labor-intensive. However, it usually doesn't have the same photo realism. Typically, members of the AIA (American Institute of Architects) and NAHB (National Association of Home Builders) prefer to use 3D animations and single renderings for their customers before starting on a construction project. These professionals often find |
https://en.wikipedia.org/wiki/Health%20and%20Ageing | Health and Ageing is a research programme set up by the Geneva Association, also known as the International Association for the Study of Insurance Economics.
The Geneva Association Research Programme on Health and Ageing seeks to bring together facts, figures and analyses linked to issues in health. The key is to test new and promising ideas, linking them to related studies and initiatives in the health sector and trying to find solutions for the future financing of healthcare.
Major concerns are generally directed at the rising health costs resulting from technological advances and the changing demographic structure where the population over 60 largely exceeds other sectors. Importance is placed on two major issues:
The change in demographic structures leading to the perceived "ageing society"
The technological advances, which are perceived as resulting in increasing health costs
It is important to view these issues from the proper perspective. We are not ageing as a society but benefiting from an extended period of good health, which is largely a consequence of technological advances. It is not the increased spending on health that should be the concern but what it is spent on. It is crucial that spending is targeted and appropriately controlled with respect to the intended aim. Demographic changes and technological progress are straining government finances: the proportion of people in work compared with those already retired will decrease, shrinking the tax base. Financing the care of a growing number of elderly people for increasingly long periods from an ever-shrinking tax base is therefore a major difficulty. Faced with growing health expenditure, entire healthcare systems have changed. The main trend in most developed countries is a creeping decentralization combined with a shift in funding emphasis from public to a mix of public and private.
As the life cycle is getting longer, people have the op |
https://en.wikipedia.org/wiki/Cat%20eye%20syndrome | Cat eye syndrome (CES) or Schmid–Fraccaro syndrome is a rare condition caused by an abnormal extra chromosome, i.e. a small supernumerary marker chromosome. This chromosome consists of the entire short arm and a small section of the long arm of chromosome 22. In consequence, individuals with the cat eye syndrome have three (trisomic) or four (tetrasomic) copies of the genetic material contained in the abnormal chromosome instead of the normal two copies. The prognosis for patients with CES varies depending on the severity of the condition and their associated signs and symptoms, especially when heart or kidney abnormalities are seen.
Signs and symptoms
Unilateral or bilateral iris coloboma (absence of tissue from the colored part of the eyes)
Preauricular pits/tags (small depressions/growths of skin on the outer ears)
Anal atresia (abnormal obstruction of the anus)
Downward-slanting palpebral fissures (openings between the upper and lower eyelids)
Cleft palate
Kidney problems (missing, extra, or underdeveloped kidneys)
Short stature
Scoliosis/skeletal problems
Cardiac defects (such as TAPVR)
Micrognathia (smaller jaw)
Hernias
Biliary atresia
Rarer malformations can affect almost any organ
Intellectual disability – many are intellectually normal; about 30% of CES patients have moderately impaired mental development, although severe intellectual disability is rare.
The term "cat eye" syndrome was coined because of the particular appearance of the vertical colobomas in the eyes of some patients, but over half of the CES patients in the literature do not present with this trait.
Genetics
The small supernumerary marker chromosome (sSMC) in CES usually arises spontaneously. It may be hereditary and parents may be mosaic for the marker chromosome, but show no phenotypic symptoms of the syndrome. This sSMC may be small, large, or ring-shaped, and typically includes 2 Mb, i.e. 2 million DNA base pairs, termed the CES critical region, located on its q arm(s |
https://en.wikipedia.org/wiki/Skew-Hamiltonian%20matrix | In linear algebra, skew-Hamiltonian matrices are special matrices which correspond to skew-symmetric bilinear forms on a symplectic vector space.
Let V be a vector space, equipped with a symplectic form ω. Such a space must be even-dimensional. A linear map A : V → V is called a skew-Hamiltonian operator with respect to ω if the form x, y ↦ ω(A(x), y) is skew-symmetric.
Choose a basis e1, ..., e2n in V, such that ω is written as ω(x, y) = x^T J y. Then a linear operator is skew-Hamiltonian with respect to ω if and only if its matrix A satisfies A^T J = J A, where J is the skew-symmetric matrix
J = [[0, In], [−In, 0]]
and In is the identity matrix. Such matrices are called skew-Hamiltonian.
The square of a Hamiltonian matrix is skew-Hamiltonian. The converse is also true: every skew-Hamiltonian matrix can be obtained as the square of a Hamiltonian matrix.
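A minimal numerical sketch of these definitions in Python/NumPy (not part of the original article; the dimension, random seed, and the construction of a Hamiltonian matrix as A = JS with S symmetric are our own illustrative choices):

import numpy as np

n = 3                                      # half-dimension; matrices are 2n x 2n
rng = np.random.default_rng(0)

# Standard symplectic form J = [[0, In], [-In, 0]]
Z, I = np.zeros((n, n)), np.eye(n)
J = np.block([[Z, I], [-I, Z]])

# One way to build a Hamiltonian matrix: A = J S with S symmetric, so that A^T J = -J A
S = rng.normal(size=(2 * n, 2 * n))
S = (S + S.T) / 2
A = J @ S

W = A @ A                                  # square of a Hamiltonian matrix

print(np.allclose(A.T @ J, -J @ A))        # True: A is Hamiltonian
print(np.allclose(W.T @ J, J @ W))         # True: W satisfies the skew-Hamiltonian condition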
Notes
Matrices
Linear algebra |
https://en.wikipedia.org/wiki/OP-TEC | The National Center for Optics and Photonics Education, known as OP-TEC for short, was a joint effort by educational institutions and other groups to develop curriculum materials for photonics. Headquartered in Waco, Texas, it was funded by the National Science Foundation.
OP-TEC held workshops at various institutions around the United States to promote the use of optics and photonics in secondary and post-secondary curricula. |
https://en.wikipedia.org/wiki/Kitchen%20Bouquet | Kitchen Bouquet is a browning and seasoning sauce primarily composed of caramel with vegetable flavorings. It has been used as a flavoring addition for gravies and other foods since the late 19th century. It is currently produced by the Hidden Valley or HV Food Products Company.
Kitchen Bouquet was manufactured in the late 19th and early 20th centuries by the Palisade Manufacturing Company of West Hoboken, New Jersey. An advertisement in a 1903 edition of The Boston Cooking School Magazine indicated that Kitchen Bouquet, then known as "Tournade's Kitchen Bouquet," had been "a favorite for 30 years." It was one of the products featured in the United States exhibit at the Paris Exposition of 1889.
Its ingredients include caramel, vegetable base (water, carrots, onions, celery, parsnips, turnips, salt, parsley, spices), sodium benzoate and sulfiting agents.
Kitchen Bouquet is also used by food stylists for a variety of appearance effects, including 'coffee' made by adding a few drops to a cup of water
and lending a browned appearance to poultry.
See also
Caramel color
Food coloring
Gravy |
https://en.wikipedia.org/wiki/Orphan%20source | An orphan source is a self-contained radioactive source that is no longer under regulatory control.
The United States Nuclear Regulatory Commission defines an orphan source more exactly as:
...a sealed source of radioactive material contained in a small volume—but not radioactively contaminated soils and bulk metals—in any one or more of the following conditions
In an uncontrolled condition that requires removal to protect public health and safety from a radiological threat
Controlled or uncontrolled, but for which a responsible party cannot be readily identified
Controlled, but the material's continued security cannot be assured. If held by a licensee, the licensee has few or no options for, or is incapable of providing for, the safe disposition of the material
In the possession of a person, not licensed to possess the material, who did not seek to possess the material
In the possession of a State radiological protection program for the sole purpose of mitigating a radiological threat because the orphan source is in one of the conditions described in one of the first four bullets and for which the State does not have a means to provide for the material's appropriate disposition
Most known orphan sources were, generally, small radioactive sources produced legitimately under governmental regulation and put into service for radiography, generating electricity in radioisotope thermoelectric generators, medical radiotherapy or irradiation. These sources were then "abandoned, lost, misplaced or stolen" and so no longer subject to proper regulation.
See also
List of orphan source incidents |
https://en.wikipedia.org/wiki/Usage%20share%20of%20operating%20systems | The usage share of operating systems is the percentage of computing devices that run each operating system (OS) at any particular time. All such figures are necessarily estimates because data about operating system share is difficult to obtain. There are few reliable primary sources and no agreed methodologies for its collection. Operating systems are used in the vast majority of computers, from embedded devices to supercomputers.
Most devices access the web, so web access statistics can be used to estimate the usage share of operating systems across device types, as well as the usage share of operating systems within types.
Android, an operating system using the Linux kernel, is the world's most-used operating system when judged by web use. It has 42% of the global market, followed by Windows with 28%, iOS with 17%, macOS with 7%, ChromeOS with 1.3%, and desktop Linux (also using the Linux kernel) at 1.2%. These numbers do not include embedded devices or game consoles.
For smartphones and other pocket-sized devices, Android dominates with 71% market share, and Apple's iOS has 28%.
For desktop and laptop computers, Microsoft's Windows is the most used at 69%, followed by Apple's macOS at 17%, and Google's ChromeOS at 3.2% (in the US up to 8.0%), and desktop Linux at 2.9%. In addition, 5% is attributed to "unknown" operating systems - which are likely forms of BSD or obscure varieties of Linux.
For tablets, Apple's iPadOS (a variant of iOS) has 52% share and Android has 48% worldwide (though Android is used more in the vast majority of countries, and has occasionally measured even or ahead, at up to 51.5% globally).
For the above devices, smartphones and other pocket-sized devices make up 58%, desktops and laptops 40%, and tablets 2.0%. Smartphones have the most use in virtually all countries, including in the US at 51%, with PC operating systems (including Windows) down to 46%.
Linux has completely dominated the supercomputer field since 2017, with all of t |
https://en.wikipedia.org/wiki/Immunofixation | Immunofixation permits the detection and typing of monoclonal antibodies or immunoglobulins in serum or urine. It is of great importance for the diagnosis and monitoring of certain blood related diseases such as myeloma.
Principle
The method detects by precipitation: when a soluble antigen (Ag) is brought in contact with the corresponding antibody, precipitation occurs, which may be visible with the naked eye or microscope.
Immunofixation first separates antibodies in a mixture as a function of their specific electrophoretic mobility. For the purpose of identification, antisera are used that are specific for the targeted antibodies.
Specifically, immunofixation allows the detection of monoclonal antibodies representative of diseases such as myeloma or Waldenström macroglobulinemia.
Technique
The technique consists of depositing a sample of serum (or urine that has been concentrated beforehand) on a gel. After application of an electric current that separates the proteins according to their size, antibodies specific for each type of immunoglobulin are laid upon the gel. Bands of varying width thus appear on the gel, each corresponding to a different immunoglobulin.
Immunofixation, like immunoelectrophoresis, takes place in two steps:
The first step is identical for both techniques. It consists of depositing the immunoglobulins contained in the serum or urine on a gel and then separating them according to their electrophoretic mobility by making them migrate under the effect of an electric field. This migration depends on the mass and charge of the antigen. Once the immunoglobulins are separated, the second step can begin.
The second step depends on the technique used. Immunofixation requires electrophoresis to migrate serum proteins in replicate. Then, specific anti-immunoglobulin antisera are used to treat each replicate. For this, the antisera are not placed in a channel, as in electrophoresis, but they are added individually |
https://en.wikipedia.org/wiki/Image%20spam | Image-based spam, or image spam, is a kind of email spam where the textual spam message is embedded into images, that are then attached to spam emails. Since most of the email clients will display the image file directly to the user, the spam message is conveyed as soon as the email is opened (there is no need to further open the attached image file).
Technique
The goal of image spam is to circumvent the analysis of the email’s textual content performed by most spam filters (e.g., SpamAssassin, RadicalSpam, Bogofilter, SpamBayes). For the same reason, together with the attached image, spammers often add some “bogus” text to the email, namely words that are likely to appear in legitimate emails and not in spam.
The earlier image spam emails contained spam images in which the text was clean and easily readable, as shown in Fig. 1.
Detection
Consequently, optical character recognition tools were used to extract the text embedded into spam images, which could be then processed together with the text in the email’s body by the spam filter, or, more generally, by more sophisticated text categorization techniques.
Further, signatures (e.g., MD5 hashes) were also generated to easily detect and block already known spam images.
Spammers in turn reacted by applying some obfuscation techniques to spam images, similarly to CAPTCHAs, both to prevent the embedded text from being read by OCR tools, and to mislead signature-based detection. Some examples are shown in Fig. 2.
This raised the issue of improving image spam detection using computer vision and pattern recognition techniques.
In particular, several authors investigated the possibility of recognizing image spam with obfuscated images by using generic low-level image features (like number of colours, prevalent colour coverage, image aspect ratio, text area), image metadata, etc. (see for a comprehensive survey).
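A rough sketch of how a few such low-level features could be computed in Python with the Pillow library (purely illustrative; the function name and feature set are ours, and real detectors combine many more cues):

from PIL import Image

def low_level_features(path):
    # Open the attached image and normalize it to RGB
    img = Image.open(path).convert("RGB")
    w, h = img.size
    n_pixels = w * h
    # (count, colour) pairs for every distinct colour in the image
    colors = img.getcolors(maxcolors=n_pixels)
    most_common = max(count for count, _ in colors)
    return {
        "aspect_ratio": w / h,                          # image aspect ratio
        "n_colors": len(colors),                        # number of distinct colours
        "prevalent_color_coverage": most_common / n_pixels,
    }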
Notably, some authors also tried detecting the presence of text in at |
https://en.wikipedia.org/wiki/Pleistocene%20rewilding | Pleistocene rewilding is the advocacy of the reintroduction of extant Pleistocene megafauna, or the close ecological equivalents of extinct megafauna. It is an extension of the conservation practice of rewilding, which aims to restore functioning, self-sustaining ecosystems through practices that may include species reintroductions.
Towards the end of the Pleistocene epoch (roughly 13,000 to 10,000 years ago), nearly all megafauna of Eurasia, Australia, and North and South America dwindled towards extinction, in what has been referred to as the Quaternary extinction event. With the loss of large herbivores and predator species, niches important for ecosystem functioning were left unoccupied. In the words of the biologist Tim Flannery, "ever since the extinction of the megafauna 13,000 years ago, the continent has had a seriously unbalanced fauna". This means, for example, that the managers of national parks in North America have to resort to culling to keep the population of ungulates under control.
Paul S. Martin (originator of the Pleistocene overkill hypothesis) states that present ecological communities in North America do not function appropriately in the absence of megafauna, because much of the native flora and fauna evolved under the influence of large mammals.
Ecological and evolutionary implications
Research shows that species interactions play a pivotal role in conservation efforts. Communities where species evolved in response to Pleistocene megafauna (but now lack large mammals) may be in danger of collapse. Most living megafauna are threatened or endangered; extant megafauna have a significant impact on the communities they occupy, which supports the idea that communities evolved in response to large mammals. Pleistocene rewilding could "serve as additional refugia to help preserve that evolutionary potential" of megafauna. Reintroducing megafauna to North America could preserve current megafauna, while filling ecological niches that have been vacant s |
https://en.wikipedia.org/wiki/La%20Abad%C3%ADa%20del%20Crimen | La abadía del crimen (The Abbey of Crime) is a video game written by Paco Menéndez with graphics made by Juan Delcán and published in 1987 by Opera Soft. It was conceived as a version of Umberto Eco's 1980 book The Name of the Rose. Paco Menéndez and Opera Soft were unable to secure the rights for the name, so the game was released as La abadía del crimen. "The Abbey of the Crime" was the working title of the novel The Name of the Rose.
This game is an adventure with isometric graphics. A Franciscan friar, William of Occam (William of Baskerville in the book) and his young novice Adso have to discover the perpetrator of a series of murders in a medieval Italian abbey.
Gameplay
The player controls the movement of the friar Fra William (mistakenly described as a monk in the user manual). The player can also control the movement of the novice Adso within the same screen as Fra William; if the key for controlling the novice is not pressed, he follows Fra William most of the time. The game features other characters representing the monks of the abbey, who behave according to programmed artificial intelligence, moving throughout the map of the abbey and delivering dialogue displayed as text scrolling along the lower part of the screen.
An extensive map of the abbey is represented in a large series of screens with 3D isometric graphics. A series of objects has to be collected in order to complete the game successfully. The action takes place over seven days, subdivided into canonical hours. The time (day and current hour) is indicated at the bottom left of the screen.
The game starts with the abbot welcoming Fra William and explaining that a monk has disappeared. He also explains to Fra William that he is obligated to obey the orders of the abbot and the rules of the monastery, attend religious services and meals, and stay in his cell at night while the investigation of the crimes is pursued. During the game, the novice Ad |
https://en.wikipedia.org/wiki/Spin%E2%80%93spin%20relaxation | In physics, spin–spin relaxation is the mechanism by which Mxy, the transverse component of the magnetization vector, exponentially decays towards its equilibrium value in nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI). It is characterized by the spin–spin relaxation time, known as T2, a time constant characterizing the signal decay.
It is named in contrast to T1, the spin–lattice relaxation time. It is the time it takes for the magnetic resonance signal to irreversibly decay to 37% (1/e) of its initial value after its generation by tipping the longitudinal magnetization towards the magnetic transverse plane. Hence the relation
Mxy(t) = Mxy(0) · e^(−t/T2).
T2 relaxation generally proceeds more rapidly than T1 recovery, and different samples and different biological tissues have different T2 values. For example, fluids have the longest T2 (on the order of seconds for protons), and water-based tissues are in the 40–200 ms range, while fat-based tissues are in the 10–100 ms range. Amorphous solids have T2 values in the range of milliseconds, while the transverse magnetization of crystalline samples decays in around 1/20 ms.
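As a small illustration (not from the article), the remaining fraction of the transverse signal after a given time follows directly from the decay law above; the T2 values below are arbitrary picks from the ranges just quoted:

import numpy as np

def transverse_fraction(t_ms, T2_ms):
    # Fraction of the initial transverse magnetization left after t, per Mxy(t) = Mxy(0)·exp(-t/T2)
    return np.exp(-t_ms / T2_ms)

# Illustrative T2 values (ms) within the ranges mentioned above
for tissue, T2 in [("fluid", 2000.0), ("water-based tissue", 100.0), ("fat-based tissue", 50.0)]:
    print(tissue, round(float(transverse_fraction(50.0, T2)), 3))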
Origin
When excited nuclear spins—i.e., those lying partially in the transverse plane—interact with each other by sampling local magnetic field inhomogeneities on the micro- and nanoscales, their respective accumulated phases deviate from expected values. While the slow- or non-varying component of this deviation is reversible, some net signal will inevitably be lost due to short-lived interactions such as collisions and random processes such as diffusion through heterogeneous space.
T2 decay does not occur due to the tilting of the magnetization vector away from the transverse plane. Rather, it is observed due to the interactions of an ensemble of spins dephasing from each other. Unlike spin-lattice relaxation, considering spin-spin relaxation using only a single isochromat is trivial and not informative.
Determining parameters
Like spin-lattice relaxation, spin- |
https://en.wikipedia.org/wiki/Spin%E2%80%93lattice%20relaxation | During nuclear magnetic resonance observations, spin–lattice relaxation is the mechanism by which the longitudinal component of the total nuclear magnetic moment vector (parallel to the constant magnetic field) exponentially relaxes from a higher energy, non-equilibrium state to thermodynamic equilibrium with its surroundings (the "lattice"). It is characterized by the spin–lattice relaxation time, a time constant known as T1.
There is a different parameter, T2, the spin-spin relaxation time, which concerns the exponential relaxation of the transverse component of the nuclear magnetization vector (perpendicular to the external magnetic field). Measuring the variation of T1 and T2 in different materials is the basis for some magnetic resonance imaging techniques.
Nuclear physics
T1 characterizes the rate at which the longitudinal Mz component of the magnetization vector recovers exponentially towards its thermodynamic equilibrium, according to the equation
Mz(t) = Mz,eq − [Mz,eq − Mz(0)] · e^(−t/T1).
Or, for the specific case that Mz(0) = 0,
Mz(t) = Mz,eq · (1 − e^(−t/T1)).
It is thus the time it takes for the longitudinal magnetization to recover approximately 63% [1-(1/e)] of its initial value after being flipped into the magnetic transverse plane by a 90° radiofrequency pulse.
Nuclei are contained within a molecular structure, and are in constant vibrational and rotational motion, creating a complex magnetic field. The magnetic field caused by thermal motion of nuclei within the lattice is called the lattice field. The lattice field of a nucleus in a lower energy state can interact with nuclei in a higher energy state, causing the energy of the higher energy state to distribute itself between the two nuclei. Therefore, the energy gained by nuclei from the RF pulse is dissipated as increased vibration and rotation within the lattice, which can slightly increase the temperature of the sample. The name spin-lattice relaxation refers to the process in which the spins give the energy they obtained from the RF pulse back to the surrounding lattice, thereby restorin |
https://en.wikipedia.org/wiki/Network-centric%20organization | A network-centric organization is a network governance pattern which empowers knowledge workers to create and leverage information to increase competitive advantage through the collaboration of small and agile self-directed teams. It is emerging in many progressive 21st century enterprises. This implies new ways of working, with consequences for the enterprise’s infrastructure, processes, people and culture.
Overview
With a network-centric configuration, knowledge workers are able to create and leverage information to increase competitive advantage through the collaboration of small and agile self-directed teams. For this, the organizational culture needs to change from one solely determined by a single form of organizing (e.g., hierarchy) to an adaptive hybrid enabling multiple forms of organizing within the same organization. The nature of the work in an area best determines how its conduct is organized, and the networked mediation of work activities affords interoperability among differently organized areas of work.
A network-centric organization is both a sensible response to a complex environment and an enactor of sensibility on that environment. The business climate of the new millennium is characterized by profound and continuous changes due to globalization, exponential leaps in technological capabilities, and other market forces. Rapid developments of Information and Communication Technologies (ICT) are driving and supporting the change from the industrial to the information age.
In this world of rapid change and uncertainty, organizations need to continually renew, reinvent and reinvigorate themselves in order to respond creatively. The network-centric approach aims to tap into the hidden resources of knowledge workers supported and enabled by ICT, in particular the social technologies associated with Web 2.0 and Enterprise 2.0. Essentially though, a network-centric organization is more about people and culture than technology. A useful surve |
https://en.wikipedia.org/wiki/Species%20translocation | Translocation in wildlife conservation is the capture, transport and release or introduction of species, habitats or other ecological material (such as soil) from one location to another. It is commonly related to species relocation which encompasses "moving an individual animal (or family group) from one location within its home range to another location within the same home range." Both contrast with reintroduction, a term which is generally used to denote the introduction into the wild of species from captive stock. The International Union for the Conservation of Nature (IUCN) catalogues translocation projects for threatened species around the globe.
Overview
Translocation can be an effective management strategy and an important topic in conservation biology, but despite its popularity, translocation is a high-cost endeavor with a history of failures. It may decrease the risk of extinction by increasing the range of a species, augmenting the numbers in a critical population, or establishing new populations. Translocation may also improve the level of biodiversity in the ecosystem.
Translocation may be expensive and is often subject to public scrutiny, particularly when the species involved is charismatic or perceived as dangerous (for example wolf reintroduction). Translocation as a tool is used to reduce the risk of a catastrophe to a species with a single population, to improve genetic heterogeneity of separated populations of a species, to aid the natural recovery of a species or re-establish a species where barriers might prevent it from doing so naturally. It is also used to move ecological features out of the way of development.
Several critically endangered plant species in southwestern Western Australia have either been considered for translocation or trialled. Grevillea scapigera is one such case, threatened by rabbits, dieback and degraded habitat. The rarest marsupial in the world, Gilbert's potoroo, has been successfully translocated to remo |
https://en.wikipedia.org/wiki/Extinction%20threshold | Extinction threshold is a term used in conservation biology to describe the point at which a species, population or metapopulation experiences an abrupt change in density or number because of an important parameter, such as habitat loss. Below this critical value a species, population, or metapopulation will go extinct, though this may take a long time for species just below it, a phenomenon known as extinction debt.
Extinction thresholds are important to conservation biologists when studying a species in a population or metapopulation context because the colonization rate must be larger than the extinction rate, otherwise the entire entity will go extinct once it reaches the threshold.
Extinction thresholds are realized under a number of circumstances and the point in modeling them is to define the conditions that lead a population to extinction. Modeling extinction thresholds can explain the relationship between extinction threshold and habitat loss and habitat fragmentation.
Mathematical models
Metapopulation-type models are used to predict extinction thresholds. The classic metapopulation model is the Levins Model, which is the model of metapopulation dynamics established by Richard Levins in the 1960s. It was used to evaluate patch occupancy in a large network of patches. This model was extended in the 1980s by Russell Lande to include habitat occupancy. This mathematical model is used to infer the extinction values and important population densities. These mathematical models are primarily used to study extinction thresholds because of the difficulty in understanding extinction processes through empirical methods and the current lack of research on this subject. When determining an extinction threshold there are two types of models that can be used: deterministic and stochastic metapopulation models.
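As a hedged illustration of the kind of model described above (this is one standard textbook form of the Levins model extended with a destroyed habitat fraction D, not necessarily the exact equations of the studies cited; parameter values are arbitrary):

def levins_equilibrium(c, e, D):
    # Equilibrium patch occupancy p* of dp/dt = c*p*(1 - D - p) - e*p,
    # which is 1 - D - e/c when positive and 0 (extinction) otherwise.
    return max(0.0, 1.0 - D - e / c)

c, e = 1.0, 0.4
threshold = 1.0 - e / c    # habitat destruction beyond this drives the metapopulation extinct
for D in (0.0, 0.3, 0.6, 0.7):
    print(D, levins_equilibrium(c, e, D))
# Occupancy reaches zero once D exceeds the threshold of 0.6.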
Deterministic
Deterministic metapopulation models assume that there are an infinite number of habitat patches available a |
https://en.wikipedia.org/wiki/%C3%80%20la%20zingara | In French cuisine, à la zingara (lit. "gypsy style"), sometimes spelled as à la singara, is a garnish or sauce consisting of chopped ham, tongue, mushrooms and truffles combined with tomato sauce, tarragon and sometimes madeira. Additional ingredients may include white wine, cayenne pepper, lemon juice and orange rind. The sauce is prepared by cooking the ingredients until the mixture reduces and thickens. This garnish is served with meat such as veal, poultry and sometimes eggs.
Gypsy sauce (German: Zigeunersauce) may have originated from à la zingara. Gypsy sauce is prepared using many of the same ingredients as à la zingara. Simpler versions of gypsy sauce, including commercial varieties, typically use a lesser amount of ingredients, such as tomato paste, Hungarian paprika, bell peppers and sometimes onion.
À la zingara
À la zingara has sometimes been referred to as singara and zingara sauce. Conversely, à la zingara has also been referred to as separate from zingara sauce, such as in the 2009 book Dictionary of Food authored by Charles Sinclair and published by Bloomsbury Publishing, which has separate entries for à la zingara and zingara sauce, referring to à la zingara as "France In the gypsy style, i.e. with ham, tongue, mushrooms and tomatoes" and zingara sauce as "France A sauce for veal and poultry made to a variety of recipes and little used."
An 1869 recipe for blonde veal cutlets with ham à la zingara uses espagnole sauce and ham. The espagnole sauce is cooked with the veal, and then later the fat is skimmed from the sauce, which is then run through a sieve, after which it is served with the dish. An 1858 recipe for veal cutlets à la zingara is similar, with the addition of mushrooms and truffles in the center of the dish surrounding the veal and ham. After the meats are cooked and plated, The espagnole sauce is cooked in the pan the veal was cooked in, lemon juice and cayenne pepper are added, and then the sauce is poured over the cutlets.
Gypsy sa |
https://en.wikipedia.org/wiki/Levy%E2%80%93Mises%20equations | The Levi–Mises equations (also called flow rules) describe the relationship between stress and strain for an ideal plastic solid where the elastic strains are negligible.
The generalized Levy–Mises equation can be written as:
Materials science
Continuum mechanics
Solid mechanics |
https://en.wikipedia.org/wiki/Panzootic | A panzootic (from Greek παν all + ζόιον animal) is an epizootic (an outbreak of an infectious disease of animals) that spreads across a large region (for example a continent), or even worldwide. The equivalent in human populations is called a pandemic.
A panzootic can start when three conditions have been met:
the emergence of a disease new to the population.
the agent infects a species and causes serious illness.
the agent spreads easily and sustainably among animals.
A disease or condition is not a panzootic merely because it is widespread or kills a large number of animals; it must also be infectious. For example, cancer is responsible for a large number of deaths but is not considered a panzootic because the disease is, generally speaking, not infectious. Unlike an epizootic, a panzootic covers all or nearly all species over a large surface area (e.g. rabies, anthrax). Typically an enzootic or an epizootic, or their cause, may act as a potential preparatory factor.
Causes of Spread and Environmental Influences
Contagion and infection play by far the biggest role in the dissemination and spread of epizootic and panzootic diseases. These include virulent (e.g. cattle plague), septic (which can be caused by a change in food quality), parasitic (e.g. malaria), and miasmatic infections (e.g. typhoid fever). Many claim that an accidental morbific cause may infect a great number of animals and then cease activity after a prolonged time period.
Certain factors come into play in the spread of certain panzootic diseases, as can be seen with Batrachochytrium dendrobatidis. This infection seems to be sensitive to external conditions, particularly the environment's temperature and moisture. These factors lead to limitations on where the disease can thrive, acting almost as its 'climate niche'.
Examples
Persistence of H5N1 Avian Influenza
Influenza A virus subtype H5N1, the highly pathogenic strain of influenza, was first detected in the goose population of Guangdon |
https://en.wikipedia.org/wiki/Two-dimensionalism | Two-dimensionalism is an approach to semantics in analytic philosophy. It is a theory of how to determine the sense and reference of a word and the truth-value of a sentence. It is intended to resolve the puzzle: How is it possible to discover empirically that a necessary truth is true? Two-dimensionalism provides an analysis of the semantics of words and sentences that makes sense of this possibility. The theory was first developed by Robert Stalnaker, but it has been advocated by numerous philosophers since, including David Chalmers.
Two-dimensional semantic analysis
Any given sentence, for example, the words,
"Water is H2O"
is taken to express two distinct propositions, often referred to as a primary intension and a secondary intension, which together compose its meaning.
The primary intension of a word or sentence is its sense, i.e., the idea or method by which we find its referent. The primary intension of "water" might be a description, such as watery stuff. The thing picked out by the primary intension of "water" could have been otherwise. For example, on some other world where the inhabitants take "water" to mean watery stuff, but where the chemical make-up of watery stuff is not H2O, it is not the case that water is H2O for that world.
The secondary intension of "water" is whatever thing "water" happens to pick out in this world, whatever that world happens to be. So, if we assign "water" the primary intension watery stuff, then the secondary intension of "water" is H2O, since H2O is watery stuff in this world. The secondary intension of "water" in our world is H2O, which is H2O in every world because unlike watery stuff it is impossible for H2O to be other than H2O. When considered according to its secondary intension, "Water is H2O" is true in every world.
Impact
If two-dimensionalism is workable it solves some very important problems in the philosophy of language. Saul Kripke has argued that "Water is H2O" is an example of a necessary truth whi |
https://en.wikipedia.org/wiki/Single-subject%20research | Single-subject research is a group of research methods that are used extensively in the experimental analysis of behavior and applied behavior analysis with both human and non-human participants. This research strategy focuses on one participant and tracks their progress in the research topic over a period of time. Single-subject research allows researchers to track changes in an individual over a large stretch of time instead of observing different people at different stages. This type of research can provide critical data in several fields, specifically psychology. It is most commonly used in experimental and applied analysis of behaviors. This research has been heavily debated over the years. Some believe that this research method is not effective at all while others praise the data that can be collected from it. Principal methods in this type of research are: A-B-A-B designs, Multi-element designs, Multiple Baseline designs, Repeated acquisition designs, Brief experimental designs and Combined designs.
These methods form the heart of the data collection and analytic code of behavior analysis. Behavior analysis is data driven, inductive, and disinclined to hypothetico-deductive methods.
Experimental questions
Experimental questions are decisive in determining the nature of the experimental design to be selected. There are four basic types of experimental questions: demonstration, comparison, parametric, and component. A demonstration is "Does A cause or influence B?". A comparison is "Does A1 or A2 cause or influence B more?". A parametric question is "How much of A will cause how much change or influence on B?". A component question is "Which part of A{1,2,3} - A1 or A2 or A3... - causes or influences B?" where A is composed of parts that can be separated and tested.
The A-B-A-B design is useful for demonstration questions.
A-B-A-B
A-B-A-B designs begin with establishing a baseline (A #1) then introduce a new behavior or treatment (B #1). Then the |
https://en.wikipedia.org/wiki/Amanita%20arocheae | Amanita arocheae, also known as the Latin American death cap, is a mushroom of the large genus Amanita, which occurs in Colombia, Central America and South America. Deadly poisonous, it is a member of section Phalloideae and related to the death cap A. phalloides.
It is known as hongo gris in Mexico, where it is found under oak. It differs from the death cap in the colour of its cap, which is brownish to grayish.
Taxonomy
The species was first described in 1992 by mycologists Rod Tulloss, C.L. Ovrebo, and Roy Halling. It is closely related to Amanita phalloides, and was referred to this species in the past by Mexican mycologists. It is named after mycologist Regla Maria Aroche.
Description
The cap is convex to plano-convex, reaching dimensions of . The cap surface is sticky or tacky. The center of the cap is gray to brown with a gray edge. The white gills are closely crowded together and free from attachment to the stipe. In young mushrooms, the gills exude drops of clear fluid. The dry, white to pale grey stipe measures long by thick. It has a bulbous base, a white to grey, membranous volva at the stipe base, and white mycelium at the base. The stipe has a white ring. The odor of the flesh is mild to unpleasant.
The spore print is white. Spores are smooth, amyloid, spherical or roughly so, and measure 7–10 by 6.8–9.5 μm. Clamp connections are absent from the hyphae.
Similar species
Amanita vaginata is similar; however, A. vaginata has non-amyloid spores and lacks a ring on the stipe.
Habitat and distribution
Amanita arocheae is a mycorrhizal species that associates with oak as a host. It is found in Mexico, Costa Rica, and Colombia.
See also
List of Amanita species
List of deadly fungi |
https://en.wikipedia.org/wiki/Electroblotting | Electroblotting is a method in molecular biology/biochemistry/immunogenetics to transfer proteins or nucleic acids onto a PVDF or nitrocellulose membrane after gel electrophoresis. The protein or nucleic acid can then be further analyzed using probes such as specific antibodies, ligands like lectins, or stains. This method can be used with all polyacrylamide and agarose gels. An alternative technique for transferring proteins from a gel is capillary blotting.
Development
This technique was patented in 1989 by William J. Littlehales under the title "Electroblotting technique for transferring specimens from a polyacrylamide electrophoresis or like gel onto a membrane".
Electroblotting procedure
This technique relies upon current and a transfer buffer solution to drive proteins or nucleic acids onto a membrane. Following electrophoresis, a standard tank or semi-dry blotting transfer system is set up. A stack is put together in the following order from cathode to anode: sponge | three sheets of filter paper soaked in transfer buffer | gel | PVDF or nitrocellulose membrane | three sheets of filter paper soaked in transfer buffer | sponge. It is a necessity that the membrane is located between the gel and the positively charged anode, as the current and sample will be moving in that direction. Once the stack is prepared, it is placed in the transfer system, and a current of suitable magnitude is applied for a suitable period of time according to the materials being used.
Typically the electrophoresis gel is stained with Coomassie brilliant blue following the transfer to ensure that a sufficient quantity of material has been transferred. Because the proteins may retain or regain part of their structure during blotting they may react with specific antibodies giving rise to the term immunoblotting. Alternatively the proteins may react with ligands like lectins giving rise to the term affinity blotting.
See also
Western blotting
SDS-PAGE
https://en.wikipedia.org/wiki/Kobon%20triangle%20problem | The Kobon triangle problem is an unsolved problem in combinatorial geometry first stated by Kobon Fujimura (1903-1983). The problem asks for the largest number N(k) of nonoverlapping triangles whose sides lie on an arrangement of k lines. Variations of the problem consider the projective plane rather than the Euclidean plane, and require that the triangles not be crossed by any other lines of the arrangement.
Known upper bounds
Saburo Tamura proved that the number of nonoverlapping triangles realizable by k lines is at most ⌊k(k − 2)/3⌋. G. Clément and J. Bader proved more strongly that this bound cannot be achieved when k is congruent to 0 or 2 (mod 6). The maximum number of triangles is therefore at most one less in these cases. The same bounds can be equivalently stated, without use of the floor function, as k(k − 2)/3 when k is congruent to 3 or 5 (mod 6), (k² − 2k − 2)/3 when k is congruent to 1 or 4 (mod 6), and (k² − 2k − 3)/3 when k is congruent to 0 or 2 (mod 6).
Solutions yielding this number of triangles are known when k is 3, 4, 5, 6, 7, 8, 9, 13, 15 or 17. For k = 10, 11 and 12, the best solutions known reach a number of triangles one less than the upper bound.
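For illustration, a short Python sketch (ours, not part of the article) that evaluates the upper bound as stated above; the function name is arbitrary:

def kobon_upper_bound(k):
    # Tamura's bound floor(k*(k - 2)/3), reduced by one when k is congruent
    # to 0 or 2 (mod 6), following Clement and Bader (as described above).
    bound = k * (k - 2) // 3
    if k % 6 in (0, 2):
        bound -= 1
    return bound

print([kobon_upper_bound(k) for k in range(3, 11)])
# [1, 2, 5, 7, 11, 15, 21, 26]; solutions attaining the bound are known for
# k = 3..9, while for k = 10 the best known solution has one triangle fewer.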
Known constructions
Given an optimal solution with k0 > 3 lines, other Kobon triangle solution numbers can be found for all ki-values with ki+1 = 2ki − 1, by using the procedure by D. Forge and J. L. Ramirez Alfonsin. For example, the solution for k0 = 5 leads to the maximal number of nonoverlapping triangles for k = 5, 9, 17, 33, 65, ....
Examples
See also
Roberts's triangle theorem, on the minimum number of triangles that lines can form |
https://en.wikipedia.org/wiki/Stringed%20instrument%20tunings | This is a chart of stringed instrument tunings. Instruments are listed alphabetically by their most commonly known name.
Terminology
A course may consist of one or more strings.
Courses are listed reading from left to right facing the front of the instrument, with the instrument standing vertically. On a majority of instruments, this places the notes from low to high pitch.
Exceptions exist:
Instruments using reentrant tuning (e.g., the charango) may have a high string before a low string.
Instruments strung in the reverse direction (e.g. mountain dulcimer) will be noted with the highest sounding courses on the left and the lowest to the right.
A few instruments exist in "right-hand" and "left-hand" versions; left-handed instruments are not included here as separate entries, as their tuning is identical to the right-hand version, but with the strings in reverse order (e.g., a left-handed guitar).
Strings within a course are also given from left to right, facing the front of the instrument, with it standing vertically. Single-string courses are separated by spaces; multiple-string courses (i.e. paired or tripled strings) are shown with courses separated by bullet characters (•).
Pitch: Unless otherwise noted, contemporary western standard pitch (A4 = 440 Hz) and 12-tone equal temperament are assumed.
Octaves are given in scientific pitch notation, with Middle C written as "C4". (The 'A' above Middle C would then be written as "A4"; the next higher octave begins on "C5"; the next lower octave on "C3"; etc.)
Because stringed instruments are easily re-tuned, the concept of a "standard tuning" is somewhat flexible. Some instruments:
have a designated standard tuning (e.g., violin; guitar)
have more than one tuning considered "standard" (e.g. mejorana, ukulele)
do not have a standard tuning but rather a "common" tuning that is used more frequently than others (e.g., banjo; lap steel guitar)
are typically re-tuned to suit the music being played or the voice |
https://en.wikipedia.org/wiki/Koffler%20Scientific%20Reserve | The Koffler Scientific Reserve at Jokers Hill, is a biological field station belonging to and managed by the University of Toronto. It occupies roughly 348 hectares of old fields, wetlands, grasslands, and forest lands in King Township, on the western portion of the Oak Ridges Moraine and close to the town of Newmarket, Ontario, Canada. The site's ecosystems are home to many species of plants and animals.
History
Starting in the 1880s, much of present-day Jokers Hill was heavily deforested and settled as 16 small farms. In the 1950s, Major General Clarence Churchill Mann and his wife Billie McLaughlin Mann consolidated these farms and developed the property as a horse farm and estate; a race track, barns, pastures, and other equestrian relics remain from this era. The property was purchased by the Koffler family in 1969, and served as their country home. They retained noted architect Napier Simpson to expand and remodel the estate house, first built by the Manns. The Kofflers hosted a variety of charitable and sporting events, including Canada's first three-day equestrian event competition. Guests to Jokers Hill during the Koffler years included Prince Philip, Princess Margaret, Pierre and Margaret Trudeau, and numerous luminaries from the Canadian business and arts communities.
The Kofflers turned over the estate to the University of Toronto in 1995. Estimated to be worth $18 million at the time, it is the most valuable land gift ever given to a Canadian university. Under the guidance of founding Director, Prof. Ann Zimmerman, Koffler Scientific Reserve became a favored site for biological research by members of the University's Department of Ecology and Evolutionary Biology and its Faculty of Forestry. In 2007, Arthur E. Weis, a longtime professor of evolutionary biology at the University of California, was hired by Toronto to succeed Zimmerman as director. He oversaw a construction project that converted the former 'racing barn' into the 'Laboratory for |
https://en.wikipedia.org/wiki/Tribimaximal%20mixing | Tribimaximal mixing is a specific postulated form for the Pontecorvo–Maki–Nakagawa–Sakata (PMNS) lepton mixing matrix U. Tribimaximal mixing is defined by a particular choice of the matrix of moduli-squared of the elements of the PMNS matrix as follows:
|Ue1|² = 2/3, |Ue2|² = 1/3, |Ue3|² = 0;
|Uμ1|² = 1/6, |Uμ2|² = 1/3, |Uμ3|² = 1/2;
|Uτ1|² = 1/6, |Uτ2|² = 1/3, |Uτ3|² = 1/2.
This mixing is historically interesting as it is quite close to reality when compared to other simple hypotheses where the squares of matrix elements take exact ratios, and also compared to the naive supposition that the matrix would be approximately diagonal like the CKM matrix. However, the precision of modern experiments means that such a simple form is excluded by experiment at a level of over 5σ, mainly due to the fact that the tribimaximal scheme has a zero in the Ue3 element, but also (to a much lesser extent) because it predicts no violation of CP symmetry.
The tribimaximal mixing form was compatible with pre-2011 neutrino oscillation experiments and may be used as a zeroth-order approximation to more general forms for the PMNS matrix, including some that are consistent with the data. In the PDG convention for the PMNS matrix, tribimaximal mixing may be specified in terms of lepton mixing angles as follows: θ12 = arcsin(1/√3) ≈ 35.3°, θ23 = 45°, and θ13 = 0.
The above prediction has been falsified experimentally, because θ13 was found to be nontrivial, with θ13 ≈ 8.5°.
A non-negligible value of θ13 has been foreseen in certain theoretical schemes that were put forward before tribimaximal mixing and that supported a large solar mixing, before it was confirmed experimentally (these theoretical schemes do not have a special name, but for the reasons explained above, they could be called pre-tribimaximal or also non-tribimaximal). This situation is not new: also in the 1990s, the solar mixing angle was supposed to be small by most theorists, until KamLAND proved the contrary to be true.
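A small numerical check (our own sketch, assuming the standard PDG parameterization of the PMNS matrix with Majorana phases omitted) that the tribimaximal angles quoted above reproduce the moduli-squared pattern given at the top of this entry:

import numpy as np

def pmns(theta12, theta23, theta13, delta=0.0):
    # Standard PDG parameterization of the PMNS matrix (Majorana phases omitted)
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    eid = np.exp(1j * delta)
    return np.array([
        [c12 * c13, s12 * c13, s13 * np.conj(eid)],
        [-s12 * c23 - c12 * s23 * s13 * eid, c12 * c23 - s12 * s23 * s13 * eid, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * eid, -c12 * s23 - s12 * c23 * s13 * eid, c23 * c13],
    ])

U = pmns(np.arcsin(1 / np.sqrt(3)), np.pi / 4, 0.0)   # tribimaximal angles
print(np.round(np.abs(U) ** 2, 3))
# [[0.667 0.333 0.   ]
#  [0.167 0.333 0.5  ]
#  [0.167 0.333 0.5  ]]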
Explanation of name
The name tribimaximal reflects the commonality of the tribimaximal mixing matrix with two previously proposed specific forms for the PMNS matrix, the trimaximal and |
https://en.wikipedia.org/wiki/Scatter%20matrix | For the notion in quantum mechanics, see scattering matrix.
In multivariate statistics and probability theory, the scatter matrix is a statistic that is used to make estimates of the covariance matrix, for instance of the multivariate normal distribution.
Definition
Given n samples of m-dimensional data, represented as the m-by-n matrix X = [x1, ..., xn], the sample mean is
x̄ = (1/n) Σj=1..n xj,
where xj is the j-th column of X.
The scatter matrix is the m-by-m positive semi-definite matrix
S = Σj=1..n (xj − x̄)(xj − x̄)^T,
where ^T denotes matrix transpose, and multiplication is with respect to the outer product. The scatter matrix may be expressed more succinctly as
S = X Cn X^T,
where Cn is the n-by-n centering matrix.
Application
The maximum likelihood estimate, given n samples, for the covariance matrix of a multivariate normal distribution can be expressed as the normalized scatter matrix
CML = (1/n) S.
When the columns of X are independently sampled from a multivariate normal distribution, then S has a Wishart distribution.
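A short NumPy sketch of these formulas (illustrative only; the dimensions and random data are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 500                                # dimension and number of samples
X = rng.normal(size=(m, n))                  # m-by-n data matrix, one sample per column

x_bar = X.mean(axis=1, keepdims=True)        # sample mean (m-by-1)
S = (X - x_bar) @ (X - x_bar).T              # scatter matrix (m-by-m)

# Equivalent form using the n-by-n centering matrix
C = np.eye(n) - np.ones((n, n)) / n
print(np.allclose(S, X @ C @ X.T))           # True

cov_mle = S / n                              # ML estimate of the covariance matrix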
See also
Estimation of covariance matrices
Sample covariance matrix
Wishart distribution
Outer product—X X^T or X⊗X is the outer product of X with itself.
Gram matrix |
https://en.wikipedia.org/wiki/Centering%20matrix | In mathematics and multivariate statistics, the centering matrix is a symmetric and idempotent matrix, which when multiplied with a vector has the same effect as subtracting the mean of the components of the vector from every component of that vector.
Definition
The centering matrix of size n is defined as the n-by-n matrix
Cn = In − (1/n) Jn,
where In is the identity matrix of size n and Jn is an n-by-n matrix of all 1's.
For example,
C2 = [[ 1/2, −1/2 ], [ −1/2, 1/2 ]],
C3 = [[ 2/3, −1/3, −1/3 ], [ −1/3, 2/3, −1/3 ], [ −1/3, −1/3, 2/3 ]].
Properties
Given a column vector v of size n, the centering property of Cn can be expressed as
Cn v = v − v̄ 1n,
where 1n is a column vector of ones and v̄ is the mean of the components of v.
Cn is symmetric positive semi-definite.
Cn is idempotent, so that Cn^k = Cn for k = 1, 2, .... Once the mean has been removed, it is zero and removing it again has no effect.
Cn is singular. The effects of applying the transformation Cn v cannot be reversed.
Cn has the eigenvalue 1 of multiplicity n − 1 and eigenvalue 0 of multiplicity 1.
Cn has a nullspace of dimension 1, along the vector 1n.
Cn is an orthogonal projection matrix. That is, Cn v is a projection of v onto the (n − 1)-dimensional subspace that is orthogonal to the nullspace of Cn. (This is the subspace of all n-vectors whose components sum to zero.)
The trace of Cn is n − 1.
Application
Although multiplication by the centering matrix is not a computationally efficient way of removing the mean from a vector, it is a convenient analytical tool. It can be used not only to remove the mean of a single vector, but also of multiple vectors stored in the rows or columns of an m-by-n matrix X.
The left multiplication by Cm subtracts a corresponding mean value from each of the n columns, so that each column of the product Cm X has a zero mean. Similarly, the multiplication by Cn on the right subtracts a corresponding mean value from each of the m rows, and each row of the product X Cn has a zero mean.
The multiplication on both sides creates a doubly centred matrix Cm X Cn, whose row and column means are equal to zero.
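A brief NumPy illustration of these properties (our own sketch; sizes and data are arbitrary):

import numpy as np

n = 5
C = np.eye(n) - np.ones((n, n)) / n                  # centering matrix Cn

v = np.arange(1.0, n + 1)
print(C @ v)                                         # v with its mean removed: [-2. -1.  0.  1.  2.]
print(np.allclose(C, C.T), np.allclose(C @ C, C))    # symmetric and idempotent
print(np.isclose(np.trace(C), n - 1))                # trace equals n - 1

m = 3
X = np.arange(float(m * n)).reshape(m, n)            # m-by-n data matrix
Cm = np.eye(m) - np.ones((m, m)) / m
print(np.allclose((Cm @ X).mean(axis=0), 0))         # left multiplication centers the columns
print(np.allclose((X @ C).mean(axis=1), 0))          # right multiplication centers the rows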
The centering matrix provides in particular a succinct way to expres |
https://en.wikipedia.org/wiki/N-Acetylserotonin | N-Acetylserotonin (NAS), also known as normelatonin, is a naturally occurring chemical intermediate in the endogenous production of melatonin from serotonin. It also has biological activity in its own right, including acting as a melatonin receptor agonist, an agonist of the TrkB receptor, and having antioxidant effects.
Biological function
Like melatonin, NAS is an agonist at the melatonin receptors MT1, MT2, and MT3, and may be considered to be a neurotransmitter. In addition, NAS is distributed in some areas of the brain where serotonin and melatonin are not, suggesting that it may have unique central duties of its own instead of merely functioning as a precursor in the synthesis of melatonin. NAS is known to have anti-depressant, neurotrophic and cognition-enhancing effects and has been proposed to be a target for the treatment of aging-associated cognitive decline and depression.
TrkB receptor
NAS has been shown to act as a potent TrkB receptor agonist, while serotonin and melatonin do not. Subchronic and chronic administration of NAS to adult mice induces proliferation of neural progenitor cells (NPC)s, blockage of TrkB abolished this effect suggesting that it is TrkB-dependent. NAS was also found to significantly enhance NPC proliferation in sleep-deprived mice. It is thought that the anti-depressant and neurotrophic effects of NAS are in part due to its role as a TrkB agonist.
Antioxidant properties
NAS acts as a potent antioxidant. Its effectiveness as an antioxidant has been found to differ depending on the experimental model used; it has been described as being between 5 and 20 times more effective than melatonin at protecting against oxidant damage. NAS has been shown to protect against lipid peroxidation in microsomes and mitochondria. NAS has also been reported to lower resting levels of ROS in peripheral blood lymphocytes and to exhibit anti-oxidant effects against t-butylated hydroperoxide- and diamide-induced ROS. NAS has also been observed to inhibit |
https://en.wikipedia.org/wiki/MEROPS | MEROPS is an online database for peptidases (also known as proteases, proteinases and proteolytic enzymes) and their inhibitors. The classification scheme for peptidases was published by Rawlings & Barrett in 1993, and that for protein inhibitors by Rawlings et al. in 2004. The most recent version, MEROPS 12.4, was released in late October 2021.
Overview
The classification is based on similarities at the tertiary and primary structural levels. Comparisons are restricted to that part of the sequence directly involved in the reaction, which in the case of a peptidase must include the active site, and for a protein inhibitor the reactive site. The classification is hierarchical: sequences are assembled into families, and families are assembled into clans. Each peptidase, family, and clan has a unique identifier.
Classification
Family
The families of peptidases are constructed by comparisons of amino acid sequences. A family is assembled around a type example, the sequence of a well-characterized peptidase or inhibitor. All other sequences in the family must be related to the family type example, either directly or through a transitive relationship involving one or more sequences already shown to be family members. Typically, FastA or BlastP is used to establish sequence relationships, with an expect value of 0.001 or lower taken to be statistically significant. HMMER or psi-blast searches are used for adding sequences which are distantly related to a family. Each family is identified by a letter representing the catalytic type of the peptidases it contains followed by an arbitrary unique number.
Some families are divided into subfamilies due to evidence of very ancient divergence within the family. The divergence corresponds to more than 150 accepted point mutations per 100 amino acid residues.
Clan
The similarity in three-dimensional structures supports the evidence that many of the families do share common ancestry with others. "Clan" is used to describe s |
https://en.wikipedia.org/wiki/Paradox%20of%20the%20plankton | In aquatic biology, the paradox of the plankton describes the situation in which a limited range of resources supports an unexpectedly wide range of plankton species, apparently flouting the competitive exclusion principle which holds that when two species compete for the same resource, one will be driven to extinction.
Ecological paradox
The paradox of the plankton results from the clash between the observed diversity of plankton and the competitive exclusion principle, also known as Gause's law, which states that, when two species compete for the same resource, ultimately only one will persist and the other will be driven to extinction. Coexistence between two such species is impossible because the dominant one will inevitably deplete the shared resources, thus decimating the inferior population. Phytoplankton life is diverse at all phylogenetic levels despite the limited range of resources (e.g. light, nitrate, phosphate, silicic acid, iron) for which they compete amongst themselves. The paradox of the plankton was originally described in 1961 by G. Evelyn Hutchinson, who proposed that the paradox could be resolved by factors such as vertical gradients of light or turbulence, symbiosis or commensalism, differential predation, or constantly changing environmental conditions.
Later studies found that the paradox can be resolved by factors such as: zooplankton grazing pressure; chaotic fluid motion; size-selective grazing; spatio-temporal heterogeneity; bacterial mediation; or environmental fluctuations. In general, researchers suggest that ecological and environmental factors continually interact such that the planktonic habitat never reaches an equilibrium for which a single species is favoured.
While it was long assumed that turbulence disrupts plankton patches at spatial scales less than a few metres, researchers using small-scale analysis of plankton distribution found that these exhibited patches of aggregation — on the order of 10 cm — that had suffic |
https://en.wikipedia.org/wiki/Tajima%27s%20D | Tajima's D is a population genetic test statistic created by and named after the Japanese researcher Fumio Tajima. Tajima's D is computed as the difference between two measures of genetic diversity: the mean number of pairwise differences and the number of segregating sites, each scaled so that they are expected to be the same in a neutrally evolving population of constant size.
The purpose of Tajima's D test is to distinguish between a DNA sequence evolving randomly ("neutrally") and one evolving under a non-random process, including directional selection or balancing selection, demographic expansion or contraction, genetic hitchhiking, or introgression. A randomly evolving DNA sequence contains mutations with no effect on the fitness and survival of an organism. The randomly evolving mutations are called "neutral", while mutations under selection are "non-neutral". For example, a mutation that causes prenatal death or severe disease would be expected to be under selection. In the population as a whole, the frequency of a neutral mutation fluctuates randomly (i.e. the percentage of individuals in the population with the mutation changes from one generation to the next, and this percentage is equally likely to go up or down) through genetic drift.
The strength of genetic drift depends on population size. If a population is at a constant size with constant mutation rate, the population will reach an equilibrium of gene frequencies. This equilibrium has important properties, including the number of segregating sites (S), and the number of nucleotide differences between pairs sampled (these are called pairwise differences). To standardize the pairwise differences, the mean or 'average' number of pairwise differences is used. This is simply the sum of the pairwise differences divided by the number of pairs, and is often symbolized by π.
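As a toy illustration of these two quantities, the following Python sketch computes S and π for a small hypothetical alignment and forms π − S/a1, the numerator of Tajima's D (the full statistic additionally divides by an estimate of its standard deviation).

```python
# Illustrative sketch on hypothetical toy data: compute the number of
# segregating sites S, the mean number of pairwise differences pi, and the
# scaled difference pi - S/a1 that forms the numerator of Tajima's D.

from itertools import combinations

# Toy alignment of n = 4 sampled sequences.
seqs = [
    "ATGCATGCA",
    "ATGCATGCA",
    "ATGAATGCA",
    "ATGAATGTA",
]

n = len(seqs)

# S: number of sites at which more than one nucleotide occurs.
S = sum(1 for site in zip(*seqs) if len(set(site)) > 1)

# pi: average number of differences over all n*(n-1)/2 pairs of sequences.
pair_diffs = [sum(a != b for a, b in zip(s1, s2))
              for s1, s2 in combinations(seqs, 2)]
pi = sum(pair_diffs) / len(pair_diffs)

# a1 scales S so that, under neutrality at equilibrium, E[S/a1] = E[pi].
a1 = sum(1.0 / i for i in range(1, n))

print(f"S = {S}, pi = {pi:.3f}, pi - S/a1 = {pi - S / a1:.3f}")
```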
The purpose of Tajima's test is to identify sequences which do not fit the neutral theory model at equilibrium between mutation and gene |
https://en.wikipedia.org/wiki/Chiral%20gauge%20theory | In quantum field theory, a chiral gauge theory is a quantum field theory with charged chiral (i.e. Weyl) fermions. For instance, the Standard Model is a chiral gauge theory. For topological reasons, chiral charged fermions cannot be given a mass without breaking the gauge symmetry, which leads to inconsistencies, unlike the breaking of a global symmetry. It is notoriously difficult to construct a chiral gauge theory from a theory which does not already contain chiral fields at the fundamental level. A consistent chiral gauge theory must have no gauge anomaly (or global anomaly). Almost by necessity, regulators will have to break the gauge symmetry, which is responsible for gauge anomalies in the first place.
Fermion doubling on a lattice
Lattice regularizations suffer from fermion doubling, leading to a loss of chirality.
See also
Chiral anomaly |
https://en.wikipedia.org/wiki/UWIN | UWIN is a computer software package created by David Korn which allows programs written for the operating system Unix to be built and run on Microsoft Windows with few, if any, changes. Some of the software development was subcontracted to Wipro, India. The software has also been referred to, correctly or not, as U/Win and as AT&T Unix for Windows, especially in its early days.
UWIN source is available under the Open Source Eclipse Public License 1.0 at AT&T's AST/UWIN repositories on GitHub.
UWIN 5 is distributed with the FireCMD enhanced Windows shell, with its Korn shell as one of the three default shells present at install; the others are the FireCMD scripting language and the default Windows command shell, cmd.exe. Other UWIN shells such as csh and tclsh, shells from other interoperability suites such as the MKS Toolkit, and shells bundled with Tcl, Lua, Python and Ruby distributions, among others, can be added to the menu by the user or administrator.
Technical details
Technically, it is an X/Open library for the Windows 32-bit application programming interface (API), called Win32.
UWIN contains:
Libraries that emulate a Unix environment by implementing the Unix API
Include files and development tools such as cc(1), yacc(1), lex(1), and make(1).
ksh(1) (the Korn Shell) and over 250 utilities such as ls(1), sed(1), cp(1), stty(1), etc.
Most of the Unix API is implemented by the POSIX.DLL dynamically loaded (shared) library. Programs linked with POSIX.DLL run under the Win32 subsystem instead of the POSIX subsystem, so programs can freely intermix Unix and Win32 library calls. A cc(1) command is provided to compile and link programs for UWIN on Windows using traditional Unix build tools such as make(1). The cc(1) command is a front end to the underlying compiler that performs the actual compilation and linking. It can be used with the Microsoft Visual C/C++ 5.X compiler, the Visual C/C++ 6.X compiler, the Visual C |
https://en.wikipedia.org/wiki/Compressed%20sensing | Compressed sensing (also known as compressive sensing, compressive sampling, or sparse sampling) is a signal processing technique for efficiently acquiring and reconstructing a signal, by finding solutions to underdetermined linear systems. This is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Nyquist–Shannon sampling theorem. There are two conditions under which recovery is possible. The first one is sparsity, which requires the signal to be sparse in some domain. The second one is incoherence, which is applied through the restricted isometry property, which is sufficient for sparse signals.
Overview
A common goal of the engineering field of signal processing is to reconstruct a signal from a series of sampling measurements. In general, this task is impossible because there is no way to reconstruct a signal during the times that the signal is not measured. Nevertheless, with prior knowledge or assumptions about the signal, it turns out to be possible to perfectly reconstruct a signal from a series of measurements (acquiring this series of measurements is called sampling). Over time, engineers have improved their understanding of which assumptions are practical and how they can be generalized.
An early breakthrough in signal processing was the Nyquist–Shannon sampling theorem. It states that if a real signal's highest frequency is less than half of the sampling rate, then the signal can be reconstructed perfectly by means of sinc interpolation. The main idea is that with prior knowledge about constraints on the signal's frequencies, fewer samples are needed to reconstruct the signal.
Around 2004, Emmanuel Candès, Justin Romberg, Terence Tao, and David Donoho proved that given knowledge about a signal's sparsity, the signal may be reconstructed with even fewer samples than the sampling theorem requires. This idea is the basis of compressed sensing.
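As a minimal sketch of this idea, the following Python code recovers a sparse signal from far fewer random linear measurements than unknowns by solving an l1-regularised least-squares problem with iterative soft-thresholding (ISTA); the dimensions, seed, and regularisation weight are arbitrary illustrative choices rather than values from the literature.

```python
# Minimal sketch of sparse recovery from underdetermined measurements using
# ISTA (iterative soft-thresholding) for min_x 0.5*||A x - y||^2 + lam*||x||_1.
# All sizes and the regularisation weight are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

n, m, k = 200, 60, 5          # signal length, number of measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)

A = rng.normal(0, 1.0 / np.sqrt(m), (m, n))   # random sensing matrix
y = A @ x_true                                 # m << n linear measurements

lam = 0.01
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - y)
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```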
History
Compressed s |
https://en.wikipedia.org/wiki/Phosphoribosyl-N-formylglycineamide | Phosphoribosyl-N-formylglycineamide (or FormylGlycinAmideRibotide, FGAR) is a biochemical intermediate in the formation of purine nucleotides via inosine-5-monophosphate, and hence is a building block for DNA and RNA. The vitamins thiamine and cobalamin also contain fragments derived from FGAR.
FGAR is formed when the enzyme phosphoribosylglycinamide formyltransferase adds a formyl group from 10-formyltetrahydrofolate to glycinamide ribonucleotide (GAR) in the following reaction:
GAR + 10-formyltetrahydrofolate → FGAR + tetrahydrofolate
The biosynthesis pathway next converts FGAR to an amidine by the action of phosphoribosylformylglycinamidine synthase, transferring an amino group from glutamine and giving 5'-phosphoribosylformylglycinamidine (FGAM) in a reaction that also requires ATP:
FGAR + ATP + glutamine + H2O → FGAM + ADP + glutamate + Pi
See also
5-Aminoimidazole ribotide
Purine metabolism |
https://en.wikipedia.org/wiki/Ceruminous%20gland | Ceruminous glands are specialized sudoriferous glands (sweat glands) located subcutaneously in the external auditory canal, in the outer third. Ceruminous glands are simple, coiled, tubular glands made up of an inner secretory layer of cells and an outer myoepithelial layer of cells. They are classed as apocrine glands. The glands drain into larger ducts, which then drain into the guard hairs that reside in the external auditory canal. Here they produce cerumen, or earwax, by mixing their secretion with sebum and dead epidermal cells. Cerumen keeps the eardrum pliable, lubricates and cleans the external auditory canal, waterproofs the canal, kills bacteria, and serves as a barrier to trap foreign particles (dust, fungal spores, etc.) by coating the guard hairs of the ear, making them sticky.
These glands are capable of developing both benign and malignant tumors. The benign tumors include ceruminous adenoma, ceruminous pleomorphic adenoma, and ceruminous syringocystadenoma papilliferum. The malignant tumors include ceruminous adenocarcinoma, adenoid cystic carcinoma, and mucoepidermoid carcinoma.
See also
List of specialized glands within the human integumentary system
List of distinct cell types in the adult human body |
https://en.wikipedia.org/wiki/Hydra%20%28operating%20system%29 | Hydra (stylized as HYDRA) is an early, discontinued, capability-based, object-oriented microkernel designed to support a wide range of possible operating systems to run on it. Hydra was created as part of the C.mmp project at Carnegie-Mellon University in 1971.
The name is based on the ancient Greek mythological creature the hydra.
Hydra was designed to be modular and secure, and intended to be flexible enough for easy experimentation.
The system was implemented in the programming language BLISS. |
https://en.wikipedia.org/wiki/Glutamate-1-semialdehyde | Glutamate-1-semialdehyde is a molecule formed by the reduction of tRNA-bound glutamate, catalyzed by glutamyl-tRNA reductase. It is isomerized by glutamate-1-semialdehyde 2,1-aminomutase to give aminolevulinic acid in the biosynthesis of porphyrins, including heme and chlorophyll.
See also
Glutamate-5-semialdehyde |
https://en.wikipedia.org/wiki/Diversity%E2%80%93function%20debate | Functional diversity, composition, and species richness affect the biogeochemical processes of ecosystems. However, the degree to which these factors influence ecosystems and whether that influence is significant is debated.
In the article The Influence of Functional Diversity and Composition on Ecosystem Processes, scientists reported on an experiment in which they studied the effects of plant species diversity, functional diversity, and functional composition on ecosystem processes, as measured in six response variables (productivity, plant % N, plant tot. N, soil NH4, soil NO3, and light penetration). A total of 289 plots were designed with varying amounts of the three controlled factors. Each plot contained up to 32 perennial savannah-grassland species representing up to five plant functional groups. These species were not equal in their functional impact on the ecosystem.
The statistical results show that functional diversity and species composition significantly affected the six response variables to a greater extent than species diversity. By themselves, all three factors significantly affected ecosystem processes and also influenced each other. The mechanisms and degree by which they influenced each other are unclear. The Tilman article does not claim to give a definitive answer. Uncertainty is implied in the major conclusions of the paper: "...the number of functionally different roles represented in an ecosystem may be a stronger determinant of ecosystem processes than the total number of species, per se. However, species diversity and functional diversity are correlated..." This study implies that to progress, scientists on both sides of the diversity–function debate must develop a holistic model that acknowledges the inextricable relationship between diversity and function.
In the fourth installment of the Ecological Society of America's Issues in Ecology series, the David Tilman et al. study was used to support the argument for a positive correlation between d |
https://en.wikipedia.org/wiki/Continental%20Electronics | Continental Electronics is an American manufacturer of broadcast and military radio transmitters, based in Dallas, Texas. Although Continental today is best known for its FM, shortwave, and military VLF transmitters, Continental is most significant historically for its line of mediumwave (AM) transmitters, many of which are still in active service as either main transmitters or backup facilities. Among clear-channel AM stations in the U.S. and Canada, the Continental 317C was the most popular transmitter type in the 1970s and 1980s.
History
Continental Electronics was founded in Dallas in 1946 by James O. Weldon, as a spin-off of the broadcast consulting business in which he was a partner, Weldon & Carr. In 1953, when Western Electric's radio equipment business was broken up by Federal antitrust regulators, Continental acquired the AM transmitter business, and with it the U.S. patent on the Doherty linear RF amplifier. Continental became part of Ling-Temco-Vought in about 1962, the first in a series of sales which would later bring it under the control of E-Systems and then Varian Associates in 1985. Tech-Sym acquired Continental from Varian in 1990, and then sold it to Integrated Defense Technologies in 2000. DRS Technologies acquired IDT in 2003, and in 2005, private-equity firm Veritas Capital (which had previously owned IDT) bought Continental back from DRS. Weldon remained with the company until his retirement in 1988.
In 1958, Continental introduced a more-efficient Doherty-style amplifier based on a tetrode (previous Doherty designs made by Western Electric and Continental used triodes) with the type 317B transmitter. With four subsequent revisions, more than 200 units were sold in the 317 line (a substantial number given the limited customer base for 50-kW AM transmitters in the North American market); the final revision, the 317C-3, was introduced in 1990. By this time, competitors such as Harris had demonstrated the workability of all-solid-state |
https://en.wikipedia.org/wiki/Marker-assisted%20selection | Marker assisted selection or marker aided selection (MAS) is an indirect selection process where a trait of interest is selected based on a marker (morphological, biochemical or DNA/RNA variation) linked to a trait of interest (e.g. productivity, disease resistance, abiotic stress tolerance, and quality), rather than on the trait itself. This process has been extensively researched and proposed for plant- and animal- breeding.
For example, using MAS to select individuals with disease resistance involves identifying a marker allele that is linked with disease resistance rather than the level of disease resistance. The assumption is that the marker associates at high frequency with the gene or quantitative trait locus (QTL) of interest, due to genetic linkage (close proximity, on the chromosome, of the marker locus and the disease resistance-determining locus). MAS can be useful to select for traits that are difficult or expensive to measure, exhibit low heritability and/or are expressed late in development. At certain points in the breeding process the specimens are examined to ensure that they express the desired trait.
Marker types
The majority of MAS work in the present era uses DNA-based markers. However, the first markers that allowed indirect selection of a trait of interest were morphological markers. In 1923, Karl Sax first reported association of a simply inherited genetic marker with a quantitative trait in plants when he observed segregation of seed size associated with segregation for a seed coat color marker in beans (Phaseolus vulgaris L.). In 1935, J. Rasmusson demonstrated linkage of flowering time (a quantitative trait) in peas with a simply inherited gene for flower color.
Markers may be:
Morphological These were the first marker loci available that have an obvious impact on the morphology of plants. These markers are often detectable by eye, by simple visual inspection. Examples of this type of marker include the presence or absence of a
https://en.wikipedia.org/wiki/Mesopredator%20release%20hypothesis | The mesopredator release hypothesis is an ecological theory used to describe the interrelated population dynamics between apex predators and mesopredators within an ecosystem, such that a collapsing population of the former results in dramatically increased populations of the latter. This hypothesis describes the phenomenon of trophic cascade in specific terrestrial communities.
A mesopredator is a medium-sized, middle-trophic-level predator, which both preys on other species and is preyed upon. Examples are raccoons, skunks, snakes, cownose rays, and small sharks.
The hypothesis
The term "mesopredator release" was first used by Soulé and colleagues in 1988 to describe a process whereby mid-sized carnivorous mammals became far more abundant after being "released" from the control of a larger carnivore. This, in turn, resulted in decreased populations of still smaller prey species, such as birds. This may lead to dramatic prey population decline, or even extinction, especially on islands. This process arises when mammalian top predators are considered to be the most influential factor on trophic structure and biodiversity in terrestrial ecosystems. Top predators may feed on herbivores and kill predators in lower trophic levels as well. Thus, reduction in the abundance of top predators may cause the medium-sized predator population to increase, therefore having a negative effect on the underlying prey community. The mesopredator release hypothesis offers an explanation for the abnormally high numbers of mesopredators and the decline in prey abundance and diversity. The hypothesis supports the argument for conservation of top predators because they protect smaller prey species that are in danger of extinction. This argument has been a subject of interest within conservation biology for years, but few studies have adequately documented the phenomenon.
Criticism
One of the main criticisms of the mesopredator release hypothesis is that it argues in favor of the top-down control |
https://en.wikipedia.org/wiki/Umbrella%20species | Umbrella species are species selected for making conservation-related decisions, typically because protecting these species indirectly protects the many other species that make up the ecological community of its habitat (the umbrella effect). Species conservation can be subjective because it is hard to determine the status of many species. The umbrella species is often either a flagship species whose conservation benefits other species or a keystone species which may be targeted for conservation due to its impact on an ecosystem. Umbrella species can be used to help select the locations of potential reserves, find the minimum size of these conservation areas or reserves, and to determine the composition, structure, and processes of ecosystems.
Definitions
Two commonly used definitions are:
"A wide-ranging species whose requirements include those of many other species"
A species with large area requirements for which protection of the species offers protection to other species that share the same habitat
Other descriptions include:
"Traditional umbrella species, relatively large-bodied and wide-ranging species of higher vertebrates"
Animals may also be considered umbrella species if they are charismatic. The hope is that species that appeal to popular audiences, such as pandas, will attract support for habitat conservation in general.
In land use management
The use of umbrella species as a conservation tool is highly debated. The term was first used by Bruce Wilcox in 1984, who defined an umbrella species as one whose minimum area requirements are at least as comprehensive as those of the rest of the community for which protection is sought through the establishment and management of a protected area.
Some scientists have found that the umbrella effect provides a simpler way to manage ecological communities. Others feel that a combination of other tools establish better land management reserves to help protect more species than just using umbrella species alone. I |
https://en.wikipedia.org/wiki/Whitley%20Awards%20%28Australia%29 | The Whitley Awards have been awarded annually since 1979 by the Royal Zoological Society of New South Wales (RZSNSW). They commemorate Gilbert Whitley, an eminent Australian ichthyologist, and are presented for outstanding publications, either printed or electronic, that contain new information about the fauna of the Australasian region.
For a publication to receive a Whitley Award it must either make a significant contribution of new information, present a new synthesis of existing information, or present existing information in a more acceptable form. All texts must contain a significant proportion of information that relates directly to Australasian zoology. Moreover, all submissions must have been published within 18 months of the awards entry date.
A presentation ceremony is held each year in September at the Australian Museum in Sydney when the authors and publishers of the winning titles receive their awards.
Awards
Certificates of Commendation may be awarded to publications judged as the best in various categories including, but not limited to, illustrated publications, textbooks, field guides, reference works, historical zoology, periodicals, handbooks, children’s publications, CD-ROMs, limited editions and videos.
Whitley Medal
The Whitley Medal may be awarded to a publication deemed to be of superior quality that makes a landmark contribution to the understanding, content or dissemination of zoological knowledge. The Whitley Medal is the top award in zoological publishing in Australia and is not necessarily awarded every year, though sometimes more than one medal may be awarded.
See also
Whitley Awards (UK)
List of biology awards |
https://en.wikipedia.org/wiki/Drosophila%20hybrid%20sterility | The concept of a biological species as a group of organisms capable of interbreeding to produce viable offspring dates back to at least the 18th century, although it is often associated today with Ernst Mayr. Species of the fruit-fly Drosophila are one of the most commonly used organisms in evolutionary research, and have been used to test many theories related to the evolution of species. The genus Drosophila comprises numerous species that have varying degrees of premating and postmating isolation (including hybrid sterility) between them. These species are useful for testing hypotheses of the reproductive mechanisms underlying speciation.
Historical background
Working in the early 20th century, T.H. Morgan was the first to use Drosophila to explore heredity. Primarily on the basis of work with D. melanogaster, Morgan and his colleagues C.B. Bridges, A.H. Sturtevant, and H.J. Muller developed a chromosome theory of heredity, for which Morgan was awarded a Nobel Prize in 1933. Their experiments consisted of cross-breeding Drosophila mutants and documenting offspring. Another highly regarded figure in Drosophila research was Theodosius Dobzhansky, who invented the use of genetic markers and used them to study hybrid sterility between Drosophila pseudoobscura and Drosophila persimilis. This experimental method has been used for many years.
Sex determination in Drosophila
The genome of D. melanogaster has been sequenced and studied in fine detail. It is now known that Drosophila melanogaster has four pairs of chromosomes: an X/Y pair and three pairs of autosomes. The genome comprises about 139.5 million base pairs. There are about 15,000 genes.
Sex is determined in Drosophila not by the presence or absence of the Y chromosome as in mammals, but by the ratio of X chromosomes to autosomes.
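A toy sketch of this ratio rule, using the classic textbook thresholds (a ratio of 1.0 develops as female, 0.5 as male, intermediate ratios as intersex), might look as follows; it is only an illustration, not a genetic model.

```python
# Toy illustration of the X:autosome-ratio rule described above; the
# thresholds follow the classic textbook description, not a full model.

def drosophila_sex(n_x_chromosomes, n_autosome_sets):
    ratio = n_x_chromosomes / n_autosome_sets
    if ratio >= 1.0:
        return "female"
    if ratio <= 0.5:
        return "male"
    return "intersex"

print(drosophila_sex(2, 2))  # XX with two autosome sets -> female
print(drosophila_sex(1, 2))  # one X with two autosome sets -> male
print(drosophila_sex(2, 3))  # 2X:3A -> intersex
```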
Experimentation
In the offspring of crosses between Drosophila simulans and its island derivative Drosophila mauritiana, female hybrids are fertile but male hybrids are sterile.
https://en.wikipedia.org/wiki/Latent%20extinction%20risk | In conservation biology, latent extinction risk is a measure of the potential for a species to become threatened.
Latent risk can most easily be described as the difference, or discrepancy, between the current observed extinction risk of a species (typically as quantified by the IUCN Red List) and the theoretical extinction risk of a species predicted by its biological or life history characteristics.
Calculation
Because latent risk is the discrepancy between current and predicted risks, estimates of both of these values are required (See population modeling and population dynamics). Once these values are known, the latent extinction risk can be calculated as Predicted Risk - Current Risk = Latent Extinction Risk.
When the latent extinction risk is a positive value, it indicates that a species is currently less threatened than its biology would suggest it ought to be. For example, a species may have several of the characteristics often found in threatened species, such as large body size, small geographic distribution, or low reproductive rate, but still be rated as "least concern" in the IUCN Red List. This may be because it has not yet been exposed to serious threatening processes such as habitat degradation.
Conversely, negative values of latent risk indicate that a species is already more threatened than its biology would indicate, probably because it inhabits a part of the world where it has been exposed to extreme endangering processes. Species with strongly negative values are usually listed as endangered species and have associated recovery and conservation plans.
Limits
One of the issues associated with latent extinction risk is that it is difficult to calculate because of the limited availability of data for predicting extinction risk across large numbers of species. Hence, the only study of latent risk to date has focused on mammals, which are one of the best-studied groups of organisms.
Effects on conservation
A study of latent extinction risk |
https://en.wikipedia.org/wiki/Ecological%20extinction | Ecological extinction is "the reduction of a species to such low abundance that, although it is still present in the community, it no longer interacts significantly with other species".
Ecological extinction stands out because it is the interaction ecology of a species that is important for conservation work. Estes, Duggins, and Rathburn (1989) state that "unless the species interacts significantly with other species in the community (e.g. it is an important predator, competitor, symbiont, mutualist, or prey) its loss may result in little to no adjustment to the abundance and population structure of other species".
This view stems from the neutral model of communities that assumes there is little to no interaction within species unless otherwise proven.
Estes, Duggins, and Rathburn (1989) recognize two other distinct types of extinction:
Global extinction is defined as "the ubiquitous disappearance of a species".
Local extinction is characterized by "the disappearance of a species from part of its natural range".
Keystone species
Robert Paine (1969) first introduced the concept of a keystone species while studying the effects of the predatory sea star Pisaster ochraceus on the abundance of the herbivorous gastropod Tegula funebralis. This study took place in the rocky intertidal habitat off the coast of Washington; Paine removed all Pisaster from 8 m × 10 m plots weekly while noting the response of Tegula for two years. He found that removing the top predator, in this case Pisaster, reduced species number in the treatment plots. Paine defined a keystone species as a species that has a disproportionate effect on the community structure of an environment in relation to its total biomass. This keystone species effect forms the basis for the concept of ecological extinction.
Examples
Estes et al. (1978) evaluated the potential role of the sea otter as the keystone predator in near-shore kelp forests. They compared the Rat and Near islands in the Aleutian isl |
https://en.wikipedia.org/wiki/Mathematical%20Sciences%20Foundation | Mathematical Sciences Foundation (MSF) was formally registered as a non-profit society in 2002 by Dr. Anil Wilson. It is an institute of education and research, located in Delhi, India. Its goal is the promotion of mathematics and its applications at all levels, from school to college to research.
Educational programmes
Undergraduate
Mathematical Finance: A hands-on introduction to modern Finance and the role of mathematics in it.
Mathematical Simulation with IT: Explores the interaction between Mathematics, Technology, and Education.
Graduate
In association with the University of Houston, leading to PhDs in Mathematics, Computer Science, and Physics. Students are trained at MSF for a year before heading to Houston.
Seminars and conferences
A Life of Mathematics: An annual programme under which eminent mathematicians reside at St. Stephen's College to interact with students and faculty. Recent visitors under this programme have been Sir Michael Atiyah, M S Narasimhan and Martin Golubitsky (President, SIAM).
Mathematics in the 20th Century: An international conference held in Delhi in 2006 to commemorate the birth centenary of André Weil.
Contests
MSF Challenge: An annual contest, first held in 2006, to encourage school students to use computers for mathematical problem solving.
Recognizing Ramanujan: An annual contest, first held in 2019, to encourage school students to develop original thinking in mathematical problem solving. The contest also introduces students to the Indian mathematician Srinivasa Ramanujan.
Akshit Gupta from Delhi Public School Rohini topped the first edition of this contest with 75% marks, almost 15 percentage points more than the next-best student in that edition. To date, he remains the only student to secure a perfect score in the mathematical problem-solving section of the exam.
Navvye Anand from Sanskriti School is to date the only student to win the distinction of "Budding Ramanujan" in all three of its |
https://en.wikipedia.org/wiki/Moving%20sofa%20problem | In mathematics, the moving sofa problem or sofa problem is a two-dimensional idealisation of real-life furniture-moving problems and asks for the rigid two-dimensional shape of largest area that can be maneuvered through an L-shaped planar region with legs of unit width. The area thus obtained is referred to as the sofa constant. The exact value of the sofa constant is an open problem. The currently leading solution, by Joseph L. Gerver, has a value of approximately 2.2195 and is thought to be close to the optimal, based upon subsequent study and theoretical bounds.
History
The first formal publication was by the Austrian-Canadian mathematician Leo Moser in 1966, although there had been many informal mentions before that date.
Bounds
Work has been done on proving that the sofa constant (A) cannot be below or above certain values (lower bounds and upper bounds).
Lower
Lower bounds can be proven by finding a specific shape of high area and a path for moving it through the corner. An obvious lower bound is π/2 ≈ 1.57. This comes from a sofa that is a half-disk of unit radius, which can slide up one passage into the corner, rotate within the corner around the center of the disk, and then slide out the other passage.
In 1968, John Hammersley stated a lower bound of π/2 + 2/π ≈ 2.2074. This can be achieved using a shape resembling a telephone handset, consisting of two quarter-disks of radius 1 on either side of a 1 by 4/π rectangle from which a half-disk of radius 2/π has been removed.
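Assuming the standard description of Hammersley's shape given above, its area can be checked with a few lines of Python:

```python
# Quick numerical check of the Hammersley lower bound quoted above: two unit
# quarter-disks plus a 1 x (4/pi) rectangle, minus a half-disk of radius 2/pi.

from math import pi

quarter_disks = 2 * (pi * 1**2 / 4)      # two quarter-disks of radius 1
rectangle = 1 * (4 / pi)                 # 1 by 4/pi rectangle
removed = pi * (2 / pi)**2 / 2           # half-disk of radius 2/pi removed

area = quarter_disks + rectangle - removed
print(area)                              # 2.2074..., i.e. pi/2 + 2/pi
```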
In 1992, Joseph L. Gerver of Rutgers University described a sofa specified by 18 curve sections, each taking a smooth analytic form. This further increased the lower bound for the sofa constant to approximately 2.2195.
Upper
Hammersley stated an upper bound on the sofa constant of at most 2√2 ≈ 2.8284. Yoav Kallus and Dan Romik published a new upper bound in 2018, capping the sofa constant at 2.37. Their approach involves rotating the corridor (rather than the sofa) through a finite sequence of distinct angles (rather than continuo
https://en.wikipedia.org/wiki/Swiss%20cheese%20%28mathematics%29 | In mathematics, a Swiss cheese is a compact subset of the complex plane obtained by removing from a closed disc some countable union of open discs, usually with some restriction on the centres and radii of the removed discs. Traditionally the deleted discs should have pairwise disjoint closures which are subsets of the interior of the starting disc, the sum of the radii of the deleted discs should be finite, and the Swiss cheese should have empty interior. This is the type of Swiss cheese originally introduced by the Swiss mathematician Alice Roth.
More generally, a Swiss cheese may be all or part of Euclidean space Rn – or of an even more complicated manifold – with "holes" in it. |
https://en.wikipedia.org/wiki/Edmonds%20matrix | In graph theory, the Edmonds matrix A of a balanced bipartite graph G = (U, V, E), with vertex sets U = {u1, u2, ..., un} and V = {v1, v2, ..., vn}, is defined by A_ij = x_ij if (u_i, v_j) ∈ E and A_ij = 0 otherwise,
where the xij are indeterminates. One application of the Edmonds matrix of a bipartite graph is that the graph admits a perfect matching if and only if the polynomial det(A) in the xij is not identically zero. Furthermore, the number of perfect matchings is equal to the number of monomials in the polynomial det(A), and is also equal to the permanent of A. In addition, the rank of A is equal to the maximum matching size of G.
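A small sketch using SymPy (with a hypothetical three-by-three bipartite graph) shows how the symbolic determinant test for a perfect matching can be carried out in practice:

```python
# Sketch using SymPy for a hypothetical bipartite graph on U = {u1, u2, u3} and
# V = {v1, v2, v3}; a non-zero symbolic determinant certifies a perfect matching.

from sympy import symbols, zeros

edges = {(1, 1), (1, 2), (2, 2), (3, 3)}   # (i, j) means u_i is adjacent to v_j
n = 3

A = zeros(n, n)                            # Edmonds matrix, filled with indeterminates
for i in range(1, n + 1):
    for j in range(1, n + 1):
        if (i, j) in edges:
            A[i - 1, j - 1] = symbols(f"x_{i}{j}")

det = A.det()
print(det)        # x_11*x_22*x_33: one monomial, hence exactly one perfect matching
print(det != 0)   # True: the determinant is not identically zero
```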
The Edmonds matrix is named after Jack Edmonds. The Tutte matrix is a generalisation to non-bipartite graphs. |
https://en.wikipedia.org/wiki/SAP%20BI%20Accelerator | In computing, the SAP BW Accelerator is a computer appliance - preinstalled software on predefined hardware - which is used to speed up OLAP queries. The software was initially known as the BI Accelerator.
SAP BW Accelerator includes indexes that are vertically inverted reproductions of all the data included in InfoCubes (i.e., fact and dimension tables as well as master data). Note that there is no relational or other database management system in BW Accelerator; there is only a file system, and indexes are essentially held as flat files. The second primary component of SAP BW Accelerator is the engine that processes the queries in memory, using the SAP TREX search engine. The software runs on an expandable rack of blade servers. The operating system used for BW Accelerator is 64-bit SUSE Linux Enterprise Server (SLES).
Hardware partners
The software is optimized for specific hardware and operating system combinations.
The list of partners which deliver the appliance is:
IBM BW Accelerator solution
HP
Fujitsu Siemens Computers
Sun BI Accelerator Offering |
https://en.wikipedia.org/wiki/Cryptophycin | Cryptophycins are a family of macrolide molecules that are potent cytotoxins and have been studied for potential antiproliferative properties useful in developing chemotherapy. They are members of the depsipeptide family.
History
Cryptophycins were originally discovered in 1990 in cyanobacteria of the genus Nostoc. Cryptophycins were patented as antifungal agents with an unknown mechanism of action and subsequently identified as microtubule inhibitors. Closely related molecules were reported in the marine sponge Dysidea arenaria, which were first given the name arenastatins. However, since cyanobacteria are common symbionts of sponges, it has been suggested that bacteria may be the true origin in cases where sponge and bacterial metabolites closely resemble one another. Nevertheless, study of the structure-activity relationships between the two subgroups of molecules led to improved understanding of their cytotoxic effects.
Mechanism of action
Cryptophycins are potent microtubule inhibitors, with a mechanism of action similar to that of vinca alkaloids. Treatment of cells with cryptophycins depletes microtubules through interaction with tubulin, thereby preventing cell division. Cryptophycins are capable of inducing apoptosis, possibly through other mechanisms in addition to that mediated by microtubule inhibition.
Clinical studies
Members of the cryptophycin family have been studied as anti-tumor agents. Cryptophycin-52, a synthetic analog of natural product cryptophycins also known as LY355703, reached phase II clinical trials but was withdrawn due to side effects.
Synthesis
Cryptophycins were first isolated from cyanobacteria but have subsequently been produced by chemical synthesis. Chemoenzymatic syntheses have also been reported. |
https://en.wikipedia.org/wiki/Organization%20of%20Biological%20Field%20Stations | The Organization of Biological Field Stations (OBFS) is a nonprofit multinational organization representing the field stations and research centers across Canada, United States, and Central America.
While it has no administrative or management control over its member stations, it helps to improve their effectiveness in research, education, and outreach through various initiatives. This includes promoting the establishment of research networks, working with public agencies to enhance funding sources, and building interactions between scientists and policy makers.
The OBFS collaborates with the National Center for Ecological Analysis and Synthesis (NCEAS), the University of California Natural Reserve System (UC NRS), and the Long Term Ecological Research Network Office in maintaining a comprehensive registry of scientific data sets which may be used in future research projects.
Since its establishment in 1963, the organization has grown to nearly two hundred member stations. Following this success, the International Organization of Biological Field Stations (IOBFS) was later created to facilitate the exchange of information and ideas at a larger geographic scale.
https://en.wikipedia.org/wiki/PYTHIA | PYTHIA is a computer simulation program for particle collisions at very high energies (see event (particle physics)) in particle accelerators.
History
PYTHIA was originally written in FORTRAN 77, until the 2007 release of PYTHIA 8.1 which was rewritten in C++. Both the Fortran and C++ versions were maintained until 2012 because not all components had been merged into the 8.1 version. However, the latest version already includes new features not available in the Fortran release. PYTHIA is developed and maintained by an international collaboration of physicists, consisting of Christian Bierlich, Nishita Desai, Leif Gellersen, Ilkka Helenius, Philip Ilten, Leif Lönnblad, Stephen Mrenna, Stefan Prestel, Christian Preuss, Torbjörn Sjöstrand, Peter Skands, Marius Utheim and Rob Verheyen.
Features
The following is a list of some of the features PYTHIA is capable of simulating:
Hard and soft interactions
Parton distributions
Initial/final-state parton showers
Multiparton interactions
Fragmentation and decay
See also
Particle physics
Particle decay |
https://en.wikipedia.org/wiki/CtRNA | In molecular biology ctRNA (counter-transcribed RNA) is a plasmid encoded noncoding RNA that binds to the mRNA of repB and causes translational inhibition.
ctRNA is encoded by plasmids and functions in rolling circle replication to maintain a low copy number. In Corynebacterium glutamicum, it achieves this by antisense pairing with the mRNA of RepB, a replication initiation protein.
In Enterococcus faecium the plasmid pJB01 contains three open reading frames, copA, repB, and repC. The pJB01 ctRNA is coded on the opposite strand from the copA/repB intergenic region and partially overlaps an atypical ribosome binding site for repB.
See also
S-element |
https://en.wikipedia.org/wiki/Epstein%E2%80%93Barr%20virus%20nuclear-antigen%20internal%20ribosomal%20entry%20site | The Epstein–Barr virus nuclear-antigen internal ribosome entry site (EBNA IRES) is an internal ribosome entry site (IRES) that is found in an exon in the 5' untranslated region of the Epstein–Barr virus nuclear antigen 1 (EBNA1) gene. The EBNA IRES allows EBNA1 translation to occur under situations where initiation from the 5' cap structure and ribosome scanning is reduced. It is thought that the EBNA IRES is necessary for the regulation of latent-gene expression.
The EBNA IRES is located in the U leader exon, which is a portion of the mRNA of the Epstein–Barr virus common to all four EBNA1 transcripts.
See also
Epstein–Barr virus stable intronic-sequence RNAs |
https://en.wikipedia.org/wiki/Mir-399%20microRNA%20precursor%20family | mir-399 is a microRNA that was identified in both Arabidopsis thaliana and Oryza sativa computationally and was later experimentally verified. mir-399 is thought to target mRNAs coding for a phosphate transporter. The mature sequence is excised from the 3' arm of the hairpin. There are multiple copies of MIR399 in each plant genome, for example A. thaliana contains six microRNA precursors that all give rise to an almost identical mature miR-399 sequence. |
https://en.wikipedia.org/wiki/Glossary%20of%20reconfigurable%20computing | This is a glossary of terms used in the field of Reconfigurable computing and reconfigurable computing systems, as opposed to the traditional Von Neumann architecture.
See also
Glossary of computer terms
Reconfigurable computing
https://en.wikipedia.org/wiki/Prime%20k-tuple | In number theory, a prime k-tuple is a finite collection of values representing a repeatable pattern of differences between prime numbers. For a k-tuple (a1, a2, ..., ak), the positions where the k-tuple matches a pattern in the prime numbers are given by the set of integers n such that all of the values n + a1, n + a2, ..., n + ak are prime. Typically the first value in the k-tuple is 0 and the rest are distinct positive even numbers.
Named patterns
Several of the shortest k-tuples are known by other common names: (0, 2) gives the twin primes, (0, 4) the cousin primes, (0, 6) the sexy primes, (0, 2, 6) and (0, 4, 6) the prime triplets, and (0, 2, 6, 8) the prime quadruplets.
OEIS sequence covers 7-tuples (prime septuplets) and contains an overview of related sequences, e.g. the three sequences corresponding to the three admissible 8-tuples (prime octuplets), and the union of all 8-tuples. The first term in these sequences corresponds to the first prime in the smallest prime constellation shown below.
Admissibility
In order for a k-tuple to have infinitely many positions at which all of its values are prime, there cannot exist a prime p such that the tuple includes every different possible value modulo p. For, if such a prime p existed, then no matter which value of n was chosen, one of the values formed by adding n to the tuple would be divisible by p, so there could only be finitely many prime placements (only those including p itself). For example, the numbers in a k-tuple cannot take on all three values 0, 1, and 2 modulo 3; otherwise the resulting numbers would always include a multiple of 3 and therefore could not all be prime unless one of the numbers is 3 itself. A k-tuple that satisfies this condition (i.e. it does not have a prime p for which it covers all the different values modulo p) is called admissible.
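The admissibility test described above translates directly into code; the following Python sketch checks whether any prime p ≤ k covers all residue classes modulo p:

```python
# Sketch of the admissibility test: a k-tuple is admissible unless some prime
# p <= k is such that the tuple hits every residue class modulo p (primes
# larger than k can never cover all p classes).

def primes_up_to(limit):
    return [p for p in range(2, limit + 1)
            if all(p % d for d in range(2, int(p ** 0.5) + 1))]

def is_admissible(tup):
    k = len(tup)
    for p in primes_up_to(k):
        if len({a % p for a in tup}) == p:   # all residues mod p are covered
            return False
    return True

print(is_admissible((0, 2)))        # True  (twin-prime pattern)
print(is_admissible((0, 2, 4)))     # False (covers 0, 1, 2 modulo 3)
print(is_admissible((0, 2, 6)))     # True  (prime-triplet pattern)
```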
It is conjectured that every admissible -tuple matches infinitely many positions in the sequence of prime numbers. However, there is no admissible tuple for which this has been proven except the 1-tuple (0). Nevertheless, by Yitang Zhang's famous proof of 2013 it follows that there exists at least one 2-tuple which matches infinitely many positions; subsequent work showe |
https://en.wikipedia.org/wiki/Go%20and%20mathematics | The game of Go is one of the most popular games in the world. As a result of its elegant and simple rules, the game has long been an inspiration for mathematical research. Shen Kuo, an 11th-century Chinese scholar, estimated in his Dream Pool Essays that the number of possible board positions is around 10^172. In more recent years, research of the game by John H. Conway led to the development of the surreal numbers and contributed to development of combinatorial game theory (with Go Infinitesimals being a specific example of its use in Go).
Computational complexity
Generalized Go is played on n × n boards, and the computational complexity of determining the winner in a given position of generalized Go depends crucially on the ko rules.
Go is “almost” in PSPACE, since in normal play, moves are not reversible, and it is only through capture that there is the possibility of the repeating patterns necessary for a harder complexity.
Without ko
Without ko, Go is PSPACE-hard. This is proved by reducing True Quantified Boolean Formula, which is known to be PSPACE-complete, to generalized geography, to planar generalized geography, to planar generalized geography with maximum degree 3, finally to Go positions.
Go with superko is not known to be in PSPACE. Though actual games seem never to last longer than n^2 moves, in general it is not known whether there is a polynomial bound on the length of Go games. If there were, Go would be PSPACE-complete. As it currently stands, it might be PSPACE-complete, EXPTIME-complete, or even EXPSPACE-complete.
Japanese ko rule
Japanese ko rules state that only the basic ko, that is, a move that reverts the board to the situation one move previously, is forbidden. Longer repetitive situations are allowed, thus potentially allowing a game to loop forever, such as the triple ko, where there are three kos at the same time, allowing a cycle of 12 moves.
With Japanese ko rules, Go is EXPTIME-complete.
Superko rule
The superko rule (also called |
https://en.wikipedia.org/wiki/Ultrasonic%20transducer | Ultrasonic transducers and ultrasonic sensors are devices that generate or sense ultrasound energy. They can be divided into three broad categories: transmitters, receivers and transceivers. Transmitters convert electrical signals into ultrasound, receivers convert ultrasound into electrical signals, and transceivers can both transmit and receive ultrasound.
Applications and performance
Ultrasound can be used for measuring wind speed and direction (anemometer), tank or channel fluid level, and speed through air or water. For measuring speed or direction, a device uses multiple detectors and calculates the speed from the relative distances to particulates in the air or water. To measure tank or channel liquid level, and also sea level (tide gauge), the sensor measures the distance (ranging) to the surface of the fluid. Further applications include: humidifiers, sonar, medical ultrasonography, burglar alarms and non-destructive testing.
Systems typically use a transducer that generates sound waves in the ultrasonic range, above 18 kHz, by turning electrical energy into sound, then upon receiving the echo turn the sound waves into electrical energy which can be measured and displayed.
This technology can also detect approaching objects and track their positions.
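As a back-of-the-envelope sketch of the pulse-echo ranging described above, the distance follows from halving the round-trip time of flight multiplied by an assumed speed of sound (about 343 m/s in air at room temperature):

```python
# Back-of-the-envelope pulse-echo ranging: the sensor emits a burst, times the
# echo, and halves the round-trip distance. The speed of sound is an assumed
# constant here (air at roughly 20 degrees C).

SPEED_OF_SOUND_AIR = 343.0  # m/s, approximate value at room temperature

def echo_distance(round_trip_time_s, speed=SPEED_OF_SOUND_AIR):
    """Distance to the reflecting surface from a pulse-echo time of flight."""
    return speed * round_trip_time_s / 2.0

print(echo_distance(0.0058))   # ~0.99 m: a 5.8 ms echo puts the target about 1 m away
```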
Ultrasound can also be used to make point-to-point distance measurements by transmitting and receiving discrete bursts of ultrasound between transducers. This technique is known as Sonomicrometry where the transit-time of the ultrasound signal is measured electronically (ie digitally) and converted mathematically to the distance between transducers assuming the speed of sound of the medium between the transducers is known. This method can be very precise in terms of temporal and spatial resolution because the time-of-flight measurement can be derived from tracking the same incident (received) waveform either by reference level or zero crossing. This enables the measurement resolution to far exceed |
https://en.wikipedia.org/wiki/Online%20refuelling | In nuclear power technology, online refuelling is a technique for changing the fuel of a nuclear reactor while the reactor is critical. This allows the reactor to continue to generate electricity during routine refuelling, and therefore improve the availability and profitability of the plant.
Benefits of online refuelling
Online refuelling allows a nuclear reactor to continue to generate electricity during periods of routine refuelling, and therefore improves the availability and therefore the economy of the plant. Additionally, this allows for more flexibility in reactor refuelling schedules, exchanging a small number of fuel elements at a time rather than high-intensity offline refuelling programmes.
The ability to refuel a reactor while generating power has the greatest benefits where refuelling is required at high frequency, for example during the production of plutonium suitable for nuclear weapons, which requires low-burnup fuel from short irradiation periods in a reactor. Conversely, frequent rearrangement of fuel within the core can balance the thermal load and allow higher fuel burnup, therefore reducing both the fuel requirements and the amount of high-level nuclear waste for disposal.
Although online refuelling is generally desirable, it requires design compromises which mean that it is often uneconomical. These include added complexity in the refuelling equipment, which must operate under pressure when refuelling gas- and water-cooled reactors. Online refuelling equipment for Magnox reactors proved to be less reliable than the reactor systems, and retrospectively its use was regarded as a mistake. Molten salt reactors and pebble-bed reactors also require online handling and processing equipment to replace the fuel during operation.
Reactor designs with online refuelling
Reactors with online refuelling capability to date have typically been either liquid sodium cooled, gas cooled, or cooled by water in pressurised |
https://en.wikipedia.org/wiki/Burnup | In nuclear power technology, burnup (also known as fuel utilization) is a measure of how much energy is extracted from a primary nuclear fuel source. It is measured as the fraction of fuel atoms that underwent fission in %FIMA (fissions per initial metal atom) or %FIFA (fissions per initial fissile atom) as well as, preferably, the actual energy released per mass of initial fuel in gigawatt-days/metric ton of heavy metal (GWd/tHM), or similar units.
Measures of burnup
Expressed as a percentage: if 5% of the initial heavy metal atoms have undergone fission, the burnup is 5%FIMA. If these 5% were the total of 235U that were in the fuel at the beginning, the burnup is 100%FIFA (as 235U is fissile and the other 95% heavy metals like 238U are not). In reactor operations, this percentage is difficult to measure, so the alternative definition is preferred. This can be computed by multiplying the thermal power of the plant by the time of operation and dividing by the mass of the initial fuel loading. For example, if a 3000 MW thermal (equivalent to 1000 MW electric at 33% efficiency, which is typical of US LWRs) plant uses 24 tonnes of enriched uranium (tU) and operates at full power for 1 year, the average burnup of the fuel is (3000 MW·365 d)/24 metric tonnes = 45.63 GWd/t, or 45,625 MWd/tHM (where HM stands for heavy metal, meaning actinides like thorium, uranium, plutonium, etc.).
Converting between percent and energy/mass requires knowledge of κ, the thermal energy released per fission event. A typical value is 193.7 MeV (3.1 × 10⁻¹¹ J) of thermal energy per fission (see Nuclear fission). With this value, the maximum burnup of 100%FIMA, which includes fissioning not just fissile content but also the other fissionable nuclides, is equivalent to about 909 GWd/t. Nuclear engineers often use this to roughly approximate 10% burnup as just less than 100 GWd/t.
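Both conversions can be reproduced with a short calculation; the sketch below uses 193.7 MeV per fission and, as a simplification, treats the heavy metal as pure 238U when counting atoms per tonne:

```python
# Reproduces the two conversions discussed above: the ~45.6 GWd/t example and
# the ~909 GWd/t equivalent of 100 %FIMA, assuming 193.7 MeV per fission and
# pure 238U for the atom count (a simplification).

AVOGADRO = 6.022e23          # atoms per mole
MEV_TO_J = 1.602e-13         # joules per MeV
GWD_TO_J = 1e9 * 86400       # joules per gigawatt-day
E_FISSION_MEV = 193.7        # thermal energy per fission

# Example from the text: 3000 MW(thermal) for one year on 24 t of fuel.
burnup_gwd_per_t = (3000e6 * 365 * 86400 / GWD_TO_J) / 24
print(f"{burnup_gwd_per_t:.1f} GWd/t")          # ~45.6

# Energy released if every heavy-metal atom in one tonne fissioned (100 %FIMA).
atoms_per_tonne = 1e6 / 238.0 * AVOGADRO
energy_j = atoms_per_tonne * E_FISSION_MEV * MEV_TO_J
print(f"{energy_j / GWD_TO_J:.0f} GWd/t")       # ~909
```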
The actual fuel may be any actinide that can support a chain reaction (meaning it is fissile), including uranium, plutonium, a |
https://en.wikipedia.org/wiki/The%20Moral%20Animal | The Moral Animal is a 1994 book by journalist Robert Wright, in which the author explores many aspects of everyday life through evolutionary biology.
Summary
Wright explores many aspects of everyday life through evolutionary biology. He provides Darwinian explanations for human behavior and psychology, social dynamics and structures, as well as people's relationships with lovers, friends, and family.
Wright borrows extensively from Charles Darwin's better-known publications, including On the Origin of Species (1859), but also from his chronicles and personal writings, illustrating behavioral principles with Darwin's own biographical examples.
Reception
The Moral Animal was a national bestseller and has been published in 12 languages; The New York Times Book Review chose it as one of its eleven Best Books of 1994. The linguist Steven Pinker praised The Moral Animal as a "fiercely intelligent, beautifully written and engrossingly original book" but "found his [Wright's] larger ethical arguments problematic." Neurologist Amy Wax wrote: "One measure of his [Wright's] success is that most of the incoherences in the book can be traced to weaknesses in the body of work he seeks to present, and not in Wright's exposition." The paleontologist Stephen Jay Gould wrote that The Moral Animal presents "pure guesswork" as science, and that the book owes its impact to "good writing and egregiously simplistic argument."
See also
Evolutionary ethics
Evolutionary psychology
John Stuart Mill
Kin selection
Reciprocal altruism
Richard Dawkins
Steven Pinker
The Naked Ape |
https://en.wikipedia.org/wiki/Mild%20cognitive%20impairment | Mild cognitive impairment (MCI) is a neurocognitive disorder which involves cognitive impairments beyond those expected based on an individual's age and education but which are not significant enough to interfere with instrumental activities of daily living. MCI may occur as a transitional stage between normal aging and dementia, especially Alzheimer's disease. It includes both memory and non-memory impairments. The cause of the disorder remains unclear, as well as both its prevention and treatment, with some 50 percent of people diagnosed with it going on to develop Alzheimer's disease within five years. The diagnosis can also serve as an early indicator for other types of dementia, although MCI may remain stable or even remit.
Mild cognitive impairment has been relisted as mild neurocognitive disorder in DSM-5, and in ICD-11, the latter effective on 1 January 2022.
Classification
MCI can present with a variety of symptoms, but is divided generally into two types.
Amnestic MCI (aMCI) is mild cognitive impairment with memory loss as the predominant symptom; aMCI is frequently seen as a prodromal stage of Alzheimer's disease. Studies suggest that these individuals tend to progress to probable Alzheimer's disease at a rate of approximately 10% to 15% per year. It is possible that being diagnosed with cognitive decline may serve as an indicator of MCI.
Nonamnestic MCI (naMCI) is mild cognitive impairment in which impairments in domains other than memory (for example, language, visuospatial, executive) are more prominent. It may be further divided as nonamnestic single- or multiple-domain MCI, and these individuals are believed to be more likely to convert to other dementias (for example, dementia with Lewy bodies).
The International Classification of Diseases classifies MCI as a "mental and behavioural disorder."
Causes
Mild cognitive impairment (MCI) may be caused due to alteration in the brain triggered during early stages of Alzheimer's disease or other forms |