Dataset schema: id (int64), url (string), text (string), source (string), categories (list), token_count (int64), subcategories (list)
40,565,028
https://en.wikipedia.org/wiki/PathVisio
PathVisio is a free, open-source pathway analysis and drawing software tool. It allows drawing, editing, and analyzing biological pathways, and it can visualize one's experimental data on pathways to find pathways that are over-represented in a data set. PathVisio provides a basic set of features for pathway drawing, analysis, and visualization; additional features are available as plugins.

History
PathVisio was created primarily at Maastricht University and the Gladstone Institutes. The software is developed in Java and is also used as part of the WikiPathways framework as an applet. Starting from version 3.0 (released in 2012), plugins are OSGi compliant, and a plugin directory describing them was developed. Version 3.2, released in 2015, was the first version signed with a certificate issued by a certification authority; it resolved many of the runtime issues introduced by the new security rules of Java 1.7 and 1.8. Since 2013 a JavaScript version (PVJS) has been in development to replace the applet. From 2015 it has also allowed small edits, and it is planned to become a full editor.

Features
Pathway drawing and annotation
Pathway analysis
Integration with WikiPathways for easy editing/publishing
Integration with Cytoscape
Integration with other programming languages via PathVisioRPC (see the sketch below)

References

External links
PathVisio on Twitter

Free bioinformatics software
Systems biology
Data and information visualization software
Mathematical and theoretical biology
Cross-platform software
Java platform software
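PathVisioRPC exposes PathVisio functionality over XML-RPC, so any language with an XML-RPC client can drive it. Below is a minimal sketch in Python; the server port and the method name are illustrative assumptions, not taken from the PathVisioRPC documentation:

```python
# Minimal sketch of driving PathVisio via PathVisioRPC (XML-RPC) from Python.
# The port (9000) and the method name (createPathway) are hypothetical
# placeholders; consult the PathVisioRPC documentation for the actual API.
import xmlrpc.client

# Connect to a PathVisioRPC server assumed to be running locally.
server = xmlrpc.client.ServerProxy("http://localhost:9000")

# XML-RPC methods are invoked like local functions on the proxy object.
result = server.createPathway("Example pathway", "example.gpml")
print(result)
```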
PathVisio
[ "Mathematics", "Biology" ]
304
[ "Applied mathematics", "Systems biology", "Mathematical and theoretical biology" ]
40,567,183
https://en.wikipedia.org/wiki/National%20Aerospace%20Week
National Aerospace Week is an event that celebrates aerospace in the United States. It was established in 2010 and has been celebrated by various government and private organizations, including NASA and the U.S. Department of Commerce. National Aerospace Week was established by the U.S. Congress in conjunction with the Aerospace Industries Association. In late 2010 a resolution supporting National Aerospace Week passed both houses of the U.S. Congress. In 2012 it was held September 16–22. In 2013 NASA Administrator Charlie Bolden noted National Aerospace Week. In 2014 it was held September 14–20; during that week the governor of the U.S. state of Utah honored the aerospace industry at a luncheon, and the United States Secretary of Commerce issued a statement for National Aerospace Week. National Aerospace Week was once again recognized by the U.S. Department of Commerce in 2017 through a letter from Secretary of Commerce Wilbur Ross. In addition to National Aerospace Week, NASA also celebrates National Aviation Day, which was established in 1939.

See also
Paris Air Show
National Aviation Day

References

External links
National Aerospace Week
Observing National Aerospace Week (NASA)

National Aerospace Week
Aerospace
September observances
National Aerospace Week
[ "Physics", "Astronomy" ]
245
[ "Aerospace", "Outer space", "Astronomy stubs", "Space", "Spacetime", "Outer space stubs" ]
40,568,034
https://en.wikipedia.org/wiki/Neuroenhancement
Neuroenhancement or cognitive enhancement is the experimental use of pharmacological or non-pharmacological methods intended to improve cognitive and affective abilities in healthy people who do not have a mental illness. Agents or methods of neuroenhancement are intended to produce cognitive, social, psychological, mood, or motor benefits beyond normal functioning. Pharmacological neuroenhancement agents may include compounds thought to be nootropics, such as modafinil, caffeine, and other drugs used for treating people with neurological disorders. Non-pharmacological measures of cognitive enhancement may include behavioral methods (activities, techniques, and changes), non-invasive brain stimulation, which has been used with the intent to improve cognitive and affective functions, and brain-machine interfaces.

Potential agents
There are many supposed nootropics, most having only small effect sizes in healthy individuals. The most common pharmacological agents in neuroenhancement include modafinil and methylphenidate (Ritalin). Stimulants in general, as well as various dementia treatments and other neurological therapies, may affect cognition. Neuroenhancement may also target:
mood ('mood enhancement')
motivation
sociability (e.g., talking-related or empathy)
creativity
cognitive endurance
psychological resilience
Enhancers are multidimensional and can be clustered into biochemical, physical, and behavioral enhancement strategies.

Modafinil
Approved for treating narcolepsy, obstructive sleep apnea, and shift work sleep disorder, modafinil is a wakefulness-promoting drug used to decrease fatigue, increase vigilance, and reduce daytime sleepiness. Modafinil improves alertness, attention, long-term memory, and daily performance in people with sleep disorders. In sustained sleep deprivation, repeated use of modafinil helped individuals maintain higher levels of wakefulness than a placebo, but did not help attention and executive function. Modafinil may impair one's self-monitoring ability: a common trend in research studies was that participants rated their performance on cognitive tests higher than it actually was, suggesting an "overconfidence" effect.

Methylphenidate
Methylphenidate (MPH), also known as Ritalin, is a stimulant used to treat attention-deficit hyperactivity disorder (ADHD). MPH is abused by a segment of the general population, especially college students. A comparison of MPH sales with the number of people for whom it was prescribed revealed a disproportionate ratio, indicating high abuse. MPH may impair cognitive performance.

Others
Studies are too preliminary to determine whether there are any cognitive-enhancing effects of agents such as memantine or acetylcholinesterase inhibitors (examples: donepezil, galantamine).

Possible adverse effects
Common drugs intended for neuroenhancement are typically well tolerated by healthy people; these drugs are already in mainstream use to treat people with different kinds of psychiatric disorders. Potential adverse effects are assessed through drop-out rates and subjective ratings. Drop-out rates were minimal or non-existent for donepezil, memantine, MPH, and modafinil. In drug trials, participants reported the following adverse reactions to donepezil, memantine, MPH, modafinil, or caffeine: gastrointestinal complaints (nausea), headache, dizziness, nightmares, anxiety, drowsiness, nervousness, restlessness, sleep disturbances, insomnia, and diuresis.
The side effects normally ceased in the course of treatment. Various factors, such as dosage, timing, and concurrent behavior, may influence the onset of adverse effects.

Non-pharmacological

Neurostimulation
Neurostimulation methods are being researched and developed. Results indicate that the details of the stimulation procedures are crucial, with some applications impairing rather than enhancing cognition, and questions have been raised about whether this approach can deliver any meaningful results for cognitive domains. Stimulation methods include electrical stimulation, magnetic stimulation, optical stimulation with lasers, several forms of acoustic stimulation, and physical methods such as forms of neurofeedback.

Software and media
Applications of augmented reality technologies may support general memory enhancement, extended perception, and learning assistance. The Internet may be considered a tool for enabling or extending cognition. However, it is not "a simple, uniform technology, [n]either in its composition, [n]or in its use" and, as "an informational resource, currently fails to enhance cognition", partly due to issues that include information overload, misinformation, and persuasion.

Quality and social issues

Validation and quality control
Quality standards, validation and authentication, sampling, and lab testing are commonly substandard or absent for products thought to be cognitive enhancers, including dietary supplements.

Well-being and productivity
Neuroenhancement products or methods are used with the intent to:
improve well-being
possibly encourage societal productivity
increase incentives to develop potential therapies for various brain diseases, such as Alzheimer's disease.

In popular culture
Neuroenhancement products are mentioned in entertainment productions, such as Limitless (2011), which may to some degree probe and explore the opportunities and threats of using such products.

Prevalence
In general, people under the age of 25 feel that neuroenhancement agents are acceptable or that the decision to use them is to be made individually. Healthcare officials and parents are concerned about safety, the lack of complete information on these agents, and possible irreversible adverse effects; such concerns may reduce the willingness to take such agents. A 2024 study based on a representative sample of more than 20,000 adults in Germany showed that around 70% of those surveyed had taken substances with the aim of improving mental performance within a year, without a medical prescription. The consumption of caffeinated drinks, such as coffee and energy drinks, expressly with the aim of improving performance, was widespread (64% of users), followed by dietary supplements and home remedies, such as ginkgo biloba (31%). Around 4% stated that they had taken prescription drugs for cognitive enhancement (lifetime prevalence of 6%), corresponding to around 2.5 million users in Germany. A 2016 German study among 6,454 employees found a rather low lifetime prevalence of cognitive-enhancement prescription drug use (3%), while the willingness to take such drugs was found in 10% of respondents. A survey of some 5,000 German university students found a relatively low 30-day prevalence of 1%, while 2% of those sampled had used such drugs within the last 6 months, 3% within the last 12 months, and 5% over their lifetimes.
Of those students who used such substances during the last 6 months, 39% reported their use once in this period, 24% twice, 12% three times, and 24% more than three times. Consumers of neuroenhancement drugs are more willing to use them again in the future due to positive experiences or a tendency towards addiction.

See also
Alertness
Cognitive development
Cosmetic pharmacology
Deep brain stimulation
Intelligence amplification
Neurohacking
Neuromarketing
Neuroregeneration
Performance enhancement
Transcranial magnetic stimulation

References

Cognition
Intelligence
Neuroscience
Neurotechnology
Transhumanism
Neuroenhancement
[ "Technology", "Engineering", "Biology" ]
1,529
[ "Neuroscience", "Genetic engineering", "Transhumanism", "Ethics of science and technology" ]
40,568,371
https://en.wikipedia.org/wiki/Activation-induced%20cell%20death
AICD (activation-induced cell death) is programmed cell death caused by the interaction of Fas receptors (Fas, CD95) and Fas ligands (FasL, CD95 ligand). AICD is a negative regulator of activated T lymphocytes, triggered by repeated stimulation of their T-cell receptors (TCR), and helps to maintain peripheral immune tolerance. Alteration of the process may lead to autoimmune diseases. The AICD effector cell is one that expresses FasL; apoptosis is induced in the cell expressing the Fas receptor. Both activated T cells and B cells express Fas and undergo clonal deletion by the AICD mechanism. Activated T cells that express both Fas and FasL may be killed by themselves or by each other.

Signaling
The binding of Fas ligand to Fas receptor triggers trimerization of Fas, whose cytoplasmic domain is then able to bind the death domain of the adaptor protein FADD (Fas-associated protein with death domain). Procaspase 8 binds to FADD's death effector domain (DED) and proteolytically self-activates as caspase 8. Fas, FADD, and procaspase 8 together form a death-inducing signaling complex (DISC). Activated caspase 8 is released into the cytosol, where it activates the caspase cascade that initiates apoptosis.

Regulation of Fas-FasL and AICD
FasL is primarily regulated at the transcriptional level. (The other option is regulation of the signal emanating from the death receptor itself, which controls sensitivity to the induction of apoptosis.) NFAT activated by TCR stimulation activates FasL transcription, possibly indirectly by upregulating early growth response proteins. T cell activation-induced transcription of FasL is further regulated by c-Myc–MAX heterodimers and can be blocked by c-Myc downregulation. Interferon regulatory factors IRF1 and IRF2 also upregulate FasL transcription by binding directly to the FasL promoter. Not much is known about the regulation of Fas and other death receptors. However, overexpression of the protein CFLAR (caspase and FADD-like apoptosis regulator) inhibits Fas-mediated apoptosis.

See also
Immune system
Autoimmunity

References

Cell biology
Activation-induced cell death
[ "Biology" ]
509
[ "Cell biology" ]
50,640,842
https://en.wikipedia.org/wiki/Dirac%20cone
In physics, Dirac cones are features that occur in some electronic band structures and describe the unusual electron transport properties of materials like graphene and topological insulators. In these materials, at energies near the Fermi level, the conduction and valence bands take the shape of the upper and lower halves of a conical surface, meeting at what are called Dirac points. Typical examples include graphene, topological insulators, bismuth antimony thin films, and some other novel nanomaterials, in which the electronic energy and momentum have a linear dispersion relation, so that the electronic band structure near the Fermi level takes the shape of an upper conical surface for the electrons and a lower conical surface for the holes. The two conical surfaces touch each other and form a zero-band-gap semimetal. The name comes from the Dirac equation, proposed by Paul Dirac, which describes relativistic particles in quantum mechanics. Isotropic Dirac cones in graphene were first predicted by P. R. Wallace in 1947 and experimentally observed by the Nobel Prize laureates Andre Geim and Konstantin Novoselov in 2005.

Description
In quantum mechanics, Dirac cones are a kind of crossing point that electrons avoid: the energies of the valence and conduction bands are not equal anywhere in the two-dimensional lattice k-space except at the zero-dimensional Dirac points. As a result of the cones, electrical conduction can be described by the movement of charge carriers that are massless fermions, a situation handled theoretically by the relativistic Dirac equation. The massless fermions lead to various quantum Hall effects, magnetoelectric effects in topological materials, and ultra-high carrier mobility. Dirac cones were observed in 2008–2009, using angle-resolved photoemission spectroscopy (ARPES), on the potassium-graphite intercalation compound KC8 and on several bismuth-based alloys.

As objects with three dimensions (energy plus the two components of crystal momentum), Dirac cones are a feature of two-dimensional materials or surface states, based on a linear dispersion relation between the energy and the two components of the crystal momentum, k_x and k_y. However, this concept can be extended to three dimensions, where Dirac semimetals are defined by a linear dispersion relation between the energy and k_x, k_y, and k_z. In k-space, this shows up as a hypercone, with doubly degenerate bands that also meet at Dirac points. Dirac semimetals possess both time-reversal and spatial-inversion symmetry; when one of these is broken, the Dirac points split into two constituent Weyl points, and the material becomes a Weyl semimetal. In 2014, direct observation of the Dirac semimetal band structure using ARPES was conducted on the Dirac semimetal cadmium arsenide.

Analog systems
Dirac points have been realized in many physical areas such as plasmonics, phononics, and nanophotonics (microcavities, photonic crystals).

See also
Dirac matter

References

Further reading

Electronic band structures
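To make the linear dispersion explicit: near a Dirac point in a two-dimensional material such as graphene, the conduction and valence bands take the standard conical form

$$E_{\pm}(\mathbf{k}) = \pm \hbar v_F \, |\mathbf{k}| = \pm \hbar v_F \sqrt{k_x^2 + k_y^2}$$

where the crystal momentum k is measured from the Dirac point, the + branch is the electron (conduction) cone, the − branch is the hole (valence) cone, and v_F is the Fermi velocity (roughly 10^6 m/s in graphene). The two bands touch at E = 0, which is why the band gap vanishes at the Dirac point.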
Dirac cone
[ "Physics", "Chemistry", "Materials_science" ]
641
[ "Electron", "Electronic band structures", "Materials", "Condensed matter physics", "Semimetals", "Matter" ]
50,642,014
https://en.wikipedia.org/wiki/List%20of%20dipterans%20of%20Sri%20Lanka
Sri Lanka is a tropical island situated close to the southern tip of India. Its invertebrate fauna is as large as that found in other regions of the world. About 2 million species of arthropods are known worldwide, and more are still being discovered, which makes it very difficult to summarize the exact number of species found within a given region. This is a list of the dipterans found in Sri Lanka.

Fly
Phylum: Arthropoda
Class: Insecta
Order: Diptera

Diptera is a large order containing an estimated 1,000,000 species, including mosquitoes and horseflies. Mosquitoes (Culicidae) are vectors for malaria, dengue, West Nile virus, yellow fever, encephalitis, and other infectious diseases. Houseflies spread food-borne illnesses. Larger flies such as the tsetse fly and screwworm cause significant economic harm to cattle. Well over 3,500 species of mosquitoes have been found and described, and new species continue to be discovered. Sri Lanka is home to 131 species of mosquitoes in 16 genera, of which 17 species are endemic. Blowfly larvae, known as gentles, and other dipteran larvae, known more generally as maggots, are used as fishing bait and as food for carnivorous animals. In medical debridement, wounds are cleaned using maggots. The exact number of species confined to the country is very difficult to determine, because few researchers and published papers have focused on them. Most of the cited references date from the 1900s, and very few are from revisions since 2010. In 2020, two stalk-eyed flies were described from Pundaluoya and Udawattakele. More than 1,341 dipteran species are found on the island, making Diptera the fourth largest insect order recorded there.

Family: Acroceridae - Spider flies Astomella jardinei Lasia spinosa Ogcodes angustimarginatus Ogcodes rufomarginatus Pialea jardinei Family: Agromyzidae - Leaf-miner flies Agromyza ceylonensis Agromyza solita Amauromyza flavida Amauromyza meridionalis Cerodontha incisa Cerodontha oryzivora Cerodontha piliseta Japanagromyza perplexa Japanagromyza tristella Liriomyza brassicae Liriomyza huidobrensis Liriomyza pusilla Liriomyza sativae Melanagromyza albisquama Melanagromyza atomella Melanagromyza cleomae Melanagromyza hibisci Melanagromyza obtusa Melanagromyza pubiseta Melanagromyza rotata Melanagromyza sojae Ophiomyia aberrans Ophiomyia atralis Ophiomyia conspicua Ophiomyia lantanae Ophiomyia phaseoli Phytoliriomyza arctica Phytoliriomyza australensis Phytoliriomyza nigriantennalis Phytoliriomyza rangalensis Phytomyza ceylonensis Phytomyza syngenesiae Pseudonapomyza asiatica Pseudonapomyza gujaratica Pseudonapomyza quatei Tropicomyia polyphaga Tropicomyia theae Family: Anisopodidae - Wood gnats Olbiogaster fulviventris Olbiogaster orientalis Olbiogaster zeylanicus Sylvicola foveatus Sylvicola maculipennis Family: Anthomyiidae - Root-maggot flies Emmesomyia kempi Family: Asilidae - Robber flies Astochia annulipes Astochia ceylonicus Astochia determinatus Astochia grisea Astochia tarsalis Chrysopogon nodulibarbis Chrysopogon serrulatus Chrysopogon zizanioides Clephydroneura annulatus Clephydroneura apicalis Clinopogon odontoferus Cophinopoda chinensis Damalis felderi Damalis fulvipes Damalis infuscata Damalis kassebeeri Dasophrys coetzeei Emphysomera femorata Emphysomera nigra Euscelidia flava Euscelidia prolata Euscelidia simplex Eutolmus ohirai Heligmoneura pulcher Heteropogon triticeus Laphystia stigmaticallis Machimus parvus Michotamia aurata Michotamia deceptus Microstylum rufoabdominalis Microstylum whitei Neoitamus
ceylonicus Neoitamus pulcher Neoitamus tarsalis Neomochtherus gnavus Nusa yerburyi Pegesimallus srilankensis Pegesimallus yerburyi Philodicus ceylanicus Philodicus chinensis Philodicus hospes Philodicus meridionalis Philodicus thoracinus Promachus ceylanicus Promachus pseudomaculatus Promachus yerburiensis Saropogon maculipennis Saropogon srilankaensis Scleropogon piceus Stenopogon piceus Stenopogon variabilis Family: Asteiidae - Asteiid flies Asteia pusillima Family: Athericidae - Ibis flies Atherix labiate Suragina elegans Suragina labiata Suragina uruma Family: Blephariceridae - Net-winged midges Hammatorrhina bella Hammatorrhina pulchra Family: Bombyliidae - Bee flies Anthrax ceylonica Anthrax distigma Anthrax semifuscatus Bombylisoma resplendens Bombylius ardens Bombylius brunettii Bombylius dives Bombylius maculatus Bombylius propinquus Dischistus resplendens Euchariomyia dives Exoprosopa affinissima Exoprosopa bengalensis Exoprosopa brahma Exoprosopa flammea Exoprosopa ghilarovi Exoprosopa insulata Exoprosopa niveiventris Exoprosopa punjabensis Exoprosopa stylata Exoprosopa tursonovi Geron argentifrons Heteralonia neurospila Exoprosopa affinissima Ligyra semifuscata Ligyra sphinx Litorhina lar Micomitra vitripennis Petrorossia brunettii Petrorossia ceylonica Petrorossia intermedia Petrorossia talawila Systoechus eupogonatus Systoechus socius Systoechus srilankae Thyridanthrax absalon Thyridanthrax keiseri Villa approximata Villa fletcheri Family: Bibionidae - March flies Plecia malayaensis Plecia mallochi Plecia rufilatera Family: Calliphoridae - Blow flies Bengalia bezzii Bengalia fuscipennis Bengalia hastativentris Bengalia jejuna Bengalia obscuripennis Bengalia torosa Borbororhinia bivittata Caiusa testacea Chrysomya megacephala Chrysomya nigripes Chrysomya pinguis Chrysomya rufifacies Chrysomya villeneuvi Cosmina simplex Hemipyrellia ligurriens Idiella divisa Idiella euidielloides Idiella mandarina Isomyia fulvicornis Isomyia pseudonepalana Isomyia pseudoviridana Isomyia sinharaja Isomyia versicolor Isomyia yerburyi Isomyia zeylanica Lucilia caesia Lucilia obesa Lucilia porphyrina Metallea clausa Metallea flavibasis Metallea major Metallea notata Onesia danielssoni Onesia kamimurai Onesia lanka Phumosia indica Phumosia testacea Polleniopsis nigripalpis Polleniopsis zaitzevi Rhinia apicalis Rhinia melastoma Rhyncomya currani Rhyncomya divisa Stomorhina cincta Stomorhina discolor Stomorhina lunata Stomorhina nigripes Strongyloneura prolata Tainanina pilisquama Tainanina sarcophagoides Thoracites abdominalis Thoracites miltogrammoides Family: Canacidae - Surge flies Chaetocanace brincki Dasyrhicnoessa fulva Dasyrhicnoessa vockerothi Horaismoptera hennigi Nocticanace taprobane Procanace grisescens Pseudorhicnoessa rattii Xanthocanace zeylanica Family: Cecidomyiidae - Gall midges Androdiplosis coccidivora - monotypic endemic Arthrocnodax rutherfordi Arthrocnodax walkeriana Bryocrypta girafa Calopedila polyalthiae Cecidomyiidi generosi Cecidomyiidi hirta Chrysodiplosis squamatipes Clinodiplosis ceylonica Coccomyza donaldi Dentifibula obtusilobae Diadiplosis coccidivora Didactylomyia ceylanica Diplecus inconspicuus - monotypic endemic Endaphis hirta Feltiella acarisuga Hallomyia iris Lasioptera aeschynanthusperottetti Lestodiplosis ceylanica Lestremia ceylanica Misocosmus ceylanicus - monotypic endemic Mycodiplosis simulacri Mycodiplosis simulaeri Orseolia ceylanica Orseolia ceylonica Plutodiplosis maginifica Plutodiplosis magnifica Pseudoperomyia parvolobata Trichoperrisia pipericola 
- monotypic endemic Vanchidiplosis ceylanica Xylodiplosis aestivalis Family: Celyphidae - Beetle flies Celyphus anisotomoides Celyphus hyalinus Celyphus obtectus Spaniocelyphus bigoti Spaniocelyphus cognatus Family: Ceratopogonidae - Biting midges Alluaudomyia marginalis Alluaudomyia spinosipes Alluaudomyia xanthocoma Alluaudomyia bifurcata Alluaudomyia formosana Alluaudomyia fuscipes Alluaudomyia maculosipennis Atrichopogon schizonyx Parabezzia orientalis Family: Chamaemyiidae - Silver flies Acrometopia reicherti Leucopis luteicornis Family: Chaoboridae - Phantom midges Chaoborus asiaticus Corethrella inepta Family: Chironomidae - Nonbiting midges Ablabesmyia annulatipes Cardiocladius ceylanicus Chironomini albiforceps Chironomini ceylanicus Chironomini striatipennis Chironomus allothrix Chironomus chlogaster Chironomus elatus Chironomus gloriosus Chironomus heptatomus Chironomus hexatomus Chironomus laminatus Chironomus perichlorus Chironomus pretiosus Chironomus sumptuosus Chironomus superbus Chironomus variicornis Clinotanypus ceylanicus Clinotanypus ornatissimus Clinotanypus variegatus Nilodorum biroi Nilodorum tainanus Orthocladiinae ceylanicus Polypedilum nubifer Tanypus pallidipes Tanytarsini prasiogaster Tanytarsini transversalis Tanytarsus ceylanicus Tanytarsus lobatus Tanytarsus poecilus Family: Chloropidae - Eye flies Anatrichus pygmaeus Anthracophagella albovariegata Anthracophagella sulcifrons Arcuator horni Cadrema bilineata Cadrema colombensis Cadrema minor Cadrema ocellata Cestoplectus intuens Chlorops laevifrons Chlorops lutheri Chlorops quadrilineata Chlorops zeylanicus Chloropsina brunnescens Chloropsina lacreiventris Conioscinella humeralis Dactylothyrea hyalipennis Dactylothyrea spinipes Elachiptera indistincta Ensiferella ceylonica Ensiferella obscurella Eutropha flavomaculata Eutropha siphloidea Gampsocera grandis Gampsocera mutata Lasiopleura fulvitarsis Lasiopleura zeylanica Meijerella inaequalis Merochlorops ceylanicus Myrmecosepsis taprobane Pachylophus rufescens Parectecephala indica Polyodaspis flavipila Polyodaspis ruficornis Pseudeurina maculata Rhodesiella ceylonica Rhodesiella foveata Rhodesiella nana Rhodesiella planiscutellata Rhodesiella sanctijohani Rhodesiella sauteri Rhodesiella scutellata Rhodesiella tibiella Scoliophthalmus micans Semaranga dorsocentralis Sepsidoscinis maculipennis Siphlus vittatus Siphonellopsina kokagalensis Siphunculina funicola Siphunculina intonsa Thaumatomyia semicolon Family: Clusiidae - Druid flies Czernyola sasakawai Phylloclusia lanceola Family: Conopidae - Thick-headed flies Conops ceylonicus Conops keiseri Conops nubeculosus Physocephala aurantiaca Physocephala diffusa Physocephala limbipennis Physocephala tenella Pleurocerinella dioctriaeformis Pleurocerinella srilankai Stylogaster orientalis Family: Cryptochetidae Cryptochetum curtipenne Family: Cypselosomatidae Formicosepsis brincki Family: Culicidae - Mosquitoes Family: Curtonotidae - Quasimodo flies Axinota sarawakensis Axinota simulans Curtonotum ceylonense Family: Diopsidae - Stalk-eyed flies Pseudodiopsis bipunctipennis Teleopsis ferruginea Teleopsis krombeini Teleopsis maculata Teleopsis neglecta Teleopsis rubicunda Teleopsis sorora Family: Dixidae - Meniscus midges Dixa zeylanica Family: Dolichopodidae - Long-legged flies Amblypsilopus bruneli Amblypsilopus munroi Argyrochlamys impudicus Campsicnemus crossotibia Chrysosoma annotatum Chrysosoma appendiculatum Chrysosoma armillatum Chrysosoma congruens Chrysosoma cupido Chrysosoma derisior Chrysosoma duplicatum 
Chrysosoma excisum Chrysosoma extractum Chrysosoma fasciatum Chrysosoma infirme Chrysosoma kandyensis Chrysosoma ovale Chrysosoma palapes Chrysosoma pallidum Chrysosoma petulans Chrysosoma pulcherrimum Chrysosoma pusilum Chrysosoma shentorea Chrysosoma vittatum Chrysotus degener Condylostylus impar Condylostylus lutheri Condylostylus setifer Diaphorus detectus Diaphorus maurus Diaphorus nigerrimus Diaphorus rostratus Diaphorus simulans Diaphorus vagans Dolichopodinae torquata Dolichopus hirsutisetis Hydrophorus geminus Lichtwardtia ziczac Medetera austroapicalis Medetera chandleri Medetera grisecens Megistostylus longicornis Mesorhaga breviapendiculata Mesorhaga breviappendiculata Mesorhaga mellavana Mesorhaga nigrobarbata Mesorhaga nigroviridis Mesorhaga obscura Mesorhaga pseudolata Mesorhaga terminalis Neurigona denudata Neurigona exemta Paraclius adligatus Paraclius albimanus Paraclius callosus Paraclius luculentus Paraclius maritimus Paraclius paraguayensis Paraclius trisetosus Paraclius viridus Pelastoneurus aequalis Pelastoneurus crassinervis Pelastoneurus potomacus Plagiozopelma santense Sciapus aequalis Sciapus rectus Sciapus viridicollis Sympycnus albipes Sympycnus maculatus Sympycnus strenuus Sympycnus turbidus Syntormon edwardsi Tachytrechus tessellatus Urodolichus keiseri Family: Drosophilidae - Fruit flies Amiota magna Amiota subradiata Cacoxenus asiatica Colocasiomyia minor Colocasiomyia nigripennis Colocasiomyia zeylanica Chymomyza rufithorax Chymomyza pararufithorax Chymomyza flagellata Chymomyza formosana Chymomyza brevis Chymomyza cinctifrons Chymomyza cirricauda Chymomyza constricta Chymomyza cyanea Crincosia gugorum Dettopsomyia formosa Dettopsomyia jacobsoni Dettopsomyia preciosa Dettopsomyia zeylanica Diathoneura preciosa Dichaetophora cirricauda Dichaetophora constricta Dichaetophora cyanea Dichaetophora fascifrons Dichaetophora nigrifrons Dichaetophora paraserrata Dichaetophora quadrifrons Drosophila melanogaster Drosophila chandleri Hirtodrosophila chandleri Hirtodrosophila seminigra Hirtodrosophila trivittata Hypselothyrea varanasiensis Laccodrosophila atra Leucophenga abbreviata Leucophenga acutipollinosa Leucophenga albofasciata Leucophenga angusta Leucophenga argentata Leucophenga atrinervis Leucophenga bellula Leucophenga digmasoma Leucophenga flavicosta Leucophenga interrupta Leucophenga jacobsoni Leucophenga limbipennis Leucophenga lynettae Leucophenga maculata Leucophenga meijerei Leucophenga nigripalpis Leucophenga nigroscutellata Leucophenga pectinata Leucophenga quadripunctata Leucophenga setipalpis Leucophenga subpollinosa Leucophenga umbratula Liodrosophila actinia Liodrosophila ceylonica Liodrosophila crescens Liodrosophila globosa Liodrosophila ornata Liodrosophila varians Lissocephala metallescens Lordiphosa nigrostyla Lordiphosa spinopenicula Microdrosophila bullata Microdrosophila conica Microdrosophila elongata Microdrosophila filamentea Microdrosophila furcata Microdrosophila macroctenia Microdrosophila matsudairai Microdrosophila nigrispina Microdrosophila pleurolineata Microdrosophila sarawakana Microdrosophila tectifrons Mulgravea asiatica Mulgravea vittata Mycodrosophila alienata Mycodrosophila amabilis Mycodrosophila aqua Mycodrosophila ciliophora Mycodrosophila gordoni Mycodrosophila gratiosa Mycodrosophila parallelinervis Paramycodrosophila pictula Pararhinoleucophenga maura Phortica foliiseta Phortica xyleboriphaga Phorticella bistriata Scaptodrosophila anderssoni Scaptodrosophila brincki Scaptodrosophila cederholmi Scaptodrosophila 
coniura Scaptodrosophila excavata Scaptodrosophila nigrescens Scaptodrosophila subminima Scaptomyza bipars Scaptomyza brachycerca Scaptomyza clavifera Scaptomyza devexa Scaptomyza elmoi Scaptomyza salvadorae Sphaerogastrella javana Stegana castanea Stegana lateralis Stegana nigrifrons Stegana subconvergens Tambourella sphaerogaster Zaprionus sepsoides Zapriothrica hirta Zapriothrica nudiseta Zygothrica flavociliata Zygothrica fuscina Zygothrica vittinubila Family: Empididae - Balloon flies Empis carbonaria Empis ceylonica Hilarempis neptunus Wiedemannia submarina Family: Ephydridae - Shore flies Actocetor nigrifinis Ceropsilopa cupreiventris Ceropsilopa decussata Chlorichaeta orba Chlorichaeta tuberculosa Clasiopella uncinata Discocerina obscurella Discomyza maculipennis Dryxo brahma Dryxo lispoidea Hecamede granifera Hecamedoides hepatica Hydrellia griseola Hydrellia latipalpis Lamproclasiopa biseta Leptopsilopa pollinosa Notiphila bipunctata Notiphila dorsopunctata Notiphila indistincta Notiphila philippinensis Notiphila puberula Notiphila puncta Notiphila simalurensis Rhynchopsilopa ceylonensis Paralimna hirticornis Paralimna javana Paralimna lineata Paralimna picta Paralimna quadrifascia Placopsidella phaeonota Polytrichophora brunneifrons Psilopa flavimanus Rhynchopsilopa ceylonensis Family: Fanniidae - Little house flies Euryomma peregrinum Fannia canicularis Family: Hippoboscidae - Louse flies Ascodipteron emballonurae Brachytarsina cucullata Brachytarsina modesta Brachytarsina pygialis Brachytarsina speiseri Hippobosca longipennis Hippobosca variegata Lipoptena axis Lipoptena efovea Lynchia corvina Lynchia longipalpis Myophthiria reduvioides Myophthiria zeylanica Ornithoctona plicata Ornithoica curvata Raymondia pagodarum Family: Hybotidae - Dance flies Bicellaria bisetosa Drapetis abdominenotata Drapetis basalis Drapetis distincta Drapetis fulvithorax Drapetis metatarsata Drapetis nigropunctata Drapetis notatithorax Drapetis plumicornis Elaphropeza distincta Elaphropeza pollicata Hybos apicis Hybos bistosus Hybos geniculatus Parahybos luteicornis Parahybos maculithorax Platypalpus ceylonensis Platypalpus zelanica Stilpon divergens Syndyas jovis Syndyas parvicellulata Syneches bigoti Syneches fuscipennis Syneches helvolus Syneches immaculatus Syneches jardinei Syneches maculithorax Syneches minutus Syneches peradeniyae Syneches signatus Syneches singatus Syneches varipes Trichina ceylonica Family: Lauxaniidae Cerataulina boettcheri Chaetolauxania sulphuriceps Drepanephora horrida Homoneura bistriata Homoneura crassicauda Homoneura curta Homoneura intereuns Homoneura leucoprosopon Homoneura lucida Homoneura ornatipennis Homoneura sauteri Homoneura spiculata Homoneura trypetoptera Homoneura yerburyi Pachycerina javana Phobeticomyia lunifera Poecilolycia vittata Steganopsis fuscipennis Steganopsis multilineata Steganopsis pupicola Steganopsis tripunctata Trigonometopus zeylanicus Family: Keroplatidae - Fungus gnats Burmacrocera minuta - monotypic genus Heteropterna fenestralis Isoneuromyia annandalei Keroplatus notaticoxa Laurypta tripunctata Macrocera fryeri Orfelia bibula Orfelia negotiosa Orfelia saeva Orfelia ventosa Platyceridion edax - endemic genus Platyceridion talaroceroides Platyura fumipes Platyura juxta Platyura lunifrons Platyura minuta Platyura tripunctata Proceroplatus poecilopterus Proceroplatus pulchripennis Rutylapa juxta Srilankana mirabilis - monotypic endemic Truplaya fumipes Xenoplatyura lunifrons Family: Limoniidae - Limoniid crane flies Antocha salikensis 
Baeoura pollicis Baeoura taprobanes Baeoura triquetra Conosia irrorata Conosia minuscula Dicranomyia guttula Dicranomyia ravana Dicranomyia rectidens Dicranomyia saltens Dicranomyia sielediva Dicranomyia sordida Dicranomyia tipulipes Dolichopeza flavicans Dolichopeza guttulanalis Dolichopeza palifera Dolichopeza singhalica Ellipteroides pictilis Ellipteroides rohuna Ellipteroides thiasodes Epiphragma kempi Erioptera incompleta Erioptera notate Erioptera orbitalis Eupilaria singhalica Eupilaria taprobanica Eupilaria thysanotos Geranomyia circipunctata Geranomyia circipunctata Geranomyia genitaloides Geranomyia gracilispinosa Geranomyia genitaloides Gonomyia conjugens Gonomyia hedys Gonomyia lanka Gonomyia persimilis Gonomyia runa Gonomyia serendibensis Gymnastes maya Gymnastes simhalae Gymnastes violaceus Hexatoma albonotata - subsp. citrocastanea Hexatoma badia Hexatoma crystalloptera Hexatoma ctenophoroides - subsp. ctenophoroides, nigrithorax Hexatoma fusca Hexatoma greenii Hexatoma humberti Hexatoma meleagris Hexatoma neopaenulata Hexatoma ochripleuris Hexatoma pachyrrhina Hexatoma pachyrrhinoides Hexatoma scutellata Hexatoma serendib Hexatoma subnitens Hexatoma tuberculifera Hexatoma yerburyi Idiocera conchiformis Idiocera persimilis Libnotes greeni Libnotes immaculipennis Libnotes notata Libnotes palaeta Libnotes poeciloptera Libnotes thwaitesiana Limonia albipes Limonia annulata Limonia ayodhya Limonia chaseni Limonia latiorflava Limonia longivena Limonia ravida Limonia vibhishana Molophilus hylandensis Molophilus rachius Molophilus veddah Molophilus wejaya Molophilus yakkho Orimarga asignata Orimarga taprobanica Paradelphomyia indulcata Paradelphomyia subterminalis Polymera zeylanica Prionota serraticornis Pseudolimnophila zelanica Rhabdomastix schmidiana Rhipidia subtesselata Styringomyia ceylonica Styringomyia flava Styringomyia fryeri Styringomyia marmorata Tasiocerellus kandyensis - endemic genus Teucholabis angusticapitis Teucholabis annuloabdominalis Teucholabis fenestrata Teucholabis ornata Thrypticomyia apicalis Toxorhina yamma Trentepohlia nigroapicalis Trentepohlia pennipes Trentepohlia speiseri Trentepohlia tenera Trentepohlia trentepohlii Family: Lonchaeidae - Lance flies Lamprolonchaea pipinna Lonchaea incisurata Lonchaea minuta Silba abstata Silba admirabilis Silba excisa Silba perplexa Silba pollinosa Silba setifera Silba srilanka Family: Lygistorrhinidae Lygistorrhina asiatica Family: Micropezidae - Stilt-legged flies Grammicomyia ferrugata Grammicomyia testacea Mimegralla nietneri Mimegralla splendens Family: Milichiidae - Freeloader flies Desmometopa inaurata Desmometopa kandyensis Desmometopa srilankae Phyllomyza aelleni Family: Muscidae - House flies Atherigona atripalpis Atherigona bella Atherigona confusa Atherigona exigua Atherigona falcata Atherigona gamma Atherigona laeta Atherigona lamda Atherigona maculigera Atherigona naquvii Atherigona naqvii Atherigona orientalis Atherigona oryzae Atherigona pulla Atherigona punctata Atherigona reversura Atherigona simplex Caricea tinctipennis Cephalispa lata Cephalispa mira Cephalispa capitulata Dichaetomyia acrostichalis Dichaetomyia apicalis Dichaetomyia curvimedia Dichaetomyia fumaria Dichaetomyia handschini Dichaetomyia holoxantha Dichaetomyia keiseri Dichaetomyia manca Dichaetomyia melanotela Dichaetomyia pallidorsis Dichaetomyia seniorwhitei Dichaetomyia splendida Dichaetomyia tamil Graphomya adumbrata Graphomya atripes Graphomya rufitibia Gymnodia ascendens Gymnodia distincta Haematobia minuta Haematobosca 
sanguinolenta Hebecnema nigra Hebecnema nigrithorax Helina fuscisquama Helina nervosa Heliographa ceylanica Hydrotaea australis Hydrotaea jacobsoni Limnophora albonigra Limnophora himalayensis Limnophora prominens Limnophora tinctipennis Lispe bengalensis Lispe binotata Lispe flavicornis Lispe incerta Lispe kowarzi Lispe mirabilis Lispe sericipalpis Lispocephala tinctipennis Mitroplatia albisquama Morellia albisquama Morellia biseta Morellia hortensia Morellia pectinipes Morellia quadriremis Morellia sordidisquama Musca cassara Musca conducens Musca confiscata Musca convexifrons Musca craggi Musca crassirostris Musca domestica Musca fletcheri Musca formosana Musca hervei Musca inferior Musca pattoni Musca planiceps Musca seniorwhitei Musca ventrosa Mydaea diaphana Mydaea fuscisquama Mydaea morosa Mydaea nervosa Mydaea pallens Mydaea splendida Mydaea tuberculifacies Myospila argentata Myospila femorata Myospila laveis Myospila morosa Myospila ruficollis Neomyia claripennis Neomyia coeruleifrons Neomyia diffidens Neomyia fletcheri Neomyia gavisa Neomyia indica Neomyia lauta Neomyia steini Neomyia stella Neomyia timorensis Ophyra spinigera Phaonia auricoxa Phaonia caeruleicolor Pygophora hopkinsi Pygophora immaculipennis Pygophora keiseri Pygophora lutescens Pygophora macularis Pygophora microchaeta Pygophora nigricauda Pygophora plumifera Pygophora xanthogaster Rhynchomydaea tuberculifacies Stomoxys indicus Stomoxys plurinotata Stomoxys sitiens Tamilomyia dichaetomyiina Family: Mycetophilidae - Fungus gnats Acnemia asiatica Allodia varicornis Aneura pinguis Anomalomyia affinis - endemic genus Anomalomyia basalis Anomalomyia flavicauda Anomalomyia guttata Anomalomyia immaculata Anomalomyia intermedia Anomalomyia minor Anomalomyia nasuta Anomalomyia obscura Anomalomyia picta Anomalomyia subobscura Anomalomyia thompsoni Anomalomyia viatoris Anthracophaga sulcifrons Azana asiatica Boraceomyia cajuensis Boraceomyia paulistensis Brevicornu callidum Clastobasis fugitiva Clastobasis lepida Docosia caniripes Dziedzickia basalis Epicypta bilunulata Epicypta ferruginea Epicypta flavohirta Epicypta nigroflava Epicypta pectenipes Epicypta setosiventris Exechia albicincta Exechia ampullata Exechia argenteofasciata Exechia boracensis Exechia cristata Exechia cristatoides Exechia paramirastoma Exechia zeylanica Exechiopsis bifida Greenomyia fugitiva Greenomyia lepida Clastobasis fugitiva Clastobasis lepida Leia annulicornis Leia arcuata Manota orientalis Manota sespinaea Neoempheria bifascipennis Neoempheria unifascipennis Zygomyia valepedro Family: Mydidae - Mydas flies Leptomydas notos Family: Nemestrinidae - Tangle-veined flies Atriadops javana Ceyloniola magnifica - monotypic endemic genus Hirmoneura brunnea Hirmoneura coffeata Family: Neriidae - Stilt-legged flies Chaetonerius comperei Gymnonerius ceylanicus Telostylus latibrachium Family: Nycteribiidae - Bat flies Basilia amiculata Basilia eileenae Basilia pumila Basilia punctata Cyclopodia sykesii Eucampsipoda latisternum Leptocyclopodia ferrarii Nycteribia allotopa Penicillidia indica Phthiridium ceylonicum Phthiridium phillipsi Family: Pachyneuridae Haruka elegans Family: Periscelididae Stenomicra fascipennis Family: Phoridae - Scuttle flies Ceylonoxenia bugnioni Ceylonoxenia butteli Clitelloxenia clitellaria Clitelloxenia paradeniyae Diplonevra ater Diplonevra cinctiventris Megaselia achatinae Megaselia argiopephaga Megaselia bowlesi Megaselia deningi Megaselia hepworthae Megaselia pseudoscalaris Megaselia reynoldsi Megaselia robinsoni Puliciphora 
trisclerita Rhynchomicropteron puliciforme Spiniphora conspicua Family: Pipunculidae - Big-headed flies Cephalops magnimembrus Eudorylas angustipennis Eudorylas beckeri Eudorylas biroi Tomosvaryella aeneiventris Tomosvaryella singalensis Family: Platypezidae - Flat-footed flies Lindneromyia brunettii Lindneromyia cirrhocera Lindneromyia curta Lindneromyia kandyi Microsania lanka Platypeza brunettii Platypeza nepalensis Polyporivora nepalensis Family: Platystomatidae - Signal flies Elassogaster linearis Euprosopia dorsata Euprosopia latifrons Euprosopia nigropunctata Euprosopia planiceps Euprosopia platystomoides Lamprophthalma felderi Plagiostenopterina cinctaria Plagiostenopterina dubiosa Plagiostenopterina fasciata Plagiostenopterina rufa Pseudepicausta angulata Pterogenia niveitarsis Rivellia costalis Rivellia eximia Rivellia frugalis Rivellia furcata Rivellia fusca Rivellia herinella Family: Psilidae - Rust flies Chyliza cylindrica Chyliza pseudomunda Loxocera brevibuccata Loxocera insolita Sargus decorus Sargus metallinus Family: Psychodidae - Moth flies Brunettia albohumeralis Brunettia albonotata Brunettia atrisquamis Brunettia uzeli Clogmia albipunctata Neotelmatoscopus acutus Neotelmatoscopus rotundus Phlebotomus argentipes Phlebotomus annandalei Phlebotomus glaucus Phlebotomus stantoni Psychoda acanthostyla Psychoda alabangensis Psychoda aponensos Psychoda formosana Psychoda geniculata Psychoda maculipennis Psychoda mediocris Psychoda vagabunda Sergentomyia arboris Sergentomyia insularis Telmatoscopus flavicollis Telmatoscopus proximus Family: Pyrgotidae - Picture-winged flies Peltodasia magnicornis Peltodesia magnicornis Taeniomastix pictiventris Taeniomastix unicolor Family: Rhagionidae - Snipe flies Chrysopilus latus Chrysopilus magnipennis Chrysopilus opalescens Chrysopilus similis Chrysopilus yerburyi Family: Rhiniidae Metallea clausa Family: Rhinophoridae - Woodlouse flies Ptilocera fastuosa Ptilocera smaragdifera Family: Sarcophagidae - Flesh flies Amobia auriceps Apodacra ceylonica Dolichotachina melanura Eremasiomyia orientalis Heteronychia calicifera Hoplacephala asiatica Hoplacephala mirabilis Krombeinomyia mirabilis Metopia argyrocephala Metopia nudibasis Phyllarista rohdendorfi Phylloteles argyrozoster Phylloteles ballucapitatus Phylloteles longiunguis Phylloteles rohdendorfi Protomiltogramma nandii Protomiltogramma seniorwhitei Pterella krombeini Sarcophaga alba Sarcophaga annandalei Sarcophaga futilis Sarcophaga henryi Sarcophaga kempi Sarcophaga martellata Sarcophaga martellatoides Sarcophaga peregrina Sarcophaga scopariiformis Sarcophaga talonata Sarcophaga zaitzevi Thereomyia nandii Thereomyia seniorwhitei Family: Sciaridae - Dark-winged fungus gnats Odontosciara exacta Family: Scathophagidae - Dung flies Cordilura lineata Cordilura pudica Cordilura punctipes Parallelomma banski Family: Scatopsidae - Dung midges Colobostema metarhamphe Colobostema occabipes Psectrosciara brunnescens Rhegmoclema hirtipenne Scatopse brunnescens Scatopse pilosa Scatopse zeylanica Family: Scenopinidae - Window flies Scenopinus longiventris Scenopinus papuanus Family: Sciaridae - Dark-winged fungus gnats Apelmocreagris simulator Family: Sciomyzidae - Marsh flies Sepedon crishna Sepedon ferruginosa Family: Sepsidae - Black scavenger flies Perochaeta hennigi Sepsis thoracica Toxopoda contracta Family: Simuliidae - Black flies Simulium bulla Simulium ceylonicum Simulium cremnosi Simulium cruszi Simulium dola Simulium ela Simulium krombeini Simulium languidum Simulium nilgiricum Simulium 
nubis Simulium paranubis Simulium striatum Simulium subpalmatum Simulium trirugosum Family: Sphaeroceridae - Lesser dung flies Aspinilimonina postocellaris Ceroptera equitans Chaetopodella nigrinotum Coproica ferrguinata Coproica ferruginata Coproica hirtula Lotobia asiatica Lotobia pallidiventris Norrbomia marginatis Norrbomia tropica Opacifrons brevisecunda Opacifrons cederholmi Paralimosina ceylanica Pellucialula polyseta Poecilosomella aciculata Poecilosomella affinis Poecilosomella borboroides Poecilosomella nigra Poecilosomella pappi Poecilosomella punctipennis Poecilosomella varians Rachispoda filiforceps Rachispoda fuscipennis Spinilimosina brevicostata Trachyopella leucoptera Family: Stratiomyidae - Soldier flies Acrochaeta dimidiata Adoxomyia heminopla Allognosta annulifemur Allognosta fuscitarsis Ankylacantha keiseri - endemic genus Argyrobrithes albopilosa Atherigona lamda - endemic genus Aulana confirmata Beris javana Cibotogaster azurea Clitellaria heminopla Gabaza albiseta Hermetia illucens Hermetia inflata Microchrysa flavicornis Microchrysa flaviventris Microchrysa vertebrata Massicyta inflata Nigritomyia ceylonica Nigritomyia maculipennis Odontomyia angustilimbata Odontomyia cyanea Odontomyia fascipes Oplodontha minuta Oplodontha punctifacies Oplodontha rubrithorax Oxycera whitei Pachygaster transmarinus Pegadomyia ceylonica Prosopochrysa vitripennis Ptecticus australis Ptecticus cingulatus Ptecticus pseudohistrio Ptecticus srilankai Ptilocera fastuosa Ptilocera fastuosa Ptilocera smaragdifera Ptilocera smaragdina Sargus contractus Sargus flaviventris Sargus gselli Sargus metallinus Sargus splendidus Stratiomys fenestrata Stratiomys minuta Strophognathus argentatus - monotypic endemic genus Tinda javana Family: Streblidae - Streblid bat flies Brachytarsina cucullata Brachytarsina joblingi Brachytarsina modesta Brachytarsina pygialis Brachytarsina speiseri Raymondia pagodarum Speiserella lobulata Family: Syrphidae - Hoverflies Allobaccha amphithoe Allobaccha fallax Allobaccha oldroydi Allobaccha pulchrifrons Allobaccha triangulifera Asarkina ayyari Asarkina belli Asarkina pitamara Asarkina porcina Betasyrphus fletcheri Calcaretropidia triangulifera Ceriana ornatifrons Chrysogaster aerosa Chrysogaster basalis Chrysogaster hirtella Chrysogaster incisa Chrysogaster insignis Chrysogaster longicornis Chrysogaster tarsata Chrysotoxum baphyrus Citrogramma henryi Dasysyrphus orsua Dideopsis aegrota Dideopsis aegrota Eosphaerophoria dentiscutellata Episyrphus nubilipennis Episyrphus viridaureus Eristalinus arvorum Eristalinus invirgulatus Eristalinus lucilia Eristalinus megacephalus Eristalinus multifarius Eristalinus paria Eristalinus quadristriatus Eristalis curvipes Eumerus argentipes Eumerus aurifrons Eumerus coeruleifrons Eumerus figurans Eumerus nicobarensis Eumerus singhalensis Eumerus sita Graptomyza brevirostris Graptomyza coomani Helophilus curvigaster Indascia gracilis Mallota curvigaster Mallota vilis Melanostoma apicale Melanostoma ceylonense Melanostoma scalare Meliscaeva ceylonica Meliscaeva monticola Microdon elisabeth Microdon fulvopubescens Microdon lanka Microdon montis Microdon taprobanicus Paragus auritus Paragus crenulatus Paragus rufocincta Paragus yerburiensis Phytomia errans Pipizella rufocincta Rhinobaccha gracilis Simosyrphus grandicornis Sphaerophoria indiana Sphaerophoria macrogaster Syritta proximata Syritta triangulifera Tropidia bambusifolia Tropidia curculigoides Tropidia septemnervis Xanthandrus ceylonicus Xanthogramma eoa Xylota atroparva Family: 
Tabanidae - Horseflies Atylotus agrestis Atylotus virgo Chrysops dispar Chrysops dubiens Chrysops fasciatus Chrysops fixissimus Chrysops flaviventris Chrysops flavocinctus Chrysops srilankensis Chrysops translucens Cydistomyia brunnea Cydistomyia ceylonicus Cydistomyia laeta Cydistomyia minor Cydistomyia philipi Cydistomyia pilipennis Cydistomyia putea Cydistomyia tibialis Dichelacera tetradelta Gastroxides ater Gastroxides ornatus Haematopota bequaerti Haematopota brevis Haematopota cingalensis Haematopota krombeini Haematopota litoralis- subsp. rhizophorae Haematopota roralis Haematopota tessellata Haematopota unizonata Hybomitra minshanensis Lissimodes ceylonicus Lissimodes minor Philoliche taprobanes Philoliche zernyi Silviomyza picea - monotypic endemic genus Silvius ceylonicus Stenotabanus sphaeriscapus Tabanus angustilimbatus Tabanus atrohirtus Tabanus brincki Tabanus ceylonicus Tabanus discrepans Tabanus diversifrons Tabanus dorsiger Tabanus flavissimus Tabanus fuscicauda Tabanus griseifacies Tabanus indiscriminatus Tabanus inflatipalpis Tabanus jucundus Tabanus krombeini Tabanus obconicus Tabanus particolor Tabanus pullus Tabanus puteus Tabanus speciosus Tabanus striatus Tabanus tenens Tabanus thellus Tabanus tumidicallus Tabanus wilpattuensis Udenocera brunnea - monotypic endemic genus Family: Tachinidae - Tachina flies Aneogmena fischeri Aneogmena rutherfordi Aneogmena secunda Argyrophylax franseni Argyrophylax fransseni Atractocerops ceylanicus Austrophorocera grandis Austrophorocera laetifica Austrophorocera lucagus Austrophorocera solennis Blepharella lateralis Blepharipa orbitalis Blepharipa zebina Carcelia atripes Carcelia bakeri Carcelia caudata Carcelia ceylanica Carcelia excisa Carcelia latistylata Carcelia rasoides Carcelia subferrifera Carcelia sumatrana Chetogena bezziana Clausicella molitor Cylindromyia umbripennis Delta pyriforme Dinera meridionalis Doleschalla elongata Drino curvipalpis Eutrixopsis paradoxa Halydaia luteicornis Hermya beelzebul Isosturmia intermedia Isosturmia picta Medinodexia morgana Nealsomyia rufella Nealsomyia rufipes Paradrino laevicula Phasia triangulata Phorocera vagator Prosena siberita Prosheliomyia nietneri Rutilia rubriceps Sisyropa formosa Sisyropa heterusiae Stevenia ceylanica Sumpigaster flavipennis Thecocarcelia thrix Thelaira macropus Torocca fasciata Urodexia penicillum Winthemia mallochi Winthemia trichopareia Zenillia anomala Zenillia nymphalidophaga Family: Tephritidae - Fruit flies Acanthiophilus astrophorus Acinoeuphranta zeylanica Acroceratitis striata Actinoptera biseta Actinoptera brahma Actinoptera formosana Adrama austeni Bactrocera apicofuscans Bactrocera bipustulata Bactrocera brunneola Bactrocera caryeae Bactrocera caudata Bactrocera ceylanica Bactrocera correcta Bactrocera cucurbitae Bactrocera diaphora Bactrocera diversa Bactrocera dorsalis Bactrocera duplicata Bactrocera expandens Bactrocera fastigata Bactrocera fernandoi Bactrocera garciniae Bactrocera gavisa Bactrocera hantanae Bactrocera invadens Bactrocera kandiensis Bactrocera latifrons Bactrocera nigrofemoralis Bactrocera nigrotibialis Bactrocera perigrapha Bactrocera profunda Bactrocera selenophora Bactrocera syzygii Bactrocera tau Bactrocera trilineata Bactrocera verbascifoliae Bactrocera versicolor Bactrocera tau Bactrocera invadens Bactrocera verbascifoliae Bactrocera zahadi Campiglossa aeneostriata Campiglossa agatha Coelotrypes luteifasciata Dacus cliatus Dacus discophorus Dacus keiseri Dacus longicornis Dacus nepalensis Dacus persicus Dacus ramanii 
Dioxyna picciola Elaphromyia siva Euphranta conjuncta Euphranta zeylanica Galbifascia quadripunctata Goniurellia persignata Hexacinia radiosa Meracanthomyia gamma Metasphenisca reinhardi Oxyaciura monochaeta Oxyaciura xanthotricha Platensina acrostacta Platensina zodiacalis Pliomelaena translucida Rhabdochaeta pulchella Rhochmopterum seniorwhitei Rioxa discalis Rioxa lanceolata Rioxa parvipunctata Scedella orientalis Scedella spiloptera Sophiroides flammosa Spathulina acroleuca Sphaeniscus melanotrichotus Sphaeniscus quadrincisus Sphenella sinensis Tephraciura basimacula Tritaeniopteron punctatipleurum Trupanea amoena Trupanea aucta Xarnuta leucotela Family: Therevidae - Stiletto flies Irwiniella ceylonica Irwiniella sequa Megapalla curvata Phycus brunneus Phycus frommeri Phycus hauseri Phycus minutus Schoutedenomyia argentiventris Family: Tipulidae - Crane flies Holorusia ochripes Indotipula demarcata Indotipula palnica Indotipula singhalica Leptotarsus errans Leptotarsus zeylanica Nephrotoma javensis Nephrotoma pleurinotata Prionota serraticornis Pselliophora henryi Pselliophora laeta - subsp. laeta, strigidorsum Pselliophora taprobanes Tipula hampsoni Tipulodina brunettiella Tipulodina ceylonica Tipulodina gracillima Family: Ulidiidae - picture-winged flies Physiphora clausa Physiphora longicornis Family: Xylomyidae - Wood soldier flies Solva inamoena Family: Xylophagidae - Awl-flies Rachicerus aterrimus Rachicerus bicolor Rachicerus rusticus Rachicerus spissus Xylophagus brunneus Notes References Sri Lanka
List of dipterans of Sri Lanka
[ "Biology" ]
12,061
[ "Biota by country", "Wildlife by country" ]
32,156,511
https://en.wikipedia.org/wiki/Water%20resource%20policy
Water resource policy, sometimes called water resource management or water management, encompasses the policy-making processes and legislation that affect the collection, preparation, use, disposal, and protection of water resources. The long-term viability of water supply systems poses a significant challenge as a result of water resource depletion, climate change, and population expansion. Water is a necessity for all forms of life as well as for industries on which humans rely, such as technology development and agriculture. This global need for clean water access necessitates water resource policy to determine the means of supplying and protecting water resources. Water resource policy varies by region and depends on water availability or scarcity, the condition of aquatic systems, and regional needs for water. Since water basins do not align with national borders, water resource policy is also determined by international agreements, a field also known as hydropolitics. Water quality protection also falls under the umbrella of water resource policy; laws that protect the chemistry, biology, and ecology of aquatic systems by reducing and eliminating pollution, regulating usage, and improving quality are considered water resource policy. When developing water resource policies, many different stakeholders, environmental variables, and considerations must be taken into account to ensure that the health of people and ecosystems is maintained or improved. Finally, ocean zoning, coastal management, and environmental resource management are also encompassed by water resource management, as in the case of offshore wind land leasing. As water scarcity increases with climate change, the need for robust water resource policies will become more pressing. An estimated 57% of the world's population will experience water scarcity at least one month out of the year by 2050. Mitigation and updated water resource policies will require interdisciplinary and international collaboration, including government officials, environmental scientists, sociologists, economists, climate modelers, and activists.

Water as a resource
When considering its utility as a resource and developing water resource policy, water can be classified into four categories: green, blue, gray, and virtual water. Blue water is surface water and groundwater, such as the water in rivers, lakes, and aquifers. Green water is rainwater that has precipitated onto soil and can be used naturally by plants and agriculture. Gray water is water that has been contaminated by human use or proximity; the classification ranges from freshwater polluted by fertilizer runoff to water contaminated by dishwashers and showers. Virtual water is the water consumed to make an agricultural or industrial product. Calculating the virtual water of a commodity is used to determine the water footprint of a country and to see how much water it imports and exports through its goods.

Broad Types of Water Resource Policy

Agreements Between Nations
Water basins do not align with national borders, and an estimated 60% of worldwide freshwater flows across political boundaries. Countries manage shared water resources by making agreements in the form of treaties, which may enumerate policies, rights, and responsibilities. The Permanent Court of International Justice adjudicated disputes between nations, including water rights litigation. An estimated 3,600 water treaties have existed, including more than 150 new ones introduced since 1950.
Transboundary water agreements, like treaties, are often focused on water infrastructure and quality. Water resource treaties encompass many types of water, like surface water, groundwater, watercourses, and dams. When a water resource can be shared equally, like a river acting as a border between nations, there tends to be less conflict than with upstream/downstream water resource sharing agreements. Sometimes treaties establish joint committees between the two or more nations to oversee all water sharing and to ensure that treaty agreements are being met. Two examples of this are the 1996 Ganges Treaty between India and Bangladesh and the 1955 Great Lakes Basin Compact between the United States and Canada. With increasing water scarcity and competition for water resources due to climate change and diminished water quality, there has been an increase in international water-based conflict. Interstate water resource agreements also include multi-country arrangements to obtain funding for water resource projects such as building hydropower dams. In Sub-Saharan African countries, China has financed many hydropower projects. Covenants and Declarations In water resource policy, covenants and declarations are nonbinding goals for reaching universal human access to water for drinking and sanitation purposes. The United Nations has adopted three covenants and declarations: the 1948 Universal Declaration of Human Rights, the 1966 International Covenant on Civil and Political Rights, and the 1966 International Covenant on Economic, Social, and Cultural Rights. Since the 1966 International Covenant on Economic, Social, and Cultural Rights, all 191 UN member states have also signed the Millennium Development Goals, which are a further commitment to combat health inequalities. Access to safe and clean water for drinking and sanitation was fully declared a human right on July 28, 2010 through UN General Assembly resolution A/RES/64/292. Management Rules and Regulations Water management rules and regulations dictate different national standards for water quality, like drinking water and environmental water quality standards. For example, in the United States, the Safe Drinking Water Act authorizes the Environmental Protection Agency to set the national standards for safe drinking water and set regulations for contaminants. Within the European Union, the Water Framework Directive, enacted in 2000, regulates water resource planning, management, and protection. In India, the Ministry of Environment and Forests sets the water management policies that the Central Pollution Control Board and the State Pollution Control Boards then enforce. The Ministry for Environmental Protection directs national efforts for water management and regulation in China, like the Law on Prevention and Control of Water Pollution. Aid Programs and Diplomatic Efforts Several global organizations have created aid programs and diplomatic efforts to see that progress is being made towards achieving global covenants and declarations regarding water resource access. Because health is closely tied to drinking water and sanitation access, UNICEF and the World Health Organization formed the Joint Monitoring Programme for Water Supply and Sanitation, focused exclusively on monitoring and reporting progress on water, sanitation and hygiene goals as dictated by the UN.
In 1977, the United Nations convened a Conference on Water in Mar del Plata to develop recommendations for national water policy. Subsequently, the United Nations declared the 1980s the International Drinking-water Supply and Sanitation Decade. In 2000, the UN sanctioned a task force led by UNESCO, the World Water Assessment Programme, to report on worldwide freshwater use and sustainability in the World Water Development Report. In 2003, UN-Water was formed as an interagency coordination tool to help countries achieve their water resource goals as set by the Millennium Development Goals and to develop global water governance frameworks. Additionally, the United Nations declared 2013 the International Year of Water Cooperation. Beyond the United Nations' interest in water resource policy for the benefit of human health, the United Nations Environment Programme has also worked to improve international water quality. Non-profits and non-governmental organizations also play a role in water resource policy. For example, the World Water Council is an international think tank established in 1996 to help countries and stakeholders with water resource management strategies. Additionally, the US Agency for International Development (USAID) developed a Water and Development Strategy in 2013 to improve water supply, sanitation, and hygiene (WASH) programs and to support water resource management. Economic Exchanges Water resource policy also encompasses the economic exchange of water, known as virtual water. The term virtual water is used to understand and quantify the volume of water required for a product or service. For example, when determining the virtual water trade for agricultural goods, the trade flow rate (ton/yr) would be multiplied by the virtual water content (m³/ton) of each type of produce or livestock to determine how much water was exchanged in addition to the good. According to these calculations for virtual water, India, the United States, and China are the top national consumers of virtual water. Critiques of this method have questioned virtual water's relevance in creating water resource policy, but understanding the trade of water may be useful for countries facing water scarcity to prioritize importation of virtual water instead of exportation of water-intensive goods and services.
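As a minimal, hypothetical illustration of the virtual water arithmetic just described (trade flow in ton/yr multiplied by virtual water content in m³/ton), the following Python sketch can be used; all commodity names and figures are made-up placeholders rather than real trade data:

```python
# Hypothetical virtual water export calculation (illustrative numbers only).
# virtual water (m^3/yr) = trade flow (ton/yr) * virtual water content (m^3/ton)

trade_flows = {        # exported tonnage per year (placeholder values)
    "wheat": 120_000,
    "rice": 45_000,
    "beef": 8_000,
}
virtual_water_content = {  # m^3 of water embodied per ton (placeholder values)
    "wheat": 1_300,
    "rice": 3_400,
    "beef": 15_400,
}

virtual_water_export = {
    good: flow * virtual_water_content[good]
    for good, flow in trade_flows.items()
}

for good, vw in virtual_water_export.items():
    print(f"{good}: {vw:,} m^3/yr")
print(f"total virtual water exported: {sum(virtual_water_export.values()):,} m^3/yr")
```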
Business water resource policy initiatives The World Business Council for Sustainable Development engages stakeholders in H2OScenarios that consider various alternative policies and their effects. In June 2011 in Geneva, the Future of Water Virtual Conference addressed water resource sustainability. Issues raised included: water infrastructure monitoring, global water security, potential resource wars, the interaction between water, energy, food and economic activity, the "true value" of "distribution portions of available water" and a putative "investment gap" in water infrastructure. It was asserted that climate change will affect water scarcity, but the water security presentation emphasized that a combined effect with population growth "could be devastating". Identified corporate water-related risks include physical supply, regulatory and product reputation risks. This forum indicated policy concerns with trade barriers, price supports, the underpricing of 98% of water through its treatment as a free good, the need to intensify debate, and the need to harmonize the public and private sectors. Issues Environmental Freshwater resources on Earth are under increasing stress and depletion because of pollution, climate change, and consumptive use. Flood Water can produce natural disasters in the form of tsunamis, hurricanes, rogue waves and storm surges. Land-based floods can originate from infrastructural issues like bursting dams or levee failure during surges, as well as environmental phenomena like rivers overflowing their banks during increased rainfall events, urban stormwater flooding, or snowmelt. The increased magnitude and frequency of floods are a result of urbanization and climate change. Urbanization increases stormwater runoff during large rain events. Surface runoff is water that flows when heavy rain does not infiltrate the soil: excess water from rain, meltwater, or other sources flowing over the land. It is a major component of the water cycle. Runoff that occurs on surfaces before reaching a channel is also called a nonpoint source. When runoff flows along the ground, it can pick up soil contaminants including, but not limited to, petroleum, pesticides, or fertilizers that become discharge or nonpoint source pollution. Water resource policy encompasses flood risk management and the development of infrastructure to mitigate damages from floods. Water resource policy solutions to flooding include land drainage for agriculture, urban planning focused on flood prevention, rainwater harvesting, and permeable surfacing of developed areas. Drought A drought is defined as a period of dry conditions with either less precipitation or more depleted water reserves than normal. Because droughts are defined relative to the area's normal weather patterns and water availability, the definition varies from place to place. Overall, defining a drought takes into consideration (1) the duration, intensity, and area of lessened precipitation or water availability and (2) the estimated environmental, social and economic impact of the limited water. For example, in Colorado, paleohydrologic data, such as tree rings from areas affected by drought, have been used to define drought extent and understand the impact of past droughts to improve future water resource planning and decision making. With climate change, the frequency and intensity of droughts have been increasing, but water resource policy is typically reactive instead of proactive. Droughts have negative economic impacts on many sectors including agriculture, environment, energy production and transportation. Local and national governments normally respond to droughts once they happen and are in crisis mode, whereas a robust policy would include early drought monitoring systems, preparedness plans, emergency response programs, and impact assessment and management procedures to help mitigate the effects of drought on the economy and the environment. Different nations have different policies regarding national droughts. In 2013, the High-level Meeting on National Drought Policy (HMNDP) was organized by the World Meteorological Organization, the Secretariat of the United Nations Convention to Combat Desertification (UNCCD) and the Food and Agriculture Organization of the United Nations (FAO) to help nations develop drought preparedness policies and plans for international emergency relief efforts in the event of droughts.
The 414 participants from 87 countries unanimously adopted the HMNDP declaration at the end of the meeting, rallying national governments to implement drought management policies. Oceans and salinity The oceans provide many important resources for the planet and humans including: transportation, marine life, food, minerals, oil, natural gas, and recreation. Water resource policy involving the ocean includes jurisdiction and regulation issues, pollution regulation and reduction, overexploitation prevention, and desalination to make drinking water. National jurisdictions of the oceans are dictated by coastal proximity. The ocean along a nation's coastline is considered territory of that nation. For the first 12 nautical miles from the nation's coastal border, the country has rights to the ocean for its resources, including fish and minerals, and it is considered a continuation of that nation's territory. The country's exclusive economic zone, consisting of both the water column and the seafloor, continues out for 200 nautical miles, within which the country is still entitled to the area's resources. On the other hand, the Antarctic and Southern Oceans are shared by 45 state parties under the Antarctic Treaty, so the status and ownership of Antarctic and Southern Ocean resources is legally unclear. Additionally, some areas are conserved as Marine Protected Areas (MPAs), where resource exploitation is prohibited. For example, by 1997 there were 103 MPAs off the coast of California. The oceans are becoming polluted and exploited for resources. With increasing carbon dioxide concentrations in the atmosphere from burning fossil fuels, the oceans are experiencing acidification. Decreasing the pH of the ocean makes it more difficult for calcifying marine organisms, such as reef-building corals, to make their calcium carbonate shells and skeletons. Additionally, pollution is threatening oceanic resources, especially near coasts. Oil rigs and undersea mineral extraction can create problems that affect shorelines, marine life, fisheries and human safety. Decommissioning of such operations has another set of issues. Rigs-to-reefs is a proposal for using obsolete oil rigs as substrate for coral reefs that has failed to reach consensus. There have been oil tanker accidents and oil pipeline spills like the Exxon Valdez oil spill and the Deepwater Horizon oil spill. Ballast water, fuel and oil leaks, and trash originating from ships foul harbors, reefs and estuaries and pollute the oceans. Ballast water may contain toxins, invasive plants, animals, viruses, and bacteria. Additionally, marine debris, or industrially processed materials that have been dumped in the oceans, threatens the wellbeing and biodiversity of marine organisms. Along coasts, oceans are threatened by land runoff that includes fertilizers, insecticides, chemicals, and organic pollutants that can cause algal blooms and dead zones. Fisheries also have an effect on oceans and can fall under water resource policy rules. According to the UN Food and Agriculture Organization (FAO), 87% of the fisheries worldwide are either fully exploited or overexploited. Regional fisheries management organizations (RFMOs) control and oversee high seas fisheries under the UN Convention on the Law of the Sea (UNCLOS) and the UN Fish Stocks Agreement. Poor management by RFMOs, government fishing subsidies, and illegal fish catches have contributed to overfishing and overexploitation of ocean resources.
Ecosystem-based fishery management (EBFM) is an attempt to correct some RFMO mismanagement by limiting the biomass that is allowed to be removed by fisheries, and by making sure fishing is more targeted toward the desired species. One problem EBFM tries to eliminate is bycatch, or the unintentional catching of the wrong fish species. For example, white marlin, an endangered billfish, is mostly accidentally caught and killed by swordfish and tuna longline fisheries. Desalination of seawater is becoming a resource for coastal nations needing freshwater for industry and drinking, particularly areas with overexploited groundwater aquifers and surface water, pollution of freshwater, or unreliable water supply due to climate change. Desalination is particularly popular in arid, water-stressed regions like Egypt, Jordan, Kuwait, Cyprus, Israel, Saudi Arabia, United Arab Emirates, Australia, and California, US. Freshwater Surface and groundwater Surface water and groundwater can be studied and managed as separate resources or as a single resource in multiple forms. Jurisdictions typically distinguish three recognized groundwater classifications: subterranean streams, underflow of surface waters, and percolating groundwater. Constituencies Drinking water and water for utilitarian uses such as washing, crop cultivation and manufacture are competed for by various constituencies: Residential Agriculture. "Many rural people practice subsistence rain fed agriculture as a basic livelihood strategy, and as such are vulnerable to the effects of drought or flood that can diminish or destroy a harvest." Construction Industrial Municipal or institutional activities Surface water (runoff) and wastewater discharge Regulatory bodies address piped wastewater discharges to surface water that include riparian and ocean ecosystems. These review bodies are charged with protecting wilderness ecology, wildlife habitat, drinking water, agricultural irrigation and fisheries. Stormwater discharge can carry fertilizer residue and bacterial contamination from domestic and wild animals. These bodies have the authority to make orders which are binding upon private actors such as international corporations, and do not hesitate to exercise the police powers of the state. Water agencies have statutory mandates which in many jurisdictions are resilient to pressure from constituents and lawmakers; the agencies on occasion stand their ground despite heated opposition from agricultural interests. On the other hand, the Boards enjoy strong support from environmental groups such as Greenpeace, Heal the Ocean and Channelkeepers. Water quality issues concern sanitation, reuse (water recycling) and pollution control, which in turn breaks out into stormwater and wastewater. Wastewater Wastewater is water that has been discharged from human use. The primary discharges flow from the following sources: residences, commercial properties, industry, and agriculture. Sewage is technically wastewater contaminated with fecal and similar animal waste byproducts, but is frequently used as a synonym for wastewater. Points of origination include cesspools and sewage outfall pipes. Water treatment is subject to the same overlapping jurisdictional constraints which affect other aspects of water policy. For instance, levels of chloramines, with their resulting toxic trihalomethane by-product, are subject to federal guidelines even though the water management implementing those policy constraints is carried out by local water boards.
Human right to water and sanitation Structural constraints on policy makers Policies are implemented by organizational entities created by government exercise of state power. However, all such entities are subject to constraints upon their autonomy. Jurisdictional issues Subject matter and geographic jurisdiction are distinguishable. The jurisdiction of any water agency is limited by political boundaries and by enabling legislation. In some cases, limits target specific types of uses (wilderness, agricultural, urban-residential, urban-commercial, etc.). A second part of jurisdictional limitation governs the subject matter that the agency controls, such as flood control, water supply and sanitation, etc. In many locations, agencies may face unclear or overlapping authority, increasing conflicts and delaying conflict resolution. Typical information access issue As reported by the non-partisan Civil Society Institute, a 2005 US Congressional study on water supply was suppressed and became the target of Freedom of Information Act (FOIA) litigation. Multi-jurisdictional issues One jurisdiction's projects may cause problems in other jurisdictions. For instance, Monterey County, California controls a body of water that acts as a reservoir for San Luis Obispo County. The specific responsibilities for managing the resource must therefore be negotiated. See also Aquifer Biotic index Clean Water Act Water consumption Drinking water quality in the United States Economic Instruments for Water Policies Energy law International Water Management Institute Marine Protected Area Pollution Stormwater United Nations Environment Programme Wastewater Water efficiency Water law in the United States Waterkeeper Alliance WELS rating Wet Infrastructure References External links California Water Rights Fact Sheet U.S. Centers for Disease Control and Prevention (CDC) Healthy Water - Water Quality - Information on water quality, water testing, and understanding consumer confidence reports on water contaminants U.S. National Water Quality Monitoring Council (NWQMC) - Partnership of federal and state agencies U.S. Geological Survey - National Water Quality Assessment Program U.S. Environmental Protection Agency - Water Quality Monitoring U.S. National Agricultural Library American Water Resources Association Global Water Quality online database Beaches 911 - U.S. Beach Water Quality Monitoring Aquatic ecology Aquifers Environmental science Sewerage Water management Water conservation Water treatment Water Water and the environment Water chemistry Water pollution Water supply
Water resource policy
[ "Chemistry", "Engineering", "Biology", "Environmental_science" ]
4,132
[ "Hydrology", "Water", "Water treatment", "Water pollution", "Sewerage", "Aquifers", "Ecosystems", "nan", "Environmental engineering", "Water technology", "Aquatic ecology", "Water supply" ]
32,158,106
https://en.wikipedia.org/wiki/Barrier-to-autointegration%20factor
In molecular biology, barrier-to-autointegration factor (BAF) is a family of essential proteins that is highly conserved in metazoan evolution, and which may act as DNA-bridging proteins. BAF binds directly to double-stranded DNA, to transcription activators, and to inner nuclear membrane proteins, including lamin A filaments that anchor nuclear pore complexes in place, and nuclear LEM-domain proteins that bind to lamin filaments and chromatin. New findings suggest that BAF has structural roles in nuclear assembly and chromatin organization, represses gene expression and might interlink chromatin structure, nuclear architecture and gene regulation in metazoans. BAF can be exploited by retroviruses to act as a host component of pre-integration complexes, which promote the integration of the retroviral DNA into the host chromosome by preventing autointegration (integration into itself). BAF might contribute to the assembly or activity of retroviral pre-integration complexes through direct binding to the retroviral proteins p55 Gag and matrix, as well as to DNA. References Protein domains
Barrier-to-autointegration factor
[ "Biology" ]
235
[ "Protein domains", "Protein classification" ]
32,160,483
https://en.wikipedia.org/wiki/BEN%20domain
In molecular biology, the BEN domain is a conserved protein domain found in a variety of eukaryotic transcriptional regulators and chromatin-associated proteins. It is named after three proteins in which it was first identified: BANP, E5R, and NAC1. The BEN domain is thought to play a critical role in protein-DNA and protein-protein interactions, particularly in gene silencing, transcriptional regulation, and chromatin organization. It is commonly involved in processes such as development, differentiation, and the maintenance of cellular identity through epigenetic regulation. Structure This domain is predicted to form an all-alpha fold with four conserved helices. Its conservation pattern revealed several conserved residues, most of which have hydrophobic side-chains and are likely to stabilize the fold through helix-helix packing. The first human BEN domain structure (from BEND3) was solved together with the TPR domain of ERCC6L; BEND3 stimulates the ERCC6L translocase and ATPase activities. Function The BEN domain is predicted to function as an adaptor for the higher-order structuring of chromatin, and for the recruitment of chromatin-modifying factors in transcriptional regulation. It has been suggested to mediate protein-DNA and protein-protein interactions during chromatin organization and transcription. The presence of BEN domains in a poxviral early virosomal protein and in polydnaviral proteins also suggests a possible role in the organisation of viral DNA during replication or transcription. They are generally linked to other globular domains with functions related to transcriptional regulation and chromatin structure, such as BTB, C4DM, and C2H2 fingers. Examples The BEN domain is found in diverse proteins including: SMAR1 (Scaffold/Matrix attachment region-binding protein 1; also known as BANP), a tumour-suppressor MAR-binding protein that down-regulates Cyclin D1 expression by recruiting the HDAC1-mSin3A co-repressor complex to the Cyclin D1 promoter locus; SMAR1 is the target of prostaglandin A2 (PGA2)-induced growth arrest. NACC1, a novel member of the POZ/BTB (Pox virus and Zinc finger/Broad complex, Tramtrack and Bric-a-brac) family, which varies from other proteins of this class in that it lacks the characteristic DNA-binding motif. Mod(mdg4) isoform C, the modifier of the mdg4 locus in Drosophila melanogaster (fruit fly), where mdg4 encodes chromatin proteins which are involved in position effect variegation, establishment of chromatin boundaries, nerve path finding, meiotic chromosome pairing and apoptosis. Trans-splicing of Mod(mdg4) produces at least 26 transcripts. BEND2, a protein of unknown function that is predicted to be involved in chromatin modification and has been associated clinically with central nervous system disorders. E5R protein from Chordopoxvirus virosomes, which is found in cytoplasmic sites of viral DNA replication. Several proteins of polydnaviruses. References Further reading Protein domains
BEN domain
[ "Biology" ]
666
[ "Protein domains", "Protein classification" ]
32,162,058
https://en.wikipedia.org/wiki/Jacquet%20module
In mathematics, the Jacquet module is a module used in the study of automorphic representations. The Jacquet functor is the functor that sends a linear representation to its Jacquet module. They are both named after Hervé Jacquet. Definition The Jacquet module J(V) of a representation (π,V) of a group N is the space of co-invariants of N: in other words, the largest quotient of V on which N acts trivially, or the zeroth homology group H0(N,V). Explicitly, it is the quotient V/VN, where VN is the subspace of V generated by elements of the form π(n)v - v for all n in N and all v in V. The Jacquet functor J is the functor taking V to its Jacquet module J(V). Applications Jacquet modules are used to classify admissible irreducible representations of a reductive algebraic group G over a local field, where N is the unipotent radical of a parabolic subgroup of G. In the case of p-adic groups, they were studied by Jacquet. For the general linear group GL(2), the Jacquet module of an admissible irreducible representation has dimension at most two. If the dimension is zero, then the representation is called a supercuspidal representation. If the dimension is one, then the representation is a special representation. If the dimension is two, then the representation is a principal series representation. References Representation theory
Jacquet module
[ "Mathematics" ]
332
[ "Representation theory", "Fields of abstract algebra" ]
39,260,084
https://en.wikipedia.org/wiki/Spectral%20line%20shape
Spectral line shape or spectral line profile describes the form of an electromagnetic spectrum in the vicinity of a spectral line – a region of stronger or weaker intensity in the spectrum. Ideal line shapes include Lorentzian, Gaussian and Voigt functions, whose parameters are the line position, maximum height and half-width. Actual line shapes are determined principally by Doppler, collision and proximity broadening. For each system the half-width of the shape function varies with temperature, pressure (or concentration) and phase. A knowledge of the shape function is needed for spectroscopic curve fitting and deconvolution. Origins A spectral line can result from an electron transition in an atom, molecule or ion, which is associated with a specific amount of energy, E. When this energy is measured by means of some spectroscopic technique, the line is not infinitely sharp, but has a particular shape. Numerous factors can contribute to the broadening of spectral lines. Broadening can only be mitigated by the use of specialized techniques, such as Lamb dip spectroscopy. The principal sources of broadening are: Lifetime broadening. According to the uncertainty principle the uncertainty in energy, ΔE, and the lifetime, Δt, of the excited state are related by $\Delta E\,\Delta t \gtrsim \hbar$. This determines the minimum possible line width. As the excited state decays exponentially in time this effect produces a line with Lorentzian shape in terms of frequency (or wavenumber); a numeric sketch follows this section. Doppler broadening. This is caused by the fact that the velocity of atoms or molecules relative to the observer follows a Maxwell distribution, so the effect is dependent on temperature. If this were the only effect the line shape would be Gaussian. Pressure broadening (collision broadening). Collisions between atoms or molecules reduce the lifetime of the upper state, Δt, increasing the uncertainty ΔE. This effect depends on both the density (that is, pressure for a gas) and the temperature, which affects the rate of collisions. The broadening effect is described by a Lorentzian profile in most cases. Proximity broadening. The presence of other molecules close to the molecule involved affects both line width and line position. It is the dominant process for liquids and solids. An extreme example of this effect is the influence of hydrogen bonding on the spectra of protic liquids. Observed spectral line shape and line width are also affected by instrumental factors. The observed line shape is a convolution of the intrinsic line shape with the instrument transfer function. Each of these mechanisms, and others, can act in isolation or in combination. If each effect is independent of the other, the observed line profile is a convolution of the line profiles of each mechanism. Thus, a combination of Doppler and pressure broadening effects yields a Voigt profile.
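As a small numeric sketch of the lifetime-broadening relation above (assuming an exponentially decaying state of lifetime τ, for which the Lorentzian full width at half maximum in frequency is Δν = 1/(2πτ); the 10 ns lifetime is an arbitrary illustrative value):

```python
# Minimal sketch: natural (lifetime) broadening for an assumed 10 ns lifetime.
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
tau = 1e-8               # assumed excited-state lifetime, s

delta_E = hbar / tau                 # energy uncertainty from dE*dt ~ hbar, J
delta_nu = 1 / (2 * math.pi * tau)   # Lorentzian FWHM in frequency, Hz

print(f"Delta E  ~ {delta_E:.2e} J")
print(f"Delta nu ~ {delta_nu:.2e} Hz")  # ~16 MHz for tau = 10 ns
```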
Line shape functions Lorentzian A Lorentzian line shape function can be represented as $L = \frac{1}{1 + x^2}$, where L signifies a Lorentzian function standardized, for spectroscopic purposes, to a maximum value of 1; $x$ is a subsidiary variable defined as $x = \frac{p - p_0}{w/2}$, where $p_0$ is the position of the maximum (corresponding to the transition energy E), p is a position, and w is the full width at half maximum (FWHM), the width of the curve when the intensity is half the maximum intensity (this occurs at the points $p = p_0 \pm w/2$). The unit of $p_0$, $p$ and $w$ is typically wavenumber or frequency. The variable x is dimensionless and is zero at $p = p_0$. Gaussian The Gaussian line shape has the standardized form $G = \exp\!\left(-\ln(2)\, x^2\right)$. The subsidiary variable, x, is defined in the same way as for a Lorentzian shape. Both this function and the Lorentzian have a maximum value of 1 at x = 0 and a value of 1/2 at x = ±1. Voigt The third line shape that has a theoretical basis is the Voigt function, a convolution of a Gaussian and a Lorentzian, $V(x;\sigma,\gamma) = \int_{-\infty}^{\infty} G(x';\sigma)\, L(x - x';\gamma)\, dx'$, where σ and γ are half-widths. The computation of a Voigt function and its derivatives are more complicated than a Gaussian or Lorentzian.
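A minimal Python sketch of the three standardized line shapes, assuming NumPy and SciPy are available; note that scipy.special.voigt_profile returns an area-normalized Voigt profile rather than one standardized to unit height:

```python
import numpy as np
from scipy.special import voigt_profile  # SciPy >= 1.4

def lorentzian(p, p0, w):
    """Lorentzian standardized to a maximum of 1; w is the FWHM."""
    x = (p - p0) / (w / 2)
    return 1.0 / (1.0 + x**2)

def gaussian(p, p0, w):
    """Gaussian standardized to a maximum of 1; w is the FWHM."""
    x = (p - p0) / (w / 2)
    return np.exp(-np.log(2) * x**2)

def voigt(p, p0, sigma, gamma):
    """Area-normalized Voigt profile centred at p0."""
    return voigt_profile(p - p0, sigma, gamma)

# Both standardized shapes equal 1 at x = 0 and 1/2 at x = +/-1:
assert np.isclose(lorentzian(0.0, 0.0, 2.0), 1.0)
assert np.isclose(lorentzian(1.0, 0.0, 2.0), 0.5)
assert np.isclose(gaussian(1.0, 0.0, 2.0), 0.5)
```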
Spectral fitting A spectroscopic peak may be fitted to multiples of the above functions or to sums or products of functions with variable parameters. The above functions are all symmetrical about the position of their maximum. Asymmetric functions have also been used. Instances Atomic spectra For atoms in the gas phase the principal effects are Doppler and pressure broadening. Lines are relatively sharp on the scale of measurement so that applications such as atomic absorption spectroscopy (AAS) and inductively coupled plasma atomic emission spectroscopy (ICP) are used for elemental analysis. Atoms also have distinct X-ray spectra that are attributable to the excitation of inner shell electrons to excited states. The lines are relatively sharp because the inner electron energies are not very sensitive to the atom's environment. This is applied to X-ray fluorescence spectroscopy of solid materials. Molecular spectra For molecules in the gas phase, the principal effects are Doppler and pressure broadening. This applies to rotational spectroscopy, rotational-vibrational spectroscopy and vibronic spectroscopy. For molecules in the liquid state or in solution, collision and proximity broadening predominate and lines are much broader than lines from the same molecule in the gas phase. Line maxima may also be shifted. Because there are many sources of broadening, the lines have a stable distribution, tending towards a Gaussian shape. Nuclear magnetic resonance The shape of lines in a nuclear magnetic resonance (NMR) spectrum is determined by the process of free induction decay. This decay is approximately exponential, so the line shape is Lorentzian. This follows because the Fourier transform of an exponential function in the time domain is a Lorentzian in the frequency domain. In NMR spectroscopy the lifetime of the excited states is relatively long, so the lines are very sharp, producing high-resolution spectra. Magnetic resonance imaging Gadolinium-based pharmaceuticals alter the relaxation time, and hence spectral line shape, of those protons that are in water molecules that are transiently attached to the paramagnetic atoms, resulting in contrast enhancement of the MRI image. This allows better visualisation of some brain tumours. Applications Curve decomposition Some spectroscopic curves can be approximated by the sum of a set of component curves. For example, when Beer's law applies, the total absorbance, A, at wavelength λ, is a linear combination of the absorbance due to the individual components, k, at concentrations $c_k$: $A(\lambda) = \sum_k \varepsilon_k(\lambda)\, c_k$ (for unit optical path length), where ε is an extinction coefficient. In such cases the curve of experimental data may be decomposed into a sum of component curves in a process of curve fitting. This process is also widely called deconvolution. Curve deconvolution and curve fitting are completely different mathematical procedures. Curve fitting can be used in two distinct ways. (1) The line shapes and parameters $p_0$ and $w$ of the individual component curves have been obtained experimentally. In this case the curve may be decomposed using a linear least squares process simply to determine the concentrations of the components. This process is used in analytical chemistry to determine the composition of a mixture of the components of known molar absorptivity spectra. For example, if the heights of two lines are found to be $h_1$ and $h_2$, then $c_1 = h_1/\varepsilon_1$ and $c_2 = h_2/\varepsilon_2$. (2) Parameters of the line shape are unknown. The intensity of each component is a function of at least 3 parameters: position, height and half-width. In addition one or both of the line shape function and baseline function may not be known with certainty. When two or more parameters of a fitting curve are not known the method of non-linear least squares must be used; a sketch of such a fit is given below. The reliability of curve fitting in this case is dependent on the separation between the components, their shape functions and relative heights, and the signal-to-noise ratio in the data. When Gaussian-shaped curves are used for the decomposition of a set of $N_{sol}$ spectra into $N_{pks}$ curves, the $p_0$ and $w$ parameters are common to all $N_{sol}$ spectra. This allows the heights of each Gaussian curve in each spectrum ($N_{sol}\cdot N_{pks}$ parameters) to be calculated by a (fast) linear least squares fitting procedure, while the $p_0$ and $w$ parameters ($2\cdot N_{pks}$ parameters) can be obtained with a non-linear least-squares fit on the data from all spectra simultaneously, thus dramatically reducing the correlation between optimized parameters.
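A minimal sketch of case (2) above, decomposing a simulated two-component spectrum into Gaussian components by non-linear least squares; the peak parameters and noise level are illustrative, not taken from any real data set:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(p, h, p0, w):
    """Gaussian with height h, position p0 and FWHM w."""
    x = (p - p0) / (w / 2)
    return h * np.exp(-np.log(2) * x**2)

def two_gaussians(p, h1, p01, w1, h2, p02, w2):
    return gaussian(p, h1, p01, w1) + gaussian(p, h2, p02, w2)

rng = np.random.default_rng(0)
p = np.linspace(0, 10, 500)
# Simulate a main peak with an unresolved shoulder, plus noise.
y = two_gaussians(p, 1.0, 4.8, 1.5, 0.4, 6.0, 1.2) + rng.normal(0, 0.01, p.size)

# Initial guesses matter: positions, heights and half-widths must be roughly right.
guess = [0.9, 4.5, 1.0, 0.3, 6.5, 1.0]
params, cov = curve_fit(two_gaussians, p, y, p0=guess)
print("fitted (h, p0, w) per component:", params.reshape(2, 3))
```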
Derivative spectroscopy Spectroscopic curves can be subjected to numerical differentiation. When the data points in a curve are equidistant from each other the Savitzky–Golay convolution method may be used. The best convolution function to use depends primarily on the signal-to-noise ratio of the data. The first derivative (slope, $dy/dx$) of all single line shapes is zero at the position of maximum height. This is also true of the third derivative; odd derivatives can be used to locate the position of a peak maximum. The second derivatives, $d^2y/dx^2$, of both Gaussian and Lorentzian functions have a reduced half-width. This can be used to apparently improve spectral resolution: whereas a smaller component may produce only a shoulder in the spectrum, it can appear as a separate peak in the second derivative. Fourth derivatives, $d^4y/dx^4$, can also be used, when the signal-to-noise ratio in the spectrum is sufficiently high. Deconvolution Deconvolution can be used to apparently improve spectral resolution. In the case of NMR spectra, the process is relatively straightforward, because the line shapes are Lorentzian, and the convolution of a Lorentzian with another Lorentzian is also Lorentzian. The Fourier transform of a Lorentzian is an exponential. In the co-domain (time) of the spectroscopic domain (frequency), convolution becomes multiplication. Therefore, a convolution of the sum of two Lorentzians becomes a multiplication of two exponentials in the co-domain. Since, in FT-NMR, the measurements are made in the time domain, division of the data by an exponential is equivalent to deconvolution in the frequency domain. A suitable choice of exponential results in a reduction of the half-width of a line in the frequency domain. This technique has been rendered all but obsolete by advances in NMR technology. A similar process has been applied for resolution enhancement of other types of spectra, with the disadvantage that the spectrum must be first Fourier transformed and then transformed back after the deconvoluting function has been applied in the spectrum's co-domain. See also Fano resonance Holtsmark distribution Zero-phonon line and phonon sideband Notes References Further reading External links Curve Fitting in Raman and IR Spectroscopy: Basic Theory of Line Shapes and Applications 21st International Conference on Spectral Line Shapes, St. Petersburg (2012) Spectroscopy
Spectral line shape
[ "Physics", "Chemistry" ]
2,237
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
39,261,916
https://en.wikipedia.org/wiki/Liver%20support%20system
A liver support system or diachysis is a type of therapeutic device to assist in performing the functions of the liver. Such systems focus either on removing the accumulating toxins (liver dialysis), or on providing additional replacement of the metabolic functions of the liver through the inclusion of hepatocytes in the device (bioartificial liver device). A diachysis machine is used for acute care, i.e. emergency care, as opposed to a dialysis machine, which is typically used over the longer term. These systems are being trialed to help people with acute liver failure (ALF) or acute-on-chronic liver failure. The primary functions of the liver include removing toxic substances from the blood, manufacturing blood proteins, storing energy in the form of glycogen, and secreting bile. The hepatocytes that perform these tasks can be killed or impaired by disease, resulting in acute liver failure (ALF), which can be seen in persons with a previously diseased liver or a healthy one. Etymology The word diachysis derives from the Greek word διάχυσις, which means "diffusion". The word dialysis derives from the Greek word διάλυσις, which means "dissolution". Liver failure In hyperacute and acute liver failure, the clinical picture develops rapidly with progressive encephalopathy and multiorgan dysfunction such as hyperdynamic circulation, coagulopathy, acute kidney injury and respiratory insufficiency, severe metabolic alterations, and cerebral edema that can lead to brain death. In these cases the mortality without liver transplantation (LTx) ranges between 40% and 80%. LTx is the only effective treatment for these patients, although it requires a precise indication and timing to achieve good results. Nevertheless, due to the scarcity of organs for liver transplantation, it is estimated that one third of patients with ALF die while waiting to be transplanted. On the other hand, a patient with a chronic hepatic disease can suffer acute decompensation of liver function following a precipitating event, such as variceal bleeding, sepsis or excessive alcohol intake among others, that can lead to a condition referred to as acute-on-chronic liver failure (ACLF). Both types of hepatic insufficiency, ALF and ACLF, can potentially be reversible, and liver functionality can return to a level similar to that prior to the insult or precipitating event. LTx has shown an improvement in prognosis and survival in severe cases of ALF. Nevertheless, cost and donor scarcity have prompted researchers to look for new supportive treatments that can act as a "bridge" to the transplant procedure. By stabilizing the patient's clinical state, or by creating the right conditions that could allow the recovery of native liver functions (both detoxification and synthesis), such treatments can support a patient after an episode of ALF or ACLF. Three different types of supportive therapies have been developed: bio-artificial, artificial and hybrid liver support systems. Bioartificial liver devices Bioartificial liver devices are experimental extracorporeal devices that use living cell lines to provide detoxification and synthesis support to the failing liver. The bio-artificial liver (BAL) HepatAssist 2000 uses porcine hepatocytes, whereas the ELAD system employs hepatocytes derived from human hepatoblastoma C3A cell lines. Both techniques can produce, in fulminant hepatic failure (FHF), an improvement of hepatic encephalopathy grade and biochemical parameters.
Potential side effects that have been documented include immunological issues (porcine endogenous retrovirus transmission), infectious complications, and tumor transmigration. Other biological hepatic systems are the Bioartificial Liver Support System (BLSS) and the Radial Flow Bioreactor (RFB). The detoxification capacity of these systems is poor and therefore they must be used in combination with other systems to mitigate this deficiency. Today, their use is limited to centers with high experience in their application. A bioartificial liver device (BAL) is an artificial extracorporeal liver support (ELS) system for an individual who is suffering from acute liver failure (ALF) or acute-on-chronic liver failure (ACLF). The fundamental difference between artificial and BAL systems lies in the inclusion of hepatocytes in the reactor, often operating alongside the purification circuits used in artificial ELS systems. The overall design varies between different BAL systems, but they largely follow the same basic structure, with patient blood or plasma flowing through an artificial matrix housing hepatocytes. Plasma is often separated from the patient's blood to improve efficiency of the system, and the device can be connected to artificial liver dialysis devices in order to further increase the effectiveness of the device in the filtration of toxins. The inclusion of functioning hepatocytes in the reactor allows the restoration of some of the synthetic functions that the patient's liver is lacking. History Early history The first bioartificial liver device was developed in 1993 by Dr. Achilles A. Demetriou at Cedars-Sinai Medical Center. Using a 20-inch-long, 4-inch-wide plastic cylinder filled with cellulose fibers and pig liver cells, the bioartificial liver helped an 18-year-old southern California woman survive without her own liver for 14 hours until she received a human liver. Blood was routed outside the patient's body and through the artificial liver before being returned to the body. Dr. Kenneth Matsumura's work on the BAL led it to be named an invention of the year by Time magazine in 2001. Liver cells obtained from an animal were used instead of developing a piece of equipment for each function of the liver. The structure and function of the first device also resembles that of today's BALs. Animal liver cells are suspended in a solution and a patient's blood is processed by a semipermeable membrane that allows toxins and blood proteins to pass but restricts an immunological response. Development Advancements in bioengineering techniques in the decade after Matsumura's work have led to improved membranes and hepatocyte attachment systems. Cell sources now include primary porcine hepatocytes, primary human hepatocytes, human hepatoblastoma (C3A), immortalized human cell lines and stem cells. Use The purpose of BAL-type devices is not to permanently replace liver functions, but to serve as a supportive device, either allowing the liver to regenerate properly upon acute liver failure, or bridging the individual's liver functions until a transplant is possible. Function BALs are essentially bioreactors, with embedded hepatocytes (liver cells) that perform the functions of a normal liver. They process oxygenated blood plasma, which is separated from the other blood constituents. Several types of BALs are being developed, including hollow fiber systems and flat membrane sheet systems. Various types of hepatocytes are used in these devices.
Porcine hepatocytes are often used due to ease of acquisition and cost; however, they are relatively unstable and carry the risk of cross-species disease transmission. Primary human hepatocytes sourced from donor organs present several problems in their cost and difficulty to obtain, especially with the current lack of transplantable tissue. In addition, questions have been raised about tissue collected from patients transmitting malignancy or infection via the BAL device. Several lines of human hepatocytes are also used in BAL devices, including the C3A and HepG2 tumour cell lines, but due to their origin from hepatomas, they possess the potential to pass on malignancy to the patient. There is ongoing research into the cultivation of new types of human hepatocytes capable of improved longevity and efficacy in a bioreactor over currently used cell types, that do not pose the risk of transfer of malignancy or infection, such as the HepZ cell line created by Werner et al. Hollow fibre systems Similar to kidney dialysis, hollow fiber systems employ a hollow fiber cartridge. Hepatocytes are suspended in a gel solution such as collagen, which is injected into a series of hollow fibers. In the case of collagen, the suspension is then gelled within the fibers, usually by a temperature change. The hepatocytes then contract the gel by their attachment to the collagen matrix, reducing the volume of the suspension and creating a flow space within the fibers. Nutrient media is circulated through the fibers to sustain the cells. During use, plasma is removed from the patient's blood. The patient's plasma is fed into the space surrounding the fibers. The fibers, which are composed of a semi-permeable membrane, facilitate transfer of toxins, nutrients and other chemicals between the blood and the suspended cells. The membrane also keeps immune bodies, such as immunoglobulins, from passing to the cells, to prevent an immune system rejection. Cryogel-Based Systems Currently, hollow-fibre bioreactors are the most commonly accepted design for clinical use due to their capillary-like network allowing for easy perfusion of plasma across cell populations. However, these structures have their limitations, with convectional transport issues, nutritional gradients, non-uniform seeding, inefficient immobilisation of cells, and reduced hepatocyte growth restricting their effectiveness in BAL designs. Researchers are now investigating the use of cryogels to replace hollow fibres as the cell carrier components in BAL systems. Cryogels are super-macroporous three-dimensional polymers prepared at sub-zero temperatures by the freezing of a solution of cryogel precursors and solvent. The pores develop during this freezing process: as the cryogel solution cools, the solvent begins to form crystals. This causes the concentration of the cryogel precursors in the solution to increase, initiating the cryogelation process and forming the polymer walls. As the cryogel warms, the solvent crystals thaw, leaving cavities that form the pores. Cryogel pores range from 10 to 100 μm in size, forming an interconnected network that mimics a capillary system with a very large surface area to volume ratio, supporting large numbers of immobilised cells. Convection-mediated transport is also supported by cryogels, enabling even distribution of nutrients and metabolite elimination, overcoming some of the shortcomings of hollow-fibre systems.
Cryogel scaffolds demonstrate good mechanical strength and biocompatibility without triggering an immune response, improving their potential for long-term inclusion in BAL devices or in-vitro use. Another advantage of cryogels is their flexibility for use in a variety of tasks, including separation and purification of substances, along with acting as an extracellular matrix for cell growth and proliferation. Immobilisation of specific ligands onto cryogels enables adsorption of specific substances, supporting their use as treatment options for toxins, for separation of haemoglobin from blood, and as a localised and sustained method for drug delivery. Developing an effective bioartificial liver (BAL) remains a formidable challenge, as it necessitates the intricate optimization of cell colonization, biomaterial scaffold design, and BAL fluid dynamics. Expanding upon prior research indicating its potential as a blood perfusion device for detoxification, some studies have explored the application of Arg-Gly-Asp (RGD)-containing poly(2-hydroxyethyl methacrylate) (pHEMA)-alginate cryogels as scaffolds for BAL. These cryogels, incorporating alginate to mitigate protein fouling and functionalized with an RGD-containing peptide to enhance hepatocyte adhesion, represent a promising avenue for BAL scaffold development. Methods for characterizing internal flow within the porous cryogel matrix, such as particle image velocimetry (PIV), enable visualization of flow dynamics. PIV analysis revealed the laminar flow characteristics within cryogel pores, prompting the design of a multi-layered bioreactor consisting of spaced cryogel discs to optimize blood/hepatocyte mass exchange. Compared to the column configuration, the stacked bioreactor demonstrated significantly elevated production of albumin and urea, alongside enhanced cell colonization and proliferation over time. Recent developments in bioartificial livers (BALs) using living liver cells have shown promising advancements in the field of liver support and regeneration. These developments focus on utilizing various cell sources, scaffold materials, and bioreactor designs to enhance the functionality and viability of BAL systems. Key advancements include: Cell Sources: Researchers have explored different cell sources for BAL, including primary hepatocytes, stem cell-derived hepatocyte-like cells, and immortalized liver cell lines. Efforts have been made to optimize cell culture conditions to maintain cell viability and functionality within BAL systems. Scaffold Materials: Biomaterial scaffolds play a critical role in providing structural support and facilitating cell attachment and proliferation in BAL systems. Recent studies have investigated the use of natural and synthetic materials, such as hydrogels, alginate, and decellularized liver scaffolds, to create biomimetic environments conducive to liver cell growth and function. Bioreactor Designs: Innovative bioreactor designs have been developed to enhance the performance of BAL systems by optimizing mass transfer, fluid dynamics, and cell-matrix interactions. These designs include perfusion-based bioreactors, microfluidic devices, and three-dimensional (3D) bioprinted constructs, which aim to mimic the physiological microenvironment of the liver and promote liver cell function and survival. Functional Assessment: Advances in bioanalytical techniques have enabled researchers to assess the functionality of liver cells within BAL systems more accurately.
These techniques include measuring the secretion of liver-specific biomarkers, such as albumin, urea, and bile acids, as well as evaluating metabolic activity, drug metabolism, and detoxification capacity. Clinical Studies There have been numerous clinical studies involving hollow-fibre bioreactors. Overall, they show promise but do not provide statistically significant evidence supporting their effectiveness. This is generally due to inherent design limitations, causing convectional transport issues, nutritional gradients, non-uniform seeding, inefficient immobilisation of cells, and reduced hepatocyte growth. As of writing, no cryogel-based devices have entered clinical trials, although laboratory results have been promising and trials may follow. HepatAssist The HepatAssist, developed at the Cedars-Sinai Medical Center, is a BAL device containing porcine hepatocytes within a hollow-fibre bioreactor. These semi-permeable fibres act as capillaries, allowing the perfusion of plasma through the device and across the hepatocytes surrounding the fibres. The system incorporates a charcoal column to act as a filter, removing additional toxins from the plasma. Demetriou et al. carried out a large, randomised, multicentre, controlled trial on the safety and efficacy of the HepatAssist device. 171 patients with ALF stemming from viral hepatitis, paracetamol overdose or other drug complications, primary non-function (PNF), or of indeterminate aetiology were involved in the study and were randomly assigned to either the experimental or control groups. The study found that at the primary end-point of 30 days post admission, there was an increased survival rate in BAL patients over control patients (71% vs 62%), but the difference was not significant. However, when patients with PNF are excluded from the results there is a 44% reduction in mortality for BAL-treated patients, a statistically significant advantage. The investigators noted that exclusion of PNF patients is justifiable due to early retransplantation and lack of intracranial hypertension, so HepatAssist would give little benefit to this group. For the secondary end-point of time-to-death, in patients with ALF of known aetiology there was a significant difference between BAL and control groups, with BAL patients surviving for longer. There was no significant difference for patients of unknown aetiology, however. The conclusions of the study suggest that such a device has potentially significant importance when used as a treatment measure. While the overall findings were not statistically significant, when the aetiology of the patients was taken into account the BAL group gained a statistically significant reduction in mortality over the control group. This suggests that while the device may not be applicable to patients as an overall treatment for liver dysfunction, it can provide an advantage when the heterogeneity of patients is considered and it is used with patients of specific aetiology. Extracorporeal Liver Assist Device The Extracorporeal Liver Assist Device (ELAD) is a human-cell-based treatment system. A catheter removes blood from the patient, and an ultrafiltrate generator separates the plasma from the rest of the blood. This plasma is then run through a separate circuit containing cartridges filled with C3A cells, before being returned to the main circuit and re-entering the patient. Thompson et al.
performed a large open-label trial, measuring the effectiveness of ELAD on patients with severe alcoholic hepatitis resulting in ACLF. Their study involved patients screened at 40 sites across the US, UK, and Australia, and enrolled a total of 203 patients. Patients were then randomised into either ELAD (n=96) or standard medical care (n=107) groups, with even distribution of patients in terms of sex, MELD score, and bilirubin levels. Of the 96 patients in the ELAD group, 45 completed the full 120 hours of treatment; the rest were unable to complete the full regimen for a variety of reasons, including withdrawal of consent or severe adverse events, though 37 completed >72 hours of treatment, with results showing minimal difference in mortality between those receiving either >72 hours or the full 120 hours of treatment. The study was unable to meet its goal, finding no statistically significant improvement in mortality rates for patients who received ELAD treatment over those receiving standard care at 28 and 91 days (76.0% versus 80.4% and 59.4% versus 61.7%, respectively). Biomarker measurements showed significantly reduced levels of bilirubin and alkaline phosphatase in ELAD patients, though neither improvement translated into increased survival rates. Outcomes for patients with MELD score <28 showed trends towards improved survival on ELAD, whereas those with MELD >28 had decreased survivability on ELAD. These patients presented with raised creatinine from kidney failure, suggesting a reason why ELAD decreased survival chances relative to standard care. Unlike artificial ELS devices and HepatAssist, ELAD does not incorporate any filtration devices, such as charcoal columns and exchange resins. Therefore, it cannot replace the filtration capability of the kidneys and cannot compensate for multi-organ failure in more severe presentations of ACLF, resulting in increased mortality rates. While the results of the study cannot provide conclusive evidence to suggest that a BAL device like ELAD improves the outcome of severe ACLF, it does suggest that it can aid the survival of patients with a less severe form of the disease. In those patients with a MELD <28, beneficial effects were seen 2–3 weeks post treatment, suggesting that while C3A-incorporating BAL devices are unable to provide short-term aid like artificial albumin filtration devices, they instead provide more long-term aid in the recovery of the patient's liver. A randomized, phase 3 trial of the ELAD device in patients with severe alcoholic hepatitis failed to show benefit on overall survival, and development was discontinued. Liver dialysis Artificial liver support systems aim to temporarily replace native liver detoxification functions, and they use albumin as a scavenger molecule to clear the toxins involved in the physiopathology of the failing liver. Most of the toxins that accumulate in the plasma of patients with liver insufficiency are protein-bound, and therefore conventional renal dialysis techniques, such as hemofiltration, hemodialysis or hemodiafiltration, are not able to adequately eliminate them. Liver dialysis has shown promise for patients with hepatorenal syndrome. It is similar to hemodialysis and based on the same principles, but hemodialysis does not remove albumin-bound toxins that accumulate in liver failure. Like a bioartificial liver device, it is a form of artificial extracorporeal liver support.
A critical issue of the clinical syndrome in liver failure is the accumulation of toxins not cleared by the failing liver. Based on this hypothesis, the removal of lipophilic, albumin-bound substances such as bilirubin, bile acids, metabolites of aromatic amino acids, medium-chain fatty acids and cytokines should be beneficial to the clinical course of a patient in liver failure. This led to the development of artificial filtration and adsorption devices.

Liver dialysis is performed by physicians, surgeons and specialized nurses trained in gastroenterological medicine and surgery, specifically in hepatology, alongside their colleagues in the intensive or critical care unit and the transplantation department, which is responsible for procuring and implanting a new liver, or part (a lobe) of one, if and when it becomes available in time and the patient is eligible. Because of the need for these experts, as well as the relative newness of the procedure in certain areas, it is usually available only in larger hospitals, such as level I trauma center teaching hospitals connected with medical schools.

Between the different albumin dialysis modalities, single pass albumin dialysis (SPAD) has shown some positive results, albeit at a very high cost; it has been proposed that lowering the concentration of albumin in the dialysate does not affect the detoxification capability of the procedure. Nevertheless, the most widely used systems today are based on hemodialysis and adsorption. These systems use conventional dialysis methods with an albumin-containing dialysate that is later regenerated by means of adsorption columns filled with activated charcoal and ion-exchange resins. At present, there are two artificial extracorporeal liver support systems: the Molecular Adsorbent Recirculating System (MARS) from Gambro, and Fractionated Plasma Separation and Adsorption (FPSA), commercialised as Prometheus (PROM) by Fresenius Medical Care. Of the two therapies, MARS is the most frequently studied and most clinically used system to date.

Prognosis/survival
While the technique is in its infancy, the prognosis of patients with liver failure remains guarded. Liver dialysis is currently only considered to be a bridge to transplantation or liver regeneration (in the case of acute liver failure) and, unlike kidney dialysis (for kidney failure), cannot support a patient for an extended period of time (months to years).

Devices
Artificial detoxification devices currently under clinical evaluation include Single Pass Albumin Dialysis (SPAD), the Molecular Adsorbent Recirculating System (MARS), the Prometheus system, and Dialive.

Single Pass Albumin Dialysis (SPAD)
Single pass albumin dialysis (SPAD) is a simple method of albumin dialysis using standard renal replacement therapy machines without an additional perfusion pump system: the patient's blood flows through a circuit with a high-flux hollow-fibre hemodiafilter, identical to that used in the MARS system. The other side of this membrane is cleansed with an albumin solution in counter-directional flow, which is discarded after passing the filter. Hemodialysis can be performed in the first circuit via the same high-flux hollow fibres.

Molecular adsorbent recirculating system
The Molecular Adsorbent Recirculating System (MARS) is the best-known extracorporeal liver dialysis system. It consists of two separate dialysis circuits.
The first circuit consists of human serum albumin, is in contact with the patient's blood through a semipermeable membrane, and has two filters to clean the albumin after it has absorbed toxins from the patient's blood. The second circuit consists of a hemodialysis machine and is used to clean the albumin in the first circuit before it is recirculated to the semipermeable membrane in contact with the patient's blood.

Comparing SPAD, MARS and CVVHDF
SPAD, MARS and continuous veno-venous haemodiafiltration (CVVHDF) were compared in vitro with regard to detoxification capacity. SPAD and CVVHDF showed a significantly greater reduction of ammonia compared with MARS. No significant differences could be observed between SPAD, MARS and CVVHDF concerning other water-soluble substances. However, SPAD enabled a significantly greater bilirubin reduction than MARS. Bilirubin serves as an important marker substance for albumin-bound (non-water-soluble) substances. Concerning the reduction of bile acids, no significant differences between SPAD and MARS were seen. It was concluded that the detoxification capacity of SPAD is similar to, or even higher than, that of the more sophisticated, more complex and hence more expensive MARS.

Albumin dialysis is a costly procedure: for a seven-hour treatment with MARS, approximately €300 for 600 mL of human serum albumin solution (20%), €1,740 for a MARS treatment kit, and €125 for disposables used by the dialysis machine have to be spent. The cost of this therapy adds up to approximately €2,165. Performing SPAD according to the protocol by Sauer et al., however, requires 1,000 mL of human albumin solution (20%) at a cost of €500. A high-flux dialyzer costing approximately €40 and the tubing (€125) must also be purchased. The overall cost of a SPAD treatment is approximately €656, about 30% of the cost of an equally efficient MARS therapy session. The expenditure for the MARS monitor necessary to operate the MARS disposables is not included in this calculation.

Prometheus
The Prometheus system (Fresenius Medical Care, Bad Homburg, Germany) is a device based on the combination of albumin adsorption with high-flux hemodialysis after selective filtration of the albumin fraction through a specific polysulfone filter (AlbuFlow). It has been studied in a group of eleven patients with hepatorenal syndrome (acute-on-chronic liver failure and accompanying kidney failure). Treatment for two consecutive days for more than four hours significantly improved serum levels of conjugated bilirubin, bile acids, ammonia, cholinesterase, creatinine and urea, as well as blood pH. Prometheus was shown to be a safe supportive therapy for patients with liver failure.

Dialive
Dialive (Yaqrit Limited, London, UK) incorporates albumin removal and replacement, and endotoxin removal. It is at Technology Readiness Level (TRL) 5, which means it has been validated in the disease environment.

The MARS System
MARS was developed by a group of researchers at the University of Rostock (Germany) in 1993 and later commercialized for clinical use in 1999. The system is able to replace the detoxification function of the liver while minimizing the inconvenience and drawbacks of previously used devices.
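As a quick illustration of the SPAD versus MARS cost comparison above, the sketch below reproduces the per-session arithmetic. The line items are those quoted in the text; the small rounding gap between the itemised SPAD sum and the quoted €656 total is left visible rather than hidden.

```python
# Per-session consumable costs (EUR) as quoted in the text above.
mars_costs = {"albumin 600 mL (20%)": 300, "MARS treatment kit": 1740, "dialysis disposables": 125}
spad_costs = {"albumin 1000 mL (20%)": 500, "high-flux dialyzer": 40, "tubing": 125}

mars_total = sum(mars_costs.values())   # 2165
spad_total = sum(spad_costs.values())   # 665 (the text quotes ~656 after rounding)

print(f"MARS session: EUR {mars_total}")
print(f"SPAD session: EUR {spad_total}")
print(f"SPAD / MARS cost ratio: {spad_total / mars_total:.1%}")  # roughly 30%, as stated
```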
In vivo preliminary investigations indicated the ability of the system to effectively remove bilirubin, bile salts, free fatty acids and tryptophan, while important physiological proteins such as albumin, alpha-1-glycoprotein, alpha-1-antitrypsin, alpha-2-macroglobulin, transferrin, globulin and tyrosine, as well as hormonal systems, are unaffected. Also, MARS therapy in conjunction with CRRT/HDF can help clear cytokines acting as inflammatory and immunological mediators in hepatocellular damage, and can therefore create the right environment to favour hepatocellular regeneration and recovery of native liver function.

MARS System Components
MARS is an extracorporeal hemodialysis system composed of three different circuits: blood, albumin and low-flux dialysis. The blood circuit uses a double-lumen catheter and a conventional hemodialysis device to pump the patient's blood into the MARSFlux, a biocompatible polysulfone high-flux dialyser. With a membrane surface area of 2.1 m2, a thickness of 100 nm and a cut-off of 50 kDa, the MARSFlux is essential to retaining the albumin in the dialysate. Blood is dialysed against a human serum albumin (HSA) dialysate solution that allows blood detoxification of both water-soluble and protein-bound toxins by means of the presence of albumin in the dialysate (albumin dialysis). The albumin dialysate is then regenerated in a closed loop in the MARS circuit by passing through the fibres of the low-flux diaFLUX filter, to clear water-soluble toxins and provide electrolyte/acid-base balance using a standard dialysis fluid. Next, the albumin dialysate passes through two different adsorption columns: protein-bound substances are removed by the diaMARS AC250, containing activated charcoal, and anionic substances are removed by the diaMARS IE250, filled with cholestyramine, an anion-exchange resin. The albumin solution is then ready to initiate another detoxifying cycle of the patient's blood, which can be sustained until both adsorption columns are saturated, eliminating the need to continuously infuse albumin into the system during treatment (Fig. 1).

Figure 1: The MARS system

Results published in the literature with the MARS system
A systematic review of the literature from 1999 to June 2011 was performed in the following databases:
Specialized in systematic reviews: Cochrane Library Plus and the NHS Centre for Reviews and Dissemination databases (HTA, DARE and NHS EED).
General databases: Medline, PubMed and Embase.
Ongoing clinical trial and research project databases: Clinical Trials Registry (National Institutes of Health, USA) and Health Services Research Projects in Progress.
General web search engines: Google Scholar.

Effects of MARS treatment on Hepatic Encephalopathy (HE)
Hepatic encephalopathy (HE) represents one of the more serious extrahepatic complications associated with liver dysfunction. Neuropsychiatric manifestations of HE affect consciousness and behaviour. Evidence suggests that HE develops as neurotoxins and neuroactive substances, produced after hepatocellular breakdown, accumulate in the brain as a consequence of portosystemic shunting and the limited detoxification capability of the liver. The substances involved include ammonia, manganese, aromatic amino acids, mercaptans, phenols, medium-chain fatty acids, bilirubin and endogenous benzodiazepines. The relationship between ammonia neurotoxicity and HE was first described in animal studies by Pavlov et al.
Subsequently, several studies in animals and humans have confirmed that an ammonia concentration difference between the brain and the blood stream of more than 2 mM causes HE, and even a comatose state when the value is greater than 5 mM. Some investigators have also reported a decrease in serum ammonia following MARS treatment (Table 3). Manganese and copper serum levels are increased in patients with either acute or acute-on-chronic liver failure. Nevertheless, only in patients with chronic hepatic dysfunction is a bilateral magnetic resonance alteration of the globus pallidus observed, probably because these patients selectively show higher cerebral membrane permeability. An imbalance between aromatic and branched-chain amino acids (the Fischer index), traditionally implicated in the genesis of HE, can be normalized following MARS treatment. The effects are noticeable even after 3 hours of treatment, and this normalization of the Fischer index is accompanied by an improvement in the HE.

Novelli G et al. published their three-year experience with MARS, analyzing the impact of the treatment at the cerebral level in 63 patients and reporting an improvement in the Glasgow Coma Score (GCS) in all patients. In the last 22 patients, cerebral perfusion was monitored by Doppler (mean flow velocity in the middle cerebral artery), establishing a clear relationship between clinical improvement (especially neurological) and an improvement in cerebral arterial perfusion. This study confirms other results showing similar increases in cerebral perfusion in patients treated with MARS. More recently, several studies have shown a significant improvement of HE in patients treated with MARS. In the studies by Heemann et al. and Sen et al., an improvement in HE was considered to have occurred when the encephalopathy grade was reduced by one or more grades versus basal values; for Hassanein et al., in their randomized controlled trial, improvement was considered to have occurred when a decrease of two grades was observed. In the latter, 70 patients with acute-on-chronic liver failure and encephalopathy grades III and IV were included. Likewise, Kramer et al. estimated an HE improvement when an improvement in peak N70 latency in electroencephalograms was observed. Sen et al. observed a significant reduction in the Child-Pugh score (p<0.01) at 7 days following MARS treatment, without any significant change in the controls. Nevertheless, when they looked at the Model for End-Stage Liver Disease (MELD) score, a significant reduction was recorded in both the MARS and control groups (p<0.01 and p<0.05, respectively). Likewise, an improvement in HE grade with MARS therapy is also reported in several case series.

Effects of MARS Treatment on Unstable Hemodynamics
Hemodynamic instability is often associated with acute liver insufficiency, as a consequence of the endogenous accumulation of vasoactive agents in the blood. This is characterized by systemic vasodilatation, a decrease in systemic vascular resistance, arterial hypotension, and an increase in cardiac output that gives rise to a hyperdynamic circulation. During MARS therapy, the systemic vascular resistance index and mean arterial pressure have been shown to increase. Schmidt et al. treated 8 patients diagnosed with acute hepatic failure with MARS for 6 hours and compared them with a control group of 5 patients to whom ice pads were applied to match the heat loss produced in the treatment group during the extracorporeal therapy.
They analyzed hemodynamic parameters in both groups hourly. In the MARS group, a statistically significant increase of 46% in the systemic vascular resistance index was observed (from 1215 ± 437 to 1778 ± 710 dyn·s·cm−5·m−2), compared with a 6% increase in the controls. Mean arterial pressure also increased (from 69 ± 5 to 83 ± 11 mmHg, p<0.0001) in the MARS group, whereas no difference was observed in the controls. Cardiac output and heart rate also decreased in the MARS group as a consequence of an improvement in the hyperdynamic circulation. A statistically significant improvement was therefore obtained with MARS when compared with standard medical therapy (SMT). Catalina et al. have also evaluated the systemic and hepatic hemodynamic changes produced by MARS therapy. In 4 patients with acute decompensation of chronic liver disease, they observed an attenuation of the hyperdynamic circulation and a reduction in the portal pressure gradient after MARS therapy. Results are summarized in Table 4. Other studies with similar results are also worth mentioning: Heemann et al. and Parés et al., among others. Dethloff T et al. concluded that there is a statistically significant improvement in favour of MARS in comparison with the Prometheus system (Table 5).

Effects of MARS Treatment on Renal Function
Hepatorenal syndrome is one of the more serious complications in patients with acute decompensation of cirrhosis and increased portal hypertension. It is characterized by hemodynamic changes in the splanchnic, systemic and renal circulation. Splanchnic vasodilatation triggers the production of endogenous vasoactive substances that produce renal vasoconstriction and a low glomerular filtration rate, leading to oliguria with a concomitant reduction in creatinine clearance. Renal insufficiency is always progressive, with a poor prognosis: survival at 1 and 2 months is 20% and 10%, respectively. Pierre Versin was one of the pioneers in the study of hepatorenal syndrome in patients with liver impairment. Great efforts have been made to improve the prognosis of this type of patient; however, few have solved the problem. Orthotopic liver transplantation is the only treatment that has been shown to improve the acute and chronic complications derived from severe liver insufficiency. Today it is possible to combine albumin dialysis with continuous veno-venous hemodiafiltration, which provides greater hope for these patients through optimization of their clinical status. MARS treatment lowers serum urea and creatinine levels, improving their clearance, and even favours resolution of hepatorenal syndrome. These results were confirmed in a randomized controlled trial published by Mitzner et al., in which 13 patients diagnosed with hepatorenal syndrome type I were treated with MARS therapy. Mean survival was 25.2 ± 34.6 days in the MARS group compared with 4.6 ± 1.8 days in the controls, in whom hemodiafiltration and standard medical care were applied. This resulted in a statistically significant difference in survival at 7 and 30 days (p<0.05). The authors concluded that MARS therapy, applied to liver failure patients (Child-Pugh C and UNOS 2A scores) who develop hepatorenal syndrome type I, prolonged survival compared with patients treated with SMT. Although the mechanisms explaining these findings are not yet fully understood, a decrease in plasma renin concentrations has been reported in patients diagnosed with acute-on-chronic liver failure and renal impairment who were treated with MARS.
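As a quick sanity check on the hemodynamic figures reported above, the following sketch recomputes the percentage changes in the systemic vascular resistance index and mean arterial pressure from the quoted group means. The group means are taken from the text; the helper function itself is purely illustrative.

```python
def percent_change(before: float, after: float) -> float:
    """Relative change between two group means, in percent."""
    return (after - before) / before * 100

# Systemic vascular resistance index, dyn.s.cm^-5.m^-2 (MARS group means).
print(f"SVRI change: {percent_change(1215, 1778):.0f}%")  # ~46%, as reported

# Mean arterial pressure, mmHg (MARS group means).
print(f"MAP change: {percent_change(69, 83):.0f}%")  # ~20% increase
```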
Other studies have likewise suggested some efficacy for MARS in the treatment of hepatorenal syndrome. However, other reports have been published that do not show efficacy for MARS therapy in these types of patients. Khuroo et al. published a meta-analysis based on 4 small RCTs and 2 non-RCTs in patients diagnosed with ACLF, concluding that MARS therapy would not bring any significant increase in survival compared with SMT. Another observational study, in 6 patients with cirrhosis, refractory ascites and hepatorenal syndrome type I not responding to vasoconstrictor therapy, showed no impact on hemodynamics following MARS therapy; the authors nevertheless concluded that MARS therapy could effectively serve as a bridge to liver transplantation.

Effects of MARS Treatment on Biochemical Parameters
Total bilirubin was the only parameter analyzed in all trials that was consistently reduced in the groups of patients treated with MARS; Banayosy et al. measured bilirubin levels 14 days after MARS therapy was terminated and observed a consistent, significant decrease not only in bilirubin but also in creatinine and urea (Table 6). The impact of MARS therapy on plasma bile acid levels was evaluated in 3 studies. The study by Stadbauer et al., which specifically addressed the topic, reported that the MARS and Prometheus systems lower plasma bile acid concentrations to the same extent. Heemann et al. and Laleman et al. have also published a significant improvement for these organic ions.

Effects of MARS Treatment on Pruritus
Pruritus is one of the most common clinical manifestations of cholestatic liver diseases and one of the most distressing symptoms in patients with chronic liver disease caused by viral hepatitis C. Many hypotheses have been formulated to explain the physiopathogenesis of this manifestation, including increased plasma concentrations of bile acids, abnormalities in the bile ducts, and increased central neurotransmitters coupling to opioid receptors. Despite the number of drugs historically used, individually or combined (exchange resins, hydrophilic bile acids, antihistamines, antibiotics, anticonvulsants, opioid antagonists), there are reported cases of intractable or refractory pruritus with a dramatic reduction in patients' quality of life (e.g. sleep disorders, depression, suicide attempts). Intractable pruritus can be an indication for liver transplantation. MARS therapy for intractable pruritus is a therapeutic option that has been shown to be beneficial in desperate cases, although at high cost. Several studies confirmed that after MARS treatments patients remain free from pruritus for a period ranging from 6 to 9 months. Nevertheless, some authors have concluded that, despite the good results found in the literature, the application of MARS therapy in refractory pruritus requires larger evidence.

Effects of MARS Treatment on Drug and Poison Clearance
The pharmacokinetics and pharmacodynamics of the majority of drugs can be significantly modified by liver failure, affecting the therapeutic approach and the potential toxicity of the drugs. In these patients, the Child-Pugh score is a poor prognostic factor for assessing the metabolic capacity of the failing liver.
The metabolic performance of the liver depends on several factors:
Hepatic flow rate
Cytochrome P-450 enzymatic activity
Albumin affinity for the drug
Extrahepatic clearance of the drug
In patients with hepatic failure, drugs that are metabolized only in the liver accumulate in the plasma right after they are administered; drug dosing must therefore be modified, in both concentration and time intervals, to lower the risk of toxicity. It is also necessary to adjust the dosing of drugs that are exclusively metabolized by the liver and have low protein affinity and a high volume of distribution, such as the fluoroquinolones (levofloxacin and ciprofloxacin). Extracorporeal detoxification with albumin dialysis increases the clearance of drugs that are bound to plasma proteins (Table 7).

Effects of MARS on Survival
In the meta-analysis published by Khuroo et al., which included 4 randomized trials, no improvement in survival was observed for patients with liver failure treated with MARS compared with SMT. Neither the Cochrane review of extracorporeal liver support systems (published in 2004) nor the meta-analysis by Kjaergard et al. found a significant difference in survival for patients diagnosed with ALF treated with extracorporeal liver support systems. Nevertheless, these reviews included all kinds of liver support systems and used heterogeneous types of publication (abstracts, clinical trials, cohorts, etc.). There is literature showing favourable survival results for patients diagnosed with ALF and treated with MARS. In a randomized controlled trial, Salibà et al. studied the impact of MARS therapy on survival for patients with ALF on the liver transplant waiting list. Forty-nine patients received SMT and 53 were treated with MARS. They observed that patients who received 3 or more MARS sessions showed a statistically significant increase in transplant-free survival compared with the other patients in the study. Notably, 75% of the patients underwent liver transplantation in the first 24 hours after inclusion on the waiting list, and despite the short exposure to MARS therapy, some patients showed a better survival trend compared with controls when they were treated with MARS prior to the transplant. In a case-controlled study by Montejo et al. it was reported that MARS treatment does not decrease mortality directly; however, the treatment contributed to significantly improving survival in patients who were transplanted. In studies by Mitzner et al. and Heemann et al., a statistically significant difference in 30-day survival was shown for patients in the MARS group. However, El Banayosy et al. and Hassanein et al. noted a non-significant improvement in survival, probably because of the small number of patients included in the trials. In the majority of available MARS studies published with patients diagnosed with ALF, either transplanted or not, survival was greater in the MARS group, with some variation according to the type of trial, ranging from 20–30% to 60–80%. Data are summarized in Tables 8, 9 and 10.
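To make the meta-analysis statistics that follow easier to read, the sketch below computes an odds ratio and its 95% confidence interval from a 2×2 table of deaths versus survivors. The counts are invented for illustration and do not come from any of the trials cited here; the Woolf (log-normal) interval is a standard textbook approximation.

```python
import math

def odds_ratio_ci(deaths_a: int, surv_a: int, deaths_b: int, surv_b: int):
    """Odds ratio (group A vs group B) with a Woolf 95% confidence interval."""
    or_ = (deaths_a / surv_a) / (deaths_b / surv_b)
    se_log = math.sqrt(1/deaths_a + 1/surv_a + 1/deaths_b + 1/surv_b)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# Hypothetical counts: 30/100 deaths on treatment vs 38/100 deaths on control.
or_, (lo, hi) = odds_ratio_ci(deaths_a=30, surv_a=70, deaths_b=38, surv_b=62)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
# If the interval contains 1.0, the mortality reduction is not statistically
# significant at the 5% level, which is how the pooled results below read.
```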
For patients diagnosed with acute-on-chronic liver failure and treated with MARS therapy, clinical trial results showed a non-statistically-significant reduction in mortality (odds ratio [OR] = 0.78; 95% confidence interval [CI]: 0.58–1.03; p = 0.1059; Figure 3).

Figure 3: Meta-analysis showing the effect on survival of patients with ACLF treated with MARS therapy

A non-statistically-significant reduction in mortality was also shown in patients with ALF treated with MARS (OR = 0.75; 95% CI: 0.42–1.35; p = 0.3427) (Figure 4).

Figure 4: Meta-analysis showing the effect on survival of patients with ALF treated with MARS therapy.

Combined results yielded a non-significant reduction in mortality in patients treated with MARS therapy. However, the low number of patients included in each of the studies may be responsible for the failure to achieve enough statistical power to show differences between the two treatment groups. Moreover, heterogeneity in the number of MARS sessions and in the severity of liver disease of the patients included makes it very difficult to evaluate the impact of MARS on survival. Recently, a meta-analysis on survival in patients treated with an extra-hepatic therapy has been published. The search strategies yielded 74 clinical trials: 17 randomized controlled trials, 5 case-control and 52 cohort studies. Eight studies were included in the meta-analysis: three addressing acute liver failure, one of them with MARS therapy, and five addressing acute-on-chronic liver failure, four of them MARS-related. The authors concluded that extra-hepatic detoxifying systems improve survival in acute liver insufficiency, whereas the results for acute decompensation of chronic liver disease suggested a non-significant survival benefit. Also, given the increased demand for liver transplantation together with the increased risk of liver failure following large resections, the development of detoxifying extrahepatic systems is necessary.

Safety Aspects
Safety, defined as the presence of adverse events, is evaluated in only a few trials. Adverse events in patients receiving MARS therapy are similar to those in controls, with the exception of thrombocytopenia and hemorrhage, which seem to occur more frequently with the MARS system. Heemann et al. reported two adverse events most probably related to MARS: fever and sepsis, presumably originating at the catheter. In the study by Hassanein et al., two patients in the MARS group abandoned the study owing to hemodynamic instability, three patients required larger-than-average platelet transfusions, and three more patients presented gastrointestinal bleeding. Laleman et al. detected one patient with thrombocytopenia in each of the MARS and Prometheus treatment groups, and an additional patient with clotting of the dialysis circuit and hypotension in the Prometheus group only. Kramer et al. (Biologic-DT) reported 3 cases of disseminated intravascular coagulation in the interventional group, two of them with fatal outcomes. Mitzner et al. described, among patients treated with MARS, one case of thrombocytopenia and a second patient, with chronic hepatitis B, who underwent TIPS placement on day 44 after randomization and died on day 105 of multiorgan failure as a consequence of complications related to the TIPS procedure. Montejo et al. showed that MARS is an easy technique, without serious adverse events related to the procedure, and also easy to implement in ICU settings accustomed to renal extracorporeal therapies.
The MARS International Registry, with data from more than 500 patients (although sponsored by the manufacturer), shows that the adverse effects observed are similar to those in the control group. However, in these severely ill patients it is difficult to distinguish between complications of the disease itself and side effects attributable to the technique.

Health Economics
Only three studies addressing the cost-effectiveness of MARS therapy have been found. Hassanein et al. analysed the costs of randomized patients with ACLF receiving MARS therapy or standard medical care. They used the study published in 2001 by Kim et al. describing the impact of complications on hospitalization costs in patients diagnosed with alcoholic liver failure. The costs of 11 patients treated with standard medical care (SMT) were compared with those of the patients who received MARS in addition to SMT (12 patients). In the MARS group there was less in-hospital mortality and fewer complications related to the disease, with a remarkable reduction in cost that compensated for the MARS-related expenditure (Table 11). There were 5 survivors in the control group, with a cost per patient of $35,904, whereas in the MARS group 11 patients out of 12 survived, with a cost per patient of $32,036, which represents savings of approximately $4,000 per patient in favor of the MARS group.

Hessel et al. published a 3-year follow-up of a cohort of 79 patients with ACLF, of whom 33 received MARS treatments and 46 received SMT. Survival was 67% for the MARS group and 63% for the controls, which fell to 58% and 35%, respectively, at one-year follow-up, and to 52% and 17% at three years. Hospitalization costs for the MARS-treated group were greater than those for the controls (€31,539 vs. €7,543), as were direct costs at 3-year follow-up (€8,493 vs. €5,194). Nevertheless, after adjusting for mortality rate, the annual cost per patient was €12,092 for controls and €5,827 for the MARS group; for the latter, they also found an incremental cost-effectiveness ratio of €31,448 per life-year gained (LYG) and an incremental cost per QALY gained of €47,171. Two years later, the same authors published the results of 149 patients diagnosed with ACLF. There were 67 patients (44.9%) treated with MARS and 82 patients (55.1%) allocated to receive SMT. Mean survival time was 692 days in the MARS group (33% at 3 years) and 453 days in the controls (15% at 3 years); the results were significant (p = 0.022). The difference in average cost was €19,853 (95% CI: €13,308–25,429): €35,639 for MARS patients and €15,804 for the control group. The incremental cost per LYG was €29,985 (95% CI: €9,441–321,761), and the incremental cost per quality-adjusted life year (QALY) was €43,040 (95% CI: €13,551–461,856). Liver support systems such as MARS are very important for stabilizing patients with acute or acute-on-chronic liver failure and avoiding organ dysfunction, as well as serving as a bridge to transplant. Although the initial in-hospital costs are high, they are justified by the favorable outcome.

MARS Therapy Indications
Acute-on-Chronic Liver Failure
Etiology:
Chronic viral hepatitis
Alcoholic liver disease
Autoimmune disease
Metabolic disease such as hemochromatosis
Idiopathic cirrhosis
Goals of MARS Therapy
Re-compensation of the previous chronic state
Prolongation of survival time and bridge to urgent or elective transplant
Pre-transplant optimization of the patient
MARS Therapy Indication
Bilirubin > 15 mg/dL (255 μmol/L), not responding to standard medical care after 3 days
Renal dysfunction or hepatorenal syndrome
Hepatic encephalopathy ≥ II
Treatment Schedule:
3 to 5 eight-hour treatment sessions on consecutive days
Continuous treatment in case of hemodynamic instability (in any case, the treatment kit must be replaced every 24 hours)

Acute Liver Failure
Etiology:
Viral infection
Poisoning (paracetamol overdose, mushrooms)
Multiorgan dysfunction (severe sepsis)
Vascular diseases (Budd–Chiari syndrome)
Hypoxic hepatitis
Liver failure during pregnancy or Reye syndrome
Unknown etiology
Goals of MARS Therapy
Native liver recovery
Bridging to liver transplant
Pre-transplant optimization of the patient
MARS Therapy Indication
King's College or Clichy criteria for liver transplantation
Hepatic encephalopathy ≥ II
Increased intracranial pressure
Acute hypoxic hepatitis with bilirubin > 8 mg/dL (100 μmol/L)
Renal dysfunction or hepatorenal syndrome
Progressive intrahepatic cholestasis
Fulminant Wilson disease
Acute liver dysfunction following paracetamol overdose
Treatment Schedule:
3 to 5 eight-hour treatment sessions on consecutive days
Hypoxic hepatitis: 3 eight-hour treatment sessions on consecutive days
Paracetamol overdose: 3 to 5 twenty-four-hour treatment sessions
Mushroom poisoning: 3 to 5 twenty-four-hour treatment sessions
Fulminant Wilson disease: a minimum of 5 twenty-four-hour treatment sessions, owing to copper saturation of the treatment kit
Drug overdose: 3 to 5 eight-hour treatment sessions on consecutive days

MARS in Graft Dysfunction After Liver Transplant
Etiology:
Graft damage during preparation and transportation
Infection
Hepatotoxic drugs
Graft rejection
Technical complications (vascular, biliary)
Recurrence of primary disease
Goals of MARS Therapy
Recovery and prevention of re-transplantation
Prolongation of survival time and stabilization of the patient to receive a re-transplant if the above goal is not achieved
MARS Therapy Indication
Primary graft dysfunction
Hepatic encephalopathy ≥ II
Increased intracranial pressure
Renal dysfunction or hepatorenal syndrome
Progressive intrahepatic cholestasis
Treatment Schedule:
3 to 5 eight-hour treatment sessions on consecutive days
Continuous treatment in case of hemodynamic instability (in any case, the treatment kit must be replaced every 24 hours)

MARS in Liver Failure after Liver Surgery
Etiology:
Liver resection in hepatocellular carcinoma
Transarterial chemoembolization (TACE)
Partial resection in living donor transplantation
Other surgical interventions
Goals of MARS Therapy
Recovery until hepatic regeneration
MARS Therapy Indication
Hepatic encephalopathy ≥ II
Renal dysfunction or hepatorenal syndrome
Progressive intrahepatic cholestasis
Treatment Schedule:
3 to 5 eight-hour treatment sessions on consecutive days
Continuous treatment in case of hemodynamic instability (in any case, the treatment kit must be replaced every 24 hours)

MARS for Intractable Pruritus in Cholestasis
Etiology:
Primary biliary cirrhosis (PBC), primary sclerosing cholangitis (PSC)
Benign intrahepatic cholestasis (BIC)
Biliary atresia
Goals of MARS Therapy
Attenuation of pruritus symptoms and improvement of patients' quality of life
MARS Therapy Indication
Pruritus not responding to SMT
Treatment Schedule:
3 to 5 eight-hour treatment sessions on consecutive days
Repeat treatment when symptoms recur

MARS Therapy Contraindications
The same contraindications as for any other extracorporeal treatment apply to MARS therapy.
Unstable hemodynamics with mean arterial pressure (MAP) < 55 mmHg despite vasoconstrictor administration
Uncontrolled hemorrhage
Severe coagulopathy
Severe thrombocytopenia

Treatment Parameters
Blood Flow
The trend is to use high flow rates, although the rate is determined by the technical specifications of the combined machine and the catheter size.
Intermittent treatments: without renal dysfunction, blood and albumin flow rates of 150 to 250 mL/min are recommended.
Continuous treatments: with or without renal impairment, flow rates of 100 to 150 mL/min are recommended.

Dialysate Flow Rate
Intermittent treatments: without renal impairment, 1800 to 3000 mL/hour; with renal impairment, 3000 to 6000 mL/hour.
Continuous treatments: recommended flow rate of 1000 to 2000 mL/hour.

Replacement Flow Rate
According to medical criteria, as in CVVHD.

Heparin Anticoagulation
As with CVVHD, this depends on the patient's previous coagulation status. In many cases it will not be needed, unless the patient presents a PTT below 160 seconds. In patients with normal values, a bolus of 5000 to 10000 IU of heparin can be administered at the start of the treatment, followed by continuous infusion, to keep the PTT at ratios of 1.5 to 2.5, or 160 to 180 seconds.

Monitoring
A biochemical analysis (liver and kidney profile, ions, glucose) together with a hemogram is recommended at the end of the first session and before starting the following one. A coagulation analysis must also be performed before starting each session, to adjust the heparin dose. If medication that can be eliminated by MARS is being administered, it is also recommended to monitor its blood levels.

End of the Session
Once the treatment is finished, blood should be returned following the unit procedure, and both catheter lumens heparinized.
For the next session a new kit must be used.
For continuous treatments, the kit must be replaced with a new one every 24 hours.
Treatment must be stopped ahead of schedule under the particular circumstances listed below:
MAP below 40 mmHg for at least 10 minutes
Air embolism in the extracorporeal circuit
Transmembrane pressure (TMP) greater than 600 mmHg
Blood leak detected in the albumin circuit
Disseminated intravascular coagulation (DIC)
Severe active hemorrhage

FDA Clearance (US only)
The Food and Drug Administration (FDA) cleared MARS therapy for the treatment of drug overdose and poisoning in a document dated May 27, 2005. The only requirement is that the drug or poison must be dialysable and removable by activated charcoal or anion-exchange resins. More recently, on December 17, 2012, MARS therapy was cleared by the FDA for the treatment of hepatic encephalopathy due to decompensation of chronic liver disease. Clinical trials conducted with MARS treatment in HE patients with decompensated chronic liver disease demonstrated a transient effect of MARS treatments, significantly decreasing hepatic encephalopathy scores by at least 2 grades compared with standard medical therapy (SMT). MARS is not indicated as a bridge to liver transplant. Safety and efficacy have not been demonstrated in controlled, randomized clinical trials. The effectiveness of the MARS device could not be established in sedated patients in clinical studies and therefore cannot be predicted in sedated patients.

LiverNet
The LiverNet is a database dedicated to liver diseases treated with the support of extracorporeal therapies.
To date, the most widely used system is the Molecular Adsorbent Recirculating System (MARS), which is based on the selective removal of albumin-bound molecules and toxins from the blood of patients with acute and acute-on-chronic liver failure. The purpose is to prospectively register all patients treated worldwide with the MARS system in order to:
Improve our understanding of the clinical course, pathophysiology and treatment of these diseases
Evaluate the clinical impact of MARS therapy on the course of the disease in different specific indications
Increase knowledge in this extremely innovative area, as a basis for improving liver support devices and the treatment of these patients in the near future
The LiverNet is an eCRF database (www.livernet.net) using a SAS platform that offers major advantages for the centres, including automatic calculation of most liver and ICU scoring systems, instant online queries, instant export of all patients included in each centre's database to an Excel file for direct statistical analysis, and instant online statistical analysis of selected data decided by the scientific committee. The LiverNet is therefore an important tool for advancing knowledge of liver support therapies.

See also
American Society for Artificial Internal Organs
Tissue engineering

References

Further reading

Liver
Medical treatments
Digestive system procedures
Hepatology
Medical equipment
Membrane technology
Liver support system
[ "Chemistry", "Biology" ]
12,675
[ "Medical technology", "Membrane technology", "Medical equipment", "Separation processes" ]
39,262,593
https://en.wikipedia.org/wiki/Darken%27s%20equations
In metallurgy, the Darken equations are used to describe the solid-state diffusion of materials in binary solutions. They were first described by Lawrence Stamper Darken in 1948. The equations apply to cases where a solid solution's two components do not have the same coefficient of diffusion.

The equations
Darken's first equation is:
\( \nu = (D_1 - D_2)\frac{\partial N_1}{\partial x} \)
where:
\( \nu \) is the marker velocity of inert markers showing the diffusive flux.
\( D_1 \) and \( D_2 \) are the diffusion coefficients of the two components.
\( N_1 \) and \( N_2 \) are the atomic fractions of the two components.
\( x \) represents the direction in which the diffusion is measured.
It is important to note that this equation only holds in situations where the total concentration remains constant.
Darken's second equation is:
\( \tilde{D} = (N_1 D_2 + N_2 D_1)\left(1 + \frac{\partial \ln \gamma_1}{\partial \ln N_1}\right) \)
where:
\( \gamma_1 \) is the activity coefficient of the first component.
\( \tilde{D} \) is the overall diffusivity of the binary solution.

Experimental methods
In deriving the first equation, Darken referenced Smigelskas and Kirkendall's experiment, which tested the mechanisms and rates of diffusion and gave rise to the concept now known as the Kirkendall effect. For the experiment, inert molybdenum wires were placed at the interface between copper and brass components, and the motion of the markers was monitored. The experiment supported the concept that a concentration gradient in a binary alloy would result in the different components having different velocities in the solid solution. The experiment showed that in brass zinc had a faster relative velocity than copper, since the molybdenum wires moved farther into the brass. In establishing the coordinate axes to evaluate the derivation, Darken refers back to Smigelskas and Kirkendall's experiment, in which the inert wires were designated as the origin.

In respect to the derivation of the second equation, Darken referenced W. A. Johnson's experiment on a gold–silver system, which was performed to determine the chemical diffusivity. In this experiment radioactive gold and silver isotopes were used to measure the diffusivity of gold and silver, because it was assumed that the radioactive isotopes have relatively the same mobility as the non-radioactive elements. If the gold–silver solution is assumed to behave ideally, it would be expected that the diffusivities would also be equivalent. Therefore, the overall diffusion coefficient of the system would be the average of each component's diffusivity; however, this was found not to be true. This finding led Darken to analyze Johnson's experiment and derive the equation for the chemical diffusivity of binary solutions.

Darken's first equation
Background
As stated previously, Darken's first equation allows the calculation of the marker velocity in a binary system where the two components have different diffusion coefficients. For this equation to be applicable, the analyzed system must have a constant concentration and must be representable by the Boltzmann–Matano solution. For the derivation, a hypothetical case is considered where two homogeneous binary alloy rods of two different compositions are in contact. The sides are protected, so that all of the diffusion occurs parallel to the length of the rod. In establishing the coordinate axes to evaluate the derivation, Darken sets the x-axis to be fixed at the far ends of the rods, and the origin at the initial position of the interface between the two rods.
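Before the derivation, a small numerical sketch may help fix ideas: it evaluates Darken's first equation for made-up diffusivities and a linear composition profile. All numbers below are illustrative assumptions, not data from Smigelskas and Kirkendall's experiment.

```python
# Marker velocity from Darken's first equation: v = (D1 - D2) * dN1/dx.
D1 = 5.0e-13   # diffusivity of component 1, m^2/s (hypothetical)
D2 = 2.0e-13   # diffusivity of component 2, m^2/s (hypothetical)

# Assume the atomic fraction N1 drops linearly from 0.9 to 0.1 over 1 mm.
dN1_dx = (0.1 - 0.9) / 1.0e-3   # 1/m

v = (D1 - D2) * dN1_dx
print(f"marker velocity = {v:.2e} m/s")
# Negative here: the markers drift toward the end rich in the
# faster-diffusing component, as the molybdenum wires did in brass.
```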
This choice of coordinate system also allows the derivation to be simplified, whereas Smigelskas and Kirkendall's coordinate system was considered the non-optimal choice for this particular calculation, as can be seen in the following section. At the initial planar interface between the rods, it is considered that there are infinitesimally small inert markers placed in a plane perpendicular to the length of the rods. Here, inert markers are defined to be a group of particles that are of a different elemental make-up from either of the diffusing components and move in the same fashion. For this derivation, the inert markers are assumed to follow the motion of the crystal lattice. The motion relative to the markers is associated with diffusion, \( -D_1 \frac{\partial C_1}{\partial y} \), while the motion of the markers is associated with advection, \( \nu C_1 \). Fick's first law, the diffusion term above, describes the entirety of the system only for small distances from the origin, since at large distances advection needs to be accounted for. This results in the total rate of transport for the system being influenced by both factors, diffusion and advection.

Derivation
The derivation starts with Fick's first law using a uniform distance axis y as the coordinate system, with the origin fixed to the location of the markers. It is assumed that the markers move relative to the diffusion of one component and into one of the two initial rods, as was chosen in Kirkendall's experiment. In the following equation, which represents Fick's first law for one of the two components, D1 is the diffusion coefficient of component one, and C1 is the concentration of component one:
\( J_1 = -D_1 \frac{\partial C_1}{\partial y} \)
This coordinate system only works for short distances from the origin because of the assumption that marker movement is indicative of diffusion alone, which, as stated before, is not true for long distances from the origin. The coordinate system is transformed using a Galilean transformation, y = x − νt, where x is the new coordinate system fixed to the ends of the two rods, and ν is the marker velocity measured with respect to the x-axis. The variable t, time, is assumed to be constant, so that the partial derivative of C1 with respect to y is equal to the partial derivative of C1 with respect to x. This transformation then yields
\( J_1 = -D_1 \frac{\partial C_1}{\partial x} \)
The above equation, in terms of the variable x, only takes into account diffusion, so the term for the motion of the markers must also be included, since the frame of reference is no longer moving with the marker particles. In the equation below, \( \nu \) is the velocity of the markers.
\( J_1 = -D_1 \frac{\partial C_1}{\partial x} + \nu C_1 \)
Taking the above equation and equating it to the rate of accumulation in a volume results in the following equation. This result is similar to Fick's second law, but with an additional advection term:
\( \frac{\partial C_1}{\partial t} = \frac{\partial}{\partial x}\left( D_1 \frac{\partial C_1}{\partial x} - \nu C_1 \right) \)
The same equation can be written for the other component, designated as component two:
\( \frac{\partial C_2}{\partial t} = \frac{\partial}{\partial x}\left( D_2 \frac{\partial C_2}{\partial x} - \nu C_2 \right) \)
Using the assumption that C, the total concentration, is constant, C1 and C2 can be related in the following expression:
\( C = C_1 + C_2 \)
The above equation can then be used to combine the expressions for \( \partial C_1/\partial t \) and \( \partial C_2/\partial t \) to yield
\( \frac{\partial}{\partial x}\left( D_1 \frac{\partial C_1}{\partial x} + D_2 \frac{\partial C_2}{\partial x} - \nu C \right) = 0 \)
Since C is constant, so that \( \partial C_2/\partial x = -\partial C_1/\partial x \), the above equation can be written as
\( \frac{\partial}{\partial x}\left( (D_1 - D_2) \frac{\partial C_1}{\partial x} - \nu C \right) = 0 \)
The above equation states that \( (D_1 - D_2)\frac{\partial C_1}{\partial x} - \nu C \) is constant, because the derivative of a constant is equal to zero. Therefore, by integrating the above equation, it transforms to \( (D_1 - D_2)\frac{\partial C_1}{\partial x} - \nu C = I \), where \( I \) is an integration constant. At relatively infinite distances from the initial interface, the concentration gradients of each of the components and the marker velocity can be assumed to be equal to zero.
Based on this condition and the choice of coordinate axes, with the x-axis fixed at the far ends of the rods, I is equal to zero. These conditions then allow the equation to be rearranged to give
\( \nu = \frac{D_1 - D_2}{C} \frac{\partial C_1}{\partial x} \)
Since C is assumed to be constant, \( \frac{1}{C}\frac{\partial C_1}{\partial x} = \frac{\partial N_1}{\partial x} \). Rewriting this equation in terms of the atom fractions \( N_1 \) and \( N_2 \) yields
\( \nu = (D_1 - D_2) \frac{\partial N_1}{\partial x} \)

Accompanying derivation
Referring back to the derivation for Darken's first equation, \( \nu \) is written as
\( \nu = \frac{D_1 - D_2}{C} \frac{\partial C_1}{\partial x} \)
Inserting this value for \( \nu \) in \( J_1 = -D_1 \frac{\partial C_1}{\partial x} + \nu C_1 \) gives
\( J_1 = -D_1 \frac{\partial C_1}{\partial x} + \frac{C_1}{C}(D_1 - D_2)\frac{\partial C_1}{\partial x} \)
As stated before, \( C_1 = N_1 C \), which gives
\( J_1 = -(N_1 D_2 + N_2 D_1) \frac{\partial C_1}{\partial x} \)
Rewriting this equation in terms of the atom fractions \( N_1 \) and \( N_2 \) yields
\( J_1 = -(N_1 D_2 + N_2 D_1)\, C \frac{\partial N_1}{\partial x} \)
By writing the flux in the form of Fick's first law, \( J_1 = -\tilde{D}\frac{\partial C_1}{\partial x} \), it is found that the overall interdiffusion coefficient is given by the final equation:
\( \tilde{D} = N_1 D_2 + N_2 D_1 \)
This equation is only applicable for binary systems that follow the equations of state and the Gibbs–Duhem equation. This equation, as well as Darken's first law, \( \nu = (D_1 - D_2)\frac{\partial N_1}{\partial x} \), gives a complete description of an ideal binary diffusion system. This derivation was the approach taken by Darken in his original 1948 paper, though shorter methods can be used to attain the same result.

Darken's second equation
Background
Darken's second equation relates the chemical diffusion coefficient, \( \tilde{D} \), of a binary system to the atomic fractions of the two components. Similar to the first equation, this equation is applicable when the system does not undergo a volume change. This equation also only applies to multicomponent systems, including binary systems, that obey the equations of state and the Gibbs–Duhem equation.

Derivation
To derive Darken's second equation, the gradient in Gibbs' chemical potential is analyzed. The gradient in potential energy, denoted by F2, is the force which causes atoms to diffuse. To begin, the flux J is equated to the product of the concentration, the mobility B (defined as the diffusing atom's velocity per unit of applied force) and the driving force derived from the chemical potential gradient. In addition, NA is the Avogadro constant, and C2 is the concentration of diffusing component two. This yields
\( J_2 = -\frac{C_2 B_2}{N_A} \frac{\partial \mu_2}{\partial x} \)
which can be equated to the expression for Fick's first law:
\( J_2 = -D_2 \frac{\partial C_2}{\partial x} \)
so that the expression can be written as
\( D_2 \frac{\partial C_2}{\partial x} = \frac{C_2 B_2}{N_A} \frac{\partial \mu_2}{\partial x} \)
After some rearrangement of variables, the expression can be written for D2, the diffusivity of component two:
\( D_2 = \frac{C_2 B_2}{N_A} \frac{\partial \mu_2}{\partial C_2} \)
Assuming that atomic volume is constant, so that C = C1 + C2,
\( D_2 = \frac{N_2 B_2}{N_A} \frac{\partial \mu_2}{\partial N_2} \)
Using the definition of activity, \( \mu_2 = \mu_2^{0} + RT \ln a_2 \), where R is the gas constant and T is the temperature, to rewrite the equation in terms of activity gives
\( D_2 = \frac{B_2 R T}{N_A} \frac{\partial \ln a_2}{\partial \ln N_2} \)
The above equation can be rewritten in terms of the activity coefficient γ, which is defined in terms of activity by the equation \( a_2 = \gamma_2 N_2 \). This yields
\( D_2 = \frac{B_2 R T}{N_A}\left(1 + \frac{\partial \ln \gamma_2}{\partial \ln N_2}\right) \)
The same equation can also be written for the diffusivity of component one, \( D_1 = \frac{B_1 R T}{N_A}\left(1 + \frac{\partial \ln \gamma_1}{\partial \ln N_1}\right) \). Since, by the Gibbs–Duhem equation, the thermodynamic factors of the two components are equal, combining the equations for D1 and D2 gives the final equation:
\( \tilde{D} = (N_1 D_2 + N_2 D_1)\left(1 + \frac{\partial \ln \gamma_1}{\partial \ln N_1}\right) \)

Applications
Darken's equations can be applied to almost any scenario involving the diffusion of two different components that have different diffusion coefficients. This holds true except in situations where there is an accompanying volume change in the material, because this violates one of Darken's critical assumptions, namely that atomic volume is constant. More complicated equations than those presented must be used in cases where there is convection. One application in which Darken's equations play an instrumental role is in analyzing the process of diffusion bonding. Diffusion bonding is used widely in manufacturing to connect two materials without using adhesives or welding techniques. Diffusion bonding works because atoms from both materials diffuse into the other material, resulting in a bond between the two materials.
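As a worked illustration of Darken's second equation, and of the kind of input a diffusion-bonding model described below would need, the sketch evaluates the chemical diffusivity using a regular-solution activity model. The interaction energy and the diffusivities are invented for illustration; the regular-solution form of the thermodynamic factor is a standard textbook choice, not something taken from Darken's paper.

```python
def darken_D_chem(N1: float, D1: float, D2: float, omega_over_RT: float) -> float:
    """Chemical diffusivity from Darken's second equation.

    Uses a regular-solution model, ln(gamma1) = (omega/RT) * (1 - N1)**2,
    whose thermodynamic factor is
        1 + dln(gamma1)/dln(N1) = 1 - 2*(omega/RT)*N1*(1 - N1).
    """
    N2 = 1.0 - N1
    thermo_factor = 1.0 - 2.0 * omega_over_RT * N1 * N2
    return (N1 * D2 + N2 * D1) * thermo_factor

# Hypothetical values: D1, D2 in m^2/s; omega/RT is dimensionless.
print(darken_D_chem(N1=0.3, D1=5e-13, D2=2e-13, omega_over_RT=1.0))
# ~2.4e-13 m^2/s: the positive interaction energy lowers the chemical
# diffusivity below the ideal-solution value N1*D2 + N2*D1 (~4.1e-13).
```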
In practice, the diffusion of atoms between the two materials is achieved by placing the materials in contact with each other at high pressure and temperature, while not exceeding the melting temperature of either material. Darken's equations, particularly Darken's second equation, come into play when determining the diffusion coefficients for the two materials in the diffusion couple. Knowing the diffusion coefficients is necessary for predicting the flux of atoms between the two materials, which can then be used in numerical models of the diffusion bonding process, as, for example, was examined in the paper by Orhan, Aksoy, and Eroglu when creating a model to determine the amount of time required to create a diffusion bond. In a similar manner, Darken's equations were used in a paper by Watanabe et al., on the nickel-aluminum system, to verify the interdiffusion coefficients that were calculated for nickel-aluminum alloys.

Application of Darken's first equation has important implications for analyzing the structural integrity of materials. Darken's first equation, \( \nu = (D_1 - D_2)\frac{\partial N_1}{\partial x} \), can be rewritten in terms of the vacancy flux, \( J_v = (D_1 - D_2)\frac{\partial C_1}{\partial x} \). Used in this form, Darken's equation has important implications for determining the flux of vacancies into a material undergoing diffusion bonding, which, due to the Kirkendall effect, could lead to porosity in the material and have an adverse effect on its strength. This is particularly important in materials such as aluminum-nickel superalloys that are used in jet engines, where the structural integrity of the materials is extremely important. Porosity formation, known as Kirkendall porosity, has been observed in these nickel-aluminum superalloys when diffusion bonding has been used. It is important, then, to use Darken's findings to predict this porosity formation.

See also
Gibbs-Duhem equation#Ternary and multicomponent solutions and mixtures

References

Chemical kinetics
Physical chemistry
Darken's equations
[ "Physics", "Chemistry" ]
2,434
[ "Chemical reaction engineering", "Applied and interdisciplinary physics", "nan", "Chemical kinetics", "Physical chemistry" ]
39,265,457
https://en.wikipedia.org/wiki/Passive%20sign%20convention
In electrical engineering, the passive sign convention (PSC) is a sign convention, or arbitrary standard rule, adopted universally by the electrical engineering community for defining the sign of electric power in an electric circuit. The convention defines electric power flowing out of the circuit into an electrical component as positive, and power flowing into the circuit out of a component as negative. So a passive component which consumes power, such as an appliance or light bulb, will have positive power dissipation, while an active component, a source of power such as an electric generator or battery, will have negative power dissipation. This is the standard definition of power in electric circuits; it is used, for example, in computer circuit simulation programs such as SPICE.

To comply with the convention, the directions of the voltage and current variables used to calculate power and resistance in the component must have a certain relationship: the current variable must be defined so that positive current enters the positive voltage terminal of the device. These directions may be different from the directions of the actual current flow and voltage.

The convention
The passive sign convention states that in components in which the conventional current variable i is defined as entering the device through the terminal which is positive as defined by the voltage variable v, the power p and resistance r are given by
\( p = vi \)   and   \( r = \frac{v}{i} \)
In components in which the current i is defined such that positive current enters the device through the negative voltage terminal, power and resistance are given by
\( p = -vi \)   and   \( r = -\frac{v}{i} \)
With these definitions, passive components (loads) will have p > 0 and r > 0, and active components (power sources) will have p < 0 and r < 0.

Explanation
Active and passive components
In electrical engineering, power represents the rate of electrical energy flowing into or out of a given device (electrical component) or control volume. Power is a signed quantity; negative power represents power flowing in the opposite direction from positive power. A simple component (shown in these diagrams as a rectangle) is connected to the circuit by two wires, through which electric current passes through the device. From the standpoint of power flow, electrical components in a circuit can be divided into two types:
In a source or active component, such as a battery or electric generator, electric current (conventional current, the flow of positive charge) is forced to move through the device in the direction of greater electric potential, from the negative to the positive voltage terminal. This increases the potential energy of the electric charges, so electric power flows out of the component into the circuit. Work must be done on the moving charges by some source of energy in the component to make them move in this direction against the opposing force of the electric field E.
In a load or passive component, such as a light bulb, resistor, or electric motor, the current moves through the device under the influence of the electric field E in the direction of lower electric potential, from the positive terminal to the negative. So work is done by the charges on the component; potential energy flows out of the charges; and electric power flows from the circuit into the component, where it is converted to some other form of energy such as heat or mechanical work.
Some components can be either a source or a load, depending on the voltage or current through them.
For example, a rechargeable battery acts as a source when used to supply energy, but as a load when it is being recharged. A capacitor or an inductor acts as a load when it is storing energy from the external circuit in its electric or magnetic field, respectively, but as a source when it releases the stored energy from the electric or magnetic field into the external circuit. Since power can flow in either direction, there are two possible reference directions for defining electric power: either power flowing into an electrical component or power flowing out of the component can be defined as positive. Whichever is defined as positive, the other will be negative. The passive sign convention arbitrarily defines power flowing into the component (out of the circuit) as positive, so passive components have "positive" power flow.

In an AC (alternating current) circuit, the current and voltage switch direction with each half-cycle of the current, but the definitions above still apply. At any given instant, in nonreactive passive components the current flows from the positive terminal to the negative, while in nonreactive active components it flows in the other direction. In addition, components with reactance (capacitance or inductance) store energy temporarily, so they act as sources or sinks in different parts of the AC cycle. For example, in a capacitor, when the voltage across it is increasing, the current is directed into the positive terminal, so the component is storing energy from the circuit in its electric field, while when the voltage is decreasing, the current is directed out of the positive terminal, so it is acting as a source, returning stored energy to the circuit. In a steady-state AC circuit, all the energy stored in reactances is returned within the AC cycle, so a pure reactance, a capacitor or inductor, neither consumes nor produces net power; it is neither a source nor a load.

Reference directions
The power flow p and resistance r of an electrical component are related to the voltage v and current i variables by the defining equation for power and Ohm's law:
\( p = vi \)      (1)
\( r = \frac{v}{i} \)      (2)
Like power, voltage and current are signed quantities. The current flow in a wire has two possible directions, so when defining a current variable i, the direction which represents positive current flow must be indicated, usually by an arrow on the circuit diagram. This is called the reference direction for the current i. If the actual current is in the opposite direction, the variable i will have a negative value. Similarly, in defining a variable v representing the voltage between two terminals, the terminal which is positive when the voltage is positive must be specified, usually with a plus sign. This is called the reference direction, or reference terminal, for the voltage v. If the terminal marked positive actually has a lower voltage than the other one, the variable v will have a negative value.

To understand the passive sign convention, it is important to distinguish the reference directions of the variables, v and i, which can be assigned at will, from the directions of the actual voltage and current, which are determined by the circuit. The idea of the PSC is that by assigning the reference directions of the variables v and i in a component with the right relationship, the power flow in passive components calculated from Eq. (1) will come out positive, while the power flow in active components will come out negative.
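A small sketch can make this bookkeeping concrete. The component values below are hypothetical, and the function simply applies Eq. (1) under the passive sign convention.

```python
def power_psc(v: float, i: float) -> float:
    """Power dissipation under the passive sign convention (Eq. 1).

    v and i must use PSC reference directions: the current arrow
    points into the terminal marked '+' for the voltage variable.
    A positive result means the component absorbs power (a load);
    a negative result means it delivers power (a source).
    """
    return v * i

# A 100-ohm resistor with 12 V across it: actual current enters '+'.
print(power_psc(v=12.0, i=12.0 / 100.0))   # +1.44 W, a load

# The 12 V battery driving it: the actual 0.12 A exits its '+'
# terminal, so with the PSC reference direction i is negative.
print(power_psc(v=12.0, i=-0.12))          # -1.44 W, a source
```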
When analyzing a circuit, it is unnecessary to know in advance whether a component produces or consumes power; reference directions can be assigned arbitrarily (directions to currents and polarities to voltages), and the PSC is then used to calculate the power in each component. If the power comes out positive, the component is a load, consuming electric energy and converting it to some other kind of energy. If the power comes out negative, the component is a source, converting some other form of energy to electric energy. Sign conventions The above discussion shows that choosing the reference directions of the voltage and current variables in a component determines the direction of power flow that is considered positive. The reference directions of the individual variables are not important, only their relation to each other. There are two choices: Passive sign convention: the reference direction of the current variable (the arrow representing the direction of positive current) points into the positive reference terminal of the voltage variable. This means that if the voltage and current variables have positive values, current flows through the device from the positive to the negative terminal, doing work on the component, as occurs in a passive component. So power flowing into the component from the line is defined as positive; the power variable represents power dissipation in the component. Therefore: Active components (power sources) will have negative resistance and negative power flow Passive components (loads) will have positive resistance and positive power flow This is the convention normally used. Active sign convention: the reference direction of the current variable (the arrow representing the direction of positive current) points into the negative reference terminal of the voltage variable. This means that if the voltage and current variables have positive values, current flows through the device from the negative to the positive terminal, so work is being done on the current, and power flows out of the component. So power flowing out of the component is defined as positive; the power variable represents power produced. Therefore: Active components will have positive resistance and positive power flow Passive components will have negative resistance and negative power flow This convention is rarely used, except for special cases in power engineering. In practice, it is not necessary to assign all the voltage and current variables in a circuit in compliance with the PSC. Components in which the variables have a "backward" relationship, in which the current variable enters the negative terminal, can still be made to comply with the PSC by changing the sign of the constitutive relations (1) and (2) used with them. A current entering the negative terminal is equivalent to a negative current entering the positive terminal, so in such a component p = −vi and r = −v/i. Conservation of energy One advantage of defining all the variables in a circuit to comply with the PSC is that it makes it easy to express conservation of energy. Since electric energy cannot be created or destroyed, at any given instant every watt of power consumed by a load component must be produced by some source component in the circuit. Therefore the sum of all the power consumed by loads equals the sum of all the power produced by sources.
Since with the PSC, the power dissipation in sources is negative, and power dissipation in loads is positive, the algebraic sum of all the power dissipation in all the components in a circuit is always zero. AC circuits Since the sign convention only deals with the directions of the variables and not with the direction of the actual current, it also applies to alternating current (AC) circuits, in which the direction of the voltage and current periodically reverses. In an AC circuit, even though the voltage and current reverse direction during the second half of the cycle, at any given instant the PSC still applies: in passive components, the instantaneous current flows through the device from the positive to the negative terminal, while in active components it flows through the component from the negative to the positive terminal. In nonreactive circuits, since power is the product of voltage and current, and both the voltage and the current reverse direction, the two sign reversals cancel each other. The sign of the power flow is unchanged in both halves of the cycle. In loads with reactance, the voltage and current are not in phase. The load also temporarily stores some energy that is returned to the circuit each cycle, so the instantaneous direction of power flow reverses during parts of the cycle. However, the average power still obeys the passive sign convention. The average power dissipation over a cycle is P = (VpIp/2)cos φ, where Vp is the voltage amplitude, Ip is the current amplitude and φ is the phase angle between them. If the load has resistance, the phase angle is between +90° and −90°, so the average power is positive. Alternative convention in power engineering In practice, the power output of power sources such as batteries and generators is not given in negative numbers, as required by the passive sign convention. No manufacturer sells a "−5 kilowatt generator". The standard practice in electric power circuits is to use positive values for the power and resistance of power sources, as well as loads. This avoids confusion over the meaning of "negative power", and particularly "negative resistance". In order to make the power for both sources and loads come out positive, instead of the PSC, separate sign conventions must be used for sources and loads. These are called the "generator-load conventions", which are used in electric power engineering. Generator convention - In source components like generators and batteries, the variables V and I are defined according to the active sign convention above; the current variable is defined as entering the negative terminal of the device. Load convention - In loads, the variables are defined according to the normal passive sign convention; the current variable is defined as entering the positive terminal. Using this convention, positive power flow in source components is power produced, while positive power flow in load components is power consumed. As with the PSC, if the variables in a given component do not conform to the applicable convention, the component can still be made to conform by using negative signs in the constitutive equations (1) and (2): p = −vi and r = −v/i. This convention may seem preferable to the passive sign convention, since the power P and resistance R always have positive values. However, it cannot be used in electronics because it is not possible to classify some electronic components unambiguously as "sources" or "loads".
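Both properties, the zero algebraic sum under the PSC and the sign flip of the generator-load conventions, can be checked numerically. The following Python sketch uses a toy battery-resistor circuit with assumed values; the last part also verifies the average-power formula for an AC load by brute-force averaging over one cycle:

```python
import math

# DC example: 12 V battery driving a 6 ohm resistor; loop current = 2 A.
v, i = 12.0, 2.0
p_resistor = v * i        # PSC: current reference enters the + terminal -> +24 W
p_battery = v * (-i)      # PSC: actual current leaves the battery's + terminal -> -24 W
print(p_resistor + p_battery)   # 0.0 -- PSC powers always sum to zero

# Generator convention for the source: the current reference is reversed,
# so the battery's output is reported as a positive number, as on a nameplate.
print(v * i)                    # 24.0 W "produced"

# AC check: average v(t) * i(t) over one cycle vs. (Vp * Ip / 2) * cos(phi).
Vp, Ip, phi, N = 10.0, 2.0, math.radians(60), 100_000
avg = sum(Vp * math.cos(2 * math.pi * k / N) *
          Ip * math.cos(2 * math.pi * k / N - phi) for k in range(N)) / N
print(round(avg, 6), round(Vp * Ip / 2 * math.cos(phi), 6))   # both 5.0
```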
Some electronic components may act as sources of power with negative resistance in some portions of their operating range, and as absorbers of power with positive resistance in other portions, or even in different portions of the AC cycle. The power consumption or production of a component depends on its current–voltage characteristic curve. Whether the component acts as a source or load may depend on the current i or voltage v in it, which is not known until the circuit is analyzed. For example, if the voltage across a rechargeable battery's terminals is less than its open-circuit voltage, it will act as a source, while if the voltage is greater it will act as a load and recharge. So it is necessary for power and resistance variables to be able to take on both positive and negative values. References Electronic circuits Electric power
Passive sign convention
[ "Physics", "Engineering" ]
2,780
[ "Physical quantities", "Electronic circuits", "Power (physics)", "Electronic engineering", "Electric power", "Electrical engineering" ]
39,269,199
https://en.wikipedia.org/wiki/Heterolysis%20%28biology%29
Heterolysis (hetero = other/different, lysis = cell breakdown) is the spontaneous death and disintegration of a cell from factors other than itself. In contrast, autolysis happens when a cell dies due to its own secretions or signaling.  Some external factors that cause heterolysis are hypoxia, biological factors, chemical agents like drugs or free radical reactions, physical factors like electric shock, trauma, extreme radiation, and immunological reactions such as inflammation or allergic reactions. Such extrinsic cell death is important in executing proper immune response functions. This is commonly seen when a bacterial or viral infection occurs and the pathogen forces the cell to stop apoptosis to avoid death of host cells. In such scenarios, heterolytic factors make it possible to combat infections by lysing the infected cells. References Cell biology
Heterolysis (biology)
[ "Biology" ]
176
[ "Cell biology" ]
39,271,050
https://en.wikipedia.org/wiki/Free%20radical%20damage%20to%20DNA
Free radical damage to DNA can occur as a result of exposure to ionizing radiation or to radiomimetic compounds. Damage to DNA as a result of free radical attack is called indirect DNA damage because the radicals formed can diffuse throughout the body and affect other organs. Malignant melanoma can be caused by indirect DNA damage because it is found in parts of the body not exposed to sunlight. DNA is vulnerable to radical attack because of the very labile hydrogens that can be abstracted and the prevalence of double bonds in the DNA bases that free radicals can easily add to. Damage via radiation exposure Radiolysis of intracellular water by ionizing radiation creates peroxides, which are relatively stable precursors to hydroxyl radicals. 60%–70% of cellular DNA damage is caused by hydroxyl radicals, yet hydroxyl radicals are so reactive that they can only diffuse one or two molecular diameters before reacting with cellular components. Thus, hydroxyl radicals must be formed immediately adjacent to nucleic acids in order to react. Radiolysis of water creates peroxides that can act as diffusible, latent forms of hydroxyl radicals. Some metal ions in the vicinity of DNA generate the hydroxyl radicals from peroxide. H2O + hν → H2O+ + e− H2O + e− → H2O− H2O+ → H+ + OH· H2O− → OH− + H· 2 OH· → H2O2 Free radical damage to DNA is thought to cause mutations that may lead to some cancers. The Fenton reaction The Fenton reaction results in the creation of hydroxyl radicals from hydrogen peroxide and an iron(II) catalyst. The iron(II) is regenerated from iron(III) via the Haber–Weiss reaction. Transition metals with a free coordination site are capable of reducing peroxides to hydroxyl radicals. Iron is believed to be the metal responsible for the creation of hydroxyl radicals because it exists at the highest concentration of any transition metal in most living organisms. The Fenton reaction is possible because transition metals can exist in more than one oxidation state and their valence electrons may be unpaired, allowing them to participate in one-electron redox reactions. Fe2+ + H2O2 → Fe3+ + OH· + OH− The creation of hydroxyl radicals by iron(II) catalysis is important because iron(II) can be found coordinated with, and therefore in close proximity to, DNA. This reaction allows for hydrogen peroxide created by radiolysis of water to diffuse to the nucleus and react with iron(II) to produce hydroxyl radicals, which in turn react with DNA. The location and binding of iron(II) to DNA may play an important role in determining the substrate and nature of the radical attack on the DNA. The Fenton reaction generates two types of oxidants, Type I and Type II. Type I oxidants are moderately sensitive to peroxides and ethanol. Type I and Type II oxidants preferentially cleave DNA at specific sequences. Hydroxyl radical attack Hydroxyl radicals can attack the deoxyribose DNA backbone and bases, potentially causing a plethora of lesions that can be cytotoxic or mutagenic. Cells have developed complex and efficient repair mechanisms to fix the lesions. In the case of free radical attack on DNA, base-excision repair is the repair mechanism used. Hydroxyl radical reactions with the deoxyribose sugar backbone are initiated by hydrogen abstraction from a deoxyribose carbon, and the predominant consequence is eventual strand breakage and base release. The hydroxyl radical reacts with the various hydrogen atoms of the deoxyribose in the order 5′ H > 4′ H > 3′ H ≈ 2′ H ≈ 1′ H.
This order of reactivity parallels the exposure to solvent of the deoxyribose hydrogens. Hydroxyl radicals react with DNA bases via addition to the electron-rich pi bonds. These pi bonds in the bases are located between C5-C6 of pyrimidines and N7-C8 in purines. Upon addition of the hydroxyl radical, many stable products can be formed. In general, hydroxyl radical attacks on base moieties do not cause altered sugars or strand breaks except when the modifications labilize the N-glycosyl bond, allowing the formation of abasic sites that are subject to beta-elimination. Abasic sites Hydrogen abstraction from the 1′ carbon of deoxyribose by the hydroxyl radical creates a 1′-deoxyribosyl radical. The radical can then react with molecular oxygen, creating a peroxyl radical which can be reduced and dehydrated to yield a 2′-deoxyribonolactone and free base. A deoxyribonolactone is mutagenic and resistant to repair enzymes. Thus, an abasic site is created. Radical damage through radiomimetic compounds Radical damage to DNA can also occur through the interaction of DNA with certain natural products known as radiomimetic compounds, molecular compounds which affect DNA in similar ways to radiation exposure. Radiomimetic compounds induce double-strand breaks in DNA via highly specific, concerted free-radical attacks on the deoxyribose moieties in both strands of DNA. General mechanism Many radiomimetic compounds are enediynes, which undergo the Bergman cyclization reaction to produce a 1,4-didehydrobenzene diradical. The 1,4-didehydrobenzene diradical is highly reactive, and will abstract hydrogens from any possible hydrogen-donor. In the presence of DNA, the 1,4-didehydrobenzene diradical abstracts hydrogens from the deoxyribose sugar backbone, predominantly at the C-1′, C-4′ and C-5′ positions. Hydrogen abstraction causes radical formation at the reacted carbon. The carbon radical reacts with molecular oxygen, which leads to a strand break in the DNA through a variety of mechanisms. 1,4-Didehydrobenzene is able to position itself in such a way that it can abstract proximal hydrogens from both strands of DNA. This produces a double-strand break in the DNA, which can lead to cellular apoptosis if not repaired. Enediynes generally undergo the Bergman cyclization at temperatures exceeding 200 °C. However, incorporating the enediyne into a 10-membered cyclic hydrocarbon makes the reaction more thermodynamically favorable by releasing the ring strain of the reactants. This allows the Bergman cyclization to occur at 37 °C, normal human body temperature. Molecules which incorporate enediynes into these larger ring structures have been found to be extremely cytotoxic. Natural products Enediynes are present in many complicated natural products. They were originally discovered in the early 1980s during a search for new anticancer products produced by microorganisms. Calicheamicin was one of the first such products identified and was originally found in a soil sample taken from Kerrville, Texas. These compounds are synthesized by bacteria as defense mechanisms due to their ability to cleave DNA through the formation of 1,4-didehydrobenzene from the enediyne component of the molecule. Calicheamicin and other related compounds share several common characteristics. The extended structures attached to the enediyne allow the compound to specifically bind DNA, in most cases to the minor groove of the double helix.
Additionally, part of the molecule, known as the “trigger”, activates the enediyne “warhead” under specific physiological conditions, generating 1,4-didehydrobenzene. Three classes of enediynes have since been identified: calicheamicin, dynemicin, and chromoprotein-based products. The calicheamicin types are defined by a methyl trisulfide group that is involved in triggering the molecule. Calicheamicin and the closely related esperamicin have been used as anticancer drugs due to their high toxicity and specificity. Dynemicin and its relatives are characterized by the presence of an anthraquinone and enediyne core. The anthraquinone component allows for specific binding of DNA at the 3′ side of purine bases through intercalation, a site that is different from calicheamicin. Its ability to cleave DNA is greatly increased in the presence of NADPH and thiol compounds. This compound has also found prominence as an antitumor agent. Chromoprotein enediynes are characterized by an unstable chromophore enediyne bound to an apoprotein. The chromophore is unreactive when bound to the apoprotein. Upon its release, it reacts to form 1,4-didehydrobenzene and subsequently cleaves DNA. Antitumor ability Most enediynes, including the ones listed above, have been used as potent antitumor antibiotics due to their ability to efficiently cleave DNA. Calicheamicin and esperamicin are the two most commonly used types due to their high specificity when binding to DNA, which minimizes unfavorable side reactions. They have been shown to be especially useful for treating acute myeloid leukemia. Additionally, calicheamicin is able to cleave DNA at low concentrations, proving to be up to 1000 times more effective than adriamycin at combating certain types of tumors. The use of free-radical mechanisms to treat certain types of cancer extends beyond enediynes. Tirapazamine generates a free radical under anoxic conditions instead of relying on the trigger mechanism of an enediyne. The free radical then continues on to cleave DNA in a similar manner to 1,4-didehydrobenzene in order to treat cancerous cells. It is currently in Phase III trials. Evolution of Meiosis Meiosis is a central feature of sexual reproduction in eukaryotes. The need to repair oxidative DNA damage caused by free radicals has been hypothesized to be a major driving force in the evolution of meiosis. References DNA DNA repair Molecular genetics
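As a numerical aside to the Fenton chemistry described above: with the iron(II) concentration held constant by catalytic regeneration, hydrogen peroxide decays with pseudo-first-order kinetics, and each peroxide consumed yields one hydroxyl radical. The Python sketch below is a toy model only; the rate constant and concentrations are assumed, order-of-magnitude values rather than measured ones:

```python
import math

k = 76.0        # assumed Fenton rate constant, M^-1 s^-1 (order of magnitude)
fe2 = 1e-6      # iron(II), held constant by regeneration, M (assumed)
h2o2 = 1e-4     # initial hydrogen peroxide, M (assumed)
h2o2_0 = h2o2

# d[H2O2]/dt = -k [Fe2+][H2O2]: integrate with a simple Euler loop.
dt, t_end, t = 1.0, 3600.0, 0.0
oh_total = 0.0
while t < t_end:
    d = k * fe2 * h2o2 * dt   # peroxide consumed this step
    h2o2 -= d
    oh_total += d             # one OH. radical per H2O2 consumed
    t += dt

analytic = h2o2_0 * math.exp(-k * fe2 * t_end)
print(h2o2, analytic)   # Euler result tracks the analytic exponential decay
print(oh_total)         # cumulative hydroxyl radical yield, M
```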
Free radical damage to DNA
[ "Chemistry", "Biology" ]
2,167
[ "Molecular genetics", "Cellular processes", "DNA repair", "Molecular biology" ]
39,271,639
https://en.wikipedia.org/wiki/A%20Boy%20and%20His%20Atom
A Boy and His Atom is a 2013 stop-motion animated short film released on YouTube by IBM Research. One minute in length, it was made by moving carbon monoxide molecules with a scanning tunneling microscope, a device that magnifies them 100 million times. These two-atom molecules were moved to create images, which were then saved as individual frames to make the film. The movie was recognized by the Guinness Book of World Records as the World's Smallest Stop-Motion Film in 2013. The scientists at IBM Research – Almaden who made the film are moving atoms to explore the limits of data storage because, as data creation and consumption get bigger, data storage needs to get smaller, all the way down to the atomic level. Traditional silicon transistor technology has become cheaper, denser and more efficient, but fundamental physical limitations suggest that scaling down is an unsustainable path to solving the growing Big Data dilemma. This team of scientists is particularly interested in starting on the smallest scale, single atoms, and building structures up from there. Using this method, IBM announced it can now store a single bit of information in just 12 atoms (current technology as of 2012 takes roughly one million atoms to store a single bit). Creation A Boy And His Atom was created by a team of IBM scientists – together with Ogilvy & Mather, IBM's longstanding advertising agency – at the company's Almaden Research Center in San Jose, California. Using a scanning tunneling microscope, carbon monoxide molecules were manipulated into place on a copper substrate with a copper needle at a distance of 1 nanometer. They remain in place, forming a bond with the substrate because of the extremely low temperature of 5 K (−268.15 °C; −450.67 °F) at which the device operates. The oxygen component of each molecule shows up as a dot when photographed by the scanning tunneling microscope, allowing the creation of images composed of many such dots. The team created 242 still images with 65 carbon monoxide molecules. The images were combined to make a stop-motion film. Each frame measures 45 by 25 nanometers. It took four researchers two weeks of 18-hour days to produce the film. The graphics and sound effects resemble those of early video games. "This movie is a fun way to share the atomic-scale world," said project leader Andreas J. Heinrich. "The reason we made this was not to convey a scientific message directly, but to engage with students, to prompt them to ask questions." In addition, the researchers created three still images to promote Star Trek Into Darkness—the Federation logo, the starship Enterprise, and a Vulcan salute. Reaction Guinness World Records certified the movie as The World's Smallest Stop-Motion Film ever made. The film was accepted into the Tribeca Online Film Festival and shown at the New York Tech Meet-up and the World Science Festival. The film surpassed a million views in 24 hours, and two million views in 48 hours, with more than 27,000 likes. As of October 2024, the film has over 24.3 million views and over 742,000 likes. Implications While the film was used by the researchers as a fun way to get students interested in science, it grew out of work that could increase the amount of data computers could store. In 2012, they demonstrated that they could store a bit of computer memory on a group of just 12 atoms instead of a million, the previous minimum.
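The storage-density improvement implied by those figures is straightforward arithmetic; the short Python sketch below makes it explicit (the 2 GB movie size is an assumed, illustrative number, not from IBM):

```python
atoms_per_bit_conventional = 1_000_000   # circa-2012 technology, per the article
atoms_per_bit_ibm = 12                   # IBM's 2012 demonstration

print(atoms_per_bit_conventional / atoms_per_bit_ibm)   # ~83333x denser

# Illustration: atoms needed to store an assumed 2 GB movie at 12 atoms/bit
bits = 2 * 10**9 * 8
print(bits * atoms_per_bit_ibm)   # 192000000000 atoms, a microscopic speck
```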
If it became commercially viable, "You could carry around, not just two movies on your iPhone," Heinrich said in a companion video about the film's production, "you could carry around every movie ever produced." See also Teeny Ted from Turnip Town, the "world's smallest book" which requires an electron microscope to be read References External links IBM researchers make world's smallest movie using atoms (w/ video) at Phys.org 2013 animated short films 2013 films Experiments IBM Individual particles Nanotechnology Scanning probe microscopy 2010s English-language films English-language short films
A Boy and His Atom
[ "Chemistry", "Materials_science", "Engineering" ]
816
[ "Nanotechnology", "Materials science", "Scanning probe microscopy", "Microscopy" ]
49,504,684
https://en.wikipedia.org/wiki/Torricelli%27s%20experiment
Torricelli's experiment was devised in Pisa in 1643 by the Italian scientist Evangelista Torricelli (1608–1647). The purpose of the experiment was to demonstrate that a vacuum can be produced and that the column of mercury is supported by atmospheric pressure. Context For much of human history, the pressure of gases like air was ignored, denied, or taken for granted, but as early as the 6th century BC, Greek philosopher Anaximenes of Miletus claimed that all things are made of air that is simply changed by varying levels of pressure. He could observe water evaporating and changing to a gas and felt that this applied even to solid matter. More condensed air made colder, heavier objects, and expanded air made lighter, hotter objects. This was akin to how gases become less dense when warmer and more dense when cooler. Aristotle stated in some writings that "nature abhors a vacuum" and also that air has no mass/weight. The popularity of that philosopher kept this the dominant view in Europe for two thousand years. Even Galileo accepted it, believing that the pull of vacuum creates a siphon and that the pull can be overcome if the siphon is high enough. In the 17th century, Evangelista Torricelli conducted experiments with mercury that allowed him to measure the presence of air. He would dip a glass tube, closed at one end, into a bowl of mercury and raise the closed end out of it, keeping the open end submerged. The weight of the mercury would pull it down, leaving a partial vacuum at the far end. This validated his belief that air/gas has mass, creating pressure on things around it. The discovery helped bring Torricelli to his famous conclusion that "we live submerged at the bottom of an ocean of the element air." This test was essentially the first documented pressure gauge. In 1647 Valerianus Magnus published his Demonstratio ocularis, in which he claims to have proved the existence of the vacuum in the court of the king of Poland, Ladislaus IV, in Warsaw by means of an experiment identical to that carried out by Torricelli three years earlier. Three months after Magnus, Blaise Pascal published his Expériences nouvelles touchant le vide, giving details of his first barometric experiments. Pascal went farther than Torricelli, having his brother-in-law try the experiment at different altitudes on a mountain and finding, indeed, that the farther down in the ocean of atmosphere, the higher the pressure. Procedure The experiment uses a simple barometer tube to measure the pressure of air. The tube is filled with mercury to about 75% of its length. Any air bubbles in the tube must be removed by inverting it several times. After that, clean mercury is added until the tube is completely full. The tube is then inverted and placed in a dish full of mercury. This causes the mercury in the tube to fall until the difference between the mercury level in the dish and that in the tube is about 760 mm. Even when the tube is shaken or tilted, this height difference is not affected, because it is set by the atmospheric pressure. Conclusion Torricelli concluded that the mercury in the tube is supported by the atmospheric pressure acting on the surface of the mercury in the dish. He also stated that the changes of liquid level from day to day are caused by the variation of atmospheric pressure. The empty space in the tube is called the Torricellian vacuum.
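The roughly 760 mm column height follows directly from hydrostatic balance, P = ρgh. A short Python check, using conventional standard-atmosphere values for the pressure, the density of mercury, and gravity:

```python
P = 101_325      # standard atmospheric pressure, Pa
rho = 13_595.1   # density of mercury near 0 degrees C, kg/m^3
g = 9.80665      # standard gravity, m/s^2

h = P / (rho * g)                 # hydrostatic balance: P = rho * g * h
print(round(h * 1000, 1), "mm")   # 760.0 mm, the height Torricelli observed
```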
760 mmHg = 1 atm 1 atm = 1013.25 mbar or hPa 1 mbar or hPa = 0.750062 mmHg 1 pascal = 1 newton per square metre (SI unit) 1 hectopascal = 100 pascals References 1643 in science Science and technology in Italy Physics experiments Pressure
Torricelli's experiment
[ "Physics" ]
752
[ "Scalar physical quantities", "Mechanical quantities", "Physics experiments", "Physical quantities", "Pressure", "Experimental physics", "Wikipedia categories named after physical quantities" ]
49,509,508
https://en.wikipedia.org/wiki/Helicase%E2%80%93primase%20complex
A helicase–primase complex (also helicase-primase, Hel/Prim, H-P or H/P) is a complex of enzymes including DNA helicase and DNA primase. A helicase-primase associated factor protein may also be present. The complex is used by herpesviruses, in which it is responsible for lytic DNA virus replication. In many dsDNA viruses, primase and helicase are fused into a single polypeptide chain, so that the primase and helicase domains correspond to the N-terminal and C-terminal parts of the protein, respectively. A helicase-primase inhibitor (HPI) is a drug that blocks this activity by acting as an enzyme inhibitor. List of H-P by virus name EBV: helicase: BBLF4, primase: BSLF1, accessory protein: BBLF2/3 List of H-P inhibitors Amenamevir (ASP2151) Pritelivir (BAY 57–1293, AIC316) BILS 22 BS T157602 References Viral nonstructural proteins Enzymes Helicases
Helicase–primase complex
[ "Biology" ]
247
[ "Virus stubs", "Viruses" ]
49,510,292
https://en.wikipedia.org/wiki/Alkynylation
In organic chemistry, alkynylation is an addition reaction in which a terminal alkyne (HC≡CR) is added to a carbonyl group (C=O) to form an α-alkynyl alcohol. When the acetylide is formed from acetylene (HC≡CH), the reaction gives an α-ethynyl alcohol. This process is often referred to as ethynylation. Such processes often involve metal acetylide intermediates. Scope The principal reaction of interest involves the addition of the acetylene (HC≡CR'') to a ketone or aldehyde (RR'C=O): RR'C=O + HC≡CR'' → RR'C(OH)C≡CR'' The reaction proceeds with retention of the triple bond. For aldehydes and unsymmetrical ketones, the product is chiral, hence there is interest in asymmetric variants. These reactions invariably involve metal-acetylide intermediates. This reaction was discovered by chemist John Ulric Nef in 1899 while experimenting with reactions of elemental sodium, phenylacetylene, and acetophenone. For this reason, the reaction is sometimes referred to as Nef synthesis. Sometimes this reaction is erroneously called the Nef reaction, a name more often used to describe a different reaction (see Nef reaction). Chemist Walter Reppe coined the term ethynylation during his work with acetylene and carbonyl compounds. In one example, the alkyne proton of ethyl propiolate is deprotonated by n-butyllithium at −78 °C to form lithium ethyl propiolate, to which cyclopentanone is added, forming a lithium alkoxide. Acetic acid is added to remove lithium and liberate the free alcohol. Modifications Several modifications of alkynylation reactions are known: In the Arens–van Dorp synthesis the compound ethoxyacetylene is converted to a Grignard reagent and reacted with a ketone; the reaction product is a propargyl alcohol. The Isler modification is a modification of the Arens–van Dorp synthesis where ethoxyacetylene is replaced by β-chlorovinyl ethyl ether and lithium amide. Catalytic variants Alkynylations, including the asymmetric variety, have been developed as metal-catalyzed reactions. Various catalytic additions of alkynes to electrophiles in water have also been developed. Uses Alkynylation finds use in synthesis of pharmaceuticals, particularly in the preparation of steroid hormones. For example, ethynylation of 17-ketosteroids produces important contraceptive medications known as progestins. Examples include drugs such as Norethisterone, Ethisterone, and Lynestrenol. Hydrogenation of these compounds produces anabolic steroids with oral bioavailability, such as Norethandrolone. Alkynylation is used to prepare commodity chemicals such as propargyl alcohol, butynediol, 2-methylbut-3-yn-2-ol (a precursor to isoprenoids such as vitamin A), 3-hexyne-2,5-diol (a precursor to Furaneol), and sulcatone (a precursor to linalool). Reaction conditions For the stoichiometric reactions involving alkali metal or alkaline earth acetylides, work-up for the reaction requires liberation of the alcohol. To achieve this hydrolysis, aqueous acids are often employed. RR'C(ONa)C≡CR'' + CH3COOH (acetic acid) → RR'C(OH)C≡CR'' + CH3COONa (sodium acetate) Common solvents for the reaction include ethers, acetals, dimethylformamide, and dimethyl sulfoxide. Variations Grignard reagents Grignard reagents of acetylene or alkynes can be used to perform alkynylations on compounds that are liable to polymerization reactions via enolate intermediates.
However, substituting lithium for sodium or potassium acetylides accomplishes similar results, often giving this route little advantage over the conventional reaction. Favorskii reaction The Favorskii reaction is an alternative set of reaction conditions, which involves prereaction of the acetylene with an alkali metal hydroxide such as KOH. The reaction proceeds through equilibria, making the reaction reversible: HC≡CH + KOH ⇌ HC≡CK + H2O RR'C=O + HC≡CK ⇌ RR'C(OK)C≡CH To overcome this reversibility, the reaction often uses an excess of base to trap the water as hydrates. Reppe chemistry Chemist Walter Reppe pioneered catalytic, industrial-scale ethynylations using acetylene with alkali metal and copper(I) acetylides. These reactions are used to manufacture propargyl alcohol and butynediol. Alkali metal acetylides, which are often more effective for ketone additions, are used to produce 2-methyl-3-butyn-2-ol from acetylene and acetone. See also Alkylation Methylation Organolithium reagent Organosodium chemistry Alkyne coupling reactions Sonogashira coupling Glaser coupling Cadiot–Chodkiewicz coupling Castro–Stephens coupling A3 coupling reaction References Carbon-carbon bond forming reactions Organometallic chemistry Addition reactions
Alkynylation
[ "Chemistry" ]
1,182
[ "Organometallic chemistry", "Carbon-carbon bond forming reactions", "Organic reactions" ]
41,989,305
https://en.wikipedia.org/wiki/In%20vivo%20bioreactor
The in vivo bioreactor is a tissue engineering paradigm that uses bioreactor methodology to grow neotissue in vivo that augments or replaces malfunctioning native tissue. Tissue engineering principles are used to construct a confined, artificial bioreactor space in vivo that hosts a tissue scaffold and key biomolecules necessary for neotissue growth. Said space often requires inoculation with pluripotent or specific stem cells to encourage initial growth, and access to a blood source. A blood source allows for recruitment of stem cells from the body alongside nutrient delivery for continual growth. This delivery of cells and nutrients to the bioreactor eventually results in the formation of a neotissue product. Overview Conceptually, the in vivo bioreactor was born of complications in bone grafting, a repair method for bone fracture, bone loss, necrosis, and tumor reconstruction. Traditional bone grafting strategies require fresh, autologous bone harvested from the iliac crest; this harvest site is limited by the amount of bone that can safely be removed, as well as associated pain and morbidity. Other methods include cadaverous allografts and synthetic options (often made of hydroxyapatite) that have become available in recent years. In response to the question of limited bone sourcing, it has been posited that bone can be grown to fit a damaged region within the body through the application of tissue engineering principles. Tissue engineering is a biomedical engineering discipline that combines biology, chemistry, and engineering to design neotissue (newly formed tissue) on a scaffold. Tissue scaffolds are functionally analogous to the extracellular matrix found in native tissue, acting as a site upon which regenerative cellular components adsorb to encourage cellular growth. This cellular growth is then artificially stimulated by additive growth factors in the environment that encourage tissue formation. The scaffold is often seeded with stem cells and growth additives to encourage a smooth transition from cells to tissues, and more recently, organs. Traditionally, this method of tissue engineering is performed in vitro, where scaffold components and environmental manipulation recreate in vivo stimuli that direct growth. Environmental manipulation includes changes in physical stimulation, pH, potential gradients, cytokine gradients, and oxygen concentration. The overarching goal of in vitro tissue engineering is to create a functional tissue that is equivalent to native tissue in terms of composition, biomechanical properties, and physiological performance. However, in vitro tissue engineering suffers from a limited ability to mimic in vivo conditions, often leading to inadequate tissue substitutes. Therefore, in vivo tissue engineering has been suggested as a method to circumvent the tedium of environmental manipulation and use native in vivo stimuli to direct cell growth. To achieve in vivo tissue growth, an artificial bioreactor space must be established in which cells may grow. The in vivo bioreactor depends on harnessing the reparative qualities of the body to recruit stem cells into an implanted scaffold, and utilize vasculature to supply all necessary growth components. Design Cells Tissue engineering done in vivo is capable of recruiting local cellular populations into a bioreactor space. Indeed, a range of neotissue growth has been shown: bone, cartilage, fat, and muscle.
In theory, any tissue type could be grown in this manner if all necessary components (growth factors, environmental and physical cues) are met. Recruitment of stem cells requires a complex process of mobilization from their niche, though research suggests that mature cells transplanted upon the bioreactor scaffold can improve stem cell recruitment. These cells secrete growth factors that promote repair and can be co-cultured with stem cells to improve tissue formation. Scaffolds Scaffold materials are designed to enhance tissue formation through control of the local and surrounding environments. Scaffolds are critical in regulating cellular growth and provide a volume in which vascularization and stem cell differentiation can occur. Scaffold geometry significantly affects tissue differentiation through physical growth cues. Predicting tissue formation computationally requires theories that link physical growth cues to cell differentiation. Current models rely on mechano-regulation theory, widely shaped by Prendergast et al., for predicting cell growth. Thus a quantitative analysis of the geometry and materials commonly used in tissue scaffolds is possible. Such materials include: Porous ceramic and demineralized bone matrix supports Coralline cylinders Biodegradable material such as poly(α-hydroxy esters) Decellularized tissue matrices Injectable biomaterials or hydrogels are typically composed of polysaccharides, proteins/peptide mimetics, or synthetic polymers such as poly(ethylene glycol). Peptide amphiphile (PA) systems are self-assembling and can form solid bioactive scaffolds after injection within the body. Inert systems have been proven to be adequate for tissue formation. Cartilage formation has occurred by injecting an inert agarose gel beneath the periosteum in a rabbit model, although vascularization was restricted. Fibrin Sponges made from collagen Bioreactors Methods Initially, focusing on bone growth, subcutaneous pockets were used for bone prefabrication as a simple in vivo bioreactor model. The pocket is an artificially created space between varying levels of subcutaneous fascia. The location provides regenerative cues to the bioreactor implant but does not rely on pre-existing bone tissue as a substrate. Furthermore, these bioreactors may be wrapped with muscle tissue to encourage vascularization and bone growth. Another strategy is through the use of a periosteal flap wrapped around the bioreactor, or the scaffold itself, to create an in vivo bioreactor. This strategy utilizes the guided bone regeneration treatment scheme, and is a safe method for bone prefabrication. These 'flap' methods of packing the bioreactor within fascia, or wrapping it in tissue, are effective, though somewhat random due to the non-directed vascularization these methods incur. The axial vascular bundle (AVB) strategy requires that an artery and vein are inserted in an in vivo bioreactor to deliver growth factors and cells and to remove waste. This ultimately results in extensive vascularization of the bioreactor space and a vast improvement in growth capability. This vascularization, though effective, is limited by the surface contact that it can achieve between the scaffold and the capillaries filling the bioreactor space. Thus, a combination of the flap and AVB techniques can maximize the growth rate and vascular contact of the bioreactor, as suggested by Han and Dai, by inserting a vascular bundle into a scaffold wrapped in either musculature or periosteum.
If inadequate pre-existing vasculature is present in the growth site due to damage or disease, an arteriovenous loop (AVL) can be used. The AVL strategy requires a surgical connection be made between an artery and a vein to form an arteriovenous fistula, which is then placed within an in vivo bioreactor space containing a scaffold. A capillary network will form from this loop and accelerate the vascularization of new tissue. Materials Materials used in the construction of an in vivo bioreactor space vary widely depending on the type of substrate, type of tissue, and mechanical demands of said tissue being grown. At its simplest, a bioreactor space can be created between tissue layers through the use of hydrogel injections. Early models used an impermeable silicone shroud to encase a scaffold, though more recent studies have begun 3D printing custom bioreactor molds to further enhance the mechanical growth properties of the bioreactors. The choice of bioreactor chamber material generally requires that it be nontoxic and medical grade; examples include "silicon, polycarbonate, and acrylic polymer". Recently both Teflon and titanium have been used in the growth of bone. One study utilized poly(methyl methacrylate) as a chamber material and 3D printed hollow rectangular blocks. Yet another study pushed the limits of the in vivo bioreactor by proving that the omentum is suitable as a bioreactor space and chamber. Specifically, highly vascularized and functional bladder tissue was grown within the omentum space. Examples An example of the implementation of the IVB approach was in the engineering of autologous bone by injecting calcium alginate in a sub-periosteal location. The periosteum is a membrane that covers the long bones, jawbone, ribs and the skull. This membrane contains an endogenous population of pluripotent cells called the periosteal cells, which are a type of mesenchymal stem cells (MSC), which reside in the cambium layer, i.e., the side facing the bone. A key step in the procedure is the elevation of the periosteum without damaging the cambium surface, and to ensure this, a new technique called hydraulic elevation was developed. The sub-periosteal site was chosen because stimulation of the cambium layer using transforming growth factor–beta resulted in enhanced chondrogenesis, i.e., formation of cartilage. In development the formation of bone can either occur via a cartilage template initially formed by the MSCs that then gets ossified through a process called endochondral ossification, or directly from MSC differentiation to bone via a process termed intra-membranous ossification. Upon exposure of the periosteal cells to calcium from the alginate gel, these cells become bone cells and start producing bone matrix through the intra-membranous ossification process, recapitulating all steps of bone matrix deposition. The extension of the IVB paradigm to engineering autologous hyaline cartilage was also recently demonstrated. In this case, agarose is injected and this triggers local hypoxia, which then results in the differentiation of the periosteal MSCs into articular chondrocytes, i.e. cells similar to those found in the joint cartilage. Since this process occurs in a relatively short period of less than two weeks and cartilage can remodel into bone, this approach might provide some advantages in treatment of both cartilage and bone loss. The IVB concept has yet to be realized in humans, however, and work toward this is currently being undertaken.
See also Biomedical engineering Tissue engineering Bioreactor Bone Grafting Guided Bone and Tissue Regeneration Further reading References Medical technology Regenerative biomedicine Tissue engineering
In vivo bioreactor
[ "Chemistry", "Engineering", "Biology" ]
2,212
[ "Biological engineering", "Cloning", "Chemical engineering", "Tissue engineering", "Medical technology" ]
41,992,525
https://en.wikipedia.org/wiki/CRISPR%20interference
CRISPR interference (CRISPRi) is a genetic perturbation technique that allows for sequence-specific repression of gene expression in prokaryotic and eukaryotic cells. It was first developed by Stanley Qi and colleagues in the laboratories of Wendell Lim, Adam Arkin, Jonathan Weissman, and Jennifer Doudna. Sequence-specific activation of gene expression is referred to as CRISPR activation (CRISPRa). Based on the CRISPR (clustered regularly interspaced short palindromic repeats) pathway, a bacterial genetic immune system, the technique provides a complementary approach to RNA interference. The difference between CRISPRi and RNAi, though, is that CRISPRi regulates gene expression primarily on the transcriptional level, while RNAi controls genes on the mRNA level. Background Many bacteria and most archaea have an adaptive immune system which incorporates CRISPR RNA (crRNA) and CRISPR-associated (cas) genes. The CRISPR interference (CRISPRi) technique was first reported by Lei S. Qi and researchers at the University of California at San Francisco in early 2013. The technology uses a catalytically dead Cas9 (usually denoted as dCas9) protein that lacks endonuclease activity to regulate genes in an RNA-guided manner. Targeting specificity is determined by complementary base-pairing of a single guide RNA (sgRNA) to the genomic locus. sgRNA is a chimeric noncoding RNA that can be subdivided into three regions: a 20 nt base-pairing sequence, a 42 nt dCas9-binding hairpin and a 40 nt terminator (bacteria, yeast, fruit flies, zebrafish, mice). When designing a synthetic sgRNA, only the 20 nt base-pairing sequence is modified. Secondary variables must also be considered: off-target effects (for which a simple BLAST run of the base-pairing sequence is required), maintenance of the dCas9-binding hairpin structure, and ensuring that no restriction sites are present in the modified sgRNA, as this may pose a problem in downstream cloning steps. Due to the simplicity of sgRNA design, this technology is amenable to genome-wide scaling. CRISPRi relies on the generation of catalytically inactive Cas9. This is accomplished by introducing point mutations in the two catalytic residues (D10A and H840A) of the gene encoding Cas9. In doing so, dCas9 is unable to cleave dsDNA but retains the ability to target DNA. Together, sgRNA and dCas9 constitute a minimal system for gene-specific regulation. Transcriptional regulation Repression CRISPRi can sterically repress transcription by blocking either transcriptional initiation or elongation. This is accomplished by designing sgRNA complementary to the promoter or the exonic sequences. When the target lies within the coding sequence, the level of transcriptional repression is strand-specific. Depending on the nature of the CRISPR effector, either the template or non-template strand leads to stronger repression. For dCas9 (based on a Type-2 CRISPR system), repression is stronger when the guide RNA is complementary to the non-template strand. It has been suggested that this is due to the activity of helicase, which unwinds the RNA:DNA heteroduplex ahead of RNA pol II when the sgRNA is complementary to the template strand. Unlike transcription elongation block, silencing is independent of the targeted DNA strand when targeting the transcriptional start site. In prokaryotes, this steric inhibition can repress transcription of the target gene by almost 99.9%; in archaea, more than 90% repression was achieved; in human cells, up to 90% repression was observed.
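Because targeting reduces to finding a 20 nt protospacer adjacent to an NGG PAM, candidate sgRNA sites can be enumerated with a few lines of code. The Python sketch below scans one strand of a made-up toy sequence; a real design workflow would also scan the reverse complement and check off-targets (e.g., with BLAST), as noted above:

```python
import re

def find_sgrna_sites(seq):
    """Yield (position, 20 nt protospacer, PAM) for every NGG PAM on the
    given strand that has 20 nt of upstream sequence available."""
    for m in re.finditer(r'(?=([ACGT]GG))', seq):  # lookahead catches overlapping PAMs
        pam_start = m.start()
        if pam_start >= 20:
            yield pam_start - 20, seq[pam_start - 20:pam_start], m.group(1)

toy = "ATGCGTACCGGTTAGCATCGATCGGATCCTAGCTAGGCTAACGG"  # made-up sequence
for pos, protospacer, pam in find_sgrna_sites(toy):
    print(pos, protospacer, pam)
```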
In bacteria, it is possible to saturate the target with a high enough level of dCas9 complex. In this case, the repression strength only depends on the probability that dCas9 is ejected upon collision with the RNA polymerase, which is determined by the guide sequence. Higher temperatures are also associated with higher ejection probability, thus weaker repression. In eukaryotes, CRISPRi can also repress transcription via an effector domain. Fusing a repressor domain to dCas9 allows transcription to be further repressed by inducing heterochromatinization. For example, the well-studied Krüppel associated box (KRAB) domain can be fused to dCas9 to repress transcription of the target gene up to 99% in human cells. Improvements in efficiency Whereas genome-editing by the catalytically active Cas9 nuclease can be accompanied by irreversible off-target genomic alterations, CRISPRi is highly specific, with minimal and reversible off-target effects, as shown for two distinct sgRNA sequences. Nonetheless, several methods have been developed to improve the efficiency of transcriptional modulation. Identifying the transcription start site of a target gene and taking sgRNA design preferences into account improve efficiency, as does the presence of accessible chromatin at the target site. Other methods Along with the other improvements mentioned, factors such as the distance from the transcription start and the local chromatin state may be critical parameters in determining activation/repression efficiency. Optimization of dCas9 and sgRNA expression, stability, nuclear localization, and interaction will likely allow for further improvement of CRISPRi efficiency in mammalian cells. Applications Gene knockdown A significant portion of the genome (both reporter and endogenous genes) in eukaryotes has been shown to be targetable using lentiviral constructs to express dCas9 and sgRNAs, with comparable efficiency to existing techniques such as RNAi and TALE proteins. In tandem or as its own system, CRISPRi could be used to achieve the same applications as in RNAi. For bacteria, gene knockdown by CRISPRi has been fully implemented and characterized (off-target analysis, leaky repression) for both Gram-negative E. coli and Gram-positive B. subtilis. Not only in bacteria but also in archaea (e.g., M. acetivorans) CRISPRi-Cas9 was successfully utilized to knock down several genes/operons related to nitrogen fixation. Allelic series Differential gene expression can be achieved by modifying the efficiency of sgRNA base-pairing to the target loci. In theory, modulating this efficiency can be used to create an allelic series for any given gene, in essence creating a collection of hypo- and hypermorphs. These collections can be used in a wide range of genetic investigations. For hypomorphs, this allows the incremental reduction of gene function as opposed to the binary nature of gene knockouts and the unpredictability of knockdowns. For hypermorphs, this is in contrast to the conventional method of cloning the gene of interest under promoters with variable strength. Genome loci imaging Fusing a fluorescent protein to dCas9 allows for imaging of genomic loci in living human cells. Compared to fluorescence in situ hybridization (FISH), the method uniquely allows for dynamic tracking of chromosome loci. This has been used to study chromatin architecture and nuclear organization dynamics in laboratory cell lines including HeLa cells.
Stem cells Activation of Yamanaka factors by CRISPRa has been used to induce pluripotency in human and mouse cells, providing an alternative method to iPS technology. In addition, large-scale activation screens could be used to identify proteins that promote induced pluripotency or, conversely, promote differentiation to a specific cell lineage. Genetic screening The ability to upregulate gene expression using dCas9-SunTag with a single sgRNA also opens the door to large-scale genetic screens, such as Perturb-seq, to uncover phenotypes that result from increased or decreased gene expression, which will be especially important for understanding the effects of gene regulation in cancer. Furthermore, CRISPRi systems have been shown to be transferable via horizontal gene transfer mechanisms such as bacterial conjugation, and specific repression of reporter genes in recipient cells has been demonstrated. CRISPRi could serve as a tool for genetic screening and potentially bacterial population control. Advantages and limitations Advantages CRISPRi can silence a target gene of interest with up to 99.9% repression. The strength of the repression can also be tuned by changing the amount of complementarity between the guide RNA and the target. Unlike inducible promoters, partial repression by CRISPRi does not add transcriptional noise to the target's expression. Since the repression level is encoded in a DNA sequence, strains carrying various expression levels can be grown in competition and identified by sequencing. Since CRISPRi is based on Watson-Crick base-pairing of sgRNA-DNA and an NGG PAM motif, selection of targetable sites within the genome is straightforward and flexible. Carefully defined protocols have been developed. Multiple sgRNAs can not only be used to control multiple different genes simultaneously (multiplex CRISPRi), but also to enhance the efficiency of regulating the same gene target. A popular strategy to express many sgRNAs simultaneously is to array the sgRNAs in a single construct with multiple promoters or processing elements. For example, Extra-Long sgRNA Arrays (ELSAs) use nonrepetitive parts to allow direct synthesis of 12-sgRNA arrays from a gene synthesis provider, can be directly integrated into the E. coli genome without homologous recombination occurring, and can simultaneously target many genes to achieve complex phenotypes. While the two systems can be complementary, CRISPRi provides advantages over RNAi. As an exogenous system, CRISPRi does not compete with endogenous machinery such as microRNA expression or function. Furthermore, because CRISPRi acts at the DNA level, one can target transcripts such as noncoding RNAs, microRNAs, antisense transcripts, nuclear-localized RNAs, and polymerase III transcripts. Finally, CRISPRi possesses a much larger targetable sequence space; promoters and, in theory, introns can also be targeted. In E. coli, construction of a gene knockdown strain is extremely fast and requires only one-step oligo recombineering. Limitations The requirement of a protospacer adjacent motif (PAM) sequence limits the number of potential target sequences. Cas9 and its homologs may use different PAM sequences, and therefore could theoretically be utilized to expand the number of potential target sequences. Sequence specificity to target loci is only 14 nt long (12 nt of sgRNA and 2 nt of the PAM), which can recur around 11 times in a human genome. Repression is inversely correlated with the distance of the target site from the transcription start site.
Genome-wide computational predictions or selection of Cas9 homologs with a longer PAM may reduce nonspecific targeting. Endogenous chromatin states and modifications may prevent the sequence-specific binding of the dCas9-sgRNA complex. The level of transcriptional repression in mammalian cells varies between genes. Much work is needed to understand the role of local DNA conformation and chromatin in relation to binding and regulatory efficiency. CRISPRi can influence genes that are in close proximity to the target gene. This is especially important when targeting genes that either overlap other genes (sense or antisense overlapping) or are driven by a bidirectional promoter. Sequence-specific toxicity has been reported in eukaryotes, with some sequences in the PAM-proximal region causing a large fitness burden. This phenomenon, called the "bad seed effect", is still unexplained but can be reduced by optimizing the expression level of dCas9. References Genome editing Repetitive DNA sequences Non-coding RNA
CRISPR interference
[ "Engineering", "Biology" ]
2,394
[ "Genetics techniques", "Genome editing", "Genetic engineering", "Molecular genetics", "Repetitive DNA sequences" ]
41,992,983
https://en.wikipedia.org/wiki/Cotriple%20homology
In algebra, given a category C with a cotriple, the n-th cotriple homology of an object X in C with coefficients in a functor E is the n-th homotopy group of E applied to the augmented simplicial object induced from X by the cotriple. The term "homology" is used because, in the abelian case, by the Dold–Kan correspondence, the homotopy groups are the homology of the corresponding chain complex. Example: Let N be a left module over a ring R and let E be the functor − ⊗R N. Let F be the left adjoint of the forgetful functor U from the category of left R-modules to Set; i.e., the free module functor. Then FU defines a cotriple and the n-th cotriple homology of a module M with coefficients in E is the n-th left derived functor of E evaluated at M; i.e., Tor_n^R(M, N). Example (algebraic K-theory): Let us write GL for the functor R ↦ GL(R). As before, FU defines a cotriple on the category of rings, with F the free ring functor and U the forgetful functor. For a ring R, this construction expresses the n-th K-group of R as a cotriple homotopy group of GL. This example is an instance of nonabelian homological algebra. Notes References Further reading Who Threw a Free Algebra in My Free Algebra?, a blog post. Adjoint functors Category theory Homotopy theory
Cotriple homology
[ "Mathematics" ]
296
[ "Functions and mappings", "Mathematical structures", "Category theory stubs", "Mathematical objects", "Fields of abstract algebra", "Category theory", "Mathematical relations" ]
33,749,075
https://en.wikipedia.org/wiki/C15H11ClN2O2
The molecular formula C15H11ClN2O2 (molar mass: 286.71 g/mol) may refer to: Demoxepam Oxazepam Molecular formulas
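The quoted molar mass can be checked against standard atomic weights with a few lines of Python:

```python
# Standard atomic weights (g/mol), rounded
weights = {"C": 12.011, "H": 1.008, "Cl": 35.45, "N": 14.007, "O": 15.999}
formula = {"C": 15, "H": 11, "Cl": 1, "N": 2, "O": 2}

molar_mass = sum(weights[el] * n for el, n in formula.items())
print(molar_mass)   # ~286.715 g/mol, consistent with the 286.71 quoted above
```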
C15H11ClN2O2
[ "Physics", "Chemistry" ]
58
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
33,751,968
https://en.wikipedia.org/wiki/Biggest%20little%20polygon
In geometry, the biggest little polygon for a number n is the n-sided polygon that has diameter one (that is, every two of its points are within unit distance of each other) and that has the largest area among all diameter-one n-gons. One non-unique solution when n = 4 is a square, and the solution is a regular polygon when n is an odd number, but the solution is irregular otherwise. Quadrilaterals For n = 4, the area of an arbitrary quadrilateral is given by the formula A = pq sin(θ)/2, where p and q are the two diagonals of the quadrilateral and θ is either of the angles they form with each other. In order for the diameter to be at most one, both p and q must themselves be at most one. Therefore, the quadrilateral has largest area when the three factors in the area formula are individually maximized, with p = q = 1 and sin(θ) = 1. The condition that p = q means that the quadrilateral is an equidiagonal quadrilateral (its diagonals have equal length), and the condition that sin(θ) = 1 means that it is an orthodiagonal quadrilateral (its diagonals cross at right angles). The quadrilaterals that are both equidiagonal and orthodiagonal are the midsquare quadrilaterals. They include the square with unit-length diagonals, which has area 1/2. Infinitely many other midsquare quadrilaterals also have diameter one and have the same area as the square, so in this case the solution is not unique. Odd numbers of sides For odd values of n, it was shown by Karl Reinhardt in 1922 that a regular polygon has largest area among all diameter-one polygons. Even numbers of sides In the case n = 6, the unique optimal polygon is not regular. The solution to this case was published in 1975 by Ronald Graham, answering a question posed in 1956 by Hanfried Lenz; it takes the form of an irregular equidiagonal pentagon with an obtuse isosceles triangle attached to one of its sides, with the distance from the apex of the triangle to the opposite pentagon vertex equal to the diagonals of the pentagon. Its area is 0.674981..., a number that satisfies a polynomial equation of degree ten. Because the Galois group of this equation is insoluble, the area cannot be expressed in closed form using nested radicals. Graham conjectured that the optimal solution for the general case of even values of n consists in the same way of an equidiagonal (n − 1)-gon with an isosceles triangle attached to one of its sides, its apex at unit distance from the opposite (n − 1)-gon vertex. In the case n = 8 this was verified by a computer calculation by Audet et al. Graham's proof that his hexagon is optimal, and the computer proof of the n = 8 case, both involved a case analysis of all possible n-vertex thrackles with straight edges. The full conjecture of Graham, characterizing the solution to the biggest little polygon problem for all even values of n, was proven in 2007 by Foster and Szabo. See also Reinhardt polygon, the polygons maximizing perimeter for their diameter, maximizing width for their diameter, and maximizing width for their perimeter References External links Graham's Largest Small Hexagon, from the Hall of Hexagons Types of polygons Area Superlatives
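Reinhardt's result for odd n can be checked numerically. The Python sketch below is my own illustration, not from the article: for odd n, the diameter of a regular n-gon is its longest vertex-to-vertex diagonal, 2R cos(π/2n) for circumradius R, so setting the diameter to one fixes R.

```python
# Area of the regular n-gon with unit diameter, for odd n.
from math import pi, sin, cos

def biggest_little_area_odd(n: int) -> float:
    assert n % 2 == 1 and n >= 3
    R = 1.0 / (2.0 * cos(pi / (2 * n)))       # circumradius giving diameter 1
    return 0.5 * n * R * R * sin(2 * pi / n)  # standard regular-polygon area

for n in (3, 5, 7, 9):
    print(n, round(biggest_little_area_odd(n), 6))
# n = 3 gives sqrt(3)/4 ≈ 0.433013; as n grows the values approach
# pi/4 ≈ 0.785398, the area of a diameter-one disk.
```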
Biggest little polygon
[ "Physics", "Mathematics" ]
696
[ "Scalar physical quantities", "Physical quantities", "Quantity", "Size", "Wikipedia categories named after physical quantities", "Area" ]
33,752,291
https://en.wikipedia.org/wiki/Server%20Efficiency%20Rating%20Tool
The Server Efficiency Rating Tool (SERT) is a performance analysis tool that is specifically designed to address the requirements of the Environmental Protection Agency's ENERGY STAR for Servers v2.0 specification. The SERT Beta 1 was introduced in September 2011. Several SPEC member companies contributed to the development of the SERT, including AMD, Dell, Fujitsu, HPE, Intel, IBM, and Microsoft. References External links Benchmarks (computing) Evaluation of computers
Server Efficiency Rating Tool
[ "Technology", "Engineering" ]
122
[ "Computer engineering", "Computer engineering stubs", "Computing comparisons", "Evaluation of computers", "Computer performance", "Computer hardware stubs", "Benchmarks (computing)", "Computing stubs", "Computers" ]
29,824,007
https://en.wikipedia.org/wiki/Darwin%20Core%20Archive
Darwin Core Archive (DwC-A) is a biodiversity informatics data standard that makes use of the Darwin Core terms to produce a single, self-contained dataset for species occurrence, checklist, sampling event or material sample data. Essentially it is a set of text (CSV) files with a simple descriptor (meta.xml) to inform others how the files are organized. The format is defined in the Darwin Core Text Guidelines. It is the preferred format for publishing data to the GBIF network. Darwin Core The Darwin Core standard has been used to mobilize the vast majority of specimen occurrence and observational records within the GBIF network. The Darwin Core standard was originally conceived to facilitate the discovery, retrieval, and integration of information about modern biological specimens, their spatio-temporal occurrence, and their supporting evidence housed in collections (physical or digital). The Darwin Core today is broader in scope. It aims to provide a stable, standard reference for sharing information on biological diversity. As a glossary of terms, the Darwin Core provides stable semantic definitions with the goal of being maximally reusable in a variety of contexts. This means that Darwin Core may still be used in the same way it has historically been used, but may also serve as the basis for building more complex exchange formats, while still ensuring interoperability through a common set of terms. Archive format The central idea of an archive is that its data files are logically arranged in a star-like manner, with one core data file surrounded by any number of ’extensions’. Each extension record (or ‘extension file row’) points to a record in the core file; in this way, zero to many extension records can exist for each single core record, a more space-efficient method for data transfer than the alternative of including all the data within a single table, which could otherwise contain many empty cells. Details about recommended extensions can be found in their respective subsections and will be extensively documented in the GBIF registry, which will catalogue all available extensions. Sharing entire datasets instead of using pageable web services like DiGIR and TAPIR allows much simpler and more efficient data transfer. For example, retrieving 260,000 records via TAPIR takes about nine hours, issuing 1,300 HTTP requests to transfer 500 MB of XML-formatted data. The exact same dataset, encoded as DwC-A and zipped, becomes a 3 MB file. Therefore, GBIF highly recommends compressing an archive using ZIP or GZIP when generating a DwC-A. An archive requires stable identifiers for core records, but not for extensions. For any kind of shared data it is therefore necessary to have some sort of local record identifiers. It is good practice to maintain – with the original data – identifiers that are stable over time and are not being reused after the record is deleted. Where possible, globally unique identifiers should be provided instead of local ones. Archive descriptor To be completed. Dataset metadata A Darwin Core Archive should contain a file containing metadata describing the whole dataset. The Ecological Metadata Language (EML) is the most common format for this, but simple Dublin Core files are being used too. References External links Darwin Core Quick Reference Guide Biodiversity Information Standards (TDWG) Global Biodiversity Information Facility (GBIF) Biodiversity informatics Bioinformatics Knowledge representation Interoperability
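The star-shaped layout described above maps naturally onto a few lines of code. The Python sketch below is an illustration, not an official GBIF tool: it unzips an archive, reads the core file location and field separator from meta.xml, and streams core records. The archive path in the usage comment is a placeholder, and the descriptor is assumed to follow the usual Darwin Core text namespace.

```python
# Minimal Darwin Core Archive reader: locate the core file via meta.xml,
# then stream its rows. Uses only the Python standard library.
import csv
import io
import zipfile
import xml.etree.ElementTree as ET

NS = {"dwc": "http://rs.tdwg.org/dwc/text/"}

def read_core_rows(archive_path: str):
    with zipfile.ZipFile(archive_path) as zf:
        meta = ET.fromstring(zf.read("meta.xml"))
        core = meta.find("dwc:core", NS)
        location = core.find("dwc:files/dwc:location", NS).text
        # fieldsTerminatedBy may be an escaped sequence such as "\t"
        delimiter = core.get("fieldsTerminatedBy", ",").encode().decode("unicode_escape")
        with zf.open(location) as raw:
            text = io.TextIOWrapper(raw, encoding=core.get("encoding", "utf-8"))
            for row in csv.reader(text, delimiter=delimiter):
                yield row

# Example (path is hypothetical):
# for row in read_core_rows("occurrence_dwca.zip"):
#     print(row[:3])
```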
Darwin Core Archive
[ "Engineering", "Biology" ]
696
[ "Bioinformatics", "Biological engineering", "Telecommunications engineering", "Interoperability" ]
29,825,613
https://en.wikipedia.org/wiki/Salt%20and%20cardiovascular%20disease
Excess salt consumption has been extensively studied for its role in human physiology and impact on human health. Chronic, high intake of dietary salt is associated with hypertension and cardiovascular disease, among other adverse health outcomes. Major health and scientific organizations, such as the World Health Organization, the US Centers for Disease Control and Prevention, and the American Heart Association, have established high salt consumption as a major risk factor for cardiovascular diseases and stroke. Dietary salt is also known as sodium chloride. Effect of salt on blood pressure Salt fulfills several important biological functions in humans. The human body has evolved to compensate for high salt intake through regulatory systems such as the renin–angiotensin system. Salt is particularly involved with maintaining body fluid volume, including the regulation of osmotic balance in the blood, extracellular and intracellular fluids, and resting membrane potential. The well-known effect of sodium on blood pressure can be explained by comparing blood to a solution with its salinity changed by ingested salt. Artery walls are analogous to a selectively permeable membrane, and they allow solutes, including sodium and chloride, to pass through (or not), depending on osmosis. Circulating water and solutes in the body maintain blood pressure in the blood, as well as other functions such as regulation of body temperature. When salt is ingested, it is dissolved in the blood as two separate ions – Na+ and Cl−. The water potential in blood will decrease due to the increased solutes, and blood osmotic pressure will increase. While the kidney reacts to excrete excess sodium and chloride in the body, water retention causes blood pressure to increase. Reducing salt intake in chronic kidney disease A 2021 Cochrane review of people with chronic kidney disease, including those on dialysis, demonstrated robust evidence that salt reduction decreases systolic and diastolic blood pressure and albuminuria. However, there was moderate certainty evidence that some people may experience hypotensive symptoms, such as dizziness, following sudden reduction of salt intake. The effect of salt restriction on extracellular fluid, edema, and total body weight was uncertain. Dietary Approaches to Stop Hypertension-Sodium study The DASH-Sodium study was a sequel to the original DASH (Dietary Approaches to Stop Hypertension) study. Both studies were designed and conducted by the National Heart, Lung, and Blood Institute in the United States, each involving a large, randomized sample. While the original study was designed to test the effects of several varying nutrients on blood pressure, DASH-Sodium varies only in salt content in the diet. Participants were pre-hypertensive or at stage 1 hypertension, and either ate a DASH Diet or a diet reflecting an "average American Diet". During the intervention phase, participants ate their assigned diets containing three distinct levels of sodium in random order. Their blood pressure is monitored during the control period, and at all three intervention phases. The study concluded that the effect of a reduced dietary sodium intake alone on blood pressure is substantial and that the largest decrease in blood pressure occurred in those eating the DASH eating plan at the lowest sodium level (1,500 milligrams per day). However, this study is especially significant because participants in both the control and DASH diet groups showed lowered blood pressure with decreased sodium alone. 
In agreement with studies regarding salt sensitivity, participants of African descent showed large reductions in blood pressure. Hypertension and cardiovascular disease In 2018, the American Heart Association published an advisory stating that "if the U.S. population dropped its sodium intake to 1,500 mg/day, overall blood pressure could decrease by 25.6%, with an estimated $26.2 billion in health care savings. Another estimate projected that achieving this goal would reduce cardiovascular disease deaths by anywhere from 500,000 to nearly 1.2 million over the next decade." Evidence from epidemiological studies and from human and animal intervention experiments supports the link between high salt intake and hypertension. A Cochrane review and meta-analysis of clinical trials showed that reduced sodium intake reduces blood pressure in hypertensive and normotensive subjects. Since controlling hypertension is related to a reduced risk of cardiovascular disease, it is plausible that salt consumption is a risk factor for cardiovascular disease. However, to properly study the effects of sodium intake levels on the risk of developing cardiovascular disease, long-term studies of large groups using both dietary and biochemical measures are necessary. As of 2019, major government research organizations, such as the US Centers for Disease Control and Prevention and the European Food Safety Authority, advise consumers to reduce their consumption of salt to lower the risk of cardiovascular diseases. One 2016 review found that five studies were supportive of the evidence that reduced sodium intake lowers cardiovascular disease incidence and mortality, three contradicted this evidence, and two found insufficient evidence to conclude. The survey found 27 primary studies and 106 letters in academic journals in support of the salt evidence, 34 primary studies and 51 letters contradicting the evidence, and 7 primary studies and 19 letters that were inconclusive. Several long-term studies have found that groups with sodium-reduced diets have lower incidences of cardiovascular disease in all demographics. Some researchers cast doubt on the link between lowering sodium intake and the health of a given population. Current trends and campaigns Government regulatory agencies and clinical organizations, including the European Food Safety Authority, the US Centers for Disease Control and Prevention, and the American Heart Association, recommend that consumers use less salt in their diets, mainly to reduce the risk of high blood pressure and associated cardiovascular diseases in adults and children. The World Health Organization issued a 2016 fact sheet to encourage reducing global salt consumption by 30% through 2025. In 2015, the United States Centers for Disease Control and Prevention began an initiative encouraging Americans to reduce their consumption of salty foods. The American Heart Association defined a daily sodium consumption limit of 1500 milligrams (contained in less than 0.75 teaspoon of table salt). According to a 2012 Health Canada report, Canadians in all age groups are consuming 3400 mg of sodium per day, more than twice as much as needed. The US Centers for Disease Control and Prevention stated that the average daily sodium intake for Americans over 2 years of age is 3436 milligrams. The majority of sodium consumed by North Americans comes from processed and restaurant foods, while only a small portion is added during cooking or at the table.
In the European Union, half of the member states have legislated change in the form of taxation, mandatory nutrition labeling, and regulated nutrition and health claims to address overconsumption of sodium, in response to a 2012 EU Salt Reduction Framework. Sodium sensitivity A diet high in sodium increases the risk of hypertension in people with sodium sensitivity, corresponding to an increase in the health risks associated with hypertension, including cardiovascular disease. There is, however, no universal definition of sodium sensitivity; the method used to assess it varies from one study to another. In most studies, sodium sensitivity is defined as the change in mean blood pressure corresponding to a decrease or increase in sodium intake. Methods of assessing sodium sensitivity include the measurement of circulating fluid volume and peripheral vascular resistance, and several studies have shown a relationship between sodium sensitivity and increases in circulating fluid volume or peripheral vascular resistance. Several factors are associated with sodium sensitivity. Demographic factors which affect sodium sensitivity include race, gender, and age. One study shows that Americans of African descent are significantly more salt-sensitive than Caucasians. Women are found to be more sodium-sensitive than men; one possible explanation is that women tend to consume more salt per unit of body weight, as women weigh less than men on average. Several studies have shown that increasing age is also associated with the occurrence of sodium sensitivity. Differences in genetic makeup and family history have a significant impact on salt sensitivity, and are being studied more as the efficiency and techniques of genetic testing improve. In both hypertensive and non-hypertensive individuals, those with the haptoglobin 1-1 phenotype are more likely to have sodium sensitivity than people with haptoglobin 2-1 or 2-2 phenotypes. More specifically, the haptoglobin 2-2 phenotype contributes to the characteristic of sodium resistance in humans. Moreover, the prevalence of a family history of hypertension is strongly linked with the occurrence of sodium sensitivity. The influence of physiological factors, including renal function and insulin levels, on sodium sensitivity has been shown in various studies. One study concludes that the effect of kidney failure on sodium sensitivity is substantial, largely through the associated decrease in the glomerular filtration rate (GFR). Moreover, insulin resistance is found to be related to sodium sensitivity; however, the actual mechanism is still unknown. Potassium and hypertension Possible mechanisms by which high intakes of dietary potassium can decrease the risk of hypertension and the incidence of cardiovascular disease have been proposed, but not extensively studied. However, studies have found a strong inverse association between long-term adequate-to-high potassium intake and the development of cardiovascular diseases. The recommended dietary intake of potassium is higher than that of sodium. However, the average absolute intake of potassium in studied populations is lower than that of sodium. According to Statistics Canada in 2007, Canadians' potassium intake in all age groups was lower than recommended, while sodium intake greatly exceeded the recommended intake in every age group.
The ratio of potassium to sodium intake may account for the large difference in the occurrence of hypertension between primitive cultures eating diets made up of mostly unprocessed foods and Western diets which tend to include highly processed foods. Salt substitutes The growing awareness of excessive sodium consumption in connection with hypertension and cardiovascular disease has increased the usage of salt substitutes at both a consumer and industrial level. On a consumer level, salt substitutes, which usually substitute a portion of sodium chloride content with potassium chloride, can be used to increase the potassium to sodium consumption ratio. This change has been shown to blunt the effects of excess salt intake on hypertension and cardiovascular disease. It has also been suggested that salt substitutes can be used to provide an essential portion of daily potassium intake, and may even be more economical than prescription potassium supplements. In the food industry, processes have been developed to create low-sodium versions of existing products. The meat industry especially have developed and fine-tuned methods to decrease salt contents in processed meats without sacrificing consumer acceptance. Research demonstrates that salt substitutes such as potassium chloride, and synergistic compounds such as phosphates, can be used to decrease salt content in meat products. There have been concerns with certain populations' use of potassium chloride as a substitute for salt as high potassium loads are dangerous for groups with diabetes, renal diseases, or heart failure. The use of salts with minerals such as natural salts have also been tested, but like salt substitutes partially containing potassium, mineral salts produce a bitter taste above certain levels. See also Salt Hypertension Cardiovascular disease References Health effects of food and nutrition Edible salt Cardiovascular diseases Hypertension Blood pressure
Salt and cardiovascular disease
[ "Chemistry" ]
2,240
[ "Edible salt", "Salts" ]
29,827,348
https://en.wikipedia.org/wiki/Modular%20invariant%20theory
In mathematics, a modular invariant of a group is an invariant of a finite group acting on a vector space of positive characteristic (usually dividing the order of the group). The study of modular invariants was originated in about 1914 by Dickson. Dickson invariant When G is the finite general linear group GLn(Fq) over the finite field Fq of order a prime power q acting on the ring Fq[X1, ..., Xn] in the natural way, Dickson found a complete set of invariants as follows. Write [e1, ..., en] for the determinant of the matrix whose entries are X_j^{q^{e_i}}, where e1, ..., en are non-negative integers. For example, the Moore determinant [0, 1, 2] of order 3 is \det\begin{pmatrix} X_1 & X_2 & X_3 \\ X_1^q & X_2^q & X_3^q \\ X_1^{q^2} & X_2^{q^2} & X_3^{q^2} \end{pmatrix}. Then under the action of an element g of GLn(Fq) these determinants are all multiplied by det(g), so they are all invariants of SLn(Fq) and the ratios [e1, ..., en] / [0, 1, ..., n − 1] are invariants of GLn(Fq), called Dickson invariants. Dickson proved that the full ring of invariants Fq[X1, ..., Xn]^{GLn(Fq)} is a polynomial algebra over the n Dickson invariants [0, 1, ..., i − 1, i + 1, ..., n] / [0, 1, ..., n − 1] for i = 0, 1, ..., n − 1. Steinberg later gave a shorter proof of Dickson's theorem. The determinants [e1, ..., en] are divisible by all non-zero linear forms in the variables Xi with coefficients in the finite field Fq. In particular the Moore determinant [0, 1, ..., n − 1] is a product of such linear forms, taken over 1 + q + q^2 + ... + q^(n−1) representatives of (n − 1)-dimensional projective space over the field. This factorization is similar to the factorization of the Vandermonde determinant into linear factors. See also Sanderson's theorem References Invariant theory
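A worked check of the smallest case may help; the display below is my own illustration and uses notation not present in the article.

```latex
% Smallest case n = 1: G = GL_1(F_q) = F_q^* acts on F_q[X] by X -> cX.
% A monomial X^k is invariant iff c^k = 1 for every c in F_q^*,
% i.e. iff (q - 1) divides k.  Hence
\[
  \mathbf{F}_q[X]^{\mathrm{GL}_1(\mathbf{F}_q)} \;=\; \mathbf{F}_q\!\left[X^{\,q-1}\right],
\]
% a polynomial algebra on the single Dickson invariant
% [1]/[0] = X^q / X = X^{q-1}, in agreement with Dickson's theorem.
```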
Modular invariant theory
[ "Physics" ]
479
[ "Invariant theory", "Group actions", "Symmetry" ]
29,837,448
https://en.wikipedia.org/wiki/Fluid%20dynamic%20gauge
A fluid dynamic gauge (FDG) is a measurement technique used to study the behaviour of soft deposit layers in a liquid environment. It employs fluid mechanics to determine the thickness of the layer, and can also be used to obtain a measure of its strength. It was inspired by the technique of pneumatic gauging, which relies on a flow of air rather than the process liquid. Fluid dynamic gauging can be conducted as an in-line measuring technique, but is more commonly used as a research tool. The technique was originally developed to measure the buildup or removal of the fouling layers commonly encountered in the process industry (such as in the heat treatment of dairy products). More recently, it has been applied to study cake buildup on porous membrane surfaces. Scanning versions can determine the topology of a solid/soft-solid surface immersed in a liquid environment, in an analogous manner to an atomic force microscope, but exploiting the principles of fluid mechanics. Key features of the technique are that it can study soft deposit layers without touching them, relies on relatively simple operating principles, can be used in a completely opaque liquid, and does not rely on knowledge of the fluid or deposit properties. References External links Fluid Dynamic Gauging Fluid mechanics
Fluid dynamic gauge
[ "Engineering" ]
255
[ "Civil engineering", "Fluid mechanics" ]
36,362,733
https://en.wikipedia.org/wiki/Pentameric%20protein
A pentameric protein is a quaternary protein structure that consists of five protein subunits. Examples Ligand-gated ion channels Five subunits come together to form a channel. Each channel consists of two alpha chains, one beta, one gamma, and one delta chain. These five chains assemble together to form the structure of the channel, which is gated by ligands such as protons or acetylcholine. A ligand-gated ion channel on the post-synaptic junction of the muscle end-plate is an example of such a channel. These are acetylcholine-operated ion channels, which means that acetylcholine brings about a conformational change. The channel allows the free movement of cations such as Na+ and K+ when acetylcholine binds to its receptors. Viral capsids Many viral capsids are formed by hexameric and pentameric proteins. Such capsids are assigned a triangulation number (T-number) which describes the relation between the number of pentagons and hexagons. Carboxysomes The protein shell enclosing the bacterial organelle known as the carboxysome is also made up in part of pentameric proteins. Major histocompatibility complex (MHC) pentamers Synthetic pentameric proteins include MHC pentamers, a type of MHC multimer, comprising five peptide-MHC complexes associated via a coiled-coil domain, attached to five fluorophore moieties. These proteins are used as reagents in immunology research. See also Pentamer References Protein structure
Pentameric protein
[ "Chemistry" ]
322
[ "Protein structure", "Structural biology" ]
35,408,416
https://en.wikipedia.org/wiki/Lorentz%20force%20velocimetry
Lorentz force velocimetry (LFV) is a noncontact electromagnetic flow measurement technique. LFV is particularly suited for the measurement of velocities in liquid metals like steel or aluminium and is currently under development for metallurgical applications. The measurement of flow velocities in hot and aggressive liquids such as liquid aluminium and molten glass constitutes one of the grand challenges of industrial fluid mechanics. Apart from liquids, LFV can also be used to measure the velocity of solid materials as well as for detection of micro-defects in their structures. A Lorentz force velocimetry system is called a Lorentz force flowmeter (LFF). An LFF measures the integrated or bulk Lorentz force resulting from the interaction between a liquid metal in motion and an applied magnetic field. In this case the characteristic length of the magnetic field is of the same order of magnitude as the dimensions of the channel. Note that in the case where localized magnetic fields are used, it is possible to perform local velocity measurements, and thus the term Lorentz force velocimeter is used. Introduction The use of magnetic fields in flow measurement dates back to the 19th century, when in 1832 Michael Faraday attempted to determine the velocity of the River Thames. Faraday applied a method in which a flow (the river flow) is exposed to a magnetic field (the Earth's magnetic field) and the induced voltage is measured using two electrodes across the same flow. This method is the basis of one of the most successful commercial applications in flow metering, known as the inductive flowmeter. The theory of such devices was developed and comprehensively summarized by Prof. J. A. Shercliff in the early 1950s. While inductive flowmeters are widely used for flow measurement in room-temperature fluids such as beverages, chemicals and waste water, they are not suited to hot or aggressive media, or to local measurements where surrounding obstacles limit access to the channel or pipe. Since they require electrodes to be inserted into the fluid, their use is limited to applications at temperatures far below the melting points of practically relevant metals. Lorentz force velocimetry was likewise invented by Shercliff. However, it did not find practical application in those early years; only recent technical advances in the manufacturing of strong rare-earth and non-rare-earth permanent magnets, in accurate force measurement techniques, and in multiphysical process simulation software for magnetohydrodynamic (MHD) problems made it possible to turn this principle into a feasible working flow measurement technique. LFV is currently being developed for applications in metallurgy as well as in other areas. Based on the theory introduced by Shercliff, there have been several attempts to develop flow measurement methods which do not require any mechanical contact with the fluid. Among them is the eddy current flowmeter, which measures flow-induced changes in the electric impedance of coils interacting with the flow. More recently, a non-contact method was proposed in which a magnetic field is applied to the flow and the velocity is determined from measurements of flow-induced deformations of the applied magnetic field. Principle and physical interpretation The principle of Lorentz force velocimetry is based on measurements of the Lorentz force that occurs due to the flow of a conductive fluid under the influence of a variable magnetic field.
According to Faraday's law, when a metal or conductive fluid moves through a magnetic field, eddy currents are generated by the electromotive force in zones of maximal magnetic field gradient (in the present case, in the inlet and outlet zones). The eddy currents in turn create an induced magnetic field according to Ampère's law. The interaction between the eddy currents and the total magnetic field gives rise to a Lorentz force that brakes the flow. By virtue of Newton's third law ("actio = reactio"), a force of the same magnitude but opposite direction acts upon its source, the permanent magnet. Direct measurement of the magnet's reaction force allows one to determine the fluid's velocity, since this force is proportional to the flow rate. The Lorentz force used in LFV has nothing to do with magnetic attraction or repulsion. It is due only to the eddy currents, whose strength depends on the electrical conductivity, the relative velocity between the liquid and the permanent magnet, and the magnitude of the magnetic field. So, when a liquid metal moves across magnetic field lines, the interaction of the magnetic field (produced either by a current-carrying coil or by a permanent magnet) with the induced eddy currents leads to a Lorentz force (with density f = j × B) which brakes the flow. The Lorentz force density is roughly f ∼ σvB², where σ is the electrical conductivity of the fluid, v its velocity, and B the magnitude of the magnetic field. This fact is well known and has found a variety of applications. This force is proportional to the velocity and conductivity of the fluid, and its measurement is the key idea of LFV. With the recent advent of powerful rare-earth permanent magnets (such as NdFeB and SmCo) and tools for designing sophisticated permanent-magnet systems, the practical realization of this principle has now become possible. The primary magnetic field can be produced by a permanent magnet or a primary current (see Fig. 1). The motion of the fluid under the action of the primary field induces eddy currents, sketched in figure 3, which are denoted by j and called secondary currents. The interaction of the secondary currents with the primary magnetic field is responsible for the Lorentz force within the fluid, which brakes the flow. The secondary currents create a magnetic field b, the secondary magnetic field. The interaction of the primary electric current with the secondary magnetic field gives rise to the Lorentz force on the magnet system. The reciprocity principle for Lorentz force velocimetry states that the electromagnetic forces on the fluid and on the magnet system have the same magnitude and act in opposite directions. The general scaling law that relates the measured force to the unknown velocity can be derived with reference to the simplified situation shown in Fig. 2. Here a small permanent magnet with dipole moment m is located at a distance h above a semi-infinite fluid moving with uniform velocity v parallel to its free surface. The analysis that leads to the scaling relation can be made quantitative by treating the magnet as a point dipole, whose field is the standard dipole field decaying with the cube of the distance. Assuming a uniform velocity field below the free surface, the eddy currents can be computed from Ohm's law for a moving electrically conducting fluid, subject to the boundary conditions that no current crosses the free surface and that the currents decay at depth. First, the scalar electric potential is obtained, from which the electric current density is readily calculated; the resulting eddy currents are indeed horizontal.
Once the eddy currents are known, the Biot–Savart law can be used to compute the secondary magnetic field. Finally, the force is obtained from the gradient of the secondary field, evaluated at the location of the dipole. For the problem at hand, all these steps can be carried out analytically without any approximation, leading to an exact result that scales as F ∝ σμ0²m²v/h³, where μ0 is the vacuum permeability. Conceptual setups Lorentz force flowmeters are usually classified into several main conceptual setups. Some are designed as static flowmeters, where the magnet system is at rest and one measures the force acting on it. Alternatively, they can be designed as rotary flowmeters, where the magnets are arranged on a rotating wheel and the spinning velocity is a measure of the flow velocity. Obviously, the force acting on a Lorentz force flowmeter depends both on the velocity distribution and on the shape of the magnet system. A further classification depends on the direction of the applied magnetic field relative to the direction of the flow: in Figure 3 one can distinguish diagrams of the longitudinal and the transverse Lorentz force flowmeters. Although only a coil or a magnet is sketched in the figures, the principle holds for both. A rotary LFF consists of a freely rotating permanent magnet (or an array of magnets mounted on a flywheel, as shown in figure 4) which is magnetized perpendicularly to the axle it is mounted on. When such a system is placed close to a duct carrying an electrically conducting fluid flow, it rotates so that the driving torque due to the eddy currents induced by the flow is balanced by the braking torque induced by the rotation itself. The equilibrium rotation rate varies directly with the flow velocity and inversely with the distance between the magnet and the duct. In this case it is possible to measure either the torque on the magnet system or the angular velocity at which the wheel spins. Practical applications LFV is sought to be extended to all fluid or solid materials, provided that they are electrical conductors. As shown before, the Lorentz force generated by the flow depends linearly on the conductivity of the fluid. Typically, the electrical conductivity of molten metals is of the order of 10^6 S/m, so the Lorentz force is in the range of some mN. However, equally important liquids such as glass melts and electrolytic solutions have conductivities several orders of magnitude lower, giving rise to a Lorentz force of the order of micronewtons or even smaller. High-conductivity media: liquid or solid metals Among the different possibilities for measuring the effect on the magnet system, those based on measuring the deflection of a parallel spring under an applied force have been applied successfully: first using a strain gauge, and later by recording the deflection of a quartz spring with an interferometer, in which case the deformation is detected to within 0.1 nm. Low-conductivity media: electrolytic solutions or glass melts Recent advances in LFV have made it possible to meter the flow velocity of media with very low electrical conductivity: by optimizing the setup and using state-of-the-art force measurement devices, the flow velocity of electrolyte solutions with a conductivity 10^6 times smaller than that of liquid metals can be measured. There is a variety of industrial and scientific applications where noncontact flow measurement through opaque walls or in opaque liquids is desirable.
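The f ∼ σvB² scaling lends itself to a quick order-of-magnitude check. The sketch below is my own illustration; the field strength, velocity, and interaction volume are assumed round numbers, not values from the article.

```python
# Order-of-magnitude estimate of the Lorentz force F ~ sigma * v * B^2 * V
# acting over an interaction volume V, for a liquid metal vs. an electrolyte.
def lorentz_force(sigma_S_per_m: float, v_m_per_s: float,
                  B_tesla: float, volume_m3: float) -> float:
    return sigma_S_per_m * v_m_per_s * B_tesla**2 * volume_m3

B = 0.1    # T, assumed field in the fluid
v = 1.0    # m/s, assumed mean velocity
V = 1e-5   # m^3, assumed interaction volume (~10 cm^3)

print("liquid metal :", lorentz_force(1e6, v, B, V), "N")  # ~1e-1 N with these inputs
print("electrolyte  :", lorentz_force(1.0, v, B, V), "N")  # ~1e-7 N with these inputs
# The six-orders-of-magnitude gap in conductivity maps directly onto the
# force, which is why electrolytes require far more sensitive balances.
```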
Such applications include flow metering of chemicals, food, beverages, blood, aqueous solutions in the pharmaceutical industry, molten salts in solar thermal power plants and high-temperature reactors, as well as glass melts for high-precision optics. A noncontact flowmeter is a device that is in mechanical contact neither with the liquid nor with the wall of the pipe in which the liquid flows. Noncontact flowmeters are equally useful when walls are contaminated (as in the processing of radioactive materials), when pipes vibrate strongly, or when portable flowmeters are to be developed. If the liquid and the wall of the pipe are transparent and the liquid contains tracer particles, optical measurement techniques are an effective tool for noncontact measurements. However, if either the wall or the liquid is opaque, as is often the case in food production, chemical engineering, glass making, and metallurgy, very few possibilities for noncontact flow measurement exist. The force measurement system is an important part of Lorentz force velocimetry: a high-resolution force measurement system makes the measurement of even lower conductivities possible. To date, the force measurement system has been continually developed. At first, pendulum-like setups were used (Figure 5). One of the experimental facilities consists of two high-power (410 mT) NdFeB magnets suspended by thin wires on both sides of the channel, thereby creating a magnetic field perpendicular to the fluid flow; here the deflection is measured by an interferometer system. The second setup consists of a state-of-the-art weighing balance system (Figure 6) from which optimized magnets, based on a Halbach array, are hung. While the total mass of both magnet systems is equal (1 kg), this system produces a three times higher response due to the arrangement of the individual elements in the array and their interaction with the predefined fluid profile. Very sensitive force measuring devices are needed here, since the flow velocity must be inferred from a tiny detected Lorentz force superimposed on the unavoidable dead weight of the magnet. Later, the method of differential force measurement was developed: two balances are used, one carrying the magnet and the other a dummy of the same weight, so that the influence of the environment is reduced. Recently, it has been reported that flow measurements by this method are possible for saltwater flows whose electrical conductivity is as small as 0.06 S/m (the range of electrical conductivity of regular tap water). Lorentz force sigmometry Lorentz force sigmometry (LOFOS) is a contactless method for measuring the thermophysical properties of materials, whether fluid or solid. Precise measurements of the electrical conductivity, density, viscosity, thermal conductivity and surface tension of molten metals are of great importance in industrial applications. One of the major problems in experimental measurements of thermophysical properties at high temperature (>1000 K) in the liquid state is chemical reaction between the hot fluid and the electrical probes. The basic equation for calculating the electrical conductivity is derived from the equation that links the mass flow rate to the Lorentz force generated by the magnetic field in the flow; here the specific electrical conductivity σ* equals the ratio σ/ρ of the electrical conductivity σ to the mass density ρ of the fluid, and k
is a calibration factor that depends on the geometry of the LOFOS system. From the equation above, the cumulative mass during the operating time is determined from the time integral of the Lorentz force over the process. From this equation, and using the definition of the specific electrical conductivity, one can derive the final equation for computing the electrical conductivity of the fluid. Time-of-flight Lorentz force velocimetry Time-of-flight Lorentz force velocimetry is intended for contactless determination of the flow rate in conductive fluids. It can be used successfully even when material properties such as electrical conductivity or density are not precisely known under the specific operating conditions, which makes time-of-flight LFV especially important for industrial application. In time-of-flight LFV (Fig. 9), two coherent measurement systems are mounted on a channel one after the other. The measurement is based on computing the cross-correlation function of the signals registered by the two magnetic measurement systems. Each system consists of a permanent magnet and a force sensor, so the induction of the Lorentz force and the measurement of the reaction force take place simultaneously. The cross-correlation function is useful only if the two signals differ qualitatively, and in this case turbulent fluctuations are used to create that difference. Before reaching the measurement zone of the channel, the liquid passes an artificial vortex generator that induces strong disturbances in it. When such a fluctuation-vortex reaches the magnetic field of one measurement system, a peak appears on its force-time record while the second system still measures the undisturbed flow. From the time between the peaks and the distance between the measurement systems, the observer can estimate the mean velocity, and hence the flow rate, of the liquid as v = L/τ, where L is the distance between the magnet systems and τ is the time delay between the recorded peaks; a calibration coefficient is obtained experimentally for every specific liquid, as shown in figure 9. Lorentz force eddy current testing A different, albeit physically closely related, challenge is the detection of deeply lying flaws and inhomogeneities in electrically conducting solid materials. In the traditional version of eddy current testing, an alternating (AC) magnetic field is used to induce eddy currents inside the material under investigation. If the material contains a crack or flaw which makes the spatial distribution of the electrical conductivity nonuniform, the path of the eddy currents is perturbed and the impedance of the coil which generates the AC magnetic field is modified. By measuring the impedance of this coil, a crack can hence be detected. Since the eddy currents are generated by an AC magnetic field, their penetration into the subsurface region of the material is limited by the skin effect; the applicability of traditional eddy current testing is therefore limited to the immediate vicinity of the surface of a material, usually of the order of one millimeter. Attempts to overcome this fundamental limitation using low-frequency coils and superconducting magnetic field sensors have not led to widespread applications. A recent technique, referred to as Lorentz force eddy current testing (LET), exploits the advantages of applying DC magnetic fields and relative motion, providing deep and relatively fast testing of electrically conducting materials.
In principle, LET represents a modification of traditional eddy current testing, from which it differs in two aspects, namely (i) how the eddy currents are induced and (ii) how their perturbation is detected. In LET, eddy currents are generated by providing relative motion between the conductor under test and a permanent magnet (see figure 10). If the magnet passes a defect, the Lorentz force acting on it shows a distortion whose detection is the key to the LET working principle; if the object is free of defects, the resulting Lorentz force remains constant. Advantages & Limitations The advantages of LFV are: LFV is a non-contact technique of flow rate measurement. LFV can be successfully applied to aggressive and high-temperature fluids like liquid metals. The mean flow rate or mean velocity of the fluid can be obtained regardless of flow inhomogeneities and zones of turbulence. The limitations of LFV are: The measurement system requires temperature control, because the magnet's field depends strongly on temperature; high temperatures can cause irretrievable loss of the magnetic properties of the permanent magnet (above the Curie temperature). The measurement zone is restricted by the permanent magnet's dimensions. The liquid level must be controlled when working with an open channel. The rapid decay of the magnetic field with distance gives rise to tiny forces on the magnet system. See also Magnetohydrodynamics Lorentz force External links Official web page of Lorentz Force Velocimetry and Lorentz Force Eddy Current Testing Group References Fluid dynamics
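The time-of-flight estimate v = L/τ described above reduces to locating the lag of the cross-correlation peak between the two force signals. A minimal sketch, assuming synthetic signals and a made-up sensor spacing:

```python
# Time-of-flight velocity estimate from two force-sensor signals:
# v = L / tau, with tau taken from the cross-correlation peak.
import numpy as np

def time_of_flight_velocity(sig1, sig2, dt, L):
    """sig1/sig2: force samples from the upstream/downstream magnet systems,
    dt: sampling interval [s], L: distance between magnet systems [m]."""
    a = sig1 - np.mean(sig1)
    b = sig2 - np.mean(sig2)
    corr = np.correlate(b, a, mode="full")   # correlation over all lags
    lag = np.argmax(corr) - (len(a) - 1)     # delay of b relative to a, in samples
    tau = lag * dt
    return L / tau

# Synthetic demo: a Gaussian "vortex" pulse arriving 50 ms later downstream.
dt, L = 1e-3, 0.10                           # assumed 1 kHz sampling, 10 cm spacing
t = np.arange(0.0, 1.0, dt)
pulse = lambda t0: np.exp(-((t - t0) / 0.01) ** 2)
rng = np.random.default_rng(0)
up = pulse(0.40) + 0.05 * rng.standard_normal(t.size)
down = pulse(0.45) + 0.05 * rng.standard_normal(t.size)
print(f"v ≈ {time_of_flight_velocity(up, down, dt, L):.2f} m/s")  # ~2 m/s
```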
Lorentz force velocimetry
[ "Chemistry", "Engineering" ]
3,708
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
35,413,022
https://en.wikipedia.org/wiki/Lyddane%E2%80%93Sachs%E2%80%93Teller%20relation
In condensed matter physics, the Lyddane–Sachs–Teller relation (or LST relation) determines the ratio of the natural frequency of longitudinal optic lattice vibrations (phonons), ω_LO, of an ionic crystal to the natural frequency of the transverse optical lattice vibration, ω_TO, for long wavelengths (zero wavevector): ω_LO²/ω_TO² = ε_st/ε_∞. The ratio is that of the static permittivity ε_st to the permittivity at frequencies in the visible range, ε_∞. The relation holds for systems with a single optical branch, such as cubic systems with two different atoms per unit cell. For systems with many phonon branches, the relation does not necessarily hold, as the permittivity for any pair of longitudinal and transverse modes will be altered by the other modes in the system. The Lyddane–Sachs–Teller relation is named after the physicists R. H. Lyddane, Robert G. Sachs, and Edward Teller. Origin and limitations The Lyddane–Sachs–Teller relation applies to optical lattice vibrations that have an associated net polarization density, so that they can produce long ranged electromagnetic fields (over ranges much longer than the inter-atom distances). The relation assumes an idealized polar ("infrared active") optical lattice vibration that gives a contribution to the frequency-dependent permittivity described by a lossless Lorentzian oscillator: ε(ω) = ε_∞ + (ε_st − ε_∞) ω_TO²/(ω_TO² − ω²), where ε_∞ is the permittivity at high frequencies, ε_st is the static DC permittivity, and ω_TO is the "natural" oscillation frequency of the lattice vibration taking into account only the short-ranged (microscopic) restoring forces. The above equation can be plugged into Maxwell's equations to find the complete set of normal modes including all restoring forces (short-ranged and long-ranged), which are sometimes called phonon polaritons. These modes are plotted in the figure. At every wavevector there are three distinct modes. A longitudinal wave mode occurs with an essentially flat dispersion at frequency ω_LO. In this mode, the electric field is parallel to the wavevector and produces no transverse currents, hence it is purely electric (there is no associated magnetic field). The longitudinal wave is basically dispersionless, and appears as a flat line in the plot at frequency ω_LO. This remains 'split off' from the bare oscillation frequency ω_TO even at high wave vectors, because the importance of electric restoring forces does not diminish at high wavevectors. Two transverse wave modes appear (actually, four modes, in pairs with identical dispersion), with complex dispersion behavior. In these modes, the electric field is perpendicular to the wavevector, producing transverse currents, which in turn generate magnetic fields. As light is also a transverse electromagnetic wave, the behaviour is described as a coupling of the transverse vibration modes with the light inside the material (in the figure, shown as red dashed lines). At high wavevectors, the lower mode is primarily vibrational. This mode approaches the 'bare' frequency ω_TO because magnetic restoring forces can be neglected: the transverse currents produce a small magnetic field and the magnetically induced electric field is also very small. At zero, or low, wavevector the upper mode is primarily vibrational and its frequency instead coincides with the longitudinal mode, with frequency ω_LO. This coincidence is required by symmetry considerations and occurs due to electrodynamic retardation effects that make the transverse magnetic back-action behave identically to the longitudinal electric back-action.
The longitudinal mode appears at the frequency where the permittivity passes through zero, i.e. ε(ω_LO) = 0. Solving this for the Lorentzian resonance described above gives the Lyddane–Sachs–Teller relation ω_LO²/ω_TO² = ε_st/ε_∞. Since the Lyddane–Sachs–Teller relation is derived from the lossless Lorentzian oscillator, it may break down in realistic materials where the permittivity function is more complicated for various reasons: Real phonons have losses (also known as damping or dissipation). Materials may have multiple phonon resonances that add together to produce the permittivity. There may be other electrically active degrees of freedom (notably, mobile electrons) and non-Lorentzian oscillators. In the case of multiple, lossy Lorentzian oscillators, there are generalized Lyddane–Sachs–Teller relations available. Most generally, the permittivity cannot be described as a combination of Lorentzian oscillators, and the longitudinal mode frequency can only be found as a complex zero in the permittivity function. Anharmonic crystals The most general Lyddane–Sachs–Teller relation applicable in crystals where the phonons are affected by anharmonic damping has been derived and reads ε_st/ε_∞ = |ω_LO|²/|ω_TO|²; the absolute value is necessary since the phonon frequencies are now complex, with an imaginary part set by the finite lifetime of the phonon, and proportional to the anharmonic phonon damping (described by Klemens' theory for optical phonons). Non-polar crystals A corollary of the LST relation is that for non-polar crystals, the LO and TO phonon modes are degenerate, and thus ω_LO = ω_TO. This indeed holds for the purely covalent crystals of the group IV elements, such as diamond (C), silicon, and germanium. Reststrahlen effect At frequencies between ω_TO and ω_LO there is 100% reflectivity. This range of frequencies (band) is called the Reststrahl band. The name derives from the German reststrahl, which means "residual ray". Example with NaCl The static and high-frequency dielectric constants of NaCl are ε_st ≈ 5.9 and ε_∞ ≈ 2.25, and the TO phonon frequency is ν_TO = 5.0 THz. Using the LST relation, we are able to calculate that ν_LO ≈ 8.1 THz. Experimental methods Raman spectroscopy One of the ways to experimentally determine ω_TO and ω_LO is through Raman spectroscopy. As previously mentioned, the phonon frequencies used in the LST relation are those corresponding to the TO and LO branches evaluated at the gamma-point (q = 0) of the Brillouin zone. This is also the point where the photon-phonon coupling most often occurs for the Stokes shift measured in Raman. Hence two peaks will be present in the Raman spectrum, each corresponding to the TO and LO phonon frequency. See also Reststrahlen effect Citations References Textbooks Articles Condensed matter physics
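The NaCl numbers in the example above can be checked in a couple of lines. A small sketch; the input values are the ones quoted in the example:

```python
# Lyddane–Sachs–Teller: nu_LO = nu_TO * sqrt(eps_static / eps_inf)
from math import sqrt

eps_st, eps_inf = 5.9, 2.25  # static and high-frequency permittivities of NaCl
nu_TO = 5.0                  # THz, transverse optical phonon frequency

nu_LO = nu_TO * sqrt(eps_st / eps_inf)
print(f"nu_LO ≈ {nu_LO:.1f} THz")  # ≈ 8.1 THz; the Reststrahl band spans nu_TO..nu_LO
```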
Lyddane–Sachs–Teller relation
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,313
[ "Phases of matter", "Condensed matter physics", "Matter", "Materials science" ]
35,417,850
https://en.wikipedia.org/wiki/Laser%20Doppler%20imaging
Laser Doppler imaging (LDI) is an imaging method that uses a laser beam to image live tissue. When the laser light reaches the tissue, the moving blood cells generate Doppler components in the reflected (backscattered) light. The light that comes back is detected using a photodiode that converts it into an electrical signal. The signal is then processed to calculate a quantity that is proportional to the tissue perfusion in the imaged area and, when the process is completed, to generate an image that shows the perfusion on a screen. The laser Doppler effect was first used to measure microcirculation by M. D. Stern in 1975. It is used widely in medicine; some representative areas of research are the following. Use in ophthalmology The eye offers a unique opportunity for the non-invasive exploration of cardiovascular diseases. LDI by digital holography can measure blood flow in the retina and choroid. In particular, the choroid is a highly vascularized tissue supplying the retinal pigment epithelium and photoreceptors. Yet investigating the anatomy and flow of the choroid remains challenging. LDI provides high-contrast visualization of local blood flow in choroidal vessels in humans, with a spatial resolution comparable to state-of-the-art indocyanine green angiography. Differences in blood pressure drive the flow of blood throughout the circulation. The rate of mean blood flow depends on both blood pressure and the hemodynamic resistance to flow presented by the blood vessels. LDI can enable mapping of the local arterial resistivity index, permits unambiguous identification of retinal arteries and veins on the basis of their systole-diastole variations, and can reveal ocular hemodynamics in human eyes. Measurement of surface waves on the skin The local velocity of blood flow measured by laser Doppler holography in the digit (photoplethysmogram) and the eye fundus has a pulse-shaped profile with time. These remote pulse wave measurements can be done clinically to reveal hemodynamics in arteries and veins and can be readily measured non-invasively. Principal component analysis of digital holograms is an efficient way of performing temporal demodulation of digital holograms reconstructed from on-axis interferograms and can be used to reveal surface waves on the hand. Use in obstetrics and gynaecology LDI provides a direct measure of female sexual response that does not require genital contact; signals are gathered at a depth of two to three millimetres below the skin's surface. Two studies have suggested that LDI is a valid measure of female sexual arousal. Waxman and Pukall showed that LDI has discriminant validity; that is, it can differentiate sexual response from neutral, positive, and negative mood induced states. Compared to vaginal photoplethysmography (VPG), LDI is advantageous because it does not require genital contact. Also, LDI provides a direct measure of vasocongestion and has an absolute unit of measurement, consisting of flux or units of blood flow. The disadvantages of LDI are that it cannot provide a continuous measure of sexual response and that the laser Doppler perfusion imager is much more costly than other methods of genital sexual arousal assessment, such as VPG.
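The processing step described in the opening paragraph is, in common laser Doppler flowmetry algorithms, a first moment of the photocurrent power spectrum. The sketch below illustrates that idea; it is not the algorithm of any particular instrument, and the bandwidth limits are assumed values.

```python
# Perfusion index from a photodiode signal: first spectral moment,
# i.e. the integral of f * P(f) over an assumed Doppler band.
import numpy as np

def perfusion_index(signal, fs, f_lo=20.0, f_hi=20e3):
    """signal: photocurrent samples; fs: sampling rate [Hz].
    Returns the first spectral moment over [f_lo, f_hi] (arbitrary units)."""
    sig = signal - np.mean(signal)            # remove the DC component
    spectrum = np.abs(np.fft.rfft(sig)) ** 2  # power spectrum P(f)
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(np.sum(freqs[band] * spectrum[band]))

# Demo with a synthetic signal: Doppler-broadened noise on a DC baseline.
rng = np.random.default_rng(1)
fs = 50_000.0
x = 1.0 + 0.01 * rng.standard_normal(50_000)
print(perfusion_index(x, fs))
```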
See also Hot-wire anemometry Laser Doppler velocimetry Laser Doppler vibrometer Laser surface velocimeter Molecular tagging velocimetry Particle image velocimetry Particle tracking velocimetry Photon Doppler velocimetry Photoplethysmogram References External links LDA/LDV principle LDV overview Laser applications Doppler effects Measurement Optical imaging Female genital procedures
Laser Doppler imaging
[ "Physics", "Mathematics" ]
788
[ "Physical phenomena", "Physical quantities", "Quantity", "Astrophysics", "Size", "Measurement", "Doppler effects" ]
35,419,579
https://en.wikipedia.org/wiki/Prefix%20%28acoustics%29
In acoustics, the prefix of a sound is an initial phase, the onset of a sound quite dissimilar to the ensuing lasting vibration. The term was coined by J. F. Schouten (1968, 42), who called it one of at least five major acoustic parameters that determine the elusive attributes of timbre. See also Onset (audio) Timbre#Attributes Synthesizer#ADSR envelope Transient (acoustics) References Schouten, J. F. (1968). "The Perception of Timbre". In Reports of the 6th International Congress on Acoustics, Tokyo, GP-6-2, 6 vols., edited by Y. Kohasi, 6:35–44, 90. Tokyo: Maruzen; Amsterdam: Elsevier. Acoustics
Prefix (acoustics)
[ "Physics" ]
164
[ "Classical mechanics", "Acoustics" ]
35,420,291
https://en.wikipedia.org/wiki/Transient%20%28acoustics%29
In acoustics and audio, a transient is a high amplitude, short-duration sound at the beginning of a waveform that occurs in phenomena such as musical sounds, noises or speech. Transients do not necessarily directly depend on the frequency of the tone they initiate. A transient contains a high degree of non-periodic components and a higher magnitude of high frequencies than the harmonic content of that sound. Transients are more difficult to encode with many audio compression algorithms, causing pre-echo. See also Prefix (acoustics) Impulse function Onset (audio) Transient response – a common electrical engineering term that may be the source of the idea of an acoustic "transient" References Acoustics Sonar
Transient (acoustics)
[ "Physics" ]
145
[ "Classical mechanics", "Acoustics" ]
28,357,910
https://en.wikipedia.org/wiki/Dynamic%20fluid%20film%20equations
Fluid films, such as soap films, are commonly encountered in everyday experience. A soap film can be formed by dipping a closed contour wire into a soapy solution, as in the figure on the right. Alternatively, a catenoid can be formed by dipping two rings in the soapy solution and subsequently separating them while maintaining the coaxial configuration. Stationary fluid films form surfaces of minimal surface area, leading to the Plateau problem. On the other hand, fluid films display rich dynamic properties. They can undergo enormous deformations away from the equilibrium configuration. Furthermore, they display several orders of magnitude of variation in thickness, from nanometers to millimeters. Thus, a fluid film can simultaneously display nanoscale and macroscale phenomena. In the study of the dynamics of free fluid films, such as soap films, it is common to model the film as a two-dimensional manifold. The variable thickness of the film is then captured by the two-dimensional density ρ. The dynamics of fluid films can be described by a system of exact nonlinear Hamiltonian equations which, in that respect, are a complete analogue of Euler's inviscid equations of fluid dynamics. In fact, these equations reduce to Euler's dynamic equations for flows in stationary Euclidean spaces. The foregoing relies on the formalism of tensors, including the summation convention and the raising and lowering of tensor indices. The full dynamic system Consider a thin fluid film that spans a stationary closed contour boundary. Let C be the normal component of the velocity field and V^α the contravariant components of the tangential velocity projection. Let ∇_α be the covariant surface derivative, B_αβ the covariant curvature tensor, B_α^β the mixed curvature tensor, and B_α^α its trace, that is, the mean curvature. Furthermore, let the internal energy density per unit mass be the function e(ρ), so that the total potential energy is obtained by integrating ρe over the film. The choice e = σ/ρ, where σ is the surface energy density, results in Laplace's classical model for surface tension: the potential energy is E = σA, where A is the total area of the soap film. The governing system is written in terms of the tensorial time derivative, the central operator, originally due to Jacques Hadamard, in the calculus of moving surfaces. Note that, in compressible models, the combination ρ²∂e/∂ρ is commonly identified with the pressure p. The governing system above was originally formulated in reference 1. For the Laplace choice of surface tension the system simplifies accordingly. Note that on flat (B_αβ = 0) stationary (C = 0) manifolds, the system becomes precisely the classical Euler equations of fluid dynamics. A simplified system If one disregards the tangential components of the velocity field, as is frequently done in the study of thin fluid films, one arrives at a simplified system with only two unknowns: the two-dimensional density ρ and the normal velocity C. References 1. Exact nonlinear equations for fluid films and proper adaptations of conservation theorems from classical hydrodynamics P. Grinfeld, J. Geom. Sym. Phys. 16, 2009 Continuum mechanics Fluid dynamics Fluid mechanics Nonlinear systems Differential geometry Manifolds
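For reference, the classical inviscid system that the film equations are said to reduce to on flat stationary manifolds can be written out. The display below is the standard Euler system and is my own addition; it is not the film system itself, whose explicit form requires the full moving-surface notation.

```latex
% Classical Euler equations for an inviscid compressible flow with
% density rho, velocity v^i, and pressure p = rho^2 * de/drho:
\[
  \frac{\partial \rho}{\partial t} + \nabla_i\!\left(\rho v^i\right) = 0,
  \qquad
  \frac{\partial v^i}{\partial t} + v^j \nabla_j v^i
    = -\frac{1}{\rho}\, \nabla^i p .
\]
```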
Dynamic fluid film equations
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
609
[ "Continuum mechanics", "Dynamical systems", "Chemical engineering", "Classical mechanics", "Space (mathematics)", "Nonlinear systems", "Topological spaces", "Topology", "Civil engineering", "Manifolds", "Piping", "Fluid mechanics", "Fluid dynamics" ]
28,357,925
https://en.wikipedia.org/wiki/Calculus%20of%20moving%20surfaces
The calculus of moving surfaces (CMS) is an extension of the classical tensor calculus to deforming manifolds. Central to the CMS is the tensorial time derivative $\dot{\nabla}$ whose original definition was put forth by Jacques Hadamard. It plays the role analogous to that of the covariant derivative $\nabla_\alpha$ on differential manifolds in that it produces a tensor when applied to a tensor. Suppose that $\Sigma_t$ is the evolution of the surface $\Sigma$ indexed by a time-like parameter $t$. The definitions of the surface velocity $C$ and the operator $\dot{\nabla}$ are the geometric foundations of the CMS. The velocity C is the rate of deformation of the surface $\Sigma$ in the instantaneous normal direction. The value of $C$ at a point $P$ is defined as the limit

$$C = \lim_{h\to 0}\frac{\operatorname{dist}(P, P^{*})}{h}$$

where $P^{*}$ is the point on $\Sigma_{t+h}$ that lies on the straight line perpendicular to $\Sigma_t$ at point P. This definition is illustrated in the first geometric figure below. The velocity is a signed quantity: it is positive when $\overrightarrow{PP^{*}}$ points in the direction of the chosen normal, and negative otherwise. The relationship between $\Sigma_t$ and $C$ is analogous to the relationship between location and velocity in elementary calculus: knowing either quantity allows one to construct the other by differentiation or integration. The tensorial time derivative $\dot{\nabla}$ for a scalar field F defined on $\Sigma_t$ is the rate of change in $F$ in the instantaneously normal direction:

$$\dot{\nabla}F = \lim_{h\to 0}\frac{F(P^{*}) - F(P)}{h}$$

This definition is also illustrated in the second geometric figure. The above definitions are geometric. In analytical settings, direct application of these definitions may not be possible. The CMS gives analytical definitions of C and $\dot{\nabla}$ in terms of elementary operations from calculus and differential geometry.

Analytical definitions For analytical definitions of $C$ and $\dot{\nabla}$, consider the evolution of $\Sigma$ given by

$$Z^i = Z^i(t, S)$$

where $Z^i$ are general curvilinear space coordinates and $S^\alpha$ are the surface coordinates. By convention, tensor indices of function arguments are dropped. Thus the above equation contains $S$ rather than $S^\alpha$. The velocity object $V^i$ is defined as the partial derivative

$$V^i = \frac{\partial Z^i(t, S)}{\partial t}.$$

The velocity $C$ can be computed most directly by the formula

$$C = V^i N_i$$

where $N_i$ are the covariant components of the normal vector $\vec{N}$. Also, defining the shift tensor representation of the surface's tangent space $Z^i_\alpha$ and the tangent velocity as $V^\alpha = V^i Z^\alpha_i$, then the definition of the $\dot{\nabla}$ derivative for an invariant F reads

$$\dot{\nabla}F = \frac{\partial F(t, S)}{\partial t} - V^\alpha \nabla_\alpha F$$

where $\nabla_\alpha$ is the covariant derivative on S. For tensors, an appropriate generalization is needed. The proper definition for a representative tensor $T^{i\alpha}_{j\beta}$ reads

$$\dot{\nabla}T^{i\alpha}_{j\beta} = \frac{\partial T^{i\alpha}_{j\beta}}{\partial t} - V^\eta \nabla_\eta T^{i\alpha}_{j\beta} + V^m \Gamma^i_{mk} T^{k\alpha}_{j\beta} - V^m \Gamma^k_{mj} T^{i\alpha}_{k\beta} + \dot{\Gamma}^\alpha_\eta T^{i\eta}_{j\beta} - \dot{\Gamma}^\eta_\beta T^{i\alpha}_{j\eta}$$

where $\Gamma^i_{jk}$ are Christoffel symbols and $\dot{\Gamma}^\alpha_\beta = \nabla_\beta V^\alpha - C B^\alpha_\beta$ is the surface's appropriate temporal symbol ($B^\alpha_\beta$ is a matrix representation of the surface's curvature shape operator).

Properties of the $\dot{\nabla}$-derivative The $\dot{\nabla}$-derivative commutes with contraction, satisfies the product rule

$$\dot{\nabla}\left(S^i_\alpha T^\beta_j\right) = T^\beta_j\,\dot{\nabla}S^i_\alpha + S^i_\alpha\,\dot{\nabla}T^\beta_j$$

for any collection of indices and obeys a chain rule for surface restrictions of spatial tensors:

$$\dot{\nabla}F = C N^i \nabla_i F.$$

Chain rule shows that the $\dot{\nabla}$-derivatives of spatial "metrics" vanish:

$$\dot{\nabla}Z_{ij} = 0,\quad \dot{\nabla}Z^{ij} = 0,\quad \dot{\nabla}\delta^i_j = 0,\quad \dot{\nabla}\varepsilon_{ijk} = 0,\quad \dot{\nabla}\varepsilon^{ijk} = 0$$

where $Z_{ij}$ and $Z^{ij}$ are covariant and contravariant metric tensors, $\delta^i_j$ is the Kronecker delta symbol, and $\varepsilon_{ijk}$ and $\varepsilon^{ijk}$ are the Levi-Civita symbols. The main article on Levi-Civita symbols describes them for Cartesian coordinate systems. The preceding rule is valid in general coordinates, where the definition of the Levi-Civita symbols must include the square root of the determinant of the covariant metric tensor $Z_{ij}$.

Differentiation table for the $\dot{\nabla}$-derivative The $\dot{\nabla}$ derivative of the key surface objects leads to highly concise and attractive formulas. When applied to the covariant surface metric tensor $S_{\alpha\beta}$ and the contravariant metric tensor $S^{\alpha\beta}$, the following identities result:

$$\dot{\nabla}S_{\alpha\beta} = 0,\qquad \dot{\nabla}S^{\alpha\beta} = 0.$$

Here $B_{\alpha\beta}$ and $B^{\alpha\beta}$ denote the doubly covariant and doubly contravariant curvature tensors. These curvature tensors, as well as the mixed curvature tensor $B^\alpha_\beta$, satisfy

$$\dot{\nabla}B_{\alpha\beta} = \nabla_\alpha\nabla_\beta C + C B_{\alpha\gamma}B^\gamma_\beta$$
$$\dot{\nabla}B^\alpha_\beta = \nabla^\alpha\nabla_\beta C + C B^\alpha_\gamma B^\gamma_\beta$$
$$\dot{\nabla}B^{\alpha\beta} = \nabla^\alpha\nabla^\beta C + C B^{\alpha\gamma}B_\gamma^{\;\beta}$$

The shift tensor $Z^i_\alpha$ and the normal $N^i$ satisfy

$$\dot{\nabla}Z^i_\alpha = N^i \nabla_\alpha C,\qquad \dot{\nabla}N^i = -Z^i_\alpha \nabla^\alpha C.$$

Finally, the surface Levi-Civita symbols $\varepsilon_{\alpha\beta}$ and $\varepsilon^{\alpha\beta}$ satisfy

$$\dot{\nabla}\varepsilon_{\alpha\beta} = 0,\qquad \dot{\nabla}\varepsilon^{\alpha\beta} = 0.$$

Time differentiation of integrals The CMS provides rules for time differentiation of volume and surface integrals:

$$\frac{d}{dt}\int_\Omega F\, d\Omega = \int_\Omega \frac{\partial F}{\partial t}\, d\Omega + \int_S C F\, dS,\qquad \frac{d}{dt}\int_S F\, dS = \int_S \dot{\nabla}F\, dS - \int_S C B^\alpha_\alpha F\, dS.$$

References Tensors Differential geometry Riemannian geometry Curvature (mathematics) Moving surfaces
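To make the analytical formula $C = V^i N_i$ concrete, here is a small symbolic check (illustrative only; the sphere parametrization and the use of SymPy are choices made for this example, not part of the article). For a sphere of radius $R(t)$ the computation recovers $C = R'(t)$, matching the geometric definition of the surface velocity:

import sympy as sp

t, th, ph = sp.symbols('t theta phi')
R = sp.Function('R')(t)

# Embedding Z^i(t, S) of a sphere of radius R(t), with S = (theta, phi)
Z = sp.Matrix([R * sp.sin(th) * sp.cos(ph),
               R * sp.sin(th) * sp.sin(ph),
               R * sp.cos(th)])

V = Z.diff(t)                 # velocity object V^i = dZ^i/dt
N = Z / R                     # outward unit normal of the sphere
C = sp.simplify(V.dot(N))     # C = V^i N_i
print(C)                      # -> Derivative(R(t), t)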
Calculus of moving surfaces
[ "Physics", "Mathematics", "Engineering" ]
753
[ "Geometric measurement", "Tensors", "Physical quantities", "Calculus", "Curvature (mathematics)" ]
28,360,272
https://en.wikipedia.org/wiki/Les%20Houches%20School%20of%20Physics
Les Houches School of Physics (École de Physique des Houches) is an international physics center dedicated to seasonal schools and workshops. It is located in Les Houches, France. The school was founded in 1951 by French scientist Cécile DeWitt-Morette. Its participants have included Nobel laureates in Physics such as Enrico Fermi, Wolfgang Pauli, Murray Gell-Mann and John Bardeen. According to a former director of the school, Jean Zinn-Justin, the school is the "mother of all modern schools of physics". Since 2017, it is a Joint Research Service (unité mixte de service, UMS) of the French National Centre for Scientific Research (CNRS) and the Grenoble Alpes University. In 2020, it was recognized as an EPS Historic Site by the European Physical Society (EPS). History The school was founded by Cécile DeWitt-Morette in 1951. She was 29 years old at the time, had married physicist Bryce DeWitt a week before, and was still a postdoctoral researcher in the United States. The school was created as a post-World War II effort to improve the standard of modern physics in Europe, which was lagging behind the United States. She was inspired by her experience in the Girl Scouts and by Richard Feynman's lectures at the 1949 annual Summer Symposium at the University of Michigan in Ann Arbor, which DeWitt-Morette attended. She quickly gathered the institutional and financial support of Pierre Victor Auger (then director of the Natural Sciences Department at UNESCO), the CNRS, Albert Châtelet (dean of the faculty of physics of the University of Paris) and an official in charge at the French Ministry of Education. With a reduced budget, she settled on opening the school in a rustic farm surrounded by chalets, a few kilometers from the village of Les Houches. The school was publicized by her French colleagues: Yves Rocard at the École normale supérieure, Louis Leprince-Ringuet at the École polytechnique, Louis de Broglie and Alexandre Proca at the Institut Henri Poincaré, and Francis Perrin at the Collège de France and CEA, who hired a secretary to handle the paperwork. Louis Néel acquired the patronage of the Grenoble faculty of science in order for the school to be legally attached to the University of Grenoble. DeWitt-Morette also obtained international support from J. Robert Oppenheimer, Enrico Fermi, Julian Schwinger and Victor Weisskopf. The first session in 1951 was attended by young French professors, among them Alfred Kastler, as well as by famous physicists from abroad including Walter Heitler, Léon van Hove, Emilio Segrè, Walter Kohn and Wolfgang Pauli. The first lessons were given by Van Hove on quantum mechanics. Up until the 1960s, the students at the school were cut off from the outside world with the bare minimum in amenities; Nobel laureate Claude Cohen-Tannoudji, a student in 1955, recalled these spartan conditions. Yves Rocard and Maurice Lévy, inspired by the school, founded a summer school in Cargèse, Corsica, which they called the "Les Houches on the beach". Subsequently, a number of scientific summer schools opened all over Europe following the same model, partly with the support of NATO's Advanced Study Institutes program. In its early years, the school caused some political controversy, with the French Communist Party accusing it of US espionage and interference. A counter-school project against the allegedly Americanized Les Houches school was considered but was short-lived. In 1977, a physics centre was created, specialising in shorter conferences that could take place all year round. 
In 1988, a pre-doctoral school was opened for young researchers beginning their PhD theses. Attendees Several attendees later went on to receive either the Nobel Prize in Physics or the Fields Medal. Prize The Cécile DeWitt-Morette, École de Physique des Houches Prize has been awarded annually since 2019. It is awarded to scientists under 55 years old, of any nationality, who have made a remarkable contribution to physics and have attended the school as a lecturer or student. The jury is composed of members of the French Academy of Sciences. Since 2023, it has been called the Cécile DeWitt-Morette / École de Physique des Houches / Fondation CFM for Research prize. References External links École de Physique des Houches web site Education in France Summer schools Physics education
Les Houches School of Physics
[ "Physics" ]
919
[ "Applied and interdisciplinary physics", "Physics education" ]
28,364,303
https://en.wikipedia.org/wiki/Cell%20Research
Cell Research is a monthly peer-reviewed scientific journal covering cell biology. It is published by Nature Research on behalf of the Shanghai Institutes for Biological Sciences (Chinese Academy of Sciences) and is affiliated with the Chinese Society for Cell Biology. It was established in 1990. The editor-in-chief is Gang Pei (Shanghai Institutes for Biological Sciences), and the deputy editor-in-chief is Dangsheng Li (Shanghai Institutes for Biological Sciences). Abstracting and indexing The journal is abstracted and indexed in Index Medicus/MEDLINE/PubMed, Science Citation Index, Current Contents/Life Sciences, Chemical Abstracts, BIOSIS Previews, and the VINITI Database RAS. According to the Journal Citation Reports, Cell Research has a 2021 impact factor of 46.297. References External links NPG website Molecular and cellular biology journals Nature Research academic journals Monthly journals Academic journals established in 1990 English-language journals Academic journals associated with learned and professional societies Hybrid open access journals
Cell Research
[ "Chemistry" ]
192
[ "Molecular and cellular biology journals", "Molecular biology" ]
48,259,346
https://en.wikipedia.org/wiki/Emanuele%20Fo%C3%A0
Emanuele Foà (16 August 1892 – 9 October 1949) was an Italian engineer and engineering physicist, known for his contribution to mathematical fluid dynamics. In particular he proved the first known uniqueness theorem for the solutions to the three-dimensional Navier–Stokes equations for incompressible fluids in bounded domains. Life and academic career He was born in Savigliano, in a Jewish family of distinguished professionals and officials: his father, Teodoro Foà, was a military physician serving as a major in the Royal Italian Army, who died at the age of 42 due to the viral fevers he had contracted during the Eritrean war campaign. Despite having lost his father at a young age and having a disabled sister, he succeeded in studying engineering at the Polytechnic University of Turin thanks to a scholarship. Italy's entry into World War I in 1915 forced him to interrupt his engineering studies: he joined the army and served as an artillery officer during 1916 and 1917. On 28 October 1917, during the battle of Caporetto, he was taken prisoner and spent a year in a prisoner-of-war camp in Germany. At the end of the war, notwithstanding his health problems, he successfully completed his university studies: he received his Laurea degree in industrial engineering at the Polytechnic University of Turin in August 1919. From 1 December 1919 he started to work at his alma mater as assistant professor to the chair of thermal engineering, which at the time was held by Benedetto Luigi Montel. In 1927 he entered and won a competitive examination for a professorship in engineering physics at the then-called "Royal School of Engineering of Bologna": in 1928 he left Turin for Bologna, succeeding, after a brief interval, Luigi Donati, who had held the chair for several decades. The very same year he met Dario Graffi, who had earlier become assistant professor to the chair of engineering physics: their cordial relations became over time a deep and tenacious friendship, which lasted until Foà's death. In Bologna, he passionately devoted himself to teaching, as his course handouts, published in several editions, testify: the same period was fruitful for his research activity, and in 1930 he was appointed ordinary professor. The years from 1938 to 1945: the "Italian Racial Laws" and World War II His teaching at the university was interrupted in 1938, the year the Italian Government approved the "Racial Laws", "unreasonable, before being unjust". After the law forced him into early retirement, the Council of the faculty of engineering replaced him with Graffi: he was very happy with the council's choice, owing to their friendship and mutual esteem. For his part Graffi, who could not adopt Foà's handouts due to the prohibition imposed by the laws on publications by Jewish authors, published them under his own name: cautiously, he kept sending students to Foà's house for private lessons, in order to help him supplement his small retirement pension. During the World War II period, Foà and his wife managed to stay in Bologna but had to change their accommodation frequently, being hosted by friendly families. In October 1943, warned by Dino Zanobetti about an impending police raid, he and his wife left their house and moved to an apartment made available by Dante Piccioli, a wealthy engineer and friend of theirs. 
More than a month later, on 7 December, Bologna was bombed and the apartment where Foà and his wife resided was destroyed: being at home, Foà was severely wounded in the right leg and was brought to the Sant'Orsola Hospital. Honors In 1933, he was elected corresponding member of the Accademia delle Scienze dell'Istituto di Bologna and, after being reinstated in his role of professor at the University of Bologna in 1945, he became an ordinary member in 1947. Also in 1947, jointly with some fellow engineers, he founded the Bologna Section of the Associazione Termotecnica Italiana, of which he was elected the first president. Work Teaching activity Research activity Selected publications Among them are the article in which Foà proves his uniqueness theorem for classical solutions to the Navier–Stokes equations, and its companion paper, in which Foà describes his rigorous approach to dimensional analysis. See also David Dolidze Euler equations Fluid mechanics Olga Ladyzhenskaya James Serrin Notes References Biographical and general references The slides of a conference held by Alessandro Cocchi, emeritus professor of engineering physics at the University of Bologna, on the history of the Laboratory of Engineering Physics of the University. An obituary written by one of his fellow students at the Polytechnic University of Turin, with a list of his publications. "Contribution to the history of Engineering Physics in Italy" is a short historical survey giving details on the life and work of several Italian scholars of Engineering Physics. A short obituary written by the Dean of the Faculty of Engineering at the University of Bologna, for the Yearbook of the Academic years 1948–1949 – 1949–1950: a photograph of Foà is included. The Inaugural Address of the director of the Polytechnic University of Turin, for the Yearbook of the Academic Year 1928–1929. An obituary, with a list of his publications. Recollections of Giulio Supino and Emanuele Foà by Dino Zanobetti, professor emeritus of Electrical engineering and one of their former students. Scientific references In one article, Graffi extends to compressible viscous fluids a uniqueness theorem for the solutions to the Navier–Stokes equations in bounded domains, previously proved only for incompressible fluids by Emanuele Foà and rediscovered by David Dolidze. A short research note, available at Gallica, announces the results of the author on the uniqueness of solutions of the Navier–Stokes equations on unbounded domains under the hypothesis of constant fluid velocity at infinity. In a further paper, Graffi extends his uniqueness theorem for the solutions of the Navier–Stokes equations on unbounded domains, relaxing previously assumed hypotheses on the behaviour of the velocity at infinity. Another article is the published text of a conference Graffi held at the Seminario Matematico e Fisico di Milano, presenting mainly his research on the uniqueness of the solutions to the Navier–Stokes equations. External links The biographical entry about Emanuele Foà in the "Dizionario Biografico degli Italiani (Biographical Dictionary of Italians)" section of the Enciclopedia Treccani. 
1892 births 1949 deaths 20th-century Italian Jews Jewish physicists People from Savigliano Fluid dynamicists 20th-century Italian mathematicians Engineers from Bologna 20th-century Italian physicists Polytechnic University of Turin alumni Academic staff of the University of Bologna Academic staff of the Polytechnic University of Turin 20th-century Italian engineers
Emanuele Foà
[ "Chemistry" ]
1,395
[ "Fluid dynamicists", "Fluid dynamics" ]
48,264,022
https://en.wikipedia.org/wiki/Pursuit%20predation
Pursuit predation is a form of predation in which predators actively give chase to their prey, either solitarily or as a group. It is an alternative predation strategy to ambush predation: pursuit predators rely on superior speed, endurance and/or teamwork to seize the prey, while ambush predators use concealment, luring, exploitation of their surroundings and the element of surprise to capture the prey. While the two patterns of predation are not mutually exclusive, morphological differences in an organism's body plan can create an evolutionary bias favoring either type of predation. Pursuit predation is typically observed in carnivorous species within the kingdom Animalia, such as cheetahs, lions, wolves and early Homo species. The chase can be initiated either by the predator, or by the prey if it is alerted to a predator's presence and attempts to flee before the predator gets close. The chase ends either when the predator successfully catches up and tackles the prey, or when the predator abandons the attempt after the prey outruns it and escapes. One particular form of pursuit predation is persistence hunting, where the predator stalks the prey slowly but persistently to wear it down physically with fatigue or overheating; some animals are examples of both types of pursuit. Strategy There is still uncertainty as to whether predators behave with a general tactic or strategy while hunting. However, among pursuit predators there are several common behaviors. Often, predators will scout potential prey, assessing prey quantity and density prior to engaging in a pursuit. Certain predators choose to pursue prey primarily in a group of conspecifics; these animals are known as pack hunters or group pursuers. Other species choose to hunt alone. These two behaviors are typically due to differences in hunting success, with some species being far more successful in groups and others more successful alone. Pursuit predators may also choose to either exhaust their metabolic resources rapidly or pace themselves during a chase. This choice can be influenced by the prey species, or by seasonal and temporal settings. Predators that rapidly exhaust their metabolic resources during a chase tend to first stalk their prey, slowly approaching to decrease chase distance and time. When the predator is at a closer distance (one that would lead to easier prey capture), it finally gives chase. Pacing pursuit is more commonly seen in group pursuit, as individual animals do not need to exert as much energy to capture prey. However, this type of pursuit requires group coordination, which may have varying degrees of success. Since groups can engage in longer chases, they often focus on separating a weaker or slower prey item during pursuit. Morphologically speaking, while ambush predation requires stealth, pursuit predation requires speed; pursuit predators are proportionally long-limbed and equipped with cursorial adaptations. Current theories suggest that this proportionally long-limbed approach to body plan was an evolutionary countermeasure to prey adaptation. Group pursuers Vertebrates Group pursuers hunt with a collection of conspecifics. Group pursuit is usually seen in species of relatively high sociality; in vertebrates, individuals often seem to have defined roles in pursuit. Mammals African wild dog (Lycaon pictus) packs have been known to split into several smaller groups while in pursuit; one group initiates the chase, while the other travels ahead of the prey's escape path. 
The group of chase initiators coordinates its chase to lead the prey towards the location of the second group, where the prey's escape path will be effectively cut off. Bottlenose dolphins (Tursiops) have been shown to exhibit similar pursuit role specialization. One group within the dolphin pod, known as the drivers, gives chase to the fish, forcing them into a tight circle formation, while the other group of the pod, the barriers, approaches the fish from the opposite direction. This two-pronged attack leaves the fish with only the option of jumping out of the water to escape the dolphins. However, the fish are completely vulnerable in the air; it is at this point that the dolphins leap out and catch the fish. In lion (Panthera leo) pack hunting, each member of the hunting group is assigned a position, from left wing to right wing, in order to better obtain prey. Such specializations in roles within the group are thought to increase sophistication in technique; lion wing members are faster, and will drive prey toward the center where the larger, stronger, killing members of the pride will take down the prey. Many observations of group pursuers note an optimal hunting group size at which certain currencies (mass of prey killed or number of prey killed) are maximized with respect to costs (kilometers covered or injuries sustained). Group size is often dependent on aspects of the environment: number of prey, prey density, number of competitors, seasonal changes, etc. Birds While birds are generally believed to be individual hunters, there are a few examples of birds that cooperate during pursuits. Harris's hawks (Parabuteo unicinctus) have two cooperative strategies for hunting: surrounding and cover penetration, and the long chase relay attack. The first strategy involves a group of hawks surrounding prey hidden under some form of cover, while another hawk attempts to penetrate the prey's cover. The penetration attempt flushes the prey out from its cover, where it is swiftly killed by one of the surrounding hawks. The second strategy is less commonly used: it involves a "relay attack" in which a group of hawks, led by a "lead" hawk, engages in a long chase for prey. The "lead" hawk will dive in order to kill the prey. If the dive is unsuccessful, the role of the "lead" shifts to another hawk, who will then dive in another attempt to kill the prey. During one observed relay attack, 20 dives and hence 20 lead switches were exhibited. Invertebrates As in vertebrates, there are many species of invertebrates which actively pursue prey in groups and exhibit task specialization, but while the vertebrates change their behavior based on their role in hunting, invertebrate task delegation is usually based on actual morphological differences. The vast majority of eusocial insects have castes within a population which tend to differ in size and have specialized structures for different tasks. This differentiation is taken to the extreme in the groups Isoptera and Hymenoptera, that is, the termites and the ants, bees, and wasps respectively. Termite-hunting ants of the genus Pachycondyla, also known as Matabele ants, form raiding parties consisting of ants of different castes, such as soldier ants and worker ants. Soldier ants are much larger than worker ants, with more powerful mandibles and more robust exoskeletons, and so they make up the front lines of raiding parties and are responsible for killing prey. Workers usually butcher and carry off the killed prey, while supporting the soldiers. 
The raiding parties are highly mobile and move aggressively into the colonies of termites, often breaking through their outer defenses and entering their mounds. The ants do not completely empty the mound of termites; instead they take only a few, allowing the termites to recover their numbers so that the ants have a steady stream of prey. Asian giant hornets, Vespa mandarinia, form similar raiding parties to hunt their prey, which usually consists of honeybees. The giant hornets group together and as a team can decimate an entire honeybee colony, especially colonies of non-native European honeybees. Alone, the hornets are subject to attack by the smaller bees, who swarm the hornet and vibrate their abdomens to generate heat, collectively cooking the hornet until it dies. By hunting in groups, the hornets avoid this problem. Individual pursuers Vertebrates Mammals While most big cat species are either solitary ambush predators or pack hunters, cheetahs (Acinonyx jubatus) are primarily solitary pursuit predators. Widely known as the fastest terrestrial animal, with running speeds reaching , cheetahs take advantage of their speed during chases. However, their speed and acceleration also have disadvantages, as both rely on anaerobic metabolism and can only be sustained for short periods of time. Studies show that cheetahs can maintain maximum speed for up to a distance of approximately , which is only about 20 seconds of sprinting, before fatigue and overheating set in. Due to these limitations, cheetahs are often observed quietly walking towards the prey to shorten the distance before running at moderate speeds during chases. There are claims that the key to a cheetah's successful pursuit may not be just a burst of sheer speed. Cheetahs are extremely agile, able to change direction in very short amounts of time while running at very high speeds. This maneuverability can make up for unsustainable high-speed pursuits, as it allows a cheetah to quickly close the distance without having to decelerate when the prey suddenly changes direction. Being lightly built, cheetahs will try to foot-sweep and unbalance the prey instead of grasping and tackling it. Only after the prey has fallen over, and thus momentarily stopped running, will the cheetah pounce and try to subdue it with a throat bite. Birds The painted redstart (Myioborus pictus) is one of the most well-documented flush pursuers. When flies, the redstarts' prey, are alerted to the presence of predators, they respond by fleeing. Redstarts take advantage of this anti-predator response by spreading and orienting their easily noticeable wings and tails, alerting the flies, but only when they are in a position where the flies' escape path intersects with the redstart's central field of vision. When the prey's path is in this field of vision, the redstart's prey capture rate is at its maximum. Once the flies begin to flee, the redstart begins to chase. It has been proposed that redstarts exploit two aspects of the visual sensitivity of their prey: sensitivity to the location of the stimulus in the prey's visual field and sensitivity to the direction of stimulus movement. The effectiveness of this pursuit can also be explained by the "rare enemy effect", an evolutionary consequence of multi-species predator-prey interactions. Invertebrates Dragonflies are skilled aerial pursuers; they have a 97% success rate for prey capture. This success rate is a consequence of the "decision" of which prey to pursue, based on initial conditions. 
Observations of several species of perching dragonflies show more pursuit initiations at larger starting distances for larger prey species than for much smaller prey. Further evidence points to a potential bias towards larger prey, due to more substantial metabolic rewards. This bias persists in spite of the fact that larger prey are typically faster, and choosing them results in less successful pursuits. Dragonflies' high success rate for prey capture may also be due to their interception foraging method. Unlike classical pursuit, in which the predator aims for the current position of its prey, dragonflies predict the prey's direction of motion, as in parallel navigation. Perching dragonflies (family Libellulidae) have been observed "staking out" high-density prey spots prior to pursuit. There are no noticeable distinctions in prey capture efficiency between males and females. Further, percher dragonflies are bound by their visual range. They are more likely to engage in pursuit when prey come within a subtended angle of around 1–2 degrees; prey subtending smaller angles lie outside a dragonfly's effective visual range. Evolutionary basis of the behavior Evolution as a countermeasure Current theory on the evolution of pursuit predation suggests that the behavior is an evolutionary countermeasure to prey adaptation. Prey animals vary in their likelihood of avoiding predation, and it is predation failure that drives the evolution of both prey and predator. Predation failure rates vary wildly across the animal kingdom; raptorial birds can fail anywhere from 20% to 80% of the time, while predatory mammals usually fail more than half the time. Prey adaptation drives these low success rates in three phases: the detection phase, the pursuit phase, and the resistance phase. The pursuit phase drove the evolution of distinct behaviors for pursuit predation. As selective pressure on prey is higher than on predators, adaptation usually occurs in prey long before the reciprocal adaptations in predators. Evidence in the fossil record supports this, with no evidence of modern pursuit predators until the late Tertiary period. Certain adaptations, like long limbs in ungulates, that were thought to be adaptive for speed against predatory behavior have been found to predate predatory animals by over 20 million years. Because of this, modern pursuit predation is an adaptation that may have evolved separately and much later, in response to the need for more energy in colder and more arid climates. Longer limbs in predators, the key morphological adaptation required for lengthy pursuit of prey, are tied in the fossil record to the late Tertiary. It is now believed that modern pursuit predators like the wolf and lion evolved this behavior around this time period as a response to ungulates' increasing feeding ranges. As ungulate prey moved into wider feeding ranges to find food in response to a changing climate, predators evolved the longer limbs and behavior necessary to pursue prey across larger ranges. In this respect, pursuit predation is not co-evolutionary with prey adaptation, but a direct response to prey. Prey's adaptation to climate is the key formative reason for the evolution of the behavior and morphological necessities of pursuit predation. In addition to serving as a countermeasure to prey adaptation, pursuit predation has evolved in some species as an alternative, facultative mechanism for foraging. 
For example, polar bears typically act as specialized predators of seal pups and operate in a manner closely predicted by optimal foraging theory. However, they have been seen to occasionally employ more energy-inefficient pursuit predation tactics on flightless geese. This alternative predatory strategy may serve as a back-up resource when optimal foraging is circumstantially impossible, or may even be a function of filling dietary needs. Evolution from an ecological basis Pursuit predation revolves around a distinct movement interaction between predator and prey; as prey move to find new foraging areas, predators should move with them. Predators congregate in areas of high prey density, and prey should therefore avoid these areas. However, the dilution factor may be a reason to stay in areas of high density due to a decreased risk of predation. Given the movements of predators over ranges in pursuit predation, though, the dilution factor seems a less important cause for predation avoidance. Because of these interactions, spatial patterns of predators and prey are important in preserving population size. Attempts by prey to avoid predation and find food are coupled with predator attempts to hunt and compete with other predators. These interactions act to preserve populations. Models of spatial patterns and synchrony of predator-prey relationships can be used as support for the evolution of pursuit predation as one mechanism to preserve these population mechanics. By pursuing prey over long distances, predators actually improve the long-term survival of both their own population and the prey population through population synchrony. Pursuit predation acts to even out population fluctuations by moving predatory animals from areas of high predator density to areas of low predator density, and from areas of low prey density to areas of high prey density. This keeps migratory populations in synchrony, which increases metapopulation persistence. Pursuit predation's effect on population persistence is more marked over larger travel ranges. Predator and prey levels are usually more synchronous in predation over larger ranges, as population densities have more ability to even out. Pursuit predation can then be supported as an adaptive mechanism for not just individual feeding success but also metapopulation persistence. Anti-predator adaptation to pursuit predation Anti-predator adaptation Just as the evolutionary arms race has led to the development of pursuit behavior of predators, so too has it led to the anti-predator adaptations of prey. Alarm displays such as the eastern swamphen's tail flicking, the white-tailed deer's tail flagging, and Thomson's gazelles' stotting have been observed deterring pursuit. These tactics are believed to signal that a predator's presence is known and, therefore, pursuit will be much more difficult. These displays are more frequent when predators are at an intermediate distance away. Alarm displays are used more often when prey believe predators are more prone to change their decision to pursue. For instance, cheetahs, common predators of Thomson's gazelles, are less likely to change their choice to pursue. As such, gazelles stot less when cheetahs are present than when other predators are present. In addition to behavioral adaptations, there are also morphological anti-predator adaptations to pursuit predators. For example, many birds have evolved rump feathers that fall off with much less force than the feathers of their other body parts. 
This allows for easier escape from predatory birds, as avian predators often approach prey from their rump. The confusion effect In many species that fall prey to pursuit predation, gregariousness on a massive scale has evolved as a protective behavior. Such herds can be conspecific (all individuals are of one species) or heterospecific. This is primarily due to the confusion effect, which states that if prey animals congregate in large groups, predators will have more difficulty identifying and tracking specific individuals. This effect has greater influence when individuals are visually similar and less distinguishable. In groups where individuals are visually similar, there is a negative correlation between group size and predator success rates. This may mean that the overall number of attacks decreases with larger group size or that the number of attacks per kill increases with larger group size. This is especially true in open habitats, such as grasslands or open ocean ecosystems, where the view of the prey group is unobstructed, in contrast to a forest or reef. Prey species in these open environments tend to be especially gregarious, with notable examples being starlings and sardines. When individuals of the herd are visually dissimilar, however, the success rate of predators increases dramatically. In one study, wildebeest on the African savanna were selected at random and had their horns painted white. This introduced a distinction, or oddity, into the population; researchers found that the wildebeest with white horns were preyed upon at substantially higher rates. By standing out, individuals are not as easily lost in the crowd, and so predators are able to track and pursue them with higher fidelity. This has been proposed as the reason why many schooling fish show little to no sexual dimorphism, and why many species in heterospecific schools bear a close resemblance to other species in their school. References Ecology Predation Eating behaviors
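The contrast between classical pursuit and the interception ("parallel navigation") strategy attributed to dragonflies above can be made concrete with a toy simulation. All numbers here are arbitrary assumptions chosen for the example, not data from the cited studies:

import numpy as np

# Toy 2-D chase: a predator at 1.3x the prey's speed either heads at the
# prey's current position (classical pursuit) or leads the target by the
# estimated time-to-intercept (interception).
def time_to_catch(intercept, dt=0.01, max_t=60.0, catch_radius=0.1):
    prey = np.array([0.0, 0.0]); prey_v = np.array([1.0, 0.0])
    pred = np.array([0.0, -5.0]); speed = 1.3
    t = 0.0
    while t < max_t:
        lead = np.linalg.norm(prey - pred) / speed if intercept else 0.0
        aim = prey + prey_v * lead           # predicted prey position
        step = aim - pred
        pred = pred + speed * dt * step / np.linalg.norm(step)
        prey = prey + prey_v * dt
        t += dt
        if np.linalg.norm(prey - pred) < catch_radius:
            return t
    return None

print(time_to_catch(False), time_to_catch(True))  # interception catches sooner

In this geometry the interceptor reaches the prey noticeably earlier than the classical pursuer, illustrating why predicting the prey's motion pays off.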
Pursuit predation
[ "Biology" ]
3,807
[ "Biological interactions", "Eating behaviors", "Behavior", "Ecology" ]
26,536,063
https://en.wikipedia.org/wiki/Premature%20chromosome%20condensation
Premature chromosome condensation (PCC), also known as premature mitosis, occurs in eukaryotic organisms when mitotic cells fuse with interphase cells. Chromatin, a substance that contains genetic material such as DNA, is normally found in a loose bundle inside a cell's nucleus. During the prophase of mitosis, the chromatin in a cell compacts to form condensed chromosomes; this condensation is required in order for the cell to divide properly. While mitotic cells have condensed chromosomes, interphase cells do not. PCC results when an interphase cell fuses with a mitotic cell, causing the interphase cell to produce condensed chromosomes prematurely. The appearance of a prematurely condensed chromosome depends on the stage that the interphase cell was in. Chromosomes that are condensed during the G1 phase are usually long and have a single strand, while chromosomes condensed during the S phase appear crushed. Condensation during the G2 phase yields long chromosomes with two chromatids. PCC was first reported in 1968, in virus-infected cells whose chromosomes showed a strange appearance. The strange appearance was selectively observed in S-phase nuclei, and it was therefore concluded that the nuclei of interphase cells fused with mitotic cells were condensed prematurely by an unknown material that accumulated in mitotic cells, producing chromosome structures equivalent to those observed in cell fusion. This material was named the mitosis promoting factor (MPF). The precise mechanism of chromosome condensation, as well as of its premature form, is still in question. It is only known that MPF is a key enzyme that induces PCC in somatic cells or oocytes, as it plays a key role in cell cycle regulation and cell growth control. When an interphase nucleus is exposed to activated MPF, which is supplied from the mitotic nucleus, PCC is induced. References Chromosomes Mitosis
Premature chromosome condensation
[ "Biology" ]
395
[ "Cellular processes", "Mitosis" ]
26,536,158
https://en.wikipedia.org/wiki/Cooperative%20coevolution
Cooperative coevolution (CC) is an evolutionary computation method inspired by the cooperation of species in biological evolution. It divides a large problem into subcomponents and solves them independently in order to solve the large problem. The subcomponents are also called species. The subcomponents are implemented as subpopulations, and the only interaction between subpopulations is in the cooperative evaluation of each individual of the subpopulations. The general CC framework is nature-inspired: individuals within a particular species mate amongst themselves, but mating between different species is not feasible. The cooperative evaluation of each individual in a subpopulation is done by concatenating the current individual with the best individuals from the rest of the subpopulations, as described by M. Potter. The cooperative coevolution framework has been applied to real-world problems such as pedestrian detection systems, large-scale function optimization and neural network training. It has also been further extended into another method, called constructive cooperative coevolution. Pseudocode

i := 0
for each subproblem S do
    initialise a subpopulation Pop_0(S)
    calculate fitness of each member in Pop_0(S)
while termination criteria not satisfied do
    i := i + 1
    for each subproblem S do
        select Pop_i(S) from Pop_{i-1}(S)
        apply genetic operators to Pop_i(S)
        calculate fitness of each member in Pop_i(S)

See also Constructive cooperative coevolution Genetic algorithms Differential evolution Metaheuristic References Evolutionary computation
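A runnable sketch of this loop follows (illustrative only: the separable sphere objective, population size, mutation scale, and truncation selection are arbitrary choices for the example, not part of Potter's original formulation). Each of the two subpopulations evolves one half of the solution vector, and individuals are scored by concatenation with the best member of the other subpopulation:

import random

N, POP, GENS, SIGMA = 5, 30, 200, 0.3     # genes per subproblem, etc.

def sphere(x):                            # global objective; lower is better
    return sum(v * v for v in x)

def evaluate(ind, partner_best, first):
    # Cooperative evaluation: concatenate with the collaborator's best.
    full = ind + partner_best if first else partner_best + ind
    return sphere(full)

def random_ind():
    return [random.uniform(-5, 5) for _ in range(N)]

random.seed(1)
pops = [[random_ind() for _ in range(POP)] for _ in range(2)]
best = [random.choice(p) for p in pops]   # initial collaborators

for _ in range(GENS):
    for s in (0, 1):
        partner = best[1 - s]
        ranked = sorted(pops[s], key=lambda i: evaluate(i, partner, s == 0))
        elite = ranked[:POP // 2]         # truncation selection
        children = [[g + random.gauss(0, SIGMA) for g in random.choice(elite)]
                    for _ in range(POP - len(elite))]
        pops[s] = elite + children
        best[s] = ranked[0]

print(sphere(best[0] + best[1]))          # approaches 0 as generations pass

Because the sphere function is fully separable, the two subproblems do not interact, which is exactly the setting where cooperative coevolution works best; strongly interdependent variables would require smarter collaborator selection.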
Cooperative coevolution
[ "Biology" ]
321
[ "Bioinformatics", "Evolutionary computation" ]
26,537,657
https://en.wikipedia.org/wiki/Free%20Radical%20Biology%20and%20Medicine
Free Radical Biology and Medicine is a peer-reviewed scientific journal and official journal of the Society for Redox Biology and Medicine. The journal covers research on redox biology, signaling, biological chemistry and medical implications of free radicals, reactive species, oxidants and antioxidants. Abstracting and indexing The journal is abstracted and indexed in ADONIS, BIOSIS, CAB Abstracts, Chemical Abstracts, Current Contents, EMBASE, EMBiology, MEDLINE, Science Citation Index, Scopus and Toxicology Abstracts. External links FRBM Society Biochemistry journals Biweekly journals English-language journals Elsevier academic journals
Free Radical Biology and Medicine
[ "Chemistry" ]
126
[ "Biochemistry stubs", "Biochemistry journals", "Biochemistry literature", "Biochemistry journal stubs" ]
26,541,583
https://en.wikipedia.org/wiki/Mycosporine-like%20amino%20acid
Mycosporine-like amino acids (MAAs) are small secondary metabolites produced by organisms that live in environments with high volumes of sunlight, usually marine environments. The exact number of compounds within this class of natural products is yet to be determined, since the class was discovered only relatively recently and novel molecular species continue to be identified; to date, around 30 are known. They are commonly described as "microbial sunscreens", although their function is believed not to be limited to sun protection. MAAs have high potential in cosmetic and biotechnological applications. Indeed, their UV-absorbing properties would allow the creation of products derived from natural photoprotectors, potentially harmless to the environment and efficient against UV damage. Background MAAs are widespread in the microbial world and have been reported in many microorganisms including heterotrophic bacteria, cyanobacteria, microalgae, ascomycetous and basidiomycetous fungi, as well as some multicellular organisms such as macroalgae and marine animals. Most research done on MAAs concerns their light-absorbing and radiation-protecting properties. The first thorough description of MAAs was done in cyanobacteria living in a high-UV-radiation environment. The major unifying characteristic among all MAAs is UV light absorption. All MAAs absorb UV light that can be destructive to biological molecules (DNA, proteins, etc.). Though most MAA research focuses on their photo-protective capabilities, they are also considered to be multi-functional secondary metabolites with many cellular functions. MAAs are effective antioxidant molecules and are able to stabilize free radicals within their ring structure. In addition to protecting cells from mutation via UV radiation and free radicals, MAAs are able to boost cellular tolerance to desiccation, salt stress, and heat stress. Chemistry Mycosporine-like amino acids are rather small molecules (<400 Da). The structures of over 30 MAAs have been resolved, and all contain a central cyclohexenone or cyclohexenimine ring with a wide variety of substitutions. The ring structure is thought to absorb UV light and accommodate free radicals. All MAAs absorb ultraviolet wavelengths, typically between 310 and 362 nm. They are considered to be amongst the strongest natural absorbers of UV radiation. It is this light-absorbing property that allows MAAs to protect cells from the harmful UV-B and UV-A components of sunlight. Biosynthetic pathways of MAAs depend on the specific MAA molecule and the organism that is producing it. These biosynthetic pathways often share common enzymes and metabolic intermediates with pathways of the primary metabolism. An example is the shikimate pathway, which is classically used to produce the aromatic amino acids (phenylalanine, tyrosine and tryptophan), with many intermediates and enzymes from this pathway utilized in MAA biosynthesis. Examples Functions Ultraviolet light responses Protection from UV radiation Ultraviolet UV-A and UV-B radiation is harmful to living systems. An important tool used to deal with UV exposure is the biosynthesis of small-molecule sunscreens. MAAs have been implicated in UV radiation protection. The genetic basis for this implication comes from the observed induction of MAA synthesis when organisms are exposed to UV radiation. This has been observed in aquatic yeasts, cyanobacteria, marine dinoflagellates and some Antarctic diatoms. 
MAAs have also been identified in 572 species of other algae: 45 species in Chlorophyta, 41 species in Phaeophyta, and 486 species in Rhodophyta; these compounds also present anti-aging, anti-inflammatory, antioxidative and wound-healing properties. When MAAs absorb UV light, the energy is dissipated as heat. UV-B photoreceptors have been identified in cyanobacteria as the molecules responsible for the UV light induced responses, including synthesis of MAAs. Helioguard™365, a cream already on the market containing porphyra-334 and shinorine derived from Porphyra umbilicalis, was developed by Mibelle AG Biochemistry and shows preventive effects against UVA. An MAA known as palythine, derived from seaweed, has been found to protect human skin cells from UV radiation even in low concentrations. "MAAs, in addition to their environmental benefits, appear to be multifunctional photoprotective compounds," says Dr. Karl Lawrence, lead author of a paper on the research. "They work through the direct absorption of UVR [ultraviolet radiation] photons, much like the synthetic filters. They also act as potent antioxidants, which is an important property as exposure to solar radiation induces high levels of oxidative stress, and this is something not seen in synthetic filters." Protection from oxidative damage Some MAAs protect cells from reactive oxygen species (i.e. singlet oxygen, superoxide anions, hydroperoxyl radicals, and hydroxyl radicals). Reactive oxygen species can be created during photosynthesis, further supporting the idea that MAAs provide protection from UV light. Mycosporine-glycine is an MAA that provides antioxidant protection even before oxidative stress response genes and antioxidant enzymes are induced. MAA-glycine (mycosporine-glycine) is able to quench singlet oxygen and hydroxyl radicals very quickly and efficiently. Some oceanic microbial ecosystems are exposed to high concentrations of oxygen and intense light; these conditions are likely to generate high levels of reactive oxygen species. In these ecosystems, MAA-rich cyanobacteria may be providing antioxidant activity. Accessory pigments in photosynthesis MAAs are able to absorb UV light. A study published in 1976 demonstrated that an increase in MAA content was associated with an increase in photosynthetic respiration. Further studies done in marine cyanobacteria showed that the MAAs synthesized in response to UV-B correlated with an increase in photosynthetic pigments. Though not absolute proof, these findings do implicate MAAs as accessory pigments to photosynthesis. Photoreceptors The eyes of the mantis shrimp contain four different kinds of mycosporine-like amino acids as filters, which, combined with two different visual pigments, assist the eye in detecting six different bands of ultraviolet light. Three of the filter MAAs are identified as porphyra-334, mycosporine-gly, and gadusol. Environmental stress responses Salt stress Osmotic stress is defined as difficulty maintaining proper fluids in the cell within a hypertonic or hypotonic environment. MAAs accumulate within a cell's cytoplasm and contribute to the osmotic pressure within a cell, thus relieving pressure from salt stress in a hypertonic environment. As evidence of this, MAAs are seldom found in large quantities in cyanobacteria living in freshwater environments. However, in saline and hypertonic environments, cyanobacteria often contain high concentrations of MAAs. The same phenomenon was noted for some halotolerant fungi. 
However, the concentration of MAAs within cyanobacteria living in hyper-saline environments is far from the amount required to balance the salinity. Therefore, additional osmotic solutes must be present as well. Desiccation stress Desiccation (drought) stress is defined as conditions where water becomes the growth-limiting factor. MAAs have reportedly been found in high concentrations in many microorganisms exposed to drought stress. In particular, cyanobacteria species exposed to desiccation, UV radiation and oxidative stress have been shown to possess MAAs in an extracellular matrix. However, it has been shown that MAAs do not provide sufficient protection against high doses of UV radiation. Thermal stress Thermal (heat) stress is defined as temperatures lethal or inhibitory towards growth. MAA concentrations have been shown to be up-regulated when an organism is under thermal stress. Multipurpose MAAs could also be compatible solutes under freezing conditions, because a high incidence of MAA-producing organisms has been reported in cold aquatic environments. References Further reading Amino acids
Mycosporine-like amino acid
[ "Chemistry" ]
1,716
[ "Amino acids", "Biomolecules by chemical classification" ]
26,544,101
https://en.wikipedia.org/wiki/Genetic%20matchmaking
Genetic matchmaking is the idea of matching couples for romantic relationships based on their biological compatibility. The initial idea was conceptualized by Claus Wedekind through his "sweaty t-shirt" experiment. Males were asked to wear T-shirts for two consecutive nights, and then females were asked to smell the T-shirts and rate the body odors for attractiveness. Human body odor has been associated with the human leukocyte antigen (HLA) genomic region. They discovered that females were attracted to men who had dissimilar HLA alleles from them. Furthermore, these females reported that the body odors of HLA-dissimilar males reminded them of their current partners or ex-partners, providing further evidence of biological compatibility. Research Following the research done by Wedekind, several studies found corroborating evidence for biological compatibility. Garver-Apgar et al. presented evidence for HLA-dissimilar alleles playing a role in the healthiness of romantic relationships. They discovered that as the proportion of HLA-similar alleles increased between couples, females reported being less sexually responsive to their partners, deriving less satisfaction from being aroused by their partners, and having additional sexual partners (while with their current partner). Additionally, Ober et al. conducted an independent study on a population of American Hutterites by comparing the HLA alleles of married couples. They discovered that married couples were less likely to share HLA alleles than expected from random chance; thus their results were consistent with a tendency to avoid mates carrying similar HLA alleles. Further evidence of the importance of genetic compatibility can be found in the finding that couples sharing a higher proportion of HLA alleles tend to have recurring spontaneous abortions, reduced body mass in babies, and longer intervals between successive births. The application of this research to find romantic partners via genetic testing has been described as "dubious". Analyses of data from the International HapMap Project have not found a consistent relationship between marital partners and genes related to the immune system. Reasons for biological compatibility There are several biological reasons why women would be attracted to and mate with men with dissimilar HLA alleles: Their offspring would have a greater assortment of HLA alleles, theoretically giving them a wider diversity of antigens present on the surface of cells compared to HLA-homozygous offspring. The wider variety of antigens allows the immune system to target a greater number of pathogens, making the offspring more immunocompetent. No single resistance-conferring HLA allele would simply become universal in all individuals: through evolution, there will always be some pathogens that become resistant to this allele and spread, creating selection against it. HLA-disassortative mating can be considered a method of rendering the adaptations that pathogens have made to their hosts obsolete in the offspring; in other words, it allows hosts to keep up in the "Red Queen's race". HLA genes are highly polymorphic between individuals. Any two individuals with similar HLA genes could possibly be related. Mating of two related individuals would result in inbreeding, which can be harmful to the offspring since it produces greater genetic homozygosity, increasing the chance that harmful recessive mutations are expressed. References Genetics Dating Matchmaking
Genetic matchmaking
[ "Biology" ]
691
[ "Genetics" ]
43,456,675
https://en.wikipedia.org/wiki/USA-256
USA-256, also known as GPS IIF-7, GPS SVN-68 and NAVSTAR 71, is an American navigation satellite which forms part of the Global Positioning System. It was the seventh of twelve Block IIF satellites to be launched. Launch Built by Boeing, USA-256 was launched by United Launch Alliance at 03:23 UTC on 2 August 2014, atop an Atlas V 401 carrier rocket, vehicle number AV-048. The launch took place from Space Launch Complex 41 at the Cape Canaveral Air Force Station, and placed USA-256 directly into medium Earth orbit. Orbit As of 3 August 2014, USA-256 was in an orbit with a perigee of , an apogee of , a period of 727.05 minutes, and 55.02 degrees of inclination to the equator. It is used to broadcast the PRN 09 signal, and operates in slot 6 of plane F of the GPS constellation. The satellite has a design life of 12 years and a mass of . It is currently in service, following commissioning on 17 September 2014. References Spacecraft launched in 2014 GPS satellites USA satellites Spacecraft launched by Atlas rockets
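Given the quoted period, Kepler's third law, $a = \left(\mu\,(T/2\pi)^2\right)^{1/3}$, fixes the orbit's semi-major axis. A quick back-of-the-envelope check (illustrative; it assumes only the standard gravitational parameter of Earth):

import math

MU = 3.986004418e14                 # Earth's gravitational parameter, m^3/s^2
T = 727.05 * 60                     # orbital period quoted above, in seconds

# Kepler's third law: T = 2*pi*sqrt(a^3 / mu)  =>  a = (mu * (T/2pi)^2)^(1/3)
a = (MU * (T / (2 * math.pi)) ** 2) ** (1 / 3)
print(f"semi-major axis ~ {a / 1e3:.0f} km")

The result, about 26,800 km from Earth's center (roughly 20,400 km mean altitude), is consistent with the medium Earth orbit used by the GPS constellation.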
USA-256
[ "Technology" ]
237
[ "Global Positioning System", "GPS satellites" ]
43,457,309
https://en.wikipedia.org/wiki/Echo%20%28communications%20protocol%29
Echo (one-to-all, one-to-one, or one-to-some distribution) is a group communications protocol where authenticated and encrypted information is addressed to members connected to a node. The Echo protocol offers three modes of operation: Adaptive Echo, Full Echo, and Half Echo. Adaptive Echo The Adaptive Echo distributes messages to parties that have shown awareness of a secret token. A common illustration is the communication example of Hansel and Gretel. Referring to the old fairy tale, both mark the trees with either "white pebbles" or "bread crumbs" to find each other in the forest. They wish to communicate without the wicked witch knowing. How can Hansel and Gretel communicate without revealing their communications? The nodes in this example use the token "white pebbles". Because the wicked witch is unaware of the secret token, she will not receive communications from Hansel and Gretel unless, of course, she misbehaves. Full Echo Full Echo, or simply Echo, sends each message to every neighbor. Every neighbor does the same, unless it is the target node of a specific message. In smaller networks, the message should reach every peer. Nodes can be clients, servers, or both. Half Echo The Half Echo sends the message only to a direct neighbor. If configured correctly, the target node will not disperse the received message to other nearby nodes. This allows two neighbors to communicate with each other on dedicated sockets; that is, data from other nodes will not traverse the restricted socket. Though messages are always authenticated and encrypted, nodes can exclude others from knowing about the communications. Echo Accounts Accounts allow for exclusive connections. A server node may establish accounts and then distribute the credentials' information. Accounts create an artificial web of trust without exposing the public encryption key and without attaching the key to an IP address. References External links Internet architecture Internet broadcasting Television terminology Routing Multihoming Packets (information technology)
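The Full Echo forwarding rule amounts to flooding with duplicate suppression: every node relays each incoming message to all of its neighbors, and a message is processed at most once per node. The sketch below is a toy model of that forwarding pattern only; the real protocol additionally encrypts every message and lets each receiving node attempt decryption, which this example omits:

from collections import deque

def full_echo(graph, source):
    """Toy Full Echo flood: deliver to every reachable node exactly once."""
    seen = {source}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbour in graph[node]:      # relay to every neighbour
            if neighbour not in seen:      # suppress duplicates
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

graph = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}
print(full_echo(graph, "A"))               # {'A', 'B', 'C', 'D'}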
Echo (communications protocol)
[ "Technology" ]
426
[ "Computing stubs", "Internet architecture", "IT infrastructure", "Computer network stubs" ]
43,457,615
https://en.wikipedia.org/wiki/Membrane%20theory%20of%20shells
The membrane theory of shells, or membrane theory for short, describes the mechanical behavior of shells when twisting and bending moments are small enough to be negligible. The spectacular simplification of membrane theory makes possible the examination of a wide variety of shapes and supports, in particular, tanks and shell roofs. Heavy penalties are paid for this simplification, and its inadequacies become apparent through critical inspection, while remaining within the theory, of the solutions it yields. However, this theory is more than a first approximation. If a shell is shaped and supported so as to carry the load within a membrane stress system, it may be a desirable solution to the design problem, i.e., thin, light and stiff. See also Theory of plates and shells Stress resultants in plates and shells References Literature Practical industry example for plates and shell analysis - animated video Scientific theories Continuum mechanics
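A classic worked example of the membrane stress system, included here for illustration (this is a standard textbook result, not drawn from any reference in this article): a thin spherical shell of radius R and thickness t under internal pressure p. Equilibrium of a hemispherical cap gives the uniform membrane force per unit length N and the membrane stress sigma:

% equilibrium of a hemispherical cap: pressure resultant on the cut plane
% balances the membrane force acting around the circumference
\[
  p \,\pi R^{2} = N \cdot 2\pi R
  \quad\Longrightarrow\quad
  N = \frac{pR}{2},
  \qquad
  \sigma = \frac{N}{t} = \frac{pR}{2t}
\]
% no bending moments are required: the shape and support carry the load
% within a pure membrane stress state

This is precisely the situation where membrane theory is both accurate and economical: a pressurized tank designed this way is thin, light and stiff.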
Membrane theory of shells
[ "Physics" ]
181
[ "Classical mechanics stubs", "Classical mechanics", "Continuum mechanics" ]
43,458,076
https://en.wikipedia.org/wiki/AP%20Physics%202
Advanced Placement (AP) Physics 2 is a year-long introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to be equivalent to a second-semester algebra-based university course in thermodynamics, electromagnetism, optics, and modern physics. Along with AP Physics 1, the first AP Physics 2 exam was administered in 2015. History The AP Physics 2 classes began in the fall of 2014, with the first AP exams administered in May 2015. The courses were formed through collaboration between current Advanced Placement teachers and The College Board, with guidance from the National Research Council and the National Science Foundation. As of August 2013, AP summer institutes (the College Board's professional development courses for Advanced Placement and Pre-AP teachers) dedicated 20% of their content to preparing AP Physics B educators for the new AP physics courses. In September 2013, face-to-face workshops sponsored by the College Board likewise focused 20% of their content on the course. In February 2014, the official course description and sample curriculum resources were posted to the College Board website, with two practice exams being posted the next month. As of September 2014, face-to-face workshops are dedicated solely to AP Physics 1 & AP Physics 2. The full course was first taught in 2014, with the exam given in 2015. The College Board released a "Curriculum Framework" which includes the 7 principles on which AP Physics 2 would be based, as well as smaller "Enduring Understanding" concepts. In 2020, the examination was administered on computer from home because of COVID-19. The College Board suspected that some students might have been using unauthorized resources while taking the test; in order to ensure accurate results in the future, the course materials would be made more difficult and in-depth. In February 2024, College Board announced that there would be changes in the curricula for their AP Physics classes for the 2025 exams. For AP Physics 2, this removed fluids (the first topic of the curriculum) from the exam. From the 2024-25 school year onward, this topic is covered as the last unit of AP Physics 1. With fluids no longer on the curriculum, the optics unit was separated into two units which cover the subject in more depth. This added mechanical waves, standing waves, sound waves, and the Doppler effect, which are covered in Waves, Sound, and Physical Optics. The unit covering electric circuits was changed to be more comprehensive, and blackbody radiation and Compton scattering were added to Modern Physics as well. As of the fall of 2024, all AP Physics 2 units are numbered sequentially to those in AP Physics 1, starting with Thermodynamics as unit 9 and ending with Modern Physics as unit 15. Curriculum AP Physics 2 is an algebra-based, introductory college-level physics course in which students explore thermodynamics with kinetic theory; PV diagrams and probability; electrostatics; electrical circuits with capacitors; magnetic fields; electromagnetism; physical and geometric optics; and quantum, atomic, and nuclear physics. Through inquiry-based learning, students develop scientific critical thinking and reasoning skills. The content of AP Physics 2 overlaps with that of AP Physics C: Electricity and Magnetism, but Physics 2 is algebra-based, while Physics C is calculus-based.
AP Physics C: Electricity and Magnetism is also focused entirely on Electricity and Magnetism, while AP Physics 2 covers additional topics such as Thermodynamics, Waves, and Modern Physics. Exam Science Practices Assessed Score Distributions See also Glossary of physics Science education in the United States Notes References Advanced Placement Physics education Standardized tests
AP Physics 2
[ "Physics" ]
730
[ "Applied and interdisciplinary physics", "Physics education" ]
25,130,540
https://en.wikipedia.org/wiki/Short%20Oligonucleotide%20Analysis%20Package
SOAP (Short Oligonucleotide Analysis Package) is a suite of bioinformatics software tools from the BGI Bioinformatics department enabling the assembly, alignment, and analysis of next generation DNA sequencing data. It is particularly suited to short read sequencing data. All programs in the SOAP package may be used free of charge and are distributed under the GPL open source software license. Functionality The SOAP suite of tools can be used to perform the following genome assembly tasks: Sequence Alignment SOAPaligner (SOAP2) is specifically designed for fast alignment of short reads and performs favorably compared with similar alignment tools such as Bowtie and MAQ. Genome Assembly SOAPdenovo is a short read de novo assembler utilizing de Bruijn graph construction. It is optimized for short reads such as those generated by Illumina and is capable of assembling large genomes such as the human genome. SOAPdenovo was used to assemble the genome of the giant panda. This was upgraded to SOAPdenovo2, which was optimized for large genomes and included the widely used GapCloser module. Transcriptome Assembly SOAPdenovo-Trans is a de novo transcriptome assembler designed specifically for RNA-Seq that was created for the 1000 Plant Genomes project. Indel Discovery SOAPindel is a tool to find insertions and deletions from next generation paired-end sequencing data, providing a list of candidate indels with quality scores. SNP Discovery SOAPsnp is a consensus sequence builder. This tool uses the output from SOAPaligner to generate a consensus sequence which enables SNPs to be called on a newly sequenced individual. Structural Variation Discovery SOAPsv is a tool to find structural variations using whole genome assembly. Quality control and preprocessing SOAPnuke is a tool for integrated quality control and preprocessing of datasets from genomic, small RNA, Digital Gene Expression, and metagenomic experiments. History SOAP v1 The first release of SOAP consisted only of the sequence alignment tool SOAPaligner. SOAP v2 SOAP v2 extended SOAP v1 by significantly improving the performance of the SOAPaligner tool: alignment time was reduced by a factor of 20-30, while memory usage was reduced by a factor of 3. Support was added for compressed file formats. The SOAP suite was then expanded to include the new tools: SOAPdenovo 1&2, SOAPindel, SOAPsnp, and SOAPsv. SOAP v3 SOAP v3 extended the alignment tool by being the first short-read alignment tool to utilize GPU processors. As a result of these improvements, SOAP3 significantly outperformed the competing aligners Bowtie and BWA in terms of speed. See also genomics genome sequencing genome assembly bioinformatics External links http://soap.genomics.org.cn http://soap.genomics.org.cn/soap1 http://bioinformatics.genomics.org.cn http://seqanswers.com/forums/showthread.php?t=43 References Bioinformatics algorithms Bioinformatics software DNA sequencing Free software projects Short Oligonucleotide Analysis Package
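In a typical alignment workflow the reference is indexed once and reads are then aligned against that index. The sketch below drives the tools from Python; the command names and flags (2bwt-builder for indexing; soap with -a query reads, -D index, -o output) are quoted from memory of SOAP2's command line and should be verified against the installed version's own help output before use, so treat them as assumptions rather than a definitive interface.

import subprocess

def build_index(reference_fasta):
    """Build the 2BWT index for a reference genome (run once)."""
    subprocess.run(["2bwt-builder", reference_fasta], check=True)

def run_soap_align(reads, index, out):
    """Align short reads against a prebuilt index; raises on non-zero exit."""
    subprocess.run(["soap", "-a", reads, "-D", index, "-o", out], check=True)

# Typical flow (file names are placeholders):
# build_index("ref.fa")
# run_soap_align("reads.fq", "ref.fa.index", "aligned.soap")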
Short Oligonucleotide Analysis Package
[ "Chemistry", "Biology" ]
670
[ "Bioinformatics algorithms", "Bioinformatics software", "Bioinformatics", "Molecular biology techniques", "DNA sequencing" ]
25,133,525
https://en.wikipedia.org/wiki/Catalytic%20oxidation
Catalytic oxidations are processes that rely on catalysts to introduce oxygen into organic and inorganic compounds. Many applications, including the focus of this article, involve oxidation by oxygen. Such processes are conducted on a large scale for the remediation of pollutants, production of valuable chemicals, and the production of energy. Oxidations of organic compounds Carboxylic acids, ketones, epoxides, and alcohols are often obtained by partial oxidation of alkanes and alkenes with dioxygen. These intermediates are essential to the production of consumer goods. Partial oxidation is challenging because the most favored reaction between oxygen and hydrocarbons is combustion. Oxidations of inorganic compounds Sulfuric acid is produced from sulfur trioxide, which is obtained by oxidation of sulfur dioxide. Food-grade phosphates are generated via oxidation of white phosphorus. Carbon monoxide in automobile exhaust is converted to carbon dioxide in catalytic converters. Examples Industrially important examples include both inorganic and organic substrates. Catalysts Oxidation catalysis is conducted by both heterogeneous catalysis and homogeneous catalysis. In the heterogeneous processes, gaseous substrate and oxygen (or air) are passed over solid catalysts. Typical catalysts are platinum, and redox-active oxides of iron, vanadium, and molybdenum. In many cases, catalysts are modified with a host of additives or promoters that enhance rates or selectivities. Important homogeneous catalysts for the oxidation of organic compounds are carboxylates of cobalt, iron, and manganese. To confer good solubility in the organic solvent, these catalysts are often derived from naphthenic acids and ethylhexanoic acid, which are highly lipophilic. These catalysts initiate radical chain reactions (autoxidation) that produce organic radicals, which combine with oxygen to give hydroperoxide intermediates. Generally, the selectivity of oxidation is determined by bond energies. For example, benzylic C-H bonds are replaced by oxygen faster than aromatic C-H bonds. Fine chemicals Many selective oxidation catalysts have been developed for producing fine chemicals of pharmaceutical or academic interest. Nobel Prize–winning examples are the Sharpless epoxidation and the Sharpless dihydroxylation. Biological catalysis Catalytic oxidations are common in biology, especially since aerobic life subsists on energy obtained by oxidation of organic compounds by air. In contrast to the industrial processes, which are optimized for producing chemical compounds, energy-producing biological oxidations are optimized to produce energy. Many metalloenzymes mediate these reactions. Fuel cells, etc Fuel cells rely on oxidation of organic compounds (or hydrogen) using catalysts. Catalytic heaters generate flameless heat from a supply of combustible fuel and oxygen from air as oxidant. Challenges The foremost challenge in catalytic oxidation is the conversion of methane to methanol. Most methane is stranded, i.e. not located near metropolitan areas. Consequently, it is flared (converted to carbon dioxide). One challenge is that methanol is more easily oxidized than is methane. Catalytic oxidation with oxygen or air is a major application of green chemistry. There are however many oxidations that cannot be achieved so straightforwardly. The conversion of propylene to propylene oxide is typically effected using hydrogen peroxide, not oxygen or air.
References External links https://archive.today/20130626171216/https://portal.navfac.navy.mil/portal/page/portal/NAVFAC/NAVFAC_WW_PP/NAVFAC_NFESC_PP/ENVIRONMENTAL/ERB/THERMCATOX http://www.frtr.gov/matrix2/section4/4-59.html Catalysis
Catalytic oxidation
[ "Chemistry" ]
797
[ "Catalysis", "Chemical kinetics" ]
25,134,163
https://en.wikipedia.org/wiki/Whole%20Earth%20Blazar%20Telescope
The Whole Earth Blazar Telescope (WEBT) is an international consortium of astronomers created in 1997, with the aim of studying a particular category of Active Galactic Nuclei (AGN) called blazars, which are characterized by strong and fast brightness variability, on time scales down to hours or less. This collaboration involves many telescopes observing at optical, near-infrared, and radio (millimetric and centimetric) wavelengths. Thanks to their different geographic locations all around the world, the emission variations of the pointed source can be monitored 24 hours a day, with the observing task moving from east to west as the Earth rotates. WEBT observations are often carried out in conjunction with observations at higher frequencies, from ultraviolet to gamma rays, performed by both space and ground-based telescopes. In this way, information on blazar emission over almost the whole electromagnetic spectrum can be obtained. The multi-wavelength studies performed by the WEBT have the purpose of understanding the physical mechanisms that rule the variable emission of these celestial objects. This emission mainly comes from a plasma jet pointing closely to the line of sight, and originating from a supermassive black hole located in the core of the host galaxy. Foundation The WEBT was founded in autumn 1997 by John Mattox, from the Institute of Astrophysical Research at Boston University, as a collaboration among optical observers. Three years later, in 2000, the leadership passed to Massimo Villata, from the Observatory of Turin. A constitution was issued, defining the purposes and management of the organization. Soon after, radio and near-infrared observers also joined the consortium. Observing campaigns Up to February 2009, the WEBT had organised 24 observing campaigns, with the participation of more than one hundred telescopes. Each campaign is devoted to a specific source, and is led by a Campaign Manager appointed by the President. The Campaign Manager is responsible for the observing strategy, data collection, analysis and interpretation, and finally takes care of the publication of the results. This is the list of the blazars that have been targets of WEBT campaigns: AO 0235+16 Markarian 421 S5 0716+71 BL Lacertae Markarian 501 3C 66A OJ 287 3C 454.3 3C 279 Papers After eighteen years of operations, more than 160 scientific publications have been released. The GASP On September 4, 2007, the WEBT started a new project: the GLAST-AGILE Support Program (GASP). Its aim is to provide observing support at longer wavelengths for the observations by the gamma-ray satellites GLAST (Gamma-ray Large Area Space Telescope, later renamed Fermi Gamma-ray Space Telescope in honor of the famous Italian physicist Enrico Fermi) and AGILE (Astro-rivelatore Gamma a Immagini LEggero). The GASP strategy is a long-term monitoring of selected targets, with periodic data gathering and analysis. The list of the GASP monitored blazars includes 28 bright objects: 3C 66A, AO 0235+16, PKS 0420−01, PKS 0528+134, S5 0716+71, PKS 0735+17, OJ 248, OJ 49, 4C 71.07, OJ 287, S4 0954+65, Markarian 421, 4C 29.45, ON 231, 3C 273, 3C 279, PKS 1510−08, DA 406, 4C 38.41, 3C 345, Markarian 501, 4C 51.37, 3C 371, PKS 2155−304, BL Lacertae, CTA 102, 3C 454.3 and 1ES 2344+514. References External links WEBT website Osservatorio Astronomico di Torino website BeppoSAX website INTEGRAL website XMM-Newton website FERMI website AGILE website GASP list of monitored sources Astrophysics
Whole Earth Blazar Telescope
[ "Physics", "Astronomy" ]
803
[ "Astronomical sub-disciplines", "Astrophysics" ]
25,135,282
https://en.wikipedia.org/wiki/X%20%28charge%29
In particle physics, the X charge (or simply X) is a conserved quantum number associated with the SO(10) grand unification theory. It is thought to be conserved in strong, weak, electromagnetic, gravitational, and Higgs interactions. Because the X charge is related to the weak hypercharge, it varies depending on the helicity of a particle. For example, a left-handed quark has an X charge of +1, whereas a right-handed quark can have either an X charge of −1 (for up, charm and top quarks), or −3 (for down, strange and bottom quarks). X is related to the difference between the baryon number and the lepton number (that is, B − L), and the weak hypercharge Y_W via the relation: X = 5(B − L) − 2Y_W. X charge in proton decay Proton decay is a hypothetical form of radioactive decay, predicted by many grand unification theories. During proton decay, the common baryonic proton decays into lighter subatomic particles. However, proton decay has never been experimentally observed and is predicted to be mediated by hypothetical X and Y bosons. Many protonic decay modes have been predicted, one of which is shown below: p → e⁺ + π⁰. This form of decay violates the conservation of both baryon number and lepton number; however, the X charge is conserved. Similarly, all experimentally confirmed forms of decay also conserve the X charge value. Values of X charge for known elementary particles The following table lists the X charge values for the standard model fermions and their antiparticles. Note that the CP conjugate of a fermion has the opposite X charge (e.g. vs. , = −3 vs. +3). The next table gives the X charge of the standard model bosons. Although not part of the Standard Model, the GUT X and Y bosons also have zero X charge. See also Standard Model (mathematical formulation) Noether's theorem X and Y bosons Particle physics Nuclear physics Standard Model
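As a worked check of the relation above, the sketch below evaluates X = 5(B − L) − 2Y_W for a few fermion multiplets. The hypercharge assignments assume the convention Q = T3 + Y_W/2; values for CP-conjugate fields pick up an overall sign, and assignments for right-handed down-type quarks are convention-dependent, so treat the specific inputs here as illustrative assumptions.

from fractions import Fraction as F

def x_charge(B, L, Y_W):
    """X = 5(B - L) - 2*Y_W, with Y_W in the Q = T3 + Y_W/2 convention."""
    return 5 * (B - L) - 2 * Y_W

# (baryon number, lepton number, weak hypercharge) per field
fields = {
    "left-handed quark doublet":  (F(1, 3), 0, F(1, 3)),   # expect X = +1
    "right-handed up quark":      (F(1, 3), 0, F(4, 3)),   # expect X = -1
    "left-handed lepton doublet": (0, 1, -1),               # expect X = -3
}
for name, (B, L, Y) in fields.items():
    print(f"{name}: X = {x_charge(B, L, Y)}")

The first two outputs reproduce the +1 and −1 values quoted in the text for left-handed quarks and right-handed up-type quarks.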
X (charge)
[ "Physics" ]
412
[ "Standard Model", "Particle physics stubs", "Particle physics", "Nuclear physics" ]
32,164,063
https://en.wikipedia.org/wiki/Hamiltonian%20Monte%20Carlo
The Hamiltonian Monte Carlo algorithm (originally known as hybrid Monte Carlo) is a Markov chain Monte Carlo method for obtaining a sequence of random samples whose distribution converges to a target probability distribution that is difficult to sample directly. This sequence can be used to estimate integrals of the target distribution, such as expected values and moments. Hamiltonian Monte Carlo corresponds to an instance of the Metropolis–Hastings algorithm, with a Hamiltonian dynamics evolution simulated using a time-reversible and volume-preserving numerical integrator (typically the leapfrog integrator) to propose a move to a new point in the state space. Compared to using a Gaussian random walk proposal distribution in the Metropolis–Hastings algorithm, Hamiltonian Monte Carlo reduces the correlation between successive sampled states by proposing moves to distant states which maintain a high probability of acceptance due to the approximate energy conserving properties of the simulated Hamiltonian dynamics when using a symplectic integrator. The reduced correlation means fewer Markov chain samples are needed to approximate integrals with respect to the target probability distribution for a given Monte Carlo error. The algorithm was originally proposed by Simon Duane, Anthony Kennedy, Brian Pendleton and Duncan Roweth in 1987 for calculations in lattice quantum chromodynamics. In 1996, Radford M. Neal showed how the method could be used for a broader class of statistical problems, in particular artificial neural networks. However, the burden of having to provide gradients of the Bayesian network delayed the wider adoption of the algorithm in statistics and other quantitative disciplines, until in the mid-2010s the developers of Stan implemented HMC in combination with automatic differentiation. Algorithm Suppose the target distribution to sample is f(x) for x ∈ R^d and a chain of samples X_0, X_1, X_2, … is required. Hamilton's equations are dx_i/dt = ∂H/∂p_i and dp_i/dt = −∂H/∂x_i, where x_i and p_i are the i-th components of the position and momentum vectors respectively and H is the Hamiltonian. Let M be a mass matrix which is symmetric and positive definite; then the Hamiltonian is H(x, p) = U(x) + (1/2) pᵀM⁻¹p, where U(x) is the potential energy. The potential energy for a target f(x) is given as U(x) = −ln f(x), which comes from the Boltzmann factor. Note that the Hamiltonian is dimensionless in this formulation because the exponential probability weight exp(−H) has to be well defined. For example, in simulations at finite temperature T the factor kT (with the Boltzmann constant k) is directly absorbed into U and p. The algorithm requires a positive integer L for the number of leapfrog steps and a positive number Δt for the step size. Suppose the chain is at X_n = x_n. Let x_n(0) = x_n. First, a random Gaussian momentum p_n(0) is drawn from N(0, M). Next, the particle runs under Hamiltonian dynamics for time LΔt; this is done by solving Hamilton's equations numerically using the leapfrog algorithm. The position and momentum vectors after time Δt using the leapfrog algorithm are: p_n(t + Δt/2) = p_n(t) − (Δt/2) ∇U(x_n(t)); x_n(t + Δt) = x_n(t) + Δt M⁻¹ p_n(t + Δt/2); p_n(t + Δt) = p_n(t + Δt/2) − (Δt/2) ∇U(x_n(t + Δt)). These equations are applied to x_n(0) and p_n(0) a total of L times to obtain x* = x_n(LΔt) and p* = p_n(LΔt). The leapfrog algorithm is an approximate solution to the motion of non-interacting classical particles. If exact, the solution will never change the initial randomly-generated energy distribution, as energy is conserved for each particle in the presence of a classical potential energy field.
In order to reach a thermodynamic equilibrium distribution, particles must have some sort of interaction with, for example, a surrounding heat bath, so that the entire system can take on different energies with probabilities according to the Boltzmann distribution. One way to move the system towards a thermodynamic equilibrium distribution is to change the state of the particles using the Metropolis–Hastings algorithm. So first, one applies the leapfrog step, then a Metropolis–Hastings step. The transition from X_n = x_n to X_{n+1} is: X_{n+1} = x* with probability α(x_n, x*), and X_{n+1} = x_n otherwise, where α(x_n, x*) = min(1, exp[H(x_n, p_n(0)) − H(x*, p*)]). A full update consists of first randomly sampling the momenta (independently of the previous iterations), then integrating the equations of motion (e.g. with leapfrog), and finally obtaining the new configuration from the Metropolis–Hastings accept/reject step. This updating mechanism is repeated to obtain X_{n+1}, X_{n+2}, …. No U-Turn Sampler The No U-Turn Sampler (NUTS) is an extension that controls the number of leapfrog steps L automatically. Tuning L is critical. For example, in the one dimensional case, the potential U(x) = x²/2 corresponds to the potential of a simple harmonic oscillator. For LΔt too large, the particle will oscillate and thus waste computational time. For LΔt too small, the particle will behave like a random walk. Loosely, NUTS runs the Hamiltonian dynamics both forwards and backwards in time randomly until a U-Turn condition is satisfied. When that happens, a random point from the path is chosen for the MCMC sample and the process is repeated from that new point. In detail, a binary tree is constructed to trace the path of the leapfrog steps. To produce an MCMC sample, an iterative procedure is conducted. A slice variable u_n ~ Uniform(0, exp[−H(x_n, p_n(0))]) is sampled. Let x⁺ and p⁺ be the position and momentum of the forward particle respectively. Similarly, x⁻ and p⁻ for the backward particle. In each iteration, the binary tree selects at random uniformly whether to move the forward particle forwards in time or the backward particle backwards in time. Also, for each iteration, the number of leapfrog steps increases by a factor of 2. For example, in the first iteration, the forward particle moves forwards in time using 1 leapfrog step. In the next iteration, the backward particle moves backwards in time using 2 leapfrog steps. The iterative procedure continues until the U-Turn condition is met, that is (x⁺ − x⁻)·p⁻ < 0 or (x⁺ − x⁻)·p⁺ < 0, or when the Hamiltonian becomes inaccurate, H(x, p) + ln u_n > Δ_max, where, for example, Δ_max = 1000. Once the U-Turn condition is met, the next MCMC sample x_{n+1} is obtained by sampling uniformly from the leapfrog path traced out by the binary tree, among the points which satisfy u_n < exp[−H(x, p)]. This is usually satisfied if the remaining HMC parameters are sensible. See also Dynamic Monte Carlo method Software for Monte Carlo molecular modeling Stan, a probabilistic programming language implementing HMC. PyMC, a probabilistic programming language implementing HMC. Metropolis-adjusted Langevin algorithm References Further reading External links Hamiltonian Monte Carlo from scratch Optimization and Monte Carlo Methods Monte Carlo methods Markov chain Monte Carlo
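The leapfrog-plus-Metropolis scheme described above fits in a short program. The following is a minimal sketch assuming a standard normal target with identity mass matrix; the step size and trajectory length are illustrative choices, not values prescribed by the algorithm.

import numpy as np

def U(x):
    """Potential energy U(x) = -ln f(x) for a standard normal target."""
    return 0.5 * np.dot(x, x)

def grad_U(x):
    """Gradient of U for the standard normal target."""
    return x

def leapfrog(x, p, step, n_steps):
    """Integrate Hamilton's equations with the leapfrog scheme (M = identity)."""
    p = p - 0.5 * step * grad_U(x)        # initial half step in momentum
    for _ in range(n_steps - 1):
        x = x + step * p                  # full step in position
        p = p - step * grad_U(x)          # full step in momentum
    x = x + step * p
    p = p - 0.5 * step * grad_U(x)        # final half step in momentum
    return x, p

def hmc(n_samples, dim=2, step=0.2, n_steps=20, seed=0):
    rng = np.random.default_rng(seed)
    samples = np.empty((n_samples, dim))
    x = np.zeros(dim)
    for i in range(n_samples):
        p = rng.standard_normal(dim)      # fresh Gaussian momentum each update
        h_old = U(x) + 0.5 * np.dot(p, p)
        x_new, p_new = leapfrog(x, p, step, n_steps)
        h_new = U(x_new) + 0.5 * np.dot(p_new, p_new)
        # Metropolis-Hastings accept/reject on the change in Hamiltonian
        if rng.random() < np.exp(h_old - h_new):
            x = x_new
        samples[i] = x
    return samples

print(hmc(5000).mean(axis=0))             # close to zero for this target

Because the leapfrog integrator nearly conserves H, the acceptance rate stays high even for distant proposals, which is the point of the method.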
Hamiltonian Monte Carlo
[ "Physics" ]
1,218
[ "Monte Carlo methods", "Computational physics" ]
32,164,549
https://en.wikipedia.org/wiki/Thermodynamik%20chemischer%20Vorg%C3%A4nge
In the history of thermodynamics, Thermodynamik chemischer Vorgänge (Thermodynamics of chemical processes) is a sequence of three papers (1882–1883) written by German physicist Hermann von Helmholtz. It is one of the founding papers in thermodynamics, along with Josiah Willard Gibbs's 1876 paper "On the Equilibrium of Heterogeneous Substances". Together they form the foundation of chemical thermodynamics as well as a large part of physical chemistry. It was published in three parts in Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften zu Berlin ["Proceedings of the Royal Prussian Academy of Sciences"], and is available on HathiTrust and in the online archive of the Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften zu Berlin. First part: Die Thermodynamik chemischer Vorgänge, submitted on 2 February 1882. Second part: Zur Thermodynamik chemischer Vorgänge, subtitled Folgerungen die galvanische Polarisation betreffend ["Conclusions concerning galvanic polarization"]. Submitted on 27 July 1882. (Note that the word "Die" was changed to "Zur" in the later parts, which may cause bibliographical confusion.) Third part: Zur Thermodynamik chemischer Vorgänge, subtitled Versuche an Chlorzink-Kalomel-Elementen ["Experiments on zinc chloride calomel elements"]. Submitted on 31 May 1883. References Thermodynamics Historical physics publications Physics papers
Thermodynamik chemischer Vorgänge
[ "Physics", "Chemistry", "Mathematics" ]
356
[ "Thermodynamics", "Dynamical systems" ]
32,168,542
https://en.wikipedia.org/wiki/Institute%20of%20Space%20and%20Planetary%20Astrophysics
The Institute of Space and Planetary Astrophysics, also known by its abbreviation ISPA, is a premier national research institute of the University of Karachi, engaged in theoretical and applied studies and research into topics pertaining to Astronomy, Astrophysics, Satellite Communication, Space Flight Dynamics, Atmospheric Science, Climatology, GIS & Remote Sensing and other related subjects. The institute has a network of mathematics and physics laboratories located in various universities of Pakistan, and it operates the Karachi University Astrophysics Observatory. History The institute was established by Karachi University's Department of Mathematics in 1994 as an autonomous research institute. The idea of a space science and astrophysics programme with an astronomical observatory was conceived right at the inception of the Karachi University; however, the first concrete step came in 1960, when Professor A.B.A Haleem, Professor of Political Science and Vice-Chancellor of the Karachi University, procured a powerful astronomical telescope with relevant equipment from a West German company. Known as the Karachi University Astrophysics Observatory, an astronomical observatory was initiated under the administrative supervision of the Department of Mathematics of the Karachi University. Later, the Federal Government of Pakistan built a building to house the observatory, which initially belonged to the Department of Mathematics. In 1994, the building and the observatory became part of the ISPA, with Prof. Dr. Jawaid Quamar as its first founding director. Prof. Dr. Jawaid Quamar is credited with leading the institute at an academic and international level: he attracted many students to do their research under his supervision, and he also introduced new programmes in the relevant field in the country. The new metallic dome of the observatory was evidence of Dr. Quamar's hard work and dedication. After the retirement of Dr. Quamar, Mr. M. Shahid Qureshi took charge of the institute in 2002. He regularized the academic programs at ISPA and led the institute till 2010. Mr. Qureshi earned his PhD from the same institute under the supervision of Prof. Dr. Nasiruddin Khan (of the Department of Mathematics, UOK). In fact, Dr. Qureshi was the third PhD of the institute, the first being Prof. Dr. M. Ayub Khan Yousufzai and the second Prof. Dr. M. Jawed Iqbal (the current Director of ISPA). On June 4, 2015, a brief inaugural session took place at ISPA where Prof. Dr. Jawaid Quamar inaugurated the ISPA Seminar Library in the memory of Prof. Dr. Irshad Ahmed Khan Afridi, who was involved in the foundation of ISPA. The new name of the ISPA seminar library (Dr. Irshad Ahmed Khan Afridi Seminar Library) was approved by the Syndicate of Karachi University. Academic programmes The ISPA offers undergraduate and graduate (including doctoral-level) programmes in particle physics, theoretical physics, astrophysics, plasma physics, and mathematics. Undergraduate programme (Master of Science (M.Sc.)) — the institute offers a two-year programme. The undergraduate programmes are taught together with advanced courses of engineering. The disciplines in which the programmes are offered by the institute are listed below: Computational Physics, Engineering Physics, Physics, Astronomy, Electrical physics, Mathematics, Applied Mathematics, Applied mechanics, Astrochemistry, Chemistry, and Computer Science.
Master's and post-research programme (Master of Science (honors), Master of Physics, and/or Master of Philosophy) — the institute offers a two-year programme in the disciplines listed below: Mathematics, Applied Mathematics, Computational Physics, Particle physics, Electrical physics, Quantum Physics, Solid State Physics, Theoretical Chemistry, Astrochemistry, Astrophysics, Chemistry, Biophysics, and Computer science. Post-doctoral research and Doctoral programme (Doctor of Philosophy and Doctor of Science) — the institute manages post-doctoral research and doctoral programmes that are expected to be completed in 2–5 years. The degrees are awarded upon publication of a thesis supervised by the institute. The doctoral programmes offered are listed below: Nuclear physics, Particle Physics, Theoretical physics, Mathematical physics, Laser physics, Atomic physics, AMO physics, Mathematics, Radiophysics, Astrophysics, Cosmology, Nuclear chemistry, Analytical chemistry, Mathematical chemistry and Computer science. References Research institutes in Pakistan Space programme of Pakistan Astrophysics research institutes University of Karachi
Institute of Space and Planetary Astrophysics
[ "Physics" ]
903
[ "Astrophysics research institutes", "Astrophysics" ]
32,170,500
https://en.wikipedia.org/wiki/BPS%20domain
In molecular biology, the BPS (Between PH and SH2) domain is a protein domain of approximately 45 amino acids found in the adaptor proteins Grb7, Grb10 and Grb14. It mediates inhibition of the tyrosine kinase domain of the insulin receptor by binding of the N-terminal portion of the BPS domain to the substrate peptide groove of the kinase, acting as a pseudosubstrate inhibitor. It is composed of two beta strands and a C-terminal helix. References Protein domains
BPS domain
[ "Chemistry", "Biology" ]
110
[ "Biochemistry stubs", "Protein stubs", "Protein domains", "Protein classification" ]
32,172,638
https://en.wikipedia.org/wiki/Kepler%27s%20equation
In orbital mechanics, Kepler's equation relates various geometric properties of the orbit of a body subject to a central force. It was derived by Johannes Kepler in 1609 in Chapter 60 of his Astronomia nova, and in book V of his Epitome of Copernican Astronomy (1621) Kepler proposed an iterative solution to the equation. This equation and its solution, however, first appeared in a 9th-century work by Habash al-Hasib al-Marwazi, which dealt with problems of parallax. The equation has played an important role in the history of both physics and mathematics, particularly classical celestial mechanics. Equation Kepler's equation is M = E − e sin E, where M is the mean anomaly, E is the eccentric anomaly, and e is the eccentricity. The eccentric anomaly E is useful to compute the position of a point moving in a Keplerian orbit. For instance, if the body passes the periastron at coordinates x = a(1 − e), y = 0, at time t = t₀, then to find out the position of the body at any time, you first calculate the mean anomaly M from the time t and the mean motion n by the formula M = n(t − t₀), then solve the Kepler equation above to get E, then get the coordinates from x = a(cos E − e) and y = b sin E, where a is the semi-major axis and b the semi-minor axis. Kepler's equation is a transcendental equation because sine is a transcendental function, and it cannot be solved for E algebraically. Numerical analysis and series expansions are generally required to evaluate E. Alternate forms There are several forms of Kepler's equation. Each form is associated with a specific type of orbit. The standard Kepler equation is used for elliptic orbits (0 ≤ e < 1). The hyperbolic Kepler equation is used for hyperbolic trajectories (e > 1). The radial Kepler equation is used for linear (radial) trajectories (e = 1). Barker's equation is used for parabolic trajectories (e = 1). When e = 0, the orbit is circular. Increasing e causes the circle to become elliptical. When e = 1, there are four possibilities: a parabolic trajectory, a trajectory that goes back and forth along a line segment from the centre of attraction to a point at some distance away, a trajectory going in or out along an infinite ray emanating from the centre of attraction, with its speed going to zero with distance, or a trajectory along a ray but with speed not going to zero with distance. A value of e slightly above 1 results in a hyperbolic orbit with a turning angle of just under 180 degrees. Further increases reduce the turning angle, and as e goes to infinity, the orbit becomes a straight line of infinite length. Hyperbolic Kepler equation The hyperbolic Kepler equation is: M = e sinh H − H, where H is the hyperbolic eccentric anomaly. This equation is derived by redefining M to be the square root of −1 times the right-hand side of the elliptical equation, M = i(E − e sin E) (in which E is now imaginary), and then replacing E by iH. Radial Kepler equations The radial Kepler equation for the case where the object does not have enough energy to escape is: t(x) = sin⁻¹(√x) − √(x(1 − x)), where t is proportional to time and x is proportional to the distance from the centre of attraction along the ray and attains the value 1 at the maximum distance. This equation is derived by multiplying Kepler's equation by 1/2 and setting e to 1: t(x) = (1/2)[E − sin E], and then making the substitution E = 2 sin⁻¹(√x). The radial equation for when the object has enough energy to escape is: t(x) = √(x(1 + x)) − sinh⁻¹(√x). When the energy is exactly the minimum amount needed to escape, then the time is simply proportional to the distance to the power 3/2. Inverse problem Calculating M for a given value of E is straightforward. However, solving for E when M is given can be considerably more challenging. There is no closed-form solution.
Solving for E is more or less equivalent to solving for the true anomaly, or the difference between the true anomaly and the mean anomaly, which is called the "Equation of the center". One can write an infinite series expression for the solution to Kepler's equation using Lagrange inversion, but the series does not converge for all combinations of e and M (see below). Confusion over the solvability of Kepler's equation has persisted in the literature for four centuries. Kepler himself expressed doubt at the possibility of finding a general solution: The Fourier series expansion (with respect to M) using Bessel functions is E = M + Σ_{m=1}^{∞} (2/m) J_m(me) sin(mM). With respect to e, it is a Kapteyn series. Inverse Kepler equation The inverse Kepler equation is the solution of Kepler's equation for all real values of M: Evaluating this yields: These series can be reproduced in Mathematica with the InverseSeries operation. InverseSeries[Series[M - Sin[M], {M, 0, 10}]] InverseSeries[Series[M - e Sin[M], {M, 0, 10}]] These functions are simple Maclaurin series. Such Taylor series representations of transcendental functions are considered to be definitions of those functions. Therefore, this solution is a formal definition of the inverse Kepler equation. However, E is not an entire function of M at a given non-zero e. Indeed, the derivative dM/dE = 1 − e cos E goes to zero at an infinite set of complex numbers when e < 1, the nearest to zero being at E = ±i cosh⁻¹(1/e), and M = ±i(cosh⁻¹(1/e) − √(1 − e²)) at these two points (where inverse cosh is taken to be positive), and E goes to infinity at these values of M. This means that the radius of convergence of the Maclaurin series is cosh⁻¹(1/e) − √(1 − e²) and the series will not converge for values of M larger than this. The series can also be used for the hyperbolic case, in which case the radius of convergence is √(e² − 1) − cos⁻¹(1/e). The series for e = 1 converges when M < 2π. While this solution is the simplest in a certain mathematical sense, other solutions are preferable for most applications. Alternatively, Kepler's equation can be solved numerically. The solution for e = 1 was found by Karl Stumpff in 1968, but its significance wasn't recognized. One can also write a Maclaurin series in e. This series does not converge when e is larger than the Laplace limit (about 0.66), regardless of the value of M (unless M is a multiple of 2π), but it converges for all M if e is less than the Laplace limit. The coefficients in the series, other than the first (which is simply M), depend on M in a periodic way with period 2π. Inverse radial Kepler equation The inverse radial Kepler equation (e = 1) for the case in which the object does not have enough energy to escape can similarly be written as: Evaluating this yields: To obtain this result using Mathematica: InverseSeries[Series[ArcSin[Sqrt[t]] - Sqrt[(1 - t) t], {t, 0, 15}]] Numerical approximation of inverse problem Newton's method For most applications, the inverse problem can be computed numerically by finding the root of the function f(E) = E − e sin E − M(t). This can be done iteratively via Newton's method: E_{n+1} = E_n − f(E_n)/f′(E_n) = E_n − (E_n − e sin E_n − M(t))/(1 − e cos E_n). Note that E and M are in units of radians in this computation. This iteration is repeated until desired accuracy is obtained (e.g. when f(E) < desired accuracy). For most elliptical orbits an initial value of E₀ = M(t) is sufficient. For orbits with e > 0.8, an initial value of E₀ = π can be used. Numerous works developed accurate (but also more complex) start guesses. If e is identically 1, then the derivative of f, which is in the denominator of Newton's method, can get close to zero, making derivative-based methods such as Newton–Raphson, secant, or regula falsi numerically unstable.
In that case, the bisection method will provide guaranteed convergence, particularly since the solution can be bounded in a small initial interval. On modern computers, it is possible to achieve 4 or 5 digits of accuracy in 17 to 18 iterations. A similar approach can be used for the hyperbolic form of Kepler's equation. In the case of a parabolic trajectory, Barker's equation is used. Fixed-point iteration A related method starts by noting that E = M + e sin E. Repeatedly substituting the expression on the right for the E on the right yields a simple fixed-point iteration algorithm for evaluating E(e, M). This method is identical to Kepler's 1621 solution. In pseudocode:

function E(e, M, n)
    E = M
    for k = 1 to n
        E = M + e*sin E
    next k
    return E

The number of iterations, n, depends on the value of e. The hyperbolic form similarly has H = e sinh H − M. This method is related to the Newton's method solution above in that E_{n+1} = E_n − (E_n − e sin E_n − M)/(1 − e cos E_n) reduces, to first order in the small quantities E_n − E and e, to E_{n+1} ≈ M + e sin E_n. See also Equation of the center Kepler's laws of planetary motion Kepler problem Kepler problem in general relativity Radial trajectory References External links Kepler's Equation at Wolfram Mathworld Eponymous equations of physics Johannes Kepler Orbits
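As a concrete illustration of the Newton iteration described above, here is a minimal sketch in Python. The starting-guess switch at e > 0.8 follows the rule of thumb quoted in the text; the tolerance, iteration cap, and test values are arbitrary choices for the example.

import math

def eccentric_anomaly(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) for E (radians) by Newton's method."""
    E = math.pi if e > 0.8 else M          # initial guess E0
    for _ in range(max_iter):
        f = E - e * math.sin(E) - M        # residual f(E)
        f_prime = 1.0 - e * math.cos(E)    # derivative f'(E)
        delta = f / f_prime                # Newton update
        E -= delta
        if abs(delta) < tol:
            return E
    raise RuntimeError("Newton iteration did not converge")

E = eccentric_anomaly(M=1.0, e=0.5)
print(E, E - 0.5 * math.sin(E))            # the second value recovers M = 1.0

For e very close to 1, f'(E) = 1 - e*cos(E) can approach zero near E = 0, which is exactly the instability the text warns about; a bisection fallback bracketing E in [0, 2π] would restore guaranteed convergence there.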
Kepler's equation
[ "Physics" ]
1,743
[ "Eponymous equations of physics", "Equations of physics" ]
32,174,553
https://en.wikipedia.org/wiki/Physics%20of%20failure
Physics of failure is a technique under the practice of reliability design that leverages the knowledge and understanding of the processes and mechanisms that induce failure to predict reliability and improve product performance. Other definitions of Physics of Failure include: A science-based approach to reliability that uses modeling and simulation to design-in reliability. It helps to understand system performance and reduce decision risk during design and after the equipment is fielded. This approach models the root causes of failure such as fatigue, fracture, wear, and corrosion. An approach to the design and development of reliable product to prevent failure, based on the knowledge of root cause failure mechanisms. The Physics of Failure (PoF) concept is based on the understanding of the relationships between requirements and the physical characteristics of the product and their variation in the manufacturing processes, and the reaction of product elements and materials to loads (stressors) and interaction under loads and their influence on the fitness for use with respect to the use conditions and time. Overview The concept of Physics of Failure, also known as Reliability Physics, involves the use of degradation algorithms that describe how physical, chemical, mechanical, thermal, or electrical mechanisms evolve over time and eventually induce failure. While the concept of Physics of Failure is common in many structural fields, the specific branding evolved from an attempt to better predict the reliability of early generation electronic parts and systems. The beginning Within the electronics industry, the major driver for the implementation of Physics of Failure was the poor performance of military weapon systems during World War II. During the subsequent decade, the United States Department of Defense funded an extensive amount of effort to especially improve the reliability of electronics, with the initial efforts focused on after-the-fact or statistical methodology. Unfortunately, the rapid evolution of electronics, with new designs, new materials, and new manufacturing processes, tended to quickly negate approaches and predictions derived from older technology. In addition, the statistical approach tended to lead to expensive and time-consuming testing. The need for different approaches led to the birth of Physics of Failure at the Rome Air Development Center (RADC). Under the auspices of the RADC, the first Physics of Failure in Electronics Symposium was held in September 1962. The goal of the program was to relate the fundamental physical and chemical behavior of materials to reliability parameters. Early history – integrated circuits The initial focus of physics of failure techniques tended to be limited to degradation mechanisms in integrated circuits. This was primarily because the rapid evolution of the technology created a need to capture and predict performance several generations ahead of existing product. One of the first major successes under predictive physics of failure was a formula developed by James Black of Motorola to describe the behavior of electromigration. Electromigration occurs when collisions of electrons cause metal atoms in a conductor to dislodge and move downstream of current flow (proportional to current density). 
Black used this knowledge, in combination with experimental findings, to describe the mean time to failure (MTTF) due to electromigration as MTTF = A J^(−n) exp(Ea / kT), where A is a constant based on the cross-sectional area of the interconnect, J is the current density, Ea is the activation energy (e.g. 0.7 eV for grain boundary diffusion in aluminum), k is the Boltzmann constant, T is the temperature and n is a scaling factor (usually set to 2 according to Black). Physics of failure is typically designed to predict wearout, or an increasing failure rate, but this initial success by Black focused on predicting behavior during operational life, or a constant failure rate. This is because electromigration in traces can be designed out by following design rules, while electromigration at vias is primarily an interfacial effect, which tends to be defect or process-driven. Leveraging this success, additional physics-of-failure based algorithms have been derived for the three other major degradation mechanisms (time dependent dielectric breakdown [TDDB], hot carrier injection [HCI], and negative bias temperature instability [NBTI]) in modern integrated circuits (equations shown below). More recent work has attempted to aggregate these discrete algorithms into a system-level prediction. TDDB: τ = τo(T) exp[G(T)/εox], where τo(T) = exp(−Ea / kT), G(T) = 120 + 5.8/kT, and εox is the permittivity. HCI: λHCI = A3 exp(−β/VD) exp(−Ea / kT), where λHCI is the failure rate of HCI, A3 is an empirical fitting parameter, β is an empirical fitting parameter, VD is the drain voltage, Ea is the activation energy of HCI, typically −0.2 to −0.1 eV, k is the Boltzmann constant, and T is absolute temperature. NBTI: λ = A εox^m VT μp exp(−Ea / kT), where A is determined empirically by normalizing the above equation, m = 2.9, VT is the thermal voltage, μp is the surface mobility constant, Ea is the activation energy of NBTI, k is the Boltzmann constant, and T is the absolute temperature. Next stage – electronic packaging The resources and successes with integrated circuits, and a review of some of the drivers of field failures, subsequently motivated the reliability physics community to initiate physics of failure investigations into package-level degradation mechanisms. An extensive amount of work was performed to develop algorithms that could accurately predict the reliability of interconnects. Specific interconnects of interest resided at 1st level (wire bonds, solder bumps, die attach), 2nd level (solder joints), and 3rd level (plated through holes). Just as the integrated circuit community had four major successes with physics of failure at the die-level, the component packaging community had four major successes arise from their work in the 1970s and 1980s. These were Peck: Predicts time to failure of wire bond / bond pad connections when exposed to elevated temperature / humidity, where A is a constant, RH is the relative humidity, f(V) is a voltage function (often cited as voltage squared), Ea is the activation energy, kB is the Boltzmann constant, and T is absolute temperature. Engelmaier: Predicts time to failure of solder joints exposed to temperature cycling, where εf is a fatigue ductility coefficient, c is a time and temperature dependent constant, F is an empirical constant, LD is the distance from the neutral point, α is the coefficient of thermal expansion, ΔT is the change in temperature, and h is solder joint thickness.
Steinberg: Predicts time to failure of solder joints exposed to vibration where Z is maximum displacement, PSD is the power spectral density (g2/Hz), fn is the natural frequency of the CCA, Q is transmissibility (assumed to be square root of natural frequency), Zc is the critical displacement (20 million cycles to failure), B is the length of PCB edge parallel to component located at the center of the board, c is a component packaging constant, h is PCB thickness, r is a relative position factor, and L is component length. IPC-TR-579: Predicts time to failure of plated through holes exposed to temperature cycling where a is coefficient of thermal expansion (CTE), T is temperature, E is elastic modules, h is board thickness, d is hole diameter, t is plating thickness, and E and Cu label corresponding board and copper properties, respectively, Su being the ultimate tensile strength and Df being ductility of the plated copper, and De is the strain range. Each of the equations above uses a combination of knowledge of the degradation mechanisms and test experience to develop first-order equations that allow the design or reliability engineer to be able to predict time to failure behavior based on information on the design architecture, materials, and environment. Recent work More recent work in the area of physics of failure has been focused on predicting the time to failure of new materials (i.e., lead-free solder, high-K dielectric ), software programs, using the algorithms for prognostic purposes, and integrating physics of failure predictions into system-level reliability calculations. Limitations There are some limitations with the use of physics of failure in design assessments and reliability prediction. The first is physics of failure algorithms typically assume a 'perfect design'. Attempting to understand the influence of defects can be challenging and often leads to Physics of Failure (PoF) predictions limited to end of life behavior (as opposed to infant mortality or useful operating life). In addition, some companies have so many use environments (think personal computers) that performing a PoF assessment for each potential combination of temperature / vibration / humidity / power cycling / etc. would be onerous and potentially of limited value. See also List of finite element software packages Critical plane analysis Maintainability References Mechanical failure
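As a small worked example of Black's equation in the form given above, the sketch below computes MTTF = A * J**(-n) * exp(Ea / (k*T)) and shows the derating effect of halving the current density. The numerical inputs (A, current density, activation energy, temperature) are illustrative placeholders, not data from any qualification report.

import math

K_BOLTZMANN = 8.617e-5        # Boltzmann constant in eV/K

def black_mttf(A, J, Ea, T, n=2.0):
    """Mean time to failure from Black's equation; units follow A and J."""
    return A * J**(-n) * math.exp(Ea / (K_BOLTZMANN * T))

# Halving the current density at 378 K quadruples MTTF when n = 2:
base = black_mttf(A=1e3, J=1e6, Ea=0.7, T=378.0)
derated = black_mttf(A=1e3, J=5e5, Ea=0.7, T=378.0)
print(derated / base)          # -> 4.0

The same Arrhenius factor exp(Ea/kT) appears in the TDDB, HCI, and NBTI expressions above, which is why temperature derating is a common lever across all four die-level mechanisms.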
Physics of failure
[ "Materials_science", "Engineering" ]
1,829
[ "Mechanical failure", "Materials science", "Mechanical engineering" ]
37,811,993
https://en.wikipedia.org/wiki/DNA%20polymerase%20III%2C%20delta%20subunit
In molecular biology, the δ (delta) subunit of DNA polymerase III is encoded by the holA gene in E. coli and other bacteria. Along with the γ, δ', χ, and ψ subunits, the δ subunit makes up the clamp-loader complex, which, together with the β sliding-clamp accessory proteins, is responsible for the high speed and processivity of Pol III. References Bacterial proteins Protein families DNA replication
DNA polymerase III, delta subunit
[ "Chemistry", "Biology" ]
83
[ "Genetics techniques", "Protein classification", "Molecular biology stubs", "DNA replication", "Molecular genetics", "Molecular biology", "Protein families" ]
37,812,027
https://en.wikipedia.org/wiki/HolB
In E. coli and other bacteria, holB is a gene that encodes the delta prime subunit of DNA polymerase III. References Bacterial proteins DNA replication
HolB
[ "Chemistry", "Biology" ]
33
[ "Genetics techniques", "Molecular biology stubs", "DNA replication", "Molecular genetics", "Molecular biology" ]
37,812,045
https://en.wikipedia.org/wiki/HolC
In E. coli and other bacteria, holC is a gene that encodes the chi subunit of DNA polymerase III. References Bacterial proteins DNA replication
HolC
[ "Chemistry", "Biology" ]
32
[ "Genetics techniques", "Molecular biology stubs", "DNA replication", "Molecular genetics", "Molecular biology" ]
37,812,076
https://en.wikipedia.org/wiki/HolD
In E. coli and other bacteria, holD is a gene that encodes the psi subunit of DNA polymerase III. References Bacterial proteins DNA replication
HolD
[ "Chemistry", "Biology" ]
31
[ "Genetics techniques", "Molecular biology stubs", "DNA replication", "Molecular genetics", "Molecular biology" ]
37,812,104
https://en.wikipedia.org/wiki/HolE
In E. coli and other bacteria, holE is a gene that encodes the theta subunit of DNA polymerase III. References Bacterial proteins DNA replication
HolE
[ "Chemistry", "Biology" ]
31
[ "Genetics techniques", "Molecular biology stubs", "DNA replication", "Molecular genetics", "Molecular biology" ]
37,815,827
https://en.wikipedia.org/wiki/Astrostatistics
Astrostatistics is a discipline which spans astrophysics, statistical analysis and data mining. It is used to process the vast amount of data produced by automated scanning of the cosmos, to characterize complex datasets, and to link astronomical data to astrophysical theory. Many branches of statistics are involved in astronomical analysis including nonparametrics, multivariate regression and multivariate classification, time series analysis, and especially Bayesian inference. The field is closely related to astroinformatics. References Astrophysics Applied statistics Data mining Machine learning
Astrostatistics
[ "Physics", "Astronomy", "Mathematics", "Engineering" ]
111
[ "Machine learning", "Applied mathematics", "Astrophysics", "Artificial intelligence engineering", "Applied statistics", "Astronomical sub-disciplines" ]
37,816,349
https://en.wikipedia.org/wiki/CFD%20in%20buildings
CFD stands for computational fluid dynamics (and heat transfer). In this technique, the governing differential equations of a flow system or thermal system are known in the form of the Navier–Stokes equations, the thermal energy equation and the species equation with an appropriate equation of state. In the past few years, CFD has been playing an increasingly important role in building design, following its continuing development for over a quarter of a century. The information provided by CFD can be used to analyse the impact of building exhausts on the environment, to predict smoke and fire risks in buildings, to quantify indoor environment quality, and to design natural ventilation systems. Applications CFD now finds very wide application in different areas of science and engineering; some examples are: Aerodynamics of aircraft and vehicles: lift and drag. Hydrodynamics of ships. Power plants: combustion in internal combustion engines and gas turbines. Turbomachinery: flows inside rotating passages, diffusers etc. Electrical and electronics engineering: cooling of equipment including microcircuits. Chemical process engineering: mixing and separation and polymer moulding. Marine engineering: loads on off-shore structures. Environmental engineering: distribution of pollutants and effluents. Hydrology and oceanography: flows in rivers, estuaries and oceans. Meteorology: weather prediction. Biomedical engineering: blood flows through arteries and veins. External and internal environment of buildings: wind loading, ventilation analysis and heating/cooling load calculations. Previously, most building-related issues such as ventilation analysis, wind loading, wind environment etc. were examined using wind tunnel tests, but today all these tests can be done effectively with CFD. CFD can resolve all of the above-mentioned issues in a relatively short time period, and it is both more economical and a more powerful approach than the older experimental one. Currently, Computational Fluid Dynamics is used as a sophisticated airflow modeling method and can be used to predict airflow, heat transfer and contaminant transportation in and around buildings. CFD plays an important role in building design, in designing a thermally comfortable, healthy and energy-efficient building. CFD can examine the effectiveness and efficiency of various heating, ventilation and air conditioning (HVAC) systems by easily changing the type and location of different components, supply air conditions and system control schedules. Furthermore, CFD helps in developing passive heating/cooling/ventilation strategies (e.g. natural ventilation) by modelling and optimizing building site-plans and indoor layouts. Globally, the building sector is the source of approximately 40% of total energy consumption. In the present era, there is a large gap between energy consumption and energy production. As the building sector makes up a huge share of the total consumption, it is essential to investigate the optimum configuration for buildings to reduce the buildings' energy usage. In order to achieve this, CFD can play an important role. Building performance simulation (BPS) and CFD programs are important building design tools which are used for the evaluation of building performance, including thermal comfort, indoor air quality, mechanical system efficiency and energy consumption. CFD in buildings is mainly used for one or more of the following purposes: Thermal analysis: through walls, roof and floor of buildings. Ventilation analysis.
Orientation, site and location selection of buildings based on local geographical and environmental conditions. Thermal analysis In buildings, heat transfer takes place in all its modes, i.e. conduction, convection and radiation. In order to reduce heat losses from buildings, CFD analysis can be done for the optimum configuration of composite walls, roof and floor. The differential form of the general transport equation is as follows: ∂(ρφ)/∂t + div(ρφu) = div(Γ grad φ) + Sφ (1). The numerical solution of the above equation can be obtained by the finite difference method (FDM), the finite volume method (FVM) or the finite element method (FEM). In buildings, for heat transfer analysis, the scalar function φ in equation (1) is replaced by the temperature T, the diffusion coefficient Γ is replaced by the thermal conductivity k, and the source term is replaced by a heat generation term ė, by any heat radiation source, or by both (depending upon the nature of the source available); there are different forms of the equation for different cases. For simplicity and easy understanding, only 1-dimensional cases have been discussed. In buildings the heat transfer analysis can be done for all parts of a building (walls, roof and floor) in the following two ways: Steady State Thermal Analysis; Transient Thermal Analysis. Steady state thermal analysis The steady state thermal analysis consists of the following types of governing differential equations. Case-1: General steady state heat conduction equation. For this case the governing differential equation (GDE) (1) becomes: ρc_p u (dT/dx) = d/dx(k dT/dx) + ė. Case-2: Steady state heat conduction equation (no heat generation). For this case the governing differential equation (GDE) (1) becomes: ρc_p u (dT/dx) = d/dx(k dT/dx). Case-3: Steady state heat conduction equation (no heat generation and no convection). For this case the governing differential equation (GDE) (1) becomes: d/dx(k dT/dx) = 0. Transient thermal analysis The transient thermal analysis consists of the following types of governing differential equations. Case-1: Transient heat conduction. For this case the governing differential equation (GDE) (1) becomes: ρc_p(∂T/∂t + u ∂T/∂x) = ∂/∂x(k ∂T/∂x) + ė. Case-2: Transient heat conduction (no heat generation). For this case the governing differential equation (GDE) (1) becomes: ρc_p(∂T/∂t + u ∂T/∂x) = ∂/∂x(k ∂T/∂x). Case-3: Transient heat conduction (no heat generation and no convection). For this case the governing differential equation (GDE) (1) becomes: ρc_p ∂T/∂t = ∂/∂x(k ∂T/∂x). We can solve the above-mentioned governing differential equations (GDEs) using the CFD technique. Ventilation analysis The ventilation study in buildings is done to find a thermally comfortable environment with acceptable indoor air quality by regulating indoor air parameters (air temperature, relative humidity, air speed, and chemical species concentrations in the air). CFD plays an important role in regulating the indoor air parameters to predict the ventilation performance in buildings. The ventilation performance prediction provides information regarding indoor air parameters in a room or a building even before the construction of the building. These air parameters are crucial for designing a comfortable indoor environment as well as a good integration of the building into the outdoor environment. This is because the design of appropriate ventilation systems and the development of control strategies need detailed information regarding the following parameters: Airflow; Contaminant dispersion; Temperature distribution. The aforesaid information is also useful for an architect to design the building configuration.
Over the last three decades the CFD technique has been used widely and with considerable success in building studies, and ventilation and its related fields have recently become a substantial part of wind engineering. A ventilation study can be done using wind tunnel investigation (experimentally) or by CFD modelling (theoretically). A natural ventilation system may be preferred over a forced ventilation system in some applications, as it eliminates or reduces the mechanical ventilation equipment, which may provide both fan-energy and first-cost savings. With the development of many CFD packages and other building performance simulation software, it has become easier to assess the feasibility of natural or forced ventilation systems in a building. CFD analysis can also be more informative than the experimental approach, because further relations among the variables can be examined in post-processing. The data obtained either experimentally or numerically are useful in two ways:
Better comfort for the user
They provide input data for the heat balance calculation of the building

Orientation, site, and location selection

Earlier, the choice of dwelling location depended on the need for water, so most developments started in valley areas. In the present era, due to advancements in science and technology, it has become easier to select building orientation, site and location based on local geographical and environmental conditions. In the selection of a building site and location, wind loading plays an important role. For example, when two buildings stand side by side with a gap between them, wind blowing around the ends of the buildings is also forced through the gap: the gap carries the combined flow around each building, so the velocity increases as the air travels through it, at the expense of a pressure loss. As a result, pressure builds up at the entrance to the gap, which leads to higher wind loads on the sides of the buildings. When wind blows over the face of a high-rise building, a vortex is created by the downward flow on the front face. The wind speed in the reverse direction near ground level may reach 140% of the reference wind speed, which can cause severe damage (especially to the roof of a building). Such damage to buildings can be prevented if the effects of wind loading are considered at an early stage of a building's design. In the past, wind loading effects were determined by wind tunnel tests but, today, these tests can be successfully simulated through CFD analysis.

It is becoming increasingly important to provide pleasant building environments. Architects and wind engineers are often asked to review the design (orientation, site, location and gaps between the surrounding buildings) in the formative planning stage of construction. Using CFD analysis, it is possible to obtain the information (local wind velocity, convective coefficients, and solar radiation intensity) needed for optimal orientation, site and location selection of buildings.

CFD approach for heat transfer analysis in buildings

The CFD technique can be used for the analysis of heat transfer in each part of a building. It finds the solution in the following two steps:
Discretization of the governing differential equation using numerical methods (the finite difference method is discussed here).
Solution of the discretized version of the equation on high-performance computers.
Discretization of the governing differential equations for steady state heat transfer analysis

Consider a building having a plane wall of thickness L, heat generation e and constant thermal conductivity k. The wall is subdivided into M equal regions of thickness Δx = L/M in the x-direction, and the divisions between the regions are selected as nodes. The whole domain of the wall in the x-direction is thus divided into elements; all interior elements have the same size, while the exterior elements are half-size. To obtain the FDM solution for the interior nodes, consider the element represented by the node m, which is surrounded by the neighbouring nodes m-1 and m+1. The FDM technique presumes that temperature varies linearly within each element. The FDM solution, for all interior nodes (i.e. all nodes except node 0 and the last node), is

(T_{m-1} - 2T_m + T_{m+1})/Δx² + e/k = 0

Boundary conditions

The above equation is valid for interior nodes only. To obtain the solution for the exterior nodes, the applicable boundary conditions must be applied. They are as follows.

1. Specified heat flux boundary condition:

q_0 + k (T_1 - T_0)/Δx + e (Δx/2) = 0

When the boundary is insulated (q_0 = 0):

k (T_1 - T_0)/Δx + e (Δx/2) = 0

2. Convective boundary condition:

h (T_∞ - T_0) + k (T_1 - T_0)/Δx + e (Δx/2) = 0

3. Radiation boundary condition:

ε σ (T_surr⁴ - T_0⁴) + k (T_1 - T_0)/Δx + e (Δx/2) = 0

4. Combined convective and radiation boundary condition:

h (T_∞ - T_0) + ε σ (T_surr⁴ - T_0⁴) + k (T_1 - T_0)/Δx + e (Δx/2) = 0

or, when the radiation and convection heat transfer coefficients are combined into a single coefficient h_comb, the above equation becomes

h_comb (T_∞ - T_0) + k (T_1 - T_0)/Δx + e (Δx/2) = 0

5. Combined convective, radiation and heat flux boundary condition:

q_0 + h (T_∞ - T_0) + ε σ (T_surr⁴ - T_0⁴) + k (T_1 - T_0)/Δx + e (Δx/2) = 0

6. Interface boundary condition: when there is an interface between layers of different thermo-physical properties (as in composite walls), the two solid media A and B are assumed to be in perfect contact and thus have the same temperature at the interface node m:

k_A (T_{m-1} - T_m)/Δx + k_B (T_{m+1} - T_m)/Δx + e_A (Δx/2) + e_B (Δx/2) = 0

In the above equations, q_0 denotes the specified heat flux (in W/m²), h the convective heat transfer coefficient, h_comb the combined convective and radiation heat transfer coefficient, ε the emissivity, σ the Stefan–Boltzmann constant, T_surr the temperature of the surrounding surfaces, T_∞ the ambient temperature, and T_0 the temperature at the boundary node. Note: for the interior side of the wall the suitable boundary condition from the above can be applied (as applicable); in that case T_∞ is replaced by the room temperature and T_0 by the temperature of the last node, T_M.

Discretization of the governing differential equations for transient heat transfer analysis

Transient thermal analysis is more important than steady state analysis, as it includes ambient conditions that vary with time. In transient heat conduction, the temperature changes with time as well as position, so the finite difference solution requires discretization in time in addition to space. For the transient FDM formulation of 1-D conduction in a plane wall, the explicit solution of equation (1) is

(T_{m-1}^i - 2T_m^i + T_{m+1}^i)/Δx² + e/k = (1/α) (T_m^{i+1} - T_m^i)/Δt

The above equation can be solved explicitly for the new temperature to give

T_m^{i+1} = τ (T_{m-1}^i + T_{m+1}^i) + (1 - 2τ) T_m^i + τ e Δx²/k

where τ = α Δt/Δx² is the cell Fourier number, α = k/(ρ c_p) is the thermal diffusivity, c_p is the specific heat at constant pressure, Δt is the time step, Δx is the space step, and the superscript i counts time levels. The above equation is valid for all interior nodes; to find the relations for the first and last nodes, apply boundary conditions (as applicable) as discussed for the steady state heat transfer analysis.
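To make the discretization above concrete, the sketch below assembles and solves the steady-state nodal equations for a wall with convective boundaries on both faces, then marches the explicit transient scheme. It is a minimal illustration only: the wall thickness, material properties and boundary coefficients are assumed example values, not data from the text, and the explicit scheme is stable only for a cell Fourier number τ ≤ 1/2 (a standard result, quoted here without derivation).

import numpy as np

# Illustrative, assumed parameters (not taken from the text above).
L = 0.20                          # wall thickness, m
M = 20                            # number of equal regions -> M+1 nodes
k = 0.8                           # thermal conductivity, W/(m K)
e = 0.0                           # volumetric heat generation, W/m^3
rho, cp = 1800.0, 840.0           # density, kg/m^3, and specific heat, J/(kg K)
h_out, T_amb = 20.0, 273.15       # outdoor convective coefficient and temperature
h_in, T_room = 8.0, 293.15        # indoor convective coefficient and temperature
dx = L / M
alpha = k / (rho * cp)            # thermal diffusivity

def steady_state():
    """Solve A T = b: interior nodes (T[m-1] - 2 T[m] + T[m+1])/dx^2 + e/k = 0,
    plus convective energy balances on the two half-size boundary elements."""
    A = np.zeros((M + 1, M + 1))
    b = np.zeros(M + 1)
    for m in range(1, M):
        A[m, m - 1], A[m, m], A[m, m + 1] = 1.0, -2.0, 1.0
        b[m] = -e * dx**2 / k
    # Node 0: h_out (T_amb - T0) + k (T1 - T0)/dx + e dx/2 = 0
    A[0, 0], A[0, 1] = -(h_out + k / dx), k / dx
    b[0] = -h_out * T_amb - e * dx / 2.0
    # Node M: h_in (T_room - TM) + k (T[M-1] - TM)/dx + e dx/2 = 0
    A[M, M], A[M, M - 1] = -(h_in + k / dx), k / dx
    b[M] = -h_in * T_room - e * dx / 2.0
    return np.linalg.solve(A, b)

def transient(T, dt, steps):
    """Explicit scheme: T[m] <- tau (T[m-1] + T[m+1]) + (1 - 2 tau) T[m] + tau e dx^2/k."""
    tau = alpha * dt / dx**2      # cell Fourier number
    assert tau <= 0.5, "explicit scheme unstable: reduce dt or coarsen the grid"
    for _ in range(steps):
        Tn = T.copy()
        Tn[1:M] = tau * (T[:M - 1] + T[2:]) + (1 - 2 * tau) * T[1:M] + tau * e * dx**2 / k
        # Explicit energy balances on the half-size convective boundary elements
        # (heat generation at the faces omitted for brevity; e = 0 here anyway).
        Tn[0] = T[0] + 2 * tau * (T[1] - T[0]) + 2 * tau * (h_out * dx / k) * (T_amb - T[0])
        Tn[M] = T[M] + 2 * tau * (T[M - 1] - T[M]) + 2 * tau * (h_in * dx / k) * (T_room - T[M])
        T = Tn
    return T

T_ss = steady_state()
print("steady-state face temperatures: %.2f K (out), %.2f K (in)" % (T_ss[0], T_ss[-1]))
T = transient(np.full(M + 1, T_amb), dt=1.0, steps=3600)   # one hour from a cold start
print("after 1 h:", T[[0, M // 2, M]])

The assert checks the interior-node stability limit; nodes carrying a convection term have a slightly stricter limit, so in practice τ is kept comfortably below 1/2.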
For a convective & radiation boundary if solar radiation data \, in () is available and absorptivity-transmissivity constant K is known, the relation for temperature is obtained as follows; Note: the thermal analysis for the roof and floor of a building can be done in same way, as discussed for walls.\\ See also Computational fluid dynamics Natural ventilation JPMorgan Chase Tower (Houston) Dynamic insulation Thermal management of high-power LEDs Vented balance safety enclosure Different types of boundary conditions in fluid dynamics Wind tunnel Greenhouse References Computational fluid dynamics
CFD in buildings
[ "Physics", "Chemistry" ]
2,721
[ "Computational fluid dynamics", "Fluid dynamics", "Computational physics" ]
37,821,607
https://en.wikipedia.org/wiki/Field-induced%20polymer%20electroluminescent%20technology
Field-induced polymer electroluminescent (FIPEL) technology is a low-power electroluminescent light source. Three layers of moldable light-emitting polymer blended with a small amount of carbon nanotubes glow when an alternating current is passed through them. The technology can produce white light similar to that of the Sun, or other tints if desired. It is also more efficient than compact fluorescent lamps in terms of the energy required to produce light. As reported by the Carroll Research Group at Wake Forest University, "To date our brightest device – without output couplers – exceeds 18,000 cd/m2." This suggests that FIPEL technology is a viable solution for area lighting.

FIPEL lights differ from LED lighting in that there is no junction. Instead, the light-emitting component is a layer of polymer containing an iridium compound which is doped with multi-wall carbon nanotubes. This planar light-emitting structure is energized by an AC field from insulated electrodes. The lights can be shaped into many different forms, from mimicking conventional light bulbs to unusual forms such as 2-foot-by-4-foot flat sheets and straight or bent tubes.

The technology was developed by a team headed by Dr. David Carroll of Wake Forest University in Winston-Salem, North Carolina.

Energy-saving lighting
Electrical engineering
Nanoelectronics
Luminescence
Field-induced polymer electroluminescent technology
[ "Chemistry", "Materials_science", "Engineering" ]
291
[ "Luminescence", "Molecular physics", "Nanoelectronics", "Electrical engineering", "Nanotechnology" ]
42,000,479
https://en.wikipedia.org/wiki/Valleytronics
Valleytronics (from valley and electronics) is an experimental area in semiconductors that exploits local extrema ("valleys") in the electronic band structure. Certain semiconductors have multiple "valleys" in the electronic band structure of the first Brillouin zone, and are known as multivalley semiconductors. Valleytronics is the technology of control over the valley degree of freedom, a local maximum/minimum on the valence/conduction band, of such multivalley semiconductors.

Details

The term was coined in analogy to spintronics. While in spintronics the internal degree of freedom of spin is harnessed to store, manipulate and read out bits of information, the proposal for valleytronics is to perform similar tasks using the multiple extrema of the band structure, so that the information of 0s and 1s would be stored as different discrete values of the crystal momentum.

Valleytronics may also refer to other forms of quantum manipulation of valleys in semiconductors, including quantum computation with valley-based qubits, valley blockade and other forms of quantum electronics. The first experimental evidence of valley blockade, predicted in Ref. (which completes the set with Coulomb charge blockade and Pauli spin blockade), has been observed in a single-atom doped silicon transistor. Several theoretical proposals have been made and experiments performed in a variety of systems, such as graphene, few-layer phosphorene, some transition metal dichalcogenide monolayers, diamond, bismuth, silicon, carbon nanotubes, aluminium arsenide and silicene.

References

External links

Matthew Francis: "Experiments hint at a new type of electronics: valleytronics" at Ars Technica
Source of the above: Zeng, H., Dai, J., Yao, W., et al. "Valley polarization in MoS2 monolayers by optical pumping". Nature Nanotechnology, 7, 490–493 (August 2012).

Quantum mechanics
Semiconductors
Valleytronics
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
408
[ "Matter", "Physical quantities", "Semiconductors", "Theoretical physics", "Quantum mechanics", "Materials", "Electronic engineering", "Condensed matter physics", "Solid state engineering", "Electrical resistance and conductance" ]
42,003,835
https://en.wikipedia.org/wiki/Tango%20%28platform%29
Tango (named Project Tango while in testing) was an augmented reality computing platform, developed and authored by the Advanced Technology and Projects (ATAP), a skunkworks division of Google. It used computer vision to enable mobile devices, such as smartphones and tablets, to detect their position relative to the world around them without using GPS or other external signals. This allowed application developers to create user experiences that include indoor navigation, 3D mapping, physical space measurement, environmental recognition, augmented reality, and windows into a virtual world. The first product to emerge from ATAP, Tango was developed by a team led by computer scientist Johnny Lee, a core contributor to Microsoft's Kinect. In an interview in June 2015, Lee said, "We're developing the hardware and software technologies to help everything and everyone understand precisely where they are, anywhere." Google produced two devices to demonstrate the Tango technology: the Peanut phone and the Yellowstone 7-inch tablet. More than 3,000 of these devices had been sold as of June 2015, chiefly to researchers and software developers interested in building applications for the platform. In the summer of 2015, Qualcomm and Intel both announced that they were developing Tango reference devices as models for device manufacturers who use their mobile chipsets. At CES, in January 2016, Google announced a partnership with Lenovo to release a consumer smartphone during the summer of 2016 to feature Tango technology marketed at consumers, noting a less than $500 price-point and a small form factor below 6.5 inches. At the same time, both companies also announced an application incubator to get applications developed to be on the device on launch. On 15 December 2017, Google announced that they would be ending support for Tango on March 1, 2018, in favor of ARCore. Overview Tango was different from other contemporary 3D-sensing computer vision products, in that it was designed to run on a standalone mobile phone or tablet and was chiefly concerned with determining the device's position and orientation within the environment. The software worked by integrating three types of functionality: Motion-tracking: using visual features of the environment, in combination with accelerometer and gyroscope data, to closely track the device's movements in space Area learning: storing environment data in a map that can be re-used later, shared with other Tango devices, and enhanced with metadata such as notes, instructions, or points of interest Depth perception: detecting distances, sizes, and surfaces in the environment Together, these generate data about the device in "six degrees of freedom" (3 axes of orientation plus 3 axes of position) and detailed three-dimensional information about the environment. Project Tango was also the first project to graduate from Google X in 2012 Applications on mobile devices use Tango's C and Java APIs to access this data in real time. In addition, an API was also provided for integrating Tango with the Unity game engine; this enabled the conversion or creation of games that allow the user to interact and navigate in the game space by moving and rotating a Tango device in real space. These APIs were documented on the Google developer website. Applications Tango enabled apps to track a device's position and orientation within a detailed 3D environment, and to recognize known environments. 
This allowed the creation of applications such as in-store navigation, visual measurement and mapping utilities, presentation and design tools, and a variety of immersive games. At Augmented World Expo 2015, Johnny Lee demonstrated a construction game that builds a virtual structure in real space, an AR showroom app that allows users to view a full-size virtual automobile and customize its features, a hybrid Nerf gun with a mounted Tango screen for dodging and shooting AR monsters superimposed on reality, and a multiplayer VR app that lets multiple players converse in a virtual space where their avatar movements match their real-life movements. Tango apps are distributed through Google Play. Google has encouraged the development of more apps with hackathons, an app contest, and promotional discounts on the development tablet.

Devices

As a platform for software developers and a model for device manufacturers, Google created two Tango devices.

The Peanut phone

"Peanut" was the first production Tango device, released in the first quarter of 2014. It was a small Android phone with a Qualcomm MSM8974 quad-core processor and additional special hardware, including a fisheye motion camera, an "RGB-IR" camera for color imaging and infrared depth detection, and Movidius vision processing units. A high-performance accelerometer and gyroscope were added after testing several competing models in the MARS lab at the University of Minnesota.

Several hundred Peanut devices were distributed to early-access partners, including university researchers in computer vision and robotics, as well as application developers and technology startups. Google stopped supporting the Peanut device in September 2015, as by then the Tango software stack had evolved beyond the versions of Android that run on the device.

The Yellowstone tablet

"Yellowstone" was a 7-inch tablet with full Tango functionality, released in June 2014 and sold as the Project Tango Tablet Development Kit. It featured a 2.3 GHz quad-core Nvidia Tegra K1 processor, 128 GB flash memory, a 1920x1200-pixel touchscreen, a 4 MP color camera, a fisheye-lens (motion-tracking) camera, an IR projector with RGB-IR camera for integrated depth sensing, and 4G LTE connectivity. As of May 27, 2017, the Tango tablet is considered officially unsupported by Google.

Testing by NASA

In May 2014, two Peanut phones were delivered to the International Space Station to be part of a NASA project to develop autonomous robots that navigate in a variety of environments, including outer space. The soccer-ball-sized, 18-sided polyhedral SPHERES robots were developed at the NASA Ames Research Center, adjacent to the Google campus in Mountain View, California. Andres Martinez, SPHERES manager at NASA, said, "We are researching how effective [Tango's] vision-based navigation abilities are for performing localization and navigation of a mobile free flyer on ISS."

Intel RealSense smartphone

Announced at Intel's Developer Forum in August 2015, and offered to the public through a Developer Kit since January 2016, this device incorporated a RealSense ZR300 camera, which had the optical features required for Tango, such as the fisheye camera.

Lenovo Phab 2 Pro

The Lenovo Phab 2 Pro was the first commercial smartphone with the Tango technology. The device was announced at the beginning of 2016, launched in August, and available for purchase in the US in November. The Phab 2 Pro had a 6.4-inch screen, a Snapdragon 652 processor, and 64 GB of internal storage, with a rear-facing 16-megapixel camera and an 8 MP front camera.
Asus Zenfone AR Asus Zenfone AR, announced at CES 2017, was the second commercial smartphone with the Tango Technology. It ran Tango AR & Daydream VR on Snapdragon 821, with 6GB or 8GB of RAM and 128 or 256GB of internal memory depending on the configuration. See also Computer vision Vision processing unit RGB Simultaneous localization and mapping References External links Project Tango developer site (API and documentation). Project Tango developer community. Project Tango Smartphone Development Platform page at Qualcomm. "Intel Expands Developer Opportunities As Computing Expands Across All Areas of Peoples' Lives". Intel Developer Forum. 20 August 2015. "Google and Intel bring RealSense to phones with Project Tango dev kit". Engadget. 18 August 2015. "Google I/O 2015 - Project Tango - Mobile 3D tracking and perception". Johnny Lee, Google I/O 2015. YouTube. 29 May 2015. "Project Tango Concepts". Johnny Lee, YouTube. 21 April 2015. "Project Tango Tablet Teardown". iFixit. 15 August 2014. Computer vision 3D imaging Augmented reality Navigation Mobile technology Google hardware Products and services discontinued in 2018
Tango (platform)
[ "Technology", "Engineering" ]
1,622
[ "Artificial intelligence engineering", "Packaging machinery", "nan", "Computer vision" ]
42,008,420
https://en.wikipedia.org/wiki/Institute%20of%20Making
The Institute of Making is a multidisciplinary research club based at University College London. Composed of the Materials Library and the MakeSpace, its work focuses on hands-on research into materials and making from many different perspectives. Members are encouraged to "make, break, design and combine both advanced and traditional tools, techniques and materials". It was founded by directors Mark Miodownik, Zoe Laughlin and Martin Conreen in 2010 at King's College London and moved to UCL in 2012, officially opening on 16 March 2013. The institute has produced some notable projects and research in the fields of making and maker culture and sensoaesthetics. References External links The Institute of Making website Clubs and societies of University College London Materials science institutes
Institute of Making
[ "Materials_science" ]
155
[ "Materials science organizations", "Materials science institutes" ]
33,765,759
https://en.wikipedia.org/wiki/Gell-Mann%20and%20Low%20theorem
In quantum field theory, the Gell-Mann and Low theorem is a mathematical statement that allows one to relate the ground (or vacuum) state of an interacting system to the ground state of the corresponding non-interacting theory. It was proved in 1951 by Murray Gell-Mann and Francis E. Low. The theorem is useful because, among other things, by relating the ground state of the interacting theory to its non-interacting ground state, it allows one to express Green's functions (which are defined as expectation values of Heisenberg-picture fields in the interacting vacuum) as expectation values of interaction picture fields in the non-interacting vacuum. While typically applied to the ground state, the Gell-Mann and Low theorem applies to any eigenstate of the Hamiltonian. Its proof relies on the concept of starting with a non-interacting Hamiltonian and adiabatically switching on the interactions.

History

The theorem was proved first by Gell-Mann and Low in 1951, making use of the Dyson series. In 1969, Klaus Hepp provided an alternative derivation for the case where the original Hamiltonian describes free particles and the interaction is norm bounded. In 1989, G. Nenciu and G. Rasche proved it using the adiabatic theorem. A proof that does not rely on the Dyson expansion was given in 2007 by Luca Guido Molinari.

Statement of the theorem

Let |ψ_0⟩ be an eigenstate of H_0 with energy E_0, and let the 'interacting' Hamiltonian be H = H_0 + gV, where g is a coupling constant and V the interaction term. We define a Hamiltonian

H_ε = H_0 + e^{-ε|t|} gV

which effectively interpolates between H and H_0 in the limits ε → 0⁺ and |t| → ∞. Let U_{εI} denote the evolution operator in the interaction picture. The Gell-Mann and Low theorem asserts that if the limit as ε → 0⁺ of

|ψ_ε^{(±)}⟩ = U_{εI}(0, ∓∞) |ψ_0⟩ / ⟨ψ_0| U_{εI}(0, ∓∞) |ψ_0⟩

exists, then the limit states |ψ^{(±)}⟩ are eigenstates of H.

Note that when applied to, say, the ground state, the theorem does not guarantee that the evolved state will be a ground state. In other words, level crossing is not excluded.

Proof

As in the original paper, the theorem is typically proved making use of Dyson's expansion of the evolution operator. Its validity, however, extends beyond the scope of perturbation theory, as has been demonstrated by Molinari; we follow Molinari's method here. Focus on H_ε. From Schrödinger's equation for the time-evolution operator,

i ∂_t U_ε(t, t_0) = H_ε(t) U_ε(t, t_0),

and the boundary condition U_ε(t_0, t_0) = 1, we can formally write

U_ε(t, t_0) = 1 - i ∫_{t_0}^{t} H_ε(t') U_ε(t', t_0) dt'.

Focus for the moment on the case 0 ≥ t ≥ t_0. For negative times the coupling enters H_ε only through the combination g e^{εt}, so a change of variables allows derivatives with respect to ε to be traded for derivatives with respect to g and the time arguments. This result can be combined with the Schrödinger equation and its adjoint to eliminate the time derivatives. The other case of interest, t ≥ t_0 ≥ 0, can be treated in an analogous fashion and yields an additional minus sign in front of the commutator (we are not concerned here with the case where t and t_0 have mixed signs). In summary, in the limit t_0 → -∞ one obtains

iεg ∂_g U_ε(0, -∞) = H U_ε(0, -∞) - U_ε(0, -∞) H_0.

We proceed with the negative-times case. Applying this identity to |ψ_0⟩, using H_0|ψ_0⟩ = E_0|ψ_0⟩, and differentiating the definition of |ψ_ε⟩, we can eliminate the g-derivatives and find

(H - E_0 - iεg ∂_g) |ψ_ε⟩ = ΔE_ε |ψ_ε⟩, where ΔE_ε = iεg ∂_g ln⟨ψ_0| U_ε(0, -∞) |ψ_0⟩.

We can now let ε → 0: as by assumption the left-hand side remains finite, the term iεg ∂_g |ψ_ε⟩ vanishes in the limit, and we clearly see that the limit state is an eigenstate of H, with eigenvalue E = E_0 + lim_{ε→0} ΔE_ε; the proof is complete.

References

A.L. Fetter and J.D. Walecka, Quantum Theory of Many-Particle Systems, McGraw–Hill (1971).

Quantum field theory
Theorems in quantum mechanics
Gell-Mann and Low theorem
[ "Physics", "Mathematics" ]
714
[ "Quantum field theory", "Theorems in quantum mechanics", "Equations of physics", "Quantum mechanics", "Theorems in mathematical physics", "Physics theorems" ]
33,766,734
https://en.wikipedia.org/wiki/Xenon%20dichloride
Xenon dichloride (XeCl2) is a xenon compound and the only known stable chloride of xenon. The compound can be prepared by subjecting a mixture of xenon and chlorine to microwave discharges, and it can be isolated from a condensate trap. One experiment attempted to use xenon, chlorine and boron trichloride to produce XeCl2·BCl3, but generated only xenon dichloride. However, it remains doubtful whether xenon dichloride is a true compound or merely a van der Waals molecule composed of a xenon atom and a chlorine molecule connected by a secondary bond.

References

Xenon(II) compounds
Chlorides
Nonmetal halides
Van der Waals molecules
Xenon dichloride
[ "Physics", "Chemistry" ]
171
[ "Chlorides", "Inorganic compounds", "Van der Waals molecules", "Molecules", "Salts", "Inorganic compound stubs", "Matter" ]
36,366,135
https://en.wikipedia.org/wiki/Shungite
Shungite is either a diverse group of metamorphosed Precambrian rocks, all of which contain pyrobitumen, or the pyrobitumen within those rocks. It was first described from a deposit near Shunga village, in Karelia, Russia, from which it gets its name. Shungite is most widely known for pseudoscientific and quack medical claims about its uses in medicine and technology, where it is claimed to have properties ranging from nebulous health benefits to blocking 5G radiation.

Occurrence

Shungite has mainly been found in Russia. The main deposit is in the Lake Onega area of Karelia, at Zazhoginskoye, near Shunga, with another occurrence at Vozhmozero. Two other much smaller occurrences have been reported in Russia, one in Kamchatka in volcanic rocks and the other formed by the burning of spoil from a coal mine at high temperature in Chelyabinsk. Other occurrences have been described from Austria, India, the Democratic Republic of the Congo and Kazakhstan.

Terminology

The term "shungite" has evolved substantially since it was originally used in 1879 to describe a black substance with more than 98% carbon found in veins near its type locality of Shunga. More recently the term has also been used to describe a wide variety of rocks containing similar carbon layers, leading to some confusion. In scientific usage, shungite refers to a mineraloid which contains >98% carbon, and is used as a modifier to the host rock's name, e.g. "shungite-bearing dolostone". In popular usage, shungite-bearing rocks are sometimes themselves referred to as shungite. Shungite is subdivided into bright, semi-bright, semi-dull and dull varieties on the basis of its lustre. Shungite has two main modes of occurrence: disseminated within the host rock, and as apparently mobilised material. Migrated shungite, which is bright (lustrous) shungite, has been interpreted as migrated hydrocarbons and is found either as layer shungite, in layers or lenses near-conformable with the host rock layering, or as vein shungite, in cross-cutting veins. Shungite may also occur as clasts within younger sedimentary rocks.

Formation and structure

Shungite had historically been regarded as an example of abiogenic petroleum formation, but its biological origin has now been confirmed. Non-migrated shungite is found directly stratigraphically above deposits that were formed in a shallow-water carbonate shelf to non-marine evaporitic environment. The shungite-bearing sequence is thought to have been deposited during active rifting, consistent with the alkaline volcanic rocks that are found within the sequence. The organic-rich sediments were likely deposited in a brackish lagoon. The concentration of carbon indicates elevated biological productivity levels, possibly due to high levels of nutrients available from volcanic material. Shungite-bearing deposits that retain sedimentary structures are interpreted as metamorphosed oil source rocks. Some mushroom-shaped structures have been interpreted as possible mud volcanoes. Layer and vein shungite varieties, and shungite filling cavities and forming the matrix of breccias, are interpreted as migrated petroleum, now in the form of metamorphosed bitumen. Solid-bitumen shungite is predominantly amorphous, though as with many carbon deposits it contains trace amounts of carbon allotropes such as graphene sheets and fullerenes.

Shunga deposit

The Shunga deposit contains an estimated total carbon reserve of more than 250 gigatonnes.
It is found within a sequence of Palaeoproterozoic meta-sedimentary and meta-volcanic rocks that are preserved in a synform. The sequence has been dated by a gabbro intrusion, which gives a date of 1980±27 Ma, and the underlying dolomites, which give an age of 2090±70 Ma. There are nine shungite-bearing layers within the Zaonezhskaya Formation, from the middle of the preserved sequence. Of these the thickest is layer six, which is also known as the "Productive horizon", due to its concentration of shungite deposits. Four main deposits are known from the area, the Shungskoe, Maksovo, Zazhogino and Nigozero deposits. The Shungskoe deposit is the most studied and is largely depleted. Uses and pseudoscientific claims Shungite has been used since the middle of the 18th century as a pigment for paint, and is currently sold under the names "carbon black" or "shungite natural black". In the 1970s, shungite was exploited in the production of an insulating material, known as shungisite. Shungisite is prepared by heating rocks with low shungite concentrations to and is used as a low density filler. Shungite has applications in construction technologies. The presence of fullerenes has resulted in shungite being of interest to researchers as a natural reservoir, though shungite is not uniquely enriched in fullerenes compared to other carbon-rich rocks. Shungite has been used as a folk medical treatment since the early 18th century. Peter the Great set up Russia's first spa in Karelia to make use of the purported water purifying properties of shungite. He also instigated its use in providing purified water for the Russian army. Crystal healing pseudoscience proponents and 5G conspiracy theorists have erroneously claimed that shungite may remove 5G radiation from their vicinity more efficiently than any material of similar electrical conductivity would do. Many of these claims frequently focus on the reputed benefits of fullerenes contained in shungite, which are found in concentrations of 1 to 10 parts per million. Despite its purported health benefits, shungite contains toxic heavy metals such as lead and cadmium and can pose a health risk when used as an alternative medicine. See also Hydrocarbon Oil shale References Geology of European Russia Bitumen-impregnated rocks Paleoproterozoic geology
Shungite
[ "Chemistry" ]
1,258
[ "Asphalt", "Bitumen-impregnated rocks" ]
39,274,656
https://en.wikipedia.org/wiki/Adhesive%20category
In mathematics, an adhesive category is a category where pushouts of monomorphisms exist and work more or less as they do in the category of sets. An example of an adhesive category is the category of directed multigraphs, or quivers, and the theory of adhesive categories is important in the theory of graph rewriting. More precisely, an adhesive category is one where any of the following equivalent conditions hold: C has all pullbacks, it has pushouts along monomorphisms, and pushout squares of monomorphisms are also pullback squares and are stable under pullback. C has all pullbacks, it has pushouts along monomorphisms, and the latter are also (bicategorical) pushouts in the bicategory of spans in C. If C is small, we may equivalently say that C has all pullbacks, has pushouts along monomorphisms, and admits a full embedding into a Grothendieck topos preserving pullbacks and preserving pushouts of monomorphisms. References Steve Lack and Pawel Sobocinski, Adhesive categories, Basic Research in Computer Science series, BRICS RS-03-31, October 2003. Richard Garner and Steve Lack, "On the axioms for adhesive and quasiadhesive categories", Theory and Applications of Categories, Vol. 27, 2012, No. 3, pp 27–46. Steve Lack and Pawel Sobocinski, "Toposes are adhesive". Steve Lack, "An embedding theorem for adhesive categories", Theory and Applications of Categories, Vol. 25, 2011, No. 7, pp 180–188. External links Category theory
Adhesive category
[ "Mathematics" ]
346
[ "Functions and mappings", "Mathematical structures", "Category theory stubs", "Mathematical objects", "Fields of abstract algebra", "Mathematical relations", "Category theory" ]
39,274,704
https://en.wikipedia.org/wiki/Syngas%20to%20gasoline%20plus
Syngas to gasoline plus (STG+) is a thermochemical process to convert natural gas, other gaseous hydrocarbons or gasified biomass into drop-in fuels, such as gasoline, diesel fuel or jet fuel, and organic solvents.

Process chemistry

The process follows four principal steps in one continuous integrated loop, comprising four fixed-bed reactors in series, in which syngas is converted to synthetic fuels. The steps for producing high-octane synthetic gasoline are as follows:

Methanol synthesis: Syngas is fed to Reactor 1, the first of four reactors, which converts most of the syngas to methanol as it passes through the catalyst bed.
CO + 2 H2 → CH3OH
Dimethyl ether (DME) synthesis: The methanol-rich gas from Reactor 1 is next fed to Reactor 2, the second STG+ reactor. The methanol is exposed to a catalyst and much of it is converted to DME through dehydration.
2 CH3OH → CH3OCH3 + H2O
Gasoline synthesis: The Reactor 2 product gas is next fed to Reactor 3, the third reactor, which contains the catalyst for conversion of DME to hydrocarbons including paraffins (alkanes), aromatics, naphthenes (cycloalkanes) and small amounts of olefins (alkenes), typically with carbon numbers ranging from 6 to 10.
Gasoline treatment: The fourth reactor provides transalkylation and hydrogenation treatment of the products coming from Reactor 3. The treatment reduces durene/isodurene (tetramethylbenzene) and trimethylbenzene components, which have high freezing points and must be minimized in gasoline. As a result, the synthetic gasoline product has high octane and desirable viscometric properties.
Separator: Finally, the mixture from Reactor 4 is condensed to obtain gasoline. The non-condensed gas and the gasoline are separated in a conventional condenser/separator. Most of the non-condensed gas from the product separator becomes recycled gas and is sent back to the feed stream to Reactor 1, leaving the synthetic gasoline product composed of paraffins, aromatics and naphthenes.

Catalysts

The STG+ process uses standard catalysts similar to those used in other gas to liquids technologies, specifically in methanol to gasoline processes. Methanol to gasoline processes favor molecular size- and shape-selective zeolite catalysts, and the STG+ process likewise utilizes commercially available shape-selective catalysts, such as ZSM-5.

Process efficiency

According to Primus Green Energy, the STG+ process converts natural gas into 90+ octane gasoline with an overall energy efficiency of about 60%, i.e. with a 40% loss of energy.
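As a back-of-the-envelope check on the reactor chain described under Process chemistry above, the short script below chains the ideal stoichiometry of Reactors 1–3 (CO to methanol, methanol to DME, DME to CH2-equivalent gasoline-range hydrocarbons). The 100% single-pass conversion is our simplifying assumption, not a figure from Primus Green Energy; real selectivities are lower.

# Ideal mass balance through Reactors 1-3 (illustrative only; assumes
# complete conversion at every step, which real catalysts do not achieve).
M_CO, M_H2, M_CH2 = 28.01, 2.016, 14.03   # molar masses, g/mol

def ideal_yield_per_kg_co():
    n_co = 1000.0 / M_CO          # mol CO in 1 kg of feed
    n_meoh = n_co                 # Reactor 1: CO + 2 H2 -> CH3OH
    n_dme = n_meoh / 2.0          # Reactor 2: 2 CH3OH -> CH3OCH3 + H2O
    # Reactor 3: each DME carries two carbons into the (CH2)n product,
    # rejecting one H2O per DME:  CH3OCH3 -> 2 "CH2" + H2O
    m_hc = n_dme * 2.0 * M_CH2    # grams of CH2-equivalent hydrocarbons
    m_h2 = n_co * 2.0 * M_H2      # stoichiometric H2 fed to Reactor 1, grams
    return m_hc, m_h2

m_hc, m_h2 = ideal_yield_per_kg_co()
print(f"1 kg CO + {m_h2:.0f} g H2 -> at most ~{m_hc:.0f} g of hydrocarbons")

At these idealized numbers roughly half of the CO mass ends up as hydrocarbons, with the balance rejected as water in Reactors 2 and 3.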
Gasification

As is the case with other gas to liquids processes, STG+ uses as feedstock syngas produced via other technologies. This syngas can be produced through several commercially available technologies and from a wide variety of feedstocks, including natural gas, biomass and municipal solid waste. Natural gas and other methane-rich gases, including those produced from municipal waste, are converted into syngas through methane reforming technologies such as steam methane reforming and autothermal reforming. Biomass gasification technologies are less established, though several systems under development utilize fixed bed or fluidized bed reactors.

Comparison to other GTL technologies

Other technologies for syngas to liquid fuels synthesis include the Fischer–Tropsch process and the methanol to gasoline processes. Research conducted at Princeton University indicates that methanol to gasoline processes are consistently more cost-effective, both in capital cost and overall cost, than the Fischer–Tropsch process at small, medium and large scales. Preliminary studies suggest that the STG+ process is the most energy-efficient and highest-yielding methanol to gasoline process.

Fischer–Tropsch process

The primary differences between the Fischer–Tropsch process and methanol to gasoline processes such as STG+ are the catalysts used, the product types and the economics. Generally, the Fischer–Tropsch process favors unselective cobalt and iron catalysts, while methanol to gasoline technologies favor molecular size- and shape-selective zeolites. In terms of product types, Fischer–Tropsch production has been limited to linear paraffins, such as synthetic crude oil, whereas methanol to gasoline processes can produce aromatics, such as xylene and toluene, as well as naphthenes and iso-paraffins, i.e. drop-in gasoline and jet fuel. The main product of the Fischer–Tropsch process, synthetic crude oil, requires additional refining to produce fuel products such as diesel fuel or gasoline. This refining typically adds cost, causing some industry leaders to label the economics of commercial-scale Fischer–Tropsch processes as challenging.

Methanol to gasoline

The STG+ technology offers several differentiators that distinguish it from other methanol to gasoline processes. These differences include product flexibility, durene reduction, environmental footprint and capital cost. Traditional methanol to gasoline technologies produce diesel, gasoline or liquefied petroleum gas. STG+ produces gasoline, diesel, jet fuel and aromatics, depending on the catalysts used. The STG+ technology also incorporates durene reduction into its core process, meaning that the entire fuel production process requires only two steps: syngas production and gas to liquids synthesis. Other methanol to gasoline processes do not incorporate durene reduction into the core process and require an additional refining step. Due to the additional number of reactors, traditional methanol to gasoline processes include inefficiencies such as the additional cost and energy loss of condensing and evaporating the methanol prior to feeding it to the durene reduction unit. These inefficiencies can lead to a greater capital cost and environmental footprint than methanol to gasoline processes that use fewer reactors, such as STG+. The STG+ process eliminates multiple condensation and evaporation steps, converting syngas to liquid transportation fuels directly, without producing intermediate liquids. This eliminates the need to store two intermediate products, including pressure storage for liquefied petroleum gas and storage of liquid methanol. Simplifying a gas to liquids process by combining multiple steps into fewer reactors leads to increased yield and efficiency, enabling less expensive facilities that are more easily scaled.

Commercialization

The STG+ technology is currently operating at pre-commercial scale in Hillsborough, New Jersey at a plant owned by alternative fuels company Primus Green Energy. The plant produces approximately 100,000 gallons of high-quality, drop-in gasoline per year directly from natural gas. Further, the company announced the findings of an independent engineer's report prepared by E3 Consulting, which found that STG+ system and catalyst performance exceeded expectations during plant operation.
The pre-commercial demonstration plant has also achieved 720 hours of continuous operation. Primus Green Energy has announced plans to break ground on its first commercial STG+ plant in the second half of 2014, and the company has announced that this plant is expected to produce approximately 27.8 million gallons of fuel annually. In early 2014, the U.S. Patent and Trademark Office (USPTO) allowed Primus Green Energy’s patent covering its single-loop STG+ technology. See also Alternative fuel Biogasoline Biomass to liquid Fischer–Tropsch process Gas to liquids References Synthetic fuel technologies Gas technologies
Syngas to gasoline plus
[ "Chemistry" ]
1,569
[ "Petroleum technology", "Synthetic fuel technologies" ]
39,275,268
https://en.wikipedia.org/wiki/Computationally%20bounded%20adversary
In information theory, the computationally bounded adversary problem is a different way of looking at the problem of sending data over a noisy channel. In previous models the best that could be done was to ensure correct decoding for up to d/2 errors, where d was the Hamming distance of the code. The problem with this approach is that it does not take into consideration the actual amount of computing power available to the adversary; it only concerns itself with how many bits of a given code word can change and still have the message decode properly. In the computationally bounded adversary model the channel – the adversary – is restricted to only being able to perform a reasonable amount of computation to decide which bits of the code word need to change. In other words, this model does not need to consider how many errors can possibly be handled, but only how many errors could possibly be introduced given a reasonable amount of computing power on the part of the adversary. Once the channel has been given this restriction it becomes possible to construct codes that are both faster to encode and decode compared to previous methods and that can also handle a large number of errors.

Comparison to other models

Worst-case model

At first glance, the worst-case model seems intuitively ideal. The guarantee that an algorithm will succeed no matter what is, of course, highly alluring. However, it demands too much. A real-life adversary cannot spend an indefinite amount of time examining a message in order to find the one error pattern which an algorithm would struggle with. As a comparison, consider the Quicksort algorithm. In the worst-case scenario, Quicksort makes O(n²) comparisons; however, such an occurrence is rare. Quicksort almost invariably makes O(n log n) comparisons instead, and even outperforms other algorithms which can guarantee O(n log n) behavior. Suppose an adversary wishes to force the Quicksort algorithm to make O(n²) comparisons. Then he would have to search all of the n! permutations of the input string and test the algorithm on each until he found the one for which the algorithm runs significantly slower. But since this would take O(n!) time, it is clearly infeasible for an adversary to do this. Similarly, it is unreasonable to assume that an adversary for an encoding and decoding system would be able to test every single error pattern in order to find the most effective one.

Stochastic noise model

The stochastic noise model can be described as a kind of "dumb" noise model. That is to say, it does not have the adaptability to deal with "intelligent" threats. Even if the attacker is bounded, it is still possible that they might be able to overcome the stochastic model with a bit of cleverness. The stochastic model has no real way to fight against this sort of attack and as such is unsuited to dealing with the kind of "intelligent" threats that it would be preferable to have defenses against. Therefore, a computationally bounded adversarial model has been proposed as a compromise between the two. This forces one to consider that messages may be perverted in conscious, even malicious ways, but without forcing an algorithm designer to worry about rare cases which likely will never occur.

Applications

Comparison to stochastic noise channel

Since any computationally bounded adversary could in O(n) time flip a coin for each bit, it is intuitively clear that any encoding and decoding system which can work against this adversary must also work in the stochastic noise model. The converse is less simple; however, it can be shown that any system which works in the stochastic noise model can also efficiently encode and decode against a computationally bounded adversary, and only at an additional cost which is polynomial in n. The following method to accomplish this was designed by Dick Lipton:

Let E be an encoder for the stochastic noise model and D be a simple decoder for the same, each of which runs in polynomial time. Furthermore, let both the sender and receiver share some random permutation function π and a random pattern R.

For encoding:
1. Let X = E(M).
2. Let Y = π(X) ⊕ R.
3. Transmit Y.

Then for decoding:
1. Receive Y′. Compute X′ = π⁻¹(Y′ ⊕ R).
2. Calculate M = D(X′).

Similarly to the Quicksort comparison above, if the channel wants to do something smart, it must first test all the permutations. However, this is infeasible for a computationally bounded adversary, so the most it can do is add a random error pattern N. But then:

π⁻¹(Y ⊕ N ⊕ R) = π⁻¹(π(X) ⊕ N), since Y = π(X) ⊕ R by definition,
= X ⊕ π⁻¹(N), since a permutation of bit positions is linear with respect to XOR.

Since N is effectively random, π⁻¹(N) is just random noise and we can use the simple decoder D to decode the received message and get back M.
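A toy implementation may make the scheme concrete. In the sketch below a 3-fold repetition code stands in for the stochastic-model encoder E and decoder D (any code for the stochastic noise model would do); the function names, key generation and test message are illustrative choices of ours, not details of Lipton's construction.

import random

def E(msg_bits):                  # stand-in encoder: repeat each bit 3 times
    return [b for b in msg_bits for _ in range(3)]

def D(code_bits):                 # stand-in decoder: majority vote per triple
    return [int(sum(code_bits[i:i + 3]) >= 2) for i in range(0, len(code_bits), 3)]

def keygen(n, rng):
    pi = list(range(n))
    rng.shuffle(pi)               # shared secret permutation of bit positions
    R = [rng.randint(0, 1) for _ in range(n)]   # shared secret random pattern
    return pi, R

def encode(msg_bits, pi, R):
    X = E(msg_bits)
    return [X[pi[i]] ^ R[i] for i in range(len(X))]    # Y = pi(X) XOR R

def decode(Y_recv, pi, R):
    Z = [y ^ r for y, r in zip(Y_recv, R)]             # strip the pattern R
    X = [0] * len(Z)
    for i, z in enumerate(Z):                          # invert the permutation
        X[pi[i]] = z
    return D(X)

rng = random.Random(0)
msg = [1, 0, 1, 1, 0, 0, 1, 0]
pi, R = keygen(3 * len(msg), rng)
Y = encode(msg, pi, R)
Y[5] ^= 1                         # the adversary flips a bit of its choosing
assert decode(Y, pi, R) == msg    # the flip lands on a random inner position

Because the adversary sees only the transmitted word, masked by R and scrambled by π, whatever error pattern it chooses lands on effectively random positions of the inner codeword, exactly the situation the stochastic-model code is designed to handle.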
Specific applications

By assuming a computationally bounded adversary, it is possible to design a locally decodable code which is both efficient and near-optimal, with a negligible error probability. These codes are used in complexity theory for things like self-correcting computations, probabilistically checkable proof systems, and worst-case to average-case hardness reductions in the construction of pseudo-random generators. They are useful in cryptography as a result of their connection with private information retrieval protocols. They are also used in a number of database applications like fault-tolerant data storage.

Furthermore, it is possible to construct codes which surpass known bounds for worst-case codes; specifically, unique decoding at an error rate beyond the worst-case bound. This can be done by concatenating timestamped digital signatures onto messages. A computationally bounded channel cannot forge a signature; and while it may have valid past signatures, the receiver can use list decoding and select a message only if its signature has the correct timestamp.

See also
Concrete security

References

Computational complexity theory
Coding theory
Computationally bounded adversary
[ "Mathematics" ]
1,202
[ "Discrete mathematics", "Coding theory" ]
39,278,576
https://en.wikipedia.org/wiki/Sporobolomyces
Sporobolomyces is a genus of fungi in the subdivision Pucciniomycotina. Species produce both yeast states and hyphal states. The latter form teliospores from which auricularioid (tubular and laterally septate) basidia emerge, bearing basidiospores. Yeast colonies are salmon-pink to red. Sporobolomyces species occur worldwide and have been isolated (as yeasts) from a wide variety of substrates. They produce ballistoconidia that are bilaterally symmetrical, they have Coenzyme Q10 or Coenzyme Q10(H2) as their major ubiquinone, they lack xylose in whole-cell hydrolysates, and they cannot ferment sugars. One species, Sporobolomyces salmonicolor, is known to cause disease in humans. Species Molecular research, based on cladistic analysis of DNA sequences, has shown that Sporobolomyces sensu stricto is a monophyletic (natural) genus, but that many species previously placed in the genus belong elsewhere. The teleomorphic (hyphal) state was formerly referred to the genus Sporidiobolus, but, following changes to the International Code of Nomenclature for algae, fungi, and plants, the practice of giving different names to teleomorph and anamorph forms of the same fungus was discontinued, meaning that Sporidiobolus became a synonym of the earlier name Sporobolomyces. S. agrorum S. bannaensis S. beijingensis S. blumeae S. carnicolor S. cellobiolyticus S. ellipsoideus S. japonicus S. jilinensis S. johnsonii S. koalae S. longiusculus S. musae S. patagonicus S. phafii S. primogenomicus S. reniformis S. roseus S. ruberrimus S. salmoneus S. salmonicolor S. shibatanus S. sucorum References Basidiomycota genera Yeasts Sporidiobolales Taxa described in 1924
Sporobolomyces
[ "Biology" ]
448
[ "Yeasts", "Fungi" ]
39,280,133
https://en.wikipedia.org/wiki/SNED1
SNED1 (Sushi, Nidogen, and EGF-like Domains 1) is an extracellular matrix (ECM) protein expressed at low levels in a wide range of tissues. The gene encoding SNED1 is located on human chromosome 2 at locus q37.3. The corresponding mRNA, isolated from spleen, is 6,834 bp in length, and the corresponding protein is 1,413 amino acids long. The mouse ortholog of SNED1 was cloned in 2004 from the embryonic kidney by Leimester et al. SNED1 presents domains characteristic of ECM proteins, including an amino-terminal NIDO domain, several calcium-binding EGF-like domains (EGF_CA), a Sushi domain, also known as a complement control protein (CCP) domain, and three type III fibronectin (FN3) domains in the carboxy-terminal region.

Gene

Locus

SNED1 is located on the plus strand of chromosome 2 at locus 2q37.3. The RefSeq identification number is NM_001080437.3. The genomic DNA sequence of SNED1 contains 98,159 bp, and the longest spliced mRNA as predicted by AceView is 7,048 bp and contains 31 exons. There are 9 predicted splice variants of SNED1 that exhibited protein structure matches using the Phyre2 server, as discussed under "Tertiary and quaternary structure".

Common aliases

SNED1 is an acronym for Sushi, Nidogen, and EGF-like Domains 1. Obsolete aliases for SNED1 include Snep, SST3, and IRE-BP1.

Homology/evolution

Homologs and phylogeny

SNED1 is highly conserved throughout evolutionary history, and this conservation is seen across vertebrates including fish, reptiles, amphibians, birds, and mammals. It is unclear whether SNED1 is conserved in invertebrates, but protein domains found in SNED1 are also found in invertebrates. It may be worth noting that the abundance of cysteine residues, mostly located within EGF-like domains where they form disulfide bonds, appears to be very highly conserved, suggesting that the cysteine richness is a very important feature of this protein.

Paralogs

SNED1 has several paralogs within the human genome, which cover small portions of the entire peptide sequence. Genes encoding proteins sharing domains (EGF-like, Sushi) with SNED1 include the neurogenic locus notch homolog (NOTCH) proteins, the jagged proteins, eyes shut homolog proteins, the crumbs homolog proteins, delta and notch-like epidermal growth factor receptors, the sushi von Willebrand factor A protein (SVEP1), and the slit homolog three protein.

Protein

Primary sequence

The protein knowledge database UniProt reports that the full-length SNED1 protein is 1,413 amino acids long (UniProt Q8TER0). The full sequence obtained by an NCBI BLAST search can be accessed with the reference ID NP_001073906.1. One presumably important feature of this protein is that it is extraordinarily cysteine rich, with 107 cysteines in total, giving an overall cysteine composition of 13.2%.

Domains and motifs

SNED1 is a secreted protein of the extracellular matrix. It contains a signal peptide (amino acids 1–24) directing the protein to the secretory pathway. Precise predictions of domain boundaries can be obtained using the InterPro domain database or SMART. There are various interesting domains in this protein. The first is the NIDO domain, also found in the nidogen-1 protein, also known as entactin. Other than SNED1, this domain is shared with only four human proteins: the basement membrane proteins nidogen-1, nidogen-2, and alpha-tectorin; and mucin-4, which has been demonstrated to play a role in promoting pancreatic cancer metastasis.
The second regions of interest are the calcium-binding EGF domains (EGF_CA). There are many of these domains in the sequence, and they are present in a large number of membrane-bound and extracellular proteins. These EGF_CA domains may suggest a "sticky" nature for this protein, as extracellular matrix (ECM) proteins often require calcium cations to form homo- and hetero-dimeric complexes with other ECM proteins. The Sushi domain, or complement control protein (CCP) motif, has been identified in many proteins involved in the complement system. Other aliases for this domain include short consensus repeats (SCRs) and the Sushi domain, from which the protein gets its name. The presence of the fibronectin type III (FN3) domains may suggest that one of the properties of this protein is involvement in cell adhesion. SNED1 contains an RGD and an LDV sequence, important in the binding of ECM proteins to integrins, proteins found in cell membranes that mediate cell–ECM interactions.

Post-translational modifications

13 N-glycosylation sites are predicted in the sequence of SNED1, and the presence of N-linked sites has been determined experimentally. SNED1 also has several predicted attachment sites for O-linked glycans and glycosaminoglycans, but these have not yet been validated experimentally. Only a few kinase-dependent phosphorylation sites received a score of >0.8 from the NetPhosK program in the ExPASy Bioinformatics suite of proteomics tools; all of these sites are predicted to be phosphorylated by either protein kinase A (PKA) or protein kinase C (PKC). Experimental evidence exists for phosphorylation at 12 residues: 5 serine, 5 threonine, and 2 tyrosine residues.

Secondary structure

The amino acid sequence of the longest variant is extraordinarily cysteine rich, presumably resulting in a large amount of disulfide bond formation. The percentage of intrinsic disorder of processed human SNED1 (residues 25–1413) predicted by IUPred2A is 15.3%. A large proportion of random coil (73%) was predicted in SNED1, together with 26% β-strands and 1% helix, the latter corresponding to a sequence found in the amino-terminal region of SNED1.

Tertiary and quaternary structure

The program Phyre2 was used to construct structure predictions of both the conserved domain regions NIDO, CCP, and FN3, and each of the splice variants. There were some interesting results consistent with the proposed function of an extracellular "sticky" protein possibly involved in cell–cell adhesion or in clotting. Protein matches found in Phyre2 comprise an array of proteins with functions in clotting, hydrolysis, plasminogen activation, hormone/growth factor signaling, protein binding, cell adhesion, and the ECM. Splice variants a, b, and e have >99% structural similarity to the protein neurexin 1-alpha (NRXN1). Neurexins are cell adhesion molecules and often contain EGF-binding domains, enhancing the formation of intercellular junctions. NRXN1 is also proposed to play a role in angiogenesis.
Alpha-neurexins interact with neurexophilins and possibly function in the synaptic junctions of the vertebrate nervous system. Alpha-neurexins often utilize alternate promoters and splice sites, resulting in many different transcripts from one gene, which may explain this gene's abundance of alternative transcripts. Splice variant d has a 100% structural match to low-density lipoprotein receptor-related protein 4 (LRP4). This protein is involved in SOST-mediated inhibition of bone formation and in inhibition of Wnt signaling. LRP4 plays an important role in the formation of neuromuscular junctions. Splice variants f and g have >99% similarity to fibrillin-1, an ECM protein that is a structural component of calcium-binding microfibrils. Splice variant i and the conserved domain CCP are >99% structurally similar to t-plasminogen activator (PLAT). PLAT is secreted by vascular endothelial cells and acts as a serine protease that converts plasminogen to plasmin. Plasmin is a fibrinolytic enzyme that aids in the breakdown of blood clots and is used clinically for that exact purpose. The conserved domain NIDO was >99% similar to coagulation factor IX, also known as Factor IX (F9). F9 is a secreted coagulation factor involved in the clotting cascade that requires activation by multiple other coagulation factors within the cascade. The 3 consecutive conserved FN3 domains together are 100% similar, with 100% coverage, to anosmin-1. Anosmin-1 is an ECM glycoprotein responsible for normal neural development of the brain, spinal cord and kidney.

Interacting proteins

Computational prediction by several databases, focusing on secreted proteins and membrane proteins, resulted in the prediction of 114 unique interactions by at least one algorithm, including SNED1 auto-interaction. More than half of the protein partners of SNED1 were annotated as membrane proteins in UniProtKB. 47 extracellular proteins were identified as SNED1 binding partners, including 30 core matrisome proteins, 10 matrisome-associated proteins, and seven secreted proteins. Among the 30 matrisome proteins are 6 collagens: COL6A3, found in basement membranes and other ECMs; COL7A1; and the fibril-associated collagens with interrupted triple helices (FACITs), all containing a thrombospondin domain (COL12A1, COL14A1, COL16A1, COL20A1); as well as a number of ECM glycoproteins: 4 tenascins (TNC, TNN, TNR, and TNXB), fibronectin (FN1), the latent TGFβ-binding protein 2 (LTBP2), and the basement membrane glycoproteins nidogens 1 and 2.

Independently, the STRING Known and Predicted Protein Interaction database was used to determine proteins that may be interacting, and the following proteins were candidates for interaction: somatostatin (SST), somatostatin receptor 2 (SSTR2) as well as a variety of other somatostatin receptors, spermine synthase (SMS), and TMEM132C. All of the somatostatin-related proteins are involved in the inhibition of hormones. Very little is known about TMEM132C, and all publications related to the protein are mass genome screens. The protein expression profiles of TMEM132C and SNED1 are very similar, with protein abundance found in blood plasma, platelets, and liver. All of the interacting proteins described are expressed in these three common areas.

Expression

SNED1 is ubiquitously expressed at low to intermediate levels in adult tissues, making it unclear from RNA expression profiles which cells secrete SNED1 in tissues.
Experimental data obtained in mice have shown that the Sned1 promoter is broadly active during embryogenesis, particularly in the limb buds, tail, sclerotome, vertebrae and ribs, lung, kidney, adrenal gland, cerebellum, choroid plexus, and head mesenchyme. The protein expression profiles of SNED1 predicted with MOPED (Multi-Omics Profiling Expression Database) and PaxDb (Protein Abundance Across Organisms) indicate that the protein is found in blood serum, blood plasma, blood T-lymphocytes, platelets, kidney HEK-293 cells, liver, and at low levels in the brain. Transcript variants The program AceView was used to predict transcript variants, shown in Figure 6. There are nine spliced forms and three unspliced forms. Three of the transcript variants, b, c, and e, contain green regions that represent upstream open reading frames (uORFs), which indicates that they contain regulatory elements within the coding region of the transcript. All of the spliced transcript variants a–i were analyzed with the Phyre2 server to predict protein structure (see "Tertiary and quaternary structure" above). The existence of the splice variants has not yet been validated experimentally. Promoter The promoter was predicted and analyzed for transcription factor binding sites using the ElDorado software in the Genomatix software suite. There were alternative promoters downstream of the selected 845 bp promoter. Transcription factors Several transcription factors were found with a matrix similarity of 1.00 and the entire binding domain matched in the ElDorado-predicted promoter. Protein functions and clinical significance Select cases on NCBI's GEO Profiles highlight some clinically relevant expression data regarding SNED1 expression levels in response to certain conditions. In aldosterone-producing adenoma versus control lung tissue, SNED1 expression decreased about 25-fold in the adenoma tissue. In a development study on the transition from oligodendrocyte precursors to mature oligodendrocytes, expression decreased almost 100-fold upon differentiation into mature oligodendrocytes. It may be interesting to explore the expression in clotting disorders or other blood-related diseases. A seminal study published in 2014 demonstrated that SNED1 is a promoter of breast cancer metastasis. The recent generation of a Sned1 knockout mouse model is also shedding light on the multiple roles of SNED1 in development and physiology. The global Sned1 knockout leads to early postnatal lethality and severe craniofacial and skeletal anomalies, indicating that Sned1 is an essential gene. References Extracellular matrix proteins Glycoproteins
SNED1
[ "Chemistry" ]
3,042
[ "Glycoproteins", "Glycobiology" ]
39,283,803
https://en.wikipedia.org/wiki/Faraday%20Medal%20%28electrochemistry%29
The Faraday Medal is awarded by the Electrochemistry Group of the Royal Society of Chemistry. Since 1977, it honours distinguished mid-career electrochemists working outside of the United Kingdom and the Republic of Ireland for their research advancements. Laureates Source: RIC 1977 Veniamin Grigorievich Levich (1917–1987) 1981 John O’M. Bockris 1983 Jean-Michel Savéant 1985 Michel Armand 1987 Heinz Gerischer (1919–1994) 1991 David A. J. Rand, CSIRO Division of Mineral Chemistry, Port Melbourne 1994 Stanley Bruckenstein, University at Buffalo 1995 Michael J. Weaver (1947–2002), Purdue University 1996 Adam Heller, University of Texas 1998 Wolf Vielstich, Universität Bonn 1999 Philippe Allongue, CNRS 2000 Alan Maxwell Bond (b. 1946), Monash University 2001 Michael Grätzel, École polytechnique fédérale de Lausanne 2002 Henry S. White, University of Utah 2003 (1942–2011), Universität Ulm 2004 Daniel A. Scherson, Case Western Reserve University 2005 Robert Mark Wightman, University of North Carolina 2006 Hubert H. Girault, École polytechnique fédérale de Lausanne 2007 Christian Amatore, CNRS 2008 Nathan Lewis, California Institute of Technology 2009 Reginald M. Penner, University of California, Irvine 2011 Héctor D. Abruña, Cornell University 2012 Zhong-Qun Tian, Xiamen University 2013 Nenad Markovic 2014 Masatoshi Osawa, Hokkaido University 2015 Richard M. Crooks, University of Texas at Austin 2016 Justin Gooding, University of New South Wales, Australia 2017 Marc Koper, Leiden University 2018 Yang Shao-Horn, MIT 2019 Martin Winter, Westfälische Wilhelms-Universität Münster 2020 Shirley Meng, University of California, San Diego 2021 Peter Strasser, Technische Universität Berlin 2022 Beatriz Roldán Cuenya, Fritz-Haber-Institute, Berlin See also List of chemistry awards References 1977 establishments in the United Kingdom Awards of the Royal Society of Chemistry Electrochemistry
Faraday Medal (electrochemistry)
[ "Chemistry" ]
434
[ "Electrochemistry" ]
49,514,596
https://en.wikipedia.org/wiki/Formalized%20administrative%20notation
Formalized administrative notation (FAN) is a method that enables administrators of various organizations to describe the flow and sequence of operations, identify their responsibilities, and define and integrate their transactions. Its objective is to formalize how a specific system is to be implemented, as a basis for further systematization. It was originally known in Spanish as MECAF. Known areas FAN is primarily used by professionals in systems operations in order to produce formalized descriptions of administrative circuits. Types of action FAN is generally used to implement software packages for enterprise resource planning (ERP), customer relationship management (CRM) and the supply chain. History In Argentina, FAN was created by Pablo Iacub and Leonardo Mayo and was originally used in the 1990s for multiple software implementation projects within the country. FAN was first formally recognized in 2005 at the IRMA congress in the U.S. In 2014, it was presented at FELTI and received the LatinaTEC award. Currently, FAN is used in various universities throughout Latin America to enable students to understand and describe the mechanics of administrative circuits of different types of organizations. References MECAF: nuevo paradigma en formalización de la Administración Empresarial | CanalAR Sistemas ERP: La Guía Definitiva Software ERP: el nuevo Gran Hermano de las organizaciones Se entregaron los Premios LaTinatec, en el marco de FELTI FELTi-2014: un acercamiento al desarrollo tecnológico de la información | Universidad de las Ciencias Informáticas Enterprise resource planning terminology Supply chain management
Formalized administrative notation
[ "Technology" ]
334
[ "Enterprise resource planning terminology", "Computing terminology" ]
49,515,985
https://en.wikipedia.org/wiki/ESIM
An eSIM (embedded SIM) is a form of SIM card that is embedded directly into a device as software installed onto an eUICC chip. First released in March 2016, eSIM is a global specification by the GSMA that enables remote SIM provisioning; end-users can change mobile network operators without the need to physically swap a SIM from the device. eSIM technology has been referred to as a disruptive innovation for the mobile telephony industry. Most flagship devices manufactured since 2018 that are not SIM locked support eSIM technology; as of October 2023, there were 134 models of mobile phones that supported eSIMs. In addition to mobile phones, tablet computers, and smartwatches, eSIM technology is used for Internet of things applications such as connected cars (smart rearview mirrors, on-board diagnostics, vehicle Wi-Fi hotspots), artificial intelligence translators, MiFi devices, smart earphones, smart metering, GPS tracking units, database transaction units, bicycle-sharing systems, advertising players, and closed-circuit television cameras. A report stated that by 2025, 98% of mobile network operators were expected to offer eSIMs. The eUICC chip used to host the eSIM is installed via surface-mount technology at the factory and uses the same electrical interface as a physical SIM as defined in ISO/IEC 7816, but in a smaller 6 mm × 5 mm format. Once an eSIM carrier profile has been installed on an eUICC, it operates in the same way as a physical SIM, complete with a unique ICCID and network authentication key generated by the carrier. If the eSIM is eUICC-compatible, it can be re-programmed with new SIM information. Otherwise, the eSIM is programmed with its ICCID/IMSI and other information at the time it is manufactured, and cannot be changed. One common physical form factor of an eUICC chip is designated MFF2. All eSIMs are programmed with a permanent eSIM ID (EID) at the factory, which is used by the provisioning service to associate the device with an existing carrier subscription as well as to negotiate a secure channel for programming. The GSMA maintains two different versions of the eSIM standard: one for consumer and Internet of things devices and another for machine-to-machine (M2M) devices. History In November 2010, the GSMA began discussing the possibility of a software-based SIM. In March 2012, at the meeting of the European Telecommunications Standards Institute, Motorola noted that eUICC is geared at industrial devices, while Apple foresaw eSIMs in consumer products. A first version of the standard was published in March 2016, followed by a second version in November 2016. In February 2016, Samsung released the Samsung Gear S2 Classic 3G smartwatch, the first device to implement an eSIM. In March 2017, during Mobile World Congress, Qualcomm introduced a technical solution, with a live demonstration, within its Snapdragon hardware chip associated with related software (secured Java applications). In September 2017, Apple first introduced eSIM support with the Apple Watch Series 3. In 2018, it introduced it to iPhone, with the iPhone XS and iPhone XR, and iPad, with the iPad Pro (3rd generation). The first iPhone models to not have a SIM card tray and work exclusively with eSIM were the iPhone 14 and iPhone 14 Pro, announced in 2022. 
Outside the United States, all iPhone models continue to be sold with support for physical SIM cards, but the iPad Air (6th generation), iPad Pro (7th generation), and iPad Mini (7th generation), announced in 2024, work exclusively with eSIM. In October 2017, Google unveiled the Pixel 2, the first mobile phone to use an eSIM, available via its Google Fi Wireless service. In 2018, Google released the Pixel 3 and Pixel 3 XL and in May 2019, the Pixel 3a and Pixel 3a XL, with eSIM support for carriers other than Google Fi. In October 2019, Google released the Pixel 4 and Pixel 4 XL with eSIM support. Motorola released the 2020 version of the Motorola Razr, a foldable smartphone that has no physical SIM slot, since it only supports eSIM. In July 2018, Plintron implemented the eSIM4Things Internet of things product. In December 2017, Microsoft launched its first eSIM-enabled device, the Microsoft Surface Pro LTE. In 2018, Microsoft also introduced eSIM to the Windows 10 operating system. Samsung shipped the Samsung Galaxy S21 and S20 in North America with eSIM hardware onboard but no software support out of the box. The feature was enabled with the One UI version 4 update in November 2021. In June 2018, Singapore sought public consultation on introducing eSIM as a new standard. In 2023, there were 650 million installed devices with eSIM capability. Advantages and disadvantages Advantages Several SIMs can be stored at the same time. There is no need to obtain, store, and insert/eject (and potentially lose) small physical SIMs. If the phone is stolen, it can be tracked by "find my phone" services, whereas a physical SIM can simply be removed. The risk of damaging a SIM socket's delicate contacts when inserting and removing a SIM is eliminated. Phones with eSIM only do not need to be built with hardware SIM holders or means to insert them; this is particularly relevant for small devices such as smartwatches. Users can update to a new plan or switch carriers instantly online. eSIMs are better suited for Wi-Fi hotspots due to seamless network switching and enhanced security. The eSIM chip is half the size of the smallest physical SIM card, allowing phone designers to use the space for other applications. eSIMs provide cost savings when traveling internationally. Disadvantages eSIMs cannot be easily transferred to another phone; the process usually requires technical support. If a phone is broken, anything restricted to the eSIM's network becomes inaccessible; in particular, calls cannot be received, and resources (calls, SMS, data) paid for cannot be used. A physical SIM can be transferred from a broken phone to a working one. The eSIM, which allows communications to be made and charged to the account holder, cannot be removed when having the phone repaired or when lending it to someone. eSIM accounts must be deleted or transferred from a phone when it is sold or disposed of. There may be compatibility issues with some phones. An eSIM cannot be physically removed from a device, which some might view as a disadvantage if they are concerned about being tracked. The implementation of the eSIM on the Samsung Galaxy series in North America (USA and Canada) is different from the implementation in the rest of the world: North American variants lack the ability to specify different default SIMs for different functions, e.g., one SIM as the default for data and the other SIM as the default for voice. They require that the same eSIM be the default SIM for data, voice, and SMS. 
The US variants also force a reboot each time the user switches eSIMs, while other models do not, because the CSC codes correspond to a single carrier. If a phone is bought directly from a carrier with a SIM lock, the phone can only add eSIMs from the same carrier as the one on the physical SIM card, even after a carrier unlock. Foreign eSIMs may have limited support. References External links eSIM overview from the GSMA Computer access control Cryptographic hardware Mobile phone standards Smart cards Telecommunications-related introductions in 2016
ESIM
[ "Engineering" ]
1,545
[ "Cybersecurity engineering", "Computer access control" ]
49,520,524
https://en.wikipedia.org/wiki/Ground%20reinforcement
Ground reinforcement is a reinforcing element placed on a flat surface in order to increase accessibility for vehicles, ensure proper rainwater drainage, and provide protection. The reinforcing element, usually in the form of grids, is used beneath grass, asphalt, or concrete in roads, parking lots, driveways and paths. Materials The materials used for ground reinforcement include iron, plywood and recycled plastic. Recycled plastic possesses the desirable properties of water resistance and recyclability, in addition to sustainability. Installation Iron plates, being heavy, are generally installed using a crane, while plywood and plastic reinforcements are placed by hand. Ground reinforcement grids are installed by preparing a suitable depth of sub-base material, overlaid with a screed layer of fine gravel or sharp sand to create a level surface, followed by a geotextile membrane, before final assembly and in-filling of the final grid surface. Typical ground reinforcement systems involve gravel stabilization and grass reinforcement. These systems reinforce the surface to enable use by vehicles. References How Gravel Stabilisation Grids Work – Nidagravel UK | Gravel Stabilisers UK Ltd (UK) Building materials Materials
Ground reinforcement
[ "Physics", "Engineering" ]
228
[ "Building engineering", "Construction", "Materials", "Building materials", "Matter", "Architecture" ]
49,522,576
https://en.wikipedia.org/wiki/Whittle%20likelihood
In statistics, Whittle likelihood is an approximation to the likelihood function of a stationary Gaussian time series. It is named after the mathematician and statistician Peter Whittle, who introduced it in his PhD thesis in 1951. It is commonly used in time series analysis and signal processing for parameter estimation and signal detection. Context In a stationary Gaussian time series model, the likelihood function is (as usual in Gaussian models) a function of the associated mean and covariance parameters. With a large number ($N$) of observations, the ($N \times N$) covariance matrix may become very large, making computations very costly in practice. However, due to stationarity, the covariance matrix has a rather simple structure, and by using an approximation, computations may be simplified considerably (from $O(N^2)$ to $O(N \log N)$). The idea effectively boils down to assuming a heteroscedastic zero-mean Gaussian model in the Fourier domain; the model formulation is based on the time series' discrete Fourier transform and its power spectral density. Definition Let $X_1, \ldots, X_N$ be a stationary Gaussian time series with (one-sided) power spectral density $S_1(f)$, where $N$ is even and samples are taken at constant sampling intervals $\Delta t$. Let $\tilde{X}_1, \ldots, \tilde{X}_{N/2+1}$ be the (complex-valued) discrete Fourier transform (DFT) of the time series. Then for the Whittle likelihood one effectively assumes independent zero-mean Gaussian distributions for all $\tilde{X}_j$ with variances for the real and imaginary parts given by $\operatorname{Var}\bigl(\operatorname{Re}(\tilde{X}_j)\bigr) = \operatorname{Var}\bigl(\operatorname{Im}(\tilde{X}_j)\bigr) = \frac{N}{4\Delta t} S_1(f_j)$, where $f_j = \frac{j}{N \Delta t}$ is the $j$th Fourier frequency. This approximate model immediately leads to the (logarithmic) likelihood function $\log\bigl(P(x_1, \ldots, x_N \mid \theta)\bigr) \propto -\sum_j \left( \log\bigl(S_1(f_j)\bigr) + \frac{|\tilde{x}_j|^2}{\frac{N}{4\Delta t} S_1(f_j)} \right)$, where $|\cdot|$ denotes the absolute value with $|\tilde{x}_j|^2 = \bigl(\operatorname{Re}(\tilde{x}_j)\bigr)^2 + \bigl(\operatorname{Im}(\tilde{x}_j)\bigr)^2$. Special case of a known noise spectrum In case the noise spectrum is assumed known a priori, and noise properties are not to be inferred from the data, the likelihood function may be simplified further by ignoring constant terms, leading to the sum-of-squares expression $\log\bigl(P(x \mid \theta)\bigr) \propto -\sum_j \frac{|\tilde{x}_j|^2}{\frac{N}{4\Delta t} S_1(f_j)}$. This expression also is the basis for the common matched filter. Accuracy of approximation The Whittle likelihood in general is only an approximation; it is only exact if the spectrum is constant, i.e., in the trivial case of white noise. The efficiency of the Whittle approximation always depends on the particular circumstances. Note that due to linearity of the Fourier transform, Gaussianity in the Fourier domain implies Gaussianity in the time domain and vice versa. What makes the Whittle likelihood only approximately accurate is related to the sampling theorem—the effect of Fourier-transforming only a finite number of data points, which also manifests itself as spectral leakage in related problems (and which may be ameliorated using the same methods, namely, windowing). In the present case, the implicit periodicity assumption implies correlation between the first and last samples ($x_1$ and $x_N$), which are effectively treated as "neighbouring" samples (like $x_1$ and $x_2$). Applications Parameter estimation Whittle's likelihood is commonly used to estimate signal parameters for signals that are buried in non-white noise. The noise spectrum then may be assumed known, or it may be inferred along with the signal parameters. Signal detection Signal detection is commonly performed with the matched filter, which is based on the Whittle likelihood for the case of a known noise power spectral density. The matched filter effectively does a maximum-likelihood fit of the signal to the noisy data and uses the resulting likelihood ratio as the detection statistic. 
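To make the definition above concrete, the following is a minimal numerical sketch (not from the article) of evaluating the Whittle log-likelihood, up to an additive constant, for a candidate one-sided spectral density. The function names are illustrative assumptions, and the zero-frequency (mean) term is simply skipped as a simplification:

```python
import numpy as np

def whittle_log_likelihood(x, dt, psd_one_sided):
    """Whittle log-likelihood (up to an additive constant) of time series x,
    sampled at interval dt, under a candidate one-sided PSD S_1(f).
    psd_one_sided: callable mapping frequency (Hz) -> S_1(f)."""
    n = len(x)                    # assumed even, as in the definition above
    xt = np.fft.rfft(x)           # DFT terms for j = 0 .. n/2
    f = np.fft.rfftfreq(n, d=dt)  # j-th Fourier frequency: j / (n * dt)
    s = psd_one_sided(f[1:])      # skip the zero-frequency (mean) term
    power = np.abs(xt[1:]) ** 2   # |x~_j|^2 = Re^2 + Im^2
    var = n / (4.0 * dt) * s      # variance of the real and imaginary parts
    return -np.sum(np.log(s) + power / var)

# Illustrative use: unit-variance white noise sampled at dt = 1 has a flat
# one-sided PSD of S_1(f) = 2, so a flat candidate spectrum is the true model.
rng = np.random.default_rng(0)
x = rng.normal(size=1024)
print(whittle_log_likelihood(x, dt=1.0,
                             psd_one_sided=lambda f: np.full_like(f, 2.0)))
```

Because the sum decouples across frequencies, evaluating this costs only one FFT plus a vector sum, which is the $O(N \log N)$ saving mentioned in the context section.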
The matched filter may be generalized to an analogous procedure based on a Student-t distribution by also considering uncertainty (e.g. estimation uncertainty) in the noise spectrum. On the technical side, the EM algorithm may be utilized here, effectively leading to repeated or iterative matched-filtering. Spectrum estimation The Whittle likelihood is also applicable for estimation of the noise spectrum, either alone or in conjunction with signal parameters. See also Coloured noise Discrete Fourier transform Likelihood function Matched filter Power spectral density Statistical signal processing Weighted least squares References Time series Time series models Frequency-domain analysis Statistical inference Statistical models Statistical signal processing Signal estimation Normal distribution
Whittle likelihood
[ "Physics", "Engineering" ]
808
[ "Frequency-domain analysis", "Statistical signal processing", "Spectrum (physical sciences)", "Engineering statistics" ]
49,526,463
https://en.wikipedia.org/wiki/Surveyor%20nuclease%20assay
Surveyor nuclease assay is an enzyme mismatch cleavage assay used to detect single base mismatches or small insertions or deletions (indels). Surveyor nuclease is part of a family of mismatch-specific endonucleases that were discovered in celery (CEL nucleases). The enzyme recognizes all base substitutions and insertions/deletions, and cleaves the 3′ side of mismatched sites in both DNA strands with high specificity. This assay has been used to identify and analyze mutations in a variety of organisms and cell types, as well as to confirm genome modifications following genome editing (using CRISPR/TALENs/zinc fingers). Background The ability to discover and detect known and unknown mutations is of great importance in biomedical research and genetic diagnosis (see Applications). Therefore, multiple methods have been developed to enable research-based and clinical diagnostic detection of such mutations. The most direct manner to identify sequence changes/differences is through reading the DNA sequence with traditional and high-throughput DNA sequencing methods (see Sanger sequencing and DNA sequencing). However, these methods provide large amounts of unnecessary data and are costly to use. In addition, traditional sequencing can be useful for detection of germline mutations, but may be less successful in detecting somatic minor alleles at low frequencies (mosaicism). Therefore, other non-sequencing-based approaches to detect mutations or polymorphisms are required. Other widely used methods depend on physical properties of DNA, for example melting temperature-based systems such as single-stranded conformational polymorphism (SSCP) analysis and denaturing high-performance liquid chromatography (DHPLC). These techniques are generally limited to the analysis of short DNA fragments (< 1000 bp) and are only able to indicate the presence of polymorphism(s), but do not easily yield the location of a mutation within a DNA sequence. Therefore, they must be followed with additional techniques in order to pinpoint the mutation or map multiple mutations in the same fragment. Enzymatic mismatch cleavage assays exploit the properties of mismatch-specific endonucleases to detect and cleave mismatches. These methods are simple to run using standard laboratory techniques and equipment, and can detect polymorphisms, single base pair mismatches, and insertions and deletions at low frequencies. Several such enzymes have been discovered (including CEL I, T4 endonuclease VII, endonuclease V, and T7 endonuclease I). One of the commonly used enzymes is Surveyor nuclease (CEL II), which cleaves the 3′ side of both DNA strands with high specificity at sites of base substitution or insertion/deletion. This enzyme is capable of cleaving at multiple mutations in large DNA fragments, and produces detectable cleavage products from mismatched DNA representing only a small proportion of the DNA in a population, thus making it suitable for use in enzyme mismatch cleavage assays. History In 1998, Oleykowski et al. identified a new mismatch-specific endonuclease. The enzyme was purified from celery and was given the name CEL I. CEL I was shown to cut DNA with high specificity on both strands at the 3′ side of base-substitution mismatches, and can therefore be used in enzyme mutation detection methods to identify mutations and polymorphisms. Oleykowski and colleagues demonstrated this technique by using the enzyme to detect a variety of mutations and polymorphisms in the human BRCA1 gene. 
While monitoring the purification of CEL I using polyacrylamide gel electrophoresis, Yang et al. noticed that there were two nuclease bands that stayed together during all the purification steps. The major nuclease activity was designated CEL I, while the minor activity on SDS-PAGE was named CEL II. They concluded that CEL I and CEL II are similar, and that both are able to cleave a DNA mismatch. In 2004, Qiu et al. developed a mutation detection technology based on CEL II, also known as Surveyor nuclease. Since then the method has been used to detect mutations and polymorphisms in many different organisms and cell types (see Applications). Surveyor nuclease was licensed from the Fox Chase Cancer Center by Transgenomic, Inc. and was subsequently sold to IDT, which currently distributes it. Surveyor nuclease assay workflow DNA extraction Initially, the DNA of interest (nuclear or mitochondrial DNA) is extracted from tissues or cell culture. This can be done by standard extraction methods such as proteinase K digestion followed by ethanol precipitation, or by other commercially available methods. If the DNA is predicted to be heterogeneous, e.g. from a pool of differentially modified cells or from heterozygous mutation carriers, there is no need to add control DNA. Polymerase chain reaction The region of interest in both mutant and wild-type reference DNA is amplified by polymerase chain reaction (PCR). The PCR reaction should be carried out using a high-fidelity proofreading polymerase to avoid introducing PCR errors that would be detected as mismatches by the nuclease. The PCR reaction should be optimized to create a single, strong PCR band, as non-specific bands will increase the background noise of the assay. If the allele of interest is expected to be present at low frequency, such as in the case of somatic mosaicism or heteroplasmic mitochondrial DNA, a modified PCR protocol that enriches variant alleles from a mixture of wild-type and mutation-containing DNA might be considered (e.g. COLD-PCR). Formation of hybrid DNA duplexes The DNA of interest is denatured and annealed in order to form heteroduplexes containing a mismatch at the point of the mutation, which can then be identified by the Surveyor nuclease. If the DNA is predicted to be homogeneous (e.g. homoplasmic mitochondrial DNA or identical alleles on both chromosomes of genomic DNA), then DNA from a control sample is needed in order to form a heteroduplex that is then recognizable by the nuclease. If the DNA sample is heterogeneous, no additional control DNA is needed; however, the PCR products should still be denatured and annealed in order to create heteroduplexes. Digestion The annealed DNA is treated with Surveyor nuclease to cleave the heteroduplexes. All types of mismatches are identifiable by Surveyor nuclease, although the mismatch cutting preferences fall into four groups from most to least preferred: CT, AC, and CC are preferred equally over TT, followed by AA and GG, and finally followed by the least preferred, AG and GT. Sequence context also influences the Surveyor nuclease digestion rate. Analysis Digested DNA products can be analyzed using conventional gel electrophoresis or high-resolution capillary electrophoresis. The detection of cleaved products indicates the presence of a heteroduplex formed by a mismatch. The location of the mutation/polymorphism can be inferred by observing the fragment lengths after cleavage. 
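As an illustrative aside (not from the article), two small calculations often accompany this workflow: the expected heteroduplex yield after re-annealing a mixed pool, and the inference of the mutation position from a cleavage fragment size. A minimal sketch, assuming random re-annealing and an end-labelled product; the function names and example numbers are hypothetical:

```python
def heteroduplex_fraction(mutant_allele_fraction: float) -> float:
    """Expected heteroduplex fraction after denaturing and randomly
    re-annealing a pool of wild-type and mutant amplicons: a duplex is a
    heteroduplex when its two strands differ, which occurs with
    probability 2p(1-p) for mutant fraction p."""
    p = mutant_allele_fraction
    return 2.0 * p * (1.0 - p)

def mutation_position(full_length_bp: int, fragment_bp: int) -> int:
    """Approximate mutation position, measured from the labelled end,
    inferred from the size of one cleavage fragment; the complementary
    fragment has size full_length_bp - fragment_bp."""
    if not 0 < fragment_bp < full_length_bp:
        raise ValueError("fragment must be shorter than the full product")
    return fragment_bp

# Example: a 5% mutant allele yields ~9.5% heteroduplexes, and a 180 bp
# fragment from a 600 bp product places the mismatch ~180 bp from the
# labelled end (with a ~420 bp complementary fragment).
print(heteroduplex_fraction(0.05))   # 0.095
print(mutation_position(600, 180))   # 180
```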
If fluorescently labelled primers are used to mark the 5′ and 3′ ends of the PCR products, different colored bands will be observed in the analysis. The size of each band independently confirms the position of the mutation/polymorphism. Multiple mutations can be detected by the presence of several fragments. Advantages and limitations Advantages Advantages of mismatch nuclease assays One of the main advantages of detecting mutations and polymorphisms using mismatch nuclease methods is that no previous knowledge is required regarding the nature of the mutation, as opposed to other methods, such as restriction fragment length polymorphism (RFLP) analysis, which has been used for SNP analysis in the past. In comparison to methods based on melting temperature, mismatch-specific endonuclease methods are not only faster, but can also detect multiple mutations in large DNA fragments. The method is feasible to use in high-throughput systems using automated injection, making it suitable for screening. Advantages of Surveyor nuclease Surveyor nuclease is a reasonably sensitive enzyme, producing detectable cleavage products from sequences representing only a small proportion of DNA in the population. It can detect a ratio of 1:32 heteroduplex to homoduplex for smaller PCR products (~0.6 kb), and 1:16 heteroduplex to homoduplex for longer PCR products (~2.3 kb). This property makes it possible to pool clinical samples in order to increase heteroduplex formation and hence the sensitivity. This is also useful for detection of minor variants in a heterogeneous population (such as a heterogeneous tumor). In the case of genome editing by CRISPR or other methods, this property can enhance detection of rare editing events in a population of cells prior to creation and testing of individual edited clones. Surveyor nuclease cleaves all types of mismatches, even if some are more preferred than others: CT, AC, and CC are preferred equally over TT, followed by AA and GG, and finally followed by the least preferred, AG and GT. It also detects indels of up to at least 12 bp. The Surveyor nuclease assay can also detect multiple mutations in the same fragment. However, this requires several additional processing steps that may also increase the background of the assay (see Limitations). Limitations PCR amplification product One of the main limitations of this assay is that it relies on PCR amplification, and is therefore influenced by the quality of the amplified product. PCR artifacts (e.g. primer-dimers or truncated products) can increase the background noise and obscure the signal. Primer-dimers can also inhibit the activity of Surveyor nuclease, reducing the signal. As the PCR method has the potential to introduce its own mutations during the amplification, these errors can also increase background noise. Therefore, it is best to use a high-fidelity polymerase to minimize the amplification errors. Mitochondrial DNA analysis might be susceptible to contamination by nuclear mitochondrial DNA sequences co-amplified during the PCR reaction, thus confounding the analysis of homoplasmic versus heteroplasmic mitochondrial DNA. In order to detect multiple mutations in the same fragment, post-PCR clean-up must be done before Surveyor nuclease digestion. Detection of multiple mismatches can also be improved by increasing the time and amount of Surveyor nuclease in the reaction, but this also increases the background due to non-specific cleavage. 
Limitations of the Surveyor nuclease enzyme Surveyor nuclease also has a 5′ exonuclease activity that attacks the ends of double-stranded DNA, increasing background signal during extended incubation. This can be reduced by shortening the digestion time and by adding DNA polymerase. Applications Confirming genome modifications using CRISPR and other methods A number of genome editing technologies have emerged in recent years, including zinc-finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs) and the RNA-guided CRISPR/Cas9 nuclease system. These methods promote genome editing by introduction of a double-strand DNA break, followed by repair through the non-homologous end-joining (NHEJ) or homology-directed repair (HDR) pathways. While HDR is expected to introduce a consistent modification to the genome, NHEJ can introduce a heterogeneous mix of mutations (usually small indels), which will be difficult to identify using Sanger sequencing. Point mutations and indels can be detected by the Surveyor nuclease assay, making it useful to detect genome editing in a pool of cells without the need for clonal expansion prior to analysis, and it can also provide an estimate of the targeting efficiency achieved. Even after clonal expansion, detection of mutations using Sanger sequencing may be difficult, as each allele can undergo a different editing event. In this case, the Surveyor nuclease assay actually uses this effect to create the heteroduplexes required for detection by the mismatch endonuclease. Detection of germline mutations in human genes The Surveyor nuclease assay has been used to detect germline mutations in human genes, for example ATRX for X-linked mental retardation, and the HBB gene linked to β-thalassemia. The assay has also been used to detect mitochondrial and nuclear DNA mutations associated with respiratory chain defects, and mutations associated with kidney disease. Detection of somatic mutations in cancer The Surveyor nuclease assay has been used to detect somatic mutations in various cancer-related genes and, as stated above, can be used even when the sample is heterogeneous and the mutant allele comprises only 1%–5% of the total alleles. The method has been used to detect mutations in epidermal growth factor receptor (EGFR), Janus kinase 2 (JAK2), p53 and others. Other applications The method has been used to detect mutations that cause drug resistance in Mycobacterium tuberculosis. References Biochemistry detection reactions Enzymes Genome editing Mutation
Surveyor nuclease assay
[ "Chemistry", "Engineering", "Biology" ]
2,793
[ "Genetics techniques", "Genome editing", "Biochemistry detection reactions", "Genetic engineering", "Biochemical reactions", "Microbiology techniques" ]
50,650,350
https://en.wikipedia.org/wiki/Degeneration%20%28algebraic%20geometry%29
In algebraic geometry, a degeneration (or specialization) is the act of taking a limit of a family of varieties. Precisely, given a morphism $\pi : \mathcal{X} \to C$ of a variety (or a scheme) to a curve C with origin 0 (e.g., the affine or projective line), the fibers $X_t = \pi^{-1}(t)$ form a family of varieties over C. Then the fiber $X_0$ may be thought of as the limit of $X_t$ as $t \to 0$. One then says the family $X_t$, $t \ne 0$, degenerates to the special fiber $X_0$. The limiting process behaves nicely when $\pi$ is a flat morphism and, in that case, the degeneration is called a flat degeneration. Many authors assume degenerations to be flat. When the family is trivial away from the special fiber, i.e., $X_t$ is independent of $t \ne 0$ up to (coherent) isomorphisms, $X_t$, $t \ne 0$, is called a general fiber. Degenerations of curves In the study of moduli of curves, the important point is to understand the boundaries of the moduli, which amounts to understanding degenerations of curves. Stability of invariants Ruledness specializes. Precisely, Matsusaka's theorem says: Let X be a normal irreducible projective scheme over a discrete valuation ring. If the generic fiber is ruled, then each irreducible component of the special fiber is also ruled. Infinitesimal deformations Let D = k[ε] be the ring of dual numbers over a field k and Y a scheme of finite type over k. Given a closed subscheme X of Y, by definition, an embedded first-order infinitesimal deformation of X is a closed subscheme X′ of Y ×Spec(k) Spec(D) such that the projection X′ → Spec D is flat and has X as the special fiber. If Y = Spec A and X = Spec(A/I) are affine, then an embedded infinitesimal deformation amounts to an ideal I′ of A[ε] such that A[ε]/I′ is flat over D and the image of I′ in A = A[ε]/(ε) is I. In general, given a pointed scheme (S, 0) and a scheme X, a morphism of schemes π : X′ → S is called a deformation of the scheme X if it is flat and its fiber over the distinguished point 0 of S is X. Thus, the above notion is a special case when S = Spec D and there is some choice of embedding. See also deformation theory differential graded Lie algebra Kodaira–Spencer map Frobenius splitting Relative effective Cartier divisor References M. Artin, Lectures on Deformations of Singularities – Tata Institute of Fundamental Research, 1976 E. Sernesi: Deformations of algebraic schemes M. Gross, M. Siebert, An invitation to toric degenerations M. Kontsevich, Y. Soibelman: Affine structures and non-Archimedean analytic spaces, in: The unity of mathematics (P. Etingof, V. Retakh, I.M. Singer, eds.), 321–385, Progr. Math. 244, Birkhäuser 2006. Karen E. Smith, Vanishing, Singularities and Effective Bounds via Prime Characteristic Local Algebra. V. Alexeev, Ch. Birkenhake, and K. Hulek, Degenerations of Prym varieties, J. Reine Angew. Math. 553 (2002), 73–116. External links http://mathoverflow.net/questions/88552/when-do-infinitesimal-deformations-lift-to-global-deformations Algebraic geometry
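As a concluding illustration of the definitions above, here is a textbook example of a flat degeneration (not drawn from the references listed), recorded as a LaTeX fragment:

```latex
% The family X = { xy = t } over the affine line is a flat degeneration:
\[
  \mathcal{X} = \{\, xy = t \,\} \subset \mathbb{A}^2_{x,y} \times \mathbb{A}^1_t,
  \qquad \pi \colon \mathcal{X} \to \mathbb{A}^1_t .
\]
% For t \neq 0 the fiber X_t = { xy = t } is a smooth affine conic
% (a hyperbola), while the special fiber over the origin is reducible
% and nodal:
\[
  X_0 = \{\, xy = 0 \,\} = \{ x = 0 \} \cup \{ y = 0 \},
\]
% the union of the two coordinate axes. Since t is a nonzerodivisor on
% k[x, y, t]/(xy - t), the morphism \pi is flat, so the smooth, irreducible
% general fiber flatly degenerates to a singular, reducible special fiber.
```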
Degeneration (algebraic geometry)
[ "Mathematics" ]
756
[ "Fields of abstract algebra", "Algebraic geometry" ]
50,656,950
https://en.wikipedia.org/wiki/Flow%20Free
Flow Free is a puzzle game developed and published by American studio Big Duck Games for iOS and Android in June 2012. As of 2022, the original game has received more than 100 million downloads, with its variants receiving millions more. Gameplay Flow Free presents numberlink puzzles. Each puzzle has a grid of squares with pairs of colored dots occupying some of the squares. The objective is to connect dots of the same color by drawing 'pipes' between them so that the entire grid is occupied by pipes. However, pipes may not intersect. Difficulty is primarily determined by the size of the grid, ranging from 5x5 squares (3 colors) to 15x15 squares (up to 16 colors). Many grids are "open" and some contain "walls" which must be navigated around. Whenever a level is completed, a check mark appears on the level-select icon to indicate that the puzzle is solved, while a star indicates a "perfect" game, in which the player finished the puzzle with the fewest moves required. The app also contains additional paid packs as well as a time trial mode. Expansions Big Duck Games has also released four expansions in the series. The first expansion, "Flow Free: Bridges", was released on November 8, 2012, at a fixed price. In this expansion, pipes can be made to intersect through pre-made bridges. The second expansion, "Flow Free: Hexes", was released on October 12, 2016, and features both free and paid premium puzzles. The gameplay is similar to "Flow Free" except the grid is made of hexagons instead of squares. The third expansion, "Flow Free: Warps", was released on August 8, 2017. This expansion allows pipes to warp from one edge of the map to another edge of the map. The fourth expansion, "Flow Free: Shapes", was released on December 18, 2024. This expansion features many levels with multiple cell shapes and round edges. NP-completeness According to a 2022 paper by Eammon Hart and Joshua A. McGinnis, which builds upon a collaborative 2014 paper, Flow Free is NP-complete, meaning that no polynomial-time algorithm for solving arbitrary puzzles is known, and none exists unless P = NP. References External links Official website 2012 video games Android (operating system) games IOS games Noodlecake Games games NP-complete problems Puzzle video games Single-player video games Video games developed in the United States Windows Phone games
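To make the rules described in the gameplay section concrete, here is a minimal sketch of a checker for a Flow Free-style (numberlink) solution on a square grid. The data representation (cells as (row, col) pairs, paths as cell lists) is an assumption for illustration, not the game's actual internal format:

```python
# Hypothetical validator for a Flow Free-style solution.
# endpoints: dict color -> (start_cell, end_cell)
# paths:     dict color -> list of cells forming the pipe, endpoints included

def is_valid_solution(rows, cols, endpoints, paths):
    covered = {}
    for color, path in paths.items():
        # Each pipe must run between its two colored dots.
        if {path[0], path[-1]} != set(endpoints[color]):
            return False
        for (r, c), (r2, c2) in zip(path, path[1:]):
            # Consecutive cells must be orthogonally adjacent (no diagonals).
            if abs(r - r2) + abs(c - c2) != 1:
                return False
        for cell in path:
            r, c = cell
            # Stay on the board; pipes may not intersect or revisit cells.
            if not (0 <= r < rows and 0 <= c < cols) or cell in covered:
                return False
            covered[cell] = color
    # Every square must be occupied by some pipe.
    return len(covered) == rows * cols

# Example: a 2x2 grid with one pair of dots in adjacent corners.
endpoints = {"red": ((0, 0), (1, 0))}
solution = {"red": [(0, 0), (0, 1), (1, 1), (1, 0)]}
print(is_valid_solution(2, 2, endpoints, solution))  # True
```

Checking a proposed solution like this is fast; it is finding one that is NP-complete, as the result cited above shows.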
Flow Free
[ "Mathematics" ]
496
[ "NP-complete problems", "Mathematical problems", "Computational problems" ]
29,850,197
https://en.wikipedia.org/wiki/Biomarkers%20of%20Alzheimer%27s%20disease
The biomarkers of Alzheimer's disease are neurochemical indicators used to assess the risk or presence of the disease. The biomarkers can be used to diagnose Alzheimer's disease (AD) at a very early stage, but they also provide objective and reliable measures of disease progress. It is imperative to diagnose AD as soon as possible, because the neuropathologic changes of AD precede the symptoms by years. It is well known that amyloid beta (Aβ) is a good indicator of AD, which has helped doctors accurately pre-diagnose cases of AD. When Aβ peptide is released by proteolytic cleavage of amyloid-beta precursor protein, some solubilized Aβ peptides are detected in CSF and blood plasma, which makes Aβ peptides promising candidates for biological markers. It has been shown that the amyloid beta biomarker shows sensitivity and specificity of 80% or above in distinguishing AD from dementia. It is believed that amyloid beta as a biomarker will provide a future for diagnosis of AD and eventually treatment of AD. Amyloid beta Amyloid beta (Aβ) is composed of a family of peptides produced by proteolytic cleavage of the type I transmembrane-spanning glycoprotein amyloid-beta precursor protein (APP). Amyloid plaque Aβ protein species end at residue 40 or 42, but it is suspected that the Aβ42 form is crucial in the pathogenesis of AD. Although Aβ42 makes up less than 10% of total Aβ, it aggregates at much faster rates than Aβ40. Aβ42 is the initial and major component of amyloid plaque deposits. While the most prevalent hypothesis for the mechanism of Aβ-mediated neurotoxicity is structural damage to the synapse, various mechanisms such as oxidative stress, altered calcium homeostasis, induction of apoptosis, structural damage, chronic inflammation and neuronal formation of amyloid have been proposed. Observation of the Aβ42/Aβ40 ratio has been a promising biomarker for AD. However, as Aβ42 fails to be a reliable biomarker in plasma, attention was drawn to alternative biomarkers. Current biomarkers BACE1 Enzymatic digestion by various enzymes, including beta-secretase (β-secretase) and gamma-secretase (γ-secretase), cleaves amyloid-beta precursor protein (APP) into various types of amyloid beta (Aβ) protein. Most β-secretase activity originates from an integral membrane aspartyl protease encoded by the β-site APP-cleaving enzyme 1 gene (BACE1). Zetterberg and his team used a sensitive and specific BACE1 assay to assess CSF BACE1 activity in AD. It was found that those with AD showed increased BACE1 expression and enzymatic activity. It was concluded that elevated BACE1 activity may contribute to the amyloidogenic process in Alzheimer's disease. CSF BACE1 activity could be a potential candidate biomarker to monitor amyloidogenic APP metabolism in the CNS. Soluble Aβ precursor protein (sAPP) APP is an integral membrane protein whose proteolysis generates beta amyloid, a 39- to 42-amino-acid peptide. Although the biological functions of APP are not known, it has been hypothesized that APP may play a role during neuroregeneration and in the regulation of neural activity, connectivity, plasticity, and memory. Recent research has shown that the large soluble APP (sAPP) fragments present in CSF may serve as novel potential biomarkers of Alzheimer's disease. In an article published in Nature, a group led by Lewczuk performed a test to observe the performance of the soluble forms of APP, sAPPα and sAPPβ. A significant increase in sAPPα and sAPPβ was found in people with AD as compared to normal subjects. 
However, reports of the CSF levels of α-sAPP and β-sAPP have been contradictory. Although many researchers have found that the CSF level of α-sAPP increases in some people with AD, some report that there is no significant change, while Lannfelt argues that there is a slight decrease. Therefore, more studies using experimental models are needed in order to confirm the validity of sAPP as a biological marker for AD. Autoantibodies Researchers at Indiana University found that titres of anti-beta-amyloid antibodies in cerebrospinal fluid were lower in AD patients compared to healthy patients. Novel approach Recent studies primarily focus on the use of autoantibodies, not only as biological markers but also for future treatment. However, there are various arguments over whether an autoantibody method provides a reliable biomarker. A number of reports show that patients with AD have lower levels of serum anti-Aβ antibodies than healthy individuals, while others have argued that the level of anti-Aβ antibody may be higher in AD. In order to resolve this discrepancy in the existing data, Gustaw came up with a novel sample-dissociation method. Theory In biological fluids, antibodies and antigens are in a state of dynamic equilibrium between bound and unbound forms that is concentration-dependent. As antigen masks the antibody, it obstructs accurate measurement of antibody levels. Gustaw discovered a novel way to enhance antibody detection. Using a dissociation buffer (1.5% bovine serum albumin (BSA) and 0.2 M glycine-HCl, pH 2.5), he dissociated antigen-antibody complexes. In dissociated samples, the antibodies freed from antigen-antibody complexes reveal a clearer difference between the diseased and non-diseased states. Method 1. Prepare dissociation buffer: 1.4% bovine serum albumin + 0.2 M glycine-HCl, pH 2.5. 2. Incubate Aβ42 for 20 minutes. 3. Dissolve Aβ42 in 500 µL dissociation buffer in a Microcon centrifugal device. 4. Incubate for 20 minutes. 5. Centrifuge for 20 minutes at 16,000 g. 6. Invert the filter and spin for 3 minutes at 2,000 g. 7. Bring the sample back to neutral pH with 15-2 µL of 2.5 M Tris, pH 9. 8. Add ELISA buffer (1.5% BSA and 0.05% Tween 20 in phosphate-buffered saline). 9. Perform ELISA analysis. Result (In the original figure, white blocks represent non-dissociated data and black blocks dissociated data.) As the ELISA results show, the detection of antibody is blocked by the addition of beta-amyloid when the experiment is performed without dissociation. Following dissociation, the level of antibody detected increased to nearly the level of the control. He used the same methodology in vivo to examine sera collected from AD patients. The results, surprisingly, demonstrated a significant increase in antibody titer, contradicting the majority of studies, which argue that anti-amyloid-beta antibody levels decrease in AD patients. The non-dissociated samples follow the widespread theory that antibody levels decrease in AD patients; however, it had already been shown that a non-dissociated sample fails to give a valid result. The dissociated sample results show significant increases in AD patients, which contradicts the majority of previous studies. 
Compared to other biomarkers, which give variable measurements in the diagnosis of AD, the new autoantibody approach accurately measures anti-Aβ antibody levels with high sensitivity, and has proved to be an excellent biomarker for Alzheimer's disease. It is believed that the new technology will provide not only early diagnosis of Alzheimer's disease but possibly also therapy for Alzheimer's disease. An open international study group (ND.Neuromark.net) has been constituted to organize scientific information and to develop a rational guide for implementing biomarkers into routine practice. See also Autoantibody Amyloid beta Biomarker BACE1 Neuroregeneration Dementia References Alzheimer's disease Biomarkers
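The concentration-dependent equilibrium invoked in the theory section above can be written out explicitly; this is standard mass-action chemistry added for clarity, not material from the study itself:

```latex
% Antibody-antigen binding equilibrium:  Ab + Ag <=> Ab.Ag,
% with dissociation constant
\[
  K_d = \frac{[\mathrm{Ab}]\,[\mathrm{Ag}]}{[\mathrm{Ab{\cdot}Ag}]} ,
\]
% so the fraction of total antibody that remains free (and hence
% detectable without dissociation) is
\[
  \frac{[\mathrm{Ab}]}{[\mathrm{Ab}] + [\mathrm{Ab{\cdot}Ag}]}
  = \frac{K_d}{K_d + [\mathrm{Ag}]} .
\]
% A high antigen (A-beta) concentration therefore masks antibody from the
% assay; low-pH dissociation of the complexes frees the antibody, which is
% consistent with the higher titres seen in the dissociated samples.
```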
Biomarkers of Alzheimer's disease
[ "Biology" ]
1,745
[ "Biomarkers" ]
29,850,583
https://en.wikipedia.org/wiki/Impella
Impella is a family of medical devices used for temporary ventricular support in patients with depressed heart function. Some versions of the device can provide left heart support during other forms of mechanical circulatory support, including ECMO and Centrimag. The device is approved for use in high-risk percutaneous coronary intervention (PCI) and in cardiogenic shock following heart attack or open heart surgery, and is placed through a peripheral artery. From the peripheral artery it pumps blood to the left or right heart via the ascending aorta or pulmonary artery. The Impella technology was acquired by Abiomed in 2005. As of March 2019, the Impella series includes the Impella 2.5, Impella 5.0/LD, Impella CP and Impella RP. Medical uses The Impella device is an alternative for percutaneous mechanical circulatory support that has been utilized as a bridge to recovery. Used alone or in tandem sets, it utilizes the concept of magnetic levitation to reduce moving parts to an absolute minimum, thus reducing anticoagulation requirements. Cardiogenic shock has been addressed by many devices, most notably the intra-aortic balloon pump (IABP). The technology deployed by the Impella device similarly alters the fundamental characteristics of the human circulatory system. As the propeller is accelerated to give respite to an acutely injured myocardium, the circulatory system transitions from a pulsatile mechanism to continuous flow. The cellular response to cardiogenic shock is poorly described for either method (counterpulsation or continuous flow). Control of the directional flow of the device (magnetic vectors) is under investigation for addressing right- versus left-sided heart failure. Transseptal intervention to address physiologic mismatch in perfusion between left- and right-sided heart failure is in experimental status. However, recent studies point to significantly greater in-hospital risks of major bleeding, death, and other adverse events for patients supported by Impella devices, compared with those managed with an IABP. A propensity-matched comparison of patients receiving mechanical circulatory support (MCS) for myocardial infarction–related shock saw a nearly one-third excess in mortality and almost a doubling in risk of major bleeding, both in-hospital endpoints, with use of Impella compared to IABP. Impella may provide results similar to venoarterial extracorporeal life support and TandemHeart. In patients with acute myocardial infarction complicated by cardiogenic shock, haemodynamic support with the Impella device had no significant effect on thirty-day mortality as compared with IABP. Overall outcomes in the population, regardless of MCS device, were significantly worse for patients after the 2008 approval of Impella. Among hospitals using Impella, those using it the most had significantly worse outcomes with Impella than those using it the least. Potential complications related to the use of Impella include device-related and peripheral vascular complications and distal thrombus formation with subsequent strokes. The most common complications reported were bleeding requiring transfusion, vascular access complications, infection, haemolysis, vascular complications requiring surgical repair, limb ischaemia, and bleeding requiring surgical intervention (2.6%). Valvular complications included aortic and mitral valve injury or mitral valve regurgitation. 
Technology Impella heart pumps are percutaneous microaxial pumps that act as mechanical circulatory support devices in patients in need of hemodynamic support. The pumps are mounted on support catheters and typically inserted through the femoral artery, although axillary and subclavian artery approaches are not uncommon. The Impella device is a generational extension of the intra-aortic balloon pump (IABP) in addressing cardiogenic shock. Advances in the technology allow a single moving part, suspended by magnetically steered mechanisms, to deploy an Archimedes screw pump just above the aortic valve, with the aim of reducing both preload and afterload. The same technology can also be deployed just above the pulmonary (pulmonic) valve to act as a gate on right-sided heart failure. Left-sided support Designed to provide hemodynamic support when the patient's heart is unable to produce sufficient cardiac output, Impella heart pumps can supply one to five liters per minute of blood flow. The physiological consequences of left-sided support are threefold. First, it unloads the left ventricle by reducing left ventricular end-diastolic volume and pressure, thereby decreasing ventricular wall stress, work, and myocardial oxygen demand. Second, it increases mean arterial pressure, diastolic pressure, and cardiac output, improving cardiac power output and cardiac index. The combined effects on wall stress and perfusion pressure (especially diastolic pressure) augment coronary perfusion. Lastly, augmented cardiac output and forward flow from the left ventricle decreases pulmonary capillary wedge pressure and reduces right ventricular afterload. Approval Impella was approved for mechanical circulatory support in 2008, but large-scale, real-world data on its use are lacking. In June 2008, the Impella 2.5 heart pump received FDA 510(k) clearance for partial circulatory support for periods of up to six hours during cardiac procedures not requiring cardiopulmonary bypass. In March 2015, it received FDA premarket approval for elective and urgent high-risk percutaneous intervention procedures. In December 2016, the premarket approval was expanded to include the Impella CP heart pump. In April 2009, the Impella 5.0 and Impella LD heart pumps received 510(k) clearance for circulatory support for periods of up to six hours during cardiac procedures not requiring cardiopulmonary bypass. In July 2010, the automated Impella controller received FDA 510(k) clearance for use by trained healthcare professionals in healthcare facilities and medical transport. In January 2015, the Impella RP was granted a humanitarian device exemption to provide circulatory assistance for patients with right heart failure. In February 2018, the FDA approved the sale of the Impella ventricular support systems. Deaths and strokes in the database overall increased after the Impella gained regulatory approval in 2008, compared to earlier years; mortality went up 17% and strokes more than tripled. In July 2023, the FDA issued a Class I recall for all Impella left-sided blood pumps due to risk of motor damage after contact with a transcatheter aortic valve replacement stent. In March 2024, the FDA issued a warning about Impella left-sided blood pumps being linked to 49 deaths due to left ventricular perforation or wall rupture. See also Protected percutaneous coronary intervention References Medical devices Implants (medicine) Cardiology Prosthetics Interventional cardiology
Impella
[ "Biology" ]
1,439
[ "Medical devices", "Medical technology" ]
29,855,647
https://en.wikipedia.org/wiki/Arsenic%20biochemistry
Arsenic biochemistry is the set of biochemical processes that can use arsenic or its compounds, such as arsenate. Arsenic is a moderately abundant element in Earth's crust, and although many arsenic compounds are often considered highly toxic to most life, a wide variety of organoarsenic compounds are produced biologically and various organic and inorganic arsenic compounds are metabolized by numerous organisms. This pattern is general for other related elements, including selenium, which can exhibit both beneficial and deleterious effects. Arsenic biochemistry has become topical since many toxic arsenic compounds are found in some aquifers, potentially affecting many millions of people via biochemical processes. Sources of arsenic Organoarsenic compounds in nature The evidence that arsenic may be a beneficial nutrient at trace levels below the background to which living organisms are normally exposed has been reviewed. Some organoarsenic compounds found in nature are arsenobetaine and arsenocholine, both being found in many marine organisms. Some As-containing nucleosides (sugar derivatives) are also known. Several of these organoarsenic compounds arise via methylation processes. For example, the mold Scopulariopsis brevicaulis produces significant amounts of trimethylarsine if inorganic arsenic is present. The organic compound arsenobetaine is found in some marine foods such as fish and algae, and also in mushrooms in larger concentrations. In clean environments, the edible mushroom species Cyanoboletus pulverulentus hyperaccumulates arsenic at concentrations reaching 1,300 mg/kg in dry weight; cacodylic acid is the major As compound. A very unusual composition of organoarsenic compounds was found in deer truffles (Elaphomyces spp.). The average person's intake is about 10–50 μg/day. Values of about 1,000 μg are not unusual following consumption of fish or mushrooms; however, there is little danger in eating fish, since this arsenic compound is nearly non-toxic. A historically notable source of arsenic is the green pigments once popular in wallpapers, e.g. Paris green. A variety of illnesses have been blamed on this compound, although its toxicity has been exaggerated. Trimethylarsine, once known as Gosio's gas, is an intensely malodorous organoarsenic compound that is commonly produced by microbial action on inorganic arsenic substrates. Arsenic (V) compounds are easily reduced to arsenic (III) and could have served as an electron acceptor on primordial Earth. Lakes that contain a substantial amount of dissolved inorganic arsenic harbor arsenic-tolerant biota. Incorrect claims of arsenic-based life (phosphorus substitution) Although phosphate and arsenate are structurally similar, there is no evidence that arsenic replaces phosphorus in DNA or RNA. A 2010 experiment involving the bacterium GFAJ-1 that made this claim had been refuted by 2012. Anthropogenic arsenic compounds Anthropogenic (man-made) sources of arsenic, like the natural sources, are mainly arsenic oxides and the associated anions. Man-made sources of arsenic include wastes from mineral processing and from swine and poultry farms. For example, many ores, especially sulfide minerals, are contaminated with arsenic, which is released during roasting (burning in air). In such processing, arsenide is converted to arsenic trioxide, which is volatile at high temperatures and is released into the atmosphere. Poultry and swine farms make heavy use of the organoarsenic compound roxarsone as an antibiotic in feed. 
Some wood is treated with copper arsenates as a preservative. The mechanisms by which these sources affect "downstream" living organisms remain uncertain but are probably diverse. One commonly cited pathway involves methylation. The monomethylated acid, methanearsonic acid (CH3AsO(OH)2), is a precursor to fungicides (tradename Neoasozin) used in the cultivation of rice and cotton. Derivatives of phenylarsonic acid (C6H5AsO(OH)2) are used as feed additives for livestock, including 4-hydroxy-3-nitrobenzenearsonic acid (3-NHPAA or Roxarsone), ureidophenylarsonic acid, and p-arsanilic acid. These applications are controversial as they introduce soluble forms of arsenic into the environment. Arsenic-based drugs Despite, or possibly because of, its long-known toxicity, arsenic-containing potions and drugs have a history in medicine and quackery that continues into the 21st century. Starting in the early 19th century and continuing into the 20th century, Fowler's solution, a toxic concoction of sodium arsenite, was sold. The organoarsenic compound Salvarsan was the first synthetic chemotherapeutic agent, discovered by Paul Ehrlich. The treatment, however, led to many problems causing long-lasting health complications. Around 1943 it was finally superseded by penicillin. The related drug melarsoprol is still in use against late-stage African trypanosomiasis (sleeping sickness), despite its high toxicity and possibly fatal side effects. Arsenic trioxide (As2O3) inhibits cell growth and induces apoptosis (programmed cell death) in certain types of cancer cells, which are normally immortal and can multiply without limit. In combination with all-trans retinoic acid, it is FDA-approved as a first-line treatment for acute promyelocytic leukemia. Methylation of arsenic Inorganic arsenic and its compounds, upon entering the food chain, are progressively metabolised (detoxified) through a process of methylation. The methylation occurs through alternating reductive and oxidative methylation reactions, that is, reduction of pentavalent to trivalent arsenic followed by addition of a methyl group (CH3). In mammals, methylation occurs in the liver by methyltransferases, the products being (CH3)2AsOH (dimethylarsinous acid) and (CH3)2As(O)OH (dimethylarsinic acid), which have the oxidation states As(III) and As(V), respectively. Although the mechanism of methylation of arsenic in humans has not been elucidated, the source of the methyl group is methionine, which suggests a role of S-adenosyl methionine. Exposure to toxic doses begins when the liver's methylation capacity is exceeded or inhibited. There are two major forms of arsenic that can enter the body, arsenic (III) and arsenic (V). Arsenic (III) enters cells through aquaporins 7 and 9, which are a type of aquaglyceroporin. Arsenic (V) compounds use phosphate transporters to enter cells. Arsenic (V) can be converted to arsenic (III) by the enzyme purine nucleoside phosphorylase. This is classified as a bioactivation step, as although arsenic (III) is more toxic, it is more readily methylated. There are two routes by which inorganic arsenic compounds are methylated. The first route uses Cyt19 arsenic methyltransferase to methylate arsenic (III) to a mono-methylated arsenic (V) compound. This compound is then converted to a mono-methylated arsenic (III) compound using Glutathione S-Transferase Omega-1 (GSTO1). 
The mono-methylated arsenic (V) compound can then be methylated again by Cyt19 arsenic methyltransferase, which forms a dimethyl arsenic (V) compound, which can be converted to a dimethyl arsenic (III) compound by Glutathione S-Transferase Omega-1 (GSTO1). The other route uses glutathione (GSH) to conjugate with arsenic (III) to form an arsenic (GS)3 complex. This complex can form a monomethylated arsenic (III) GS complex using Cyt19 arsenic methyltransferase, and this monomethylated GS complex is in equilibrium with monomethylated arsenic (III). Cyt19 arsenic methyltransferase can methylate the complex one more time, forming a dimethylated arsenic GS complex, which is in equilibrium with a dimethyl arsenic (III) complex. Both the mono-methylated and di-methylated arsenic compounds can readily be excreted in urine. However, the monomethylated compound was shown to be more reactive and more toxic than the inorganic arsenic compounds to human hepatocytes (liver), keratinocytes in the skin, and bronchial epithelial cells (lungs). Studies in experimental animals and humans show that both inorganic arsenic and methylated metabolites cross the placenta to the fetus; however, there is evidence that methylation is increased during pregnancy and that it could be highly protective for the developing organism. Enzymatic methylation of arsenic is a detoxification process; arsenic can be methylated to methylarsenite, dimethylarsenite or trimethylarsenite, all of which are trivalent. The methylation is catalyzed by arsenic methyltransferase (AS3MT) in mammals, which transfers a methyl group from the cofactor S-adenosylmethionine (SAM) to arsenic (III). An orthologue of AS3MT is found in bacteria and is called CmArsM. This enzyme was tested in three states (ligand-free, arsenic (III)-bound and SAM-bound). Arsenic (III) binding sites usually use thiol groups of cysteine residues. The catalysis involves the thiolates of Cys72, Cys174, and Cys224. In an SN2 reaction, the positive charge on the SAM sulfur atom pulls the bonding electron from the carbon of the methyl group, which interacts with the arsenic lone pair to form an As−C bond, leaving SAH. Excretion In humans, the major route of excretion of most arsenic compounds is via the urine. The biological half-life of inorganic arsenic is about 4 days, but is slightly shorter following exposure to arsenate than to arsenite. The main metabolites excreted in the urine of humans exposed to inorganic arsenic are mono- and dimethylated arsenic acids, together with some unmetabolized inorganic arsenic. The biotransformation of arsenic for excretion occurs primarily through the nuclear factor erythroid 2-related factor 2 (Nrf2) pathway. Under normal conditions, Nrf2 is bound to Kelch-like ECH-associated protein 1 (Keap1) in its inactive form. With the uptake of arsenic within cells and the subsequent reactions that result in the production of reactive oxygen species (ROS), Nrf2 unbinds and becomes active. Keap1 has reactive thiol moieties that bind ROS or electrophilic arsenic species such as monomethylated arsenic (III), inducing the release of Nrf2, which then travels through the cytoplasm to the nucleus. Nrf2 then activates the antioxidant responsive element (ARE) as well as the electrophilic responsive element (EpRE), both of which contribute to the increase of antioxidant proteins. 
Of particular note among these antioxidant proteins are heme oxygenase 1 (HO-1), NAD(P)H-quinone oxidoreductase 1 (NQO1), and γ-glutamylcysteine synthase (γGCS), which work in conjunction to reduce oxidative species such as hydrogen peroxide and so decrease the oxidative stress on the cell. The increase in γGCS causes an increased production of arsenic triglutathione (As(SG)3), an important adduct that is taken up by multidrug resistance-associated protein 1 or 2 (MRP1 or MRP2), which removes the arsenic from the cell and into bile for excretion. This adduct can also decompose back into inorganic arsenic. Of particular note in the excretion of arsenic are the multiple methylation steps that take place, which may increase the toxicity of arsenic because monomethylarsonous acid (MMA(III)) is a potent inhibitor of glutathione peroxidase, glutathione reductase, pyruvate dehydrogenase, and thioredoxin reductase. Arsenic toxicity Arsenic is a cause of mortality throughout the world; associated problems include heart, respiratory, gastrointestinal, liver, nervous and kidney diseases. Arsenic interferes with cellular longevity by allosteric inhibition of an essential metabolic enzyme, the pyruvate dehydrogenase (PDH) complex, which catalyzes the oxidation of pyruvate to acetyl-CoA by NAD+. With the enzyme inhibited, the energy system of the cell is disrupted, resulting in apoptosis. Biochemically, arsenic prevents use of thiamine, resulting in a clinical picture resembling thiamine deficiency. Poisoning with arsenic can raise lactate levels and lead to lactic acidosis. Genotoxicity involves inhibition of DNA repair and DNA methylation. The carcinogenic effect of arsenic arises from the oxidative stress induced by arsenic. Arsenic's high toxicity naturally led to the development of a variety of arsenic compounds as chemical weapons, e.g. dimethylarsenic chloride. Some were employed as chemical warfare agents, especially in World War I. This threat led to many studies on antidotes and an expanded knowledge of the interaction of arsenic compounds with living organisms. One result was the development of antidotes such as British anti-Lewisite. Many such antidotes exploit the affinity of As(III) for thiolate ligands, which convert highly toxic organoarsenicals to less toxic derivatives. It is generally assumed that trivalent arsenicals bind to cysteine residues in proteins. By contrast, arsenic trioxide is an approved and effective chemotherapeutic drug for the treatment of acute promyelocytic leukemia (APL). Toxicity of pentavalent arsenicals Due to their similar structure and properties, pentavalent arsenic metabolites are capable of replacing the phosphate group in many metabolic pathways. The replacement of phosphate by arsenate is initiated when arsenate reacts with glucose and gluconate in vitro. This reaction generates glucose-6-arsenate and 6-arsenogluconate, which act as analogs for glucose-6-phosphate and 6-phosphogluconate. At the substrate level, during glycolysis, glucose-6-arsenate binds as a substrate to glucose-6-phosphate dehydrogenase, and also inhibits hexokinase through negative feedback. Unlike phosphate, whose incorporation during glycolysis conserves energy, the presence of arsenate restricts the generation of ATP by forming an unstable anhydride product through the reaction with D-glyceraldehyde-3-phosphate. The anhydride thus generated, 1-arsenato-3-phospho-D-glycerate, readily hydrolyzes due to the longer bond length of As−O compared to P−O. 
At the mitochondrial level, arsenate uncouples the synthesis of ATP by binding to ADP in the presence of succinate, forming an unstable compound that ultimately results in a decreased net gain of ATP. Arsenite (III) metabolites, on the other hand, have limited effect on ATP production in red blood cells. Toxicity of trivalent arsenicals Enzymes and receptors that contain thiol or sulfhydryl functional groups are actively targeted by arsenite (III) metabolites. These sulfur-containing compounds are normally glutathione and the amino acid cysteine. Arsenite derivatives generally have higher binding affinity than the arsenate metabolites. These bindings restrict the activity of certain metabolic pathways. For example, pyruvate dehydrogenase (PDH) is inhibited when monomethylarsonous acid (MMA(III)) targets the thiol group of the lipoic acid cofactor. PDH catalyzes the formation of acetyl-CoA, so its inhibition ultimately limits the production of ATP in the electron transport chain, as well as the production of gluconeogenesis intermediates. Oxidative stress Arsenic can cause oxidative stress through the formation of reactive oxygen species (ROS) and reactive nitrogen species (RNS). Reactive oxygen species are produced by the enzyme NADPH oxidase, which transfers electrons from NADPH to oxygen, synthesizing superoxide, a reactive free radical. This superoxide can react further to form hydrogen peroxide and other reactive oxygen species. NADPH oxidase generates more reactive oxygen species in the presence of arsenic because arsenic upregulates its subunit p22phox, which is responsible for the electron transfer. The reactive oxygen species are capable of stressing the endoplasmic reticulum, which increases the amount of unfolded protein response signals. This leads to inflammation, cell proliferation, and eventually to cell death. Another mechanism by which reactive oxygen species cause cell death is cytoskeletal rearrangement, which affects the contractile proteins. Reactive nitrogen species arise once the reactive oxygen species damage the mitochondria. This leads to the formation of reactive nitrogen species, which are responsible for damaging DNA in arsenic poisoning. Mitochondrial damage is known to cause the release of reactive nitrogen species, due to the reaction between superoxide and nitric oxide (NO). Nitric oxide (NO) is a part of cell regulation, including cellular metabolism, growth, division and death. Nitric oxide reacts with reactive oxygen species to form peroxynitrite. In cases of chronic arsenic exposure, nitric oxide levels are depleted due to these superoxide reactions. The enzyme NO synthase (NOS) uses L-arginine to form nitric oxide, but this enzyme is inhibited by monomethylated arsenic (III) compounds. DNA damage Arsenic is reported to cause DNA modifications such as aneuploidy, micronuclei formation, chromosome abnormality, deletion mutations, sister chromatid exchange and crosslinking of DNA with proteins. It has been demonstrated that arsenic does not directly interact with DNA and is considered a poor mutagen; instead, it enhances the mutagenicity of other carcinogens. For instance, a synergistic increase in the mutagenic activity of arsenic with UV light has been observed in human and other mammalian cells after exposing UV-treated cells to arsenic. 
A series of experimental observations suggests that arsenic genotoxicity is primarily linked to the generation of reactive oxygen species (ROS) during its biotransformation. ROS production can generate DNA adducts, DNA strand breaks, crosslinks and chromosomal aberrations. The oxidative damage is caused by modification of DNA nucleobases, in particular 8-oxoguanine (8-OHdG), which leads to G:C to T:A mutations. Inorganic arsenic can also cause DNA strand breaks even at low concentrations. Inhibition of DNA repair Inhibition of DNA repair processes is considered one of the main mechanisms of inorganic arsenic genotoxicity. Nucleotide excision repair (NER) and base excision repair (BER) are the processes implicated in the repair of DNA base damage induced by ROS after arsenic exposure. In particular, the NER mechanism is the major pathway for repairing bulky distortions in the DNA double helix, while the BER mechanism is mainly implicated in the repair of single-strand breaks induced by ROS; inorganic arsenic can repress the BER mechanism as well. Exposure of isolated lymphocytes to arsenic causes decreased expression of the DNA repair protein ERCC1. Consistent with an inhibitory effect on DNA repair, lymphocytes from arsenic-exposed individuals have higher levels of DNA damage. Arsenic can act as a co-carcinogen by inhibiting repair of DNA damage through its interaction with sensitive zinc finger DNA repair proteins. Neurodegenerative mechanisms Arsenic is highly detrimental to the innate and the adaptive immune system of the body. When the amount of unfolded and misfolded proteins in endoplasmic reticulum stress is excessive, the unfolded protein response (UPR) is activated to increase the activity of several receptors that are responsible for the restoration of homeostasis. The inositol-requiring enzyme-1 (IRE1) and protein kinase RNA-like endoplasmic reticulum kinase (PERK) are two receptors that restrict the rate of translation. On the other hand, the unfolded proteins are corrected by the production of chaperones, which are induced by the activating transcription factor 6 (ATF6). If the number of erroneous proteins continues to rise, a further mechanism is activated that triggers apoptosis. Arsenic has been shown to increase the activity of these protein sensors. Immune dysfunction Arsenic exposure in small children distorts the ratio of T helper cells (CD4) to cytotoxic T cells (CD8), leading to immunosuppression. In addition, arsenic increases the number of inflammatory molecules secreted by macrophages. The excess of granulocytes and monocytes leads to a chronic state of inflammation, which might result in cancer development. Arsenic poisoning treatment Three molecules serve as chelating agents that bind arsenic: British anti-Lewisite (BAL, dimercaprol), succimer (DMSA) and unithiol (DMPS). When these agents chelate inorganic arsenic, it is converted into an organic form of arsenic, because it is bound to the organic chelating agent. The sulfur atoms of the thiol groups are the site of interaction with arsenic: the thiol groups are nucleophilic, while the arsenic atoms are electrophilic. Once bound to the chelating agent, the complexes can be excreted, and free inorganic arsenic is thereby removed from the body. Other chelating agents can be used, but may cause more side effects than BAL, DMSA and DMPS. 
DMPS and DMSA also have a higher therapeutic index than BAL. These drugs are effective against acute arsenic poisoning, that is, the immediate effects of exposure, such as headache, vomiting, or sweating. Chronic poisonous effects, such as organ damage, arise later and unexpectedly, and are usually too advanced to prevent once they appear. Therefore, action should be taken as soon as acute poisonous effects arise. See also Arsenic compounds Extremophile Geomicrobiology Hypothetical types of biochemistry Organoarsenic chemistry References Arsenic Biology and pharmacology of chemical elements
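The stepwise methylation (inorganic arsenic to MMA to DMA) and urinary excretion described in the sections above can be caricatured as a chain of first-order reactions. The Python sketch below illustrates that picture with a simple forward-Euler integration; the rate constants are invented purely for illustration, and the simplification that only the dimethylated form is excreted departs from the text, which notes that the monomethylated form is also excreted directly.

import numpy as np

# Toy first-order chain: inorganic As -> MMA -> DMA -> excreted in urine.
# Rate constants (per day) are assumed for illustration only; real kinetics
# are more complex, and MMA is also excreted directly.
k1, k2, k3 = 0.5, 0.4, 0.3
y = np.array([1.0, 0.0, 0.0, 0.0])   # dose fractions: iAs, MMA, DMA, excreted
dt, days = 0.01, 10.0
for _ in range(int(days / dt)):       # simple forward-Euler time stepping
    rates = np.array([
        -k1 * y[0],                   # inorganic pool consumed by methylation
        k1 * y[0] - k2 * y[1],        # monomethylated pool
        k2 * y[1] - k3 * y[2],        # dimethylated pool
        k3 * y[2],                    # cumulative urinary excretion
    ])
    y = y + dt * rates
print("fractions after 10 days (iAs, MMA, DMA, excreted):", y.round(3))

With these toy constants the dose disappears on a time scale of a few days, qualitatively consistent with the roughly 4-day biological half-life of inorganic arsenic cited above.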
Arsenic biochemistry
[ "Chemistry", "Biology" ]
4,845
[ "Biology and pharmacology of chemical elements", "Pharmacology", "Biochemistry", "Properties of chemical elements" ]
29,858,554
https://en.wikipedia.org/wiki/Moeller%20stain
Moeller staining involves the use of a steamed dye reagent in order to increase the stainability of endospores. Carbol fuchsin is the primary stain used in this method. Endospores are stained red, while the counterstain methylene blue stains the vegetative bacteria blue. Endospores are surrounded by a tough spore coat, which makes them highly resistant to excessive heat, freezing, and desiccation, as well as to chemical agents. More importantly for identification, spores are resistant to commonly employed staining techniques; therefore, alternative staining methods are required. Method Carbol fuchsin is applied to a heat-fixed slide. The slide is then heated over a Bunsen burner, or suspended over a hot water bath, covered with a paper towel, and steamed for 3 minutes. The slide is rinsed with acidified ethanol and counterstained with methylene blue. An improved method involves the addition of the surfactant Tergitol 7 to the carbol fuchsin stain, and the omission of the steaming step. See also Schaeffer–Fulton stain References Microbiology techniques Staining Bacteriology Microscopy
Moeller stain
[ "Chemistry", "Biology" ]
242
[ "Staining", "Microbiology techniques", "Cell imaging", "Microscopy" ]
31,178,002
https://en.wikipedia.org/wiki/Heavy%20fermion%20superconductor
Heavy fermion superconductors are a type of unconventional superconductor. The first heavy fermion superconductor, CeCu2Si2, was discovered by Frank Steglich in 1978. Since then, over 30 heavy fermion superconductors have been found (in materials based on Ce and U), with critical temperatures up to 2.3 K (in CeCoIn5). Heavy fermion materials are intermetallic compounds containing rare earth or actinide elements. The f-electrons of these atoms hybridize with the normal conduction electrons, leading to quasiparticles with an enhanced effective mass. Specific heat measurements show that the Cooper pairs in the superconducting state are also formed by these heavy quasiparticles. In contrast to normal superconductors, heavy fermion superconductivity cannot be described by BCS theory. Due to the large effective mass, the Fermi velocity is strongly reduced, so that the characteristic electronic energy scale becomes comparable to the Debye energy. The usual picture of fast electrons polarizing a slow lattice, which underlies the phonon-mediated attraction, therefore breaks down. Some heavy fermion superconductors are candidate materials for the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) phase. In particular, there is evidence that CeCoIn5 close to the critical field is in an FFLO state. References Superconductivity Correlated electrons Condensed matter physics
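A rough order-of-magnitude estimate illustrates how the mass enhancement suppresses the Fermi velocity via vF = ħkF/m*. In the Python sketch below, the Fermi wavevector of about 1 per angstrom is an assumed, generic metallic value, not a measured one for any particular compound.

# Illustrative Fermi-velocity estimate for several effective masses,
# using v_F = hbar * k_F / m*.  k_F ~ 1e10 1/m (about 1 per angstrom)
# is an assumed generic value, not data for a specific material.
hbar = 1.054571817e-34   # J*s
m_e = 9.1093837015e-31   # kg
k_F = 1.0e10             # 1/m

for mass_ratio in (1, 100, 1000):   # m*/m_e: free electron vs heavy fermion
    v_F = hbar * k_F / (mass_ratio * m_e)
    print(f"m*/m_e = {mass_ratio:4d}: v_F ~ {v_F:.1e} m/s")

For m*/m_e of order 100 to 1000 the estimate drops from about 10^6 m/s to 10^3-10^4 m/s, i.e., to the scale of sound velocities in solids, which is why the adiabatic separation between fast electrons and a slow lattice fails.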
Heavy fermion superconductor
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
290
[ "Electrical resistance and conductance", "Physical quantities", "Superconductivity", "Phases of matter", "Materials science", "Condensed matter physics", "Correlated electrons", "Matter" ]
31,178,650
https://en.wikipedia.org/wiki/Adinkra%20symbols%20%28physics%29
In supergravity and supersymmetric representation theory, Adinkra symbols are a graphical representation of supersymmetric algebras. Mathematically they can be described as colored finite connected simple graphs that are bipartite and n-regular. Their name is derived from the West African Adinkra symbols of the same name, and they were introduced by Michael Faux and Sylvester James Gates in 2004. Overview One approach to the representation theory of super Lie algebras is to restrict attention to representations in one space-time dimension having N supersymmetry generators, i.e., to one-dimensional N-extended superalgebras. In that case, the defining algebraic relationship among the supersymmetry generators Q_I reduces to {Q_I, Q_J} = 2iδ_IJ ∂_τ. Here ∂_τ denotes partial differentiation along the single space-time coordinate τ. One simple realization of the N = 1 algebra consists of a single bosonic field φ, a fermionic field ψ, and a generator Q which acts as Qφ = iψ and Qψ = ∂_τφ. Since we have just one supersymmetry generator in this case, the superalgebra relation reduces to Q² = i∂_τ, which is clearly satisfied: Q²φ = Q(iψ) = i∂_τφ and Q²ψ = Q(∂_τφ) = i∂_τψ. We can represent this algebra graphically using one solid vertex, one hollow vertex, and a single colored edge connecting them. See also Feynman diagram References External links http://golem.ph.utexas.edu/category/2007/08/adinkras.html https://www.flickr.com/photos/science_and_thecity/2796684536/ https://www.flickr.com/photos/science_and_thecity/2795836787/ http://www.thegreatcourses.com/courses/superstring-theory-the-dna-of-reality.html Supersymmetry
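The closure of the algebra on this two-field realization can be checked mechanically. The Python sketch below (using sympy) encodes the action Qφ = iψ, Qψ = ∂_τφ given above and verifies Q² = i∂_τ on both fields; the linearity of Q and its commutation with the time derivative are applied by hand in the intermediate steps.

import sympy as sp

tau = sp.symbols('tau')
phi = sp.Function('phi')(tau)   # bosonic component field
psi = sp.Function('psi')(tau)   # fermionic component field

# Action of the single supersymmetry generator Q:
Q_phi = sp.I * psi              # Q phi = i psi
Q_psi = sp.diff(phi, tau)       # Q psi = d phi / d tau

# Apply Q twice, using linearity and the fact that Q commutes with d/dtau:
Q2_phi = sp.I * Q_psi           # Q(Q phi) = Q(i psi) = i Q psi
Q2_psi = sp.diff(Q_phi, tau)    # Q(Q psi) = d(Q phi)/dtau

# Verify the superalgebra relation Q^2 = i d/dtau on both fields.
assert sp.simplify(Q2_phi - sp.I * sp.diff(phi, tau)) == 0
assert sp.simplify(Q2_psi - sp.I * sp.diff(psi, tau)) == 0
print("Q^2 = i d/dtau holds on both phi and psi")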
Adinkra symbols (physics)
[ "Physics" ]
360
[ "Unsolved problems in physics", "Quantum mechanics", "Quantum physics stubs", "Physics beyond the Standard Model", "Supersymmetry", "Symmetry" ]
31,182,012
https://en.wikipedia.org/wiki/Cenderitide
Cenderitide (also known as chimeric natriuretic peptide or CD-NP) is a natriuretic peptide developed by the Mayo Clinic as a potential treatment for heart failure. Cenderitide is created by the fusion of the 15-amino-acid C-terminus of the snake venom dendroaspis natriuretic peptide (DNP) with the full C-type natriuretic peptide (CNP) structure. This peptide chimera is a dual activator of the natriuretic peptide receptors NPR-A and NPR-B and therefore exhibits the natriuretic and diuretic properties of DNP, as well as the antiproliferative and antifibrotic properties of CNP. Molecular problem: fibrosis When faced with pressure overload, the heart attempts to compensate with a number of structural alterations, including hypertrophy of cardiomyocytes and an increase of extracellular matrix (ECM) proteins. Rapid accumulation of ECM proteins causes excessive fibrosis, resulting in decreased myocardial compliance and increased myocardial stiffness. The exact mechanisms involved in excessive fibrosis are not fully understood, but there is evidence that supports involvement of the local growth factors FGF-2, TGF-β1 and platelet-derived growth factor. TGF-β1 plays an important role in cardiac remodelling through the stimulation of fibroblast proliferation, ECM deposition and myocyte hypertrophy. The increase in TGF-β1 expression in a pressure-overloaded heart correlates with the degree of fibrosis, suggesting TGF-β1 involvement in the progression from compensated hypertrophy to failure. Through an autocrine mechanism, TGF-β1 acts on fibroblasts by binding TGF-β receptors 1 and 2. Upon receptor activation, the receptor-associated transcription factor Smad becomes phosphorylated and associates with Co-Smad. This newly formed Smad-Co-Smad complex enters the nucleus, where it acts as a transcription factor modulating gene expression. Cardiac remodelling of the ECM is also regulated by the CNP/NPR-B pathway, as demonstrated by the improved outcomes in transgenic mice with CNP over-expression subjected to myocardial infarction. Binding of CNP to NPR-B catalyzes the synthesis of cGMP, which is responsible for mediating the anti-fibrotic effects of CNP. Fibrotic heart tissue is associated with an increased risk of ventricular dysfunction, which can ultimately lead to heart failure. Thus, anti-fibrotic strategies are a promising approach in the prevention and treatment of heart failure. Molecular mechanism As cenderitide interacts with both NPR-A and NPR-B, this drug has antifibrotic potential. Binding of cenderitide to NPR-B elicits an antifibrotic response by catalyzing the formation of cGMP, similar to the response seen with endogenous CNP. Additionally, in vitro study of human fibroblasts demonstrates that cenderitide reduces TGF-β1-induced collagen production. These two proposed mechanisms illustrate therapeutic potential for the reduction of fibrotic remodelling in the hypertensive heart. Through the combined effects of CNP and DNP, cenderitide treatment results in a reduction in stress on the heart (through natriuresis/diuresis) and inhibition of pro-fibrotic remodeling pathways. References Drugs acting on the cardiovascular system Peptides Experimental drugs
Cenderitide
[ "Chemistry" ]
742
[ "Biomolecules by chemical classification", "Peptides", "Molecular biology" ]
31,182,986
https://en.wikipedia.org/wiki/ArrayTrack
ArrayTrack is a multi-purpose bioinformatics tool primarily used for microarray data management, analysis, and interpretation. ArrayTrack was developed to support in-house filter array research for the U.S. Food and Drug Administration in 2001, and was made freely available to the public as an integrated research tool for microarrays in 2003. Since then, ArrayTrack has averaged about 5,000 users per year. It is regularly updated by the National Center for Toxicological Research. Features ArrayTrack is composed of three major components: Study Database, Tools, and Libraries, which primarily handle data management, analysis, and interpretation, respectively. Each of these components can be directly accessed from the other two, e.g., analysis Tools can be used directly on experimental data stored in the Study Database, and significant genes discovered from the results can be queried in the Libraries to view additional annotations and associated proteins, pathways, Gene Ontology terms, etc. Study Database: The Study Database contains user-imported experiment data, including both raw data and annotation data. It is mainly used to manage microarray data, but also supports proteomics and metabolomics data. Imported data are initially private to the owner but can be made available to other users. The Study Database also stores significant gene lists, which can be created directly from data analysis results in ArrayTrack. Tools: A wide variety of analysis and visualization Tools are available in ArrayTrack, including but not limited to: statistical analysis Tools including T-Test, ANOVA, and SAM-Test; unsupervised pattern discovery Tools including Hierarchical Clustering Analysis and Principal Component Analysis; and model prediction Tools including K-Nearest Neighbors and Linear Discriminant Analysis. Although ArrayTrack's Tools are designed to accommodate imported data, they are also compatible with external data. Libraries: ArrayTrack hosts a collection of Libraries which store specific annotation data, viewable in a dynamic spreadsheet format. There is a Library specific for genes, proteins, pathways, Gene Ontology terms, chemical compounds, SNPs, QTL, chip types, and more. Each Library supports multi-input searching, sorting, filtering, copy-pasting, and exporting. Libraries can be directly queried for the desired contents of stored gene lists, analysis results, and other Libraries. A specific entry in any Library can be linked to the equivalent entry in many popular public knowledge bases, including the original sources of data. ArrayTrack is directly integrated with a variety of other bioinformatics software, such as the pathway analysis tools GeneGo MetaCore and Ingenuity Pathway Analysis. Accessibility ArrayTrack is freely available to the public and can be accessed online. It is run on the client's computer using a Java-based interface that connects to an Oracle database hosted by the FDA. As a Java-based application, ArrayTrack is compatible with Windows, Mac, and Linux machines. References External links ArrayTrack main page Bioinformatics software Microarrays
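For readers unfamiliar with the kinds of analyses listed above, the sketch below reproduces the most common one, a per-gene two-group t-test combined with a fold-change cutoff, on synthetic data. It uses numpy/scipy rather than ArrayTrack itself, and the matrix shape, cutoffs, and spiked-in genes are all invented for illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic log2 expression matrix: 500 genes, 5 control and 5 treated arrays.
control = rng.normal(loc=8.0, scale=1.0, size=(500, 5))
treated = rng.normal(loc=8.0, scale=1.0, size=(500, 5))
treated[:25] += 2.0   # spike in 25 differentially expressed genes

# Per-gene two-sample t-test and log2 fold change (treated vs. control).
t_stat, p_val = stats.ttest_ind(treated, control, axis=1)
log2_fc = treated.mean(axis=1) - control.mean(axis=1)

# A typical "significant gene list": p < 0.01 and |log2 FC| > 1.
significant = np.flatnonzero((p_val < 0.01) & (np.abs(log2_fc) > 1.0))
print(f"{significant.size} genes flagged; first few indices: {significant[:5]}")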
ArrayTrack
[ "Chemistry", "Materials_science", "Biology" ]
631
[ "Biochemistry methods", "Genetics techniques", "Microtechnology", "Bioinformatics software", "Microarrays", "Bioinformatics", "Molecular biology techniques" ]
31,184,294
https://en.wikipedia.org/wiki/Treatment%20of%20infections%20after%20exposure%20to%20ionizing%20radiation
Infections caused by exposure to ionizing radiation can be extremely dangerous, and are of public and government concern. Numerous studies have demonstrated that the susceptibility of organisms to systemic infection increases following exposure to ionizing radiation. The risk of systemic infection is higher when the organism has a combined injury, such as a conventional blast, thermal burn, or radiation burn. There is a direct quantitative relationship between the magnitude of the neutropenia that develops after exposure to radiation and the increased risk of developing infection. Because no controlled studies of therapeutic intervention in humans are available, almost all of the current information is based on animal research. Cause of infection Infections caused by ionizing radiation can be endogenous, originating from the oral and gastrointestinal bacterial flora, or exogenous, originating from breached skin following trauma. The organisms causing endogenous infections are generally Gram-negative bacilli such as Enterobacteriaceae (e.g. Escherichia coli, Klebsiella pneumoniae, Proteus spp.) and Pseudomonas aeruginosa. Exposure to higher doses of radiation is associated with systemic anaerobic infections due to Gram-negative bacilli and Gram-positive cocci. Fungal infections can also emerge in those who fail antimicrobial therapy and stay febrile for over 7–10 days. Exogenous infections can be caused by organisms that colonize the skin, such as Staphylococcus aureus or Streptococcus spp., and organisms that are acquired from the environment, such as Pseudomonas spp. Principles of treatment The management of established or suspected infection following exposure to radiation (characterized by neutropenia and fever) is similar to that used for other febrile neutropenic patients. However, important differences between the two conditions exist. The patient who develops neutropenia after radiation is also susceptible to irradiation damage to other tissues, such as the gastrointestinal tract, lungs and the central nervous system. These patients may require therapeutic interventions not needed in other types of neutropenic infections. The response of irradiated animals to antimicrobial therapy is sometimes unpredictable, as was evident in experimental studies where metronidazole and pefloxacin therapies were detrimental. Antimicrobial agents that decrease the number of the strict anaerobic component of the gut flora (e.g., metronidazole) generally should not be given, because they may enhance systemic infection by aerobic or facultative bacteria, thus facilitating mortality after irradiation. Choice of antimicrobials An empirical regimen of antibiotics should be selected based on the pattern of bacterial susceptibility and nosocomial infections in the particular area and institution, and on the degree of neutropenia. Broad-spectrum empirical therapy (see below for choices) with high doses of one or more antibiotics should be initiated at the onset of fever. These antimicrobials should be directed at the eradication of Gram-negative aerobic organisms (e.g. Enterobacteriaceae, Pseudomonas) that account for more than three-fourths of the isolates causing sepsis. Because aerobic and facultative Gram-positive bacteria (mostly alpha-hemolytic streptococci) cause sepsis in about a quarter of the victims, coverage for these organisms may be necessary in some individuals. 
A standardized plan for the management of febrile, neutropenic patients must be devised in each institution or agency. Empirical regimens must contain antibiotics broadly active against Gram-negative aerobic bacteria (a quinolone [e.g. ciprofloxacin, levofloxacin], an antipseudomonal cephalosporin [e.g. cefepime, ceftazidime], or an aminoglycoside [e.g. gentamicin, amikacin]). Antibiotics directed against Gram-positive bacteria (amoxicillin, vancomycin, or linezolid) need to be included in instances and institutions where infections due to these organisms are prevalent. These are the antimicrobial agents that can be used for therapy of infection following exposure to irradiation: a. First choice: ciprofloxacin (a second-generation quinolone) or levofloxacin (a third-generation quinolone) +/- amoxicillin or vancomycin. Ciprofloxacin is effective against Gram-negative organisms (including Pseudomonas species) but has poor coverage for Gram-positive organisms (including Staphylococcus aureus and Streptococcus pneumoniae) and some atypical pathogens. Levofloxacin has expanded Gram-positive coverage (penicillin-sensitive and penicillin-resistant S. pneumoniae) and expanded activity against atypical pathogens. b. Second choice: ceftriaxone (a third-generation cephalosporin) or cefepime (a fourth-generation cephalosporin) +/- amoxicillin or vancomycin. Cefepime exhibits an extended spectrum of activity against Gram-positive bacteria (staphylococci) and Gram-negative organisms, including Pseudomonas aeruginosa and certain Enterobacteriaceae that generally are resistant to most third-generation cephalosporins. Cefepime is an injectable and is not available in an oral form. c. Third choice: gentamicin or amikacin (both aminoglycosides) +/- amoxicillin or vancomycin (all injectable). Aminoglycosides should be avoided whenever feasible due to associated toxicities. The second and third choices of antimicrobials are suitable for children because quinolones are not approved for use in this age group. The use of these agents should be considered in individuals exposed to doses above 1.5 Gy; they should be given to those who develop fever and neutropenia, and should be administered within 48 hours of exposure. An estimation of the exposure dose should be made by biological dosimetry whenever possible and by a detailed history of exposure. If infection is documented by cultures, the empirical regimen may require adjustment to provide appropriate coverage for the specific isolate(s). When the patient remains afebrile, the initial regimen should be continued for a minimum of 7 days. Therapy may need to be continued for at least 21–28 days, or until the risk of infection has declined because of recovery of the immune system. A mass casualty situation may mandate the use of oral antimicrobials. Modification of therapy Modifications of this initial antibiotic regimen should be made when microbiological culture shows specific bacteria that are resistant to the initial antimicrobials. The modification, if needed, should be influenced by a thorough evaluation of the history, physical examination findings, laboratory data, chest radiograph, and epidemiological information. Antifungal coverage with amphotericin B may need to be added. If diarrhea is present, cultures of stool should be examined for enteropathogens (i.e., Salmonella, Shigella, Campylobacter, and Yersinia). Oral and pharyngeal mucositis and esophagitis suggest Herpes simplex infection or candidiasis. 
Either empirical antiviral or antifungal therapy, or both, should be considered. In addition to infections due to neutropenia, a patient with acute radiation syndrome will also be at risk for viral, fungal and parasitic infections. If these types of infection are suspected, cultures should be performed and appropriate medication started if indicated. References External links Armed Forces Radiobiology Research Institute, Uniformed Services University Infection in Radiation Sickness, Washington DC, USA Medical consequences of nuclear war. TRIAGE AND TREATMENT OF RADIATION-INJURED MASS CASUALTIES. Borden Institute 2000s Chapter 5 INFECTIOUS COMPLICATIONS OF RADIATION INJURY. Borden Institute 2000s Radiation health effects
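The three-tier choice described above is essentially a small decision procedure. The Python sketch below encodes it as a reading aid only: the 1.5 Gy threshold and the pediatric quinolone restriction follow the text, while the 18-year adult cutoff is an assumption standing in for "not approved for children", and the function is in no way clinical guidance.

def empirical_regimen(age_years, dose_gy, febrile_neutropenic,
                      gram_positive_prevalent=False):
    """Illustrative encoding of the empirical antibiotic choices above.
    The 1.5 Gy dose criterion and the pediatric quinolone restriction
    follow the text; the adult age cutoff of 18 is an assumption."""
    if dose_gy <= 1.5 and not febrile_neutropenic:
        return "no empirical therapy indicated by these criteria"
    if age_years >= 18:
        regimen = "ciprofloxacin or levofloxacin"   # first choice
    else:
        regimen = "ceftriaxone or cefepime"         # second choice (children)
    if gram_positive_prevalent:
        regimen += " + amoxicillin or vancomycin"   # Gram-positive coverage
    return regimen

print(empirical_regimen(40, 2.0, febrile_neutropenic=True,
                        gram_positive_prevalent=True))
print(empirical_regimen(9, 2.0, febrile_neutropenic=True))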
Treatment of infections after exposure to ionizing radiation
[ "Chemistry", "Materials_science" ]
1,689
[ "Radiation effects", "Radiation health effects", "Radioactivity" ]
31,184,828
https://en.wikipedia.org/wiki/Torquoselectivity
In stereochemistry, torquoselectivity is a special kind of stereoselectivity observed in electrocyclic reactions, defined as "the preference for inward or outward rotation of substituents in conrotatory or disrotatory electrocyclic reactions." Torquoselectivity is not to be confused with the normal diastereoselectivity seen in pericyclic reactions, as it represents a further level of selectivity beyond the Woodward-Hoffmann rules. The name derives from the idea that the substituents in an electrocyclization appear to rotate over the course of the reaction, and thus selection of a single product is equivalent to selection of one direction of rotation (i.e. the direction of torque on the substituents). The concept was originally developed by the American chemist Kendall N. Houk. For ring-closing reactions, it is an example of enantioselectivity, wherein a single enantiomer of a cyclization product is formed from the selective ring closure of the starting material. In a typical electrocyclic ring closing, selection for either conrotatory or disrotatory reaction modes still produces two enantiomers. Torquoselectivity is a discrimination between these possible enantiomers that requires asymmetric induction. Torquoselectivity is also used to describe selective electrocyclic ring openings, in which different directions of rotation produce distinct structural isomers. In these cases, steric strain is often the driving force for the selectivity. Studies have shown that the selectivity can also be changed by the presence of electron-donating and electron-withdrawing groups. Other mechanisms by which torquoselectivity can operate include chiral Lewis acid catalysts, induction via neighboring stereocenters (in which case the torquoselectivity is a case of diastereoselectivity), and axial-to-tetrahedral chirality transfer. An example of the latter case is the torquoselective Nazarov cyclization reaction of a chiral allenyl vinyl ketone. References Stereochemistry
Torquoselectivity
[ "Physics", "Chemistry" ]
447
[ "Spacetime", "Stereochemistry", "Space", "nan" ]
35,431,035
https://en.wikipedia.org/wiki/Derjaguin%20approximation
The Derjaguin approximation (sometimes also referred to as the proximity approximation), named after the Russian scientist Boris Derjaguin, expresses the force profile acting between finite-size bodies in terms of the force profile between two planar semi-infinite walls. This approximation is widely used to estimate forces between colloidal particles, as forces between two planar bodies are often much easier to calculate. The Derjaguin approximation expresses the force F(h) between two bodies as a function of the surface separation h as F(h) = 2π Reff W(h), where W(h) is the interaction energy per unit area between the two planar walls and Reff the effective radius. When the two bodies are two spheres of radii R1 and R2, respectively, the effective radius is given by Reff = R1R2/(R1 + R2). Experimental force profiles between macroscopic bodies as measured with the surface forces apparatus (SFA) or colloidal probe technique are often reported as the ratio F(h)/Reff. Quantities involved and validity The force F(h) between two bodies is related to the interaction free energy U(h) as F(h) = −dU/dh, where h is the surface-to-surface separation. Conversely, when the force profile is known, one can evaluate the interaction energy as U(h) = ∫_h^∞ F(h′) dh′. When one considers two planar walls, the corresponding quantities are expressed per unit area. The disjoining pressure is the force per unit area and can be expressed by the derivative Π(h) = −dW/dh, where W(h) is the surface free energy per unit area. Conversely, one has W(h) = ∫_h^∞ Π(h′) dh′. The main restriction of the Derjaguin approximation is that it is only valid at distances much smaller than the size of the objects involved, namely h ≪ R1 and h ≪ R2. Furthermore, it is a continuum approximation and thus valid at distances larger than the molecular length scale. Even when rough surfaces are involved, this approximation has been shown to be valid in many situations. Its range of validity is restricted to distances larger than the characteristic size of the surface roughness features (e.g., root mean square roughness). Special cases Frequent geometries considered involve the interaction between two identical spheres of radius R, where the effective radius becomes Reff = R/2. In the case of interaction between a sphere of radius R and a planar surface, one has Reff = R. The above two relations can be obtained as special cases of the expression for Reff given further above. For the situation of perpendicularly crossing cylinders as used in the surface forces apparatus, one has Reff = (R1R2)^(1/2), where R1 and R2 are the curvature radii of the two cylinders involved. Simplified derivation Consider the force F(h) between two identical spheres of radius R as an illustration. The surfaces of the two respective spheres are thought to be sliced into infinitesimal disks of width dr and radius r, as shown in the figure. The force is given by the sum of the corresponding swelling pressures between the two disks, F(h) = ∫ Π(x) dA, where x is the distance between the disks and dA the area of one of these disks. This distance can be expressed as x = h + 2y. By considering the Pythagorean theorem on the grey triangle shown in the figure, one has R² = r² + (R − y)². Expanding this expression and realizing that y ≪ R, one finds y ≃ r²/(2R), so that the area of the disk can be expressed as dA = 2πr dr ≃ πR dx. The force can now be written as F(h) = πR ∫_h^∞ Π(x) dx = πR W(h), where W(h) is the surface free energy per unit area introduced above; this agrees with the general formula, since Reff = R/2 for two identical spheres. When introducing the equation above, the upper integration limit was replaced by infinity, which is approximately correct as long as h ≪ R. 
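As a concrete illustration, the Python sketch below evaluates the sphere-sphere formula using the standard plate-plate van der Waals energy W(h) = −A/(12πh²); the Hamaker constant and geometry are generic assumed values, not data for a specific system.

import math

def reff_spheres(r1, r2):
    # Effective radius for two spheres: Reff = R1*R2/(R1 + R2)
    return r1 * r2 / (r1 + r2)

A_H = 1.0e-20    # assumed Hamaker constant in J (typical order of magnitude)
R = 1.0e-6       # sphere radius: 1 micrometre
h = 10.0e-9      # surface separation: 10 nm

# Plate-plate van der Waals energy per unit area: W(h) = -A/(12*pi*h^2)
W = -A_H / (12.0 * math.pi * h**2)

# Derjaguin approximation: F(h) = 2*pi*Reff*W(h)
F = 2.0 * math.pi * reff_spheres(R, R) * W
print(f"Reff = {reff_spheres(R, R):.2e} m, F(10 nm) = {F:.2e} N (attractive)")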
General case In the general case of two convex bodies, the effective radius can be expressed as 1/Reff² = 1/(R1′R1″) + 1/(R2′R2″) + [1/(R1′R2′) + 1/(R1″R2″)] sin²φ + [1/(R1′R2″) + 1/(R1″R2′)] cos²φ, where Ri′ and Ri″ are the principal radii of curvature for the surfaces i = 1 and 2, evaluated at the points of closest approach, and φ is the angle between the planes spanned by the circles with the smaller curvature radii. When the bodies are non-spherical around the position of closest approach, a torque between the two bodies develops; since the interaction free energy is U(h, φ) = 2π Reff(φ) ∫_h^∞ W(h′) dh′, the torque is given by T = −∂U/∂φ = −2π (dReff/dφ) ∫_h^∞ W(h′) dh′. The above expressions for two spheres are recovered by setting Ri′ = Ri″ = Ri; the torque vanishes in this case. The expression for two perpendicularly crossing cylinders is obtained from Ri′ = Ri and Ri″ → ∞. In this case, torque will tend to orient the cylinders perpendicularly for repulsive forces. For attractive forces, the torque will tend to align them. These general formulas have been used to evaluate approximate interaction forces between ellipsoids. Beyond the Derjaguin approximation The Derjaguin approximation is unique given its simplicity and generality. To improve this approximation, the surface element integration method as well as the surface integration approach were proposed to obtain more accurate expressions of the forces between two bodies. These procedures also consider the relative orientation of the approaching surfaces. See also Atomic force microscopy Electrical double layer forces DLVO theory Van der Waals force References Further reading Physical chemistry Colloidal chemistry
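Because the general expression above was reconstructed here from the gap geometry (the original equation was lost in extraction), a quick numerical sanity check is worthwhile. The Python sketch below implements it and confirms that it reproduces the two-sphere and crossed-cylinder limits quoted earlier; the large constant stands in for an infinite radius of curvature.

import math

def reff_general(r1p, r1pp, r2p, r2pp, phi):
    """Effective radius for two convex bodies with principal curvature radii
    (R1', R1'') and (R2', R2''), principal axes rotated by phi."""
    inv_sq = (1/(r1p*r1pp) + 1/(r2p*r2pp)
              + (1/(r1p*r2p) + 1/(r1pp*r2pp)) * math.sin(phi)**2
              + (1/(r1p*r2pp) + 1/(r1pp*r2p)) * math.cos(phi)**2)
    return inv_sq ** -0.5

# Two spheres (any phi): should equal R1*R2/(R1 + R2) = 1.2
print(reff_general(2.0, 2.0, 3.0, 3.0, 0.7), 2.0*3.0/(2.0 + 3.0))

# Perpendicularly crossing cylinders: should equal sqrt(R1*R2)
INF = 1.0e12   # stands in for an infinite radius of curvature
print(reff_general(2.0, INF, 3.0, INF, math.pi/2), math.sqrt(2.0*3.0))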
Derjaguin approximation
[ "Physics", "Chemistry" ]
978
[ "Colloidal chemistry", "Applied and interdisciplinary physics", "Colloids", "Surface science", "nan", "Physical chemistry" ]
35,438,476
https://en.wikipedia.org/wiki/Nosema%20locustae
Nosema locustae is a microsporidian fungus that is used to kill grasshoppers, caterpillars, some corn borers and crickets. Effects on grasshoppers When consumed, N. locustae spores build up in the grasshopper's gut and attack its digestive system, producing lethargy and a loss of appetite that eventually kill the insect; the pathogen can also be transmitted when other grasshoppers consume an infected carcass. In a study done at Linköping University using N. locustae and a central Ethiopian grasshopper species, 55% of the grasshoppers that were not inoculated reached adulthood, while only 19% of the inoculated ones did. Farm Application The spores are typically applied to a carrier, usually wheat bran, and can be spread with a variety of devices. A typical application is one pound of bait per acre, carrying more than one billion spores. References Fungal pest control agents Biological control agents of pest insects Microsporidia Fungi described in 1953 Fungus species
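To give the application rate above a more tangible scale, the short Python calculation below converts it to per-square-metre figures; the inputs are just the quantities quoted above (one pound of bait and roughly one billion spores per acre).

# Convert the quoted field rate (1 lb of bait and ~1e9 spores per acre)
# into per-square-metre figures.  One acre = 4046.86 m^2, 1 lb = 453.6 g.
SPORES_PER_ACRE = 1.0e9
GRAMS_PER_ACRE = 453.6
M2_PER_ACRE = 4046.86

print(f"{SPORES_PER_ACRE / M2_PER_ACRE:,.0f} spores per m^2")   # ~247,000
print(f"{GRAMS_PER_ACRE / M2_PER_ACRE:.2f} g of bait per m^2")  # ~0.11 g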
Nosema locustae
[ "Biology" ]
213
[ "Fungi", "Fungus species", "Fungal pest control agents" ]