MTT assay and cytosolic lactate dehydrogenase (LDH) release are common cytotoxicity or cell viability assays. == Supply issue == A common problem that plagues drug development is obtaining a sustainable supply of the compound. Compounds isolated from invertebrates can be difficult to obtain in sufficient quantity for clinical trials. Synthesis is an alternate source of the compound of interest if the compound is simple; otherwise, it is generally not a viable alternative. Aquaculture is another alternative if the organism is readily grown; otherwise, it may not be a good sustainable source of a compound. Also, the small quantities in which compounds are usually found in organisms make this alternative even more expensive. For example, ET-743 (INN name trabectedin, brand name Yondelis) can be isolated from the tunicate Ecteinascidia turbinata with a yield of 2 g per ton. This would require thousands of tons of tunicate to be grown and extracted to produce the kilograms of ET-743 that would be required for the treatment of thousands of people. Some success has been had in producing compounds of interest from microorganisms. Microorganisms can be used as a sustainable source for the production of compounds of interest. They can also be used for the production of intermediates so that semisynthesis can be used to produce the final compound. This has been achieved for ET-743 with the production of the intermediate safracin B from Pseudomonas fluorescens and its subsequent semisynthesis into ET-743. This is currently the industrial production method for Yondelis. == Compounds from marine sources at the clinical level == αIncludes natural products or natural product derivatives or analogues; βNumber of active trials/number of total trials from http://www.clinicaltrials.gov/ as of July 2011 == See also == Sponge isolates == References == == External links == Medicine from the Sea at
|
{
"page_id": 12715053,
"source": null,
"title": "Marine pharmacognosy"
}
|
Cluster of Excellence "Future Ocean" by Mayer, A.M.S at Midwestern University, College of Graduate Studies, Pharmacology Department
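The supply arithmetic in the article above can be made concrete with a rough calculation, using the quoted yield of 2 g of ET-743 per ton of Ecteinascidia turbinata. The 5 kg target below is an illustrative assumption, not a figure from the article:

```python
def tons_of_biomass_needed(target_kg: float, yield_g_per_ton: float) -> float:
    """Tons of raw biomass needed to extract a target mass of compound."""
    return target_kg * 1000.0 / yield_g_per_ton

# At 2 g of ET-743 per ton of tunicate, each kilogram of drug
# requires 500 tons of harvested biomass.
per_kg = tons_of_biomass_needed(1, 2.0)    # 500.0 tons per kg
campaign = tons_of_biomass_needed(5, 2.0)  # 2500.0 tons for a hypothetical 5 kg run
print(per_kg, campaign)
```

This is why aquaculture alone scales poorly, and why the semisynthetic route from a fermentable intermediate became the industrial method.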
|
{
"page_id": 12715053,
"source": null,
"title": "Marine pharmacognosy"
}
|
Dihydroxyphenylalanine may refer to either of two chemical compounds: D-DOPA ((R)-3,4-dihydroxyphenylalanine) and L-DOPA ((S)-3,4-dihydroxyphenylalanine), a precursor of a neurotransmitter.
|
{
"page_id": 9569326,
"source": null,
"title": "Dihydroxyphenylalanine"
}
|
A diffusion tube is a scientific device that passively samples the concentration of one or more gases in the air, commonly used to monitor average air pollution levels over a period ranging from days to about a month. Diffusion tubes are widely used by local authorities for monitoring air quality in urban areas, in citizen science pollution-monitoring projects carried out by community groups and schools, and in indoor environments such as mines and museums. == Construction and operation == A diffusion tube consists of a small, hollow, usually transparent, acrylic or polypropylene plastic tube, roughly 70 mm long, with a cap at each end. One of the caps (coloured white) is either completely removed to activate the tube (in the case of nitrogen dioxide sampling) or contains a filter allowing in just the gas being studied. The other cap (a different colour) contains metal mesh discs coated with a chemical reagent that absorbs the gas being studied as it enters the tube. Tubes that work this way are also known as Palmes tubes after their inventor, American chemist Edward Palmes, who described using such a tube as a personal air quality sensor in 1976. During operation, the tube is opened and vertically fastened with cable ties to something like a lamp-post or road sign, with the open end facing down, and the closed, coloured cap at the top. The gas being monitored, which is at a higher concentration in the atmosphere, diffuses into the bottom of the tube and is quickly absorbed by the reagent-coated cap. As it is absorbed, the process of diffusion continues. After a fixed period of time (typically from two weeks to a month), the tube is sealed up and sent away to a laboratory for analysis. The atmospheric concentration of the gas being studied can be
|
{
"page_id": 70190130,
"source": null,
"title": "Diffusion tube"
}
|
calculated using the amount captured and Fick's laws of diffusion. Diffusion tubes can be used to sample various different gases, including oxides of nitrogen (nitrogen dioxide and nitric oxide), sulphur dioxide, ammonia, and ozone. Although tubes sampling these gases all work through the same process of molecular diffusion, there are important differences. Nitrogen dioxide tubes use triethanolamine, TEOA (often mistakenly abbreviated as TEA, which actually refers to triethylamine), as the absorbing (reagent) chemical, for example, while hydrogen sulphide tubes are opaque (rather than transparent) to prevent ultraviolet light from degrading the chemicals inside. Some types of tube can sample multiple gases at the same time. == Advantages and disadvantages == Diffusion tubes are reasonably accurate, relatively cheap, easy to use, extremely compact, passive (they need no power source), and have a fairly long shelf life; with careful positioning, they can be deployed more or less anywhere, indoors or outdoors. They give a reasonable indication of the long-term, average concentration of a pollutant gas, such as nitrogen dioxide, and they make it easy to compare average pollution levels in different places or at different times. Often, a series of tubes are mounted in exactly the same place for consecutive months of the year to enable longer-term comparisons of pollution levels. It's also common for local authorities to mount a number of tubes in different places over the same time period so pollution hotspots in towns and cities can be identified. Since diffusion tubes are designed to be left in place for days or weeks at a time, they don't indicate shorter-term fluctuations of the pollutant being studied, such as the rising and falling levels of gas during the day, the difference between one day and the next or between weekdays and weekends, or the number of times guideline pollution levels are
|
{
"page_id": 70190130,
"source": null,
"title": "Diffusion tube"
}
|
exceeded while they're in place. They're also much less accurate than the highly sensitive, automated monitoring equipment used in roadside pollution monitoring cabins. Sources of inaccuracy include air turbulence (caused by things like wind movements or air conditioners), pollution from building ventilation systems, ultraviolet light (theoretically absorbed by the plastic tube), and other pollutants. == References ==
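The laboratory calculation described above follows from Fick's first law: for a tube of length L and cross-section A, the uptake rate is Q = D·A·C/L, so the time-averaged concentration is C = m·L/(D·A·t), where m is the mass captured over exposure time t. A minimal sketch; the tube dimensions and NO2 diffusion coefficient below are typical illustrative values, not figures from this article:

```python
def avg_concentration_ug_per_m3(captured_ug: float, exposure_s: float,
                                tube_length_cm: float = 7.1,
                                area_cm2: float = 0.92,
                                diff_coeff_cm2_per_s: float = 0.154) -> float:
    """Time-averaged concentration inferred from a passive (Palmes-type) tube.

    Rearranges Fick's first law: C = m * L / (D * A * t). The result is
    converted from ug/cm^3 to ug/m^3 (1 m^3 = 1e6 cm^3).
    """
    conc_ug_per_cm3 = (captured_ug * tube_length_cm
                       / (diff_coeff_cm2_per_s * area_cm2 * exposure_s))
    return conc_ug_per_cm3 * 1e6

two_weeks = 14 * 24 * 3600  # a typical exposure period, in seconds
c = avg_concentration_ug_per_m3(captured_ug=1.0, exposure_s=two_weeks)
# The inferred concentration scales linearly with the captured mass.
assert abs(avg_concentration_ug_per_m3(2.0, two_weeks) - 2 * c) < 1e-9
print(round(c, 1))  # roughly 41 ug/m^3 under these assumed parameters
```

Note that this yields only the average over the whole exposure period, which is exactly why the tubes cannot resolve short-term fluctuations.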
|
{
"page_id": 70190130,
"source": null,
"title": "Diffusion tube"
}
|
Cinnamyl alcohol or styron is an organic compound that is found in esterified form in storax, Balsam of Peru, and cinnamon leaves. It forms a white crystalline solid when pure, or a yellow oil when even slightly impure. It can be produced by the hydrolysis of storax. Cinnamyl alcohol occurs naturally only in small quantities, so its industrial demand is usually fulfilled by chemical synthesis starting from cinnamaldehyde. == Properties == The compound is a solid at room temperature, forming colorless crystals that melt upon gentle heating. As is typical of most higher-molecular weight alcohols, it is sparingly soluble in water at room temperature, but highly soluble in most common organic solvents. == Uses == Cinnamyl alcohol has a distinctive odor described as "sweet, balsam, hyacinth, spicy, green, powdery, cinnamic" and is used in perfumery and as a deodorant. Cinnamyl alcohol is the starting material used in the synthesis of reboxetine. == Safety == Cinnamyl alcohol has been found to have a sensitizing effect on some people and as a result is the subject of a Restricted Standard issued by IFRA (International Fragrance Association). == Glycosides == Rosarin and rosavin are cinnamyl alcohol glycosides isolated from Rhodiola rosea. == References ==
|
{
"page_id": 6095923,
"source": null,
"title": "Cinnamyl alcohol"
}
|
Johann Schröder (1600, Bad Salzuflen – 1664) was a German physician and pharmacologist who was the first person to recognise that arsenic was an element. In 1649, he produced the elemental form of arsenic by heating its oxide, and published two methods for its preparation. == Works == Pharmacopoeia medico-chymica sive thesaurus pharmacologicus : quo composita quaeque celebriora, hinc mineralia, vegetabilia & animalia chymico-medice describuntur, atque insuper principia physicae hermetico-hippocraticae candide exhibentur; opus, non minus utile physicis quam medicis . Gerlin, Ulm Ed. secunda correctum & auctum 1644 Digital edition / 1649 Digital edition / Opus, editione quarta, plurimis in locis auctum ac emendatum 1656 Digital edition / Editione ultima, plurimis in locis auctum, correctum ac emendatum 1665 Digital edition / Hac septima emendatum, omissis locupletatum, notisque auctum / a Joanne Ludovico Witzelio 1677 Digital edition by the University and State Library Düsseldorf La pharmacopée raisonnée de Schroder . Vol. 1&2 . Amaulry, Lyon 1698 Digital edition by the University and State Library Düsseldorf Vollständige und nutz-reiche Apotheke/ Oder: Trefflich versehener Medicin-Chymischer höchstkostbarer Artzney-Schatz : Nebst D. Friedrich Hoffmanns darüber verfasseten herrlichen Anmerckungen; in fünff Bücher eingetheilt ... . Hoffmann & Streck, Franckfurt [u.a.] Nun aber bey dieser Zweyten Edition Um ein merckliches vermehret und verbessert 1709 Digital edition / Nun aber bey dieser dritten Edition um ein merckliches vermehret, verbessert 1718 Digital edition by the University and State Library Düsseldorf == References ==
|
{
"page_id": 3015731,
"source": null,
"title": "Johann Schröder (physician)"
}
|
ViroCap is a test announced in 2015 by researchers at Washington University in St. Louis which can detect most of the infectious viruses which affect both humans and animals. It was demonstrated to be as sensitive as various polymerase chain reaction (PCR) assays for the viruses. It will not be available for clinical use until validation studies are done, which may take years. The test examines two million sequences of genetic data from viruses. The research was published in September 2015 in the online journal Genome Research. == References == == External links == GenomeWeb, "WUSTL Team Develops Virome Capture Technique"
|
{
"page_id": 47973429,
"source": null,
"title": "ViroCap"
}
|
An intercellular cleft is a channel between two cells through which molecules may travel and in which gap junctions and tight junctions may be present. Most notably, intercellular clefts are often found between epithelial cells and the endothelium of blood vessels and lymphatic vessels, also helping to form the blood-nerve barrier surrounding nerves. Intercellular clefts are important for allowing the transportation of fluids and small solute matter through the endothelium. == Dimensions of intercellular cleft == The dimensions of intercellular clefts vary throughout the body; however, cleft lengths have been determined for a series of capillaries. The average cleft length for capillaries is about 20 m per cm² of capillary wall. The depths of the intercellular clefts, measured from the luminal to the abluminal openings, vary among different types of capillaries, but the average is about 0.7 μm. The width of the intercellular clefts is about 20 nm outside the junctional region (i.e. in the larger part of the clefts). In intercellular clefts of capillaries, it has been calculated that the fractional area of the capillary wall occupied by the intercellular cleft is 20 m/cm² × 20 nm (length × width) = 0.004 (0.4%). This is the fractional area of the capillary wall exposed for free diffusion of small hydrophilic solutes and fluids[5]. == Communication via cleft == The intercellular cleft is imperative for cell-cell communication. The cleft contains gap junctions, tight junctions, desmosomes, and adherens proteins, all of which help to propagate and/or regulate cell communication through signal transduction, surface receptors, or a chemogradient. In order for a molecule to be taken into the cell, whether by endocytosis, phagocytosis, or receptor-mediated endocytosis, often that molecule must first enter through the cleft. The intercellular cleft itself is a channel, but what flows through the channel, like ions, fluid, and small molecules, and what proteins or junctions give order to the
|
{
"page_id": 35128374,
"source": null,
"title": "Intercellular cleft"
}
|
channel is critical for the life of the cells that border the intercellular cleft. === Research utilizing cleft communication === Research at the cell level can deliver proteins, ions, or specific small molecules into the intercellular cleft as a means of injecting a cell. This method is especially useful in studying cell-to-cell propagation of infectious cytosolic protein aggregates. In one study, protein aggregates from yeast prions were released into a mammalian intercellular cleft and were taken up by the adjacent cell, as opposed to direct cell transfer. This process would be similar to the secretion and transmission of infectious particles through the synaptic cleft between cells of the immune system, as seen in retroviruses. Understanding the routes of intercellular protein aggregate transfer, particularly routes involving clefts, is imperative in understanding the progressive spreading of this infection[8]. == Transport in intercellular cleft == Endothelial tight junctions are most commonly found in the intercellular cleft and provide for regulation of diffusion through the membranes. These links are most commonly found in the most apical aspect of the intercellular cleft. They prevent macromolecules from navigating the intercellular cleft and limit the lateral diffusion of intrinsic membrane proteins and lipids between the apical and basolateral cell surface domains. In the intercellular clefts of capillaries, tight junctions are the first structural barriers a neutrophil encounters as it penetrates the interendothelial cleft, or the gap linking the blood vessel lumen with the subendothelial space[2]. In capillary endothelium, plasma communicates with the interstitial fluid through the intercellular cleft. Blood plasma without the plasma proteins, red blood cells, and platelets passes through the intercellular cleft and into the capillary[7]. == Capillary intercellular clefts == Most notably, intercellular clefts are described in capillary blood vessels. 
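The 0.4% fractional-area figure quoted in the Dimensions section reduces to a unit conversion, which a few lines of arithmetic confirm (a sketch of the stated calculation, not original analysis):

```python
# Cleft length per unit wall area: 20 m per cm^2, expressed in cm per cm^2.
cleft_length_cm_per_cm2 = 20 * 100   # 20 m = 2000 cm
cleft_width_cm = 20e-7               # 20 nm = 20e-7 cm = 2e-6 cm
fractional_area = cleft_length_cm_per_cm2 * cleft_width_cm
print(fractional_area)  # ~0.004, i.e. 0.4% of the capillary wall area
```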
The three types of capillary blood vessels are continuous, fenestrated, and discontinuous, with continuous being
|
{
"page_id": 35128374,
"source": null,
"title": "Intercellular cleft"
}
|
the least porous of the three and discontinuous capillaries being extremely high in permeability. Continuous blood capillaries have the smallest intercellular clefts, with discontinuous blood capillaries having the largest intercellular clefts, commonly accompanied by gaps in the basement membrane[6]. Often, fluid is forced out of the capillaries through the intercellular clefts. Fluid is pushed out through the intercellular cleft at the arterial end of the capillary because that is where the pressure is highest. However, most of this fluid returns into the capillary at the venous end, creating capillary fluid dynamics. Two opposing forces achieve this balance: hydrostatic pressure and colloid osmotic pressure, using the intercellular clefts as fluid entrances and fluid exits[4]. In addition, the size of the intercellular clefts and pores in the capillary will influence this fluid exchange. The larger the intercellular cleft, the lesser the pressure and the more fluid will flow out through the cleft. This enlargement of the cleft is caused by contraction of capillary endothelial cells, often by substances such as histamine and bradykinin. However, smaller intercellular clefts do not help this fluid exchange[3]. Along with fluid, electrolytes are also carried through this transport in the capillary blood vessels[4]. This mechanism of fluid, electrolyte, and also small solute exchange is especially important in renal glomerular capillaries[3]. == Intercellular cleft and BHB == Intercellular clefts also play a role in the formation of the blood-heart barrier (BHB). The intercellular cleft between endocardial endotheliocytes is 3 to 5 times deeper than the clefts between myocardial capillary endotheliocytes. Also, these clefts are often more twisting and have one or two tight junctions and zona adherens interacting with a circumferential actin filament band and several connecting proteins[7]. 
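The balance of hydrostatic and colloid osmotic pressure described above is usually summarized by the Starling relation: net filtration pressure = (capillary hydrostatic − interstitial hydrostatic) − (capillary oncotic − interstitial oncotic). A minimal sketch; the pressure values are textbook-style illustrative numbers, not figures from this article:

```python
def net_filtration_pressure(p_cap: float, p_int: float,
                            pi_cap: float, pi_int: float) -> float:
    """Starling relation (mmHg): positive favours filtration out through
    the clefts; negative favours reabsorption back into the capillary."""
    return (p_cap - p_int) - (pi_cap - pi_int)

# Hydrostatic pressure dominates at the arteriolar end, pushing fluid out...
arterial = net_filtration_pressure(p_cap=35, p_int=0, pi_cap=25, pi_int=0)
# ...while colloid osmotic pressure dominates at the venous end, drawing it back.
venous = net_filtration_pressure(p_cap=15, p_int=0, pi_cap=25, pi_int=0)
print(arterial, venous)  # 10 -10
```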
These tight junctions localize to the luminal side of the intercellular clefts, where the glycocalyx, which is important in cell–cell recognition and
|
{
"page_id": 35128374,
"source": null,
"title": "Intercellular cleft"
}
|
cell signaling, is more developed. The organization of the endocardial endothelium and the intercellular cleft help to establish the blood-heart barrier by ensuring an active transendothelial physicochemical gradient of various ions[1]. == References == 1. Thiriet, M. (2015). Interactions between cardiac cell populations. In Diseases of the cardiac pump (1st ed., Vol. 7, pp. 59–61). Paris: Springer. 2. Gabrilovich, D. (2013). Mechanisms of neutrophil migration. In The neutrophils: new outlook for old cells (3rd ed., pp. 138–144). London: Imperial College Press. 3. Klabunde, R. (2014, April 30). Mechanisms of capillary exchange. Retrieved 2015, from http://www.cvphysiology.com/Microcirculation/M016.htm 4. Marieb, E.N. (2003). Essentials of Human Anatomy and Physiology (7th ed.). San Francisco: Benjamin Cummings. ISBN 0-8053-5385-2. 5. Chien, S. (1988). Mathematical models of intercellular clefts. In Vascular endothelium in health and disease (Vol. 242, pp. 3–5). New York City, New York: Plenum Press. 6. Capillaries. (n.d.). Retrieved from http://www.udel.edu/biology/Wags/histopage/vascularmodelingpage/circsystempage/capillaries/capillaries.html 7. Silberberg, A. (1988). Structure of the interendothelial cell cleft. Biorheology, 25(1–2), 303–18. 8. Hofmann, J., Denner, P., Nussbaum-Krammer, C., Kuhn, P., Suhre, M., Scheibel, T., ... Vorberg, I. (2013). Cell-to-cell propagation of infectious cytosolic protein aggregates. Proceedings of the National Academy of Sciences of the United States of America, 110(15), 5951–5956. doi:10.1073/pnas.1217321110 == External links == Martìn-Padura I, Lostaglio S, Schneemann M, et al. (July 1998). "Junctional adhesion molecule, a novel member of the immunoglobulin superfamily that distributes at intercellular junctions and modulates monocyte transmigration". J. Cell Biol. 142 (1): 117–27. doi:10.1083/jcb.142.1.117. PMC 2133024. PMID 9660867.
|
{
"page_id": 35128374,
"source": null,
"title": "Intercellular cleft"
}
|
WonderFest is an American fan convention focusing on science fiction and horror, held annually since 1992 after two years as a predecessor event. "One of the biggest hobby events in the country," it takes place in Louisville, Kentucky, and is the site of the annual presentation of the Rondo Hatton Classic Horror Awards. == History == WonderFest originated in 1990 as the hobbyist club The Scale Figure Modelers Society's Louisville Plastic Kit & Toy Show, held at a Ramada Inn hotel. The club had been founded the year before by Irwin Severs and Larry Johnson. In 1992, the convention changed its name to WonderFest, and was held at a larger venue. Four years later, it relocated to its current home, the hotel Crowne Plaza, formerly Executive West. The convention formally split off from the hobbyist club after the 1997 show. The edition originally scheduled for May 30–31, 2020, and then October 24–25, 2020, was canceled because of the COVID-19 pandemic. Guests throughout the years have included filmmakers / TV producers Joe Dante, D. C. Fontana, Nicholas Meyer, Greg Nicotero, and George A. Romero, genre-film actors and actresses Dirk Benedict, Martine Beswick, Veronica and Angela Cartwright, Joanna Cassidy, Yvonne Craig, Claudia Christian, Denise Crosby, Sybil Danning, Keir Dullea, Anne Francis, Marta Kristen, Gary Lockwood, Kevin McCarthy, Lee Meriwether, Caroline Munro, Robert Picardo, Linnea Quigley, and Brinke Stevens, special effects artists Ray Harryhausen, Tom Savini, and Chris Walas, comics and children's-book writers / artists Frank Cho, Basil Gogos, Joe Jusko, Michael Kaluta, Mark Schultz, William Stout, and Bernie Wrightson, and horror scions Sara Karloff and Vanessa Harryhausen. It also features cosplayers. Events there include the annual presentation of the Rondo Hatton Classic Horror Awards and the WonderFest Model Contest, hosted by Amazing Figure Modeler magazine. Charitable outreach has included raffles to benefit
|
{
"page_id": 76547127,
"source": null,
"title": "WonderFest"
}
|
the Pediatric AIDS Foundation and the WHAS Crusade for Children. In 2014, three Louisville, Kentucky-based podcasters attempted to set a Guinness World Record at WonderFest for the Longest Uninterrupted Webcast (now called Longest Audio-Only Live Stream), Tower of Technobabble, to raise money for a local animal organization's spay and neuter program. They set the then-record of 41 hours. The CEO as of 2004 was Dave Hodge. As of 2022, its CEO was Melina Angstrom. == See also == Wonder Festival == References == == External links == Bickers, James (May 29, 2015). "WonderFest Brings 'Walking Dead' Producer to Louisville". Louisville Public Media. Retrieved April 6, 2024.
|
{
"page_id": 76547127,
"source": null,
"title": "WonderFest"
}
|
In general relativity, a scalar field solution is an exact solution of the Einstein field equation in which the gravitational field is due entirely to the field energy and momentum of a scalar field. Such a field may or may not be massless, and it may be taken to have minimal curvature coupling, or some other choice, such as conformal coupling. == Definition == In general relativity, the geometric setting for physical phenomena is a Lorentzian manifold, which is physically interpreted as a curved spacetime, and which is mathematically specified by defining a metric tensor $g_{ab}$ (or by defining a frame field). The curvature tensor $R^{a}{}_{bcd}$ of this manifold and associated quantities such as the Einstein tensor $G_{ab}$ are well-defined even in the absence of any physical theory, but in general relativity they acquire a physical interpretation as geometric manifestations of the gravitational field. In addition, we must specify a scalar field by giving a function $\psi$. This function is required to satisfy the following two conditions: The function must satisfy the (curved spacetime) source-free wave equation $g^{ab}\psi_{;ab}=0$. The Einstein tensor must match the stress-energy tensor for the scalar field, which in the simplest case, a minimally coupled massless scalar field, can be written $G_{ab}=\kappa\left(\psi_{;a}\psi_{;b}-{\tfrac{1}{2}}\psi_{;m}\psi^{;m}g_{ab}\right)$. Both conditions follow from varying the Lagrangian density for the scalar field, which in the case of a minimally coupled massless scalar field is
|
{
"page_id": 2425912,
"source": null,
"title": "Scalar field solution"
}
|
$L=-g^{mn}\,\psi_{;m}\,\psi_{;n}$. Here, $\delta L/\delta\psi=0$ gives the wave equation, while $\delta L/\delta g^{ab}=0$ gives the Einstein equation (in the case where the field energy of the scalar field is the only source of the gravitational field). == Physical interpretation == Scalar fields are often interpreted as classical approximations, in the sense of effective field theory, to some quantum field. In general relativity, the speculative quintessence field can appear as a scalar field. For example, a flux of neutral pions can in principle be modeled as a minimally coupled massless scalar field. == Einstein tensor == The components of a tensor computed with respect to a frame field rather than the coordinate basis are often called physical components, because these are the components which can (in principle) be measured by an observer. In the special case of a minimally coupled massless scalar field, an adapted frame $\vec{e}_{0},\,\vec{e}_{1},\,\vec{e}_{2},\,\vec{e}_{3}$ (the first is a timelike unit vector field, the last three are spacelike unit vector fields) can always be found in which the Einstein tensor takes the simple form $G_{\hat{a}\hat{b}}=8\pi\sigma\begin{bmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}$ where $\sigma$ is the energy density of the scalar field. == Eigenvalues == The characteristic polynomial of the Einstein tensor in a minimally coupled massless scalar field solution must have the form
|
{
"page_id": 2425912,
"source": null,
"title": "Scalar field solution"
}
|
$\chi(\lambda)=(\lambda+8\pi\sigma)^{3}\,(\lambda-8\pi\sigma)$. In other words, we have a simple eigenvalue and a triple eigenvalue, each being the negative of the other. Multiplying out and using Gröbner basis methods, we find that the following three invariants must vanish identically: $a_{2}=0,\;a_{1}^{3}+4a_{3}=0,\;a_{1}^{4}+16a_{4}=0$. Using Newton's identities, we can rewrite these in terms of the traces of the powers. We find that $t_{2}=t_{1}^{2},\;t_{3}=t_{1}^{3}/4,\;t_{4}=t_{1}^{4}/4$. We can rewrite this in terms of index gymnastics as the manifestly invariant criteria: $G^{a}{}_{a}=-R$, $G^{a}{}_{b}\,G^{b}{}_{a}=R^{2}$, $G^{a}{}_{b}\,G^{b}{}_{c}\,G^{c}{}_{a}=R^{3}/4$, $G^{a}{}_{b}\,G^{b}{}_{c}\,G^{c}{}_{d}\,G^{d}{}_{a}=R^{4}/4$. == Examples == Notable individual scalar field solutions include the Janis–Newman–Winicour scalar field solution, which is the unique static and spherically symmetric massless minimally coupled scalar field solution. == See also == Exact solutions in general relativity Lorentz group == References == Stephani, H.; Kramer, D.; MacCallum, M.; Hoenselaers, C. & Herlt, E. (2003). Exact Solutions of Einstein's Field Equations (2nd ed.). Cambridge: Cambridge University Press. ISBN 0-521-46136-7. Hawking, S. W. & Ellis, G. F. R. (1973). The Large Scale Structure of Space-Time. Cambridge: Cambridge University Press. ISBN 0-521-09906-4. See section 3.3 for the stress-energy tensor of a minimally coupled scalar field.
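The trace identities stated in the Eigenvalues section can be checked directly from the characteristic polynomial: with eigenvalues (−k, −k, −k, +k), where k = 8πσ, the power traces must satisfy t₂ = t₁², 4t₃ = t₁³, and 4t₄ = t₁⁴. A minimal numerical sketch in units where k = 1:

```python
# Eigenvalues of the mixed Einstein tensor, in units where 8*pi*sigma = 1,
# read off from chi(l) = (l + 1)^3 (l - 1): a triple root -1 and a simple root +1.
eigs = [-1, -1, -1, 1]
t = [sum(e**n for e in eigs) for n in range(1, 5)]  # traces of G, G^2, G^3, G^4

assert t[1] == t[0]**2       # t2 = t1^2
assert 4 * t[2] == t[0]**3   # t3 = t1^3 / 4
assert 4 * t[3] == t[0]**4   # t4 = t1^4 / 4
print(t)  # [-2, 4, -2, 4]
```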
|
{
"page_id": 2425912,
"source": null,
"title": "Scalar field solution"
}
|
Code of a Killer is a three-part British police drama television series which tells the true story of Alec Jeffreys' discovery of DNA fingerprinting and its introductory use by Detective David Baker in catching the double murderer Colin Pitchfork. Filming commenced in late September 2014, and the program aired on the ITV network, on 6 and 13 April 2015. Endemol Shine handled international distribution of the series. == Plot == Set over a nearly four-year period from 1983 to 1987, DCS David Baker leads an investigation into the vicious murders of the two Leicestershire teenage schoolgirls, Lynda Mann and Dawn Ashworth. Meanwhile, Alec Jeffreys is an ambitious scientist who has recently discovered a remarkable method to read a person's DNA and, from it, generate a unique DNA fingerprint. Convinced one local person committed both crimes, Baker approaches Jeffreys to utilise his scientific technique to solve the murders. The first-ever DNA manhunt follows, involving the blood testing of many men — all in the aid of catching the killer. == Cast == == Production == === Development === Code of a Killer was commissioned by ITV's Director of Drama Steve November and Controller of Drama Victoria Fea on 16 May 2014. The series was developed with the participation of retired Professor Sir Alec Jeffreys and former Detective Chief Superintendent David Baker. It was written by Michael Crompton, directed by James Strong, produced by Priscilla Parish, and executive produced by Simon Heath for World Productions. Filming began in late September 2014, and the episodes were shown on 6 and 13 April 2015 at 9:00 p.m. on the ITV network. == Broadcast == The series premiered in Australia on BBC First on 19 September 2015. == Episodes == Originally aired in 2015 in the UK and Australia as two 65-minute episodes; currently streams
|
{
"page_id": 42796093,
"source": null,
"title": "Code of a Killer"
}
|
online as three 45-minute episodes plus one 28-minute ‘Behind the Scenes’ special. The episode descriptions below are for the (current) three-episode format, while air dates and viewership data apply to the (original) two-episode format. == Reception == === Critical reception === The drama received a mixed reception. The first part was criticised for dramatic sluggishness and a reliance on crime-show clichés in the portrayal of the two main characters. The depiction of Alec Jeffreys as the stereotypical absent-minded "boffin" was cited by several reviewers. Gerard O'Donovan in The Daily Telegraph called the show's version of him a "stock obsessive boffin so wedded to his lab instruments that his marriage was permanently on the brink of collapse". Julia Raeside in The Guardian wrote, "There are obligatory scenes in which Jeffreys misses a school play and receives a phone call from his wife pronouncing, 'Your dinner’s in the dog.' There are only so many times co-workers can remark, 'Don’t work too late' or 'Aren’t you going home?' before the hammering repetition starts to cause a dent in your enjoyment." Chris Bennion in The Independent concluded that "Sadly this drama had the fingerprints of countless other by-numbers crime thrillers all over it." Alex Hardy in The Times was less critical, giving the show four stars out of five and saying that "this fact-based drama managed to balance tragedy with optimism", but added that it "inevitably contained elements of soap". == References == == External links == Code of a Killer at IMDb Code of a Killer at British TV Detectives
|
{
"page_id": 42796093,
"source": null,
"title": "Code of a Killer"
}
|
Mannophryne vulcano, the Caracas collared frog, is a frog in the family Aromobatidae. It has been observed in the Sierra de Portuguesa in Lara, Venezuela. It is differentiated from other types of frogs by having a narrow neck with a different pattern and color (Frost, Darrel R. "Mannophryne vulcano Barrio-Amorós, Santos, and Molina, 2010". Amphibian Species of the World, an Online Reference. Version 6.0. American Museum of Natural History, New York. Retrieved March 2, 2025). == Habitat == This diurnal frog lives in riparian habitats. Scientists have not observed this frog in any protected places, but its known range overlaps with Waraira Repano National Park. == Reproduction == The male frogs call to the female frogs openly and from hiding places. The female frogs lay eggs on wet leaves or in moist soil, 12–16 eggs per clutch. The male frogs guard the eggs. After the eggs hatch, the male frogs carry the tadpoles to water. == Threats == The IUCN classifies this frog as near threatened. Water pollution, fires, unregulated tourism, farms, and urbanization can kill frogs or cause habitat loss. Scientists have found the fungus Batrachochytrium dendrobatidis on this frog, but the species appears to have some resistance to chytridiomycosis. == Original description == Barrio-Amoros CL; Santos JC; Molina CR (2010). "An addition to the diversity of dendrobatid frogs in Venezuela: description of three new collared frogs (Anura: Dendrobatidae: Mannophryne)". Phyllomedusa. 9: 3–35. Retrieved March 2, 2025. == References ==
|
{
"page_id": 79365186,
"source": null,
"title": "Mannophryne vulcano"
}
|
In particle physics, a relativistic particle is an elementary particle with kinetic energy greater than or equal to its rest-mass energy given by Einstein's relation, E = m 0 c 2 {\displaystyle E=m_{0}c^{2}} , or specifically, one whose velocity is comparable to the speed of light c {\displaystyle c} . Photons satisfy this condition trivially, and in general the behaviour of such particles can only be described using special relativity. Several approaches exist for describing the motion of single and multiple relativistic particles, a prominent example being the Dirac equation for single-particle motion. Since the energy-momentum relation of a particle can be written as E 2 = ( p c ) 2 + ( m 0 c 2 ) 2 {\displaystyle E^{2}=(pc)^{2}+(m_{0}c^{2})^{2}} , where E {\displaystyle E} is the energy, p {\displaystyle p} is the momentum, and m 0 {\displaystyle m_{0}} is the rest mass, when the rest mass tends to zero, e.g. for a photon, or the momentum is large, e.g. for a high-speed proton, this relation collapses into a linear dispersion, i.e. E = p c {\displaystyle E=pc} . This is different from the parabolic energy-momentum relation for classical particles. Thus, in practice, the linearity or non-parabolicity of the energy-momentum relation is considered a key feature of relativistic particles. These two types of relativistic particles are referred to as massless and massive, respectively. In experiments, massive particles are relativistic when their kinetic energy is comparable to or greater than the energy E = m 0 c 2 {\displaystyle E=m_{0}c^{2}} corresponding to their rest mass. In other words, a massive particle is relativistic when its total mass-energy is at least twice its rest-mass energy. This condition implies that the speed of the particle is close to the speed of light. According to the Lorentz factor formula, this requires the particle to move at roughly 87% of the speed of
|
{
"page_id": 3605571,
"source": null,
"title": "Relativistic particle"
}
|
light. Such relativistic particles are generated in particle accelerators and also occur naturally in cosmic radiation. In astrophysics, jets of relativistic plasma are produced by the centers of active galaxies and quasars. A charged relativistic particle crossing the interface of two media with different dielectric constants emits transition radiation. This is exploited in the transition radiation detectors of high-velocity particles. == Desktop relativistic particles == Relativistic electrons can also exist in some solid state materials, including semimetals such as graphene, topological insulators, bismuth antimony alloys, and semiconductors such as transition metal dichalcogenide and black phosphorene layers. These lattice-confined electrons, whose relativistic effects can be described using the Dirac equation, are also called desktop relativistic electrons or Dirac electrons. == See also == Ultrarelativistic particle Special relativity Relativistic wave equations Lorentz factor Relativistic mass Relativistic plasma Relativistic jet Relativistic beaming == Notes == == References ==
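The speed threshold stated above can be checked numerically: a total mass-energy of twice the rest-mass energy means a Lorentz factor γ = 2, and inverting γ = 1/√(1 − v²/c²) gives v/c = √(1 − 1/γ²) ≈ 0.866. A minimal sketch (the function name is illustrative, not from any library):

```python
import math

def beta_from_gamma(gamma):
    """Speed as a fraction of c for a given Lorentz factor,
    inverting gamma = 1 / sqrt(1 - beta**2)."""
    return math.sqrt(1.0 - 1.0 / gamma**2)

# Total mass-energy twice the rest-mass energy -> gamma = 2
print(round(beta_from_gamma(2.0), 3))  # 0.866, i.e. about 87% of c
```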
|
{
"page_id": 3605571,
"source": null,
"title": "Relativistic particle"
}
|
A nucleolar detention center (DC) is a region of the cell in which certain proteins are temporarily detained in periods of cellular stress. DCs are absent from cells under normal culture conditions, but form in response to specific environmental triggers. The detention of numerous proteins in DCs is believed to reduce metabolic activity and promote survival under unfavorable conditions. DCs form at the center of nucleoli and therefore disrupt the normal organization of these organelles. The structural remodeling that ensues leaves nucleoli unable to sustain their primary function, ribosomal biogenesis. Therefore, the formation of DCs is thought to convert nucleoli from “ribosome factories” to “prisons for proteins”. Detention center formation is thought to be controlled by the varying expression of intergenic spacer long noncoding RNA (IGS lncRNA). Under normal conditions, the genes that code for IGS lncRNA are silenced. Cellular stressors such as heat shock and acidosis trigger the expression of IGS lncRNA which, in turn, initiates the structural changes that transform the internal domain of the nucleolus into the detention center. The actual formation of the detention center domain is facilitated by the binding and sequestration of target proteins by IGS lncRNA. Different types of IGS lncRNA associate selectively with target proteins, temporarily inactivating them and causing them to aggregate in large clumps in the nucleolus. The absence of cellular stressors and return to cellular environment homeostasis decreases IGS lncRNA transcription, causing the nucleolus to relinquish detained proteins, return to its original structural conformation, and resume the production of ribosomes. The IGS lncRNA sequences produced in response to cellular stress differ depending on the type of stress-inducing stimulus. IGS lncRNA produced in response to heat shock is transcribed from a different region than IGS lncRNA produced in response to acidosis. 
The set of proteins sequestered in the detention center
|
{
"page_id": 41288775,
"source": null,
"title": "Detention center (cell biology)"
}
|
is dependent on the type of IGS lncRNA produced, and therefore on the type of environmental stressor present. == References ==
|
{
"page_id": 41288775,
"source": null,
"title": "Detention center (cell biology)"
}
|
Data exploration is an approach similar to initial data analysis, whereby a data analyst uses visual exploration to understand what is in a dataset and the characteristics of the data, rather than through traditional data management systems. These characteristics can include size or amount of data, completeness of the data, correctness of the data, possible relationships amongst data elements or files/tables in the data. Data exploration is typically conducted using a combination of automated and manual activities. Automated activities can include data profiling or data visualization or tabular reports to give the analyst an initial view into the data and an understanding of key characteristics. This is often followed by manual drill-down or filtering of the data to identify anomalies or patterns identified through the automated actions. Data exploration can also require manual scripting and queries into the data (e.g. using languages such as SQL or R) or using spreadsheets or similar tools to view the raw data. All of these activities are aimed at creating a mental model and understanding of the data in the mind of the analyst, and defining basic metadata (statistics, structure, relationships) for the data set that can be used in further analysis. Once this initial understanding of the data has been established, the data can be pruned or refined by removing unusable parts of the data (data cleansing), correcting poorly formatted elements and defining relevant relationships across datasets. This process is also known as determining data quality. Data exploration can also refer to the ad hoc querying or visualization of data to identify potential relationships or insights that may be hidden in the data, without requiring assumptions to be formulated beforehand. Traditionally, this had been a key area of focus for statisticians, with John Tukey being a key evangelist in the field. Today, data
|
{
"page_id": 43385931,
"source": null,
"title": "Data exploration"
}
|
exploration is more widespread and is the focus of data analysts and data scientists; the latter being a relatively new role within enterprises and larger organizations. == Interactive Data Exploration == This area of data exploration has become an area of interest in the field of machine learning. This is a relatively new field and is still evolving. At its most basic level, a machine-learning algorithm can be fed a data set and used to identify whether a hypothesis is true based on the dataset. Common machine learning algorithms can focus on identifying specific patterns in the data. Common patterns include regression, classification and clustering, but there are many other possible patterns and algorithms that can be applied to data via machine learning. By employing machine learning, it is possible to find patterns or relationships in the data that would be difficult or impossible to find via manual inspection, trial and error or traditional exploration techniques. == Software == Trifacta – a data preparation and analysis platform Paxata – self-service data preparation software Alteryx – data blending and advanced data analytics software Microsoft Power BI - interactive visualization and data analysis tool OpenRefine - a standalone open source desktop application for data clean-up and data transformation Tableau software – interactive data visualization software == See also == Exploratory data analysis Machine learning Data profiling Data visualization == References ==
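The automated profiling step described above can be sketched in plain Python; `profile` is a hypothetical helper for illustration, not part of any named tool. It reports the per-field characteristics the text mentions (completeness and distinct values):

```python
def profile(records):
    """Minimal data-profiling sketch: per-field count, missing rate,
    and number of distinct values for a list of dict records."""
    fields = {key for record in records for key in record}
    stats = {}
    for field in sorted(fields):
        values = [record.get(field) for record in records]
        present = [v for v in values if v is not None]
        stats[field] = {
            "count": len(present),
            "missing": len(values) - len(present),
            "distinct": len(set(present)),
        }
    return stats

rows = [{"id": 1, "city": "Oslo"}, {"id": 2, "city": None}, {"id": 3}]
print(profile(rows)["city"])  # {'count': 1, 'missing': 2, 'distinct': 1}
```

A real exploration workflow would follow such a summary with manual drill-down into the fields flagged as incomplete.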
|
{
"page_id": 43385931,
"source": null,
"title": "Data exploration"
}
|
Cognitive hearing science is an interdisciplinary science field concerned with the physiological and cognitive basis of hearing and its interplay with signal processing in hearing aids. The field includes genetics, physiology, medical and technical audiology, cognitive neuroscience, cognitive psychology, linguistics and social psychology. Theoretically, research in cognitive hearing science combines a physiological model of information transfer from the outer auditory organ to the auditory cerebral cortex with a cognitive model of how language comprehension is influenced by the interplay between the incoming language signal and the individual's cognitive skills, especially long-term memory and working memory. Researchers examine the interplay between type of hearing impairment or deafness, type of signal processing in different hearing aids, type of listening environment and the individual's cognitive skills. Research in cognitive hearing science advances knowledge about different types of hearing impairment and their effects, and makes it possible to determine which individuals can benefit from a certain type of signal processing in a hearing aid or cochlear implant, so that the hearing aid can be adapted to the individual. Cognitive hearing science was introduced by researchers at the Linköping University research centre Linnaeus Centre HEAD (HEaring And Deafness) in Sweden, created in 2008 with a major 10-year grant from the Swedish Research Council. == References == == Resources == Linnaeus Centre HEAD Interview, prof. Jerker Rönnberg
|
{
"page_id": 29819979,
"source": null,
"title": "Cognitive hearing science"
}
|
The Larry Sandler Memorial Award is a prestigious international award given for research in the Drosophila community. The award is given for the best dissertation of the preceding year, and is given at the annual Drosophila Research Conference. Awardees may be nominated only by their graduate advisors. The awardees give the Larry Sandler Memorial Lecture at the annual Drosophila Research Conference. The award honors Dr. Larry Sandler. == Award recipients == 1988 Bruce Edgar 1989 Kate Harding 1990 Michael Dickinson 1991 Maurice Kernan 1992 Doug Kellogg 1993 David Schneider 1994 Kendal Broadie 1995 David Begun 1996 Chaoyong Ma 1997 Abby Dernburg 1998 Nir Hacohen 1999 Terence Murphy 2000 Bin Chen 2001 James Wilhelm 2002 Matthew C. Gibson 2003 Sinisa Urban 2004 Sean McGuire 2005 Elissa Hallem 2006 Daniel Ortiz-Barrientos 2007 Yu-Chiun Wang 2008 Adam A. L. Friedman 2009 Timothy T. Weil 2010 Leonardo B. Koerich 2011 Daniel Babcock 2012 Stephanie Turner Chen 2013 Weizhe Hong 2014 Ruei-Jiun Hung 2015 Zhao Zhang 2016 Alejandra Figueroa-Clarevega 2017 Danny E. Miller 2018 Lucy Liu 2019 Laura Seeholzer 2020 Balint Kacsoh 2021 Ching-Ho Chang 2022 Lianna Wat 2023 James O'Connor 2024 Sherzod A. Tokamov == Former chairs of the Award == 1988 Chair: Barry Ganetzky 1989 Chair: Barry Ganetzky 1990 Chair: Barry Ganetzky 1991 Chair: 1992 Chair: 1993 Chair: 1994 Chair: 1995 Chair: 1996 Chair: Margaret Fuller ("Minx" Fuller) 1997 Chair: Larry Goldstein 1998 Chair: R. Scott Hawley 1999 Chair: Bill Sullivan 2000 Chair: Bill Saxton 2001 Chair: Lynn Cooley 2002 Chair: Steve DiNardo 2003 Chair: Amanda Simcox ("Mandy Simcox") 2004 Chair: Ross Cagan 2005 Chair: Gerold Schübiger 2006 Chair: R. Scott Hawley 2007 Chair: Helen Salz 2008 Chair: Mariana Wolfner 2009 Chair: John Carlson 2010 Chair: Robin Wharton 2011 Chair: Claude Desplan 2012 Chair: Richard Mann 2013 Chair: Kenneth Irvine 2014 Chair: Marc
|
{
"page_id": 8848462,
"source": null,
"title": "Larry Sandler Memorial Award"
}
|
Freeman 2015 Chair: Erika Bach 2016 Chair: Daniela Drummond-Barbosa 2017 Chair: Bob Duronio 2018 Chair: Kim McCall 2019 Chair: Daniel Barbash 2020 Chair: Barbara Mellone 2021 Chair: Guy Tanentzapf 2022 Chair: Alissa Armstrong 2023 Chair: Tim Mosca 2024 Chair: Elizabeth Rideout == See also == List of biology awards == References ==
|
{
"page_id": 8848462,
"source": null,
"title": "Larry Sandler Memorial Award"
}
|
In physics, maximum entropy thermodynamics (colloquially, MaxEnt thermodynamics) views equilibrium thermodynamics and statistical mechanics as inference processes. More specifically, MaxEnt applies inference techniques rooted in Shannon information theory, Bayesian probability, and the principle of maximum entropy. These techniques are relevant to any situation requiring prediction from incomplete or insufficient data (e.g., image reconstruction, signal processing, spectral analysis, and inverse problems). MaxEnt thermodynamics began with two papers by Edwin T. Jaynes published in the 1957 Physical Review. == Maximum Shannon entropy == Central to the MaxEnt thesis is the principle of maximum entropy. It takes as given a partly specified model and some specified data related to the model, and selects a preferred probability distribution to represent the model. The given data state "testable information" about the probability distribution, for example particular expectation values, but are not in themselves sufficient to uniquely determine it. The principle states that one should prefer the distribution which maximizes the Shannon information entropy, S I = − ∑ i p i ln p i . {\displaystyle S_{\text{I}}=-\sum _{i}p_{i}\ln p_{i}.} This is known as the Gibbs algorithm, having been introduced by J. Willard Gibbs in 1878, to set up statistical ensembles to predict the properties of thermodynamic systems at equilibrium. It is the cornerstone of the statistical mechanical analysis of the thermodynamic properties of equilibrium systems (see partition function). 
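A concrete instance of the principle is Jaynes' Brandeis dice problem: find the maximum-entropy distribution over die faces 1–6 whose mean is constrained to a given value. The solution has the exponential form p_i ∝ exp(−λi), with the Lagrange multiplier λ fixed by the constraint. A minimal sketch, solving for λ by bisection (the function name is illustrative):

```python
import math

def maxent_die(target_mean, lo=-10.0, hi=10.0, tol=1e-12):
    """Maximum-entropy distribution on die faces 1..6 with a fixed mean
    (Jaynes' Brandeis dice problem): p_i proportional to exp(-lam * i),
    with the Lagrange multiplier lam found by bisection."""
    faces = list(range(1, 7))

    def mean(lam):
        w = [math.exp(-lam * i) for i in faces]
        z = sum(w)  # partition function
        return sum(i * wi for i, wi in zip(faces, w)) / z

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean(mid) > target_mean:
            lo = mid  # mean(lam) decreases monotonically with lam
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * i) for i in faces]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)
print([round(pi, 3) for pi in p])  # weights tilt exponentially toward face 6
```

At target mean 3.5 the constraint is uninformative and the recovered distribution is uniform (λ = 0), as the principle requires.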
A direct connection is thus made between the equilibrium thermodynamic entropy STh, a state function of pressure, volume, temperature, etc., and the information entropy for the predicted distribution with maximum uncertainty conditioned only on the expectation values of those variables: S Th ( P , V , T , … ) (eqm) = k B S I ( P , V , T , … ) {\displaystyle S_{\text{Th}}(P,V,T,\ldots )_{\text{(eqm)}}=k_{\text{B}}\,S_{\text{I}}(P,V,T,\ldots )} kB, the Boltzmann constant, has no
|
{
"page_id": 3015758,
"source": null,
"title": "Maximum entropy thermodynamics"
}
|
fundamental physical significance here, but is necessary to retain consistency with the previous historical definition of entropy by Clausius (1865) (see Boltzmann constant). However, the MaxEnt school argue that the MaxEnt approach is a general technique of statistical inference, with applications far beyond this. It can therefore also be used to predict a distribution for "trajectories" Γ "over a period of time" by maximising: S I = − ∑ p Γ ln p Γ {\displaystyle S_{\text{I}}=-\sum p_{\Gamma }\ln p_{\Gamma }} This "information entropy" does not necessarily have a simple correspondence with thermodynamic entropy. But it can be used to predict features of nonequilibrium thermodynamic systems as they evolve over time. For non-equilibrium scenarios, in an approximation that assumes local thermodynamic equilibrium, with the maximum entropy approach, the Onsager reciprocal relations and the Green–Kubo relations fall out directly. The approach also creates a theoretical framework for the study of some very special cases of far-from-equilibrium scenarios, making the derivation of the entropy production fluctuation theorem straightforward. For non-equilibrium processes, as is so for macroscopic descriptions, a general definition of entropy for microscopic statistical mechanical accounts is also lacking. Technical note: For the reasons discussed in the article differential entropy, the simple definition of Shannon entropy ceases to be directly applicable for random variables with continuous probability distribution functions. Instead the appropriate quantity to maximize is the "relative information entropy", H c = − ∫ p ( x ) log p ( x ) m ( x ) d x . {\displaystyle H_{\text{c}}=-\int p(x)\log {\frac {p(x)}{m(x)}}\,dx.} Hc is the negative of the Kullback–Leibler divergence, or discrimination information, of m(x) from p(x), where m(x) is a prior invariant measure for the variable(s). 
The relative entropy Hc is never greater than zero, and can be thought of as (the negative of) the
|
{
"page_id": 3015758,
"source": null,
"title": "Maximum entropy thermodynamics"
}
|
number of bits of uncertainty lost by fixing on p(x) rather than m(x). Unlike the Shannon entropy, the relative entropy Hc has the advantage of remaining finite and well-defined for continuous x, and invariant under 1-to-1 coordinate transformations. The two expressions coincide for discrete probability distributions, if one can make the assumption that m(xi) is uniform – i.e. the principle of equal a-priori probability, which underlies statistical thermodynamics. == Philosophical implications == Adherents to the MaxEnt viewpoint take a clear position on some of the conceptual/philosophical questions in thermodynamics. This position is sketched below. === The nature of the probabilities in statistical mechanics === Jaynes (1985, 2003, et passim) discussed the concept of probability. According to the MaxEnt viewpoint, the probabilities in statistical mechanics are determined jointly by two factors: by respectively specified particular models for the underlying state space (e.g. Liouvillian phase space); and by respectively specified particular partial descriptions of the system (the macroscopic description of the system used to constrain the MaxEnt probability assignment). The probabilities are objective in the sense that, given these inputs, a uniquely defined probability distribution will result, the same for every rational investigator, independent of the subjectivity or arbitrary opinion of particular persons. The probabilities are epistemic in the sense that they are defined in terms of specified data and derived from those data by definite and objective rules of inference, the same for every rational investigator. Here the word epistemic, which refers to objective and impersonal scientific knowledge, the same for every rational investigator, is used in the sense that contrasts it with opiniative, which refers to the subjective or arbitrary beliefs of particular persons; this contrast was used by Plato and Aristotle, and remains reliable today. 
Jaynes also used the word 'subjective' in this context because others have used it
|
{
"page_id": 3015758,
"source": null,
"title": "Maximum entropy thermodynamics"
}
|
in this context. He accepted that in a sense, a state of knowledge has a subjective aspect, simply because it refers to thought, which is a mental process. But he emphasized that the principle of maximum entropy refers only to thought which is rational and objective, independent of the personality of the thinker. In general, from a philosophical viewpoint, the words 'subjective' and 'objective' are not contradictory; often an entity has both subjective and objective aspects. Jaynes explicitly rejected the criticism of some writers that, just because one can say that thought has a subjective aspect, thought is automatically non-objective. He explicitly rejected subjectivity as a basis for scientific reasoning, the epistemology of science; he required that scientific reasoning have a fully and strictly objective basis. Nevertheless, critics continue to attack Jaynes, alleging that his ideas are "subjective". One writer even goes so far as to label Jaynes' approach as "ultrasubjectivist", and to mention "the panic that the term subjectivism created amongst physicists". The probabilities represent both the degree of knowledge and lack of information in the data and the model used in the analyst's macroscopic description of the system, and also what those data say about the nature of the underlying reality. The fitness of the probabilities depends on whether the constraints of the specified macroscopic model are a sufficiently accurate and/or complete description of the system to capture all of the experimentally reproducible behavior. This cannot be guaranteed, a priori. For this reason MaxEnt proponents also call the method predictive statistical mechanics. The predictions can fail. But if they do, this is informative, because it signals the presence of new constraints needed to capture reproducible behavior in the system, which had not been taken into account. === Is entropy "real"? === The thermodynamic entropy (at equilibrium) is a
|
{
"page_id": 3015758,
"source": null,
"title": "Maximum entropy thermodynamics"
}
|
function of the state variables of the model description. It is therefore as "real" as the other variables in the model description. If the model constraints in the probability assignment are a "good" description, containing all the information needed to predict reproducible experimental results, then that includes all of the results one could predict using the formulae involving entropy from classical thermodynamics. To that extent, the MaxEnt STh is as "real" as the entropy in classical thermodynamics. Of course, in reality there is only one real state of the system. The entropy is not a direct function of that state. It is a function of the real state only through the (subjectively chosen) macroscopic model description. === Is ergodic theory relevant? === The Gibbsian ensemble idealizes the notion of repeating an experiment again and again on different systems, not again and again on the same system. So long-term time averages and the ergodic hypothesis, despite the intense interest in them in the first part of the twentieth century, strictly speaking are not relevant to the probability assignment for the state one might find the system in. However, this changes if there is additional knowledge that the system is being prepared in a particular way some time before the measurement. One must then consider whether this gives further information which is still relevant at the time of measurement. The question of how 'rapidly mixing' different properties of the system are then becomes very much of interest. Information about some degrees of freedom of the combined system may become unusable very quickly; information about other properties of the system may go on being relevant for a considerable time. If nothing else, the medium and long-run time correlation properties of the system are interesting subjects for experimentation in themselves. Failure to accurately predict
|
{
"page_id": 3015758,
"source": null,
"title": "Maximum entropy thermodynamics"
}
|
them is a good indicator that relevant macroscopically determinable physics may be missing from the model. === The second law === According to Liouville's theorem for Hamiltonian dynamics, the hyper-volume of a cloud of points in phase space remains constant as the system evolves. Therefore, the information entropy must also remain constant, if we condition on the original information, and then follow each of those microstates forward in time: Δ S I = 0 {\displaystyle \Delta S_{\text{I}}=0\,} However, as time evolves, that initial information we had becomes less directly accessible. Instead of being easily summarizable in the macroscopic description of the system, it increasingly relates to very subtle correlations between the positions and momenta of individual molecules. (Compare to Boltzmann's H-theorem.) Equivalently, it means that the probability distribution for the whole system, in 6N-dimensional phase space, becomes increasingly irregular, spreading out into long thin fingers rather than the initial tightly defined volume of possibilities. Classical thermodynamics is built on the assumption that entropy is a state function of the macroscopic variables—i.e., that none of the history of the system matters, so that it can all be ignored. The extended, wispy, evolved probability distribution, which still has the initial Shannon entropy STh(1), should reproduce the expectation values of the observed macroscopic variables at time t2. However it will no longer necessarily be a maximum entropy distribution for that new macroscopic description. On the other hand, the new thermodynamic entropy STh(2) assuredly will measure the maximum entropy distribution, by construction. Therefore, we expect: S Th ( 2 ) ≥ S Th ( 1 ) {\displaystyle {S_{\text{Th}}}^{(2)}\geq {S_{\text{Th}}}^{(1)}} At an abstract level, this result implies that some of the information we originally had about the system has become "no longer useful" at a macroscopic level. 
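The claim that the evolved, smoothed-out distribution carries no less entropy can be illustrated numerically: locally averaging a probability distribution is a doubly-stochastic operation, so its Shannon entropy cannot decrease. A small sketch with an arbitrary example distribution (the helper names are illustrative):

```python
import math

def shannon(p):
    """Shannon entropy in nats of a discrete distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def smooth(p):
    """Circular three-point averaging: a doubly-stochastic map,
    so the smoothed distribution is less sharply peaked and its
    Shannon entropy can only increase (or stay the same)."""
    n = len(p)
    return [(p[(i - 1) % n] + p[i] + p[(i + 1) % n]) / 3 for i in range(n)]

p = [0.7, 0.1, 0.1, 0.05, 0.05]
q = smooth(p)
print(shannon(p) < shannon(q))  # True: smoothing raised the entropy
```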
At the level of the 6N-dimensional probability distribution,
|
{
"page_id": 3015758,
"source": null,
"title": "Maximum entropy thermodynamics"
}
|
this result represents coarse graining—i.e., information loss by smoothing out very fine-scale detail. === Caveats with the argument === Some caveats should be considered with the above. 1. Like all statistical mechanical results according to the MaxEnt school, this increase in thermodynamic entropy is only a prediction. It assumes in particular that the initial macroscopic description contains all of the information relevant to predicting the later macroscopic state. This may not be the case, for example if the initial description fails to reflect some aspect of the preparation of the system which later becomes relevant. In that case the "failure" of a MaxEnt prediction tells us that there is something more which is relevant that we may have overlooked in the physics of the system. It is also sometimes suggested that quantum measurement, especially in the decoherence interpretation, may give an apparently unexpected reduction in entropy per this argument, as it appears to involve macroscopic information becoming available which was previously inaccessible. (However, the entropy accounting of quantum measurement is tricky, because to get full decoherence one may be assuming an infinite environment, with an infinite entropy). 2. The argument so far has glossed over the question of fluctuations. It has also implicitly assumed that the uncertainty predicted at time t1 for the variables at time t2 will be much smaller than the measurement error. But if the measurements do meaningfully update our knowledge of the system, our uncertainty as to its state is reduced, giving a new SI(2) which is less than SI(1). (Note that if we allow ourselves the abilities of Laplace's demon, the consequences of this new information can also be mapped backwards, so our uncertainty about the dynamical state at time t1 is now also reduced from SI(1) to SI(2)). We know that STh(2) > SI(2);
|
{
"page_id": 3015758,
"source": null,
"title": "Maximum entropy thermodynamics"
}
|
but we can now no longer be certain that it is greater than STh(1) = SI(1). This then leaves open the possibility for fluctuations in STh. The thermodynamic entropy may go "down" as well as up. A more sophisticated analysis is given by the entropy Fluctuation Theorem, which can be established as a consequence of the time-dependent MaxEnt picture. 3. As just indicated, the MaxEnt inference runs equally well in reverse. So given a particular final state, we can ask, what can we "retrodict" to improve our knowledge about earlier states? However the Second Law argument above also runs in reverse: given macroscopic information at time t2, we should expect it too to become less useful. The two procedures are time-symmetric. But now the information will become less and less useful at earlier and earlier times. (Compare with Loschmidt's paradox.) The MaxEnt inference would predict that the most probable origin of a currently low-entropy state would be as a spontaneous fluctuation from an earlier high entropy state. But this conflicts with what we know to have happened, namely that entropy has been increasing steadily, even back in the past. The MaxEnt proponents' response to this would be that such a systematic failing in the prediction of a MaxEnt inference is a "good" thing. It means that there is thus clear evidence that some important physical information has been missed in the specification of the problem. If it is correct that the dynamics "are" time-symmetric, it appears that we need to put in by hand a prior probability that initial configurations with a low thermodynamic entropy are more likely than initial configurations with a high thermodynamic entropy. This cannot be explained by the immediate dynamics. Quite possibly, it arises as a reflection of the evident time-asymmetric evolution of the universe on a
|
{
"page_id": 3015758,
"source": null,
"title": "Maximum entropy thermodynamics"
}
|
cosmological scale (see arrow of time). == Criticisms == Maximum entropy thermodynamics has some important opposition, in part because of the relative paucity of published results from the MaxEnt school, especially with regard to new testable predictions far from equilibrium. The theory has also been criticized on the grounds of internal consistency. For instance, Radu Balescu provides a strong criticism of the MaxEnt School and of Jaynes' work. Balescu states that the theory of Jaynes and coworkers is based on a non-transitive evolution law that produces ambiguous results. Although some difficulties of the theory can be cured, the theory "lacks a solid foundation" and "has not led to any new concrete result". Though the maximum entropy approach is based directly on informational entropy, it is applicable to physics only when there is a clear physical definition of entropy. There is no clear unique general physical definition of entropy for non-equilibrium systems, which are general physical systems considered during a process rather than thermodynamic systems in their own internal states of thermodynamic equilibrium. It follows that the maximum entropy approach will not be applicable to non-equilibrium systems until a clear physical definition of entropy is found. This problem is related to the fact that heat may be transferred from a hotter to a colder physical system even when local thermodynamic equilibrium does not hold so that neither system has a well defined temperature. Classical entropy is defined for a system in its own internal state of thermodynamic equilibrium, which is defined by state variables, with no non-zero fluxes, so that flux variables do not appear as state variables. But for a strongly non-equilibrium system, during a process, the state variables must include non-zero flux variables. Classical physical definitions of entropy do not cover this case, especially when the fluxes are large enough to
|
{
"page_id": 3015758,
"source": null,
"title": "Maximum entropy thermodynamics"
}
|
destroy local thermodynamic equilibrium. In other words, for entropy for non-equilibrium systems in general, the definition will need at least to involve specification of the process including non-zero fluxes, beyond the classical static thermodynamic state variables. The 'entropy' that is maximized needs to be defined suitably for the problem at hand. If an inappropriate 'entropy' is maximized, a wrong result is likely. In principle, maximum entropy thermodynamics does not refer narrowly and only to classical thermodynamic entropy. It is about informational entropy applied to physics, explicitly depending on the data used to formulate the problem at hand. According to Attard, for physical problems analyzed by strongly non-equilibrium thermodynamics, several physically distinct kinds of entropy need to be considered, including what he calls second entropy. Attard writes: "Maximizing the second entropy over the microstates in the given initial macrostate gives the most likely target macrostate." The physically defined second entropy can also be considered from an informational viewpoint. == See also == Edwin Thompson Jaynes First law of thermodynamics Second law of thermodynamics Principle of maximum entropy Principle of Minimum Discrimination Information Kullback–Leibler divergence Quantum relative entropy Information theory and measure theory Entropy power inequality == References == === Bibliography of cited references === Balescu, Radu (1997). Statistical Dynamics: Matter out of equilibrium. London: Imperial College Press. Bibcode:1997sdmo.book.....B. Jaynes, E.T. (September 1968). "Prior Probabilities" (PDF). IEEE Transactions on Systems Science and Cybernetics. SSC–4 (3): 227–241. doi:10.1109/TSSC.1968.300117. Guttmann, Y.M. (1999). The Concept of Probability in Statistical Physics, Cambridge University Press, Cambridge UK, ISBN 978-0-521-62128-1. Jaynes, E.T. (1979). "Where do we stand on maximum entropy?" (PDF). In Levine, R.; Tribus M. (eds.). 
The Maximum Entropy Formalism. MIT Press. ISBN 978-0-262-12080-7. Jaynes, E.T. (1985). "Some random observations". Synthese. 63: 115–138. doi:10.1007/BF00485957. S2CID 46975520. Jaynes, E.T. (2003). Bretthorst, G.L. (ed.). Probability Theory: The Logic of
|
{
"page_id": 3015758,
"source": null,
"title": "Maximum entropy thermodynamics"
}
|
Science. Cambridge: Cambridge University Press. ISBN 978-0-521-59271-0. Kleidon, Axel; Lorenz, Ralph D. (2005). Non-equilibrium thermodynamics and the production of entropy: life, earth, and beyond. Springer. pp. 42–. ISBN 978-3-540-22495-2. == Further reading ==
|
{
"page_id": 3015758,
"source": null,
"title": "Maximum entropy thermodynamics"
}
|
Chitra Dutta is a former chief scientist and head of the Structural Biology and Bioinformatics division at CSIR-Indian Institute of Chemical Biology, Kolkata, India. She is a physicist working in the areas of bioinformatics and computational biology. She is engaged in 'in-silico' analysis of genome/proteome architectures of host/vector/pathogen systems in the quest for novel intervention strategies. Comparative genome analysis of various bacterial, viral and parasitic pathogens conducted by her group has not only given an insight into the natural forces driving the molecular evolution of the microbial world, but also provided a better understanding of the intricacies of pathogen–host interactions and co-evolution. She has demonstrated how the relative strengths of various selection pressures vary within and across organisms depending on their G+C-content, life-style and taxonomic distribution. Her group has also delineated the role played by mutational imbalance, hydrophobicity, gene expressivity and aromaticity in shaping microbial protein architectures. She is also internationally acclaimed for her studies on 'Chaos game representation'. She has developed novel algorithms for recognition of fractal patterns in nucleotide and amino acid sequences through statistical analyses of the genome and proteome composition of different thermophilic and symbiotic/parasitic organisms. She has revealed that thermal adaptation involves overrepresentation of purine bases in mRNAs, higher GC-content of the structural RNAs and enhanced usage of positively charged and aromatic residues at the cost of neutral polar residues, while parasitic adaptation is reflected in extreme genome reduction, the presence of weak translational selection and large heterogeneity in membrane-associated proteins. Recent work from her group on the 'pan-genomic analysis of human microbiome in health and diseases' has also been widely acknowledged in the scientific literature. 
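For background on the chaos game representation (CGR) mentioned above: CGR maps a sequence onto the unit square so that compositional biases appear as fractal patterns. The following is a minimal sketch of the standard CGR construction only, not of Dutta's specific algorithms:

```python
# Standard chaos game representation (CGR) for a DNA sequence: each base is
# assigned a corner of the unit square, and the current point moves halfway
# toward the corner of each successive base.
CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def cgr_points(seq):
    """Return the list of CGR points for a DNA sequence."""
    x, y = 0.5, 0.5  # start at the centre of the unit square
    pts = []
    for base in seq.upper():
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0
        pts.append((x, y))
    return pts

pts = cgr_points("ATGC")
```

Plotting the points for a long sequence reveals the fractal patterns that compositional analyses of this kind exploit; all points remain inside the unit square by construction.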
Chitra Dutta completed her B.Sc. in physics, chemistry and mathematics in 1976 and M.Sc. in physics in 1977, both from Visva-Bharati university. She completed her Ph.D. in physics from CSIR-Indian
|
{
"page_id": 44106830,
"source": null,
"title": "Chitra Dutta"
}
|
Institute of Chemical Biology, University of Calcutta, in 1984. Among the honours and awards conferred on her are the Fellowship of the National Academy of Sciences (1992), the DBT Overseas Associateship (1994), the Young Physicist Award (1985), the Special Prize for Academic Achievement of the Jawaharlal Nehru Memorial Fund (1978), and the National Merit Scholarship, Govt. of India (1976). She is a member of the Advisory Committee on Bioinformatics, Department of Science & Technology, WB. She has been regularly involved in the review of manuscripts for reputed international journals. She is also involved in teaching at the postgraduate level at Calcutta University, Visva-Bharati and the West Bengal University of Technology. == References ==
|
{
"page_id": 44106830,
"source": null,
"title": "Chitra Dutta"
}
|
Sven Kullander (9 March 1936 – 28 January 2014) was a Swedish physicist. He was professor of High Energy Physics at Uppsala University. Kullander received his doctorate from Uppsala University in 1971. He took part in experiments on measurements of nuclear shell structure from meson scattering carried out in accelerators, on the structure of Helium nuclei, and on quark structure of matter by meson production. He also contributed to the development of accelerators. Since 1990, Kullander had been a member of the Royal Swedish Academy of Sciences and since 2004, chairman of its Energy Committee. == References ==
|
{
"page_id": 31458385,
"source": null,
"title": "Sven Kullander (physicist)"
}
|
This page provides supplementary chemical data on glycerol. == Material Safety Data Sheet == The handling of this chemical may incur notable safety precautions. It is highly recommended that you seek the Material Safety Data Sheet (MSDS) for this chemical from a reliable source and follow its directions. == Structure and properties == == Thermodynamic properties == == Vapor pressure of liquid == Table data obtained from CRC Handbook of Chemistry and Physics, 44th ed. The natural logarithm of the glycerol vapor pressure follows the formula ln(P_kPa) = A·ln(T) + B/T + C + D·T², with coefficients A = −2.125867×10¹, B = −1.672626×10⁴, C = 1.655099×10², and D = 1.100480×10⁻⁵, obtained from CHERIC. == Freezing point of aqueous solutions == Table data obtained from Lange's Handbook of Chemistry, 10th ed. Specific gravity is at 15 °C, referenced to water at 15 °C. See details on: Freezing Points of Glycerine-Water Solutions (Dow Chemical) or Freezing Points of Glycerol and Its Aqueous Solutions. == Distillation data == == Spectral data == == References ==
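The vapor-pressure correlation quoted above can be evaluated directly. This sketch uses the CHERIC coefficients given in the text; the comparison against glycerol's normal boiling point (about 290 °C, i.e. roughly 563 K) is only a rough sanity check, not a tabulated value:

```python
import math

# ln(P_kPa) = A*ln(T) + B/T + C + D*T**2, with T in kelvin
# (coefficients from the CHERIC correlation quoted above)
A = -2.125867e+01
B = -1.672626e+04
C = 1.655099e+02
D = 1.100480e-05

def glycerol_vapor_pressure_kpa(t_kelvin: float) -> float:
    """Estimated vapor pressure of glycerol in kPa at the given temperature."""
    ln_p = A * math.log(t_kelvin) + B / t_kelvin + C + D * t_kelvin ** 2
    return math.exp(ln_p)

# Near the normal boiling point (~563 K) the result should be roughly
# atmospheric pressure (~101 kPa).
p_bp = glycerol_vapor_pressure_kpa(563.0)
```

Evaluating at 563 K gives a pressure on the order of 100 kPa, consistent with glycerol boiling near 290 °C at atmospheric pressure.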
|
{
"page_id": 11207764,
"source": null,
"title": "Glycerol (data page)"
}
|
Ecospirituality connects the science of ecology with spirituality. It brings together religion and environmental activism. Ecospirituality has been defined as "a manifestation of the spiritual connection between human beings and the environment." The new millennium and the modern ecological crisis have created a need for environmentally based religion and spirituality. Ecospirituality is understood by some practitioners and scholars as one result of people wanting to free themselves from a consumeristic and materialistic society. Ecospirituality has been critiqued for being an umbrella term for concepts such as deep ecology, ecofeminism, and nature religion. Proponents may come from a range of faiths including: Islam; Jainism; Christianity (Catholicism, Evangelicalism and Orthodox Christianity); Judaism; Hinduism; Buddhism and Indigenous traditions. Although many of their practices and beliefs may differ, a central claim is that there is "a spiritual dimension to our present ecological crisis." According to the environmentalist Sister Virginia Jones, "Eco-spirituality is about helping people experience 'the holy' in the natural world and to recognize their relationship as human beings to all creation." Ecospirituality has been influenced by the ideas of deep ecology, which is characterized by "recognition of the inherent value of all living beings and the use of this view in shaping environmental policies". The related field of ecopsychology refers to the connections between the science of ecology and the study of psychology. 'Earth-based' spirituality is another term related to ecospirituality; it is associated with pagan religious traditions and the work of the prominent ecofeminist Starhawk. Ecospirituality refers to the intertwining of intuition and bodily awareness pertaining to a relational view between human beings and the planet. == Origins == Ecospirituality finds its history in the relationship between spirituality and the environment. 
Some scholars say it "flows from an understanding of cosmology or the story of the origin of the universe." There are multiple
|
{
"page_id": 4457559,
"source": null,
"title": "Ecospirituality"
}
|
origin stories about how the spiritual relationship between people and the environment began. In Native American philosophy, there are many unique stories of how spirituality came to be. A common theme in a number of them is the discussion of a Great Spirit that lives within the universe, with the earth representing its presence. Ecospirituality has also sprung from a reaction to the Western world's materialism and consumerism, characterized by ecotheologian Thomas Berry as a "crisis of cosmology." Scholars have argued that "the modern perspective is based on science and focused on the human self with everything else being outside, resulting in the demise of the metaphysical world and the disenchantment with the cosmos." Therefore, ecospirituality originates as a rebuttal to the emphasis on the material as well as the Western separation from the environment, where the environment is regarded as a set of material resources with primarily instrumental value. == Ecological crisis == Ecospirituality became popularized due to a need for a reconceptualization of the human relationship with the environment. Terms such as environmental crisis, ecological crisis, climate change, and global warming all refer to an ongoing global issue that needs to be addressed. Generally, the ecological crisis refers to the destruction of the earth's ecosystem. What this encompasses is a highly controversial debate in scientific and political spheres. Globally we are faced with pollution of our basic needs (air and water) as well as the depletion of important resources, most notably food resources. Annette Van Schalkwyk refers to the environmental crisis as “man-made”. It is arguably the result of a “mechanistic and capitalistic world view”. Whether it is man-made, or, as some argue, a natural occurrence, humans are not helping. Pollution and depletion of resources play a major role in the ecological crisis. Bringing religion into the ecological crisis
|
{
"page_id": 4457559,
"source": null,
"title": "Ecospirituality"
}
|
is controversial due to the divide between religion and science. Ecospirituality is prepared to acknowledge science, and to work in tandem with religion to frame the environment as a sacred entity in need of protection. Mary Evelyn Tucker notes the importance of religion and ecology connecting with sustainability. Due to the environmental crisis, perceptions of sustainability are changing. Religion and ecology, and the way people experience ecospirituality, could contribute to this changing definition of sustainability. == Research on ecospirituality == Ecospirituality has been studied by academics in order to arrive at a clearer definition of what individuals label as ecospirituality and the framework in which they create this definition. One study focused on holistic nurses, who themselves characterize their profession as having a fundamentally spiritual nature and a sense of the importance of the environment. Researchers performed a phenomenological study in which they assessed the nurses' ecospiritual consciousness. For the purpose of their study, they defined ecospiritual consciousness as "accessing a deep awareness of one's ecospiritual relationship." They then narrowed down their findings to the five principles of ecospiritual consciousness: tending, dwelling, reverence, connectedness, and sentience. Tending was defined as "being awake and conscious," with "deep, inner self-reflection." Dwelling was defined as "a process of being with the seen and the unseen." Reverence was defined as "rediscovering the mystery present in all creation and an embodied sense of the sacred," focusing on the earth. Connectedness was defined as an "organic relationship with the universe." Sentience was defined as "a sense of knowing." Another study looked at the medical effects of ecospirituality by having patients with cardiovascular disease practice "environmental meditation" and log regular journal entries about their experiences. 
Researchers started out with the research question of, "What is the essence of the experience of ecospirituality meditation in patients with CVD?" CVD
|
{
"page_id": 4457559,
"source": null,
"title": "Ecospirituality"
}
|
is an acronym for cardiovascular disease. From analyzing the journal entries of participants, researchers abstracted four major themes of ecospirituality meditation: entering a new time zone, environmental reawakening, finding a new rhythm, and the creation of a healing environment. Entering a new time zone was described by the researchers as "the expansion of time during meditation." Environmental reawakening was described as having "opened participants’ eyes to vistas not previously noticed." Finding a new rhythm was described as "enhanced relationships with their family, friends, coworkers, and even their pets." The creation of a healing environment was described as follows: "With raised consciousnesses, they became aware of the choices they had regarding what types of intentions and energy they wanted to put out in their environment." This research was driven by the goal of raising awareness among healthcare professionals about ecospirituality and the medical importance of both self and environmental consciousness. Anecdotal evidence showed a decrease in blood pressure. However, the psychological benefits of environmental meditation were the main focus for the researchers. == Dark Green Religion == Dark Green Religion is one way in which people, both secular and religious, connect with nature on a spiritual level. Bron Taylor defines Dark Green Religion as "religion that considers nature to be sacred, imbued by intrinsic value, and worthy of reverent care" in his book Dark Green Religion: Nature Spirituality and the Planetary Future. Nature religion is an overarching term of which Dark Green Religion is a part. A key part of Dark Green Religion is the "depth of its consideration of nature." Dark Green Religion differs from Green Religion. Green Religion claims that it is a religious obligation for humans to be environmental stewards, while Dark Green Religion is a movement that simply holds nature as valuable
|
{
"page_id": 4457559,
"source": null,
"title": "Ecospirituality"
}
|
and sacred. Spiritual types of Dark Green Religion include Naturalistic and Supernaturalistic forms of Animism and of Gaianism. The diverse views within Dark Green Religion nevertheless share the idea that the earth is sacred and worthy of care. The perceptions of Dark Green Religion are global and flexible. Taylor's use of the word 'Dark' gestures both toward the depth of this engagement with nature and toward its potentially negative possibilities. According to Taylor, Dark Green Religion has the possibility to "inspire the emergence of a global, civic, earth religion." Dark Green, Green and Nature Religions are arguably all a part of ecospirituality. The term ecospirituality is versatile and overarching. == Ecofeminism and spirituality == The umbrella term "ecospirituality" covers the feminist theology called ecofeminism. The term ecofeminism was first coined by the French writer Françoise D'Eaubonne in her book Le Féminisme ou la Mort, in order to name the connection between the patriarchal subjugation of women and the destruction of nature. In it, she argues that women have different ways of seeing and relating to the world than men. These differences can give rise to alternative insights on interactions between humans and the natural world when women's perspectives are considered. The suppression and control of women and the natural world are connected. On the ecofeminist view, women are controlled because they are thought to be closer to primitive nature. By understanding the connection between femininity and nature and by exploring feminine ways of seeing and relating, ecofeminism asserts that humans can realize positive ways of interacting with the natural world and with each other. === Ecofeminism and Christianity on the ecological crisis === A significant figure in Christian ecofeminism is Rosemary Radford Ruether. Ruether argues that feminism and ecology share a common vision, even though they use different languages. In her work, Gaia and God: An Ecofeminist Theology of Earth Healing
|
{
"page_id": 4457559,
"source": null,
"title": "Ecospirituality"
}
|
Ruether provides three recommendations on ways to move forward with repairing and "healing" the ecological crisis. The first recommendation is that "the ecological crisis needs to be seen not just as a crisis in the health of nonhuman ecosystems, polluted water, contaminated skies, threatened climate change, deforestation, extinction of species, important as all these realities are. Rather one needs to see the interconnections between the impoverishment of the earth and the impoverishment of human groups, even as others are enriching themselves to excess." The second recommendation is that "a healed ecosystem – humans, animals, land, air, and water together – needs to be understood as requiring a new way of life, not just a few adjustments here and there." The third and final recommendation is that a new vision is necessary: "one needs to nurture the emergence of a new planetary vision and communal ethic that can knit together people across religions and cultures. There is rightly much dismay at the role that religions are playing in right-wing politics and even internecine violence today. But we need also to recognize the emergence of new configurations of inter-religious relations." === Ecofeminism and Christianity in liberation theology === According to Ivone Gebara, in Latin America, particularly in the Christian Churches of Brazil, it is difficult to be a feminist, and even more difficult to be an ecofeminist. Gebara describes ecology as one of the "deepest concerns of feminism" and as having "a deep resonance or a political and anthropological consequence from a feminist perspective." Gebara believes that it is the task of different groups of Latin American women to "provide a new order of meaning including marginalized people." This task is both challenging and political. Gebara says: "We can choose the life of the planet and the respect of all
|
{
"page_id": 4457559,
"source": null,
"title": "Ecospirituality"
}
|
living beings or we choose to die by our own bad decisions." == World religions and ecospirituality == === Ecospirituality and paganism === Paganism is a nature-based religion that exists in a multitude of forms. There is no official doctrine or sacred text that structures its practice. Due to its lack of structure, many Pagans believe that it should be used as a tool to combat the current ecological crisis because it is flexible and can adapt to the environment's needs. Ecospirituality advocates contend that an ecology-based religion that focuses on the nurturing and healing of the earth is necessary in modernity. As paganism is already based in nature worship, many believe it would be a useful starting point for ecospirituality. In fact, neopagan revivals have seen the emergence of pagan communities that are more earth-focused. They may build their rituals around advocacy for a sustainable lifestyle and emphasize complete interconnectedness with the earth. Paganism understands divine figures to exist not as transcendent beings, but as immanent beings in the present realm, meaning that their divine figures exist within each of us, and within nature. Many pagans believe in interconnectedness among all living beings, which allows them to foster moments of self-reflection before acting. These pagan ideals coincide with ecospirituality because pagans understand the environment to be part of the divine realm and part of their inner self. Therefore, in their view, harming the environment directly affects their wellbeing. Pagans have already recognized the importance of incorporating environmental ideologies with their own religious beliefs. The Dragon Environmental Network is a pagan community based in the UK. They are committed to practicing "eco-magic" with the intention of recognizing the earth as sacred and divine. Their four goals are as follows: Increase general awareness of the sacredness of the Earth. Encourage pagans
|
{
"page_id": 4457559,
"source": null,
"title": "Ecospirituality"
}
|
to become involved in conservation work. Encourage pagans to become involved in environmental campaigns. Develop the principles and practice of magical and spiritual action for the environment. Paganism combines religion with environmental activism. Pagans organize protests, campaigns, and petitions with the environment in mind while staying true to their religious beliefs. Bron Taylor argues that core Pagan beliefs greatly strengthen Pagans' environmental activism. Additionally, the Pagan community has recently released a statement on the ecological crisis. It explains that Pagans lead lives that foster “harmony with the rhythms of our great Earth" and that they view the Earth as their equal, stating “we are neither above nor separate from the rest of nature”. It states that we are part of a web of life, and are fully interconnected with the biosphere. This connection to all living beings is seen as spiritual and sacred. In turn, it provides a framework that Pagans can use to combine their religious beliefs with environmental activism. It calls for a return to ancient understandings of the earth by listening to ancient wisdom. It asks Pagans to practice their religion in all aspects of their lives in order to give the Earth room to heal. The statement concludes by stating “building a truly sustainable culture means transforming the systems of domination and exploitation that threaten our future into systems of symbiotic partnership that support our ecosystems”. === Ecospirituality and Christianity === Most Christian theology has centered on the doctrine of creation. According to Elizabeth Johnson, in recent years this has led to growing ecological awareness among Christians. The logic of this stance is rooted in the theological idea that since God created the world freely, it has an intrinsic value and is worthy of our respect and care. In 1990, Pope John Paul
|
{
"page_id": 4457559,
"source": null,
"title": "Ecospirituality"
}
|
II wrote a letter on ecological issues. He concluded the letter with a discussion of Christian belief and how it should lead to ethical care of the earth, ending with the principle "respect for life and the dignity of human person must extend also to the rest of creation." The doctrines of Christ that Christians follow also have the potential for ecological spirituality, for they support interpretations that are consistent with ecospirituality. According to Elizabeth Johnson, Jesus' view of the Kingdom of God included earthly wellbeing. According to Thomas Berry, Christians recognize a need for an Earth Ethic. The Ecumenical Patriarch Bartholomew, leader of the Greek Orthodox Church, has organized major religion and science symposia on water issues across Europe, the Amazon River and Greenland. He has issued statements – including a joint statement with John Paul II in 2002 – calling destruction of the environment "ecological sin." Bishop Malone, president of the National Conference of Catholic Bishops, has said: "The Church stands in need of a new symbolic and affective system through which to proclaim the Gospel to the modern world." The late ecotheologian Thomas Berry argued that Christians often fail to realize that both their social and religious wellbeing depend on the wellbeing of Earth. Earth provides sustenance for physical, imaginative, emotional, and religious wellbeing. In Thomas Berry's view, the Christian future will depend on the ability of Christians to assume their responsibility for Earth's fate. An example of such responsibility-taking can be seen in the founding of an association called "Sisters of Earth," which is made up of nuns and laywomen. This network of women from diverse religious communities is significant, both for the movement of general concern for the natural world and for the religious life in Christian
|
{
"page_id": 4457559,
"source": null,
"title": "Ecospirituality"
}
|
contexts. === Ecospirituality and Hinduism === Many teachings in Hinduism are intertwined with the ethics of ecospirituality in their stress on environmental wellbeing. The Hindu text called the Taittiriya Upanishad refers to creation as the offspring of the Supreme Power, paramatman. Thus, the environment is related to something that is divine and therefore deserves respect. Since the late 1980s, when the negative effects of mass industrialization were becoming widely recognized, India has instituted administrative policies to deal with environmental conservation. These policies were rooted in the ways that the Hindu religion is tied to the land. In the Hindu text Yajurveda (32.10), God is described as being present in all living things, further reinforcing the need to show respect for creation. Passages such as this lead some Hindus to become vegetarian and to affirm a broader type of ecospiritual connection to the Earth. Vishnu Purana 3.8.15 states that "God, Kesava, is pleased with a person who does not harm or destroy other non-speaking creatures or animals." This notion is tied in with the Hindu concept of karma, which holds that the pain caused to other living things will come back to you through the process of reincarnation. Ecospirituality can also be seen in the Prithivi Sukta, which is a "Hymn to Mother Earth." In this text, the Earth is personified as a spiritual being to which humans have familial ties. The notion of praising and viewing the Earth in this way brings out Hinduism's strong connections to ecospirituality. === Ecospirituality and Jainism === The contemporary Jaina faith is "inherently ecofriendly". In terms of the ecological crisis, Jains are “quite self-conscious of the ecological implications of their core teachings.” Jain teachings center on five vows that serve to reverse the flow of karma or release it. One of these vows is ahimsa, or non-violence.
|
{
"page_id": 4457559,
"source": null,
"title": "Ecospirituality"
}
|
Ahimsa “is said to contain the key to advancement along the spiritual path (sreni). This requires abstaining from harm to any being that possesses more than one sense”. The principles of the Jaina tradition are rooted in environmental practices. The Jaina connection to nature is conducive to ecospirituality. === Ecospirituality and Islam === Some scholars argue that the scriptural sources of Islam show it to be an ecologically oriented religion. In these textual sources, the shari'a sets out a number of environmentally focused guidelines to promote environmentalism, in particular the "maintenance of preserves, distribution of water, and the development of virgin lands." Much of Muslim environmentalism is a result of the Qur'anic stress on stewardship, which is explained through the Arabic concept khilafa. A quote translated from the hadith states, "Verily, this world is sweet and appealing, and Allah placed you as vice-regents therein! He will see what you do." Within the Islamic faith, there is a set importance to following the messages set forth in scripture; the environmentalism spoken through them has therefore led to a spirituality around the environment. This spirituality can also be seen in the Qur'anic concept of tawhid, which translates to unity. Many Muslim environmentalists understand this meaning spiritually as "all-inclusive" in relation to the Earth. A majority of Muslim writers draw attention to the environmental crisis as a direct result of social injustice. Many argue that the problem is not that "humans as a species are destroying the balance of nature, but rather that some humans are taking more than their share." Muslim environmentalists such as Fazlun Khalid, Yasin Dutton, Omar Vadillo, and Hashim Dockrat have argued that the capitalist nature of the global economy is un-Islamic and essentially leads to ecological crisis. The issues of environmental
|
{
"page_id": 4457559,
"source": null,
"title": "Ecospirituality"
}
|
degradation are especially important to Muslims, as the majority of Muslims live in developing countries where they see the effects of the ecological crisis on a daily basis. This has led to conferences on Islam and the environment being held in Iran and Saudi Arabia, as well as the introduction of environmental nongovernmental organizations. === Ecospirituality and Buddhism === Buddhism was founded in ancient India between the 6th and 4th century BCE. With modern concerns over issues such as global warming, many Buddhist scholars have looked back at what Buddhist teaching has to say about the environmental crisis and developed what is called Green Buddhism. One of the key players in this development was Gary Snyder, who brought to light where Buddhist practice and ecological thinking intertwine. Green Buddhism gained attention in the 1980s, when its proponents publicly addressed the ecological crisis to create awareness, and in 1989, when the Dalai Lama won the Nobel Peace Prize after proposing the designation of Tibet as an ecological reserve. Buddhism has been open to working with other world religions to combat the environmental crisis, as seen at an international conference for Buddhist-Christian studies that addressed the environment. Although Green Buddhism has not commented much on technical issues such as air and water pollution, its proponents use their spirituality to focus heavily on "rich resources for immediate application in food ethics, animal rights, and consumerism." == See also == Religion and environmentalism – Interdisciplinary subfield Ecopsychology – Psychological relationship between humans and the natural world Spiritual ecology – Field in religion, conservation, and academia Environmental ethics – Part of environmental philosophy Indigenous American philosophy – Philosophies of the first inhabitants of the Americas Religious naturalism – Naturalism in religion == References ==
|
{
"page_id": 4457559,
"source": null,
"title": "Ecospirituality"
}
|
The Anselme Payen Award is an annual prize named in honor of Anselme Payen, the French scientist who discovered cellulose and was a pioneer in the chemistry of both cellulose and lignin. In 1838, he discovered that successively treating wood with nitric acid and an alkaline solution yielded a major insoluble residue that he called "cellulose", while the dissolved incrustants were later called "lignin" by Frank Schulze. He was the first to attempt separation of wood into its component parts. After treating different woods with nitric acid he obtained a fibrous substance common to all, which he also found in cotton and other plants. His analysis revealed the chemical formula of the substance to be C6H10O5. He reported the discovery and the first results of this classic work in 1838 in Comptes Rendus. The name "cellulose" was coined by him and introduced into the scientific literature the following year, in 1839. The Anselme Payen Award, which includes a medal and an honorarium, is given by the American Chemical Society's Cellulose and Renewable Materials Division to honor and encourage "outstanding professional contributions to the science and chemical technology of cellulose and its allied products". The Anselme Payen Award is an international award, and any scientist conducting cellulose and cellulose-related research is eligible for nomination. Selection of the awardee is based upon an evaluation of the nomination packages submitted on behalf of potential awardees. These documents are individually ranked by a panel of nine judges who are appointed by the current Chair-Elect and are unknown to each other. Three judges rotate off the panel each year. The identity of all members is known only to the Chair of the awards committee, who compiles the results. After the awardee accepts, the Chair of the Awards Committee announces the winner at the
|
{
"page_id": 12387417,
"source": null,
"title": "Anselme Payen Award"
}
|
next Spring ACS meeting. The awardee for that year is honored at the following Spring ACS meeting at a Symposium and Banquet. The award bears the year the winner was announced. It is presented the following year to allow time for organization of the Symposium and Banquet. == Recipients == == References ==
Cytochrome P450, family 76, also known as CYP76, is a cytochrome P450 family in land plants, related to the biosynthesis of many plant monoterpenes and diterpenes such as 8-hydroxygeraniol, tanshinone and alkannin. The first genes identified in this family were CYP76A1 and CYP76A2, from the eggplant. == References ==
Membranome is the set of biological membranes existing in a specific organism. The term was proposed by British biologist Thomas Cavalier-Smith to discuss epigenetics of biological membranes. The term was also used to define the entire set of membrane proteins in an organism or a combination of membrane proteome and lipidome. == References == == See also == Membranome database
Mark L. Wheelis is an American microbiologist. Wheelis is currently a professor in the College of Biological Sciences, University of California, Davis. Together with Carl Woese and Otto Kandler, Wheelis wrote the influential paper "Towards a natural system of organisms: proposal for the domains Archaea, Bacteria, and Eucarya", which proposed a change from the two-empire system of prokaryotes and eukaryotes to the three-domain system of the domains Eukaryota, Bacteria and Archaea. Wheelis's research interests include the history of biological warfare. He co-authored (with Larry Gonick) The Cartoon Guide to Genetics (1983); Wheelis provided the scientific knowledge and text, while Gonick contributed the illustrations and humor. == Works == Larry Gonick & Mark Wheelis, The Cartoon Guide to Genetics, Longman Higher Education, 1983, 216 pp. "Biological Warfare before 1914", In: Geissler E, Moon JEvC, editors. Biological and toxin weapons: research, development and use from the Middle Ages to 1945. London: Oxford University Press; 1999. pp 8–34. == References == == External links == Mark Wheelis at IMDb
In astrophysics, the ergosphere is a region located outside a rotating black hole's outer event horizon. Its name was proposed by Remo Ruffini and John Archibald Wheeler during the Les Houches lectures in 1971 and is derived from Ancient Greek ἔργον (ergon) 'work'. It received this name because it is theoretically possible to extract energy and mass from this region. The ergosphere touches the event horizon at the poles of a rotating black hole and extends to a greater radius at the equator. A black hole with modest angular momentum has an ergosphere with a shape approximated by an oblate spheroid, while faster spins produce a more pumpkin-shaped ergosphere. The equatorial (maximal) radius of an ergosphere is the Schwarzschild radius, the radius of a non-rotating black hole. The polar (minimal) radius is also the polar (minimal) radius of the event horizon, which can be as little as half the Schwarzschild radius for a maximally rotating black hole. == Rotation == As a black hole rotates, it twists spacetime in the direction of the rotation at a speed that decreases with distance from the event horizon. This process is known as the Lense–Thirring effect or frame-dragging. Because of this dragging effect, an object within the ergosphere cannot appear stationary with respect to an outside observer at a great distance unless that object were to move faster than the speed of light (an impossibility) with respect to the local spacetime. The speed necessary for such an object to appear stationary decreases at points further out from the event horizon, until at some distance the required speed is negligible. The set of all such points defines the ergosphere's surface, called the ergosurface. The outer surface of the ergosphere is called the static surface or static limit. This is because world lines change from
being time-like outside the static limit to being space-like inside it. The ergosphere surface is thus defined by where the speed required to remain stationary reaches the speed of light. Such a surface would appear as an oblate spheroid that is coincident with the event horizon at the pole of rotation, but at a greater distance from the event horizon at the equator. Outside this surface, space is still dragged, but at a lesser rate. == Radial pull == A suspended plumb bob, held stationary outside the ergosphere, will experience an infinite/diverging radial pull as it approaches the static limit. At some point it will start to fall, resulting in a gravitomagnetically induced spinward motion. An implication of this dragging of space is the existence of negative energies within the ergosphere. Since the ergosphere is outside the event horizon, it is still possible for objects that enter that region with sufficient velocity to escape from the gravitational pull of the black hole. An object can gain energy by entering the black hole's rotation and then escaping from it, thus taking some of the black hole's energy with it (making the maneuver similar to the exploitation of the Oberth effect around "normal" space objects). This process of removing energy from a rotating black hole was proposed by the mathematician Roger Penrose in 1969 and is called the Penrose process. The maximal amount of energy gain possible for a single particle via this process is 20.7% in terms of its mass equivalence, and if this process is repeated by the same mass, the theoretical maximal energy gain approaches 29% of its original mass-energy equivalent. As this energy is removed, the black hole loses angular momentum, and thus the limit of zero rotation is approached as spacetime dragging is reduced. In the limit, the ergosphere no longer exists. This process is considered
a possible explanation for a source of energy of such energetic phenomena as gamma-ray bursts. Results from computer models show that the Penrose process is capable of producing the high-energy particles that are observed being emitted from quasars and other active galactic nuclei. == Ergosphere size == The size of the ergosphere, the distance between the ergosurface and the event horizon, is not necessarily proportional to the radius of the event horizon, but rather to the black hole's gravity and its angular momentum. A point at the poles does not move, and thus has no angular momentum, while at the equator a point would have its greatest angular momentum. This variation of angular momentum that extends from the poles to the equator is what gives the ergosphere its oblate shape. As the mass of the black hole or its rotation speed increases, the size of the ergosphere increases as well. == References == == Further reading == Chandrasekhar, Subrahmanyan (1999). Mathematical Theory of Black Holes. Oxford University Press. ISBN 0-19-850370-9. Misner, Charles; Thorne, Kip S.; Wheeler, John (1973). Gravitation. W. H. Freeman and Company. ISBN 0-7167-0344-0. Carroll, Sean (2003). Spacetime and Geometry: An Introduction to General Relativity. Addison Wesley. ISBN 0-8053-8732-3. == External links == Black Hole Thermodynamics The Gravitomagnetic Field and Penrose Processes A Rotating Black Hole
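As a reference point for the geometry described above, the standard Kerr-metric expressions for the ergosurface and outer horizon radii (in Boyer–Lindquist coordinates, with G = c = 1, mass M and spin parameter a) are:

```latex
r_E(\theta) = M + \sqrt{M^2 - a^2\cos^2\theta},
\qquad
r_H = M + \sqrt{M^2 - a^2}.
```

At the equator (θ = π/2) this gives r_E = 2M, the Schwarzschild radius; at the poles (θ = 0, π) it reduces to r_H, so the ergosurface touches the horizon there. For a maximally rotating black hole (a = M), r_H = M, half the Schwarzschild radius, consistent with the values quoted in the text.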
Biomarkers of aging are biomarkers that could predict functional capacity at some later age better than chronological age. Stated another way, biomarkers of aging would give the true "biological age", which may be different from the chronological age. Validated biomarkers of aging would allow for testing interventions to extend lifespan, because changes in the biomarkers would be observable throughout the lifespan of the organism. Although maximum lifespan would be a means of validating biomarkers of aging, it would not be a practical means for long-lived species such as humans because longitudinal studies would take far too much time. Ideally, biomarkers of aging should assay the biological process of aging and not a predisposition to disease, should cause a minimal amount of trauma to assay in the organism, and should be reproducibly measurable during a short interval compared to the lifespan of the organism. An assemblage of biomarker data for an organism could be termed its "ageotype". Graying of hair and skin wrinkling increase with age and are valid biomarkers of aging, although the extent to which they and other common age-related changes in appearance are better indicators of future functionality than chronological age is not yet firmly established. Biogerontologists have continued efforts to find and validate biomarkers of aging, but success thus far has been limited. Levels of CD4 and CD8 memory T cells and naive T cells have been used to give good predictions of the expected lifespan of middle-aged mice. Advances in big data analysis have allowed new types of "aging clocks" to be developed. The epigenetic clock is a promising biomarker of aging and can accurately predict human chronological age. Basic blood biochemistry and cell counts can also be used to accurately predict chronological age. Further studies of the hematological clock on large datasets
from South Korean, Canadian, and Eastern European populations demonstrated that biomarkers of aging may be population-specific and predictive of mortality. It is also possible to predict the human chronological age using the transcriptomic clock. == Epigenetic marks == === Loss of histones === A newly described epigenetic mark found in studies of aging cells is the loss of histones. Most evidence shows that loss of histones is linked to cell division. In aging, dividing yeast, MNase-seq (micrococcal nuclease sequencing) showed a loss of ~50% of nucleosomes. Proper histone dosage is important in yeast, as shown by the extended lifespans of strains that overexpress histones. A consequence of histone loss in yeast is the amplification of transcription. In younger cells, the genes that are most induced with age have specific chromatin features, such as fuzzy nuclear positioning, lack of a nucleosome-depleted region (NDR) at the promoter, weak chromatin phasing, a higher frequency of TATA elements, and higher occupancy of repressive chromatin factors. In older cells, however, nucleosome loss at the promoters of these same genes is more prevalent, which leads to their higher transcription. This phenomenon is not only seen in yeast, but has also been observed in aging worms, during aging of human diploid primary fibroblasts, and in senescent human cells. In human primary fibroblasts, reduced synthesis of new histones was seen to be a consequence of shortened telomeres that activate the DNA damage response. Loss of core histones may be a general epigenetic mark of aging across many organisms. === Histone variants === In addition to the core histones H2A, H2B, H3, and H4, there are other versions of the histone proteins that can differ significantly in sequence and are important for regulating chromatin dynamics. Histone H3.3 is a variant of histone H3 that
is incorporated into the genome independently of replication. It is the major form of histone H3 seen in the chromatin of senescent human cells, and it appears that excess H3.3 can drive senescence. There are multiple variants of histone H2; the one most notably implicated in aging is macroH2A. The function of macroH2A has generally been assumed to be transcriptional silencing; more recently, it has been suggested that macroH2A is important in repressing transcription at senescence-associated heterochromatin foci (SAHF). Chromatin that contains macroH2A is impervious to ATP-dependent remodeling proteins and to the binding of transcription factors. === Histone modifications === Increased acetylation of histones contributes to chromatin taking a more euchromatic state as an organism ages, similar to the increased transcription seen due to the loss of histones. There is also a reduction in the levels of H3K56ac during aging and an increase in the levels of H4K16ac. Increased H4K16ac in old yeast cells is associated with the decline in levels of the HDAC Sir2, which can increase the life span when overexpressed. Methylation of histones has been tied to life span regulation in many organisms, specifically H3K4me3, an activating mark, and H3K27me3, a repressing mark. In C. elegans, the loss of any of the three Trithorax proteins that catalyze the trimethylation of H3K4 (WDR-5 and the methyltransferases SET-2 and ASH-2) lowers the levels of H3K4me3 and increases lifespan. Loss of the enzyme that demethylates H3K4me3, RBR-2, increases H3K4me3 levels in C. elegans and decreases their life spans. In the prefrontal cortex of the rhesus macaque brain, H3K4me2 increases at promoters and enhancers during postnatal development and aging. These increases reflect progressively more active and transcriptionally accessible (or open) chromatin structures that are often associated with stress responses such as the DNA damage response. These changes may form an epigenetic
memory of stresses and damages experienced by the organism as it develops and ages. UTX-1, an H3K27me3 demethylase, plays a critical role in the aging of C. elegans: increased utx-1 expression correlates with a decrease in H3K27me3 and a decrease in lifespan, while utx-1 knockdowns showed an increase in lifespan. Changes in H3K27me3 levels also have effects on aging cells in Drosophila and humans. === DNA methylation === Methylation of DNA is a common modification in mammalian cells. The cytosine base is methylated and becomes 5-methylcytosine, most often in the CpG context. Hypermethylation of CpG islands is associated with transcriptional repression, and hypomethylation of these sites is associated with transcriptional activation. Many studies have shown that there is a loss of DNA methylation during aging in many species, such as rats, mice, cows, hamsters, and humans. It has also been shown that DNMT1 and DNMT3a decrease with aging while DNMT3b increases. Hypomethylation of DNA can lower genomic stability, induce the reactivation of transposable elements, and cause the loss of imprinting, all of which can contribute to cancer progression and pathogenesis. == Immune biomarkers == Recent data suggest that an increased frequency of senescent CD8+ T cells in the peripheral blood is associated with the development of hyperglycemia from a pre-diabetic state, suggesting that senescence plays a role in metabolic aging. Senescent CD8+ T cells could be utilized as a biomarker to signal the transition from pre-diabetes to overt hyperglycemia. Recently, Hashimoto and coworkers profiled thousands of circulating immune cells from supercentenarians at single-cell resolution. They identified a unique increase in cytotoxic CD4 T cells in these supercentenarians. Generally, CD4 T cells have helper, but not cytotoxic, functions under physiological conditions; in these supercentenarians, however, single-cell profiling of their T-cell receptors revealed accumulations of cytotoxic CD4 T cells arising through clonal expansion. The
conversion of helper CD4 T cells to a cytotoxic variety might be an adaptation to the late stage of aging, aiding in fighting infections and potentially enhancing tumor surveillance. == Applications of aging biomarkers == The main mechanisms identified as potential biomarkers of aging are DNA methylation, loss of histones, and histone modification. The uses for biomarkers of aging are ubiquitous, and identifying a physical parameter of biological aging would allow humans to determine their true age, mortality, and morbidity. The change in the physical biomarker should be proportional to the change in the age of the species. Thus, after establishing a biomarker of aging, research could proceed on extending life spans and finding timelines for the emergence of potential genetic diseases. One application of this finding would be identification of the biological age of a person. DNA methylation clocks use the structure of DNA at different stages of life to determine an age. DNA methylation is the methylation of the cytosine base, most often in the CpG context. Hypermethylation of these regions is associated with decreased transcriptional activity, and the opposite holds for hypomethylation. In other words, the more "tightly" held the DNA region, the more stable and "younger" the sample. Examining DNA methylation's properties across tissues, it was found to be almost zero for embryonic tissues; it can be used to determine age acceleration, and the results can be reproduced in chimpanzee tissue. More recently, biomarkers of aging have been used in multiple clinical trials to measure slowing or reversing of age-related decline or biological aging. The Biomarkers of Aging Consortium is currently examining the application of these biomarkers to identify longevity interventions and ways to validate them. Moreover, open-source resources, such as the R package methylCIPHER and the
Python package pyaging are available to the public as hubs for several biomarkers of aging. == See also == Epigenetic clock Hallmarks of aging Biomarker (medicine) Senescence == References == == External links == Biomarkers of Aging News Advisory National Institute on Aging
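The clock-style biomarkers discussed above (epigenetic, hematological, transcriptomic) are typically penalized linear models over molecular measurements. A minimal sketch of how such a predictor is applied, using hypothetical CpG site names and coefficients (not the published weights of any real clock):

```python
# Minimal sketch of an "aging clock": a linear model over CpG methylation
# beta values (fractions between 0 and 1). The site names and weights
# below are hypothetical illustrations, not real clock coefficients.

INTERCEPT = 35.0
COEFFS = {            # hypothetical CpG site -> weight (years per unit beta)
    "cg0001": 42.0,   # a site that gains methylation with age
    "cg0002": -18.5,  # a site that loses methylation with age
    "cg0003": 7.3,
}

def predict_age(betas: dict[str, float]) -> float:
    """Predicted 'biological age' = intercept + sum of weight * beta."""
    return INTERCEPT + sum(w * betas.get(cpg, 0.0) for cpg, w in COEFFS.items())
```

For example, `predict_age({"cg0001": 0.6, "cg0002": 0.2, "cg0003": 0.5})` returns about 60.15 years. Real clocks differ mainly in scale (hundreds of CpG sites) and in how the weights are fitted (typically elastic-net regression against chronological age or mortality).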
The Association for Politics and the Life Sciences (APLS) was formed in 1981 and exists to study the field of biopolitics as a subfield of political science. APLS owns an academic peer-reviewed journal, Politics and the Life Sciences (PLS), which is published semi-annually by Cambridge University Press. == External links == Official website Politics and the Life Sciences
Monosaccharide nomenclature is the naming system of the building blocks of carbohydrates, the monosaccharides, which may be monomers or part of a larger polymer. Monosaccharides are subunits that cannot be further hydrolysed into simpler units. Depending on the number of carbon atoms, they are classified into trioses, tetroses, pentoses, hexoses, etc., and further into aldoses and ketoses depending on the type of functional group present. == Systematic name of molecular graph == The elementary formula of a simple monosaccharide is CnH2nOn, where the integer n is at least 3 and rarely greater than 7. Simple monosaccharides may be named generically based on the number of carbon atoms n: trioses, tetroses, pentoses, hexoses, etc. Every simple monosaccharide has an acyclic (open chain) form, which can be written as H − ( CH ( OH ) ) x − ( C = O ) − ( CH ( OH ) ) y − H {\displaystyle {\ce {H-(CH(OH))_{\mathit {x}}-(C=O)-(CH(OH))_{\mathit {y}}-H}}} ; that is, a straight chain of carbon atoms, one of which is a carbonyl group, all the others bearing a hydrogen -H and a hydroxyl -OH each, with one extra hydrogen at either end. The carbons of the chain are conventionally numbered from 1 to n, starting from the end which is closest to the carbonyl. If the carbonyl is at the very beginning of the chain (carbon 1), the monosaccharide is said to be an aldose; otherwise it is a ketose. These names can be combined with the chain length prefix, as in aldohexose or ketopentose. Most ketoses found in nature have the carbonyl in position 2; when that is not the case, one uses a numeric prefix to indicate the carbonyl's position. Thus, for example, aldohexose means H(C=O)(CHOH)5H, ketopentose means H(CHOH)(C=O)(CHOH)3H, and 3-ketopentose
means H(CHOH)2(C=O)(CHOH)2H. An alternative nomenclature uses the suffix '-ose' only for aldoses, and '-ulose' for ketoses. The position of the carbonyl (when it is not 1 or 2) is indicated by a numerical infix. For example, hexose in this nomenclature means H(C=O)(CHOH)5H, pentulose means H(CHOH)(C=O)(CHOH)3H, and hexa-3-ulose means H(CHOH)2(C=O)(CHOH)3H. == Naming of acyclic stereoisomers == Open-chain monosaccharides with the same molecular graph may exist as two or more stereoisomers. The Fischer projection is a systematic way of drawing the skeletal formula of an open-chain monosaccharide so that each stereoisomer is uniquely identified. Two isomers whose molecules are mirror images of each other are identified by the prefixes 'D-' or 'L-', according to the handedness of the chiral carbon atom that is farthest from the carbonyl. In the Fischer projection, that is the second carbon from the bottom; the prefix is 'D-' or 'L-' according to whether the hydroxyl on that carbon lies to the right or left of the backbone, respectively. If the molecular graph is symmetrical (H(CHOH)x(CO)(CHOH)xH) and the two halves are mirror images of each other, then the molecule is identical to its mirror image, and there is no 'L-' form. A distinct common name, such as "glucose" or "ribose", is traditionally assigned to each pair of mirror-image stereoisomers, and to each achiral stereoisomer. These names have standard three-letter abbreviations, such as 'Glc' for glucose and 'Rib' for ribose. Another nomenclature uses the systematic name of the molecular graph, a 'D-' or 'L-' prefix to indicate the side of the last chiral hydroxyl on the Fischer diagram (as above), and another italic prefix to indicate the positions of the remaining hydroxyls relative to the first one, read from bottom to top in the diagram, skipping the keto group if any. These prefixes are attached to the systematic name of the molecular
graph. So for example, D-glucose is D-gluco-hexose, D-ribose is D-ribo-pentose, and D-psicose is D-ribo-hexulose. Note that, in this nomenclature, mirror-image isomers differ only in the 'D'/'L' prefix, even though all their hydroxyls are reversed. The following tables show the Fischer projections of selected monosaccharides (in open-chain form), with their conventional names. The tables show all aldoses with 3 to 6 carbon atoms, and a few ketoses. For chiral molecules, only the 'D-' form (with the next-to-last hydroxyl on the right side) is shown; the corresponding 'L-' forms have mirror-image structures. Some of these monosaccharides are only synthetically prepared in the laboratory and are not found in nature. === Names of aldoses === === Names of ketoses === === Names of 3-ketoses === == Cyclic forms == For monosaccharides in their cyclic form, an infix is placed before the '-ose', '-ulose', or 'n-ulose' suffix to specify the ring size. The infix is "furan" for a 5-atom ring, "pyran" for 6, "septan" for 7, and so on. Ring closure creates another chiral center at the anomeric carbon (the one with the hemiacetal or acetal functionality), and therefore each open-chain stereoisomer gives rise to two distinct stereoisomers (anomers). These are identified by the prefixes 'α-' and 'β-', which denote the relative configuration of the anomeric carbon to that of the stereocenter at the other end of the carbon chain. If the configuration (R or S) is identical at both the anomeric carbon and the most distant stereocenter, the configuration is 'α-'. If the configurations are different, the configuration is 'β-'. == Glycosides == Glycosides are saccharides in which the hydroxyl -OH at the anomeric centre is replaced by an oxygen-bridged group -OR. The carbohydrate part of the molecule is called the glycone, the -O- bridge is the glycosidic oxygen, and the attached group is the
aglycone. Glycosides are named by giving the aglyconic alcohol HOR, followed by the saccharide name with the '-e' ending replaced by '-ide'; as in phenyl D-glucopyranoside. == Modified sugars == Modification of a sugar is generally done by replacing one or more –OH groups with other functional groups at all positions except C-1. Rules for the nomenclature of modified sugars: State if the sugar is a deoxy sugar, which means an –OH group is replaced by H. Specify the position of deoxygenation. If there is a substituent other than H in the place of –OH, specify what it is. Specify the relative configuration of all stereogenic centres (manno, gluco, etc.). Specify the ring size (furanose, pyranose, etc.) and the anomeric configuration (α or β). State the chain length only in situations where –OH is replaced with H. Alphabetize all the substituent groups (deoxy, -iodo, -amino, etc.); di-, tri-, etc. prefixes do not count. === Protected sugars === Sugars in which –OH is protected by some modification are called protected sugars. Rules for the nomenclature of protected sugars: Specify the number of each particular protecting group (di, tri, tetra, etc.). List groups alphabetically along with all other substituents (di, tri prefixes do not count). == See also == Carbohydrate conformation Symbol Nomenclature For Glycans Polysaccharide Oligosaccharide Oligosaccharide nomenclature == References ==
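The generic open-chain naming rules described above (the aldo-/keto- style with a chain-length prefix, and the alternative '-ose'/'-ulose' style) are mechanical enough to sketch in code. The function and variable names here are mine, for illustration only:

```python
# Sketch of the two generic naming schemes for simple monosaccharides
# CnH2nOn with the carbonyl at a given carbon, as described in the text.

STEMS = {3: "tri", 4: "tetr", 5: "pent", 6: "hex", 7: "hept"}

def classic_name(n: int, carbonyl: int) -> str:
    """'aldo-'/'keto-' style: aldohexose, ketopentose, 3-ketopentose."""
    stem = STEMS[n] + "ose"
    if carbonyl == 1:
        return "aldo" + stem
    # position 2 is the default for ketoses, so no numeric prefix is needed
    prefix = "" if carbonyl == 2 else f"{carbonyl}-"
    return prefix + "keto" + stem

def ulose_name(n: int, carbonyl: int) -> str:
    """'-ose'/'-ulose' style: hexose, pentulose, hexa-3-ulose."""
    stem = STEMS[n]
    if carbonyl == 1:
        return stem + "ose"       # '-ose' is reserved for aldoses
    if carbonyl == 2:
        return stem + "ulose"     # default ketose position
    return f"{stem}a-{carbonyl}-ulose"
```

For instance, `classic_name(5, 3)` gives "3-ketopentose" and `ulose_name(6, 3)` gives "hexa-3-ulose", matching the examples in the text.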
The Birkeland–Eyde process was one of the competing industrial processes in the beginning of nitrogen-based fertilizer production. It is a multi-step nitrogen fixation reaction that uses electrical arcs to react atmospheric nitrogen (N2) with oxygen (O2), ultimately producing nitric acid (HNO3) with water. The resultant nitric acid was then used as a source of nitrate (NO3−) in the reaction HNO 3 + H 2 O ⟶ H 3 O + + NO 3 − {\textstyle {\ce {HNO3 + H2O -> H3O+ + NO3-}}} which may take place in the presence of water or another proton acceptor. It was developed by Norwegian industrialist and scientist Kristian Birkeland along with his business partner Sam Eyde in 1903, based on a method used by Henry Cavendish in 1784. A factory based on the process was built in Rjukan and Notodden in Norway, combined with the building of large hydroelectric power facilities. The Birkeland–Eyde process is relatively inefficient in terms of energy consumption. Therefore, in the 1910s and 1920s, it was gradually replaced in Norway by a combination of the Haber process and the Ostwald process. The Haber process produces ammonia (NH3) from molecular nitrogen (N2) and hydrogen (H2), the latter usually but not necessarily produced by steam reforming methane (CH4) gas in current practice. The ammonia from the Haber process is then converted into nitric acid (HNO3) in the Ostwald process. == The process == An electrical arc was formed between two coaxial water-cooled copper tube electrodes powered by a high voltage alternating current of 5 kV at 50 Hz. A strong static magnetic field generated by a nearby electromagnet spreads the arc into a thin disc by the Lorentz force. This setup is based on an experiment by Julius Plücker who in 1861 showed how to create a disc of sparks by
placing the ends of a U-shaped electromagnet around a spark gap so that the gap between them was perpendicular to the gap between the electrodes, and which was later replicated similarly by Walther Nernst and others. The plasma temperature in the disc was in excess of 3000 °C. Air was blown through this arc, causing some of the nitrogen to react with oxygen forming nitric oxide. By carefully controlling the energy of the arc and the velocity of the air stream, yields of up to approximately 4–5% nitric oxide were obtained at 3000 °C and less at lower temperatures. The process is extremely energy intensive. Birkeland used a nearby hydroelectric power station for the electricity as this process demanded about 15 MWh per ton of nitric acid, yielding approximately 60 g per kWh. The same reaction is carried out by lightning, providing a natural source for converting atmospheric nitrogen to soluble nitrates. N 2 + O 2 ⟶ 2 NO {\displaystyle {\ce {N2 + O2 -> 2NO}}} The hot nitric oxide is cooled and combines with atmospheric oxygen to produce nitrogen dioxide. The time this process takes depends on the concentration of NO in the air. At 1% it takes about 180 seconds and at 6% about 40 seconds to achieve 90% conversion. 2 NO + O 2 ⟶ 2 NO 2 {\displaystyle {\ce {2 NO + O2 -> 2 NO2}}} This nitrogen dioxide is then dissolved in water to give rise to nitric acid, which is then purified and concentrated by fractional distillation. 3 NO 2 + H 2 O ⟶ 2 HNO 3 + NO {\displaystyle {\ce {3 NO2 + H2O -> 2 HNO3 + NO}}} The design of the absorption process was critical to the efficiency of the whole system. The nitrogen dioxide was absorbed into water
in a series of packed-column or plate-column absorption towers, each four stories tall, to produce approximately 40–50% nitric acid. The first towers bubbled the nitrogen dioxide through water and non-reactive quartz fragments. Once the first tower reached final concentration, the nitric acid was moved to a granite storage container, and liquid from the next water tower replaced it. That movement process continued to the last water tower, which was replenished with fresh water. About 20% of the produced oxides of nitrogen remained unreacted, so the final towers contained an alkaline solution of lime to convert the remaining oxides to calcium nitrate (also known as Norwegian saltpeter), except for approximately 2%, which was released into the air. == References ==
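As a quick consistency check on the energy figures quoted above (about 15 MWh of electricity per ton of nitric acid, roughly 60 g per kWh), the arithmetic can be sketched as:

```python
# Back-of-the-envelope check of the Birkeland-Eyde energy figures quoted
# in the text: ~15 MWh of electricity per metric ton of nitric acid.

energy_per_ton_kwh = 15_000       # 15 MWh = 15,000 kWh per ton of HNO3
grams_per_ton = 1_000_000         # 1 metric ton = 1e6 g

yield_g_per_kwh = grams_per_ton / energy_per_ton_kwh
print(round(yield_g_per_kwh, 1))  # ~66.7 g of HNO3 per kWh
```

The exact quotient is about 66.7 g per kWh, so the text's "approximately 60 g per kWh" is consistent with the 15 MWh figure only to within roughly 10%; both were evidently rough operating estimates.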
Phycoerythrobilin is a red phycobilin, i.e. an open tetrapyrrole chromophore found in cyanobacteria and in the chloroplasts of red algae, glaucophytes and some cryptomonads. Phycoerythrobilin is present in the phycobiliprotein phycoerythrin, in which it is the terminal energy acceptor. The amount of phycoerythrobilin in phycoerythrins varies considerably, depending on the organism considered. In some rhodophytes and oceanic cyanobacteria, phycoerythrobilin is also present in the phycocyanin, then termed R-phycocyanin. Like all phycobilins, phycoerythrobilin is covalently linked to these phycobiliproteins by a thioether bond. == References == == External links == Chemical Structure of phycoerythrobilin
Guaiacol is an organic compound with the formula C6H4(OH)(OCH3). It is a phenolic compound containing a methoxy functional group. Guaiacol appears as a viscous colorless oil, although aged or impure samples are often yellowish. It occurs widely in nature and is a common product of the pyrolysis of wood. == Occurrence == Guaiacol is usually derived from guaiacum or wood creosote. A petrochemical route to it, of considerable commercial importance, also appears to exist. It is produced by a variety of plants. It is also found in essential oils from celery seeds, tobacco leaves, orange leaves, and lemon peels. The pure substance is colorless, but samples become yellow upon exposure to air and light. The compound is present in wood smoke, resulting from the pyrolysis of lignin. The compound contributes to the flavor of many substances such as whiskey and roasted coffee. == Preparation == The compound was first isolated by Otto Unverdorben in 1826. Guaiacol is produced by methylation of catechol, for example using potash and dimethyl sulfate: C6H4(OH)2 + (CH3O)2SO2 → C6H4(OH)(OCH3) + HO(CH3O)SO2 === Laboratory methods === Guaiacol can be prepared by diverse routes in the laboratory. o-Anisidine, derived in two steps from anisole, can be hydrolyzed via its diazonium derivative. Guaiacol can also be synthesized by the dimethylation of catechol followed by selective mono-demethylation: C6H4(OCH3)2 + C2H5SNa → C6H4(OCH3)(ONa) + C2H5SCH3 == Uses and chemical reactions == === Syringyl/guaiacyl ratio === Lignin, comprising a major fraction of biomass, is sometimes classified according to its guaiacyl component. Pyrolysis of lignin from gymnosperms gives more guaiacol, resulting from removal of the propenyl group of coniferyl alcohol. These lignins are said to have a high guaiacyl (or G) content. In contrast, lignins derived from sinapyl alcohol afford syringol. A high syringyl (or S) content is indicative of lignin
|
{
"page_id": 3212389,
"source": null,
"title": "Guaiacol"
}
|
from angiosperms. Sugarcane bagasse is one useful source of guaiacol; pyrolysis of the bagasse lignins yields compounds including guaiacol, 4-methylguaiacol and 4-vinylguaiacol. === Chemical intermediate === Guaiacol is a useful precursor for the synthesis of other compounds. Being derived from biomass, it is a potential component of, or precursor to, "green fuels". Guaiacol is also a useful reagent for the quantification of peroxidases: in the presence of hydrogen peroxide these enzymes catalyse its oxidation to tetraguaiacol, a coloured compound that can be quantified by its absorbance at 420–470 nm, following the equation: 4 guaiacol (colorless) + 2 H2O2 → tetraguaiacol (colored) + 8 H2O. === Medicinal and food === Guaiacol is a precursor to various flavorants, such as eugenol. An estimated 85% of the world's supply of vanillin comes from guaiacol. Because consumers tend to prefer natural vanillin to synthetic vanillin, methods such as microbial fermentation have been adopted. The classical synthetic route entails the condensation of glyoxylic acid with guaiacol to give mandelic acid, which is oxidized to phenylglyoxylic acid; this acid undergoes decarboxylation to afford vanillin. The crude vanillin product can then be purified by vacuum distillation and recrystallization. Guaiacol is also used medicinally as an expectorant, antiseptic, and local anesthetic. Guaiacol is produced in the gut of desert locusts, Schistocerca gregaria, by the breakdown of plant material, a process undertaken by the gut bacterium Pantoea agglomerans (Enterobacter). It is one of the main components of the pheromones that cause locust swarming. == Safety == Methoxyphenols are potential biomarkers of biomass smoke exposure, such as from inhalation of woodsmoke. Dietary sources of methoxyphenols overwhelm the contribution from inhalational exposures to woodsmoke. == See also == Creosote Guaifenesin == References ==
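The guaiacol peroxidase assay above can be turned into an activity calculation with the Beer-Lambert law. The sketch below is an illustration, not a validated protocol: it assumes a molar extinction coefficient for tetraguaiacol of 26.6 mM⁻¹·cm⁻¹ at 470 nm (a commonly used literature value) and hypothetical assay volumes, and the function and parameter names are invented.

```python
def peroxidase_activity(delta_a470_per_min, eps_mM_cm=26.6, path_cm=1.0,
                        assay_vol_ml=1.0, enzyme_vol_ml=0.1):
    """Estimate peroxidase activity (umol tetraguaiacol formed per min,
    per mL of enzyme extract) from the rate of absorbance increase at
    470 nm. Assumed extinction coefficient: 26.6 per mM per cm.
    """
    # Beer-Lambert: dA/dt = eps * l * dc/dt, so dc/dt in mM/min is:
    rate_mM_per_min = delta_a470_per_min / (eps_mM_cm * path_cm)
    # mM * mL = umol, so total product formed per minute in the cuvette:
    umol_per_min = rate_mM_per_min * assay_vol_ml
    # Normalise by the volume of enzyme extract added to the assay.
    return umol_per_min / enzyme_vol_ml
```

With the assumed defaults, an absorbance increase of 0.266 per minute corresponds to 0.01 mM/min of tetraguaiacol, i.e. 0.1 µmol/min per mL of extract.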
|
{
"page_id": 3212389,
"source": null,
"title": "Guaiacol"
}
|
Replica cluster move in condensed matter physics refers to a family of non-local cluster algorithms used to simulate spin glasses. It is an extension of the Swendsen-Wang algorithm in that it generates non-trivial spin clusters informed by the interaction states on two (or more) replicas instead of just one. It is different from the replica exchange method (or parallel tempering): it performs a non-local update on a fraction of the sites between two replicas at the same temperature, whereas parallel tempering exchanges all the spins between two replicas at different temperatures. However, the two are often used together to achieve state-of-the-art efficiency in simulating spin-glass models. == The Chayes-Machta-Redner representation == The Chayes-Machta-Redner (CMR) representation is a graphical representation of the Ising spin glass which extends the standard Fortuin-Kasteleyn (FK) representation. It is based on the observation that the total Hamiltonian of two independent Ising replicas α and β, {\displaystyle H=-\sum _{<ij>}J_{ij}{\big (}\sigma _{i}^{\alpha }\sigma _{j}^{\alpha }+\sigma _{i}^{\beta }\sigma _{j}^{\beta }{\big )},} can be written as the Hamiltonian of a 4-state clock model. To see this, we define the following mapping {\displaystyle (\sigma ^{\alpha },\sigma ^{\beta })\to \theta :\quad {\big \{}(+1,+1),(+1,-1),(-1,-1),(-1,+1){\big \}}\mapsto {\big \{}0,{\frac {\pi }{2}},\pi ,{\frac {3\pi }{2}}{\big \}},} where {\displaystyle \theta } is the orientation of the 4-state clock, then the
|
{
"page_id": 67437670,
"source": null,
"title": "Replica cluster move"
}
|
total Hamiltonian can be represented as {\displaystyle H=-2\sum _{<ij>}J_{ij}\cos(\theta _{j}-\theta _{i}).} In the graphical representation of this model, there are two types of bonds that can be open, referred to as blue and red. To generate the bonds on the lattice, the following rules are imposed: If {\displaystyle J_{ij}\cos(\theta _{j}-\theta _{i})=1}, i.e. when the interactions on edge {\displaystyle (i,j)} are satisfied on both replicas, then a blue bond is opened with probability {\displaystyle p_{\text{blue}}=1-e^{-4\beta |J_{ij}|}}. If {\displaystyle J_{ij}\cos(\theta _{j}-\theta _{i})=0}, i.e. when the interaction on edge {\displaystyle (i,j)} is satisfied in exactly one replica, then a red bond is opened with probability {\displaystyle p_{\text{red}}=1-e^{-2\beta |J_{ij}|}}. Otherwise, the bond remains closed. Under these rules, it can be checked that a cycle of open bonds can only contain an even number of red bonds. A cluster formed with blue bonds is referred to as a blue cluster, and a super-cluster formed with both blue and red bonds is referred to as a grey cluster. Once the clusters are generated, there are two types of non-local updates that can be made to the clock states independently in the clock clusters (and thus to the spin states in both replicas). First, for every blue cluster, we can flip (or rotate by 180
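The two bond-opening probabilities can be computed directly from the inverse temperature and the coupling on each edge. Below is a minimal helper (an illustration; the names are mine, not from any standard library) together with the identity p_blue = 1 − (1 − p_red)² that follows immediately from the definitions:

```python
import math

def cmr_bond_probabilities(beta, J_ij):
    """Open-bond probabilities for one edge in the CMR representation.

    Returns (p_blue, p_red):
      p_blue = 1 - exp(-4*beta*|J_ij|), used when the edge is satisfied
               in both replicas;
      p_red  = 1 - exp(-2*beta*|J_ij|), used when it is satisfied in
               exactly one replica.
    """
    x = beta * abs(J_ij)
    return 1.0 - math.exp(-4.0 * x), 1.0 - math.exp(-2.0 * x)
```

Note that 1 − p_blue = (1 − p_red)²: a blue bond stays closed with the same probability as two independent red-bond trials, reflecting that a doubly satisfied edge carries twice the energy gap of a singly satisfied one.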
|
{
"page_id": 67437670,
"source": null,
"title": "Replica cluster move"
}
|
°) the clock states with some arbitrary probability. Following this, for every grey cluster (blue clusters connected by red bonds), we can rotate all the clock states simultaneously by a random angle. It can be shown that both updates are consistent with the bond-formation rules and satisfy detailed balance. Therefore, an algorithm based on this CMR representation is correct when used in conjunction with other ergodic algorithms. However, the algorithm is not necessarily efficient, as a giant grey cluster will tend to span the entire lattice at sufficiently low temperatures (e.g. even in the paramagnetic phase of spin-glass models). == Houdayer cluster move == The Houdayer cluster move is a simpler cluster algorithm based on a site-percolation process on sites with negative spin overlap. It was introduced by Jérôme Houdayer in 2001. For two independent Ising replicas, we can define the spin overlap as {\displaystyle q_{i}=\sigma _{i}^{\alpha }\sigma _{i}^{\beta },} and a cluster is formed by randomly selecting a site and percolating through the adjacent sites with {\displaystyle q=-1} (with percolation probability 1) until the maximal cluster is formed. The spins in the cluster are then exchanged between the two replicas. It can be shown that the exchange update is isoenergetic, meaning that the total energy of the two replicas is conserved by the update. This gives an acceptance ratio of 1 as calculated from the Metropolis-Hastings rule; in other words, the update is rejection-free. === Suppressing percolation of large clusters === The efficiency of this algorithm is highly sensitive to the site-percolation threshold of the underlying lattice. If the percolation threshold is too small, then a giant cluster will likely span the entire lattice, resulting in the trivial update of exchanging
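The move is straightforward to implement. The following sketch is my own illustration (not taken from the references), assuming a 2D square lattice of ±1 spins with periodic boundaries: it grows the negative-overlap cluster from a random seed by breadth-first search and swaps the cluster's spins between the two replicas.

```python
import random
from collections import deque

def houdayer_move(s1, s2, rng=random):
    """Perform one Houdayer cluster move on two replicas of an L x L
    Ising lattice with periodic boundaries.

    s1, s2: L x L nested lists of +/-1 spins (modified in place).
    Builds the negative-overlap cluster around a random seed site by
    breadth-first search, then swaps its spins between the replicas.
    Returns the set of cluster sites (empty if no site has q = -1).
    """
    L = len(s1)
    negative = [(i, j) for i in range(L) for j in range(L)
                if s1[i][j] * s2[i][j] == -1]
    if not negative:
        return set()
    seed = rng.choice(negative)
    cluster, frontier = {seed}, deque([seed])
    while frontier:
        i, j = frontier.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = ((i + di) % L, (j + dj) % L)
            if n not in cluster and s1[n[0]][n[1]] * s2[n[0]][n[1]] == -1:
                cluster.add(n)
                frontier.append(n)
    for i, j in cluster:  # exchange the cluster spins between replicas
        s1[i][j], s2[i][j] = s2[i][j], s1[i][j]
    return cluster
```

The isoenergetic property can be seen directly in this sketch: every site on the cluster boundary has q = +1 (identical spins in both replicas), so each boundary bond contributes the same energy before and after the swap, while interior bonds are merely exchanged between the two replicas.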
|
{
"page_id": 67437670,
"source": null,
"title": "Replica cluster move"
}
|
nearly all the spins between the replicas. This is why the original algorithm performs well only in low-dimensional settings (where the site-percolation threshold is sufficiently high). To extend the algorithm efficiently to higher dimensions, certain algorithmic interventions are needed. For instance, one can restrict the cluster moves to low-temperature replicas, where only a small number of negative-overlap sites is expected (so that the algorithm does not percolate supercritically). In addition, one can perform a global spin flip in one of the two replicas whenever the number of negative-overlap sites exceeds half the lattice size, in order to further suppress percolation. The Jörg cluster move is another way to reduce the size of the Houdayer clusters. Within each Houdayer cluster, the algorithm opens bonds with probability {\displaystyle 1-e^{-4\beta |J_{ij}|}}, similar to the Swendsen-Wang algorithm. This forms sub-clusters smaller than the Houdayer clusters, and the spins in these sub-clusters can then be exchanged between replicas in the same fashion as in a Houdayer cluster move. == References ==
|
{
"page_id": 67437670,
"source": null,
"title": "Replica cluster move"
}
|
Palmitate mediated localization is a biological process that traffics a palmitoylated protein to ordered lipid domains. == Biological function == One function is thought to be the clustering of proteins to increase the efficiency of protein-protein interactions and facilitate biological processes. In the opposite scenario, palmitate mediated localization sequesters proteins away from a non-localized molecule. In theory, disruption of palmitate mediated localization then allows a transient interaction of the two molecules through lipid mixing. In the case of an enzyme, palmitate can sequester the enzyme away from its substrate; disruption of palmitate mediated localization then activates the enzyme by substrate presentation. == Mechanism of sequestration == Palmitate mediated localization is integral to spatial biology, in particular to lipid partitioning and the formation of lipid rafts. Sequestration of palmitoylated proteins is regulated by cholesterol. Depletion of cholesterol with methyl-beta-cyclodextrin disrupts palmitate mediated localization. == References ==
|
{
"page_id": 62325867,
"source": null,
"title": "Palmitate mediated localization"
}
|
A photochemical logic gate is based on photochemical intersystem crossing and molecular electronic transitions between photochemically active molecules, from which logic gates can be constructed. == The OR gate: electron-photon transfer chain == The OR gate is based on the activation of molecule A, which passes an electron/photon to the excited-state orbitals of molecule C (C*). The electron from molecule A undergoes intersystem crossing to C* via the excited-state orbitals of B, and is eventually utilised as a signal in the C* emission hνc. The OR gate uses two inputs of light (photons) to molecule A in two separate electron-transfer chains, both of which are capable of transferring to C* and thus producing the output of an OR gate. Therefore, if either electron-transfer chain is activated, the excitation of molecule C produces a valid output emission. == The AND gate == Excitation A→A* by a photon hνa promotes an electron that is passed down to the C* molecular orbital. A second photon applied to the system (hνc2) excites the electron from the C* molecular orbital to the C** molecular orbital, analogous to pump-probe spectroscopy (the excitation of an already excited state, as illustrated by an energy-level diagram). The AND gate arises from the necessity of the A→A* and C*→C** excitations occurring at the same time: the inputs hνa and hνc2 are required simultaneously. To prevent erroneous emission of light from a single input to the AND gate, it would be necessary to have an electron-transfer series able to accept any electrons (energy) from the C* energy level; this electron-transfer series would terminate in non-radiative decay of the energy. There are two alternatives for producing an AND gate using molecular photophysics. (1) The emission produced by the electron
|
{
"page_id": 4457582,
"source": null,
"title": "Photochemical logic gate"
}
|
drop from C*→C (hνc) is not a valid output frequency; the emission from the C** molecular orbital (of energy hνc + hνc2, i.e. hνc3) is the valid output signal, to be used in subsequent logic gates arranged to respond to the {\displaystyle C^{**}{\xrightarrow[{c2}]{}}C} emission. (2) A second input of photon(s) triggers the rapid conversion of a molecule needed to complete the electron-transfer chain. A complex molecule such as a protein can be engineered to possess high strain energies, so that in the absence of the second light frequency molecule B is inactive. The second photon input triggers B→B', where the forward rate constant is much smaller than the reverse. If such a molecule is used as molecule B, the transfer chain can be switched on and off. == Creating the NOT gate == To stop the electron-transfer chain from completing and producing output signals, the input of a photon, hνc2, is used to produce a pump-probe effect by promoting an electron in the electron-transfer chain. The fall of this promoted electron produces an output that is quenched down an electron-transfer chain. An alternative, similar to the second AND-gate alternative, is an input that causes a change in molecular structure, breaking the electron-transfer chain by preventing the smooth transfer of electrons. == See also == Photochemistry Photochemical reaction Photohydrogen Photocatalysis Photodissociation Photoelectrolysis Photosynthesis Artificial photosynthesis == References ==
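The input-output behaviour of the three gates can be summarised in a toy boolean model. This is my own illustration: the function and argument names are invented, and the model ignores all photophysical detail such as rates, yields and quenching efficiency.

```python
def or_gate(hv_a1: bool, hv_a2: bool) -> bool:
    """OR: either of the two electron-transfer chains can populate C*,
    so a photon on either input yields the C* output emission."""
    return hv_a1 or hv_a2

def and_gate(hv_a: bool, hv_c2: bool) -> bool:
    """AND: hv_a populates C* (via A -> A*) and hv_c2 pumps C* -> C**.
    Only the C** emission counts as a valid output; a lone C* population
    is drained non-radiatively."""
    return hv_a and hv_c2

def not_gate(hv_c2: bool) -> bool:
    """NOT: the hv_c2 input promotes the chain's electron so that its
    emission is quenched, suppressing the output when the input is on."""
    return not hv_c2
```

Checking the truth tables (e.g. `and_gate(True, False)` is `False`) mirrors the requirement that both photons arrive simultaneously for the AND gate to emit.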
|
{
"page_id": 4457582,
"source": null,
"title": "Photochemical logic gate"
}
|
Theistic evolution (also known as theistic evolutionism or God-guided evolution), alternatively called evolutionary creationism, is a view that God acts and creates through laws of nature. Here, God is taken as the primary cause while natural causes are secondary, positing that the concept of God and religious beliefs are compatible with the findings of modern science, including evolution. Theistic evolution is not in itself a scientific theory, but includes a range of views about how science relates to religious beliefs and the extent to which God intervenes. It rejects the strict creationist doctrines of special creation, but can include beliefs such as creation of the human soul. Modern theistic evolution accepts the general scientific consensus on the age of the Earth, the age of the universe, the Big Bang, the origin of the Solar System, the origin of life, and evolution. Supporters of theistic evolution generally attempt to harmonize evolutionary thought with belief in God and reject the conflict between religion and science; they hold that religious beliefs and scientific theories do not need to contradict each other. Diversity exists regarding how the two concepts of faith and science fit together. == Definition == Francis Collins describes theistic evolution as the position that "evolution is real, but that it was set in motion by God", and characterizes it as accepting "that evolution occurred as biologists describe it, but under the direction of God". He lists six general premises on which different versions of theistic evolution typically rest. They include: The prevailing cosmological model, with the universe coming into being about 13.8 billion years ago; The fine-tuned universe; Evolution and natural selection; No special supernatural intervention is involved once evolution got under way; Humans are a result of these evolutionary processes; and Despite all these, humans are unique. The concern for
|
{
"page_id": 328815,
"source": null,
"title": "Theistic evolution"
}
|
the Moral Law (the knowledge of right and wrong) and the continuous search for God among all human cultures defy evolutionary explanations and point to our spiritual nature. The executive director of the National Center for Science Education in the United States of America, Eugenie Scott, has used the term to refer to the part of the overall spectrum of beliefs about creation and evolution holding the theological view that God creates through evolution. It covers a wide range of beliefs about the extent of any intervention by God, with some approaching deism in rejecting the concepts of continued intervention or special creation, while others believe that God has directly intervened at crucial points such as the origin of humans. In the Catholic version of theistic evolution, human evolution may have occurred, but God must create the human soul, and the creation story in the book of Genesis should be read metaphorically. Some Muslims believe that only humans were exceptions to common ancestry (human exceptionalism), while some give an allegorical reading of Adam's creation (Non-exceptionalism). Some Muslims believe that only Adam and Hawa (Eve) were special creations and they alongside their earliest descendants were exceptions to common ancestry, but the later descendants (including modern humans) share common ancestry with the rest of life on Earth because there were human-like beings on Earth before Adam's arrival who came through evolution. This belief is known as "Adamic exceptionalism". When evolutionary science developed, so did different types of theistic evolution. Creationists Henry M. Morris and John D. Morris have listed different terms which were used to describe different positions from the 1890s to the 1920s: "Orthogenesis" (goal-directed evolution), "nomogenesis" (evolution according to fixed law), "emergent evolution", "creative evolution", and others. The Jesuit paleontologist Pierre Teilhard de Chardin (1881–1955) was an influential proponent of
|
{
"page_id": 328815,
"source": null,
"title": "Theistic evolution"
}
|
God-directed evolution or "orthogenesis", in which man will eventually evolve to the "omega point" of union with the Creator. === Alternative terms === Others see "evolutionary creation" (EC, also referred to by some observers as "evolutionary creationism") as the belief that God, as Creator, uses evolution to bring about his plan. Eugenie Scott states in Evolution Vs. Creationism that it is a type of evolution rather than creationism, despite its name. "From a scientific point of view, evolutionary creationism is hardly distinguishable from Theistic Evolution ... [the differences] lie not in science but in theology." Those who hold to evolutionary creationism argue that God is involved to a greater extent than the theistic evolutionist believes. Canadian biologist Denis Lamoureux published a 2003 article and a 2008 theological book, both aimed at Christians who do not believe in evolution (including young Earth creationists), and at those looking to reconcile their Christian faith with evolutionary science. His main argument was that Genesis presents the "science and history of the day" as "incidental vessels" to convey spiritual truths. Lamoureux rewrote his article as a 2009 journal paper, incorporating excerpts from his books, in which he noted the similarities of his views to theistic evolution, but objected to that term as making evolution the focus rather than creation. He also distanced his beliefs from the deistic or more liberal beliefs included in theistic evolution. He also argued that although referring to the same view, the word arrangement in the term "theistic evolution" places "the process of evolution as the primary term, and makes the Creator secondary as merely a qualifying adjective". Divine intervention is seen at critical intervals in history in a way consistent with scientific explanations of speciation, with similarities to the ideas of progressive creationism that God created "kinds" of animals sequentially.
|
{
"page_id": 328815,
"source": null,
"title": "Theistic evolution"
}
|
Regarding the embracing of Darwinian evolution, historian Ronald Numbers describes the position of the late 19th-century geologist George Frederick Wright as "Christian Darwinism". Jacob Klapwijk and Howard J. Van Till have, while accepting both theistic creation and evolution, rejected the term "theistic evolution". In 2006, American geneticist and Director of the National Institute of Health, Francis Collins, published The Language of God. He stated that faith and science are compatible and suggested the word "BioLogos" (Word of Life) to describe theistic evolution. Collins later laid out the idea that God created all things, but that evolution is the best scientific explanation for the diversity of all life on Earth. The name BioLogos instead became the name of the organization Collins founded years later. This organization now prefers the term "evolutionary creation" to describe their take on theistic evolution. == Historical development == Historians of science (and authors of pre-evolutionary ideas) have pointed out that scientists had considered the concept of biological change well before Darwin. In the 17th century, the English Nonconformist/Anglican priest and botanist John Ray, in his book The Wisdom of God Manifested in the Works of Creation (1692), had wondered "why such different species should not only mingle together, but also generate an animal, and yet that that hybridous production should not again generate, and so a new race be carried on". 18th-century scientist Carl Linnaeus (1707–1778) published Systema Naturae (1735), a book in which he considered that new varieties of plants could arise through hybridization, but only under certain limits fixed by God. Linnaeus had initially embraced the Aristotelian idea of immutability of species (the idea that species never change), but later in his life he started to challenge it. Yet, as a Christian, he still defended "special creation", the belief that God created "every living
|
{
"page_id": 328815,
"source": null,
"title": "Theistic evolution"
}
|
creature" at the beginning, as read in Genesis, with the peculiarity of a set of original species from which all the present species have descended. Linnaeus wrote: Let us suppose that the Divine Being in the beginning progressed from the simpler to the complex; from few to many; similarly that He in the beginning of the plant kingdom created as many plants as there were natural orders. These plant orders He Himself, there from producing, mixed among themselves until from them originated those plants which today exist as genera. Nature then mixed up these plant genera among themselves through generations of double origin (hybrids) and multiplied them into existing species, as many as possible (whereby the flower structures were not changed) excluding from the number of species the almost sterile hybrids, which are produced by the same mode of origin. Linnaeus attributed the active process of biological change to God himself, as he stated: We imagine that the Creator at the actual time of creation made only one single species for each natural order of plants, this species being different in habit and fructification from all the rest. That he made these mutually fertile, whence out of their progeny, fructification having been somewhat changed, Genera of natural classes have arisen as many in number as the different parents, and since this is not carried further, we regard this also as having been done by His Omnipotent hand directly in the beginning; thus all Genera were primeval and constituted a single Species. That as many Genera having arisen as there were individuals in the beginning, these plants in course of time became fertilised by others of different sort and thus arose Species until so many were produced as now exist ... these Species were sometimes fertilised out of congeners, that is other
|
{
"page_id": 328815,
"source": null,
"title": "Theistic evolution"
}
|
Species of the same Genus, whence have arisen Varieties. Jens Christian Clausen (1967) refers to Linnaeus' theory as a "forgotten evolutionary theory [that] antedates Darwin's by nearly 100 years", and reports that he was a pioneer in hybridization experiments. Later observations by Protestant botanists Carl Friedrich von Gärtner (1772–1850) and Joseph Gottlieb Kölreuter (1733–1806) denied the immutability of species, which the Bible never teaches. Kölreuter used the term "transmutation of species" to refer to species which have experienced biological changes through hybridization, although both were inclined to believe that hybrids would revert to the parental forms by a general law of reversion, and would therefore not be responsible for the introduction of new species. Later, in a number of experiments carried out between 1856 and 1863, the Augustinian friar Gregor Mendel (1822–1884), aligning himself with the "new doctrine of special creation" proposed by Linnaeus, concluded that new species of plants could indeed arise, although only to a limited extent and while retaining their own stability. Georges Cuvier's analysis of fossils and discovery of extinction disrupted static views of nature in the early 19th century, confirming geology as showing a historical sequence of life. British natural theology, which sought examples of adaptation to show design by a benevolent Creator, adopted catastrophism to show earlier organisms being replaced in a series of creations by new organisms better adapted to a changed environment. Charles Lyell (1797–1875) also saw adaptation to changing environments as a sign of a benevolent Creator, but his uniformitarianism envisaged continuing extinctions, leaving unanswered the problem of providing replacements. As seen in correspondence between Lyell and John Herschel, scientists were looking for creation by laws rather than by miraculous interventions.
In continental Europe, the idealism of philosophers including Lorenz Oken (1779–1851) developed a Naturphilosophie in which patterns of development from archetypes were
|
{
"page_id": 328815,
"source": null,
"title": "Theistic evolution"
}
|
a purposeful divine plan aimed at forming humanity. These scientists rejected transmutation of species as materialist radicalism threatening the established hierarchies of society. The idealist Louis Agassiz (1807–1873), a persistent opponent of transmutation, saw mankind as the goal of a sequence of creations, but his concepts were the first to be adapted into a scheme of theistic evolutionism, when in Vestiges of the Natural History of Creation published in 1844, its anonymous author (Robert Chambers) set out goal-centred progressive development as the Creator's divine plan, programmed to unfold without direct intervention or miracles. The book became a best-seller and popularised the idea of transmutation in a designed "law of progression". The scientific establishment strongly attacked Vestiges at the time, but later more sophisticated theistic evolutionists followed the same approach of looking for patterns of development as evidence of design. The comparative anatomist Richard Owen (1804–1892), a prominent figure in the Victorian era scientific establishment, opposed transmutation throughout his life. When formulating homology he adapted idealist philosophy to reconcile natural theology with development, unifying nature as divergence from an underlying form in a process demonstrating design. His conclusion to his On the Nature of Limbs of 1849 suggested that divine laws could have controlled the development of life, but he did not expand this idea after objections from his conservative patrons. Others supported the idea of development by law, including the botanist Hewett Watson (1804–1881) and the Reverend Baden Powell (1796–1860), who wrote in 1855 that such laws better illustrated the powers of the Creator. 
In 1858 Owen in his speech as President of the British Association said that in "continuous operation of Creative power" through geological time, new species of animals appeared in a "successive and continuous fashion" through birth from their antecedents by a Creative law rather than through
|
{
"page_id": 328815,
"source": null,
"title": "Theistic evolution"
}
|
slow transmutation. === On the Origin of Species === When Charles Darwin published On the Origin of Species in 1859, many liberal Christians accepted evolution provided they could reconcile it with divine design. The clergymen Charles Kingsley (1819–1875) and Frederick Temple (1821–1902), both conservative Christians in the Church of England, promoted a theology of creation as an indirect process controlled by divine laws. Some strict Calvinists welcomed the idea of natural selection, as it did not entail inevitable progress and humanity could be seen as a fallen race requiring salvation. The Anglo-Catholic Aubrey Moore (1848–1890) also accepted the theory of natural selection, incorporating it into his Christian beliefs as merely the way God worked. Darwin's friend Asa Gray (1810–1888) defended natural selection as compatible with design. Darwin himself, in his second edition of the Origin (January 1860), had written in the conclusion: I believe that animals have descended from at most only four or five progenitors, and plants from an equal or lesser number. Analogy would lead me one step further, namely, to the belief that all animals and plants have descended from some one prototype. But analogy may be a deceitful guide. Nevertheless all living things have much in common, in their chemical composition, their germinal vesicles, their cellular structure, and their laws of growth and reproduction. We see this even in so trifling a circumstance as that the same poison often similarly affects plants and animals; or that the poison secreted by the gall-fly produces monstrous growths on the wild rose or oak-tree. I should infer from analogy that probably all the organic beings which have ever lived on this earth have descended from some one primordial form, into which life was first breathed by the Creator. Within a decade most scientists had started espousing evolution, but from
|
{
"page_id": 328815,
"source": null,
"title": "Theistic evolution"
}
|
the outset some expressed opposition to the concept of natural selection and searched for a more purposeful mechanism. In 1860 Richard Owen attacked Darwin's Origin of Species in an anonymous review while praising "Professor Owen" for "the establishment of the axiom of the continuous operation of the ordained becoming of living things". In December 1859 Darwin had been disappointed to hear that Sir John Herschel apparently dismissed the book as "the law of higgledy-pigglety", and in 1861 Herschel wrote of evolution that "[a]n intelligence, guided by a purpose, must be continually in action to bias the direction of the steps of change—to regulate their amount—to limit their divergence—and to continue them in a definite course". He added "On the other hand, we do not mean to deny that such intelligence may act according to law (that is to say, on a preconceived and definite plan)". The scientist Sir David Brewster (1781–1868), a member of the Free Church of Scotland, wrote an article called "The Facts and Fancies of Mr. Darwin" (1862) in which he rejected many Darwinian ideas, such as those concerning vestigial organs or questioning God's perfection in his work. Brewster concluded that Darwin's book contained both "much valuable knowledge and much wild speculation", although accepting that "every part of the human frame had been fashioned by the Divine hand and exhibited the most marvellous and beneficent adaptions for the use of men". In the 1860s theistic evolutionism became a popular compromise in science and gained widespread support from the general public. Between 1866 and 1868 Owen published a theory of derivation, proposing that species had an innate tendency to change in ways that resulted in variety and beauty showing creative purpose. Both Owen and Mivart (1827–1900) insisted that natural selection could not explain patterns and variation, which they
|
{
"page_id": 328815,
"source": null,
"title": "Theistic evolution"
}
|
saw as resulting from divine purpose. In 1867 the Duke of Argyll published The Reign of Law, which explained beauty in plumage without any adaptive benefit as design generated by the Creator's laws of nature for the delight of humans. Argyll attempted to reconcile evolution with design by suggesting that the laws of variation prepared rudimentary organs for a future need. Cardinal John Henry Newman wrote in 1868: "Mr Darwin's theory need not then to be atheistical, be it true or not; it may simply be suggesting a larger idea of Divine Prescience and Skill ... and I do not [see] that 'the accidental evolution of organic beings' is inconsistent with divine design—It is accidental to us, not to God." In 1871 Darwin published his own research on human ancestry in The Descent of Man, concluding that humans "descended from a hairy quadruped, furnished with a tail and pointed ears", which would be classified amongst the Quadrumana along with monkeys, and in turn descended "through a long line of diversified forms" going back to something like the larvae of sea squirts. Critics promptly complained that this "degrading" image "tears the crown from our heads", but there is little evidence that it led to loss of faith. Among the few who did record the impact of Darwin's writings, the naturalist Joseph LeConte struggled with "distress and doubt" following the death of his daughter in 1861, before enthusiastically saying in the late 1870s there was "not a single philosophical question connected with our highest and dearest religious and spiritual interests that is fundamentally affected, or even put in any new light, by the theory of evolution", and in the late 1880s embracing the view that "evolution is entirely consistent with a rational theism". Similarly, George Frederick Wright (1838–1921) responded to Darwin's Origin
of Species and Charles Lyell's 1863 Geological Evidences of the Antiquity of Man by turning to Asa Gray's belief that God had set the rules at the start and only intervened on rare occasions, as a way to harmonise evolution with theology. The idea of evolution did not seriously shake Wright's faith, but he later suffered a crisis when confronted with historical criticism of the Bible.

== Acceptance ==

According to Eugenie Scott: "In one form or another, Theistic Evolutionism is the view of creation taught at the majority of mainline Protestant seminaries". Despite having no official position, the Catholic Church supports belief in it. Studies show that acceptance of evolution is lower in the United States than in Europe or Japan; among 34 countries sampled, only Turkey had a lower rate of acceptance than the United States. Theistic evolution has been described as arguing for compatibility between science and religion, and as such it is viewed with disdain both by some atheists and by many young Earth creationists.

== Hominization ==

Hominization, in both science and religion, involves the process or purpose of becoming human. How hominization occurs is a key problem in theistic evolutionary thought, most noticeably in the Abrahamic religions, which have often held as a core belief that the souls of animals and humans differ in some capacity. Thomas Aquinas taught that animals did not have immortal souls, but that humans did. Many versions of theistic evolution insist on a special creation consisting of at least the addition of a soul just for the human species. Scientific accounts of the origin of the universe, the origin of life, and the subsequent evolution of pre-human life forms may not cause any difficulty, but the need to reconcile religious and