Catalyst Story Institute & Content Festival
Catalyst Story Institute & Content Festival, formerly ITVFest (The Independent Television Festival), is an arts organization and an annual festival gathering of fans, artists and executives from around the world to celebrate outstanding independent narrative production.
The festival gives independent television and narrative artists the opportunity to share their work with television professionals and a larger community of independent TV producers.
Catalyst Story Institute & Content Festival is the only independent television festival in the United States, and ITVFest was the first festival dedicated to independently produced pilots and web series, with an emphasis on new media.
Festival support has come from companies and media outlets including Sony, The TV Academy (the Emmys), Fox TV Studios, RDF, Comedy Central, SyFy, FX (TV channel), GSN, HBO, Current TV, The Hollywood Reporter, Tubefilter, Stickam, B-Side and Comcast.
History
Catalyst Story Institute & Content Festival was founded as The Independent Television Festival in 2006 by producer AJ Tesler, and was originally held in Los Angeles, California.
In 2013, Philip Gilpin, Jr. took over as the festival's Executive Director. ITVFest moved from Los Angeles to Vermont to find a rural setting where industry leaders and creators could escape the big city and build relationships, with the goal of creating a pipeline giving independent television creators direct access to industry decision makers. In 2015, after attending ITVFest, IndieWire called independent television "a real, new part of the industry that deserves attention."
In 2017, ITVFest partnered with HBO in a deal that awarded festival winners meetings with top executives. Bob Bakish, CEO of Viacom, keynoted at ITVFest 2017. In 2018, Adaptive Studios bought the rights to Astral, a television drama set to include actors Ben Affleck and Matt Damon, from a creator discovered at ITVFest. Powderkeg's Laura Fischer and Gary Dourdan were among the judges at ITVFest in 2018. After ITVFest 2018, Vermont officials dropped their support for television and film in the state.
In 2019, ITVFest moved to Duluth, Minnesota, attracting thousands of attendees.
Also in 2019, ITVFest contracted a multiyear sponsorship with Abrams Artists Agency, marking the first direct connection between a TV festival and an agency, while simultaneously rebranding to Catalyst Story Institute & Content Festival.
Previous winners
2024 winners
PitchWorld Champions (Best Series Pitch): Secondchance Maggie - Rebecca Knowles; Branson - Erin Bradford
Best Student Script: Monday School
Best Drama Script: Rebeccas - Megen Musegades
Best Comedy Script: Tamarack - Elle Thoni
Best Sci-fi and Fantasy Script: Nightfall - David Guthrie
Best Kids and Animation Script: Bright Blue - Mary McGloin
Best Thriller/Horror Script: Shuttlecock - Tapan Sharma; Magnolia Hill - Terra Wellington
Best Animation Pilot: Chorus to Dero - Dana Corrigan, Joseph Solinsky
Best Comedy Pilot: The Countdown - Melanie Renfroe, Shannon McLemore, Heidi-Marie Ferren, Yorke Fryer, James Renfroe, Kelly Roberts, Noah Hougland, Peter Beirne, David Gielan
Best Documentary Pilot: Woman at the End of the World - Martyna Wojciechowska, Hanna Jewsiewicka, Ewa Marcinowska, Jowita Baraniecka-Ogden
Best Drama Pilot: Madam - Tom Hern, Halaifonua Finau, Marci Wiseman, Nick Spicer, Aram Tertzakian, Belindalee Hope, Kacie Anning, Madeleine Sami, Peter Salmon, Shoshana McCallum, and Harry McNaughton
Best Reality Pilot: Affiliated - Truman Kewley, Terry 'Carver Tee' Williams, Deawnne Buckmire
Best Sci-Fi and Fantasy Pilot: Two Breaths - Kateryna Kurganska, Timur Guseynov, Don John, c Craig
Best Short Form and Experimental Pilot: My Dearest Elizabeth - MaryLynn Suchan, Poonam Basu
Best Sports Pilot: Mary Queen of Van Life - Jasia Ka
Best Podcast: abandoned: The All-American Ruins Podcast
Best of Fest: Madam - Tom Hern, Halaifonua Finau, Marci Wiseman, Nick Spicer, Aram Tertzakian, Belindalee Hope, Kacie Anning, Madeleine Sami, Peter Salmon, Shoshana McCallum, and Harry McNaughton
2019 winners
Best Breakout Creators - Sam and Colby
Best Breakout Series - "Challenge Accepted"
Best Script Pitch - "Georgi & the Bot"
Best Documentary - "Beneath the Ink"
Best Reality Series - "Run"
Best Animation - "Starship Goldfish"
Best Short Film - "Lose It"
Best Short Comedy Series - "Doxxed"
Best Short Drama Series - "Dad Man Walking"
Best Comedy Series - "This Isn't Me"
Best Drama Series - "Home Turf"
Best Comedy Script - Wendy Braff, "Mr. Trivia"
Best Drama Script - Justin Moran, "Rust"
Best Cinematography - Ryan Z. Emanuel, "Chosen"
Best Writing - Brandon Garegnani, "Scribbles"
Best Directing - Mara Joly, "Home Turf"
Best Editing - Oliver Parker, "The System"
Best Comedy Actor - Ben Kawaller, "This Isn't Me"
Best Comedy Actress - Ani Tatintsyan, "Pre-Mortem"
Best Drama Actor - Akintola Jiboyewa, "The System"
Best Drama Actress - Alison Jaye, "Chosen"
Best of Festival - "Work/Friends"
2019 Documentary Official Selection
Beneath the Ink
Caminantes (Walkers)
Jeffrey T. Larson
Magnolia's Hope
Outsourced: The New Wisconsin Idea
2018 winners
Best Podcast - Yarn Story Podcast
Best Drama Script - Near Death, Owen Hornstein III, Andrew Bryan, and James Roe
Best Comedy Script - Not Liz, Liz Murpy
Best Documentary - Jacks & Jills
Best Short Film - Monday
Best Cinematography - Adrian Correia, Avenues
Best Reality - Charlie Bee Company
Best Writing - Stephen Ohl, White River Tales
Best Editing - Darian Dauchan, The New Adventures of Brobot Johnson
Best Drama Actor - Kolman Domingo, Nothingham
Best Drama Actress - Vongai Shava, Patiri in the Promise Land
Best Comedy Actor - Tomy Kang, Taking a Hit
Best Comedy Actress - Caroline Parsons, The Russian Cousin
Best Directing - Yair Valer, Wild Weeds
Best Short Comedy Series - Susaneland
Best Short Drama Series - Revenge Tour
Best Drama Series - Currency
Best Comedy Series - Filth City
Best of ITVFest - 88
2016 winners
Best TV Short Drama - People Like Us
2015 winners
Best Visual Effects - Border Queen
Best Reality - Kickin' it Caucasian
Best Short Film - February
Best Writing - The Wake
Best Cinematography - Zero Point
Best Acting Ensemble - Fu@K I Love U
Best Actor - Darrell Lake, The Incredible Life of Darrell
Best Actress - Alex Trow, Cooking for One
Best Director - Kerry Valderrama, Sanitarium
Best Documentary - Port of Indecision
Best Web Series Comedy - The KATEering Show
Best Web Series Drama - Farr
Best TV Drama - Trouble
Best TV Comedy - Life Sucks
Best of ITVFest - The Wake
2013 winners
Best in Show - Old Souls
Best Acting - Mythos
Best Comedy - Preggers
Best Drama - Event Zero
Best Documentary - Comrade Sunshine
2012 winners
Best Writer - Underwater, Nathan Marshall & Michael Traynor
Best Actress in a Drama - Underwater, Rachel Nichols
2010 winners
"Innovator" Award - Illeana Douglas
Best Overall Web - Octane Pistols of Fury, Chris Prine, Greg Stees
Best Overall TV - Going to Pot, Leo Simone, Scott Perlman, Jamie Kennedy
Best Documentary - Going to Pot, Leo Simone, Scott Perlman, Jamie Kennedy
Best Animated Pilot - Time Traveling Finger, Stephen Leonard
Best Drama - the_source, Marc D’Agostino
Best Comedy - Octane Pistols of Fury, Chris Prine, Greg Stees
Mobifest Winner - Phobias, Kasi Brown and Brandon Walter
Best Actor - La Manzana, Paula Roman
Best Director - 15 Minutes, Bobby Salomon
Best Writer - Odd Jobs, Jeremy Redleaf
Best Cinematographer - Goodsam & Max, Gil Nievo
2009 winners
"I Am Independent" Award - Kevin Pollak
Best Overall Web - OzGirl, Nicholas Carlton, Sophie Tilson
Best Overall TV - Dog, Barry Gribble
Best Documentary - Pushing The Limits, Javier Bermudez
Best Animated Pilot - Wentworth & Buxbury, Lucas Crandles, Timothy Nash and Hayden Grubb
Best Drama - Urban Wolf, Napoleon Premiere, Laurent Touil-Tartour
Best Comedy - MERRIme.com, Kaily Smith
Mobifest Winner - Chelsey & Kelsey, Claire Coffee, Ellie Knaus, Marie-Amelie Rechberg
Best Actor - OzGirl, Shanrah Wakefield, Sophie Tilson
Best Director - Dark Room Theater, Benjamin Pollack
Best Writer - Imaginary Bitches, Andrew Miller
Best Cinematographer - Goodsam & Max, Gil Nievo
2008 winners
Best Dramatic Program - Turnover, Michael Blieden
Best Comedic Program - Small Bits of Happiness, Blake Barrie and Thiago Gadelha
Best Documentary Program - Wal-Mart Nation - Andrew Munger
Best Alternative Program - Welcome to Plainville - Jason Frederick, Anne Gregory, Kevin McShane, Opus Moreschi, Charlotte Newhouse
Best Webseries - Violent Jake, Samuel Smith, Clint Gossett and Tansy Brook
Audience Award - Small Town News - Sarah Babineau
Best Director - Turnover, Michael Blieden
Best Writer - Hit Factor, James Cromwell, Saba Homayoon, Neil Hopkins, Jamie Rosenblatt, Kerry Sullivan
Best Ensemble Acting - Hit Factor, James Cromwell, Saba Homayoon, Neil Hopkins, Jamie Rosenblatt, Kerry Sullivan
2007 Winners
Best Dramatic Program - The Collectors, Steve Alper
Best Comedic Program - Partners - Seth Menachem, Avi Rothman
Best Documentary Program - Gusto - Mike Maniglia, Subterra Films
Best Alternative Program - King Kaiser - Steven Burrows, The Burrows of Hollywood
Best Webseries - Trekant, Diaperdog Productions
Audience Award - King Kaiser - Steven Burrows, The Burrows of Hollywood
Webseries Audience Award - Flipper Nation, Space Shank Media
Vuze Second Chance Competition - Adam Ray TV, Adam Ray
Best Director - Mr. Jackson's Neighborhood, Nathan Marshall
Best Writer - Deal With It, Steven Muterspaugh, Michael Kary, Jamey Hood and Karin Kary
Best Ensemble Acting - Grounds Zero, Alan Keller
2006 Winners
Best Dramatic Program - "FBI Guys" - Paul Darrigo, 2 Wolves Production
Best Comedic Program - "As Seen on TV" - Ryan Sage
Best Variety Program - "Loading Zone" - Nick Barnes, Frog Island Films
Best Reality Program - "Meet Tom Kramer" - Rachael Pihlaja
Audience Award - "This is My Friend" - Jeremy Konner, Morning Knight Films
Best Director - "The Perverts" - John Gegenhuber, The Perverts Pictures
Best Writer - "As Seen on TV" - Ryan Sage
Best Ensemble Acting - "Van Stone: Tour of Duty" - Tim Bennett, Dakota
See also
List of television festivals
References
External links
Official Website
Twitter
YouTube
Festivals in Los Angeles
Television festivals
New media
Multigenre conventions
Festivals in Minnesota
Arts festivals in California
WaLSA Team
The Waves in the Lower Solar Atmosphere (WaLSA) team is an international consortium that is focused on investigating wave activity in the Sun's lower atmosphere. The purpose of the group is to understand how magnetohydrodynamic (MHD) waves generated within the Sun's interior and lower atmosphere affect the dynamics and heating of its outer layers.
The WaLSA team's research has been supported by organizations including the Research Council of Norway through the Rosseland Centre for Solar Physics, the Royal Society, and the International Space Science Institute.
Research
The WaLSA team's research focuses on understanding various wave modes propagating through solar structures. They have investigated the coupling mechanisms between different wave modes and measured energy carried by MHD waves.
References
Astrophysics
Spiral of Theodorus
In geometry, the spiral of Theodorus (also called the square root spiral, Pythagorean spiral, or Pythagoras's snail) is a spiral composed of right triangles, placed edge-to-edge. It was named after Theodorus of Cyrene.
Construction
The spiral is started with an isosceles right triangle, with each leg having unit length. Another right triangle (which is the only automedian right triangle) is formed, with one leg being the hypotenuse of the prior right triangle (with length √2) and the other leg having length 1; the length of the hypotenuse of this second right triangle is √3. The process then repeats; the nth triangle in the sequence is a right triangle with side lengths √n and 1, and with hypotenuse √(n + 1). For example, the 16th triangle has sides measuring 4 (= √16) and 1, and hypotenuse √17.
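The construction above can be sketched numerically. This is an illustrative sketch (function and variable names are not from any source); each step appends a unit-length leg perpendicular to the current hypotenuse, so the radius after n steps is √(n + 1):

```python
import math

def theodorus_points(n_triangles):
    """Return the spiral vertices as complex numbers, starting at 1+0j.

    Each step adds a unit-length leg perpendicular to the current
    hypotenuse, so abs(points[n]) == sqrt(n + 1).
    """
    points = [complex(1, 0)]
    for _ in range(n_triangles):
        p = points[-1]
        # 1j * p / abs(p) is the unit vector perpendicular to the hypotenuse
        points.append(p + 1j * p / abs(p))
    return points

pts = theodorus_points(16)
print(abs(pts[16]))  # hypotenuse of the 16th triangle: sqrt(17) ≈ 4.1231
```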
History and uses
Although all of Theodorus' work has been lost, Plato put Theodorus into his dialogue Theaetetus, which tells of his work. It is assumed that Theodorus had proved that all of the square roots of non-square integers from 3 to 17 are irrational by means of the Spiral of Theodorus.
Plato does not attribute the irrationality of the square root of 2 to Theodorus, because it was well known before him. Theodorus and Theaetetus split the rational numbers and irrational numbers into different categories.
Hypotenuse
Each of the triangles' hypotenuses hₙ gives the square root of the corresponding natural number, with h₁ = √2.
Plato, tutored by Theodorus, questioned why Theodorus stopped at √17. The reason is commonly believed to be that the √17 hypotenuse belongs to the last triangle that does not overlap the figure.
Overlapping
In 1958, Erich Teuffel proved that no two hypotenuses will ever coincide, regardless of how far the spiral is continued. Also, if the sides of unit length are extended into a line, they will never pass through any of the other vertices of the total figure.
Extension
Theodorus stopped his spiral at the triangle with a hypotenuse of √17. If the spiral is continued to infinitely many triangles, many more interesting characteristics are found.
Growth rate
Angle
If φ(n) is the angle of the nth triangle (or spiral segment), then:
tan φ(n) = 1/√n.
Therefore, the growth of the angle from one triangle to the next is:
φ(n) = arctan(1/√n).
The sum of the angles of the first k triangles is called the total angle Φ(k) for the kth triangle. It grows proportionally to the square root of k, with a bounded correction term c(k):
Φ(k) = φ(1) + φ(2) + ⋯ + φ(k) = 2√k + c(k),
where c(k) converges to a constant c ≈ −2.157783 as k → ∞.
Radius
The growth of the radius of the spiral at the nth triangle is
Δr = √(n + 1) − √n.
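The angle and radius formulas above can be checked numerically; a minimal sketch (names are illustrative, not from any source):

```python
import math

def total_angle(k):
    """Sum of the spiral's first k segment angles, arctan(1/sqrt(n))."""
    return sum(math.atan(1 / math.sqrt(n)) for n in range(1, k + 1))

k = 100_000
c_k = total_angle(k) - 2 * math.sqrt(k)  # bounded correction term
print(round(c_k, 4))                     # slowly approaching c ≈ -2.1578 from above

# Radius growth between consecutive triangles tends to zero:
n = 10**6
print(math.sqrt(n + 1) - math.sqrt(n))   # about 5e-4: the gap shrinks toward zero
```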
Archimedean spiral
The spiral of Theodorus approximates the Archimedean spiral. Just as the distance between two successive windings of the Archimedean spiral equals the mathematical constant π, the distance between two consecutive windings of the spiral of Theodorus quickly approaches π as the number of spins approaches infinity.
Successive windings of the spiral approach π: after only the fifth winding, the distance between consecutive windings is a 99.97% accurate approximation to π.
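One way to observe this numerically is to find where the spiral's total angle crosses successive multiples of 2π and compare the radii there. This sketch, including its linear interpolation within a triangle, is an illustrative assumption, not a definitive computation:

```python
import math

def winding_radii(n_windings):
    """Radius of the spiral where its total angle crosses each multiple
    of 2*pi, linearly interpolated between consecutive triangles."""
    radii = []
    target = 2 * math.pi
    phi = 0.0
    n = 1
    while len(radii) < n_windings:
        step = math.atan(1 / math.sqrt(n))  # angle of the nth triangle
        if phi + step >= target:
            frac = (target - phi) / step
            # interpolate the radius between sqrt(n) and sqrt(n + 1)
            radii.append(math.sqrt(n) + frac * (math.sqrt(n + 1) - math.sqrt(n)))
            target += 2 * math.pi
        phi += step
        n += 1
    return radii

r = winding_radii(6)
gaps = [b - a for a, b in zip(r, r[1:])]
print([round(g, 4) for g in gaps])  # each gap is close to pi ≈ 3.1416
```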
Continuous curve
The question of how to interpolate the discrete points of the spiral of Theodorus by a smooth curve was proposed and answered by Philip J. Davis in 2001 by analogy with Euler's formula for the gamma function as an interpolant for the factorial function. Davis found the function
T(x) = ∏ (1 + i/√k) / (1 + i/√(x + k)) for k = 1 to ∞, with −1 < x < ∞,
which was further studied by his student Leader and by Iserles. This function can be characterized axiomatically as the unique function that satisfies the functional equation
f(x + 1) = (1 + i/√(x + 1)) · f(x),
the initial condition f(0) = 1, and monotonicity in both argument and modulus.
An analytic continuation of Davis' continuous form of the Spiral of Theodorus extends in the opposite direction from the origin.
In the figure the nodes of the original (discrete) Theodorus spiral are shown as small green circles. The blue ones are those, added in the opposite direction of the spiral.
Only nodes with the integer value of the polar radius are numbered in the figure.
The dashed circle at the coordinate origin is the circle of curvature of the spiral there.
See also
Fermat's spiral
List of spirals
References
Further reading
Theodorus
Pythagorean theorem
Pi
C6H14O
The molecular formula C6H14O may refer to:
tert-Amyl methyl ether
Diisopropyl ether
Dimethylbutanols
2,2-Dimethyl-1-butanol
3,3-Dimethyl-1-butanol
Dipropyl ether
2-Ethyl-1-butanol
Ethyl tert-butyl ether
Hexanols
1-Hexanol
2-Hexanol
3-Hexanol
Methylpentanols
2-Methyl-1-pentanol
3-Methyl-1-pentanol
4-Methyl-1-pentanol
2-Methyl-2-pentanol
3-Methyl-2-pentanol
4-Methyl-2-pentanol
2-Methyl-3-pentanol
3-Methyl-3-pentanol
Pinacolyl alcohol
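All of the isomers above are saturated alcohols and ethers sharing one molar mass; a quick check from standard atomic masses (an illustrative addition, not from the source):

```python
# Molar mass and degree of unsaturation for C6H14O, from standard atomic masses.
C, H, O = 12.011, 1.008, 15.999

molar_mass = 6 * C + 14 * H + 1 * O
print(round(molar_mass, 2))  # 102.18 (g/mol)

# Degree of unsaturation: (2*C_count + 2 - H_count) / 2; oxygen does not contribute.
dou = (2 * 6 + 2 - 14) / 2
print(dou)  # 0.0 — fully saturated, so only alcohols and ethers are possible
```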
Spherical polyhedron
In geometry, a spherical polyhedron or spherical tiling is a tiling of the sphere in which the surface is divided or partitioned by great arcs into bounded regions called spherical polygons. A polyhedron whose vertices are equidistant from its center can be conveniently studied by projecting its edges onto the sphere to obtain a corresponding spherical polyhedron.
The most familiar spherical polyhedron is the soccer ball, thought of as a spherical truncated icosahedron. The next most popular spherical polyhedron is the beach ball, thought of as a hosohedron.
Some "improper" polyhedra, such as hosohedra and their duals, dihedra, exist as spherical polyhedra, but their flat-faced analogs are degenerate. The example hexagonal beach ball, {2, 6}, is a hosohedron, and {6, 2} is its dual dihedron.
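As a quick sanity check that sphere tilings behave like ordinary polyhedra, Euler's formula V − E + F = 2 can be verified for the truncated icosahedron mentioned above; this derivation from face counts is an illustrative addition, not from the source:

```python
# Euler's formula V - E + F = 2 for the truncated icosahedron ("soccer ball"):
# 12 pentagonal faces and 20 hexagonal faces, 3 faces meeting at each vertex.
faces = {5: 12, 6: 20}

F = sum(faces.values())
side_incidences = sum(sides * count for sides, count in faces.items())
E = side_incidences // 2  # each edge is shared by exactly 2 faces
V = side_incidences // 3  # exactly 3 faces meet at each vertex
print(V, E, F, V - E + F)  # 60 90 32 2
```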
History
During the 10th Century, the Islamic scholar Abū al-Wafā' Būzjānī (Abu'l Wafa) studied spherical polyhedra as part of a work on the geometry needed by craftspeople and architects.
The work of Buckminster Fuller on geodesic domes in the mid 20th century triggered a boom in the study of spherical polyhedra. At roughly the same time, Coxeter used them to enumerate all but one of the uniform polyhedra, through the construction of kaleidoscopes (Wythoff construction).
Examples
All regular polyhedra, semiregular polyhedra, and their duals can be projected onto the sphere as tilings:
Improper cases
Spherical tilings allow cases that polyhedra do not, namely hosohedra: figures as {2,n}, and dihedra: figures as {n,2}. Generally, regular hosohedra and regular dihedra are used.
Relation to tilings of the projective plane
Spherical polyhedra having at least one inversive symmetry are related to projective polyhedra (tessellations of the real projective plane) – just as the sphere has a 2-to-1 covering map of the projective plane, projective polyhedra correspond under 2-fold cover to spherical polyhedra that are symmetric under reflection through the origin.
The best-known examples of projective polyhedra are the regular projective polyhedra, the quotients of the centrally symmetric Platonic solids, as well as two infinite classes of even dihedra and hosohedra:
Hemi-cube, {4,3}/2
Hemi-octahedron, {3,4}/2
Hemi-dodecahedron, {5,3}/2
Hemi-icosahedron, {3,5}/2
Hemi-dihedron, {2p,2}/2, p>=1
Hemi-hosohedron, {2,2p}/2, p>=1
See also
Spherical geometry
Spherical trigonometry
Polyhedron
Projective polyhedron
Toroidal polyhedron
Conway polyhedron notation
References
Polyhedra
Tessellation
Spheres
Neural Audio Corporation
Neural Audio Corporation was an audio research company based in Kirkland, Washington.
The company specialized in high-end audio research. It helped XM Satellite Radio launch their service using the Neural Codec Pre-Conditioner, which was designed to provide higher quality audio at lower bitrates.
History
The company was co-founded by two audio engineers, Paul Hubert and Robert Reams in 2000.
In 2009 the company was acquired by DTS Inc. for $15 million in cash.
Products
Neural was mostly known for its work in the field of audio processing and its "Neural Surround" sound format. ESPN, FOX, NBC, CBS, Sony, Universal, Warner Bros, THX, Yamaha, Pioneer Electronics, Ford, Honda, Nissan, Vivendi and SiriusXM were partners and customers in connection with sound for movies, broadcasting applications, music reproduction and video games.
"Neural Surround" is a technology similar to MPEG Surround, in which a 5.1 stream is downmixed into stereo and then recovered using cues encoded into the downmixed stereo. NPR participated in a trial of the "Neural Surround" technology in 2004, using the Harris NeuStar 5225. XM HD Surround was based on the same technology.
Neural provided its "Codec Pre-Conditioner" in at least two forms: the "NeuStar UltraLink digital radio audio conditioner", a hardware device, and the "NeuStar SW4.0", software running on Windows XP. The software's manual indicates that the pre-conditioner works by analyzing the noise in each frequency bin and masking it so as not to exceed predefined limits, so that it does not overwhelm a codec.
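The general idea of per-bin spectral limiting can be illustrated with a short sketch. This is not Neural's actual algorithm, which was proprietary; the bin ceilings, FFT size, and function names here are all illustrative assumptions:

```python
import numpy as np

def limit_bins(frame, limits):
    """Clamp each FFT bin's magnitude to a predefined ceiling,
    preserving phase. `frame` is one block of time-domain samples."""
    spectrum = np.fft.rfft(frame)
    magnitude = np.abs(spectrum)
    phase = np.angle(spectrum)
    clamped = np.minimum(magnitude, limits)  # enforce per-bin ceilings
    return np.fft.irfft(clamped * np.exp(1j * phase), n=len(frame))

rng = np.random.default_rng(0)
frame = rng.standard_normal(1024)
limits = np.full(513, 5.0)  # rfft of 1024 samples yields 513 bins
out = limit_bins(frame, limits)
print(np.abs(np.fft.rfft(out)).max())  # ≈ 5.0: bins above the ceiling were clamped
```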
Harris Broadcast acted as a redistributor of Neural technology.
References
External links
Audio engineering
Resistance (creativity)
Resistance is a concept created by American novelist Steven Pressfield that illustrates the universal force that he claims acts against human creativity. It was first described in his non-fiction book The War of Art and elaborated in the follow-up books Do The Work and Turning Pro, and in other essays. It is also a recurring theme in some of his novels, such as The Legend of Bagger Vance and The Virtues of War.
Resistance is described in a mythical fashion as a universal force that has one sole mission: to keep things as they are. Pressfield claims that Resistance does not have a personal vendetta against anyone, rather it is simply trying to accomplish its only mission. It is the force that will stop an individual's creative activity through any means necessary, whether it be rationalizing, inspiring fear and anxiety, emphasizing other distractions that require attention, raising the voice of an inner critic, and much more. It will use any tool to stop creation flowing from an individual, no matter what field the creation is in.
Pressfield goes on to claim that Resistance is the most dangerous element to one's life and dreams since its sole mission is to sabotage aspirations. He explains steps that human beings can take to overcome this force and keep it subdued so that they can create to their fullest potential, although Resistance is never fully gone.
Pressfield's concept of Resistance has been cited by authors such as Seth Godin, David M. Kelley and Tom Kelley, Eric Liu and the Lincoln Center Institute, Robert Kiyosaki and Sharon Lechter, and Gina Trapani.
Criticism
Psychologist Frederick Heide has cited Pressfield's book The War of Art and questioned whether "fighting" Resistance is always a helpful metaphor; Heide suggested that such agonistic metaphors could end up "ironically perpetuating the resistance it predicts." Nevertheless, Heide noted, such an agonistic approach to resistance remains widespread in psychotherapeutic thinking. Heide cites a scholarly article that points to some alternative nonagonistic strategies for working with resistance in relational psychoanalytic psychotherapy, personal construct therapy, narrative therapy, motivational interviewing, process-experiential therapy, and coherence therapy.
See also
Decisional balance sheet
Immunity to change
Procrastination
Psychological resistance
Sin
Notes
References
Creativity
Literary concepts
Mythological archetypes
National Prize for Exact Sciences (Chile)
The National Prize for Exact Sciences (Premio Nacional de Ciencias Exactas) was created in 1992 as one of the replacements for the National Prize for Sciences under Law 19169. The other two prizes in this same area are for Natural Sciences and Applied Sciences and Technologies.
It is part of the National Prize of Chile.
Jury
The jury is made up of the Minister of Education, who convenes it; the Rector of the University of Chile; the President of the Chilean Academy of Sciences; a representative of the Council of Rectors; and the most recent recipient of the prize.
Winners
1981, (physics)
1991, (physics)
1993, and Eric Goles (mathematics)
1995, Claudio Bunster (physics)
1997, María Teresa Ruiz (astronomy)
1999, José Maza Sancho (astronomy)
2001, (physics)
2003, Carlos Conca (mathematics)
2005, (physics)
2007, (physics)
2009, Ricardo Baeza Rodríguez (mathematics)
2011, (mathematics)
2013, Manuel del Pino (mathematics)
2015, Mario Hamuy (astronomy)
2017, (astronomy)
2019, Dora Altbir (nanoscience and nanotechnology)
2021, (astronomy)
2023, Jaime San Martín (mathematics)
See also
CONICYT
List of astronomy awards
List of computer science awards
List of mathematics awards
List of physics awards
References
1992 establishments in Chile
Awards established in 1992
Chilean science and technology awards
Mathematics awards
Physics awards
Astronomy prizes
Computer science awards
Information science awards
1992 in Chilean law
Erythranthe
Erythranthe, the monkey-flowers and musk-flowers, is a diverse plant genus with more than 120 members (as of 2022) in the family Phrymaceae. Erythranthe was originally described as a separate genus, then generally regarded as a section within the genus Mimulus, and recently returned to generic rank. Mimulus sect. Diplacus was segregated from Mimulus as a separate genus at the same time. Mimulus remains as a small genus of eastern North America and the Southern Hemisphere. Molecular data show Erythranthe and Diplacus to be distinct evolutionary lines that are distinct from Mimulus as strictly defined, although this nomenclature is controversial.
Member species are usually annuals or herbaceous perennials. Flowers are red, pink, or yellow, often in various combinations. A large number of the Erythranthe species grow in moist to wet soils with some growing even in shallow water. They are not very drought resistant, but many of the species now classified as Diplacus are. Species are found at elevations from oceanside to high mountains as well as a wide variety of climates, though most prefer wet areas such as riverbanks.
The largest concentration of species is in western North America, but species are found elsewhere in the United States and Canada, as well as from Mexico to Chile and eastern Asia. Pollination is mostly by either bees or hummingbirds. Member species are widely cultivated and are subject to several pests and diseases. Several species are listed as threatened by the International Union for Conservation of Nature.
Description
Erythranthe is a highly diverse genus with the characteristics unifying the various species being axile placentation and long pedicels. Other characteristics of species can vary widely, especially between the sections, and even within some sections. Some species of Erythranthe are annuals and some are perennials. Flowers are red, pink, purple, or yellow, often in various combinations and shades of those colors. Some species produce copious amounts of aromatic compounds, giving them a musky odor (hence "musk-flowers"). Erythranthe is used as food by the larvae of some Lepidoptera species, such as the mouse moth (Amphipyra tragopoginis), as a main part of their diet.
Within the section Erythranthe, stems and leaves range from glabrous to hirsute, and are generally glandular. Leaves can be oblong, elliptical, or oval, with small teeth. Fruiting pedicels are longer than calyces. Calyces have sharp, definite angles and flat sides. Corollas are deciduous, relatively large (tube-throat long), and strongly red to purplish, magenta-rose, pink, or white, rarely yellow.
Erythranthe guttata is the most widespread of the genus Erythranthe and its characteristics are fairly representative of the genus. E. guttata is tall with disproportionately large long, tubular flowers. Leaves are opposite and oval, long. The species as strictly defined is perennial and spreads with stolons or rhizomes. The stem may be erect or recumbent. In the latter form, roots may develop at lower leaf nodes. Sometimes dwarfed, it may be hairless or have some hairs. Leaves are opposite, round to oval, usually coarsely and irregularly toothed or lobed. The bright yellow flowers are born on a raceme, most often with five or more flowers. The calyx has five lobes that are much shorter than the flower. Each flower has bilateral symmetry and has two lips. The upper lip usually has two lobes; the lower, three. The lower lip may have one large to many small red to reddish brown spots. The opening to the flower is hairy. The fruit is a two-valved capsule long, containing many seeds.
Erythranthe alsinoides is similar to several species found in the Pacific Northwest. It is an annual herb that blooms from April–June with a preference for shady and moist dense habitats. The plant is hairy to slightly hairy and grows from tall. The stems are often reddish. The leaves are opposite and have a few prominent upper veins. Blades are long. The petiole is about the same length. The flowers are yellow with reddish-brown spots, usually on the lower lip, and the upper and lower lips have fused, growing . Each flower is attached by a pedicel. The fruits are capsules.
Etymology and taxonomy
The derivation of Erythranthe is from Greek ἐρυθρός ("erythros"), red, with ἄνθος ("anthos"), flower. The plants are called monkey-flowers because some species have flowers shaped like a monkey's face. The formerly widespread generic name Mimulus, from Latin mimus meaning "mimic actor" (in turn from Greek mimos, "imitator"), also alludes to the fancied monkey resemblance. The stem of Erythranthe can be either smooth or hairy, and this is known in a few species to be a trait determined by a simple allelic difference. At least E. lewisii is known to possess "flypaper-type" traps and is apparently protocarnivorous, supplementing its nutrients with small insects. Variations in color largely reflect concentrations of anthocyanins. The species that are subshrubs with woody stems were originally placed in the section Diplacus, which was subsequently made a separate genus. Diplacus is clearly derived from within Mimulus, broadly defined, and was not usually considered to be a separate genus.
The French botanist Édouard Spach established Erythranthe as a separate genus with just the type species Erythranthe cardinalis. In 1885, American botanist Edward Lee Greene classified Erythranthe as a section of Mimulus while adding E. lewisii and E. parishii. In the 2012 restructuring of Mimulus by Barker et al., based largely upon DNA evidence, seven species were left in Mimulus as strictly defined; Erythranthe was greatly enlarged to include 111 species, based upon axile placentation and long pedicels, 46 placed into Diplacus (species with parietal placentation and sessile flowers), two placed in Uvedalia, and one each placed in Elacholoma, Mimetanthe, and Thyridia. All of the American genera are still referred to as "monkey-flowers".
Views on the evolutionary position of the monkey-flower species have changed. It was long considered to be in the family Scrophulariaceae, but is now placed in Phyrmaceae, primarily on the basis of DNA evidence. The genus Phryma (comprising only a single species), for which the family is named, is considerably different in morphology from all of the monkey-flowers.
Attempts at crossing species, whether from different sections or within the same section, of Erythranthe are not always successful. E. peregrina is an example of a successful naturally occurring hybrid that not only arose independently in two different locations, but is also a rare example of evolutionary recent allopolyploidization, complete chromosomal inheritance.
Charles Darwin's 1876 study of inbreeding depression and self-fertility in South American species was a progenitor for the study of Erythranthe biology. The genus has become a model system "for studies of evolutionary and ecological functional genomics ... [as it] ... contains a wide array of phenotypic, ecological and genomic diversity." Species under intense genomic study are mostly among the section Simiolus (E. guttata and relatives) and the section Erythranthe (including E. lewisii, E. cardinalis, E. parishii, and others). The genome sequence of E. guttata was released in late spring, 2007.
Many issues remain in Erythranthe taxonomy. E. guttata is highly complex, with many variations apparently reflecting differences in geographic environment and elevation. Molecular geneticists regard the species broadly as including both perennial and annual populations, but there is a rationale for treating this complex as several distinct species (the perennials being E. guttata, E. grandis, and E. corallina; the annuals E. microphylla and others). The perennials and annuals differ as groups from each other by an inversion sequence on chromosome 8. Evidence tentatively indicates that the perennials evolved from annual ancestors. Separately, some evidence has been interpreted to indicate that E. nasuta evolved from E. guttata in central California between 200,000 and 500,000 years ago and has since become primarily a self-pollinator. Relationships among the apparently closely related E. tilingii, E. minor, and E. caespitosa are not clearly understood. Some currently recognized species may be just variants of others: E. arenicola, E. brachystylis, and E. regni. Chromosomal issues may affect the classification of some species: E. corallina, E. guttata, E. nasuta, E. tilingii, and E. utahensis.
Species
Species alphabetically
Plants of the World Online accepts the following species and hybrids:
Erythranthe acutidens (Greene) G.L.Nesom
Erythranthe alsinoides (Douglas ex Benth.) G.L.Nesom & N.S.Fraga – chickweed monkey-flower (British Columbia to northern California)
Erythranthe ampliata (A.L.Grant) G.L.Nesom
Erythranthe androsacea (Curran ex Greene) N.S.Fraga – rockjasmine monkey-flower (California)
Erythranthe arenaria (A.L.Grant) G.L.Nesom
Erythranthe arenicola (Pennell) G.L.Nesom
Erythranthe arvensis (Greene) G.L.Nesom
Erythranthe austrolatidens G.L.Nesom
Erythranthe barbata (Greene) N.S.Fraga
Erythranthe bhutanica (Yamazaki) G.L.Nesom – (Asia)
Erythranthe bicolor (Hartw. ex Benth.) G.L.Nesom & N.S.Fraga – yellow and white monkey-flower (California)
Erythranthe bodinieri (Vaniot) G.L.Nesom – (Asia)
Erythranthe brachystylis (Edwin) G.L.Nesom
Erythranthe bracteosa (P.C.Tsoong) G.L.Nesom – (Asia)
Erythranthe breviflora (Piper) G.L.Nesom – (British Columbia to California to Wyoming)
Erythranthe brevinasuta G.L.Nesom
Erythranthe breweri (Greene) G.L.Nesom & N.S.Fraga – Brewer's monkey-flower (British Columbia to California to Colorado)
Erythranthe bridgesii (Benth.) G.L.Nesom – (South America)
Erythranthe caespitosa (Greene) G.L.Nesom
Erythranthe calcicola N.S.Fraga
Erythranthe calciphila (Gentry) G.L.Nesom
Erythranthe cardinalis (Douglas ex Benth.) Spach – scarlet monkey-flower (southwestern United States and Baja California)
Erythranthe carsonensis N.S.Fraga – Carson Valley monkey-flower (California and Nevada)
Erythranthe charlestonensis G.L.Nesom
Erythranthe chinatiensis G.L.Nesom
Erythranthe cinnabarina G.L.Nesom
Erythranthe corallina (Greene) G.L.Nesom
Erythranthe cordata (Greene) G.L.Nesom
Erythranthe cuprea (Dombrain) G.L.Nesom – Flor de cobre (Eng: copper flower) (central and southern Chile)
Erythranthe decora (A.L.Grant) G.L.Nesom
Erythranthe dentata (Nutt. ex Benth.) G.L.Nesom – toothleaf monkey-flower, coastal monkey-flower (British Columbia to northern California)
Erythranthe dentiloba (B.L.Rob. & Fernald) G.L.Nesom
Erythranthe depressa (Phil.) G.L.Nesom
Erythranthe diffusa (A.L.Grant) N.S.Fraga
Erythranthe diminuens G.L.Nesom – (Sonora, Mexico)
Erythranthe discolor (A.L.Grant) N.S.Fraga
Erythranthe eastwoodiae (Rydb.) G.L.Nesom & N.S.Fraga
Erythranthe erubescens G.L.Nesom
Erythranthe exigua (A.Gray) G.L.Nesom & N.S.Fraga – San Bernardino Mountains monkey-flower (southern California, Baja California)
Erythranthe filicaulis (S.Watson) G.L.Nesom & N.S.Fraga – slender-stemmed monkey-flower (California)
Erythranthe filicifolia (Sexton, K.G.Ferris & Schoenig) G.L.Nesom
Erythranthe flammea G.L.Nesom
Erythranthe floribunda (Douglas ex Lindl.) G.L.Nesom – manyflowered monkey-flower (western Canada, Pacific Coast, Rocky Mountains, northern Mexico)
Erythranthe gemmipara (W.A.Weber) G.L.Nesom & N.S.Fraga – Rocky Mountain monkey-flower (Colorado)
Erythranthe geniculata (Greene) G.L.Nesom
Erythranthe geyeri (Torr.) G.L.Nesom
Erythranthe glabrata (Kunth) G.L.Nesom – roundleaf monkey-flower (widespread in North America, Mesoamerica and South America)
Erythranthe glaucescens (Greene) G.L.Nesom – shieldbract monkey-flower (California)
Erythranthe gracilipes (B.L.Rob.) N.S.Fraga – slenderstalk monkey-flower (California)
Erythranthe grandis (Greene) G.L.Nesom
Erythranthe grayi (A.L.Grant) G.L.Nesom
Erythranthe guttata (Fisch. ex DC.) G.L.Nesom – common large monkey-flower, common monkey-flower, stream monkey-flower, seep monkey-flower (AK, AZ, CA, CO, CT, DE, ID, MI, MT, ND, NE, NM, NV, NY, OR, PA, SD, UT, WA, WY; Canada: BC, Yukon; Mexico to Guatemala; naturalized in Britain)
Erythranthe hallii (Greene) G.L.Nesom
Erythranthe hardhamiae N.S.Fraga
Erythranthe howaldiae G.L.Nesom
Erythranthe hymenophylla (Meinke) G.L.Nesom
Erythranthe inamoena (Greene) G.L.Nesom
Erythranthe inconspicua (A.Gray) G.L.Nesom – (syns. Mimulus acutidens and M. grayi)
Erythranthe inflata (Miq.) G.L.Nesom – (Asia)
Erythranthe inflatula (Suksd.) G.L.Nesom
Erythranthe jungermannioides (Suksd.) G.L.Nesom
Erythranthe karakormiana (Yamazaki) G.L.Nesom – (Asia)
Erythranthe laciniata (A.Gray) G.L.Nesom
Erythranthe lagunensis G.L.Nesom
Erythranthe latidens (Greene) G.L.Nesom – broadtooth monkey-flower (southern California, Baja California)
Erythranthe lewisii (Pursh) G.L.Nesom & N.S.Fraga – great purple monkey-flower, Lewis' monkey-flower (Alaska to California to Colorado)
Erythranthe linearifolia (A.L.Grant) G.L.Nesom & N.S.Fraga
Erythranthe lutea (L.) G.L.Nesom – yellow monkey-flower, monkey musk, blotched monkey-flower, and blood-drop-emlets (North and South America, naturalized in Britain)
Erythranthe madrensis (Seem.) G.L.Nesom
Erythranthe marmorata (Greene) G.L.Nesom
Erythranthe michiganensis (Pennell) G.L.Nesom – Michigan monkey-flower (Michigan)
Erythranthe microphylla (Benth.) G.L.Nesom
Erythranthe minima (C.Bohlen) J.M.Watson & A.R.Flores – (Michoacan, Mexico)
Erythranthe minor (A. Nelson) G.L.Nesom
Erythranthe montioides (A.Gray) N.S.Fraga – montia-like monkey-flower (California, Nevada)
Erythranthe moschata (Douglas ex Lindl.) G.L.Nesom – (North and South America, naturalized in Britain and Finland)
Erythranthe naiandina (J.M.Watson & C.Bohlen) G.L.Nesom
Erythranthe nasuta (Greene) G.L.Nesom
Erythranthe nelsonii (A.L.Grant) G.L.Nesom & N.S.Fraga – listed by Nesom in 2014 as a synonym of Erythranthe verbenacea
Erythranthe nepalensis (Benth.) G.L.Nesom (Asia)
Erythranthe norrisii (Heckard & Shevock) G.L.Nesom
Erythranthe nudata (Curran ex Greene) G.L.Nesom
Erythranthe orizabae (Benth.) G.L.Nesom – (Mexico)
Erythranthe pallens (Greene) G.L.Nesom
Erythranthe palmeri (A.Gray) N.S.Fraga – Palmer's monkey-flower (central California south to Baja California)
Erythranthe pardalis (Pennell) G.L.Nesom
Erythranthe parishii (Greene) G.L.Nesom & N.S.Fraga – Parish's monkey-flower (southern California, western Nevada, Baja California)
Erythranthe parvula (Wooton & Standl.) G.L.Nesom
Erythranthe patula (Pennell) G.L.Nesom
Erythranthe pennellii (Gentry) G.L.Nesom
Erythranthe percaulis G.L.Nesom
Erythranthe platyphylla (Franch.) G.L.Nesom – (Asia)
Erythranthe plotocalyx G.L.Nesom
Erythranthe primuloides (Benth.) G.L.Nesom & N.S.Fraga – primrose monkey-flower (WA, OR, CA, ID, NV, UT, AZ, MT, NM)
Erythranthe procera (A.L.Grant) G.L.Nesom – (Asia)
Erythranthe ptilota G.L.Nesom
Erythranthe pulsiferae (A.Gray) G.L.Nesom – candelabrum monkey-flower (Washington to northern California)
Erythranthe purpurea (A.L.Grant) N.S.Fraga – little purple monkey-flower (southern California, Baja California)
Erythranthe regni G.L.Nesom
Erythranthe rhodopetra N.S.Fraga
Erythranthe rubella (A.Gray) N.S.Fraga – little redstem monkey-flower (CA, NV, UT, WY, CO, NM, TX)
Erythranthe rupestris (Greene) G.L.Nesom & N.S.Fraga
Erythranthe scouleri (Hook.) G.L.Nesom
Erythranthe serpentinicola D.J.Keil
Erythranthe sessilifolia (Maxim.) G.L.Nesom – (Asia)
Erythranthe shevockii (Heckard & Bacig.) N.S.Fraga – Kelso Creek monkey-flower (Kern County, California)
Erythranthe sierrae N.S.Fraga
Erythranthe sinoalba G.L.Nesom – (Asia)
Erythranthe sookensis B.G. Benedict – newly discovered 2012, originally named M. sookensis (British Columbia to northern California)
Erythranthe stolonifera (Novopokr.) G.L.Nesom – (Russia)
Erythranthe suksdorfii (A.Gray) N.S.Fraga – Suksdorf's monkey-flower and miniature monkey-flower (Washington, Oregon, California, Idaho, Montana, Wyoming, Colorado, Nevada, Utah, Arizona, New Mexico)
Erythranthe szechuanensis (Pai) G.L.Nesom – (Asia)
Erythranthe taylorii G.L.Nesom
Erythranthe tenella (Bunge) G.L.Nesom – (Asia)
Erythranthe thermalis (A. Nelson) G.L.Nesom – (Yellowstone National Park)
Erythranthe tibetica (P.C.Tsoong & H.P.Yang) G.L.Nesom – (Asia)
Erythranthe tilingii (Regel) G.L.Nesom – large mountain monkey-flower, Tiling's monkey-flower (Alaska to New Mexico)
Erythranthe trinitiensis G.L.Nesom
Erythranthe unimaculata (Pennell) G.L.Nesom
Erythranthe utahensis (Pennell) G.L.Nesom
Erythranthe verbenacea (Greene) G.L.Nesom & N.S.Fraga
Erythranthe veronicifolia (Greene) G.L.Nesom
Erythranthe visibilis G.L.Nesom
Erythranthe washingtonensis (Gand.) G.L.Nesom
Erythranthe willisii G.L.Nesom
Hybrids:
Erythranthe × burnetii (S.Arn.) Silverside
Erythranthe × hybrida (Voss) Silverside
Erythranthe × maculosa (T.Moore) Mabb.
Erythranthe × robertsii (Silverside) G.L.Nesom, syn. Erythranthe peregrina (M. Vallejo-Marin) G.L.Nesom – newly discovered 2012, originally named M. peregrinus (Scotland)
Species sectionally
In a 2014 paper, G. L. Nesom and N. S. Fraga placed Erythranthe members into the following 12 sections (unless listed below as "newly discovered"). Names accepted are from Plants of the World Online.
Erythranthe sect. Simiolus
Erythranthe arenicola (Pennell) G.L.Nesom
Erythranthe arvensis (Greene) G.L.Nesom
Erythranthe brachystylis (Edwin) G.L.Nesom
Erythranthe brevinasuta G.L.Nesom
Erythranthe caespitosa (Greene) G.L.Nesom
Erythranthe calciphila (Gentry) G.L.Nesom
Erythranthe charlestonensis G.L.Nesom
Erythranthe chinatiensis G.L.Nesom
Erythranthe corallina (Greene) G.L.Nesom
Erythranthe cordata (Greene) G.L.Nesom
Erythranthe decora (A.L.Grant) G.L.Nesom
Erythranthe diminuens G.L.Nesom – newly discovered in 2017 and added to this list (Sonora, Mexico)
Erythranthe dentiloba (B.L.Rob. & Fernald) G.L.Nesom
Erythranthe filicifolia (Sexton, K.G.Ferris & Schoenig) G.L.Nesom
Erythranthe geyeri (Torr.) G.L.Nesom
Erythranthe glabrata (Kunth) G.L.Nesom – roundleaf monkey-flower (widespread in North America, Mesoamerica and South America)
Erythranthe glaucescens (Greene) G.L.Nesom – shieldbract monkey-flower (California)
Erythranthe grandis (Greene) G.L.Nesom
Erythranthe guttata (Fisch. ex DC.) G.L.Nesom – common large monkey-flower, common monkey-flower, stream monkey-flower, seep monkey-flower (AK, AZ, CA, CO, CT, DE, ID, MI, MT, ND, NE, NM, NV, NY, OR, PA, SD, UT, WA, WY; Canada: BC, Yukon; Mexico to Guatemala; naturalized in Britain)
Erythranthe hallii (Greene) G.L.Nesom
Erythranthe inamoena (Greene) G.L.Nesom
Erythranthe laciniata (A.Gray) G.L.Nesom
Erythranthe lagunensis G.L.Nesom
Erythranthe madrensis (Seem.) G.L.Nesom
Erythranthe marmorata (Greene) G.L.Nesom
Erythranthe michiganensis (Pennell) G.L.Nesom – Michigan monkey-flower (Michigan)
Erythranthe microphylla (Benth.) G.L.Nesom
Erythranthe minima (C.Bohlen) J.M.Watson & A.R.Flores – (Michoacan, Mexico)
Erythranthe minor (A. Nelson) G.L.Nesom
Erythranthe nasuta (Greene) G.L.Nesom
Erythranthe nudata (Curran ex Greene) G.L.Nesom
Erythranthe pallens (Greene) G.L.Nesom
Erythranthe pardalis (Pennell) G.L.Nesom
Erythranthe parvula (Wooton & Standl.) G.L.Nesom
Erythranthe pennellii (Gentry) G.L.Nesom
Erythranthe percaulis G.L.Nesom
Erythranthe peregrina M. Vallejo-Marin, synonym of Erythranthe × robertsii – newly discovered 2012, originally named M. peregrinus (Scotland)
Erythranthe regni G.L.Nesom
Erythranthe scouleri (Hook.) G.L.Nesom
Erythranthe sookensis B.G. Benedict – originally named M. sookensis (British Columbia to northern California)
Erythranthe thermalis (A. Nelson) G.L.Nesom – (Yellowstone National Park)
Erythranthe tilingii (Regel) G.L.Nesom – large mountain monkey-flower, Tiling's monkey-flower (Alaska to New Mexico)
Erythranthe unimaculata (Pennell) G.L.Nesom
Erythranthe utahensis (Pennell) G.L.Nesom
Erythranthe visibilis G.L.Nesom
(South America)
Erythranthe acaulis (Phil.) G.L.Nesom, synonym of Erythranthe depressa var. depressa
Erythranthe andicola (Kunth) G.L.Nesom, synonym of Erythranthe glabrata
Erythranthe cuprea (Dombrain) G.L.Nesom – Flor de cobre (Eng: copper flower) (central and southern Chile)
Erythranthe depressa (Phil.) G.L.Nesom
Erythranthe lacerata (Pennell) G.L.Nesom, synonym of Erythranthe lutea var. lutea
Erythranthe lutea (L.) G.L.Nesom – yellow monkey-flower, monkey musk, blotched monkey-flower, and blood-drop-emlets (North and South America, naturalized in Britain)
Erythranthe naiandina (J.M.Watson & C.Bohlen) G.L.Nesom
Erythranthe parviflora (Lindl.) G.L.Nesom
Erythranthe pilosiuscula (Kunth) G.L.Nesom, synonym of Erythranthe glabrata
Erythranthe sect. Erythranthe
Erythranthe cardinalis (Douglas ex Benth.) Spach – scarlet monkey-flower (southwestern United States and Baja California)
Erythranthe cinnabarina G.L.Nesom
Erythranthe eastwoodiae (Rydb.) G.L.Nesom & N.S.Fraga
Erythranthe erubescens G.L.Nesom
Erythranthe flammea G.L.Nesom
Erythranthe lewisii (Pursh) G.L.Nesom & N.S.Fraga – great purple monkey-flower, Lewis' monkey-flower (Alaska to California to Colorado)
Erythranthe nelsonii (A.L.Grant) G.L.Nesom & N.S.Fraga – listed by Nesom in 2014 as a synonym of Erythranthe verbenacea
Erythranthe parishii (Greene) G.L.Nesom & N.S.Fraga – Parish's monkey-flower (southern California, western Nevada, Baja California)
Erythranthe rupestris (Greene) G.L.Nesom & N.S.Fraga
Erythranthe verbenacea (Greene) G.L.Nesom & N.S.Fraga
Erythranthe sect. Mimulosma
Erythranthe ampliata (A.L.Grant) G.L.Nesom
Erythranthe arenaria (A.L.Grant) G.L.Nesom
Erythranthe austrolatidens G.L.Nesom
Erythranthe breviflora (Piper) G.L.Nesom – (British Columbia to California to Wyoming)
Erythranthe floribunda (Douglas ex Lindl.) G.L.Nesom – manyflowered monkey-flower (western Canada, Pacific Coast, Rocky Mountains, northern Mexico)
Erythranthe geniculata (Greene) G.L.Nesom
Erythranthe hymenophylla (Meinke) G.L.Nesom
Erythranthe inflatula (Suksd.) G.L.Nesom
Erythranthe inodora (Greene) G.L.Nesom, synonym of Erythranthe moschata
Erythranthe jungermannioides (Suksd.) G.L.Nesom
Erythranthe latidens (Greene) G.L.Nesom – broadtooth monkey-flower (southern California, Baja California)
Erythranthe moniliformis (Greene) G.L.Nesom, synonym of Erythranthe moschata
Erythranthe moschata (Douglas ex Lindl.) G.L.Nesom – (North and South America, naturalized in Britain and Finland)
Erythranthe norrisii (Heckard & Shevock) G.L.Nesom
Erythranthe patula (Pennell) G.L.Nesom
Erythranthe pulsiferae (A.Gray) G.L.Nesom – candelabrum monkey-flower (Washington to northern California)
Erythranthe taylorii G.L.Nesom
Erythranthe trinitiensis G.L.Nesom
Erythranthe washingtonensis (Gand.) G.L.Nesom
Erythranthe stolonifera (Novopokr.) G.L.Nesom – (Russia)
Erythranthe sect. Achlyopitheca
Erythranthe acutidens (Greene) G.L.Nesom
Erythranthe grayi (A.L.Grant) G.L.Nesom
Erythranthe inconspicua (A.Gray) G.L.Nesom – (syns. Mimulus acutidens and M. grayi)
Erythranthe sect. Paradantha
Erythranthe androsacea (Curran ex Greene) N.S.Fraga – rockjasmine monkey-flower (California)
Erythranthe barbata (Greene) N.S.Fraga
Erythranthe calcicola N.S.Fraga
Erythranthe carsonensis N.S.Fraga – Carson Valley monkey-flower (California and Nevada)
Erythranthe diffusa (A.L.Grant) N.S.Fraga
Erythranthe discolor (A.L.Grant) N.S.Fraga
Erythranthe gracilipes (B.L.Rob.) N.S.Fraga – slenderstalk monkey-flower (California)
Erythranthe hardhamiae N.S.Fraga
Erythranthe montioides (A.Gray) N.S.Fraga – montia-like monkey-flower (California, Nevada)
Erythranthe palmeri (A.Gray) N.S.Fraga – Palmer's monkey-flower (central California south to Baja California)
Erythranthe purpurea (A.L.Grant) N.S.Fraga – little purple monkey-flower (southern California, Baja California)
Erythranthe rhodopetra N.S.Fraga
Erythranthe rubella (A.Gray) N.S.Fraga – little redstem monkey-flower (CA, NV, UT, WY, CO, NM, TX)
Erythranthe shevockii (Heckard & Bacig.) N.S.Fraga – Kelso Creek monkey-flower (Kern County, California)
Erythranthe sierrae N.S.Fraga
Erythranthe suksdorfii (A.Gray) N.S.Fraga – Suksdorf's monkey-flower and miniature monkey-flower (Washington, Oregon, California, Idaho, Montana, Wyoming, Colorado, Nevada, Utah, Arizona, New Mexico)
Erythranthe sect. Monantha
Erythranthe linearifolia (A.L.Grant) G.L.Nesom & N.S.Fraga
Erythranthe primuloides (Benth.) G.L.Nesom & N.S.Fraga – primrose monkey-flower (WA, OR, CA, ID, NV, UT, AZ, MT, NM)
Erythranthe sect. Monimanthe
Erythranthe bicolor (Hartw. ex Benth.) G.L.Nesom & N.S.Fraga – yellow and white monkey-flower (California)
Erythranthe breweri (Greene) G.L.Nesom & N.S.Fraga – Brewer's monkey-flower (British Columbia to California to Colorado)
Erythranthe filicaulis (S.Watson) G.L.Nesom & N.S.Fraga – slender-stemmed monkey-flower (California)
Erythranthe sect. Alsinimimulus
Erythranthe alsinoides (Douglas ex Benth.) G.L.Nesom & N.S.Fraga – chickweed monkey-flower (British Columbia to northern California)
Erythranthe sect. Simigemma
Erythranthe gemmipara (W.A.Weber) G.L.Nesom & N.S.Fraga – Rocky Mountain monkey-flower (Colorado)
Erythranthe sect. Exigua
Erythranthe exigua (A.Gray) G.L.Nesom & N.S.Fraga – San Bernardino Mountains monkey-flower (southern California, Baja California)
Erythranthe sect. Sinopitheca
Erythranthe bracteosa (P.C.Tsoong) G.L.Nesom – (Asia)
Erythranthe bridgesii (Benth.) G.L.Nesom – (South America)
Erythranthe platyphylla (Franch.) G.L.Nesom – (Asia)
Erythranthe sessilifolia (Maxim.) G.L.Nesom – (Asia)
Erythranthe tibetica (P.C.Tsoong & H.P.Yang) G.L.Nesom – (Asia)
Erythranthe sect. Mimulasia
Erythranthe dentata (Nutt. ex Benth.) G.L.Nesom – toothleaf monkey-flower, coastal monkey-flower (British Columbia to northern California)
Erythranthe orizabae (Benth.) G.L.Nesom – (Mexico)
Erythranthe bhutanica (Yamazaki) G.L.Nesom – (Asia)
Erythranthe bodinieri (Vaniot) G.L.Nesom – (Asia)
Erythranthe inflata (Miq.) G.L.Nesom – (Asia)
Erythranthe karakormiana (Yamazaki) G.L.Nesom – (Asia)
Erythranthe nepalensis (Benth.) G.L.Nesom – (Asia)
Erythranthe procera (A.L.Grant) G.L.Nesom – (Asia)
Erythranthe sinoalba G.L.Nesom – (Asia)
Erythranthe szechuanensis (Pai) G.L.Nesom – (Asia)
Erythranthe tenella (Bunge) G.L.Nesom – (Asia)
Reproductive biology
Before recognition of E. cinnabarina as a species, E. lewisii was interpreted to be the sister of E. cardinalis. It is now clear that E. cinnabarina and E. cardinalis are sister species and that E. lewisii and E. erubescens are sister species. In the hypothesized phylogeny, the 'cinnabarina/cardinalis' pair is sister to the 'lewisii/erubescens' pair.
Erythranthe lewisii is a model system for studying pollinator-based reproductive isolation. E. lewisii is pollinated by bees, primarily Bombus and Osmia, which feed on its nectar and transfer its pollen. Although it is fully interfertile with its sister species E. cardinalis, the two do not interbreed in the wild, a difference ascribed primarily to pollinator differences; E. cardinalis is pollinated by hummingbirds, especially Calypte anna and Selasphorus rufus. Evidence strongly linking pollination preference to color differences between the species was previously reported, but this has been disproven. E. erubescens is mostly pollinated by Bombus balteatus, B. centralis, B. flavifrons, and B. vosnesenskii.
Erythranthe parishii is also closely related to E. lewisii, but it has evolved in a different direction as a self-pollinated species with small flowers.
E. eastwoodiae, E. nelsonii, E. rupestris, and E. verbenacea are also pollinated by hummingbirds. These four species, as well as E. cardinalis, produce bisexual flowers and are self-compatible. This approximate ratio of insect to hummingbird pollination holds true for the rest of the genus. There have been two separate transitions to hummingbird pollination. Shifts in pollinator are strongly influenced by changes in flower morphology. E. cardinalis and its sister species E. cinnabarina likely evolved via allopatric speciation.
Erythranthe guttata is pollinated by bees, such as Bombus impatiens. Inbreeding reduces flower quantity and size and pollen quality and quantity. E. guttata also displays a high degree of self-pollination. Erythranthe nasuta evolved from E. guttata in central California between 200,000 and 500,000 years ago and since then has become primarily a self-pollinator.
Distribution and habitat
Over 80% of Erythranthe species are found in western North America, especially California, Oregon, and Washington. Genus members are also found in Baja California, Alaska, British Columbia, Nevada, Utah, Idaho, Montana, Wyoming, Colorado, Arizona, New Mexico, and to a lesser extent the midwestern states, northeastern states, Canada, and Latin America. Members of this genus are also found in eastern Asia, several of which closely resemble some of the species found in North and South America.
A large number of the species grow in moist to wet soils, with some growing even in shallow water. Most are not very drought resistant, although the species now classified in Diplacus are. Some species grow in dry areas, while others, such as members of the section Simiolus, are hydrophilic and grow in wet habitats. Both overall plant size and corolla size vary greatly throughout the genus. At least 25 of the species are listed as threatened by the International Union for Conservation of Nature. Species are found at elevations from the oceanside to high mountains, and in a wide variety of climates, though most prefer wet areas such as riverbanks.
Pests and diseases
Diplacus, Erythranthe, and Mimulus are subject to a very similar set of pests and diseases. Pests these genera are susceptible to include gall midges, golden mealybugs, thrips, and seed bugs. Diseases they are susceptible to include crown gall, aster yellows phytoplasma, impatiens necrotic spot virus (INSV), leaf spots, powdery mildew (especially Erysiphe brunneopunctata and Erysiphe cichoracearum), botrytis blight, pythium root rot, rusts, and cucumber mosaic virus (CMV), as well as mineral and nutrient deficiencies.
Human culture
Horticulture
In horticulture, several species, cultivars and hybrids are used. Because of their wide range and many variations, the most important are those derived from E. guttata and E. lutea. E. cuprea alone has at least 10 cultivars and hybrids.
Culinary uses
Erythranthe species tend to concentrate sodium chloride and other salts absorbed from the soils in which they grow in their leaves and stem tissues. Native Americans and early travelers in the American West used this plant as a salt substitute to season wild game. The entire plant is edible, but reported to be very salty and bitter unless well cooked. The juice from the leaves was used as a poultice for mild skin irritations and burns. Leaves can be used in salads and soups; flowers taste best before blooming. E. lutea has been used for cooking in Peru.
Alternative medical use
Erythranthe has been listed as one of the 38 plants that are used to prepare Bach flower remedies, a kind of alternative medicine promoted for its effect on health. However, according to Cancer Research UK, "there is no scientific evidence to prove that flower remedies can control, cure or prevent any type of disease, including cancer".
References
External links
Lamiales genera
Plant models
Open mapping theorem (complex analysis)
In complex analysis, the open mapping theorem states that if U is a domain of the complex plane C and f : U → C is a non-constant holomorphic function, then f is an open map (i.e. it sends open subsets of U to open subsets of C, and we have invariance of domain).
The open mapping theorem points to the sharp difference between holomorphy and real-differentiability. On the real line, for example, the differentiable function f(x) = x^2 is not an open map, as the image of the open interval (−1, 1) is the half-open interval [0, 1).
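This real counterexample can be checked numerically. The sketch below assumes the function and interval of the example above, f(x) = x^2 on (−1, 1), and observes that the infimum 0 of the image is attained at an interior point, so the image contains 0 but no neighborhood of 0 and cannot be open.

```python
# Numerical illustration: x -> x**2 maps the open interval (-1, 1)
# onto [0, 1), which is not an open subset of the real line.

def f(x):
    return x * x

# Sample the open interval (-1, 1), excluding the endpoints.
n = 10001
samples = [-1 + 2 * (k + 1) / (n + 1) for k in range(n)]
image = [f(x) for x in samples]

# 0 is in the image (attained at x = 0), so the image is not open:
# no neighborhood of 0 stays inside [0, 1).
assert min(image) == 0.0
assert max(image) < 1.0   # 1 itself is never attained
```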
The theorem implies, for example, that a non-constant holomorphic function cannot map an open disk onto a portion of any line embedded in the complex plane. Images of holomorphic functions can be of real dimension zero (if constant) or two (if non-constant), but never of dimension one.
Proof
Assume f : U → C is a non-constant holomorphic function and U is a domain of the complex plane. We have to show that every point in f(U) is an interior point of f(U), i.e. that every point in f(U) has a neighborhood (open disk) which is also in f(U).
Consider an arbitrary w0 in f(U). Then there exists a point z0 in U such that w0 = f(z0). Since U is open, we can find d > 0 such that the closed disk B around z0 with radius d is fully contained in U. Consider the function g(z) = f(z) − w0. Note that z0 is a root of the function.
We know that g is non-constant and holomorphic. The roots of g are isolated by the identity theorem, and by further decreasing the radius d of the disk B, we can assure that g has only a single root in B (although this single root may have multiplicity greater than 1).
The boundary of B is a circle and hence a compact set, on which |g(z)| is a positive continuous function, so the extreme value theorem guarantees the existence of a positive minimum e; that is, e is the minimum of |g(z)| for z on the boundary of B, and e > 0.
Denote by D the open disk around w0 with radius e. By Rouché's theorem, the function g(z) = f(z) − w0 will have the same number of roots (counted with multiplicity) in B as h(z) := f(z) − w1 for any w1 in D. This is because h(z) = g(z) + (w0 − w1), and for z on the boundary of B, |g(z)| ≥ e > |w0 − w1|. Thus, for every w1 in D, there exists at least one z1 in B such that f(z1) = w1. This means that the disk D is contained in f(B).
The image of the ball B, f(B), is a subset of the image of U, f(U). Thus w0 is an interior point of f(U). Since w0 was arbitrary in f(U), we know that f(U) is open. Since U was arbitrary, the function f is open.
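The counting step of the proof (that f(z) − w1 has a root inside the disk for every w1 close to f(z0)) can be illustrated numerically via the argument principle, which counts roots inside a contour by a winding-number integral. The function f(z) = z^3 + z, the center 0, the radius 0.5, and the step count below are illustrative choices for this sketch, not part of the proof.

```python
import cmath

# Argument principle: the number of roots of f(z) - w inside |z| < d
# equals (1 / 2*pi*i) * integral of f'(z) / (f(z) - w) around |z| = d.

def f(z):
    return z**3 + z

def fprime(z):
    return 3 * z**2 + 1

def roots_inside(w, d=0.5, steps=4000):
    """Numerically count roots of f(z) = w in the disk |z| < d."""
    total = 0j
    for k in range(steps):
        t0 = 2 * cmath.pi * k / steps
        t1 = 2 * cmath.pi * (k + 1) / steps
        z0, z1 = d * cmath.exp(1j * t0), d * cmath.exp(1j * t1)
        g0 = fprime(z0) / (f(z0) - w)
        g1 = fprime(z1) / (f(z1) - w)
        total += 0.5 * (g0 + g1) * (z1 - z0)   # trapezoid rule
    return round((total / (2j * cmath.pi)).real)

# w0 = f(0) = 0 has exactly one preimage in the disk, and every w1
# in a small disk around w0 is also attained there.
assert roots_inside(0.0) == 1
assert all(roots_inside(0.1 * cmath.exp(1j * a)) >= 1
           for a in [0.0, 1.0, 2.0, 3.0])
```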
Applications
Maximum modulus principle
Rouché's theorem
Schwarz lemma
See also
Open mapping theorem (functional analysis)
References
Theorems in complex analysis
Articles containing proofs
Nitromemantine
Nitromemantine (developmental code name YQW-36) is a derivative of memantine developed in 2006 for the treatment of Alzheimer's disease. It has been shown to reduce excitotoxicity mediated by over-activation of the glutamatergic system, by blocking NMDA receptors.
Pharmacology
Like memantine, nitromemantine is a low-affinity, voltage-dependent, uncompetitive antagonist at glutamatergic NMDA receptors. However, nitromemantine selectively inhibits extrasynaptic NMDA receptors while sparing normal physiological synaptic NMDA receptor activity, resulting in fewer side effects and a greater neuroprotective action, as well as stimulating regrowth of synapses with prolonged administration. The discoverers of nitromemantine have demonstrated that the amyloid-β peptide associated with Alzheimer's disease acts as an agonist at α7 nicotinic acetylcholine receptors, chronic overstimulation of which results in uncontrolled release of glutamate and consequent excitotoxicity. By blocking extrasynaptic NMDA receptors, nitromemantine is able to largely prevent this excitotoxicity while minimising the side effects usually seen with less selective NMDA antagonists. The nitrate group of nitromemantine was found to bind to a second site on the extrasynaptic NMDA receptor which had previously been targeted with nitroglycerin, and this double action is thought to be responsible for the increased effectiveness of nitromemantine.
See also
Memantine
Neramexane
Amantadine
References
Adamantanes
Amines
Antidementia agents
NMDA receptor antagonists
Nitrate esters
Cycle basis
In graph theory, a branch of mathematics, a cycle basis of an undirected graph is a set of simple cycles that forms a basis of the cycle space of the graph. That is, it is a minimal set of cycles that allows every even-degree subgraph to be expressed as a symmetric difference of basis cycles.
A fundamental cycle basis may be formed from any spanning tree or spanning forest of the given graph, by selecting the cycles formed by the combination of a path in the tree and a single edge outside the tree. Alternatively, if the edges of the graph have positive weights, the minimum weight cycle basis may be constructed in polynomial time.
In planar graphs, the set of bounded cycles of an embedding of the graph forms a cycle basis. The minimum weight cycle basis of a planar graph corresponds to the Gomory–Hu tree of the dual graph.
Definitions
A spanning subgraph of a given graph G has the same set of vertices as G itself but, possibly, fewer edges. A graph G, or one of its subgraphs, is said to be Eulerian if each of its vertices has even degree (its number of incident edges). Every simple cycle in a graph is an Eulerian subgraph, but there may be others. The cycle space of a graph is the collection of its Eulerian subgraphs. It forms a vector space over the two-element finite field. The vector addition operation is the symmetric difference of two or more subgraphs, which forms another subgraph consisting of the edges that appear an odd number of times in the arguments to the symmetric difference operation.
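As a minimal sketch of this vector addition, a subgraph can be represented by its edge set, making the GF(2) sum a plain symmetric difference of Python sets. The two triangles below are an illustrative choice: they share one edge, which appears an even number of times and cancels.

```python
from collections import Counter

# Cycle-space addition over GF(2): subgraphs are edge sets, and the
# sum of two subgraphs is the symmetric difference of their edges.

def edge(u, v):
    return frozenset((u, v))      # undirected edge

# Two triangles sharing the edge {1, 2}.
triangle_a = {edge(0, 1), edge(1, 2), edge(2, 0)}
triangle_b = {edge(1, 2), edge(2, 3), edge(3, 1)}

# The shared edge cancels, leaving the 4-cycle 0-1-3-2-0.
total = triangle_a ^ triangle_b
assert total == {edge(0, 1), edge(1, 3), edge(3, 2), edge(2, 0)}

# Every vertex of the sum has even degree, so it is Eulerian.
deg = Counter(v for e in total for v in e)
assert all(d % 2 == 0 for d in deg.values())
```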
A cycle basis is a basis of this vector space in which each basis vector represents a simple cycle. It consists of a set of cycles that can be combined, using symmetric differences, to form every Eulerian subgraph, and that is minimal with this property. Every cycle basis of a given graph has the same number of cycles, which equals the dimension of its cycle space. This number is called the circuit rank of the graph, and it equals m − n + c, where m is the number of edges in the graph, n is the number of vertices, and c is the number of connected components.
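The circuit rank m − n + c can be computed directly; the sketch below (graph chosen arbitrarily for illustration) counts connected components with a small union-find while reading the edges:

```python
def circuit_rank(vertices, edges):
    """Dimension m - n + c of the cycle space of an undirected graph."""
    parent = {v: v for v in vertices}

    def find(v):                       # union-find with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    c = len(vertices)                  # components, merged as edges arrive
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            c -= 1
    return len(edges) - len(vertices) + c

# A triangle plus an isolated edge: m = 4, n = 5, c = 2, rank = 1.
assert circuit_rank({0, 1, 2, 3, 4},
                    [(0, 1), (1, 2), (2, 0), (3, 4)]) == 1
```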
Special cycle bases
Several special types of cycle bases have been studied, including the fundamental cycle bases, weakly fundamental cycle bases, sparse (or 2-) cycle bases, and integral cycle bases.
Induced cycles
Every graph has a cycle basis in which every cycle is an induced cycle. In a 3-vertex-connected graph, there always exists a basis consisting of peripheral cycles, cycles whose removal does not separate the remaining graph. In any graph other than one formed by adding one edge to a cycle, a peripheral cycle must be an induced cycle.
Fundamental cycles
If T is a spanning tree or spanning forest of a given graph G, and e is an edge that does not belong to T, then the fundamental cycle defined by e is the simple cycle consisting of e together with the path in T connecting the endpoints of e. There are exactly m − n + c fundamental cycles, one for each edge that does not belong to T. Each of them is linearly independent from the remaining cycles, because it includes an edge e that is not present in any other fundamental cycle. Therefore, the fundamental cycles form a basis for the cycle space. A cycle basis constructed in this way is called a fundamental cycle basis or strongly fundamental cycle basis.
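A minimal sketch of this construction (graph and vertex labels chosen for illustration, connected case only): build a BFS spanning tree, then join each non-tree edge with the tree path between its endpoints.

```python
from collections import deque

def fundamental_cycles(n, edges):
    """Fundamental cycle basis of a connected graph on vertices 0..n-1."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    # Build a BFS spanning tree rooted at vertex 0.
    parent = [None] * n
    parent[0] = 0
    queue = deque([0])
    tree = set()
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if parent[v] is None:
                parent[v] = u
                tree.add(frozenset((u, v)))
                queue.append(v)

    def tree_path(u, v):
        """Edge set of the unique u-v path in the spanning tree."""
        ancestors = []
        x = u
        while x != 0:
            ancestors.append(x)
            x = parent[x]
        ancestors.append(0)
        ancestor_set = set(ancestors)
        path = set()
        x = v
        while x not in ancestor_set:     # climb from v to the LCA
            path.add(frozenset((x, parent[x])))
            x = parent[x]
        for y in ancestors:              # climb from u to the LCA
            if y == x:
                break
            path.add(frozenset((y, parent[y])))
        return path

    # One fundamental cycle per non-tree edge: the edge + the tree path.
    return [tree_path(u, v) | {frozenset((u, v))}
            for u, v in edges if frozenset((u, v)) not in tree]

# Square 0-1-2-3 with diagonal 0-2: m - n + 1 = 5 - 4 + 1 = 2 cycles.
basis = fundamental_cycles(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])
assert len(basis) == 2
# Every basis element is Eulerian: each vertex has even degree.
for cycle in basis:
    degrees = {}
    for e in cycle:
        for vert in e:
            degrees[vert] = degrees.get(vert, 0) + 1
    assert all(d % 2 == 0 for d in degrees.values())
```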
It is also possible to characterize fundamental cycle bases without specifying the tree for which they are fundamental. There exists a tree for which a given cycle basis is fundamental if and only if each cycle contains an edge that is not included in any other basis cycle; that is, each cycle is independent of the others. It follows that a collection of cycles is a fundamental cycle basis of G if and only if it has this independence property and contains the correct number of cycles to be a basis of the cycle space of G.
Weakly fundamental cycles
A cycle basis is called weakly fundamental if its cycles can be placed into a linear ordering such that each cycle includes at least one edge that is not included in any earlier cycle. A fundamental cycle basis is automatically weakly fundamental (for any edge ordering). If every cycle basis of a graph is weakly fundamental, the same is true for every minor of the graph. Based on this property, the class of graphs (and multigraphs) for which every cycle basis is weakly fundamental can be characterized by five forbidden minors: the graph of the square pyramid, the multigraph formed by doubling all edges of a four-vertex cycle, two multigraphs formed by doubling two edges of a tetrahedron, and the multigraph formed by tripling the edges of a triangle.
Face cycles
If a connected finite planar graph is embedded into the plane, each face of the embedding is bounded by a cycle of edges. One face is necessarily unbounded (it includes points arbitrarily far from the vertices of the graph) and the remaining faces are bounded. By Euler's formula for planar graphs, there are exactly m − n + 1 bounded faces.
The symmetric difference of any set of face cycles is the boundary of the corresponding set of faces, and different sets of bounded faces have different boundaries, so it is not possible to represent the same set as a symmetric difference of face cycles in more than one way; this means that the set of face cycles is linearly independent. As a linearly independent set of m − n + 1 cycles, it necessarily forms a cycle basis. It is always a weakly fundamental cycle basis, and is fundamental if and only if the embedding of the graph is outerplanar.
For graphs properly embedded onto other surfaces so that all faces of the embedding are topological disks, it is not in general true that there exists a cycle basis using only face cycles. The face cycles of these embeddings generate a proper subset of all Eulerian subgraphs. The homology group of the given surface characterizes the Eulerian subgraphs that cannot be represented as the boundary of a set of faces. Mac Lane's planarity criterion uses this idea to characterize the planar graphs in terms of the cycle bases: a finite undirected graph is planar if and only if it has a sparse cycle basis or 2-basis, a basis in which each edge of the graph participates in at most two basis cycles. In a planar graph, the cycle basis formed by the set of bounded faces is necessarily sparse, and conversely, a sparse cycle basis of any graph necessarily forms the set of bounded faces of a planar embedding of its graph.
Integral bases
The cycle space of a graph may be interpreted using the theory of homology as the first homology group H1(X, Z2) of a simplicial complex X with a point for each vertex of the graph and a line segment for each edge of the graph. This construction may be generalized to the homology group H1(X, R) over an arbitrary ring R. An important special case is the ring Z of integers, for which the homology group H1(X, Z) is a free abelian group, a subgroup of the free abelian group generated by the edges of the graph. Less abstractly, this group can be constructed by assigning an arbitrary orientation to the edges of the given graph; then the elements of H1(X, Z) are labelings of the edges of the graph by integers with the property that, at each vertex, the sum of the incoming edge labels equals the sum of the outgoing edge labels. The group operation is addition of these vectors of labels. An integral cycle basis is a set of simple cycles that generates this group.
Minimum weight
If the edges of a graph are given real number weights, the weight of a subgraph may be computed as the sum of the weights of its edges. The minimum weight basis of the cycle space is necessarily a cycle basis: by Veblen's theorem, every Eulerian subgraph that is not itself a simple cycle can be decomposed into multiple simple cycles, which necessarily have smaller weight.
By standard properties of bases in vector spaces and matroids, the minimum weight cycle basis not only minimizes the sum of the weights of its cycles, it also minimizes any other monotonic combination of the cycle weights. For instance, it is the cycle basis that minimizes the weight of its longest cycle.
Polynomial time algorithms
In any vector space, and more generally in any matroid, a minimum weight basis may be found by a greedy algorithm that considers potential basis elements one at a time, in sorted order by their weights, and that includes an element in the basis when it is linearly independent of the previously chosen basis elements. Testing for linear independence can be done by Gaussian elimination. However, an undirected graph may have an exponentially large set of simple cycles, so it would be computationally infeasible to generate and test all such cycles.
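The independence test used by this greedy algorithm can be sketched over GF(2), representing each cycle by its set of edge indices so that symmetric difference plays the role of vector addition (an illustrative sketch; the class below is not from the source):

```python
class GF2Independence:
    """Incrementally test linear independence of cycles over GF(2).

    A cycle is given as a set of edge indices; symmetric difference is
    vector addition.  We keep a row-reduced basis keyed by each row's
    smallest edge index (its pivot), as in Gaussian elimination.
    """
    def __init__(self):
        self.pivots = {}  # pivot edge index -> stored reduced row

    def add_if_independent(self, cycle_edges):
        row = set(cycle_edges)
        while row:
            p = min(row)
            if p not in self.pivots:
                self.pivots[p] = row
                return True           # nonzero after reduction: independent
            row ^= self.pivots[p]     # cancel the pivot and keep reducing
        return False                  # reduced to zero: dependent

# two cycles sharing edge 2, then their symmetric difference
indep = GF2Independence()
assert indep.add_if_independent({0, 1, 2})
assert indep.add_if_independent({2, 3, 4})
assert not indep.add_if_independent({0, 1, 3, 4})
```

Sorting candidate cycles by weight and feeding them through `add_if_independent` yields exactly the greedy minimum-basis procedure described above.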
Horton provided the first polynomial time algorithm for finding a minimum weight basis, in graphs for which every edge weight is positive. His algorithm uses this generate-and-test approach, but restricts the generated cycles to a small set of cycles, called Horton cycles. A Horton cycle is a fundamental cycle of a shortest path tree of the given graph. There are at most n different shortest path trees (one for each starting vertex) and each has fewer than m fundamental cycles, giving a bound of mn on the total number of Horton cycles. As Horton showed, every cycle in the minimum weight cycle basis is a Horton cycle.
Using Dijkstra's algorithm to find each shortest path tree and then using Gaussian elimination to perform the testing steps of the greedy basis algorithm leads to a polynomial time algorithm for the minimum weight cycle basis.
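The candidate set of Horton cycles can be generated directly from the definition, one shortest path tree per starting vertex and one fundamental cycle per non-tree edge (an unweighted sketch using breadth-first search in place of Dijkstra's algorithm; all names here are invented):

```python
from collections import deque

def horton_cycles(n, edges):
    """Candidate Horton cycles: fundamental cycles (as edge-index sets)
    of a breadth-first shortest path tree rooted at every vertex."""
    adj = [[] for _ in range(n)]
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))

    candidates = set()
    for src in range(n):
        # shortest path tree from src (unweighted, so BFS suffices)
        parent_edge = [None] * n
        dist = [None] * n
        dist[src] = 0
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v, i in adj[u]:
                if dist[v] is None:
                    dist[v] = dist[u] + 1
                    parent_edge[v] = i
                    queue.append(v)

        def tree_path(x):
            # set of tree-edge indices on the path from x up to src
            path = set()
            while parent_edge[x] is not None:
                i = parent_edge[x]
                path.add(i)
                a, b = edges[i]
                x = b if x == a else a
            return path

        for i, (u, v) in enumerate(edges):
            if u == v or dist[u] is None:
                continue               # skip self-loops and other components
            if i == parent_edge[u] or i == parent_edge[v]:
                continue               # skip tree edges
            cycle = tree_path(u) ^ tree_path(v)  # shared tree edges cancel
            cycle.add(i)
            candidates.add(frozenset(cycle))
    return candidates
```

Running the GF(2) greedy test over these candidates in order of weight gives the minimum weight cycle basis, per Horton's theorem.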
Subsequent researchers have developed improved algorithms for this problem, reducing the worst-case time complexity for finding a minimum weight cycle basis in a graph with m edges and n vertices.
NP-hardness
Finding the fundamental basis with the minimum possible weight is closely related to the problem of finding a spanning tree that minimizes the average of the pairwise distances; both are NP-hard. Finding a minimum weight weakly fundamental basis is also NP-hard, and approximating it is MAXSNP-hard. If negative weights and negatively weighted cycles are allowed, then finding a minimum cycle basis (without restriction) is also NP-hard, as it can be used to find a Hamiltonian cycle: if a graph is Hamiltonian, and all edges are given weight −1, then a minimum weight cycle basis necessarily includes at least one Hamiltonian cycle.
In planar graphs
The minimum weight cycle basis for a planar graph is not necessarily the same as the basis formed by its bounded faces: it can include cycles that are not faces, and some faces may not be included as cycles in the minimum weight cycle basis. However, there exists a minimum weight cycle basis in which no two cycles cross each other: for every two cycles in the basis, either the cycles enclose disjoint subsets of the bounded faces, or one of the two cycles encloses the other one. This set of cycles corresponds, in the dual graph of the given planar graph, to a set of cuts that form a Gomory–Hu tree of the dual graph, the minimum weight basis of its cut space. Based on this duality, an implicit representation of the minimum weight cycle basis in a planar graph can be constructed in polynomial time.
Applications
Cycle bases have been used for solving periodic scheduling problems, such as the problem of determining the schedule for a public transportation system. In this application, the cycles of a cycle basis correspond to variables in an integer program for solving the problem.
In the theory of structural rigidity and kinematics, cycle bases are used to guide the process of setting up a system of non-redundant equations that can be solved to predict the rigidity or motion of a structure. In this application, minimum or near-minimum weight cycle bases lead to simpler systems of equations.
In distributed computing, cycle bases have been used to analyze the number of steps needed for an algorithm to stabilize.
In bioinformatics, cycle bases have been used to determine haplotype information from genome sequence data. Cycle bases have also been used to analyze the tertiary structure of RNA.
The minimum weight cycle basis of a nearest neighbor graph of points sampled from a three-dimensional surface can be used to obtain a reconstruction of the surface.
In cheminformatics, the minimal cycle basis of a molecular graph is referred to as the smallest set of smallest rings.
References
Algebraic graph theory
Profit extraction mechanism

In mechanism design and auction theory, a profit extraction mechanism (also called a profit extractor or revenue extractor) is a truthful mechanism whose goal is to win a pre-specified amount of profit, if it is possible.
Profit extraction in a digital goods auction
Consider a digital goods auction in which a movie producer wants to decide on a price at which to sell copies of his movie. A possible approach is for the producer to decide on a certain revenue, R, that he wants to make. Then, the R-profit-extractor works in the following way:
Ask each agent how much he is willing to pay for the movie.
For each integer k, let n(k) be the number of agents willing to pay at least R/k. Note that n(k) is weakly increasing with k.
If there exists k such that n(k) ≥ k, then find the largest such k (which must be equal to n(k)), sell the movie to these k agents, and charge each such agent a price of R/k.
If no such k exists, then the auction is canceled and there are no winners.
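The steps above can be sketched as follows (an illustrative implementation with invented names; ties among equal bids are broken arbitrarily):

```python
def profit_extractor(bids, R):
    """R-profit-extractor for a digital goods auction (illustrative).

    Finds the largest k such that the k highest bidders can share the
    target revenue R, i.e. the k-th highest bid is at least R/k.
    Returns (set of winning bidder indices, price per winner).
    """
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    best_k = 0
    for k in range(1, len(bids) + 1):
        if bids[order[k - 1]] * k >= R:   # k-th highest bid >= R/k
            best_k = k
    if best_k == 0:
        return set(), 0                    # auction canceled, no winners
    return set(order[:best_k]), R / best_k

# three agents can share R = 12 at a price of 4 each
print(profit_extractor([10, 4, 4, 1], 12))  # ({0, 1, 2}, 4.0)
```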
This is a truthful mechanism. Proof: Since the agents have single-parametric utility functions, truthfulness is equivalent to monotonicity. The profit extractor is monotonic because:
If a winning agent increases his bid, then the chosen k weakly increases and the agent remains one of the k highest bidders, so he still wins.
A winning agent pays R/k, which is exactly the threshold price - the price under which the bid stops being a winner.
Estimating the maximum revenue
The main challenge in using an auction based on a profit-extractor is to choose the best value for the parameter R. Ideally, we would like R to be the maximum revenue that can be extracted from the market. However, we do not know this maximum revenue in advance. We can try to estimate it using one of the following ways:
1. Random sampling:
randomly partition the bidders to two groups, such that each bidder has a chance of 1/2 to go to each group. Let R1 be the maximum revenue in group 1 and R2 the maximum revenue in group 2. Run R1-profit-extractor in group 2, and R2-profit-extractor in group 1.
This mechanism guarantees a profit of at least 1/4 the maximum profit. A variant of this mechanism partitions the agents to three groups instead of two, and attains at least 1/3.25 of the maximum profit.
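The two-group random sampling mechanism can be sketched by combining an optimal single-price computation with the profit extractor (an illustrative sketch; all names are invented, and the `rng` parameter exists only to make the random split testable):

```python
import random

def random_sampling_profit_extraction(bids, rng=random):
    """Random sampling profit extraction (sketch; names invented here).

    Split the bidders into two random halves, compute each half's
    optimal single-price revenue, and run a profit extractor on each
    half with the other half's revenue as the target.
    """
    group1, group2 = [], []
    for i in range(len(bids)):
        (group1 if rng.random() < 0.5 else group2).append(i)

    def max_revenue(ids):
        # best revenue achievable with a single posted price in the group
        vals = sorted((bids[i] for i in ids), reverse=True)
        return max((v * (k + 1) for k, v in enumerate(vals)), default=0)

    def extract(ids, R):
        # R-profit-extractor restricted to the agents in ids
        if R <= 0:
            return set(), 0
        order = sorted(ids, key=lambda i: -bids[i])
        best_k = max((k for k in range(1, len(order) + 1)
                      if bids[order[k - 1]] * k >= R), default=0)
        return (set(order[:best_k]), R / best_k) if best_k else (set(), 0)

    r1, r2 = max_revenue(group1), max_revenue(group2)
    return extract(group2, r1), extract(group1, r2)
```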
2. Consensus estimate:
Calculate the maximum revenue in the entire population; apply a certain random rounding process that guarantees that the calculation is truthful with-high-probability. Let R be the estimated revenue; run R-profit-extractor in the entire population.
This mechanism guarantees a profit of at least 1/3.39 the maximum profit, in a digital goods auction.
Profit extraction in a double auction
The profit-extraction idea can be generalized to arbitrary single-parameter utility agents. In particular, it can be used in a double auction where several sellers sell a single unit of some item (with different costs) and several buyers want at most a single unit of that item (with different valuations).
The following mechanism is an approximate profit extractor:
Order the buyers by descending price, b1 ≥ b2 ≥ ..., and the sellers by ascending price, s1 ≤ s2 ≤ ....
Find the largest k such that bk − sk ≥ R/k.
The k − 1 high-value buyers buy an item at price bk. The k − 1 low-cost sellers sell an item at price sk.
The mechanism is truthful - this can be proved using a monotonicity argument similar to the digital-goods auction. The auctioneer's revenue is (k − 1)(bk − sk), which is at least (1 − 1/k)R and hence approaches the required revenue R when k is sufficiently large.
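A hedged sketch of such an approximate profit extractor follows; the precise price rule is an assumption here (the condition bk − sk ≥ R/k with trade reduction to k − 1 trades at the threshold prices bk and sk), as sources state this mechanism in slightly different forms:

```python
def double_auction_profit_extractor(buyer_bids, seller_asks, R):
    """Approximate profit extractor for a double auction (a sketch).

    Assumed variant: find the largest k with b_k - s_k >= R/k, then
    reduce to k - 1 trades at the threshold prices b_k (paid by each
    trading buyer) and s_k (received by each trading seller).
    """
    b = sorted(buyer_bids, reverse=True)   # b[0] >= b[1] >= ...
    s = sorted(seller_asks)                # s[0] <= s[1] <= ...
    best_k = 0
    for k in range(1, min(len(b), len(s)) + 1):
        if (b[k - 1] - s[k - 1]) * k >= R:
            best_k = k
    if best_k <= 1:
        return 0, None, None, 0            # no trades are possible
    trades = best_k - 1
    buyer_price, seller_price = b[best_k - 1], s[best_k - 1]
    revenue = trades * (buyer_price - seller_price)
    return trades, buyer_price, seller_price, revenue

# revenue 2 * (8 - 3) = 10, at least (1 - 1/3) of the target R = 12
print(double_auction_profit_extractor([10, 9, 8, 2], [1, 2, 3, 9], 12))
```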
Combining this profit-extractor with a consensus-estimator gives a truthful double-auction mechanism which guarantees a profit of at least 1/3.75 of the maximum profit.
History
The profit extractor mechanism is a special case of a cost sharing mechanism. It was adapted from the cost-sharing literature to the auction setting.
References
Mechanism design
Nickel(II) oxide

Nickel(II) oxide is the chemical compound with the formula NiO. It is the principal oxide of nickel. It is classified as a basic metal oxide. Several million kilograms are produced annually, of varying quality, mainly as an intermediate in the production of nickel alloys. The mineralogical form of NiO, bunsenite, is very rare. Other nickel oxides have been claimed, for example Ni2O3 and NiO2, but remain unproven.
Production
NiO can be prepared by multiple methods. Upon heating above 400 °C, nickel powder reacts with oxygen to give NiO. In some commercial processes, green nickel oxide is made by heating a mixture of nickel powder and water at 1000 °C; the rate for this reaction can be increased by the addition of NiO. The simplest and most successful method of preparation is through pyrolysis of nickel(II) compounds such as the hydroxide, nitrate, and carbonate, which yield a light green powder. Synthesis from the elements by heating the metal in oxygen can yield grey to black powders, which indicates nonstoichiometry.
Structure
NiO adopts the NaCl structure, with octahedral Ni2+ and O2− sites. The conceptually simple structure is commonly known as the rock salt structure. Like many other binary metal oxides, NiO is often non-stoichiometric, meaning that the Ni:O ratio deviates from 1:1. In nickel oxide, this non-stoichiometry is accompanied by a color change, with the stoichiometrically correct NiO being green and the non-stoichiometric NiO being black.
Applications and reactions
NiO has a variety of specialized applications; generally, applications distinguish between "chemical grade", which is relatively pure material for specialty applications, and "metallurgical grade", which is mainly used for the production of alloys. It is used in the ceramic industry to make frits, ferrites, and porcelain glazes. The sintered oxide is used to produce nickel steel alloys. Charles Édouard Guillaume won the 1920 Nobel Prize in Physics for his work on nickel steel alloys, which he called invar and elinvar.
NiO is a commonly used hole transport material in thin film solar cells. It was also a component in the nickel-iron battery, also known as the Edison battery, and is a component in fuel cells. It is the precursor to many nickel salts, for use as specialty chemicals and catalysts. More recently, NiO was used to make the NiCd rechargeable batteries found in many electronic devices until the development of the environmentally superior NiMH battery. NiO, an anodic electrochromic material, has been widely studied as a counter electrode with tungsten oxide, a cathodic electrochromic material, in complementary electrochromic devices.
About 4000 tons of chemical grade NiO are produced annually. Black NiO is the precursor to nickel salts, which arise by treatment with mineral acids. NiO is a versatile hydrogenation catalyst.
Heating nickel oxide with either hydrogen, carbon, or carbon monoxide reduces it to metallic nickel. It combines with the oxides of sodium and potassium at high temperatures (>700 °C) to form the corresponding nickelate.
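For clarity, the reductions described above correspond to standard balanced equations (written out here; the carbon reduction may also yield CO at high temperature):

```latex
\begin{align*}
\mathrm{NiO} + \mathrm{H_2} &\longrightarrow \mathrm{Ni} + \mathrm{H_2O}\\
2\,\mathrm{NiO} + \mathrm{C} &\longrightarrow 2\,\mathrm{Ni} + \mathrm{CO_2}\\
\mathrm{NiO} + \mathrm{CO} &\longrightarrow \mathrm{Ni} + \mathrm{CO_2}
\end{align*}
```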
Electronic structure
NiO is useful for illustrating the failure of density functional theory (using functionals based on the local-density approximation) and Hartree–Fock theory to account for the strong correlation. The term strong correlation refers to behavior of electrons in solids that is not well described (often not even in a qualitatively correct manner) by simple one-electron theories such as the local-density approximation (LDA) or Hartree–Fock theory. For instance, the seemingly simple material NiO has a partially filled 3d-band (the Ni atom has 8 of 10 possible 3d-electrons) and therefore would be expected to be a good conductor. However, strong Coulomb repulsion (a correlation effect) between d-electrons makes NiO instead a wide band gap Mott insulator. Thus, NiO has an electronic structure that is neither simply free-electron-like nor completely ionic, but a mixture of both.
Health risks
Long-term inhalation of NiO is damaging to the lungs, causing lesions and in some cases cancer.
The calculated half-life of dissolution of NiO in the blood is more than 90 days. NiO has a long retention half-time in the lungs; after administration to rodents, it persisted in the lungs for more than 3 months. Nickel oxide is classified as a human carcinogen based on increased respiratory cancer risks observed in epidemiological studies of sulfidic ore refinery workers.
In a 2-year National Toxicology Program green NiO inhalation study, some evidence of carcinogenicity in F344/N rats but equivocal evidence in female B6C3F1 mice was observed; there was no evidence of carcinogenicity in male B6C3F1 mice. Chronic inflammation without fibrosis was observed in the 2-year studies.
References
External links
Bunsenite at mindat.org
Bunsenite mineral data
Transition metal oxides
Nickel compounds
Non-stoichiometric compounds
IARC Group 1 carcinogens
Hydrogenation catalysts
Rock salt crystal structure
Suillellus adonis

Suillellus adonis is a species of bolete fungus described from Croatia. Originally described as a species of Boletus in 2002, it was transferred to Suillellus in 2014, based on molecular phylogenetic data. This apparently rare fungus is so far known only from the islands of Cres and Cyprus.
References
External links
adonis
Fungi described in 2002
Fungi of Europe
Fungus species
List of psychoactive plants

This is a list of plant species that, when consumed by humans, are known or suspected to produce psychoactive effects: changes in nervous system function that alter perception, mood, consciousness, cognition or behavior. Many of these plants are used intentionally as psychoactive drugs, for medicinal, religious, and/or recreational purposes. Some have been used ritually as entheogens for millennia.
The plants are listed according to the specific psychoactive chemical substances they contain; many contain multiple known psychoactive compounds.
Cannabinoids
Species of the genus Cannabis, known colloquially as marijuana, including Cannabis sativa and Cannabis indica, is a popular psychoactive plant that is often used medically and recreationally. The principal psychoactive substance in Cannabis, tetrahydrocannabinol (THC), contains no nitrogen, unlike many (but not all) other psychoactive substances and is not an indole, tryptamine, phenethylamine, anticholinergic (deliriant) or dissociative drug. THC is just one of more than 100 identified cannabinoid compounds in Cannabis, which also include cannabinol (CBN) and cannabidiol (CBD).
Cannabis plants vary widely, with different strains producing dynamic balances of cannabinoids (THC, CBD, etc.) and yielding markedly different effects. Popular strains are often hybrids of C. sativa and C. indica.
The medicinal effects of cannabis are widely studied, and are active topics of research both at universities and private research firms. Many jurisdictions have laws regulating or prohibiting the cultivation, sale and/or use of medical and recreational cannabis.
Tryptamines
Many of the psychedelic plants contain dimethyltryptamine (DMT), or other tryptamines, which are either snorted (Virola, Yopo snuffs), vaporized, or drunk with MAOIs (Ayahuasca). It cannot simply be eaten as it is not orally active without an MAOI and it needs to be extremely concentrated to be vaporized.
Acanthaceae
Species and alkaloid content; where given, alkaloid content refers to dried material.
Fittonia albivenis, a common ornamental plant from South America.
Aceraceae
Acer saccharinum (silver maple) was found to contain the indole alkaloid gramine (not active and extremely toxic) 0.05% in the leaves, so it is possible that other members of this plant family contain active compounds.
Aizoaceae
Delosperma acuminatum, DMT, 5-MeO-DMT
Delosperma cooperi, DMT, 5-MeO-DMT
Delosperma ecklonis, DMT
Delosperma esterhuyseniae, DMT
Delosperma hallii, 5-MeO-DMT
Delosperma harazianum, DMT, 5-MeO-DMT
Delosperma harazianum Shibam, DMT
Delosperma hirtum, DMT
Delosperma hallii aff. litorale
Delosperma lydenbergense, DMT, 5-MeO-DMT
Delosperma nubigenum, 5-MeO-DMT
Delosperma pageanum, DMT, 5-MeO-DMT
Delosperma pergamentaceum, Traces of DMT
Delosperma tradescantioides, DMT
Apocynaceae
Prestonia amazonica: DMT
Voacanga africana: Up to 10% Iboga alkaloids
Asteraceae
Pilosella officinarum
Erythroxylaceae
Erythroxylum pungens: DMT
Fabaceae (Leguminosae)
Acacia acuminata, Up to 1.5% alkaloids, mainly consisting of dimethyltryptamine, in bark & leaf. Also harman, tryptamine, NMT, other alkaloids in leaf.
Acacia alpina, Active principles in leaf
Acaciella angustissima, β-methyl-phenethylamine, NMT and DMT in leaf (1.1-10.2 ppm)
Vachellia aroma, Tryptamine alkaloids. Significant amount of tryptamine in the seeds.
Acacia auriculiformis, 5-MeO-DMT in stem bark
Acacia baileyana, 0.02% tryptamine and β-carbolines, in the leaf, Tetrahydroharman
Acacia beauverdiana, Psychoactive Ash used in Pituri.
Senegalia berlandieri, DMT, phenethylamine, mescaline, nicotine
Senegalia catechu, DMT and other tryptamines in leaf, bark
Vachellia caven, Psychoactive
Senegalia chundra, DMT and other tryptamines in leaf, bark
Acacia colei, DMT
Acacia complanata, 0.3% alkaloids in leaf and stem, almost all N-methyl-tetrahydroharman, with traces of tetrahydroharman, some of tryptamine
Acacia confusa, DMT & NMT in leaf, stem & bark 0.04% NMT and 0.02% DMT in stem. Also N,N-dimethyltryptamine N-oxide
Vachellia cornigera, Psychoactive, Tryptamines; DMT according to C. Rätsch.
Acacia cultriformis, Tryptamine, in the leaf, stem and seeds. Phenethylamine in leaf and seeds
Acacia cuthbertsonii, Psychoactive
Acacia decurrens, Psychoactive, but less than 0.02% alkaloids
Acacia delibrata, Psychoactive
Acacia falcata, Psychoactive, but less than 0.02% alkaloids
Vachellia farnesiana, Traces of 5-MeO-DMT in fruit. β-methyl-phenethylamine, flower. Ether extracts about 2–6% of the dried leaf mass. Alkaloids are present in the bark and leaves.
Acacia flavescens, Strongly Psychoactive, Bark
Acacia floribunda, Tryptamine and phenethylamine in flowers; other tryptamines; DMT, tryptamine, NMT 0.3–0.4% in phyllodes.
Acacia georginae, Psychoactive, plus deadly toxins
Vachellia horrida, Psychoactive
Acacia implexa, Psychoactive
Mimosa jurema, DMT, NMT
Vachellia karroo, Psychoactive
Senegalia laeta, DMT, in the leaf
Acacia longifolia, 0.2% tryptamine in bark, leaves, some in flowers, phenylethylamine in flowers, 0.2% DMT in plant. Histamine alkaloids.
Acacia sophorae, Tryptamine in leaves, bark
Acacia macradenia, Tryptamine
Acacia maidenii, 0.6% NMT and DMT in about a 2:3 ratio in the stem bark, both present in leaves
Acacia mangium, Psychoactive
Acacia melanoxylon, DMT, in the bark and leaf, but less than 0.02% total alkaloids
Senegalia mellifera, DMT, in the leaf
Vachellia nilotica, DMT, in the leaf
Vachellia nilotica subsp. adstringens, Psychoactive, DMT in the leaf
Acacia neurophylla DMT in bark, Harman in leaf.
Acacia obtusifolia, Tryptamine, DMT, NMT, other tryptamines, 0.4–0.5% in dried bark,0.15–0.2% in leaf, 0.07% in branch tips.
Vachellia oerfota, Less than 0.1% DMT in leaf, NMT
Acacia penninervis, Psychoactive
Acacia phlebophylla, 0.3% DMT in leaf, NMT
Acacia podalyriifolia, Tryptamine in the leaf, 0.5% to 2% DMT in fresh bark, phenethylamine in trace amounts. Although this species is claimed to contain 0.5% to 2% DMT in fresh bark, the reference given for this claim is invalid, as the cited article nowhere mentions Acacia podalyriifolia. Additionally, well-known and proven extraction techniques for DMT have failed to produce any DMT or other alkaloids from the fresh bark or leaves of multiple samples taken in various seasons. Should DMT actually exist in this species of Acacia, it exists in extremely small amounts, and acid/base extraction using HCl/NaOH has failed to produce any alkaloids. More academic research into the DMT content of this and other Australian Acacia species, with proper chemical analysis of samples, is needed.
Senegalia polyacantha, DMT in leaf and other tryptamines in leaf, bark
Senegalia polyacantha ssp. campylacantha, Less than 0.2% DMT in leaf, NMT; DMT and other tryptamines in leaf, bark
Senegalia rigidula: Phenethylamine, tryptamine, tyramine, and β-Methylphenethylamine.
Acacia sassa, Psychoactive
Vachellia schaffneri, β-methyl-phenethylamine, Phenethylamine
Senegalia senegal, Less than 0.1% DMT in leaf, NMT, other tryptamines. DMT in plant, DMT in bark.
Vachellia seyal, DMT, in the leaf. Ether extracts about 1–7% of the dried leaf mass.
Vachellia sieberiana, DMT, in the leaf
Acacia simplex, DMT and NMT, in the leaf, stem and trunk bark, 0.81% DMT in bark, MMT
Vachellia tortilis, DMT, NMT, and other tryptamines
Acacia vestita, Tryptamine, in the leaf and stem, but less than 0.02% total alkaloids
Acacia victoriae, tryptamines, 5-MeO-alkyltryptamine
List of acacia species having little or no alkaloids in the material sampled (alkaloid concentration between 0% and 0.02%):
Acacia acinacea
Acacia baileyana
Acacia decurrens
Acacia dealbata
Acacia mearnsii
Acacia drummondii
Acacia elata
Acacia falcata
Acacia leprosa
Acacia linearis
Acacia melanoxylon
Acacia pycnantha
Acacia retinodes
Acacia saligna
Acacia stricta
Acacia verticillata
Acacia vestita
Pseudalbizzia inundata leaves contain DMT.
Anadenanthera colubrina, Bufotenin, Beans, Bufotenin oxide, Beans, N,N-Dimethyltryptamine, Beans, pods,
Anadenanthera colubrina var. cebil – Bufotenin and Dimethyltryptamine have been isolated from the seeds and seed pods, 5-MeO-DMT from the bark of the stems. The seeds were found to contain 12.4% bufotenine, 0.06% 5-MeO-DMT and 0.06% DMT.
Anadenanthera peregrina,
1,2,3,4-Tetrahydro-6-methoxy-2,9-dimethyl-beta-carboline, Plant, 1,2,3,4-Tetrahydro-6-methoxy-2-methyl-beta-carboline, Plant, 5-Methoxy-N,N-dimethyltryptamine, Bark, 5-Methoxy-N-methyltryptamine, Bark, Bufotenin, plant, beans, Bufotenin N-oxide, Fruit, beans, N,N-Dimethyltryptamine-oxide, Fruit
Anadenanthera peregrina var. peregrina, Bufotenine is in the seeds.
Desmanthus illinoensis, 0–0.34% DMT in root bark, highly variable. Also NMT, N-hydroxy-N-methyltryptamine, 2-hydroxy-N-methyltryptamine, and gramine (toxic).
Desmanthus leptolobus, 0.14% DMT in root bark, more reliable than D. illinoensis
Desmodium caudatum (syn. Ohwia caudata), Roots: 0.087% DMT,
Codariocalyx motorius(syn. Desmodium gyrans), DMT, 5-MeO-DMT, leaves, roots
Desmodium racemosum, 5-MeO-DMT
Desmodium triflorum, 0.0004% DMT-N-oxide, roots, less in stems and trace in leaves.
Lespedeza capitata
Lespedeza bicolor, DMT, Lespedamine, and 5-MeO-DMT in leaves and roots
Lespedeza bicolor var. japonica, DMT, 5-MeO-DMT in leaves and root bark
Mimosa ophthalmocentra, Dried root: DMT 1.6%, NMT 0.0012% and hordenine 0.0065%
Mimosa scabrella, tryptamine, NMT, DMT and N-methyltetrahydrocarboline in bark
Mimosa somnians, tryptamines and MMT
Mimosa tenuiflora (syn. "Mimosa hostilis"), 0.31-0.57% DMT (dry root bark).
Mimosa verrucosa, DMT in root bark
Mucuna pruriens, the seeds of the plant contain about 3.1–6.1% L-DOPA.
Petalostylis casseoides, 0.4–0.5% tryptamine, DMT, etc. in leaves and stems
Petalostylis labicheoides var. casseoides, DMT in leaves and stems; 0.4–0.5% alkaloids in leaves and stems; Tryptamines in leaves and stems, MAO's up to 0.5%
Phyllodium pulchellum (syn. Desmodium pulchellum), 0.2% 5-MeO-DMT and small quantities of DMT; DMT dominates in seedlings and young plants, 5-MeO-DMT dominates in the mature plant; found in the whole plant, roots, stems, leaves and flowers
Erythrina flabelliformis, other Erythrina species, seeds contain the alkaloids erysodin and erysovin
Zornia latifolia, the flavones genistein, apigenin and syzalterin may explain the cannabis-like effects
Lauraceae
Nectandra megapotamica, NMT
Malpighiaceae
Diplopterys cabrerana: McKenna et al. (1984) assayed and found the leaves contain 0.17% DMT
Myristicaceae
Horsfieldia superba: 5-MeO-DMT, Horsfiline, and beta-carbolines
Iryanthera macrophylla: 5-MeO-DMT in bark;
Iryanthera ulei: 5-MeO-DMT in bark
Osteophloem platyspermum: DMT, 5-MeO-DMT in bark
Virola calophylla, Leaves 0.149% DMT, leaves 0.006% MMT 5-MeO-DMT in bark
Virola calophylloidea, DMT, 5-MeO-DMT
Virola carinata, DMT in leaves; DMT, 5-MeO-DMT
Virola cuspidata, DMT
Virola divergens, DMT in leaves
Virola elongata(syn. Virola theiodora), DMT, 5-MeO-DMT in bark, roots, leaves and flowers
Virola melinonii, DMT in bark; DMT, 5-MeO-DMT
Virola multinervia, DMT, 5-MeO-DMT in bark and roots
Virola pavonis, DMT in leaves
Virola peruviana, DMT, 5-MeO-DMT; 5-MeO-DMT, traces of DMT and 5-MeO-tryptamine in bark
Virola rufula, Alkaloids in bark and root, 95% of which is MeO-DMT 0.190% 5-MeO-DMT in bark, 0.135% 5-MeO-DMT in root, 0.092% DMT in leaves.
Virola sebifera, The bark contains 0.065% to 0.25% alkaloids, most of which are DMT and 5-MeO-DMT.
Virola venosa, DMT, 5-MeO-DMT in roots, leaves DMT
Ochnaceae
Testulea gabonensis: 0.2% 5-MeO-DMT, small quantities of DMT, DMT in bark and root bark, NMT
Pandanaceae
Genus Pandanus (Screw Pine): DMT in nuts
Poaceae (Gramineae)
Some Gramineae (grass) species contain gramine, which can cause brain damage, other organ damage, central nervous system damage and death in sheep.
Arundo donax, 0.0057% DMT in dried rhizome, none in stem, 0.026% bufotenine, 0.0023% 5-MeO-MMT
Phalaris aquatica, 0.0007–0.18% Total alkaloids, 0.100% DMT, 0.022% 5-MeO-DMT, 0.005% 5-OH-DMT
Phalaris arundinacea, 0.0004–0.121% Total alkaloids
Phalaris brachystachys, aerial parts up to 3% total alkaloids, DMT present
Phalaris coerulescens, Coerulescine and 2-methyl-1,2,3,4-Tetrahydro-β-carboline in rhizome.
Phragmites australis, DMT, 5-MeO-DMT, bufotenine and gramine in the rhizome.
None of the above alkaloids are said to have been found in Phalaris californica, Phalaris canariensis, Phalaris minor and hybrids of P. arundinacea together with P. aquatica.
Polygonaceae
Eriogonum : DMT
Rubiaceae
Psychotria carthagenensis, 0.2% average DMT in dried leaves.
Psychotria colorata, Presence of mu opioid receptor (MOR) agonist and NMDA antagonist alkaloids: hodgkinsine, psychotridine. Also mentioned in The Encyclopedia of Psychoactive Plants: Ethnopharmacology and Its Applications.
Psychotria expansa, DMT
Psychotria forsteriana, DMT
Psychotria insularum, DMT
Psychotria poeppigiana, DMT
Psychotria rostrata, DMT
Psychotria rufipilis, DMT
Psychotria viridis, DMT 0.1–0.61% dried mass.
Rutaceae
Dictyoloma incanescens, 5-MeO-DMT in leaves, 0.04% 5-MeO-DMT in bark
Dutaillyea drupacea, > 0.4% 5-MeO-DMT in leaves
Dutaillyea oreophila, 5-MeO-DMT in leaves
Tetradium ruticarpum(syn. Evodia rutaecarpa), 5-MeO-DMT in leaves, fruit and roots
Limonia acidissima, Traces of DMT; 5-MeO-DMT in stems
Euodia leptococca (formerly Melicope), 0.2% total alkaloids, 0.07% 5-MeO-DMT; 5-MeO-DMT in leaves and stems, also "5-MeO-DMT-Oxide and a beta-carboline"
Pilocarpus organensis, DMT, 5-MeO-DMT in leaves (Might also contain pilocarpine)
Vepris ampody, Up to 0.2% DMT in leaves and branches
Zanthoxylum arborescens, Traces of DMT; DMT in leaves
Zanthoxylum procerum, DMT in leaves
Citrus limon, DMT, N-Methylated tryptamine derivative in leaves
Citrus sinensis, DMT, N-Methylated tryptamine derivative
Citrus bergamia, DMT, N-Methylated tryptamine derivative
Mandarin orange Traces of N-methylated tryptamine derivative in leaf.
Chinotto Tree, N-Methylated tryptamine derivative in leaf
Citrus medica, N-Methylated tryptamine derivative in leaf
Phenethylamines
Species, Alkaloid Content (Fresh) – Alkaloid Content (Dried)
Coryphantha contains various phenethylamine alkaloids including macromerine, coryphanthine, O-methyl-candicine, corypalmine, and N-methyl-corypalmine.
Cylindropuntia echinocarpa (syn. Opuntia echinocarpa), Mescaline 0.01%, DMPEA 0.01%, 4-hydroxy-3,5-dimethoxyphenethylamine 0.01%
Cylindropuntia spinosior (syn. Opuntia spinosior), Mescaline 0.00004%, 3-methoxytyramine 0.001%, tyramine 0.002%, 3,4-dimethoxyphenethylamine.
Echinopsis lageniformis (syns Echinopsis scopulicola, Trichocereus bridgesii), Mescaline > 0.025%, also DMPEA < 1%, 3-methoxytyramine < 1%, tyramine < 1%; Mescaline 2%
Echinopsis macrogona (syn. Trichocereus macrogonus), > 0.01–0.05% Mescaline
Echinopsis pachanoi (syn. Trichocereus pachanoi), Mescaline 0.006–0.12%, 0.05% Average; Mescaline 0.01%–2.375%
Echinopsis peruviana (syn. Trichocereus peruvianus), Mescaline 0.0005%–0.12%; Mescaline
Echinopsis spachiana (syn. Trichocereus spachianus), Mescaline; Mescaline
Echinopsis tacaquirensis subsp. taquimbalensis (syn. Trichocereus taquimbalensis), > 0.005–0.025% mescaline
Echinopsis terscheckii (syn. Trichocereus terscheckii, Trichocereus werdemannianus) > 0.005–0.025% Mescaline; mescaline 0.01%–2.375%
Echinopsis valida, 0.025% mescaline
Lophophora williamsii (Peyote), 0.4% Mescaline; 3–6% Mescaline
Opuntia acanthocarpa, Mescaline
Opuntia basilaris, Mescaline 0.01%, plus 4-hydroxy-3,5-dimethoxyphenethylamine
Pelecyphora aselliformis, mescaline
Beta-carbolines
Beta-carbolines are "reversible" MAO-A inhibitors. They are found in some plants used to make Ayahuasca. In high doses the harmala alkaloids are somewhat hallucinogenic on their own. β-carboline is a benzodiazepine receptor inverse agonist and can therefore have convulsive, anxiogenic and memory enhancing effects.
Apocynaceae
Amsonia tabernaemontana, Harman
Aspidosperma exalatum, Beta-carbolines
Aspidosperma polyneuron, Beta-carbolines
Apocynum cannabinum, Harmalol
Ochrosia nakaiana, Harman
Pleiocarpa mutica, Beta-carbolines
Bignoniaceae
Newbouldia laevis, Harman
Calycanthaceae
Calycanthus occidentalis, Harman; Harmine
Chenopodiaceae
Hammada leptoclada, Harman; Tetrahydroharman, etc.
Kochia scoparia, Harman; Harmine, etc.
Combretaceae
Guiera senegalensis, Tetrahydroharmine; Harman, etc.
Cyperaceae
Carex brevicollis, Harmine, etc.
Carex parva, Beta-carbolines
Elaeagnaceae
Elaeagnus angustifolia, Harman, etc.
Elaeagnus commutata, Beta-carbolines
Elaeagnus hortensis, Tetrahydroharman, etc.
Elaeagnus orientalis, Tetrahydroharman
Elaeagnus spinosa, Tetrahydroharman
Hippophae rhamnoides, Harman, etc.
Shepherdia argentea, Tetrahydroharmol
Shepherdia canadensis, Tetrahydroharmol
Gramineae
Arundo donax, Tetrahydroharman, etc.
Festuca arundinacea, Harman, etc.
Lolium perenne (Perennial Ryegrass), Harman, etc.
Phalaris aquatica, Beta-carbolines
Phalaris arundinacea, Beta-carbolines
Lauraceae
Nectandra megapotamica, Beta-carbolines
Leguminosae
Acacia baileyana, Tetrahydroharman
Acacia complanata, Tetrahydroharman, etc.
Burkea africana, Harman, etc.
Desmodium gangeticum, Beta-carbolines
Desmodium gyrans, Beta-carbolines
Mucuna pruriens, 6-Methoxyharman, Dihydroharman, Harman
Petalostylis labicheoides, Tetrahydroharman; MAO's up to 0.5%
Prosopis nigra, Harmalicin, Harman, etc.
Shepherdia pulchellum, Beta-carbolines
Loganiaceae
Strychnos melinoniana, Beta-carbolines
Strychnos usambarensis, Harman
Malpighiaceae
Banisteriopsis argentia, 5-methoxytetrahydroharman, (−)-N(6)-methoxytetrahydroharman, dimethyltryptamine-N(6)-oxide
Banisteriopsis caapi, Harmine 0.31–0.84%, tetrahydroharmine, telepathine, dihydroshihunine, 5-MeO-DMT in bark
Banisteriopsis inebrians, Beta-carbolines
Banisteriopsis lutea, Harmine, telepathine
Banisteriopsis metallicolor, Harmine, telepathine
Banisteriopsis muricata Harmine up to 6%, harmaline up to 4%, plus DMT
Diplopterys cabrerana, Beta-carbolines
Cabi pratensis, Beta-carbolines
Callaeum antifebrile (syn. Cabi paraensis), Harmine
Tetrapterys methystica (syn. Tetrapteris methystica), Harmine
Myristicaceae
Gymnacranthera paniculata, Beta-carbolines
Horsfieldia superba Beta-carbolines
Virola cuspidata, 6-Methoxy-Harman
Virola rufula, Beta-carbolines
Virola theiodora, Beta-carbolines
Ochnaceae
Testulea gabonensis, Beta-carbolines
Palmae
Plectocomiopsis geminiflora, Beta-carbolines
Papaveraceae
Meconopsis horridula, Beta-carbolines
Meconopsis napaulensis, Beta-carbolines
Meconopsis paniculata, Beta-carbolines
Meconopsis robusta, Beta-carbolines
Meconopsis rudis, Beta-carbolines
Papaver rhoeas, Beta-carbolines
Papaver bracteatum ~ Thebaine
Papaver paeoniflorum ~ Morphine
Papaver setigerum ~ Morphine
Papaver somniferum ~ Morphine
Passifloraceae
Passiflora actinia, Harman
Passiflora alata, Harman
Passiflora alba, Harman
Passiflora bryonoides, Harman
Passiflora caerulea, Harman
Passiflora capsularis, Harman
Passiflora decaisneana, Harman
Passiflora edulis, Harman, 0–7001 ppm in fruit
Passiflora eichleriana, Harman
Passiflora foetida, Harman
Passiflora incarnata, Harmine, Harmaline, Harman, etc. 0.03%. Alkaloids in rind of fruit 0.25%
Passiflora quadrangularis, Harman
Passiflora ruberosa, Harman
Passiflora subpeltata, Harman
Passiflora warmingii, Harman
Polygonaceae
Calligonum minimum, Beta-carbolines
Leptactinia densiflora, Tetrahydroharmine, etc.
Ophiorrhiza japonica, Harman
Pauridiantha callicarpoides, Harman
Pauridiantha dewevrei, Harman
Pauridiantha lyalli, Harman
Pauridiantha viridiflora, Harman
Simira klugei, Harman
Simira rubra, Harman
Rubiaceae
Borreria verticillata, Beta-carbolines
Leptactinia densiflora, Beta-carbolines
Nauclea diderrichii, Beta-carbolines
Ophiorrhiza japonica, Beta-carbolines
Pauridiantha callicarpoides, Beta-carbolines
Pauridiantha dewevrei, Beta-carbolines
Pauridiantha lyalli, Beta-carbolines
Pauridiantha viridiflora, Beta-carbolines
Pavetta lanceolata, Beta-carbolines
Psychotria carthagenensis, Beta-carbolines
Psychotria viridis, Beta-carbolines
Simira klugei, Beta-carbolines
Simira rubra, Beta-carbolines
Uncaria attenuata, Beta-carbolines
Uncaria canescens, Beta-carbolines
Uncaria orientalis, Beta-carbolines
Nauclea latifolia, Tramadol
Rutaceae
Tetradium (syn. Evodia) species: Some contain carbolines
Euodia leptococca Beta-carboline
Araliopsis tabouensis, Beta-carbolines
Flindersia laevicarpa, Beta-carbolines
Zanthoxylum rhetsa, Beta-carbolines
Sapotaceae
Chrysophyllum lacourtianum, Norharman etc.
Scutellaria nana
Simaroubaceae
Ailanthus malabarica, Beta-carbolines. (See also Nag Champa)
Perriera madagascariensis, Beta-carbolines
Picrasma ailanthoides, Beta-carbolines
Picrasma crenata, Beta-carbolines
Picrasma excelsa, Beta-carbolines
Picrasma javanica, Beta-carbolines
Solanaceae
Vestia foetida (syn. V. lycioides), Beta-carbolines
Symplocaceae
Symplocos racemosa, Harman
Tiliaceae
Grewia mollis, Beta-carbolines
Zygophyllaceae
Fagonia cretica, Harman
Nitraria schoberi, Beta-carbolines
Peganum harmala (Syrian rue), The seeds contain about 2–6% alkaloids, most of which is harmaline. Peganum harmala is also an abortifacient.
Peganum nigellastrum, Harmine
Tribulus terrestris, Harmine etc.; Harman
Zygophyllum fabago, Harmine etc.; Harman
Opiates
Opiates are the natural products of many plants, the most famous and historically relevant of which is Papaver somniferum. Opiates are defined as natural products (or their esters and salts that revert to the natural product in the human body), whereas opioids are defined as semi-synthetic or fully synthetic compounds that trigger the opioid receptor of the mu subtype. Other opiate receptors, such as the kappa- and delta-opiate receptors, are part of this system but do not cause the characteristic behavioral depression and analgesia, which are mostly mediated through the mu-opiate receptor.
An opiate, in classical pharmacology, is a substance derived from opium. In more modern usage, the term opioid is used to designate all substances, both natural and synthetic, that bind to opioid receptors in the brain (including antagonists). Opiates are alkaloid compounds naturally found in the Papaver somniferum plant (opium poppy). The psychoactive compounds found in the opium plant include morphine, codeine, and thebaine. Opiates have long been used for a variety of medical conditions with evidence of opiate trade and use for pain relief as early as the eighth century AD. Opiates are considered drugs with moderate to high abuse potential and are listed on various "Substance-Control Schedules" under the Uniform Controlled Substances Act of the United States of America.
In 2014, between 13 and 20 million people used opiates recreationally (0.3% to 0.4% of the global population between the ages of 15 and 65). According to the CDC, from this population, there were 47,000 deaths, with a total of 500,000 deaths from 2000 to 2014. In 2016, the World Health Organization reported that 27 million people suffer from Opioid use disorder. They also reported that in 2015, 450,000 people died as a result of drug use, with between a third and a half of that number being attributed to opioids.
Papaver somniferum
The plant contains a latex that thickens into opium when it is dried. Opium contains approximately 40 alkaloids, which are summarized as opium alkaloids. The main psychoactive alkaloids are:
Morphine: 3 to 20% in opium
Codeine 0.1 to 4% in opium
Thebaine 0.1 to 4% in opium
Noscapine 1 to 11% in opium
Oripavine
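The percentage ranges listed above lend themselves to simple arithmetic. The sketch below is illustrative only (the 100 g sample mass is hypothetical, not from the source) and converts each listed range into gram yields:

```python
# Illustrative only: apply the alkaloid percentage ranges listed above for
# raw opium to a hypothetical sample mass.
ALKALOID_RANGES = {          # (min %, max %) in opium, as listed above
    "morphine": (3.0, 20.0),
    "codeine": (0.1, 4.0),
    "thebaine": (0.1, 4.0),
    "noscapine": (1.0, 11.0),
}

def yield_range(sample_g, pct_range):
    """Return (min, max) grams of alkaloid in sample_g grams of opium."""
    lo, hi = pct_range
    return sample_g * lo / 100.0, sample_g * hi / 100.0

for name, rng in ALKALOID_RANGES.items():
    lo_g, hi_g = yield_range(100.0, rng)
    print(f"{name}: {lo_g:.1f}-{hi_g:.1f} g per 100 g of opium")
```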
Atherospermataceae
Laurelia novae-zelandiae ~ pukateine
Cnidium officinale
Mitragyna speciosa/Mitragyna parvifolia
Mitragynine: Approx. 0.33% in dried leaves
7-Hydroxymitragynine
Mitragynine pseudoindoxyl
Picralima nitida
Akuammicine
Pericine, which may also have convulsant effects.
Psychotria colorata
Hodgkinsine
Aspidosperma spp.
Akuammicine
Plants containing other psychoactive substances
See also
Aztec entheogenic complex
Entheogenic drugs and the archaeological record
God in a Pill?
Hallucinogenic fish
Hallucinogenic plants in Chinese herbals
List of Acacia species known to contain psychoactive alkaloids
List of entheogenic/hallucinogenic species
List of plants used for smoking
List of poisonous plants
List of psychoactive plants, fungi, and animals
Louisiana State Act 159
N,N-Dimethyltryptamine
Psilocybin mushrooms
Psychoactive cactus
Psychoactive plant
Notes
References
Bibliography
External links
Descriptions of psychoactive Cacti. Lycaeum Visionary Cactus Guide
Erowid Tryptamine FAQ – More Plants Containing Tryptamines
John Stephen Glasby, Dictionary of Plants Containing Secondary Metabolites, Published by CRC Press
Golden Guide to Hallucinogenic Plants
Hallucinogens on the Internet: A Vast New Source of Underground Drug Information John H. Halpern, M.D. and Harrison G. Pope, Jr., M.D.
Chemical Investigations of the Alkaloids from the Plants of the Family Elaeocarpaceae – Peter L. Katavic, Chemical Investigations of the Alkaloids From the Plants Of The Family Elaeocarpaceae, School of Science/Natural Product Discovery (NPD), Faculty of Science, Griffith University
Alexander T. Shulgin, Psychotomimetic Drugs: Structure-Activity Relationships
UNODC The plant kingdom and hallucinogens (part II)
UNODC The plant kingdom and hallucinogens (part III)
Virola – Dried Herbarium Specimens
Virola Species Pictures – USGS
Desmanthus illinoensis – USDA
Psychedelic Reader (Google Books)
Medicinal plants
Psychoactive plants | List of psychoactive plants | Chemistry,Biology | 8,028 |
32,500,763 | https://en.wikipedia.org/wiki/Catecholaminergic%20cell%20groups | Catecholaminergic cell groups refers to collections of neurons in the central nervous system that have been demonstrated by histochemical fluorescence to contain one of the neurotransmitters dopamine or norepinephrine. Thus, it represents the combination of dopaminergic cell groups and noradrenergic cell groups. Some authors include in this category 'putative' adrenergic cell groups, collections of neurons that stain for PNMT, the enzyme that converts norepinephrine to epinephrine (adrenaline).
Catecholaminergic cell groups and Parkinson's disease have an interactive relationship. Catecholaminergic neurons containing neuromelanin are more susceptible to Parkinson's related cell death than nonmelanized catecholaminergic neurons. Neuromelanin is an autoxidation byproduct of catecholamines, and it has been suggested that catecholaminergic neurons surrounded by a low density of glutathione peroxidase cells are more susceptible to degeneration in Parkinson's disease than those protected against oxidative stress. Hyperoxidation may be responsible for the selective degeneration of catecholaminergic neurons, specifically in the substantia nigra.
References
External links
More information at BrainInfo
Neurochemistry | Catecholaminergic cell groups | Chemistry,Biology | 287 |
26,315,688 | https://en.wikipedia.org/wiki/M63%20ground%20mount | The M63 ground mount is a four-legged anti-aircraft weapon mount used on the M2HB Browning machine gun.
The mount itself weighs 65 kg (144 lb) and has a height of 106.7 cm (42 in) with the M2 fitted. It has a maximum elevation of 85°, a depression of 29° and a traverse of 360°.
The mount is usually sandbagged in a hole with each leg staked down. Use against ground targets is better suited to the M3 tripod because the mount tends to be unstable when the gun is fired at low angles.
References
See also
M205 tripod
M3 tripod
M2 tripod
M122 tripod
M192 Lightweight Ground Mount
Firearm components
United States Marine Corps equipment | M63 ground mount | Technology | 150 |
71,314,483 | https://en.wikipedia.org/wiki/Muellerella%20thalamita | Muellerella thalamita is a species of lichenicolous fungus in the family Verrucariaceae. It grows on the apothecia of Orcularia insperata, Baculifera micromera and Hafellia disciformis, which are lichens that grow on bark.
References
Verrucariales
Fungi described in 1877
Lichenicolous fungi
Taxa named by James Mascall Morrison Crombie
Fungus species | Muellerella thalamita | Biology | 94 |
61,148,222 | https://en.wikipedia.org/wiki/C10H13N3O | {{DISPLAYTITLE:C10H13N3O}}
The molecular formula C10H13N3O may refer to:
AL-34662
ODMA (drug) | C10H13N3O | Chemistry | 42 |
45,000,569 | https://en.wikipedia.org/wiki/Colombian%20units%20of%20measurement | A variety of units of measurement were used in Colombia to measure quantities like length, mass and area. In Colombia, the International Metric System was adopted in 1853 and has been compulsory since 1854.
Pre-metric units
Several different units were used before 1854. The older, pre-metric system was derived from the Spanish Castilian system.
Length
Different units were used to measure length. As of the 1920s, some units were defined in terms of the metric system. One vara was equal to 0.8 m (or 0.84 m). Some other units are provided below:
1 pulgada = 1/36 vara
1 cuarta = 1/4 vara
1 pie = 1/3 vara
1 cuadra = 100 varas
1 legua = 6250 varas
Mass
A number of units were used to measure mass. As of the 1920s, some units were defined in terms of the metric system. One libra was equal to 0.500 kg (i.e. 500 g) (or 0.54354 kg). Some other units are provided below:
1 onza = 1/16 libra
1 arroba = 25 libra
1 quintal = 100 libra
1 saco = 125 libra
1 carga = 250 libra
1 tonelada = 2000 libra
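The equivalences above can be collected into a small converter. This is a sketch that assumes the primary values given in the text (1 vara = 0.8 m, 1 libra = 0.5 kg); the alternative values also cited (0.84 m, 0.54354 kg) would scale the results proportionally.

```python
# Sketch of the pre-metric Colombian equivalences listed above, using
# vara = 0.8 m and libra = 0.5 kg (the text also cites 0.84 m and
# 0.54354 kg as alternative values).
VARA_M = 0.8
LIBRA_KG = 0.5

LENGTH_IN_VARAS = {"vara": 1, "cuadra": 100, "legua": 6250}
MASS_IN_LIBRAS = {"libra": 1, "arroba": 25, "quintal": 100,
                  "saco": 125, "carga": 250, "tonelada": 2000}

def to_metres(unit, qty=1.0):
    """Convert a quantity of a colonial length unit to metres."""
    return qty * LENGTH_IN_VARAS[unit] * VARA_M

def to_kg(unit, qty=1.0):
    """Convert a quantity of a colonial mass unit to kilograms."""
    return qty * MASS_IN_LIBRAS[unit] * LIBRA_KG

print(to_metres("legua"))   # 6250 varas at 0.8 m each
print(to_kg("quintal"))     # 100 libras at 0.5 kg each
```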
Area
Several units were used to measure area. As of the 1920s, some units were defined in terms of the metric system. One vara² was equal to 0.64 m², and one fanegada was equal to 10,000 vara².
References
Culture of Colombia
Colombia | Colombian units of measurement | Mathematics | 306 |
7,852,580 | https://en.wikipedia.org/wiki/Water%20trading | Water trading is the process of buying and selling water access entitlements, also often called water rights. The terms of the trade can be either permanent or temporary, depending on the legal status of the water rights. Some of the western states of the United States, Chile, South Africa, Australia, Iran and Spain's Canary Islands have water trading schemes. Some consider Australia's to be the most sophisticated and effective in the world. Some other countries, especially in South Asia, also have informal water trading schemes. Water markets tend to be local and informal, as opposed to more formal schemes.
Some economists argue that water trading can promote more efficient water allocation because a market based price acts as an incentive for users to allocate resources from low value activities to high value activities. There are debates about the extent to which water markets operate efficiently in practice, what the social and environmental outcomes of water trading schemes are, and the ethics of applying economic principles to a resource such as water.
In the United States, water trading takes on several forms that differ from project to project, and are dependent upon the history, geography, and other factors of the area. Water law in many western U.S. states is based in the doctrine of "prior appropriation," or "first in time, first in use." Economists argue that this has created inefficiency in the way water is allocated, especially as urban populations increase and in times of drought. Water markets are promoted as a way to correct these inefficiencies.
In addition to the supply of tap water, many local water resources are also being acquired by private companies, most notably Nestlé Waters with its numerous brands, in order to provide a commodity for the bottled water industry. This industry, which often bottles common ground water and sells it as spring water, competes with local communities for access to their water supplies, and is accused of reselling the water at drastically higher prices compared to what citizens pay for tap water.
Water trading markets
Water trading is a voluntary exchange or transfer of a quantifiable water allocation between a willing buyer and seller. In a water trading market, the seller holds a water right or entitlement that is surplus to its current water demand, and the buyer faces a water deficit and is willing to pay to meet its water demand. Local exchanges that occur for short durations between neighbors are considered "spot markets" and may operate under rules different from water rights trading markets.
Economic theory
Economic theory suggests that trade in water rights is a way to reallocate water from less to more economically productive activities. Water rights based on prior appropriation – first in time, first in right – led to inefficient water allocation and other inefficiencies, like overuse of land and less adoption of water conservation technologies. For example, it has been observed that urban users can pay up to 10 times more for water than agricultural users. Alternatively, water markets should provide a clear measure of the value of water and encourage conservation. Water trading can be a solution because marginal prices for users will be equalized and one price would allocate water according to each users demand curve; additionally information about the value of water in different uses will result, and compatible incentives will be created. Studies have shown that only modest transfers of water (10%) from agriculture to urban areas would be needed to bring allocation of developed uses into economic balance. Potential environmental benefits of trading can also include improved instream water quality, because water will not be diverted to the least economically productive users. Trading also re-allocates risk, whereas the prior appropriation system inefficiently and unequally allocates water and risk among similar users.
Water trading should be Pareto efficient, which means that the socially optimal water allocation is an allocation such that no person can be made better off without making someone worse off, and includes compensating transfers of money to losers. The socially optimum level is where water is allocated to those who value it most, though this can be contingent on reallocation in drought years. However, it is frequently not practicable to compensate losers from water transfers because of the difficulty of identifying them, or they may be located in different legal jurisdictions. Economists acknowledge that the final results of a water trading market and how they are achieved are important policy questions.
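The claim that a single market price equalizes marginal values across users can be made concrete with a toy calculation. The parameters below are hypothetical, not from the source; the sketch assumes linear inverse demand p_i = a_i - b_i*q_i for two users sharing a fixed supply Q.

```python
# Hypothetical sketch: with linear inverse demand p_i = a_i - b_i * q_i,
# the efficient allocation of a fixed supply Q equalizes marginal value
# (price) across users:  a1 - b1*q1 = a2 - b2*q2,  with  q1 + q2 = Q.
def efficient_allocation(a1, b1, a2, b2, Q):
    """Return (q1, q2, price) that equalize marginal value across two users."""
    q1 = (a1 - a2 + b2 * Q) / (b1 + b2)  # interior solution of the two equations
    q1 = min(max(q1, 0.0), Q)            # clamp to a corner if demands are lopsided
    q2 = Q - q1
    price = a1 - b1 * q1 if q1 > 0 else a2 - b2 * q2
    return q1, q2, price

# An urban user with a steeper demand curve outbids an agricultural user
# for most, but not all, of the supply:
q_urban, q_ag, price = efficient_allocation(a1=100, b1=2, a2=40, b2=1, Q=50)
```

With these invented numbers the urban user takes roughly 36.7 of the 50 units and both users face the same marginal value of about 26.7, illustrating the equal-marginal-price condition described above.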
Conditions for a water trading market
The aforementioned benefits of water trading are improved as the following conditions are met:
Voluntary buyers and sellers: Parties interested in buying must have access to water rights and those interested in selling their entitlements must be allowed to do so.
Allocation of vested rights: In order to achieve efficiency in the distribution and use of water rights, available sources must be allotted to specific parties. While owners do not own the water itself, they do own the right to use the water. These property rights are established by a governing body and in many cases are available for sale or for lease.
Information: In order to function efficiently in a market, participants must know their own estimated costs. One who owns the water rights of a specific area must know the quantity of water needed, the value of that water, and the point at which additional consumption of water is no longer beneficial (ex. Land has been fully irrigated and additional water would be detrimental).
Clear definition of rights: This refers to the measurement of bodies of water accessible by the rights holders. Bodies of water must be consistently identified, including the source water body as well as the measurement (in acre-feet) of the "right to use." Lack of clarity on permits or water rights allocation may lead to lost, null, or void transactions.
Transferable from land rights: In reference to water ranching, water trading will be most efficient when access to water is independent of the land. Buying and selling of rights becomes less complicated when one is not considering the sale or lease of water and land.
Changeable types of water use: As agricultural users of water rights are electing to sell or lease these allotments, the destination of the water will vary. It is necessary that transferred rights may be redeemed for whatever agricultural, municipal, industrial, or residential purpose it is assigned.
Types of trades
Several types of stakeholders are recognized as potential participants in a water market, including agricultural, industrial, and urban users, as well as those who value instream uses for recreation, habitat preservation, or other environmental benefits. Water rights holders, particularly agricultural users, can make water available for trade by employing water conservation technology, through permanent fallowing, seasonal fallowing, shifts in crop choice, or voluntary water conservation (for example, residential water conservation practices). Trades can then be long-term leases, permanent transfers, short-term leases, or a callable transfer, which is the ability of a city to lease water under specified drought conditions. There are other flexible trade tools used by urban areas, such as water leasebacks, wherein the municipality purchases the water right from agricultural users and then can lease it back to those users in non-drought years, as a way to ensure the urban water supply. Banking water is a related tool, wherein water is stored underground in non-drought years to be used in drought years, though this is not to be confused with water banks, which are brokering institutions. Water ranching is a method of accessing unclaimed water rights that are legally bound to land rights. While many areas have detached these two types of rights from one another, some still prohibit the severing of rights and thus continue to promote water ranching. In the event of water ranching, the groundwater removed from the property is often much greater than that which would be used for average agricultural use, which can be harmful to the ecosystems which rely on it. This practice also creates inefficiency in the dispersal of land and water access ownership, as non-agricultural parties, such as municipalities, may purchase a plot of land simply for its water.
Water credits: The idea is to have a tradable certificate that certifies the quantity of water saved by an institution, organization, or individual; this would help maximize the utilization of every available drop of water. It may be defined as a permit that allows the holder to trade the conserved water on the international market at the current market price.
Justification for water trading markets
Establishing a water market may be an appropriate solution for the problem of distributing scarce water resources among increasing demand, depending on the historical, political, legal, and economic context of a community. For example, where prior-appropriation water rights dictate freshwater allocation, such as in the Western United States, new consumers may have little recourse to obtain sufficient water quantities to meet their demands without the use of water markets (alternatives to water markets are discussed below). Thus, historical appropriative rights might neglect consumers willing to pay more than current consumers. Water trading serves as a mechanism to promote the distribution of rights to those who value them most. Also, instream demands (that reflect the benefits that fisheries and lentic and lotic ecosystems receive from greater water flows, as well as benefits to water-dependent recreational activities or aesthetic appreciation) might be ignored in an appropriative system. For example, institutions governing water resources in the US have historically favored water allocation to uses that stimulate the economy, such as agricultural, hydroelectric, or municipal application. Correspondingly, western water law evolved to encourage water diversion offstream; water left instream was considered "wasteful" and so instream water demand was ignored.
There are multiple manifestations of water rights. Most commonly, water rights fall into the categories of prior-appropriation water rights and riparian water rights. Prior appropriation dictates that the first party to use the water for beneficial use maintains right to continue using it in this manner, unless they elect to sell or lease these rights. Riparian water rights are allocated to parties in ownership of land adjacent to a body of water. Frequently with riparian rights, the water rights cannot be severed from the land rights, and the water found within may not be transferred outside the watershed of origin. Both of these types of water rights are divided into "senior" and "junior" rights, appointed by the order in which they were allocated. The senior rights holders receive their water first, to be followed by the junior rights holders, and ideally there would be sufficient water for each party to redeem their allotments. In the absence of water trading, a drought may cause rights holders to lose their full access. In these drier seasons, it is possible for governing authorities to curtail junior water rights for a period of time to allow senior rights holders to redeem their full quantities through a process called priority administration.
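Priority administration as described above is essentially a seniority-sorted allocation loop: serve the earliest priority date first and curtail junior rights once the supply runs out. A minimal sketch, with hypothetical holder names and quantities:

```python
# Hypothetical sketch of priority administration under prior appropriation:
# rights are served strictly in order of seniority (earliest priority date
# first), and junior rights are curtailed once the supply is exhausted.
from dataclasses import dataclass

@dataclass
class Right:
    holder: str
    priority_year: int   # earlier year = more senior right
    claim_af: float      # claimed quantity in acre-feet

def administer(rights, supply_af):
    """Allocate a limited supply to rights holders by seniority."""
    allocations = {}
    remaining = supply_af
    for r in sorted(rights, key=lambda r: r.priority_year):
        delivered = min(r.claim_af, remaining)
        allocations[r.holder] = delivered
        remaining -= delivered
    return allocations

rights = [
    Right("junior farm", 1975, 40.0),
    Right("senior ranch", 1902, 60.0),
    Right("city", 1950, 30.0),
]
print(administer(rights, supply_af=80.0))
# senior ranch receives 60, city 20, and the junior farm is fully curtailed
```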
Where water is scarce, tradable water rights may incentivize water conservation and make more water available for trading. Using the western appropriative rights system as an example, water rights are usufruct, indefinite, and senior or junior rights holders may forfeit their rights if they do not use their full water allocation for beneficial use (criteria for forfeiture and definition of "beneficial use" varies by state). Because rights holders do not "own" water rights, but merely consume water, they bear no cost for overconsumption, and may consume more water than needed to avoid forfeiture. Agricultural water users may intentionally apply water to low-value crops or water-intensive crops to justify and maintain their current water allocation. Thus, tradable rights may also encourage agricultural production of more high-value crops and/or less water-intensive crops. In the case of the western United States, as of 2006, irrigators typically consume more than 80% of freshwater, and so water trading markets could reallocate water to urban, environmental, and higher-value crop uses, where water is valued greater at the margin.
Water markets may be appropriate where there are no or inefficient rules established to govern groundwater use. Because groundwater is generally available to anyone who sinks a well and pumps, water tables worldwide have fallen precipitously in the last several decades. Diminishing aquifers and lower water tables are a concern because aquifers are relatively slow to recharge, and lower water tables can lead to salt intrusion and make freshwater unfit for consumption.
Generally, water markets are considered flexible instruments that, in theory, should adjust for changing prices, and respond to changing markets conditions (e.g. less rainfall, increased demand). Historically, certain communities, such as those in the western United States, may have responded to water shortage and increasing demand through supply-side solutions, like increased storage capacity and transportation infrastructure (e.g. dam and aqueduct construction). Due to higher capital costs, decreasing sites for dam construction, and increasing awareness of environmental damage from dam construction (consider impaired Northwest salmon runs, for one example), water markets may be preferable to supply-side solutions that are not viable or sustainable in the long-run. For the water storage and transportation infrastructure that still exists, water markets may shift the financial burden of maintenance from government agencies to private sellers and buyers that participate in water markets.
Role of institutions
Empirical research established that outcomes of long-term sustainability and successful management of common pool resources (CPR) depend on the governing institutions involved, and that no single type of institution or management system uniformly manages common pool resources optimally across all scenarios.
A CPR is "a natural or man-made resource system sufficiently large as to make it costly (but not impossible) to exclude beneficiaries from obtaining benefits from its use". Water is inherently a common pool resource; however, it takes on qualities of a private good when property rights are assigned and its consumption becomes both rival and excludable by a water rights holder. Still, water is not a pure private good when property rights are assigned because non-water rights holding beneficiaries can access water upstream. Irrigation water will also percolate and recycle back to the stream, so non-water rights holding beneficiaries downstream will benefit from return flows. Thus, water retains certain common pool resource qualities even in a water trading market and must be managed as such.
In the world of common pool resources, an appropriator is a person who withdraws from the resource system, providers are agents who arrange the provision of a CPR, and producers construct, repair, or take action to ensure long terms sustenance of a CPR system. Also bearing on water trading markets, Garrett Hardin suggested common pool resources will terminate in a "tragedy of the commons", where all appropriators will continue to consume common pool resources to maximize their individual utility without considering social utility or cost: "Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons".
To redress this, one traditional scheme for common pool resource management is a "Leviathan" strategy, in which a central authority (like government) must enforce rules, and coerce and punish appropriators as necessary to obey resource rules; however a large enforcer cannot catch all offenders or obtain complete information so the Leviathan strategy is not a perfect solution. The second traditional scheme for common pool resource management is privatization, in which resources are tangibly divided and exclusively managed and consumed by individual entities. However, privatization is not a perfect solution because it erroneously assumes when the resource pool is divided, all resultant units have equal value.
As an alternative, Elinor Ostrom posits that common pool resources are embedded in complex social-ecological systems and can be managed by nested or polycentric public enterprises, where institutions at different scales (e.g. national down to the local hydrologic basin) collaborate horizontally and vertically to sustainably manage a common pool resource. External enforcers do not necessarily need to monitor and enforce penalties; rather, participants can internally monitor appropriations and levy sanctions. Also, the internal actors who know best the costs and benefits of local resource appropriation participate in management. Case studies below provide examples of the role of institutions in specific water markets; however, each combination of institutions involved in water allocation carries its own capacities and constraints.
Complications in water trading markets
Impediments to the development of water markets include the fact that water is largely a public good: water rights rest with a governing body, while individuals essentially hold "use" rights. In addition, water is not a standard commodity; rather, the water supply is stochastic and flows through complex natural and manmade systems. Thin markets with few participants can result from fluctuations in water supply. Transaction costs for water trades can be high because of the need to physically transport the water and the required administrative approvals, which may not be given because of externalities to third parties. Additionally, institutional features affect transaction costs, such as the structure of the water district, the water rationing mechanism, and other rules such as return-flow requirements. These institutional structures have been observed to form in the early stages of a project and to resist change, because investments that are often irreversible are made by stakeholders and third parties based upon these institutions.
Third party effects
Third party effects of water trading can be positive or negative and will occur when the benefits or costs of a trade accrue to persons besides the buyer and seller involved in a water right trade. Examples of third party effects include:
Unreliable supply: Relates to the probability a water rights holder will receive expected allocation in a given water year. This probability of receipt is dependent on natural variability of water supply (e.g. drought, irregular rainfall), governing institutions that manage water allocation, storage and conveyance losses (e.g. from evaporation or seepage), and access to return flows.
Delayed delivery: Relates to the capacity of water storage and transportation infrastructure and the fact such infrastructure are congestible goods. During peak demand times, infrastructure may not have the capacity to store or deliver water demanded by all users in the moment they demand it. However, because water demand is seasonal (e.g. greater demand in hotter or drier months), it may not be cost-effective to augment infrastructure to meet peak demand level year round. Thus, some users may not receive their water allocation when they need it most.
Unaccounted costs of storage and/or delivery: Again, the time of year, location, and elevation are important because water is non-compressible (unlike natural gas) and cannot be cheaply stored or transported across long distances or elevation changes.
Water quality: Return flows can improve or reduce water quality depending on the location of the origin and endpoint of water traded.
Fishery degradation: Reduced flows from water allocated offstream can negatively impact fishery health.
Area-of-Origin effect: Relates to a chain reaction of decreased local economic activity and a diminished tax base in the geographic area of origin when a trade lessens the agricultural or industrial water rights holder's economic activity. Academic consensus is that the area-of-origin effect exists, but the magnitude of its impact varies and is widely debated.
Barriers to trade
The following factors may impede trading in a water market:
High infrastructure costs or constraints: A great deal of physical infrastructure is required to move water long distances to reallocate it to its most valued use from a seller to a buyer, and allow markets to form and succeed. Most often the capital to build elaborate canal and ditch infrastructure is provided and maintained by a government entity.
High transaction costs: The transaction cost of trading in a water market is the sum of the cost of obtaining information, search cost of finding willing traders, negotiation cost of achieving mutually beneficial trades, cost of effecting and registering trades, and cost of enforcing trade contracts. Increasing the geographic range of the trade and number of stakeholders involved tends to increase the transaction cost of the trade.
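As a toy illustration, the cost components enumerated above simply sum to the total transaction cost of a trade. The figures below are hypothetical and not drawn from any real market:

```python
# Hypothetical cost components of a single water trade
# (all dollar figures are illustrative, not from any real market).
cost_components = {
    "information": 500.0,   # obtaining information on rights and prices
    "search": 300.0,        # finding willing trading partners
    "negotiation": 700.0,   # reaching mutually beneficial terms
    "registration": 200.0,  # effecting and registering the trade
    "enforcement": 400.0,   # enforcing the trade contract
}

transaction_cost = sum(cost_components.values())
print(f"Total transaction cost: ${transaction_cost:,.2f}")
```

Widening the geographic range of a trade or adding stakeholders would grow each of these components.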
Legal barriers: In the United States, federal laws such as the Endangered Species Act and Clean Water Act, or the Public Trust Doctrine, which can require minimum flows to protect a species or maintain water quality, may prevent offstream trades (for example, see court protection of the Hypomesus transpacificus in California).
System of water rights: If there is a rationing system in which senior rights users get all of their allotment before junior rights users get any of theirs under drought conditions (a priority system), there is little incentive for senior users to conserve if they cannot easily trade water to other users. Such a system complicates trading because heterogeneous rights must be quantified and priced for each trade, and it is less adaptable to short-run changes in supply.
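A priority (prior appropriation) rationing rule of the kind described above can be sketched in a few lines. The rights holders, priority years, and supply figure below are hypothetical:

```python
# Sketch of prior-appropriation rationing: senior rights (earlier priority
# year) are filled in full before junior rights receive anything.
# All rights and the supply figure are hypothetical.
def allocate_by_priority(rights, supply):
    """rights: list of (holder, priority_year, claim_acre_feet)."""
    allocations = {}
    remaining = supply
    # Earlier priority year = more senior right, served first.
    for holder, year, claim in sorted(rights, key=lambda r: r[1]):
        allocations[holder] = min(claim, remaining)
        remaining -= allocations[holder]
    return allocations

rights = [("Farm A", 1895, 100), ("City B", 1950, 80), ("Farm C", 1920, 60)]
print(allocate_by_priority(rights, supply=180))
# Farm A (1895) is served in full, then Farm C (1920); City B (1950)
# absorbs the entire shortfall.
```

This all-or-nothing seniority ordering is what makes rights heterogeneous: an 1895 right and a 1950 right to the same quantity have very different reliability, and so different prices.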
Political and social barriers: In order for a water market to succeed, multiple factions within society must be able to view water markets as serving social values and objectives. Given the array of societal factions, sometimes with competing values or objectives, the approval needed across multiple groups may be difficult to achieve. Stakeholders involved in the market must agree on and adhere to rules governing trade for effective and efficient Coasean bargaining to occur. Elected political leadership may be unwilling to support water markets, support trade-enabling laws, or raise water prices to reflect scarcity conditions if constituents disapprove. Political leadership must also mutually define the desired policy outcomes of water markets, and water market administration must be feasible and sustainable enough to achieve those outcomes. Finally, although water allocation dedicated to environmental use is increasingly recognized, stigma against "nonconsumptive" water use may persist in communities that have historically viewed water left instream as wasteful because it does not contribute to economic welfare.
Other considerations
Equity: Besides resource allocation principled on economic efficiency, water allocation based on social equity concerns of fairness and water access for all, irrespective of individual ability to pay, is another consideration in water distribution. Distributing water to achieve social equity will come at the expense of economic efficiency if government provides subsidies, free services, or administratively sets water pricing to make water available to those not able to pay the market price for water. Equity is also a concern for water markets in the sense that participating buyers and sellers should perceive equal opportunity for gains in a transaction (which theoretically should occur if water is priced according to the equimarginal principle).
Pricing: Marginal price in a water market should reflect parity between the marginal willingness to pay of all consumers in the market (i.e. the marginal social demand) and the marginal social cost of provision (which accounts for private costs, externalities to third parties, storage and conveyance costs, and resource scarcity). However, in practice, water is often administratively priced by management institutions, which are reluctant to raise prices to reflect water scarcity, so water is under-priced and over-consumed. Because administrative prices are not determined by market conditions, they do not automatically respond to changes in long-term or short-term supply, and can be set at a variety of levels (such as short-run marginal cost or long-run marginal cost, with varying assumptions about fixed or variable demand and costs) that are not Pareto efficient. One challenge in quantifying an accurate social marginal price involves difficulty in measuring the price elasticity of water demand. Determining price elasticity for the agricultural or industrial sector can be difficult because water use might not be metered or water might be free. The marginal value of instream demand requires nonmarket valuation techniques such as recreational demand models, contingent valuation, or hedonic housing models. Yet as the diversion of water offstream increases, the marginal value of water left instream will increase, and nonmarket valuation techniques will only reflect a static price. Municipal water demand elasticity is also difficult to measure because municipal water has historically been under-priced and set at the long-run marginal cost of supply.
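Under the equimarginal principle invoked above, supply is allocated so that every user's marginal willingness to pay equals one common marginal value. Assuming linear marginal-WTP curves p_i(q) = a_i − b_i·q (the parameters below are illustrative, not estimates from any real market), that common value has a closed form:

```python
# Equimarginal allocation sketch with linear marginal-WTP curves
# p_i(q) = a_i - b_i * q. All parameters are illustrative.
def equimarginal_price(users, total_supply):
    """users: list of (a, b) per user; returns the common marginal value p
    at which quantities demanded exhaust supply (interior solution).
    Derivation: sum_i (a_i - p)/b_i = Q  =>
    p = (sum_i a_i/b_i - Q) / sum_i 1/b_i."""
    num = sum(a / b for a, b in users) - total_supply
    den = sum(1.0 / b for a, b in users)
    return num / den

users = [(100.0, 2.0), (80.0, 1.0)]  # (intercept, slope) per user
p = equimarginal_price(users, total_supply=60.0)
quantities = [(a - p) / b for a, b in users]
print(p, quantities)
```

The point of the sketch is that an efficient price equates marginal values across users, which is exactly what an administratively set price, frozen below scarcity value, fails to do.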
Evaluation: Trade volumes may not tell the whole story about the efficiency or importance of a water market. A low number of trades does not necessarily indicate an inefficient water market, nor does a high number indicate efficiency. Actual trade volumes have been relatively low in studied water markets. Outside of the U.S., in Australia, 51 inter-state trades took place between 2000 and 2002. Yet within the Australian state of New South Wales, even though water trading has existed since the 1980s as a way to address droughts, the market is still thin and exists mostly within the irrigation sector. In Chile most trades take place in Santiago or in the desert north. In the U.S., between 1990 and 2000 in 19 western states there were 1,065 sales and 552 leases of water rights, but the majority of the sales were in Colorado in relation to the Colorado-Big Thompson Project. In general, efficient water markets feature homogeneous water shares, many buyers and sellers, ease of entry and exit, and low transaction costs, all of which depend upon the particular market's structure.
Alternatives to water trading markets
Where water markets are either not viable or desired, the following mechanisms may be used to allocate scarce water resources:
Administrative transfers (also called "public allocation")
Forfeiture or abandonment of water rights, as determined by governing institutional law
Government exercise of eminent domain
Legal challenges to existing water allocation
Legislative settlement
Reallocation of water via redesign of large-scale water projects
Marginal-cost pricing
User-based allocation
Water trading by country
Australia
The first time that water access entitlements were separated from land title in Australia was in 1983, when South Australia introduced a permanent water trading scheme. Like many other countries, Australia's irrigation sector was subject to centralised control for more than a century. Many irrigation settlements were placed in inappropriate parts of the landscape where the risks of waterlogging, land salinisation or river salinisation were high and returns from production were low. Farm sizes on irrigated settlements were also initially based on non-commercial criteria like ‘the home maintenance area’ (the maximum area necessary to support one family – as judged by government). Irrigators were in this way condemned to a frugal existence from the start. Changing commodity markets and above all changing irrigation technologies amplified these initial errors and left Australian irrigation with difficult adjustment problems.
Australia's institutions, and rhetoric, are now geared to the market with the benefits of trade between ‘willing sellers’ and ‘willing buyers’ extolled by policymakers. Irrigators who can generate higher returns are now buying water from those who believe they can make more money by selling their water entitlements rather than using them. Nonetheless, the instinct for central planning lives on and some policy makers are tempted to favour those crops deemed to produce high gross values per megalitre when economics teaches that it is marginal valuations that are important. This distinction is critical because many ostensibly water efficient crops have limited markets. Rather than make judgements about what crops should be grown on farms, economic orthodoxy is to let individual irrigators make their own judgements about whether they can profit from their investment in water entitlements. Australian governments mostly shy away from ‘picking winners’. Nevertheless, in popular discussion, there is considerable emphasis on the crops being grown when what matters most for public policy is the amount of water taken from rivers and any externalities associated with irrigation.
In 1994, Australia's National Water Commission took the step of unbundling property rights, separating land from water rights. Upon doing this, steps were taken to increase the efficiency of water distribution. By 2010, the water rights market was valued at A$2.8 billion. Various kinds of market intermediaries facilitate the trade of water, including water brokers, water exchanges and message boards. Decentralized markets are created such that one water exchange does not process all trades. A trade may occur between a private buyer and seller, through a broker or through an exchange. Some brokers may use an exchange to locate buyers or sellers.
The Murray–Darling Basin is one area in Australia studied for its water trading schemes. The Murray–Darling Basin receives approximately 90% of the region's water. In the 1990s, the Australian Government shifted its emphasis from building dams and subsidizing water for area farmers to establishing prices and trading within the water market. Trading for these rights occurs across Australian states, with caps set for each area to ensure that water is not over-extracted from the Basin to another region. This method operates on estimated net benefits, including the return flow to the Basin. Additionally, this water is traded with full cognizance of Australia's highly varying climate. As the second driest continent on Earth, Australia's water allocations are more valuable when distributed as seasonal allocations or temporary trades, to ensure that, should it be necessary, water can be returned to the Murray–Darling Basin region.
The Water Services Association of Australia operates on a volume-metering system. This means that market players do not simply apply to possess the water rights, but instead they are paying for the quantity of water they consume. Yet, recent reports raise concerns regarding over-allocation and the confusion between environmental outcomes and economic efficiency.
The sustainability of the present system for water marketing may be affected by the structure and the conditionalities of marketable rights. While in the US water marketing is limited to effectively used rights, and to historical water consumption, Australian water marketing accepts the marketing of sleeper rights that have not been utilized.
Chile
The Chilean system is characterized by a strongly free-market approach, and has been controversial both in Chile and in international circles. As part of the water resources management in Chile, under the 1981 Water Code (water law), water rights are private property, separate from land, can be freely traded, are subject to minimal state regulation and are regulated by civil law. Under the Code, the Chilean state granted existing water users the property rights for surface water and groundwater without any additional fee. Any new or unallocated water rights are auctioned and then can be sold or transferred at price. During the 1990s, the World Bank and the Inter-American Development Bank actively promoted the Chilean system as an example of effective and efficient water resources management. Other institutions, such as ECLAC (the Economic Commission for Latin America and the Caribbean, United Nations), questioned the structure and conditionalities of Chilean water rights, and consequently the resulting market for water rights, on grounds of efficiency and equity. Like Australia, Chile allows the marketing of unused water rights. While U.S. marketing systems limit transactions to historically consumed waters, according to effective and beneficial utilization, Chile allows the transaction of nominal entitlements, without limitation to effective use and consumption. Water rights are not forfeited if not utilized. This has resulted in the monopolization of water rights on the one hand, and in the trading of nominal entitlements on the other, with negative impacts on sustainability and third parties. A Water Law Reform (2005) partially amended the system, but water marketing in some areas is still affected by sustainability problems. Sustainability may also be affected by public subsidies to irrigation, which are not environmentally assessed.
Although the Chilean model has been recommended for adoption in other Latin American countries, none has yet accepted it in its original form. The proposed transfer of one element of the Chilean model played a role in the 2000 water war in Cochabamba, Bolivia: the provision that awarded ownership of all water resources to the new concessionaire, International Water. This legal change meant that existing users, including peasant farmers and small-scale water supply networks, were immediately made illegal, resulting in widespread angry protests.
In Chile, opinion over the effectiveness and fairness of the water markets model is deeply divided. Specific concerns that have arisen include the hoarding of unused water rights for speculative purposes and the lack of state regulation to ensure that the market works properly and fairly. Some researchers have argued that the model does deliver economic benefits, but other evidence shows that the system does not work well in practice and that poorer water users (such as peasant farmers) have less access to water rights. Some of these concerns led to the amendment of the Water Code in 2005.
Iran
Iran has been in the throes of a water crisis for the past few decades. Population growth, mismanagement of water resources, and changes in precipitation patterns are among the causes that have led Iran to adopt various coping strategies, including water trading, to deal with its water crisis.
United Kingdom
Water trading has been permitted in the UK since 2001. Currently, only the trading of water rights (trading of licenses) is authorised. Some changes to the policies are being investigated by the Environment Agency.
United States
Water trading in the United States varies by state, according to the state's water code, system of water rights, and the governmental bodies involved in regulating water trading. Water trading is practiced more in western states, where states historically have followed the water rights system of prior appropriation and vast regions are arid, so water is naturally scarce. Presented here are some cases of water trading and the relevant regulatory rules and bodies; these cases are not exhaustive.
Arizona
Arizona follows the prior appropriation doctrine for determining water rights. There are three categories of tradable rights in Arizona: surface water rights under Common Law Rights, surface water rights under Statutory Rights, and groundwater rights created by Arizona's 1980 Groundwater Code. The former two surface water rights pertain to any surface water in Arizona excluding flows in the Colorado River (Colorado River water rights are governed extraneously, by the Colorado River Compact). Specifically, appropriated surface water can be: "Waters of all sources, flowing in streams, canyons, ravines or other natural channels, or in definite underground channels, whether perennial or intermittent, floodwaters, wastewater, or surplus water, and of lakes, ponds and springs on the surface" (A.R.S. §45-101). Common Law Rights apply to surface water diversions appropriated prior to the creation of Arizona's 1919 Public Water Code and are senior rights. Statutory Rights apply to any appropriation claimed after 1919, in which case the claimant must apply for and receive a permit from the Arizona Department of Water Resources (ADWR) before diverting claimed surface water. Rights holders must apply appropriated water to beneficial use (recognized as domestic, municipal, irrigation, stockwatering, power, mining, recreation, wildlife and fish, or groundwater recharge) or risk forfeiture of rights (which occurs if water is not applied to beneficial use for 5 or more consecutive years). Arizona permits transfer of surface water rights; however, there are maximum limits to the amount of water transferred and the temporal duration of the transfer (A.R.S. §45-1002), and transfers are subject to review and approval by the ADWR.
In the case of transfers to instream flows to benefit fish, wildlife, or recreation, rights holders may follow a "sever and transfer" process, by which the holder permanently transfers the water right to the State of Arizona or a political subdivision (as trustees of instream flows), pending approval of the ADWR. This type of transfer preserves the priority status of the water right, so that if the right transferred is a senior right, the benefiting instream flow will receive its water allocation before junior rights holders in the case of a water shortage. This transfer process is a boon to ecosystem health and recreational value because wildlife and fish were not recognized as beneficial uses until 1941 and 1962, respectively. Groundwater rights transfers are more restricted relative to surface water. The 1980 Arizona Groundwater Code created jurisdictions called "Active Management Areas" (AMAs) in parts of the state with high water demand, such as Phoenix, Tucson, and Prescott. Groundwater rights owners living outside of AMAs are entitled to a "reasonable" quantity of pumped groundwater that can be applied to beneficial use without waste. Groundwater rights holders outside of AMAs may transfer rights under certain conditions and are rarely permitted to transfer groundwater outside the hydrologic basin. Groundwater transfers within AMAs are also permissible, but are even more restricted, and groundwater regulation in AMAs is different and much stricter than regulation outside of AMAs. Legal rules governing water exchanges in Arizona are codified in Title 45, Chapter 4, of the Arizona Revised Statutes. Water transfers within Arizona are most common in the Phoenix AMA.
California
At its statehood (1850), California adopted the system of English Common Law riparian rights, but with the advent of the California Gold Rush and the eventual abundance of water claims by miners, California also adopted the appropriative rights system one year later. California also observes Pueblo rights, a remnant of Spanish law in modern-day California, which allows an entire town to claim a right to water. There are other rights California observes, such as prescriptive rights and federal reserved rights, but riparian and prior appropriation rights are the two prominent types of rights in the state. Finally, California has observed the doctrine of "reasonable use" for groundwater since 1903. Because of the many water rights California recognizes, its water rights scheme is considered a "plural system". Bearing on water trading, because California adopted riparian rights before appropriative rights, riparian rights have priority over senior appropriative rights. California's 1914 Water Commission Act established a permit system for surface water appropriative rights and created an agency (which would eventually become the California State Water Resources Control Board (SWRCB)) to administer those permits. All water application must meet beneficial use requirements (California Water Code §100) (beneficial use includes aquaculture, domestic use, fire protection, fish and wildlife, crop frost protection, heat control, industrial use, irrigation, mining, municipal, power, recreation, stockwatering, and water quality control), but post-1914 appropriative rights are subject to more scrutiny and regulation by the SWRCB. By law (California Water Code §102), water in California is public property (and therefore a common pool resource); water rights only entitle the holder to the use of water, not ownership of water.
In fact, §104 and §105 of the California Water Code expressly state that the people have a "paramount interest in the use of all water", that the State may control surface and underground water for public use or public protection, and that the State should develop water for "the greatest public benefit". Because of these provisions, and the characteristic of water as a common pool resource, California law requires state agencies to review and approve independent market transfers on behalf of the public. California's Division of Water Rights keeps records of water appropriation and use, and the SWRCB reviews and issues permits, adjudicates rights, investigates complaints, and approves temporary transfers (no longer than 1 year in duration) of post-1914 appropriative rights. Injury to other legal water users, unreasonable effects on fish and wildlife, and unreasonable effects on the overall economy in the county from which water is transferred are items the SWRCB is legally obligated to consider when reviewing a transfer. Chapter 7 of the California Water Code defines water transfers, declares that voluntary water transfers result in efficient use of water that alleviates water shortages, saves capital outlay development costs, and conserves water and energy, and explicitly requires government to assist in voluntary transfers. Chapter 10.5 of the California Water Code states provisions for the process of water transfers for temporary (§1725–1732) and long-term exchanges (over 1 year in duration) (§1735–1737). Long-term exchanges can be subject to review by the Department of Fish and Game as well.
There are hundreds of water transfers in California each year, the majority of which are short-term transfers between agricultural users in the same hydrologic basin. Intra-basin transfers have a relatively low transaction cost because the local jurisdiction water district often owns the water rights, and so is the only body that needs to approve transfers between its farming members (i.e. the SWRCB is not involved). Water transfers also help meet the instream demands; for example, those of the state's Environmental Water Account. Finally, in officially declared emergency situations, the California Department of Water Resources opens a California Drought Water Bank, which buys surplus water allocations from northern California water rights holders and sells and transports those allocations to drought-stricken areas in southern California.
Colorado
Surface water rights in Colorado are administered by the Colorado Division of Water Resources (CDWR) and by the water courts, which are district courts that only hear water matters. To get a surface water right, individuals submit an application to the water courts, and must show intent to divert the water for beneficial use – if there is no opposition, the right will be signed into a decree. The system is prior appropriation and priority based, with some priority dates going back to the 1890s. Transfers of rights require that there will be no adverse effect on other senior or junior rights holders, the result of which is that only the amount of water used consumptively in the past can be transferred. The CDWR administers the river water up to the head gate of a ditch that diverts water from the Colorado River, where a ditch company then controls the allocation of water to shareholders.
Federal projects can overrule this state system, and this is true for the Colorado-Big Thompson Project (CBT). The CBT is a Bureau of Reclamation irrigation project where a water market has developed, based on a proportional rights system. This market has been in operation since the early 1960s, and has well-developed infrastructure to move water within the area of service. There are market prices, brokers, short-term rentals and permanent leases of water in this system, and trades within the agricultural sector and between the agricultural and municipal and industrial sectors.
Water rights are homogeneous, and trades are in one-year allotments of the water supplied by the CBT, with each acre-foot a tradable allotment. Water rights are thus well defined and understood by traders. Supplies are also reliable and the delivery of water assured: users know what they are getting. The CBT was developed to supply water that is supplemental to a user's main supply (to reduce variability in supplies), so in wet years the quota is cut back proportionally for all shareholders to save for drier years. Similarly, if less water is available than expected in a given year, all supplies are cut back proportionally. Additionally, conserved water can be transferred to another use, which is not the case in prior appropriation systems. Water can also be rented to users outside the district through exchanges and replacements, and internal ditch companies also trade non-CBT water, though transaction costs are higher for these. Rental prices are less than allotment prices, because of the higher risk of unavailability, and rentals make up 30% of transactions.
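The proportional rationing used in the CBT system contrasts with priority rationing: in a shortage, every shareholder's allotment is scaled by the same fraction. A minimal sketch, with hypothetical shareholdings:

```python
# Sketch of proportional rationing as used in share-based systems like the
# CBT: every shareholder is cut back by the same fraction.
# Shareholdings and the supply figure are hypothetical.
def proportional_allocation(shares, available):
    """shares: dict of holder -> allotted acre-feet; scales pro rata."""
    total = sum(shares.values())
    quota = min(1.0, available / total)  # fraction of each allotment delivered
    return {holder: af * quota for holder, af in shares.items()}

shares = {"Irrigator A": 100.0, "City B": 50.0, "Irrigator C": 50.0}
print(proportional_allocation(shares, available=150.0))
# Every holder receives 75% of their allotment in this shortage year.
```

Because every share bears the same proportional risk, shares are interchangeable, which is one reason the CBT market trades in homogeneous, easily priced units.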
The system is administered by the Northern Colorado Water Conservation District (NCWCD) which was created by the Water Conservancy Act, and it operates independently of the CDWR. The NCWCD puts parties in contact to facilitate trade and reviews applications which must be submitted to the district to make sure water will be for beneficial use as well as to guard against speculative purchases. The transfer process takes 4 to 6 weeks, is relatively simple and straightforward, and does not require the approval of a state engineer, significantly reducing the time and costs involved. Sometimes auctions will be advertised, but usually are negotiated between traders directly. Transaction costs are lower in the CBT which has only the NCWCD as its governance structure to contract with the Bureau of Reclamation for both agricultural and urban users. Also, in contrast to other systems, impacts to downstream third parties do not have to be considered since there are no required downstream return flows and there is not a no-injury rule in place.
The system is well regarded as a model for other markets and credited with having allowed northern Colorado to adjust to short- and long-term shifts in water demand and supply. In 1962, irrigators owned 82% of water allotments, down to 64% in 1982 and 55% in 1992, but were still able to use 71% of the water in 1992 through water leasebacks. Between 1970 and 1993 there were 2,698 transactions transferring one-third of the water allotments to another use or for use at a different location.
New Mexico
The State of New Mexico has entrusted the governance of its water rights to a State-appointed official, the State Engineer. The role of this position is not only to facilitate the exchanges of water rights that occur, but also to monitor aquifer levels as resources are consumed. In 2003, the state of New Mexico implemented a Water Plan, which sought to protect the allotment of water rights, but also to consider the associated water supply and quality, the relationships between sellers and buyers, State requirements, and the promotion of future investment in infrastructure.
New Mexico must be responsible for the management of its own resource supply, as the inability to do so would require the surrender of authority to the federal level of administration. To ensure effective guidance in sales and trades, many state departments and commissions are engaged in the effort. Each of these parties is delegated roles and responsibilities aimed at the best planning and management of State and Regional water exchanges. The New Mexico Interstate Stream Commission and the Office of the State Engineer are two leading parties, but also included in this council are the New Mexico Environment Department, the Water Quality Control Commission, the New Mexico Acequia Commission, and the Water Trust Board, among others. These groups work in conjunction with one another to implement water rights, monitor potential pollutants, develop databases and information systems, and fulfill other roles that lead to efficient use of New Mexico's water resources.
The state of New Mexico honors the system of prior-appropriation water rights. In this "first in time, first in right" system, many of the original recipients of water access rights were Pueblos and Tribes. As sovereign states, these groups are entitled to their senior rights, which are then governed on a federal level rather than state. Because of the uniqueness of these rights, any policy decisions that may affect these Tribes and Pueblos must be presented by the State for discussion with these parties.
The state of New Mexico mandates that any rights with a common hydrological source be formally adjudicated through a court proceeding, documenting the full legal and physical quantification of the rights. This is accomplished with the purpose of assisting the State Water Engineer in allocating water allotments across the spread of State demands and those with senior rights. The rights held by Pueblos and Tribes must also be adjudicated to establish the legal parameters of their water access. Under the McCarran Amendment, these rights must be defined and quantified under federal law in order to be evaluated as part of the stream-associated water rights administration.
Because of the nearly complete allocation of surface waters in New Mexico, efforts have been made to increase the water supply available to the expanding needs of the State. In order to do this, groundwater is reserved in aquifers that are connected to rivers throughout the state. Reclaiming these stores would diminish the river flows which would thus reduce the water available to senior rights holders. The solution here is the purchase and retirement of these senior surface rights. This will begin a new emphasis on the groundwater resources that have been stored for future access.
The Office of the State Engineer occupies a critical role in New Mexico's water trading system. This administrator establishes a prior-appropriation assessment of water rights as part of a priority administration plan of action. The State Engineer is provided with funding for investments in technology, such as water measuring and metering, GIS units, surface- and groundwater models, and manuals.
To aid in the fulfillment of these and additional requirements of the State Water Engineer, New Mexico has recently implemented an "active resource management" plan. In this platform, a state-appointed staff is assigned roles in identifying, measuring, and metering water rights, facilitating transfers, and appointing district water masters. Water masters operate within established water districts in administering rights as necessary. Each basin team includes a project manager, hydrologist, attorney, communication manager, personnel manager, and technical support staff.
The full responsibilities of the active resource management plan and the Office of the State Engineer are diagrammed in the 2003 State Water Plan, as well as the 2006 Progress Report and the 2008 Review and Proposed Update. The goals of the proposal are clearly identified and accompanied by methods of execution and summaries of public opinion. Though comparatively young, the program aggressively pursues efficient allocation and use of New Mexico's water supply.
New Mexico considers all water to be public property. The right to use it, however, is a possession that may be purchased or leased. Once rights are allocated to a party, failure to put them to beneficial use for a period of time (commonly four years) may lead to their being reclaimed by the State. Upon reclamation, the rights may be sold or leased to another interested party. Rights may be obtained by applying for a permit through the Office of the State Engineer or through a water attorney.
Texas
Overall, the water rights situation in Texas is similar to that of other states where water rights have been clearly defined. The Texas Supreme Court's 2012 decision in Edwards Aquifer Authority v. Day and McDaniel, upholding landowners' right of capture, set the foundation for the trading of groundwater rights (surface water is regulated through a separate mechanism). Texas Water Exchange, founded in 2013, is the only public marketplace for trading groundwater rights in the state and, at present, in the US. Traditional methods of trading water rights through water attorneys also still exist.
Sample of economic applications of policy tools
Coase theorem
Between producers and consumers, externalities may arise. These may take the form of damages to either party, one of whom may or may not hold the property rights concerning the externality. Under the assumptions of perfect information, both parties being price-takers, costless court systems, profit maximization by producers and utility maximization by consumers, no income or wealth effects, and no transaction costs, the parties can negotiate an efficient level of compensation. Although these assumptions are rarely met simultaneously, an arrangement can still often be made between the parties.
In the case of water trading, an example occurs when those accessing their water rights infringe on others' rights of another nature. A Coasean bargain would unfold if the damaged parties offered to pay the rights holders to refrain from exercising part of their rights; the payment would fall between the rights holders' forgone benefits and the victims' damages. Another example of the Coase theorem is a water rights owner paying a landowner for access to a body of water on the landowner's property. An appropriate price will fall between the cost of damages incurred by the landowner and the benefit to the individual exercising the rights.
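The bargaining range described above can be sketched numerically. The figures below are hypothetical, and the function simply encodes the condition under which a mutually beneficial payment exists:

```python
def coasean_bargaining_range(holder_lost_benefit, victim_damages):
    """Return the (min, max) payment range for a mutually beneficial
    Coasean bargain, or None when no bargain improves on the status quo.

    The victim pays the rights holder to refrain from exercising part of
    their water rights; a deal is only possible when the victim's damages
    exceed the holder's forgone benefit."""
    if victim_damages <= holder_lost_benefit:
        return None  # cheaper for the victim to simply bear the damage
    return (holder_lost_benefit, victim_damages)

# Hypothetical figures: diverting the water is worth $40,000 to the
# rights holder but causes $65,000 in downstream damage.
print(coasean_bargaining_range(40_000, 65_000))  # -> (40000, 65000)
print(coasean_bargaining_range(70_000, 65_000))  # -> None
```

Any payment inside the returned range leaves both parties better off than the status quo, which is why, absent transaction costs, the bargain is struck regardless of who initially holds the right.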
Pareto efficiency
An underlying objective of water trading is to achieve Pareto efficiency: a distribution of water rights in which no further reallocation can make one party better off without making another party worse off. The optimal allocation occurs when water is allocated to those who value it most, presuming non-drought years.
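Allocating water to those who value it most can be illustrated with a simple greedy sketch. The users, per-unit values, and quantities below are hypothetical:

```python
def allocate_water(supply_af, bids):
    """Allocate a fixed supply (acre-feet) to the users who value it most,
    a toy illustration of moving toward an efficient allocation.

    bids: list of (user, value_per_af, quantity_af) tuples."""
    allocation = {}
    remaining = supply_af
    # Serve bidders in descending order of willingness to pay.
    for user, value, qty in sorted(bids, key=lambda b: -b[1]):
        granted = min(qty, remaining)
        if granted > 0:
            allocation[user] = granted
            remaining -= granted
    return allocation

bids = [("farm_a", 120, 600), ("city", 450, 300), ("farm_b", 200, 400)]
# With 800 acre-feet: city and farm_b are served fully, farm_a partially.
print(allocate_water(800, bids))  # -> {'city': 300, 'farm_b': 400, 'farm_a': 100}
```

Once the highest-value users are served, no swap of water between two users can raise one's value without lowering the other's by at least as much, which is the Pareto condition in miniature.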
Pigouvian tax
A Pigouvian fee is an emission fee exactly equal to the aggregate marginal damage caused by the emissions, evaluated at the efficient level of pollution. In water trading, the negative externalities frequently take the form of third-party damages: water is displaced, pipelines are built, or communities change as a result of the trade. It has been proposed that damaged parties be compensated through a tax on water trading. This tax would be embedded in the cost of purchasing a short-term water transfer, and the generated revenue would accumulate in a designated fund. At the end of the trading year, harmed parties could then file for compensation based on the nature and severity of their damages.
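The fee-and-fund mechanism above can be sketched as follows. The per-acre-foot rate and volumes are hypothetical; in principle the rate would be set to the marginal third-party damage at the efficient trading level:

```python
class TransferCompensationFund:
    """Sketch of a Pigouvian-style fee on short-term water transfers.

    A per-acre-foot fee accrues to a fund from which harmed parties
    claim compensation at the end of the trading year."""

    def __init__(self, fee_per_af):
        self.fee_per_af = fee_per_af
        self.balance = 0.0

    def record_transfer(self, volume_af):
        # The fee is embedded in the purchase cost of the transfer.
        self.balance += self.fee_per_af * volume_af

    def pay_claim(self, amount):
        # Claims are paid out of accumulated fee revenue.
        paid = min(amount, self.balance)
        self.balance -= paid
        return paid

fund = TransferCompensationFund(fee_per_af=15.0)
fund.record_transfer(2_000)    # 2,000 acre-feet traded; fund holds 30,000
print(fund.pay_claim(12_500))  # -> 12500.0
print(fund.balance)            # -> 17500.0
```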
See also
Drinking water
Tap water
Water law
Water purification
References
External links
Water trading overview – Government of South Australia
Summit agrees on permanent water trading – ABC South Australia
We Lead the World in Water – Courier Mail article
Cave Review – competition and innovation in water markets
Water
Trade by commodity
PKS 0637-752

PKS 0637-752 is a quasar located six billion light-years away in the constellation Mensa. It is noted for hosting one of the brightest and largest known astrophysical jets, at a redshift of z = 0.651. Discovered in X-rays by the Einstein Observatory in 1980, PKS 0637-752 was the first celestial object observed by the Chandra X-ray Observatory upon its commissioning on July 23, 1999.
Characteristics
PKS 0637-752 contains an active galactic nucleus. It is classified as a blazar, a type of active galaxy with a relativistic jet pointing toward Earth. Like other quasars, PKS 0637-752 is extremely luminous, radiating roughly 10 trillion times the Sun's output, with a supermassive black hole at its center.
X-ray jet
PKS 0637-752 contains an X-ray jet with a high γ-ray flux, studied by the Hubble Space Telescope and Spitzer. The jet extends over ≥100 kiloparsecs and has a luminosity of ~10^44.6 erg s⁻¹. It produces X-ray emission through inverse Compton scattering of the cosmic microwave background.
Further observations by Hubble also found three small knots coincident with the peaks of the X-ray and radio emission. Observations made with the Australia Telescope Compact Array show these knots to be quasi-periodic, with a separation of ~1.1 arcsec. Using two classes of models, astronomers calculated the jet power of PKS 0637-752 to be Q ~ 10^46 erg s⁻¹ and the jet-engine modulation timescale to be 2 × 10^3 yr < τ < 3 × 10^5 yr. This evidence suggests that the jet structure in the quasar might result from an unstable accretion disk undergoing limit-cycle behavior.
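For a sense of scale, the ~1.1 arcsec knot separation can be converted to a projected physical distance at z = 0.651. The sketch below assumes a flat ΛCDM cosmology with illustrative parameters (H0 = 70 km/s/Mpc, Ωm = 0.3, ΩΛ = 0.7) and integrates the comoving distance numerically:

```python
import math

def kpc_per_arcsec(z, h0=70.0, omega_m=0.3, omega_l=0.7, steps=10_000):
    """Projected physical scale (kpc per arcsecond) at redshift z in a
    flat Lambda-CDM cosmology, via trapezoidal integration."""
    c = 299_792.458          # speed of light, km/s
    hubble_dist = c / h0     # Hubble distance, Mpc

    def e(zp):               # dimensionless Hubble parameter E(z)
        return math.sqrt(omega_m * (1 + zp) ** 3 + omega_l)

    dz = z / steps
    integral = sum((1 / e(i * dz) + 1 / e((i + 1) * dz)) / 2 * dz
                   for i in range(steps))
    d_c = hubble_dist * integral   # comoving distance, Mpc
    d_a = d_c / (1 + z)            # angular-diameter distance, Mpc
    return d_a * (math.pi / 180 / 3600) * 1_000  # kpc per arcsec

scale = kpc_per_arcsec(0.651)
print(round(scale, 1))        # ~6.9 kpc per arcsecond
print(round(1.1 * scale, 1))  # projected knot separation in kpc
```

At roughly 7 kpc per arcsecond, a jet spanning tens of arcseconds readily exceeds the ~100 kpc extent quoted above.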
References
Quasars
Mensa (constellation)
Principal Galaxies Catalogue objects
Blazars
Astronomical objects discovered in 1980
IRAS catalogue objects
Zoning

In urban planning, zoning is a method in which a municipality or other tier of government divides land into "zones", each of which has a set of regulations for new development that differs from other zones. Zones may be defined for a single use (e.g. residential, industrial), they may combine several compatible activities by use, or in the case of form-based zoning, the differing regulations may govern the density, size and shape of allowed buildings whatever their use. The planning rules for each zone determine whether planning permission for a given development may be granted. Zoning may specify a variety of outright and conditional uses of land. It may indicate the size and dimensions of lots that land may be subdivided into, or the form and scale of buildings. These guidelines are set in order to guide urban growth and development.
Zoning is the most common regulatory urban planning method used by local governments in developed countries. Exceptions include the United Kingdom and the City of Houston, Texas.
Most zoning systems have a procedure for granting variances (exceptions to the zoning rules), usually because of some perceived hardship caused by the particular nature of the property in question.
History
The origins of zoning districts can be traced back to antiquity. The ancient walled city was the predecessor for classifying and regulating land, based on use. Outside the city walls were the undesirable functions, which were usually based on noise and smell. The space between the walls is where unsanitary and dangerous activities occurred such as butchering, waste disposal, and brick-firing. Within the walls were civic and religious places, and where the majority of people lived.
Beyond distinguishing between urban and non-urban land, most ancient cities further classified land types and uses inside their walls. This was practiced in many regions of the world – for example, in China during the Zhou Dynasty (1046 – 256 BC), in India during the Vedic Era (1500 – 500 BC), and in the military camps that spread throughout the Roman Empire (31 BC – 476 AD).
Throughout the Age of Enlightenment and Industrial Revolution, cultural and socio-economic shifts led to the rapid increase in the enforcement and invention of urban regulations. The shifts were informed by a new scientific rationality, the advent of mass production and complex manufacturing, and the subsequent onset of urbanisation. Industry leaving the home reshaped modern cities. The definition of home was tied to the definition of economy, which caused a much greater mixing of uses within the residential quarters of cities.
Separation between uses is a feature of many planned cities designed before the advent of zoning. A notable example is Adelaide in South Australia, whose city centre, along with the suburb of North Adelaide, is surrounded on all sides by a park, the Adelaide Park Lands. The park was designed by Colonel William Light in 1836 in order to physically separate the city centre from its suburbs. Low density residential areas surround the park, providing a pleasant walk between work in the city within and the family homes outside.
Sir Ebenezer Howard, founder of the garden city movement, cited Adelaide as an example of how green open space could be used to prevent cities from expanding beyond their boundaries and coalescing. His design for an ideal city, published in his 1902 book Garden Cities of To-morrow, envisaged separate concentric rings of public buildings, parks, retail space, residential areas and industrial areas, all surrounded by open space and farmland. All retail activity was to be conducted within a single glass-roofed building, an early concept for the modern shopping centre inspired by the Crystal Palace.
However, these planned or ideal cities were static designs embodied in a single masterplan. What was lacking was a regulatory mechanism to allow the city to develop over time, setting guidelines to developers and private citizens over what could be built where. The first modern zoning systems were applied in the United States with the Los Angeles zoning ordinances of 1904 and the New York City 1916 Zoning Resolution.
Types
There are a great variety of zoning types. Some focus on regulating building form and the relation of buildings to the street with mixed uses, known as form-based zoning; others on separating land uses, known as use-based zoning; still others combine the two.
The main approaches include use-based, form-based, performance and incentive zoning. There are also several additional zoning provisions used in combination with the main approaches.
Main approaches to zoning
Use-based zoning
Use-based or functional zoning systems can comprise single-use zones, mixed-use zones—where a compatible group of uses is allowed to co-exist—or a combination of both single- and mixed-use zones in one system.
Single-use zoning
The primary purpose of single-use zoning is to geographically separate uses that are thought to be incompatible. In practice, zoning is also used to prevent new development from interfering with existing uses and/or to preserve the character of a community.
Single-use zoning is where only one kind of use is allowed per zone, or district. It is also known as exclusionary zoning or, in the United States, as Euclidean zoning because of a court case in Euclid, Ohio, Village of Euclid, Ohio v. Ambler Realty Co., which established its constitutionality. It has been the dominant system of zoning in North America, especially the United States, since its first implementation.
Commonly defined single-use districts include: residential, commercial, and industrial. Each category can have a number of sub-categories, for example, within the commercial category there may be separate districts for small retail, large retail, office use, lodging and others, while industrial may be subdivided into heavy manufacturing, light assembly and warehouse uses. Special districts may also be created for purposes like public facilities, recreational amenities, and green space.
The application of single-use zoning has led to the distinctive form of many cities in the United States, Canada, Australia, and New Zealand, in which a very dense urban core, often containing skyscrapers, is surrounded by low density residential suburbs, characterised by large gardens and leafy streets. Some metropolitan areas such as Minneapolis–Saint Paul and Sydney have several such cores.
Mixed-use zoning
Mixed-use zoning combines residential, commercial, office, and public uses into a single space. Mixed-use zoning can be vertical, within a single building, or horizontal, involving multiple buildings.
Planning and community activist Jane Jacobs wrote extensively on the connections between the separation of uses and the failure of urban renewal projects in New York City. She advocated dense mixed-use developments and walkable streets. In contrast to villages and towns, in which many residents know one another, and low-density outer suburbs that attract few visitors, cities and inner city areas have the problem of maintaining order between strangers. This order is maintained when, throughout the day and evening, there are sufficient people present with eyes on the street. This can be accomplished in successful urban districts that have a great diversity of uses, creating interest and attracting visitors. Jacobs' writings, along with increasing concerns about urban sprawl, are often credited with inspiring the New Urbanism movement.
To accommodate the New Urbanist vision of walkable communities combining cafés, restaurants, offices and residential development in a single area, mixed-use zones have been created within some zoning systems. These still use the basic regulatory mechanisms of zoning, excluding incompatible uses such as heavy industry or sewage farms, while allowing compatible uses such as residential, commercial and retail activities so that people can live, work and socialise within a compact geographic area.
The mixing of land uses is common throughout the world. Mixed-use zoning has particular relevance in the United States, where it is proposed as a remedy to the problems caused by widespread single-use zoning.
Form-based zoning
Form-based or intensity zoning regulates not the type of land use, but the form that land use may take. For instance, form-based zoning in a dense area may insist on low setbacks, high density, and pedestrian accessibility. Form-based codes (FBCs) are designed to directly respond to the physical structure of a community in order to create more walkable and adaptable environments.
Form-based zoning codes have four main elements: a regulating plan, public standards, building standards, and precise definitions of technical terms. Form-based codes recognize the interrelated nature of all components of land-use planning—zoning, subdivision, and public works—and integrate them to define districts based on the community's desired character and intensity of development.
The French planning system is mostly form-based; zones in French cities generally allow many types of uses. The city of Paris has used its zoning system to concentrate high-density office buildings in the district of La Défense rather than allow heritage buildings across the city to be demolished to make way for them, as is often the case in London or New York. The construction of the Montparnasse Tower in 1973 led to an outcry. As a result, two years after its completion the construction of buildings over seven storeys high in the city centre was banned.
Performance zoning
Performance zoning, also known as flexible or impact zoning or effects-based planning, was first advocated by Lane Kendig in 1973. It uses performance-based or goal-oriented criteria to establish review parameters for proposed development projects. Performance zoning may use a menu of compliance options where a property developer can earn points or credits for limiting environmental impacts, including affordable housing units, or providing public amenities. In addition to the menu and points system, there may be additional discretionary criteria included in the review process. Performance zoning may be applied only to a specific type of development, such as housing, and may be combined with a system of use-based districts.
Performance zoning is flexible, logical, and transparent while offering a form of accountability. These qualities are in contrast with the seemingly arbitrary nature of use-based zoning. Performance zoning can also fairly balance a region's environmental and housing needs across local jurisdictions. Performance zoning balances principles of markets and private property rights with environmental protection goals. However, performance zoning can be extremely difficult to implement due to the complexity of preparing an impact study for each project, and can require the supervising authority to exercise a lot of discretion. Performance zoning has not been adopted widely in the US.
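The menu-of-options mechanism can be sketched as a points table. The menu entries, point values, and approval threshold below are hypothetical, not drawn from any actual ordinance:

```python
# Hypothetical compliance menu: a proposal earns points for each
# mitigation or amenity it provides, and must reach a threshold to
# qualify for administrative (non-discretionary) approval.
COMPLIANCE_MENU = {
    "stormwater_retention": 3,
    "affordable_units_10pct": 4,
    "public_plaza": 2,
    "transit_access_improvements": 3,
}
APPROVAL_THRESHOLD = 7

def score_proposal(features):
    """Sum the points earned by a proposed development's features."""
    return sum(COMPLIANCE_MENU.get(f, 0) for f in features)

proposal = ["stormwater_retention", "affordable_units_10pct", "public_plaza"]
print(score_proposal(proposal))                        # -> 9
print(score_proposal(proposal) >= APPROVAL_THRESHOLD)  # -> True
```

The transparency noted above comes from the fact that both the menu and the threshold are published in advance, so a developer can compute the outcome before applying.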
Incentive zoning
Incentive zoning allows property developers to develop land more intensively, such as with greater density or taller buildings, in exchange for providing some public benefits, such as environmental amenities or affordable housing units. The public benefits most often incentivised by US cities are "mixed-use development, open space conservation, walkability, affordable housing, and public parks."
Incentive zoning allows for a high degree of flexibility, but may be complex to administer. The more a proposed development takes advantage of incentive criteria, the more closely it has to be reviewed on a discretionary basis. The initial creation of the incentive structure in order to best serve planning priorities also may be challenging and often requires extensive ongoing revision to maintain balance between incentive magnitude and value given to developers. Incentive zoning may be most effective in communities with well-established standards and where demand for both land and for specific amenities is high. However, hidden costs may still offset its benefits. Incentive zoning has also been criticized for increasing traffic, reducing natural light, and offering developers larger rewards than those reaped by the public.
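A common incentive structure is a density bonus for affordable housing. The ratios below are hypothetical and not taken from any particular city's ordinance:

```python
def bonus_units(base_units, affordable_units, bonus_per_affordable=2,
                max_bonus_ratio=0.35):
    """Sketch of a density-bonus incentive: each affordable unit earns
    the developer extra market-rate units, capped at a fixed share of
    the base density so the bonus cannot grow without limit."""
    cap = int(base_units * max_bonus_ratio)
    return min(affordable_units * bonus_per_affordable, cap)

print(bonus_units(100, 10))  # -> 20 extra units allowed
print(bonus_units(100, 30))  # -> 35 (capped, despite 60 earned)
```

The cap illustrates the balancing problem mentioned above: set the ratio too low and no developer opts in; set it too high and the public amenity is outweighed by the impacts of the added density.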
Additional provisions
Additional zoning provisions exist that are not their own distinct types of zoning but seek to improve existing varieties through the incorporation of flexible practices and other elements such as information and communication technologies (ICTs).
Smart zoning
Smart zoning is a broad term that consists of several alternatives to use-based zoning that incorporate information and communication technologies. There are a number of different techniques to accomplish smart zoning. Floating zones, cluster zoning, and planned unit developments (PUDs) are possible—even as the conventional use-based code exists—or the conventional code may be completely replaced by a smart performance or form-based code, as the city of Miami did in 2019. The incorporation of ICTs to measure metrics such as walkability, and the flexibility and adaptability that smart zoning can provide, have been cited as advantages of smart zoning over "non-smart" performance or form-based codes.
Floating zones
Floating zones describe a zoning district's characteristics and codify requirements for its establishment, but its location remains unspecified until conditions exist to implement that type of zoning district. When the criteria for implementation of a floating zone are met, the floating zone ceases "to float" and its location is established by a zoning amendment.
Cluster zoning
Cluster zoning permits residential uses to be clustered more closely together than normally allowed, thereby leaving substantial land area to be devoted to open space. Cluster zoning has been favored for its preservation of open space and reduction in construction and utility costs via consolidation, although existing residents may often disapprove due to a reduction in lot sizes.
Planned unit development (PUD)
The term planned unit development (PUD) can refer either to the regulatory process or to the development itself. A PUD groups multiple compatible land uses within a single unified development. A PUD can be residential, mixed-use, or a larger master-planned community. Rather than being governed by standard zoning ordinances, the developer negotiates terms with the local government. At best, a PUD provides flexibility to create convenient ways for residents to access commercial and other amenities. In the US, residents of a PUD have an ongoing role in management of the development through a homeowner's association.
Pattern zoning
Pattern zoning is a zoning technique in which a municipality provides licensed, pre-approved building designs, typically with an expedited permitting process. Pattern zoning is used to reduce barriers to housing development, create more affordable housing, reduce burdens on permit-review staff, and create quality housing designs within a certain neighborhood or jurisdiction. Pattern zoning may also be used to promote certain building types such as missing middle housing and affordable small-scale commercial properties. In some cases, a municipality purchases design patterns and constructs the properties themselves while in other cases the municipality offers the patterns for private development.
Hybrid zoning
A hybrid zoning code combines two or more approaches, often use-based and form-based zoning. Hybrid zoning can be used to introduce form and design considerations into an existing community's zoning without completely rewriting the zoning ordinance.
Composite zoning is a particular type of hybrid zoning that combines use, form, and site design components:
the use component establishes how land can be used within a district, as in use-based or functional zoning;
the form (also known as architectural) component sets standards for building design, such as height and facades;
the site design component specifies how buildings are situated on the site, such as setbacks and open space.
An advantage of composite zoning is the ability to create flexible zoning districts for smoother transitions between adjacent properties with different uses.
Inclusionary zoning
Inclusionary zoning refers to policies to increase the number of housing units within a development that are affordable to low and middle-income households. These policies can be mandatory as part of performance zoning or based on voluntary incentives, such as allowing greater density of development.
Overlay zoning
An overlay zone is a zoning district that overlaps one or more zoning districts to address a particular concern or feature of that area, such as wetlands, historic buildings or transit-oriented development. Overlay zoning has the advantage of providing targeted regulation to address a specific issue, such as a natural hazard, without having to significantly rewrite an existing zoning ordinance. However, development of overlay zoning regulation often requires significant technical expertise.
Transferable development rights
Transferable development rights, also known as transfer of development credits and transferable development units, are based on the concept that with land ownership comes the right of use of land, or land development. These land-based development rights can, in some jurisdictions, be used, unused, sold, or otherwise transferred by the owner of a parcel. These are typically used to transfer development rights from rural areas (sending sites) to urban areas (receiving sites) with more demand and infrastructure to support development.
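The sending-site/receiving-site mechanism can be sketched as a small ledger. The parcels and credit counts are hypothetical:

```python
# Minimal sketch of a transferable-development-rights ledger: credits
# severed from a rural "sending" parcel are retired there and applied
# to an urban "receiving" parcel, shifting density where demand is.
parcels = {
    "rural_farm":   {"credits": 12, "role": "sending"},
    "urban_infill": {"credits": 0,  "role": "receiving"},
}

def transfer_credits(ledger, sender, receiver, amount):
    if ledger[sender]["role"] != "sending":
        raise ValueError("credits may only leave a sending site")
    if ledger[sender]["credits"] < amount:
        raise ValueError("insufficient credits")
    ledger[sender]["credits"] -= amount    # development right retired here
    ledger[receiver]["credits"] += amount  # extra density permitted here
    return ledger

transfer_credits(parcels, "rural_farm", "urban_infill", 8)
print(parcels["rural_farm"]["credits"],
      parcels["urban_infill"]["credits"])  # -> 4 8
```

Because total credits are conserved, the overall development capacity of the jurisdiction is unchanged; only its location shifts.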
Spot zoning
Spot zoning is a controversial practice in which a small part of a larger zoning district is rezoned in a way that is not consistent with the community's broader planning process. While a jurisdiction can rezone even a single parcel of land in some cases, spot zoning is often disallowed when the change would conflict with the policies and objectives of existing land-use plans. Other factors that may be considered in these cases are the size of the parcel, the zoning categories involved, how adjacent properties are zoned and used, and expected benefits and harms to the landowner, neighbors, and community.
Conditional zoning
Conditional zoning is a legislative process in which site-specific standards and conditions become part of the zoning ordinance at the request of the property owner. The conditions may be more or less restrictive than the standard zoning. Conditional zoning can be considered spot zoning and can be challenged on those grounds.
Conditional zoning should not be confused with conditional-use permits (also called special-use permits), a quasi-judicial process that enables land uses that, because of their special nature, may be suitable only in certain locations, or when arranged or operated in a particular manner. Uses which might be disallowed under current zoning, such as a school or a community center, can be permitted via conditional-use permits.
Contract zoning
Contract zoning is a controversial practice in which there is a bilateral agreement between a property owner and a local government to rezone a property in exchange for a commitment from the developer. It typically involves loosening restrictions on how the property can be used. Contract zoning is controversial and sometimes prohibited because it deviates from the broader planning process and has been considered an illegal bargaining away of the government's police powers to enforce zoning.
Fiscal zoning
Fiscal zoning is a controversial practice in which local governments use land use regulation, including zoning, to encourage land uses that generate high tax revenue and exclude uses that place a high demand on public services.
Effectiveness and criticism
Environmental activists argue that putting everyday uses out of walking distance of each other leads to an increase in traffic, since people have to own cars in order to live a normal life where their basic human needs are met, and get in their cars and drive to meet their needs throughout the day. Single-use zoning and urban sprawl have also been criticized as making work–family balance more difficult to achieve, as greater distances need to be covered in order to integrate the different life domains. These issues are especially acute in the United States, with its high level of car usage combined with insufficient or poorly maintained urban rail and metro systems.
Some economists claim that zoning laws work against economic efficiency, reduce responsiveness to consumer demand, and hinder development in a free economy, as poor zoning restrictions prevent more efficient use of a given area. Even without zoning restrictions, a landfill, for example, would likely gravitate to cheaper land rather than a residential area. Single-use zoning laws can get in the way of creative developments like mixed-use buildings and can even stop harmless activities like yard sales. Houston's example of non-zoning, with no restrictions on particular land uses but with other development codes and private deed restrictions, shows a combination of private and public planning.
Other critics argue that zoning laws disincentivize housing provision, which increases housing costs and decreases productive economic output. For example, a 2017 study showed that if all states deregulated their zoning laws only halfway to the level of Texas, a state known for light zoning regulation, their GDP would increase by 12 percent due to more productive workers and greater opportunity. Furthermore, critics note that zoning impedes those who wish to provide charitable housing. For example, in 2022, Gloversville's Free Methodist Church in New York wished to provide 40 beds for homeless people during −4 degree weather but was prevented from doing so.
Corruption is another challenge for zoning. Some have argued that zoning laws increase economic inequality, and empirical effectiveness estimates show that some zoning approaches can contribute to housing crises.
Alternatives
In Houston, Texas, the lack of a local zoning ordinance means that property owners make heavy use of deed restrictions to prevent unwanted development. This practice is sometimes known as "private zoning". Non-zoned land regulations can still include requirements like minimum lot size and setbacks.
By country
Australia
The legal framework for land-use zoning in Australia is established by the States and Territories, so each State or Territory has different zoning rules. Land-use zones are generally defined at the local government level, most often in instruments called planning schemes. In reality, however, state governments retain the ability to overrule local decision-making in all cases. Administrative appeal processes, such as VCAT in Victoria, exist to challenge decisions.
Statutory planning, otherwise known as town planning, development control or development management, refers to the part of the planning process that is concerned with the regulation and management of changes to land use and development. Planning and zoning have a significant political dimension, with governments often criticized for favouring developers; NIMBYism is also very prevalent.
Canada
In Canada, land-use control is a provincial responsibility deriving from the constitutional authority over property and civil rights. This authority had been granted to the provinces under the British North America Acts of 1867 and was carried forward in the Constitution Act, 1982. The zoning power relates to real property, or land and the improvements constructed thereon that become part of the land itself (in Québec, immeubles). The provinces empowered the municipalities and regions to control the use of land within their boundaries, letting the municipalities establish their own zoning by-laws. There are provisions for control of land use in unorganized areas of the provinces. Provincial tribunals are the ultimate authority for appeals and reviews.
France
In France, the Code of Urbanism or Code de l’urbanisme (called the Town Planning Code), a national law, guides regional and local planning and outlines procedures for obtaining building permits. Unlike England where planners must use their discretion to allow use or building type changes, private development in France is permitted as long as the developer follows the legally-binding regulations.
Japan
Zoning districts are classified into twelve use zones. Each zone determines a building's shape and permitted uses. A building's shape is controlled by zonal restrictions on allowable floor area ratio and height (in absolute terms and in relation with adjacent buildings and roads). These controls are intended to allow adequate light and ventilation between buildings and on roads. Instead of single-use zoning, zones are defined by the "most intense" use permitted. Uses of lesser intensity are permitted in zones where higher intensity uses are permitted but higher intensity uses are not allowed in lower intensity zones.
New Zealand
New Zealand's planning system is grounded in effects-based Performance Zoning under the Resource Management Act.
Philippines
Zoning and land use planning in the Philippines is governed by the Department of Human Settlements and Urban Development (DHSUD) and previously by the Housing and Land Use Regulatory Board (HLURB), which lays out national zoning guidelines and regulations, and oversees the preparation and implementation of comprehensive land use plans (CLUPs) and zoning ordinances by city and municipal governments under their mandate in the Local Government Code of 1991 (Republic Act No. 7160).
The present zoning scheme used in the Philippines is detailed in the HLURB's Model Zoning Ordinance published in 2014, which outlines 26 basic zone types based on primary usage and building regulations (as defined in the National Building Code), and also includes public domain and water bodies within the municipality's jurisdiction. Local governments may also add overlays identifying special use zones such as areas prone to natural disasters, ancestral lands of indigenous peoples (IPs), heritage zones, ecotourism areas, transit-oriented developments (TODs), and scenic corridors. Residential and commercial zones are further subdivided into subclasses defined by density, commercial zones also allow for residential uses, and industrial zones are subdivided by their intensity and the environmental impact of the uses allowed. Regulations on residential, commercial, and industrial zones may differ between municipalities, so one municipality may permit 4-storey buildings on medium-density residential zones, while another may only permit 2-storey buildings.
Singapore
The framework for governing land uses in Singapore is administered by the Urban Redevelopment Authority (URA) through the Master Plan. The Master Plan is a statutory document divided into two sections: the plans and the Written Statement. The plans show the land use zoning allowed across Singapore, while the Written Statement provides a written explanation of the zones available and their allowed uses.
South Africa
There are five zoning categories in South Africa: residential, business, industrial, agricultural, and open space zoning. These five categories are further classified into subcategories. The zoning categories are governed by the Spatial Planning and Land Use Management Act enacted in 2016. Changing a land use from one zone to another requires a process of rezoning.
United Kingdom
The United Kingdom does not use zoning as a technique for controlling land use. British land use control began its modern phase after the Town and Country Planning Act of 1947. Rather than dividing municipal maps into land use zones, English planning law places all development under the control of local and regional governments, effectively abolishing the ability to develop land by-right. However, existing development allows land use by-right as long as the use does not constitute a change in the type of land use. A property owner must apply to change land use type of any existing building, and such changes must be consistent with the local and regional land use plans.
Development control or planning control is the element of the United Kingdom's system of town and country planning through which local government regulates land use and new building. There are 421 Local Planning Authorities (LPAs) in the United Kingdom. Generally they are the local borough or district council or a unitary authority. They each use a discretionary "plan-led system" whereby development plans are formed and the public consulted. Subsequent development requires planning permission, which will be granted or refused with reference to the development plan as a material consideration.
The plan does not provide specific guidance on what type of buildings will be allowed in a given location, rather it provides general principles for development and goals for the management of urban change. Because planning committees (made up of directly elected local councillors) or in some cases planning officers themselves (via delegated decisions) have discretion on each application for development or change of use made, the system is considered a 'discretionary' one.
Planning applications can differ greatly in scale, from airports and new towns to minor modifications to individual houses. In order to prevent local authorities from being overwhelmed by high volumes of small-scale applications from individual householders, a separate system of permitted development has been introduced. Permitted development rules are largely form-based, but in the absence of zoning, are applied at the national level. Examples include allowing a two-storey extension up to three metres at the rear of a property, extensions up to 50% of the original width at each side, and certain types of outbuildings in the garden, provided that no more than 50% of the land area is built over. These are appropriately sized for a typical three bedroom semi-detached property, but must be applied across a wide variety of housing types, from small terraces, to larger detached properties and manor houses.
In August 2020, the UK Government published a consultation document called Planning for the Future. The proposals hinted at a move toward zoning, with areas given a Growth, Renewal or Protected designation, and with the possibility of "sub-areas within each category", although the document did not elaborate on the details. Nothing came of these proposals, and following the 2024 general election there are no plans for the UK to adopt zoning within its planning system.
United States
Zoning in the United States rests on the police power that state governments may exercise over private real property. Under this power, special laws and regulations have long restricted the places where particular types of business can be carried on. In 1904, Los Angeles established the nation's first land-use restrictions for a portion of the city. New York City adopted the first zoning regulations to apply city-wide in 1916.
The constitutionality of zoning ordinances was upheld by the U.S. Supreme Court in the 1926 case Village of Euclid, Ohio v. Ambler Realty Co. Among large populated cities in the United States, Houston is unique in having no zoning ordinances. The city instead has a proliferation of private deed restrictions and retains government regulations like minimum lot size and setbacks.
Scale
Early zoning practices were subtle and often debated. Some claim the practices started in the 1920s, while others suggest that zoning was born in New York in 1916. Both of these candidate starting points, however, were urban cases. Zoning became an increasingly powerful legal force as it expanded in geographical range, through its introduction in other urban centres and its use across larger political and geographical boundaries. Regional zoning was the next step up in the geographical size of areas under zoning laws. A major difference between urban zoning and regional zoning was that "regional areas consequently seldom bear direct relationship to arbitrary political boundaries". This form of zoning also included rural areas, which ran counter to the theory that zoning was a result of population density. Finally, zoning expanded once more, returning to a political boundary with state zoning.
Types in use in the United States
Use-based zoning, especially single-use zoning, is by far the most common type of zoning in the US, where it is known as Euclidean zoning, after Euclid, Ohio's role in a landmark U.S. Supreme Court case, Village of Euclid v. Ambler Realty Co.
Single-use zoning in the United States
Single-use zoning takes two forms, flat and hierarchical, also known as cumulative or pyramidal. Under flat zoning, each district is strictly designated for one use. In a simple hierarchical zoning system, districts are organized with residential (the most sensitive and least disruptive category) at the top, followed by commercial and industrial. Residential and commercial buildings are allowed in industrial zones and residential buildings are allowed in commercial zones. More complex hierarchical systems account for multiple levels within categories, such as multiple types of residential buildings in multifamily residential districts. Hierarchical zoning generally fell out of favor in the United States in the mid-twentieth century, with flat zoning becoming more popular, although many municipalities still incorporate some degree of hierarchy in their zoning ordinances.
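The difference between flat and cumulative (pyramidal) zoning can be illustrated as a toy data structure. The ordering and the three categories below are illustrative assumptions, not taken from any real ordinance:

```python
# Districts ordered from least intense (most protected) to most intense.
HIERARCHY = ["residential", "commercial", "industrial"]

def allowed_cumulative(use: str, zone: str) -> bool:
    """Under pyramidal zoning, a use is permitted in its own zone and in
    any zone of higher intensity; higher-intensity uses are excluded
    from lower-intensity zones."""
    return HIERARCHY.index(use) <= HIERARCHY.index(zone)

def allowed_flat(use: str, zone: str) -> bool:
    """Under flat zoning, each district admits exactly one use."""
    return use == zone

# A house is allowed in an industrial zone under the cumulative scheme,
# but a factory is never allowed in a residential zone.
print(allowed_cumulative("residential", "industrial"))  # True
print(allowed_cumulative("industrial", "residential"))  # False
print(allowed_flat("residential", "industrial"))        # False
```

Real ordinances add many layers (subcategories, overlays, conditional uses), but the core asymmetry of hierarchical zoning is just this one-directional permission rule.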
Single-use zoning is used by many municipalities due to its ease of implementation (one set of explicit, prescriptive rules), long-established legal precedent, and familiarity to planners and design professionals. Single-use zoning has been criticized, however, for its lack of flexibility. Separation of uses can contribute to urban sprawl, loss of open space, heavy infrastructure costs, and automobile dependency. In particular, single-family zoning, residential districts where only single-family homes can be built, has been widely criticized as a cause of sprawl and racial segregation.
Social problems in the United States
The United States suffers from greater levels of deurbanization and urban decay than other developed countries, and from additional problems, such as urban prairies, that do not occur elsewhere. Jonathan Rothwell has argued that zoning encourages racial segregation. He claims a strong relationship exists between an area's allowance of building housing at higher density and racial integration between blacks and whites in the United States. Rothwell and Massey explain the relationship between segregation and density as restrictive density zoning producing higher housing prices in white areas and limiting opportunities for people with modest incomes to leave segregated areas. Between 1980 and 2000, racial integration occurred faster in areas without strict density regulations than in those with them. Rothwell and Massey suggest homeowners and business interests are the two key players in the political economy from which density regulations emerge. They propose that in older states where rural jurisdictions are primarily composed of homeowners, it is in the narrow interest of homeowners to block development, because tax rates are lower in rural areas and taxation is more likely to fall on the median homeowner. Business interests are unable to counteract homeowners' interests in rural areas because business interests are weaker and business ownership is rarely controlled by people living outside the community. This translates into rural communities that tend to resist development by using density regulations to make business opportunities less attractive. Density zoning regulations in the U.S. increase residential segregation in metropolitan areas by reducing the availability of affordable housing in some jurisdictions; other zoning regulations, like school infrastructure regulations and growth controls, are also associated with higher segregation.
With more permissive zoning regulations there are lower levels of segregation; desegregation is higher in places with more liberal zoning regulations, allowing residents to integrate racially. Metropolitan areas that allowed higher-density development moved more rapidly toward racial integration than their counterparts with strict density limitations. The greater the allowable density, the lower the level of racial segregation.
Zoning laws that limit the construction of new housing (like single-family zoning) are associated with reduced affordability and are a major factor in residential segregation in the United States by income and race.
See also
Activity centre
Agricultural protection zoning
Context theory
Ekistics
Exclusionary zoning
Fenceline community
Form-based codes
Greenspace (disambiguation)
Open space reserve
Urban open space
Inclusionary zoning
Locally unwanted land use
Mixed use development
New urbanism
NIMBY
Non-conforming use
Planning permission
Police power
Principles of Intelligent Urbanism
Reverse sensitivity
Road
Single-use zoning
Spot zoning
Statutory planning
Subdivision (land)
Traffic
Variance (land use)
YIMBY
Zoning district
Zoning in the United States
References
Further reading
Taylor, George Town Planning for Australia (Studies in International Planning History), Routledge, 2018, .
Gurran, N., Gallent, N. and Chiu, R.L.H. Politics, Planning and Housing Supply in Australia, England and Hong Kong (Routledge Research in Planning and Urban Design), Routledge, 2016.
Bassett, E.M. The master plan, with a discussion of the theory of community land planning legislation. New York: Russell Sage foundation, 1938.
Bassett, E. M. Zoning. New York: Russell Sage Foundation, 1940
Hirt, Sonia. Zoned in the USA: The Origins and Implications of American Land-Use Regulation (Cornell University Press, 2014) 245 pp. online review
Stephani, Carl J. and Marilyn C. ZONING 101, originally published in 1993 by the National League of Cities, now available in a Third Edition, 2012.
External links
ZoningPoint – A searchable database of zoning maps and zoning codes for every county and municipality in the United States.
Crenex – Zoning Maps – Links to zoning maps and planning commissions of 50 most populous cities in the US.
New York City Department of City Planning – Zoning History
Michigan State University Extension Planning & Zoning information
Schindler's Land Use Page (Michigan State University Extension Land Use Team)
Zoning Compliance and Zoning Certification - Analysis and Reporting
Land Policy Institute at Michigan State University
By Bradley C. Karkkainen (1994). Zoning: A Reply To The Critics, Journal of Land Use & Environmental Law
Urban planning
Fujian pond turtle

The Fujian pond turtle ("Mauremys" × iversoni) is an intergeneric hybrid turtle, possibly also occurring naturally, in the family Geoemydidae (formerly Bataguridae). The Fujian pond turtle is produced in large numbers by Chinese turtle farms as a "copy" of the golden coin turtle (Cuora trifasciata). It appears to occur in China and Vietnam. Before its actual origin became known, it was listed as data deficient in the IUCN Red List.
The parents of this hybrid are the Asian yellow pond turtle (Mauremys mutica) and the golden coin turtle, with the male apparently usually of the latter species. While it is not unusual for perfectly valid geoemydid species to arise from hybridization, recognition as a species would require that the hybrids are fertile and constitute a phenotypically distinct and self-sustaining lineage. This does not appear to be the case in this "species" as only single specimens have been found rather than an entire population of these turtles and captive breeding has rarely been successful as most males proved to be infertile (while females are fully fertile).
The Fujian pond turtle's scientific name was given in dedication to American herpetologist John B. Iverson.
"Clemmys guangxiensis" is a composite taxon described from specimens of Mauremys mutica and the natural hybrid "Mauremys" × iversoni.
See also
"Mauremys" × pritchardi
"Ocadia" × glyphistoma
Ocadia philippeni
Cuora serrata
References
Footnotes
General sources
Reptiles of China
Mauremys
Taxonomy articles created by Polbot
Hybrid animals
Intergeneric hybrids
Intraguild predation

Intraguild predation, or IGP, is the killing and sometimes eating of a potential competitor of a different species. This interaction represents a combination of predation and competition, because both species rely on the same prey resources and also benefit from preying upon one another. Intraguild predation is common in nature and can be asymmetrical, in which one species feeds upon the other, or symmetrical, in which both species prey upon each other. Because the dominant intraguild predator gains the dual benefits of feeding and eliminating a potential competitor, IGP interactions can have considerable effects on the structure of ecological communities.
Types
Intraguild predation can be classified as asymmetrical or symmetrical. In asymmetrical interactions one species consistently preys upon the other, while in symmetrical interactions both species prey equally upon each other. Intraguild predation can also be age structured, in which case the vulnerability of a species to predation is dependent on age and size, so only juveniles or smaller individuals of one of the predators are fed upon by the other. A wide variety of predatory relationships are possible depending on the symmetry of the interaction and the importance of age structure. IGP interactions can range from predators incidentally eating parasites attached to their prey to direct predation between two apex predators.
Ecology of intraguild predation
Intraguild predation is common in nature and widespread across communities and ecosystems. Intraguild predators must share at least one prey species and usually occupy the same trophic guild, and the degree of IGP depends on factors such as the size, growth, and population density of the predators, as well as the population density and behavior of their shared prey. When creating theoretical models for intraguild predation, the competing species are classified as the "top predator" or the "intermediate predator," (the species more likely to be preyed upon). In theory, intraguild predation is most stable if the top predator benefits strongly from killing off or feeding on the intermediate predator, and if the intermediate predator is a better competitor for the shared prey resource.
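The three-species structure described above (shared prey, intermediate predator, top predator) can be sketched numerically. The model below is a generic Lotka–Volterra-style IGP system; all parameter values and functional forms are illustrative assumptions, not taken from any particular published model:

```python
import math

def simulate_igp(steps=20000, dt=0.01):
    """Exponential-Euler integration of a simple intraguild-predation module:
        R - shared prey (logistic growth)
        N - intermediate predator: eats R, is eaten by P
        P - top predator: eats both R and N
    Parameters are hand-picked so that N is the better competitor for R
    (it persists at a lower prey level than P would alone), matching the
    coexistence condition described in the text."""
    R, N, P = 0.5, 0.2, 0.3
    r, K = 1.0, 1.0            # prey growth rate and carrying capacity
    a_n, a_p = 2.0, 0.5        # attack rates on the shared prey
    b = 0.5                    # conversion efficiency
    m_n, m_p = 0.2, 0.2        # predator mortality rates
    alpha = 1.0                # attack rate of P on N (the IGP link)
    for _ in range(steps):
        g_r = r * (1.0 - R / K) - a_n * N - a_p * P   # per-capita rates
        g_n = b * a_n * R - m_n - alpha * P
        g_p = b * a_p * R + b * alpha * N - m_p
        R *= math.exp(g_r * dt)   # multiplicative update keeps densities > 0
        N *= math.exp(g_n * dt)
        P *= math.exp(g_p * dt)
    return R, N, P

print(simulate_igp())  # densities remain positive and bounded
```

With these particular numbers the system settles near an interior state in which all three species coexist; changing the attack rates or mortalities can instead drive the intermediate or top predator out, which is the kind of sensitivity the theoretical IGP literature explores.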
The ecological effects of intraguild predation include direct effects on the survival and distribution of the competing predators, as well as indirect effects on the abundance and distribution of prey species and other species within the community. Because they are so common, IGP interactions are important in structuring communities. Intraguild predation may actually benefit the shared prey species by lowering overall predation pressure, particularly if the intermediate predator consumes more of the shared prey. Intraguild predation can also dampen the effects of trophic cascades by providing redundancy in predation: if one predator is removed from the ecosystem, the other is still consuming the same prey species. Asymmetrical IGP can be a particularly strong influence on habitat selection. Often, intermediate predators will avoid otherwise optimal habitat because of the presence of the top predator. Behavioral changes in intermediate predator distribution due to increased risk of predation can influence community structure more than direct mortality caused by the top predators.
Examples
Terrestrial
Intraguild predation is well documented in terrestrial arthropods such as insects and arachnids. Hemipteran insects and larval lacewings both prey upon aphids, but the competing predators can cause high enough mortality among the lacewings to effectively relieve predation upon the aphids. Several species of centipede are considered to be intraguild predators.
Among the most dramatic examples of intraguild predation are those between large mammalian carnivores. Large canines and felines are the mammal groups most often involved in IGP, with larger species such as lions and gray wolves preying upon smaller species such as foxes and cheetah. In North America, coyotes function as intraguild predators of gray foxes and bobcats, and may exert a strong influence over the population and distribution of gray foxes. However, in areas where wolves have been reintroduced, coyotes become an intermediate predator and experience increased mortality and a more restricted range.
Aquatic and marine
Intraguild predation is also important in aquatic and marine ecosystems. As top predators in most marine environments, sharks show strong IGP interactions, both between species of sharks and with other top predators like toothed whales. In tropical areas where multiple species of sharks may have significantly overlapping diets, the risk of injury or predation can determine the local range and available prey resources for different species. Large pelagic species such as blue and mako sharks are rarely observed feeding in the same areas as great white sharks, and the presence of white sharks will prevent other species from scavenging on whale carcasses. Intraguild predation between sharks and toothed whales usually involves large sharks preying upon dolphins and porpoises while also competing with them for fish prey, but orcas reverse this trend by preying upon large sharks while competing for large fish and seal prey. Intraguild predation can occur in freshwater systems as well. For example, invertebrate predators such as insect larvae and predatory copepods and cladocerans can act as intraguild prey, with planktivorous fish the interguild predator and herbivorous zooplankton acting as the basal resource.
Importance to management and conservation
The presence and intensity of intraguild predation is important to both management and conservation of species. Human influence on communities and ecosystems can affect the balance of these interactions, and the direct and indirect effects of IGP may have economic consequences.
Fisheries managers have only recently begun to understand the importance of intraguild predation on the availability of fish stocks as they attempt to move towards ecosystem-based management. IGP interactions between sharks and seals may prevent seals from feeding in areas where commercially important fish species are abundant, which may indirectly make more of these fish available to fishermen. However, IGP may also negatively influence fisheries. Intraguild predation by spiny dogfish and various skate species on economically important fishes like cod and haddock have been cited as a possible reason for the slow recovery of the groundfish fishery in the western North Atlantic.
Intraguild predation is also an important consideration for restoring ecosystems. Because the presence of top predators can so strongly affect the distribution and abundance of both intermediate predator and prey species, efforts to either restore or control predator populations can have significant and often unintended ecological consequences. In Yellowstone National Park, the reintroduction of wolves caused them to become intraguild predators of coyotes, which had far-reaching effects on both the animal and plant communities in the park. Intraguild predation is an important ecological interaction, and conservation and management measures will need to take it into consideration.
References
Biological interactions
Carnivory
Conservation biology
Ecosystems
Eating behaviors
Matérn covariance function

In statistics, the Matérn covariance, also called the Matérn kernel, is a covariance function used in spatial statistics, geostatistics, machine learning, image analysis, and other applications of multivariate statistical analysis on metric spaces. It is named after the Swedish forestry statistician Bertil Matérn. It specifies the covariance between two measurements as a function of the distance between the points at which they are taken. Since the covariance only depends on distances between points, it is stationary. If the distance is Euclidean distance, the Matérn covariance is also isotropic.
Definition
The Matérn covariance between measurements taken at two points separated by d distance units is given by

C_\nu(d) = \sigma^2 \frac{2^{1-\nu}}{\Gamma(\nu)} \left( \sqrt{2\nu}\, \frac{d}{\rho} \right)^{\nu} K_\nu\!\left( \sqrt{2\nu}\, \frac{d}{\rho} \right),

where \Gamma is the gamma function, K_\nu is the modified Bessel function of the second kind, and ρ and ν are positive parameters of the covariance.
A Gaussian process with Matérn covariance is \lceil \nu \rceil - 1 times differentiable in the mean-square sense.
Spectral density
The power spectrum of a process with Matérn covariance defined on \mathbb{R}^n is the (n-dimensional) Fourier transform of the Matérn covariance function (see Wiener–Khinchin theorem). Explicitly, this is given by

S(f) = \sigma^2 \frac{2^n \pi^{n/2}\, \Gamma(\nu + \tfrac{n}{2})\, (2\nu)^{\nu}}{\Gamma(\nu)\, \rho^{2\nu}} \left( \frac{2\nu}{\rho^2} + 4\pi^2 f^2 \right)^{-\left(\nu + \frac{n}{2}\right)}.
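As a sanity check of the spectrum–covariance relationship, the one-dimensional case with ν = 1/2 is convenient: the spectral density reduces to the rational function S(f) = (2/ρ)/(1/ρ² + 4π²f²) and the covariance to exp(−d/ρ). The sketch below (σ² = 1; the helper names are ours) numerically inverts the Fourier transform with plain trapezoidal quadrature:

```python
import math

def spectral_density_1d(f, rho):
    """Matérn spectral density for nu = 1/2, n = 1, sigma^2 = 1:
    S(f) = (2 / rho) / (1 / rho^2 + 4 pi^2 f^2)."""
    return (2.0 / rho) / (1.0 / rho ** 2 + 4.0 * math.pi ** 2 * f ** 2)

def covariance_from_spectrum(d, rho, f_max=300.0, n=200000):
    """Invert the Fourier transform by trapezoidal quadrature.
    The integrand is even in f, so integrate [0, f_max] and double."""
    h = f_max / n
    total = 0.5 * (spectral_density_1d(0.0, rho)
                   + spectral_density_1d(f_max, rho) * math.cos(2.0 * math.pi * f_max * d))
    for k in range(1, n):
        f = k * h
        total += spectral_density_1d(f, rho) * math.cos(2.0 * math.pi * f * d)
    return 2.0 * h * total

# The recovered covariance should match exp(-d / rho) for nu = 1/2.
print(covariance_from_spectrum(0.7, 1.0))  # ≈ exp(-0.7) ≈ 0.4966
```

Truncating the integral at f_max contributes an error of order 1/(π² f_max), since the density decays like 1/f², so the quadrature recovers exp(−0.7) to roughly three decimal places.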
Simplification for specific values of ν
Simplification for half-integer ν
When \nu = p + 1/2 with p a non-negative integer, the Matérn covariance can be written as a product of an exponential and a polynomial of degree p. The modified Bessel function of a fractional order is given by Equations 10.1.9 and 10.2.15 as

K_{p+1/2}(x) = \sqrt{\frac{\pi}{2x}}\, e^{-x} \sum_{k=0}^{p} \frac{(p+k)!}{k!\,(p-k)!}\,(2x)^{-k}.

This allows the Matérn covariance for half-integer values of \nu to be expressed as

C_{p+1/2}(d) = \sigma^2 \exp\!\left(-\frac{\sqrt{2p+1}\,d}{\rho}\right) \frac{p!}{(2p)!} \sum_{i=0}^{p} \frac{(p+i)!}{i!\,(p-i)!} \left(\frac{2\sqrt{2p+1}\,d}{\rho}\right)^{p-i},

which gives:

for \nu = 1/2: \quad C_{1/2}(d) = \sigma^2 \exp\!\left(-\frac{d}{\rho}\right),

for \nu = 3/2: \quad C_{3/2}(d) = \sigma^2 \left(1 + \frac{\sqrt{3}\,d}{\rho}\right) \exp\!\left(-\frac{\sqrt{3}\,d}{\rho}\right),

for \nu = 5/2: \quad C_{5/2}(d) = \sigma^2 \left(1 + \frac{\sqrt{5}\,d}{\rho} + \frac{5d^2}{3\rho^2}\right) \exp\!\left(-\frac{\sqrt{5}\,d}{\rho}\right).
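For the common half-integer smoothness values ν = 1/2, 3/2, 5/2, the exponential-times-polynomial closed forms translate directly into code. A minimal sketch (σ² defaults to 1; the function and parameter names are ours):

```python
import math

def matern(d, rho, nu, sigma2=1.0):
    """Matérn covariance for the common half-integer smoothness values,
    using the closed forms (exponential times a short polynomial)."""
    if d == 0.0:
        return sigma2
    if nu == 0.5:
        return sigma2 * math.exp(-d / rho)
    if nu == 1.5:
        t = math.sqrt(3.0) * d / rho
        return sigma2 * (1.0 + t) * math.exp(-t)
    if nu == 2.5:
        t = math.sqrt(5.0) * d / rho
        return sigma2 * (1.0 + t + t * t / 3.0) * math.exp(-t)
    raise ValueError("only nu in {0.5, 1.5, 2.5} is implemented here")

# The covariance equals sigma^2 at d = 0 and decays monotonically in d.
print(matern(0.0, 1.0, 1.5))   # 1.0
print(matern(1.0, 1.0, 0.5))   # exp(-1) ≈ 0.3679
```

These three cases are popular in Gaussian-process software precisely because they avoid evaluating the Bessel function.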
The Gaussian case in the limit of infinite ν
As \nu \to \infty, the Matérn covariance converges to the squared exponential covariance function

\lim_{\nu \to \infty} C_\nu(d) = \sigma^2 \exp\!\left(-\frac{d^2}{2\rho^2}\right).
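This limit can be checked numerically with the half-integer closed form: for large p, the Matérn value with ν = p + 1/2 approaches the squared exponential. The sketch below (our own helper, σ² = 1) works in log space via lgamma so the factorials do not overflow:

```python
import math

def matern_half_integer(d, rho, p):
    """Matérn covariance with nu = p + 1/2 and sigma^2 = 1, evaluated via
    the exponential-times-polynomial formula; log-gamma keeps large p stable."""
    if d == 0.0:
        return 1.0
    nu = p + 0.5
    x = 2.0 * math.sqrt(2.0 * p + 1.0) * d / rho   # polynomial argument
    log_pref = math.lgamma(p + 1) - math.lgamma(2 * p + 1)
    decay = -math.sqrt(2.0 * nu) * d / rho
    total = 0.0
    for i in range(p + 1):
        log_term = (math.lgamma(p + i + 1) - math.lgamma(i + 1)
                    - math.lgamma(p - i + 1) + (p - i) * math.log(x))
        total += math.exp(log_pref + log_term + decay)
    return total

d, rho = 0.5, 1.0
print(matern_half_integer(d, rho, 0))     # exp(-0.5) for nu = 1/2
print(matern_half_integer(d, rho, 200))   # close to the Gaussian limit
print(math.exp(-d * d / (2.0 * rho * rho)))
```

The discrepancy from the Gaussian value shrinks roughly like 1/ν, so p = 200 already agrees to about two to three decimal places at d/ρ = 0.5.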
Taylor series at zero and spectral moments
From the basic relation satisfied by the gamma function,

\Gamma(\nu)\,\Gamma(1-\nu) = \frac{\pi}{\sin(\pi\nu)},

the basic relation satisfied by the modified Bessel function of the second kind,

K_\nu(x) = \frac{\pi}{2}\,\frac{I_{-\nu}(x) - I_\nu(x)}{\sin(\pi\nu)},

and the definition of the modified Bessel functions of the first kind,

I_\nu(x) = \sum_{m=0}^{\infty} \frac{1}{m!\,\Gamma(m+\nu+1)} \left(\frac{x}{2}\right)^{2m+\nu},

the behavior as d → 0 can be obtained by a Taylor series (when ν is not an integer and is greater than 2):
When defined, the following spectral moments can be derived from the Taylor series:
For the case of , similar Taylor series can be obtained:
When ν is an integer, limiting values should be taken (see ).
See also
Radial basis function
References
Geostatistics
Spatial analysis
Covariance and correlation
Vista paradox

The Vista paradox is a natural optical illusion in which an object seen through an aperture appears to shrink in apparent size as the observer approaches the aperture. The paradox occurs when the distant object, which seems to shrink or enlarge even as its visual angle respectively increases or decreases, is many times further away than the maximum distance between the observer and the aperture.
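The geometric ingredient of the illusion can be sketched with elementary visual-angle arithmetic. In the illustrative scenario below (all dimensions are invented for the example), a wide building seen through a 1 m window fills an ever smaller fraction of the window's visual angle as the observer approaches, even though the building's own visual angle slowly increases; the perceived shrinkage itself is a separate, psychological effect:

```python
import math

def visual_angle(size, distance):
    """Visual angle (radians) subtended by an object of a given size
    at a given distance from the observer."""
    return 2.0 * math.atan(size / (2.0 * distance))

aperture_width = 1.0    # a 1 m window (illustrative)
object_width = 20.0     # a 20 m-wide building, far beyond the window
object_extra = 200.0    # the building stands 200 m behind the window

for d in (8.0, 4.0, 2.0, 1.0):  # observer-to-window distance, shrinking
    a = visual_angle(aperture_width, d)
    o = visual_angle(object_width, d + object_extra)
    print(f"{d:4.1f} m: object fills {o / a:.2%} of the aperture's angle")
```

As the observer walks toward the window, the window's visual angle grows roughly like 1/d while the distant object's angle barely changes, so the object occupies a rapidly shrinking share of the framed view.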
See also
Depth perception
Forced perspective
Perceived visual angle
Perspective distortion (photography)
References
External links
Special effects
Optical illusions
Visual perception
Egg of Columbus (mechanical puzzle)

The names Egg of Columbus and Columbus Egg have been used for several mechanical toys and puzzles inspired by the legend of Columbus balancing an egg on its end to make a point. Typically, these puzzles are egg-shaped objects with internal mechanisms that allow the egg to stand upright once the user discovers the secret.
Mechanisms typically used in such toys include moving weights, mercury flowing in sealed tubes or compartments, and steel balls rolling on grooves and pits.
History
A Columbus Egg puzzle using a rolling ball was described in an 1893 book. The Montgomery Ward catalog of 1894 includes a "Columbus Egg" toy.
There are at least 18 United States patents for such devices, starting from 1891.
The German company Pussycat sells a Columbus Egg that can be balanced by holding it upright for 25 seconds, then quickly inverting it. The egg will then stand on its pointed end for 15 seconds, then topple on its side.
See also
Egg of Columbus (tangram puzzle)
Egg of Li Chun
Superegg
Tesla's Egg of Columbus
References
Mechanical puzzles
Eggs in culture
Cultural depictions of Christopher Columbus
John W. Powell

John William Powell (July 3, 1919 – December 15, 2008) was a journalist and small business proprietor who edited the China Weekly Review, an English-language journal first published by his father, John B. Powell, in Shanghai.
John W. Powell was tried for sedition in 1959 after publishing an article that reported on allegations made by Mainland Chinese officials that the United States and Japan were carrying out germ warfare in the Korean War. In 1956, the Eisenhower Administration's Department of Justice pressed sedition charges against Powell, his wife Sylvia, and Julian Schuman, after federal prosecutors secured grand jury indictments against them for publishing allegations of bacteriological warfare. However, the prosecutors failed to get any convictions. The defendants invoked their Constitutional right to refuse to reveal self-incriminating evidence, and U.S. Department of Defense officials also refused to provide any incriminating archives or witnesses. This information was not revealed until decades later as a result of Freedom of Information Act requests.
All three of the defendants were acquitted of all charges over the next six years, after a Federal judge dismissed the core aspects of the case against them in 1959, due to obviously insufficient evidence against them.
Early life and career
Powell was born in Shanghai, China, in 1919. One year later, Powell's parents decided that Shanghai was unsafe for their infant, so they sent him to live with his mother's family in Hannibal, Missouri. In 1917, Powell's father, John Benjamin Powell, had been a co-founder of the tiny publication, the China Weekly Review (originally Thomas Franklin Fairfax Millard’s Review of the Far East, 1922 renamed Weekly Review of the Far East, 1923 renamed The China Weekly Review, retaining the Chinese heading Mìlè Pínglúnbào 《密勒評論報》, i.e. “Millard’s Review”), modeled after the influential American political journals The New Republic and The Nation, and which featured original reporting, reports on Chinese subjects, and editorials.
Interrupting his journalism studies at the University of Missouri, Powell rejoined his father at the China Weekly Review. After the Japanese Attack on Pearl Harbor, Powell joined the American Office of War Information, the military's journalism program, as a news editor. In 1943, Powell was sent to Chungking, China, a city in far southwestern China (and the wartime capital of Free China), where he remained for the rest of the war. For eight years after World War II, from 1945 until June, 1953, Powell published his journal, first as the "China Weekly Review" and later on, when its revenues declined greatly, as the "China Monthly Review". While in China, Powell was an advocate for Chinese sovereignty and was a supporter of Chinese president Cao Kun.
Sedition allegations
During the Red-baiting 1950s, the Federal government initially accused Powell and his wife of treason. On April 26, 1956, the Powells, along with an associate at the "China Monthly Review", learned that a Federal Grand Jury had indicted each of them on a charge of sedition. Each count in the indictment was punishable by up to twenty years in prison and up to $10,000 in fines. The most damaging charge was that the defendants had falsely reported that the United States had engaged in bacteriological warfare during the Korean War, and that North Koreans had forced American Prisoners of War to read published reports of these charges as part of their indoctrination processes and brainwashing.
In their coverage of the breaking news, the San Francisco Chronicle, among other publications, used two-inch-high bold type on its front page, exclaiming "S.F. JURY INDICTS WRITER – SEDITION". The grand jury had charged Powell with a dozen counts of sedition and a count of conspiring to commit sedition. His wife Sylvia and Julian Schuman, Powell's associate editor, were each also charged with a single count of conspiracy. The Powells responded to the charges by asserting they had properly reported what was said by Chinese officials and troops coming from the front lines of the Korean War.
Powell's trial, which ended in a mistrial, took place in 1959 at the Federal Courthouse in San Francisco, the same location where Marie Equi had been tried and convicted of sedition in 1918. The treason charges against Powell were formally dismissed in July, 1959, and two years later, in 1961, Attorney General Robert F. Kennedy finally dropped the rest of the sedition charges.
Later developments
Although direct official evidence, such as military records, that bacteriological warfare was employed during the Korean War by either side does not exist, some contend that there is overwhelming indirect and unofficial evidence that the US used biological weapons during this war. To press his case about American involvement in bacteriological warfare in Asia, Powell published an article titled "Japan's Germ Warfare: The U.S. Cover-up of a War Crime" in the October–December 1980 issue of the "Bulletin of Concerned Asian Scholars". An editor from United Press International had told Powell his story was "old news," and it was not published by mainstream publications.
However, with the documents that he had obtained under the Freedom of Information Act, Powell was able to provide additional evidence supporting his earlier reports in the "China Monthly Review". The second article, "Japan's Biological Weapons, 1930–1945," was published in the October 1981 edition of The Bulletin of the Atomic Scientists. It was not until 1989 that a detailed account of the Japanese bacteriological warfare experiments in China appeared, when the British journalists Peter Williams and David Wallace published their book "Unit 731: Japan's Secret of Secrets" (London: Hodder and Stoughton; also published in New York that same year as "Unit 731: Japan's Secret Biological Warfare in World War II"). Even in the 21st century, more than 60 years after the Japanese bacteriological warfare camps operated, American intelligence agencies and the Department of Defense still withhold certain information about the World War II Japanese program in China.
Powell's articles in The Bulletin of the Atomic Scientists eventually led to the broadcast of segments on the CBS-TV investigative news program 60 Minutes and ABC-TV's 20/20 program. Powell's reporting had brought widespread public attention to the use of bacteriological warfare, which helped prompt the United States Congress into hearing testimony from former American Prisoners of War in 1982 and 1986.
Personal life
Powell met his wife Sylvia Powell in 1947, while he was in Shanghai opening up a news bureau for the Office of War Information, and they were married soon afterwards.
After returning to the United States from China, the Powells bought an old house on Potrero Hill in San Francisco, undertook extensive repairs and renovations, and then sold it for a profit. They next settled into a pattern of buying, rehabilitating, and reselling fourteen houses and several apartment buildings. "It was kind of rough," John Powell said, "Obviously, I couldn't get a job on a newspaper. I tried various things, working as a salesman, selling teaching aids to schools."
Eventually, the Powells bought a house on Church Street, in San Francisco's Mission District, and lived there for thirty years. Because Powell had been "blackballed" from journalism over the sedition case, the couple used the house's storefront to run an antiques shop, and remodeled Victorian homes, for about fifteen years.
Powell died on December 15, 2008, in San Francisco at the age of 89 as a result of complications from pneumonia.
See also
Allegations of biological warfare in the Korean War
References
Selected publications
A Plague Upon Humanity: The Secret Genocide of Axis Japan's Germ Warfare Operation, Daniel Barenblatt, New York: HarperCollins, 2004,
The United States and Biological Warfare: Secrets from the Early Cold War and Korea, Stephen Endicott and Edward Hagerman, Bloomington: Indiana University Press, 1998
Unit 731: The Japanese Army's Secret of Secrets, Peter Williams and David Wallace, London: Hodder and Stoughton, 1989. Also published in the United States in 1989 as: Unit 731: Japan's Secret Biological Warfare in World War II.
Further reading
"The American Inquisition: Justice and Injustice In The Cold War" Stanley I. Kutler, Hill & Wang, New York, 1982
French, Paul. Carl Crow: A Tough Old China Hand: The Life, Times, and Adventures of an American in Shanghai. Hong Kong University Press, 2007.
French, Paul. Through the Looking Glass: Foreign Journalists in China, from the Opium Wars to Mao. Hong Kong University Press, 2009.
Powell, John Benjamin. My Twenty-Five Years in China. New York: The Macmillan Co., 1945. Autobiography of John W. Powell's father, who lived in Shanghai 1917-1942 when interned by the Japanese.
External links
NewsReview.com - "Dirty secrets: The government tried to put journalist John W. Powell in prison half a century ago for reporting that the U.S. Army had used germ warfare in Korea. He's still convinced it's true." Robert Speer, Chico News & Review (July, 2006)
SFGate.com - "Sylvia Powell—writer accused, then cleared, of treason in 1950s" (obituary for Powell's wife), Michael Taylor, San Francisco Chronicle (July 13, 2004)
1919 births
2008 deaths
Powell, John W.
American freelance journalists
American alternative journalists
American investigative journalists
People related to biological warfare
Mission District, San Francisco
Censorship in the United States
People of the United States Office of War Information
American expatriates in China
YInMn Blue (/jɪnmɪn/; for the chemical symbols Y for yttrium, In for indium, and Mn for manganese), also known as Oregon Blue or Mas Blue, is an inorganic blue pigment that was discovered by Mas Subramanian and his (then) graduate student, Andrew Smith, at Oregon State University in 2009. The pigment is noteworthy for its vibrant, near-perfect blue color and unusually high NIR reflectance. The chemical compound has a unique crystal structure in which trivalent manganese ions in the trigonal bipyramidal coordination are responsible for the observed intense blue color. Since the initial discovery, the fundamental principles of colour science have been explored extensively by the Subramanian research team at Oregon State University, resulting in a wide range of rationally designed novel green, purple, and orange pigments, all through intentional addition of a chromophore in the trigonal bipyramidal coordination environment.
Historical pigments
The discovery of the first known synthetic blue pigment, Egyptian blue (CaCuSi4O10), was promoted by the Egyptian pharaohs, who sponsored the creation of new pigments for use in art. Other civilizations combined organic and mineral materials to create blue pigments, ranging from the azure Maya blue to Han blue (BaCuSi4O10), which was developed under the Chinese Han dynasty and could be manipulated to produce a light or dark blue color.
A number of pigments are used to impart the blue color. Cobalt blue (CoAl2O4) was first described in 1777; it is extremely stable and has traditionally been used as a coloring agent in ceramics. Ultramarine (Na8–10Al6Si6O24S2–4) was made by grinding the forbiddingly expensive lapis lazuli into a powder until a cheaper synthetic form was invented in 1826 by the French industrialist Jean Baptiste Guimet and in 1828 by the German chemist Christian Gmelin. Prussian blue (Fe4[Fe(CN)6]3) was first described by the German polymath Johann Leonhard Frisch and the president of the Prussian Academy of Sciences, Gottfried Wilhelm Leibniz, in 1708. Azurite (Cu3(CO3)2(OH)2) is a soft, deep-blue copper mineral produced by the weathering of copper ore deposits; it has been used since ancient times and was first recorded by the first-century Roman writer Pliny the Elder. Phthalocyanine Blue BN was first prepared in 1927 and has a wide range of applications.
Most known pigments have detrimental health and environmental effects or durability problems. Cobalt blue causes cobalt poisoning when inhaled or ingested. Prussian blue is known to liberate hydrogen cyanide under certain acidic conditions. Ultramarine and azurite are not stable particularly in high-temperature and acidic conditions; additionally, ultramarine production involves the emission of a large amount of the toxic sulfur dioxide. The newer Phthalocyanine Blue BN is non-biodegradable and has been found to cause neuroanatomical defects in developing chicken embryos when injected directly into incubating eggs.
Inorganic blue pigments in which manganese (in the pentavalent oxidation state and in tetrahedral coordination) is the chromophore have been employed since the Middle Ages (e.g., the fossil bone odontolite, which is isostructural with apatite). Synthetic alternatives, such as barium manganate sulfate (or Manganese Blue, developed in 1907 and patented in 1935), have been phased out industrially due to safety and regulatory concerns. YInMn Blue therefore fills the niche of an inorganic, environmentally safe alternative to the traditionally used blue pigments, and offers a durable, intense blue color.
Discovery
In 2008, Mas Subramanian received a National Science Foundation grant to explore novel materials for electronics applications. Under this project, he was particularly interested in synthesizing multiferroics based on manganese oxides. He guided Andrew E. Smith, the first graduate student in his lab, to research an oxide solid solution between YInO3 (a ferroelectric material) and YMnO3 (an antiferromagnetic material), fired at high temperature. The resulting compound Smith synthesized was, by coincidence, a vibrant blue material. Because of Subramanian's experience at DuPont, he recognized the compound's potential use as a blue pigment, and together they filed a patent disclosure covering the invention. After they published their results, Shepherd Color Company contacted Subramanian about possible collaboration in commercialization efforts. For his outstanding contributions to inorganic color pigment chemistry, Subramanian was awarded the Perkin Medal from the Society of Dyers and Colourists in 2019.
The color may be adjusted by varying the In/Mn ratio in the pigment's base formula of YIn1−xMnxO3; the bluest composition, YIn0.8Mn0.2O3, has a color comparable to standard cobalt blue pigments.
Properties and preparation
YInMn Blue is chemically stable, does not fade, and is non-toxic. It is more durable than alternative blue pigments such as ultramarine or Prussian blue, retaining its vibrant color in oil and water, and is safer than cobalt blue, which is a suspected carcinogen and may cause cobalt poisoning.
The pigment is resistant to acids such as nitric acid, and is difficult to combust. When YInMn Blue does ignite, it burns a violet color attributed to the indium atoms.
Infrared radiation is strongly reflected by YInMn Blue, which makes the pigment suitable for energy-saving, cool coatings. It can be prepared by heating the oxides of yttrium, indium, and manganese to a temperature of approximately 1300 °C.
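As a rough companion to the oxide route described above, the precursor masses for a given composition can be worked out from molar masses. This is an illustrative sketch only; the x = 0.2 composition is an assumption drawn from the pigment literature, not from this article, and real syntheses involve grinding, repeated firing, and purity corrections.

```python
# Precursor masses for a target batch of YIn(1-x)Mn(x)O3 made from
# Y2O3, In2O3, and Mn2O3 (0.5 : (1-x)/2 : x/2 molar ratio).
M = {"Y": 88.906, "In": 114.818, "Mn": 54.938, "O": 15.999}  # g/mol

def precursor_masses(x: float, target_g: float = 10.0) -> dict:
    """Grams of each oxide needed for target_g of YIn(1-x)Mn(x)O3."""
    m_product = M["Y"] + (1 - x) * M["In"] + x * M["Mn"] + 3 * M["O"]
    mol = target_g / m_product  # moles of product
    return {
        "Y2O3":  mol / 2 * (2 * M["Y"] + 3 * M["O"]),
        "In2O3": mol * (1 - x) / 2 * (2 * M["In"] + 3 * M["O"]),
        "Mn2O3": mol * x / 2 * (2 * M["Mn"] + 3 * M["O"]),
    }

masses = precursor_masses(0.2)  # assumed x = 0.2 ("bluest" composition)
```

Because the reaction ½Y2O3 + (1−x)/2 In2O3 + x/2 Mn2O3 → YIn1−xMnxO3 is oxygen-balanced, the precursor masses sum exactly to the target mass.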
Commercialization
After Subramanian, Smith, and other colleagues published their results, companies began inquiring about commercial uses. Shepherd Color Company eventually won the license to commercialize the pigment in May 2015. Many companies, such as AMD and Crayola, rushed to use the new pigment's name in product announcements and press releases. It is unclear when the first commercial application of YInMn Blue reached the consumer market.
In popular culture
AMD announced in July 2016 that the pigment would be used on new Radeon Pro WX and Pro SSG professional GPUs for the energy efficiency that stems from its near-infrared reflecting property.
The American art supplies company Crayola announced in May 2017 that it planned to replace its retired Dandelion color (a yellow) with a new color "inspired by" YInMn. The new color does not contain any YInMn. Crayola held a contest for more pronounceable name ideas, and announced the new color name, "Bluetiful", on 14 September 2017. The new crayon color was made available in late 2017.
In artists' pigments
In June 2016, an Australian company, Derivan, published experiments using YInMn within their artist range (Matisse acrylics), and subsequently released the pigment for purchase.
As of April 2021, Golden Paints has commercially licensed and sourced the pigment from Shepherd Color Company. According to Golden, the supply of the raw pigment is extremely limited. Shepherd Color Company received the required environmental and safety approvals to sell the pigment in the U.S. in 2020.
Gamblin Artists Colors made a first Limited Edition batch of YInMn Blue in November 2020.
See also
International Klein Blue
List of inorganic pigments
Notes
References
External links
United States patent 8282728: "Materials with trigonal bipyramidal coordination and methods of making the same"
YInMn Blue at Shepherd Color Company
Medal for YInMnBlue and Dr. Mas Subramanian
Shades of blue
Yttrium compounds
Indium compounds
Manganese(III) compounds
Transition metal oxides
Inorganic pigments
Dan Shechtman (born January 24, 1941) is the Philip Tobias Professor of Materials Science at the Technion – Israel Institute of Technology, an Associate of the US Department of Energy's Ames National Laboratory, and Professor of Materials Science at Iowa State University. On April 8, 1982, while on sabbatical at the U.S. National Bureau of Standards in Washington, D.C., Shechtman discovered the icosahedral phase, which opened the new field of quasiperiodic crystals.
He was awarded the 2011 Nobel Prize in Chemistry for the discovery of quasicrystals, making him one of six Israelis who have won the Nobel Prize in Chemistry.
Biography
Dan Shechtman was born in 1941 in Tel Aviv, in what was then Mandatory Palestine; the city became part of the new state of Israel in 1948. He grew up in Petah Tikva and Ramat Gan in a Jewish family. His grandparents had immigrated to Palestine during the Second Aliyah (1904–1914) and founded a printing house. As a child Shechtman was fascinated by Jules Verne's The Mysterious Island (1874), which he read many times. His childhood dream was to become an engineer like the main protagonist, Cyrus Smith.
Shechtman is married to Prof. Tzipora Shechtman, Head of the Department of Counseling and Human Development at Haifa University, and author of two books on psychotherapy. They have a son Yoav Shechtman (a postdoctoral researcher in the lab of W. E. Moerner) and three daughters: Tamar Finkelstein (an organizational psychologist at the Israeli police leadership center), Ella Shechtman-Cory (a PhD in clinical psychology), and Ruth Dougoud-Nevo (also a PhD in clinical psychology).
Academic career
After receiving his Ph.D. in Materials Engineering from the Technion in 1972 (he had also obtained his B.Sc. in Mechanical Engineering there in 1966 and his M.Sc. in Materials Engineering in 1968), Shechtman was an NRC fellow at the Aerospace Research Laboratories at Wright-Patterson AFB, Ohio, where he spent three years studying the microstructure and physical metallurgy of titanium aluminides. In 1975, he joined the department of materials engineering at the Technion. In 1981–1983 he was on sabbatical at Johns Hopkins University, where he studied rapidly solidified aluminum transition-metal alloys in a joint program with NBS. During this study he discovered the icosahedral phase, which opened the new field of quasiperiodic crystals.
In 1992–1994 he was on sabbatical at National Institute of Standards and Technology (NIST), where he studied the effect of the defect structure of CVD diamond on its growth and properties. Shechtman's Technion research is conducted in the Louis Edelstein Center, and in the Wolfson Centre which is headed by him. He served on several Technion Senate Committees and headed one of them.
Shechtman joined the Iowa State faculty in 2004. He currently spends about five months a year in Ames on a part-time appointment.
Since 2014 he has been the head of the International Scientific Council of Tomsk Polytechnic University.
Work on quasicrystals
From the day Shechtman published his findings on quasicrystals in 1984 to the day Linus Pauling died in 1994, Shechtman experienced hostility from him toward the non-periodic interpretation. "For a long time it was me against the world," he said. "I was a subject of ridicule and lectures about the basics of crystallography. The leader of the opposition to my findings was the two-time Nobel Laureate Linus Pauling, the idol of the American Chemical Society and one of the most famous scientists in the world. For years, 'til his last day, he fought against quasi-periodicity in crystals. He was wrong, and after a while, I enjoyed every moment of this scientific battle, knowing that he was wrong."
Linus Pauling is noted saying "There is no such thing as quasicrystals, only quasi-scientists." Pauling was apparently unaware of a 1981 paper by H. Kleinert and K. Maki which had pointed out the possibility of a non-periodic icosahedral phase in quasicrystals (see the historical notes). The head of Shechtman's research group told him to "go back and read the textbook" and a couple of days later "asked him to leave for 'bringing disgrace' on the team." Shechtman felt rejected. On publication of his paper, other scientists began to confirm and accept empirical findings of the existence of quasicrystals.
The Nobel Committee at the Royal Swedish Academy of Sciences said that "his discovery was extremely controversial," but that his work "eventually forced scientists to reconsider their conception of the very nature of matter."
Through Shechtman's discovery, several other groups were able to form similar quasicrystals by 1987, finding these materials to have low thermal and electrical conductivity, while possessing high structural stability. Quasicrystals have also been found naturally.
A quasiperiodic crystal, or, in short, quasicrystal, is a structure that is ordered but not periodic. A quasicrystalline pattern can continuously fill all available space, but it lacks translational symmetry. "Aperiodic mosaics, such as those found in the medieval Islamic mosaics of the Alhambra palace in Spain and the Darb-i Imam shrine in Iran, have helped scientists understand what quasicrystals look like at the atomic level. In those mosaics, as in quasicrystals, the patterns are regular – they follow mathematical rules – but they never repeat themselves." "An intriguing feature of such patterns, [which are] also found in Arab mosaics, is that the mathematical constant known by the Greek letters phi or tau, the 'golden ratio', occurs over and over again. Underlying it is a sequence worked out by Fibonacci in the 13th century, where each number is the sum of the preceding two."
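The convergence described above is easy to verify numerically: ratios of consecutive Fibonacci numbers approach the golden ratio φ = (1 + √5)/2 ≈ 1.618, the constant that recurs in quasicrystalline and Penrose-tiling geometry. A small illustrative sketch:

```python
# Ratios of consecutive Fibonacci numbers converge to the golden ratio.
phi = (1 + 5 ** 0.5) / 2

def fib_ratios(n: int) -> list:
    """Return the first n ratios F(k+1)/F(k) of the Fibonacci sequence."""
    a, b = 1, 1
    out = []
    for _ in range(n):
        a, b = b, a + b  # each number is the sum of the preceding two
        out.append(b / a)
    return out

ratios = fib_ratios(20)
# ratios[-1] already agrees with phi to roughly eight decimal places
```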
Quasicrystalline materials could be used in a large number of applications, including the formation of durable steel used for fine instrumentation and non-stick insulation for electrical wires and cooking equipment, although few such applications have yet reached the market.
The Nobel prize was 10 million Swedish krona (approximately US$1.5 million at the time).
Presidential bid
On January 17, 2014, in an interview with Israel's Channel One, Shechtman announced his candidacy for President of Israel. Shechtman received the endorsement of the ten Members of Knesset required to run. In the election, held on June 10, 2014, he received only one vote. This led the Israeli press and Israeli humorists to dub Shechtman a "quasi-president", in reference to the "quasi-scientist" quote.
Awards
2019 Honorary John von Neumann Professor title
2014 Fray International Sustainability Award, SIPS 2014
2013 Honorary doctorate from Bar-Ilan University
2011 Nobel Prize in Chemistry for the discovery of quasicrystals
2008 European Materials Research Society (E-MRS) 25th Anniversary Award
2002 EMET Prize in Chemistry
2000 Muriel & David Jacknow Technion Award for Excellence in Teaching
2000 Gregori Aminoff Prize of the Royal Swedish Academy of Sciences
1999 Wolf Prize in Physics
1998 Israel Prize, for physics
1993 Weizmann Science Award
1990 Rothschild Prize in Engineering
1988 New England Academic Award of the Technion
1988 International Award for New Materials of the American Physical Society
1986 Physics Award of the Friedenberg Fund for the Advancement of Science and Education
Published works
See also
List of Israel Prize recipients
List of Israeli Nobel laureates
List of Jewish Nobel laureates
Science and technology in Israel
References
Further reading
D. P. DiVincenzo and P. J. Steinhardt, eds. 1991. Quasicrystals: The State of the Art. Directions in Condensed Matter Physics, Vol 11. .
T. Janssen. 2007. Quasicrystals: Comparative dynamics. Nature Materials, Vol 6., 925–926.
External links
Nobel Laureates from Technion – Israel Institute of Technology.
Story of quasicrystals as told by Shechtman to APS News in 2002.
Biography/CV Page – Technion
TechnionLIVE e-newsletter
Dan Shechtman (Iowa State faculty page)
2012 interview with The Times of Israel
1941 births
Nobel laureates in Chemistry
Israeli Nobel laureates
Crystallographers
Iowa State University faculty
Israel Prize in physics recipients
Israeli chemists
Israeli Jews
Israeli atheists
Israeli materials scientists
Israeli physicists
Jewish atheists
Jewish chemists
Jewish physicists
Living people
Members of the European Academy of Sciences and Arts
Members of the Israel Academy of Sciences and Humanities
Members of the United States National Academy of Engineering
Foreign members of the Russian Academy of Sciences
EMET Prize recipients in the Exact Sciences
Scientists from Tel Aviv
Technion – Israel Institute of Technology alumni
Academic staff of Technion – Israel Institute of Technology
Wolf Prize in Physics laureates
Articles containing video clips
Candidates for President of Israel
Quasicrystals
Weizmann Prize recipients
Vijay P. Singh (born July 15, 1946) is a Distinguished Professor and a Regents Professor, and holds the Caroline and William N. Lehrer Distinguished Chair in Water Engineering at Texas A&M University. His research interests include surface-water hydrology, groundwater hydrology, hydraulics, irrigation engineering, environmental quality, and water resources.
Birth and education
Vijay P. Singh was born in Agra, India, in 1946. He graduated with a BS in Engineering and Technology, with emphasis on Soil and Water Conservation Engineering, from G.B. Pant University of Agriculture and Technology, India, in 1967. He earned an MS in Engineering with specialization in Hydrology from the University of Guelph, Canada, in 1970, and a Ph.D. in Civil Engineering with specialization in Hydrology and Water Resources from Colorado State University, Fort Collins, USA, in 1974. He also earned a D.Sc. from the University of the Witwatersrand, Johannesburg, South Africa, in 1998.
He was elected a member of the National Academy of Engineering (NAE) in 2022 for his contributions to wave modeling and development of entropy-based theories of hydrologic processes and hydroclimatic extremes.
Vijay P. Singh is also a two-time winner of the Norman Medal, the highest honor by the American Society of Civil Engineers established in 1872 to recognize engineering papers distinguished by their "practical value" and "impact on engineering practice". He is one of the few scientists to have received this award twice: he won a Norman Medal in 2010 for a paper with Tommaso Moramarco and Claudia Pandolfo, and in 2023 for a paper with Solomon Vimal.
References
External links
Department of Biological & Agricultural Engineering, TAMU
Texas A&M University faculty
American hydrologists
Indian hydrologists
Hydrologists
Indian scientists
American scientists
1946 births
American people of Indian descent
Living people
A tweet (officially known as a post since 2023) is a short status update on the social networking site Twitter (officially known as X since 2023) which can include images, videos, GIFs, straw polls, hashtags, mentions, and hyperlinks. Around 80% of all tweets are made by 10% of users, averaging 138 tweets per month, with the median user making only two tweets per month.
Following the acquisition of Twitter by Elon Musk in October 2022, and the rebranding of the site as "X" in July 2023, all references to the word "tweet" were removed from the service and changed to "post", and "retweet" was changed to "repost". The terms "tweet" and "retweet" remain the more popular terms for referring to posts on X.
Content
The service has experimented with changing how tweets work over the years to attract more users and to keep them on the site. The character limit was originally 140 characters; in the mid-2010s, media attachments stopped counting toward it, and in 2017 the limit was doubled outright. Now, a tweet can contain up to 280 characters and include media. Users subscribed to X Premium (formerly Twitter Blue) can post up to 25,000 characters and can include bold and italic styling.
Character limit
Tweets originally were limited to 140 characters when the service launched, in 2006. Twitter was originally designed to be used on SMS text messages, which are limited to 160 characters. Twitter reserved 20 characters for the username, leaving 140 characters for the post. The original limit was seen as an iconic fixture of the platform, encouraging "speed and brevity".
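The arithmetic behind the original limit can be written out directly; a trivial sketch (the constant and function names are our own):

```python
# Twitter's original character budget, derived from the SMS constraint:
# a single SMS segment holds 160 characters, and 20 were reserved for
# the sender's username, leaving 140 for the message body.
SMS_SEGMENT_LIMIT = 160
USERNAME_RESERVED = 20

def original_tweet_budget() -> int:
    """Characters left for the tweet body after the username prefix."""
    return SMS_SEGMENT_LIMIT - USERNAME_RESERVED

# original_tweet_budget() → 140
```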
Increasing the limit had been a topic of discussion inside the company for years, and the idea resurfaced in 2015 as a way to grow the userbase. At the time, internal discussion also involved excluding links and mentions from the character limit. By the end of 2015, the company was moving close to introducing a 5,000- or 10,000-character limit, and by January 2016 an internal product named "Beyond 140" was in development, targeting Q1 of that year for expanding tweet limits. In an unfinalized version, tweets that went over the old 140-character threshold showed only the first 140 characters, with a call-to-action indicating there was more in the tweet. Clicking on the tweet would reveal the rest, which was done to retain the same feel of the timeline.
The change was controversial internally and met with backlash by users. Dorsey confirmed that the 140 character limit would remain, but had told employees upon his return as CEO that the once-sacred aspects of Twitter were no longer untouchable.
In May 2016, a week after the plans leaked, Twitter announced that neither media attachments (images, GIFs, videos, polls, quote tweets) nor mentions in replies would count toward the character limit, with the change to be rolled out later in the year so developers could prepare. The changes rolled out in September, except for @replies, which were tested in October and then rolled out in March 2017, a year after the original announcement. These changes were a compromise following the internal resistance to a 10,000-character limit from the year before.
On September 26, 2017, Twitter announced the company was testing doubling the character limit from 140 to 280. It was an effort to let users be more expressive with their tweets, as users were otherwise cramming ideas into a single tweet by rewriting and removing vowels, or not tweeting at all. Testing began with a small group of users in all languages except Japanese, Chinese, and Korean, because those three languages can convey roughly double the information in a single character. According to the company's statistics, 0.4% of tweets in Japanese hit the 140-character ceiling, while 9% of tweets in English hit the ceiling. Users not in the test group were able to see and interact with the longer tweets normally.
The change was similarly controversial internally as the 10,000 character limit proposal. The immediate reaction by Twitter users was largely negative.
Links
URLs can be linked on Twitter. A tweet's links are converted to the t.co link shortener, and use up 23 characters out of the limit. The shortener was introduced in June 2011 to allow users to save space on their links, without needing a third-party service like Bitly or TinyURL.
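The flat 23-character cost means a tweet's effective length cannot be computed with a plain character count. A minimal sketch of the weighted-length rule follows; the regex is a deliberate simplification (Twitter's real twitter-text library uses a far more elaborate URL matcher), and the function name is our own:

```python
import re

# Every URL is replaced by a t.co link that counts as a fixed 23
# characters, regardless of how long the original URL is.
TCO_LENGTH = 23
URL_PATTERN = re.compile(r"https?://\S+")

def weighted_length(text: str) -> int:
    """Length of a tweet after each URL is collapsed to a 23-char t.co link."""
    stripped, n_urls = URL_PATTERN.subn("", text)
    return len(stripped) + n_urls * TCO_LENGTH

# A 70-character URL still costs only 23 characters of the limit:
tweet = "Read this: " + "https://example.com/" + "a" * 50
```

Under this rule, `weighted_length(tweet)` is 34 (11 characters of text plus 23 for the link), even though the raw string is 81 characters long.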
Media
Some users took screenshots of text and uploaded them as images to increase the number of words they could include in a tweet.
Cards
Beginning in 2012, tweets linking to partnered websites would show, below the content of the tweet, expanded media: an excerpt of a linked news article or an embedded video. Twitter already had a way to see Instagram posts and YouTube videos, called "expanded tweets". Twitter then began allowing websites to apply to test offering cards for Twitter users. Later in 2012, notably after Facebook purchased it, Instagram started cropping images displayed in cards, with the plan to end them altogether.
CoTweets
Between July 2022 and January 2023, Twitter tested a feature where two users could be the authors of a single tweet, which would be posted on both of their accounts. Both users' profile pictures, names, and handles were shown. One user drafted a tweet in the composer field, then invited a user who was both following them and had a public account. Edits could not be made once the invite was sent; the alternative was deleting the invitation and making a second one. The second author could accept the invitation, at which point the tweet would be posted to both accounts. Once published, the second user could revoke their co-authorship, and the tweet would change to being written by the first author alone, being removed from the second author's tweets. Until the second author accepted the invitation, the tweet would be unlisted, not appearing on the authors' timelines or in searches, but available via a direct link.
It was tested with some accounts in the US, Canada, and South Korea. The company noted during the test that the feature may be turned off and all CoTweets deleted. The feature was spotted in code in December 2021.
On January 31, Twitter suddenly and quietly stopped new CoTweets from being made, though it noted that the feature could return in the future. Existing CoTweets remained visible for another month, before being converted to a normal tweet for the first author and a retweet for the second author. Though Twitter's support page offered only a generic reason for discontinuing the feature, Elon Musk said that it was to focus on allowing users to add text attachments.
Vibes
Twitter briefly tested a feature in 2022 that allowed users to set a current status, codenamed "vibe", for a tweet or account, chosen from a small set of emoji-phrase combinations. Users could either tag individual tweets, or set a status at the profile level, where it showed on both tweets and the profile. Testing began in June 2022 with a wider selection of vibes that could be put above tweets, but the feature disappeared after some time. Phrases included "✔️ Current status" and "💤 Case of the Mondays". Twitter later removed the ability to add vibes to tweets.
Interactions
Users can interact with tweets by 'retweeting' (reblogging), liking, quoting the tweet, or replying to it.
Retweets
In November 2009, Twitter began rolling out the ability to 'retweet' a tweet. Prior to this, people would write "RT @username" before quoting the original tweet. Some people kept their tweets shorter than the 140-character limit so that others could always fit the entire tweet in a proto-retweet. In 2023, with the rebranding of Twitter to X, "retweets" were quietly renamed "reposts"; however, "retweet" remained the most commonly used term on the site.
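The character arithmetic behind the proto-retweet convention can be sketched in a few lines of Python (the helper function is hypothetical, not part of any Twitter API):

```python
# Manual "RT @username" retweets, as practiced before November 2009.
# The prefix consumed part of the 140-character budget, which is why
# authors kept their own tweets shorter than the limit.

LIMIT = 140

def proto_retweet(username, original_text):
    """Build a manual retweet; return None if it would not fit."""
    text = f"RT @{username}: {original_text}"
    return text if len(text) <= LIMIT else None

# A short tweet still fits after the attribution prefix is added:
print(proto_retweet("jack", "just setting up my twttr"))
# → RT @jack: just setting up my twttr

# A tweet that already used all 140 characters could not be
# retweeted verbatim:
print(proto_retweet("jack", "x" * 140))  # → None
```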
Liking
Tweets can be liked by users, adding them to a list that other users were once able to view, before likes became private for all users. The feature has been available since Twitter launched in 2006. Until 2015, 'likes' were called 'favorites' (or 'favs'). The service renamed them because people "often misunderstood" the feature, and people reacted more positively to the new name in user tests. Users had the option of hiding their likes from the public, though a like would still appear in the list of users who liked a given tweet. Jack Dorsey said in 2019 that, if he had to create Twitter over again, he would deemphasize the like, or not include it at all, because it did not positively contribute to healthy conversations.
When likes were public, they were not broadcast to the liking user's tweets timeline, and users would often forget that their likes were visible and like more revealing tweets. High-profile users' and politicians' accounts have liked pornographic, hateful, and racist tweets. For instance, in 2017, Ted Cruz's account liked a tweet with a two-minute porn video about a day after it was posted. Cruz said that many people had access to his account and that one of his staff members had pressed the like button in "an honest mistake".
Likes were later made private for all users' profiles, with Elon Musk stating "Public likes are incentivizing the wrong behavior" and encouraging users to like more tweets without fear of being noticed. Likes are now anonymous, visible only to the author of the tweet and to the person who liked it. Verified users could choose to make their likes private prior to the update.
When not logged in, users see tweets sorted by how many likes they received, as opposed to reverse-chronological order.
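The difference between the two orderings can be illustrated with a small sketch (the tweet records and field names here are hypothetical):

```python
# Logged-out view: sort by like count.
# Logged-in default: reverse-chronological (newest first).
tweets = [
    {"text": "a", "likes": 5,  "posted": 3},
    {"text": "b", "likes": 90, "posted": 1},
    {"text": "c", "likes": 20, "posted": 2},
]

by_likes = sorted(tweets, key=lambda t: t["likes"], reverse=True)
reverse_chronological = sorted(tweets, key=lambda t: t["posted"], reverse=True)

print([t["text"] for t in by_likes])               # ['b', 'c', 'a']
print([t["text"] for t in reverse_chronological])  # ['a', 'c', 'b']
```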
Quote tweets
In 2014, Twitter began testing a new feature that allowed users to embed a tweet inside their own tweet to add additional commentary. Prior to this, users could include a snippet of another tweet in a new tweet, but were limited by the then-140-character limit. The feature was originally called "retweet with comment", and was later named "quote tweet". Following the rebranding of Twitter to X, quote tweets were renamed, simply dropping the "tweet" to become "quotes"; the common name still largely remains "quote tweets".
Threads
Multiple tweets in reply to each other are grouped together in 'threads'. The 140-character limit prevented users from posting thoughts as complete as they desired, so they resorted to making upwards of dozens of tweets, which displayed in a disjointed manner dubbed a "tweetstorm". The practice was popularized by Marc Andreessen.
Bookmarking
Users are able to bookmark individual tweets via the bookmark button, or within the share menu, saving them to revisit later. Bookmarks are private, but each tweet displays the number of times it has been bookmarked, if at all.
The development of the feature was revealed in October 2017. The feature, highly requested by Japanese users, started from an annual hack week at the company, where it was called "#ShareForLater". Previously, users would resort to liking a tweet or sending it to themselves. Liking tweets is often seen as an endorsement, and at the time likes were public and triggered a notification to the user who made the tweet. The feature was tested in November 2017 for some users, and rolled out in February 2018 on mobile alongside a new share menu. The web version of Twitter did not test the bookmark feature until November 2018. When released, the user who made the tweet was not informed that it had been bookmarked.
Fact-checking
In March 2020, Twitter added a label to a manipulated video of then-candidate Joe Biden that Donald Trump retweeted. Two months later, as a result of the COVID-19 pandemic, Twitter introduced a policy of labeling or warning users about tweets containing COVID-19 misinformation. The company said at the time that labels would come to cover other areas, and shortly afterwards misleading information about elections was included.
On May 26, then-US president Donald Trump made two false statements about mail-in ballots, claiming they would be "substantially fraudulent". Within 24 hours of the tweets, Twitter's general counsel and the acting head of policy jointly decided to label them, after several hours of internal debate among company leaders, and then-CEO Jack Dorsey signed off on the decision shortly before the label was applied. The labels, which told readers to "Get the facts about mail-in ballots", marked the first time Trump's tweets had been labeled. A spokesperson for Twitter said that the tweets contained "potentially misleading information about voting processes and have been labeled to provide additional context around mail-in ballots". The label linked to articles by CNN, The Washington Post, and The Hill, as well as summaries of the claims of fraud.
Three days later, a tweet by Trump about the George Floyd protests in Minneapolis–Saint Paul was hidden from view.
Community Notes
In the weeks after the January 6 United States Capitol attack, Twitter rolled out a new program that allowed users to add notes underneath tweets that would benefit from additional context.
Prior to the transfer of Twitter to Elon Musk, Community Notes were officially called Birdwatch.
History
The first tweet was made by Jack Dorsey on March 21, 2006. It has the Snowflake ID of 20.
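Later Snowflake IDs pack a millisecond timestamp, a machine ID, and a sequence number into one 64-bit integer; the sketch below works with that commonly documented layout (the helper names and example values are illustrative, not Twitter's actual code — very early tweets, such as ID 20, predate the scheme and were simple sequential IDs):

```python
# Commonly documented Snowflake layout: 41-bit millisecond timestamp
# (offset from the Twitter epoch), 10-bit machine ID, 12-bit sequence.
TWITTER_EPOCH_MS = 1288834974657  # 2010-11-04, start of the Snowflake epoch

def encode_snowflake(timestamp_ms, machine_id, sequence):
    return ((timestamp_ms - TWITTER_EPOCH_MS) << 22) | (machine_id << 12) | sequence

def decode_snowflake(snowflake_id):
    timestamp_ms = (snowflake_id >> 22) + TWITTER_EPOCH_MS
    machine_id = (snowflake_id >> 12) & 0x3FF  # low 10 bits of the middle field
    sequence = snowflake_id & 0xFFF            # low 12 bits
    return timestamp_ms, machine_id, sequence

# Round trip with hypothetical field values:
sid = encode_snowflake(TWITTER_EPOCH_MS + 1000, machine_id=5, sequence=7)
print(decode_snowflake(sid) == (TWITTER_EPOCH_MS + 1000, 5, 7))  # → True
```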
The Iconfactory was developing a Twitter application in 2006 called "Twitterrific" and developer Craig Hockenberry began a search for a shorter way to refer to "Post a Twitter Update." In 2007 they began using "twit" before Twitter developer Blaine Cook suggested that "tweet" be used instead.
"Tweet" was added to the Merriam-Webster dictionary in 2011 and to the Oxford English Dictionary in 2012. Both its use as a verb and noun were added. This was notable as the Oxford English Dictionary normally waits ten years after the coining of a word to add it to the dictionary.
In 2023, the terms "tweet" and "retweet" were quietly retired in favor of the terms "post" and "repost", as a part of Twitter's rebrand to X, but many users continue to use the former terms on the platform.
Demographics
The median Twitter user tweets twice a month. Around 80% of tweets are made by 10% of users, who tweet 138 times per month. 65% of these prolific users are women, compared to 48% of the bottom 90%. Most of the prolific users tweet about political issues, though there is no difference in political views between the two groups. 25% of the prolific users use automated tools to make tweets, compared to 15% of the others.
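Taking the reported figures at face value, the implied average rate for the bottom 90% can be checked with a line of arithmetic:

```python
# Per 100 users: the top 10% tweet 138 times/month and produce 80% of tweets.
top_users, top_rate, top_share = 10, 138, 0.80

total_tweets = top_users * top_rate / top_share  # tweets/month per 100 users
bottom_tweets = total_tweets * (1 - top_share)   # share left for the bottom 90
bottom_avg = bottom_tweets / 90                  # mean rate, bottom 90%

print(round(total_tweets), round(bottom_avg, 1))  # → 1725 3.8
```

An average of about 3.8 tweets per month for the bottom 90% is consistent with the reported median of 2, since the distribution is heavily skewed.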
References
External links
How to Tweet — Twitter Support
Twitter
Internet slang
2000s neologisms
Internet terminology
2006 introductions
British Mid-Ocean Ridge Initiative

The British Mid-Ocean Ridge Initiative (the BRIDGE Programme) was a multidisciplinary scientific investigation of the creation of the Earth's crust in the deep oceans. It was funded by the UK's Natural Environment Research Council (NERC) from 1993 to 1999.
Mid-ocean ridges

Mid-ocean ridges are active volcanic mountain ranges snaking through the depths of the Earth's oceans. They occur where the edges of the Earth's tectonic plates are separating, allowing mantle rock to rise to the seafloor and harden, creating new crust. The addition of this crust can cause ocean basins to widen perpendicular to the ridge. This seafloor spreading is the engine of continental drift. At intervals along the mid-ocean ridges, super-heated mineral-rich fluids are vented from the seabed. These hydrothermal vents are populated by animal and bacterial species not found elsewhere on Earth.
BRIDGE investigated the geological setting of the ridge, the geochemistry of vent fluids, and ways in which biological communities survive in this apparently hostile environment. To achieve this the programme developed novel deep-ocean technologies for deployment from surface ships and manned submersibles. It also conducted experimental research into the mechanical and chemical nature of the rocks and underlying crust in these active volcanic regions. The scale of the investigation ranged from extensive regional studies mapping unexplored seafloor to microscopic and chemical analyses at individual vent sites. To achieve the programme's objectives work was focused at five contrasting locations: the Mid-Atlantic Ridge at 24–30°N; the Mid-Atlantic Ridge at 36–39°N; Iceland and the Reykjanes Ridge to its south west; the Scotia back-arc basin (SW Atlantic); and the Lau basin (SW Pacific). Intensive localised studies were made within these areas.
Background to BRIDGE
The idea for a British mid-ocean ridge research programme was developed by Professors Joe Cann of Leeds University and Roger Searle of Durham University after they attended a meeting in Oregon in 1987 where the idea for a US mid-ocean ridge research programme (the RIDGE Program) was being developed.
In the UK researchers in many disciplines were already studying mid-ocean ridges but it was felt this research could be better integrated to produce new multidisciplinary approaches yielding results of wider significance.
The 'BRIDGE' branding of research commenced before research council funding was sought for a formal programme. BRIDGE was mentioned by name in The Independent newspaper in February 1989. By this time the community of researchers in this field were referring to themselves as the BRIDGE Consortium. Deep-ocean science cruises were being identified as BRIDGE cruises by 1990. The first BRIDGE newsletter appeared in 1991.
Once the idea of BRIDGE was in place an application for funding was made to the Natural Environment Research Council. This was successful and full funding commenced in 1993 for a programme that would run until 1999. The final budget was £13M.
Aims
To invest in British mid-ocean ridge research so that both human skills and instrument resources were increased
To use both existing capabilities and newly developed instruments to solve some of the fundamental scientific problems pertaining to mid-ocean ridges
To expand UK mid-ocean ridge research to involve a wider range of skills and new techniques
To seek both direct and indirect commercial benefits from mid-ocean ridge research
To liaise with other national programmes to maximise the benefits of British activities
This last aim was achieved directly and by participation in the international InterRidge network.
Objectives
To undertake the crucial observations, experiments and modelling aimed at solving fundamental scientific problems
To conduct the basic surveys necessary to site both regional and local studies
To develop new marine instrumentation for use in experiments
To acquire access to the survey vehicles and instruments necessary for undertaking the science
To attract scientists from diverse disciplines to participate in mid-ocean ridge research
To consult the UK marine instrumentation community to refine requirements for, and capabilities of, new instruments
To seek active involvement of the UK biotechnology community in mid-ocean ridge research
To construct and update plans that would enable these aims and objectives to be met
Scientific problems
From the wide range of scientific problems that could be addressed by mid-ocean ridge research, BRIDGE identified six that were of most relevance to UK research.
How does the three-dimensional structure of mid-ocean ridges, and especially their segmentation by transform faults and similar features, relate to the physical properties and dynamics of the underlying Earth's mantle?
Can the geochemistry of the lavas erupted at mid-ocean ridges give insights into the scale and origin of heterogeneities in the underlying mantle?
What is the nature of the magmatic plumbing system within the crust and upper mantle below mid-ocean ridges?
How does the rate of flow and geochemical composition of the black smoker hydrothermal vent fluids vary with time, and what causes this variation? Can this help us to understand more about the origin of ore deposits found on land?
How do the bacteria that live around the black smoker hot springs survive the high temperatures and the toxic environment? Can these capabilities be exploited in biotechnology?
How do the chemical and biological processes at the black smoker vent fields affect the global flux of chemical species in and out of the ocean? Are the nutrient levels of the oceans partly controlled from the mid-ocean ridges?
Programme structure
Science
The scientific aims and objectives of the programme were directed by an international steering committee which met twice a year. The programme held a series of annual funding rounds to which scientists and engineers in the field submitted research proposals. Following a peer-review assessment of each proposal by independent referees the steering committee ranked the most highly rated proposals on their scientific merit and contribution to the programme's objectives. This short-list was then recommended to NERC for funding.
Management
From 1993 to 1995 programme management (day-to-day administration and budget oversight) was undertaken by NERC head office in Swindon. A separate Science Coordinator role (incorporating, among other duties, responsibility for expanding the BRIDGE Consortium, organising national conferences and publishing the newsletter) was based at Leeds University where the BRIDGE Chief Scientist, Joe Cann, was chairman of Earth Sciences.
In 1995 NERC began contracting out programme management for their large programmes. BRIDGE programme management absorbed the science coordination role and a new programme manager was appointed, based at Leeds University. The Leeds BRIDGE office was the programme hub until the end of March 1999 after which the conclusion of the programme was administered by NERC.
Programme content
BRIDGE funded 44 research projects: 4 multidisciplinary; 15 geology; 6 biology; 11 studies of the hydrothermal environment at vent fields (9 of the ocean floor and 2 of the overlying water column); and 8 engineering projects to develop the required technologies. More than 200 scientists in 28 research centres around the UK contributed to this programme. There were 26 BRIDGE deep-ocean research cruises to the North Atlantic, SW Atlantic, SE Pacific, SW Pacific and Indian oceans, 18 of which were directly funded by the programme.
Results
To discuss and publicise the programme's results BRIDGE organised its own science conferences at Durham University (1991), the Institute of Oceanographic Sciences Deacon Laboratory (IOSDL), Wormley (1992), Leeds University (1993), Oxford University (1994), the Geological Society of London (1994, 1995 and 1997), Cambridge University (1996), Southampton Oceanography Centre (1997) and Bristol University (1998). In addition BRIDGE science was reported at other meetings nationally and internationally, for example: at the Royal Society meeting Mid-Ocean Ridges: Dynamics of Processes Associated with Creation of New Ocean Crust (1996), the 1996 British Association for the Advancement of Science annual science festival at Birmingham in a BRIDGE session entitled Abyssal Inferno: Seafloor volcanoes, hot vents and exotic life at the mid-ocean ridges, at Geoscience 98, Keele University (1998), at the meeting Technology for Deep-Sea Geological Investigations at the Geological Society of London (1998) and at meetings of the American Geophysical Union.
Three of the BRIDGE conferences resulted in books published by the Geological Society of London, presenting in greater detail the science reported at the meetings.
Throughout the programme rapid publication of results was effected through The BRIDGE Newsletter. In style this was an academic journal (but without peer review) comprising BRIDGE science results together with conference announcements, meeting reports, cruise reports, updates from the mid-ocean ridge programmes of other nations and general news items of relevance to this field of research. It was published twice a year in spring and autumn. The first issue of eight stapled sheets appeared in August 1991 but after NERC funding commenced it was commercially printed and bound. By issue 10, in April 1996, it had grown to 100 pages and was being distributed to more than 600 researchers and interested parties in 20 countries. The last newsletter, No. 17, was produced in autumn 1999 as a magazine called The Fiery Deep, Exploring a New Earth summarising the programme and its results to that time. On 16 November 1999 at the Natural History Museum, London these results were presented to invited guests at a formal end of programme meeting.
As the programme ended, Joe Cann reported, "As a result of the BRIDGE initiative, several groups of UK scientists are at the forefront of international research in mid-ocean ridge science. The areas of expertise of these scientists range from marine geophysics and geodynamics, physical and chemical oceanography, to marine biology." "Every area had success. Here are a few examples. We found new pools of molten rock below the ocean floor where none was expected. We discovered large fields of hot springs, where the wisdom of the time said there should be none. We followed the strange lifecycle of the blind shrimp that live around hot springs in the Atlantic. We made sonar images of the first of a family of enormous faults that slice through the ocean floor, bringing deep rock to the surface. We showed how the flow of one of the big, hot spring fields was affected by scientific drilling. We traced the relationships between animals in hot spring communities up and down the Atlantic. We built new instruments, too, that can operate in these hostile regions".
Legacy
In addition to the results of the researches, which are still quoted, the BRIDGE Programme left an interdisciplinary community of deep-ocean scientists with a proven track record of collaboration and new equipment for working at depths of over 3,500 metres.
BRIDGE equipment
BRIDGE had purchased for the UK research fleet a Simrad multibeam echosounder for mapping the seafloor from a surface ship. To increase detail in any geographical areas of interest it also funded upgrades to the existing UK Towed Ocean Bottom Instrument (TOBI), which made 3D images of the seabed as it was towed 300m above the ocean floor. TOBI was modified to increase its resolution, to add a gyrocompass and to add a three component magnetometer for measuring the magnetic field of the seafloor rock over which it was towed.
The BRIDGE Towed instrument (BRIDGET), was developed for hunting and studying the plumes of warm, mineral rich fluids rising into the water column from vent fields. This "hot-spring sniffer" was towed at depth behind a ship in areas where vent fields were suspected to occur and fed geochemical data back to the ship in real time.
Once fields had been detected the fluids venting from the sea-floor could be studied directly using the MEDUSA instrument. Deployed by a deep submergence vehicle, this could be placed over individual vents for extended time periods to record the characteristics of the fluids as they emerge. At the BRIDGE programme's close six MEDUSA instruments had been built with BRIDGE funding, three more were constructed for the Geological Survey of Japan, and the next generation was being developed for various US agencies including NASA.
For examining the rock of the mid-ocean ridge a new deep ocean drill, the BRIDGE Drill, was developed which marked the core as it drilled. The marking of the core allowed the original north-south orientation of the core to be known after it had been removed. This permitted the magnetic alignment of the rock from which the sample was taken to be determined, providing information on sea-bed movements that had taken place after the rock had formed.
For study of the dispersal of animals found at the vent fields, the biologists developed a Planktonic Larval Sampler for Molecular Analysis (PLASMA). This was designed to take samples of water to catch the dispersing larvae of animals living around the vents. PLASMA could be left on the sea-bed in the vicinity of a vent field for up to a year if required, sampling at programmed intervals and preserving any larvae for DNA analysis after the recovery of the equipment.
BRIDGE data archive
BRIDGE collected and compiled: multibeam bathymetry, sonar imagery, seismic data, electromagnetic data, gravimetry, petrology (including rock sections, cores, sediments and analytical data), chemical and physical oceanography (samples and analytical data), macro- and microbiology (specimens, film and analytical data); numerical models and audiovisual records. For the benefit of future researchers a BRIDGE data archive was lodged with the UK's National Oceanography Centre at Southampton.
Notes
References
BRIDGE Newsletter page references
External links
BRIDGE Data Archive
BRIDGE Drill
Natural Environment Research Council
Oceanography
Research projects
Coalescent angiogenesis

Angiogenesis is the process by which new blood vessels form from pre-existing vascular structures; it is needed to oxygenate and provide nutrients to expanding tissue. Angiogenesis takes place through different modes of action. Coalescent angiogenesis is a mode of angiogenesis in which vessels coalesce, or fuse, to increase blood circulation. This process transforms an inefficient net structure into a more efficient treelike structure. It is the opposite of intussusceptive angiogenesis, in which vessels split to form new vessels.
Background
While the most studied mode of angiogenesis is sprouting angiogenesis, several other modes have been described, among them intussusceptive (or splitting) angiogenesis, vessel cooption, and vessel elongation. A novel form of angiogenesis is the process called coalescent angiogenesis, which is the opposite of intussusceptive angiogenesis. This mode of angiogenesis was reported from studies using long-term time-lapse microscopy of the vasculature of the chick chorioallantoic membrane (CAM), where this novel non-sprouting mode of vessel generation was observed.
Specifically, isotropic capillary meshes enclosing tissue islands evolve into preferred flow pathways consisting of larger blood vessels transporting more blood at a faster pace. These preferential flow pathways progressively enlarge by coalescence of capillaries and elimination of internal tissue pillars, on a fast time scale of hours. In this way, coalescent angiogenesis is the reverse of intussusceptive angiogenesis. Concomitantly, less perfused segments of the vasculature regress. An initially mesh-like capillary network is remodelled into a tree structure, while conserving vascular wall components and maintaining blood flow. Coalescent angiogenesis thus describes the remodelling of an initially hemodynamically inefficient mesh structure into a hierarchical tree structure that provides efficient convective transport, allowing rapid expansion of the vasculature with maintained blood supply and function during development.
Vascular fusion was initially described during the formation of the dorsal aorta. All research presented so far derives from embryo development studies, and it is unknown whether coalescent angiogenesis occurs outside the domain of embryology. In any case, it has been overlooked in the field of cancer research and is currently only assumed to play a role in the formation of tumor vasculature.
References
Angiogenesis
5-cell honeycomb

In four-dimensional Euclidean geometry, the 4-simplex honeycomb, 5-cell honeycomb or pentachoric-dispentachoric honeycomb is a space-filling tessellation honeycomb. It is composed of 5-cell and rectified 5-cell facets in a ratio of 1:1.
Structure
Cells of the vertex figure are ten tetrahedrons and 20 triangular prisms, corresponding to the ten 5-cells and 20 rectified 5-cells that meet at each vertex. All the vertices lie in parallel realms in which they form alternated cubic honeycombs, the tetrahedra being either tops of the rectified 5-cell or the bases of the 5-cell, and the octahedra being the bottoms of the rectified 5-cell.
Alternate names
Cyclopentachoric tetracomb
Pentachoric-dispentachoric tetracomb
Projection by folding
The 5-cell honeycomb can be projected into the 2-dimensional square tiling by a geometric folding operation that maps two pairs of mirrors into each other, sharing the same vertex arrangement.
Two different aperiodic tilings with 5-fold symmetry can be obtained by projecting two-dimensional slices of the honeycomb: the Penrose tiling composed of rhombi, and the Tübingen triangle tiling composed of isosceles triangles.
A4 lattice
The vertex arrangement of the 5-cell honeycomb is called the A4 lattice, or 4-simplex lattice. The 20 vertices of its vertex figure, the runcinated 5-cell, represent the 20 roots of the A4 Coxeter group. It is the 4-dimensional case of a simplectic honeycomb.
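In the standard realization of the A4 root system, the 20 roots are the vectors in Z^5 whose coordinates are a permutation of (1, -1, 0, 0, 0); they can be enumerated directly (a short sketch, assuming this standard coordinate realization):

```python
from itertools import permutations

# Roots of A4: all distinct permutations of (1, -1, 0, 0, 0)
# in 5 coordinates.  There are 5!/3! = 20 of them.
roots = {p for p in permutations((1, -1, 0, 0, 0))}
print(len(roots))  # → 20

# Every root lies in the hyperplane x1+...+x5 = 0 (where the A4
# lattice lives) and has squared length 2.
assert all(sum(r) == 0 for r in roots)
assert all(sum(x * x for x in r) == 2 for r in roots)
```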
The A4* lattice is the union of five A4 lattices, and is the dual to the omnitruncated 5-simplex honeycomb, and therefore the Voronoi cell of this lattice is an omnitruncated 5-cell.
Related polytopes and honeycombs
The tops of the 5-cells in this honeycomb adjoin the bases of the 5-cells, and vice versa, in adjacent laminae (or layers); but alternating laminae may be inverted so that the tops of the rectified 5-cells adjoin the tops of the rectified 5-cells and the bases of the 5-cells adjoin the bases of other 5-cells. This inversion results in another non-Wythoffian uniform convex honeycomb. Octahedral prisms and tetrahedral prisms may be inserted in between alternated laminae as well, resulting in two more non-Wythoffian elongated uniform honeycombs.
Rectified 5-cell honeycomb
The rectified 4-simplex honeycomb or rectified 5-cell honeycomb is a space-filling tessellation honeycomb.
Alternate names
small cyclorhombated pentachoric tetracomb
small prismatodispentachoric tetracomb
Cyclotruncated 5-cell honeycomb
The cyclotruncated 4-simplex honeycomb or cyclotruncated 5-cell honeycomb is a space-filling tessellation honeycomb. It can also be seen as a birectified 5-cell honeycomb.
It is composed of 5-cell, truncated 5-cell, and bitruncated 5-cell facets in a ratio of 2:2:1. Its vertex figure is a tetrahedral antiprism, with 2 regular tetrahedra, 8 triangular pyramids, and 6 tetragonal disphenoids as cells, defining 2 5-cell, 8 truncated 5-cell, and 6 bitruncated 5-cell facets around a vertex.
It can be constructed as five sets of parallel hyperplanes, each dividing space into two half-spaces. The 3-space hyperplanes contain quarter cubic honeycombs as a collection of facets.
Alternate names
Cyclotruncated pentachoric tetracomb
Small truncated-pentachoric tetracomb
Truncated 5-cell honeycomb
The truncated 4-simplex honeycomb or truncated 5-cell honeycomb is a space-filling tessellation honeycomb. It can also be called a cyclocantitruncated 5-cell honeycomb.
Alternate names
Great cyclorhombated pentachoric tetracomb
Great truncated-pentachoric tetracomb
Cantellated 5-cell honeycomb
The cantellated 4-simplex honeycomb or cantellated 5-cell honeycomb is a space-filling tessellation honeycomb. It can also be called a cycloruncitruncated 5-cell honeycomb.
Alternate names
Cycloprismatorhombated pentachoric tetracomb
Great prismatodispentachoric tetracomb
Bitruncated 5-cell honeycomb
The bitruncated 4-simplex honeycomb or bitruncated 5-cell honeycomb is a space-filling tessellation honeycomb. It can also be called a cycloruncicantitruncated 5-cell honeycomb.
Alternate names
Great cycloprismated pentachoric tetracomb
Grand prismatodispentachoric tetracomb
Omnitruncated 5-cell honeycomb
The omnitruncated 4-simplex honeycomb or omnitruncated 5-cell honeycomb is a space-filling tessellation honeycomb. It can also be seen as a cyclosteriruncicantitruncated 5-cell honeycomb.
It is composed entirely of omnitruncated 5-cell (omnitruncated 4-simplex) facets.
Coxeter calls this Hinton's honeycomb after C. H. Hinton, who described it in his book The Fourth Dimension in 1906.
The facets of all omnitruncated simplectic honeycombs are called permutohedra and can be positioned in n+1 space with integral coordinates, permutations of the whole numbers (0,1,..,n).
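For n = 4, this gives the vertices of the omnitruncated 4-simplex as the 120 permutations of (0, 1, 2, 3, 4), all lying in a hyperplane of 5-space — a direct sketch:

```python
from itertools import permutations

# Vertices of the n = 4 permutohedron (omnitruncated 4-simplex):
# all permutations of the whole numbers (0, 1, 2, 3, 4).
verts = list(permutations(range(5)))
print(len(verts))  # → 120, i.e. 5!

# All vertices lie in the hyperplane x1+...+x5 = 0+1+2+3+4 = 10,
# so the polytope is genuinely 4-dimensional inside 5-space.
assert all(sum(v) == 10 for v in verts)
```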
Alternate names
Omnitruncated cyclopentachoric tetracomb
Great-prismatodecachoric tetracomb
A4* lattice
The A4* lattice is the union of five A4 lattices, and is the dual to the omnitruncated 5-cell honeycomb, and therefore the Voronoi cell of this lattice is an omnitruncated 5-cell.
Alternated form
This honeycomb can be alternated, creating omnisnub 5-cells with irregular 5-cells created at the deleted vertices. Although it is not uniform, the 5-cells have a symmetry of order 10.
See also
Regular and uniform honeycombs in 4-space:
Tesseractic honeycomb
16-cell honeycomb
24-cell honeycomb
Truncated 24-cell honeycomb
Snub 24-cell honeycomb
Notes
References
Norman Johnson Uniform Polytopes, Manuscript (1991)
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380–407, MR 2,10] (1.9 Uniform space-fillings)
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs) Model 134
, x3o3o3o3o3*a - cypit - O134, x3x3x3x3x3*a - otcypit - 135, x3x3x3o3o3*a - gocyropit - O137, x3x3o3x3o3*a - cypropit - O138, x3x3x3x3o3*a - gocypapit - O139, x3x3x3x3x3*a - otcypit - 140
Affine Coxeter group Wa(A4), Quaternions, and Decagonal Quasicrystals, Mehmet Koca, Nazife O. Koca, Ramazan Koc (2013)
Honeycombs (geometry)
5-polytopes
Shrub

A shrub or bush is a small-to-medium-sized perennial woody plant. Unlike herbaceous plants, shrubs have persistent woody stems above the ground. Shrubs can be either deciduous or evergreen. They are distinguished from trees by their multiple stems and shorter height. Small shrubs, less than 2 m (6.6 ft) tall, are sometimes termed subshrubs. Many botanical groups have species that are shrubs, and others that are trees and herbaceous plants instead.
Some define a shrub as less than and a tree as over 6 m. Others use as the cutoff point for classification. Many trees do not reach this mature height because of hostile growing conditions and so resemble shrub-sized plants, though other individuals of the same species have the potential to grow taller in ideal conditions. In longevity, most shrubs fall between perennials and trees: some last only about five years even in good conditions, while others, usually larger and more woody, live beyond 70. On average, shrubs die after eight years.
Shrubland is the natural landscape dominated by various shrubs; there are many distinct types around the world, including fynbos, maquis, shrub-steppe, shrub swamp and moorland. In gardens and parks, an area largely dedicated to shrubs (now somewhat less fashionable than a century ago) is called a shrubbery, shrub border or shrub garden. There are many garden cultivars of shrubs, bred for flowering, for example rhododendrons, and sometimes even leaf colour or shape.
Compared to trees and herbaceous plants, relatively few shrubs have culinary uses. Apart from the several berry-bearing species (using the culinary rather than botanical definition), few are eaten directly, and unlike trees they are generally too small for much timber use. Those that are used include several perfumed species, such as lavender and rose, and a wide range of plants with medicinal uses. Tea and coffee sit on the tree–shrub boundary; they are normally harvested from shrub-sized plants, but these would be large enough to become small trees if left to grow.
Definition
Shrubs are perennial woody plants, and therefore have persistent woody stems above ground (compare with succulent stems of herbaceous plants). Usually, shrubs are distinguished from trees by their height and multiple stems. Some shrubs are deciduous (e.g. hawthorn) and others evergreen (e.g. holly). Ancient Greek philosopher Theophrastus divided the plant world into trees, shrubs and herbs.
Small, low shrubs, generally less than tall, such as lavender, periwinkle and most small garden varieties of rose, are often termed subshrubs.
Most definitions characterize shrubs as possessing multiple stems with no main trunk, because the stems branch below ground level. There are exceptions: some shrubs have main trunks, but these tend to be very short and divide into multiple stems close to ground level. Many trees, such as oak or ash, can also grow in multi-stemmed forms while still reaching tree height.
Use in gardens and parks
An area of cultivated shrubs in a park or a garden is known as a shrubbery. When clipped as topiary, suitable species or varieties of shrubs develop dense foliage and many small leafy branches growing close together. Many shrubs respond well to renewal pruning, in which hard cutting back to a "stool" removes everything but the vital parts of the plant, resulting in long new stems known as "canes". Other shrubs respond better to selective pruning, removing dead, unhealthy or otherwise unattractive parts to reveal their structure and character.
Shrubs in common garden practice are generally considered broad-leaved plants, though some smaller conifers such as mountain pine and common juniper are also shrubby in structure. Species that grow into a shrubby habit may be either deciduous or evergreen.
Botanical structure
In botany and ecology, the term shrub is used more specifically to describe the particular physical canopy structure, or plant life-form, of woody plants which are less than high and usually have multiple stems arising at or near the surface of the ground. For example, a descriptive system widely adopted in Australia classifies vegetation by structural characteristics of life-form, plus the height and amount of foliage cover of the tallest layer or dominant species.
For shrubs that are high, the following structural forms are categorized:
dense foliage cover (70–100%) — closed-shrubs
mid-dense foliage cover (30–70%) — open-shrubs
sparse foliage cover (10–30%) — tall shrubland
very sparse foliage cover (<10%) — tall open shrubland
For shrubs less than high, the following structural forms are categorized:
dense foliage cover (70–100%) — closed-heath or closed low shrubland—(North America)
mid-dense foliage cover (30–70%) — open-heath or mid-dense low shrubland—(North America)
sparse foliage cover (10–30%) — low shrubland
very sparse foliage cover (<10%) — low open shrubland
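As a hedged illustration (the function name and threshold encoding are ours, not part of the Australian system itself), the categories for the taller shrub class can be expressed as a small Python function:

```python
# Hedged sketch (function name and encoding are ours): the Australian
# structural categories for the taller shrub class, keyed on foliage cover.
def shrubland_form(foliage_cover_pct):
    if foliage_cover_pct >= 70:
        return "closed-shrub"        # dense foliage cover (70-100%)
    if foliage_cover_pct >= 30:
        return "open-shrub"          # mid-dense foliage cover (30-70%)
    if foliage_cover_pct >= 10:
        return "tall shrubland"      # sparse foliage cover (10-30%)
    return "tall open shrubland"     # very sparse foliage cover (<10%)

assert shrubland_form(85) == "closed-shrub"
assert shrubland_form(5) == "tall open shrubland"
```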
List
Those marked with * can also develop into tree form if in ideal conditions.
A
Abelia (Abelia)
Acer (Maple) *
Actinidia (Actinidia)
Aloe (Aloe)
Aralia (Angelica Tree, Hercules' Club) *
Arctostaphylos (Bearberry, Manzanita) *
Aronia (Chokeberry)
Artemisia (Sagebrush)
Aucuba (Aucuba)
B
Berberis (Barberry)
Bougainvillea (Bougainvillea)
Brugmansia (Angel's trumpet)
Buddleja (Butterfly bush)
Buxus (Box) *
C
Calia (Mescalbean)
Callicarpa (Beautyberry) *
Callistemon (Bottlebrush) *
Calluna (Heather)
Calycanthus (Sweetshrub)
Camellia (Camellia, Tea) *
Caragana (Pea-tree) *
Carpenteria (Carpenteria)
Caryopteris (Blue Spiraea)
Cassiope (Moss-heather)
Ceanothus (Ceanothus) *
Celastrus (Staff vine) *
Ceratostigma (Hardy Plumbago)
Cercocarpus (Mountain-mahogany) *
Chaenomeles (Japanese Quince)
Chamaebatiaria (Fernbush)
Chamaedaphne (Leatherleaf)
Chimonanthus (Wintersweet)
Chionanthus (Fringe-tree) *
Choisya (Mexican-orange Blossom) *
Cistus (Rockrose)
Clerodendrum (Clerodendrum)
Clethra (Summersweet, Pepperbush) *
Clianthus (Glory Pea)
Colletia (Colletia)
Colutea (Bladder Senna)
Comptonia (Sweetfern)
Cornus (Dogwood) *
Corylopsis (Winter-hazel) *
Cotinus (Smoketree) *
Cotoneaster (Cotoneaster) *
Cowania (Cliffrose)
Crataegus (Hawthorn) *
Crinodendron (Crinodendron) *
Cytisus and allied genera (Broom) *
D
Daboecia (Heath)
Danae (Alexandrian laurel)
Daphne (Daphne)
Decaisnea (Decaisnea)
Dasiphora (Shrubby Cinquefoil)
Dendromecon (Tree poppy)
Desfontainea (Desfontainea)
Deutzia (Deutzia)
Diervilla (Bush honeysuckle)
Dipelta (Dipelta)
Dirca (Leatherwood)
Dracaena (Dragon tree) *
Drimys (Winter's Bark) *
Dryas (Mountain Avens)
E
Edgeworthia (Paper Bush) *
Elaeagnus (Elaeagnus) *
Embothrium (Chilean Firebush) *
Empetrum (Crowberry)
Enkianthus (Pagoda Bush)
Ephedra (Ephedra)
Epigaea (Trailing Arbutus)
Erica (Heath)
Eriobotrya (Loquat) *
Escallonia (Escallonia)
Eucryphia (Eucryphia) *
Euonymus (Spindle) *
Exochorda (Pearl Bush)
F
Fabiana (Fabiana)
Fallugia (Apache Plume)
Fatsia (Fatsia)
Forsythia (Forsythia)
Fothergilla (Fothergilla)
Franklinia (Franklinia) *
Fremontodendron (Flannelbush)
Fuchsia (Fuchsia) *
G
Garrya (Silk-tassel) *
Gaultheria (Salal)
Gaylussacia (Huckleberry)
Genista (Broom) *
Gordonia (Loblolly-bay) *
Grevillea (Grevillea)
Griselinia (Griselinia) *
H
Hakea (Hakea) *
Halesia (Silverbell) *
Halimium (Rockrose)
Hamamelis (Witch-hazel) *
Hebe (Hebe)
Hedera (Ivy)
Helianthemum (Rockrose)
Hibiscus (Hibiscus) *
Hippophae (Sea-buckthorn) *
Hoheria (Lacebark) *
Holodiscus (Creambush)
Hudsonia (Hudsonia)
Hydrangea (Hydrangea)
Hypericum (Rose of Sharon)
Hyssopus (Hyssop)
I
Ilex (Holly) *
Illicium (Star Anise) *
Indigofera (Indigo)
Itea (Sweetspire)
J
Jamesia (Cliffbush)
Jasminum (Jasmine)
Juniperus (Juniper) *
K
Kalmia (Mountain-laurel)
Kerria (Kerria)
Kolkwitzia (Beauty-bush)
L
Lagerstroemia (Crape-myrtle) *
Lapageria (Copihue)
Lantana (Lantana)
Lavandula (Lavender)
Lavatera (Tree Mallow)
Ledum (Ledum)
Leitneria (Corkwood) *
Lespedeza (Bush Clover) *
Leptospermum (Manuka) *
Leucothoe (Doghobble)
Leycesteria (Leycesteria)
Ligustrum (Privet) *
Lindera (Spicebush) *
Linnaea (Twinflower)
Lonicera (Honeysuckle)
Lupinus (Tree Lupin)
Lycium (Boxthorn)
M
Magnolia (Magnolia)
Mahonia (Mahonia)
Malpighia (Acerola)
Menispermum (Moonseed)
Menziesia (Menziesia)
Mespilus (Medlar) *
Microcachrys (Microcachrys)
Myrica (Bayberry) *
Myricaria (Myricaria)
Myrtus and allied genera (Myrtle) *
N
Neillia (Neillia)
Nerium (Oleander)
O
Olearia (Daisy bush) *
Osmanthus (Osmanthus)
P
Pachysandra (Pachysandra)
Paeonia (Tree-peony)
Persoonia (Geebungs)
Philadelphus (Mock orange) *
Phlomis (Jerusalem Sage)
Photinia (Photinia) *
Physocarpus (Ninebark) *
Pieris (Pieris)
Pistacia (Pistachio, Mastic) *
Pittosporum (Pittosporum) *
Plumbago (Leadwort)
Polygala (Milkwort)
Poncirus *
Prunus (Cherry) *
Purshia (Antelope Bush)
Pyracantha (Firethorn)
Q
Quassia (Quassia) *
Quercus (Oak) *
Quillaja (Quillay)
Quintinia (Tawheowheo) *
R
Rhamnus (Buckthorn) *
Rhododendron (Rhododendron, Azalea) *
Rhus (Sumac) *
Ribes (Currant, Gooseberry)
Romneya (Tree poppy)
Rosa (Rose)
Rosmarinus (Rosemary)
Rubus (Bramble, Raspberry, Salmonberry, Wineberry)
Ruta (Rue)
S
Sabia *
Salix (Willow) *
Salvia (Sage)
Sambucus (Elder) *
Santolina (Lavender Cotton)
Sapindus (Soapberry) *
Senecio (Senecio)
Simmondsia (Jojoba)
Skimmia (Skimmia)
Smilax (Smilax)
Sophora (Kōwhai) *
Sorbaria (Sorbaria)
Spartium (Spanish Broom)
Spiraea (Spiraea) *
Staphylea (Bladdernut) *
Stephanandra (Stephanandra)
Styrax *
Symphoricarpos (Snowberry)
Syringa (Lilac) *
T
Tamarix (Tamarix) *
Taxus (Yew) *
Telopea (Waratah) *
Thuja cvs. (Arborvitae) *
Thymelaea
Thymus (Thyme)
Trochodendron *
U
Ulex (Gorse)
Ulmus pumila celer (Turkestan elm – Wonder Hedge)
Ungnadia (Mexican Buckeye)
V
Vaccinium (Bilberry, Blueberry, Cranberry)
Verbesina centroboyacana
Verbena (Vervain)
Viburnum (Viburnum) *
Vinca (Periwinkle)
Viscum (Mistletoe)
W
Weigela (Weigela)
X
Xanthoceras
Xanthorhiza (Yellowroot)
Xylosma
Y
Yucca (Yucca, Joshua tree) *
Z
Zanthoxylum *
Zauschneria
Zenobia
Ziziphus *
References
Plants
Plant morphology
Lists of plants
Plant life-forms
Plants by habit | Shrub | Biology | 2,867 |
4,194,127 | https://en.wikipedia.org/wiki/Count%20room | A count room or counting room is a room that is designed and equipped for the purpose of counting large volumes of currency. Count rooms are operated by central banks and casinos, as well as some large banks and armored car companies that transport currency.
A count room may be divided into two separate areas, one for counting banknotes (sometimes referred to as soft count) and one for counting coins (sometimes referred to as hard count). Some high-volume cash businesses, especially casinos, will operate two distinct rooms.
Surveillance
Most count rooms are equipped with closed-circuit television cameras and sometimes sound recording equipment to assist in detecting theft, fraud, or collusion among the count room personnel.
References
Banking terms
Rooms
Casinos | Count room | Engineering | 143 |
36,875,681 | https://en.wikipedia.org/wiki/HD%2047667 | HD 47667 is a single star in the southern constellation of Canis Major. It is visible to the naked eye with an apparent visual magnitude of 4.832. The estimated distance to this star, based upon its annual parallax shift, is roughly 1,000 light years. It is moving further away with a heliocentric radial velocity of +29 km/s. The star made its closest approach to the Sun some 8.7 million years ago.
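A distance estimated from annual parallax follows the standard relation d[parsec] = 1/p[arcsec]. The sketch below is illustrative (Python and the sample parallax value are our assumptions, not figures from the article):

```python
# Illustrative sketch (the sample parallax is an assumption, not a value
# from the article): distance in light years from an annual parallax.
LY_PER_PARSEC = 3.2616  # light years in one parsec

def distance_ly(parallax_mas):
    """Distance in light years from an annual parallax in milliarcseconds,
    using d[pc] = 1 / p[arcsec]."""
    return (1.0 / (parallax_mas / 1000.0)) * LY_PER_PARSEC

# A parallax of about 3.26 mas corresponds to roughly 1,000 light years:
assert 990 < distance_ly(3.26) < 1010
```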
Roughly 40 million years old, this is an evolved K-type giant star; the suffix notation of its stellar classification indicates that overabundances of calcium and the cyanide molecule have been found in the spectrum of the stellar atmosphere. The star has 7.4 times the mass of the Sun and has expanded to 28 times the Sun's radius. It is radiating 2,317 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 4,200 K.
References
K-type giants
Canis Majoris, 61
Canis Major
Durchmusterung objects
047667
031827
2450 | HD 47667 | Astronomy | 235 |
45,754 | https://en.wikipedia.org/wiki/Where%20Mathematics%20Comes%20From | Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being (hereinafter WMCF) is a book by George Lakoff, a cognitive linguist, and Rafael E. Núñez, a psychologist. Published in 2000, WMCF seeks to found a cognitive science of mathematics, a theory of embodied mathematics based on conceptual metaphor.
WMCF definition of mathematics
Mathematics makes up that part of the human conceptual system that is special in the following way:
It is precise, consistent, stable across time and human communities, symbolizable, calculable, generalizable, universally available, consistent within each of its subject matters, and effective as a general tool for description, explanation, and prediction in a vast number of everyday activities, [ranging from] sports, to building, business, technology, and science. - WMCF, pp. 50, 377
Nikolay Lobachevsky said "There is no branch of mathematics, however abstract, which may not some day be applied to phenomena of the real world." A common type of conceptual blending process would seem to apply to the entire mathematical procession.
Human cognition and mathematics
Lakoff and Núñez's avowed purpose is to begin laying the foundations for a truly scientific understanding of mathematics, one grounded in processes common to all human cognition. They find that four distinct but related processes metaphorically structure basic arithmetic: object collection, object construction, using a measuring stick, and moving along a path.
WMCF builds on earlier books by Lakoff (1987) and Lakoff and Johnson (1980, 1999), which analyze such concepts of metaphor and image schemata from second-generation cognitive science. Some of the concepts in these earlier books, such as the interesting technical ideas in Lakoff (1987), are absent from WMCF.
Lakoff and Núñez hold that mathematics results from the human cognitive apparatus and must therefore be understood in cognitive terms. WMCF advocates (and includes some examples of) a cognitive idea analysis of mathematics which analyzes mathematical ideas in terms of the human experiences, metaphors, generalizations, and other cognitive mechanisms giving rise to them. A standard mathematical education does not develop such idea analysis techniques because it does not pursue considerations of A) what structures of the mind allow it to do mathematics or B) the philosophy of mathematics.
Lakoff and Núñez start by reviewing the psychological literature, concluding that human beings appear to have an innate ability, called subitizing, to count, add, and subtract up to about 4 or 5. They document this conclusion by reviewing the literature, published in recent decades, describing experiments with infant subjects. For example, infants quickly become excited or curious when presented with "impossible" situations, such as having three toys appear when only two were initially present.
The authors argue that mathematics goes far beyond this very elementary level due to a large number of metaphorical constructions. For example, the Pythagorean position that all is number, and the associated crisis of confidence that came about with the discovery of the irrationality of the square root of two, arises solely from a metaphorical relation between the length of the diagonal of a square, and the possible numbers of objects.
Much of WMCF deals with the important concepts of infinity and of limit processes, seeking to explain how finite humans living in a finite world could ultimately conceive of the actual infinite. Thus much of WMCF is, in effect, a study of the epistemological foundations of the calculus. Lakoff and Núñez conclude that while the potential infinite is not metaphorical, the actual infinite is. Moreover, they deem all manifestations of actual infinity to be instances of what they call the "Basic Metaphor of Infinity", as represented by the ever-increasing sequence 1, 2, 3, ...
WMCF emphatically rejects the Platonistic philosophy of mathematics. They emphasize that all we know and can ever know is human mathematics, the mathematics arising from the human intellect. The question of whether there is a "transcendent" mathematics independent of human thought is a meaningless question, like asking if colors are transcendent of human thought—colors are only varying wavelengths of light, it is our interpretation of physical stimuli that make them colors.
WMCF (p. 81) likewise criticizes the emphasis mathematicians place on the concept of closure. Lakoff and Núñez argue that the expectation of closure is an artifact of the human mind's ability to relate fundamentally different concepts via metaphor.
WMCF concerns itself mainly with proposing and establishing an alternative view of mathematics, one grounding the field in the realities of human biology and experience. It is not a work of technical mathematics or philosophy. Lakoff and Núñez are not the first to argue that conventional approaches to the philosophy of mathematics are flawed. For example, they do not seem all that familiar with the content of Davis and Hersh (1981), even though the book warmly acknowledges Hersh's support.
Lakoff and Núñez cite Saunders Mac Lane (the inventor, with Samuel Eilenberg, of category theory) in support of their position. Mathematics, Form and Function (1986), an overview of mathematics intended for philosophers, proposes that mathematical concepts are ultimately grounded in ordinary human activities, mostly interactions with the physical world.
Examples of mathematical metaphors
Conceptual metaphors described in WMCF, in addition to the Basic Metaphor of Infinity, include:
Arithmetic is motion along a path, object collection/construction;
Change is motion;
Sets are containers, objects;
Continuity is gapless;
Mathematical systems have an "essence," namely their axiomatic algebraic structure;
Functions are sets of ordered pairs, curves in the Cartesian plane;
Geometric figures are objects in space;
Logical independence is geometric orthogonality;
Numbers are sets, object collections, physical segments, points on a line;
Recurrence is circular.
Mathematical reasoning requires variables ranging over some universe of discourse, so that we can reason about generalities rather than merely about particulars. WMCF argues that reasoning with such variables implicitly relies on what it terms the Fundamental Metonymy of Algebra.
Example of metaphorical ambiguity
WMCF (p. 151) includes the following example of what the authors term "metaphorical ambiguity." Take the set A = {{∅}, {∅, {∅}}}. Then recall two bits of standard terminology from elementary set theory:
The recursive construction of the ordinal natural numbers, whereby 0 is ∅, and n + 1 is n ∪ {n};
The ordered pair (a, b), defined as {{a}, {a, b}}.
By (1), A is the set {1, 2}. But (1) and (2) together say that A is also the ordered pair (0, 1). The two statements cannot both be correct: the ordered pair (0, 1) and the unordered pair {1, 2} are fully distinct concepts. Lakoff and Johnson (1999) term this situation "metaphorically ambiguous." This simple example calls into question any Platonistic foundations for mathematics.
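The clash can be made concrete in code. The following is an illustrative sketch (Python is our choice, not the book's), encoding the von Neumann ordinals and the Kuratowski ordered pair as frozensets:

```python
# Illustrative sketch (Python is our choice, not the book's): encode the
# von Neumann ordinals and the Kuratowski ordered pair with frozensets.
ZERO = frozenset()            # 0 := {}
ONE = frozenset({ZERO})       # 1 := {0}
TWO = frozenset({ZERO, ONE})  # 2 := {0, 1}

def pair(a, b):
    """Kuratowski ordered pair (a, b) := {{a}, {a, b}}."""
    return frozenset({frozenset({a}), frozenset({a, b})})

A = frozenset({ONE, TWO})     # the unordered pair A = {1, 2}...
assert A == pair(ZERO, ONE)   # ...is literally the ordered pair (0, 1)
```

Both constructions produce the identical set, which is exactly the ambiguity the book describes.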
While (1) and (2) above are admittedly canonical, especially within the consensus set theory known as the Zermelo–Fraenkel axiomatization, WMCF does not let on that they are but one of several definitions that have been proposed since the dawning of set theory. For example, Frege, Principia Mathematica, and New Foundations (a body of axiomatic set theory begun by Quine in 1937) define cardinals and ordinals as equivalence classes under the relations of equinumerosity and similarity, so that this conundrum does not arise. In Quinian set theory, A is simply an instance of the number 2. For technical reasons, defining the ordered pair as in (2) above is awkward in Quinian set theory. Two solutions have been proposed:
A variant set-theoretic definition of the ordered pair more complicated than the usual one;
Taking ordered pairs as primitive.
The Romance of Mathematics
The "Romance of Mathematics" is WMCF's light-hearted term for a perennial philosophical viewpoint about mathematics which the authors describe and then dismiss as an intellectual myth:
Mathematics is transcendent, namely it exists independently of human beings, and structures our actual physical universe and any possible universe. Mathematics is the language of nature, and is the primary conceptual structure we would have in common with extraterrestrial aliens, if any such there be.
Mathematical proof is the gateway to a realm of transcendent truth.
Reasoning is logic, and logic is essentially mathematical. Hence mathematics structures all possible reasoning.
Because mathematics exists independently of human beings, and reasoning is essentially mathematical, reason itself is disembodied. Therefore, artificial intelligence is possible, at least in principle.
It is very much an open question whether WMCF will eventually prove to be the start of a new school in the philosophy of mathematics. Hence the main value of WMCF so far may be a critical one: its critique of Platonism and romanticism in mathematics.
Critical response
Many working mathematicians resist the approach and conclusions of Lakoff and Núñez. Reviews of WMCF by mathematicians in professional journals, while often respectful of its focus on conceptual strategies and metaphors as paths for understanding mathematics, have taken exception to some of WMCF's philosophical arguments on the grounds that mathematical statements have lasting 'objective' meanings. For example, Fermat's Last Theorem means exactly what it meant when Fermat initially proposed it in 1664. Other reviewers have pointed out that multiple conceptual strategies can be employed in connection with the same mathematically defined term, often by the same person (a point that is compatible with the view that we routinely understand the 'same' concept with different metaphors). The metaphor and the conceptual strategy are not the same as the formal definition which mathematicians employ. However, WMCF points out that formal definitions are built using words and symbols that have meaning only in terms of human experience.
Critiques of WMCF have ranged from the humorous to the physically informed.
Lakoff and Núñez tend to dismiss the negative opinions mathematicians have expressed about WMCF, because their critics do not appreciate the insights of cognitive science. Lakoff and Núñez maintain that their argument can only be understood using the discoveries of recent decades about the way human brains process language and meaning. They argue that any arguments or criticisms that are not grounded in this understanding cannot address the content of the book.
It has been pointed out that it is not at all clear that WMCF establishes that the claim "intelligent alien life would have mathematical ability" is a myth. To do this, it would be required to show that intelligence and mathematical ability are separable, and this has not been done. On Earth, intelligence and mathematical ability seem to go hand in hand in all life-forms, as pointed out by Keith Devlin among others. The authors of WMCF have not explained how this situation would (or even could) be different anywhere else.
Lakoff and Núñez also appear not to appreciate the extent to which intuitionists and constructivists have presaged their attack on the Romance of (Platonic) Mathematics. Brouwer, the founder of the intuitionist/constructivist point of view, in his dissertation On the Foundation of Mathematics, argued that mathematics was a mental construction, a free creation of the mind and totally independent of logic and language. He goes on to criticize the formalists for building verbal structures that are studied without intuitive interpretation. Symbolic language should not be confused with mathematics; it reflects, but does not contain, mathematical reality.
Educators have taken some interest in what WMCF suggests about how mathematics is learned, and why students find some elementary concepts more difficult than others.
However, even from an educational perspective, WMCF is still problematic. From the conceptual metaphor theory's point of view, metaphors reside in a different realm, the abstract, from that of the 'real world', the concrete. In other words, despite its claim that mathematics is human, WMCF treats established mathematical knowledge, which is what we learn in school, as abstract and completely detached from its physical origin. It cannot account for the way learners could access such knowledge.
WMCF is also criticized for its monist approach. First, it ignores the fact that the sensori-motor experience upon which our linguistic structure, and thus mathematics, is assumed to be based may vary across cultures and situations. Second, the mathematics WMCF is concerned with is "almost entirely... standard utterances in textbooks and curricula", the most well-established body of knowledge; it neglects the dynamic and diverse nature of the history of mathematics.
WMCF's logo-centric approach is another target for critics. While it is predominantly interested in the association between language and mathematics, it does not account for how non-linguistic factors contribute to the emergence of mathematical ideas (e.g. See Radford, 2009; Rotman, 2008).
Summing up
WMCF (pp. 378–79) concludes with some key points, a number of which follow. Mathematics arises from our bodies and brains, our everyday experiences, and the concerns of human societies and cultures. It is:
The result of normal adult cognitive capacities, in particular the capacity for conceptual metaphor, and as such is a human universal. The ability to construct conceptual metaphors is neurologically based, and enables humans to reason about one domain using the language and concepts of another domain. Conceptual metaphor is both what enabled mathematics to grow out of everyday activities, and what enables mathematics to grow by a continual process of analogy and abstraction;
Symbolic, thereby enormously facilitating precise calculation;
Not transcendent, but the result of human evolution and culture, to which it owes its effectiveness. During experience of the world a connection to mathematical ideas is going on within the human mind;
A system of human concepts making extraordinary use of the ordinary tools of human cognition;
An open-ended creation of human beings, who remain responsible for maintaining and extending it;
One of the greatest products of the collective human imagination, and a magnificent example of the beauty, richness, complexity, diversity, and importance of human ideas.
The cognitive approach to formal systems, as described and implemented in WMCF, need not be confined to mathematics, but should also prove fruitful when applied to formal logic, and to formal philosophy such as Edward Zalta's theory of abstract objects. Lakoff and Johnson (1999) fruitfully employ the cognitive approach to rethink a good deal of the philosophy of mind, epistemology, metaphysics, and the history of ideas.
See also
Abstract object
Cognitive science
Cognitive science of mathematics
Conceptual metaphor
Embodied philosophy
Foundations of mathematics
From Action to Mathematics per Mac Lane
Metaphor
Philosophy of mathematics
The Unreasonable Effectiveness of Mathematics in the Natural Sciences
Footnotes
References
Davis, Philip J., and Reuben Hersh, 1999 (1981). The Mathematical Experience. Mariner Books. First published by Houghton Mifflin.
George Lakoff, 1987. Women, Fire and Dangerous Things. Univ. of Chicago Press.
------ and Mark Johnson, 1999. Philosophy in the Flesh. Basic Books.
George Lakoff and Rafael Núñez, 2000, Where Mathematics Comes From. Basic Books.
John Randolph Lucas, 2000. The Conceptual Roots of Mathematics. Routledge.
Saunders Mac Lane, 1986. Mathematics: Form and Function. Springer Verlag.
External links
WMCF web site.
Reviews of WMCF.
Joseph Auslander in American Scientist;
Bonnie Gold, MAA Reviews 2001
Lakoff's response to Gold's MAA review.
Books about philosophy of mathematics
Infinity
Linguistics books
2000 non-fiction books
Mathematics books
Books about metaphors
Cognitive science literature | Where Mathematics Comes From | Mathematics | 3,191 |
245,206 | https://en.wikipedia.org/wiki/Plus%20and%20minus%20signs | The plus sign (+) and the minus sign (−) are mathematical symbols used to denote positive and negative quantities, respectively. In addition, + represents the operation of addition, which results in a sum, while − represents subtraction, resulting in a difference. Their use has been extended to many other meanings, more or less analogous. Plus and minus are Latin terms meaning "more" and "less", respectively.
The forms + and − are used in many countries around the world. Other designs include for plus and for minus.
History
Though the signs now seem as familiar as the alphabet or the Hindu–Arabic numerals, they are not of great antiquity. The Egyptian hieroglyphic sign for addition, for example, resembled a pair of legs walking in the direction in which the text was written (Egyptian could be written either from right to left or left to right), with the reverse sign indicating subtraction:
Nicole Oresme's manuscripts from the 14th century show what may be one of the earliest uses of as a sign for plus.
In early 15th century Europe, the letters "P" and "M" were generally used. The symbols P̄ (P with overline, for più, "more", i.e., plus) and M̄ (M with overline, for meno, "less", i.e., minus) appeared for the first time in Luca Pacioli's mathematics compendium, Summa de arithmetica, first printed and published in Venice in 1494.
The + sign is a simplification of the Latin et, "and" (comparable to the evolution of the ampersand &). The − sign may be derived from a macron written over m when used to indicate subtraction; or it may come from a shorthand version of the letter m itself.
In his 1489 treatise, Johannes Widmann referred to the symbols − and + as minus and mer (Modern German mehr; "more"). They were not used for addition and subtraction in the treatise, but were used to indicate surplus and deficit; usage in the modern sense is attested in a 1518 book by Henricus Grammateus.
Robert Recorde, the designer of the equals sign, introduced plus and minus to Britain in 1557 in The Whetstone of Witte: "There be other 2 signes in often use of which the first is made thus + and betokeneth more: the other is thus made − and betokeneth lesse."
Plus sign
The plus sign () is a binary operator that indicates addition, as in 2 + 3 = 5. It can also serve as a unary operator that leaves its operand unchanged (+x means the same as x). This notation may be used when it is desired to emphasize the positiveness of a number, especially in contrast with the negative numbers (+5 versus −5).
The plus sign can also indicate many other operations, depending on the mathematical system under consideration. Many algebraic structures, such as vector spaces and matrix rings, have some operation which is called, or is equivalent to, addition. It is, though, conventional to use the plus sign only to denote commutative operations.
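The binary and unary roles of the plus sign can be sketched in Python (an illustrative language choice of ours, not from the article):

```python
# Illustrative sketch: the two roles of "+".
x = 5
assert 2 + 3 == 5      # binary operator: addition
assert +x == x         # unary operator: leaves its operand unchanged
assert 2 + 3 == 3 + 2  # the "+" operation is conventionally commutative
```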
The + symbol is also used in chemistry and physics; for more, see below.
Minus sign
The minus sign () has three main uses in mathematics:
The subtraction operator: a binary operator to indicate the operation of subtraction, as in 5 − 3 = 2. Subtraction is the inverse of addition.
The function whose value for any real or complex argument is the additive inverse of that argument. For example, if x = 3, then −x = −3, but if x = −3, then −x = +3. Similarly, −(−x) = x.
A prefix of a numeric constant. When it is placed immediately before an unsigned number, the combination names a negative number, the additive inverse of the positive number that the numeral would otherwise name. In this usage, '−5' names a number the same way 'semicircle' names a geometric figure, with the caveat that 'semi' does not have a separate use as a function name.
In many contexts, it does not matter whether the second or the third of these usages is intended: −5 is the same number. When it is important to distinguish them, a raised minus sign (⁻) is sometimes used for negative constants, as in elementary education, the programming language APL, and some early graphing calculators.
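The three uses listed above can be illustrated directly in Python (our illustrative choice of language):

```python
# Illustrative sketch of the three uses of the minus sign.
x = 3
assert 5 - 3 == 2   # 1. binary operator: subtraction
assert -x == -3     # 2. unary function: additive inverse of the argument
y = -3              # 3. prefix of a numeric constant: names a negative number
assert -y == 3
assert -(-x) == x   # negation is self-inverse
```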
All three uses can be referred to as "minus" in everyday speech, though the binary operator is sometimes read as "take away". In American English nowadays, −5 (for example) is generally referred to as "negative five" though speakers born before 1950 often refer to it as "minus five". (Temperatures tend to follow the older usage; −5° is generally called "minus five degrees".) Further, a few textbooks in the United States encourage −x to be read as "the opposite of x" or "the additive inverse of x"—to avoid giving the impression that −x is necessarily negative (since x itself may already be negative).
In mathematics and most programming languages, the rules for the order of operations mean that −5² is equal to −25: exponentiation binds more strongly than the unary minus, which binds more strongly than multiplication or division. However, in some programming languages (Microsoft Excel in particular), unary operators bind strongest, so in those cases −5^2 is 25, but 0−5^2 is −25.
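The precedence difference can be checked directly in a language that follows the usual mathematical convention. The following Python snippet (Python's `**` is exponentiation) is a minimal illustration:

```python
# Exponentiation binds more tightly than unary minus,
# so -5**2 parses as -(5**2), not (-5)**2.
standard = -5**2    # -(5**2) = -25
grouped = (-5)**2   # explicit parentheses give 25
print(standard, grouped)  # -25 25
```

Excel-style parsing would give 25 for the first expression; Python, C-family languages, and standard mathematical notation agree on −25.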
Similar to the plus sign, the minus sign is also used in chemistry and physics. (For more, see below.)
Use in elementary education
Some elementary teachers use raised minus signs before numbers to disambiguate them from the operation of subtraction. The same convention is also used in some computer languages. For example, subtracting −5 from 3 might be read as "positive three take away negative 5", and be shown as
3 − −5 becomes 3 + 5 = 8,
which can be read as:
+3 − −5
or even as
+3 − −5 becomes +3 + +5 = +8.
Use as a qualifier
When placed after a number, a plus sign can indicate an open range of numbers. For example, "18+" is commonly used as shorthand for "ages 18 and up", and "eighteen plus" is now common usage.
In US grading systems, the plus sign indicates a grade one level higher and the minus sign a grade lower. For example, B− ("B minus") is one grade lower than B. On some occasions, this is extended to two plus or minus signs (e.g., A++ being two grades higher than A).
A common trend in branding, particularly with streaming video services, has been the use of the plus sign at the end of brand names, e.g. Google+, Disney+, Paramount+, and Apple TV+. Since the word "plus" can mean an advantage, or an additional amount of something, such "+" signs imply that a product offers extra features or benefits.
Positive and negative are sometimes abbreviated as +ve and −ve, and battery and cell terminals are often marked with + and −.
Mathematics
In mathematics the one-sided limit x → a⁺ means x approaches a from the right (i.e., right-sided limit), and x → a⁻ means x approaches a from the left (i.e., left-sided limit). For example, 1/x → +∞ as x → 0⁺ but 1/x → −∞ as x → 0⁻.
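The two one-sided limits of 1/x at 0 can be observed numerically. This Python sketch simply evaluates 1/x ever closer to 0 from each side:

```python
# f(x) = 1/x grows without bound approaching 0 from the right,
# and decreases without bound approaching 0 from the left.
f = lambda x: 1.0 / x

right = [f(10.0 ** -k) for k in (3, 6, 9)]    # x -> 0+
left = [f(-(10.0 ** -k)) for k in (3, 6, 9)]  # x -> 0-

assert all(v > 0 for v in right) and right == sorted(right)
assert all(v < 0 for v in left) and left == sorted(left, reverse=True)
```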
Blood
Blood types are often qualified with a plus or minus to indicate the presence or absence of the Rh factor. For example, A+ means type A blood with the Rh factor present, while B− means type B blood with the Rh factor absent.
Music
In music, augmented chords are symbolized with a plus sign, although this practice is not universal (as there are other methods for spelling those chords). For example, "C+" is read "C augmented chord". Sometimes the plus is written as a superscript.
Uses in computing
As well as the normal mathematical usage, plus and minus signs may be used for a number of other purposes in computing.
Plus and minus signs are often used in tree view on a computer screen—to show if a folder is collapsed or not.
In some programming languages, concatenation of strings is written "a" + "b", and results in "ab".
In most programming languages, subtraction and negation are indicated with the ASCII hyphen-minus character, -. In APL a raised minus sign (here written using ¯) is used to denote a negative number, as in ¯3, while in J a negative number is denoted by an underscore, as in _5.
In C and some other computer programming languages, two plus signs indicate the increment operator and two minus signs a decrement; the position of the operator before or after the variable indicates whether the new or old value is read from it. For example, if x equals 6, then y = x++ increments x to 7 but sets y to 6, whereas y = ++x would set both x and y to 7. By extension, ++ is sometimes used in computing terminology to signify an improvement, as in the name of the language C++.
In regular expressions, + is often used to indicate "1 or more" in a pattern to be matched. For example, x+ means "one or more of the letter x". This is the Kleene plus notation. Hyphen-minus usually indicates a range ([A-Z] matches any capital from 'A' to 'Z'), although it can stand for itself ([A-E-] matches any capital from 'A' to 'E' or '-').
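Both behaviours, the Kleene plus and the hyphen inside a character class, can be demonstrated with Python's `re` module:

```python
import re

# Kleene plus: one or more of the preceding element.
assert re.findall(r"x+", "xxabxxxc") == ["xx", "xxx"]

# Inside a character class, hyphen-minus denotes a range...
assert re.findall(r"[A-Z]", "aBcDe") == ["B", "D"]

# ...but placed last it stands for a literal '-'.
assert re.findall(r"[A-E-]", "A-F") == ["A", "-"]
```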
There is no concept of negative zero in mathematics, but in computing −0 may have a separate representation from zero. In the IEEE floating-point standard, 1 / −0 is negative infinity (−∞) whereas 1 / 0 is positive infinity (+∞).
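Negative zero's distinct representation can be inspected from Python. (Python raises `ZeroDivisionError` for float division by zero, so the infinities above are not directly reproducible there; the sign of zero is instead visible through `struct`, `copysign`, and `atan2`.)

```python
import math
import struct

# 0.0 and -0.0 compare equal, yet IEEE 754 stores them differently.
assert -0.0 == 0.0
assert struct.pack(">d", 0.0) != struct.pack(">d", -0.0)  # sign bit differs
assert math.copysign(1.0, -0.0) == -1.0

# atan2 distinguishes the two zeros, as IEEE 754 specifies.
assert math.atan2(0.0, -0.0) == math.pi
assert math.atan2(0.0, 0.0) == 0.0
```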
The + character is also used to denote added lines in diff output, in both the context format and the unified format.
Other uses
In physics, the use of plus and minus signs for different electrical charges was introduced by Georg Christoph Lichtenberg.
In chemistry, superscripted plus and minus signs are used to indicate an ion with a positive or negative charge of 1 (e.g., NH₄⁺). If the charge is greater than 1, a number indicating the charge is written before the sign (as in SO₄²⁻).
A plus sign prefixed to a telephone number is used to indicate the form used for International Direct Dialing. Its precise usage varies by technology and national standards. In the International Phonetic Alphabet, subscripted plus and minus signs are used as diacritics to indicate advanced or retracted articulations of speech sounds.
The minus sign is also used as tone letter in the orthographies of Dan, Krumen, Karaboro, Mwan, Wan, Yaouré, Wè, Nyabwa, and Godié. The Unicode character used for the tone letter () is different from the mathematical minus sign.
The plus sign sometimes represents in the orthography of Huichol.
In the algebraic notation used to record games of chess, the plus sign is used to denote a move that puts the opponent into check, while a double plus is sometimes used to denote double check. Combinations of the plus and minus signs are used to evaluate a move (+/−, +/=, =/+, −/+).
In linguistics, a superscript plus sometimes replaces the asterisk, which denotes unattested linguistic reconstruction.
In botanical names, a plus sign denotes a graft-chimaera.
In Catholicism, the plus sign before a last name denotes a Bishop, and a double plus is used to denote an Archbishop.
Codepoints
Variants of the symbols have unique codepoints in Unicode:
Alternative minus signs
There is a commercial minus sign (⁒), which is used in Germany and Scandinavia. The obelus (÷) is used to denote subtraction in Scandinavia.
The hyphen-minus symbol (-) is the form of hyphen most commonly used in digital documents. On most keyboards, it is the only character that resembles a minus sign or a dash, so it is also used for these. The name hyphen-minus derives from the original ASCII standard, where it was called hyphen (minus). The character is referred to as a hyphen, a minus sign, or a dash according to the context where it is being used.
Alternative plus sign
A Jewish tradition that dates from at least the 19th century is to write plus using the symbol ﬩, to avoid writing a symbol that could look like a Christian cross. This practice was adopted into Israeli schools and is still commonplace today in elementary schools (including secular schools) but in fewer secondary schools. It is also used occasionally in books by religious authors, but most books for adults use the international symbol +. Unicode has this symbol at position U+FB29.
See also
En dash, a dash that looks similar to the subtraction symbol but is used for different purposes
Glossary of mathematical symbols
⊕ (disambiguation)
Notes
References
External links
Elementary arithmetic
Mathematical symbols
Addition
Subtraction
Sign (mathematics)
{{Speciesbox
| image =Pseudocercospora vitis 154983434.jpg
| genus = Pseudocercospora
| species = vitis
| authority = (Lév.) Speg., (1910)
| synonyms_ref =<ref>{{cite web |title=GSD Species Synonymy Pseudocercospora vitis |url=https://www.speciesfungorum.org/Names/GSDSpecies.asp?RecordID=187930 |access-date=12 September 2023}}</ref>
| synonyms = Cercospora viticola (Ces.) Sacc., Syll. fung. (Abellini) 4: 458 (1886)
Cercospora vitis Sacc., Fungi italica autogr. del. 17–28: tab. 671 (1881)
Cercospora vitis f. parthenocissi
Cercosporiopsis vitis (Lév.) Miura, (1928)
Cladosporium viticola Ces., in Klotzsch, Herb. Viv. Mycol., Cent. 19: no. 1877 (1854)
Cladosporium vitis
Cladosporium vitis (Lév.) Miura, Flora of Manchuria and East Mongolia, III Cryptogams, Fungi (Industr. Contr. S. Manch. Rly 27): 527 (1928)
Helminthosporium vitis (Lév.) Pirotta, Revue mycol., Toulouse 11(no. 44): 185 (1889)
Phaeoisariopsis vitis (Lév.) Sawada, Rep. Dept Agric., Govern. Res. Inst. Formosa, Spec. Bull. Agric. Exp. Station Formosa 2: 164 (1922)
Septonema vitis Lév.
}}
'''Pseudocercospora vitis''' is a fungal plant pathogen which causes isariopsis leaf spot (named mistakenly after ''Isariopsis'', a genus of fungi in the family Mycosphaerellaceae).
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
vitis
Fungi described in 1848
Taxa named by Joseph-Henri Léveillé
Fungus species
In orbital mechanics, the geostationary ring is the region of space around the Earth that includes geostationary orbits and the volume of space which can be reached by uncontrolled objects which begin in geostationary orbits and are subsequently perturbed. Objects in geostationary orbit can be perturbed by anomalies in the gravitational field of the Earth, by the gravitational effects of Sun and Moon, and by solar radiation pressure.
A precessional motion of the orbital plane is caused by the oblateness of the Earth and by the gravitational effects of Sun and Moon. This motion has a period of about 53 years. The two parameters describing the direction of the orbit plane in space, the right ascension of the ascending node and the inclination, are affected by this precession. The maximum inclination reached during the 53-year cycle is about 15 degrees, so the definition of the geostationary ring covers a declination range from −15 to +15 degrees. In addition, solar radiation pressure induces an eccentricity that leads to a variation of the orbit radius by ±75 kilometers in some cases. This leads to the definition of the geostationary ring as a segment of space around the geostationary orbit, extending from 75 km below GEO to 75 km above GEO and from −15 to +15 degrees declination.
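The resulting region can be written as a simple membership test. The sketch below uses the bounds quoted above; the nominal GEO radius of 42,164 km is a standard value assumed here, not stated in the text:

```python
GEO_RADIUS_KM = 42_164  # nominal geostationary orbit radius (assumed value)

def in_geostationary_ring(radius_km: float, declination_deg: float) -> bool:
    """Membership test for the ring: GEO ± 75 km, declination within ±15°."""
    return abs(radius_km - GEO_RADIUS_KM) <= 75 and abs(declination_deg) <= 15

assert in_geostationary_ring(42_164, 0.0)            # nominal GEO slot
assert in_geostationary_ring(42_164 + 75, 14.9)      # edge of the ring
assert not in_geostationary_ring(42_164 + 200, 0.0)  # radius perturbed too far
assert not in_geostationary_ring(42_164, 20.0)       # inclination drifted out
```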
The number of objects in the ring is increasing, and is a source of concern that the risk of collision with space debris in this region is particularly high.
References
Astrodynamics
Earth orbits
Misoprostol is a synthetic prostaglandin medication used to prevent and treat stomach and duodenal ulcers, induce labor, cause an abortion, and treat postpartum bleeding due to poor contraction of the uterus. It is taken by mouth when used to prevent gastric ulcers in people taking nonsteroidal anti-inflammatory drugs (NSAID). For abortions it is used by itself or in conjunction with mifepristone or methotrexate. By itself, effectiveness for abortion is between 66% and 90%. For labor induction or abortion, it is taken by mouth, dissolved in the mouth, or placed in the vagina. For postpartum bleeding it may also be used rectally.
Common side effects include diarrhea and abdominal pain. It is in pregnancy category X, meaning that it is known to result in negative outcomes for the fetus if taken during pregnancy. In rare cases, uterine rupture may occur. It is a prostaglandin analogue—specifically, a synthetic prostaglandin E1 (PGE1).
Misoprostol was developed in 1973. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication.
Medical uses
Ulcer prevention
Misoprostol is used for the prevention of NSAID-induced gastric ulcers. It acts upon gastric parietal cells, inhibiting the secretion of gastric acid by G-protein coupled receptor-mediated inhibition of adenylate cyclase, which leads to decreased intracellular cyclic AMP levels and decreased proton pump activity at the apical surface of the parietal cell. Misoprostol is sometimes coprescribed with NSAIDs to prevent their common adverse effect of gastric ulceration (e.g., with diclofenac in Arthrotec).
However, even in the treatment of NSAID-induced ulcers, omeprazole proved to be at least as effective as misoprostol, but was significantly better tolerated, so misoprostol should not be considered a first-line treatment. Misoprostol-induced diarrhea and the need for multiple daily doses (typically four) are the main issues impairing compliance with therapy.
Labor induction
Misoprostol is commonly used for labor induction. It causes uterine contractions and the ripening (effacement or thinning) of the cervix. It can be less expensive than the other commonly used ripening agent, dinoprostone.
Oxytocin has long been used as the standard agent for labor induction, but does not work well when the cervix is not yet ripe. Misoprostol also may be used in conjunction with oxytocin.
Between 2002 and 2012, a misoprostol vaginal insert was studied, and was approved in the EU. It was not approved for use in the United States, and the US FDA still considers cervical ripening and labor induction to be outside of the approved uses for misoprostol.
Myomectomy
When administered prior to myomectomy in women with uterine fibroids, misoprostol reduces operative blood loss and requirement of blood transfusion.
Abortion
Misoprostol is used either alone or in conjunction with another medication (mifepristone or methotrexate) for medical abortions as an alternative to surgical abortion. Medical abortion has the advantage of being less invasive, and more autonomous, self-directed, and discreet. It is preferred by some women because it feels more natural, as the drugs induce a miscarriage. It is also more easily accessible in places where abortion is illegal. The World Health Organization (WHO) provides clear guidelines on the use, benefits and risks of misoprostol for abortions.
Misoprostol is most effective when it is used in combination with methotrexate or mifepristone (RU-486). Mifepristone blocks signaling by progesterone, causing the uterine lining to degrade, the blood vessels of the cervix and uterus to dilate and causing uterine contraction, similar to a menstrual period, which causes the embryo to detach from the uterine walls. Misoprostol then dilates the cervix and induces muscle contractions which clear the uterus. Misoprostol alone is less effective (typically 88% up to eight-weeks gestation). It is not inherently unsafe if medically supervised, but 1% of women will have heavy bleeding requiring medical attention, some women may have ectopic pregnancy, and the 12% of pregnancies that continue after misoprostol failure are more likely to have birth defects and are usually followed up with a more effective method of abortion.
Most large studies recommend a protocol for the use of misoprostol in combination with mifepristone. Together they are effective in around 95% for early pregnancies. Misoprostol alone may be more effective in earlier gestation.
Misoprostol can also be used to dilate the cervix in preparation for a surgical abortion, particularly in the second trimester (either alone or in combination with osmotic dilators).
Misoprostol by mouth is the least effective treatment for producing complete abortion in a period of 24 hours due to the liver's first-pass effect which reduces the bioavailability of the misoprostol. Vaginal and sublingual routes result in greater efficacy and extended duration of action because these routes of administration allow the drug to be directly absorbed into circulation by bypassing the liver first-pass effect.
Hematocrit or Hb tests and Rh testing are recommended before use for abortion confirmation of pregnancy. Following use, it is recommended that people attend a follow-up visit 2 weeks after treatment. If used for treatment of complete abortion, a pregnancy test, physical examination of the uterus, and ultrasound should be performed to ensure success of treatment. Surgical management is possible in the case of failed treatment.
Early pregnancy loss
Misoprostol may be used to complete a miscarriage or missed abortion when the body does not expel the embryo or fetus on its own. Compared to no medication or placebo, it could decrease the time to complete expulsion. Use of a single dose of misoprostol vaginally or buccally is preferred, with additional doses as needed. It also can be used in combination with mifepristone, with a similar regimen to medical abortion.
Misoprostol is regularly used in some Canadian hospitals for labour induction for fetal deaths early in pregnancy, and for termination of pregnancy for fetal anomalies. A low dose is used initially, then doubled for the remaining doses until delivery. In the case of a previous Caesarian section, however, lower doses are used.
Postpartum bleeding
Misoprostol is also used to prevent and treat post-partum bleeding. Orally administered misoprostol was marginally less effective than oxytocin. The use of rectally administered misoprostol is optimal in cases of bleeding; it was shown to be associated with lower rates of side effects compared to other routes. Rectally administered misoprostol was reported in a variety of case reports and randomised controlled trials. However, it is inexpensive and thermostable (thus does not require refrigeration like oxytocin), making it a cost-effective and valuable drug to use in the developing world. A randomised control trial of misoprostol use found a 38% reduction in maternal deaths due to post partum haemorrhage in resource-poor communities. Misoprostol is recommended due to its cost, effectiveness, stability, and low rate of side effects. Oxytocin must also be given by injection, while misoprostol can be given orally or rectally for this use, making it much more useful in areas where nurses and physicians are less available.
Insertion of intrauterine contraceptive device
In women with prior caesarean section or prior failure of insertion of an intrauterine contraceptive device, pre-procedure administration of misoprostol reduces the rate of failure of insertion of intrauterine contraceptive device. However, due to a higher rate of adverse effects, routine use of misoprostol for this purpose in other women is not supported by the data.
Other
For cervical ripening in advance of endometrial biopsy to reduce the need for use of a tenaculum or cervical dilator.
There is limited evidence supporting the use of misoprostol for the treatment of trigeminal neuralgia in patients with multiple sclerosis.
Adverse effects
The most commonly reported adverse effect of taking misoprostol by mouth for the prevention of stomach ulcers is diarrhea. In clinical trials, an average 13% of people reported diarrhea, which was dose-related and usually developed early in the course of therapy (after 13 days) and was usually self-limiting (often resolving within 8 days), but sometimes (in 2% of people) required discontinuation of misoprostol.
The next most commonly reported adverse effects of taking misoprostol by mouth for the prevention of gastric ulcers are: abdominal pain, nausea, flatulence, headache, dyspepsia, vomiting, and constipation, but none of these adverse effects occurred more often than when taking placebos.
There are increased side effects with sublingual or oral misoprostol, compared to a low dose (400 μg) vaginal misoprostol. However, low dose vaginal misoprostol was linked with low complete abortion rate. The study concluded that sublingually administered misoprostol dosed at 600 μg or 400 μg had greater instances of fever and diarrhea due to its quicker onset of action, higher peak concentration and bioavailability in comparison to vaginal or oral misoprostol.
For the indication of medical abortion, bleeding and cramping is commonly experienced after administration of misoprostol. Bleeding and cramping is likely to be greater than that experienced with menses, however, emergency care is advised if bleeding is excessive.
Misoprostol should not be taken by pregnant women with wanted pregnancies to reduce the risk of NSAID-induced gastric ulcers because it increases uterine tone and contractions in pregnancy, which may cause partial or complete abortions, and because its use in pregnancy has been associated with birth defects.
All cervical ripening and induction agents can cause uterine hyperstimulation, which can negatively affect the blood supply to the fetus and increases the risk of complications such as uterine rupture. Concern has been raised that uterine hyperstimulation that occurs during a misoprostol-induced labor is more difficult to treat than hyperstimulation during labors induced by other drugs. Because the complications are rare, it is difficult to determine if misoprostol causes a higher risk than do other cervical ripening agents. One estimate is that it would require around 61,000 people enrolled in randomized controlled trials to detect a difference in serious fetal complications and about 155,000 people to detect a difference in serious maternal complications.
Contraindications
It is recommended that medical treatment for missed abortion with misoprostol should only be considered in people without the following contraindications: suspected ectopic pregnancy, use of non-steroidal drugs, signs of pelvic infections or sepsis, unstable hemodynamics, known allergy to misoprostol, previous caesarean section, mitral stenosis, hypertension, glaucoma, bronchial asthma, and remote areas without a hospital nearby.
Pharmacology
Mechanism of action
Misoprostol, a prostaglandin analogue, binds to myometrial cells to cause strong myometrial contractions leading to expulsion of tissue. This agent also causes cervical ripening with softening and dilation of the cervix. Misoprostol binds to and stimulates prostaglandin EP2 receptors, prostaglandin EP3 receptor and prostaglandin EP4 receptor but not prostaglandin EP1 receptor and therefore is expected to have a more restricted range of physiological and potentially toxic actions than prostaglandin E2 or other analogs which activate all four prostaglandin receptors.
Society and culture
In August 2000, a letter from G.D. Searle, LLC, the inventor of the drug, generated controversy by warning against its use by pregnant women because of its ability to induce abortion, citing reports of maternal and fetal deaths when it was used to induce labor. The American College of Obstetricians and Gynecologists holds that substantial evidence supports the use of misoprostol for induction of labor, a position it reaffirmed in response to the Searle letter. It is on the World Health Organization's List of Essential Medicines.
A vaginal form of the medication is sold in the EU under the names Misodel and Mysodelle for use in labor induction.
Black market
Misoprostol is used for self-induced abortions in Brazil, where black market prices exceed US$100 per dose. Illegal medically unsupervised misoprostol abortions in Brazil are associated with a lower complication rate than other forms of illegal self-induced abortion, but are still associated with a higher complication rate than legal, medically supervised surgical and medical abortions. Failed misoprostol abortions are associated with birth defects in some cases. Low-income and immigrant populations in New York City have also been observed to use self-administered misoprostol to induce abortions, as this method is much cheaper than a surgical abortion (about $2 per dose). The drug is readily available in Mexico. Use of misoprostol has also increased in Texas in response to increased regulation of abortion providers. Following the United States Supreme Court decision of Dobbs v. Jackson Women's Health Organization, many states restricted access to legal abortion services, including medication abortion using misoprostol. As a result of these restrictions, it was reported that there was an increase in self-managed abortions by women in the United States. Many women purchased the pills from overseas online pharmacies or obtained misoprostol from Mexico.
References
Abortifacients
Carboxylate esters
Diols
Gastroenterology
Gynaecology
Ketones
Methods of abortion
Methyl esters
Prostaglandins
Drugs developed by Pfizer
World Health Organization essential medicines
Wikipedia medicine articles ready to translate
Uterotonics
NGC 4183 is a spiral galaxy with a faint core and an open spiral structure located about 55 million light-years from the Sun. Spanning about eighty thousand light-years, it appears in the constellation of Canes Venatici. NGC 4183 was observed for the first time by British astronomer William Herschel on 14 January 1788.
The galaxy is part of the Ursa Major Cluster.
One supernova has been observed in NGC 4183: SN 1968U (type unknown, mag. 14.5) was discovered by Justus R. Dunlap on 29 October 1968.
References
External links
Unbarred spiral galaxies
17880114
4183
07222
38988
Ursa Major Cluster
Canes Venatici
A trusted execution environment (TEE) is a secure area of a main processor. It helps the code and data loaded inside it be protected with respect to confidentiality and integrity. Data confidentiality prevents unauthorized entities from outside the TEE from reading data, while code integrity prevents code in the TEE from being replaced or modified by unauthorized entities, which may also be the computer owner itself as in certain DRM schemes described in Intel SGX.
This is done by implementing unique, immutable, and confidential architectural security, which offers hardware-based memory encryption that isolates specific application code and data in memory. This allows user-level code to allocate private regions of memory, called enclaves, which are designed to be protected from processes running at higher privilege levels. A TEE as an isolated execution environment provides security features such as isolated execution, integrity of applications executing with the TEE, and confidentiality of their assets. In general terms, the TEE offers an execution space that provides a higher level of security for trusted applications running on the device than a rich operating system (OS) and more functionality than a 'secure element' (SE).
History
The Open Mobile Terminal Platform (OMTP) first defined TEE in their "Advanced Trusted Environment:OMTP TR1" standard, defining it as a "set of hardware and software components providing facilities necessary to support applications," which had to meet the requirements of one of two defined security levels. The first security level, Profile 1, was targeted against only software attacks, while Profile 2, was targeted against both software and hardware attacks.
Commercial TEE solutions based on ARM TrustZone technology, conforming to the TR1 standard, were later launched, such as Trusted Foundations developed by Trusted Logic.
Work on the OMTP standards ended in mid-2010 when the group transitioned into the Wholesale Applications Community (WAC).
The OMTP standards, including those defining a TEE, are hosted by GSMA.
Details
The TEE typically consists of a hardware isolation mechanism plus a secure operating system running on top of that isolation mechanism, although the term has been used more generally to mean a protected solution. Whilst a GlobalPlatform TEE requires hardware isolation, others, such as EMVCo, use the term TEE to refer to both hardware- and software-based solutions. FIDO uses the concept of TEE in its restricted operating environment for TEEs based on hardware isolation. Only trusted applications running in a TEE have access to the full power of a device's main processor, peripherals, and memory, while hardware isolation protects these from user-installed apps running in the main operating system. Software and cryptographic isolation inside the TEE protect the trusted applications contained within from each other.
Service providers, mobile network operators (MNO), operating system developers, application developers, device manufacturers, platform providers, and silicon vendors are the main stakeholders contributing to the standardization efforts around the TEE.
To prevent the simulation of hardware with user-controlled software, a so-called "hardware root of trust" is used. This is a set of private keys embedded directly into the chip during manufacturing (one-time programmable memory such as eFuses is usually used on mobile devices), which cannot be changed even after a device reset. Their public counterparts reside in a manufacturer database, together with a non-secret hash of a public key belonging to the trusted party (usually the chip vendor), which is used to sign trusted firmware alongside the circuits doing cryptographic operations and controlling access.
The hardware is designed in a way which prevents all software not signed by the trusted party's key from accessing the privileged features. The public key of the vendor is provided at runtime and hashed; this hash is then compared to the one embedded in the chip. If the hash matches, the public key is used to verify a digital signature of trusted vendor-controlled firmware (such as a chain of bootloaders on Android devices or 'architectural enclaves' in SGX). The trusted firmware is then used to implement remote attestation.
When an application is attested, its untrusted component loads its trusted component into memory; the trusted application is protected from modification by untrusted components with hardware. A nonce is requested by the untrusted party from the verifier's server and is used as part of a cryptographic authentication protocol, proving integrity of the trusted application. The proof is passed to the verifier, which verifies it. A valid proof cannot be computed in simulated hardware (e.g., QEMU), because constructing it requires access to the keys baked into the hardware; only trusted firmware has access to these keys and/or the keys derived from them or obtained using them. Because only the platform owner is meant to have access to the data recorded in the foundry, the verifying party must interact with the service set up by the vendor. If the scheme is implemented improperly, the chip vendor can track which applications are used on which chip and selectively deny service by returning a message indicating that authentication has not passed.
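As an illustration only, the nonce-based handshake can be modelled in a few lines of Python. This is a toy sketch, not any real attestation protocol: an HMAC over (nonce || firmware measurement) stands in for the hardware-backed signature, and all names are hypothetical.

```python
import hashlib
import hmac
import os

DEVICE_KEY = os.urandom(32)  # stands in for the key baked into the chip
MEASUREMENT = hashlib.sha256(b"trusted-app-image").digest()

def device_prove(nonce: bytes, key: bytes) -> bytes:
    """The TEE 'signs' (nonce || measurement) with its hardware-protected key."""
    return hmac.new(key, nonce + MEASUREMENT, hashlib.sha256).digest()

def verifier_check(nonce: bytes, proof: bytes, key: bytes) -> bool:
    """The verifier recomputes the proof from its copy of the device key."""
    expected = hmac.new(key, nonce + MEASUREMENT, hashlib.sha256).digest()
    return hmac.compare_digest(proof, expected)

nonce = os.urandom(16)  # fresh challenge from the verifier, defeats replay
assert verifier_check(nonce, device_prove(nonce, DEVICE_KEY), DEVICE_KEY)
# An emulator without the real key cannot construct a valid proof.
assert not verifier_check(nonce, device_prove(nonce, os.urandom(32)), DEVICE_KEY)
```

The fresh nonce is what ties a proof to one session; without it, a recorded proof could simply be replayed.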
To simulate hardware in a way which enables it to pass remote authentication, an attacker would have to extract keys from the hardware, which is costly because of the equipment and technical skill required to execute it. For example, using focused ion beams, scanning electron microscopes, microprobing, and chip decapsulation is difficult, or even impossible, if the hardware is designed in such a way that reverse-engineering destroys the keys. In most cases, the keys are unique for each piece of hardware, so that a key extracted from one chip cannot be used by others (for example physically unclonable functions).
Though deprivation of ownership is not an inherent property of TEEs (it is possible to design the system in a way that allows only the user who has obtained ownership of the device first to control the system by burning a hash of their own key into e-fuses), in practice all such systems in consumer electronics are intentionally designed so as to allow chip manufacturers to control access to attestation and its algorithms. It allows manufacturers to grant access to TEEs only to software developers who have a (usually commercial) business agreement with the manufacturer, monetizing the user base of the hardware, to enable such use cases as tivoization and DRM and to allow certain hardware features to be used only with vendor-supplied software, forcing users to use it despite its antifeatures, like ads, tracking and use case restriction for market segmentation.
Uses
There are a number of use cases for the TEE. Though not all possible use cases exploit the deprivation of ownership, in practice the TEE is often used for exactly this.
Premium Content Protection/Digital Rights Management
Note: Much TEE literature covers this topic under the definition "premium content protection," which is the preferred nomenclature of many copyright holders. Premium content protection is a specific use case of digital rights management (DRM) and is controversial among some communities, such as the Free Software Foundation. It is widely used by copyright holders to restrict the ways in which end users can consume content such as 4K high-definition films.
The TEE is a suitable environment for protecting digitally encoded information (for example, HD films or audio) on connected devices such as smartphones, tablets, and HD televisions. This suitability comes from the ability of the TEE to deprive the owner of the device of access to stored secrets, and the fact that there is often a protected hardware path between the TEE and the display and/or subsystems on devices.
The TEE is used to protect the content once it is on the device. While the content is protected during transmission or streaming by the use of encryption, the TEE protects the content once it has been decrypted on the device by ensuring that decrypted content is not exposed to any environment not approved by the app developer or platform vendor.
Mobile financial services
Mobile commerce applications such as mobile wallets, peer-to-peer payments, contactless payments, or the use of a mobile device as a point of sale (POS) terminal often have well-defined security requirements. TEEs can be used, often in conjunction with near-field communication (NFC), secure elements (SEs), and trusted backend systems, to provide the security required to enable financial transactions to take place.
In some scenarios, interaction with the end user is required, and this may require the user to expose sensitive information such as a PIN, password, or biometric identifier to the mobile OS as a means of authenticating the user. The TEE optionally offers a trusted user interface which can be used to construct user authentication on a mobile device.
With the rise of cryptocurrency, TEEs are increasingly used to implement crypto-wallets, as they offer the ability to store tokens more securely than regular operating systems, and can provide the necessary computation and authentication applications.
Authentication
The TEE is well-suited for supporting biometric identification methods (facial recognition, fingerprint sensor, and voice authorization), which may be easier to use and harder to steal than PINs and passwords. The authentication process is generally split into three main stages:
Storing a reference "template" identifier on the device for comparison with the "image" extracted in the next stage.
Extracting an "image" (scanning the fingerprint or capturing a voice sample).
Using a matching engine to compare the "image" and the "template".
A TEE is a good area within a mobile device to house the matching engine and the associated processing required to authenticate the user. The environment is designed to protect the data and establish a buffer against the non-secure apps located in mobile OSes. This additional security may help to satisfy the security needs of service providers in addition to keeping the costs low for handset developers.
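The three stages above can be sketched with a toy matching engine. The feature vectors and distance threshold here are illustrative stand-ins for a real biometric pipeline, in which the template and the matcher would live inside the TEE.

```python
import math

def enroll(template_features):
    # Stage 1: store the reference "template" (inside the TEE in practice).
    return {"template": template_features}

def capture():
    # Stage 2: extract an "image" (here, a fixed toy feature vector standing
    # in for a fingerprint scan or voice sample).
    return [0.11, 0.52, 0.90]

def matches(store, image, threshold=0.1):
    # Stage 3: the matching engine compares the "image" to the "template".
    distance = math.dist(store["template"], image)
    return distance <= threshold

store = enroll([0.10, 0.50, 0.90])
assert matches(store, capture())            # close enough: authenticated
assert not matches(store, [0.9, 0.1, 0.2])  # too far: rejected
```

Running the comparison inside the TEE means the raw template never has to be exposed to the ordinary mobile OS.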
Enterprise, government, and cloud
The TEE can be used by governments, enterprises, and cloud service providers to enable the secure handling of confidential information on mobile devices and on server infrastructure. The TEE offers a level of protection against software attacks generated in the mobile OS and assists in the control of access rights. It achieves this by housing sensitive, ‘trusted’ applications that need to be isolated and protected from the mobile OS and any malware that may be present. Through utilizing the functionality and security levels offered by the TEE, governments and enterprises can be assured that employees using their own devices are doing so in a secure and trusted manner. Likewise, server-based TEEs help defend against internal and external attacks against backend infrastructure.
Secure modular programming
With the rise of software assets and reuse, modular programming is a productive way to design software architecture, decoupling functionality into small, independent modules. As each module contains everything necessary to execute its desired functionality, the TEE allows the complete system to be organized with a high level of reliability and security, while preventing each module from being exposed to the vulnerabilities of the others.
In order for the modules to communicate and share data, the TEE provides means to send and receive payloads securely between the modules, using mechanisms such as object serialization, in conjunction with proxies.
See Component-based software engineering
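Secure inter-module messaging of the kind described can be sketched as serializing a payload and sealing it for transport. The JSON-plus-MAC scheme and the session key below are illustrative assumptions, not a prescribed mechanism.

```python
import hashlib
import hmac
import json

SESSION_KEY = b"shared-session-key"  # illustrative; would be negotiated per session

def module_a_send(payload: dict) -> bytes:
    # Serialize the object, then prepend a MAC so the receiving module
    # can detect tampering in transit.
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SESSION_KEY, body, hashlib.sha256).digest()
    return tag + body

def module_b_receive(message: bytes) -> dict:
    tag, body = message[:32], message[32:]
    expected = hmac.new(SESSION_KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("payload rejected: integrity check failed")
    return json.loads(body)

msg = module_a_send({"op": "debit", "amount": 42})
assert module_b_receive(msg) == {"op": "debit", "amount": 42}
```

A proxy object on each side can wrap these calls so that module code sees ordinary method invocations rather than raw messages.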
TEE operating systems
Hardware support
The following hardware technologies can be used to support TEE implementations:
AMD:
Platform Security Processor (PSP)
AMD Secure Encrypted Virtualization (SEV) and the Secure Nested Paging extension
ARM:
TrustZone
Realm Management Extension / Confidential Compute Architecture (CCA)
IBM:
IBM Secure Service Container, formerly zACI, first introduced in IBM z13 generation machines (including all LinuxONE machines) in driver level 27.
IBM Secure Execution, introduced in IBM z15 and LinuxONE III generation machines on April 14, 2020.
Intel:
Trusted Execution Technology (TXT)
Software Guard Extensions (SGX)
"Silent Lake" (available on Atom processors)
RISC-V:
MultiZone Security Trusted Execution Environment
Keystone Customizable TEE Framework
Penglai Scalable TEE for RISC-V
See also
Open Mobile Terminal Platform
Trusted Computing Group
FIDO Alliance
Java Card
Intel Management Engine
Intel LaGrande
Software Guard Extensions
AMD Platform Security Processor
Trusted Platform Module
ARM TrustZone
NFC Secure Element
Next-Generation Secure Computing Base
References
Security
Security technology
Mobile security
Mobile software
Standards
Aerolite (adhesive)
Aerolite is a urea-formaldehyde gap-filling adhesive which is water- and heat-resistant. It is used in large quantities by the chipboard industry and also by wooden boat builders for its high strength and durability. It is also used in joinery, veneering and general woodwork assembly. Aerolite has also been used for wooden aircraft construction, and a properly made Aerolite joint is said to be three times stronger than spruce wood.
History
Dr. Norman A. de Bruyne founded Aero Research Limited in 1934. The following year de Bruyne suggested that synthetic adhesives might play a part in aircraft production and engaged Cambridge University chemist R.E. Clark to investigate new adhesives for aircraft applications. The result was Aerolite, a urea-formaldehyde adhesive which, unlike conventional glues of the time, resisted water and micro-organisms. Further research showed that gap-bridging hardeners incorporating formic acid enabled Aerolite to be used as an assembly adhesive. Aerolite was the first adhesive of its type to be invented and manufactured in Britain and used in resin-bonded plywood.
When World War II broke out, the small company began to grow. Morris Motors used Aerolite and Aero Research's strip heating process to assemble Airspeed Horsa gliders, as did de Havilland on its Mosquito, as well as on other aircraft and also naval launches and patrol boats. On the Mosquito, Aerolite soon replaced the original "Beetle Cement" (known as "Kaurit" in Germany) synthetic resin adhesive used, after this glue was found not to stand up to the hot and humid climate in the Far East.
Following the end of the war, in 1948 de Bruyne sold control of Aero Research to the Swiss company Ciba, but remained as managing director until 1960.
Uses
Aerolite is currently marketed for use in boat building. Aerospace adhesives are used to assemble aircraft exteriors, engines, and interiors, while sealants fill the space between surfaces, increasing the airtightness and watertightness of assemblies. Adhesives are also used on aircraft components such as cockpit doors, fasteners, and lights, and are widely used in the general aviation industry. Aerospace adhesives and sealants are known for toughness, viscosity, long durability and short cure times, depending on the requirements of the aerospace application.
See also
Aero Research Limited
Araldite
Redux
Tego film
Notes
External links
"Aerolite" Synthetic Glue on the Market - 1938 news article in Flight magazine
Adhesives
Aerospace engineering
Abhyankar's conjecture
In abstract algebra, Abhyankar's conjecture for affine curves is a conjecture of Shreeram Abhyankar posed in 1957, on the Galois groups of algebraic function fields of characteristic p. The soluble case was solved by Serre in 1990 and the full conjecture was proved in 1994 by work of Michel Raynaud and David Harbater.
Statement
The problem involves a finite group G, a prime number p, and the function field K(C) of a nonsingular integral algebraic curve C defined over an algebraically closed field K of characteristic p.
The question addresses the existence of a Galois extension L of K(C), with G as Galois group, and with specified ramification. From a geometric point of view, L corresponds to another curve C′, together with
a morphism
π : C′ → C.
Geometrically, the assertion that π is ramified at a finite set S of points on C
means that π restricted to the complement of S in C is an étale morphism.
This is in analogy with the case of Riemann surfaces.
In Abhyankar's conjecture, S is fixed, and the question is what G can be. This is therefore a special type of inverse Galois problem.
Results
The subgroup p(G) is defined to be the subgroup generated by all the Sylow subgroups of G for the prime number p. This is a normal subgroup, and the parameter n is defined as the minimum number of generators of
G/p(G).
Raynaud proved the case where C is the projective line over K; in this case the conjecture states that G can be realised as a Galois group of L, unramified outside a set S containing s + 1 points, if and only if
n ≤ s.
The general case was proved by Harbater: with g the genus of C, the group G can be realised if and only if
n ≤ s + 2 g.
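Combining Raynaud's and Harbater's results, the full criterion can be written in one line; here d(·) denotes the minimal number of generators, g the genus of C, and S the ramification locus with s + 1 points:

```latex
G \text{ occurs as a Galois group over } K(C),\ \text{unramified outside } S
\quad\Longleftrightarrow\quad
d\bigl(G/p(G)\bigr) \;\le\; s + 2g .
```

Setting g = 0 recovers Raynaud's bound n ≤ s for the projective line.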
References
External links
A layman's perspective of Abhyankar's conjecture from Purdue University
Algebraic curves
Galois theory
Theorems in abstract algebra
Conjectures that have been proved
List of databases using MVCC
The following database management systems and other software use multiversion concurrency control.
Databases
Altibase
Berkeley DB
Cloudant
Cloud Spanner
Clustrix
CockroachDB
Couchbase
CouchDB
CUBRID
IBM Db2 – since IBM DB2 9.7 LUW ("Cobra") under CS isolation level – in currently committed mode
Drizzle
Druid
etcd
Exasol
eXtremeDB
Firebird
FLAIM
FoundationDB
GE Smallworld Version Managed Data Store
H2 Database Engine – experimental since version 1.0.57 (2007-08-25)
HBase
HSQLDB – starting with version 2.0
IBM Netezza
Ingres
InterBase – all versions
LMDB
MariaDB (MySQL fork) – when used with XtraDB, an InnoDB fork and that is included in MariaDB sources and binaries or PBXT
MarkLogic Server – a bit of this is described in
MemSQL
Microsoft SQL Server – when using READ_COMMITTED_SNAPSHOT, starting with SQL Server 2005
MonetDB
MongoDB – when used with the WiredTiger storage engine
MySQL – when used with InnoDB, Falcon, or Archive storage engines
NuoDB
ObjectDB
ObjectStore
Oracle Database – all versions since Oracle 4
Oracle (née DEC) Rdb
OrientDB
PostgreSQL
Postgres-XL
RDM Embedded
REAL Server
Realm
RethinkDB
SAP HANA
SAP IQ
ScyllaDB
sones GraphDB
Sybase SQL Anywhere
TerminusDB
Actian Vector
YugabyteDB
Zope Object Database
Other software with MVCC
JBoss Cache – v 3.0
Ehcache – v 1.6.0-beta4
Clojure – language software transactional memory
Apache Jackrabbit Oak
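As a minimal illustration of the technique all of the systems above share, here is a toy multi-versioned key-value store: writers append new versions rather than overwriting, so readers see a consistent snapshot without blocking. This is a pedagogical sketch, not how any listed system is implemented.

```python
import itertools

class MVCCStore:
    """Toy MVCC: each key maps to a list of (commit_ts, value) versions."""

    def __init__(self):
        self._versions = {}               # key -> [(ts, value), ...] ascending
        self._clock = itertools.count(1)  # logical timestamp source

    def write(self, key, value):
        # Writers never overwrite: they append a new version with a fresh
        # commit timestamp, so concurrent readers are not blocked.
        ts = next(self._clock)
        self._versions.setdefault(key, []).append((ts, value))
        return ts

    def snapshot(self):
        # A reader's snapshot is just the current logical timestamp.
        return next(self._clock)

    def read(self, key, snapshot_ts):
        # Return the newest version committed at or before the snapshot.
        for ts, value in reversed(self._versions.get(key, [])):
            if ts <= snapshot_ts:
                return value
        return None

db = MVCCStore()
db.write("x", "v1")
snap = db.snapshot()   # reader takes a snapshot
db.write("x", "v2")    # a later write does not disturb the reader
assert db.read("x", snap) == "v1"
assert db.read("x", db.snapshot()) == "v2"
```

Real systems add garbage collection of old versions and conflict detection between concurrent writers, which this sketch omits.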
References
Databases using MVCC
Concurrency control algorithms
Iobenguane
Iobenguane, or MIBG, is an aralkylguanidine analog of the adrenergic neurotransmitter norepinephrine (noradrenaline), typically used as a radiopharmaceutical. It acts as a blocking agent for adrenergic neurons. When radiolabeled, it can be used in nuclear medicinal diagnostic and therapy techniques as well as in neuroendocrine chemotherapy treatments.
It localizes to adrenergic tissue and thus can be used to identify the location of tumors such as pheochromocytomas and neuroblastomas. With iodine-131 it can also be used to treat tumor cells that take up and metabolize norepinephrine.
Usage and mechanism
MIBG is absorbed by and accumulated in granules of adrenal medullary chromaffin cells, as well as in pre-synaptic adrenergic neuron granules. The process in which this occurs is closely related to the mechanism employed by norepinephrine and its transporter in vivo. The norepinephrine transporter (NET) functions to provide norepinephrine uptake at the synaptic terminals and adrenal chromaffin cells. MIBG, by bonding to NET, finds its roles in imaging and therapy.
Metabolites and excretion
Less than 10% of the administered MIBG gets metabolized into m-iodohippuric acid (MIHA), and the mechanism for how this metabolite is produced is unknown.
Diagnostic imaging
MIBG concentrates in endocrine tumors, most commonly neuroblastoma, paraganglioma, and pheochromocytoma. It also accumulates in norepinephrine transporters in adrenergic nerves in the heart, lungs, adrenal medulla, salivary glands, liver, and spleen, as well as in tumors that originate in the neural crest. When labelled with iodine-123 it serves as a whole-body, non-invasive scintigraphic screening for germ-line, somatic, benign, and malignant neoplasms originating in the adrenal glands. It can detect both intra and extra-adrenal disease. The imaging is highly sensitive and specific.
Iobenguane concentrates in presynaptic terminals of the heart and other autonomically innervated organs. This enables the possible non-invasive use as an in vivo probe to study these systems.
Alternatives to imaging with 123I-MIBG, for certain indications and under clinical and research use, include the positron-emitting isotope iodine-124, and other radiopharmaceuticals such as 68Ga-DOTA and 18F-FDOPA for positron emission tomography (PET). 123I-MIBG imaging on a gamma camera can offer significantly higher cost-effectiveness and availability compared to PET imaging, and is particularly effective where 131I-MIBG therapy is subsequently planned, due to their directly comparable uptake.
Side effects
Side effects post imaging are rare but can include tachycardia, pallor, vomiting, and abdominal pain.
Radionuclide therapy
MIBG can be radiolabelled with the beta emitting radionuclide 131I for the treatment of certain pheochromocytomas, paragangliomas, carcinoid tumors, neuroblastomas, and medullary thyroid cancer.
Thyroid precautions
Thyroid blockade with (nonradioactive) potassium iodide is indicated for nuclear medicine scintigraphy with iobenguane/mIBG. This competitively inhibits radioiodine uptake, preventing excessive radioiodine levels in the thyroid and minimizing risk of thyroid ablation (in treatment with 131I). The minimal risk of thyroid cancer is also reduced as a result.
The dosing regime for the FDA-approved commercial 123I-MIBG product Adreview is potassium iodide or Lugol's solution containing 100 mg iodide, weight adjusted for children and given an hour before injection. EANM guidelines, endorsed by the SNMMI, suggest a variety of regimes in clinical use, for both children and adults.
Product labeling for diagnostic 131I iobenguane recommends giving potassium iodide one day before injection and continuing 5 to 7 days following. 131I iobenguane used for therapeutic purposes requires a different pre-medication duration, beginning 24–48 hours before iobenguane injection and continuing 10–15 days after injection.
Clinical trials
Iobenguane I 131 for cancers
Iobenguane I 131, marketed under the trade name Azedra, has had a clinical trial as a treatment for malignant, recurrent or unresectable pheochromocytoma and paraganglioma, and the FDA approved it on July 30, 2018. The drug is developed by Progenics Pharmaceuticals.
References
External links
Adrenergic receptor antagonists
Diagnostic endocrinology
Guanidines
3-Iodophenyl compounds
Radiopharmaceuticals
Ecology of contexts
The ecology of contexts is a term used in many disciplines and refers to the dynamic interplay of contexts and demands that constrain and define an entity.
Environmental ecology
An agroecosystem exists amid contexts including climate, soil, plant genetics, government policies, and the personal beliefs and predilections of the agriculturalist. Not only are these contexts too numerous to list in their entirety for any agroecosystem, but their interactions are so complex that it is impossible to perfectly characterize a system, let alone predict the effect a given perturbation will have on the whole. At the same time, all of these contexts are dynamic, albeit at wildly diverging time scales, so the ecology of contexts for an agroecosystem is fundamentally mutable. An awareness of the ecology of contexts is helpful for agroecologists, as the nearly axiomatic acceptance of the dynamic, and thereby unperfectable, nature of agroecosystems precludes the often damaging notion of a best or ideal approach to agroecosystem management, and fosters an awareness of the complexity of the response that can result from any perturbation of the system.
This concept of the ecology of contexts provides a useful epistemological device for understanding agroecosystems.
This dual relationship of an entity in an ecology of contexts underscores the ecological analogy, with its emphasis on holonic interactions.
Human ecology
Anil Thomas defines "ecology check" as:... checking to see if the desired result of a technique will work out in other areas of a person's life. ... checking the consequences of your future actions and plans. ... what happens after you make a desired change. How does it affect a person's home, their family, their finances, health, time, etc.? Is it in line with their value system? Is this what they really want? It prevents self-sabotage by making sure a change will be acceptable to all parts. ... When you think "ecologically," you are taking every aspect of your outcome into account. You check to make sure that you are not going to achieve X at the expense of Y, if both are important to you. ...

In child development, for instance, it can refer to the nested scales at which influences on children reside, from the individual (e.g. age) to the broadest elements, like government policies or cultural attitudes.
From computer science comes the concept of an ecology of context-aware computing, where a device's operation is tempered by information the device itself has about how the environment will affect its functioning, and vice versa.
In the field of music therapy, Trygve Aasgaard dealt with the reciprocity of an ecology of contexts, seeing the role of music in therapy as responsive to cultural and other contexts, while at the same time forming part of the environmental context.
References
Social epistemology
Ecological restoration
Ecology
Agroecology
Sustainable urban planning
Urban studies and planning terminology
Environmental psychology
Sustainable agriculture
Automotive Dealership Institute
Automotive Dealership Institute is an Arizona-based and licensed training program that offers classroom and online instruction in management, finance, and insurance for the auto industry.
History
Automotive Dealership Institute was founded in December 2004. The institute offers management training for automotive dealerships. Alan Algan is Executive Director, and Robert W. Serum is the Chancellor.
References
2004 establishments in Arizona
Automotive industry in the United States
Transport education
Vocational education in the United States
Fonsecaea compacta
Fonsecaea compacta is a saprophytic fungal species found in the family Herpotrichiellaceae. It is a rare etiological agent of chromoblastomycosis, with low rates of correspondence observed from reports. The main active components of F. compacta are glycolipids, yet very little is known about its composition. F. compacta is widely regarded as a dysplastic variety of Fonsecaea pedrosoi, its morphological precursor. The genus Fonsecaea presently contains two species, F. pedrosoi and F. compacta. Over 100 strains of F. pedrosoi have been isolated but only two of F. compacta.
History
Fonsecaea compacta was first proposed by Carrion in 1935. This proposal was considered invalid because a Latin diagnosis was not provided at the time. The name F. compacta was later validated in 1940 when Carrion provided the required Latin diagnosis. Carrion & Emmons reported the presence of phialides in F. compacta, which were described as being typical of those formed by Phialophora verrucosa. Owing to this observation, Redaelli & Ciferri transferred F. compacta to the genus Phialophora in 1942. Given that the generic name Fonsecaea is feminine, the species epithet "compacta" rather than "compactum" is used for gender agreement.
Classification
There is some disagreement concerning the nomenclature, such as whether the genus Fonsecaea is suitable. This is largely due to discrepancy among medical mycologists as to which characteristics should be used to identify these fungi. At one time or another, F. compacta had been placed in other genera, including Phialophora, Hormodendrum, Acrotheca, Phialoconidiophora, Rhinocladiella, and Trichosporium; the two more common ones are Rhinocladiella and Phialophora. Confusion surrounding F. pedrosoi and F. compacta has resulted from their polymorphic nature, in that they may form more than one type of conidial arrangement within a single culture. Evaluation of different isolates confirms that the genus Fonsecaea is most logical, as characterized by their complex heads of conidia. In 2004, it was reported that based on sequences of the internal transcribed spacer (ITS) region, 39 strains of Fonsecaea spp. and related species could be classified into three groups: Group A, including F. pedrosoi and F. compacta; Group B, including F. monophora; and Group C, a heterogeneous collection containing Fonsecaea sp. and Cladophialophora spp.
Taxonomic debate
The taxonomic status of F. compacta is uncertain. The debate whether or not F. compacta is a distinct species of Fonsecaea has persisted for years, essentially since it was discovered. Some authors maintain that F. compacta and F. pedrosoi are separate species given small differences in morphology of conidiophores and conidia. F. compacta and F. pedrosoi are readily distinguishable from each other: F. compacta is characterized by its compact conidial heads, blunt scars and subglobose to ovoid conidia, while F. pedrosoi has loose conidial heads, prominent scars, and elongated conidia. It was once thought that the two could not be combined into a single species considering there are base substitutions in 48 positions. The two were also found to have identical D1/D2 sequences, a 600 nucleotide domain in a subunit of rDNA. RAPD and RFLP methods were used to investigate genetic variations between these species; however, no variations were found. In 2004, scientists from the University of Chiba in Japan found that there is no difference in subunit ribosomal DNA D1/D2 domain sequence between F. pedrosoi and F. compacta, which may indicate that the latter is merely a morphological variation of the former. More recently, several molecular investigations such as restriction fragment length polymorphism (RFLP) of mitochondrial DNA, ribosomal RNA (rRNA), ITS sequence, random amplified polymorphic DNA (RAPD), large subunit (LSU) rRNA D1/D2 domain sequence, and RFLP of small subunit (SSU) rRNA and ITS regions have revealed that F. pedrosoi and F. compacta have few distinctions at the molecular level, and accordingly F. compacta has been considered a morphological variant of F. pedrosoi.
Growth and morphology
The morphological forms of F. compacta are referred to as RhinocIadiella-like, Cladosporium-Iike, and Phialophora-like. The Rhinocladiella-like and Phialophora-like types of development are best referred to as additional anamorphs of Fonsecaea. Some isolates of Fonsecaea may form phialides with collarettes that are typical of the genus Phialophora. When fungi produce more than one morphologic form in culture, such as the case with F. compacta and F. pedrosoi, the most stable, distinct, and unique form that is produced under standard conditions are used for identifying the fungus. Colonies on potato dextrose agar are slow growing, velvety to woolly, and olive to olivaceous black in color. Isolates of F. compacta may produce up to four different types of conidiophores. The diagnostic form consists of densely clustered, one-celled, pale brown, primary conidia, up to 4 × 8 μm that develop irregularly upon pegs at the terminus of erect, dark, irregularly swollen, club shaped, conidiophores. The primary conidia give rise to one-celled, 3 × 3.5 μm, secondary conidia in a like manner. The secondary conidia may in turn give rise to tertiary conidia. The conidia are rounded and form compact heads. Conidiophores bearing one-celled conidia like those produced by Rhinocladiella, branched chains of one-celled conidia arising from erect conidiophores like those produced by Cladosporium, and flask-shaped phialides having flared collarettes and balls of one-celled conidia like those produced by Phialophora may also be present. On average, sizes range from 5 to 20 μm in diameter.
Habitat and ecology
F. compacta is predominantly found in humid conditions such as Latin America and Asia, although it has also been seen in Europe. A large number of cases have been reported from Madagascar in Africa, Brazil and Japan. Its natural habitat consists of soil and woody plant material. It is a saprotroph, commonly associated with forest litter decomposition.
Disease in humans
Fonsecaea compacta has the ability to cause a disease called Chromoblastomycosis. The five main causal fungi of chromoblastomycosis are F. compacta, F. pedrosoi, Phialophora verrucosa, Exophiala dermatitidis and Cladophialophora carrionii. F. compacta is a rare etiological agent of chromoblastomycosis in humans, as it has only been reported in a few instances. A Puerto Rican case in which the disease was confined to an upper limb and the lesions consisted of extensive, diffuse, even areas of infiltration with some papillomata on the hand and without tumors or nodules was confirmed to be caused by F. compacta.
Epidemiology
Chromoblastomycosis is distributed worldwide, although it is more common in tropical and subtropical countries. Large numbers of cases have been reported from Madagascar in Africa, Brazil and Japan. Several studies have shown that it is prevalent in several other countries as well, such as Thailand, Korea, and Pakistan. The types of lesions described by Carrion in chromoblastomycosis are nodules, tumors, plaques, and warty lesions. F. compacta is a very rare species, known only from a few clinical collections. A few of these instances include five cases in India from which F. compacta was isolated. One study of F. compacta in India produced an isolation rate of 15%. Another study, from Sri Lanka, reported isolation of 2 cases of F. compacta. Infection occurs more commonly in males than females, and typically between the ages of 30 and 50. It is less commonly seen in adolescence, with onset occurring before the age of 20 in 24% of cases.
Transmission
Infection caused by F. compacta is thought to be acquired through the same mechanisms as other, more common agents of chromoblastomycosis, such as through puncture wounds caused by wooden splinters or thorny plants which allow the fungus to gain entry. Increased numbers of cases are seen in agricultural workers: adult male farmers and laborers, whose occupations bring them into close contact with soil, are mainly affected. Poverty and malnutrition in Indian children may be responsible for the early development of clinical infection. The Fonsecaea species have been reported to be recoverable from environmental sources, and therefore the disease is considered to be of traumatic origin. Nevertheless, the precise natural niche of F. compacta has remained uncertain, and hence it is unclear where and how symptomatic patients have acquired their infection.
Treatment
Good hygiene and adequate nutrition may help the individual avoid a potential infection. Early stages of minor chromoblastomycosis cases are treated with surgical excision, electrodesiccation, and cryosurgery; cryotherapy with liquid nitrogen for localized lesions is very effective and can be applied in combination with antifungal therapies. More advanced cases require systemic antifungal treatment for extended periods of time. Severe lesions tend to respond slowly or even become non-responsive to antifungal drugs. Presently, the most useful antifungals against chromoblastomycosis are itraconazole and terbinafine, which are highly expensive and often used in combination. Cure rates observed with antifungal drugs vary from 15 to 80%. In severe forms, cure rates are particularly low and relapse rates are high. F. compacta and F. pedrosoi are less susceptible to antifungal treatments, so cure rates are lower compared to other agents of the disease.
References
Eurotiomycetes
Fungus species
PHB2
Prohibitin-2 is a protein that in humans is encoded by the PHB2 gene.
Interactions
PHB2 has been shown to interact with PTMA.
References
Further reading
Toaster
A toaster is a small electric appliance that uses radiant heat to brown sliced bread into toast, the color resulting from the Maillard reaction. It typically consists of one or more slots into which bread is inserted, and heating elements, often made of nichrome wire, to generate heat and toast the bread to the desired level of crispiness.
Types
Pop-up toaster
In a pop-up or automatic toaster, a single vertical piece of bread is dropped into a slot on the top of the toaster. A lever on the side of the toaster is pressed down, lowering the bread into the toaster and activating the heating elements. The length of the toasting cycle (and therefore the degree of toasting) is adjustable via a lever, knob, or series of pushbuttons, and when an internal device determines that the toasting cycle is complete, the toaster turns off and the toast pops up out of the slots.
The completion of toasting may be determined by a timer (sometimes manually set) or by a thermal sensor, such as a bimetallic strip, located close to the toast.
Toasters may also be used to toast other foods such as teacakes, toaster pastries, potato waffles and crumpets, though the resultant accumulation of fat and sugar inside the toaster can contribute to its eventual failure.
Among pop-up toasters, those that toast two slices of bread are purchased more often than those that can toast four. Pop-up toasters can have a range of appearances beyond just a square box and may have an exterior finish of chrome, copper, brushed metal, or any colored plastic. The marketing and price of toasters are not necessarily an indication of their quality in producing good toast. A typical modern two-slice pop-up toaster can draw from 600 to 1200 watts.
Beyond the basic toasting function, some pop-up toasters offer additional features such as:
One-sided toasting, which some people prefer when toasting bagels
The ability to power the heat elements in only one of the toaster's several slots
Slots of various depths, lengths, and widths to accommodate a variety of bread types
Provisions to allow the bread to be lifted higher than the normal raised position, so toast that has shifted during the toasting process can safely and easily be removed
Toaster oven
Invented in 1910, toaster ovens are small electric ovens that provide toasting capability plus a limited amount of baking and broiling capability. Similarly to a conventional oven, toast or other items are placed on a small wire rack, but toaster ovens can heat foods faster than regular ovens due to their small volume. They are especially useful when the users do not also have a kitchen stove with an integral oven, such as in smaller apartments and recreational vehicles such as truck campers.
Conveyor toaster
A conveyor toaster is an appliance that toasts bread products as it carries them on a belt or chain into and through a heated chamber. Conveyor toasters are designed to make many slices of toast and are generally used in the catering industry, restaurants, cafeterias, institutional cooking facilities, and other commercial food service situations where constant or high-volume toasting is required. Bread can be toasted at a rate of 250–1800+ slices an hour. The total radiant heat a conveyor toaster applies to each slice can be controlled by adjusting the conveyor speed or the output strength of the heating elements. Conveyor toasters are generally available with either a vertical or horizontal conveyor orientation. Conveyor toasters have been produced for home use; in 1938, for example, the Toast-O-Lator went into limited production.
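The trade-off between conveyor speed, toasting time, and throughput can be sketched with a toy model; all numbers below are invented for illustration and are not taken from any real appliance.

```python
# Toy model of conveyor-toaster throughput: slices move through a heated
# chamber at the belt speed, so throughput and dwell time follow directly
# from belt speed, slice spacing, and chamber length (all values hypothetical).

def throughput_per_hour(belt_speed_m_per_min, slice_pitch_m):
    """Slices leaving the chamber per hour = belt speed / slice spacing."""
    return belt_speed_m_per_min / slice_pitch_m * 60

def dwell_time_s(chamber_length_m, belt_speed_m_per_min):
    """Seconds each slice spends in the heated chamber."""
    return chamber_length_m / (belt_speed_m_per_min / 60)

speed = 0.5      # belt speed, m/min (illustrative)
pitch = 0.15     # distance between slice leading edges, m (illustrative)
chamber = 0.4    # length of heated section, m (illustrative)

print(round(throughput_per_hour(speed, pitch)))  # -> 200 slices per hour
print(round(dwell_time_s(chamber, speed)))       # -> 48 seconds of toasting
```

Slowing the belt increases dwell time (darker toast) but lowers throughput, which is the control trade-off described above.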
History
Before the development of the electric toaster, sliced bread was toasted by placing it in a metal frame or on a long-handled toasting fork and holding it near a fire or over a kitchen grill.
From the 16th century onward, long-handled forks were used as toasters, "sometimes with fitment for resting on bars of grate or fender."
Wrought-iron scroll-ornamented toasters appeared in Scotland in the 17th century. Another wrought-iron toaster was documented to be from 18th-century England.
Utensils for toasting bread over open flames appeared in America in the early 19th century, including decorative implements made from wrought iron.
Development of the heating element
The primary technical problem in toaster development at the turn of the 20th century was the development of a heating element that would be able to sustain repeated heating to red-hot temperatures without breaking or becoming too brittle. A similar technical challenge had recently been surmounted with the invention of the first successful incandescent lightbulbs by Joseph Swan and Thomas Edison. However, the light bulb took advantage of the presence of a vacuum, something that could not be used for the toaster.
The first stand-alone electric toaster, the Eclipse, was made in 1893 by Crompton & Company of Chelmsford, Essex. Its bare wires toasted bread on one side at a time.
The problem of the heating element was solved in 1905 by a young engineer named Albert Marsh, who designed an alloy of nickel and chromium, which came to be known as nichrome.
The first US patent application for an electric toaster was filed by George Schneider of the American Electrical Heater Company of Detroit in collaboration with Marsh. One of the first applications that the Hoskins company considered for its Chromel wire was for use in toasters, but the company eventually abandoned such efforts, to focus on making just the wire itself.
The first commercially successful electric toaster, the GE model D-12, was introduced by General Electric in 1909.
Dual-side toasting and automated pop-up technologies
In 1913, Lloyd Groff Copeman and his wife Hazel Berger Copeman applied for various toaster patents, and in that same year, the Copeman Electric Stove Company introduced a toaster with an automatic bread turner. Before this, electric toasters cooked bread on one side, meaning the bread needed to be flipped by hand to cook both sides. Copeman's toaster turned the bread around without having to touch it.
The automatic pop-up toaster, which ejects the toast after toasting it, was first patented by Charles Strite in 1921. In 1925, using a redesigned version of Strite's toaster, the Waters Genter Company introduced the Model 1-A-1 Toastmaster, the first automatic, pop-up, household toaster that could brown bread on both sides simultaneously, set the heating element on a timer, and eject the toast when finished.
Toasting technology after the 1940s
Some high-end U.S. toasters, such as the Sunbeam Radiant Control models made from the late 1940s through the 1990s, have featured automatic toast lowering and raising with no levers to operate: simply dropping the bread into one of these "elevator toasters" begins the toasting cycle. These toasters use the mechanically multiplied thermal expansion of the resistance wire in the center element assembly: the inserted slice of bread trips a lever switch that activates the heating elements, and their thermal expansion is harnessed to lower the bread.
When the toast is done, as determined by a small bimetallic sensor actuated by the heat radiating off the toast, the heaters are shut off and the pull-down mechanism returns to its room-temperature position, slowly raising the finished toast. This sensing of the heat radiating off the toast means that regardless of the type of bread (white or whole grain) or its initial temperature (even frozen), the bread is always toasted to the same consistency.
Research
Several projects have added advanced technology to toasters. In 1990, Simon Hackett and John Romkey created "The Internet Toaster", a toaster that could be controlled by the Internet. In 2001, Robin Southgate from Brunel University in England created a toaster that could toast a graphic of the weather prediction (limited to sunny or cloudy) onto a piece of bread. The toaster dials a pre-coded phone number to get the weather forecast.
In 2005, Technologic Systems, a vendor of embedded systems hardware, designed a toaster running the NetBSD Unix-like operating system as a sales demonstration system. In 2012, Basheer Tome, a student at Georgia Tech, designed a toaster using color sensors to toast bread to the exact shade of brown specified by a user.
A toaster that used Twitter was cited as an early example of an application of the Internet of Things. Toasters have been used as advertising devices for online marketing.
With permanent modifications, a toaster oven can be used as a reflow oven to solder electronic components to circuit boards.
Similar inventions
Hot dog toaster
A hot dog toaster is a variation on the toaster design; it can cook hot dogs without the use of microwaves or stoves. The appliance looks similar to a regular toaster, except that there are two slots in the middle for hot dogs and two slots on the outside for toasting the buns. Alternatively, there can be a set of skewers upon which hot dogs are impaled.
See also
Alan MacMasters hoax
Bachelor griller
Dualit
Heating element
List of cooking appliances
List of home appliances
Pie iron
Thermal radiation
References
External links
Electric cooker
Electric heater, GE D-12
19th-century inventions
Cooking appliances
Home appliances
Kitchen
Ovens
Products introduced in 1909 | Toaster | Physics,Technology | 1,943 |
19,094,964 | https://en.wikipedia.org/wiki/Elimination%20rate%20constant | The elimination rate constant K or Ke is a value used in pharmacokinetics to describe the rate at which a drug is removed from the human system.
It is equivalent to the fraction of a substance that is removed per unit time measured at any particular instant and has units of T−1. This can be expressed mathematically with the differential equation

C(t + dt) = C(t) − C(t) · K · dt,

where C(t) is the blood plasma concentration of drug in the system at a given point in time t, dt is an infinitely small change in time, and C(t + dt) is the concentration of drug in the system after the infinitely small change in time.
The solution of this differential equation is useful in calculating the concentration after the administration of a single dose of drug via IV bolus injection:

Ct = C0 · e^(−K·t)

where:
Ct is concentration after time t
C0 is the initial concentration (t=0)
K is the elimination rate constant
Derivation
In first-order (linear) kinetics, the plasma concentration of a drug at a given time t after single-dose administration via IV bolus injection is given by:

Ct = C0 · (1/2)^(t / t1/2)

where:
C0 is the initial concentration (at t=0)
t1/2 is the half-life of the drug, which is the time needed for the plasma drug concentration to drop to half its value

Therefore, the amount of drug present in the body at time t is:

At = Vd · Ct = Vd · C0 · (1/2)^(t / t1/2)

where Vd is the apparent volume of distribution.

Then, the amount eliminated from the body after time t is:

Aeliminated = Vd · C0 · (1 − (1/2)^(t / t1/2))

Then, the rate of elimination at time t is given by the derivative of this function with respect to t:

dAeliminated / dt = Vd · C0 · (ln 2 / t1/2) · (1/2)^(t / t1/2)

And since K is the fraction of the drug that is removed per unit time measured at any particular instant, dividing the rate of elimination by the amount of drug in the body at time t gives:

K = ln 2 / t1/2
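These relationships can be checked numerically; the half-life and initial concentration below are hypothetical, chosen so that one and two half-lives are easy to verify by eye.

```python
import math

# First-order elimination after a single IV bolus dose.
# All numbers are illustrative, not taken from any real drug.

def elimination_rate_constant(half_life):
    """K = ln(2) / t1/2, in units of 1/time."""
    return math.log(2) / half_life

def concentration(c0, k, t):
    """Ct = C0 * e^(-K*t) after a single IV bolus dose."""
    return c0 * math.exp(-k * t)

k = elimination_rate_constant(4.0)               # 4-hour half-life -> K per hour
print(round(k, 4))                               # -> 0.1733
print(round(concentration(100.0, k, 4.0), 6))    # one half-life  -> 50.0
print(round(concentration(100.0, k, 8.0), 6))    # two half-lives -> 25.0
```

After each half-life the concentration halves, regardless of the starting value, which is exactly the property the derivation above formalizes.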
References
Pharmacokinetic metrics | Elimination rate constant | Chemistry | 361 |
56,788,543 | https://en.wikipedia.org/wiki/Lusutrombopag | Lusutrombopag, sold under the brand name Mulpleta among others, is a medication developed for certain conditions that lead to thrombocytopenia (abnormally low platelet counts), such as thrombocytopenia associated with chronic liver disease in patients prior to elective invasive procedures. It is manufactured and marketed in Japan by Shionogi. It was approved by the U.S. Food and Drug Administration (FDA) in July 2018 and recommended by NICE in January 2020.
It was approved for medical use in the European Union in February 2019.
References
External links
Drugs acting on the blood and blood forming organs
Orphan drugs
Thrombopoietin receptor agonists
Thiazoles
Chloroarenes
Carboxylic acids
Japanese inventions
Carboxamides
Ethers | Lusutrombopag | Chemistry | 169 |
65,505,167 | https://en.wikipedia.org/wiki/Route%20Mobile | Route Mobile (formerly Routesms Solutions Ltd) is an Indian cloud communications platform as a service (CPaaS) company. Started in 2004 and headquartered in Mumbai, the company has a presence in more than 15 locations across Asia-Pacific, the Middle East, Africa, Europe and North America.
History
Route Mobile was started in Mumbai in May 2004 as a cloud communications platform provider for over the top (OTT) and mobile network operators (MNO). It partnered with companies including Idea Cellular, Lanka Bell, and Arab Financials Services for providing messaging services in India, Sri Lanka, Middle East and North Africa region.
In May 2017, Route Mobile acquired Call 2 Connect, an ITES provider based in India. In the same year, Route Mobile acquired 365squared, an SMS firewall company based in Malta.
In July 2023, it was announced that Belgian telecommunications company Proximus Group would take a majority stake in Route Mobile through its holding subsidiary Opal. On May 8, 2024, it was announced that the transaction had been completed.
Recognition
2019: The Best Messaging Innovation – Consumer Solution Award at Messaging and SMS Global Awards, London.
2019: The Fastest Growing Indian Technology and Telecom Company Award in the UK at the India Meets Britain Tracker 2019.
2020: Ranked among the fastest-growing companies in the technology and telecom sector, and second overall among the UK's fastest-growing Indian companies in 2020.
2019: The Best Governed Company in the unlisted segment (Emerging Category) award at the 19th ICSI National Award for Corporate governance.
References
External links
Official Website
SMS Gateway
Mobile telecommunications
Companies based in Mumbai
Software companies of India
Mobile technology companies
Cloud communication platforms
Indian companies established in 2004
2004 establishments in Maharashtra
Software companies established in 2004
Companies listed on the National Stock Exchange of India
Companies listed on the Bombay Stock Exchange
2024 mergers and acquisitions | Route Mobile | Technology | 375 |
6,366,541 | https://en.wikipedia.org/wiki/Michael%20Sorkin | Michael David Sorkin (August 2, 1948 – March 26, 2020) was an American architectural and urban critic, designer, and educator. He was considered to be "one of architecture's most outspoken public intellectuals", a polemical voice in contemporary culture and the design of urban places at the turn of the twenty-first century. Sorkin first rose to prominence as an architectural critic for the Village Voice in New York City, a post which he held for a decade throughout the 1980s. In the ensuing years, he taught at prominent universities around the world, practiced through his eponymous firm, established a nonprofit book press, and directed the urban design program at the City College of New York. He died at age 71 from complications brought on by COVID-19 during the COVID-19 pandemic.
Early life and education
Sorkin was born in Washington, D.C., in 1948. He was an architect and urbanist whose practice spanned design, planning, criticism, and teaching. Sorkin received a bachelor's degree from the University of Chicago in 1969, and a master's degree in architecture from the Massachusetts Institute of Technology (M.Arch '74). Sorkin also held a master's degree in English from Columbia University (MA '70). He was founding principal of the Michael Sorkin Studio, a New York-based global design practice with special interests in urban planning, urban design and green urbanism.
Career
Early career
Sorkin was house architecture critic for The Village Voice in the 1980s, and he authored numerous articles and books on the subjects of contemporary architecture, design, cities, and the role of democracy in architecture.
Academia
Sorkin was an educator at the collegiate level. He was professor of urbanism and director of the Institute of Urbanism at the Academy of Fine Arts, Vienna, from 1993 to 2000. He was a visiting professor at many schools, including, for ten years, the Cooper Union in New York. Sorkin also held the Hyde Chair at the University of Nebraska–Lincoln College of Architecture, the Davenport Chair at the Yale University School of Architecture, and the Eliel Saarinen Visiting Professorship at the Taubman College of Architecture and Urban Planning, University of Michigan.
He was a guest lecturer and critic at the Architectural Association School of Architecture in London, the Harvard Graduate School of Design, the Cornell University College of Architecture, Art, and Planning, the University of Illinois Urbana-Champaign, the Aarhus School of Architecture in Denmark, and the London Consortium. He also taught at a number of institutions, including Columbia University, London's Architectural Association, and Harvard University.
Dedicated to architectural education for social change, Sorkin oversaw fieldwork in distressed environments such as Johannesburg, South Africa and Havana, Cuba. He co-organized "Project New Orleans" with collaborators Carol McMichael Reese and Anthony Fontenot, to support the post-Katrina city. In 2008, Sorkin was appointed Distinguished Professor of Architecture of the City University of New York.
Design practice
He was a principal in the Michael Sorkin Studio. The studio in New York City focuses primarily on professional practice in the urban public realm. Sorkin designed environmental projects in Hamburg, Germany, and proposed master plans for the Palestinian capital in East Jerusalem, and the Brooklyn waterfront and Queens Plaza in New York City. His urban studies have been the subject of gallery exhibits, and in 2010, he received the American Academy of Arts and Letters award in architecture. Sorkin presented regularly at regional, national, and international conferences, and he served as adviser and juror on numerous professional committees, including The Guggenheim Helsinki Design Competition, The Aga Khan Trust for Culture's Aga Khan Award for Architecture, Chrysler Design Award, the New York City Chapter of the American Institute of Architecture, Architectural League of New York, and in the area of design writing and commentary, for Core 77.
Sorkin was the co-president of the Institute for Urban Design, an education and advocacy organization, and vice president of the Urban Design Forum in New York.
Urban planning projects (selection)
1994: Masterplan for the Brooklyn Waterfront.
1994: Proposal for Südraum Leipzig
1998: Alternative University of Chicago campus masterplan.
2001: Proposal for Arverne Urban Renewal Area on the Rockaway peninsula, Queens, N.Y.
2001: A Plan For Lower Manhattan.
2004: Project for Penang Peaks, Penang, Malaysia.
2005: Masterplan for New City, Chungcheong, South Korea.
2009: Seven Star Hotel, Tianjin Highrise Building, Tianjin, China.
2010: Case Study: Feeding New York in New York. 3rd International Holcim Forum 2010 in Mexico City.
2010: Plan for Lower Manhattan. Exhibition, Our Cities Ourselves: The Future of Transportation in Urban Life Center for Architecture, Greenwich Village, N.Y.
2012: concept for Xi'an, China Airport Office Building
2013: 28+: MOMA PS1 Rockaway.
2013: New York City Football Stadium Site Survey.
2013: An alternative proposal for NYU.
Writing and publishing
Sorkin had a broad career as an architecture writer. He wrote on the topics of contemporary architecture and urban dynamics, along the dimensions of environmentalism, sustainability, pedestrianization, public space, urban culture, and the legacy of modernist approaches to urban planning. He was a member of the International Committee of Architectural Critics. For ten years, Sorkin was architecture critic for The Village Voice, and he wrote for Architectural Record, The New York Times, The Architectural Review, Metropolis, Mother Jones, Vanity Fair, the Wall Street Journal, and The Nation. As a volume editor, he organized multi-authored publications, and he contributed essays to a range of architecture publications. He also authored 20 books.
Legacy
Death
Sorkin died on March 26, 2020, from complications brought on by COVID-19 in Manhattan. His death was among the design profession's most prominent losses at the beginning of the COVID-19 pandemic, making news internationally and prompting an outpouring of tributes and obituaries in mainstream, leftist, and architectural media.
Awards and recognitions
2009, 2010: Fellow of the American Academy of Arts & Sciences
2010: Graham Foundation Architecture Award
2011: Graham Foundation, for New York City (Steady) State, with Robin Balles and Christian Eusebio.
2013: Cooper Hewitt, Smithsonian Design Museum Design Mind Award.
2015: John Simon Guggenheim Memorial Foundation Fellow in Architecture, Planning and Design
Bibliography
Books
Sorkin, M. & Beede Howe, M. (1981) Go Blow Your Nose. New York: St. Martin's Press.
Sorkin, M. (1991) Exquisite Corpse: Writing on Buildings. London: Verso.
Sorkin, M. (1993) Local Code: The Constitution of a City at 42° N Latitude. New York: Princeton Architectural Press.
Sorkin, M. (1997) Traffic In Democracy. Ann Arbor, Michigan: University of Michigan College of Architecture and Urban Planning.
Sorkin, M. (2001) Some Assembly Required. Minneapolis: University of Minnesota Press.
Sorkin, M. (2002) Pamphlet Architecture 22 : Other Plans: University of Chicago Studies, 1998–2000. New York: Princeton Architectural Press.
Sorkin, M. (2003) Starting From Zero: Reconstructing Downtown New York. New York : Routledge.
Sorkin, M. (ed.) (2005) "Against the Wall: Israel's Barrier to Peace." New York : Norton.
Sorkin, M. (2008) Indefensible Space : The Architecture of the National Insecurity State. New York : Routledge.
Sorkin, M. (2009) Twenty Minutes in Manhattan. London: Reaktion.
Sorkin, M. (2011) All Over The Map: Writing on Buildings and Cities. London: Verso.
Sorkin, M. (2018) What Goes Up: The Right and Wrongs to the City London: Verso.
Editor, contributor, selected
Sorkin, M., "The Domestic Apparatus." In Ranalli, G., "George Ranalli : buildings and projects." Princeton Architectural Press, 1988.
Sorkin, M., "Ciao Manhattan." In Klotz, H. "New York architecture, 1970–1990." New York, N.Y: Rizzoli International, 1989. Publications.
Sorkin, M., "Forward." In Vanlaethem, F.,"Gaetano Pesce : architecture, design, art." New York : Rizzoli, 1989.
Sorkin, M., "Nineteen millennial mantras." In Noever, P.(ed.), "Architecture in transition: Between deconstruction and new modernism." Munich: Prestel, 1991.
Sorkin, M., "Introduction: Variations on a Theme Park." In Sorkin, M. (ed.), "Variations on a Theme Park : Scenes From the New American City and the End of Public Space." Hill and Wang, 1992, pp. xi-xv.
Sorkin, M., "Preface." In "Hugh Hardy, Malcolm Holzman, and Norman Pfeiffer: Hardy Holzman Pfeiffer Associates Buildings and projects, 1967–1992." New York: Rizzoli International, 1992.
Sorkin, M., "Ten for TEN." In TEN Arquitectos (Firm), "TEN Arquitectos: Enrique Norten, Bernardo Gómez-Pimienta." New York: Monacelli Press, 1998.
Sorkin, M., "Introduction: Traffic in Democracy." In Joan Copjec, (ed.), "Giving ground : the politics of propinquity." London: Verso, 1999.
Sorkin, M., "Frozen Light." In Friedman, M. (ed.), "Gehry talks : architecture + process." New York : Rizzoli, 1999.
Sorkin, M. "Measure of Comfort." In Chambers, K. & Sorkin, M.(eds.), "Comfort : reclaiming place in a virtual world." Cleveland, Ohio : Cleveland Center for Contemporary Art, 2001, pp. i-xi.
Sorkin, M., "The Center Cannot Hold." In Sorkin, S. & Zukin, S.(eds.), "After the World Trade Center: Rethinking New York City." New York City: Routledge, 2002.
Sorkin, M. (ed.), "The next Jerusalem: sharing the divided city." New York, NY: Monacelli Press, 2002.
Sorkin, M., "Sex, drugs, rock and roll, cars, dolphins, and architecture." In Lewallen, C., Seid, S., Lord, C., & Ant Farm (Design group)(eds.),"Ant Farm, 1968–1978." Berkeley: University of California Press, 2004.
Sorkin, M., "More or less." In Brown, D.J.(ed.),"The HOME House Project : the future of affordable housing," Winston Salem: Southeastern Center for Contemporary Art, 2004.
Sorkin, M., "Lunch With Emilio." In Ambasz, E. & Dodds, J., (eds.), "Analyzing Ambasz." New York, Monacelli Press, 2004.
Sorkin, M., "With the Grain." In Sirefman, S., Sorkin, M.(eds.), "Whereabouts: New architecture with local identities." New York: Monacelli Press, 2004.
Sorkin, M., "The second greatest generation." In Saunders, W. S., & Frampton, K. "Commodification and spectacle in architecture: A Harvard design magazine reader." Minneapolis: University of Minnesota Press, 2005, pp. 22–33.
Sorkin, M., "Introduction: Saratoga Springs!," in Ranalli, G., "Saratoga, George Ranalli" San Rafael, Calif.: Oro Editions, 2009, pp. 6–11.
Sorkin, M., "Forward." In "Miguel Ángel Aragonés" New York: Rizzoli, 2013.
Sorkin, M., Essay. In Abbott, C., "In/formed by the land: The architecture of Carl Abbott." San Francisco, Calif.: Oro Editions, 2013.
Fontenot, A., McReese, C., Sorkin, M. (eds.), "New Orleans under Reconstruction: The Crisis of Planning." London: Verso, 2014.
Sorkin, M., "Preface." In Durán Calisto, A.M., Altwicker, M., Sorkin, M., (eds.), "Beyond Petropolis: Designing a Practical Utopia in Nueva Loja." Shenzhen, China: Oscar Riera Ojeda Publishers, 2015.
Sorkin, M., Can China's Cities Survive? In: Terreform (ed.) Letters to the Leaders of China: Kongjian Yu and the Future of the Chinese City, pp. 8–17.
References
External links
Michael Sorkin Studio
1948 births
2020 deaths
MIT School of Architecture and Planning alumni
Columbia Graduate School of Arts and Sciences alumni
American urban planners
Deaths from the COVID-19 pandemic in New York (state)
Jewish architects
American Jews
Urban theorists
American architecture writers
Architecture critics
Architecture educators
American male non-fiction writers
Architectural design
Columbia University faculty
The Nation (U.S. magazine) people
Architects from Washington, D.C.
University of Chicago alumni
Cooper Union faculty | Michael Sorkin | Engineering | 2,827 |
44,390,248 | https://en.wikipedia.org/wiki/Nanoelectromechanical%20relay | A nanoelectromechanical (NEM) relay is an electrically actuated switch that is built on the nanometer scale using semiconductor fabrication techniques. They are designed to operate in replacement of, or in conjunction with, traditional semiconductor logic. While the mechanical nature of NEM relays makes them switch much slower than solid-state relays, they have many advantageous properties, such as zero current leakage and low power consumption, which make them potentially useful in next generation computing.
A typical NEM relay requires a potential on the order of tens of volts to "pull in", and has a contact resistance on the order of gigaohms. Coating the contact surfaces with platinum can reduce the achievable contact resistance to as low as 3 kΩ. Compared to transistors, NEM relays switch relatively slowly, on the order of nanoseconds.
Operation
A NEM relay can be fabricated in two, three, or four terminal configurations. A three terminal relay is composed of a source (input), drain (output), and a gate (actuation terminal). Attached to the source is a cantilevered beam that can be bent into contact with the drain in order to make an electrical connection. When a significant voltage differential is applied between the beam and gate, and the electrostatic force overcomes the elastic force of the beam enough to bend it into contact with the drain, the device "pulls in" and forms an electrical connection. In the off position, the source and drain are separated by an air gap. This physical separation allows NEM relays to have zero current leakage, and very sharp on/off transitions.
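A rough sense of why pull-in takes tens of volts can be had from the classic parallel-plate electrostatic actuator approximation, V_pi = sqrt(8·k·g0³ / (27·ε0·A)). The stiffness, gap, and electrode area below are invented for illustration and do not describe any reported device.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pull_in_voltage(k, gap, area):
    """Parallel-plate pull-in voltage: V_pi = sqrt(8*k*g0^3 / (27*eps0*A)).
    k: effective beam stiffness (N/m), gap: actuation gap (m),
    area: electrode overlap area (m^2). A simplified lumped model only."""
    return math.sqrt(8 * k * gap**3 / (27 * EPS0 * area))

# Hypothetical geometry for a nanoscale relay (illustrative values only).
k = 50.0        # N/m effective stiffness
gap = 100e-9    # 100 nm actuation gap
area = 1e-12    # 1 square micrometer electrode overlap

v_pi = pull_in_voltage(k, gap, area)
print(f"pull-in voltage ~ {v_pi:.1f} V")
```

With these numbers the model gives roughly 41 V, consistent with the "tens of volts" figure quoted above; real devices vary widely with geometry and materials.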
The nonlinear nature of the electric field, and adhesion between the beam and drain cause the device to "pull out" and lose connection at a lower voltage than the voltage at which it pulls in. This hysteresis effect means there is a voltage between the pull in voltage, and the pull out voltage that will not change the state of the relay, no matter what its initial state is. This property is very useful in applications where information needs to be stored in the circuit, such as in static random-access memory.
Fabrication
NEM relays are usually fabricated using surface micromachining techniques typical of microelectromechanical systems (MEMS). Laterally actuated relays are constructed by first depositing two or more layers of material on a silicon wafer. The upper structural layer is photolithographically patterned in order to form isolated blocks of the uppermost material. The layer below is then selectively etched away, leaving thin structures, such as the relay's beam, cantilevered above the wafer, and free to bend laterally. A common set of materials used in this process is polysilicon as the upper structural layer, and silicon dioxide as the sacrificial lower layer.
NEM relays can be fabricated using a back end of line compatible process, allowing them to be built on top of CMOS. This property allows NEM relays to be used to significantly reduce the area of certain circuits. For example, a CMOS-NEM relay hybrid inverter occupies 0.03 μm2, one-third the area of a 45 nm CMOS inverter.
History
The first switch made using silicon micro-machining techniques was fabricated in 1978. Those switches were made using bulk micromachining processes and electroplating. In the 1980s, surface micromachining techniques were developed and the technology was applied to the fabrication of switches, allowing for smaller, more efficient relays.
A major early application of MEMS relays was for switching radio frequency signals at which solid-state relays had poor performance. The switching time for these early relays was above 1 μs. By shrinking dimensions below one micrometer, and moving into the nano scale, MEMS switches have achieved switching times in the ranges of hundreds of nanoseconds.
Applications
Mechanical computing
Due to transistor leakage, there is a limit to the theoretical efficiency of CMOS logic. This efficiency barrier ultimately prevents continued increases in computing power in power-constrained applications. While NEM relays have significant switching delays, their small size and fast switching speed when compared to other relays means that mechanical computing utilizing NEM Relays could prove a viable replacement for typical CMOS based integrated circuits, and break this CMOS efficiency barrier.
A NEM relay switches mechanically about 1000 times more slowly than a solid-state transistor switches electrically. While this makes using NEM relays for computing a significant challenge, their low resistance would allow many NEM relays to be chained together and switched all at once, performing a single large calculation. On the other hand, transistor logic has to be implemented in small cycles of calculations, because the high resistance of transistors does not allow many of them to be chained together while maintaining signal integrity. Therefore, it would be possible to create a mechanical computer using NEM relays that operates at a much lower clock speed than CMOS logic but performs larger, more complex calculations during each cycle. This would allow NEM relay based logic to perform to standards comparable to current CMOS logic.
There are many applications, such as in the automotive, aerospace, or geothermal exploration businesses, in which it would be beneficial to have a microcontroller that could operate at very high temperatures. However, at high temperatures, semiconductors used in typical microcontrollers begin to fail as the electrical properties of the materials they are made of degrade, and the transistors no longer function. NEM relays do not rely on the electrical properties of materials to actuate, so a mechanical computer utilizing NEM relays would be able to operate in such conditions. NEM relays have been successfully tested at up to 500 °C, but could theoretically withstand much higher temperatures.
Field-programmable gate arrays
The zero current leakage, low energy usage, and ability to be layered on top of CMOS make NEM relays a promising candidate for use as routing switches in field-programmable gate arrays (FPGAs). An FPGA utilizing a NEM relay to replace each routing switch and its corresponding static random-access memory block could allow for a significant reduction in programming delay, power leakage, and chip area compared to a typical 22 nm CMOS-based FPGA. This area reduction mainly comes from the fact that the NEM relay routing layer can be built on top of the CMOS layer of the FPGA.
See also
Nanoelectromechanical systems
References
Relays
Microelectronic and microelectromechanical systems
Nanoelectronics | Nanoelectromechanical relay | Materials_science,Engineering | 1,380 |
3,108,072 | https://en.wikipedia.org/wiki/Galleting | Galleting, sometimes known as garreting or garneting, is an architectural technique in which spalls (small pieces of stone) are pushed into wet mortar joints during the construction of a masonry building. The term comes from the French word galet, which means "pebble." In general, the word "galleting" refers to the practice while the word "gallet" refers to the spall. Galleting was mostly used in England, where it was common in South East England and the county of Norfolk.
Description
Galleting is mainly used in stone masonry buildings constructed out of sandstone or flint. The technique varies depending on which of these materials is used. In sandstone buildings, the spalls are often a different type of sandstone than the one used in the wall, though sometimes they are pieces of the same stone. For example, carstone, also known as ironstone, is a type of sandstone that is commonly used for galleting. In sandstone buildings, the spalls are usually shaped into small cubes about half an inch in diameter and are flush with the stone. In flint buildings, the edges of thin slivers of flint are commonly pushed into the mortar, so that the surface of the wall is uneven and the edges of the flint spalls jut out from the wall. In some cases, these techniques are combined such that flint walls are galleted with sandstone spalls or vice versa, however it is uncommon. Although it is also uncommon, galleting has been used in brick masonry construction, where sandstone spalls are generally used over flint ones. More eclectic materials used as gallets include brick, tile, beach pebbles, glass, and oyster shells. In higher status buildings, galleting was superseded by square knapping the flints to produce flat, squared stones that produced a surface with little exposed mortar.
It is unclear whether galleting performs a practical, structural function or is an aesthetic application. It is possible that galleting is used when the local stone is not an easily worked freestone, which means that the stone is more irregular and therefore requires thick mortar joints. In this case, gallets would serve as wedges to provide structural support to the stone and would shield the mortar from weather. It is also possible that galleting does not reinforce the mortar and was used purely for aesthetic reasons. Scholarship has also suggested that galleting was neither a structural nor an aesthetic practice, but rather a superstitious one, an attempt to protect a building from witches and other evil influences. However, Historic Scotland's Technical Advice Note 1 (1995), regarding the use of lime mortars, states that such walls contain "...numerous small pinning stones which contributed to the overall stability of the masonry, reduced the quantity of expensive lime required and minimised the effects of drying shrinkage in the mortar".
Location
In England, galleting can be found almost exclusively in the South East between the North and South Downs, where sandstone is common, and in the county of Norfolk, where flint is common. Given that these locations are not contiguous, much has been debated about the origin and spread of the practice, with some attributing its geographical prevalence to the particularities of the stonemason trade.
Most scholarship focuses on the use of galleting in England. However, there is evidence that it was used in rural Pennsylvania and Maryland as well as in Philadelphia, Vienna, Austria, the Azores, Paris, and Barcelona.
Period of use
There is some debate about when galleting was most commonly practiced. Some sources associate the technique with late medieval building construction, while others suggest that galleting was used mostly in the 17th and 18th centuries before declining in popularity over the course of the 19th century. Historical records indicate that parts of Windsor Castle (n.d.), Eton College (c. 1441), and the Tower of London (c. 1514) were galleted with flint or oyster shells. This suggests that galleting may have been first used in more prestigious buildings and was later adopted in less prestigious buildings once timber framing was supplanted by masonry construction.
Examples
Sevenoaks School
Knole House
Ightham Mote
Tigbourne Court
Norwich Guildhall
Strangers' Hall in Norwich
The village of Heacham in Norfolk boasts examples of a wide variety of types of galleting.
St James' Episcopal Church, Philadelphia, PA (U.S.)
Hancock's Resolution, Anne Arundel County, MD (U.S.)
The greenhouse at Bartram's Garden, Philadelphia, PA (U.S.)
Bradford Friends Meeting House, West Bradford Township, Chester County, PA (U.S.)
Sully Stone Dairy, Sully Historic Site, Chantilly, Virginia (U.S.)
References
Architectural elements
Masonry | Galleting | Technology,Engineering | 984 |
57,680,998 | https://en.wikipedia.org/wiki/Matrix%20factorization%20%28recommender%20systems%29 | Matrix factorization is a class of collaborative filtering algorithms used in recommender systems. Matrix factorization algorithms work by decomposing the user-item interaction matrix into the product of two lower dimensionality rectangular matrices. This family of methods became widely known during the Netflix prize challenge due to its effectiveness as reported by Simon Funk in his 2006 blog post, where he shared his findings with the research community. The prediction results can be improved by assigning different regularization weights to the latent factors based on items' popularity and users' activeness.
Techniques
The idea behind matrix factorization is to represent users and items in a lower dimensional latent space. Since the initial work by Funk in 2006 a multitude of matrix factorization approaches have been proposed for recommender systems. Some of the most used and simpler ones are listed in the following sections.
Funk MF
The original algorithm proposed by Simon Funk in his blog post factorized the user-item rating matrix as the product of two lower dimensional matrices, the first of which has a row for each user, while the second has a column for each item. The row or column associated with a specific user or item is referred to as that user's or item's latent factors. Note that no singular value decomposition is applied in Funk MF; it is an SVD-like machine learning model.
The predicted ratings can be computed as $\tilde{R} = H W$, where $\tilde{R}$ approximates the user-item rating matrix $R$, $H$ contains the user's latent factors and $W$ the item's latent factors.
Specifically, the predicted rating user u will give to item i is computed as the dot product of their latent factor vectors: $\hat{r}_{ui} = h_u \cdot w_i$.
It is possible to tune the expressive power of the model by changing the number of latent factors. It has been demonstrated that a matrix factorization with one latent factor is equivalent to a most-popular or top-popular recommender (i.e. it recommends the items with the most interactions, without any personalization). Increasing the number of latent factors improves personalization, and therefore recommendation quality, until the number of factors becomes too high, at which point the model starts to overfit and the recommendation quality decreases. A common strategy to avoid overfitting is to add regularization terms to the objective function.
Funk MF was developed as a rating prediction problem, therefore it uses explicit numerical ratings as user-item interactions.
All things considered, Funk MF minimizes the following objective function:
$\arg\min_{H,W} \|R - \tilde{R}\|_F + \alpha \|H\| + \beta \|W\|$
where $\|\cdot\|_F$ is defined to be the Frobenius norm, whereas the other norms may be either the Frobenius norm or another norm depending on the specific recommendation problem.
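A minimal sketch of how this objective can be minimized with stochastic gradient descent, in the spirit of Funk's original approach (the data, hyperparameters, and function name below are illustrative, not from the original):

```python
import random

def funk_mf(ratings, n_users, n_items, n_factors=2,
            lr=0.05, reg=0.02, epochs=1000, seed=0):
    """Factorize sparse (user, item, rating) triples into user factors H
    and item factors W by SGD on the regularized squared error."""
    rng = random.Random(seed)
    H = [[rng.gauss(0, 0.1) for _ in range(n_factors)] for _ in range(n_users)]
    W = [[rng.gauss(0, 0.1) for _ in range(n_factors)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - sum(H[u][f] * W[i][f] for f in range(n_factors))
            for f in range(n_factors):
                hu, wi = H[u][f], W[i][f]
                H[u][f] += lr * (err * wi - reg * hu)  # descend on error, shrink via L2 penalty
                W[i][f] += lr * (err * hu - reg * wi)
    return H, W

# Tiny example: 3 users x 3 items with 6 observed ratings.
ratings = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 1), (2, 1, 4), (2, 2, 2)]
H, W = funk_mf(ratings, n_users=3, n_items=3)
predict = lambda u, i: sum(H[u][f] * W[i][f] for f in range(2))
```

Note that unobserved entries of the rating matrix are simply skipped during training, which is what distinguishes this from a true singular value decomposition.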
SVD++
While Funk MF is able to provide very good recommendation quality, its ability to use only explicit numerical ratings as user-item interactions constitutes a limitation. Modern day recommender systems should exploit all available interactions, both explicit (e.g. numerical ratings) and implicit (e.g. likes, purchases, skips, bookmarks). To this end, SVD++ was designed to take implicit interactions into account as well.
Compared to Funk MF, SVD++ also takes into account user and item bias.
The predicted rating user u will give to item i is computed as:
$\hat{r}_{ui} = \mu + b_i + b_u + h_u \cdot w_i$
where $\mu$ refers to the overall average rating over all items, and $b_i$ and $b_u$ refer to the observed deviation of item i and user u, respectively, from the average. SVD++ has however some disadvantages, with the main drawback being that this method is not model-based. This means that if a new user is added, the algorithm is incapable of modeling it unless the whole model is retrained. Even though the system might have gathered some interactions for that new user, its latent factors are not available and therefore no recommendations can be computed. This is an example of a cold-start problem, that is, the recommender cannot deal efficiently with new users or items, and specific strategies should be put in place to handle this disadvantage.
A possible way to address this cold-start problem is to modify SVD++ so that it becomes a model-based algorithm, allowing it to easily manage new items and new users.
As previously mentioned, in SVD++ we do not have the latent factors of new users, so it is necessary to represent them in a different way. The user's latent factors represent the preference of that user for the corresponding item's latent factors, so the user's latent factors can be estimated from past user interactions, for instance as the interaction-weighted sum of item factors, $h_u = \sum_j r_{uj} w_j$ (equivalently, $H = R W^T$). If the system is able to gather some interactions for the new user, it is therefore possible to estimate its latent factors.
Note that this does not entirely solve the cold-start problem, since the recommender still requires some reliable interactions for new users, but at least there is no need to recompute the whole model every time. It has been demonstrated that this formulation is almost equivalent to a SLIM model, which is an item-item model based recommender.
With this formulation, the equivalent item-item recommender would be $\tilde{R} = R S$ with similarity matrix $S = W^T W$. Therefore the similarity matrix is symmetric.
Asymmetric SVD
Asymmetric SVD aims at combining the advantages of SVD++ while being a model-based algorithm, and therefore being able to consider new users with a few ratings without needing to retrain the whole model. As opposed to the model-based SVD++, here the user latent factor matrix H is replaced by Q, which learns the user's preferences as a function of their ratings.
The predicted rating user u will give to item i is computed as:
$\hat{r}_{ui} = \mu + b_i + b_u + \left(\sum_{j} r_{uj}\, q_j\right) \cdot w_i$
where the sum runs over the items j rated by user u.
With this formulation, the equivalent item-item recommender would be $\tilde{R} = R S$ with $S = Q W$. Since matrices Q and W are different, the similarity matrix is asymmetric, hence the name of the model.
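The symmetry distinction between the two formulations can be checked numerically. With randomly filled factor matrices (the values below are purely illustrative), a product of the form $W^T W$ is symmetric by construction, while a product of two different matrices $Q W$ generally is not:

```python
import random

def matmul(A, B):
    """Naive product of A (n x k) and B (k x m), both lists of lists."""
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def is_symmetric(S, tol=1e-12):
    return all(abs(S[i][j] - S[j][i]) < tol
               for i in range(len(S)) for j in range(len(S)))

rng = random.Random(42)
n_items, n_factors = 4, 2
W = [[rng.random() for _ in range(n_items)] for _ in range(n_factors)]  # factors x items
Q = [[rng.random() for _ in range(n_factors)] for _ in range(n_items)]  # items x factors

S_model_based = matmul(transpose(W), W)  # W^T W: symmetric item-item similarity
S_asymmetric = matmul(Q, W)              # Q W: generally asymmetric
```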
Group-specific SVD
A group-specific SVD can be an effective approach to the cold-start problem in many scenarios. It clusters users and items based on dependency information and similarities in characteristics. Then, once a new user or item arrives, we can assign a group label to it and approximate its latent factor by the group effects of the corresponding group. Therefore, although ratings associated with the new user or item are not necessarily available, the group effects provide immediate and effective predictions.
The predicted rating user u will give to item i is computed as:
$\hat{r}_{ui} = (h_u + s_{v_u}) \cdot (w_i + t_{j_i})$
Here $v_u$ and $j_i$ represent the group label of user u and item i, respectively, which are identical across members from the same group, and $S$ and $T$ are matrices of group effects (with rows $s_v$ and $t_j$). For example, for a new user whose latent factor $h_u$ is not available, we can at least identify their group label $v_u$, and predict their ratings as:
$\hat{r}_{ui} = s_{v_u} \cdot (w_i + t_{j_i})$
This provides a good approximation to the unobserved ratings.
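The cold-start fallback described above can be sketched in a few lines; all group labels and factor values here are invented purely for illustration:

```python
# Each user/item factor is augmented by its group's effect vector, so a
# brand-new user (no individual factor yet) falls back on the group effect.
user_factors = {"alice": [0.9, 0.1]}           # learned individual factors h_u
item_factors = {"item_a": [0.8, 0.2]}          # learned individual factors w_i
user_group_effects = {"casual": [0.2, 0.3]}    # s_{v_u}, shared within a user group
item_group_effects = {"movies": [0.1, 0.1]}    # t_{j_i}, shared within an item group

def predict(user, item, user_group, item_group):
    h = user_factors.get(user, [0.0, 0.0])     # unknown user -> zero individual factor
    s = user_group_effects[user_group]
    w = item_factors[item]
    t = item_group_effects[item_group]
    # (h_u + s_{v_u}) . (w_i + t_{j_i})
    return sum((hf + sf) * (wf + tf) for hf, sf, wf, tf in zip(h, s, w, t))

known_user = predict("alice", "item_a", "casual", "movies")
new_user = predict("someone_new", "item_a", "casual", "movies")  # cold start: group effect only
```

Even with no individual latent factor, the new user still receives a non-trivial prediction driven entirely by the group effects.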
Hybrid MF
In recent years many other matrix factorization models have been developed to exploit the ever-increasing amount and variety of available interaction data and use cases. Hybrid matrix factorization algorithms are capable of merging explicit and implicit interactions, or both content and collaborative data.
Deep-learning MF
In recent years a number of neural and deep-learning techniques have been proposed, some of which generalize traditional matrix factorization algorithms via a non-linear neural architecture.
While deep learning has been applied to many different scenarios (context-aware, sequence-aware, social tagging, etc.), its real effectiveness when used in a simple collaborative filtering scenario has been put into question. Systematic analysis of publications applying deep learning or neural methods to the top-k recommendation problem, published in top conferences (SIGIR, KDD, WWW, RecSys, IJCAI), has shown that on average less than 40% of articles are reproducible, with as little as 14% in some conferences. Overall, these studies identified 26 articles; only 12 of them could be reproduced, and 11 of those could be outperformed by much older and simpler, properly tuned baselines. The articles also highlight a number of potential problems in today's research scholarship and call for improved scientific practices in that area. Similar issues have been spotted in sequence-aware recommender systems as well.
See also
Collaborative filtering
Recommender system
References
Collective intelligence
Information systems
Recommender systems | Matrix factorization (recommender systems) | Technology | 1,605 |
23,820,104 | https://en.wikipedia.org/wiki/Gymnopilus%20amarissimus | Gymnopilus amarissimus is a species of mushroom in the family Hymenogastraceae.
See also
List of Gymnopilus species
External links
Index Fungorum
amarissimus
Taxa named by William Alphonso Murrill
Fungus species | Gymnopilus amarissimus | Biology | 54 |
6,575,768 | https://en.wikipedia.org/wiki/Thioureas | In organic chemistry, thioureas are members of a family of organosulfur compounds with the formula and structure . The parent member of this class of compounds is thiourea (). Substituted thioureas are found in several commercial chemicals.
Structure and bonding
Thioureas have a trigonal planar molecular geometry at the central carbon of the N2C=S core. The C=S bond distance is near 1.71 Å, which is 0.1 Å longer than the C=O bond in normal ketones (R2C=O). The C–N bond distances are short. Thioureas occur in two tautomeric forms. For the parent thiourea, the thione form predominates in aqueous solutions. The thiol form, known as an isothiourea, can be encountered in substituted compounds such as isothiouronium salts.
On the other hand, some compounds depicted as isothioureas are in fact thioureas, one example being mercaptobenzimidazole.
Synthesis
N,N′-unsubstituted thioureas can be prepared by treating the corresponding cyanamide with hydrogen sulfide or similar sulfide sources. Organic ammonium salts react with potassium thiocyanate, which serves as the source of the thiocarbonyl (C=S) group.
Alternatively, N,N′-disubstituted thioureas can be prepared by coupling two amines with thiophosgene:
Amines also condense with organic thiocyanates to give thioureas:
Cyclic thioureas are prepared by transamidation of thiourea with diamines. Ethylene thiourea is synthesized by treating ethylenediamine with carbon disulfide. In some cases, thioureas can be prepared by thiation of ureas using phosphorus pentasulfide.
Applications
Drugs that feature the thiourea functional group include methimazole, carbimazole (converted in vivo to methimazole), and propylthiouracil.
Catalysis
Some thioureas are vulcanization accelerators. Thioureas are also used in a research theme called thiourea organocatalysis.
References
Further reading
External links
INCHEM assessment of thiourea
International Chemical Safety Card 0680
Functional groups | Thioureas | Chemistry | 482 |
61,647,390 | https://en.wikipedia.org/wiki/Merope%20%28supercomputer%29 | Merope was a cluster composed of repurposed Intel Xeon X5670 (Westmere) processors that were once part of the Pleiades supercomputer. The system is used both for running real-world computational jobs for NASA scientists and engineers and for testing purposes. Housed in an auxiliary processing center located about 1 kilometer from the NAS facility at NASA Ames Research Center.
Merope (pronounced MEH-reh-pee) is named after one of the seven stars that make up the Pleiades open star cluster in the constellation Taurus.
References
NASA supercomputers
SGI supercomputers | Merope (supercomputer) | Technology | 127 |
56,255,646 | https://en.wikipedia.org/wiki/Transcriptome%20instability | Transcriptome instability is a genome-wide, pre-mRNA splicing-related characteristic of certain cancers. In general, pre-mRNA splicing is dysregulated in a high proportion of cancerous cells. For certain types of cancer, like in colorectal and prostate, the number of splicing errors per cancer has been shown to vary greatly between individual cancers, a phenomenon referred to as transcriptome instability. Transcriptome instability correlates significantly with reduced expression level of splicing factor genes. Mutation of DNMT3A contributes to development of hematologic malignancies, and DNMT3A-mutated cell lines exhibit transcriptome instability as compared to their isogenic wildtype counterparts.
References
Gene expression
Cancer | Transcriptome instability | Chemistry,Biology | 155 |
10,444,281 | https://en.wikipedia.org/wiki/One-Million-Liter%20Test%20Sphere | The One-Million-Liter Test Sphere—also known as the Test Sphere, the Horton Test Sphere, the Cloud Study Chamber, Building 527, and the "Eight Ball" (or "8-ball")—is a decommissioned biological warfare (BW) chamber and testing facility located on Fort Detrick, Maryland, US. It was constructed and utilized by the U.S. Army Biological Warfare Laboratories as part of its BW research program from 1951 to 1969. It is the largest aerobiology chamber ever constructed and was placed on the National Register of Historic Places in 1977.
The structure
The stainless steel test sphere, a cloud chamber used to study static microbial aerosols, is a four-story-high, 131-ton structure. Its carbon steel hull was designed to withstand the internal detonation of "hot" biological bombs without risk to outsiders. It was originally contained within a cubical brick building.
Its purpose was the study of infectious agent aerosols and testing of pathogen-filled munitions. The device was designed to allow exposure of animals and humans to carefully controlled numbers of organisms by an aerosol (inhalational) route. Live, tethered animals were inserted into the chamber along with BW bombs for exposure tests. Human volunteers breathed metered aerosols of Q fever or tularemia organisms through ports along the perimeter of the sphere.
History (1947–1969)
Herbert G. Tanner, the head of Camp (now Fort) Detrick's Munitions Division, had envisioned an enclosed environment where biological tests could be conducted on site, rather than at remote places like Dugway Proving Ground, Utah and Horn Island, Mississippi.
The facility was constructed during 1947 and 1948 and became operational at Camp Detrick in 1950.
The test sphere was utilized during the Operation Whitecoat studies (1954–73), the first exposure taking place on January 25, 1955.
History (1969–present)
The test sphere has not been used since 1969, when the US offensive BW program was disestablished by President Nixon. The brick building housing the test sphere was destroyed by fire in 1974. However, the chamber itself was placed on the National Register of Historic Places in 1977.
See also
Aerobiology
United States Army Medical Research Institute of Infectious Diseases
United States biological weapons program
Building 470
References
External links
Photos of the "Eight Ball"
, including photo from 2001, at Maryland Historical Trust
Former medical research facilities of the United States Army
Military installations closed in the 20th century
Buildings and structures in Frederick County, Maryland
Military facilities on the National Register of Historic Places in Maryland
Government buildings completed in 1951
Biological warfare facilities
Fort Detrick
National Register of Historic Places in Frederick County, Maryland
United States biological weapons program | One-Million-Liter Test Sphere | Biology | 550 |
2,848,762 | https://en.wikipedia.org/wiki/ZetaGrid | ZetaGrid was at one time the largest distributed computing project, designed to explore the non-trivial roots of the Riemann zeta function, checking over one billion roots a day.
Roots of the zeta function are of particular interest in mathematics; a single root out of alignment would disprove the Riemann hypothesis, with far-reaching consequences for all of mathematics.
The project ended in November 2005 due to instability of the hosting provider. More than the first 10^13 zeroes were checked. The project administrator stated that after the results were analyzed, they would be posted on the American Mathematical Society website. The official status remains unclear, however, as the results were never published nor independently verified. This is likely because there was no evidence that each zero was actually computed, as there was no process implemented to check each one as it was calculated.
References
External links
Home page (Web archive)
Grid computing
Zeta and L-functions
Hilbert's problems
Experimental mathematics | ZetaGrid | Mathematics | 191 |
12,066,155 | https://en.wikipedia.org/wiki/International%20Conference%20on%20Computer%20and%20Information%20Technology | International Conference on Computer and Information Technology or ICCIT is a series of computer science and information technology based conferences that is hosted in Bangladesh since 1997 by a different university each year. ICCIT provides a forum for researchers, scientists, and professionals from both academia and industry to exchange up-to-date knowledge and experience in different fields of computer science/engineering and information and communication technology (ICT). This is a regularly held ICT based major annual conference (held typically in December) in Bangladesh now in its 25th year. ICCIT series has succeeded in engaging the most number of universities in Bangladesh from both public and private sectors. Each new university in Bangladesh have been investing in computer science, computer engineering, information systems, and related fields.
Starting in 2008, ICCIT has been co-sponsored by IEEE. On average, since 2003, 31.1% of submitted manuscripts have been accepted for presentation and inclusion in the IEEE Xplore Digital Library, one of the largest scholarly research databases, containing over two million records that index, abstract, and provide full text for articles and papers on computer science, electrical engineering, electronics, information technology, and the physical sciences.
History
ICCIT traces its history to 1997, when the University of Dhaka organized the National Conference on Computer and Information Systems (NCCIS), a conference based on IT and computer science. It was probably the first initiative to organize an IT-based conference in Bangladesh with participation from multiple universities. The very next year, in 1998, the conference was renamed to its current name and gained international status by opening its doors to participants from outside of Bangladesh. Since then, each year a university approved by the ICCIT committee hosts the event in late December.
Areas
ICCIT is mainly focused on computer science and information technology but also covers related electronic engineering topics. Major areas of ICCIT include, but are not limited to:
Algorithms
Artificial intelligence
Bengali language processing
Bio-informatics
Computer vision
Computer graphics and multimedia
Computer network and data communications
Computer based education
Database systems
Digital signal processing and image processing
Digital system and logic design
Distributed and parallel processing
E-commerce and E-governance
Human computer interaction
Information systems
Internet and World Wide Web Applications
Knowledge data engineering
Neural networks
Pattern recognition
Robotics
Software engineering
System security
Ubiquitous computing
VLSI
Wireless communications and mobile computing
Past conferences
Starting 1997, ICCIT has had 24 successful events at 20 different universities.
1997 University of Dhaka, Dhaka (as NCCIS '97)
1998 Bangladesh University of Engineering and Technology (BUET), Dhaka
1999 Shahjalal University of Science and Technology, Sylhet
2000 North South University, Dhaka
2001 University of Dhaka, Dhaka
2002 East West University, Dhaka
2003 Jahangirnagar University, Savar
2004 BRAC University, Dhaka
2005 Islamic University of Technology (IUT), Gazipur
2006 Independent University Bangladesh (IUB), Dhaka
2007 United International University (UIU), Dhaka
2008 Khulna University of Engineering and Technology (KUET), Khulna
2009 Independent University Bangladesh (IUB) and Military Institute of Science and Technology, Dhaka
2010 Ahsanullah University of Science and Technology, Dhaka
2011 American International University-Bangladesh, Dhaka
2012 Chittagong University, Chittagong
2013 Khulna University, Khulna
2014 Daffodil International University, Dhaka
2015 Military Institute of Science and Technology, Dhaka
2016 North South University, Dhaka
2017 University of Asia Pacific, Dhaka
2018 United International University, Dhaka
2019 Southeast University, Dhaka
2020 Ahsanullah University of Science and Technology, Dhaka
2021 North South University, Dhaka
International Program Committee
The key to the success of ICCIT is its International Program Committee (IPC), co-chaired by Professor Mohammad Ataul Karim, of University of Massachusetts Dartmouth and Professor Mohammad Showkat Alam of Texas A&M University-Kingsville. The IPC for ICCIT 2012, for example, is a body of eighty five (85) field experts all of whom are affiliated with either a university or a research organisation from outside of Bangladesh. The national make-up of the latest IPC is as follows: USA (43), Australia (12), Canada(6), UK (5), Malaysia (4), Japan (3), Germany (2), India (2), Korea (2), New Zealand (2), Belgium (1), China (1), Ireland (1), Norway (1), and Switzerland (1).
Journal special issues
Starting with ICCIT 2008, a selected number of manuscripts, after further enhancement and an extensive review process, have been included in one of several journal special issues. ICCIT thus does not end with conference proceedings that are indexed worldwide; it takes many of its better papers to the next logical level, the journals. To date, 14 journal special issues have been produced by the ICCIT IPC featuring works of Bangladesh-based researchers in the fields of communications, computing, multimedia, networks, and software. This is a significant feat for Bangladesh and its many researchers; the outcome from this single conference allows about 30–35 teams of researchers each year to showcase their research through archival, indexed journals. It is a major scholarly milestone that makes the ICCIT series different from all other technical conferences held in Bangladesh. In its latest iteration, 32 selected, enhanced ICCIT 2011 manuscripts, after having gone through extensive reviews, have been accepted for inclusion in the following international journals.
Journal of Communications
Guest Editors: M.N. Islam, SUNY Farmingdale, US; K.M. Iftekharuddin, Old Dominion University, US; M.A. Karim, Old Dominion University, US; M.A. Salam, Southern University & A&M College, Louisiana, US
Journal of Computers
Guest Editors: S.M. Aziz, University of South Australia, Australia; M.S. Alam, University of South Alabama, US; K.V. Asari, University of Dayton, US; M. Alamgir Hossain, University of Northumbria, UK; M.A. Karim, Old Dominion University, US; M. Milanova, University of Arkansas at Little Rock, US
Journal of Multimedia
Guest Editors: M. Murshed, Monash University, Australia; M.A. Karim, Old Dominion University, US; M. Paul, Monash University, Australia; S. Zhang, College of Staten Island, US
Journal of Networks
Guest Editors: S. Jabir, France Telecom, Japan; J. Abawajy, Deakin University, Australia; F. Ahmed, Johns Hopkins University Applied Physics Laboratory, US; M.A. Karim, Old Dominion University, US; J. Kamruzzaman, Monash University, Australia; Nurul I. Sarkar, Auckland University of Technology, New Zealand
References
External links
11th ICCIT Home page
15th ICCIT Home page
Computer science conferences
Information technology in Bangladesh | International Conference on Computer and Information Technology | Technology | 1,383 |
58,437,070 | https://en.wikipedia.org/wiki/Participatory%20surveillance | Participatory surveillance is community-based monitoring of other individuals. This term can be applied to both digital media studies and ecological field studies. In the realm of media studies, it refers to how users surveil each other using the internet. Through the use of social media, search engines, people search sites and other web-based aggregators of data, one has the power to find information about the individual being searched, whether voluntarily shared by them or not. Issues of privacy emerge within this sphere of participatory surveillance, predominantly focused on how much information is available on the web that an individual does not consent to. More so, disease outbreak researchers can study social-media based patterns to decrease the time it takes to detect an outbreak, an emerging field of study called infodemiology. Within the realm of ecological fieldwork, participatory surveillance is used as an overarching term for the method in which indigenous and rural communities are used to gain greater accessibility to causes of disease outbreak. By using these communities, disease outbreak can be spotted earlier than through traditional means or healthcare institutions.
History
Towards the beginning of the development of Web 2.0, an increase in online socializing and interaction emerged, largely from the function of social media platforms. Social media platforms originally emerged within the context of the online information highway, where users can control what information is available to other users of the platform. Users can now digitally attach people to locations, without having to physically be within the location, a concept coined as geotagging. With added awareness of the locations of users, an aspect of greater socialization and interconnectivity emerges within both the digital and tangible world. Since the online information highway collects and stores information more permanently than the physical world, many interactions amongst online users can last much longer than physical ones. Since users can control the information and locations in which they associate themselves, they can in part surveil themselves and others to an extent. This is participatory surveillance within a web-based paradigm.
In addition, participatory surveillance has begun to be used as a tool for ecological field research. Currently, it is extremely difficult to detect disease outbreaks in enough time to prepare people for the outcomes. Often, in hard-to-reach areas such as the Arctic, researchers cannot gain a close enough look at disease outbreaks to obtain accurate results. Indigenous peoples know the ecology of the land better and how to reach overlooked research locations. Researchers can engage these communities as rural surveyors, capturing instances of disease outbreak much more quickly and easily than the researchers could themselves.
Social media
Counter-surveillance
Counter-surveillance refers to surveillance-based challenges to power imbalances between individuals and institutions. Although state and industry mass surveillance has received substantial public attention in the wake of disclosures like those made by Edward Snowden about the National Security Agency, interest in activist-deployed and peer surveillance has been increasing. Whereas the average person may not fully understand the surveillance programs of larger collectivities, people are drawing upon surveillance tools themselves in interpersonal relationships and in attempts to bring about institutional accountability. Some researchers assert that by using these technologies of surveillance, the same ones used by companies to track consumer tendencies, the public is essentially feeding into practices of their own personal surveillance.
Empowerment
One argument in favor of social media-based participatory surveillance is that, within social digital media schemas, it emphasizes the power that comes from monitoring what is surveilled of oneself in the context of others, rather than constituting an invasion of privacy or a form of disempowerment. Within the visual discourse of reality television, the artistic narrative associated with presenting lives creates a fake reality which viewers can contextualize, thereby keeping the reality of some aspects of an individual's or collective's life private. This thinking can be transposed to other socially constructed media technologies. In contrast, ambient awareness is associated with cell phones, since they are rarely turned off; this poses a greater security risk. Surveillance webcams focus on the aspects of themselves that users want to show to the digital audience. This is privatized in the respect that users control what they allow others to see, causing them to feel liberated.
Infodemiology
An emerging term within social media based participatory surveillance, infodemiology refers to the use of digital based applications or surveys, to better track disease patterns. Information people search for related to health as well as what the public says on digital-based platforms makes up the fabric of this field of study. Coming about in 2002, infodemiology measures common social media platforms, disease and illness related websites, search engine information, and any other online user-related health data. Crowdsourcing based health-related sites have also been gaining traction in infodemiology. Some include Flu Near You, Influenza.net, Guardians of Health, AfyaData, FluTracking, Vigilant-e, and Saúde na Copa. These sites usually gather information through mapping similar symptoms of users. Some sites, such as InfluenzaNet, provide incentives for users to continue tracking their symptoms or encourage their friends to start tracking theirs.
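Crowdsourced symptom reports of the kind these sites collect are typically reduced to a simple syndromic case definition and counted over time. A toy sketch of that aggregation step (the reports, names, and case definition below are invented for illustration):

```python
from collections import Counter
from datetime import date

# Invented self-reported symptom logs, as a crowdsourcing site might collect.
reports = [
    {"user": "u1", "date": date(2009, 5, 4), "symptoms": {"fever", "cough"}},
    {"user": "u2", "date": date(2009, 5, 4), "symptoms": {"fever", "chills"}},
    {"user": "u3", "date": date(2009, 5, 5), "symptoms": {"headache"}},
    {"user": "u4", "date": date(2009, 5, 5), "symptoms": {"fever", "cough"}},
]

# Simplified "influenza-like illness" case: fever plus another flu symptom.
def is_ili(symptoms):
    return "fever" in symptoms and ("cough" in symptoms or "chills" in symptoms)

daily_ili = Counter(r["date"] for r in reports if is_ili(r["symptoms"]))
# A sustained rise in daily_ili relative to a baseline would flag a possible outbreak.
```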
H1N1 virus
Twitter, a user-generated social media platform, can effectively help track users' thoughts and opinions on diseases, as well as help track disease more rapidly. For example, the H1N1 virus (swine flu) outbreak in 2009 was analyzed through Twitter reactions and responses in order to investigate these areas of thought. After analyzing and comparing tweets across different severities of the H1N1 outbreak, the researchers posited that tweets can provide a reliable estimate of disease patterns.
The speed at which social media reveals public thought and trends is about two weeks faster than standardized disease surveillance through formal health institutions. An example of a social media reaction related to the H1N1 virus was a decline in discussion of antiviral drugs at approximately the same time as the virus became less prevalent. However, because social media is user-generated and unregulated, distinguishing relevant from irrelevant material can blur generalizations and facts. People are also inconsistent in when and what they post on social media. As a result, social media is an unstable variable; standardizing it enough to support valid generalizations would require great expense. To elaborate with the example of Twitter, information on sickness can change meaning in a connotative sense. If a user tweets that they have "Bieber Fever," referring to the popular pop artist Justin Bieber, this is plainly not a real illness but a faux sickness named after the artist's popularity. Such cases complicate the organization of information, requiring complex algorithms that can analyze the contours of these social meanings. Nonetheless, a recent study noted that studies using YouTube to detect outbreaks had only a twenty to thirty percent range of error, leading researchers to continue exploring social media as a force for change in disease outbreak detection.
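The kind of keyword-based signal extraction described above can be sketched in a few lines. The sample tweets, keyword lists, and weekly bucketing below are illustrative assumptions for this sketch, not the actual method of the cited studies.

```python
from collections import Counter
from datetime import datetime

# Illustrative sample of (date, text) pairs standing in for a tweet stream.
tweets = [
    ("2009-05-04", "Home sick with swine flu, started tamiflu today"),
    ("2009-05-06", "I've got Bieber Fever!!!"),
    ("2009-05-07", "h1n1 symptoms: fever and cough, staying in bed"),
    ("2009-06-15", "flu shot queue around the block"),
]

FLU_TERMS = {"swine flu", "h1n1", "tamiflu", "flu"}
# Connotative phrases that must not be counted as illness mentions.
FAUX_TERMS = {"bieber fever"}

def weekly_flu_counts(stream):
    """Count flu-related posts per ISO week, skipping known faux-illness phrases."""
    counts = Counter()
    for date_str, text in stream:
        lowered = text.lower()
        if any(faux in lowered for faux in FAUX_TERMS):
            continue  # e.g. "Bieber Fever" is not a real symptom report
        if any(term in lowered for term in FLU_TERMS):
            week = datetime.strptime(date_str, "%Y-%m-%d").isocalendar()[1]
            counts[week] += 1
    return counts

print(weekly_flu_counts(tweets))
```

Real systems need far richer filtering than a blocklist of faux terms, but the sketch shows why connotative phrases like "Bieber Fever" must be excluded before counts become meaningful.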
Chikungunya virus
Chikungunya virus, associated with moderate to severe skin rashes and joint pain, spread to Italy at the beginning of 2007. The outbreak caused great social concern and therefore produced a plethora of social media reactions. Using an infodemiological approach, the sites where the outbreak was recorded, specifically PubMed, Twitter, Google Trends and News, and Wikipedia views and edits, all provided insight into when information about the disease was received, the concerns associated with the outbreak, and popular opinion on the disease. Interestingly, most of the Twitter posts related to Chikungunya were driven by search engine queries rather than empirical investigation, making the data unusable. Wikipedia proved ineffective for determining whether the site was helpful in understanding the outbreak. Moreover, the opinions users formed from news sources tracked closely with the Wikipedia edits and reactions, and the PubMed responses were likewise consistent with the Wikipedia and Twitter responses. Overall, a significant amount of information was gathered from these sources, suggesting that such sites are useful for documenting disease and public reaction.
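The cross-source consistency described here (news reactions tracking Wikipedia edits, PubMed tracking both) is typically quantified with a simple correlation between time series. The weekly counts below are invented for illustration, and the helper is a plain Pearson correlation rather than the specific analysis used in the study.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical weekly counts: Wikipedia page views vs. news mentions.
wiki_views = [120, 340, 900, 650, 300, 150]
news_hits = [10, 45, 110, 80, 35, 12]

r = pearson(wiki_views, news_hits)
print(round(r, 3))
```

A coefficient near 1 would support the kind of cross-source agreement the study reports; real analyses would also need to align the series in time and correct for volume differences.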
Ecological field work
Cholera outbreak
In hard-to-access regions such as the Arctic and rural Canada, researching ecological processes and disease spread can be difficult without constant monitoring. Indigenous populations have become key to understanding the spread of disease, due to their proximity and connection to the land. For example, Inuit observations during an outbreak of avian cholera helped identify specific zones of infection in Arctic Canada. Specifically, the Common Eider, a species of sea duck, was being tracked to understand an increase in mortality from the disease. The Inuit were the first to report the increase in deaths, owing to their reliance on the Common Eider for meat, feathers, and eggs. With support from the Cape Dorset, Iqaluit, Aupaluk, Kangirsuk, Kangiqsujuaq, and Ivujivik Inuit communities, researchers were able to detect outbreaks of avian cholera in thirteen locations from 2004 to 2016. The Inuit were able to keep a close eye on Common Eider death rates because of their daily routines and subsistence on the duck.
Privacy concerns
As digital technology advances, bringing many dangers to privacy, individuals are attempting to be more careful when meeting others. Background-check websites and search engines reveal just how many people attempt to find information on another person, for whatever reason. Many researchers ignore the idea of privacy altogether when analyzing methods of participatory surveillance. Moreover, from a social media perspective, some researchers claim that information openly shared with others cannot be deemed a breach of privacy. However, a few researchers on the topic do address breaches of privacy within the spheres of both digital media studies and infodemiology.
Infodemiology
Infodemiology relies on users' information to analyze health patterns and public health concerns. Using other people's information without their consent, however, can raise serious ethical privacy violations. In addition, limitations such as individual privacy concerns and unreliable information can make participatory digital information inaccurate and hard to distinguish from the truth.
Doxing
Doxing is a form of cyberbullying that uses the Internet to post private information about an individual or organization as a means of attack. The leaked information can range from a past indiscretion to the victim's home address or even social security number, and may already be freely available on the Internet for the attacker to access and publicize. This differentiates doxing from other types of information leaks: the information is simply being brought to the forefront of the public's awareness, and could have been found by other parties even if it had not been exposed in a more public light. The term "doxing" derives from "document" and was first used in 2001 in connection with the infamous hacker collective Anonymous.
Most current legislation pertaining to cyber threats and attacks is rooted in the 1990s, when the Internet was still developing. Because the information involved is stored online, doxing does not fit standard privacy rights and may breach legal purpose-limitation principles: data posted online may not be freely used for any purpose, even if it is publicly available, and consent cannot be assumed.
From the US constitutional perspective, individuals should have the right to disclose or withhold information while also being able to make decisions about their privacy. The First Amendment protects the right to free speech, and because doxing uses information already available to the public, some 'doxers' claim they are simply exercising their First Amendment rights. The only exception to First Amendment protection came from Cohen v. California, which established the "true threat" exception: free speech protection is breached when the content of the speech maliciously invades privacy interests. This exception, however, may apply only in some doxing situations, with the court weighing the extent of the offense and the reactions to the attack.
In the EU and UK, the GDPR privacy law gives personal data special protection: any data that can identify an individual may only be processed or used if one of the specified legal grounds allows it.
See also
Digital privacy
Infoveillance
Open-source intelligence
Revenge porn
Search engine privacy
Shadow profile
Sousveillance
Swatting
References
Epidemiology
Surveillance
Media studies
Privacy | Participatory surveillance | Environmental_science | 2,580 |
234,627 | https://en.wikipedia.org/wiki/Famoxadone | Famoxadone is a fungicide used to protect agricultural products against various fungal diseases on fruiting vegetables, tomatoes, potatoes, cucurbits, lettuce and grapes. It is used in combination with cymoxanil. Famoxadone is a QI, albeit with a chemistry different from most QIs: it is an oxazolidinedione, while most are strobilurins. It is commonly used against Plasmopara viticola, Alternaria solani, Phytophthora infestans, and Septoria nodorum.
Molecular interaction
Famoxadone binds less strongly at the Q pocket than some other QIs, such as azoxystrobin, because those compounds interact more centrally in the Q pocket than famoxadone does.
Resistance management
Although it has a different chemistry, famoxadone shows full cross-resistance with the rest of FRAC group 11, to which it belongs and which consists almost entirely of strobilurins. It has not, however, shown cross-resistance with the 11A subgroup. As with all QIs, there is a high risk of resistance development, so pesticide stewardship is important.
Populations of P. infestans and A. solani in northern and western Europe are not known to be resistant to famoxadone.
Great Britain approval withdrawn
On 30 June 2024, approval for famoxadone's use in Great Britain was withdrawn by the Health and Safety Executive due to the risk it poses to birds. Its use was already banned in the European Union, and in 2024 there was concern that the permitted residue levels, particularly on table grapes, were too high.
References
External links
Fungicides
Oxazolidinediones
Phenol ethers
Quinone outside inhibitors | Famoxadone | Biology | 383 |
52,378,082 | https://en.wikipedia.org/wiki/Suryanarayanasastry%20Ramasesha | Suryanarayanasastry Ramasesha (born 16 January 1950) is an Indian quantum chemist and a former Dean of the Faculty of Science at the Indian Institute of Science. He is a former chair of the Solid State and Structural Chemistry Unit and Amrut Modi Chair professor of Chemical Sciences at IISc. He is known for his studies on conjugated organic systems and low-dimensional solids and is an elected fellow of the Indian National Science Academy, the Indian Academy of Sciences and The World Academy of Sciences. The Council of Scientific and Industrial Research, the apex agency of the Government of India for scientific research, awarded him the Shanti Swarup Bhatnagar Prize for Science and Technology, one of the highest Indian science awards, in 1992, for his contributions to chemical sciences.
Biography
S. Ramasesha, born in the south Indian state of Karnataka on 16 January 1950, completed his BSc (Hons) (1968) at Bangalore University before securing an MSc (1970) and PhD (1977) from the Indian Institute of Technology, Kanpur. Moving back to his home state, he did his post-doctoral studies at the Indian Institute of Science, as well as at Oxford University, Louisiana State University, and Princeton University. He started his career in 1984 as a member of the faculty at the Indian Institute of Science, where he spent his entire academic career, serving as professor and chair of the Solid State and Structural Chemistry Unit (1992–97), as the Amrut Modi Chair professor of Chemical Sciences (2000–2003) and as Dean of the Faculty of Science (from 2014), before superannuating in 2015. In between, he served as a visiting professor at Princeton University, the University of Arizona, Bordeaux University, École normale supérieure de Cachan, and the University of Mons-Hainaut.
Legacy
Ramasesha is known for his extensive research on the electronic structure and nonlinear properties of conjugated organic systems and low-dimensional solids using the Valence Bond method. His research has assisted in designing new protocols for developing many-body models used to investigate large molecules, low-dimensional materials and real-time dynamics. When the Jawaharlal Nehru Centre for Advanced Scientific Research established its first computer laboratory, Ramasesha served as its founding head; he has also served as convenor of the National Centre for Science Information at the Indian Institute of Science. He has published over 240 peer-reviewed articles and guided 23 research scholars in their doctoral studies.
Awards and honors
The Indian National Science Academy awarded Ramasesha its Young Scientists' Medal in 1978, and he received the B. M. Birla Science Prize in 1990. The Council of Scientific and Industrial Research awarded him the Shanti Swarup Bhatnagar Prize, one of the highest Indian science awards, in 1992. A J. C. Bose National Fellow, he is also a recipient of the Silver Medal of the Chemical Research Society of India and the Alumni Award for Excellence in Research of the Indian Institute of Science. He is an elected fellow of the Indian National Science Academy, the Indian Academy of Sciences and The World Academy of Sciences. He was awarded the Sir M. Visvesvaraya lifetime achievement award by the Government of Karnataka for the year 2018, was conferred the Honorary Fellowship of the Karnataka Science and Technology Academy in 2020, and also received the Chemical Research Society of India Gold Medal for lifetime achievement.
References
External links
Recipients of the Shanti Swarup Bhatnagar Award in Chemical Science
1950 births
Scientists from Bengaluru
Fellows of the Indian Academy of Sciences
Fellows of the Indian National Science Academy
20th-century Indian chemists
TWAS fellows
20th-century Indian inventors
Indian theoretical chemists
Bangalore University alumni
IIT Kanpur alumni
Academic staff of the Indian Institute of Science
Alumni of the University of Oxford
Louisiana State University alumni
Princeton University alumni
Princeton University faculty
University of Arizona faculty
Academic staff of the University of Bordeaux
Living people | Suryanarayanasastry Ramasesha | Chemistry | 796 |
2,319,017 | https://en.wikipedia.org/wiki/Publications%20of%20the%20Astronomical%20Society%20of%20the%20Pacific | Publications of the Astronomical Society of the Pacific (often abbreviated as PASP in references and literature) is a monthly peer-reviewed scientific journal managed by the Astronomical Society of the Pacific. It publishes research and review papers, instrumentation papers and dissertation summaries in the fields of astronomy and astrophysics. Between 1999 and 2016 it was published by the University of Chicago Press and since 2016, it has been published by IOP Publishing. The current editor-in-chief is Jeff Mangum of the National Radio Astronomy Observatory.
PASP has been published monthly since 1899, and along with The Astrophysical Journal, The Astronomical Journal, Astronomy and Astrophysics, and the Monthly Notices of the Royal Astronomical Society, is one of the primary journals for the publication of astronomical research.
See also
List of astronomy journals
References
Astronomy journals
IOP Publishing academic journals
Publications established in 1899
Academic journals associated with learned and professional societies
Astronomical Society of the Pacific | Publications of the Astronomical Society of the Pacific | Astronomy | 185 |
10,231,439 | https://en.wikipedia.org/wiki/Panaeolus%20cambodginiensis | Panaeolus cambodginiensis is a potent hallucinogenic mushroom that contains psilocybin and psilocin. It was described in 1979 as Copelandia cambodginiensis.
Description
The cap is less than 23 mm across, convex with an incurved margin when young, expanding to broadly convex. The cap surface is smooth, often cracking with irregular fissures. The gills are gray to black. The stem is tall, 4 mm thick, and slightly swollen at the base. The spores are black, lemon-shaped, smooth, and measure 11 × 8 μm. The entire mushroom quickly bruises blue where it is handled.
It can be differentiated from the similar Panaeolus cyanescens by microscopic characteristics.
Distribution and habitat
Panaeolus cambodginiensis is a mushroom that grows on the dung of water buffalo. It was first described from Cambodia and is widespread throughout the Asian subtropics and Hawaii.
Alkaloid content
A strongly bluing species. Merlin and Allen (1993) reported the presence of psilocybin and psilocin at up to 0.55% and 0.6%, respectively.
See also
List of Panaeolus species
References
External links
Mushroom John - Panaeolus cambodginiensis
Erowid - Panaeolus cambodginiensis
A Worldwide Geographical Distribution of the Neurotropic Fungi
cambodginiensis
Psychoactive fungi
Psychedelic tryptamine carriers
Fungi of Asia
Fungi of Oceania
Fungi of Hawaii
Fungi without expected TNC conservation status
Fungus species | Panaeolus cambodginiensis | Biology | 318 |
1,973,177 | https://en.wikipedia.org/wiki/Arithmetic%20geometry | In mathematics, arithmetic geometry is roughly the application of techniques from algebraic geometry to problems in number theory. Arithmetic geometry is centered around Diophantine geometry, the study of rational points of algebraic varieties.
In more abstract terms, arithmetic geometry can be defined as the study of schemes of finite type over the spectrum of the ring of integers.
Overview
The classical objects of interest in arithmetic geometry are rational points: sets of solutions of a system of polynomial equations over number fields, finite fields, p-adic fields, or function fields, i.e. fields that are not algebraically closed excluding the real numbers. Rational points can be directly characterized by height functions which measure their arithmetic complexity.
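A standard example of such a height function (the general definition, not drawn from this article) is the naive height on the rationals and on projective space:

```latex
% Naive multiplicative height of a rational number in lowest terms:
H\left(\tfrac{a}{b}\right) = \max(|a|, |b|), \qquad \gcd(a, b) = 1.
% More generally, for P = [x_0 : \cdots : x_n] \in \mathbb{P}^n(\mathbb{Q})
% with coprime integer coordinates x_i:
H(P) = \max_i |x_i|.
% For any bound B, the set of points with H(P) \le B is finite,
% which is what makes heights a workable measure of arithmetic complexity.
```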
The structure of algebraic varieties defined over non-algebraically closed fields has become a central area of interest that arose with the modern abstract development of algebraic geometry. Over finite fields, étale cohomology provides topological invariants associated to algebraic varieties. p-adic Hodge theory gives tools to examine when cohomological properties of varieties over the complex numbers extend to those over p-adic fields.
History
19th century: early arithmetic geometry
In the early 19th century, Carl Friedrich Gauss observed that non-zero integer solutions to homogeneous polynomial equations with rational coefficients exist if non-zero rational solutions exist.
In the 1850s, Leopold Kronecker formulated the Kronecker–Weber theorem, introduced the theory of divisors, and made numerous other connections between number theory and algebra. He then conjectured his "liebster Jugendtraum" ("dearest dream of youth"), a generalization that was later put forward by Hilbert in a modified form as his twelfth problem, which outlines a goal to have number theory operate only with rings that are quotients of polynomial rings over the integers.
Early-to-mid 20th century: algebraic developments and the Weil conjectures
In the late 1920s, André Weil demonstrated profound connections between algebraic geometry and number theory with his doctoral work leading to the Mordell–Weil theorem which demonstrates that the set of rational points of an abelian variety is a finitely generated abelian group.
Modern foundations of algebraic geometry were developed based on contemporary commutative algebra, including valuation theory and the theory of ideals by Oscar Zariski and others in the 1930s and 1940s.
In 1949, André Weil posed the landmark Weil conjectures about the local zeta-functions of algebraic varieties over finite fields. These conjectures offered a framework between algebraic geometry and number theory that propelled Alexander Grothendieck to recast the foundations making use of sheaf theory (together with Jean-Pierre Serre), and later scheme theory, in the 1950s and 1960s. Bernard Dwork proved one of the four Weil conjectures (rationality of the local zeta function) in 1960. Grothendieck developed étale cohomology theory to prove two of the Weil conjectures (together with Michael Artin and Jean-Louis Verdier) by 1965. The last of the Weil conjectures (an analogue of the Riemann hypothesis) would be finally proven in 1974 by Pierre Deligne.
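The local zeta function at the heart of the Weil conjectures (standard definition) packages the point counts of a variety over finite fields into a single generating function:

```latex
% For X a variety over \mathbb{F}_q, let N_m = \# X(\mathbb{F}_{q^m}). Then
Z(X, t) = \exp\left( \sum_{m \ge 1} \frac{N_m}{m}\, t^m \right).
% The conjectures assert that Z(X, t) is a rational function of t,
% satisfies a functional equation, and (the Riemann-hypothesis analogue
% proved by Deligne in 1974) that its reciprocal zeros have absolute
% value q^{i/2} for the appropriate weights i.
```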
Mid-to-late 20th century: developments in modularity, p-adic methods, and beyond
Between 1956 and 1957, Yutaka Taniyama and Goro Shimura posed the Taniyama–Shimura conjecture (now known as the modularity theorem) relating elliptic curves to modular forms. This connection would ultimately lead to the first proof of Fermat's Last Theorem in number theory through algebraic geometry techniques of modularity lifting developed by Andrew Wiles in 1995.
In the 1960s, Goro Shimura introduced Shimura varieties as generalizations of modular curves. Since 1979, Shimura varieties have played a crucial role in the Langlands program as a natural realm of examples for testing conjectures.
In papers in 1977 and 1978, Barry Mazur proved the torsion conjecture giving a complete list of the possible torsion subgroups of elliptic curves over the rational numbers. Mazur's first proof of this theorem depended upon a complete analysis of the rational points on certain modular curves. In 1996, the proof of the torsion conjecture was extended to all number fields by Loïc Merel.
In 1983, Gerd Faltings proved the Mordell conjecture, demonstrating that a curve of genus greater than 1 has only finitely many rational points (where the Mordell–Weil theorem only demonstrates finite generation of the set of rational points as opposed to finiteness).
In 2001, the proof of the local Langlands conjectures for GLn was based on the geometry of certain Shimura varieties.
In the 2010s, Peter Scholze developed perfectoid spaces and new cohomology theories in arithmetic geometry over p-adic fields with application to Galois representations and certain cases of the weight-monodromy conjecture.
See also
Arithmetic dynamics
Arithmetic of abelian varieties
Birch and Swinnerton-Dyer conjecture
Moduli of algebraic curves
Siegel modular variety
Siegel's theorem on integral points
Category theory
Frobenioid
References | Arithmetic geometry | Mathematics | 1,025 |
25,520,575 | https://en.wikipedia.org/wiki/Xylomannan | Xylomannan is an antifreeze molecule, found in the freeze-tolerant Alaskan beetle Upis ceramboides. Unlike antifreeze proteins, xylomannan is not a protein. Instead, it is a combination of a sugar (saccharide) and a fatty acid that is found in cell membranes. As such is expected to work in a different manner than AFPs. It is believed to work by incorporating itself directly into the cell membrane and preventing the freezing of water molecules within the cell.
Xylomannan is also found in the red seaweed Nothogenia fastigiata (Scinaiaceae family). Fraction F6 of a sulphated xylomannan from Nothogenia fastigiata was found to inhibit replication of a variety of viruses, including Herpes simplex virus types 1 and 2 (HSV-1, HSV-2), Human cytomegalovirus (HCMV, HHV-5), Respiratory syncytial virus (RSV), Influenzavirus A, Influenzavirus B, Junin and Tacaribe virus, Simian immunodeficiency virus, and (weakly) Human immunodeficiency virus types 1 and 2.
References
Cryobiology | Xylomannan | Physics,Chemistry,Biology | 259 |
1,143,193 | https://en.wikipedia.org/wiki/Transition%20nuclear%20protein | Transition nuclear proteins (TNPs) are proteins that are involved in the packaging of sperm nuclear DNA during spermiogenesis. They take the place of histones associated with the sperm DNA, and are subsequently themselves replaced by protamines.
TNPs in humans include TNP1 and TNP2.
See also
Chromatin
Histone
Protamine
Sperm
Spermatogenesis
Spermiogenesis
References
Andrology
Reproductive system
Proteins | Transition nuclear protein | Chemistry,Biology | 86 |
48,551,445 | https://en.wikipedia.org/wiki/Our%20Health%20Partnership | Our Health Partnership is a large provider of primary care services in Birmingham and Shropshire established in November 2015.
Our Health Partnership is one of the UK’s biggest general practitioner partnerships. It brings together 47 surgeries in the Midlands and Shropshire. It is described as a super-partnership which will generate efficiencies of scale when compared with traditional general practice in the UK. It claims that the constituent practices will retain their own operational autonomy.
The partnership is proposing to lead primary care networks across the country including practices not in their organisation.
The Board
The board is made up of nine elected general practitioner partners – seven from Birmingham and two from Shropshire, an appointed practice manager and an operations director and finance director. It is also supported by an external strategic advisor. The board is responsible for the central functions, including accounting, human resources, finance and quality (including a care quality commission).
The chair is Dr Vish Ratnasuriya MBE, a general practice partner at Lordswood House Medical Practice in Harborne.
References
External links
Our Health Partnership
General practice organizations
Private providers of NHS services
Electronic health records
Health in Birmingham, West Midlands
Medical and health organisations based in England | Our Health Partnership | Technology | 234 |
77,524,208 | https://en.wikipedia.org/wiki/NGC%205988 | NGC 5988 is a large spiral galaxy in the constellation of Serpens. Its velocity with respect to the cosmic microwave background is 10697 ± 10km/s, which corresponds to a Hubble distance of . However, one non-redshift measurement gives a much larger distance of . It was discovered by American astronomer Lewis Swift on 17 April 1887.
NGC 5988 is a LINER galaxy, i.e. it has a type of nucleus that is defined by its spectral line emission which has weakly ionized or neutral atoms, while the spectral line emission from strongly ionized atoms is relatively weak.
One supernova has been observed in NGC 5988: SN 2023hbv (type II, mag 19.278) was discovered by ATLAS on 29 April 2023.
See also
List of NGC objects (5001–6000)
References
External links
5988
055921
09998
Serpens
18870417
Discoveries by Lewis Swift
+02-40-012
Spiral galaxies | NGC 5988 | Astronomy | 209 |
949,778 | https://en.wikipedia.org/wiki/List%20of%20architecture%20firms | The following is a list of architectural firms. It includes notable worldwide examples of architecture firms, companies, practices, partnerships, etc.
1–9
360 Architecture, United States
3LHD, Croatia
3XN, Denmark
1100 Architect, United States, Germany
5468796 Architecture, Canada
A
A69 Architects, Czech Republic
AART architects, Denmark
Adler & Sullivan, United States
Adrian Smith + Gordon Gill Architecture (AS+GG), United States
Aedas, United Kingdom, United States, Hong Kong
Allen Jack+Cottier, Australia
Allison & Allison, United States
Altius Architects, Canada
Archigram, United Kingdom
archimania, United States
Architecture Brio, India
Arkitektfirmaet C. F. Møller, Denmark
Armet Davis Newlove Architects, United States
Arquitectonica, United States
Ash Sakula Architects, United Kingdom
Ashton Raggatt McDougall, Australia
Asymptote, United States
Atelier 5, Switzerland
Atelier Bow-Wow, Japan
Auer+Weber+Assoziierte, Germany
Ayers Saint Gross, United States
B
Ballinger, United States
Barnett, Haynes & Barnett, United States
Bates Smart, Australia
Baumschlager-Eberle, Austria
BBPR, Italy
Behnisch Architekten, Germany
Bennetts Associates, United Kingdom
Benoy, United Kingdom
Benson & Forsyth, United Kingdom
Bjarke Ingels Group, Denmark
Bohlin Cywinski Jackson, United States
Boller Brothers, United States
Booty Edwards & Partners, Malaysia
Bora Architects, United States
Bregman + Hamann Architects, Canada
Brooks + Scarpa, United States
Building Design Partnership, United Kingdom
C
C Concept Design, Netherlands
Carrère and Hastings, United States
Chabanne et partenaires, France
Chapman and Oxley, Canada
Jack Allen Charney, Associates, United States
Claude and Starck, United States
Claus en Kaan Architecten, Netherlands
Cobe architects, Denmark
Concertus Design and Property Consultants, England
Consolidated Consultants CC, Jordan
COOKFOX, United States
Coop Himmelb(l)au, Austria
Cooper Carry, United States
Cooper, Robertson & Partners, United States
Corgan, United States
Costas Kondylis and Partners, LLP, United States
Cram and Ferguson, United States
D
DA Architects + Planners, Canada
Dar Al-Handasah, Beirut | Cairo | London | Pune
Davis Brody Bond, United States | Brazil
Deborah Berke & Partners Architects
Denton Corker Marshall, Australia
Diamond and Schmitt Architects, Canada
Dico si Tiganas, Romania
Diener & Diener, Switzerland
Diller and Scofidio, United States
Dissing + Weitling, Denmark
Dixon Jones, United Kingdom
Donaldson and Meier, United States
Dorte Mandrup Architects, Denmark
E
Ellerbe Becket, United States
F
F+A Architects, United States
Farrells, United Kingdom and Hong Kong
Fender Katsalidis Architects, Australia
Fiske & Meginnis, Nebraska, United States
Flad Architects, United States
FMA Architects, Nigeria, South Africa
Foster and Partners, United Kingdom
FRCH Design Worldwide, United States
Future Systems (1982-2009), United Kingdom
G
Gehl Architects, Denmark
Gensler, United States
Gerkan, Marg and Partners, Germany
Gillespie, Kidd & Coia, Scotland
Glenn Howells, United Kingdom
GRAFT, United States
Graham, Anderson, Probst & White, United States
Greene and Greene, United States
Gregory Henriquez, Canada
Gregory Phillips Architects, United Kingdom
Grimshaw, United Kingdom
Guida Moseley Brown Architects, Australia
Gwathmey Siegel, United States
H
Handel Architects, United States
Hampshire County Architects, United Kingdom
Harley Ellis Devereaux, United States
Hassell, Australia
Haworth Tompkins, United Kingdom
Hazen and Robinson, Nebraska, United States
HDR, Inc., Nebraska, United States
H. E. and A. Bown. United Kingdom
Heikkinen – Komonen Architects, Finland
Henning Larsen Architects, Denmark
Herzog & de Meuron, Switzerland
HKS, Inc., United States
HNTB Corporation, United States
Hodgetts + Fung, United States
Hoffmann Architects, United States
HOK, North America, Europe, Asia-Pacific, India, Middle East
Holabird & Roche/Holabird & Root, United States
Hollmén Reuter Sandman, Finland
Hudson Architects, United Kingdom
I
iArc, South Korea
IBI Group, Canada
Ingenhoven Architects, Germany
Integrated Design Associates (IDA), Hong Kong
J
Jaeger Kahlen Partner, Germany, Italy, China
Jestico + Whiles, United Kingdom
JLG Architects, United States
John Robertson Architects, London, United Kingdom
Johnston Marklee & Associates, United States
Johnsen Schmaling Architects, United States
K
Sunita Kohli, (K2India - Kohelika Kohli Architects and Designers Pvt Ltd), India
Karen Bausman + Associates, United States
Kemp, Bunch & Jackson (KBJ), United States
Kimmel Eshkolot Architects, Israel
Kirksey, United States
Kohn Pedersen Fox (KPF), United States
Koning Eizenberg Architecture, Inc. (KEA), United States
L
Leo A Daly, United States
Lifschutz Davidson Sandilands, Great Britain
Line and Space, United States
Link Arkitektur, Norway
LMN Architects, United States
Longfellow, Alden & Harlow, United States
Loyn & Co, Wales, United Kingdom
Lyons, Australia
M
MacGabhann Architects, Ireland
Mackenzie Wheeler Architects and Designers, United Kingdom
Marshall and Fox, United States
Mathews & Associates Architects, South Africa
MBH Architects, United States
McKim, Mead & White, United States
Mecanoo, Netherlands
Michael Green Architecture, Canada
Miller and Pflueger (1923–37), United States
Miller / Hull, United States
Mithun, United States
Moriyama & Teshima, Canada
Morphogenesis, India
Morphosis, United States
muf architecture/art, United Kingdom
Muhlenberg Greene Architects, United States
MulvannyG2 Architecture, United States
MVRDV, Netherlands
N
NBBJ, United States
Neutelings Riedijk Architects, Netherlands
Norman and Dawbarn, UK
O
Office for Metropolitan Architecture (OMA), Netherlands
O'Donnell & Tuomey, Ireland
Omrania and Associates, Saudi Arabia
P
Pascall+Watson, United Kingdom
Pearson and Darling, Canada
Pei Cobb Freed & Partners, United States
Percy Thomas Partnership (c.1912-2004), United Kingdom
Perkins and Will, United States
Perkins Eastman, United States
Peter Chermayeff LLC, United States
Peter Tolkin Architecture, United States
PLH Architects, Denmark
PLP Architecture, United Kingdom
Populous, United States
Pugin & Pugin (c.1851-c.1928), United Kingdom
R
R.E. Chisholm Architects, United States
RAMSA, United States
Rapp & Rapp, United States
Renzo Piano, Italy/France
Reynolds, Smith & Hills (RS&H), United States
Rex Architecture P.C., United States
RHWL, United Kingdom
Rogers Stirk Harbour + Partners, United Kingdom
Ricardo Bofill Taller de Arquitectura, Spain
RMJM, United Kingdom
S
SAMOO Architects & Engineers, Republic of Korea
SANAA, Japan
Sauerbruch Hutton, Germany
Schmidt hammer lassen, Denmark
Schultze and Weaver, United States
Shepley, Rutan and Coolidge, United States
Shilpa Architects, India & United States
SHoP Architects, United States
Shore Tilbe Irwin + Partners, Canada
Skidmore, Owings and Merrill (SOM), United States
Smith Hinchman & Grylls, United States
Snøhetta, Norway
SOMA, United States
Studio Gang Architects, United States
Synthesis Design + Architecture, United States
Sundukovy Sisters Design Studio, Russia
T
Tate Snyder Kimsey Architects, United States
terrain:loenhart&mayr, Germany
Tod Williams Billie Tsien Architects
Troppo Architects, Australia
Trost & Trost, United States
Terry Farrell, United Kingdom
T Sakhi, Lebanon
U
UNStudio, Netherlands
Urban Design Group, United States
Ushida Findlay Architects, Japan/UK
V
Van Der Merwe Miszewski Architects, South Africa
Vandkunsten, Denmark
Voorhees, Gmelin and Walker, United States
W
Walker & Weeks, United States
Weiss/Manfredi, United States
West 8, Netherlands
White, Sweden, Denmark, Norway, United Kingdom
WilkinsonEyre, United Kingdom
Wittehaus, United States
WOHA, Singapore
Wood Marsh, Australia
Woods Bagot, Australia
Woollen, Molzan and Partners, United States
Warren & Mahoney, New Zealand
WZMH Architects, Canada
Y
York and Sawyer, United States
Yamasaki & Associates, United States
Z
Zaha Hadid Architects, United Kingdom
See also
List of British architecture firms
References
White, Norval; Willensky, Elliot (2000). AIA Guide to New York City. New York City: Three Rivers Press.
Firms
Architecture firms | List of architecture firms | Engineering | 1,877 |
40,630,369 | https://en.wikipedia.org/wiki/Tylopilus%20intermedius | Tylopilus intermedius, commonly known as the bitter parchment bolete, is a bolete fungus in the family Boletaceae native to the eastern United States.
Taxonomy
The bolete was first officially described in Alexander H. Smith and Harry D. Thiers' 1971 monograph of boletes in the Michigan area. The specific epithet intermedius refers to its intermediate appearance between Tylopilus peralbidus and T. rhoadsiae. It is commonly known as the "bitter parchment bolete".
Description
The fruit bodies have caps that are broadly convex to flat in maturity, reaching a diameter of wide. The cap margin is curved inward in young fruit bodies, and has a thin band of sterile (non-reproductive) tissue. The cap surface is uneven and often wrinkled. Initially whitish, it sometimes develops pinkish tones and brownish stains in age. The pores on the cap underside are initially white but become pinkish as the spores mature. The pores are roughly circular, measuring about 1 or 2 per millimeter; the tubes are deep. The club-shaped stipe measures long by thick. It is white or whitish like the cap, and also develops brownish stains in age. Reticulation (a mesh-like pattern) on the stipe is variable. The flesh is firm and white, and slowly stains brown where it has been cut; the staining reaction may take up to an hour or more to occur. It has no distinctive odor and a bitter taste that renders it inedible.
The spore print is pinkish brown. Spores are nearly oblong, smooth, hyaline (translucent) to pale brown, and measure 10–15 by 3–5 μm. The caps of young fruit bodies will stain pinkish when a drop of iron(II) sulfate (FeSO4) solution is applied.
Habitat and distribution
Fruit bodies of Tylopilus intermedius grow on the ground scattered or in groups under deciduous trees, especially oak. Found in eastern North America, the true distribution limits of the bolete are unknown, but it has been collected in the United States from New England south to North Carolina, and west to Michigan.
See also
List of North American boletes
References
External links
intermedius
Fungi described in 1971
Fungi of the United States
Inedible fungi
Fungi without expected TNC conservation status
Fungus species | Tylopilus intermedius | Biology | 484 |
2,407,841 | https://en.wikipedia.org/wiki/Euglenophyceae | Euglenophyceae (ICBN) or Euglenea (ICZN) is a group of single-celled algae belonging to the phylum Euglenozoa. They have chloroplasts originated from an event of secondary endosymbiosis with a green alga. They are distinguished from other algae by the presence of paramylon as a storage product and three membranes surrounding each chloroplast.
Description
Euglenophyceae are unicellular algae, protists that contain chloroplasts. Their chloroplasts originated from a secondary endosymbiosis with a green alga, particularly from the order Pyramimonadales, and contain chlorophylls a and b. Some have secondarily lost this ability and evolved toward osmotrophy. In addition to photosynthetic plastids, most species have a photosensitive eyespot.
Ecology
Euglenophyceae are mainly present in the water column of freshwater habitats. They are abundant in small eutrophic water bodies of temperate climates, where they are capable of forming blooms, including toxic blooms such as those caused by Euglena sanguinea. In tropical climates, blooms are common in ponds. They are reported less frequently in marine environments. Some species are capable of migrating vertically through the sand along with the cycles of ocean tides. Two lineages of Euglenophyceae are part of the marine plankton: Rapazida and Eutreptiales. Eutreptiales can account for up to 46% of the total phytoplankton biomass when blooming in eutrophic coastal waters.
Classification
Euglenophyceae encompasses three taxonomic groups: the mixotrophic Rapaza viridis and two mainly phototrophic orders, Euglenales and Eutreptiales. The classification is as follows (species numbers based on AlgaeBase):
Order Euglenales
Family Euglenaceae [Euglenidae]
Colacium – 17 spp.
Cryptoglena – 11 spp.
Euglena – 174 spp.
Euglenaformis – 3 spp.
Euglenaria – 4 spp.
Monomorphina – 17 spp.
Strombomonas – 99 spp.
Trachelomonas – 410 spp.
Family Phacaceae [Phacidae]
Discoplastis – 6 spp.
Flexiglena – 1 sp.
Lepocinclis – 90 spp.
Phacus – 188 spp.
Order Eutreptiales
Family Eutreptiaceae [Eutreptiidae]
Eutreptia – 11 spp.
Eutreptiella – 9 spp.
Order Rapazida
Family Rapazidae
Rapaza – 1 sp.
Several genera assigned to Euglenophyceae are considered incertae sedis, because the lack of genetic data makes their phylogenetic position unresolved:
Ascoglena – 4 spp.
Euglenamorpha – 2 spp.
Euglenopsis – 11 spp.
Glenoclosterium – 1 sp.
Hegneria – 1 sp.
Klebsina – 1 sp.
Euglenocapsa – 1 sp.
Menoidium – 28 spp.
Parmidium – 10 spp.
References
Excavata classes
Algae
Taxa described in 1925
Euglenozoa | Euglenophyceae | Biology | 693 |
2,540,429 | https://en.wikipedia.org/wiki/List%20of%20terms%20used%20in%20bird%20topography | The following is a list of terms used in bird topography:
Plumage features
Back
Belly
Breast
Cheek
Chin
Crest
Crown
Crown patch
Ear-coverts
Eye-ring
Eyestripe (or eye line)
Feather, see category: :Category:Feathers
Flanks
Forecrown
Gorget
Hood (or half-hood)
Lateral throat stripe
Lores
Malar
Mantle
Mask
Moustachial stripe
Nape
Nuchal collar
Operculum (on pigeons).
Pennaceous feathers
Postocular stripe
Remiges
Rump
Spectacles
Submoustachial stripe
Supercilium
Supraloral
Parts of the tail include:
Rectrices
Tail corner
Terminal band
Subterminal band
Throat
Undertail coverts
Upper mandible (or maxilla)
Uppertail coverts
Vent, crissum or cloaca
Vent band
Parts of the wings include:
Alula
Apical spot
Axillar
Bend of wing
Carpal covert
Emargination
Greater coverts
Leading edge of wing
Lesser coverts
Marginal coverts
Median coverts
Mirror (on gulls)
Primaries
Primary projection
Primary numbers (e.g. 1, 2, 3, etc.)
Scapulars
Scapular crescent (on gulls)
Secondaries
Speculum
Tertials
Tertial step (on gulls)
Trailing edge of wing
Upper scapulars
Wing bar
Wing coverts
Wing edging
Wing linings
Wing tip or point (denoted by the number of the longest primary, counted from the carpal joint)
Bare-parts features
Beak or bill
Cere
Culmen
Gape
Gonys
Gonydeal angle
Gonydeal spot
Nail (of beak)
Nares
Rhamphotheca
Gnathotheca
Rhinotheca
Tomia
Brooding patch
Caruncle (bird anatomy)
Comb, or Coxcomb
Orbital skin, or orbital ring
Tarsus
Tibia
Wattle
See also
Glossary of bird terms
References | List of terms used in bird topography | Biology | 378 |
18,692,111 | https://en.wikipedia.org/wiki/Chlorococcum%20amblystomatis | Chlorococcum amblystomatis, (previously Oophila amblystomatis), is a species of single-celled green algae known for its symbiotic relationship with the spotted salamander, Ambystoma maculatum. It grows symbiotically inside salamander eggs, primarily in the eggs of the spotted salamander, Ambystoma maculatum. It has also been reported in other salamander species, such as the Japanese black salamander, Hynobius nigrescens, which is endemic to Japan.
Taxonomy and etymology
C. amblystomatis was originally named in the genus Oophila, in which it was the only species.
Growth
C. amblystomatis cells invade and grow inside salamander egg capsules. Once inside, it metabolizes the carbon dioxide produced by the embryo and provides it with oxygen and sugar as a result of photosynthesis. This is an example of endosymbiosis. The relationship between some salamanders and some species of green algae, including C. amblystomatis, is the only known example of an intracellular endosymbiont in vertebrates. This symbiosis between C. amblystomatis and the salamander may exist beyond the oocyte and early embryonic stage. Chlorophyll autofluorescence observation and ribosomal DNA analysis suggest that this algal species has invaded embryonic salamander tissues and cells during development and may even be transmitted to the next generation.
Free-living C. amblystomatis have been reported growing in freshwater woodland ponds. They grow best at a water depth of with the water temperature being and an air temperature of . Their optimal pH tolerance ranges from 6.26 to 6.46. Cells are motile via a flagellum. C. amblystomatis can reproduce sexually and asexually. The 16S rRNA has been partially sequenced, as has the 18S rRNA for the plastid, but whole-genome sequencing has not been done.
See also
Chlorogonium
References
Further reading
External links
Green Eggs and Jam: Adaptations That Help Spotted Salamanders Reproduce at Henderson State University.
Ambystoma maculatum, the spotted salamander, at AmphibiaWeb.
Image of salamander egg with algae at North Carolina Museum of Natural Sciences.
Symbiosis
Flora of Northern America
Chlorococcaceae
Plants described in 1909
Chlorophyta species | Chlorococcum amblystomatis | Biology | 536 |
8,746,431 | https://en.wikipedia.org/wiki/Lesser%20palatine%20nerve | The lesser palatine nerves (posterior palatine nerve) are branches of the maxillary nerve (CN V2). They descends through the greater palatine canal alongside the greater palatine nerve, and emerge (separately) through the lesser palatine foramen to pass posteriorward. They supply the soft palate, tonsil, and uvula.
See also
Greater palatine nerve
References
External links
()
Trigeminal nerve
Otorhinolaryngology
Nervous system | Lesser palatine nerve | Biology | 91 |
21,333,258 | https://en.wikipedia.org/wiki/Bogoliubov%E2%80%93Parasyuk%20theorem | The Bogoliubov–Parasyuk theorem in quantum field theory states that renormalized Green's functions and matrix elements of the scattering matrix (S-matrix) are free of ultraviolet divergencies. Green's functions and scattering matrix are the fundamental objects in quantum field theory which determine basic physically measurable quantities. Formal expressions for Green's functions and S-matrix in any physical quantum field theory contain divergent integrals (i.e., integrals which take infinite values) and therefore formally these expressions are meaningless. The renormalization procedure is a specific procedure to make these divergent integrals finite and obtain (and predict) finite values for physically measurable quantities. The Bogoliubov–Parasyuk theorem states that for a wide class of quantum field theories, called renormalizable field theories, these divergent integrals can be made finite in a regular way using a finite (and small) set of certain elementary subtractions of divergencies.
The theorem guarantees that Green's functions and matrix elements of the scattering matrix, computed within the perturbation expansion, are finite for any renormalized quantum field theory. The theorem specifies a concrete procedure (the Bogoliubov–Parasyuk R-operation) for subtracting divergences in any order of perturbation theory, establishes the correctness of this procedure, and guarantees the uniqueness of the obtained results.
The theorem was proved by Nikolay Bogoliubov and Ostap Parasyuk in 1955. The proof of the Bogoliubov–Parasyuk theorem was simplified later.
See also
Renormalization
Krylov-Bogolyubov theorem on the existence of invariant measures in dynamics.
References
O. I. Zav'yalov (1994). "Bogolyubov's R-operation and the Bogolyubov–Parasyuk theorem", Russian Math. Surveys, 49(5): 67—76 (in English).
D. V. Shirkov (1994): "The Bogoliubov renormalization group", Russian Math. Surveys 49(5): 155—176.
Quantum field theory
Theorems in quantum mechanics | Bogoliubov–Parasyuk theorem | Physics,Mathematics | 453 |
23,759,231 | https://en.wikipedia.org/wiki/Digestive%20system%20of%20gastropods | The digestive system of gastropods has evolved to suit almost every kind of diet and feeding behavior. Gastropods (snails and slugs) as the largest taxonomic class of the mollusca are very diverse: the group includes carnivores, herbivores, scavengers, filter feeders, and even parasites.
In particular, the radula is often highly adapted to the specific diet of the various group of gastropods. Another distinctive feature of the digestive tract is that, along with the rest of the visceral mass, it has undergone torsion, twisting around through 180 degrees during the larval stage, so that the anus of the animal is located above its head.
A number of species have developed special adaptations to feeding, such as the "drill" of some limpets, or the harpoon of the neogastropod genus Conus. Filter feeders use the gills, mantle lining, or nets of mucus to trap their prey, which they then pull into the mouth with the radula. The highly modified parasitic genus Enteroxenos has no digestive tract at all, and simply absorbs the blood of its host through the body wall.
The digestive system usually has the following parts:
buccal mass (including the mouth, pharynx, and retractor muscles of the pharynx) and salivary glands with salivary ducts
oesophagus and oesophagal crop
stomach, also known as the gastric pouch
digestive gland, also known as the hepatopancreas
intestine
rectum and anus
Buccal mass
The buccal mass is the first part of the digestive system, and consists of the mouth and pharynx. The mouth includes a radula, and in most cases, also a pair of jaws. The pharynx can be very large, especially in carnivorous species.
Many carnivorous species have developed a proboscis, containing the oral cavity, radula, and part of the oesophagus. At rest, the proboscis is enclosed within a sac-like sheath, with an opening at the front of the animal that resembles a true mouth. When the animal feeds, it pumps blood into the proboscis, inflating it and pushing it out through the opening to grasp the gastropod's prey. A set of retractor muscles help pull the proboscis back inside the sheath once feeding is completed.
Radula
The radula is a chitinous ribbon used for scraping or cutting food.
Jaw
Several herbivorous species, as well as carnivores that prey on sessile animals, have also developed simple jaws, which help to hold the food steady while the radula works on it. The jaw is opposite to the radula and reinforces part of the foregut.
The more purely carnivorous the diet, the more the jaw is reduced.
There are often pieces of food in the gut corresponding to the shape of the jaw.
The jaw structure can be ribbed or smooth.
Some species have no jaw.
Salivary glands
Salivary glands play a primary role in the anatomical and physiological adaptations of the digestive system of predatory gastropods. Ducts from large salivary glands lead into the buccal cavity, and the oesophagus also supplies digestive enzymes that help to break down the food. Salivary secretions lubricate the food, and they also contain bioactive compounds.
Oesophagus
The mouth of gastropods opens into an oesophagus, which connects to the stomach. Because of torsion, the oesophagus usually passes around the stomach, and opens into its posterior portion, furthest from the mouth. In species that have undergone de-torsion, however, the oesophagus may open into the anterior of the stomach, which is therefore reversed from the usual gastropod arrangement.
In Tarebia granifera, the brood pouch is above the oesophagus.
All carnivorous gastropods have an extensible rostrum on the anterior part of the oesophagus.
Some basal gastropod clades have oesophageal gland.
Stomach
In most species, the stomach itself is a relatively simple sac, and is the main site of digestion. In many herbivores, however, the hind part of the oesophagus is enlarged to form a crop, which, in terrestrial pulmonates, may even replace the stomach entirely. In many aquatic herbivores, however, the stomach is adapted into a gizzard that helps to grind up the food. The gizzard may have a tough cuticle, or may be filled with abrasive sand grains.
In the most primitive gastropods, however, the stomach is a more complex structure. In these species, the hind part of the stomach, where the oesophagus enters, is chitinous, and includes a sorting region lined with cilia.
In all gastropods, the portion of the stomach furthest from the oesophagus, called the "style sac", is lined with cilia. These beat in a rotary motion, pulling the food forward in a steady stream from the mouth. Usually, the food is embedded in a string of mucus produced in the mouth, creating a coiled conical mass in the style sac. This action, rather than muscular peristalsis, is responsible for the movement of food through the gastropod digestive tract.
Two diverticular glands open into the stomach, and secrete enzymes that help to break down the food. In the more primitive species, these glands may also absorb the food particles directly and digest them intracellularly.
Hepatopancreas
The hepatopancreas is the largest organ in stylommatophoran gastropods. It produces enzymes, and absorbs and stores nutrients.
Intestine
The anterior portion of the stomach opens into a coiled intestine, which helps to resorb water from the food, producing faecal pellets. The anus opens above the head.
References
Further reading
External links
Photos of jaws
Gastropod anatomy
Digestive system | Digestive system of gastropods | Biology | 1,276 |
65,072,138 | https://en.wikipedia.org/wiki/Asus%20ZenFone%207 | The ZenFone 7 and ZenFone 7 Pro are Android-based smartphones manufactured, released and marketed by Asus. The phones were unveiled on 26 August 2020, and succeed the ZenFone 6.
Introduction
On 26 August 2020, Asus launched the ZenFone 7 series in a Mandarin online press conference from their Taiwan headquarters. The ZenFone 7 series consists of the ZenFone 7 and ZenFone 7 Pro, retaining the hallmark flip-up camera form factor of the ZenFone 6 with the addition of a 3x telephoto camera, Sony IMX686 main sensor, 8K video recording capabilities, improved actuation mechanism, and optical image stabilisation exclusive to the Pro model. The ZenFone 7 series features a 6.67-inch 90 Hz AMOLED display with 200 Hz touch sampling and a 5G-capable Snapdragon 865 system on a chip, with the higher-clocked Snapdragon 865 Plus on the Pro model. Other changes include the removal of the headphone jack, ZenUI 7, 30W fast charging, combined side-mounted fingerprint scanner–power button–smart key, UFS 3.1 storage, three-microphone array utilising Nokia’s OZO Audio processing, and a larger and heavier overall form factor. The ZenFone 7 and 7 Pro are priced starting at and , respectively. The ZenFone 7 series will not be available in North America because of a lack of 5G band support.
References
External links
Mobile phones introduced in 2020
Mobile phones with multiple rear cameras
Mobile phones with 8K video recording
Asus ZenFone
Discontinued flagship smartphones | Asus ZenFone 7 | Technology | 341 |
490,067 | https://en.wikipedia.org/wiki/Blinding%20%28cryptography%29 | In cryptography, blinding is a technique by which an agent can provide a service to (i.e., compute a function for) a client in an encoded form without knowing either the real input or the real output. Blinding techniques also have applications to preventing side-channel attacks on encryption devices.
More precisely, Alice has an input x and Oscar has a function f. Alice would like Oscar to compute y = f(x) for her without revealing either x or y to him. The reason for her wanting this might be that she doesn't know the function f or that she does not have the resources to compute it.
Alice "blinds" the message by encoding it into some other input E(x); the encoding E must be a bijection on the input space of f, ideally a random permutation. Oscar gives her f(E(x)), to which she applies a decoding D to obtain .
Not all functions allow for blind computation. At other times, blinding must be applied with care. An example of the latter is Rabin–Williams signatures. If blinding is applied to the formatted message but the random value does not honor Jacobi requirements on p and q, then it could lead to private key recovery. A demonstration of the recovery can be seen in discovered by Evgeny Sidorov.
A common application of blinding is in blind signatures. In a blind signature protocol, the signer digitally signs a message without being able to learn its content.
The one-time pad (OTP) is an application of blinding to the secure communication problem, by its very nature. Alice would like to send a message to Bob secretly, however all of their communication can be read by Oscar. Therefore, Alice sends the message after blinding it with a secret key or OTP that she shares with Bob. Bob reverses the blinding after receiving the message. In this example, the function f is the identity and E and D are both typically the XOR operation.
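A minimal sketch of this XOR blinding (the message and variable names are invented for illustration, not taken from any standard):

```python
import secrets

def xor_blind(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding key byte; E and D are the same operation."""
    assert len(key) == len(data)
    return bytes(m ^ k for m, k in zip(data, key))

message = b"meet at dawn"
key = secrets.token_bytes(len(message))  # one-time key shared by Alice and Bob

blinded = xor_blind(message, key)    # Alice sends this; Oscar learns nothing about the message
recovered = xor_blind(blinded, key)  # Bob unblinds: XOR is its own inverse
assert recovered == message
```

Because f is the identity here, blinding and unblinding make up the entire protocol, and the same function serves as both E and D.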
Blinding can also be used to prevent certain side-channel attacks on asymmetric encryption schemes. Side-channel attacks allow an adversary to recover information about the input to a cryptographic operation, by measuring something other than the algorithm's result, e.g., power consumption, computation time, or radio-frequency emanations by a device. Typically these attacks depend on the attacker knowing the characteristics of the algorithm, as well as (some) inputs. In this setting, blinding serves to alter the algorithm's input into some unpredictable state. Depending on the characteristics of the blinding function, this can prevent some or all leakage of useful information. Note that security depends also on the resistance of the blinding functions themselves to side-channel attacks.
For example, in RSA, blinding involves computing the blinding operation E(x) = xr^e mod N, where r is a random integer between 1 and N and relatively prime to N (i.e. gcd(r, N) = 1), x is the plaintext, e is the public RSA exponent and N is the RSA modulus. As usual, the decryption function f(z) = z^d mod N is applied, thus giving f(E(x)) = (xr^e)^d mod N = x^d r mod N. Finally it is unblinded using the function D(z) = zr^-1 mod N. Multiplying x^d r mod N by r^-1 yields x^d mod N, as desired. When decrypting in this manner, an adversary who is able to measure time taken by this operation would not be able to make use of this information (by applying timing attacks RSA is known to be vulnerable to) as she does not know the constant r and hence has no knowledge of the real input fed to the RSA primitives.
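The procedure can be sketched with deliberately tiny, insecure textbook parameters (p = 61, q = 53, e = 17 are illustrative values only, not a usable key):

```python
import secrets
from math import gcd

# Toy RSA key (textbook sizes; real moduli are ~2048 bits).
p, q = 61, 53
N = p * q                           # RSA modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (modular inverse, Python 3.8+)

def blinded_private_op(x: int) -> int:
    """Compute x^d mod N without ever feeding x directly to the exponentiation."""
    while True:
        r = secrets.randbelow(N - 1) + 1
        if gcd(r, N) == 1:              # r must be relatively prime to N
            break
    blinded = (x * pow(r, e, N)) % N    # E(x) = x * r^e mod N
    y = pow(blinded, d, N)              # (x * r^e)^d = x^d * r  (mod N)
    return (y * pow(r, -1, N)) % N      # unblind: multiply by r^-1

x = 42
assert blinded_private_op(x) == pow(x, d, N)  # same result as unblinded decryption
```

Each call draws a fresh random r, so the timing of the exponentiation is decorrelated from the attacker-visible input x.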
Examples
Blinding in GPG 1.x
References
External links
Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS and Other Systems
Breaking the Rabin-Williams digital signature system implementation in the Crypto++ library
Cryptography | Blinding (cryptography) | Mathematics,Engineering | 759 |
13,924,093 | https://en.wikipedia.org/wiki/Environmental%20epidemiology | Environmental epidemiology is a branch of epidemiology concerned with determining how environmental exposures impact human health. This field seeks to understand how various external risk factors may predispose to or protect against disease, illness, injury, developmental abnormalities, or death. These factors may be naturally occurring or may be introduced into environments where people live, work, and play.
Scope
The World Health Organization European Centre for Environment and Health (WHO-ECEH) claims that 1.4 million deaths per year in Europe alone are due to avoidable environmental exposures. Environmental exposures can be broadly categorized into those that are proximate (e.g., directly leading to a health condition), including chemicals, physical agents, and microbiological pathogens, and those that are distal (e.g., indirectly leading to a health condition), such as socioeconomic conditions, climate change, and other broad-scale environmental changes. Proximate exposures occur through air, food, water, and skin contact. Distal exposures cause adverse health conditions directly by altering proximate exposures, and indirectly through changes in ecosystems and other support systems for human health.
Environmental epidemiology research can inform government policy change, risk management activities, and development of environmental standards. Vulnerability is the summation of all risk and protective factors that ultimately determine whether an individual or subpopulation experiences adverse health outcomes when an exposure to an environmental agent occurs. Sensitivity is an individual's or subpopulation's increased responsiveness, primarily for biological reasons, to that exposure. Biological sensitivity may be related to developmental stage, pre-existing medical conditions, acquired factors, and genetic factors. Socioeconomic factors also play a critical role in altering vulnerability and sensitivity to environmentally mediated factors by increasing the likelihood of exposure to harmful agents, interacting with biological factors that mediate risk, and/or leading to differences in the ability to prepare for or cope with exposures or early phases of illness. Populations living in certain regions may be at increased risk due to location and the environmental characteristics of a region.
History
Acknowledgement that the environment impacts human health can be found as far back as 460 B.C. in Hippocrates' essay On Airs, Waters, and Places. In it, he urges physicians to contemplate how factors such as drinking water can impact the health of their patients. Another famous example of environment-health interaction is the lead poisoning experienced by the ancient Romans, who used lead in their water pipes and kitchen pottery. Vitruvius, a Roman architect, wrote to discourage the use of lead pipes, citing health concerns:
"Water conducted through earthen pipes is more wholesome than that through lead; indeed that conveyed in lead must be injurious, because from it white lead is obtained, and this is said to be injurious to the human system. Hence, if what is generated from it is pernicious, there can be no doubt that itself cannot be a wholesome body. This may be verified by observing the workers in lead, who are of a pallid colour; for in casting lead, the fumes from it fixing on the different members, and daily burning them, destroy the vigour of the blood; water should therefore on no account be conducted in leaden pipes if we are desirous that it should be wholesome. That the flavour of that conveyed in earthen pipes is better, is shewn at our daily meals, for all those whose tables are furnished with silver vessels, nevertheless use those made of earth, from the purity of the flavour being preserved in them"
Generally considered to be one of the founders of modern epidemiology, John Snow conducted perhaps the first environmental epidemiology study in 1854. He showed that London residents who drank sewage-contaminated water were more likely to develop cholera than those who drank clean water.
U.S. government regulation
Throughout the 20th century, the United States Government passed legislation and regulations to address environmental health concerns. A partial list is below.
Precautionary principle
The precautionary principle is a concept in the environmental sciences that if an activity is suspected to cause harm, we should not wait until sufficient evidence of that harm is collected to take action. It has its roots in German environmental policy, and was adopted in 1990 by the participants of the North-Sea Conferences in The Hague by declaration. In 2000, the European Union began to formally adopt the precautionary principle into its laws as a Communication from the European Commission. The United States has resisted adoption of this principle, citing concerns that unfounded science could lead to obligations for expensive control measures, especially as related to greenhouse gas emissions.
Investigations
Observational studies
Environmental epidemiology studies are most frequently observational in nature, meaning researchers look at people's exposures to environmental factors without intervening and then observe the patterns that emerge. This is due to the fact that it is often unethical or unfeasible to conduct an experimental study of environmental factors in humans. For example, a researcher cannot ask some of their study subjects to smoke cigarettes to see if they have poorer health outcomes than subjects who are asked not to smoke. The study types most often employed in environmental epidemiology are:
Cohort studies
Case-control studies
Cross-sectional studies
Estimating risk
Epidemiologic studies that assess how an environmental exposure and a health outcome may be connected use a variety of biostatistical approaches to attempt to quantify the relationship. Risk assessment tries to answer questions such as "How does an individual's risk for disease A change when they are exposed to substance B?," and "How many excess cases of disease A can we prevent if exposure to substance B is lowered by X amount?."
Some statistics and approaches used to estimate risk are:
Odds ratio
Relative risk
Hazards ratio
Regression modeling
Mortality rates
Attributable risk
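As an illustration (the 2×2 counts below are hypothetical, not from any cited study), the odds ratio and relative risk can be computed directly from an exposure–outcome table:

```python
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Odds ratio for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    return (a / b) / (c / d)

def relative_risk(a: int, b: int, c: int, d: int) -> float:
    """Risk of disease among the exposed divided by risk among the unexposed."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical cohort: 30 of 100 exposed subjects fell ill vs 10 of 100 unexposed.
print(round(odds_ratio(30, 70, 10, 90), 2))     # 3.86
print(round(relative_risk(30, 70, 10, 90), 2))  # 3.0
```

The two measures approximate each other only when the outcome is rare; with a common outcome, as in this example, the odds ratio overstates the relative risk.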
Ethics
Environmental epidemiology studies often identify associations between pollutants in the air, water, or food and adverse health outcomes; these findings can be inconvenient for polluting industries. Environmental epidemiologists are confronted with significant ethical challenges because of the involvement of powerful stakeholders who may try to influence the results or interpretation of their studies. Epidemiologic findings can sometimes have direct effects on industry profits. Because of these concerns, environmental epidemiology maintains guidelines for ethical practice. The International Society for Environmental Epidemiology (ISEE) first adopted ethics guidelines in the late 1990s. The guidelines are maintained by its Ethics and Philosophy Committee, one of the earliest, active, and enduring ethics committees in the field of epidemiology. Since its inception in 1991, the Committee has taken an active role in supporting ethical conduct and promulgating Ethics Guidelines for Environmental Epidemiologists. The most recent Ethics Guidelines were adopted in 2023.
Bradford Hill factors
To differentiate between correlation and causation, epidemiologists often consider a set of factors to determine the likelihood that an observed relationship between an environmental exposure and health consequence is truly causal. In 1965, Austin Bradford Hill devised a set of postulates to help him determine if there was sufficient evidence to conclude that cigarette smoking causes lung cancer.
The Bradford Hill criteria are:
Strength of association
Consistency of evidence
Specificity
Temporality
Biological gradient
Plausibility
Coherence
Experiment
Analogy
These factors are generally considered to be a guide to scientists, and it is not necessary that all of the factors be met for a consensus to be reached.
See also
Epidemiology
Envirome
Environmental health
Environmental science
Epigenetics
Exposome
Occupational epidemiology
Occupational safety and health
Pollution
Air pollution
References
Further reading
External links
Environmental Epidemiology journal
International Epidemiological Association
International Society for Environmental Epidemiology
Journal of Exposure Science and Environmental Epidemiology
Epidemiology journal
Environmental Health Perspectives (news and peer-reviewed research journal published by the National Institute of Environmental Health Sciences)
Environmental Health News current events in environmental health
Epidemiology
Environmental health
Ure2, or Ure2p, is a yeast protein encoded by a gene known as URE2 (systematic designation YNL229C). The Ure2 protein can also form a yeast prion known as [URE3]. When Ure2p is expressed at high levels in yeast, it will readily convert from its native protein conformation into an aggregate known as an amyloid. [URE3], along with [PSI+], were both determined by Wickner (1994) to meet the genetic definition of a yeast prion.
The gene prefix "URE" stands for ureidosuccinate transport, as the Ure2 protein in its native state is responsible for repressing nitrogen catabolism of glutamine by controlling the GLN3 transcription factor. Gln3p is retained in the cytoplasm by Ure2p when a preferred nitrogen source like ammonium sulfate is present in the growth media, but enters the nucleus when the cells are shifted to a nonpreferred source of nitrogen such as proline. Ure2 protein also plays a role in responding to oxidative stress.
Ure2p is a protein composed of 354 amino acids and has a molecular weight of 40,226. Its gene, URE2, has been mapped to chromosome XIV, 5 map units from KEX2.
References
External links
URE2 at the Saccharomyces Genome Database
Digestive system
Bacteriology
In modern architectural theory, tectonics is an artistic way to express the corporeality of a building through architectural forms that reflect the actual structure. An example of the use of tectonics and its opposite, atectonics, can be found at the AEG turbine factory: Peter Behrens, the architect, had applied tectonics by revealing the steel frame that supports the roof on the long side of the building, and used atectonics by constructing massive "Egyptian-like" walls in the corners that are not connected to the roof and thus conceal the actual load and support organization of the frontal facade.
This "poetics of construction" has multiple related meanings.
Tectonics is inseparable from the actual buildings and thus counteracts external influences of other visual arts on architecture.
History
The word "tectonic" comes from the Greek tektōn, "carpenter, builder", which eventually yielded "master builder" (now "architect"). The first application to modern architecture belongs to Karl Otfried Müller: in his Handbuch der Archäologie der Kunst (Handbook of the Archaeology of Art, 1830) he defined the art forms that combine art with utility (from utensils to dwellings) as tectonic, with architecture being the peak of these tectonic activities. Karl Bötticher, in his Die Tektonik der Hellenen (The Tectonics of the Hellenes, 1843–1852), suggested splitting the design into a structural "core-form" (Kernform) and a decorative "art-form" (Kunstform). The art-form was supposed to reflect the functionality of the core-form: for example, the rounding and tapering of a column should suggest its load-bearing function. The tectonic system was supposed to bind these multiple facets of a building (the Greek temple) into a unified whole (for example, through relief sculptures using structural elements as framing).
Atectonics
Atectonics is the inverse of tectonics: a situation where the artistic appearance of the architectural form is detached from its structure and construction. Eduard Sekler introduced the concept of atectonics as the arrangement where the interplay between load and support is "visually neglected or obscured". An architect can use both tectonics and atectonics simultaneously (cf. the AEG turbine factory example above). Even if the construction and structure are interdependent and exposed, as in the Crystal Palace, there is some space left for atectonics: while the columns in this building carried different loads, they all appeared to be of uniform width, with load variations accommodated through the thickness of their walls.
References
Sources
Architectural theory | Tectonics (architecture) | Engineering | 540 |
The molecular formula C7H13NO4 (molar mass: 175.18 g/mol, exact mass: 175.0845 u) may refer to:
Valienamine
EGLU
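The quoted molar mass can be reproduced from standard atomic weights; a quick sketch (the weights below are rounded reference values, not from this page):

```python
# Rounded standard atomic weights (g/mol)
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(counts):
    """Sum of atomic weights for a composition given as {element: count}."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in counts.items())

m = molar_mass({"C": 7, "H": 13, "N": 1, "O": 4})  # C7H13NO4
print(round(m, 2))  # 175.18
```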
Molecular formulas
The IBM 2741 is a printing computer terminal that was introduced in 1965. Compared to the teletypewriter machines that were commonly used as printing terminals at the time,
the 2741 offers 50% higher speed, much higher quality printing, quieter operation, interchangeable type fonts, and both upper and lower case letters.
It was used primarily with the IBM System/360 series of computers, but was used with other IBM and non-IBM systems where its combination of higher speed and letter-quality output was desirable. It was influential in the development and popularity of the APL programming language.
It was supplanted, starting in the mid-1970s,
primarily by printing terminals using daisy wheel mechanisms.
Design
The IBM 2741 combines a ruggedized Selectric typewriter mechanism with IBM SLT electronics and an RS-232-C serial interface.
It operates at about 14.1 characters per second with a data rate of 134.5 bits/second (one start bit, six data bits, an odd parity bit, and one and a half stop bits). In contrast to serial terminals employing ASCII code, the most significant data bit of each character is sent first.
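The quoted character rate follows directly from the framing: each character occupies 9.5 bit times on the line. A quick check:

```python
# 1 start + 6 data + 1 parity + 1.5 stop bits per character
bits_per_char = 1 + 6 + 1 + 1.5
line_rate = 134.5  # bits/second

chars_per_second = line_rate / bits_per_char
print(round(chars_per_second, 2))  # ~14.16, i.e. "about 14.1 characters per second"
```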
As with the standard office Selectrics of the day, there were 88 printing characters (not quite enough for the entire EBCDIC or ASCII printing character set including the lower case alphabet) plus space and a few nonprinting control codes,
more than can be represented with six data bits, so shift characters are used to allow the machine's entire character set to be used. This could cause a significant reduction in the print speed since printing "Armonk, New York, U.S." requires 10 shift characters resulting in a total of 32 characters transmitted to print 22 characters.
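The shift overhead can be modeled with a toy encoder. This is a simplified sketch, not the actual 2741 code table: it assumes (hypothetically) that only upper-case letters live in the upshifted case, whereas the real typeball layout also places many symbols there.

```python
UPSHIFT, DOWNSHIFT = "<U>", "<D>"  # stand-ins for the shift control codes

def encode(text):
    """Transmitted sequence for `text`, starting in lower-case shift."""
    out, upper = [], False
    for ch in text:
        need_upper = ch.isupper()
        if need_upper != upper:  # insert a shift code on every case change
            out.append(UPSHIFT if need_upper else DOWNSHIFT)
            upper = need_upper
        out.append(ch)
    return out

seq = encode("Ab")
print(seq)  # ['<U>', 'A', '<D>', 'b'] - 2 shift codes to print 2 characters
```

Under this model, mixed-case text pays one extra transmitted character per case change, which is how a 22-character phrase can grow to 32 characters on the line (the exact count depends on which symbols share a shift on the typeball).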
The machine was packaged into its own small desk, giving the appearance of square tabletop with a Selectric typewriter partly sunken into the surface, with the electronics in a vertically oriented chassis at the rear. This allowed a significant reduction in the noise it generated.
It supplanted the earlier IBM 1050, which was more expensive and cumbersome, in remote terminal applications.
The IBM 1050 and its variations were designed for a higher duty cycle
and so were frequently used as console devices for computers such as the IBM 1130 and IBM System/360.
By contrast, the 2741 was primarily focused on remote terminal applications.
Character codes
The IBM 2741 came in two different varieties, one using "correspondence coding" and the other using either "PTT/BCD coding" or "PTT/EBCD coding". These refer to the positioning of the characters around the typeball and, therefore, the tilt/rotate codes that have to be applied to the mechanism to produce a given character.
A "correspondence coding" machine can use type elements from a standard office Selectric (i.e. elements used for "office correspondence").
"PTT/BCD coding" and "PTT/EBCD coding" machines need special elements, and did not have as wide a variety of fonts available.
The IBM 1050 and its derivatives were only available in PTT/BCD coding.
The two element types are physically interchangeable, but code-incompatible,
so a type element from, say, a System/360 console printer (a variety of IBM 1050) produces gibberish on a "correspondence coding" 2741 or an office Selectric, and vice versa.
The two varieties of IBM 2741 use different character codes on the serial interface as well, so software in the host computer needed to have a way to distinguish which type of machine each user had. One way this was accomplished was by having the user type a unique character such as # , 9 or a standard command such as "login" immediately after connecting. The host software would recognize which code was used by the value of the characters it received.
Line protocol
The protocol is simple and symmetric. Each message begins with a control character called "circle D" in the documentation and ends with a "circle C". Each message was assumed to begin with the shift mode in lower case.
When the remote end is sending, the local keyboard is locked.
The "Receive Interrupt" feature allows the operator to interrupt the sending machine
and regain control by pressing a special "Attention" key (labeled ATTN).
This key causes the 2741 to send a continuous "spacing condition" for 200 or more milliseconds. This will be recognized by the receiving system as a framing error (a start bit that is not followed by a stop bit in the expected time). (The break key on ASCII terminals works the same way: continuous spacing is a "break condition" used to signal the remote end of an interruption.)
If the attention signal is honored, it causes the remote system to stop sending data, prepare to receive data from the 2741, and send a "circle C", meaning "end of message". Upon receipt of the "circle C" the local 2741 unlocks its keyboard and the operator can send another input to the system.
Protocol symmetry allows two people using 2741s to communicate with each other with no computer in between, but this was a rare configuration.
Applications
The 2741 was initially developed and marketed for use with the IBM Administrative Terminal System (ATS/360).
ATS is an interactive, multi-user text editing and storage system implemented in the mid-1960s using IBM System/360 assembly language.
The 2741's existence encouraged the development of other remote terminal systems for the IBM System/360,
particularly systems that could benefit from the high print quality, interchangeable typing elements, and other advantages of
its Selectric mechanism.
APL\360
The IBM 2741 became closely associated with the APL programming language.
As originally proposed by Dr. Kenneth Iverson, APL required a large variety of special characters.
IBM implemented it as a timesharing system on the IBM System/360, calling it APL\360. It required the use of an IBM 2741
or IBM 1050 with an APL typeball.
There were only 26 alphabet characters,
all displayed as upper case italic, even though they were typed with the machine in lower case mode. The "shifted" keystroke characters provided many of the special symbols with the remainder being handled by overstrike.
Keyboard layout for use with the APL typeball:
ALGOL 68
Similar to APL, ALGOL 68 was defined with a large number of special characters. Many of them
( ∨, ∧, ¬, ≠, ≤, ≥, ×, ÷, ⌷, ↑, ↓, ⌊, ⌈ and ⊥ ) were available on the APL Selectric typeball, so this element was
used to prepare the ALGOL 68 programming language standard Final Report (August 1968),
even though APL and ALGOL have no direct relationship.
Related machines
The IBM 2740 is a similar terminal that lacked the interrupt feature and dialup capability, but is capable of operating in point-to-point, multipoint or broadcast mode. For better use of multipoint lines, it could add a data buffer, letting the line run at 600 bit/s without being constrained by the speed of the typing mechanism.
Some later IBM Selectric-based machines, such as the Communicating Magnetic Card Selectric Typewriter, can emulate the 2741 and be used in its place.
IBM sold the underlying Selectric mechanism to other manufacturers, who produced 2741 clones at lower cost.
Some of these were integrated into larger systems instead of being sold as standalone terminals.
For example, a 2741-type mechanism formed the principal user interface for a series of machines from the 1960s and 1970s built in the United Kingdom by Business Computers Ltd.
Decline
The 2741 and similar Selectric-based machines were supplanted by ASCII terminals using the Xerox Diablo 630 "daisy wheel" and similar print mechanisms where hard copy was required.
These offered equivalent print quality, better reliability, twice the speed (30 char/s), and lower cost than the 2741.
They could use a variety of fonts (including APL) via interchangeable print wheels
and, unlike the 2741, supported the entire ASCII printing character set. When hard copy wasn't needed, video terminals often replaced them.
The IBM 3767 terminal, which used a dot-matrix printer capable of 80 or 120 char/s, was an alternative replacement.
Character sets
Function codes
The function codes were independent of the character set used and the shift state.
Circle-D used a code assigned to a printing, non-function character – 8 2 1 (EBCD '#'). It was identified as a control code based on its position as the first character in a transmission.
PTTC/EBCD code
See also
IBM Selectric typewriter
References
External links
IBM 2741 Communications Terminal manual
A picture and some information about IBM 2741
New York Courts history mentioning IBM 2741
Information on terminals including the IBM 2741
IBM 2741 in use at Queen's University
IBM 2741 mechanism as the console typewriter for BCL machines
2741
1965 introductions
History of human–computer interaction
The Zambia Wildlife Authority (ZAWA) was an autonomous agency of the Zambian Government established to manage and conserve Zambia's wildlife estate comprising 20 National Parks, 36 Game Management Areas and one bird sanctuary, which cover 31 percent of the country's land mass.
It was established in 1999 under the Zambia Wildlife Act replacing the former Department of National Parks and Wildlife Service.
In 2015 it was announced that ZAWA would be abolished with its functions returning to the Ministry of Tourism and Arts.
In 2016 ZAWA was dissolved and its responsibilities passed to the newly formed Department of National Parks and Wildlife a department of the Ministry of Tourism and Arts.
Department of National Parks and Wildlife
The Department of National Parks and Wildlife was established in 2015 to protect and conserve Zambia's wildlife and improve the quality of life among communities in the wildlife estates. The Department also aims to sustain biodiversity in national parks and game management areas.
See also
List of protected areas in Zambia
The Zambia Wildlife Act 2015
References
Government agencies of Zambia
Wildlife conservation
In organic chemistry, an acyl chloride (or acid chloride) is an organic compound with the functional group −C(=O)Cl. Their formula is usually written RCOCl, where R is a side chain. They are reactive derivatives of carboxylic acids (RCOOH). A specific example of an acyl chloride is acetyl chloride, CH3COCl. Acyl chlorides are the most important subset of acyl halides.
Nomenclature
Where the acyl chloride moiety takes priority, acyl chlorides are named by taking the name of the parent carboxylic acid, and substituting -yl chloride for -ic acid. Thus:
butyric acid (C3H7COOH) → butyryl chloride (C3H7COCl)
(Idiosyncratically, for some trivial names, -oyl chloride substitutes -ic acid. For example, pivalic acid becomes pivaloyl chloride and acrylic acid becomes acryloyl chloride. The names pivalyl chloride and acrylyl chloride are less commonly used, although they are arguably more logical.)
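The regular part of this rule is mechanical enough to sketch in code; the irregular trivial names ("-oyl chloride", e.g. pivaloyl) are deliberately left out:

```python
def acid_to_chloride(name):
    """Name the acyl chloride of an '-ic acid' (regular cases only)."""
    if name.endswith("ic acid"):
        return name[: -len("ic acid")] + "yl chloride"
    raise ValueError(f"not an '-ic acid' name: {name!r}")

print(acid_to_chloride("butyric acid"))  # butyryl chloride
```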
When other functional groups take priority, acyl chlorides are considered prefixes — chlorocarbonyl-:
Properties
Lacking the ability to form hydrogen bonds, acyl chlorides have lower boiling and melting points than similar carboxylic acids. For example, acetic acid boils at 118 °C, whereas acetyl chloride boils at 51 °C. Like most carbonyl compounds, infrared spectroscopy reveals a band near 1750 cm−1.
The simplest stable acyl chloride is acetyl chloride; formyl chloride is not stable at room temperature, although it can be prepared at –60 °C or below.
Acyl chlorides hydrolyze (react with water) to form the corresponding carboxylic acid and hydrochloric acid:
RCOCl + H2O -> RCOOH + HCl
Synthesis
Industrial routes
The industrial route to acetyl chloride involves the reaction of acetic anhydride with hydrogen chloride:
(CH3CO)2O + HCl -> CH3COCl + CH3CO2H
Propionyl chloride is produced by chlorination of propionic acid with phosgene:
CH3CH2CO2H + COCl2 -> CH3CH2COCl + HCl + CO2
Benzoyl chloride is produced by the partial hydrolysis of benzotrichloride:
C6H5CCl3 + H2O -> C6H5C(O)Cl + 2 HCl
Similarly, benzotrichlorides react with carboxylic acids to the acid chloride. This conversion is practiced for the reaction of 1,4-bis(trichloromethyl)benzene to give terephthaloyl chloride:
C6H4(CCl3)2 + C6H4(CO2H)2 -> 2 C6H4(COCl)2 + 2 HCl
Laboratory methods
Thionyl chloride
In the laboratory, acyl chlorides are generally prepared by treating carboxylic acids with thionyl chloride (). The reaction is catalyzed by dimethylformamide and other additives.
Thionyl chloride is a well-suited reagent as the by-products (HCl, ) are gases and residual thionyl chloride can be easily removed as a result of its low boiling point (76 °C).
Phosphorus chlorides
Phosphorus trichloride () is popular, although excess reagent is required. Phosphorus pentachloride () is also effective, but only one chloride is transferred:
RCO2H + PCl5 -> RCOCl + POCl3 + HCl
Oxalyl chloride
Another method involves the use of oxalyl chloride:
RCO2H + ClCOCOCl ->[DMF] RCOCl + CO + CO2 + HCl
The reaction is catalysed by dimethylformamide (DMF), which reacts with oxalyl chloride to give the Vilsmeier reagent, an iminium intermediate that reacts with the carboxylic acid to form a mixed imino-anhydride. This structure undergoes acyl substitution by the liberated chloride, forming the acyl chloride and regenerating the DMF. Relative to thionyl chloride, oxalyl chloride is more expensive but also a milder reagent and therefore more selective.
Other laboratory methods
Acid chlorides can be used as a chloride source. Thus acetyl chloride can be distilled from a mixture of benzoyl chloride and acetic acid:
CH3CO2H + C6H5COCl -> CH3COCl + C6H5CO2H
Other methods that do not form HCl include the Appel reaction:
RCO2H + Ph3P + CCl4 -> RCOCl + Ph3PO + HCCl3
Another is the use of cyanuric chloride:
RCO2H + C3N3Cl3 -> RCOCl + C3N3Cl2OH
Reactions
Acyl chlorides are reactive, versatile reagents. They are more reactive than other carboxylic acid derivatives such as acid anhydrides, esters, or amides:
Acyl chlorides hydrolyze, yielding the carboxylic acid:
This hydrolysis is usually a nuisance rather than intentional.
Alcoholysis, aminolysis, and related reactions
Acid chlorides are useful for the preparation of acid anhydrides, amides and esters; these reactions generate chloride, which can be undesirable. Acyl chlorides are converted to these derivatives by reaction with a salt of a carboxylic acid, an amine, or an alcohol, respectively.
Acid halides are the most reactive acyl derivatives, and can easily be converted into any of the others. Acid halides will react with carboxylic acids to form anhydrides. If the structure of the acid and the acid chloride are different, the product is a mixed anhydride. First, the carboxylic acid attacks the acid chloride (1) to give tetrahedral intermediate 2. The tetrahedral intermediate collapses, ejecting chloride ion as the leaving group and forming oxonium species 3. Deprotonation gives the mixed anhydride, 4, and an equivalent of HCl.
Alcohols and amines react with acid halides to produce esters and amides, respectively, in a reaction formally known as the Schotten-Baumann reaction. Acid halides hydrolyze in the presence of water to produce carboxylic acids, but this type of reaction is rarely useful, since carboxylic acids are typically used to synthesize acid halides. Most reactions with acid halides are carried out in the presence of a non-nucleophilic base, such as pyridine, to neutralize the hydrohalic acid that is formed as a byproduct.
Mechanism
The alcoholysis of acyl halides (the alkoxy-dehalogenation) is believed to proceed via an SN2 mechanism (Scheme 10). However, the mechanism can also be tetrahedral or SN1 in highly polar solvents (while the SN2 reaction involves a concerted reaction, the tetrahedral addition-elimination pathway involves a discernible intermediate).
Bases, such as pyridine or N,N-dimethylformamide, catalyze acylations. These reagents activate the acyl chloride via a nucleophilic catalysis mechanism. The amine attacks the carbonyl bond and presumably first forms a transient tetrahedral intermediate, then forms a quaternary acylammonium salt by the displacement of the leaving group. This quaternary acylammonium salt is more susceptible to attack by alcohols or other nucleophiles.
The use of two phases (aqueous for amine, organic for acyl chloride) is called the Schotten-Baumann reaction. This approach is used in the preparation of nylon via the so-called nylon rope trick.
Reactions with carbanions
Acid halides react with carbon nucleophiles, such as Grignards and enolates, although mixtures of products can result. While a carbon nucleophile will react with the acid halide first to produce a ketone, the ketone is also susceptible to nucleophilic attack, and can be converted to a tertiary alcohol. For example, when benzoyl chloride (1) is treated with two equivalents of a Grignard reagent, such as methyl magnesium bromide (MeMgBr), 2-phenyl-2-propanol (3) is obtained in excellent yield. Although acetophenone (2) is an intermediate in this reaction, it is impossible to isolate because it reacts with a second equivalent of MeMgBr rapidly after being formed.
Unlike most other carbon nucleophiles, lithium dialkylcuprates – often called Gilman reagents – can add to acid halides just once to give ketones. The reaction between an acid halide and a Gilman reagent is not a nucleophilic acyl substitution reaction, however, and is thought to proceed via a radical pathway. The Weinreb ketone synthesis can also be used to convert acid halides to ketones. In this reaction, the acid halide is first converted to an N–methoxy–N–methylamide, known as a Weinreb amide. When a carbon nucleophile – such as a Grignard or organolithium reagent – adds to a Weinreb amide, the metal is chelated by the carbonyl and N–methoxy oxygens, preventing further nucleophilic additions.
Carbon nucleophiles such as Grignard reagents, convert acyl chlorides to ketones, which in turn are susceptible to the attack by second equivalent to yield the tertiary alcohol. The reaction of acyl halides with certain organocadmium reagents stops at the ketone stage. The reaction with Gilman reagents also afford ketones, reflecting the low nucleophilicity of these lithium diorganocopper compounds.
Reduction
Acyl chlorides are reduced by lithium aluminium hydride and diisobutylaluminium hydride to give primary alcohols. Lithium tri-tert-butoxyaluminium hydride, a bulky hydride donor, reduces acyl chlorides to aldehydes, as does the Rosenmund reduction using hydrogen gas over a poisoned palladium catalyst.
Acylation of arenes
In the Friedel–Crafts acylation, acid halides act as electrophiles for electrophilic aromatic substitution. A Lewis acid – such as zinc chloride (ZnCl2), iron(III) chloride (FeCl3), or aluminum chloride (AlCl3) – coordinates to the halogen on the acid halide, activating the compound towards nucleophilic attack by an activated aromatic ring. For especially electron-rich aromatic rings, the reaction will proceed without a Lewis acid.
Because of the harsh conditions and the reactivity of the intermediates, this otherwise quite useful reaction tends to be messy, as well as environmentally unfriendly.
Oxidative addition
Acyl chlorides react with low-valent metal centers to give transition metal acyl complexes. Illustrative is the oxidative addition of acetyl chloride to Vaska's complex, converting square planar Ir(I) to octahedral Ir(III):
IrCl(CO)(PPh3)2 + CH3COCl -> CH3COIrCl2(CO)(PPh3)2
Hazards
Low molecular weight acyl chlorides are often lachrymators, and they react violently with water, alcohols, and amines.
References
Functional groups
Isoelectric focusing (IEF), also known as electrofocusing, is a technique for separating different molecules by differences in their isoelectric point (pI). It is a type of zone electrophoresis usually performed on proteins in a gel that takes advantage of the fact that overall charge on the molecule of interest is a function of the pH of its surroundings.
Procedure
IEF involves adding an ampholyte solution into immobilized pH gradient (IPG) gels. IPGs are acrylamide gel matrices co-polymerized with the pH gradient, which results in completely stable gradients except at the most alkaline (>12) pH values. The immobilized pH gradient is obtained by the continuous change in the ratio of immobilines. An immobiline is a weak acid or base defined by its pK value.
A protein that is in a pH region below its isoelectric point (pI) will be positively charged and so will migrate toward the cathode (negatively charged electrode). As it migrates through a gradient of increasing pH, however, the protein's overall charge will decrease until the protein reaches the pH region that corresponds to its pI. At this point it has no net charge and so migration ceases (as there is no electrical attraction toward either electrode). As a result, the proteins become focused into sharp stationary bands with each protein positioned at a point in the pH gradient corresponding to its pI. The technique is capable of extremely high resolution with proteins differing by a single charge being fractionated into separate bands.
Molecules to be focused are distributed over a medium that has a pH gradient (usually created by aliphatic ampholytes). An electric current is passed through the medium, creating a "positive" anode and "negative" cathode end. Negatively charged molecules migrate through the pH gradient in the medium toward the "positive" end while positively charged molecules move toward the "negative" end. As a particle moves toward the pole opposite of its charge it moves through the changing pH gradient until it reaches a point in which the pH of that molecule's isoelectric point is reached. At this point the molecule no longer has a net electric charge (due to the protonation or deprotonation of the associated functional groups) and as such will not proceed any further within the gel. The gradient is established before adding the particles of interest by first subjecting a solution of small molecules such as polyampholytes with varying pI values to electrophoresis.
The method is applied particularly often in the study of proteins, which separate based on their relative content of acidic and basic residues, whose value is represented by the pI. Proteins are introduced into an immobilized pH gradient gel composed of polyacrylamide, starch, or agarose where a pH gradient has been established. Gels with large pores are usually used in this process to eliminate any "sieving" effects, or artifacts in the pI caused by differing migration rates for proteins of differing sizes. Isoelectric focusing can resolve proteins that differ in pI value by as little as 0.01. Isoelectric focusing is the first step in two-dimensional gel electrophoresis, in which proteins are first separated by their pI value and then further separated by molecular weight through SDS-PAGE. Isoelectric focusing, on the other hand, is the only step in preparative native PAGE at constant pH.
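The focusing condition — zero net charge at the pI — can be sketched numerically with the Henderson–Hasselbalch relation. The pKa values below are generic textbook values (an assumption, not from this article), and the model ignores neighboring-residue effects:

```python
# Side-chain and terminal pKa values (illustrative textbook numbers)
ACIDIC = {"C-term": 2.0, "D": 3.9, "E": 4.1, "C": 8.3, "Y": 10.5}
BASIC = {"N-term": 9.0, "K": 10.5, "R": 12.5, "H": 6.0}

def net_charge(seq, ph):
    """Net charge of a peptide at a given pH (simplified model)."""
    charge = 1.0 / (1.0 + 10 ** (ph - BASIC["N-term"]))
    charge -= 1.0 / (1.0 + 10 ** (ACIDIC["C-term"] - ph))
    for aa in seq:
        if aa in BASIC:
            charge += 1.0 / (1.0 + 10 ** (ph - BASIC[aa]))
        elif aa in ACIDIC:
            charge -= 1.0 / (1.0 + 10 ** (ACIDIC[aa] - ph))
    return charge

def isoelectric_point(seq, lo=0.0, hi=14.0, tol=1e-4):
    """Bisect for the pH where the net charge crosses zero (the pI)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if net_charge(seq, mid) > 0:
            lo = mid  # still positively charged: pI lies at higher pH
        else:
            hi = mid
    return (lo + hi) / 2.0

print(round(isoelectric_point("GG"), 2))  # 5.5 (midpoint of the terminal pKa values)
```

This mirrors the physical process: at any pH below the pI the computed charge is positive (migration toward the cathode), above it negative, and migration stops where the charge crosses zero.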
Living cells
According to some opinions, living eukaryotic cells perform isoelectric focusing of proteins in their interior to overcome a limitation of the rate of metabolic reaction by diffusion of enzymes and their reactants, and to regulate the rate of particular biochemical processes. By concentrating the enzymes of particular metabolic pathways into distinct and small regions of its interior, the cell can increase the rate of particular biochemical pathways by several orders of magnitude. By modification of the isoelectric point (pI) of molecules of an enzyme by, e.g., phosphorylation or dephosphorylation, the cell can transfer molecules of the enzyme between different parts of its interior, to switch on or switch off particular biochemical processes.
Microfluidic chip based
Microchip based electrophoresis is a promising alternative to capillary electrophoresis since it has the potential to provide rapid protein analysis, straightforward integration with other microfluidic unit operations, whole channel detection, nitrocellulose films, smaller sample sizes and lower fabrication costs.
Multi-junction
The increased demand for faster and easy-to-use protein separation tools has accelerated the evolution of IEF towards in-solution separations. In this context, a multi-junction IEF system was developed to perform fast and gel-free IEF separations. The multi-junction IEF system utilizes a series of vessels with a capillary passing through each vessel. Part of the capillary in each vessel is replaced by a semipermeable membrane. The vessels contain buffer solutions with different pH values, so that a pH gradient is effectively established inside the capillary. The buffer solution in each vessel has an electrical contact with a voltage divider connected to a high-voltage power supply, which establishes an electrical field along the capillary. When a sample (a mixture of peptides or proteins) is injected in the capillary, the presence of the electrical field and the pH gradient separates these molecules according to their isoelectric points. The multi-junction IEF system has been used to separate tryptic peptide mixtures for two-dimensional proteomics and blood plasma proteins from Alzheimer's disease patients for biomarker discovery.
References
Electrophoresis
Industrial processes
Protein methods
Molecular biology techniques
Neutron radiation is a form of ionizing radiation that presents as free neutrons. Typical phenomena are nuclear fission or nuclear fusion causing the release of free neutrons, which then react with nuclei of other atoms to form new nuclides—which, in turn, may trigger further neutron radiation. Free neutrons are unstable, decaying into a proton, an electron, plus an electron antineutrino. Free neutrons have a mean lifetime of 887 seconds (14 minutes, 47 seconds).
Neutron radiation is distinct from alpha, beta and gamma radiation.
Sources
Neutrons may be emitted from nuclear fusion or nuclear fission, or from other nuclear reactions such as radioactive decay or particle interactions with cosmic rays or within particle accelerators. Large neutron sources are rare, and usually limited to large-sized devices such as nuclear reactors or particle accelerators, including the Spallation Neutron Source.
Neutron radiation was discovered from observing an alpha particle colliding with a beryllium nucleus, which was transformed into a carbon nucleus while emitting a neutron, ⁹Be(α, n)¹²C. The combination of an alpha particle emitter and an isotope with a large (α, n) nuclear reaction probability is still a common neutron source.
Neutron radiation from fission
The neutrons in nuclear reactors are generally categorized as slow (thermal) neutrons or fast neutrons depending on their energy. Thermal neutrons are similar in energy distribution (the Maxwell–Boltzmann distribution) to a gas in thermodynamic equilibrium; but are easily captured by atomic nuclei and are the primary means by which elements undergo nuclear transmutation.
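The Maxwell–Boltzmann distribution mentioned above fixes the characteristic speed and energy of thermal neutrons. The sketch below evaluates the most probable speed at room temperature; the temperature of 293.6 K is the conventional reference value assumed here, and the constants are standard CODATA figures.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
M_N = 1.674927e-27   # neutron rest mass, kg
EV  = 1.602177e-19   # joules per electronvolt

def most_probable_speed(temp_k):
    """Most probable speed of a Maxwell-Boltzmann distribution:
    v_p = sqrt(2 * k * T / m)."""
    return math.sqrt(2.0 * K_B * temp_k / M_N)

v = most_probable_speed(293.6)    # ~2200 m/s at room temperature
energy_ev = K_B * 293.6 / EV      # ~0.0253 eV, the conventional "thermal" energy
```

These are the familiar reference figures for thermal neutrons — roughly 2200 m/s and 0.025 eV — against which neutron cross-sections are usually tabulated.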
To achieve an effective fission chain reaction, neutrons produced during fission must be captured by fissionable nuclei, which then split, releasing more neutrons. In most fission reactor designs, the nuclear fuel is not sufficiently refined to absorb enough fast neutrons to carry on the chain reaction, due to the lower cross section for higher-energy neutrons, so a neutron moderator must be introduced to slow the fast neutrons down to thermal velocities to permit sufficient absorption. Common neutron moderators include graphite, ordinary (light) water and heavy water. A few reactors (fast neutron reactors) and all nuclear weapons rely on fast neutrons.
Cosmogenic neutrons
Cosmogenic neutrons are produced from cosmic radiation in the Earth's atmosphere or surface, as well as in particle accelerators. They often possess higher energy levels compared to neutrons found in reactors. Many of these neutrons activate atomic nuclei before reaching the Earth's surface, while a smaller fraction interact with nuclei in the atmospheric air. When these neutrons interact with nitrogen-14 atoms, they can transform them into carbon-14 (¹⁴C), which is extensively utilized in radiocarbon dating.
Uses
Cold, thermal and hot neutron radiation is most commonly used in scattering and diffraction experiments, to assess the properties and the structure of materials in crystallography, condensed matter physics, biology, solid state chemistry, materials science, geology, mineralogy, and related sciences. Neutron radiation is also used in Boron Neutron Capture Therapy to treat cancerous tumors due to its highly penetrating and damaging nature to cellular structure. Neutrons can also be used for imaging of industrial parts termed neutron radiography when using film, neutron radioscopy when taking a digital image, such as through image plates, and neutron tomography for three-dimensional images. Neutron imaging is commonly used in the nuclear industry, the space and aerospace industry, as well as the high reliability explosives industry.
Ionization mechanisms and properties
Neutron radiation is often called indirectly ionizing radiation. It does not ionize atoms in the same way that charged particles such as protons and electrons do (exciting an electron), because neutrons have no charge. However, neutron interactions are largely ionizing, for example when neutron absorption results in gamma emission and the gamma ray (photon) subsequently removes an electron from an atom, or a nucleus recoiling from a neutron interaction is ionized and causes more traditional subsequent ionization in other atoms. Because neutrons are uncharged, they are more penetrating than alpha radiation or beta radiation. In some cases they are more penetrating than gamma radiation, which is impeded in materials of high atomic number. In materials of low atomic number such as hydrogen, a low energy gamma ray may be more penetrating than a high energy neutron.
Health hazards and protection
In health physics, neutron radiation is a type of radiation hazard. A second, more severe hazard of neutron radiation is neutron activation: the ability of neutron radiation to induce radioactivity in most substances it encounters, including bodily tissues. This occurs through the capture of neutrons by atomic nuclei, which are transformed into another nuclide, frequently a radionuclide. This process accounts for much of the radioactive material released by the detonation of a nuclear weapon. It is also a problem in nuclear fission and nuclear fusion installations, as it gradually renders the equipment radioactive such that it must eventually be replaced and disposed of as low-level radioactive waste.
Neutron radiation protection relies on radiation shielding. Because of their high kinetic energy, neutrons are considered the most severe and dangerous form of radiation to the whole body when it is exposed to external sources. Unlike conventional ionizing radiation based on photons or charged particles, neutrons are repeatedly scattered and slowed by light nuclei, so hydrogen-rich materials shield more effectively than heavy nuclei such as iron. The light atoms slow the neutrons by elastic scattering so that they can then be absorbed by nuclear reactions. However, gamma radiation is often produced in such reactions, so additional shielding must be provided to absorb it. Care must be taken to avoid materials whose nuclei undergo fission or neutron capture that leaves them radioactive, producing gamma rays.
Neutrons readily pass through most material, and hence the absorbed dose (measured in grays) from a given amount of radiation is low, but they interact enough to cause biological damage. The most effective shielding materials are water, or hydrocarbons like polyethylene or paraffin wax. Water-extended polyester (WEP) is effective as a shielding wall in harsh environments due to its high hydrogen content and resistance to fire, allowing it to be used in a range of nuclear, health physics, and defense industries. Hydrogen-based materials are well suited to shielding because their light nuclei are efficient at slowing neutrons.
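The effect of a hydrogen-rich shield can be sketched with a simple narrow-beam exponential attenuation model. The removal cross-section value used below for polyethylene is an illustrative assumption, not a figure from the source; tabulated data should be used for real shielding design.

```python
import math

def transmitted_fraction(sigma_removal, thickness_cm):
    """Narrow-beam fast-neutron attenuation: I/I0 = exp(-Sigma_r * x),
    where sigma_removal is a macroscopic removal cross-section in 1/cm."""
    return math.exp(-sigma_removal * thickness_cm)

# Assumed, order-of-magnitude value for polyethylene (check tabulated data):
SIGMA_POLY = 0.1  # 1/cm

f = transmitted_fraction(SIGMA_POLY, 30.0)  # ~5% of fast neutrons pass 30 cm
```

The exponential form also makes the trade-off clear: each additional fixed thickness of shield reduces the transmitted flux by the same factor.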
Concrete (where a considerable number of water molecules chemically bind to the cement) and gravel provide a cheap solution due to their combined shielding of both gamma rays and neutrons. Boron is also an excellent neutron absorber (and also undergoes some neutron scattering). Boron decays into carbon or helium and produces virtually no gamma radiation with boron carbide, a shield commonly used where concrete would be cost prohibitive. Commercially, tanks of water or fuel oil, concrete, gravel, and B4C are common shields that surround areas of large amounts of neutron flux, e.g., nuclear reactors. Boron-impregnated silica glass, standard borosilicate glass, high-boron steel, paraffin, and Plexiglas have niche uses.
Because a neutron that strikes a hydrogen nucleus (a proton or deuteron) imparts energy to that nucleus, the nucleus breaks from its chemical bond and travels a short distance before stopping. Such hydrogen nuclei are high linear energy transfer particles, and are in turn stopped by ionization of the material they travel through. Consequently, in living tissue, neutrons have a relatively high relative biological effectiveness, being roughly ten times more effective at causing biological damage than gamma or beta radiation of equivalent energy exposure. These neutrons can cause cells to change in functionality or to stop replicating entirely, causing damage to the body over time. Neutrons are particularly damaging to soft tissues such as the cornea of the eye.
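The "roughly ten times" figure above corresponds, in dosimetry, to a radiation weighting factor applied to the absorbed dose. The sketch below uses 10 simply to mirror the text; regulatory weighting factors for neutrons vary with energy (modern recommendations range up to about 20), so this is illustrative rather than normative.

```python
def equivalent_dose_sv(absorbed_dose_gy, weighting_factor=10.0):
    """Equivalent dose in sieverts: H = w_R * D.  The default w_R of 10
    mirrors the 'roughly ten times' figure in the text; actual values
    depend on neutron energy."""
    return weighting_factor * absorbed_dose_gy

h = equivalent_dose_sv(0.002)  # a 2 mGy neutron dose -> about 20 mSv
```

This is why a neutron field delivering a modest absorbed dose in grays can still represent a substantial equivalent dose in sieverts.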
Effects on materials
High-energy neutrons damage and degrade materials over time; bombardment of materials with neutrons creates collision cascades that can produce point defects and
dislocations in the material, the creation of which is the primary driver behind microstructural changes occurring over time in materials exposed to radiation. At high neutron fluences this can lead to embrittlement of metals and other materials, and to neutron-induced swelling in some of them. This poses a problem for nuclear reactor vessels and significantly limits their lifetime (which can be somewhat prolonged by controlled annealing of the vessel, reducing the number of the built-up dislocations). Graphite neutron moderator blocks are especially susceptible to this effect, known as Wigner effect, and must be annealed periodically. The Windscale fire was caused by a mishap during such an annealing operation.
Radiation damage to materials occurs as a result of the interaction of an energetic incident particle (a neutron, or otherwise) with a lattice atom in the material. The collision causes a massive transfer of kinetic energy to the lattice atom, which is displaced from its lattice site, becoming what is known as the primary knock-on atom (PKA). Because the PKA is surrounded by other lattice atoms, its displacement and passage through the lattice results in many subsequent collisions and the creation of additional knock-on atoms, producing what is known as the collision cascade or displacement cascade. The knock-on atoms lose energy with each collision, and terminate as interstitials, effectively creating a series of Frenkel defects in the lattice. Heat is also created as a result of the collisions (from electronic energy loss), as are possibly transmuted atoms. The magnitude of the damage is such that a single 1 MeV neutron creating a PKA in an iron lattice produces approximately 1,100 Frenkel pairs. The entire cascade event occurs over a timescale of 1 × 10⁻¹³ seconds, and therefore, can only be "observed" in computer simulations of the event.
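The "approximately 1,100 Frenkel pairs" figure can be put in context with the standard NRT (modified Kinchin–Pease) displacement model, sketched below. The displacement threshold energy of 40 eV for iron and the ~70 keV PKA energy are assumed, typical values (not from the source), and this simple model is only expected to give the right order of magnitude.

```python
def nrt_displacements(damage_energy_ev, e_d_ev=40.0):
    """NRT (modified Kinchin-Pease) estimate of stable displacements
    produced by a PKA with damage energy T:
        N = 0.8 * T / (2 * E_d)   for T well above the threshold E_d."""
    if damage_energy_ev < e_d_ev:
        return 0                      # below threshold: no displacement
    if damage_energy_ev < 2.0 * e_d_ev / 0.8:
        return 1                      # just above threshold: one Frenkel pair
    return 0.8 * damage_energy_ev / (2.0 * e_d_ev)

# A ~70 keV iron PKA (roughly the maximum energy a 1 MeV neutron can
# transfer elastically to an Fe nucleus) gives several hundred displacements:
n = nrt_displacements(70_000.0)
```

The model counts only stable Frenkel pairs and ignores electronic losses and in-cascade recombination, which is why it lands near but below the ~1,100 figure quoted in the text.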
The knock-on atoms terminate in non-equilibrium interstitial lattice positions, many of which annihilate themselves by diffusing back into neighboring vacant lattice sites and restore the ordered lattice. Those that do not or cannot recombine leave behind vacancies, causing a local rise in the vacancy concentration far above the equilibrium concentration. These vacancies tend to migrate as a result of thermal diffusion towards vacancy sinks (i.e., grain boundaries, dislocations) but exist for significant amounts of time, during which additional high-energy particles bombard the lattice, creating collision cascades and additional vacancies, which migrate towards sinks. The main effect of irradiation in a lattice is the significant and persistent flux of defects to sinks in what is known as the defect wind. Vacancies can also annihilate by combining with one another to form dislocation loops and later, lattice voids.
The collision cascade creates many more vacancies and interstitials in the material than equilibrium for a given temperature, and diffusivity in the material is dramatically increased as a result. This leads to an effect called radiation-enhanced diffusion, which leads to microstructural evolution of the material over time. The mechanisms leading to the evolution of the microstructure are many, may vary with temperature, flux, and fluence, and are a subject of extensive study.
Radiation-induced segregation results from the aforementioned flux of vacancies to sinks, implying a flux of lattice atoms away from sinks; but not necessarily in the same proportion to alloy composition in the case of an alloyed material. These fluxes may therefore lead to depletion of alloying elements in the vicinity of sinks. For the flux of interstitials introduced by the cascade, the effect is reversed: the interstitials diffuse toward sinks resulting in alloy enrichment near the sink.
Dislocation loops are formed if vacancies form clusters on a lattice plane. If these vacancy clusters expand in three dimensions, a void forms. By definition, voids are under vacuum, but may become gas-filled in the case of alpha-particle radiation (helium) or if the gas is produced as a result of transmutation reactions. The void is then called a bubble, and leads to dimensional instability (neutron-induced swelling) of parts subject to radiation. Swelling presents a major long-term design problem, especially in reactor components made of stainless steel. Alloys with crystallographic anisotropy, such as Zircaloys, are subject to the creation of dislocation loops, but do not exhibit void formation. Instead, the loops form on particular lattice planes, and can lead to irradiation-induced growth, a phenomenon distinct from swelling, but one that can also produce significant dimensional changes in an alloy.
Irradiation of materials can also induce phase transformations in the material: in the case of a solid solution, the solute enrichment or depletion at sinks (radiation-induced segregation) can lead to the precipitation of new phases in the material.
The mechanical effects of these mechanisms include irradiation hardening, embrittlement, creep, and environmentally-assisted cracking. The defect clusters, dislocation loops, voids, bubbles, and precipitates produced as a result of radiation in a material all contribute to the strengthening and embrittlement (loss of ductility) in the material. Embrittlement is of particular concern for the material comprising the reactor pressure vessel, where as a result the energy required to fracture the vessel decreases significantly. It is possible to restore ductility by annealing the defects out, and much of the life-extension of nuclear reactors depends on the ability to safely do so. Creep is also greatly accelerated in irradiated materials, though not as a result of the enhanced diffusivities, but rather as a result of the interaction between lattice stress and the developing microstructure. Environmentally-assisted cracking or, more specifically, irradiation-assisted stress corrosion cracking (IASCC) is observed especially in alloys subject to neutron radiation and in contact with water, caused by hydrogen absorption at crack tips resulting from radiolysis of the water, leading to a reduction in the required energy to propagate the crack.
See also
Neutron emission
Neutron flux
Neutron radiography
References
External links
EPA definitions of various terms
Comparison of Neutron Radiographic and X-Radiographic Images
Neutron techniques A unique tool for research and development
IARC Group 1 carcinogens
Ionizing radiation
Radiation
Neutron-related techniques
The history of structural engineering dates back to at least 2700 BC when the step pyramid for Pharaoh Djoser was built by Imhotep, the first architect in history known by name. Pyramids were the most common major structures built by ancient civilizations because it is a structural form which is inherently stable and can be almost infinitely scaled (as opposed to most other structural forms, which cannot be linearly increased in size in proportion to increased loads).
Another notable engineering feat from antiquity still in use today is the qanat water management system.
Qanat technology was developed in the time of the Medes, the predecessors of the Persian Empire (modern-day Iran), which has the oldest and longest qanat (more than 3,000 years old and longer than 71 km); the technology later spread to other cultures that had contact with the Persians.
Throughout ancient and medieval history most architectural design and construction was carried out by artisans, such as stone masons and carpenters, rising to the role of master builder. No theory of structures existed and understanding of how structures stood up was extremely limited, and based almost entirely on empirical evidence of 'what had worked before'. Knowledge was retained by guilds and seldom supplanted by advances. Structures were repetitive, and increases in scale were incremental.
No record exists of the first calculations of the strength of structural members or the behaviour of structural material, but the profession of structural engineer only really took shape with the Industrial Revolution and the re-invention of concrete (see History of concrete). The physical sciences underlying structural engineering began to be understood in the Renaissance and have been developing ever since.
Early structural engineering
The recorded history of structural engineering starts with the ancient Egyptians. In the 27th century BC, Imhotep was the first structural engineer known by name and constructed the first known step pyramid in Egypt. In the 26th century BC, the Great Pyramid of Giza was constructed in Egypt. It remained the largest man-made structure for millennia and was considered an unsurpassed feat in architecture until the 19th century AD.
The understanding of the physical laws that underpin structural engineering in the Western world dates back to the 3rd century BC, when Archimedes published his work On the Equilibrium of Planes in two volumes, in which he set out the Law of the Lever: magnitudes are in equilibrium at distances reciprocally proportional to their weights.
Archimedes used the principles derived to calculate the areas and centers of gravity of various geometric figures including triangles, paraboloids, and hemispheres. Archimedes's work on this and his work on calculus and geometry, together with Euclidean geometry, underpin much of the mathematics and understanding of structures in modern structural engineering.
The ancient Romans made great bounds in structural engineering, pioneering large structures in masonry and concrete, many of which are still standing today. They include aqueducts, thermae, columns, lighthouses, defensive walls and harbours. Their methods are recorded by Vitruvius in his De Architectura written in 25 BC, a manual of civil and structural engineering with extensive sections on materials and machines used in construction. One reason for their success is their accurate surveying techniques based on the dioptra, groma and chorobates.
During the High Middle Ages (11th to 14th centuries) builders were able to balance the side thrust of vaults with that of flying buttresses and side vaults, to build tall spacious structures, some of which were built entirely of stone (with iron pins only securing the ends of stones) and have lasted for centuries.
In the 15th and 16th centuries, despite lacking beam theory and calculus, Leonardo da Vinci produced many engineering designs based on scientific observation and rigour, including a design for a bridge to span the Golden Horn. Though dismissed at the time, the design has since been judged to be both feasible and structurally valid.
The foundations of modern structural engineering were laid in the 17th century by Galileo Galilei, Robert Hooke and Isaac Newton with the publication of three great scientific works. In 1638 Galileo published Dialogues Relating to Two New Sciences, outlining the sciences of the strength of materials and the motion of objects (essentially defining gravity as a force giving rise to a constant acceleration). It was the first establishment of a scientific approach to structural engineering, including the first attempts to develop a theory for beams. This is also regarded as the beginning of structural analysis, the mathematical representation and design of building structures.
This was followed in 1676 by Robert Hooke's first statement of Hooke's Law, providing a scientific understanding of elasticity of materials and their behaviour under load.
Eleven years later, in 1687, Sir Isaac Newton published Philosophiae Naturalis Principia Mathematica, setting out his Laws of Motion, providing for the first time an understanding of the fundamental laws governing structures.
Also in the 17th century, Sir Isaac Newton and Gottfried Leibniz both independently developed the Fundamental theorem of calculus, providing one of the most important mathematical tools in engineering.
Further advances in the mathematics needed to allow structural engineers to apply the understanding of structures gained through the work of Galileo, Hooke and Newton during the 17th century came in the 18th century when Leonhard Euler pioneered much of the mathematics and many of the methods which allow structural engineers to model and analyse structures. Specifically, he developed the Euler–Bernoulli beam equation with Daniel Bernoulli (1700–1782) circa 1750 - the fundamental theory underlying most structural engineering design.
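The Euler–Bernoulli beam equation referred to above can be stated compactly. For a beam of flexural rigidity $EI$ carrying a distributed transverse load $q(x)$, with $w(x)$ the deflection:

```latex
\frac{\mathrm{d}^2}{\mathrm{d}x^2}\!\left( EI \, \frac{\mathrm{d}^2 w}{\mathrm{d}x^2} \right) = q(x),
\qquad\text{which for constant } EI \text{ reduces to }\qquad
EI \, \frac{\mathrm{d}^4 w}{\mathrm{d}x^4} = q(x).
```

Solving this fourth-order equation subject to support boundary conditions yields the deflections and bending moments on which most beam design is based.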
Daniel Bernoulli, with Johann (Jean) Bernoulli (1667–1748), is also credited with formulating the theory of virtual work, providing a tool using equilibrium of forces and compatibility of geometry to solve structural problems. In 1717 Jean Bernoulli wrote to Pierre Varignon explaining the principle of virtual work, while in 1726 Daniel Bernoulli wrote of the "composition of forces".
In 1757 Leonhard Euler went on to derive the Euler buckling formula, greatly advancing the ability of engineers to design compression elements.
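Euler's buckling formula gives the critical axial load of an ideal slender column as P_cr = π²EI/(KL)². The sketch below evaluates it for illustrative numbers — the 3 m pin-ended steel column, E = 200 GPa, and second moment of area are assumed example values, not figures from the source.

```python
import math

def euler_buckling_load(e_mod, i_area, length, k=1.0):
    """Euler's critical load for an ideal column:
    P_cr = pi^2 * E * I / (K * L)^2, with K = 1 for pinned-pinned ends."""
    return math.pi ** 2 * e_mod * i_area / (k * length) ** 2

# Assumed example: 3 m pin-ended steel column, E = 200 GPa,
# I = 8.33e-6 m^4 (roughly a 100 mm x 100 mm solid square section).
p_cr = euler_buckling_load(200e9, 8.33e-6, 3.0)  # ~1.8 MN
```

The inverse-square dependence on length is the key design insight: doubling the unbraced length of a column cuts its buckling capacity to a quarter.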
Modern developments in structural engineering
Throughout the late 19th and early 20th centuries, materials science and structural analysis underwent development at a tremendous pace.
Though elasticity was understood in theory well before the 19th century, it was not until 1821 that Claude-Louis Navier formulated the general theory of elasticity in a mathematically usable form. In his leçons of 1826 he explored a great range of different structural theory, and was the first to highlight that the role of a structural engineer is not to understand the final, failed state of a structure, but to prevent that failure in the first place. In 1826 he also established the elastic modulus as a property of materials independent of the second moment of area, allowing engineers for the first time to both understand structural behaviour and structural materials.
Towards the end of the 19th century, in 1873, Carlo Alberto Castigliano presented his dissertation "Intorno ai sistemi elastici", which contains his theorem for computing displacement as partial derivative of the strain energy.
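Castigliano's theorem described above can be written concisely. If $U$ is the strain energy of a linearly elastic structure and $P_i$ an applied load, the displacement $\delta_i$ at the point of application of $P_i$, in its direction, is:

```latex
\delta_i = \frac{\partial U}{\partial P_i}
```

This turns displacement calculation into differentiation of an energy expression, a technique still taught as a standard hand method for trusses and frames.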
In 1824, Portland cement was patented by the engineer Joseph Aspdin as "a superior cement resembling Portland Stone", British Patent no. 5022. Although different forms of cement already existed (Pozzolanic cement was used by the Romans as early as 100 B.C. and even earlier by the ancient Greek and Chinese civilizations) and were in common usage in Europe from the 1750s, the discovery made by Aspdin used commonly available, cheap materials, making concrete construction an economical possibility.
Developments in concrete continued with the construction in 1848 of a rowing boat built of ferrocement - the forerunner of modern reinforced concrete - by Joseph-Louis Lambot. He patented his system of mesh reinforcement and concrete in 1855, one year after W.B. Wilkinson also patented a similar system. This was followed in 1867 when a reinforced concrete planting tub was patented by Joseph Monier in Paris, using steel mesh reinforcement similar to that used by Lambot and Wilkinson. Monier took the idea forward, filing several patents for tubs, slabs and beams, leading eventually to the Monier system of reinforced structures, the first use of steel reinforcement bars located in areas of tension in the structure.
Steel construction was first made possible in the 1850s when Henry Bessemer developed the Bessemer process to produce steel. He gained patents for the process in 1855 and 1856 and successfully completed the conversion of cast iron into cast steel in 1858. Eventually mild steel would replace both wrought iron and cast iron as the preferred metal for construction.
During the late 19th century, great advancements were made in the use of cast iron, gradually replacing wrought iron as a material of choice. Ditherington Flax Mill in Shrewsbury, designed by Charles Bage, was the first building in the world with an interior iron frame. It was built in 1797. In 1792 William Strutt had attempted to build a fireproof mill at Belper in Derby (Belper West Mill), using cast iron columns and timber beams within the depths of brick arches that formed the floors. The exposed beam soffits were protected against fire by plaster. This mill at Belper was the world's first attempt to construct fireproof buildings, and is the first example of fire engineering. This was later improved upon with the construction of Belper North Mill, a collaboration between Strutt and Bage, which by using a full cast iron frame represented the world's first "fire proofed" building.
The Forth Bridge was built by Benjamin Baker, Sir John Fowler and William Arrol in 1889, using steel, after the original design for the bridge by Thomas Bouch was rejected following the collapse of his Tay Rail Bridge. The Forth Bridge was one of the first major uses of steel, and a landmark in bridge design. Also in 1889, the wrought-iron Eiffel Tower was built by Gustave Eiffel and Maurice Koechlin, demonstrating the potential of construction using iron, despite the fact that steel construction was already being used elsewhere.
During the late 19th century, Russian structural engineer Vladimir Shukhov developed analysis methods for tensile structures, thin-shell structures, lattice shell structures and new structural geometries such as hyperboloid structures. Pipeline transport was pioneered by Vladimir Shukhov and the Branobel company in the late 19th century.
Again taking reinforced concrete design forwards, from 1892 onwards François Hennebique's firm used his patented reinforced concrete system to build thousands of structures throughout Europe. Thaddeus Hyatt in the US and Wayss & Freitag in Germany also patented systems. The firm AG für Monierbauten constructed 200 reinforced concrete bridges in Germany between 1890 and 1897. The great pioneering uses of reinforced concrete however came during the first third of the 20th century, with Robert Maillart and others furthering the understanding of its behaviour. Maillart noticed that many concrete bridge structures were significantly cracked, and as a result left the cracked areas out of his next bridge design - correctly believing that if the concrete was cracked, it was not contributing to the strength. This resulted in the revolutionary Salginatobel Bridge design. Wilhelm Ritter formulated the truss theory for the shear design of reinforced concrete beams in 1899, and Emil Mörsch improved this in 1902. He went on to demonstrate that treating concrete in compression as a linear-elastic material was a conservative approximation of its behaviour. Concrete design and analysis has been progressing ever since, with the development of analysis methods such as yield line theory, based on plastic analysis of concrete (as opposed to linear-elastic), and many different variations on the model for stress distributions in concrete in compression.
Prestressed concrete, pioneered by Eugène Freyssinet with a patent in 1928, gave a novel approach in overcoming the weakness of concrete structures in tension. Freyssinet constructed an experimental prestressed arch in 1908 and later used the technology in a limited form in the Plougastel Bridge in France in 1930. He went on to build six prestressed concrete bridges across the Marne River, firmly establishing the technology.
Structural engineering theory was again advanced in 1930 when Professor Hardy Cross developed his Moment distribution method, allowing the real stresses of many complex structures to be approximated quickly and accurately.
In the mid 20th century John Fleetwood Baker went on to develop the plasticity theory of structures, providing a powerful tool for the safe design of steel structures. The possibility of creating structures with complex geometries, beyond analysis by hand calculation methods, first arose in 1941 when Alexander Hrennikoff submitted his D.Sc thesis at MIT on the topic of discretization of plane elasticity problems using a lattice framework. This was the forerunner to the development of finite element analysis. In 1942, Richard Courant developed a mathematical basis for finite element analysis. This led in 1956 to the publication by J. Turner, R. W. Clough, H. C. Martin, and L. J. Topp of a paper on the "Stiffness and Deflection of Complex Structures". This paper introduced the name "finite-element method" and is widely recognised as the first comprehensive treatment of the method as it is known today.
High-rise construction, though possible from the late 19th century onwards, was greatly advanced during the second half of the 20th century. Fazlur Khan designed structural systems that remain fundamental to many modern high rise constructions and which he employed in his structural designs for the John Hancock Center in 1969 and Sears Tower in 1973. Khan's central innovation in skyscraper design and construction was the idea of the "tube" and "bundled tube" structural systems for tall buildings. He defined the framed tube structure as "a three dimensional space structure composed of three, four, or possibly more frames, braced frames, or shear walls, joined at or near their edges to form a vertical tube-like structural system capable of resisting lateral forces in any direction by cantilevering from the foundation." Closely spaced interconnected exterior columns form the tube. Horizontal loads, for example wind, are supported by the structure as a whole. About half the exterior surface is available for windows. Framed tubes allow fewer interior columns, and so create more usable floor space. Where larger openings like garage doors are required, the tube frame must be interrupted, with transfer girders used to maintain structural integrity. The first building to apply the tube-frame construction was in the DeWitt-Chestnut Apartment Building which Khan designed in Chicago. This laid the foundations for the tube structures used in most later skyscraper constructions, including the construction of the World Trade Center.
Another innovation that Fazlur Khan developed was the concept of X-bracing, which reduced the lateral load on the building by transferring the load into the exterior columns. This allowed for a reduced need for interior columns thus creating more floor space, and can be seen in the John Hancock Center. The first sky lobby was also designed by Khan for the John Hancock Center in 1969. Later buildings with sky lobbies include the World Trade Center, Petronas Twin Towers and Taipei 101.
In 1987 Jörg Schlaich and Kurt Schafer published the culmination of almost ten years of work on the strut and tie method for concrete analysis - a tool to design structures with discontinuities such as corners and joints, providing another powerful tool for the analysis of complex concrete geometries.
In the late 20th and early 21st centuries the development of powerful computers has allowed finite element analysis to become a significant tool for structural analysis and design. The development of finite element programs has led to the ability to accurately predict the stresses in complex structures, and allowed great advances in structural engineering design and architecture. In the 1960s and 70s computational analysis was used in a significant way for the first time on the design of the Sydney Opera House roof. Many modern structures could not be understood and designed without the use of computational analysis.
Developments in the understanding of materials and structural behaviour in the latter part of the 20th century have been significant, with detailed understanding being developed of topics such as fracture mechanics, earthquake engineering, composite materials, temperature effects on materials, dynamics and vibration control, fatigue, creep and others. The depth and breadth of knowledge now available in structural engineering, and the increasing range of different structures and the increasing complexity of those structures has led to increasing specialisation of structural engineers.
See also
Base isolation
History of construction
History of architecture
History of sanitation and water supply
Qanat water management system
References
External links
"World Expos. A history of structures". Isaac López César. A history of architectural structures over the last 150 years.
3rd-millennium BC introductions
Structural engineering