id | url | text | source | categories | token_count
|---|---|---|---|---|---|
66,254,033 | https://en.wikipedia.org/wiki/Methylol%20urea | Methylol urea is the organic compound with the formula H2NC(O)NHCH2OH. It is a white, water-soluble solid that decomposes near 110 °C.
Methylol urea is the product of the condensation reaction of formaldehyde and urea. As such it is an intermediate in the formation of urea-formaldehyde resins as well as fertilizer compositions such as methylene diurea. It has also been investigated as a corrosion inhibitor.
References
Ureas | Methylol urea | Chemistry | 109 |
24,154,443 | https://en.wikipedia.org/wiki/C14H16N2O2 | The molecular formula C14H16N2O2 may refer to:
o-Dianisidine, a chemical precursor to dyes
Etomidate, a short-acting intravenous anaesthetic agent
Imiloxan, an alpha blocker
Tetramethylxylene diisocyanate | C14H16N2O2 | Chemistry | 80 |
2,951,323 | https://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93Wold%20theorem | In mathematics, the Cramér–Wold theorem or the Cramér–Wold device is a theorem in measure theory which states that a Borel probability measure on $\mathbb{R}^k$ is uniquely determined by the totality of its one-dimensional projections. It is used as a method for proving joint convergence results. The theorem is named after Harald Cramér and Herman Ole Andreas Wold, who published the result in 1936.
Let
$\overline{X}_n = (X_{n1},\dots,X_{nk})$
and
$\overline{X} = (X_1,\dots,X_k)$
be random vectors of dimension k. Then $\overline{X}_n$ converges in distribution to $\overline{X}$ if and only if
$$\sum_{i=1}^{k} t_i X_{ni} \xrightarrow{D} \sum_{i=1}^{k} t_i X_i$$
for each $(t_1,\dots,t_k) \in \mathbb{R}^k$, that is, if every fixed linear combination of the coordinates of $\overline{X}_n$ converges in distribution to the corresponding linear combination of coordinates of $\overline{X}$.
If $\overline{X}_n$ takes values in $\mathbb{Z}_+^k$, then the statement is also true with $t \in \mathbb{Z}_+^k$.
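In practice the device is applied by checking convergence of fixed linear projections one at a time. The following minimal Python simulation sketch is not part of the article; the distribution, sample sizes and the direction t are arbitrary illustrative choices. It shows one such projection of a standardized sample-mean vector matching its normal limit.

```python
# Minimal simulation sketch of the Cramér–Wold device (illustrative only; the
# distribution, sample sizes and the fixed direction t are arbitrary choices).
import numpy as np

rng = np.random.default_rng(0)
n, trials = 1000, 2000
t = np.array([0.6, -0.8])   # a fixed direction in R^2 with t·t = 1

# X_n: standardized mean of n i.i.d. Exp(1) pairs; by the CLT it converges in
# distribution to a standard bivariate normal N(0, I_2).
samples = rng.exponential(1.0, size=(trials, n, 2))
X_n = np.sqrt(n) * (samples.mean(axis=1) - 1.0)

# The Cramér–Wold device reduces the joint statement to the one-dimensional
# projections t·X_n, whose limit here should be N(0, t·t) = N(0, 1).
proj = X_n @ t
print("projection mean (should be near 0):", round(proj.mean(), 3))
print("projection variance (should be near 1):", round(proj.var(), 3))
```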
References
Theorems in measure theory
Probability theorems
Convergence (mathematics) | Cramér–Wold theorem | Mathematics | 155 |
48,193 | https://en.wikipedia.org/wiki/Camera%20obscura | A camera obscura is the natural phenomenon in which the rays of light passing through a small hole into a dark space form an image where they strike a surface, resulting in an inverted (upside down) and reversed (left to right) projection of the view outside.
Camera obscura can also refer to analogous constructions such as a darkened room, box or tent in which an exterior image is projected inside or onto a translucent screen viewed from outside. Camera obscuras with a lens in the opening have been used since the second half of the 16th century and became popular as aids for drawing and painting. The technology was developed further into the photographic camera in the first half of the 19th century, when camera obscura boxes were used to expose light-sensitive materials to the projected image.
The image (or the principle of its projection) of a lensless camera obscura is also referred to as a "pinhole image".
The camera obscura was used to study eclipses without the risk of damaging the eyes by looking directly into the Sun. As a drawing aid, it allowed tracing the projected image to produce a highly accurate representation, and was especially appreciated as an easy way to achieve proper graphical perspective.
Before the term camera obscura was first used in 1604, other terms were used to refer to the devices: cubiculum obscurum, cubiculum tenebricosum, conclave obscurum, and locus obscurus.
A camera obscura without a lens but with a very small hole is sometimes referred to as a "pinhole camera", although this more often refers to simple (homemade) lensless cameras where photographic film or photographic paper is used.
Physical explanation
Rays of light travel in straight lines and change when they are reflected and partly absorbed by an object, retaining information about the color and brightness of the surface of that object. Lighted objects reflect rays of light in all directions. A small enough opening in a barrier admits only the rays that travel directly from different points in the scene on the other side, and these rays form an image of that scene where they reach a surface opposite from the opening.
The human eye (and that of many other animals) works much like a camera obscura, with rays of light entering an opening (pupil), getting focused through a convex lens and passing a dark chamber before forming an inverted image on a smooth surface (retina). The analogy appeared early in the 16th century and would in the 17th century find common use to illustrate Western theological ideas about God creating the universe as a machine, with a predetermined purpose (just like humans create machines). This had a huge influence on behavioral science, especially on the study of perception and cognition. In this context, it is noteworthy that the projection of inverted images is actually a physical principle of optics that predates the emergence of life (rather than a biological or technological invention) and is not characteristic of all biological vision.
Technology
A camera obscura consists of a box, tent, or room with a small hole in one side or the top. Light from an external scene passes through the hole and strikes a surface inside, where the scene is reproduced, inverted (upside-down) and reversed (left to right), but with color and perspective preserved.
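As a rough numerical illustration of the projection geometry described above, the following Python sketch uses the similar-triangle relation between object distance and screen distance; the object height and distances are assumed example values, not figures from the article.

```python
# Illustrative similar-triangle calculation for a pinhole projection; the object
# height and the distances are assumed example values.
object_height_m = 1.7     # e.g. a standing person outside
object_distance_m = 10.0  # object to pinhole
screen_distance_m = 0.5   # pinhole to screen inside the dark space

# The projected (inverted) image height scales with the ratio of the two distances.
image_height_m = object_height_m * screen_distance_m / object_distance_m
print(f"Projected image height: {image_height_m * 100:.1f} cm (inverted)")  # ~8.5 cm
```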
To produce a reasonably clear projected image, the aperture is typically smaller than 1/100 of the distance to the screen.
As the pinhole is made smaller, the image gets sharper but dimmer. If the pinhole is made too small, however, sharpness is lost again because of diffraction. Optimum sharpness is attained with an aperture diameter approximately equal to the geometric mean of the wavelength of light and the distance to the screen.
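A quick calculation of that geometric-mean rule follows; the specific wavelength and screen distance are assumed example values, not figures from the article.

```python
# Rough calculation of the optimum pinhole diameter described above, taken as
# the geometric mean of the wavelength and the pinhole-to-screen distance.
import math

wavelength_m = 550e-9   # green light, ~550 nm
distance_m = 0.25       # pinhole-to-screen distance, 25 cm

optimum_diameter_m = math.sqrt(wavelength_m * distance_m)
print(f"Optimum pinhole diameter: {optimum_diameter_m * 1e3:.2f} mm")  # ~0.37 mm
```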
In practice, camera obscuras use a lens rather than a pinhole because it allows a larger aperture, giving a usable brightness while maintaining focus.
If the image is caught on a translucent screen, it can be viewed from the back so that it is no longer reversed (but still upside-down). Using mirrors, it is possible to project a right-side-up image. The projection can also be displayed on a horizontal surface (e.g., a table). The 18th-century overhead version in tents used mirrors inside a kind of periscope on the top of the tent.
The box-type camera obscura often has an angled mirror projecting an upright image onto tracing paper placed on its glass top. Although the image is viewed from the back, it is reversed by the mirror.
History
Prehistory to 500 BC: Possible inspiration for prehistoric art and possible use in religious ceremonies, gnomons
There are theories that occurrences of camera obscura effects (through tiny holes in tents or in screens of animal hide) inspired paleolithic cave paintings. Distortions in the shapes of animals in many paleolithic cave artworks might have been inspired by the distortions seen when the surface on which an image was projected was not flat or was not at the right angle to the projection.
It is also suggested that camera obscura projections could have played a role in Neolithic structures.
Perforated gnomons projecting a pinhole image of the sun were described in the Chinese Zhoubi Suanjing writings (1046 BC–256 BC, with later material added). The location of the bright circle can be measured to tell the time of day and year. In Middle Eastern and European cultures, the invention was much later attributed to the Egyptian astronomer and mathematician Ibn Yunus around 1000 AD.
500 BC to 500 AD: Earliest written observations
One of the earliest known written records of a pinhole image is found in the Chinese text called Mozi, dated to the 4th century BC and traditionally ascribed to and named after Mozi (circa 470 BC – circa 391 BC), a Chinese philosopher and the founder of the Mohist School of Logic. These writings explain how the image in a "collecting-point" or "treasure house" is inverted by an intersecting point (pinhole) that collects the (rays of) light. Light coming from the foot of an illuminated person gets partly hidden below (i.e., strikes below the pinhole) and partly forms the top of the image. Rays from the head are partly hidden above (i.e., strike above the pinhole) and partly form the lower part of the image.
Another early account is provided by the Greek philosopher Aristotle (384–322 BC), or possibly a follower of his ideas. Like the later 11th-century Middle Eastern scientist Alhazen, Aristotle is also thought to have used a camera obscura for observing solar eclipses. The formation of pinhole images is touched upon in the work Problems – Book XV, which asks why sunlight passing through a square aperture forms a round spot of light, and why the sun projects crescent-shaped images during an eclipse.
In an attempt to explain the phenomenon, the author described how the light formed two cones: one between the Sun and the aperture and one between the aperture and the Earth. The roundness of the image was attributed to the idea that the parts of the rays of light (assumed to travel in straight lines) that are cut off at the corners of the aperture become so weak that they cannot be noticed.
Many philosophers and scientists of the Western world would ponder the contradiction between light travelling in straight lines and the formation of round spots of light behind differently shaped apertures, until it became generally accepted that the circular and crescent-shapes described in the "problem" were pinhole image projections of the sun.
In his book Optics (circa 300 BC, surviving in later manuscripts from around 1000 AD), Euclid proposed mathematical descriptions of vision with "lines drawn directly from the eye pass through a space of great extent" and "the form of the space included in our vision is a cone, with its apex in the eye and its base at the limits of our vision." Later versions of the text, like Ignazio Danti's 1573 annotated translation, would add a description of the camera obscura principle to demonstrate Euclid's ideas.
500 to 1000: Earliest experiments, study of light
In the 6th century, the Byzantine-Greek mathematician and architect Anthemius of Tralles (most famous as a co-architect of the Hagia Sophia) experimented with effects related to the camera obscura. Anthemius had a sophisticated understanding of the involved optics, as demonstrated by a light-ray diagram he constructed in 555 AD.
In his optical treatise De Aspectibus, Al-Kindi (c. 801–873) wrote about pinhole images to prove that light travels in straight lines.
In the 10th century Yu Chao-Lung supposedly projected images of pagoda models through a small hole onto a screen to study directions and divergence of rays of light.
1000 to 1400: Optical and astronomical tool, entertainment
Middle Eastern physicist Ibn al-Haytham (known in the West by the Latinised Alhazen) (965–1040) extensively studied the camera obscura phenomenon in the early 11th century.
In his treatise "On the shape of the eclipse" he provided the first experimental and mathematical analysis of the phenomenon.
He understood the relationship between the focal point and the pinhole.
In his Book of Optics (circa 1027), Ibn al-Haytham explained that rays of light travel in straight lines and are distinguished by the body that reflects them.
Latin translations of the Book of Optics from about 1200 onward proved very influential in Europe. Among those Ibn al-Haytham is thought to have inspired are Witelo, John Peckham, Roger Bacon, Leonardo da Vinci, René Descartes and Johannes Kepler. However, On the shape of the eclipse remained available only in Arabic until the 20th century, and no comparable explanation appeared in Europe before Kepler addressed the problem. It was in fact al-Kindi's work, and especially the widely circulated pseudo-Euclidean De Speculis, that the early scholars interested in pinhole images cited.
In his 1088 book, Dream Pool Essays, the Song dynasty Chinese scientist Shen Kuo (1031–1095) compared the focal point of a concave burning-mirror and the "collecting" hole of camera obscura phenomena to an oar in a rowlock in order to explain how the images were inverted.
Shen Kuo also responded to a statement of Duan Chengshi in Miscellaneous Morsels from Youyang written in about 840 that the inverted image of a Chinese pagoda tower beside a seashore, was inverted because it was reflected by the sea: "This is nonsense. It is a normal principle that the image is inverted after passing through the small hole."
English statesman and scholastic philosopher Robert Grosseteste (c. 1175 – 9 October 1253) was one of the earliest Europeans who commented on the camera obscura.
English philosopher and Franciscan friar Roger Bacon (c. 1219/20 – c. 1292) falsely stated in his De multiplicatione specierum (1267) that an image projected through a square aperture was round because light would travel in spherical waves and therefore assumed its natural shape after passing through a hole. He is also credited with a manuscript that advised studying solar eclipses safely by observing the rays passing through some round hole and studying the spot of light they form on a surface.
A picture of a three-tiered camera obscura (see illustration) has been attributed to Bacon, but the source for this attribution is not given. A very similar picture is found in Athanasius Kircher's Ars Magna Lucis et Umbrae (1646).
Polish friar, theologian, physicist, mathematician and natural philosopher Vitello wrote about the camera obscura in his influential treatise Perspectiva (circa 1270–1278), which was largely based on Ibn al-Haytham's work.
English archbishop and scholar John Peckham (circa 1230 – 1292) wrote about the camera obscura in his Tractatus de Perspectiva (circa 1269–1277) and Perspectiva communis (circa 1277–79), falsely arguing that light gradually forms the circular shape after passing through the aperture. His writings were influenced by Bacon.
At the end of the 13th century, Arnaldus de Villa Nova is credited with using a camera obscura to project live performances for entertainment.
French astronomer Guillaume de Saint-Cloud suggested in his 1292 work Almanach Planetarum that the eccentricity of the Sun could be determined with the camera obscura from the inverse proportion between the distances and the apparent solar diameters at apogee and perigee.
Kamāl al-Dīn al-Fārisī (1267–1319) described in his 1309 work Kitab Tanqih al-Manazir (The Revision of the Optics) how he experimented with a glass sphere filled with water in a camera obscura with a controlled aperture and found that the colors of the rainbow are phenomena of the decomposition of light.
French Jewish philosopher, mathematician, physicist and astronomer/astrologer Levi ben Gershon (1288–1344) (also known as Gersonides or Leo de Balneolis) made several astronomical observations using a camera obscura with a Jacob's staff, describing methods to measure the angular diameters of the Sun, the Moon and the bright planets Venus and Jupiter. He determined the eccentricity of the Sun based on his observations of the summer and winter solstices in 1334. Levi also noted how the size of the aperture determined the size of the projected image. He wrote about his findings in Hebrew in his treatise Sefer Milhamot Ha-Shem (The Wars of the Lord) Book V Chapters 5 and 9.
1450 to 1600: Depiction, lenses, drawing aid, mirrors
Italian polymath Leonardo da Vinci (1452–1519), familiar with the work of Alhazen in Latin translation and having extensively studied the physics and physiological aspects of optics, wrote the oldest known clear description of the camera obscura in 1502, found in the Codex Atlanticus.
These descriptions, however, would remain unknown until Venturi deciphered and published them in 1797.
Da Vinci was clearly very interested in the camera obscura: over the years he drew approximately 270 diagrams of the camera obscura in his notebooks. He systematically experimented with various shapes and sizes of apertures and with multiple apertures (1, 2, 3, 4, 8, 16, 24, 28 and 32). He compared the working of the eye to that of the camera obscura and seemed especially interested in its capability of demonstrating basic principles of optics: the inversion of images through the pinhole or pupil, the non-interference of images and the fact that images are "all in all and all in every part".
The oldest known published drawing of a camera obscura is found in Dutch physician, mathematician and instrument maker Gemma Frisius’ 1545 book De Radio Astronomica et Geometrica, in which he described and illustrated how he used the camera obscura to study the solar eclipse of 24 January 1544.
Italian polymath Gerolamo Cardano described using a glass disc – probably a biconvex lens – in a camera obscura in his 1550 book De subtilitate, vol. I, Libri IV. He suggested using it to view "what takes place in the street when the sun shines", and advised using a very white sheet of paper as a projection screen so the colours would not be dull.
Sicilian mathematician and astronomer Francesco Maurolico (1494–1575) answered Aristotle's problem of how sunlight shining through rectangular holes can form round spots of light, or crescent-shaped spots during an eclipse, in his treatise Photismi de lumine et umbra (1521–1554). However, this was not published until 1611, after Johannes Kepler had published similar findings of his own.
Italian polymath Giambattista della Porta described the camera obscura in the 1558 first edition of his book series Magia Naturalis. He suggested using a convex lens to project the image onto paper and using this as a drawing aid. Della Porta compared the human eye to the camera obscura: "For the image is let into the eye through the eyeball just as here through the window". The popularity of della Porta's books helped spread knowledge of the camera obscura.
In his 1567 work La Pratica della Perspettiva, Venetian nobleman Daniele Barbaro (1513–1570) described using a camera obscura with a biconvex lens as a drawing aid, and pointed out that the picture is more vivid if the lens is partly covered so as to leave only an opening in the middle.
In his influential and meticulously annotated Latin edition of the works of Ibn al-Haytham and Witelo (1572), German mathematician Friedrich Risner proposed a portable camera obscura drawing aid: a lightweight wooden hut with lenses in each of its four walls that would project images of the surroundings onto a paper cube in the middle. The construction could be carried on two wooden poles. A very similar setup was illustrated in 1645 in Athanasius Kircher's influential book Ars Magna Lucis et Umbrae.
Around 1575, the Italian Dominican priest, mathematician, astronomer, and cosmographer Ignazio Danti designed a camera obscura gnomon and a meridian line for the Basilica of Santa Maria Novella, Florence, and he later had a massive gnomon built in the San Petronio Basilica in Bologna. The gnomon was used to study the movements of the Sun during the year and helped in determining the new Gregorian calendar, for which Danti took part in the commission appointed by Pope Gregory XIII; the reformed calendar was instituted in 1582.
In his 1585 book Diversarum Speculationum Mathematicarum, Venetian mathematician Giambattista Benedetti proposed using a mirror at a 45-degree angle to project the image upright. This leaves the image reversed left to right, but the arrangement would become common practice in later camera obscura boxes.
Giambattista della Porta added a "lenticular crystal" or biconvex lens to the camera obscura description in the 1589 second edition of Magia Naturalis. He also described use of the camera obscura to project hunting scenes, banquets, battles, plays, or anything desired on white sheets. Trees, forests, rivers, mountains "that are really so, or made by Art, of Wood, or some other matter" could be arranged on a plain in the sunshine on the other side of the camera obscura wall. Little children and animals (for instance handmade deer, wild boars, rhinos, elephants, and lions) could perform in this set. "Then, by degrees, they must appear, as coming out of their dens, upon the Plain: The Hunter he must come with his hunting Pole, Nets, Arrows, and other necessaries, that may represent hunting: Let there be Horns, Cornets, Trumpets sounded: those that are in the Chamber shall see Trees, Animals, Hunters Faces, and all the rest so plainly, that they cannot tell whether they be true or delusions: Swords drawn will glister in at the hole, that they will make people almost afraid."
Della Porta claimed to have shown such spectacles often to his friends. They admired it very much and could hardly be convinced by della Porta's explanations that what they had seen was really an optical trick.
1600 to 1650: Name coined, camera obscura telescopy, portable drawing aid in tents and boxes
Illustration: detail of Scheiner's Oculus hoc est (1619) frontispiece with a camera obscura's projected image reverted by a lens.
The earliest use of the term camera obscura is found in the 1604 book Ad Vitellionem Paralipomena by German mathematician, astronomer, and astrologer Johannes Kepler. Kepler discovered the working of the camera obscura by recreating its principle with a book replacing a shining body and sending threads from its edges through a many-cornered aperture in a table onto the floor where the threads recreated the shape of the book. He also realized that images are "painted" inverted and reversed on the retina of the eye and figured that this is somehow corrected by the brain.
In 1607, Kepler studied the Sun in his camera obscura and noticed a sunspot, but he thought it was Mercury transiting the Sun.
In his 1611 book Dioptrice, Kepler described how the projected image of the camera obscura can be improved and reverted with a lens. It is believed he later used a telescope with three lenses to revert the image in the camera obscura.
In 1611, Frisian/German astronomers David and Johannes Fabricius (father and son) studied sunspots with a camera obscura, after realizing that looking at the Sun directly through the telescope could damage their eyes. They are thought to have combined the telescope and the camera obscura into camera obscura telescopy (Surdin, V., and M. Kartashev, "Light in a dark room", Quantum 9.6 (1999): 40).
In 1612, Italian mathematician Benedetto Castelli wrote to his mentor, the Italian astronomer, physicist, engineer, philosopher, and mathematician Galileo Galilei about projecting images of the Sun through a telescope (invented in 1608) to study the recently discovered sunspots. Galilei wrote about Castelli's technique to the German Jesuit priest, physicist, and astronomer Christoph Scheiner.
From 1612 to at least 1630, Christoph Scheiner would keep on studying sunspots and constructing new telescopic solar-projection systems. He called these "Heliotropii Telioscopici", later contracted to helioscope. For his helioscope studies, Scheiner built a box around the viewing/projecting end of the telescope, which can be seen as the oldest known version of a box-type camera obscura. Scheiner also made a portable camera obscura.
In his 1613 book Opticorum Libri Sex Belgian Jesuit mathematician, physicist, and architect François d'Aguilon described how some charlatans cheated people out of their money by claiming they knew necromancy and would raise the specters of the devil from hell to show them to the audience inside a dark room. The image of an assistant with a devil's mask was projected through a lens into the dark room, scaring the uneducated spectators.
By 1620 Kepler used a portable camera obscura tent with a modified telescope to draw landscapes. It could be turned around to capture the surroundings in parts.
Dutch inventor Cornelis Drebbel is thought to have constructed a box-type camera obscura which corrected the inversion of the projected image. In 1622, he sold one to the Dutch poet, composer, and diplomat Constantijn Huygens, who used it to paint, recommended it to his artist friends, and described the device in letters to his parents.
German Orientalist, mathematician, inventor, poet, and librarian Daniel Schwenter wrote in his 1636 book Deliciae Physico-Mathematicae about an instrument that a man from Pappenheim had shown him, which enabled movement of a lens to project more of a scene through a camera obscura. It consisted of a ball as big as a fist, through which a hole (AB) was made with a lens attached on one side (B). This ball was placed inside two halves of a hollow ball that were then glued together (CD), in which it could be turned around. This device was attached to a wall of the camera obscura (EF). This universal joint mechanism was later called a scioptic ball.
In his 1637 book Dioptrique French philosopher, mathematician and scientist René Descartes suggested placing an eye of a recently dead man (or if a dead man was unavailable, the eye of an ox) into an opening in a darkened room and scraping away the flesh at the back until one could see the inverted image formed on the retina.
Italian Jesuit philosopher, mathematician, and astronomer Mario Bettini wrote about making a camera obscura with twelve holes in his Apiaria universae philosophiae mathematicae (1642). When a foot soldier would stand in front of the camera, a twelve-person army of soldiers making the same movements would be projected.
French mathematician, Minim friar, and painter of anamorphic art Jean-François Nicéron (1613–1646) wrote about the camera obscura with convex lenses. He explained how the camera obscura could be used by painters to achieve perfect perspective in their work. He also complained how charlatans abused the camera obscura to fool witless spectators and make them believe that the projections were magic or occult science. These writings were published in a posthumous version of La Perspective Curieuse (1652).
1650 to 1800: Introduction of the magic lantern, popular portable box-type drawing aid, painting aid
The use of the camera obscura to project special shows to entertain an audience seems to have remained very rare. A description of what was most likely such a show in 1656 in France was penned by the poet Jean Loret, who expressed how rare and novel it was. Parisian society was presented with upside-down images of palaces, ballet dancing and sword fighting. Loret felt somewhat frustrated that he did not know the secret that made the spectacle possible. There are several clues that this may have been a camera obscura show rather than a very early magic lantern show, especially the upside-down image and Loret's surprise that the energetic movements made no sound.
German Jesuit scientist Gaspar Schott heard from a traveler about a small camera obscura device he had seen in Spain, which one could carry under one arm and could be hidden under a coat. He then constructed his own sliding box camera obscura, which could focus by sliding a wooden box part fitted inside another wooden box part. He wrote about this in his 1657 Magia universalis naturæ et artis (volume 1 – book 4 "Magia Optica" pages 199–201).
By 1659 the magic lantern was introduced and partly replaced the camera obscura as a projection device, while the camera obscura mostly remained popular as a drawing aid. The magic lantern can be regarded as a (box-type) camera obscura device that projects images rather than actual scenes. In 1668, Robert Hooke described the difference for an installation to project the delightful "various apparitions and disappearances, the motions, changes and actions" by means of a broad convex-glass in a camera obscura setup: "if the picture be transparent, reflect the rays of the sun so as that they may pass through it towards the place where it is to be represented; and let the picture be encompassed on every side with a board or cloth that no rays may pass beside it. If the object be a statue or some living creature, then it must be very much enlightened by casting the sun beams on it by refraction, reflexion, or both." For models that can't be inverted, like living animals or candles, he advised: "let two large glasses of convenient spheres be placed at appropriate distances".
The 17th-century Dutch Masters, such as Johannes Vermeer, were known for their magnificent attention to detail. It has been widely speculated that they made use of the camera obscura, but the extent of its use by artists of this period remains a matter of fierce contention, recently revived by the Hockney–Falco thesis.
German philosopher Johann Sturm published an illustrated article about the construction of a portable camera obscura box with a 45° mirror and an oiled paper screen in the first volume of the proceedings of the Collegium Curiosum, Collegium Experimentale, sive Curiosum (1676).
Johann Zahn's Oculus Artificialis Teledioptricus Sive Telescopium, published in 1685, contains many descriptions, diagrams, illustrations and sketches of both the camera obscura and the magic lantern. A hand-held device with a mirror-reflex mechanism was first proposed by Johann Zahn in 1685, a design that would later be used in photographic cameras.
The scientist Robert Hooke presented a paper in 1694 to the Royal Society, in which he described a portable camera obscura. It was a cone-shaped box which fit onto the head and shoulders of its user.
From the beginning of the 18th century, craftsmen and opticians would make camera obscura devices in the shape of books, which were much appreciated by lovers of optical devices.
One chapter in Conte Algarotti's Saggio sopra la Pittura (1764) is dedicated to the use of a camera obscura ("optic chamber") in painting.
By the 18th century, following developments by Robert Boyle and Robert Hooke, more easily portable models in boxes became available. These were extensively used by amateur artists while on their travels, but they were also employed by professionals, including Paul Sandby and Joshua Reynolds, whose camera (disguised as a book) is now in the Science Museum in London. Such cameras were later adapted by Joseph Nicephore Niepce, Louis Daguerre and William Fox Talbot for creating the first photographs.
Role in the modern age
While the technical principles of the camera obscura have been known since antiquity, the broad use of the technical concept in producing images with a linear perspective in paintings, maps, theatre setups, architecture and, later, photographic images and movies began in the Western Renaissance and the scientific revolution. Although Alhazen (Ibn al-Haytham) had already observed an optical effect and developed a pioneering theory of the refraction of light, he was less interested in producing images with it (compare Hans Belting 2005); the society he lived in was even hostile toward personal images (compare Aniconism in Islam).
Western artists and philosophers used the Middle Eastern findings in new frameworks of epistemic relevance. For example, Leonardo da Vinci used the camera obscura as a model of the eye, René Descartes for the eye and mind, and John Locke started to use the camera obscura as a metaphor of human understanding per se. The modern use of the camera obscura as an epistemic machine had important side effects for science (Don Ihde, "Art Precedes Science: or Did the Camera Obscura Invent Modern Science?", in Instruments in Art and Science: On the Architectonics of Cultural Boundaries in the 17th Century, eds. Helmar Schramm, Ludger Schwarte, Jan Lazardzig, Walter de Gruyter, 2008).
While the use of the camera obscura has waxed and waned, one can still be built using a few simple items: a box, tracing paper, tape, foil, a box cutter, a pencil, and a blanket to keep out the light. Homemade camera obscuras are popular primary- and secondary-school science or art projects.
In 1827, critic Vergnaud complained about the frequent use of camera obscura in producing many of the paintings at that year's Salon exhibition in Paris: "Is the public to blame, the artists, or the jury, when history paintings, already rare, are sacrificed to genre painting, and what genre at that!... that of the camera obscura." (translated from French)
British photographer Richard Learoyd has specialized in making pictures of his models and motifs with a camera obscura instead of a modern camera, combining it with the ilfochrome process which creates large grainless prints.
Other contemporary visual artists who have explicitly used camera obscura in their artworks include James Turrell, Abelardo Morell, Minnie Weisz, Robert Calafiore, Vera Lutter, Marja Pirilä, and Shi Guorui.
Digital cameras
Pinhole objectives machined out of aluminium, working on the camera obscura principle, are commercially available. Because the image formed is very dim, long exposure times or high sensor sensitivity must be used in digital photography. The resulting image appears hazy and is not very sharp, even when the objective is attached to a state-of-the-art camera body.
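A back-of-the-envelope Python illustration of why such objectives need long exposures follows; the pinhole diameter, sensor distance and reference aperture are assumed example values, not manufacturer figures.

```python
# Illustration of the effective aperture of a pinhole objective and the extra
# exposure it implies. All numbers are assumed example values.
pinhole_diameter_mm = 0.25
distance_to_sensor_mm = 45.0      # roughly a typical flange distance

f_number = distance_to_sensor_mm / pinhole_diameter_mm        # ~f/180
reference_f_number = 8.0                                      # a common lens aperture
exposure_factor = (f_number / reference_f_number) ** 2        # gathered light scales as 1/N^2

print(f"Pinhole works at about f/{f_number:.0f}")
print(f"Roughly {exposure_factor:.0f}x the exposure time needed compared with f/{reference_f_number:.0f}")
```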
See also
Bonnington Pavilion – the first Scottish camera obscura, dating from 1708
Black mirror
Clifton Observatory
Camera lucida
History of cinema
Pepper's ghost
Notes
References
Sources
Hill, Donald R. (1993), "Islamic Science and Engineering", Edinburgh University Press, page 70.
Lindberg, D.C. (1976), "Theories of Vision from Al Kindi to Kepler", The University of Chicago Press, Chicago and London.
Nazeef, Mustapha (1940), "Ibn Al-Haitham As a Naturalist Scientist", published proceedings of the Memorial Gathering of Al-Hacan Ibn Al-Haitham, 21 December 1939, Egypt Printing.
Needham, Joseph (1986). Science and Civilization in China: Volume 4, Physics and Physical Technology, Part 1, Physics. Taipei: Caves Books Ltd.
Omar, S.B. (1977). "Ibn al-Haitham's Optics", Bibliotheca Islamica, Chicago.
Lefèvre, Wolfgang (ed.), Inside the Camera Obscura: Optics and Art under the Spell of the Projected Image. Max Planck Institute for the History of Science.
Walther, Burkhard; Zajfert, Przemek: Camera Obscura Heidelberg. Black-and-white photography and texts. Historical and contemporary literature. edition merid, Stuttgart, 2006.
External links
1500s introductions
1502 beginnings
17th-century neologisms
Optical toys
Optical devices
Artistic techniques
Precursors of photography
Precursors of film | Camera obscura | Materials_science,Engineering | 7,059 |
496,297 | https://en.wikipedia.org/wiki/Spheroplast | A spheroplast (or sphaeroplast in British usage) is a microbial cell from which the cell wall has been almost completely removed, as by the action of penicillin or lysozyme. According to some definitions, the term is used to describe Gram-negative bacteria. According to other definitions, the term also encompasses yeasts. The name spheroplast stems from the fact that after the microbe's cell wall is digested, membrane tension causes the cell to acquire a characteristic spherical shape. Spheroplasts are osmotically fragile, and will lyse if transferred to a hypotonic solution.
When used to describe Gram-negative bacteria, the term spheroplast refers to cells from which the peptidoglycan component but not the outer membrane component of the cell wall has been removed.
Spheroplast formation
Antibiotic-induced spheroplasts
Various antibiotics convert Gram-negative bacteria into spheroplasts. These include peptidoglycan synthesis inhibitors such as fosfomycin, vancomycin, moenomycin, lactivicin and the β-lactam antibiotics. Antibiotics that inhibit biochemical pathways directly upstream of peptidoglycan synthesis induce spheroplasts too (e.g. fosmidomycin, phosphoenolpyruvate).
In addition to the above antibiotics, inhibitors of protein synthesis (e.g. chloramphenicol, oxytetracycline, several aminoglycosides) and inhibitors of folic acid synthesis (e.g. trimethoprim, sulfamethoxazole) also cause Gram-negative bacteria to form spheroplasts.
Enzyme-induced spheroplasts
The enzyme lysozyme causes Gram-negative bacteria to form spheroplasts, but only if a membrane permeabilizer such as lactoferrin or ethylenediaminetetraacetate (EDTA) is used to ease the enzyme's passage through the outer membrane. EDTA acts as a permeabilizer by binding to divalent ions such as Ca2+ and removing them from the outer membrane.
The yeast Candida albicans can be converted to spheroplasts using the enzymes lyticase, chitinase and β-glucuronidase.
Uses and applications
Antibiotic discovery
From the 1960s into the 1990s, Merck and Co. used a spheroplast screen as a primary method for discovery of antibiotics that inhibit cell wall biosynthesis. In this screen devised by Eugene Dulaney, growing bacteria were exposed to test substances under hypertonic conditions. Inhibitors of cell wall synthesis caused growing bacteria to form spheroplasts. This screen enabled the discovery of fosfomycin, cephamycin C, thienamycin and several carbapenems.
Patch clamping
Specially prepared giant spheroplasts of Gram-negative bacteria can be used to study the function of bacterial ion channels through a technique called patch clamp, which was originally designed for characterizing the behavior of neurons and other excitable cells. To prepare giant spheroplasts, bacteria are treated with a septation inhibitor (e.g. cephalexin). This causes the bacteria to form filaments, elongated cells that lack internal cross-walls. After a period of time, the cell walls of the filaments are digested, and the bacteria collapse into very large spheres surrounded by just their cytoplasmic and outer membranes. The membranes can then be analyzed on a patch clamp apparatus to determine the phenotype of the ion channels embedded in it. It is also common to overexpress a particular channel to amplify its effect and make it easier to characterize.
The technique of patch clamping giant E. coli spheroplasts has been used to study the native mechanosensitive channels (MscL, MscS, and MscM) of E. coli. It has been extended to study other heterologously expressed ion channels and it has been shown that the giant E. coli spheroplast can be used as an ion-channel expression system comparable to the Xenopus oocyte.
Cell lysis
Yeast cells are normally protected by a thick cell wall which makes extraction of cellular proteins difficult. Enzymatic digestion of the cell wall with zymolyase, creating spheroplasts, renders the cells vulnerable to easy lysis with detergents or rapid osmolar pressure changes.
Transfection
Bacterial spheroplasts, with suitable recombinant DNA inserted into them, can be used to transfect animal cells. Spheroplasts with recombinant DNA are introduced into the medium containing animal cells and are fused with them using polyethylene glycol (PEG). With this method, nearly 100% of the animal cells may take up the foreign DNA. In experiments following a modified Hanahan protocol using calcium chloride in E. coli, spheroplasts were found to transform at a frequency of about 4.9×10⁻⁴.
See also
Bacterial morphological plasticity
Protoplast
L-form bacteria
References
External links
Bacteria | Spheroplast | Biology | 1,102 |
16,859,795 | https://en.wikipedia.org/wiki/SIX2 | Homeobox protein SIX2 is a protein that in humans is encoded by the SIX2 gene.
References
Further reading
Transcription factors | SIX2 | Chemistry,Biology | 27 |
48,432,354 | https://en.wikipedia.org/wiki/Leccinum%20armeniacum | Leccinum armeniacum is a species of bolete fungus in the family Boletaceae. Found in the United States, it was described as new to science in 1971 by Harry Delbert Thiers.
See also
List of Leccinum species
List of North American boletes
References
Fungi described in 1971
Fungi of the United States
armeniacum
Taxa named by Harry Delbert Thiers
Fungi without expected TNC conservation status
Fungus species | Leccinum armeniacum | Biology | 87 |
1,410,400 | https://en.wikipedia.org/wiki/Persistent%20organic%20pollutant | Persistent organic pollutants (POPs) are organic compounds that are resistant to degradation through chemical, biological, and photolytic processes. They are toxic and adversely affect human health and the environment around the world. Because they can be transported by wind and water, most POPs generated in one country can and do affect people and wildlife far from where they are used and released.
The effect of POPs on human and environmental health was discussed, with intention to eliminate or severely restrict their production, by the international community at the Stockholm Convention on Persistent Organic Pollutants in 2001.
Most POPs are pesticides or insecticides, and some are also solvents, pharmaceuticals, and industrial chemicals. Although some POPs arise naturally (e.g. from volcanoes), most are man-made. The "dirty dozen" POPs identified by the Stockholm Convention include aldrin, chlordane, dieldrin, endrin, heptachlor, HCB, mirex, toxaphene, PCBs, DDT, dioxins, and polychlorinated dibenzofurans. However, there have since been many new POPs added (e.g. PFOS).
Consequences of persistence
POPs typically are halogenated organic compounds (see lists below) and as such exhibit high lipid solubility. For this reason, they bioaccumulate in fatty tissues. Halogenated compounds also exhibit great stability reflecting the nonreactivity of C-Cl bonds toward hydrolysis and photolytic degradation. The stability and lipophilicity of organic compounds often correlates with their halogen content, thus polyhalogenated organic compounds are of particular concern. They exert their negative effects on the environment through two processes: long range transport, which allows them to travel far from their source, and bioaccumulation, which reconcentrates these chemical compounds to potentially dangerous levels. Compounds that make up POPs are also classed as PBTs (persistent, bioaccumulative and toxic) or TOMPs (toxic organic micro pollutants).
Long-range transport
POPs enter the gas phase under certain environmental temperatures and volatilize from soils, vegetation, and bodies of water into the atmosphere, resisting breakdown reactions in the air, to travel long distances before being re-deposited. This results in accumulation of POPs in areas far from where they were used or emitted, specifically environments where POPs have never been introduced such as Antarctica, and the Arctic Circle. POPs can be present as vapors in the atmosphere or bound to the surface of solid particles (aerosols). A determining factor for the long-range transport is the fraction of a POP that is adsorbed on aerosols. In adsorbed form it is – as opposed to the gas phase – protected from photo-oxidation, i.e. direct photolysis as well as oxidation by OH radicals or ozone.
POPs have low solubility in water but are easily captured by solid particles, and are soluble in organic fluids (oils, fats, and liquid fuels). POPs are not easily degraded in the environment due to their stability and low decomposition rates. Because of this capacity for long-range transport, POP environmental contamination is extensive, even in areas where POPs have never been used, and they will remain in these environments for years after restrictions are implemented, owing to their resistance to degradation.
Bioaccumulation
Bioaccumulation of POPs is typically associated with the compounds' high lipid solubility and ability to accumulate in the fatty tissues of living organisms, including human tissues, for long periods of time. Persistent chemicals tend to reach higher concentrations and are eliminated more slowly. Dietary accumulation or bioaccumulation is another hallmark characteristic of POPs: as POPs move up the food chain, they increase in concentration as they are processed and metabolized in certain tissues of organisms. The natural capacity of animals' gastrointestinal tracts to concentrate ingested chemicals, along with the poorly metabolized and hydrophobic nature of POPs, makes such compounds highly susceptible to bioaccumulation. Thus POPs not only persist in the environment, but as they are taken in by animals they also bioaccumulate, increasing their concentration and toxicity in the environment. This increase in concentration is called biomagnification, whereby organisms higher up in the food chain have a greater accumulation of POPs, as illustrated in the sketch below. Bioaccumulation and long-range transport are the reason why POPs can accumulate in organisms like whales, even in remote areas like Antarctica.
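The following Python sketch is purely illustrative of how concentrations compound up a food chain; the trophic levels, starting concentration and per-step biomagnification factor are assumed example values, not measurements.

```python
# Purely illustrative sketch of biomagnification up a food chain; all values
# are assumed placeholders.
levels = ["water", "plankton", "small fish", "large fish", "seabird"]
concentration_mg_per_kg = 1e-6    # arbitrary starting concentration in water
biomagnification_factor = 10      # assumed concentration increase per trophic step

for level in levels:
    print(f"{level:>10}: {concentration_mg_per_kg:.2e} mg/kg")
    concentration_mg_per_kg *= biomagnification_factor
```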
Stockholm Convention on Persistent Organic Pollutants
The Stockholm Convention was adopted and put into practice by the United Nations Environment Programme (UNEP) on May 22, 2001. The UNEP decided that POP regulation needed to be addressed globally for the future. The purpose statement of the agreement is "to protect human health and the environment from persistent organic pollutants." As of 2024, 185 countries plus the European Union have ratified the Stockholm Convention. The convention and its participants have recognized the potential human and environmental toxicity of POPs. They recognize that POPs have the potential for long-range transport, bioaccumulation and biomagnification. The convention seeks to study and then judge whether or not a number of chemicals that have been developed with advances in technology and science can be categorized as POPs. The initial meeting in 2001 made a preliminary list, termed the "dirty dozen", of chemicals that are classified as POPs. As of 2024, the United States has signed the Stockholm Convention but has not ratified it. A handful of other countries have likewise not ratified the convention, but most countries in the world have done so.
Compounds on the Stockholm Convention list
In May 1995, the UNEP Governing Council investigated POPs. Initially the Convention recognized only twelve POPs for their adverse effects on human health and the environment, placing a global ban on these particularly harmful and toxic compounds and requiring its parties to take measures to eliminate or reduce the release of POPs in the environment.
Aldrin, an insecticide used in soils to kill termites, grasshoppers, Western corn rootworm, and others, is also known to kill birds, fish, and humans. Humans are primarily exposed to aldrin through dairy products and animal meats.
Chlordane, an insecticide used to control termites and on a range of agricultural crops, is known to be lethal in various species of birds, including mallard ducks, bobwhite quail, and pink shrimp; it is a chemical that remains in the soil with a reported half-life of one year. Chlordane has been postulated to affect the human immune system and is classified as a possible human carcinogen. Chlordane air pollution is believed the primary route of human exposure.
Dieldrin, a pesticide used to control termites, textile pests, insect-borne diseases and insects living in agricultural soils. In soil and insects, aldrin can be oxidized, resulting in rapid conversion to dieldrin. Dieldrin's half-life is approximately five years. Dieldrin is highly toxic to fish and other aquatic animals, particularly frogs, whose embryos can develop spinal deformities after exposure to low levels. Dieldrin has been linked to Parkinson's disease, breast cancer, and classified as immunotoxic, neurotoxic, with endocrine disrupting capacity. Dieldrin residues have been found in air, water, soil, fish, birds, and mammals. Human exposure to dieldrin primarily derives from food.
Endrin, an insecticide sprayed on the leaves of crops, and used to control rodents. Animals can metabolize endrin, so fatty tissue accumulation is not an issue, however the chemical has a long half-life in soil for up to 12 years. Endrin is highly toxic to aquatic animals and humans as a neurotoxin. Human exposure results primarily through food.
Heptachlor, a pesticide primarily used to kill soil insects and termites, along with cotton insects, grasshoppers, other crop pests, and malaria-carrying mosquitoes. Heptachlor, even at very low doses, has been associated with the decline of several wild bird populations – Canada geese and American kestrels. Laboratory tests have shown high-dose heptachlor to be lethal, with adverse behavioral changes and reduced reproductive success at low doses, and it is classified as a possible human carcinogen. Human exposure primarily results from food.
Hexachlorobenzene (HCB) was first introduced in 1945–59 to treat seeds because it can kill fungi on food crops. HCB-treated seed grain consumption is associated with photosensitive skin lesions, colic, debilitation, and a metabolic disorder called porphyria turcica, which can be lethal. Mothers who pass HCB to their infants through the placenta and breast milk had limited reproductive success including infant death. Human exposure is primarily from food.
Mirex, an insecticide used against ants and termites or as a flame retardant in plastics, rubber, and electrical goods. Mirex is one of the most stable and persistent pesticides, with a half-life of up to 10 years. Mirex is toxic to several plant, fish and crustacean species, with suggested carcinogenic capacity in humans. Humans are exposed primarily through animal meat, fish, and wild game.
Toxaphene, an insecticide used on cotton, cereal, grain, fruits, nuts, and vegetables, as well as for tick and mite control in livestock. Widespread toxaphene use in the US and chemical persistence, with a half-life of up to 12 years in soil, results in residual toxaphene in the environment. Toxaphene is highly toxic to fish, inducing dramatic weight loss and reduced egg viability. Human exposure primarily results from food. While human toxicity to direct toxaphene exposure is low, the compound is classified as a possible human carcinogen.
Polychlorinated biphenyls (PCBs), used as heat exchange fluids, in electrical transformers and capacitors, and as additives in paint, carbonless copy paper, and plastics. Persistence varies with the degree of halogenation; the half-life is estimated at 10 years. PCBs are toxic to fish at high doses, and associated with spawning failure at low doses. Human exposure occurs through food, and is associated with reproductive failure and immune suppression. Immediate effects of PCB exposure include pigmentation of nails and mucous membranes and swelling of the eyelids, along with fatigue, nausea, and vomiting. Effects are transgenerational, as the chemical can persist in a mother's body for up to 7 years, resulting in developmental delays and behavioral problems in her children. Food contamination has led to large-scale PCB exposure.
Dichlorodiphenyltrichloroethane (DDT) is probably the most infamous POP. It was widely used as insecticide during WWII to protect against malaria and typhus. After the war, DDT was used as an agricultural insecticide. In 1962, the American biologist Rachel Carson published Silent Spring, describing the impact of DDT spraying on the US environment and human health. DDT's persistence in the soil for up to 10–15 years after application has resulted in widespread and persistent DDT residues throughout the world including the arctic, even though it has been banned or severely restricted in most of the world. DDT is toxic to many organisms including birds where it is detrimental to reproduction due to eggshell thinning. DDT can be detected in foods from all over the world and food-borne DDT remains the greatest source of human exposure. Short-term acute effects of DDT on humans are limited, however long-term exposure has been associated with chronic health effects including increased risk of cancer and diabetes, reduced reproductive success, and neurological disease.
Dioxins are unintentional by-products of high-temperature processes, such as incomplete combustion and pesticide production. Dioxins are typically emitted from the burning of hospital waste, municipal waste, and hazardous waste, along with automobile emissions, peat, coal, and wood. Dioxins have been associated with several adverse effects in humans, including immune and enzyme disorders, chloracne, and are classified as a possible human carcinogen. In laboratory studies of dioxin effects an increase in birth defects and stillbirths, and lethal exposure have been associated with the substances. Food, particularly from animals, is the principal source of human exposure to dioxins. Dioxins were present in Agent Orange, which was used by the United States in chemical warfare against Vietnam and caused devastating multi-generational effects in both Vietnamese and American civilians.
Polychlorinated dibenzofurans are by-products of high-temperature processes, such as incomplete combustion after waste incineration or in automobiles, pesticide production, and polychlorinated biphenyl production. Structurally similar to dioxins, the two compound classes share toxic effects. Furans persist in the environment and are classified as possible human carcinogens. Human exposure to furans primarily results from food, particularly animal products.
New POPs on the Stockholm Convention list
Since 2001, this list has been expanded to include some polycyclic aromatic hydrocarbons (PAHs), brominated flame retardants, and other compounds. Additions to the initial 2001 Stockholm Convention list are the following POPs:
Chlordecone, a synthetic chlorinated organic compound, is primarily used as an agricultural pesticide, related to DDT and Mirex. Chlordecone is toxic to aquatic organisms, and classified as a possible human carcinogen. Many countries have banned chlordecone sale and use, or intend to destroy stockpiles.
α-Hexachlorocyclohexane (α-HCH) and β-Hexachlorocyclohexane (β-HCH) are insecticides as well as by-products in the production of lindane. α-HCH and β-HCH are highly persistent in the water of colder regions. α-HCH and β-HCH have been linked to Parkinson's and Alzheimer's disease.
Hexabromodiphenyl ether (hexaBDE) and heptabromodiphenyl ether (heptaBDE) are main components of commercial octabromodiphenyl ether (octaBDE). Commercial octaBDE is highly persistent in the environment, whose only degradation pathway is through debromination and the production of bromodiphenyl ethers, which themselves can be toxic.
Lindane (γ-hexachlorocyclohexane), a pesticide used as a broad-spectrum insecticide for seed, soil, leaf, tree and wood treatment, and against ectoparasites in animals and humans (head lice and scabies). Lindane undergoes rapid biomagnification and is immunotoxic, neurotoxic and carcinogenic, and has been linked to liver and kidney damage as well as adverse reproductive and developmental effects in various laboratory animals. Production of lindane unintentionally produces two other POPs, α-HCH and β-HCH.
Pentachlorobenzene (PeCB), is a pesticide and unintentional byproduct. PeCB has also been used in PCB products, dyestuff carriers, as a fungicide, a flame retardant, and a chemical intermediate. This compound is moderately toxic to humans, whilst being highly toxic to aquatic organisms.
Tetrabromodiphenyl ether (tetraBDE) and pentabromodiphenyl ether (pentaBDE) are industrial chemicals and the main components of commercial pentabromodiphenyl ether (pentaBDE). This pair of molecules have been detected in humans in all regions of the world.
Perfluorooctanesulfonic acid (PFOS) and related compounds are extremely persistent and readily biomagnify.
Endosulfans are a group of chlorinated insecticides used to control pests on crops such as coffee, cotton, rice, sorghum and soybeans, as well as tsetse flies and ectoparasites of cattle. They are also used as a wood preservative. Global use and manufacture of endosulfan was banned under the Stockholm Convention in 2011, although many countries had previously banned or introduced phase-outs of the chemical before the ban was announced. Endosulfans are toxic to humans and to aquatic and terrestrial organisms, and have been linked to congenital physical disorders, mental retardation, and death. Their negative health effects are primarily linked to their endocrine-disrupting capacity, acting as an antiandrogen.
Hexabromocyclododecane (HBCD) is a brominated flame retardant primarily used in thermal insulation in the building industry. HBCD is persistent, toxic and ecotoxic, with bioaccumulative and long-range transport properties.
Health effects
POP exposure may cause developmental defects, chronic illnesses, and death. Some POPs are classified as carcinogens by the IARC, and exposure has possibly been linked to breast cancer. Many POPs are capable of endocrine disruption within the reproductive system, the central nervous system, or the immune system. People and animals are exposed to POPs mostly through their diet, occupationally, or while growing in the womb. For humans not exposed to POPs through accidental or occupational means, over 90% of exposure comes from animal-product foods, because POPs bioaccumulate in fat tissues and biomagnify through the food chain. In general, POP serum levels increase with age and tend to be higher in females than males.
Studies have investigated the correlation between low level exposure of POPs and various diseases. In order to assess disease risk due to POPs in a particular location, government agencies may produce a human health risk assessment which takes into account the pollutants' bioavailability and their dose-response relationships.
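As a minimal sketch of the kind of arithmetic behind such a risk assessment, the Python example below compares an estimated average daily dose against a reference dose to give a hazard quotient; all numerical values are assumed placeholders, not regulatory figures.

```python
# Simplified hazard-quotient calculation for dietary exposure to a pollutant.
# All numbers are assumed placeholders, not regulatory values.
concentration_mg_per_kg_food = 0.002    # pollutant level measured in a food item
intake_kg_per_day = 0.1                 # daily intake of that food
body_weight_kg = 70.0
reference_dose_mg_per_kg_day = 5e-6     # assumed tolerable daily dose

average_daily_dose = concentration_mg_per_kg_food * intake_kg_per_day / body_weight_kg
hazard_quotient = average_daily_dose / reference_dose_mg_per_kg_day

print(f"Average daily dose: {average_daily_dose:.2e} mg per kg body weight per day")
print(f"Hazard quotient: {hazard_quotient:.2f} (values above 1 suggest potential concern)")
```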
Endocrine disruption
The majority of POPs are known to disrupt normal functioning of the endocrine system. Low-level exposure to POPs during critical developmental periods of the fetus, newborn and child can have a lasting effect throughout the lifespan. A 2002 study summarizes data on endocrine disruption and health complications from exposure to POPs during critical developmental stages in an organism's lifespan. The study asked whether chronic, low-level exposure to POPs can affect the endocrine system and development of organisms from different species. It found that exposure to POPs during a critical developmental window can produce permanent changes in the organism's path of development, whereas exposure during non-critical windows may not lead to detectable diseases and health complications later in life. In wildlife, the critical developmental windows are in utero, in ovo, and during reproductive periods. In humans, the critical window is during fetal development.
Reproductive system
The same 2002 study that found evidence linking POPs to endocrine disruption also linked low-dose POP exposure to reproductive health effects. The study reported that POP exposure can lead to negative health effects, especially in the male reproductive system, such as decreased sperm quality and quantity, altered sex ratio and early onset of puberty. For females exposed to POPs, altered reproductive tissues, adverse pregnancy outcomes and endometriosis have been reported.
Gestational weight gain and newborn head circumference
A Greek study from 2014 investigated the link between maternal weight gain during pregnancy, maternal PCB exposure, PCB levels in newborn infants, birth weight, gestational age, and head circumference. Higher prenatal POP exposure was associated with lower birth weight and smaller head circumference, but only in mothers with either excessive or inadequate weight gain during pregnancy. No correlation between POP exposure and gestational age was found.
A 2013 case-control study conducted in 2009 in Indian mothers and their offspring showed that prenatal exposure to two types of organochlorine pesticides (HCH, and DDT with its metabolite DDE) impaired fetal growth and reduced birth weight, length, head circumference and chest circumference.
Health effects of PFAS
Additive and synergistic effects
Evaluating the health effects of POPs is very challenging in the laboratory setting. For organisms exposed to a mixture of POPs, the effects are usually assumed to be additive. In principle, however, mixtures of POPs can produce synergistic effects, in which the toxicity of each compound is enhanced (or depressed) by the presence of the other compounds in the mixture; the combined effect can then far exceed the effect predicted by simple addition.
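A minimal sketch of the distinction, using invented numbers rather than measured toxicities: under the additive (concentration-addition) assumption the mixture's expected potency is the sum of each compound's toxic units, while a synergistic mixture is one whose observed potency exceeds that sum.

```python
# Illustrative only: compound names, concentrations and EC50 values are
# invented for the example, not measured data for any real POP mixture.

# exposure concentration and EC50 (concentration producing a 50% effect),
# in the same arbitrary units, for three hypothetical compounds
mixture = {
    "compound_A": {"conc": 2.0, "ec50": 10.0},
    "compound_B": {"conc": 1.0, "ec50": 4.0},
    "compound_C": {"conc": 0.5, "ec50": 5.0},
}

# Toxic units: each compound's concentration as a fraction of its EC50.
toxic_units = {name: d["conc"] / d["ec50"] for name, d in mixture.items()}

# Additive (concentration-addition) expectation: mixture potency is the sum.
additive_tu = sum(toxic_units.values())

# A hypothetical measurement of the mixture's overall potency, in toxic units.
observed_tu = 1.2

print(f"per-compound toxic units: {toxic_units}")
print(f"additive expectation:     {additive_tu:.2f} TU")
print(f"observed mixture potency: {observed_tu:.2f} TU")
if observed_tu > additive_tu:
    print("observed > additive -> the mixture behaves synergistically here")
else:
    print("no synergy indicated under this toy comparison")
```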
In urban areas and indoor environments
Traditionally it was thought that human exposure to POPs occurred primarily through food, but the indoor pollution patterns that characterize certain POPs have challenged this notion. Recent studies of indoor dust and air have implicated indoor environments as major sources of human exposure via inhalation and ingestion. Given the modern trend of spending a larger proportion of life indoors, significant indoor POP pollution is likely a major route of human POP exposure. Several studies have shown that indoor (air and dust) POP levels exceed outdoor (air and soil) POP concentrations.
In rainwater
Control and removal in the environment
Current studies aimed at minimizing POPs in the environment are investigating their behavior in photocatalytic oxidation reactions. The POPs most commonly found in humans and in aquatic environments are the main subjects of these experiments. Aromatic and aliphatic degradation products have been identified in these reactions. Photochemical degradation is negligible compared to photocatalytic degradation. A method explored for removing POPs from marine environments is adsorption, in which an adsorbable solute comes into contact with a solid that has a porous surface structure. This technique was investigated by Mohamed Nageeb Rashed of Aswan University, Egypt. Current efforts focus more on banning the use and production of POPs worldwide than on removing them.
See also
Aarhus Protocol on Persistent Organic Pollutants
Center for International Environmental Law (CIEL)
International POPs Elimination Network (IPEN)
Silent Spring
Environmental Persistent Pharmaceutical Pollutant (EPPP)
Persistent, bioaccumulative and toxic substances (PBT)
Tetraethyllead
Triclocarban
Triclosan
References
External links
World Health Organization Persistent Organic Pollutants: Impact on Child Health
Pops.int, Stockholm Convention on Persistent Organic Pollutants
Resources on Persistent Organic Pollutants (POPs)
Monarpop.at, POP monitoring in the Alpine region (Europe)
Biodegradable waste management
Ecotoxicology
Environmental effects of pesticides
Persistent organic pollutants
Pollutants
Pollution | Persistent organic pollutant | Chemistry | 4,697 |
5,541,228 | https://en.wikipedia.org/wiki/Fragrance%20extraction | Fragrance extraction refers to the separation process of aromatic compounds from raw materials, using methods such as distillation, solvent extraction, expression, sieving, or enfleurage. The results of the extracts are either essential oils, absolutes, concretes, or butters, depending on the amount of waxes in the extracted product.
To a certain extent, all of these techniques tend to produce an extract with an aroma that differs from the aroma of the raw materials. Heat, chemical solvents, or exposure to oxygen in the extraction process may denature some aromatic compounds, either changing their odour character or rendering them odourless, and the proportion of each aromatic component that is extracted can differ.
Maceration/solvent extraction
Certain plant materials contain too little volatile oil to undergo expression, or their chemical components are too delicate and easily denatured by the high heat used in hydrodistillation. Instead, the oils are extracted with solvents.
Organic solvent extraction
Organic solvent extraction is the most common and most economically important technique for extracting aromatics in the modern perfume industry. Raw materials are submerged and agitated in a solvent that can dissolve the desired aromatic compounds. Commonly used solvents for maceration/solvent extraction include hexane and dimethyl ether.
In organic solvent extraction, aromatic compounds are obtained along with other hydrophobic soluble substances such as waxes and pigments, since these solvents effectively remove all hydrophobic compounds in the raw material. The process can last anywhere from hours to months. Fragrant compounds from woody and fibrous plant materials are often obtained in this manner, as are all aromatics from animal sources. The technique can also be used to extract odorants that are too volatile for distillation or easily denatured by heat. The solvent is then removed by vacuum processing or low-temperature distillation and reclaimed for re-use, leaving behind a waxy mass known as a concrete: a mixture of essential oil, waxes, resins, and other lipophilic (oil-soluble) plant material.
Although highly fragrant, concretes are too viscous – even solid – at room temperature to be useful, owing to the presence of high-molecular-weight, non-fragrant waxes and resins. Another solvent, often ethyl alcohol, which dissolves only the fragrant low-molecular-weight compounds, must be used to extract the fragrant oil from the concrete. The alcohol is removed by a second distillation, leaving behind the absolute. Such extracts, from plants such as jasmine and rose, are called absolutes.
Because of the low temperatures used in this process, the absolute may be more faithful to the original scent of the raw material than a distilled oil, since distillation subjects the material to high heat.
Supercritical fluid extraction
Supercritical fluid extraction is a relatively new technique for extracting fragrant compounds from a raw material, which often employs supercritical CO2 as the extraction solvent. When carbon dioxide is put under high pressure at slightly above room temperature, a supercritical fluid forms (at normal pressure, CO2 changes directly from a solid to a gas in a process known as sublimation). Since CO2 is a non-polar compound with low surface tension that wets easily, it can be used to extract the typically hydrophobic aromatics from the plant material. This process is essentially the same as one of the techniques for making decaffeinated coffee.
In supercritical fluid extraction, high-pressure carbon dioxide (up to 100 atm) is used as the solvent. Because of the low heat of the process and the relatively unreactive solvent, the fragrant compounds derived often closely resemble the original odour of the raw material. Like solvent extraction, CO2 extraction takes place at a low temperature, extracts a wide range of compounds, and leaves the aromatics unaltered by heat, rendering an essence more faithful to the original. Since CO2 is a gas at normal atmospheric pressure, it leaves no trace of itself in the final product, allowing the absolute to be obtained directly without having to deal with a concrete. It is a low-temperature process, and the solvent is easily removed. Extracts produced using this process are known as CO2 extracts.
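For reference, the conditions quoted above can be compared with carbon dioxide's critical point; the values below are standard reference data, not figures given in this article.

```latex
% Critical point of CO2 (standard reference values), above which the
% supercritical fluid used for extraction forms:
T_c \approx 31.0\ ^{\circ}\mathrm{C}\ (304.1\ \mathrm{K}), \qquad
P_c \approx 7.38\ \mathrm{MPa}\ (\approx 72.8\ \mathrm{atm})
```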
Ethanol extraction
Ethanol extraction is a type of solvent extraction used to extract fragrant compounds directly from dry raw materials, as well as the impure oils or concrete resulting from organic solvent extraction, expression, or enfleurage. Ethanol extracts from dry materials are called tinctures, while ethanol washes for purifying oils and concretes are called absolutes.
The impure substances or oils are mixed with ethanol, which is less hydrophobic than the solvents used for organic extraction and dissolves more of the oxygenated aromatic constituents (alcohols, aldehydes, etc.), leaving behind the waxes, fats, and other generally hydrophobic substances. The alcohol is evaporated under low pressure, leaving behind the absolute. The absolute may be further processed to remove any impurities remaining from the solvent extraction.
Ethanol extraction is not typically used to extract fragrance from fresh plant materials; these contain large quantities of water, which will be extracted into the ethanol, although this is sometimes not a concern.
Distillation
Distillation is a common technique for obtaining aromatic compounds from plants, such as orange blossoms and roses. The raw material is heated and the fragrant compounds are re-collected through condensation of the distilled vapor. Distilled products, whether through steam or dry distillation are known either as essential oils or ottos.
Today, most common essential oils, such as lavender, peppermint, and eucalyptus, are distilled. Raw plant material, consisting of the flowers, leaves, wood, bark, roots, seeds, or peel, is put into an alembic (distillation apparatus) over water.
Steam distillation
Steam from boiling water is passed through the raw material for 60–105 minutes, which drives out most of its volatile fragrant compounds. The condensate from distillation, which contains both water and the aromatics, is settled in a Florentine flask. This allows for easy separation of the fragrant oils from the water, as the oil floats to the top of the distillate, where it is removed, leaving behind the watery distillate. The water collected from the condensate, which retains some of the fragrant compounds and oils of the raw material, is called hydrosol and is sometimes sold for consumer and commercial use. This method is most commonly used for fresh plant materials such as flowers, leaves, and stems. Popular hydrosols are rose water, lavender water, and orange blossom water. Many plant hydrosols have unpleasant smells and are therefore not sold.
Most oils are distilled in a single process. One exception is Ylang-ylang (Cananga odorata), which takes 22 hours to complete distillation. It is fractionally distilled, producing several grades (Ylang-Ylang "extra", I, II, III and "complete", in which the distillation is run from start to finish with no interruption).
Dry/destructive distillation
In dry or destructive distillation, also known as rectification, the raw materials are directly heated in a still without a carrier solvent such as water. Fragrant compounds released from the raw material by the high heat often undergo anhydrous pyrolysis, which results in the formation of different fragrant compounds, and thus different fragrant notes. This method is used to obtain fragrant compounds from fossil amber and fragrant woods (such as birch tar) where an intentional "burned" or "toasted" odour is desired.
Fractionation distillation
Through the use of a fractionation column, different fractions distilled from a material can be selectively excluded to manipulate the scent of the final product. Although the product is more expensive, this is sometimes performed to remove unpleasant or undesirable scents of a material and affords the perfumer more control over their composition process. This is often performed as a second step on material that has already been extracted rather than on raw material.
Expression
Expression is a method of fragrance extraction in which raw materials are pressed, squeezed or compressed and the essential oils are collected. In contemporary times, the only fragrant oils obtained using this method come from the peels of fruits in the citrus family, because these peels contain enough oil to make the method economically feasible. Citrus peel oils are expressed mechanically, or cold-pressed. Owing to the large quantities of oil in citrus peel and the relatively low cost of growing and harvesting the raw materials, citrus-fruit oils are cheaper than most other essential oils, to the extent that purified limonene extracted from these fruits is available as an affordable naturally derived solvent. Lemon and sweet orange oils obtained as by-products of the commercial citrus industry are among the cheapest citrus oils.
Expression was mainly used prior to the discovery of distillation, and this is still the case in cultures such as Egypt. Traditional Egyptian practice involves pressing the plant material, then burying it in unglazed ceramic vessels in the desert for a period of months to drive out water. The water has a smaller molecular size, so it diffuses through the ceramic vessels, while the larger essential oils do not. The lotus oil in Tutankhamen's tomb, which retained its scent after 3000 years sealed in alabaster vessels, was pressed in this manner.
Enfleurage
Enfleurage is a process in which the odour of aromatic materials is absorbed into wax or fat, which is then often extracted with alcohol. Extraction by enfleurage was commonly used when distillation was not possible because some fragrant compounds denature through high heat. This technique is not commonly used in modern industry, due to both its prohibitive cost and the existence of more efficient and effective extraction methods.
See also
Perfume
Rose oil
Clove oil
References
Oils
Aromatherapy
Perfumery
Flavor technology | Fragrance extraction | Chemistry | 2,100 |
40,033,814 | https://en.wikipedia.org/wiki/Effaceable%20functor | In mathematics, an effaceable functor is an additive functor F between abelian categories C and D for which, for each object A in C, there exists a monomorphism u : A → M, for some object M, such that F(u) = 0. Similarly, a coeffaceable functor is one for which, for each A, there is an epimorphism into A that is killed by F. The notions were introduced in Grothendieck's Tohoku paper.
A theorem of Grothendieck says that every effaceable δ-functor (i.e., effaceable in each degree) is universal.
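Stated compactly (the notation u for the monomorphism is chosen here; this is the standard formulation following the Tohoku paper):

```latex
% F : C -> D additive, with C and D abelian categories.
F\ \text{is effaceable} \iff
\forall A \in \operatorname{Ob}(C)\ \exists\ \text{a monomorphism}\ u\colon A \hookrightarrow M\ \text{with}\ F(u) = 0.

% Grothendieck's criterion: effaceability in positive degrees forces universality.
\text{If}\ (T^n,\delta)_{n\ge 0}\ \text{is a}\ \delta\text{-functor and each}\ T^n\ (n \ge 1)\ \text{is effaceable, then}\ (T^n,\delta)\ \text{is universal.}
```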
References
External links
Meaning of “efface” in “effaceable functor” and “injective effacement”
Functors | Effaceable functor | Mathematics | 165 |
8,142,356 | https://en.wikipedia.org/wiki/Indoleamine%202%2C3-dioxygenase | Indoleamine-pyrrole 2,3-dioxygenase (IDO or INDO) is a heme-containing enzyme physiologically expressed in a number of tissues and cells, such as the small intestine, lungs, female genital tract and placenta. In humans it is encoded by the IDO1 gene. IDO is involved in tryptophan metabolism. It is one of three enzymes that catalyze the first and rate-limiting step in the kynurenine pathway, the O2-dependent oxidation of L-tryptophan to N-formylkynurenine, the others being indoleamine 2,3-dioxygenase 2 (IDO2) and tryptophan 2,3-dioxygenase (TDO). IDO is an important part of the immune system and plays a part in natural defense against various pathogens. It is produced by cells in response to inflammation and has an immunosuppressive function because of its ability to limit T-cell function and engage mechanisms of immune tolerance. Emerging evidence suggests that IDO becomes activated during tumor development, helping malignant cells escape eradication by the immune system. Expression of IDO has been described in a number of types of cancer, such as acute myeloid leukemia, ovarian cancer and colorectal cancer. IDO is part of the malignant transformation process and plays a key role in suppressing the anti-tumor immune response in the body, so inhibiting it could increase the effect of chemotherapy as well as other immunotherapeutic protocols. Furthermore, there is data implicating a role for IDO1 in the modulation of vascular tone in conditions of inflammation via a novel pathway involving singlet oxygen.
Physiological function
Indoleamine 2,3-dioxygenase is the first and rate-limiting enzyme of tryptophan catabolism through the kynurenine pathway.
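Written out as a reaction, the step described above is the oxidation of L-tryptophan by molecular oxygen; as a dioxygenase, IDO incorporates both oxygen atoms into the product.

```latex
\mathrm{L\text{-}tryptophan} \;+\; \mathrm{O_2}
\;\longrightarrow\;
N\text{-}\mathrm{formyl\text{-}L\text{-}kynurenine}
```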
IDO is an important molecule in the mechanisms of tolerance, and its physiological functions include the suppression of potentially dangerous inflammatory processes in the body. IDO also plays a role in natural defense against microorganisms. Expression of IDO is induced by interferon-gamma, which explains why its expression increases during inflammatory diseases and during tumorigenesis. Because tryptophan is essential for the survival of many pathogens, IDO activity, by depleting local tryptophan, limits their survival and growth. Microorganisms susceptible to tryptophan deficiency include bacteria of the genus Streptococcus and viruses such as herpes simplex and measles.
One of the organs with high IDO expression is the placenta. In the 1990s, the immunosuppressive function of this enzyme was first described in mice through the study of placental tryptophan metabolism. The mammalian placenta, owing to its intensive tryptophan catabolism, is thus able to suppress T cell activity, which contributes to its status as an immunologically privileged tissue.
Clinical significance
IDO is an immune checkpoint molecule in the sense that it is an immunomodulatory enzyme produced by alternatively activated macrophages and other immunoregulatory cells. IDO is known to suppress T and NK cells, generate Tregs and myeloid-derived suppressor cells, and support angiogenesis.
These mechanisms are crucial in the process of carcinogenesis. IDO allows tumor cells to escape the immune system by two main mechanisms. The first is tryptophan depletion in the tumor microenvironment. The second is the production of catabolic products called kynurenines, which are cytotoxic to T lymphocytes and NK cells. Overexpression of human IDO (hIDO) has been described in a variety of human tumor cell lineages and is often associated with poor prognosis. Tumors with increased production of IDO include prostate, ovarian, lung and pancreatic cancer and acute myeloid leukemia. Under physiological conditions, expression of IDO is regulated by the Bin1 gene, which can be disrupted by tumor transformation.
Emerging clinical studies suggest that combining IDO inhibitors with classical chemotherapy and radiotherapy could restore immune control and provide a therapeutic response to generally resistant tumors. The enzyme, which tumors use to escape immune surveillance, is currently a focus of research and drug-discovery efforts, as well as of efforts to understand whether it could serve as a prognostic biomarker.
Inhibitors
COX-2 inhibitors down-regulate indoleamine 2,3-dioxygenase, leading to a reduction in kynurenine levels as well as reducing proinflammatory cytokine activity.
1-Methyltryptophan is a racemic compound that weakly inhibits indoleamine dioxygenase, but is also a very slow substrate. The specific enantiomer 1-methyl-D-tryptophan (known as indoximod) is in clinical trials for various cancers.
Epacadostat (INCB24360), navoximod (GDC-0919), and linrodostat (BMS-986205) are potent inhibitors of the indoleamine 2,3-dioxygenase enzyme and are in clinical trials for various cancers.
See also
1-Methyltryptophan
Tryptophan 2,3-dioxygenase
References
External links
PDBe-KB provides an overview of all the structure information available in the PDB for Human Indoleamine 2,3-dioxygenase 1
EC 1.13.11
Immune system | Indoleamine 2,3-dioxygenase | Biology | 1,161 |
3,200,021 | https://en.wikipedia.org/wiki/Synthetic%20molecular%20motor | Synthetic molecular motors are molecular machines capable of continuous directional rotation under an energy input. Although the term "molecular motor" has traditionally referred to a naturally occurring protein that induces motion (via protein dynamics), some groups also use the term when referring to non-biological, non-peptide synthetic motors. Many chemists are pursuing the synthesis of such molecular motors.
The basic requirements for a synthetic motor are repetitive 360° motion, the consumption of energy and unidirectional rotation. The first two efforts in this direction, the chemically driven motor by Dr. T. Ross Kelly of Boston College with co-workers and the light-driven motor by Ben Feringa and co-workers, were published in 1999 in the same issue of Nature.
As of 2020, the smallest atomically precise molecular machine has a rotor that consists of four atoms.
Chemically driven rotary molecular motors
An example of a prototype for a synthetic chemically driven rotary molecular motor was reported by Kelly and co-workers in 1999. Their system is made up of a three-bladed triptycene rotor and a helicene, and is capable of performing a unidirectional 120° rotation.
This rotation takes place in five steps. The amine group on the triptycene moiety is converted to an isocyanate group by condensation with phosgene (a). Thermal or spontaneous rotation around the central bond then brings the isocyanate group into proximity with the hydroxyl group located on the helicene moiety (b), allowing these two groups to react with each other (c). This reaction irreversibly traps the system as a strained cyclic urethane that is higher in energy and thus energetically closer to the rotational energy barrier than the original state. Further rotation of the triptycene moiety therefore requires only a relatively small amount of thermal activation to overcome this barrier, thereby releasing the strain (d). Finally, cleavage of the urethane group restores the amine and alcohol functionalities of the molecule (e).
The result of this sequence of events is a unidirectional 120° rotation of the triptycene moiety with respect to the helicene moiety. Additional forward or backward rotation of the triptycene rotor is inhibited by the helicene moiety, which serves a function similar to that of the pawl of a ratchet. The unidirectionality of the system results both from the asymmetric skew of the helicene moiety and from the strain of the cyclic urethane formed in c. This strain can only be lowered by the clockwise rotation of the triptycene rotor in d, as both counterclockwise rotation and the inverse process of d are energetically unfavorable. In this respect the preference for the rotation direction is determined by both the positions of the functional groups and the shape of the helicene, and is thus built into the design of the molecule rather than dictated by external factors.
The motor by Kelly and co-workers is an elegant example of how chemical energy can be used to induce controlled, unidirectional rotational motion, a process which resembles the consumption of ATP in organisms in order to fuel numerous processes. However, it does suffer from a serious drawback: the sequence of events that leads to 120° rotation is not repeatable. Kelly and co-workers have therefore searched for ways to extend the system so that this sequence can be carried out repeatedly. Unfortunately, their attempts to accomplish this objective have not been successful and currently the project has been abandoned. In 2016 David Leigh's group invented the first autonomous chemically-fuelled synthetic molecular motor.
Some other examples of synthetic chemically driven rotary molecular motors that all operate by sequential addition of reagents have been reported, including the stereoselective ring opening of a racemic biaryl lactone by chiral reagents, which results in a directed 90° rotation of one aryl with respect to the other. Branchaud and co-workers have reported that this approach, followed by an additional ring-closing step, can be used to accomplish a non-repeatable 180° rotation.
Feringa and co-workers used this approach in their design of a molecule that can repeatably perform 360° rotation. The full rotation of this molecular motor takes place in four stages. In stages A and C rotation of the aryl moiety is restricted, although helix inversion is possible. In stages B and D the aryl can rotate with respect to the naphthalene with steric interactions preventing the aryl from passing the naphthalene. The rotary cycle consists of four chemically induced steps which realize the conversion of one stage into the next. Steps 1 and 3 are asymmetric ring opening reactions which make use of a chiral reagent in order to control the direction of the rotation of the aryl. Steps 2 and 4 consist of the deprotection of the phenol, followed by regioselective ring formation.
Light-driven rotary molecular motors
In 1999 the laboratory of Prof. Dr. Ben L. Feringa at the University of Groningen, The Netherlands, reported the creation of a unidirectional molecular rotor. Their 360° molecular motor system consists of a bis-helicene connected by an alkene double bond displaying axial chirality and having two stereocenters.
One cycle of unidirectional rotation takes 4 reaction steps. The first step is a low temperature endothermic photoisomerization of the trans (P,P) isomer 1 to the cis (M,M) 2 where P stands for the right-handed helix and M for the left-handed helix. In this process, the two axial methyl groups are converted into two less sterically favorable equatorial methyl groups.
By increasing the temperature to 20 °C, these methyl groups convert back exothermically to the (P,P) cis axial groups (3) in a helix inversion. Because the axial isomer is more stable than the equatorial isomer, reverse rotation is blocked. A second photoisomerization converts (P,P) cis 3 into (M,M) trans 4, again with accompanying formation of sterically unfavorable equatorial methyl groups. A thermal isomerization process at 60 °C closes the 360° cycle back to the axial positions.
A major hurdle to overcome is the long reaction time for complete rotation in these systems, which does not compare to the rotation speeds displayed by motor proteins in biological systems. In the fastest system to date, with a fluorene lower half, the half-life of the thermal helix inversion is 0.005 seconds. This compound is synthesized using the Barton-Kellogg reaction. In this molecule the slowest step of the rotation, the thermally induced helix inversion, is believed to proceed much more quickly because the larger tert-butyl group makes the unstable isomer even less stable than when a methyl group is used; the unstable isomer is destabilized more than the transition state that leads to helix inversion. The different behaviour of the two molecules is illustrated by the fact that the half-life of the compound with a methyl group instead of a tert-butyl group is 3.2 minutes.
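As a rough comparison (assuming simple first-order kinetics for the thermal helix inversion, which is how such half-lives are normally quoted), the two half-lives mentioned above correspond to rate constants that differ by roughly four orders of magnitude.

```latex
% First-order relation between half-life and rate constant: k = ln 2 / t_{1/2}
t_{1/2} = 0.005\ \mathrm{s} \;\Rightarrow\; k \approx 1.4\times 10^{2}\ \mathrm{s^{-1}},
\qquad
t_{1/2} = 3.2\ \mathrm{min} = 192\ \mathrm{s} \;\Rightarrow\; k \approx 3.6\times 10^{-3}\ \mathrm{s^{-1}}
```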
The Feringa principle has been incorporated into a prototype nanocar. The synthesized car has a helicene-derived engine with an oligo(phenylene ethynylene) chassis and four carborane wheels and is expected to be able to move on a solid surface under scanning tunneling microscopy monitoring, although so far this has not been observed. The motor does not perform with fullerene wheels because they quench the photochemistry of the motor moiety. Feringa motors have also been shown to remain operable when chemically attached to solid surfaces. The ability of certain Feringa systems to act as asymmetric catalysts has also been demonstrated.
In 2016, Feringa was awarded a Nobel prize for his work on molecular motors.
Experimental demonstration of a single-molecule electric motor
A single-molecule electrically operated motor made from a single molecule of n-butyl methyl sulfide (C5H12S) has been reported. The molecule is adsorbed onto a copper (111) single-crystal piece by chemisorption.
See also
Molecular machine
Molecular motors
Molecular propeller
Nanomotor
References
Nanotechnology
Molecular machines | Synthetic molecular motor | Physics,Chemistry,Materials_science,Technology,Engineering | 1,754 |
16,143,262 | https://en.wikipedia.org/wiki/Echo%20chamber%20%28media%29 | In news media and social media, an echo chamber is an environment or ecosystem in which participants encounter beliefs that amplify or reinforce their preexisting beliefs by communication and repetition inside a closed system and insulated from rebuttal. An echo chamber circulates existing views without encountering opposing views, potentially resulting in confirmation bias. Echo chambers may increase social and political polarization and extremism. On social media, it is thought that echo chambers limit exposure to diverse perspectives, and favor and reinforce presupposed narratives and ideologies.
The term is a metaphor based on an acoustic echo chamber, in which sounds reverberate in a hollow enclosure. Another emerging term for this echoing and homogenizing effect within social-media communities on the Internet is neotribalism.
Many scholars have noted the effects that echo chambers can have on citizens' stances and viewpoints, and specifically the implications they have for politics. However, some studies have suggested that the effects of echo chambers are weaker than often assumed.
Concept
The Internet has expanded the variety and amount of accessible political information. On the positive side, this may create a more pluralistic form of public debate; on the negative side, greater access to information may lead to selective exposure to ideologically supportive channels. In an extreme "echo chamber", one purveyor of information will make a claim, which many like-minded people then repeat, overhear, and repeat again (often in an exaggerated or otherwise distorted form) until most people assume that some extreme variation of the story is true.
The echo chamber effect occurs online when a harmonious group of people amalgamate and develop tunnel vision. Participants in online discussions may find their opinions constantly echoed back to them, which reinforces their individual belief systems due to declining exposure to others' opinions, culminating in confirmation bias regarding a variety of subjects. When individuals want something to be true, they often gather only the information that supports their existing beliefs and disregard statements they find contradictory or critical of those beliefs. Individuals who participate in echo chambers often do so because they feel more confident that their opinions will be more readily accepted by others in the echo chamber. This happens because the Internet has provided access to a wide range of readily available information. People receive their news online more rapidly through less traditional sources, such as Facebook, Google, and Twitter. These and many other social platforms and online media outlets have established personalized algorithms intended to cater specific information to individuals' online feeds; this method of curating content has replaced the function of the traditional news editor. The mediated spread of information through online networks creates a risk of an algorithmic filter bubble, leading to concern about how echo chamber effects on the internet promote the division of online interaction.
Members of an echo chamber are not fully responsible for their convictions. Once part of an echo chamber, an individual might adhere to seemingly acceptable epistemic practices and still be further misled. Many individuals may be stuck in echo chambers due to factors existing outside of their control, such as being raised in one.
Furthermore, the function of an echo chamber does not entail eroding a member's interest in truth; it focuses upon manipulating their credibility levels so that fundamentally different establishments and institutions will be considered proper sources of authority.
Empirical research
However, empirical findings that clearly support these concerns are needed, and the field is very fragmented when it comes to empirical results. Some studies do measure echo chamber effects, such as that of Bakshy et al. (2015). In this study the researchers found that people tend to share news articles they align with. Similarly, they discovered homophily in online friendships, meaning people are more likely to be connected on social media if they have the same political ideology. In combination, this can lead to echo chamber effects. Bakshy et al. found that a person's potential exposure to cross-cutting content (content that is opposite to their own political beliefs) through their own network is only 24% for liberals and 35% for conservatives. Other studies argue that expressing cross-cutting content is an important measure of echo chambers: Bossetta et al. (2023) find that 29% of Facebook comments during Brexit were cross-cutting expressions. Therefore, echo chambers might be present in a person's media diet but not in how they interact with others on social media.
Another set of studies suggests that echo chambers exist but are not a widespread phenomenon: based on survey data, Dubois and Blank (2018) show that most people consume news from various sources, while around 8% consume media with low diversity. Similarly, Rusche (2022) shows that most Twitter users do not show behavior resembling that of an echo chamber; however, through high levels of online activity, the small group of users that do make up a substantial share of populist politicians' followers, thus creating homogeneous online spaces.
Finally, there are other studies which contradict the existence of echo chambers. Some found that people also share news reports that don't align with their political beliefs.
Others found that people using social media are being exposed to more diverse sources than people not using social media.
In summation, it remains that clear and distinct findings are absent which either confirm or falsify the concerns of echo chamber effects.
Research on the social dynamics of echo chambers shows that the fragmented nature of online culture, the importance of collective identity construction, and the argumentative nature of online controversies can generate echo chambers where participants encounter self-reinforcing beliefs. Researchers show that echo chambers are prime vehicles to disseminate disinformation, as participants exploit contradictions against perceived opponents amidst identity-driven controversies. As echo chambers build upon identity politics and emotion, they can contribute to political polarization and neotribalism.
Difficulties of researching processes
Echo chamber studies fail to achieve consistent and comparable results due to unclear definitions, inconsistent measurement methods, and unrepresentative data. Social media platforms continually change their algorithms, and most studies are conducted in the US, limiting their application to political systems with more parties.
Echo chambers vs epistemic bubbles
In recent years, closed epistemic networks have increasingly been held responsible for the era of post-truth and fake news. However, the media frequently conflates two distinct concepts of social epistemology: echo chambers and epistemic bubbles.
An epistemic bubble is an informational network in which important sources have been excluded by omission, perhaps unintentionally. It is an impaired epistemic framework which lacks strong connectivity. Members within epistemic bubbles are unaware of significant information and reasoning.
On the other hand, an echo chamber is an epistemic construct in which voices are actively excluded and discredited. It does not suffer from a lack of connectivity; rather it depends on a manipulation of trust by methodically discrediting all outside sources. According to research conducted by the University of Pennsylvania, members of echo chambers become dependent on the sources within the chamber and highly resistant to any external sources.
An important distinction exists in the strength of the respective epistemic structures. Epistemic bubbles are not particularly robust. Relevant information has merely been left out, not discredited. One can ‘pop’ an epistemic bubble by exposing a member to the information and sources that they have been missing.
Echo chambers, however, are incredibly strong. By creating pre-emptive distrust between members and non-members, insiders will be insulated from the validity of counter-evidence and will continue to reinforce the chamber in the form of a closed loop. Outside voices are heard, but dismissed.
As such, the two concepts are fundamentally distinct and cannot be utilized interchangeably. However, one must note that this distinction is conceptual in nature, and an epistemic community can exercise multiple methods of exclusion to varying extents.
Similar concepts
A filter bubble – a term coined by internet activist Eli Pariser – is a state of intellectual isolation that allegedly can result from personalized searches when a website algorithm selectively guesses what information a user would like to see based on information about the user, such as location, past click-behavior and search history. As a result, users become separated from information that disagrees with their viewpoints, effectively isolating them in their own cultural or ideological bubbles. The choices made by these algorithms are not transparent.
Homophily is the tendency of individuals to associate and bond with similar others, as in the proverb "birds of a feather flock together". The presence of homophily has been detected in a vast array of network studies. For example, a study conducted by Bakshy et al. explored the data of 10.1 million Facebook users. These users identified as either politically liberal, moderate, or conservative, and the vast majority of their friends were found to have a political orientation similar to their own. Facebook algorithms recognize this and select information with a bias towards this political orientation to showcase in users' newsfeeds.
Recommender systems are information filtering systems put in place on different platforms that provide recommendations depending on information gathered from the user. In general, recommendations are provided in three different ways: based on content that was previously selected by the user, content that has similar properties or characteristics to that which has been previously selected by the user, or a combination of both.
Both echo chambers and filter bubbles relate to the ways individuals are exposed to content devoid of clashing opinions, and colloquially might be used interchangeably. However, echo chamber refers to the overall phenomenon by which individuals are exposed only to information from like-minded individuals, while filter bubbles are a result of algorithms that choose content based on previous online behavior, as with search histories or online shopping activity. Indeed, specific combinations of homophily and recommender systems have been identified as significant drivers for determining the emergence of echo chambers.
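A minimal sketch of that mechanism (a toy model, not any platform's actual algorithm; the one-dimensional "position" scores and the user's starting point are invented): a recommender that always serves the items most similar to a user's click history keeps serving content near the user's own position, narrowing the diversity of what they see.

```python
# Toy illustration of similarity-based recommendation narrowing exposure.
# Items are represented by a single invented "position" score in [-1, 1];
# the hypothetical user starts with one mildly positioned click.
import statistics

catalog = [i / 10 for i in range(-10, 11)]   # items from -1.0 to 1.0
history = [0.2]                              # the user's initial click

for _ in range(5):
    profile = statistics.mean(history)       # user profile = mean of clicks
    # rank the catalog by similarity to the profile and serve the top 3
    served = sorted(catalog, key=lambda item: abs(item - profile))[:3]
    history.extend(served)
    print(f"profile={profile:+.2f}  served={served}")

# Each round, the served items stay clustered around the user's own position,
# showing how similarity-based selection alone can limit exposure to
# dissimilar content.
```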
Culture wars are cultural conflicts between social groups that have conflicting values and beliefs. The term refers to "hot button" topics on which societal polarization occurs. A culture war is defined as "the phenomenon in which multiple groups of people, who hold entrenched values and ideologies, attempt to contentiously steer public policy." Echo chambers on social media have been identified as playing a role in how multiple social groups, holding distinct values and ideologies, create and circulate conversations through conflict and controversy.
Implications of echo chambers
Online communities
Online social communities become fragmented by echo chambers when like-minded people group together and members hear arguments in one specific direction with no counter-argument addressed. On certain online platforms, such as Twitter, echo chambers are more likely to be found when the topic is more political in nature compared to topics that are seen as more neutral. Social networking communities are considered some of the most powerful reinforcers of rumors, because members trust the evidence supplied by their own social group and peers over the information circulating in the news. In addition, projecting one's views on the internet rather than face-to-face reduces the fear of repercussions, allowing further engagement in agreement with peers.
This can create significant barriers to critical discourse within an online medium. Social discussion and sharing can potentially suffer when people have a narrow information base and do not reach outside their network. Essentially, the filter bubble can distort one's reality in ways which are not believed to be alterable by outside sources.
Findings by Tokita et al. (2021) suggest that individuals’ behavior within echo chambers may dampen their access to information even from desirable sources. In highly polarized information environments, individuals who are highly reactive to socially-shared information are more likely than their less reactive counterparts to curate politically homogenous information environments and experience decreased information diffusion in order to avoid overreacting to news they deem unimportant. This makes these individuals more likely to develop extreme opinions and to overestimate the degree to which they are informed.
Research has also shown that misinformation can become more viral as a result of echo chambers, as the echo chambers provide an initial seed which can fuel broader viral diffusion.
Offline communities
Many offline communities are also segregated by political beliefs and cultural views. The echo chamber effect may prevent individuals from noticing changes in language and culture involving groups other than their own. Online echo chambers can sometimes influence an individual's willingness to participate in similar discussions offline. A 2016 study found that "Twitter users who felt their audience on Twitter agreed with their opinion were more willing to speak out on that issue in the workplace".
Group polarization can occur as a result of growing echo chambers. The lack of external viewpoints and the presence of a majority of individuals sharing a similar opinion or narrative can lead to a more extreme belief set. Group polarization can also aid the spread of fake news and misinformation through social media platforms. This can extend to offline interactions, with data revealing that offline interactions can be as polarizing as online interactions (Twitter), arguably because social media-enabled debates are highly fragmented.
Examples
Echo chambers have existed in many forms. Examples cited since the late 20th century include:
News coverage of the 1980s McMartin preschool trial was criticized by David Shaw in a series of 1990 Pulitzer Prize winning articles as an echo chamber. Shaw noted that, despite the charges in the trial never being proven, news media reporting on the trial "largely acted in a pack" and "fed on one another", creating an "echo chamber of horrors" where journalists ultimately abandoned journalistic principles and sensationalized coverage to be "the first with the latest shocking allegation".
Conservative radio host Rush Limbaugh and his radio show were categorized as an echo chamber in the first empirical study concerning echo chambers, by researchers Kathleen Hall Jamieson and Joseph Cappella in their 2008 book Echo Chamber: Rush Limbaugh and the Conservative Media Establishment.
The Clinton–Lewinsky scandal reporting was chronicled in Time magazine's 16 February 1998 "Trial by Leaks" cover story "The Press And The Dress: The anatomy of a salacious leak, and how it ricocheted around the walls of the media echo chamber" by Adam Cohen. This case was also reviewed in depth by the Project for Excellence in Journalism in "The Clinton/Lewinsky Story: How Accurate? How Fair?"
A New Statesman essay argued that echo chambers were linked to the United Kingdom Brexit referendum.
The subreddit /r/incels and other online incel communities have also been described as echo chambers.
Discussion concerning opioid drugs and whether or not they should be considered suitable for long-term pain maintenance has been described as an echo chamber capable of affecting drug legislation.
The 2016 United States presidential election was described as an echo chamber, as information on the campaigns was exchanged primarily among individuals with similar political and ideological views. Donald Trump and Hillary Clinton were extremely vocal on Twitter throughout the electoral campaigns, bringing many vocal opinion leaders to the platform. A study conducted by Guo et al. showed that Twitter communities in support of Trump and Clinton differed significantly, and those that were most vocal were responsible for creating echo chambers within these communities.
The network of social media accounts and communities harboring and circulating the Flat Earth theory has been described as an echo chamber.
Since the creation of the internet, scholars have been curious about changes in political communication. Because of new developments in information technology and how information is managed, it is unclear how opposing perspectives can reach common ground in a democracy. Echo chamber effects have largely been cited as occurring in politics, for example on Twitter and Facebook during the 2016 United States presidential election. Some believe that echo chambers played a big part in the success of Donald Trump in the 2016 presidential election.
Countermeasures
From media companies
Some companies have also made efforts in combating the effects of an echo chamber on an algorithmic approach. A high-profile example of this is the changes Facebook made to its "Trending" page, which is an on-site news source for its users. Facebook modified their "Trending" page by transitioning from displaying a single news source to multiple news sources for a topic or event. The intended purpose of this was to expand the breadth of news sources for any given headline, and therefore expose readers to a variety of viewpoints. There are startups building apps with the mission of encouraging users to open their echo chambers, such as UnFound.news. Another example is a beta feature on BuzzFeed News called "Outside Your Bubble", which adds a module to the bottom of BuzzFeed News articles to show reactions from various platforms like Twitter, Facebook, and Reddit. This concept aims to bring transparency and prevent biased conversations, diversifying the viewpoints their readers are exposed to.
See also
References
Influence of mass media
Mass media issues
Media bias
Public opinion
Propaganda techniques
Social influence
Sociology of technology | Echo chamber (media) | Technology | 3,454 |
154,487 | https://en.wikipedia.org/wiki/Parkway | A parkway is a landscaped thoroughfare. The term is particularly used for a roadway in a park or connecting to a park from which trucks and other heavy vehicles are excluded.
Over the years, many different types of roads have been labeled parkways. The term may be used to describe city streets as narrow as two lanes with a landscaped median, wide landscaped setbacks, or both.
The term has also been applied to scenic highways and to limited-access roads more generally. Many parkways originally intended for scenic, recreational driving have evolved into major urban and commuter routes.
United States
Scenic roads
The first parkways in the United States were developed during the late 19th century by landscape architects Frederick Law Olmsted and Calvert Vaux as roads that separated pedestrians, bicyclists, equestrians, and horse carriages, such as Eastern Parkway, which is credited as the world's first parkway, and Ocean Parkway in the New York City borough of Brooklyn. The term "parkway" to define this type of road was coined by Calvert Vaux and Frederick Law Olmsted in their proposal to link city and suburban parks with "pleasure roads". In Buffalo, New York, Olmsted and Vaux used parkways with landscaped medians and setbacks to create the first interconnected park and parkway system in the United States. Bidwell Parkway and Chapin Parkway are 200-foot-wide city streets with only one lane for cars in each direction and broad landscaped medians that provide a pleasant, shaded route to the park and serve as mini-parks within the neighborhood. The Rhode Island Metropolitan Park Commission developed several parkways in the Providence area.
Other parkways, such as Park Presidio Boulevard in San Francisco, California, were designed to serve larger volumes of traffic.
During the early 20th century, the meaning of the word was expanded to include limited-access highways designed for recreational driving of automobiles, with landscaping. These parkways originally provided scenic routes without very slow or commercial vehicles, at grade intersections, or pedestrian traffic. Examples are the Merritt Parkway in Connecticut and the Vanderbilt Motor Parkway in New York. But their success led to more development, expanding a city's boundaries, eventually limiting the parkway's recreational driving use. The Arroyo Seco Parkway between Downtown Los Angeles and Pasadena, California, is an example of lost pastoral aesthetics. It and others have become major commuting routes, while retaining the name "parkway".
Early high speed roads
In New York City, construction on the Long Island Motor Parkway (Vanderbilt Parkway) began in 1906 and planning for the Bronx River Parkway in 1907. In the 1920s, the New York City Metropolitan Area's parkway system grew under the direction of Robert Moses, the president of the New York State Council of Parks and Long Island State Park Commission, who used parkways to provide access to newly created state parks, especially for city dwellers. As Commissioner of New York City Parks under Mayor LaGuardia, he extended the parkways to the heart of the city, creating and linking its parks to the greater metropolitan systems.
Most of the New York metropolitan parkways were designed by Gilmore Clark. The famed "Gateway to New England" Merritt Parkway in Connecticut was designed in the 1930s as a pleasurable alternative for affluent locals to the congested Boston Post Road, running through forest with each bridge designed uniquely to enhance the scenery. Another example is the Sprain Brook Parkway from lower-Westchester to connect to the Taconic State Parkway to Chatham, New York. Landscape architect George Kessler designed extensive parkway systems for Kansas City, Missouri; Memphis, Tennessee; Indianapolis; and other cities at the beginning of the 20th century.
New Deal roads
In the 1930s, as part of the New Deal, the U.S. federal government constructed National Parkways designed for recreational driving and to commemorate historic trails and routes. These divided four-lane parkways have lower speed limits and are maintained by the National Park Service. An example is the Civilian Conservation Corps (CCC) built Blue Ridge Parkway in the Appalachian Mountains of North Carolina and Virginia.
Others are Skyline Drive in Virginia; the Natchez Trace Parkway in Mississippi, Alabama, and Tennessee; and the Colonial Parkway in eastern Virginia's Historic Triangle area. The George Washington Memorial Parkway and the Clara Barton Parkway, running along the Potomac River near Washington, D.C., and Alexandria, Virginia, were also constructed during this era.
Post-war parkways
In Kentucky the term "parkway" designates a freeway in the Kentucky Parkway system, with nine built in the 1960s and 1970s. They were toll roads until the construction bonds were repaid; the last of these roads to charge tolls became freeways in 2006.
The Arroyo Seco Parkway from Pasadena to Los Angeles, built in 1940, was the first segment of the vast Southern California freeway system. It became part of State Route 110 and was renamed the Pasadena Freeway. A 2010 restoration of the freeway brought the Arroyo Seco Parkway designation back.
In the New York metropolitan area, contemporary parkways are predominantly limited-access highways or freeways restricted to non-commercial traffic, excluding trucks and tractor-trailers. Some have low overpasses that also exclude buses. The Vanderbilt Parkway, an exception in western Suffolk County, is a surviving remnant of the Long Island Motor Parkway that became a surface street, no longer with controlled-access or non-commercial vehicle restrictions. The Palisades Interstate Parkway is a post-war parkway that starts at the George Washington Bridge, heads north through New Jersey, continuing through Rockland and Orange counties in New York. The Palisades Parkway was built to allow for a direct route from New York City to Harriman State Park.
In New Jersey, the Garden State Parkway, connecting the northern part of the state with the Jersey Shore, is restricted to buses and non-commercial traffic north of the Route 18 interchange, but trucks are permitted south of this point. It is one of the busiest toll roads in the country.
In the Pittsburgh region, two of the major Interstates are referred to informally as parkways. The Parkway East (I-376, formally the Penn-Lincoln Parkway) connects Downtown Pittsburgh to Monroeville, Pennsylvania. The Parkway West (I-376) runs through the Fort Pitt Tunnel and links Downtown to Pittsburgh International Airport, southbound I-79, Imperial, Pennsylvania, and westbound US 22/US 30. The Parkway North (I-279) connects Downtown to Franklin Park, Pennsylvania and northbound I-79.
In the suburbs of Philadelphia, U.S. Route 202 follows an at-grade parkway alignment known as the "U.S. Route 202 Parkway" between Montgomeryville and Doylestown. The parkway varies from two to four lanes in width, has shoulders, a walking path called the US 202 Parkway Trail on the side, and a speed limit. The parkway opened in 2012 as a bypass of a section of US 202 between the two towns; it had originally been proposed as a four-lane freeway before funding for the road was cut.
In Minneapolis, the Grand Rounds Scenic Byway system includes streets designated as parkways. These are not freeways; they have a slow speed limit, pedestrian crossings, and stop signs.
In Cincinnati, parkways are major roads which trucks are prohibited from using. Some Cincinnati parkways, such as Columbia Parkway, are high-speed, limited-access roads, while others, such as Central Parkway, are multi-lane urban roads without controlled access. Columbia Parkway carries US-50 traffic from downtown towards east-side suburbs of Mariemont, Anderson, and Milford, and is a limited access road from downtown to the Village of Mariemont.
In Boston, parkways are generally four to six lanes wide but are not usually controlled-access. They are highly trafficked in most cases, transporting people between neighborhoods quicker than a typical city street. Many of them serve as principal arterials and some (like Storrow Drive, Memorial Drive, the Alewife Brook Parkway and the VFW Parkway) have evolved into regional commuter routes.
Canada
"Parkway" is used in the names of many Canadian roads, including major routes through national parks, scenic drives, major urban thoroughfares, and even regular freeways that carry commercial traffic.
Parkways in the National Capital Region are administered by the National Capital Commission. However, some of them are named "drive" or "driveway".
The term in Canada is also applied to multi-use paths and greenways used by walkers and cyclists.
Airport Parkway (Ottawa)
Aviation Parkway (Ottawa)
Broad Street in Saint John, New Brunswick
Colonel By Drive in Ottawa, Ontario
Conestoga Parkway in Kitchener, Ontario
Don Valley Parkway in Toronto, Ontario
Emil Kolb Parkway in Bolton, Ontario
Erin Mills Parkway in Mississauga, Ontario
Forest Hills Parkway in Halifax, Nova Scotia
Hanlon Expressway in Guelph, Ontario
Icefields Parkway in Alberta
Island Park Drive in Ottawa, Ontario
Lauzon Parkway in Windsor, Ontario
Lincoln M. Alexander Parkway in Hamilton, Ontario
Niagara Parkway in Southern Ontario
Ojibway Parkway in Windsor, Ontario
Queen Elizabeth Driveway in Ottawa, Ontario
Red Hill Valley Parkway in Hamilton, Ontario
The Parkway in St. John's, Newfoundland and Labrador
Thousand Islands Parkway in Eastern Ontario
United Kingdom
In the United Kingdom, the term "parkway" more commonly refers to park and ride railway stations, where this is often indicated as part of the name, as with Bristol Parkway, the first such station, opened in 1972.
Luton Airport Parkway is somewhat analogous - a railway station connected to its airport via a public transport shuttle (initially buses, now the Luton DART light railway).
Parkways fitting the definition applied in this article also exist, as listed in this section.
Peterborough
The city of Peterborough has roads branded as "parkways" which provide routes for much through traffic and local traffic. The majority are dual carriageways, with many of their junctions numbered. Five main parkways form an orbital outer ring road. Three parkways serve settlements.
Plymouth
In the City of Plymouth, the A38 is called "The Parkway" and bisects a rural belt of the local authority area, which coincides with the geographical centre; it has two junctions to enter the downtown part of the city.
Australia
Australian Capital Territory
The Australian Capital Territory uses the term "parkway" to refer to roadways of a standard approximately equivalent to what would be designated as an "expressway", "freeway", or "motorway" in other areas. Parkways generally have multiple lanes in each direction of travel, no intersections (crossroads are accessed by interchanges), high speed limits, and are of dual carriageway design (or have high crash barriers on the median).
Victoria
Victoria uses the term "parkway" to sometimes refer to smaller local access roads that travel through parkland. Unlike other uses of the term, these parkways are not high-speed routes but may still have some degree of limited access.
Other countries
Singapore uses the term "parkway" as an alternative to "expressway". As such, parkways are also dual carriageways with high speed limits and interchanges. The East Coast Parkway is currently the only expressway in Singapore that uses this terminology.
In Russia, long, broad (multi-lane) and beautified thoroughfares are referred to as prospekts.
See also
Central reservation
Green belt
Linear park
Park
Road verge
References
External links
"Why do we drive on the parkway and park on the driveway?" The Straight Dope
Landscape
Types of roads
Environmental design
Limited-access roads
Urban studies and planning terminology | Parkway | Engineering | 2,341 |
49,214,867 | https://en.wikipedia.org/wiki/Pholiota%20microcarpa | Pholiota microcarpa is a species of agaric fungus in the family Strophariaceae. Found in Argentina, it was described as new to science by mycologist Rolf Singer in 1969.
See also
List of Pholiota species
References
External links
Fungi described in 1969
Fungi of Argentina
Strophariaceae
Taxa named by Rolf Singer
Fungus species | Pholiota microcarpa | Biology | 73 |
37,511,492 | https://en.wikipedia.org/wiki/EraMobile | EraMobile (Epidemic-based Reliable and Adaptive Multicast for Mobile ad hoc networks) is a bio-inspired reliable multicast protocol targeting mission-critical ad hoc networks. EraMobile supports group applications that require high reliability and low overhead with loose delivery time constraints. The protocol aims to deliver multicast data with maximum reliability and minimal network overhead under adverse network conditions. EraMobile adopts an epidemic-based approach, which uses gossip messages, to cope with dynamic topology changes due to the mobility of network nodes. EraMobile's epidemic mechanism does not require maintaining any tree- or mesh-like structure for multicast operation. It requires neither a global nor a partial view of the network, nor does it require information about neighboring nodes and group members. The lack of a central structure for multicast lowers the network overhead by eliminating redundant data transmissions. EraMobile contains a simple adaptivity mechanism which tunes the frequency of control packets based on the node density in the network. This adaptivity mechanism helps deliver data reliably both in sparse networks, in which network connectivity is prone to interruptions, and in dense networks, in which congestion is likely because of the shared wireless medium.
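For illustration only, the following minimal Python sketch mimics the flavour of an epidemic (gossip) dissemination round with a density-adaptive gossip period; the class, the method names and the adaptation rule are hypothetical and are not taken from the EraMobile specification.

    import random

    class GossipNode:
        """Toy node for a gossip-style multicast; illustrative only."""
        def __init__(self, node_id):
            self.node_id = node_id
            self.received = set()          # multicast data ids this node already holds

        def gossip_period(self, neighbour_count, base_period=1.0):
            # Hypothetical adaptivity rule: gossip less often in dense
            # neighbourhoods (to limit redundant traffic) and more often
            # in sparse ones (to keep delivery reliable).
            return base_period * max(1, neighbour_count) ** 0.5

        def gossip_round(self, neighbours, fanout=2):
            # Push the data ids this node knows to a few randomly chosen neighbours.
            if not self.received or not neighbours:
                return
            for peer in random.sample(neighbours, min(fanout, len(neighbours))):
                peer.received |= self.received

    # Toy network: node 0 is the multicast source; run a few gossip rounds.
    nodes = [GossipNode(i) for i in range(10)]
    nodes[0].received.add("data-1")
    for _ in range(5):
        for n in nodes:
            n.gossip_round([m for m in nodes if m is not n])
    print(sum("data-1" in n.received for n in nodes), "of", len(nodes), "nodes reached")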
References
Internet architecture
Wireless networking
Routing protocols
Ad hoc routing protocols | EraMobile | Technology,Engineering | 246 |
33,169,638 | https://en.wikipedia.org/wiki/8tracks.com | 8tracks.com is an internet radio and social networking website revolving around the concept of streaming user-curated playlists consisting of at least 8 tracks. Users create free accounts and can browse the site and listen to other user-created mixes, as well as create their mixes. The site also has a subscription-based service, 8tracks Plus, although this is currently only available to listeners based in the United States and Canada.
8tracks is recognized on Time magazine's 2011 incarnation of its "50 Best Websites" List. 8tracks also received positive press in Wired, CNET, and Business Insider.
Citing difficulties with funding and maintaining royalty payments, 8tracks ceased its services on 31 December 2019. However, on 19 April 2020, 8tracks relaunched under the new ownership and operation of BackBeat Inc.
History
One of founder David Porter's significant influences for the project was Napster, more specifically its "Hotlist" feature, which allowed users to add other users to their "hot list", consequently giving them access to that user's entire library. Also, after having spent three years before business school in London, Porter was fascinated by the social nature of the city's electronic music scene, in which DJs gained cult-like followings and grew those followings primarily through peer referral. Based on these concepts, Porter drafted a business plan entitled "Sampled & Sorted", now the name of his blog, and garnered some initial attention for the project from venture capital firms. However, given his relative inexperience in the business world, Porter joined Live365, gained an understanding of its business model and its strengths and weaknesses, and was able to refine his original proposition. With the rise of Web 2.0, Porter finally decided to found 8tracks in fall 2006 and, after assembling a preliminary team, launched the site on August 8, 2008.
In November 2011, 8tracks made its debut in the Android Market, launching with more than 300,000 mixes. An Android 2.1 or higher device is required to use the app, but Market stats reveal over 10,000 downloads within days of release.
Between 2011 and 2015, 8tracks also provided a catalog of tracks from SoundCloud that DJs could add to their mixes.
In April 2013, 8tracks made its debut in the Windows 8 App Store. Any Windows 8 Pro or RT device, including desktop PCs and tablets was able to access the app.
In early 2016, 8tracks was required to stop offering streaming music via its app outside of the United States and Canada and instead started to use YouTube videos.
While 8tracks initially did not feature commercial interruptions during playlists, it adopted them in 2018 in order to remove its listening cap. Users were able to bypass these ads by buying a subscription service, 8tracks Plus. The cost was $25 for a six-month subscription.
On 26 December 2019, 8tracks announced in a blog post that they intended to cease operations at the end of the year due to a lack of revenue and a lack of interest in their purchase by any larger company. By this time, there were less than 1 million monthly users, down from over 8 million in 2014. However, on 19 April 2020, 8tracks relaunched under the new ownership and operation of BackBeat Inc.
Website and app usage
Listeners were able to search through existing playlists of songs as well as create their playlists. The songs in the playlist were revealed one at a time, and listeners could skip three songs per playlist before they could "skip" onto a different mix, where their three skips were restored. Individual songs within a playlist each featured a direct link to iTunes should the user wish to purchase that song. Users were able to "like" entire mixes or "star" individual tracks within them to facilitate quick access in the future and could also "follow" other users, effectively subscribing to the mixes they created. Users also could embed the mixes they created and share them through social networking sites such as Facebook, Twitter, and Tumblr. 8tracks also could reverse sync with these social networking sites to allow users to easily find their "friends" and expand their network.
Anyone could upload a playlist to become a "DJ" on 8tracks. Mixes needed to include at least eight songs, uploaded from the user's music library or directly accessed from the 8tracks library. When creating a playlist, the site also requires users to add titles, images, descriptions, and at least two tags. When a DJ uploaded songs to the site, they appeared in a list next to where the mix is created. Users searched for mixes by individual artist, specific genre, or by utilizing the "cloud" feature that sorts mixes by tags (i.e., "autumn", "love", "sad", "eclectic"). DJs also had the option to mark mixes as unlisted, which made them private or unsafe for work (NSFW), which hid them from users who opted into a filter.
8tracks had an official Android, iPhone, Windows 8.x, Xbox 360, Mac app, and several unofficial third-party apps.
The Mix Feed gave users a stream of all their favorite tracks, allowed them to search for any artist of interest, or find mixes that include them.
8tracks' development stack was built using Ruby on Rails running on Amazon AWS. For datastores, MySQL (on Amazon RDS) was used. Other database systems used include Redis, Solr, MongoDB, and Graphite. 8tracks also offered an API that other developers could build on, and hosted a forum where they could ask questions of staff.
By requesting a unique artist tag, artists were able to promote their music on 8tracks with a special account. They could create mixes with a combination of their own and others' music, or post full albums via a content-owner account. Using 8tracks to promote their music also let fans interact with the artists. Notable artists who used 8tracks to promote their music include Metric, Bassnectar, Carolina Liar, and B.o.B.
Partnerships and corporate connections
8tracks attempted to reach profitability by partnering with brands looking to open channels of communication with potential consumers through "music-centric interactive marketing" campaigns. For instance, the apparel store/community Threadless partnered with 8tracks to host a monthly contest in which Threadless' warehouse crew judges playlists, and the curator of their favorite mix wins a $50 gift certificate. To promote their new, retro Piiq headphones, Sony ran a contest in conjunction with fashion website Lookbook where users created mixes representative of "A Day in the Life (of You)" and those with the most likes won fashion and music-related prizes. Rolling Stone also added an interactive element to the release of its yearly "Playlist Issue" by compiling genre-specific celebrity- and artist-curated playlists that were hosted through the magazine's 8tracks user page and also embedded on the Rolling Stone website. This integrated media approach significantly allowed otherwise heavily copyrighted music to be streamed legally. Notable curators included Tom Petty, Elton John, Art Garfunkel, Coldplay's Chris Martin, and Metallica's Lars Ulrich. Finally, California hotel chain Joie de Vivre and its partners offered a variety of prizes to DJs who published and generated the most likes on mixes driven by the theme of "California road trip" in order to drive brand awareness during the peak summer travel season.
8tracks partnered with Feature.fm to offer artists the ability to play their songs as "sponsored tracks" to people listening to playlists of the artist's style of music.
8tracks paid royalties to SoundExchange, and ultimately SoundExchange's push for back royalties led to the closure in 2019.
See also
Comparison of on-demand streaming music services
Comparison of digital music stores
Comparison of music streaming services
Comparison of online music lockers
List of music software
List of Internet radio stations
List of online music databases
References
External links
Internet properties established in 2008
Companies based in Toronto
American music websites
Internet radio
American social networking websites
Free-content websites
IOS software
Android (operating system) software
BlackBerry software | 8tracks.com | Technology | 1,700 |
31,902,906 | https://en.wikipedia.org/wiki/PilZ%20domain | The PilZ protein family is named after the type IV pilus control protein first identified in Pseudomonas aeruginosa, expressed as part of the pil operon. It has a cytoplasmic location and is essential for type IV fimbrial, or pilus, biogenesis. PilZ is a c-di-GMP binding domain and PilZ domain-containing proteins represent the best studied class of c-di-GMP effectors. C-di-GMP, cyclic diguanosine monophosphate, the second messenger in cells, is widespread in and unique to the bacterial kingdom. Elevated intracellular levels of c-di-GMP generally cause bacteria to change from a motile single-cell state to a sessile, adhesive surface-attached multicellular state called biofilm.
Proteins which contain PilZ are known to interact with the flagellar switch-complex proteins FliG and FliM, and this interaction is mediated via the c-di-GMP-PilZ complex. The interaction reduces torque generation and induces a counterclockwise motor bias, slowing the motor and biasing it toward counterclockwise rotation, thereby inhibiting chemotaxis.
Binding and mutagenesis studies of several PilZ domain proteins have shown that c-di-GMP binding depends on residues in RxxxR and D/NxSxxG sequence motifs. The crystal structure, at 1.7 Å resolution, of a PilZ domain::c-di-GMP complex from Vibrio cholerae shows c-di-GMP contacting seven of nine strongly conserved residues. Binding of c-di-GMP causes a conformational switch whereby the C- and N-terminal domains are brought into close apposition, forming a new allosteric interaction surface that spans these domains and the c-di-GMP at their interface.
The PilZ domain is also implicated in the bacterial pathogenicity of the Lyme disease spirochaete, Borrelia burgdorferi, through its binding partner c-di-GMP.
References
Protein domains | PilZ domain | Biology | 438 |
265,551 | https://en.wikipedia.org/wiki/X-ray%20machine | An X-ray machine is a device that uses X-rays for a variety of applications including medicine, X-ray fluorescence, electronic assembly inspection, and measurement of material thickness in manufacturing operations. In medical applications, X-ray machines are used by radiographers to acquire x-ray images of the internal structures (e.g., bones) of living organisms, and also in sterilization.
Structure
An X-ray generator generally contains an X-ray tube to produce the X-rays. Alternatively, radioisotopes can also be used to generate X-rays.
An X-ray tube is a simple vacuum tube that contains a cathode, which directs a stream of electrons into the vacuum, and an anode, which collects the electrons and must withstand and dissipate the heat generated by the collisions. When the electrons collide with the target, about 1% of the resulting energy is emitted as X-rays, with the remaining 99% released as heat. Because the electrons reach high, relativistic speeds and deposit so much energy, the target is usually made of tungsten, although other materials can be used, particularly in XRF applications.
An X-ray generator also needs to contain a cooling system to cool the anode; many X-ray generators use water or oil recirculating systems.
Medical imaging
In medical imaging applications, an X-ray machine has a control console that is used by a radiologic technologist to select X-ray techniques suitable for the specific exam; a power supply that produces the desired kVp (peak kilovoltage) and mA (tube current in milliamperes) for the X-ray tube, with exposure often quoted as mAs, which is the mA multiplied by the exposure time in seconds (for example, 200 mA for 0.1 s gives 20 mAs); and the X-ray tube itself.
History
The discovery of X-rays came from experimenting with Crookes tubes, an early experimental electrical discharge tube invented by English physicist William Crookes around 1869–1875. In 1895, Wilhelm Röntgen discovered X-rays emanating from Crookes tubes and the many uses for X-rays were immediately apparent. One of the first X-ray photographs was made of the hand of Röntgen's wife. The image displayed both her wedding ring and bones. On January 18, 1896 an X-ray machine was formally displayed by Henry Louis Smith. A fully functioning unit was introduced to the public at the 1904 World's Fair by Clarence Dally. The technology developed quickly: In 1909 Mónico Sánchez Moreno had produced the first portable medical device and during World War I Marie Curie led the development of X-ray machines mounted in "radiological cars" to provide mobile X-ray services for military field hospitals.
In the 1940s and 1950s, X-ray machines were used in stores to help sell footwear. These were known as Shoe-fitting fluoroscopes. However, as the harmful effects of X-ray radiation were properly considered, they finally fell out of use. Shoe-fitting use of the device was first banned by the state of Pennsylvania in 1957. (They were more a clever marketing tool to attract customers, rather than a fitting aid.) Together with Robert J. Van de Graaff, John G. Trump developed one of the first million-volt X-ray generators.
Overview
An X-ray imaging system consists of a generator control console where the operator selects the technique factors (kVp, mA and exposure time) needed to obtain a quality, readable image; an X-ray generator which controls the X-ray tube current, tube kilovoltage and exposure time; an X-ray tube that converts that electrical power into actual X-rays; and an image detection system, which can be either film (analog technology) or a digital capture system with a PACS.
Applications
X-ray machines are used in health care for visualising bone structures, during surgeries (especially orthopedic) to assist surgeons in reattaching broken bones with screws or structural plates, assisting cardiologists in locating blocked arteries and guiding stent placements or performing angioplasties and for other dense tissues such as tumours. Non-medicinal applications include security and material analysis.
Medicine
The main fields in which x-ray machines are used in medicine are radiography, radiotherapy, and fluoroscopic-type procedures. Radiography is generally used for fast, highly penetrating images, and is usually used in areas with a high bone content but can also be used to look for tumors such as with mammography imaging. Some forms of radiography include:
orthopantomogram — a panoramic x-ray of the jaw showing all the teeth at once
mammography — x-rays of breast tissue
tomography — x-ray imaging in sections
In fluoroscopy, imaging of the digestive tract is done with the help of a radiocontrast agent such as barium sulfate, which is opaque to X-rays.
Radiotherapy — the use of x-ray radiation to treat malignant and benign cancer cells, a non-imaging application
Fluoroscopy is used in cases where real-time visualization is necessary (and is most commonly encountered in everyday life at airport security). Some medical applications of fluoroscopy include:
angiography — used to examine blood vessels in real time along with the placement of stents and other procedures to repair blocked arteries.
barium enema — a procedure used to examine problems of the colon and lower gastrointestinal tract
barium swallow — similar to a barium enema, but used to examine the upper gastrointestinal tract
biopsy — the removal of tissue for examination
Pain management - used to visualize and guide needles for administering or injecting pain medications, steroids or pain-blocking medications throughout the spinal region.
Orthopedic procedures - used to guide the placement and removal of bone-reinforcement plates, rods and fastening hardware that aid the healing process and the proper alignment of bone structures.
X-rays are highly penetrating, ionizing radiation, therefore X-ray machines are used to take pictures of dense tissues such as bones and teeth. This is because bones absorb the radiation more than the less dense soft tissue. X-rays from a source pass through the body and onto a photographic cassette. Areas where radiation is absorbed show up as lighter shades of grey (closer to white). This can be used to diagnose broken or fractured bones.
In 2012, European Commission radiation protection guidance set the leakage radiation limit for X-ray generators, such as X-ray tubes and CT machines, at one mGy/hour at a distance of one metre from the machine.
Security
X-ray machines are used to screen objects non-invasively. Luggage at airports and student baggage at some schools are examined for possible weapons, including bombs. Prices of these luggage X-ray systems vary from $50,000 to $300,000. The main parts of an X-ray baggage inspection system are the generator used to generate X-rays, the detector to detect radiation after it passes through the baggage, a signal processing unit (usually a PC) to process the incoming signal from the detector, and a conveyor system for moving baggage into the system. Portable, battery-powered pulsed X-ray generators are also used in security work to provide EOD (explosive ordnance disposal) responders with safer analysis of any possible target hazard.
Operation
When baggage is placed on the conveyor, it is moved into the machine by the operator. An infrared transmitter and receiver assembly detects the baggage when it enters the tunnel and signals the generator and signal processing system to switch on. The signal processing system processes incoming signals from the detector and reproduces an image based upon the type and density of the material inside the baggage. This image is then sent to the display unit.
Color classification
The colour of the image displayed depends upon the material and its density: organic materials such as paper, clothes and most explosives are displayed in orange. Mixed materials such as aluminum are displayed in green. Inorganic materials such as copper are displayed in blue, and non-penetrable items are displayed in black (some machines display these as a yellowish green or red). The darkness of the colour depends upon the density or thickness of the material.
The material density determination is achieved by a two-layer detector. The layers of detector pixels are separated by a strip of metal. The metal absorbs the soft rays, letting the shorter, more penetrating wavelengths through to the bottom layer of detectors, turning the detector into a crude two-band spectrometer.
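As a rough, purely illustrative sketch of this idea (not a real scanner's algorithm), the following Python snippet maps a pair of low- and high-energy detector readings to the display colours described above; the thresholds and the ratio-based rule are assumptions made for the example only.

    def classify_pixel(low_energy, high_energy):
        """Classify one pixel from low- and high-energy transmission readings,
        scaled so that 1.0 means an unattenuated beam and 0.0 means nothing
        penetrated. Thresholds are illustrative assumptions."""
        if high_energy < 0.05:
            return "black (non-penetrable)"
        # The ratio of low- to high-energy attenuation is a crude proxy for the
        # effective atomic number of the material in the beam path.
        ratio = (1.0 - low_energy) / max(1e-6, 1.0 - high_energy)
        if ratio < 1.1:
            return "orange (organic)"
        elif ratio < 1.5:
            return "green (mixed material)"
        return "blue (inorganic)"

    print(classify_pixel(low_energy=0.70, high_energy=0.72))   # organic-like reading
    print(classify_pixel(low_energy=0.20, high_energy=0.55))   # metal-like reading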
Advances in X-ray technology
A film of carbon nanotubes (as a cathode) that emits electrons at room temperature when exposed to an electrical field has been fashioned into an X-ray device. An array of these emitters can be placed around a target item to be scanned and the images from each emitter can be assembled by computer software to provide a 3-dimensional image of the target in a fraction of the time it takes using a conventional X-ray device. The system also allows rapid, precise control, enabling prospective physiological gated imaging.
Engineers at the University of Missouri (MU), Columbia, have invented a compact source of x-rays and other forms of radiation.
The radiation source is the size of a stick of gum and could be used to create portable x-ray scanners. A prototype handheld x-ray scanner using the source could be manufactured in as soon as three years.
See also
Fluoroscope
Backscatter X-ray e.g., for security scanning passengers (rather than baggage)
X-ray crystallography
Radiography
X-ray fluorescence
X-ray astronomy (detectors)
Notes
References
Generator
Radiography
Aviation security
Explosive detection | X-ray machine | Technology,Engineering | 1,995 |
1,066,621 | https://en.wikipedia.org/wiki/Characteristic%20%28algebra%29 | In mathematics, the characteristic of a ring R, often denoted char(R), is defined to be the smallest positive number of copies of the ring's multiplicative identity (1) that will sum to the additive identity (0). If no such number exists, the ring is said to have characteristic zero.
That is, char(R) is the smallest positive number n such that
1 + ... + 1 = 0   (n summands)
if such a number n exists, and 0 otherwise.
Motivation
The special definition of the characteristic zero is motivated by the equivalent definitions characterized in the next section, where the characteristic zero is not required to be considered separately.
The characteristic may also be taken to be the exponent of the ring's additive group, that is, the smallest positive integer n such that
n · a = 0
for every element a of the ring (again, if n exists; otherwise zero). This definition applies in the more general class of rngs; for (unital) rings the two definitions are equivalent due to the distributive law.
Equivalent characterizations
The characteristic of a ring R is the natural number n such that nZ is the kernel of the unique ring homomorphism from Z to R.
The characteristic is the natural number n such that R contains a subring isomorphic to the factor ring Z/nZ, which is the image of the above homomorphism.
When the non-negative integers {0, 1, 2, 3, ...} are partially ordered by divisibility, then 1 is the smallest and 0 is the largest. Then the characteristic of a ring is the smallest value of n for which n · 1 = 0. If nothing "smaller" (in this ordering) than 0 will suffice, then the characteristic is 0. This is the appropriate partial ordering because of such facts as that char(A × B) is the least common multiple of char(A) and char(B), and that no ring homomorphism f : A → B exists unless char(B) divides char(A).
The characteristic of a ring R is n precisely if the statement ka = 0 for all a in R implies that k is a multiple of n.
Case of rings
If A and B are rings and there exists a ring homomorphism A → B, then the characteristic of B divides the characteristic of A. This can sometimes be used to exclude the possibility of certain ring homomorphisms. The only ring with characteristic 1 is the zero ring, which has only a single element 0 = 1. If a nontrivial ring does not have any nontrivial zero divisors, then its characteristic is either 0 or prime. In particular, this applies to all fields, to all integral domains, and to all division rings. Any ring of characteristic 0 is infinite.
The ring Z/nZ of integers modulo n has characteristic n. If S is a subring of R, then S and R have the same characteristic. For example, if p is prime and q(X) is an irreducible polynomial with coefficients in the field F_p with p elements, then the quotient ring F_p[X]/(q(X)) is a field of characteristic p. Another example: the field C of complex numbers contains the integers Z, so the characteristic of C is 0.
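As a concrete illustration (not part of the original text), the characteristic of Z/nZ can be checked by brute force in a few lines of Python; the function name is arbitrary.

    def characteristic_mod(n):
        """Characteristic of the ring Z/nZ (n >= 1): the smallest k >= 1 such
        that k copies of the multiplicative identity 1 sum to 0 modulo n."""
        total, k = 0, 0
        while True:
            k += 1
            total = (total + 1) % n    # add another copy of 1
            if total == 0:
                return k

    print(characteristic_mod(12))   # 12
    print(characteristic_mod(7))    # 7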
A Z/nZ-algebra is equivalently a ring whose characteristic divides n. This is because for every ring R there is a ring homomorphism Z → R, and this map factors through Z/nZ if and only if the characteristic of R divides n. In this case, for any r in the ring, adding r to itself n times gives nr = 0.
If a commutative ring R has prime characteristic p, then we have (x + y)^p = x^p + y^p for all elements x and y in R – the normally incorrect "freshman's dream" holds for power p.
The map x ↦ x^p then defines a ring homomorphism R → R, which is called the Frobenius homomorphism. If R is an integral domain it is injective.
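A quick numerical check of the freshman's dream in Z/pZ, given as a sketch for illustration only:

    # Verify (x + y)^p = x^p + y^p in Z/pZ for a small prime p.
    p = 7
    holds = all(pow(x + y, p, p) == (pow(x, p, p) + pow(y, p, p)) % p
                for x in range(p) for y in range(p))
    print(holds)   # True: the Frobenius map x -> x^p respects addition in characteristic p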
Case of fields
As mentioned above, the characteristic of any field is either 0 or a prime number. A field of non-zero characteristic is called a field of finite characteristic or positive characteristic or prime characteristic. The characteristic exponent is defined similarly, except that it is equal to 1 when the characteristic is 0; otherwise it has the same value as the characteristic.
Any field F has a unique minimal subfield, also called its prime field. This subfield is isomorphic to either the rational number field Q or a finite field F_p of prime order. Two prime fields of the same characteristic are isomorphic, and this isomorphism is unique. In other words, there is essentially a unique prime field in each characteristic.
Fields of characteristic zero
The most common fields of characteristic zero are the subfields of the complex numbers. The p-adic fields are characteristic zero fields that are widely used in number theory. They have absolute values which are very different from those of complex numbers.
For any ordered field, such as the field of rational numbers Q or the field of real numbers R, the characteristic is 0. Thus, every algebraic number field and the field of complex numbers C are of characteristic zero.
Fields of prime characteristic
The finite field GF(p^n) has characteristic p.
There exist infinite fields of prime characteristic. For example, the field of all rational functions over Z/pZ, the algebraic closure of Z/pZ, or the field of formal Laurent series Z/pZ((T)).
The size of any finite ring of prime characteristic p is a power of p. Since in that case it contains Z/pZ, it is also a vector space over that field, and from linear algebra we know that the sizes of finite vector spaces over finite fields are a power of the size of the field. This also shows that the size of any finite vector space is a prime power.
Notes
References
Sources
Ring theory
Field (mathematics) | Characteristic (algebra) | Mathematics | 1,003 |
71,486,679 | https://en.wikipedia.org/wiki/Trioctylphosphine%20selenide | Trioctylphosphine selenide (TOPSe) is an organophosphorus compound with the formula SeP(C8H17)3. It is used as a source of selenium in the preparation of cadmium selenide. TOPSe is a white, air-stable solid that is soluble in organic solvents. The molecule features a tetrahedral phosphorus center.
Preparation and use
TOPSe is usually prepared by oxidation of trioctylphosphine with elemental selenium:
P(C8H17)3 + Se → SeP(C8H17)3
Often the reaction is conducted without isolation of the TOPSe.
As a solution with trioctylphosphine oxide, TOPSe reacts with dimethylcadmium to give cadmium selenide. The mechanism is proposed to proceed in two steps, beginning with the formation of cadmium metal followed by its oxidation with the TOPSe. Similarly it has been used to produce lead selenide.
References
Organophosphorus compounds
Selenides | Trioctylphosphine selenide | Chemistry | 196 |
25,882,297 | https://en.wikipedia.org/wiki/C25H36O3 | {{DISPLAYTITLE:C25H36O3}}
The molecular formula C25H36O3 (molar mass: 384.552 g/mol) may refer to:
Estradiol enantate
Nandrolone cyclohexanecarboxylate
THCP-O-acetate
Variecolactone
Molecular formulas | C25H36O3 | Physics,Chemistry | 76 |
62,624,514 | https://en.wikipedia.org/wiki/Epichlo%C3%AB%20funkii | Epichloë funkii is a hybrid asexual species in the fungal genus Epichloë.
A systemic and seed-transmissible grass symbiont first described in 2007, Epichloë funkii is a natural allopolyploid of Epichloë elymi and Epichloë festucae.
Epichloë funkii is found in North America, where it has been identified in the grass species Achnatherum robustum.
References
funkii
Fungi described in 2007
Fungi of North America
Fungus species | Epichloë funkii | Biology | 107 |
4,117,384 | https://en.wikipedia.org/wiki/NGC%206302 | NGC 6302 (also known as the Bug Nebula, Butterfly Nebula, or Caldwell 69) is a bipolar planetary nebula in the constellation Scorpius. The structure in the nebula is among the most complex ever seen in planetary nebulae. The spectrum of Butterfly Nebula shows that its central star is one of the hottest stars known, with a surface temperature in excess of 250,000 degrees Celsius, implying that the star from which it formed must have been very large.
The central star, a white dwarf, was identified in 2009, using the upgraded Wide Field Camera 3 on board the Hubble Space Telescope. The star has a current mass of around 0.64 solar masses. It is surrounded by a dense equatorial disc composed of gas and dust. This dense disc is postulated to have caused the star's outflows to form a bipolar structure similar to an hourglass. This bipolar structure shows features such as ionization walls, knots and sharp edges to the lobes.
Observation history
As it is included in the New General Catalogue, this object has been known since at least 1888. The earliest-known study of NGC 6302 is by Edward Emerson Barnard, who drew and described it in 1907.
The nebula featured in some of the first images released after the final servicing mission of the Hubble Space Telescope in September 2009.
Characteristics
NGC 6302 has a complex structure, which may be approximated as bipolar with two primary lobes, though there is evidence for a second pair of lobes that may have belonged to a previous phase of mass loss. A dark lane runs through the waist of the nebula obscuring the central star at all wavelengths.
The nebula contains a prominent northwest lobe which extends up to 3.0′ away from the central star and is estimated to have formed from an eruptive event around 1,900 years ago. It has a circular part whose walls are expanding such that each part has a speed proportional to its distance from the central star. At an angular distance of 1.71′ from the central star, the flow velocity of this lobe is measured to be 263 km/s. At the extreme periphery of the lobe, the outward velocity exceeds 600 km/s. The western edge of the lobe displays characteristics suggestive of a collision with pre-existing globules of gas which modified the outflow in that region.
Central star
The central star, among the hottest stars known, had escaped detection because of a combination of its high temperature (meaning that it radiates mainly in the ultraviolet), the dusty torus (which absorbs a large fraction of the light from the central regions, especially in the ultraviolet) and the bright background from the star. It was not seen in the first Hubble Space Telescope images; the improved resolution and sensitivity of the new Wide Field Camera 3 of the same telescope later revealed the faint star at the centre. A temperature of 200,000 Kelvin is indicated, and a mass of 0.64 solar masses. The original mass of the star was much higher, but most was ejected in the event which created the planetary nebula. The luminosity and temperature of the star indicate it has ceased nuclear burning and is on its way to becoming a white dwarf, fading at a predicted rate of 1% per year.
Dust chemistry
The prominent dark lane that runs through the centre of the nebula has been shown to have an unusual composition, showing evidence for multiple crystalline silicates, crystalline water ice and quartz, with other features which have been interpreted as the first extra-solar detection of carbonates. This detection has been disputed, due to the difficulties in forming carbonates in a non-aqueous environment. The dispute remains unresolved.
One of the characteristics of the dust detected in NGC 6302 is the existence of both oxygen-bearing silicate molecules and carbon-bearing polycyclic aromatic hydrocarbons (PAHs). Stars are usually either oxygen-rich or carbon-rich, the change from the former to the latter occurring late in the evolution of the star due to nuclear and chemical changes in the star's atmosphere. NGC 6302 belongs to a group of objects where hydrocarbon molecules formed in an oxygen-rich environment.
See also
List of largest nebulae
Lists of nebulae
Notes
References
External links
NASA News Release
Discovery of the star
ESA/Hubble News Release
SIMBAD Query Result
Butterfly Nebula at Constellation Guide
069b
6302
Planetary nebulae
Scorpius
Sharpless objects | NGC 6302 | Astronomy | 901 |
54,798,805 | https://en.wikipedia.org/wiki/FSC%20Millport | FSC Millport, run by the Field Studies Council, is located on the island of Great Cumbrae in the Firth of Clyde, Scotland. The field centre was formerly known as the University Marine Biological Station Millport (UMBSM), a higher education institute run by the University of London in partnership with Glasgow University but was closed due to the withdrawal of higher education funding in 2013. FSC reopened the centre in 2014 and continues to host and teach university, school and college groups and to support and host research students from all over the world, whilst also extending its educational reach and providing a variety of courses in natural history and outdoor environmental activities for adult learners and families to enjoy. The centre is a very popular conference venue hosting many international events. The Robertson Museum and Aquarium (named after the founder of the original Marine Station, David Robertson) is open to visitors between March and November. The centre also functions as a Meteorological Office Weather Station and Admiralty Tide Monitor.
History
The Ark, an 84 ft lighter originally moored in the flooded Granton quarry, was fitted out as a floating laboratory by the father of modern oceanography, Sir John Murray. This boat was brought to Port Loy on the Isle of Cumbrae in 1885 and formed the beginnings of the Scottish Marine Station. She attracted a stream of distinguished scientists, drawn by the richness of the fauna and flora of the Firth of Clyde, but gradually fell into disuse after the opening of the Millport Marine Station, and on the night of 20 January 1900 was completely destroyed by a great storm.
In 1894 a committee headed by amateur naturalist David Robertson began to build a marine station on the Isle of Cumbrae and took over the Ark. Sadly David Robertson died before completion of the centre, but in 1897 Millport Marine Biological Station (MMBS) was opened by Sir John Murray. Despite many struggles during its first few decades, in which sufficient funding was difficult to attain and there was much conflict between research priorities and the needs of education, the station persisted.
On 21 July 1904 Scotia, the ship of Dr William Speirs Bruce's Scottish National Antarctic Expedition, returned to her first Scottish landing site at the Keppel Pier on the Isle of Cumbrae.
From this beginning, the station was gradually built up to its present size. The original building proved too small for the purpose and an architectural copy was built alongside. In 1914 the Scottish Marine Biological Association was established at MMBS. In 1922 Sheina Marshall joined the Scottish Marine Laboratory, beginning a scientific career dedicated to the study of plant and animal plankton. She went on to become one of the first women to be elected a Fellow of the Royal Society of Edinburgh, and later became a Fellow of the Royal Society, as well as being awarded the Order of the British Empire in 1966. From 1966 to 1987 the station ran under the Directorship of Ronald Ian Currie FRSE who was responsible for the creation of RV Challenger and RV Calanus.
In 1970 the Scottish Marine Biological Station moved to Dunstaffnage Bay (Oban), and MMBS was taken over by the University of London in partnership with Glasgow University, becoming the University Marine Biological Station Millport (UMBSM). It continued to expand, with a hostel accommodation block opening in 1975.
In December 2012 it was announced that the University Marine Biological Station Millport would be forced to close after the Higher Education Funding Council for England withdrew the grant of 400,000 pounds that it gave to the University of London to run the station. UMBSM closed on 31 October 2013.
Ownership of MMBS was transferred to the Field Studies Council on 1 January 2014. In May 2014 a four-million-pound package of funding was announced that allowed a comprehensive programme of development and refurbishment to be completed over five years. FSC Millport continues to develop and grow as one of the Field Studies Council's centres.
See also
Sheina Marshall
David Robertson (naturalist)
Field Studies Council
References
External links
Official website
Educational institutions established in 1885
Buildings and structures in North Ayrshire
Education in North Ayrshire
Science and technology in Scotland
Biological stations
Marine biology
Field studies centres in the United Kingdom
Millport, Cumbrae
Firth of Clyde
1885 establishments in Scotland
Oceanographic organizations
Scientific organisations based in the United Kingdom | FSC Millport | Biology | 863 |
66,580,704 | https://en.wikipedia.org/wiki/Perfect%20month | A perfect month or a rectangular month designates a month whose number of days is divisible by the number of days in a week and whose first day corresponds to the first day of the week. This causes the arrangement of the days of the month to resemble a rectangle. In the Gregorian calendar, this arrangement can only occur for the month of February.
Constraints
To satisfy such an arrangement in the Gregorian calendar, the number of days in the month must be divisible by seven. Only the month of February of a common year can meet this constraint as the month has 28 days, a multiple of 7.
For a February to be a perfect month, the month must start on the first day of the week (usually considered to be Sunday or Monday). For Sunday-first calendars, this means that the year must start on a Thursday, and for Monday-first calendars, the year must start on a Friday. It must also occur in a common year, as the phenomenon does not occur when February has 29 days.
Occurrence
In the Gregorian calendar, the phenomenon occurs every six years or eleven years following a 6-11-11, 11-6-11, or an 11-11-6 sequence until the end of the 21st century. The most recent perfect months were February 2015 (Sunday-first) and February 2021 (Monday-first). Due to calculation rules, the years 1700, 1800, and 1900 are not leap years, causing a shift in the sequence with a spacing of twelve years between 1698 and 1710, 1795 and 1807, and 1897 and 1909 respectively; however 2094, 2100 and 2106 will all feature perfect months with spacings of six years on Monday-first calendars.
The next perfect months will be February 2026 (Sunday-first) and February 2027 (Monday-first).
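For illustration, a short Python sketch using only the standard library can list such years under either convention; the function name and the year range are arbitrary choices.

    import calendar
    from datetime import date

    def is_perfect_february(year, first_weekday=calendar.SUNDAY):
        """True if February of `year` has 28 days (a common year) and the 1st
        falls on the chosen first day of the week."""
        return (not calendar.isleap(year)
                and date(year, 2, 1).weekday() == first_weekday)

    # Sunday-first perfect months in the first half of the 21st century
    print([y for y in range(2000, 2050) if is_perfect_february(y)])
    # expected: [2009, 2015, 2026, 2037, 2043]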
Attributes
The calendar arrangement brings together notions of harmony and organization.
See also
Palindrome#Dates
Perfectionism (psychology)
Perfectionism (philosophy)
References
Calendars
February
Months | Perfect month | Physics | 411 |
20,757,984 | https://en.wikipedia.org/wiki/Conformity | Conformity or conformism is the act of matching attitudes, beliefs, and behaviors to group norms, politics or like-mindedness. Norms are implicit, specific rules or guidelines, shared by a group of individuals, that guide their interactions with others. People often choose to conform to society rather than to pursue personal desires – because it is often easier to follow the path others have made already, rather than forging a new one. Thus, conformity is sometimes a product of group communication. This tendency to conform occurs in small groups and/or in society as a whole and may result from subtle unconscious influences (a predisposed state of mind), or from direct and overt social pressure. Conformity can occur in the presence of others, or when an individual is alone. For example, people tend to follow social norms when eating or when watching television, even if alone.
The Asch conformity experiment demonstrates how much influence conformity has on people. In a laboratory experiment, Asch asked 50 male students from Swarthmore College in the US to participate in a 'vision test'. Asch put a naive participant in a room with seven confederates (stooges) for a line-judgment task. The confederates had agreed in advance what responses they would give. The naive participant sat in the last position, while the confederates gave apparently incorrect answers in unison; Asch recorded the last person's answer to analyze the influence of conformity. Surprisingly, about one third (32%) of the participants placed in this situation sided with the clearly incorrect majority on the critical trials. Over the 12 critical trials, about 75% of participants conformed at least once. When interviewed afterwards, subjects acknowledged that they did not actually agree with the answers given by others; most of them, however, believed that the group was better informed, or did not want to appear as mavericks, and so chose to repeat the same obvious misconception. It is clear from this that conformity has a powerful effect on human perception and behavior, even to the extent that people will publicly endorse judgments that contradict their own basic beliefs.
Changing one's behaviors to match the responses of others, which is conformity, can be conscious or not. People have an intrinsic tendency to unconsciously imitate other's behaviors such as gesture, language, talking speed, and other actions of the people they interact with. There are two other main reasons for conformity: informational influence and normative influence. People display conformity in response to informational influence when they believe the group is better informed, or in response to normative influence when they are afraid of rejection. When the advocated norm could be correct, the informational influence is more important than the normative influence, while otherwise the normative influence dominates.
People often conform from a desire for security within a group, also known as normative influence—typically a group of a similar age, culture, religion or educational status. This is often referred to as groupthink: a pattern of thought characterized by self-deception, forced manufacture of consent, and conformity to group values and ethics, which ignores realistic appraisal of other courses of action. Unwillingness to conform carries the risk of social rejection. Conformity is often associated in media with adolescence and youth culture, but strongly affects humans of all ages.
Although peer pressure may manifest negatively, conformity can be regarded as either good or bad. Driving on the conventionally-approved side of the road may be seen as beneficial conformity. With the appropriate environmental influence, conforming, in early childhood years, allows one to learn and thus, adopt the appropriate behaviors necessary to interact and develop "correctly" within one's society. Conformity influences the formation and maintenance of social norms, and helps societies function smoothly and predictably via the self-elimination of behaviors seen as contrary to unwritten rules. Conformity was found to impair group performance in a variable environment, but was not found to have a significant effect on performance in a stable environment.
According to Herbert Kelman, there are three types of conformity: 1) compliance (public conformity, motivated by the need for approval or the fear of disapproval); 2) identification (a deeper type of conformism than compliance); 3) internalization (conforming both publicly and privately).
Major factors that influence the degree of conformity include culture, gender, age, size of the group, situational factors, and different stimuli. In some cases, minority influence, a special case of informational influence, can resist the pressure to conform and influence the majority to accept the minority's belief or behaviors.
Definition and context
Definition
Conformity is the tendency to change our perceptions, opinions, or behaviors in ways that are consistent with group norms. Norms are implicit, specific rules shared by a group of individuals on how they should behave. People may be susceptible to conform to group norms because they want to gain acceptance from their group.
Peer
Some adolescents gain acceptance and recognition from their peers by conformity. This peer moderated conformity increases from the transition of childhood to adolescence. It follows a U-shaped age pattern wherein conformity increases through childhood, peaking at sixth and ninth grades and then declines. Adolescents often follow the logic that "if everyone else is doing it, then it must be good and right". However, it is found that they are more likely to conform if peer pressure involves neutral activities such as those in sports, entertainment, and prosocial behaviors rather than anti-social behaviors. Researchers have found that peer conformity is strongest for individuals who reported strong identification with their friends or groups, making them more likely to adopt beliefs and behaviors accepted in such circles.
There is also the factor that the mere presence of a person can influence whether one is conforming or not. Norman Triplett (1898) was the researcher that initially discovered the impact that mere presence has, especially among peers. In other words, all people can affect society. We are influenced by people doing things beside us, whether this is in a competitive atmosphere or not. People tend to be influenced by those who are their own age especially. Co-actors that are similar to us tend to push us more than those who are not.
Social responses
According to Donelson Forsyth, after submitting to group pressures, individuals may find themselves facing one of several responses to conformity. These types of responses to conformity vary in their degree of public agreement versus private agreement.
When an individual finds themselves in a position where they publicly agree with the group's decision yet privately disagrees with the group's consensus, they are experiencing compliance or acquiescence. This is also referenced as apparent conformity. This type of conformity recognizes that behavior is not always consistent with our beliefs and attitudes, which mimics Leon Festinger's cognitive dissonance theory. In turn, conversion, otherwise known as private acceptance or "true conformity", involves both publicly and privately agreeing with the group's decision. In the case of private acceptance, the person conforms to the group by changing their beliefs and attitudes. Thus, this represents a true change of opinion to match the majority.
Another type of social response, which does not involve conformity with the majority of the group, is called convergence. In this type of social response, the group member agrees with the group's decision from the outset and thus does not need to shift their opinion on the matter at hand.
In addition, Forsyth shows that nonconformity can also fall into one of two response categories. Firstly, an individual who does not conform to the majority can display independence. Independence, or dissent, can be defined as the unwillingness to bend to group pressures. Thus, this individual stays true to his or her personal standards instead of the swaying toward group standards. Secondly, a nonconformist could be displaying anticonformity or counterconformity which involves the taking of opinions that are opposite to what the group believes. This type of nonconformity can be motivated by a need to rebel against the status quo instead of the need to be accurate in one's opinion.
To conclude, social responses to conformity can be seen to vary along a continuum from conversion to anticonformity. For example, a popular experiment in conformity research, known as the Asch situation or Asch conformity experiments, primarily includes compliance and independence. Also, other responses to conformity can be identified in groups such as juries, sports teams and work teams.
Main experiments
Sherif's experiment (1935)
Muzafer Sherif was interested in knowing how many people would change their opinions to bring them in line with the opinion of a group. In his experiment, participants were placed in a dark room and asked to stare at a small dot of light 15 feet away. They were then asked to estimate the amount it moved. The trick was that there was no movement at all; the apparent motion was a visual illusion known as the autokinetic effect. The participants gave estimates ranging from 1 to 10 inches. On the first day, each person perceived a different amount of movement, but from the second to the fourth day, the group settled on the same estimate and others conformed to it. Over time, the personal estimates converged with the other group members' estimates once their judgments were discussed aloud. Sherif suggested this was a simulation of how social norms develop in a society, providing a common frame of reference for people. His findings emphasize that people rely on others to interpret ambiguous stimuli and new situations.
Subsequent experiments were based on more realistic situations. In an eyewitness identification task, participants were shown a suspect individually and then in a lineup of other suspects. They were given one second to identify him, making it a difficult task. One group was told that their input was very important and would be used by the legal community. To the other it was simply an experiment. Being more motivated to get the right answer increased the tendency to conform. Those who wanted to be more accurate conformed 51% of the time as opposed to 35% in the other group. Sherif's study provided a framework for subsequent studies of influence such as Solomon Asch's 1955 study.
Asch's experiment (1951)
Solomon E. Asch conducted a modification of Sherif's study, assuming that when the situation was very clear, conformity would be drastically reduced. He exposed people in a group to a series of lines, and the participants were asked to match one line with a standard line. All participants except one were accomplices and gave the wrong answer in 12 of the 18 trials.
The results showed a surprisingly high degree of conformity: 74% of the participants conformed on at least one trial. On average people conformed one third of the time. A question is how the group would affect individuals in a situation where the correct answer is less obvious.
After his first test, Asch wanted to investigate whether the size or the unanimity of the majority had greater influence on test subjects. "Which aspect of the influence of a majority is more important – the size of the majority or its unanimity? The experiment was modified to examine this question. In one series the size of the opposition was varied from one to 15 persons." The results clearly showed that as more people opposed the subject, the subject became more likely to conform. However, the increasing majority was only influential up to a point: once the subject faced three or more opponents, conformity exceeded 30%.
Besides that, this experiment showed that conformity is powerful but also fragile. It is powerful because merely having the actors give a wrong answer led the participant to give the same wrong answer, even though they knew it was not correct. It is also fragile, however, because in one variant of the experiment one of the actors gave the correct answer, acting as an "ally" of the participant. With an ally, the participant was more likely to give the correct answer than when the majority was unanimous. In addition, if the participant was able to write down the answer instead of saying it out loud, he was also more likely to give the correct answer, because the answers were hidden and he was not afraid of appearing different from the rest of the group.
Milgram's shock experiment (1961)
This experiment was conducted by Yale University psychologist Stanley Milgram in order to portray obedience to authority. It measured the willingness of participants (men aged 20 to 50 from a diverse range of occupations with different levels of education) to obey instructions from an authority figure to deliver what they believed were real electric shocks (in fact, fake ones) that would gradually increase to apparently fatal levels. Even though these instructions went against their personal conscience, 65% of the participants shocked all the way to 450 volts, fully obeying the instruction, even if they did so reluctantly. Additionally, all participants shocked to at least 300 volts.
In this experiment, the subjects received no punishments or rewards for obeying or disobeying; all they could expect was approval or disapproval from the experimenter, so they had no external motive pushing them toward or away from carrying out the immoral orders. One of the most important factors in the experiment was the position of the authority figure relative to the subject (the shocker), along with the position of the learner (the one being shocked). Conformity was reduced when either the authority figure or the learner shared the room with the subject. When the authority figure was in another room and only phoned in the orders, the obedience rate went down to 20.5%. When the learner was in the same room as the subject, the obedience rate dropped to 40%.
Stanford prison experiment (August 15–21, 1971)
This experiment, led by psychology professor Philip G. Zimbardo, recruited Stanford students through a local newspaper ad and screened them to be both physically and mentally healthy. Subjects were randomly assigned the role of either "prisoner" or "guard" for an extended period of time within a mock prison set up on the Stanford University campus. The study was planned to run for two weeks but was abruptly cut short because of the behavior the subjects were exhibiting: the "guards" took on tyrannical and discriminatory characteristics while the "prisoners" showed blatant signs of depression and distress.
In essence, this study revealed a great deal about conformity and power imbalance. For one, it demonstrates how a situation can shape behavior and predominate over personality, attitudes, and individual morals. Those chosen to be "guards" were not mean-spirited, but the situation they were put in made them act according to their role. Furthermore, the study illustrates the idea that humans conform to expected roles: good people (i.e. the guards before the experiment) were transformed into perpetrators of evil, and healthy people (i.e. the prisoners before the experiment) developed pathological reactions, both effects traceable to situational forces. The experiment also demonstrated the notion of the banality of evil, the idea that evil is not something special or rare but something latent in ordinary people.
Varieties
Harvard psychologist Herbert Kelman identified three major types of conformity.
Compliance is public conformity while possibly keeping one's own original beliefs to oneself. Compliance is motivated by the need for approval and the fear of being rejected.
Identification is conforming to someone who is liked and respected, such as a celebrity or a favorite uncle. It can be motivated by the attractiveness of the source, and it is a deeper type of conformity than compliance.
Internalization is accepting the belief or behavior and conforming both publicly and privately, if the source is credible. It is the deepest influence on people, and it will affect them for a long time.
Although Kelman's distinction has been influential, research in social psychology has focused primarily on two varieties of conformity: informational conformity, or informational social influence, and normative conformity, also called normative social influence. In Kelman's terminology, these correspond to internalization and compliance, respectively. Naturally, far more than two or three social variables influence human psychology and conformity; the notion of "varieties" of conformity based on "social influence" is therefore somewhat ambiguous and hard to define precisely in this context.
According to Deutsch and Gérard (1955), conformity results from a motivational conflict (between the fear of being socially rejected and the wish to say what we think is correct) that leads to normative influence, and a cognitive conflict (others create doubts in what we think) which leads to informational influence.
Informational influence
Informational social influence occurs when one turns to the members of one's group to obtain and accept accurate information about reality. A person is most likely to use informational social influence in certain situations: when a situation is ambiguous, people become uncertain about what to do and are more likely to depend on others for the answer; and during a crisis, when immediate action is necessary despite panic. Looking to other people can help ease fears, but they are not always right. The more knowledgeable a person is, the more valuable they are as a resource; thus, people often turn to experts for help. But once again people must be careful, as experts can make mistakes too. Informational social influence often results in internalization or private acceptance, where a person genuinely believes that the information is right.
Normative influence
Normative social influence occurs when one conforms in order to be liked or accepted by the members of the group. This need for social approval and acceptance is part of the human condition. In addition, people who do not conform with their group, and are therefore deviants, are less liked and may even be punished by the group. Normative influence usually results in public compliance, doing or saying something without believing in it. Asch's 1951 experiment is one example of normative influence, although John Turner et al. argued that the post-experimental interviews showed that respondents were uncertain about the correct answers in some cases: the answers may have been evident to the experimenters, but the participants did not have the same experience. Subsequent studies pointed out that the participants were not known to each other and therefore did not face a real threat of social rejection. See: Normative influence vs. referent informational influence
In a reinterpretation of the original data from these experiments, Hodges and Geyer (2006) found that Asch's subjects were not so conformist after all: the experiments provide powerful evidence of people's tendency to tell the truth even when others do not, and compelling evidence of people's concern for others and their views. By closely examining the situation in which Asch's subjects found themselves, they noted that the situation places multiple demands on participants: truth (i.e., expressing one's own view accurately), trust (i.e., taking seriously the value of others' claims), and social solidarity (i.e., a commitment to integrating the views of self and others without deprecating either). In addition to these epistemic values, there are multiple moral claims as well, including the need for participants to care for the integrity and well-being of the other participants, the experimenter, themselves, and the worth of scientific research.
Deutsch & Gérard (1955) designed different situations that variated from Asch' experiment and found that when participants were writing their answer privately, they gave the correct one.
Normative influence, a function of social impact theory, has three components. The number of people in the group has a surprising effect. As the number increases, each person has less of an impact. A group's strength is how important the group is to a person. Groups we value generally have more social influence. Immediacy is how close the group is in time and space when the influence is taking place. Psychologists have constructed a mathematical model using these three factors and are able to predict the amount of conformity that occurs with some degree of accuracy.
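Latané's formulation is often summarized (the exact parameter values vary across publications, so the figures here are illustrative rather than definitive) by a psychosocial power law in which each additional source of influence adds less impact than the one before it:

\[ I = s \cdot N^{t}, \qquad 0 < t < 1 \]

where \(I\) is the total social impact felt by the target, \(s\) is a scaling constant absorbing the strength and immediacy of the sources, \(N\) is the number of sources, and \(t\) is an exponent usually estimated to be below 1, capturing the diminishing marginal effect of each extra group member.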
Baron and his colleagues conducted a second eyewitness study that focused on normative influence. In this version, the task was easier. Each participant had five seconds to look at a slide instead of just one second. Once again, there were both high and low motives to be accurate, but the results were the reverse of the first study. The low motivation group conformed 33% of the time (similar to Asch's findings). The high motivation group conformed less at 16%. These results show that when accuracy is not very important, it is better to get the wrong answer than to risk social disapproval.
An experiment using procedures similar to Asch's found that there was significantly less conformity in six-person groups of friends as compared to six-person groups of strangers. Because friends already know and accept each other, there may be less normative pressure to conform in some situations. Field studies on cigarette and alcohol abuse, however, generally demonstrate evidence of friends exerting normative social influence on each other.
Minority influence
Although conformity generally leads individuals to think and act more like groups, individuals are occasionally able to reverse this tendency and change the people around them. This is known as minority influence, a special case of informational influence. Minority influence is most likely when people can make a clear and consistent case for their point of view. If the minority fluctuates and shows uncertainty, the chance of influence is small. However, a minority that makes a strong, convincing case increases the probability of changing the majority's beliefs and behaviors. Minority members who are perceived as experts, are high in status, or have benefited the group in the past are also more likely to succeed.
Another form of minority influence can sometimes override conformity effects and lead to unhealthy group dynamics. A 2007 review of two dozen studies by the University of Washington found that a single "bad apple" (an inconsiderate or negligent group member) can substantially increase conflicts and reduce performance in work groups. Bad apples often create a negative emotional climate that interferes with healthy group functioning. They can be avoided by careful selection procedures and managed by reassigning them to positions that require less social interaction.
Specific predictors
Culture
Stanley Milgram found that individuals in Norway (from a collectivistic culture) exhibited a higher degree of conformity than individuals in France (from an individualistic culture). Similarly, Berry studied two different populations: the Temne (collectivists) and the Inuit (individualists) and found that the Temne conformed more than the Inuit when exposed to a conformity task.
Bond and Smith compared 134 studies in a meta-analysis and found that there is a positive correlation between a country's level of collectivistic values and conformity rates in the Asch paradigm. Bond and Smith also reported that conformity has declined in the United States over time.
Influenced by the writings of late-19th- and early-20th-century Western travelers, scholars or diplomats who visited Japan, such as Basil Hall Chamberlain, George Trumbull Ladd and Percival Lowell, as well as by Ruth Benedict's influential book The Chrysanthemum and the Sword, many scholars of Japanese studies speculated that there would be a higher propensity to conform in Japanese culture than in American culture. However, this view was formed not on the basis of empirical evidence collected in a systematic way, but rather on the basis of anecdotes and casual observations, which are subject to a variety of cognitive biases. Modern scientific studies comparing conformity in Japan and the United States show that Americans conform in general as much as the Japanese and, in some situations, even more. Psychology professor Yohtaro Takano from the University of Tokyo, along with Eiko Osaka, reviewed four behavioral studies and found that the rate of conformity errors that Japanese subjects manifested in the Asch paradigm was similar to that manifested by Americans. A study published in 1970 by Robert Frager of the University of California, Santa Cruz found that the percentage of conformity errors within the Asch paradigm was significantly lower in Japan than in the United States, especially in the prize condition. Another study published in 2008, which compared the level of conformity among Japanese in-groups (peers from the same college clubs) with that found among Americans, found no substantial difference in the level of conformity manifested by the two nations, even in the case of in-groups.
Gender
Societal norms often establish gender differences and researchers have reported differences in the way men and women conform to social influence. For example, Alice Eagly and Linda Carli performed a meta-analysis of 148 studies of influenceability. They found that women are more persuadable and more conforming than men in group pressure situations that involve surveillance. Eagly has proposed that this sex difference may be due to different sex roles in society. Women are generally taught to be more agreeable whereas men are taught to be more independent.
The composition of the group plays a role in conformity as well. In a study by Reitan and Shaw, it was found that men and women conformed more when there were participants of both sexes involved versus participants of the same sex. Subjects in the groups with both sexes were more apprehensive when there was a discrepancy amongst group members, and thus the subjects reported that they doubted their own judgments.
Sistrunk and McDavid argued that women conformed more because of a methodological bias. They argued that because the stereotyped topics used in studies were generally masculine ones (sports, cars, etc.) rather than feminine ones (cooking, fashion, etc.), women felt uncertain and conformed more, which was confirmed by their results.
Age
Research has noted age differences in conformity. For example, research with Australian children and adolescents aged 3 to 17 found that conformity decreases with age. Another study examined individuals ranging in age from 18 to 91; the results revealed a similar trend, with older participants displaying less conformity than younger participants.
In the same way that gender has been viewed as corresponding to status, age has also been argued to have status implications. Berger, Rosenholtz and Zelditch suggest that age as a status role can be observed among college students. Younger students, such as those in their first year in college, are treated as lower-status individuals and older college students are treated as higher-status individuals. Therefore, given these status roles, it would be expected that younger individuals (low status) conform to the majority whereas older individuals (high status) would be expected not to conform.
Researchers have also reported an interaction of gender and age on conformity. Eagly and Chrvala examined the role of age (under 19 years vs. 19 years and older), gender, and surveillance (anticipating responses to be shared with group members vs. not anticipating responses being shared) on conformity to group opinions. They discovered that among participants who were 19 years or older, females conformed to group opinions more than males when under surveillance (i.e., when they anticipated that their responses would be shared with group members). However, there were no gender differences in conformity among participants who were under 19 years of age in the surveillance conditions, and no gender differences when participants were not under surveillance. In a subsequent research article, Eagly suggests that women are more likely to conform than men because of women's lower-status roles in society: more submissive behavior (i.e., conforming) is expected of individuals who hold low-status roles. Still, Eagly and Chrvala's results conflict with previous research which has found higher conformity levels among younger rather than older individuals.
Size of the group
Although conformity pressures generally increase as the size of the majority increases, Asch's 1951 experiment found that increasing the size of the group had no additional impact beyond a majority of size three. Brown and Byrne's 1997 study offered a possible explanation: people may suspect collusion when the majority exceeds three or four. Gerard's 1968 study reported a linear relationship between group size and conformity when the group size ranges from two to seven people. According to Latané's 1981 study, the number in the majority is one factor that influences the degree of conformity; others include strength and immediacy.
Moreover, a study suggests that the effects of group size depend on the type of social influence operating. This means that in situations where the group is clearly wrong, conformity will be motivated by normative influence; the participants will conform in order to be accepted by the group. A participant may not feel much pressure to conform when the first person gives an incorrect response. However, conformity pressure will increase as each additional group member also gives the same incorrect response.
Situational factors
Research has found various group and situational factors that affect conformity. Accountability increases conformity: if an individual is trying to be accepted by a group that has certain preferences, then the individual is more likely to conform to match the group. Similarly, the attractiveness of group members increases conformity: if an individual wishes to be liked by the group, they are increasingly likely to conform.
Accuracy also affects conformity: the more accurate and reasonable the majority is in its decision, the more likely the individual is to conform. As mentioned earlier, size also affects an individual's likelihood of conforming: the larger the majority, the more likely an individual is to conform to it. Similarly, the less ambiguous the task or decision is, the more likely someone is to conform to the group; when tasks are ambiguous, people feel less pressure to conform. Task difficulty also increases conformity, especially when the difficult task is also important.
Research has also found that as individuals become more aware that they disagree with the majority, they feel more pressure and hence are more likely to conform to the group's decisions. Likewise, when responses must be made face-to-face, individuals are more likely to conform, so conformity increases as the anonymity of responses within a group decreases. Conformity also increases when individuals have committed themselves to the group making the decisions.
Conformity has also been shown to be linked to cohesiveness. Cohesiveness is how strongly the members of a group are linked together, and conformity has been found to increase as group cohesiveness increases. Similarly, conformity is higher when individuals are committed to and wish to stay in the group. Conformity is also higher in situations involving existential thoughts that cause anxiety; in these situations, individuals are more likely to conform to the majority's decisions.
Different stimuli
In 1961 Stanley Milgram published a study in which he utilized Asch's conformity paradigm using audio tones instead of lines; he conducted his study in Norway and France. He found substantially higher levels of conformity than Asch, with participants conforming 50% of the time in France and 62% of the time in Norway during critical trials. Milgram also conducted the same experiment once more, but told participants that the results of the study would be applied to the design of aircraft safety signals. His conformity estimates were 56% in Norway and 46% in France, suggesting that individuals conformed slightly less when the task was linked to an important issue. Stanley Milgram's study demonstrated that Asch's study could be replicated with other stimuli, and that in the case of tones, there was a high degree of conformity.
Neural correlates
Evidence has been found for the involvement of the posterior medial frontal cortex (pMFC) in conformity, an area associated with memory and decision-making. For example, Klucharev et al. revealed in their study that by using repetitive transcranial magnetic stimulation on the pMFC, participants reduced their tendency to conform to the group, suggesting a causal role for the brain region in social conformity.
Neuroscience has also shown how people quickly develop similar values for things. Opinions of others immediately change the brain's reward response in the ventral striatum to receiving or losing the object in question, in proportion to how susceptible the person is to social influence. Having similar opinions to others can also generate a reward response.
The amygdala and hippocampus have also been found to be recruited when individuals participated in a social manipulation experiment involving long-term memory. Several other areas have further been suggested to play a role in conformity, including the insula, the temporoparietal junction, the ventral striatum, and the anterior and posterior cingulate cortices.
More recent work stresses the role of orbitofrontal cortex (OFC) in conformity not only at the time of social influence, but also later on, when participants are given an opportunity to conform by selecting an action. In particular, Charpentier et al. found that the OFC mirrors the exposure to social influence at a subsequent time point, when a decision is being made without the social influence being present. The tendency to conform has also been observed in the structure of the OFC, with a greater grey matter volume in high conformers.
See also
Authoritarian personality
Bandwagon effect
Behavioral contagion
Convention (norm)
Conventional wisdom
Countersignaling
Cultural assimilation
Honne and tatemae
Knowledge falsification
Milieu control
Preference falsification
Propaganda: The Formation of Men's Attitudes
Spiral of silence
Social inertia
References
External links
Group processes
Organizational behavior
Social influence
Social agreement | Conformity | Biology | 6,899 |
27,717,039 | https://en.wikipedia.org/wiki/LY-293284 | LY-293284 is a research chemical developed by the pharmaceutical company Eli Lilly and used for scientific studies. It acts as a potent and selective 5-HT1A receptor full agonist. It was derived through structural simplification of the ergoline based psychedelic LSD, but is far more selective for 5-HT1A with over 1000× selectivity over other serotonin receptor subtypes and other targets. It has anxiogenic effects in animal studies.
See also
8-OH-DPAT
RDS-127
RU-28306
References
Serotonin receptor agonists
Anxiogenics
Drugs developed by Eli Lilly and Company
Tryptamines
Ketones | LY-293284 | Chemistry | 145 |
76,645,072 | https://en.wikipedia.org/wiki/NEW%20ID | NEW ID is a digital streaming service and television channel established in 2019 that broadcasts Korean entertainment content, including K-dramas, K-pop, and Korean movies. It is available on various international platforms such as Xumo, LG Channels, Pluto TV, and Samsung TV Plus.
History
NEW ID was founded in 2019, initially available on platforms such as Xumo and LG TV. Over time, it expanded its offerings to include Samsung TV Plus, aiming to reach a broader international audience in the US.
Content and Programming
NEW ID broadcasts a variety of Korean programming, ranging from K-dramas and K-pop concerts to classic Korean movies. The service regularly updates its content library and has obtained FAST/AVOD broadcasting rights for several shows and events. Additionally, NEW ID distributes content from other Asian markets, including channels like OnDemand China and Rakuten Viki.
NEW KPOP
NEW KMOVIES
NEW KFOOD
My Little Pet
PINKFONG BABY SHARK TV
RAKUTEN VIKI
ON DEMAND CHINA
World Billiards TV
Toony Planet
MUBEAT
SBS KDrama
ROMCOM K-DRAMA
YTN News
BINGE Korea
BINGE Korea is another streaming service focusing on Korean content. In 2023, it was made available on BMW's in-car systems, allowing vehicle owners to access its content through the multimedia systems. BINGE Korea also expanded its availability in the United States by launching on several major streaming devices and platforms including Samsung, LG, Roku, and Amazon Fire TV.
Launched Platforms (BINGE Korea)
Samsung
LG
Roku
Amazon Fire TV
Xperi on BMW
VIZIO
External links
References
Television networks in South Korea
Streaming television | NEW ID | Technology | 339 |
3,431,556 | https://en.wikipedia.org/wiki/Turbosteamer | A turbosteamer is a BMW combined cycle engine using a waste heat recovery unit. Waste heat energy from the internal combustion engine is used to generate steam for a steam engine which creates supplemental power for the vehicle. The turbosteamer device is affixed to the exhaust and cooling system. It salvages the heat wasted in the exhaust and radiator (as much as 80% of heat energy) and uses a steam piston or turbine to relay that power to the crankshaft. The steam circuit produces and of torque at peak (for a 1.8 straight-4 engine), yielding an estimated 15% gain in fuel efficiency. Unlike gasoline-electric hybrids, these gains increase at higher, steadier speeds.
Timescale
BMW pioneered this concept as early as 2000 under the direction of Dr. Raymond Freymann, and while the company was designing the system to fit most current BMW models, the technology did not reach production.
See also
COGAS
Cogeneration
Exhaust heat recovery system
Still engine
Turbo-compound engine
Publications
R. Freymann, W. Strobl, A. Obieglo: The Turbosteamer: A System Introducing the Principle of Cogeneration in Automotive Applications. Motortechnische Zeitschrift, MTZ 05/2008 Jahrgang 69, pp.404-412.
References
External links
Gizmag article discussing BMW's turbosteamer
Article on BMW's alternative Combined Cycle Hybrid technology
Looking for the next gram. BMW Group. Retrieved 5 December 2011.
Engine technology
Steam power | Turbosteamer | Physics,Technology | 319 |
68,849,776 | https://en.wikipedia.org/wiki/DL%20Tauri | DL Tauri is a young T Tauri-type pre-main sequence stars in the constellation of Taurus about away, belonging to the Taurus Molecular Cloud. It is partially obscured by the foreground gas cloud rich in carbon monoxide, and is still accreting mass, producing 0.14 due to release of accretion energy. The stellar spectrum shows the lines of ionized oxygen, nitrogen, sulfur and iron.
Protoplanetary disk
The star is surrounded by a massive (0.029 ) protoplanetary disk, which is extensive yet relatively flattened and rich in large grains, indicating a significantly evolved state. With such a large mass, the disk could possibly form a brown dwarf. The region of the disk about 100 AU from the star may be on the verge of gravitational instability. The disk has multiple dust rings with poorly resolved gaps between them.
Suspected planetary companion
The object 2MASS J04333960+2520420, designated DL Tau/cc1 in 2008, is a suspected superjovian planet with a mass of about 12 on a likely bound orbit around DL Tauri. The object is either a sub-brown dwarf, a low-mass brown dwarf, or even a low-mass ultra-cool red dwarf star if strongly veiled by an accretion disk, which is not unusual for young star systems.
References
T Tauri stars
Circumstellar disks
Taurus (constellation)
J04333906+2520382
Hypothetical planetary systems
Tauri, DL | DL Tauri | Astronomy | 306 |
73,038 | https://en.wikipedia.org/wiki/List%20of%20computer%20display%20standards | Computer display standards are a combination of aspect ratio, display size, display resolution, color depth, and refresh rate. They are associated with specific expansion cards, video connectors, and monitors.
History
Various computer display standards or display modes have been used in the history of the personal computer. They are often a combination of aspect ratio (specified as width-to-height ratio), display resolution (specified as the width and height in pixels), color depth (measured in bits per pixel), and refresh rate (expressed in hertz). Associated with the screen resolution and refresh rate is a display adapter. Earlier display adapters were simple frame-buffers, but later display standards also specified a more extensive set of display functions and software controlled interface.
Beyond display modes, the VESA industry organization has defined several standards related to power management and device identification, while ergonomics standards are set by the TCO.
Standards
A number of common resolutions have been used with computers descended from the original IBM PC. Some of these are now supported by other families of personal computers. These are de facto standards, usually originated by one manufacturer and reverse-engineered by others, though the VESA group has co-ordinated the efforts of several leading video display adapter manufacturers. Video standards associated with IBM-PC-descended personal computers are shown in the diagram and table below, alongside those of early Macintosh and other makes for comparison. (From the early 1990s onwards, most manufacturers moved over to PC display standards thanks to widely available and affordable hardware).
Display resolution prefixes
Although the common standard prefixes super and ultra do not indicate specific modifiers to base standard resolutions, several others do:
Quarter (Q or q)
A quarter of the base resolution. E.g. QVGA, a term for a 320×240 resolution, half the width and height of VGA, hence the quarter total resolution. The "Q" prefix usually indicates "Quad" (4 times as many, not 1/4 times as many) in higher resolutions, and sometimes "q" is used instead of "Q" to specify quarter (by analogy with SI prefixes m/M), but this usage is not consistent.
Wide (W)
The base resolution increased by extending the width while keeping the height constant, for square or near-square pixels on a widescreen display, usually with an aspect ratio of either 16:9 (adding an extra 1/3rd width vs a standard 4:3 display) or 16:10 (adding an extra 1/5th). However, it is sometimes used to denote a resolution with roughly the same total pixel count, but in a different aspect ratio and sharing neither the horizontal nor the vertical resolution—typically a 16:10 resolution which is narrower but taller than the 16:9 option, and therefore larger in both dimensions than the base standard (e.g., compare 1366×768 and 1280×800, both commonly labelled as "WXGA", vs the base 1024×768 "XGA").
Quad(ruple) (Q)
Four times as many pixels compared to the base resolution, i.e. twice the horizontal and vertical resolution respectively.
Hex(adecatuple) (H)
Sixteen times as many pixels compared to the base resolution, i.e. four times the horizontal and vertical resolutions respectively.
Super (S), eXtended (X), Plus (+) and/or Ultra (U)
Vaguer terms denoting successive incremental steps up the resolution ladder from some comparative, more established base, usually somewhat less severe a jump than quartering or Quadrupling—typically less than doubling, and sometimes not even as much of a change as making a "wide" version; for example SVGA (800×600 vs 640×480), SXGA (1280×1024 vs 1024×768), SXGA+ (1400×1050 vs 1280×1024) and UXGA (1600×1200 vs 1024×768 - or more fittingly, vs the 1280×1024 of SXGA, the conceptual "next step down" at the time of UXGA's inception, or the 1400×1050 of SXGA+). Given the use of "X" in "XGA", it is not often used as an additional modifier (e.g. there is no such thing as XVGA except as an alternative designation for SXGA) unless its meaning would be unambiguous.
These prefixes are also often combined, as in WQXGA or WHUXGA, with levels of stacking not hindered by the same consideration towards readability as the decline of the added "X" - especially as there is not even a defined hierarchy or value for S/X/U/+ modifiers.
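A minimal sketch of how these modifiers compose arithmetically is shown below (the helper functions are hypothetical and purely illustrative; real-world mode names such as WXGA are naming conventions that often deviate from this strict arithmetic):

```python
# Illustrative only: derive pixel dimensions from a base mode plus prefix modifiers.
# Actual marketing names (especially "W" modes) frequently round to other values.

BASE_MODES = {
    "VGA": (640, 480),   # 4:3
    "XGA": (1024, 768),  # 4:3
}

def quarter(w, h):
    """'q'/'Q' as quarter: half the width and half the height (1/4 of the pixels)."""
    return w // 2, h // 2

def quad(w, h):
    """'Q' as quad: double the width and height (4x the pixels)."""
    return w * 2, h * 2

def hexadecatuple(w, h):
    """'H': four times the width and height (16x the pixels)."""
    return w * 4, h * 4

def wide(w, h, aspect=(16, 10)):
    """'W': keep the height, widen to the requested aspect ratio."""
    return h * aspect[0] // aspect[1], h

if __name__ == "__main__":
    print("QVGA:", quarter(*BASE_MODES["VGA"]))        # (320, 240)
    print("QXGA:", quad(*BASE_MODES["XGA"]))           # (2048, 1536)
    print("HXGA:", hexadecatuple(*BASE_MODES["XGA"]))  # (4096, 3072)
    print("WXGA:", wide(*BASE_MODES["XGA"]))           # (1228, 768), cf. the common 1280x800 / 1366x768
```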
See also
Display resolution; this also lists the display resolutions of standard and HD televisions, which are sometimes used as computer monitors.
Graphics display resolution
List of common resolutions
List of video connectors
References
External links
Display the resolution and color bit depth of your current monitor
Calculate screen dimensions according to format and diagonal
Calculate and compare display sizes, resolutions, and source material
Standard resolutions used for computer graphics equipment, TV and video applications and mobile devices.
Large image of graphic card history tree
Computer display standards
Graphics standards
VESA | List of computer display standards | Technology | 1,116 |
7,964,642 | https://en.wikipedia.org/wiki/Czechoslovak%20border%20fortifications | Czechoslovakia built a system of border fortifications as well as some fortified defensive lines inland, from 1935 to 1938 as a defensive countermeasure against the rising threat of Nazi Germany. The objective of the fortifications was to prevent the taking of key areas by an enemy—not only Germany but also Hungary and Poland—by means of a sudden attack before the mobilization of the Czechoslovak Army could be completed, and to enable effective defense until allies—Britain and France, and possibly the Soviet Union—could help.
History
With the rise of Hitler and his demands for unification of German minorities, including the Sudeten Germans, and the return of other claimed territories—Sudetenland—the alarmed Czechoslovak leadership began defensive plans. While some basic defensive structures were built early on, it was not until after conferences with the French military on their design that a full-scale effort began.
A change in the design philosophy was noticeable in the "pillboxes" and in larger blockhouses similar to those of the French Maginot Line when the massive construction program began in 1936. The original plan was to have the first stage of construction finished in 1941–1942, while the full system was to be completed by the early 1950s.
Construction was very rapid, and by the time of the Munich Agreement in September 1938 a total of 264 heavy blockhouses (small forts or elements of strongholds) and 10,014 light pillboxes had been completed, representing about 20 percent of the planned heavy objects and 70 percent of the planned light objects. Moreover, many other objects were near completion and would have been functional at least as shelters, despite some structures missing certain heavy armaments.
After the German occupation of Czechoslovakia border regions as a result of the Sudeten Crisis, the Germans used these objects to test and develop new weapons and tactics, plan, and practice the attacks eventually used against the Maginot Line and Belgium's forts, resulting in astounding success. After the fall of France and the Low Countries, the Germans began to dismantle the "Beneš Wall", blowing up the cupolas, or removing them and the embrasures, some of which were eventually installed in the Atlantic Wall.
Later in the war, with the Soviet forces to the east collapsing the German front, the Germans hurriedly repaired what they could of the fortifications, often just bricking up the holes where the embrasures once were, leaving a small hole for a machine gun. The east–west portion of the line that ran from Ostrava to Opava, a river valley with a steep rise to the south, became the scene of heavy fighting.
During World War II the Germans removed many armored parts such as domes, cupolas and embrasures from the majority of the objects. Some objects were used as targets for testing penetration shells or explosives and are heavily damaged. In the post-war period, many of the remaining armoured parts were scrapped because they had lost their strategic value and because of the general demand for steel.
After the war they were further stripped of useful materials and then sealed. A couple of the large underground structures continued to be used long after as military hardware storage, and some still are to this day, by the once again independent Czech Army.
Design
The basic philosophy of the design was a mutual defensive line, that is, most of the firepower was directed laterally from the approaching enemy. The facing wall of all the fortifications, large and small, was the thickest, covered with boulders and debris, and covered again with soil so even the largest caliber shells would have lost most of their energy before reaching the concrete. The only frontal armament was machine gun ports in cupolas designed for observation and anti-infantry purposes. Any enemy units that tried to go between the blockhouses would have been stopped by anti-tank, anti-infantry barricades, machine gun and cannon fire. A few of the larger blockhouses, or artillery forts, had indirect fire mortars and heavy cannon mounts. Behind the major structures were two rows of smaller four-to-seven-man pillboxes that mirrored their larger relatives, with a well protected front and lateral cross fire to stop any enemy that managed to get on top of the fort, or come up from behind. Most of the lines consisted of just the smaller pillboxes.
The "light objects" were simple hollow boxes with one or two machine gun positions, a retractable observation periscope, grenade tubes, a hand-operated air blower, and a solid inner door at 90 degrees to a steel bar outer door. The machine gun was mounted near the end of the barrel, so that the port hole was only large enough for the bullets and a scope to see through, unlike most other designs where a large opening is used. A heavy steel plate could be slid down to quickly close the tiny hole for added protection.
The "heavy objects" were infantry blockhouses very similar to the southern part of the Maginot Line, but with substantial improvements. Just like the pillboxes, the cannons and machine guns were pivoted at the tip, and this time fully enclosed, protecting the occupants from all but the heaviest of cannons. The fortresses had a full ventilation system with filtration so even chemical attacks would not affect the defenders. Besides grid power, a two-cylinder diesel engine provided internal power. These fortifications also had full toilet and wash basin amenities, a luxury compared to its French counterpart casemates – however, these facilities were designed to be used only during the combat. While largely hollow with a few concrete walls as part of the structure, each chamber was further divided into smaller rooms by simple brick and mortar walls, with a last gap at the ceiling filled with tarred cork, since construction of a few of the casemates stopped before the internal walls were finished.
Current state
Today almost all of the remaining light objects are freely accessible. Some of the heavy objects are also accessible, and others may be rented or sold to enthusiasts. A certain number have been turned into museums, and two of the artillery fortifications, namely "Adam" and "Smolkov", are military munitions depots; Fort "Hůrka" was a munitions depot until 2008 and is now a museum. The "Hanička" Artillery Fort was being rebuilt into a modern shelter for the Ministry of the Interior between 1979 and 1993 but was declared unneeded in 1995. A museum has been created there.
Many of the open museums are located between Ostrava and Opava, with more near the town of Králíky, close to Kłodzko and the present Polish border, which had been the German border before World War II. Of the nine artillery forts that were either completed or under construction by September 1938, six now function as museums, two are still used by the military, and one is not accessible.
See also
Czechoslovak border fortifications during the Cold War
Fall Grün
Maginot Line
Museum of the fortifications in Hlučín
Rupnik Line
Çakmak Line
References
Further reading
Fura, Z.; Katzl, M. (2010). The 40 Most Interesting Czech WWII Bunkers: A Brief Guide. PragueHouse. .
Halter, M. (2011). History of the Maginot Line. Strasbourg. .
Kauffmann, J.; Jurga, R. (2002). Fortress Europe: European Fortifications of World War II. Da Capo. .
External links
Interactive map of Czechoslovak border fortification system
Major site on Czech military, fortification section
Military History of East Bohemia
Czechoslovak border fortifications
General military – amateur historical groups site
Czechoslovak border fortifications – large database of bunkers
Czechoslovak defenses – database of heavy fortification objects
Czechoslovak light fortifications – with an extensive summary in English
20th-century fortifications
World War II defensive lines
World War II sites in the Czech Republic
World War II sites in Slovakia
Borders of Czechoslovakia
Munich Agreement
Tunnel warfare | Czechoslovak border fortifications | Engineering | 1,591 |
552,466 | https://en.wikipedia.org/wiki/System%20identification | The field of system identification uses statistical methods to build mathematical models of dynamical systems from measured data. System identification also includes the optimal design of experiments for efficiently generating informative data for fitting such models as well as model reduction. A common approach is to start from measurements of the behavior of the system and the external influences (inputs to the system) and try to determine a mathematical relation between them without going into many details of what is actually happening inside the system; this approach is called black box system identification.
Overview
A dynamic mathematical model in this context is a mathematical description of the dynamic behavior of a system or process in either the time or frequency domain. Examples include:
physical processes such as the movement of a falling body under the influence of gravity;
economic processes such as stock markets that react to external influences.
One of the many possible applications of system identification is in control systems. For example, it is the basis for modern data-driven control systems, in which concepts of system identification are integrated into the controller design, and lay the foundations for formal controller optimality proofs.
Input-output vs output-only
System identification techniques can utilize both input and output data (e.g. eigensystem realization algorithm) or can include only the output data (e.g. frequency domain decomposition). Typically an input-output technique would be more accurate, but the input data is not always available.
Optimal design of experiments
The quality of system identification depends on the quality of the inputs, which are under the control of the systems engineer. Therefore, systems engineers have long used the principles of the design of experiments. In recent decades, engineers have increasingly used the theory of optimal experimental design to specify inputs that yield maximally precise estimators.
White- and black-box
One could build a white-box model based on first principles, e.g. a model for a physical process from Newton's equations, but in many cases such models will be overly complex and possibly even impossible to obtain in a reasonable time due to the complex nature of many systems and processes.
A more common approach is therefore to start from measurements of the behavior of the system and the external influences (inputs to the system) and try to determine a mathematical relation between them without going into the details of what is actually happening inside the system. This approach is called system identification. Two types of models are common in the field of system identification:
grey box model: although the peculiarities of what is going on inside the system are not entirely known, a certain model based on both insight into the system and experimental data is constructed. This model does however still have a number of unknown free parameters which can be estimated using system identification. One example uses the Monod saturation model for microbial growth. The model contains a simple hyperbolic relationship between substrate concentration and growth rate, but this can be justified by molecules binding to a substrate without going into detail on the types of molecules or types of binding. Grey box modeling is also known as semi-physical modeling.
black box model: No prior model is available. Most system identification algorithms are of this type.
In the context of nonlinear system identification Jin et al. describe grey-box modeling by assuming a model structure a priori and then estimating the model parameters. Parameter estimation is relatively easy if the model form is known but this is rarely the case. Alternatively, the structure or model terms for both linear and highly complex nonlinear models can be identified using NARMAX methods. This approach is completely flexible and can be used with grey box models where the algorithms are primed with the known terms, or with completely black-box models where the model terms are selected as part of the identification procedure. Another advantage of this approach is that the algorithms will just select linear terms if the system under study is linear, and nonlinear terms if the system is nonlinear, which allows a great deal of flexibility in the identification.
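As a minimal black-box sketch of the least-squares idea (illustrative code, not one of the cited toolboxes), a first-order ARX model y(t) = a·y(t−1) + b·u(t−1) + e(t) can be estimated directly from measured input–output data:

```python
import numpy as np

def identify_arx1(u, y):
    """Fit y[t] = a*y[t-1] + b*u[t-1] + e[t] by ordinary least squares.

    u, y : 1-D arrays of measured input and output samples.
    Returns the estimated parameter vector [a, b].
    """
    # Regressor matrix: each row holds the past output and the past input.
    Phi = np.column_stack([y[:-1], u[:-1]])
    target = y[1:]
    theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    return theta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulate a "true" plant y[t] = 0.8*y[t-1] + 0.5*u[t-1] + noise,
    # excited by a random (informative) input signal.
    n = 500
    u = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1] + 0.05 * rng.standard_normal()
    a_hat, b_hat = identify_arx1(u, y)
    print(f"estimated a ~ {a_hat:.3f}, b ~ {b_hat:.3f}")  # close to 0.8 and 0.5
```

The quality of the estimate depends on how informative the input signal u is, which is exactly the experiment-design question discussed earlier.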
Identification for control
In control systems applications, the objective of engineers is to obtain good performance of the closed-loop system, which comprises the physical system, the feedback loop and the controller. This performance is typically achieved by designing the control law based on a model of the system, which needs to be identified starting from experimental data. If the model identification procedure is aimed at control purposes, what really matters is not to obtain the best possible model that fits the data, as in the classical system identification approach, but to obtain a model that is satisfactory for the closed-loop performance. This more recent approach is called identification for control, or I4C for short.
The idea behind I4C can be better understood by considering the following simple example. Consider a system with true transfer function :
and an identified model :
From a classical system identification perspective, is not, in general, a good model for . In fact, modulus and phase of are different from those of at low frequency. What is more, while is an asymptotically stable system, is a simply stable system. However, may still be a model good enough for control purposes. In fact, if one wants to apply a purely proportional negative feedback controller with high gain , the closed-loop transfer function from the reference to the output is, for
and for
Since is very large, one has that . Thus, the two closed-loop transfer functions are indistinguishable. In conclusion, is a perfectly acceptable identified model for the true system if such feedback control law has to be applied. Whether or not a model is appropriate for control design depends not only on the plant/model mismatch but also on the controller that will be implemented. As such, in the I4C framework, given a control performance objective, the control engineer has to design the identification phase in such a way that the performance achieved by the model-based controller on the true system is as high as possible.
Sometimes, it is even more convenient to design a controller without explicitly identifying a model of the system, but directly working on experimental data. This is the case of direct data-driven control systems.
Forward model
A common understanding in artificial intelligence is that the controller has to generate the next move for a robot. For example, the robot starts in a maze and then decides to move forward. Model predictive control determines the next action indirectly. The term "model" refers to a forward model, which does not provide the correct action directly but simulates a scenario. A forward model is the equivalent of a physics engine used in game programming: it takes an input and calculates the future state of the system.
Dedicated forward models are constructed because they allow the overall control process to be divided. The first question is how to predict the future states of the system, that is, how to simulate the plant over a timespan for different input values. The second task is to search for a sequence of input values that brings the plant into a goal state; this is called predictive control.
The forward model is the most important aspect of a MPC-controller. It has to be created before the solver can be realized. If it's unclear what the behavior of a system is, it's not possible to search for meaningful actions. The workflow for creating a forward model is called system identification. The idea is to formalize a system in a set of equations which will behave like the original system. The error between the real system and the forward model can be measured.
There are many techniques available to create a forward model: ordinary differential equations are the classical one, used in physics engines such as Box2D. A more recent technique is to use a neural network to create the forward model.
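The following sketch illustrates the two-step split described above with a deliberately tiny, assumed plant and cost function (it is not model predictive control as implemented in practice, just the forward-model-plus-search idea):

```python
import itertools

def forward_model(state, action, dt=0.1):
    """Toy forward model: a point mass with position and velocity,
    accelerated by the chosen action (assumed dynamics, for illustration)."""
    pos, vel = state
    vel = vel + action * dt
    pos = pos + vel * dt
    return pos, vel

def rollout_cost(state, actions, goal=1.0):
    """Simulate a sequence of actions and return the distance to the goal position."""
    for a in actions:
        state = forward_model(state, a)
    return abs(state[0] - goal)

def predictive_control(state, horizon=5, choices=(-1.0, 0.0, 1.0)):
    """Exhaustively search short action sequences and return the best first action."""
    best = min(itertools.product(choices, repeat=horizon),
               key=lambda seq: rollout_cost(state, seq))
    return best[0]

if __name__ == "__main__":
    state = (0.0, 0.0)
    for _ in range(20):
        a = predictive_control(state)
        state = forward_model(state, a)
    print("final position ~", round(state[0], 2))  # should end up near the goal at 1.0
```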
See also
Black box
Generalized filtering
Hysteresis
Structural identifiability
System realization
Parameter estimation
Linear time-invariant system theory
Model selection
Nonlinear autoregressive exogenous model
Open system (systems theory)
Pattern recognition
System dynamics
Systems theory
Model order reduction
Grey box completion and validation
Data-driven control system
Black box model of power converter
References
Further reading
Daniel Graupe: Identification of Systems, Van Nostrand Reinhold, New York, 1972 (2nd ed., Krieger Publ. Co., Malabar, FL, 1976)
Eykhoff, Pieter: System Identification – Parameter and System Estimation, John Wiley & Sons, New York, 1974.
Lennart Ljung: System Identification — Theory For the User, 2nd ed, PTR Prentice Hall, Upper Saddle River, N.J., 1999.
Jer-Nan Juang: Applied System Identification, Prentice-Hall, Upper Saddle River, N.J., 1994.
Oliver Nelles: Nonlinear System Identification, Springer, 2001.
T. Söderström, P. Stoica, System Identification, Prentice Hall, Upper Saddle River, N.J., 1989.
R. Pintelon, J. Schoukens, System Identification: A Frequency Domain Approach, 2nd Edition, IEEE Press, Wiley, New York, 2012.
Spall, J. C. (2003), Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control, Wiley, Hoboken, NJ.
External links
L. Ljung: Perspectives on System Identification, July 2008
System Identification and Model Reduction via Empirical Gramians
Classical control theory
Dynamical systems
Engineering statistics
Identification
Biological models | System identification | Physics,Mathematics,Engineering,Biology | 1,894 |
69,009,928 | https://en.wikipedia.org/wiki/Curium%28III%29%20bromide | Curium(III) bromide is the bromide salt of curium. It has an orthorhombic crystal structure.
Preparation
Curium bromide can be produced by reacting curium chloride and ammonium bromide in a hydrogen atmosphere at 400–450 °C.
It can also be produced by reacting curium(III) oxide and hydrobromic acid at 600 °C.
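Written out as idealized, balanced equations (shown here for illustration; side products and reaction conditions are simplified), the two routes are:

\[ \mathrm{CmCl_3 + 3\,NH_4Br \longrightarrow CmBr_3 + 3\,NH_4Cl} \]
\[ \mathrm{Cm_2O_3 + 6\,HBr \longrightarrow 2\,CmBr_3 + 3\,H_2O} \]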
Properties
Curium bromide is an ionic compound composed of Cm3+ and Br−, appearing as a colorless solid. It is orthorhombic, with space group Cmcm (No. 63) and lattice parameters a = 405 pm, b = 1266 pm and c = 912 pm. Its crystal structure is isostructural with plutonium(III) bromide.
References
Curium compounds
bromides
Actinide halides | Curium(III) bromide | Chemistry | 170 |
62,140,469 | https://en.wikipedia.org/wiki/H3K4me1 | H3K4me1 is an epigenetic modification to the DNA packaging protein Histone H3. It is a mark that indicates the mono-methylation at the 4th lysine residue of the histone H3 protein and often associated with gene enhancers.
Nomenclature
H3K4me1 indicates monomethylation of lysine 4 on the histone H3 protein subunit: H3 denotes the histone H3 family, K is the one-letter abbreviation for lysine, 4 is the position of the residue in the protein (counting from the N-terminus), and me1 indicates a single methyl group on that residue.
Lysine methylation
This diagram shows the progressive methylation of a lysine residue. The mono-methylation (second from left) denotes the methylation present in H3K4me1.
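The naming convention is regular enough to be parsed mechanically; the following sketch (a hypothetical helper, not part of any standard bioinformatics library) splits a mark such as H3K4me1 into its components:

```python
import re

# histone (H3), residue (K = lysine), position (4), modification (me1 = monomethylation)
MARK_PATTERN = re.compile(r"^(H[1-4][AB]?)([KRST])(\d+)(me[123]|ac|ub|ph)$")

RESIDUES = {"K": "lysine", "R": "arginine", "S": "serine", "T": "threonine"}
MODS = {"me1": "monomethylation", "me2": "dimethylation",
        "me3": "trimethylation", "ac": "acetylation",
        "ub": "ubiquitination", "ph": "phosphorylation"}

def parse_mark(name):
    """Split a histone-mark name into histone, residue, position and modification."""
    m = MARK_PATTERN.match(name)
    if not m:
        raise ValueError(f"unrecognized histone mark: {name}")
    histone, residue, pos, mod = m.groups()
    return {"histone": histone, "residue": RESIDUES[residue],
            "position": int(pos), "modification": MODS[mod]}

if __name__ == "__main__":
    print(parse_mark("H3K4me1"))
    # {'histone': 'H3', 'residue': 'lysine', 'position': 4, 'modification': 'monomethylation'}
```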
Understanding histone modifications
The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin. The basic structural unit of chromatin is the nucleosome: this consists of the core octamer of histones (H2A, H2B, H3 and H4) as well as a linker histone and about 180 base pairs of DNA. These core histones are rich in lysine and arginine residues. The carboxyl (C) terminal end of these histones contribute to histone-histone interactions, as well as histone-DNA interactions. The amino (N) terminal charged tails are the site of the post-translational modifications, such as the one seen in H3K4me1.
Mechanism and function of modification
H3K4me1 is enriched at active and primed enhancers. Transcriptional enhancers control cell-identity gene expression and are important for cell identity. Enhancers are primed by the histone H3K4 mono-/di-methyltransferase MLL4 and are then activated by the histone H3K27 acetyltransferase p300. H3K4me1 fine-tunes enhancer activity and function rather than strictly controlling them. H3K4me1 is deposited by KMT2C (MLL3) and KMT2D (MLL4).
LSD1, and the related LSD2/KDM1B demethylate H3K4me1 and H3K4me2.
Marks associated with active gene transcription like H3K4me1 and H3K9me1 have very short half-lives.
H3K4me1 with MLL3/4 can also act at promoters and repress genes.
Relationship with other modifications
H3K4me1 is a chromatin signature of enhancers, H3K4me2 is highest toward the 5′ end of transcribing genes and H3K4me3 is highly enriched at promoters and in poised genes. H3K27me3, H4K20me1 and H3K4me1 silence transcription in embryonic fibroblasts, macrophages, and human embryonic stem cells (ESCs).
Enhancers that have two opposing marks like the active mark H3K4me1 and repressive mark H3K27me3 at the same time are called bivalent or poised. These bivalent enhancers convert and become enriched with H3K4me1 and acetylated H3K27 (H3K27ac) after differentiation.
Epigenetic implications
The post-translational modification of histone tails by either histone-modifying complexes or chromatin-remodelling complexes is interpreted by the cell and leads to complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between the histones in a particular region. The current understanding and interpretation of histones comes from two large-scale projects: ENCODE and the Epigenomic Roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states, which define genomic regions by grouping the interactions of different proteins and/or histone modifications together.
Chromatin states were investigated in Drosophila cells by looking at the binding locations of proteins in the genome. Use of ChIP-sequencing revealed regions in the genome characterised by different banding. Different developmental stages were profiled in Drosophila as well, with an emphasis placed on the relevance of histone modifications. A look into the data obtained led to the definition of chromatin states based on histone modifications. Certain modifications were mapped, and enrichment was seen to localize in certain genomic regions. Five core histone modifications were found, each linked to various cell functions.
H3K4me1- primed enhancers
H3K4me3-promoters
H3K36me3-gene bodies
H3K27me3-polycomb repression
H3K9me3-heterochromatin
The human genome was annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence enforces the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. This additional level of annotation allows for a deeper understanding of cell specific gene regulation.
Clinical significance
Suppression of the H3K4 mono- and di-demethylase LSD-1 might extend lifespan in various species.
H3K4me allows binding of MDB and increased activity of DNMT1 which could give rise to CpG island methylator phenotype (CIMP). CIMP is a type of colorectal cancers caused by the inactivation of many tumor suppressor genes from epigenetic effects.
Methods
The histone mark H3K4me1 can be detected in a variety of ways:
1. Chromatin Immunoprecipitation Sequencing (ChIP-sequencing) measures the amount of DNA enrichment once bound to a targeted protein and immunoprecipitated. It results in good optimization and is used in vivo to reveal DNA-protein binding occurring in cells. ChIP-Seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region.
2. Micrococcal Nuclease sequencing (MNase-seq) is used to investigate regions that are bound by well positioned nucleosomes. Use of the micrococcal nuclease enzyme is employed to identify nucleosome positioning. Well positioned nucleosomes are seen to have enrichment of sequences.
3. Assay for transposase-accessible chromatin sequencing (ATAC-seq) is used to look into regions that are nucleosome-free (open chromatin). It uses the hyperactive Tn5 transposase to highlight nucleosome localisation.
See also
Histone methylation
Histone methyltransferase
Methyllysine
References
Epigenetics
Post-translational modification | H3K4me1 | Chemistry | 1,457 |
23,642,794 | https://en.wikipedia.org/wiki/Apache%20Cordova | Apache Cordova (formerly PhoneGap) is a mobile application development framework created by Nitobi. Adobe Systems purchased Nitobi in 2011, rebranded it as PhoneGap, and later released an open-source version of the software called Apache Cordova. Apache Cordova enables software programmers to build hybrid web applications for mobile devices using CSS3, HTML5, and JavaScript, instead of relying on platform-specific APIs like those in Android, iOS, or Windows Phone. It enables the wrapping up of CSS, HTML, and JavaScript code depending on the platform of the device. It extends the features of HTML and JavaScript to work with the device. The resulting applications are hybrid, meaning that they are neither truly native mobile applications nor purely Web-based. They are not native because all layout rendering is done via Web views instead of the platform's native UI framework. They are not Web apps because they are packaged as apps for distribution and have access to native device APIs. Mixing native and hybrid code snippets has been possible since version 1.9.
The software was previously called just "PhoneGap", then "Apache Callback".
PhoneGap was Adobe's commercial version of Cordova along with its associated ecosystem. Many other tools and frameworks are also built on top of Cordova, including Ionic, Monaca, VoltBuilder, TACO, Onsen UI, GapDebug, App Builder, Cocoon, Framework7, Quasar Framework, Evothings Studio, NSB/AppStudio, Mobiscroll, and Telerik Platform. These tools use Cordova, and not PhoneGap for their core tools.
Contributors to the Apache Cordova project include Adobe, BlackBerry, Google, IBM, Intel, Microsoft, Mozilla, and others.
History
PhoneGap was first developed by Nitobi Software at an iPhoneDevCamp event in San Francisco in August 2008. Apple Inc. has confirmed that the framework has its approval, even with the change to clause 3.3.1 of the Apple iPhone SDK developer license agreement 4.0 adopted in 2010. The PhoneGap framework is used by several mobile application platforms such as Monaca, appMobi, Convertigo, ViziApps, and Worklight as the backbone of their mobile client development engine.
Adobe acquired Nitobi Software on October 3, 2011. The PhoneGap code was subsequently contributed to the Apache Software Foundation to start a new project called Apache Cordova. The project's original name, Apache Callback, was viewed as too generic. It also appears in Adobe Systems as Adobe PhoneGap and also as Adobe PhoneGap Build.
Early versions of PhoneGap required an Apple computer to create iOS apps and a Windows computer to create Windows Mobile apps. After September 2012, Adobe's PhoneGap Build service allowed programmers to upload CSS, HTML, and JavaScript source code to a "cloud compiler" that generated apps for every supported platform. This service was discontinued in 2020.
Design and rationale
The core of an Apache Cordova application uses CSS3 and HTML5 for rendering and JavaScript for logic. HTML5 provides access to underlying hardware such as the accelerometer, camera, and GPS. However, browser support for HTML5-based device access is not consistent across mobile browsers, particularly older versions of Android. To overcome these limitations, Apache Cordova embeds the HTML5 code inside a native WebView on the device, using a foreign function interface to access its native resources.
Apache Cordova can be extended with native plug-ins, allowing developers to add functionality that can be called from JavaScript and that communicates directly between the native layer and the HTML5 page. These plugins allow access to the device's accelerometer, camera, compass, file system, microphone, and more; a minimal sketch of how a plugin is invoked from JavaScript is shown below.
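As an illustration only, the following sketch shows the general pattern for calling a plugin through Cordova's JavaScript-to-native bridge. The plugin name "EchoPlugin" and its "echo" action are hypothetical placeholders; real plugins define their own service names, actions and argument lists.

```javascript
// Wait until Cordova has loaded and the native bridge is available.
document.addEventListener('deviceready', onDeviceReady, false);

function onDeviceReady() {
    // cordova.exec(success, error, service, action, args) routes a call from
    // the WebView's JavaScript to native plugin code registered under the
    // given service name ("EchoPlugin" here is purely illustrative).
    cordova.exec(
        function (result) {            // success callback, invoked by the native side
            console.log('Native layer replied: ' + result);
        },
        function (err) {               // error callback
            console.error('Plugin call failed: ' + err);
        },
        'EchoPlugin',                  // hypothetical service (plugin) name
        'echo',                        // hypothetical action implemented natively
        ['Hello from the WebView']     // arguments passed to the native layer
    );
}
```

In practice most published plugins wrap this bridge behind a friendlier JavaScript API (the camera plugin, for example, exposes navigator.camera), so application code rarely calls cordova.exec directly.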
However, the use of Web-based technologies leads some Apache Cordova applications to run slower than native applications with similar functionality.
Supported platforms
As of version 11, Apache Cordova supports development for the operating systems Apple iOS, Google Android, Windows 8.1, Windows Phone 8.1, Windows 10 and Electron (software framework) (which in turn runs on Windows, Linux and macOS). Earlier versions of Apache Cordova supported Bada, BlackBerry, Firefox OS, LG webOS, Microsoft Windows Phone (7 and 8), macOS, Nokia Symbian OS, Tizen (SDK 2.x), and Ubuntu Touch.
See also
List of rich web application frameworks
Quasar Framework
RhoMobile Suite
Cocos2d
WinJS
NativeScript
Xamarin
Flutter
Titanium SDK
Appery.io
References
Bibliography
External links
How to create a Cordova Plugin from Scratch
2009 software
Android (operating system) development software
Cordova
BlackBerry development software
Communication software
Integrated development environments
Mobile technology companies
Rich web application frameworks | Apache Cordova | Technology | 1,024 |
66,020,925 | https://en.wikipedia.org/wiki/Fazila%20Samadova | Fazila Ibrahim gizi Samadova, also known as Fazila Samedova (29 March 1929, Shamakhi – 8 January 2020, Baku), was an Azerbaijani academician and chemical engineer-technologist. She was a Doctor of Chemical Sciences, Honored Scientist, Member of the New York Academy of Sciences, Head of the Laboratory at the Research Institute of Petrochemical Processes, Corresponding Member of the Azerbaijan National Academy of Sciences (ANAS), and Honorary Scientist of Europe.
Early life and education
Fazila Samadova was born on 29 March 1929 in Shamakhi, Republic of Azerbaijan. In 1946, she graduated from Baku secondary school No. 132 with a gold medal. The same year Samadova was admitted to the Faculty of Chemical Technology of the Azerbaijan Industrial Institute named after M. Azizbayov.
After graduating from the institute in 1951, Samedova received a diploma in chemical engineering and continued her studies in 1951-1955 as a postgraduate student at the Gubkin Russian State University of Oil and Gas.
Career
Samadova started her first scientific developments in the 1950s as a postgraduate student at the Gubkin Russian State University of Oil and Gas and at the Azerbaijan Industrial Institute, after which she continued them already at the Institute of Petroleum Industry. In 1956, she defended her dissertation on "The effect of the chemical composition of oils on their performance properties" and received the degree of Candidate of Technical Sciences.
From 1960 to 1981, Samadova worked in the laboratory "Chemistry and Technology of Oils" of the Azerbaijan State Institute of Petrochemical Processes named after M. Aliyev as a senior researcher. The results of research conducted by Samadova in 1960-1973 were reflected in the dissertation of the scientist on "Research of obtaining distillate and residual oils from Baku paraffin oils with high cost-effective technologies and prospects for production in Azerbaijan" and she successfully defended her doctorate.
From 1982, Samadova was in charge of the laboratory "Research of oils and technology of oils". From 1986, she headed the Petroleum Research and Oil Technology Laboratory of the NKPI.
Samadova was awarded the title of Professor of Oil and Gas Technology (1987), the title of Honored Scientist of Azerbaijan (1991) and elected a full member of the New York Academy of Sciences. Since 2001, she has become a corresponding member of Azerbaijan National Academy of Sciences (ANAS).
The results of Samadova's research work are reflected in 530 scientific works, including 24 monographs, 1 publicist book, 64 author's certificates and patents. One of her monographs is dedicated to the memory of her brother, explorer Fuad Samedov, the discoverer of Oil Rocks.
Samadova was repeatedly elected a member of Academic Councils and Specialized Scientific Commissions, both in Azerbaijan and outside the country. She was a member of the editorial board of two scientific journals. Under her leadership, 4 doctors of sciences and 20 doctors of philosophy were trained.
Samadova died on 8 January 2020 in Baku at the age of 91.
Awards
In 2004 Samadova received the Shohrat Order.
In 2012, Samadova was awarded the title of Honorary Scientist of Europe and Alexander von Humboldt Medal.
References
1929 births
2020 deaths
Azerbaijani women scientists
Recipients of the Shohrat Order
Women chemical engineers
Azerbaijani women academics
Soviet chemical engineers | Fazila Samadova | Chemistry | 678 |
646,116 | https://en.wikipedia.org/wiki/K3%20surface | In mathematics, a complex analytic K3 surface is a compact connected complex manifold of dimension 2 with a trivial canonical bundle and irregularity zero. An (algebraic) K3 surface over any field means a smooth proper geometrically connected algebraic surface that satisfies the same conditions. In the Enriques–Kodaira classification of surfaces, K3 surfaces form one of the four classes of minimal surfaces of Kodaira dimension zero. A simple example is the Fermat quartic surface
x^4 + y^4 + z^4 + w^4 = 0 in complex projective 3-space.
Together with two-dimensional compact complex tori, K3 surfaces are the Calabi–Yau manifolds (and also the hyperkähler manifolds) of dimension two. As such, they are at the center of the classification of algebraic surfaces, between the positively curved del Pezzo surfaces (which are easy to classify) and the negatively curved surfaces of general type (which are essentially unclassifiable). K3 surfaces can be considered the simplest algebraic varieties whose structure does not reduce to curves or abelian varieties, and yet where a substantial understanding is possible. A complex K3 surface has real dimension 4, and it plays an important role in the study of smooth 4-manifolds. K3 surfaces have been applied to Kac–Moody algebras, mirror symmetry and string theory.
It can be useful to think of complex algebraic K3 surfaces as part of the broader family of complex analytic K3 surfaces. Many other types of algebraic varieties do not have such non-algebraic deformations.
Definition
There are several equivalent ways to define K3 surfaces. The only compact complex surfaces with trivial canonical bundle are K3 surfaces and compact complex tori, and so one can add any condition excluding the latter to define K3 surfaces. For example, it is equivalent to define a complex analytic K3 surface as a simply connected compact complex manifold of dimension 2 with a nowhere-vanishing holomorphic 2-form. (The latter condition says exactly that the canonical bundle is trivial.)
There are also some variants of the definition. Over the complex numbers, some authors consider only the algebraic K3 surfaces. (An algebraic K3 surface is automatically projective.) Or one may allow K3 surfaces to have du Val singularities (the canonical singularities of dimension 2), rather than being smooth.
Calculation of the Betti numbers
The Betti numbers of a complex analytic K3 surface are computed as follows. (A similar argument gives the same answer for the Betti numbers of an algebraic K3 surface over any field, defined using l-adic cohomology.) By definition, the canonical bundle is trivial, and the irregularity q(X) (the dimension of the coherent sheaf cohomology group ) is zero. By Serre duality,
As a result, the arithmetic genus (or holomorphic Euler characteristic) of X is:
On the other hand, the Riemann–Roch theorem (Noether's formula) says:
where is the i-th Chern class of the tangent bundle. Since is trivial, its first Chern class is zero, and so .
Next, the exponential sequence gives an exact sequence of cohomology groups , and so . Thus the Betti number is zero, and by Poincaré duality, is also zero. Finally, is equal to the topological Euler characteristic
Since and , it follows that .
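In summary, the standard computation sketched above runs as follows (writing h^i for the dimension of the i-th cohomology group of the structure sheaf and b_i for the Betti numbers):

\begin{align*}
h^2(X,\mathcal{O}_X) &= h^0(X,K_X) = h^0(X,\mathcal{O}_X) = 1 && \text{(Serre duality, canonical bundle trivial)}\\
\chi(X,\mathcal{O}_X) &= h^0 - h^1 + h^2 = 1 - 0 + 1 = 2 && (q(X) = h^1 = 0)\\
\chi(X,\mathcal{O}_X) &= \tfrac{1}{12}\bigl(c_1(X)^2 + c_2(X)\bigr) && \text{(Noether's formula)}\\
c_1(X) &= 0 \;\Rightarrow\; c_2(X) = 12 \cdot 2 = 24 &&\\
b_1(X) &= 0, \qquad b_3(X) = b_1(X) = 0 && \text{(exponential sequence, Poincar\'e duality)}\\
e(X) &= b_0 - b_1 + b_2 - b_3 + b_4 = 2 + b_2(X) = c_2(X) = 24 \;\Rightarrow\; b_2(X) = 22.
\end{align*}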
Properties
Any two complex analytic K3 surfaces are diffeomorphic as smooth 4-manifolds, by Kunihiko Kodaira.
Every complex analytic K3 surface has a Kähler metric, by Yum-Tong Siu. (Analogously, but much easier: every algebraic K3 surface over a field is projective.) By Shing-Tung Yau's solution to the Calabi conjecture, it follows that every complex analytic K3 surface has a Ricci-flat Kähler metric.
The Hodge numbers of any K3 surface are listed in the Hodge diamond:
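For reference, the standard values of the Hodge diamond of a K3 surface, which the discussion below assumes, are:

\begin{array}{ccccc}
 & & 1 & & \\
 & 0 & & 0 & \\
1 & & 20 & & 1 \\
 & 0 & & 0 & \\
 & & 1 & &
\end{array}

That is, h^{0,0} = h^{2,2} = 1, h^{1,0} = h^{0,1} = h^{2,1} = h^{1,2} = 0, h^{2,0} = h^{0,2} = 1 and h^{1,1} = 20.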
One way to show this is to calculate the Jacobian ideal of a specific K3 surface, and then using a variation of Hodge structure on the moduli of algebraic K3 surfaces to show that all such K3 surfaces have the same Hodge numbers. A more low-brow calculation can be done using the calculation of the Betti numbers along with the parts of the Hodge structure computed on for an arbitrary K3 surface. In this case, Hodge symmetry forces , hence . For K3 surfaces in characteristic p > 0, this was first shown by Alexey Rudakov and Igor Shafarevich.
For a complex analytic K3 surface X, the intersection form (or cup product) on is a symmetric bilinear form with values in the integers, known as the K3 lattice. This is isomorphic to the even unimodular lattice , or equivalently , where U is the hyperbolic lattice of rank 2 and is the E8 lattice.
Yukio Matsumoto's 11/8 conjecture predicts that every smooth oriented 4-manifold X with even intersection form has second Betti number at least 11/8 times the absolute value of the signature. This would be optimal if true, since equality holds for a complex K3 surface, which has signature 3−19 = −16. The conjecture would imply that every simply connected smooth 4-manifold with even intersection form is homeomorphic to a connected sum of copies of the K3 surface and of .
Every complex surface that is diffeomorphic to a K3 surface is a K3 surface, by Robert Friedman and John Morgan. On the other hand, there are smooth complex surfaces (some of them projective) that are homeomorphic but not diffeomorphic to a K3 surface, by Kodaira and Michael Freedman. These "homotopy K3 surfaces" all have Kodaira dimension 1.
Examples
The double cover X of the projective plane branched along a smooth sextic (degree 6) curve is a K3 surface of genus 2 (that is, degree 2g−2 = 2). (This terminology means that the inverse image in X of a general hyperplane in is a smooth curve of genus 2.)
A smooth quartic (degree 4) surface in is a K3 surface of genus 3 (that is, degree 4).
A Kummer surface is the quotient of a two-dimensional abelian variety A by the action . This results in 16 singularities, at the 2-torsion points of A. The minimal resolution of this singular surface may also be called a Kummer surface; that resolution is a K3 surface. When A is the Jacobian of a curve of genus 2, Kummer showed that the quotient can be embedded into as a quartic surface with 16 nodes.
More generally: for any quartic surface Y with du Val singularities, the minimal resolution of Y is an algebraic K3 surface.
The intersection of a quadric and a cubic in is a K3 surface of genus 4 (that is, degree 6).
The intersection of three quadrics in is a K3 surface of genus 5 (that is, degree 8).
There are several databases of K3 surfaces with du Val singularities in weighted projective spaces.
The Picard lattice
The Picard group Pic(X) of a complex analytic K3 surface X is the abelian group of complex analytic line bundles on X. For an algebraic K3 surface, Pic(X) is the group of algebraic line bundles on X. The two definitions agree for a complex algebraic K3 surface, by Jean-Pierre Serre's GAGA theorem.
The Picard group of a K3 surface X is always a finitely generated free abelian group; its rank is called the Picard number . In the complex case, Pic(X) is a subgroup of . It is an important feature of K3 surfaces that many different Picard numbers can occur. For X a complex algebraic K3 surface, can be any integer between 1 and 20. In the complex analytic case, may also be zero. (In that case, X contains no closed complex curves at all. By contrast, an algebraic surface always contains many continuous families of curves.) Over an algebraically closed field of characteristic p > 0, there is a special class of K3 surfaces, supersingular K3 surfaces, with Picard number 22.
The Picard lattice of a K3 surface is the abelian group Pic(X) together with its intersection form, a symmetric bilinear form with values in the integers. (Over , the intersection form is the restriction of the intersection form on . Over a general field, the intersection form can be defined using the intersection theory of curves on a surface, by identifying the Picard group with the divisor class group.) The Picard lattice of a K3 surface is always even, meaning that the integer is even for each .
The Hodge index theorem implies that the Picard lattice of an algebraic K3 surface has signature . Many properties of a K3 surface are determined by its Picard lattice, as a symmetric bilinear form over the integers. This leads to a strong connection between the theory of K3 surfaces and the arithmetic of symmetric bilinear forms. As a first example of this connection: a complex analytic K3 surface is algebraic if and only if there is an element with .
Roughly speaking, the space of all complex analytic K3 surfaces has complex dimension 20, while the space of K3 surfaces with Picard number has dimension (excluding the supersingular case). In particular, algebraic K3 surfaces occur in 19-dimensional families. More details about moduli spaces of K3 surfaces are given below.
The precise description of which lattices can occur as Picard lattices of K3 surfaces is complicated. One clear statement, due to Viacheslav Nikulin and David Morrison, is that every even lattice of signature with is the Picard lattice of some complex projective K3 surface. The space of such surfaces has dimension .
Elliptic K3 surfaces
An important subclass of K3 surfaces, easier to analyze than the general case, consists of the K3 surfaces with an elliptic fibration X → P^1. "Elliptic" means that all but finitely many fibers of this morphism are smooth curves of genus 1. The singular fibers are unions of rational curves, with the possible types of singular fibers classified by Kodaira. There are always some singular fibers, since the sum of the topological Euler characteristics of the singular fibers is 24, the Euler characteristic of X. A general elliptic K3 surface has exactly 24 singular fibers, each of type I1 (a nodal cubic curve).
Whether a K3 surface is elliptic can be read from its Picard lattice. Namely, in characteristic not 2 or 3, a K3 surface X has an elliptic fibration if and only if there is a nonzero element with . (In characteristic 2 or 3, the latter condition may also correspond to a quasi-elliptic fibration.) It follows that having an elliptic fibration is a codimension-1 condition on a K3 surface. So there are 19-dimensional families of complex analytic K3 surfaces with an elliptic fibration, and 18-dimensional moduli spaces of projective K3 surfaces with an elliptic fibration.
Example: Every smooth quartic surface X in that contains a line L has an elliptic fibration , given by projecting away from L. The moduli space of all smooth quartic surfaces (up to isomorphism) has dimension 19, while the subspace of quartic surfaces containing a line has dimension 18.
Rational curves on K3 surfaces
In contrast to positively curved varieties such as del Pezzo surfaces, a complex algebraic K3 surface X is not uniruled; that is, it is not covered by a continuous family of rational curves. On the other hand, in contrast to negatively curved varieties such as surfaces of general type, X contains a large discrete set of rational curves (possibly singular). In particular, Fedor Bogomolov and David Mumford showed that every curve on X is linearly equivalent to a positive linear combination of rational curves.
Another contrast to negatively curved varieties is that the Kobayashi metric on a complex analytic K3 surface X is identically zero. The proof uses that an algebraic K3 surface X is always covered by a continuous family of images of elliptic curves. (These curves are singular in X, unless X happens to be an elliptic K3 surface.) A stronger question that remains open is whether every complex K3 surface admits a nondegenerate holomorphic map from (where "nondegenerate" means that the derivative of the map is an isomorphism at some point).
The period map
Define a marking of a complex analytic K3 surface X to be an isomorphism of lattices from to the K3 lattice . The space N of marked complex K3 surfaces is a non-Hausdorff complex manifold of dimension 20. The set of isomorphism classes of complex analytic K3 surfaces is the quotient of N by the orthogonal group , but this quotient is not a geometrically meaningful moduli space, because the action of is far from being properly discontinuous. (For example, the space of smooth quartic surfaces is irreducible of dimension 19, and yet every complex analytic K3 surface in the 20-dimensional family N has arbitrarily small deformations which are isomorphic to smooth quartics.) For the same reason, there is not a meaningful moduli space of compact complex tori of dimension at least 2.
The period mapping sends a K3 surface to its Hodge structure. When stated carefully, the Torelli theorem holds: a K3 surface is determined by its Hodge structure. The period domain is defined as the 20-dimensional complex manifold
The period mapping sends a marked K3 surface X to the complex line . This is surjective, and a local isomorphism, but not an isomorphism (in particular because D is Hausdorff and N is not). However, the global Torelli theorem for K3 surfaces says that the quotient map of sets
is bijective. It follows that two complex analytic K3 surfaces X and Y are isomorphic if and only if there is a Hodge isometry from to , that is, an isomorphism of abelian groups that preserves the intersection form and sends to .
Moduli spaces of projective K3 surfaces
A polarized K3 surface X of genus g is defined to be a projective K3 surface together with an ample line bundle L such that L is primitive (that is, not 2 or more times another line bundle) and . This is also called a polarized K3 surface of degree 2g−2.
Under these assumptions, L is basepoint-free. In characteristic zero, Bertini's theorem implies that there is a smooth curve C in the linear system |L|. All such curves have genus g, which explains why (X,L) is said to have genus g.
The vector space of sections of L has dimension g + 1, and so L gives a morphism from X to projective space . In most cases, this morphism is an embedding, so that X is isomorphic to a surface of degree 2g−2 in .
There is an irreducible coarse moduli space of polarized complex K3 surfaces of genus g for each ; it can be viewed as a Zariski open subset of a Shimura variety for the group SO(2,19). For each g, is a quasi-projective complex variety of dimension 19. Shigeru Mukai showed that this moduli space is unirational if or . In contrast, Valery Gritsenko, Klaus Hulek and Gregory Sankaran showed that is of general type if or . A survey of this area was given by .
The different 19-dimensional moduli spaces overlap in an intricate way. Indeed, there is a countably infinite set of codimension-1 subvarieties of each corresponding to K3 surfaces of Picard number at least 2. Those K3 surfaces have polarizations of infinitely many different degrees, not just 2g–2. So one can say that infinitely many of the other moduli spaces meet . This is imprecise, since there is not a well-behaved space containing all the moduli spaces . However, a concrete version of this idea is the fact that any two complex algebraic K3 surfaces are deformation-equivalent through algebraic K3 surfaces.
More generally, a quasi-polarized K3 surface of genus g means a projective K3 surface with a primitive nef and big line bundle L such that . Such a line bundle still gives a morphism to , but now it may contract finitely many (−2)-curves, so that the image Y of X is singular. (A (−2)-curve on a surface means a curve isomorphic to with self-intersection −2.) The moduli space of quasi-polarized K3 surfaces of genus g is still irreducible of dimension 19 (containing the previous moduli space as an open subset). Formally, it works better to view this as a moduli space of K3 surfaces Y with du Val singularities.
The ample cone and the cone of curves
A remarkable feature of algebraic K3 surfaces is that the Picard lattice determines many geometric properties of the surface, including the convex cone of ample divisors (up to automorphisms of the Picard lattice). The ample cone is determined by the Picard lattice as follows. By the Hodge index theorem, the intersection form on the real vector space has signature . It follows that the set of elements of with positive self-intersection has two connected components. Call the positive cone the component that contains any ample divisor on X.
Case 1: There is no element u of Pic(X) with . Then the ample cone is equal to the positive cone. Thus it is the standard round cone.
Case 2: Otherwise, let , the set of roots of the Picard lattice. The orthogonal complements of the roots form a set of hyperplanes which all go through the positive cone. Then the ample cone is a connected component of the complement of these hyperplanes in the positive cone. Any two such components are isomorphic via the orthogonal group of the lattice Pic(X), since that contains the reflection across each root hyperplane. In this sense, the Picard lattice determines the ample cone up to isomorphism.
A related statement, due to Sándor Kovács, is that knowing one ample divisor A in Pic(X) determines the whole cone of curves of X. Namely, suppose that X has Picard number . If the set of roots is empty, then the closed cone of curves is the closure of the positive cone. Otherwise, the closed cone of curves is the closed convex cone spanned by all elements with . In the first case, X contains no (−2)-curves; in the second case, the closed cone of curves is the closed convex cone spanned by all (−2)-curves. (If , there is one other possibility: the cone of curves may be spanned by one (−2)-curve and one curve with self-intersection 0.) So the cone of curves is either the standard round cone, or else it has "sharp corners" (because every (−2)-curve spans an isolated extremal ray of the cone of curves).
Automorphism group
K3 surfaces are somewhat unusual among algebraic varieties in that their automorphism groups may be infinite, discrete, and highly nonabelian. By a version of the Torelli theorem, the Picard lattice of a complex algebraic K3 surface X determines the automorphism group of X up to commensurability. Namely, let the Weyl group W be the subgroup of the orthogonal group O(Pic(X)) generated by reflections in the set of roots . Then W is a normal subgroup of O(Pic(X)), and the automorphism group of X is commensurable with the quotient group O(Pic(X))/W. A related statement, due to Hans Sterk, is that Aut(X) acts on the nef cone of X with a rational polyhedral fundamental domain.
Relation to string duality
K3 surfaces appear almost ubiquitously in string duality and provide an important tool for the understanding of it. String compactifications on these surfaces are not trivial, yet they are simple enough to analyze most of their properties in detail. The type IIA string, the type IIB string, the E8×E8 heterotic string, the Spin(32)/Z2 heterotic string, and M-theory are related by compactification on a K3 surface. For example, the Type IIA string compactified on a K3 surface is equivalent to the heterotic string compactified on a 4-torus ().
History
Quartic surfaces in were studied by Ernst Kummer, Arthur Cayley, Friedrich Schur and other 19th-century geometers. More generally, Federigo Enriques observed in 1893 that for various numbers g, there are surfaces of degree 2g−2 in with trivial canonical bundle and irregularity zero. In 1909, Enriques showed that such surfaces exist for all , and Francesco Severi showed that the moduli space of such surfaces has dimension 19 for each g.
André Weil gave K3 surfaces their name and made several influential conjectures about their classification. Kunihiko Kodaira completed the basic theory around 1960, in particular making the first systematic study of complex analytic K3 surfaces which are not algebraic. He showed that any two complex analytic K3 surfaces are deformation-equivalent and hence diffeomorphic, which was new even for algebraic K3 surfaces. An important later advance was the proof of the Torelli theorem for complex algebraic K3 surfaces by Ilya Piatetski-Shapiro and Igor Shafarevich (1971), extended to complex analytic K3 surfaces by Daniel Burns and Michael Rapoport (1975).
See also
Enriques surface
Tate conjecture
Mathieu moonshine, a mysterious relationship between K3 surfaces and the Mathieu group M24.
Notes
References
External links
Graded Ring Database homepage for a catalog of K3 surfaces
K3 database for the Magma computer algebra system
The geometry of K3 surfaces, lectures by David Morrison (1988).
Algebraic surfaces
Complex surfaces
Differential geometry
String theory | K3 surface | Astronomy | 4,611 |
46,712,569 | https://en.wikipedia.org/wiki/Corinna%20Ulcigrai | Corinna Ulcigrai (born 3 January 1980, Trieste) is an Italian mathematician working on dynamical systems. She is known for proving, with Krzysztof Frączek in 2013, that in the Ehrenfest model (a mathematical abstraction of billiards with an infinite array of rectangular obstacles, used to model gas diffusion) most trajectories are not ergodic.
Education and career
Ulcigrai obtained her Ph.D. in 2007 from Princeton University with Yakov Sinai as her thesis advisor. She has worked at the University of Bristol, United Kingdom, and is currently a professor at the University of Zurich, Switzerland.
Recognition
Ulcigrai was awarded the European Mathematical Society Prize in 2012, and the Whitehead Prize in 2013.
In 2020, Ulcigrai was the winner of the Michael Brin Prize in Dynamical Systems, "for her fundamental work on the ergodic theory of locally Hamiltonian flows on surfaces, of translation flows on periodic surfaces and wind-tree models, and her seminal work on higher genus generalizations of Markov and Lagrange spectra".
References
External links
Home page
Italian mathematicians
Italian women mathematicians
1980 births
Princeton University alumni
Academics of the University of Bristol
Living people
Whitehead Prize winners
Dynamical systems theorists | Corinna Ulcigrai | Mathematics | 262 |
14,881,183 | https://en.wikipedia.org/wiki/MICAL3 | Microtubule-associated monoxygenase, calponin and LIM domain containing 3, also known as MICAL3, is a human gene.
Function
MICAL3 works together with the Rab proteins Rab6 and Rab8 in the docking and fusion of vesicles involved in exocytosis.
References
Further reading | MICAL3 | Chemistry | 78 |
184,331 | https://en.wikipedia.org/wiki/Petalite | Petalite, also known as castorite, is a lithium aluminum phyllosilicate mineral LiAlSi4O10, crystallizing in the monoclinic system. Petalite occurs as colorless, pink, grey, yellow, yellow grey, to white tabular crystals and columnar masses. It occurs in lithium-bearing pegmatites with spodumene, lepidolite, and tourmaline. Petalite is an important ore of lithium, and is converted to spodumene and quartz by heating to ~500 °C and under 3 kbar of pressure in the presence of a dense hydrous alkali borosilicate fluid with a minor carbonate component. Petalite (and secondary spodumene formed from it) is lower in iron than primary spodumene, making it a more useful source of lithium in, e.g., the production of glass. The colorless varieties are often used as gemstones.
Discovery and occurrence
Petalite was discovered in 1800 by the Brazilian naturalist and statesman José Bonifácio de Andrada e Silva. Type locality: Utö Island, Haninge, Stockholm, Sweden. The name is derived from the Greek word petalon, which means leaf, alluding to its perfect cleavage.
Economic deposits of petalite are found near Kalgoorlie, Western Australia; Araçuaí, Minas Gerais, Brazil; Karibib, Namibia; Manitoba, Canada; and Bikita, Zimbabwe.
The first important economic application for petalite was as a raw material for the glass-ceramic cooking ware CorningWare. It has been used as a raw material for ceramic glazes.
References
External links
Lithium minerals
Monoclinic minerals
Minerals in space group 13
Gemstones | Petalite | Physics | 365 |
19,347,871 | https://en.wikipedia.org/wiki/Atypical%20small%20acinar%20proliferation | In urologic pathology, atypical small acinar proliferation (ASAP) is a collection of small prostatic glands, seen on prostate biopsy, whose significance is uncertain and which cannot be determined to be benign or malignant.
ASAP, generally, is not considered a pre-malignancy, or a carcinoma in situ; it is an expression of diagnostic uncertainty, and analogous to the diagnosis of ASCUS (atypical squamous cells of undetermined significance) on the Pap test.
Association with adenocarcinoma
On a subsequent biopsy, given the diagnosis of ASAP, the chance of finding prostate adenocarcinoma is approximately 40%; this is higher than if there is high-grade prostatic intraepithelial neoplasia (HGPIN).
Management
ASAP is considered an indication for re-biopsy; in one survey of urologists 98% of respondents considered it a sufficient reason to re-biopsy.
See also
Sensitivity
References
External links
Pathology
Male genital disorders | Atypical small acinar proliferation | Biology | 210 |
42,913,744 | https://en.wikipedia.org/wiki/Interlegis | Interlegis is a program of the Brazilian State, funded by the Inter-American Development Bank (IADB) and administered by the Federal Senate of Brazil. Its mission is to integrate and modernize the Brazilian Legislative Power at the municipal, state and federal levels. It started in 1997 in Prodasen, the Brazilian Senate's data processing body, and grew out of the PhD project of Armando Roberto Cerchi Nascimento, an official of that body. In 1999 the Brazilian Government signed the contract establishing a partnership between the IADB and the Brazilian Senate, initiating the Interlegis Program, which was divided into 3 phases: e-Parliament (technology within the parliament), e-Government (process automation and availability of network services for the citizen) and e-democracy (citizen participation in the legislative process).
In practice, Interlegis Program seeks to improve communication and information flow among legislators, increase the efficiency and competence of the Legislative Houses and promote citizen participation in the legislative processes, preparing the Brazilian parliaments for participatory democracy or e-democracy. It operates based on four pillars: communication, information, training and technology.
In the technology area, the Interlegis Program develops systems for the parliaments and releases them as free and open-source software under the GPL license; they are developed with the participation of user communities and concerned citizens, supported by the Colab environment. The main systems are:
SAPL - Legislative Process Support System - aimed at automation of electronic legislative process;
Model Portal - a ready-to-use CMS portal, customized for Legislative Houses, with tools for transparency, the law on access to information, public participation, open data and e-democracy, among others.
SAAP - Parliamentary Activity Support System - aimed at automation of offices of parliamentarians;
SPDO - Protocol Document System - aimed at automating the electronic protocol of the Legislative Houses, reducing paper usage;
SAAL - Legislative Administration Support System - aimed at the administrative automation of a parliament, as a legislative ERP. It is under development and does not yet have a usable version.
Interlegis was responsible for one of the biggest digital inclusion programs at the beginning of the 21st century, distributing equipment and connecting 3,398 Brazilian municipalities to the Internet through the Municipal Councils and Legislative Assemblies, thus creating the Interlegis National Network - RNI. Today the technology infrastructure sector of Interlegis is also active in hosting products and services developed and provided free of charge by Interlegis to the Brazilian Legislative Houses.
In 2010 Interlegis created Colab, a collaborative environment for legislative communities of practice, with Internet tools to encourage the participation of concerned citizens, officials and parliamentarians of the Legislative Houses. Its goal is to solve the practical problems of parliaments, allowing better communication between participants and collaboration in various areas of knowledge such as legislative counsel, development of technologies, legislative communication and administration, among others.
In 2013, after the administrative reform of the Brazilian Senate, the Interlegis Program, which was previously run by a special secretariat called SINTER, became administered by the Brazilian Legislative Institute - ILB, the Brazilian Senate body responsible for the training of legislative servants. At that time, the contract with the IADB regarding the 2nd phase (e-Government) was extended until the end of 2014, when the renegotiation and redefinition of the program's continuation would take place in order to start the 3rd phase (e-democracy).
External links
Interlegis Portal
Colab - Legislative Communities Collaboration Environment
Download and documentation wiki for Interlegis developed software
References
Multiphase Program Supporting Electronic Legislative Development in Brasil (Interlegis II)
Parliamentary Power Integration
Support Multi-Phase Program Electronic Legislative Development
Legislatures
Democracy
E-democracy
E-government by country | Interlegis | Technology | 760 |
37,538,900 | https://en.wikipedia.org/wiki/Discina%20fastigiata | Discina fastigiata is a species of fungus in the family Discinaceae. Its common names are brown false morel and brown gyromitra. It is related to species containing the toxin monomethylhydrazine, so its consumption is not advised.
Description
The cap of Discina fastigiata is 4-10 cm wide, and is composed of multiple upwardly curved lobes, usually with three tips. The texture is ribbed and brain-like. The lobes are irregularly folded over and sloped towards the stem. The colour varies from yellow to reddish-brown to black, when the spores are mature. The inside of the cap is hollow and white.
The stipe is chalk-white and cylindrical, though thickening at the base and ribbed like the cap. Inside it is made out of hollow or stuffed connected channels. It measures 60-80 mm long and 25-60 mm thick. The lower part of the stipe is always covered in dirt. The flesh is white and fragile. Its texture is watery to succulent. It smells slightly sperm-like.
The hymenium (spore-bearing surface) is on the outside of the cap. The transparent spores are long and elliptical, measuring 25–30 × 11–14 μm. The surface of the spores is rough to webbed and they contain 1–3 oil drops. Each ascus contains 8 spores and measures 18–25 × 440–525 μm. The walls of the asci show no reaction in Melzer's reagent.
It has 5-9 μm wide, thin-walled, yellow-brown paraphyses with 3-5 septa.
Distribution
Discina fastigiata grows in southeastern and midwestern United States, as well as the Great Lakes region. It fruits throughout spring. It grows alone or in groups on soil, leaf litter or rotting wood in hardwood forests.
References
Discinaceae
Fungi described in 1834
Fungi of Europe
Fungus species | Discina fastigiata | Biology | 405 |
59,113,400 | https://en.wikipedia.org/wiki/Antlia%20II | Antlia II (Ant II) is a low-surface-brightness dwarf satellite galaxy of the Milky Way at a galactic latitude of 11.2°. It spans 1.26° in the sky just southeast of Epsilon Antliae. The galaxy is similar in size to the Large Magellanic Cloud, despite being 1/10,000 as bright. Antlia II has the lowest surface brightness of any galaxy discovered and is ~ 100 times more diffuse than any known ultra diffuse galaxy. The large size of the galaxy suggests that it is currently being tidally disrupted, and is in the process of becoming a stellar stream. The southeast side of Antlia II is farther away than the northwest side, likely due to the tidal disruption. It was discovered using data from the European Space Agency's Gaia spacecraft in November 2018.
See also
Crater 2 Dwarf
Antlia Dwarf
Satellite galaxies of the Milky Way
References
Antlia
Milky Way Subgroup
Low surface brightness galaxies
Dwarf galaxies | Antlia II | Astronomy | 194 |
29,720,951 | https://en.wikipedia.org/wiki/Civilian%20casualty%20ratio | In armed conflicts, the civilian casualty ratio (also civilian death ratio, civilian-combatant ratio, etc.) is the ratio of civilian casualties to combatant casualties, or total casualties. The measurement can apply either to casualties inflicted by or to a particular belligerent, casualties inflicted in one aspect or arena of a conflict or to casualties in the conflict as a whole. Casualties usually refer to both dead and injured. In some calculations, deaths resulting from famine and epidemics are included.
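Written out, the two conventions mentioned above are (with purely illustrative, hypothetical figures in the example):

\[
\text{ratio to combatants} = \frac{C_{\text{civ}}}{C_{\text{comb}}}, \qquad
\text{share of total} = \frac{C_{\text{civ}}}{C_{\text{civ}} + C_{\text{comb}}}.
\]

For instance, 30,000 civilian and 10,000 combatant casualties give a ratio of 3:1 and a civilian share of 75%; the same information can thus be quoted either way.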
Global estimates of the civilian casualty ratio vary. In 1999, the International Committee of the Red Cross estimated that between 30–65% of conflict casualties were civilians, while the Uppsala Conflict Data Program (UCDP) indicated, in 2002, that 30–60% of fatalities from conflicts were civilians. In 2017, the UCDP indicated that, for urban warfare, civilians constituted 49–66% of all known fatalities. William Eckhardt found that, when averaged across a century, the civilian casualty ratio remained at about 50% for each of the 18th, 19th and 20th centuries. It is frequently claimed that 90% of casualties are civilians, but research has shown that to be a myth.
In World War II, civilians constituted 60–67% of casualties, but some sources give a higher estimate. In the Vietnam War, the civilian ratio is estimated at 46–67%. Two studies found the civilian ratio was 40% in the Bosnian war. During the Second Intifada, civilians constituted ~70% of Israelis killed by Palestinians and ~60% of Palestinians killed by Israelis. Civilians constituted ~75% and ~65% of all Palestinians killed in the 2008 war and 2014 war, respectively. In the 2023–2025 war, civilians have constituted 68% of those killed by Hamas attacks, and ~80% of those killed by the Israeli invasion.
Global estimates
Globally, the civilian casualty ratio often hovers around 50%. It is sometimes stated that 90% of victims of modern wars are civilians, but that is a myth.
In 1989, William Eckhardt studied casualties of conflicts from 1700 to 1987 and found that "the civilian percentage share of war-related deaths remained at about 50% from century to century." He noted that the civilian casualty ratio remained consistent despite the fact that the number of deaths from wars increased four times faster than the increase in world population, when comparing the 18th century to the 20th century.
In 1999, the International Committee of the Red Cross estimated that between 30–65% of conflict casualties were civilians. The 2005 Human Security Report noted that the Uppsala Conflict Data Program (UCDP) indicated, in 2002, that 30–60% of fatalities from conflicts were civilians.
The "Cities and Armed Conflict Events (CACE)" database of the UCDP provides death counts for all urban conflicts between 1989 and 2017. According to the CACE, in urban conflicts (defined as all cities with a population > 100,000): 28.9% of deaths are civilians, 29.5% are combatants, and 41.628% are unknown. If excluding unknowns, then civilian casualties make up 49.5% of all fatalities in warfare in cities. If the data is limited to cities with population >750,000, then 29.8% of deaths are civilians, 15.3% are combatants, and 54.9% are unknown. If excluding unknowns, then civilian casualties make up 66.1% of all fatalities in warfare in large cities.
Myth that 90% of casualties are civilians
During the 1990s, an argument arose that civilian casualty ratio had dramatically increased. The argument stated that, as of 1900, civilians constituted 10% of all casualties, but by the 1990s, civilians constituted 90% of all casualties. This figure has been widely doubted, and research has found little to no evidence that 90% of casualties are civilians. The 2005 Human Security Report called it a "myth" and instead suggested that 30–60% of fatalities from conflicts in 2002 were civilians. Likewise, the International Committee of the Red Cross estimated in 1999, that between 30–65% of conflict casualties were civilians.
There are two original sources for the myth of 90% of casualties being civilians. The first source – Christa Ahlström and Kjell-Åke Nordquist's 1991 Casualties of Conflict, published by Uppsala University – stated that "nine out of ten victims (dead and uprooted) of war and armed conflict today are civilians". Some readers misconstrued it as 90% of fatalities being civilians. In fact, the same report counted only fatalities for the year of 1989 and in that case found only 67% of fatalities were civilians. The second source was Ruth Sivard's World Military and Social Expenditures, also published in 1991. Sivard did indeed say that 90% of deaths in conflicts, during the year 1990, were civilian. But Sivard included famine-related deaths, which are typically not counted in civilian casualty ratios. Sivard was also criticized for not stating her sources, and the Human Security Report 2005 noted there was insufficient global data on deaths caused by war-related famine. Nevertheless, these claims were erroneously picked up by Graça Machel's "The Machel Review 1996–2000: A Critical Analysis of Progress Made and Obstacles Encountered in Increasing Protection for War-Affected Children" written for UNICEF.
Comparison of conflicts
World wars
World War I
Some 7 million combatants on both sides are estimated to have died during World War I, along with an estimated 10 million non-combatants, including 6.6 million civilians. The civilian casualty ratio in this estimate would be about 59%. Boris Urlanis notes a lack of data on civilian losses in the Ottoman Empire, but estimates 8.6 million military killed and dead and 6 million civilians killed and dead in the other warring countries. The civilian casualty ratio in this estimate would be about 42%. Most of the civilian fatalities were due to famine, typhus, or Spanish flu rather than combat action. The relatively low ratio of civilian casualties in this war is due to the fact that the front lines on the main battlefront, the Western Front, were static for most of the war, so that civilians were able to avoid the combat zones.
Germany suffered 300,000–750,000 civilian dead during and after the war due to famine caused by the Allied blockade. Russia and Turkey suffered civilian casualties in the millions in the Russian Civil War and the invasion of Anatolia respectively. Armenia suffered up to 1.5 million civilian dead in the Armenian genocide.
World War II
According to most sources, World War II was the most lethal war in world history, with some 70 million killed in six years. According to some, the civilian to combatant fatality ratio in World War II lies somewhere between 3:2 and 2:1, or from 60% to 67%. According to others, the ratio is at least 3:1 and potentially higher. The high ratio of civilian casualties in this war was due in part to the increasing effectiveness and lethality of strategic weapons which were used to target enemy industrial or population centers, and famines caused by economic disruption. An estimated 2.1–3 million Indians died in the Bengal famine of 1943 in India during World War II. A substantial number of civilians in this war were also deliberately killed by Axis Powers as a result of genocide such as the Holocaust or other ethnic cleansing campaigns.
Cold war and post-Soviet wars
Korean War
The median total estimated Korean civilian deaths in the Korean War is 2,730,000. The total estimated North Korean combatant deaths is 213,000 and the estimated Chinese combatant deaths is over 400,000. In addition to this the Republic of Korea combatant deaths is around 134,000 dead and the combatant deaths for the United Nations side is around 49,000 dead and missing (40,000 dead, 9,000 missing). The estimated total Korean war military dead is around 793,000 deaths. The civilian-combatant death ratio in the war is approximately 3:1 or 75%. One source estimates that 20% of the total population of North Korea perished in the war.
Vietnam War
The Vietnamese government has estimated the number of Vietnamese civilians killed in the Vietnam War at two million, and the number of NVA and Viet Cong killed at 1.1 million—estimates which approximate those of a number of other sources. This would give a civilian-combatant fatality ratio of approximately 2:1, or 67%. These figures do not include civilians killed in Cambodia and Laos. However, the lowest estimate of 411,000 civilians killed during the war (including civilians killed in Cambodia and Laos) would give a civilian-combatant fatality ratio of approximately 1:3, or 25%. Using the lowest estimate of Vietnamese military deaths, 400,000, the ratio is about 1:1.
Bosnian war
During the 1991–1995 Bosnian war, one study estimated 97,207 were killed, of which 39,648 (41%) were civilians. That study did acknowledge that it likely underestimated the civilian count and overestimated the soldier count. The Demographic Unit of the International Criminal Tribunal for the former Yugoslavia used the capture-recapture method to estimate war-related deaths at 104,723, of which 42,106 (40%) were civilians.
NATO in Yugoslavia
In 1999, NATO intervened in the Kosovo War with a bombing campaign against Yugoslav forces, who were conducting a campaign of ethnic cleansing. The bombing lasted about 2½ months, until forcing the withdrawal of the Yugoslav army from Kosovo.
Estimates for the number of casualties caused by the bombing vary widely depending on the source. NATO unofficially claimed a toll of 5,000 enemy combatants killed by the bombardment; the Yugoslav government, on the other hand, gave a figure of 638 of its security forces killed in Kosovo. Estimates for the civilian toll are similarly disparate. Human Rights Watch counted approximately 500 civilians killed by the bombing; the Yugoslav government estimated between 1,200 and 5,000.
If the NATO figures are to be believed, the bombings achieved a civilian to combatant kill ratio of about 1:10; on the Yugoslav government's figures, conversely, the ratio would be between 4:1 and 10:1. If the most conservative estimates from the sources cited above are used, the ratio was around 1:1.
Chechen wars
During the First Chechen War, 4,000 separatist fighters and 40,000 civilians are estimated to have died, giving a civilian-combatant ratio of 10:1. The numbers for the Second Chechen War are 3,000 fighters and 13,000 civilians, for a ratio of 4.3:1. The combined ratio for both wars is 7.6:1. Casualty numbers for the conflict are notoriously unreliable. The estimates of the civilian casualties during the First Chechen war range from 20,000 to 100,000, with remaining numbers being similarly unreliable. The tactics employed by Russian forces in both wars were heavily criticized by human rights groups, which accused them of indiscriminate bombing and shelling of civilian areas and other crimes.
Arab-Israeli conflict
Civilian casualty ratios have been a contention issue in the Israeli–Palestinian conflict. During the Second Intifada, civilians constituted ~70% of Israelis killed by Palestinians and ~60% of Palestinians killed by Israelis. Civilians constituted ~75% and ~65% of all Palestinians killed in the 2008 Gaza war and 2014 Gaza war, respectively. In the Israel–Hamas war, civilians have constituted 68% of those killed by Hamas attacks, and ~80% of those killed by the Israeli invasion. The Israeli military states the civilian ratio of the Palestinian killed is much lower.
1982 Lebanon War
In 1982, Israel invaded Lebanon with the stated aim of driving the PLO away from its northern borders. The war culminated in a seven-week-long Israeli naval, air and artillery bombardment of Lebanon's capital, Beirut, where the PLO had retreated. The bombardment eventually came to an end with an internationally brokered settlement in which the PLO forces were given safe passage to evacuate the country.
According to the International Red Cross, by the end of the first week of the war alone, some 10,000 people, including 2,000 combatants, had been killed, and 16,000 wounded—a civilian-combatant fatality ratio of 4:1. Lebanese government sources later estimated that by the end of the siege of Beirut, a total of about 18,000 had been killed, an estimated 85% of whom were civilians. This gives a civilian to military casualty ratio of about 6:1.
According to Richard A. Gabriel between 1,000 and 3,000 civilians were killed in the southern campaign. He states that an additional 4,000 to 5,000 civilians died from all actions of all sides during the siege of Beirut, and that some 2,000 Syrian soldiers were killed during the Lebanon campaign and a further 2,400 PLO guerillas were also killed. Of these, 1,000 PLO guerrillas were killed during the siege. According to Gabriel the ratio of civilian deaths to combatants during the siege was about 6 to 1 but this ratio includes civilian deaths from all actions of all sides.
2000–2007
The United Nations Office for the Coordination of Humanitarian Affairs (UNOCHA) estimated 4,228 Palestinians and 1,024 Israelis were killed between 2000 and 2007. It quoted B'Tselem estimating that of the Israelis killed by Palestinians, 31% were members of the IDF, while 69% were civilians. For the Palestinians killed by Israelis, 41% were combatants while 59% were civilians.
During this period various other claims were made regarding the ratio of Palestinian civilians to combatants killed by Israel. Amos Harel wrote that the civilian to combatant casualty ratio of Israeli airstrikes (not including ground operations) was 1:1 in 2003, but by 2007 it had improved to 1:30. Meanwhile, the Israeli intelligence agency Shin Bet claimed that of the Palestinians killed in the 2006–2007 period in the Gaza Strip (not including the West Bank), only 20% were civilians. Haaretz criticized the Shin Bet for underestimating the civilian casualties. B'Tselem data on Palestinians killed by Israel in the Gaza Strip (not including the West Bank) from Jan 1, 2006 to Dec 31, 2007, shows 821 killed, of which 405 were combatants (49%), 346 non-combatants (42%) and the rest of unknown status.
Yagil Levy, an Israeli sociologist writing in Ha'aretz at the end of 2023, analysed civilian casualty rates in five Israeli aerial operations: Pillar of Defense (~1 week in November 2012); Guardian of the Walls (~10 days in May 2021); Breaking Dawn (3 days, August 2022); Shield and Arrow (5 days in May 2023); and the first two months of the 2023 Israel-Hamas war, based on reports of the Meir Amit Intelligence and Terrorism Information Center. He calculated civilian fatality rates for these as follows: 40%, 40%, 42%, 33% and 61%.
2008–09 Gaza War
Based on the above, most sources estimate 20% of Palestinians killed were combatants, and 75% of Israelis killed were combatants.
2014 Gaza war
Reports of casualties in the 2014 Israel-Gaza conflict have been made available by a variety of sources. Most media accounts have used figures provided by the government in Gaza or non-governmental organizations. Differing methodologies have resulted in varied reports of both the overall death toll and the civilian casualty ratio.
According to the main estimates between 2,125 and 2,310 Gazans were killed and between 10,626 and 10,895 were wounded (including 3,374 children, of whom over 1,000 were left permanently disabled). 66 Israeli soldiers, 5 Israeli civilians (including one child) and one Thai civilian were killed and 469 IDF soldiers and 261 Israeli civilians were injured. The Gaza Health Ministry, UN and some human rights groups reported that 69–75% of the Palestinian casualties were civilians; Israeli officials estimated that around 50% of those killed were civilians., giving Israeli forces a ratio between 1:1 and 3:1 during the conflict.
In March 2015, OCHA reported that 2,220 Palestinians had been killed in the conflict, of whom 1,492 were civilians (551 children and 299 women), 605 militants and 123 of unknown status, giving Israeli forces a ratio of 3:1.
2023–2025 Israel–Hamas war
The casualty counts from the Israel–Hamas war vary. It is estimated that of the nearly 1,200 people killed on October 7, 68% were civilians, giving a casualty ratio of 2.1:1. Israel's bombing and invasion of the Gaza Strip has killed over 46,000 Palestinians, and is ongoing. Women and children are estimated to be 60–70% of the casualties. After adding civilian adult men, most sources estimate that 80% of all Palestinians killed in the Gaza Strip are civilians, giving a civilian casualty ratio of 4:1. The IDF claims the civilian casualty ratio is 1.4:1, without providing any evidence, though some observers say the IDF counts all military-age male fatalities as combatants. For mathematical inconsistencies in the IDF data, and further criticism, see Casualties of the Israel–Hamas war – Israeli military claims.
Iraq war
According to a 2010 assessment by John Sloboda of Iraq Body Count, a United Kingdom-based organization, American and Coalition forces (including Iraqi government forces) had killed at least 28,736 combatants as well as 13,807 civilians in the Iraq War, indicating a civilian to combatant casualty ratio inflicted by coalition forces of 1:2. However, overall, figures by the Iraq Body Count from 20 March 2003 to 14 March 2013 indicate that of 174,000 casualties only 39,900 were combatants, resulting in a civilian casualty rate of 77%. Most civilians were killed by anti-government insurgents and unidentified third parties.
The global coalition's war against the Islamic State, begun in 2014, had led to as many as 50,000 ISIL combatant casualties by the end of 2016. Airwars calculated that 8,200–13,275 civilians were killed in Coalition airstrikes, mainly up to the end of 2017, with especially high casualty rates during the Battle of Mosul. An Associated Press investigation found that of the more than 9,000 fatalities in the Battle of Mosul, between 42% and 60% were civilians.
Other conflicts
War in Afghanistan
According to the Watson Institute for International and Public Affairs at Brown University, as of January 2015 roughly 92,000 people had been killed in the Afghanistan war, of which over 26,000 were civilians, for a civilian to combatant ratio of 1:2.5.
Drone strikes in Pakistan
The civilian casualty ratio for U.S. drone strikes in Pakistan conducted between 2004 and 2018 as part of the War on Terror is notoriously difficult to quantify. In 2010, the U.S. itself put the number of civilians killed by drone strikes in the previous two years at no more than 20 to 30, a total that a spokesman for the NGO CIVIC considered far too low. At the other extreme, Daniel L. Byman of the Brookings Institution suggested in 2009 that drone strikes may kill "10 or so civilians" for every militant killed, which would represent a civilian to combatant casualty ratio of 10:1. Byman argues that civilian killings constitute a humanitarian tragedy and create dangerous political problems, including damage to the legitimacy of the Pakistani government and alienation of the Pakistani populace from America. An ongoing study by the New America Foundation finds that non-militant casualty rates started high but declined steeply over time, from about 60% (3 out of 5) in 2004–2007 to less than 2% (1 out of 50) in 2012. In 2011, the study put the overall non-militant casualty rate since 2004 at 15–16%, or roughly a 1:5 ratio, out of a total of between 1,908 and 3,225 people killed in Pakistan by drone strikes since 2004.
Sri Lankan civil war
The civilian to combatant ratio in the Sri Lankan civil war was likely worse than 1:1.
Mexican Revolution (1910–20)
Although it is estimated that over 1 million people died in the Mexican Revolution, most died from disease and hunger as an indirect result of the war. Combat deaths are generally agreed to have totaled about 250,000. According to Eckhardt, these included 125,000 civilian deaths and 125,000 combatant deaths, creating a civilian-combatant death ratio of 1:1 among combat deaths.
See also
Non-combatant casualty value
Casualty recording
Collateral damage
Asymmetric warfare
Fourth generation warfare
Loss exchange ratio
Just war
Distinction (law)
Proportionality (law)
Military necessity
Notes
References
Bibliography
Angstrom, Jan; Duyvesteyn, Isabelle (2004): Rethinking the Nature of War, pp. 72-80, Routledge.
Deane, Hugh (1999): The Korean War: 1945-1953, p. 149, China Books & Periodicals.
Hartley, Cathy et al. (2004): Survey of Arab-Israeli Relations, p. 91, Routledge.
Larson, Eric V. (2007): Misfortunes of War: Press and Public Reactions to Civilian Deaths in Wartime, pp. 65, 71, RAND Corp.
Layoun, Mary N. et al. (2001): Wedded to the Land? Gender, Boundaries, & Nationalism in Crisis, p. 134, Duke University Press.
Mattar, Philip (2005): Encyclopedia of the Palestinians, p. 47, Facts on File.
Sadowski, Yahya M. (1998): The Myth of Global Chaos, p. 134, Brookings Institution Press.
Snow, Donald M. (1996): Uncivil Wars: International Security and the New Internal Conflicts, pp. 64-66, Lynne Rienner Publishers.
Further reading
Ratios
War
War on terror
Civilian casualties | Civilian casualty ratio | Mathematics | 4,495 |
48,432,371 | https://en.wikipedia.org/wiki/Leccinum%20arbuticola | Leccinum arbuticola is a species of bolete fungus in the family Boletaceae. It was described as new to science in 1975 by mycologist Harry Delbert Thiers, from collections made in Nevada County, California. It grows in association with madrone (Arbutus menziesii) and manzanita. It fruits in fall and early winter. It stains blue when bruised.
See also
List of Leccinum species
List of North American boletes
References
External links
Leccinum arbuticola at California boletes
Fungi described in 1975
Fungi of the United States
arbuticola
Fungi without expected TNC conservation status
Fungi of California
Fungus species | Leccinum arbuticola | Biology | 140 |
48,595,162 | https://en.wikipedia.org/wiki/Ceratocystis%20zombamontana | Ceratocystis zombamontana is a plant-pathogenic saprobic fungal species first found in Africa, infecting Acacia mearnsii and Eucalyptus species.
References
Further reading
Nkuekam, Gilbert Kamgan, Michael J. Wingfield, and Jolanda Roux. "Ceratocystis species, including two new taxa, from Eucalyptus trees in South Africa." Australasian Plant Pathology 42.3 (2013): 283–311.
De Beer, Z. W., et al. "Redefining Ceratocystis and allied genera." Studies in Mycology 79 (2014): 187–219.
Van Wyk, Marelize, et al. "New Ceratocystis species infecting coffee, cacao, citrus and native trees in Colombia." Fungal Diversity 40.1 (2010): 103–117.
External links
MycoBank
Fungal plant pathogens and diseases
Microascales
Fungi described in 2009
Fungus species | Ceratocystis zombamontana | Biology | 205 |
722,773 | https://en.wikipedia.org/wiki/Phalaris%20arundinacea | Phalaris arundinacea, or reed canary grass, is a tall, perennial bunchgrass that commonly forms extensive single-species stands along the margins of lakes and streams and in wet open areas, with a wide distribution in Europe, Asia, northern Africa and North America. Other common names for the plant include gardener's-garters and ribbon grass in English, alpiste roseau in French, Rohrglanzgras in German, kusa-yoshi in Japanese, caniço-malhado in Portuguese, and hierba cinta and pasto cinto in Spanish.
Description
The stems can reach in height. The leaf blades are usually green, but may be variegated. The panicles are up to long. The spikelets are light green, often streaked with darker green or purple. This is a perennial grass which spreads underground by its thick rhizomes.
Uses
A number of cultivars of P. arundinacea have been selected for use as ornamental plants, including variegated (striped) cultivars – sometimes called ribbon grass – such as 'Castor' and 'Feesey', the latter of which has a pink tinge to the leaves. Although drought-tolerant, the plant prefers abundant water and can even be grown as an aquatic plant.
Reed canary grass grows well on poor soil and contaminated industrial sites, and researchers at Teesside University's Contaminated Land & Water Centre have suggested it as the ideal candidate for phytoremediation in improving soil quality and biodiversity at brownfield sites.
The grass can also easily be turned into bricks or pellets for burning in biomass power stations. Furthermore, it provides fibers which find use in pulp and papermaking processes.
P. arundinacea is also planted as a hay crop or for forage.
This species of Phalaris may also be used as a source of the psychedelic drugs DMT, 5-MeO-DMT and 5-OH-DMT (bufotenin), as well as hordenine and 5-MeO-NMT; however, N,N-DMT is considered most desirable. Although the concentrations of these compounds are lower than in other potential sources, such as Psychotria viridis and Mimosa tenuiflora, large enough quantities of the grass can be refined to make an ad hoc ayahuasca brew.
Ecology
In many places, P. arundinacea is an invasive species in wetlands, particularly in disturbed areas. It has been reported as an invasive weed in floodplains, riverside meadows, and other wetland habitat types around the world. When P. arundinacea invades a wetland, it inhibits native vegetation and reduces biological diversity. It alters the entire ecosystem. The grass propagates by seed and rhizome, and once established, is difficult to eradicate.
Distribution
P. arundinacea now has a worldwide distribution. It is regarded as native to both North America and Eurasia, but this is debated; the populations in North America appear to be a mixture of cultivars introduced from Europe and indigenous varieties.
Chemical properties
Specimens contain varying levels of hordenine and gramine.
Leaves of P. arundinacea contain DMT, 5-MeO-DMT and related compounds. Levels of beta-carbolines and hordenine have also been reported.
References
External links
Flora Europaea: Phalaris arundinacea
Jepson Manual Treatment - taxonomy and distribution within California
Bunchgrasses of Africa
Bunchgrasses of Asia
Bunchgrasses of Europe
Bunchgrasses of North America
Flora of Korea
Garden plants of Asia
Garden plants of Europe
Garden plants of North America
Grasses of Canada
Grasses of the United States
Herbal and fungal hallucinogens
arundinacea
Phytoremediation plants
Plants described in 1753
Psychedelic tryptamine carriers
Taxa named by Carl Linnaeus
Grasses of Lebanon | Phalaris arundinacea | Biology | 801 |
22,016,239 | https://en.wikipedia.org/wiki/Size%20theory | In mathematics, size theory studies the properties of topological spaces endowed with ℝ^k-valued functions, with respect to the change of these functions. More formally, the subject of size theory is the study of the natural pseudodistance between size pairs.
History and applications
The beginning of size theory is rooted in the concept of size function, introduced by Frosini. Size functions were initially used as a mathematical tool for shape comparison in computer vision and pattern recognition.
An extension of the concept of size function to algebraic topology was made in a 1999 paper by Frosini and Mulazzani, where size homotopy groups were introduced, together with the natural pseudodistance for ℝ^k-valued functions.
An extension to homology theory (the size functor) was introduced in 2001.
The size homotopy group and the size functor are strictly related to the concept of the persistent homology group studied in persistent homology. It is worth pointing out that the size function is the rank of the 0-th persistent homology group, while the relation between the persistent homology group and the size homotopy group is analogous to the one existing between homology groups and homotopy groups.
In size theory, size functions and size homotopy groups are seen as tools to compute lower bounds for the natural pseudodistance.
In particular, the values taken by the size functions associated with two size pairs yield a computable lower bound for the natural pseudodistance between those size pairs. An analogous result holds for size homotopy groups.
The attempt to generalize size theory and the concept of natural pseudodistance to norms that are different from the supremum norm has led to the study of other reparametrization invariant norms.
See also
Size function
Natural pseudodistance
Size functor
Size homotopy group
Size pair
Matching distance
References
Topology
Algebraic topology | Size theory | Physics,Mathematics | 375 |
2,952,019 | https://en.wikipedia.org/wiki/Acoustically%20Navigated%20Geological%20Underwater%20Survey | The Acoustically Navigated Geological Underwater Survey (ANGUS) was a deep-towed still-camera sled operated by the Woods Hole Oceanographic Institute (WHOI) in the early 1970s. It was the first unmanned research vehicle made by WHOI. ANGUS was encased in a large steel frame designed to explore rugged volcanic terrain and able to withstand high impact collisions. It was fitted with three 35 mm color cameras with of film. Together, its three cameras were able to photograph a strip of the sea floor with a width up to . Each camera was equipped with strobe lights allowing them to photograph the ocean floor from above. On the bottom of the body was a downward-facing sonar system to monitor the sled's height above the ocean floor. It was capable of working in depths up to and could therefore reach roughly 98% of the sea floor. ANGUS could remain in the deep ocean for work sessions of 12 to 14 hours at a time, taking up to 16,000 photographs in one session. ANGUS was often used to scout locations of interest to later be explored and sampled by other vehicles such as Argo or Alvin.
ANGUS has been used to search for and photograph undersea geysers and the creatures living near them, and it was equipped with a heat sensor to alert the tether-ship when it passed over one. It was used on expeditions such as Project FAMOUS (French-American Mid Ocean Undersea Study, 1973–1974), the discovery expedition with Argo to survey the wreckage of the Titanic (1985), and the return mission to the Titanic (1986). ANGUS was the only ROV used on both dives to the Titanic.
On Project FAMOUS, ANGUS helped change scientists' views of the ocean floor by showing how varied geological formations and the chemical compositions of sediments can be, disproving previous assumptions of ocean floor uniformity. The project also provided new insight into the theory of seafloor spreading by observing and sampling the rock formations around ridges and the horizontal formation of layers parallel to the ridge.
In another expedition with ANGUS, in 1977, scientists monitored temperatures over the ocean floor for any fluctuation. It was not until late at night that the crew noticed a drastic rise in temperature, prompting them to review the photographic film once the vehicle's session had ended. ANGUS provided the first photographic evidence of hydrothermal vents and black smokers, returning with over 3,000 color photographs showing both the vents and colonies of clams and other organisms. The team later returned with Alvin to take samples.
Scientists nicknamed ANGUS Dope on a rope due to its durability and lack of fragile sensors. It was also given the motto "takes a lickin' but it keeps on clickin'". ANGUS was retired in the late 1980s, having completed over 250 voyages.
References
External links
Project FAMOUS: Exploring the Mid-Atlantic Ridge
Oceanography | Acoustically Navigated Geological Underwater Survey | Physics,Environmental_science | 578 |
440,704 | https://en.wikipedia.org/wiki/Colonnade | In classical architecture, a colonnade is a long sequence of columns joined by their entablature, often free-standing, or part of a building. Paired or multiple pairs of columns are normally employed in a colonnade which can be straight or curved. The space enclosed may be covered or open. In St. Peter's Square in Rome, Bernini's great colonnade encloses a vast open elliptical space.
When in front of a building, screening the door (Latin porta), it is called a portico. When enclosing an open court, a peristyle. A portico may be more than one rank of columns deep, as at the Pantheon in Rome or the stoae of Ancient Greece.
When the intercolumniation is alternately wide and narrow, a colonnade may be termed "araeosystyle" (Gr. αραιος, "widely spaced", and συστυλος, "with columns set close together"), as in the case of the western porch of St Paul's Cathedral and the east front of the Louvre.
History
Colonnades (formerly spelled colonade) have been built since ancient times, and interpretations of the classical model have continued through to modern times; Neoclassical styles remained popular for centuries. At the British Museum, for example, porticos are continued along the front as a colonnade. The porch of columns that surrounds the Lincoln Memorial in Washington, D.C., (in style a peripteral classical temple) can be termed a colonnade. As well as the traditional use in buildings and monuments, colonnades are used in sports stadiums such as the Harvard Stadium in Boston, where the entire horseshoe-shaped stadium is topped by a colonnade. The longest colonnade in the United States, with 36 Corinthian columns, is the New York State Education Building in Albany, New York.
Notable colonnades
Ancient world
Renaissance and Baroque periods
Neoclassical
Modern interpretations
See also
Arcade
Cloister
Engaged column
References
Columns and entablature
Architectural elements | Colonnade | Technology,Engineering | 432 |
35,287,906 | https://en.wikipedia.org/wiki/Teleological%20behaviorism | Teleological behaviorism is a variety of behaviorism. Like all other forms of behaviorism, it relies heavily on attention to outwardly observable human behaviors. Like other branches of behaviorism, teleological behaviorism takes into account cognitive processes, such as emotions and thoughts, but does not view these as empirical causes of behavior; instead, it treats emotions and thoughts as behaviors themselves. Teleological behaviorism differs from other branches of behaviorism through its focus on the human capacity for self-control, and it also emphasizes the concept of free will.
Overview
The founder of teleological behaviorism is Howard Rachlin, an Emeritus Research Professor of Psychology at the State University of New York, Stony Brook. Originally focusing his work on operant behavior, he became interested in the concept of free will as it applies to behavioral economics, and from there turned to the related field of teleological behaviorism. A large influence on Rachlin’s work was Aristotle’s early philosophy of mind, specifically how “Aristotle’s classification of movements in terms of final rather than efficient causes corresponds to B.F. Skinner’s conception of an operant as a class of movements with a common end”. The concept Rachlin refers to here is Aristotle’s notion of telos, “the final cause” that drives us all forward towards a common end. Rachlin also found heavy inspiration in the writings and work of Tolman and Bandura in behaviorism. An example of Aristotle’s concept of telos can be drawn from drinking water: while most behaviorists would approach drinking water as a direct reaction to being thirsty, Rachlin would also consider the long-term effects, noting that the person drinks water so that they do not eventually die of thirst. This far-sighted view offers a different perspective on human behavior, one that may not be explained as clearly by operant conditioning, a concept of behavioral psychology that focuses mostly on learned short-term reactions.
On the subject of self-control, Rachlin states that it is less a matter of knowing that one should not do something and more a matter of patience. He considers those with strong self-control to be simply more “long-term behaviorally-oriented” or “far-sighted.” This habit of considering further implications in the future relates to many other fields, particularly Rachlin’s second interest, behavioral economics. He likens the ability to weigh future options to avoiding poor short-term investments in favor of profitable, more beneficial ones at a much later point in time.
Rachlin’s perspective has drawn criticism, however: his approach of simply considering the potential consequences of a situation over a longer time frame has been characterized as closer to a self-help practice of making good investments and life choices than to an actual psychological practice. His main counter to this argument is that, through this approach, he has managed to help people by preparing them for potential negative outcomes in the future and by training them to recognize the potential outcomes of their own actions.
Rachlin also holds a different viewpoint on the subject of free will than most other people. He writes, “the meaning of free will lies not in what may or may not be going on in people’s heads or elsewhere in their bodies, but rather in how the term free will is actually used by the community to describe and to guide overt behavior”. In other words, society’s definitions and expectations guide what is considered right and wrong just as much as any other external factor does, and our definitions of free will are contingent upon how much we let them affect our actions.
References
Sources
Behaviorism
Teleology | Teleological behaviorism | Biology | 783 |
31,153,340 | https://en.wikipedia.org/wiki/Active%20zone | The active zone or synaptic active zone is a term first used by Couteaux and Pécot-Dechavassine in 1970 to define the site of neurotransmitter release. Two neurons make near contact through structures called synapses, allowing them to communicate with each other. A synapse consists of the presynaptic bouton of one neuron, which stores vesicles containing neurotransmitter, and a second, postsynaptic neuron, which bears receptors for the neurotransmitter, together with a gap between the two called the synaptic cleft (with synaptic adhesion molecules, SAMs, holding the two together). When an action potential reaches the presynaptic bouton, the contents of the vesicles are released into the synaptic cleft, and the released neurotransmitter travels across the cleft to the postsynaptic neuron and activates the receptors on the postsynaptic membrane.
The active zone is the region in the presynaptic bouton that mediates neurotransmitter release and is composed of the presynaptic membrane and a dense collection of proteins called the cytomatrix at the active zone (CAZ). The CAZ is seen under the electron microscope to be a dark (electron dense) area close to the membrane. Proteins within the CAZ tether synaptic vesicles to the presynaptic membrane and mediate synaptic vesicle fusion, thereby allowing neurotransmitter to be released reliably and rapidly when an action potential arrives.
Function
The function of the active zone is to ensure that neurotransmitters can be reliably released in a specific location of a neuron and only released when the neuron fires an action potential.
As an action potential propagates down an axon, it reaches the axon terminal, called the presynaptic bouton. In the presynaptic bouton, the action potential activates voltage-dependent calcium channels (VDCCs), causing a local influx of calcium. The increase in calcium is detected by proteins in the active zone and forces vesicles containing neurotransmitter to fuse with the membrane. This fusion of the vesicles with the membrane releases the neurotransmitters into the synaptic cleft (the space between the presynaptic bouton and the postsynaptic membrane). The neurotransmitters then diffuse across the cleft and bind to ligand-gated ion channels and G-protein coupled receptors on the postsynaptic membrane. The binding of neurotransmitters to the postsynaptic receptors then induces a change in the postsynaptic neuron. The process of releasing neurotransmitters and binding to the postsynaptic receptors to cause a change in the postsynaptic neuron is called neurotransmission.
Structure
The active zone is present in all chemical synapses examined so far and is found in all animal species. The active zones examined so far have at least two features in common: they all have protein-dense material that projects from the membrane and tethers synaptic vesicles close to the membrane, and they have long filamentous projections originating at the membrane and terminating at vesicles slightly farther from the presynaptic membrane. The protein-dense projections vary in size and shape depending on the type of synapse examined. One striking example of the dense projection is the ribbon synapse (see below), which contains a "ribbon" of protein-dense material that is surrounded by a halo of synaptic vesicles, extends perpendicular to the presynaptic membrane, and can be as long as 500 nm. The glutamate synapse contains smaller pyramid-like structures that extend about 50 nm from the membrane. The neuromuscular synapse contains two rows of vesicles with a long proteinaceous band between them that is connected to regularly spaced horizontal ribs extending perpendicular to the band and parallel to the membrane. These ribs are in turn connected to the vesicles, which are each positioned above a peg in the membrane (presumably a calcium channel). Previous research indicated that the active zone of glutamatergic neurons contained a highly regular array of pyramid-shaped protein-dense material, with the pyramids connected by filaments. This structure resembled a geometric lattice in which vesicles were guided into the holes of the lattice. This attractive model has been called into question by recent experiments. Recent data show that the glutamatergic active zone does contain the dense protein projections, but these projections were not in a regular array and contained long filaments projecting about 80 nm into the cytoplasm.
There are at least five major scaffold proteins that are enriched in the active zone: UNC13B/Munc13, RIMS1 (Rab3-interacting molecule), Bassoon, Piccolo/aczonin, ELKS, and liprins-α. These scaffold proteins are thought to be the constituents of the dense pyramid-like structures of the active zone and are thought to bring the synaptic vesicles into close proximity to the presynaptic membrane and the calcium channels. The protein ELKS binds to the cell adhesion protein β-neurexin and to other proteins within the complex, such as Piccolo and Bassoon. β-neurexin then binds to the cell adhesion molecule neuroligin, located on the postsynaptic membrane, and neuroligin in turn interacts with proteins that bind to postsynaptic receptors. Protein interactions like those seen between Piccolo/ELKS/β-neurexin/neuroligin ensure that the machinery that mediates vesicle fusion is in close proximity to calcium channels and that vesicle fusion is adjacent to postsynaptic receptors. This close proximity of vesicle fusion and postsynaptic receptors ensures that there is little delay between the release of neurotransmitter and the activation of the postsynaptic receptors.
Neurotransmitter release mechanism
The release of neurotransmitter is accomplished by the fusion of neurotransmitter vesicles with the presynaptic membrane. Although the details of this mechanism are still being studied, there is a consensus on some aspects of the process. Synaptic vesicle fusion with the presynaptic membrane is known to require a local increase of calcium, from as few as a single closely associated calcium channel, and the formation of highly stable SNARE complexes. One prevailing model of synaptic vesicle fusion is that SNARE complex formation is catalyzed by proteins of the active zone such as Munc18, Munc13, and RIM. The formation of this complex is thought to "prime" the vesicle for fusion and release of neurotransmitter (see below: releasable pool). After the vesicle is primed, complexin binds to the SNARE complex; the vesicle is then said to be "superprimed". Superprimed vesicles are within the readily releasable pool (see below) and are ready to be rapidly released. The arrival of an action potential opens voltage-gated calcium channels near the SNARE/complexin complex. Calcium then binds to synaptotagmin and changes its conformation. This conformational change allows synaptotagmin to dislodge complexin, bind to the SNARE complex, and bind to the target membrane. When synaptotagmin binds to both the SNARE complex and the membrane, it exerts a mechanical force on the membrane that causes the vesicle membrane and the presynaptic membrane to fuse. This fusion opens a membrane pore that releases the neurotransmitter. The pore increases in size until the entire vesicle membrane is indistinguishable from the presynaptic membrane.
Synaptic vesicle cycle
The presynaptic bouton has an efficiently orchestrated process for fusing vesicles to the presynaptic membrane to release neurotransmitters and for regenerating neurotransmitter vesicles. This process, called the synaptic vesicle cycle, maintains the number of vesicles in the presynaptic bouton and allows the synaptic terminal to function as an autonomous unit. The cycle begins when (1) a region of the Golgi apparatus is pinched off to form the synaptic vesicle, which is transported to the synaptic terminal. At the terminal, (2) the vesicle is filled with neurotransmitter. (3) The vesicle is transported to the active zone and docked in close proximity to the plasma membrane. (4) During an action potential the vesicle fuses with the membrane, releases the neurotransmitter, and allows the membrane proteins previously on the vesicle to diffuse to the periactive zone. (5) In the periactive zone the membrane proteins are sequestered and endocytosed, forming a clathrin-coated vesicle. (6) The vesicle is refilled with neurotransmitter and transported back to the active zone.
The endocytosis mechanism is slower than the exocytosis mechanism. This means that during intense activity the vesicles in the terminal can become depleted and no longer available for release. To help prevent the depletion of synaptic vesicles, the increase in calcium during intense activity can activate calcineurin, which dephosphorylates proteins involved in clathrin-mediated endocytosis.
Vesicle pools
The synapse contains at least two clusters of synaptic vesicles: the readily releasable pool and the reserve pool. The readily releasable pool is located within the active zone and is connected directly to the presynaptic membrane, while the reserve pool is clustered by cytoskeletal elements and is not directly connected to the active zone.
Releasable pool
The releasable pool is located in the active zone and is bound directly to the presynaptic membrane. It is stabilized by proteins within the active zone and bound to the presynaptic membrane by SNARE proteins. These vesicles are ready to be released by a single action potential and are replenished by vesicles from the reserve pool. The releasable pool is sometimes subdivided into the readily releasable pool and the releasable pool.
Reserve pool
The reserve pool is not directly connected to the active zone. The increase in presynaptic calcium concentration activates calcium–calmodulin-dependent protein kinase (CaMK). CaMK phosphorylates a protein, synapsin, that mediates the clustering of the reserve pool vesicles and attachment to the cytoskeleton. Phosphorylation of synapsin mobilizes vesicles in the reserve pool and allows them to migrate to the active zone and replenish the readily releasable pool.
Periactive zone
The periactive zone surrounds the active zone and is the site of endocytosis of the presynaptic terminal. In the periactive zone, scaffolding proteins such as intersectin 1 recruit proteins that mediate endocytosis, such as dynamin, clathrin and endophilin. In Drosophila the intersectin homolog, Dap160, is located in the periactive zone of the neuromuscular junction, and Dap160 mutants show depletion of synaptic vesicles during high-frequency stimulation.
Ribbon synapse active zone
The ribbon synapse is a special type of synapse found in sensory neurons such as photoreceptor cells, retinal bipolar cells, and hair cells. Ribbon synapses contain a dense protein structure that tethers an array of vesicles perpendicular to the presynaptic membrane; in an electron micrograph it appears as a ribbon-like structure perpendicular to the membrane. Unlike the 'traditional' synapse, ribbon synapses can maintain a graded release of vesicles: the more depolarized the neuron, the higher the rate of vesicle fusion. The ribbon synapse active zone is separated into two regions, the arciform density and the ribbon. The arciform density is the site of vesicle fusion, and the ribbon stores the releasable pool of vesicles. The ribbon structure is composed primarily of the protein RIBEYE, about 64–69% of the ribbon volume, and is tethered to the arciform density by scaffolding proteins such as Bassoon.
Proteins
Measuring neurotransmitter release
Neurotransmitter release can be measured by determining the amplitude of the postsynaptic potential after triggering an action potential in the presynaptic neuron. Measuring neurotransmitter release this way can be problematic because the response of the postsynaptic neuron to the same amount of released neurotransmitter can change over time. Another way is to measure vesicle fusion with the presynaptic membrane directly, using a patch pipette. A cell membrane can be thought of as a capacitor, in that positive and negative ions are stored on both sides of the membrane. The larger the area of membrane, the more ions are needed to hold the membrane at a certain potential. In electrophysiology this means that a current injection into the terminal will take less time to charge the membrane to a given potential before vesicle fusion than it will after vesicle fusion. The time constant for charging the membrane to a given potential and the resistance of the membrane are measured, and with these values the capacitance of the membrane can be calculated from the relation capacitance = time constant / resistance. With this technique researchers can measure synaptic vesicle release directly by measuring increases in the membrane capacitance of the presynaptic terminal.
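As a minimal numerical sketch of the relation described above (τ = RC for an RC circuit, so the capacitance follows from the measured charging time constant and the membrane resistance), the values below are hypothetical and chosen only for illustration, not taken from any particular recording:

```python
# Membrane capacitance from the charging time constant of an RC circuit.
# tau = R * C  =>  C = tau / R
tau = 2e-3          # measured charging time constant in seconds (hypothetical)
resistance = 100e6  # membrane resistance in ohms (hypothetical)

capacitance = tau / resistance             # farads
print(f"C = {capacitance * 1e12:.1f} pF")  # prints: C = 20.0 pF

# Fusion of a vesicle adds membrane surface area, so the capacitance measured
# after fusion is slightly larger than the value measured before fusion.
```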
See also
Paired pulse facilitation
Postsynaptic density
References
Neurophysiology
Cellular neuroscience
Cell signaling
Signal transduction
Molecular neuroscience | Active zone | Chemistry,Biology | 2,978 |
11,963,455 | https://en.wikipedia.org/wiki/HD%20192699 | HD 192699 is a yellow subgiant star located approximately 214 light-years away in the constellation of Aquila. It has an apparent magnitude of 6.45. Based on its mass of 1.68 solar masses, it was an A-type star when it was on the main sequence. In April 2007, a planet was announced orbiting the star, together with HD 175541 b and HD 210702 b.
The star HD 192699 is named Chechia. The name was selected in the NameExoWorlds campaign by Tunisia, during the 100th anniversary of the IAU. Chechia is a flat-surfaced, traditional red wool hat.
See also
HD 175541
HD 210702
List of extrasolar planets
References
External links
G-type subgiants
Planetary systems with one confirmed planet
Aquila (constellation)
Durchmusterung objects
192699
099894 | HD 192699 | Astronomy | 186 |
176,354 | https://en.wikipedia.org/wiki/Mineral%20wool | Mineral wool is any fibrous material formed by spinning or drawing molten mineral or rock materials such as slag and ceramics.
Applications of mineral wool include thermal insulation (as both structural insulation and pipe insulation), filtration, soundproofing, and hydroponic growth medium.
Naming
Mineral wool is also known as mineral cotton, mineral fiber, man-made mineral fiber (MMMF), and man-made vitreous fiber (MMVF).
Specific mineral wool products are stone wool and slag wool. In Europe the term also includes glass wool, which, together with ceramic fiber, is an entirely artificial fiber that can be made into different shapes and is spiky to the touch.
History
Slag wool was first made in 1840 in Wales by Edward Parry, "but no effort appears to have been made to confine the wool after production; consequently it floated about the works with the slightest breeze, and became so injurious to the men that the process had to be abandoned". A method of making mineral wool was patented in the United States in 1870 by John Player and first produced commercially in 1871 at Georgsmarienhütte in Osnabrück, Germany. The process involved blowing a strong stream of air across a falling flow of liquid iron slag, producing fibers similar to Pele's hair, the fine strands of volcanic slag from Kilauea created when strong winds blow apart the slag during an eruption.
According to a mineral wool manufacturer, the first mineral wool intended for high-temperature applications was invented in the United States in 1942 but was not commercially viable until approximately 1953. More forms of mineral wool became available in the 1970s and 1980s.
High-temperature mineral wool
High-temperature mineral wool is a type of mineral wool created for use as high-temperature insulation and generally defined as being resistant to temperatures above 1,000 °C. This type of insulation is usually used in industrial furnaces and foundries. Because high-temperature mineral wool is costly to produce and has limited availability, it is almost exclusively used in high-temperature industrial applications and processes.
Definitions
Classification temperature is the temperature at which a certain amount of linear contraction (usually two to four percent) is not exceeded after a 24-hour heat treatment in an electrically heated laboratory oven in a neutral atmosphere. Depending on the type of product, the value may not exceed two percent for boards and shaped products and four percent for mats and papers.
The classification temperature is specified in 50 °C steps starting at 850 °C and up to 1600 °C. The classification temperature does not mean that the product can be used continuously at this temperature. In the field, the continuous application temperature of amorphous high-temperature mineral wool (AES and ASW) is typically 100 °C to 150 °C below the classification temperature. Products made of polycrystalline wool can generally be used up to the classification temperature.
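Expressed as a simple check (an illustrative sketch only, with a hypothetical specimen length), the linear-contraction criterion behind the classification temperature looks like this:

```python
# Linear contraction after the 24-hour heat treatment, as a percentage.
def linear_contraction_pct(length_before_mm: float, length_after_mm: float) -> float:
    return (length_before_mm - length_after_mm) / length_before_mm * 100.0

# Limits cited above: 2 percent for boards and shaped products,
# 4 percent for mats and papers.
shrinkage = linear_contraction_pct(300.0, 295.5)   # hypothetical specimen
print(f"{shrinkage:.1f} %")                        # prints: 1.5 %
print("within board limit:", shrinkage <= 2.0)     # prints: within board limit: True
```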
Types
There are several types of high-temperature mineral wool made from different types of minerals. The mineral chosen results in different material properties and classification temperatures.
Alkaline earth silicate wool (AES wool)
AES wool consists of amorphous glass fibers that are produced by melting a combination of calcium oxide (CaO−), magnesium oxide (MgO−), and silicon dioxide (SiO2). Products made from AES wool are generally used in equipment that continuously operates and in domestic appliances. Some formulations of AES wool are bio-soluble, meaning they dissolve in bodily fluids within a few weeks and are quickly cleared from the lungs.
Alumino silicate wool (ASW)
Alumino silicate wool, also known as refractory ceramic fiber (RCF), consists of amorphous fibers produced by melting a combination of aluminum oxide (Al2O3) and silicon dioxide (SiO2), usually in a weight ratio 50:50 (see also VDI 3469 Parts 1 and 5, as well as TRGS 521). Products made of alumino silicate wool are generally used at application temperatures of greater than 900 °C for equipment that operates intermittently and in critical application conditions (see Technical Rules TRGS 619).
Polycrystalline wool (PCW)
Polycrystalline wool consists of fibers that contain aluminum oxide (Al2O3) at greater than 70 percent of the total materials and is produced by sol–gel method from aqueous spinning solutions. The water-soluble green fibers obtained as a precursor are crystallized by means of heat treatment. Polycrystalline wool is generally used at application temperatures greater than 1300 °C and in critical chemical and physical application conditions.
Kaowool
Kaowool is a type of high-temperature mineral wool made from the mineral kaolin. It was one of the first types of high-temperature mineral wool invented and has been used into the 21st century. It can withstand temperatures close to .
Manufacture
Stone wool is a furnace product of molten rock at a temperature of about 1600 °C through which a stream of air or steam is blown. More advanced production techniques are based on spinning molten rock in high-speed spinning heads somewhat like the process used to produce cotton candy. The final product is a mass of fine, intertwined fibers with a typical diameter of 2 to 6 micrometers. Mineral wool may contain a binder, often a terpolymer, and an oil to reduce dusting.
Use
Though the individual fibers conduct heat very well, when pressed into rolls and sheets, their ability to partition air makes them excellent insulators and sound absorbers. Though not immune to the effects of a sufficiently hot fire, the fire resistance of fiberglass, stone wool, and ceramic fibers makes them common building materials when passive fire protection is required, being used as spray fireproofing, in stud cavities in drywall assemblies and as packing materials in firestops.
Other uses are in resin bonded panels, as filler in compounds for gaskets, in brake pads, in plastics in the automotive industry, as a filtering medium, and as a growth medium in hydroponics.
Mineral fibers are produced in the same way, without binder. The fiber as such is used as a raw material for its reinforcing purposes in various applications, such as friction materials, gaskets, plastics, and coatings.
Hydroponics
Mineral wool products can be engineered to hold large quantities of water and air that aid root growth and nutrient uptake in hydroponics; their fibrous nature also provides a good mechanical structure to hold the plant stable. The naturally high pH of mineral wool makes it initially unsuitable for plant growth and requires "conditioning" to produce a wool with an appropriate, stable pH. Conditioning methods include pre-soaking mineral wool in a nutrient solution adjusted to pH 5.5 until it stops bubbling.
High-temperature mineral wool
High-temperature mineral wool is used primarily for insulation and lining of industrial furnaces and foundries to improve efficiency and safety. It is also used to prevent the spread of fire.
The use of high-temperature mineral wool enables a more lightweight construction of industrial furnaces and other technical equipment as compared to other methods such as fire bricks, due to its high heat resistance capabilities per weight, but has the disadvantage of being more expensive than other methods.
Safety of material
The International Agency for Research on Cancer (IARC) reviewed the carcinogenicity of man-made mineral fibers in October 2002. The IARC Monograph's working group concluded only the more biopersistent materials remain classified by IARC as "possibly carcinogenic to humans" (Group 2B). These include refractory ceramic fibers, which are used industrially as insulation in high-temperature environments such as blast furnaces, and certain special-purpose glass wools not used as insulating materials. In contrast, the more commonly used vitreous fiber wools produced since 2000, including insulation glass wool, stone wool, and slag wool, are considered "not classifiable as to carcinogenicity in humans" (Group 3).
Highly bio-soluble fibers are now produced that do not cause damage to human cells. These newer materials have been tested for carcinogenicity and most are found to be noncarcinogenic. IARC elected not to make an overall evaluation of the newly developed fibers designed to be less bio-persistent, such as the alkaline earth silicate or high-alumina, low-silica wools. This decision was made in part because no human data were available, although such fibers that have been tested appear to have low carcinogenic potential in experimental animals, and because the Working Group had difficulty categorizing these fibers into meaningful groups based on chemical composition.
The European Regulation (CE) n° 1272/2008 on classification, labelling and packaging of substances and mixtures updated by the Regulation (CE) n°790/2009 does not classify mineral wool fibers as a dangerous substance if they fulfil criteria defined in its Note Q.
The European Certification Board for mineral wool products, EUCEB, certify mineral wool products made of fibers fulfilling Note Q ensuring that they have a low bio persistence and so that they are quickly removed from the lung. The certification is based on independent experts' advice and regular control of the chemical composition.
Due to the mechanical effect of fibers, mineral wool products may cause temporary skin itching. To diminish this and to avoid unnecessary exposure to mineral wool dust, information on good practices is available on the packaging of mineral wool products with pictograms or sentences. Safe Use Instruction Sheets similar to Safety data sheet are also available from each producer.
People can be exposed to mineral wool fibers in the workplace by breathing them in, skin contact, and eye contact. The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for mineral wool fiber exposure in the workplace as 15 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 5 mg/m3 total exposure and 3 fibers per cm3 over an 8-hour workday.
Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) is a European Union regulation of 18 December 2006. REACH addresses the production and use of chemical substances, and their potential impacts on both human health and the environment. A Substance Information Exchange Forum (SIEF) has been set up for several types of mineral wool. AES, ASW and PCW have been registered before the first deadline of 1 December 2010 and can, therefore, be used on the European market.
ASW/RCF is classified as carcinogen category 1B.
AES is exempted from carcinogen classification based on short-term in vitro study result.
PCW wools are not classified; self-classification led to the conclusion that PCW are not hazardous.
On 13 January 2010, some of the aluminosilicate refractory ceramic fibers and zirconia aluminosilicate refractory ceramic fibers were included in the candidate list of Substances of Very High Concern. In response to concerns raised about the definition and the dossier, two additional dossiers were posted on the ECHA website for consultation, resulting in two additional entries on the candidate list. This situation (four entries for one substance or group of substances) is contrary to the intended REACH procedure. Aside from this, the concerns raised during the two consultation periods remain valid.
Regardless of the concerns raised, the inclusion of a substance in the candidate list triggers immediately the following legal obligations of manufacturers, importers and suppliers of articles containing that substance in a concentration above 0.1% (w/w):
Notification to ECHA -REACH Regulation Art. 7
Provision of Safety Data Sheet- REACH Regulation Art. 31.1
Duty to communicate safe use information or responding to customer requests -REACH Regulation Art. 33
Crystalline silica
Amorphous high-temperature mineral wool (AES and ASW) is produced from a molten glass stream which is aerosolized by a jet of high-pressure air or by letting the stream impinge onto spinning wheels. The droplets are drawn into fibers; the mass of both fibers and remaining droplets cool very rapidly so that no crystalline phases may form.
When amorphous high-temperature mineral wool is installed and used in high-temperature applications such as industrial furnaces, at least one face may be exposed to conditions causing the fibers to partially devitrify. Depending on the chemical composition of the glassy fiber and the time and temperature to which the materials are exposed, different stable crystalline phases may form.
In after-use high-temperature mineral wool crystalline silica crystals are embedded in a matrix composed of other crystals and glasses. Experimental results on the biological activity of after-use high-temperature mineral wool have not demonstrated any hazardous activity that could be related to any form of silica they may contain.
Substitutes for mineral wool in construction
Due to mineral wool's non-degradability and potential health risks, substitute materials are being developed: hemp, flax, wool, wood, and cork insulations are the most prominent. Biodegradability and a more favorable health profile are the main advantages of these materials. Their drawbacks compared with mineral wool are substantially lower mold resistance, higher combustibility, and slightly higher thermal conductivity (hemp insulation: about 0.040 W/(m·K); mineral wool insulation: 0.030–0.045 W/(m·K)).
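To put such conductivity figures in context, the thermal resistance of an insulation layer and the steady-state heat flux through it can be estimated with the usual one-dimensional relations R = d/k and q = ΔT/R. The sketch below uses mid-range conductivities from the comparison above together with an illustrative thickness and temperature difference; it is a simplified model, not manufacturer performance data:

```python
# Steady-state thermal resistance of an insulation layer: R = d / k
# Heat flux through the layer per square metre: q = delta_T / R
d = 0.10         # insulation thickness in metres (illustrative)
delta_T = 20.0   # indoor/outdoor temperature difference in kelvin (illustrative)

for name, k in [("mineral wool", 0.035), ("hemp", 0.040)]:  # conductivity in W/(m.K)
    R = d / k            # m^2.K/W
    q = delta_T / R      # W/m^2
    print(f"{name}: R = {R:.2f} m^2.K/W, heat flux = {q:.1f} W/m^2")
```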
See also
Asbestos, a mineral that is naturally fibrous
Basalt fiber, a mineral fiber having high tensile strength
Glass wool
Pele's hair
Risk and Safety Statements
References
External links
Statistics Canada documents on shipments of mineral wool in Canada
Review of published data on exposure to mineral wool during installation work by A Jones and A Sanchez Jimenez, Institute of Occupational Medicine Research Report TM/11/01
Assessment of airborne mineral wool fibres in domestic houses by J Dodgson and others. Institute of Occupational Medicine Research Report TM/87/18
Building insulation materials
Materials | Mineral wool | Physics | 2,877 |
1,923,965 | https://en.wikipedia.org/wiki/Historical%20geography | Historical geography is the branch of geography that studies the ways in which geographic phenomena have changed over time. In its modern form, it is a synthesizing discipline which shares both topical and methodological similarities with history, anthropology, ecology, geology, environmental studies, literary studies, and other fields. Although the majority of work in historical geography is considered human geography, the field also encompasses studies of geographic change which are not primarily anthropogenic. Historical geography is often a major component of school and university curricula in geography and social studies. Current research in historical geography is being performed by scholars in more than forty countries.
Themes
This sub-branch of human geography is closely related to history, environmental history, and historical ecology.
Historical geography seeks to determine how cultural features of various societies across the planet emerged and evolved by understanding their interaction with their local environment and surroundings.
More recent studies make use of non-traditional methods, such as botany and archeology.
Development of the discipline
In its early days, historical geography was difficult to define as a subject. A textbook from the 1950s cites a previous definition as an 'unsound attempt by geographers to explain history'. Its author, J. B. Mitchell, came down firmly on the side of geography: 'the historical geographer is a geographer first last and all the time'. By 1975 the first number of the Journal of Historical Geography had widened the discipline to a broader audience: 'the writings of scholars of any disciplinary provenance who have something to say about matters of geographical interest relating to past time'.
In the United States, the term historical geography is the name given by Carl Ortwin Sauer of the University of California, Berkeley to his program of reorganizing cultural geography (some say all geography) along regional lines, beginning in the first decades of the 20th century. To Sauer, a landscape and the cultures in it could only be understood if all of its influences through history were taken into account: physical, cultural, economic, political, environmental. Sauer stressed regional specialization as the only means of gaining sufficient expertise on regions of the world. Sauer's philosophy was the principal shaper of American geographic thought in the mid-20th century. Regional specialists remain in academic geography departments to this day. Despite this, some geographers feel that it harmed the discipline; that too much effort was spent on data collection and classification, and too little on analysis and explanation. Studies became more and more area-specific as later geographers struggled to find places to make names for themselves. These factors may have led in turn to the 1950s crisis in geography, which raised serious questions about geography as an academic discipline in the USA.
List of historical geographers
Major institutions
The Historical Geography Specialty Group of the American Association of Geographers
The Historical Geography Research Group of the Royal Geographical Society
Major journals
Journal of Historical Geography
Historical Geography
See also
Historical atlas
References
Further reading
Catchpole, Brian. A Map History of the Modern World: 1890 to the Present Day. 1972 ed. Agincourt, Ont.: Bellhaven House, 1972. N.B.: First ed. published in 1968; an earlier revision with corrections appeared in 1970; partly an atlas of historical geography, partly an atlas illustrating historical events and trends.
Baker, A.R.H. Geography and History: Bridging the Divide (Cambridge University Press, 2003)
Human geography | Historical geography | Environmental_science | 695 |
37,921,837 | https://en.wikipedia.org/wiki/Platform%20gap | A platform gap (also known technically as the platform train interface or PTI in some countries) is the space between a train car (or other mass transit vehicle) and the edge of the station platform, often created by geometric constraints, historic legacies, or use of partially compatible equipment.
Many high-quality bus rapid transit (BRT) systems also use high platforms at station stops to allow fast and efficient level boarding and alighting, but potentially leaving hazardous gaps between the platforms and the buses. Alignment setups such as Kassel curbs help to reduce platform gaps without requiring time-consuming manual alignment at each BRT station stop.
Definition and measurement
A platform gap has two component measurements:
vertical (difference between the platform height and train floor height)
horizontal (distance from the platform edge to the train step) (Gareth Dennis, "Time to 'mind the gap' once and for all", Railway Gazette International, May 2023, p. 40)
Straight platforms
The ideal platform would be straight and align perfectly with a train or other large vehicle. Even in this case, a small gap between the conveyances and the platform is necessary to allow the vehicles to move freely without rubbing against the platform edge. In 2007, the Long Island Rail Road regarded an platform gap as typical on its non-curved platforms.
Curved platforms
In real-world situations, stations are often constrained by limited space, legacy designs, and track geometry or roadway layout. Stations may have to use a compromise design, with a platform curved in a way that will allow a vehicle or train to arrive and depart without mechanical interference, but which leaves unavoidable horizontal and possibly vertical gaps between the cars and the platform edge. These spaces are caused by the geometric gap between a curve (circular arc or otherwise) and the straight-line chord or tangent formed by a railcar or bus in proximity to a platform. These types of gaps are geometrically intrinsic, and cannot be eliminated as long as the platform is located on a curved or banked segment of track or guideway.
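The size of this intrinsic gap can be estimated with elementary geometry. A rigid car body of length L stopped along a curve of radius R stands off from a concave platform edge at its midpoint by roughly the middle ordinate (versine) of the chord it forms, m = R − √(R² − (L/2)²) ≈ L²/(8R). The sketch below is a simplified model that ignores bogie positions, body overhang beyond the bogies, and super-elevation; the numbers are illustrative only:

```python
import math

def chord_gap(car_length_m: float, curve_radius_m: float) -> float:
    """Middle ordinate of the chord formed by a rigid car body on a circular curve.

    Approximates the extra horizontal gap at the car centre when the platform
    follows the concave side of the curve.
    """
    half = car_length_m / 2.0
    return curve_radius_m - math.sqrt(curve_radius_m**2 - half**2)

# Illustrative only: a 20 m car body alongside a 300 m radius platform curve.
print(f"{chord_gap(20.0, 300.0) * 1000:.0f} mm")  # about 167 mm of additional gap
```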
When passenger car doors are located only at the ends of each car (a common design for commuter rail and long-distance trains), platform access from a concave platform is preferred, since this brings the car ends in closest proximity to the platform edge. By contrast, a convex platform would leave the largest possible gaps between the car ends and the platform edge, making this design undesirable and thus rarely implemented.
An example of platforms designed for access from the concave side is at Lansdowne station in Boston, where side platforms for both the inbound and outbound directions are located to reduce platform gaps to commuter rail trains of the Framingham/Worcester Line. Concave and convex gaps also exist in several MTR stations in Hong Kong, particularly on the East Rail line, which was built on the historic Kowloon–Canton Railway line.
Gap fillers
Mechanical platform edge extensions known as platform gap fillers may be used to bridge the gap between platform and vehicle. These stopgaps require careful alignment of the vehicle upon arrival, and careful synchronization to avoid serious damage caused by departure of the vehicle before the extenders are fully retracted. They increase station dwell time, and introduce safety and maintenance concerns of their own.
Alternatively, the gap fillers may be mounted on the train, and linked to the door operating mechanism. They may be found on modern trainsets, like various versions of the Stadler GTW and the British Rail Class 555 for the Tyne and Wear Metro. Train-mounted gap fillers eliminate the need for careful alignment and, as the driver only gets the signal that the doors have closed when the fillers have fully retracted, require no special synchronization on departure. Moving all active components of the system to the train instead of the platform allows maintenance to be performed in a shop, rather than in the field. Singapore has committed to specifying its newer trains with gap fillers, to reduce the incidence of platform gap accidents in its crowded stations.
On the Berlin U-Bahn, which has two different loading gauges (Kleinprofil on U1-U4 and Großprofil on U5-U9), so-called Blumenbretter ("flower boards") bridging the platform gap were attached to Kleinprofil trains that ran on Großprofil lines at various times of rolling stock shortage.
Equipment compatibility
In some rail systems, significant platform gaps may also occur (both horizontally and vertically) because of equipment and platforms designed to different and somewhat incompatible height and width standards. This situation may occur especially when previously separate rail systems are consolidated, or start to interoperate, thus allowing equipment to be moved onto tracks where it had not been used before.
In 2007, public testimony by the acting president of the Long Island Rail Road cited the need to interoperate with freight service and other passenger services such as New Jersey Transit and Amtrak, in addition to its own diverse rolling stock, as complicating and slowing efforts to deal with platform gap hazards.
Other contributing factors
Other variables that can increase platform gaps include rail wear, wheel wear, condition of the railcar suspension, and passenger load. A further complication is super-elevation, deliberate tilting of the railbed to allow faster travel around curves. This factor is especially relevant on systems where some express trains (such as long-distance Amtrak trains) operate non-stop through local stations located on curves. Higher pass-through speeds also increase railcar sway, requiring even larger physical clearances to avoid platform strikes.
Specifications and limits
In the US, the Americans with Disabilities Act requires that platforms be “readily accessible to and usable by individuals with disabilities, including individuals who use wheelchairs (49 CFR Part 37, Appendix A, 10.3.1 (9))”. However, this rule only applies to new construction or major renovations of stations. A 2009 report to the New Jersey Department of Transportation (NJDOT) observes that ADA rules specify that "At stations with high level platforms, there may be a gap of no more than 3” horizontal and 5/8" vertical between platform edge and entrance to the rail car. However, currently no passenger rail system in the U.S. has been able to achieve this without the use of manually operated 'bridge plates'.”
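As a trivial illustration of the two gap components defined earlier, the following sketch (Python) checks a measured gap against the 3 in horizontal and 5/8 in vertical figures quoted in that report; the constant and function names are purely illustrative and are not taken from any regulation text.
```python
# Figures quoted in the NJDOT report cited above (high-level platforms).
MAX_HORIZONTAL_IN = 3.0
MAX_VERTICAL_IN = 0.625

def meets_gap_limits(horizontal_in: float, vertical_in: float) -> bool:
    """Return True if both gap components are within the quoted limits."""
    return horizontal_in <= MAX_HORIZONTAL_IN and vertical_in <= MAX_VERTICAL_IN

print(meets_gap_limits(2.5, 0.5))   # True
print(meets_gap_limits(4.0, 0.5))   # False: horizontal gap too wide
```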
The US Federal Railroad Administration has also recommended maximum limits for platform gaps, with a larger allowance on curves.
Mitigation
Physical measures to reduce platform gaps may include realigning trackbeds, realigning platform slabs, and extending platform edges with wooden boards. Operational measures may include "zoning off" some railcars (not opening certain doors at problematic stations), relocating where trains stop along a lengthy platform, and temporarily deploying "platform conductor" personnel to assist passengers.
On systems where the floor level of the vehicle and the platform height closely match, an extendible platform can be installed below the doors of the vehicle, to deploy when the doors are opened. This significantly decreases the gap and thus the risks when boarding and alighting vehicles at stations or stops. This method is used by the German BR 423 EMUs and their derivatives, including the Dutch variant SLT.
A public awareness campaign may be used, employing visually distinct platform edge markings, posters, signs, public safety announcements, and web videos to increase safety awareness. The MTA Long Island Rail Road website lists some precautions passengers should observe regarding platform gaps.
An article in The Guardian conceded that some passengers who have fallen into platform gaps were drunk at the time, but pointed out other incidents in which victims did not have that impairment. The writer complained specifically about wide gaps that posed safety threats to children and the elderly, and called for modification of dangerous platforms.
Criticism
In 1865 the Franklin Institute reported on 'the frequent loss of life that occurred on station platforms' and stated that 'platforms should be built up to the level of the flooring of the carriages, and that a dangerous space between the platform and the carriages ought not to exist'.
A 2009 American report identified platform gap injury risk factors, including "mobility, being elderly, having disabilities (visual impairment), being accompanied by small children or incidents occurring to small children, behavior of other passengers such as pushing or jostling, carry luggage and other articles, alcohol, degraded platform conditions such as crowding, wet platforms or uneven platforms, and stepping distances".
In 2023, British transport systems lecturer and co-founder of UK-based Campaign for Level Boarding Gareth Dennis said achieving level boarding "should be a core objective" for any operators and that it is "not acceptable" for passengers to have to worry whether there will be an attendant with a ramp at their destination. He criticised London's Crossrail project's "poor decision making" which set new inner-city station floor heights on the Elizabeth line at train floor level, while outer suburban platforms remained at their pre-existing height, about 200mm lower: "This brand-new railway has cornered itself into perpetually offering an inaccessible service."
Incidents and accidents
An early incident involving a platform gap occurred in Jersey City, New Jersey during the American Civil War and involved Robert Todd Lincoln, son of American president Abraham Lincoln. While waiting on a crowded train platform, Robert Lincoln was pushed against the train as it started to move, dropping his feet into the gap. He was saved from possible serious injury or death by the prompt actions of the well-known actor Edwin Booth, whose brother John Wilkes Booth later assassinated President Lincoln.
In 2014, a news service in Mumbai, India reported several serious platform gap mutilation incidents and a death within a few months, mostly attributed to crowded conditions. In 2015, Singapore had at least two platform gap incidents which were eventually resolved, but caused significant disruptions in rush-hour service.
In 2014 in Perth, Australia, an accident occurred when a man fell between the platform and the train, and could not release his leg because the gap was too small. Other passengers "rocked" the carriage sideways to increase the gap, allowing the victim to escape.
In 2022 in Duvvada, India, a girl who was standing at the door to alight was knocked down by the door during a sudden jerk and fell into the gap between the coach and the platform. Despite immediate rescue efforts launched by authorities to free her, it took almost an hour to cut away part of the platform and rush her to the hospital. Injuries to her internal organs led to her death within a day.
See also
References
Further reading
London Underground platform gaps
Railway platforms
Railway safety
Accessibility
Technology hazards | Platform gap | Technology,Engineering | 2,168 |
47,953,853 | https://en.wikipedia.org/wiki/Murine%20UL16%20binding%20protein-like%20transcript | Murine UL16 binding protein-like transcript (MULT-1) is a murine cell surface glycoprotein encoded by the MULT-1 gene located on murine chromosome 10. MULT-1 is related to MHC class I and is composed of an α1α2 domain, a transmembrane segment, and a large cytoplasmic domain. MULT-1 functions as a stress-induced ligand for the NKG2D receptor.
References
Glycoproteins | Murine UL16 binding protein-like transcript | Chemistry | 104 |
41,150,682 | https://en.wikipedia.org/wiki/C15H12N2O3 | {{DISPLAYTITLE:C15H12N2O3}}
The molecular formula C15H12N2O3 (molar mass: 268.27 g/mol, exact mass: 268.0848 g/mol) may refer to:
Disperse Red 11
Hydrofuramide
Molecular formulas | C15H12N2O3 | Physics,Chemistry | 67 |
55,166,797 | https://en.wikipedia.org/wiki/Paecilomyces%20marquandii | Paecilomyces marquandii is a soil-borne filamentous fungus distributed throughout temperate to tropical latitudes worldwide, including forests, grasslands, sewage sludge and strongly metal-polluted areas, and is characterized by a high tolerance of heavy metals. Simultaneous toxic action of zinc and alachlor results in an increase in metal uptake by this fungus but disrupts the cell membrane. Paecilomyces marquandii is known to parasitize the mushroom Cuphophyllus virgineus, in the family Hygrophoraceae. Paecilomyces marquandii is categorised as a biosafety risk group 1 organism in Canada and is not thought to be a significant pathogen of humans or animals.
History
The genus Verticillium was erected by British mycologist G.E. Massee in 1898 to accommodate Verticillium marquandii. This species was initially thought conspecific with Spicaria violacea based on its pattern of cell wall division. The fungus was transferred to the genus Paecilomyces as Paecilomyces marquandii by Canadian mycologist Stanley John Hughes in 1951 because of its morphological inconsistency with the emerging, modern concept of Verticillium. Paecilomyces marquandii is often confused with Purpureocillium lilacinum because of their similar brownish-violet colony colors and bright yellow reverse pigmentation. In 2014, Metarhizium marquandii was introduced to accommodate this species, but it is considered a synonym.
Growth and morphology
Paecilomyces marquandii is an anamorphic eurotiomycete. It forms brush-like conidiophores borne on thin-walled, hyaline, and smooth-walled stalks that reach lengths from 50 to 300 μm and 2.5 to 3 μm wide. Conidiophores of P. marquandii resemble those of the genus Penicillium, where brush-like conidiophores terminate with phialides with swollen bases and tapered necks 8 to 15 μm long and 1.5 to 2 μm wide. Conidia are produced in connected chains consisting of smooth-walled, hyaline, broadly ellipsoidal to spindle-shaped spores, 3 to 3.5 μm long and 2.2 μm wide. Single phialides are not associated with conidiophores but may arise on vegetative aerial hyphae. Globose to ellipsoid chlamydospores 3.5 μm in diameter may be produced submerged in the growth medium beneath the mycelium. No sexual state is known. Colonies are odorless.
Paecilomyces marquandii can grow over a wide range of temperatures, with optimal growth at 25 °C but no growth above 37 °C. Temperature tolerance is a characteristic that distinguishes Paecilomyces marquandii from Purpureocillium lilacinum, with the latter exhibiting growth above 37 °C. Colonies of P. marquandii grown on malt agar reach 5–7 cm in diameter in 14 days at 25 °C with a velvety, brownish-violet aerial mycelium occasionally producing short tufts of conidiophores called synnemata. Colonies begin as white, becoming violet and then dark vinaceous brown with a bright yellow to orange-yellow reverse at maturity. Optimal growth of P. marquandii occurs at a water potential of 45 bars. Growth is inhibited at atmospheric concentrations of carbon dioxide less than 3%. Paecilomyces marquandii exhibits antagonism towards Rhizoctonia solani and other fungi. However, it exerts stimulatory effects on some crop plants including corn.
Physiology
Paecilomyces marquandii utilizes starch, gelatin, chitin, and nitrite. Cellulose decomposition is absent or very poor. Paecilomyces marquandii is characterised by high tolerance to metals such as zinc, copper and lead. This fungus is proficient at taking up minerals and heavy metals from soil particularly at high pH conditions, although very high concentrations of metals disrupt the cell membrane. This species is also able to take up and decompose the banned herbicide alachlor and break it down by nitrogen acetyl oxidation. P. marquandii produces highly active, specific keratinases. In presence of keratin chips with phosphate and magnesium ions, it forms large quantities of struvite crystals. Oxygen uptake of P. marquandii is reduced by saturated 8-11 carbon chain fatty acids as a sole carbon source but favoured by compounds with shorter or longer fatty acid chains. Optimum pH for P. marquandii growth is 5-6. It is sensitive to organic chemicals like carbon disulfide.
Habitat and ecology
Paecilomyces marquandii has been isolated from soils in Netherlands, Austria, Czech Republic, Russia, United States, Canada, Spain, Turkey, Israel, Syria, Zaire, central Africa, the Ivory Coast, South Africa, India, Pakistan, Nepal, Jamaica, the Bahamas, Brazil, Central America, New Zealand, and Japan. It has been found in various types of soils including forest soils, under aspen forests, mixed hardwood with high humus accumulation, grasslands particularly in the upper soil layers, soils with steppe-type vegetation, arable and other cultivated soils down to a depth of 40 cm. This species has been found in agricultural fields treated with sewage sludge, in sewage sludge itself, streams with a lower degree of pollution, river sediments, estuarine slit, sand dunes, carst caves, and bat guano. It has been also isolated from pine litter, pine humus, peat, truffle grounds, roots of strawberry, the rhizosphere of corn, wheat, grasses, Beta vulgaris, and sugar cane and the rhizosphere of Lupinus angustifolius. Conidia of P. marquandii have been observed to germinate near roots of peas and radish. This effect is inhibited in the presence of the fungicide, miconazole.
Human and animal disease
This species has been reported as an agent of cellulitis on the leg of an immunosuppressed kidney transplant patient receiving corticosteroid therapy. Successful control of disseminated P. marquandii infection was obtained with miconazole. The fungus has exhibited tolerance to amphotericin B and flucytosine. It is not thought to be a significant pathogen of humans or animals.
References
marquandii
Fungi described in 1898
Fungus species | Paecilomyces marquandii | Biology | 1,382 |
628,948 | https://en.wikipedia.org/wiki/Signal%20strength%20and%20readability%20report | A signal strength and readability report is a standardized format for reporting the strength of the radio signal and the readability (quality) of the radiotelephone (voice) or radiotelegraph (Morse code) signal transmitted by another station as received at the reporting station's location and by their radio station equipment. These report formats are usually designed for only one communications mode or the other, although a few are used for both telegraph and voice communications. All but one of these signal report formats involve the transmission of numbers.
History
As the earliest radio communication used Morse code, all radio signal reporting formats until about the 1920s were for radiotelegraph, and the early voice radio signal report formats were based on the telegraph report formats.
Timeline of signal report formats
The first signal report format code may have been QJS.
The U.S. Navy used R and K signals starting in 1929.
The QSK code was one of the twelve Q Codes listed in the 1912 International Radiotelegraph Convention Regulations, but may have been in use earlier.
The QSA code was included in the Madrid Convention (Appendix 10, General Regulations) sometime prior to 1936.
The Amateur radio R-S-T system signal report format currently in use was first developed in 1934.
As early as 1943, the U.S. and UK militaries published the first guidance that included the modern "Weak but readable", "Strong but distorted", and "Loud and clear" phrases.
By 1951, the CCEB had published ACP 125(A) (a.k.a. SGM-1O82-51), which formalized the 1943 "Loud and clear" format.
Radiotelegraph report formats
Q-Code signal report formats
The QSA code and QRK code are interrelated and complementary signal reporting codes for use in wireless telegraphy (Morse code). They replaced the earlier QSJ code.
Currently, the QSA and QRK codes are officially defined in the ITU Radio Regulations 1990, Appendix 13: Miscellaneous Abbreviations and Signals to Be Used in Radiotelegraphy Communications Except in the Maritime Mobile Service. They are also described identically in ACP 131(F):
R-S-T system
Amateur radio users in the U.S. and Canada have used the R-S-T system since 1934. This system was developed by amateur radio operator Arthur W. Braaten, W2BSR. It reports the readability on a scale of 1 to 5, the signal strength on a scale of 1 to 9, and the tone of the Morse code continuous wave signal on a scale of 1 to 9. During amateur radio contests, where the rate of new contacts is paramount, contest participants often give a perfect signal report of 599 even when the signal is lower quality, because always providing the same signal format enables them to send Morse code with less thought and thus increased speed.
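A minimal sketch of how such a report can be checked mechanically is shown below (Python); the function name is illustrative, and the digit ranges simply restate the scales described above (readability 1–5, strength 1–9, tone 1–9, with the tone digit dropped for voice contacts).
```python
def validate_rst(report: str) -> bool:
    """Check the digit ranges of an R-S-T report: readability 1-5,
    strength 1-9, and (for Morse/CW reports) tone 1-9."""
    if len(report) not in (2, 3) or not report.isdigit():
        return False
    readability, strength = int(report[0]), int(report[1])
    ok = 1 <= readability <= 5 and 1 <= strength <= 9
    if len(report) == 3:               # tone digit present only for CW
        ok = ok and 1 <= int(report[2]) <= 9
    return ok

print(validate_rst("599"))  # True  - the typical contest report
print(validate_rst("47"))   # True  - a voice report with no tone digit
print(validate_rst("699"))  # False - readability is only rated 1 to 5
```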
SINPO code
SINPO is an acronym for Signal, Interference, Noise, Propagation, and Overall, which was developed by the CCIR in 1951 (as C.C.I.R. Recommendation No. 251) for use in radiotelegraphy, and the standard is contained in Recommendation ITU-R SM.1135, SINPO and SINPFEMO codes. This format is most notably used by the BBC for receiving signal reports on postcards mailed from listeners, even though that same standard specifies that the SINPFEMO code should be used for radiotelephony transmissions. SINPO is the official radiotelegraph signal reporting code for international civil aviation and the ITU-R.
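The sketch below (Python) splits a five-digit SINPO report into its labelled components; it assumes the usual degrading scale of 1 (worst) to 5 (best) for each component, and the helper name is illustrative.
```python
SINPO_FIELDS = ("Signal", "Interference", "Noise", "Propagation", "Overall")

def parse_sinpo(report: str) -> dict:
    """Split a five-digit SINPO report into labelled components,
    assuming each component is rated from 1 (worst) to 5 (best)."""
    if len(report) != 5 or not report.isdigit():
        raise ValueError("a SINPO report is five digits")
    values = [int(ch) for ch in report]
    if any(v < 1 or v > 5 for v in values):
        raise ValueError("each SINPO component ranges from 1 to 5")
    return dict(zip(SINPO_FIELDS, values))

print(parse_sinpo("54454"))
# {'Signal': 5, 'Interference': 4, 'Noise': 4, 'Propagation': 5, 'Overall': 4}
```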
Radiotelephony report formats
R-S-T system
Amateur radio operators use the R-S-T system to describe voice transmissions, dropping the last digit (Tone report) because there is no continuous wave tone to report on.
SINPFEMO code
An extension of SINPO code, for use in radiotelephony (voice over radio) communications, SINPFEMO is an acronym for Signal, Interference, Noise, Propagation, Frequency of Fading, Depth, Modulation, and Overall.
Plain-language radio checks
The move to plain-language radio communications means that number-based formats are now considered obsolete, and are replaced by plain-language radio checks. These avoid the ambiguity of which number stands for which type of report and whether a 1 is considered good or bad. This format originated with the U.S. military in World War II, and is currently defined by ACP 125(G), published by the Combined Communications Electronics Board.
The prowords listed below are for use when initiating and answering queries concerning signal strength and readability.
Use in analog vs. digital radio transmission modes
In analog radio systems, as receiving stations move away from a radio transmitting site, the signal strength decreases gradually, causing the relative noise level to increase. The signal becomes increasingly difficult to understand until it can no longer be heard as anything other than static.
These reporting systems are usable for, but perhaps not completely appropriate for, rating digital signal quality. This is because digital signals have fairly consistent quality as the receiver moves away from the transmitter until reaching a threshold distance. At this threshold point, sometimes called the "digital cliff", the signal quality takes a severe drop and is lost. This difference in reception reduces attempts to ascertain subjective signal quality to simply asking, "Can you hear me now?" or similar. The only possible response is "yes"; otherwise, there is just dead air. This sudden signal drop was also one of the primary arguments of analog proponents against moving to digital systems. However, the "five bars" displayed on many cell phones does directly correlate to the signal strength rating.
Informal terminology and slang
The phrase "five by five" can be used informally to mean "good signal strength" or "loud and clear". An early example of this phrase was in 1946, recounting a wartime conversation. The phrase was used in 1954 in the novel The Blackboard Jungle. Another example usage of this phrase is from June 1965 by the crew of the Gemini IV spacecraft. This phrase apparently refers to the fact that the format consists of two digits, each ranging from one to five, with five/five being the best signal possible.
Some radio users have inappropriately started using the Circuit Merit telephone line quality measurement. This format is unsuitable for radiotelegraph or radio-telephony use because it focuses on voice-to-noise ratios, for judging whether a particular telephone line is suitable for commercial (paying customer) use, and does not include separate reports for signal strength and voice quality.
See also
Mean opinion score
Perceptual Evaluation of Speech Quality (PESQ)
Perceptual Objective Listening Quality Analysis (POLQA)
Procedure word
References
External links
Ham Radio RST Signal Reporting System for CW Operation, by Charlie Bautsch, W5AM
itu.int: SM.1135 - Sinpo and sinpfemo codes - ITU
English phrases
Operating signals
Quality control
Nondestructive testing | Signal strength and readability report | Materials_science | 1,436 |
59,472,484 | https://en.wikipedia.org/wiki/Charlotte%20Fitch%20Roberts | Charlotte Fitch Roberts (February 13, 1859 – December 5, 1917) was an American chemist best known for her work on stereochemistry.
Life
Roberts was born on February 13, 1859, in New York City to Horace Roberts and Mary Roberts (née Hart).
Education and career
Roberts attended Wellesley College in 1880. Wellesley made her a graduate assistant in 1881, an instructor in 1882, and an associate professor in 1886. In 1885 she spent a year at Cambridge University working with Sir James Dewar, a chemist and physicist. In 1896 she published The Development and Present Aspects of Stereochemistry. She obtained a PhD from Yale in 1894 and held a post at the University of Berlin from 1899 to 1900. She was a professor and head of the chemistry department at Wellesley College from 1896 to 1917.
Awards and professional bodies
Roberts was made a fellow of the American Association for the Advancement of Science, and a chemistry professorship at Wellesley now bears her name.
References
External links
1859 births
1917 deaths
Scientists from New York City
American women chemists
Stereochemists
20th-century American chemists
19th-century American chemists
Wellesley College alumni
Yale University alumni
Wellesley College faculty
Fellows of the American Association for the Advancement of Science | Charlotte Fitch Roberts | Chemistry | 242 |
55,799,101 | https://en.wikipedia.org/wiki/Glossary%20of%20representation%20theory | This is a glossary of representation theory in mathematics.
The term "module" is often used synonymously for a representation; for the module-theoretic terminology, see also glossary of module theory.
See also Glossary of Lie groups and Lie algebras, list of representation theory topics and :Category:Representation theory.
Notations: We write . Thus, for example, a one-representation (i.e., a character) of a group G is of the form .
A
B
C
D
E
F
G
H
I
J
K
L
M
O
P
Q
R
S
T
U
V
W
Y
Z
Notes
References
Theodor Bröcker and Tammo tom Dieck, Representations of compact Lie groups, Graduate Texts in Mathematics 98, Springer-Verlag, Berlin, 1995.
Claudio Procesi (2007) Lie Groups: an approach through invariants and representation, Springer, .
N. Wallach, Real Reductive Groups, 2 vols., Academic Press 1988,
Further reading
M. Duflo et M. Vergne, La formule de Plancherel des groupes de Lie semi-simples réels, in “Representations of Lie Groups;” Kyoto, Hiroshima (1986), Advanced Studies in Pure Mathematics 14, 1988.
External links
https://math.stanford.edu/~bump/
Representation theory
Wikipedia glossaries using description lists | Glossary of representation theory | Mathematics | 278 |
14,275 | https://en.wikipedia.org/wiki/Hacker%20ethic | The hacker ethic is a philosophy and set of moral values within hacker culture. Practitioners believe that sharing information and data with others is an ethical imperative. The hacker ethic is related to the concept of freedom of information, as well as the political theories of anti-authoritarianism, anarchism, and libertarianism.
While some tenets of the hacker ethic were described in other texts like Computer Lib/Dream Machines (1974) by Ted Nelson, the term hacker ethic is generally attributed to journalist Steven Levy, who appears to have been the first to document both the philosophy and the founders of the philosophy in his 1984 book titled Hackers: Heroes of the Computer Revolution.
History
The hacker ethic originated at the Massachusetts Institute of Technology in the 1950s–1960s. The term "hacker" has long been used there to describe college pranks that MIT students would regularly devise, and was used more generally to describe a project undertaken or a product built to fulfill some constructive goal, but also out of pleasure for mere involvement.
MIT housed an early IBM 704 computer inside the Electronic Accounting Machinery (EAM) room in 1959. This room became the staging grounds for early hackers, as MIT students from the Tech Model Railroad Club sneaked inside the EAM room after hours to attempt programming the 30-ton computer.
The hacker ethic was described as a "new way of life, with a philosophy, an ethic and a dream". However, the elements of the hacker ethic were not openly debated and discussed; rather they were implicitly accepted and silently agreed upon.
The free software movement was born in the early 1980s from followers of the hacker ethic. Its founder, Richard Stallman, is referred to by Steven Levy as "the last true hacker".
Richard Stallman describes:
"The hacker ethic refers to the feelings of right and wrong, to the ethical ideas this community of people had—that knowledge should be shared with other people who can benefit from it, and that important resources should be utilized rather than wasted."
and states more precisely that hacking (which Stallman defines as playful cleverness) and ethics are two separate issues:
"Just because someone enjoys hacking does not mean he has an ethical commitment to treating other people properly. Some hackers care about ethics—I do, for instance—but that is not part of being a hacker, it is a separate trait. [...] Hacking is not primarily about an ethical issue. [...] hacking tends to lead a significant number of hackers to think about ethical questions in a certain way. I would not want to completely deny all connection between hacking and views on ethics."The hacker culture has been compared to early Protestantism . Protestant sectarians emphasized individualism and loneliness, similar to hackers who have been considered loners and nonjudgmental individuals. The notion of moral indifference between hackers characterized the persistent actions of computer culture in the 1970s and early 1980s. According to Kirkpatrick, author of The Hacker Ethic, the "computer plays the role of God, whose requirements took priority over the human ones of sentiment when it came to assessing one's duty to others."
According to Kirkpatrick's The Hacker Ethic:
"Exceptional single-mindedness and determination to keep plugging away at a problem until the optimal solution had been found are well-documented traits of the early hackers. Willingness to work right through the night on a single programming problem are widely cited as features of the early 'hacker' computer culture."
The hacker culture is placed in the context of 1960s youth culture when American youth culture challenged the concept of capitalism and big, centralized structures. The hacker culture was a subculture within 1960s counterculture. The hackers' main concern was challenging the idea of technological expertise and authority. The 1960s hippy period attempted to "overturn the machine." Although hackers appreciated technology, they wanted regular citizens, and not big corporations, to have power over technology "as a weapon that might actually undermine the authority of the expert and the hold of the monolithic system."
The hacker ethics
As Levy summarized in the preface of Hackers, the general tenets or principles of hacker ethic include:
Sharing
Openness
Decentralization
Free access to computers
World Improvement (foremost, upholding democracy and the fundamental laws we all live by, as a society)
In addition to those principles, Levy also described more specific hacker ethics and beliefs in chapter 2, The Hacker Ethic:
1. "Access to computers—and anything which might teach you something about the way the world works—should be unlimited and total. Always yield to the Hands-On Imperative!" Levy is recounting hackers' abilities to learn and build upon pre-existing ideas and systems. He believes that access gives hackers the opportunity to take things apart, fix, or improve upon them and to learn and understand how they work. This gives them the knowledge to create new and even more interesting things. Access aids the expansion of technology.
2. "All information should be free" Linking directly with the principle of access, information needs to be free for hackers to fix, improve, and reinvent systems. A free exchange of information allows for greater overall creativity. In the hacker viewpoint, any system could benefit from an easy flow of information, a concept known as transparency in the social sciences. As Stallman notes, "free" refers to unrestricted access; it does not refer to price.
3. "Mistrust authority—promote decentralization" The best way to promote the free exchange of information is to have an open system that presents no boundaries between a hacker and a piece of information or an item of equipment that they need in their quest for knowledge, improvement, and time on-line. Hackers believe that bureaucracies, whether corporate, government, or university, are flawed systems.
4. "Hackers should be judged by their hacking, not bogus criteria such as degrees, age, race, sex, or position" Inherent in the hacker ethic is a meritocratic system where superficiality is disregarded in esteem of skill. Levy articulates that criteria such as age, sex, race, position, and qualification are deemed irrelevant within the hacker community. Hacker skill is the ultimate determinant of acceptance. Such a code within the hacker community fosters the advance of hacking and software development.
5. "You can create art and beauty on a computer" Hackers deeply appreciate innovative techniques which allow programs to perform complicated tasks with few instructions. A program's code was considered to hold a beauty of its own, having been carefully composed and artfully arranged. Learning to create programs which used the least amount of space almost became a game between the early hackers.
6. "Computers can change your life for the better" Hackers felt that computers had enriched their lives, given their lives focus, and made their lives adventurous. Hackers regarded computers as Aladdin's lamps that they could control. They believed that everyone in society could benefit from experiencing such power and that if everyone could interact with computers in the way that hackers did, then the hacker ethic might spread through society and computers would improve the world. The hackers succeeded in turning dreams of endless possibilities into realities. The hacker's primary object was to teach society that "the world opened up by the computer was a limitless one" (Levy 230:1984)
Sharing
From the early days of modern computing through to the 1970s, it was far more common for computer users to have the freedoms that are provided by an ethic of open sharing and collaboration. Software, including source code, was commonly shared by individuals who used computers. Most companies had a business model based on hardware sales, and provided or bundled the associated software free of charge. According to Levy's account, sharing was the norm and expected within the non-corporate hacker culture. The principle of sharing stemmed from the open atmosphere and informal access to resources at MIT. During the early days of computers and programming, the hackers at MIT would develop a program and share it with other computer users.
If the hack was deemed particularly good, then the program might be posted on a board somewhere near one of the computers. Other programs that could be built upon it and improved it were saved to tapes and added to a drawer of programs, readily accessible to all the other hackers. At any time, a fellow hacker might reach into the drawer, pick out the program, and begin adding to it or "bumming" it to make it better. Bumming referred to the process of making the code more concise so that more can be done in fewer instructions, saving precious memory for further enhancements.
In the second generation of hackers, sharing was about sharing with the general public in addition to sharing with other hackers. A particular organization of hackers that was concerned with sharing computers with the general public was a group called Community Memory. This group of hackers and idealists put computers in public places for anyone to use. The first community computer was placed outside of Leopold's Records in Berkeley, California.
Another sharing of resources occurred when Bob Albrecht provided considerable resources for a non-profit organization called the People's Computer Company (PCC). PCC opened a computer center where anyone could use the computers there for fifty cents per hour.
This second generation practice of sharing contributed to the battles of free and open software. In fact, when Bill Gates' version of BASIC for the Altair was shared among the hacker community, Gates claimed to have lost a considerable sum of money because few users paid for the software. As a result, Gates wrote an Open Letter to Hobbyists. This letter was published by several computer magazines and newsletters, most notably that of the Homebrew Computer Club where much of the sharing occurred.
According to Brent K. Jesiek in "Democratizing Software: Open Source, the Hacker Ethic, and Beyond," technology is being associated with social views and goals. Jesiek refers to Gisle Hannemyr's views on open source vs. commercialized software. Hannemyr concludes that when a hacker constructs software, the software is flexible, tailorable, modular in nature and open-ended. A hacker's software contrasts with mainstream software, which favors control, a sense of being whole, and immutability (Hannemyr, 1999).
Furthermore, he concludes that 'the difference between the hacker’s approach and those of the industrial programmer is one of outlook: between an agoric, integrated and holistic attitude towards the creation of artifacts and a proprietary, fragmented and reductionist one' (Hannemyr, 1999). As Hannemyr’s analysis reveals, the characteristics of a given piece of software frequently reflect the attitude and outlook of the programmers and organizations from which it emerges."
Copyright and patents
As copyright and patent laws limit the ability to share software, opposition to software patents is widespread in the hacker and free software community.
Hands-On Imperative
Many of the principles and tenets of hacker ethic contribute to a common goal: the Hands-On Imperative. As Levy described in Chapter 2, "Hackers believe that essential lessons can be learned about the systems—about the world—from taking things apart, seeing how they work, and using this knowledge to create new and more interesting things."
Employing the Hands-On Imperative requires free access, open information, and the sharing of knowledge. To a true hacker, if the Hands-On Imperative is restricted, then the ends justify the means to make it unrestricted so that improvements can be made. When these principles are not present, hackers tend to work around them. For example, when the computers at MIT were protected either by physical locks or login programs, the hackers there systematically worked around them in order to have access to the machines. Hackers assumed a "willful blindness" in the pursuit of perfection.
This behavior was not malicious in nature: the MIT hackers did not seek to harm the systems or their users. This deeply contrasts with the modern, media-encouraged image of hackers who crack secure systems in order to steal information or complete an act of cyber-vandalism.
Community and collaboration
Throughout writings about hackers and their work processes, a common value of community and collaboration is present. For example, in Levy's Hackers, each generation of hackers had geographically based communities where collaboration and sharing occurred. For the hackers at MIT, it was the labs where the computers were running. For the hardware hackers (second generation) and the game hackers (third generation) the geographic area was centered in Silicon Valley where the Homebrew Computer Club and the People's Computer Company helped hackers network, collaborate, and share their work.
The concept of community and collaboration is still relevant today, although hackers are no longer limited to collaboration in geographic regions. Now collaboration takes place via the Internet. Eric S. Raymond identifies and explains this conceptual shift in The Cathedral and the Bazaar:
Before cheap Internet, there were some geographically compact communities where the culture encouraged Weinberg's egoless programming, and a developer could easily attract a lot of skilled kibitzers and co-developers. Bell Labs, the MIT AI and LCS labs, UC Berkeley: these became the home of innovations that are legendary and still potent.
Raymond also notes that the success of Linux coincided with the wide availability of the World Wide Web. The value of community is still in high practice and use today.
Levy's "true hackers"
Levy identifies several "true hackers" who significantly influenced the hacker ethic. Some well-known "true hackers" include:
Bill Gosper: Mathematician and hacker
Richard Greenblatt: Programmer and early designer of LISP machines
John McCarthy: Co-founder of the MIT Artificial Intelligence Lab and Stanford AI Laboratory
Jude Milhon: Founder of the cypherpunk movement, senior editor at Mondo 2000, and co-founder of Community Memory
Richard Stallman: Programmer and political activist who is well known for GNU, Emacs and the Free Software Movement
Levy also identified the "hardware hackers" (the "second generation", mostly centered in Silicon Valley) and the "game hackers" (or the "third generation"). All three generations of hackers, according to Levy, embodied the principles of the hacker ethic. Some of Levy's "second-generation" hackers include:
Steve Dompier: Homebrew Computer Club member and hacker who worked with the early Altair 8800
John Draper: A legendary figure in the computer programming world. He wrote EasyWriter, the first word processor.
Lee Felsenstein: A hardware hacker and co-founder of Community Memory and Homebrew Computer Club; a designer of the Sol-20 computer
Bob Marsh: A designer of the Sol-20 computer
Fred Moore: Activist and founder of the Homebrew Computer Club
Steve Wozniak: One of the founders of Apple Computer
Levy's "third generation" practitioners of hacker ethic include:
John Harris: One of the first programmers hired at On-Line Systems (which later became Sierra Entertainment)
Ken Williams: Along with wife Roberta, founded On-Line Systems after working at IBM – the company would later achieve mainstream popularity as Sierra.
Other descriptions
In 2001, Finnish philosopher Pekka Himanen promoted the hacker ethic in opposition to the Protestant work ethic. In Himanen's opinion, the hacker ethic is more closely related to the virtue ethics found in the writings of Plato and of Aristotle. Himanen explained these ideas in a book, The Hacker Ethic and the Spirit of the Information Age, with a prologue contributed by Linus Torvalds and an epilogue by Manuel Castells.
In this manifesto, the authors wrote about a hacker ethic centering on passion, hard work, creativity and joy in creating software. Both Himanen and Torvalds were inspired by the Sampo in Finnish mythology. The Sampo, described in the Kalevala saga, was a magical artifact constructed by Ilmarinen, the blacksmith god, that brought good fortune to its holder; nobody knows exactly what it was supposed to be. The Sampo has been interpreted in many ways: a world pillar or world tree, a compass or astrolabe, a chest containing a treasure, a Byzantine coin die, a decorated Vendel period shield, a Christian relic, etc. Kalevala saga compiler Lönnrot interpreted it to be a "quern" or mill of some sort that made flour, salt, and wealth.
See also
Hacks at the Massachusetts Institute of Technology
Hacker (programmer subculture)
Hacker (term)
Hacktivism
Tech Model Railroad Club
The Cathedral and the Bazaar
Free software movement
Free software philosophy
Footnotes
References
Further reading
External links
Learn Ethical Hacking From Android
Gabriella Coleman, an anthropologist at McGill University, studies hacker cultures and has written extensively on the hacker ethic and culture
Tom Chance's essay on The Hacker Ethic and Meaningful Work
Hacker ethic from the Jargon file
Directory of free software
ITERATIVE DISCOURSE AND THE FORMATION OF NEW SUBCULTURES by Steve Mizrach describes the hacker terminology, including the term cracker.
Richard Stallman's Personal Website
Is there a Hacker Ethic for 90s Hackers? by Steven Mizrach
The Hacker's Ethics by the Cyberpunk Project
Computing and society
Hacker culture
Decentralization | Hacker ethic | Technology | 3,568 |
14,307 | https://en.wikipedia.org/wiki/Hall%20effect | The Hall effect is the production of a potential difference (the Hall voltage) across an electrical conductor that is transverse to an electric current in the conductor and to an applied magnetic field perpendicular to the current. It was discovered by Edwin Hall in 1879.
The Hall coefficient is defined as the ratio of the induced electric field to the product of the current density and the applied magnetic field. It is a characteristic of the material from which the conductor is made, since its value depends on the type, number, and properties of the charge carriers that constitute the current.
Discovery
Wires carrying current in a magnetic field experience a mechanical force perpendicular to both the current and magnetic field.
In the 1820s, André-Marie Ampère observed this underlying mechanism that led to the discovery of the Hall effect. However it was not until a solid mathematical basis for electromagnetism was systematized by James Clerk Maxwell's "On Physical Lines of Force" (published in 1861–1862) that details of the interaction between magnets and electric current could be understood.
Edwin Hall then explored the question of whether magnetic fields interacted with the conductors or the electric current, and reasoned that if the force was specifically acting on the current, it should crowd current to one side of the wire, producing a small measurable voltage. In 1879, he discovered this Hall effect while he was working on his doctoral degree at Johns Hopkins University in Baltimore, Maryland. Eighteen years before the electron was discovered, his measurements of the tiny effect produced in the apparatus he used were an experimental tour de force, published under the name "On a New Action of the Magnet on Electric Currents".
Hall effect within voids
The term ordinary Hall effect can be used to distinguish the effect described in the introduction from a related effect which occurs across a void or hole in a semiconductor or metal plate when current is injected via contacts that lie on the boundary or edge of the void. The charge then flows outside the void, within the metal or semiconductor material. The effect becomes observable, in a perpendicular applied magnetic field, as a Hall voltage appearing on either side of a line connecting the current-contacts. It exhibits apparent sign reversal in comparison to the "ordinary" effect occurring in the simply connected specimen. It depends only on the current injected from within the void.
Hall effect superposition
Superposition of these two forms of the effect, the ordinary and void effects, can also be realized. First imagine the "ordinary" configuration, a simply connected (void-less) thin rectangular homogeneous element with current-contacts on the (external) boundary. This develops a Hall voltage, in a perpendicular magnetic field. Next, imagine placing a rectangular void within this ordinary configuration, with current-contacts, as mentioned above, on the interior boundary of the void. (For simplicity, imagine the contacts on the boundary of the void lined up with the ordinary-configuration contacts on the exterior boundary.) In such a combined configuration, the two Hall effects may be realized and observed simultaneously in the same doubly connected device: A Hall effect on the external boundary that is proportional to the current injected only via the outer boundary, and an apparently sign-reversed Hall effect on the interior boundary that is proportional to the current injected only via the interior boundary. The superposition of multiple Hall effects may be realized by placing multiple voids within the Hall element, with current and voltage contacts on the boundary of each void.
Further "Hall effects" may have additional physical mechanisms but are built on these basics.
Theory
The Hall effect is due to the nature of the current in a conductor. Current consists of the movement of many small charge carriers, typically electrons, holes, ions (see Electromigration) or all three. When a magnetic field is present, these charges experience a force, called the Lorentz force. When such a magnetic field is absent, the charges follow approximately straight paths between collisions with impurities, phonons, etc. However, when a magnetic field with a perpendicular component is applied, their paths between collisions are curved; thus, moving charges accumulate on one face of the material. This leaves equal and opposite charges exposed on the other face, where there is a scarcity of mobile charges. The result is an asymmetric distribution of charge density across the Hall element, arising from a force that is perpendicular to both the straight path and the applied magnetic field. The separation of charge establishes an electric field that opposes the migration of further charge, so a steady electric potential is established for as long as the charge is flowing.
In classical electromagnetism electrons move in the opposite direction of the current (by convention "current" describes a theoretical "hole flow"). In some metals and semiconductors it appears "holes" are actually flowing because the direction of the voltage is opposite to the derivation below.
For a simple metal where there is only one type of charge carrier (electrons), the Hall voltage V_H can be derived by using the Lorentz force and seeing that, in the steady-state condition, charges are not moving in the y-axis direction. Thus, the magnetic force on each electron in the y-axis direction is cancelled by a y-axis electrical force due to the buildup of charges. The v_x term is the drift velocity of the current, which is assumed at this point to be holes by convention. The v_x B_z term is negative in the y-axis direction by the right hand rule.
In steady state, F = 0, so 0 = E_y − v_x B_z, where E_y is assigned in the direction of the y-axis (and not with the arrow of the induced electric field as in the image, which points in the −y direction and indicates where the field caused by the electrons is pointing).
In wires, electrons instead of holes are flowing, so v_x → −v_x and q → −q. Also E_y = −V_H / w. Substituting these changes gives V_H = v_x B_z w.
The conventional "hole" current is in the negative direction of the electron current and the negative of the electrical charge, which gives I_x = n t w (−v_x)(−e), where n is the charge carrier density, t w is the cross-sectional area, and −e is the charge of each electron. Solving for w and plugging into the above gives the Hall voltage:
V_H = I_x B_z / (n t e)
If the charge build-up had been positive (as it appears in some metals and semiconductors), then the V_H assigned in the image would have been negative (positive charge would have built up on the left side).
The Hall coefficient is defined as
R_H = E_y / (j_x B_z)
or
R_H = V_H t / (I B)
where j_x is the current density of the carrier electrons, and E_y is the induced electric field. In SI units, this becomes
R_H = E_y / (j_x B) = V_H t / (I B) = −1 / (n e)
(The units of R_H are usually expressed as m³/C, or Ω·cm/G, or other variants.) As a result, the Hall effect is very useful as a means to measure either the carrier density or the magnetic field.
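As a sketch of that use, the single-carrier relation V_H = I B / (n t e) derived above can be rearranged to estimate the carrier density from a measured Hall voltage; the numbers below are purely illustrative and the function name is hypothetical (Python).
```python
E_CHARGE = 1.602e-19  # elementary charge, in coulombs

def carrier_density(current_a: float, b_field_t: float,
                    thickness_m: float, hall_voltage_v: float) -> float:
    """Rearrange V_H = I * B / (n * t * e) to n = I * B / (V_H * t * e)
    for a single-carrier conductor."""
    return current_a * b_field_t / (hall_voltage_v * thickness_m * E_CHARGE)

# Illustrative measurement: 10 mA through a 0.5 mm thick sample in a 1 T
# field, producing a 2 microvolt Hall voltage.
n = carrier_density(current_a=10e-3, b_field_t=1.0,
                    thickness_m=0.5e-3, hall_voltage_v=2e-6)
print(f"carrier density ~ {n:.2e} per m^3")   # about 6.2e25 m^-3
```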
One very important feature of the Hall effect is that it differentiates between positive charges moving in one direction and negative charges moving in the opposite. In the diagram above, the Hall effect with a negative charge carrier (the electron) is presented. But consider the same magnetic field and current are applied but the current is carried inside the Hall effect device by a positive particle. The particle would of course have to be moving in the opposite direction of the electron in order for the current to be the same—down in the diagram, not up like the electron is. And thus, mnemonically speaking, your thumb in the Lorentz force law, representing (conventional) current, would be pointing the same direction as before, because current is the same—an electron moving up is the same current as a positive charge moving down. And with the fingers (magnetic field) also being the same, interestingly the charge carrier gets deflected to the left in the diagram regardless of whether it is positive or negative. But if positive carriers are deflected to the left, they would build a relatively positive voltage on the left whereas if negative carriers (namely electrons) are, they build up a negative voltage on the left as shown in the diagram. Thus for the same current and magnetic field, the electric polarity of the Hall voltage is dependent on the internal nature of the conductor and is useful to elucidate its inner workings.
This property of the Hall effect offered the first real proof that electric currents in most metals are carried by moving electrons, not by protons. It also showed that in some substances (especially p-type semiconductors), it is contrarily more appropriate to think of the current as positive "holes" moving rather than negative electrons. A common source of confusion with the Hall effect in such materials is that holes moving one way are really electrons moving the opposite way, so one expects the Hall voltage polarity to be the same as if electrons were the charge carriers as in most metals and n-type semiconductors. Yet we observe the opposite polarity of Hall voltage, indicating positive charge carriers. However, of course there are no actual positrons or other positive elementary particles carrying the charge in p-type semiconductors, hence the name "holes". In the same way as the oversimplistic picture of light in glass as photons being absorbed and re-emitted to explain refraction breaks down upon closer scrutiny, this apparent contradiction too can only be resolved by the modern quantum mechanical theory of quasiparticles wherein the collective quantized motion of multiple particles can, in a real physical sense, be considered to be a particle in its own right (albeit not an elementary one).
Unrelatedly, inhomogeneity in the conductive sample can result in a spurious sign of the Hall effect, even in ideal van der Pauw configuration of electrodes. For example, a Hall effect consistent with positive carriers was observed in evidently n-type semiconductors. Another source of artefact, in uniform materials, occurs when the sample's aspect ratio is not long enough: the full Hall voltage only develops far away from the current-introducing contacts, since at the contacts the transverse voltage is shorted out to zero.
Hall effect in semiconductors
When a current-carrying semiconductor is kept in a magnetic field, the charge carriers of the semiconductor experience a force in a direction perpendicular to both the magnetic field and the current. At equilibrium, a voltage appears at the semiconductor edges.
The simple formula for the Hall coefficient given above is usually a good explanation when conduction is dominated by a single charge carrier. However, in semiconductors and many metals the theory is more complex, because in these materials conduction can involve significant, simultaneous contributions from both electrons and holes, which may be present in different concentrations and have different mobilities. For moderate magnetic fields the Hall coefficient is
R_H = (p μ_h² − n μ_e²) / (e (p μ_h + n μ_e)²)
or equivalently
R_H = (p − n b²) / (e (p + n b)²)
with
b = μ_e / μ_h.
Here n is the electron concentration, p the hole concentration, μ_e the electron mobility, μ_h the hole mobility and e the elementary charge.
For large applied fields the simpler expression analogous to that for a single carrier type holds.
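A short sketch of the moderate-field, two-carrier expression above is given below (Python); the concentrations and mobilities are illustrative values only, and the function name is hypothetical.
```python
E_CHARGE = 1.602e-19  # elementary charge, in coulombs

def hall_coefficient_two_carrier(n: float, p: float,
                                 mu_e: float, mu_h: float) -> float:
    """Moderate-field Hall coefficient with both electrons and holes:
    R_H = (p*mu_h**2 - n*mu_e**2) / (e * (p*mu_h + n*mu_e)**2)."""
    return (p * mu_h ** 2 - n * mu_e ** 2) / (
        E_CHARGE * (p * mu_h + n * mu_e) ** 2)

# Illustrative case: equal electron and hole concentrations, but electrons
# more mobile, so R_H comes out negative (electron-like).
r_h = hall_coefficient_two_carrier(n=1e19, p=1e19, mu_e=0.39, mu_h=0.19)
print(f"R_H = {r_h:.3e} m^3/C")
```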
Relationship with star formation
Although it is well known that magnetic fields play an important role in star formation, research models indicate that Hall diffusion critically influences the dynamics of gravitational collapse that forms protostars.
Quantum Hall effect
For a two-dimensional electron system which can be produced in a MOSFET, in the presence of large magnetic field strength and low temperature, one can observe the quantum Hall effect, in which the Hall conductance undergoes quantum Hall transitions to take on the quantized values.
Spin Hall effect
The spin Hall effect consists in the spin accumulation on the lateral boundaries of a current-carrying sample. No magnetic field is needed. It was predicted by Mikhail Dyakonov and V. I. Perel in 1971 and observed experimentally more than 30 years later, both in semiconductors and in metals, at cryogenic as well as at room temperatures.
The quantity describing the strength of the spin Hall effect is known as the spin Hall angle, defined as the ratio θ_SH = J_s / J_c, where J_s is the spin current generated by the applied charge current density J_c.
Quantum spin Hall effect
For mercury telluride two dimensional quantum wells with strong spin-orbit coupling, in zero magnetic field, at low temperature, the quantum spin Hall effect has been observed in 2007.
Anomalous Hall effect
In ferromagnetic materials (and paramagnetic materials in a magnetic field), the Hall resistivity includes an additional contribution, known as the anomalous Hall effect (or the extraordinary Hall effect), which depends directly on the magnetization of the material, and is often much larger than the ordinary Hall effect. (Note that this effect is not due to the contribution of the magnetization to the total magnetic field.) For example, in nickel, the anomalous Hall coefficient is about 100 times larger than the ordinary Hall coefficient near the Curie temperature, but the two are similar at very low temperatures. Although a well-recognized phenomenon, there is still debate about its origins in the various materials. The anomalous Hall effect can be either an extrinsic (disorder-related) effect due to spin-dependent scattering of the charge carriers, or an intrinsic effect which can be described in terms of the Berry phase effect in the crystal momentum space (-space).
Hall effect in ionized gases
The Hall effect in an ionized gas (plasma) is significantly different from the Hall effect in solids (where the Hall parameter is always much less than unity). In a plasma, the Hall parameter can take any value. The Hall parameter, β, in a plasma is the ratio between the electron gyrofrequency, Ω_e, and the electron-heavy-particle collision frequency, ν:
β = Ω_e / ν = e B / (m_e ν)
where
e is the elementary charge (approximately 1.6 × 10⁻¹⁹ C)
B is the magnetic field (in teslas)
m_e is the electron mass (approximately 9.1 × 10⁻³¹ kg).
The Hall parameter value increases with the magnetic field strength.
Physically, the trajectories of electrons are curved by the Lorentz force. Nevertheless, when the Hall parameter is low, their motion between two encounters with heavy particles (neutral or ion) is almost linear. But if the Hall parameter is high, the electron movements are highly curved. The current density vector, J, is no longer collinear with the electric field vector, E. The two vectors J and E make the Hall angle, θ, which also gives the Hall parameter: β = tan(θ).
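A numerical sketch of these relations is given below (Python); the field strength and collision frequency are illustrative values, the physical constants are rounded, and the function name is hypothetical.
```python
import math

E_CHARGE = 1.602e-19    # elementary charge, C
M_ELECTRON = 9.109e-31  # electron mass, kg

def hall_parameter(b_field_t: float, collision_freq_hz: float) -> float:
    """Hall parameter of a plasma: beta = e * B / (m_e * nu), the ratio of
    the electron gyrofrequency to the electron-heavy-particle collision
    frequency."""
    return E_CHARGE * b_field_t / (M_ELECTRON * collision_freq_hz)

# Illustrative plasma: 0.1 T magnetic field, 1 GHz collision frequency.
beta = hall_parameter(0.1, 1.0e9)
theta_deg = math.degrees(math.atan(beta))   # Hall angle, since beta = tan(theta)
print(f"beta = {beta:.1f}, Hall angle = {theta_deg:.0f} degrees")
```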
Other Hall effects
The family of Hall effects has expanded to encompass other quasiparticles in semiconductor nanostructures. Specifically, a set of Hall effects has emerged based on excitons and exciton-polaritons in 2D materials and quantum wells.
Applications
Hall sensors amplify and use the Hall effect for a variety of sensing applications.
Corbino effect
The Corbino effect, named after its discoverer Orso Mario Corbino, is a phenomenon involving the Hall effect, but a disc-shaped metal sample is used in place of a rectangular one. Because of its shape the Corbino disc allows the observation of Hall effect–based magnetoresistance without the associated Hall voltage.
A radial current through a circular disc, subjected to a magnetic field perpendicular to the plane of the disc, produces a "circular" current through the disc. The absence of the free transverse boundaries renders the interpretation of the Corbino effect simpler than that of the Hall effect.
See also
Electromagnetic induction
Nernst effect
Thermal Hall effect
References
Sources
Introduction to Plasma Physics and Controlled Fusion, Volume 1, Plasma Physics, Second Edition, 1984, Francis F. Chen
Further reading
Annraoi M. de Paor. Correction to the classical two-species Hall coefficient using two-port network theory. International Journal of Electrical Engineering Education 43/4.
The Hall effect - The Feynman Lectures on Physics
University of Washington The Hall Effect
External links
, P. H. Craig, System and apparatus employing the Hall effect
, J. T. Maupin, E. A. Vorthmann, Hall effect contactless switch with prebiased Schmitt trigger
Understanding and Applying the Hall Effect
Hall Effect Thrusters Alta Space
Hall effect calculators
Interactive Java tutorial on the Hall effect National High Magnetic Field Laboratory
Science World (wolfram.com) article.
"The Hall Effect". nist.gov.
Table with Hall coefficients of different elements at room temperature .
Simulation of the Hall effect as a Youtube video
Hall effect in electrolytes
Condensed matter physics
Electric and magnetic fields in matter | Hall effect | Physics,Chemistry,Materials_science,Engineering | 3,247 |
60,837,114 | https://en.wikipedia.org/wiki/FlexAID | FlexAID is a molecular docking software that can use small molecules and peptides as ligands and proteins and nucleic acids as docking targets. As the name suggests, FlexAID supports full ligand flexibility as well side-chain flexibility of the target. It does using a soft scoring function based on the complementarity of the two surfaces (ligand and target).
FlexAID has been shown to outperform existing widely used software such as AutoDock Vina and FlexX in the prediction of binding poses. This is particularly true in cases where target flexibility is crucial, such as is likely to be the case when using homology models. The source code is available on GitHub under Apache License.
Graphical user interface
A PyMOL plugin for FlexAID, NRGsuite, has also been developed by the original authors.
See also
Docking (molecular)
Virtual screening
List of protein-ligand docking software
References
External links
— Najmanovich Research Group resources
Molecular modelling software
Molecular modelling
Free and open-source software
Software using the Apache license
Free software programmed in C
Free software programmed in C++ | FlexAID | Chemistry | 223 |
47,885,449 | https://en.wikipedia.org/wiki/HD%20124099 | HD 124099 (HR 5306; NSV 20066; 7 G. Apodis) is a solitary orange-hued star located in the southern circumpolar constellation Apus. It has an average apparent magnitude of 6.47, placing it very close to the limit for naked eye visibility, even under ideal conditions. The object is located relatively far at a distance of approximately 2,030 light-years based on Gaia DR3 parallax measurements, but it is drifting closer with a heliocentric radial velocity of . At its current distance, HD 124099's average brightness is diminished by 0.47 magnitudes due to interstellar extinction and it has an absolute magnitude of −2.10.
HD 124099 has a stellar classification of K2 IIp, indicating that it is an evolved K-type bright giant with peculiarities in its spectrum; the peculiarity being that it has either a very weak or no G-band in its spectrum. It has 4.22 times the mass of the Sun but it has expanded to 71.4 times the radius of the Sun. It radiates 1,545 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of . However, Gaia DR3 stellar evolution models give a somewhat larger radius and a higher luminosity. HD 124099 is metal deficient, with an iron abundance 61.2% that of the Sun ([Fe/H] = −0.21), and it spins modestly with a projected rotational velocity of . The star is suspected to be a semiregular variable of the SRD subtype, ranging from magnitude 6.46 to 6.49 over a period of 528 days.
References
K-type bright giants
Apus
Semiregular variable stars
Apodis,7
CD-77 00643
124099
069778
5306 | HD 124099 | Astronomy | 400 |
55,010,974 | https://en.wikipedia.org/wiki/Amauroderma%20laccatostiptatum | Amauroderma laccatostiptatum is a polypore fungus in the family Ganodermataceae. It was described as a new species in 2015 by mycologists Allyne Christina Gomes-Silva, Leif Ryvarden, and Tatiana Gibertoni. The specific epithet laccatostiptatum (from the Latin words laccatus = "appearing varnished" and stipitatum = "with a stipe") refers to the varnished stipe. A. laccatostiptatum is found in the states of Amazonas, Pará, and Rondônia in the Brazilian Amazon. The fungus fruits on soil.
References
laccatostiptatum
Fungi described in 2015
Fungi of Brazil
Taxa named by Leif Ryvarden
Fungus species | Amauroderma laccatostiptatum | Biology | 161 |
5,096,175 | https://en.wikipedia.org/wiki/Average%20propensity%20to%20save | In Keynesian economics, the average propensity to save (APS), also known as the savings ratio, is the proportion of income which is saved, usually expressed for household savings as a fraction of total household disposable income (taxed income).
The ratio differs considerably over time and between countries. The savings ratio for an entire economy can be affected by (for example) the proportion of older people (as they have less motivation and capability to save), and the rate of inflation (as expectations of rising prices can encourage people to spend now rather than later) or current interest rates.
APS can express the social preference for investing in the future over consuming in the present.
The complement (1 minus the APS) is the average propensity to consume (APC).
A low average propensity to save might be an indicator of a large percentage of old people or a high percentage of irresponsible young people in the population.
When income levels change, APS becomes an inexact tool for measuring the resulting change in saving, so the marginal propensity to save is used in those cases.
Characteristics of APS
Mathematics
APS is calculated as total saving divided by the level of income for which the average propensity to save is being determined:
APS = S / Y
where S is total saving and Y is (disposable) income.
Example 1: If the income level is 90 and total saving at that level is 25, then APS = 25/90 ≈ 0.28.
The average propensity to save cannot be greater than or equal to 1 (so long as some income is consumed), but it can be negative whenever consumption exceeds income, i.e., whenever saving is negative.
Example 2: If income is 100 and consumption is 120, saving is −20 and APS = −20/100 = −0.2.
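A minimal Python sketch of the two worked examples above (function and variable names are illustrative):

```python
def average_propensity_to_save(income, consumption):
    """APS = (income - consumption) / income; undefined when income is zero."""
    if income == 0:
        raise ValueError("APS is undefined for zero income")
    saving = income - consumption
    return saving / income

# Example 1: income 90, saving 25 (so consumption 65) -> APS = 25/90
print(round(average_propensity_to_save(90, 65), 3))    # 0.278

# Example 2: consumption exceeds income, so saving and APS are negative
print(round(average_propensity_to_save(100, 120), 3))  # -0.2
```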
Average propensity to save is decreasing
Because saving is only a fraction of income, and income tends to rise faster than saving, the APS tends to decrease as income increases.
Marginal propensity to save (MPS)
Marginal propensity to save is the proportion of an increase in income devoted to savings.
Mathematically, the marginal propensity to save is the derivative of the saving function S with respect to disposable income Y, i.e., the instantaneous slope of the S–Y curve:
MPS = dS/dY
or, approximately,
MPS = ΔS/ΔY, where ΔS is the change in saving and ΔY is the change in disposable income that produced it.
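A short sketch of the discrete approximation, again with illustrative numbers:

```python
def marginal_propensity_to_save(income_0, saving_0, income_1, saving_1):
    """MPS ~ change in saving / change in disposable income (discrete slope)."""
    return (saving_1 - saving_0) / (income_1 - income_0)

# Income rises from 90 to 120 while saving rises from 25 to 37:
print(marginal_propensity_to_save(90, 25, 120, 37))  # 0.4
```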
See also
Average propensity to consume
Marginal propensity to save
Marginal propensity to consume
Golden Rule savings rate
Kinetic exchange models of markets
References
Gross domestic product
Personal finance
Financial ratios | Average propensity to save | Mathematics | 539 |
10,648,303 | https://en.wikipedia.org/wiki/Chambliss%20Amateur%20Achievement%20Award | The Chambliss Amateur Achievement Award is awarded by the American Astronomical Society for an achievement in astronomical research made by an amateur astronomer resident in North America. The prize is named after Carlson R. Chambliss of Kutztown University, who donated the funds to support the prize. The award will consist of a 224-gram (½-lb) silver medal and $1,000 cash.
Previous winners
Source:
2006 Brian D. Warner
2007 Ronald H. Bissinger
2008 Steve Mandel
2009 Robert D. Stephens
2010 R. Jay GaBany
2011 Tim Puckett
2012 Kian Jek
2013 No award
2014 Mike Simonsen
2015 No award
2016 Daryll LaCourse
2017 No award
2018 Donald G. Bruns
2019 No award
2020 Dennis Conti
See also
List of astronomy awards
List of awards named after people
Amateur Achievement Award of the Astronomical Society of the Pacific
References
External links
AAS Grants and Prizes
Astronomy prizes
American Astronomical Society | Chambliss Amateur Achievement Award | Astronomy,Technology | 192 |
26,751,317 | https://en.wikipedia.org/wiki/Beppo-Levi%20space | In functional analysis, a branch of mathematics, a Beppo Levi space, named after Beppo Levi, is a certain space of generalized functions.
In the following, is the space of distributions, is the space of tempered distributions in , the differentiation operator with a multi-index, and is the Fourier transform of .
The Beppo Levi space is
where denotes the Sobolev semi-norm.
An alternative definition is as follows: let such that
and define:
Then is the Beppo-Levi space.
References
Wendland, Holger (2005), Scattered Data Approximation, Cambridge University Press.
Rémi Arcangéli; María Cruz López de Silanes; Juan José Torrens (2007), "An extension of a bound for functions in Sobolev spaces, with applications to (m,s)-spline interpolation and smoothing" Numerische Mathematik
Rémi Arcangéli; María Cruz López de Silanes; Juan José Torrens (2009), "Estimates for functions in Sobolev spaces defined on unbounded domains" Journal of Approximation Theory
External links
L. Brasco, D. Gómez-Castro, J.L. Vázquez, Characterisation of homogeneous fractional Sobolev spaces https://link.springer.com/content/pdf/10.1007/s00526-021-01934-6.pdf
J. Deny, J.L. Lions, Les espaces du type de Beppo-Levy https://aif.centre-mersenne.org/item/10.5802/aif.55.pdf
R. Adams, J. Fournier, Sobolev Spaces (2003), Academic press -- Theorem 4.31
Functional analysis | Beppo-Levi space | Mathematics | 370 |
74,040,345 | https://en.wikipedia.org/wiki/International%20day%20against%20violence%20and%20bullying%20at%20school%20including%20cyberbullying | The International Day Against Violence and Bullying at School, including Cyberbullying is a UN Educational, Scientific and Cultural Organization (UNESCO) holiday celebrated every year on the first Thursday of November.
This International Day was designated by the member states of UNESCO in 2019 and it was first held in November 2020.
According to UNICEF, one in three young people in 30 countries have been a victim of online bullying (2019 poll) and half of students aged 13 to 15 experience peer violence around school (2018 report).
In 67 countries, corporal punishment is still allowed in schools.
The UNESCO International Day reminds people that violence in schools violates the rights of children and adolescents to education, health and well-being. The aim is to call on the international community, civil society (including parents, pupils and teachers), the tech industry, the education community and the education authorities to take part in the fight against violence and bullying at school.
Celebration
2022
The 2022 edition emphasized the role of teachers in making schools safe places for everyone. An International seminar on the role of teachers in preventing and addressing school violence and bullying called "Not on my watch" was organized at UNESCO headquarters.
2023
The theme was "No place for fear: Ending school violence for better mental health and learning."
See also
School violence
Bullying
Cyberbullying
Bullying in teaching
Anti-bullying legislation
References
United Nations days
School violence
Harassment and bullying
Cyberbullying
UNESCO
November observances | International day against violence and bullying at school including cyberbullying | Biology | 292 |
11,392,625 | https://en.wikipedia.org/wiki/Wildlife%20of%20the%20Comoros | The wildlife of the Comoro Islands is composed of their flora and fauna.
Fauna
Mammals
The mammalian diversity of the Comoros, like most other young volcanic islands, is restricted to marine mammals and bats.
Birds
Flora
The country is home to 72 species of orchids.
References
Biota of the Comoros
Comoros | Wildlife of the Comoros | Biology | 64 |
78,274,408 | https://en.wikipedia.org/wiki/Taragarestrant | Taragarestrant is an orally bioavailable selective estrogen receptor degrader (SERD) developed by Inventis Bio for the treatment of estrogen receptor-positive (ER+) breast cancer. Structurally similar to AZD9496, taragarestrant has demonstrated potent efficacy across multiple breast cancer cell lines expressing ER and related xenograft models. In preclinical studies, taragarestrant exhibited anti-tumor activity, warranting further clinical investigation.
A phase I study (NCT03471663) evaluated taragarestrant in females with ER+/HER2- advanced or metastatic breast cancer, both as monotherapy and in combination with the CDK4/6 inhibitor palbociclib. A phase III clinical trial has been initiated in China in patients with ER+/HER2- advanced or metastatic breast cancer.
References
Antineoplastic drugs
Selective estrogen receptor degraders
Beta-Carbolines
Enoic acids
Chlorobenzene derivatives
Fluoroalkanes
Isobutyl compounds | Taragarestrant | Chemistry | 221 |
1,083,121 | https://en.wikipedia.org/wiki/Enterotoxin | An enterotoxin is a protein exotoxin released by a microorganism that targets the intestines. They can be chromosomally or plasmid encoded. They are heat labile (> 60 °C), of low molecular weight and water-soluble. Enterotoxins are frequently cytotoxic and kill cells by altering the apical membrane permeability of the mucosal (epithelial) cells of the intestinal wall. They are mostly pore-forming toxins (mostly chloride pores), secreted by bacteria, that assemble to form pores in cell membranes. This causes the cells to die.
Clinical significance
Enterotoxins have a particularly marked effect upon the gastrointestinal tract, causing traveler's diarrhea and food poisoning. The action of enterotoxins leads to increased chloride ion permeability of the apical membrane of intestinal mucosal cells. These membrane pores are activated either by increased cAMP or by increased calcium ion concentration intracellularly. The pore formation has a direct effect on the osmolarity of the luminal contents of the intestines. Increased chloride permeability leads to leakage into the lumen followed by sodium and water movement. This leads to a secretory diarrhea within a few hours of ingesting enterotoxin. Several microbial organisms contain the necessary enterotoxin to create such an effect, such as Staphylococcus aureus and E. coli.
The drug linaclotide, used to treat some forms of constipation, is based on the mechanism of enterotoxins.
Classification and 3D structures
Bacterial
Enterotoxins can be formed by the bacterial pathogens Staphylococcus aureus and Bacillus cereus and can cause Staphylococcal Food Poisoning and Bacillus cereus diarrheal disease, respectively. Staphylococcal enterotoxins and streptococcal exotoxins constitute a family of biologically and structurally related pyrogenic superantigens. 25 staphylococcal enterotoxins (SEs), mainly produced by Staphylococcus aureus, have been identified to date and named alphabetically (SEA – SEZ). It has been suggested that staphylococci other than S. aureus can contribute to Staphylococcal Food Poisoning by forming enterotoxins. Streptococcal exotoxins are produced by Streptococcus pyogenes. These toxins share the ability to bind to the major histocompatibility complex proteins of their hosts. A more distant relative of the family is the S. aureus toxic shock syndrome toxin, which shares only a low level of sequence similarity with this group.
All of these toxins share a similar two-domain fold (N and C-terminal domains) with a long alpha-helix in the middle of the molecule, a characteristic beta-barrel known as the "oligosaccharide/oligonucleotide fold" at the N-terminal domain and a beta-grasp motif at the C-terminal domain. An example is staphylococcal enterotoxin B. Each superantigen possesses slightly different binding mode(s) when it interacts with MHC class II molecules or the T-cell receptor.
The beta-grasp domain has some structural similarities to the beta-grasp motif present in immunoglobulin-binding domains, ubiquitin, 2Fe-2S ferredoxin and translation initiation factor 3, as identified by the SCOP database.
Clostridioides difficile
Clostridium perfringens (Clostridium enterotoxin)
Vibrio cholerae (Cholera toxin)
Staphylococcus aureus (Staphylococcal enterotoxin B)
Yersinia enterocolitica
Shigella dysenteriae (Shiga toxin)
Viral
Viruses in the families Reoviridae, Caliciviridae, and Astroviridae are responsible for a large share of gastrointestinal disease worldwide. Rotaviruses (of Reoviridae) have been found to contain an enterotoxin which plays a role in viral pathogenesis. NSP4 is a protein made during the intracellular phase of the virion's life cycle and is known to have a primary function in intracellular virion maturation. However, when NSP4 from group A rotaviruses was purified (4 alleles tested), concentrated, and injected into a mouse model, diarrheal disease mimicking that caused by rotavirus infection commenced. A putative mode of toxicity is that NSP4 activates a signal transduction pathway that ultimately results in an increased cellular concentration of calcium and subsequent chloride secretion from the cell. Secretion of ions from villi lining the gut alters normal osmotic pressures and prevents uptake of water, eventually causing diarrhea.
Rotavirus (NSP4)
See also
Endotoxin
Exotoxin
References
External links
Toxins by organ system affected
Peripheral membrane proteins
Protein families | Enterotoxin | Biology | 1,076 |
25,105,541 | https://en.wikipedia.org/wiki/Index%20mark | Index mark has multiple meanings.
In computing, an index mark or index track is a physical impression made on a hard disk drive. Its purpose is to indicate the starting point for each track on the hard disk drive. Usually, an index mark takes the form of a hole, gap, or magnetic strip. It also allows a hard disk drive head to quickly move to various spots on the drive.
In electronics components, an index mark is a reference symbol printed on or molded into the casing of a device or circuit board, to indicate the location of "Pin 1". This allows the correct orientation of the component in a larger circuit assembly, so that the electrical leads can be correctly connected.
Another kind of index mark is a component of the registration system for road vehicles in the United Kingdom. It consists of a two-letter combination allocated to a local vehicle licensing office. Certain letters are associated with particular parts of the British Isles. Marks that include I or Z are issued either in the Republic of Ireland or Northern Ireland, those that include S in Scotland, and others in England and Wales, though some combinations have not been authorised for use. As an example the combinations AF, CV, GL, and RL were allocated to the Truro licensing office.
References
Computer storage devices | Index mark | Technology | 259 |
59,763,862 | https://en.wikipedia.org/wiki/Microsat-R | Microsat-R was claimed to be an experimental imaging satellite manufactured by DRDO and launched by Indian Space Research Organisation on 24 January 2019 for military use. The satellite served as a target for an anti-satellite test on 27 March, 2019.
Launch
Microsat-R, along with KalamsatV2 as piggy-back, was launched on 24 January 2019 at 23:37 hrs from First Launch Pad of Satish Dhawan Space Centre. The launch marks the 46th flight of PSLV. After 13 minutes 26 seconds in flight, Microsat-R was injected at targeted altitude of about 277.2 km. This was the first flight of a new variant of PSLV called PSLV-DL with two strap-ons, each carrying 12.2-tonne of solid propellant.
Anti-satellite test
Microsat-R served as the target for the Indian ASAT experiment on 27 March 2019. The impact generated more than 400 pieces of orbital debris, with 24 having an apogee higher than the ISS orbit. According to an initial assessment by DRDO, some of the debris (depending on size and trajectory) was expected to re-enter within 45 days. A spokesperson from NASA disagreed, saying the debris could last for years because the solar minimum had contracted the atmosphere that would otherwise cause the debris to re-enter. Analysis from AGI, a leading space trajectory and environment simulation company, came to the same conclusion: some debris fragments would take more than a year to come down, and other fragments might pose a risk to other satellites and the ISS. These results were also presented at the 35th Space Symposium in Colorado Springs.
As of March 2022, only one catalogued piece of debris from Microsat-R remained in orbit: COSPAR 2019-006DE, SATCAT 44383. This final piece decayed from orbit 14 June 2022.
See also
Microsat (ISRO)
Kosmos 149
Kosmos 320
SLATS
References
External links
Earth observation satellites of India
Spacecraft launched by India in 2019
Spacecraft launched by PSLV rockets
January 2019 events in India
Intentionally destroyed artificial satellites
Military equipment introduced in the 2010s
Satellite collisions | Microsat-R | Technology | 431 |
46,864,355 | https://en.wikipedia.org/wiki/RD-0110R | The RD-0110R (, GRAU index: 14D24) is a rocket engine burning kerosene in liquid oxygen in a gas generator combustion cycle. It has four nozzles that can gimbal up to 45 degrees in a single axis and is used as the vernier thruster on the Soyuz-2.1v first stage. It also has heat exchangers that heat oxygen and helium to pressurize the LOX and RG-1 tanks of the Soyuz-2.1v first stage, respectively. The oxygen is supplied from the same LOX tank in liquid form, while the helium is supplied from separate high pressure bottles (known as the T tank).
The engine's development started in 2010 and it is a heavily modified version of the RD-0110. The main areas of work were shortening the nozzles to optimize them for the atmospheric part of the flight (the RD-0110 is a vacuum optimized engine), propellant piping, heat exchangers and the gimballing system, which was developed by RKTs Progress. The RD-0110R engine is produced at the Voronezh Mechanical Plant.
See also
Soyuz-2-1v - The first rocket to use the RD-0110R
KBKhA - The RD-0110R designer bureau
RSC Progress - The designer of the Soyuz-2.1v and the RD-0110R nozzle gimbal
Voronezh Mechanical Plant - A space hardware manufacturer company that manufactures the RD-0110R
References
External links
https://kbkha.ru/deyatel-nost/raketnye-dvigateli-ao-kbha/rd0110r/
https://web.archive.org/web/20211208214531/http://www.vmzvrn.ru/produktsiya-i-uslugi/zhrd/rd-0110r/
http://russianspaceweb.com/rd0110r.html
Rocket engines of Russia
Rocket engines of the Soviet Union
Rocket engines using kerosene propellant
Rocket engines using the gas-generator cycle
KBKhA rocket engines | RD-0110R | Astronomy | 463 |
19,880,157 | https://en.wikipedia.org/wiki/Diaminopimelic%20acid | Diaminopimelic acid (DAP) is an amino acid, representing an epsilon-carboxy derivative of lysine. meso-α,ε-Diaminopimelic acid is the last intermediate in the biosynthesis of lysine and undergoes decarboxylation by diaminopimelate decarboxylase to give the final product.
DAP is characteristic of the cell walls of some bacteria. DAP is often found in the peptide linkages of NAM-NAG chains that make up the cell wall of gram-negative bacteria. When DAP is provided, such bacteria exhibit normal growth; when it is deficient, they still grow, but are unable to make new cell-wall peptidoglycan.
This is also the attachment point for Braun's lipoprotein.
See also
Aspartate-semialdehyde dehydrogenase, an enzyme involved in DAP synthesis
Peptidoglycan
Pimelic acid
Images
References
Alpha-Amino acids
Dicarboxylic acids
Non-proteinogenic amino acids | Diaminopimelic acid | Chemistry | 216 |
15,491 | https://en.wikipedia.org/wiki/Integer%20factorization | In mathematics, integer factorization is the decomposition of a positive integer into a product of integers. Every positive integer greater than 1 is either the product of two or more integer factors greater than 1, in which case it is a composite number, or it is not, in which case it is a prime number. For example, is a composite number because , but is a prime number because it cannot be decomposed in this way. If one of the factors is composite, it can in turn be written as a product of smaller factors, for example . Continuing this process until every factor is prime is called prime factorization; the result is always unique up to the order of the factors by the prime factorization theorem.
To factorize a small integer using mental or pen-and-paper arithmetic, the simplest method is trial division: checking if the number is divisible by the prime numbers 2, 3, 5, and so on, up to the square root of the number. For larger numbers, especially when using a computer, various more sophisticated factorization algorithms are more efficient. A prime factorization algorithm typically involves testing whether each factor is prime each time a factor is found.
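A minimal Python sketch of trial division as just described (fine for small numbers, far too slow for cryptographic sizes):

```python
def trial_division(n):
    """Return the prime factorization of n as a list, smallest factors first."""
    factors = []
    d = 2
    while d * d <= n:          # divisors need only be tested up to sqrt(n)
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                  # whatever remains is itself prime
        factors.append(n)
    return factors

print(trial_division(60))      # [2, 2, 3, 5]
```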
When the numbers are sufficiently large, no efficient non-quantum integer factorization algorithm is known. However, it has not been proven that such an algorithm does not exist. The presumed difficulty of this problem is important for the algorithms used in cryptography such as RSA public-key encryption and the RSA digital signature. Many areas of mathematics and computer science have been brought to bear on this problem, including elliptic curves, algebraic number theory, and quantum computing.
Not all numbers of a given length are equally hard to factor. The hardest instances of these problems (for currently known techniques) are semiprimes, the product of two prime numbers. When they are both large, for instance more than two thousand bits long, randomly chosen, and about the same size (but not too close, for example, to avoid efficient factorization by Fermat's factorization method), even the fastest prime factorization algorithms on the fastest classical computers can take enough time to make the search impractical; that is, as the number of digits of the integer being factored increases, the number of operations required to perform the factorization on any classical computer increases drastically.
Many cryptographic protocols are based on the presumed difficulty of factoring large composite integers or a related problem—for example, the RSA problem. An algorithm that efficiently factors an arbitrary integer would render RSA-based public-key cryptography insecure.
Prime decomposition
By the fundamental theorem of arithmetic, every positive integer has a unique prime factorization. (By convention, 1 is the empty product.) Testing whether the integer is prime can be done in polynomial time, for example, by the AKS primality test. If composite, however, the polynomial time tests give no insight into how to obtain the factors.
Given a general algorithm for integer factorization, any integer can be factored into its constituent prime factors by repeated application of this algorithm. The situation is more complicated with special-purpose factorization algorithms, whose benefits may not be realized as well or even at all with the factors produced during decomposition. For example, if N = 171 × p × q where p < q are very large primes, trial division will quickly produce the factors 3 and 19 but will take p divisions to find the next factor. As a contrasting example, if N is the product of the primes 13729, 1372933, and 18848997161, where 13729 × 1372933 = 18848997157, Fermat's factorization method will begin with a = ⌈√N⌉ = 18848997159, which immediately yields b = √(a² − N) = √4 = 2 and hence the factors a − b = 18848997157 and a + b = 18848997161. While these are easily recognized as composite and prime respectively, Fermat's method will take much longer to factor the composite number because the starting value of ⌈√18848997157⌉ = 137292 for a is a factor of 10 from 1372933.
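For comparison, a compact sketch of Fermat's method mentioned above: it searches for a representation n = a² − b², which factors as (a − b)(a + b), starting from a = ⌈√n⌉:

```python
from math import isqrt

def fermat_factor(n):
    """Factor an odd composite n by finding n = a^2 - b^2 = (a - b)(a + b)."""
    a = isqrt(n)
    if a * a < n:
        a += 1                  # start from ceil(sqrt(n))
    while True:
        b_squared = a * a - n
        b = isqrt(b_squared)
        if b * b == b_squared:  # a^2 - n is a perfect square
            return a - b, a + b
        a += 1

print(fermat_factor(5959))      # (59, 101)
```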
Current state of the art
Among the -bit numbers, the most difficult to factor in practice using existing algorithms are those semiprimes whose factors are of similar size. For this reason, these are the integers used in cryptographic applications.
In 2019, a 240-digit (795-bit) number (RSA-240) was factored by a team of researchers including Paul Zimmermann, utilizing approximately 900 core-years of computing power. These researchers estimated that a 1024-bit RSA modulus would take about 500 times as long.
The largest such semiprime yet factored was RSA-250, an 829-bit number with 250 decimal digits, in February 2020. The total computation time was roughly 2700 core-years of computing using Intel Xeon Gold 6130 at 2.1 GHz. Like all recent factorization records, this factorization was completed with a highly optimized implementation of the general number field sieve run on hundreds of machines.
Time complexity
No algorithm has been published that can factor all integers in polynomial time, that is, that can factor a -bit number in time for some constant . Neither the existence nor non-existence of such algorithms has been proved, but it is generally suspected that they do not exist.
There are published algorithms that are faster than O((1 + ε)^b) for all positive ε, that is, sub-exponential. The algorithm with the best theoretical asymptotic running time is the general number field sieve (GNFS), first published in 1993, running on a b-bit number n in time
exp(((64/9)^(1/3) + o(1)) (ln n)^(1/3) (ln ln n)^(2/3)).
For current computers, GNFS is the best published algorithm for large n (more than about 400 bits). For a quantum computer, however, Peter Shor discovered an algorithm in 1994 that solves it in polynomial time. Shor's algorithm takes only O(b³) time and O(b) space on b-bit number inputs. In 2001, Shor's algorithm was implemented for the first time, by using NMR techniques on molecules that provide seven qubits.
In order to talk about complexity classes such as P, NP, and co-NP, the problem has to be stated as a decision problem, for example: given natural numbers n and k, does n have a factor d with 1 < d ≤ k?
It is known to be in both NP and co-NP, meaning that both "yes" and "no" answers can be verified in polynomial time. An answer of "yes" can be certified by exhibiting a factorization n = d(n/d) with d ≤ k. An answer of "no" can be certified by exhibiting the factorization of n into distinct primes, all larger than k; one can verify their primality using the AKS primality test, and then multiply them to obtain n. The fundamental theorem of arithmetic guarantees that there is only one possible string of increasing primes that will be accepted, which shows that the problem is in both UP and co-UP. It is known to be in BQP because of Shor's algorithm.
The problem is suspected to be outside all three of the complexity classes P, NP-complete, and co-NP-complete.
It is therefore a candidate for the NP-intermediate complexity class.
In contrast, the decision problem "Is a composite number?" (or equivalently: "Is a prime number?") appears to be much easier than the problem of specifying factors of . The composite/prime problem can be solved in polynomial time (in the number of digits of ) with the AKS primality test. In addition, there are several probabilistic algorithms that can test primality very quickly in practice if one is willing to accept a vanishingly small possibility of error. The ease of primality testing is a crucial part of the RSA algorithm, as it is necessary to find large prime numbers to start with.
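One widely used probabilistic test of the kind mentioned above is Miller–Rabin; a compact sketch (the number of rounds and the small-prime screen are illustrative choices):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin test: False means composite; True means probably prime."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^r * d with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False        # a witnesses that n is composite
    return True

print(is_probable_prime(2**61 - 1))   # True (a Mersenne prime)
```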
Factoring algorithms
Special-purpose
A special-purpose factoring algorithm's running time depends on the properties of the number to be factored or on one of its unknown factors: size, special form, etc. The parameters which determine the running time vary among algorithms.
An important subclass of special-purpose factoring algorithms is the Category 1 or First Category algorithms, whose running time depends on the size of the smallest prime factor. Given an integer of unknown form, these methods are usually applied before general-purpose methods to remove small factors. For example, naive trial division is a Category 1 algorithm.
Trial division
Wheel factorization
Pollard's rho algorithm, which has two common flavors to identify group cycles: one by Floyd and one by Brent (a short sketch using Floyd's variant follows this list)
Algebraic-group factorization algorithms, among which are Pollard's algorithm, Williams' algorithm, and Lenstra elliptic curve factorization
Fermat's factorization method
Euler's factorization method
Special number field sieve
Difference of two squares
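As a sketch of the Pollard's rho entry above (using Floyd's tortoise-and-hare cycle detection; the polynomial x² + c is the usual illustrative choice), returning one nontrivial factor of a composite input:

```python
from math import gcd
import random

def pollard_rho(n):
    """Return a nontrivial factor of the composite number n."""
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)
        f = lambda x: (x * x + c) % n
        x = y = random.randrange(2, n)
        d = 1
        while d == 1:
            x = f(x)            # tortoise: one step
            y = f(f(y))         # hare: two steps
            d = gcd(abs(x - y), n)
        if d != n:              # otherwise retry with a different constant c
            return d

print(pollard_rho(8051))        # 97 or 83
```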
General-purpose
A general-purpose factoring algorithm, also known as a Category 2, Second Category, or Kraitchik family algorithm, has a running time which depends solely on the size of the integer to be factored. This is the type of algorithm used to factor RSA numbers. Most general-purpose factoring algorithms are based on the congruence of squares method.
Dixon's factorization method
Continued fraction factorization (CFRAC)
Quadratic sieve
Rational sieve
General number field sieve
Shanks's square forms factorization (SQUFOF)
Other notable algorithms
Shor's algorithm, for quantum computers
Heuristic running time
In number theory, there are many integer factoring algorithms that heuristically have expected running time
L_n[1/2, 1 + o(1)] = e^((1 + o(1)) √(ln n · ln ln n))
in little-o and L-notation.
Some examples of those algorithms are the elliptic curve method and the quadratic sieve.
Another such algorithm is the class group relations method proposed by Schnorr, Seysen, and Lenstra, which they proved only assuming the unproved generalized Riemann hypothesis.
Rigorous running time
The Schnorr–Seysen–Lenstra probabilistic algorithm has been rigorously proven by Lenstra and Pomerance to have expected running time by replacing the GRH assumption with the use of multipliers.
The algorithm uses the class group of positive binary quadratic forms of discriminant Δ, denoted by G_Δ.
G_Δ is the set of triples of integers (a, b, c) in which those integers are relatively prime.
Schnorr–Seysen–Lenstra algorithm
Given an integer n that will be factored, where n is an odd positive integer greater than a certain constant. In this factoring algorithm the discriminant Δ is chosen as a multiple of n, Δ = −dn, where d is some positive multiplier. The algorithm expects that for one d there exist enough smooth forms in G_Δ. Lenstra and Pomerance show that the choice of d can be restricted to a small set to guarantee the smoothness result.
Denote by the set of all primes with Kronecker symbol . By constructing a set of generators of and prime forms of with in a sequence of relations between the set of generators and are produced.
The size of can be bounded by for some constant .
The relation that will be used is a relation between the product of powers that is equal to the neutral element of . These relations will be used to construct a so-called ambiguous form of , which is an element of of order dividing 2. By calculating the corresponding factorization of and by taking a gcd, this ambiguous form provides the complete prime factorization of . This algorithm has these main steps:
Let n be the number to be factored.
To obtain an algorithm for factoring any positive integer, it is necessary to add a few steps to this algorithm such as trial division, and the Jacobi sum test.
Expected running time
The algorithm as stated is a probabilistic algorithm as it makes random choices. Its expected running time is at most L_n[1/2, 1 + o(1)].
See also
Aurifeuillean factorization
Bach's algorithm for generating random numbers with their factorizations
Canonical representation of a positive integer
Factorization
Multiplicative partition
p-adic valuation
Integer partition – a way of writing a number as a sum of positive integers.
Notes
References
Chapter 5: Exponential Factoring Algorithms, pp. 191–226. Chapter 6: Subexponential Factoring Algorithms, pp. 227–284. Section 7.4: Elliptic curve method, pp. 301–313.
Donald Knuth. The Art of Computer Programming, Volume 2: Seminumerical Algorithms, Third Edition. Addison-Wesley, 1997. . Section 4.5.4: Factoring into Primes, pp. 379–417.
.
External links
msieve – SIQS and NFS – has helped complete some of the largest public factorizations known
Richard P. Brent, "Recent Progress and Prospects for Integer Factorisation Algorithms", Computing and Combinatorics", 2000, pp. 3–22. download
Manindra Agrawal, Neeraj Kayal, Nitin Saxena, "PRIMES is in P." Annals of Mathematics 160(2): 781–793 (2004). August 2005 version PDF
Eric W. Weisstein, “RSA-640 Factored” MathWorld Headline News, November 8, 2005
Dario Alpern's Integer factorization calculator – A web app for factoring large integers
Computational hardness assumptions
Unsolved problems in computer science
Factorization | Integer factorization | Mathematics | 2,622 |
20,670,767 | https://en.wikipedia.org/wiki/Enlaces | Enlaces is a Chilean educational program designed to create a structural change in Chilean education in order to prepare youth, along with their parents and guardians, to participate in the emergent society of knowledge, and to create networks of communication that help integrate them with the world. Enlaces was started in 1992 by Chile's Ministry of Education.
Test phase
The testing phase of this program began in 12 schools in Santiago, the capital of Chile. After 3 years the program had extended to the Ninth Region, the Araucanía Region, (the part of the country with the largest indigenous population) with more than 100 schools participating in the program. When the government decided that this project was viable, it focused all of its efforts on the spread of this program to all the regions of the country.
First phase
From 1995 to 2007 (the unofficial first phase of the Enlaces project), the government spent more than $121 million, supplemented by international cooperation from organizations such as the Canadian International Development Agency (CIDA), on global education, infrastructure, computers, access to educational resources, technical assistance, and teacher training. In 2003, in order to continue their efforts to connect the country, the Ministry of Education signed an agreement with local telecommunication companies to offer low-cost broadband connections for educational institutions. Sixty percent of the students in Chile have access to broadband internet because of this agreement. In 2007, the Enlaces Program had 3,372,943 students, and 95% of the students in the country had access to a computer.
Future plans
In 2007, the Chilean Ministry of Education presented ExpoEnlaces 2007, where they showcased the latest innovations in Information and Communication Technology (ICT) so that people can understand the abilities of these new technologies and their uses in the classroom. That same year, the Ministry of Education published its new objectives and projects for the coming years. Didier de Saint Pierre, the executive director of Enlaces, divided the program into two phases: the first phase, which is coming to a close, focused on infrastructure in the schools and on equipping them to install the necessary programs and administration. The second phase involves developing the technological infrastructure so that it has the capacity to improve the quality of learning in Chile. Once the basic technological infrastructure is in place, the Ministry will extend the technology to the remaining schools in less developed regions, and it will implement new technologies in the more advanced regions.
Examples of the new technologies that Enlaces will implement are electronic whiteboards for math, palm pilots or pocket PCs for physics, and projectors for teaching sciences. Other new strategies will include ICT in the classroom, where they will integrate portable technology and projectors in 3,000 classrooms, benefiting 87,000 students (3% of the total number of students). The goal is to implement this program in 16,000 classrooms by 2010.
The science program is a good example of the direction in which the Enlaces program is headed. A very important aspect of this program is teaching style. Enlaces encourages the use of the HEI (hypothesis, experimentation, instruction) method. The new science classes are being developed in a multimedia format using simulations delivered via the internet.
This new program offers students many benefits. The classes that follow this methodology better prepare students for lessons and projects. When presented with an experiment, students can form a hypothesis, the teacher can demonstrate every aspect of the experiment using a computer, a projection system, and the internet, and the students can follow the other steps of the scientific method. While the use of simulations is not exactly the same as doing the physical experiment, simulation eliminates the problem of a lack of equipment and materials (such as chemicals for chemistry, and animals for dissection in biology) that many schools in Chile face.
Adult Education
Part of the objective of the Enlaces program is to help the community through adult education. The idea was that children would bring the technology home with them, but this has not worked the way the Ministry had hoped. The majority of parents (especially those in rural areas) recognize the benefits of technology for their children, but feel that they cannot benefit personally. For adults who do want to learn more about technology, the Enlaces program offers an 18-hour digital literacy class where they learn the basic functions of a computer, how to use a word processor, how to use the internet, and how to create and use an email account.
See also
Education in Chile
References
Education in Chile
Computer science education | Enlaces | Technology | 908 |
70,353,036 | https://en.wikipedia.org/wiki/James%20Duncan%20Hague | James Duncan Hague (18361908) was an American mining engineer, mineralogist, and geologist.
Early years
Hague was born in Boston, Massachusetts, to the Rev. William Hague and Mary Bowditch Moriarty. He attended school in Boston and Newark, New Jersey, before enrolling at the Lawrence Scientific School at Harvard University in 1854. The following year, he headed to the Georg-August University of Göttingen in Germany to study chemistry and mineralogy for a year before studying mining engineering at the Royal Saxon Mining Academy in Freiberg for two years.
Career
After returning to New York, Hague was selected by financier William H. Webb to explore several equatorial coral islands in the Pacific Ocean. Webb was involved in the guano business, and Hague examined and documented phosphate deposits on Baker, Howland, and Jarvis Islands.
During the American Civil War, he spent 1862 and 1863 in Port Royal, South Carolina, as a judge advocate for the U.S. Navy, handling negotiations involving the Atlantic Blockading Squadron. He then worked for Edwin J. Hulbert, developing the Calumet and Hecla copper mines in Michigan, before joining Clarence King in 1867 as an assistant geologist on the Geological Exploration of the Fortieth Parallel.
In 1871, he headed to California to work as a consulting expert on mining engineering, working with both private and governmental clients throughout the western U.S. and Mexico. In 1878, he was a member of the U.S. delegation to the Exposition Universelle in Paris, writing a report on the mining companies and innovations on display.
In 1887, Hague acquired the North Star Mining Co. on Lafayette Hill near Grass Valley, California, which he had helped develop during his time in the state. He reorganized the company in 1889 and acquired several other mines, including Gold Hill. Working with his brother-in-law, Arthur De Wint Foote, he grew North Star's operations, eventually deciding to commission North Star House as an event space for the company and home for the company's supervisor, Foote.
Affiliations
Hague was a fellow of the American Association for the Advancement of Science and in 1904 he became a member of the American Academy of Arts and Sciences. From 1906, Hague was vice president of the American Institute of Mining Engineers. In 1887, he became a fellow of the American Geographical Society before becoming a councillor in 1907 and the society's vice president a year later.
Personal life
In April 1872, Hague married Mary Ward Foote (1846–1898). They had three children: Marian (1873–1971), Eleanor (1875–1954), and William (1882–1918). He died August 3, 1908, at his summer home in Stockbridge, Massachusetts, and was buried in Albany Rural Cemetery in Colonie, New York.
References
1908 deaths
1836 births
Fellows of the American Academy of Arts and Sciences
Fellows of the American Association for the Advancement of Science
Fellows of the American Institute of Mining, Metallurgical, and Petroleum Engineers
American Geographical Society
19th-century American geologists
Scientists from Boston
American mining engineers
American mineralogists
Mining engineers | James Duncan Hague | Engineering | 634 |
13,766,136 | https://en.wikipedia.org/wiki/Water%20testing | Water testing is a broad description for various procedures used to analyze water quality. Millions of water quality tests are carried out daily to fulfill regulatory requirements and to maintain safety.
Testing may be performed to evaluate:
ambient or environmental water quality – the ability of a surface water body to support aquatic life as an ecosystem. See Environmental monitoring, Freshwater environmental quality parameters and Bioindicator.
wastewater – characteristics of polluted water (domestic sewage or industrial waste) before treatment or after treatment. See Environmental chemistry and Wastewater quality indicators.
"raw water" quality – characteristics of a water source prior to treatment for domestic consumption (drinking water). See Bacteriological water analysis and specific tests such as turbidity and hard water.
"finished" water quality – water treated at a municipal water purification plant. See Bacteriological water analysis and :Category:Water quality indicators.
suitability of water for industrial uses such as laboratory, manufacturing or equipment cooling. See purified water.
Government regulation
Government regulations related to water testing and water quality for some major countries is given below.
China
Ministry of Environmental Protection
The Ministry of Environmental Protection of the People's Republic of China is the nation's environmental protection department charged with the task of protecting China's air, water, and land from pollution and contamination. Directly under the State Council, it is empowered and required by law to implement environmental policies and enforce environmental laws and regulations. Complementing its regulatory role, it funds and organizes research and development. See Ministry of Environmental Protection of the People's Republic of China.
Regulatory challenges and debates
In late 2009, a survey was carried out by China Ministry of Housing and Urban-Rural Development to assess the water quality of urban supplies in China's cities, which revealed that "at least 1,000" water treatment plants out of more than 4,000 plants surveyed at the county level and above failed to comply with government requirements. The survey results were never formally released to the public, but in 2012, China's Century Weekly published the leaked survey data. In response, Wang Xuening, a health ministry official, released figures derived from a pilot monitoring scheme in 2011 and suggested that 80% of China's urban tap water was up to standard.
China's new drinking water standards involve 106 indicators. Of China's 35 major cities, only 40% of cities have the capacity to test for all 106 indicators. The department in charge of local water and the health administration department will enter into a discussion to determine results for more than 60 of the new measures; hence it is not required to test the water using every indicator. The grading of water quality is based on an overall average of 95% to fulfill government requirements. The frequency of water quality inspections at water treatment plants is twice yearly.
Pakistan
Pakistan Council of Research in Water Resources
Established in 1964, the Pakistan Council of Research in Water Resources aims to conduct, organize, coordinate and promote research in all aspects of water resources. As a national research organization, it undertakes and promotes applied and basic research in different disciplines of water sector.
Recent developments
In March 2013, Minister for Science and Technology Mir Changez Khan Jamali notified the National Assembly that groundwater samples collected revealed that only 15-18% samples were deemed safe for drinking both in urban and rural areas in Pakistan. The Ministry has created 24 Water Quality Testing Laboratories across Pakistan, developed and commercialized water quality test kits, water filters, water disinfection tablets and drinking water treatment sachets, conducted training for 2,660 professionals of water supply agencies and surveyed 10,000 water supply schemes out of a grand total of 12,000 schemes.
United Kingdom
Drinking Water Inspectorate
The Drinking Water Inspectorate is a section of Department for Environment, Food and Rural Affairs set up to regulate the public water supply companies in England and Wales. Water testing in England and Wales can be conducted at the environmental health office at the local authority. See Drinking Water Inspectorate.
United States
Department of Homeland Security
The U.S. Department of Homeland Security is a cabinet department of the United States federal government, created in response to the September 11 attacks, and with the primary responsibilities of protecting the United States of America and U.S. territories (including protectorates) from and responding to terrorist attacks, man-made accidents, and natural disasters. See United States Department of Homeland Security.
The Homeland Security Presidential Directive 7 designates the Environmental Protection Agency as the sector-specific agency for the water sector's critical infrastructure protection activities. All Environmental Protection Agency activities related to water security are carried out in consultation with the Department of Homeland Security. Possible threats to water quality include contamination with deadly agents, such as cyanide, and physical attacks like the release of toxic gaseous chemicals.
Environmental Protection Agency
The principal U.S. federal laws governing water testing are the Safe Drinking Water Act (SDWA) and the Clean Water Act. The U.S. Environmental Protection Agency (EPA) issues regulations under each law specifying analytical test methods. EPA's annual Regulatory Agenda sets a schedule for specific objectives on improving its oversight of water testing.
Drinking water analysis
Under the Safe Drinking Water Act, public water systems are required to regularly monitor their treated water for contaminants. Water samples must be analyzed using EPA-approved testing methods, by laboratories that are certified by EPA or a state agency.
The 2013 revised total coliform rule and the 1989 total coliform rule are the only microbial drinking water regulations that apply to all public water systems. The revised rule highlights the frequency and timing of microbial testing by water systems based on population served, system type, and source water type. It also places a legal limit on the level for Escherichia coli. Potential health threats must be disclosed to EPA or the appropriate state agency, and public notification is required in some circumstances.
Methods for measuring acute toxicity usually take between 24 and 96 hours to identify contaminants in water supplies.
Wastewater analysis
All facilities in the United States that discharge wastewater to surface waters (e.g. rivers, lakes or coastal waters) must obtain a permit under the National Pollutant Discharge Elimination System, a Clean Water Act program administered by EPA and state agencies. The facilities covered include sewage treatment plants, industrial and commercial plants, military bases and other facilities. Most permittees are required to regularly collect wastewater samples and analyze them for compliance with permit requirements, and report the results either to EPA or the state agency.
Private wells
Private wells are not regulated by the federal government. In general, private well owners are responsible for testing their wells. Some state or local governments regulate well construction and may require well testing. Generally, well testing required by local governments is limited to a handful of contaminants, including coliform and E. coli bacteria and perhaps a few predominant local contaminants such as nitrates or arsenic. EPA publishes test methods for contaminants that it regulates under the SDWA.
Publication of test methods
Peer-reviewed test methods have been published by government agencies, private research organizations and international standards organizations for ambient water, wastewater and drinking water. Approved published methods must be used when testing to demonstrate compliance with regulatory requirements.
Regulatory challenges and debates
Hydraulic fracturing
The Energy Policy Act of 2005 created a loophole that exempts companies drilling for natural gas from disclosing the chemicals involved in fracturing operations that would normally be required under federal clean water laws. The loophole is commonly known as the "Halliburton loophole" because Dick Cheney, the former chief executive officer of Halliburton, was reportedly instrumental in its passage. Although the Safe Drinking Water Act excludes hydraulic fracturing from the Underground Injection Control regulations, the use of diesel fuel during hydraulic fracturing is still regulated. State oil and gas agencies may issue additional regulations for hydraulic fracturing. States or EPA have the authority under the Clean Water Act to regulate discharge of produced waters from hydraulic fracturing operations.
In December 2011, federal environment officials scientifically linked underground water pollution with hydraulic fracturing for the first time in central Wyoming. EPA stated that the water supply contained at least 10 compounds known to be used in fracking fluids. The findings in the report contradicted arguments by the drilling industry on the safety of the fracturing process, such as the hydrologic pressure that naturally forces fluids downwards instead of upwards. EPA also commented that the pollution from 33 abandoned oil and gas waste pits were responsible for some degree of minor groundwater pollution in the vicinity.
In January 2013, the Alaska Oil and Gas Conservation Commission, which is responsible for overseeing oil and gas production in Alaska, proposed new rules for regulating hydraulic fracturing in the state, which contains over two billion barrels of shale oil (second only to the Bakken) and over 80 trillion cubic feet of natural gas. Companies will be required to conduct water testing at least 90 days prior to and up to 120 days after hydraulically fracturing a well, which includes analysis of pH, alkalinity, total dissolved solids, and total petroleum hydrocarbons. The proposed rules necessitate disclosure of the identity and volume of chemicals used in fracturing fluid. See Alaska Oil and Gas Conservation Commission.
In February 2013, the state of Illinois introduced the Illinois Hydraulic Fracturing Regulatory Act, H.B. 2615, which imposes strict controls on fracturing companies, such as chemical disclosure requirements and water testing requirements. The bill includes baseline and periodic post-frack testing of potentially affected waters, such as surface water and groundwater sources near fracturing wells, to identify contamination associated with hydraulic fracturing. Fracturing wells will be closed if fracturing fluid is released outside of the shale rock formation being fractured.
Pharmaceuticals and personal care products
Detectable levels of pharmaceuticals and personal care products, in the parts per trillion, are found in many public drinking water systems in the US, as many water treatment plants lack the technology to remove these chemical compounds from raw water. There are now increasing worries about how these compounds degrade and react in the environment, during the treatment process, and inside our bodies, and about long-term exposure to multiple contaminants at low levels. Out of over 80,000 chemicals registered with the EPA, the US federal drinking water rules mandate testing for only 83 chemicals, which calls for increased monitoring of pharmaceuticals to establish the presence and concentrations of chemical compounds in rivers, streams, and treated tap water. As traditional waste water regulations and treatment systems target microorganisms and nutrients, there are no federal standards for pharmaceuticals in drinking water or waste water.
Recent developments
In May 2012, the Environmental Protection Agency released a new list of contaminants, known as the unregulated contaminant monitoring regulation 3 (UCMR3), that will be part of municipal water systems testing starting this year and continuing through 2015. The UCMR3 testing will help municipal water system operators measure the occurrence and exposure of contamination levels that may endanger human health. The State Hygienic Laboratory at the University of Iowa is the only state environmental public health laboratory that has been certified and approved to test for all 28 chemical contaminants on the new list.
In March 2013, the Environmental Protection Agency developed a new rapid water quality test that provides accurate same day results of contamination levels, which marks a significant improvement from current tests that require at least 24 hours to obtain results. The new test will help authorities determine whether beaches are safe for swimming to keep the public from falling sick and could help prevent beaches from being closed.
International organizations
The International Maritime Organization, known as the Inter-Governmental Maritime Consultative Organization until 1982, was established in Geneva in 1948, and came into force ten years later, meeting for the first time in 1959. See International Maritime Organization.
The International Maritime Organization has been at the forefront of the international community by taking the lead in addressing the transfer of aquatic invasive species through shipping. On 13 February 2004, the International Convention for the Control and Management of Ships' Ballast Water and Sediments was adopted by consensus at a diplomatic conference held at the International Maritime Organization headquarters in London. According to the convention, all ships are required to implement a ballast water and sediments management plan. All ships will have to carry a Ballast Water Record Book and will be required to carry out ballast water management procedures to a given standard. Parties to the convention are given the option to take additional measures which are subject to criteria set out in the Convention and to International Maritime Organization guidelines. Ballast water management is subjected to the ballast water exchange standard and the ballast water performance standard. Ships performing ballast water exchange shall do so with an efficiency of 95 per cent volumetric exchange of ballast water and ships using a ballast water management system (BWMS) shall meet a performance standard based on agreed numbers of organisms per unit of volume. The convention will enter into force 12 months after ratification by 30 States, representing 35 per cent of world merchant shipping tonnage. See Ballast water discharge and the environment.
Water test initiatives
EarthEcho Water Challenge
The EarthEcho Water Challenge is an international education and outreach program that generates public awareness and involvement in safeguarding water resources globally by engaging citizens to conduct water testing of local water bodies. Participants learn how to conduct simple water quality tests and analyze common indicators of water health, specifically dissolved oxygen, pH, temperature, and turbidity. The program was originally called "World Water Monitoring Day" and later "World Water Monitoring Challenge", and was established in 2003. EarthEcho International encourages participants to conduct their monitoring activities as part of the "EarthEcho Water Challenge" during any period between March 22 (World Water Day) and December of each year.
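The four indicators named above lend themselves to a simple in-or-out-of-range summary. The following Python sketch is purely illustrative and is not the EarthEcho Water Challenge's official scoring method; the sample values and threshold ranges are assumptions chosen for exposition (the pH band reflects the commonly cited 6.5–8.5 guideline, and the dissolved-oxygen floor reflects the often-quoted 5 mg/L level for supporting fish).

```python
# Illustrative only: summarise the four indicators against assumed guideline ranges.

SAMPLE = {
    "dissolved_oxygen_mg_per_L": 7.2,
    "pH": 7.8,
    "temperature_C": 18.0,
    "turbidity_JTU": 12.0,
}

# Hypothetical "healthy" ranges for a temperate stream; real assessments depend
# on the local water body and the programme's own guidance.
GUIDELINES = {
    "dissolved_oxygen_mg_per_L": (5.0, None),   # at least ~5 mg/L to support fish
    "pH": (6.5, 8.5),                           # commonly cited guideline band
    "temperature_C": (None, 25.0),              # cooler water holds more oxygen
    "turbidity_JTU": (None, 40.0),              # lower is clearer
}

def check(sample, guidelines):
    """Return a dict mapping each indicator to True if it falls within range."""
    results = {}
    for key, (low, high) in guidelines.items():
        value = sample[key]
        ok = (low is None or value >= low) and (high is None or value <= high)
        results[key] = ok
    return results

if __name__ == "__main__":
    for indicator, ok in check(SAMPLE, GUIDELINES).items():
        print(f"{indicator}: {'within' if ok else 'outside'} illustrative range")
```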
Water test market
Market size and structure
As of 2009, the global water test market, which includes in-house, small commercial and large laboratory groups, is approximately US$3.6 billion. The global market for low-end test equipment is roughly $300–400 million. The global market for in-line monitors is approximately $100–130 million.
Product offering
Key products include analytical systems, instrumentation, and reagents for water quality and safety analysis. Reagents are chemical testing compounds that identify presence of chlorine, pH, alkalinity, turbidity and other metrics.
The equipment market comprises low-end, onsite field testing equipment, in-line monitors, and high-end testing laboratory instruments. High-end laboratory equipment includes mass spectrometry devices that conduct organic analysis using gas chromatography and liquid chromatography, or metals analysis using inductively coupled plasma.
New developments
Several trends to monitor include digital plug-and-play sensor techniques and luminescent dissolved oxygen meters replacing traditional electrochemical sensors.
"Razor and Razor-blade" business model
The water test market is approximately two-thirds equipment and one-third consumables. Reagents are used with each test and generate recurring revenue for companies. Aftermarket maintenance agreements, operator training and parts replacement help to ensure resources are maximized. The market leader with an estimated 21% market share, Danaher, is able to reap EBIT margins in the high-teens-to-low-20% on test equipment, but can command 40%+ margins on the water test reagents. See Freebie marketing.
Distribution
Companies tend to employ the "direct-to-end-user" model for most products, but may also try to sell low-end equipment via the Internet to reduce distribution costs.
Pricing
Pricing depends on application and type of product. Instruments range from as low as $10 to thousands of dollars.
Suppliers
The low-end test equipment is dominated by few large suppliers, notably Germany's Lovibond and Merck, DelAgua & ITS Europe Water Testing of the UK who work globally, and US-based LaMotte. Major manufacturers of in-line equipment include Siemens and Danaher's Hach. Thermo Scientific and Waters are key producers of high-end test equipment.
End markets
The end markets include municipal water plants, industrial users, such as beverage and electronics, and environmental agencies, such as the United States Geological Survey.
Water testing facilities
There are two main types of laboratories: commercial and in-house.
In-house laboratories
In-house laboratories are usually present in municipal water and waste water facilities, breweries and pharmaceutical manufacturing plants. They account for roughly half of all tests run annually.
Commercial laboratories
Most of the commercial laboratories are single-site firms that only service institutions in the geographical region. The employee head count for each laboratory is usually fewer than five people, and revenues are under $1 million. These laboratories account for one quarter of all tests. There are several major laboratory groups, such as UK-based Inspicio and Australia-based ALS, which account for another quarter of all tests.
Privatization
Opinion
The conventional impression is that private water systems, which source groundwater in rural areas, produce higher water quality than public water systems. Studies have demonstrated that groundwater is vulnerable to antibiotic-resistant bacteria, which necessitates frequent water testing. However, critics like Charrois argue that inconvenience and time constraints impede regular testing of private wells and water systems, which poses a risk of poor water quality to consumers.
Sydney water crisis
In 1998, Sydney, Australia's water supply, 85% controlled by Suez Lyonnaise des Eaux until 2021, contained high concentrations of parasites Giardia and Cryptosporidium. However, the public was not immediately informed of the water contamination when it had first occurred.
Ontario's Common Sense Revolution
In Ontario, Canada, the Harris government introduced the "Common Sense Revolution" to cut the large provincial deficit accumulated under the previous Rae government, implementing major cuts to the environment budget, privatizing water testing labs, deregulating water protection infrastructure, and firing trained water testing experts. See Mike Harris.
In 1999, in spite of a Canadian federal government study that found a third of Ontario's rural wells were contaminated with E. coli, the Ontario government dropped testing for E. coli from its Drinking Water Surveillance Program and subsequently closed the program in 2000. In June 2000, there was a wave of E. coli outbreaks in several communities in rural Ontario, where at least seven people died from consuming the water in Walkerton. The private testing company, A&L Laboratories, detected E. coli in the water but failed to disclose the contamination to provincial authorities due to a loophole in the "common sense" regulation. A&L Laboratories claimed that the test results were "confidential intellectual property" and therefore belonged only to the "client", the Walkerton authorities, who lacked the training to assess them properly. See Escherichia coli.
Recent news
Water poisoning cases
In 2011, Hong Kong Education Secretary Michael Suen was diagnosed with Legionnaires' disease. The bacteria contamination stemmed from Hong Kong's HK$5.5 billion government headquarters site, where traces of the bacteria were found to be up to 14 times above acceptable levels.
Water contamination cases
In March 2013, French consumer magazine 60 Millions de Consommateurs and non-governmental organization Fondation France Libertés conducted an investigation that found traces of pesticides and prescription drugs, including a medicine for breast cancer treatment, in almost one in five French brands of bottled water, which are commonly touted as cleaner, healthier and purer alternatives to French tap water. Out of 47 brands of bottled water commonly available in French supermarkets, 10 brands contained "residues from drugs or pesticides".
In March 2013, almost 200 water fountains in Jersey City public schools were found to contain lead above regulatory standards, where one of the water fountains had lead contamination at levels more than 800 times the EPA's standard. The situation warrants concern because exposure to lead in water could lead to mental retardation for children.
Legal cases
In March 2013, a defense lawyer asked a federal judge to dismiss charges against the owner of Mississippi Environmental Analytical Laboratories Inc. accused of falsifying records on industrial waste water samples. According to the indictment, Borg Warner Emissions Systems Inc. hired Tennie White, the owner of the laboratory, to test waste water discharge at its car parts plant in Water Valley. White is accused of creating three reports in 2009 that indicated tests were completed when they were not. The motion to dismiss was based on the lawyer's argument that the documents referred to in the indictment were not signed and were not submitted to a government agency.
Sequestration cuts
Water quality testing for private wells in Chemung County is affected by budget cuts.
See also
List of chemical analysis methods
Water chemistry analysis
Water quality: measurement
References
Drinking water
Water pollution
Water chemistry
Water treatment | Water testing | Chemistry,Engineering,Environmental_science | 4,191 |
44,331,535 | https://en.wikipedia.org/wiki/Fluidized%20bed%20concentrator | A fluidized bed concentrator (FBC) is an industrial process for the treatment of exhaust air. The system uses a bed of activated carbon beads to adsorb volatile organic compounds (VOCs) from the exhaust gas. Differently from the fixed-bed or carbon rotor concentrators, the FBC system forces the VOC-laden air through several perforated steel trays, increasing the velocity of the air and allowing the sub-millimeter carbon beads to fluidize, or behave as if suspended in a liquid. This increases the surface area of the carbon-gas interaction, making it more effective at capturing VOCs.
Components
The fluidized bed concentrator consists of five primary components:
Adsorption tower
Desorption tower
Thermal oxidizer
Carbon transport system
Process fans: inlet adsorber, inlet desorber, outlet oxidizer to stack
How it works
Industrial processes requiring ventilation, including paint booths, printing, and chemical production, exhaust the ventilated air to the fluidized bed concentrator at room temperature. The air first passes into the adsorption tower, where it moves through six perforated trays of clean carbon beads. The 0.7 mm bead activated carbon (BAC) fluidizes in the trays and captures the VOCs as they intermix.
The saturated carbon beads are passed from the adsorber tower to the desorber tower, where the beads are heated to 350 °F and the VOCs are released. Typically the adsorber tower is many times larger than the desorber tower, leading to an air volume reduction and an increase in VOC concentration. The ratio of adsorber size to desorber size is called the concentration ratio, and ranges from 10:1 to 100:1.
The concentrated VOC gas stream is sent from the desorber tower to a thermal oxidizer, where the organic compounds are heated to 1400 °F and oxidized, or broken down into carbon dioxide (CO2), water (H2O), and by-products. In some cases, small amounts of carbon monoxide (CO), nitrogen oxides (NOx), and other gases are produced.
Emissions and energy usage
The primary advantage of the FBC over traditional rotor concentrators lies in its ability to achieve any concentration ratio up to the lower explosive limit (LEL). This allows Honda Alabama's paint shop to switch from oxidizing 100,000 CFM of VOCs in a regenerative thermal oxidizer (RTO), to oxidizing only 1,500 CFM of VOCs in a small thermal oxidizer, at a much higher concentration. Reducing the volume of air to be oxidized from 100,000 CFM to 1,500 CFM (66:1 concentration ratio), allows for a much lower energy usage and consequently, fewer CO2 and NOX emissions.
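The concentration ratio and the resulting air-volume reduction are straightforward arithmetic. The following Python sketch uses only the flow figures quoted above (100,000 CFM into the adsorber, 1,500 CFM to the oxidizer); it does not estimate actual energy or emissions savings, which depend on oxidizer design and VOC loading.

```python
# Back-of-the-envelope calculation of the concentration ratio described above,
# using the flow figures quoted in the text.

adsorber_flow_cfm = 100_000   # dilute, VOC-laden exhaust entering the adsorber
desorber_flow_cfm = 1_500     # concentrated stream leaving the desorber

concentration_ratio = adsorber_flow_cfm / desorber_flow_cfm
print(f"Concentration ratio = {concentration_ratio:.0f}:1")   # ~67:1 (quoted as 66:1 in the text)

# VOC mass is conserved between the two streams (ignoring losses), so the VOC
# concentration in the stream sent to the thermal oxidizer rises by roughly the
# same factor, while the volume of air that must be heated falls by it.
volume_reduction = 1 - desorber_flow_cfm / adsorber_flow_cfm
print(f"Air volume to be oxidized reduced by {volume_reduction:.1%}")   # 98.5%
```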
Industries served
Paint finishing
Automotive
Aerospace
Heavy machinery
Transportation
Printing
Chemical production
Semiconductor
Food processing
See also
Volatile organic compound
National Emissions Standards for Hazardous Air Pollutants
Air pollution in the United States
Activated carbon
Air pollution
References
External links
Clean Air Act plus further links to relevant rules, reports, and programs.
Organic NESHAP
Air pollution control systems
Pollution control technologies
Air pollution in the United States
Hazardous air pollutants
United States Environmental Protection Agency
Chemical safety
Volatile organic compound abatement
Industrial processes
Fluidization | Fluidized bed concentrator | Chemistry,Engineering | 696 |
2,626,268 | https://en.wikipedia.org/wiki/STAT%20protein | Members of the signal transducer and activator of transcription (STAT) protein family are intracellular transcription factors that mediate many aspects of cellular immunity, proliferation, apoptosis and differentiation. They are primarily activated by membrane receptor-associated Janus kinases (JAK). Dysregulation of this pathway is frequently observed in primary tumors and leads to increased angiogenesis which enhances the survival of tumors and immunosuppression. Gene knockout studies have provided evidence that STAT proteins are involved in the development and function of the immune system and play a role in maintaining immune tolerance and tumor surveillance.
STAT family
The first two STAT proteins were identified in the interferon system. There are seven mammalian STAT family members that have been identified: STAT1, STAT2, STAT3, STAT4, STAT5 (STAT5A and STAT5B), and STAT6.
STAT1 homodimers are involved in type II interferon signalling, and bind to the GAS (Interferon-Gamma Activated Sequence) promoter to induce expression of interferon stimulated genes (ISG). In type I interferon signaling, STAT1-STAT2 heterodimer combines with IRF9 (Interferon Response Factor) to form ISGF3 (Interferon Stimulated Gene Factor), which binds to the ISRE (Interferon-Stimulated Response Element) promoter to induce ISG expression.
Structure
All seven STAT proteins share a common structural motif consisting of an N-terminal domain followed by a coiled-coil, DNA-binding domain, linker, Src homology 2 (SH2), and a C-terminal transactivation domain. Much research has focused on elucidating the roles each of these domains play in regulating different STAT isoforms. Both the N-terminal and SH2 domains mediate homo or heterodimer formation, while the coiled-coil domain functions partially as a nuclear localization signal (NLS). Transcriptional activity and DNA association are determined by the transactivation and DNA-binding domains, respectively.
Activation
Extracellular binding of cytokines or growth factors induce activation of receptor-associated Janus kinases, which phosphorylate a specific tyrosine residue within the STAT protein promoting dimerization via their SH2 domains. The phosphorylated dimer is then actively transported to the nucleus via an importin α/β ternary complex. Originally, STAT proteins were described as latent cytoplasmic transcription factors as phosphorylation was thought to be required for nuclear retention. However, unphosphorylated STAT proteins also shuttle between the cytosol and nucleus, and play a role in gene expression. Once STAT reaches the nucleus, it binds to a consensus DNA-recognition motif called gamma-activated sites (GAS) in the promoter region of cytokine-inducible genes and activates transcription. The STAT protein can be dephosphorylated by nuclear phosphatases, which leads to inactivation of STAT and subsequent transport out of the nucleus by an exportin-RanGTP complex.
See also
JAK-STAT pathway
DNA-binding protein
STAT Inhibitors
Additional images
References
External links
Drosophila Signal-transducer and activator of transcription protein at 92E - The Interactive Fly
Gene expression
Immune system
Protein families
Transcription factors
Signal transduction | STAT protein | Chemistry,Biology | 692 |
996,772 | https://en.wikipedia.org/wiki/George%20Combe | George Combe (21 October 1788 – 14 August 1858) was a Scottish lawyer and a spokesman of the phrenological movement for over 20 years. He founded the Edinburgh Phrenological Society in 1820 and wrote The Constitution of Man (1828). After marriage in 1833, Combe devoted his later years to promoting phrenology internationally.
Early life
George Combe was born at Livingston's Yards, Edinburgh, the son of Marion (née Newton, died 1819) and George Combe, a prosperous brewer in the city. His younger brother was the physician Andrew Combe. After attending the High School of Edinburgh, he studied law at the University of Edinburgh, entered a lawyer's office in 1804, and in 1812 began a solicitor's practice at 11 Bank Street.
In 1820 Combe moved his office to Mylnes Court on the Royal Mile and moved house to 8 Hermitage Place in Stockbridge. In 1825 he moved with Andrew to 2 Brown Square off the Grassmarket. The Combe brothers lived together in a large dwelling at 25 Northumberland Street in the New Town from 1829.
Phrenological Society
In 1815, the Edinburgh Review contained an article on the system of "craniology" devised by Franz Joseph Gall and Johann Gaspar Spurzheim, denouncing it as "a piece of thorough quackery from beginning to end". When Spurzheim came to Edinburgh in 1816, Combe was invited to a friend's house, where he watched Spurzheim dissect a human brain. Impressed by the demonstration, he attended a second series of Spurzheim's lectures. On investigating the subject for himself, he became satisfied that the fundamental principles of phrenology were true: "that the brain is the organ of mind; that the brain is an aggregate of several parts, each subserving a distinct mental faculty; and that the size of the cerebral organ is, caeteris paribus, an index of power or energy of function."
His first essay on phrenology was published in Scots Magazine in 1817 and was followed by a series of papers in the Literary and Statistical Magazine. The writings were collected and published in 1819 in book form as Essays on Phrenology and, in later editions, as A System of Phrenology. In 1820, Combe helped to found the Phrenological Society of Edinburgh, which in 1823 established a Phrenological Journal. His lectures and writings also drew attention to phrenology in Europe and the United States.
Debate with Hamilton
Combe began to lecture at Edinburgh in 1822. He published a Manual, Elements of Phrenology, in June 1824. He took private tuition in elocution; contemporaries described him as clever and opinionated. Combe's discussions had an air of confidentiality and theatrical urgency. Converts came in, societies sprang up and controversies began.
A second edition of Elements, 1825, was attacked by Francis Jeffrey in the Edinburgh Review of September 1825. Combe replied in a pamphlet and the journal. The phrenologists were attacked again in 1826 and 1827 by Sir William Hamilton in addresses to the Royal Society of Edinburgh. The sharp controversy included challenges to public disputes and mutual charges of misrepresentation, in which Spurzheim took part. The correspondence appeared in the fourth and fifth volumes of the Phrenological Journal.
Social interests: schools, prisons and asylums
In 1836, Combe stood for the chair of Logic at the University of Edinburgh against two other candidates: Sir William Hamilton and Isaac Taylor. Hamilton won by 18 votes against 14 for Taylor. In 1838 Combe visited the United States to study the treatment of criminals there. He initiated a programme of public education in chemistry, physiology, history and moral philosophy.
Combe sought to improve public education through a national, non-sectarian system. He helped to set up a school in Edinburgh run on the principles of William Ellis, and did some teaching there in phrenology and physiology. It was prompted by the London Birkbeck School, which had opened on 17 July 1848. Combe was strongly behind the view that the state should be involved in the education system. In this he was backed by William Jolly, an inspector of schools, and noted by Frank Pierrepont Graves.
Combe was much concerned about prison reform. He and William A. F. Browne opened a debate on introducing humane treatment of psychiatric patients in publicly funded asylums.
Later life
John Ramsay L'Amy, son of James L'Amy, trained under Combe at his offices at 25 Northumberland Street in Edinburgh's New Town.
In 1842, Combe gave a course of 22 lectures on phrenology at the Ruprecht Karl University of Heidelberg and travelled about Europe enquiring into management of schools, prisons and asylums.
On retiring, Combe took a substantial terraced townhouse, 45 Melville Street, in Edinburgh's West End. He was revising the 9th edition of the Constitution of Man when he died at Moor Park, Farnham in August 1858. He lies under a simple headstone in the Dean Cemetery, Edinburgh, against the north wall of the original section. His wife Cecilia Siddons is buried with him.
Works
In 1817, Combe's first essay on phrenology in The Scots Magazine was followed by a series of papers on the subject in the Literary and Statistical Magazine. These appeared in book form in 1819 as Essays on Phrenology, entitled A System of Phrenology in later editions.
Combe's most popular work, The Constitution of Man, appeared in 1828 but was widely denounced as materialist and atheist. He argued in it: "Mental qualities are determined by the size, form and constitution of the brain; and these are transmitted by hereditary descent."
Combe was part of an active Edinburgh scene of people thinking about the nature of heredity and its possible malleability, as Lamarck proposed. Combe himself was no Lamarckian, but in the decades before Darwin's Origin of Species was published, the Constitution was probably the single most important vehicle for disseminating naturalistic progressivism in the English-speaking world.
Combe's 1838 Answers to the Objections Urged Against Phrenology was followed in 1840 by Moral Philosophy and in 1841 by Notes on the United States of North America. Phrenology Applied to Painting and Sculpture ensued in 1855. The culmination of Combe's autobiographical philosophy appeared in "On the Relation between Science and Religion", first publicly issued in 1857. Combe moved into the economic arena with a pamphlet on The Currency Question (1858). A fuller phrenological approach to political economy was set out later by William Ballantyne Hodgson.
Family
In 1833, Combe married Cecilia Siddons, daughter of the actress Sarah Siddons and sister of Henry Siddons, author of Practical Illustrations of Rhetorical Gesture and Action (1807). She brought him a fortune and a happy, though childless marriage, preceded by a phrenological check for compatibility. A few years later, he retired from the law in comfortable circumstances.
Bibliography
George Combe (1828), The Constitution of Man Considered in Relation to External Objects. J. Anderson jun. (reissued by Cambridge University Press, 2009; )
George Combe (1830), A System of Phrenology Edinburgh: J Anderson. Full Text Available at archive.org
George Combe (1857), On the Relation Between Science and Religion. Maclachlan and Stewart (reissued by Cambridge University Press, 2009; )
Notes
Attribution:
External links
Articles on Phrenological practice by George Combe, Andrew Combe, and other early Phrenologists.
1788 births
1858 deaths
Scientists from Edinburgh
Phrenology
Phrenologists
Scottish non-fiction writers
Mental health professionals
Burials at the Dean Cemetery
Mental health activists
Alumni of the University of Edinburgh School of Law
Kemble family | George Combe | Biology | 1,596 |
22,094 | https://en.wikipedia.org/wiki/Nutation | Nutation () is a rocking, swaying, or nodding motion in the axis of rotation of a largely axially symmetric object, such as a gyroscope, planet, or bullet in flight, or as an intended behaviour of a mechanism. In an appropriate reference frame it can be defined as a change in the second Euler angle. If it is not caused by forces external to the body, it is called free nutation or Euler nutation (after Leonhard Euler). A pure nutation is a movement of a rotational axis such that the first Euler angle is constant. Therefore it can be seen that the circular red arrow in the diagram indicates the combined effects of precession and nutation, while nutation in the absence of precession would only change the tilt from vertical (second Euler angle). However, in spacecraft dynamics, precession (a change in the first Euler angle) is sometimes referred to as nutation.
In a rigid body
If a top is set at a tilt on a horizontal surface and spun rapidly, its rotational axis starts precessing about the vertical. After a short interval, the top settles into a motion in which each point on its rotation axis follows a circular path. The vertical force of gravity produces a horizontal torque τ about the point of contact with the surface; the top rotates in the direction of this torque with an angular velocity Ω such that at any moment

\boldsymbol{\tau} = \boldsymbol{\Omega} \times \mathbf{L} \quad \text{(vector cross product)}

where L is the instantaneous angular momentum of the top.
Initially, however, there is no precession, and the upper part of the top falls sideways and downward, thereby tilting. This gives rise to an imbalance in torques that starts the precession. In falling, the top overshoots the amount of tilt at which it would precess steadily and then oscillates about this level. This oscillation is called nutation. If the motion is damped, the oscillations will die down until the motion is a steady precession.
The physics of nutation in tops and gyroscopes can be explored using the model of a heavy symmetrical top with its tip fixed. (A symmetrical top is one with rotational symmetry, or more generally one in which two of the three principal moments of inertia are equal.) Initially, the effect of friction is ignored. The motion of the top can be described by three Euler angles: the tilt angle θ between the symmetry axis of the top and the vertical (second Euler angle); the azimuth φ of the top about the vertical (first Euler angle); and the rotation angle ψ of the top about its own axis (third Euler angle). Thus, precession is the change in φ and nutation is the change in θ.
If the top has mass M and its center of mass is at a distance l from the pivot point, its gravitational potential relative to the plane of the support is

V = Mgl\cos\theta

In a coordinate system where the z axis is the axis of symmetry, the top has angular velocities ω1, ω2, ω3 and moments of inertia I1, I2, I3 about the x, y, and z axes. Since we are taking a symmetric top, we have I1 = I2. The kinetic energy is

T = \tfrac{1}{2} I_1 \left(\omega_1^2 + \omega_2^2\right) + \tfrac{1}{2} I_3\,\omega_3^2

In terms of the Euler angles, this is

T = \tfrac{1}{2} I_1 \left(\dot\theta^2 + \dot\phi^2 \sin^2\theta\right) + \tfrac{1}{2} I_3 \left(\dot\psi + \dot\phi \cos\theta\right)^2
If the Euler–Lagrange equations are solved for this system, it is found that the motion depends on two constants a and b (each related to a constant of motion). The rate of precession is related to the tilt by

\dot\phi = \frac{b - a\cos\theta}{\sin^2\theta}

The tilt is determined by a differential equation for u = cos θ of the form

\dot{u}^2 = f(u)

where f is a cubic polynomial that depends on parameters a and b as well as constants that are related to the energy and the gravitational torque. The roots of f are cosines of the angles at which the rate of change of θ is zero. One of these is not related to a physical angle; the other two determine the upper and lower bounds on the tilt angle, between which the gyroscope oscillates.
Astronomy
The nutation of a planet occurs because the gravitational effects of other bodies cause the speed of its axial precession to vary over time, so that the speed is not constant. English astronomer James Bradley discovered the nutation of Earth's axis in 1728.
Earth
Nutation subtly changes the axial tilt of Earth with respect to the ecliptic plane, shifting the major circles of latitude that are defined by the Earth's tilt (the tropical circles and the polar circles).
In the case of Earth, the principal sources of tidal force are the Sun and Moon, which continuously change location relative to each other and thus cause nutation in Earth's axis. The largest component of Earth's nutation has a period of 18.6 years, the same as that of the precession of the Moon's orbital nodes. However, there are other significant periodic terms that must be accounted for depending upon the desired accuracy of the result. A mathematical description (set of equations) that represents nutation is called a "theory of nutation". In the theory, parameters are adjusted in a more or less ad hoc method to obtain the best fit to data. Simple rigid body dynamics do not give the best theory; one has to account for deformations of the Earth, including mantle inelasticity and changes in the core–mantle boundary.
The principal term of nutation is due to the regression of the Moon's nodal line and has the same period of 6798 days (18.61 years). It reaches plus or minus 17″ in longitude and 9.2″ in obliquity. All other terms are much smaller; the next-largest, with a period of 183 days (0.5 year), has amplitudes 1.3″ and 0.6″ respectively. The periods of all terms larger than 0.0001″ (about as accurately as available technology can measure) lie between 5.5 and 6798 days; for some reason (as with ocean tidal periods) they seem to avoid the range from 34.8 to 91 days, so it is customary to split the nutation into long-period and short-period terms. The long-period terms are calculated and mentioned in the almanacs, while the additional correction due to the short-period terms is usually taken from a table. They can also be calculated from the Julian day according to IAU 2000B methodology.
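The two largest terms quoted above (the 18.6-year nodal term and the 183-day term) already give a serviceable approximation of the nutation angles. The Python sketch below keeps only those two terms, using the widely used low-precision coefficients (roughly arcsecond accuracy); it is not the full IAU 2000B series, which contains dozens of terms, and the expressions for the lunar node and solar longitude are standard approximations rather than values taken from this article.

```python
import math

def approximate_nutation(jd):
    """Two-term approximation of nutation in longitude (dpsi) and obliquity (deps).

    Keeps only the 18.6-year term (regression of the Moon's node) and the
    183-day term mentioned above. Coefficients are commonly used low-precision
    values, not the full IAU 2000B series. Returns (dpsi, deps) in arcseconds.
    """
    d = jd - 2451545.0                                 # days since J2000.0
    omega = math.radians(125.04 - 0.052954 * d)        # longitude of Moon's ascending node
    L = math.radians(280.47 + 0.98565 * d)             # mean longitude of the Sun

    dpsi = -17.20 * math.sin(omega) - 1.32 * math.sin(2 * L)   # longitude, arcsec
    deps = 9.20 * math.cos(omega) + 0.57 * math.cos(2 * L)     # obliquity, arcsec
    return dpsi, deps

if __name__ == "__main__":
    dpsi, deps = approximate_nutation(2451545.0)       # J2000.0
    print(f"At J2000.0: dpsi = {dpsi:+.2f} arcsec, deps = {deps:+.2f} arcsec")
```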
In popular culture
In the 1961 disaster film The Day the Earth Caught Fire, the near-simultaneous detonation of two super-hydrogen bombs near the poles causes a change in Earth's nutation, as well as an 11° shift in the axial tilt and a change in Earth's orbit around the Sun.
In Star Trek: The Next Generation, rapidly 'cycling' or 'changing' the 'shield nutation' is frequently mentioned as a means by which to delay the antagonist in their efforts to break through the defences and pillage the Enterprise or other spacecraft.
See also
Libration
Teetotum
Notes
References
The Feynman Lectures on Physics Vol. I Ch. 20: Rotation in space
Rotation in three dimensions
Astrometry
Geodynamics | Nutation | Astronomy | 1,448 |
1,596,341 | https://en.wikipedia.org/wiki/Gliese%20710 | Gliese 710, or HIP 89825, is an orange star in the constellation Serpens Cauda. It is projected to pass near the Sun in about 1.29 million years at a predicted minimum distance of 0.051 parsecs— (about 1.6 trillion km)—about 1/25th of the current distance to Proxima Centauri. Such a distance would make for a similar brightness to the brightest planets, optimally reaching an apparent visual magnitude of about −2.7. The star's proper motion will peak around one arcminute per year, a rate of apparent motion that would be noticeable over a human lifespan. This is a timeframe, based on data from Data Release 3 from the Gaia spacecraft, well within the parameters of current models which cover the next 15 million years.
Description
Gliese 710 currently lies in the constellation Serpens and has a visual magnitude of 9.69, below naked-eye visibility. A stellar classification of K7 Vk means it is a small main-sequence star mostly generating energy through the thermonuclear fusion of hydrogen at its core. (The suffix 'k' indicates that the spectrum shows absorption lines from interstellar matter.) Its mass is about 57% of the Sun's, with an estimated 58% of the Sun's radius. It is suspected to be a variable star that may vary in magnitude from 9.65 to 9.69. As of 2020, no planets have been detected orbiting it.
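The brightening expected at closest approach follows from the distance modulus alone, assuming the star's intrinsic luminosity does not change. The Python sketch below uses the current magnitude (9.69) and the predicted minimum distance (0.051 pc) quoted in this article, together with an assumed present distance of about 19 parsecs, which is not stated in the text above and is introduced here only for illustration.

```python
import math

# Sketch of the apparent-magnitude change expected from distance alone, using
# the distance modulus: m2 - m1 = 5 * log10(d2 / d1).
# Assumption (not from this article's text): present distance of ~19 pc.

m_now = 9.69          # current apparent visual magnitude (quoted above)
d_now_pc = 19.0       # assumed present distance in parsecs
d_close_pc = 0.051    # predicted minimum distance (quoted above)

m_close = m_now + 5 * math.log10(d_close_pc / d_now_pc)
print(f"Apparent magnitude near closest approach: about {m_close:.1f}")
# Roughly -3, broadly consistent with the "about -2.7" quoted above; the exact
# value depends on the adopted present distance and perihelion distance.
```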
Computing and details of the closest approach
In their 2010 work, Bobylev et al. suggested that Gliese 710 has an 86% chance of passing through the Oort cloud, assuming the Oort cloud to be a spheroid around the Sun with semiminor and semimajor axes of 80,000 and 100,000 AU, respectively. The distance of closest approach of Gliese 710 is difficult to compute precisely, as it depends sensitively on the star's current position and velocity; Bobylev et al. estimated that Gliese 710 would pass within a fraction of a parsec of the Sun. At the time, there was even a 1-in-10,000 chance of the star penetrating into the region (d < 1,000 AU) where its influence on Kuiper belt objects would be significant.
Results from new calculations that include input data from Gaia DR3 indicate that the flyby of Gliese 710 past the Solar System will on average be closer than earlier estimates suggested, and with considerably less uncertainty. The effects of such an encounter on the orbit of the Pluto–Charon system (and therefore, on the classical trans-Neptunian belt) are negligible, but Gliese 710 will traverse the outer Oort cloud (inside 100,000 AU or 0.48 pc) and reach the outskirts of the inner Oort cloud (inward of 20,000 AU).
Gliese 710 has the potential to perturb the Oort cloud in the outer Solar System, exerting enough force to send showers of comets into the inner Solar System for millions of years, triggering visibility of about ten naked-eye comets per year, and possibly causing an impact event. According to Filip Berski and Piotr Dybczyński, this event will be "the strongest disrupting encounter in the future and history of the Solar System." Earlier dynamic models indicated that the net increase in cratering rate due to the passage of Gliese 710 would be no more than 5%. They had originally estimated that the closest approach would happen in 1.36 million years. Gaia DR2 later found the minimum perihelion distance to be 13,900 ± 3,200 AU, about 1.281 million years from now.
In popular culture
In 2022, the final track on popular Australian psychedelic rock band King Gizzard & The Lizard Wizard's album Ice, Death, Planets, Lungs, Mushrooms and Lava was entitled Gliese 710.
See also
List of nearest stars
Notes
References
External links
SolStation.com
VizieR variable star database
Wikisky image of HD 168442 (Gliese 710)
BD-01 3474
168442
089825
0710
K-type main-sequence stars
Serpens
TIC objects | Gliese 710 | Astronomy | 916 |
38,456,645 | https://en.wikipedia.org/wiki/CallFire | CallFire Inc. is a cloud telephony services provider (SaaS) headquartered in Santa Monica, California, known locally as Silicon Beach. CallFire develops web-based VoIP products and services as a business-to-business (B2B) service for small and medium-sized businesses (SMB's).
History
The company was incorporated in 2006 by Dinesh Ravishanker, Vijesh Mehta, and Komnieve Singh. Punit Shah and T. J. Thinakaran came on board in 2006 and 2007 respectively to round out the founding team. Dan Retzlaff, and James Nguyen were hired during the initial years, and Ronald Burr was hired in spring 2012.
In late December of 2012, CallFire acquired EZTexting from Shane Neman, who was the original founder and CEO from 2005 to 2013.
The company provides cloud communication services such as voice broadcasting, power dialing, and Interactive Voice Response.
Reception
In 2010, CallFire was ranked No. 285 on Inc. Magazine’s 29th annual List of America’s Fastest Growing Private Companies.
CallFire was ranked No. 15 within the Telecommunications industry in the Los Angeles metropolitan region. Much of CallFire’s annual growth is attributed to “the growth in calls and use of its service in U.S. elections as well as Hurricane Sandy”.
References
External links
CallFire website
Cloud communication platforms
Cloud applications
Cloud platforms | CallFire | Technology | 291 |
45,346,083 | https://en.wikipedia.org/wiki/Rectal%20douching | Rectal douching is the act of rinsing the rectum with intent to clean it. An instance of this rinsing or a tool used to perform the rinse may be called a rectal douche.
Uses
Rectal douching is a hygienic practice to clean the rectum to void hardened stools as opposed to a pharmaceutical method to soften the stool.
Rectal douching is distinguished from anal cleansing, which is the routine cleaning of the anus after defecation.
Risks
Evidence is not clear, but it is possible that rectal douching before anal sex can increase the risk of transferring HIV, and other diseases. There is evidence that douching sometimes can disrupt the epithelium, or tissue in the rectum, and if this tissue is damaged, then diseases can spread more easily.
Rectal douching before anal sex increases the risk of transfer of Hepatitis B.
There are reports that activities which can have the side effect of unintentionally forcing water into the rectum, such as waterskiing, may cause discomfort and can potentially cause other harm.
Technique
Liquid, typically water, is inserted into the rectum by means of some tool. After some time, the water is expelled in the manner of a routine bowel movement, and, in the process, the rectum eliminates waste and is cleaned.
Most people who use rectal douching do so with plain water. The use of a hose connected to a tap, either in a shower or sink, has been reported as the most popular way to administer a douche. Another popular way is with a handheld bulb and syringe designed for rectal douching.
Less commonly, some people used commercial products sold for performing rectal douching, with single-use bottles of saline being most used. Also commercially available but even less commonly used for rectal douching are mineral oil products intended to assist in an enema.
History
A rectal douche device was patented in 1957 in the United States by Patricia Bragg.
Society and culture
From a public health perspective, understanding rectal douching practices may be important because the practice can be paired with behaviors which are risk factors to acquiring a sexually transmitted infection.
Research
Research into rectal microbicide to prevent the transmission of HIV increased interest into researching safer and more gentle rectal douching techniques. The hope in that research is that a rectal microbicide could be delivered with a rectal douche.
References
External links
Anal Sex Prep, a video explanation by Lindsey Doe which is suitable for any audience
Douching for Bottom Boys, a video explanation targeted to gay males
Anal eroticism
Hygiene
Rectum
Sexology | Rectal douching | Biology | 557 |
42,495,828 | https://en.wikipedia.org/wiki/Ventral%20nervous%20system%20defective | The ventral nervous system defective (vnd) gene present in fruit flies of the genus Drosophila functions during embryonic brain development and is necessary for the formation and specification of neural cell lineages. This transcription factor is expressed in all brain neuromeres and functions as a columnar patterning gene in the early stages of cell differentiation. Vnd is important in cell development as well as cell maintenance during embryogenesis due to the expression of the gene in both the developing neural cells and the specified neuromeres. Knockout experiments of vnd reveal similar axonal defects as the knockout of the Hox gene labial (lab). Like the Hox genes, vnd is required for specification along the dorsoventral axis. Mutant vnd produce the loss of tritocerebral neural lineages as well as neuromeres, therefore this gene is crucial for the development of these cell lineages. Without proper vnd expression throughout Drosophila embryonic development, the brain would not become functional.
References
Hirth F, Reichert H, Rijli F, Sprecher S, Technau G, Urbach R. The columnar gene vnd is required for tritocerebral neuromere formation during embryonic brain development of Drosophila. Development. 2006;133:4331-4339.
Drosophila melanogaster genes
Nervous system | Ventral nervous system defective | Biology | 286 |
2,793,808 | https://en.wikipedia.org/wiki/Secretariat%20of%20Energy | In Mexico, the Secretariat of Energy (Spanish: Secretaría de Energia) is the government department in charge of production and regulation of energy. This secretary is a member of the Executive Cabinet.
History
On December 7, 1946, the Secretaría de Bienes Nacionales e Inspección Administrativa (Secretariat of National Assets and Administrative Inspection) was created with Alfonso Caso Andrade serving as secretary. In 1958, the name was changed to Secretaría de Patrimonio Nacional (Secretariat of National Assets, SEPANAL). In the late 1970s and 1980s it changed names two more times, being known as the Secretaría de Patrimonio y Fomento Industrial (Secretariat of Assets and Industrial Development) during the presidency of José López Portillo PRI and as the Secretaría de Energía, Minas e Industria Paraestatal (Secretariat of Energy, Mining and Semi-Public Industries) until obtaining its current name in 1994.
Three individuals served as Secretary of Energy under President Felipe Calderón PAN: Georgina Yamilet Kessel Martínez (December 2006-January 2011), José Antonio Meade (January–September 2011), and Jordy Herrera Flores (September 2011-November 2012).
See also
César Emiliano Hernández Ochoa
References
External links
Official site
Energy
Mexico | Secretariat of Energy | Engineering | 264 |
18,360,358 | https://en.wikipedia.org/wiki/Affect%20consciousness | Affect consciousness (or affect integration - a more generic term for the same phenomenon) refers to an individual's ability to consciously perceive, tolerate, reflect upon, and express affects. These four abilities are operationalized as degrees of awareness, tolerance, emotional (nonverbal) expression, and conceptual (verbal) expression of each of the following eleven affect categories:
The Affect Consciousness Interview (ACI) (Monsen et al., 2008), a semi-structured interview, is used to evaluate an individual's affect consciousness. The ACI evaluates the individual's awareness, tolerance, emotional expression, and conceptual expression of each of the affect categories are evaluated using a nine-point Affect Consciousness Scale (ACS), with the most current version containing eleven affect categories. The AC-construct and its psychotherapeutic implications were first proposed and described by Norwegian Psychology Professor Jon Monsen and his associates in the early 1980s. The construct has become increasingly popular and more widely researched in recent years.
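The structure described above (a set of affect categories, each rated on four integrating aspects with a nine-point scale) can be pictured with a small data layout. The Python sketch below is a toy illustration only; the affect labels and numbers are hypothetical examples, and it does not reproduce the published ACI/ACS scoring procedure of Monsen et al.

```python
# Toy illustration of the rating structure described above, not the official
# ACI/ACS scoring procedure. Affect labels and values are hypothetical.

ASPECTS = ("awareness", "tolerance", "emotional_expression", "conceptual_expression")

ratings = {
    "interest": {"awareness": 6, "tolerance": 5, "emotional_expression": 4, "conceptual_expression": 5},
    "fear":     {"awareness": 4, "tolerance": 3, "emotional_expression": 3, "conceptual_expression": 4},
    "anger":    {"awareness": 5, "tolerance": 4, "emotional_expression": 4, "conceptual_expression": 3},
}

def affect_mean(scores):
    """Mean across the four integrating aspects for one affect category."""
    return sum(scores[a] for a in ASPECTS) / len(ASPECTS)

def global_affect_consciousness(all_ratings):
    """Overall mean of all aspect scores across all affect categories."""
    values = [score for affect in all_ratings.values() for score in affect.values()]
    return sum(values) / len(values)

for affect, scores in ratings.items():
    print(f"{affect}: mean aspect score {affect_mean(scores):.2f}")
print(f"Global AC index: {global_affect_consciousness(ratings):.2f}")
```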
Conceptual background
A number of authors and theoretical traditions inspired the development of the AC-construct, most notably Silvan Tomkins' Basic Affect Theory, Script Theoretical formulations and differential emotions theory (Izard, 1977, 1991). Modern self psychological formulations, specifically those advocated by Stolorow, Brandchaft, & Atwood (1995), Stolorow & Atwood (1992), and Basch (1983), are also central, along with the writings of Stern (1985) and the seminal studies by Emde and his associates (e.g., Sorce, Emde, Campos & Klinnert, 1985) on nonverbal affective communication with infants. Based on Tomkins' affect and script theory (2008b, 1995a), the affect consciousness model posits that affect, along with pain, the homeostatic life support processes, and the cyclical drives, constitutes the primary motivating forces in all human affairs. Of these motivational forces the affects are seen as the primary, and by far the most flexible (Solbakken, Hansen & Monsen, 2011).
Continuum
A person with a low level of affect consciousness is expected to be unable to make sense of both his or her own feelings and the emotions of others and to have difficulties attributing causes for his or her own and others' behaviors. A person with high AC is expected to make sense of both his or her and others' emotions.
Solbakken et al. describe the variations in AC at three levels. "At low levels these scales indicate poor awareness and recognition of affects, a tendency for being overwhelmed by, unable to cope with and unable to decode meaningful information from affect activation, along with disavowal and shutdown of bodily expressive acts and inability to articulate and express semantic descriptions of affective experience. At intermediate levels affects are stably recognized and accepted, and both bodily expressive acts and semantic articulation of experience are generally acknowledged. Finally, high levels are characterized by capacity for focused and flexible awareness of nuances specific to different contexts and affect intensities, distinct openness to affective activation and its motivating and regulating functions, along with explicit reflection about the information inherent in the affect with its meanings and consequences for one's understanding of both self and others. At this level the nonverbal and conceptual expressions of affects are clear, nuanced, authentic and characterized by the experience of choice, responsibility and awareness of others' reactions to one's communications (or lack thereof)".
Clinical applications
Psychotherapy Model
A specific AC-psychotherapy treatment model (ACT – not to be confused with acceptance and commitment therapy, which is a more recent model) has been developed and systematically tested (Monsen et al., 1995a, b) for treating severe and complex mental disorders. It has later been revised and tested in a randomized controlled study with chronic pain patients (Monsen & Monsen, 1999, 2000). A recent revision of the model has been written by Monsen & Solbakken (2013) and is currently being tested empirically.
Psychopathology
As noted by Solbakken et al., affect consciousness scores (both overall mean of all aspect-scores across affects and scores on each integrating aspect, and discrete affects) are strongly correlated with relevant measures of psychological dysfunction. These data shows a possible relationship between psychopathology and affect consciousness. Affect integration (operationalized through Affect Consciousness constructs and measured with the ACI and ACS) at different levels shows a stable correlation of psychopathology and psychological dysfunction such as symptom severity, interpersonal problems, personality disorder traits, and general functioning. Furthermore, the integration of specific affects have been shown to have distinct and predictable relationships with various types of relational problems.
As a predictor of change in psychotherapy
It has been shown that in brief time-limited psychotherapy high levels of affect consciousness predict more extensive changes in symptoms and problems. On the other hand, Solbakken, Hansen, Havik, & Monsen demonstrated that in open-ended psychotherapy focusing on the experience and expression of emotion, low levels of AC at the onset of treatment predicted greater changes in symptoms, relational difficulties, and personality disorder traits. Thus, under such psychotherapeutic conditions low AC represent primarily an increased potential for change.
Mentalization
It has been suggested that affect consciousness and the concept of mentalization partly overlap. Both mentalization theory and affect consciousness theory argue that the child's experience and expression of affects develop in relationship (primarily between one or more primary caregivers and the infant). On the other hand, affect consciousness theory focuses more strongly on the biological foundations for affect differentiation and the adaptive properties inherent in discrete affects while emphasizing the individual's own perception and organization of his or her own affects.
Notes
Further reading
Monsen, J. T., & Monsen, K. (1999). Affects and affect consciousness: A psychotherapy model integrating Silvan Tomkins' affect- and script theory within the framework of self psychology. In A. Goldberg (Ed.), Pluralism in self psychology: Progress in self psychology, Vol. 15. Hillsdale, NJ: Analytic Press.
Solbakken, O. A., Hansen, R. S., Havik, O. E., & Monsen, J. T. (2011). The assessment of affect integration: validation of the affect consciousness construct. Journal of Personality Assessment, 93, 257-265.
Solbakken, O.A., Hansen, R. S., & Monsen, J. T. (2011). Affect integration and reflective function; clarification of central conceptual issues. Psychotherapy Research, 21, 482-496.
Tomkins, S. S. (2008a). Affect Imagery Consciousness: The complete edition. Volumes 1-4. New York: Springer Publishing Company.
Consciousness
Feeling
Emotion | Affect consciousness | Biology | 1,420 |
17,633,886 | https://en.wikipedia.org/wiki/Frankenia%20pauciflora | Frankenia pauciflora, the common sea-heath or southern sea-heath, is an evergreen shrub native to southern Australia. It is part of the Frankenia genus of the Frankeniaceae family.
It can be prostrate or may grow up to 0.5 m in height. Pink or white flowers are produced between June and February in its native range. It occurs in saline flats, salt marshes, or coastal limestone areas.
Taxonomy
The specific epithet pauciflora derives from the Latin words paucus, meaning few, and florus, meaning flower, referring to the fact that the species produces few flowers.
Varieties
The currently recognised varieties are:
F. p. var. fruticulosa (DC.) Summerh.
F. p. var. gunnii Summerh.
F. p. var. longifolia Summerh.
F. p. var. pauciflora DC.
Habitat
Frankenia pauciflora is characterized as a halophyte and as such is found in sandy soils, salt flats, salt marshes, and coastal limestones. The plant subsists in environments with a soil class of S2 and S3, which is described as moderately to highly saline soil. The species is a xerophyte, a drought-tolerant plant, and survives in environments with sustained predictable dry periods followed by periods of moist soil. Frankenia pauciflora can subsist in a range of soil pHs, from acidic to alkaline. In addition, the plant tolerates hot overhead sun to warm low sun and is characterized as shade tolerant.
Distribution
The species occurs in Western Australia, South Australia, Victoria, and Tasmania, where it is represented by the variety F. p. var. gunnii, which only grows on Flinders, Short, and Harcus Islands. The species is generally considered not threatened, but F. p. var. gunnii is considered rare, as it has only a small population in Tasmania that may be at greater risk. Var. fruticulosa is found primarily in Southern Australia; var. longifolia is found in Western and Southern Australia, and var. pauciflora is found only in Western Australia.
Description
Individuals of this species are prostrate perennial shrubs up to 0.5 m in height. It forms many branches that create a thick mat-like structure. It produces fleshy, linear grey-green leaves reaching up to 2 cm, somewhat resembling thyme. The leaves can range for hairless to densely haired.
Flowers
Its flowers are 2 cm across, stalkless and are generally pink, but sometimes white. The flowers have four to six petals that usually have irregular edges. The flowers bloom either solitary at the base of stalks or in bunches of 2–25 that can be found either at the base or end of stems. Each flower is supported by 4 bracts. The circular pollen has a tricolpate morphology with a reticulated surface pattern. The species in the Frankenia pauciflora is distinguished from other members of its family by the structure of its ovaries. The female flower has a 3-branched style, while the male flower most commonly has six stamens. The ovaries usually have 3 placentae in a basal or parietal configuration. Each placenta is known to contain 2-6 ovules. The fruit is a small brown cylindrical capsule shape.
Leaves
Due to its halophytic properties, Frankenia pauciflora's leaves are covered in minuscule salt crystals. These crystals cover the smooth upper surface of the leaves, which range from 2 to 13 mm long and 0.5 to 2.2 mm wide. Small hairs can be seen on most leaves, mainly on the midrib of the lower surface, along with folded-over edges. Its leaves generally wilt and turn brown during drought periods. Var. gunnii is distinguished by longer, narrower leaves and an inconspicuous midvein.
Seeds
There is one brown seed per fruit capsule, a cylindrical capsule with 5 or 6 ribbed parts measuring 3–7 mm long. The seeds come attached with a pappus-like structure and separate easily from the fruits. Seeds are sprouted during the months of late January to mid-March.
Bark
Frankenia pauciflora's bark differs from its trunk versus its younger branches. Its new branches have a smoother, and rusty brown appearance while its trunk contains rough and flaky grey to brown bark.
Reproduction
Frankenia pauciflora does not have a set flowering time, flowering throughout the year but particularly between the months of June and February, and can produce seeds at any time during the year. The flowers of Frankenia pauciflora are insect-pollinated to produce dicotyledon seeds. In particular, the flower of F. p. var gunnii are pollinated by insects in the order Diptera, Hymenoptera and Lepidoptera. It has been found that xenogamy in this species leads to more fruits per flower and more seeds in each fruit compared to autogamy; this was reported to be true in both observational studies and controlled experiments.
Uses
The relative simplicity of growth and the plant's ability to adapt to a wide range of soils make Frankenia pauciflora an attractive choice for home gardening. Its flame-retardant properties also reduce the chance of bushfire spread in fire-prone regions such as Australia when it is planted around homes.
Frankenia pauciflora provides shelter for many faunae as well as being a food source for a number of insects. Its thick network of fine roots are also useful for providing stability in sediments and floodplains.
References
pauciflora
Halophytes
Caryophyllales of Australia
Flora of South Australia
Flora of Tasmania
Flora of Victoria (state)
Eudicots of Western Australia | Frankenia pauciflora | Chemistry | 1,194 |
56,160,987 | https://en.wikipedia.org/wiki/Kernel%20page-table%20isolation | Kernel page-table isolation (KPTI or PTI, previously called KAISER) is a Linux kernel feature that mitigates the Meltdown security vulnerability (affecting mainly Intel's x86 CPUs) and improves kernel hardening against attempts to bypass kernel address space layout randomization (KASLR). It works by better isolating user space and kernel space memory. KPTI was merged into Linux kernel version 4.15, and backported to Linux kernels 4.14.11, 4.9.75, and 4.4.110. Windows and macOS released similar updates. KPTI does not address the related Spectre vulnerability.
Background on KAISER
The KPTI patches were based on KAISER (short for Kernel Address Isolation to have Side-channels Efficiently Removed), a technique conceived in 2016 and published in June 2017 back when Meltdown was not known yet. KAISER makes it harder to defeat KASLR, a 2014 mitigation for a much less severe issue.
In 2014, the Linux kernel adopted kernel address space layout randomization (KASLR), which makes it more difficult to exploit other kernel vulnerabilities; its effectiveness relies on kernel address mappings remaining hidden from user space. Despite prohibiting access to these kernel mappings, it turns out that there are several side-channel attacks in modern processors that can leak the location of this memory, making it possible to work around KASLR.
KAISER addressed these problems in KASLR by eliminating some sources of address leakage. Whereas KASLR merely prevents address mappings from leaking, KAISER also prevents the data from leaking, thereby covering the Meltdown case.
KPTI is based on KAISER. Without KPTI enabled, whenever executing user-space code (applications), Linux would also keep its entire kernel memory mapped in page tables, although protected from access. The advantage is that when the application makes a system call into the kernel or an interrupt is received, kernel page tables are always present, so most context-switching overheads (TLB flushes, page-table swapping, etc.) can be avoided.
Meltdown vulnerability and KPTI
In January 2018, the Meltdown vulnerability was published, known to affect Intel's x86 CPUs and ARM Cortex-A75. It was a far more severe vulnerability than the KASLR bypass that KAISER originally intended to fix: It was found that contents of kernel memory could also be leaked, not just the locations of memory mappings, as previously thought.
KPTI (conceptually based on KAISER) prevents Meltdown by preventing most protected locations from being mapped to user space.
AMD x86 processors are not currently known to be affected by Meltdown and do not need KPTI to mitigate it. However, AMD processors are still susceptible to KASLR bypass when KPTI is disabled.
Implementation
KPTI fixes these leaks by separating user-space and kernel-space page tables entirely. One set of page tables includes both kernel-space and user-space addresses, as before, but it is only used when the system is running in kernel mode. The second set, for use in user mode, contains a copy of the user-space mappings and a minimal set of kernel-space mappings that provide the information needed to enter or exit system calls, interrupts and exceptions.
On processors that support process-context identifiers (PCIDs), a translation lookaside buffer (TLB) flush can be avoided, but even then KPTI comes at a significant performance cost, particularly in syscall-heavy and interrupt-heavy workloads.
The overhead was measured at 0.28% by KAISER's original authors; a Linux developer measured it at roughly 5% for most workloads and up to 30% in some cases, even with the PCID optimization. For the database engine PostgreSQL, the impact on read-only tests on an Intel Skylake processor was 7–17% (or 16–23% without PCID), while a full benchmark lost 13–19% (Coffee Lake vs. Broadwell-E). In benchmarks run by Phoronix, Redis slowed by 6–7% and Linux kernel compilation slowed by 5% on Haswell.
KPTI can be partially disabled with the "nopti" kernel boot option. Provisions were also made to disable KPTI on newer processors that fix the information leaks.
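On kernels that expose the /sys/devices/system/cpu/vulnerabilities interface (Linux 4.15 and the backported versions listed above), the current mitigation state can simply be read from sysfs. The following Python sketch is an illustration of that check rather than part of the kernel patches; the exact wording of the file's contents (for example "Mitigation: PTI" or "Not affected") is an assumption about typical kernel output.

# Minimal sketch: report whether the kernel says the Meltdown mitigation (PTI) is active.
# Assumes a Linux kernel that exposes /sys/devices/system/cpu/vulnerabilities.
from pathlib import Path

def meltdown_status(sysfs: str = "/sys/devices/system/cpu/vulnerabilities/meltdown") -> str:
    path = Path(sysfs)
    if not path.exists():
        return "unknown (this kernel does not expose vulnerability status)"
    return path.read_text().strip()  # e.g. "Mitigation: PTI" or "Not affected"

if __name__ == "__main__":
    print(meltdown_status())

A kernel booted with "nopti" would typically report the mitigation as disabled here, which makes this a quick way to verify that the boot option took effect.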
References
External links
17. Page Table Isolation (PTI) - The Linux Kernel documentation
KPTI documentation patch
Linux kernel features
Virtual memory
X86 architecture
Transient execution CPU vulnerabilities | Kernel page-table isolation | Technology | 955 |
58,713,607 | https://en.wikipedia.org/wiki/Raffaella%20Ocone | Raffaella Ocone is Professor of Chemical Engineering at Heriot-Watt University and a Fellow of the Royal Academy of Engineering. In 2006 she was awarded the title Cavaliere of the Order of Merit of the Italian Republic and in the 2019 New Year Honours she was appointed OBE.
Early life and education
Ocone was born in Morcone, Italy. She graduated from the University of Naples Federico II with a Laurea (degree) in Chemical Engineering. In 1989 she received her MA, and in 1992 her PhD, both from Princeton University.
Career
Ocone's first role after her PhD was as a lecturer at the University of Naples Federico II, from 1991 to 1995. Following this she was a reader at the University of Nottingham, and a visiting professor at Louisiana State University in the US and Claude Bernard University Lyon in France. She was the first "Caroline Herschel Visiting Professor" at Ruhr-Universität Bochum, Germany (July–November 2017) and the recipient of a Visiting Research Fellowship from the Institute for Advanced Studies, University of Bologna, Italy (March–April 2018).
She has been professor of chemical engineering at Heriot-Watt University since 1999, and she was the first female professor of chemical engineering in Scotland. In 2003 she became a Chartered Engineer with the Engineering Council. She is also a Chartered Scientist with the Science Council.
Her research is in the field of modelling of complex reactive systems, for which she has been internationally recognised, including election as a Fellow of a number of royal societies. Her work has applications in the design and operation of industrial systems involving material flow. In 2013 Ocone was elected a Fellow of the Royal Academy of Engineering, which she described as "the greatest accolade for an engineer". She is an authority on complex reactive systems, and her research has been applied to the development of carbon capture and storage technologies. She co-authored a Royal Academy of Engineering report, funded by the UK government, on the biofuels industry.
She has an interest in ethics and engineering, and chaired the Royal Academy of Engineering’s Teaching Ethics group.
Awards
Fellow of the Institution of Chemical Engineers (2003)
Fellow of the Royal Society of Chemistry (2003)
Fellow of the Royal Society of Edinburgh (2006)
Cavaliere (Knighthood), Order of the Star of Italian Solidarity (2006)
Fellow of the Royal Society of Arts (2009)
Fellow of the Royal Academy of Engineering (2013)
Established Career Fellowship, Heriot-Watt University (2019)
Officer of the Order of the British Empire (OBE), for services to engineering (2019)
Public engagement
Ocone wrote on the 2040 ban on new petrol and diesel cars and on the sustainability of BECCS for The Conversation, an independent news source from the academic and research community. In 2016 she hosted an event in conversation with author Roberto Constantini at the Italian Institute in Edinburgh, discussing the success of the detective story. The previous year, she took part in a similar event with author Maurizio de Giovanni. In 2018 she delivered a lecture at the plenary session on Investigating Wet Particle Systems, as part of the discussion on the 21st-century energy mix. She has challenged some of the proposed solutions to the carbon crisis, such as the conversion of power stations to use wood chips.
Selected publications
References
Women chemical engineers
Italian women academics
Italian chemical engineers
Italian women chemists
University of Naples Federico II alumni
Princeton University alumni
Academics of Heriot-Watt University
Fellows of the Royal Academy of Engineering
Female fellows of the Royal Academy of Engineering
Living people
Officers of the Order of the British Empire
21st-century women engineers
Year of birth missing (living people) | Raffaella Ocone | Chemistry | 736 |
4,473,862 | https://en.wikipedia.org/wiki/MyLifeBits | MyLifeBits is a life-logging experiment begun in 2001. It is a Microsoft Research project inspired by Vannevar Bush's hypothetical Memex computer system. The project includes full-text search, text and audio annotations, and hyperlinks. The "experimental subject" of the project is computer scientist Gordon Bell, and the project will try to collect a lifetime of storage on and about Bell. Jim Gemmell of Microsoft Research and Roger Lueder were the architects and creators of the system and its software.
MyLifeBits is an attempt to fulfill Vannevar Bush's vision of an automated store of the documents, pictures (including those taken automatically), and sounds an individual has experienced in his lifetime, to be accessed with speed and ease. For this, Bell digitized all documents he had read or produced, CDs, emails, and so on. He continued to do so until his death in 2024, gathering browsed web pages, phone and instant-messaging conversations, and the like more or less automatically. The book Total Recall describes the vision and implications of a personal, lifetime e-memory for recall, work, health, education, and immortality.
In 2010, Total Recall was published in paperback. Bell eventually stopped using the wearable camera associated with the project. He described the rise of the smartphone as largely fulfilling Bush's vision of the Memex.
See also
Dymaxion Chronofile
Lifelog
Microsoft SenseCam
References
External links
MyLifeBits - Microsoft Research Archived version.
Gordon Bell and Jim Gemmell – A look into Microsoft's Bay Area Research Center, Part I Channel9 video, including MyLifeBits material.
Flogging Gordon Bell's Memory Thinking about how lifelogs, or flogs, would fundamentally change psychotherapy and psychiatry.
"A Head For Detail" Clive Thompson, Fast Company
Microsoft Research
Lifelogging | MyLifeBits | Technology | 392 |
30,040 | https://en.wikipedia.org/wiki/Titanium | Titanium is a chemical element; it has symbol Ti and atomic number 22. Found in nature only as an oxide, it can be reduced to produce a lustrous transition metal with a silver color, low density, and high strength, resistant to corrosion in sea water, aqua regia, and chlorine.
Titanium was discovered in Cornwall, Great Britain, by William Gregor in 1791 and was named by Martin Heinrich Klaproth after the Titans of Greek mythology. The element occurs within a number of minerals, principally rutile and ilmenite, which are widely distributed in the Earth's crust and lithosphere; it is found in almost all living things, as well as bodies of water, rocks, and soils. The metal is extracted from its principal mineral ores by the Kroll and Hunter processes. The most common compound, titanium dioxide, is a popular photocatalyst and is used in the manufacture of white pigments. Other compounds include titanium tetrachloride (TiCl4), a component of smoke screens and catalysts; and titanium trichloride (TiCl3), which is used as a catalyst in the production of polypropylene.
Titanium can be alloyed with iron, aluminium, vanadium, and molybdenum, among other elements. The resulting titanium alloys are strong, lightweight, and versatile, with applications including aerospace (jet engines, missiles, and spacecraft), military, industrial processes (chemicals and petrochemicals, desalination plants, pulp, and paper), automotive, agriculture (farming), sporting goods, jewelry, and consumer electronics. Titanium is also considered one of the most biocompatible metals, leading to a range of medical applications including prostheses, orthopedic implants, dental implants, and surgical instruments.
The two most useful properties of the metal are corrosion resistance and strength-to-density ratio, the highest of any metallic element. In its unalloyed condition, titanium is as strong as some steels, but less dense. There are two allotropic forms and five naturally occurring isotopes of this element, 46Ti through 50Ti, with 48Ti being the most abundant (73.8%).
Characteristics
Physical properties
As a metal, titanium is recognized for its high strength-to-weight ratio. It is a strong metal with low density that is quite ductile (especially in an oxygen-free environment), lustrous, and metallic-white in color. Due to its relatively high melting point (1,668 °C or 3,034 °F) it has sometimes been described as a refractory metal, but this is not the case. It is paramagnetic and has fairly low electrical and thermal conductivity compared to other metals. Titanium is superconducting when cooled below its critical temperature of 0.49 K.
Commercially pure (99.2% pure) grades of titanium have ultimate tensile strength of about 434 MPa (63,000 psi), equal to that of common, low-grade steel alloys, but are less dense. Titanium is 60% denser than aluminium, but more than twice as strong as the most commonly used 6061-T6 aluminium alloy. Certain titanium alloys (e.g., Beta C) achieve tensile strengths of over 1,400 MPa (200,000 psi). However, titanium loses strength when heated above .
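As a rough worked comparison of the strength-to-density figures above, the short Python sketch below computes specific strength (tensile strength divided by density). The density values and the aluminium strength used are assumed representative numbers for illustration, not figures taken from this article.

# Sketch: specific strength = tensile strength / density.
# Dividing MPa by g/cm^3 conveniently yields kN·m/kg.
materials = {
    # name: (tensile strength in MPa, density in g/cm^3)
    "commercially pure titanium": (434, 4.5),   # strength from the text; density assumed
    "6061-T6 aluminium":          (310, 2.7),   # both values assumed typical
    "low-grade steel":            (434, 7.8),   # strength per the text's comparison; density assumed
}

for name, (strength_mpa, density_g_cm3) in materials.items():
    specific = strength_mpa / density_g_cm3
    print(f"{name:28s} specific strength ≈ {specific:5.0f} kN·m/kg")

Even with these approximate inputs, titanium's specific strength comes out well above that of the steel it matches in absolute strength, which is the point of the comparison.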
Titanium is not as hard as some grades of heat-treated steel; it is non-magnetic and a poor conductor of heat and electricity. Machining requires precautions, because the material can gall unless sharp tools and proper cooling methods are used. Like steel structures, those made from titanium have a fatigue limit that guarantees longevity in some applications.
The metal is dimorphic, with a hexagonal close-packed α form that changes into a body-centered cubic (lattice) β form at . The specific heat of the α form increases dramatically as it is heated to this transition temperature but then falls and remains fairly constant for the β form regardless of temperature.
Chemical properties
Like aluminium and magnesium, the surface of titanium metal and its alloys oxidize immediately upon exposure to air to form a thin non-porous passivation layer that protects the bulk metal from further oxidation or corrosion. When it first forms, this protective layer is only 1–2 nm thick but it continues to grow slowly, reaching a thickness of 25 nm in four years. This layer gives titanium excellent resistance to corrosion against oxidizing acids, but it will dissolve in dilute hydrofluoric acid, hot hydrochloric acid, and hot sulfuric acid.
Titanium is capable of withstanding attack by dilute sulfuric and hydrochloric acids at room temperature, chloride solutions, and most organic acids. However, titanium is corroded by concentrated acids. Titanium is a very reactive metal that burns in normal air at lower temperatures than the melting point. Melting is possible only in an inert atmosphere or vacuum. At , it combines with chlorine. It also reacts with the other halogens and absorbs hydrogen.
Titanium readily reacts with oxygen at in air, and at in pure oxygen, forming titanium dioxide. Titanium is one of the few elements that burns in pure nitrogen gas, reacting at to form titanium nitride, which causes embrittlement. Because of its high reactivity with oxygen, nitrogen, and many other gases, titanium that is evaporated from filaments is the basis for titanium sublimation pumps, in which titanium serves as a scavenger for these gases by chemically binding to them. Such pumps inexpensively produce extremely low pressures in ultra-high vacuum systems.
Occurrence
Titanium is the ninth-most abundant element in Earth's crust (0.63% by mass) and the seventh-most abundant metal. It is present as oxides in most igneous rocks, in sediments derived from them, in living things, and natural bodies of water. Of the 801 types of igneous rocks analyzed by the United States Geological Survey, 784 contained titanium. Its proportion in soils is approximately 0.5–1.5%.
Common titanium-containing minerals are anatase, brookite, ilmenite, perovskite, rutile, and titanite (sphene). Akaogiite is an extremely rare mineral consisting of titanium dioxide. Of these minerals, only rutile and ilmenite have economic importance, yet even they are difficult to find in high concentrations. About 6.0 and 0.7 million tonnes of those minerals were mined in 2011, respectively. Significant titanium-bearing ilmenite deposits exist in Australia, Canada, China, India, Mozambique, New Zealand, Norway, Sierra Leone, South Africa, and Ukraine. About 210,000 tonnes of titanium metal sponge were produced in 2020, mostly in China (110,000 t), Japan (50,000 t), Russia (33,000 t) and Kazakhstan (15,000 t). Total reserves of anatase, ilmenite, and rutile are estimated to exceed 2 billion tonnes.
The concentration of titanium is about 4 picomolar in the ocean. At 100 °C, the concentration of titanium in water is estimated to be less than 10−7 M at pH 7. The identity of titanium species in aqueous solution remains unknown because of its low solubility and the lack of sensitive spectroscopic methods, although only the 4+ oxidation state is stable in air. No evidence exists for a biological role, although rare organisms are known to accumulate high concentrations of titanium.
Titanium is contained in meteorites, and it has been detected in the Sun and in M-type stars (the coolest type) with a surface temperature of . Rocks brought back from the Moon during the Apollo 17 mission are composed of 12.1% TiO2. Native titanium (pure metallic) is very rare.
Isotopes
Naturally occurring titanium is composed of five stable isotopes: 46Ti, 47Ti, 48Ti, 49Ti, and 50Ti, with 48Ti being the most abundant (73.8% natural abundance). At least 21 radioisotopes have been characterized, the most stable of which are 44Ti with a half-life of 63 years; 45Ti, 184.8 minutes; 51Ti, 5.76 minutes; and 52Ti, 1.7 minutes. All other radioactive isotopes have half-lives less than 33 seconds, with the majority less than half a second.
The isotopes of titanium range in atomic weight from (39Ti) to (64Ti). The primary decay mode for isotopes lighter than 46Ti is positron emission (with the exception of 44Ti which undergoes electron capture), leading to isotopes of scandium, and the primary mode for isotopes heavier than 50Ti is beta emission, leading to isotopes of vanadium.
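As a small worked example of what such half-lives imply, the sketch below applies ordinary exponential-decay arithmetic to the 63-year half-life of 44Ti quoted above; it is illustrative only.

# Sketch: remaining fraction of a radioactive sample, N(t)/N0 = (1/2)**(t / half_life).
def remaining_fraction(t_years: float, half_life_years: float = 63.0) -> float:
    return 0.5 ** (t_years / half_life_years)

for t in (10, 63, 126, 630):
    print(f"after {t:3d} years, {remaining_fraction(t):.3f} of the original 44Ti remains")

After one half-life (63 years) half the sample remains, and after ten half-lives (630 years) less than a thousandth does.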
Titanium becomes radioactive upon bombardment with deuterons, emitting mainly positrons and hard gamma rays.
Compounds
The +4 oxidation state dominates titanium chemistry, but compounds in the +3 oxidation state are also numerous. Commonly, titanium adopts an octahedral coordination geometry in its complexes, but tetrahedral TiCl4 is a notable exception. Because of its high oxidation state, titanium(IV) compounds exhibit a high degree of covalent bonding.
Oxides, sulfides, and alkoxides
The most important oxide is TiO2, which exists in three important polymorphs; anatase, brookite, and rutile. All three are white diamagnetic solids, although mineral samples can appear dark (see rutile). They adopt polymeric structures in which Ti is surrounded by six oxide ligands that link to other Ti centers.
The term titanates usually refers to titanium(IV) compounds, as represented by barium titanate (BaTiO3). With a perovskite structure, this material exhibits piezoelectric properties and is used as a transducer in the interconversion of sound and electricity. Many minerals are titanates, such as ilmenite (FeTiO3). Star sapphires and rubies get their asterism (star-forming shine) from the presence of titanium dioxide impurities.
A variety of reduced oxides (suboxides) of titanium are known, mainly reduced stoichiometries of titanium dioxide obtained by atmospheric plasma spraying. Ti3O5, described as a Ti(IV)-Ti(III) species, is a purple semiconductor produced by reduction of TiO2 with hydrogen at high temperatures, and is used industrially when surfaces need to be vapor-coated with titanium dioxide: it evaporates as pure TiO, whereas TiO2 evaporates as a mixture of oxides and deposits coatings with variable refractive index. Also known is Ti2O3, with the corundum structure, and TiO, with the rock salt structure, although often nonstoichiometric.
The alkoxides of titanium(IV), prepared by treating TiCl4 with alcohols, are colorless compounds that convert to the dioxide on reaction with water. They are industrially useful for depositing solid TiO2 via the sol-gel process. Titanium isopropoxide is used in the synthesis of chiral organic compounds via the Sharpless epoxidation.
Titanium forms a variety of sulfides, but only TiS2 has attracted significant interest. It adopts a layered structure and was used as a cathode in the development of lithium batteries. Because Ti(IV) is a "hard cation", the sulfides of titanium are unstable and tend to hydrolyze to the oxide with release of hydrogen sulfide.
Nitrides and carbides
Titanium nitride (TiN) is a refractory solid exhibiting extreme hardness, thermal/electrical conductivity, and a high melting point. TiN has a hardness equivalent to sapphire and carborundum (9.0 on the Mohs scale), and is often used to coat cutting tools, such as drill bits. It is also used as a gold-colored decorative finish and as a barrier layer in semiconductor fabrication. Titanium carbide (TiC), which is also very hard, is found in cutting tools and coatings.
Halides
Titanium tetrachloride (titanium(IV) chloride, TiCl4) is a colorless volatile liquid (commercial samples are yellowish) that, in air, hydrolyzes with spectacular emission of white clouds. Via the Kroll process, TiCl4 is used in the conversion of titanium ores to titanium metal. Titanium tetrachloride is also used to make titanium dioxide, e.g., for use in white paint. It is widely used in organic chemistry as a Lewis acid, for example in the Mukaiyama aldol condensation. In the van Arkel–de Boer process, titanium tetraiodide (TiI4) is generated in the production of high purity titanium metal.
Titanium(III) and titanium(II) also form stable chlorides. A notable example is titanium(III) chloride (TiCl3), which is used as a catalyst for production of polyolefins (see Ziegler–Natta catalyst) and a reducing agent in organic chemistry.
Organometallic complexes
Owing to the important role of titanium compounds as polymerization catalyst, compounds with Ti-C bonds have been intensively studied. The most common organotitanium complex is titanocene dichloride ((C5H5)2TiCl2). Related compounds include Tebbe's reagent and Petasis reagent. Titanium forms carbonyl complexes, e.g. (C5H5)2Ti(CO)2.
Anticancer therapy studies
Following the success of platinum-based chemotherapy, titanium(IV) complexes were among the first non-platinum compounds to be tested for cancer treatment. The advantage of titanium compounds lies in their high efficacy and low toxicity in vivo. In biological environments, hydrolysis leads to the safe and inert titanium dioxide. Despite these advantages the first candidate compounds failed clinical trials due to insufficient efficacy to toxicity ratios and formulation complications. Further development resulted in the creation of potentially effective, selective, and stable titanium-based drugs.
History
Titanium was discovered in 1791 by the clergyman and geologist William Gregor as an inclusion of a mineral in Cornwall, Great Britain. Gregor recognized the presence of a new element in ilmenite when he found black sand by a stream and noticed the sand was attracted by a magnet. Analyzing the sand, he determined the presence of two metal oxides: iron oxide (explaining the attraction to the magnet) and 45.25% of a white metallic oxide he could not identify. Realizing that the unidentified oxide contained a metal that did not match any known element, in 1791 Gregor reported his findings in both German and French science journals: Crell's Annalen and Observations et Mémoires sur la Physique. He named this oxide manaccanite.
Around the same time, Franz-Joseph Müller von Reichenstein produced a similar substance, but could not identify it. The oxide was independently rediscovered in 1795 by Prussian chemist Martin Heinrich Klaproth in rutile from Boinik (the German name of Bajmócska), a village in Hungary (now Bojničky in Slovakia).
Klaproth found that it contained a new element and named it for the Titans of Greek mythology. After hearing about Gregor's earlier discovery, he obtained a sample of manaccanite and confirmed that it contained titanium.
The currently known processes for extracting titanium from its various ores are laborious and costly; it is not possible to reduce the ore by heating with carbon (as in iron smelting) because titanium combines with the carbon to produce titanium carbide. An extraction of 95% pure titanium was achieved by Lars Fredrik Nilson and Otto Petterson. To achieve this they chlorinated titanium oxide in a carbon monoxide atmosphere with chlorine gas before reducing it to titanium metal by the use of sodium. Pure metallic titanium (99.9%) was first prepared in 1910 by Matthew A. Hunter at Rensselaer Polytechnic Institute by heating TiCl4 with sodium at under great pressure in a batch process known as the Hunter process. Titanium metal was not used outside the laboratory until 1932 when William Justin Kroll produced it by reducing titanium tetrachloride (TiCl4) with calcium. Eight years later he refined this process with magnesium and with sodium in what became known as the Kroll process. Although research continues to seek cheaper and more efficient routes, such as the FFC Cambridge process, the Kroll process is still predominantly used for commercial production.
Titanium of very high purity was made in small quantities when Anton Eduard van Arkel and Jan Hendrik de Boer discovered the iodide process in 1925, by reacting with iodine and decomposing the formed vapors over a hot filament to pure metal.
In the 1950s and 1960s, the Soviet Union pioneered the use of titanium in military and submarine applications (Alfa class and Mike class) as part of programs related to the Cold War. Starting in the early 1950s, titanium came into use extensively in military aviation, particularly in high-performance jets, starting with aircraft such as the F-100 Super Sabre and Lockheed A-12 and SR-71.
Throughout the Cold War period, titanium was considered a strategic material by the U.S. government, and a large stockpile of titanium sponge (a porous form of the pure metal) was maintained by the Defense National Stockpile Center, until the stockpile was dispersed in the 2000s. As of 2021, the four leading producers of titanium sponge were China (52%), Japan (24%), Russia (16%) and Kazakhstan (7%).
Production
Mineral beneficiation processes
The Becher process is an industrial process used to produce synthetic rutile, a form of titanium dioxide, from the ore ilmenite.
The Chloride process.
The Sulfate process: "relies on sulfuric acid (H2SO4) to leach titanium from ilmenite ore (FeTiO3). The resulting reaction produces titanyl sulfate (TiOSO4). A secondary hydrolysis stage is used to break the titanyl sulfate into hydrated TiO2 and H2SO4. Finally, heat is used to remove the water and create the end product - pure TiO2."
Purification processes
Hunter process
The Hunter process was the first industrial process to produce pure metallic titanium. It was invented in 1910 by Matthew A. Hunter, a chemist born in New Zealand who worked in the United States. The process involves reducing titanium tetrachloride (TiCl4) with sodium (Na) in a batch reactor with an inert atmosphere at a temperature of 1,000 °C. Dilute hydrochloric acid is then used to leach the salt from the product.
TiCl4(g) + 4 Na(l) → 4 NaCl(l) + Ti(s)
Kroll process
The processing of titanium metal occurs in four major steps: reduction of titanium ore into "sponge", a porous form; melting of sponge, or sponge plus a master alloy to form an ingot; primary fabrication, where an ingot is converted into general mill products such as billet, bar, plate, sheet, strip, and tube; and secondary fabrication of finished shapes from mill products.
Because it cannot be readily produced by reduction of titanium dioxide, titanium metal is obtained by reduction of titanium tetrachloride (TiCl4) with magnesium metal in the Kroll process. The complexity of this batch production in the Kroll process explains the relatively high market value of titanium, despite the Kroll process being less expensive than the Hunter process. To produce the TiCl4 required by the Kroll process, the dioxide is subjected to carbothermic reduction in the presence of chlorine. In this process, the chlorine gas is passed over a red-hot mixture of rutile or ilmenite in the presence of carbon.
After extensive purification by fractional distillation, the TiCl4 is reduced with molten magnesium in an argon atmosphere.
2 FeTiO3 + 7 Cl2 + 6 C → 2 FeCl3 + 2 TiCl4 + 6 CO (at 900 °C)
TiCl4 + 2 Mg → Ti + 2 MgCl2 (at 1100 °C)
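As a rough stoichiometric illustration of the reduction step above, the sketch below uses standard atomic masses to estimate how much magnesium and titanium tetrachloride the ideal reaction consumes per kilogram of titanium produced; these are textbook numbers, not plant figures.

# Sketch: ideal stoichiometry of TiCl4 + 2 Mg -> Ti + 2 MgCl2.
# Standard atomic masses in g/mol.
M_TI, M_MG, M_CL = 47.87, 24.31, 35.45

ti_kg = 1.0                                   # target: 1 kg of titanium sponge
mol_ti = ti_kg * 1000 / M_TI                  # moles of Ti produced
mg_kg = 2 * mol_ti * M_MG / 1000              # 2 mol Mg per mol Ti
ticl4_kg = mol_ti * (M_TI + 4 * M_CL) / 1000  # 1 mol TiCl4 per mol Ti

print(f"about {mg_kg:.2f} kg Mg and {ticl4_kg:.2f} kg TiCl4 per {ti_kg:.0f} kg Ti")

In practice some excess magnesium is typically used, so real consumption is somewhat higher than this ideal figure.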
Arkel-Boer process
The van Arkel–de Boer process was the first semi-industrial process for producing pure titanium. It involves the thermal decomposition of titanium tetraiodide.
Armstrong process
Titanium powder is manufactured using a flow production process known as the Armstrong process, which is similar to the batch Hunter process. A stream of titanium tetrachloride gas is added to a stream of molten sodium; the products (sodium chloride salt and titanium particles) are filtered from the excess sodium. Titanium is then separated from the salt by water washing. Both sodium and chlorine are recycled to produce and process more titanium tetrachloride.
Pilot plants
Methods for the electrolytic production of Ti metal using molten salt electrolytes have been researched and tested at laboratory and small pilot plant scales. The lead author of an impartial review published in 2017 considered his own process "ready for scaling up." A 2023 review "discusses the electrochemical principles involved in the recovery of metals from aqueous solutions and fused salt electrolytes", with particular attention paid to titanium. While some metals such as nickel and copper can be refined by electrowinning at room temperature, titanium must be in the molten state and "there is a strong chance of attack of the refractory lining by molten titanium." Zhang et al. concluded in their 2017 Perspective on Thermochemical and Electrochemical Processes for Titanium Metal Production that "Even though there are strong interests in the industry for finding a better method to produce Ti metal, and a large number of new concepts and improvements have been investigated at the laboratory or even at pilot plant scales, there is no new process to date that can replace the Kroll process commercially."
The hydrogen-assisted magnesiothermic reduction (HAMR) process uses titanium dihydride.
Fabrication
All welding of titanium must be done in an inert atmosphere of argon or helium to shield it from contamination with atmospheric gases (oxygen, nitrogen, and hydrogen). Contamination causes a variety of conditions, such as embrittlement, which reduce the integrity of the assembly welds and lead to joint failure.
Titanium is very difficult to solder directly, and hence a solderable metal or alloy such as steel is coated on titanium prior to soldering. Titanium metal can be machined with the same equipment and the same processes as stainless steel.
Titanium alloys
Common titanium alloys are made by reduction; examples include cuprotitanium (rutile with copper added), ferrocarbon titanium (ilmenite reduced with coke in an electric furnace), and manganotitanium (rutile with manganese or manganese oxides).
About fifty grades of titanium alloys are designed and currently used, although only a couple of dozen are readily available commercially. The ASTM International recognizes 31 grades of titanium metal and alloys, of which grades one through four are commercially pure (unalloyed). Those four vary in tensile strength as a function of oxygen content, with grade 1 being the most ductile (lowest tensile strength with an oxygen content of 0.18%), and grade 4 the least ductile (highest tensile strength with an oxygen content of 0.40%). The remaining grades are alloys, each designed for specific properties of ductility, strength, hardness, electrical resistivity, creep resistance, specific corrosion resistance, and combinations thereof.
In addition to the ASTM specifications, titanium alloys are also produced to meet aerospace and military specifications (SAE-AMS, MIL-T), ISO standards, and country-specific specifications, as well as proprietary end-user specifications for aerospace, military, medical, and industrial applications.
Forming and forging
Commercially pure flat product (sheet, plate) can be formed readily, but processing must take into account the tendency of the metal to spring back. This is especially true of certain high-strength alloys. Exposure to the oxygen in air at the elevated temperatures used in forging results in formation of a brittle, oxygen-rich metallic surface layer called "alpha case" that worsens the fatigue properties, so it must be removed by milling, etching, or electrochemical treatment. The working of titanium is complex and may include friction welding, cryo-forging, and vacuum arc remelting.
Applications
Titanium is used in steel as an alloying element (ferro-titanium) to reduce grain size and as a deoxidizer, and in stainless steel to reduce carbon content. Titanium is often alloyed with aluminium (to refine grain size), vanadium, copper (to harden), iron, manganese, molybdenum, and other metals. Titanium mill products (sheet, plate, bar, wire, forgings, castings) find application in industrial, aerospace, recreational, and emerging markets. Powdered titanium is used in pyrotechnics as a source of bright-burning particles.
Pigments, additives, and coatings
About 95% of all titanium ore is destined for refinement into titanium dioxide (), an intensely white permanent pigment used in paints, paper, toothpaste, and plastics. It is also used in cement, in gemstones, and as an optical opacifier in paper.
pigment is chemically inert, resists fading in sunlight, and is very opaque: it imparts a pure and brilliant white color to the brown or grey chemicals that form the majority of household plastics. In nature, this compound is found in the minerals anatase, brookite, and rutile. Paint made with titanium dioxide does well in severe temperatures and marine environments. Pure titanium dioxide has a very high index of refraction and an optical dispersion higher than diamond. Titanium dioxide is used in sunscreens because it reflects and absorbs UV light.
Aerospace and marine
Because titanium alloys have high tensile strength to density ratio, high corrosion resistance, fatigue resistance, high crack resistance, and ability to withstand moderately high temperatures without creeping, they are used in aircraft, armor plating, naval ships, spacecraft, and missiles. For these applications, titanium is alloyed with aluminium, zirconium, nickel, vanadium, and other elements to manufacture a variety of components including critical structural parts, landing gear, firewalls, exhaust ducts (helicopters), and hydraulic systems. In fact, about two thirds of all titanium metal produced is used in aircraft engines and frames. The titanium 6AL-4V alloy accounts for almost 50% of all alloys used in aircraft applications.
The Lockheed A-12 and the SR-71 "Blackbird" were two of the first aircraft frames where titanium was used, paving the way for much wider use in modern military and commercial aircraft. A large amount of titanium mill products are used in the production of many aircraft, such as (following values are amount of raw mill products used, only a fraction of this ends up in the finished aircraft): 116 metric tons are used in the Boeing 787, 77 in the Airbus A380, 59 in the Boeing 777, 45 in the Boeing 747, 32 in the Airbus A340, 18 in the Boeing 737, 18 in the Airbus A330, and 12 in the Airbus A320. In aero engine applications, titanium is used for rotors, compressor blades, hydraulic system components, and nacelles. An early use in jet engines was for the Orenda Iroquois in the 1950s.
Because titanium is resistant to corrosion by sea water, it is used to make propeller shafts, rigging, heat exchangers in desalination plants, heater-chillers for salt water aquariums, fishing line and leader, and divers' knives. Titanium is used in the housings and components of ocean-deployed surveillance and monitoring devices for science and military. The former Soviet Union developed techniques for making submarines with hulls of titanium alloys, forging titanium in huge vacuum tubes.
Industrial
Welded titanium pipe and process equipment (heat exchangers, tanks, process vessels, valves) are used in the chemical and petrochemical industries primarily for corrosion resistance. Specific alloys are used in oil and gas downhole applications and nickel hydrometallurgy for their high strength (e. g.: titanium beta C alloy), corrosion resistance, or both. The pulp and paper industry uses titanium in process equipment exposed to corrosive media, such as sodium hypochlorite or wet chlorine gas (in the bleachery). Other applications include ultrasonic welding, wave soldering, and sputtering targets.
Titanium tetrachloride (TiCl4), a colorless liquid, is important as an intermediate in the process of making TiO2 and is also used to produce the Ziegler–Natta catalyst. Titanium tetrachloride is also used to iridize glass and, because it fumes strongly in moist air, it is used to make smoke screens.
Consumer and architectural
Titanium metal is used in automotive applications, particularly in automobile and motorcycle racing where low weight and high strength and rigidity are critical. The metal is generally too expensive for the general consumer market, though some late model Corvettes have been manufactured with titanium exhausts, and a Corvette Z06's LT4 supercharged engine uses lightweight, solid titanium intake valves for greater strength and resistance to heat.
Titanium is used in many sporting goods: tennis rackets, golf clubs, lacrosse stick shafts; cricket, hockey, lacrosse, and football helmet grills, and bicycle frames and components. Although not a mainstream material for bicycle production, titanium bikes have been used by racing teams and adventure cyclists.
Titanium alloys are used in spectacle frames that are rather expensive but highly durable, long lasting, light weight, and cause no skin allergies. Titanium is a common material for backpacking cookware and eating utensils. Though more expensive than traditional steel or aluminium alternatives, titanium products can be significantly lighter without compromising strength. Titanium horseshoes are preferred to steel by farriers because they are lighter and more durable.
Titanium has occasionally been used in architecture. The Monument to Yuri Gagarin, the first man to travel in space, as well as the Monument to the Conquerors of Space on top of the Cosmonaut Museum in Moscow are made of titanium for the metal's attractive color and association with rocketry. The Guggenheim Museum Bilbao and the Cerritos Millennium Library were the first buildings in Europe and North America, respectively, to be sheathed in titanium panels. Titanium sheathing was used in the Frederic C. Hamilton Building in Denver, Colorado.
Because of titanium's superior strength and light weight relative to other metals (steel, stainless steel, and aluminium), and because of recent advances in metalworking techniques, its use has become more widespread in the manufacture of firearms. Primary uses include pistol frames and revolver cylinders. For the same reasons, it is used in the body of some laptop computers (for example, in Apple's PowerBook G4).
In 2023, Apple launched the iPhone 15 Pro, which uses a titanium enclosure.
Some upmarket lightweight and corrosion-resistant tools, such as shovels, knife handles and flashlights, are made of titanium or titanium alloys.
Jewelry
Because of its durability, titanium has become more popular for designer jewelry (particularly, titanium rings). Its inertness makes it a good choice for those with allergies or those who will be wearing the jewelry in environments such as swimming pools. Titanium is also alloyed with gold to produce an alloy that can be marketed as 24-karat gold because the 1% of alloyed Ti is insufficient to require a lesser mark. The resulting alloy is roughly the hardness of 14-karat gold and is more durable than pure 24-karat gold.
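To make the karat arithmetic explicit: caratage is 24 times the mass fraction of gold, so the sketch below shows why a 1% titanium addition still leaves an alloy nominally close enough to pure to be marketed as 24-karat, as described above; the marking rule itself comes from the text, and only the standard caratage formula is added here.

# Sketch: caratage = 24 * (mass fraction of gold).
def karat(gold_mass_fraction: float) -> float:
    return 24 * gold_mass_fraction

print(f"Au with 1% Ti: nominally {karat(0.99):.2f} karat (marketed as 24-karat per the text)")
print(f"14-karat gold: {14 / 24:.1%} gold by mass")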
Titanium's durability, light weight, and dent and corrosion resistance make it useful for watch cases. Some artists work with titanium to produce sculptures, decorative objects and furniture.
Titanium may be anodized to vary the thickness of the surface oxide layer, causing optical interference fringes and a variety of bright colors. With this coloration and chemical inertness, titanium is a popular metal for body piercing.
Titanium has a minor use in dedicated non-circulating coins and medals. In 1999, Gibraltar released the world's first titanium coin for the millennium celebration. The Gold Coast Titans, an Australian rugby league team, award a medal of pure titanium to their player of the year.
Medical
Because titanium is biocompatible (non-toxic and not rejected by the body), it has many medical uses, including surgical implements and implants, such as hip balls and sockets (joint replacement) and dental implants that can stay in place for up to 20 years. The titanium is often alloyed with about 4% aluminium or 6% Al and 4% vanadium.
Titanium has the inherent ability to osseointegrate, enabling use in dental implants that can last for over 30 years. This property is also useful for orthopedic implant applications. These benefit from titanium's lower modulus of elasticity (Young's modulus) to more closely match that of the bone that such devices are intended to repair. As a result, skeletal loads are more evenly shared between bone and implant, leading to a lower incidence of bone degradation due to stress shielding and periprosthetic bone fractures, which occur at the boundaries of orthopedic implants. However, titanium alloys' stiffness is still more than twice that of bone, so adjacent bone bears a greatly reduced load and may deteriorate.
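The stress-shielding effect described above can be illustrated with a simple parallel-stiffness model: when implant and bone deform together, each carries load roughly in proportion to its elastic modulus times cross-sectional area. The moduli in the sketch below are assumed representative values, not figures from this article.

# Sketch: two members strained equally in parallel share load in proportion to E*A.
# Moduli in GPa are assumptions: cortical bone ~18, Ti alloy ~110, stainless steel ~200.
def bone_load_share(e_implant_gpa: float, e_bone_gpa: float = 18.0, area_ratio: float = 1.0) -> float:
    """Fraction of the total load carried by the bone (equal cross-sections by default)."""
    return e_bone_gpa / (e_bone_gpa + e_implant_gpa * area_ratio)

for name, modulus in [("titanium alloy", 110.0), ("stainless steel", 200.0)]:
    print(f"with a {name} implant, bone carries ≈ {bone_load_share(modulus):.0%} of the load")

Under these assumptions the bone next to a titanium implant carries a larger share of the load than it would next to a stiffer steel implant, which is why the closer modulus match reduces stress shielding.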
Because titanium is non-ferromagnetic, patients with titanium implants can be safely examined with magnetic resonance imaging (convenient for long-term implants). Preparing titanium for implantation in the body involves subjecting it to a high-temperature plasma arc which removes the surface atoms, exposing fresh titanium that is instantly oxidized.
Modern advancements in additive manufacturing techniques have increased potential for titanium use in orthopedic implant applications. Complex implant scaffold designs can be 3D-printed using titanium alloys, which allows for more patient-specific applications and increased implant osseointegration.
Titanium is used for the surgical instruments used in image-guided surgery, as well as wheelchairs, crutches, and any other products where high strength and low weight are desirable.
Titanium dioxide nanoparticles are widely used in electronics and the delivery of pharmaceuticals and cosmetics.
Nuclear waste storage
Because of its corrosion resistance, containers made of titanium have been studied for the long-term storage of nuclear waste. Containers lasting more than 100,000 years are thought possible with manufacturing conditions that minimize material defects. A titanium "drip shield" could also be installed over containers of other types to enhance their longevity.
Precautions
Titanium is non-toxic even in large doses and does not play any natural role inside the human body. An estimated quantity of 0.8 milligrams of titanium is ingested by humans each day, but most passes through without being absorbed in the tissues. It does, however, sometimes bio-accumulate in tissues that contain silica. One study indicates a possible connection between titanium and yellow nail syndrome.
As a powder or in the form of metal shavings, titanium metal poses a significant fire hazard and, when heated in air, an explosion hazard. Water and carbon dioxide are ineffective for extinguishing a titanium fire; Class D dry powder agents must be used instead.
When used in the production or handling of chlorine, titanium should not be exposed to dry chlorine gas because it may result in a titanium–chlorine fire.
Titanium can catch fire when a fresh, non-oxidized surface comes in contact with liquid oxygen.
Function in plants
An unknown mechanism in plants may use titanium to stimulate the production of carbohydrates and encourage growth. This may explain why most plants contain about 1 part per million (ppm) of titanium, food plants have about 2 ppm, and horsetail and nettle contain up to 80 ppm.
See also
Titanium alloys
Suboxide
Titanium in zircon geothermometry
Titanium Man
Footnotes
References
Bibliography
External links
"Titanium: Our Next Major Metal" in Popular Science (October 1950), one of first general public detailed articles on Titanium
Titanium at Periodic Videos (University of Nottingham)
Titanium.org: official website of the International Titanium Association, an industry association
Metallurgy of Titanium and its Alloys - slide presentations, movies, and other material from Harshad Bhadeshia and other Cambridge University metallurgists
Aerospace materials
Biomaterials
Chemical elements with hexagonal close-packed structure
Chemical elements
Native element minerals
Pyrotechnic fuels
Transition metals | Titanium | Physics,Engineering,Biology | 7,628 |