Dataset columns: id (int64, 580 to 79M), url (string, 31–175 chars), text (string, 9–245k chars), source (string, 1–109 chars), categories (string, 160 classes), token_count (int64, 3 to 51.8k)
1,176,614
https://en.wikipedia.org/wiki/Granzyme
Granzymes are serine proteases released by cytoplasmic granules within cytotoxic T cells and natural killer (NK) cells. They induce programmed cell death (apoptosis) in the target cell, thus eliminating cells that have become cancerous or are infected with viruses or bacteria. Granzymes also kill bacteria and inhibit viral replication. In NK cells and T cells, granzymes are packaged in cytotoxic granules along with perforin. Granzymes can also be detected in the rough endoplasmic reticulum, golgi complex, and the trans-golgi reticulum. The contents of the cytotoxic granules function to permit entry of the granzymes into the target cell cytosol. The granules are released into an immune synapse formed with a target cell, where perforin mediates the delivery of the granzymes into endosomes in the target cell, and finally into the target cell cytosol. Granzymes are part of the serine esterase family. They are closely related to other immune serine proteases expressed by innate immune cells, such as neutrophil elastase and cathepsin G. Granzyme B activates apoptosis by activating caspases (especially caspase-3), which cleaves many substrates, including caspase-activated DNase to execute cell death. Granzyme B also cleaves the protein Bid, which recruits the proteins Bax and Bak to change the membrane permeability of the mitochondria, causing the release of cytochrome c (which is one of the parts needed to activate caspase-9 via the apoptosome), Smac/Diablo and Omi/HtrA2 (which suppress the inhibitor of apoptosis proteins (IAPs)), among other proteins. Granzyme B also cleaves many of the proteins responsible for apoptosis in the absence of caspase activity. The other granzymes activate cell death by caspase-dependent and caspase-independent mechanisms. In addition to killing their target cells, granzymes can target and kill intracellular pathogens. Granzymes A and B induce lethal oxidative damage in bacteria by cleaving components of the electron transport chain, while granzyme B cleaves viral proteins to inhibit viral activation and replication. The granzymes bind directly to the nucleic acids DNA and RNA; this enhances their cleavage of nucleic acid binding proteins. More recently, in addition to T lymphocytes, granzymes have been shown to be expressed in other types of immune cells such as dendritic cells, B cells and mast cells. In addition, granzymes may also be expressed in non-immune cells such as keratinocytes, pneumocytes and chondrocytes. As many of these cell types either do not express perforin or do not form immunological synapses, granzyme B is released extracellularly. Extracellular granzyme B can accumulate in the extracellular space in diseases associated with dysregulated or chronic inflammation leading to the degradation of extracellular matrix proteins and impaired tissue healing and remodelling. Extracellular granzyme B has been implicated in the pathogenesis of atherosclerosis, aneurysm, vascular leakage, chronic wound healing, and skin aging. History In 1986 Jürg Tschopp and his group published a paper on their discovery of granzymes. In the paper they discussed how they purified, characterized and discovered a variety of granzymes found within cytolytic granules that were carried by cytotoxic T lymphocytes and natural killer cells. Jürg was able to identify 8 different granzymes and discovered partial amino acid sequences for each. 
The molecules were unofficially named "Grs" for five years before Tschopp and his team came up with the name "granzymes", which was widely accepted by the scientific community. Granzyme secretion can be detected and measured using Western blot or ELISA techniques. Granzyme-secreting cells can be identified and quantified by flow cytometry or ELISPOT. Alternatively, granzymes can be assayed by virtue of their protease activity. Other functions In the paper "Granzymes in Cancer and Immunity", Cullen discusses how granzyme A is found at elevated levels in patients who have an infectious disease and/or are in a pro-inflammatory state. Granzymes have also been found to help initiate the inflammatory response. "For example, rheumatoid arthritis patients have increased levels of granzyme A in the synovial fluid of swollen joints." When granzymes are extracellular, they can activate macrophages and mast cells to initiate the inflammatory response. The interactions between granzymes and somatic cells are still not fully understood, but advances in understanding the process are being made constantly. Other granzymes, such as granzyme K, have been found at high levels in patients with sepsis. Granzyme H levels have been found to correlate directly with viral infection. Scientists conclude that granzyme H specializes in the 'proteolytic degradation' of viral proteins. Cullen further states that granzymes may have a role in immunomodulation, that is, in maintaining homeostasis in the immune system during an infection. "In humans, loss of perforin function leads to a syndrome called familial hemophagocytic lymphohistiocytosis […]". This syndrome can be fatal: both T cells and macrophages multiply to fight the pathogen, resulting in harmful levels of proinflammatory cytokines. The overactivation can lead to inflammation of vital organs and to anemia, as overactivated macrophages phagocytose blood cells. Trapani's paper discusses how granzymes may have other functions in addition to their ability to fight off infection. Granzyme A can induce proliferation in B cells, reducing the chance of cancer growth and formation. Tests on mice have shown that granzymes A and B might not directly control viral infections, but instead help accelerate the immune system's response. In cancer research In "Granzymes in Cancer and Immunity", Cullen describes "immune surveillance [as] the process whereby precancerous and malignant cells are recognized by the immune system as damaged and are consequently targeted for elimination". For a tumor to progress, it requires growth-promoting conditions within the body and the surrounding area. Almost all people have suitable immune cells to fight off tumors in the body. Studies have shown that the immune system can even prevent precancerous cells from growing and mediate the regression of established tumors. A dangerous feature of cancer cells is their ability to inhibit the function of the immune system. Although a tumor may be in its early stages and very weak, it may be giving off chemicals that inhibit the immune system, allowing the tumor to grow and become harmful. Tests have shown that mice lacking granzymes and perforin are at high risk of tumors spreading throughout their bodies. 
Tumors have the ability to escape from immune surveillance by secreting immunosuppressive TGF-β. This inhibits proliferation and activation of T cells. TGF-β production is the most potent mechanism of immune avoidance used by tumors. TGF-β inhibits expression of five different cytotoxic genes including perforin, granzyme A, and granzyme B, which then inhibits T cell-mediated tumor clearance. Perforin's role in protecting the body against lymphoma was emphasized when scientists discovered that p53 did not have as big of a role in lymphoma surveillance as its counterpart perforin. Perforin and granzymes have been found to have a directly related ability to protect the body against the formation of different kinds of lymphomas. Genes References EC 3.4.21 Immune system Programmed cell death
Granzyme
Chemistry,Biology
1,743
26,991,991
https://en.wikipedia.org/wiki/Pilish
Pilish is a style of constrained writing in which the lengths of consecutive words or sentences match the digits of the number π (pi). The shortest example is any three-letter word, such as "hat", but many longer examples have been constructed, including sentences, poems, and stories. Examples The following sentence is an example which matches the first fifteen digits of π: How I need a drink, alcoholic of course, after the heavy lectures involving quantum mechanics! The following Pilish poem (written by Joseph Shipley) matches the first 31 digits of π: But a time I spent wandering in bloomy night; Yon tower, tinkling chimewise, loftily opportune. Out, up, and together came sudden to Sunday rite, The one solemnly off to correct plenilune. A full-length Pilish novel has been published, Not A Wake by Mike Keith, which currently holds the record for the longest Pilish text, with 10,000 digits represented. Basic Pilish and Standard Pilish In order to deal with occurrences of the digit zero, the following rule set was introduced (referred to as Basic Pilish): In Basic Pilish, each word of n letters represents (1) the digit n if n < 10, (2) the digit 0 if n = 10. Since long runs of small non-zero digits are difficult to deal with naturally (such as 1121 or 1111211), another rule set called Standard Pilish was introduced: In Standard Pilish, each word of n letters represents (1) the digit n if n < 10, (2) the digit 0 if n = 10, (3) two consecutive digits if n > 10 (for example, an 11-letter word, such as "imagination", represents the digits 1,1). Cadae Cadae is a poetry form similar to the fib, but based on π. The word "cadae" is the alphabetical equivalent of the first five digits of π, 3.1415. The form of a cadae is based on pi on two levels. There are five stanzas, with 3, 1, 4, 1, and 5 lines each, respectively, for a total of fourteen lines in the poem. Each line of the poem also contains an appropriate number of syllables. The first line has three syllables, the second has one, the third has four, and so on, following the sequence of pi as it extends infinitely. Rachel Hommel wrote an untitled "Cadaeic Cadae", which uses the cadaeic form as explained above, and adds a level of complexity to it wherein the number of letters in each word represents a digit of pi. For his book "The Burning Door", Tony Leuzzi wrote a series of 33 untitled poems in cadaeic form. Cadaeic Cadenza "Cadaeic Cadenza" is a 1996 short story by Mike Keith. It is an example of cadae and Pilish; a cadenza is a solo passage in music. In addition to the main restriction, the author attempts to mimic portions, or entire works, of different types and pieces of literature ("The Raven", "Jabberwocky", the lyrics of Yes, "The Love Song of J. Alfred Prufrock", Rubaiyat, Hamlet, and Carl Sandburg's Grass) in story, structure, and rhyme. Some sections of the poem use words of more than ten letters as a one followed by another digit; for example, the line "And fear overcame my being – the fear of 'forevermore'" encodes the digits 3, 4, 8, 2, 5, 3, 4, 2, 11, where 11 represents two consecutive digit "1"s in pi. The first part of Cadaeic Cadenza is slightly changed from an earlier version, "Near a Raven", which was a retelling of Edgar Allan Poe's "The Raven". See also Constrained writing Six nines in pi (handled at the start of chapter 2, "Change") References Walkowicz, Nathan (2021). Stile: "An Infinite Mystery". Kindle Direct Publishing. 
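The Basic and Standard Pilish rules described above lend themselves to a mechanical check. The following is a minimal sketch of a Standard Pilish checker in Python (an illustration of the rules as stated, not one of the published tools linked below; the function names and the hard-coded digit string are the sketch's own assumptions):

import re

PI_DIGITS = "314159265358979323846264338327950288"  # extend for longer texts

def word_lengths(text):
    # Count letters only; punctuation is ignored.
    return [len(w) for w in re.findall(r"[A-Za-z]+", text)]

def encode_standard_pilish(text):
    # Map each word length n to the digit(s) it represents:
    # n if n < 10, "0" if n == 10, two consecutive digits if n > 10.
    digits = []
    for n in word_lengths(text):
        if n == 10:
            digits.append("0")
        else:
            digits.append(str(n))  # e.g. an 11-letter word stands for the digits 1,1
    return "".join(digits)

def matches_pi(text):
    encoded = encode_standard_pilish(text)
    return PI_DIGITS.startswith(encoded), encoded

ok, enc = matches_pi("How I need a drink, alcoholic of course, "
                     "after the heavy lectures involving quantum mechanics!")
print(ok, enc)  # True 314159265358979

Run on the fifteen-word example sentence quoted above, the encoder reproduces the first fifteen digits of π.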
External links Writing in Pilish Pilish Checker Accidental Pilish Cadaeic Cadenza Constrained writing Pi
Pilish
Mathematics
919
233,266
https://en.wikipedia.org/wiki/Martensite
Martensite is a very hard form of steel crystalline structure. It is named after German metallurgist Adolf Martens. By analogy the term can also refer to any crystal structure that is formed by diffusionless transformation. Properties Martensite is formed in carbon steels by the rapid cooling (quenching) of the austenite form of iron at such a high rate that carbon atoms do not have time to diffuse out of the crystal structure in large enough quantities to form cementite (Fe3C). Austenite is gamma-phase iron (γ-Fe), a solid solution of iron and alloying elements. As a result of the quenching, the face-centered cubic austenite transforms to a highly strained body-centered tetragonal form called martensite that is supersaturated with carbon. The shear deformations that result produce a large number of dislocations, which is a primary strengthening mechanism of steels. The highest hardness of a pearlitic steel is 400 Brinell, whereas martensite can achieve 700 Brinell. The martensitic reaction begins during cooling when the austenite reaches the martensite start temperature (Ms), and the parent austenite becomes mechanically unstable. As the sample is quenched, an increasingly large percentage of the austenite transforms to martensite until the lower transformation temperature Mf is reached, at which time the transformation is completed. For a eutectoid steel (0.76% C), between 6 and 10% of austenite, called retained austenite, will remain. The percentage of retained austenite increases from insignificant for less than 0.6% C steel, to 13% retained austenite at 0.95% C and 30–47% retained austenite for a 1.4% carbon steel. A very rapid quench is essential to create martensite. For a eutectoid carbon steel of thin section, if the quench starting at 750 °C and ending at 450 °C takes place in 0.7 seconds (a rate of 430 °C/s) no pearlite will form, and the steel will be martensitic with small amounts of retained austenite. For steel with 0–0.6% carbon, the martensite has the appearance of lath and is called lath martensite. For steel with greater than 1% carbon, it will form a plate-like structure called plate martensite. Between those two percentages, the physical appearance of the grains is a mix of the two. The strength of the martensite is reduced as the amount of retained austenite grows. If the cooling rate is slower than the critical cooling rate, some amount of pearlite will form, starting at the grain boundaries where it will grow into the grains until the Ms temperature is reached, then the remaining austenite transforms into martensite at about half the speed of sound in steel. In certain alloy steels, martensite can be formed by working the steel at Ms temperature by quenching to below Ms and then working by plastic deformations to reductions of cross section area between 20% and 40% of the original. The process produces dislocation densities up to 1013/cm2. The great number of dislocations, combined with precipitates that originate and pin the dislocations in place, produces a very hard steel. This property is frequently used in toughened ceramics like yttria-stabilized zirconia and in special steels like TRIP steels. Thus, martensite can be thermally induced or stress induced. The growth of martensite phase requires very little thermal activation energy because the process is a diffusionless transformation, which results in the subtle but rapid rearrangement of atomic positions, and has been known to occur even at cryogenic temperatures. 
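As a quick check of the quench-rate figure quoted above (the drop from 750 °C to 450 °C in 0.7 seconds), a short Python calculation using only the values stated in the text:

t_start_c = 750.0   # °C, start of the quench interval quoted above
t_end_c = 450.0     # °C, end of the quench interval
duration_s = 0.7    # s, time available before pearlite would begin to form

cooling_rate = (t_start_c - t_end_c) / duration_s
print(f"average cooling rate = {cooling_rate:.0f} °C/s")  # about 429 °C/s, i.e. roughly 430 °C/s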
Martensite has a lower density than austenite, so that the martensitic transformation results in a relative change of volume. Of considerably greater importance than the volume change is the shear strain, which has a magnitude of about 0.26 and which determines the shape of the plates of martensite. Martensite is not shown in the equilibrium phase diagram of the iron-carbon system because it is not an equilibrium phase. Equilibrium phases form by slow cooling rates that allow sufficient time for diffusion, whereas martensite is usually formed by very high cooling rates. Since chemical processes (the attainment of equilibrium) accelerate at higher temperature, martensite is easily destroyed by the application of heat. This process is called tempering. In some alloys, the effect is reduced by adding elements such as tungsten that interfere with cementite nucleation, but more often than not, the nucleation is allowed to proceed to relieve stresses. Since quenching can be difficult to control, many steels are quenched to produce an overabundance of martensite, then tempered to gradually reduce its concentration until the preferred structure for the intended application is achieved. The needle-like microstructure of martensite leads to brittle behavior of the material. Too much martensite leaves steel brittle; too little leaves it soft. See also Eutectic Eutectoid Ferrite (iron) Maraging steel Spring steel Tool steel References External links Comprehensive resources on martensite from the University of Cambridge YouTube Lecture by Prof. HDKH Bhadeshia , from the University of Cambridge Metallurgy for the Non-Metallurgist from the American Society for Metals PTCLab---Capable of calculating martensite crystallography with single shear or double shear theory New book for free download, on Theory of Transformations in Steels, the University of Cambridge Ceramic materials Steels
Martensite
Engineering
1,169
562,827
https://en.wikipedia.org/wiki/POP-11
POP-11 is a reflective, incrementally compiled programming language with many of the features of an interpreted language. It is the core language of the Poplog programming environment developed originally by the University of Sussex, and recently in the School of Computer Science at the University of Birmingham, which hosts the main Poplog website. POP-11 is an evolution of the language POP-2, developed in Edinburgh University, and features an open stack model (like Forth, among others). It is mainly procedural, but supports declarative language constructs, including a pattern matcher, and is mostly used for research and teaching in artificial intelligence, although it has features sufficient for many other classes of problems. It is often used to introduce symbolic programming techniques to programmers of more conventional languages like Pascal, who find POP syntax more familiar than that of Lisp. One of POP-11's features is that it supports first-class functions. POP-11 is the core language of the Poplog system. The availability of the compiler and compiler subroutines at run-time (a requirement for incremental compiling) gives it the ability to support a far wider range of extensions (including run-time extensions, such as adding new data-types) than would be possible using only a macro facility. This made it possible for (optional) incremental compilers to be added for Prolog, Common Lisp and Standard ML, which could be added as required to support either mixed language development or development in the second language without using any POP-11 constructs. This made it possible for Poplog to be used by teachers, researchers, and developers who were interested in only one of the languages. The most successful product developed in POP-11 was the Clementine data mining system, developed by ISL. After SPSS bought ISL, they renamed Clementine to SPSS Modeler and decided to port it to C++ and Java, and eventually succeeded with great effort, and perhaps some loss of the flexibility provided by the use of an AI language. POP-11 was for a time available only as part of an expensive commercial package (Poplog), but since about 1999 it has been freely available as part of the open-source software version of Poplog, including various added packages and teaching libraries. An online version of ELIZA using POP-11 is available at Birmingham. At the University of Sussex, David Young used POP-11 in combination with C and Fortran to develop a suite of teaching and interactive development tools for image processing and vision, and has made them available in the Popvision extension to Poplog. 
Simple code examples Here is an example of a simple POP-11 program:

define Double(Source) -> Result;
    Source * 2 -> Result;
enddefine;

Double(123) =>

That prints out: ** 246

This one includes some list processing:

define RemoveElementsMatching(Element, Source) -> Result;
    lvars Index;
    [% for Index in Source do
        unless Index = Element or Index matches Element then
            Index;
        endunless;
    endfor %] -> Result;
enddefine;

RemoveElementsMatching("the", [the cat sat on the mat]) =>
;;; outputs [cat sat on mat]

RemoveElementsMatching("the", [[the cat] [sat on] the mat]) =>
;;; outputs [[the cat] [sat on] mat]

RemoveElementsMatching([= cat], [[the cat] is a [big cat]]) =>
;;; outputs [is a]

Examples using the POP-11 pattern matcher, which makes it relatively easy for students to learn to develop sophisticated list-processing programs without having to treat patterns as tree structures accessed by 'head' and 'tail' functions (CAR and CDR in Lisp), can be found in the online introductory tutorial. The matcher is at the heart of the SimAgent (sim_agent) toolkit. Some of the powerful features of the toolkit, such as linking pattern variables to inline code variables, would have been very difficult to implement without the incremental compiler facilities. See also COWSEL (aka POP-1) programming language References R. Burstall, A. Collins and R. Popplestone, Programming in Pop-2, University Press, Edinburgh, 1968 D.J.M. Davies, POP-10 Users' Manual, Computer Science Report #25, University of Western Ontario, 1976 S. Hardy and C. Mellish, 'Integrating Prolog in the Poplog environment', in Implementations of Prolog, Ed. J.A. Campbell, Wiley, New York, 1983, pp 147–162 R. Barrett, A. Ramsay and A. Sloman, POP-11: a Practical Language for Artificial Intelligence, Ellis Horwood, Chichester, 1985 M. Burton and N. Shadbolt, POP-11 Programming for Artificial Intelligence, Addison-Wesley, 1987 J. Laventhol, Programming in POP-11, Blackwell Scientific Publications Ltd., 1987 R. Barrett and A. Ramsay, Artificial Intelligence in Practice: Examples in Pop-11, Ellis Horwood, Chichester, 1987. M. Sharples et al., Computers and Thought, MIT Press, 1987. (An introduction to Cognitive Science using Pop-11. Online version referenced above.) James Anderson, Ed., Pop-11 Comes of Age: The Advancement of an AI Programming Language, Ellis Horwood, Chichester, 1989 G. Gazdar and C. Mellish, Natural Language Processing in Pop11/Prolog/Lisp, Addison Wesley, 1989. (read online) R. Smith, A. Sloman and J. Gibson, POPLOG's two-level virtual machine support for interactive languages, in Research Directions in Cognitive Science Volume 5: Artificial Intelligence, Eds. D. Sleeman and N. Bernsen, Lawrence Erlbaum Associates, pp. 203–231, 1992. (Available as Cognitive Science Research Report 153, School of Informatics, University of Sussex). Chris Thornton and Benedict du Boulay, Artificial Intelligence Through Search, Kluwer Academic (Paperback version Intellect Books), Dordrecht, Netherlands & Norwell, MA, USA (Intellect at Oxford), 1992. A. Sloman, Pop-11 Primer, 1999 (Third edition) External links Free Poplog Portal Information about POP-11 teaching materials The Poplog.org website (including partial mirror of Free poplog web site) (currently defunct: see its more recent copy (Jun 17, 2008) @ Internet Archive Wayback Machine) An Overview of POP-11 (Primer for experienced programmers) (alt. 
PDF) Waldek Hebisch produced a small collection of programming examples in Pop-11, showing how it can be used for symbol manipulation, numerical calculation, logic and mathematics. Computers and Thought: A practical Introduction to Artificial Intelligence on-line book introducing Cognitive Science through Pop-11. The SimAgent (sim_agent) Toolkit Pop-11 Eliza in the poplog system. Tutorial on Eliza History of AI teaching in Pop-11 since about 1976. 2-D (X) graphics in Pop-11 Objectclass the object oriented programming extension to Pop-11 (modelled partly on CLOS and supporting multiple inheritance). Tutorial introduction to object oriented programming in Pop-11. Further references Online documentation on Pop-11 and Poplog Online system documentation, including porting information Entry for Pop-11 at HOPL (History of Programming Languages) web site Lisp programming language family Artificial intelligence History of computing in the United Kingdom Science and technology in East Sussex University of Sussex
POP-11
Technology
1,607
27,850,970
https://en.wikipedia.org/wiki/Touch%20Book
The Touch Book is a portable computing device that functions as a netbook, and a tablet computer. Designed by Always Innovating, a company situated in the city of Menlo Park, in California, USA, it was launched at the DEMO conference in March 2009. Its designers stated at launch that it is the first netbook featuring a detachable keyboard with a long battery life (more than 10 hours). It is based on the ARM TI OMAP3530 processor (taking advantage of the Beagleboard and existing open source software) and features a touchscreen. First units to customers were shipped in August 2009. There were some (expected) software issues for early adopters, which are being progressively addressed. There were also some hardware issues, which resulted in community discontent. After much speculation on the community forum, a revised v.2 Touch Book and new Smart Book product were announced. The Smart Book is based on the BeagleBoard-xM design. Overview The Touch Book is a netbook and a touch tablet device. It features a detachable keyboard, a removable back cover to access the electronics of the device, and several Linux distributions shipped by default and offered via a multiboot system. The default operating system launched is a custom Linux OS based on Ångström, being custom themed to fit the small form factor. Since 2010, the Touch Book comes with a multi-boot graphical interface, allowing users to run also Ubuntu and Android. Users can install other OSes like Gentoo and RISC OS. Touch Book's major intended uses are media viewing and web browsing, although more power-hungry applications such as OpenOffice.org are available on the device. The Touch Book ships standard graphics libraries such as OpenGL ES and SDL. The Always Innovating team claims the device follows an open hardware philosophy, so that anyone can access the hardware design from the company's website, modify it and redistribute it. Although the business model of open hardware is still maturing, it seems to be profitable to the company In addition to this open hardware approach, the Touch Book fully relies on open source software. A Git repository of the entire OS is currently available, allowing for download of the latest kernel source as well as the different root file systems. A community of contributors has emerged and is interacting with the Always Innovating developers. Technical specifications ARM 600 MHz (overclocked to 720 MHz) Cortex-A8 superscalar microprocessor core: Texas Instruments OMAP3530 System-on-Chip 512 MB DDR-333 SDRAM 256 MB NAND flash memory DSP Core video processor at 430 MHz OpenGL ES 2.0 compliant Freescale-based 3D accelerometer Integrated Ralink-based Wi-Fi 802.11b/g/n Integrated Bluetooth 2.0 1024x600 resolution touchscreen LCD, 8.9" widescreen, 16.7 million colors SDHC card slot (currently supporting up to 32 GB of storage) Headphone output, microphone input Standard QWERTY keyboard and touchpad USB 2.0 OTG port (480 Mbit/s) 6× USB 2.0 host ports (480 Mbit/s) JTAG debugging interface Runs the Linux kernel (2.6.x) Dual 12000mAh + 6000mAh rechargeable lithium polymer battery Estimated 10+ hour battery life for video and general applications Detachable magnet system for the tablet Dimensions: 248 mm × 158 mm × 19 mm for the tablet, 248 mm × 180 mm × 33.5 mm for the full netbook Mass: 675 g for the Tablet; 1,418 g for the full netbook Similar products Other single-board computers using OMAP3500 series processors include OSWALD, Beagle Board, IGEPv2, Pandora and Gumstix Overo series. 
Since the launch of the device in March 2009, several similar devices appeared in early 2010, such as the Freescale tablet reference design, the Lenovo U1 IdeaPad, and the Panasonic Toughbook. References External links RISC OS Embedded Linux Texas Instruments hardware Linux-based devices 2-in-1 PCs Computer-related introductions in 2009
Touch Book
Technology
873
62,064,684
https://en.wikipedia.org/wiki/Adriana%20Lita
Adriana Eleni Lita is a Romanian materials scientist who is a member of the faint photonics group at the National Institute of Standards and Technology. She works on the fabrication and development of single-photon detectors such as transition-edge sensors and superconducting nanowire single-photon detector devices. Life Lita earned a B.S. in physics from the University of Bucharest. She completed a Ph.D. in materials science and engineering at the University of Michigan in 2000. Her dissertation was titled Correlation between microstructure and surface structure evolution in polycrystalline films. Lita's doctoral advisor was John E. Sanchez, Jr. In 2003, Lita joined the faint photonics group at the National Institute of Standards and Technology (NIST) in Boulder. She works on the fabrication and development of single-photon detectors such as transition-edge sensors (TES) and superconducting nanowire single-photon detector (SNSPD) devices. Her work includes the development of record high quantum efficiency TES devices optimized at various wavelengths from the UV to the near IR, the integration of TES with optical waveguide platforms for photonic circuits, as well as materials development for SNSPDs. Her research has included Bell test experiments and the practical implementation of quantum key distribution. In 2021, Lita was awarded the Department of Commerce Silver Medal. Selected publications See also Timeline of women in science in the United States References Living people Year of birth missing (living people) Place of birth missing (living people) National Institute of Standards and Technology people 21st-century American women scientists Women materials scientists and engineers American materials scientists Nationality missing University of Michigan alumni University of Bucharest alumni 21st-century Romanian women 21st-century Romanian scientists Romanian women scientists Romanian emigrants to the United States Expatriate academics in the United States
Adriana Lita
Materials_science,Technology
368
27,625,488
https://en.wikipedia.org/wiki/Ulefos%20Jernv%C3%A6rk
Ulefos Jernværk is an iron foundry located at Ulefoss in the municipality of Nome, Norway. It was established in 1657 by Ove Gjedde and Preben von Ahnen. The company produced pig iron until 1877. Wood-burning stoves were important products until the 1950s. Since 1999 the foundry has been owned by the holding company Ulefos NV. Further reading References 1657 establishments in Norway Companies based in Telemark Iron and steel mills Metal companies of Norway
Ulefos Jernværk
Chemistry
105
74,647,829
https://en.wikipedia.org/wiki/PX5%20RTOS
PX5 RTOS is a real-time operating system (RTOS) designed for embedded systems. It is implemented using the ANSI C programming language. Overview The PX5 RTOS, created by William Lamie, is an embedded real-time operating system (RTOS) that was launched in January 2023. Lamie, who also developed other RTOSes such as Nucleus RTX, Nucleus PLUS, and ThreadX (acquired by Microsoft), currently serves as the President and CEO of PX5, an embedded software company headquartered in San Diego, California, United States. Among these RTOSes, approximately 10 billion devices are operated by the ThreadX RTOS, while the Nucleus RTOS is used in around 3 billion devices. The name PX5 is an abbreviation where P stands for POSIX threads, X stands for thread switching, and 5 represents fifth generation RTOS. Written in ANSI C, the PX5 RTOS is compatible with various embedded microcontroller unit (MCU) and memory protection unit (MPU) architectures. It has minimal resource requirements, needing less than 1KB of FLASH and 1KB of RAM for basic operations on microcontrollers. One of the notable features of the PX5 RTOS is its native support for POSIX Threads (pthreads), which is an industry-standard API often absent in many other RTOS solutions. Additionally, it offers real-time extensions such as event flags, fast queues, tick timers, and memory management. The PX5 RTOS executes most API calls and context switches in less than a microsecond on typical 32-bit microcontrollers. It is also deterministic – ensuring predictable processing for each API and context switch regardless of the number of active threads. The PX5 RTOS incorporates Pointer/Data Verification (PDV) technology, which verifies function return addresses, function pointers, system objects, global data, memory pools, and more. In November 2023, PX5 introduced PX5 NET adding TCP/IP networking to the PX5 RTOS. Like PX5 RTOS, PX5 NET has a small minimal footprint (under 6KB) and leverages PDV for run-time safety and security. Supported platforms PX5 RTOS supports most of the embedded MCU and MPU architectures, including ARM's Cortex-M, Cortex-R, Cortex-A, and RISC-V architecture families. It supports both 32-bit and 64-bit architectures, and provides support for both asymmetric multiprocessing (AMP) and symmetric multiprocessing (SMP) configurations. Technology The PX5 RTOS uses a microkernel which enhances device security by integrating with Arm TrustZone technology, specifically designed for Cortex-M23 and Cortex-M33 microcontrollers. As a fifth-generation RTOS, PX5 is tailored for industrial-grade applications, enabling the separation of secure and non-secure MCU functions at the hardware level. To further strengthen security measures, PX5 RTOS incorporates a technology called Pointer/Data Verification (PDV). This technology identifies and prevents computer program errors, including buffer errors. In addition, the operating system is constructed using industry-standard POSIX pthreads APIs, facilitating the development of multi-threaded programs in C/C++. This allows for the execution of multiple tasks simultaneously across different operating systems. The POSIX pthreads APIs in PX5 RTOS offer support for various mechanisms, such as signals, condition variables, semaphore, mutex, and message queues. Furthermore, extensions like event flags, fast queues, tick timers, and memory management are also included. PX5 RTOS maintains a small footprint and exhibits rapid scalability. 
Its installation is a three-step procedure built around two accessible source files: px5.c and px5_binding.s. Additionally, the operating system automatically promotes the program's "main" function to the first system thread. The PX5 RTOS's flash (ROM) footprint ranges from a minimum of 1 KB to a maximum of less than 40 KB. The solution also ensures portability through its use of portable ANSI C for system programming. Moreover, PX5 RTOS has been verified by C-STAT static analysis and adheres to MISRA compliance standards. Partnerships In January 2023, PX5 and Clarinox joined forces to facilitate wireless connectivity in resource-constrained embedded systems; they integrated the ClarinoxBlue and ClarinoxWiFi protocol stack software with the PX5 RTOS. On 25 January 2023, Cypherbridge announced the integration of its SDKPac and uLoadXL IoT software with PX5 RTOS. In March 2023, Percepio AB entered into a partnership agreement with PX5: PX5 integrated the Percepio Tracealyzer trace recorder, and Percepio added support for the PX5 RTOS in a commercially available version. References See also Nucleus RTOS ThreadX Microsoft Azure RTOS Embedded operating systems Real-time operating systems
PX5 RTOS
Technology
1,092
69,517,557
https://en.wikipedia.org/wiki/Paul%20Hagenmuller
Paul Hagenmuller (August 3, 1921 – January 7, 2017) was a French chemist. Hagenmuller founded the Laboratoire de Chimie du Solide (Solid-State Chemistry Laboratory) of the French National Centre for Scientific Research (CNRS) and he served as its Director until 1985. He is considered "one of the founders of solid-state chemistry." Biography Hagenmuller was born in 1921 in Alsace, France. After studying in Strasbourg and Clermont-Ferrand, during WW2, Hagenmuller was imprisoned in the Buchenwald and Mittelbau-Dora concentration camps. During those years, he was involved in sabotaging German missiles. In 1950 he received his PhD from Sorbonne University. Subsequently, he spent two years teaching as a lecturer (maître de conférences) in Vietnam. He returned to France in 1956 and was appointed Professor of Inorganic Chemistry at the University of Rennes, working on "nonstoichiometry in vanadium and tungsten bronzes, two-dimensional oxyhalogenides, borides, and silicides, magnetic spinels". In 1961 he started working at the University of Bordeaux. Hagenmuller was noted for instigating cooperation between French researchers and researchers from the Soviet Union and Germany, his years in the concentration camps greatly affected his character. He also collaborated with noted scientists such as John Goodenough, Jacques Friedel and Nevill Francis Mott on insulator-to-metal transitions of vanadium oxides. In the 1970s, he started working with Neil Bartlett on metal fluorides. His most noted research discovery was the synthesis of and , which would later become important superconductor materials. His work on sodium-ion batteries received great interest years after it was published. In 2018 Hagenmuller remained the 4th most cited author from the Journal of Solid State Chemistry. Awards and decorations Croix de Guerre 1939–1945 Bundesverdienstkreuz (1985) Legion of Honour (1988) Gay-Lussac Humboldt Prize (1982) Prix de la Fondation de la Maison de la Chimie (1986) Bibliography References French chemists University of Strasbourg alumni University of Clermont-Ferrand alumni Buchenwald concentration camp survivors Mittelbau-Dora concentration camp survivors 1921 births 2017 deaths Members of the French Academy of Sciences University of Paris alumni Recipients of the Legion of Honour Recipients of the Croix de Guerre (France) 20th-century French chemists Research directors of the French National Centre for Scientific Research Academic staff of the University of Rennes Academic staff of the University of Bordeaux Inorganic chemists Solid state chemists Members of the Royal Swedish Academy of Sciences French materials scientists
Paul Hagenmuller
Chemistry
546
6,546,592
https://en.wikipedia.org/wiki/Camar%C3%ADn
A camarín is a shrine or chapel set above and behind the altar in a church, but still visible from the body of the church. Camarines are especially found in Spain and Portugal and throughout Latin America. George Kubler and Martin Soria, in Art and Architecture of Spain and Portugal, trace the typology to the mid-15th century Aragonese "viril", a window in the high altar created to display the consecrated host. According to Kubler and Soria, the camarín was first used in the Basílica de la Virgen de los Desamparados (Valencia), designed by Diego Martinez Ponce de Urrana in 1652–1657. In de Urrana's design, one passes from the oval nave through one of two doorways flanking the high altar. These open on to chambers, at the rear of which stairways lead to the rear of the camarín, so that one emerges into the space looking out on the nave beyond. Also noteworthy is the chapel of the Virgen del Pilar that protects the Holy Column and the image of the Virgin in the Basilica del Pilar in Zaragoza. The space was designed by the architect Ventura Rodríguez starting in 1754. It is made up of thin sheets of green marble from the Greek island of Tinos, studded with seventy-two stars. All of the stars are authentic jewels, as they are studded with precious stones, except for seven that are located in the canopy, which is carved in silver with ivory applications. Definition Capilla ó pieza que suele haber detras de un altar, donde se venera alguna Imágen. Diccionario de arquitectura civil, 1802. Translation: "Chapel or room, usually behind the altar, in which an image is venerated." References Sources Architectural elements Church architecture
Camarín
Technology,Engineering
371
1,644,632
https://en.wikipedia.org/wiki/Brisbane%20Line
The "Brisbane Line" was a defence proposal supposedly formulated during World War II to concede the northern portion of the Australian continent in the event of an invasion by the Japanese. Although a plan to prioritise defence in the vital industrial regions between Brisbane and Melbourne in the event of invasion had been proposed in February 1942, it was rejected by Labor Prime Minister John Curtin and the Australian War Cabinet. An incomplete understanding of this proposal and other planned responses to invasion led Labor minister Eddie Ward to publicly allege that the previous government (a United Australia Party-Country Party coalition under Robert Menzies and Arthur Fadden) had planned to abandon most of northern Australia to the Japanese. Ward continued to promote the idea during late 1942 and early 1943, and the idea that it was an actual defence strategy gained support after General Douglas MacArthur referred to it during a press conference in March 1943, where he also coined the term "Brisbane Line". Ward initially offered no evidence to support his claims, but later claimed that the relevant records had been removed from the official files. A Royal Commission concluded that no such documents had existed, and the government under Menzies and Fadden had not approved plans of the type alleged by Ward. The controversy contributed to Labor's win in the 1943 federal election, although Ward was assigned to minor portfolios afterward. Ward's allegations In October 1942, Labor politician Eddie Ward, the Minister for Labour and National Service under Prime Minister John Curtin, alleged that the preceding government under Prime Minister Robert Menzies (and his successor, Prime Minister Arthur Fadden) had prepared plans to abandon the majority of the continent as soon as the Japanese invaded, and concentrate defensive efforts on the south-eastern region. Ward had apparently been leaked the information by a Major working in the Secretary for Defence Office. A memorandum had been submitted to the Australian War Cabinet in February 1942 (after Menzies, Fadden, and the United Australia Party-Country Party coalition had moved to Opposition), where the General Officer Commanding-in-Chief of the Home Forces, Lieutenant-General Iven Mackay, had advocated that in the event of an invasion, the majority of available Australian forces be concentrated in the area between Brisbane and Melbourne, where most of the nation's industrial capability was located. Mackay had previously been instructed to prioritise the regions around Sydney and Newcastle, with Darwin as a secondary priority, and had to consider the fact that a large portion of Australia's military and naval forces were deployed overseas. Ward's theory was based on an incomplete understanding of this plan (which had been submitted to and rejected by Ward's own government, catered for the defence of strategic northern locations, including Darwin and Townsville, and instead of simply abandoning the rest of the country to the Japanese, advocated a scorched earth policy and guerrilla warfare to slow invaders until other forces could be deployed), along with public knowledge of evacuation plans for regions of Queensland (which, instead of a total evacuation south, was to clear potential battle sites of civilians). Ward did not present any direct evidence of his claims at the time, and Menzies, along with all the ministers that had served under him during the previous government, denied the allegation. 
At an Advisory War Council meeting in December 1942, Menzies, among others, expressed concern that a responsible minister was making claims that could only be disproved through the disclosure of secret defence plans. Curtin did little to quell Ward's attacks, and Ward continued to claim that Menzies and Fadden were responsible for the "defeatist" and "treacherous" plan. Public awareness of the alleged plan was raised when General Douglas MacArthur referred to it during a press conference in March 1943, during which he coined the term 'Brisbane Line'. Ward repeated his assertions over the following months, and when asked to provide proof, claimed that he had been informed of the removal of documents relating to the plan from the official files. Royal Commission and aftermath Curtin appointed a Royal Commission led by Charles Lowe to determine if such documents had existed, and if the Menzies administration had made such plans. The Commission reported in July 1943 that there was no evidence supporting an official plan to abandon most of Australia to invading forces, and that the files for the time in question were complete. The royal commission and the Brisbane Line controversy contributed to Curtin and the Labor Party winning the 1943 federal election by a significant margin, but Ward was effectively demoted by being assigned the portfolios of Transport (the assets of which were under direct Army control) and External Territories (most of which had been captured by the Japanese). Post-war claims Proponents of the existence of the Brisbane Line proposal often refer to the existence of concrete tank traps near places such as Tenterfield, which were constructed in the late 1930s, as evidence. In his memoir, Reminiscences, MacArthur claims that the Australian military had proposed designating a line roughly following the Darling River as the focus of defence during the expected Japanese invasion of Australia. MacArthur credits himself with the plan's dismissal in favour of offensive operations to stop Japanese advancement in New Guinea. Citations References Further reading Military history of Australia during World War II World War II defensive lines World War II sites in Australia Politics of World War II Queensland in World War II
Brisbane Line
Engineering
1,077
5,454,605
https://en.wikipedia.org/wiki/Gordon%20Bell%20Prize
The Gordon Bell Prize is an award presented by the Association for Computing Machinery each year in conjunction with the SC Conference series (formerly known as the Supercomputing Conference). The prize recognizes outstanding achievement in high-performance computing applications. The main purpose is to track the progress over time of parallel computing, by acknowledging and rewarding innovation in applying high-performance computing to applications in science, engineering, and large-scale data analytics. The prize was established in 1987. A cash award of $10,000 (since 2011) accompanies the recognition, funded by Gordon Bell, a pioneer in high-performance and parallel computing. The Prizes were preceded by a nominal prize ($100) established by Alan Karp, a numerical analyst (then of IBM) who challenged claims of MIMD performance improvements proposed in the Letters to the Editor section of the Communications of the ACM. Karp went on to be one of the first Gordon Bell Prize judges. Individuals or teams may apply for the award by submitting a technical paper describing their work through the SC conference submissions process. Finalists present their work at that year's conference, and their submissions are included in the conference proceedings. Prize criteria The ACM Gordon Bell Prize is primarily intended to recognize performance achievements that demonstrate: evidence of important algorithmic and/or implementation innovations clear improvement over the previous state-of-the-art solutions that don’t depend on one-of-a-kind architectures (systems that can only be used to address a narrow range of problems, or that can’t be replicated by others) performance measurements that have been characterized in terms of scalability (strong as well as weak scaling), time to solution, efficiency (in using bottleneck resources, such as memory size or bandwidth, communications bandwidth, I/O), and/or peak performance achievements that are generalizable, in the sense that other people can learn and benefit from the innovations In earlier years, multiple prizes were sometimes awarded to reflect different types of achievements. According to current policies, the Prize can be awarded in one or more of the following categories, depending on the entries received in a given year: Peak Performance: If the entry demonstrates outstanding performance in terms of floating point operations per second on an important science/engineering problem; the efficiency of the application in using bottleneck resources (such as memory size or bandwidth) is also taken into consideration. Special Achievement in Scalability, Special Achievement in Time to Solution: If the entry demonstrates exceptional Scalability, in terms of both strong and weak scaling, and/or total time to solve an important science/engineering problem. See also List of computer science awards References External links ACM Gordon Bell Prize Winners 2006-present Gordon Bell Prize News - 2013-2022 - ACM Gordon Bell Prize 1987-2015 Gordon Bell Prize description from SC13 ACM Gordon Bell Prize Winners 2006-2015 Earlier Prize Winners 1987–1999 Prize Winners 1987-2015 Gordon Bell Prize official page on ACM Website The SC (formerly "Supercomputing") Conference Series Awards of the Association for Computing Machinery Computer science awards Awards established in 1987
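The criteria above refer to strong and weak scaling and to efficiency; as a rough illustration of what those figures of merit usually mean (generic HPC definitions with made-up timings, not the prize's official evaluation procedure):

def strong_scaling(t1, tn, n):
    # Fixed total problem size: speedup = t1/tn, efficiency = speedup/n.
    speedup = t1 / tn
    return speedup, speedup / n

def weak_scaling_efficiency(t1, tn):
    # Problem size grows with n: ideal is constant runtime, so efficiency = t1/tn.
    return t1 / tn

speedup, eff = strong_scaling(t1=5000.0, tn=6.1, n=1024)   # hypothetical timings in seconds
print(f"strong scaling: {speedup:.0f}x speedup, {eff:.0%} efficiency")
print(f"weak scaling efficiency: {weak_scaling_efficiency(120.0, 131.0):.0%}")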
Gordon Bell Prize
Technology
631
20,829,208
https://en.wikipedia.org/wiki/C4H5NS
The molecular formula C4H5NS (molar mass: 99.15 g/mol, exact mass: 99.0143 u) may refer to: Allyl isothiocyanate (AITC) Thiazine Molecular formulas
C4H5NS
Physics,Chemistry
65
18,920,299
https://en.wikipedia.org/wiki/Jim%20Propp
James Gary Propp is a professor of mathematics at the University of Massachusetts Lowell. Education and career In high school, Propp was one of the national winners of the United States of America Mathematical Olympiad (USAMO), and an alumnus of the Hampshire College Summer Studies in Mathematics. Propp obtained his AB in mathematics in 1982 at Harvard. After advanced study at Cambridge, he obtained his PhD from the University of California at Berkeley. He has held professorships at seven universities, including Harvard, MIT, the University of Wisconsin, and the University of Massachusetts Lowell. Mathematical research Propp is the co-editor of the book Microsurveys in Discrete Probability (1998) and has written more than fifty journal articles on game theory, combinatorics and probability, and recreational mathematics. He lectures extensively and has served on the Mathematical Olympiad Committee of the Mathematical Association of America, which sponsors the USAMO. In the early 90s Propp lived in Boston and later in Arlington, Massachusetts. In 1996, Propp and David Wilson invented coupling from the past, a method for sampling from the stationary distribution of a Markov chain among Markov chain Monte Carlo (MCMC) algorithms. Contrary to many MCMC algorithms, coupling from the past gives in principle a perfect sample from the stationary distribution. His papers have discussed the use of surcomplex numbers in game theory; the solution to the counting of alternating sign matrices; and occurrences of Grandi's series as an Euler characteristic of infinite-dimensional real projective space. Other contributions Propp was a member of the National Puzzlers' League under the pseudonym Aesop. He was recruited for the organisation by colleague Henri Picciotto, cruciverbalist and co-author of the league's first cryptic crossword collection. Propp is the creator of the "Self-Referential Aptitude Test", a humorous multiple-choice test in which all questions except the last make self-references to their own answers. It was created in the early 1990s for a puzzlers' party. Propp is the author of Tuscanini, a 1992 children's book about a musical elephant, illustrated by Ellen Weiss. Awards and honours In 2015 he was elected as a fellow of the American Mathematical Society "for contributions to combinatorics and probability, and for mentoring and exposition." Personal He is married to research psychologist Alexandra (Sandi) Gubin. They have a son Adam and a daughter Eliana. Notes External links Propp's website Year of birth missing (living people) Living people Harvard University alumni University of California, Berkeley alumni University of Wisconsin–Madison faculty Massachusetts Institute of Technology faculty Harvard University Department of Mathematics faculty Alumni of the University of Cambridge Recreational mathematicians American probability theorists 20th-century American mathematicians 21st-century American mathematicians Fellows of the American Mathematical Society University of Massachusetts Lowell faculty
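The article mentions coupling from the past; as a rough illustration of the idea behind it, here is a minimal Python sketch on a toy five-state random walk (the chain, the update rule, and all names are the sketch's own assumptions, not Propp and Wilson's original construction). The key point is that the random draw assigned to each past time step is fixed once generated and reused as the algorithm looks further into the past:

import random

N_STATES = 5  # toy state space {0, ..., 4}

def step(state, u):
    # One update of the chain, driven by a uniform draw u shared by all copies.
    if u < 0.3:
        return max(0, state - 1)
    if u > 0.7:
        return min(N_STATES - 1, state + 1)
    return state

def coupling_from_the_past(rng=None):
    # Returns one exact sample from the chain's stationary distribution.
    rng = rng or random.Random()
    draws = []            # draws[k] is the uniform assigned to time -(k+1); never regenerated
    horizon = 1
    while True:
        while len(draws) < horizon:
            draws.append(rng.random())
        states = list(range(N_STATES))      # start a copy of the chain in every state
        for t in range(horizon, 0, -1):     # run from time -horizon up to time 0
            states = [step(s, draws[t - 1]) for s in states]
        if len(set(states)) == 1:           # all copies coalesced: perfect sample
            return states[0]
        horizon *= 2                        # otherwise look further into the past

print(coupling_from_the_past(random.Random(42)))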
Jim Propp
Mathematics
572
42,662,705
https://en.wikipedia.org/wiki/Carbohydrate%20Research
Carbohydrate Research is a peer-reviewed scientific journal covering research on the chemistry of carbohydrates. It is published by Elsevier and was established in 1965. The editor-in-chief is M. Carmen Galan (University of Bristol). According to the Journal Citation Reports, the journal has a 2022 impact factor of 3.1. References External links Biochemistry journals Elsevier academic journals Academic journals established in 1965 English-language journals Carbohydrate chemistry
Carbohydrate Research
Chemistry
101
77,259,262
https://en.wikipedia.org/wiki/Career%20cushioning
In human resources, career cushioning refers to employees who discreetly upskill and network as a contingency plan in the event of job loss. Career cushioning may involve getting certifications, expanding professional networks, updating resumes and profiles, and discreetly applying to alternative jobs. The proactive approach provides a sense of security during uncertain economic times. Employers can counter career cushioning by improving their market competitiveness. The term came to prominence in 2022 following the COVID-19 pandemic layoffs; it stems from "cushioning" in dating, where partners keep a backup option, and from cushioning a fall. References Labor relations 2022 neologisms Popular culture neologisms Human resource management Occupational stress Motivation Work Labor
Career cushioning
Biology
147
849,543
https://en.wikipedia.org/wiki/Darcy%27s%20law
Darcy's law is an equation that describes the flow of a fluid through a porous medium and through a Hele-Shaw cell. The law was formulated by Henry Darcy based on the results of experiments on the flow of water through beds of sand, forming the basis of hydrogeology, a branch of the earth sciences. It is analogous to Ohm's law in electrical conduction, linearly relating the volume flow rate of the fluid to the hydraulic head difference (which is often just proportional to the pressure difference) via the hydraulic conductivity. In fact, Darcy's law is a special case of the Stokes equation for the momentum flux, in turn deriving from the momentum Navier–Stokes equation. Background Darcy's law was first determined experimentally by Darcy, but has since been derived from the Navier–Stokes equations via homogenization methods. It is analogous to Fourier's law in the field of heat conduction, Ohm's law in the field of electrical networks, and Fick's law in diffusion theory. One application of Darcy's law is in the analysis of water flow through an aquifer; Darcy's law along with the equation of conservation of mass simplifies to the groundwater flow equation, one of the basic relationships of hydrogeology. Morris Muskat first refined Darcy's equation for single-phase flow by including viscosity in the single (fluid) phase equation of Darcy. It can be understood that viscous fluids have more difficulty permeating through a porous medium than less viscous fluids. This change made it suitable for researchers in the petroleum industry. Based on experimental results by his colleagues Wyckoff and Botset, Muskat and Meres also generalized Darcy's law to cover the multiphase flow of water, oil and gas in the porous medium of a petroleum reservoir. The generalized multiphase flow equations by Muskat and others provide the analytical foundation for reservoir engineering that exists to this day. Description In the integral form, Darcy's law, as refined by Morris Muskat, in the absence of gravitational forces and in a homogeneously permeable medium, is given by a simple proportionality relationship between the volumetric flow rate Q and the pressure drop Δp through a porous medium. The proportionality constant is linked to the permeability k of the medium, the dynamic viscosity μ of the fluid, the given distance L over which the pressure drop is computed, and the cross-sectional area A, in the form: Q = kAΔp/(μL). Note that the ratio R = μL/(kA) can be defined as the Darcy's law hydraulic resistance, so that Q = Δp/R. Darcy's law can be generalised to a local form: q = −(k/μ)∇p, where ∇p is the pressure gradient and q is the volumetric flux, which here is also called the superficial velocity. Note that the ratio k/μ can be thought of as the hydraulic conductivity appearing in this local form of Darcy's law. In the (less general) integral form, the volumetric flux and the pressure gradient correspond to the ratios q = Q/A and ∇p ≈ Δp/L. In the case of an anisotropic porous medium, the permeability is a second-order tensor, and in tensor notation one can write the more general law: q_i = −(k_ij/μ) ∂p/∂x_j. Notice that the quantity q, often referred to as the Darcy flux or Darcy velocity, is not the velocity at which the fluid is travelling through the pores. It is the specific discharge, or flux per unit area. The flow velocity u is related to the flux q by the porosity φ with the following equation: u = q/φ. Darcy's constitutive equation, for single-phase (fluid) flow, is the defining equation for absolute permeability (single-phase permeability). 
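As a worked illustration of the integral form Q = kAΔp/(μL) above and of the distinction between the Darcy flux and the pore velocity, a short Python sketch (the property values are illustrative numbers chosen for the example, not data from the article):

k = 1e-12     # permeability, m^2 (order of magnitude of a permeable sand)
mu = 1e-3     # dynamic viscosity of water, Pa·s
A = 0.01      # cross-sectional area of the sample, m^2
L = 0.5       # sample length, m
dp = 2.0e4    # pressure drop across the sample, Pa
phi = 0.3     # porosity, dimensionless

Q = k * A * dp / (mu * L)   # volumetric flow rate, m^3/s
q = Q / A                   # Darcy flux (specific discharge), m/s
u = q / phi                 # average flow (pore) velocity, m/s

print(f"Q = {Q:.2e} m^3/s, q = {q:.2e} m/s, u = {u:.2e} m/s")
# Q = 4.00e-07 m^3/s, q = 4.00e-05 m/s, u = 1.33e-04 m/s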
With reference to the diagram to the right, the flow velocity is in SI units, and since the porosity is a nondimensional number, the Darcy flux, or discharge per unit area, is also defined in units; the permeability in units, the dynamic viscosity in units and the hydraulic gradient is in units. In the integral form, the total pressure drop is in units, and is the length of the sample in units, the Darcy's volumetric flow rate, or discharge, is also defined in units and the cross-sectional area in units. A number of these parameters are used in alternative definitions below. A negative sign is used in the definition of the flux following the standard physics convention that fluids flow from regions of high pressure to regions of low pressure. Note that the elevation head must be taken into account if the inlet and outlet are at different elevations. If the change in pressure is negative, then the flow will be in the positive direction. There have been several proposals for a constitutive equation for absolute permeability, and the most famous one is probably the Kozeny equation (also called Kozeny–Carman equation). By considering the relation for static fluid pressure (Stevin's law): one can also express the integral form as the equation: where ν is the kinematic viscosity. The corresponding hydraulic conductivity is therefore: Darcy's law is a simple mathematical statement which neatly summarizes several familiar properties that groundwater flowing in aquifers exhibits, including: if there is no pressure gradient over a distance, no flow occurs (these are hydrostatic conditions), if there is a pressure gradient, flow will occur from high pressure towards low pressure (opposite the direction of increasing gradient — hence the negative sign in Darcy's law), the greater the pressure gradient (through the same formation material), the greater the discharge rate, and the discharge rate of fluid will often be different — through different formation materials (or even through the same material, in a different direction) — even if the same pressure gradient exists in both cases. A graphical illustration of the use of the steady-state groundwater flow equation (based on Darcy's law and the conservation of mass) is in the construction of flownets, to quantify the amount of groundwater flowing under a dam. Darcy's law is only valid for slow, viscous flow; however, most groundwater flow cases fall in this category. Typically any flow with a Reynolds number less than one is clearly laminar, and it would be valid to apply Darcy's law. Experimental tests have shown that flow regimes with Reynolds numbers up to 10 may still be Darcian, as in the case of groundwater flow. The Reynolds number (a dimensionless parameter) for porous media flow is typically expressed as where is the kinematic viscosity of water, is the specific discharge (not the pore velocity — with units of length per time), is a representative grain diameter for the porous medium (the standard choice is d30, which is the 30% passing size from a grain size analysis using sieves — with units of length). Derivation For stationary, creeping, incompressible flow, i.e. , the Navier–Stokes equation simplifies to the Stokes equation, which by neglecting the bulk term is: where is the viscosity, is the velocity in the direction, and is the pressure. Assuming the viscous resisting force is linear with the velocity, we may write: where is the porosity, and is the second order permeability tensor.
This gives the velocity in the direction, which gives Darcy's law for the volumetric flux density in the direction, In isotropic porous media the off-diagonal elements in the permeability tensor are zero, for and the diagonal elements are identical, , and the common form is obtained as below, which enables the determination of the liquid flow velocity by solving a set of equations in a given region. The above equation is a governing equation for single-phase fluid flow in a porous medium. Use in petroleum engineering Another derivation of Darcy's law is used extensively in petroleum engineering to determine the flow through permeable media — the simplest of which is for a one-dimensional, homogeneous rock formation with a single fluid phase and constant fluid viscosity. Almost all oil reservoirs have a water zone below the oil leg, and some also have a gas cap above the oil leg. When the reservoir pressure drops due to oil production, water flows into the oil zone from below, and gas flows into the oil zone from above (if the gas cap exists), and we get a simultaneous flow and immiscible mixing of all fluid phases in the oil zone. The oil field operator may also inject water (or gas) to improve oil production. The petroleum industry is, therefore, using a generalized Darcy equation for multiphase flow developed by Muskat et alios. Because Darcy's name is so widespread and strongly associated with flow in porous media, the multiphase equation is denoted Darcy's law for multiphase flow, the generalized Darcy equation (or law), or simply Darcy's equation (or law) or the flow equation when the context makes clear that the multiphase equation of Muskat is being discussed. Multiphase flow in oil and gas reservoirs is a comprehensive topic, and one of many articles about this topic is Darcy's law for multiphase flow. Use in coffee brewing A number of papers have utilized Darcy's law to model the physics of brewing in a moka pot, specifically how the hot water percolates through the coffee grinds under pressure, starting with a 2001 paper by Varlamov and Balestrino, and continuing with a 2007 paper by Gianino, a 2008 paper by Navarini et al., and a 2008 paper by W. King. The papers either take the coffee permeability to be constant as a simplification or measure its change through the brewing process. Additional forms Differential expression Darcy's law can be expressed very generally as: where q is the volume flux vector of the fluid at a particular point in the medium, h is the total hydraulic head, and K is the hydraulic conductivity tensor, at that point. The hydraulic conductivity can often be approximated as a scalar. (Note the analogy to Ohm's law in electrostatics. The flux vector is analogous to the current density, head is analogous to voltage, and hydraulic conductivity is analogous to electrical conductivity.) Quadratic law For flows in porous media with Reynolds numbers greater than about 1 to 10, inertial effects can also become significant. Sometimes an inertial term, known as the Forchheimer term, is added to Darcy's equation. This term is able to account for the non-linear behavior of the pressure difference vs flow data. where the additional term is known as inertial permeability, in units of length . The flow in the middle of a sandstone reservoir is so slow that Forchheimer's equation is usually not needed, but the gas flow into a gas production well may be high enough to justify using it.
In this case, the inflow performance calculations for the well, not the grid cell of the 3D model, are based on the Forchheimer equation. The effect of this is that an additional rate-dependent skin appears in the inflow performance formula. Some carbonate reservoirs have many fractures, and Darcy's equation for multiphase flow is generalized in order to govern both flow in fractures and flow in the matrix (i.e. the traditional porous rock). The irregular surface of the fracture walls and high flow rate in the fractures may justify the use of Forchheimer's equation. Correction for gases in fine media (Knudsen diffusion or Klinkenberg effect) For gas flow in small characteristic dimensions (e.g., very fine sand, nanoporous structures etc.), the particle-wall interactions become more frequent, giving rise to additional wall friction (Knudsen friction). For a flow in this region, where both viscous and Knudsen friction are present, a new formulation needs to be used. Knudsen presented a semi-empirical model for flow in the transition regime based on his experiments on small capillaries. For a porous medium, the Knudsen equation can be given as where is the molar flux, is the gas constant, is the temperature, is the effective Knudsen diffusivity of the porous media. The model can also be derived from the first-principle-based binary friction model (BFM). The differential equation of transition flow in porous media based on BFM is given as This equation is valid for capillaries as well as porous media. The terminology of the Knudsen effect and Knudsen diffusivity is more common in mechanical and chemical engineering. In geological and petrochemical engineering, this effect is known as the Klinkenberg effect. Using the definition of molar flux, the above equation can be rewritten as This equation can be rearranged into the following equation Comparing this equation with conventional Darcy's law, a new formulation can be given as where This is equivalent to the effective permeability formulation proposed by Klinkenberg: where is known as the Klinkenberg parameter, which depends on the gas and the porous medium structure. This is quite evident if we compare the above formulations. The Klinkenberg parameter is dependent on permeability, Knudsen diffusivity and viscosity (i.e., both gas and porous medium properties). Darcy's law for short time scales For very short time scales, a time derivative of flux may be added to Darcy's law, which results in valid solutions at very small times (in heat transfer, this is called the modified form of Fourier's law), where is a very small time constant which causes this equation to reduce to the normal form of Darcy's law at "normal" times (> nanoseconds). The main reason for doing this is that the regular groundwater flow equation (diffusion equation) leads to singularities at constant head boundaries at very small times. This form is more mathematically rigorous but leads to a hyperbolic groundwater flow equation, which is more difficult to solve and is only useful at very small times, typically out of the realm of practical use. Brinkman form of Darcy's law Another extension to the traditional form of Darcy's law is the Brinkman term, which is used to account for transitional flow between boundaries (introduced by Brinkman in 1949), where is an effective viscosity term. This correction term accounts for flow through a medium where the grains of the medium are porous themselves, but is difficult to use, and is typically neglected.
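As a rough illustration of the Klinkenberg formulation mentioned above, the sketch below assumes the commonly quoted form k_apparent = k_inf (1 + b / p_mean); the parameter values and the function name are invented for demonstration and are not from the article.

```python
# Sketch of the Klinkenberg (gas slippage) correction to apparent permeability.
# Assumed form: k_apparent = k_inf * (1 + b / p_mean); b and the pressures are illustrative.

def klinkenberg_permeability(k_inf, b, p_mean):
    """Apparent gas permeability [m^2] at mean pore pressure p_mean [Pa].

    k_inf  -- absolute (liquid, infinite-pressure) permeability [m^2]
    b      -- Klinkenberg parameter [Pa], dependent on the gas and the medium
    p_mean -- mean pressure in the sample [Pa]
    """
    return k_inf * (1.0 + b / p_mean)

k_inf = 1e-15   # tight rock, illustrative value
b = 2.0e5       # Klinkenberg parameter, illustrative value
for p in (1e5, 5e5, 2e6):  # lower pressure leads to a stronger slippage effect
    print(f"p = {p:.1e} Pa  ->  k_apparent = {klinkenberg_permeability(k_inf, b, p):.3e} m^2")
```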
Validity of Darcy's law Darcy's law is valid for laminar flow through sediments. In fine-grained sediments, the dimensions of interstices are small; thus, the flow is laminar. Coarse-grained sediments also behave similarly, but in very coarse-grained sediments, the flow may be turbulent. Hence Darcy's law is not always valid in such sediments. For flow through commercial circular pipes, the flow is laminar when the Reynolds number is less than 2000 and turbulent when it is more than 4000, but in some sediments, it has been found that flow is laminar when the value of the Reynolds number is less than 1. See also The darcy, a unit of fluid permeability Hydrogeology Groundwater flow equation Mathematical model Black-oil equations Fick's law Ergun equation References Water Civil engineering Soil mechanics Soil physics Hydrology Transport phenomena
Darcy's law
Physics,Chemistry,Engineering,Environmental_science
3,163
35,480,438
https://en.wikipedia.org/wiki/Nudge%20theory
Nudge theory is a concept in behavioral economics, decision making, behavioral policy, social psychology, consumer behavior, and related behavioral sciences that proposes adaptive designs of the decision environment (choice architecture) as ways to influence the behavior and decision-making of groups or individuals. Nudging contrasts with other ways to achieve compliance, such as education, legislation or enforcement. The nudge concept was popularized in the 2008 book Nudge: Improving Decisions About Health, Wealth, and Happiness, by behavioral economist Richard Thaler and legal scholar Cass Sunstein, two American scholars at the University of Chicago. It has influenced British and American politicians. Several nudge units exist around the world at the national level (UK, Germany, Japan, and others) as well as at the international level (e.g. World Bank, UN, and the European Commission). It is disputed whether "nudge theory" is a recent novel development in behavioral economics or merely a new term for one of many methods for influencing behavior. There have been some controversies regarding the effectiveness of nudges. Maier et al. wrote that, after correcting the publication bias found by Mertens et al. (2021), there is no evidence that nudging would have any effect. "Nudging" is an umbrella term referring to many techniques, and skeptics believe some nudges (e.g. the default effect) can be highly effective while others have little to no effect, and call for future work that shifts away from investigating average effects and focuses on moderators instead. A meta-analysis of all unpublished nudging studies carried out by nudge units with over 23 million individuals in the United Kingdom and United States found support for many nudges, but with substantially weaker effects than those found in published studies. Moreover, some researchers criticized the "one-nudge-for-all" approach and advocated for more studies and implementations of personalized nudging (based on individual differences), which appear to be substantially more effective, with a more robust and consistent evidence base. Nudges Definition The first formulation of the term nudge and associated principles was developed in cybernetics by James Wilk before 1995 and described by Brunel University academic D. J. Stewart as "the art of the nudge" (sometimes referred to as micronudges). It also drew on methodological influences from clinical psychotherapy tracing back to Gregory Bateson, including contributions from Milton Erickson, Watzlawick, Weakland and Fisch, and Bill O'Hanlon. In this variant, the nudge is a microtargeted design geared toward a specific group of people, irrespective of the scale of intended intervention. In 2008, Richard Thaler and Cass Sunstein's book Nudge: Improving Decisions About Health, Wealth, and Happiness brought nudge theory to prominence. The authors refer to the influencing of behaviour without coercion as libertarian paternalism and the influencers as choice architects. Thaler and Sunstein defined their concept as the following: In this form, drawing on behavioral economics, the nudge is more generally applied in order to influence behaviour. One of the most frequently cited examples of a nudge is the etching of the image of a housefly into the men's room urinals at Amsterdam's Schiphol Airport, which is intended to "improve the aim." The book also gained a following among US and UK politicians, as well as in the private sector and in public health.
Overview A nudge makes it more likely that an individual will make a particular choice, or behave in a particular way, by altering the environment so that automatic cognitive processes are triggered to favour the desired outcome. An individual's behaviour is not always in alignment with their intentions (a discrepancy known as a value-action gap). It is common knowledge that humans are not fully rational beings; that is, people will often do something that is not in their own self-interest, even when they are aware that their actions are not in their best interest. As an example, when hungry, people who diet often underestimate their ability to lose weight, and their intentions to eat healthy can be temporarily weakened until they are satiated. Nobel Laureate Daniel Kahneman describes two distinct systems for processing information as to why people sometimes act against their own self-interest: System 1 is fast, automatic, and highly susceptible to environmental influences; System 2 processing is slow, reflective, and takes into account explicit goals and intentions. When situations are overly complex or overwhelming for an individual's cognitive capacity, or when an individual is faced with time-constraints or other pressures, System 1 processing takes over decision-making. System 1 processing relies on various judgmental heuristics to make decisions, resulting in faster decisions. Unfortunately, this can also lead to suboptimal decisions. In fact, Thaler and Sunstein trace maladaptive behaviour to situations in which System 1 processing overrides an individual's explicit values and goals. It is well documented that habitual behaviour is resistant to change without a disruption to the environmental cues that trigger that behaviour. Nudging techniques aim to use judgmental heuristics to the advantage of the party that is creating the set of choices. In other words, a nudge alters the environment so that when heuristic, or System 1, decision-making is used, the resulting choice will be the most positive or desired outcome. An example of such a nudge is switching the placement of junk food in a store, so that fruit and other healthy options are located next to the cash register, while junk food is relocated to another part of the store. Techniques Nudges are small changes in the environment that are easy and inexpensive to implement. In head-to-head comparisons, randomized experiments have shown that nudges can sometimes motivate behavior change more effectively than paying people. Several different techniques exist for nudging, including defaults, social-proof heuristics, and increasing the salience of the desired option. A default option is the option that a person automatically receives for doing nothing. People are more likely to choose a particular option if it is the default option. For example, Pichert and Katsikopoulos (2008) found that a greater number of consumers chose the renewable energy option for electricity when it was offered as the default option. Similarly, the default options given to mobile app developers in advertising networks can significantly impact consumers' privacy. A social-proof heuristic refers to the tendency of people to look at the behavior of others to help guide their own behavior. Studies have found some success in using social-proof heuristics to nudge people to make healthier food choices. When people's attention is drawn toward a particular option, that option will become more salient and they will be more likely to choose it.
As an example, in snack shops at train stations in the Netherlands, consumers purchased more fruit and healthy snack options when they were relocated next to the cash register. Since then, other similar studies have been made regarding the placement of healthier food options close to the checkout counter and the effect on the consuming behavior of the customers, and this is now considered an effective and well-accepted nudge. Application Behavioral insights and nudges are currently used in many countries around the world. Government There are various notable examples of government applications of nudge theory. During their terms, both U.K. Prime Minister David Cameron and U.S. President Barack Obama may have sought to employ nudge theory to advance domestic policy goals in their respective countries. In 2008, the United States appointed Cass Sunstein, who helped develop the theory, as administrator of the Office of Information and Regulatory Affairs. In 2010, the British Behavioural Insights Team, or "Nudge Unit," was established at the British Cabinet Office and headed by psychologist David Halpern. In Australia, the state Government of New South Wales established a Nudge Unit of its own in 2012. In 2016, the federal government followed suit, forming the Behavioural Economics Team of Australia (BETA) as the "central unit for applying behavioural insights...to public policy." In 2020, the British government of Boris Johnson decided to rely on nudge theory to fight the coronavirus pandemic, with Chief Scientific Adviser Patrick Vallance seeking to encourage “herd immunity” with this strategy. Business Nudge theory has also been applied to business management and corporate culture. For instance, nudge is applied to health, safety, and environment (HSE) with the primary goal of achieving a "zero accident culture." The concept is also used as a key component in a lot of human-resources software. Particular forerunners in the application of nudge theory in corporate settings are top Silicon Valley companies. These companies are using nudges in various forms to increase the productivity and happiness of employees. Recently, more companies are gaining interest in using what is called "nudge management" to improve the productivity of their white-collar workers. Healthcare Lately, nudge theory has also been used in different ways to help healthcare professionals make more deliberate decisions in numerous areas. For example, nudging has been used as a way to improve hand hygiene among healthcare workers to decrease the number of healthcare-associated infections. It has also been used as a way to make fluid administration a more thought-out decision in intensive care units, with the intention of reducing well-known complications of fluid overload. Mandatory display of inspector reports of eatery hygiene as a public 'nudge' has received mixed responses in different countries. A recent meta-analytic review of the hygiene ratings across North America, Europe, Asia, and Oceania has shown that inspector ratings (usually a smiley or a letter grade) are useful at times, but not informative enough for consumers. Fundraising Nudge theory can also be applied to fundraising, helping to increase donor contributions and increase continuous donations from the same individual, as well as to entice new donors to give. There are some simple strategies used when applying nudge theory to this area.
The first strategy is to make donating easy: creating default settings that automatically enroll a donor for continuous giving or prompt them to give every so often encourages individuals to continue giving. The second strategy to increase donors is to make giving more enticing, which can include increasing a person's motivation to give through rewards, personalized messages, or focusing on their interests. Personalized messages, small thank-you gifts, and demonstrating the impact one's donation can have on others have been shown to be effective in increasing donations. Another strategy helpful for increasing donors is using social influence, as people are very influenced by group norms. By allowing donors to become visible to the public and increasing their identifiability, other individuals will be more inclined to give as they conform to the social norms around them. Using peer effects has been shown to increase donations. Finally, timing is important: many studies have demonstrated that there are specific times when individuals are more likely to give, for example during holidays. Although many nudging techniques have been useful in increasing donations and donors, many scholars question the ethics of using such techniques on the population. Ruehle et al. (2020) state that one always has to consider an individual's autonomy when designing nudges for a fundraising campaign. They state that the power of others behind messaging and potentially intrusive prompting can cause concern and may be seen as manipulative of donors' autonomy. Artificial intelligence Nudges are used at many levels in AI algorithms, for example recommender systems, and their consequences are still being investigated. Two articles that appeared in Minds & Machines in 2018 addressed the relation between nudges and Artificial Intelligence, explaining how persuasion and psychometrics can be used by personalised targeting algorithms to influence individual and collective behaviour, sometimes also in unintended ways. In 2020 an article in AI & Society addressed the use of this technology in Algorithmic Regulation. A piece in the Harvard Business Review published in 2021 was one of the first articles to coin the term "Algorithmic Nudging" (see also Algorithmic Management). The author stresses "Companies are increasingly using algorithms to manage and control individuals not by force, but rather by nudging them into desirable behavior — in other words, learning from their personalized data and altering their choices in some subtle way." While the concept builds on the work by University of Chicago economist Richard Thaler and Harvard Law School professor Cass Sunstein, "due to recent advances in AI and machine learning, algorithmic nudging is much more powerful than its non-algorithmic counterpart. With so much data about workers’ behavioral patterns at their fingertips, companies can now develop personalized strategies for changing individuals’ decisions and behaviors at large scale. These algorithms can be adjusted in real-time, making the approach even more effective." Tourism One concern raised by researchers in enjoyment-focused contexts, such as tourism, is the gap between attitude, intention and behaviour that arises because tourists seek pleasure. Several pieces of empirical evidence in the tourism literature suggest that nudge theory is highly effective in reducing the burden of tourists' activities on the environment.
For instance, tourists consumed more ethical foods, selected more sustainable hotels, reused towels and bed linen during hotel stays, increased their intentions to reduce their energy consumption, and increased the adoption of voluntary carbon offsetting, among many other examples. Education Nudges in education are techniques used to subtly guide students towards making better choices and achieving their academic goals. These nudges are based on the principles of behavioral economics and psychology, particularly the concept of dual process theory. This theory suggests that there are two systems of thinking: System 1, which is automatic and instinctual, and System 2, which is reflective and deliberate. Nudges aim to influence behavior by targeting System 1 processes, such as habits and automatic responses, to help students overcome common obstacles like procrastination, lack of motivation, or poor study habits. By designing nudges that align with students' goals and cognitive processes, educators can effectively support students in reaching their full potential and improving their academic performance. Nudging in Education: Promises and Challenges Similar to nudging in other areas, nudging in education aims to help individuals achieve desired behaviors they may struggle with due to habits or lack of motivation. For students, this could mean meeting deadlines, paying attention in class, or staying organized. Some promising examples include sending text reminders to parents to increase home literacy activities and providing information about famous scientists' struggles to improve student grades. However, challenges remain. It's unclear if nudges lead to long-lasting changes or how they work over time once removed. Additionally, it's essential to ensure that nudges align with educational principles and have a positive impact on students. More research is needed to understand how nudges influence behavior and cognitive processes in education effectively. While nudging shows potential in education, questions remain about its long-term effectiveness and how it fits within educational principles. Nudges should not only focus on end goals but also consider the cognitive processes and behaviors they influence. By understanding these aspects, educators can ensure that nudges promote positive educational practices and help students develop lasting habits. However, the implementation of nudging in education remains limited, highlighting the need for further exploration and development in this area. Behavioral economics concepts commonly used in education Critique The evidence on nudging having any effect has been criticized as "limited," so Mertens and colleagues (2021) produced a comprehensive meta-analysis. They found that nudging is effective but there is a moderate publication bias. Later, Maier and colleagues computed that, after correcting for this publication bias appropriately, there is no evidence that nudging would have any effect. Tammy Boyce, from the public health foundation The King's Fund, has said: "We need to move away from short-term, politically motivated initiatives such as the 'nudging people' idea, which are not based on any good evidence and don't help people make long-term behaviour changes." Likewise, Mols and colleagues (2015) acknowledge that nudges may at times be useful but argue that covert nudges offer limited scope for securing lasting behavior change.
Cass Sunstein has responded to criticism at length in his 2016 book, The Ethics of Influence: Government in the Age of Behavioral Science, making the case in favor of nudging, against charges that nudges diminish autonomy, threaten dignity, violate liberties, or reduce welfare. He previously defended nudge theory in his 2014 book Why Nudge?: The Politics of Libertarian Paternalism by arguing that choice architecture is inevitable and that some form of paternalism cannot be avoided. Ethicists have debated nudge theory rigorously; such charges have been made by various participants in the debate, from Bovens (2009) to Goodwin (2012). Wilkinson, for example, criticizes nudges as manipulative, while others such as Yeung (2012) question their scientific credibility. Public opinion on the ethicality of nudges has also been shown to be susceptible to “partisan nudge bias.” Research from David Tannenbaum, Craig R. Fox, and Todd Rogers (2017) found that adults and policymakers in the United States believed behavioral policies to be more ethical when they aligned with their own political leanings. Conversely, people took these same mechanisms to be more unethical when they differed from their politics. The researchers also found that nudges are not inherently partisan: when evaluating behavioral policies absent of political cues, people across the political spectrum were alike in their assessments. When considering the future designers that would be creating these nudges, a study by Willermark and Islind (2022) showed that more than 50% of design students have positive attitudes towards the implementation of nudges as a form of choice architecture. The participants argued that "many people benefit from getting a little nudge", while about 40% have ambivalent or negative attitudes towards the concept, stating that "We simply should not change the path of people’s choices". Some, such as Hausman and Welch (2010), as well as Roberts (2018) and Mrkva (2021), have inquired whether nudging should be permissible on grounds of distributive justice. Though Roberts (2018) argued that nudges do not benefit vulnerable, low-income individuals as much as individuals who are less vulnerable, Mrkva's research suggests that nudges benefit low-income and low-SES people most, if anything increasing distributive justice and reducing the disparity between those with high and low financial literacy. This research suggests that in situations where consumers lack knowledge regarding their choices and are therefore more prone to choosing the wrong one, the implementation of 'good nudges' can be ethically justified. The same study also states that nudges have the potential to "increase firm profits while decreasing consumer welfare." Lepenies and Malecka (2015) have questioned whether nudges are compatible with the rule of law. Similarly, legal scholars have discussed the role of nudges and the law. Behavioral economists such as Bob Sugden have pointed out that the underlying normative benchmark of nudging is still homo economicus, despite the proponents' claim to the contrary. It has been remarked that nudging is also a euphemism for psychological manipulation as practiced in social engineering. There exists an anticipation and, simultaneously, implicit criticism of the nudge theory in the works of Hungarian social psychologists Ferenc Mérei and László Garai, who emphasize the active participation in the nudge of its target.
The authors of a book titled Neuroliberalism: Behavioural Government in the Twenty-First Century (2017), argue that, while there is much value and diversity in behavioural approaches to government, there are significant ethical issues, including the danger of the neurological sciences being co-opted to the needs of neo-liberal economics. See also Choice architecture Commitment device Constructal Law - Design evolution in nature, animate and inanimate Dark pattern Default effect Libertarian paternalism List of cognitive biases Negarchy Psychohistory (fictional) Thinking, Fast and Slow Race to the Top Propaganda References Further reading Behavioral economics Behavioural sciences Cognitive biases Psychological theories Psychological manipulation
Nudge theory
Biology
4,181
14,637,814
https://en.wikipedia.org/wiki/Polymerase%20stuttering
Polymerase stuttering is the process by which a polymerase transcribes a nucleotide several times without progressing further on the mRNA chain. It is often used in the addition of poly(A) tails or in capping mRNA chains by less complex organisms such as viruses. Process A polymerase may undergo stuttering as a probability-controlled event; hence it is not explicitly controlled by any mechanism in the transcription process. Generally, it is a result of many short repeated frameshifts on a slippery sequence of nucleotides on the mRNA strand. However, the frameshift is restricted to one (in some cases two) nucleotides with a pseudoknot or choke points on both sides of the sequence. Examples A polymerase that exhibits this behavior is RNA-dependent RNA polymerase, present in many RNA viruses. Reverse transcriptase has also been observed to undergo polymerase stuttering. Literature Genetics
Polymerase stuttering
Biology
185
73,999,998
https://en.wikipedia.org/wiki/Taioro
Taioro is a condiment made from the grated flesh of the coconut and allowed to ferment. It is a traditional food found throughout the islands of Oceania and is often eaten as an accompaniment to meals. Preparation Taioro is made from the meat of the coconut drupe and allowed to ferment. The flesh from the coconut is grated, and salt water and the juice from the crushed heads of crustaceans are added. The liquid from the crustaceans acts as the fermenting agent, and the mixture is left to ferment for several days. It is often prepared as a dish called , where Taioro is mixed together with clams or turbot snails alongside garlic, onions, salt and pepper and served at room temperature. Mitiore is prepared in a similar manner to Taioro in the Cook Islands, but sea water is absent from the preparation. Juice extracted from crushed crustaceans is mixed with the grated coconut, wrapped in leaves and left to ferment for a few hours, gaining a consistency similar to cottage cheese. It is served mixed with shellfish and spring onion or chives. Kora, made by Fijians, is traditionally prepared by wrapping the grated coconut flesh in packages made from banana leaves and submerging it in salt water, weighed down under a pile of rocks and left to ferment for several days, though modern-day methods sometimes use sacks instead of banana leaves. It is typically served mixed with sea grapes, chilli, lemon juice and salt. A similar dish is also found in Tonga, where shavings left over from the extraction of coconut milk were allowed to ferment and were baked in an earth oven, often mixed together with taro leaves to create a dish known as . Names Cook Islands: Fiji: French Polynesia: Isnag: See also Miti hue – A Polynesian fermented coconut sauce. References Condiments Fermented foods French Polynesian cuisine Cook Islands cuisine Fijian cuisine Tongan cuisine Polynesian cuisine Foods containing coconut
Taioro
Biology
409
5,643,937
https://en.wikipedia.org/wiki/Music%20and%20mathematics
Music theory analyzes the pitch, timing, and structure of music. It uses mathematics to study elements of music such as tempo, chord progression, form, and meter. The attempt to structure and communicate new ways of composing and hearing music has led to musical applications of set theory, abstract algebra and number theory. While music theory has no axiomatic foundation in modern mathematics, the basis of musical sound can be described mathematically (using acoustics) and exhibits "a remarkable array of number properties". History Though ancient Chinese, Indians, Egyptians and Mesopotamians are known to have studied the mathematical principles of sound, the Pythagoreans (in particular Philolaus and Archytas) of ancient Greece were the first researchers known to have investigated the expression of musical scales in terms of numerical ratios, particularly the ratios of small integers. Their central doctrine was that "all nature consists of harmony arising out of numbers". From the time of Plato, harmony was considered a fundamental branch of physics, now known as musical acoustics. Early Indian and Chinese theorists show similar approaches: all sought to show that the mathematical laws of harmonics and rhythms were fundamental not only to our understanding of the world but to human well-being. Confucius, like Pythagoras, regarded the small numbers 1,2,3,4 as the source of all perfection. Time, rhythm, and meter Without the boundaries of rhythmic structure – a fundamental equal and regular arrangement of pulse repetition, accent, phrase and duration – music would not be possible. Modern musical use of terms like meter and measure also reflects the historical importance of music, along with astronomy, in the development of counting, arithmetic and the exact measurement of time and periodicity that is fundamental to physics. The elements of musical form often build strict proportions or hypermetric structures (powers of the numbers 2 and 3). Musical form Musical form is the plan by which a short piece of music is extended. The term "plan" is also used in architecture, to which musical form is often compared. Like the architect, the composer must take into account the function for which the work is intended and the means available, practicing economy and making use of repetition and order. The common types of form known as binary and ternary ("twofold" and "threefold") once again demonstrate the importance of small integral values to the intelligibility and appeal of music. Frequency and harmony A musical scale is a discrete set of pitches used in making or describing music. The most important scale in the Western tradition is the diatonic scale but many others have been used and proposed in various historical eras and parts of the world. Each pitch corresponds to a particular frequency, expressed in hertz (Hz), sometimes referred to as cycles per second (c.p.s.). A scale has an interval of repetition, normally the octave. The octave of any pitch refers to a frequency exactly twice that of the given pitch. Succeeding superoctaves are pitches found at frequencies four, eight, sixteen times, and so on, of the fundamental frequency. Pitches at frequencies of half, a quarter, an eighth and so on of the fundamental are called suboctaves. There is no case in musical harmony where, if a given pitch be considered accordant, that its octaves are considered otherwise. Therefore, any note and its octaves will generally be found similarly named in musical systems (e.g. 
all will be called doh or A or Sa, as the case may be). When expressed as a frequency bandwidth an octave A2–A3 spans from 110 Hz to 220 Hz (span=110 Hz). The next octave will span from 220 Hz to 440 Hz (span=220 Hz). The third octave spans from 440 Hz to 880 Hz (span=440 Hz) and so on. Each successive octave spans twice the frequency range of the previous octave. Because we are often interested in the relations or ratios between the pitches (known as intervals) rather than the precise pitches themselves in describing a scale, it is usual to refer to all the scale pitches in terms of their ratio from a particular pitch, which is given the value of one (often written 1/1), generally a note which functions as the tonic of the scale. For interval size comparison, cents are often used.

Common term | Example name | Hz  | Multiple of fundamental | Ratio within octave | Cents within octave
            | A2           | 110 | 1x                      | 1/1                 | 0
Octave      | A3           | 220 | 2x                      | 2/1 (= 1/1)         | 1200 (= 0)
            | E4           | 330 | 3x                      | 3/2                 | 702
Octave      | A4           | 440 | 4x                      | 2/1 (= 1/1)         | 1200 (= 0)
            | C♯5          | 550 | 5x                      | 5/4                 | 386
            | E5           | 660 | 6x                      | 3/2                 | 702
            | G5           | 770 | 7x                      | 7/4                 | 969
Octave      | A5           | 880 | 8x                      | 2/1 (= 1/1)         | 1200 (= 0)

Tuning systems There are two main families of tuning systems: equal temperament and just tuning. Equal temperament scales are built by dividing an octave into intervals which are equal on a logarithmic scale, which results in perfectly evenly divided scales, but with ratios of frequencies which are irrational numbers. Just scales are built by multiplying frequencies by rational numbers, which results in simple ratios between frequencies, but with scale divisions that are uneven. One major difference between equal temperament tunings and just tunings is differences in acoustical beat when two notes are sounded together, which affects the subjective experience of consonance and dissonance. Both of these systems, and the vast majority of music in general, have scales that repeat on the interval of every octave, which is defined as a frequency ratio of 2:1. In other words, every time the frequency is doubled, the given scale repeats. Below are Ogg Vorbis files demonstrating the difference between just intonation and equal temperament. You might need to play the samples several times before you can detect the difference. – this sample has a half-step at 550 Hz (C♯ in the just intonation scale), followed by a half-step at 554.37 Hz (C♯ in the equal temperament scale). – this sample consists of a "dyad". The lower note is a constant A (440 Hz in either scale), the upper note is a C♯ in the equal-tempered scale for the first 1", and a C♯ in the just intonation scale for the last 1".
Phase differences make it easier to detect the transition than in the previous sample. Just tunings 5-limit tuning, the most common form of just intonation, is a system of tuning using tones that are regular number harmonics of a single fundamental frequency. This was one of the scales Johannes Kepler presented in his Harmonices Mundi (1619) in connection with planetary motion. The same scale was given in transposed form by Scottish mathematician and musical theorist, Alexander Malcolm, in 1721 in his 'Treatise of Musick: Speculative, Practical and Historical', and by theorist Jose Wuerschmidt in the 20th century. A form of it is used in the music of northern India. American composer Terry Riley also made use of the inverted form of it in his "Harp of New Albion". Just intonation gives superior results when there is little or no chord progression: voices and other instruments gravitate to just intonation whenever possible. However, it gives two different whole tone intervals (9:8 and 10:9) because a fixed tuned instrument, such as a piano, cannot change key. To calculate the frequency of a note in a scale given in terms of ratios, the frequency ratio is multiplied by the tonic frequency. For instance, with a tonic of A4 (A natural above middle C), the frequency is 440 Hz, and a justly tuned fifth above it (E5) is simply 440×(3:2) = 660 Hz. Pythagorean tuning is tuning based only on the perfect consonances, the (perfect) octave, perfect fifth, and perfect fourth. Thus the major third is considered not a third but a ditone, literally "two tones", and is (9:8)2 = 81:64, rather than the independent and harmonic just 5:4 = 80:64 directly below. A whole tone is a secondary interval, being derived from two perfect fifths minus an octave, (3:2)2/2 = 9:8. The just major third, 5:4 and minor third, 6:5, are a syntonic comma, 81:80, apart from their Pythagorean equivalents 81:64 and 32:27 respectively. According to Carl , "the dependent third conforms to the Pythagorean, the independent third to the harmonic tuning of intervals." Western common practice music usually cannot be played in just intonation but requires a systematically tempered scale. The tempering can involve either the irregularities of well temperament or be constructed as a regular temperament, either some form of equal temperament or some other regular meantone, but in all cases will involve the fundamental features of meantone temperament. For example, the root of chord ii, if tuned to a fifth above the dominant, would be a major whole tone (9:8) above the tonic. If tuned a just minor third (6:5) below a just subdominant degree of 4:3, however, the interval from the tonic would equal a minor whole tone (10:9). Meantone temperament reduces the difference between 9:8 and 10:9. Their ratio, (9:8)/(10:9) = 81:80, is treated as a unison. The interval 81:80, called the syntonic comma or comma of Didymus, is the key comma of meantone temperament. Equal temperament tunings In equal temperament, the octave is divided into equal parts on the logarithmic scale. While it is possible to construct equal temperament scale with any number of notes (for example, the 24-tone Arab tone system), the most common number is 12, which makes up the equal-temperament chromatic scale. In western music, a division into twelve intervals is commonly assumed unless it is specified otherwise. 
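To make the just-versus-equal-temperament comparison concrete (for instance the 550 Hz and 554.37 Hz half-steps in the listening examples above), here is a small Python sketch; it assumes the reference pitch A4 = 440 Hz used in the text, and the function names are purely illustrative.

```python
import math

A4 = 440.0  # reference pitch from the text, in Hz

def equal_tempered(semitones_above_a4):
    """Frequency in 12-tone equal temperament; each semitone is a factor of 2**(1/12)."""
    return A4 * 2 ** (semitones_above_a4 / 12)

def just(ratio):
    """Frequency of a just interval above A4, e.g. 5/4 for a just major third."""
    return A4 * ratio

def cents(f1, f2):
    """Interval size between two frequencies in cents (1200 cents per octave)."""
    return 1200 * math.log2(f2 / f1)

c_sharp_just = just(5 / 4)       # 550.00 Hz, the just major third above A4
c_sharp_et = equal_tempered(4)   # about 554.37 Hz, four equal-tempered semitones above A4
print(f"just C#5: {c_sharp_just:.2f} Hz, equal-tempered C#5: {c_sharp_et:.2f} Hz")
print(f"difference: {cents(c_sharp_just, c_sharp_et):.1f} cents")
```

The roughly 14-cent gap between these two versions of the same note is what produces the audible beating described for the dyad sample.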
For the chromatic scale, the octave is divided into twelve equal parts, each semitone (half-step) is an interval of the twelfth root of two so that twelve of these equal half steps add up to exactly an octave. With fretted instruments it is very useful to use equal temperament so that the frets align evenly across the strings. In the European music tradition, equal temperament was used for lute and guitar music far earlier than for other instruments, such as musical keyboards. Because of this historical force, twelve-tone equal temperament is now the dominant intonation system in the Western, and much of the non-Western, world. Equally tempered scales have been used and instruments built using various other numbers of equal intervals. The 19 equal temperament, first proposed and used by Guillaume Costeley in the 16th century, uses 19 equally spaced tones, offering better major thirds and far better minor thirds than normal 12-semitone equal temperament at the cost of a flatter fifth. The overall effect is one of greater consonance. Twenty-four equal temperament, with twenty-four equally spaced tones, is widespread in the pedagogy and notation of Arabic music. However, in theory and practice, the intonation of Arabic music conforms to rational ratios, as opposed to the irrational ratios of equally tempered systems. While any analog to the equally tempered quarter tone is entirely absent from Arabic intonation systems, analogs to a three-quarter tone, or neutral second, frequently occur. These neutral seconds, however, vary slightly in their ratios dependent on maqam, as well as geography. Indeed, Arabic music historian Habib Hassan Touma has written that "the breadth of deviation of this musical step is a crucial ingredient in the peculiar flavor of Arabian music. To temper the scale by dividing the octave into twenty-four quarter-tones of equal size would be to surrender one of the most characteristic elements of this musical culture." 53 equal temperament arises from the near equality of 53 perfect fifths with 31 octaves, and was noted by Jing Fang and Nicholas Mercator. Connections to mathematics Set theory Musical set theory uses the language of mathematical set theory in an elementary way to organize musical objects and describe their relationships. To analyze the structure of a piece of (typically atonal) music using musical set theory, one usually starts with a set of tones, which could form motives or chords. By applying simple operations such as transposition and inversion, one can discover deep structures in the music. Operations such as transposition and inversion are called isometries because they preserve the intervals between tones in a set. Abstract algebra Expanding on the methods of musical set theory, some theorists have used abstract algebra to analyze music. For example, the pitch classes in an equally tempered octave form an abelian group with 12 elements. It is possible to describe just intonation in terms of a free abelian group. Transformational theory is a branch of music theory developed by David Lewin. The theory allows for great generality because it emphasizes transformations between musical objects, rather than the musical objects themselves. Theorists have also proposed musical applications of more sophisticated algebraic concepts. The theory of regular temperaments has been extensively developed with a wide range of sophisticated mathematics, for example by associating each regular temperament with a rational point on a Grassmannian. 
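A minimal sketch of the transposition and inversion operations discussed under set theory, acting on pitch classes modulo 12; the numbering convention 0 = C is a common assumption and not something fixed by the article.

```python
# Pitch classes as integers mod 12 (assumed convention: 0 = C, 1 = C#, ..., 11 = B).

def transpose(pc_set, n):
    """Transposition T_n: add n semitones to every pitch class, mod 12."""
    return sorted((pc + n) % 12 for pc in pc_set)

def invert(pc_set, n=0):
    """Inversion I_n: map each pitch class pc to (n - pc) mod 12."""
    return sorted((n - pc) % 12 for pc in pc_set)

c_major = [0, 4, 7]                 # C E G
print(transpose(c_major, 2))        # [2, 6, 9]  -> D F# A, a D major triad
print(invert(c_major))              # [0, 5, 8]  -> C F Ab, an F minor triad
```

Both operations preserve the intervals between the tones in a set, which is why the text calls them isometries.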
The chromatic scale has a free and transitive action of the cyclic group , with the action being defined via transposition of notes. So the chromatic scale can be thought of as a torsor for the group. Numbers and series Some composers have incorporated the golden ratio and Fibonacci numbers into their work. Category theory The mathematician and musicologist Guerino Mazzola has used category theory (topos theory) for a basis of music theory, which includes using topology as a basis for a theory of rhythm and motives, and differential geometry as a basis for a theory of musical phrasing, tempo, and intonation. Musicians who were or are also notable mathematicians Albert Einstein – Accomplished pianist and violinist. Art Garfunkel (Simon & Garfunkel) – Masters in Mathematics Education, Columbia University Brian Cox – Professor of particle physics in the School of Physics and Astronomy at the University of Manchester. Brian May (Queen) – BSc (Hons) in Mathematics and Physics, PhD in Astrophysics, both from Imperial College London. Brian Wecht (Ninja Sex Party) – PhD in particle physics, University of California, San Diego Dan Snaith – PhD Mathematics, Imperial College London Delia Derbyshire – BA in mathematics and music from Cambridge. Donald Knuth – Knuth is an organist and a composer. In 2016 he completed a musical piece for organ titled "Fantasia Apocalyptica". It was premièred in Sweden on January 10, 2018 Ethan Port (Savage Republic) – PhD Mathematics, University of Southern California Gregg Turner (Angry Samoans) – PhD Mathematics, Claremont Graduate University Jerome Hines – Five articles published in Mathematics Magazine 1951–1956. Jonny Buckland (Coldplay) – Studied astronomy and mathematics at University College London. Kit Armstrong – Degree in music and MSc in mathematics. Manjul Bhargava – Plays the tabla, won the Fields Medal in 2014. Phil Alvin (The Blasters) – Mathematics, University of California, Los Angeles Philip Glass – Studied mathematics and philosophy at the University of Chicago. Robert Schneider (The Apples in Stereo) – PhD Mathematics, Emory University Tom Lehrer – BA mathematics from Harvard University. William Herschel – Astronomer and played the oboe, violin, harpsichord and organ. He composed 24 symphonies and many concertos, as well as some church music. See also Computational musicology Equal temperament Euclidean rhythms (traditional musical rhythms that are generated by Euclid's algorithm) Harmony search Interval (music) List of music software Mathematics and art Musical tuning Non-Pythagorean scale Piano key frequencies Rhythm The Glass Bead Game 3rd bridge (harmonic resonance based on equal string divisions) Tonality diamond Tonnetz Utonality and otonality References Ivor Grattan-Guinness (1995) "Mozart 18, Beethoven 32: Hidden shadows of integers in classical music", pages 29 to 47 in History of Mathematics: States of the Art, Joseph W. Dauben, Menso Folkerts, Eberhard Knobloch and Hans Wussing editors, Academic Press Further reading Cool math for hot music - A first introduction to mathematics for music theorists by Guerino Mazzola, Maria Mannone, Yan Pang, Springer, 2016, Music: A Mathematical Offering by Dave Benson, Cambridge University Press, 2006, External links Axiomatic Music Theory by S.M. Nemati Music and Math by Thomas E. Fiore Twelve-Tone Musical Scale. Sonantometry or music as math discipline. Music: A Mathematical Offering by Dave Benson. 
Nicolaus Mercator use of Ratio Theory in Music at Convergence The Glass Bead Game Hermann Hesse gave music and mathematics a crucial role in the development of his Glass Bead Game. Harmony and Proportion. Pythagoras, Music and Space. "Linear Algebra and Music" Notefreqs — A complete table of note frequencies and ratios for midi, piano, guitar, bass, and violin. Includes fret measurements (in cm and inches) for building instruments. Mathematics & Music, BBC Radio 4 discussion with Marcus du Sautoy, Robin Wilson & Ruth Tatlow (In Our Time, May 25, 2006) Measuring note similarity with positive definite kernels Mathematics and art Mathematics and culture
Music and mathematics
Mathematics
4,031
581,175
https://en.wikipedia.org/wiki/Vertex-transitive%20graph
In the mathematical field of graph theory, an automorphism is a permutation of the vertices such that edges are mapped to edges and non-edges are mapped to non-edges. A graph is a vertex-transitive graph if, given any two vertices and of , there is an automorphism such that In other words, a graph is vertex-transitive if its automorphism group acts transitively on its vertices. A graph is vertex-transitive if and only if its graph complement is, since the group actions are identical. Every symmetric graph without isolated vertices is vertex-transitive, and every vertex-transitive graph is regular. However, not all vertex-transitive graphs are symmetric (for example, the edges of the truncated tetrahedron), and not all regular graphs are vertex-transitive (for example, the Frucht graph and Tietze's graph). Finite examples Finite vertex-transitive graphs include the symmetric graphs (such as the Petersen graph, the Heawood graph and the vertices and edges of the Platonic solids). The finite Cayley graphs (such as cube-connected cycles) are also vertex-transitive, as are the vertices and edges of the Archimedean solids (though only two of these are symmetric). Potočnik, Spiga and Verret have constructed a census of all connected cubic vertex-transitive graphs on at most 1280 vertices. Although every Cayley graph is vertex-transitive, there exist other vertex-transitive graphs that are not Cayley graphs. The most famous example is the Petersen graph, but others can be constructed including the line graphs of edge-transitive non-bipartite graphs with odd vertex degrees. Properties The edge-connectivity of a connected vertex-transitive graph is equal to the degree d, while the vertex-connectivity will be at least 2(d + 1)/3. If the degree is 4 or less, or the graph is also edge-transitive, or the graph is a minimal Cayley graph, then the vertex-connectivity will also be equal to d. Infinite examples Infinite vertex-transitive graphs include: infinite paths (infinite in both directions) infinite regular trees, e.g. the Cayley graph of the free group graphs of uniform tessellations (see a complete list of planar tessellations), including all tilings by regular polygons infinite Cayley graphs the Rado graph Two countable vertex-transitive graphs are called quasi-isometric if the ratio of their distance functions is bounded from below and from above. A well known conjecture stated that every infinite vertex-transitive graph is quasi-isometric to a Cayley graph. A counterexample was proposed by Diestel and Leader in 2001. In 2005, Eskin, Fisher, and Whyte confirmed the counterexample. See also Edge-transitive graph Lovász conjecture Semi-symmetric graph Zero-symmetric graph References External links A census of small connected cubic vertex-transitive graphs. Primož Potočnik, Pablo Spiga, Gabriel Verret, 2012. Vertex-transitive Graphs On Fewer Than 48 Vertices. Gordon Royle and Derek Holt, 2020. Graph families Algebraic graph theory Regular graphs
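As an illustration of the definition only (not a method from the article or its references), the following brute-force Python sketch enumerates the vertex permutations of a small graph, keeps those that preserve adjacency, and checks whether the resulting automorphisms act transitively on the vertices.

```python
from itertools import permutations

def is_vertex_transitive(n, edges):
    """Brute-force check: does Aut(G) act transitively on the vertices 0..n-1?

    edges -- iterable of unordered pairs (u, v).
    Only practical for very small n, since all n! permutations are tried.
    """
    edge_set = {frozenset(e) for e in edges}
    automorphisms = [
        p for p in permutations(range(n))
        if {frozenset((p[u], p[v])) for u, v in edge_set} == edge_set
    ]
    # Because automorphisms form a group, transitivity holds iff vertex 0
    # can be mapped to every other vertex.
    return all(any(p[0] == v for p in automorphisms) for v in range(n))

cycle5 = [(i, (i + 1) % 5) for i in range(5)]  # 5-cycle: vertex-transitive
path4 = [(0, 1), (1, 2), (2, 3)]               # 4-vertex path: not vertex-transitive
print(is_vertex_transitive(5, cycle5))  # True
print(is_vertex_transitive(4, path4))   # False
```

On the 5-cycle the rotations already map any vertex to any other, so the check succeeds, while the 4-vertex path fails because its endpoints cannot be mapped to its interior vertices.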
Vertex-transitive graph
Mathematics
671
6,908,822
https://en.wikipedia.org/wiki/John%20Michell%20%28writer%29
John Frederick Carden Michell (9 February 1933 – 24 April 2009) was an English author and esotericist who was a prominent figure in the development of the pseudoscientific Earth mysteries movement. Over the course of his life he published over forty books on an array of different subjects, being a proponent of the Traditionalist school of esoteric thought. Born in London to a wealthy family, Michell was educated at Cheam School and Eton College before serving as a Russian translator in the Royal Navy for two years. After failing a degree in Russian and German at Trinity College, Cambridge, he qualified as a chartered surveyor then returned to London and worked for his father's property business, there developing his interest in Ufology. Embracing the counter-cultural ideas of the Earth mysteries movement during the 1960s, in The Flying Saucer Vision he built on Alfred Watkins' ideas of ley lines by arguing that they represented linear marks created in prehistory to guide extraterrestrial spacecraft. He followed this with his most influential work, The View Over Atlantis, in 1969. His ideas were at odds with those of academic archaeologists, for whom he expressed contempt. Michell believed in the existence of an ancient spiritual tradition that connected humanity to divinity, but which had been lost as a result of modernity. He believed however that this tradition would be revived and that humanity would enter a Golden Age, with Britain as the centre of this transformation. Michell's other publications covered an eclectic range of topics, and included an overview on the Shakespeare authorship question, a tract condemning Salman Rushdie during The Satanic Verses controversy, and a book of Adolf Hitler's quotations. Keenly interested in the crop circle phenomenon, he co-founded a magazine devoted to the subject, The Cereologist, in 1990, and served as its initial editor. From 1992 until his death he wrote a column for The Oldie magazine, which was largely devoted to his anti-modernist opinions. He accompanied this with a column on esoteric topics for the Daily Mirror tabloid. A lifelong marijuana smoker, Michell died of lung cancer in 2009. Michell's impact in the Earth mysteries movement was considerable, and through it he also influenced the British Pagan movement. During the 2000s, his ideas also proved an influence on the Radical Traditionalist sector of the New Right. Biography Early life John Frederick Carden Michell was born in London on 9 February 1933. His father, Alfred Henry Michell, was of Cornish & Welsh descent and worked as a property dealer in the capital, while his mother Enid Evelyn (née Carden) was the daughter of Major Sir Frederick Carden, 3rd Baronet, great-granddaughter of Sir Robert Carden, 1st Baronet, who served as Lord Mayor of London in 1857, and 3x great-granddaughter of John Walter, founder of The Times. The eldest of three children, Michell's siblings were named Charles and Clare. Michell was raised at Stargroves, his maternal grandfather's Victorian-era estate on the Berkshire Downs near to Newbury, and it was here that he developed a love of the countryside, learning about the local flora and fauna from a neighbouring naturalist. He was raised into the Anglican denomination of Christianity, although in later life rejected the religion. Michell was initially educated as a boarder at the preparatory Cheam School, where he was Head Boy and excelled at the high jump. 
From there he went to study at Eton College, where he was a contemporary of Lord Moyne and Ian Cameron, the father of future Prime Minister David Cameron. He spent his two years of national service in the Royal Navy, during which time he qualified as a Russian translator at the School of Slavonic Studies. He then went on to study Russian and German at Trinity College, Cambridge, although was unable to secure a third-class degree. He then qualified as a chartered surveyor at a firm in Gloucestershire, before moving back to London to work for his father's property business. Commenting on this job, he later stated that it was "quite amusing, but of course I wasn't any good at it", with property speculators eroding much of his fortune. In 1966 one of his properties, the basement of his own residence, became the base of the London Free School. The Black Power activist Michael X, having previously run a gambling club in the basement, had now become active in the organisation of the LFS and brought Michell into counter-culture activities. Michell began to offer courses in UFOs and ley lines. In 1964, with Jocasta Innes, Michell fathered a son, Jason Goodwin, who also became a writer. The relationship with Innes did not last. Jason Goodwin did not meet his natural father until 1992, at the age of 28, at which point they became quite close. Embracing the Earth Mysteries movement Michell developed an interest in Ufology and Earth mysteries after attending a talk given by Jimmy Goddard at Kensington Central Library on the subject of "Leys and Orthonies" in November 1965. Michell's first publication on the subject of Ufology was the article "Flying Saucers", which appeared in the 30 January 1967 edition of the counter-cultural newspaper International Times. He proceeded to write a book on the subject, but lost the original manuscript after accidentally leaving it in a North London café, at which he had to rewrite it. The book eventually saw publication as The Flying Saucer Vision, published in 1967, when Michell was 35 years old. The Flying Saucer Vision took the idea of Tony Wedd that ley lines – alleged trackways across the landscape whose existence was first argued by Alfred Watkins – represented markers for the flight of extraterrestrial spacecraft and built on it, arguing that early human society was aided by alien entities who were understood as gods, but that these extraterrestrials had abandoned humanity because of the latter's greed for material and technological development. According to Lachman, at this time Michell took the view that "an imminent revelation of literally inconceivable scope" was at hand, and that the appearance of UFOs was linked to "the start of a new phase in our history". Many fans of Michell's work consider it to be "by far his most impressive book". In their social history of Ufology, David Clarke and Andy Roberts stated that Michell's work was "the catalyst and helmsman" for the growing interest in UFOs among the hippie sector of the counter-culture. Subsequently, there was a shift in Michell's emphasis as he became increasingly interested in the landscapes in which he believed that ley lines could be found rather than the UFOs themselves. He wrote an article on "Lung Mei and the Dragon Paths of England" for a September 1967 issue of Image magazine, in which he compared British ley-lines to the Chinese mythological idea of lung mei lines, arguing that this was evidence of a widespread pre-Christian dragon cult in ancient Britain. 
He built on these ideas for The View Over Atlantis, a book which he privately published in 1969, with a republication following three years later. Believing this earth energy to be a real magnetic phenomenon arising naturally from the ground, Michell argued that an ancient religious-scientific elite had traveled the world constructing the lines and various megalithic monuments in order to channel this energy and direct it for the good of humanity. The tone of his work reflected "a fervent religious feeling", describing the existence of an ancient, universal, and true system of belief that was once spread across the ancient world but which had been lost through the degeneracy of subsequent generations. He added however that this ancient knowledge would be revived with the dawning of the Age of Aquarius, allowing for what Michell described as the "rediscovery of access to the divine will". The Pagan studies scholar Amy Hale stated that The View Over Atlantis was "a smash countercultural success", while the historian Ronald Hutton described it as "almost the founding document of the modern earth mysteries movement". Fellow ley-hunter and later biographer Paul Screeton considered it to be a "groundbreaking" work which "re-enchanted the British landscape and empowered a generation to seek out and appreciate the spiritual dimension of the countryside, not least attracting them to reawaken the sleepy town of Glastonbury". The book inspired an array of Earth Mysteries publications in the 1970s and 1980s, accompanied by growth in the ley-hunting movement. Among the most prominent works to build on Michell's ideas during this period were Janet and Colin Bord's Mysterious Britain, which used them in its presentation of a gazetteer of ancient sites, and Paul Screeton's Quicksilver Heritage, which argued that the Neolithic had been a time devoted to spiritual endeavours which had been corrupted by the emergence of metal technologies. Michell associated with many individuals active in this ley-hunting community, and in July 1971 was one of many attendees at a ley-hunters picnic held at Risbury Camp, the largest outdoor gathering of the movement since 1939. In May 1969 Michell established a group known as the Research Into Lost Knowledge Organisation (RILKO) with his friends Keith Critchlow and Mary Williams. In conjunction with the Garnstone Press, RILKO founded the Prehistory and Ancient Science Library, a book series that brought out reprints of older works, such as Watkins' The Old Straight Track and William Stirling's The Canon, both of which contained forewords by Michell. Michell also founded a small publishing company of his own, West Country Editions, through which he brought out his own A Little History of Bladud in 1973 as well as a reprint of Howard C. Levis's 1919 book Bladud of Bath. With his friend John "Peewee" Michael, who lived in Bristol, Michell also established a second small press, Pentacle Books, although it failed to become a commercial success and was short lived. Michell was involved in the summer 1971 Glastonbury Fayre music festival near Pilton, Somerset, where the pyramid stage was built to Michell's specifications and situated at what he claimed were the apex of two ley lines. Through Michael Rainey, Michell was introduced to the members of rock band The Rolling Stones at the Courtfield Road home of band member Brian Jones. Michell befriended the band's lead singer, Mick Jagger, and he accompanied the band on a visit to Stonehenge. 
Michell then went on a visit to Woolhope in Herefordshire with Keith Richards, Anita Pallenberg, Christopher Gibbs, and the filmmaker Kenneth Anger, where they hunted for ley lines and UFOs. Marianne Faithfull later recounted that band member Jones was particularly interested in Michell's ideas. He would later meet with the members of The Grateful Dead on their 1972 European tour; band members Phil Lesh and Jerry Garcia expressed an interest in Michell's Earth Mysteries ideas. Michell's impact on the hippie subculture was recognised by mainstream media, and he was invited to submit an article titled "Flying saucers" to The Listener in May 1968, which was accompanied by a critical piece by editor Karl Miller, in which Michell was described as "less a hippy, perhaps, than a hippy's counsellor, one of their junior Merlins." Hale noted that Michell promoted the idea of "England as a site of spiritual redemption in the New Age", bringing together "popular ideas about sacred geometry, Druids, sacred landscapes, earth energies, Atlantis, and UFOs". In 1972 Michell published a sequel to The View Over Atlantis as City of Revelation. Shortly after publication he stated that he had written the work in "almost two years of near total solitude and intense study in Bath." This work was more complex than its predecessor, including chapters on sacred geometry, numerology, gematria, and the esoteric concept of the New Jerusalem, and required an understanding of mathematics and Classics to follow its arguments. Bob Rickard, founding editor of Fortean Times, has written that Michell's first three works "provided a synthesis of and a context for all the other weirdness of the era. It’s fair to say that it played a big part in the foundation of Fortean Times itself by helping create a readership that wanted more things to think about and a place to discuss them. The overall effect was to help the burgeoning interest in strange phenomena spread out into mainstream culture." Challenging academic archaeology The work of Michell and others in the ley-hunting and Earth mysteries communities were rejected by the professional archaeological establishment, with the prominent British archaeologist Glyn Daniel denouncing what he perceived as the "lunatic fringe". In turn, Michell was hostile to professional and academic archaeologists, accusing them of "treasure hunting and grave robbery" and viewing them as representations of what he interpreted as the evils of modernity. In response to the academic archaeological community's refusal to take the idea of ley lines seriously, in 1970 Michell offered a challenge for professional archaeologists to disprove his ideas regarding the West Peninsula leys. He stated that were he to be proved wrong then he would donate a large sum to charity, but at the time no one took up his offer. However, in 1983 his case study was analysed by two archaeologists, Tom Williamson and Liz Bellamy, as part of their work Ley Lines in Question, a critical analysis of the evidence for ley-lines. They highlighted that Michell had erroneously included medieval crosses and natural features under his definition of late prehistoric monuments, and that arguments for ley-lines more widely could not be sustained. 
The impact of their work on the ley-hunting community was substantial, with one section moving in a more fully religious direction by declaring that leys could only be detected by intuition, and the other renouncing a ley line belief in favour of a more ethnographically rooted analysis of linear connections in the landscape. Responding to their work, Michell said that "I just feel sorry for Williamson and Bellamy that the most exciting thing they can find to do with their youth is to discredit the ley vision." In 1983 Michell published an altered version of his best known work as The New View Over Atlantis. Ioan Culianu, a specialist in gnosticism and Renaissance esoteric studies, in a review in 1991 of The Dimensions of Paradise: The Proportions and Symbolic Numbers in Ancient Cosmology, expressed the view that, "After some deliberation the reader of this book will oscillate between two hypotheses: either that many mysteries of the universe are based on numbers, or that the book's author is a fairly learned crank obsessed with numbers." In 1970, Michell founded the Anti-Metrification Board to oppose the adoption of the metric system of measurement in the United Kingdom. Believing that the established imperial system of measurement had both ancient and sacred origins, through the Board he brought out a newsletter, Just Measure. In 1972 he published the first of his "Radical Traditionalist Papers", A Defence of Sacred Measures, in which he laid out his opposition to the metric system. In his third Radical Traditionalist Paper, published in 1973, he argued against population control, critiquing the ideas of Thomas Robert Malthus and arguing that correct use of resources could maintain an ever-growing human population. His fifth Radical Traditionalist Paper, Concordance to High Monarchists, offered Michell's proposed solution to The Troubles of Northern Ireland; in his view, Ireland should be divided into four provinces, each administered separately but all ultimately pledging allegiance to a High King, in this way mirroring what Michell believed was the socio-political organisation of prehistoric Ireland. Other publications Following the 1975 execution of Michael X for a murder committed in Trinidad, Michell published a souvenir pamphlet to commemorate the execution, claiming that all royalties from its publication would go to Michael X's widow. In 1976 he published The Hip Pocket Hitler, a book containing those quotations from Adolf Hitler, the leader of Nazi Germany, which Michell deemed to be humorous or insightful, thus seeking to portray a side to Hitler that was more favourable than the dominant paradigm. In 1979 he provided an introduction to a translation of Pliny the Elder's Inventorum Natura, which had been illustrated by Una Woodruff. That same year he brought out Simulcra, a work in which he examined perceived faces in natural forms such as trees. In collaboration with Bob Rickard, in 1977 Michell published Phenomena: A Book of Wonders, an encyclopedic work devoted to paranormal and fortean phenomena which covered such topics as UFOs, werewolves, lake monsters, and spontaneous human combustion. They followed this with a second encyclopedic volume, Living Wonders: Mysteries and Curiosities of the Animal World, which appeared in 1982 and was devoted to fortean topics involving animals, with much of it focusing on cryptozoological topics. 
In 1984 he published Eccentric Lives and Peculiar Notions, in which he provided brief biographies of various figures whose ideas had been rejected by mainstream scholarship and society, among them Nesta Webster, Iolo Morganwg, Brinsley Trench, and Comyns Beaumont. In Euphonics: A Poet's Dictionary of Sounds he then argued that every name represents a "vocal imitation" of the subject that it describes, for instance arguing that "s" appears in the words "snake" and "serpent" because it resembles the curved movement of the animal. Following the controversy that erupted around Salman Rushdie's 1988 book The Satanic Verses, Michell published a tract condemning Rushdie, accusing him of deliberately and provocatively insulting Islam. Titled Rushdie's Insult, Michell later withdrew the publication. Michell was keenly interested in the crop circle phenomenon, and with Christine Rhone and Richard Adams he established a magazine devoted to the subject in 1990. Initially titled The Cereologist, some issues would be alternately titled The Cerealogist, and although Michell initially served as the magazine's editor, he stepped down after the ninth issue, although continued to contribute articles to it. In 1991, he published a book on the subject, Dowsing the Crop Circles, and in 2001 followed this with a booklet titled The Face and the Message, which was devoted to a circle depicting the face of a Grey alien which had appeared in Hampshire in August 2001. Despite the longstanding animosity with which Michell held academic archaeology, in 1991 the peer-reviewed archaeological journal Antiquity invited him to author a review of a Southbank exhibit, "From Art to Archaeology", which was duly published in the journal. In the 1980s Michell was a member of the Lindisfarne Association and a teacher at its School of Sacred Architecture. He lectured at the Kairos Foundation, an "educational charity specifically founded to promote the recovery of traditional values in the Arts and Sciences". He was for some years a visiting lecturer at the Prince of Wales' School of Traditional Arts, which had been established by his friend Keith Critchlow. He became a Fellow of the Temenos Academy, a religious organisation which had Traditionalist underpinnings. Newspaper columnist: 1992–2009 From January 1992 until his death, Michell published a monthly column, "An Orthodox Voice", in The Oldie magazine. He primarily used this as an outlet for condemning the modern world and lambasting what he perceived as the stupidity of most contemporary humans. His first article in this outlet contained an attack on evolution which resulted in a published response from the evolutionary biologist Richard Dawkins. He also used his column to encourage the use of mind-altering drugs, in particular LSD. Two anthologies that collected together some of these Oldie columns would be published; the first appeared in 1995 as An Orthodox Voice while the second was published in 2005 as Confessions of a Radical Traditionalist and contained an introduction from the scholar of esotericism Joscelyn Godwin. During this period, Michell also authored occasional book reviews for the conservative magazine, The Spectator. In 1996 Michell published Who Wrote Shakespeare?, in which he outlined various candidates in the Shakespeare authorship question. Who Wrote Shakespeare? received mixed reviews: Publishers Weekly was critical, while The Washington Post and The Independent praised his treatment of the subject. 
To mark their fiftieth anniversary in 1999, the publisher Thames and Hudson – who had published many of Michell's works – suggested that a biography be written by Michell's friend Paul Screeton. Michell however refused to cooperate with the project, which was abandoned. In 2000, Michell published The Temple at Jerusalem: A Revelation, in which he outlined his own interpretation of Jerusalem's Old City. From 2001 to 2004 he contributed several columns to tabloid newspaper The Mirror as part of an ongoing series run by the astrologer Jonathan Cainer. Cainer had sought to bring together a range of esotericists to write on related topics, with Michell's fellow contributors including Mark Winter, Patty Greenall, Sarah Sirillan, and Uri Geller. The series came to an end when Cainer left The Mirror to work for the rival Daily Mail. A keen painter, Michell had an exhibit of his works held at the Christopher Gibbs Gallery in 2003. In April 2007 Michell married Denise Price, the Archdruidess of the Glastonbury Order of Druids, at a ceremony held in Glastonbury's St Benedict's Church, although their relationship ended several months later. A lifelong smoker, Michell contracted lung cancer, and in his final days he was nursed at his son's home in Poole, Dorset, ultimately dying on 24 April 2009, at the age of 76. His body was buried at St Mary's Church in Stoke Abbott on May Day. A high church memorial service was then held at All Saints' Church in Notting Hill, which was attended by around 400 mourners. His work, How the World is Made – which he regarded as his magnum opus – was published posthumously. Thought Throughout his life, Michell's "views remained relatively static", albeit with some exceptions. He characterised his viewpoint as "Radical Traditionalism", which in his words was a perspective "both idealistic and rooted in common sense". Michell was a proponent of the Traditionalist school of esoteric thought. Michell was also interested in the writings of Traditionalist philosopher Julius Evola, agreeing in particular with the sentiments expressed in Evola's Revolt Against the Modern World. He held to the Traditionalist belief in an ancient perennial tradition found across the world, believing that this was passed on by a priesthood in accordance with divine will. He shared the Traditionalist attitude of anti-modernism, believing that modernity had brought about chaos, destruction of the land, and spiritual degradation. He believed that humanity would return to what he perceived as its natural order and enter a Golden Age. Screeton believed that despite his "obvious acts of liberalism", Michell also had a "right-wing streak", with Hale describing Michell as being "quite right-wing in many of his views". She thought it would be "apt" to characterise Michell's thought as being "third positionist" in nature. Angered by the idea of evolution, Michell repeatedly authored articles denouncing it as a false creation myth. Instead he embraced a viewpoint that Screeton referred to as "intelligent design creationism". Accordingly, he was particularly critical of Charles Darwin and Dawkins, lambasting the latter alongside physicist Stephen Hawking as belonging with "the disappointed Marxists, pandering politicians, pettifoggers, grievance-mongers and atheistic bishops who set the tone in modern society." 
Condemning the scientific community's view of the development of the Earth and humanity, he embraced Richard Milton's claim that the Earth was only 20,000 years old, as well as Rupert Sheldrake's idea regarding "morphogenetic fields", believing that it was these – and not biological evolution – that resulted in changes occurring within species. Michell's conception of the physical and spiritual worlds was strongly influenced by the ancient Greek philosopher Plato. He believed that sacred geometry revealed a universal scheme in the landscape which reflected the structure of the heavens. His views on geometry led him to the belief that pre-industrial societies across the world respected the Earth as a living creature imbued with its own spirit, and that humans then created permanent residences for this spirit. He also embraced a belief in the tenets of astrology, alchemy, and prophecy, believing that all had been unfairly rejected by the modern world. Described as an exponent of "British nativist spirituality", he adopted the view of the British-Israelite movement that the British people represented the descendants of the Ten Lost Tribes who are mentioned in the Old Testament. Michell sometimes referred to his approach as "mystic nationalism" and interpreted the island of Britain as being sacred, connecting this attitude to those of William Blake and Lewis Spence. Adopting a millennialist attitude, he believed that in future Britain would be reborn as the New Jerusalem with the coming of a new Golden Age. He believed that humans really desired to live in a state of extreme order, deeming a societal hierarchy to be natural and inevitable. Generally opposed to democracy, except within small groups in which every person knew the individual being elected, Michell instead believed that communities should be led by a strong leader who personified the solar deity. This embrace of the Divine Right of Kings led him to believe that Queen Elizabeth II should take control of Britain as an authoritarian leader who could intercede between the British people and the divine. He was critical of multiculturalism in Britain, believing that each ethnic or cultural group should live independently in an area segregated from other groups, stating that this would allow a people's traditions to remain vibrant. He did not espouse racial supremacy, with his ideas on this subject instead being similar to the ethnopluralism of Alain de Benoist and other New Right thinkers. He was an opponent of British membership of the European Union and also opposed the UK's transition to the metric system, instead favouring the continued use of imperial measurement, believing that the latter had links to the divine order used by ancient society. Personal life At over six feet in height, Michell was described by biographer and friend Paul Screeton as having "a charismatic personality and imposing presence", being "placidly outgoing and the epitome of gentlemanly charm", and usually appeared "cheerful and optimistic". In keeping with his upper-class background, he was described as having an "unmistakable patrician hauteur", with "all the self-assurance, impeccable manners and debonair charm of one born to wealth." Screeton described Michell as "gregarious but slightly shy, unassuming but opinionated. Quixotic in behaviour, he was an exemplary host and fastidious and single-minded when embarked upon a project", although also noted that Michell was impatient with those who did not share his Traditionalist beliefs and values. 
In keeping with norms within the counter-culture, Michell regularly smoked marijuana, and publicly encouraged the use of mind-altering drugs. His favoured newspaper was The Telegraph, a right-wing daily. One of his hobbies was woodworking, and he constructed some of the bookshelves in his home. Although he had a strong dislike of computers and advised his readers not to possess a personal computer, in later life he obtained one in order to type up his writings using a word processor. For many years, he lived at 11 Powis Gardens in Notting Hill, North London. Legacy Screeton described Michell as "a countercultural icon", while Hale stated that on his death, Michell left "a rich legacy of publications and cultural influence". At the time he was remembered as "a charming British eccentric and champion of the outsider". His influence was strongly apparent in the British Pagan community, with many British Pagans being familiar with his writings. The archaeologist Adam Stout noted that Michell played "the major role in the 1960s rediscovery" of the work of Alfred Watkins. Hutton for instance noted that the influence of Michell's ideas could be seen on the Druidic Order of the Pendragon, a Pagan group based in Leicestershire that arose to public attention in 2004. His ideas about dragon energies across the landscape have been incorporated into novels like Judy Allen's 1973 The Spring of the Mountain and Cara Louise's 2006 Annie and the Dragon. Michell's books received a broadly positive reception amongst the "New Age" and "Earth mysteries" movements and he is credited as perhaps being "the most articulate and influential writer on the subject of leys and alternative studies of the past". Ronald Hutton describes his research as part of an alternative archaeology "quite unacceptable to orthodox scholarship." Accordingly, Screeton noted that during his life, Michell was considered to be "anathema, lunatic fringe, and cranky" by his critics, although he rejected the idea that Michell was a "crank", claiming that such an accusation was "fundamentally mistaken". Following his death, various aspects of Michell's work have been adopted by thinkers associated with the European New Right and with related right-wing currents in the United States. Michell's term "Radical Traditionalism", which he espoused in his self-published series of "Radical Traditionalist Papers" in the 1970s and 1980s, would later be taken up as a self-descriptor by Michael Moynihan and Joshua Buckley, the editors of the right-wing journal Tyr: Myth, Culture and Tradition from their inaugural 2002 edition onward. The editors of Tyr gave the term political overtones which were not present in Michell's original usage of the term. Hale believed that through Radical Traditionalism and the New Right Michell's writings have been brought to "a whole new audience" where they have a "surprisingly different sort of relevance." Bibliography 1967 The Flying Saucer Vision: the Holy Grail Restored, Sidgwick & Jackson, Abacus Books, Ace. 1969 The View Over Atlantis, HarperCollins, ; first published by Sago Press in Great Britain in 1969; new edition published in Great Britain by Garnstone Press in 1972 and Abacus in 1973, and in the United States by Ballantine Books in 1972. 1972 City of Revelation: On the Proportions and Symbolic Numbers of the Cosmic Temple, Garnstone Press, , 1974 The Old Stones of Land's End, Garnstone Press, 1975 The Earth Spirit: Its Ways, Shrines, and Mysteries, Avon, 1977 with R. J. M. 
Rickard, Phenomena: A Book of Wonders, Thames & Hudson, 1977 A Little History of Astro-Archaeology: Stages in the Transformation of a Heresy , Thames and Hudson, SBN-10: 0500275572 SBN-10: 0500275572, (reprinted 2001) 1979 Natural Likeness: Faces and Figures in Nature, Thames and Hudson, 1979 Plinius Scundus C., Inventorum Natura, HarperCollins, English Latin, D. MacSweeney (translator) 1981 Ancient Metrology: the Dimensions of Stonehenge and of the Whole World as Therein Symbolized, Pentacle Books, 1982 Megalithomania: Artists, Antiquarians & Archaeologists at the Old Stone Monuments, Thames and Hudson , Cornell University Press 1983 The New View Over Atlantis, Thames and Hudson , , (Much revised edition of The View Over Atlantis.) 1984 Eccentric Lives and Peculiar Notions , Thames and Hudson, reissued Harcourt Brace Jovanovich, 1985 Stonehenge – Its Druids, Custodians, Festival and Future , Richard Adams Associates (June 1985) , 1988 Geosophy – An Overview of Earth Mysteries. Paul Devereux, John Steele, John Michell, Nigel Pennick, Martin Brennan, Harry Oldfield and more, a Mystic Fire Video from Trigon Communications, Inc, New York, 1988 (reissued 1990), also by EMPRESS, Wales, UK, 95 minutes, VHS. 1986 commentary, Feng-Shui: The Science of Sacred Landscape in Old China, Ernest J. Eitel, Syngergetic Press 1988 The Dimensions of Paradise: The Proportions and Symbolic Numbers of Ancient Cosmology, London : Thames and Hudson, 1988. 1989 The Traveller's Key to Sacred England , reissued 2006, Gothic Image 1989 Secrets of the Stones: New Revelations of Astro-Archaeology and the Mystical Sciences of Antiquity, Destiny Books, 1989 Earth Spirit: Its Ways, Shrines and Mysteries , Thames and Hudson, 1990 New Light on the Ancient Mystery of Glastonbury, Gothic Image Publications (p/b), (h/b) 1991 Dowsing the Crop Circles, (Editor/Contributor), Gothic Image Publications, 1991 Twelve Tribe Nations and the Science of Enchanting the Landscape, with Christine Rhone, Thames and Hudson, 1994 At the Center of the World: Polar Symbolism Discovered in Celtic, Norse and Other Ritualized Landscapes, Thames and Hudson, 1996 Who Wrote Shakespeare?, Thames and Hudson 2000, with Bob Rickard, Unexplained Phenomena: Mysteries and Curiosities of Science, Folklore and Superstition, Rough Guides, 2000 The Temple at Jerusalem: A Revelation, Samuel Weiser. , 2001 The Dimensions of Paradise: The Proportions and Symbolic Numbers of Ancient Cosmology , Adventures Unlimited, 2002 The Face and the Message: What Do They Mean and Where Are They From?, Gothic Image, 2003 The Traveller's Guide to Sacred England: A Guide to the Legends, Lore and Landscapes of England's Sacred Places, Gothic Image Publications, 2003 Prehistoric Sacred Sites of Cornwall, Wessex Books, 2005 Confessions of a Radical Traditionalist, Dominion Press, 2006 "Prehistoric Sacred Sites of Cornwall", Wessex Books, 2006 Euphonics: A Poet's Dictionary of Sounds, Wooden Books, 2008 Dimensions of Paradise, The Sacred Geometry, Ancient Science and the Heavenly Order on Earth, (revised edition of City of Revelation) Inner Traditions, Bear & Company. 2009 How The World Is Made: The Story of Creation According To Sacred Geometry, (with Allan Brown), Thames & Hudson 2009 Sacred Center: The Ancient Art of Locating Sanctuaries, Inner Traditions, 2010 Michellany, A John Michell Reader, ed. Jonangus Mackay, Michellany Editions, London. References Footnotes Sources Further reading White, Rupert (2017). 
The Re-enchanted Landscape: Earth Mysteries, Paganism and Art in Cornwall Antenna Publications ISBN 9780993216435 External links The John Michell Network Michell and the 1971 Glastonbury Festival International Fortean Organisation 1933 births 2009 deaths 20th-century English novelists Alumni of Trinity College, Cambridge Ancient astronauts proponents Atlantis proponents English male novelists English writers on paranormal topics Mystics Fortean writers New Age writers People educated at Eton College Pseudohistorians Sacred geometry Far-right politics in the United Kingdom
John Michell (writer)
Engineering
7,213
76,361,821
https://en.wikipedia.org/wiki/Buellia%20oidalea
Buellia oidalea is a species of crustose lichen found along the Pacific coast of North America, from Coos County, Oregon to Baja California Sur. Morphology The thallus of B. oidalea is crustose, varying from thin and rimose-areolate to thick and rugose-verrucose or even subsquamulose. The prothallus is often present, appearing black. The thallus surface is yellowish white to glaucous gray, smooth, and esorediate. The medulla is white and lacks calcium oxalate. The apothecia of Buellia oidalea are lecideine and commonly found, ranging from 0.2 to 2 mm in diameter, and are sessile in nature. Initially, the disc is black, devoid of pruina, and flat, but as it matures, it becomes convex. The margin starts as distinct but eventually becomes excluded and black. The proper exciple measures between 35 and 95 μm in thickness, lacking secondary metabolites, and appears uniformly dark brown throughout, with carbonized cells smaller than 6 μm. It is transient and accompanied by a brown hypothecium, which is less than 280 μm thick. The epihymenium shares a continuous brown pigmentation with the outer exciple. The hymenium is hyaline, generously interspersed with oil droplets, and measures 115 to 165 μm in height. The tips of the paraphyses typically measure less than 3 μm in width and are adorned with distinct apical caps. Asci are clavate, of Bacidia-type, measuring 95 to 112 x 20 to 40 μm, and typically contain 8 spores. The ascospores of Buellia oidalea typically exhibit a hyaline to ±olive coloration, eventually transitioning to brown, while often retaining unpigmented apices. They possess a muriform structure, comprising 13–40 cells in optical section, and are ellipsoid to oblong in shape, measuring (29.5-)33.7-[39.6]-45.5(-57) x (12.5-)13.3-[15.5]-17.7(-26.5) μm. The ascospores feature apical, lateral, and septal wall thickenings, with the apical thickenings often being permanent. Their proper wall is approximately 1.2 μm thick, and they lack a perispore, with no discernible ornamentation visible under DIC. Pycnidia are rare, immersed with only the uppermost part protruding, and the wall is mainly pigmented in the upper part. The conidia are bacilliform, 4-6 x 1 μm. Chemistry Spot tests show the thallus is K-, C+ orange (best seen under the microscope), P-. The medulla is K-, C-, P-. The lichen exhibits UV+ bright or pale yellow to orange fluorescence, and the medulla is nonamyloid in iodine reaction. The secondary chemistry includes diploicin (major), isofulgidin (minor), an unknown (minor) compound, and 2,5-dichloro-3-O-methylnorlichexanthone (trace). Ecology and distribution Buellia oidalea thrives on the bark and wood surfaces of trunks, branches, and twigs found on both broad-leaved and coniferous trees and shrubs. Its habitat encompasses various open environments along the Pacific coast, including dune areas, salt marshes, chaparral, and coastal deserts. This species is predominantly found along the Pacific coast and islands of North America, spanning from Coos County, Oregon to Baja California Sur. Notably, within the Sonoran region, it has been documented in the coastal fog zones of southern California, Baja California, and Baja California Sur. Distinguishing features Buellia oidalea is distinguished by its prominent muriform ascospores. While resembling Buellia oidaliella, it sets itself apart with notably larger spores featuring thickened, frequently unpigmented apices, along with a taller hymenium, and the lack of calcium oxalate. References oidalea Fungus species
Buellia oidalea
Biology
898
3,432,177
https://en.wikipedia.org/wiki/TAUVEX
The Tel Aviv University Ultraviolet Explorer, or TAUVEX (), is a space telescope array conceived by Noah Brosch of Tel Aviv University and designed and constructed in Israel for Tel Aviv University by El-Op, Electro-Optical Industries, Ltd. (a division of Elbit systems) acting as Prime Contractor, for the exploration of the ultraviolet (UV) sky. TAUVEX was selected in 1988 by the Israel Space Agency (ISA) as its first priority scientific payload. Although originally slated to fly on a national Israeli satellite of the Ofeq series, TAUVEX was shifted in 1991 to fly as part of a Spektr-RG international observatory, a collaboration of many countries with the Soviet Union (Space Research Institute) leading. Due to repeated delays of the Spektr project, caused by the economic situation in the post-Soviet Russia, ISA decided to shift TAUVEX to a different satellite. In early-2004 ISA signed an agreement with the Indian Space Research Organisation (ISRO) to launch TAUVEX on board the Indian technology demonstrator satellite GSAT-4. The launch vehicle slated to be used was the GSLV with a new, cryogenic, upper stage. TAUVEX was a scientific collaboration between Tel Aviv University and the Indian Institute of Astrophysics in Bangalore. Its Principal Investigators were Noah Brosch at Tel Aviv University and Jayant Murthy at the Indian Institute of Astrophysics. Originally, TAUVEX was scheduled to be launched in 2008, but various delays caused the integration with GSAT-4 to take place only in November 2009 for a launch the following year. ISRO decided in January 2010 to remove TAUVEX from the satellite since the Indian-built cryogenic upper stage for GSLV was deemed under-powered to bring GSAT-4 to a geosynchronous orbit. GSAT-4 was subsequently lost in the 15 April 2010 launch failure of GSLV. On 13 March 2011 TAUVEX was returned to Israel and was stored at the Prime Contractor facility pending an ISA decision about its future. In 2012 ISA decided to terminate the TAUVEX project, against the recommendation of a committee it formed to consider its future that recommended its release for a high-altitude balloon flight. Instrumentation TAUVEX consists of three bore-sighted 20 cm diameter telescopes on a single bezel, called telescopes A, B, and C. Each telescope images the same sky area of 0.9 degree, with an angular resolution of 7-11 arcseconds. The imaging is onto position-sensitive detectors (CsTe cathodes on calcium fluoride windows) equipped with multi-channel plate electron intensifiers. The detectors oversample the point-spread-function by a factor of approximately three. The output is detected by position-sensitive anodes (wedge-and-strip) and is digitized to 12 bits. The full image of each telescope has about 300 resolution elements across its diameter. The type of cathode (CsTe) assures sensitivity from longward of Lyman α to the atmospheric limit with a peak quantum efficiency of approximately 10%. The operating spectral range is separated in a number of segments selectable with filters. Each telescope [T] is equipped with a four-position filter wheel. Each wheel contains one blocked position (shutter) and three band-selection filters [Fn]. The filter complement, and its distribution among the three telescopes, is as follows: The approximate characteristics of each filter type are summarized below: TAUVEX was mounted to the GSAT-4 spacecraft on a plate that could rotate around its axis (the MDP), enabling to point the telescopes' line-of-sight to any desired declination. 
Because the instrument was to fly on a geostationary satellite, the observations would have been of a scanning type. A 'ribbon' of constant declination, 0.9 degree wide, would have been scanned as time advanced, completing an entire 360 degree circuit during one sidereal day. In this mode of operation, the dwell time of a source within the detector field of view is a function of the pointing declination and of the exact location in the FOV relative to the detector diameter. The closer a source is to one of the celestial poles, the longer it resides in the TAUVEX field of view during a single scan. The longest theoretically possible exposure is for sources at |δ|>89°30'; these could be observed all day. The interface with GSAT-4 ensured that each photon event hitting the detectors would have been transmitted to the ground in real time and processed in a near-real-time pipeline. In between the photon events, a time tag is added every 128 ms. The time between adjacent time tags is sufficiently short that the orbital motion of the nadir-pointing platform is much smaller than the TAUVEX virtual pixel. Given that TAUVEX on GSAT-4 was planned to operate from a geosynchronous platform that is, essentially, a telecommunications satellite, up- and downlink telemetry are much less problematic than with other astronomical satellites. In fact, TAUVEX was allowed a dedicated 1 Mbit/s downlink to the ISRO Master Control Facility (MCF) at Hassan, near Bangalore. Command sequences were to be generated by IIA and ISRO and then uplinked, and the downlink was to be analyzed on-line to monitor the payload's state of health. In most situations, TAUVEX would have been able to download all the detected photon events. However, in the case of strong stray light or of many bright sources in the field of view, the collected event rate could overload the capacity of the telemetry link. In this case, TAUVEX would have stored the photon events in a solid state memory module (4 GB), from which the events would have been transmitted at the nominal 1 Mbit/s rate. 
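A rough feel for the scan geometry described above can be obtained with a few lines of arithmetic. The Python sketch below is illustrative only and is not the mission's documented exposure calculation; it simply assumes a circular 0.9 degree field of view swept along a ribbon of constant declination through 360 degrees of right ascension per sidereal day, so that the on-sky scan rate shrinks by cos(declination) and the dwell time grows toward the celestial poles, as the text states.

import math

SIDEREAL_DAY_S = 86164.1  # length of one sidereal day, in seconds

def dwell_time_s(declination_deg, fov_deg=0.9):
    """Approximate time a source at the given declination spends crossing the
    centre of a circular field of view of diameter fov_deg during one scan.
    Assumes the ribbon covers 360 degrees of right ascension per sidereal day,
    so the great-circle scan rate is 360*cos(dec)/SIDEREAL_DAY_S deg per second."""
    scan_rate_deg_per_s = 360.0 * math.cos(math.radians(declination_deg)) / SIDEREAL_DAY_S
    return fov_deg / scan_rate_deg_per_s

for dec in (0, 30, 60, 85):
    print(f"declination {dec:2d} deg -> dwell time per scan about {dwell_time_s(dec):.0f} s")
# declination  0 deg -> dwell time per scan about 215 s
# declination 30 deg -> dwell time per scan about 249 s
# declination 60 deg -> dwell time per scan about 431 s
# declination 85 deg -> dwell time per scan about 2472 s

Under this simple geometry, the cos(declination) factor becomes so small near the poles that a source at |δ| > 89°30' never leaves the field of view, matching the article's statement that such sources could be observed all day.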
TAUVEX
Astronomy
1,459
39,908,722
https://en.wikipedia.org/wiki/NGC%201288
NGC 1288 is an intermediate barred spiral galaxy located about 196 million light years away in the constellation Fornax. In the nineteenth century, English astronomer John Herschel described it as "very faint, large, round, very gradually little brighter middle." The morphological classification of SABc(rs) indicates a weak bar structure across the nucleus (SAB), an incomplete inner ring orbiting outside the bar (rs), and multiple, moderately wound spiral arms (c). The spiral arms branch at intervals of 120° at a radius of 30″ from the nucleus. The galaxy is most likely surrounded by a dark matter halo, giving it an elevated mass-to-light ratio. On July 17, 2006, a supernova with a magnitude of 16.1 was imaged in this galaxy from Pretoria, South Africa, at 12″ east and 2″ from the galactic core. Designated SN 2006dr, it was determined to be a type Ia supernova. References External links Fornax Barred spiral galaxies 1288 012204
NGC 1288
Astronomy
213
939,244
https://en.wikipedia.org/wiki/Butterfly%20Cluster
The Butterfly Cluster (cataloged as Messier 6 or M6, and as NGC 6405) is an open cluster of stars in the southern constellation of Scorpius. Its name derives from the resemblance of its shape to a butterfly. The first astronomer to record the Butterfly Cluster's existence was Giovanni Battista Hodierna in 1654. However, Robert Burnham Jr. has proposed that the 2nd century astronomer Ptolemy may have seen it with the naked eye while observing its neighbor the Ptolemy Cluster (M7). Credit for the discovery is usually given to Jean-Philippe Loys de Chéseaux in 1746. Charles Messier observed the cluster on May 23, 1764, and added it to his Messier Catalog. Estimates of the Butterfly Cluster's distance have varied over the years. Wu et al. (2009) published a revised distance estimate, which implies a spatial dimension of some 12 light years. Modern measurements show its total visual brightness to be magnitude 4.2. The cluster is estimated to be 94.2 million years old. Cluster members show a slightly higher abundance of elements heavier than helium compared to the Sun, a property astronomers refer to as metallicity. A total of 120 stars, ranging down to visual magnitude 15.1, have been identified as most likely cluster members. Most of the bright stars in this cluster are hot, blue B-type stars, but the brightest member is a K-type orange giant star, BM Scorpii, which contrasts sharply with its blue neighbours in photographs. BM Scorpii is classed as a semiregular variable star, its brightness varying from magnitude +5.5 to magnitude +7.0. There are also eight candidate chemically peculiar stars. The cluster is following an orbit around the Galactic Center through the Milky Way galaxy, with a low eccentricity of 0.03 and an orbital period of 204.2 Myr. At present it is below the galactic plane, and it will cross the plane every 29.4 Myr. See also List of open clusters List of Messier objects References External links Messier 6, SEDS Messier pages Messier objects NGC objects Open clusters Scorpius Orion–Cygnus Arm
Butterfly Cluster
Astronomy
449
13,578,015
https://en.wikipedia.org/wiki/MPLAB
MPLAB is a proprietary freeware integrated development environment for the development of embedded applications on PIC and dsPIC microcontrollers, and is developed by Microchip Technology. MPLAB Extensions for Visual Studio Code and MPLAB X for the NetBeans platform are the latest editions of MPLAB, including support for Microsoft Windows, macOS and Linux operating systems. MPLAB and MPLAB X support project management, code editing, debugging and programming of Microchip 8-bit PIC and AVR (including ATMEGA) microcontrollers, 16-bit PIC24 and dsPIC microcontrollers, as well as 32-bit SAM and PIC32 microcontrollers by Microchip Technology. MPLAB X MPLAB X is the latest version of the MPLAB IDE built by Microchip Technology, and is based on the open-source NetBeans platform. It replaced the older MPLAB 8.x series, which had its final release (version 8.92) on July 23, 2013. MPLAB X is the first version of the IDE to include cross-platform support for macOS and Linux operating systems, in addition to Microsoft Windows. It supports editing, debugging and programming of Microchip 8-bit, 16-bit and 32-bit PIC microcontrollers. It supports automatic code generation with the MPLAB Code Configurator and the MPLAB Harmony Configurator plugins. MPLAB X supports the following compilers: MPLAB XC8 — C compiler for 8-bit PIC and AVR devices MPLAB XC16 — C compiler for 16-bit PIC devices MPLAB XC-DSC — C compiler for the dsPIC family of devices MPLAB XC32 — C/C++ compiler for 32-bit MIPS-based PIC32 and ARM-based SAM devices HI-TECH C — C compiler for 8-bit PIC devices (discontinued) SDCC — open-source 8-bit C compiler Users have reported a number of debugger issues, including the memory view crashing the IDE when searching for an address, step-over occasionally stepping in while step-out fails to work, the disassembler view showing incorrect instructions, phantom breakpoints that cannot be cleared, and automatic firmware updates that sometimes fail and require a full erase of the SNAP debugger. MPLAB 8.x MPLAB 8.x is the discontinued version of the legacy MPLAB IDE technology, custom built by Microchip Technology in Microsoft Visual C++. MPLAB supports project management, editing, debugging and programming of Microchip 8-bit, 16-bit and 32-bit PIC microcontrollers. MPLAB only works on Microsoft Windows. MPLAB is still available from Microchip's archives, but is not recommended for new projects. It is designed to work with MPLAB-certified devices such as the MPLAB ICD 3 and MPLAB REAL ICE, for programming and debugging PIC microcontrollers using a personal computer. PICKit programmers are also supported by MPLAB. MPLAB supports the following compilers: MPLAB MPASM Assembler MPLAB ASM30 Assembler MPLAB C Compiler for PIC18 MPLAB C Compiler for PIC24 and dsPIC DSCs MPLAB C Compiler for PIC32 HI-TECH C References External links Microchip MPLAB Website Embedded systems
MPLAB
Technology,Engineering
693
59,823,364
https://en.wikipedia.org/wiki/Willie%20Rockward
Willie S. Rockward is a physics professor and has served as the chair of the department of physics and engineering physics at Morgan State University since August 2018. His research interests include Micro/Nano Optics Lithography, Extreme Ultraviolet Interferometry, Metamaterials, Terahertz imaging, Nanostructure Characterization, and Crossed Phase Optics. From 2018 to 2020 he was the president of the National Society of Black Physicists. Early life and education Rockward grew up in Louisiana. He attended South Terrebonne High School, where he played American football for the South Terrebonne High School Gators and was a member of the varsity team. He also took part in track and field. He was offered football scholarships at Duke University and Louisiana State University, but was interested in Grambling State University because of the coach Eddie Robinson. Rockward achieved high scores on the ACT and was offered a physics scholarship at Grambling State. At Grambling State, Rockward served as President of the Omega Psi Phi fraternity. He graduated with a B.S. degree cum laude in Physics in 1988. Rockward joined the University at Albany, SUNY for his graduate study and earned an M.S. degree in physics in 1991. He moved to the Georgia Institute of Technology, where he received an M.S. degree in physics in 1994 and a Ph.D. degree in physics in 1997, under the supervision of Donal O'Shea. Together they worked on diffractive optics and quadrature microscopy. Whilst completing his doctorate he worked as a research physicist at the Air Force Research Laboratory, where he developed laser radar and guided munitions. Career Rockward joined the faculty of Morehouse College in 1998. He was research director of the Materials and Optics Research & Engineering (MORE) Laboratory. He worked on nanolithography, terahertz imaging and physics education. He developed a range of research experiences for undergraduates and the Scholarly Mentorship in Laboratory Experiences (SMILE) program. He also established the Nuclear, Materials, and Space Sciences (NuMaSS) Summer School, which introduced middle and high school students to a physics career. He was awarded tenure in 2008. In 2011 Rockward was appointed chair of the department of physics and dual degree engineering, resulting in Morehouse College having the most underrepresented minority Bachelor of Science graduates. As department chair, Rockward investigated the barriers faced by HBCU physics departments. Rockward is an advocate for mentoring as a method to support students from underrepresented groups in physics. He launched "We C.A.R.E" (Curriculum, Advisement, Recruitment/Retention/Research, and Extras), a pedagogical approach that combines sessions on culture, collaboration and career, alongside the Innovative Technology Experiences for Students and Teachers program. He was named the Society of Physics Students Outstanding Chapter Advisor in 2012. In 2017 Rockward was appointed president of Sigma Pi Sigma. He joined Morgan State University in 2018. He worked with Associated Universities, Inc. to secure support from the National Science Foundation to deliver the National Society of Black Physicists conference. He has delivered the keynote talk at the Conference for Underrepresented Minority Physicists (CU2MiP). His current work focuses on extreme ultraviolet laser light and spectroscopic analysis of binary star systems. 
Personal life Rockward has served a combination of 23 years as Pastor of the Divine Unity Missionary Baptist Church in East Point Georgia and Associate Minister of Antioch Baptist Church North in Atlanta, Georgia. Rockward is married to mathematician Michelle Rockward. References Georgia Tech alumni Morehouse College faculty Morgan State University faculty 20th-century American physicists 21st-century American physicists Scientists from Louisiana Grambling State University alumni University at Albany, SUNY alumni Living people Year of birth missing (living people) Members of the National Society of Black Physicists 20th-century African-American scientists 21st-century African-American scientists African-American physicists
Willie Rockward
Materials_science
803
2,942,638
https://en.wikipedia.org/wiki/Gloss%20%28optics%29
Gloss is an optical property which indicates how well a surface reflects light in a specular (mirror-like) direction. It is one of the important parameters that are used to describe the visual appearance of an object. Other categories of visual appearance related to the perception of regular or diffuse reflection and transmission of light have been organized under the concept of cesia in an order system with three variables, including gloss among the involved aspects. The factors that affect gloss are the refractive index of the material, the angle of incident light and the surface topography. Apparent gloss depends on the amount of specular reflection – light reflected from the surface in an equal amount and at the symmetrical angle to that of the incoming light – in comparison with diffuse reflection – the amount of light scattered into other directions. Theory When light illuminates an object, it interacts with it in a number of ways: Absorbed within it (largely responsible for colour) Transmitted through it (dependent on the surface transparency and opacity) Scattered from or within it (diffuse reflection, haze and transmission) Specularly reflected from it (gloss) Variations in surface texture directly influence the level of specular reflection. Objects with a smooth surface, i.e. highly polished or containing coatings with finely dispersed pigments, appear shiny to the eye due to a large amount of light being reflected in a specular direction, whilst rough surfaces reflect no specular light as the light is scattered in other directions and therefore appear dull. The image forming qualities of these surfaces are much lower, making any reflections appear blurred and distorted. Substrate material type also influences the gloss of a surface. Non-metallic materials, i.e. plastics etc., produce a higher level of reflected light when illuminated at a greater illumination angle due to light being absorbed into the material or being diffusely scattered depending on the colour of the material. Metals do not suffer from this effect, producing higher amounts of reflection at any angle. The Fresnel formula gives the specular reflectance, Rs, for unpolarized light of intensity I0, at angle of incidence i, giving the intensity of the specularly reflected beam Ir, while the refractive index of the surface specimen is n. The Fresnel equation is given as follows: Rs = Ir/I0 = (1/2)[((cos i − √(n² − sin²i))/(cos i + √(n² − sin²i)))² + ((n²cos i − √(n² − sin²i))/(n²cos i + √(n² − sin²i)))²]. Surface roughness Surface roughness influences the specular reflectance levels; in the visible frequencies, the surface finish in the micrometre range is most relevant. The diagram on the right depicts the reflection at an angle i (measured from the surface normal) on a rough surface with a characteristic roughness height variation h. The path difference between rays reflected from the top and bottom of the surface bumps is: Δ = 2h cos i. When the wavelength of the light is λ, the phase difference will be: Δφ = 4πh cos i/λ. If Δφ is small, the two beams (see Figure 1) are nearly in phase, resulting in constructive interference; therefore, the specimen surface can be considered smooth. But when Δφ = π, the beams are not in phase and, through destructive interference, cancellation of each other will occur. Low intensity of specularly reflected light means the surface is rough and it scatters the light in other directions. If the middle phase value, Δφ = π/2, is taken as the criterion for a smooth surface, then substitution into the equation above will produce: h < λ/(8 cos i). This smooth surface condition is known as the Rayleigh roughness criterion. History The earliest studies of gloss perception are attributed to Leonard R. 
Ingersoll, who in 1914 examined the effect of gloss on paper. Quantitatively measuring gloss with instrumentation, Ingersoll based his research on the theory that light is polarised on specular reflection, whereas diffusely reflected light is non-polarised. The Ingersoll "glarimeter" had a specular geometry, with incident and viewing angles at 57.5°. Using this configuration, gloss was measured by a contrast method which subtracted the specular component from the total reflectance using a polarizing filter.
In the 1930s, work by A. H. Pfund suggested that although specular shininess is the basic (objective) evidence of gloss, actual surface glossy appearance (subjective) relates to the contrast between the specular shininess and the diffuse light of the surrounding surface area (now called "contrast gloss" or "luster"). If black and white surfaces of the same shininess are visually compared, the black surface will always appear glossier because of the greater contrast between the specular highlight and the black surroundings, as compared with the white surface and its surroundings. Pfund was also the first to suggest that more than one method was needed to analyze gloss correctly.
In 1937, R. S. Hunter, as part of his research paper on gloss, described six different visual criteria attributed to apparent gloss. These criteria involve the relationships between an incident beam of light, I, a specularly reflected beam, S, a diffusely reflected beam, D, and a near-specularly reflected beam, B.
Specular gloss – the perceived brightness and brilliance of highlights. Defined as the ratio of the light reflected from a surface at an equal but opposite angle to that incident on the surface.
Sheen – the perceived shininess at low grazing angles. Defined as the gloss at grazing angles of incidence and viewing.
Contrast gloss – the perceived brightness of specularly and diffusely reflecting areas. Defined as the ratio of the specularly reflected light to that diffusely reflected normal to the surface.
Absence of bloom – the perceived cloudiness in reflections near the specular direction. Defined as a measure of the absence of haze or a milky appearance adjacent to the specularly reflected light; haze is the inverse of absence-of-bloom.
Distinctness of image gloss – identified by the distinctness of images reflected in surfaces. Defined as the sharpness of the specularly reflected light.
Surface texture gloss – identified by the lack of surface texture and surface blemishes. Defined as the uniformity of the surface in terms of visible texture and defects (orange peel, scratches, inclusions etc.).
A surface can therefore appear very shiny if it has a well-defined specular reflectance at the specular angle. The perception of an image reflected in the surface can be degraded by appearing unsharp, or by appearing to be of low contrast. The former is characterised by the measurement of distinctness-of-image, and the latter by haze or contrast gloss.
In his paper Hunter also noted the importance of three main factors in the measurement of gloss:
The amount of light reflected in the specular direction
The amount and way in which the light is spread around the specular direction
The change in specular reflection as the specular angle changes
For his research he used a glossmeter with a specular angle of 45°, as did most of the first photoelectric methods of that type. Later studies, however, by Hunter and D. B.
Judd in 1939, on a larger number of painted samples, concluded that the 60° geometry was the best angle to use so as to provide the closest correlation to visual observation.

Standard gloss measurement
Standardisation in gloss measurement was led by Hunter and ASTM (American Society for Testing and Materials), who produced ASTM D523, Standard test method for specular gloss, in 1939. This incorporated a method for measuring gloss at a specular angle of 60°. Later editions of the standard (1951) included methods for measuring at 20°, for evaluating high-gloss finishes, developed at the DuPont Company (Horning and Morse, 1947), and at 85° (matte, or low, gloss). ASTM has a number of other gloss-related standards designed for application in specific industries, including the old 45° method, which is now used primarily for glazed ceramics, polyethylene and other plastic films. In 1937, the paper industry adopted a 75° specular-gloss method because the angle gave the best separation of coated book papers. This method was adopted in 1951 by the Technical Association of the Pulp and Paper Industries as TAPPI Method T480. In the paint industry, measurements of specular gloss are made according to International Standard ISO 2813 (BS 3900, Part 5, UK; DIN 67530, Germany; NFT 30-064, France; AS 1580, Australia; JIS Z8741, Japan, are also equivalent). This standard is essentially the same as ASTM D523, although differently drafted. Studies of polished metal surfaces and anodised aluminium automotive trim in the 1960s by Tingle, Potter and George led to the standardisation of gloss measurement of high-gloss surfaces by goniophotometry under the designation ASTM E430. This standard also defined methods for the measurement of distinctness-of-image gloss and reflection haze.

See also
List of optical topics
Distinctness of image

External links
PCI Magazine article: What is the Level of Confidence in Measuring Gloss?
NPL: Good practice guide for the measurement of Gloss

Optics Physical properties
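As an illustration of the Fresnel and Rayleigh formulas in the Theory section above, the following minimal Python sketch computes the unpolarized specular reflectance and the Rayleigh smooth-surface height limit. The function names and the example values (refractive index 1.5, the 60° glossmeter angle, 550 nm green light) are illustrative assumptions, not prescribed by any standard.

```python
import math

def fresnel_unpolarized(i_deg: float, m: float) -> float:
    """Specular reflectance of unpolarized light at angle of incidence
    i_deg (degrees from the surface normal) for refractive index m."""
    i = math.radians(i_deg)
    if i == 0.0:                        # normal incidence: closed form
        return ((m - 1.0) / (m + 1.0)) ** 2
    r = math.asin(math.sin(i) / m)      # Snell's law: sin i = m sin r
    return 0.5 * ((math.sin(i - r) ** 2) / (math.sin(i + r) ** 2)
                  + (math.tan(i - r) ** 2) / (math.tan(i + r) ** 2))

def rayleigh_smooth_limit(wavelength: float, i_deg: float) -> float:
    """Maximum roughness height h (same units as wavelength) for which a
    surface still counts as 'smooth': h < lambda / (8 cos theta)."""
    return wavelength / (8.0 * math.cos(math.radians(i_deg)))

# Example: a typical polymer coating (m ~ 1.5) at the 60-degree angle.
print(fresnel_unpolarized(60.0, 1.5))      # ~0.089 -> about 9% specular
print(rayleigh_smooth_limit(550.0, 60.0))  # 137.5 nm height limit
```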
Gloss (optics)
Physics,Chemistry
1,780
75,205,974
https://en.wikipedia.org/wiki/First%20universal%20common%20ancestor
The first universal common ancestor (FUCA) is a proposed non-cellular entity that was the earliest organism with a genetic code capable of biological translation of RNA molecules into peptides to produce proteins. Its descendants include the last universal common ancestor (LUCA) and every modern cell. FUCA would also be the ancestor of ancient sister lineages of LUCA, none of which have modern descendants, but which are thought to have horizontally transferred some of their genes into the genome of early descendants of LUCA. FUCA is thought to have been composed of progenotes, proposed ancient biological systems that would have used RNA for their genome and self-replication. By comparison, LUCA would have had a complex metabolism and a DNA genome with hundreds of genes and gene families.

Origins
Long before the appearance of compartmentalized biological entities like FUCA, life had already begun to organize itself and emerge in a pre-cellular era known as the RNA world. The universal presence of both the biological translation mechanism and the genetic code in all biological systems indicates monophyly: a unique origin for all biological systems, including viruses and cells. FUCA would have been the first organism capable of biological translation, using RNA molecules to convert information into peptides and produce proteins. This first translation system would have been assembled together with a primeval, possibly error-prone genetic code. That is, FUCA would be the first biological system to have a genetic code for proteins. The development of FUCA likely took a long time. FUCA was generated without a genetic code, from the ribosome, itself a system evolved from the maturation of a ribonucleoprotein machinery. FUCA appeared when a proto-peptidyl transferase center first started to emerge, when RNA-world replicators became capable of catalyzing the bonding of amino acids into oligopeptides. The first genes of FUCA most likely encoded ribosomal proteins, primitive tRNA-aminoacyl transferases and other proteins that helped to stabilize and maintain biological translation. The random peptides produced possibly bound back to the single-stranded nucleic acid polymers and allowed a higher stabilization of the system, which became more robust and was further bound by other stabilizing molecules. When FUCA had matured, its genetic code was completely established. FUCA was composed of a population of open-system, self-replicating ribonucleoproteins. With the arrival of these systems began the progenote era. These systems evolved into maturity when self-organization processes resulted in the creation of a genetic code. This genetic code was for the first time capable of organizing an ordered interaction between nucleic acids and proteins through the formation of a biological language. This caused pre-cellular open systems to start to accumulate information and self-organize, producing the first genomes by the assembly of biochemical pathways, which probably appeared in different progenote populations evolving independently. In the reduction hypothesis, where giant viruses evolved from primordial cells that became parasitic, viruses might have evolved after FUCA and before LUCA.

Progenotes
Progenotes (also called ribocytes or ribocells) are semi-open or open biological systems capable of performing an intense exchange of genetic information, before the existence of cells and LUCA.
The term progenote was coined by Carl Woese in 1977, around the time he introduced the concept of the three domains of life (bacteria, archaea, and eukaryotes) and proposed that each domain originated from a different progenote. The meaning of the term changed with time. In the 1980s, Doolittle and Darnell used the word progenote to designate the ancestor of all three domains of life, now referred to as the last universal common ancestor (LUCA). The terms ribocyte and ribocell refer to progenotes as protoribosomes, primeval ribosomes that were hypothetical cellular organisms with self-replicating RNA but without DNA, and thus with an RNA genome instead of the usual DNA genome. In Carl Woese's Darwinian threshold period of cellular evolution, the progenotes are also thought to have had RNA as their informational molecule instead of DNA. The evolution of the ribosome from ancient ribocytes, self-replicating machines, into its current form as a translational machine may have been the selective pressure to incorporate proteins into the ribosome's self-replicating mechanisms, so as to increase its capacity for self-replication. Ribosomal RNA is thought to have emerged before cells or viruses, at the time of the progenotes. Progenotes both composed FUCA and were its descendants, and FUCA is thought to have organized the process between the initial organization of biological systems and the maturation of progenotes. Progenotes were dominant in the progenote age, the time when biological systems originated and initially assembled. The progenote age would have occurred after the pre-biotic age of the RNA world and peptide world, but before the age of organisms and mature biological systems like viruses, bacteria and archaea. The most successful progenote populations were probably the ones capable of binding and processing carbohydrates, amino acids and other intermediate metabolites and co-factors. In progenotes, compartmentalization with membranes was not yet complete and translation of proteins was not precise. Not every progenote had its own metabolism; different metabolic steps were present in different progenotes. Therefore, it is assumed that there existed a community of sub-systems that started to cooperate collectively and culminated in LUCA.

Ribocytes and viruses
In the eocyte hypothesis, the organism at the root of all eocytes may have been a ribocyte of the RNA world. For cellular DNA and DNA handling, an "out of virus" scenario has been proposed: storing genetic information in DNA may have been an innovation performed by viruses and later handed over to ribocytes twice, once transforming them into bacteria and once transforming them into archaea. Similarly, in viral eukaryogenesis, a hypothesis theorizing that eukaryotes evolved from a DNA virus, ribocytes may have been an ancient host for the DNA virus. As ribocytes used RNA to store their genetic information, viruses may initially have adopted DNA as a way to resist RNA-degrading enzymes in the host ribocells. Hence, the contribution from such a new component may have been as significant as the contribution from chloroplasts or mitochondria. Following this hypothesis, archaea, bacteria, and eukaryotes each obtained their DNA informational system from a different virus.

See also
Abiogenesis
Alternative abiogenesis scenarios
Earliest known life forms
RNP world

Origin of life Hypothetical life forms Evolutionary biology Genetic genealogy Events in biological evolution Phylogenetics
First universal common ancestor
Biology
1,397
44,981,609
https://en.wikipedia.org/wiki/West%20Black%20Sea%20region%20%28statistical%29
The West Black Sea Region (Turkish: Batı Karadeniz Bölgesi) (TR8) is a statistical region in Turkey.

Subregions and provinces
Zonguldak Subregion (TR81)
Zonguldak Province (TR811)
Karabük Province (TR812)
Bartın Province (TR813)
Kastamonu Subregion (TR82)
Kastamonu Province (TR821)
Çankırı Province (TR822)
Sinop Province (TR823)
Samsun Subregion (TR83)
Samsun Province (TR831)
Tokat Province (TR832)
Çorum Province (TR833)
Amasya Province (TR834)

Age groups
Internal immigration
State register location of West Black Sea residents
Marital status of 15+ population by gender
Education status of 15+ population by gender

See also
NUTS of Turkey

External links
TURKSTAT

Sources
ESPON Database

Statistical regions of Turkey
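The statistical codes above are hierarchical: region TR8 contains NUTS-2 subregions TR81–TR83, whose codes in turn prefix the NUTS-3 province codes. A small Python mapping (a sketch for illustration only; the nesting follows the list above) makes this explicit:

```python
# TR8 hierarchy as a nested mapping:
# NUTS-2 subregion code -> {NUTS-3 province code: province name}.
WEST_BLACK_SEA_TR8 = {
    "TR81": {"TR811": "Zonguldak", "TR812": "Karabük", "TR813": "Bartın"},
    "TR82": {"TR821": "Kastamonu", "TR822": "Çankırı", "TR823": "Sinop"},
    "TR83": {"TR831": "Samsun", "TR832": "Tokat",
             "TR833": "Çorum", "TR834": "Amasya"},
}

# Each province code starts with its subregion code, so the subregion is
# recoverable from the prefix alone.
assert all(p.startswith(s)
           for s, provs in WEST_BLACK_SEA_TR8.items()
           for p in provs)
```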
West Black Sea region (statistical)
Mathematics
202
6,576,473
https://en.wikipedia.org/wiki/Smallest%20organisms
The smallest organisms found on Earth can be determined according to various aspects of organism size, including volume, mass, height, length, or genome size. Given the incomplete nature of scientific knowledge, it is possible that the smallest organism is undiscovered. Furthermore, there is some debate over the definition of life, and what entities qualify as organisms; consequently the smallest known organisms (microorganisms) may be nanobes, which can be 20 nanometers long.

Microorganisms
Obligate endosymbiotic bacteria
The genome of Nasuia deltocephalinicola, a symbiont of the European pest leafhopper, Macrosteles quadripunctulatus, consists of a circular chromosome of 112,031 base pairs. The genome of Nanoarchaeum equitans is 491 kbp long.

Pelagibacter ubique
Pelagibacter ubique is one of the smallest known free-living bacteria, with a length of and an average cell diameter of . They also have the smallest genome of any free-living bacterium: 1.3 Mbp, 1,354 protein genes and 35 RNA genes. They are among the most common and smallest organisms in the ocean, with their total weight exceeding that of all fish in the sea.

Mycoplasma genitalium
Mycoplasma genitalium, a parasitic bacterium which lives in the primate bladder, waste disposal organs, genital, and respiratory tracts, is thought to be the smallest known organism capable of independent growth and reproduction. With a size of approximately 200 to 300 nm, M. genitalium is an ultramicrobacterium, smaller than other small bacteria, including rickettsia and chlamydia. However, the vast majority of bacterial strains have not been studied, and the marine ultramicrobacterium Sphingomonas sp. strain RB2256 is reported to have passed through an ultrafilter. A complicating factor is nutrient-downsized bacteria, bacteria that become much smaller due to a lack of available nutrients.

Nanoarchaeum
Nanoarchaeum equitans is a species of microbe in diameter. It was discovered in 2002 in a hydrothermal vent off the coast of Iceland by Karl Stetter. A thermophile that grows in near-boiling temperatures, Nanoarchaeum appears to be an obligate symbiont of the archaeon Ignicoccus; it must be in contact with the host organism to survive. Guinness World Records recognizes Nanoarchaeum equitans as the smallest living organism.

Eukaryotes (Eukaryota)
Prasinophyte algae of the genus Ostreococcus are the smallest free-living eukaryotes. The single cell of an Ostreococcus measures across.

Heliozoa
The Erebor lineage of Microheliella maris is the smallest known heliozoan, with an average cell body diameter of 2.56 μm.

Viruses
Some biologists consider viruses to be non-living because they lack a cellular structure and cannot metabolize by themselves, requiring a host cell to replicate and synthesize new products. Others hold that, because viruses do have genetic material and can employ the metabolism of their host, they can be considered organisms. Also, an emerging concept that is gaining traction among some virologists is that of the virocell, in which the actual phenotype of a virus is the infected cell, and the virus particle (or virion) is merely a reproductive or dispersal stage, much like pollen or a spore. The smallest viruses in terms of genome size are single-stranded DNA (ssDNA) viruses. Perhaps the most famous is the bacteriophage Phi-X174, with a genome size of 5,386 nucleotides. However, some ssDNA viruses can be even smaller. For example, Porcine circovirus type 1 has a genome of 1,759 nucleotides and a capsid diameter of .
As a whole, viruses of the family Geminiviridae are about in length. However, the two capsids making up the virus are fused; divided, the capsids would be in length. Other environmentally characterized ssDNA viruses, such as CRESS DNA viruses, among others, can have genomes of considerably fewer than 2,000 nucleotides. The smallest RNA virus in terms of genome size is phage BZ13 strain T72, at 3,393 nucleotides in length. Viruses using both DNA and RNA in their replication (retroviruses) range in size from 7,040 to 12,195 nucleotides. The smallest double-stranded DNA viruses are the hepadnaviruses such as hepatitis B, at 3.2 kb and ; parvoviruses have smaller capsids, at , but larger genomes, at 5 kb. It is also important to consider other self-replicating genetic elements, such as obelisks, ribozymes, satelliviruses and viroids.

Animals (Animalia)
Several species of Myxozoa (obligately parasitic cnidarians) never grow larger than . One of the smallest species (Myxobolus shekel) is no more than when fully grown, making it the smallest known animal.

Molluscs (Mollusca)
Bivalvia
The shell of the nut clam Condylonucula maya grows long.

Gastropods (Gastropoda)
The smallest water snail (of all snails) is Ammonicera minortalis in North America, originally described from Cuba. It measures . The smallest land snail is Acmella nana. Discovered in Borneo, and described in November 2015, it measures . The previous record was that of Angustopila dominikae from China, which was reported in September 2015. This snail measures .

Cephalopods (Cephalopoda)
Maximites was the smallest known ammonoid. Adult specimens reached only in shell diameter.

Arthropods (Arthropoda)
The smallest arthropods are the mites Cochlodispus minimus of the family Microdispidae. The body length of the smallest measured individual was .

Crustaceans (Crustacea)
The smallest crustaceans belong to the class Tantulocarida. The single smallest species may be Tantulacus dieteri, with a total body length of only . Another candidate is Stygotantulus stocki, with a length of .

Arachnids (Arachnida)
There is a debate about which spider is smallest. According to Guinness World Records, "Two contenders are from the Symphytognathidae genus Patu: males of Patu digua described in Colombia had a body length of , while the Samoan moss spider (P. marplesi) could be as small as long." Other possible smallest spider species are the Frade cave spider (Anapistula ataecina) and the dwarf orb weaver (Anapistula caecula), the females of which are and respectively. Males of both species are potentially smaller than the females, but no male Anapistula ataecina or Anapistula caecula have been measured yet. Cochlodispus minimus is the smallest mite. An adult individual was measured with a body length of . However, PBS claims "The tiniest mite on record is 82 microns long" but does not name a species.

Insects (Insecta)
Adult males of the parasitic wasp Dicopomorpha echmepterygis can be as small as long, smaller than some species of protozoa (single-celled creatures); females are 40% larger. Megaphragma caribea from Guadeloupe, measuring long, is another contender for the smallest known insect in the world. Beetles of the tribe Nanosellini are all less than long; the smallest confirmed specimen is of Scydosella musawasensis, at long; a few other nanosellines are reportedly smaller in the historical literature, but none of these records have been confirmed using accurate modern tools. These are among the tiniest non-parasitic insects.
The western pygmy blue (Brephidium exilis) is one of the smallest butterflies in the world, with a wingspan of about .

Echinoderms (Echinodermata)
The smallest sea cucumber, and also the smallest echinoderm, is Psammothuria ganapati, a synaptid that lives between sand grains on the coast of India. Its maximum length is .

Sea urchins
The smallest sea urchin, Echinocyamus scaber, has a test across.

Starfish
Patiriella parvivipara is the smallest starfish, at across.

Fish
One of the smallest vertebrates, and the smallest fish based on the minimum size at maturity, is Paedocypris progenetica from Indonesia, with mature females measuring as little as in standard length. This fish, a member of the carp family, has a translucent body and a head unprotected by a skeleton. Another of the smallest fish based on the minimum size at maturity is Schindleria brevipinguis from Australia, whose females reach and males . Males of S. brevipinguis have an average standard length of ; a gravid female was . This fish, a member of the goby family, differs from similar members of the group in having its first anal fin ray further forward, under dorsal fin 4. Male individuals of the anglerfish species Photocorynus spiniceps have been documented to be at maturity, and thus claimed to be a smaller species. However, these survive only by sexual parasitism, and the female individuals reach the significantly larger size of .

Amphibians (Amphibia)
Frogs and toads (Anura)
The smallest known vertebrate (and smallest amphibian) is Brachycephalus pulex, a Brazilian flea toad, with a minimum adult snout–vent length of . Brachycephalus dacnis is similarly tiny, with a minimum adult length of . Other very small frogs include: Paedophryne amauensis from Papua New Guinea, ranging in length from , and on average; Brachycephalus didactylus from Brazil (reported as ); several species of Eleutherodactylus such as E. iberia (around ), E. limbatus () and Eleutherodactylus orientalis () from Cuba; Gardiner's frog (Sechellophryne gardineri) from the Seychelles (up to ); and several species of Stumpffia such as S. tridactyla () and S. pygmaea (males ; females ), and Wakea madinika (males ; females ) from Madagascar. The two species Microhyla borneensis (males ; females ) and Arthroleptella rugosa (males ; females ) were once the smallest known frogs from the Old World. In general these extremely small frogs occur in tropical forest and montane environments. There is relatively little data on size variation among individuals, growth from metamorphosis to adulthood, or size variation among populations in these species. Additional studies and the discovery of further minute frog species are likely to change the rank order of this list.

Salamanders, newts and allies (Urodela)
The average snout-to-vent length (SVL) of several specimens of the salamander Thorius arboreus was .

Sauropsids (Sauropsida)
Lizards and snakes (Squamata)
The miniature chameleon Brookesia nana, with a snout-vent length of , may represent the smallest known lizard and smallest reptile. The dwarf gecko (Sphaerodactylus ariasae) is also one of the smallest known reptile species, with a snout-vent length of . S. ariasae was first described in 2001 by the biologists Blair Hedges and Richard Thomas. This dwarf gecko lives in Jaragua National Park in the Dominican Republic and on Beata Island (Isla Beata), off the southern coast of the Dominican Republic.
A few Brookesia chameleons from Madagascar are equally small, with a reported snout-vent length of for male dwarf chameleons (B. minima), for male Mount d'Ambre leaf chameleons (B. tuberculata) and for male B. micra, though females are larger. One of the smallest known snakes is the recently discovered Barbados threadsnake (Leptotyphlops carlae). Adults average about long, which is only about twice as long as the hatchlings. The common blind snake (Indotyphlops braminus) measures long, occasionally up to long.

Turtles and tortoises (Testudines)
The smallest turtle is the speckled padloper tortoise (Homopus signatus) from South Africa. The males measure , while females measure up to almost .

Archosaurs (Archosauria)
Crocodiles and close relatives (Crocodylomorpha)
The smallest extant crocodilian is Cuvier's dwarf caiman (Paleosuchus palpebrosus) from northern and central South America. It reaches up to in length. Some extinct crocodylomorphs were even smaller. Fully grown Bernissartia from the Early Cretaceous reached a bit more than in length. The Early Cretaceous terrestrial notosuchian Malawisuchus was no more than long. Other small notosuchians include Anatosuchus at and the herbivorous Simosuchus at .

Pterosaurs (Pterosauria)
Nemicolopterus was the smallest pterosaur; it reached about in wingspan.

Non-avian dinosaurs (Dinosauria)
Sizes of non-avian dinosaurs are commonly labelled with a level of uncertainty, as the available material often (or even usually) is incomplete. The smallest known extinct non-avian dinosaur is Anchiornis, a genus of feathered dinosaur that lived in what is now China during the Late Jurassic Period, 160 to 155 million years ago. Adult specimens range from long, and the weight has been estimated at up to . Parvicursor was initially seen as one of the smallest non-avian dinosaurs known from an adult specimen, at in length and in weight. However, in 2022 its holotype was concluded to represent a juvenile individual. Epidexipteryx reached in length and in weight.

Birds (Aves)
With a mass of approximately and a length of , the bee hummingbird (Mellisuga helenae) is the smallest known dinosaur as well as the smallest bird species, and the smallest warm-blooded vertebrate. Called the zunzuncito in its native habitat in Cuba, it is lighter than a Canadian or U.S. penny. It is said that it is "more apt to be mistaken for a bee than a bird". The bee hummingbird eats half its total body mass and drinks eight times its total body mass each day. Its nest is across. The smallest waterfowl is the pygmy goose (Nettapus); the African species reaches an average weight of about for males and for females, and a single-wing length of between and . The second smallest waterfowl is the extinct Mioquerquedula from the Miocene. The smallest penguin species is the little blue penguin (Eudyptula minor), which stands around tall and weighs . The smallest bird of prey is the black-thighed falconet (Microhierax fringillarius), with a wingspan of , roughly the size of a sparrow.

Non-mammalian synapsids (Synapsida)
The smallest Mesozoic mammaliaform was Hadrocodium, with a skull in length and a body mass of .

Mammals (Mammalia)
Marsupials (Marsupialia)
The smallest marsupial is the long-tailed planigale from Australia. It has a body length of (including tail) and weighs on average. The Pilbara ningaui is considered to be of similar size and weight.

Shrews (Eulipotyphla)
The Etruscan shrew (Suncus etruscus) is the smallest mammal by mass, weighing about on average.
The smallest mammal that ever lived, the shrew-like Batodonoides vanhouteni, weighed .

Bats (Chiroptera)
Kitti's hog-nosed bat (Craseonycteris thonglongyai), also known as the bumblebee bat, from Thailand and Myanmar, is the smallest mammal by length, at in length and in weight.

Carnivorans (Carnivora)
The smallest member of the order Carnivora is the least weasel (Mustela nivalis), with an average body length of . It weighs between , with females being lighter.

Rodents (Rodentia)
The smallest known member of the rodent order is the Baluchistan pygmy jerboa, with an average body length of .

Primates (Primates)
The smallest member of the primate order is Madame Berthe's mouse lemur (Microcebus berthae), found in Madagascar, with an average body length of .

Cetaceans (Cetacea)
The smallest cetacean, which is also (as of 2006) the most endangered, is the vaquita, a species of porpoise. Male vaquitas grow to an average of around ; the females are slightly longer, averaging about in length.

Embryophytes (Embryophyta)
Gymnosperms (Gymnospermae)
Zamia pygmaea is a cycad found in Cuba, and the smallest known gymnosperm. It grows to a height of .

Angiosperms (Angiospermae)
Duckweeds of the genus Wolffia are the smallest angiosperms. Fully grown, they measure and reach a mass of just 150 μg.

Dicotyledons
The smallest known dicotyledon is the Himalayan dwarf mistletoe (Arceuthobium minutissimum). Shoots grow up to in height.

Other
Nanobes
Nanobes are thought by some scientists to be the smallest known organisms, about one tenth the size of the smallest known bacteria. Nanobes, tiny filamental structures first found in some rocks and sediments, were first described in 1996 by Philippa Uwins of the University of Queensland, but it is unclear what they are, and whether they are alive.

See also
Largest organisms
Largest prehistoric organisms

External links
Featherwing beetles on the UF / IFAS Featured Creatures Web site
Smallest organisms
Biology
3,888
13,973,133
https://en.wikipedia.org/wiki/Kolbe%20nitrile%20synthesis
The Kolbe nitrile synthesis is a method for the preparation of alkyl nitriles by reaction of the corresponding alkyl halide with a metal cyanide. A side product of this reaction is the corresponding isonitrile, formed because the cyanide ion is an ambident nucleophile. The reaction is named after Hermann Kolbe.
\underset{alkyl\ halide}{R-X} + \underset{cyanide\ ion}{CN^\ominus} -> \underset{alkyl\ nitrile}{R-C{\equiv}N} + \underset{alkyl\ isonitrile}{R-\overset\oplus N{\equiv}C^\ominus}
The ratio of product isomers depends on the solvent and the reaction mechanism, and can be predicted by Kornblum's rule. Using alkali cyanides such as sodium cyanide in polar solvents, the reaction occurs by an SN2 mechanism via the more-nucleophilic carbon atom of the cyanide ion. This type of reaction, together with dimethyl sulfoxide as a solvent, is a convenient method for the synthesis of nitriles. The use of DMSO was a major advancement in the development of this reaction, as it works for more sterically hindered electrophiles (secondary and neopentyl halides) without rearrangement side-reactions.

See also
Rosenmund–von Braun reaction, a similar reaction for the synthesis of aromatic nitriles
, a similar reaction with enones

Substitution reactions Name reactions
Kolbe nitrile synthesis
Chemistry
359
24,130,404
https://en.wikipedia.org/wiki/CrystaSulf
CrystaSulf is the trade name for a chemical process used for removing hydrogen sulfide (H2S) from natural gas, synthesis gas and other gas streams in refineries and chemical plants. CrystaSulf uses a modified liquid-phase Claus reaction to convert the hydrogen sulfide into elemental sulfur, which is then removed from the process by filtration. CrystaSulf is used in the energy industry as a mid-range process to handle sulfur amounts between 0.1 and 20 tons per day. Below 0.1 tons of sulfur per day is typically managed by H2S scavengers, and applications above 20 tons per day are typically treated with the amine–Claus process.

Process chemistry
In the CrystaSulf process, a heavy hydrocarbon liquid is pumped through an absorber where the liquid contacts the gas stream that contains H2S. The H2S is absorbed from the gas stream, and the clean gas stream then exits the absorber. The H2S remains in the liquid, where it reacts with sulfur dioxide (SO2) to form elemental sulfur and water according to the following chemical equation (the stoichiometry is worked through in the sketch at the end of this entry):
2 H2S + SO2 → 3 S + 2 H2O
The elemental sulfur formed remains dissolved in the hydrocarbon solution. Since the elemental sulfur remains dissolved, there are no solids, i.e., no slurry, in the absorber section of the process. This eliminates the plugging problems that have been documented for aqueous redox processes when operated at high pressures. The SO2 present in the hydrocarbon solution was added prior to the solution being pumped into the absorber. The SO2 is chemically bound in the solution and readily available to react with H2S. The process is operated with an excess amount of SO2 in the solution so that there will always be a sufficient amount for the liquid-phase Claus reaction with H2S.

Elemental sulfur removal
After the dissolved elemental sulfur is formed in the hydrocarbon solution, the liquid is piped from the absorber, through a flash vessel if necessary to lower the operating pressure, and then through a crystallizer. The crystallizer reduces the temperature of the solution, and solid sulfur is formed, which is removed by a filter. After the solid sulfur is removed, the hydrocarbon solution is re-heated to approximately 150 °F and then pumped back into the absorber to continue the process of absorbing more H2S and removing it from the gas stream. The hydrocarbon solution has a low corrosion rate.

Patent information for CrystaSulf
6,416,729 B1; Process for removing hydrogen sulfide from gas streams which include or are supplemented with sulfur dioxide, by scrubbing with a nonaqueous sorbent; 9-Jul-2002; David W. DeBerry, Dennis A. Dalrymple
6,818,194; Process for removing hydrogen sulfide from gas streams which include or are supplemented with sulfur dioxide, by scrubbing with a nonaqueous sorbent; 16-Nov-2004; David W. DeBerry, Dennis A. Dalrymple, Kevin S. Fisher
5,733,516; Process for removal of hydrogen sulfide from a gas stream; 31-Mar-1998; David W. DeBerry
5,738,834; System for removal of hydrogen sulfide from a gas stream; 14-Apr-1998; David W. DeBerry

See also
Acid gas
Amine treating
Claus process
Crystatech
Hydrogenation
Sour gas

External links
New Process for H2S Management - Bnet
CrystaSulf Information

Oil refining Chemical processes Acid gas control Natural gas
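The 2:1 H2S-to-SO2 molar ratio of the liquid-phase Claus reaction fixes both the reagent demand and the sulfur yield. The short Python sketch below works through that arithmetic; the 1 t/day feed rate is an invented example within the process's stated mid-range, not plant data.

```python
# Stoichiometry of the liquid-phase Claus reaction above:
#   2 H2S + SO2 -> 3 S + 2 H2O
M_H2S, M_SO2, M_S = 34.08, 64.07, 32.06   # molar masses, g/mol

def claus_balance(h2s_kg_per_day: float) -> tuple[float, float]:
    """Return (kg SO2 required, kg S produced) per day for a given
    H2S removal rate, from the 2:1:3 molar ratios."""
    n_h2s = h2s_kg_per_day * 1000.0 / M_H2S   # mol H2S removed
    n_so2 = n_h2s / 2.0                        # 1 mol SO2 per 2 mol H2S
    n_s = n_h2s * 3.0 / 2.0                    # 3 mol S per 2 mol H2S
    return n_so2 * M_SO2 / 1000.0, n_s * M_S / 1000.0

so2_kg, s_kg = claus_balance(1000.0)  # e.g. 1 t/day of H2S removed
print(f"SO2 required: {so2_kg:.0f} kg/day, sulfur produced: {s_kg:.0f} kg/day")
# -> roughly 940 kg/day SO2 and 1411 kg/day elemental sulfur
```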
CrystaSulf
Chemistry
747
52,560,308
https://en.wikipedia.org/wiki/Cardiovascular%20Toxicology
Cardiovascular Toxicology is a quarterly peer-reviewed scientific journal covering molecular aspects of cardiovascular disease. It was established in 2001 and is published by Springer Science+Business Media. The editor-in-chief is Y. James Kang (University of Louisville School of Medicine). According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.239. References External links Cardiology journals Toxicology journals Springer Science+Business Media academic journals Academic journals established in 2001 English-language journals Quarterly journals
Cardiovascular Toxicology
Environmental_science
101
916,587
https://en.wikipedia.org/wiki/Polyvinyl%20butyral
Polyvinyl butyral (or PVB) is a resin mostly used for applications that require strong binding, optical clarity, adhesion to many surfaces, toughness and flexibility. It is prepared from polyvinyl alcohol by reaction with butyraldehyde. The major application is laminated safety glass for automobile windshields. Trade names for PVB films include KB PVB, GUTMANN PVB, Saflex, GlasNovations, Butacite, WINLITE, S-Lec, Trosifol and EVERLAM. PVB is also available as a 3D-printer filament that is stronger and more heat resistant than polylactic acid (PLA).

Applications
Automotive and architectural
Laminated glass, commonly used in the automotive and architectural fields, comprises a protective interlayer, usually polyvinyl butyral, bonded between two panels of glass. The bonding process takes place under heat and pressure. When laminated under these conditions, the PVB interlayer becomes optically clear and binds the two panes of glass together. Once sealed together, the glass "sandwich" (i.e., laminate) behaves as a single unit and looks like normal glass. The polymer interlayer of PVB is tough and ductile, so brittle cracks will not pass from one side of the laminate to the other.

Colors
PVB interlayer can be manufactured in colored sheets, such as for the blue or green "shade band" at the top edge of many automobile windshields. PVB interlayers can also be manufactured in different colors for architectural laminated glass.

Solar modules
PVB has gained acceptance among manufacturers of photovoltaic thin-film solar modules. The photovoltaic circuit is formed on a sheet of glass using thin-film deposition and patterning techniques. PVB and a second sheet of glass (called back glass) are then placed directly on the circuit. The lamination of this sandwich encapsulates the circuit, protecting it from the environment. Current is extracted from the module at a sealed terminal box that is attached to the circuit through a hole in the back glass. Another common laminating material used in the solar industry is ethylene-vinyl acetate (EVA).

Non-film applications
PVB resins (provided by the manufacturer in powdered or granulated form) are also utilized in a range of applications including temporary binders for technical ceramics, inks, dye-transfer ribbon inks, paints and coatings (including wash primers), binders for reflective sheet and binders for magnetic media. PVB resin is particularly useful at bonding to metals, ceramics and other inorganics.

Properties of PVB-laminated glass
Annealed, heat-strengthened, or tempered glass can be used to produce laminated glass. While laminated glass will crack if struck with sufficient force, the resulting glass fragments tend to adhere to the interlayer rather than falling free and potentially causing injury. In practice, the interlayer provides three beneficial properties to laminated glass panes: first, the interlayer distributes impact forces across a greater area of the glass panes, thus increasing the impact resistance of the glass; second, the interlayer binds the resulting shards if the glass is ultimately broken; third, the viscoelastic interlayer undergoes plastic deformation during impact and under static loads after impact, absorbing energy and reducing penetration by the impacting object, as well as reducing the energy of the impact that is transmitted to the impacting object, e.g. a passenger in a car crash. Thus, the benefits of laminated glass include safety and security. Laminated glass also has decorative applications.
The interlayer can be colored or patterned.

History
PVB was invented in 1927 by the Canadian chemists Howard W. Matheson and Frederick W. Skirrow. PVB has been the dominant interlayer material since the late 1930s. It is manufactured and marketed by a number of companies worldwide, including:
Saflex, made by Eastman in Kingsport, Tennessee
S-Lec films and powdered resins, made by Sekisui in Kyoto, Japan; Winchester, Kentucky; Geleen and Roermond, the Netherlands; and Cuernavaca, Mexico
Kuraray Europe GmbH, which manufactures Trosifol and Mowital / Pioloform PVB products in Frankfurt, Germany
Chang Chung Petrochemicals Co. Ltd of Taiwan, which manufactures WINLITE brand PVB products
EVERLAM, in Hamm-Uentrop, Germany, which markets its eponymous Everlam brand
The market for laminated glass products is mature. With only minor modifications, the PVB interlayer sold today is essentially identical to the PVB sold 30 years ago. Since its introduction in 1938, the worldwide market for PVB interlayer has been dominated by a handful of large chemical companies. As a result, inventive efforts have tended toward methods of making the interlayer itself cheaper to manufacture, or making the interlayer easier to handle and less prone to material defects during the process of fabricating laminated glass.

Other interlayer materials
Other types of interlayer materials are in use, including polyurethanes such as Duraflex-brand thermoplastic polyurethane film, manufactured by Bayer MaterialScience, Leverkusen, Germany.

See also
Glass
Polyvinyl chloride

Further reading
Study of PVB from several manufacturers that establishes the possibility of using recycled PVB from laminated glass. (PDF; 75 kB)

Vinyl polymers Synthetic resins Transparent materials Car windows
Polyvinyl butyral
Physics,Chemistry
1,119
35,951,900
https://en.wikipedia.org/wiki/Data%20grid
A data grid is an architecture or set of services that allows users to access, modify and transfer extremely large amounts of geographically distributed data for research purposes. Data grids make this possible through a host of middleware applications and services that pull together data and resources from multiple administrative domains and then present them to users upon request. The data in a data grid can be located at a single site or multiple sites, where each site can be its own administrative domain governed by a set of security restrictions as to who may access the data. Likewise, multiple replicas of the data may be distributed throughout the grid outside their original administrative domain, and the security restrictions placed on the original data as to who may access it must be applied equally to the replicas. Specifically developed data grid middleware handles the integration between users and the data they request by controlling access while making it available as efficiently as possible.

Middleware
Middleware provides all the services and applications necessary for efficient management of datasets and files within the data grid while providing users quick access to those datasets and files. There are a number of concepts and tools that must be available to make a data grid operationally viable. At the same time, not all data grids require the same capabilities and services, because of differences in access requirements, security and the location of resources in comparison to users. In any case, most data grids will have similar middleware services that provide for a universal namespace, a data transport service, a data access service, a data replication service and a resource management service. Taken together, these are key to the data grid's functional capabilities.

Universal namespace
Since the sources of data within the data grid will consist of data from multiple separate systems and networks using different file naming conventions, it would be difficult for a user to locate data within the data grid and know they retrieved what they needed based solely on existing physical file names (PFNs). A universal or unified namespace makes it possible to create logical file names (LFNs) that can be referenced within the data grid and that map to PFNs. When an LFN is requested or queried, all matching PFNs are returned, including possible replicas of the requested data. The end user can then choose from the returned results the most appropriate replica to use. This service is usually provided as part of a management system known as a Storage Resource Broker (SRB). Information about the locations of files and the mappings between LFNs and PFNs may be stored in a metadata or replica catalogue. The replica catalogue would contain information about LFNs that map to multiple replica PFNs.

Data transport service
Another middleware service is that of providing for data transport or data transfer. Data transport encompasses multiple functions that are not limited to the transfer of bits; it also includes such items as fault tolerance and data access. Fault tolerance can be achieved in a data grid by providing mechanisms that ensure data transfer will resume after each interruption until all requested data is received. There are multiple possible methods, ranging from starting the entire transmission over from the beginning of the data to resuming from where the transfer was interrupted.
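As a rough illustration of the resume-from-interruption approach, the toy Python sketch below restarts an HTTP download from the last byte already on disk. Real grid movers such as GridFTP implement their own restart protocols, so this is an analogy under stated assumptions (a server that honors Range requests), not their API.

```python
import os
import urllib.request

def fetch_with_resume(url: str, dest: str, chunk: int = 1 << 16) -> None:
    """Toy resumable download: skip the bytes already present in 'dest'
    and append the remainder. Assumes the server supports Range requests."""
    offset = os.path.getsize(dest) if os.path.exists(dest) else 0
    req = urllib.request.Request(url, headers={"Range": f"bytes={offset}-"})
    with urllib.request.urlopen(req) as resp, open(dest, "ab") as out:
        while True:
            block = resp.read(chunk)
            if not block:          # end of stream: transfer complete
                break
            out.write(block)
```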
GridFTP, for example, provides fault tolerance by sending data from the last acknowledged byte without starting the entire transfer from the beginning. The data transport service also provides for the low-level access and connections between hosts for file transfer. The data transport service may use any number of modes to implement the transfer, including parallel data transfer, where two or more data streams are used over the same channel, and striped data transfer, where two or more streams access different blocks of the file for simultaneous transfer, as well as using the underlying built-in capabilities of the network hardware or specifically developed protocols to support faster transfer speeds. The data transport service might optionally include a network overlay function to facilitate the routing and transfer of data, as well as file I/O functions that allow users to see remote files as if they were local to their system. The data transport service hides from the user the complexity of access and transfer between the different systems, so the data appears as one unified source.

Data access service
Data access services work hand in hand with the data transfer service to provide security, access controls and management of any data transfers within the data grid. Security services provide mechanisms for the authentication of users to ensure they are properly identified. Common forms of security for authentication include the use of passwords or Kerberos. Authorization services are the mechanisms that control what the user is able to access after being identified through authentication. Common forms of authorization mechanisms can be as simple as file permissions. However, the need for more stringently controlled access to data is met using access control lists (ACLs), role-based access control (RBAC) and task-based authorization controls (TBAC). These types of controls can provide granular access to files, from limits on access times and duration of access down to controls that determine which files can be read or written. The final data access service that might be present to protect the confidentiality of the data transport is encryption. The most common form of encryption for this task has been the use of SSL while in transport. While all of these access services operate within the data grid, access services within the various administrative domains that host the datasets still stay in place to enforce access rules. The data grid access services must be in step with the administrative domains' access services for this to work.

Data replication service
To meet the needs for scalability, fast access and user collaboration, most data grids support replication of datasets to points within the distributed storage architecture. The use of replicas allows multiple users faster access to datasets and preserves bandwidth, since replicas can often be placed strategically close to or within the sites where users need them. However, the replication of datasets and creation of replicas is bound by the availability of storage within sites and the bandwidth between sites. The replication and creation of replica datasets is controlled by a replica management system. The replica management system determines user needs for replicas based on input requests and creates them based on the availability of storage and bandwidth. All replicas are then cataloged, or added to a directory of the data grid, with their locations recorded for query by users.
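A replica catalogue of the kind just described can be pictured as a one-to-many mapping from LFNs to PFNs. The toy Python sketch below uses invented names and URLs for illustration; it is not the interface of any particular SRB implementation.

```python
from collections import defaultdict

class ReplicaCatalogue:
    """Toy catalogue: one logical file name (LFN) maps to the physical
    file names (PFNs) of all of its replicas."""
    def __init__(self) -> None:
        self._map: dict[str, list[str]] = defaultdict(list)

    def register(self, lfn: str, pfn: str) -> None:
        self._map[lfn].append(pfn)

    def lookup(self, lfn: str) -> list[str]:
        """Return every known replica; the user (or a broker) picks one."""
        return list(self._map[lfn])

cat = ReplicaCatalogue()
cat.register("lfn://climate/run42.nc", "gsiftp://siteA.example/data/run42.nc")
cat.register("lfn://climate/run42.nc", "gsiftp://siteB.example/mirror/run42.nc")
print(cat.lookup("lfn://climate/run42.nc"))   # both PFNs are returned
```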
In order to perform the tasks undertaken by the replica management system, it needs to be able to manage the underlying storage infrastructure. The data management system will also ensure that changes to replicas are propagated to all nodes in a timely manner.

Replication update strategy
There are a number of ways the replication management system can handle the updates of replicas. The updates may be designed around a centralized model, where a single master replica updates all others, or a decentralized model, where all peers update each other. The topology of node placement may also influence the updates of replicas. If a hierarchy topology is used, then updates flow in a tree-like structure through specific paths. In a flat topology, it is entirely a matter of the peer relationships between nodes as to how updates take place. In a hybrid topology consisting of both flat and hierarchy topologies, updates may take place through specific paths and between peers.

Replication placement strategy
There are a number of ways the replication management system can handle the creation and placement of replicas to best serve the user community. If the storage architecture supports replica placement with sufficient site storage, then it becomes a matter of the needs of the users who access the datasets and a strategy for the placement of replicas. There have been numerous strategies proposed and tested on how to best manage the replica placement of datasets within the data grid to meet user requirements. There is not one universal strategy that fits every requirement best; it is the type of data grid and the user community's access requirements that determine the best strategy to use. Replicas can even be created where the files are encrypted for confidentiality, which would be useful in a research project dealing with medical files. The following subsections describe several strategies for replica placement.

Dynamic replication
Dynamic replication is an approach to the placement of replicas based on the popularity of the data. The method has been designed around a hierarchical replication model. The data management system keeps track of the available storage on all nodes. It also keeps track of requests (hits) for the data that clients (users) in a site are requesting. When the number of hits for a specific dataset exceeds the replication threshold, it triggers the creation of a replica on the server that directly services the user's client. If the directly servicing server, known as a father, does not have sufficient space, then the father's father in the hierarchy becomes the target to receive a replica, and so on up the chain until it is exhausted. The data management system algorithm also allows for the dynamic deletion of replicas that have a null access value, or a value lower than the frequency of the data to be stored, in order to free up space. This improves system performance in terms of response time and number of replicas, and helps balance load across the data grid. This method can also use dynamic algorithms that determine whether the cost of creating the replica is truly worth the expected gains given the location; a sketch of the hit-count trigger follows.
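The sketch below is a toy rendering of that trigger: a per-node hit counter, a fixed threshold, and escalation up the father chain when space runs out. The threshold, class names and example sizes are invented for illustration, not taken from any published algorithm's parameters.

```python
THRESHOLD = 100            # hits that trigger replication (illustrative)

class Node:
    """A storage node in the hierarchy; 'father' is its parent node."""
    def __init__(self, capacity: int, father: "Node | None" = None):
        self.capacity, self.used = capacity, 0
        self.father = father
        self.hits: dict[str, int] = {}   # dataset name -> request count here
        self.store: set[str] = set()     # datasets replicated on this node

    def free_space(self) -> int:
        return self.capacity - self.used

def on_request(node: Node, dataset: str, size: int) -> None:
    """Record a client hit; replicate once the threshold is exceeded,
    escalating to the father's father and beyond if storage is short."""
    node.hits[dataset] = node.hits.get(dataset, 0) + 1
    if node.hits[dataset] > THRESHOLD and dataset not in node.store:
        target = node
        while target is not None and target.free_space() < size:
            target = target.father       # walk up the chain
        if target is not None and dataset not in target.store:
            target.store.add(dataset)    # create the replica
            target.used += size

# Example: a two-level hierarchy; the 101st hit triggers replication, and
# the small leaf escalates to its father, which holds the replica.
root = Node(capacity=1000)
leaf = Node(capacity=10, father=root)
for _ in range(101):
    on_request(leaf, "climate/run42.nc", size=50)
print("climate/run42.nc" in root.store)  # True
```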
Adaptive replication
This method of replication, like dynamic replication, has been designed around the hierarchical replication model found in most data grids. It works on a similar algorithm to dynamic replication, with file access requests being a prime factor in determining which files should be replicated. A key difference, however, is that the number and frequency of replica creations are keyed to a dynamic threshold that is computed from the request arrival rates from clients over a period of time. If the number of requests on average exceeds the previous threshold and shows an upward trend, and storage utilization rates indicate the capacity to create more replicas, more replicas may be created. As with dynamic replication, replicas whose request counts fall below the threshold and that were not created in the current replication interval can be removed to make space for new replicas.

Fair-share replication
Like the adaptive and dynamic replication methods before it, fair-share replication is based on a hierarchical replication model. Also, as in the two before, the popularity of files plays a key role in determining which files will be replicated. The difference with this method is that the placement of the replicas is based on the access load and storage load of candidate servers. A candidate server may have sufficient storage space but be servicing many clients requesting access to stored files. Placing a replica on this candidate could degrade performance for all clients accessing it. Therefore, placement of replicas with this method is done by evaluating each candidate node's access load to find a suitable node for the placement of the replica. If all candidate nodes are rated equally for access load, with none less accessed than the others, then the candidate node with the lowest storage load will be chosen to host the replicas. Methods similar to those of the other described replication schemes are used to remove unused or less-requested replicas if needed. Replicas that are removed might be moved to a parent node for later reuse should they become popular again.

Other replication
The above three are but a few of the many possible replication strategies that may be used to place replicas within the data grid where they will improve performance and access. Below are some others that have been proposed and tested, along with the previously described replication strategies.
Static – uses a fixed replica set of nodes, with no dynamic changes to the files being replicated.
Best Client – each node records the number of requests per file received during a preset time interval; if the request number for a file exceeds the set threshold, a replica is created on the best client, the one that requested the file the most; stale replicas are removed based on another algorithm.
Cascading – used in a hierarchical node structure, where the number of requests per file received during a preset time interval is compared against a threshold. If the threshold is exceeded, a replica is created at the first tier down from the root; if the threshold is exceeded again, a replica is added to the next tier down, and so on like a waterfall effect, until a replica is placed at the client itself.
Plain Caching – if the client requests a file, it is stored as a copy on the client.
Caching plus Cascading – combines the caching and cascading strategies.
Fast Spread – also used in a hierarchical node structure, this strategy automatically populates all nodes in the path of the client that requests a file.

Tasks scheduling and resource allocation
Characteristics of data grid systems such as large scale and heterogeneity require specific methods of task scheduling and resource allocation. To resolve the problem, the majority of systems use extended classic methods of scheduling.
Others employ fundamentally different methods based on incentives for autonomous nodes, like virtual money or the reputation of a node. Another peculiarity of data grids, their dynamic nature, consists in the continuous process of nodes connecting and disconnecting, and in local load imbalances during the execution of tasks. This can render the results of the initial resource allocation for a task obsolete or non-optimal. As a result, many data grids utilize execution-time adaptation techniques that permit the systems to respond to dynamic changes: balance the load, replace disconnecting nodes, take advantage of newly connected nodes, and recover a task execution after faults.

Resource management system (RMS)
The resource management system represents the core functionality of the data grid. It is the heart of the system that manages all actions related to storage resources. In some data grids it may be necessary to create a federated RMS architecture, because of different administrative policies and the diversity of capabilities found within the data grid, in place of using a single RMS. In such a case the RMSs in the federation will employ an architecture that allows for interoperability based on an agreed-upon set of protocols for actions related to storage resources.

RMS functional capabilities
Fulfillment of user and application requests for data resources based on the type of request and on policies; an RMS will be able to support multiple policies and multiple requests concurrently
Scheduling, timing and creation of replicas
Policy and security enforcement within the data grid resources, including authentication, authorization and access
Support for systems with different administrative policies to inter-operate while preserving site autonomy
Support for quality of service (QoS) when requested, if the feature is available
Enforcement of system fault tolerance and stability requirements
Management of resources, i.e. disk storage, network bandwidth and any other resources that interact directly with, or as part of, the data grid
Management of trust concerning resources in administrative domains; some domains may place additional restrictions on how they participate, requiring adaptation of the RMS or the federation
Support for adaptability, extensibility and scalability in relation to the data grid

Topology
Data grids have been designed with multiple topologies in mind to meet the needs of the scientific community. Four topologies have been used in data grids, each with a specific purpose in mind for where it will be best utilized. Each of these topologies is further explained below.
Federation topology is the choice for institutions that wish to share data from already existing systems. It allows each institution control over its data. When an institution with proper authorization requests data from another institution, it is up to the institution receiving the request to determine whether the data will go to the requesting institution. The federation can be loosely integrated between institutions, tightly integrated, or a combination of both.
Monadic topology has a central repository into which all collected data is fed. The central repository then responds to all queries for data. There are no replicas in this topology, as compared to the others. Data is accessed only from the central repository, which could be by way of a web portal. One project that uses this data grid topology is the Network for Earthquake Engineering Simulation (NEES) in the United States.
This works well when all access to the data is local or within a single region with high-speed connectivity.
Hierarchical topology lends itself to collaboration where there is a single source for the data and it needs to be distributed to multiple locations around the world. One project that benefits from this topology is CERN's Large Hadron Collider, which generates enormous amounts of data located at one source that must be distributed around the world to the organizations collaborating in the project.
Hybrid topology is simply a configuration combining any of the previously mentioned topologies. It is used mostly in situations where researchers working on projects want to share their results and make them readily available for collaboration.

History
The need for data grids was first recognized by the scientific community in connection with climate modeling, where terabyte- and petabyte-sized data sets were becoming the norm for transport between sites. More recent research requirements for data grids have been driven by the Large Hadron Collider (LHC) at CERN, the Laser Interferometer Gravitational Wave Observatory (LIGO), and the Sloan Digital Sky Survey (SDSS). These scientific instruments produce large amounts of data that need to be accessible by large groups of geographically dispersed researchers. Other uses for data grids involve governments, hospitals, schools and businesses, where efforts are under way to improve services and reduce costs by providing access to dispersed and separate data systems through the use of data grids.
From its earliest beginnings, the concept of a data grid to support the scientific community was thought of as a specialized extension of the “grid”, which itself was first envisioned as a way to link supercomputers into meta-computers. That view was short-lived, and the grid evolved to mean the ability to connect computers anywhere on the web to get access to any desired files and resources, similar to the way electricity is delivered over a grid by simply plugging in a device: the device gets electricity through its connection, and the connection is not limited to a specific outlet. From this the data grid was proposed as an integrating architecture capable of delivering resources for distributed computations. It would also be able to service anywhere from a few to thousands of queries at the same time while delivering gigabytes to terabytes of data for each query. The data grid would include its own management infrastructure, capable of managing all aspects of the data grid's performance and operation across multiple wide area networks while working within the existing framework known as the web.
The data grid has also been defined more recently in terms of usability: what must a data grid be able to do in order to be useful to the scientific community? Proponents of this view arrived at several criteria. One, users should be able to search and discover applicable resources within the data grid from amongst its many datasets. Two, users should be able to locate the datasets within the data grid that are most suitable for their requirements from amongst numerous replicas. Three, users should be able to transfer and move large datasets between points in a short amount of time. Four, the data grid should provide a means to manage multiple copies of datasets within the data grid.
And finally, the data grid should provide security through user access controls, i.e. determining which users are allowed to access which data. The data grid is an evolving technology that continues to change and grow to meet the needs of an expanding community. One of the earliest programs undertaken to make data grids a reality was funded by the Defense Advanced Research Projects Agency (DARPA) in 1997 at the University of Chicago. The research spawned by DARPA has continued toward creating the open-source tools that make data grids possible. As new requirements for data grids emerge, projects like the Globus Toolkit will emerge or expand to fill the gap. Data grids, along with the "Grid", will continue to evolve. Notes References Further reading Data management
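The replica-management rules described in the replication section above can be condensed into a short sketch. The following Python fragment is a toy illustration only: the threshold, the load figures and the node names are assumptions made for the example and are not drawn from any particular data grid system.

def should_replicate(request_counts, threshold, capacity_ok):
    """Dynamic-threshold rule: replicate when the average request rate
    exceeds the threshold, the trend is upward, and storage permits."""
    average = sum(request_counts) / len(request_counts)
    upward_trend = request_counts[-1] > request_counts[0]  # crude trend test
    return average > threshold and upward_trend and capacity_ok

def place_replica(candidates):
    """Fair-share rule: prefer the node with the lowest access load,
    breaking ties by the lowest storage load."""
    return min(candidates, key=lambda n: (n["access_load"], n["storage_load"]))

candidates = [
    {"node": "n1", "access_load": 0.72, "storage_load": 0.40},
    {"node": "n2", "access_load": 0.31, "storage_load": 0.85},
    {"node": "n3", "access_load": 0.31, "storage_load": 0.55},
]

if should_replicate([120, 150, 190], threshold=100, capacity_ok=True):
    # n2 and n3 tie on access load, so the lower storage load wins: n3
    print(place_replica(candidates)["node"])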
Data grid
Technology
3,986
22,858,525
https://en.wikipedia.org/wiki/Kogan.com
Kogan.com is an Australian portfolio of retail and services businesses including Kogan Retail, Kogan Marketplace, Kogan Mobile, Kogan Internet, Kogan Insurance, Kogan Travel, Kogan Money, Kogan Cars, Kogan Energy, Dick Smith, Matt Blatt and Mighty Ape. The company was founded in 2006 by Ruslan Kogan. In July 2016, Kogan.com was floated on the Australian Securities Exchange. In its first year as a listed business, Kogan.com delivered $221.3 million in gross sales.

History
Kogan.com was established in 2006 by Ruslan Kogan in his parents' garage. He started with a website offering LCD televisions that were assembled for him in Chinese factories. In October 2010, Ruslan Kogan announced that Kogan.com would expand to the United Kingdom. The company officially entered the UK market on 15 November with a range of LED TVs and GPS units, making Kogan the only Australian-owned international consumer electronics brand at the time. On 14 September 2011, Kogan.com began shipping products from the company's Hong Kong operation. Bypassing wholesalers, distributors and retailers in this way enabled the company to offer products by brands including Apple, Canon, Nikon, Samsung and Motorola at low prices. On 15 March 2016, Kogan.com acquired Dick Smith Holdings' online business. With the physical retail stores shut down, the Dick Smith brand transitioned to an online-only consumer electronics store. On 7 July 2016, Kogan.com floated on the Australian Securities Exchange. In May 2020, Kogan.com acquired Matt Blatt, a leading Australian online furniture retailer, for $4.4 million. Kogan.com acquired the New Zealand-based online retailer Mighty Ape in December 2020 for $122.4 million. At December 2021, Kogan.com had delivered $698 million in gross sales for the first half of FY22 (FY21 annual gross sales: $1.79 billion). In December 2022, Kogan.com acquired the furniture and homewares operator Brosa for $1.5 million after that company went into voluntary administration.

Businesses
Kogan Retail and Kogan Marketplace
Kogan.com's business is built on technology and digital efficiencies intended to make in-demand products and services more affordable and accessible. Product lines available via the Kogan Retail brands include TVs, consumer electronics, appliances, homewares, hardware, toys and more. Kogan Marketplace is a proprietary e-commerce platform that partners with sellers, providing them with access to Kogan.com's millions of active customers. Kogan First offers members free delivery from Kogan.com on eligible products, upgrades to express shipping at no extra cost, priority customer service and access to exclusive member-only deals.
Kogan.com has also caused controversy with its product range. The company voiced its opposition to the Australian government's proposed Internet filter by releasing a fictional parody product, the Kogan Portector, and received media attention in 2009 for its "Kevin 37", a 37-inch television marketed using advertising that mimicked Prime Minister Kevin Rudd's 2007 election campaign. The set initially sold for $900, the value of the economic stimulus payments made to many Australians in April 2009.

Exclusive Brands
In July 2014, Kogan.com launched its first exclusive brand beyond consumer electronics, called Fortis. By controlling the design, manufacture, packaging and distribution of the products, Kogan.com increased the number of exclusive brands available to 10 within a year.
Kogan.com currently owns and operates 20 exclusive private-label brands.

Kogan Mobile and Kogan Internet
TPG and Vodafone
Kogan Mobile was re-launched in Australia with TPG (formerly Vodafone) on 19 October 2015. Kogan Mobile offers value-based pre-paid mobile phone plans to customers in Australia and New Zealand on the TPG and Vodafone New Zealand networks. Kogan Mobile New Zealand was launched in September 2019 in partnership with Vodafone New Zealand Limited, offering telecommunications services in New Zealand. Glimp stated that Kogan Mobile New Zealand offers one of the cheapest unlimited-minutes pre-pay plans in New Zealand.
Telstra / ispONE
Kogan Mobile was a pre-paid mobile phone service provider in Australia, known for being the first pre-paid provider to use the widespread Telstra network. Kogan Mobile launched on 12 December 2012; by 14 December, it had already sold more than 10,000 SIM cards. In response to the announcement, a spokesman from Telstra stated: "Telstra Wholesale is not in partnership or any other direct relationship with Kogan." Kogan Mobile experienced many problems with customers being suspended for "overuse" despite being on an "unlimited call" plan. The Australian Communications Consumer Action Network (ACCAN) stated that Kogan was "falling afoul" of the Telecommunications Consumer Protections Code, introduced in September 2012, which states that a product cannot be called unlimited if it is not actually unlimited. Kogan stated that it did not suspend customers; rather, the provider ispONE did. In April 2013, Kogan took ispONE to the Victorian Supreme Court and won an injunction ordering the service provider to reinstate the 600 accounts of Kogan Mobile users it had suspended after deeming them to be using the service in excess of the levels permitted. The Business Review Weekly said this was not a good start for the four-month-old mobile business. The Kogan Mobile service ended with the collapse of ispONE on 19 August 2013.
Kogan nbn
On 7 June 2017, Kogan.com announced that it had extended its partnership with Vodafone to 2022. As part of the partnership, Kogan.com announced that it would offer fixed-line NBN services as well as mobile broadband plans. Kogan nbn launched on 12 April 2018. The service is supplied to Kogan by Vodafone over the network of nbn Co Limited.

Kogan Insurance
Kogan entered financial services in 2017 under the name Kogan Insurance. Underwritten by Hollard Insurance, Kogan Insurance was launched with home, contents, landlord, car, and travel insurance. By 2018, Kogan had entered four more areas of the insurance business. A partnership with Medibank enabled the creation of a health insurance brand, Kogan Health; another partnership with Hollard subsidiary PetSure Australia permitted the establishment of Kogan Pet, for pet insurance; and a brand for life and funeral insurance, Kogan Life Insurance, was founded via an agreement with Greenstone Financial Services.

Kogan Energy
Kogan Energy was launched on 9 September 2019 in partnership with part of the Meridian Energy Limited group and provides power and gas deals to Australian households.

Kogan Cars
Kogan Cars was launched on 4 July 2019 and provides consumers with access to collective negotiating power for deals on new cars, and acts as a marketplace for vehicle trade-ins.

Kogan Travel
Launched on 21 August 2015, Kogan Travel provides travel deals to international destinations.
Kogan Money
Credit Cards
Kogan Credit Cards is a credit card offering uncapped Kogan reward points, no annual fee and complimentary Kogan First membership. It was launched in October 2019 in partnership with Citigroup Pty Ltd.
Kogan Super
In partnership with Mercer Australia, Kogan.com offers a no-frills, ultra-low-fee Australian superannuation fund, Kogan Super.
Kogan Pantry
In January 2015 Kogan.com launched Kogan Pantry, an online service delivering non-perishable foods, confectionery, cleaning products, toiletries and pet food. Kogan Pantry has a smaller range of products than the biggest Australian supermarkets, Coles and Woolworths; however, the independent consumer group Choice said prices at Kogan Pantry were 50-60% less than at the big supermarkets. According to News.com.au, over 30,000 products were sold within the first six hours of the launch.

Marketing
In July 2011 Kogan launched its "Cut the Cable Con" campaign in Britain. The campaign targeted John Lewis and Currys, criticising the way such retailers attempt to sell expensive cables with their new televisions and computers, and accusing them of a campaign of "deliberate misinformation" on this issue. Kogan.com began giving away cables free of charge with every television purchased. John Lewis and Currys defended the practice of charging for cables by pointing out the various features of their cables. Kogan responded by stating: "I think it's a bit misleading what they've said. When it comes to durability, it's an HDMI cable that you'll use to connect your TV to a Blu-ray player, or a Playstation, or another device. You're not using it as a skipping rope or to go rock climbing with."
In August 2011 Kogan announced his desire for online retailers to be viewed separately from traditional retail by economists, stating that "If we have been grouped with traditional retailers, then we want a divorce!", and arguing that all of the innovation and growth is in online, rather than bricks-and-mortar, retail. The business magazine BRW noted the similarity of the "divorce" campaign to one by the National Australia Bank.

Financial results
Sales at Kogan's online store in November 2011 were $8.12 million, up 330% from $1.89 million in the same period in 2010. During the period there were 708,525 visitors to Kogan.com.au, an increase from 222,411 the year before. In October 2010, BRW ranked Kogan as Australia's 15th-fastest-growing company, with yearly revenue of $12.22 million. Also in 2010, the company released Q1 FY11 growth figures, with revenue up 48.12% on the previous quarter. The business recorded its highest single day of sales ever on 31 July 2012, exceeding $1 million in transactions. In October 2012, Kogan again made the BRW list of Australia's fastest-growing businesses, this time ranking at No. 14 with a 123% growth rate since 2011. The professional services firm Deloitte listed Kogan in the top 10 of its Fast 50 Australia in November 2012. In 2017, Kogan.com released its FY17 results, which showed revenue of $289.5 million, up 37.1% on the prior year. It also grew its customer base to 955,000, up 36.0% from 30 June 2016. Kogan.com was the best-performing stock on the ASX All Ordinaries index in 2017, with an annual gain of more than 300 per cent.

Awards
The company has won the following awards:
2017 StarTrack ORIAS People's Choice Award — Large Retailer
BRW 2011 Fast 100 at rank 27, ranking the fastest growing companies in Australia in any sector and any size.
BRW 2010 Fast 100 at rank 15, ranking the fastest growing companies in Australia in any sector and any size.
BRW 2010 Fast Starters list at rank 17.
BRW named Kogan the 15th-fastest-growing company in Australia, with 106.74% growth.
Australian Retailers Association Retail Innovator of the Year 2010
BRW 2009 Fast Starters list at rank 37
Power Retail's Top 100 Online Retailers of 2014, at rank 3.

Controversies
2009 advertising controversy
In April 2009 the company was ordered by the Australian Competition & Consumer Commission (ACCC) to modify its advertising after it was accused of possibly misleading conduct. The ACCC stated that price comparisons in the retailer's advertisements in the Herald Sun newspaper and on its website may have misled customers, and the ACCC's chairman Graeme Samuel said the advertisements may have breached sections of the Trade Practices Act 1974. In response, Kogan.com agreed that it would not advertise its products at a discount unless the product had previously been advertised for sale at the higher price, that it would implement a trade practices law compliance program, and that it would not make representations about the savings available to consumers unless the basis on which those savings were calculated was also stated.

Harvey Norman
In August 2010 Kogan began a public dispute with Gerry Harvey, co-founder of the well-known Australian retailer Harvey Norman. The argument concerned the future of consumer electronics retailing in Australia, and in particular whether Australians should shop online or in bricks-and-mortar stores. Kogan challenged Harvey to a TV debate, which he declined. Kogan claimed Harvey "chickened out", causing Harvey to respond by calling Kogan a "con". Kogan responded with two satirical advertisements criticising Harvey Norman. Kogan renewed the dispute in November 2010, criticising Harvey Norman's purchase of Clive Peeters. The controversy continued into December, when Harvey announced plans to follow Myer and open an online store based in China to avoid GST and cut costs, causing Kogan to claim that Harvey Norman and Myer were posturing to force the Government to change import laws, and that their China-based stores were a hoax. Kogan stated that if Harvey Norman and Myer succeeded in operating their China-based online stores for three months, he would place a prominent link on kogan.com.au advertising his rivals' stores. Kogan also claimed that Harvey Norman was "full of it", and published an article lamenting the Australian business scene's focus on regulation rather than innovation. In July 2011, Kogan came to the public defence of Harvey when the latter came under fire for Harvey Norman's alleged logging practices. Kogan stated: "Like him or hate him, Gerry Harvey is not a criminal – he should not be singled out for some supposed moral crime simply because he has complied with the law, and has sought Australian timber to use in his furniture."

JB Hi-Fi
In March 2011, Kogan argued that some of Australia's biggest retailers were overly reliant on the success of Apple, claiming that 30% of the Australian retailer JB Hi-Fi's revenue in 2010 had come from Apple or Apple-related products. Terry Smart, CEO of JB Hi-Fi, responded by saying "That figure is not even close to reality. We don't have a big enough supply that represents such a substantial part of the business." Kogan responded by challenging Smart to a one-million-dollar bet that JB Hi-Fi would not stock Apple hardware by 14 March 2014.
The deed for the bet is still available online, though it was never accepted. Kogan also began giving away free HDMI cables to anyone who had bought a TV from JB Hi-Fi in 2011, accusing JB of "trying to trick people into thinking they need a $200 cable after buying a FULL HD TV." In October 2011, Kogan took out a full-page ad in Australia's biggest newspaper calling for JB Hi-Fi to change its slogan, "Always Cheapest Prices." JB Hi-Fi did not respond to the challenge.

Australian government set-top box scheme
In May 2011 the Australian Government announced a plan to provide television set-top boxes to pensioners free of charge. Kogan and other leading retailers criticised the scheme for spending too much money. In 2011, the program had an estimated total cost of $308 million, with each installation costing over $350; Kogan said his company could deliver it for $50 million. In February 2012 new figures suggested that the cost per installation had risen to $698, prompting Kogan to make further public statements attacking the Government's inefficiency in spending. However, the Federal Government later clarified that these figures had been falsely reported by The Australian newspaper and that installation costs ranged from $158 to $492. It was revealed on 8 February 2012, during parliamentary question time, that Kogan.com had tendered for the scheme's roll-out of set-top boxes in New South Wales, which commenced in June 2012.

Apple vs. Samsung
When Kogan began selling the Samsung Galaxy Tab 10.1 in September 2011, Apple demanded that the company immediately stop selling the product because of an ongoing patent dispute with Samsung. Apple also demanded full details of Kogan's suppliers. Kogan agreed to stop selling the product until the patent dispute was resolved, but refused to disclose any further information. The Federal Court overturned the injunction on the Samsung Galaxy Tab 10.1 on 30 November 2011, and Kogan began selling the product again soon after.

Microsoft Internet Explorer 7 tax
On 13 June 2012 Kogan introduced a Microsoft Internet Explorer 7 "tax", which charged any user shopping at the site from IE7 an extra 6.8%, i.e. 0.1% for every month the browser had been on the market. Kogan explained that he had decided to charge the "tax" because "The amount of work and effort involved in making our website look normal on IE7 equalled the combined time of designing for Chrome, Safari and Firefox." Kogan accepted that it was unlikely anyone would actually pay the charge, stating that the goal of the campaign was to encourage users to download a more up-to-date version of Internet Explorer, or a different browser. The "tax" was the most talked-about topic on the social media service Twitter on the day following its launch. Several weeks later, search results for kogan.com disappeared from Microsoft's Bing search engine, with Kogan stating "We hope Microsoft were not too offended by what we did with the IE7 tax and this is just a temporary glitch." Microsoft denied tampering with the search results, stating: "The ranking of our results is done in automated manner through our algorithm which can sometimes lead to unexpected results."

2018 ACCC case
Kogan raised the prices of over 600 products and then offered customers a 10% discount as part of an end-of-financial-year promotion. The court found Kogan breached consumer law by making false and misleading representations about the 2018 promotion.
On 7 December 2020, Kogan was fined $350,000 by the Federal Court, with Justice Davies saying: "Kogan’s contravening conduct must be viewed as serious, as misrepresentations about discounts offered on products not only harm purchasers acquiring such products on the basis that they are getting a genuine discount but also may impact on consumer confidence in discount promotions when legitimately made – that is, when products are being offered for sale with a genuine discount on price."

2021 ACMA fine
An Australian Communications & Media Authority (ACMA) investigation in 2021 found that Kogan had sent over 40 million emails without an easy route to unsubscribe; instead, Kogan required recipients to log in to their account or set a password in order to stop the messages. The ACMA found Kogan's conduct breached the Spam Act, which "requires commercial electronic messages to contain a functional unsubscribe facility". To resolve the case, the company agreed to a court-enforceable undertaking and paid a $310,800 infringement notice. References External links Consumer electronics retailers of Australia Online retailers of Australia Retail companies established in 2006 Internet properties established in 2006 Australian brands Companies listed on the Australian Securities Exchange Australian companies established in 2006 Radio manufacturers
Kogan.com
Engineering
3,966
52,724,403
https://en.wikipedia.org/wiki/MySensors
MySensors is a free and open-source do-it-yourself (DIY) software framework for wireless Internet of Things (IoT) devices, allowing devices to communicate using radio transceivers. The library was originally developed for the Arduino platform. MySensors devices create a virtual radio network of nodes that automatically forms a self-healing, mesh-like structure. Each node can relay messages for other nodes to cover greater distances using simple short-range transceivers. Each node can have several sensors or actuators attached and can interact with other nodes in the network. The radio network can consist of up to 254 nodes, where one node can act as a gateway to the internet or to a home automation controller. The controller adds functionality to the radio network, such as ID assignment and time awareness.

Supported hardware platforms
The framework can natively run on the following platforms and microcontrollers:
Linux / Raspberry Pi
ATmega328P
ESP8266
ESP32
ARM Cortex-M0 (mainly the Atmel SAMD core as used in the Arduino Zero)

Communication options
MySensors supports wireless communication using the following transceivers:
NRF24L01
RFM69
RFM95 (LoRa)
WiFi (ESP8266 & ESP32)
Wired communication over:
MQTT
Serial USB
RS485

Security
The wireless communication can be signed using truncated HMAC-SHA256, either in hardware with the Atmel ATSHA204A or through compatible software emulation, and can optionally be encrypted. The implementation is timing-neutral, uses whitened random numbers, includes attack detection and lockout, and protects against timing attacks, replay attacks and man-in-the-middle attacks.

Over-the-air firmware updates
The firmware of a MySensors node can be updated over the air using a few different bootloader options:
In-place overwriting of flash memory using MySensorsBootloaderRF24.
Using external flash with DualOptiBoot.
For ESP8266 nodes, using the built-in OTA feature.

See also Arduino ESP8266 References External links Official Website OpenHardware.io Russian Community MySensors Microcontroller software Home automation Open hardware electronic devices Open hardware and software organizations and companies
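The signing scheme described above can be illustrated with Python's standard hmac module. This is a conceptual sketch of truncated HMAC-SHA256 signing with constant-time verification only; the key, the message layout and the truncation length are assumptions chosen for the example and do not reflect MySensors' actual wire format or key-exchange procedure.

import hashlib
import hmac

KEY = bytes.fromhex("000102030405060708090a0b0c0d0e0f")  # example key only

def sign(payload: bytes, truncate_to: int = 16) -> bytes:
    # Truncated HMAC-SHA256 tag over the message payload
    return hmac.new(KEY, payload, hashlib.sha256).digest()[:truncate_to]

def verify(payload: bytes, tag: bytes) -> bool:
    # compare_digest performs a constant-time comparison, which is what
    # makes verification resistant to timing attacks
    return hmac.compare_digest(sign(payload, len(tag)), tag)

message = b"node=12;child=1;type=V_TEMP;value=21.5"
tag = sign(message)
print(verify(message, tag))         # True
print(verify(message + b"!", tag))  # False: tampering is detected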
MySensors
Technology
463
106,291
https://en.wikipedia.org/wiki/Ultracentrifuge
An ultracentrifuge is a centrifuge optimized for spinning a rotor at very high speeds, capable of generating accelerations as high as 1,000,000 g (approx. 9,800 km/s²). There are two kinds of ultracentrifuges, the preparative and the analytical ultracentrifuge. Both classes of instruments find important uses in molecular biology, biochemistry, and polymer science.

History
In 1924 Theodor Svedberg built a centrifuge capable of generating 7,000 g (at 12,000 rpm), and called it the ultracentrifuge, by analogy with the ultramicroscope that had been developed previously. In 1925-1926 Svedberg constructed a new ultracentrifuge that permitted fields up to 100,000 g (42,000 rpm). Modern ultracentrifuges are typically classified as those allowing greater than 100,000 g. Svedberg won the Nobel Prize in Chemistry in 1926 for his research on colloids and proteins using the ultracentrifuge.
In the early 1930s, Émile Henriot found that suitably placed jets of compressed air can spin a bearingless top to very high speeds, and he developed an ultracentrifuge on that principle. Jesse Beams of the Physics Department at the University of Virginia first adapted that principle to a high-speed camera, and then started improving Henriot's ultracentrifuge, but his rotors consistently overheated. Beams's student Edward Greydon Pickels solved the problem in 1935 by placing the system in a vacuum, which reduced the friction generated at high speeds. Vacuum systems also enabled the maintenance of a constant temperature across the sample, eliminating the convection currents that interfered with the interpretation of sedimentation results.
In 1946, Pickels cofounded Spinco (Specialized Instruments Corp.) to market analytical and preparative ultracentrifuges based on his design. Pickels considered his design too complicated for commercial use and developed a more easily operated, "foolproof" version. But even with the enhanced design, sales of analytical centrifuges remained low, and Spinco almost went bankrupt. The company survived by concentrating on sales of preparative ultracentrifuge models, which were becoming popular as workhorses in biomedical laboratories. In 1949, Spinco introduced the Model L, the first preparative ultracentrifuge to reach a maximum speed of 40,000 rpm. In 1954, Beckman Instruments (later Beckman Coulter) purchased the company, forming the basis of its Spinco centrifuge division.

Instrumentation
Ultracentrifuges are available with a wide variety of rotors suitable for a great range of experiments. Most rotors are designed to hold tubes that contain the samples. Swinging-bucket rotors allow the tubes to hang on hinges, so the tubes reorient to the horizontal as the rotor initially accelerates. Fixed-angle rotors are made of a single block of material and hold the tubes in cavities bored at a predetermined angle. Zonal rotors are designed to contain a large volume of sample in a single central cavity rather than in tubes; some zonal rotors are capable of dynamic loading and unloading of samples while the rotor is spinning at high speed.
Preparative rotors are used in biology for pelleting of fine particulate fractions, such as cellular organelles (mitochondria, microsomes, ribosomes) and viruses. They can also be used for gradient separations, in which the tubes are filled from top to bottom with an increasing concentration of a dense substance in solution. Sucrose gradients are typically used for separation of cellular organelles. Gradients of caesium salts are used for separation of nucleic acids.
After the sample has spun at high speed for sufficient time to produce the separation, the rotor is allowed to come to a smooth stop and the gradient is gently pumped out of each tube to isolate the separated components.

Hazards
The tremendous rotational kinetic energy of the rotor in an operating ultracentrifuge makes the catastrophic failure of a spinning rotor a serious concern, as it can explode spectacularly. Rotors have conventionally been made from high strength-to-weight metals such as aluminum or titanium. The stresses of routine use and harsh chemical solutions eventually cause rotors to deteriorate. Proper use of the instrument and rotors within recommended limits, and careful maintenance of rotors to prevent corrosion and to detect deterioration, are necessary to mitigate this risk. More recently, some rotors have been made of lightweight carbon-fiber composite material; these are up to 60% lighter, resulting in faster acceleration and deceleration rates. Carbon-fiber composite rotors are also corrosion-resistant, eliminating a major cause of rotor failure.

See also Analytical ultracentrifugation Gas centrifuge Theodor Svedberg Differential centrifugation Buoyant density ultracentrifugation Zippe-type centrifuge References External links Modern analytical ultracentrifugation in protein science: A tutorial review Studying multiprotein complexes by multisignal sedimentation velocity analytical ultracentrifugation Report on an ultracentrifuge explosion Centrifuges
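The speeds and accelerations quoted in the History section are mutually consistent for rotor radii of a few centimetres, which the following sketch checks using the standard relation RCF = ω²r/g. The radii used below are illustrative assumptions, not specifications of Svedberg's instruments.

import math

def rcf(rpm: float, radius_m: float) -> float:
    # Relative centrifugal field in multiples of g: RCF = omega^2 * r / g
    omega = rpm * 2 * math.pi / 60  # angular velocity in rad/s
    return omega ** 2 * radius_m / 9.81

print(round(rcf(12_000, 0.044)))  # ~7,000 g, as quoted for the 1924 instrument
print(round(rcf(42_000, 0.051)))  # ~100,000 g, as quoted for the 1925-1926 machine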
Ultracentrifuge
Chemistry,Engineering
1,055
1,081,644
https://en.wikipedia.org/wiki/Quill%20drive
A quill drive is a mechanism that allows a drive shaft to shift its position (either axially, radially, or both) relative to its driving shaft. It consists of a hollow driving shaft (the quill) with a driven shaft inside it. The two are connected in some fashion which permits the required motion.

Examples
Drill press
One example of a quill drive is found in a drill press, where the quill allows the chuck to move vertically while being driven rotationally.
Railroad locomotive
Quill drives have been used extensively in railroad electric locomotives to connect frame-mounted traction motors to the driven wheels. The two are linked by a flexible drive which allows a degree of radial motion and possibly a small amount of axial motion. This allows the motors to be mounted on top of the suspension system, moving independently of the wheels, which smooths the drive from the motors and isolates them from mechanical shock. It also decreases the unsprung weight borne directly by the wheels, thus decreasing wear on the track. Quill drives were used by many electric locomotives in the United States, particularly those of the Pennsylvania Railroad; their long-lasting GG1 design is perhaps the best known. Many locomotives built in France, Germany, Italy and Poland used quill drives as well, allowing higher locomotive speeds.

See also Buchli drive Radial axle Tschanz drive Winterthur universal drive References Mechanisms (engineering) Automotive technologies Electric locomotives Locomotive parts Shaft drives
Quill drive
Engineering
320
981,613
https://en.wikipedia.org/wiki/XMLGUI
XMLGUI is a KDE framework for designing the user interface of an application using XML, built around the idea of actions. In this framework, the programmer defines the various actions that their application can perform, with several standard actions, such as opening a file or closing the application, provided by the KDE framework. Each action can be associated with various data including icons, explanatory text, and tooltips.
The interesting part of this design is that the actions are not inserted into the menus or toolbars by the programmer. Instead, the programmer supplies an XML file which describes the layout of the menu bar and toolbar. Using this system, it is possible for the user to redesign the user interface of an application without needing to touch the source code of the program in question. In addition, XMLGUI is useful for the KParts component programming interface for KDE, as an application can easily integrate the GUI of a KPart into its own GUI; the Konqueror file manager is the canonical example of this feature. The current version is KXMLGUI, part of KDE Frameworks.

Other projects
The name is somewhat generic. The Beryl XML GUI was formerly named xmlgui, and there are a dozen other XML-oriented GUI libraries with the same project name. The KDE XMLGUI is one in a long series of projects that have not managed to claim the term for themselves alone.

See also Qt Style Sheets External links KDE Guide to the XMLGUI architecture KDE Frameworks KDE Platform User interface markup languages
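A schematic of the kind of XML layout file described above is shown here. The element names follow the general shape of KXMLGUI rc files, but the application and action names are placeholders invented for this illustration; the corresponding actions would be created in the application's C++ code and referenced here by name.

<!DOCTYPE gui>
<gui name="exampleapp" version="1">
  <MenuBar>
    <Menu name="file">
      <Action name="open_thing"/>  <!-- action defined in application code -->
    </Menu>
  </MenuBar>
  <ToolBar name="mainToolBar">
    <Action name="open_thing"/>    <!-- the same action reused on a toolbar -->
  </ToolBar>
</gui>

Because the menu and toolbar layout lives in this file rather than in compiled code, editing it is all that is needed to rearrange the interface, which is what allows users and KParts components to reshape an application's GUI without touching its source.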
XMLGUI
Technology
321
34,357,587
https://en.wikipedia.org/wiki/Vogan%20diagram
In mathematics, a Vogan diagram, named after David Vogan, is a variation of the Dynkin diagram of a real semisimple Lie algebra that indicates the maximal compact subgroup. Although they resemble Satake diagrams, they are a different way of classifying simple Lie algebras. References Lie algebras
Vogan diagram
Mathematics
63
1,832,706
https://en.wikipedia.org/wiki/Pemoline
Pemoline, formerly sold under the brand name Cylert among others, is a stimulant medication which has been used in the treatment of attention deficit hyperactivity disorder (ADHD) and narcolepsy. It has been discontinued in most countries due to rare but serious problems with liver toxicity. The medication was taken by mouth.
Side effects of pemoline include insomnia, decreased appetite, abdominal pain, irritability, and headaches. Rarely, the medication can cause serious liver damage, which can result in liver transplantation or death. Pemoline is a stimulant and acts as a selective dopamine reuptake inhibitor and releasing agent. Hence, it functions as an indirect agonist of dopamine receptors. Pemoline has little effect on norepinephrine and hence has minimal or no cardiovascular or sympathomimetic effects, in contrast to many other stimulants.
Pemoline was synthesized in 1913 but was not discovered to be a stimulant until the 1930s and was not used in the treatment of ADHD until 1975. It was withdrawn due to liver toxicity in many countries between 1997 and 2005, including the United States. However, it remains available in Japan for the treatment of narcolepsy at lower doses than those used for ADHD. Pemoline is a schedule IV controlled substance in the United States due to its relation to other stimulants and a potential for misuse. It seems to have less misuse potential than other stimulants.

Medical uses
Pemoline has been used in the treatment of ADHD and narcolepsy. It has also been used in the treatment of excessive daytime sleepiness. The medication was typically used at doses of 18.75 to 112.5 mg once per day in the treatment of ADHD, with the effective dose for most people being in the range of 56.25 to 75 mg. The onset of action of pemoline is gradual, and therapeutic benefits may not occur until the third or fourth week of use. This may be due to a cautious low initial starting dose of 37.5 mg and gradual upward titration of the dose over several weeks.
Available forms
Pemoline was available in the form of 18.75, 37.5, and 75 mg oral immediate-release tablets (Cylert) as well as 37.5 mg oral immediate-release chewable tablets. It was provided mainly in the form of the free base but also as the magnesium salt.

Side effects
Side effects of pemoline include insomnia, decreased appetite, abdominal pain, irritability, and headaches. It has minimal cardiovascular or sympathomimetic side effects. Pemoline is described as a lower-efficacy, milder stimulant than classical stimulants like amphetamines and methylphenidate and is said to have fewer side effects than them.
Liver toxicity
Rarely, pemoline is implicated in causing hepatotoxicity. Because of this, the FDA recommended that regular liver tests be performed in those treated with it. Since being introduced, it has been linked with at least 21 cases of liver failure, of which 13 resulted in liver replacement or death. Approximately 1-2% of patients taking the drug show elevated levels of liver transaminase enzymes, a marker for liver toxicity, though serious cases are rare. Over 200,000 children with ADHD were prescribed pemoline in the United States and Canada alone during the approximately 25 years that it was available, plus a smaller number of adults prescribed it for other indications (and not including prescriptions in the rest of the world). As such, the number of liver failure cases was statistically not that large.
However, the reactions proved idiosyncratic and unpredictable, with patients sometimes taking the drug with no issue for months or even years before suddenly developing severe liver toxicity. There was no clear exposure-toxicity relationship and no characteristic liver pathology findings. Some patients showed as little as one week between the first appearance of jaundice and complete liver failure, and some of the patients who developed liver failure had not shown elevated liver transaminase levels when tested previously. On the other hand, there are no cases of liver failure associated with pemoline in Japan, although it is used there at lower doses and is only prescribed for the niche indication of narcolepsy.
Overdose
Overdose of pemoline may present with choreoathetosis symptoms.
Interactions
Other stimulants and monoamine oxidase inhibitors are contraindicated with pemoline.

Pharmacology
Pharmacodynamics
The pharmacodynamics of pemoline are poorly understood, and its precise mechanism of action has not been definitively determined. However, pemoline has similar activity and effects to those of other psychostimulants, and in animals the medication appears to act as a dopamine reuptake inhibitor and releasing agent. By increasing dopamine levels in the brain, it functions as an indirect agonist of dopamine receptors. In contrast to most other stimulants, pemoline appears to produce no significant central or peripheral noradrenergic effects. As a result, it has minimal or no cardiovascular or sympathomimetic effects.
Pemoline is described as a selective dopamine reuptake inhibitor that only weakly stimulates dopamine release. While drugs like dextroamphetamine and methylphenidate are classified as schedule II and have considerable misuse potential, pemoline is listed as schedule IV (non-narcotic). In studies conducted on primates, pemoline fails to demonstrate a potential for self-administration. It is thought to have little potential for abuse and dependence. Nonetheless, misuse may theoretically occur owing to its similarity to other psychostimulants.
Pharmacokinetics
Studies of the pharmacokinetics of pemoline in humans are very limited. The time to peak levels of pemoline is 2 to 4 hours. Peak levels have been reported to be in the range of 2 to 4.5 μg/mL. Steady-state levels of pemoline are reached in 2 to 3 days. Pemoline is variously reported to have either no significant plasma protein binding or 50% plasma protein binding. Pemoline is metabolized in the liver. Its metabolites include pemoline conjugate, pemoline dione, mandelic acid, and unidentified polar metabolites. Pemoline is excreted mainly by the kidneys, with around 50% excreted in unchanged form and only minor amounts present as metabolites. The elimination half-life of pemoline is 7 to 12 hours. The half-life is 7 hours in children but may increase to 11 to 12 hours with age. The relatively long half-life of pemoline allows for once-daily administration. No differences in the pharmacokinetics of pemoline were found between conventional tablets, chewable tablets swallowed, and chewable tablets chewed.

Chemistry
Pemoline is a member of the 4-oxazolidinone class and is structurally related to other members of the class, including aminorex, 4-methylaminorex, clominorex, cyclazodone, fenozolone, fluminorex, and thozalinone.
The salts of pemoline in use are pemoline magnesium (free base conversion ratio 0.751), pemoline iron (0.578), pemoline copper (0.644), pemoline nickel (0.578), pemoline rubidium, pemoline calcium, pemoline chromium, and chelates of the above, which are identical in weight to the salt mentioned. Pemoline free base and pemoline cobalt, strontium, silver, barium, lithium, sodium, potassium, zinc, manganese, and caesium are research chemicals which can be produced in situ for experiments. Others, such as lanthanide pemoline salts like pemoline cerium, can be prepared; pemoline beryllium would presumably be toxic.

History
Pemoline was first synthesized in 1913, but its activity was not discovered until the 1930s. Pemoline was approved for the treatment of ADHD in the United States in 1975.
Cases of serious liver toxicity and associated deaths related to pemoline in children and adolescents were reported to the United States Food and Drug Administration's MedWatch between 1977 and 1996. Serious liver toxicity with pemoline was first described in the medical literature in letters to the editor in 1984 and 1989. Clinicians were largely unaware of liver toxicity with pemoline until the 1990s. Warnings about liver toxicity were added to the United States Food and Drug Administration (FDA) label for the medication in December 1996, and a black box warning was added in June 1999, along with requirements for written consent and frequent monitoring of liver enzymes. These warnings followed a 1995 publication on liver toxicity with pemoline. However, findings suggested that clinicians poorly followed the FDA's directives on the use of pemoline. In any case, sales of pemoline in the United States continued to increase through the mid-1990s and then declined between 1996 and 1999. Pemoline was withdrawn due to liver toxicity in the United Kingdom in September 1997, in Canada in September 1999, and in the United States in 2005. Abbott Laboratories voluntarily withdrew pemoline from the United States market in May 2005, and the FDA withdrew approval of generic pemoline in November 2005. Pemoline remains available in Japan for the treatment of narcolepsy as of 2017.

Society and culture
Names
Pemoline is the generic name of the drug. Pemoline was formerly marketed under the brand names Cylert, Betanamin, Ceractiv, Hyperilex, Kethamed, Ronyl, Stimul, Tamilan, Tradon, Tropocer, and Volital.
Availability
Pemoline has been marketed in the United States, Canada, the United Kingdom, Belgium, Luxembourg, Spain, Germany, Switzerland, Japan and Argentina. It remains available in Japan for the treatment of narcolepsy as of 2017. However, the medication is said to be rarely used in Japan, as narcolepsy is a niche indication and clinicians are wary of the liver toxicity it has been associated with.
Legal status
Under the Convention on Psychotropic Substances, pemoline is a schedule IV controlled substance. In the United States it is a Schedule IV non-narcotic (stimulant) controlled substance with a DEA ACSCN of 1530 and is not subject to annual manufacturing quotas.

Research
Fatigue
Pemoline has been studied in, and reported to be effective in, the treatment of fatigue due to multiple sclerosis and HIV-related disease.

References Aminorexes Drugs developed by AbbVie Hepatotoxins Norepinephrine-dopamine releasing agents Oxazolidinones Stimulants Substances discovered in the 1910s Wakefulness-promoting agents Withdrawn drugs
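The pharmacokinetic figures above fit the standard first-order accumulation rule, under which a drug reaches a fraction 1 - 0.5**(t / half-life) of its eventual steady-state level after time t. The following sketch applies that rule to the half-life range quoted in the article; it is a back-of-the-envelope arithmetic check, not dosing guidance.

def fraction_of_steady_state(hours: float, half_life_h: float) -> float:
    # First-order accumulation: after n half-lives, 1 - 0.5**n of the
    # eventual steady-state level has been reached
    return 1 - 0.5 ** (hours / half_life_h)

for t_half in (7, 12):  # half-life range quoted above, in hours
    f = fraction_of_steady_state(48, t_half)  # after two days of dosing
    print(f"t1/2 = {t_half} h: {f:.1%} of steady state after 48 h")

# Output: about 99% for a 7 h half-life and about 94% for a 12 h half-life,
# consistent with the article's statement that steady state is reached in
# 2 to 3 days.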
Pemoline
Chemistry
2,295
74,066,841
https://en.wikipedia.org/wiki/Prix%20Georges%20Lema%C3%AEtre
The Prix Georges Lemaître is an award created in 1995, in celebration of the centenary of the 1894 birth of Georges Lemaître. The award was initiated by the Association des Anciens et Amis de l'Université catholique de Louvain (Association of the Alumni and Friends of the Université catholique de Louvain) together with the Fondation Georges Lemaître (Georges Lemaître Foundation). The prize, endowed with 25,000 euros as of 2003, is awarded every two years to a scientist who has made a remarkable contribution to the development and dissemination of knowledge in the fields of cosmology, astronomy, astrophysics, geophysics, or space science. The winner is chosen by an international jury of scientists, chaired by the rector of the Université catholique de Louvain.

List of recipients
1995 — Philip James Edwin Peebles, astrophysicist and cosmologist
1997 — Jean-Claude Duplessy, geochemist and climatologist
1999 — Jean-Pierre Luminet, astrophysicist, and Dominique Lambert, philosopher of science
2001 — Kurt Lambeck, geophysicist
2003 — Alain Hubert, explorer and climatologist
2005 — Édouard Bard, climatologist
2007 — Susan Solomon, atmospheric chemist and climatologist
2009 — Jean Kovalevsky, astronomer
2010 — André Berger, climatologist
2012 — Michael Heller, cosmologist
2015 — Anny Cazenave, geophysicist, and Jean-Philippe Uzan, theoretical physicist and cosmologist
2017 — Kip Thorne, theoretical physicist
2019 — George F. R. Ellis, theoretical physicist and cosmologist
2021 — hiatus due to the COVID-19 pandemic
2023 — Sheperd S. Doeleman, astrophysicist

References Belgian awards Science and technology awards Awards established in 1995
Prix Georges Lemaître
Technology
439
12,378,165
https://en.wikipedia.org/wiki/Fluxomics
Fluxomics describes the various approaches that seek to determine the rates of metabolic reactions within a biological entity. While metabolomics can provide instantaneous information on the metabolites in a biological sample, metabolism is a dynamic process. The significance of fluxomics is that metabolic fluxes determine the cellular phenotype. It has the added advantage of being based on the metabolome, which has fewer components than the genome or proteome.
Fluxomics falls within the field of systems biology, which developed with the appearance of high-throughput technologies. Systems biology recognizes the complexity of biological systems and has the broader goal of explaining and predicting this complex behavior.

Metabolic flux
Metabolic flux refers to the rate of metabolite conversion in a metabolic network. For a reaction, this rate is a function of both enzyme abundance and enzyme activity. Enzyme concentration is itself a function of transcriptional and translational regulation in addition to the stability of the protein. Enzyme activity is affected by the kinetic parameters of the enzyme, the substrate concentrations, the product concentrations, and the concentrations of effector molecules. The genomic and environmental effects on metabolic flux are what determine a healthy or diseased phenotype.

Fluxome
Similar to the genome, transcriptome, proteome, and metabolome, the fluxome is defined as the complete set of metabolic fluxes in a cell. Unlike the others, however, the fluxome is a dynamic representation of the phenotype, because it results from the interactions of the metabolome, genome, transcriptome, proteome, post-translational modifications and the environment.

Flux analysis technologies
Two important technologies are flux balance analysis (FBA) and 13C-fluxomics. In FBA, metabolic fluxes are estimated by first representing the reactions of a metabolic network in a numerical matrix containing the stoichiometric coefficients of each reaction. The stoichiometric coefficients, together with the assumption that metabolite levels do not change, constrain the system model, which is why FBA is only applicable to steady-state conditions. Additional constraints can be imposed; each added constraint reduces the possible set of solutions to the system. Following the addition of constraints, the system model is optimized. Flux balance analysis resources include the BiGG database, the COBRA toolbox, and FASIMU.
In 13C-fluxomics, metabolic precursors are enriched with 13C before being introduced to the system. Using an analytical technique such as mass spectrometry or nuclear magnetic resonance spectroscopy, the level of incorporation of 13C into metabolites can be measured, and with stoichiometry the metabolic fluxes can be estimated.

Stoichiometric and kinetic paradigms
A number of different methods exist, broadly divided into stoichiometric and kinetic paradigms. Within the stoichiometric paradigm, a number of relatively simple linear algebra methods use restricted metabolic networks or genome-scale metabolic network models to perform flux balance analysis and the array of techniques derived from it. These linear equations are useful for steady-state conditions; dynamic methods are not yet practical. On the more experimental side, metabolic flux analysis allows the empirical estimation of reaction rates by stable isotope labelling.
Within the kinetic paradigm, kinetic modelling of metabolic networks can be purely theoretical, exploring the potential space of dynamic metabolic fluxes under perturbations away from steady state using formalisms such as biochemical systems theory. Such explorations are most informative when accompanied by empirical measurements of the system under study following actual perturbations, as is the case in metabolic control analysis.

Constraint-based reconstruction and analysis
Collected methods in fluxomics have been described as "COBRA" methods, for constraint-based reconstruction and analysis. A number of software tools and environments have been created for this purpose.
Although it can only be measured indirectly, metabolic flux is the critical link between genes, proteins and the observable phenotype, because the fluxome integrates mass-energy, information, and signaling networks. Fluxomics has the potential to provide a quantifiable representation of the effect the environment has on the phenotype, because the fluxome describes the genome-environment interaction. In the fields of metabolic engineering and systems biology, fluxomic methods are considered a key enabling technology due to their unique position in the ontology of biological processes, allowing genome-scale stoichiometric models to act as a framework for the integration of diverse biological datasets.

Examples of use in research
One potential application of fluxomic techniques is in drug design. Rama et al. used FBA to study the mycolic acid pathway in Mycobacterium tuberculosis. Mycolic acids are known to be important to M. tuberculosis survival, and the pathway has accordingly been studied extensively. This allowed the construction of a model of the pathway and its analysis with FBA, which identified multiple possible drug targets for future investigation. FBA has also been used to analyze the metabolic networks of multidrug-resistant Staphylococcus aureus; in silico single and double gene deletions identified many enzymes essential to growth. References Bioinformatics Systems biology Computational biology
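The FBA procedure described above reduces to a linear program: find a flux vector v satisfying S·v = 0 and the flux bounds that maximizes an objective. Below is a minimal sketch on an invented three-reaction toy network; the network, bounds and objective are assumptions chosen for the example, and the solver call uses SciPy's general-purpose linprog rather than a dedicated COBRA tool.

import numpy as np
from scipy.optimize import linprog

# Toy network: R1 imports metabolite A, R2 converts A -> B,
# and R3 drains B (a stand-in for a biomass reaction).
# Rows are internal metabolites (A, B); columns are reactions (v1, v2, v3).
S = np.array([[1, -1,  0],
              [0,  1, -1]])

bounds = [(0, 10), (0, None), (0, None)]  # uptake R1 capped at 10 units
c = [0, 0, -1]  # linprog minimizes, so maximize v3 by minimizing -v3

# Steady-state constraint S v = 0 enters as the equality constraint
result = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(result.x)  # [10. 10. 10.] -- the flux is limited by the uptake bound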
Fluxomics
Engineering,Biology
1,025
1,828,766
https://en.wikipedia.org/wiki/A%20Rape%20in%20Cyberspace
"A Rape in Cyberspace, or How an Evil Clown, a Haitian Trickster Spirit, Two Wizards, and a Cast of Dozens Turned a Database into a Society" is an article written by freelance journalist Julian Dibbell and first published in The Village Voice in 1993. The article was later included in Dibbell's book My Tiny Life on his LambdaMOO experiences. Lawrence Lessig has said that his chance reading of Dibbell's article was a key influence on his interest in the field. Sociologist David Trend called it "one of the most frequently cited essays about cloaked identity in cyberspace". History Julian Dibbell's journalism career began in the music industry, though his writings eventually came to focus mainly the Internet, including various subcultures such as LambdaMOO, a MUD, which itself was further divided into subcultures, a phenomenon he inadvertently encountered through his girlfriend. One day, when he was having difficulty contacting her by phone, he searched for her in LambdaMOO because he knew she was a visitor. When he found her, she had been in a meeting regarding how to resolve the issue of a player named Mr. Bungle. "A Rape in Cyberspace" describes a "cyberrape" that took place on a Monday night in March 1993 and discusses the repercussions of this act on the virtual community and subsequent changes to the design of the MUD program. LambdaMOO allows players to interact using avatars. The avatars are user-programmable and may interact automatically with each other and with objects and locations in the community. Users interacted through script, as there were no graphics or images on the MUD at the time. The "cyberrape" itself was performed by Mr. Bungle, who leveraged a "voodoo doll" subprogram that allowed him to make actions that were falsely attributed to other characters in the virtual community. The "voodoo doll" subprogram was eventually rendered useless by a character named Zippy. These actions, which included describing sexual acts that characters performed on each other and forcing the characters to perform acts upon themselves, went far beyond the community norms to that point and continued for several hours. They were interpreted as sexual violation of the avatars who were made to act sexually, and incited outrage among the LambdaMOO users, raising questions about the boundaries between real-life and virtual reality, and how LambdaMOO should be governed. Following Mr. Bungle's actions, several users posted on the in-MOO mailing list, *social-issues, about the emotional trauma caused by his actions. One user whose avatar was a victim, called his voodoo doll activities "a breach of civility" while, in real life, "post-traumatic tears were streaming down her face". However, despite the passionate emotions including anger voiced by many users on LambdaMOO, none were willing to punish the user behind Mr. Bungle through real-life means. Three days after the event, the users of LambdaMOO arranged an online meeting, which Dibbell attended under his screenname (Dr. Bombay), to discuss what should be done about Mr. Bungle. The meeting lasted approximately two hours and forty-five minutes, but no conclusive decisions were made. After attending the meeting, one of the master-programmers of LambdaMOO (with screenname JoeFeedback), decided on his own to terminate Mr. Bungle's user account. 
Additionally, upon his return from a business trip, LambdaMOO's main creator, Pavel Curtis (screen name Archwizard Haakon), set up a system of petitions and ballots through which anyone could put to popular vote anything requiring administrative powers for its implementation. Through this system, LambdaMOO users put into place a @boot command, which temporarily disconnects disruptive guest users from the server, as well as a number of other new features. It was later discovered that Mr. Bungle's identity consisted not only of a young man attending NYU, but also of a group of NYU students on a dorm floor who encouraged his actions by calling out suggestions during the evening of the rape.

Analysis
Legal and ethical debate
Dibbell's "A Rape in Cyberspace" brought to light issues of online abuse that had received little attention at the time. It prompted debate about ethical and legal issues, free speech, how to continue building the Internet, how to regulate it, and how to potentially prosecute crimes that had never existed before. Since the article was written, interaction with online media has become ubiquitous, making it harder to avoid the negative actions of "trolls" and harassers. This has sharpened the debate over whether such events have real-world repercussions, as the psychological damage users feel is real.
Politics
In the aftermath of the event, members of the LambdaMOO community came together to discuss how to handle what happened. The community attained a political self-consciousness in deciding how to punish Mr. Bungle for his actions. Prior to the event, LambdaMOO's creator Pavel Curtis had released a document known as the "New Direction", which stated that the "wizards" were to serve as technicians, were not to make decisions affecting the social life of the MOO, and were only to implement decisions made by the community as a whole. This forced the LambdaMOO community to invent its own self-governance from scratch; in the case of Mr. Bungle, it was decided that his character would be deleted.
Psychology
"A Rape in Cyberspace" demonstrates how the virtual world and the real world interact, as exemplified by how Dibbell's experiences in the online community affected his real-world thought process. It also demonstrates the emotional effect which the events within LambdaMOO had on the players. Even though it happened in virtual reality, it was a symbolic form of violation in both realities.

Legacy
Dibbell's "A Rape in Cyberspace" and his other publications about the Bungle incident have been seen by many scholars and professionals as a key foundation for the topic of virtual rape. The article has been used to examine the moral nature of actions within the virtual world. Since the Mr. Bungle case, LambdaMOO has set up an arbitration system so that people can file suit against one another, and this system has been put to use in the matter of a virtual death. Over two decades later, these events remain one of the primary advertisements for LambdaMOO: research students still regularly visit the MOO (often sent there by their professors) and ask users about these events.
The article draws attention to a more modern version of the platonic binary, otherwise known as the mind-body split. The event described in the article illustrates the separation of the intellectual self from the physical self through the typing of words on a screen.
Dibbell continued to participate in LambdaMOO, up to 30 hours a week, and eventually wrote My Tiny Life about his experiences, incorporating the article. He remains somewhat astonished at the impact it has had, saying in 1998, "No piece I had done before had managed to convey as vividly to readers the fact that there was something wild and different going on online, something that might profoundly alter the way they related to words and communication and culture in general." The article raised awareness of the legal implications of online activity among scholars including Lawrence Lessig, and Dibbell himself would go on to teach cyberlaw as a Fellow at the Stanford Law School Center for the Internet and Society. The article is also considered one of the earliest examples of New Games Journalism, in which reviews of computer games are meshed with social observation and consideration of surrounding issues. In 2018 The Village Voice reprinted the article following a reported gang rape in Roblox. References Citations Bibliography Further reading Dibbell, Julian. "A Rape in Cyberspace." The Village Voice. 21 December 1993. Dibbell, Julian and Clarisse Thorn. Violation: Rape in Gaming. CreateSpace & Amazon. 14 October 2012. Dibbell, Julian. My Tiny Life. Owl Books, 1999. Laurenson, Lydia. The Inevitably-Named "Rape in RPGs", Gamegrene.com. 22 March 2005. Lessig, Lawrence. Code and Other Laws of Cyberspace. Basic Books, 2000. Posted By A.M. Rape in Roleplaying Games. Women in Gaming Archive, 4 June 1999. Whetton, A. Rape in RPGs. Principia Malefex. Wallace, Patricia M. The Psychology of The Internet. Cambridge University Press, 1999. 1993 documents Magazine articles Internet-based works Multi-user dungeon The Village Voice Online sex crimes Works about rape
A Rape in Cyberspace
Technology
1,792
14,794,097
https://en.wikipedia.org/wiki/CLNS1A
Methylosome subunit pICln is a protein that in humans is encoded by the CLNS1A gene. Interactions CLNS1A has been shown to interact with: ITGA2B, PRMT5, SNRPD1, and SNRPD3. See also Chloride channel References Further reading External links Ion channels
CLNS1A
Chemistry
67
3,408,731
https://en.wikipedia.org/wiki/DrugBank
The DrugBank database is a comprehensive, freely accessible, online database containing information on drugs and drug targets created and maintained by the University of Alberta and The Metabolomics Innovation Centre located in Alberta, Canada. As both a bioinformatics and a cheminformatics resource, DrugBank combines detailed drug (i.e. chemical, pharmacological and pharmaceutical) data with comprehensive drug target (i.e. sequence, structure, and pathway) information. DrugBank has used content from Wikipedia; Wikipedia also often links to DrugBank, posing potential circular reporting issues. The DrugBank Online website is available to the public as a free-to-access resource. However, use and re-distribution of content from DrugBank Online or the underlying DrugBank Data, in whole or part, and for any purpose requires a license. Academic users can apply for a free license for certain use cases while all other users require a paid license. The latest release of the database (version 5.0) contains 9591 drug entries including 2037 FDA-approved small molecule drugs, 241 FDA-approved biotech (protein/peptide) drugs, 96 nutraceuticals and over 6000 experimental drugs. Additionally, 4270 non-redundant protein (i.e. drug target/enzyme/transporter/carrier) sequences are linked to these drug entries. Each DrugCard entry contains more than 200 data fields with half of the information being devoted to drug/chemical data and the other half devoted to drug target or protein data. Four additional databases, HMDB, T3DB, SMPDB and FooDB, are also part of a general suite of metabolomic/cheminformatic databases. HMDB contains equivalent information on more than 40,000 human metabolites, T3DB contains information on 3100 common toxins and environmental pollutants, SMPDB contains pathway diagrams for nearly 700 human metabolic pathways and disease pathways, while FooDB contains equivalent information on ~28,000 food components and food additives. Version history The first version of DrugBank was released in 2006. This early release contained relatively modest information about 841 FDA-approved small molecule drugs and 113 biotech drugs. It also included information on 2133 drug targets. The second version of DrugBank was released in 2009. This greatly expanded and improved version of the database included 1344 approved small molecule drugs and 123 biotech drugs as well as 3037 unique drug targets. Version 2.0 also included, for the first time, withdrawn drugs and illicit drugs, extensive food-drug and drug-drug interactions as well as ADMET (absorption, distribution, metabolism, excretion and toxicity) parameters. Version 3.0 was released in 2011. This version contained 1424 approved small molecule drugs and 132 biotech drugs as well as >4000 unique drug targets. Version 3.0 also included drug transporter data, drug pathway data, drug pricing, patent and manufacturing data as well as data on >5000 experimental drugs. Version 4.0 was released in 2014. This version included 1558 FDA-approved small molecule drugs, 155 biotech drugs and 4200 unique drug targets. Version 4.0 also incorporated extensive information on drug metabolites (structures and reactions), drug taxonomy, drug spectra, drug binding constants and drug synthesis information. Scope and access All data in DrugBank is derived from public non-proprietary sources.
Nearly every data item is fully traceable and explicitly referenced to the original source. DrugBank data is available through a public web interface. See also ChEMBL Drug metabolism HMDB KEGG List of biological databases Pharmacology SMPDB T3DB Therapeutic Targets Database References Chemical databases Metabolomic databases Human drug metabolites Biological databases
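As a rough illustration of the drug/target split described above — not the actual DrugBank schema or API, whose field names, formats and access terms are defined by the DrugBank license — a simplified DrugCard-like record could be modelled as in the following Python sketch; all field names here are invented for illustration.

# Illustrative sketch only: a simplified, hypothetical representation of a
# DrugBank-style record, reflecting the split between drug/chemical data and
# drug-target data described above. Field names are invented, not DrugBank's.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TargetRecord:
    name: str             # e.g. a protein target
    kind: str             # "target", "enzyme", "transporter", or "carrier"
    uniprot_id: str = ""  # sequence/structure identifiers would hang off this side

@dataclass
class DrugRecord:
    name: str
    groups: List[str] = field(default_factory=list)  # e.g. ["approved"], ["experimental"]
    smiles: str = ""                                  # chemical data
    targets: List[TargetRecord] = field(default_factory=list)  # biological data

# Hypothetical example record
aspirin = DrugRecord(
    name="Acetylsalicylic acid",
    groups=["approved"],
    smiles="CC(=O)OC1=CC=CC=C1C(=O)O",
    targets=[TargetRecord(name="Prostaglandin G/H synthase 1", kind="target", uniprot_id="P23219")],
)
print(aspirin.name, [t.name for t in aspirin.targets])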
DrugBank
Chemistry
783
41,000
https://en.wikipedia.org/wiki/Date-time%20group
In communications messages, a date-time group (DTG) is a set of characters, usually in a prescribed format, used to express the year, the month, the day of the month, the hour of the day, the minute of the hour, and the time zone, if different from Coordinated Universal Time (UTC). The order in which these elements are presented may vary. The DTG is usually placed in the header of the message. The DTG may indicate either the date and time a message was dispatched by a transmitting station or the date and time it was handed into a transmission facility by a user or originator for dispatch. The DTG may be used as a message identifier if it is unique for each message. Military Date Time Group A form of DTG is used in the US Military's message traffic (a form of Automated Message Handling System). In US military messages and communications (e.g., on maps showing troop movements) the format is DD HHMM (SS) Z MON YY. Although occasionally seen with spaces, it can also be written as a single string of characters. Three different formats can be found: - Full time (used for software timestamps) - shortened time (used e.g. for timestamps manually written) - short time (e.g. used for planning) The zone letter references the military identifier of the time zone: UTC-12: Y (e.g., Baker Island) UTC-11: X (American Samoa, Niue) UTC-10: W (Honolulu, HI) UTC-9: V (Juneau, AK) UTC-8: U (PST, Los Angeles, CA) UTC-7: T (MST, Denver, CO) UTC-6: S (CST, Dallas, TX) UTC-5: R (EST, New York, NY) UTC-4: Q (Halifax, Nova Scotia) UTC-3: P (Buenos Aires, Argentina; Rio de Janeiro, Brazil) UTC-2: O (South Georgia and the South Sandwich Islands) UTC-1: N (Azores) UTC±0: Z (Zulu time, GMT, London) UTC+1: A (CET, Stockholm, Copenhagen, Berlin, Paris, Rome, Madrid, Valletta) UTC+2: B (EET, Athens) UTC+3: C (Arab Standard Time, Iraq, Bahrain, Kuwait, Saudi Arabia, Yemen, Qatar, as well as Moscow in Russia) UTC+4: D (Oman, the UAE) UTC+5: E (Pakistan, western Kazakhstan, Tajikistan, Uzbekistan, and Turkmenistan) UTC+6: F (eastern Kazakhstan, Bangladesh) UTC+7: G (Thailand) UTC+8: H (Beijing, China, Singapore) UTC+9: I (Tokyo, Japan) UTC+10: K (Brisbane, Australia) UTC+11: L (Sydney, Australia) UTC+12: M (Wellington, New Zealand) Examples Example 1: represents the 5th day of the current month 11:00 (UTC). Example 2: represents 9 July 2011 4:30 pm (MST). Example 3: represents the current time of refresh: (UTC). Sources MIL-STD-2500A 12 October 1994 See also Calendar date ISO 8601 References TM 20-205, the Dictionary of United States Army Terms (1944) ACP 121(I) p 3–7 External links Calendaring standards Date and time representation
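As an illustrative sketch (not an official military or MIL-STD implementation), the zone-letter table above can be used to convert between a timezone-aware Python datetime and a compact DD HHMM Z MON YY string. The function names below are invented, only whole-hour zones are handled, and spacing variants of the format are ignored.

# Illustrative sketch: formatting and parsing a military-style date-time group
# of the form "DDHHMMZMONYY" using the zone-letter table above.
from datetime import datetime, timezone, timedelta

# Military zone letters mapped to UTC offsets in whole hours ("J" is not used).
ZONE_LETTERS = {
    "Y": -12, "X": -11, "W": -10, "V": -9, "U": -8, "T": -7, "S": -6,
    "R": -5, "Q": -4, "P": -3, "O": -2, "N": -1, "Z": 0, "A": 1, "B": 2,
    "C": 3, "D": 4, "E": 5, "F": 6, "G": 7, "H": 8, "I": 9, "K": 10,
    "L": 11, "M": 12,
}

def format_dtg(dt: datetime, zone_letter: str = "Z") -> str:
    """Render a timezone-aware datetime as e.g. '091630TJUL11'."""
    offset = timedelta(hours=ZONE_LETTERS[zone_letter])
    local = dt.astimezone(timezone(offset))
    return local.strftime("%d%H%M") + zone_letter + local.strftime("%b%y").upper()

def parse_dtg(dtg: str) -> datetime:
    """Parse a short-form DTG such as '051100ZAUG23' into an aware datetime."""
    day, hour, minute = int(dtg[0:2]), int(dtg[2:4]), int(dtg[4:6])
    zone_letter = dtg[6].upper()
    month_year = datetime.strptime(dtg[7:], "%b%y")
    tz = timezone(timedelta(hours=ZONE_LETTERS[zone_letter]))
    return datetime(month_year.year, month_year.month, day, hour, minute, tzinfo=tz)

# 9 July 2011, 16:30 in zone T (UTC-7) -> "091630TJUL11"
print(format_dtg(datetime(2011, 7, 9, 16, 30, tzinfo=timezone(timedelta(hours=-7))), "T"))
print(parse_dtg("051100ZAUG23"))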
Date-time group
Physics
739
3,818,648
https://en.wikipedia.org/wiki/Matthew%20Hennessy
Matthew Hennessy is an Irish computer scientist who has contributed especially to concurrency, process calculi and programming language semantics. Career During 1976–77, Matthew Hennessy was an assistant professor at the University of Waterloo in Canada. Then during 1977–78, he was a visiting professor at the Universidade Federal de Pernambuco in Brazil. Subsequently, he was a research associate (1979–81) and then lecturer (1981–85) at the University of Edinburgh in Scotland. During 1985, he was a guest lecturer/researcher at the University of Aarhus in Denmark. Hennessy was Professor of Computer Science at the Department of Informatics, University of Sussex, England, from 1985 until 2008. Since then, Hennessy has held a research professorship at the Department of Computer Science, Trinity College, Dublin. Hennessy's research interests are in the area of the semantic foundations of programming and specification languages, particularly involving distributed computing, including mobile computing. He also has an interest in verification tools. His co-authors include Robin Milner and Gordon Plotkin. Hennessy is a member of the Academy of Europe. He held a Royal Society/Leverhulme Trust Senior Research Fellowship during 2005–06 and has a Science Foundation Ireland Research Professorship at Trinity College Dublin. Books Matthew Hennessy has written a number of books: Hennessy, Matthew. A Distributed Pi-Calculus. Cambridge University Press, Cambridge, UK, 2007. . Hennessy, Matthew. Algebraic Theory of Processes. The MIT Press, Cambridge, Massachusetts, 1988. . Hennessy, Matthew. The Semantics of Programming Languages: An Elementary Introduction using Structural Operational Semantics. John Wiley and Sons, New York, 1990. . See also Hennessy–Milner logic Ó hAonghusa References External links Matthew Hennessy Trinity College Dublin home page Year of birth missing (living people) Living people 20th-century Irish people 21st-century Irish people Irish computer scientists Formal methods people Computer science writers Academic staff of the University of Waterloo Academics of the University of Edinburgh Academics of the University of Sussex Fellows of Trinity College Dublin
Matthew Hennessy
Technology
417
44,596,855
https://en.wikipedia.org/wiki/5154%20aluminium%20alloy
5154 aluminium alloy is an alloy in the wrought aluminium-magnesium family (5000 or 5xxx series). As an aluminium-magnesium alloy, it combines moderate-to-high strength with excellent weldability. 5154 aluminium is commonly used in welded structures such as pressure vessels and ships. As a wrought alloy, it can be formed by rolling, extrusion, and forging, but not casting. It can be cold worked to produce tempers with a higher strength but a lower ductility. It is generally not clad. Alternate names and designations include AlMg3.5, N5, and A95154. The alloy and its various tempers are covered by the following standards: ASTM B 209: Standard Specification for Aluminium and Aluminium-Alloy Sheet and Plate ASTM B 210: Standard Specification for Aluminium and Aluminium-Alloy Drawn Seamless Tubes ASTM B 211: Standard Specification for Aluminium and Aluminium-Alloy Bar, Rod, and Wire ASTM B 221: Standard Specification for Aluminium and Aluminium-Alloy Extruded Bars, Rods, Wire, Profiles, and Tubes ASTM B 547: Standard Specification for Aluminium and Aluminium-Alloy Formed and Arc-Welded Round Tube ISO 6361: Wrought Aluminium and Aluminium Alloy Sheets, Strips and Plates Chemical composition The alloy composition of 5154 aluminium is: Aluminium: 94.4 to 96.8% Chromium: 0.15 to 0.35% Copper: 0.1% max Iron: 0.4% max Magnesium: 3.1 to 3.9% Manganese: 0.1% Silicon: 0.25% max Titanium: 0.2% max Zinc: 0.2% max Residuals: 0.15% max The similar alloy A5254 differs only in impurities limits. Properties Typical material properties for 5154 aluminium alloy include: Density: 2.66 g/cm3, or 166 lb/ft3. Young's modulus: 69 GPa, or 10 Msi. Electrical conductivity: 32% IACS. Ultimate tensile strength: 230 to 330 MPa, or 33 to 48 ksi. Thermal Conductivity: 130 W/m-K. Thermal Expansion: 23.9 μm/m-K. References Aluminium alloy table Aluminium–magnesium alloys
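As an illustrative sketch only (not a standards-compliant conformity test), the composition ranges quoted above can be gathered into a small lookup table and used to flag a measured composition that falls outside the specification. The sample values below are invented for demonstration, and the article's bare "0.1%" figure for manganese is treated here as a maximum.

# Illustrative sketch: checking a measured 5154 composition (weight %) against
# the specification ranges quoted in the article. Sample values are invented.
SPEC_5154 = {            # (min %, max %) by mass
    "Al": (94.4, 96.8),
    "Cr": (0.15, 0.35),
    "Cu": (0.0, 0.10),
    "Fe": (0.0, 0.40),
    "Mg": (3.1, 3.9),
    "Mn": (0.0, 0.10),   # article lists manganese simply as 0.1%
    "Si": (0.0, 0.25),
    "Ti": (0.0, 0.20),
    "Zn": (0.0, 0.20),
}

def check_composition(sample: dict) -> dict:
    """Return, per element, whether the measured fraction is within spec."""
    return {el: lo <= sample.get(el, 0.0) <= hi for el, (lo, hi) in SPEC_5154.items()}

sample = {"Al": 95.5, "Cr": 0.25, "Cu": 0.05, "Fe": 0.3, "Mg": 3.4,
          "Mn": 0.05, "Si": 0.2, "Ti": 0.1, "Zn": 0.1}
print(check_composition(sample))  # every element should map to True here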
5154 aluminium alloy
Chemistry
467
68,738,451
https://en.wikipedia.org/wiki/James%20Bond%20007%3A%20Everything%20or%20Nothing%20%28GBA%20video%20game%29
James Bond 007: Everything or Nothing is a third-person shooter video game, developed by Griptonite Games and published by Electronic Arts for the Game Boy Advance (GBA). As MI6 agent James Bond, the player must foil an ex-KGB agent who plans to use nanotechnology for world domination. Everything or Nothing was released in November 2003, several months prior to the release of a home console version. It received "mixed or average" reviews according to Metacritic. Gameplay Everything or Nothing is primarily a third-person shooter, played from an isometric perspective. The player controls MI6 agent James Bond. Stealth is emphasized throughout the game, and several driving levels are also featured. The game is divided into eight missions set around the world. On foot, the player has various weapons and gadgets to use against guard enemies who appear throughout the game. Weapons include guns, grenades, and land mines, and the player can also punch or kick. Medical kits are found throughout the game and are used to replenish the player's health. The player can take cover behind pillars or tables and can rappel down cliffs and buildings on some levels. M and Q appear after each level to brief the player. In each level, primary objectives must be completed before advancing. Optional secondary objectives are also present in each level, providing the player with Style Points if completed. At the end of each level, the player can use the accumulated Style Points to purchase armor, weapon upgrades, increased speed, and better aiming. A blackjack mini-game can also be unlocked by collecting Style Points. The player can sneak up on guards from behind and quietly take them out for additional points. A radar is used to detect enemy locations, and an alert meter tells the player whether they have been spotted. The driving portions are played during certain missions and are also viewed from an isometric perspective. The player pursues a primary enemy vehicle and is also followed by several smaller enemy vehicles. The player's car has several weapons such as a machine gun, rockets, and oil slicks. The game offers a multiplayer deathmatch mode for up to four players, with the use of a Game Link Cable. A GameCube – Game Boy Advance link cable can also be used to connect with the GameCube version of Everything or Nothing, unlocking mini-games and upgraded gadgets, among other features. Plot The game opens in the Sahara Desert in Egypt, where James Bond infiltrates and destroys a secret weapons facility. He is then sent to a casino in Cairo, to investigate Arkady Yayakov, an ex-KGB agent and suspected smuggler. It is believed that Yayakov knows about recently stolen nanotechnology. Bond pursues Yayakov in a car chase, and his investigation leads him to a Cairo train yard that Yayakov had used as a shipping base. Bond discovers traces of the nanotechnology and also encounters a former foe, Jaws, whom he defeats in a battle. MI6 learns that one of Yayakov's clients is an ex-KGB agent named Nikolai Diavolo, who has set up a base in the Peruvian jungle near a platinum mine. Bond meets with Agent 003, who has monitored the base, and they infiltrate the site. 003 is killed, and Bond pursues Diavolo in a car chase. He learns of another compound, hidden within a crypt in New Orleans. There, he discovers Diavolo is working with scientist Dr. Katya Nadanova. She has used a biological agent found in Louisiana to create nanobots that are capable of eating through metals, except platinum. 
Diavolo plans to unleash an army of platinum-coated super soldiers on the world while using the nanobots to destroy military defenses such as warships and tanks. He plans to initiate his plot in Moscow and intends to launch nanotech missiles to destroy cities unless his demands are met. Bond escapes the New Orleans facility, and American agent Mya Starling is sent to destroy Diavolo's nanotech plant, located in the Andes Mountains in Peru. Bond is sent there after she goes missing, and he proceeds through the large facility, which extends underground through Incan ruins. Starling escapes, the facility is destroyed, and Bond travels to Moscow to foil Diavolo's plot. He pursues Nadanova's tank and stops her, and subsequently fights against Jaws in another battle, before ultimately killing Diavolo. Development and release Everything or Nothing was developed by Griptonite Games. The game features the likenesses of Pierce Brosnan (James Bond), Judi Dench (M), Willem Dafoe (Nikolai Diavolo), Heidi Klum (Katya Nadanova), and Mýa (Mya Starling). Mýa also sings the game's eponymous song, heard during the end credits. The actors also provide minimal voice-over lines. Everything or Nothing was published by Electronic Arts for Game Boy Advance. In North America, it was released on November 17, 2003, several months before the home console version. Reception Everything or Nothing received "mixed or average" reviews according to Metacritic. Critics generally found the game to be too short, with little replay value. Frank Provo of GameSpot stated that the game adequately captured "the look and feel of a typical Bond film", and Justin Speer of X-Play said the game has an "authentic Bond feel". The on-foot gameplay was compared to Metal Gear Solid for the Game Boy Color. Matt Helgeson, writing for Game Informer, was critical of the isometric perspective: "It's sort of hard to plan your next move when you can only see about 10 virtual feet in front of you, and as a result it's usually easier to just run and gun your way through the levels". GameSpy's Avi Fryman considered the game's environments to be "severely limited", writing, "There are few nooks and crannies to explore, few secrets to reveal, and few genuinely interactive objects". Eduardo Zacarias of GameZone criticized the sluggish movements and the difficulty in aiming, and found the game somewhat repetitive. Other critics praised the gameplay. Nintendojo called the amount of variety "very admirable in today's heavily congested Game Boy Advance market". The driving portions were considered reminiscent of Spy Hunter, with some negative comparisons by critics. Greg Orlando of GMR called the driving portions "unwieldy and dull", and Speer also criticized the poor handling. Fryman found this component of the game "not quite as fun or involved" as Spy Hunter. Conversely, Zacarias praised the driving portions and considered them "highly addictive". The sound and music were praised. Craig Harris of IGN opined that the audio is "where the game really shines", writing that the soundtrack is "outstanding" and "sets the mood perfectly". GamePro wrote that the soundtrack "really brings out the exhilaration of the espionage action". The graphics also received some praise.
Notes References External links James Bond 007: Everything or Nothing at MobyGames 2003 video games James Bond video games Electronic Arts games Game Boy Advance games Stealth video games Third-person shooters Fiction about nanotechnology Video games set in Egypt Video games set in New Orleans Video games set in Peru Video games set in Moscow Griptonite Games Video games developed in the United States Games with GameCube-GBA connectivity
James Bond 007: Everything or Nothing (GBA video game)
Materials_science
1,534
47,885,421
https://en.wikipedia.org/wiki/HD%20138289
HD 138289, also known as HR 5757, is a probable spectroscopic binary located in the constellation Apus, the bird-of-paradise. It has an apparent magnitude of 6.18, placing it near the limit for naked eye visibility. Gaia DR3 parallax measurements place the object 359 light years away and it is currently receding with a heliocentric radial velocity of . At its current distance, HD 138289's brightness is diminished by 0.25 magnitudes due to extinction from interstellar dust. It has an absolute magnitude of +1.21. The visible component has a stellar classification of K2.5 IIIb CN1.5 Ba+0.5, indicating that it is a red giant with an anomalous overabundance of cyano radicals in its spectrum. The IIIb luminosity class indicates that it is a lower luminosity giant star. The Ba+0.5 suffix indicates that it is a mild barium star, whose barium abundance might have come from a hidden white dwarf companion. HD 138289 is estimated to be 2.8 billion years old, enough time for it to cool and expand to 13 times the radius of the Sun. It is now on the horizontal branch, fusing helium at its core. At present it has 1.59 times the mass of the Sun and radiates 52.5 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of . HD 138289 has a near solar metallicity and spins modestly with a poorly constrained projected rotational velocity of . References K-type giants PD-77 01134 Apus 138289 076664 5757 Apodis, 32
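For reference, the apparent magnitude, distance, and extinction quoted above are related to the absolute magnitude by the standard distance-modulus relation, shown here as a general formula in LaTeX; the article's figures come from catalogue values and rounding, so a naive recomputation from the rounded numbers quoted here may differ slightly.

M = m - 5\log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right) - A

where M is the absolute magnitude, m the apparent magnitude, d the distance in parsecs, and A the interstellar extinction in magnitudes.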
HD 138289
Astronomy
358
22,672,118
https://en.wikipedia.org/wiki/Occupational%20burnout
The ICD-11 of the World Health Organization (WHO) describes occupational burnout as a work-related phenomenon resulting from chronic workplace stress that has not been successfully managed. According to the WHO, symptoms include "feelings of energy depletion or exhaustion; increased mental distance from one's job, or feelings of negativism or cynicism related to one's job; and reduced professional efficacy." It is classified as an occupational phenomenon, but is not recognized by the WHO as a medical condition. Maslach and colleagues made clear that burnout does not constitute "a single, one-dimensional phenomenon". However, national health bodies in some European countries do recognise it as a medical condition, and it is also independently recognised as such by some health practitioners. Nevertheless, a body of evidence suggests that what is termed burnout is a depressive condition. History Kaschka, Korczak, and Broich (2011) advanced the view that burnout is described in the Book of Exodus (18:17–18). In the New International Version of the Bible, Moses’ father-in-law said to Moses, “What you are doing is not good. You and these people who come to you will only wear yourselves out. The work is too heavy for you; you cannot handle it alone." Gordon Parker suggested that the ancient European concept of acedia refers to burnout and not depression as many others believe. By 1834, the German concept of (occupational diseases) had become established. The concept reflected adverse work-related effects on mental and physical health. In 1869, New York neurologist George Beard used the term "neurasthenia" to describe a very broad condition caused by the exhaustion of the nervous system, which he argued was to be found in "civilized, intellectual communities". The concept soon became popular, and many in the United States believed themselves to suffer from it. Some came to call it "Americanitis". Beard further broadened the potential symptoms of neurasthenia such that the disorder could be the source of almost any symptom or behaviour. Don R Lipsitt would later wonder if the term "burnout" was similarly too broadly defined to be useful. In 2017 the Dutch psychologist Wilmar Schaufeli pointed out similarities between Beard's concept of neurasthenia and the contemporary concept of occupational burnout. The rest cure was a commonly prescribed treatment for neurasthenia in the United States, particularly for women. The American doctor Silas Weir Mitchell often prescribed this treatment. Other treatments included hypnosis, Paul Charles Dubois's cognitive behavioural therapy (this is distinct from and devised much earlier than Aaron Beck's cognitive behavioral therapy), and Otto Binswanger's life normalisation therapy. In 1888, the English neurologist William Gowers coined the term occupation neurosis to describe nerve damage caused by repetitive strain injury, translating the German concept of Beschäftigungsneurosen (occupational diseases affecting the nerves). The related term occupational neurosis came to include a wide range of work-caused anxieties and other mental problems. By the late 1930s, American health professionals had become widely acquainted with the condition. It became known as in German. From 1915, the Japanese psychiatrist Shoma Morita developed Morita therapy to treat neurasthenia. He had come to have a different understanding of the condition than Beard, preferring to call it shinkeishitsui; he published two books about the condition.
In 1936, the Austro-Hungarian-Canadian endocrinologist Hans Selye first published on the concept of the general adaptation syndrome. Selye's work was ultimately a major advance in the understanding of the human response to stress. The syndrome has three stages: alarm, resistance, and exhaustion. In 1957, Swiss psychiatrist Paul Kielholz coined the term - . The concept was one of a number of new depression-subtypes that gained traction in France and Germany during the 1960s. In 1961, British author Graham Greene published the novel A Burnt-Out Case, the story of an architect who became disenchanted with fame and volunteered to work at a leper colony in the Congo. In 1965, Kielholz publicised the idea of anti-depressant therapy in the German-speaking world through his book Diagnose und Therapie der Depressionen für den Praktiker [Diagnosis and Treatment of Depression for the Practitioner]. His work inspired further writing on the topic by Volker Faust. In 1968, the American Psychiatric Association's DSM-II replaced "psychophysiologic nervous system reaction" with the condition neurasthenic neurosis (neurasthenia). This condition was "characterized by complaints of chronic weakness, easy fatigability, and sometimes exhaustion." Another condition added to this edition was the similar asthenic personality, which was "characterized by easy fatigability, low energy level, lack of enthusiasm, marked incapacity for enjoyment, and oversensitivity to physical and emotional stress." In 1969, American prison official Harold B Bradley used the term burnout in a criminology paper to describe the fatigued staff at a centre for treating young adult offenders. Bradley's article has been cited as the first known academic work to use the term. By 1970, the concept of hypasthenia was used in Russia to characterize a condition that involved "fatigue, depression, tearfulness, anorexia, and work inefficiency." In 1973, Canadian psychiatrist David M Berger proposed that "neurasthenia is a stress-intolerance syndrome". In 1974, Herbert Freudenberger, a German-born American psychologist, used the term "burn-out" in his academic paper "Staff Burn-Out." The paper was based on his qualitative observations of the volunteer staff (including himself) at a free clinic for drug addicts. He characterized burnout by a set of symptoms that includes exhaustion resulting from work's excessive demands as well as physical symptoms such as headaches and sleeplessness, "quickness to anger", and closed thinking. He observed that the burned-out worker "looks, acts, and seems depressed." After the publication of Freudenberger's paper, interest in the concept grew. The American psychologist Christina Maslach described in a 1976 magazine article the impact of interpersonal stress on human service workers (e.g., social workers, psychiatrists, poverty lawyers, etc.). The impact manifested itself in symptoms such as fatigue, quickness to anger, and cynical attitudes toward the people the service workers were supposed to help. Also in 1976, Israeli-American psychologist Ayala Pines and American psychologist Elliot Aronson, using group workshops, began to treat people having symptoms of burnout. Pines collaborated with Maslach in writing essentially data-free papers about burnout in individuals who worked in day care centers and mental health facilities. In January 1978, Soviet endocrinologists LA Lavrova (ЛА Лаврова) and MS Bilyalov (МШ Билялов) found that in 125 patients with neurasthenia, there were substantial hormonal differences from normal.
In June 1978, a team led by Australian psychiatrist Gavin Andrews found that neurasthenic neurosis was defined by two features, "anxiety proneness" and "inability to cope with stress". In 1980, the DSM-III was released. It abolished the concepts of neurasthenia and asthenic personality, both with the explanation "This DSM-II category was rarely used." Neither was directly replaced. Also in 1980, American psychologist Cary Cherniss published the book Staff Burnout: Job Stress in the Human Services. In 1981, Maslach and fellow American psychologist Susan E. Jackson published an instrument for assessing occupational burnout, the Maslach Burnout Inventory (MBI). It was the first such instrument of its kind, and soon became the most widely used occupational burnout instrument. The two researchers described occupational burnout in terms of emotional exhaustion, depersonalization (feeling low-empathy towards other people in an occupational setting), and reduced feelings of work-related personal accomplishment. In 1988, Pines and Aronson wrote the popular book Career Burnout: Causes and Cures, an updated version of a book they had published in April 1981 with fellow American psychologist Ditsa Kafry. They found that "marriage burnout" was just as prevalent as "job burnout". The WHO's ICD-10 (1994) removed the diagnosis of asthenic personality; the WHO, however, continued to include neurasthenia (F48.0). In 1998, Swedish psychiatrists Marie Åsberg and Åke Nygren investigated a surge of depression-related health insurance claims in their country. They found that the symptoms of many cases did not match the typical presentation of depression. Complaints like fatigue and decreased cognitive ability dominated, and many believed their working conditions to be the cause. In 2005, the Swedish Board of Health and Welfare adopted a category described as "exhaustion disorder". Treatment programs followed. In December 2007, the Swiss Expert Network on Burnout (SEB) was established. It has since held a number of symposia, and published recommendations for treating burnout. In 2003, the American psychiatrists Philip M. Liu and David A. Van Liew advanced the view that the concept of burnout is largely bereft of meaning and has often come to refer to "stress-induced unhappiness" with one's job. They, however, also wrote that burnout can mean "everything from fatigue to a major depression and now seems to have become an alternative word for depression but with less serious significance" (p. 434). In 2015, French psychologist Renzo Bianchi and his colleagues provided a literature review on the burnout–depression overlap (based on 92 studies) and concluded that the studies fail to prove consistently the nosological distinctiveness of the burnout phenomenon. Bianchi et al.'s (2021) later research suggests that burnout is a depressive condition. In May 2015, the WHO adopted a new conceptualisation of "occupational burnout." The conceptualization was consistent with Maslach's. However, occupational burnout was "not itself classified by the WHO as a medical or psychiatric condition or mental disorder." As of 2017, nine European countries (Denmark, Estonia, France, Hungary, Latvia, Netherlands, Portugal, Slovakia and Sweden) legally recognized burnout syndrome as an occupational disorder, for example, by awarding workers' compensation payments to affected people. The WHO's ICD-11 began official use in 2022. 
Within this categorisation, the concept of neurasthenia became part of the new condition of bodily distress disorder (6C20). The WHO also modified their definition of burnout that year. Diagnosis The two main classification systems of psychiatric disorders are the American Psychiatric Association's (APA) Diagnostic and Statistical Manual of Mental Disorders (DSM, used in North America and elsewhere) and the World Health Organisation's (WHO) International Classification of Diseases (ICD, used in Europe and elsewhere). Burnout is not recognized as a distinct mental disorder in the DSM-5 (published in 2013). Its definitions for Adjustment Disorders and Unspecified Trauma- and Stressor-Related Disorder in some cases reflect the condition. 2022's update, the DSM-5-TR, did not include burnout. Rotenstein et al. (2018) in a review of research on physician burnout identified 142 different definitions of burnout, underlining the great heterogeneity in diagnostic criteria for burnout. Marked differences among researchers' conceptualizations of what constitutes burnout have underlined the need for a consensus definition. As of 2017, nine European countries may legally recognise burnout in some way, such as by providing workers' compensation payments. (Legal recognition for financial purposes is not the same as medical recognition as a discrete disease.) If, after treatment, a person with burnout continues to have persistent physical symptoms triggered by the condition, in Iceland they may be considered to have "somatic symptom disorder" (DSM-5) and "bodily distress disorder" (ICD-11). The ICD-10 (current 1994–2021) classified "burn-out" as a type of non-medical life-management difficulty under code Z73.0. It was considered to be one of the "factors influencing health status and contact with health services" and "should not be used" for "primary mortality coding". It was also considered one of the "problems related to life-management difficulty". The condition is further defined as being a "state of vital exhaustion", which historically had been called neurasthenia. The ICD-10 also contained a medical condition category of "F43.8 Other reactions to severe stress". In 2005, the Swedish Board of Health and Welfare added "exhaustion disorder" (ED; F43.8A) to the Swedish version of the ICD-10, the ICD-10-SE, representing what is typically called "burnout" in English. Swedish sufferers of severe burnout had earlier been treated as having neurasthenia. According to Lindsäter et al., "The diagnosis has become almost as prevalent as major depression in Swedish health care settings, and currently accounts for more instances of long-term sick-leave reimbursement than any other single diagnosis in the country." The Royal Dutch Medical Association defined "burnout" as a subtype of adjustment disorder as part of the ICD-10 system. In the Netherlands, overspannenheid (overstrain) is a condition that leads to burn-out. In that country, burnout is included in handbooks and medical staff are trained in its diagnosis and treatment. A reform of Dutch health insurance resulted in adjustment disorder treatment being removed from the compulsory basic package in 2012. Practitioners were told that more serious cases of the condition may qualify for classification as depression or anxiety disorder. A new version of the ICD, ICD-11, was released in June 2018, for first use in January 2022. The new version has an entry coded and titled "QD85 Burn-out".
The ICD-11 describes the condition as follows: This condition is classified under "Problems associated with employment or unemployment" in the section on "Factors influencing health status or contact with health services." The section is devoted to reasons other than recognized diseases or health conditions for which people contact health services. In a statement made in May 2019, the WHO said "Burn-out is included in the 11th Revision of the International Classification of Diseases (ICD-11) as an occupational phenomenon. It is not classified as a medical condition." The ICD-11 also has the medical condition "6B4Y Other specified disorders specifically associated with stress", which is the equivalent of the ICD-10's F43.8. Further detail about the varied ways clinicians and others used the then-current ICD and DSM classifications with burnout was published by Dutch psychologist Arno Van Dam in 2021. The US government's National Institutes of Health includes the condition as "psychological burnout" in its index of the National Library of Medicine, and provides a number of synonyms. It defines the condition as "An excessive reaction to stress caused by one's environment that may be characterized by feelings of emotional and physical exhaustion, coupled with a sense of frustration and failure." SNOMED CT includes the term "burnout" as a synonym for its defined condition of "Physical AND emotional exhaustion state", which is a subtype of anxiety disorder. The Diseases Database defines the condition as "professional burnout". Instruments used to assess burnout symptoms In 1981, Maslach and Jackson developed the first widely used instrument for assessing burnout, the Maslach Burnout Inventory (MBI). It remains by far the most commonly used instrument to assess the condition. Consistent with Maslach's conceptualization, the MBI operationalizes burnout as a three-dimensional syndrome consisting of emotional exhaustion, depersonalization (an unfeeling and impersonal response toward recipients of one's service, care, treatment, or instruction), and reduced personal accomplishment. The MBI originally focused on human service professionals (e.g., teachers, social workers). Since that time, the MBI has been used for a wider variety of workers (e.g., healthcare workers). The instrument or its variants are now employed with job incumbents working in many other occupations. There are other conceptualizations of burnout that differ from that suggested by Maslach and adopted by the WHO. In 1999, Demerouti and Bakker, with their Oldenburg Burnout Inventory (OLBI), conceptualized burnout in terms of exhaustion and disengagement, linking it to the job demands-resources model. This instrument is used mainly in the United States. Also that year, Wilmar Schaufeli and Arnold Bakker released the Utrecht Work Engagement Scale (UWES). It uses a similar conceptualisation to the MBI. However the UWES measures vigour, dedication and absorption; positive counterparts to the values measured by the MBI. It is used mainly in Germany. In 2005, TS Kristensen et al. released the public domain Copenhagen Burnout Inventory (CBI). They argued that the definition of burnout should be limited to fatigue and exhaustion. The CBI has had some use in Germany. In 2006, Shirom and Melamed with their Shirom-Melamed Burnout Measure (SMBM) conceptualized burnout in terms of physical exhaustion, cognitive weariness, and emotional exhaustion. 
An examination of the SMBM's emotional exhaustion subscale, however, indicates that the subscale more clearly embodies Maslach's concept of depersonalization than her concept of emotional exhaustion. This measure has seen some use in Sweden. In 2010, researchers from Mayo Clinic used portions of the MBI, along with other comprehensive assessments, to develop the Well-Being Index, a nine-item self-assessment tool designed to measure burnout and other dimensions of distress in healthcare workers specifically. It has been mainly used in the United States. In 2014, Aniella Besèr et al. developed the Karolinska Exhaustion Disorder Scale (KEDS), which is used mainly in Sweden. It was designed to measure the symptoms defined by the ICD-10-SE's category for exhaustion disorder. The authors believed that those with the disorder were often initially depressed, but that this soon passed. The core symptoms of the disorder were deemed to be "exhaustion, cognitive problems, sleep disturbance". The authors also believed that the condition was clearly differentiated from both depression and anxiety. In 2021, the Sydney Burnout Measure (SBM) was released by Gordon Parker et al., which "captures domains of exhaustion, cognitive impairment, loss of empathy, withdrawal and insularity, and impaired work performance, as well as several anxiety, depression and irritability symptoms." There are still other conceptualizations as well that are embodied in other instruments, including the Hamburg Burnout Inventory and Malach-Pines's Burnout Measure. The core of all of these conceptualizations, including that of Freudenberger, is exhaustion. In 2020, the Occupational Depression Inventory (ODI) was developed to quantify the severity of work-attributed depressive symptoms and establish provisional diagnoses of job-ascribed depression. The ODI covers nine symptoms, including exhaustion (burnout's putative core). The instrument exhibits robust psychometric properties. The ODI is the only instrument that assesses work-related suicidal thoughts, a particularly important symptom calling for immediate attention. Available evidence indicates that burnout scales have very high correlations with the ODI, correlations that cannot be explained by item overlap, suggesting that the ODI is a suitable replacement for burnout scales like the MBI. Maslach advanced the idea that burnout should not be viewed as a depressive condition. Recent evidence, based on factor-analytic and meta-analytic findings, calls into question this supposition. Burnout is also now often seen as involving the full array of depressive symptoms (e.g., low mood, cognitive alterations, sleep disturbance). Different types of burnout There are thought to be other types of burnout. Caregiver burnout Burnout affects caregivers; in the ICD-11 classification, in the description for code QF27 "Difficulty or need for assistance at home and no other household member able to render care" the term "caregiver burnout" is given as a synonym. Spouse burnout Kristensen et al. and Malach-Pines (who also published as Pines) advanced the view that burnout can also occur in connection to life outside of work. For example, Malach-Pines developed a burnout measure keyed to the role of spouse. Teacher burnout Burnout in teachers represents a type of occupational burnout. Athlete burnout A type of occupational burnout which burdens athletes young and old.
Relatively little research has been conducted on this phenomenon, but it affects the mental health and overall well-being of countless athletes across the world. It may lead to athletes feeling immensely stressed out and in extreme cases terminating their participation in an activity they once enjoyed. Further impacts are unknown, but various other detriments to mental health are possible as well. Autistic burnout as a distinct condition Autistic people are known to experience a state of mental, emotional, or physical exhaustion referred to as autistic burnout caused by masking of autistic traits and behavior and the general stress associated with living in an unaccommodating environment. Autistic burnout is considered to be distinct from occupational burnout in both etiology and presentation. In contrast to "occupational burnout", autistic burnout does not necessarily have to relate to employment and goes along with increased sensory sensitivity. Relationship with other conditions A growing body of evidence suggests that burnout is etiologically, clinically, and nosologically similar to depression. In a study that directly compared depressive symptoms in burned out workers and clinically depressed patients, no diagnostically significant differences were found between the two groups; burned out workers reported as many depressive symptoms as clinically depressed patients. Moreover, a study by Bianchi, Schonfeld, and Laurent (2014) showed that about 90% of workers with very high scores on the MBI meet diagnostic criteria for depression. The view that burnout is a form of depression has found support in several recent studies. Some authors have recommended that the nosological concept of burnout be revised or even abandoned entirely given that it is not a distinct disorder and that there is little agreement on burnout's diagnostic criteria. A newer generation of studies indicates that burnout, particularly its exhaustion dimension, problematically overlaps with depression; these studies have relied on more sophisticated statistical techniques, for example, exploratory structural equation modeling (ESEM) bifactor analysis, than earlier studies of the topic. The advantage of ESEM bifactor analysis, which combines the best features of exploratory and confirmatory factor analysis, is that it provides a granular look at item-construct relationships, without falling into traps earlier burnout researchers fell into. Liu and van Liew wrote that "the term burnout is used so frequently that it has lost much of its original meaning. As originally used, burnout meant a mild degree of stress-induced unhappiness. The solutions ranged from a vacation to a sabbatical. Ultimately, it was used to describe everything from fatigue to a major depression and now seems to have become an alternative word for depression, but with a less serious significance" (p. 434). The authors equate burnout with adjustment disorder with depressed mood. Endocrine findings Kakiashvili et al. argued that although burnout and depression have overlapping symptoms, endocrine evidence suggests that the disorders' biological bases are different. They argued that antidepressants should not be used by people with burnout because the medications can make the underlying hypothalamic–pituitary–adrenal axis dysfunction worse. Others have found Kakiashvili et al.'s argument specious. Despite its name, depression with atypical features is not a rare form of depression.
The cortisol profile in atypical depression, in contrast to that of melancholic depression, is similar to the cortisol profile found in burnout. Commentators advanced the view that burnout differs from depression because the cortisol profile of burnout differs from that of melancholic depression; however, burnout's cortisol profile is similar to that of atypical depression. Risk factors Evidence suggests that the etiology of burnout is multifactorial, with personality factors playing an important, long-overlooked role. Cognitive dispositional factors implicated in depression have also been found to be implicated in burnout. One cause of burnout includes stressors that a person is unable to cope with fully. A 2019 survey by Cartridge People concluded that workload was the main cause of workplace stress. Burnout is thought to occur when a mismatch is present between the nature of the job and the job the person is actually doing. A common indication of this mismatch is work overload, which sometimes involves a worker who survives a round of layoffs, but after the layoffs the worker finds that he or she is doing too much with too few resources. Overload may occur in the context of downsizing, which often does not narrow an organization's goals, but requires fewer employees to meet those goals. The research on downsizing, however, indicates that downsizing has more destructive effects on the health of the workers who survive the layoffs than mere burnout; these health effects include increased levels of sickness and greater risk of mortality. The job demands-resources model has implications for burnout, as measured by the Oldenburg Burnout Inventory (OLBI). Physical and psychological job demands were concurrently associated with exhaustion, as measured by the OLBI. Lack of job resources was associated with the disengagement component of the OLBI. Maslach, Schaufeli and Leiter identified six risk factors for burnout in 2001: mismatch in workload, mismatch in control, lack of appropriate rewards, loss of a sense of positive connection with others in the workplace, perceived lack of fairness, and conflict between values. Although job stress has long been viewed as the main determinant of burnout, recent meta-analytic findings indicate that job stress is a weak predictor of burnout. These findings question one of the most central assumptions of burnout research. In a systematic literature review in 2014, the Swedish Agency for Health Technology Assessment and Assessment of Social Services (SBU) found that a number of work environment factors could affect the risk of developing exhaustion disorder or depressive symptoms: People who experience a work situation with little opportunity to influence, in combination with too high demands, develop more depressive symptoms. People who experience a lack of compassionate support in the work environment develop more symptoms of depression and exhaustion disorder than others. Those who experience bullying or conflict in their work develop more depressive symptoms than others, but it is not possible to determine whether there is a corresponding connection for symptoms of exhaustion disorder. People who feel that they have urgent work or a work situation where the reward is perceived as small in relation to the effort develop more symptoms of depression and exhaustion disorder than others. This also applies to those who experience insecurity in their employment, for example concern that the workplace will be closed down.
In some work environments, people have less trouble. People who experience good opportunities for control in their own work and those who feel that they are treated fairly develop fewer symptoms of depression and exhaustion disorder than others. Women and men with similar working conditions develop symptoms of depression and exhaustion disorder to a similar extent. Effects In line with the work of Christina Maslach and Susan E. Jackson, the World Health Organisation has defined burnout as consisting of: feelings of energy depletion or exhaustion; increased mental distance from one's job, or feelings of negativism or cynicism related to one's job; and reduced professional efficacy. A 2023 study by Elin Lindsäter et al. found a wide range of symptoms among people formally diagnosed with exhaustion disorder. The most common symptoms reported by people currently suffering with the condition were tiredness (48%), lack of energy (41%), difficulty recovering from exertion (33%), poor general cognitive functioning (33%), memory issues (32%) and difficulty coping with perceived stressors and demands (31%). Some research indicates that burnout is associated with reduced job performance, coronary heart disease, and mental health problems. Examples of emotional symptoms of occupational burnout include a lack of interest in the work being done, a decrease in work performance levels, feelings of helplessness, and trouble sleeping. The Swedish health department has defined the effects of exhaustion disorder as being: concentration difficulties or impaired memory; markedly reduced capacity to tolerate demands or to work under time pressure; emotional instability or irritability; sleep disturbance; marked fatigability or physical weakness; and physical symptoms such as aches and pains, palpitations, gastrointestinal problems, vertigo or increased sensitivity to sound. There is research on dentists and physicians that suggests that burnout is a depressive syndrome. Thus reduced job performance and cardiovascular risk could be related to burnout because of burnout's tie to depression. Behavioral signs of occupational burnout are demonstrated through cynicism within workplace relationships with coworkers, clients, and the organization itself. Forced overtime, heavy workloads, and frenetic work paces give rise to debilitating repetitive stress injuries, on-the-job accidents, over-exposure to toxic substances, and other dangerous work conditions. Williams and Strasser suggested that healthcare workers have focused much attention on the workplace risk factors for heart disease and other illnesses, but have underemphasized work-related depression risk. Other effects of burnout can manifest as lower energy and productivity levels, with workers observed to be consistently late for work and feeling a sense of dread upon arriving. They can suffer concentration problems, forgetfulness, increased frustration, and/or feelings of being overwhelmed. They may complain and feel negative, or feel apathetic and believe they have little impact on their coworkers and environment. Occupational burnout is also associated with absenteeism, other time missed from work, and thoughts of quitting. As in depression, chronic burnout is also associated with cognitive impairments in memory and attention. Research suggests that burnout can manifest differently between genders, with higher levels of depersonalisation among men and increased emotional exhaustion among women.
Other research suggests that people revealing a history of occupational burnout face future hiring discrimination. Treatment and prevention Health condition treatment and prevention methods are often classified as "primary prevention" (stopping the condition occurring), "secondary prevention" (removing the condition that has occurred) and "tertiary prevention" (helping people live with the condition). Primary prevention Maslach suggested that preventing burnout requires a combination of organizational change and worker education. She and Leiter argued that burnout can occur in connection to six areas of work life: workload, control, reward, community, fairness, and values. For example, with regard to workload, an organization should take steps to assure that a worker has adequate resources to meet job demands. With regard to values, clearly stated ethical organizational values are important for ensuring employee commitment. Supportive leadership and relationships with colleagues are also helpful. Hätinen et al. suggest "improving job-person fit by focusing attention on the relationship between the person and the job situation, rather than either of these in isolation, seems to be the most promising way of dealing with burnout." They also note that "at the individual level, cognitive-behavioural strategies have the best potential for success." One approach for addressing these discrepancies focuses specifically on the fairness area. In one study employees met weekly to discuss and attempt to resolve perceived inequities in their job. The intervention was associated with decreases in exhaustion over time but not cynicism or inefficacy, suggesting that a broader approach is required. Barry A. Farber suggests that strategies like setting more achievable goals, focusing on the value of the work, and finding better ways of doing the job can all be helpful ways of helping the stressed. People who do not mind the stress but want more reward can benefit from reassessing their work–life balance and implementing stress reduction techniques like meditation and exercise. Others who have low stress but are underwhelmed and bored with work can benefit from seeking greater challenge. In addition to interventions that can address and improve conditions on the work side of work-life balance, the ways in which people spend their non-work time can help to prevent burnout and improve health and well-being. Corporate Social Responsibility (CSR) initiatives are considered a resource which counteracts the stress effects of job demands, lowering employee burnout by boosting happiness, resilience, and capitalizing on altruism. Establishing a sense of psychological safety (the belief that it is safe to speak up) in an organisation helps prevent burnout. Similarly, feeling heard may also help. Training employees in ways to manage stress in the workplace is effective in preventing burnout. One study suggests that social-cognitive processes such as commitment to work, self-efficacy, learned resourcefulness, and hope may insulate individuals from experiencing occupational burnout. Increasing a worker's control over his or her job is another intervention that has been shown to help counteract exhaustion and cynicism in the workplace. Despite all the above recommendations, high-quality research on burnout prevention with random allocation of experimental units (either individual workers or organizational units) to intervention and control conditions has been relatively rare.
For example, Richardson and Rothstein's (2008) meta-analysis of workplace interventions included only two high-quality studies that addressed burnout. In their meta-analysis, Estevez Corres et al. (2021) identified only eight high-quality studies devoted to preventing emotional exhaustion in "high-stress jobs"; fewer interventions were devoted to depersonalization and reduced accomplishment. Secondary and tertiary prevention (aka treatment and management) Hätinen et al. list a number of common treatments, including treatment of any outstanding medical conditions, stress management, time management, depression treatment, psychotherapies, ergonomic improvement and other physiological and occupational therapy, physical exercise and relaxation. They have found that it is more effective to have a greater focus on "group discussions on work related issues", and discussion about "work and private life interface" and other personal needs with psychologists and workplace representatives. Mindfulness therapy has been shown to be an effective preventative for occupational burnout in medical practitioners. Additional prevention methods include: starting the day with a relaxing ritual; yoga; adopting healthy eating, exercising, and sleeping habits; setting boundaries; taking breaks from technology; nourishing one's creative side; and learning how to manage stress. Kakiashvili et al. said that "medical treatment of burnout is mostly symptomatic: it involves measures to prevent and treat the symptoms." They say the use of anxiolytics and sedatives to treat burnout-related stress is effective, but does nothing to change the sources of stress. They say the poor sleep often caused by burnout (and the subsequent fatigue) is best treated with hypnotics and CBT (within which they include "sleep hygiene, education, relaxation training, stimulus control, and cognitive therapy"). They advise against the use of antidepressants as they worsen the hypothalamic–pituitary–adrenal axis dysfunction at the core of burnout. They also believe "vitamins and minerals are crucial in addressing adrenal and HPA axis dysfunction," noting the importance of specific nutrients. Omega-3 fatty acids may be helpful. DHA supplementation may also be useful for moderating norepinephrine. 11 beta-hydroxysteroid dehydrogenase (and potentially other metabolites of liquorice root extract) may help with lowered cortisol response. Salomonsson et al. found that for workers with exhaustion disorder, CBT was better than a Return to Work Intervention (RTW-I) for reducing stress; and that people whose symptoms were primarily depression, anxiety or insomnia had less total time away from work after an RTW-I than after CBT. van Dam et al. had also earlier found that CBT was an effective treatment. Gordon Parker et al. found that the most useful treatment strategies appear to be talking to someone and seeking support, walking or other exercise, mindfulness and meditation, improving sleep, and leaving work completely or taking time off work. The Swedish national health information service 1177 notes that "It is common for treatment and rehabilitation [of exhaustion disorder] to include several of the following parts: Information and education about how stress affects the body. Counseling and education on lifestyle and on methods to reduce daily stress. It can be done individually or in a group. Treatment with CBT. Conversation with a counsellor, psychologist or occupational therapist. Physiotherapy to work with the body in different ways.
Medicines for sleep difficulties or depression." The Royal Dutch College of General Practitioners recommends a three-stage treatment process, made up of a crisis phase, a problem and solution stage, and an application stage. The Gothenburg regional government's Institute for Stress Medicine believes that "Recovery [from exhaustion disorder] is found in what is undemanding and joyful, and what that is varies greatly between individuals. Sleep and physical exercise are the basis of recovery and should be prioritized initially." According to a survey of their patients in 2018, the two most important drivers of recovery were "the sick leave itself" and "advice on physical activity." The institute's Kristina Glise (with others) has also twice detailed the institute's treatment practices in papers. Glise also wrote a series of diagnostic and treatment recommendations for doctors in February 2023. The Stressmottagningen stress clinic believes that Focused Acceptance and Commitment Therapy (F-ACT, a form of CBT) is a useful component of exhaustion disorder treatment. Their treatment includes "psychotherapy, physiotherapy, as well as occupational therapy and work-life planning." They also note that there is "still no established treatment method" for the condition. The Swiss Expert Network on Burnout in 2016 recommended mindfulness training, physical exercise, nature therapy, whole-body cryotherapy and whole-body hyperthermia. The Hogrefe Publishing Group published a book containing the treatment recommendations of Swiss doctor Barbara Hochstrasser in 2023. Despite the above recommendations, high-quality research (e.g., random allocation to experimental and control groups) has been relatively rare in secondary and tertiary interventions aimed at reducing burnout symptoms. One study suggests that cognitive behavioral therapy (CBT), which was developed to treat depression, can help some workers with symptoms of burnout, although the application of CBT to burnout in high-quality studies has been sparse. A shortcoming of CBT and other tertiary interventions is that they help to restructure the thinking of the worker/patient but do not change the adverse working conditions that give rise to the symptoms. Communication perspective In a study, Andrea Meluch examined how Communication Privacy Management can be applied to discussions about burnout across a diverse range of sectors and industries. Meluch found that discussing job burnout makes employees feel vulnerable, and that because of this feeling they apply core and catalyst privacy rule criteria to help them decide whether to disclose their job burnout. Core criteria are stable factors used to make choices about privacy rules, while catalyst criteria are circumstantial influences that can cause a change in privacy rules. Meluch found that the factors contributing to whether an employee discloses their feelings of burnout are whether they feel that others in the company share the experience of burnout, the perceived judgment towards burnout, and the severity of the burnout they are feeling. Additionally, the quality of the relationship they had with, and the level of trust they attributed to, their coworkers and supervisors affected an employee's decision to disclose information. Meluch found that employees will conceal that they are burned out because of the perceived risk and the worry about how they, and their work, will be perceived in the workplace.
van der Klink and van Dijk suggest that stress inoculation training, cognitive restructuring, graded activity and "time contingency" (progressing based on a timeline rather than the patient's comfort) are effective methods of treatment. Another study by Debbie Dougherty and Kristina Drumheller explored how organizations manage the rationality/emotionality duality in the workplace. They found that in organizations that promote norms of rationality, organization members support the rationality/emotionality duality and accept and reinforce this duality by focusing on emotions only when they cause a disruption of rational practices, and otherwise controlling their emotions. To privilege rationality over emotionality, they usually recalled emotions in instances where their work was disrupted and rarely mentioned interpersonal conflict as emotional experiences. Additionally, they would deny emotions, reframe emotions, rationally recite emotional experiences, and segment emotions "to a proper place and time". Organizational members would rationalize their emotions and emotional expression as well as take emotions out of their sense making to fit the expectation of being rational. Dougherty and Drumheller expressed how privileging only rationality, and not also emotionality, can inspire extreme emotional control that can lead to explosive forms of emotional expression such as organizational violence. They propose that organizational members need to be more aware of "the complex and necessary role of emotions", promote healthy emotional expression, and recognize that organizations are locations of both emotional and rational sense making. In their research on emotional exhaustion, Katie Kim and Yeunjae Lee studied how emotional exhaustion is affected by organizations' use of transparent communication. They found that when an employee feels emotionally exhausted, they have negative or cynical feelings towards their company and engage in negative communication behavior, such as complaining to external sources about their company. Kim and Lee note how this can affect organizations, as their employees' communication with external stakeholders can create or lose an opportunity to build or maintain the organization's reputation. Employees' communication can either support or undermine that reputation. Kim and Lee describe transparent communication as "an organization's communication to make available all legally releasable information to employees whether positive or negative in nature". It involves sustainability, accountability, and participation. Sustainability is the timely, accurate and unambiguous information provided to employees. Accountability is the organization's responsibility to provide objective and balanced information on activities and policies whether negative or positive. Participation is that stakeholders are involved in identifying the information that needs to be provided. Through this means of communication, Kim and Lee found that transparent communication provides employees with the resources they feel they lack and creates a more positive relationship with the organization. Transparent communication helped alleviate emotional exhaustion and helped employees cope with burnout symptoms. Farber's categories In 1991, Barry A.
Farber in his research on teachers proposed that there are three types of burnout: "wearout" and "brown-out", where someone gives up having had too much stress and/or too little reward; "classic/frenetic burnout", where someone works harder and harder, trying to resolve the stressful situation and/or seek suitable reward for their work; and "underchallenged burnout", where someone has low stress, but the work is unrewarding. Farber found evidence that the most idealistic teachers who enter the profession are the most likely to suffer burnout. "Underchallenged burnout" later came to be known as boreout. See also Annual leave Autistic burnout Clouding of consciousness Critique of work Depression Effects of overtime Four-day workweek Job strain Karoshi Labor rights Lived experience Occupational Depression Inventory Occupational stress Overwork Paid time off Presenteeism Right to rest and leisure Six-hour day Stress (biological) Stress management Suicide crisis Tang ping Teacher burnout Workaholic Workload Writer's block Notes References Further reading External links Human resource management Occupational stress Organizational theory Motivation Organizational behavior
Occupational burnout
Biology
9,260
548,237
https://en.wikipedia.org/wiki/Malted%20milk
Malted milk, also called malt powder or malted milk powder, is a powder made from a mixture of malted barley, wheat flour, and evaporated whole milk powder. The powder is used to add its distinctive flavor to beverages and other foods, but it is also used in baking to help dough cook properly. History London pharmacist James Horlick developed ideas for an improved, wheat- and malt-based nutritional supplement for infants. Despairing of his opportunities in the United Kingdom, Horlick joined his brother William, who had gone to Racine, Wisconsin, in the United States, to work at a relative's quarry. In 1873, the brothers formed J & W Horlicks to manufacture their brand of infant food in nearby Chicago. Ten years later, they earned a patent for a new formula enhanced with dried milk. The company originally marketed its new product as "Diastoid", but trademarked the name "malted milk" in 1887. Despite its origins as a health food for infants and invalids, malted milk found unexpected markets. Explorers appreciated its lightweight, nonperishable, nourishing qualities, and they took malted milk on treks worldwide. William Horlick became a patron of Antarctic exploration, and Admiral Richard E. Byrd named Horlick Mountains, a mountain range in Antarctica, after him. Back in the US, people began drinking Horlick's new beverage for enjoyment. James Horlick returned to England to import his American-made product and was eventually made a baronet. Malted milk became a standard offering at soda shops, and found greater popularity when mixed with ice cream in a "malt", for which malt shops were named. Uses Malted milk biscuits Malted milkshakes Malted thickshakes Malted soyabean milk Malted hot drinks, such as Horlicks and Ovaltine Malted milk balls: malted milk is used in the candy confections Whoppers (manufactured by Hershey Co.), Mighty Malts (manufactured by Necco), and Maltesers (manufactured by Mars, Inc). Malted milk is used in some bagel recipes as a substitute for non-diastatic malt powder. See also Flavored milk List of barley-based drinks Nestlé Milo References External links What is Malted milk? – TheSpruceEats Barley-based drinks Wheat-based drinks Milk-based drinks Cold drinks Milk Food ingredients Non-alcoholic drinks Food powders Products introduced in 1887
Malted milk
Technology
519
54,084,147
https://en.wikipedia.org/wiki/Gal4%20transcription%20factor
The Gal4 transcription factor is a positive regulator of gene expression of galactose-induced genes. This protein represents a large fungal family of transcription factors, the Gal4 family, which includes over 50 members in the yeast Saccharomyces cerevisiae e.g. Oaf1, Pip2, Pdr1, Pdr3, Leu3. Gal4 recognizes genes with UAS, an upstream activating sequence, and activates them. In yeast cells, the principal targets are GAL1 (galactokinase), GAL10 (UDP-glucose 4-epimerase), and GAL7 (galactose-1-phosphate uridylyltransferase), three enzymes required for galactose metabolism. This binding has also proven useful in constructing the GAL4/UAS system, a technique for controlling expression in insects. In yeast, Gal4 is by default repressed by Gal80, and activated in the presence of galactose as Gal3 binds away Gal80. Domains Two executive domains, a DNA-binding domain and an activation domain, provide the key functions of the Gal4 protein, as in most transcription factors. DNA binding The Gal4 N-terminus is a zinc finger and belongs to the Zn(2)-C6 fungal family. It forms a Zn–cysteine thiolate cluster and specifically recognizes the UAS in the GAL1 promoter. Gal4 transactivation The transactivation domain is localised to the C-terminus and belongs to the nine-amino-acid transactivation domain family, 9aaTAD, together with Oaf1, Pip2, Pdr1, Pdr3, but also p53, E2A and MLL. Regulation Galactose induces Gal4-mediated transcription, whereas glucose causes severe repression. As part of Gal4 regulation, the inhibitory protein Gal80 recognises and binds to the Gal4 region (853–874 aa). Gal80 is in turn sequestered by the regulatory protein Gal3 in a galactose-dependent manner. This allows Gal4 to work when galactose is present. Mutants The Gal4 loss-of-function mutant gal4-64 (1–852 aa, deletion of the Gal4 C-terminal 29 aa) lost both interaction with Gal80 and activation function. In the reverted mutant Gal4C-62, a sequence (QTAY N AFMN) with the 9aaTAD pattern emerged and restored the activation function of the Gal4 protein. Inactive constructs The Gal4 activation domain is inhibited by the C-terminal domain in some Gal4 constructs. Function Target Transcription The Gal4 activation function is mediated by MED15 (Gal11). The Gal4 protein also interacts with other mediators of transcription such as Tra1, TAF9, and the SAGA/MED15 complex. Proteasome Sug2, a regulatory subunit of the 26S proteasome, has a molecular and functional interaction with Gal4. Proteolytic turnover of the Gal4 transcription factor is not required for function in vivo. Native Gal4 monoubiquitination protects it from 19S-mediated destabilization under inducing conditions. Application Gal4 is broadly used in yeast two-hybrid screening to screen for, or to assay, protein-protein interactions in eukaryotic cells from yeast to human. In the GAL4/UAS system, the Gal4 protein and the Gal4 upstream activating sequence (UAS) are used to study gene expression and function in organisms such as the fruit fly. Gal4 and the inhibitory protein Gal80 have found application in a genetics technique for creating individually labeled homozygous cells called MARCM (Mosaic analysis with a repressible cell marker). See also lac operon References Further reading Gal4p on WikiGenes Transcription coregulators Galactose Nutrition Sugar substitutes Fungal models Digestive system Probiotics Osmophiles Yeasts used in brewing Leavening agents Oenology Edible fungi
Gal4 transcription factor
Biology
827
72,380,107
https://en.wikipedia.org/wiki/List%20of%20psychoactive%20substances%20and%20precursor%20chemicals%20derived%20from%20genetically%20modified%20organisms
List of various substances that are either psychoactive themselves or serve as precursors to psychoactive compounds, all sourced from genetically modified organisms (GMOs). Psychoactive substances Psychoactive substances derived from genetically modified organisms. Cocaine GMO plant: Nicotiana benthamiana (a tobacco plant) Psilocybin GMO bacteria: Escherichia coli GMO yeast: Baker’s yeast THC GMO bacteria: Zymomonas mobilis (used to produce tequila) Tropane alkaloids: Hyoscyamine and scopolamine GMO yeast: Baker’s yeast Precursor chemicals Precursor chemicals derived from genetically modified organisms. Lysergic acid (LSD precursor) GMO yeast: Baker’s yeast Thebaine (morphine precursor) GMO bacteria: E. coli See also Biodiversity and drugs List of psychoactive substances derived from artificial fungi biotransformation References Drug-related lists Genetically modified organisms Biological sources of psychoactive drugs
List of psychoactive substances and precursor chemicals derived from genetically modified organisms
Chemistry,Engineering,Biology
201
5,520,376
https://en.wikipedia.org/wiki/Shock%20factor
Shock factor is a commonly used figure of merit for estimating the amount of shock experienced by a naval target from an underwater explosion as a function of explosive charge weight, slant range, and depression angle (between vessel and charge). In the shock factor equation (Equation 1): R is the slant range in feet; W is the equivalent TNT charge weight in pounds, equal to the charge weight (lbs) multiplied by the relative effectiveness factor of the explosive; and the remaining parameter is the depression angle between the hull and warhead. The application scenario for Equation 1 is illustrated by Figure 1. The numeric result from computing the shock factor has no physical meaning, but it does provide a value that can be used to estimate the effect of an underwater blast on a vessel. Table 1 describes the effect of an explosion on a vessel for a range of shock factors.
Table 1: Shock factor table of effects
< 0.1: Very limited damage; generally considered insignificant
0.1–0.15: Lighting failures; electrical failures; some pipe leaks; pipe ruptures possible
0.15–0.20: Increased occurrence of the damage above; pipe rupture likely; machinery failures
0.2: General machinery damage
≥ 0.5: Usually considered lethal to a ship
Background The idea behind the shock factor is that an explosion close to a ship generates a shock wave that can impart sudden vertical motions to a ship's hull and internal systems. Many of the internal mechanical systems (e.g. engine coupling to prop) require precise alignment in order to operate. These vibrations upset these critical alignments and render these systems inoperative. The vibrations can also destroy lighting and electrical components, such as relays. The explosion also generates a gas bubble that undergoes expansion and contraction cycles. These cycles can introduce violent vibrations into a hull, generating structural damage, even to the point of breaking the ship's keel. In fact, this is a goal of many undersea weapon systems. The magnitude of an explosion's effects has been shown through empirical and theoretical analyses to be related to the size of the explosive charge, the distance of the charge from the target, and the angular relationship of the hull to the shock wave. References Explosives
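The defining formula (Equation 1) is not reproduced in the text above. The sketch below therefore assumes the commonly cited keel/hull shock factor form, SF = (sqrt(W) / R) * (1 + sin(theta)) / 2, with W the TNT-equivalent charge weight in pounds, R the slant range in feet and theta the depression angle; the function name and example numbers are illustrative only.

```python
import math

def shock_factor(charge_weight_lbs, rel_effectiveness, slant_range_ft, depression_angle_deg):
    """Estimate a shock factor for an underwater explosion.

    Assumes the commonly cited form SF = (sqrt(W) / R) * (1 + sin(theta)) / 2,
    where W is the TNT-equivalent charge weight (lbs), R the slant range (ft),
    and theta the depression angle between hull and warhead. The exact form of
    the article's Equation 1 is not preserved in this text, so treat this as a sketch.
    """
    w = charge_weight_lbs * rel_effectiveness          # TNT-equivalent weight
    theta = math.radians(depression_angle_deg)
    return (math.sqrt(w) / slant_range_ft) * (1.0 + math.sin(theta)) / 2.0

# Hypothetical example: 500 lb charge with relative effectiveness ~1.6,
# 100 ft slant range, detonating directly below the keel (90 degrees).
print(round(shock_factor(500, 1.6, 100, 90), 3))   # ~0.283, in the "general machinery damage" band of Table 1
```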
Shock factor
Chemistry
465
12,532,379
https://en.wikipedia.org/wiki/Paternian
Paternian or Paternianus () is the name of an Italian saint. A native of Fermo who escaped to the mountains during the persecutions of Christians by Diocletian, he was then appointed bishop of Fano by Pope Sylvester I. (Paternian is often confused with Parthenian (Parteniano), a bishop of Bologna, also commemorated on 12 July.) Life Historical details about Paternian are scarce. The Vita Sancti Paterniani can be found in a codex of the 12th century, though it dates earlier, and was written by a monk of the 10th or 11th century. But it is legendary and not reliable. Paternian was born at Fano around 275 AD. An angel told him in a vision to escape this city and hide out in a deserted place near the Metauro River. He became a hermit and the abbot of a monastery. Later, when the persecution of Christians stopped, the citizens of Fano demanded that he become their bishop. Paternian governed the city for many years. He died on 13 November, around 360 AD. Miracles were reported at his tomb and his cult spread rapidly. Veneration According to one legend, the inhabitants of Fano competed with those of Cervia for the body of the saint. Cervia would be left with a finger, while Fano would possess the rest of the saint's relics. His cult spread across Marche, Romagna, Veneto, Tuscany, Umbria, and Dalmatia. In the area known as the Camminate di Fano, there is a cave known as the Grotta di San Paterniano, which is said to have been his refuge during the Diocletian persecution. The Austrian town of Paternion takes its name from him. The name appears in documents for the first time in 1296, and its origin is derived from the fact that the area lay under the influence of the patriarchate of Aquileia. There was an old proverb from Romagna that ran: "Par San Paternian e' trema la coda a e' can." ("On St. Paternian's day, the dog's tail wags"). This Cervian proverb refers to the fact that the cold began to be felt around the saint's feast day. References Notes External links Saints of November 23: Paternian of Fermo San Paterniano Santuario di San Paterniano in Fano Church of San Paterniano presso Cammoro Christian saints in unknown century Bishops of Bologna Italian Roman Catholic saints Weather lore Ancient Christian saints Year of birth unknown Angelic visionaries
Paternian
Physics
553
34,819,006
https://en.wikipedia.org/wiki/Pyrrhic%20defeat%20theory
Pyrrhic defeat theory is the idea that those with the power to change a system benefit from the way it currently works. Origin This concept amalgamates ideas from Emile Durkheim, Karl Marx, Kai Erikson and Richard Quinney, drawn together by Jeffrey Reiman in 1979. In criminology, pyrrhic defeat theory is a way of looking at criminal justice policy. It suggests that the criminal justice system's intentions are the very opposite of common expectations; it functions the way it does in order to create a specific image of crime: one in which it is actually a threat from the poor. However, to justify the truth of the idea there must be some substance to back it up. Those with power in the social system need to fight crime only enough to ensure it stays in a prominent position in the public eye, not enough to eliminate it. A "Pyrrhic victory" is a military victory purchased at such a cost in troops and resources that it amounts to a defeat. The Pyrrhic defeat theory argues that the failure of the criminal justice system yields such benefits to those in positions of power that it amounts to a victory... From the standpoint of those with the power to make criminal justice policy in America, nothing succeeds like failure. Reiman's ideas differ slightly from Marx's. Whereas Marx suggests that the criminal justice system serves the rich by conspicuously repressing the poor, Reiman suggests that it does so instead by its failure to reduce crime. Durkheim suggests that crime is functional for society, and part of the very tapestry that holds it together. He suggests that an act is perceived as criminal because it affects a people's opinions. See also Cadmean victory Conflict theory Just-world fallacy Might makes right Parkinson's law Pyrrhic victory References Criminology Organizational behavior Cognitive inertia Injustice
Pyrrhic defeat theory
Biology
387
77,434,865
https://en.wikipedia.org/wiki/Nigel%20Quinn
Nigel William Trevelyan Quinn is a water resources engineer, earth scientist and academic who is most known for introducing the concept of real-time water quality management in the 1990s. He has been a Research Group Leader of the HydroEcological Engineering Advanced Decision Support group during his career at Berkeley National Laboratory and has held academic appointments at the University of California, Merced, University of California, Berkeley and California State University, Fresno. He has had a 38-year association with the US Bureau of Reclamation Divisions of Planning and Resource Management that is ongoing. Early life Quinn was born on December 28, 1955. He attended Milton and Churchill Schools in Zimbabwe. Subsequently, he worked for 11 months as a research technician with the Department of Conservation and Extension in Harare, Zimbabwe, developing and field-testing a tractor-mounted pyrethrum harvester and working in the laboratory on a rapid method for sediment estimation from soil erosion research plots, which was published in the Rhodesian Journal of Agricultural Research. Education and early career Quinn graduated with a BSc (Hons) in Agricultural/Irrigation Engineering from Cranfield University in 1977, performing research on the mechanics of footpath erosion, which was later published in the Journal of Environmental Management with co-authors Roy Morgan and Alan Smith. After graduation, he worked as an Irrigation Engineer for Farrow Irrigation, a subsidiary of the Tate and Lyle Corporation. In 1978, he accepted a teaching and research appointment at Iowa State University in the US, later joining the faculty as an Instructor. He graduated with an MS in Agricultural and Civil Engineering, having researched intercepted rainfall throughfall erosivity under various crop canopy architectures, suggesting the inclusion of a canopy subfactor in the Universal Soil Loss Equation; this research was published in the Journal of Agricultural Engineering in 1981. In 1981, he enrolled in a PhD program at Cornell University, serving as a General Electric Fellow with the Department of Civil and Environmental Engineering, and received a PhD in Water Resources Systems Engineering in 1987 under the mentorship of Walter Lynn. He conducted research on a systems approach to selenium drainage management in the San Joaquin Valley of California. Career In 1990, he was recruited by the Lawrence Berkeley National Laboratory and Sally Benson, who was leading her own research program on surface and groundwater selenium containment at the Kesterson Reservoir. The Rainbow Report, to which he contributed, provided a long-term solution roadmap for selenium contamination in the San Joaquin Valley, sparking a 38-year scientific research endeavor in this field. Success on an EPA-STAR grant led to his work on climate change impacts, integrating hydrologic, water quality, and economic models, resulting in several publications and an associate faculty position at UC Berkeley. In 2000, he founded the HydroEcological Engineering Advanced Decision Support Group (HEADS) and absorbed emeritus Professor Bill Oswald's research group, focusing on algae-based cultivation and bioremediation amid growing interest in algae biofuels. His technoeconomic assessment of algae biofuel potential, funded by the Energy Biosciences Institute at UC Berkeley, has been highly cited and contributed to Tryg Lundquist's prominence in algae biofuel technology. 
Contributions After the SJVDP in 1990, Quinn formed a long-term association with Alex Hildebrand (1913–2012), a farmer and CALFED Bay-Delta Advisory Committee governor appointee, sharing ideas on the concept of real-time water quality management, primarily salinity, in the San Joaquin River. He became an advocate and technical proponent of this concept, securing initial grant funding to explore it with the Department of Water Resources, Regional Water Quality Control Board, US Bureau of Reclamation, and US Geological Survey. The real-time water quality management concept was embraced by major state and federal water agencies, endorsed through California state legislation, and enshrined in the San Joaquin Basin Water Quality Control Plan. This advocacy and development resulted in over 30 research publications and book chapters. His early adoption of sensor networks and web-based information dissemination was followed by several water districts and agencies, particularly the Grassland Water District. The WARMF salinity forecasting model originated from his and his colleagues' decision to promote a watershed approach to salinity forecasting, incorporating continuous flow and salinity data into real-time forecasting, enhancing the acceptance of WARMF and similar decision support tools. Personal life Quinn has been a lifelong equestrian and polo player. He was affiliated with the Los Altos Hounds hunt, and co-managed the Wine Country Polo Club for 3 years between 2014 and 2017. Additionally, he has been a member of the US Polo Association for over 30 years and a member of the Yolo Polo Club, Sutter Buttes Polo Club, Wine Country Polo Club, Cerro Pampa Polo Club and the Tierra Tropical Polo Club in San Pancho, Mexico. He has been a member of the Manorial Society of Great Britain and acquired the ancient feudal title of Lord of the Manor of Hurstpierpoint in West Sussex, England. Awards and honors 2006 – Fellow, International Symposium for Environmental Software Systems 2007 – Diplomate, American Academy for Water Resources Engineers D.WRE 2010 – Fellow, International Environmental Modelling and Software Society 2013 – Hugo B. Fischer Award, California Water and Environmental Modeling Forum 2014 – Distinguished Service Award, California Water and Environmental Modeling Forum 2015 – Fellow, American Society of Civil Engineers 2018 – Fellow, American Society of Civil Engineers, Environmental Water Resources Institute 2020 – Life Member Award, American Society of Civil Engineers Selected articles Elwell, H. A., & Quinn, N. (1975). A rapid method for estimating the dry mass of soil from erosion research plots. Rhodesian Journal of Agricultural Research, 13, 149–154. Quinn, N.W.T., Morgan, R. P. C., & Smith, A. J. (1980). Simulation of soil erosion induced by human trampling. Journal of Environmental Management, 10, 155–165. Quinn, N. W., & Laflen, J. M. (1983). Characteristics of raindrop throughfall under corn canopy. Transactions of the ASAE, 26(5), 1445. Quinn, N., Grober, L., Kipps, J., Chen, C., & Cummings, E. (1997). Computer model improves real-time management of water quality. California Agriculture, 51(5), 14–20. Quinn, N. W. T., McGahan, J., & Delamore, M. (1998). Innovative drainage management techniques to meet monthly and annual selenium load targets. California Agriculture, 52(5), 1998. Quinn, N.W.T., & Karkoski, J. (1998). Potential for real time management of water quality in the San Joaquin Basin, California. Journal of the American Water Resources Association, 36(6). Quinn, N.
W., Miller, N. L., Dracup, J. A., Brekke, L., & Grober, L. F. (2001). An integrated modeling system for environmental impact analysis of climate variability and extreme weather events in the San Joaquin Basin, California. Advances in Environmental Research, 5(4), 309–317. Quinn, N. W. T., & Hanna, W. M. (2002). Real-time adaptive management of seasonal wetlands to improve water quality in the San Joaquin River. Adv. Environ. Res, 5(4), 309–317. Quinn, N. W., Brekke, L. D., Miller, N. L., Heinzer, T., Hidalgo, H., & Dracup, J. A. (2004). Model integration for assessing future hydroclimate impacts on water resources, agricultural production and environmental quality in the San Joaquin Basin, California. Environmental Modelling & Software, 19(3), 305–316. Quinn, N. W., & Hanna, W. M. (2003). A decision support system for adaptive real-time management of seasonal wetlands in California. Environmental Modelling & Software, 18(6), 503–511. References Hydrologists Fellows of the American Society of Civil Engineers United States Bureau of Reclamation personnel Lawrence Berkeley National Laboratory people University of California, Merced faculty University of California, Berkeley faculty California State University, Fresno faculty Cornell University alumni Iowa State University alumni Alumni of Cranfield University 1955 births Living people
Nigel Quinn
Environmental_science
1,744
11,815,360
https://en.wikipedia.org/wiki/GSD%20microscopy
Ground state depletion microscopy (GSD microscopy) is an implementation of the RESOLFT concept. The method was proposed in 1995 and experimentally demonstrated in 2007. It is the second concept to overcome the diffraction barrier in far-field optical microscopy published by Stefan Hell. Using nitrogen-vacancy centers in diamond, a resolution of up to 7.8 nm was achieved in 2009. This is far below the diffraction limit (~200 nm). Principle In GSD microscopy, fluorescent markers are used. In one condition, the marker can be freely excited from the ground state and returns to it spontaneously via emission of a fluorescence photon. However, if light of appropriate wavelength is additionally applied, the dye can be excited to a long-lived dark state, i.e. a state where no fluorescence occurs. As long as the molecule is in the long-lived dark state (e.g. a triplet state), it cannot be excited from the ground state. Switching between these two states (bright and dark) by applying light fulfills all preconditions for the RESOLFT concept and subwavelength scale imaging, and therefore images with very high resolution can be obtained. For successful implementation, GSD microscopy requires either special fluorophores with high triplet yield, or removal of oxygen by use of various mounting media such as Mowiol or Vectashield. The implementation in a microscope is very similar to stimulated emission depletion microscopy; however, it can operate with only one wavelength for excitation and depletion. Using an appropriate ring-like focal spot for the light that switches the molecules into the dark state, the fluorescence can be quenched at the outer part of the focal spot. Therefore, fluorescence still takes place only at the center of the microscope's focal spot and the spatial resolution is increased. References Microscopy
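The resolution gain described above can be illustrated with the standard RESOLFT-type scaling law, d ≈ λ / (2 NA √(1 + I/I_sat)). This expression is not stated in the article text itself and is included here only as a hedged illustration of how saturating the on/off switching pushes resolution below the diffraction limit; the numbers are illustrative.

```python
import math

def resolft_resolution(wavelength_nm, numerical_aperture, saturation_factor):
    """Approximate attainable resolution for a RESOLFT-type method such as GSD.

    Uses the square-root scaling d ~ lambda / (2 * NA * sqrt(1 + I / I_sat)).
    This scaling is an assumption added for illustration, not a formula quoted
    from the article; saturation_factor is I / I_sat of the switching light.
    """
    diffraction_limit = wavelength_nm / (2.0 * numerical_aperture)
    return diffraction_limit / math.sqrt(1.0 + saturation_factor)

# Illustration: 532 nm light, NA = 1.4 objective
print(round(resolft_resolution(532, 1.4, 0), 1))     # ~190 nm, near the diffraction limit
print(round(resolft_resolution(532, 1.4, 500), 1))   # ~8.5 nm once switching is strongly saturated
```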
GSD microscopy
Chemistry
381
2,768,426
https://en.wikipedia.org/wiki/Correct%20name
In botany, the correct name according to the International Code of Nomenclature for algae, fungi, and plants (ICN) is the one and only botanical name that is to be used for a particular taxon, when that taxon has a particular circumscription, position and rank. Determining whether a name is correct is a complex procedure. The name must be validly published, a process which is defined in no less than 16 Articles of the ICN. It must also be "legitimate", which imposes some further requirements. If there are two or more legitimate names for the same taxon (with the same circumscription, position and rank), then the correct name is the one which has priority, i.e. it was published earliest, although names may be conserved if they have been very widely used. Validly published names other than the correct name are called synonyms. Since taxonomists may disagree as to the circumscription, position or rank of a taxon, there can be more than one correct name for a particular plant. These may also be called synonyms. The correct name has only one correct spelling, which will generally be the original spelling (although certain limited corrections are allowed). Other spellings are called orthographical variants. The zoological equivalent of "correct name" is "valid name". Example Different taxonomic placements may well lead to different correct names. For example, the earliest name for the fastest growing tree in the world is Adenanthera falcataria L. The "L." stands for "Linnaeus" who first validly published the name. Adenanthera falcataria is thus one of the correct names for this plant. There are other correct names, based on different taxonomic treatments. It can be placed in the genus Albizia, as Fosberg first did. When placed in this genus, the first choice of correct name is the new genus name followed by the earlier species epithet, giving Albizia falcataria. This name cannot be used if there is already a species in the genus with this epithet, so that an illegitimate duplicate would be created. As this is not the case, the correct name for the plant in this genus is Albizia falcataria (L.) Fosberg. "Fosberg" is the authority for the transfer to the new genus; "L(innaeus)" the authority for the 'base name' (basionym) from which the new name is derived. It can also be placed in the genus Paraserianthes. Its correct name in that position is Paraserianthes falcataria (L.) I.C.Nielsen. Within the genus Paraserianthes, it is placed in section Falcataria. If the section is raised in rank to become the genus Falcataria, the correct name cannot be Falcataria falcataria, as might be expected, since under the botanical code (but not the zoological code) names with the same word as both the genus and the specific epithet (tautonyms) are forbidden. An alternative basionym must be sought or a new name created. The correct name is Falcataria falcata (L.) Greuter & R.Rankin. The four names Adenanthera falcataria, Albizia falcataria, Paraserianthes falcataria and Falcataria falcata can each be correct given different taxonomic opinions that put the plant in each of these four genera. Which is the 'right' genus is a problem for taxonomy, not nomenclature. Thus this tree species will have a different correct botanical name for different people. 
Different taxonomists may publish revisions or monographs picking a different accepted name dependent on their own circumscription of this taxon, for which the other correct names become (homotypic) synonyms; note however that there is only one correct name for a given circumscription. Prokaryotes The Prokaryotic Code inherits many concepts, including that of a "correct name", from the ICN. As with the botanical concept, different taxonomists may have different concepts of a genus, leading to different "correct names". The List of Prokaryotic names with Standing in Nomenclature (LPSN) tries to be consistent with its approach to selecting correct names. The LPSN notes that although later combinations tend to be based on better phylogenomic data, just taking "the last valid combination" is not sufficient because of possible inconsistencies in concepts. See also (specific to botany) Botanical name Botanical nomenclature International Code of Nomenclature for algae, fungi, and plants International Code of Nomenclature for Cultivated Plants International Plant Names Index International Association for Plant Taxonomy Author citation (botany) (more general) Scientific classification Binomial nomenclature Hybrid name (botany) Nomenclature Codes International Code of Zoological Nomenclature References Bibliography Botanical nomenclature
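As a loose illustration of the priority rule described above (and only of that rule; the full ICN procedure involves many more articles and exceptions), a toy sketch might select the earliest legitimate name for a single circumscription, position and rank. The names, years and fields below are hypothetical, not drawn from the article.

```python
from dataclasses import dataclass

@dataclass
class Name:
    """Toy record for a validly published botanical name (fields are illustrative)."""
    name: str
    year: int            # year of valid publication
    legitimate: bool     # meets the ICN's further requirements
    conserved: bool = False

def correct_name(candidates):
    """Pick the correct name for one circumscription, position and rank.

    A sketch of the priority rule only: a conserved name wins outright,
    otherwise the earliest legitimate name does.
    """
    legitimate = [n for n in candidates if n.legitimate]
    conserved = [n for n in legitimate if n.conserved]
    pool = conserved if conserved else legitimate
    return min(pool, key=lambda n: n.year)

# Hypothetical competing names for the same taxon in the same genus
names = [Name("Exempla posterior", 1845, True), Name("Exempla prior", 1830, True)]
print(correct_name(names).name)   # "Exempla prior": the earliest legitimate name has priority
```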
Correct name
Biology
1,020
29,596,019
https://en.wikipedia.org/wiki/Hexaminolevulinate
Hexaminolevulinate, sold under the brand name Cysview among others, is an imaging agent that lights up under blue light during a blue light cystoscopy. It is used to help detect non-muscle invasive bladder cancer (NMIBC), in particular papillary tumors and carcinoma in situ (CIS). It is made by Photocure ASA, a Norwegian pharmaceutical company. Hexaminolevulinate is a structural analogue of 5-aminolevulinic acid (a precursor to the porphyrin ring of heme), and is internalized and processed into the photoactive protoporphyrin IX at a high rate by tumor cells. After exposure to 360-450 nm light, the porphyrin will fluoresce red. References Oncology Optical imaging
Hexaminolevulinate
Chemistry
165
16,569,244
https://en.wikipedia.org/wiki/18-bit%20computing
Eighteen binary digits have 262,144 (1000000 octal, 40000 hexadecimal) distinct combinations. Eighteen bits was a common word size for smaller computers in the 1960s, when large computers, often using 36-bit words and 6-bit character sets (sometimes implemented as extensions of BCD), were the norm. 18-bit teletypes were also experimented with in the 1940s. Example computer architectures Possibly the most well-known 18-bit computer architectures are the PDP-1, PDP-4, PDP-7, PDP-9 and PDP-15 minicomputers produced by Digital Equipment Corporation from 1960 to 1975. Digital's PDP-10 used 36-bit words but had 18-bit addresses. The UNIVAC division of Remington Rand produced several 18-bit computers, including the UNIVAC 418 and several military systems. The IBM 7700 Data Acquisition System was announced by IBM on December 2, 1963. The BCL Molecular 18 was a group of systems designed and manufactured in the UK in the 1970s and 1980s. The NASA Standard Spacecraft Computer NSSC-1 was developed as a standard component for the MultiMission Modular Spacecraft at the Goddard Space Flight Center (GSFC) in 1974. The flying-spot store digital memory in the first experimental electronic switching systems used nine plates of optical memory that were read and written two bits at a time, producing a word size of 18 bits. Character encoding Eighteen-bit machines use a variety of character encodings. The DEC Radix-50, called the Radix 50₈ (octal) format, packs three characters plus two bits in each 18-bit word. The Teletype packs three characters in each 18-bit word, each character comprising a 5-bit Baudot code and an upper-case bit. The DEC SIXBIT format packs three characters in each 18-bit word, each 6-bit character obtained by stripping the high bits from the 7-bit ASCII code, which folds lowercase to uppercase letters. References DIGITAL Computing Timeline: 18-bit architecture Architectural Evolution in DEC's 18b Computers, Bob Supnik, 2006. Computer data
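As an illustration of the packing arithmetic, the sketch below stores three 6-bit character codes in one 18-bit word. The particular character mapping used (upper-casing, then subtracting 32 from the ASCII code so that ASCII 32-95 maps to 0-63) is only one of several closely related DEC SIXBIT conventions, so treat the mapping as an assumption; the bit-shifting is the point.

```python
def pack_sixbit(word3: str) -> int:
    """Pack three characters into one 18-bit word, 6 bits per character.

    The character mapping (upper-case, then ASCII code minus 32) is one common
    DEC SIXBIT convention; the historical machines used several variants.
    """
    assert len(word3) == 3
    word = 0
    for ch in word3:
        code = ord(ch.upper()) - 32
        if not 0 <= code < 64:
            raise ValueError(f"{ch!r} has no SIXBIT code")
        word = (word << 6) | code       # first character ends up in the high 6 bits
    return word

def unpack_sixbit(word: int) -> str:
    """Reverse the packing: extract three 6-bit codes and map them back to ASCII."""
    return "".join(chr(((word >> shift) & 0o77) + 32) for shift in (12, 6, 0))

w = pack_sixbit("pdp")
print(oct(w), unpack_sixbit(w))   # 0o604460 PDP
```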
18-bit computing
Technology
446
6,796,215
https://en.wikipedia.org/wiki/Trichoderma%20harzianum
Trichoderma harzianum is a fungus that is also used as a fungicide. It is used for foliar application, seed treatment and soil treatment for suppression of fungal pathogens causing various fungal plant diseases. Commercial biotechnological products such as 3Tac have been useful for treatment of Botrytis, Fusarium and Penicillium sp. It is also used for manufacturing enzymes. Taxonomy and genetics Most Trichoderma strains have no sexual stage but instead produce only asexual spores. However, for a few strains the sexual stage is known, but not among strains that have usually been considered for biocontrol purposes. The sexual stage, when found, is within the Ascomycetes in the genus Hypocrea. Traditional taxonomy was based upon differences in morphology, primarily of the asexual sporulation apparatus, but more molecular approaches are now being used. Consequently, the taxa recently have gone from nine to at least thirty-three species. Genetics Most strains are highly adapted to an asexual life cycle. In the absence of meiosis, chromosome plasticity is the norm, and different strains have different numbers and sizes of chromosomes. Most cells have numerous nuclei, with some vegetative cells possessing more than 100. Various asexual genetic factors, such as parasexual recombination, mutation and other processes contribute to variation between nuclei in a single organism (thallus). Thus, the fungi are highly adaptable and evolve rapidly. There is great diversity in the genotype and phenotype of wild strains. While wild strains are highly adaptable and may be heterokaryotic (contain nuclei of dissimilar genotype within a single organism, and hence highly variable), strains used for biocontrol in commercial agriculture are, or should be, homokaryotic (nuclei are all genetically similar or identical). This, coupled with tight control of variation through genetic drift, allows these commercial strains to be genetically distinct and nonvariable. This is an extremely important quality control item for any company wishing to commercialize these organisms. Mycoparasitism Trichoderma spp. are fungi that are present in nearly all soils. In soil, they frequently are the most prevalent culturable fungi. They also exist in many other diverse habitats. Trichoderma readily colonizes plant roots and some strains are rhizosphere competent i.e. able to grow on roots as they develop. Trichoderma spp. also attack, parasitize and otherwise gain nutrition from other fungi. They have evolved numerous mechanisms for both attack of other fungi and for enhancing plant and root growth. Different strains of Trichoderma control almost every pathogenic fungus for which control has been sought. However, most Trichoderma strains are more efficient for control of some pathogens than others, and may be largely ineffective against some fungi. Trichoderma spp. continue to be a major source of contamination and crop loss for mushroom farmers. References Bibliography Yedidia, I., Benhamou, N., and Chet, I. 1999. Induction of defense responses in cucumber plants (Cucumis sativus L.) by the biocontrol agent Trichoderma harzianum. Appl. Environ. Microbiol. 65: 1061–1070. W. Gams and W. Meyer. What Exactly Is Trichoderma harzianum? Mycologia. Vol. 90, No. 5 (Sep. - Oct., 1998), pp. 904-915 External links Index Fungorum USDA ARS Fungal Database Fungal pest control agents harzianum Fungi described in 1930 Fungus species
Trichoderma harzianum
Biology
770
71,906,761
https://en.wikipedia.org/wiki/Slip%20bands%20in%20metals
Slip bands or stretcher-strain marks are localized bands of plastic deformation in metals experiencing stresses. Formation of slip bands indicates a concentrated unidirectional slip on certain planes causing a stress concentration. Typically, slip bands induce surface steps (e.g., roughness due to persistent slip bands during fatigue) and a stress concentration which can be a crack nucleation site. Slip bands extend until they impinge on a boundary, and the stress generated by the dislocation pile-up against that boundary will either stop or transmit the operating slip, depending on the boundary's (mis)orientation. Slip bands formed under cyclic loading are referred to as persistent slip bands (PSBs), whereas those formed under monotonic loading are referred to as dislocation planar arrays (or simply slip bands; see the section Slip bands in the absence of cyclic loading). Slip bands can be viewed simply as boundary sliding due to dislocation glide, lacking the complexity of the high plastic-deformation localisation of PSBs, which is manifested by tongue- and ribbon-like extrusions. And, whereas PSBs are normally studied with the (effective) Burgers vector aligned with the extrusion plane, because a PSB extends across the grain and intensifies during fatigue, a monotonic slip band has one Burgers vector for propagation and another for plane extrusion, both controlled by the conditions at the tip. Persistent slip bands (PSBs) Persistent slip bands (PSBs) are associated with strain localisation due to fatigue in metals and cracking on the same plane. Transmission electron microscopy (TEM) and three-dimensional discrete dislocation dynamics (DDD) simulations have been used to reveal the type and arrangement/patterns of dislocations and to relate them to the sub-surface structure. The PSB ladder structure is formed mainly from low-density channels of mobile gliding screw dislocation segments and high-density walls of dipolar edge dislocation segments, piled up with tangled bowing-out edge segments and dipolar loops of different sizes scattered between the walls and channels. One type of dislocation loop forms the boundary of a completely enclosed patch of slipped material on the slip plane which terminates at the free surface. Widening of the slip band: a screw dislocation can have a high enough resolved shear stress to glide on more than one slip plane, so cross-slip can occur; this leaves some dislocation segments on the original slip plane. The dislocation can then cross-slip back onto a parallel primary slip plane, where it forms a new dislocation source, and the process can repeat. These walls in PSBs are a 'dipole dispersion' form of stable arrangement of edge dislocations with a minimal long-range stress field. This differs from slip bands, which are a planar stack of a stable array with a strong long-range stress field. Thus, at the free surface, the cutting open (elimination) of dislocation loops at the surface causes the irreversible/persistent surface step associated with slip bands. Surface relief through extrusion occurs along the Burgers vector direction, and the extrusion height and PSB depth increase with PSB thickness. The PSB and the planar walls are aligned parallel and perpendicular, respectively, to the normal direction of the critical resolved shear stress. Once dislocations saturate and reach their sessile configuration, cracks have been observed to nucleate and propagate along PSB extrusions.
To summarise, in contrast to 2D line defects, the field at the slip-band tip is due to three-dimensional interactions, where the slip band extrusion simulates a sink-like dislocation blooming along the slip band axis. The magnitude of the gradient deformation field ahead of the slip band depends on the slip height, and the mechanical conditions for propagation are influenced by the long-range field of the emitted dislocations. A surface marking, or slip band, appears at the intersection of an active slip plane and the free surface of a crystal. Slip occurs in avalanches separated in time. Avalanches from other slip systems crossing a slip plane containing an active source led to the observed stepped surface markings, with successive avalanches from the given source displaced relative to each other. Dislocations are generated on a single slip plane. They point out that a dislocation segment (Frank–Read source), lying in a slip plane and pinned at both ends, is a source of an unlimited number of dislocation loops. In this way the grouping of dislocations into an avalanche of a thousand or so loops on a single slip plane can be understood. Each dislocation loop has a stress field that opposes the applied stress in the neighbourhood of the source. When enough loops have been generated, the stress at the source will fall to a value so low that additional loops cannot form. Only after the original avalanche of loops has moved some distance away can another avalanche occur. Generation of the first avalanche at a source is easily understood. When the stress at the source reaches r*, loops are generated, and continue to be generated until the back-stress stops the avalanche. A second avalanche will not occur immediately in polycrystals, for the loops in the first avalanche are stopped or partially stopped at grain boundaries. Only if the external stress is increased substantially will a second avalanche be formed. In this way the formation of additional avalanches with rising stress can be understood. It remains to explain the displacement of successive avalanches by a small amount normal to the slip plane, thereby accounting for the observed fine structure of slip bands. A displacement of this type requires that a Frank–Read source move relative to the surface where slip bands are observed. In situ nano-compression work in transmission electron microscopy (TEM) reveals that the deformation of α-Fe at the nanoscale is an inhomogeneous process characterized by a series of short displacement bursts and intermittent large displacement bursts. The series of short bursts correspond to the collective movement of dislocations within the crystal. The large single bursts are from SBs nucleated from the specimen surface. These results suggest that the formation of SBs can be considered as a source-limited plasticity process. The initial plastic deformation is characterized by the multiplication/movement of a few dislocations over short distances due to the availability of dislocation sources within the nano-blade. Once it has reached a stage at which the mobile dislocations along preferred slip planes have moved through the nano-blade or become entangled in sessile configurations and further dislocation movement is difficult within the crystal, plasticity is carried out by the formation of SBs, which nucleate from the surface and then propagate through the nano-blade. Fisher et al.
proposed that SBs are dynamically generated from a Frank–Read source at the specimen surface and are terminated by their own stress field in single crystals. The displacement burst behaviour is similar to that reported by Kiener and Minor on compressing Cu single-crystal nanopillars. The spinodal nanostructure obviously suppressed the progress of serrated yielding (a series of short strain bursts) relative to that without the spinodal nanostructure. The results revealed that during compression deformation, the spinodal nanostructure confined the movement of dislocations (leading to a significant increase in dislocation density), causing a notable strengthening effect, and also kept the slip band morphology planar. Dislocation activity assists the growth of austenite precipitates and provides quantitative data for revealing the stress field generated by interface migration. The jerky nature of the tip moving rate is probably due to the accumulation and relaxation of the stress field near the tip. After leaving the tip, the dislocation loop expands rapidly ahead of it; thus the change in tip velocity is concomitant with dislocation emission. It indicates that the emitted dislocation is strongly repelled by the stress field present at the lath tip. When the loop meets the foil surface, it breaks into two dislocation segments that leave a visible trace, due to the presence of a thin oxide layer on the surface. The emission of a dislocation loop from the tip may also affect the tip moving rate via interaction between the local dislocation loop and the possible interfacial dislocations in the semi-coherent interface surrounding the tip; consequently, the tip halted temporarily. The net shear stress acting on each dislocation results from a combination of the stress field at the lath tip (τtip), the image stress tending to attract the dislocation loop to the surface (τimage), the line tension (τl) and the interaction stress between dislocations (τinter). This implies that the strain field due to the transformation of austenite is large enough to cause the nucleation and emission of dislocations from an austenite lath tip. Slip bands in the absence of cyclic loading While repeatedly reversed loading commonly leads to localisation of dislocation glide, creating linear extrusions and intrusions on a free surface, similar features can arise even if there is no load reversal. These arise from dislocations gliding on a particular slip plane, in a particular slip direction (within a single grain), under an external load. Steps can be created on the free surface as a consequence of the tendency for dislocations to follow one another along a glide path, of which there may be several in parallel with each other in the grain concerned. Prior passage of dislocations apparently makes glide easier for subsequent ones, and the effect may also be associated with dislocation sources, such as a Frank-Read source, acting in particular planes. The appearance of such bands, which are sometimes termed "persistent slip lines", is similar to that of those arising from cyclic loading, but the resultant steps are usually more localised and have lower heights. They also reveal the grain structure. They can often be seen on free surfaces that were polished before the deformation took place. For example, the figure shows micrographs (taken with different magnifications) of the region around an indent created in a copper sample with a spherical indenter.
The parallel lines within individual grains are each the result of several hundred dislocations of the same type reaching the free surface, creating steps with a height of the order of a few microns. If a single slip system was operational within a grain, then there is just one set of lines, but it is common for more than one system to be activated within a grain (particularly when the strain is relatively high), leading to two or more sets of parallel lines. Other features indicative of the details of how the plastic deformation took place, such as a region of cooperative shear caused by deformation twinning, can also sometimes be seen on such surfaces. In the optical micrograph shown, there is also evidence of grain rotations – for example, at the “rim” of the indent and in the form of depressions at grain boundaries. Such images can thus be very informative. Nature of the non-cyclic slip band local field The deformation field at the slip-band is due to three-dimensional elastic and plastic strains where the concentrated shear of the slip band tip deforms the grain in its vicinity. The elastic strains describe the stress concentration ahead of the slip band, which is important as it can affect the transfer of plastic deformation across grain boundaries. An understanding of this is needed to support the study of yield and inter/intra-granular fracture. The concentrated shear of slip bands can also nucleate cracks in the plane of the slip band, and persistent slip bands that lead to intragranular fatigue crack initiation and growth may also form under cyclic loading conditions. To properly characterise slip bands and validate mechanistic models for their interactions with microstructure, it is crucial to quantify the local deformation fields associated with their propagation. However, little attention has been given to slip bands within grains (i.e., in the absence of grain boundary interaction). The long-range stress field (i.e., the elastic strain field) around the tip of a stress concentrator, such as a slip band, can be considered a singularity equivalent to that of a crack. This singularity can be quantified using a path independent integral since it satisfies the conservation laws of elasticity. The conservation laws of elasticity related to translational, rotational, and scaling symmetries were derived initially by Knowles and Sternberg from the Noether's theorem. Budiansky and Rice introduced the J-, M-, L-integral and were the first to give them a physical interpretation as the strain energy-release rates for mechanisms such as cavity propagation, simultaneous uniform expansion, and defect rotation, respectively. When evaluated over a surface that encloses a defect, these conservation integrals represent a configurational force on the defect. That work paved the way for the field of Configurational mechanics of materials, with the path-independent J-integral now widely used to analyse the configurational forces in problems as diverse as dislocation dynamics, misfitting inclusions, propagation of cracks, shear deformation of clays, and co-planar dislocation nucleation from shear loaded cracks. The integrals have been applied to linear elastic and elastic-plastic materials and have been coupled with processes such as thermal and electrochemical loading, and internal tractions. 
Recently, experimental fracture mechanics studies have used full-field in situ measurements of displacements and elastic strains to evaluate the local deformation field surrounding the crack tip as a J-integral. Slip bands form due to plastic deformation, and the analysis of the force on a dislocation considers the two-dimensional nature of the dislocation line defect. General definitions of the Peach–Koehler configurational force $P_{kj}$ (or the elastic energy-momentum tensor) on a dislocation in an arbitrary $x_1, x_2, x_3$ coordinate system decompose the Burgers vector ($b$) into orthogonal components. This leads to the generalised definition of the J-integral in the equations below. For a dislocation pile-up, the J-integral is the summation of the Peach–Koehler configurational forces of the dislocations in the pile-up (including the out-of-plane component $b_3$):
$$J_k = \int_S P_{kj}\, n_j \, dS = \int_S \left( W_s\, n_k - T_i\, u_{i,k} \right) dS, \qquad J_k^{x} = R_{kj}\, J_j, \qquad i,j,k = 1,2,3$$
where $S$ is an arbitrary contour around the dislocation pile-up with unit outward normal $n_i$, $W_s$ is the strain energy density, $T_i = \sigma_{ij} n_j$ is the traction on $dS$, $u_i$ are the displacement vector components, $J_k^{x}$ is the $J$-integral evaluated along the $x_k$ direction, and $R_{kj}$ is a second-order mapping tensor that maps $J_k$ into the $x_k$ direction. This vectorial $J_k$-integral leads to numerical difficulties in the analysis since $J_2$ and, for a three-dimensional slip band or inclined crack, $J_3$ cannot be neglected. See also Deformation twinning Lüders band References Further reading Materials science Materials degradation Mechanical failure modes Structural analysis Solid mechanics Crystallography Metallurgy
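As a rough numerical illustration of the summation just described, the sketch below evaluates the Peach–Koehler force per unit length, f = (σ·b) × ξ, for each dislocation in a pile-up and sums the results. Treating every dislocation as sitting in the same uniform stress field is an illustrative simplification, not part of the cited analyses, and the numerical values are hypothetical.

```python
import numpy as np

def peach_koehler(stress, burgers, line_dir):
    """Peach-Koehler force per unit length on a dislocation: f = (sigma . b) x xi.

    stress: 3x3 stress tensor (Pa); burgers: Burgers vector (m);
    line_dir: unit vector along the dislocation line.
    """
    return np.cross(stress @ burgers, line_dir)

def pileup_configurational_force(stress_fields, burgers, line_dir):
    """Sum of Peach-Koehler forces over the dislocations in a pile-up.

    A loose numerical stand-in for the J_k summation quoted above; the
    uniform-stress inputs used in the example are an assumption.
    """
    return sum(peach_koehler(s, burgers, line_dir) for s in stress_fields)

# Illustration: 10 edge dislocations in a uniform 100 MPa shear field
sigma = np.array([[0.0, 100e6, 0.0], [100e6, 0.0, 0.0], [0.0, 0.0, 0.0]])  # sigma_xy = 100 MPa
b = np.array([2.5e-10, 0.0, 0.0])                                          # |b| ~ 0.25 nm along x
xi = np.array([0.0, 0.0, 1.0])                                             # line direction along z
print(pileup_configurational_force([sigma] * 10, b, xi))                   # summed N per metre of line
```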
Slip bands in metals
Physics,Chemistry,Materials_science,Technology,Engineering
3,116
66,549,427
https://en.wikipedia.org/wiki/AB%20Bo%C3%B6tis
AB Boötis, also known as Nova Boötis 1877 and occasionally Nova Comae Berenices 1877, is an object that may have undergone a nova outburst in 1877. It was discovered by Friedrich Schwab at Technische Universität Ilmenau in 1877. He reported observing the star as a 5th magnitude object, visible to the naked eye, on 14 nights during the period from 30 May 1877 through 14 July 1877. During that time interval, his estimate of the star's brightness only changed by 0.42 magnitudes. The star was lost (Schwab noticed its absence on January 9, 1878), and despite several searches in subsequent years, no other 19th century observations of the nova were reported. Downes et al. estimate that Schwab's reported coordinates for the star may have had a precision no better than 1/2 degree. In 1971, A. Sh. Khatisov suggested that the star Schwab saw was BD +21°2606 (whose visual magnitude is 10.64 in the Tycho-2 Catalogue, roughly 100 times fainter than the object Schwab reported), but that identification may be incorrect. In 1988 Downes and Szkody imaged the area around AB Boötis' reported position, to try to identify the nova in its quiescent state based on its color, but their search was unsuccessful. In 2000, Liu et al. published a spectrum of AB Boötis, which they describe as a cataclysmic variable (a class which includes novae), but they did not publish the coordinates of the star they examined, so exactly which star they observed is unclear. In 2020, Hoffmann and Vogt suggested that AB Boötis might be a re-appearance of a guest star that Chinese astronomers saw near Arcturus in 203 BCE. References Novae Boötes 1877 in science Boötis, AB 18770530
AB Boötis
Astronomy
394
58,017,976
https://en.wikipedia.org/wiki/Asteroid%20impact%20prediction
Asteroid impact prediction is the prediction of the dates and times of asteroids impacting Earth, along with the locations and severities of the impacts. The process of impact prediction follows three major steps: discovery of an asteroid and initial assessment of its orbit, which is generally based on a short observation arc of less than 2 weeks; follow-up observations to improve the orbit determination; and calculating if, when and where the orbit may intersect with Earth at some point in the future. The usual purpose of predicting an impact is to direct an appropriate response. Most asteroids are discovered by a camera on a telescope with a wide field of view. Image differencing software compares a recent image with earlier ones of the same part of the sky, detecting objects that have moved, brightened, or appeared. Those systems usually obtain a few observations per night, which can be linked up into a very preliminary orbit determination. This predicts approximate positions over the next few nights, and follow-ups can then be carried out by any telescope powerful enough to see the newly detected object. Orbit intersection calculations are then carried out by two independent systems, one (Sentry) run by NASA and the other (NEODyS) by ESA. Current systems only detect an arriving object when several factors are just right, mainly the direction of approach relative to the Sun, the weather, and the phase of the Moon. The overall success rate is around 1% and is lower for the smaller objects. A few near misses by medium-size asteroids have been predicted years in advance, with a tiny chance of striking Earth, and a handful of small impactors have successfully been detected hours in advance. All of the latter struck wilderness or ocean, and hurt no one. The majority of impacts are by small, undiscovered objects. They rarely hit a populated area, but can cause widespread damage when they do. Performance is improving in detecting smaller objects as existing systems are upgraded and new ones come on line, but all current systems have a blind spot around the Sun that can only be overcome by a dedicated space-based system or by discovering objects on a previous approach to Earth many years before a potential impact. History In 1992 a report to NASA recommended a coordinated survey (christened Spaceguard) to discover, verify and provide follow-up observations for Earth-crossing asteroids. This survey was scaled to discover 90% of all objects larger than one kilometer within 25 years. Three years later, a further NASA report recommended search surveys that would discover 60–70% of the short-period, near-Earth objects larger than one kilometer within ten years and obtain 90% completeness within five more years. In 1998, NASA formally embraced the goal of finding and cataloging, by 2008, 90% of all near-Earth objects (NEOs) with diameters of 1 km or larger that could represent a collision risk to Earth. The 1 km diameter metric was chosen after considerable study indicated that an impact of an object smaller than 1 km could cause significant local or regional damage but is unlikely to cause a worldwide catastrophe. The impact of an object much larger than 1 km diameter could well result in worldwide damage up to, and potentially including, extinction of the human race.
The NASA commitment has resulted in the funding of a number of NEO search efforts, which made considerable progress toward the 90% goal by the target date of 2008 and also produced the first ever successful prediction of an asteroid impact (a 4-meter asteroid was detected 19 hours before impact). However, the 2009 discovery of several NEOs approximately 2 to 3 kilometers in diameter demonstrated there were still large objects to be detected. Three years later, in 2012, the 40 meter diameter asteroid 367943 Duende was discovered and successfully predicted to be on a close but non-colliding approach to Earth again just 11 months later. This was a landmark prediction for so small an object, and it was closely monitored as a result. On the day of its closest approach and by coincidence, a smaller asteroid was also approaching Earth, unpredicted and undetected, from a direction close to the Sun. Unlike 367943 Duende it was on a collision course, and it impacted Earth 16 hours before 367943 Duende passed, becoming the Chelyabinsk meteor. It injured 1,500 people and damaged over 7,000 buildings, raising the profile of the dangers of even small asteroid impacts if they occur over populated areas. The asteroid is estimated to have been 17 m across. In April 2018, the B612 Foundation stated "It's 100 per cent certain we'll be hit [by a devastating asteroid], but we're not 100 per cent sure when." Also in 2018, physicist Stephen Hawking, in his final book Brief Answers to the Big Questions, considered an asteroid collision to be the biggest threat to the planet. In June 2018, the US National Science and Technology Council warned that America is unprepared for an asteroid impact event, and has developed and released the National Near-Earth Object Preparedness Strategy Action Plan to better prepare. Discovery of near-Earth asteroids The first step in predicting impacts is detecting asteroids and determining their orbits. Finding faint near-Earth objects against the much more numerous background stars is very much a needle-in-a-haystack search. It is achieved by sky surveys that are designed to discover near-Earth asteroids. Unlike the majority of telescopes, which have a narrow field of view and high magnification, survey telescopes have a wide field of view to scan the entire sky in a reasonable amount of time with enough sensitivity to pick up the faint near-Earth objects they are searching for. NEO-focused surveys revisit the same area of sky several times in succession. Movement can then be detected using image differencing techniques. Anything that moves from image to image against the background of stars is compared to a catalogue of all known objects, and, if it is not already known, it is reported as a new discovery along with its precise position and the observation time. This then allows other observers to confirm and add to the data about the newly discovered object. Cataloging vs warning surveys Asteroid surveys can be broadly classified as either cataloging surveys, which use larger telescopes to mostly identify larger asteroids well before they come notably close to Earth, or warning surveys, which use smaller telescopes to mostly look for smaller asteroids within several million kilometers of Earth. Cataloging systems focus on finding larger asteroids years in advance and they scan the sky slowly (of the order of once per month), but deeply. Warning systems focus on scanning the sky relatively quickly (of the order of once per night).
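Both kinds of survey rely on the image differencing step just described to pick out candidate movers. A minimal sketch of the idea is given below; it uses toy arrays rather than real survey frames, and a real pipeline would also align and PSF-match the images, link detections across several exposures, and check candidates against the known-object catalogue.

import numpy as np

# Toy illustration of image differencing: subtract a reference exposure of the
# same sky field from a new exposure and flag pixels that brightened strongly,
# which is where a moving (or new) object may have appeared.
rng = np.random.default_rng(0)

reference = rng.normal(100.0, 5.0, size=(64, 64))    # earlier exposure: background + noise
new_image = reference + rng.normal(0.0, 5.0, size=(64, 64))
new_image[40, 17] += 60.0                            # inject a "new" point source

difference = new_image - reference
threshold = 5.0 * difference.std()                   # simple 5-sigma detection threshold
candidates = np.argwhere(difference > threshold)

for row, col in candidates:
    print(f"candidate moving object at pixel ({row}, {col})")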
Warning systems typically cannot detect objects that are as faint as cataloging systems can, but they will not miss an asteroid that dramatically brightens for just a few days when it passes very close to Earth. Some systems compromise and scan the sky approximately once per week. Cataloging systems For larger asteroids (> 100 m to 1 km across), prediction is based on cataloging the asteroid, years to centuries before it could impact. This technique is possible as their size makes them bright enough to be seen from a long distance. Their orbits can therefore be measured and any future impacts predicted long before they are on an impact approach to Earth. This long period of warning is important as an impact from a 1 km object would cause worldwide damage and a minimum of around a decade of lead time would be needed to deflect it away from Earth. As of 2018, the inventory is nearly complete for the kilometer-size objects (around 900) which would cause global damage, and approximately one third complete for 140 meter objects (around 8500) which would cause major regional damage. The effectiveness of the cataloging is somewhat limited by the fact that some proportion of the objects have been lost since their discovery, due to insufficient observations to accurately determine their orbits. Warning systems Smaller near-Earth objects number in the millions and therefore impact Earth much more often, though obviously with much less damage. The vast majority remain undiscovered. They seldom pass close enough to Earth that they become bright enough to observe, and so most can only be observed when within a few million kilometers of Earth. They therefore cannot usually be catalogued well in advance and can only be warned about a few weeks to days in advance. Current mechanisms for detecting asteroids on approach rely on ground-based visible-light telescopes with wide fields of view. Those currently can monitor the sky at most every night, and therefore miss most of the smaller asteroids which are bright enough to detect for less than a day. Such very small asteroids much more commonly impact Earth than larger ones, but they do little damage. Missing them therefore has limited consequences. Much more importantly, ground-based telescopes are blind to most of the asteroids which impact the day side of the planet and will miss even large ones. These and other problems mean very few impacts are successfully predicted (see §Effectiveness of the current system and §Improving impact prediction). Asteroids detected by warning systems are much too close to their time of potential impact to deflect them away from Earth, but there is still enough time to mitigate the consequences of the impact by evacuating and otherwise preparing the affected area. Warning systems can also detect asteroids which have been successfully catalogued as existing, but whose orbit was insufficiently well determined to allow a prediction of where they are now. Surveys The main NEO-focussed surveys are listed below, along with future telescopes that are already funded. Originally all the surveys were clustered together in a relatively small part of the Northern Hemisphere. This meant that around 15% of the sky at extreme Southern declination was never monitored, and that the rest of the Southern sky was observed over a shorter season than the Northern sky. Moreover, as the hours of darkness are fewer in summertime, the lack of a balance of surveys between North and South meant that the sky was scanned less often in the Northern summer.
The ATLAS telescopes operating at the South African Astronomical Observatory and at the El Sauce observatory in Chile now cover this gap in the southern sky. Once it is completed, the Large Synoptic Survey Telescope will improve the existing coverage of the southern sky. The 3.5 m Space Surveillance Telescope, which was originally also in the southwest United States, was dismantled and moved to Western Australia in 2017, which should also improve global coverage. Construction was delayed because the new site is in a cyclone region, but was completed in September 2022. ATLAS ATLAS, the "Asteroid Terrestrial-impact Last Alert System", uses four 0.5-metre telescopes. Two are located on the Hawaiian Islands, at Haleakala and Mauna Loa, one at the South African Astronomical Observatory, and one in Chile. With a field of view of 30 square degrees each, the telescopes survey the observable sky down to apparent magnitude 19 with 4 exposures every night. The survey has been operational with the two Hawaii telescopes since 2017, and in 2018 obtained NASA funding for two additional telescopes sited in the Southern hemisphere. They were expected to take 18 months to build. Their southern locations provide coverage of the 15% of the sky that cannot be observed from Hawaii, and combined with the Northern hemisphere telescopes give non-stop coverage of the equatorial night sky (the South African location is not only in the opposite hemisphere to Hawaii, but also at an opposing longitude). The full ATLAS concept consists of eight of its 50-centimeter diameter f/2 Wright-Schmidt telescopes, spread over the globe for 24h/24h coverage of the full night sky. Catalina Sky Survey (including Mount Lemmon Survey) In 1998, the Catalina Sky Survey (CSS) took over from Spacewatch in surveying the sky for the University of Arizona. It uses two telescopes, a 1.5 m Cassegrain reflector telescope on the peak of Mount Lemmon (also known as a survey in its own right, the Mount Lemmon Survey), and a 0.7 m Schmidt telescope near Mount Bigelow (both in the Tucson, Arizona area in the southwest of the United States). Both sites use identical cameras which provide a field of view of 5 square degrees on the 1.5 m telescope and 19 square degrees on the Catalina Schmidt. The Cassegrain reflector telescope takes three to four weeks to survey the entire sky, detecting objects fainter than apparent magnitude 21.5. The 0.7 m telescope takes a week to complete a survey of the sky, detecting objects fainter than apparent magnitude 19. This combination of telescopes, one slow and one medium, has so far detected more near-Earth objects than any other single survey. This shows the need for a combination of different types of telescopes. CSS used to include a telescope in the Southern Hemisphere, the Siding Spring Survey. However, operations ended in 2013 after funding was discontinued. Kiso Observatory (Tomo-e Gozen) The Kiso Observatory uses a 1.05 m Schmidt telescope on Mt. Ontake near Tokyo in Japan. In late 2019 the Kiso Observatory added a new instrument to the telescope, "Tomo-e Gozen", designed to detect fast-moving and rapidly changing objects. It has a wide field of view (20 square degrees) and scans the sky in just 2 hours, far faster than any other survey as of 2021. This puts it squarely in the warning survey category.
In order to scan the sky so quickly, the camera captures 2 frames per second, which means the sensitivity is lower than other metre class telescopes (which have much longer exposure times), giving a limiting magnitude of just 18. However, despite not being able to see dimmer objects which are detectable by other surveys, the ability to scan the entire sky several times per night allows it to spot fast moving asteroids that other surveys miss. It has discovered a significant number of near-Earth asteroids as a result (for example see List of asteroid close approaches to Earth in 2021). Large Synoptic Survey Telescope The Large Synoptic Survey Telescope (LSST) is a wide-field survey reflecting telescope with an 8.4 meter primary mirror, currently under construction on Cerro Pachón in Chile. It will survey the entire available sky around every three nights. Science operations are due to begin in 2022. Scanning the sky relatively fast but also being able to detect objects down to apparent magnitude 27, it should be good at detecting nearby fast moving objects as well as excellent for larger slower objects that are currently further away. Near-Earth Object Surveillance Mission A planned space-based 0.5m infrared telescope designed to survey the Solar System for potentially hazardous asteroids. The telescope will use a passive cooling system, and so unlike its predecessor NEOWISE, it will not suffer from a performance degradation due to running out of coolant. It does still have a limited mission duration however as it needs to use propellant for orbital station keeping in order to maintain its position at SEL1. From here, the mission will search for asteroids hidden from Earth based satellites by the Sun's glare. It is planned for launch in 2026. NEO Survey Telescope The Near Earth Object Survey TELescope (NEOSTEL) is an ESA funded project, starting with an initial prototype currently under construction. The telescope is of a new "fly-eye" design that combines a single reflector with multiple sets of optics and CCDs, giving a very wide field of view (around 45 square degrees). When complete it will have the widest field of view of any telescope and will be able to survey the majority of the visible sky in a single night. If the initial prototype is successful, three more telescopes are planned for installation around the globe. Because of the novel design, the size of the primary mirror is not directly comparable to more conventional telescopes, but is equivalent to a conventional 1–metre telescope. The telescope itself should be complete by end of 2019, and installation on Mount Mufara, Sicily should be complete in 2020 but was pushed back to 2022. NEOWISE The Wide-field Infrared Survey Explorer is a 0.4 m infrared-wavelength space telescope launched in December 2009, and placed in hibernation in February 2011. It was re-activated in 2013 specifically to search for near-Earth objects under the NEOWISE mission. By this stage, the spacecraft's cryogenic coolant had been depleted and so only two of the spacecraft's four sensors could be used. Whilst this has still led to new discoveries of asteroids not previously seen from ground-based telescopes, the productivity has dropped significantly. In its peak year when all four sensors were operational, WISE made 2.28 million asteroid observations. In recent years, with no cryogen, NEOWISE typically makes approximately 0.15 million asteroid observations annually. 
The next generation of infrared space telescopes has been designed so that they do not need cryogenic cooling. Pan-STARRS Pan-STARRS, the "Panoramic Survey Telescope And Rapid Response System", currently (2018) consists of two 1.8 m Ritchey–Chrétien telescopes located at Haleakala in Hawaii. It has discovered a large number of new asteroids, comets, variable stars, supernovae and other celestial objects. Its primary mission is now to detect near-Earth objects that threaten impact events, and it is expected to create a database of all objects visible from Hawaii (three-quarters of the entire sky) down to apparent magnitude 24. The Pan-STARRS NEO survey searches all the sky north of declination −47.5. It takes three to four weeks to survey the entire sky. Space Surveillance Telescope The Space Surveillance Telescope (SST) is a 3.5 m telescope that detects, tracks, and can discern small, obscure objects, in deep space with a wide field of view system. The SST mount uses an advanced servo-control technology, that makes it one of the quickest and most agile telescopes of its size. It has a field of view of 6 square degrees and can scan the visible sky in 6 clear nights down to apparent magnitude 20.5. Its primary mission is tracking orbital debris. This task is similar to that of spotting near-Earth asteroids and so it is capable of both. The SST was initially deployed for testing and evaluation at the White Sands Missile Range in New Mexico. On 6 December 2013, it was announced that the telescope system would be moved to the Naval Communication Station Harold E. Holt in Exmouth, Western Australia. The SST was moved to Australia in 2017, captured first light in 2020 and after a two and a half year testing programme became operational in September 2022. Spacewatch Spacewatch was an early sky survey focussed on finding near Earth asteroids, founded in 1980. It was the first to use CCD image sensors to search for them, and the first to develop software to detect moving objects automatically in real-time. This led to a huge increase in productivity. Before 1990 a few hundred observations were made each year. After automation, annual productivity jumped by a factor of 100 leading to tens of thousands of observations per year. This paved the way for the surveys we have today. Although the survey is still in operation, in 1998 it was superseded by Catalina Sky Survey. Since then it has focused on following up on discoveries by other surveys, rather than making new discoveries itself. In particular it aims to prevent high priority PHOs from being lost after their discovery. The survey telescopes are 1.8 m and 0.9 m. The two follow-up telescopes are 2.3 m and 4 m. Zwicky Transient Facility The Zwicky Transient Facility (ZTF) was commissioned in 2018, superseding the Intermediate Palomar Transient Factory (2009–2017). It is designed to detect transient objects that rapidly change in brightness, for example supernovae, gamma ray bursts, collisions between two neutron stars, as well as moving objects such as comets and asteroids. The ZTF is a 1.2 m telescope that has a field of view of 47 square degrees, designed to image the entire northern sky in three nights and scan the plane of the Milky Way twice each night to a limiting magnitude of 20.5. The amount of data produced by ZTF is expected to be 10 times larger than its predecessor. Follow-up observations Once a new asteroid has been discovered and reported, other observers can confirm the finding and help define the orbit of the newly discovered object. 
The International Astronomical Union Minor Planet Center (MPC) acts as the global clearing house for information on asteroid orbits. It publishes lists of new discoveries that need verifying and still have uncertain orbits, and it collects the resulting follow-up observations from around the world. Unlike the initial discovery, which typically requires unusual and expensive wide-field telescopes, ordinary telescopes can be used to confirm the object as its position is now approximately known. There are far more of these around the globe, and even a well equipped amateur astronomer can contribute valuable follow-up observations of moderately bright asteroids. For example, the Great Shefford Observatory in the back garden of amateur Peter Birtwhistle typically submits thousands of observations to the Minor Planet Center every year. Nonetheless, some surveys (for example CSS and Spacewatch) have their own dedicated follow-up telescopes. Follow-up observations are important because once a sky survey has reported a discovery it may not return to observe the object again for days or weeks. By this time it may be too faint for it to detect, and in danger of becoming a lost asteroid. The more observations and the longer the observation arc, the greater the accuracy of the orbit model. This is important for two reasons: for imminent impacts it helps to make a better prediction of where the impact will occur and whether there is any danger of hitting a populated area. for asteroids that will miss Earth this time round, the more accurate the orbit model is, the further into the future its position can be predicted. This allows recovery of the asteroid on its subsequent approaches, and impacts to be predicted years in advance. Estimating size and impact severity Assessing the size of the asteroid is important for predicting the severity of the impact, and therefore the actions that need to be taken (if any). With just observations of reflected visible light by a conventional telescope, the object could be anything from 50% to 200% of the estimated diameter, and therefore anything from one-eighth to eight times the estimated volume and mass. Because of this, one key follow-up observation is to measure the asteroid in the thermal infrared spectrum (long-wavelength infrared), using an infrared telescope. The amount of thermal radiation given off by an asteroid together with the amount of reflected visible light allows a much more accurate assessment of its size than just how bright it appears in the visible spectrum. Jointly using thermal infrared and visible measurements, a thermal model of the asteroid can estimate its size to within about 10% of the true size. One example of such a follow-up observation was for 3671 Dionysus by UKIRT, the world's largest infrared telescope at the time (1997). A second example was the 2013 ESA Herschel Space Observatory follow-up observations of 99942 Apophis, which showed it was 20% larger and 75% more massive than previously estimated. However such follow-ups are rare. The size estimates of most near-Earth asteroids are based on visible light only. If the object was discovered by an infrared survey telescope initially, then an accurate size estimate will become available with visible light follow-up, and infrared follow-up will not be needed. However, none of the ground-based survey telescopes listed above operate at thermal infrared wavelengths. The NEOWISE satellite had two thermal infrared sensors but they stopped working when the cryogen ran out. 
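The scale of that uncertainty follows directly from the standard conversion between absolute magnitude H and diameter, which requires an assumed albedo. The sketch below uses an illustrative H value and a plausible albedo range; both are assumptions for illustration, not data for any particular object.

import math

# Standard absolute-magnitude-to-diameter relation for asteroids:
#     D [km] = 1329 / sqrt(p_V) * 10 ** (-H / 5)
# where p_V is the (usually unknown) geometric albedo.
def diameter_km(abs_magnitude, albedo):
    return 1329.0 / math.sqrt(albedo) * 10 ** (-abs_magnitude / 5.0)

H = 22.0  # hypothetical object, roughly at the brightness threshold for a PHA
for p_v in (0.05, 0.15, 0.25):  # dark carbonaceous .. moderately bright stony surface
    print(f"albedo {p_v:.2f}: diameter ~{diameter_km(H, p_v) * 1000:.0f} m")
# The darkest plausible surface gives a diameter more than twice that of the
# brightest one, i.e. roughly an order-of-magnitude spread in volume and mass.

A thermal infrared measurement pins down the albedo and collapses this spread to the roughly 10% figure quoted above.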
With NEOWISE's cooled infrared sensors no longer operating, there are currently no active thermal infrared sky surveys focused on discovering near-Earth objects. There are plans for a new space-based thermal infrared survey telescope, the Near-Earth Object Surveillance Mission, due to launch in 2025. Impact calculation Minimum orbit intersection distance The minimum orbit intersection distance (MOID) between an asteroid and the Earth is the distance between the closest points of their orbits. This first check is a coarse measure that does not allow an impact prediction to be made, but is based solely on the orbit parameters and gives an initial measure of how close to Earth the asteroid could come. If the MOID is large then the two objects never come near each other. In this case, unless the orbit of the asteroid is perturbed so that the MOID is reduced at some point in the future, it will never impact Earth and can be ignored. However, if the MOID is small then it is necessary to carry out more detailed calculations to determine if an impact will happen in the future. Asteroids with a MOID of less than 0.05 AU and an absolute magnitude brighter than 22 are categorized as potentially hazardous asteroids. Projecting into the future Once the initial orbit is known, the potential positions can be forecast years into the future and compared to the future position of Earth. If the distance between the asteroid and the centre of the Earth is less than the Earth's radius, then a potential impact is predicted. To take account of the uncertainties in the orbit of the asteroid, many future projections (simulations) are made with slightly different parameters within the range of the uncertainty. This allows a percentage chance of impact to be estimated. For example, if 1,000 simulations are carried out and 73 result in an impact, then the prediction would be a 7.3% chance of impact. NEODyS NEODyS (Near Earth Objects Dynamic Site) is a European Space Agency service that provides information on near-Earth objects. It is based on a continually and (almost) automatically maintained database of near-Earth asteroid orbits. The site provides a number of services to the NEO community. The main service is an impact monitoring system (CLOMON2) of all near-Earth asteroids covering a period until the year 2100. The NEODyS website includes a Risk Page where all NEOs with probabilities of hitting the Earth greater than 10⁻¹¹ from now until 2100 are shown in a risk list. In the table of the risk list the NEOs are divided into: "special", as was the case of (99942) Apophis; "observable", objects which are presently observable and which critically need a follow-up in order to improve their orbit; "possible recovery", objects which are not visible at present, but which are possible to recover in the near future; "lost", objects which have an absolute magnitude (H) brighter than 25 but which are virtually lost, their orbit being too uncertain; and "small", objects with an absolute magnitude fainter than 25; even when those are "lost", they are considered too small to result in heavy damage on the ground (though the Chelyabinsk meteor would have been fainter than this). Each object has its own impactor table (IT) which shows many parameters useful to determine the risk assessment. Sentry prediction system NASA's Sentry System continually scans the MPC catalog of known asteroids, analyzing their orbits for any possible future impacts. Like ESA's NEODyS, it gives a list of possible future impacts, along with the probability of each.
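Those probabilities come from the orbit-sampling procedure described above under "Projecting into the future". A minimal sketch of the idea follows; the encounter numbers are made up, and the full orbit propagation is replaced by a single Gaussian-distributed miss distance purely for illustration.

import numpy as np

# Draw many "virtual asteroids" consistent with the orbit uncertainty and count
# the fraction whose miss distance at the encounter is less than one Earth radius.
rng = np.random.default_rng(seed=1)

EARTH_RADIUS_KM = 6371.0
N_SAMPLES = 100_000

nominal_miss_km = 15_000.0   # assumed nominal miss distance from the fitted orbit
sigma_km = 20_000.0          # assumed 1-sigma uncertainty from a short observation arc

miss_distances = nominal_miss_km + sigma_km * rng.standard_normal(N_SAMPLES)
impact_probability = np.mean(np.abs(miss_distances) < EARTH_RADIUS_KM)

print(f"estimated impact probability: {impact_probability:.2%}")
# As further observations shrink sigma_km, this estimate either collapses towards
# zero (the Earth falls outside the error region) or, rarely, climbs towards one.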
Sentry uses a slightly different algorithm from NEODyS, and so provides a useful cross-check and corroboration. Currently, no impacts are predicted (the single highest-probability impact currently listed is for a roughly 7 m asteroid due to pass Earth in September 2095 with only a 10% predicted chance of impacting; its size is also small enough that any damage from an impact would be minimal). Impact probability calculation pattern The ellipses in the diagram on the right show the predicted position of an example asteroid at closest Earth approach. At first, with only a few asteroid observations, the error ellipse is very large and includes the Earth. The impact prediction probability is small because the Earth covers a small fraction of the large error ellipse. (Often the error ellipse extends for tens if not hundreds of millions of kilometres.) Further observations shrink the error ellipse. If it still includes the Earth, this raises the predicted impact probability, since the fixed-size Earth now covers a larger fraction of the smaller error region. Finally, yet more observations (often radar observations, or discovery of a previous sighting of the same asteroid on much older archival images) shrink the ellipse, usually revealing that the Earth is outside the smaller error region and the impact probability is then near zero. In rare cases, the Earth remains in the ever shrinking error ellipse and the impact probability then approaches one. For asteroids that are on track to hit Earth, the predicted probability of impact never stops increasing as more observations are made. This initially very similar pattern makes it difficult to quickly differentiate between asteroids which will be millions of kilometres from Earth and those which will hit it. This in turn makes it difficult to decide when to raise an alarm, as gaining more certainty takes time, which reduces the time available to react to a predicted impact. However, raising the alarm too soon has the danger of causing a false alarm and creating a Boy Who Cried Wolf effect if the asteroid in fact misses Earth. NASA will raise an alert if an asteroid has a better than 1% chance of impacting. In December 2004, when Apophis was estimated to have a 2.7% chance of impacting Earth on 13 April 2029, the uncertainty region for this asteroid had shrunk to 82,818 km. Response to predicted impact Once an impact has been predicted, the potential severity needs to be assessed, and a response plan formed. Depending on the time to impact and the predicted severity, this may be as simple as giving a warning to citizens. For example, although unpredicted, the 2013 impact at Chelyabinsk was spotted through the window by teacher Yulia Karbysheva. She thought it prudent to take precautionary measures by ordering her students to stay away from the room's windows and to perform a duck and cover maneuver. The teacher, who remained standing, was seriously lacerated when the blast arrived and window glass severed a tendon in one of her arms and left thigh, but none of her students, whom she ordered to hide under their desks, suffered lacerations. Children who were in other classes were injured. If the impact had been predicted and a warning had been given to the entire population, similar simple precautionary actions could have vastly reduced the number of injuries. If a more severe impact is predicted, the response may require evacuation of the area, or, with sufficient lead time available, an avoidance mission to deflect the asteroid.
According to expert testimony in the United States Congress in 2013, NASA would require at least five years of preparation before a mission to intercept an asteroid could be launched. The feasibility of such a deflection was later demonstrated when the DART spacecraft kinetically deflected Dimorphos, a non-hazardous minor-planet moon of the near-Earth asteroid Didymos. Following a ten-month journey to the Didymos system, the impactor collided with Dimorphos on 26 September 2022. The collision successfully shortened Dimorphos's orbital period around Didymos by a matter of minutes. Effectiveness of the current system The effectiveness of the current system can be assessed in a number of ways. The diagram below illustrates the number of successfully predicted impacts each year compared to the number of unpredicted asteroid impacts recorded by infrasound sensors designed to detect detonation of nuclear devices. It shows that the success rate is increasing over time, but that the vast majority are still missed. One problem with assessing effectiveness this way is that the sensitivity of infrasound sensors extends to small asteroids, which generally do very little damage. The missed asteroids do tend to be small, and missing small asteroids is relatively unimportant. By contrast, missing a large day-side impacting asteroid is highly problematic, with the unpredicted mid-size Chelyabinsk meteor providing a mild real-life example. In order to assess the effectiveness for detecting the (rare) larger asteroids which do matter, a different approach is needed. That effectiveness for larger asteroids can be assessed by looking at warning times for asteroids which did not impact Earth but came close. The below diagram for asteroids which came closer than the Moon shows how far in advance of closest approach they were first detected. Unlike asteroid impacts, where infrasound sensors provide ground truth, it is impossible to know for sure how many close approaches were undetected. Of the asteroids that were detected, the diagram shows that about half were not detected until after they had passed Earth. If they had been on course to impact Earth, they would not have been spotted before they hit, primarily because they approached from a direction close to the Sun. This includes larger asteroids such as 2018 AH, which approached from a direction close to the Sun and was detected 2 days after it had passed. It is estimated to be around 100 times more massive than the Chelyabinsk meteor. The number of detections is increasing as more survey sites come on line (for example ATLAS in 2016 and ZTF in 2018), but approximately half of the detections are made after the asteroid passes the Earth. The below charts visualise the warning times of the close approaches listed in the above bar graph, by the size of the asteroid instead of by the year they occurred in. The sizes of the charts show the relative sizes of the asteroids to scale. This is based on the absolute magnitude of each asteroid, an approximate measure of size based on brightness. For comparison, the approximate size of a person is also shown.
(Chart panels, grouped by absolute magnitude: 30 and greater, with the size of a person for comparison; 29–30; 28–29; 27–28; 26–27, the probable size of the Chelyabinsk meteor; 25–26; and less than 25, the largest.) As can be seen, the ability to predict larger asteroids has significantly improved since the early years of the 21st century, with some now being catalogued (predicted more than 1 year in advance), or having usable early warning times (greater than a week). Based on the few successfully predicted asteroid impacts, the average time between initial detection and impact is currently around 9 hours. There is some delay between the initial observation of the asteroid, data submission, and the follow-up observations and calculations which lead to an impact prediction being made. Improving impact prediction In addition to the already-funded telescopes mentioned above, two separate approaches have been suggested by NASA to improve impact prediction. Both approaches focus on the first step in impact prediction (discovering near-Earth asteroids) as this is the largest weakness in the current system. The first approach uses more powerful ground-based telescopes similar to the LSST. Being ground-based, such telescopes will still only observe part of the sky around Earth. In particular, all ground-based telescopes have a large blind spot for any asteroids coming from the direction of the Sun. In addition, they are affected by weather conditions, airglow and the phase of the Moon. To get around all of these issues, the second approach suggested is the use of space-based telescopes which can observe a much larger region of the sky around Earth. Although they still cannot point directly towards the Sun, they do not have the problem of blue sky to overcome and so can detect asteroids much closer in the sky to the Sun than ground-based telescopes. Unaffected by weather or airglow, they can also operate 24 hours per day all year round. Finally, telescopes in outer space have the advantage of being able to use infrared sensors without the interference of the Earth's atmosphere. These sensors are better for detecting asteroids than optical sensors, and although there are some ground-based infrared telescopes such as UKIRT, they are not designed for detecting asteroids. Space-based telescopes are more expensive, and tend to have a shorter lifespan, so Earth-based and space-based technologies complement each other to an extent. Although the majority of the IR spectrum is blocked by Earth's atmosphere, the very useful thermal (long-wavelength infrared) frequency band is not blocked (see the gap at 10 μm in the diagram below). This allows for the possibility of ground-based thermal imaging surveys designed for detecting near-Earth asteroids, though none are currently planned. Opposition effect There is a further issue that even telescopes in Earth orbit do not overcome (unless they operate in the thermal infrared spectrum). This is the issue of illumination. Asteroids go through phases similar to the lunar phases. Even though a telescope in orbit may have an unobstructed view of an object that is close in the sky to the Sun, it will still be looking at the dark side of the object. This is because the Sun is shining on the side facing away from the Earth, as is the case with the Moon when it is in a new moon phase.
Because of this opposition effect, objects are far less bright in these phases than when fully illuminated, which makes them difficult to detect (see chart and diagram below). This problem can be solved by the use of thermal infrared surveys (either ground-based or space-based). Ordinary telescopes depend on observing light reflected from the Sun, which is why the opposition effect occurs. Telescopes which detect thermal infrared light depend only on the temperature of the object. Its thermal glow can be detected from any angle, and is particularly useful for differentiating asteroids from the background stars, which have a different thermal signature. This problem can also be solved without using thermal infrared, by positioning a space telescope away from Earth, closer to the Sun. The telescope can then look back towards Earth from the same direction as the Sun, and any asteroids closer to Earth than the telescope will then be in opposition, and much better illuminated. There is a point between the Earth and Sun where the gravities of the two bodies are perfectly in balance, called the Sun-Earth L1 Lagrange point (SEL1). It lies about four times as far from Earth as the Moon, and is ideally suited for placing such a space telescope. One problem with this position is Earth glare. Looking outward from SEL1, Earth itself is at full brightness, which prevents a telescope situated there from seeing that area of sky. Fortunately, this is the same area of sky that ground-based telescopes are best at spotting asteroids in, so the two complement each other. Another possible position for a space telescope would be even closer to the Sun, for example in a Venus-like orbit. This would give a wider view of Earth orbit, but at a greater distance. Unlike a telescope at the SEL1 Lagrange point, it would not stay in sync with Earth but would orbit the Sun at a similar rate to Venus. Because of this, it would not often be in a position to provide any warning of asteroids shortly before impact, but it would be in a good position to catalog objects before they are on final approach, especially those which primarily orbit closer to the Sun. One issue with being as close to the Sun as Venus is that the craft may be too warm to use infrared wavelengths. A second issue would be communications. As the telescope will be a long way from Earth for most of the year (and even behind the Sun at some points), communication would often be slow and at times impossible without expensive improvements to the Deep Space Network. Solutions to problems: summary table This table summarises which of the various problems encountered by current telescopes are solved by the various different solutions. Near-Earth Object Surveyor In 2017, NASA proposed a number of alternative solutions to detect 90% of near-Earth objects of size 140 m or larger over the next few decades. As the detection sensitivity drops off with size but does not cut off, this will also improve the detection rates for the smaller objects which impact Earth much more often. Several of the proposals use a combination of an improved ground-based telescope and a space-based telescope positioned at the SEL1 Lagrange point. A number of large ground-based telescopes are already in the late stages of construction (see above). A space-based mission situated at SEL1, NEO Surveyor, has now also been funded. It is planned for launch in 2027.
List of successfully predicted asteroid impacts Below is the list of all near-Earth objects which have or may have impacted the Earth and which were predicted beforehand. This list would also include any objects identified as having a greater than 50% chance of impacting in the future, but no such future impacts are predicted at this time. As asteroid detection ability increases, it is expected that prediction will become more successful in the future. In addition to these objects, the meteoroid CNEOS20200918 was found in 2022 in archival ATLAS data, imaged 10 minutes before its impact on 18 September 2020. Although it technically could have been discovered before impact, it was only noticed in retrospect. There are also a number of objects which have been observed in orbit which may have impacted shortly after being observed, but may not have. It is difficult to know the true number of these possible impactors as unconfirmed tracklets have a wide range of possible orbits, and only a portion of these are consistent with Earth impact. One example is A106fgF, an object observed on January 22, 2018, with an observation arc of only 39 minutes. See also Earth-grazing fireball List of asteroid close approaches to Earth List of bolides – asteroids and meteoroids that impacted Earth Notes References External links Earth Impact Database Earth Impact Effects Program NASA JPL Predicted Close Approaches (including impacts) Astronomical events Impact events Lists of asteroids Near-Earth asteroids Planetary defense
Asteroid impact prediction
Astronomy
8,491
241,965
https://en.wikipedia.org/wiki/Annals%20of%20Improbable%20Research
The Annals of Improbable Research (AIR) is a bimonthly magazine devoted to scientific humor, in the form of a satirical take on the standard academic journal. AIR, published six times a year since 1995, usually showcases at least one piece of scientific research being done on a strange or unexpected topic, but most of their articles concern real or fictional absurd experiments, such as a comparison of apples and oranges using infrared spectroscopy. Other features include such things as ratings of the cafeterias at scientific institutes, fake classifieds and advertisements for a medical plan called HMO-NO, and a very odd letters page. The magazine is headquartered in Cambridge, Massachusetts. AIR awards the annual science Ig Nobel Prizes, for ten achievements that "first make people laugh, and then make them think". AIR also runs the Luxuriant Flowing Hair Club for Scientists. History AIR is not the first science parody magazine. The Journal of Irreproducible Results (JIR) was founded by Alex Kohn and Harry J. Lipkin in 1955, but its editorial staff, including editor Marc Abrahams, left after the magazine was bought by publisher George Scherr in 1994. Scherr filed a number of court actions against AIR, alleging that it was deceptively similar to the Journal and that it had stolen the name "Ig Nobel Prize", but these actions were unsuccessful. Profile Occasional AIR articles are factual and illuminating, if a bit offbeat. For example, in 2003 researcher-documentary producer Nick T. Spark wrote about the background and history of Murphy's Law in a four-part article, "Why Everything You know About Murphy's Law is Wrong". It was revised, expanded and later published in June 2006 as the book A History of Murphy's Law. Another example: it was scientifically proved and waggishly reported that instruments can "distinguish shit from Shinola." See also Journal of Irreproducible Results Journal of Polymorphous Perversity Worm Runner's Digest References External links Bimonthly magazines published in the United States Satirical magazines published in the United States Science and technology magazines published in the United States Humor magazines Ig Nobel Prize Magazines established in 1995 Magazines published in Boston Professional humor
Annals of Improbable Research
Technology
460
64,731,000
https://en.wikipedia.org/wiki/Medicago%20Inc.
Medicago Inc. was a Canadian biotechnology company focused on the discovery, development, and commercialization of virus-like particles using plants as bioreactors to produce proteins, candidate vaccines, and medications. By using live plant leaves as hosts in the discovery and manufacturing process, the Medicago "Proficia" technology was intended to create a rapid, high-yield system for its product candidates. Privately owned by a subsidiary of Mitsubishi Tanabe Pharma, Medicago and its product development programs were terminated by Mitsubishi in February 2023. The main clinical targets for Medicago product candidates were antiviral vaccines and antibody therapeutics. The company's name was derived from the Latin word for alfalfa, which was the first plant the company used to develop its technologies. Medicago technologies evolved from research at Laval University and Agriculture and Agri-Food Canada in the 1990s. History A research partnership was formed between Laval University and Agriculture Canada in 1997. The partnership was incorporated in 1999 as Medicago, which licensed the technology developed in the partnership from Agriculture Canada and Université Laval. In September 2013, Philip Morris International acquired a 40% stake in Medicago, with the remaining 60% acquired by Mitsubishi Tanabe Pharma Corporation and other Mitsubishi Group companies in a joint purchase. The company had a Phase III clinical trial underway in 2020 for its candidate to prevent seasonal influenza. For its COVID-19 vaccine, Medicago grew its virus-like particles in the Australian weed Nicotiana benthamiana. In July 2020, the company began a Phase I clinical trial of its candidate vaccine for COVID-19 disease, CoVLP, which advanced to a Phase II-III trial in Canada and the United States during November 2020. The Canadian government invested $173 million into Medicago to support development of the Covifenz vaccine and help expand its production facility. In December 2021, the company announced that its CoVLP vaccine candidate exhibited 71% efficacy and no adverse effects in a multinational, Phase III clinical trial. In February 2022, Health Canada authorized use of CoVLP (brand name Covifenz) for preventing COVID-19 infection in adults 18 to 64 years old. In July 2022, the Canadian federal government determined it would not consider buying the shares owned by Medicago's parent company, the tobacco company Philip Morris International, in order to overcome the problem of the World Health Organization not accepting products from tobacco concerns. In December 2022, Philip Morris's stake was bought out by Mitsubishi, which thereby acquired a 100% holding in the company. Termination In February 2023, Mitsubishi Chemical Corporation decided to shut down the company due to the changing landscape of COVID-19 vaccines and the marketplace, and the company's low commercial prospects. Technology The Medicago technology used plants as bioreactors to produce proteins for vaccine and protein-based therapeutic candidates. The plant-based production platform was intended to be accurate and rapid, to shorten product development time and avoid the risk of mutation. Medicago used its proprietary Proficia technology, which was a possible alternative to traditional egg-based methods for producing virus-like particles (VLPs) used to manufacture vaccine candidates. Typically, licensed influenza vaccines are manufactured using embryonated chicken eggs.
With living plants as hosts, the Proficia technology produced VLPs as antigens in plant leaves, providing a flexible, high-yield system with the potential to produce test material within the growth period of the plants (one month). The steps of the technology are: synthesis – VLP genes are produced from a known viral sequence, requiring no live virus; infiltration – using a vacuum infiltration method, the VLP genes are introduced into plant leaves; incubation – the plants containing the genetic material are incubated over days in specific chambers for protein production to grow VLPs; harvest – leaves are collected then processed to extract VLPs; purification – clinical-grade material is purified to prepare for human testing. VLPs serve as potential vaccines by mimicking the natural structure and function of viruses, enabling recognition by the immune system. However, because they lack the core viral genetic material, VLPs are non-infectious and unable to replicate as a virus does in vivo, and so they evoke an immune response similar to that of a natural infection, but without the associated illness. COVID-19 vaccine Medicago's lead COVID-19 vaccine candidate, CoVLP, was a coronavirus VLP grown in the Australian weed Nicotiana benthamiana. Medicago was developing the COVID-19 vaccine candidate in collaboration with the governments of Canada and Quebec, using an adjuvant manufactured by GlaxoSmithKline. Phase I research As of August 2020, the Medicago vaccine candidate was being evaluated for safety, toxicity, and immune response in a Phase I clinical trial at two locations in Quebec. In October 2020, the Government of Canada awarded Medicago a contract of up to $173 million to advance the company's COVID-19 vaccine candidate. Phase II-III research In November 2020, Medicago-GSK started Phase II-III clinical trials for their COVID-19 vaccine candidate. As of January 2021, the Phase III trial was enrolling participants toward the total goal of 30,612, with each volunteer receiving two injections, 21 days apart, of 3.75 micrograms of CoVLP each. The Phase III study was scheduled to conclude in April 2022. In December 2021, the CoVLP candidate showed 71% efficacy and safety in preliminary results from the Phase III trial. Authorization In February 2022, Medicago and GlaxoSmithKline received authorization for CoVLP from Health Canada as an approved vaccine for preventing COVID-19 infection in adults 18 to 64 years old. The brand name for CoVLP is Covifenz. In March 2022, the first Canadian-made COVID-19 vaccine, produced by Medicago, was rejected by the World Health Organization due to the tobacco company Philip Morris International owning a stake in the company. The UN agency has a strict policy about engagement with the tobacco industry. Withdrawal and corporate termination Due to substantial competition in the global vaccine market and low demand for Covifenz, Mitsubishi announced in February 2023 that Covifenz would be withdrawn and Medicago, Inc. terminated. Aramis Biotechnologies Aramis Biotechnologies was set up as a successor company to Medicago during 2023, acquiring intellectual property and equipment in agreement with Mitsubishi Chemical Group.
See also COVID-19 pandemic 2009 flu pandemic vaccine References Biotechnology companies of Canada Life sciences industry Specialty drugs Companies based in Quebec City Canadian companies established in 1999 Canadian companies disestablished in 2023 1999 establishments in Quebec 2023 disestablishments in Quebec Privately held companies of Canada COVID-19 vaccine producers Vaccine producers
Medicago Inc.
Biology
1,425
8,217,872
https://en.wikipedia.org/wiki/CT2
CT2 is a cordless telephony standard that was used in the early 1990s to provide short-range proto-mobile phone service in some countries in Europe and in Hong Kong. It is considered the precursor to the more successful DECT system. CT2 was also referred to by its marketing name, Telepoint. Overview CT2 is a digital FDMA system that uses time-division duplexing technology to share carrier frequencies between handsets and base stations. Features of the system are: operation standardized on 864–868 MHz; 500 frames per second (alternately base station and handset); 100 kHz carriers; 32 kbit/s ADPCM voice channel compression; 10 mW maximum power output; GFSK data encoding; and a range of up to 100 metres (300 ft). Unlike DECT, CT2 was a voice-only system, though like any minimally-compressed voice system, users could deploy analog modems to transfer data; in the early 1990s, Apple Computer sold a CT2 modem called the PowerBop to make use of France's Bi-Bop CT2 network. Although CT2 is a microcellular system, fully capable of supporting handoff, unlike DECT it does not support "forward handoff", meaning that it has to drop its former radio link before establishing the subsequent one, leading to a sub-second dropout in the call during the handover. Deployment and usage CT2 was deployed in a number of countries, including Britain and France. In Britain, the Ferranti Zonephone system was the first public network to go live in 1989, and the much larger Rabbit network – backed by Hong Kong's Hutchison Telecommunications – operated from 1992 to 1993. In France, the Bi-Bop network ran from 1991 to 1997. In the Netherlands, Dutch incumbent PTT deployed a CT2-based network called Greenpoint from 1992 to 1999; in the first year it used the name and mascot Kermit, but royalties proved prohibitively large and the mascot was dropped. The service continued under the brand name Greenhopper, with at one time over 60,000 subscribers. In Finland, the Pointer service was available for a short time in the 1980s before being superseded by Nordic Mobile Telephone (NMT). Since 31 December 2008, CTA1- and CTA2-based phones have been forbidden in Germany. Outside Europe, the system achieved a certain amount of popularity in Hong Kong, with three operators offering service from 1991 until licenses were terminated in 1996. A CT2 service was offered in Singapore from 1993 to 1998 by Telecommunications Equipment under the brand name Callzone, using Motorola's Silverlink 2000 Birdie handset. Typical CT2 users were sold a handset and base station which they could connect to their own home telephone wiring. Calls via the home base station would be routed via the home telephone line, and in this configuration the system was identical to a standard cordless phone for both incoming and outgoing calls. Once out of range of the home, the CT2 user could find signs indicating a network base station in the area, and make outgoing calls (but not receive calls) using the network base station. Base stations were in a variety of places, including high streets and other shopping areas, gas stations, and transport hubs such as rail stations. In this configuration, callers would be charged a per-minute rate which was higher than if they made calls from home, but not as high as conventional cellular charges. The advantages to the user were that the rates were generally lower than cellular, and that the same handset could be used at home and away from home.
The disadvantages, compared to cellular, were that many networks did not deliver incoming calls to the phones (Bi-Bop was an exception), and that their areas of use were more limited. There are no known open CT2 networks still running. Similar systems Japan's Personal Handyphone System, another system based upon microcells, is a direct analog of CT2 and has achieved a much greater level of success. PHS is a full microcellular system with hand-off, better range, and more features. The DECT system is CT2's successor, and also supports full microcellular service and data. However, to date DECT has been used to provide commercial mobile-phone like service only in Italy in 1997-8 (the FIDO network). Canada adopted an enhanced version of CT2, known as CT2Plus, in 1993, operating in the 944–948.5 MHz band. CT2Plus class 2 systems benefited from the use of common signalling channels and offered multi-cell hand-off as well as tracking of devices. Incoming calls could be received anywhere within a multi-cell system. Nortel Networks offered a private branch exchange system based on the standard which was specified in Department of Communication document RSS-130 Annex 1. In the United States, a system similar to DECT and PHS called PACS was developed but never deployed commercially. CT2, as used in Europe and Hong Kong, required adherence to the MPT 1322 and MPT 1334 technical standards. Most striking was the use of TDD (time-division duplex) channels where one radio channel carried both sides of a duplex telephone conversation. This solved the problem of different propagation paths between two widely separated channels (up to 45 MHz in some cellular systems), but also placed an upper limit on the range of CT-2 signaling, since the speed of light (and radio signals) prevented long transmission paths. However, the use of TDD made available many frequency bands for CT-2 use, since a "paired" return path was not needed. An American company, Cellular 21, Inc. (later to become Advanced Cordless Technologies, Inc.) headed by broadcaster Matt Edwards, petitioned the FCC to permit the use of CT2 technology in the US. ACT built two active test systems which were located in Monticello, New York (outdoor), and outside and inside the South Street Seaport complex in lower Manhattan. The Monticello public field trials used Timex technology which was incompatible with the trans-European standard, while the South Street Seaport indoor test used equipment from Ferranti, GPT, and Motorola, which at the time manufactured CT2 equipment for the Singapore and Hong Kong markets. GPT and Motorola both provided CT2 equipment for the Rabbit system rollout (GPT handset and charger shown above). All the testing was under an FCC Experimental license. The ACT/Cellular 21 "Petition for Rulemaking" (RM-7152), along with a later petition by Millicom, became the basis of the FCC's PCS initiative (FCC GEN Docket 90-314) which resulted in the allocation of frequencies in the 1.7 to 2.1 GHz band as spectrum expansion for the crowded 800 MHz cellular band. The FCC used the acronym PCS to designate Personal Communications Services, separate and distinct from cellular service which was 800 MHz analog at the time. PCS was to be digital-only, and has progressed through several "generations" (mostly marketing designations) such as G3 and G4. 
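As a rough sanity check on the parameters listed in the Overview above, the arithmetic for carrier count and TDD frame timing can be written out in a few lines. The following Python sketch is illustrative only; the variable names are chosen here, and it assumes carriers are spaced evenly across the 864–868 MHz allocation rather than reflecting any particular national channel plan.

```python
# Illustrative sketch only: channel and frame arithmetic implied by the CT2
# parameters in the Overview (assumes evenly spaced carriers across the band).

BAND_START_HZ = 864_000_000    # lower edge of the CT2 allocation
BAND_END_HZ = 868_000_000      # upper edge of the CT2 allocation
CARRIER_SPACING_HZ = 100_000   # 100 kHz carriers
FRAMES_PER_SECOND = 500        # base station and handset alternate each frame
VOICE_RATE_BPS = 32_000        # 32 kbit/s ADPCM voice

carriers = (BAND_END_HZ - BAND_START_HZ) // CARRIER_SPACING_HZ
frame_duration_ms = 1000 / FRAMES_PER_SECOND
voice_bits_per_frame = VOICE_RATE_BPS / FRAMES_PER_SECOND

print(f"Carriers available: {carriers}")              # 40
print(f"TDD frame duration: {frame_duration_ms} ms")  # 2.0 ms
print(f"Voice payload per frame (one direction): {voice_bits_per_frame:.0f} bits")
# 64 bits of speech per direction per frame; real CT2 bursts also carry
# signalling and guard time, which this sketch does not model.
```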
See also CT1 Wikipedia France: Bibop References External links Cordless Telephony: The Future of Analogue and CT2 Cordless Telephony in the United Kingdom, UK OFCOM plans for phasing out of CT2 Apple PowerBop in pictures Local loop Mobile telecommunications standards Wireless communication systems
CT2
Technology
1,517
2,935,372
https://en.wikipedia.org/wiki/Growth%20differentiation%20factor-9
Growth/differentiation factor 9 is a protein that in humans is encoded by the GDF9 gene. Growth factors synthesized by ovarian somatic cells directly affect oocyte growth and function. Growth differentiation factor-9 (GDF9) is expressed in oocytes and is thought to be required for ovarian folliculogenesis. GDF9 is a member of the transforming growth factor-beta (TGFβ) superfamily. Growth Differentiation Factor 9 (GDF9) Growth differentiation factor 9 (GDF9) is an oocyte-derived growth factor in the transforming growth factor β (TGF-β) superfamily. It is highly expressed in the oocyte and has a pivotal influence on the surrounding somatic cells, particularly granulosa, cumulus and theca cells. Paracrine interactions between the developing oocyte and its surrounding follicular cells are essential for the correct progression of both the follicle and the oocyte. GDF9 is essential for the overall process of folliculogenesis, oogenesis and ovulation and thus plays a major role in female fertility. Signaling Pathway GDF9 acts through two receptors on the cells surrounding the oocyte: it binds to bone morphogenetic protein receptor 2 (BMPRII) and, downstream of this, utilizes the TGF-β receptor type 1 (ALK5). Ligand-receptor activation allows the downstream phosphorylation and activation of SMAD proteins. SMAD proteins are transcription factors found in vertebrates, insects and nematodes, and are the intracellular substrates of all TGF-β molecules. GDF9 specifically activates SMAD2 and SMAD3, which form a complex with SMAD4, a common partner of all SMAD proteins, that is then able to translocate to the nucleus to regulate gene expression. Role in Folliculogenesis Early Follicle Development In many mammalian species GDF9 is essential for early follicular development through its direct action on the granulosa cells, allowing proliferation and differentiation. The deletion of Gdf9 results in decreased ovary size, halted follicular development at the stage of the primary follicle and the absence of any corpora lutea. The proliferative ability of granulosa cells is significantly reduced, whereby no more than a single layer of granulosa cells is able to surround and thus support the developing oocyte. Any somatic cell formation after the primary layer is atypical and asymmetrical. Normally such a follicle would become atretic and degenerate, although this does not occur, emphasizing the abnormality of these supporting cells. GDF9 deficiency is further linked with the upregulation of inhibin. The normal expression of GDF9 allows the downregulation of inhibin and thus promotes the ability of the follicle to progress past the primary stage of development. In vitro exposure of mammalian ovarian tissue to GDF9 promotes primary follicle progression. GDF9 stimulates growth of preantral follicles by preventing granulosa cell apoptosis. This may occur through increased follicle-stimulating hormone (FSH) receptor expression or be a result of post-receptor signaling. Some sheep breeds show a range of fertility phenotypes due to eight single nucleotide polymorphisms (SNPs) across the coding region of GDF9. A SNP in the Gdf9 gene resulting in a non-conservative amino acid change was identified, whereby ewes homozygous for the SNP were infertile and completely lacked any follicle growth. Late Follicle Development Typical of later stages of follicle development is the appearance of cumulus cells. GDF9 causes the expansion of cumulus cells, a characteristic process in normal follicular development. 
GDF9 induces hyaluronan synthase 2 (Has2) and suppresses urokinase plasminogen activator (uPA) mRNA synthesis in granulosa cells. This produces an extracellular matrix rich in hyaluronic acid, allowing the expansion of cumulus cells. Silencing of GDF9 expression results in the absence of cumulus cell expansion; this highlights the integral role of GDF9 signaling in altering granulosa cell enzymes and therefore allowing cumulus cell expansion in late stages of folliculogenesis. Role in Oogenesis and Ovulation Role in Oogenesis A lack of GDF9 causes pathophysiological alterations in the oocyte itself in addition to severe follicular abnormality. Oocytes reach normal size and form a zona pellucida, although organelles become clustered and cortical granules do not form. In GDF9-deficient oocytes the meiotic ability is significantly altered, where less than half will proceed to metaphase I or II and a large percentage of oocytes have abnormal germinal vesicle breakdown. As cumulus cells surround the oocyte during development and remain with the oocyte once it is ovulated, GDF9 expression in cumulus cells is important in allowing an ideal oocyte microenvironment. The altered phenotype observed in GDF9-deficient oocytes likely results from the lack of somatic cell input in later stages of folliculogenesis. Role in Ovulation GDF9 is required just prior to the surge of luteinizing hormone (LH), a key event responsible for ovulation. Prior to the LH surge, GDF9 supports the metabolic function of cumulus cells, allowing glycolysis and cholesterol biosynthesis. Cholesterol is a precursor of many essential steroid hormones such as progesterone. Progesterone levels rise significantly post-ovulation to support the early stages of embryogenesis. In preovulatory follicles, GDF9 promotes the production of progesterone via the stimulation of the prostaglandin EP2 receptor signaling pathway. Altered GDF9 Expression in Humans Mutations in GDF9 GDF9 mutations are present in women with premature ovarian failure, as well as in mothers of dizygotic twins. Three particular missense mutations, GDF9 P103S, GDF9 P374L and GDF9 R454C, have been found, although GDF9 P103S is present in women with dizygotic twins as well as women with premature ovarian failure. Given that the same mutation is linked with a poly-ovulatory phenotype and the failure of ovulation, these mutations are thought to alter the rate of ovulation, rather than specifically increasing or decreasing the rate. Most of these mutations are located in the pro-region of the gene that encodes GDF9, an area essential for the dimerization and hence activation of the encoded protein. Link with Polycystic Ovarian Syndrome (PCOS) PCOS accounts for approximately 90% of anovulatory infertility, affecting 5–10% of women of reproductive age. In women with PCOS, GDF9 mRNA is decreased in all stages of follicular development compared to women without PCOS. In particular, levels of GDF9 increase as the follicle develops from primordial stages to more mature stages. Women with PCOS have considerably lower expression of GDF9 in primordial, primary and secondary stages of folliculogenesis. GDF9 expression is not only reduced in women with PCOS but also delayed. Despite these findings, the exact link of GDF9 with PCOS is not well established. Synergistic Interaction Bone morphogenetic protein 15 (BMP15) is highly expressed in the oocyte and the surrounding follicular cells, contributing greatly to folliculogenesis and oogenesis. Like GDF9, BMP15 belongs to the TGF-β superfamily. 
Differences in the synergistic action of BMP15 and GDF9 appear to be species-dependent. BMP15 and GDF9 act in an additive manner to increase mitotic proliferation in sheep granulosa cells, although the same effect is not observed in bovine granulosa cells. The silencing of Bmp15 in mice results in partial fertility but a normal histological appearance of the ovary. However, when this is combined with the silencing of one allele of Gdf9, mice are completely infertile due to insufficient folliculogenesis and altered cumulus cell morphology. Mice with this genotype also fail to release oocytes, resulting in oocytes trapped in the corpora lutea. This phenotype is absent in Gdf9-silenced mice and present in only a small population of Bmp15-silenced mice. This reveals the synergistic relationship of GDF9 and BMP15, whereby the silencing of both genes results in a more severe outcome than silencing either gene alone. It is thought that any cooperative effects of GDF9 and BMP15 are modulated through the BMPRII receptor. GDF9 plays an important role in the development of primary follicles in the ovary. It has a critical role in granulosa cell and theca cell growth, as well as in differentiation and maturation of the oocyte. GDF9 has been connected to differences in ovulation rate and in premature cessation of ovarian function, and therefore has a significant role in fertility. The cell surface receptor through which GDF9 generates a signal is the bone morphogenetic protein type II receptor (BMPR2). References Further reading External links Growth factors
Growth differentiation factor-9
Chemistry
2,021
839,260
https://en.wikipedia.org/wiki/Venus%20Express
Venus Express (VEX) was the first Venus exploration mission of the European Space Agency (ESA). Launched in November 2005, it arrived at Venus in April 2006 and began continuously sending back science data from its polar orbit around Venus. Equipped with seven scientific instruments, the spacecraft had as its main objective the long-term observation of the Venusian atmosphere. Observation over such long periods of time had never been done in previous missions to Venus, and was key to a better understanding of the atmospheric dynamics. ESA concluded the mission in December 2014. History The mission was proposed in 2001 to reuse the design of the Mars Express mission. However, some mission characteristics led to design changes, primarily in the areas of thermal control, communications and electrical power. For example, since Mars is approximately twice as far from the Sun as Venus, the radiant heating of the spacecraft is four times greater for Venus Express than Mars Express. Also, the ionizing radiation environment is harsher. On the other hand, the more intense illumination of the solar panels results in more generated photovoltaic power. The Venus Express mission also uses some spare instruments developed for the Rosetta spacecraft. The mission was proposed by a consortium led by D. Titov (Germany), E. Lellouch (France) and F. Taylor (United Kingdom). The launch window for Venus Express was open from 26 October to 23 November 2005, with the launch initially set for 26 October at 04:43 UTC. However, problems with the insulation from the Fregat upper stage led to a two-week launch delay to inspect and clear out the small insulation debris that had migrated onto the spacecraft. It was eventually launched by a Soyuz-FG/Fregat rocket from the Baikonur Cosmodrome in Kazakhstan on 9 November 2005 at 03:33:34 UTC into a parking Earth orbit, and 1 h 36 min after launch was put into its transfer orbit to Venus. A first trajectory correction maneuver was successfully performed on 11 November 2005. It arrived at Venus on 11 April 2006, after a 153-day journey, and fired its main engine between 07:10:29 and 08:00:42 UTC SCET to reduce its velocity so that it could be captured by Venusian gravity into an initial nine-day orbit. The burn was monitored from ESA's Control Centre, ESOC, in Darmstadt, Germany. Seven further orbit control maneuvers, two with the main engine and five with the thrusters, were required for Venus Express to reach its final operational 24-hour orbit around Venus. Venus Express entered its target orbit at apoapsis on 7 May 2006 at 13:31 UTC. At this point the spacecraft was running on an ellipse substantially closer to the planet than during the initial orbit. The orbit was a highly elliptical polar orbit. The periapsis was located almost above the North pole (80° North latitude), and it took 24 hours for the spacecraft to travel around the planet. Venus Express studied the Venusian atmosphere and clouds in detail, the plasma environment and the surface characteristics of Venus from orbit. It also made global maps of the Venusian surface temperatures. Its nominal mission was originally planned to last for 500 Earth days (approximately two Venusian sidereal days), but the mission was extended five times: first on 28 February 2007 until early May 2009; then on 4 February 2009 until 31 December 2009; and then on 7 October 2009 until 31 December 2012. On 22 November 2010, the mission was extended to 2014. 
On 20 June 2013, the mission was extended a final time until 2015. On 28 November 2014, mission control lost contact with Venus Express. Intermittent contact was reestablished on 3 December 2014, though there was no control over the spacecraft, likely due to exhaustion of propellant. On 16 December 2014, ESA announced that the Venus Express mission had ended. A carrier signal was still being received from the vehicle, but no data was being transmitted. Mission manager Patrick Martin expected the spacecraft's orbit to decay in early January 2015, with destruction occurring in late January or early February. The spacecraft's carrier signal was last detected by ESA on 18 January 2015. Instruments ASPERA-4: An acronym for "Analyzer of Space Plasmas and Energetic Atoms," ASPERA-4 investigated the interaction between the solar wind and the Venusian atmosphere, determined the impact of plasma processes on the atmosphere, determined the global distribution of plasma and neutral gas, studied energetic neutral atoms, ions and electrons, and analyzed other aspects of the near-Venus environment. ASPERA-4 is a re-use of the ASPERA-3 design used on Mars Express, but adapted for the harsher near-Venus environment. MAG: The magnetometer was designed to measure the strength of Venus's magnetic field and its direction as affected by the solar wind and Venus itself. It mapped the magnetosheath, magnetotail, ionosphere, and magnetic barrier in high resolution in three dimensions, aided ASPERA-4 in the study of the interaction of the solar wind with the atmosphere of Venus, identified the boundaries between plasma regions, and carried out planetary observations as well (such as the search for and characterization of Venus lightning). MAG was derived from the Rosetta lander's ROMAP instrument. One measuring device was placed on the body of the craft. The identical second sensor of the pair was placed the necessary distance away from the body by unfolding a 1 m long boom (a carbon composite tube). Two redundant pyrotechnic cutters cut a loop of thin rope to release the force stored in metal springs. The driven knee lever rotated the boom perpendicularly outwards and latched it in place. Only the use of a pair of sensors, together with the rotation of the probe, allowed the spacecraft to resolve the small natural magnetic field beneath the disturbing fields of the probe itself. The measurements to identify the fields produced by the craft took place on the route from Earth to Venus. The lack of magnetic cleanness was due to the reuse of the Mars Express spacecraft bus, which did not carry a magnetometer. By combining the data from two-point simultaneous measurements and using software to identify and remove interference generated by Venus Express itself, it was possible to obtain results of a quality comparable to those produced by a magnetically clean craft. VMC: The Venus Monitoring Camera is a wide-angle, multi-channel CCD camera designed for global imaging of the planet. It operated in the visible (VIS), ultraviolet (UV), and near infrared (NIR1 and NIR2) spectral ranges, and mapped the surface brightness distribution, searching for volcanic activity, monitoring airglow, studying the distribution of the unknown ultraviolet-absorbing phenomenon at the cloud tops, and making other science observations. It was derived in part from the Mars Express High Resolution Stereo Camera (HRSC) and the Rosetta Optical, Spectroscopic and Infrared Remote Imaging System (OSIRIS). 
The camera was based on a Kodak KAI-1010 Series 1024 × 1024 pixel interline CCD and included an FPGA to pre-process image data, reducing the amount transmitted to Earth. The consortium of institutions responsible for the VMC included the Max Planck Institute for Solar System Research, the Institute of Planetary Research at the German Aerospace Center and the Institute of Computer and Communication Network Engineering at Technische Universität Braunschweig. It is not to be confused with the Visual Monitoring Camera mounted on Mars Express, of which it is an evolution. PFS: The "Planetary Fourier Spectrometer" (PFS) was intended to operate in the infrared over the 0.9 μm to 45 μm wavelength range and was designed to perform vertical optical sounding of the Venus atmosphere. It was to perform global, long-term monitoring of the three-dimensional temperature field in the lower atmosphere (from cloud level up to 100 kilometers). Furthermore, it was to search for minor atmospheric constituents that may be present but had not yet been detected, analyze atmospheric aerosols, and investigate surface-to-atmosphere exchange processes. The design was based on a spectrometer on Mars Express, but modified for optimal performance for the Venus Express mission. However, PFS failed during its deployment and no useful data was transmitted. SPICAV: The "SPectroscopy for Investigation of Characteristics of the Atmosphere of Venus" (SPICAV) is an imaging spectrometer that was used for analyzing radiation at infrared and ultraviolet wavelengths. It was derived from the SPICAM instrument flown on Mars Express. However, SPICAV had an additional channel known as SOIR (Solar Occultation at Infrared) that was used to observe the Sun through Venus's atmosphere in the infrared. VIRTIS: The "Visible and Infrared Thermal Imaging Spectrometer" (VIRTIS) was an imaging spectrometer that observed in the near-ultraviolet, visible, and infrared parts of the electromagnetic spectrum. It analyzed all layers of the atmosphere, surface temperature and surface/atmosphere interaction phenomena. VeRa: Venus Radio Science was a radio sounding experiment that transmitted radio waves from the spacecraft and passed them through the atmosphere or reflected them off the surface. These radio waves were received by a ground station on Earth for analysis of the ionosphere, atmosphere and surface of Venus. It was derived from the Radio Science Investigation instrument flown on Rosetta. Science Climate of Venus Starting out in the early planetary system with similar sizes and chemical compositions, the histories of Venus and Earth have diverged in spectacular fashion. It is hoped that the Venus Express mission data that was obtained can contribute not only to an in-depth understanding of how the Venusian atmosphere is structured, but also to an understanding of the changes that led to the current greenhouse atmospheric conditions. Such an understanding may contribute to the study of climate change on Earth. In 2006, early results from the mission identified differences between Venus and Earth, and routine observation of the Venusian climate began. Search for life on Earth Venus Express was also used to observe signs of life on Earth from Venus orbit. In images acquired by the probe, Earth was less than one pixel in size, which mimics observations of Earth-sized planets in other planetary systems. These observations were then used to develop methods for habitability studies of exoplanets. 
Timeline of the mission 3 August 2005: Venus Express completed its final phase of testing at Astrium Intespace facility in Toulouse, France. 7 August 2005: Venus Express arrived at the airport of the Baikonur Cosmodrome. 16 August 2005: First flight verification test completed. 22 August 2005: Integrated System Test-3. 30 August 2005: Last major system test successfully started. 5 September 2005: Electrical testing successful. 21 September 2005: FRR (Fuelling Readiness Review) Ongoing. 12 October 2005: Mating to the Fregat upper stage completed. 21 October 2005: Contamination detected inside the fairing – launch on hold. 5 November 2005: Arrival at launch pad. 9 November 2005: Launch from Baikonur Cosmodrome at 03:33:34 UTC. 11 November 2005: First trajectory correction maneuver successfully performed. 17 February 2006: The main engine is fired successfully in a dress rehearsal for the arrival maneuver. 24 February 2006: Second trajectory correction maneuver successfully performed. 29 March 2006: Third trajectory correction maneuver successfully performed – on target for 11 April orbit insertion. 7 April 2006: Command stack for orbit insertion maneuver is loaded on the spacecraft. 11 April 2006: The Venus Orbit Insertion (VOI) is completed successfully, according to the following timeline: {| class="wikitable" |- !Event !Spacecraft event time (UTC) !Ground receive time (UTC) |- | Liquid Settling Phase start || 07:07:56 || 07:14:41 |- | VOI main engine start || 07:10:29 || 07:17:14 |- | periapsis passage || 07:36:35 || |- | eclipse start || 07:37:46 || |- | occultation start || 07:38:30 || 07:45:15 |- | occultation end || 07:48:29 || 07:55:14 |- | eclipse end || 07:55:11 || |- | VOI burn end || 08:00:42 || 08:07:28 |} Period of this initial orbit is nine days. 13 April 2006: First images of Venus from Venus Express released. 20 April 2006: Apoapsis Lowering Manoeuvre #1 performed. Orbital period is now 40 hours. 23 April 2006: Apoapsis Lowering Manoeuvre #2 performed. Orbital period is now approx 25 hours 43 minutes. 26 April 2006: Apoapsis Lowering Manoeuvre #3 is slight fix to previous ALM. 7 May 2006: Venus Express entered its target orbit at apoapsis at 13:31 UTC 14 December 2006: First temperature map of the southern hemisphere. 27 February 2007: ESA agrees to fund mission extension until May 2009. 19 September 2007: End of the nominal mission (500 Earth days) – Start of mission extension. 27 November 2007: A series of papers was published in Nature giving the initial findings. It finds evidence for past oceans. It confirms the presence of lightning on Venus and that it is more common on Venus than it is on Earth. It also reports the discovery that a huge double atmospheric vortex exists at the south pole of the planet. 20 May 2008: The detection by the VIRTIS instrument of hydroxyl (OH) in the atmosphere of Venus is reported in the May 2008 issue of Astronomy & Astrophysics. 4 February 2009: ESA agrees to fund mission extension until 31 December 2009. 7 October 2009: ESA agrees to fund the mission through 31 December 2012. 23 November 2010: ESA agrees to fund the mission through 31 December 2014. 25 August 2011: It is reported that a layer of ozone exists in the upper atmosphere of Venus. 1 October 2012: It is reported that a cold layer where dry ice may precipate exists in the atmosphere of Venus. 18 June—11 July 2014: Performs successful aerobraking experiment. Multiple passes at 131 to 135 km altitude. 28 November 2014: Mission control loses contact with Venus Express. 
3 December 2014: Intermittent contact established, spacecraft determined to likely be out of propellant. 16 December 2014: ESA declares the Venus Express mission over. 18 January 2015: Last detection of the spacecraft's X-band carrier signal. See also Uncrewed space mission List of planetary probes List of missions to Venus List of uncrewed spacecraft by program Space exploration Space telescope Space probe Timeline of artificial satellites and space probes Timeline of planetary exploration References Further reading External links Venus Express mission page by the European Space Agency Venus Express mission page by ESA Spacecraft Operations Venus Express profile by NASA's Solar System Exploration Space probes launched in 2005 European Space Agency space probes Embedded systems Missions to Venus Orbiters (space probe) Spacecraft launched by Soyuz-FG rockets
Venus Express
Technology,Engineering
3,057
55,127,221
https://en.wikipedia.org/wiki/Metuloidea%20cinnamomea
Metuloidea cinnamomea is a species of tooth fungus in the family Steccherinaceae. Found in the Andes region of Venezuela, it was initially described in 2010 by Teresa Iturriaga and Leif Ryvarden as a species of Antrodiella. Otto Miettinen and Ryvarden transferred it to the newly created genus Metuloidea in 2016. References Fungi described in 2010 Fungi of Venezuela Steccherinaceae Taxa named by Leif Ryvarden Fungus species
Metuloidea cinnamomea
Biology
106
1,397,595
https://en.wikipedia.org/wiki/Reprise
In music, a reprise (from the verb 'to resume') is the repetition or reiteration of the opening material later in a composition, as occurs in the recapitulation of sonata form, though originally, in the 18th century, it was simply any repeated section, such as is indicated by beginning and ending repeat signs. A partial or abbreviated reprise is known as a petite reprise. In Baroque music this usually occurs at the very end of a piece, repeating the final phrase with added ornamentation. Song reprises Reprise can refer to a version of a song which is similar to, yet different from, the song on which it is based. One example is "Time", the fourth song from Pink Floyd's 1973 album The Dark Side of the Moon, which contains a reprise of "Breathe", the second song of the same album. Another example is "Solo", the fifth song from Frank Ocean's 2016 album Blonde, and then "Solo (Reprise)", the tenth song of the same album. Be Here Now, the 1997 album by Oasis, features a reprise of "All Around the World", while the title track of Sgt. Pepper's Lonely Hearts Club Band, which plays at the start of the album to introduce it, has a reprise at the end of the album to close it by replacing lines like "we hope you will enjoy the show" with "we hope you have enjoyed the show". Impera by Ghost features a reprise on the final track, "Respite on the Spitalfields", of a riff previously featured in the opening track, "Imperium". Musical theater In musical theatre, a reprise is any repetition of an earlier song or theme, usually with changed lyrics and shortened music to reflect the development of the story. Also, it is common for songs sung by the same character, or regarding the same narrative motif, to have or incorporate similar tunes and lyrics. For example, in the stage version of Les Misérables, a song of the primary antagonist ("Javert's Suicide") is similar in lyrics and exactly the same in tune to a soliloquy of the protagonist when he was in a similar emotional state ("What Have I Done?"). At the end of the song, an instrumental portion is played from an earlier soliloquy of the antagonist, in which he was significantly more confident. Les Misérables in general reprises many musical themes. Often the reprised version of a song has exactly the same tune and lyrics as the original, though frequently featuring different characters singing, or including them with the original character in the reprised version. For example, in The Sound of Music, the reprise of the title song is sung by the Von Trapp children and their father, the Captain, whereas the original was sung by Maria. In "Edelweiss" (reprise), the entire Von Trapp family and Maria sing and are later joined by the audience, whereas the original features Liesl and the Captain. Also, in the musical The Music Man, the love song "Goodnight My Someone" uses the same basic melody (though with a more ballad quality to it) as the rousing march and theme song "Seventy-Six Trombones"; in the reprised versions, Harold and Marian are heard singing a snatch of each other's songs. And in Jerome Kern and Oscar Hammerstein II's Show Boat, the song "Ol' Man River" is reprised three times after it is first sung, as if it were a commentary on the situation in the story. In some musicals, a reprise of an earlier song is sung by a different character from the one who originally sang it, with different lyrics. 
In Mamma Mia!, however, the reprises of the title track, Dancing Queen, and Waterloo do not alter the lyrics, and are just shortened versions of the originals featured earlier. In RENT, the song "I'll Cover You" gets a reprise at Angel's funeral. It is sung primarily by Collins and is slower and more emotional to reflect Collins' emotional state. Nearing the end of the song, the rest of the company begins singing a slower version of the first verse of "Seasons of Love". In addition, the second half of "Goodbye Love" features the piano playing an instrumental which is a faster version of the instrumental in "Halloween". In Hamilton, the song "Best Of Wives, And Best Of Women" reprises the song "It's Quiet Uptown" with the same melody and similar lyrics, and "The Story of Tonight" is reprised several times. In Frozen, the song "For The First Time In Forever (reprise)" reprises the song "For The First Time In Forever"; both versions are sung by the same artists, Kristen Bell and Idina Menzel. Winner reprise In musical competitions, the winner's final performance of the winning song, given once the victory has been proclaimed and before the end of the show, is known as the reprise or winner reprise. This tradition began at the San Remo Festival (1951) and was adopted by several competitions, such as the Eurovision Song Contest. In literature In postmodernism, the term reprise has been borrowed from musical terminology to be used in literary criticism by Christian Moraru: from the postmodern perspective, reprise is a fundamental device in the whole history of art. See also Hidden track, a song that is placed on a music release in a way that avoids detection by the casual listener. Cover version, a new version of a song originated by a different artist. References Formal sections in music analysis
Reprise
Technology
1,199
1,585,314
https://en.wikipedia.org/wiki/Matt%20Mullenweg
Matthew Charles Mullenweg (born January 11, 1984) is an American web developer and entrepreneur. He is known as a co-founder of the free and open-source web publishing software WordPress, and the founder of Automattic. Early life and education Mullenweg was born in Houston, Texas, and grew up in the Willowbend neighborhood. His father, Chuck, was a computer programmer. Mullenweg was raised Catholic. He attended the High School for the Performing and Visual Arts to play the saxophone, although he was frequently absent due to chronic migraines. After graduating from high school, he studied economics, philosophy and political science at the University of Houston, eventually dropping out after his sophomore year in 2004. WordPress Mullenweg became enamored with blogging and started contributing updates to b2—a popular open-source blogging software—in 2002. However, Michel Valdrighi—the sole maintainer—soon ceased activity, and Mullenweg discussed prospects of creating a fork with other contributors; thus, in January 2003, Mullenweg created WordPress with Mike Little under the GPL v2-or-later open-source license at the age of 19, and Valdrighi endorsed the project a few months later. In March 2003, he co-founded the Global Multimedia Protocols Group (GMPG) with Eric A. Meyer and Tantek Çelik. In April 2004, he helped launch Ping-O-Matic, a mechanism for notifying search engines about blog updates. In October 2004, he was hired by CNET who would allow him to develop WordPress part-time as part of his job. He dropped out of college and moved to San Francisco for the position. Automattic Mullenweg left CNET in October 2005 to focus on WordPress full-time. Soon after he announced Akismet, an initiative to reduce comment and trackback spam. In December, he founded Automattic, with Akismet and managed web hosting service WordPress.com as its flagship products. In January 2006, Mullenweg recruited former Yahoo! executive Toni Schneider to join Automattic as CEO. Since 2006, he has delivered an annual "State of the Word" speech on the progress and future of the WordPress software, named after the State of the Union address. In 2011, Mullenweg purchased the WordPress news website WP Tavern. In January 2014, Mullenweg became CEO of Automattic. Schneider moved to work on new projects at Automattic. Mullenweg received the Heinz Award for Technology, the Economy and Employment in 2016, for "helping to democratize online publishing". Mullenweg began a three-month sabbatical from his role as CEO at the beginning of February 2024. Later that month, Mullenweg engaged in a public feud with a transgender Tumblr user who, frustrated with the site's failure to address transphobic harassment, posted that she wished Mullenweg would die in a comedic way. The user was subsequently banned. Responding to user uproar, Mullenweg addressed the ban in posts on his personal Tumblr blog, in which he characterized the post as a death threat, and shared private account information about the user. Mullenweg also responded to individual commenters on Tumblr in posts and direct messages, and went to Twitter to respond to the banned user's tweets about the situation. A few days later, transgender employees of Tumblr and Automattic made a post on the official Tumblr staff blog characterizing his response as "unwarranted and harmful" and stating that he did not speak on their behalf. They also said that the user's post was not a realistic threat of violence and not the reason for her ban. 
Public disputes On several occasions, Mullenweg has publicly challenged competitors to WordPress and WordPress.com. He has stated that he prefers to settle disputes in the court of public opinion and described his approach as "brinksmanship", noting that the potential cost of legal action could put Automattic in a "tough spot". In 2008, shortly before WordPress 2.5's release, Six Apart's Movable Type published "A WordPress 2.5 Upgrade Guide"—a comparison of their CMS with their rival, WordPress—as a company blog article that Mullenweg characterized as "desperate and dirty". In 2013, developers on the digital marketplace Envato were banned from speaking at WordPress events after he criticized the platform for selling WordPress themes with the graphics and CSS components under a proprietary license instead of the GPL. In 2016, Mullenweg accused Wix.com, a competitor to WordPress.com, of reusing WordPress's mobile text editor code in Wix's own mobile app without adhering to the terms of the GPL. Despite the license's requirement to publish anything built with GPL code under the GPL, Wix's CEO claimed that the company open-sourced their forked version of the component and satisfied the license's terms before the app switched to its own fork of the MIT-licensed text editor that the WordPress editor was based upon. The new fork added a clause to the MIT license that forbids redistribution under any other license. In 2022, Mullenweg criticized GoDaddy for not reinvesting in the WordPress project sufficiently. On January 9, 2025, the representative of the WordPress Sustainability team, Thijs Buijs, resigned via WordPress.org's Slack channel, citing dissatisfaction with Matt Mullenweg's December 24, 2024, Reddit post titled "What drama should I create in 2025?", highlighting concerns about what he described as "unsustainable leadership". In response, Mullenweg thanked Buijs for reminding him of the existence of a sustainability team, announced its disbanding, and subsequently closed WordPress.org's #sustainability Slack channel. WP Engine dispute Audrey Capital Mullenweg is a principal at angel investment firm Audrey Capital, which he co-founded in 2008 alongside Naveen Selvadurai and Audrey Kim. The company lists investments in companies such as CoinDesk, MakerBot, Sonos, SpaceX and Ring, as well as software companies including Calm, Chartbeat, DailyBurn, Memrise, Genius, Nord Security and Telegram. It has also funded startups that provide services to web developers, including Creative Market, GitLab, NPM, SendGrid, Stripe and Typekit. From 2017 to 2019, Mullenweg also served as a board member for GitLab. Mullenweg has employed a team of contributors to WordPress through Audrey Capital since 2010; they work separately from Automattic. On the 20th anniversary of WordPress' initial release, Mullenweg announced a scholarship program aimed at the children of significant contributors to open-source projects. To remain in the program, participants must commit annually to a set of principles. 
See also Browse Happy References External links 1984 births 21st-century American businesspeople American computer programmers American male bloggers American social entrepreneurs American technology chief executives American technology company founders Angel investors Automattic Businesspeople from Houston Free software programmers Henry Crown Fellows High School for the Performing and Visual Arts alumni Living people Open source advocates People from Houston Tumblr University of Houston alumni Web developers Web development WordPress
Matt Mullenweg
Engineering
1,560
21,243,439
https://en.wikipedia.org/wiki/Pocket%20litter
Pocket litter is material, including notes scribbled on scraps of paper, that accumulates in an individual's pockets. It can include identity cards, transportation tickets, personal photographs, computer files and similar material. Counter-terrorism analysts report that the analysis of pocket litter can be an important tool for confirming or refuting suspects' accounts of themselves. The term was used as early as 1973, by Watergate conspirator E. Howard Hunt. The Combating Terrorism Center celebrated the first anniversary of the killing of Osama bin Laden by releasing documents seized from Osama bin Laden's Abbottabad home. The Associated Press reported that SEAL Team 6 had been specially trained to search for documents and pocket litter "that might produce leads to other terrorists." In Operation Mincemeat of World War II, false pocket litter (a photograph of a supposed girlfriend, forged receipts, and so forth) was planted on a corpse to deceive the Germans into believing that it was actually that of one "Major William Martin" of the Royal Marines (a person who did not actually exist), and thus that the false secret documents carried by "Martin" were genuine. References Intelligence analysis Waste
Pocket litter
Physics
242
56,059,449
https://en.wikipedia.org/wiki/Polynomial%20mapping
In algebra, a polynomial map or polynomial mapping between vector spaces V and W over an infinite field k is a polynomial in linear functionals with coefficients in k; i.e., it can be written as a finite sum of terms of the form λ1(v) ⋯ λn(v) w, where the λi are linear functionals on V and the w are vectors in W. For example, if W is the coordinate space k^m, then a polynomial mapping can be expressed as F(v) = (F1(v), ..., Fm(v)), where the Fi are (scalar-valued) polynomial functions on V. (The abstract definition has an advantage that the map is manifestly free of a choice of basis.) When V, W are finite-dimensional vector spaces and are viewed as algebraic varieties, then a polynomial mapping is precisely a morphism of algebraic varieties. One fundamental outstanding question regarding polynomial mappings is the Jacobian conjecture, which concerns sufficient conditions for a polynomial mapping to be invertible. See also Polynomial functor References Claudio Procesi (2007) Lie Groups: an approach through invariants and representation, Springer. Algebra
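As a concrete illustration of the definition above, the following LaTeX fragment writes out one explicit polynomial mapping between coordinate spaces. The particular map and the symbols used are chosen here purely for illustration and are not taken from the references.

```latex
% Illustrative example only: the map P below is chosen for this sketch.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Let $V = W = k^{2}$, and let $x$ and $y$ denote the coordinate (linear)
functionals on $V$. Then
\[
  P \colon k^{2} \to k^{2}, \qquad
  P(v) = \bigl( x(v)^{2} + 2\,x(v)\,y(v),\; y(v)^{3} - x(v) \bigr)
\]
is a polynomial mapping: each component is a polynomial in the linear
functionals $x$ and $y$, so $P$ is a morphism of algebraic varieties
when $k^{2}$ is viewed as an affine variety.
\end{document}
```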
Polynomial mapping
Mathematics
191
6,792,870
https://en.wikipedia.org/wiki/BioPHP
BioPHP is a collection of open-source PHP code, with classes for DNA and protein sequence analysis, alignment, database parsing, and other bioinformatics tools. BioPHP is released under the GNU GPL version 2 licence and is one of a number of Bio* projects, designed to reduce code duplication. As an open source bioinformatics project, BioPHP is affiliated with the Open Bioinformatics Foundation. History The BioPHP project grew out of GenePHP, which was started by Serge Gregorio in 2003. GenePHP was conceived as a PHP-based implementation of similar bioinformatics packages such as BioPerl, BioPython and BioRuby. BioPHP was developed in December 2005 by Joseba Bikandi at the University of the Basque Country, Spain, as an extension of GenePHP. GenePHP is one of the four projects currently forming BioPHP. Projects BioPHP is divided into four 'projects'. The GenePHP project has a similar structure to other Bio* projects, with a number of classes representing (amongst other things) DNA and protein sequences and sequence alignments. Each class is designed to be general enough to be useful in a number of BioPHP projects. Similarly, the Functions project aims to create a number of functions to perform tasks on class objects and reduce code duplication between projects. The Minitools and Tools projects aim to generate a set of PHP scripts for small, repetitive tasks; scripts in the Tools project generally have special requirements, such as interfacing with non-PHP scripts and/or code (written in, for instance, Perl or C). See also Open Bioinformatics Foundation References External links An object-oriented BioPHP Lee Katz's biophp Free bioinformatics software Bioinformatics software PHP software
BioPHP
Biology
384
38,889,765
https://en.wikipedia.org/wiki/Coherence%20%28units%20of%20measurement%29
A coherent system of units is a system of units of measurement used to express physical quantities that are defined in such a way that the equations relating the numerical values expressed in the units of the system have exactly the same form, including numerical factors, as the corresponding equations directly relating the quantities. It is a system in which every quantity has a unique unit, or one that does not use conversion factors. A coherent derived unit is a derived unit that, for a given system of quantities and for a chosen set of base units, is a product of powers of base units, with the proportionality factor being one. If a system of quantities has equations that relate quantities and the associated system of units has corresponding base units, with only one unit for each base quantity, then it is coherent if and only if every derived unit of the system is coherent. The concept of coherence was developed in the mid-nineteenth century by, amongst others, Kelvin and James Clerk Maxwell and promoted by the British Science Association. The concept was initially applied to the centimetre–gram–second (CGS) system of units in 1873 and to the foot–pound–second (FPS) system in 1875. The International System of Units (SI) was designed in 1960 to incorporate the principle of coherence. Examples In the SI, the metre per second (m/s) is a coherent derived unit for speed or velocity, but the kilometre per hour (km/h) is not. Speed or velocity is defined as the change in distance divided by the change in time. The derived unit m/s uses only the base units of the SI system. The derived unit km/h requires numerical factors to relate it to the SI base units: 1 km = 1000 m and 1 h = 3600 s. In the CGS system, the m/s is not a coherent derived unit: a numerical factor of 100 is needed to express the m/s in the CGS base unit of length, the centimetre (1 m/s = 100 cm/s). History Before the metric system The earliest units of measure devised by humanity bore no relationship to each other. As both humanity's understanding of philosophical concepts and the organisation of society developed, so units of measurement were standardized—first particular units of measure had the same value across a community, then different units of the same quantity (for example feet and inches) were given a fixed relationship. Apart from Ancient China where the units of capacity and of mass were linked to red millet seed, there is little evidence of the linking of different quantities until the Enlightenment. Relating quantities of the same kind The history of the measurement of length dates back to the early civilization of the Middle East (10000 BC – 8000 BC). Archaeologists have been able to reconstruct the units of measure in use in Mesopotamia, India, the Jewish culture and many others. Archaeological and other evidence shows that in many civilizations, the ratios between different units for the same quantity of measure were adjusted so that they were integer numbers. In many early cultures such as Ancient Egypt, multiples with prime factors aside from 2, 3 and 5 were sometimes used—the Egyptian royal cubit being 28 fingers or 7 hands. In 2150 BC, the Akkadian emperor Naram-Sin rationalized the Babylonian system of measure, adjusting the ratios of many units of measure to multiples of which the only prime factors were 2, 3 and 5; for example there were 6 she (barleycorns) in a shu-si (finger) and 30 shu-si in a kush (cubit). Relating quantities of different kinds Non-commensurable quantities have different physical dimensions, which means that adding or subtracting them is not meaningful. 
For instance, adding the mass of an object to its volume has no physical meaning. However, new quantities (and, as such, units) can be derived via multiplication and exponentiation of other units. As an example, the SI unit for force is the newton, which is defined as kg⋅m⋅s−2. Since a coherent derived unit is one which is defined by means of multiplication and exponentiation of other units but not multiplied by any scaling factor other than 1, the pascal is a coherent unit of pressure (defined as kg⋅m−1⋅s−2), but the bar (defined as 100,000 kg⋅m−1⋅s−2) is not. Note that coherence of a given unit depends on the definition of the base units. Should the standard unit of length change such that it is shorter by a factor of 100,000, then the bar would be a coherent derived unit. However, a coherent unit remains coherent (and a non-coherent unit remains non-coherent) if the base units are redefined in terms of other units with the numerical factor always being unity. Metric system The concept of coherence was only introduced into the metric system in the third quarter of the nineteenth century; in its original form the metric system was non-coherent – in particular the litre was 0.001 m3 and the are (from which we get the hectare) was 100 m2. A precursor to the concept of coherence was however present in that the units of mass and length were related to each other through the physical properties of water, the gram having been designed as being the mass of one cubic centimetre of water at its freezing point. The CGS system had two units of energy, the erg that was related to mechanics and the calorie that was related to thermal energy, so only one of them (the erg, equivalent to the g⋅cm2/s2) could bear a coherent relationship to the base units. By contrast, coherence was a design aim of the SI, resulting in only one unit of energy being defined – the joule. Dimension-related coherence Each variant of the metric system has a degree of coherence—the various derived units being directly related to the base units without the need of intermediate conversion factors. An additional criterion is that, for example, in a coherent system the units of force, energy and power be chosen so that the equations force = mass × acceleration, energy = force × distance and power = energy ÷ time hold without the introduction of constant factors. Once a set of coherent units have been defined, other relationships in physics that use those units will automatically be true—Einstein's mass–energy equation, E = mc2, does not require extraneous constants when expressed in coherent units. Isaac Asimov wrote, "In the cgs system, a unit force is described as one that will produce an acceleration of 1 cm/sec2 on a mass of 1 gm. A unit force is therefore 1 cm/sec2 multiplied by 1 gm." These are independent statements. The first is a definition; the second is not. The first implies that the constant of proportionality in the force law has a magnitude of one; the second implies that it is dimensionless. Asimov uses them both together to prove that it is the pure number one. Asimov's conclusion is not the only possible one. In a system that uses the units foot (ft) for length, second (s) for time, pound (lb) for mass, and pound-force (lbf) for force, the law relating force (F), mass (m), and acceleration (a) is F = ma. Since the proportionality constant here is dimensionless and the units in any equation must balance without any numerical factor other than one, it follows that 1 lbf = 1 lb⋅ft/s2. This conclusion appears paradoxical from the point of view of competing systems, according to which the force law contains an explicit proportionality constant and 1 lbf = 32.174 lb⋅ft/s2. 
Although the pound-force is a coherent derived unit in this system according to the official definition, the system itself is not considered to be coherent because of the presence of the proportionality constant in the force law. A variant of this system applies the unit s2/ft to the proportionality constant. This has the effect of identifying the pound-force with the pound. The pound is then both a base unit of mass and a coherent derived unit of force. One may apply any unit one pleases to the proportionality constant. If one applies the unit s2/lb to it, then the foot becomes a unit of force. In a four-unit system (English engineering units), the pound and the pound-force are distinct base units, and the proportionality constant has the unit lbf⋅s2/(lb⋅ft). All these systems are coherent. One that is not is a three-unit system (also called English engineering units) that uses the pound and the pound-force, one of which is a base unit and the other a non-coherent derived unit. In place of an explicit proportionality constant, this system uses conversion factors derived from the relation 1 lbf = 32.174 lb⋅ft/s2. In numerical calculations, it is indistinguishable from the four-unit system, since what is a proportionality constant in the latter is a conversion factor in the former. The relation among the numerical values of the quantities in the force law is {F} = {m}⋅{a}/32.174, where the braces denote the numerical values of the enclosed quantities. Unlike in this system, in a coherent system, the relations among the numerical values of quantities are the same as the relations among the quantities themselves. The following example concerns definitions of quantities and units. The (average) velocity (v) of an object is defined as the quantitative physical property of the object that is directly proportional to the distance (d) traveled by the object and inversely proportional to the time (t) of travel, i.e., v = k⋅d/t, where k is a constant that depends on the units used. Suppose that the metre (m) and the second (s) are base units; then the kilometre (km) and the hour (h) are non-coherent derived units. The metre per second (mps) is defined as the velocity of an object that travels one metre in one second, and the kilometre per hour (kmph) is defined as the velocity of an object that travels one kilometre in one hour. Substituting from the definitions of the units into the defining equation of velocity, we obtain 1 mps = k m/s and 1 kmph = k km/h = 1/3.6 k m/s = 1/3.6 mps. Now choose k = 1; then the metre per second is a coherent derived unit, and the kilometre per hour is a non-coherent derived unit. Suppose that we choose to use the kilometre per hour as the unit of velocity in the system. Then the system becomes non-coherent, and the numerical value equation for velocity becomes {v} = 3.6 {d}/{t}. Coherence may be restored, without changing the units, by choosing k = 3.6; then the kilometre per hour is a coherent derived unit, with 1 kmph = 1 m/s, and the metre per second is a non-coherent derived unit, with 1 mps = 3.6 m/s. A definition of a physical quantity is a statement that determines the ratio of any two instances of the quantity. The specification of the value of any constant factor is not a part of the definition since it does not affect the ratio. The definition of velocity above satisfies this requirement since it implies that v1/v2 = (d1/d2)/(t1/t2); thus if the ratios of distances and times are determined, then so is the ratio of velocities. 
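The kilometre-per-hour example above can be checked numerically. The short Python sketch below simply evaluates the defining equation v = k⋅d/t for the two choices of k discussed in the text; the function name and structure are choices made here for illustration, not part of any standard.

```python
# Illustrative sketch of the velocity example above: the magnitude of a unit
# defined through v = k*d/t depends on the chosen constant k.

def unit_magnitude_in_m_per_s(distance_m: float, time_s: float, k: float) -> float:
    """Magnitude, in plain m/s, of the velocity unit defined by travelling
    distance_m metres in time_s seconds under the law v = k*d/t."""
    return k * distance_m / time_s

for k in (1.0, 3.6):
    mps = unit_magnitude_in_m_per_s(1.0, 1.0, k)         # one metre in one second
    kmph = unit_magnitude_in_m_per_s(1000.0, 3600.0, k)  # one kilometre in one hour
    print(f"k = {k}: 1 mps = {mps} m/s, 1 kmph = {kmph:.6f} m/s")

# k = 1.0 -> 1 mps = 1.0 m/s (coherent); 1 kmph ≈ 0.277778 m/s (non-coherent)
# k = 3.6 -> 1 mps = 3.6 m/s (non-coherent); 1 kmph = 1.0 m/s (coherent)
```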
A definition of a unit of a physical quantity is a statement that determines the ratio of any instance of the quantity to the unit. This ratio is the numerical value of the quantity or the number of units contained in the quantity. The definition of the metre per second above satisfies this requirement since it, together with the definition of velocity, implies that v/mps = (d/m)/(t/s); thus if the ratios of distance and time to their units are determined, then so is the ratio of velocity to its unit. The definition, by itself, is inadequate since it only determines the ratio in one specific case; it may be thought of as exhibiting a specimen of the unit. A new coherent unit cannot be defined merely by expressing it algebraically in terms of already defined units. Thus the statement, "the metre per second equals one metre divided by one second", is not, by itself, a definition. It does not imply that a unit of velocity is being defined, and if that fact is added, it does not determine the magnitude of the unit, since that depends on the system of units. In order for it to become a proper definition both the quantity and the defining equation, including the value of any constant factor, must be specified. After a unit has been defined in this manner, however, it has a magnitude that is independent of any system of units. List of coherent units This list catalogues coherent relationships in various systems of units. SI The following is a list of quantities, each with its corresponding coherent SI unit: frequency (hertz) = reciprocal of time (inverse second) force (newton) = mass (kilogram) × acceleration (m/s2) pressure (pascal) = force (newton) ÷ area (m2) energy (joule) = force (newton) × distance (metre) power (watt) = energy (joule) ÷ time (second) potential difference (volt) = power (watt) ÷ electric current (ampere) electric charge (coulomb) = electric current (ampere) × time (second) equivalent radiation dose (sievert) = energy (joule) ÷ mass (kilogram) absorbed radiation dose (gray) = energy (joule) ÷ mass (kilogram) radioactive activity (becquerel) = reciprocal of time (s−1) capacitance (farad) = electric charge (coulomb) ÷ potential difference (volt) electrical resistance (ohm) = potential difference (volt) ÷ electric current (ampere) electrical conductance (siemens) = electric current (ampere) ÷ potential difference (volt) magnetic flux (weber) = potential difference (volt) × time (second) magnetic flux density (tesla) = magnetic flux (weber) ÷ area (square metre) CGS The following is a list of coherent centimetre–gram–second (CGS) system of units: acceleration (gals) = distance (centimetre) ÷ time (s2) force (dyne) = mass (gram) × acceleration (cm/s2) energy (erg) = force (dyne) × distance (centimetre) pressure (barye) = force (dyne) ÷ area (cm2) dynamic viscosity (poise) = mass (gram) ÷ (distance (centimetre) × time (second)) kinematic viscosity (stokes) = area (cm2) ÷ time (second) FPS The following is a list of coherent foot–pound–second (FPS) system of units: force (pdl) = mass (lb) × acceleration (ft/s2) See also Systems of measurement Geometrized unit system Planck units Atomic units Metre–kilogram–second system (MKS) Metre–tonne–second system (MTS) Quadrant–eleventh-gram–second system (QES) References Systems of units Dimensional analysis
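The relations in the SI list above can also be expressed mechanically as powers of base units together with a scale factor, with coherence corresponding to a scale factor of exactly one. The small Python dictionary below is an illustrative sketch covering only a few of the units mentioned in this article; the representation and names are choices made here, not a complete or authoritative unit registry.

```python
# Illustrative sketch: a few derived units written as a scale factor times
# powers of the SI base units; a unit is coherent when the scale factor is 1.

BASE_UNITS = ("kg", "m", "s")

derived_units = {
    # name: (scale factor, {base unit: exponent})
    "hertz":  (1.0,    {"s": -1}),
    "newton": (1.0,    {"kg": 1, "m": 1, "s": -2}),
    "pascal": (1.0,    {"kg": 1, "m": -1, "s": -2}),
    "joule":  (1.0,    {"kg": 1, "m": 2, "s": -2}),
    "watt":   (1.0,    {"kg": 1, "m": 2, "s": -3}),
    "bar":    (1.0e5,  {"kg": 1, "m": -1, "s": -2}),    # factor 100 000: not coherent
    "km/h":   (1000.0 / 3600.0, {"m": 1, "s": -1}),     # factor 1/3.6: not coherent
}

for name, (scale, exponents) in derived_units.items():
    dims = "·".join(f"{base}^{exponents[base]}" for base in BASE_UNITS if base in exponents)
    status = "coherent" if scale == 1.0 else "NOT coherent"
    print(f"{name:7s} = {scale:g} × {dims}  ->  {status}")
```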
Coherence (units of measurement)
Mathematics,Engineering
3,106
50,954,000
https://en.wikipedia.org/wiki/Brasilsat%20B2
Brasilsat B2 is a Brazilian communications satellite launched on 28 March 1995, at 23:14:19 UTC, by an Ariane 44LP H10+ launch vehicle at Kourou in French Guiana. History The Boeing Company contracted the acquisition of three satellites from Hughes Electronics Corporation. As part of the contract, Hughes would divide the work with Promon Engenharia SA of São Paulo. Brasilsat B1 and B2 were tested by the National Institute for Space Research (INPE) of São José dos Campos, while Brasilsat B3 and B4 were tested in the Hughes laboratories. The contract also included renovation of the sensor and telemetry equipment, provided by the Guaratiba Center for Satellite Signaling, located in Rio de Janeiro, as well as automation and installation of security equipment in the Tanguá Control Station. Current status In January 2008, Brasilsat B2 was moved from its former orbital position at 65° West to 92° West. Brasilsat B2 is now in inclined orbit at 63° West. Of the four Brasilsat satellites, only Brasilsat B3 and Brasilsat B4 are reportedly still transmitting signals. Main characteristics Original orbital position: 65° West Current orbital position: 63.1° West (inactive) Coverage: Brazil Transponders: 28 C-band, 1 X-band Launch date: 28 March 1995 Model: Hughes HS-376W Launch location/vehicle: Centre Spatial Guyanais / Ariane 44LP Planned life of satellite: 12 years References External links Communications satellites in geostationary orbit Spacecraft launched in 1995 Satellites using the HS-376 bus Star One satellites
Brasilsat B2
Astronomy
328
68,846,045
https://en.wikipedia.org/wiki/Constrained%20equal%20awards
Constrained equal awards (CEA), also called constrained equal gains, is a division rule for solving bankruptcy problems. According to this rule, each claimant should receive an equal amount, except that no claimant should receive more than his/her claim. In the context of taxation, it is known as leveling tax. Formal definition There is a certain amount of money to divide, denoted by E (the estate or endowment). There are n claimants. Each claimant i has a claim denoted by c_i. Usually, c_1 + ... + c_n > E, that is, the estate is insufficient to satisfy all the claims. The CEA rule says that each claimant i should receive min(c_i, r), where r is a constant chosen such that the awards exhaust the estate: min(c_1, r) + ... + min(c_n, r) = E. The rule can also be described algorithmically as follows: Initially, all agents are active, and all agents get 0. While there are remaining units of the estate: The next estate unit is divided equally among all active agents. Each agent whose total allocation equals its claim becomes inactive. Examples In general, when all claims are at least E/n, each claimant receives exactly E/n. When some claims are below the common award r, those claimants receive their full claims and all others receive r; for example, an estate of 100 divided over claims of 30, 60 and 90 yields the awards 30, 35 and 35 (here r = 35). Usage In Jewish law, if several creditors have claims on the same bankrupt debtor, all of which have the same precedence (e.g. all loans have the same date), then the debtor's assets are divided according to CEA. Characterizations The CEA rule has several characterizations. It is the only rule satisfying the following sets of axioms: Equal treatment of equals, invariance under truncation of claims, and composition up; Conditional full compensation, and composition down; Conditional full compensation, and claims-monotonicity. Dual rule The constrained equal losses (CEL) rule is the dual of the CEA rule, that is: for each problem (E, c), we have CEA(E, c) = c − CEL(C − E, c), where C denotes the sum of the claims. References Bankruptcy theory
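A minimal implementation sketch of the CEA rule, assuming the formalisation above (function and variable names are illustrative, not taken from the literature):

```python
# Minimal sketch of the constrained equal awards (CEA) rule: each claimant
# receives min(claim_i, r), with r chosen so the awards sum to the estate E.

def constrained_equal_awards(estate: float, claims: list[float]) -> list[float]:
    order = sorted(range(len(claims)), key=lambda i: claims[i])
    awards = [0.0] * len(claims)
    remaining = estate
    for pos, i in enumerate(order):
        share = remaining / (len(claims) - pos)   # equal split among still-active claimants
        if claims[i] <= share:
            awards[i] = claims[i]                 # fully satisfied claimants drop out
            remaining -= claims[i]
        else:
            # every remaining claim exceeds the equal share: all of them get the same amount r
            for j in order[pos:]:
                awards[j] = share
            remaining = 0.0
            break
    return awards

# Example from the article text above: estate 100 over claims (30, 60, 90) gives (30, 35, 35);
# no claimant exceeds their claim and the awards sum to the estate.
print(constrained_equal_awards(100, [30, 60, 90]))
```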
Constrained equal awards
Mathematics
395
42,346,684
https://en.wikipedia.org/wiki/Greedy%20Perimeter%20Stateless%20Routing%20in%20Wireless%20Networks
The Greedy Perimeter Stateless Routing in Wireless Networks (GPSR) is a routing protocol for mobile ad-hoc networks. It was developed by B. Karp. It routes packets by greedy geographic forwarding and, when greedy forwarding fails, by routing around the perimeter of the region where it failed. Coordinates instead of receiver names GPSR is a geographic routing method, which means that data packets are not sent to a specific named receiver but to coordinates. Packets are relayed to the node that is geographically closest to those coordinates. This assumes that every node knows its own position. Literature B.Karp: Challenges in Geographic Routing: Sparse Networks, Obstacles, and Traffic Provisioning. In DIMACS Workshop on Pervasive Networking, Piscataway, NJ, May 2001 B.Karp: Geographic Routing for Wireless Networks. Ph.D. Dissertation, Harvard University, Cambridge, MA, October 2000 B.Karp, H.T.Kung: Greedy Perimeter Stateless Routing for Wireless Networks. In Proceedings of the Sixth Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom 2000), Boston, MA, August 2000, pp. 243-254 Ad hoc routing protocols
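The greedy mode of GPSR can be sketched as follows; this is an illustrative fragment with assumed helper names, and it omits the perimeter-mode fallback that GPSR uses when greedy forwarding reaches a local maximum:

```python
# Illustrative sketch of GPSR's greedy mode (assumed helper names, not Karp's code):
# forward the packet to the neighbour geographically closest to the destination
# coordinates; if no neighbour is closer than the current node, greedy forwarding
# fails and GPSR would switch to perimeter (face) routing.
import math

def dist(a: tuple[float, float], b: tuple[float, float]) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_next_hop(self_pos, neighbours, destination):
    """neighbours: mapping of node id -> (x, y) position learned from periodic beacons."""
    best_id, best_d = None, dist(self_pos, destination)
    for node_id, pos in neighbours.items():
        d = dist(pos, destination)
        if d < best_d:
            best_id, best_d = node_id, d
    return best_id  # None signals a local maximum, i.e. fall back to perimeter mode

# Example: a node at (0, 0) routing towards coordinates (10, 0) with two neighbours.
print(greedy_next_hop((0, 0), {"a": (3, 1), "b": (1, 4)}, (10, 0)))  # "a"
```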
Greedy Perimeter Stateless Routing in Wireless Networks
Technology
234
75,773,058
https://en.wikipedia.org/wiki/CWISEP%20J1935-1546
CWISEP J1935-1546 (CWISEP J193518.59-154620.3 or Brown Dwarf W1935 or W1935) is a cold brown dwarf or planetary-mass object with a mass of 2-20 or 6-35 Jupiter masses and a distance of 14.4 parsec (47 light-years). CWISEP J1935-1546 was discovered in 2019 by Marocco et al. as an extremely cold brown dwarf with a temperature range of 270-360 K and a distance of 5.6-10.9 parsec. It was discovered with the help of the python package XGBoost, using machine-learning algorithms and the CatWISE catalog, as well as the WiseView tool. According to a NASA press release, CWISEP J1935-1546 was discovered by the security engineer and citizen scientist Dan Caselden. Follow-up observations with Spitzer revealed a very red object with a ch1-ch2 color of 3.24±0.31 mag. Later, Kirkpatrick et al. 2021 derived a temperature of 367±79 K (15-173 °C; 59-343 °F) and a parallax of 69.3±3.8 mas (about 14.4 parsec) for this object. The spectral type was estimated to be later than Y1. Observations with JWST found strong signatures of methane, carbon monoxide, carbon dioxide, water vapor and ammonia in the atmosphere of this brown dwarf. The abundance of hydrogen sulfide was measured, but the researchers do not report it as a detection. Phosphine is undetected and the researchers only provide upper limits. Aurora At the 243rd meeting of the AAS it was announced that W1935 shows emission from methane. This is attributed to heating of the upper atmosphere by an aurora around W1935. Impacts of electrons on molecular hydrogen create the trihydrogen cation (H3+) in gas giants with an aurora. Emission from H3+ was not detected in W1935, likely due to the higher density of the brown dwarf, which leads to a shorter lifetime of H3+. Aurorae had previously been discovered around hotter brown dwarfs with radio telescopes. The solar system planets Jupiter and Saturn have aurorae because of interactions with the solar wind and with particles from active moons, such as Enceladus and Io. The researchers propose that the aurora around W1935 is caused either by internal processes that are not yet accounted for or by external interactions with interstellar plasma or a nearby active moon. The researchers also announced that W1935 has a temperature inversion that is either caused by the aurora or related to internal energy transport. These results were later published in April 2024. See also List of star systems within 45–50 light-years List of Y-dwarfs References Y-type brown dwarfs Planetary-mass objects WISE objects Sagittarius (constellation)
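The quoted distance follows directly from the measured parallax; a quick check (illustrative only) is:

```python
# Quick check (not from the article) of the distance quoted for CWISEP J1935-1546:
# distance in parsecs is the reciprocal of the parallax in arcseconds.
parallax_mas = 69.3                  # measured parallax, milliarcseconds
distance_pc = 1000.0 / parallax_mas  # 1000 mas = 1 arcsecond
distance_ly = distance_pc * 3.2616   # 1 parsec is about 3.2616 light-years

print(round(distance_pc, 1), round(distance_ly, 1))  # ~14.4 pc, ~47.1 ly
```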
CWISEP J1935-1546
Astronomy
594
73,800,544
https://en.wikipedia.org/wiki/Mitra%2015
The Mitra 15 is a minicomputer made by the French company CII under Plan Calcul, along with the Iris 50 and Iris 80 mainframe computers. It was marketed from 1971 to 1985 and could function in conjunction with large systems. CII manufactured a thousand Mitra 15 machines until 1975 in its Toulouse factory, then in Crolles in the suburbs of Grenoble. A total of 7,929 units were built, most of them for the French market, with a small number sold in Australia, Indonesia, and other European countries. History The Mitra 15 is the successor to the CII 10010, also called Iris 10, a 16-bit minicomputer released in July 1967. At the time, CII also produced another 16-bit minicomputer, the CII 10020 (actually a licensed Sigma 3 from SDS), and wanted to replace them both with a new, more powerful design compatible with the latest offering of the company. The Mitra 15 was designed from the outset to complement and network with the most powerful French computer of the time, the CII Iris 80, with which it was compatible. Its name is a French acronym meaning "Mini-machine for Real-Time and Automatic Computing". The first versions featured a main memory of lithium ferrite cores organized in 16-bit words. It was designed and developed by a team led by Alice Recoque. The first Mitra 15 was delivered on May 10, 1971, and produced in Crolles then Échirolles. Intended for the command and control of industrial processes and for applications such as scientific computing, the Mitra 15 was designed to be adaptable to very diverse fields of application, thanks to an innovative microprogramming system and a good price/performance ratio. Variants of this computer have also been produced according to the needs of CII's customers. The Mitra 15 was also developed into a militarized version, the Mitra 15M. Microprograms are firmware stored in a ROM whose execution causes a simple computer (the micromachine) to repeatedly execute the same algorithm, interpreting the instructions of another computer: the macromachine, or simply the machine, which is what is visible to the programmer. Only the first version is incompatible with the CII Iris computers of the time, the Iris 50. The Mitra 15 was widely used as the front-end for the CII Iris 80 (MCR-2) computer. Initially, it was produced as a simple stand-alone module with external cabinetry. It was succeeded by the Mitra 15–20, Mitra 15–30, and Mitra 15–35, produced from 1972. The Mitra 15-30 and Mitra 15-35, which have an external chassis cabinet with an extended configuration and modular drawers, were intended in particular for customers in the telecommunications industry. Later, the low-end Mitra 15M/05 was produced in 1975. Competition and innovation The first commercially successful minicomputer, the 12-bit DEC PDP-8, was introduced in 1965. In 1969, Data General, founded by ex-DEC engineers, introduced the 16-bit Nova. The Hewlett-Packard HP2000 series appeared in the late 1960s and early 1970s. The main French competitors to the Mitra 15 were the Télémécanique T1600, introduced in 1971, and its successor, the Solar 16, introduced in 1975, which sold about 16,000 units. According to Le Monde, by 1974 the Mitra 15 had achieved revenue of 150 million francs, one eighth of CII's total sales, of which "30% was for remote processing" and "around 20%" for export.
Users Cyclades packet-switching network Cyclades was an early packet-switching network developed by Louis Pouzin in the early 1970s, which played a significant role in the development of the Internet. It used a decentralized approach in which Mitra 15 minicomputers acted as routers and allowed for the transmission of data in small packets. Cyclades was a forerunner of the Transmission Control Protocol (TCP). French nuclear program The Mitra 15 was used to monitor the deployment of the new generation of electric generators during the French nuclear program. In particular, it was used as part of the transmission network automation master plan, launched in 1973. The Mitra 15 gradually equipped all of the network's control sites – about a hundred in France – to manage data exchanges between the remote-control equipment of the controlled sites and the regional nodes that supervise the electricity network. In 1975, EDF's Mitra 15s were systematically fitted with monitors and printers. PTT telecom network Within the French PTT telecom network, the Mitra 15 was used with CII Iris 80s, due to its ability to handle a large number of interrupts. Telecom switches The Mitra 15 equipped the telephone switches of the E10N4 between 1972 and 1976, sold by CIT-Alcatel to the PTT. After 1976, because of the lowering of component prices, a fully electronic second-generation telephone switching system, based on new integrated circuits, became affordable. Experimental computer science in secondary education As part of Plan Calcul, it was decided to install computers, on an experimental basis, in 58 high schools. Two minicomputers were selected for the pilot: the Télémécanique T1600 and the Mitra 15. Although the performance of the Mitra was three or four times better than that of the T1600, the delivery of the Mitra lagged by two months, so it was decided to install more T1600s than Mitras. The Ariane rocket In Kourou, at the Guiana Space Center, the Ariane 1 control console was built around two Mitra 15s: one for managing electrical systems, the other for fluid systems. The Ariane 4 consoles also used two Mitra 15-30 computers and peripherals for command and control: one was in the preparation area (CCD, Dock Command Control), the other in the launch area (CCE, Electrical Command Control). The peripherals evolved over the launch campaigns; in particular, the DRI magnetic-head disks were replaced by RAM-based disks whose much faster access times required software reorganizations. On the launch pad, the Mitra 15, associated with an Intel Frontal Table Image (FTI), controlled, among other functions, the ignition sequence. A sustainability study of these computers and all the control consoles enabled them to be used until 2003, the date of the last Ariane 4 flight (V159). Political decisions in 1976 CII was handicapped by its 1974-1976 merger with Honeywell-Bull, which was more centered on traditional business computing, and by the abandonment of Unidata, which caused the termination of orders from Siemens. Sales of the Mitra 15 were tied by CII to those of the big computer, the Iris 80, to the point that Le Monde asked whether CII would not be forced to launch into this market and manufacture its own equipment. The Mitra 15 mini-computer, a mainstay of CII's distributed computing strategy since 1971, was then sold to its shareholder Thomson, which had been opposed for more than a year to the merger of CII with Honeywell-Bull, despite a special mediation mission.
Mitra 15 successors The Mitra 125, sometimes called "Mitra 15M/125", succeeded the Mitra 15 in 1975. It introduced a memory management unit, with extended addressing capabilities, protected memory and paging support, allowing it to address up to 32 pages of 64 kB for a total of 512 kB. It also added three I/O microcoded processors, and up to sixteen units could be interconnected for distributed computing. A version specially designed for the Spacelab, a modular space laboratory used during some of the missions of the American Space Shuttle, was also developed: the Mitra 125 MS. Its immediate successor, the Mitra 225, was a more powerful version built from 1975 around the AMD 2901 bit-slice microprocessors and MOS memory. This family of processors, easier to program than those of Intel, was also introduced in 1975 by Advanced Micro Devices. From 1976, for civilian applications, the Mitra minicomputers were grouped together with Télémécanique's T1600 and Solar minicomputers in the European Society for Minicomputing and Systems (SEMS), in which Télémécanique held 24% and IDI 9%. The Mitra 525 consolidated, in a three-bus architecture, the extension possibilities of the Mitra 225, with which it remained compatible. The 1982 Mitra 625 brought only detail changes, allowing up to 25% more power. Finally, the 1984 Mitra 725 was produced at a time when SEMS was transferred to Bull, which "didn't deal much with this SEMS, having to deal with Honeywell's Level 6 as well as the heavy financial losses of the period, 1982-1984". The Mitra 525, 625, and 725 used the ECL MC10800 and MC10802 circuits, introduced by Motorola in 1975, while Intel's 3002 lost its advantage over competitors. References External links Mitra 15 Reference Manual Technical summary at the Fédération des Equipes Bull Minicomputers History of computing in France Computers designed in France 16-bit computers
Mitra 15
Technology
1,954
53,058,380
https://en.wikipedia.org/wiki/Tania%20Antoshina
Tatyana Antoshina (Russian: Таня Антошина, Татьяна Константиновна Антошина; also transliterated as Tania Antoshina, Tatiana Antoshina, Tanya Antoshina, Tatjana Antoschina, Tatyana Antoschina. From 1977 to 1997 she bore the surname Машукова (Tatyana Mashukova)) (b. 1 May 1956, Krasnoyarsk, Siberia, Russia) is a French-Russian ultra-contemporary artist, curator, PhD in art history, and one of the first participants of the gender movement in Moscow art. In 1991 she completed postgraduate studies and received a PhD in Fine Arts from the Stroganov Moscow State University of Arts and Industry. Tania Antoshina is one of the most significant Russian female artists since Perestroika. Her work explores the role of women artists in society and in art history and was exhibited in the iconic ‘After the Wall’ exhibition at the Moderna Museet, Stockholm and Hamburger Bahnhof, Berlin; ‘Gender Check’, MUMOK, Vienna; and the 56th Venice Biennale. Her works are in the collections of MUMOK, Vienna; National Museum of Women in the Arts, Washington; Neues Museum Weserburg Bremen; State Russian Museum, St Petersburg; and The Tretyakov Gallery, Moscow. Antoshina lives and works in Paris. Collections MUMOK, Vienna, Austria; New Museum Weserburg Bremen, Bremen, Germany; National Museum of Women in the Arts, Washington DC, US; Corcoran Art Museum, Washington DC, US; American University Museum, Washington DC, US; Omi International Arts Center collection, New York, US; Mint Museum, Charlotte, North Carolina, US; Casoria Contemporary Art Museum, Naples, Italy; Olympic Fine Arts Museum, Beijing, China; Penang State Art Museum, Penang, Malaysia; Russian Museum, Saint Petersburg, Russia; Tretyakov Gallery, Moscow, Russia; National Centre for Contemporary Arts, Moscow, Russia; Museum of Decorative-Applied and Folk Arts, Moscow, Russia; Perm Museum of Contemporary Art, Perm, Russia; Krasnoyarsk Cultural Historical Museum complex, Krasnoyarsk, Russia; Asia-Pacific Institute of Art & Research, Jeollabuk-do, South Korea; The Francis J. Greenburger collection, New York; Kolodzei Art Foundation, New York; Tony Podesta collection, Washington; Sir Elton John Collection, London. Selected exhibitions Solo shows TANIA ANTOSHINA : L'ARCHE DE L'ESPACE / TANIA ANTOSHINA : SPACE ARK, Galerie Vallois Modern and Contemporary Art, Paris, 2023 Cold Land.
Northern Tales, ZARYA Center for Contemporary Art, Vladivostok, 2017-2018 Reggae Feminism or 88 March, Dukley Art Center, Kotor, Montenegro, 2017 Museum of a Woman, Podgorica Museums & Galleries, Gallery Art, Podgorica, Montenegro, 2015 Cold Land, Krasnoyarsk Museum Center, Krasnoyarsk, Russia, 2014 My Favorite Artists, Galerie Vallois, Paris, France, 2010 Alice and Gagarin, VP Studio, Moscow, Russia, 2010 My Favorite Artists, Mario Mauroner Gallery, Vienna, Austria, 2008 Space Travelers, Guelman Gallery, Moscow, Russia, 2006 Museum of a Woman, White Space Gallery, London, UK, 2004 The Voyeurism of Alice Guy, Guelman Gallery, Moscow, Russia, 2002 Museum of a Woman, Florence Lynch Gallery, NY, US, 2001 April in Moscow, Guelman Gallery, Moscow, Russia, 1999 Museum of a Woman, Guelman Gallery, Moscow, Russia, 1997 Women of Russia, Guelman Gallery, Moscow, Russia To Moor, Expo 88, Moscow, Russia, 1996 The Hound of Baskervilles, Regina Gallery, Moscow, Russia, 1992 Group shows International Biennale of Vallauris – Contemporary Creation and Ceramics, Musée Magnelli, Musée de la céramique, Vallauris, France, 2024 DARK ROOM: VIDEOWORDS The fifth special project by Magmart, Torrance Art Museum, Torrance, USA, 2021 Moves Like Walter: New Curators Open the Corcoran Legacy Collection, Katzen Arts Center, American University Museum, Washington, DC, USA, 2019 From Non-Conformism to Feminisms: Russian Women Artists from the Kolodzei Art Foundation, Museum of Russian Art (TMORA), Minnesota, 2018-2019 18-th ASIAN ART BIENNALE BANGLADESH, Bangladesh Shilpakala Academy, National Academy of Fine and Performing Arts, Dhaka, 2018 Women at Work: Subverting the Feminine in Post-Soviet Russia, White Space Gallery, London, 2018 ART RIOT: POST-SOVIET ACTIONISM, Saatchi Gallery, London, 2017-2018 17th Asian Art Biennale Bangladesh, Bangladesh Shilpakala Academy National Art Gallery, Dhaka, Bangladesh, 2016 56 Venice Biennale, State pavilion of Mauritius, 2015 Gender Check, MUMOK, Vienna, 2012 Moscow — NY = Parallel Play, Chelsea Art Museum, NY, 2008 Moscow Biennale, Moscow, Russa, 2007 SIGHT/INSIGHT, Corcoran Art Museum, Washington DC, 2006 Photo London, Royal Academy of Arts, London, 2005 After the wall, National Gallery (Berlin), Berlin, Hamburger Bahnhof, 2001 After the Wall, Ludwig Museum, Budapest, 2000 After the Wall, Moderna Museet, Stockholm, 1999 Honours and awards Scholarship Residence Center for Contemporary Art "Zarya", Vladivostok, Russia, 2017; Scholarship Residence Dukley European Art Community, Kotor, Montenegro, 2017; Scholarship Residence Dukley European Art Community, Kotor, Montenegro, 2015; Scholarship Residency KRITI Varanasi, India, 2013; Scholarship Residences MARIPOSA Canary Islands, Spain, 2012; Olympic Art Gold Medal, Olympic Fine Arts, London, United Kingdom, 2012; Alternative Prize “Russian Activist Art”, Moscow, Russia, 2012; Olympic Art Gold Medal, Olympic Fine Arts, Beijing, China, 2008; Five Rings Prize, Olympic Landscape Sculpture Design Contest, Beijing, China, 2008; Laureate of the Magmart video festival, Naples, Italy, 2005; Scholarship Omi International Arts Center, NY, US, 2005; Winner of the "Silver Camera", Multimedia Art Museum, Moscow, 2005; Winner of the "Silver Camera", Multimedia Art Museum, Moscow, 2002; Scholarship of the Yaddo Residence, New York, US, 2001; Winner of the contest "Modern Russia", Photo Center on Gogol Boulevard, Moscow, Russia, 2001; Participant of the International Symposium CERAMICS - PAINTING - GRAPHICS Bad Lippspringe, Germany, 1992; Best Report at 
the Scientific Conference of Post-Graduate Students and Teachers of the Moscow Institute of Industrial Arts, Moscow, Russia, 1985; Silver Medal of VDNH for Teaching Work, Moscow, Russia, 1985; Best Teacher of the Krasnoyarsk Art Academy, Krasnoyarsk, Russia, 1984. Curatorship projects The Quest of Power, special project of 6 Moscow Biennale 2015; Terra Incognita, expedition to South Siberia for collection of ethnic and cultural material, 2014; V-5, Space as a Presence, in partnership with A.Galenz and G.Kuznetsov, InteriorDAsein, Berlin, 2012; Sons of the Big Dipper, together with G.Kuznetsov, PROEKT FABRIKA, Moscow, 2011; Two Museums, in partnership with G.Vysheslavsky, Champino, Velletri, Italy, 1992. Select publications “TANIA ANTOSHINA: L'ARCHE DE L'ESPACE”, 2023-04-06, Galerie Robert Vallois, Paris, France; “ART JUDGMENTS: Art on Trial in Russia after Perestroyka” by Sandra Frimmel (University of Zurich), ISBN 978-1-62273-277-7, Vernon Press, March 2022; Thomas Deecke, Markus Bulling (2001). ‘’8. Triennale Kleinplastik Fellbach Vor-Sicht, Rück-Sicht’’. Germany, Stadt Fellbach Auflage; Александр Боровский. «Как-то раз Зевксис с Паррасием... Современное искусство: практические наблюдения». Литрес, 2017. ; Jonson Lena (2015). ‘‘Art and Protest in Putin's Russia’‘, Taylor and Francis, pp. XII, 207, 227, 240, 261, ; Viola Hildebrand-Schat (2014). ‘‘Appropriation oder Simulacrum?’‘, p. 229-233, in Guido Isekenmeier, ‘‘Interpiktorialität: Theorie und Geschichte der Bild-Bild-Bezüge’‘, transcript Verlag, ; ‘‘Working with Feminism: Curating and Exhibitions in Eastern Europe, Acta Universitatis Tallinnensis: Artes’’, Angela Dimitrakaki, Katrin Kivimaa, Katja Kobolt, Izabela Kowalczyk, Pawel Leszkowicz, Suzana Milevska, Bojana Pejic, Rebeka Põldsam, Mara Traumane, Airi Triisberg, Hedvig Turai, p. 85. Estonia: Tallinn University Press / Tallinna Ülikooli Kirjastus. , 2012; Klaus Krüger / Leena Crasemann / Matthias Weiß: (Hgg.) (2011). ‘‘Re-Inszenierte Fotografie’‘. Munich, Germany: Wilhelm Fink Verlag.(2011) ; Thomas Deecke (2010). ‘‘Leben mit der Kunst’‘. Germany: Nicolaische Verlagsbuchhandlung. ; Alain Monvoisin (2008). ‘‘DICTIONNAIRE INTERNATIONAL DE LA SCULPTURE MODERNE ET CONTEMPORAINE’‘, p. 26-27. Paris, France, Regard. ; Julia Tulovsky (2008). ‘’The Claude and Nina Gruen Collection of Contemporary Russian Art’’, pp. 24, 79. Jane Voorhees Zimmerli Art Museum, ; Matthias Winzen; Nicole Fritz (2007). Bodycheck: Catalog of the 10th Fellbach Triennial of Contemporary Sculpture. Germany: Snoek Verlagsgesellschaft. , p. 292; Tatiana Smorodinskaya (Russian, Middlebury Coll.), Karen Evans-Romaine (Russian, Ohio Univ.), and Helena Goscilo (Slavic languages, Univ. of Pittsburgh) (ed.) (2007). ‘‘Encyclopedia of contemporary Russian Culture (Encyclopedias of Contemporary Culture Series), pp. 19, 42-43. Abingdon, UK and New York, USA: Routledge. ; Игорь Кон (2003) Мужское тело в истории культуры p. 378-382; Издательство: СЛОВО/SLOVO ; АРТ-Конституция (иллюстрированная АРТ – Конституция Российской Федерации, в создании которой принимали участие наиболее актуальные художники начала 21 века). сс. 110, 111, 131, 132, 272, 273. Тексты: Зураб Церетели, Екатерина Деготь, Наталья Колодзей. Москва: Музей современного искусства, 2003. ; СЛОВАРЬ ГЕНДЕРНЫХ ТЕРМИНОВ / Под ред. А. А. Денисовой / Региональная общественная организация "Восток-Запад: Женские Инновационные Проекты". М.: Информация XXI век, 2002. c. 256; Renee Baigell, Matthew Baigell (July 1, 2001). 
‘’Peeling Potatoes, Painting Pictures: Women Artists in Post-Soviet Russia, Estonia, and Latvia. The First Decade’’. The Dodge Soviet Nonconformist Art publication series, p. 55. New Brunswick, NJ, USA: Jane Voorhees Zimmerli Art Museum: Rutgers. ; Женщина в обществе: мифы и реалии : сборник статей / редактор-составитель Круминг Л.С. , сс. 1, 4. Москва : Информация - XXI век, 2001. , сс. 1, 4; Женщина и визуальные знаки : сборник / Ин-т "Открытое общество", сс. 232-234. - М. : Идея-Пресс, 2000. . Periodicals “FEMINISM OF THE TENDER KIND Tatyana Antoshina: performative ceramics”, by Anna Tolstova for “Kommersant Weekend”, 01.09.2023; “ANTOSHINA AND RAIDERS OF THE LOST ARK”, by Alexey Tarkhanov for “Art Focus Now”, 13.04.2023; This Leads to Fire: From Nonconformism to Global Capitalism, Selections from the Kolodzei Art Foundation Collection, Neuberger Museum of Art at Purchase College, SUNY, 2015; Google Arts and Culture. Quantum Leap, May - November 2015; Female Artists and the Nude Male, Part 5, July 14, 2014; Gesellschaft vor Gericht, Neue Zürcher Zeitung, 5.3.2013; IWMpost 110 by Institut für die Wissenschaften vom Menschen - Issuu, N110 may – August 2012; Die Einheit des Universums, taz, BENNO SCHIRRMEISTER, 23.12.2006; Revolution, Transit Art Space, April 2006; Hans-Dieter Fronz, KUNSTFORUM, Bd.171 Juli-August 2004, p. 370-371, ‘’Na Kurort! Russische Kunst heute’‘, Zurich, 2004; Pat Simpson, ‘’Peripheralising Patriarchy? Gender and Identity in Post-Soviet Art: A View from the West’‘, Oxford Art Journal, Vol. 27, No. 3 (2004), Oxford University Press, pp. 406, 407, 2004; Brian Dillon, ‘’Tatiana Antoshina’‘, MODERN PAINTERS, summer 2004, pp.120-121. ‘’СAPITAL PERSPECTIVE’‘, Moscow, pp. 5-6, 2002; Francesca Piovano, ‘’Art-Forum’‘, CVA, issue 33, 2001; Texte zur Kunst, Band 11, Ausgaben 41-42, Texte zur Kunst GMBH, p. 142, 2001; Elfi Kreis, ‘’Absurdistan’‘, Kunstzeitung, nr. 57, mai 2001; ‘’Letzte Tage IsKusstwo 2000’‘, Oberbauer, Volksblah, 24/25, 2.2001; Chrictopf Wiedemann, ‘’Freie Radikale auf ihrem Weg in den Westen’‘, Seite 18, Suddeutsche Zeitung, Nr.13, 17.1.2001; Von Michael Dultz, ‘’Klassische Frauenszenen mit Mannern nachgestellt’‘ , Die Welt Bayern, 15.01.2001; Brita Sachs, ‘’Sei frech und zeige deine Katastrophen’‘ , Frankfurter Augemeine Zaitung, Februar 2001; Rod Mengham, ‘’The refugee aesthetic?’‘ , TATE, issue 20, 2000. External links Tatyana Antoshina's site Tatyana Antoshina on ARTFACT Tatyana Antoshina on ARTNET Tania Antoshina Sotheby's Notes et références 1956 births Living people Multimedia artists Women multimedia artists 20th-century Russian women artists 21st-century Russian women artists People from Krasnoyarsk Feminist artists Stroganov Moscow State Academy of Arts and Industry alumni
Tania Antoshina
Technology
3,745
24,161,058
https://en.wikipedia.org/wiki/SMath%20Studio
SMath Studio is a freeware (free of charge, but not libre), closed-source, mathematical notebook program similar to Mathcad. It is available for Windows, Linux, iOS, Android, Universal Windows Platform, and on some handhelds. Among its capabilities are: Solving differential equations; Graphing functions in two or three dimensions; Symbolic calculations, including solving systems of equations; Matrix operations, including determinants; Finding roots of polynomials and functions; Symbolic and numeric differentiation of functions; Numeric integration; Simple multiline looped programs; User-defined functions; Units of measurement. References External links Computer algebra systems
SMath Studio
Mathematics
130
15,354,712
https://en.wikipedia.org/wiki/MRPS6
28S ribosomal protein S6, mitochondrial is a protein that in humans is encoded by the MRPS6 gene. Mammalian mitochondrial ribosomal proteins are encoded by nuclear genes and help in protein synthesis within the mitochondrion. Mitochondrial ribosomes (mitoribosomes) consist of a small 28S subunit and a large 39S subunit. They have an estimated 75% protein to rRNA composition compared to prokaryotic ribosomes, where this ratio is reversed. Another difference between mammalian mitoribosomes and prokaryotic ribosomes is that the latter contain a 5S rRNA. Among different species, the proteins comprising the mitoribosome differ greatly in sequence, and sometimes in biochemical properties, which prevents easy recognition by sequence homology. This gene encodes a 28S subunit protein that belongs to the ribosomal protein S6P family. Pseudogenes corresponding to this gene are found on chromosomes 1p and 12q. References Further reading External links Ribosomal proteins
MRPS6
Chemistry
207
4,039,380
https://en.wikipedia.org/wiki/Media%20processor
A media processor, mostly used as an image/video processor, is a microprocessor-based system-on-a-chip which is designed to deal with digital streaming data at real-time (e.g. display refresh) rates. These devices can also be considered a class of digital signal processors (DSPs). Unlike graphics processing units (GPUs), which are used for computer displays, media processors are targeted at digital televisions and set-top boxes. The streaming digital media classes include: uncompressed video compressed digital video - e.g. MPEG-1, MPEG-2, MPEG-4 digital audio - e.g. PCM, AAC Such SoCs are composed of: a microprocessor optimized to deal with these media datatypes a memory interface streaming media interfaces specialized functional units to help deal with the various digital media codecs The microprocessor might have these optimizations: vector processing or SIMD functional units to efficiently deal with these media datatypes DSP-like features Prior to media processors, these streaming media datatypes were processed using fixed-function, hardwired ASICs, which could not be updated in the field. This was a big disadvantage when any of the media standards were changed. Since media processors are software-programmed devices, the processing done on them could be updated with new software releases. This allowed new generations of systems to be created without hardware redesign. For set-top boxes this even allows for the possibility of in-the-field upgrade by downloading new software through cable or satellite networks. Companies that pioneered the idea of media processors (and created the marketing term of media processor) included: MicroUnity MediaProcessor - Cancelled in 1996 before introduction IBM Mfast - Described at the Microprocessor Forum in 1995, planned to ship in mid-1997 but was cancelled before introduction Equator Semiconductor BSP line - their processors are used in Hitachi televisions, company acquired by Pixelworks Chromatic Research MPact line - their products were used on some PC graphics cards in the mid-1990s, company acquired by ATI Technologies Philips TriMedia line - used in Philips, Dell, Sony, etc. consumer electronics, Philips Semiconductors split off from Philips and became NXP Semiconductors in 2006 Consumer electronics companies have successfully dominated this market by designing their own media processors and integrating them into their video products. Companies such as Philips, Samsung, Matsushita, Fujitsu, and Mitsubishi have their own in-house media processor devices. Newer generations of such devices now use various forms of multiprocessing (multiple CPUs or DSPs) in order to deal with the vastly increased computational needs when dealing with high-definition television signals. External links http://www.equator.com http://www.philips.com http://www.nxp.com Central processing unit Coprocessors Digital electronics Digital signal processing Digital signal processors
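As a loose software analogy for the SIMD/vector optimizations described above (not an actual media-processor instruction set), the following NumPy sketch applies one saturating brightness operation across an entire uncompressed luma plane at once rather than pixel by pixel:

```python
# Illustrative analogy only: SIMD-style processing of uncompressed video means
# applying a single operation across many pixels at once.
import numpy as np

def brighten_frame(frame: np.ndarray, delta: int) -> np.ndarray:
    """Saturating add on 8-bit luma samples, vectorised over the whole frame."""
    return np.clip(frame.astype(np.int16) + delta, 0, 255).astype(np.uint8)

frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)  # one SD luma plane
out = brighten_frame(frame, 40)
print(out.dtype, out.shape)  # uint8 (480, 640)
```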
Media processor
Engineering
616
66,067,881
https://en.wikipedia.org/wiki/Helosciadium%20%C3%97%20longipedunculatum
Helosciadium × longipedunculatum, synonym Apium × longipedunculatum, is a hybrid plant in the umbellifer family (Apiaceae), the result of hybridisation between Helosciadium repens (creeping marshwort) and Helosciadium nodiflorum (fool's water cress). Discovery The hybrid was first discovered by George Lawson in July 1845 at Gullane, East Lothian, Scotland, and two voucher specimens were deposited in the herbarium of William Gardiner, Dundee. Following Gardiner's death in 1852 the specimens were transferred as part of a bequest to the British Museum, and shortly afterward (ca. 1854) loaned to German botanist Friedrich W. Schultz, who described them as a new variety of Helosciadium nodiflorum, var. longipedunculatum. The variety was later (1906) described in further detail by Harry J. Riddelsdell and Edmund G. Baker, who examined a number of specimens from the original Gullane locality, as well as additional specimens from Duddingston Loch, Edinburgh, Scotland. While Riddelsdell did not consider var. longipedunculatum to be of hybrid origin, during the course of the 20th century a number of eminent botanists (e.g. Werner Rothmaler, Clive Stace) suggested that it was the hybrid between H. repens and H. nodiflorum. In 2014, plants matching H. nodiflorum var. longipedunculatum were collected at Port Meadow, Oxfordshire, England, and a molecular analysis confirmed them as the hybrid H. repens × H. nodiflorum. The original specimens are still housed in the Natural History Museum, London, under voucher number BM001144095, and the one situated to the left has been designated as the lectotype. Description Creeping perennial herb with slender stems, rooting at the lower nodes. Leaves simply pinnate with 4 to 7 pairs of leaflets, which are ovate to broadly ovate, coarsely serrate and occasionally lobed. Flowering umbels are borne on peduncles, which are equal to or longer than the rays of the umbel, and subtended at the base by an involucre of 1 to 3 bracts. Ripe fruits broader than long. References Apioideae Hybrid plants
Helosciadium × longipedunculatum
Biology
495
12,710,444
https://en.wikipedia.org/wiki/Procambarus%20angustatus
Procambarus angustatus was a species of crayfish in the family Cambaridae. It was only known from the type specimen, described by John Lawrence LeConte in 1856. He reported that it "lives in lesser Georgia, in the rivulets of pure water which flow between little sand hills". It was endemic to the U. S. state of Georgia, but is now believed to be extinct. References Cambaridae Extinct animals of the United States Freshwater crustaceans of North America Crustaceans described in 1856 Extinct invertebrates since 1500 Extinct crustaceans Taxonomy articles created by Polbot Species known from a single specimen Taxa named by John Lawrence LeConte
Procambarus angustatus
Biology
140
12,498,941
https://en.wikipedia.org/wiki/Nizofenone
Nizofenone (Ekonal, Midafenone) is a neuroprotective drug which protects neurons from death following cerebral anoxia (interruption of oxygen supply to the brain). It might thus be useful in the treatment of acute neurological conditions such as stroke. References Further reading Nitrobenzene derivatives Imidazoles Ketones 2-Chlorophenyl compounds Diethylamino compounds
Nizofenone
Chemistry
87
44,247,164
https://en.wikipedia.org/wiki/DnaD
DnaD is a 232 amino acid long protein that is part of the primosome involved in prokaryotic DNA replication. In Bacillus subtilis, genetic analysis has revealed three primosomal proteins, DnaB, DnaD, and DnaI, that have no obvious homologues in E. coli. They are involved in primosome function both at arrested replication forks and at the chromosomal origin. DnaB and DnaD proteins are both multimeric and bind individually to DNA. DnaD induces DnaB to bind. DnaD alone and the DnaD/DnaB complex then interact with PriA of Bacillus subtilis at several DNA sites. This suggests that the nucleoprotein assembly is sequential in the PriA, DnaD, DnaB order. References Bacterial proteins DNA replication
DnaD
Chemistry,Biology
172
61,801,258
https://en.wikipedia.org/wiki/Snaiad
Snaiad is a speculative evolution, science fiction and artistic worldbuilding project by Turkish artist C. M. Kösemen, focused on a fictional exoplanet of the same name. Begun in the early 2000s and inspired by earlier works such as Wayne Barlowe's 1990 book Expedition, Kösemen has produced hundreds of paintings and sketches of creatures of Snaiad, with detailed ecological roles and taxonomic relationships to each other. The sheer number of invented creatures and lineages makes Snaiad one of the most biologically diverse fictional worlds. Since Snaiad artwork was first published by Kösemen online, the project has garnered a following and international attention, especially online. Fans of Snaiad have produced fan art, not only of Kösemen's own creatures but also of their own imagined Snaiadi creatures, consistent with the biological principles followed by the rest of the project's lifeforms. Kösemen hopes to eventually publish the project in book form. Premise Snaiad is an exoplanet slightly larger than Earth located outside of the Milky Way. The planet is home to a large variety of fauna, which Kösemen has designed and meticulously documented, along with creating maps and a geopolitical story of Snaiad as it undergoes the process of human colonization about 300 years in the future, Snaiad being one of Earth's first interstellar colonies. According to science writer Darren Naish, Snaiad "might well break the record as goes the number of fictional entities invented so far" due to the sheer number of lineages and lifeforms designed by Kösemen. The equivalents of tetrapods on Snaiad, so-called 'para-tetrapods' have two "heads"; the first head, typically most similar to heads on Earth animals is a modified set of genitalia which is also used to catch or grab food, and the second head, located below, has only a digestive function. The 'para-tetrapods' of Snaiad have hydraulic muscles (i.e. powered by fluids being pumped in and out) and their skeletons are not made of calcium, but a carbon-based mixture more similar to very hard wood. The bones of Snaiad creatures are usually black, brown or green and can catch fire at high temperatures. Just as with Earth animals, the creatures of Snaiad are classified into different taxonomic groupings, such as the "Allotauriformes" (large and armoured herbivores), the "Cardiocetoida" (whale-like aquatic creatures with jet propulsion) and the "Kahydroniformes" (intelligent predators with hooves and claws). One of the most successful and widespread predators of Snaiad is the Kahydron, with forelimbs strengthened with claws and hooves. The hydraulic muscles of the Kahydron stretch to its cheekbones, which gives it an exceptionally strong bite. Other than the human colonists, there are no sapient lifeforms on Snaiad. According to Kösemen: 'intelligence is not a "goal" of the evolutionary process, nothing is.' Project history Snaiad was inspired by Wayne Barlowe's 1990 book Expedition, which describes the exploration and the wildlife of an alien world. Other inspirational works included Gert van Dijk's "Furaha" (a similar project focusing on an alien world), the art of Terryl Whitlatch, James Gurney's Dinotopia book series and the works of naturalist painters, such as John James Audubon. The first steps to beginning the project were taken in 2004 or 2005, when Kösemen, still a student, began drawing alien creatures. Kösemen initially called the fictional planet 'Snai 4'. 
As Kösemen produced more sketches and eventually paintings, ecological niches and relationships between the animals materialized almost on their own; according to Kösemen "It evolved organically. I corrected things and added new ones. It was more evolution than design!" Kösemen worked "non-stop" on Snaiad until 2007, though work on the project has continued sporadically after that. Snaiad's official website was launched in June of 2008. The Snaiad website would later go offline for many years, but was relaunched in 2014. Later that year, in August, Kösemen held a talk about Snaiad at the 72nd World Science Fiction Convention in London. At the talk, Kösemen discussed the development of the project and shared never-before-seen early renditions of creatures. By the time of the 2014 talk, Kösemen had around 200 colorful paintings of Snaiad creatures and a backlog of nearly 500 concepts. Despite not yet having been published in any other form than online, Snaiad has garnered a following. 'Snaiad' has more google search hits than Kösemen's own name, and the project grew especially popular on the art-sharing website DeviantArt, where amateur artists drew fan art of Kösemen's creatures and also invented their own Snaiadi creatures. Fans of Snaiad have also created YouTube animations and digital as well as physical models of Kösemen's creatures. Per Darren Naish, writing in response to Kösemen's 2014 talk, "The mass appeal of Snaiad was demonstrated by the invention of Spore versions of Snaiad creatures, by the number of website mentions, by fan-art of assorted genres, and by the enthusiasm of the attending audience." Kösemen attributes the appeal of Snaiad to the project being somewhat "open source", with Kösemen allowing fan art and even 'canonizing' the fan creations he liked the best, and the established common body plans and anatomy, creating the possibility of consistent creations. Kösemen has expressed interest in eventually publishing Snaiad in book form. References External links Life on Snaiad – official website of the project The Story of Snaiad – Kösemen's 2014 talk at the 72nd World Science Fiction Convention Speculative evolution Fictional planets
Snaiad
Biology
1,257
15,168,509
https://en.wikipedia.org/wiki/Orion%20Society
The Orion Society is a United States non-profit organization that engages environmental and cultural issues through publication of books, magazines, and educational materials, and facilitation of informational networks. It was founded in 1992 and is based in Great Barrington, Massachusetts. The Society is probably best known as publisher of Orion magazine. See also Conservation ethic Conservation movement Ecology movement Environmentalism List of environmental organizations Sustainability References External links Orion Magazine Environmental education Environmental organizations based in Massachusetts Non-profit organizations based in Massachusetts
Orion Society
Environmental_science
101
5,175,930
https://en.wikipedia.org/wiki/Pill%20splitting
Pill-splitting refers to the practice of splitting a tablet or pill to provide a lower dose of the active ingredient, or to obtain multiple smaller doses, either to reduce cost or because the pills available provide a larger dose than required. Many pills that are suitable for splitting (aspirin tablets for instance) come pre-scored so that they may easily be halved. The practice is also referred to as tablet scoring. It is unsafe to split some prescription medications. Pill splitters A pill-splitter is a simple and inexpensive device to split medicinal pills or tablets, comprising some means of holding the tablet in place, a blade, and usually a compartment in which to store the unused part. The tablet is positioned, and the blade pressed down to split it. With care it is often possible to cut a tablet into quarters. Also available as consumer items are multiple pill splitters, which cut numerous round or oblong pills in one operation. Pill scoring A drug manufacturer may score pills with a groove to both indicate that a pill may be split and to aid the practice of splitting pills. When manufacturers do create grooves in pills, the groove must be consistent for consumers to be able to use them effectively. Many manufacturers choose to not use grooves. The United States government Center for Drug Evaluation and Research makes the following recommendations for manufacturers when scoring pills with grooves: Pills should only have grooves if the split dosage is at least the minimum therapeutic dosage of the medication The split pill should not create a toxicity hazard Drugs which should not be split should not be scored with a groove The split pill should be stable for the expected temperature and humidity The split pill should have an equivalent effect to a full pill at an equivalent dose Dosage uniformity In the U.S. "uniformity of dosage units" is defined by the United States Pharmacopeia (USP), which describes itself as "the official public standards-setting authority for all prescription and over-the-counter medicines, dietary supplements, and other healthcare products manufactured and sold in the United States." More than 140 countries develop or rely upon US pharmaceutical standards according to the USP. The USP standard for dosage uniformity expresses statistical criteria in the complex language of sampling protocols. The pharmaceutical dosage literature sometimes boils this down as requiring a standard deviation in dosage weight of less than 6%, which roughly corresponds to the weaker rule-of-thumb offered for public consumption that the vast majority of dosage units should be within 15% of the dosage target. "Dosage unit" is a technical term which covers oral medications (tablets, pills, capsules), as well as non-oral delivery methods. A 2002 study of pill-splitting as conducted in four American long-term care facilities determined that 15 of the 22 dispensed prescriptions evaluated (68%) had fragment weight variance in excess of USP standards. Cost savings Pill-splitting can be used to save money on pharmaceutical costs, as many prescription pharmaceuticals are sold at prices less than proportional to the dose. For example, a 10 mg tablet of a drug might be sold for the same or nearly the same price as a 5 mg tablet. Splitting 10 mg tablets allows the patient to purchase half the number of tablets at a lower price than the same weight of 5 mg tablets. 
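A back-of-the-envelope sketch of the cost effect described above, using assumed prices that are not from the article:

```python
# Hypothetical illustration of "flat pricing": a 10 mg tablet often costs nearly
# the same as a 5 mg tablet, so splitting the higher strength roughly halves the cost.
def monthly_cost(price_per_tablet: float, tablets_per_day: float, days: int = 30) -> float:
    return price_per_tablet * tablets_per_day * days

price_5mg, price_10mg = 1.00, 1.10   # assumed prices per tablet, in dollars

cost_whole = monthly_cost(price_5mg, 1.0)    # one 5 mg tablet per day
cost_split = monthly_cost(price_10mg, 0.5)   # half of a 10 mg tablet per day
print(cost_whole, cost_split, cost_whole - cost_split)  # 30.0, 16.5, savings of 13.5
```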
Both specialist and generalist physicians are not sufficiently aware of and do not communicate with patients about the cost to them of medication. Some potentially suitable medications Randall Stafford of the Stanford School of Medicine published a study in 2002 of common prescription medications in the United States in which he evaluates pill splitting for "potential cost savings and clinical appropriateness". The study identifies eleven prescription medications that satisfied the study criteria, based on the American pharmaceutical cost structure, pill formulation, and dosages of the time. Most of the medications listed in the table from the psychiatric drug class are antidepressants. Uniformity of split Not all tablets split equally well. In a 2002 study, Paxil, Zestril and Zoloft split cleanly with 0% rejects. Glucophage was described as a hard tablet, requiring significant force, causing tablet halves to fly. Glyburide exhibited very poor splitting with many splitting into multiple pieces. Hydrodiuril and Oretic crumbled. Lipitor did not split cleanly, and the coating peeled. The diamond shaped Viagra tablets made location of the midline difficult. The worst result reported was Oretic 25 mg in which 60% of tablets failed to split to within 15% of target weight. Alternative purpose Some drugs have a few different uses, and are usually sold in different packages and different doses for different applications. The price for some applications may be very different from that for other purposes. One example is Minoxidil, which is well known as a hair-growth stimulant; the same drug under the name Loniten is used for blood pressure control in much larger doses at a much lower price per unit weight. Risks The Food and Drug Administration (FDA) has called pill splitting "risky". At the same time, the FDA approves the manufacture of pills which are intended to be split. Splitting pills may result in uneven splitting and creating pieces which will not deliver accurate dosage. Pills which are split might not be correctly halved, making the cut pieces unequal in size. Some pills are difficult to split. Some pills (particularly some time release drugs) are unsafe to split, and there could be mistakes in identifying when pills should not be split. Lawsuits In a California court filing dated April 2001, Trial Lawyers for Public Justice (TLPJ) brought a class-action lawsuit against Kaiser Permanente (Timmis v. Kaiser Permanente) on the grounds that "Kaiser's mandatory pill-splitting policy endangers patients' health solely to enhance the HMO's profits" in violation of the California Unfair Competition Law (UCL) and the California Consumer Legal Remedies Act (CLRA). In December 2004, the California Court of Appeal affirmed the trial court ruling that Kaiser's policy did not violate UCL or CLRA, noting the suit had failed to present evidence that the policy was unsafe. See also Inverse benefit law References Drug delivery devices Pharmacy
Pill splitting
Chemistry
1,271
14,062,693
https://en.wikipedia.org/wiki/DIMBOA
DIMBOA (2,4-dihydroxy-7-methoxy-1,4-benzoxazin-3-one) is a naturally occurring hydroxamic acid, a benzoxazinoid. DIMBOA is a powerful antibiotic present in maize, wheat, rye, and related grasses. DIMBOA was first identified in maize in 1962 as the "corn sweet substance". Etiolated maize seedlings have a very sweet, almost saccharin-like taste due to their high DIMBOA content. The biosynthesis pathway leading from maize primary metabolism to the production of DIMBOA has been fully identified. DIMBOA is stored as an inactive precursor, DIMBOA-glucoside, which is activated by glucosidases in response to insect feeding. In maize, DIMBOA functions as a natural defense against European corn borer (Ostrinia nubilalis) larvae, beet armyworms (Spodoptera exigua), corn leaf aphids (Rhopalosiphum maidis), other damaging insect pests, and pathogens, including fungi and bacteria. The exact level of DIMBOA varies between individual plants, but higher concentrations are typically found in young seedlings, and the concentration decreases as the plant ages. Natural variation in the Bx1 gene influences the DIMBOA content of maize seedlings. In adult maize plants, the DIMBOA concentration is low, but it is induced rapidly in response to insect feeding. The methyltransferases Bx10, Bx11, and Bx12 convert DIMBOA into HDMBOA (2-hydroxy-4,7-dimethoxy-1,4-benzoxazin-3-one), which can be more toxic to insect herbivores. In addition to serving as a direct defensive compound due to its toxicity, DIMBOA can also function as a signaling molecule, leading to the accumulation of callose in response to treatment with chitosan (a fungal elicitor) and to aphid feeding. DIMBOA can also form complexes with iron in the rhizosphere and thereby enhance maize iron supply. Specialized insect pests such as the western corn rootworm (Diabrotica virgifera virgifera) can detect complexes between DIMBOA and iron and use these complexes for host identification and foraging. References Hydroxamic acids Benzoxazines Resorcinol ethers Lactols Lactams
DIMBOA
Chemistry
522