Dataset columns: id (int64, values 580 to 79M), url (string, lengths 31 to 175), text (string, lengths 9 to 245k), source (string, lengths 1 to 109), categories (string, 160 classes), token_count (int64, values 3 to 51.8k)
49,883,827
https://en.wikipedia.org/wiki/Mikhailo%20Lomonosov%20%28satellite%29
Mikhailo Lomonosov (MVL-300, or Mikhailo, or more commonly Lomonosov; MVL stands for Mikhail Vasilyevich Lomonosov) was an astronomical satellite operated by Moscow State University (MSU) and named after Mikhail Lomonosov. Mission The objective of the mission was the observation of gamma-ray bursts, high-energy cosmic rays and transient phenomena in the Earth's upper atmosphere. Launch The launch was initially planned for 2011, the year in which the 300th anniversary of Mikhail Lomonosov's birth was celebrated. After several postponements, the mission was finally launched on 28 April 2016 from the Vostochny Cosmodrome by a Soyuz-2.1a launch vehicle, in the first launch from the new cosmodrome. Scientific payload The spacecraft was equipped with seven scientific instruments: Tracking Ultraviolet Set-Up system (TUS) was designed to measure the fluorescence light radiated by extensive air showers (EAS) of ultra-high-energy cosmic rays (UHECR) in the Earth's atmosphere, as well as to study transients in the UV range; it was the first space-based instrument dedicated to these phenomena, and the TUS project started in 2001. Block for X-ray and gamma-radiation detection (BDRG) was intended for detecting and monitoring gamma-ray bursts and for producing a trigger signal for the ShOK cameras (see below); UFFO consists of an X-ray telescope and a 10 cm UV telescope intended for studying gamma-ray bursts; Optic cameras of super-wide field of vision (ShOK) is a pair of wide-field optical cameras whose main purpose is prompt detection of the optical radiation of gamma-ray bursts after receiving trigger signals from BDRG; Dosimeter of Electrons, PROtons and Neutrons (DEPRON) measures absorbed doses and spectra of electrons, protons, neutrons and heavy nuclei; Electron Loss and Fields Investigator for Lomonosov (ELFIN-L) comprises the Energetic Particle Detector for Electrons (EPDE), the Energetic Proton Detector for Ions (EPDI) and a Flux Gate Magnetometer (FGM), and its main purpose is to study energetic particles in the Earth's magnetosphere; IMISS-1 is a device intended to test microelectromechanical inertial modules. End of mission The TUS telescope aboard Lomonosov stopped collecting data in late 2017. On 30 June 2018, it was reported that the Lomonosov satellite had suffered a malfunction in its data transmission system; attempts to fix the problem were underway but had so far been unsuccessful. As of 14 January 2019, the problem had not been solved and all the scientific equipment of the satellite was powered off, although recovery attempts continued (some systems of the satellite were responsive; the problem was with the scientific payload systems). Before succumbing to these difficulties, the satellite had operated for its intended purpose for about one and a half years. With the failure of Lomonosov and the end of the Spektr-R mission on 30 May 2019, the Russian space program lost both of its scientific satellites until the launch of Spektr-RG in July 2019. The satellite decayed from orbit on 16 December 2023. References External links Mikhailo Lomonosov at Russianspaceweb.com Space telescopes Spacecraft launched in 2016 Satellites of Russia 2016 in Russia Earth observation satellites of Russia Spacecraft launched by Soyuz-2 rockets
Mikhailo Lomonosov (satellite)
Astronomy
704
19,132,640
https://en.wikipedia.org/wiki/Leopold%20Halliday%20Savile
Sir Leopold Halliday Savile, KCB (31 August 1870 – 28 January 1953) was a Scottish civil engineer. Savile was born at Bridge of Earn, Perthshire, the son of Lieutenant-Colonel John Walter Savile and Sarah Emma Stoddart. He was a great grandson of the MP Christopher Atkinson (later Savile). He was educated at Marlborough College and King's College London. He was a pupil of Sir John Wolfe Barry and Henry Marc Brunel from 1891–96. In 1931, Savile was elected a member of the Smeatonian Society of Civil Engineers and was their president in 1948. Savile was appointed to the panel of qualified civil engineers required by the Reservoirs Act of 1930 where he was responsible for the design, construction and inspection of reservoirs. At this time he was working for Alexander Gibb and partners. He served as president of the Institution of Civil Engineers between November 1940 and November 1941. He was appointed a Companion of the Order of the Bath on 1 January 1925. He was appointed a Knight Commander of the same order on 1 March 1929. In 1904, Savile married Evelyn Stileman, daughter of Frank Stileman, consulting engineer to the Furness Railway Company. They had one daughter. After her death in 1920, he married secondly, in 1929, Lilith Savile (who was his first cousin once removed), daughter of Brigadier-General Walter Clare Savile. He died in 1953. References 1870 births 1953 deaths People educated at Marlborough College Alumni of King's College London British civil engineers Scottish civil engineers Presidents of the Institution of Civil Engineers Presidents of the Smeatonian Society of Civil Engineers Knights Commander of the Order of the Bath
Leopold Halliday Savile
Engineering
354
33,757,605
https://en.wikipedia.org/wiki/Tylopilus%20atronicotianus
Tylopilus atronicotianus, commonly known as the false black velvet bolete, is a bolete fungus in the family Boletaceae. First described scientifically in 1998, it is known only from the eastern United States. Taxonomy The species was first described scientifically by Ernst Both, curator emeritus in mycology at the Buffalo Museum of Science, based on specimens he found growing in New York state. The specific epithet atronicotianus means "dark tobacco", and refers to the color of the cap. The mushroom is commonly known as the "false black velvet bolete". Description The cap ranges in shape from hemispheric to broadly convex to flattened depending on its age, and it is usually between in diameter. The cap margin is rolled inward in young specimens and unrolls as it matures. The cap surface is dry, smooth, and slightly shiny; its color ranges from light brown to olive-brown, although it tends to be darker in age. The flesh is whitish, but after it is cut or injured, it will slowly stain pink to pinkish-red, eventually becoming black. The pore surface on the underside of the cap is initially white before turning reddish-brown in age. The pores are small and angular (up to 1.5 millimeters wide), and the tubes comprising the pores are deep. They are a bright brown color, and will stain black when injured. The stem is solid (not hollow), and measures long by ; it is roughly equal in width throughout its length or tapered at either end. The color of the stem is grayish to dark brown, and almost black at the base. The stem surface is finely tomentose (covered with short, dense, matted hairs), and usually lacks reticulations (a net-like pattern of ridges present in some Tylopilus species), although it may be finely reticulated near the apex. The stem flesh is grayish to blackish in color. Mushrooms produce a reddish-brown spore print, while the spores themselves are narrowly oval, smooth, hyaline (translucent), and measure 7.5–10.5 by 4–5 μm. The edibility of the mushroom has not been determined. Fruit bodies have been used in mushroom dyeing to produce a variety of brownish colors. The "black velvet bolete", Tylopilus alboater, is roughly similar in appearance, but is distinguished by a blacker cap with less brown color and a velvety cap texture. Habitat and distribution Tylopilus atronicotianus is a mycorrhizal species, and is found in mixed tree stands with deciduous trees such as red oak, beech, and hemlock. The fruit bodies grow on the ground solitarily, scattered, or in groups. The species is fairly common in its range, which includes western New York and West Virginia, although the true limits of its distribution have yet to be precisely determined. The Mar Lodge Estate in Scotland claims to have the only site in the world for the black false bolete, a claim published in the National Trust for Scotland autumn and winter magazine in 2022. See also List of North American boletes References External links atronicotianus Fungi described in 1998 Fungi of North America Fungus species
Tylopilus atronicotianus
Biology
675
33,584,425
https://en.wikipedia.org/wiki/Acceleromyograph
An acceleromyograph is a piezoelectric myograph used to measure the force produced by a muscle after it has undergone nerve stimulation. Acceleromyographs may be used during anaesthesia, when muscle relaxants are administered, to measure the depth of neuromuscular blockade and to assess the adequacy of recovery from these agents at the end of surgery. Acceleromyography is classified as quantitative neuromuscular monitoring. Rationale Patients who undergo anesthesia may receive a drug that paralyzes muscles, facilitating endotracheal intubation and improving operating conditions for the surgeon. Longer-acting drugs have a higher prevalence of residual blockade in the PACU or ICU than shorter-acting drugs. Different clinical tests to measure or exclude evidence of residual muscle weakness have been described, but they cannot exclude postoperative residual curarization. Small degrees of muscle blockade can only be measured accurately by the use of quantitative neuromuscular monitoring. Specifically, an observer cannot reliably detect muscular fade when train-of-four ratios are between 0.4 and 0.9. Acceleromyograph design Acceleromyographs measure muscle activity using a miniature piezoelectric transducer that is attached to the stimulated muscle. A voltage is created when the muscle accelerates, and the acceleration is proportional to the force of contraction. Because the mass of the piezoelectric transducer is known and the acceleration is measured, the force can be calculated (force = mass × acceleration). Acceleromyographs are more costly than the more common twitch monitors, but have been shown to reduce residual blockade and the associated symptoms of muscle weakness, and to improve overall quality of recovery. See also Muscle relaxant Myograph Neuromuscular-blocking drug Neuromuscular monitoring References External links Comparison of mechanomyography and acceleromyography in myotonic dystrophy type 1 at Respond2Articles.com Anesthesia Muscular system Medical devices
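A minimal sketch in Python of the force calculation and train-of-four ratio described above; the transducer mass and acceleration values are invented for illustration and are not specifications of any real monitor.

# Minimal sketch of the force calculation used in acceleromyography.
# The transducer mass and the acceleration samples below are illustrative
# values only, not parameters of any real device.

transducer_mass_kg = 0.005                   # assumed 5 g piezoelectric transducer
accelerations_m_s2 = [9.2, 8.7, 8.1, 4.0]    # assumed peak accelerations for four twitches

# Force = mass x acceleration for each stimulated twitch
forces_n = [transducer_mass_kg * a for a in accelerations_m_s2]

# Train-of-four (TOF) ratio: fourth response divided by the first.
tof_ratio = forces_n[3] / forces_n[0]

print("Twitch forces (N):", [round(f, 4) for f in forces_n])
print("TOF ratio:", round(tof_ratio, 2))     # ratios below ~0.9 suggest residual blockade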
Acceleromyograph
Biology
408
73,818,522
https://en.wikipedia.org/wiki/Bich-Yen%20Nguyen
Bich-Yen Nguyen is a Vietnamese electronics engineer specializing in advanced materials and technologies for integrated circuits. Educated in the US, she works in France as a senior fellow for Soitec, working on silicon on insulator technology. Education and career Nguyen is the daughter of a South Vietnamese soldier who died when she was young, leaving her family poor. Despite this hardship, her mother continued to send her to a boarding school, and then to the University of Texas at Austin in the US for her university education. While she was there, the Fall of Saigon in 1975 left the rest of her family as refugees, and she helped them resettle in the US. She graduated in 1977, with a bachelor's degree in chemical engineering. After working briefly for the city of Austin, Texas, she began working for Motorola in 1980. Her work there included the development of the multiple-independent-gate field-effect transistor (MIGFET), as well as CMOS technology. As head of advanced transistor development activities at 2004 Motorola spinoff Freescale Semiconductor, she participated in the Crolles Alliance, an international collaboration on CMOS that extended from 2002 to 2007. She was hired by Soitec as a substrate design engineer in 2007. Recognition Before leaving Motorola, Nguyen was a Motorola Distinguished Innovator and Dan Noble Fellow. She was one of the winners of the 2004 Women of Color Technology Awards. Her work as a vice president of Soitec was highlighted in the 2010 video production Paris by Night 99, honoring successful Vietnamese expatriates worldwide. She was named an IEEE Fellow, in the 2020 class of fellows, "for contributions to silicon on insulator technology". References External links Year of birth missing (living people) Living people Vietnamese engineers Electronics engineers Women electrical engineers University of Texas at Austin alumni Fellows of the IEEE
Bich-Yen Nguyen
Engineering
375
28,633,915
https://en.wikipedia.org/wiki/Stanislav%20Range%20Front%20Light
Stanislav Range Front Light, or Small Adzhyhol Lighthouse, is an active lighthouse and range light, located on a concrete pier on a tiny islet about west northwest of Rybalche, about from Kherson, Ukraine. Together with the Adziogol lighthouse, located 109° from it, it serves to guide ships entering the Dnieper River. The lighthouse is a vertical lattice hyperboloid structure of steel bars, designed in 1910 by Vladimir Shukhov. A watch room is enclosed by the structure, and a one-storey lighthouse keeper's house adjoins the lighthouse. The site of the tower is accessible only by boat. The site is open to the public, but the tower is closed. See also List of lighthouses in Ukraine Thin-shell structure List of hyperboloid structures List of thin shell structures References External links Lighthouses completed in 1911 Lattice shell structures by Vladimir Shukhov Hyperboloid structures Lighthouses in Ukraine Dnieper–Bug estuary Buildings and structures in Kherson Oblast 1911 establishments in the Russian Empire
Stanislav Range Front Light
Technology
209
55,537,059
https://en.wikipedia.org/wiki/HD%20240429%20and%20HD%20240430
HD 240429 (nicknamed Krios) and HD 240430 (Kronos) form a wide binary star system in the constellation of Cassiopeia. Both components of the system are yellow G-type main-sequence stars. HD 240430 is Sun-like in appearance, but it appears to have consumed some of its own planets, for which it was given the nickname Kronos, after the Greek god and leader of the first generation of Titans, who devoured his own children. Its unusual properties were described in 2017 by a team of astrophysicists at Princeton University led by Semyeong Oh. Kronos and Krios are about 350 light years away from Earth. Formed around four billion years ago, they originated from the same interstellar cloud. They are moving together through space and are assumed to orbit each other slowly, with an estimated period of about 10,000 years. Kronos has a higher abundance of elements such as lithium, magnesium and iron in its atmosphere than Krios does. They are the most chemically different binary stars to have been discovered to date. The unusual and rich chemical composition leads scientists to the conclusion that Kronos has destroyed many of its orbiting planets; according to estimates, it might have absorbed at least 15 Earth masses. See also HD 134439/HD 134440 References External links Astronomical objects discovered in 2017 Cassiopeia (constellation) Binary stars G-type main-sequence stars
HD 240429 and HD 240430
Astronomy
308
838,142
https://en.wikipedia.org/wiki/Addressing%20mode
Addressing modes are an aspect of the instruction set architecture in most central processing unit (CPU) designs. The various addressing modes that are defined in a given instruction set architecture define how the machine language instructions in that architecture identify the operand(s) of each instruction. An addressing mode specifies how to calculate the effective memory address of an operand by using information held in registers and/or constants contained within a machine instruction or elsewhere. In computer programming, addressing modes are primarily of interest to those who write in assembly languages and to compiler writers. For a related concept see orthogonal instruction set which deals with the ability of any instruction to use any addressing mode. Caveats There are no generally accepted names for addressing modes: different authors and computer manufacturers may give different names to the same addressing mode, or the same names to different addressing modes. Furthermore, an addressing mode which, in one given architecture, is treated as a single addressing mode may represent functionality that, in another architecture, is covered by two or more addressing modes. For example, some complex instruction set computer (CISC) architectures, such as the Digital Equipment Corporation (DEC) VAX, treat registers and literal or immediate constants as just another addressing mode. Others, such as the IBM System/360 and its successors, and most reduced instruction set computer (RISC) designs, encode this information within the instruction. Thus, the latter machines have three distinct instruction codes for copying one register to another, copying a literal constant into a register, and copying the contents of a memory location into a register, while the VAX has only a single "MOV" instruction. The term "addressing mode" is itself subject to different interpretations: either "memory address calculation mode" or "operand accessing mode". Under the first interpretation, instructions that do not read from memory or write to memory (such as "add literal to register") are considered not to have an "addressing mode". The second interpretation allows for machines such as VAX which use operand mode bits to allow for a register or for a literal operand. Only the first interpretation applies to instructions such as "load effective address," which loads the address of the operand, not the operand itself. The addressing modes listed below are divided into code addressing and data addressing. Most computer architectures maintain this distinction, but there are (or have been) some architectures which allow (almost) all addressing modes to be used in any context. The instructions shown below are purely representative in order to illustrate the addressing modes, and do not necessarily reflect the mnemonics used by any particular computer. Some computers, e.g., IBM 709, RCA 3301, do not have a single address mode field but rather have separate fields for indirect addressing and indexing. Number of addressing modes Computer architectures vary greatly as to the number of addressing modes they provide in hardware. There are some benefits to eliminating complex addressing modes and using only one or a few simpler addressing modes, even though it requires a few extra instructions, and perhaps an extra register. It has proven much easier to design pipelined CPUs if the only addressing modes available are simple ones. 
Most RISC architectures have only about five simple addressing modes, while CISC architectures such as the DEC VAX have over a dozen addressing modes, some of which are quite complicated. The IBM System/360 architecture has only four addressing modes; a few more have been added for the ESA/390 architecture. When there are only a few addressing modes, the particular addressing mode required is usually encoded within the instruction code (e.g. IBM System/360 and successors, most RISC). But when there are many addressing modes, a specific field is often set aside in the instruction to specify the addressing mode. The DEC VAX allowed multiple memory operands for almost all instructions, and so reserved the first few bits of each operand specifier to indicate the addressing mode for that particular operand. Keeping the addressing mode specifier bits separate from the opcode operation bits produces an orthogonal instruction set. Even on a computer with many addressing modes, measurements of actual programs indicate that the simple addressing modes listed below account for some 90% or more of all addressing modes used. Since most such measurements are based on code generated from high-level languages by compilers, this reflects to some extent the limitations of the compilers being used. Important use case Some instruction set architectures, such as Intel x86 and IBM/360 and its successors, have a load effective address instruction. This calculates the effective operand address and loads it into a register, without accessing the memory it refers to. This can be useful when passing the address of an array element to a subroutine. It may also be a clever way of doing more calculations than normal in one instruction; for example, using such an instruction with the addressing mode "base+index+offset" (detailed below) allows one to add two registers and a constant together in one instruction and store the result in a third register. Simple addressing modes for code Some simple addressing modes for code are shown below. The nomenclature may vary depending on platform. Absolute or direct +----+------------------------------+ |jump| address | +----+------------------------------+ (Effective PC address = address) The effective address for an absolute instruction address is the address parameter itself with no modifications. PC-relative +----+------------------------------+ |jump| offset | jump relative +----+------------------------------+ (Effective PC address = next instruction address + offset, offset may be negative) The effective address for a PC-relative instruction address is the offset parameter added to the address of the next instruction. This offset is usually signed to allow reference to code both before and after the instruction. This is particularly useful in connection with jump instructions, because typical jumps are to nearby instructions (in a high-level language most if or while statements are reasonably short). Measurements of actual programs suggest that an 8 or 10 bit offset is large enough for some 90% of conditional jumps (roughly ±128 or ±512 bytes). For jumps to instructions that are not nearby, other addressing modes are used. Another advantage of PC-relative addressing is that the code may be position-independent, i.e. it can be loaded anywhere in memory without the need to adjust any addresses. 
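A minimal sketch in Python of how the two jump target calculations above differ; the fixed 4-byte instruction size, the addresses and the offsets are assumed illustrative values, not the encoding of any particular architecture.

# Minimal sketch of absolute vs. PC-relative jump targets.
# Instruction width, addresses and offsets are invented for illustration.

INSTRUCTION_SIZE = 4   # assumed fixed 4-byte instructions

def absolute_target(address_field):
    # Effective PC address = the address field itself
    return address_field

def pc_relative_target(pc, offset):
    # Effective PC address = next instruction address + signed offset
    return pc + INSTRUCTION_SIZE + offset

# A jump located at 0x1000 that targets 0x1010:
print(hex(absolute_target(0x1010)))            # 0x1010
print(hex(pc_relative_target(0x1000, 0x0C)))   # 0x1010

# If the same code is loaded 0x2000 bytes higher, the PC-relative form
# still reaches the (relocated) target without any address fix-up:
print(hex(pc_relative_target(0x3000, 0x0C)))   # 0x3010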
Register indirect +-------+-----+ |jumpVia| reg | +-------+-----+ (Effective PC address = contents of register 'reg') The effective address for a register indirect instruction is the address in the specified register. For example, (A7) to access the content of address register A7. The effect is to transfer control to the instruction whose address is in the specified register. Many RISC machines, as well as the CISC IBM System/360 and successors, have subroutine call instructions that place the return address in an address register—the register indirect addressing mode is used to return from that subroutine call. Sequential addressing modes for code Sequential execution +------+ | nop | execute the following instruction +------+ (Effective PC address = next instruction address) The CPU, after executing a sequential instruction, immediately executes the following instruction. Sequential execution is not considered to be an addressing mode on some computers. Most instructions on most CPU architectures are sequential instructions. Because most instructions are sequential instructions, CPU designers often add features that deliberately sacrifice performance on the other instructions—branch instructions—in order to make these sequential instructions run faster. Conditional branches load the PC with one of two possible results, depending on the condition—most CPU architectures use some other addressing mode for the "taken" branch, and sequential execution for the "not taken" branch. Many features in modern CPUs—instruction prefetch and more complex pipelining, out-of-order execution, etc.—maintain the illusion that each instruction finishes before the next one begins, giving the same final results, even though that's not exactly what happens internally. Each "basic block" of such sequential instructions exhibits both temporal and spatial locality of reference. CPUs that do not use sequential execution CPUs that do not use sequential execution with a program counter are extremely rare. In some CPUs, each instruction always specifies the address of the next instruction. Such CPUs have an instruction pointer that holds that specified address; it is not a program counter because there is no provision for incrementing it. Such CPUs include some drum memory computers such as the IBM 650, the SECD machine, the Librascope RPC 4000, and the RTX 32P. On processors implemented with horizontal microcode, the microinstruction may contain the high-order bits of the next instruction address. Other computing architectures go much further, attempting to bypass the von Neumann bottleneck using a variety of alternatives to the program counter. Conditional execution Some computer architectures have conditional instructions (such as ARM, but no longer for all instructions in 64-bit mode) or conditional load instructions (such as x86) which can in some cases make conditional branches unnecessary and avoid flushing the instruction pipeline. An instruction such as a 'compare' is used to set a condition code, and subsequent instructions include a test on that condition code to see whether they are obeyed or ignored. Skip +------+-----+-----+ |skipEQ| reg1| reg2| skip the next instruction if reg1=reg2 +------+-----+-----+ (Effective PC address = next instruction address + 1) Skip addressing may be considered a special kind of PC-relative addressing mode with a fixed "+1" offset. 
Like PC-relative addressing, some CPUs have versions of this addressing mode that only refer to one register ("skip if reg1=0") or no registers, implicitly referring to some previously-set bit in the status register. Other CPUs have a version that selects a specific bit in a specific byte to test ("skip if bit 7 of reg12 is 0"). Unlike all other conditional branches, a "skip" instruction never needs to flush the instruction pipeline, though it may need to cause the next instruction to be ignored. Simple addressing modes for data Some simple addressing modes for data are shown below. The nomenclature may vary depending on platform. Register (or, register direct) +------+-----+-----+-----+ | mul | reg1| reg2| reg3| reg1 := reg2 * reg3; +------+-----+-----+-----+ This "addressing mode" does not have an effective address and is not considered to be an addressing mode on some computers. In this example, all the operands are in registers, and the result is placed in a register. Base plus offset, and variations This is sometimes referred to as 'base plus displacement' or 'truncated'. +------+-----+-----+----------------+ | load | reg | base| offset | reg := RAM[base + offset] +------+-----+-----+----------------+ (Effective address = offset + contents of specified base register) If the offset is zero, this becomes an example of register indirect addressing; the effective address is just the value in the base register. On many RISC machines, register 0 is fixed at the value zero. If register 0 is used as the base register, this becomes an example of absolute addressing. However, only a small portion of memory can be accessed. The offset is often small in relation to the size of current computer memories. However, the principle of locality of reference applies: over a short time span, most of the data items a program wants to access are fairly close to each other. This addressing mode is closely related to the indexed absolute addressing mode. Example 1: Within a subroutine a programmer will mainly be interested in the parameters and the local variables, for which one base register (the frame pointer) usually suffices. If this routine is a class method in an object-oriented language, then a second base register is needed which points at the attributes for the current object (this or self in some high level languages). Example 2: If the base register contains the address of a composite type (a record or structure), the offset can usually be used to select a field from that record. Immediate/literal +------+-----+-----+----------------+ | add | reg1| reg2| constant | reg1 := reg2 + constant; +------+-----+-----+----------------+ This "addressing mode" does not have an effective address, and is not considered to be an addressing mode on some computers. The constant might be signed or unsigned. For example, move.l #$FEEDABBA, D0 to move the immediate hex value of "FEEDABBA" into register D0. Instead of using an operand from memory, the value of the operand is held within the instruction itself. On the DEC VAX machine, the literal operand sizes could be 6, 8, 16, or 32 bits long. Andrew Tanenbaum showed that 98% of all the constants in a program would fit in 13 bits (see RISC design philosophy). 
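A minimal sketch in Python of the base-plus-offset and immediate operands above, using an invented register file and memory; the register names, the addresses and the zero-register convention are assumptions for illustration only.

# Minimal sketch of base-plus-offset and immediate operands, using a toy
# register file and byte-addressable memory; names and sizes are invented.

memory = {0x2000 + 4 * i: 100 + i for i in range(8)}   # a small array in "RAM"
regs = {"r1": 0, "r2": 0x2000, "r0": 0}                # r0 hardwired to zero

def load_base_offset(dest, base, offset):
    # reg := RAM[base + offset]
    regs[dest] = memory[regs[base] + offset]

def add_immediate(dest, src, constant):
    # reg1 := reg2 + constant (the constant is held in the instruction itself)
    regs[dest] = regs[src] + constant

load_base_offset("r1", "r2", 8)    # third array element -> r1
add_immediate("r1", "r1", 5)
print(regs["r1"])                  # 107

# With r0 fixed at zero, base-plus-offset degenerates to absolute addressing:
load_base_offset("r1", "r0", 0x2000)
print(regs["r1"])                  # 100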
Implicit +-----------------+ | clear carry bit | +-----------------+ +-------------------+ | clear Accumulator | +-------------------+ The implied addressing mode, also called the implicit addressing mode (x86 assembly language), does not explicitly specify an effective address for either the source or the destination (or sometimes both). Either the source (if any) or destination effective address (or sometimes both) is implied by the opcode. Implied addressing was quite common on older computers (up to mid-1970s). Such computers typically had only a single register in which arithmetic could be performed—the accumulator. Such accumulator machines implicitly reference that accumulator in almost every instruction. For example, the operation < a := b + c; > can be done using the sequence < load b; add c; store a; > -- the destination (the accumulator) is implied in every "load" and "add" instruction; the source (the accumulator) is implied in every "store" instruction. Later computers generally had more than one general-purpose register or RAM location which could be the source or destination or both for arithmetic—and so later computers need some other addressing mode to specify the source and destination of arithmetic. Among the x86 instructions, some use implicit registers for one of the operands or results (multiplication, division, counting conditional jump). Some instruction sets (such as x86 and AVR) have one special-purpose register called the stack pointer which is implicitly incremented or decremented when pushing or popping data from the stack, and the source or destination effective address is (implicitly) the address stored in that stack pointer. Some other instruction sets (such as m68k, ARM, and PowerPC) have more than one register that could be used as a stack pointer—and so use the "register autoincrement indirect" addressing mode to specify which of those registers should be used when pushing or popping data from a stack. Some current computer instruction sets (e.g. z/Architecture and IA-32/x86-64) contain some instructions with implicit operands in order to maintain backwards compatibility with earlier designs. On some instruction sets, instructions that flip the user/system mode bit, the interrupt-enable bit, etc. implicitly specify the special register that holds those bits. This simplifies the hardware necessary to trap those instructions in order to meet the Popek and Goldberg virtualization requirements—on such a system, the trap logic does not need to look at any operand (or at the final effective address), but only at the opcode. Some instruction sets have been designed where every operand is always implicitly specified in every instruction -- zero-operand CPUs. Other addressing modes for code or data Absolute/direct +------+-----+--------------------------------------+ | load | reg | address | +------+-----+--------------------------------------+ (Effective address = address as given in instruction) This requires space in an instruction for quite a large address. It is often available on CISC machines which have variable-length instructions, such as x86. Some RISC machines have a special Load Upper Literal instruction which places a 16- or 20-bit constant in the top half of a register. That can then be used as the base register in a base-plus-offset addressing mode which supplies the low-order 16 or 12 bits. The combination allows a full 32-bit address. 
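A minimal sketch in Python of the "load upper" plus base-plus-offset combination just described; the 16/16-bit split and the register name are assumptions, and the low half is treated as unsigned for simplicity (ISAs with signed offsets need a small adjustment to the upper constant).

# Minimal sketch of forming a full 32-bit address from a 16-bit "load upper"
# constant plus a 16-bit offset in a base-plus-offset access. Register names,
# addresses and values are invented for illustration.

memory = {0x1234ABCD: 42}
regs = {"r5": 0}

def load_upper(reg, constant16):
    # Place the constant in the top half of the register, clearing the rest.
    regs[reg] = (constant16 & 0xFFFF) << 16

def load_base_offset(dest, base, offset16):
    # reg := RAM[base + offset]
    regs[dest] = memory[regs[base] + (offset16 & 0xFFFF)]

load_upper("r5", 0x1234)
load_base_offset("r5", "r5", 0xABCD)
print(regs["r5"])   # 42, fetched from address 0x1234ABCD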
Indexed absolute +------+-----+-----+--------------------------------+ | load | reg |index| address | +------+-----+-----+--------------------------------+ (Effective address = address + contents of specified index register) This also requires space in an instruction for quite a large address. The address could be the start of an array or vector, and the index could select the particular array element required. The processor may scale the index register to allow for the size of each array element. Note that this is more or less the same as base-plus-offset addressing mode, except that the offset in this case is large enough to address any memory location. Example 1: Within a subroutine, a programmer may define a string as a local constant or a static variable. The address of the string is stored in the literal address in the instruction. The offset—which character of the string to use on this iteration of a loop—is stored in the index register. Example 2: A programmer may define several large arrays as globals or as class variables. The start of the array is stored in the literal address (perhaps modified at program-load time by a relocating loader) of the instruction that references it. The offset—which item from the array to use on this iteration of a loop—is stored in the index register. Often the instructions in a loop re-use the same register for the loop counter and the offsets of several arrays. Base plus index +------+-----+-----+-----+ | load | reg | base|index| +------+-----+-----+-----+ (Effective address = contents of specified base register + contents of specified index register) The base register could contain the start address of an array or vector, and the index could select the particular array element required. The processor may scale the index register to allow for the size of each array element. This could be used for accessing elements of an array passed as a parameter. Base plus index plus offset +------+-----+-----+-----+----------------+ | load | reg | base|index| offset | +------+-----+-----+-----+----------------+ (Effective address = offset + contents of specified base register + contents of specified index register) The base register could contain the start address of an array or vector of records, the index could select the particular record required, and the offset could select a field within that record. The processor may scale the index register to allow for the size of each array element. Scaled +------+-----+-----+-----+ | load | reg | base|index| +------+-----+-----+-----+ (Effective address = contents of specified base register + scaled contents of specified index register) The base register could contain the start address of an array or vector data structure, and the index could contain the offset of the one particular array element required. This addressing mode dynamically scales the value in the index register to allow for the size of each array element, e.g. if the array elements are double precision floating-point numbers occupying 8 bytes each then the value in the index register is multiplied by 8 before being used in the effective address calculation. The scale factor is normally restricted to being a power of two, so that shifting rather than multiplication can be used. Register indirect +------+------+-----+ | load | reg1 | base| +------+------+-----+ (Effective address = contents of base register) A few computers have this as a distinct addressing mode. Many computers just use base plus offset with an offset value of 0. 
For example, (A7) Register autoincrement indirect +------+-----+-------+ | load | reg | base | +------+-----+-------+ (Effective address = contents of base register) After determining the effective address, the value in the base register is incremented by the size of the data item that is to be accessed. For example, (A7)+ would access the content of the address register A7, then increase the address pointer of A7 by 1 (usually 1 word). Within a loop, this addressing mode can be used to step through all the elements of an array or vector. In high-level languages it is often thought to be a good idea that functions which return a result should not have side effects (lack of side effects makes program understanding and validation much easier). This addressing mode has a side effect in that the base register is altered. If the subsequent memory access causes an error (e.g. page fault, bus error, address error) leading to an interrupt, then restarting the instruction becomes much more problematic since one or more registers may need to be set back to the state they were in before the instruction originally started. There have been at least three computer architectures that have had implementation problems with regard to recovery from faults when this addressing mode is used: DEC PDP-11. Could have one or two autoincrement register operands. Some models, such as the PDP-11/45 and PDP-11/70, had a register that recorded modifications to registers, allowing the fault handler to undo the register modifications and re-execute the instruction. Motorola 68000 series. Could have one or two autoincrement register operands. The 68010 and later processors resolved the problem by saving the processor's internal state on bus or address errors and restoring it when returning from the fault. DEC VAX. Could have up to 6 autoincrement register operands. The First Part Done bit in the saved processor status longword in a stack frame for a fault is set if the faulting instruction must not be restarted at the beginning, resolving the problem. Register autodecrement indirect +------+-----+-----+ | load | reg | base| +------+-----+-----+ (Effective address = new contents of base register) Before determining the effective address, the value in the base register is decremented by the size of the data item which is to be accessed. Within a loop, this addressing mode can be used to step backwards through all the elements of an array or vector. A stack can be implemented by using this mode in conjunction with the previous addressing mode (autoincrement). See the discussion of side-effects under the autoincrement addressing mode. Memory indirect or deferred Any of the addressing modes mentioned in this article could have an extra bit to indicate indirect addressing, i.e. the address calculated using some mode is in fact the address of a location (typically a complete word) which contains the actual effective address. Indirect addressing may be used for code or data. It can make implementation of pointers, references, or handles much easier, and can also make it easier to call subroutines which are not otherwise addressable. Indirect addressing does carry a performance penalty due to the extra memory access involved. Some early minicomputers (e.g. DEC PDP-8, Data General Nova) had only a few registers and only a limited direct addressing range (8 bits). Hence the use of memory indirect addressing was almost the only way of referring to any significant amount of memory. 
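A minimal sketch in Python of memory indirect (deferred) addressing as described above, showing the extra memory access it costs; all addresses and values are invented for illustration.

# Minimal sketch of memory indirect ("deferred") addressing: the address
# computed by the base mode holds the address of the operand, not the operand.

memory = {
    0x0100: 0x4000,   # a pointer stored in memory
    0x4000: 77,       # the actual operand
}

def load_direct(address):
    # One memory access: the addressed word is the operand.
    return memory[address]

def load_indirect(address):
    # Two memory accesses: first fetch the pointer, then fetch the operand.
    return memory[memory[address]]

print(load_direct(0x0100))    # 0x4000 (16384), the pointer itself
print(load_indirect(0x0100))  # 77, the value the pointer refers to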
Half of the DEC PDP-11's eight addressing modes are deferred. Register deferred @Rn is the same as register indirect as defined above. Predecrement deferred @-(Rn), postincrement deferred @(Rn)+, and indexed deferred @nn(Rn) modes point to addresses in memory which are read to find the address of the parameter. The PDP-11's deferred mode, when combined with the program counter, provides its absolute addressing mode. PC-relative +------+------+---------+----------------+ | load | reg1 | base=PC | offset | +------+------+---------+----------------+ reg1 := RAM[PC + offset] (Effective address = PC + offset) The PC-relative addressing mode can be used to load a register with a value stored in program memory a short distance away from the current instruction. It can be seen as a special case of the "base plus offset" addressing mode, one that selects the program counter (PC) as the "base register". There are a few CPUs that support PC-relative data references. Such CPUs include: The x86-64 architecture and the 64-bit ARMv8-A architecture have PC-relative addressing modes, called "RIP-relative" in x86-64 and "literal" in ARMv8-A. The Motorola 6809 also supports a PC-relative addressing mode. The PDP-11 architecture, the VAX architecture, and the 32-bit ARM architectures support PC-relative addressing by having the PC in the register file. The IBM z/Architecture includes specific instructions, e.g., Load Relative Long, with PC-relative addressing if the General-Instructions-Extension Facility is active. When this addressing mode is used, the compiler typically places the constants in a literal pool immediately before or immediately after the subroutine that uses them, to prevent accidentally executing those constants as instructions. This addressing mode, which always fetches data from memory or stores data to memory and then sequentially falls through to execute the next instruction (the effective address points to data), should not be confused with "PC-relative branch" which does not fetch data from or store data to memory, but instead branches to some other instruction at the given offset (the effective address points to an executable instruction). Obsolete addressing modes The addressing modes listed here were used in the 1950–1980 period, but are no longer available on most current computers. This list is by no means complete; there have been many other interesting and peculiar addressing modes used from time to time, e.g. absolute-minus-logical-OR of two or three index registers. Multi-level memory indirect If the word size is larger than the address, then the word referenced for memory-indirect addressing could itself have an indirect flag set to indicate another memory indirect cycle. This flag is referred to as an indirection bit, and the resulting pointer is a tagged pointer, the indirection bit tagging whether it is a direct pointer or an indirect pointer. Care is needed to ensure that a chain of indirect addresses does not refer to itself; if it does, one can get an infinite loop while trying to resolve an address. The IBM 1620, the Data General Nova, the HP 2100 series, and the NAR 2 each have such a multi-level memory indirect, and could enter such an infinite address calculation loop. The memory indirect addressing mode on the Nova influenced the invention of indirect threaded code. The DEC PDP-10 computer with 18-bit addresses and 36-bit words allowed multi-level indirect addressing with the possibility of using an index register at each stage as well. 
The priority interrupt system was queried before decoding of every address word. So, an indirect address loop would not prevent execution of device service routines, including any preemptive multitasking scheduler's time-slice expiration handler. A looping instruction would be treated like any other compute-bound job. Memory-mapped registers On some computers, there were addresses that referred to registers rather than to primary storage, or to primary memory used to implement those registers. Although on some early computers there were register addresses at the high end of the address range, e.g., IBM 650, IBM 7070, the trend has been to use only register address at the low end and to use only the first 8 or 16 words of memory (e.g. ICL 1900, DEC PDP-6/PDP-10). This meant that there was no need for a separate "add register to register" instruction – one could just use the "add memory to register" instruction. In the case of early models of the PDP-10, which did not have any cache memory, if the "fast registers" option, which provided faster circuits to store the registers but still allowed them to be addressed as if they were in memory, was installed, a tight inner loop loaded into the first few words of memory ran much faster than it would have in magnetic core memory. Later models of the DEC PDP-11 series mapped the registers onto addresses in the input/output area, but this was primarily intended to allow remote diagnostics. Confusingly, the 16-bit registers were mapped onto consecutive 8-bit byte addresses. Memory indirect and autoincrement The DEC PDP-8 minicomputer had eight special locations (at addresses 8 through 15). When accessed via memory indirect addressing, these locations would automatically increment prior to use. This made it easy to step through memory in a loop without needing to use the accumulator to increment the address. The Data General Nova minicomputer had 16 special memory locations at addresses 16 through 31. When accessed via memory indirect addressing, 16 through 23 would automatically increment before use, and 24 through 31 would automatically decrement before use. Zero page The Data General Nova, Motorola 6800 family, and MOS Technology 6502 family of processors had very few internal registers. Arithmetic and logical instructions were mostly performed against values in memory as opposed to internal registers. As a result, many instructions required a two-byte (16-bit) location to memory. Given that opcodes on these processors were only one byte (8 bits) in length, memory addresses could make up a significant part of code size. Designers of these processors included a partial remedy known as "zero page" addressing. The initial 256 bytes of memory ($0000 – $00FF; a.k.a., page "0") could be accessed using a one-byte absolute or indexed memory address. This reduced instruction execution time by one clock cycle and instruction length by one byte. By storing often-used data in this region, programs could be made smaller and faster. As a result, the zero page was used similarly to a register file. On many systems, however, this resulted in high utilization of the zero page memory area by the operating system and user programs, which limited its use since free space was limited. Direct page The zero page address mode was enhanced in several late model 8-bit processors, including the WDC 65816, the CSG 65CE02, and the Motorola 6809. 
The new mode, known as "direct page" addressing, added the ability to move the 256-byte zero page memory window from the start of memory (offset address $0000) to a new location within the first 64 KB of memory. The CSG 65CE02 allowed the direct page to be moved to any 256-byte boundary within the first 64 KB of memory by storing an 8-bit offset value in the new base page (B) register. The Motorola 6809 could do the same with its direct page (DP) register. The WDC 65816 went a step further and allowed the direct page to be moved to any location within the first 64 KB of memory by storing a 16-bit offset value in the new direct (D) register. As a result, a greater number of programs were able to utilize the enhanced direct page addressing mode versus legacy processors that only included the zero page addressing mode. Scaled index with bounds checking This is similar to scaled index addressing, except that the instruction has two extra operands (typically constants), and the hardware checks that the index value is between these bounds. Another variation uses vector descriptors to hold the bounds; this makes it easy to implement dynamically allocated arrays and still have full bounds checking. Indirect to bit field within word Some computers had special indirect addressing modes for subfields within words. The GE/Honeywell 600 series character addressing indirect word specified either 6-bit or 9-bit character fields within its 36-bit word. The DEC PDP-10, also 36-bit, had special instructions which allowed memory to be treated as a sequence of fixed-size bit fields or bytes of any size from 1 bit to 36 bits. A one-word sequence descriptor in memory, called a "byte pointer", held the current word address within the sequence, a bit position within a word, and the size of each byte. Instructions existed to load and store bytes via this descriptor, and to increment the descriptor to point at the next byte (bytes were not split across word boundaries). Much DEC software used five 7-bit bytes per word (plain ASCII characters), with one bit per word unused. Implementations of C had to use four 9-bit bytes per word, since the 'malloc' function in C assumes that the size of an int is some multiple of the size of a char; the actual multiple is determined by the system-dependent compile-time operator sizeof. Index next instruction The Elliott 503, the Elliott 803, and the Apollo Guidance Computer only used absolute addressing, and did not have any index registers. Thus, indirect jumps, or jumps through registers, were not supported in the instruction set. Instead, it could be instructed to add the contents of the current memory word to the next instruction. Adding a small value to the next instruction to be executed could, for example, change a JUMP 0 into a JUMP 20, thus creating the effect of an indexed jump. Note that the instruction is modified on-the-fly and remains unchanged in memory, i.e. it is not self-modifying code. If the value being added to the next instruction was large enough, it could modify the opcode of that instruction as well as or instead of the address. Glossary See also Instruction set architecture Address bus Notes References External links Addressing modes in assembly language Addressing modes Computer architecture Machine code Assembly languages
Addressing mode
Technology,Engineering
7,898
39,327,628
https://en.wikipedia.org/wiki/Intellectual%20opportunism
Intellectual opportunism is the pursuit of intellectual opportunities with a selfish, ulterior motive not consistent with relevant principles. The term refers to certain self-serving tendencies of the human intellect, often involving professional producers and disseminators of ideas, who work with idea-formation all the time. Intellectual opportunism sometimes also refers to a specific school or trend of thought, or to a characteristic of a particular intellectual development. Thus, a certain set of people who share ideas are then said to display a tendency for "intellectual opportunism", often with the connotation that they deliberately act intellectually in a certain way, to gain special favor with an authority, group or organization; to justify a state of affairs that benefits themselves; or because they have the motive of financial or personal gain. Background At issue is the motive and intention involved in pursuing, creating, or expressing particular ideas (why certain ideas are being taken up), and the relevant contrast is between: the intellectual's stated principles, versus ideas he publicly or outwardly supports, endorses or concerns himself with. the original intention of ideas such as it is normally understood, versus the uses they are put to. "Theoretical opportunism" in science refers to the attempt to save a theory from refutation, or protect it from criticism, with the use of ad hoc methods that in some way lack deeper scientific consistency or credibility. Theorists may believe so strongly in the value of their own theory, that they try to explain away inconsistencies or contrary evidence – borrowing any idea that plausibly fits with the theory, rather than developing the theory in such a way, that it can truly account for the relevant evidence. The phenomenon of intellectual opportunism is frequently associated by its critics with careerism and dubious, unprincipled self-promotion, where ideas become "just another commodity" or a "bargaining tool". When human knowledge becomes a tradeable good in a market of ideas, all sorts of opportunities arise for huckstering, swindling, haggling and hustling with information in ways which are regarded as unprincipled, dubious or involve deceit of some sort. Morality The intellectual opportunist adapts his intellectual concerns, pursuits and utterances to "fit with the trend/fashion" or "fit the situation" or "with what sells" – with the (ulterior) motive of gaining personal popularity/support, protecting intellectual coherence, obtaining personal credit, acquiring privilege or status, persuading others, ingratiating himself, taking advantage or making money. Normally this assumes some degree of intellectual flexibility, agility or persuasiveness. The intellectual opportunist: "Holds his mouth where the money or the support is" or where the opportunities for self-advancement or self-promotion are. "Hires out" his own ideas for purposes that conflict with his real nature or the organization he works for, only for the purpose of gaining personal advantage. Latches onto any readily available ideas or "picks the brains of others" to advance or defend his own position. Compromises what he really believes in, for the sake of some ulterior motive or purpose. Often intellectual opportunism is therefore understood as a sign of lack of integrity or intellectual shallowness, to the extent that the opportunist is not concerned with the worth of the ideas in themselves, but only with how he can benefit from them himself by pursuing them. 
As a corollary, the intellectual opportunist is often apt to change his opinions, and "change his line" rapidly or arbitrarily, according to where he can gain personal advantage, in a manner not consistent or principled. The implication is usually that ideas are no longer being pursued because of their intrinsic merit or worth, or out of a genuine concern with what is at stake in an argument or idea, but only because of the instrumental value of ideas, i.e., the selfish advantage that can be gained from pursuing some ideas in preference to other ones. Observably ventilating or "advertising" suitably formulated ideas is then merely a means or a "tool" for self-advancement or the promotion of a group or organization, giving rise to accusations that the real intention of particular ideas is being twisted around to serve an alien or improper purpose. The general outcome may be that the ideas involved, though plausible at a superficial level, lack any deeper coherence, the coherence being ruled out by lack of regard for relevant principles. Intellectual "dilettantes" (people who do not truly know what they are talking about, i.e. dabblers) are often regarded as opportunists, insofar as they like to side with whatever viewpoint seems to be popular or credible at the time. Perception Intellectual opportunism may appear obvious or crass, if the selfish motives for engaging in it are clear. It may also be very difficult to detect if: the intellectual opportunist is clever and intelligent, while his audience is not, or his audience lacks sufficient relevant information to "judge the intellectual act". A clever intellectual opportunist may be able to reconcile his changing stories and his ulterior selfish motives in such a way, that his intellectual concerns seem perfectly principled and consistent. it is very difficult to distinguish between legitimately seizing an intellectual opportunity with sincere motives, and using an intellectual opportunity for some selfish, ulterior motive. the intellectual opportunist is himself not aware of his own opportunism, i.e. what it means, or what its broader significance is, regarding his own pursuit of intellectual opportunities as perfectly legitimate. In this case, the true motives or the effects of a course of action may be unclear or in dispute. the relevant and appropriate moral norms are themselves in dispute, so that the validity of the assessment of "opportunism" in intellectual behaviour depends on "point of view". To prove intellectual opportunism by an individual or a group may therefore require very comprehensive knowledge pertaining to the case. An additional complicating factor is the influence of cultural differences on human intentions. Behavior regarded as opportunist in one culture may not be so regarded in another because of differences in norms of moral propriety. For example, in American culture there is a much greater preoccupation with self-marketing, advertising and self-promotion, which in European countries might be regarded as crass opportunism, because the culturally appropriate ways to assert self-interest or self-concern are different. There may however be just as much opportunism in Europe as anywhere else, but with a different cultural style. People may say, "all's fair in love and war", but that also means that if one can represent something as a war or a matter of love, one can justify any action, since love and war permit actions that would ordinarily be regarded as unprincipled or illegitimate. References Opportunism Morality
Intellectual opportunism
Biology
1,449
3,879,709
https://en.wikipedia.org/wiki/Reactions%20on%20surfaces
Reactions on surfaces are reactions in which at least one of the steps of the reaction mechanism is the adsorption of one or more reactants. The mechanisms for these reactions, and the rate equations, are of extreme importance for heterogeneous catalysis. Via scanning tunneling microscopy, it is possible to observe reactions at the solid–gas interface in real space, if the time scale of the reaction is in the correct range. Reactions at the solid–gas interface are in some cases related to catalysis. Simple decomposition If a reaction occurs through these steps: A + S ⇌ AS → Products where A is the reactant and S is an adsorption site on the surface, and the respective rate constants for the adsorption, desorption and reaction are k1, k−1 and k2, then the global reaction rate is: r = k2 cAS = k2 CS θ where: r is the rate, mol·m−2·s−1 cA is the concentration of adsorbate, mol·m−3 cAS is the surface concentration of occupied sites, mol·m−2 CS is the concentration of all sites (occupied or not), mol·m−2 θ is the surface coverage (i.e. θ = cAS/CS), defined as the fraction of sites which are occupied, which is dimensionless t is time, s k2 is the rate constant for the surface reaction, s−1 k1 is the rate constant for surface adsorption, m3·mol−1·s−1 k−1 is the rate constant for surface desorption, s−1 CS is highly related to the total surface area of the adsorbent: the greater the surface area, the more sites and the faster the reaction. This is the reason why heterogeneous catalysts are usually chosen to have great surface areas (on the order of a hundred m2/gram). If we apply the steady state approximation to AS, then: k1 cA CS (1 − θ) = (k−1 + k2) CS θ, so θ = k1 cA / (k1 cA + k−1 + k2) and r = k1 k2 CS cA / (k1 cA + k−1 + k2). The result is equivalent to the Michaelis–Menten kinetics of reactions catalyzed at a site on an enzyme. The rate equation is complex, and the reaction order is not clear. In experimental work, usually two extreme cases are looked for in order to prove the mechanism. In them, the rate-determining step can be: Limiting step: adsorption/desorption If the surface reaction is fast compared with adsorption and desorption (k2 much larger than k−1 and k1 cA), the rate reduces to r ≈ k1 CS cA. The order with respect to A is 1. Examples of this mechanism are N2O on gold and HI on platinum. Limiting step: reaction of adsorbed species If instead the surface reaction is slow (k2 much smaller than k−1), adsorption is essentially at equilibrium and θ = K1 cA / (1 + K1 cA). The last expression is the Langmuir isotherm for the surface coverage. The adsorption equilibrium constant is K1 = k1/k−1, and the numerator and denominator have each been divided by k−1. The overall reaction rate becomes r = k2 CS K1 cA / (1 + K1 cA). Depending on the concentration of the reactant the rate changes: Low concentrations: then K1 cA << 1 and r ≈ k2 CS K1 cA, that is to say a first order reaction in component A. High concentration: then K1 cA >> 1 and r ≈ k2 CS. It is a zeroth order reaction in component A. Bimolecular reaction Langmuir–Hinshelwood mechanism In this mechanism, suggested by Irving Langmuir in 1921 and further developed by Cyril Hinshelwood in 1926, two molecules adsorb on neighboring sites and the adsorbed molecules undergo a bimolecular reaction: A + S ⇌ AS B + S ⇌ BS AS + BS → Products The rate constants are now k1 and k−1 for adsorption/desorption of A, k2 and k−2 for adsorption/desorption of B, and k for the reaction generating the final products. The rate law is: r = k cAS cBS = k CS² θA θB Proceeding as before we get θA = KA cA θE, where θE is the fraction of empty sites, so θA + θB + θE = 1. Let us assume now that the rate limiting step is the reaction of the adsorbed molecules, which is easily understood: the probability of two adsorbed molecules colliding is low. Then θA = KA cA / (1 + KA cA + KB cB) and θB = KB cB / (1 + KA cA + KB cB), which is nothing but the Langmuir isotherm for two adsorbed gases, with adsorption constants KA = k1/k−1 and KB = k2/k−2. Calculating θE from θA and θB we finally get r = k CS² KA KB cA cB / (1 + KA cA + KB cB)². 
The rate law is complex and there is no clear order with respect to either reactant, but we can consider different values of the constants, for which it is easy to measure integer orders: Both molecules have low adsorption That means that , so . The order is one with respect to each reactant, and the overall order is two. One molecule has very low adsorption In this case , so . The reaction order is 1 with respect to B. There are two extreme possibilities for the order with respect to A: At low concentrations of A, , and the order is one with respect to A. At high concentrations, . The order is minus one with respect to A. The higher the concentration of A, the slower the reaction goes, in this case we say that A inhibits the reaction. One molecule has very high adsorption One of the reactants has very high adsorption and the other one doesn't adsorb strongly. , so . The reaction order is 1 with respect to B and −1 with respect to A. Reactant A inhibits the reaction at all concentrations. The following reactions follow a Langmuir–Hinshelwood mechanism: 2 CO + O2 → 2 CO2 on a platinum catalyst. CO + 2H2 → CH3OH on a ZnO catalyst. C2H4 + H2 → C2H6 on a copper catalyst. N2O + H2 → N2 + H2O on a platinum catalyst. C2H4 + O2 → CH3CHO on a palladium catalyst. CO + OH → CO2 + H+ + e− on a platinum catalyst. Langmuir–Rideal mechanism In this mechanism, proposed in 1922 by Irving Langmuir and later expanded upon by Eric Rideal, only one of the molecules adsorbs and the other one reacts with it directly from the gas phase, without adsorbing ("nonthermal surface reaction"): A(g) + S(s) ⇌ AS(s) AS(s) + B(g) → Products Constants are and and rate equation is . Applying steady state approximation to AS and proceeding as before (considering the reaction the limiting step once more) we get . The order is one with respect to B. There are two possibilities, depending on the concentration of reactant A: At low concentrations of A, , and the order is one with respect to A. At high concentrations of A, , and the order is zero with respect to A. The following reactions follow an Langmuir–Rideal mechanism: C2H4 + O2 (adsorbed) → (CH2CH2)O The dissociative adsorption of oxygen is also possible, which leads to secondary products carbon dioxide and water. CO2 + H2 (ads.) → H2O + CO 2 NH3 + O2 (ads.) → N2 + 3H2O on a platinum catalyst C2H2 + H2 (ads.) → C2H4 on nickel or iron catalysts The Langmuir-Rideal mechanism is often, incorrectly, attributed to Dan Eley as the Eley-Rideal mechanism. The actual Eley-Rideal mechanism, studied in the thesis of Dan Eley and proposed by Eric Rideal in 1939, was the reaction between a chemisorbed and a physisorbed molecule. As opposed to the Langmuir-Rideal mechanism, in this mechanism the physisorbed molecule is in thermal equilibrium with the surface. See also Diffusion-controlled reaction References Graphic models of Eley Rideal and Langmuir Hinshelwood mechanisms Surface science Chemical kinetics Chemical reaction engineering
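The limiting-order behaviour described above can be demonstrated numerically. The sketch below evaluates the Langmuir–Hinshelwood and Langmuir–Rideal rate laws with arbitrary illustrative constants; all values are assumptions chosen only to exhibit the trends and are not data from the article:
```python
import numpy as np

# Illustrative constants (assumptions chosen only to show the trends)
k, c_S, c_B = 1.0, 1.0, 1.0     # surface rate constant, total sites, concentration of B
K1, K2 = 10.0, 0.01             # adsorption constants: A adsorbs strongly, B weakly

def rate_langmuir_hinshelwood(c_A):
    """Both reactants adsorb; the surface step between adsorbed A and B is limiting."""
    denom = 1.0 + K1 * c_A + K2 * c_B
    return k * c_S**2 * K1 * K2 * c_A * c_B / denom**2

def rate_langmuir_rideal(c_A):
    """Only A adsorbs; B reacts with adsorbed A directly from the gas phase."""
    return k * c_S * c_B * K1 * c_A / (1.0 + K1 * c_A)

def apparent_order(rate, c_lo, c_hi):
    """Local order in A, estimated as d(ln r) / d(ln c_A) between two concentrations."""
    return (np.log(rate(c_hi)) - np.log(rate(c_lo))) / (np.log(c_hi) - np.log(c_lo))

print("L-H order in A, low  c_A:", round(apparent_order(rate_langmuir_hinshelwood, 1e-4, 2e-4), 2))  # ~ +1
print("L-H order in A, high c_A:", round(apparent_order(rate_langmuir_hinshelwood, 1e3, 2e3), 2))    # ~ -1 (A inhibits)
print("L-R order in A, low  c_A:", round(apparent_order(rate_langmuir_rideal, 1e-4, 2e-4), 2))       # ~ +1
print("L-R order in A, high c_A:", round(apparent_order(rate_langmuir_rideal, 1e3, 2e3), 2))         # ~  0
```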
Reactions on surfaces
Physics,Chemistry,Materials_science,Engineering
1,544
48,657,205
https://en.wikipedia.org/wiki/Core%20%28architecture%29
In architecture, a core is a vertical space used for circulation and services. It may also be referred to as a circulation core or service core. A core may include staircases, elevators, electrical cables, water pipes and risers. A core allows people to move between the floors of a building, and distributes services efficiently to the floors. A core may also serve a key structural role in a building, helping to support it and acting as a load-bearing structure with load-bearing walls. Cores in office buildings tend to be larger than those in apartment buildings because office buildings need to handle more traffic with an increased number of elevator shafts. It is generally desirable for a core to be as small as possible to maximize floorspace within the building. The core of a building is often placed in the center of the building, but it can also be placed on a side, and there can be several cores in a building. Cores on a side of a building, known as perimeter cores, are completely inside the building and can allow for more uninterrupted, column-free floor space. Offset cores are similar to perimeter cores but sit partially or completely outside the building. Cores split into several smaller cores are called mixed cores. A large portion (over 40%) of offset core buildings were built after 2010. An offset core can also be used to provide shade from the sun. See also Buttressed core Skyscraper design and construction References Architectural elements
Core (architecture)
Technology,Engineering
297
64,554,970
https://en.wikipedia.org/wiki/Mathematical%20Models%20%28Fischer%29
Mathematical Models: From the Collections of Universities and Museums – Photograph Volume and Commentary is a book on the physical models of concepts in mathematics that were constructed in the 19th century and early 20th century and kept as instructional aids at universities. It credits Gerd Fischer as editor, but its photographs of models are also by Fischer. It was originally published by Vieweg+Teubner Verlag for their bicentennial in 1986, both in German (titled Mathematische Modelle. Aus den Sammlungen von Universitäten und Museen. Mit 132 Fotografien. Bildband und Kommentarband) and (separately) in English translation, in each case as a two-volume set with one volume of photographs and a second volume of mathematical commentary. Springer Spektrum reprinted it in a second edition in 2017, as a single dual-language volume. Topics The work consists of 132 full-page photographs of mathematical models, divided into seven categories, and seven chapters of mathematical commentary written by experts in the topic area of each category. These categories are: Wire and thread models, of hypercubes of various dimensions, and of hyperboloids, cylinders, and related ruled surfaces, described as "elementary analytic geometry" and explained by Fischer himself. Plaster and wood models of cubic and quartic algebraic surfaces, including Cayley's ruled cubic surface, the Clebsch surface, Fresnel's wave surface, the Kummer surface, and the Roman surface, with commentary by W. Barth and H. Knörrer. Wire and plaster models illustrating the differential geometry and curvature of curves and surfaces, including surfaces of revolution, Dupin cyclides, helicoids, and minimal surfaces including the Enneper surface, with commentary by M. P. do Carmo, G. Fischer, U. Pinkall, H. and Reckziegel. Surfaces of constant width including the surface of rotation of the Reuleaux triangle and the Meissner bodies, described by J. Böhm. Uniform star polyhedra, described by E. Quaisser. Models of the projective plane, including the Roman surface (again), the cross-cap, and Boy's surface, with commentary by U. Pinkall that includes its realization by Roger Apéry as a quartic surface (disproving a conjecture of Heinz Hopf). Graphs of functions, both with real and complex variables, including the Peano surface, Riemann surfaces, exponential function and Weierstrass's elliptic functions, with commentary by J. Leiterer. Audience and reception This book can be viewed as a supplement to Mathematical Models by Martyn Cundy and A. P. Rollett (1950), on instructions for making mathematical models, which according to reviewer Tony Gardiner "should be in every classroom and on every lecturer's shelf" but in fact sold very slowly. Gardiner writes that the photographs may be useful in undergraduate mathematics lectures, while the commentary is best aimed at mathematics professionals in giving them an understanding of what each model depicts. Gardiner also suggests using the book as a source of inspiration for undergraduate research projects that use its models as starting points and build on the mathematics they depict. Although Gardiner finds the commentary at times overly telegraphic and difficult to understand, reviewer O. Giering, writing about the German-language version of the same commentary, calls it detailed, easy-to-read, and stimulating. 
By the time of the publication of the second edition, in 2017, reviewer Hans-Peter Schröcker evaluates the visualizations in the book as "anachronistic", superseded by the ability to visualize the same phenomena more easily with modern computer graphics, and he writes that some of the commentary is also "slightly outdated". Nevertheless, he writes that the photos are "beautiful and aesthetically pleasing", writing approvingly that they use color sparingly and aim to let the models speak for themselves rather than dazzling with many color images. And despite the fading strength of its original purpose, he finds the book valuable both for its historical interest and for what it still has to say about visualizing mathematics in a way that is both beautiful and informative. References Mathematical tools Mathematics books 1986 non-fiction books 2017 non-fiction books
Mathematical Models (Fischer)
Mathematics,Technology
878
74,395,207
https://en.wikipedia.org/wiki/Misskey
Misskey () is an open source, federated, social networking service created in 2014 by Japanese software engineer Eiji "syuilo" Shinoda. Misskey uses the ActivityPub protocol for federation, allowing users to interact between independent Misskey instances, and other ActivityPub compatible platforms. Misskey is generally considered to be part of the Fediverse. Despite being a decentralized service, Misskey is not philosophically opposed to centralization. The name Misskey comes from the lyrics of Brain Diver, a song by the Japanese singer May'n. History Misskey was initially developed as a BBS-style internet forum by high school student Eiji Shinoda in 2014. After introducing a timeline feature, Misskey gained popularity as the microblogging platform it is today. In 2018, Misskey added support for ActivityPub, becoming a federated social media platform. The flagship Misskey server, Misskey.io, was started on April 15, 2019. Misskey, alongside Mastodon and Bluesky, has received attention as a potential replacement for Twitter following Twitter's acquisition by Elon Musk in 2022. On April 8, 2023, Misskey.io incorporated as MisskeyHQ K.K. As of February 2024, over 450,000 users were registered, making it the largest instance of Misskey. Misskey.io is crowdfunded. The administrator of Misskey.io is Japanese system administrator Yoshiki Eto, who operates under the alias Murakami-san. Eiji Shinoda serves as director. In July 2023, Twitter introduced extreme restrictions on their API in order to combat scraping from bots. Some users were critical of the changes, and as a result migrated to other social networks. The number of users registering on Misskey.io, Misskey's official instance and the largest one, increased rapidly, with other Misskey instances also receiving a spike in signups. In response to this trend, Skeb, a platform for sharing art, announced on July 14, 2023 that it would sponsor the Misskey development team. In early 2024, Misskey was targeted by a spam attack from Japan. The cause of the attack is believed to be a dispute between rival groups on a Japanese hacker forum and a DDoS attack on a Discord bot. Mastodon instances with open registration were used in the attack. Development Misskey is open source software and is licensed under the AGPLv3. The Misskey API is publicly available and is documented using the OpenAPI Specification, which allows users to build automated accounts and use it on any Misskey instance. The service is translated using Crowdin. Misskey is developed using Node.js. TypeScript is used on both the frontend and backend. PostgreSQL is used as its database. Vue.js is used for the frontend. Functionality Posts on Misskey are called "notes". Notes are limited to a maximum of 3,000 characters (a limit which can be customized by instances), and can be accompanied by any file, including polls, images, videos, and audio. Notes can be reposted, either by themselves or with another "quote" note. Misskey comes with multiple timelines to sort through the notes that an instance has available, and are displayed in reverse chronological order. The Home timeline shows notes from users that you follow, the Local timeline shows all notes from the instance in use, the Social timeline shows both the Home and Local timeline, and the Global timeline shows every public note that the instance knows about. Notes have customizable privacy settings to control what users can see a note, similar to Mastodon's post visibility ranges. 
Public notes show up on all timelines, while Home notes only show on a user's Home timeline. Notes can also be set to be available only for followers. Direct messages using notes can be sent to users. Forks Misskey's open source nature has led to the development of a number of forks: Firefish (formerly Calckey) has been developed by ThatOneCalculator since 2022. Firefish includes enhanced compatibility with the Mastodon API. Further development of the project is discontinued due to a lack of maintainers and code quality issues, as well as the sudden disappearance of ThatOneCalculator. Foundkey was primarily developed by Johann155, a contributor to Misskey. The fork was started as a result of various language barriers in regards to Misskey's development, as a result of an overwhelming amount of it being conducted exclusively in Japanese. Development is currently paused, and it has been advised to not use Foundkey with more than 20 users. Iceshrimp, forked from Firefish in 2023. The project is currently in the process of being rewritten into C# and .NET. Sharkey, developed by Transfem.org. Main features include note editing, local-only notes and compatibility with the Mastodon API. See also ActivityPub Bluesky Comparison of microblogging and similar services Comparison of software and protocols for distributed social networking Fediverse Mastodon References and notes External links Official website (in English) 2014 software Software that federates via ActivityPub Free and open-source software Free software programmed in TypeScript Free software websites Microblogging software Social media Social networking services Web applications
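To illustrate the public API mentioned in the Development section above, the sketch below posts a note with an explicit visibility setting. The endpoint name, the credential field and the instance URL are assumptions based on common Misskey deployments and should be checked against a given instance's generated API documentation; the access token shown is a placeholder.
```python
import json
import urllib.request

INSTANCE = "https://misskey.example"   # hypothetical instance URL
TOKEN = "YOUR_API_TOKEN"               # placeholder credential

def create_note(text, visibility="home"):
    """Post a note; Misskey API calls are assumed here to be POST requests with a
    JSON body that carries the access token in the 'i' field."""
    payload = json.dumps({"i": TOKEN, "text": text, "visibility": visibility}).encode()
    req = urllib.request.Request(
        f"{INSTANCE}/api/notes/create",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example: a followers-only note
# print(create_note("Hello from the API", visibility="followers"))
```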
Misskey
Technology
1,104
39,895,265
https://en.wikipedia.org/wiki/Orders%20of%20magnitude%20%28probability%29
This page lists events in order of increasing probability, grouped by orders of magnitude. These probabilities were calculated given assumptions detailed in the relevant articles and references. For example, the probabilities of obtaining the different poker hands assume that the cards are dealt fairly. References Probability Applied probability
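As a small worked example of the kind of probability such a list contains, the following sketch computes the chance of two poker hands under fair dealing; it uses standard combinatorics rather than figures quoted from the page itself:
```python
from math import comb

total_hands = comb(52, 5)

# Royal flush: one per suit
p_royal_flush = 4 / total_hands

# Full house: choose the rank of the triple and 3 of its 4 suits,
# then the rank of the pair and 2 of its 4 suits
p_full_house = (13 * comb(4, 3) * 12 * comb(4, 2)) / total_hands

print(f"P(royal flush) = {p_royal_flush:.2e}")   # ~1.5e-06, order of magnitude 10^-6
print(f"P(full house)  = {p_full_house:.2e}")    # ~1.4e-03, order of magnitude 10^-3
```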
Orders of magnitude (probability)
Mathematics
59
40,760,278
https://en.wikipedia.org/wiki/C22H26N2O3
{{DISPLAYTITLE:C22H26N2O3}} The molecular formula C22H26N2O3 may refer to: Geissoschizine methyl ether Hirsuteine 16-Methoxytabersonine Pseudoakuammigine Molecular formulas
C22H26N2O3
Physics,Chemistry
61
12,552,062
https://en.wikipedia.org/wiki/Lemoine%27s%20conjecture
In number theory, Lemoine's conjecture, named after Émile Lemoine, also known as Levy's conjecture, after Hyman Levy, states that all odd integers greater than 5 can be represented as the sum of an odd prime number and an even semiprime. History The conjecture was posed by Émile Lemoine in 1895, but was erroneously attributed by MathWorld to Hyman Levy, who pondered it in the 1960s. A similar conjecture by Sun in 2008 states that all odd integers greater than 3 can be represented as the sum of a prime number and the product of two consecutive positive integers (p + x(x+1)). Formal definition To put it algebraically, 2n + 1 = p + 2q always has a solution in primes p and q (not necessarily distinct) for n > 2. The Lemoine conjecture is similar to but stronger than Goldbach's weak conjecture. Example For example, the odd integer 47 can be expressed as the sum of a prime and a semiprime in four different ways: 47 = 13 + 2×17 = 37 + 2×5 = 41 + 2×3 = 43 + 2×2. The number of ways this can be done is given by . Lemoine's conjecture is that this sequence contains no zeros after the first three. Evidence According to MathWorld, the conjecture has been verified by Corbitt up to 10^9. A blog post in June 2019 additionally claimed to have verified the conjecture up to 10^10. A proof was claimed in 2017 by Agama and Gensel, but this was later found to be flawed. See also Lemoine's conjecture and extensions Notes References Émile Lemoine, L'Intermédiaire des mathématiciens, 1 (1894), 179; ibid 3 (1896), 151. H. Levy, "On Goldbach's Conjecture", Math. Gaz. 47 (1963): 274. L. Hodges, "A lesser-known Goldbach conjecture", Math. Mag., 66 (1993): 45–47. John O. Kiltinen and Peter B. Young, "Goldbach, Lemoine, and a Know/Don't Know Problem", Mathematics Magazine, 58(4) (Sep. 1985), pp. 195–203. Richard K. Guy, Unsolved Problems in Number Theory, New York: Springer-Verlag, 2004: C1. External links Levy's Conjecture by Jay Warendorff, Wolfram Demonstrations Project. Additive number theory Conjectures about prime numbers Unsolved problems in number theory
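A short brute-force check of the statement above is easy to write. The sketch below is purely illustrative and only verifies a small range, nowhere near the 10^9 and 10^10 bounds reported in the literature:
```python
def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def lemoine_representations(n):
    """All (p, q) with n = p + 2*q and p, q prime, for odd n."""
    reps = []
    for q in range(2, n // 2 + 1):
        p = n - 2 * q
        if p >= 2 and is_prime(q) and is_prime(p):
            reps.append((p, q))
    return reps

# 47 should admit exactly the four representations listed in the article
print(lemoine_representations(47))

# Spot-check the conjecture for all odd n in a small range
assert all(lemoine_representations(n) for n in range(7, 10001, 2))
```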
Lemoine's conjecture
Mathematics
536
486,392
https://en.wikipedia.org/wiki/Resonator%20mode
In the resonator mode, the plasma density does not exceed the critical density. A standing electromagnetic wave, confined by a resonator cavity, penetrates the plasma and sustains it in the regions of highest field intensity. The geometry of these regions determines the spatial distribution of the plasma. Plasmas excited in the resonator mode are less resistant to detuning, for instance by the insertion of electric probes (Langmuir probes) or electrically conducting samples, than surface-wave plasmas, in which the higher plasma density better shields disturbing potentials. Waves in plasmas
Resonator mode
Physics
123
13,052,312
https://en.wikipedia.org/wiki/Elliott%20Organick
Elliott Irving Organick (February 25, 1925 – December 21, 1985) was a computer scientist and pioneer in operating systems development and education. He was considered "the foremost expositor writer of computer science", and was instrumental in founding the ACM Special Interest Group for Computer Science Education. Career Organick described the Burroughs large systems in an ACM monograph of which he was the sole author, covering the work of Robert (Bob) Barton and others. He also wrote a monograph about the Multics timesharing operating system. By the mid 1970s he had become "the foremost expositor writer of computer science"; he published 19 books. He was editor of ACM Computing Surveys (ISSN 0360-0300) between 1973 and 1976. In 1985 he received the ACM Special Interest Group on Computer Science Education Award for Outstanding Contribution to Computer Science Education. He died of leukemia on December 21, 1985. He taught at the University of Utah, where a Memorial Lecture series was established in his name. Publications The Multics System: An Examination of its Structure. MIT Press, 1972, . Still available from the MIT Libraries as a digital reprint (Laser-printed copy or PDF file of a scanned version.) Computer Systems Organization: The B5700/B6700. ACM Monograph Series, 1973. LCN: 72-88334 References External links 1985 deaths University of Utah faculty 1925 births Manhattan Project people American computer science educators University of Michigan alumni Deaths from leukemia in the United States Massachusetts Institute of Technology people
Elliott Organick
Technology
309
7,906,661
https://en.wikipedia.org/wiki/Diff-Quik
Diff-Quik is a commercial Romanowsky stain variant used to rapidly stain and differentiate a variety of pathology specimens. It is most frequently used for blood films and cytopathological smears, including fine needle aspirates. The Diff-Quik procedure is based on a modification of the Wright-Giemsa stain pioneered by Harleco in the 1970s, and has advantages over the routine Wright-Giemsa staining technique in that it reduces the 4-minute process into a much shorter operation and allows for selective increased eosinophilic or basophilic staining depending upon the time the smear is left in the staining solutions. There are generic brands of such stain, and the trade name is sometimes used loosely to refer to any such stain (much as "Coke" or "Band-Aid" are sometimes used imprecisely). Usage Diff-Quik may be utilized on material which is air-dried prior to alcohol fixation rather than immersed immediately (i.e. "wet-fixed"), although immediate alcohol fixation results in improved microscopic detail. The primary use of Romanowsky-type stains in cytopathology is for cytoplasmic detail, while Papanicolaou stain is used for nuclear detail. Diff-Quik stain highlights cytoplasmic elements such as mucins, fat droplets and neurosecretory granules. Extracellular substances, such as free mucin, colloid, and ground substance, are also easily stained, and appear metachromatic. Major applications include blood smears, bone marrow aspirates, semen analysis and cytology of various body fluids including urine and cerebrospinal fluid. Microbiologic agents, such as bacteria and fungi, also appear more easily in Diff-Quik. This is useful for the detection of for example Helicobacter pylori from gastric and pyloric specimens. Due to its short staining time, Diff-Quik stain is often used for initial screening of cytopathology specimens. This staining technique allows the cytotechnologist or pathologist to quickly assess the adequacy of the specimen, identify possible neoplastic or inflammatory changes, and decide whether or not additional staining is required. Components The Diff-Quik stain consists of 3 solutions: Diff-Quik fixative reagent Triarylmethane dye Methanol Diff-Quik solution I (eosinophilic) Xanthene dye (Eosin Y) pH buffer Diff-Quik solution II (basophilic thiazine dyes) Methylene blue Azure A pH buffer Results Alternatives Wright Giemsa stain Papanicolaou stain References Staining Histopathology Hematopathology Cytopathology Romanowsky stains
Diff-Quik
Chemistry,Biology
591
78,831,814
https://en.wikipedia.org/wiki/AM-350S
The AM-350S is a Pakistani S-band active electronically scanned array (AESA) 3-dimensional air search radar developed jointly by NRTC and Blue Surge. Overview The AM-350S was revealed to the public during the 2024 IDEAS expo in Karachi, mounted on a Hino 500 truck. It is a gallium nitride (GaN)-based active electronically scanned array (AESA) radar with digital beamforming capabilities, a surveillance range of 350 km and a 360° field of view with flexible elevation coverage, enabling it to monitor altitudes of up to 60,000 feet. It has resilient anti-jamming capabilities with frequency hopping, vector control, and side-lobe suppression. Being an AESA-based system, the AM-350S can leverage its multiple TRMs to emit on different or unique frequencies within a single pulse, making it more difficult for enemy ECM systems to single out a particular frequency for jamming. See also AN/FPS-117 Ground Master 400 References Early warning systems Ground radars Military equipment introduced in the 2020s
AM-350S
Technology
222
8,122,079
https://en.wikipedia.org/wiki/Introduction%20to%20Commutative%20Algebra
Introduction to Commutative Algebra is a well-known commutative algebra textbook written by Michael Atiyah and Ian G. Macdonald. It deals with elementary concepts of commutative algebra including localization, primary decomposition, integral dependence, Noetherian and Artinian rings and modules, Dedekind rings, completions and a moderate amount of dimension theory. It is notable for being among the shorter English-language introductory textbooks in the subject, relegating a good deal of material to the exercises. (Hardcover 1969, ) (Paperback 1994, ) 1969 non-fiction books Mathematics textbooks Commutative algebra
Introduction to Commutative Algebra
Mathematics
125
12,479,744
https://en.wikipedia.org/wiki/Symobi
Symobi (System for mobile applications) is a proprietary modern and mobile real-time operating system. It was and is developed by the German company Miray Software, since 2002 partly in cooperation with the research team of Prof. Dr. Uwe Baumgarten at the Technical University of Munich. The graphical operating system is designed for the area of embedded and mobile systems. It is also often used on PCs for end users and in the field of industry. Design The basis of Symobi is the message-oriented operating system μnOS, which is on its part based on the real-time microkernel Sphere. μnOS offers communication through message passing between all processes (from basic operating system service processes to application processes) using the integrated process manager. On the lowest level, the responsibility of the Sphere microkernel is to implement and enforce security mechanisms and resource management in real-time. Symobi itself additionally offers a complete graphical operating system environment with system services, a consistent graphical user interface, as well as standard programs and drivers. Classification Symobi combines features from different fields of application in one operating system. As a modern operating system it offers separated, isolated processes, light-weight threads, and dynamic libraries, like Windows, Linux, and Unix for example. In the area of mobile embedded operating systems, through its low resource requirement and the support of mobile devices it resembles systems like Windows CE, SymbianOS or Palm OS. With conventional real-time operating systems like QNX or VxWorks it shares the real-time ability and the support of different processor architectures. History The development of Sphere, μnOS and Symobi is based on the ideas and work of Konrad Foikis and Michael Haunreiter (founders of the company Miray Software), initiated during their schooldays, even before they started studying computer science. The basic concept was to combine useful and necessary features (like real-time and portability) with modern characteristics (like microkernel and inter-process communication etc.) to form a stable and reliable operating system. Originally, it was only supposed to serve as a basis for the different application programs developed by Foikis and Haunreiter during their studies. In 2000, Konrad Foikis and Michael Haunreiter founded the company Miray Software when they realised that μnOS was suited for far more than their own use. The cooperation with the TU Munich already evolved two years later. In 2006, the first official version of Symobi was completed, and in autumn of the same year it was introduced in professional circles on the Systems exhibition. Support Single-Core Intel: 80386, 80486, Pentium, Pentium Pro, Pentium II, Pentium III, Pentium 4, Core Solo, Core 2 Solo AMD: Élan SC410, Élan SC520, K6, K6-2, K6-III, Duron, Sempron, Athlon, Opteron VIA: Cyrix Mark II, Cyrix III, C3, C7, Eden Rise: mP6 Marvell / Intel: PXA-250, PXA-255, PXA-270, IXP-420 Motorola / Freescale: G2, G3, G4 Multi-Core Intel: Pentium 4, Core Duo, Core 2 Duo AMD: Athlon X2, Opteron Application areas Symobi is suited for hand-held products (portable communicators, internet appliances), as well as for consumer appliances (set-top boxes, home gateways, games, consoles). Furthermore, it is used in the areas of automotive (control and infotainment systems), industrial control systems (motion control, process control), and point of sale (cashier systems, ticket machines, information terminals). 
Advantages and disadvantages The operating system stands out for its real-time microkernel and its multi-processor capability. Furthermore, it is portable and therefore not bound to specific hardware platforms. Symobi's inter-process communication provides security and flexibility. It has a modern architecture and runs with only low resource requirements (processor, system memory). The system offers a Java VM. In the area of standard appliances, the operating system is not yet widespread. It has only rudimentary POSIX support and restricted hardware support through drivers. In addition, Symobi is not an open-source operating system and at present does not offer office applications, email functions, or a web browser. References Miray Software: Introducing Symobi, a modern embeddable RTOS, 2006 External links Symobi Chair for Operating Systems at the Technical University of Munich Miray Software Embedded operating systems Real-time operating systems Microkernel-based operating systems
Symobi
Technology
985
286,064
https://en.wikipedia.org/wiki/Pervaporation
Pervaporation (or pervaporative separation) is a processing method for the separation of mixtures of liquids by partial vaporization through a non-porous or porous membrane. Theory The term pervaporation is a portmanteau of the two steps of the process: (a) permeation through the membrane by the permeate, then (b) its evaporation into the vapor phase. This process is used by a number of industries for several different processes, including purification and analysis, due to its simplicity and in-line nature. The membrane acts as a selective barrier between the two phases: the liquid-phase feed and the vapor-phase permeate. It allows the desired components of the liquid feed to transfer through it by vaporization. Separation of components is based on a difference in transport rate of individual components through the membrane. Typically, the upstream side of the membrane is at ambient pressure and the downstream side is under vacuum to allow the evaporation of the selective component after permeation through the membrane. Driving force for the separation is the difference in the partial pressures of the components on the two sides and not the volatility difference of the components in the feed. The driving force for transport of different components is provided by a chemical potential difference between the liquid feed/retentate and vapor permeate at each side of the membrane. The retentate is the remainder of the feed leaving the membrane feed chamber, which is not permeated through the membrane. The chemical potential can be expressed in terms of fugacity, given by Raoult's law for a liquid and by Dalton's law for (an ideal) gas. During operation, due to removal of the vapor-phase permeate, the actual fugacity of the vapor is lower than anticipated on basis of the collected (condensed) permeate. Separation of components (e.g. water and ethanol) is based on a difference in transport rate of individual components through the membrane. This transport mechanism can be described using the solution-diffusion model, based on the rate/degree of dissolution of a component into the membrane and its velocity of transport (expressed in terms of diffusivity) through the membrane, which will be different for each component and membrane type leading to separation. Applications Pervaporation is effective for dilute solutions containing trace or minor amounts of the component to be removed. Based on this, hydrophilic membranes are used for dehydration of alcohols containing small amounts of water and hydrophobic membranes are used for removal/recovery of trace amounts of organics from aqueous solutions. Pervaporation is an efficient energy conserving alternative to processes such as distillation and evaporation. It allows the exchange of two phases without direct contact. Examples include solvent dehydration: dehydrating the ethanol/water and isopropanol/water azeotropes, continuous ethanol removal from yeast fermentors, continuous water removal from condensation reactions such as esterifications to enhance conversion and rate of the reaction, membrane introduction mass spectrometry, removing organic solvents from industrial waste waters, combination of distillation and pervaporation/vapour permeation, and concentration of hydrophobic flavour compounds in aqueous solutions (using hydrophobic membranes). Recently, a number of organophilic pervaporation membranes have been introduced to the market. 
Organophilic pervaporation membranes can be used for the separation of organic-organic mixtures, e.g.: reduction of the aromatics content in refinery streams, breaking of azeotropes, purification of extraction media, purification of product stream after extraction, and purification of organic solvents. Materials Hydrophobic membranes are often polydimethylsiloxane based where the actual separation mechanism is based on the solution-diffusion model described above. Hydrophilic membranes are more widely available. The commercially most successful pervaporation membrane system to date is based on polyvinyl alcohol. More recently also membranes based on polyimide have become available. To overcome the intrinsic disadvantages of polymeric membrane systems ceramic membranes have been developed over the last decade. These ceramic membranes consist of nanoporous layers on top of a macroporous support. The pores must be large enough to let water molecules pass through and retain any other solvents that have a larger molecular size such as ethanol. As a result, a molecular sieve with a pore size of about 4 Å is obtained. The most widely available member of this class of membranes is that based on zeolite A. Alternatively to these crystalline materials, the porous structure of amorphous silica layers can be tailored towards molecular selectivity. These membranes are fabricated by sol-gel chemical processes. Research into novel hydrophilic ceramic membranes has been focused on titania or zirconia. Very recently a break-through in hydrothermal stability has been achieved through the development of an organic-inorganic hybrid material. See also Mass transfer References Further reading Analytical chemistry Membrane technology
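As a rough numerical illustration of the driving force described in the Theory section, the sketch below compares feed-side and permeate-side partial pressures for a dilute water-in-ethanol feed. All numbers (temperature, saturation pressures, activity coefficient, permeate conditions) are illustrative assumptions, not data from this article:
```python
# Feed: 10 mol% water in ethanol at roughly 60 degC (illustrative conditions)
x_water, x_ethanol = 0.10, 0.90
p_sat_water, p_sat_ethanol = 19.9, 47.0   # kPa, approximate saturation pressures at 60 degC
gamma_water = 2.3                          # assumed activity coefficient of dilute water in ethanol

# Permeate: vacuum side at 1 kPa total pressure, vapor assumed to be mostly water
p_permeate_total = 1.0                     # kPa
y_water = 0.95

# Partial-pressure (fugacity) difference across the membrane for each component
feed_side_water = x_water * gamma_water * p_sat_water          # modified Raoult's law
feed_side_ethanol = x_ethanol * 1.0 * p_sat_ethanol            # ethanol treated as nearly ideal
perm_side_water = y_water * p_permeate_total                   # Dalton's law
perm_side_ethanol = (1 - y_water) * p_permeate_total

print("water   driving force (kPa):", round(feed_side_water - perm_side_water, 2))
print("ethanol driving force (kPa):", round(feed_side_ethanol - perm_side_ethanol, 2))
# A hydrophilic membrane is water-selective because of sorption and diffusion
# selectivity in the membrane itself, even though ethanol's raw partial-pressure
# difference is larger in this example.
```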
Pervaporation
Chemistry
1,044
28,544,110
https://en.wikipedia.org/wiki/Tulosesus%20impatiens
Tulosesus impatiens (formerly Coprinellus impatiens) is a species of fungus in the family Psathyrellaceae. First described in 1821, it has been classified variously in the genera Psathyrella, Pseudocoprinus, Coprinarius, and Coprinus, before molecular phylogenetics reaffirmed it as a Coprinellus species in 2001. In 2020, German mycologists Dieter Wächter and Andreas Melzer reclassified many species of the Psathyrellaceae family based on a phylogenetic analysis and the species was renamed Tulosesus impatiens. The fungus is found in North America and Europe, where the mushrooms grow on the ground in deciduous forests. The fruit bodies have buff caps that are up to in diameter, held by slender whitish stems that can be up to tall. Several other Coprinopsis species that resemble T. impatiens may be distinguished by differences in appearance, habit, or spore morphology. Taxonomy and phylogeny The species was first described in 1821 as Agaricus impatiens by Swedish mycologist Elias Magnus Fries in his Systema Mycologicum. In 1886, Lucien Quélet transferred the species to Coprinarius (a defunct genus now synonymous with Panaeolus) and then to Coprinus a couple of years later in his Flore Mycologique de la France. In 1936, Robert Kühner segregated the genus Pseudocoprinus from Coprinus, including species that did not have deliquescent gills (that is, gills that "melt" into liquid), and he included Coprinus impatiens in this generic transfer. He later changed his mind about the taxonomic separation of Coprinus and Pseudocoprinus. Gillet transferred the species to Psathyrella in 1936. In 1938 Jakob Emanuel Lange published the new combination Coprinellus impatiens. Despite the taxonomic shuffling, the species was popularly known as Coprinus impatiens until 2001, when a large-scale phylogenetic analysis resulted in the splitting of the genus Coprinus into several smaller genera, and the authors confirmed the validity of the generic placement in Coprinellus. The specific epithet impatiens is derived from the Latin word for "impatient". A 2005 phylogenetics study proposed that C. impatiens was sister (closely related on the phylogenetic tree) to a large Psathyrella clade, and that consequently, the genus Coprinellus was polyphyletic. A later (2008) study suggested, however, that these results were due to an artifact of taxon sampling—not enough species were analyzed to adequately represent the genetic variation in the genera. The 2008 study demonstrated that Coprinellus, at that time including T. impatiens, was monophyletic, descended from a common ancestor. In their analysis, T. impatiens was most closely related to T. congregatus, T. bisporus, T. callinus, and T. heterosetulosus. The species was known as Coprinellus impatiens until 2020 when the German mycologists Dieter Wächter & Andreas Melzer reclassified many species in the Psathyrellaceae family based on phylogenetic analysis. Description The cap of the fruit bodies is initially egg-shaped, then conical to convex before flattening out, reaching diameters between . It has deep narrow grooves reaching almost as far as the center of the cap. The surface color is a pale buff, tawny or cinnamon towards the center, but the color loses intensity when the mushroom is dry. The flesh is whitish, thin, fragile and barely deliquescent (auto-digesting). The gills are initially buff, then turn grayish brown. They are either free from attachment to the stem, or adnexed, meaning only a small portion of the gill is attached. 
The stem is whitish, very slender, and more or less equal in width throughout its length, or slightly thicker at the base; its dimensions are by thick. The stem surface of young specimens are pruinose—appearing to be coated with a minute layer of fine white particles; this eventually is sloughed off, leaving a smooth or silky surface. The odor and taste of the fruit bodies are not distinctive. The gills of this species do not autodigest with age, or barely do so; the fruit bodies tend to become more fragile with age. The spore print is dark brown. The spores are smooth, ellipsoid or almond-shaped, with a germ pore; they measure 9–12 by 5–6 μm. The spore-bearing cells, the basidia, are four-spored and tetramorphic (the spores lie on several different levels, and mature at different times). The cheilocystidia (cystidia found on the gill edge) are roughly spherical, 20–35 μm broad, or lageniform (flask-shaped), 36–64 by 10–15 μm, with the apex often rather acute, about 3–5 μm wide. Pleurocystidia (cystidia found on the gill face) are absent in this species. Similar species Coprinellus disseminatus resembles Tulosesus impatiens, but may be distinguished by its slightly larger fruit body, somewhat deliquescent gills, and tendency to fruit in smaller groups on the ground, rather than on or around rotting wood. Also, C. disseminatus has smaller spores than T. impatiens, typically 6.6–9.7 by 4.1–5.8 μm. Tulosesus eurysporus is similar to C. disseminatus but usually grows in groups on fallen branches, and has broader spores that measure 8.3–10.3 by 6.7–8.4 μm. T. hiascens usually grows in small dense clumps, has narrower spores (typically 9–11 by 4.5–5.5 μm, and produces smaller fruit bodies. Habitat and distribution Tulosesus impatiens is found in North America and Europe (including Germany, Poland, and Ukraine) including northern Turkey. In the Pacific Northwest region of the United States, it is found in Oregon and Idaho. Fruit bodies grow solitarily, or rarely in small bundles, on forest litter in deciduous forests, especially ones dominated by beech. References External links Mushroom Observer Images Inedible fungi Fungi described in 1821 Fungi of North America Fungi of Europe Taxa named by Elias Magnus Fries impatiens Fungus species
Tulosesus impatiens
Biology
1,346
36,449,886
https://en.wikipedia.org/wiki/C16H12Cl2N2O
{{DISPLAYTITLE:C16H12Cl2N2O}} The molecular formula C16H12Cl2N2O may refer to: Cloroqualone Diclazepam (Ro5-3448) Ro5-4864, or 4'-chlorodiazepam SL-164 Molecular formulas
C16H12Cl2N2O
Physics,Chemistry
73
76,383,658
https://en.wikipedia.org/wiki/HD%20195479
HD 195479, also designated as HR 7839, is a solitary star located in the northern constellation Delphinus, the dolphin. It has an apparent magnitude of 6.20, placing it near the limit for naked eye visibility, even under ideal conditions. The object is located relatively close, at a distance of 288 light-years based on Gaia DR3 parallax measurements, and it is drifting closer with a heliocentric radial velocity of . At its current distance, HD 195479's brightness is diminished by an interstellar extinction of 0.27 magnitudes and it has an absolute magnitude of +1.53. HD 195479 is an Am star with a stellar classification of kA1hA9mF2. The notation indicates that it has the calcium K-lines of an A1 star, the hydrogen lines of an A9 star, and the metallic lines of an F2 star. It has 2.05 times the mass of the Sun and 2.15 times the radius of the Sun. It radiates 38.96 times the luminosity of the Sun from its photosphere at an effective temperature of , giving it a white hue when viewed in the night sky. HD 195479 is deficient in iron, having an iron abundance 77.6% that of the Sun. It is estimated to be 631 million years old and it spins modestly with a projected rotational velocity of , common for Am stars. The star has two optical companions: a 12th magnitude star designated B located 5.7" away along a position angle of 88° and a 13th magnitude star designated C located 55.9" away along a position angle of 206°. They were both observed by American astronomer Sherburne Wesley Burnham during the late 19th century. B and C are both background stars that are far more distant than HD 195479. References Am stars A-type main-sequence stars Triple stars Delphinus BD+20 04602 195479 101213 7839 00379435621
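The quoted apparent magnitude, distance and absolute magnitude can be cross-checked with the usual distance-modulus relation; the sketch below neglects the extinction term and rounds the distance, so it only reproduces the quoted value approximately:
```python
from math import log10

m = 6.20              # apparent magnitude quoted above
d_ly = 288            # distance in light-years quoted above
d_pc = d_ly / 3.2616  # convert to parsecs (1 pc is about 3.2616 ly)

# Distance modulus: M = m + 5 - 5*log10(d_pc), neglecting extinction
M = m + 5 - 5 * log10(d_pc)
print(f"M = {M:+.2f}")   # roughly +1.5, close to the +1.53 quoted in the article
```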
HD 195479
Astronomy
414
2,904,878
https://en.wikipedia.org/wiki/Trading%20blows
Trading blows or trading licks is an endurance test in which the participants (usually two boys or young men) take turns, alternating between administering a blow to an opponent and assuming the agreed exposed position (e.g. bending over an object or grabbing the ankles) to endure the next one, using the same implement (e.g. a fraternity paddle), until only the winner can still bring himself to endure the gradually increasing pain in the progressively tormented target part of their anatomy (usually the posterior, in which case it is a form of spanking or the cheeks), which in the interest of fairness should be covered by a common uniform. This can be anything from regular jeans or pants, underwear, and finally, bared (naked) buttocks. As the blows are not given by the same person but by the parties themselves, the strongest-armed one actually has an unfair (but not always decisive) physical advantage. Such rather macho displays of willpower, restraining the instinct to avoid pain, can serve various purposes, including: a physical punishment, especially for quarreling, possibly the origin of the practice (this ritualized alternative to a more disabling or even lethal duel should also make its participants realize the futility of physical aggression) a motivation test, especially as part of an initiation process, such as hazing an obedience test, as in certain paddle games (possibly really an excuse for the rather sadistic amusement of the seniors) a duel, either personal or as champions representing similar, especially rivaling, groups; in certain fraternities, refusing such a challenge may result in exclusion from the membership, even for an alumnus as a game, either to 'proudly' display one's tenacity (often to impress some audience) or in the pursuit of a sadist and/or masochistic, erotic or pain-addicted, kick. as a sexual fetish Another game with the same name is often played among boys or young men, where two people agree on a place to hit the other (e.g. the shoulder or chest) and the two take turns trading punches until one person cannot stand the pain any longer. The first person to give up is the loser. See also One-upmanship Sources Game in a scout troop in St.Louis, Missouri in the 1950s- scroll to "the Baker paddle" Abuse Aggression
Trading blows
Biology
478
67,463,456
https://en.wikipedia.org/wiki/Presepsin
Presepsin (soluble CD14 subtype, sCD14-ST) is a 13-kDa cleavage product of the CD14 receptor. Function Presepsin is a soluble pattern recognition receptor (PRR). Presepsin in the circulation is an indicator of monocyte-macrophage activation in response to pathogens. Clinical relevance Several clinical studies have demonstrated that presepsin is a specific and sensitive marker for the diagnosis, severity assessment and outcome prediction of sepsis. In addition, presepsin can be used for diagnosing infections in patients with a chronic inflammatory condition, such as liver cirrhosis. References Blood proteins Biomarkers
Presepsin
Biology
136
54,892,406
https://en.wikipedia.org/wiki/NGC%20469
NGC 469 is a spiral galaxy in the constellation Pisces. Located approximately 167 million light-years from Earth, it was discovered by Albert Marth in 1864. See also List of galaxies List of spiral galaxies References External links Deep Sky Catalog SEDS 469 Pisces (constellation) Spiral galaxies 004753
NGC 469
Astronomy
66
8,837,155
https://en.wikipedia.org/wiki/Hakon%20Haugnes
Hakon Haugnes is one of the founders of the .name top-level domain founded and launched by Global Name Registry (GNR) in 2000/2001. Previously Mr Haugnes was a co-founder of Nameplanet.com, which provided personalized email addresses to 1 million users in March 2000. Hakon Haugnes is CFO/COO of Andurand Capital and responsible for all operational and financial (non-investment) aspects of the company. Hakon was previously Risk Manager for BlueGold Capital (2010-2012), reporting to the CFO and CIO on all risk management aspects of the hedge fund, which at its peak managed over US$2 billion. Mr Haugnes also developed BlueGold's information systems and headed up the in-house development team. Hakon was Business Analyst for BlueGold from 2009 to 2010. Prior to BlueGold, Haugnes was co-founder and president of Global Name Registry, a private company which was sold in Q4 2008 to VeriSign Inc (NASDAQ:VRSN). He served with the Norwegian Armed Forces as a strategist and holds a master's degree (honours) in mathematical modelling from the Institute of Cybernetics at the Norwegian University of Science and Technology (NTNU) and studied engineering at the Institut National des Sciences Appliquées (INSA) in Toulouse, France. External links Global Name Registry Nameplanet.com References People in information technology Living people Year of birth missing (living people) Place of birth missing (living people)
Hakon Haugnes
Technology
320
36,452,986
https://en.wikipedia.org/wiki/C8H15N3
{{DISPLAYTITLE:C8H15N3}} The molecular formula C8H15N3 (molar mass: 153.225 g/mol) may refer to: Impentamine 7-Methyl-1,5,7-triazabicyclo[4.4.0]dec-5-ene Molecular formulas
C8H15N3
Physics,Chemistry
76
3,976,441
https://en.wikipedia.org/wiki/Geodemographic%20segmentation
In marketing, geodemographic segmentation is a multivariate statistical classification technique for discovering whether the individuals of a population fall into different groups by making quantitative comparisons of multiple characteristics with the assumption that the differences within any group should be less than the differences between groups. Principles Geodemographic segmentation is based on two simple principles: People who live in the same neighborhood are more likely to have similar characteristics than are two people chosen at random. Neighborhoods can be categorized in terms of the characteristics of the population which they contain. Any two neighborhoods can be placed in the same category, i.e., they contain similar types of people, even though they are widely separated. Clustering algorithms The use of different algorithms leads to different results, but there is no single best approach for selecting the best algorithm, just as no algorithm offers any theoretical proof of its certainty. One of the most frequently used techniques in geodemographic segmentation is the widely known k-means clustering algorithm. In fact most of the current commercial geodemographic systems are based on a k-means algorithm. Still, clustering techniques coming from artificial neural networks, genetic algorithms, or fuzzy logic are more efficient within large, multidimensional databases (Brimicombe 2007). Neural networks can handle non-linear relationships, are robust to noise and exhibit a high degree of automation. They do not assume any hypotheses regarding the nature or distribution of the data and they provide valuable assistance in handling problems of a geographical nature that, to date, have been impossible to solve. One of the best known and most efficient neural network methods for achieving unsupervised clustering is the Self-Organizing Map (SOM). SOM has been proposed as an improvement over the k-means method, for it provides a more flexible approach to census data clustering. The SOM method has been recently used by Spielman and Thill (2008) to develop geodemographic clustering of a census dataset concerning New York City. Another way of characterizing an individual polygon's similarity to all the regions is based on fuzzy logic. The basic concept of fuzzy clustering is that an object may belong to more than one cluster. In binary logic, the set is limited by the binary yes–no definition, meaning that an object either belongs or does not belong to a cluster. Fuzzy clustering allows a spatial unit to belong to more than one cluster with varying membership values. Most studies concerning geodemographic analysis and fuzzy logic employ the Fuzzy C-Means algorithm and the Gustafson-Kessel algorithm, (Feng and Flowerdew 1999). Systems Famous geodemographic segmentation systems are Claritas Prizm (US), CanaCode Lifestyles (Canada), PSYTE HD (Canada), Tapestry (US), CAMEO (UK), ACORN (UK), and MOSAIC (UK). New systems targeting population subgroups are also emerging. For example, Segmentos examines the geodemographic lifestyles of Hispanics in the United States. Both MOSAIC and ACORN use Onomastics to infer the ethnicity from resident names. CanaCode Lifestyle Clusters CanaCode Lifestyle Clusters is developed by Manifold Data Mining and classifies Canadian postal codes into 18 distinct major lifestyle groups and 110 niche lifestyles. It uses current-year statistics on over 10,000 variables ranging from demographics to socioeconomic factors to expenditures to lifestyle traits (e.g. 
consumer behaviors) including product usage, media usage, and psychographics. PSYTE HD PSYTE HD Canada is a geodemographic market segmentation system that classifies Canadian postal codes and Dissemination Areas into 57 unique lifestyle groups and mutually exclusive neighborhood types. PSYTE HD Canada is built on the Canadian Census demographic and socioeconomic base in addition to various other third party data inputs combined in a state of the art cluster build environment. The resultant clusters represent the most accurate snapshots of Canadian neighborhoods available. PSYTE HD Canada is an effective tool for analyzing customer data and potential markets, gaining market intelligence and insight, and interpreting consumer behavior across the diverse Canadian marketplace. CAMEO system The CAMEO Classifications are a set of consumer classifications that are used internationally by organisations as part of their sales, marketing and network planning strategies. CAMEO UK has been built at postcode, household and individual level and classifies over 50 million British consumers. It has been built to accurately segment the British market into 68 distinct neighbourhood types and 10 key marketing segments. Internationally Global CAMEO is the largest consumer segmentation system in the world, covering 40 nations. There is also single global classification CAMEO International which segments across borders. CAMEO was developed and is maintained by Callcredit Information Group. Acorn system A Classification Of Residential Neighborhoods (Acorn) is developed by CACI in London. It is the only geodemographic tool currently available that is built using current year data rather than 2011 Census information. Acorn helps to analyse and understand consumers in order to increase engagement with customers and service users to deliver strategies across all channels. Acorn segments all 1.9 million UK postcodes into 6 categories, 18 groups and 62 types. MOSAIC system Mosaic UK is Experian's people classification system. Originally created by Prof Richard Webber (visiting Professor of Geography at Kings College University, London) in association with Experian. The latest version of Mosaic was released in 2009. It classifies the UK population into 15 main socio-economic groups and, within this, 66 different types. Mosaic UK is part of a family of Mosaic classifications that covers 29 countries including most of Western Europe, the United States, Australia and the Far East. Mosaic Global is Experian's global consumer classification tool. It is based on the simple proposition that the world's cities share common patterns of residential segregation. Mosaic Global is a consistent segmentation system that covers over 400 million of the world's households using local data from 29 countries. It has identified 10 types of residential neighbourhood that can be found in each of the countries. geoSmart system In Australia, geoSmart is a geodemographic segmentation system based on the principle that people with similar demographic profiles and lifestyles tend to live near each other. It is developed by an Australian supplier of geodemographic solutions, RDA Research. geoSmart geodemographic segments are produced from the Australian Census (Australian Bureau of Statistics) demographic measures and modeled characteristics, and the system is updated for recent household growth. The clustering creates a single segment code that is represented by a descriptive statement or a thumbnail sketch. 
In Australia, geoSmart is mainly used for database segmentation, customer acquisition, trade area profiling and letterbox targeting, although it can be used in a broad range of other applications. The Output Area Classification The Output Area Classification (OAC) is the UK Office for National Statistics' (ONS) free and open geodemographic segmentation based upon the UK Census of Population 2011. It classifies 41 census variables into a three-tier classification of 7, 21, and 52 groups. The perceived advantages of OAC over other commercial classifications stem from the fact that the methodology is open and documented, and that the data is open and freely available to both the public and commercial organizations, subject to licensing conditions. OAC has a wide variety of potential applications, from geographic analysis to social marketing and consumer profiling. The UK public sector is one of the main users of OAC. ESRI Community Tapestry This method classifies US neighborhoods into 67 market segments, based on socioeconomic and demographic factors, then consolidates these 67 segments into 14 types of LifeModes with names such as "High Society", "Senior Styles", and "Factories and Farms". The smallest spatial granularity of data is produced at the level of the U.S. Census Block Group. See also Market segmentation#Companies (proprietary segmentation databases) References Feng, Z., Flowerdew, R., 1999. The use of fuzzy classification to improve geodemographic targeting. In B.Gittings (Ed.), Innovations in GIS 6 London:Taylor &Francis, (pp. 133 –144). Demography Geodemography Geostatistics Market research Market segmentation Population geography Statistical classification
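To make the clustering-algorithms discussion above concrete, here is a minimal k-means sketch over a toy table of neighborhood attributes. The variables and values are invented for illustration; real geodemographic systems such as those described in this article use thousands of census and survey variables and far more careful preprocessing:
```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: rows are neighborhoods, columns are standardised attributes
# (e.g. median age, share of renters, average household income); values invented
X = rng.normal(size=(300, 3))
X[:100] += [2.0, 0.0, 0.0]      # three loose synthetic "neighborhood types"
X[100:200] += [0.0, 2.0, 0.0]

def kmeans(X, k, iters=50):
    """Plain k-means: alternate nearest-centroid assignment and centroid update."""
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):                 # keep the old centroid if a cluster empties
                centroids[j] = members.mean(axis=0)
    return labels, centroids

labels, centroids = kmeans(X, k=3)
print("cluster sizes:", np.bincount(labels, minlength=3))
print("cluster centroids (one per neighborhood type):")
print(centroids.round(2))
```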
Geodemographic segmentation
Environmental_science
1,694
73,392,991
https://en.wikipedia.org/wiki/Resilience%20%28power%20system%29
Power resilience refers to a company's ability to adapt to power outages. Frequent outages have forced businesses to take into account the "cost of not having access to power" in addition to the traditional "cost of power". Climate-related issues have intensified the attention on energy sustainability and resilience. In the United States, electric utility firms have registered over 2500 significant power outages since 2002, with almost half of them (specifically 1172) attributed to weather events, including storms, hurricanes, and other unspecified severe weather occurrences. These incidents often lead to significant economic losses. The Committee on Enhancing the Resilience of the Nation's Electric Power Transmission and Distribution System has developed strategies that seek to reduce the impact of large-scale, long-duration outages. Resilience is not just about preventing these outages from happening, but also limiting their scope and impact, restoring power quickly, and preparing for future events. Some parts of the United States still rely on regulated, vertically integrated utilities, while others have adopted competitive markets. Efforts to improve resilience must take into account this institutional and policy heterogeneity. The use of automation at the high-voltage level can improve grid reliability, but also introduces cybersecurity vulnerabilities. These "smart grids" use improved sensing, communication, automation technologies, and advanced metering infrastructure. Distributed energy resources are rapidly growing in some states, but most U.S. customers will continue to depend on the large-scale, interconnected, and hierarchically structured electric grid. Therefore, strategies to enhance electric power resilience must consider a diverse set of technical and institutional arrangements and a wide variety of hazards. There is no single solution that fits all situations when it comes to avoiding, planning for, coping with, and recovering from major outages. Definition According to the US Department of Homeland Security (DHS), resilience is defined as "the ability to adapt to changing conditions and withstand and rapidly recover from disruption due to emergencies". Causes Power outages can be caused by various events, not just weather conditions. These events can be classified as either "low-frequency high-impact" or "high-frequency low-impact." Dealing with low-frequency high-impact events, also known as "large area long duration" events, is particularly challenging due to the significant devastation they cause over a vast area for an extended period. These events are generally unpredictable and occur unexpectedly, but advances in weather and disaster forecasting technology can offer some warning time to prepare for certain situations. Power outages can be caused by a wide range of factors, including natural disasters, cyberattacks, equipment failure, human error, and political instability. The impact of a disruptive event on the power system infrastructure can be significant, depending on the severity of the event and the condition of the infrastructure. For example, a severe storm can knock out power to a large geographical area, while a cyberattack on the communication systems can disrupt the entire power grid. Additionally, the interdependence of different infrastructures, such as energy, transportation, and communication, can exacerbate the impact of a disruptive event. 
Finally, the spatial and temporal impacts of a disruptive event can affect how quickly power can be restored, as well as the level of damage to the infrastructure. Overall, managing the risk of power outages requires a comprehensive approach that considers a range of potential disruptive events and their potential impact on the power system infrastructure. Importance Regardless of the reasons, one growing concern is that power outages result in economic losses and hardship for people who have become increasingly reliant on electricity for even basic comforts. It is therefore essential that electrical power systems (EPSs) around the world are resilient. A resilient EPS should ensure uninterrupted power supply, even in the face of minor faults and major disruptive events. It should be robust enough to be reliable and have the ability to predict and prepare for potential outages. Additionally, a resilient EPS should have a mechanism to quickly recover and restore power to critical establishments. However, while power system reliability is well-defined and has established metrics in the electricity sector, resiliency is often confused with reliability, despite some similarities. According to the findings of the National Academies report, the smooth operation of the electric grid, which is organized in a hierarchical structure and tightly interconnected on a large scale, will remain crucial for ensuring dependable electric service to the majority of consumers over the next two decades. Power disruptions are problematic for both consumers and the electric system itself. These disruptions are typically caused by physical damage to local parts of the system, such as lightning strikes, falling trees, or equipment failure. The majority of outages affecting customers in the United States are caused by events that occur in the distribution system, while larger storms, natural phenomena, and operator errors can cause outages across the high-voltage system. A variety of events, such as hurricanes, ice storms, droughts, earthquakes, wildfires, and vandalism, can lead to outages. When power goes out, life becomes more challenging, especially in terms of communication, business operations, and traffic control. Brief outages are usually manageable, but longer and wider outages result in greater costs and inconveniences. Critical services like medical care, emergency services, and communications can be disrupted, leading to potential loss of life. The National Academies report focuses on building a resilient electric system that minimizes the adverse impacts of large outages, particularly blackouts that last several days or longer and extend over multiple areas or states, which are especially problematic for a modern economy that depends on reliable electric supply. Resilience vs reliability Despite the efforts of utilities to prevent and mitigate large-scale power outages, they still occur and cannot be eliminated due to the numerous potential sources of disruption to the power system. It is somewhat surprising that such outages are not more frequent, considering the magnitude of the system and the potential for problems. However, the planners and operators of the system have made great efforts over many years to ensure that the electric system is engineered and operated with a high level of reliability. In recent times, there has been an increased emphasis on resilience as well. 
The North American Electric Reliability Corporation (NERC), which is responsible for developing reliability standards for the bulk power system, defines reliability in terms of two fundamental concepts. Adequacy: the capability of the electricity system to meet the overall electricity demand and energy needs of end-users consistently, considering both planned and unexpected outages of system components that are reasonably anticipated. Operating reliability: the capability of the overall electrical power system to endure unexpected disruptions, like electrical faults or unforeseen component failures due to credible emergencies, without experiencing unmanaged, widespread power outages or harm to machinery. The system's reliability standards vary in practice, and while the bulk power system maintains a relatively high level of reliability throughout the United States, it cannot be made completely faultless due to its complexity as a "cyber-physical system." To ensure adequacy of electricity generation capability, a one-day-in-ten-years loss of load standard is commonly used, which means that the generation reserves must be sufficient to prevent involuntary load shedding due to inadequate supply from occurring more than once every ten years. However, with millions of intricate physical, communications, computational, and networked components and systems, the system is inherently complex and cannot attain perfect reliability. Resilience and reliability are two different concepts. Resilience, as defined by the Random House Dictionary of the English Language, refers to the ability to return to the original state after being stretched, compressed, or bent. Moreover, resilience involves recovering from adversity, illness, depression, or other similar situations. It also encompasses the ability to rebound and cope with outages effectively by reducing their impacts, regrouping quickly and efficiently after the event ends, and learning to handle future events better. See also Resilient control systems References Engineering concepts
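To make the one-day-in-ten-years adequacy criterion mentioned above concrete, here is a minimal, hypothetical sketch (not from the article): it estimates a loss-of-load expectation (LOLE) by convolving two-state generator outage models into a capacity outage table and counting the expected number of days per year on which available capacity falls short of the daily peak. All unit sizes, forced outage rates and load figures below are invented purely for illustration.

def capacity_outage_table(units):
    """units: list of (capacity_mw, forced_outage_rate).
    Returns a dict mapping available capacity -> probability, built by
    convolving independent two-state (in service / on outage) unit models."""
    dist = {0: 1.0}
    for cap, fo in units:
        new = {}
        for avail, p in dist.items():
            new[avail + cap] = new.get(avail + cap, 0.0) + p * (1 - fo)  # unit available
            new[avail] = new.get(avail, 0.0) + p * fo                    # unit on forced outage
        dist = new
    return dist

def lole_days_per_year(units, daily_peaks_mw):
    """Loss-of-load expectation: expected number of days per year on which
    available generation is below the daily peak load."""
    dist = capacity_outage_table(units)
    return sum(p for peak in daily_peaks_mw
                 for avail, p in dist.items() if avail < peak)

# Hypothetical system: five 200 MW units and two 400 MW units, each with a 4% forced outage rate.
units = [(200, 0.04)] * 5 + [(400, 0.04)] * 2
# Hypothetical daily peaks for one year, ramping between 900 MW and 1400 MW.
peaks = [900 + 500 * abs(d / 364 - 0.5) * 2 for d in range(365)]

lole = lole_days_per_year(units, peaks)
print(f"LOLE ~ {lole:.2f} days/year "
      f"({'meets' if lole <= 0.1 else 'does not meet'} the one-day-in-ten-years target)")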
Resilience (power system)
Engineering
1,662
71,368,236
https://en.wikipedia.org/wiki/Phacopsis%20vulpina
Phacopsis vulpina is a species of lichenicolous (lichen-dwelling) fungus in the family Parmeliaceae, and the type species of the genus Phacopsis. It was formally described as a new species in 1852 by French mycologist Edmond Tulasne. The fungus is restricted to the genus Letharia as a host and consequently has a Northern Hemisphere distribution. Externally, it is somewhat similar in appearance to P. lethariellae, but P. vulpina does not have a brown hypothecium (the area of tissue in the apothecium immediately below the subhymenium). References Parmeliaceae Fungi described in 1852 Taxa named by Edmond Tulasne Lichenicolous fungi Fungus species
Phacopsis vulpina
Biology
156
17,887,714
https://en.wikipedia.org/wiki/Supersonic%20gas%20separation
Supersonic gas separation is a technology for removing one or several gaseous components from a mixed gas (typically raw natural gas). The process condenses the target components by cooling the gas through expansion in a Laval nozzle and then separates the condensates from the dried gas through an integrated cyclonic gas/liquid separator. The separator uses only part of the available field pressure as its energy source and offers technical and commercial advantages over commonly used conventional technologies. Background Raw natural gas out of a well is usually not a salable product but a mix of various hydrocarbon gases with other gases, liquids and solid contaminants. This raw gas needs gas conditioning to get it ready for pipeline transport and processing in a gas processing plant to separate it into its components. Some of the common processing steps are CO2 removal, dehydration, LPG extraction and dew-pointing. Technologies used to achieve these steps are adsorption, absorption, membranes and low-temperature systems achieved by refrigeration or expansion through a Joule-Thomson valve or a turboexpander. If such expansion is instead carried out in a supersonic gas separator, mechanical, economic and operational advantages can frequently be gained, as detailed below. The supersonic gas separator A supersonic gas separator consists of several consecutive sections in tubular form, usually designed as flanged pieces of pipe. The feed gas (consisting of at least two components) first enters a section with an arrangement of static blades or wings, which induce a fast swirl in the gas. Thereafter the gas stream flows through a Laval nozzle, where it accelerates to supersonic speeds and undergoes a deep pressure drop to about 30% of feed pressure. This is a near-isentropic process, and the corresponding temperature reduction leads to condensation of target components of the mixed feed gas, which form a fine mist. The droplets agglomerate into larger drops, and the swirl of the gas causes cyclonic separation. The dry gas continues forward, while the liquid phase together with some slip gas (about 30% of the total stream) is separated by a concentric divider and exits the device as a separate stream. The final section consists of diffusers for both streams, where the gas is slowed down and about 80% of the feed pressure (depending on application) is recovered. This section might also include another set of static devices to undo the swirling motion. The installation scheme The supersonic separator requires a certain process scheme, which includes further auxiliary equipment and often forms a skid or processing block. The typical basic scheme for supersonic separation is an arrangement where the feed gas is pre-cooled in a heat exchanger by the dry stream of the separator unit. The liquid phase from the supersonic separator goes into a 2-phase or 3-phase separator, where the slip gas is separated from water and/or from liquid hydrocarbons. The gaseous phase of this secondary separator joins the dry gas of the supersonic separator, the liquids go for transport, storage or further processing and the water for treatment and disposal. Depending on the task at hand, other schemes are possible and in certain cases advantageous. These variations are an integral part of achieving thermodynamic efficiency in supersonic gas separation, and several of them are protected by patents. 
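As a rough illustration of the cooling effect described above (a sketch under ideal-gas assumptions, not design data for any real installation), the isentropic relation T2/T1 = (P2/P1)^((gamma - 1)/gamma) can be applied to an expansion down to 30% of the feed pressure. The feed temperature and heat-capacity ratio below are assumed, typical values.

def isentropic_temperature(t_feed_k, p_ratio, gamma=1.3):
    """Static temperature after an isentropic expansion to p_ratio = P2/P1
    (ideal gas); gamma ~ 1.3 is a typical value for natural gas mixtures."""
    return t_feed_k * p_ratio ** ((gamma - 1.0) / gamma)

feed_temperature = 288.0   # K (about 15 degrees Celsius), assumed
pressure_ratio = 0.30      # static pressure falls to ~30% of feed pressure in the nozzle

t_nozzle = isentropic_temperature(feed_temperature, pressure_ratio)
print(f"Static temperature in the nozzle: {t_nozzle:.0f} K "
      f"(a drop of about {feed_temperature - t_nozzle:.0f} K)")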
Advantages and application The supersonic gas separator recovers part of the pressure drop needed for cooling and as such has a higher efficiency than a JT valve in all conditions of operation. The supersonic gas separator can in many cases have a 10–20% higher efficiency than a turboexpander. The supersonic separator has a smaller footprint and a lower weight than a turboexpander or contactor columns. This is of particular advantage for platforms, FPSOs and crowded installations. It needs a lower capital investment and lower operating expenditure as it is completely static. Very little maintenance is required, and no (or greatly reduced) amounts of chemicals are needed. The fact that no operating or maintenance personnel are required might allow normally manned platforms to be unmanned, with the associated large savings in capital and operational expenditure. The fields of application developed commercially on an industrial scale to date are: dehydration dewpointing (water and/or hydrocarbons) LPG extraction Applications in the development stage for near-term commercialization are: CO2 and H2S bulk removal Commercial realization There are several patents on supersonic gas separation, relating to features of the device as well as methods. The technology has been researched and proven in laboratory installations since about 1998; special HYSYS modules and 3D computer models of the gas flow have been developed. The supersonic gas separation technology has meanwhile moved successfully into industrial applications (e.g. in Nigeria, Malaysia and Russia) for dehydration as well as for LPG extraction. Consultancy, engineering and equipment for supersonic gas separation are being offered by ENGO Engineering Ltd. under the brand "3S". They are also provided by Twister BV, a Dutch firm affiliated with Royal Dutch Shell, under the brand "Twister Supersonic Separator". References Natural gas technology
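The efficiency claim above rests on the fact that a near-isentropic expansion extracts work from the gas and therefore cools it much more deeply than isenthalpic throttling across a JT valve for the same pressure drop. The comparison below is illustrative only: the feed conditions are assumed, and the Joule-Thomson coefficient of roughly 0.45 K per bar is an assumed, order-of-magnitude value for a lean natural gas, not a measured figure.

feed_t = 288.0     # K, assumed feed temperature
feed_p = 90.0      # bar, assumed feed pressure
low_p = 0.30 * feed_p
gamma = 1.3        # assumed heat-capacity ratio for the gas mixture
mu_jt = 0.45       # K/bar, assumed Joule-Thomson coefficient

dt_jt = mu_jt * (feed_p - low_p)                                          # isenthalpic throttling
dt_isentropic = feed_t * (1 - (low_p / feed_p) ** ((gamma - 1) / gamma))  # nozzle expansion

print(f"Joule-Thomson cooling:   ~{dt_jt:.0f} K")
print(f"Near-isentropic cooling: ~{dt_isentropic:.0f} K")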
Supersonic gas separation
Chemistry
1,067
4,264,509
https://en.wikipedia.org/wiki/Ovoid%20%28projective%20geometry%29
In projective geometry an ovoid is a sphere-like pointset (surface) in a projective space of dimension d ≥ 3. Simple examples in a real projective space are hyperspheres (quadrics). The essential geometric properties of an ovoid Ω are: Any line intersects Ω in at most 2 points, The tangents at a point cover a hyperplane (and nothing more), and Ω contains no lines. Property 2) excludes degenerate cases (cones, ...). Property 3) excludes ruled surfaces (hyperboloids of one sheet, ...). An ovoid is the spatial analog of an oval in a projective plane. An ovoid is a special type of quadratic set. Ovoids play an essential role in constructing examples of Möbius planes and higher dimensional Möbius geometries. Definition of an ovoid In a projective space of dimension d ≥ 3 a set Ω of points is called an ovoid, if (1) Any line g meets Ω in at most 2 points. In the case of |g ∩ Ω| = 0 the line g is called a passing (or exterior) line, if |g ∩ Ω| = 1 the line g is a tangent line, and if |g ∩ Ω| = 2 the line g is a secant line. (2) At any point P ∈ Ω the tangent lines through P cover a hyperplane, the tangent hyperplane (i.e., a projective subspace of dimension d − 1). (3) Ω contains no lines. From the viewpoint of the hyperplane sections, an ovoid is a rather homogeneous object, because: For an ovoid Ω and a hyperplane ε, which contains at least two points of Ω, the subset Ω ∩ ε is an ovoid (or an oval, if d = 3) within the hyperplane ε. For finite projective spaces of dimension d ≥ 3 (i.e., the point set is finite, the space is pappian), the following result is true: If Ω is an ovoid in a finite projective space of dimension d ≥ 3, then d = 3. (In the finite case, ovoids exist only in 3-dimensional spaces.) In a finite projective space of order n > 2 (i.e. any line contains exactly n + 1 points) and dimension d = 3 any pointset Ω is an ovoid if and only if |Ω| = n² + 1 and no three points are collinear (on a common line). Replacing the word projective in the definition of an ovoid by affine gives the definition of an affine ovoid. If for a (projective) ovoid there is a suitable hyperplane ε not intersecting it, one can call this hyperplane ε the hyperplane at infinity and the ovoid becomes an affine ovoid in the affine space corresponding to ε. Also, any affine ovoid can be considered a projective ovoid in the projective closure (adding a hyperplane at infinity) of the affine space. Examples In real projective space (inhomogeneous representation): Ω₁ = {(x_1, ..., x_d) ∈ ℝ^d : x_1² + ... + x_d² = 1} (hypersphere) and Ω₂ = {(x_1, ..., x_d) ∈ ℝ^d : x_d = x_1² + ... + x_(d−1)²}. These two examples are quadrics and are projectively equivalent. Simple examples which are not quadrics can be obtained by the following constructions: (a) Glue one half of a hypersphere to a suitable hyperellipsoid in a smooth way. (b) In the first two examples replace the expression x_1² by x_1⁴. Remark: The real examples can not be converted into the complex case (projective space over ℂ). In a complex projective space of dimension d ≥ 3 there are no ovoidal quadrics, because in that case any non-degenerate quadric contains lines. But the following method guarantees many non-quadric ovoids: For any non-finite projective space the existence of ovoids can be proven using transfinite induction. Finite examples Any ovoid in a finite projective space of dimension d = 3 over a field of odd characteristic is a quadric. The last result can not be extended to even characteristic, because of the following non-quadric examples: For n = 2^h with h odd and the automorphism σ of K = GF(n) given by σ(x) = x^(2^((h+1)/2)), the pointset Ω = {(x, y, xy + x^(σ+2) + y^σ) : x, y ∈ K} ∪ {(∞)} is an ovoid in the 3-dimensional projective space over K (represented in inhomogeneous coordinates). Only in the case h = 1 (i.e. n = 2) is the ovoid Ω a quadric. Ω is called the Tits-Suzuki-ovoid. Criteria for an ovoid to be a quadric An ovoidal quadric has many symmetries. 
In particular: Let Ω be an ovoid in a projective space of dimension d ≥ 3 and ε a hyperplane. If the ovoid is symmetric to any point P ∈ ε \ Ω (i.e. there is an involutory perspectivity with center P which leaves Ω invariant), then the space is pappian and Ω is a quadric. An ovoid Ω in a projective space is a quadric, if the group of projectivities which leave Ω invariant operates 3-transitively on Ω, i.e. for two triples A1, A2, A3 and B1, B2, B3 of points there exists a projectivity π with π(Ai) = Bi, i = 1, 2, 3. In the finite case one gets from Segre's theorem: Let Ω be an ovoid in a finite 3-dimensional desarguesian projective space of odd order; then the space is pappian and Ω is a quadric. Generalization: semi ovoid Removing condition (1) from the definition of an ovoid results in the definition of a semi-ovoid: A point set Ω' of a projective space is called a semi-ovoid if the following conditions hold: (SO1) For any point P ∈ Ω' the tangents through the point P exactly cover a hyperplane. (SO2) Ω' contains no lines. A semi-ovoid is a special semi-quadratic set, which is a generalization of a quadratic set. The essential difference between a semi-quadratic set and a quadratic set is the fact that there can be lines which have 3 points in common with the set but are not contained in the set. Examples of semi-ovoids are the sets of isotropic points of a hermitian form. They are called hermitian quadrics. As for ovoids, there are criteria in the literature which make a semi-ovoid a hermitian quadric. See, for example. Semi-ovoids are used in the construction of examples of Möbius geometries. See also Ovoid (polar space) Möbius plane Notes References Further reading External links E. Hartmann: Planar Circle Geometries, an Introduction to Moebius-, Laguerre- and Minkowski Planes. Skript, TH Darmstadt (PDF; 891 kB), S. 121-123. Projective geometry Incidence geometry
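As a concrete, machine-checkable illustration of the finite examples above (the script and the particular quadratic form are an added illustration, not part of the article), the following verifies that the elliptic quadric x0² + x1² + x2·x3 = 0 in the 3-dimensional projective space PG(3,3) has n² + 1 = 10 points and that no three of them are collinear, so it satisfies the combinatorial characterization of an ovoid.

from itertools import product, combinations

q = 3  # GF(3); the binary form x0^2 + x1^2 has no nontrivial zero mod 3

def normalize(v):
    """Scale a nonzero vector so its first nonzero coordinate is 1 (a projective point)."""
    pivot = next(x for x in v if x)
    inv = pow(pivot, q - 2, q)          # multiplicative inverse mod q (q prime)
    return tuple((x * inv) % q for x in v)

points = {normalize(v) for v in product(range(q), repeat=4) if any(v)}
quadric = [p for p in points if (p[0]**2 + p[1]**2 + p[2]*p[3]) % q == 0]

def collinear(a, b, c):
    """Three distinct projective points are collinear iff c lies in the span of a and b."""
    line = {normalize(tuple((s*x + t*y) % q for x, y in zip(a, b)))
            for s in range(q) for t in range(q) if s or t}
    return c in line

print("points on the quadric:", len(quadric))     # expected: q*q + 1 = 10
print("collinear triple found:",
      any(collinear(a, b, c) for a, b, c in combinations(quadric, 3)))  # expected: False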
Ovoid (projective geometry)
Mathematics
1,303
6,449
https://en.wikipedia.org/wiki/Clock
A clock or chronometer is a device that measures and displays time. The clock is one of the oldest human inventions, meeting the need to measure intervals of time shorter than the natural units such as the day, the lunar month, and the year. Devices operating on several physical processes have been used over the millennia. Some predecessors to the modern clock may be considered "clocks" that are based on movement in nature: A sundial shows the time by displaying the position of a shadow on a flat surface. There is a range of duration timers, a well-known example being the hourglass. Water clocks, along with sundials, are possibly the oldest time-measuring instruments. A major advance occurred with the invention of the verge escapement, which made possible the first mechanical clocks around 1300 in Europe, which kept time with oscillating timekeepers like balance wheels. Traditionally, in horology (the study of timekeeping), the term clock was used for a striking clock, while a clock that did not strike the hours audibly was called a timepiece. This distinction is not generally made any longer. Watches and other timepieces that can be carried on one's person are usually not referred to as clocks. Spring-driven clocks appeared during the 15th century. During the 15th and 16th centuries, clockmaking flourished. The next development in accuracy occurred after 1656 with the invention of the pendulum clock by Christiaan Huygens. A major stimulus to improving the accuracy and reliability of clocks was the importance of precise time-keeping for navigation. The mechanism of a timepiece with a series of gears driven by a spring or weights is referred to as clockwork; the term is used by extension for a similar mechanism not used in a timepiece. The electric clock was patented in 1840, and electronic clocks were introduced in the 20th century, becoming widespread with the development of small battery-powered semiconductor devices. The timekeeping element in every modern clock is a harmonic oscillator, a physical object (resonator) that vibrates or oscillates at a particular frequency. This object can be a pendulum, a balance wheel, a tuning fork, a quartz crystal, or the vibration of electrons in atoms as they emit microwaves, the last of which is so precise that it serves as the definition of the second. Clocks have different ways of displaying the time. Analog clocks indicate time with a traditional clock face and moving hands. Digital clocks display a numeric representation of time. Two numbering systems are in use: 12-hour time notation and 24-hour notation. Most digital clocks use electronic mechanisms and LCD, LED, or VFD displays. For the blind and for use over telephones, speaking clocks state the time audibly in words. There are also clocks for the blind that have displays that can be read by touch. Etymology The word clock derives from the medieval Latin word for 'bell'——and has cognates in many European languages. Clocks spread to England from the Low Countries, so the English word came from the Middle Low German and Middle Dutch . The word is also derived from the Middle English , Old North French , or Middle Dutch , all of which mean 'bell'. History of time-measuring devices Sundials The apparent position of the Sun in the sky changes over the course of each day, reflecting the rotation of the Earth. Shadows cast by stationary objects move correspondingly, so their positions can be used to indicate the time of day. 
A sundial shows the time by displaying the position of a shadow on a (usually) flat surface that has markings that correspond to the hours. Sundials can be horizontal, vertical, or in other orientations. Sundials were widely used in ancient times. With knowledge of latitude, a well-constructed sundial can measure local solar time with reasonable accuracy, within a minute or two. Sundials continued to be used to monitor the performance of clocks until the 1830s, when the use of the telegraph and trains standardized time and time zones between cities. Devices that measure duration, elapsed time and intervals Many devices can be used to mark the passage of time without respect to reference time (time of day, hours, minutes, etc.) and can be useful for measuring duration or intervals. Examples of such duration timers are candle clocks, incense clocks, and the hourglass. Both the candle clock and the incense clock work on the same principle, wherein the consumption of resources is more or less constant, allowing reasonably precise and repeatable estimates of time passages. In the hourglass, fine sand pouring through a tiny hole at a constant rate indicates an arbitrary, predetermined passage of time. The resource is not consumed, but re-used. Water clocks Water clocks, along with sundials, are possibly the oldest time-measuring instruments, with the only exception being the day-counting tally stick. Given their great antiquity, where and when they first existed is not known and is perhaps unknowable. The bowl-shaped outflow is the simplest form of a water clock and is known to have existed in Babylon and Egypt around the 16th century BC. Other regions of the world, including India and China, also have early evidence of water clocks, but the earliest dates are less certain. Some authors, however, write about water clocks appearing as early as 4000 BC in these regions of the world. The Macedonian astronomer Andronicus of Cyrrhus supervised the construction of the Tower of the Winds in Athens in the 1st century BC, which housed a large clepsydra inside as well as multiple prominent sundials outside, allowing it to function as a kind of early clocktower. The Greek and Roman civilizations advanced water clock design with improved accuracy. These advances were passed on through Byzantine and Islamic times, eventually making their way back to Europe. Independently, the Chinese developed their own advanced water clocks () by 725 AD, passing their ideas on to Korea and Japan. Some water clock designs were developed independently, and some knowledge was transferred through the spread of trade. Pre-modern societies do not have the same precise timekeeping requirements that exist in modern industrial societies, where every hour of work or rest is monitored and work may start or finish at any time regardless of external conditions. Instead, water clocks in ancient societies were used mainly for astrological reasons. These early water clocks were calibrated with a sundial. While never reaching the level of accuracy of a modern timepiece, the water clock was the most accurate and commonly used timekeeping device for millennia until it was replaced by the more accurate pendulum clock in 17th-century Europe. Islamic civilization is credited with further advancing the accuracy of clocks through elaborate engineering. 
In 797 (or possibly 801), the Abbasid caliph of Baghdad, Harun al-Rashid, presented Charlemagne with an Asian elephant named Abul-Abbas together with a "particularly elaborate example" of a water clock. Pope Sylvester II introduced clocks to northern and western Europe around 1000 AD. Mechanical water clocks The first known geared clock was invented by the great mathematician, physicist, and engineer Archimedes during the 3rd century BC. Archimedes created his astronomical clock, which was also a cuckoo clock with birds singing and moving every hour. It is the first carillon clock as it plays music simultaneously with a person blinking his eyes, surprised by the singing birds. The Archimedes clock works with a system of four weights, counterweights, and strings regulated by a system of floats in a water container with siphons that regulate the automatic continuation of the clock. The principles of this type of clock are described by the mathematician and physicist Hero, who says that some of them work with a chain that turns a gear in the mechanism. Another Greek clock probably constructed at the time of Alexander was in Gaza, as described by Procopius. The Gaza clock was probably a Meteoroskopeion, i.e., a building showing celestial phenomena and the time. It had a pointer for the time and some automata similar to the Archimedes clock. There were 12 doors opening one every hour, with Hercules performing his labors, the Lion at one o'clock, etc., and at night a lamp became visible every hour, with 12 windows opening to show the time. The Tang dynasty Buddhist monk Yi Xing, along with government official Liang Lingzan, applied an escapement in 723 (or 725) to the workings of a water-powered armillary sphere and clock drive, which was the world's first clockwork escapement. The Song dynasty polymath Su Song (1020–1101) incorporated it into his monumental innovation of the astronomical clock tower of Kaifeng in 1088. His astronomical clock and rotating armillary sphere still relied on the use of either flowing water during the spring, summer, and autumn seasons or liquid mercury during the freezing temperatures of winter (i.e., hydraulics). In Su Song's waterwheel linkwork device, the action of the escapement's arrest and release was achieved by gravity exerted periodically as the continuous flow of liquid filled up containers of limited size. In a single line of evolution, Su Song's clock therefore united the concepts of the clepsydra and the mechanical clock into one device run by mechanics and hydraulics. In his memorial, Su Song wrote about this concept: According to your servant's opinion there have been many systems and designs for astronomical instruments during past dynasties all differing from one another in minor respects. But the principle of the use of water-power for the driving mechanism has always been the same. The heavens move without ceasing but so also does water flow (and fall). Thus if the water is made to pour with perfect evenness, then the comparison of the rotary movements (of the heavens and the machine) will show no discrepancy or contradiction; for the unresting follows the unceasing. Song was also strongly influenced by the earlier armillary sphere created by Zhang Sixun (976 AD), who also employed the escapement mechanism and used liquid mercury instead of water in the waterwheel of his astronomical clock tower. 
The mechanical clockworks for Su Song's astronomical tower featured a great driving-wheel that was 11 feet in diameter, carrying 36 scoops, into each of which water was poured at a uniform rate from the "constant-level tank". The main driving shaft of iron, with its cylindrical necks supported on iron crescent-shaped bearings, ended in a pinion, which engaged a gear wheel at the lower end of the main vertical transmission shaft. This great astronomical hydromechanical clock tower was about ten metres high (about 30 feet), featured a clock escapement, and was indirectly powered by a rotating wheel either with falling water or liquid mercury. A full-sized working replica of Su Song's clock exists in the Republic of China (Taiwan)'s National Museum of Natural Science, Taichung city. This full-scale, fully functional replica, approximately 12 meters (39 feet) in height, was constructed from Su Song's original descriptions and mechanical drawings. The Chinese escapement spread west and was the source for Western escapement technology. In the 12th century, Al-Jazari, an engineer from Mesopotamia (lived 1136–1206) who worked for the Artuqid king of Diyar-Bakr, Nasir al-Din, made numerous clocks of all shapes and sizes. The most reputed clocks included the elephant, scribe, and castle clocks, some of which have been successfully reconstructed. As well as telling the time, these grand clocks were symbols of the status, grandeur, and wealth of the Urtuq State. Knowledge of these mercury escapements may have spread through Europe with translations of Arabic and Spanish texts. Fully mechanical The word (from the Greek —'hour', and —'to tell') was used to describe early mechanical clocks, but the use of this word (still used in several Romance languages) for all timekeepers conceals the true nature of the mechanisms. For example, there is a record that in 1176, Sens Cathedral in France installed an 'horologe', but the mechanism used is unknown. According to Jocelyn de Brakelond, in 1198, during a fire at the abbey of St Edmundsbury (now Bury St Edmunds), the monks "ran to the clock" to fetch water, indicating that their water clock had a reservoir large enough to help extinguish the occasional fire. The word clock (via Medieval Latin from Old Irish , both meaning 'bell'), which gradually supersedes "horologe", suggests that it was the sound of bells that also characterized the prototype mechanical clocks that appeared during the 13th century in Europe. In Europe, between 1280 and 1320, there was an increase in the number of references to clocks and horologes in church records, and this probably indicates that a new type of clock mechanism had been devised. Existing clock mechanisms that used water power were being adapted to take their driving power from falling weights. This power was controlled by some form of oscillating mechanism, probably derived from existing bell-ringing or alarm devices. This controlled release of power – the escapement – marks the beginning of the true mechanical clock, which differed from the previously mentioned cogwheel clocks. The verge escapement mechanism appeared during the surge of true mechanical clock development, which did not need any kind of fluid power, like water or mercury, to work. These mechanical clocks were intended for two main purposes: for signalling and notification (e.g., the timing of services and public events) and for modeling the solar system. 
The former purpose is administrative; the latter arises naturally given the scholarly interests in astronomy, science, and astrology and how these subjects integrated with the religious philosophy of the time. The astrolabe was used both by astronomers and astrologers, and it was natural to apply a clockwork drive to the rotating plate to produce a working model of the solar system. Simple clocks intended mainly for notification were installed in towers and did not always require faces or hands. They would have announced the canonical hours or intervals between set times of prayer. Canonical hours varied in length as the times of sunrise and sunset shifted. The more sophisticated astronomical clocks would have had moving dials or hands and would have shown the time in various time systems, including Italian hours, canonical hours, and time as measured by astronomers at the time. Both styles of clocks started acquiring extravagant features, such as automata. In 1283, a large clock was installed at Dunstable Priory in Bedfordshire in southern England; its location above the rood screen suggests that it was not a water clock. In 1292, Canterbury Cathedral installed a 'great horloge'. Over the next 30 years, there were mentions of clocks at a number of ecclesiastical institutions in England, Italy, and France. In 1322, a new clock was installed in Norwich, an expensive replacement for an earlier clock installed in 1273. This had a large (2 metre) astronomical dial with automata and bells. The costs of the installation included the full-time employment of two clockkeepers for two years. Astronomical An elaborate water clock, the 'Cosmic Engine', was invented by Su Song, a Chinese polymath, and designed and constructed in China in 1092. This great astronomical hydromechanical clock tower was about ten metres high (about 30 feet) and was indirectly powered by a rotating wheel with falling water and liquid mercury, which turned an armillary sphere capable of calculating complex astronomical problems. In Europe, there were clocks constructed by Richard of Wallingford in St Albans by 1336, and by Giovanni de Dondi in Padua from 1348 to 1364. They no longer exist, but detailed descriptions of their design and construction survive, and modern reproductions have been made. They illustrate how quickly the theory of the mechanical clock had been translated into practical constructions, and also that one of the many impulses to their development had been the desire of astronomers to investigate celestial phenomena. The Astrarium of Giovanni Dondi dell'Orologio was a complex astronomical clock built between 1348 and 1364 in Padua, Italy, by the doctor and clock-maker Giovanni Dondi dell'Orologio. The Astrarium had seven faces and 107 moving gears; it showed the positions of the sun, the moon and the five planets then known, as well as religious feast days. The astrarium stood about 1 metre high, and consisted of a seven-sided brass or iron framework resting on 7 decorative paw-shaped feet. The lower section provided a 24-hour dial and a large calendar drum, showing the fixed feasts of the church, the movable feasts, and the position in the zodiac of the moon's ascending node. The upper section contained 7 dials, each about 30 cm in diameter, showing the positional data for the Primum Mobile, Venus, Mercury, the moon, Saturn, Jupiter, and Mars. 
Directly above the 24-hour dial is the dial of the Primum Mobile, so called because it reproduces the diurnal motion of the stars and the annual motion of the sun against the background of stars. Each of the 'planetary' dials used complex clockwork to produce reasonably accurate models of the planets' motion. These agreed reasonably well both with Ptolemaic theory and with observations. Wallingford's clock had a large astrolabe-type dial, showing the sun, the moon's age, phase, and node, a star map, and possibly the planets. In addition, it had a wheel of fortune and an indicator of the state of the tide at London Bridge. Bells rang every hour, the number of strokes indicating the time. Dondi's clock was a seven-sided construction, 1 metre high, with dials showing the time of day, including minutes, the motions of all the known planets, an automatic calendar of fixed and movable feasts, and an eclipse prediction hand rotating once every 18 years. It is not known how accurate or reliable these clocks would have been. They were probably adjusted manually every day to compensate for errors caused by wear and imprecise manufacture. Water clocks are sometimes still used today, and can be examined in places such as ancient castles and museums. The Salisbury Cathedral clock, built in 1386, is considered to be the world's oldest surviving mechanical clock that strikes the hours. Spring-driven Clockmakers developed their art in various ways. Building smaller clocks was a technical challenge, as was improving accuracy and reliability. Clocks could be impressive showpieces to demonstrate skilled craftsmanship, or less expensive, mass-produced items for domestic use. The escapement in particular was an important factor affecting the clock's accuracy, so many different mechanisms were tried. Spring-driven clocks appeared during the 15th century, although they are often erroneously credited to Nuremberg watchmaker Peter Henlein (or Henle, or Hele) around 1511. The earliest existing spring-driven clock is the chamber clock given to Philip the Good, Duke of Burgundy, around 1430, now in the Germanisches Nationalmuseum. Spring power presented clockmakers with a new problem: how to keep the clock movement running at a constant rate as the spring ran down. This resulted in the invention of the stackfreed and the fusee in the 15th century, and many other innovations, down to the invention of the modern going barrel in 1760. Early clock dials did not indicate minutes and seconds. A clock with a dial indicating minutes was illustrated in a 1475 manuscript by Paulus Almanus, and some 15th-century clocks in Germany indicated minutes and seconds. An early record of a seconds hand on a clock dates back to about 1560 on a clock now in the Fremersdorf collection. During the 15th and 16th centuries, clockmaking flourished, particularly in the metalworking towns of Nuremberg and Augsburg, and in Blois, France. Some of the more basic table clocks have only one time-keeping hand, with the dial between the hour markers being divided into four equal parts, making the clocks readable to the nearest 15 minutes. Other clocks were exhibitions of craftsmanship and skill, incorporating astronomical indicators and musical movements. The cross-beat escapement was invented in 1584 by Jost Bürgi, who also developed the remontoire. Bürgi's clocks were a great improvement in accuracy as they were correct to within a minute a day. 
These clocks helped the 16th-century astronomer Tycho Brahe to observe astronomical events with much greater precision than before. Pendulum The next development in accuracy occurred after 1656 with the invention of the pendulum clock. Earlier in the 17th century, Galileo had the idea to use a swinging bob to regulate the motion of a time-telling device. Christiaan Huygens, however, is usually credited as the inventor. He determined the mathematical formula that related pendulum length to time (about 99.4 cm or 39.1 inches for the one second movement) and had the first pendulum-driven clock made. The first model clock was built in 1657 in The Hague, but it was in England that the idea was taken up. The longcase clock (also known as the grandfather clock) was created to house the pendulum and works by the English clockmaker William Clement in 1670 or 1671. It was also at this time that clock cases began to be made of wood and clock faces to use enamel as well as hand-painted ceramics. In 1670, William Clement created the anchor escapement, an improvement over Huygens' crown escapement. Clement also introduced the pendulum suspension spring in 1671. The concentric minute hand was added to the clock by Daniel Quare, a London clockmaker, and others, and the second hand was first introduced. Hairspring In 1675, Huygens and Robert Hooke invented the spiral balance spring, or the hairspring, designed to control the oscillating speed of the balance wheel. This crucial advance finally made accurate pocket watches possible. The great English clockmaker Thomas Tompion was one of the first to use this mechanism successfully in his pocket watches, and he adopted the minute hand which, after a variety of designs were trialled, eventually stabilised into the modern-day configuration. The rack and snail striking mechanism for striking clocks was introduced during the 17th century and had distinct advantages over the 'countwheel' (or 'locking plate') mechanism. During the 20th century there was a common misconception that Edward Barlow invented rack and snail striking. In fact, his invention was connected with a repeating mechanism employing the rack and snail. The repeating clock, which chimes the number of hours (or even minutes) on demand, was invented by either Quare or Barlow in 1676. George Graham invented the deadbeat escapement for clocks in 1720. Marine chronometer A major stimulus to improving the accuracy and reliability of clocks was the importance of precise time-keeping for navigation. The position of a ship at sea could be determined with reasonable accuracy if a navigator could refer to a clock that lost or gained less than about 10 seconds per day. This clock could not contain a pendulum, which would be virtually useless on a rocking ship. In 1714, the British government offered large financial rewards to the value of 20,000 pounds for anyone who could determine longitude accurately. John Harrison, who dedicated his life to improving the accuracy of his clocks, later received considerable sums under the Longitude Act. In 1735, Harrison built his first chronometer, which he steadily improved on over the next thirty years before submitting it for examination. The clock had many innovations, including the use of bearings to reduce friction, weighted balances to compensate for the ship's pitch and roll in the sea and the use of two different metals to reduce the problem of expansion from heat. 
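As a quick check of Huygens' pendulum relation mentioned earlier in this section (an added illustration; the 99.4 cm figure appears in the text above, and standard gravity of 9.81 m/s^2 is assumed here), a pendulum with a 2-second full period, i.e. one that beats seconds, has length L = g*(T/(2*pi))^2.

import math

g = 9.81        # m/s^2, standard gravity (assumed)
period = 2.0    # s: a seconds pendulum takes one second per swing in each direction

length = g * (period / (2 * math.pi)) ** 2
print(f"seconds-pendulum length ~ {length * 100:.1f} cm")   # about 99.4 cm, matching the figure above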
Harrison's chronometer was tested in 1761 by his son and by the end of 10 weeks the clock was in error by less than 5 seconds. Mass production The British had dominated watch manufacture for much of the 17th and 18th centuries, but maintained a system of production that was geared towards high quality products for the elite. Although there was an attempt to modernise clock manufacture with mass-production techniques and the application of duplicating tools and machinery by the British Watch Company in 1843, it was in the United States that this system took off. In 1816, Eli Terry and some other Connecticut clockmakers developed a way of mass-producing clocks by using interchangeable parts. Aaron Lufkin Dennison started a factory in 1851 in Massachusetts that also used interchangeable parts, and by 1861 was running a successful enterprise incorporated as the Waltham Watch Company. Early electric In 1815, the English scientist Francis Ronalds published the first electric clock powered by dry pile batteries. Alexander Bain, a Scottish clockmaker, patented the electric clock in 1840. The electric clock's mainspring is wound either with an electric motor or with an electromagnet and armature. In 1841, Bain first patented the electromagnetic pendulum. By the end of the nineteenth century, the advent of the dry cell battery made it feasible to use electric power in clocks. Spring- or weight-driven clocks that use electricity, either alternating current (AC) or direct current (DC), to rewind the spring or raise the weight are classified as electromechanical clocks. This classification also applies to clocks that employ an electrical impulse to propel the pendulum. In electromechanical clocks the electricity serves no timekeeping function. These types of clocks were made as individual timepieces but were more commonly used in synchronized time installations in schools, businesses, factories, railroads and government facilities as a master clock and slave clocks. Where an AC electrical supply of stable frequency is available, timekeeping can be maintained very reliably by using a synchronous motor, essentially counting the cycles. The supply current alternates with an accurate frequency of 50 hertz in many countries, and 60 hertz in others. While the frequency may vary slightly during the day as the load changes, generators are designed to maintain an accurate number of cycles over a day, so the clock may be a fraction of a second slow or fast at any time, but will be perfectly accurate over a long time. The rotor of the motor rotates at a speed that is related to the alternation frequency. Appropriate gearing converts this rotation speed to the correct ones for the hands of the analog clock. Time in these cases is measured in several ways, such as by counting the cycles of the AC supply, vibration of a tuning fork, the behaviour of quartz crystals, or the quantum vibrations of atoms. Electronic circuits divide these high-frequency oscillations into slower ones that drive the time display. Quartz The piezoelectric properties of crystalline quartz were discovered by Jacques and Pierre Curie in 1880. The first crystal oscillator was invented in 1917 by Alexander M. Nicholson, after which the first quartz crystal oscillator was built by Walter G. Cady in 1921. In 1927 the first quartz clock was built by the Canadian-born Warren Marrison and J.W. Horton at Bell Telephone Laboratories. 
The following decades saw the development of quartz clocks as precision time measurement devices in laboratory settings; the bulky and delicate counting electronics, built with vacuum tubes at the time, limited their practical use elsewhere. The National Bureau of Standards (now NIST) based the time standard of the United States on quartz clocks from late 1929 until the 1960s, when it changed to atomic clocks. In 1969, Seiko produced the world's first quartz wristwatch, the Astron. Their inherent accuracy and low cost of production resulted in the subsequent proliferation of quartz clocks and watches. Atomic Currently, atomic clocks are the most accurate clocks in existence. They are considerably more accurate than quartz clocks as they can be accurate to within a few seconds over trillions of years. Atomic clocks were first theorized by Lord Kelvin in 1879. In the 1930s, the development of magnetic resonance created a practical method for doing this. A prototype ammonia maser device was built in 1949 at the U.S. National Bureau of Standards (NBS, now NIST). Although it was less accurate than existing quartz clocks, it served to demonstrate the concept. The first accurate atomic clock, a caesium standard based on a certain transition of the caesium-133 atom, was built by Louis Essen in 1955 at the National Physical Laboratory in the UK. Calibration of the caesium standard atomic clock was carried out by the use of the astronomical time scale ephemeris time (ET). As of 2013, the most stable atomic clocks are ytterbium clocks, which are stable to within less than two parts in 1 quintillion (2 × 10⁻¹⁸). Operation The invention of the mechanical clock in the 13th century initiated a change in timekeeping methods from continuous processes, such as the motion of the gnomon's shadow on a sundial or the flow of liquid in a water clock, to periodic oscillatory processes, such as the swing of a pendulum or the vibration of a quartz crystal, which had the potential for more accuracy. All modern clocks use oscillation. Although the mechanisms they use vary, all oscillating clocks, mechanical, electric, and atomic, work similarly and can be divided into analogous parts. They consist of an object that repeats the same motion over and over again, an oscillator, with a precisely constant time interval between each repetition, or 'beat'. Attached to the oscillator is a controller device, which sustains the oscillator's motion by replacing the energy it loses to friction, and converts its oscillations into a series of pulses. The pulses are then counted by some type of counter, and the number of counts is converted into convenient units, usually seconds, minutes, hours, etc. Finally, some kind of indicator displays the result in human readable form. Power source Oscillator The timekeeping element in every modern clock is a harmonic oscillator, a physical object (resonator) that vibrates or oscillates repetitively at a precisely constant frequency. In mechanical clocks, this is either a pendulum or a balance wheel. In some early electronic clocks and watches, such as the Accutron, it is a tuning fork. In quartz clocks and watches, it is a quartz crystal. In atomic clocks, it is the vibration of electrons in atoms as they emit microwaves. In early mechanical clocks before 1657, it was a crude balance wheel or foliot which was not a harmonic oscillator because it lacked a balance spring. As a result, they were very inaccurate, with errors of perhaps an hour a day. 
The advantage of a harmonic oscillator over other forms of oscillator is that it employs resonance to vibrate at a precise natural resonant frequency or "beat" dependent only on its physical characteristics, and resists vibrating at other rates. The possible precision achievable by a harmonic oscillator is measured by a parameter called its Q, or quality factor, which increases (other things being equal) with its resonant frequency. This is why there has been a long-term trend toward higher frequency oscillators in clocks. Balance wheels and pendulums always include a means of adjusting the rate of the timepiece. Quartz timepieces sometimes include a rate screw that adjusts a capacitor for that purpose. Atomic clocks are primary standards, and their rate cannot be adjusted. Synchronized or slave clocks Some clocks rely for their accuracy on an external oscillator; that is, they are automatically synchronized to a more accurate clock: Slave clocks, used in large institutions and schools from the 1860s to the 1970s, kept time with a pendulum, but were wired to a master clock in the building, and periodically received a signal to synchronize them with the master, often on the hour. Later versions without pendulums were triggered by a pulse from the master clock and certain sequences used to force rapid synchronization following a power failure. Synchronous electric clocks do not have an internal oscillator, but count cycles of the 50 or 60 Hz oscillation of the AC power line, which is synchronized by the utility to a precision oscillator. The counting may be done electronically, usually in clocks with digital displays, or, in analog clocks, the AC may drive a synchronous motor which rotates an exact fraction of a revolution for every cycle of the line voltage, and drives the gear train. Although changes in the grid line frequency due to load variations may cause the clock to temporarily gain or lose several seconds during the course of a day, the total number of cycles per 24 hours is maintained extremely accurately by the utility company, so that the clock keeps time accurately over long periods. Computer real-time clocks keep time with a quartz crystal, but can be periodically (usually weekly) synchronized over the Internet to atomic clocks (UTC), using the Network Time Protocol (NTP). Radio clocks keep time with a quartz crystal, but are periodically synchronized to time signals transmitted from dedicated standard time radio stations or satellite navigation signals, which are set by atomic clocks. Controller This has the dual function of keeping the oscillator running by giving it 'pushes' to replace the energy lost to friction, and converting its vibrations into a series of pulses that serve to measure the time. In mechanical clocks, this is the escapement, which gives precise pushes to the swinging pendulum or balance wheel, and releases one gear tooth of the escape wheel at each swing, allowing all the clock's wheels to move forward a fixed amount with each swing. In electronic clocks this is an electronic oscillator circuit that gives the vibrating quartz crystal or tuning fork tiny 'pushes', and generates a series of electrical pulses, one for each vibration of the crystal, which is called the clock signal. In atomic clocks the controller is an evacuated microwave cavity attached to a microwave oscillator controlled by a microprocessor. A thin gas of caesium atoms is released into the cavity where they are exposed to microwaves. 
A laser measures how many atoms have absorbed the microwaves, and an electronic feedback control system called a phase-locked loop tunes the microwave oscillator until it is at the frequency that causes the atoms to vibrate and absorb the microwaves. Then the microwave signal is divided by digital counters to become the clock signal. In mechanical clocks, the low Q of the balance wheel or pendulum oscillator made them very sensitive to the disturbing effect of the impulses of the escapement, so the escapement had a great effect on the accuracy of the clock, and many escapement designs were tried. The higher Q of resonators in electronic clocks makes them relatively insensitive to the disturbing effects of the drive power, so the driving oscillator circuit is a much less critical component. Counter chain This counts the pulses and adds them up to get traditional time units of seconds, minutes, hours, etc. It usually has a provision for setting the clock by manually entering the correct time into the counter. In mechanical clocks this is done mechanically by a gear train, known as the wheel train. The gear train also has a second function; to transmit mechanical power from the power source to run the oscillator. There is a friction coupling called the 'cannon pinion' between the gears driving the hands and the rest of the clock, allowing the hands to be turned to set the time. In digital clocks a series of integrated circuit counters or dividers add the pulses up digitally, using binary logic. Often pushbuttons on the case allow the hour and minute counters to be incremented and decremented to set the time. Indicator This displays the count of seconds, minutes, hours, etc. in a human readable form. The earliest mechanical clocks in the 13th century did not have a visual indicator and signalled the time audibly by striking bells. Many clocks to this day are striking clocks which strike the hour. Analog clocks display time with an analog clock face, which consists of a dial with the numbers 1 through 12 or 24, the hours in the day, around the outside. The hours are indicated with an hour hand, which makes one or two revolutions in a day, while the minutes are indicated by a minute hand, which makes one revolution per hour. In mechanical clocks a gear train drives the hands; in electronic clocks the circuit produces pulses every second which drive a stepper motor and gear train, which move the hands. Digital clocks display the time in periodically changing digits on a digital display. A common misconception is that a digital clock is more accurate than an analog wall clock, but the indicator type is separate and apart from the accuracy of the timing source. Talking clocks and the speaking clock services provided by telephone companies speak the time audibly, using either recorded or digitally synthesized voices. Types Clocks can be classified by the type of time display, as well as by the method of timekeeping. Time display methods Analog Analog clocks usually use a clock face which indicates time using rotating pointers called "hands" on a fixed numbered dial or dials. The standard clock face, known universally throughout the world, has a short "hour hand" which indicates the hour on a circular dial of 12 hours, making two revolutions per day, and a longer "minute hand" which indicates the minutes in the current hour on the same dial, which is also divided into 60 minutes. It may also have a "second hand" which indicates the seconds in the current minute. 
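A minimal sketch of the counter-chain idea described above for a quartz digital clock (illustrative only; the 32,768 Hz crystal is the usual watch-crystal frequency, and the arithmetic below stands in for the cascade of binary dividers and counters in a real movement):

CRYSTAL_HZ = 32_768          # 2**15; fifteen successive divide-by-two stages yield 1 Hz

def clock_display(crystal_cycles):
    """Convert a raw count of crystal oscillations into hh:mm:ss,
    the way a divider chain and counters would."""
    total_seconds = crystal_cycles // CRYSTAL_HZ
    seconds = total_seconds % 60
    minutes = (total_seconds // 60) % 60
    hours = (total_seconds // 3600) % 24
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"

# After 4 hours, 5 minutes and 6 seconds' worth of oscillations:
print(clock_display((4 * 3600 + 5 * 60 + 6) * CRYSTAL_HZ))   # -> 04:05:06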
The only other widely used clock face today is the 24 hour analog dial, because of the use of 24 hour time in military organizations and timetables. Before the modern clock face was standardized during the Industrial Revolution, many other face designs were used throughout the years, including dials divided into 6, 8, 10, and 24 hours. During the French Revolution the French government tried to introduce a 10-hour clock, as part of their decimal-based metric system of measurement, but it did not achieve widespread use. An Italian 6 hour clock was developed in the 18th century, presumably to save power (a clock or watch striking 24 times uses more power). Another type of analog clock is the sundial, which tracks the sun continuously, registering the time by the shadow position of its gnomon. Because the sun does not adjust to daylight saving time, users must add an hour during that time. Corrections must also be made for the equation of time, and for the difference between the longitudes of the sundial and of the central meridian of the time zone that is being used (i.e. 15 degrees east of the prime meridian for each hour that the time zone is ahead of GMT). Sundials use some or part of the 24 hour analog dial. There also exist clocks which use a digital display despite having an analog mechanism—these are commonly referred to as flip clocks. Alternative systems have been proposed. For example, the "Twelv" clock indicates the current hour using one of twelve colors, and indicates the minute by showing a proportion of a circular disk, similar to a moon phase. Digital Digital clocks display a numeric representation of time. Two numeric display formats are commonly used on digital clocks: the 24-hour notation with hours ranging 00–23; the 12-hour notation with AM/PM indicator, with hours indicated as 12AM, followed by 1AM–11AM, followed by 12PM, followed by 1PM–11PM (a notation mostly used in domestic environments). Most digital clocks use electronic mechanisms and LCD, LED, or VFD displays; many other display technologies are used as well (cathode-ray tubes, nixie tubes, etc.). After a reset, battery change or power failure, these clocks without a backup battery or capacitor either start counting from 12:00, or stay at 12:00, often with blinking digits indicating that the time needs to be set. Some newer clocks will reset themselves based on radio or Internet time servers that are tuned to national atomic clocks. Since the introduction of digital clocks in the 1960s, there has been a notable decline in the use of analog clocks. Some clocks, called 'flip clocks', have digital displays that work mechanically. The digits are painted on sheets of material which are mounted like the pages of a book. Once a minute, a page is turned over to reveal the next digit. These displays are usually easier to read in brightly lit conditions than LCDs or LEDs. Also, they do not go back to 12:00 after a power interruption. Flip clocks generally do not have electronic mechanisms. Usually, they are driven by AC-synchronous motors. Hybrid (analog-digital) Clocks with analog quadrants, with a digital component, usually minutes and hours displayed analogously and seconds displayed in digital mode. Auditory For convenience, distance, telephony or blindness, auditory clocks present the time as sounds. The sound is either spoken natural language, (e.g. "The time is twelve thirty-five"), or as auditory codes (e.g. number of sequential bell rings on the hour represents the number of the hour like the bell, Big Ben). 
Most telecommunication companies also provide a speaking clock service as well. Word Word clocks are clocks that display the time visually using sentences. E.g.: "It's about three o'clock." These clocks can be implemented in hardware or software. Projection Some clocks, usually digital ones, include an optical projector that shines a magnified image of the time display onto a screen or onto a surface such as an indoor ceiling or wall. The digits are large enough to be easily read, without using glasses, by persons with moderately imperfect vision, so the clocks are convenient for use in their bedrooms. Usually, the timekeeping circuitry has a battery as a backup source for an uninterrupted power supply to keep the clock on time, while the projection light only works when the unit is connected to an A.C. supply. Completely battery-powered portable versions resembling flashlights are also available. Tactile Auditory and projection clocks can be used by people who are blind or have limited vision. There are also clocks for the blind that have displays that can be read by using the sense of touch. Some of these are similar to normal analog displays, but are constructed so the hands can be felt without damaging them. Another type is essentially digital, and uses devices that use a code such as Braille to show the digits so that they can be felt with the fingertips. Multi-display Some clocks have several displays driven by a single mechanism, and some others have several completely separate mechanisms in a single case. Clocks in public places often have several faces visible from different directions, so that the clock can be read from anywhere in the vicinity; all the faces show the same time. Other clocks show the current time in several time-zones. Watches that are intended to be carried by travellers often have two displays, one for the local time and the other for the time at home, which is useful for making pre-arranged phone calls. Some equation clocks have two displays, one showing mean time and the other solar time, as would be shown by a sundial. Some clocks have both analog and digital displays. Clocks with Braille displays usually also have conventional digits so they can be read by sighted people. Purposes Clocks are in homes, offices and many other places; smaller ones (watches) are carried on the wrist or in a pocket; larger ones are in public places, e.g. a railway station or church. A small clock is often shown in a corner of computer displays, mobile phones and many MP3 players. The primary purpose of a clock is to display the time. Clocks may also have the facility to make a loud alert signal at a specified time, typically to waken a sleeper at a preset time; they are referred to as alarm clocks. The alarm may start at a low volume and become louder, or have the facility to be switched off for a few minutes then resume. Alarm clocks with visible indicators are sometimes used to indicate to children too young to read the time that the time for sleep has finished; they are sometimes called training clocks. A clock mechanism may be used to control a device according to time, e.g. a central heating system, a VCR, or a time bomb (see: digital counter). Such mechanisms are usually called timers. Clock mechanisms are also used to drive devices such as solar trackers and astronomical telescopes, which have to turn at accurately controlled speeds to counteract the rotation of the Earth. 
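As a rough, illustrative calculation (assuming a sidereal day of about 86,164 seconds, an approximation introduced for this example rather than a figure taken from the sources above), the rate at which such a clock drive must turn the telescope can be computed as follows:

SIDEREAL_DAY_S = 86164.1                   # one rotation of the Earth relative to the stars, in seconds
rate_deg_per_s = 360.0 / SIDEREAL_DAY_S    # degrees the drive must turn per second
rate_arcsec_per_s = rate_deg_per_s * 3600.0
print(round(rate_arcsec_per_s, 2))         # about 15.04 arcseconds per second

In other words, the drive must rotate at roughly 15 arcseconds per second of time to hold a star stationary in the field of view.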
Most digital computers depend on an internal signal at constant frequency to synchronize processing; this is referred to as a clock signal. (A few research projects are developing CPUs based on asynchronous circuits.) Some equipment, including computers, also maintains time and date for use as required; this is referred to as a time-of-day clock, and is distinct from the system clock signal, although possibly based on counting its cycles. Time standards For some scientific work timing of the utmost accuracy is essential. It is also necessary to have a standard of the maximum accuracy against which working clocks can be calibrated. An ideal clock would give the time to unlimited accuracy, but this is not realisable. Many physical processes, in particular including some transitions between atomic energy levels, occur at exceedingly stable frequency; counting cycles of such a process can give a very accurate and consistent time—clocks which work this way are usually called atomic clocks. Such clocks are typically large, very expensive, require a controlled environment, and are far more accurate than required for most purposes; they are typically used in a standards laboratory. Navigation Until advances in the late twentieth century, navigation depended on the ability to measure latitude and longitude. Latitude can be determined through celestial navigation; the measurement of longitude requires accurate knowledge of time. This need was a major motivation for the development of accurate mechanical clocks. John Harrison created the first highly accurate marine chronometer in the mid-18th century. The Noon gun in Cape Town still fires an accurate signal to allow ships to check their chronometers. Many buildings near major ports used to have (some still do) a large ball mounted on a tower or mast arranged to drop at a pre-determined time, for the same purpose. While satellite navigation systems such as GPS require unprecedentedly accurate knowledge of time, this is supplied by equipment on the satellites; vehicles no longer need timekeeping equipment. Sports and games Clocks can be used to measure varying periods of time in games and sports. Stopwatches can be used to time the performance of track athletes. Chess clocks are used to limit the board game players' time to make a move. In various sports, game clocks measure the duration of the game or subdivisions of the game, while other clocks may be used for tracking different durations; these include play clocks, shot clocks, and pitch clocks. Culture Folklore and superstition In the United Kingdom, clocks are associated with various beliefs, many involving death or bad luck. In legends, clocks have reportedly stopped of their own accord upon a nearby person's death, especially those of monarchs. The clock in the House of Lords supposedly stopped at "nearly" the hour of George III's death in 1820, the one at Balmoral Castle stopped during the hour of Queen Victoria's death, and similar legends are related about clocks associated with William IV and Elizabeth I. Many superstitions exist about clocks. One stopping before a person has died may foretell coming death. Similarly, if a clock strikes during a church hymn or a marriage ceremony, death or calamity is prefigured for the parishioners or a spouse, respectively. Death or ill events are foreshadowed if a clock strikes the wrong time. It may also be unlucky to have a clock face a fire or to speak while a clock is striking. 
In Chinese culture, giving a clock () is often taboo, especially to the elderly, as it is a homophone of the act of attending another's funeral (). Specific types Awards (GPHG) See also 24-hour analog dial Allan variance Allen-Bradley Clock Tower at Rockwell Automation Headquarters Building (Wisconsin) American Watchmakers-Clockmakers Institute BaselWorld Biological clock Clockarium The clock as herald of the Industrial Revolution (Lewis Mumford) Clock drift Clock ident Clock network Clock of the Long Now Colgate Clock (Indiana) Colgate Clock (New Jersey), largest clock in US Cosmo Clock 21, world's largest clock Cox's timepiece Cuckooland Museum Date and time representation by country Debt clock Le Défenseur du Temps (automata) Department of Defense master clock (U.S.) Doomsday Clock Earth clock Federation of the Swiss Watch Industry FH Guard tour patrol system (watchclocks) Iron Ring Clock Jens Olsen's World Clock Jewel bearing List of biggest clock faces List of international common standards List of largest cuckoo clocks National Association of Watch and Clock Collectors Replica watch Rubik's Clock Star clock Singing bird box System time Timeline of time measurement technology Watchmaker Notes and references Bibliography Baillie, G.H., O. Clutton, & C.A. Ilbert. Britten's Old Clocks and Watches and Their Makers (7th ed.). Bonanza Books (1956). Bolter, David J. Turing's Man: Western Culture in the Computer Age. The University of North Carolina Press, Chapel Hill, NC (1984). pbk. Summary of the role of "the clock" in its setting the direction of philosophic movement for the "Western World". Cf. picture on p. 25 showing the verge and foliot. Bolton derived the picture from Macey, p. 20. Edey, Winthrop. French Clocks. New York: Walker & Co. (1967). Kak, Subhash, Babylonian and Indian Astronomy: Early Connections. 2003. Kumar, Narendra "Science in Ancient India" (2004). . Landes, David S. Revolution in Time: Clocks and the Making of the Modern World. Cambridge: Harvard University Press (1983). Landes, David S. Clocks & the Wealth of Nations, Daedalus Journal, Spring 2003. Lloyd, Alan H. "Mechanical Timekeepers", A History of Technology, Vol. III. Edited by Charles Joseph Singer et al. Oxford: Clarendon Press (1957), pp. 648–675. Macey, Samuel L., Clocks and the Cosmos: Time in Western Life and Thought, Archon Books, Hamden, Conn. (1980). North, John. God's Clockmaker: Richard of Wallingford and the Invention of Time. London: Hambledon and London (2005). Opie, Iona, & Moira Tatem. "A Dictionary of Superstitions". Oxford: Oxford University Press (1990). Palmer, Brooks. The Book of American Clocks, The Macmillan Co. (1979). Robinson, Tom. The Longcase Clock. Suffolk, England: Antique Collector's Club (1981). Smith, Alan. The International Dictionary of Clocks. London: Chancellor Press (1996). Tardy. French Clocks the World Over. Part I and II. Translated with the assistance of Alexander Ballantyne. Paris: Tardy (1981). Yoder, Joella Gerstmeyer. Unrolling Time: Christiaan Huygens and the Mathematization of Nature. New York: Cambridge University Press (1988). Zea, Philip, & Robert Cheney. Clock Making in New England: 1725–1825. Old Sturbridge Village (1992). External links National Association of Watch & Clock Collectors Museum Blackboard clock Time measurement systems Articles containing video clips
Clock
Physics,Technology,Engineering
10,644
42,753,276
https://en.wikipedia.org/wiki/Bougainvillea%20%C3%97%20buttiana
Bougainvillea × buttiana is a flowering plant, a garden hybrid of Bougainvillea glabra and Bougainvillea peruviana. Growing to tall by broad, it is an evergreen vine, with thorny stems and tiny trumpet-shaped white flowers, usually appearing in clusters surrounded by three showy bright magenta-rose papery bracts. The leaves are ovate and dark green. This plant can be grown in a warm temperate or subtropical environment where the temperature does not fall below freezing (), against a south-facing wall in full sun. Numerous cultivars have been developed, of which the following have gained the Royal Horticultural Society's Award of Garden Merit: 'Miss Manila' 'Mrs Butt' 'Poulton's Special' References External links Birmingham botanical gardens Nyctaginaceae Hybrid plants
Bougainvillea × buttiana
Biology
172
6,689,642
https://en.wikipedia.org/wiki/Subclinical%20infection
A subclinical infection—sometimes called a preinfection or inapparent infection—is an infection by a pathogen that causes few or no signs or symptoms of infection in the host. Subclinical infections can occur in both humans and animals. Depending on the pathogen, which can be a virus or intestinal parasite, the host may be infectious and able to transmit the pathogen without ever developing symptoms; such a host is called an asymptomatic carrier. Many pathogens, including HIV, typhoid fever, and coronaviruses such as COVID-19 spread in their host populations through subclinical infection. Not all hosts of asymptomatic subclinical infections will become asymptomatic carriers. For example, hosts of Mycobacterium tuberculosis bacteria will only develop active tuberculosis in approximately one-tenth of cases; the majority of those infected by Mtb bacteria have latent tuberculosis, a non-infectious type of tuberculosis that does not produce symptoms in individuals with sufficient immune responses. Because subclinical infections often occur without eventual overt sign, in some cases their presence is only identified by microbiological culture or DNA techniques such as polymerase chain reaction (PCR) tests. Transmission In humans Many pathogens are transmitted through their host populations by hosts with few or no symptoms, including sexually transmitted infections such as syphilis and genital warts. In other cases, a host may develop more symptoms as the infection progresses beyond its incubation period. These hosts create a natural reservoir of individuals that can transmit a pathogen to other individuals. Because cases often do not come to clinical attention, health statistics frequently are unable to measure the true prevalence of an infection in a population. This prevents accurate modeling of its transmissibility. In animals Some animal pathogens are also transmitted through subclinical infections. The A(H5) and A(H7) strains of avian influenza are divided into two categories: low pathogenicity avian influenza (LPAI) viruses, and highly pathogenic avian influenza (HPAI) viruses. While HPAI viruses have a very high mortality rate for chickens, LPAI viruses are very mild and produce few, if any symptoms; outbreaks in a flock may go undetected without ongoing testing. Wild ducks and other waterfowl are asymptomatic carriers of avian influenza, notably HPAI, and can be infected without showing signs of illness. The prevalence of subclinical HPAI infection in waterfowl has contributed to the international outbreak of highly lethal H5N8 virus that began in early 2020. Pathogens known to cause subclinical infection The following pathogens (together with their symptomatic illnesses) are known to be carried asymptomatically, often in a large percentage of the potential host population: Baylisascaris procyonis Bordetella pertussis (Pertussis or whooping cough) Chlamydia pneumoniae Chlamydia trachomatis (Chlamydia) Clostridioides difficile Cyclospora cayetanensis Dengue virus Dientamoeba fragilis Entamoeba histolytica Enterotoxigenic Escherichia coli Epstein–Barr virus Group A streptococcal infection Helicobacter pylori Herpes simplex (oral herpes, genital herpes, etc.) 
HIV-1 (HIV/AIDS) Influenza (strains) Legionella pneumophila (Legionnaires' disease) Measles viruses Mycobacterium leprae (leprosy) Mycobacterium tuberculosis (tuberculosis) Neisseria gonorrhoeae (gonorrhoea) Neisseria meningitidis (Meningitis) Nontyphoidal Salmonella Noroviruses Poliovirus (Poliomyelitis) Plasmodium (Malaria) Rabies lyssavirus (Rabies) Rhinoviruses (Common cold) Salmonella enterica serovar Typhi (Typhoid fever) SARS-CoV-2 (COVID-19) and other coronaviruses Staphylococcus aureus Streptococcus pneumoniae (Bacterial pneumonia) Treponema pallidum (syphilis) See also References Further reading Epidemiology Infectious diseases Medical terminology Symptoms
Subclinical infection
Environmental_science
913
1,765,998
https://en.wikipedia.org/wiki/Wildlife%20conservation
Wildlife conservation refers to the practice of protecting wild species and their habitats in order to maintain healthy wildlife species or populations and to restore, protect or enhance natural ecosystems. Major threats to wildlife include habitat destruction, degradation, fragmentation, overexploitation, poaching, pollution, climate change, and the illegal wildlife trade. The IUCN estimates that 42,100 species of the ones assessed are at risk for extinction. Expanding to all existing species, a 2019 UN report on biodiversity put this estimate even higher at a million species. It is also being acknowledged that an increasing number of ecosystems on Earth containing endangered species are disappearing. To address these issues, there have been both national and international governmental efforts to preserve Earth's wildlife. Prominent conservation agreements include the 1973 Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) and the 1992 Convention on Biological Diversity (CBD). There are also numerous nongovernmental organizations (NGO's) dedicated to conservation such as the Nature Conservancy, World Wildlife Fund, and Conservation International. Threats to wildlife Habitat destruction Habitat destruction decreases the number of places where wildlife can live in. Habitat fragmentation breaks up a continuous tract of habitat, often dividing large wildlife populations into several smaller ones. Human-caused habitat loss and fragmentation are primary drivers of species declines and extinctions. Key examples of human-induced habitat loss include deforestation, agricultural expansion, and urbanization. Habitat destruction and fragmentation can increase the vulnerability of wildlife populations by reducing the space and resources available to them and by increasing the likelihood of conflict with humans. Moreover, destruction and fragmentation create smaller habitats. Smaller habitats support smaller populations, and smaller populations are more likely to go extinct. The COVID-19 pandemic has caused a significant shift in human behavior, resulting in mandatory and voluntary limitations on movement. As a result, people have started utilizing green spaces more frequently, which were previously habitats for wildlife. Unfortunately, this increased human activity has caused destruction to the natural habitat of various species. Deforestation Deforestation is the clearing and cutting down forests on purpose. Deforestation is a cause of human-induced habitat action destruction, by cutting down habitats of different species in the process of removing trees. Deforestation is often done for several reasons, often for either agricultural purposes or for logging, which is the obtainment of timber and wood for use in construction or fuel. Deforestation causes many threats to wildlife as it not only causes habitat destruction for the many animals that survive in forests, as more than 80% of the world's species live in forests but also leads to further climate change. Deforestation is a main concern in the tropical forests of the world. Tropical forests, like the Amazon, are home to the most biodiversity out of any other biome, making deforestation there an even more prevalent issue, especially in populated areas, as in these areas deforestation leads to habitat destruction and the endangerment of many species in one area. 
Some policies have been enacted to attempt to stop deforestation in different parts of the world, like the Wilderness Act of 1964 which designated specific areas wilderness to be protected. Overexploitation Overexploitation is the harvesting of animals and plants at a rate that's faster than the species' ability to recover. While often associated with Overfishing, overexploitation can apply to many groups including mammals, birds, amphibians, reptiles, and plants. The danger of overexploitation is that if too many of a species offspring are taken, then the species may not recover. For example, overfishing of top marine predatory fish like tuna and salmon over the past century has led to a decline in fish sizes as well as fish numbers. Poaching Poaching for illegal wildlife trading is a major threat to certain species, particularly endangered ones whose status makes them economically valuable. Such species include many large mammals like African elephants, tigers, and rhinoceros (traded for their tusks, skins, and horns respectively). Less well-known targets of poaching include the harvest of protected plants and animals for souvenirs, food, skins, pets, and more. Poaching causes already small populations to decline even further as hunters tend to target threatened and endangered species because of their rarity and large profits. Ocean Acidification As carbon dioxide levels increase concentration in the atmosphere, they increase in the ocean as well. Typically, the ocean will absorb carbon from the atmosphere, where it can be sequestered in the deep ocean and sea floor; this is a process called the biological pump. Increased carbon dioxide emissions and increased stratification (which slows the biological pump) decrease the ocean pH, making it more acidic. Calcifying organisms such as coral are especially susceptible to decreased pH, resulting in mass bleaching events, inevitably destroying a habitat for many of coral's diverse inhabitants. Research (conducted through methods such as coral fossils and ancient ice core carbon analysis) suggests ocean acidification has occurred in the geological past (more likely at a slower pace), and correlate with past extinction events. Culling Culling is the deliberate and selective killing of wildlife by governments for various purposes. An example of this is shark culling, in which "shark control" programs in Queensland and New South Wales (in Australia) have killed thousands of sharks, as well as turtles, dolphins, whales, and other marine life. The Queensland "shark control" program alone has killed about 50,000 sharks — it has also killed more than 84,000 marine animals. There are also examples of population culling in the United States, such as bison in Montana and swans, geese, and deer in New York and other places. Pollution A wide range of pollutants negatively impact wildlife health. For some pollutants, simple exposure is enough to do damage (e.g. pesticides). For others, its through inhaling (e.g. air pollutants) or ingesting it (e.g. toxic metals). Pollutants affect different species in different ways so a pollutant that is bad for one might not affect another. Air pollutants: Most air pollutants come from burning fossil fuels and industrial emissions. These have direct and indirect effects on the health of wildlife and their ecosystems. For example, high levels of sulfur oxides (SOx) can damage plants and stunt their growth. Sulfur oxides also contribute to acid rain, harming both terrestrial and aquatic ecosystems. 
Other air pollutants like smog, ground-level ozone, and particulate matter decrease air quality. Heavy metals: Heavy metals like arsenic, lead, and mercury naturally occur at low levels in the environment, but when ingested in high doses, can cause organ damage and cancer. How toxic they are depends on the exact metal, how much was ingested, and the animal that ingested it. Human activities such as mining, smelting, burning fossil fuels, and various industrial processes have contributed to the rise in heavy metal levels in the environment. Toxic chemicals: There are many sources of toxic chemical pollution including industrial wastewater, oil spills, and pesticides. There's a wide range of toxic chemicals so there's also a wide range of negative health effects. For example, synthetic pesticides and certain industrial chemicals are persistent organic pollutants. These pollutants are long-lived and can cause cancer, reproductive disorders, immune system problems, and nervous system problems. Climate change Humans are responsible for present-day climate change, which is currently changing Earth's environmental conditions. It is related to some of the aforementioned threats to wildlife like habitat destruction and pollution. Rising temperatures, melting ice sheets, changes in precipitation patterns, severe droughts, more frequent heat waves, storm intensification, ocean acidification, and rising sea levels are some of the effects of climate change. Phenomena like droughts, wildfires, heatwaves, intense storms, ocean acidification, and rising sea levels directly lead to habitat destruction. For example, longer dry seasons, warmer springs, and dry soil have been observed to increase the length of wildfire season in forests, shrublands and grasslands. Increased severity and longevity of wildfires can completely wipe out entire ecosystems, causing them to take decades to fully recover. Wildfires are a prime example of the direct negative effect climate change has on wildlife and ecosystems. Meanwhile, a warming climate, fluctuating precipitation, and changing weather patterns will impact species ranges. Overall, the effects of climate change increase stress on ecosystems, and species unable to cope with the rapidly changing conditions will go extinct. While modern climate change is caused by humans, past climate change events occurred naturally and have led to extinctions. Illegal Wildlife Trade The illegal wildlife trade is the illegal trading of plants and wildlife. This illegal trading is worth an estimated US$7–23 billion, with an annual trade of around 100 million plants and animals. In 2021 it was found that this trade has caused a 60% decline in species abundance, and an 80% decline for endangered species. This trade can be devastating to both humans and animals. It has the capacity to spread zoonotic diseases to humans, as well as contribute to local extinction. The pathogens may be spread to humans through small animal vectors like ticks, or through ingestion of food and water. Extinction can be caused by the introduction of non-native species that become invasive. An example of how this may happen is through by-catch. These new species will outcompete the native species and take over, therefore causing the local or global extinction of a species. Because the fittest animals in a species are the ones hunted or poached, the less fit organisms are left to mate, reducing fitness in the generations to come. 
In addition to species fitness being lowered and therefore endangering species, the illegal wildlife trade has ecological costs. Sex-ratio balances may be tipped or reproduction rates slowed, which can be detrimental to vulnerable species. The recovery of these populations may take longer because of the slower reproduction rates. The wildlife trade also causes issues for natural resources that people use in their everyday lives. Ecotourism is a source of income for some households, so depleting wildlife may also be a factor in taking away jobs. Illegal wildlife trade has also become normalized through various social media outlets. There are TikTok accounts that have gone viral for their depiction of exotic pets, such as various monkey and bird species. These accounts show a cute and fun side of owning exotic pets, therefore indirectly encouraging illegal wildlife trade. On March 30, 2021, TikTok joined the Coalition to End Wildlife Trafficking Online. They, along with other big social media companies, work to protect species from illegal, harmful trade online. Research has shown that machine learning can filter through social media posts to identify indications of illegal wildlife trade. Such a filtering system is able to search for keywords, pictures, and phrases that indicate illegal wildlife trade, and report them. Species conservation It is estimated that, because of human activities, current species extinction rates are about 1000 times greater than the background extinction rate (the 'normal' extinction rate that occurs without additional influence). According to the IUCN, out of all species assessed, over 42,100 are at risk of extinction and should be under conservation. Of these, 25% are mammals, 14% are birds, and 40% are amphibians. However, because not all species have been assessed, these numbers could be even higher. A 2019 UN report assessing global biodiversity extrapolated IUCN data to all species and estimated that 1 million species worldwide could face extinction. Conservation of select species is often prioritized based on several factors, which include significant economic and ecological value, as well as desirability or attractiveness. Yet, because resources are limited, sometimes it is not possible to give all species that need conservation due consideration. The species problem, which arises in some cases from natural hybridization, cryptic species, and the natural evolution of species, can be handled in species conservation by different approaches, such as multicriteria species approaches, subspecies, evolutionarily significant units, distinct population segments or a species-population continuum. Leatherback sea turtle The leatherback sea turtle (Dermochelys coriacea) is the largest turtle in the world, is the only turtle without a hard shell, and is endangered. It is found throughout the central Pacific and Atlantic Oceans, but several of its populations are in decline across the globe (though not all). The leatherback sea turtle faces numerous threats, including being caught as bycatch, harvest of its eggs, loss of nesting habitats, and marine pollution. In the US, where the leatherback is listed under the Endangered Species Act, measures to protect it include reducing bycatch captures through fishing gear modifications, monitoring and protecting its habitat (both nesting beaches and in the ocean), and reducing damage from marine pollution. There is currently an international effort to protect the leatherback sea turtle. 
Habitat conservation Habitat conservation is the practice of protecting a habitat in order to protect the species within it. This is sometimes preferable to focusing on a single species especially if the species in question has very specific habitat requirements or lives in a habitat with many other endangered species. The latter is often true of species living in biodiversity hotspots, which are areas of the world with an exceptionally high concentration of endemic species (species found nowhere else in the world). Many of these hotspots are in the tropics, mainly tropical forests like the Amazon. Habitat conservation is usually carried out by setting aside protected areas like national parks or nature reserves. Even when an area isn't made into a park or reserve, it can still be monitored and maintained. Red-cockaded woodpecker The red-cockaded woodpecker (Picoides borealis) is an endangered bird in the southeastern US. It only lives in longleaf pine savannas which are maintained by wildfires in mature pine forests. Today, it is a rare habitat (as fires have become rare and many pine forests have been cut down for agriculture) and is commonly found on land occupied by US military bases, where pine forests are kept for military training purposes and occasional bombings (also for training) set fires that maintain pine savannas. Woodpeckers live in tree cavities they excavate in the trunk. In an effort to increase woodpecker numbers, artificial cavities (essentially birdhouses planted within tree trunks) were installed to give woodpeckers a place to live. An active effort is made by the US military and workers to maintain this rare habitat used by red-cockaded woodpeckers. Conservation genetics Conservation genetics studies genetic phenomena that impact the conservation of a species. Most conservation efforts focus on managing population size, but conserving genetic diversity is typically a high priority as well. High genetic diversity increases survival because it means greater capacity to adapt to future environmental changes. Meanwhile, effects associated with low genetic diversity, such as inbreeding depression and loss of diversity from genetic drift, often decrease species survival by reducing the species' capacity to adapt or by increasing the frequency of genetic problems. Though not always the case, certain species are under threat because they have very low genetic diversity. As such, the best conservation action would be to restore their genetic diversity. Florida panther The Florida panther is a subspecies of cougar (specifically Puma concolor coryi) that resides in the state of Florida and is currently endangered. Historically, the Florida panther's range covered the entire southeastern US. In the early 1990s, only a single population with 20-25 individuals were left. The population had very low genetic diversity, was highly inbred, and suffered from several genetic issues including kinked tails, cardiac defects, and low fertility. In 1995, eight female Texas cougars were introduced to the Florida population. The goal was to increase genetic diversity by introducing genes from a different, unrelated puma population. By 2007, the Florida panther population had tripled and offspring between Florida and Texas individuals had higher fertility and less genetic problems. In 2015, the US Fish and Wildlife Service estimated there were 230 adult Florida panthers and in 2017, there were signs that the population's range was expanding within Florida. 
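As an illustration of how genetic diversity is quantified in such work, the sketch below computes expected heterozygosity, one commonly used gene-diversity measure; the allele frequencies are invented values for illustration only, not data from the Florida panther or any other real population.

def expected_heterozygosity(allele_freqs):
    # Gene diversity at a single locus: H = 1 - sum of squared allele frequencies.
    assert abs(sum(allele_freqs) - 1.0) < 1e-9, "frequencies must sum to 1"
    return 1.0 - sum(p * p for p in allele_freqs)

diverse = expected_heterozygosity([0.25, 0.25, 0.25, 0.25])   # several alleles, evenly spread
eroded = expected_heterozygosity([0.95, 0.05])                # diversity eroded by drift or inbreeding
print(diverse, eroded)                                        # 0.75 versus approximately 0.095

Lower values indicate that fewer allele combinations remain in the population, which is one way the reduced capacity to adapt described above shows up in the data.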
Conservation methods Wildlife Monitoring Monitoring of wildlife populations is an important part of conservation because it allows managers to gather information about the status of threatened species and to measure the effectiveness of management strategies. Monitoring can be local, regional, or range-wide, and can include one or many distinct populations. Metrics commonly gathered during monitoring include population numbers, geographic distribution, and genetic diversity, although many other metrics may be used. Monitoring methods can be categorized as either "direct" or "indirect". Direct methods rely on directly seeing or hearing the animals, whereas indirect methods rely on "signs" that indicate the animals are present. For terrestrial vertebrates, common direct monitoring methods include direct observation, mark-recapture, transects, and variable plot surveys. Indirect methods include track stations, fecal counts, food removal, open or closed burrow-opening counts, burrow counts, runway counts, knockdown cards, snow tracks, or responses to audio calls. For large, terrestrial vertebrates, a popular method is to use camera traps for population estimation along with mark-recapture techniques. This method has been used successfully with tigers, black bears and numerous other species. Trail cameras can be triggered remotely and automatically via sound, infrared sensors, etc. Computer vision-based animal individual re-identification methods have been developed to automate such sight-resight calculations. Mark-recapture methods are also used with genetic data from non-invasive hair or fecal samples. Such information can be analyzed independently or in conjunction with photographic methods to get a more complete picture of population viability. When designing a wildlife monitoring strategy, it is important to minimize harm to the animals and to implement the 3Rs principles (Replacement, Reduction, Refinement). In wildlife research, this can be done through the use of non-invasive methods, sharing samples and data with other research groups, or optimizing traps to prevent injuries. Vaccine administration Distributing vaccines to wildlife that are particularly vulnerable is useful in conservation to prevent or slow extreme population decline in a species from disease, and also to decrease the risk of a zoonotic spillover to humans. A pathogen to which a species has never been exposed over its evolutionary history can have detrimental impacts on the population. In most cases, these risks escalate in conjunction with other anthropogenic stressors, such as climate change or habitat loss, that can ultimately lead a population to extinction without human intervention. Methods of vaccination vary depending on both the extent and the efficiency with which they limit the transmission of disease; vaccines can be applied orally, topically, intranasally (IN), or injected either subcutaneously (SC) or intramuscularly (IM). Conservation efforts regarding vaccination often only serve the purpose of preventing disease-related extinction. Rather than completely cleansing the population of the pathogen, infection rates are limited to a smaller percentage of the population. Case study: Ethiopian Wolf The Ethiopian Wolf (Canis simensis), a canid native to Ethiopia, is an endangered species with fewer than 440 wolves remaining in the wild. These wolves are primarily exposed to the rabies virus by domestic dogs and are facing extreme population declines, especially in the southern Ethiopia region of the Bale Mountains. 
To counter this, oral vaccinations are administered to these wolves within favorable bait that is widely distributed around their territories. The wolves consume the bait and with it ingest the vaccine, developing an immunity to rabies as antibodies are produced at significant levels. Wolves within these packs who did not ingest the vaccine will be protected by herd immunity as fewer wolves are exposed to the virus. With continued periodic vaccinations, conservationists will be able to spend more resources on further proactive efforts to help prevent their extinction. Government involvement In the US, the Endangered Species Act of 1973 was passed to protect US species deemed in danger of extinction. The concern at the time was that the country was losing species that were scientifically, culturally, and educationally important. In the same year, the Convention on International Trade in Endangered Species of Fauna and Flora (CITES) was passed as part of an international agreement to prevent the global trade of endangered wildlife. In 1980, the World Conservation Strategy was developed by the IUCN with help from the UN Environmental Programme, World Wildlife Fund, UN Food and Agricultural Organization, and UNESCO. Its purpose was to promote the conservation of living resources important to humans. In 1992, the Convention on Biological Diversity (CBD) was agreed on at the UN Conference on Environment and Development (often called the Rio Earth Summit) as an international accord to protect the Earth's biological resources and diversity. According to the National Wildlife Federation, wildlife conservation in the US gets a majority of its funding through appropriations from the federal budget, annual federal and state grants, and financial efforts from programs such as the Conservation Reserve Program, Wetlands Reserve Program and Wildlife Habitat Incentives Program. A substantial amount of funding comes from the sale of hunting/fishing licenses, game tags, stamps, and excise taxes from the purchase of hunting equipment and ammunition. The Endangered Species Act is a continuously updated list that remains up-to-date on species that are endangered or threatened. Along with the update of the list, the Endangered Species Act also seeks to implement actions to protect the species within its list. Furthermore, the Endangered Species Act also lists the species that the act has recovered. It is estimated that the act has prevented the extinction of about 291 species, like bald eagles and humpback whales, since its implementation through its different recovery plans and the protection that it provides for these threatened species. Non-government involvement In the late 1980s, as the public became dissatisfied with government environmental conservation efforts, people began supporting private sector conservation efforts which included several non-governmental organizations (NGOs) . Seeing this rise in support for NGOs, the U.S. Congress made amendments to the Foreign Assistance Act in 1979 and 1986 “earmarking U.S. Agency for International Development (USAID) funds for [biodiversity]”. From 1990 till now, environmental conservation NGOs have become increasingly more focused on the political and economic impact of USAID funds dispersed for preserving the environment and its natural resources. 
After the terrorist attacks on 9/11 and the start of former President Bush's War on Terror, maintaining and improving the quality of the environment and its natural resources became a “priority” to “prevent international tensions” according to the Legislation on Foreign Relations Through 2002 and section 117 of the 1961 Foreign Assistance Act. Non-governmental organizations Many NGOs exist to actively promote, or be involved with, wildlife conservation: The Nature Conservancy World Wide Fund for Nature (WWF) Conservation International Fauna and Flora International WildTeam Wildlife Conservation Society Audubon Society Traffic (conservation programme) Born Free Foundation African Wildlife Defence Force Save Cambodia's Wildlife WildEarth Guardians Zoological Society of London See also Conservation movement Conservation biology Endangered species Refuge (ecology) Wildlife management References External links
Wildlife conservation
Biology
4,710
47,564,307
https://en.wikipedia.org/wiki/Wi-Fi%20deauthentication%20attack
A Wi-Fi deauthentication attack is a type of denial-of-service attack that targets communication between a user and a Wi-Fi wireless access point. Technical details Unlike most radio jammers, deauthentication acts in a unique way. The IEEE 802.11 (Wi-Fi) protocol contains the provision for a deauthentication frame. Sending the frame from the access point to a station is called a "sanctioned technique to inform a rogue station that they have been disconnected from the network". An attacker can send a deauthentication frame at any time to a wireless access point, with a spoofed address for the victim. The protocol does not require any encryption for this frame, even when the session was established with Wired Equivalent Privacy (WEP), WPA or WPA2 for data privacy, and the attacker only needs to know the victim's MAC address, which is available in the clear through wireless network sniffing. Usage Evil twin access points One of the main purposes of deauthentication used in the hacking community is to force clients to connect to an evil twin access point which then can be used to capture network packets transferred between the client and the access point. The attacker conducts a deauthentication attack to the target client, disconnecting it from its current network, thus allowing the client to automatically connect to the evil twin access point. Password attacks In order to mount a brute-force or dictionary based WPA password cracking attack on a Wi‑Fi user with WPA or WPA2 enabled, a hacker must first sniff the WPA 4-way handshake. This sequence can be elicited by first forcing the user offline with the deauthentication attack. Attacks on hotel guests and convention attendees The Federal Communications Commission has fined hotels and other companies for launching deauthentication attacks on their own guests; the purpose being to drive them off their own personal hotspots and force them to pay for on-site Wi-Fi services. Toolsets There are a number of software toolsets that can mount a Wi‑Fi deauthentication attack, including: Aircrack-ng suite, MDK3, Void11, Scapy, and Zulu. A Pineapple rogue access point can also issue a deauth attack. See also Radio jamming IEEE 802.11w – offers increased security of its management frames including authentication/deauthentication References Further reading author's link (no paywall) GPS, Wi-Fi, and Cell Phone Jammers — FCC FAQ Denial-of-service attacks Wi-Fi
Wi-Fi deauthentication attack
Technology
531
71,998,469
https://en.wikipedia.org/wiki/475%20%C2%B0C%20embrittlement
Duplex stainless steels are a family of alloys with a two-phase microstructure consisting of both austenitic (face-centred cubic) and ferritic (body-centred cubic) phases. They offer excellent mechanical properties, corrosion resistance, and toughness compared to other types of stainless steel. However, duplex stainless steel can be susceptible to a phenomenon known as embrittlement or duplex stainless steel age hardening, which is a type of aging process that causes loss of plasticity in duplex stainless steel when it is heated in the range of . At this temperature range, spontaneous phase separation of the ferrite phase into iron-rich and chromium-rich nanophases occurs, with no change in the mechanical properties of the austenite phase. This type of embrittlement is due to precipitation hardening, which makes the material become brittle and prone to cracking. Duplex stainless steel Duplex stainless steel is a type of stainless steel that has a two-phase microstructure consisting of both austenitic (face-centred cubic) and ferritic (body-centred cubic) phases. This dual-phase structure gives duplex stainless steel a combination of mechanical and corrosion-resistant properties that are superior to those of either austenitic or ferritic stainless steel alone. The austenitic phase provides the steel with good ductility, high toughness, and high corrosion resistance, especially in acidic and chloride-containing environments. The ferritic phase, on the other hand, provides the steel with good strength, high resistance to stress corrosion cracking, and high resistance to pitting and crevice corrosion. They are therefore used extensively in the offshore oil and gas industry for pipework systems, manifolds, risers, etc. and in the petrochemical industry in the form of pipelines and pressure vessels. A duplex stainless steel mixture of austenite and ferrite microstructure is not necessarily in equal proportions, and where the alloy solidifies as ferrite, it is partially transformed to austenite when the temperature falls to around . Duplex steels have a higher chromium content compared to austenitic stainless steel, 20–28%; higher molybdenum, up to 5%; lower nickel, up to 9%; and 0.05–0.50% nitrogen. Thus, duplex stainless steel alloys have good corrosion resistance and higher strength than standard austenitic stainless steels such as type 304 or 316. Alpha (α) phase is a ferritic phase with body-centred cubic (BCC) structure, Imm [229] space group, 2.866 Å lattice parameter, and has one twinning system {112}<111> and three slip systems {110}<111>, {112}<111> and {123}<111>; however, the last system rarely activates. Gamma () phase is austenitic with a face-centred cubic (FCC) structure, Fmm [259] space group, and 3.66 Å lattice parameter. It normally has more nickel, copper, and interstitial carbon and nitrogen. Plastic deformation occurs in austenite more readily than in ferrite. During deformation, straight slip bands form in the austenite grains and propagate to the ferrite-austenite grain boundaries, assisting in the slipping of the ferrite phase. Curved slip bands also form due to the bulk-ferrite-grain deformation. The formation of slip bands indicates a concentrated unidirectional slip on certain planes causing a stress concentration. Age hardening by spinodal decomposition Duplex stainless steel can have limited toughness due to its large ferritic grain size, and its tendencies to hardening and embrittlement, i.e., loss of plasticity, at temperatures ranging from , especially at . 
At this temperature range, spinodal decomposition of the supersaturated solid ferrite solution into an iron-rich nanophase (α) and a chromium-rich nanophase (α′), accompanied by G-phase precipitation, occurs. This makes the ferrite phase a preferential initiation site for micro-cracks. This is because aging encourages Σ3 {112}<111> ferrite deformation twinning at slow strain rate and room temperature in tensile or compressive deformation, nucleating from local stress concentration sites, and parent-twin boundaries, with 60° (in or out) misorientation, are suitable for cleavage crack nucleation. Spinodal decomposition refers to the spontaneous separation of a phase into two coherent phases via uphill diffusion, i.e., from a region of lower concentration to a region of higher concentration, resulting in a negative diffusion coefficient, without a barrier to nucleation because the phase is thermodynamically unstable (i.e., it lies within the miscibility gap, the α + α′ region of the phase diagram); here G is the Gibbs free energy per mole of solution and c the composition, and the decomposition occurs where ∂²G/∂c² < 0. It increases hardness and decreases magnetism. The miscibility gap is the region in a phase diagram, below the melting point of each compound, where a single solution separates into two distinct stable phases. For 475 °C embrittlement to occur, the chromium content needs to exceed 12%. The addition of nickel accelerates the spinodal decomposition by promoting the formation of the iron-rich nanophase. Nitrogen changes the distribution of chromium, nickel, and molybdenum in the ferrite phase but does not prevent the phase decomposition. Other elements like molybdenum, manganese, and silicon do not affect the formation of the iron-rich nanophase. However, manganese and molybdenum partition to the iron-rich nanophase, while nickel partitions to the chromium-rich nanophase. Microscopy characterisation Using a field emission gun transmission electron microscope (FEG-TEM), the nanometre-scale modulated structure of the decomposed ferrite was revealed: the chromium-rich nanophase gave the brighter image, and the iron-rich nanophase the darker image. It also revealed that these modulated nanophases grow coarser with aging time. Decomposed phases start as irregular rounded shapes with no particular arrangement, but with time the chromium-rich nanophase takes a plate shape aligned in the <110> directions. Consequences Spinodal decomposition increases the hardening of the material due to the misfit between the chromium-rich and iron-rich nano-phases, internal stress, and variation of elastic modulus. The formation of coherent precipitates induces an equal but opposite strain, raising the system's free energy depending on the precipitate shape and the matrix and precipitate elastic properties. Around a spherical inclusion, the distortion is purely hydrostatic. G-phase precipitates appear prominently at grain boundaries and are a phase rich in nickel, titanium, and silicon, although chromium and manganese may substitute on titanium sites. G-phase precipitates occur during long-term aging, are encouraged by increasing nickel content in the ferrite phase, and reduce corrosion resistance significantly. The G-phase has an ellipsoidal morphology, a face-centred cubic Fm-3m structure, and an 11.4 Å lattice parameter, with a diameter of less than 50 nm that increases with aging. Thus, the embrittlement is caused by dislocation impediment/locking by the spinodally decomposed matrix and by the strain around G-phase precipitates, i.e., internal stress relaxation by the formation of Cottrell atmospheres. 
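The thermodynamic criterion described above can be summarised in the standard Cahn–Hilliard form (this is the textbook statement of the condition, not a formula taken from the specific works cited here):

\frac{\partial^2 G}{\partial c^2} < 0
\quad\Longrightarrow\quad
\tilde{D} = M \, \frac{\partial^2 G}{\partial c^2} < 0,

where M > 0 is the atomic mobility, G the molar Gibbs free energy and c the composition; inside the spinodal the effective interdiffusion coefficient \tilde{D} is therefore negative, so composition fluctuations grow spontaneously (uphill diffusion) instead of decaying.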
Furthermore, the ferrite hardness increases with aging time, the hardness of the ductile austenite phase remains nearly unchanged due to faster diffusivity in ferrite compared to the austenite. However, austenite undergoes a substitutional redistribution of elements, enhancing galvanic corrosion between the two phases. Treatment 550 °C heat treatment can reverse spinodal decomposition but not affect the G-phase precipitates. The ferrite matrix spinodal decomposition can be substantially reversed by introducing an external pulsed electric current that changes the system's free energy due to the difference in electrical conductivity between the nanophases and the dissolution of G-phase precipitates. Cyclic loading suppresses spinodal decomposition, and radiation accelerates it but changes the decomposition nature from an interconnected network of modulated nanophases to isolated islands. References Further reading Steel Materials degradation
475 °C embrittlement
Materials_science,Engineering
1,748
239,414
https://en.wikipedia.org/wiki/Teasing
Teasing has multiple meanings and uses. In human interactions, teasing exists in three major forms: playful, hurtful, and educative. Teasing can have a variety of effects, depending on how it is used and its intended effect. When teasing is unwelcome, it may be regarded as harassment or mobbing, especially in the workplace and school, or as a form of bullying or emotional abuse. If done in public, it may be regarded as humiliation. Teasing can also be regarded as educative when it is used as a way of informal learning. Adults in some of the Indigenous American communities often tease children to playfully illustrate and teach them how their behavior negatively affects the community. Children in many Indigenous American communities also learn by observing what others do in addition to collaborating with them. Along with teasing, this form of informal learning is different from the ways that Western American children learn. Informal ways of child learning include mutual responsibility, as well as active collaboration with adults and peers. This differentiates from the more formal way of learning because it is not adult-oriented. People may be teased on matters such as their appearance, weight, behavior, family, gender, faith, health/medical issues, abilities, clothing, and intelligence. From the victim's point of view, this kind of teasing is often hurtful, irrespective of the intention of the teaser. One may also tease an animal. Some animals, such as dogs and cats, may recognize this both as play or harassment. Nature A common form of teasing is verbal bullying or taunting. This behavior is intended to distract, disturb, offend, sadden, anger, bother, irritate, or annoy the recipient. Because it is hurtful, it is different from joking and is generally accompanied by some degree of social rejection. Teasing can also be taken to mean "To make fun of; mock playfully" or be sarcastic about and use sarcasm. Dacher Keltner uses Penelope Brown's classic study on the difference between "on-record" and "off-record" communication to illustrate how people must learn to read others' tone of voice and facial expressions in order to learn appropriate responses to teasing. A form of teasing that is usually overlooked is educational teasing. This form is commonly used by parents and caregivers in two Indigenous American communities and Mexican Heritage communities to guide their children into responding with more Prosocial behavior. For example, when a parent teases a child who is throwing a tantrum for a piece of candy, the parent will pretend to give the child candy but then take it away and ask the child to correct their behavior before giving the child that piece of candy. In this way, the parent teaches the child the importance of maintaining self-control. When adults educate children through teasing, they are informally teaching the children. This type of learning is often overlooked because it is different from the way Western American Communities teach their children. Another form of teasing is to pretend to give something which the other desires, or give it very slowly. This is usually done by arousing curiosity or desire, and may not actually involve the intent to satisfy or disclose. This form of teasing could be called "tantalizing", after the story of Tantalus. Tantalizing is generally playful among adults, although among children it can be hurtful, such as when one child acquires possession of another's property and will not return it. It is also common in flirting and dating. 
For example, a person who is interested in someone else romantically might reject an advance the first time in order to arouse interest and curiosity, and give in the second or third time. Whether teasing is playful or hurtful or educative is largely subject to the interpretation of the person being teased. If the person being teased feels harmed, then the teasing is hurtful. A difference in power between people may also make the behavior hurtful rather than playful. Ultimately though, if someone perceives themselves as the victim of teasing, and experiences the teasing as unpleasant, then it is considered hurtful. If parents' intentions are positive, as in many Indigenous American communities, then teasing to the community can be seen as an educational tool. The child may or may not understand that at the moment. If the other person continues to do it after being asked to stop then it is a form of bullying or abuse. Another way to look at teasing is as an honest reflection on differences, expressed in a joking fashion with the goal of "clearing the air". It can express comfort with the other which can be comforting. As opposed to being nice to someone's face while making disparaging remarks behind their back, teasing can be a way to express differences in a direct fashion rather than internalizing them. In Indigenous American communities Some indigenous American communities use teasing to teach their children about the expectations and values of the community and to change negative behaviors. Teasing gives children a better understanding of how their behavior affects the people around them. Teasing in Indigenous American communities is used to learn community acceptance, humbleness, correcting behavior and social control. In some Mexican indigenous American communities, teasing is used in an effective educative way. Teasing is found more useful because it allows the child to feel and understand the relevant effect of their behavior instead of receiving out-of-context feedback. Some parents in Indigenous American communities believe it mildly embarrasses the children in a shared reference to give them a good sense of the consequences of their behavior. This type of teasing is thought to teach children to be less egocentric, teaches autonomy and responsibility to monitor their own behavior. Parental teasing also is practiced to encourage the child to think of their behavior in a social context. Some Indigenous American mothers have reported that this urges the children to understand how their behavior affects others around them. From examples in Eisenberg's article, parents use teasing as a way of reinforcing relationships and participation in group/community activities (prosocial behavior). Parents tease their children to be able to "control the behavior of the child and to have fun with them". An Inuit principle of learning that follows a similar teasing pattern is known as issumaksaiyuk, meaning to cause thought. Oftentimes, adults pose questions or hypothetical situations to the children (sometimes dangerous) but in a teasing, playful manner, often dramatizing their responses. These questions raise the child's awareness to issues surrounding their community, as well as give them a sense of agency within the community as a member capable of having an effect and creating change. Once the child begins to answer the questions reasonably, like an adult, the questions would stop. 
In some Cherokee communities, teasing is a way of defusing aggressive or hostile situations and teaching the individual about the consequences of their behavior. It allows the individual to feel how their behavior affects others and to control it. Other usages To tease, or to "be a tease", in a sexual sense can refer to the use of posture, language or other means of flirting to cause another person to become sexually aroused. Such teasing may or may not be a prelude to intercourse, an ambiguity which can lead to uncomfortable situations. In a more physical sense, it can also refer to sexual stimulation. Teasing is also used to describe playing part of a song at a concert; jam bands will often quote the main riff of another song during jams. "Tease it" is also used as a slang term for smoking marijuana, and the word "tease" can be used as a noun for marijuana itself. In a very different context, hair can be teased, "ratted", or "backcombed". As the name suggests, backcombing involves combing the hair backwards from end to root to intentionally tangle the strands and create volume. It can also be done excessively in sections to create dreadlocks. See also Bullying Eve teasing Mobbing Sarcasm Sardonic Social rejection Taunting References External links The Art of the Teaser Abuse Harassment and bullying Human behavior
Teasing
Biology
1,628
49,733,699
https://en.wikipedia.org/wiki/Ion%20transporter%20superfamily
The ion transporter (IT) superfamily is a superfamily of secondary carriers that transport charged substrates. Families As of early 2016, the currently recognized and functionally defined families that make up the IT superfamily include: 2.A.8 - The Gluconate:H+ Symporter (GntP) Family 2.A.11 - The Citrate-Mg2+:H+ (CitM) Citrate-Ca2+:H+ (CitH) Symporter (CitMHS) Family 2.A.13 - The C4-Dicarboxylate Uptake (Dcu) Family 2.A.14 - The Lactate Permease (LctP) Family 2.A.34 - The NhaB Na+:H+ Antiporter (NhaB) Family 2.A.35 - The NhaC Na+:H+ Antiporter (NhaC) Family 2.A.45 - The Arsenite-Antimonite (ArsB) Efflux Family 2.A.47 - The Divalent Anion:Na+ Symporter (DASS) Family 2.A.61 - The C4-dicarboxylate Uptake C (DcuC) Family 2.A.62 - The NhaD Na+:H+ Antiporter (NhaD) Family 2.A.68 - The p-Aminobenzoyl-glutamate Transporter (AbgT) Family 2.A.94 - The Phosphate Permease (Pho1) Family 2.A.101 - The Malonate Uptake (MatC) Family 2.A.111 - The Na+/H+ Antiporter-E (NhaE) Family 2.A.118 - The Basic Amino Acid Antiporter (ArcD) Family See also Ion transporters Sodium-Proton antiporter Arsenite-Antimonite efflux Amino acid transporter Solute carrier family Transporter Classification Database Membrane protein References Solute carrier family Protein superfamilies
Ion transporter superfamily
Biology
428
61,879,941
https://en.wikipedia.org/wiki/Aminochlorination
In organic synthesis, aminochlorination is a reaction that installs both a chlorine atom and an amino (or amido) group to give a 2-aminoalkyl chloride. The reaction is typically effected by combining alkene substrates with chloramines. An alternative implementation involves Pd(II)-induced nucleophilic attack of the amine on the alkene, followed by oxidation with cupric chloride. References Organic synthesis
Aminochlorination
Chemistry
97
46,919,344
https://en.wikipedia.org/wiki/External%20vision%20system
An external vision system (XVS) refers to any of several methods to provide the pilot of an aircraft with a means to see outside the aircraft where traditional windscreens may not be feasible due to the aircraft configuration. An XVS would consist of external sensors, primarily video imagery, which is provided to the pilot(s) in real time via one or more displays intended to augment or replace the windscreen. In recent years, other types of vision systems have been introduced, primarily on business jets. Both enhanced vision systems (EVS) and synthetic vision systems (SVS) have become standard equipment on many larger business jets such as those manufactured by Gulfstream, Bombardier, Dassault, and most recently, Embraer. However, EVS typically provides the pilot(s) with an infrared video image, usually displayed on the head-up display (HUD), which overlays the pilot's view of the outside world through the windscreen. SVS is a computer-generated version of the outside world created from an onboard terrain database. SVS can also be displayed conformally on the HUD, but it is not real time in that anything that is not part of the static terrain database cannot be displayed. Both EVS and SVS are primarily intended to improve situational awareness of the flight deck crew, especially at night and in poor-visibility weather conditions such as rain, snow, fog, or smoke. XVS is different in that it is intended to provide the flight deck crew a real-time view of the outside world in visual meteorological conditions (VMC). Research Efforts NACA and later NASA conducted several flight experiments with onboard video systems in the late 1950s and 1960s. Renewed interest in XVS came with the development of civil supersonic transport aircraft such as the Concorde. Supersonic aircraft typically have long, protruding noses to reduce drag at high speeds. This creates a problem for designers, who may not be able to incorporate windows large enough to allow pilots the required view of the outside world. The solution on the Concorde was an articulating nose that drooped, exposing larger windows and allowing the pilots a better view during taxi, takeoff, approach, and landing. However, the structural and mechanism weight penalty for a solution similar to that used on the Concorde is undesirable, and thus designers began looking for other solutions. During the High Speed Civil Transport (HSCT) program, NASA and its industry partners began looking at an early XVS for use on a proposed US supersonic civil transport. XVS was again proposed on the follow-on High Speed Research (HSR) program. In 2008, following the Quiet Spike supersonic research program, NASA and Gulfstream again collaborated on an XVS flight demonstration program using NASA's TF-18 flight test aircraft fitted with commercial off-the-shelf high-definition video cameras and video displays, while artificially restricting the aft-seat pilot's view of the outside world. As a follow-on research project, NASA Langley Research Center equipped a test aircraft with multiple HD cameras and displays to provide resolution nearly equivalent to "20/20" human visual acuity. See also Index of aviation articles Aircraft periscope Telescopic Sighting System TISEO References Augmented reality applications Avionics
External vision system
Technology
659
74,663,368
https://en.wikipedia.org/wiki/Interstellar%3A%20The%20Search%20for%20Extraterrestrial%20Life%20and%20Our%20Future%20in%20the%20Stars
Interstellar: The Search for Extraterrestrial Life and Our Future in the Stars (also known as Interstellar) is a popular science book written by American theoretical physicist and Harvard University astronomer Avi Loeb that was published by Mariner Books on 29 August 2023. On 24 August 2023, The New York Times published an article about Loeb and his related search for signs of extraterrestrial life, as well as his related publications, including Extraterrestrial (2021) and Interstellar (2023). Contents Author Avi Loeb, according to Sarah Scoles of Undark Magazine, claims that a "search for physical evidence of alien technology within our solar system represents not just an interesting scientific pursuit but also one that will elevate our species, perhaps by connecting it to more advanced cosmic civilizations." According to Loeb, it's "arrogant of us to think that we are alone, that we don't have a neighbor out there. ... There are tens of billions of planets in the Milky Way galaxy alone and hundreds of billions of galaxies like the Milky Way in the observable volume of the universe, ... Perhaps noticing a neighbor will be a wake-up call that will bring us together, ... There might be many more neighbors that are far more accomplished than we are, and we can learn from them. So my hope is that it will bring humanity to a better place in the long term future." Reviews Book reviewer Leonard David notes that Interstellar is a "mind-meld of philosophy, physics, and cutting-edge science ... [and] blueprints a radical approach to our search for ET – and how best to brace for the reality of what's ahead". Sarah Scoles of Undark Magazine states that, "Loeb makes solid points about how modern science works, and could work better." but also writes that "the book is a fairly disorganized, rambling affair whose topics and metaphors leap wildly to and fro." A Daily Kos book reviewer writes that Interstellar "provides a realistic and practical blueprint for how a [human and alien life] interaction might actually occur, resetting our cultural understanding and expectation of what it means to identify an extraterrestrial object. ... [the author] also lays out the profound implications of becoming—or not becoming—interstellar; in an urgent, eloquent appeal for more proactive engagement with the world beyond ours, powerfully contends why we must seek out other life forms, and in the process, choose who and what we are within the universe." According to book reviewer Patrick Rapa of The Philadelphia Inquirer, "I think Loeb's brand of data-based speculation is useful. And fun. Why not imagine the possibilities? Nobody knows what Oumuamua, [an interstellar object], was. What's the harm in dreaming?" References External links Official Academic Book WebSite Official Publishrers WebSite Official Harvard WebSite Official Author WebSite American non-fiction books Astronomy books Popular science books 2023 non-fiction books
Interstellar: The Search for Extraterrestrial Life and Our Future in the Stars
Astronomy
641
2,010,363
https://en.wikipedia.org/wiki/Nonverbal%20learning%20disorder
Nonverbal learning disorder (NVLD or NLD) is a proposed neurodevelopmental disorder characterized by core deficits in nonverbal skills, especially visual-spatial processing. People with this condition have normal or advanced verbal intelligence and significantly lower nonverbal intelligence. A review of papers found that proposed diagnostic criteria were inconsistent. Proposed additional diagnostic criteria include intact verbal intelligence, and deficits in the following: visuoconstruction abilities, speech prosody, fine motor coordination, mathematical reasoning, visuospatial memory, and social skills. NVLD is not recognised by the DSM-5 and is not clinically distinct from learning disorders. NVLD symptoms can overlap with symptoms of autism, bipolar disorder, and attention deficit hyperactivity disorder (ADHD). For this reason, some claim a diagnosis of NVLD is more appropriate in some subset of these cases. Signs and symptoms Considered to be neurologically based, nonverbal learning disorder is characterized by: impairments in visuospatial processing discrepancy between average to superior verbal abilities and impaired nonverbal abilities, such as: visuoconstruction fine motor coordination mathematical reasoning visuospatial memory socioemotional skills People with NVLD may have trouble understanding charts, reading maps, assembling jigsaw puzzles, and using an analog clock to tell time. Motor coordination deficits are common in people with NVLD, especially children, and it may take a child with NVLD longer than usual to learn how to tie shoelaces or to ride a bicycle. At the beginning of their school careers, children with symptoms of NVLD struggle with tasks that require eye–hand coordination, such as coloring and using scissors, but often excel at memorizing verbal content, spelling, and reading once the shapes of the letters are learned. A child with NVLD's average or superior verbal skills can be misattributed to attention deficit hyperactivity disorder, defiant behavior, inattention, or lack of effort. Early researchers of NVLD, Johnson and Myklebust, characterize how the children appear in a classroom: "An example is the child who fails to learn the meaning of the actions of others [...] We categorize this child as having a deficiency in social perception, meaning that he has an inability which precludes acquiring the significance of basic nonverbal aspects of daily living, though his verbal level of intelligence falls within or above the average." In the adolescent years, when schoolwork becomes more abstract and the executive demands for time management, organization, and social interactions increase, students with NVLD begin to struggle. They focus on separate details and struggle to summarize information or to integrate ideas into a coherent whole, and they struggle to apply knowledge to other situations, to infer implicit information, to make predictions, and to organize information logically. As adults, tasks such as driving a car or navigating to an unfamiliar location may be difficult. Difficulty with keeping track of responsibilities or managing social interactions may affect job performance. People with NVLD may also fit the diagnostic criteria of dyscalculia, dysgraphia, or dyspraxia. Cause Research suggests that there is an association with an imbalance of neural activity in the right hemisphere of the brain connected to the white matter. Diagnosis Assorted diagnoses have been discussed as sharing symptoms with NVLD. 
In some cases, especially the form of autism previously called Asperger syndrome, the overlap can be significant; a major clinical difference is that NVLD criteria do not mention the presence or absence of either repetitive behaviors or narrow subject-matter interests, which is part of the diagnostic criteria for autism. These overlapping conditions include, among others: attention deficit hyperactivity disorder (ADHD) autism, especially high-functioning autism bipolar disorder developmental coordination disorder (dyspraxia) dyscalculia social communication disorder right hemisphere brain damage and developmental right hemisphere syndrome social-emotional processing disorder Gerstmann syndrome There is diagnostic overlap between nonverbal learning disorder and autism, and some clinicians and researchers consider them to be the same condition. Some claim that some diagnoses of ADHD would be more appropriately classified as NVLD. History While various nonverbal learning difficulties were recognized since early studies in child neurology, there is ongoing debate as to whether (or the extent to which) existing conceptions of NVLD provide a valid diagnostic framework. As presented in 1967, "nonverbal disabilities" (p. 44) or "disorders of nonverbal learning" was a category encompassing non-linguistic learning problems. "Nonverbal learning disabilities" were further discussed by Myklebust in 1975 as representing a subtype of learning "disability" with a range of presentations involving "mainly visual cognitive processing," social imperception, a gap between higher verbal ability and lower nonverbal processing, as well as difficulty with handwriting. Later neuropsychologist Byron Rourke sought to develop consistent criteria with a theory and model of brain functioning that would establish NVLD as a distinct syndrome (1989). Questions remain about how best to frame the perceptual, cognitive and motor issues associated with NVLD. See also Alexithymia – difficulty with understanding emotions Children's Nonverbal Learning Disabilities Scale Deficits in attention, motor control and perception References Further reading Books By authors with NVLD External links NVLD Project Learning disabilities Pervasive developmental disorders
Nonverbal learning disorder
Physics
1,145
16,164,493
https://en.wikipedia.org/wiki/Affordable%20Weapon%20System
The Affordable Weapon System is a US Navy program to design and produce a low-cost, "off-the-shelf" cruise missile launchable from a self-contained unit mounted in a standard shipping container. The need for the US Army to mass-manufacture more affordable, low-overhead weapons became a pressing matter during the 1970s, a decade when the costs of operating and supporting the armed inventory grew rapidly and consequently reduced budgets for new weapons acquisitions. The US weapons inventory is the most advanced in the world, but its volume is deemed insufficient in a theoretical war against China, for example (especially the long-range precision-guided weaponry). To that end, BAE Systems developed a kit (the Advanced Precision Kill Weapon System) to convert Hydra rockets into smart, precision-guided munitions. Specifications Length (without booster): 3.32 m (10 ft 11 in) Diameter: 34.3 cm (13.5 in) Weight: 394 kg (737 lb) Speed: 400 km/h (250 mph) Ceiling: 4570 m (15,000 ft) Range: > 1560 km (840 nmi) Propulsion: Solid rocket booster and SWB Turbines SWB-65 turbojet sustainer. Payload: 200 lb. Guidance: GPS and in-flight datalink. Program status April 2002 - International Systems LLC of San Diego, Calif. (subsidiary of Titan Corp.) awarded a $25,657,312 cost-plus-fixed-fee contract for continuing development and implementation. June 2005 - Titan awarded a $32.4 million contract modification to produce approximately 85 missiles for demonstration, test and evaluation. The contract also includes work on the AWS launcher design and ship integration. September 2005 - Titan awarded a contract for launch systems to BAE Systems. 2007 - Duncan L. Hunter pushed a $30 million budget into the yearly defense appropriations bill to continue the development of AWS, despite inconclusive 2006 trials. July 2008 - DOD Research, Development, Test, and Evaluation budget earmarks $15,200,000 for the program. November 2008 - MBDA Incorporated was awarded a $4,530,231 contract for research into the best material approach and the completion of risk-reduction tasks for the AWS. References External links SWB Turbines SWB-65 Titan shoots for bargain missile - San Diego Union-Tribune Cruise missiles of the United States Area denial weapons Naval weapons Proposed weapons of the United States Equipment of the United States Navy
Affordable Weapon System
Engineering
492
5,037,034
https://en.wikipedia.org/wiki/P300-CBP%20coactivator%20family
The p300-CBP coactivator family in humans is composed of two closely related transcriptional co-activating proteins (or coactivators): p300 (also called EP300 or E1A binding protein p300) CBP (also known as CREB-binding protein or CREBBP) Both p300 and CBP interact with numerous transcription factors and act to increase the expression of their target genes. Protein structure p300 and CBP have similar structures. Both contain five protein interaction domains: the nuclear receptor interaction domain (RID), the KIX domain (CREB and MYB interaction domain), the cysteine/histidine regions (TAZ1/CH1 and TAZ2/CH3) and the interferon response binding domain (IBiD). The last four domains, KIX, TAZ1, TAZ2 and IBiD of p300, each bind tightly to a sequence spanning both transactivation domains 9aaTADs of transcription factor p53. In addition p300 and CBP each contain a protein or histone acetyltransferase (PAT/HAT) domain and a bromodomain that binds acetylated lysines and a PHD finger motif with unknown function. The conserved domains are connected by long stretches of unstructured linkers. Regulation of gene expression p300 and CBP are thought to increase gene expression in three ways: by relaxing the chromatin structure at the gene promoter through their intrinsic histone acetyltransferase (HAT) activity. recruiting the basal transcriptional machinery including RNA polymerase II to the promoter. acting as adaptor molecules. p300 regulates transcription by directly binding to transcription factors (see external reference for explanatory image). This interaction is managed by one or more of the p300 domains: the nuclear receptor interaction domain (RID), the CREB and MYB interaction domain (KIX), the cysteine/histidine regions (TAZ1/CH1 and TAZ2/CH3) and the interferon response binding domain (IBiD). The last four domains, KIX, TAZ1, TAZ2 and IBiD of p300, each bind tightly to a sequence spanning both transactivation domains 9aaTADs of transcription factor p53. Enhancer regions, which regulate gene transcription, are known to be bound by p300 and CBP, and ChIP-seq for these proteins has been used to predict enhancers. Work done by Heintzman and colleagues showed that 70% of the p300 binding occurs in open chromatin regions as seen by the association with DNase I hypersensitive sites. Furthermore, they have described that most p300 binding (75%) occurs far away from transcription start sites (TSSs) and these binding sites are also associated with enhancer regions as seen by H3K4me1 enrichment. They have also found some correlation between p300 and RNAPII binding at enhancers, which can be explained by the physical interaction with promoters or by enhancer RNAs. Function in G protein signaling An example of a process involving p300 and CBP is G protein signaling. Some G proteins stimulate adenylate cyclase that results in elevation of cAMP. cAMP stimulates PKA, which consists of four subunits, two regulatory and two catalytic. Binding of cAMP to the regulatory subunits causes release of the catalytic subunits. These subunits can then enter the nucleus to interact with transcriptional factors, thus affecting gene transcription. The transcription factor CREB, which interacts with a DNA sequence called a cAMP response element (or CRE), is phosphorylated on a serine (Ser 133) in the KID domain. This modification is PKA mediated, and promotes the interaction of the KID domain of CREB with the KIX domain of CBP or p300 and enhances transcription of CREB target genes, including genes that aid gluconeogenesis. 
This pathway can be initiated by adrenaline activating β-adrenergic receptors on the cell surface. Clinical significance Mutations in CBP, and to a lesser extent p300, are the cause of Rubinstein-Taybi Syndrome, which is characterized by severe mental retardation. These mutations result in the loss of one copy of the gene in each cell, which reduces the amount of CBP or p300 protein by half. Some mutations lead to the production of a very short, nonfunctional version of the CBP or p300 protein, while others prevent one copy of the gene from making any protein at all. Although researchers do not know how a reduction in the amount of CBP or p300 protein leads to the specific features of Rubinstein-Taybi syndrome, it is clear that the loss of one copy of the CBP or p300 gene disrupts normal development. Defects in CBP HAT activity appears to cause problems in long-term memory formation. CBP and p300 have also been found to be involved in multiple rare chromosomal translocations that are associated with acute myeloid leukemia. For example, researchers have found a translocation between chromosomes 8 and 22 (in the region containing the p300 gene) in several people with a cancer of blood cells called acute myeloid leukemia (AML). Another translocation, involving chromosomes 11 and 22, has been found in a small number of people who have undergone cancer treatment. This chromosomal change is associated with the development of AML following chemotherapy for other forms of cancer. Mutations in the p300 gene have been identified in several other types of cancer. These mutations are somatic, which means they are acquired during a person's lifetime and are present only in certain cells. Somatic mutations in the p300 gene have been found in a small number of solid tumors, including cancers of the colon and rectum, stomach, breast and pancreas. Studies suggest that p300 mutations may also play a role in the development of some prostate cancers, and could help predict whether these tumors will increase in size or spread to other parts of the body. In cancer cells, p300 mutations prevent the gene from producing any functional protein. Without p300, cells cannot effectively restrain growth and division, which can allow cancerous tumors to form. Mouse models CBP and p300 are critical for normal embryonic development, as mice completely lacking either CBP or p300 protein, die at an early embryonic stage. In addition, mice which lack one functional copy (allele) of both the CBP and p300 genes (i.e. are heterozygous for both CBP and p300) and thus have half of the normal amount of both CBP and p300, also die early in embryogenesis. This indicates that the total amount of CBP and p300 protein is critical for embryo development. Data suggest that some cell types can tolerate loss of CBP or p300 better than the whole organism can. Mouse B cells or T cells lacking either CBP and p300 protein develop fairly normally, but B or T cells that lack both CBP and p300 fail to develop in vivo. Together, the data indicate that, while individual cell types require different amounts of CBP and p300 to develop or survive and some cell types are more tolerant of loss of CBP or p300 than the whole organism, it appears that many, if not all cell types may require at least some p300 or CBP to develop. References External links p300-CBP regulatory mechanism in the IFN-β enhanceosome complex Gene families Membrane biology EC 2.3.1 Transcription coregulators
P300-CBP coactivator family
Chemistry
1,623
57,458,855
https://en.wikipedia.org/wiki/Beverton%20Medal
The Beverton Medal is a prestigious international fish biology and/or fisheries science prize awarded annually. It is awarded to a distinguished scientist for a lifelong contribution to all aspects of the study of fish biology and/or fisheries science, with a focus on ground-breaking research. The medal was established as the highest award of the Fisheries Society of the British Isles (FSBI) to recognize distinction in the field of fish biology and fisheries science, and to raise the profile of the discipline and of the Society in the wider scientific community. Medals are awarded to individuals who have made an outstanding contribution to fish biology and/or fisheries. The Beverton Medal is traditionally awarded in July at the Fisheries Society of the British Isles annual international conference. The first medal was awarded to Ray Beverton, in whose honour the medal is now named. In 2017, to mark the 50th anniversary of the Fisheries Society of the British Isles (FSBI), the medal was awarded to Ray's collaborator Sidney Holt; together they had written the book On the Dynamics of Exploited Fish Populations in 1957. Medallists Source: FSBI 2021 - Daniel Pauly 2020 - Beth Fulton 2019 - Neil B. Metcalfe 2018 - Gary Carvahlo 2017 - Sidney Holt 2016 - Lennart Persson 2015 - Ian Cowx 2014 - Alexander (Sandy) Scot 2013 - Felicity Huntingford 2012 - Charles Tyler 2011 - Imantes (Monty) Priede 2010 - Tony Farrell 2009 - Peter Maitland 2008 - Paul J.B. Hart 2007 - Richard Mann (M.B.E.) 2006 - Anne Magurran 2005 - J.P. Sumpter 2004 - A. Ferguson 2003 - Tony J. Pitcher 2002 - J.E. Thorpe 2001 - H. Bern 2000 - Rosemary Lowe-McConnell 1999 - J.M. Elliott 1998 - J.H.S. Blaxter 1997 - E. Houde 1996 - E.D. Le Cren 1995 - Ray Beverton See also List of biology awards References Biology awards Fisheries science British science and technology awards Awards established in 1995
Beverton Medal
Technology
428
4,905,722
https://en.wikipedia.org/wiki/AmigaOS%20version%20history
AmigaOS is the proprietary native operating system of the Amiga personal computer. Since its introduction with the launch of the Amiga 1000 in 1985, there have been four major versions and several minor revisions of the operating system. Initially, the Amiga operating system had no strong name or branding, as it was simply considered an integral part of the Amiga system as a whole. Early names used for the Amiga operating system included "CAOS" (which stood for "Commodore Amiga Operating System") and "AmigaDOS". Another non-official name was "Workbench", from the name of the Amiga desktop environment, which was included on a floppy disk named "Amiga Workbench". Version 3.1 of the Amiga operating system was the first version to be officially referred to as "Amiga OS" (with a space between "Amiga" and "OS") by Commodore. Version 4.0 of the Amiga operating system was the first version to be branded as a less generic "AmigaOS" (without the space). What many consider the first versions of AmigaOS (Workbench 1.0 up to 3.0) are referred to here by the Workbench name of their original disks. Kickstart/Workbench 1.0, 1.1, 1.2, 1.3 Workbench 1.0 was released for the first time in October 1985. The 1.x series of Workbench defaults to a distinctive blue and orange color scheme, designed to give high contrast on even the worst of television screens (the colors can be changed by the user). Version 1.1 consists mostly of bug fixes and, like version 1.0, was distributed only for the Amiga 1000. The entire Workbench operating system consisted of three floppy disks: Kickstart, Workbench and ABasic by MetaComCo. The Amiga 1000 needed a Kickstart disk to be inserted into the floppy drive to boot up. A simple illustration of a hand holding a blue Kickstart floppy, shown on a white screen, invited the user to perform this operation. After the Kickstart was loaded into a special section of memory called the writable control store (WCS), the image of the hand appeared again, this time inviting the user to insert the Workbench disk. Workbench version 1.2 was the first to support Kickstart stored in a ROM. A Kickstart disk was still necessary for Amiga 1000 models; it was no longer necessary for the Amiga 500 or 2000, but the users of these systems had to change the ROMs (which were socketed) to change the Kickstart version. Workbench now spanned two floppy disks and supported installing and booting from a hard drive (assuming the Amiga was equipped with one); the main disk was still named "Workbench" (also the name of the user interface portion of the operating system), and the second disk was the Extras disk. The system now shipped with AmigaBasic by Microsoft, the only software Microsoft ever wrote for the Amiga. Kickstart version 1.2 corrected various flaws and added AutoConfig support. AutoConfig is a protocol similar to, and a predecessor of, Plug and Play, in that it can configure expansion boards without user intervention. Kickstart version 1.3 improved little on its predecessor, the most notable change being auto-booting from hard drives. In Workbench 1.3, on the other hand, users could find several significant improvements, including FFS, a faster file system for hard disk storage. FFS resolved a problem of the old Amiga filesystem, which wasted too much hard disk space because it could store only 488 bytes of data in any 512-byte block, keeping 24 bytes for checksums. 
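As a rough illustration of the overhead just described (simple arithmetic on the 488-of-512-byte figure quoted above, not a claim from the article's sources), the following Python sketch compares the usable data capacity of old-filesystem blocks with FFS blocks; the 20 MB drive size is hypothetical.

```python
# Back-of-the-envelope comparison of old-filesystem vs. FFS data-block capacity,
# based on the 488-of-512-byte figure quoted above. The drive size is hypothetical.
BLOCK_SIZE = 512        # bytes per disk block
OFS_DATA_BYTES = 488    # usable bytes per old-filesystem data block (24-byte header)
FFS_DATA_BYTES = 512    # FFS data blocks carry no in-block header

def usable_capacity(total_blocks: int, data_bytes_per_block: int) -> int:
    """Bytes available for file data, ignoring directory and bitmap blocks."""
    return total_blocks * data_bytes_per_block

drive_blocks = (20 * 1024 * 1024) // BLOCK_SIZE   # a hypothetical 20 MB drive
print("Old filesystem usable:", usable_capacity(drive_blocks, OFS_DATA_BYTES), "bytes")
print("FFS usable:", usable_capacity(drive_blocks, FFS_DATA_BYTES), "bytes")
print(f"Per-block overhead under the old filesystem: {1 - OFS_DATA_BYTES / BLOCK_SIZE:.1%}")
```

On these assumptions the old filesystem gives up roughly 4.7% of every data block to per-block checksums, which is the space FFS reclaimed.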
Many improvements were also made to the CLI (command-line interface) of the Amiga, which was now a complete text-based shell named AmigaShell, along with various additional tools and programs. Kickstart/Workbench 1.4 Kickstart/Workbench 1.4 was a beta version of the upcoming 2.0 update and was never released, but the Kickstart part was shipped in very small quantities with early Amiga 3000 computers, where it is often referred to as the "Superkickstart ROM". In these machines it is used only to bootstrap the machine and load the Kickstart that actually boots the system. A very early first release of 1.4 looked similar to 1.3, but with slightly changed colors. A second version was similar to that of 2.0 and higher, with just minor differences. It is, however, possible to dump out of the OS selection screen by clicking where one would expect to see a close gadget. This causes the machine to boot Kickstart 1.4 from either the wb_2.x: partition or a floppy. Workbench 2.0, 2.04, 2.05, 2.1 Workbench 2.0 was released in 1990 and introduced many improvements and major advances to the GUI of the overall Amiga operating system. The harsh blue and orange colour scheme was replaced with a much easier-on-the-eye grey and light blue, with a 3D appearance to the window borders. The Workbench was no longer tied to the 640×256 (PAL) or 640×200 (NTSC) display modes, and much of the system was improved with an eye to making future expansion easier. For the first time, a standardised "look and feel" was added. This was done by creating the Amiga Style Guide, and including libraries and software which assisted developers in making conformant software. Technologies included the GUI element creation library gadtools, the software installation scripting language Installer, and the AmigaGuide hypertext help system. Workbench 2.04 introduced ARexx, a system-wide scripting language. Programmers could add so-called "ARexx ports" to their programs, which allowed them to be controlled from ARexx scripts. Using ARexx, you could make two completely different programs from different vendors work together seamlessly. For example, you could batch-convert a directory of files to thumbnail images with an ARexx-capable image-manipulation program, create an indexed HTML table of the thumbnails linking to the original images, and display it in a web browser, all from one script. ARexx became very popular, and was widely adopted by programmers. AmigaDOS, previously written in BCPL and very difficult to develop for beyond basic file manipulation, was mostly rewritten in C. Unfortunately, some badly written software – especially games – failed to run with 2.x, and so many users were unhappy with this update. Most often, the failure occurred because programmers had directly manipulated private structures maintained by the operating system rather than using official function calls. Many users circumvented the problem by installing so-called "kickstart switchers", small circuit boards holding both a Kickstart 1.3 and a 2.0 chip, with which they could switch between Kickstart versions. 2.x shipped with the A500+ (2.04), A600 (2.05), A3000 and A3000T. Workbench 2.1 was the last in this series, and was released only as a software update. It included useful features such as CrossDOS, to support working with floppy disks formatted for PCs. Since 2.1 was a software-only release, there was no Kickstart 2.1 ROM. 2.x also introduced PCMCIA card support for the slot on the A600. 
Workbench 2.1 also introduced a standard hypertext markup language, called AmigaGuide, for easily building user guides, help files, and manuals. Release 2.1 was also the first Workbench release to feature a system-standard localization system, allowing the user to make an ordered list of preferred languages; when a locale-aware application runs, it asks the operating system to find the catalog (a file containing translations of the application's strings) that best matches the user's preferences. 
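The catalog-selection behaviour described above can be pictured with a minimal sketch (an illustration only, not the actual locale.library API; the directory layout, function name, and language names here are hypothetical):

```python
import os

def pick_catalog(app_name: str, preferred_languages: list[str],
                 catalog_root: str = "LOCALE:Catalogs") -> str | None:
    """Return the first available translation catalog matching the user's
    ordered language preferences, or None to fall back to the application's
    built-in strings. Paths and names are illustrative, not the real API."""
    for language in preferred_languages:          # most preferred language first
        candidate = os.path.join(catalog_root, language, f"{app_name}.catalog")
        if os.path.exists(candidate):
            return candidate
    return None   # no catalog found: the application uses its built-in strings

# Example: a user who prefers Italian, then French, then English
print(pick_catalog("MyEditor", ["italiano", "francais", "english"])
      or "using built-in strings")
```

The point of the ordered list is that an application never fails outright for lack of a translation; it simply degrades to the next language the user listed, and ultimately to its built-in strings.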
AmigaOS 4 A new version of AmigaOS was released on December 24, 2006, after five years of development by Hyperion Entertainment (Belgium) under license from Amiga, Inc. for registered AmigaOne users. During the five years of development, users of AmigaOne machines could download pre-release versions of AmigaOS 4.0 from the Hyperion repository as they were made available. As reported by many users on Amiga discussion forums, these versions were stable and reliable, despite being technically labeled as "pre-releases". The last stable version of AmigaOS 4.0 for AmigaOne computers is the "July 2007 Update", released for download on 18 July 2007 to registered users of AmigaOne machines. AmigaOS 4 Classic was released commercially for older Amiga computers with CyberstormPPC and BlizzardPPC accelerator cards in November 2007. It had previously been available only to developers and beta-testers. Version 4.0 The new version is PowerPC-native, finally abandoning the Motorola 68k processor. AmigaOS 4.0 runs only on certain PowerPC hardware, which currently includes the A1200, A3000 and A4000 with PowerPC accelerator boards, and AmigaOne motherboards. Amiga, Inc.'s distribution policies for AmigaOS 4.0 and any later versions require that, for third-party hardware, the OS be bundled with it, with the sole exception of Amigas with Phase 5 PowerPC accelerator boards, for which the OS will be sold separately. AmigaOS 4.0 Final introduced a new memory system based on the slab allocator. Features, among others: Fully skinnable GUI Virtualized memory Integrated viewer for PDF and other document formats Support for PowerPC (native) and 68k (interpreted/JIT) applications New drivers for various hardware New memory allocation system Support for file sizes larger than 2 GB Integrated Picasso 96 2D Graphics API Integrated Warp3D 3D Graphics API Version 4.1 AmigaOS 4.1 was presented to the public on July 11, 2008, and went on sale in September 2008. This is a new version rather than a simple update, featuring, among others: Memory paging JXFS filesystem with support for drives and partitions of multi-terabyte size Hardware compositing engine (Radeon R1xx and R2xx family) Implementation of the Cairo device-independent 2D rendering library New and improved DOS functionality (full 64-bit support, universal notification support, automatic expunge and reload of updated disk resources) Improved 3D hardware-accelerated screen dragging See also Kickstart versions References AmigaOS First Update Release announcement at Hyperion site. AmigaOS new memory system revisited article on OS4.Hyperion site AmigaOS new system for allocating memory article on OS4.Hyperion site Hyperion Entertainment announces Amiga OS 4.1 AmigaOS Software version histories
AmigaOS version history
Technology
3,129
14,195,991
https://en.wikipedia.org/wiki/HBD
Hemoglobin subunit delta is a protein that in humans is encoded by the HBD gene. Function The delta (HBD) and beta (HBB) genes are normally expressed in the adult: two alpha chains plus two beta chains constitute HbA, which in normal adult life comprises about 97% of the total hemoglobin. Two alpha chains plus two delta chains constitute HbA2, which with HbF comprises the remaining 3% of adult hemoglobin. Five beta-like globin genes are found within a 45 kb cluster on chromosome 11 in the following order: 5' - epsilon – gamma-G – gamma-A – delta – beta - 3'. Clinical significance Mutations in the delta-globin gene are associated with Delta-thalassemia. See also Hemoglobin Human β-globin locus Thalassemia References Further reading Hemoglobins
HBD
Chemistry
188
28,642,548
https://en.wikipedia.org/wiki/SystemsGo
SystemsGo is a classroom program used in the United States to promote the study of engineering and the development of workforce skills. It is intended to encourage students to pursue careers in science, technology, engineering, and mechanics. It is based in Fredericksburg, Texas. It allows students to learn more about the past, present, and future of rocket technology, as well as use information learned from independent study to complete hands-on projects. Annual Rocket Meet Every spring, teams from high schools in the US gather at several sites to launch their semester project rockets. The program offers high school students three different levels of projects, ranging from a mid-sized rocket traveling to a one-mile apogee to rockets weighing several hundred pounds with the potential to reach near-space altitudes. Program levels Tsiolkovsky Level: The first stage of the program, named after the Soviet rocket scientist Konstantin Tsiolkovsky, requires students to propel a research package to an altitude of 5,280 feet (one mile) and to recover it after launch. Oberth Level: The intermediate stage, named after the German physicist Hermann Oberth, gives students the goal of designing and launching a rocket able to reach transonic velocity while maintaining a maximum altitude of less than 13,000 feet. Goddard Level: The most advanced level, named after the American engineer and rocket scientist Robert H. Goddard, challenges students to create a rocket from scratch that can reach an altitude of 50,000 feet, which is considered to fall within the near-space region of Earth's atmosphere. Rockets for this level are often several hundred pounds and can reach velocities of Mach 3 or even Mach 4. Launches for this level are hosted by the US Army at the White Sands Missile Range, New Mexico. Recognition The SystemsGo program has been recognized and supported by such organizations as SpaceX, Dow Chemical, Boeing, NASA, and The Space Foundation. References Aerospace
SystemsGo
Physics
385
51,614,357
https://en.wikipedia.org/wiki/Tarangire%20Ecosystem
The Tarangire Ecosystem () is a geographical region in northern Tanzania, Africa. It extends between 2.5 and 5.5 degrees south latitudes and between 35.5 and 37 degrees east longitudes. The Tarangire Ecosystem hosts the second-largest population of migratory ungulates in East Africa and the largest population of elephants in northern Tanzania. The Tarangire Ecosystem is defined by watershed boundaries of the Lake Manyara Basin and the Engaruka Basin, and the long distance migratory movements of eastern white-bearded wildebeest and plains zebra. It includes the dry season wildlife concentration area near the Tarangire River in Tarangire National Park, and the wet-season dispersal and calving grounds to the north in the Northern Plains and to the east in Simanjiro Plains, spanning in total approximately 20,500 km2 (7,900 sq mi). Migratory animals must have access to both the dry-season water source in the park, and the nutrient-rich forage available only on the calving grounds outside the park to successfully raise their calves and maintain their high abundance. The Tarangire Ecosystem is also known as the Masai Steppe, or the Tarangire-Manyara Ecosystem. Tarangire has approximately 500 species of birds, and more than 60 species of larger mammal. Geography and climate The area falls within the eastern branch of the East African Rift Valley which has widened and the valley floor fallen over the past few million years. About 250,000 years ago Lake Manyara and Lake Burunge were part of a larger lake called Proto-Manyara, a basin of internal drainage that lost water through evaporation and deep percolation. Subsequent rises in the Rift Valley floor changed drainage patterns and the lake was reduced in size and divided into the two shallow, alkali lakes currently seen. Topography is now mainly low ridges of gneiss and pre-Cambrian rocks covered with well-drained, medium textured, stony soils. Large areas of valley bottoms are montmorillonite black cotton soils. Ancient lake sediments produced clay soils in the Proto-Manyara area. Minjingu Hill and Vilima Vitatu were islands in Proto-Manyara Lake and their phosphate deposits there are derived from accumulated waterbird feces. Volcanic ash deposits produce rich soils on the Northern Plains and Simanjiro Plains where migratory wildebeest and zebra find forage with the nutrients necessary for lactation and healthy calf growth. The current western boundary is the rift valley escarpment, the northern boundary is the Kenyan border near Lake Natron, the southern and eastern boundaries are not defined by any strict geographic features. Elevation ranges from about 1000 m in the southwest to 2660 m in the northeast. Tarangire has a bimodal rainfall averaging 650 mm per annum, with short rains from November to February, long rains from March to May, and dry season from June to October. The rains, particularly the short rains, are very unreliable and often fail. Rainfall varies inter-annually, the standard deviation of the annual rainfall is equal to 37% of the mean annual rainfall. The inter-annual variation of monthly rainfall varies even more markedly, the standard deviation of monthly rainfall is 72% of the mean. This high variability in rainfall is also reflected in a high inter-annual variation of the length of the wet season. For example, the wet season lasted 38 days in 1983/1984 and 200 days in 1987/1988. The oldest known elephant to give birth to twins is found in Tarangire. 
A recent birth of elephant twins in the Tarangire National Park of Tanzania is a great example of how the birth of these two healthy and thriving twins can beat the odds. History Between the sixteenth and nineteenth centuries, the Maa-speaking pastoral people expanded into the area, replacing other pastoral groups like Nilotes and farming Bantu groups. By 1880 the Maasai reached their greatest extent. Around 1900 they suffered pleuro-pneumonia and smallpox diseases that killed many. At the same time the outbreak of the rinderpest decimated Maasai livestock and wildlife. The colonial period of 1880s to 1950s saw the displacement of Maasai from lands with high potential for agricultural development by the European farmers/settlers. Many game parks were created at the same time, often evicting pastoralists from key dry season grazing areas and watering points. Because of the abundant water and pasture in the Tarangire ecosystem, it had a reputation as one of the best pastoral areas in Tanzania. Many herders who were evicted from the Serengeti National Park in the 1950s relocated to this area. Tanzania became independent in 1961 and most of the British colonial administration/legislation was adopted by the new government. In the 1960s and 1970s the rinderpest that had previously killed many wildlife and livestock species was controlled. The control of rinderpest resulted in the increased numbers of wildebeest in the Serengeti ecosystem. This pushed Maasai into Tarangire area to avoid contact with wildebeest calving areas in the short grass-plains. Such areas are associated with the spread of Malignant Catarrhal Fever that affects cattle. Between 1962 and 1963 the worst drought in 50 years hit most parts of the country including Tarangire area and killed many wildlife and livestock. In 1967 agriculture was promoted as the backbone of the national economy. Large-scale farms like the Lolkisale bean farms were established in Tarangire to produce crops for export as well as for national reserves during droughts and food shortage. Human population increased in Tarangire area due both to natural increase and immigration of agriculturists from nearby regions of Kilimanjaro and Arusha. This displaced Maasai pastoralists and wildlife from the best rangelands into more marginal areas. In 1970, the Tarangire Game Reserve was upgraded to become Tarangire National Park. By the mid-1980s the movement of commercial interests and farmers into the area had expanded, blocking many traditional migratory routes for wildlife. In 2001, the Tanzanian government turned over the National Ranching Company land at Manyrara Ranch to the Tanzania Land Conservation Trust to help conserve the wildlife migration corridor between Tarangire National Park and the calving grounds to the north on the Gelai Plains. Conservation easements are being used as conservation tools on the calving grounds east of Tarangire National Park on the Simanjiro Plains. Land-use planning informed by wildlife survey data is being tried to help conserve pastoral rangelands, wildlife migration routes, and calving grounds in the Northern Plains. References External links Wild Nature Institute Wildlife Conservation Society Tropical and subtropical grasslands, savannas, and shrublands Tarangire River Natural history of Tanzania Geography of Arusha Region Geography of Dodoma Region Geography of Manyara Region Geography of Rift Valley Province Great Rift Valley Southern Acacia-Commiphora bushlands and thickets Ecosystems by region
Tarangire Ecosystem
Biology
1,408
5,398,628
https://en.wikipedia.org/wiki/Bread%20and%20salt
Bread and salt are offered to guests in a ceremony of welcome in cultures around the world. This pair of foods is particularly significant in Slavic countries, but is also notable in Nordic, Baltic, Balkan and other European cultures as well as in Middle Eastern cultures. Bread and salt as a traditional greeting remains common in Albania, Armenia, and among the Jewish diaspora. This tradition has been extended to spaceflight. Background Salt is an essential nutrient, and has long held an important place in religion and culture. For example, it is mentioned in the Bible dozens of times, including as a covenant of salt. Bread is a staple food, leavened or unleavened. It is usually made of wheat, but other grains can be used. In many cultures, bread is a metaphor for basic necessities and living conditions in general. Etymology The tradition is known locally by its Slavic names, which are all literal variants of "bread and salt". It is shared with some of the neighbouring non-Slavic peoples—the Latvians and Lithuanians (both Baltic nations), Romanians (Romance) as well as some Finno-Ugric peoples like the Karelians—all of whom are culturally and historically close to their Slavic neighbours. It is also common in Albania, Armenia (agh u hats), and Turkey, among the Jewish diaspora, and within parts of the Middle East under different names. Cultural associations Bread and salt as a traditional greeting is shared with some non-Slavic nations—Lithuanians, Latvians (both Baltic), Romanians (Romance) as well as some Finno-Ugric peoples like the Karelians—all of which are culturally and historically close to their Slavic neighbours. Albania Bread, salt, and heart is a traditional Albanian way of honoring guests; it dates back to the Kanun of Lekë Dukagjini, chapter 18, para. 608: "The Guest shall be welcomed with Bread, salt and heart". Heart in this context relates to hospitality; the concept is based on giving the guest the most expensive thing of that time, which was salt. Nowadays it is not commonly practiced in daily life. Belarus, Russia, and Ukraine When important, respected, or admired guests arrive, they are presented with a loaf of bread (usually a korovai) placed on a rushnyk (embroidered ritual cloth). A salt holder or a salt cellar is placed on top of the bread loaf or secured in a hole on the top of the loaf. On official occasions, the "bread and salt" is usually presented by young women dressed in national costumes (e.g., sarafan and kokoshnik). The tradition gave rise to the Russian word that expresses a person's hospitality (literally: "bready-salty"). In general, the word "bread" is associated in Russian culture with hospitality, with bread being the most respected food, whereas salt is associated with long friendship, as expressed in the Russian saying "to eat a pood of salt (together with someone)". Historically, the Russian Empire also had a high salt tax that made salt a very expensive and prized commodity (see also the Moscow uprising of 1648). There is also a traditional Russian greeting, "Khleb da sol!" ("Bread and salt!"). The phrase is to be uttered by an arriving guest as an expression of good wishes towards the host's household. It was often used by beggars as an implicit hint to be fed, and therefore a mocking rhymed response is known: "Khleb da sol!" — "Yem da svoy!" (Хлеб да соль — ем да свой!, "Bread and salt!" — "I am eating and it is my own!"). 
In Russian weddings, it is a traditional custom for the bride and groom to be greeted after the ceremony by family, usually the matriarch, with bread and salt in an embroidered cloth. This confers good health and fortune unto the newlyweds. In the Russian Orthodox Church, it is customary to greet the bishop at the steps of the church when he arrives for a pastoral visit to a church or monastery with bread and salt. Bulgaria Bread and salt () is a traditional Bulgarian custom expressing hospitality, showing that the guest is welcomed. The bread and salt is commonly presented to guests by a woman. Bulgarians usually make a certain type of bread for this occasion called pogacha, which is flat, fancy, and decorated. Regular bread is not usually used, although it may have been historically, but pogacha is much more common in this custom. Usually, guests are presented with the pogacha, and the guest is supposed to take a small piece, dip into the salt and eat it. This custom is common for official visits regardless of whether the guest is foreign or Bulgarian. One notable example of this custom is when the Russians came to liberate Bulgaria from the Ottomans at the end of the 19th century. A common scene from that period was of a Bulgarian village woman welcoming Russian soldiers with bread and salt as a sign of gratitude. Poland In Poland, welcoming with bread and salt ("") is often associated with the traditional hospitality ("") of the Polish nobility (szlachta), who prided themselves on their hospitality. A 17th-century Polish poet, Wespazjan Kochowski, wrote in 1674: "O good bread, when it is given to guests with salt and good will!" Another poet who mentioned the custom was Wacław Potocki. The custom was, however, not limited to the nobility, as Polish people of all classes observed this tradition, reflected in old Polish proverbs. Nowadays, the tradition is mainly observed on wedding days, when newlyweds are greeted with bread and salt by their parents on returning from the church wedding. North Macedonia In the North Macedonia, this tradition still is practiced occasionally as a custom expressing hospitality. A certain type of bread, similar to that in Bulgaria and also by the same name—pogača (from ) is prepared. The notable Macedonian and ex-Yugoslav ethno-jazz-rock group of the world music guitarist Vlatko Stefanovski had the name "Leb i Sol", which means "bread and salt" and speaks itself about this term of hospitality as something basic and traditional. Romania and Moldova As in the neighbouring Slavic countries, bread and salt is a traditional Romanian custom expressing hospitality, showing that the guest is welcomed. In Transylvania bread and salt are served to protect against weather demons. Serbia Bread and salt () is a traditional welcoming of guests, being customary to offer it before anything else, with bread having an important place in Serbian tradition, used in rituals. The traditional bread, pogača, is a symbol of family unity and goodness, and salt prosperity and security for the guest. It is part of the state protocol, in use since the Principality of Serbia, often used when welcoming foreign representatives. Slovakia and Czechia The long-tradition of the Slovakia and the Czech Republic as Slavic countries is to welcome important visits with bread and salt. An example is the welcome of Pope Francis in Bratislava 2021 by president Zuzana Čaputová. 
Finland, Estonia, Latvia, and Lithuania In Finland, Estonia, Latvia, and Lithuania, bread and salt were traditionally given as a symbol of blessing for a new home. This tradition continues today, and instead of white bread, dark fiber-rich rye bread is often used. The tradition is still kept alive in Eastern Karelia and in Ingria by the minor Baltic Finnic peoples. Germany Bread and salt are given away for different reasons: at a wedding, for a lasting alliance between the spouses, and on moving into a house, to wish prosperity and fertility. In northern Germany and Bohemia (Czech Republic) bread and salt are traditionally put into the diaper of a newborn. Turkish culture According to some Turkic legends, bread and salt were discovered by the grandsons of Japheth, the Islamic ancestor of the Turks. In the Turkish language, salt is sometimes used as a synonym for sugar and flavor; the saying "they added salt to their words" meant "they speak sweet words." Newborns were bathed in salt water so they would not smell, salt was believed to ward off the evil eye, and Turkish folklore has a salt saint called Tuz Baba. Bread is considered to be holy and is strongly respected. Together, salt and bread create the concept of "tuz-ekmek hakkı", meaning "salt-bread right", whereby if two or more people eat bread and salt together they become friends and an alliance is formed between them. The concept was widely popular in Ottoman literature and Turkish folk literature. There are various folkloric beliefs related to salt and bread, most often affecting children; to make the birth easier, bread, salt and a knife are placed next to the mother. In Samsun, it is believed that if two mothers who are both within 40 days of giving birth meet on the street, their children will not be able to walk. To prevent this, one of the mothers must give salt and bread to another mother who has a daughter. In Turkish folklore, there is a demon called Albastı, which harms mothers. In Adana, mothers carry a bag containing salt, bread and a nail until the 40th day after giving birth. Bread and salt are used for rain magic as well. When it rains heavily, parents place bread in one hand of their firstborn child and salt in the other. The child then says "until this salt melts, let the rain stop". In Muş it is believed that a girl who eats salted bread during Hıdrellez and does not drink water before going to bed will dream of her soulmate. Salt and bread are used in love magic too. Arab culture Arab culture also has a concept of "bread and salt", not in the context of welcoming, but as an expression of alliance formed by eating together, symbolizing the rapprochement between two persons. Eating bread and salt with a friend is considered to create a moral obligation which requires gratitude. This attitude is also expressed by Arab phrases such as "there are bread and salt between us" and "salt between them", which are terms of alliance. Jewish culture A similar practice also exists among Jews in the Diaspora and in Israel. After the ceremony of kiddush, a piece of challah is dipped in salt and eaten. The challah is a staple food eaten on special occasions, like holidays and weddings, as well as every shabbat. Bread and salt were also used in the past at welcoming ceremonies, given to respected persons. Iranian culture In Iranian culture, a guest welcomed into the home is said to have eaten bread and salt, and this is held to secure the guest's loyalty.
In space With the advent of the Soviet space program, this tradition has spread into space, where appropriately small packages of bread and salt are used nowadays. It was observed at the Apollo–Soyuz Test Project and the Salyut programme, when crackers and salt tablets were used in the spaceship. Bread chunks and salt were used as a welcome at the Mir space station, a tradition that was extended on the International Space Station. Bread and salt are also used to welcome cosmonauts returning to Earth. In fiction The custom of serving bread and salt to guests is a recurring reference in George R. R. Martin's A Song of Ice and Fire novels, where the welcome ritual serves not only as a Westerosi tradition of hospitality, but also a formal assurance of "guest right", a sacred bond of trust and honor guaranteeing that nobody in attendance, hosts and guests alike, shall be harmed. Violating the guest right is widely considered among the highest moral crimes, an affront worthy of the worst damnation, rivaled only by kinslaying. Game of Thrones, the associated television series, prominently features the tradition in season three, episode 9, "The Rains of Castamere". In Season 2, Episode 4 of Peaky Blinders, Alfie Solomons offers Charles Sabini bread and salt as Sabini offers a white flag of truce. Rudyard Kipling referenced bread and salt in a number of works. In The Ballad of East and West, leavened bread and salt is mentioned as binding an oath of blood brothership. At the beginning of Puck of Pook's Hill Puck establishes his credentials with the child protagonists by asking them to sprinkle plenty of salt on their shared meal. ""That'll show you the sort of person I am." In Rosemary Sutcliff's historical novel Outcast, bread and salt is referred to as a sign of belonging to a tribe: "You are my people, my own people, by hearth fire and bread and salt". In The Count of Monte Cristo by Alexandre Dumas, Chapter 72 is titled "Bread and Salt". The character Mercedes attempts to coax the main character into eating fruit, as part of an Arabian custom to ensure that those who have shared food and drink together under one roof would be eternal friends. Bread and salt are given as a housewarming gift in one scene of the 1946 film It's a Wonderful Life. In The Solar War, the first book belonging to Black Library's The Siege of Terra series, the character Malcador the Sigillite offers bread and salt to a weary Emperor of Mankind. Gallery References Bibliography External links Culture of Albania Slavic culture Greetings Traditions Religious food and drink National symbols of Belarus National symbols of Russia National symbols of Serbia National symbols of Ukraine Guest greeting food and drink Albanian traditions Lithuanian traditions Serbian traditions Edible salt Bread in culture Food combinations Turkish folklore
Bread and salt
Chemistry
2,840
33,530,940
https://en.wikipedia.org/wiki/Retraction%20Watch
Retraction Watch is a blog that reports on retractions of scientific papers and on related topics. The blog was launched in August 2010 and is produced by science writers Ivan Oransky (former vice president, editorial, at Medscape) and Adam Marcus (editor of Gastroenterology & Endoscopy News). Its parent organization is the Center for Scientific Integrity, a US 501(c)(3) nonprofit organization. Motivation and scope In 2011, Oransky and Marcus pointed out in Nature that the peer review process for scholarly publications continues long after the publication date. They were motivated to launch Retraction Watch to encourage this continuation and to increase the transparency of the retraction process. They observed that retractions of papers generally are not announced, that the reasons for retractions are not publicized, and that other researchers or members of the public who are unaware of a retraction may make decisions based on invalid results. Oransky described an example of a paper published in Proceedings of the National Academy of Sciences that reported a potential role for a drug against some types of breast cancer. Although the paper was later retracted, its retraction was not reported in media outlets that had earlier reported its positive conclusions, and a company had been established on the basis of the ultimately retracted conclusions. Oransky and Marcus claim that retractions also provide a window into the self-correcting nature of science, can provide insight into cases of scientific fraud, and can "be the source of great stories that say a lot about how science is conducted". By January 2021, more than 50 studies had cited Retraction Watch as the scientific publishing community explored the impact of retracted papers. During the COVID-19 pandemic, Retraction Watch maintained a separate list of retracted articles that added to misinformation about the pandemic, with additional research undertaken to analyse how the literature becomes polluted as retracted papers continue to be cited and used within scholarly research. In 2023, in the wake of the resignation of Stanford University president Marc Tessier-Lavigne, Oransky and Marcus co-authored op-eds in Scientific American and The Guardian. They estimated that scientific misconduct is more common than reported, and argued that, despite recent scandals involving research misconduct, the academic community has shown little interest in exposing wrongdoing and scientific errors, even though all members of the academic community share responsibility for the delays and lack of action. Impact Retraction Watch has demonstrated that retractions are more common than was previously thought. When Retraction Watch was launched, Marcus "wondered if we'd have enough material". It had been estimated that about 80 papers were retracted annually. However, in its first year, the blog reported on approximately 200 retractions. In October 2019 the Retraction Watch Database reached a milestone of 20,000 entries. As of January 2024, it contains over 50,000 entries. Hijacked journal tracker In 2022, Retraction Watch added a feature that tracks journal hijacking. Political scientist Anna Abalkina had developed a method for identifying hijacked journal domains based on an analysis of the archives of clone journals. This method is based on the argument that fraudulent publishers recycle identical papers to create a fictitious archive for a hijacked journal.
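A minimal sketch of the kind of archive-overlap check this argument implies, with entirely hypothetical titles and an illustrative threshold; it is not the actual Retraction Watch or Abalkina implementation:

```python
def normalize(title: str) -> str:
    """Crude normalization so near-identical titles compare equal."""
    return " ".join(title.lower().split())

def recycled_fraction(archive_titles, previously_published_titles):
    """Fraction of a journal's archive that duplicates papers already published elsewhere."""
    archive = {normalize(t) for t in archive_titles}
    known = {normalize(t) for t in previously_published_titles}
    return len(archive & known) / len(archive) if archive else 0.0

# Hypothetical example: a clone site whose "archive" is mostly copied papers.
suspect = ["A Study of X", "On the Theory of Y", "New Results About Z"]
elsewhere = ["On the   Theory of Y", "a study of x", "Some Unrelated Paper"]
if recycled_fraction(suspect, elsewhere) > 0.5:  # illustrative threshold
    print("archive looks recycled - candidate hijacked journal")
```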
Methods used to locate or confirm the hijacked status of journals include duplicated journal archives, identical website templates, growth in indexing, anomalous citations, and scholars' comments. Abalkina created the Retraction Watch Hijacked Journal Checker in partnership with Retraction Watch. Funding Retraction Watch has been funded by a variety of sources, including donations and grants. They received grants from the John D. and Catherine T. MacArthur Foundation, the Helmsley Charitable Trust, and the Laura and John Arnold Foundation. The database of retractions was funded by a $400,000 grant from the MacArthur Foundation in 2015. They have partnered with the Center for Open Science, which is also funded by the Laura and John Arnold Foundation, to create a retraction database on the Open Science Framework. See also PubPeer Replication crisis Center for Open Science Journal hijacking References External links Center for Scientific Integrity Science blogs Medical controversies Scientific misconduct Media analysis organizations and websites Error
Retraction Watch
Technology
879
67,527,587
https://en.wikipedia.org/wiki/TestOps
TestOps (or test operations) refers to the discipline of managing the operational aspects of testing within the software delivery lifecycle. Software testing is an evolving discipline that includes both functional and non-functional testing. The first recorded mention of the term is generally placed in March 2019, when Ditmir Hasani, CEO of the QA Tech consultancy, is thought to have coined it. Increasingly, software testing, especially in agile development processes, is shifting to become more of a continuous testing process in which software developers, quality engineers, manual testers, product owners, and others are involved in the quality process. As more people have become involved in the testing process and testing projects have grown, so too has the need to advance the discipline of software testing management and to manage the software quality processes, programmers, people, systems, and tests. TestOps helps teams scale their teams, tests, and quality processes efficiently and effectively. Elements of test operations (TestOps) TestOps involves several important disciplines that can be broken down into: Planning — Helps identify how the software is going to be tested. What are the priorities for testing? How will it be tested? Who will do the testing? In addition, the planning phase should identify the environment for the tests. Will they be run in a test environment, or in production? Production data can be valuable to identify real user flows that help prioritize test cases. The outputs include identifying the type of tests to use, the test automation tools, the timing of the testing, the ownership of the testing at different phases, and the design and outputs of the tests. Management — Test management includes the organization and governance of the team, the tools, the test environment, and the tests themselves. Tests follow a lifecycle pattern and must be managed as they move through stages such as draft, active, or quarantine. TestOps helps ensure that the testing processes are efficient and scalable. Control — As teams and tests grow in number and diversity, they naturally increase complexity. Change control processes such as pull requests, approvals on merges, collaboration tools, and ownership labeling can help ensure that changes to tests are properly approved. Insights — The data and information derived from test automation systems should help inform operational as well as process transformation decisions. Operational information about testing activities includes the test results (pass/fail), release readiness criteria, failure diagnostics, team testing productivity, test stability, and more. Information that can inform process improvements includes team and individual performance data, failure type trend analysis, test coverage mapping, and risk coverage analysis. TestOps Features DevOps integration — TestOps exists to ensure that the product development pipeline has all the testing frameworks and tools needed. It is common for QA engineers to rely on the pipelines that IT puts together without much input. TestOps changes this by owning test activities related to DevOps, allowing QA engineers and developers to have full ownership and visibility of the development pipeline, so they can tailor it to meet their needs. Cloud integration — Integrating tests and test runs with cloud providers. Enhanced test planning — Automation is not effective if the entire codebase has to get tested every time a line of code is changed.
TestOps provides a centralized platform that makes it easier for testers and developers to plan what tests to write, as well as to identify what tests to run and when. Test lifecycle management — Automated tests follow a lifecycle including creation, evaluation, active, and quarantine or removal. The status of a test should affect how it is treated in build automation systems like CI/CD, as illustrated in the sketch at the end of this entry. Test version control — Processes that help ensure that changes to tests are properly reviewed and approved through capabilities like pull requests in code. Real-time dashboards — Real-time results and status help test teams understand the state of software releases and the work that needs to be done to create, approve, or run more tests. TestOps vs. DevOps DevOps is a broader, more inclusive concept that includes software feature planning, code development, testing, deployment, configuration, monitoring, and feedback. It was an attempt to integrate some of the disconnected toolchains. Testing is included in the broader DevOps methodology. TestOps is not simply providing additional emphasis on testing. It is focused on the operational aspects of testing that are necessary to ensure that testing, whether performed in development, production, or in its own testing phase, is well planned, managed, controlled, and provides insights to enable continuous improvement. TestOps aims to remove the siloed working mindset from activities such as continuous delivery, software testing (manual and automated testing), environment setups, infrastructure and log management, and built-in security enforcement. References Software testing
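A minimal sketch of the lifecycle-based gating idea described above, assuming hypothetical state names and a simple CI rule rather than any standard TestOps API:

```python
from enum import Enum

class TestState(Enum):
    DRAFT = "draft"            # still being written; never run in CI
    EVALUATION = "evaluation"  # runs, but failures do not block the build
    ACTIVE = "active"          # runs and gates the release
    QUARANTINE = "quarantine"  # known-flaky; runs for data, never gates

def should_run(state: TestState) -> bool:
    return state is not TestState.DRAFT

def blocks_release(state: TestState, passed: bool) -> bool:
    # Only failures of ACTIVE tests are allowed to fail the pipeline.
    return state is TestState.ACTIVE and not passed

# Example: a quarantined test failing does not stop the release.
print(blocks_release(TestState.QUARANTINE, passed=False))  # False
print(blocks_release(TestState.ACTIVE, passed=False))      # True
```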
TestOps
Engineering
948
43,275,476
https://en.wikipedia.org/wiki/Passthrough%20%28electronics%29
In signal processing, a passthrough is a logic gate that enables a signal to "pass through" either unaltered or with only little alteration. Sometimes the concept of a "passthrough" can also involve daisy chain logic. Examples of passthroughs Analog passthrough (for digital TV) Sega 32X (passthrough for Sega Genesis video games) VCRs, DVD recorders, etc. act as a "pass-through" for composite video and S-video, though sometimes they act as an RF modulator for use on Channel 3. Tape monitor features allow an AV receiver (sometimes the recording device itself) to act as a "pass-through" for audio. An AV receiver usually allows pass-through of the video signal while amplifying the audio signal to drive speakers. See also Dongle, a device that converts the signal, instead of just being a "passthrough" for an unaltered signal Signal processing Electrical engineering
Passthrough (electronics)
Technology,Engineering
209
31,045,573
https://en.wikipedia.org/wiki/Methods%20to%20investigate%20protein%E2%80%93protein%20interactions
There are many methods to investigate protein–protein interactions which are the physical contacts of high specificity established between two or more protein molecules involving electrostatic forces and hydrophobic effects. Each of the approaches has its own strengths and weaknesses, especially with regard to the sensitivity and specificity of the method. A high sensitivity means that many of the interactions that occur are detected by the screen. A high specificity indicates that most of the interactions detected by the screen are occurring in reality. Biochemical methods Co-immunoprecipitation is considered to be the gold standard assay for protein–protein interactions, especially when it is performed with endogenous (not overexpressed and not tagged) proteins. The protein of interest is isolated with a specific antibody. Interaction partners which stick to this protein are subsequently identified by Western blotting. Interactions detected by this approach are considered to be real. However, this method can only verify interactions between suspected interaction partners. Thus, it is not a screening approach. A note of caution also is that immunoprecipitation experiments reveal direct and indirect interactions. Thus, positive results may indicate that two proteins interact directly or may interact via one or more bridging molecules. This could include bridging proteins, nucleic acids (DNA or RNA), or other molecules. Bimolecular fluorescence complementation (BiFC) is a new technique in observing the interactions of proteins. Combining with other new techniques, this method can be used to screen protein–protein interactions and their modulators, DERB. Affinity electrophoresis as used for estimation of binding constants, as for instance in lectin affinity electrophoresis or characterization of molecules with specific features like glycan content or ligand binding. Pull-down assays are a common variation of immunoprecipitation and immunoelectrophoresis and are used identically, although this approach is more amenable to an initial screen for interacting proteins. Label transfer can be used for screening or confirmation of protein interactions and can provide information about the interface where the interaction takes place. Label transfer can also detect weak or transient interactions that are difficult to capture using other in vitro detection strategies. In a label transfer reaction, a known protein is tagged with a detectable label. The label is then passed to an interacting protein, which can then be identified by the presence of the label. Phage display is used for the high-throughput screening of protein interactions. In-vivo crosslinking of protein complexes using photo-reactive amino acid analogs was introduced in 2005 by researchers from the Max Planck Institute In this method, cells are grown with photoreactive diazirine analogs to leucine and methionine, which are incorporated into proteins. Upon exposure to ultraviolet light, the diazirines are activated and bind to interacting proteins that are within a few angstroms of the photo-reactive amino acid analog. Tandem affinity purification (TAP) method allows high throughput identification of protein interactions. In contrast to yeast two-hybrid approach the accuracy of the method can be compared to those of small-scale experiments and the interactions are detected within the correct cellular environment as by co-immunoprecipitation. 
However, the TAP tag method requires two successive steps of protein purification and consequently it cannot readily detect transient protein–protein interactions. Recent genome-wide TAP experiments were performed by Krogan et al. and Gavin et al., providing updated protein interaction data for yeast. Chemical cross-linking is often used to "fix" protein interactions in place before trying to isolate/identify interacting proteins. Common crosslinkers for this application include the non-cleavable NHS-ester cross-linker, bissulfosuccinimidyl suberate (BS3); a cleavable version of BS3, dithiobis(sulfosuccinimidyl propionate) (DTSSP); and the imidoester cross-linker dimethyl dithiobispropionimidate (DTBP), which is popular for fixing interactions in ChIP assays. Chemical cross-linking followed by high mass MALDI mass spectrometry can be used to analyze intact protein interactions in place before trying to isolate/identify interacting proteins. This method detects interactions among non-tagged proteins and is available from CovalX. SPINE (Strep-protein interaction experiment) uses a combination of reversible crosslinking with formaldehyde and the incorporation of an affinity tag to detect interaction partners in vivo. Quantitative immunoprecipitation combined with knock-down (QUICK) relies on co-immunoprecipitation, quantitative mass spectrometry (SILAC) and RNA interference (RNAi). This method detects interactions among endogenous non-tagged proteins. Thus, it has the same high confidence as co-immunoprecipitation. However, this method also depends on the availability of suitable antibodies. Proximity ligation assay (PLA) in situ is an immunohistochemical method utilizing so-called PLA probes for detection of proteins, protein interactions and modifications. Each PLA probe comes with a unique short DNA strand attached to it and either binds to species-specific primary antibodies or consists of a directly DNA-labeled primary antibody. When the PLA probes are in close proximity, the DNA strands can interact through the subsequent addition of two other circle-forming DNA oligonucleotides. After joining of the two added oligonucleotides by enzymatic ligation, they are amplified via rolling circle amplification using a polymerase. After the amplification reaction, several-hundredfold replication of the DNA circle has occurred, and fluorophore- or enzyme-labeled complementary oligonucleotide probes highlight the product. The resulting high concentration of fluorescent or chromogenic signal in each single-molecule amplification product is easily visible as a distinct bright spot when viewed with either a fluorescence microscope or a standard brightfield microscope. Biophysical and theoretical methods Surface plasmon resonance (SPR) is the most common label-free technique for the measurement of biomolecular interactions. SPR instruments measure the change in the refractive index of light reflected from a metal surface (the "biosensor"). Binding of biomolecules to the other side of this surface leads to a change in the refractive index which is proportional to the mass added to the sensor surface. In a typical application, one binding partner (the "ligand", often a protein) is immobilized on the biosensor and a solution with potential binding partners (the "analyte") is channelled over this surface. The build-up of analyte over time allows quantification of on-rates (kon), off-rates (koff), dissociation constants (Kd) and, in some applications, active concentrations of the analyte.
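A minimal sketch of the simple 1:1 binding model often used to interpret such sensorgrams; the rate constants, Rmax, and analyte concentration below are hypothetical, and real instrument software fits these parameters to the measured curves:

```python
import numpy as np

def association(t, conc, kon, koff, rmax):
    """Response during analyte injection for a 1:1 interaction."""
    kobs = kon * conc + koff
    req = rmax * conc / (conc + koff / kon)  # equilibrium response at this concentration
    return req * (1.0 - np.exp(-kobs * t))

def dissociation(t, r0, koff):
    """Response after the injection stops (running buffer only)."""
    return r0 * np.exp(-koff * t)

# Hypothetical values: kon = 1e5 /M/s, koff = 1e-3 /s, so Kd = koff/kon = 10 nM.
kon, koff, rmax, conc = 1e5, 1e-3, 100.0, 50e-9
t = np.linspace(0.0, 300.0, 301)
r_assoc = association(t, conc, kon, koff, rmax)
r_dissoc = dissociation(t, r_assoc[-1], koff)
print("Kd =", koff / kon, "M")
```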
Several different vendors offer SPR-based devices. Best known are Biacore instruments which were the first commercially available. Dual polarisation interferometry (DPI) can be used to measure protein–protein interactions. DPI provides real-time, high-resolution measurements of molecular size, density and mass. While tagging is not necessary, one of the protein species must be immobilized on the surface of a waveguide. As well as kinetics and affinity, conformational changes during interaction can also be quantified. Static light scattering (SLS) measures changes in the Rayleigh scattering of protein complexes in solution and can characterize both weak and strong interactions without labeling or immobilization of the proteins or other biomacromolecule. The composition-gradient, multi-angle static light scattering (CG-MALS) measurement mixes a series of aliquots of different concentrations or compositions, measures the effect of the changes in light scattering as a result of the interaction, and fits the correlated light scattering changes with concentration to a series of association models in order to find the best-fit descriptor. Weak, non-specific interactions are typically characterized via the second virial coefficient. For specific binding, this type of analysis can determine the stoichiometry and equilibrium association constant(s) of one or more associated complexes, including challenging systems such as those that exhibit simultaneous homo- and hetero-association, multi-valent interactions and cooperativity. Dynamic light scattering (DLS), also known as quasielastic light scattering (QELS), or photon correlation spectroscopy, processes the time-dependent fluctuations in scattered light intensity to yield the hydrodynamic radius of particles in solution. The hydrodynamic radius is the radius of a solid sphere with the same translational diffusion coefficient as that measured for the sample particle. As proteins associate, the average hydrodynamic radius of the solution increases. Application of the Method of Continuous Variation, otherwise known as the Job plot, with the solution hydrodynamic radius as the observable, enables in vitro determination of Kd, complex stoichiometry, complex hydrodynamic radius, and the ΔH° and ΔS° of protein–protein interactions. This technique does not entail immobilization or labeling. Transient and weak interactions can be characterized. Relative to static light scattering, which is based upon the absolute intensity of scattered light, DLS is insensitive to background light from the walls of containing structures. This insensitivity permits DLS measurements from 1 μL volumes in 1536 well plates, and lowers sample requirements into the femtomole range. This technique is also suitable for screening of buffer components and/or small molecule inhibitors/effectors. Flow-induced dispersion analysis (FIDA), is a new capillary-based and immobilization-free technology used for characterization and quantification of biomolecular interaction and protein concentration under native conditions. The technique is based on measuring the change in apparent size (hydrodynamic radius) of a selective ligand when interacting with the analyte of interest. A FIDA assay works in complex solutions (e.g. plasma ), and provides information regarding analyte concentration, affinity constants, molecular size and binding kinetics. A single assay is typically completed in minutes and only requires a sample consumption of a few μL. 
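A minimal sketch of the continuous-variation (Job plot) logic mentioned above for a 1:1 interaction, with a hypothetical Kd and total concentration; in a real DLS experiment the complex concentration computed here would be reflected in an observable such as the average hydrodynamic radius:

```python
import numpy as np

def complex_conc(a_tot, b_tot, kd):
    """[AB] for a 1:1 interaction, from total concentrations and Kd (same units)."""
    s = a_tot + b_tot + kd
    return (s - np.sqrt(s * s - 4.0 * a_tot * b_tot)) / 2.0

# Hold the total concentration fixed and vary the mole fraction of partner A.
c_tot = 10.0                      # e.g. 10 uM total protein
kd = 1.0                          # hypothetical 1 uM dissociation constant
x = np.linspace(0.0, 1.0, 101)
ab = complex_conc(x * c_tot, (1.0 - x) * c_tot, kd)
print("mole fraction at maximum complex:", x[np.argmax(ab)])  # ~0.5 for 1:1 stoichiometry
```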
Fluorescence polarization/anisotropy can be used to measure protein–protein or protein–ligand interactions. Typically one binding partner is labeled with a fluorescence probe (although sometimes intrinsic protein fluorescence from tryptophan can be used) and the sample is excited with polarized light. The increase in the polarization of the fluorescence upon binding of the labeled protein to its binding partner can be used to calculate the binding affinity. With fluorescence correlation spectroscopy, one protein is labeled with a fluorescent dye and the other is left unlabeled. The two proteins are then mixed, and the data give the fractions of the labeled protein that are unbound and bound to the other protein, allowing a measure of KD and binding affinity to be obtained. Time-course measurements can also be taken to characterize binding kinetics. FCS also reports the size of the formed complexes, so the stoichiometry of binding can be measured. A more powerful method is fluorescence cross-correlation spectroscopy (FCCS), which employs double-labeling techniques and cross-correlation, resulting in vastly improved signal-to-noise ratios over FCS. Furthermore, two-photon and three-photon excitation practically eliminates photobleaching effects and provides ultra-fast recording of FCCS or FCS data. Fluorescence resonance energy transfer (FRET) is a common technique for observing the interactions of different proteins. Applied in vivo, FRET has been used to detect the location and interactions of genes and cellular structures including integrins and membrane proteins. FRET can be used to obtain information about metabolic or signaling pathways. Bio-layer interferometry (BLI) is a label-free technology for measuring biomolecular interactions (protein:protein or protein:small molecule). It is an optical analytical technique that analyzes the interference pattern of white light reflected from two surfaces: a layer of immobilized protein on the biosensor tip, and an internal reference layer. Any change in the number of molecules bound to the biosensor tip causes a shift in the interference pattern that can be measured in real time, providing detailed information regarding the kinetics of association and dissociation of the two molecules as well as the affinity constant for the protein interaction (ka, kd and Kd). Due to the sensor configuration, the technique is highly amenable to both purified and crude samples as well as high-throughput screening experiments. The detection method can also be used to determine the molar concentration of analytes. Protein activity can also be determined by NMR multi-nuclear relaxation measurements, or 2D-FT NMR spectroscopy in solutions, combined with nonlinear regression analysis of NMR relaxation or 2D-FT spectroscopy data sets. Whereas the concept of water activity is widely known and utilized in the applied biosciences, its complement, the protein activity which quantitates protein–protein interactions, is much less familiar to bioscientists as it is more difficult to determine in dilute solutions of proteins; protein activity is also much harder to determine for concentrated protein solutions, when protein aggregation, not merely transient protein association, is often the dominant process. Isothermal titration calorimetry (ITC) is considered the most quantitative technique available for measuring the thermodynamic properties of protein–protein interactions and is becoming a necessary tool for protein–protein complex structural studies.
This technique relies upon the accurate measurement of heat changes that follow the interaction of protein molecules in solution, without the need to label or immobilize the binding partners, since the absorption or production of heat is an intrinsic property of virtually all biochemical reactions. ITC provides information regarding the stoichiometry, enthalpy, entropy, and binding kinetics between two interacting proteins. Microscale thermophoresis (MST), is a new method that enables the quantitative analysis of molecular interactions in solution at the microliter scale. The technique is based on the thermophoresis of molecules, which provides information about molecule size, charge and hydration shell. Since at least one of these parameters is typically affected upon binding, the method can be used for the analysis of each kind of biomolecular interaction or modification. The method works equally well in standard buffers and biological liquids like blood or cell-lysate. It is a free solution method which does not need to immobilize the binding partners. MST provides information regarding the binding affinity, stoichiometry, competition and enthalpy of two or more interacting proteins. Rotating cell‑based ligand binding assay using radioactivity or fluorescence, is a recent method that measures molecular interactions in living cells in real-time. This method allows the characterization of the binding mechanism, as well as Kd, kon and koff. This principle is being applied in several studies, mainly with protein ligands and living mammalian cells. An alternative technology to measure protein interactions directly on cells is Real-Time Interaction Cytometry (RT-IC). In this technology, the living or fixed cells are physically retained on the surface of biosensor chips using biocompatible and flow-permeable polymer traps. Binding and unbinding of automatically injected labeled analytes is measured by time-resolved fluorescence detection. Single colour reflectometry (SCORE) is a label-free technology for measuring all kinds of biomolecular interactions in real-time. Similar to BLI, it exploits interference effects at thin layers. However, it does not need a spectral resolution but rather uses monochromatic light. Thus, it is possible to analyse not only a single interaction but high-density arrays with up to 10,000 interactions per cm2. switchSENSE is a technology based on DNA nanolevers on a chip surface. A fluorescent dye as well as the unlabeled ligand are attached to this nanolever. Upon binding of an analyte to the ligand, the real-time kinetic rates (kon, koff) can be measured as changes in fluorescence intensity and the Kd can be derived. This method can be used to investigate protein-protein interactions, as well as to investigate modulators of protein-protein interactions by assessing ternary complex formation. An example for such modulators are PROTACs, which are investigated for their therapeutic potential in cancer therapy. Another example for such ternary interactions are bispecific antibodies binding to their two distinct antigens. switchSENSE can additionally be utilized to detect conformational changes induced by ligands binding to a target protein. Genetic methods The yeast two-hybrid and bacterial two-hybrid screen investigate the interaction between artificial fusion proteins. They do not require isolation of proteins but rather use transformation to express proteins in bacteria or yeast. 
The cells are designed so that an interaction activates the transcription of a reporter gene or a reporter enzyme. Computational methods Most PPI methods require some computational data analysis. The methods in this section are primarily computational, although they typically require data generated by wet-lab experiments. Protein–protein docking, the prediction of protein–protein interactions based only on three-dimensional protein structures from X-ray diffraction of protein crystals, might not be satisfactory. Network analysis includes the analysis of interaction networks using methods from graph theory or statistics. The goal of these studies is to understand the nature of interactions in the context of a cell or pathway, not just individual interactions. References Protein methods Molecular biology techniques Protein–protein interaction assays
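A minimal sketch of the graph-theoretic network analysis described above, using the networkx library and a small, entirely hypothetical interaction list:

```python
import networkx as nx

# Hypothetical binary interactions (e.g. from a two-hybrid or TAP screen).
interactions = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("E", "F")]

g = nx.Graph(interactions)

# Hubs: proteins with many interaction partners.
print(sorted(g.degree, key=lambda kv: kv[1], reverse=True))

# Complexes or modules often appear as densely connected components.
print([sorted(c) for c in nx.connected_components(g)])

# Betweenness centrality highlights proteins bridging different modules.
print(nx.betweenness_centrality(g))
```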
Methods to investigate protein–protein interactions
Chemistry,Biology
3,687
69,444,595
https://en.wikipedia.org/wiki/Seismic%20stratigraphy
Seismic stratigraphy is a method for studying sedimentary rock in the deep subsurface based on seismic data acquisition. History The term seismic stratigraphy was introduced in 1977 by Vail as an integrated stratigraphic and sedimentologic technique to interpret seismic reflection data for stratigraphic correlation and to predict depositional environments and lithology. This technique was initially employed for petroleum exploration and was subsequently developed into sequence stratigraphy by academic institutes. Basic Concept Seismic reflection is generated at interfaces that separate media with different acoustic properties, and traditionally these interfaces have been interpreted as lithological boundaries. Vail in 1977, however, recognized that these reflections were, in fact, parallel to the bedding surfaces, and therefore time-equivalent surfaces. Interruption of reflections indicates the disappearance of bedding surfaces. Hence, onlap, downlap, toplap and other depositional features observed on surface outcrops have been demonstrated on seismic profiles. This revolutionary interpretation has been substantiated by Vail's associated industrial drilling results and extensive multichannel seismic data. Furthermore, the most indisputable evidence comes from the progradational dipping reflection pattern associated with advancing delta deposition in shallow marine environments. Lithological boundaries associated with the delta front and slope are nearly horizontal, but are not represented by reflections. Instead, the dipping reflections are a clear indication of depositional surfaces, and hence time-plane equivalents. Methodology Establishing Sequence Boundary A sequence boundary is defined as an erosional unconformity recognized on the seismic profile as a reflection surface with reflection termination features such as truncation below and onlap above the surface. The sequence boundary, therefore, represents a marine regression event, during which the continental shelf is partially exposed to subaerial erosion processes. A seismic sequence is defined as the stratigraphic interval between two consecutive sequence boundaries, representing two marine regression events with a marine transgression event in between. Thus a seismic sequence is further subdivided into a basal regressive systems tract, a transgressive systems tract in the middle, and a regressive systems tract at the top. The transgressive systems tract is marked at the top by a maximum flooding surface. Describing Seismic Facies Within a systems tract, each seismic facies is mapped based on reflection geometry, continuity, amplitude, frequency, and interval velocity. The lithology of each facies is then predicted according to known depositional models and nearby drilling results. Estimating Relative Sea Level Changes Since onlaps on an erosional surface approximate the positions of sea level on a coastal plain, the sea level variation of a marine transgression/regression cycle can be estimated from the onlap positions on seismic profiles. The maximum sea-level rise is represented by the highest onlap position on a sequence boundary and the minimum sea-level fall by the lowest onlap position on the next younger sequence boundary. The difference in depth between the two positions represents the magnitude of the sea level change for the cycle. See also Stratigraphy References Stratigraphy Geophysics
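A minimal worked example of the onlap-based estimate described above, using hypothetical onlap depths measured below a common datum:

```python
def relative_sea_level_change(highest_onlap_depth, lowest_onlap_depth_next_boundary):
    """
    Magnitude of the relative sea-level change for one cycle: the difference in depth
    between the highest onlap on a sequence boundary (maximum rise) and the lowest
    onlap on the next younger sequence boundary (maximum fall).
    """
    return abs(lowest_onlap_depth_next_boundary - highest_onlap_depth)

# Hypothetical picks: highest onlap at 150 m depth, lowest onlap on the next
# younger sequence boundary at 230 m depth, giving an ~80 m cycle.
print(relative_sea_level_change(150.0, 230.0), "m")
```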
Seismic stratigraphy
Physics
622
70,180,977
https://en.wikipedia.org/wiki/Verticordia%20elizabethiae
Verticordia elizabethiae, known as Elizabeth's featherflower, is a flowering plant in the myrtle family, Myrtaceae. An endemic species of Southwest Australia, it occurs near salt lakes as an erect bushy shrub. Taxonomy It is a species of Verticordia, the featherflowers, assigned to a section of the genus, Verticordia sect. Verticordella. The type was collected in 2018 at a location reported imprecisely as Baladjie. Previously collected specimens, including one made by Charles Gardner in 1926 and another recognised as Verticordia sp. Koolyanobbing, were assigned by the authors, Barbara Rye and Matthew Barrett, to the new species. The specific epithet honours the extensive contribution of Elizabeth Anne (Berndt) George, née Sykes (1935-2012), to the collection and research of verticordias. A treatment of the population had previously been published by George as an inland variant of Verticordia halophila. Description It is a low-growing, salt-tolerant shrub between 0.4 and 1.2 metres wide and 0.3 to 0.6 m high. The species lacks evidence of a lignotuber. Distribution and habitat The species is known only from a restricted range within the semi-arid Coolgardie bioregion, in an area near Southern Cross, Western Australia. It occurs on flats around salt lakes amongst other halophytes that form heath communities, including species of Maireana, Gunniopsis and Frankenia, in association with Callitris. References elizabethiae Halophytes Endemic flora of Western Australia Rosids of Western Australia Plants described in 2020
Verticordia elizabethiae
Chemistry
332
71,929,531
https://en.wikipedia.org/wiki/Inverse%20lithography
In semiconductor device fabrication, inverse lithography technology (ILT) is an approach to photomask design. It amounts to solving an inverse imaging problem: calculating the shapes of the openings in a photomask ("source") so that the passing light produces a good approximation of the desired pattern ("target") on the illuminated material, typically a photoresist. As such, it is treated as a mathematical optimization problem of a special kind, because an analytical solution usually does not exist. In conventional approaches, known as optical proximity correction (OPC), the "target" shape is augmented with carefully tuned rectangles to produce a "Manhattan shape" for the "source". The ILT approach instead generates curvilinear shapes for the "source", which deliver better approximations of the "target". ILT was proposed in the 1980s; however, at that time it was impractical due to the huge computational power required and the complicated "source" shapes, which presented difficulties for verification (design rule checking) and manufacturing. In the late 2000s, developers started reconsidering ILT due to significant increases in computational power. References Lithography (microfabrication) Inverse problems
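A minimal sketch of the kind of gradient-based optimization this formulation implies, using a Gaussian blur as a crude stand-in for the real optical forward model; the kernel width, resist threshold, sigmoid steepness, and step size are all illustrative assumptions, and production ILT relies on rigorous imaging models and manufacturability constraints:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sigmoid(x, k=20.0):
    return 1.0 / (1.0 + np.exp(-k * x))

def optimize_mask(target, sigma=2.0, thresh=0.5, k=20.0, lr=0.5, steps=200):
    """Gradient descent on a continuous mask so the simulated print matches the target."""
    mask = target.astype(float).copy()         # start from the target layout itself
    for _ in range(steps):
        aerial = gaussian_filter(mask, sigma)  # crude stand-in for the optical model
        printed = sigmoid(aerial - thresh, k)  # simple resist threshold model
        err = printed - target                 # gradient of 0.5*sum(err**2) w.r.t. printed
        g_aerial = err * k * printed * (1.0 - printed)
        # The Gaussian blur is (approximately) self-adjoint, so blur the gradient back.
        g_mask = gaussian_filter(g_aerial, sigma)
        mask = np.clip(mask - lr * g_mask, 0.0, 1.0)
    return mask

# Toy target: a single square feature on a 64x64 grid; the optimized mask
# deviates from the rectangle, growing smooth, curvilinear corrections.
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0
mask = optimize_mask(target)
```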
Inverse lithography
Materials_science,Mathematics
258
3,603,880
https://en.wikipedia.org/wiki/Red%20Rectangle%20Nebula
The Red Rectangle Nebula, so called because of its red color and unique rectangular shape, is a protoplanetary nebula in the Monoceros constellation. Also known as HD 44179, the nebula was discovered in 1973 during a rocket flight associated with the AFCRL Infrared Sky Survey called Hi Star. The binary system at the center of the nebula was first discovered by Robert Grant Aitken in 1915. Characteristics High-resolution images of it in visible and near infrared light reveal a highly symmetric, compact bipolar nebula with X-shaped spikes which imply anisotropic dispersion of the circumstellar material. The central binary system is completely obscured, providing no direct light. The Red Rectangle is known to be particularly rich in polycyclic aromatic hydrocarbons (PAHs). The presence of such carbon-bearing macromolecules in the X-shaped nebular component, while the equatorial regions are known to contain silicate-rich dust grains and O-bearing molecules, was interpreted as due to a change of the O/C abundance ratio of the primary star during its late evolution. However, PAHs could also be formed as a result of the development of a central photodissociation region, a region in which a very active chemistry appears due to dissociation of stable molecules by the UV emission of the central stellar system. The Red Rectangle was the first nebula around an evolved star in which an equatorial disk in rotation was well identified (the existence of such disks has been demonstrated only in a few of these objects, only expansion is observed in most of them). However, the disk absorbs the stellar light and is practically not seen in the beautiful optical image, which mainly represents a relatively diffuse outflow that is very probably formed of material extracted from the denser disk. The distinct rungs suggest several episodes of increased ejection rate. The Hubble Space Telescope has revealed a wealth of new features in the Red Rectangle that cannot be seen by ground-based telescopes looking through Earth's turbulent atmosphere. The origins of many of the features in this dying star, in particular its X-shaped image, still remain hidden or even outright mysterious. The presence of a conspicuous bipolar symmetry is usual in protoplanetary and planetary nebulae. Theorists, like Noam Soker, , Adam Frank, and others, have shown that this axial symmetry can appear as a result of shocks due to interaction of different phases of the stellar winds (characteristic of the late stellar evolution), but its origin is still debated. On the other hand, the X-like shape and the low velocity of the outflowing gas in the Red Rectangle are peculiar, probably because its origin (associated to a stable, extended disk) is different than for most protoplanetary nebulae. References External links The remarkable Red Rectangle: A Stairway to Heaven? Dying Star Sculpts Rungs of Gas and Dust Charged polycyclic aromatic hydrocarbon clusters and the galactic red emission, 2007. Astronomy Picture of the Day – The Red Rectangle Nebula from Hubble (14 June 2010) The Red Rectangle Protoplanetary nebulae Monoceros
Red Rectangle Nebula
Astronomy
644
44,636,075
https://en.wikipedia.org/wiki/Monowheel%20tractor
A monowheel tractor or monowheel-drive tractor is a light transport and agricultural vehicle that is driven and controlled by an engine and steering mechanism mounted on a single large wheel, with the load-carrying body trailing behind. Despite the name, they are tricycles. Development Monowheel tractors developed in two periods, both during times of rapid upheaval after warfare. Both types had quite different circumstances and goals. Small wheel tractors appeared after World War I, during a time of new opportunity. Large-wheel tractors appeared after World War II, during a period of austerity. Small wheel tractors The first monowheel tractors appeared in the 1920s, as a result of technical developments in small petrol engines. These had been driven by improving engine technology, particularly for motorbikes. Such engines now represented an affordable and portable power source. An entire powertrain could be constructed as a single monobloc unit, carried on a single wheel, and this mounted onto a trailer through a large swivel bearing. The engine and its drivetrain represented a relatively high technology machine for the period, although its trailer could be much more crude. The engine units were made by the new light engineering works of the time, such as R A Lister, and custom-made trailers could be produced for a wide range of tasks by less sophisticated workshops, down to village blacksmiths. The completed vehicles had small wheels and little suspension. This limited their use to smooth floors, such as factories and railway stations, rather than muddy farm tracks or even the roads of the period. In most cases they were replacing hand barrows. Their advantage was that they were faster than a loaded barrow, limited to around the same speed as walking with an empty barrow and they only required one operator, no matter what load. Loads of up to 2 tons could be carried. Some vehicle were driven by a driver riding or standing on-board with direct tiller steering, others were pedestrian controlled by walking alongside. All were highly manoeuvrable, the full swivel of the self-contained engine unit allowing them to turn within their own length. Examples include: Lister Auto-Truck Reliance of Heckmondwike, from 1935, originally the Redshaw Lister Woollen Machinery Co. Salsbury Motors 'Turret Truck' of 1946. This used a cylindrical motor enclosure that could be rotated completely to allow reverse. The 6hp motor and continuously variable transmission were based on Salsbury's pre-war scooter designs. Large wheel tractors After World War II, tractors were a well-developed and widespread piece of agricultural machinery, although they were still expensive. Some items, such as their large rubber tyres, were particularly difficult as they relied on imported raw materials. Britain, for some years after the war, was in a period of austerity and currency controls applied to overseas purchases. An obvious simplification was to take the technology of the tractor, but use only a single wheel and a smaller engine. Many of the large monowheel tractor's tasks would be in either replacing horse carts, or else as a cheaper substitute for more conventional tractors. S. E. Opperman of Boreham Wood did this in 1945 with their Opperman Motocart. This used a tricycle cart chassis of welded sheet steel, drawn by a tractor wheel mounted on a single small-diameter kingpin above it. The entire powertrain was carried on the wheel hub, including an JAP or Douglas single cylinder petrol engine. 
Although there was still no suspension other than the large front tractor tyre, the Motocart's wheel steering and large wheels allowed a greater speed than other carts, up to (for legal reasons). Many were road registered, although not provided with full lighting. Using the full range of low gears, a load of up to tons on trial was carried up steep hills. A similar, although smaller, vehicle was patented in the US. This was intended as a manoeuvrable light tipper for construction sites. The monowheel concept continued into the 1960s, although now aimed more at "colonial" overseas use. In 1966 the British government through the NRDC was working on a design developed for the National Institute of Agricultural Engineers. See also Two-wheel tractor References Tractors
Monowheel tractor
Engineering
861
24,028,211
https://en.wikipedia.org/wiki/The%20Combustion%20Institute
The Combustion Institute is an educational non-profit, international, scientific and engineering society whose purpose is to promote research in combustion science. The institute was established in 1954, and its headquarters are in Pittsburgh, Pennsylvania, United States. The current president of The Combustion Institute is Philippe Dagaut (2021-). Foundation and mission This important field of study, which spans many scientific and engineering disciplines, is supported through the discussion of research findings at regional, national and biennial international symposia, and through the publication of the Proceedings of the Combustion Institute, the institute's journal Combustion and Flame, and the affiliated journals Progress in Energy and Combustion Science, Combustion Science and Technology and Combustion Theory and Modelling. As of 2012, the institute serves as the parent organization for thirty-three national sections organized in many countries (the US being divided into three sections). In honor of the fiftieth anniversary of the Combustion Institute, the leading combustion scientists John D. Buckmaster, Paul Clavin, Amable Liñán, Moshe Matalon, Norbert Peters, Gregory Sivashinsky and Forman A. Williams wrote a paper in the Proceedings of the Combustion Institute. International symposium on combustion The international symposium on combustion is organised by the Combustion Institute biennially. The first symposium on combustion was held in 1928 in the United States and the first international symposium on combustion was held in 1948, even though the Combustion Institute itself was not founded until 1954. Thirty-seven symposia have been held so far; the 38th symposium, originally planned for 2020, was postponed to 2021. Institute Awards During each International Symposium, The Combustion Institute awards the following: Bernard Lewis Gold Medal – established in 1958 and awarded for brilliant research in the field of combustion. Alfred C. Egerton Gold Medal – established in 1958 and awarded biennially for distinguished, continuing and encouraging contributions to the field of combustion. Silver Combustion Medal – established in 1958 and awarded to an outstanding paper presented at the previous symposium. The Hottel Lecture. Ya B. Zeldovich Gold Medal – established in 1990 and awarded for outstanding contributions to the theory of combustion or detonation. Bernard Lewis Fellowship – established in 1996 during the 26th International Symposium, this fellowship is awarded to encourage high-quality research in combustion by young scientists and engineers. Distinguished Paper Award – established in 1996 during the 31st International Symposium, this award is presented to the paper in each of the twelve colloquia of a symposium that is judged to be the most distinguished in quality, achievement and significance. Bernard Lewis Visiting Lecturer Fellowship. Hiroshi Tsuji Early Career Researcher Award. See also International Flame Research Foundation References American engineering organizations Scientific organizations established in 1954 Combustion
The Combustion Institute
Chemistry
534
66,635,300
https://en.wikipedia.org/wiki/Jennifer%20Burney
Jennifer Burney grew up in Albuquerque, New Mexico, and is now professor and the Marshall Saunders Chancellor's Endowed Chair in Global Climate Policy and Research at the University of California, San Diego, as part of the School of Global Policy and Strategy. She studied history and science at Harvard University and earned a PhD in physics from Stanford, developing a superconducting camera to capture images of cosmic bodies, like pulsars or exoplanets. After graduating, she worked for Solar Electric Light Fund on rural electrification, particularly in West Africa. She worked as a postdoc, starting in 2008, at Stanford on food security and the environment. She was named a National Geographic Emerging Explorer in 2011. As a current research affiliate at the University of California, San Diego's Policy Design and Evaluation Laboratory, her research focuses mainly on global food security, adaptation, and climate change mitigation. Some projects she has worked on include rural electrification, aerosol emissions, and high-yield farming. Her partner is Claire Adida, professor of political science at the University of California, San Diego. They have two children. References External links 21st-century American women scientists Living people 21st-century American physicists American LGBTQ academics American LGBTQ scientists LGBTQ people from California Environmental scientists Stanford University alumni University of California, San Diego faculty Harvard College alumni Year of birth missing (living people) LGBTQ physicists
Jennifer Burney
Environmental_science
285
16,022,035
https://en.wikipedia.org/wiki/Planetary%20mnemonic
A planetary mnemonic refers to a phrase created to remember the planets and dwarf planets of the Solar System, with the order of words corresponding to increasing sidereal periods of the bodies. One simple visual mnemonic is to hold out both hands side-by-side with the thumbs in the same direction (typically the left hand facing palm down, and the right hand palm up). The fingers of the hand with the palm down represent the terrestrial planets, with the left pinkie representing Mercury and the thumb the asteroid belt, including Ceres. The other hand represents the giant planets, with its thumb representing trans-Neptunian objects, including Pluto. Nine planets Before 2006, Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, and Pluto were considered planets. Below is a partial list of mnemonics for them: "Men Very Easily Make Jugs Serve Useful Needs, Perhaps" – The structure of this sentence, which was current in the 1950s, suggests that it may have originated before Pluto's discovery. It can easily be trimmed back to reflect Pluto's demotion to dwarf planet. "My Very Elegant Mother Just Sat Upon Nine Porcupines" "Mary's violet eyes make Johnnie stay up nights pondering" With the IAU's 2006 definition of planet, which reclassified Pluto as a dwarf planet along with Ceres and Eris, these mnemonics became obsolete. Eight planets When Pluto's status was changed to dwarf planet, mnemonics could no longer include the final "P". The first notable suggestion came from Kyle Sullivan of Lumberton, Mississippi, USA, whose mnemonic was published in the Jan. 2007 issue of Astronomy magazine: "My Violent Evil Monster Just Scared Us Nuts". In August 2006, for the eight planets recognized under the new definition, Phyllis Lugger, professor of astronomy at Indiana University, suggested the following modification to the common mnemonic for the nine planets: "My Very Educated Mother Just Served Us Nachos". She proposed this mnemonic to Owen Gingerich, Chair of the International Astronomical Union (IAU) Planet Definition Committee, and published the mnemonic in the American Astronomical Society Committee on the Status of Women in Astronomy Bulletin Board on August 25, 2006. It also appeared in Indiana University's IU News Room Star Trak on August 30, 2006. This mnemonic is used by the IAU on their website for the public. Others angry at the IAU's decision to "demote" Pluto composed sarcastic mnemonics in protest: "Many Very Educated Men Justify Stealing Unique Ninth" – found in Schott's Miscellany by Ben Schott. "Many Very Educated Men Just Screwed Up Nature" – this mnemonic is mentioned by Mike Brown, who discovered Eris. Another mnemonic that was changed from nine to eight planets is "Most Very Elderly Men Just Slept Under Newspapers". Slightly risqué versions include "Mary's 'Virgin' Explanation Made Joseph Suspect Upstairs Neighbor". Eleven planets and dwarf planets In 2007, the National Geographic Society sponsored a contest for a new mnemonic of MVEMCJSUNPE, incorporating the then-eleven known planets and dwarf planets, including Eris, Ceres, and the newly demoted Pluto. On February 22, 2008, "My Very Exciting Magic Carpet Just Sailed Under Nine Palace Elephants", coined by 10-year-old Maryn Smith of Great Falls, Montana, was announced as the winner. The phrase was featured in the song 11 Planets by Grammy-nominated singer and songwriter Lisa Loeb and in the book 11 Planets: A New View of the Solar System by David Aguilar.
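A minimal sketch showing how such a mnemonic maps onto the planet order by first letters (the check uses initials only, so it cannot tell Mercury from Mars; the function name is illustrative):

```python
PLANETS = ["Mercury", "Venus", "Earth", "Mars",
           "Jupiter", "Saturn", "Uranus", "Neptune"]

def matches_order(mnemonic: str, bodies=PLANETS) -> bool:
    """True if the mnemonic has one word per body and the initials line up."""
    words = mnemonic.split()
    return (len(words) == len(bodies) and
            all(w[0].upper() == b[0] for w, b in zip(words, bodies)))

print(matches_order("My Very Educated Mother Just Served Us Nachos"))  # True
```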
Thirteen planets and dwarf planets Since the National Geographic competition, two additional bodies were designated as dwarf planets, Makemake and Haumea, on July 11 and September 17, 2008, respectively. A 2015 New York Times article suggested some mnemonics, including "My Very Educated Mother Cannot Just Serve Us Nine Pizzas—Hundreds May Eat!" Longer mnemonics will be required in the future if more of the possible dwarf planets are recognized as such by the IAU. However, at some point enthusiasm for new mnemonics will wane as the number of dwarf planets exceeds the number that people will want to learn (it is estimated that there may be up to 200 dwarf planets). See also Lists of astronomical objects References Science mnemonics Mnemonic Solar System
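As a trivial demonstration of how such mnemonics encode planet order, the snippet below pairs the initials of one eight-planet phrase quoted above with the planet names; it is illustrative only and not drawn from any cited source.

```python
planets = ["Mercury", "Venus", "Earth", "Mars",
           "Jupiter", "Saturn", "Uranus", "Neptune"]
mnemonic = "My Very Educated Mother Just Served Us Nachos"

# Each word's initial letter matches the initial of the planet in that position,
# and the word order follows increasing sidereal period.
for word, planet in zip(mnemonic.split(), planets):
    assert word[0] == planet[0]
    print(f"{word:10s} -> {planet}")
```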
Planetary mnemonic
Astronomy
919
58,646,964
https://en.wikipedia.org/wiki/Extracorporeal%20life%20support
Extracorporeal life support (ECLS) is a set of extracorporeal modalities that can provide oxygenation, removal of carbon dioxide, and/or circulatory support, excluding cardiopulmonary bypass for cardiothoracic or vascular surgery. ECLS modalities include: Extracorporeal membrane oxygenation (ECMO) - for temporary support of patients with respiratory and/or cardiac failure. Extracorporeal carbon dioxide removal (ECCO2R) - for removal of CO2 only, without cardiac support. ECCO2R is used for patients with hypercapnic respiratory failure or patients with less severe forms of acute respiratory distress syndrome. References Intensive care medicine Medical equipment stubs
Extracorporeal life support
Biology
152
1,967,236
https://en.wikipedia.org/wiki/Antarctic%20Technology%20Offshore%20Lagoon%20Laboratory
The Antarctic Technology Offshore Lagoon Laboratory (ATOLL) was a floating oceanographic laboratory for in situ observation experiments. This facility also tested instruments and equipment for polar expeditions. The ATOLL hull was the largest fiberglass structure ever built at that time. It was in operation from 1982 to 1995. Structure and infrastructure The ATOLL was composed of three curved fiberglass elements, each long and having a draught of only . For towing, the elements could be assembled in a long S-shape; in operation, the elements would form a horseshoe shape surrounding a water surface. The facility provided ample space for twelve researchers and contained a lab, storage and supply facilities, a dormitory, a computer room, and a fireplace. The laboratory was installed and operated in the Baltic Sea (and the Bay of Kiel in particular) at the initiative and under the direction of Uwe Kils, at the Institute of Oceanography (Institut für Meereskunde) of the University of Kiel. The fiberglass hulls themselves were bought from Waki Zöllner's "Atoll" company. The onboard computer was a NeXT, and the first versions of a virtual microscope of Antarctic krill for interactive dives into their morphology and behavior were developed here, finding later mention in Science magazine. The lab was connected to the Internet via a radio link, and the first images of ocean critters on the internet came from this NeXT. The first ever in situ videos of Atlantic herring feeding on copepods were recorded from this lab. An underwater observation and experimentation room allowed direct observation and manipulation through large portholes. The technical equipment included an ultra-high-resolution scanning sonar that was used for locating schools of juvenile herring, for guiding an ROV, which was controlled via a cyberhelmet and glove, and for determining positions, distances, and speeds. Probes measured the water salinity, temperature, and oxygen levels. Special instruments could measure plankton-, particle-, and bubble-concentrations and their size distributions. Imaging equipment included low-light still and high-speed video cameras using shuttered laser-sheet or infrared LED illumination. An endoscope-system for non-invasive optical measurements called ecoSCOPE, which could also be mounted on an ROV, was developed and used to record the microscale dynamics and behavior of the highly evasive herring. Research Scientific investigations aboard the ATOLL concentrated on one of the most important food chain transitions: the linkages between the early life stages of herring (Clupea harengus) and their principal prey, the copepods. A major hypothesis of fisheries ecologists is that the microdistribution of prey, the microturbulence of the ocean, or the retention conditions are normally not suited to allow strong year classes of fish to develop. In most years more than 99% of herring larvae do not survive. Occasionally, however, physical and biotic conditions are favorable, larval survival is high, and large year-classes result. Research work at the ATOLL investigated the effects of small-scale dynamics on fish feeding and predator avoidance and their correlation to year-class strength. Research questions investigated by students during courses and their thesis work at the Laboratory included: What are the effects of the natural light gradient on predator-prey interactions? How can the predator best see the prey without being seen? 
How does the oscillating light regime created by the focussing of small waves influence camouflage and attack strategy? What are the influences of the different frequencies of microturbulences? How do such effects change at the moment when herring larvae join into schools? What role does the phenomenon of aggregation play? Do ocean physics create or alter organism-aggregations? Can the dynamics of aggregations affect ocean physics at the microscales? Are there effects of the surface waves? What are the distribution and dynamics of microbubbles caused by turbulences and gas-oversaturations? How can the organisms orient with respect to micro-gradients of the ocean physics? How do they survive in the direct vicinity of undulating anoxia and hypoxia? Why are eelpouts, sticklebacks and herrings so extremely successful in the Baltic while cod is not? What are the effects and functions of schooling for feeding and microscale-orientation? What is the behavior of fish in netcages, and how much food is lost from the cages? The ATOLL mainly served as a test bed for the development and field testing of equipment such as ROVs being developed for later use in Antarctic expeditions, e.g. for in situ imaging of transparent organisms of krill size under the ice. References External links ATOLL laboratory Kils, U.: The ATOLL Laboratory and other Instruments Developed at Kiel; U.S. GLOBEC NEWS Technology Forum Number 8: 6–9; 1995. Also available as a PDF file. Waki Zoellner Zweites Deutsches Fernsehen, ZDF, 1978, Drehscheibe: Waki Zöllner und sein Atoll vor Travemünde. Norddeutsches Fernsehen, NDR, 1978, Atoll vor Travemünde. Bayerisches Fernsehen, BR3, 1980, Samstags-Club: Maria Schell und Waki Zöllner. Zweites Deutsches Fernsehen, ZDF, 1990, Teleillustrierte: Waki Zöllner und sein Floatel Newspaper article Newspaper article2 Oceanography Science and technology in Antarctica
Antarctic Technology Offshore Lagoon Laboratory
Physics,Environmental_science
1,108
5,626,777
https://en.wikipedia.org/wiki/International%20Teledemocracy%20Centre
The International Teledemocracy Centre (ITC) was established at Edinburgh Napier University in 1999. The centre is dedicated to researching innovative E-democracy systems that will strengthen public understanding and participation in democratic decision making. The ITC has worked in a number of roles on E-participation and E-democracy initiatives and research projects with a wide range of partners, including parliaments, government departments and local authorities, NGOs, charities, youth groups, media, and technical and research organisations. One of its first projects, undertaken in partnership with BT Scotland, was the design of the E-Petitioner internet petitioning system. References External links ITC Home Page Edinburgh Napier University 1999 establishments in Scotland Educational institutions established in 1999 E-democracy
International Teledemocracy Centre
Technology
146
23,534,176
https://en.wikipedia.org/wiki/Wireless%20triangulation
Wireless triangulation is a method of determining the location of wireless nodes using IEEE 802.11 standards. It is normally implemented by measuring the received signal strength indication (RSSI) of the wireless signals. See also Location awareness Real-time locating standards Wireless local area network Wireless personal area network References Radio-frequency identification Radio navigation Tracking Wireless locating
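The article does not specify a particular algorithm, so the following is only a minimal sketch of one common approach: converting each RSSI reading to an approximate distance with a log-distance path-loss model, then performing a least-squares trilateration from access points at known coordinates. The reference power, path-loss exponent, access-point positions, and RSSI values are all assumed example numbers.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exponent=2.5):
    """Estimate distance (metres) from an RSSI reading using the
    log-distance path-loss model: RSSI = P_ref - 10 * n * log10(d)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def trilaterate(anchors, distances):
    """Least-squares (x, y) estimate from anchor coordinates and estimated
    distances to each anchor; at least three anchors are needed."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Linearise by subtracting the last anchor's circle equation from the others.
    A = 2 * (anchors[:-1] - anchors[-1])
    b = (d[-1] ** 2 - d[:-1] ** 2
         + np.sum(anchors[:-1] ** 2, axis=1) - np.sum(anchors[-1] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical readings from three access points at made-up positions (metres).
aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
rssi = [-55.0, -62.0, -60.0]
dists = [rssi_to_distance(r) for r in rssi]
print(trilaterate(aps, dists))
```

In practice the path-loss parameters vary strongly with the environment, so deployed systems usually calibrate them per site or use fingerprinting instead of a pure model-based conversion.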
Wireless triangulation
Technology,Engineering
62
618,076
https://en.wikipedia.org/wiki/Opticks
Opticks: or, A Treatise of the Reflexions, Refractions, Inflexions and Colours of Light is a collection of three books by Isaac Newton that was published in English in 1704 (a scholarly Latin translation appeared in 1706). The treatise analyzes the fundamental nature of light by means of the refraction of light with prisms and lenses, the diffraction of light by closely spaced sheets of glass, and the behaviour of color mixtures with spectral lights or pigment powders. Opticks was Newton's second major work on physical science and it is considered one of the three major works on optics during the Scientific Revolution (alongside Johannes Kepler's Astronomiae Pars Optica and Christiaan Huygens' Treatise on Light). Overview The publication of Opticks represented a major contribution to science, different from but in some ways rivalling the Principia, yet Isaac Newton's name did not appear on the cover page of the first edition. Opticks is largely a record of experiments and the deductions made from them, covering a wide range of topics in what was later to be known as physical optics. That is, this work is not a geometric discussion of catoptrics or dioptrics, the traditional subjects of reflection of light by mirrors of different shapes and the exploration of how light is "bent" as it passes from one medium, such as air, into another, such as water or glass. Rather, the Opticks is a study of the nature of light and colour and the various phenomena of diffraction, which Newton called the "inflexion" of light. Newton sets forth in full his experiments, first reported to the Royal Society of London in 1672, on dispersion, or the separation of light into a spectrum of its component colours. He demonstrates how the appearance of color arises from selective absorption, reflection, or transmission of the various component parts of the incident light. The major significance of Newton's work is that it overturned the dogma, attributed to Aristotle or Theophrastus and accepted by scholars in Newton's time, that "pure" light (such as the light attributed to the Sun) is fundamentally white or colourless, and is altered into color by mixture with darkness caused by interactions with matter. Newton showed the opposite was true: light is composed of different spectral hues (he describes seven – red, orange, yellow, green, blue, indigo and violet), and all colours, including white, are formed by various mixtures of these hues. He demonstrates that color arises from a physical property of light – each hue is refracted at a characteristic angle by a prism or lens – but he clearly states that color is a sensation within the mind and not an inherent property of material objects or of light itself. For example, he demonstrates that a red violet (magenta) color can be mixed by overlapping the red and violet ends of two spectra, although this color does not appear in the spectrum and therefore is not a "color of light". By connecting the red and violet ends of the spectrum, he organised all colours as a color circle that both quantitatively predicts color mixtures and qualitatively describes the perceived similarity among hues. Newton's contribution to prismatic dispersion was the first to outline multiple-prism arrays. Multiple-prism configurations, as beam expanders, became central to the design of the tunable laser more than 275 years later and set the stage for the development of the multiple-prism dispersion theory. Comparison to the Principia Opticks differs in many respects from the Principia. 
It was first published in English rather than in the Latin used by European philosophers, contributing to the development of a vernacular science literature. The books were a model of popular science exposition: although Newton's English is somewhat dated—he shows a fondness for lengthy sentences with many embedded qualifications—the book can still be easily understood by a modern reader. In contrast, few readers of Newton's time found the Principia accessible or even comprehensible. His formal but flexible style shows colloquialisms and metaphorical word choice. Unlike the Principia, Opticks is not developed using the geometric convention of propositions proved by deduction from either previous propositions, lemmas or first principles (or axioms). Instead, axioms define the meaning of technical terms or fundamental properties of matter and light, and the stated propositions are demonstrated by means of specific, carefully described experiments. The first sentence of Book I declares "My Design in this Book is not to explain the Properties of Light by Hypotheses, but to propose and prove them by Reason and Experiments." In an Experimentum crucis or "critical experiment" (Book I, Part II, Theorem ii), Newton showed that the color of light corresponded to its "degree of refrangibility" (angle of refraction), and that this angle cannot be changed by additional reflection or refraction or by passing the light through a coloured filter. The work is a vade mecum of the experimenter's art, displaying in many examples how to use observation to propose factual generalisations about the physical world and then exclude competing explanations by specific experimental tests. Unlike the Principia, which vowed Hypotheses non fingo or "I make no hypotheses" outside the deductive method, the Opticks develops conjectures about light that go beyond the experimental evidence: for example, that the physical behaviour of light was due to its "corpuscular" nature as small particles, or that perceived colours were harmonically proportioned like the tones of a diatonic musical scale. Queries Newton originally considered writing four books, but he dropped the last book on action at a distance. Instead he concluded Opticks with a set of unanswered questions and positive assertions referred to as queries in Book III. The first set of queries were brief, but the later ones became short essays, filling many pages. In the first edition, there were sixteen such queries; that number was increased to 23 in the Latin edition, published in 1706, and then in the revised English edition, published in 1717/18. In the fourth edition of 1730, there were 31 queries. These queries, especially the later ones, deal with a wide range of physical phenomena that go beyond the topic of optics. The queries concern the nature and transmission of heat; the possible cause of gravity; electrical phenomena; the nature of chemical action; the way in which God created matter; the proper way to do science; and even the ethical conduct of human beings. These queries are not really questions in the ordinary sense; they are almost all posed in the negative, as rhetorical questions. That is, Newton does not ask whether light "is" or "may be" a "body." Rather, he declares: "Is not Light a Body?" Stephen Hales, a firm Newtonian of the early eighteenth century, declared that this was Newton's way of explaining "by Quaere." 
The first query reads: "Do not Bodies act upon Light at a distance, and by their action bend its Rays; and is not this action (caeteris paribus) strongest at the least distance?", anticipating an effect of gravity on the trajectory of light rays. This query predates the prediction of gravitational lensing by Albert Einstein's general relativity by two centuries; the prediction was later confirmed by the Eddington experiment in 1919. The first part of query 30 reads "Are not gross Bodies and Light convertible into one another", thereby anticipating mass-energy equivalence. Query 6 of the book reads "Do not black Bodies conceive heat more easily from Light than those of other Colours do, by reason that the Light falling on them is not reflected outwards, but enters into the Bodies, and is often reflected and refracted within them, until it be stifled and lost?", thereby introducing the concept of a black body. The last query (number 31) wonders if a corpuscular theory could explain how different substances react more to certain substances than to others, in particular how aqua fortis (nitric acid) reacts more with calamine than with iron. This 31st query has often been linked to the origin of the concept of affinity in chemical reactions. Various 18th-century historians and chemists, like William Cullen and Torbern Bergman, credited Newton with the development of affinity tables. Reception The Opticks was widely read and debated in England and on the Continent. The early presentation of the work to the Royal Society stimulated a bitter dispute between Newton and Robert Hooke over the "corpuscular" or particle theory of light, which prompted Newton to postpone publication of the work until after Hooke's death in 1703. On the Continent, and in France in particular, both the Principia and the Opticks were initially rejected by many natural philosophers, who continued to defend Cartesian natural philosophy and the Aristotelian version of color, and claimed to find Newton's prism experiments difficult to replicate. Indeed, the Aristotelian theory of the fundamental nature of white light was defended into the 19th century, for example by the German writer Johann Wolfgang von Goethe in his 1810 Theory of Colours. Newtonian science became a central issue in the assault waged by the philosophes in the Age of Enlightenment against a natural philosophy based on the authority of ancient Greek or Roman naturalists or on deductive reasoning from first principles (the method advocated by French philosopher René Descartes), rather than on the application of mathematical reasoning to experience or experiment. Voltaire popularised Newtonian science, including the content of both the Principia and the Opticks, in his Elements de la philosophie de Newton (1738), and after about 1750 the combination of the experimental methods exemplified by the Opticks and the mathematical methods exemplified by the Principia was established as a unified and comprehensive model of Newtonian science. Some of the primary adepts in this new philosophy were such prominent figures as Benjamin Franklin, Antoine-Laurent Lavoisier, and Joseph Black. Subsequent to Newton, much has been amended. Thomas Young and Augustin-Jean Fresnel showed that the wave theory Christiaan Huygens described in his Treatise on Light (1690) could explain colour as the visible manifestation of light's wavelength. Science also slowly came to recognize the difference between perception of colour and mathematisable optics. 
The German poet Goethe, with his epic diatribe Theory of Colours, could not shake the Newtonian foundation – but "one hole Goethe did find in Newton's armour... Newton had committed himself to the doctrine that refraction without colour was impossible. He therefore thought that the object-glasses of telescopes must for ever remain imperfect, achromatism and refraction being incompatible. This inference was proved by Dollond to be wrong." (John Tyndall, 1880) See also Color theory Luminiferous aether Prism (optics) Theory of Colours Book of Optics (Ibn al-Haytham) Elements of the Philosophy of Newton (Voltaire) Multiple-prism dispersion theory Notes References External links Full and free online editions of Newton's Opticks Rarebookroom, First edition ETH-Bibliothek, First edition Gallica, First edition Internet Archive, Fourth edition Project Gutenberg digitized text & images of the Fourth Edition Cambridge University Digital Library, Papers on Hydrostatics, Optics, Sound and Heat – Manuscript papers by Isaac Newton containing draft of Opticks 1704 non-fiction books 1704 in science English non-fiction literature Books by Isaac Newton History of optics Mathematics books Physics books Treatises Light
Opticks
Physics
2,411
9,463,925
https://en.wikipedia.org/wiki/Wilson%E2%80%93Bappu%20effect
The Ca II K line in cool stars is among the strongest emission lines originating in the star's chromosphere. In 1957, Olin C. Wilson and M. K. Vainu Bappu reported on the remarkable correlation between the measured width of the aforementioned emission line and the absolute visual magnitude of the star. This is known as the Wilson–Bappu effect. The correlation is independent of spectral type and is applicable to main-sequence stars of types G and K and to red giants of type M. The wider the emission band, the intrinsically brighter the star, a relation that can be used empirically to estimate distance. The main interest of the Wilson–Bappu effect is in its use for determining the distance of stars too remote for direct measurements. It can be studied using nearby stars, for which independent distance measurements are possible, and it can be expressed in a simple analytical form. In other words, the Wilson–Bappu effect can be calibrated with stars within 100 parsecs from the Sun. The width of the emission core of the K line (W0) can be measured in distant stars, so, knowing W0 and the analytical form expressing the Wilson–Bappu effect, we can determine the absolute magnitude of a star. The distance of a star follows immediately from the knowledge of both absolute and apparent magnitude, provided that the interstellar reddening of the star is either negligible or well known. The first calibration of the Wilson–Bappu effect using distance from Hipparcos parallaxes was made in 1999 by Wallerstein et al. A later work also used W0 measurements on high-resolution spectra taken with CCD, but with a smaller sample. According to the latest calibration, the relation between absolute visual magnitude (Mv), expressed in magnitudes, and W0, expressed in km/s, is linear in the logarithm of W0. The data error, however, is quite large: about 0.5 mag, rendering the effect too imprecise to significantly improve the cosmic distance ladder. Another limitation comes from the fact that the measurement of W0 in distant stars is very challenging, requiring long observations at large telescopes. Sometimes the emission feature in the core of the K line is affected by interstellar extinction. In these cases an accurate measurement of W0 is not possible. The Wilson–Bappu effect is also valid for the Mg II k line. However, the Mg II k line is at 2796.34 Å in the ultraviolet, and since the radiation at this wavelength does not reach the Earth's surface it can only be observed with satellites such as the International Ultraviolet Explorer. In 1977, Stencel published a spectroscopic survey that showed that the wing emission features seen in the broad wings of the K line among higher-luminosity late-type stars share a correlation of line width and Mv similar to the Wilson–Bappu effect. References Astronomical spectroscopy
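The text notes that the distance follows immediately from the absolute and apparent magnitudes. A minimal sketch of that final step, using the standard distance modulus and ignoring extinction, is given below; the magnitudes are invented example values, not measurements from the cited calibrations.

```python
def distance_parsecs(apparent_mag, absolute_mag):
    """Distance from the distance modulus m - M = 5*log10(d) - 5,
    assuming negligible interstellar extinction."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Hypothetical star: absolute magnitude inferred from a Wilson-Bappu-style
# calibration of the K-line width, apparent magnitude measured directly.
print(distance_parsecs(apparent_mag=9.5, absolute_mag=2.0))  # ~316 pc
```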
Wilson–Bappu effect
Physics,Chemistry
591
73,886,820
https://en.wikipedia.org/wiki/Abaucin
Abaucin (RS-102895, MLJS-21001) is a compound which has been reported to show useful activity as a narrow-spectrum antibiotic. There is evidence that it is effective against Acinetobacter baumannii, which is one of the three superbugs identified by the World Health Organization as a "critical threat" to humanity. Notably, abaucin was developed with assistance from artificial intelligence by a team led by the MIT Jameel Clinic's faculty lead for life sciences, James J. Collins, and McMaster's Jonathan Stokes. Its mode of action involves inhibiting lipoprotein transport. The compound had previously been reported as an antagonist of the chemokine receptor CCR2, but its antibiotic activity was not discovered during earlier research. A New York Times opinion piece by Peter Coy for Thanksgiving listed abaucin among scientific discoveries to be thankful for in 2023. See also Halicin SCHEMBL19952957 References Antibiotics Piperidines Trifluoromethyl compounds Spiro compounds Phenethylamines Benzoxazines
Abaucin
Chemistry,Biology
233
2,949,850
https://en.wikipedia.org/wiki/Logical%20access%20control
In computers, logical access controls are tools and protocols used for identification, authentication, authorization, and accountability in computer information systems. Logical access is often needed for remote access of hardware and is often contrasted with the term "physical access", which refers to interactions (such as a lock and key) with hardware in the physical environment, where equipment is stored and used. Models Logical access controls enforce access control measures for systems, programs, processes, and information. The controls can be embedded within operating systems, applications, add-on security packages, or database and telecommunication management systems. The line between logical access and physical access can be blurred when physical access is controlled by software. For example, entry to a room may be controlled by a chip and PIN card and an electronic lock controlled by software. Only those in possession of an appropriate card, with an appropriate security level and with knowledge of the PIN, are permitted entry to the room on swiping the card through the card reader and entering the correct PIN code. Logical controls, also called logical access controls and technical controls, protect data and the systems, networks, and environments that protect them. In order to authenticate, authorize, or maintain accountability, a variety of methodologies are used, such as password protocols, devices coupled with protocols and software, encryption, firewalls, or other systems that can detect intruders and maintain security, reduce vulnerabilities and protect the data and systems from threats. Businesses, organizations and other entities use a wide spectrum of logical access controls to protect hardware from unauthorized remote access. These can include sophisticated password programs, advanced biometric security features, or any other setups that effectively identify and screen users at any administrative level. The particular logical access controls used in a given facility and hardware infrastructure partially depend on the nature of the entity that owns and administrates the hardware setup. Government logical access security is often different from business logical access security, where federal agencies may have specific guidelines for controlling logical access. Users may be required to hold security clearances or go through other screening procedures that complement secure password or biometric functions. This is all part of protecting the data kept on a specific hardware setup. Militaries and governments use logical access biometrics to protect their large and powerful networks and systems, which require very high levels of security. It is essential for the large networks of police forces and militaries where it is used not only to gain access but also in six main essential applications. Without logical access control security systems, highly confidential information would be at risk of exposure. There is a wide range of biometric security devices and software available for different levels of security needs. There are very large, complex biometric systems for large networks that require absolutely airtight security, and there are less expensive systems for use in office buildings and smaller institutions. Notes References Andress, Jason. (2011). ″The Basics of Information Security.″ Cory Janssen, Logical Access, Techopedia, retrieved at 3:15 a.m. on August 12, 2014 findBIOMETRICS, Logical Access Control Biometrics, retrieved at 3:25 a.m. 
on August 12, 2014 External links RSA Intelligence Driven Security, EMC Corporation Computer access control
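As a toy illustration of the identification, authentication, and authorization steps named above (not a description of any particular product or standard), the sketch below checks a password and then a role-based permission. The user names, roles, and permissions are invented, and the password handling is deliberately simplified; a real system would add salted hashing, lockouts, and audit logging for accountability.

```python
import hashlib

# Invented example data: in practice this would live in a directory service
# or database, and passwords would be stored with a proper salted hash.
USERS = {
    "alice": {"pw_sha256": hashlib.sha256(b"correct horse").hexdigest(),
              "roles": {"analyst"}},
}
ROLE_PERMISSIONS = {
    "analyst": {"read_reports"},
    "admin": {"read_reports", "modify_reports"},
}

def authenticate(username, password):
    """Identification + authentication: verify the claimed identity."""
    user = USERS.get(username)
    return (user is not None and
            hashlib.sha256(password.encode()).hexdigest() == user["pw_sha256"])

def authorize(username, permission):
    """Authorization: check whether the user's roles grant the requested action."""
    roles = USERS.get(username, {}).get("roles", set())
    return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in roles)

if authenticate("alice", "correct horse") and authorize("alice", "read_reports"):
    print("access granted")   # an accountability layer would also log this event
else:
    print("access denied")
```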
Logical access control
Engineering
651
19,101,296
https://en.wikipedia.org/wiki/Project%20Iceworm
Project Iceworm was a top secret United States Army program of the Cold War, which aimed to build a network of mobile nuclear missile launch sites under the Greenland ice sheet. The goal was to install a vast network of nuclear missile launch sites that could survive a first strike. This was according to documents declassified in 1996. The missiles, which could strike targets within the Soviet Union, were never fielded and necessary consent from the Danish Government to do so was never obtained. To study the feasibility of working under the ice, a highly publicized "cover" project, known as Camp Century, was launched in 1959. Unstable ice conditions within the ice sheet caused the project to be canceled in 1966. Political background Details of the missile base project were secret for decades, but first came to light in January 1995 during an enquiry by the Danish Foreign Policy Institute (DUPI) into the history of the use and storage of nuclear weapons in Greenland. The enquiry was ordered by the Parliament of Denmark following the release of previously classified information about the 1968 Thule Air Base B-52 crash that contradicted previous assertions by the Government of Denmark. Description To test the feasibility of construction techniques a project site called Camp Century was started by the United States military in 1959, located at an elevation of in Northwestern Greenland, from the American Thule Air Base. The radar and air base at Thule had been active since 1951. Camp Century was described at the time as a demonstration of affordable ice-cap military outposts. The secret Project Iceworm was to be a system of tunnels in length, used to deploy up to 600 nuclear missiles, that would be able to reach the Soviet Union in case of nuclear war. The missile locations would be under the cover of Greenland's ice sheet and were supposed to be periodically changed. While Project Iceworm was secret, plans for Camp Century were discussed with and approved by Denmark. The facility, including its nuclear power plant, was profiled in The Saturday Evening Post magazine in 1960. The "official purpose" of Camp Century, as explained by the United States Department of Defense to Danish officials in 1960, was to test various construction techniques under Arctic conditions, explore practical problems with a semi-mobile nuclear reactor, as well as supporting scientific experiments on the icecap. A total of 21 trenches were cut and covered with arched roofs within which prefabricated buildings were erected. With a total length of , these tunnels also contained a hospital, a shop, a theater and a church. The total number of inhabitants was approximately 200. From 1960 until 1963, the electric supply was provided by the world's first mobile/portable nuclear reactor, designated PM-2A and designed by Alco for the U.S. Army. Water was supplied by Rod wells melting glaciers, and tested for germs such as the plague. Within three years after it was excavated, ice core samples taken by geologists working at Camp Century demonstrated that the glacier was moving much faster than anticipated and would destroy the tunnels and launch stations in about two years. The facility was evacuated in 1965, and the nuclear generator removed. Project Iceworm was canceled, and Camp Century closed in 1966. The project generated valuable scientific information and provided scientists with some of the first ice cores, still being used by climatologists as of 2005. 
Size of proposed missile complex According to the documents published by Denmark in 1997, the U.S. Army's "Iceworm" missile network was outlined in a 1960 Army report titled "Strategic Value of the Greenland Icecap". If fully implemented, the project would cover an area of , roughly three times the size of Denmark. The launch complex floors would be below the surface, with the missile launchers even deeper. Clusters of missile launch centers would be spaced apart. New tunnels were to be dug every year, so that after five years there would be thousands of firing positions, among which the several hundred missiles could be rotated. The US Army intended to deploy a shortened, two-stage version of the U.S. Air Force's Minuteman missile, a variant the Army proposed calling the Iceman. Sheet ice elasticity Although the Greenland icecap appears, on its surface, to be hard and immobile, snow and ice are viscoelastic materials, which slowly deform over time, depending on temperature and density. Despite its seeming stability, the icecap is in constant, slow movement, spreading outward from the center. This spreading movement, over a year, causes tunnels and trenches to narrow, as their walls deform and bulge, eventually leading to a collapse of the ceiling. By mid-1962 the ceiling of the reactor room within Camp Century had dropped and had to be lifted . During a planned reactor shutdown for maintenance in late July 1963, the Army decided to operate Camp Century as a summer-only camp. It did not reactivate the PM-2A reactor. The camp resumed operations in 1964 using its standby diesel power plant, the portable reactor was removed that summer, and the camp was abandoned in 1966. Ecological impact When the camp was decommissioned in 1967, its infrastructure and waste were abandoned under the assumption they would be entombed forever by perpetual snowfall. A 2016 study found that the portion of the ice sheet covering Camp Century will start to melt by 2100, if current trends continue. When the ice melts, the camp's infrastructure, as well as remaining biological, chemical and radioactive waste, will re-enter the environment and potentially disrupt nearby ecosystems. This includes 200,000 liters of diesel, PCBs and radioactive waste. See also Camp Century Camp Fistclench Camp TUTO References Sources Camp Century and its PM-2A reactor covered by Suid in "Chapter 5: The Nuclear Power in Full Bloom", pp. 57–80. (online) External links The Big Picture: Camp Century Camp Century, Greenland, Frank J. Leskovitz (including good pictures and diagrams) U.S. Military Buildup of Thule, Woods Hole Oceanographic Institution Camp Century, thuleab.dk Atomic Insights Nov 1995 Comments on army film. Glaciological Studies in the Vicinity of Camp Century, Greenland Documentary film on YouTube Cold War Cold War fortifications Former installations of the United States Army Military in the Arctic Nuclear weapons program of the United States 1958 in military history Military installations of the United States in Greenland Glaciology
Project Iceworm
Engineering
1,294
3,116,487
https://en.wikipedia.org/wiki/Gamma%20Ceti
Gamma Ceti (γ Ceti, abbreviated Gamma Cet, γ Cet) is a triple star system in the equatorial constellation of Cetus. It has a combined apparent visual magnitude of 3.47. Based upon parallax measurements, this star is located at a distance of about 80 light-years (24.4 parsecs) from the Sun. The three components are designated Gamma Ceti A (officially named Kaffaljidhma , the traditional name for the entire system), B and C. Nomenclature γ Ceti (Latinised to Gamma Ceti) is the system's Bayer designation. The designations of the three components as Gamma Ceti A, B and C derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU). The close pair AB is also designated HIP 12706, HD 16970, and HR 804. The system of A, B, and C is collectively designated GJ 106.1 in the Gliese Catalogue of Nearby Stars. Gamma Ceti bore the traditional names of or , derived from ('the cut-short hand'). According to a 1971 NASA memorandum, was originally the title for five stars: Gamma Ceti as , Xi1 Ceti as , Xi2 Ceti as , Delta Ceti as and Mu Ceti as (excluding Alpha Ceti and Lambda Ceti). The IAU Working Group on Star Names (WGSN) approved the name Kaffaljidhma for the component Gamma Ceti A on February 1, 2017. In Chinese astronomy, , meaning 'Circular Celestial Granary', refers to an asterism consisting of Gamma Ceti, Alpha Ceti, Kappa1 Ceti, Lambda Ceti, Mu Ceti, Xi1 Ceti, Xi2 Ceti, Nu Ceti, Delta Ceti, 75 Ceti, 70 Ceti, 63 Ceti and 66 Ceti. Consequently, the Chinese name for Gamma Ceti itself is ('the Eighth Star of Circular Celestial Granary'). Triple system Gamma Ceti appears to be a triple star system. The inner pair (A and B) have an angular separation of 2.6 arcseconds. The primary component of this pair (A) is an A-type main sequence star with a stellar classification of A3 V and a visual magnitude of 3.6. The fainter secondary component (B) is an F-type main sequence star that has a classification of F3 V and a magnitude of 6.6. The contrasting colors of these two stars makes them a popular target of amateur astronomers. The two can be resolved with a small, aperture telescope under ideal seeing conditions, although at times they can be a challenge to resolve even with a much larger scope. At a wide separation of 840 arcseconds is component C, a dim, magnitude 10.2 K-type star with the designation . It shares a common proper motion with A and is at a very similar distance, but is separated from the close pair by over . It has a spectral classification of K5V. There are several other stars brighter and closer to Gamma Ceti than – , HD 16985, and – but they are all more distant background stars. Properties The measured angular diameter of the primary star is . At the estimated distance of this system, this yields a physical size of about 1.9 times the radius of the Sun. The secondary component of this system is an X-ray source with a luminosity of . Gamma Ceti is about 300 million years old, and it appears to be a member of the stream of stars loosely associated with the Ursa Major Moving Group. The primary has been examined for an excess of infrared emission that would suggest the presence of circumstellar matter, but none was found. References External links Information about Kaffaljidhma on STARS Gamma Ceti on AAS WorldWide Telescope Cetus Ceti, Gamma Kaffaljidhma Ceti, 86 A-type main-sequence stars F-type main-sequence stars K-type main-sequence stars 012706 016970 0804 Durchmusterung objects
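The step from a measured angular diameter and a distance to a physical radius, mentioned above for the primary, is a small-angle calculation. The sketch below illustrates it; the 0.72-milliarcsecond angular diameter is an assumed, illustrative value chosen so that the article's distance of about 24.4 parsecs yields roughly the quoted 1.9 solar radii.

```python
import math

PARSEC_M = 3.0857e16        # metres per parsec
SUN_RADIUS_M = 6.957e8      # solar radius in metres
MAS_TO_RAD = math.radians(1.0 / 3600.0) / 1000.0   # milliarcseconds -> radians

def physical_radius_in_suns(angular_diameter_mas, distance_pc):
    """Physical radius implied by an angular diameter and a distance,
    using the small-angle relation R = (theta_rad * d) / 2."""
    theta_rad = angular_diameter_mas * MAS_TO_RAD
    radius_m = theta_rad * distance_pc * PARSEC_M / 2.0
    return radius_m / SUN_RADIUS_M

# Illustrative values only: ~0.72 mas at 24.4 pc gives roughly 1.9 solar radii.
print(round(physical_radius_in_suns(0.72, 24.4), 2))
```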
Gamma Ceti
Astronomy
867
2,254,890
https://en.wikipedia.org/wiki/Hypervariable
Hypervariable may refer to: Hypervariable sequence, a segment of a chromosome characterised by considerable variation in the number of tandem repeats at one or more loci Hypervariable locus, a locus with many alleles; especially those whose variation is due to variable numbers of tandem repeats Hypervariable region (HVR), a chromosomal segment characterized by multiple alleles within a population for a single genetic locus Genetics
Hypervariable
Biology
87
10,397,972
https://en.wikipedia.org/wiki/NGC%202535
NGC 2535 is an unbarred spiral galaxy in the constellation Cancer. It was discovered on 22 January 1877 by French astronomer Édouard Stephan. NGC 2535 exhibits a weak inner ring structure around the nucleus and is interacting with NGC 2536. The interaction has warped the disk and spiral arms of NGC 2535, producing an elongated structure, visible at ultraviolet wavelengths, that contains many bright, recently formed blue star clusters in addition to enhanced star-forming regions around the galaxy center. The two galaxies are listed together as Arp 82 in the Atlas of Peculiar Galaxies as an example of a spiral galaxy with a high surface brightness companion. One supernova has been observed in NGC 2535. SN 1901A (type unknown, mag. 14.7) was seen by Karl Reinmuth on a photographic plate taken on 10 January 1901, although the discovery was not made until 28 September 1923. See also NGC 2536 List of NGC objects (2001–3000) References External links Spitzer Space Telescope page on NGC 2535 Unbarred spiral galaxies Peculiar galaxies Interacting galaxies Cancer (constellation) 2535 04264 022957 082 Astronomical objects discovered in 1877 Discoveries by Édouard Stephan
NGC 2535
Astronomy
238
8,027,117
https://en.wikipedia.org/wiki/30th%20century%20BC%20in%20architecture
Buildings and structures Buildings Alvastra pile-dwelling – circa 3000 BC in Neolithic Scandinavia Barbar Temple – oldest of the three temples built in 3000 BC, in present-day Bahrain Diraz Temple – circa 3rd millennium BCE, in present-day Bahrain Tepe Sialk – claimed to be the world's oldest ziggurat built in 3000 BC, in present-day Iran See also 29th century BC 29th century BC in architecture Timeline of architecture References BC Architecture
30th century BC in architecture
Engineering
96
65,780
https://en.wikipedia.org/wiki/Vladimir%20Komarov
Vladimir Mikhaylovich Komarov (, ; 16 March 1927 – 24 April 1967) was a Soviet test pilot, aerospace engineer, and cosmonaut. In October 1964, he commanded Voskhod 1, the first spaceflight to carry more than one crew member. He became the first Soviet cosmonaut to fly in space twice when he was selected as the solo pilot of Soyuz 1, its first crewed test flight. A parachute failure caused his Soyuz capsule to crash into the ground after re-entry on 24 April 1967, making him the first human to die in a space flight. He was declared medically unfit for training or spaceflight twice while in the program but continued playing an active role. During his time at the cosmonaut training center, he contributed to space vehicle design, cosmonaut training, evaluation and public relations. Early life Komarov was born on 16 March 1927 in Moscow and grew up with his half-sister Matilda (born in 1915). His father was a labourer who worked at various low-paid jobs to support the family. In 1935, Komarov began his formal education in the local elementary school. Here he showed a natural aptitude for mathematics. In 1941, Komarov left school because of World War II and the German invasion of the Soviet Union, and he became a laborer on a collective farm. He showed an interest in aeronautics from an early age, and he collected magazines and pictures about aviation, in addition to making model aircraft and his own propeller. At the age of fifteen in 1942, Komarov entered the "1st Moscow Special Air Force School" to pursue his dream of becoming an aviator. Shortly thereafter, his family learned that Komarov's father had been killed in an "unknown war action". Of necessity because of the German invasion, the flight school was soon moved to the Tyumen region in Siberia for the duration of the war. Students there learned a wide variety of subjects besides aviation—including zoology and foreign languages. In 1945, Komarov graduated from flight school with honors. World War II hostilities ended before Komarov was called on to enter combat. In 1946, Komarov completed his first year of training at the Chkalov Higher Air Force School in Borisoglebsk in Voronezh Oblast. He then completed his training at the A.K. Serov Military Aviation College in Bataisk. Komarov's mother died in 1948, seven months before his graduation in 1949, at which he received his pilot's wings and commission as a lieutenant in the Soviet Air Force. Career in the Soviet Air Force In December 1949, Komarov served as the pilot of a fighter plane with the 383rd Regiment of the 42nd North Caucasian Fighter Air Division that was based in Grozny. Komarov married Valentina Yakovlevna Kiselyova in October 1950. He was promoted to senior lieutenant in 1952, and he was later assigned as the chief pilot of the 486th Fighter Aviation Regiment of the 279th Fighter Air Division in the Prikarpate Region. Komarov continued to fly in that position until 1954, and then he enrolled in an engineering course at the Zhukovsky Air Force Engineering Academy. In 1959, Komarov was promoted to the rank of senior engineer-lieutenant. Later that year he achieved his goal of becoming a test pilot at the Central Scientific Research Institute at Chkalovsky. Cosmonaut selection Air Force Group One In September 1959, Komarov was promoted to engineer-captain and invited to participate in the selection process for cosmonaut candidate along with approximately 3,000 other pilots. 
He was one of twenty candidates selected for "Air Force Group One"; he and the others reported to the newly formed TsPK (Yuri Gagarin Cosmonaut Training Center) just outside Moscow for assignment on 13 March 1960. Although eminently qualified, Komarov was not chosen in the top six candidates, because he did not meet the age, height, and weight restrictions specified by the Chief Designer of Russia's space program, Sergei Korolev. "If the criteria had been different," the cosmonaut trainer Mark Gallai noted in an interview, "Certainly Komarov, who was very intelligent, would have been in the group. He had Air Force Academy flight experience. He greatly influenced the design of the 'Vostok' and [the] 'Voskhod'." At age 32, Komarov was the second oldest of the pilots chosen; Korolev had specified a maximum age of 27. Only two members of the first group, Pavel Belyayev (Voskhod 2) and Komarov himself, were also graduates of the Soviet Air Force Academy. In addition, only Komarov had experience as a flight test engineer on new aircraft. Training Shortly after beginning his training Komarov was hospitalised for a minor operation in May 1960, which left him medically unfit for physical training for approximately six months. At the time, the selection criteria placed a heavy emphasis on the physical condition of cosmonauts and any imperfection led to instant disqualification. Since Komarov already held engineering qualifications, he was allowed to remain in the program after assuring the administration he would be able to catch up. He continued with the required academic studies while recovering. He returned to training in October, because his recovery was more rapid than medical staff had expected. During that time he assisted his younger peers with their academic studies; earning him the casual nickname of "The Professor," which he shared with Belyayev, who was two years his senior. In 1961 the first space flights began. By 1962, Komarov was the third-highest-paid cosmonaut, due to his qualifications, rank and experience. He earned 528 rubles a month, with only cosmonauts 1 and 2, Yuri Gagarin and Gherman Titov, being more highly paid. When Georgi Shonin demonstrated an unacceptable level of g-force susceptibility in the centrifuge, Komarov replaced him in May 1962 for planned dual Vostok missions. Komarov was selected as back up for Pavel Popovich (Vostok 4), but subsequent routine ECG testing of Komarov revealed a heart irregularity and he was pulled from the program and replaced by Boris Volynov. The same heart irregularity grounded American astronaut Deke Slayton. After Komarov persistently lobbied medical and military personnel for re-admittance to the program, they allowed him to return to training. In 1963, cosmonaut training was conducted in six Groups, with Komarov being selected in Group 2 with Valery Bykovsky and Volynov. This group was to train for missions of up to five days in duration scheduled for the latter part of 1963. In May 1963, Alekseyev proposed to General Kamanin that Komarov be named backup for Vostok 5 rather than Khrunov because his suit was ready. Komarov was later named in a further group for planned missions in 1964 with Belyaev, Shonin, Khrunov, Zaikin, Gorbatko, Volynov, and Leonov. The training groups were formed for later Vostok missions (Vostok 7–13), but no actual crews were assigned and the missions did not occur under the auspices of the original Vostok program. 
In December 1963, Komarov was shortlisted for flight by Kamanin with Volynov and Leonov, having completed two years of training. In April 1964 Komarov was declared space-flight ready with Bykovsky, Popovich, Titov, Volynov, Leonov, Khrunov, Belyayev, and Lev Demin. From this group the commander of the planned Voskhod mission scheduled for late 1964 would be chosen. In May the group was reduced to Volynov, Komarov, Leonov and Khrunov. During training, Komarov lived at the TsPK with his wife Valentina and their two children Yevgeny and Irina. There, he enjoyed hunting, cross country skiing, ice hockey, and other social activities with his fellow trainees in their leisure time. Komarov was well liked by his peers, who referred to him as Volodya (a diminutive of his first name). Pavel Popovich noted that Komarov was respected for his humility and experience: "he was already an engineer when he joined us, but he never looked down on the others. He was warm-hearted, purposeful and industrious. Volodya's prestige was so high that people came to him to discuss all questions: personal as well as questions of our work." Fellow cosmonaut Alexei Leonov described him as "very serious. He was a first-class test pilot." Spaceflights Voskhod 1 By July 1964, only seven cosmonauts remained eligible for the Voskhod crew after some were disqualified on medical grounds. On 6 July, Komarov was named as the commander of the back-up crew for Voskhod 1. After several months of much heated debate between Kamanin and Korolev over the selection of the crew, Komarov was named as prime crew commander by the State Commission on 4 October 1964, just eight days before its scheduled launch. Kamanin played tennis with the Voskhod crew that evening and noted that Komarov played poorly in comparison to his crew: Boris Yegorov and Konstantin Feoktistov. On 9 October, Komarov and the crew inspected the Voskhod with Korolev and other members of the administration. Later that day they were interviewed by the state press and played tennis for the benefit of photographers. On the morning of 11 October, Komarov was given various communist relics to take with him into space the following day. In the afternoon the crew again inspected the capsule and were given their final instructions by Korolev. Komarov was the only member of the crew to have undertaken extensive training and was the only member with any flight experience; the two other crewmen being civilians. His call sign was "Ruby" (Russian: Рубин). During the mission Komarov performed various tasks with the other crew members, including medical and navigational tests and observing the Aurora Borealis. Komarov alone carried out tests with ion thrusters that had been attached to the Voskhod. He also made a number of radio transmissions, including a greeting to the Tokyo Olympics, which had opened on 10 October. The mission lasted just over twenty-four hours. After the crew landed safely they were flown back to the launch site at Tyuratam (also known as Baikonur to disguise its true location). Kamanin noted in his diary that while his crew were in good spirits, Komarov was fatigued. On 19 October, Komarov and his crew made reports in Red Square and attended an audience at the Kremlin. After the success of this short but scientifically important mission he was promoted to colonel. The success of the mission earned Komarov the awards of the Order of Lenin and Hero of the Soviet Union. 
In December 1964, the RVSN (Strategic Rocket Forces) requested that Komarov be transferred from the VVS (Soviet Air Force) to the RVSN, in a move possibly motivated by the poor record of the RVSN in producing successful rockets compared to the VVS. The request was opposed by Kamanin. In 1965, Komarov worked with Gagarin in supervising preparations for the flight of Voskhod 2, which carried out the first attempt of an extravehicular activity in outer space. These preparations included fitting of space suits on the cosmonauts and briefings for the spaceflight. In April of that year, Komarov toured Leningrad with Kamanin, Gagarin, Titov, Belyayev, and Leonov. Komarov also visited Petropavlovsk Fortress with Valentin Glushko where Glushko had conducted early rocket experiments in the early 1930s. In September that year, Komarov toured West Germany. Soyuz 1 Komarov was assigned to the Soviet Soyuz program along with Gagarin and Leonov. In July 1966, Komarov was reprimanded by Kamanin for his unauthorised disclosure, while in Japan, that "the Soviet Union will, at the scheduled time, fly an automated spacecraft around the Moon and return it to (the) Earth, to be followed by a dog flight, then a manned circumlunar flight." The following month Komarov clashed with other engineers over ongoing design problems in which zero-G tests showed that the Soyuz module hatch was too small to allow the safe exit of a fully suited cosmonaut. Meanwhile, Komarov and his fellow cosmonauts had their groups and assignments constantly revised, and they became increasingly anxious about the lack of response to their concerns about the design and manufacture of the spacecraft, which Gagarin had raised in a letter on their behalf to Leonid Brezhnev. Komarov was selected to command the Soyuz 1, in 1967, with Gagarin as his backup cosmonaut. During the preparations for the spaceflight, both cosmonauts were working twelve- to fourteen-hour days. On orbital insertion, the solar panels of the Soyuz module failed to fully deploy thereby preventing the craft from being fully powered and obscuring some of the navigation equipment. Komarov reported: "Conditions are poor. The cabin parameters are normal, but the left solar panel didn't deploy. The electrical bus is at only 13 to 14 amperes. The HF (high frequency) communications are not working. I cannot orient the spacecraft to the sun. I tried orienting the spacecraft manually using the DO-1 orientation engines, but the pressure remaining on the DO-1 has gone down to 180." Komarov tried unsuccessfully to orient the Soyuz module for five hours. The craft was transmitting unreliable status information, and lost communications on orbits 13 through 15 due to the failure of the high frequency transmitter that should have maintained radio contact while the craft was out of range of the ultra high frequency (UHF) ground receivers. As a result of the problems with the craft, the Soviets did not launch the second Soyuz module, from which cosmonauts were to perform an extra-vehicular activity (EVA) to the Soyuz 1, and cut the mission short. Komarov was ordered to re-orient the craft using the ion flow sensors on orbits 15 to 17. The ion sensors failed. Komarov did not have enough time to attempt a manual re-entry until orbit 19. Manual orientation relied on using the equipped Vzor periscope device, but to do this, Komarov had to be able to see the Sun. To reach the designated landing site at Orsk, the retro-fire had to take place on the night side of the Earth. 
Komarov oriented the spacecraft manually on the dayside then used the gyro-platform as a reference so that he could orient the craft for a night side retro-fire. He successfully re-entered the Earth's atmosphere on his 19th orbit, but the module's drogue and main braking parachute failed to deploy correctly. The module crashed into the ground, killing Komarov, at 6.24 a.m. Response to Komarov's death In his diary, Kamanin recorded that the Soyuz 1 capsule crashed into the ground at and that the remains of Komarov's body were an irregular lump in diameter and long. Three hours after the capsule's crash, Keldysh, Tyulin, Rudenko, and other State Commission members visited the site. At 21:45 Kamanin accompanied Komarov's remains to the Orsk aerodrome, where they were loaded on an Il-18. Ten minutes before departure an An-12 landed with Kuznetsov and several cosmonauts. Kamanin's aircraft arrived in Moscow in the early hours of the next morning. The aircraft had to divert to Sheremetyevo since all the other airfields around Moscow were closed to takeoffs or landings due to weather. Konstantin Vershinin's orders were that Komarov's remains were to be photographed, then immediately cremated so that a state burial in the Kremlin wall could take place. The remains underwent a quick autopsy that morning, then were cremated. On 25 April, a response to Komarov's death by his fellow cosmonauts was published in Pravda: "For the forerunners it is always more difficult. They tread the unknown paths and these paths are not straight, they have sharp turns, surprises and dangers. But anyone who takes the pathway into orbit never wants to leave it. And no matter what difficulties or obstacles there are, they are never strong enough to deflect such a man from his chosen path. While his heart beats in his chest, a cosmonaut will always continue to challenge the universe. Vladimir Komarov was one of the first on this treacherous path." When interviewed on 17 May by the newspaper Komsomolskaya Pravda, Gagarin alluded to the failure of the administration to listen to the concerns about the Soyuz module that the cosmonaut corps had identified, and maintained that Komarov's death should teach the establishment to be more rigorous in its testing and evaluation of "all the mechanisms of the spaceship, even more attentive to all stages of checking and testing, even more vigilant in our encounter with the unknown. He has shown us how dangerous the pathway to space is. His flight and his death will teach us courage." In May 1967, Gagarin and Leonov criticised program head Vasily Mishin's "poor knowledge of the Soyuz spacecraft and the details of its operation, his lack of cooperation in working with the cosmonauts in flight and training activities," and asked Kamanin to cite him in the official crash report. Honours and awards Gold Star Hero of the Soviet Union, twice (19 October 1964, 1967 (posthumously)) Order of Lenin (19 October 1964, 1967 (posthumously)) Order of the Red Star (1961) Medal "For Combat Merit" (1956) Medal "For the Development of Virgin Lands" (1964) Pilot-Cosmonaut of the USSR Hero of Socialist Labour (North Vietnam, 1964) Posthumous honours On 26 April 1967, Komarov was given a state funeral in Moscow, and his ashes were interred in the Kremlin Wall Necropolis at Red Square. The American astronauts requested the Soviet government to allow a representative to attend, but were turned down. Komarov was posthumously awarded his second Order of Lenin and also Hero of the Soviet Union. 
On 25 April 1968, a memorial service was held for Komarov at the crash site near Orsk . Kamanin noted in his diary that over 10,000 people were present at this service, "some driving hundreds of kilometres for the event." Komarov has been featured on commemorative First Day Covers and stamps for his contribution to the space program—from several different countries. Komarov is commemorated with other prominent figures from the early Russian space program with a bust on Cosmonauts Alley in Moscow, and he is also honored with a monument at the crash site near Orsk. Before leaving the Moon on Apollo 11's Lunar Module, Neil Armstrong's final task was to place a small package of memorial items to honor Soviet cosmonauts Komarov, Yuri Gagarin, and the Apollo 1 astronauts Gus Grissom, Ed White, and Roger Chaffee. Komarov's name also appears on a commemorative plaque left at Hadley Rille on the Moon by the commander of Apollo 15, David Scott in memory of 14 deceased NASA astronauts and USSR cosmonauts, along with a small sculpture entitled Fallen Astronaut, on 1 August 1971. This plaque and the sculpture represent those astronauts and cosmonauts who died in the quest to reach outer space and the Moon. The asteroid 1836 Komarov, discovered in 1971, was named in the honor of Komarov, as was a crater on the Moon. This asteroid and the cosmonaut inspired the composer Brett Dean to write a piece of symphonic music commissioned by conductor Simon Rattle in 2006. The composition is named Komarov's Fall, and it can be found on the EMI Classics Album of Simon Rattle's The Planets. The Fédération Aéronautique Internationale's V.M. Komarov Diploma is named in Komarov's honor. There was formerly a Soviet satellite-tracking ship named for Komarov, the Kosmonaut Vladimir Komarov. In popular culture Vladimir Komarov is a character in the French science fiction series Missions. The crew of a mission to Mars find Komarov decades after he was believed to have died. See also Apollo 1 Re-entry accidents Soyuz 11 Space Shuttle Columbia disaster Notes References Nikolai Kamanin's personal diaries, from 1960 to 1971. A summary and English translation by Mark Wade is available online at Kamanin Diaries, in the Encyclopedia Astronautica. A.I. Ostashev, Sergey Pavlovich Korolyov – The Genius of the 20th Century – 2010 M. of Public Educational Institution of Higher Professional Training MGUL . "S. P. Korolev. Encyclopedia of life and creativity" – edited by C. A. Lopota, RSC Energia. S. P. Korolev, 2014 External links Analysis of Voskhod Mission and in flight voice recordings of Komarov compiled by Sven Grahn Analysis of Soyuz 1 Mission and in flight voice recordings of Komarov compiled by Sven Grahn ARK Vladimir M. Komarov Komarov – detailed biography at Encyclopedia Astronautica BBC "On this day" 1967: Russian cosmonaut dies in space crash Zarya – site dedicated to early Soviet Missions, including Voskhod The official website of the city administration Baikonur – Honorary citizens of Baikonur "Death of a Cosmonaut", BBC Radio 4 drama. 19.04.2017 "Cosmonaut Crashed Into Earth 'Crying In Rage'" – NPR article about the crash (2011) 1927 births 1967 deaths 1964 in spaceflight 1967 in spaceflight Accidental deaths in the Soviet Union Heroes of the Soviet Union Recipients of the Order of Lenin Recipients of the Order of the Red Star Burials at the Kremlin Wall Necropolis Cosmonauts from Moscow 20th-century Russian explorers Soviet cosmonauts Soviet engineers Space program fatalities Soviet test pilots Voskhod program cosmonauts
Vladimir Komarov
Engineering
4,704
4,810,690
https://en.wikipedia.org/wiki/Fumed%20silica
Fumed silica (CAS number 7631-86-9, also 112945-52-5), also known as pyrogenic silica because it is produced in a flame, consists of microscopic droplets of amorphous silica fused into branched, chainlike, three-dimensional secondary particles which then agglomerate into tertiary particles. The resulting powder has an extremely low bulk density and high surface area. Its three-dimensional structure results in viscosity-increasing, thixotropic behavior when used as a thickener or reinforcing filler. Properties Fumed silica has a very strong thickening effect. Primary particle size is 5–50 nm. The particles are non-porous and have a surface area of 50–600 m2/g. The density is 160–190 kg/m3. Production Fumed silica is made from flame pyrolysis of silicon tetrachloride or from quartz sand vaporized in a 3000 °C electric arc. Major global producers are Evonik (who sells it under the name Aerosil), Cabot Corporation (Cab-O-Sil), Wacker Chemie (HDK), Dow Corning, Heraeus (Zandosil), Tokuyama Corporation (Reolosil), OCI (Konasil), Orisil (Orisil) and Xunyuchem(XYSIL). Applications Fumed silica serves as a universal thickening agent and an anticaking agent (free-flow agent) in powders. Like silica gel, it serves as a desiccant. It is used in cosmetics for its light-diffusing properties. It is used as a light abrasive, in products like toothpaste. Other uses include filler in silicone elastomer and viscosity adjustment in paints, coatings, printing inks, adhesives and unsaturated polyester resins. Fumed silica readily forms a network structure within bitumen and enhances its elasticity. Health issues Fumed silica is not listed as a carcinogen by OSHA, IARC, or NTP. Due to its fineness and thinness, fumed silica can easily become airborne, making it an inhalation hazard capable of causing irritation. See also Hydrophobic silica Precipitated silica Nanoparticles References Ceramic materials Glass types Edible thickening agents Silicon dioxide
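The quoted primary particle sizes and surface areas can be cross-checked with a simple geometric estimate. A minimal sketch in Python, assuming nonporous spheres and a skeletal density for amorphous silica of about 2.2 g/cm3 (an assumed value, not given above):

```python
# Back-of-envelope check: for nonporous spheres, specific surface area is SSA = 6 / (rho * d).
# With an assumed amorphous-silica density of ~2200 kg/m^3, the quoted 5-50 nm primary
# particle sizes fall inside the quoted 50-600 m^2/g surface-area range.
RHO_SILICA = 2200.0  # kg/m^3, assumed skeletal density of amorphous silica

def specific_surface_area(diameter_nm: float, rho: float = RHO_SILICA) -> float:
    """Return SSA in m^2/g for nonporous spheres of the given diameter."""
    d = diameter_nm * 1e-9           # diameter in metres
    return 6.0 / (rho * d) / 1000.0  # m^2/kg -> m^2/g

for d_nm in (5, 10, 50):
    print(d_nm, "nm ->", round(specific_surface_area(d_nm)), "m^2/g")
# 5 nm -> ~545 m^2/g, 10 nm -> ~273 m^2/g, 50 nm -> ~55 m^2/g
```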
Fumed silica
Engineering
508
4,644,002
https://en.wikipedia.org/wiki/Proanthocyanidin
Proanthocyanidins are a class of polyphenols found in many plants, such as cranberry, blueberry, and grape seeds. Chemically, they are oligomeric flavonoids. Many are oligomers of catechin and epicatechin and their gallic acid esters. More complex polyphenols, having the same polymeric building block, form the group of condensed tannins. Proanthocyanidins were discovered in 1947 by Jacques Masquelier, who developed and patented techniques for the extraction of oligomeric proanthocyanidins from pine bark and grape seeds. Proanthocyanidins are under preliminary research for the potential to reduce the risk of urinary tract infections (UTIs) by consuming cranberries, grape seeds or red wine. Distribution in plants Proanthocyanidins, including the lesser bioactive and bioavailable polymers (four or more catechins), represent a group of condensed flavan-3-ols, such as procyanidins, prodelphinidins and propelargonidins. They can be found in many plants, most notably apples, maritime pine bark and that of most other pine species, cinnamon, aronia fruit, cocoa beans, grape seed, grape skin (procyanidins and prodelphinidins), and red wines of Vitis vinifera (the European wine grape). However, bilberry, cranberry, black currant, green tea, black tea, and other plants also contain these flavonoids. Cocoa beans contain the highest concentrations. Proanthocyanidins also may be isolated from Quercus petraea and Q. robur heartwood (wine barrel oaks). Açaí oil, obtained from the fruit of the açaí palm (Euterpe oleracea), is rich in numerous procyanidin oligomers. Apples contain on average per serving about eight times the amount of proanthocyanidin found in wine, with some of the highest amounts found in the Red Delicious and Granny Smith varieties. An extract of maritime pine bark called Pycnogenol bears 65–75 percent proanthocyanidins (procyanidins). Thus a 100 mg serving would contain 65 to 75 mg of proanthocyanidins (procyanidins). Proanthocyanidin glycosides can be isolated from cocoa liquor. The seed testas of field beans (Vicia faba) contain proanthocyanidins that affect the digestibility in piglets and could have an inhibitory activity on enzymes. Cistus salviifolius also contains oligomeric proanthocyanidins. Analysis Condensed tannins may be characterised by a number of techniques including depolymerisation, asymmetric flow field flow fractionation or small-angle X-ray scattering. DMACA is a dye that is particularly useful for localization of proanthocyanidin compounds in plant histology. The use of the reagent results in blue staining. It can also be used to titrate proanthocyanidins. Proanthocyanidins from field beans (Vicia faba) or barley have been estimated using the vanillin-HCl method, resulting in a red color of the test in the presence of catechins or proanthocyanidins. Proanthocyanidins can be titrated using the Procyanidolic Index (also called the Bates-Smith Assay). It is a testing method that measures the change in color when the product is mixed with certain chemicals. The greater the color changes, the higher the PCOs content is. However, the Procyanidolic Index is a relative value that can measure well over 100. Unfortunately, a Procyanidolic Index of 95 was erroneously taken to mean 95% PCO by some and began appearing on the labels of finished products. All current methods of analysis suggest that the actual PCO content of these products is much lower than 95%. Gel permeation chromatography (GPC) analysis allows separation of monomers from larger proanthocyanidin molecules. 
Monomers of proanthocyanidins can be characterized by analysis with HPLC and mass spectrometry. Condensed tannins can undergo acid-catalyzed cleavage in the presence of a nucleophile like phloroglucinol (reaction called phloroglucinolysis), thioglycolic acid (thioglycolysis), benzyl mercaptan or cysteamine (processes called thiolysis) leading to the formation of oligomers that can be further analyzed. Tandem mass spectrometry can be used to sequence proanthocyanidins. Oligomeric proanthocyanidins Oligomeric proanthocyanidins (OPC) strictly refer to dimer and trimer polymerizations of catechins. OPCs are found in most plants and thus are common in the human diet. Especially the skin, seeds, and seed coats of purple or red pigmented plants contain large amounts of OPCs. They are dense in grape seeds and skin, and therefore in red wine and grape seed extract, cocoa, nuts and all Prunus fruits (most concentrated in the skin), and in the bark of Cinnamomum (cinnamon) and Pinus pinaster (pine bark; formerly known as Pinus maritima), along with many other pine species. OPCs also can be found in blueberries, cranberries (notably procyanidin A2), aronia, hawthorn, rosehip, and sea buckthorn. Oligomeric proanthocyanidins can be extracted via Vaccinium pahalae from in vitro cell culture. The US Department of Agriculture maintains a database of botanical and food sources of proanthocyanidins. Plant defense In nature, proanthocyanidins serve among other chemical and induced defense mechanisms against plant pathogens and predators, such as occurs in strawberries. Bioavailability Proanthocyanidin has low bioavailability, with 90% remaining unabsorbed from the intestines until metabolized by gut flora to the more bioavailable metabolites. Non-oxidative chemical depolymerisation Condensed tannins can undergo acid-catalyzed cleavage in the presence of (or an excess of) a nucleophile like phloroglucinol (reaction called phloroglucinolysis), benzyl mercaptan (reaction called thiolysis), thioglycolic acid (reaction called thioglycolysis) or cysteamine. Flavan-3-ol compounds used with methanol produce short-chain procyanidin dimers, trimers, or tetramers which are more absorbable. These techniques are generally called depolymerisation and give information such as average degree of polymerisation or percentage of galloylation. These are SN1 reactions, a type of substitution reaction in organic chemistry, involving a carbocation intermediate under strongly acidic conditions in polar protic solvents like methanol. The reaction leads to the formation of free and derived monomers that can be further analyzed or used to enhance procyanidin absorption and bioavailability. The free monomers correspond to the terminal units of the condensed tannins chains. In general, reactions are made in methanol, especially thiolysis, as benzyl mercaptan has a low solubility in water. They involve a moderate (50 to 90 °C) heating for a few minutes. Epimerisation may happen. Phloroglucinolysis can be used for instance for proanthocyanidins characterisation in wine or in grape seeds and skin. Thioglycolysis can be used to study proanthocyanidins or the oxidation of condensed tannins. It is also used for lignin quantitation. Reaction on condensed tannins from Douglas fir bark produces epicatechin and catechin thioglycolates. Condensed tannins from Lithocarpus glaber leaves have been analysed through acid-catalyzed degradation in the presence of cysteamine. 
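As an illustration of how depolymerisation data yield an average degree of polymerisation, the sketch below uses the common convention that terminal units are released as free flavan-3-ols and extension units as phloroglucinol adducts; the convention and the sample quantities are assumptions chosen for illustration, not values from the text above.

```python
# Illustrative sketch (assumed convention): after phloroglucinolysis, extension units appear
# as flavan-3-ol–phloroglucinol adducts and terminal units as free flavan-3-ols. The mean
# degree of polymerization (mDP) is then commonly estimated, in molar terms, as
# (extension units + terminal units) / terminal units.

def mean_degree_of_polymerization(extension_units_mol: float, terminal_units_mol: float) -> float:
    return (extension_units_mol + terminal_units_mol) / terminal_units_mol

# Hypothetical numbers: 8 umol of adducts and 2 umol of free monomers -> mDP of 5.
print(mean_degree_of_polymerization(8e-6, 2e-6))  # 5.0
```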
Research Urinary tract infections Cranberries contain A2-type proanthocyanidins (PACs), which may be important for the ability of PACs to bind to proteins such as the adhesins present on E. coli fimbriae, and which were thought to inhibit bacterial infections, such as urinary tract infections (UTIs). Clinical trials assessing whether PACs, particularly from cranberries, are an alternative to antibiotic prophylaxis for UTIs have produced mixed results: 1) a 2014 scientific opinion by the European Food Safety Authority rejected physiological evidence that cranberry PACs have a role in inhibiting bacterial pathogens involved in UTIs; 2) an updated 2023 Cochrane Collaboration review supported the use of cranberry products for the prevention of UTIs for certain groups. A 2017 systematic review concluded that cranberry products significantly reduced the incidence of UTIs, indicating that cranberry products may be effective particularly for individuals with recurrent infections. In 2019, the American Urological Association released guidelines stating that a moderate level of evidence supports the use of cranberry products containing PACs for possible prevention of recurrent UTIs. Wine consumption Proanthocyanidins are the principal polyphenols in red wine and are under research for their potential to lower the risk of coronary heart disease and overall mortality. With tannins, they also influence the aroma, flavor, mouth-feel and astringency of red wines. In red wines, total OPC content, including flavan-3-ols (catechins), was substantially higher (177 mg/L) than that in white wines (9 mg/L). Other Proanthocyanidins found in the proprietary extract of maritime pine bark called Pycnogenol were not found (in 2012) to be effective as a treatment for any disease: "Current evidence is insufficient to support Pycnogenol(®) use for the treatment of any chronic disorder. Well-designed, adequately powered trials are needed to establish the value of this treatment." Sources Proanthocyanidins are present in fresh grapes, juice, red wine, and other darkly pigmented fruits such as cranberry, blackcurrant, elderberry, and aronia. Although red wine may contain more proanthocyanidins by mass per unit of volume than does red grape juice, red grape juice contains more proanthocyanidins per average serving size. A serving of grape juice averages 124 milligrams of proanthocyanidins, whereas a serving of red wine averages 91 milligrams (i.e., 145.6 milligrams per 8 fl. oz. or 240 mL). Many other foods and beverages may also contain proanthocyanidins, but few attain the levels found in red grape seeds and skins, with a notable exception being aronia, which has the highest recorded level of proanthocyanidins among fruits assessed to date (664 milligrams per 100 g). See also A type proanthocyanidin B type proanthocyanidin Tannin Polyphenol Phenolic compounds in wine References External links Grape seed extract, US National Institutes of Health, Office of Dietary Supplements, The National Center for Complementary and Alternative Medicine, 2014 Food chemistry Flavonoid antioxidants Condensed tannins
Proanthocyanidin
Chemistry,Biology
2,480
35,981,107
https://en.wikipedia.org/wiki/Molecular-scale%20temperature
The molecular-scale temperature is the defining property of the U.S. Standard Atmosphere, 1962. It is defined by the relationship Tm(z) = (M0 / M(z)) · T(z), where Tm(z) is the molecular-scale temperature at altitude z; M0 is the molecular weight of air at sea level; M(z) is the molecular weight of air at altitude z; and T(z) is the absolute temperature at altitude z. The definition follows a 1967 USAF technical report. References Atmosphere Temperature
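A minimal sketch of evaluating the relationship numerically; the sample temperature and molecular weight below are illustrative assumptions, not values from the standard's tables.

```python
# Minimal sketch: evaluating the molecular-scale temperature Tm(z) = (M0 / M(z)) * T(z).
# The sample values are illustrative assumptions, not figures from the U.S. Standard Atmosphere.

M0 = 28.9644  # g/mol, sea-level molecular weight of air (standard value)

def molecular_scale_temperature(T_z: float, M_z: float, M_sea_level: float = M0) -> float:
    """Return Tm(z) given the kinetic temperature T(z) in kelvin and the
    local mean molecular weight M(z) in g/mol."""
    return (M_sea_level / M_z) * T_z

# At high altitude the mean molecular weight drops below M0 as lighter species become
# relatively more abundant, so Tm exceeds the kinetic temperature.
print(molecular_scale_temperature(T_z=200.0, M_z=28.0))  # ~206.9 K (illustrative)
```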
Molecular-scale temperature
Physics,Chemistry
93
38,876,656
https://en.wikipedia.org/wiki/Truncated%20order-6%20pentagonal%20tiling
In geometry, the truncated order-6 pentagonal tiling is a uniform tiling of the hyperbolic plane. It has Schläfli symbol of t1,2{6,5}. Uniform colorings Symmetry The dual of this tiling represents the fundamental domains of the *553 symmetry. There are no mirror removal subgroups of [(5,5,3)], but this symmetry group can be doubled to 652 symmetry by adding a bisecting mirror to the fundamental domains. Related polyhedra and tiling References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) See also Square tiling Tilings of regular polygons List of uniform planar tilings List of regular polytopes External links Hyperbolic and Spherical Tiling Gallery KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings Hyperbolic Planar Tessellations, Don Hatch Hyperbolic tilings Isogonal tilings Order-6 tilings Pentagonal tilings Truncated tilings Uniform tilings
Truncated order-6 pentagonal tiling
Physics
238
70,576,296
https://en.wikipedia.org/wiki/Women%20in%20Scientific%20and%20Engineering%20Professions
Women in Scientific and Engineering Professions is a 1984 book co-edited by American authors Violet B. Haas and Carolyn C. Perrucci. It was published through University of Michigan Press. The book was reviewed in several academic journals. References 1984 non-fiction books Women in science and technology University of Michigan Press books Women and employment
Women in Scientific and Engineering Professions
Technology
67
3,765,417
https://en.wikipedia.org/wiki/Brook%20rearrangement
In organic chemistry, the Brook rearrangement refers to any [1,n] carbon to oxygen silyl migration. The rearrangement was first observed in the late 1950s by Canadian chemist Adrian Gibbs Brook (1924–2013), after whom the reaction is named. These migrations can be promoted in a number of different ways, including thermally, photolytically or under basic/acidic conditions. In the forward direction, these silyl migrations produce silyl ethers as products, a transformation driven by the stability of the oxygen-silicon bond. The silyl substituents can be aliphatic or aromatic, and if the silicon is a center of chirality, the migration occurs with retention at this center. This migration occurs through a transition state where silicon is penta-coordinate and bears a partial negative charge. If a center of chirality is present at the carbon center to which the silyl group is attached, then inversion occurs at this center. As an example, if (trimethylsilyl)methanol were to be deprotonated, a [1,2]-Brook rearrangement would occur. Reaction mechanism The reaction mechanism for this rearrangement depends on the conditions employed to effect the rearrangement and the nature of the starting material. Anionic rearrangements are the most common Brook rearrangements observed, and their mechanisms can be broken into two general categories. The first category starts with proton abstraction of a nearby hydroxyl group by a base. This generates an alkoxide which then acts as a nucleophile and attacks the silicon atom in a nucleophilic displacement reaction, with the methylene group acting as the leaving group. The generated carbanion is then protonated by the H-B species to form the product. In the case where the base used is consumed in the reaction (e.g. butyllithium), the carbanion can act as a base to deprotonate further starting material to generate the final product. The proposed transition state for this reaction step is a three-membered ring, with significant negative charge build-up on the carbon atom and the silicon atom, as demonstrated by Hammett sigma and rho studies. This reaction generally proceeds with a low activation energy and a large negative entropy of activation. This further supports the cyclic three-membered transition state, as this would be considerably more ordered than the ground state of the starting material. The reaction proceeds with overall retention at the silicon center, as demonstrated with a Walden cycle (shown below). This supports a pentacoordinate silicon as part of the mechanism, as trigonal bipyramidal geometry around the silicon with one of the O or C axial and the other equatorial would explain the observed retention in configuration at the silicon center. This mechanism also proceeds with inversion at the carbon center. This reaction is known to be reversible. Depending on the relative stabilities of the carbanion and oxy-anion formed, a silyl ether is perfectly capable of rearranging to a species with the silicon bonded to the carbon atom, and the free alcohol being present. This would be termed a retro-Brook rearrangement. The second category of anionic Brook rearrangements involves nucleophilic attack at an sp2 hybridized center to generate an oxy-anion two atoms removed from the silicon atom. This can then undergo intramolecular attack by the oxy-anion to yield the silyl ether, but the final fate of the carbanion often depends on the substrate in question.
For example, attempting to perform a Wittig reaction on an acylsilane results in the formation of a silyl enol ether instead of the expected alkene, due to elimination by the carbanion instead of protonation as seen above. The Brook rearrangement has been shown to occur with retention of configuration at the silicon center as demonstrated in the following Walden cycle: All steps in this cycle were known to proceed with retention of configuration except for attack of the lithium reagent (which proceeded by inversion) and the Brook rearrangement, which was being investigated. By starting with a chiral silicon of known configuration, the stereochemistry of the reaction could be determined by looking at the specific rotation of the recovered silane. Since it is known that attack by the lithium reagent proceeds with inversion, the recovered silane should be the opposite enantiomer of the starting silane (single inversion) if the Brook rearrangement proceeds with retention, and the same enantiomer if the reaction proceeds with inversion (double inversion). Experimentally, the recovered silane was the opposite enantiomer, showing that the reaction occurred with retention at the silicon center. Scope Brook rearrangements are known in acylsilanes. Beyond that, acylsilanes are well known for their hydrolysis in basic solution to a silanol and an aldehyde. This occurs through a Brook rearrangement initiated by attack at the carbonyl group. A related reaction, involving initial attack at the silicon center, causes migration of one of the silicon groups to the carbonyl carbon, which initiates a Brook rearrangement. If the silicon group is chiral, the end product is a chiral silyl ether, as the migration occurs stereospecifically. Rearrangements analogous to the Brook rearrangement are known for many other types of atoms. These include nitrogen, phosphorus, and sulfur as the nucleophilic component, with boron and germanium analogues known as the electrophilic component. References Rearrangement reactions Name reactions
Brook rearrangement
Chemistry
1,182
63,346,731
https://en.wikipedia.org/wiki/Communications%20Earth%20%26%20Environment
Communications Earth & Environment is a peer-reviewed, open-access, scientific journal in environmental science and planetary science published by Nature Portfolio in 2020. The editor-in-chief is Heike Langenberg. Communications Earth & Environment was created as a sub-journal to Nature Communications following the introduction of Communications Biology, Communications Chemistry, and Communications Physics in 2018. Abstracting and indexing The journal is abstracted and indexed in: Astrophysics Data System (ADS) Science Citation Index Expanded Scopus According to the Journal Citation Reports, the journal has a 2022 impact factor of 7.9, ranking it 36th out of 274 journals in the category "Environmental Sciences" and 10th out of 201 journals in the category "Geosciences, Multidisciplinary". See also Nature Nature Communications Scientific Reports References External links Nature Research academic journals Earth and atmospheric sciences journals Environmental science journals Open access journals Academic journals established in 2020 English-language journals Creative Commons-licensed journals Continuous journals
Communications Earth & Environment
Environmental_science
197
2,279,750
https://en.wikipedia.org/wiki/Propidium%20iodide
Propidium iodide (or PI) is a fluorescent intercalating agent that can be used to stain cells and nucleic acids. PI binds to DNA by intercalating between the bases with little or no sequence preference. When in an aqueous solution, PI has a fluorescent excitation maximum of 493 nm (blue-green), and an emission maximum of 636 nm (red). After binding DNA, the quantum yield of PI is enhanced 20-30 fold, and the excitation/emission maximum of PI is shifted to 535 nm (green) / 617 nm (orange-red). Propidium iodide is used as a DNA stain in flow cytometry to evaluate cell viability or DNA content in cell cycle analysis, or in microscopy to visualize the nucleus and other DNA-containing organelles. Propidium Iodide is not membrane-permeable, making it useful to differentiate necrotic, apoptotic and healthy cells based on membrane integrity. PI also binds to RNA, necessitating treatment with nucleases to distinguish between RNA and DNA staining. PI is widely used in fluorescence staining and visualization of the plant cell wall. See also Viability assay Vital stain SYBR Green I Ethidium bromide References Flow cytometry DNA-binding substances Iodides Phenanthridine dyes Staining dyes
Propidium iodide
Chemistry,Biology
294
13,735,033
https://en.wikipedia.org/wiki/Helium%20atom%20scattering
Helium atom scattering (HAS) is a surface analysis technique used in materials science. It provides information about the surface structure and lattice dynamics of a material by measuring the diffracted atoms from a monochromatic helium beam incident on the sample. History The first recorded helium diffraction experiment was completed in 1930 by Immanuel Estermann and Otto Stern on the (100) crystal face of lithium fluoride. This experimentally established the feasibility of atom diffraction when the de Broglie wavelength, λ, of the impinging atoms is on the order of the interatomic spacing of the material. At the time, the major limit to the experimental resolution of this method was due to the large velocity spread of the helium beam. It wasn't until the development of high pressure nozzle sources capable of producing intense and strongly monochromatic beams in the 1970s that HAS gained popularity for probing surface structure. Interest in studying the collision of rarefied gases with solid surfaces was helped by a connection with aeronautics and space problems of the time. Plenty of studies showing the fine structures in the diffraction pattern of materials using helium atom scattering were published in the 1970s. However, it wasn't until a third generation of nozzle beam sources was developed, around 1980, that studies of surface phonons could be made by helium atom scattering. These nozzle beam sources were capable of producing helium atom beams with an energy resolution of less than 1meV, making it possible to explicitly resolve the very small energy changes resulting from the inelastic collision of a helium atom with the vibrational modes of a solid surface, so HAS could now be used to probe lattice dynamics. The first measurement of such a surface phonon dispersion curve was reported in 1981, leading to a renewed interest in helium atom scattering applications, particularly for the study of surface dynamics. Basic principles Surface sensitivity Generally speaking, surface bonding is different from the bonding within the bulk of a material. In order to accurately model and describe the surface characteristics and properties of a material, it is necessary to understand the specific bonding mechanisms at work at the surface. To do this, one must employ a technique that is able to probe only the surface, we call such a technique "surface-sensitive". That is, the 'observing' particle (whether it be an electron, a neutron, or an atom) needs to be able to only 'see' (gather information from) the surface. If the penetration depth of the incident particle is too deep into the sample, the information it carries out of the sample for detection will contain contributions not only from the surface, but also from the bulk material. While there are several techniques that probe only the first few monolayers of a material, such as low-energy electron diffraction (LEED), helium atom scattering is unique in that it does not penetrate the surface of the sample at all! In fact, the scattering 'turnaround' point of the helium atom is 3-4 angstroms above the surface plane of atoms on the material. Therefore, the information carried out in the scattered helium atom comes solely from the very surface of the sample. A visual comparison of helium scattering and electron scattering is shown below: Helium at thermal energies can be modeled classically as scattering from a hard potential wall, with the location of scattering points representing a constant electron density surface. 
Since single scattering dominates the helium-surface interactions, the collected helium signal easily gives information on the surface structure without the complications of considering multiple electron scattering events (such as in LEED). Scattering mechanism A qualitative sketch of the elastic one-dimensional interaction potential between the incident helium atom and an atom on the surface of the sample is shown here: This potential can be broken down into an attractive portion due to Van der Waals forces, which dominates over large separation distances, and a steep repulsive force due to electrostatic repulsion of the positive nuclei, which dominates at short distances. To modify the potential for a two-dimensional surface, a function is added to describe the surface atomic corrugations of the sample. The resulting three-dimensional potential can be modeled as a corrugated Morse potential of the form V(x,y,z) = D[exp(−2α(z − zm)) − 2exp(−α(z − zm))] + βD ξ(x,y) exp(−2α(z − zm)). The first term is for the laterally-averaged surface potential - a potential well with a depth D at the minimum of z = zm and a fitting parameter α, and the second term is the repulsive potential modified by the corrugation function, ξ(x,y), with the same periodicity as the surface and fitting parameter β. Helium atoms, in general, can be scattered either elastically (with no energy transfer to or from the crystal surface) or inelastically through excitation or deexcitation of the surface vibrational modes (phonon creation or annihilation). Each of these scattering results can be used in order to study different properties of a material's surface. Why use helium atoms? There are several advantages to using helium atoms as compared with x-rays, neutrons, and electrons to probe a surface and study its structures and phonon dynamics. As mentioned previously, the lightweight helium atoms at thermal energies do not penetrate into the bulk of the material being studied. This means that in addition to being strictly surface-sensitive they are truly non-destructive to the sample. Their de Broglie wavelength is also on the order of the interatomic spacing of materials, making them ideal probing particles. Since they are neutral, helium atoms are insensitive to surface charges. As a noble gas, the helium atoms are chemically inert. When used at thermal energies, as is the usual scenario, the helium atomic beam is an inert probe (chemically, electrically, magnetically, and mechanically). It is therefore capable of studying the surface structure and dynamics of a wide variety of materials, including those with reactive or metastable surfaces. A helium atom beam can even probe surfaces in the presence of electromagnetic fields and during ultra-high vacuum surface processing without interfering with the ongoing process. Because of this, helium atoms can be useful to make measurements of sputtering or annealing, and adsorbate layer depositions. Finally, because the thermal helium atom has no rotational and vibrational degrees of freedom and no available electronic transitions, only the translational kinetic energy of the incident and scattered beam need be analyzed in order to extract information about the surface. Instrumentation The accompanying figure is a general schematic of a helium atom scattering experimental setup. It consists of a nozzle beam source, an ultra high vacuum scattering chamber with a crystal manipulator, and a detector chamber. Every system can have a different particular arrangement and setup, but most will have this basic structure.
Sources The helium atom beam, with a very narrow energy spread of less than 1 meV, is created through free adiabatic expansion of helium at a pressure of ~200 bar into a low-vacuum chamber through a small ~5-10 μm nozzle. Depending on the system operating temperature range, typical helium atom energies produced can be 5-200 meV. A conical aperture between A and B called the skimmer extracts the center portion of the helium beam. At this point, the atoms of the helium beam should be moving with nearly uniform velocity. Also contained in section B is a chopper system, which is responsible for creating the beam pulses needed to generate the time of flight measurements to be discussed later. Scattering chamber The scattering chamber, area C, generally contains the crystal manipulator and any other analytical instruments that can be used to characterize the crystal surface. Equipment that can be included in the main scattering chamber includes a LEED screen (to make complementary measurements of the surface structure), an Auger analysis system (to determine the contamination level of the surface), a mass spectrometer (to monitor the vacuum quality and residual gas composition), and, for working with metal surfaces, an ion gun (for sputter cleaning of the sample surface). In order to maintain clean surfaces, the pressure in the scattering chamber needs to be in the range of 10−8 to 10−9 Pa. This requires the use of turbomolecular or cryogenic vacuum pumps. Crystal manipulator The crystal manipulator allows for at least three different motions of the sample. The azimuthal rotation allows the crystal to change the direction of the surface atoms, the tilt angle is used to set the normal of the crystal to be in the scattering plane, and the rotation of the manipulator around the z-axis alters the beam incidence angle. The crystal manipulator should also incorporate a system to control the temperature of the crystal. Detector After the beam scatters off the crystal surface, it goes into the detector area D. The most commonly used detector setup is an electron bombardment ion source followed by a mass filter and an electron multiplier. The beam is directed through a series of differential pumping stages that reduce the noise-to-signal ratio before hitting the detector. A time-of-flight analyzer can follow the detector to take energy loss measurements. Elastic measurements Under conditions for which elastic diffractive scattering dominates, the relative angular positions of the diffraction peaks reflect the geometric properties of the surface being examined. That is, the locations of the diffraction peaks reveal the symmetry of the two-dimensional space group that characterizes the observed surface of the crystal. The width of the diffraction peaks reflects the energy spread of the beam. The elastic scattering is governed by two kinematic conditions - conservation of energy and conservation of the momentum component parallel to the crystal surface: |kG|² = |ki|² and KG = Ki + G. Here G is a reciprocal lattice vector, kG and ki are the final and initial (incident) wave vectors of the helium atom, and KG and Ki are their components parallel to the surface. The Ewald sphere construction will determine the diffracted beams to be seen and the scattering angles at which they will appear. A characteristic diffraction pattern will appear, determined by the periodicity of the surface, in a similar manner to that seen for Bragg scattering in electron and x-ray diffraction.
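A minimal sketch of these kinematic conditions, specialized to an in-plane scan along a surface direction with lattice constant a; the beam energy, lattice constant and sign convention are assumptions chosen for illustration, not values from any particular experiment.

```python
# Sketch of the in-plane elastic diffraction condition. Energy conservation keeps |k| fixed,
# and parallel-momentum conservation gives k*sin(theta_f) = k*sin(theta_i) + n*(2*pi/a)
# for integer n (sign convention and sample values are assumptions).
import math

H = 6.626e-34        # Planck constant, J*s
M_HE = 6.646e-27     # helium-4 mass, kg
EV = 1.602e-19       # J per eV

def wavevector(energy_meV: float) -> float:
    """Magnitude of the helium wavevector k = sqrt(2*m*E)/hbar, in 1/m."""
    E = energy_meV * 1e-3 * EV
    return math.sqrt(2.0 * M_HE * E) / (H / (2.0 * math.pi))

k = wavevector(20.0)             # ~20 meV beam -> de Broglie wavelength of about 1 angstrom
a = 4.0e-10                      # assumed surface lattice constant, 4 angstrom
theta_i = math.radians(45.0)     # incidence angle measured from the surface normal
for n in (-1, 0, 1):
    s = math.sin(theta_i) + n * (2.0 * math.pi / a) / k
    if abs(s) <= 1.0:            # otherwise that diffraction channel is closed
        print(n, round(math.degrees(math.asin(s)), 1))  # exit angles of the diffraction peaks
```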
Most helium atom scattering studies will scan the detector in a plane defined by the incoming atomic beam direction and the surface normal, reducing the Ewald sphere to a circle of radius R=k0 intersecting only reciprocal lattice rods that lie in the scattering plane as shown here: The intensities of the diffraction peaks provide information about the static gas-surface interaction potentials. Measuring the diffraction peak intensities under different incident beam conditions can reveal the surface corrugation (the surface electron density) of the outermost atoms on the surface. Note that the detection of the helium atoms is much less efficient than for electrons, so the scattered intensity can only be determined for one point in k-space at a time. For an ideal surface, there should be no elastic scattering intensity between the observed diffraction peaks. If there is intensity seen here, it is due to a surface imperfection, such as steps or adatoms. From the angular position, width and intensity of the peaks, information is gained regarding the surface structure and symmetry, and the ordering of surface features. Inelastic measurements The inelastic scattering of the helium atom beam reveals the surface phonon dispersion for a material. At scattering angles far away from the specular or diffraction angles, the scattering intensity of the ordered surface is dominated by inelastic collisions. In order to study the inelastic scattering of the helium atom beam due only to single-phonon contributions, an energy analysis needs to be made of the scattered atoms. The most popular way to do this is through the use of time-of-flight (TOF) analysis. The TOF analysis requires the beam to be pulsed through the mechanical chopper, producing collimated beam 'packets' that have a 'time-of-flight' (TOF) to travel from the chopper to the detector. The beams that scatter inelastically will lose some energy in their encounter with the surface and therefore have a different velocity after scattering than they were incident with. The creation or annihilation of surface phonons can be measured, therefore, by the shifts in the energy of the scattered beam. By changing the scattering angles or incident beam energy, it is possible to sample inelastic scattering at different values of energy and momentum transfer, mapping out the dispersion relations for the surface modes. Analyzing the dispersion curves reveals sought-after information about the surface structure and bonding. A TOF analysis plot would show intensity peaks as a function of time. The main peak (with the highest intensity) is that for the unscattered helium beam 'packet'. A peak to the left is that for the annihilation of a phonon. If a phonon creation process occurred, it would appear as a peak to the right: The qualitative sketch above shows what a time-of-flight plot might look like near a diffraction angle. However, as the crystal rotates away from the diffraction angle, the elastic (main) peak drops in intensity. The intensity never shrinks to zero even far from diffraction conditions, however, due to incoherent elastic scattering from surface defects. The intensity of the incoherent elastic peak and its dependence on scattering angle can therefore provide useful information about surface imperfections present on the crystal. 
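A minimal sketch of how a time-of-flight measurement converts into an energy transfer; the flight path and arrival times are illustrative assumptions rather than figures from any particular instrument.

```python
# Sketch of converting a time-of-flight measurement into an energy transfer.
# A helium atom of mass m covering the chopper-to-detector distance L in time t has
# E_f = 0.5 * m * (L / t)**2, and the energy exchanged with the surface is dE = E_f - E_i.
M_HE = 6.646e-27   # helium-4 mass, kg
EV = 1.602e-19     # J per eV

def energy_meV(L_m: float, t_s: float) -> float:
    v = L_m / t_s
    return 0.5 * M_HE * v * v / EV * 1e3

L = 1.0                                  # assumed flight path of 1 m
E_i = energy_meV(L, 1.0e-3)              # elastic peak arriving at 1.00 ms -> ~20.7 meV
E_f = energy_meV(L, 1.05e-3)             # a slower packet arriving at 1.05 ms
print(round(E_i, 1), round(E_f, 1), round(E_f - E_i, 1))
# ~20.7, ~18.8, ~-1.9 meV: energy lost to the surface, i.e. a phonon creation event
```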
The kinematics of the phonon annihilation or creation process are extremely simple - conservation of energy and momentum can be combined to yield an equation for the energy exchange and momentum exchange during the collision process. This inelastic scattering process is described as the creation or annihilation of a phonon of energy ħω and wavevector Q. The vibrational modes of the lattice can then be described by the dispersion relations ω(Q), which give the possible phonon frequencies ω as a function of the phonon wavevector Q. In addition to detecting surface phonons, because of the low energy of the helium beam, low-frequency vibrations of adsorbates can be detected as well, leading to the determination of their potential energy. References Other E. Hulpke (Ed.), Helium Atom Scattering from Surfaces, Springer Series in Surface Sciences 27 (1992) G. Scoles (Ed.), Atomic and Molecular Beam Methods, Vol. 2, Oxford University Press, New York (1992) J. B. Hudson, Surface Science - An Introduction, John Wiley & Sons, Inc, New York (1998) Materials science Scientific techniques
Helium atom scattering
Physics,Materials_science,Engineering
2,964
1,619,576
https://en.wikipedia.org/wiki/Mountain%20jet
Mountain jets are a type of jet stream created by surface winds channeled through mountain passes, sometimes causing high wind speeds and drastic temperature changes. Central America jets The North Pacific east of about 120°W is strongly influenced by winds blowing through gaps in the Central American cordillera. Air flow in the region forms the Intra-Americas Low-Level Jet, a westward flow about 1 km above sea level. This flow, trade winds, and cold air flowing south from North America contribute to winds flowing through several mountain valleys. Along Central America are three main wind jets through breaks in the American Cordillera, on the Pacific Ocean side due to prevailing winds. Tehuano wind blows from the Gulf of Mexico through Chivela Pass in Mexico's Isthmus of Tehuantepec and out over the Gulf of Tehuantepec on the Pacific coast. Chivela Pass is a gap between the Sierra Madre del Sur and the Sierra Madre range to the south. Papagayo wind shrieks over the lakes of Nicaragua and pushes far out over the Gulf of Papagayo on the Pacific coast. The Cordillera Central Mountains rise to the south, gradually descending to Gatun Lake and the Isthmus of Panama. Panama winds slice through to the Pacific through the Gaillard Cut in Panama, which also holds the Panama Canal. Cause The air flow is due to surges of cold dense air originating from the North American continent. The meteorological mechanism that causes Tehuano and Papagayo winds is relatively simple. In the winter, cold high-pressure weather systems move southward from North America over the Gulf of Mexico. These high pressure systems create strong pressure gradients between the atmosphere over the Gulf of Mexico and the warmer, moister atmosphere over the Pacific Ocean. Just as a river flows from high elevations to lower elevations, the air in the high pressure system will flow "downhill" toward lower pressure, but the Cordillera mountains block the flow of air, channeling it through Chivela Pass in Mexico, the lake district of Nicaragua, and also Gaillard (Culebra) Cut in Panama. Many times, a Tehuano wind is followed by Papagayo and Panama winds a few days later as the high pressure system moves south. The arrival of these cold surges, and their associated anticyclonic circulation, strengthens the trade winds at low latitudes, and this effect can last for several days. The wind flow over Central America is actually composed of the confluence of two air streams; one from the north, associated with cold surges, and the other from the northeast, associated with trade winds north of South America. Local effects The winds blow at speeds of 80 km/h or more down the hillsides from Chivela Pass and over the waters of the Gulf of Tehuantepec, sometimes extending more than 500 miles (800 km) into the Pacific Ocean. The surface waters under the Gulf of Tehuantepec wind jet can cool by as much as 10 °C in a day. In addition to the cold water that is detectable from other satellite sensors, the ocean's response to these winds shows up in satellite estimates of chlorophyll from ocean color measurements. The cold water and high chlorophyll concentration are signatures of mixing and upwelling of cold, nutrient-rich deep water. Fish converge on this food source, which supports the highly successful fishing industry in the Gulf of Tehuantepec. External links Atmospheric dynamics Geography of Central America Mountains
Mountain jet
Chemistry
720
21,729,425
https://en.wikipedia.org/wiki/Isaac%20Newton%20Medal
The Isaac Newton Medal and Prize is a gold medal awarded annually by the Institute of Physics (IOP) accompanied by a prize of £1,000. The award is given to a physicist, regardless of subject area, background or nationality, for outstanding contributions to physics. The award winner is invited to give a lecture at the Institute. It is named in honour of Sir Isaac Newton. The first medal was awarded in 2008 to Anton Zeilinger, having been announced in 2007. It gained national recognition in the UK in 2013 when it was awarded for technology that could lead to an 'invisibility cloak'. By 2018 it was recognised internationally as the highest honour from the IOP. In 2020, a citation study identified it as one of the five most prestigious prizes in physics. Recipients See also University of Glasgow Isaac Newton Medal Institute of Physics Awards List of physics awards List of awards named after people References External links Awards of the Institute of Physics Physics awards Isaac Newton
Isaac Newton Medal
Technology
194
77,955,200
https://en.wikipedia.org/wiki/MSX-2
MSX-2 is a selective adenosine A2A receptor antagonist used in scientific research. It is a xanthine and a derivative of the non-selective adenosine receptor antagonist caffeine. The affinities (Ki) of MSX-2 for the human adenosine receptors are 5.38 to 14.5nM for the adenosine A2A receptor, 2,500nM for the adenosine A1 receptor (172- to 465-fold lower than for the A2A receptor), and >10,000nM for the adenosine A2B and A3 receptors (>690-fold lower than for the A2A receptor). MSX-2 has poor water solubility, which has limited the use of MSX-2 itself. Water-soluble ester prodrugs of MSX-2, including MSX-3 (a phosphate ester prodrug) and MSX-4 (an amino acid ester prodrug), have been developed and used in place of MSX-2. MSX-3 is best-suited for use by intravenous administration, whereas MSX-4 can be administered by oral administration. MSX-3 and MSX-4 reverse motivational deficits in animals and hence have the capacity to produce pro-motivational effects. MSX-2 and MSX-3 were first described in the scientific literature by 1998. Subsequently, MSX-4 was developed and described by 2008. See also 3-Chlorostyrylcaffeine Istradefylline Preladenant References 3-Methoxyphenyl compounds Adenosine receptor antagonists Experimental drugs Pro-motivational agents Prodrugs Propargyl compounds Xanthines
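The fold-selectivity figures quoted above follow directly from the stated Ki values; a short sketch reproducing that arithmetic:

```python
# Reproducing the fold-selectivity arithmetic from the quoted Ki values (nM).
ki_a2a_range = (5.38, 14.5)   # adenosine A2A receptor
ki_a1 = 2500.0                # adenosine A1 receptor

for ki_a2a in ki_a2a_range:
    print(round(ki_a1 / ki_a2a))   # ~465-fold and ~172-fold lower affinity at A1 than at A2A
```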
MSX-2
Chemistry
370
13,624
https://en.wikipedia.org/wiki/High%20fidelity
High fidelity (often shortened to Hi-Fi or HiFi) is the high-quality reproduction of sound. It is popular with audiophiles and home audio enthusiasts. Ideally, high-fidelity equipment has inaudible noise and distortion, and a flat (neutral, uncolored) frequency response within the human hearing range. High fidelity contrasts with the lower-quality "lo-fi" sound produced by inexpensive audio equipment, AM radio, or the inferior quality of sound reproduction that can be heard in recordings made until the late 1940s. History Bell Laboratories began experimenting with various recording techniques in the early 1930s. Performances by Leopold Stokowski and the Philadelphia Orchestra were recorded in 1931 and 1932 using telephone lines between the Academy of Music in Philadelphia and the Bell labs in New Jersey. Some multitrack recordings were made on optical sound film, which led to new advances used primarily by MGM (as early as 1937) and Twentieth Century Fox Film Corporation (as early as 1941). RCA Victor began recording performances by several orchestras using optical sound around 1941, resulting in higher-fidelity masters for 78-rpm discs. During the 1930s, Avery Fisher, an amateur violinist, began experimenting with audio design and acoustics. He wanted to make a radio that would sound like he was listening to a live orchestra and achieve high fidelity to the original sound. After World War II, Harry F. Olson conducted an experiment whereby test subjects listened to a live orchestra through a hidden variable acoustic filter. The results proved that listeners preferred high-fidelity reproduction, once the noise and distortion introduced by early sound equipment was removed. Beginning in 1948, several innovations created the conditions that made major improvements in home audio quality possible: Reel-to-reel audio tape recording, based on technology taken from Germany after WWII, helped musical artists such as Bing Crosby make and distribute recordings with better fidelity. The advent of the 33⅓ rpm long play (LP) microgroove vinyl record, with lower surface noise and quantitatively specified equalization curves as well as noise-reduction and dynamic range systems. Classical music fans, who were opinion leaders in the audio market, quickly adopted LPs because, unlike with older records, most classical works would fit on a single LP. Higher quality turntables, with more responsive needles FM radio, with wider audio bandwidth and less susceptibility to signal interference and fading than AM radio. Better amplifier designs, with more attention to frequency response and much higher power output capability, reproducing audio without perceptible distortion. New loudspeaker designs, including acoustic suspension, developed by Edgar Villchur and Henry Kloss with improved bass frequency response. In the 1950s, audio manufacturers employed the phrase high fidelity as a marketing term to describe records and equipment intended to provide faithful sound reproduction. Many consumers found the difference in quality compared to the then-standard AM radios and 78-rpm records readily apparent and bought high-fidelity phonographs and 33⅓ LPs such as RCA's New Orthophonics and London's FFRR (Full Frequency Range Recording, a UK Decca system). Audiophiles focused on technical characteristics and bought individual components, such as separate turntables, radio tuners, preamplifiers, power amplifiers and loudspeakers. 
Some enthusiasts even assembled their loudspeaker systems, with the advent of integrated multi-speaker console systems in the 1950s, hi-fi became a generic term for home sound equipment, to some extent displacing phonograph and record player. In the late 1950s and early 1960s, the development of stereophonic equipment and recordings led to the next wave of home-audio improvement, and in common parlance stereo displaced hi-fi. Records were now played on a stereo (stereophonic phonograph). In the world of the audiophile, however, the concept of high fidelity continued to refer to the goal of highly accurate sound reproduction and to the technological resources available for approaching that goal. This period is regarded as the "Golden Age of Hi-Fi", when vacuum tube equipment manufacturers of the time produced many models considered superior by modern audiophiles, and just before solid state (transistorized) equipment was introduced to the market, subsequently replacing tube equipment as the mainstream technology. In the 1960s, the FTC with the help of the audio manufacturers came up with a definition to identify high-fidelity equipment so that the manufacturers could clearly state if they meet the requirements and reduce misleading advertisements. A popular type of system for reproducing music beginning in the 1970s was the integrated music centre—which combined a phonograph turntable, AM-FM radio tuner, tape player, preamplifier, and power amplifier in one package, often sold with its own separate, detachable or integrated speakers. These systems advertised their simplicity. The consumer did not have to select and assemble individual components or be familiar with impedance and power ratings. Purists generally avoid referring to these systems as high fidelity, though some are capable of very good quality sound reproduction. Audiophiles in the 1970s and 1980s preferred to buy each component separately. That way, they could choose models of each component with the specifications that they desired. In the 1980s, several audiophile magazines became available, offering reviews of components and articles on how to choose and test speakers, amplifiers, and other components. Listening tests Listening tests are used by hi-fi manufacturers, audiophile magazines, and audio engineering researchers and scientists. If a listening test is done in such a way that the listener who is assessing the sound quality of a component or recording can see the components that are being used for the test (e.g., the same musical piece listened to through a tube power amplifier and a solid-state amplifier), then it is possible that the listener's pre-existing biases towards or against certain components or brands could affect their judgment. To respond to this issue, researchers began to use blind tests, in which listeners cannot see the components being tested. A commonly used variant of this test is the ABX test. A subject is presented with two known samples (sample A, the reference, and sample B, an alternative), and one unknown sample X, for three samples total. X is randomly selected from A and B, and the subject identifies X as being either A or B. Although there is no way to prove that a certain methodology is transparent, a properly conducted double-blind test can prove that a method is not transparent. Blind tests are sometimes used as part of attempts to ascertain whether certain audio components (such as expensive, exotic cables) have any subjectively perceivable effect on sound quality. 
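A minimal sketch of how an ABX session can be scored; the 16-trial session length and the binomial significance criterion are common conventions assumed here, not requirements stated above.

```python
# Minimal sketch of scoring an ABX listening session. In each trial X is randomly A or B,
# and the listener's answer is compared against the truth; a pure guesser is right half
# the time, so the run of correct answers can be tested against a binomial baseline.
import random
from math import comb

def abx_session(n_trials: int, p_correct: float = 0.5, seed: int = 0) -> int:
    """Simulate n_trials trials for a listener whose chance of identifying X is p_correct."""
    rng = random.Random(seed)
    return sum(rng.random() < p_correct for _ in range(n_trials))

def p_value(hits: int, n_trials: int) -> float:
    """One-sided binomial probability of doing at least this well by guessing."""
    return sum(comb(n_trials, k) for k in range(hits, n_trials + 1)) / 2 ** n_trials

hits = abx_session(16, p_correct=0.5)          # a pure guesser
print(hits, round(p_value(hits, 16), 3))       # rarely falls below the usual 0.05 threshold
print(round(p_value(12, 16), 3))               # 12/16 correct -> ~0.038, usually taken as audible
```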
Data gleaned from these blind tests is not accepted by some audiophile magazines such as Stereophile and The Absolute Sound in their evaluations of audio equipment. John Atkinson, current editor of Stereophile, stated that he once purchased a solid-state amplifier, the Quad 405, in 1978 after seeing the results from blind tests, but came to realize months later that "the magic was gone" until he replaced it with a tube amp. Robert Harley of The Absolute Sound wrote, in 2008, that: "...blind listening tests fundamentally distort the listening process and are worthless in determining the audibility of a certain phenomenon." Doug Schneider, editor of the online Soundstage network, argued the opposite in 2009. He stated: "Blind tests are at the core of the decades' worth of research into loudspeaker design done at Canada's National Research Council (NRC). The NRC researchers knew that for their result to be credible within the scientific community and to have the most meaningful results, they had to eliminate bias, and blind testing was the only way to do so." Many Canadian companies such as Axiom, Energy, Mirage, Paradigm, PSB, and Revel use blind testing extensively in designing their loudspeakers. Audio professional Dr. Sean Olive of Harman International shares this view. Semblance of realism Stereophonic sound provided a partial solution to the problem of reproducing the sound of live orchestral performers by creating separation among instruments, the illusion of space, and a phantom central channel. An attempt to enhance reverberation was tried in the 1970s through quadraphonic sound. Consumers did not want to pay the additional costs and space required for the marginal improvements in realism. With the rise in popularity of home theater, however, multi-channel playback systems became popular, and many consumers were willing to tolerate the six to eight channels required in a home theater. In addition to spatial realism, the playback of music must be subjectively free from noise, such as hiss or hum, to achieve realism. The compact disc (CD) provides about 90 decibels of dynamic range, which exceeds the 80 dB dynamic range of music as normally perceived in a concert hall. Audio equipment must be able to reproduce frequencies high enough and low enough to be realistic. The human hearing range, for healthy young persons, is 20 Hz to 20,000 Hz. Most adults can't hear higher than 15,000 Hz. CDs are capable of reproducing frequencies as low as 0 Hz and as high as 22,050 Hz, making them adequate for reproducing the frequency range that most humans can hear. The equipment must also provide no noticeable distortion of the signal or emphasis or de-emphasis of any frequency in this frequency range. Modularity Integrated, mini, or lifestyle systems (also known by the older terms music centre or midi system) contain one or more sources such as a CD player, a tuner, or a cassette tape deck together with a preamplifier and a power amplifier in one box. A limitation of an "integrated" system is that failure of any one component can possibly lead to the need to replace the entire unit, as components are not readily swapped in or out of a system merely by plugging and unplugging cables, and may not even have been made available by the manufacturer to allow piecemeal repairs. 
Although some high-end audio manufacturers do produce integrated systems, such products are generally disparaged by audiophiles, who prefer to build a system from separates (or components), often with each item from a different manufacturer specialising in a particular component. This provides the most flexibility for piece-by-piece upgrades and repairs. A preamplifier and a power amplifier in one box is called an integrated amplifier; with a tuner added, it is a receiver. A monophonic power amplifier is called a monoblock and is often used for powering a subwoofer. Other modules in the system may include components like cartridges, tonearms, hi-fi turntables, digital media players, DVD players that play a wide variety of discs including CDs, CD recorders, MiniDisc recorders, hi-fi videocassette recorders (VCRs) and reel-to-reel tape recorders. Signal modification equipment can include equalizers and noise-reduction systems. This modularity allows the enthusiast to spend as little or as much as they want on a component to suit their specific needs, achieve a desired sound, and add components as desired. Also, failure of any component of an integrated system can render it unusable, while the unaffected components of a modular system may continue to function. A modular system introduces the complexity of cabling multiple components and often having different remote controls for each unit. Modern equipment Some modern hi-fi equipment can be digitally connected using fiber optic TOSLINK cables, USB ports (including one to play digital audio files), or Wi-Fi support. Another modern component is the music server consisting of one or more computer hard drives that hold music in the form of computer files. When the music is stored in an audio file format that is lossless such as FLAC, Monkey's Audio or WMA Lossless, the computer playback of recorded audio can serve as an audiophile-quality source for a hi-fi system. There is now a push from certain streaming services to offer hi-fi services. Streaming services typically have a modified dynamic range and possibly bit rates lower than audiophile standards. Tidal and others have launched a hi-fi tier that includes access to FLAC and Master Quality Authenticated studio masters for many tracks through the desktop version of the player. This integration is also available for high-end audio systems. See also Audio system measurements Comparison of analog and digital recording DIY audio Edwin Howard Armstrong Entertainment center Lo-fi music Wife acceptance factor Wi-Fi, a wireless term derived from hi-fi References Further reading External links A Dictionary of Home Entertainment Terms Sound Consumer electronics Sound recording Audio engineering
High fidelity
Engineering
2,588
68,402,947
https://en.wikipedia.org/wiki/Event%20shape%20observables
In high energy physics, event shape observables are quantities used to characterize the geometry of the outcome of a collision between high energy particles in a collider. Specifically, event shape observables quantify the general pattern traced by the trajectories of the particles resulting from the collision. The most common event shape observables include: The sphericity; The aplanarity; The thrust; The C-parameter; The jet broadening. References Experimental particle physics
Event shape observables
Physics
104
12,257,662
https://en.wikipedia.org/wiki/Sever%20Sternhell
Severyn Marcel Sternhell (30 May 1930 – 18 November 2022) was a Polish-born Australian academic and organic chemist. He was professor of Chemistry at the University of Sydney and a Fellow of the Australian Academy of Science. His research focused on the induction of chirality into mesophases, aspects of steric hindrance and the mechanochemistry of organic compounds. Early life and education Sternhell was born in Lwow, then in Poland but now in western Ukraine. His father was Samson Sternhell, a lawyer. Before the war he attended a Zionist Hebrew language primary school for five years. Having survived the Bergen-Belsen concentration camp, he emigrated to Australia with his parents to join members of the Sternhell family who had left Europe prior to World War II. Arriving in Sydney in February 1947, he enrolled as a boarder at Newington College a few days after the commencement of the first term. Having had no formal education since primary school, and having only recently learnt English in Palestine, he spent the next nine months studying for the Leaving Certificate. He sat for his exams and received four A's and a B in English. Following Newington, he attended the University of Sydney and graduated with a BSc (hons) in 1951 and an MSc in 1953. In 1961, he was awarded a PhD and DIC from the University of London. After further study in London, he received a DSc from Imperial College. Research and university career Sternhell began his career as a research chemist in private industry with Monsanto in 1953. Two years later, he was appointed a senior research officer at CSIRO and remained in that position until 1964. Sternhell was a senior lecturer in the Department of Organic Chemistry at the University of Sydney from 1964 until 1967. He spent a further ten years as a reader in organic chemistry at the University of Sydney before being appointed Professor of Organic Chemistry. He was an Emeritus Professor from 1999. In 1991–92, he was chairman of the Australian Research Council Chemical Sciences Panel. Fellowships and honours Sternhell was a Fellow of the Australian Academy of Science and of the Royal Australian Chemical Institute. In 2001, he was awarded the Centenary Medal for his service to Australian society and science in organic chemistry and molecular engineering. In June 2018, he was made an Officer of the Order of Australia "for distinguished service to education in the field of organic chemistry, specifically to nuclear magnetic resonance, as an academic and researcher, and to scientific institutions." Death Sternhell died in Naremburn, Sydney, on 18 November 2022, at the age of 92. References 1930 births 2022 deaths Alumni of Imperial College London Australian Jews Australian people of Polish-Jewish descent Bergen-Belsen concentration camp survivors Academic staff of the University of Sydney University of Sydney alumni People educated at Newington College Organic chemists Australian chemists Fellows of the Australian Academy of Science Officers of the Order of Australia
Sever Sternhell
Chemistry
591
207,897
https://en.wikipedia.org/wiki/Window%20%28computing%29
In computing, a window is a graphical control element. It consists of a visual area containing some of the graphical user interface of the program it belongs to and is framed by a window decoration. It usually has a rectangular shape that can overlap with the area of other windows. It displays the output of and may allow input to one or more processes. Windows are primarily associated with graphical displays, where they can be manipulated with a pointer by employing some kind of pointing device. Text-only displays can also support windowing, as a way to maintain multiple independent display areas, such as multiple buffers in Emacs. Text windows are usually controlled by keyboard, though some also respond to the mouse. A graphical user interface (GUI) using windows as one of its main "metaphors" is called a windowing system, whose main components are the display server and the window manager. History The idea was developed at the Stanford Research Institute (led by Douglas Engelbart). Their earliest systems supported multiple windows, but there was no obvious way to indicate boundaries between them (such as window borders, title bars, etc.). Research continued at Xerox Corporation's Palo Alto Research Center (PARC), led by Alan Kay, where overlapping windows were used. During the 1980s the term "WIMP", which stands for window, icon, menu, pointer, was coined at PARC. Apple had worked with PARC briefly at that time, and developed an interface based on PARC's interface. It was first used on Apple's Lisa and later Macintosh computers. Microsoft was developing Office applications for the Mac at that time. Some speculate that this gave them access to Apple's OS before it was released and thus influenced the design of the windowing system in what would eventually be called Microsoft Windows. Properties Windows are two-dimensional objects arranged on a plane called the desktop (the desktop metaphor). In a modern full-featured windowing system they can be resized, moved, hidden, restored or closed. Windows usually include other graphical objects, possibly including a menu-bar, toolbars, controls, icons and often a working area. In the working area, the document, image, folder contents or other main object is displayed. Around the working area, within the bounding window, there may be other smaller window areas, sometimes called panes or panels, showing relevant information or options. The working area of a single document interface holds only one main object. "Child windows" in multiple document interfaces, and tabs for example in many web browsers, can make several similar documents or main objects available within a single main application window. Some windows in macOS have a feature called a drawer, which is a pane that slides out of the side of the window to show extra options. Applications that can run either under a graphical user interface or in a text user interface may use different terminology. GNU Emacs uses the term "window" to refer to an area within its display, while a traditional window, such as one controlled by an X11 window manager, is called a "frame". Any window can be split into the window decoration and the window's content, although some systems purposely eschew window decoration as a form of minimalism. Window decoration The window decoration is a part of a window in most windowing systems. Window decoration typically consists of a title bar, usually along the top of each window, and a minimal border around the other three sides. On Microsoft Windows this is called the "non-client area".
In the predominant layout for modern window decorations, the top bar contains the title of that window and buttons which perform windowing-related actions such as: Close Maximize Minimize Resize Roll-up The border exists primarily to allow the user to resize the window, but also to create a visual separation between the window's contents and the rest of the desktop environment. Window decorations are considered important for the design of the look and feel of an operating system and some systems allow for customization of the colors, styles and animation effects used. Window border Window border is a window decoration component provided by some window managers, that appears around the active window. Some window managers may also display a border around background windows. Typically window borders enable the window to be resized or moved by dragging the border. Some window managers provide useless borders which are purely for decorative purposes and offer no window motion facility. These window managers do not allow windows to be resized by using a drag action on the border. Title bar The title bar is a graphical control element and part of the window decoration provided by some window managers. As a convention, it is located at the top of the window as a horizontal bar. The title bar is typically used to display the name of the application or the name of the open document, and may provide title bar buttons for minimizing, maximizing, closing or rolling up of application windows. These functions are typically placed in the top-right of the screen to allow fast and inaccurate inputs through barrier pointing. Typically title bars can be used to provide window motion enabling the window to be moved around the screen by grabbing the title bar and dragging it. Some window managers provide title bars which are purely for decorative purposes and offer no window motion facility. These window managers do not allow windows to be moved around the screen by using a drag action on the title bar. Default title-bar text often incorporates the name of the application and/or of its developer. The name of the host running the application also appears frequently. Various methods (menu-selections, escape sequences, setup parameters, command-line options – depending on the computing environment) may exist to give the end-user some control of title-bar text. Document-oriented applications like a text editor may display the filename or path of the document being edited. Most web browsers will render the contents of the HTML element title in their title bar, sometimes pre- or postfixed by the application name. Google Chrome and some versions of Mozilla Firefox place their tabs in the title bar. This makes it unnecessary to use the main window for the tabs, but usually results in the title becoming truncated. An asterisk at its beginning may be used to signify unsaved changes. The title bar often contains widgets for system commands relating to the window, such as a maximize, minimize, rollup and close buttons; and may include other content such as an application icon, a clock, etc. Title bar buttons Some window managers provide title bar buttons which provide the facility to minimize, maximize, roll-up or close application windows. Some window managers may display the title bar buttons in the task bar or task panel, rather than in the title bars. 
The following buttons may appear in the title bar: Close Maximize Minimize Resize Roll-up (or WindowShade) Note that a context menu may be available from some title bar buttons or by right-clicking. Title bar icon Some window managers display a small icon in the title bar that may vary according to the application on which it appears. The title bar icon may behave like a menu button, or may provide a context menu facility. macOS applications commonly have a proxy icon next to the window title that functions the same as the document's icon in the file manager. Document status icon Some window managers display an icon or symbol to indicate that the contents of the window have not been saved or confirmed in some way: macOS displays a dot in the center of its close button; RISC OS appends an asterisk to the title. Tiling window managers Some tiling window managers provide title bars which are purely for informative purposes and offer no controls or menus. These window managers do not allow windows to be moved around the screen by using a drag action on the title bar and may also serve the purpose of a status line from stacking window managers. In popular operating systems See also Client-Side Decoration Display server Graphical user interface Human interface guidelines WIMP (computing) Window manager References Graphical control elements Graphical user interface elements
Window (computing)
Technology
1,632
586,694
https://en.wikipedia.org/wiki/Signed%20number%20representations
In computing, signed number representations are required to encode negative numbers in binary number systems. In mathematics, negative numbers in any base are represented by prefixing them with a minus sign ("−"). However, in RAM or CPU registers, numbers are represented only as sequences of bits, without extra symbols. The four best-known methods of extending the binary numeral system to represent signed numbers are: sign–magnitude, ones' complement, two's complement, and offset binary. Some of the alternative methods use implicit instead of explicit signs, such as negative binary, using the base −2. Corresponding methods can be devised for other bases, whether positive, negative, fractional, or other elaborations on such themes. There is no definitive criterion by which any of the representations is universally superior. For integers, the representation used in most current computing devices is two's complement, although the Unisys ClearPath Dorado series mainframes use ones' complement. History The early days of digital computing were marked by competing ideas about both hardware technology and mathematics technology (numbering systems). One of the great debates was the format of negative numbers, with some of the era's top experts expressing very strong and differing opinions. One camp supported two's complement, the system that is dominant today. Another camp supported ones' complement, where a negative value is formed by inverting all of the bits in its positive equivalent. A third group supported sign–magnitude, where a value is changed from positive to negative simply by toggling the word's highest-order bit. There were arguments for and against each of the systems. Sign–magnitude allowed for easier tracing of memory dumps (a common process in the 1960s) as small numeric values use fewer 1 bits. These systems did ones' complement math internally, so numbers would have to be converted to ones' complement values when they were transmitted from a register to the math unit and then converted back to sign–magnitude when the result was transmitted back to the register. The electronics required more gates than the other systems, a key concern when the cost and packaging of discrete transistors were critical. IBM was one of the early supporters of sign–magnitude, with their 704, 709 and 709x series computers being perhaps the best-known systems to use it. Ones' complement allowed for somewhat simpler hardware designs, as there was no need to convert values when passed to and from the math unit. But it also shared an undesirable characteristic with sign–magnitude: the ability to represent negative zero (−0). Negative zero behaves exactly like positive zero: when used as an operand in any calculation, the result will be the same whether an operand is positive or negative zero. The disadvantage is that the existence of two forms of the same value necessitates two comparisons when checking for equality with zero. Ones' complement subtraction can also result in an end-around borrow (described below). It can be argued that this makes the addition and subtraction logic more complicated or that it makes it simpler, as a subtraction requires simply inverting the bits of the second operand as it is passed to the adder. The PDP-1, CDC 160 series, CDC 3000 series, CDC 6000 series, UNIVAC 1100 series, and LINC computer use ones' complement representation. Two's complement is the easiest to implement in hardware, which may be the ultimate reason for its widespread popularity.
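To make the differences between these conventions concrete, the following Python sketch (our illustration, not part of the original article; the helper name is invented) decodes one and the same eight-bit pattern under each of the representations described in the sections below:

# Sketch: interpret an 8-bit pattern under several signed-number conventions.
def decode_all(bits: int, width: int = 8) -> dict:
    sign = bits >> (width - 1)                   # most significant bit
    magnitude = bits & ((1 << (width - 1)) - 1)  # remaining bits
    return {
        "unsigned": bits,
        "sign-magnitude": -magnitude if sign else magnitude,
        "ones complement": -(bits ^ ((1 << width) - 1)) if sign else bits,
        "twos complement": bits - (1 << width) if sign else bits,
        "excess-128": bits - (1 << (width - 1)),  # offset binary, K = 2**(width-1)
    }

print(decode_all(0b10101011))
# 10101011 reads as 171 unsigned, −43 in sign–magnitude, −84 in ones' complement,
# −85 in two's complement, and 43 in excess-128.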
Processors on the early mainframes often consisted of thousands of transistors, so eliminating a significant number of transistors was a significant cost saving. Mainframes such as the IBM System/360, the GE-600 series, and the PDP-6 and PDP-10 used two's complement, as did minicomputers such as the PDP-5 and PDP-8 and the PDP-11 and VAX machines. The architects of the early integrated-circuit-based CPUs (Intel 8080, etc.) also chose to use two's complement math. As IC technology advanced, two's complement technology was adopted in virtually all processors, including x86, m68k, Power ISA, MIPS, SPARC, ARM, Itanium, PA-RISC, and DEC Alpha. Sign–magnitude In the sign–magnitude representation, also called sign-and-magnitude or signed magnitude, a signed number is represented by the bit pattern corresponding to the sign of the number for the sign bit (often the most significant bit, set to 0 for a positive number and to 1 for a negative number), and the magnitude of the number (or absolute value) for the remaining bits. For example, in an eight-bit byte, only seven bits represent the magnitude, which can range from 0000000 (0) to 1111111 (127). Thus numbers ranging from −127₁₀ to +127₁₀ can be represented once the sign bit (the eighth bit) is added. For example, −43₁₀ encoded in an eight-bit byte is 10101011 while 43₁₀ is 00101011. Using sign–magnitude representation has multiple consequences which make it more intricate to implement: There are two ways to represent zero, 00000000 (0) and 10000000 (−0). Addition and subtraction require different behavior depending on the sign bit, whereas ones' complement can ignore the sign bit and just do an end-around carry, and two's complement can ignore the sign bit and depend on the overflow behavior. Comparison also requires inspecting the sign bit, whereas in two's complement, one can simply subtract the two numbers, and check if the outcome is positive or negative. The minimum negative number is −127, instead of −128 as in the case of two's complement. This approach is directly comparable to the common way of showing a sign (placing a "+" or "−" next to the number's magnitude). Some early binary computers (e.g., IBM 7090) use this representation, perhaps because of its natural relation to common usage. Sign–magnitude is the most common way of representing the significand in floating-point values. Ones' complement In the ones' complement representation, a negative number is represented by the bit pattern corresponding to the bitwise NOT (i.e. the "complement") of the positive number. Like sign–magnitude representation, ones' complement has two representations of 0: 00000000 (+0) and 11111111 (−0). As an example, the ones' complement form of 00101011 (43₁₀) becomes 11010100 (−43₁₀). The range of signed numbers that can be represented in N bits using ones' complement is −(2^(N−1) − 1) to +(2^(N−1) − 1), together with ±0. A conventional eight-bit byte is −127₁₀ to +127₁₀ with zero being either 00000000 (+0) or 11111111 (−0). To add two numbers represented in this system, one does a conventional binary addition, but it is then necessary to do an end-around carry: that is, add any resulting carry back into the resulting sum. To see why this is necessary, consider the following example showing the case of the addition of −1 (11111110) to +2 (00000010):
      binary    decimal
    11111110        −1
  + 00000010        +2
  ──────────        ──
  1 00000000         0   ← Incorrect answer
           1        +1   ← Add carry
  ──────────        ──
    00000001         1   ← Correct answer
In the previous example, the first binary addition gives 00000000, which is incorrect; the correct result (00000001) only appears when the carry is added back in.
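In code, the end-around carry amounts to one extra addition. The following Python sketch (ours, not from the article; the function name is invented) performs N-bit ones' complement addition and reproduces the worked example above:

# Sketch: N-bit ones' complement addition with end-around carry.
def ones_complement_add(a: int, b: int, width: int = 8) -> int:
    mask = (1 << width) - 1
    total = (a & mask) + (b & mask)
    carry = total >> width          # 1 if the sum overflowed the width
    return (total + carry) & mask   # fold the carry back into the result

minus_one = 0b11111110   # −1 in 8-bit ones' complement
plus_two = 0b00000010    # +2
print(format(ones_complement_add(minus_one, plus_two), "08b"))  # 00000001, i.e. +1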
A remark on terminology: the system is referred to as "ones' complement" because the negation of a positive value x (represented as the bitwise NOT of x) can also be formed by subtracting x from the ones' complement representation of zero, which is a long sequence of ones (−0). Two's complement arithmetic, on the other hand, forms the negation of x by subtracting x from a single large power of two that is congruent to +0. Therefore, ones' complement and two's complement representations of the same negative value will differ by one. Note that the ones' complement representation of a negative number can be obtained from the sign–magnitude representation merely by bitwise complementing the magnitude (inverting all the bits after the first). For example, the decimal number −125 with its sign–magnitude representation 11111101 can be represented in ones' complement form as 10000010. Two's complement In the two's complement representation, a negative number is represented by the bit pattern corresponding to the bitwise NOT (i.e. the "complement") of the positive number plus one, i.e. to the ones' complement plus one. It circumvents the problems of multiple representations of 0 and the need for the end-around carry of the ones' complement representation. This can also be thought of as the most significant bit representing the inverse of its value in an unsigned integer; in an 8-bit unsigned byte, the most significant bit represents the 128s place, whereas in two's complement that bit would represent −128. In two's complement, there is only one zero, represented as 00000000. Negating a number (whether negative or positive) is done by inverting all the bits and then adding one to that result. This actually reflects the ring structure on the integers modulo 2^N, namely ℤ/2^Nℤ. Addition of a pair of two's-complement integers is the same as addition of a pair of unsigned numbers (except for detection of overflow, if that is done); the same is true for subtraction and even for the N lowest significant bits of a product (value of multiplication). For instance, a two's-complement addition of 127 and −128 gives the same binary bit pattern as an unsigned addition of 127 and 128, as can be seen from the 8-bit two's complement table. An easier method to get the negation of a number in two's complement is as follows: 1. Invert all the bits of the number (this computes the same result as subtracting the number from negative one). 2. Add one. Example: for +2, which is 00000010 in binary (the ~ character is the C bitwise NOT operator, so ~X means "invert all the bits in X"):
~00000010 → 11111101
11111101 + 1 → 11111110 (−2 in two's complement)
Offset binary In the offset binary representation, also called excess-K or biased, a signed number is represented by the bit pattern corresponding to the unsigned number plus K, with K being the biasing value or offset. Thus 0 is represented by K, and −K is represented by an all-zero bit pattern. This can be seen as a slight modification and generalization of the aforementioned two's complement, which is virtually the excess-2^(N−1) representation with the most significant bit negated. Biased representations are now primarily used for the exponent of floating-point numbers. The IEEE 754 floating-point standard defines the exponent field of a single-precision (32-bit) number as an 8-bit excess-127 field. The double-precision (64-bit) exponent field is an 11-bit excess-1023 field; see exponent bias.
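Encoding and decoding an excess-K value is a single addition or subtraction, which is one reason biased exponents are convenient to handle in hardware. A minimal sketch (ours, not part of the article; the function names are invented), using the single-precision bias K = 127 mentioned above:

# Sketch: offset-binary (excess-K) encoding and decoding.
def excess_k_encode(value: int, k: int) -> int:
    return value + k                 # stored as an ordinary unsigned field

def excess_k_decode(stored: int, k: int) -> int:
    return stored - k

print(format(excess_k_encode(3, 127), "08b"))   # 10000010: exponent +3, biased by 127
print(excess_k_decode(0b10000010, 127))         # 3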
The biased representation was also used for binary-coded decimal numbers as excess-3. Base −2 In the base −2 representation, a signed number is represented using a number system with base −2. In conventional binary number systems, the base, or radix, is 2; thus the rightmost bit represents 2^0, the next bit represents 2^1, the next bit 2^2, and so on. However, a binary number system with base −2 is also possible. The rightmost bit represents (−2)^0 = 1, the next bit represents (−2)^1 = −2, the next bit (−2)^2 = 4, and so on, with alternating sign. The numbers that can be represented with four bits are shown in the comparison table below. The range of numbers that can be represented is asymmetric. If the word has an even number of bits, the magnitude of the largest negative number that can be represented is twice as large as the largest positive number that can be represented, and vice versa if the word has an odd number of bits. Comparison table The following table shows, for each four-bit pattern, the number it represents under each of the systems described above (that is, "given these binary bits, what is the number as interpreted by the representation system"):
Binary  Unsigned  Sign–magnitude  Ones' complement  Two's complement  Excess-8  Base −2
0000       0          +0               +0                 0             −8         0
0001       1          +1               +1                +1             −7        +1
0010       2          +2               +2                +2             −6        −2
0011       3          +3               +3                +3             −5        −1
0100       4          +4               +4                +4             −4        +4
0101       5          +5               +5                +5             −3        +5
0110       6          +6               +6                +6             −2        +2
0111       7          +7               +7                +7             −1        +3
1000       8          −0               −7                −8              0        −8
1001       9          −1               −6                −7             +1        −7
1010      10          −2               −5                −6             +2       −10
1011      11          −3               −4                −5             +3        −9
1100      12          −4               −3                −4             +4        −4
1101      13          −5               −2                −3             +5        −3
1110      14          −6               −1                −2             +6        −6
1111      15          −7               −0                −1             +7        −5
Other systems Google's Protocol Buffers "zig-zag encoding" is a system similar to sign–magnitude, but uses the least significant bit to represent the sign and has a single representation of zero. This allows a variable-length quantity encoding intended for nonnegative (unsigned) integers to be used efficiently for signed integers. A similar method is used in the Advanced Video Coding/H.264 and High Efficiency Video Coding/H.265 video compression standards to extend exponential-Golomb coding to negative numbers. In that extension, the least significant bit is almost a sign bit; zero has the same least significant bit (0) as all the negative numbers. This choice results in the largest magnitude representable positive number being one higher than the largest magnitude negative number, unlike in two's complement or the Protocol Buffers zig-zag encoding. Another approach is to give each digit a sign, yielding the signed-digit representation. For instance, in 1726, John Colson advocated reducing expressions to "small numbers", numerals 1, 2, 3, 4, and 5. In 1840, Augustin Cauchy also expressed preference for such modified decimal numbers to reduce errors in computation. See also Balanced ternary Binary-coded decimal Computer number format Method of complements Signedness References Ivan Flores, The Logic of Computer Arithmetic, Prentice-Hall (1963); Israel Koren, Computer Arithmetic Algorithms, A.K. Peters (2002) Computer arithmetic
Signed number representations
Mathematics
3,000
34,475,270
https://en.wikipedia.org/wiki/Basil%20Gordon
Basil Gordon (December 23, 1931 – January 12, 2012) was a mathematician at UCLA, specializing in number theory and combinatorics. He obtained his Ph.D. at California Institute of Technology under the supervision of Tom Apostol. Ken Ono was one of his students. Gordon is well known for Göllnitz–Gordon identities, generalizing the Rogers–Ramanujan identities. He also posed the still-unsolved Gaussian moat problem in 1962. Gordon was drafted into the US Army, where he worked with the former Nazi rocket scientist Wernher von Braun. Gordon's calculations of the gravitational interactions of earth, moon, and satellite contributed to the success and longevity of Explorer I, which launched in 1958 and remained in orbit until 1970. He was the step-grandson of General George Barnett and is a descendant of the Gordon family of British distillers, producers of Gordon's Gin. References External links In memoriam: Basil Gordon, Professor of Mathematics, Emeritus, 1931 – 2012, UCLA Mathematics Department website Some Tauberian Theorems connected with the Prime Number Theorem, Basil Gordon, PhD thesis, 1956 2012 deaths 20th-century American mathematicians 21st-century American mathematicians Combinatorialists California Institute of Technology alumni 1931 births
Basil Gordon
Mathematics
256