source | text |
|---|---|
https://en.wikipedia.org/wiki/Synchronous%20circuit | In digital electronics, a synchronous circuit is a digital circuit in which the changes in the state of memory elements are synchronized by a clock signal. In a sequential digital logic circuit, data are stored in memory devices called flip-flops or latches. The output of a flip-flop is constant until a pulse is applied to its "clock" input, upon which the input of the flip-flop is latched into its output. In a synchronous logic circuit, an electronic oscillator called the clock generates a string (sequence) of pulses, the "clock signal". This clock signal is applied to every storage element, so in an ideal synchronous circuit, every change in the logical levels of its storage components is simultaneous. Ideally, the input to each storage element has reached its final value before the next clock pulse occurs, so the behaviour of the whole circuit can be predicted exactly. Practically, some delay is required for each logical operation, resulting in a maximum speed at which each synchronous system can run.
To make these circuits work correctly, a great deal of care is needed in the design of the clock distribution networks. Static timing analysis is often used to determine the maximum safe operating speed.
Nearly all digital circuits, and in particular nearly all CPUs, are fully synchronous circuits with a global clock.
Exceptions, which are often contrasted with fully synchronous designs, include self-synchronous circuits, globally asynchronous locally synchronous circuits, and fully asynchronous circuits.
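To illustrate the synchronous discipline described above, here is a toy simulation in which every flip-flop updates simultaneously on the clock edge; the 3-bit synchronous counter and all names are our own illustration, not from the article:

```python
# Toy model of a synchronous circuit: all flip-flops latch new values on the
# same clock edge, computed purely from the previous state.
def tick(state):
    b0, b1, b2 = state                 # current flip-flop outputs (b0 = LSB)
    n0 = not b0                        # combinational next-state logic
    n1 = b1 != b0
    n2 = b2 != (b0 and b1)
    return (n0, n1, n2)                # every bit updates simultaneously

state = (False, False, False)
for cycle in range(8):                 # one loop iteration = one clock pulse
    print(cycle, [int(b) for b in state])
    state = tick(state)
```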
See also
Synchronous network
Asynchronous circuit
Moore machine
Mealy machine
Finite state machine
Sequential logic
Memory
Control unit
Arithmetic logic unit
Processor register
Application-specific integrated circuit (ASIC) |
https://en.wikipedia.org/wiki/Fortuna%20%28PRNG%29 | Fortuna is a cryptographically secure pseudorandom number generator (PRNG) devised by Bruce Schneier and Niels Ferguson and published in 2003. It is named after Fortuna, the Roman goddess of chance. Since FreeBSD 11, FreeBSD uses Fortuna for /dev/random, and /dev/urandom is symbolically linked to it. Apple OSes have switched to Fortuna since 2020 Q1.
Design
Fortuna is a family of secure PRNGs; its design leaves some choices open to implementors. It is composed of the following pieces:
The generator itself, which once seeded will produce an indefinite quantity of pseudo-random data.
The entropy accumulator, which collects genuinely random data from various sources and uses it to reseed the generator when enough new randomness has arrived.
The seed file, which stores enough state to enable the computer to start generating random numbers as soon as it has booted.
Generator
The generator is based on any good block cipher. Practical Cryptography suggests AES, Serpent or Twofish. The basic idea is to run the cipher in counter mode, encrypting successive values of an incrementing counter.
With a 128-bit block cipher, this would produce statistically identifiable deviations from randomness; for instance, generating 2⁶⁴ genuinely random 128-bit blocks would produce on average about one pair of identical blocks, but there are no repeated blocks at all among the first 2¹²⁸ produced by a 128-bit cipher in counter mode. Therefore, the key is changed periodically: no more than 1 MiB of data (2¹⁶ 128-bit blocks) is generated without a key change. The book points out that block ciphers with a 256-bit (or greater) block size, which did not enjoy much popularity at the time, do not have this statistical problem.
The key is also changed after every data request (however small), so that a future key compromise doesn't endanger previous generator outputs. This property is sometimes described as "Fast Key Erasure" or Forward secrecy.
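The counter-mode construction and per-request rekeying can be sketched as follows. This is a minimal illustration, not Ferguson and Schneier's reference implementation; the class name and the use of the third-party "cryptography" package are assumptions for the example:

```python
# Sketch of the generator component: AES in counter mode, reseeded by hashing,
# and rekeyed after every request ("fast key erasure").
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

class FortunaGenerator:
    def __init__(self):
        self.key = None       # 32-byte AES-256 key; None means "never seeded"
        self.counter = 0      # 128-bit counter fed through the block cipher

    def _blocks(self, nblocks):
        """Encrypt successive counter values to get nblocks * 16 bytes."""
        enc = Cipher(algorithms.AES(self.key), modes.ECB()).encryptor()
        out = bytearray()
        for _ in range(nblocks):
            out += enc.update(self.counter.to_bytes(16, "little"))
            self.counter += 1
        return bytes(out)

    def reseed(self, seed: bytes):
        # New key = hash of the old key and the fresh entropy.
        self.key = hashlib.sha256((self.key or b"\x00" * 32) + seed).digest()
        self.counter += 1

    def pseudo_random_data(self, nbytes: int) -> bytes:
        assert self.key is not None, "seed the generator first"
        assert 0 < nbytes <= 2**20, "at most 1 MiB per request, per the design"
        data = self._blocks((nbytes + 15) // 16)[:nbytes]
        self.key = self._blocks(2)   # rekey so past output stays unrecoverable
        return data

g = FortunaGenerator()
g.reseed(b"entropy from the accumulator")
print(g.pseudo_random_data(32).hex())
```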
Entropy accumulator
The entropy accumulator |
https://en.wikipedia.org/wiki/Asynchronous%20circuit | An asynchronous circuit (clockless or self-timed circuit) is a sequential digital logic circuit that does not use a global clock circuit or signal generator to synchronize its components. Instead, the components are driven by a handshaking circuit which indicates the completion of a set of instructions. Handshaking works by simple data transfer protocols. Many synchronous circuits were developed in the early 1950s as part of bigger asynchronous systems (e.g. ORDVAC). Asynchronous circuits and the theory surrounding them are part of several steps in integrated circuit design, a field of digital electronics engineering.
Asynchronous circuits are contrasted with synchronous circuits, in which changes to the signal values in the circuit are triggered by repetitive pulses called a clock signal. Most digital devices today use synchronous circuits. However, asynchronous circuits have the potential to be much faster, to have lower power consumption and electromagnetic interference, and to offer better modularity in large systems. Asynchronous circuits are an active area of research in digital logic design.
It was not until the 1990s that the viability of asynchronous circuits was demonstrated by real-life commercial products.
Overview
All digital logic circuits can be divided into combinational logic, in which the output signals depend only on the current input signals, and sequential logic, in which the output depends both on current input and on past inputs. In other words, sequential logic is combinational logic with memory. Virtually all practical digital devices require sequential logic. Sequential logic can be divided into two types, synchronous logic and asynchronous logic.
Synchronous circuits
In synchronous logic circuits, an electronic oscillator generates a repetitive series of equally spaced pulses called the clock signal. The clock signal is supplied to all the components of the IC. Flip-flops only flip when triggered by the edge of the clock pulse, so changes to the logic signals thr |
https://en.wikipedia.org/wiki/Korky%20the%20Cat | Korky the Cat is a character in a comic strip in the British comics magazine The Dandy. It first appeared in issue 1, dated 4 December 1937, except for one issue, No. 294 (9 June 1945) when Keyhole Kate was on the cover. For several decades he was the mascot of The Dandy. In 1984, after 47 virtually continuous years, Korky was replaced on the front cover by Desperate Dan.
History
The strip's simple premise follows the adventures of a black male cat called Korky, a cat who behaves like a human and is accepted in a world of humans as only a comic character can be. Originally a mute character, he started speaking in No. 149, 5 October 1940 (see image for his first words as he tries to help some hungry dogs). The 1950s saw the introduction of his 'Kits', Nip, Lip, and Rip.
Artists were:
James Crighton from issues 1 to 1051 (4 December 1937 to 13 January 1962).
Charlie Grigg from issues 1052 to 2116 (20 January 1962 to 12 June 1982).
David Gudgeon from issue 2117 (19 June 1982 to 1986).
Robert Nixon from 1986 to 1999
Phil Corbett from 2010
After 1984, Korky still continued inside the comic, and a picture of him remained next to the Dandy logo until 1998. When Robert Nixon took over drawing in the Dandy issue dated 1 November 1986, some changes were made. Korky, whose look had remained virtually the same since the 1940s, now looked noticeably different, particularly in the case of his eyes (though the picture of him next to the Dandy logo was never changed). The focus of the strip also switched more to the Kits, who had been promoted from their originally minor role; so much so that at one point the strip was renamed Korky the Cat and the Kits.
When Nixon went into semi retirement at the beginning of 1999 (relinquishing Beryl the Peril at the same time) several different artists took up the pen, including David Sutherland (who also draws The Bash Street Kids from The Beano, and used to draw Dennis the Menace) at first, Steve Bright, Lesley Reavey, Anthony |
https://en.wikipedia.org/wiki/Linux%20for%20PlayStation%202 | Linux for PlayStation 2 (or PS2 Linux) is a kit released by Sony Computer Entertainment in 2002 that allows the PlayStation 2 console to be used as a personal computer. It included a Linux-based operating system, a USB keyboard and mouse, a VGA adapter, a PS2 network adapter (Ethernet only), and a 40 GB hard disk drive (HDD). An 8 MB memory card is required; it must be formatted during installation, erasing all data previously saved on it, though afterwards the remaining space may be used for savegames. It is strongly recommended that a user of Linux for PlayStation 2 have some basic knowledge of Linux before installing and using it, due to the command-line interface for installation.
The official site for the project was closed at the end of October 2009 and communities like ps2dev are no longer active.
Capabilities
The Linux Kit turns the PlayStation 2 into a full-fledged computer system, but it does not allow for use of the DVD-ROM drive except to read PS1 and PS2 discs due to piracy concerns by Sony. Although the HDD included with the Linux Kit is not compatible with PlayStation 2 games, reformatting the HDD with the utility disc provided with the retail HDD enables use with PlayStation 2 games but erases PS2 Linux, though there is a driver that allows PS2 Linux to operate once copied onto the APA partition created by the utility disc. The Network Adapter included with the kit only supports Ethernet; a driver is available to enable modem support if the retail Network Adapter (which includes a built-in V.90 modem) is used. The kit supports display on RGB monitors (with sync-on-green) using a VGA cable provided with the Linux Kit, or television sets with the normal cable included with the PlayStation 2 unit.
The PS2 Linux distribution is based on Kondara MNU/Linux, a Japanese distribution itself based on Red Hat Linux. PS2 Linux is similar to Red Hat Linux 6, and has most of the features one might expect in a Red Hat Linux 6 system. The stock kernel is Linux 2.2.1 (although |
https://en.wikipedia.org/wiki/Tuberculous%20lymphadenitis | Tuberculous lymphadenitis (or tuberculous adenitis) is the most common form of tuberculosis infections that appears outside the lungs. Tuberculous lymphadenitis is a chronic, specific granulomatous inflammation of the lymph node with caseation necrosis, caused by infection with Mycobacterium tuberculosis or related bacteria.
The characteristic morphological element is the tuberculous granuloma (caseating tubercule). This consists of giant multinucleated cells (Langhans cells), surrounded by epithelioid cell aggregates, T cell lymphocytes and fibroblasts. Granulomatous tubercules eventually develop central caseous necrosis and tend to become confluent, replacing the lymphoid tissue.
Symptoms and signs
In addition to swollen lymph nodes, called lymphadenitis, the person may experience mild fever, loss of appetite, or weight loss.
Cause
It is usually caused by the most common cause of tuberculosis in the lungs, namely Mycobacterium tuberculosis. It has sometimes also been caused by related bacteria, including M. bovis, M. kansasii, M. fortuitum, M. marinum, and Mycobacterium ulcerans.
Stages
Stages of tubercular lymphadenitis:
Lymphadenitis
Periadenitis
Cold abscess
'Collar stud' abscess
Sinus
Tuberculous lymphadenitis is popularly known as collar stud abscess, due to its proximity to the collar bone and its superficial resemblance to a collar stud, although this is just one of the five stages of the disease. One or more affected lymph nodes can also be in a different body part, although it is most typical to have at least one near the collar bone.
Diagnosis
The diagnosis of tuberculous lymphadenitis may requi |
https://en.wikipedia.org/wiki/Newman%20projection | A Newman projection is a drawing that helps visualize the 3-dimensional structure of a molecule. This projection most commonly sights down a carbon-carbon bond, making it a very useful way to visualize the stereochemistry of alkanes. A Newman projection visualizes the conformation of a chemical bond from front to back, with the front atom represented by the intersection of three lines (a dot) and the back atom as a circle. The front atom is called proximal, while the back atom is called distal. This type of representation clearly illustrates the specific dihedral angle between the proximal and distal atoms.
This projection is named after American chemist Melvin Spencer Newman, who introduced it in 1952 as a partial replacement for Fischer projections, which are unable to represent conformations and thus conformers properly. This diagram style is an alternative to a sawhorse projection, which views a carbon-carbon bond from an oblique angle, or a wedge-and-dash style, such as a Natta projection. These other styles can indicate the bonding and stereochemistry, but not as much conformational detail.
A Newman projection can also be used to study cyclic molecules, such as the chair conformation of cyclohexane.
Because of the free rotation around single bonds, there are various conformations for a single molecule. Up to six unique conformations may be drawn for any given chemical bond. Each conformation is drawn by rotation of either the proximal or distal atom 60 degrees. Of these six conformations, three will be in a staggered conformation, while the other three will be in an eclipsed conformation. These six conformations can be represented in a relative energy diagram.
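The 60-degree bookkeeping described here can be made concrete with a small enumeration. This is an illustrative sketch of our own; the convention that a 0° dihedral angle is eclipsed matches the usual torsion-angle definition of eclipsing:

```python
# Enumerate the six conformations reached by rotating the distal atom in
# 60-degree steps; multiples of 120 degrees from the eclipsed reference are
# eclipsed, the intermediate angles are staggered.
def conformations(start_angle=0):
    for k in range(6):
        angle = (start_angle + 60 * k) % 360
        kind = "eclipsed" if angle % 120 == 0 else "staggered"
        yield angle, kind

for angle, kind in conformations():
    print(f"{angle:3d} deg: {kind}")   # 0, 120, 240 eclipsed; 60, 180, 300 staggered
```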
A staggered projection appears to have the surrounding species equidistant from each other. This kind of conformation tends to experience both anti and gauche interactions. Anti interactions refer to the molecules (usually of the same type) sitting exactly opposite of each other at 180° on the Newm |
https://en.wikipedia.org/wiki/Eclipsed%20conformation | In chemistry, an eclipsed conformation is a conformation in which two substituents X and Y on adjacent atoms A and B are in closest proximity, implying that the torsion angle X–A–B–Y is 0°. Such a conformation can exist about any single chemical bond connecting two sp³-hybridised atoms in an open chain, and it is normally a conformational energy maximum. This maximum is often explained by steric hindrance, but its origins sometimes actually lie in hyperconjugation (as when the eclipsing interaction is of two hydrogen atoms).
In order to gain a deeper understanding of eclipsed conformations in organic chemistry, it is first important to understand how organic molecules are arranged around bonds, as well as how they move and rotate.
In the example of ethane, two methyl groups are connected with a carbon-carbon sigma bond, just as one might connect two Lego pieces through a single “stud” and “tube”. With this image in mind, if the methyl groups are rotated around the bond, they will remain connected; however, the shape will change. This leads to multiple possible three-dimensional arrangements, known as conformations, conformational isomers (conformers), or sometimes rotational isomers (rotamers).
Organic chemistry
Conformations can be described by dihedral angles, which are used to determine the placements of atoms and their distance from one another and can be visualized by Newman projections. A dihedral angle can indicate staggered and eclipsed orientation, but is specifically used to determine the angle between two specific atoms on opposing carbons. Different conformations have unequal energies, creating an energy barrier to bond rotation which is known as torsional strain. In particular, eclipsed conformations tend to have raised energies due to the repulsion of the electron clouds of the eclipsed substituents. The relative energies of different conformations can be visualized using graphs. In the example of ethane, such a graph shows that rotation around the carbon-car |
https://en.wikipedia.org/wiki/Adaptationism | Adaptationism (also known as functionalism) is the Darwinian view that many physical and psychological traits of organisms are evolved adaptations. Pan-adaptationism is the strong form of this, deriving from the early 20th century modern synthesis, that all traits are adaptations, a view now shared by only a few biologists.
The "adaptationist program" was heavily criticized by Stephen Jay Gould and Richard Lewontin in their 1979 paper "The Spandrels of San Marco and the Panglossian Paradigm". According to Gould and Lewontin, evolutionary biologists had a habit of proposing adaptive explanations for any trait by default without considering non-adaptive alternatives, and often by conflating products of adaptation with the process of natural selection. One formal alternative to adaptationist explanations for traits in organisms is the neutral theory of molecular evolution, which proposes that features in organisms can arise through neutral transitions and become fixed in a population by chance (genetic drift). Constructive neutral evolution (CNE) is another paradigm which proposes a means by which complex systems emerge through neutral transitions, and CNE has been used to help understand the origins of a wide variety of features from the spliceosome of eukaryotes to the interdependency and simplification widespread in microbial communities. For many, neutral evolution is seen as the null hypothesis when attempting to explain the origins of a complex trait, so that adaptive scenarios for the origins of traits undergo a more rigorous demonstration prior to their acceptance.
Introduction
Criteria to identify a trait as an adaptation
Adaptationism is an approach to studying the evolution of form and function. It attempts to frame the existence and persistence of traits, assuming that each of them arose independently and improved the reproductive success of the organism's ancestors.
A trait is an adaptation if it fulfils the following criteria:
The trait is a variat |
https://en.wikipedia.org/wiki/Crackpot%20index | The Crackpot Index is a number that rates scientific claims or the individuals that make them, in conjunction with a method for computing that number. It was proposed by John C. Baez in 1992, and updated in 1998.
While the index was created for its humorous value, the general concepts can be applied in other fields like risk management.
Baez's crackpot index
The method was initially proposed semi-seriously by mathematical physicist John C. Baez in 1992, and then revised in 1998. The index used responses to a list of 37 questions, each positive response contributing a point value ranging from 1 to 50; the computation is initialized with a value of −5. An earlier version only had 17 questions with point values for each ranging from 1 to 40.
The New Scientist published a claim in 1992 that the creation of the index was "prompted by an especially striking outburst from a retired mathematician insisting that TIME has INERTIA".
Baez later confirmed in a 1993 letter to New Scientist that he created the index. The index was later published in Skeptic magazine, with an editor's note saying "we know that outsiders to a field can make important contributions and even lead revolutions. But the chances of that happening are rather slim, especially when they meet many of the [Crackpot index] criteria".
Though the index was not proposed as a serious method, it nevertheless has become popular in Internet discussions of whether a claim or an individual is cranky, particularly in physics (e.g., at the Usenet newsgroup sci.physics), or in mathematics.
Chris Caldwell's Prime Pages has a version adapted to prime number research which is a field with many famous unsolved problems that are easy to understand for amateur mathematicians.
Gruenberger's measure for crackpots
An earlier crackpot index is Fred J. Gruenberger's "A Measure for Crackpots" published in December 1962 by the RAND Corporation.
See also
Crank (person)
List of topics characterized as pseudoscience
Pseudophy |
https://en.wikipedia.org/wiki/Low%20Bandwidth%20X | In computing, LBX, or Low Bandwidth X, is a protocol to use the X Window System over network links with low bandwidth and high latency. It was introduced in X11R6.3 ("Broadway") in 1996, but never achieved wide use. It was disabled by default as of X.Org Server 7.1, and was removed for version 7.2.
X was originally implemented for use with the server and client on the same machine or the same local area network. By 1996, the Internet was becoming popular, and X's performance over narrow, slow links was problematic.
LBX ran as a proxy server (lbxproxy). It cached commonly used information — connection setup, large window properties, font metrics, keymaps and so on — and compressed data transmission over the network link.
LBX was never widely deployed as it did not offer significant speed improvements. The slow links it was introduced to help were typically insecure, and RFB (VNC) over a secure shell connection — which includes compression — proved faster than LBX, and also provided session resumption.
Finally, it was shown that greater speed improvements to X could be obtained for all networked environments with replacement of X's antiquated font system as part of the new composited graphics system, along with care and attention to application and widget toolkit design, particularly care to avoid network round trips and hence latency.
See also
Virtual Network Computing (VNC)
xmove - a tool that allows you to move programs between X Window System displays
xpra - a more recent tool which is similar to xmove
NX technology, an X acceleration system |
https://en.wikipedia.org/wiki/Black%20Manta | Black Manta is a supervillain appearing in American comic books published by DC Comics. Created by Bob Haney and Nick Cardy, the character debuted in Aquaman #35 (September 1967), and has since endured as the archenemy of the superhero Aquaman.
Black Manta has had numerous origin stories throughout his comic book history, having been a young boy kidnapped and enslaved by abusive pirates on their ship; an autistic orphan subjected to unethical experiments in Arkham Asylum; and a high-seas treasure hunter caught in a mutual cycle of vengeance with Aquaman over the deaths of their fathers. Despite these different versions of his past, Black Manta is consistently depicted as a ruthless underwater mercenary who is obsessed with destroying Aquaman's life. A black armored suit and a large metal helmet with red eye lenses serve as Black Manta's visual motif.
The character has been adapted in various media incarnations, having been portrayed in live-action by Yahya Abdul-Mateen II in the 2018 DC Extended Universe film Aquaman and its upcoming 2023 sequel Aquaman and the Lost Kingdom. Kevin Michael Richardson, Khary Payton and others have provided the character's voice in media ranging from animation to video games.
Fictional character biography
Black Manta had no definitive origin story until #6 of the 1993 Aquaman series. In this origin, the African American child who would become Black Manta grew up in Baltimore, Maryland, and loved to play by the Chesapeake Bay. In his youth, he was kidnapped and forced to work on a ship for an unspecified amount of time, where he was physically abused by his captors. At one point, he saw Aquaman with his dolphin friends and tried to signal him for help but was not seen. Finally, he was forced to defend himself, killing one of his tormentors on the ship with a knife. Hating the emotionless sea and Aquaman, whom he saw as its representative, he was determined to become its master.
An alternative version was given in #8 of the 2003 Aqu |
https://en.wikipedia.org/wiki/Captive%20NTFS | Captive NTFS is a discontinued open-source project in the Linux programming community, started by Jan Kratochvíl. It is a driver wrapper around the original Microsoft Windows NTFS file system driver using parts of ReactOS code. By taking this approach, it aimed to provide safe write support to NTFS partitions.
Until the release of NTFS-3G, it was the only Linux NTFS driver with full write support.
On January 26, 2006 Kratochvíl released version 1.1.7 of the package. It restores compatibility with recent Linux kernels by replacing the obsolete LUFS (Linux Userland File System) module with FUSE (File System in Userspace), which as of Linux 2.6.14 has been part of the official Linux kernel.
Captive NTFS requires NTFS.SYS, which cannot be freely distributed for legal reasons. It can either be obtained from an installed Windows system (which most computers with NTFS partitions are likely to have) or extracted from certain Microsoft service packs.
External links
Jan Kratochvil's Captive NTFS home page |
https://en.wikipedia.org/wiki/Magic%20hexagon | A magic hexagon of order n is an arrangement of numbers in a centered hexagonal pattern with n cells on each edge, in such a way that the numbers in each row, in all three directions, sum to the same magic constant M. A normal magic hexagon contains the consecutive integers from 1 to 3n² − 3n + 1. It turns out that normal magic hexagons exist only for n = 1 (which is trivial, as it is composed of only 1 cell) and n = 3. Moreover, the solution of order 3 is essentially unique. Meng also gave a less intricate constructive proof.
The order-3 magic hexagon has been published many times as a 'new' discovery. An early reference, and possibly the first discoverer, is Ernst von Haselberg (1887).
Proof of normal magic hexagons
The numbers in the hexagon are consecutive, and run from 1 to 3n² − 3n + 1. Hence their sum is a triangular number, namely
T = (3n² − 3n + 1)(3n² − 3n + 2)/2.
There are r = 2n − 1 rows running along any given direction (E-W, NE-SW, or NW-SE). Each of these rows sums up to the same number M. Therefore:
M = T/r = (3n² − 3n + 1)(3n² − 3n + 2)/(2(2n − 1)).
This can be rewritten as
M = (9n⁴ − 18n³ + 18n² − 9n + 2)/(2(2n − 1)).
Multiplying throughout by 32 gives
32M = 72n³ − 108n² + 90n − 27 + 5/(2n − 1),
which shows that 5/(2n − 1) must be an integer, hence 2n − 1 must be a factor of 5, namely 2n − 1 = ±1 or 2n − 1 = ±5. The only n ≥ 1 that meet this condition are n = 1 and n = 3, proving that there are no normal magic hexagons except those of order 1 and 3.
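A few lines of code confirm this conclusion numerically (an illustrative check of our own; the function name is an assumption):

```python
# Numeric confirmation of the divisibility argument: the magic constant
# M(n) = T(n) / (2n - 1) is a whole number only for n = 1 and n = 3.
from fractions import Fraction

def magic_constant(n):
    N = 3 * n * n - 3 * n + 1            # the cells hold 1 .. N
    total = Fraction(N * (N + 1), 2)     # triangular-number sum
    return total / (2 * n - 1)           # shared equally among the rows

for n in range(1, 8):
    M = magic_constant(n)
    print(n, M, "integer" if M.denominator == 1 else "")
# Only n = 1 (M = 1) and n = 3 (M = 38) yield integers.
```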
Abnormal magic hexagons
Although there are no normal magic hexagons with order greater than 3, certain abnormal ones do exist. In this case, abnormal means starting the sequence of numbers with a value other than 1. Arsen Zahray discovered these order 4 and 5 hexagons:
The order 4 hexagon starts with 3 and ends with 39, its rows summing to 111. The order 5 hexagon starts with 6 and ends with 66 and sums to 244.
An order 5 hexagon starting with 15, ending with 75 and summing to 305 also exists.
A higher sum than 305 for order 5 hexagons is not possible.
Order 5 hexagons, where the "X" are placeholders for order 3 hexagons, which complete the number sequence. The left one contains the hexagon with the sum 38 (numbe |
https://en.wikipedia.org/wiki/Attenuation%20length | In physics, the attenuation length or absorption length is the distance λ into a material at which the probability that a particle has not been absorbed has dropped to 1/e. Alternatively, if there is a beam of particles incident on the material, the attenuation length is the distance at which the intensity of the beam has dropped to 1/e, i.e. about 63% of the particles have been stopped.
Mathematically, the probability of finding a particle at depth x into the material is calculated by the Beer–Lambert law:
P(x) = e^(−x/λ).
In general λ is material- and energy-dependent.
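As a small numeric illustration of the law above (the value of λ here is an arbitrary assumption, not from the article):

```python
# Beer-Lambert survival probability at a given depth.
import math

def survival_probability(x, lam):
    """Probability that a particle reaches depth x unabsorbed."""
    return math.exp(-x / lam)

lam = 2.0                                  # attenuation length, arbitrary units
print(survival_probability(lam, lam))      # ~0.368 = 1/e at one attenuation length
print(1 - survival_probability(lam, lam))  # ~0.632, the "about 63%" stopped
```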
See also
Beer's Law
Mean free path
Attenuation coefficient
Attenuation (electromagnetic radiation)
Radiation length |
https://en.wikipedia.org/wiki/Densa | Densa has been used as the name of a number of fictional organizations parodying Mensa International, the organization for highly intelligent people. Densa is ostensibly an organization for people insufficiently intelligent to be members of Mensa. The name Densa has been said to be an acronym for "Diversely Educated Not Seriously Affected." The name Densa is a portmanteau of dense (in the sense of stupid) and Mensa.
There is no single formal Densa organization; instead, various projects using that name exist as informal groups, usually meant by their founders as a joke rather than a serious organization. Even within Mensa itself, a SIG (special interest group, an informal sub-group of Mensans sharing a particular common interest) has existed for Densa, which, like all Mensa SIGs, required Mensa membership for admission, while it was active.
The concept of an organization for the mentally dense originated in Boston & Outskirts Mensa Bulletin (BOMB), August 1974, in "A-Bomb-inable Puzzle II" by John D. Coons. The puzzle involved "The Boston chapter of Densa, the low IQ society". Subsequent issues had additional puzzles with gags about the group and were widely reprinted by the bulletins of other Mensa groups before the concept of a low IQ group gained wider circulation in the 1970s, with other people creating quizzes, etc.
A humor book called The Densa Quiz: The Official & Complete Dq Test of the International Densa Society was written in 1983 by Stephen Price and J. Webster Shields. |
https://en.wikipedia.org/wiki/Radiation%20length | In particle physics, the radiation length is a characteristic of a material, related to the energy loss of high energy particles electromagnetically interacting with it. It is defined as the mean length (in cm) into the material at which the energy of an electron is reduced by the factor 1/e.
Definition
In materials of high atomic number (e.g. tungsten, uranium, plutonium) the electrons of energies >~10 MeV predominantly lose energy by bremsstrahlung, and high-energy photons by pair production. The characteristic amount of matter traversed for these related interactions is called the radiation length X₀, usually measured in g·cm⁻². It is both the mean distance over which a high-energy electron loses all but 1/e of its energy by bremsstrahlung, and 7/9 of the mean free path for pair production by a high-energy photon. It is also the appropriate length scale for describing high-energy electromagnetic cascades.
The radiation length for a given material consisting of a single type of nucleus can be approximated by the following expression:
X₀ = 716.4 g·cm⁻² · A / (Z(Z + 1) ln(287/√Z))
where Z is the atomic number and A is the mass number of the nucleus.
For Z > 4, a good approximation is
1/X₀ = 4(ℏ/(mₑc))² Z(Z + 1) α³ n_a ln(183/Z^(1/3))
where
n_a is the number density of the nucleus,
ℏ denotes the reduced Planck constant,
mₑ is the electron rest mass,
c is the speed of light,
α is the fine-structure constant.
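Evaluating the single-element approximation above for a few materials gives a feel for the scale. This is an illustrative sketch; the computed values are approximate, and the Particle Data Group tables remain the authoritative source:

```python
# Approximate radiation length from the simple single-nucleus formula above.
import math

def radiation_length(Z, A):
    """Approximate X0 in g/cm^2 for atomic number Z and mass number A."""
    return 716.4 * A / (Z * (Z + 1) * math.log(287 / math.sqrt(Z)))

for name, Z, A in [("carbon", 6, 12.01), ("iron", 26, 55.85), ("lead", 82, 207.2)]:
    print(f"{name}: X0 ~ {radiation_length(Z, A):.1f} g/cm^2")
# lead comes out near 6.3 g/cm^2, close to the tabulated 6.37 g/cm^2
```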
For electrons at lower energies (below few tens of MeV), the energy loss by ionization is predominant.
While this definition may also be used for other electromagnetic interacting particles beyond leptons and photons, the presence of the stronger hadronic and nuclear interaction makes it a far less interesting characterisation of the material; the nuclear collision length and nuclear interaction length are more relevant.
Comprehensive tables for radiation lengths and other properties of materials are available from the Particle Data Group.
See also
Mean free path
Attenuation length
Attenuation coefficient
Attenuation
Range (particle radiation)
Stopping power (particle radiation)
Electron en |
https://en.wikipedia.org/wiki/Pull-up%20resistor | In electronic logic circuits, a pull-up resistor (PU) or pull-down resistor (PD) is a resistor used to ensure a known state for a signal. It is typically used in combination with components such as switches and transistors, which physically interrupt the connection of subsequent components to ground or to VCC. Closing the switch creates a direct connection to ground or VCC, but when the switch is open, the rest of the circuit would be left floating (i.e., it would have an indeterminate voltage).
For a switch that is used to connect a circuit to VCC (e.g., if the switch or button is used to transmit a "high" signal), a pull-down resistor connected between the circuit and ground ensures a well-defined ground voltage (i.e. logical low) across the remainder of the circuit when the switch is open. For a switch that is used to connect a circuit to ground, a pull-up resistor (connected between the circuit and VCC) ensures a well-defined voltage (i.e. VCC, or logical high) when the switch is open.
An open switch is not equivalent to a component with infinite impedance, since in the former case, the stationary voltage in any loop in which it is involved can no longer be determined by Kirchhoff's laws. Consequently, the voltages across those critical components (such as the logic gate in the example on the right), which are only in loops involving the open switch, are undefined, too.
A pull-up resistor effectively establishes an additional loop over the critical components, ensuring that the voltage is well-defined even when the switch is open.
For a pull-up resistor to serve only this one purpose and not interfere with the circuit otherwise, a resistor with an appropriate amount of resistance must be used. For this, it is assumed that the critical components have infinite or sufficiently high impedance, which is guaranteed for example for logic gates made from FETs. In this case, when the switch is open, the voltage across a pull-up resistor (with sufficiently low impe |
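As a rough illustration of the sizing considerations discussed above, a minimal sketch with assumed example values (none of these figures come from the article):

```python
# The pull-up must be small enough that input leakage cannot pull the node
# below a valid logic-high, yet large enough to limit the current that flows
# when the switch closes and the resistor sees the full supply.
VCC = 5.0        # supply voltage, volts (assumed)
I_LEAK = 1e-6    # worst-case gate input leakage, amps (assumed)
V_IH = 3.5       # minimum voltage still read as logic high (assumed)

r_max = (VCC - V_IH) / I_LEAK
print(f"R must stay below {r_max / 1e3:.0f} kOhm to hold a valid high")

R = 10e3         # a common conservative choice
print(f"switch closed: {VCC / R * 1e3:.1f} mA flows through the pull-up")
```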
https://en.wikipedia.org/wiki/Carry%20%28arithmetic%29 | In elementary arithmetic, a carry is a digit that is transferred from one column of digits to another column of more significant digits. It is part of the standard algorithm to add numbers together by starting with the rightmost digits and working to the left. For example, when 6 and 7 are added to make 13, the "3" is written to the same column and the "1" is carried to the left. When used in subtraction the operation is called a borrow.
Carrying is emphasized in traditional mathematics, while curricula based on reform mathematics do not emphasize any specific method to find a correct answer.
Carrying makes a few appearances in higher mathematics as well. In computing, carrying is an important function of adder circuits.
Manual arithmetic
A typical example of carry is in the following pencil-and-paper addition:
  1
  27
+ 59
----
  86
7 + 9 = 16, and the digit 1 is the carry.
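The column-by-column procedure just shown can also be sketched in code (an illustrative version; the function name is our own):

```python
# Multi-digit addition by the standard carrying algorithm.
def add_with_carry(a: str, b: str) -> str:
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):   # rightmost column first
        s = int(da) + int(db) + carry
        digits.append(str(s % 10))                 # digit written in this column
        carry = s // 10                            # digit carried to the left
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_with_carry("27", "59"))   # '86', with a carry out of the units column
```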
The opposite is a borrow, as in
 −1
  47
− 19
----
  28
Here, 7 − 9 is not possible, so try 17 − 9 = 8, and the 10 is obtained by taking ("borrowing") 1 from the next digit to the left. There are two ways in which this is commonly taught:
The ten is moved from the next digit left, leaving in this example 3 − 1 in the tens column. According to this method, the term "borrow" is a misnomer, since the ten is never paid back.
The ten is copied from the next digit left, and then 'paid back' by adding it to the subtrahend in the column from which it was 'borrowed', giving in this example 4 − 2 in the tens column.
Mathematics education
Traditionally, carry is taught in the addition of multi-digit numbers in the 2nd or late first year of elementary school. However, since the late 20th century, many widely adopted curricula developed in the United States such as TERC omitted instruction of the traditional carry method in favor of invented arithmetic methods, and methods using coloring, manipulatives, and charts. Such omissions were criticized by such groups as Mathematically Correct, and some states and districts have since ab |
https://en.wikipedia.org/wiki/Medical%20Waste%20Tracking%20Act | The Medical Waste Tracking Act of 1988 was a United States federal law concerning the illegal dumping of body tissues, blood wastes and other contaminated biological materials. It established heavy penalties for knowingly endangering life through noncompliance. The law expired in 1991.
Authority
The law created a two-year program that went into effect in New York, New Jersey, Connecticut, Rhode Island and Puerto Rico on June 24, 1989, and expired on June 21, 1991.
The H.R. 3515 legislation was passed by the 100th Congress and signed into law by the 40th President of the United States, Ronald Reagan, on November 2, 1988.
History
Beginning on August 13, 1987, a "30-mile garbage slick" composed primarily of medical and household wastes prompted extensive closures of numerous New Jersey and New York beaches. Investigations ongoing throughout the year indicated that the waste likely originated from "New York City's marine transfer stations … and the Southwest Brooklyn Incinerator and Transfer Station in particular…" The then-assistant commissioner of the New Jersey Department of Environmental Protection stated his belief that the cause of pollution was intentional rather than accidental; "sealed plastic garbage bags, he said, were cut at the top, so their contents could disperse through the ocean." Such a deliberate action may have arisen given the high cost (~$1500/ton) associated with the legal disposal of the waste, thus incentivizing private waste contractors to dump illegally to avoid high fees.
Ultimately the Medical Waste Tracking Act of 1988 (MWTA) arose from the aftermath of this situation. It was designed primarily to monitor the treatment of medical wastes through their creation, transportation and destruction, i.e. from "cradle-to-grave." Congress approved the bill "to amend the Solid Waste Disposal Act to require the Administrator of the Environmental Protection Agency (EPA) to promulgate regulations on the management of infectious waste." In |
https://en.wikipedia.org/wiki/Prime%20power | In mathematics, a prime power is a positive integer which is a positive integer power of a single prime number.
For example: 7 = 7¹, 9 = 3² and 64 = 2⁶ are prime powers, while
6 = 2 × 3, 12 = 2² × 3 and 36 = 6² = 2² × 3² are not.
The sequence of prime powers begins:
2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19, 23, 25, 27, 29, 31, 32, 37, 41, 43, 47, 49, 53, 59, 61, 64, 67, 71, 73, 79, 81, 83, 89, 97, 101, 103, 107, 109, 113, 121, 125, 127, 128, 131, 137, 139, 149, 151, 157, 163, 167, 169, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239, 241, 243, 251, … .
The prime powers are those positive integers that are divisible by exactly one prime number; in particular, the number 1 is not a prime power. Prime powers are also called primary numbers, as in the primary decomposition.
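A short check of this characterization (illustrative code of our own; the function name is an assumption):

```python
# A prime power is a positive integer with exactly one distinct prime factor,
# so 1 does not qualify.
def is_prime_power(m: int) -> bool:
    if m < 2:
        return False
    p = 2
    while p * p <= m:
        if m % p == 0:                 # found the (only allowed) prime factor
            while m % p == 0:
                m //= p
            return m == 1              # anything left means a second prime
        p += 1
    return True                        # m itself is prime

print([m for m in range(2, 30) if is_prime_power(m)])
# [2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19, 23, 25, 27, 29]
```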
Properties
Algebraic properties
Prime powers are powers of prime numbers. Every prime power (except powers of 2 greater than 4) has a primitive root; thus the multiplicative group of integers modulo pⁿ (that is, the group of units of the ring Z/pⁿZ) is cyclic.
The number of elements of a finite field is always a prime power and conversely, every prime power occurs as the number of elements in some finite field (which is unique up to isomorphism).
Combinatorial properties
A property of prime powers used frequently in analytic number theory is that the set of prime powers which are not prime is a small set in the sense that the infinite sum of their reciprocals converges, although the primes are a large set.
Divisibility properties
The totient function (φ) and sigma functions (σ₀) and (σ₁) of a prime power are calculated by the formulas
φ(pⁿ) = pⁿ⁻¹(p − 1),
σ₀(pⁿ) = n + 1,
σ₁(pⁿ) = (pⁿ⁺¹ − 1)/(p − 1).
All prime powers are deficient numbers. A prime power pⁿ is an n-almost prime. It is not known whether a prime power pⁿ can be a member of an amicable pair. If there is such a number, then pⁿ must be greater than 10¹⁵⁰⁰ and n must be greater than 1400.
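A brute-force check of the three formulas above for one case (an illustrative verification of our own):

```python
# Verify phi, sigma_0 and sigma_1 for p = 3, n = 4 (p**n = 81).
from math import gcd

p, n = 3, 4
m = p ** n
divisors = [d for d in range(1, m + 1) if m % d == 0]
phi = sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

assert phi == p ** (n - 1) * (p - 1)                   # totient: 54
assert len(divisors) == n + 1                          # sigma_0: 5 divisors
assert sum(divisors) == (p ** (n + 1) - 1) // (p - 1)  # sigma_1: 121
print("formulas verified for 3**4")
```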
See also
Almost prime
Fermi–Dirac prime
Perfect power
Semiprime |
https://en.wikipedia.org/wiki/Decay%20heat | Decay heat is the heat released as a result of radioactive decay. This heat is produced as an effect of radiation on materials: the energy of the alpha, beta or gamma radiation is converted into the thermal movement of atoms.
Decay heat occurs naturally from decay of long-lived radioisotopes that are primordially present from the Earth's formation.
In nuclear reactor engineering, decay heat continues to be generated after the reactor has been shut down (see SCRAM and nuclear chain reactions) and power generation has been suspended. The decay of the short-lived radioisotopes such as iodine-131 created in fission continues at high power for a time after shut down. The major source of heat production in a newly shut down reactor is due to the beta decay of new radioactive elements recently produced from fission fragments in the fission process.
Quantitatively, at the moment of reactor shutdown, decay heat from these radioactive sources is still 6.5% of the previous core power if the reactor has had a long and steady power history. About 1 hour after shutdown, the decay heat will be about 1.5% of the previous core power. After a day, the decay heat falls to 0.4%, and after a week, it will be only 0.2%. Because radioisotopes of all half-life lengths are present in nuclear waste, enough decay heat continues to be produced in spent fuel rods to require them to spend a minimum of one year, and more typically 10 to 20 years, in a spent fuel pool of water before being further processed. However, the heat produced during this time is still only a small fraction (less than 10%) of the heat produced in the first week after shutdown.
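These figures can be reproduced approximately with the Way–Wigner correlation, a standard rough engineering approximation that the article itself does not quote; real analyses use tabulated decay data:

```python
# Way-Wigner decay-heat correlation (one common form, assumed here).
# t = time since shutdown, T = prior steady operating time, both in seconds;
# returns decay power as a fraction of the previous operating power.
def decay_heat_fraction(t, T):
    return 0.0622 * (t ** -0.2 - (t + T) ** -0.2)

T = 3.15e7                                   # about one year of operation
for label, t in [("1 second", 1.0), ("1 hour", 3600.0),
                 ("1 day", 86400.0), ("1 week", 604800.0)]:
    print(f"{label}: {decay_heat_fraction(t, T):.2%}")
# Broadly consistent with the 6.5% / 1.5% / 0.4% / 0.2% figures quoted above.
```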
If no cooling system is working to remove the decay heat from a crippled and newly shut down reactor, the decay heat may cause the core of the reactor to reach unsafe temperatures within a few hours or days, depending upon the type of core. These extreme temperatures can lead to minor fuel damage (e.g. a few fuel particle failures (0.1 to 0.5%) i |
https://en.wikipedia.org/wiki/Suspensory%20ligament | A suspensory ligament is a ligament that supports a body part, especially an organ.
Types include:
Suspensory ligament of axilla, also known as Gerdy's ligament
Cooper's ligaments, also known as the suspensory ligaments of Cooper or Suspensory ligaments of breast
Suspensory ligament of clitoris
Suspensory ligament of duodenum, also known as the ligament of Treitz
Suspensory ligament of eyeball, also known as Lockwood's ligament
Suspensory ligament of lens, also known as the zonule of Zinn or zonular fibre
Suspensory ligament of ovary
Suspensory ligament of penis
Suspensory ligament of thyroid gland, also known as Berry's ligament
Part of the suspensory apparatus of the leg of a horse. When the leg is supporting the horse's weight, this ligament supports the fetlock joint. Suspensory ligament injuries are common in athletic horses. |
https://en.wikipedia.org/wiki/Pollard%27s%20rho%20algorithm%20for%20logarithms | Pollard's rho algorithm for logarithms is an algorithm introduced by John Pollard in 1978 to solve the discrete logarithm problem, analogous to Pollard's rho algorithm to solve the integer factorization problem.
The goal is to compute γ such that α^γ = β, where β belongs to a cyclic group G generated by α. The algorithm computes integers a, b, A, and B such that α^a β^b = α^A β^B. If the underlying group is cyclic of order n, by substituting β as α^γ and noting that two powers are equal if and only if the exponents are equivalent modulo the order of the base, in this case modulo n, we get that γ is one of the solutions of the equation (B − b)·γ = (a − A) (mod n). Solutions to this equation are easily obtained using the extended Euclidean algorithm.
To find the needed a, b, A, and B the algorithm uses Floyd's cycle-finding algorithm to find a cycle in the sequence x_i = α^(a_i) β^(b_i), where the function f: x_i → x_{i+1} is assumed to be random-looking and thus is likely to enter into a loop of approximate length √(πn/8) after √(πn/8) steps. One way to define such a function is to use the following rules: Divide G into three disjoint subsets of approximately equal size: S₀, S₁, and S₂. If x is in S₀ then double both a and b; if x is in S₁ then increment a; if x is in S₂ then increment b.
Algorithm
Let G be a cyclic group of order n, and given α, β ∈ G and a partition G = S₀ ∪ S₁ ∪ S₂, let f: G → G be the map
f(x) = x² if x ∈ S₀; f(x) = αx if x ∈ S₁; f(x) = βx if x ∈ S₂
and define maps g and h by
g(x, k) = 2k mod n if x ∈ S₀; k + 1 mod n if x ∈ S₁; k if x ∈ S₂
h(x, k) = 2k mod n if x ∈ S₀; k if x ∈ S₁; k + 1 mod n if x ∈ S₂
input: α: a generator of G
       β: an element of G
output: An integer γ such that α^γ = β, or failure
Initialise a_0 ← 0, b_0 ← 0, x_0 ← 1 ∈ G
i ← 1
loop
    x_i ← f(x_{i−1}),
    a_i ← g(x_{i−1}, a_{i−1}),
    b_i ← h(x_{i−1}, b_{i−1})
    x_{2i} ← f(f(x_{2i−2})),
    a_{2i} ← g(f(x_{2i−2}), g(x_{2i−2}, a_{2i−2})),
    b_{2i} ← h(f(x_{2i−2}), h(x_{2i−2}, b_{2i−2}))
    if x_i = x_{2i} then
        r ← b_i − b_{2i}
        if r = 0 return failure
        γ ← r⁻¹(a_{2i} − a_i) mod n
        return γ
    else // x_i ≠ x_{2i}
        i ← i + 1
    end if
end loop
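For concreteness, here is a runnable Python sketch of the procedure above (the article's own example code is C++ and is truncated in this extract); the partition of G by x mod 3 is a common convention, assumed here, that matches the rules given earlier:

```python
from math import gcd

def pollard_rho_log(alpha, beta, N, n):
    """Find gamma with alpha**gamma = beta (mod N); alpha has order n."""
    def step(x, a, b):
        if x % 3 == 0:                       # S_0: square x, double a and b
            return x * x % N, 2 * a % n, 2 * b % n
        if x % 3 == 1:                       # S_1: multiply by alpha, increment a
            return alpha * x % N, (a + 1) % n, b
        return beta * x % N, a, (b + 1) % n  # S_2: multiply by beta, increment b

    x, a, b = 1, 0, 0                        # tortoise: one step per iteration
    X, A, B = 1, 0, 0                        # hare: two steps per iteration
    for _ in range(n):
        x, a, b = step(x, a, b)
        X, A, B = step(*step(X, A, B))
        if x == X:                           # collision: alpha^a beta^b = alpha^A beta^B
            r = (b - B) % n
            if r == 0:
                return None                  # failure; retry with another partition
            d = gcd(r, n)                    # solve r * gamma = (A - a) (mod n)
            if (A - a) % d != 0:
                return None
            base = (A - a) // d * pow(r // d, -1, n // d) % (n // d)
            for k in range(d):               # gamma is one of d candidate solutions
                gamma = (base + k * (n // d)) % n
                if pow(alpha, gamma, N) == beta:
                    return gamma
            return None
    return None

# The example instance below: 2 generates the units modulo 1019.
print(pollard_rho_log(2, 5, 1019, 1018))     # 10, since 2**10 = 1024 = 5 (mod 1019)
```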
Example
Consider, for example, the group generated by 2 modulo N = 1019 (the order of the group is n = 1018; 2 generates the group of units modulo 1019). The algorithm is implemented by the following C++ |
https://en.wikipedia.org/wiki/Footedness | In human biology, footedness is the natural preference of one's left or right foot for various purposes. It is the foot equivalent of handedness. While purposes vary, such as applying the greatest force in a certain foot to complete the action of kick as opposed to stomping, footedness is most commonly associated with the preference of a particular foot in the leading position while engaging in foot- or kicking-related sports, such as association football and kickboxing. A person may thus be left-footed, right-footed or ambipedal (able to use both feet equally well).
Ball games
In association football, the ball is predominantly struck by the foot. Footedness may refer to the foot a player uses to kick with the greatest force and skill. Most people are right-footed, kicking with the right leg. Capable left-footed footballers are rare and therefore quite sought after. Just as rare are "two-footed" players, who are equally capable with both feet. Such players make up only one sixth of players in the top professional leagues in Europe. Two-footedness can be learnt, a notable case being England international Tom Finney, but can only be properly developed in the early years. In Australian Rules Football, several players are equally adept at using both feet to kick the ball, such as Sam Mitchell and Charles Bushnell (footballer, retired).
In basketball, a sport composed almost solely of right-handed players, it is common for most athletes to have a dominant left leg which they would use when jumping to complete a right-hand layup. Hence, left-handed basketball players tend to use their right leg more as they finish a left handed layup (although both right- and left-handed players are usually able to use both hands when finishing near the basket).
In the National Football League, a disproportionate, and increasing, number of punters punt with their left leg, where punting is the position in play that receives and kicks the ball once it leaves the line of scrimmage. At the |
https://en.wikipedia.org/wiki/Registry%20of%20Toxic%20Effects%20of%20Chemical%20Substances | Registry of Toxic Effects of Chemical Substances (RTECS) is a database of toxicity information compiled from the open scientific literature without reference to the validity or usefulness of the studies reported. Until 2001 it was maintained by US National Institute for Occupational Safety and Health (NIOSH) as a freely available publication. It is now maintained by the private company BIOVIA or from several value-added resellers and is available only for a fee or by subscription.
Contents
Six types of toxicity data are included in the file:
Primary irritation
Mutagenic effects
Reproductive effects
Tumorigenic effects
Acute toxicity
Other multiple dose toxicity
Specific numeric toxicity values such as LD50, LC50, TDLo, and TCLo are noted as well as species studied and the route of administration used. For all data the bibliographic source is listed. The studies are not evaluated in any way.
History
RTECS was an activity mandated by the US Congress, established by Section 20(a)(6) of the Occupational Safety and Health Act of 1970 (PL 91-596). The original edition, known as the Toxic Substances List was published on June 28, 1971, and included toxicological data for approximately 5,000 chemicals. The name changed later to its current name Registry of Toxic Effects of Chemical Substances. In January 2001 the database contained 152,970 chemicals. In December 2001 RTECS was transferred from NIOSH to the private company Elsevier MDL. Symyx acquired MDL from Elsevier in 2007 and the Toxicity database was included in the acquisition. The Toxicity database is only accessible for charge on an annual subscription base.
RTECS is available in English, French and Spanish language versions, offered by the Canadian Centre for Occupational Health and Safety. The database subscription is offered on the Web, on CD-ROM and as an Intranet format. The database is also available online from NISC (National Information Services Corporation), RightAnswer.com, and ToxPlanet (Timber |
https://en.wikipedia.org/wiki/Risk-adjusted%20return%20on%20capital | Risk-adjusted return on capital (RAROC) is a risk-based profitability measurement framework for analysing risk-adjusted financial performance and providing a consistent view of profitability across businesses. The concept was developed by Bankers Trust and principal designer Dan Borge in the late 1970s. Note, however, that increasingly return on risk-adjusted capital (RORAC) is used as a measure, whereby the risk adjustment of Capital is based on the capital adequacy guidelines as outlined by the Basel Committee.
Basic formulae
The formula is given by
RAROC = (risk-adjusted return) / (economic capital)
Broadly speaking, in business enterprises, risk is traded off against benefit. RAROC is defined as the ratio of risk-adjusted return to economic capital. The economic capital is the amount of money which is needed to secure survival in a worst-case scenario; it is a buffer against unexpected shocks in market values. Economic capital is a function of market risk, credit risk, and operational risk, and is often calculated by VaR. This use of capital based on risk improves the capital allocation across different functional areas of banks, insurance companies, or any business in which capital is placed at risk for an expected return above the risk-free rate.
The RAROC system allocates capital for two basic reasons:
Risk management
Performance evaluation
For risk management purposes, the main goal of allocating capital to individual business units is to determine the bank's optimal capital structure—that is economic capital allocation is closely correlated with individual business risk. As a performance evaluation tool, it allows banks to assign capital to business units based on the economic value added of each unit.
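As a toy illustration of the ratio defined above: the breakdown of risk-adjusted return into revenues, costs, expected losses and income on capital is an assumption for the example, and all figures are made up, not taken from the article:

```python
# Toy RAROC calculation: risk-adjusted return over allocated economic capital.
def raroc(revenues, costs, expected_loss, capital_income, economic_capital):
    risk_adjusted_return = revenues - costs - expected_loss + capital_income
    return risk_adjusted_return / economic_capital

# A business line: revenues 120, costs 60, expected credit losses 15,
# risk-free income 5 on 250 of allocated economic capital.
print(f"RAROC = {raroc(120, 60, 15, 5, 250):.1%}")   # RAROC = 20.0%
```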
Decision measures based on regulatory and economic capital
With the financial crisis of 2007, and the introduction of Dodd–Frank Act, and Basel III, the minimum required regulatory capital requirements have become onerous. An implication of stringent regulatory capital requirements spurred debates on |
https://en.wikipedia.org/wiki/Biolex | Biolex Therapeutics was a biotechnology firm in the Research Triangle of North Carolina which was founded in 1997 and raised $190 million from investors. It filed for Chapter 7 bankruptcy on July 5, 2012.
The company focused on expression of difficult-to-synthesize recombinant proteins in its LEX platform, which used Lemna, a duckweed. The duckweeds are a family of small aquatic plants that can be grown in sterile culture. Biolex developed recombinant DNA technology for efficiently producing pharmaceutical proteins in Lemna. Therapeutic glycosylated proteins, including monoclonal antibodies and interferon (IFN-alpha2b) have been produced using the LEX platform.
Biolex acquired Epicyte Pharmaceutical Inc. on May 6, 2004, and acquired the LemnaGene SA of Lyon, France in 2005. Biolex was a privately held company, originally backed by Quaker BioVentures, The Trelys Funds, and Polaris Venture Partners. The term "plantibody" is trademarked by Biolex. In May 2012 Biolex announced that it sold the LEX System to Synthon, a Netherlands-based specialty pharmaceutical company. The sale included two preclinical biologics made with the LEX System, BLX-301, a humanized and glyco-optimized anti-CD20 antibody for non-Hodgkin's B-cell lymphoma and other B-cell malignancies and BLX-155, a direct-acting thrombolytic. The financial terms of the sale were not disclosed. |
https://en.wikipedia.org/wiki/Matthew%20Meselson | Matthew Stanley Meselson (born May 24, 1930) is a geneticist and molecular biologist currently at Harvard University, known for his demonstration, with Franklin Stahl, of semi-conservative DNA replication. After completing his Ph.D. under Linus Pauling at the California Institute of Technology, Meselson became a Professor at Harvard University in 1960, where he has remained, today, as Thomas Dudley Cabot Professor of the Natural Sciences.
In the famous Meselson–Stahl experiment of 1958 he and Frank Stahl demonstrated through nitrogen isotope labeling that DNA is replicated semi-conservatively. In addition, Meselson, François Jacob, and Sydney Brenner discovered the existence of messenger RNA in 1961. Meselson has investigated DNA repair in cells and how cells recognize and destroy foreign DNA, and, with Werner Arber, was responsible for the discovery of restriction enzymes.
Since 1963 he has been interested in chemical and biological defense and arms control, has served as a consultant on this subject to various government agencies. Meselson worked with Henry Kissinger under the Nixon administration to convince President Richard Nixon to renounce biological weapons, suspend chemical weapons production, and support an international treaty prohibiting the acquisition of biological agents for hostile purposes, which in 1972 became known as the Biological Weapons Convention.
Meselson has received the Award in Molecular Biology from the National Academy of Sciences, the Public Service Award of the Federation of American Scientists, the Presidential Award of the New York Academy of Sciences, the 1995 Thomas Hunt Morgan Medal of the Genetics Society of America, as well as the Lasker Award for Special Achievement in Medical Science. His laboratory at Harvard currently investigates the biological and evolutionary nature of sexual reproduction, genetic recombination, and aging. Many of his past students are notable biologists, including Nobel Laureate Sidney Altman, as wel |
https://en.wikipedia.org/wiki/Zamiaceae | The Zamiaceae are a family of cycads that are superficially palm or fern-like. They are divided into two subfamilies with eight genera and about 150 species in the tropical and subtropical regions of Africa, Australia and North and South America.
The Zamiaceae, sometimes known as zamiads, are perennial, evergreen, and dioecious. They have subterranean to tall and erect, usually unbranched, cylindrical stems, and stems clad with persistent leaf bases (in Australian genera).
Their leaves are simply pinnate, spirally arranged, and interspersed with cataphylls. The leaflets are sometimes dichotomously divided. The leaflets occur with several sub-parallel, dichotomously branching longitudinal veins; they lack a midrib. Stomata occur either on both surfaces or the undersurface only.
Their roots have small secondary roots. The coralloid roots develop at the base of the stem at or below the soil surface.
Male and female sporophylls are spirally aggregated into determinate cones that grow along the axis. Female sporophylls are simple, appearing peltate, with a barren stipe and an expanded and thickened lamina with 2 (rarely 3 or more) sessile ovules inserted on the inner (axis facing) surface and directed inward. The seeds are angular, with the inner coat hardened and the outer coat fleshy. They are often brightly colored, with 2 cotyledons.
One subfamily, the Encephalartoideae, is characterized by spirally arranged sporophylls (rather than spirally orthostichous), non-articulate leaflets and persistent leaf bases. It is represented in Australia, with two genera and 40 species.
As with all cycads, members of the Zamiaceae are poisonous, producing poisonous glycosides known as cycasins.
The former family Stangeriaceae (which contained Bowenia and Stangeria) has been shown to be nested within Zamiaceae by phylogenetic analysis.
The family first began to diversify during the Cretaceous period.
Genera
Dioon (14 species)
Macrozamia (42 species)
Lepidozamia (2 specie |
https://en.wikipedia.org/wiki/List%20of%20mathematical%20identities | This article lists mathematical identities, that is, identically true relations holding in mathematics.
Bézout's identity (despite its usual name, it is not, properly speaking, an identity)
Binomial inverse theorem
Binomial identity
Brahmagupta–Fibonacci two-square identity
Candido's identity
Cassini and Catalan identities
Degen's eight-square identity
Difference of two squares
Euler's four-square identity
Euler's identity
Fibonacci's identity see Brahmagupta–Fibonacci identity or Cassini and Catalan identities
Heine's identity
Hermite's identity
Lagrange's identity
Lagrange's trigonometric identities
MacWilliams identity
Matrix determinant lemma
Newton's identity
Parseval's identity
Pfister's sixteen-square identity
Sherman–Morrison formula
Sophie Germain identity
Sun's curious identity
Sylvester's determinant identity
Vandermonde's identity
Woodbury matrix identity
Identities for classes of functions
Exterior calculus identities
Fibonacci identities: Combinatorial Fibonacci identities and Other Fibonacci identities
Hypergeometric function identities
List of integrals of logarithmic functions
List of topics related to π
List of trigonometric identities
Inverse trigonometric functions
Logarithmic identities
Summation identities
Vector calculus identities
See also
External links
A Collection of Algebraic Identities
Matrix Identities
Identities |
https://en.wikipedia.org/wiki/Ian%20Wilmut | Sir Ian Wilmut (7 July 1944 – 10 September 2023) was a British embryologist and the chair of the Scottish Centre for Regenerative Medicine at the University of Edinburgh. He is best known as the leader of the research group that in 1996 first cloned a mammal from an adult somatic cell, a Finnish Dorset lamb named Dolly.
Wilmut was appointed OBE in 1999 for services to embryo development and knighted in the 2008 New Year Honours. He, Keith Campbell and Shinya Yamanaka jointly received the 2008 Shaw Prize for Medicine and Life Sciences for their work on cell differentiation in mammals.
Early life and education
Wilmut was born in Hampton Lucy, Warwickshire, England, on 7 July 1944. Wilmut's father, Leonard Wilmut, was a mathematics teacher who suffered from diabetes for fifty years, which eventually caused him to become blind. The younger Wilmut attended the Boys' High School in Scarborough, where his father taught. His early desire was to embark on a naval career, but he was unable to do so due to his colour blindness. As a schoolboy, Wilmut worked as a farm hand on weekends, which inspired him to study Agriculture at the University of Nottingham.
In 1966, Wilmut spent eight weeks working in the laboratory of Christopher Polge, who is credited with developing the technique of cryopreservation in 1949. The following year Wilmut joined Polge's laboratory to undertake a Doctor of Philosophy degree at the University of Cambridge, from where he graduated in 1971 with a thesis on semen cryopreservation. During this time he was a postgraduate student at Darwin College.
Career and research
After completing his PhD, he was involved in research focusing on gametes and embryogenesis, including working at the Roslin Institute.
Wilmut was the leader of the research group that in 1996 first cloned a mammal, a lamb named Dolly. She died of a respiratory disease in 2003. In 2008 Wilmut announced that he would abandon the technique of somatic cell nuclear transfer by which Dolly |
https://en.wikipedia.org/wiki/Equivalent%20weight | In chemistry, equivalent weight (also known as gram equivalent or equivalent mass) is the mass of one equivalent, that is, the mass of a given substance which will combine with or displace a fixed quantity of another substance. The equivalent weight of an element is the mass which combines with or displaces 1.008 gram of hydrogen or 8.0 grams of oxygen or 35.5 grams of chlorine. These values correspond to the atomic weight divided by the usual valence; for oxygen, for example, that is 16.0 g / 2 = 8.0 g.
For acid–base reactions, the equivalent weight of an acid or base is the mass which supplies or reacts with one mole of hydrogen cations (H+). For redox reactions, the equivalent weight of each reactant is the mass which supplies or reacts with one mole of electrons (e−).
Equivalent weight has the units of mass, unlike atomic weight, which is now used as a synonym for relative atomic mass and is dimensionless. Equivalent weights were originally determined by experiment, but (insofar as they are still used) are now derived from molar masses. The equivalent weight of a compound can also be calculated by dividing the molecular mass by the number of positive or negative electrical charges that result from the dissolution of the compound.
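The arithmetic behind these definitions is a single division of the molar mass by the number of equivalents per mole. A minimal sketch (the function name and the sulfuric-acid example are illustrative assumptions, not from the article):

```python
def equivalent_weight(molar_mass: float, equivalents_per_mole: int) -> float:
    """Equivalent weight = molar mass / equivalents per mole, where the
    number of equivalents is the usual valence, the moles of H+ supplied,
    the moles of electrons exchanged, or the charges released on
    dissolution, depending on the reaction type."""
    return molar_mass / equivalents_per_mole

# Oxygen, as in the text: atomic weight 16.0, usual valence 2 -> 8.0 g.
print(equivalent_weight(16.0, 2))     # 8.0
# Illustrative extra case: H2SO4 (98.08 g/mol) supplies 2 mol H+ -> ~49.0 g.
print(equivalent_weight(98.08, 2))    # 49.04
```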
In history
The first equivalent weights were published for acids and bases by Carl Friedrich Wenzel in 1777. A larger set of tables was prepared, possibly independently, by Jeremias Benjamin Richter, starting in 1792. However, neither Wenzel nor Richter had a single reference point for their tables, and so had to publish separate tables for each pair of acid and base.
John Dalton's first table of atomic weights (1808) suggested a reference point, at least for the elements: taking the equivalent weight of hydrogen to be one unit of mass. However, Dalton's atomic theory was far from universally accepted in the early 19th century. One of the greatest problems was the reaction of hydrogen with oxygen to produce water. One gram of hydroge |
https://en.wikipedia.org/wiki/List%20of%20BBS%20software | This is a list of notable bulletin board system (BBS) software packages.
Multi-platform
Citadel – originally written for the CP/M operating system, had many forks for different systems under different names.
CONFER – CONFER II on the MTS, CONFER U on Unix and CONFER V on VAX/VMS, written by Robert Parnes starting in 1975.
Mystic BBS – written by James Coyle with versions for Windows/Linux/ARM Linux/OSX. Past versions: MS-DOS and OS/2.
Synchronet – Windows/Linux/BSD, past versions: MS-DOS and OS/2.
WWIV – WWIV v5.x is supported on both Windows 7+ 32bit as well as Linux 32bit and 64bit. Written by Wayne Bell, included WWIVNet. Past versions: MS-DOS and OS/2.
Altos 68000
PicoSpan
Amiga based
Ami-Express – aka "/X", very popular in the crackers/warez software scene.
C-Net – aka "Cnet"
Apple II series
Diversi-Dial (DDial) – Chat-room atmosphere supporting up to 7 incoming lines allowing links to other DDial boards.
GBBS – Applesoft and assembler-based BBS program by Greg Schaeffer.
GBBS Pro – based on the ACOS or MACOS (modified ACOS) language.
Net-Works II – by Nick Naimo.
SBBS – Sonic BBS by Patrick Sonnek.
Apple Macintosh
Citadel – including Macadel, MacCitadel.
FirstClass (SoftArc)
Hermes
Second Sight
TeleFinder
Atari 8-bit computer
Atari Message Information System – and derivatives
Commodore computers
Blue Board – by Martin Sikes.
Superboard – by Greg Francis and Randy Schnedler.
C*Base – by Gunther Birznieks, Jerome P. Yoner, and David Weinehall.
C-Net DS2 – by Jim Selleck.
Color64 – by Greg Pfountz.
McBBS – by Derek E. McDonald.
CP/M
CBBS – The first ever BBS software, written by Ward Christensen.
Citadel
RBBS
TBBS
Microsoft Windows
Excalibur BBS
Maximus
Mystic BBS
MS-DOS and compatible
Celerity BBS
Citadel – including DragCit, Cit86, TurboCit, Citadel+
Ezycom – written by Peter Davies.
FBB (F6FBB) – packet radio BBS system, still in use.
GBBS (Graphics BBS) – used in the Melbourne area.
GT-Power
L.S.D. |
https://en.wikipedia.org/wiki/WSES | WSES (channel 33) is a television station licensed to Tuscaloosa, Alabama, United States, serving the western portion of the Birmingham market as an affiliate of the digital multicast network Heroes & Icons. The station is owned by Howard Stirk Holdings, a partner company of the Sinclair Broadcast Group. WSES' advertising sales office is located on Golden Crest Drive in Birmingham, and its transmitter is located near County Road 38/Blue Creek Road, east of State Route 69 near Windham Springs.
WGWW (channel 40) in Anniston operates as a full-time satellite of WSES.
History
As an independent station
The station first signed on the air on October 27, 1965, as WCFT-TV. Originally operating as an independent station, it was the first television station to sign on in western Alabama. It was originally owned by Chapman Family Television, a consortium of eight Tuscaloosa businessmen who saw the benefits of operating a television station to serve west-central Alabama, in terms of both business and community service purposes.
However, the station did not return a profit sufficient for its owners throughout its first two years of operation, an issue that led Chapman Family Television to sell the station to South Mississippi Broadcasting, Inc. (later Service Broadcasters) in 1967, becoming the company's second television station, after flagship WDAM-TV in the company's home market of Hattiesburg, Mississippi. The new owners rejuvenated WCFT by heavily investing in the station, purchasing new broadcasting and transmission equipment, and improving the station's image. In addition to carrying syndicated programming, WCFT-TV also aired network programs from CBS and NBC that were not cleared for broadcast in the Birmingham market by WAPI-TV (channel 13, now WVTM-TV); WBMG (channel 42, now WIAT) did the same during that timeframe.
As an exclusive CBS affiliate
On May 31, 1970, when WAPI-TV formally removed CBS programming and became the exclusive NBC affiliate for the Bir |
https://en.wikipedia.org/wiki/Y%20linkage | Y linkage, also known as holandric inheritance (from Ancient Greek ὅλος hólos, "whole" + ἀνδρός andrós, "male"), describes traits that are produced by genes located on the Y chromosome. It is a form of sex linkage.
Y linkage can be difficult to detect. This is partly because the Y chromosome is small and contains fewer genes than the autosomal chromosomes or the X chromosome. It is estimated to contain about 200 genes. Earlier, the human Y chromosome was thought to have little importance. Although the Y chromosome is sex-determining in humans and some other species, not all genes that play a role in sex determination are Y-linked. The Y chromosome generally does not undergo genetic recombination; only small regions called pseudoautosomal regions exhibit recombination. The majority of the Y-chromosome genes that do not recombine are located in the "non-recombining region".
For a trait to be considered Y-linked, it must exhibit these characteristics:
occurs only in males
appears in all sons of males who exhibit that trait
is absent from daughters of trait carriers; the daughters are phenotypically normal and do not have affected offspring.
These requirements were established by the pioneer of Y linkage, Curt Stern. In his paper, Stern detailed genes he suspected to be Y-linked. His requirements at first made Y linkage hard to prove. In the 1950s, using human pedigrees, many genes were incorrectly determined to be Y-linked. Later research adopted more advanced techniques and more sophisticated statistical analysis. Hairy ears are an example of a gene once thought to be Y-linked in humans; however, that hypothesis was discredited. Due to advancements in DNA sequencing, Y linkage is becoming easier to determine and prove. The Y chromosome is almost entirely mapped, revealing many Y-linked traits.
Y linkage is similar to, but different from, X linkage, although both are forms of sex linkage. X linkage can be genetically linked and sex-linked, while |
https://en.wikipedia.org/wiki/Inception%20of%20Darwin%27s%20theory | The inception of Darwin's theory occurred during an intensively busy period which began when Charles Darwin returned from the survey voyage of the Beagle, with his reputation as a fossil collector and geologist already established. He was given an allowance from his father to become a gentleman naturalist rather than a clergyman, and his first tasks were to find suitable experts to describe his collections, write out his Journal and Remarks, and present papers on his findings to the Geological Society of London.
At Darwin's geological début, the anatomist Richard Owen's reports on the fossils showed that extinct species were related to current species in the same locality, and the ornithologist John Gould showed that bird specimens from the Galápagos Islands were of distinct species related to places, not just varieties. These points convinced Darwin that transmutation of species must be occurring, and in his Red Notebook he jotted down his first evolutionary ideas. He began specific transmutation notebooks with speculations on variation in offspring "to adapt & alter the race to changing world", and sketched an "irregularly branched" genealogical branching of a single evolutionary tree.
His observations of an orangutan at the zoo showed how human its expressions looked, confirming his thoughts from the Beagle voyage that there was little gulf between man and animals. He investigated animal breeding and found parallels to nature's removal of runts and keeping of the fit, with farmers deliberately selecting breeding animals so that through "a thousand intermediate forms" their descendants were significantly changed. His speculations on instincts and mental traits suggested that habits, beliefs and facial expressions had evolved, and he considered the social implications. While this was his "prime hobby", he was struggling with an immense workload and began suffering from his illness. After taking a break from work, his thoughts of marriage turned to his cousin Emma Wedgwood |
https://en.wikipedia.org/wiki/Essbase | Essbase is a multidimensional database management system (MDBMS) that provides a platform upon which to build analytic applications. Essbase began as a product from Arbor Software, which merged with Hyperion Software in 1998. Oracle Corporation acquired Hyperion Solutions Corporation in 2007. Until late 2005 IBM also marketed an OEM version of Essbase as DB2 OLAP Server.
The database researcher E. F. Codd coined the term "on-line analytical processing" (OLAP) in a whitepaper that set out twelve rules for analytic systems (an allusion to his earlier famous set of twelve rules defining the relational model). This whitepaper, published by Computerworld, was somewhat explicit in its reference to Essbase features, and when it was later discovered that Codd had been sponsored by Arbor Software, Computerworld withdrew the paper.
In contrast to "on-line transaction processing" (OLTP), OLAP defines a database technology optimized for processing human queries rather than transactions. The results of this orientation were that multidimensional databases oriented their performance requirements around a different set of benchmarks (Analytic Performance Benchmark, APB-1) than that of RDBMS (Transaction Processing Performance Council [TPC]).
Hyperion renamed many of its products in 2005, giving Essbase an official name of Hyperion System 9 BI+ Analytic Services, but the new name was largely ignored by practitioners. The Essbase brand was later returned to the official product name for marketing purposes, but the server software still carried the "Analytic Services" title until it was incorporated into Oracle's Business Intelligence Foundation Suite (BIFS) product.
In August 2005, Information Age magazine named Essbase as one of the 10 most influential technology innovations of the previous 10 years, along with Netscape, the BlackBerry, Google, virtualization, Voice Over IP (VOIP), Linux, XML, the Pentium processor, and ADSL. Editor Kenny MacIver said: "Hyperion Essbase was |
https://en.wikipedia.org/wiki/Acoustical%20Society%20of%20America | The Acoustical Society of America (ASA) is an international scientific society founded in 1929 dedicated to generating, disseminating and promoting the knowledge of acoustics and its practical applications. The Society is primarily a voluntary organization of about 7500 members and attracts the interest, commitment, and service of many professionals.
History
In the summer of 1928, Floyd R. Watson and Wallace Waterfall (1900–1974), a former doctoral student of Watson, were invited by UCLA's Vern Oliver Knudsen to an evening dinner at Knudsen's beach club in Santa Monica. The three physicists decided to form a society of acoustical engineers interested in architectural acoustics. In the early part of December 1928, Wallace Waterfall sent letters to sixteen people inquiring about the possibility of organizing such a society. Harvey Fletcher offered the use of the Bell Telephone Laboratories at 463 West Street in Manhattan as a meeting place for an initial organizational meeting to be held on December 27, 1928. The meeting was attended by forty scientists and engineers who started the Acoustical Society of America (ASA). Temporary officers were elected: Harvey Fletcher as president, V. O. Knudsen as vice-president, Wallace Waterfall as secretary, and Charles Fuller Stoddard (1876–1958) as treasurer. A constitution and by-laws were drafted. The first issue of the Journal of the Acoustical Society of America was published in October 1929.
Technical committees
The Society has 13 technical committees that represent specialized interests in the field of acoustics. The committees organize technical sessions at conferences and are responsible for the representation of their sub-field in ASA publications. The committees include:
Acoustical oceanography
Animal bioacoustics
Architectural acoustics
Biomedical acoustics
Computational acoustics (Technical Specialty Group)
Acoustical engineering
Musical acoustics
Noise
Physical acoustics
Psychoacoustics
Signal processing in acous |
https://en.wikipedia.org/wiki/GEOS%20%288-bit%20operating%20system%29 | GEOS (Graphic Environment Operating System) is a discontinued operating system from Berkeley Softworks (later GeoWorks). Originally designed for the Commodore 64 with its version being released in 1986, enhanced versions of GEOS later became available in 1987 for the Commodore 128 and in 1988 for the Apple II series of computers. A lesser-known version was also released for the Commodore Plus/4.
GEOS closely resembles early versions of the classic Mac OS and includes a graphical word processor (geoWrite) and paint program (geoPaint).
A December 1987 survey by the Commodore-dedicated magazine Compute!'s Gazette found that nearly half of respondents used GEOS. For many years, Commodore bundled GEOS with its redesigned and cost-reduced C64, the C64C. At its peak, GEOS was the third-most-popular microcomputer operating system in the world in terms of units shipped, trailing only MS-DOS and Mac OS (besides the original Commodore 64's KERNAL).
Other GEOS-compatible software packages were available from Berkeley Softworks or from third parties, including a reasonably sophisticated desktop publishing application called geoPublish and a spreadsheet called geoCalc. While geoPublish is not as sophisticated as Aldus Pagemaker and geoCalc not as sophisticated as Microsoft Excel, the packages provide reasonable functionality, and Berkeley Softworks founder Brian Dougherty claimed the company ran its business using its own software on Commodore 8-bit computers for several years.
Development
GEOS was written by a group of programmers at Berkeley Softworks (the GEOS Design Team: Jim DeFrisco, Dave Durran, Michael Farr, Doug Fults, Chris Hawley, Clayton Jung, and Tony Requist, led by Dougherty), who cut their teeth on limited-resource video game machines such as the Atari 2600. GEOS was revered for what it could accomplish on machines with 64–128 kB of RAM and 1–2 MHz of 8-bit processing power.
Unlike many pieces of proprietary software for the C64 and C128, GEOS takes full advant |
https://en.wikipedia.org/wiki/Nuclear%20transfer | Nuclear transfer is a form of cloning. The process involves removing the DNA from an oocyte (unfertilised egg) and injecting the nucleus that contains the DNA to be cloned. In rare instances, the newly constructed cell will divide normally, replicating the new DNA while remaining in a pluripotent state. If the cloned cells are placed in the uterus of a female mammal, a cloned organism develops to term in rare instances. This is how Dolly the Sheep and many other species were cloned. Cows are commonly cloned to select those that have the best milk production. On 24 January 2018, two monkey clones were reported to have been created with the technique for the first time.
Despite this, the low efficiency of the technique has prompted some researchers, notably Ian Wilmut, creator of Dolly the cloned sheep, to abandon it.
Tools and reagents
Nuclear transfer is a delicate process that is a major hurdle in the development of cloning technology. Materials used in this procedure are a microscope, a holding pipette (small vacuum) to keep the oocyte in place, and a micropipette (hair-thin needle) capable of extracting the nucleus of a cell using a vacuum. For some species, such as mouse, a drill is used to pierce the outer layers of the oocyte.
Various chemical reagents are used to increase cloning efficiency. Microtubule inhibitors, such as nocodazole, are used to arrest the oocyte in M phase, during which its nuclear membrane is dissolved. Chemicals are also used to stimulate oocyte activation; when they are applied, the nuclear membrane is completely dissolved.
Somatic cell nuclear transfer
Somatic Cell Nuclear Transfer (SCNT) is the process by which the nucleus of an oocyte (egg cell) is removed and is replaced with the nucleus of a somatic (body) cell (examples include skin, heart, or nerve cell). The two entities fuse to become one and factors in the oocyte cause the somatic nucleus to reprogram to a pluripotent state. The cell contains genetic information identical to the donated s |
https://en.wikipedia.org/wiki/Anoxic%20event | Oceanic anoxic events or anoxic events (anoxia conditions) describe periods wherein large expanses of Earth's oceans were depleted of dissolved oxygen (O2), creating toxic, euxinic (anoxic and sulfidic) waters. Although anoxic events have not happened for millions of years, the geologic record shows that they happened many times in the past. Anoxic events coincided with several mass extinctions and may have contributed to them. These mass extinctions include some that geobiologists use as time markers in biostratigraphic dating. On the other hand, there are widespread, various black-shale beds from the mid-Cretaceous which indicate anoxic events but are not associated with mass extinctions. Many geologists believe oceanic anoxic events are strongly linked to the slowing of ocean circulation, climatic warming, and elevated levels of greenhouse gases. Researchers have proposed enhanced volcanism (the release of CO2) as the "central external trigger for euxinia."
Human activities in the Holocene epoch, such as the release of nutrients from farms and sewage, cause relatively small-scale dead zones around the world. British oceanologist and atmospheric scientist Andrew Watson says full-scale ocean anoxia would take "thousands of years to develop." The idea that modern climate change could lead to such an event is also referred to as Kump's hypothesis; however, evidence for it is still lacking.
Background
The concept of the oceanic anoxic event (OAE) was first proposed in 1976 by Seymour Schlanger (1927–1990) and geologist Hugh Jenkyns and arose from discoveries made by the Deep Sea Drilling Project (DSDP) in the Pacific Ocean. The finding of black, carbon-rich shales in Cretaceous sediments that had accumulated on submarine volcanic plateaus (e.g. Shatsky Rise, Manihiki Plateau), coupled with their identical age to similar, cored deposits from the Atlantic Ocean and known outcrops in Europe—particularly in the geological record of the otherwise limestone-dominated Apennines |
https://en.wikipedia.org/wiki/Ecological%20engineering | Ecological engineering uses ecology and engineering to predict, design, construct or restore, and manage ecosystems that integrate "human society with its natural environment for the benefit of both".
Origins, key concepts, definitions, and applications
Ecological engineering emerged as a new idea in the early 1960s, but its definition has taken several decades to refine. Its implementation is still undergoing adjustment, and its broader recognition as a new paradigm is relatively recent. Ecological engineering was introduced by Howard Odum and others as utilizing natural energy sources as the predominant input to manipulate and control environmental systems. The origins of ecological engineering are in Odum's work with ecological modeling and ecosystem simulation to capture holistic macro-patterns of energy and material flows affecting the efficient use of resources.
Mitsch and Jorgensen summarized five basic concepts that differentiate ecological engineering from other approaches to addressing problems to benefit society and nature: 1) it is based on the self-designing capacity of ecosystems; 2) it can be the field (or acid) test of ecological theories; 3) it relies on system approaches; 4) it conserves non-renewable energy sources; and 5) it supports ecosystem and biological conservation.
Mitsch and Jorgensen were the first to define ecological engineering as designing societal services such that they benefit society and nature, and later noted the design should be systems based, sustainable, and integrate society with its natural environment.
Bergen et al. defined ecological engineering as: 1) utilizing ecological science and theory; 2) applying to all types of ecosystems; 3) adapting engineering design methods; and 4) acknowledging a guiding value system.
Barrett (1999) offers a more literal definition of the term: "the design, construction, operation and management (that is, engineering) of landscape/aquatic structures and associated plant and animal com |
https://en.wikipedia.org/wiki/Flow%20net | A flow net is a graphical representation of two-dimensional steady-state groundwater flow through aquifers.
Construction of a flow net is often used for solving groundwater flow problems where the geometry makes analytical solutions impractical. The method is often used in civil engineering, hydrogeology or soil mechanics as a first check for problems of flow under hydraulic structures like dams or sheet pile walls. The grid obtained by drawing a series of equipotential lines and intersecting streamlines is called a flow net. The flow net is an important tool for analysing two-dimensional irrotational flow problems, and the technique is a graphical solution method.
Basic method
The method consists of filling the flow area with stream and equipotential lines, which are everywhere perpendicular to each other, making a curvilinear grid. Typically there are two surfaces (boundaries) which are at constant values of potential or hydraulic head (upstream and downstream ends), and the other surfaces are no-flow boundaries (i.e., impermeable; for example the bottom of the dam and the top of an impermeable bedrock layer), which define the sides of the outermost streamtubes (see figure 1 for a stereotypical flow net example).
Mathematically, the process of constructing a flow net consists of contouring the two harmonic or analytic functions of potential and stream function. These functions both satisfy the Laplace equation and the contour lines represent lines of constant head (equipotentials) and lines tangent to flowpaths (streamlines). Together, the potential function and the stream function form the complex potential, where the potential is the real part, and the stream function is the imaginary part.
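Because the potential is harmonic, a flow net can also be approximated numerically by relaxing the Laplace equation on a grid and contouring the result. A rough sketch under assumed boundary conditions (the grid size, head values, and iteration count are arbitrary choices for illustration):

```python
import numpy as np

# Toy rectangular domain: hydraulic head fixed at 1.0 on the left edge
# (upstream) and 0.0 on the right edge (downstream); the top and bottom
# edges are treated as no-flow (zero-gradient) boundaries.
h = np.zeros((40, 80))
h[:, 0], h[:, -1] = 1.0, 0.0

for _ in range(2000):  # Jacobi relaxation of the Laplace equation
    h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1]
                            + h[1:-1, :-2] + h[1:-1, 2:])
    h[0, 1:-1], h[-1, 1:-1] = h[1, 1:-1], h[-2, 1:-1]  # no-flow edges

# Equipotential lines of the flow net are contours of h at equal head drops,
# e.g. matplotlib's plt.contour(h, levels=np.linspace(0.0, 1.0, 11)).
```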
The construction of a flow net provides an approximate solution to the flow problem, but it can be quite good even for problems with complex geometries by following a few simple rules (initially developed by Philipp Forchheimer around 1900, and later formalized by Arthur Casagrande in |
https://en.wikipedia.org/wiki/Cofinal%20%28mathematics%29 | In mathematics, a subset B of a preordered set (A, ≤) is said to be cofinal or frequent in A if for every a ∈ A it is possible to find an element b in B that is "larger than a" (explicitly, "larger than a" means a ≤ b).
Cofinal subsets are very important in the theory of directed sets and nets, where "cofinal subnet" is the appropriate generalization of "subsequence". They are also important in order theory, including the theory of cardinal numbers, where the minimum possible cardinality of a cofinal subset of A is referred to as the cofinality of A.
Definitions
Let ≤ be a homogeneous binary relation on a set A.
A subset B ⊆ A is said to be cofinal or frequent with respect to ≤ if it satisfies the following condition:
For every a ∈ A, there exists some b ∈ B such that a ≤ b.
A subset that is not frequent is called infrequent.
This definition is most commonly applied when (A, ≤) is a directed set, which is a preordered set with additional properties.
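For finite sets the defining condition can be checked by brute force; a minimal sketch, with the relation supplied as a predicate (the function names and the divisibility example are illustrative assumptions):

```python
def is_cofinal(B, A, le) -> bool:
    """True if B is cofinal in A: every a in A has some b in B with a <= b."""
    return all(any(le(a, b) for b in B) for a in A)

# Divisibility preorder on {1, ..., 10}: a <= b iff a divides b.
A = range(1, 11)
le = lambda a, b: b % a == 0
print(is_cofinal([6, 7, 8, 9, 10], A, le))  # True: every a divides one of them
print(is_cofinal([2, 3], A, le))            # False: 7 divides neither 2 nor 3
```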
Final functions
A map f : X → Y between two directed sets is said to be final if the image f(X) of X is a cofinal subset of Y.
Coinitial subsets
A subset B ⊆ A is said to be coinitial (or dense in the sense of forcing) if it satisfies the following condition:
For every a ∈ A, there exists some b ∈ B such that b ≤ a.
This is the order-theoretic dual to the notion of cofinal subset.
Cofinal (respectively coinitial) subsets are precisely the dense sets with respect to the right (respectively left) order topology.
Properties
The cofinal relation over partially ordered sets ("posets") is reflexive: every poset is cofinal in itself. It is also transitive: if B is a cofinal subset of a poset A and C is a cofinal subset of B (with the partial ordering of A applied to B), then C is also a cofinal subset of A.
For a partially ordered set with maximal elements, every cofinal subset must contain all maximal elements, otherwise a maximal element that is not in the subset would fail to be any element of the subset, violating the definition of cofinal. For a partially ordered set with a greatest element, a subset is cofinal if and only if it contains that grea |
https://en.wikipedia.org/wiki/Hensel%27s%20lemma | In mathematics, Hensel's lemma, also known as Hensel's lifting lemma, named after Kurt Hensel, is a result in modular arithmetic, stating that if a univariate polynomial has a simple root modulo a prime number p, then this root can be lifted to a unique root modulo any higher power of p. More generally, if a polynomial factors modulo p into two coprime polynomials, this factorization can be lifted to a factorization modulo any higher power of p (the case of roots corresponds to the case of degree 1 for one of the factors).
By passing to the "limit" (in fact this is an inverse limit) when the power of p tends to infinity, it follows that a root or a factorization modulo p can be lifted to a root or a factorization over the p-adic integers.
These results have been widely generalized, under the same name, to the case of polynomials over an arbitrary commutative ring, where p is replaced by an ideal, and "coprime polynomials" means "polynomials that generate an ideal containing 1".
Hensel's lemma is fundamental in p-adic analysis, a branch of analytic number theory.
The proof of Hensel's lemma is constructive, and leads to an efficient algorithm for Hensel lifting, which is fundamental for factoring polynomials, and gives the most efficient known algorithm for exact linear algebra over the rational numbers.
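A minimal sketch of the simple-root case (the interface and names are my own; Python 3.8+ is assumed for the modular inverse via pow). One Newton step per extra power of p lifts the root, since the derivative remains invertible:

```python
def hensel_lift(f, df, r, p, k):
    """Lift a simple root r of f modulo p to a root modulo p**k.

    Requires f(r) = 0 (mod p) and df(r) != 0 (mod p), so that df(r)
    is invertible modulo every power of p."""
    assert f(r) % p == 0 and df(r) % p != 0
    m = p
    for _ in range(k - 1):
        m *= p
        r = (r - f(r) * pow(df(r), -1, m)) % m  # Newton step mod p**i
    return r

# x**2 - 2 has the simple root 3 modulo 7; lift it to a root modulo 7**4.
f, df = (lambda x: x * x - 2), (lambda x: 2 * x)
r = hensel_lift(f, df, 3, 7, 4)
assert f(r) % 7**4 == 0
```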
Modular reduction and lifting
Hensel's original lemma concerns the relation between polynomial factorization over the integers and over the integers modulo a prime number p and its powers. It can be straightforwardly extended to the case where the integers are replaced by any commutative ring, and p is replaced by any maximal ideal (indeed, the maximal ideals of ℤ have the form pℤ, where p is a prime number).
Making this precise requires a generalization of the usual modular arithmetic, and so it is useful to define accurately the terminology that is commonly used in this context.
Let R be a commutative ring, and I an ideal of R. Reduction modulo I refers to the replacement of eve |
https://en.wikipedia.org/wiki/Richard%20P.%20Stanley | Richard Peter Stanley (born June 23, 1944) is an Emeritus Professor of Mathematics at the Massachusetts Institute of Technology, in Cambridge, Massachusetts. From 2000 to 2010, he was the Norman Levinson Professor of Applied Mathematics. He received his Ph.D. at Harvard University in 1971 under the supervision of Gian-Carlo Rota. He is an expert in the field of combinatorics and its applications to other mathematical disciplines.
Contributions
Stanley is known for his two-volume book Enumerative Combinatorics (1986–1999). He is also the author of Combinatorics and Commutative Algebra (1983) and well over 200 research articles in mathematics. He has served as thesis advisor to 60 doctoral students, many of whom have had distinguished careers in combinatorial research. Donald Knuth named Stanley as one of his combinatorial heroes in a 2023 interview.
Awards and honors
Stanley's distinctions include membership in the National Academy of Sciences (elected in 1995), the 2001 Leroy P. Steele Prize for Mathematical Exposition, the 2003 Schock Prize, a plenary lecture at the International Congress of Mathematicians (in Madrid, Spain), and election in 2012 as a fellow of the American Mathematical Society. In 2022 he was awarded the Leroy P. Steele Prize for Lifetime Achievement.
Selected publications
Stanley, Richard P. (1996). Combinatorics and Commutative Algebra, 2nd ed.
Stanley, Richard P. (1997, 1999). Enumerative Combinatorics, Volumes 1 and 2. Cambridge University Press. ISBN 0-521-56069-1.
See also
Exponential formula
Order polynomial
Stanley decomposition
Stanley's reciprocity theorem |
https://en.wikipedia.org/wiki/George%20Devol | George Charles Devol Jr. (February 20, 1912 – August 11, 2011) was an American inventor, best known for creating Unimate, the first industrial robot. Devol's invention earned him the title "Grandfather of Robotics". The National Inventors Hall of Fame says, "Devol's patent for the first digitally operated programmable robotic arm represents the foundation of the modern robotics industry."
The concept of the robot arm has evolved over time with contributions from various individuals and researchers. However, the first patent for an industrial robot was filed in 1954 by George Devol, an American inventor and entrepreneur, who is often credited as the "father of the robot arm."
Early life
George Devol was born in an upper-middle-class family in Louisville, Kentucky. He attended Riordan Prep school.
United Cinephone
Foregoing higher education, Devol went into business in 1932, forming United Cinephone to produce variable area recording directly onto film for the new sound motion pictures ("talkies"). However, he later learned that companies like RCA and Western Electric were working in the same area, and discontinued the product.
During that time, Devol developed and patented industrial lighting and invented the automatic opening door.
World War II
In 1939, Devol applied for a patent for proximity controls for use in laundry press machines, based on a radio frequency field. This control would automatically open and close laundry presses when workers approached the machines. After World War II began, the patent office told Devol that his patent application would be placed on hold for the duration of the conflict.
Around that time, Devol sold his interest in United Cinephone and approached Sperry Gyroscope to pitch his ideas on radar technology. He was retained by Sperry as manager of the Special Projects Department, which developed radar devices and microwave test equipment.
Later in the war, he approached Auto-Ordnance Company regarding products that company |
https://en.wikipedia.org/wiki/Pectineus%20muscle | The pectineus muscle (, from the Latin word pecten, meaning comb) is a flat, quadrangular muscle, situated at the anterior (front) part of the upper and medial (inner) aspect of the thigh. The pectineus muscle is the most anterior adductor of the hip. The muscle's primary action is hip flexion; it also produces adduction and internal rotation of the hip.
It can be classified in the medial compartment of thigh (when the function is emphasized) or the anterior compartment of thigh (when the nerve is emphasized).
Structure
The pectineus muscle arises from the pectineal line of the pubis and to a slight extent from the surface of bone in front of it, between the iliopectineal eminence and pubic tubercle, and from the fascia covering the anterior surface of the muscle; the fibers pass downward, backward, and lateral, to be inserted into the pectineal line of the femur which leads from the lesser trochanter to the linea aspera.
Relations
The pectineus is in relation by its anterior surface with the pubic portion of the fascia lata, which separates it from the femoral artery and vein and internal saphenous vein, and lower down with the profunda femoris artery.
By its posterior surface with the capsule of the hip joint, and with the obturator externus and adductor brevis, the obturator artery and vein being interposed.
By its external border with the psoas major, the femoral artery resting upon the line of interval.
By its internal border with the outer edge of the adductor longus.
The obturator foramen is situated directly behind this muscle, which forms one of its coverings.
It forms part of the floor of the femoral triangle.
Innervation
The lumbar plexus is formed from the anterior rami of nerves L1 to L4 and some fibers from T12. With only five roots and two divisions, it is less complex than the brachial plexus and gives rise to a number of nerves including the femoral nerve and accessory obturator nerve. The pectineus muscle is considered a composite muscle as th |
https://en.wikipedia.org/wiki/Total%20angular%20momentum%20quantum%20number | In quantum mechanics, the total angular momentum quantum number parametrises the total angular momentum of a given particle, by combining its orbital angular momentum and its intrinsic angular momentum (i.e., its spin).
If s is the particle's spin angular momentum and ℓ its orbital angular momentum vector, the total angular momentum j is
j = ℓ + s.
The associated quantum number is the main total angular momentum quantum number j. It can take the following range of values, jumping only in integer steps:
|ℓ − s| ≤ j ≤ ℓ + s,
where ℓ is the azimuthal quantum number (parameterizing the orbital angular momentum) and s is the spin quantum number (parameterizing the spin).
The relation between the total angular momentum vector j and the total angular momentum quantum number j is given by the usual relation (see angular momentum quantum number)
‖j‖ = √(j (j + 1)) ħ.
The vector's z-projection is given by
j_z = m_j ħ,
where m_j is the secondary total angular momentum quantum number, and ħ is the reduced Planck constant. It ranges from −j to +j in steps of one. This generates 2j + 1 different values of m_j.
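These selection rules are straightforward to enumerate; a small sketch (the helper names are illustrative) that lists the allowed j values and the 2j + 1 count of m_j states:

```python
from fractions import Fraction

def allowed_j(l, s):
    """Yield j from |l - s| up to l + s in integer steps."""
    j = abs(l - s)
    while j <= l + s:
        yield j
        j += 1

# Example: a spin-1/2 particle with orbital quantum number l = 1.
for j in allowed_j(1, Fraction(1, 2)):
    print(j, int(2 * j + 1))   # j = 1/2 -> 2 m_j values, j = 3/2 -> 4
```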
The total angular momentum corresponds to the Casimir invariant of the Lie algebra so(3) of the three-dimensional rotation group.
See also
Principal quantum number
Orbital angular momentum quantum number
Magnetic quantum number
Spin quantum number
Angular momentum coupling
Clebsch–Gordan coefficients
Angular momentum diagrams (quantum mechanics)
Rotational spectroscopy |
https://en.wikipedia.org/wiki/Near-term%20digital%20radio | The Near-term digital radio (NTDR) program provided a prototype mobile ad hoc network (MANET) radio system to the United States Army, starting in the 1990s. The MANET protocols were provided by Bolt, Beranek and Newman; the radio hardware was supplied by ITT. These systems have been fielded by the United Kingdom as the High-capacity data radio (HCDR) and by the Israelis as the Israeli data radio. They have also been purchased by a number of other countries for experimentation.
The NTDR protocols consist of two components: clustering and routing. The clustering algorithms dynamically organize a given network into cluster heads and cluster members. The cluster heads create a backbone; the cluster members use the services of this backbone to send and receive packets. The cluster heads use a link-state routing algorithm to maintain the integrity of their backbone and to track the locations of cluster members.
The NTDR routers also use a variant of Open Shortest Path First (OSPF) that is called Radio-OSPF (ROSPF). ROSPF does not use the OSPF hello protocol for link discovery, etc. Instead, OSPF adjacencies are created and destroyed as a function of MANET information that is distributed by the NTDR routers, both cluster heads and cluster members. It also supported multicasting. |
https://en.wikipedia.org/wiki/Feedwater%20heater | A feedwater heater is a power plant component used to pre-heat water delivered to a steam generating boiler. Preheating the feedwater reduces the irreversibilities involved in steam generation and therefore improves the thermodynamic efficiency of the system. This reduces plant operating costs and also helps to avoid thermal shock to the boiler metal when the feedwater is introduced back into the steam cycle.
In a steam power plant (usually modeled as a modified Rankine cycle), feedwater heaters allow the feedwater to be brought up to the saturation temperature very gradually. This minimizes the inevitable irreversibilities associated with heat transfer to the working fluid (water). See the article on the second law of thermodynamics for a further discussion of such irreversibilities.
Cycle discussion and explanation
The energy used to heat the feedwater is usually derived from steam extracted between the stages of the steam turbine. Therefore, the steam that would be used to perform expansion work in the turbine (and therefore generate power) is not utilized for that purpose. The percentage of the total cycle steam mass flow used for the feedwater heater is termed the extraction fraction and must be carefully optimized for maximum power plant thermal efficiency since increasing this fraction causes a decrease in turbine power output.
Feedwater heaters can be either "open" or "closed" heat exchangers. An open heat exchanger is one in which extracted steam is allowed to mix with the feedwater. This kind of heater will normally require a feed pump at both the feed inlet and outlet since the pressure in the heater is between the boiler pressure and the condenser pressure. A deaerator is a special case of the open feedwater heater which is specifically designed to remove non-condensable gases from the feedwater.
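For an open heater, the extraction fraction follows from a steady-flow energy balance on the mixing of extracted steam with the incoming feedwater: y·h_extract + (1 − y)·h_in = h_out. A sketch with illustrative enthalpy values (not from the article):

```python
def extraction_fraction(h_extract, h_fw_in, h_fw_out):
    """Steam mass fraction y bled from the turbine so that mixing
    y*h_extract + (1 - y)*h_fw_in yields feedwater at h_fw_out."""
    return (h_fw_out - h_fw_in) / (h_extract - h_fw_in)

# Illustrative enthalpies in kJ/kg: extracted steam, condensate entering,
# and saturated liquid leaving the open heater.
y = extraction_fraction(h_extract=2700.0, h_fw_in=192.0, h_fw_out=640.0)
print(round(y, 3))  # ~0.179: this share of cycle flow skips the later stages
```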
Closed feedwater heaters are typically shell and tube heat exchangers where the feedwater passes throughout the tubes and is heated by turbine extract |
https://en.wikipedia.org/wiki/C%20parity | In physics, the C parity or charge parity is a multiplicative quantum number of some particles that describes their behavior under the symmetry operation of charge conjugation.
Charge conjugation changes the sign of all quantum charges (that is, additive quantum numbers), including the electric charge, baryon number and lepton number, and the flavor charges strangeness, charm, bottomness, topness and the third component of isospin (I3). In contrast, it does not affect the mass, linear momentum or spin of a particle.
Formalism
Consider an operation C that transforms a particle into its antiparticle,
C |ψ⟩ = |ψ̄⟩.
Both states must be normalizable, so that
1 = ⟨ψ|ψ⟩ = ⟨ψ̄|ψ̄⟩ = ⟨ψ|C†C|ψ⟩,
which implies that C is unitary,
C†C = 1.
By acting on the particle twice with the C operator,
C² |ψ⟩ = C |ψ̄⟩ = |ψ⟩,
we see that C² = 1 and hence C⁻¹ = C. Putting this all together, we see that
C = C⁻¹ = C†,
meaning that the charge conjugation operator is Hermitian and therefore a physically observable quantity.
Eigenvalues
For the eigenstates of charge conjugation,
C |ψ⟩ = η_C |ψ⟩.
As with parity transformations, applying C twice must leave the particle's state unchanged,
C² |ψ⟩ = η_C² |ψ⟩ = |ψ⟩,
allowing only eigenvalues η_C = ±1, the so-called C-parity or charge parity of the particle.
Eigenstates
The above implies that for eigenstates, |ψ̄⟩ = ±|ψ⟩. Since antiparticles and particles have charges of opposite sign, only states with all quantum charges equal to zero, such as the photon and particle–antiparticle bound states like the neutral pion, η or positronium, are eigenstates of C.
Multiparticle systems
For a system of free particles, the C parity is the product of C parities for each particle.
In a pair of bound mesons there is an additional component due to the orbital angular momentum. For example, in a bound state of two pions, π+ π− with an orbital angular momentum L, exchanging π+ and π− inverts the relative position vector, which is identical to a parity operation. Under this operation, the angular part of the spatial wave function contributes a phase factor of (−1)L, where L is the angular momentum quantum number associated with L.
C(π⁺π⁻) = (−1)^L.
With a two-ferm |
https://en.wikipedia.org/wiki/Insect%20repellent | An insect repellent (also commonly called "bug spray") is a substance applied to the skin, clothing, or other surfaces to discourage insects (and arthropods in general) from landing or climbing on that surface. Insect repellents help prevent and control the outbreak of insect-borne (and other arthropod-borne) diseases such as malaria, Lyme disease, dengue fever, bubonic plague, river blindness, and West Nile fever. Pest animals commonly serving as vectors for disease include insects such as fleas, flies, and mosquitoes, and ticks (arachnids).
Some insect repellents are insecticides (bug killers), but most simply discourage insects and send them flying or crawling away. Nearly any repellent would be fatal upon reaching the median lethal dose, but classification as an insecticide implies death even at lower doses.
Effectiveness
Synthetic repellents tend to be more effective and/or longer lasting than "natural" repellents.
For protection against mosquito bites, the U.S. Centers for Disease Control (CDC) recommends DEET, icaridin (picaridin, KBR 3023), oil of lemon eucalyptus (para-menthane-diol or PMD), IR3535 and 2-undecanone with the caveat that higher percentages of the active ingredient provide longer protection.
In 2015, researchers at New Mexico State University tested 10 commercially available products for their effectiveness at repelling mosquitoes. On the mosquito Aedes aegypti, the vector of Zika virus, only one repellent that did not contain DEET had a strong effect for the duration of the 240-minute test: a lemon eucalyptus oil repellent. All DEET-containing mosquito repellents were active.
In one comparative study from 2004, IR3535 was as effective or better than DEET in protection against Aedes aegypti and Culex quinquefasciatus mosquitoes. Other sources (official publications of the associations of German physicians as well as of German druggists) suggest the contrary and state DEET is still the most efficient substance available and the substance of choice f |
https://en.wikipedia.org/wiki/Atomic%20battery | An atomic battery, nuclear battery, radioisotope battery or radioisotope generator is a device which uses energy from the decay of a radioactive isotope to generate electricity. Like nuclear reactors, they generate electricity from nuclear energy, but differ in that they do not use a chain reaction. Although commonly called batteries, they are technically not electrochemical and cannot be charged or recharged. They are very costly, but have an extremely long life and high energy density, and so they are typically used as power sources for equipment that must operate unattended for long periods of time, such as spacecraft, pacemakers, underwater systems and automated scientific stations in remote parts of the world.
Nuclear battery technology began in 1913, when Henry Moseley first demonstrated a current generated by charged particle radiation. The field received considerable in-depth research attention for applications requiring long-life power sources for space needs during the 1950s and 1960s. In 1954 RCA researched a small atomic battery for small radio receivers and hearing aids. Since RCA's initial research and development in the early 1950s, many types and methods have been designed to extract electrical energy from nuclear sources. The scientific principles are well known, but modern nano-scale technology and new wide-bandgap semiconductors have created new devices and interesting material properties not previously available.
Nuclear batteries can be classified by energy conversion technology into two main groups: thermal converters and non-thermal converters. The thermal types convert some of the heat generated by the nuclear decay into electricity. The most notable example is the radioisotope thermoelectric generator (RTG), often used in spacecraft. The non-thermal converters extract energy directly from the emitted radiation, before it is degraded into heat. They are easier to miniaturize and do not require a thermal gradient to operate, so they are sui |
https://en.wikipedia.org/wiki/Lua | Lua or LUA may refer to:
Science and technology
Lua (programming language)
Latvia University of Agriculture
Last universal ancestor, in evolution
Ethnicity and language
Lua people, of Laos
Lawa people, of Thailand sometimes referred to as Lua
Lua language (disambiguation), several languages (including Lua’)
Luba-Kasai language, ISO 639 code
Lai (surname) (賴), Chinese, sometimes romanised as Lua
Places
Tenzing-Hillary Airport (IATA code), in Lukla, Nepal
One of the Duff Islands
People
Lua (goddess), a Roman goddess
Saint Lua (died c. 609)
Lua Blanco (born 1987), Brazilian actress and singer
Lua Getsinger (1871–1916)
A member of Weki Meki band
Other uses
Lua (martial art), of Hawaii
"Lua" (song), by Bright Eyes |
https://en.wikipedia.org/wiki/Zymography | Zymography is an electrophoretic technique for the detection of hydrolytic enzymes, based on the substrate repertoire of the enzyme. Three types of zymography are used: in gel zymography, in situ zymography, and in vivo zymography. For instance, gelatin embedded in a polyacrylamide gel will be digested by active gelatinases run through the gel. After Coomassie staining, areas of degradation are visible as clear bands against a darkly stained background.
Modern usage of the term zymography has been adapted to define the study and cataloging of fermented products, such as beer or wine, often by specific brewers or winemakers or within an identified category of fermentation such as with a particular strain of yeast or species of bacteria.
Zymography also refers to a collection of related, fermented products, considered as a body of work. For example, all of the beers produced by a particular brewery could collectively be referred to as its zymography.
See also Zymology, the applied science of zymography. Zymology relates to the biochemical processes of fermentation, especially the selection of fermenting yeast and bacteria in brewing, winemaking, and other fermented foods. For example, beer-making involves the application of top-fermenting (ale) or bottom-fermenting (lager) yeast to produce the desired variety of beer. The yeast's metabolic by-products can impact the flavor profile of the beer, e.g. diacetyl (a buttery or butterscotch taste or aroma).
Gel zymography
Samples are prepared in a standard, non-reducing loading buffer for SDS-PAGE. No reducing agent or boiling is necessary, since these would interfere with refolding of the enzyme. A suitable substrate (e.g. gelatin or casein for protease detection) is embedded in the resolving gel during preparation of the acrylamide gel. Following electrophoresis, the SDS is removed from the gel (or zymogram) by incubation in unbuffered Triton X-100, followed by incubation in an appropriate digestion buffer, for an optimized length of t |
https://en.wikipedia.org/wiki/Online%20magazine | An online magazine is a magazine published on the Internet, through bulletin board systems and other forms of public computer networks. One of the first magazines to convert from a print magazine format to an online only magazine was the computer magazine Datamation. Some online magazines distributed through the World Wide Web call themselves webzines. An ezine (also spelled e-zine) is a more specialized term appropriately used for small magazines and newsletters distributed by any electronic method, for example, by email. Some social groups may use the terms cyberzine and hyperzine when referring to electronically distributed resources. Similarly, some online magazines may refer to themselves as "electronic magazines", "digital magazines", or "e-magazines" to reflect their readership demographics or to capture alternative terms and spellings in online searches.
An online magazine shares some features with a blog and also with online newspapers, but can usually be distinguished by its approach to editorial control. Magazines typically have editors or editorial boards who review submissions and perform a quality control function to ensure that all material meets the expectations of the publishers (those investing time or money in its production) and the readership.
Many large print publishers now provide digital reproduction of their print magazine titles through various online services for a fee. These service providers also refer to their collections of these digital format products as online magazines, and sometimes as digital magazines.
Online magazines representing matters of interest to specialists or societies for academic subjects, science, trade, or industry are typically referred to as online journals.
Business model
Many general interest online magazines provide free access to all aspects of their online content, although some publishers have opted to require a subscription fee to access premium online article and/or multimedia content. Online magazi |
https://en.wikipedia.org/wiki/Rough%20set | In computer science, a rough set, first described by Polish computer scientist Zdzisław I. Pawlak, is a formal approximation of a crisp set (i.e., conventional set) in terms of a pair of sets which give the lower and the upper approximation of the original set. In the standard version of rough set theory (Pawlak 1991), the lower- and upper-approximation sets are crisp sets, but in other variations, the approximating sets may be fuzzy sets.
Definitions
The following section contains an overview of the basic framework of rough set theory, as originally proposed by Zdzisław I. Pawlak, along with some of the key definitions. More formal properties and boundaries of rough sets can be found in Pawlak (1991) and cited references. The initial and basic theory of rough sets is sometimes referred to as "Pawlak Rough Sets" or "classical rough sets", as a means to distinguish from more recent extensions and generalizations.
Information system framework
Let I = (U, A) be an information system (attribute–value system), where U is a non-empty, finite set of objects (the universe) and A is a non-empty, finite set of attributes such that a : U → V_a for every a ∈ A. Here V_a is the set of values that attribute a may take. The information table assigns a value a(x) from V_a to each attribute a and object x in the universe U.
With any P ⊆ A there is an associated equivalence relation IND(P):
IND(P) = {(x, y) ∈ U × U | for all a ∈ P, a(x) = a(y)}.
The relation IND(P) is called a P-indiscernibility relation. The partition of U is a family of all equivalence classes of IND(P) and is denoted by U/IND(P) (or U/P).
If (x, y) ∈ IND(P), then x and y are indiscernible (or indistinguishable) by attributes from P.
The equivalence classes of the P-indiscernibility relation are denoted [x]_P.
Example: equivalence-class structure
For example, consider the following information table:
Sample Information System

Object | P1 | P2 | P3 | P4 | P5
O1     |  1 |  2 |  0 |  1 |  1
O2     |  1 |  2 |  0 |  1 |  1
O3     |  2 |  0 |  0 |  1 |  0
O4     |  0 |  0 |  1 |  2 |  1
O5     |  2 |  1 |  0 | |
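The P-indiscernibility classes can be computed directly from such a table by grouping objects on their attribute value tuples; a minimal sketch using the four complete rows above (the object and attribute names follow the reconstructed table and are assumptions):

```python
from collections import defaultdict

# The four complete rows of the sample table, attribute order P1 ... P5.
table = {
    "O1": (1, 2, 0, 1, 1),
    "O2": (1, 2, 0, 1, 1),
    "O3": (2, 0, 0, 1, 0),
    "O4": (0, 0, 1, 2, 1),
}

def indiscernibility_classes(table, attrs):
    """Partition the universe by equality on the chosen attribute indices."""
    classes = defaultdict(set)
    for obj, values in table.items():
        classes[tuple(values[i] for i in attrs)].add(obj)
    return list(classes.values())

# Over the full attribute set, O1 and O2 are indiscernible; the rest are
# singletons: [{'O1', 'O2'}, {'O3'}, {'O4'}] (set order may vary).
print(indiscernibility_classes(table, attrs=[0, 1, 2, 3, 4]))
```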
https://en.wikipedia.org/wiki/Grothendieck%20group | In mathematics, the Grothendieck group, or group of differences, of a commutative monoid M is a certain abelian group. This abelian group is constructed from M in the most universal way, in the sense that any abelian group containing a homomorphic image of M will also contain a homomorphic image of the Grothendieck group of M. The Grothendieck group construction takes its name from a specific case in category theory, introduced by Alexander Grothendieck in his proof of the Grothendieck–Riemann–Roch theorem, which resulted in the development of K-theory. This specific case is the monoid of isomorphism classes of objects of an abelian category, with the direct sum as its operation.
Grothendieck group of a commutative monoid
Motivation
Given a commutative monoid M, "the most general" abelian group that arises from M is to be constructed by introducing inverse elements to all elements of M. Such an abelian group always exists; it is called the Grothendieck group of M. It is characterized by a certain universal property and can also be concretely constructed from M.
If M does not have the cancellation property (that is, there exist a, b and c in M such that a ≠ b and a + c = b + c), then the Grothendieck group K cannot contain M. In particular, in the case of a monoid operation denoted multiplicatively that has a zero element satisfying 0 · x = 0 for every x, the Grothendieck group must be the trivial group (group with only one element), since one must have
x = 1 · x = (0⁻¹ · 0) · x = 0⁻¹ · (0 · x) = 0⁻¹ · 0 = 1
for every x.
Universal property
Let M be a commutative monoid. Its Grothendieck group is an abelian group K with a monoid homomorphism i : M → K satisfying the following universal property: for any monoid homomorphism f : M → A from M to an abelian group A, there is a unique group homomorphism g : K → A such that f = g ∘ i.
This expresses the fact that any abelian group A that contains a homomorphic image of M will also contain a homomorphic image of K, K being the "most general" abelian group containing a homomorphic image of M.
Explicit constructions
To construct the Grothendieck group K |
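As a concrete illustration of the construction, the Grothendieck group of the cancellative monoid (ℕ, +) is ℤ; in the sketch below a pair (a, b) stands for the formal difference a − b, with (a, b) ~ (c, d) iff a + d = c + b (for non-cancellative monoids the relation needs an extra witness k with a + d + k = c + b + k):

```python
def normalize(pair):
    """Canonical representative of the class of (a, b): subtract min(a, b)."""
    a, b = pair
    m = min(a, b)
    return (a - m, b - m)

def add(p, q):
    # Addition of formal differences is componentwise.
    return normalize((p[0] + q[0], p[1] + q[1]))

def inverse(p):
    # The inverse of a - b is b - a: swap the components.
    return (p[1], p[0])

three = normalize((5, 2))            # the class of 5 - 2, i.e. the integer 3
print(add(three, inverse(three)))    # (0, 0): the identity, 3 + (-3) = 0
```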
https://en.wikipedia.org/wiki/Semiset | In set theory, a semiset is a proper class that is a subclass of a set. In the typical foundations of Zermelo–Fraenkel set theory, semisets are impossible due to the axiom schema of specification.
The theory of semisets was proposed and developed by Czech mathematicians Petr Vopěnka and Petr Hájek (1972). It is based on a modification of the von Neumann–Bernays–Gödel set theory; in standard NBG, the existence of semisets is precluded by the axiom of separation.
The concept of semisets opens the way for a formulation of an alternative set theory.
In particular, Vopěnka's Alternative Set Theory (1979) axiomatizes the concept of semiset, supplemented with several additional principles.
Semisets can be used to represent sets with imprecise boundaries. Novák (1984) studied approximation of semisets by fuzzy sets, which are often more suitable for practical applications of the modeling of imprecision.
Vopěnka's alternative set theory
Vopěnka's "Alternative Set Theory" builds on some ideas of the theory of semisets, but also introduces more radical changes: for example, all sets are "formally" finite, which means that sets in AST satisfy the law of mathematical induction for set-formulas (more precisely: the part of AST that consists of axioms related to sets only is equivalent to the Zermelo–Fraenkel (or ZF) set theory, in which the axiom of infinity is replaced by its negation). However, some of these sets contain subclasses that are not sets, which makes them different from Cantor (ZF) finite sets and they are called infinite in AST. |
https://en.wikipedia.org/wiki/Radical%20of%20an%20integer | In number theory, the radical of a positive integer $n$ is defined as the product of the distinct prime numbers dividing $n$. Each prime factor of $n$ occurs exactly once as a factor of this product:
$$\operatorname{rad}(n) = \prod_{\substack{p \mid n \\ p \text{ prime}}} p$$
The radical plays a central role in the statement of the abc conjecture.
Examples
Radical numbers for the first few positive integers are
1, 2, 3, 2, 5, 6, 7, 2, 3, 10, 11, 6, 13, 14, 15, 2, 17, 6, 19, 10, 21, 22, 23, 6, 5, 26, 3, 14, 29, 30, 31, 2, 33, 34, 35, 6, 37, 38, 39, 10, 41, 42, 43, 22, 15, 46, 47, 6, 7, 10, ... .
For example,
$$504 = 2^3 \cdot 3^2 \cdot 7,$$
and therefore
$$\operatorname{rad}(504) = 2 \cdot 3 \cdot 7 = 42.$$
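This definition translates directly into a short program (a minimal sketch using trial division, adequate for small $n$):

```python
def rad(n):
    """Product of the distinct primes dividing n (the square-free kernel)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            result *= p
            while n % p == 0:   # strip this prime factor entirely
                n //= p
        p += 1
    if n > 1:                   # any leftover factor is prime
        result *= n
    return result

print([rad(n) for n in range(1, 11)])  # [1, 2, 3, 2, 5, 6, 7, 2, 3, 10]
print(rad(504))                        # 42
```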
Properties
The function $\operatorname{rad}$ is multiplicative (but not completely multiplicative).
The radical of any integer $n$ is the largest square-free divisor of $n$, and so it is also described as the square-free kernel of $n$. There is no known polynomial-time algorithm for computing the square-free part of an integer.
The definition is generalized to the largest $t$-free divisor of $n$, $\operatorname{rad}_t$, which are multiplicative functions which act on prime powers as
$$\operatorname{rad}_t(p^e) = p^{\min(e,\, t-1)}$$
The cases $t = 3$ and $t = 4$ are tabulated in the OEIS.
The notion of the radical occurs in the abc conjecture, which states that, for any $\varepsilon > 0$, there exists a finite $K_\varepsilon$ such that, for all triples of coprime positive integers $a$, $b$, and $c$ satisfying $a + b = c$,
$$c < K_\varepsilon \operatorname{rad}(abc)^{1+\varepsilon}$$
For any integer $n$, the nilpotent elements of the finite ring $\mathbb{Z}/n\mathbb{Z}$ are all of the multiples of $\operatorname{rad}(n)$.
https://en.wikipedia.org/wiki/List%20of%20alternative%20set%20theories | In mathematical logic, an alternative set theory is any of the alternative mathematical approaches to the concept of set and any alternative to the de facto standard set theory described in axiomatic set theory by the axioms of Zermelo–Fraenkel set theory.
Alternative set theories
Alternative set theories include:
Vopěnka's alternative set theory
Von Neumann–Bernays–Gödel set theory
Morse–Kelley set theory
Tarski–Grothendieck set theory
Ackermann set theory
Type theory
New Foundations
Positive set theory
Internal set theory
Naive set theory
S (set theory)
Kripke–Platek set theory
Scott–Potter set theory
Constructive set theory
Zermelo set theory
General set theory
See also
Non-well-founded set theory
Notes
Systems of set theory
Mathematics-related lists |
https://en.wikipedia.org/wiki/Mehrotra%20predictor%E2%80%93corrector%20method | Mehrotra's predictor–corrector method in optimization is a specific interior point method for linear programming. It was proposed in 1989 by Sanjay Mehrotra.
The method is based on the fact that at each iteration of an interior point algorithm it is necessary to compute the Cholesky decomposition (factorization) of a large matrix to find the search direction. The factorization step is the most computationally expensive step in the algorithm. Therefore, it makes sense to use the same decomposition more than once before recomputing it.
At each iteration of the algorithm, Mehrotra's predictor–corrector method uses the same Cholesky decomposition to find two different directions: a predictor and a corrector.
The idea is to first compute an optimizing search direction based on a first order term (predictor). The step size that can be taken in this direction is used to evaluate how much centrality correction is needed. Then, a corrector term is computed: this contains both a centrality term and a second order term.
The complete search direction is the sum of the predictor direction and the corrector direction.
Although there is no theoretical complexity bound on it yet, Mehrotra's predictor–corrector method is widely used in practice. Its corrector step uses the same Cholesky decomposition found during the predictor step in an effective way, and thus it is only marginally more expensive than a standard interior point algorithm. However, the additional overhead per iteration is usually paid off by a reduction in the number of iterations needed to reach an optimal solution. It also appears to converge very fast when close to the optimum.
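The cost-saving idea can be sketched as follows. This is an illustrative fragment, not Mehrotra's full algorithm: `A`, `x`, `s` are assumed to be the constraint matrix and the current primal and dual-slack iterates, and `rhs_affine`/`rhs_corrector` stand in for the two right-hand sides derived from the KKT residuals; the point is that the expensive Cholesky factorization is computed once and reused for both solves.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def predictor_corrector_directions(A, x, s, rhs_affine, rhs_corrector):
    D = np.diag(x / s)              # scaling matrix built from the current iterate
    M = A @ D @ A.T                 # normal-equations matrix (symmetric positive definite)
    factor = cho_factor(M)          # the expensive step: done once per iteration
    dy_affine = cho_solve(factor, rhs_affine)        # predictor (affine-scaling) solve
    dy_corrector = cho_solve(factor, rhs_corrector)  # corrector solve reuses the factorization
    return dy_affine, dy_corrector
```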
Derivation
The derivation of this section follows the outline by Nocedal and Wright.
Predictor step - Affine scaling direction
A linear program can always be formulated in the standard form
$$\min_x \ c^{\mathsf T} x \quad \text{subject to} \quad Ax = b, \ x \geq 0,$$
where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$ and $c \in \mathbb{R}^n$ define the problem with $m$ constraint equations and $n$ variables, while $x \in \mathbb{R}^n$ is the vector of variables.
The Karush-Kuhn-Tucker (KKT) conditions for |
https://en.wikipedia.org/wiki/Multilocus%20sequence%20typing | Multilocus sequence typing (MLST) is a technique in molecular biology for the typing of multiple loci, using DNA sequences of internal fragments of multiple housekeeping genes to characterize isolates of microbial species.
The first MLST scheme to be developed was for Neisseria meningitidis, the causative agent of meningococcal meningitis and septicaemia. Since its introduction for the research of evolutionary history, MLST has been used not only for human pathogens but also for plant pathogens.
Principle
MLST directly measures the DNA sequence variations in a set of housekeeping genes and characterizes strains by their unique allelic profiles. The principle of MLST is simple: the technique involves PCR amplification followed by DNA sequencing. Nucleotide differences between strains can be checked at a variable number of genes depending on the degree of discrimination desired.
The workflow of MLST involves: 1) data collection, 2) data analysis and 3) multilocus sequence analysis. In the data collection step, definitive identification of variation is obtained by nucleotide sequence determination of gene fragments. In the data analysis step, all unique sequences are assigned allele numbers, combined into an allelic profile, and assigned a sequence type (ST). If new alleles and STs are found, they are stored in the database after verification. In the final analysis step of MLST, the relatedness of isolates is assessed by comparing allelic profiles. Researchers carry out epidemiological and phylogenetic studies by comparing STs of different clonal complexes. A huge amount of data is produced during the sequencing and identification process, so bioinformatic techniques are used to arrange, manage, analyze and merge all of the biological data.
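The allele-numbering and ST-assignment step can be pictured with a toy lookup (the locus count and all profiles below are invented for illustration; real schemes use curated databases):

```python
# Hypothetical 7-locus ST database: allelic profile -> sequence type.
ST_DATABASE = {
    (1, 3, 1, 1, 4, 1, 3): "ST-5",
    (2, 3, 1, 1, 4, 4, 3): "ST-8",
}

def assign_sequence_type(allelic_profile):
    """Look up a 7-locus allelic profile; unknown profiles are flagged as novel."""
    return ST_DATABASE.get(tuple(allelic_profile), "novel ST (verify and submit)")

print(assign_sequence_type([1, 3, 1, 1, 4, 1, 3]))  # ST-5
print(assign_sequence_type([9, 9, 9, 9, 9, 9, 9]))  # novel ST (verify and submit)
```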
To strike a balance between acceptable identification power, time and cost for strain typing, about seven to eight housekeeping genes are commonly used in laboratories. Taking Staphylococcus aureus as an example, seven hou |
https://en.wikipedia.org/wiki/Headroom%20%28audio%20signal%20processing%29 | In digital and analog audio, headroom refers to the amount by which the signal-handling capabilities of an audio system can exceed a designated nominal level. Headroom can be thought of as a safety zone allowing transient audio peaks to exceed the nominal level without damaging the system or the audio signal, e.g., via clipping. Standards bodies differ in their recommendations for nominal level and headroom.
Digital audio
In digital audio, headroom is defined as the amount by which digital full scale (FS) exceeds the nominal level in decibels (dB). The European Broadcasting Union (EBU) specifies several nominal levels and resulting headroom for different applications.
Analog audio
In analog audio, headroom can mean low-level signal capabilities as well as the amount of extra power reserve available within the amplifiers that drive the loudspeakers.
Alignment level
Alignment level is an anchor point 9 dB below the nominal level, a reference level that exists throughout the system or broadcast chain, though it may imply different voltage levels at different points in the analog chain. Typically, nominal (not alignment) level is 0 dB, corresponding to an analog sine wave of voltage of 1.23 volts RMS (+4 dBu or 3.47 volts peak to peak). In the digital realm, alignment level is −18 dBFS.
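The arithmetic behind these figures can be checked directly (a small sketch; the 0.775 V RMS reference for 0 dBu is the only fact assumed beyond the text):

```python
import math

def dbu_to_vrms(dbu):
    """Convert a dBu level to volts RMS (0 dBu = 0.775 V RMS)."""
    return 0.775 * 10 ** (dbu / 20)

nominal = dbu_to_vrms(4.0)                  # ~1.228 V RMS at +4 dBu
peak_to_peak = nominal * 2 * math.sqrt(2)   # ~3.47 V peak to peak for a sine wave
print(f"{nominal:.3f} V RMS, {peak_to_peak:.2f} V pp, alignment level at -18 dBFS")
```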
(Diagram legend: AL = analog level; SPL = sound pressure level.)
See also
A-weighting
Audio system measurements
Equal-loudness contour
ITU-R 468 noise weighting
Loudness war
Noise measurement
Programme levels
Rumble measurement
Weighting filter |
https://en.wikipedia.org/wiki/Optical%20neural%20network | An optical neural network is a physical implementation of an artificial neural network with optical components. Early optical neural networks used a photorefractive volume hologram to interconnect arrays of input neurons to arrays of outputs, with synaptic weights in proportion to the multiplexed hologram's strength. Volume holograms were further multiplexed using spectral hole burning to add one dimension of wavelength to space, achieving four-dimensional interconnects of two-dimensional arrays of neural inputs and outputs. This research led to extensive research on alternative methods using the strength of the optical interconnect for implementing neuronal communications.
Some artificial neural networks that have been implemented as optical neural networks include the Hopfield neural network and the Kohonen self-organizing map with liquid crystal spatial light modulators. Optical neural networks can also be based on the principles of neuromorphic engineering, creating neuromorphic photonic systems. Typically, these systems encode information in the networks using spikes, mimicking the functionality of spiking neural networks in optical and photonic hardware. Photonic devices that have demonstrated neuromorphic functionalities include (among others) vertical-cavity surface-emitting lasers, integrated photonic modulators, optoelectronic systems based on superconducting Josephson junctions, and systems based on resonant tunnelling diodes.
Electrochemical vs. optical neural networks
Biological neural networks function on an electrochemical basis, while optical neural networks use electromagnetic waves. Optical interfaces to biological neural networks can be created with optogenetics, but this is not the same as an optical neural network. In biological neural networks there exist many different mechanisms for dynamically changing the state of the neurons; these include short-term and long-term synaptic plasticity. Synaptic plasticity is among the electrophysiologic |
https://en.wikipedia.org/wiki/Voice%20activity%20detection | Voice activity detection (VAD), also known as speech activity detection or speech detection, is the detection of the presence or absence of human speech, used in speech processing. The main uses of VAD are in speaker diarization, speech coding and speech recognition. It can facilitate speech processing, and can also be used to deactivate some processes during non-speech section of an audio session: it can avoid unnecessary coding/transmission of silence packets in Voice over Internet Protocol (VoIP) applications, saving on computation and on network bandwidth.
VAD is an important enabling technology for a variety of speech-based applications. Therefore, various VAD algorithms have been developed that provide varying features and compromises between latency, sensitivity, accuracy and computational cost. Some VAD algorithms also provide further analysis, for example whether the speech is voiced, unvoiced or sustained. Voice activity detection is usually independent of language.
It was first investigated for use on time-assignment speech interpolation (TASI) systems.
Algorithm overview
The typical design of a VAD algorithm is as follows:
There may first be a noise reduction stage, e.g. via spectral subtraction.
Then some features or quantities are calculated from a section of the input signal.
A classification rule is applied to classify the section as speech or non-speech – often this classification rule tests whether a value exceeds a threshold.
There may be some feedback in this sequence, in which the VAD decision is used to improve the noise estimate in the noise reduction stage, or to adaptively vary the threshold(s). These feedback operations improve the VAD performance in non-stationary noise (i.e. when the noise varies a lot).
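A minimal sketch of this design, using frame energy as the feature and a percentile-based noise-floor estimate as the threshold (frame length and threshold ratio are arbitrary choices, not from any standard):

```python
import numpy as np

def simple_vad(signal, frame_len=400, threshold_ratio=3.0):
    """Classify each frame as speech (True) or non-speech (False) by energy."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    energies = (frames.astype(float) ** 2).mean(axis=1)   # short-time energy feature
    noise_floor = np.percentile(energies, 10)             # crude noise estimate
    return energies > threshold_ratio * noise_floor       # threshold decision rule

rng = np.random.default_rng(0)
signal = rng.normal(0, 0.01, 8000)
signal[3000:5000] += np.sin(np.linspace(0, 300 * np.pi, 2000))  # loud "speech" burst
print(simple_vad(signal).astype(int))
```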
A representative set of recently published VAD methods formulates the decision rule on a frame-by-frame basis using instantaneous measures of the divergence distance between speech and noise. The different measures which |
https://en.wikipedia.org/wiki/ECRYPT | ECRYPT (European Network of Excellence in Cryptology) was a 4-year European research initiative launched on 1 February 2004 with the stated objective of promoting the collaboration of European researchers in information security, and especially in cryptology and digital watermarking.
ECRYPT listed five core research areas, termed "virtual laboratories": symmetric key algorithms (STVL), public key algorithms (AZTEC), protocol (PROVILAB), secure and efficient implementations (VAMPIRE) and watermarking (WAVILA).
In August 2008 the network started another 4-year phase as ECRYPT II.
ECRYPT II products
Yearly report on algorithms and key lengths
During the project, algorithms and key lengths were evaluated yearly. The most recent of these documents is dated 30 September 2012.
Key sizes
Considering the budget of a large intelligence agency to be about US$300 million for a single ASIC machine, the recommended minimum key size is 84 bits, which would give protection for a few months. In practice, most commonly used algorithms have key sizes of 128 bits or more, providing sufficient security also in the case that the chosen algorithm is slightly weakened by cryptanalysis.
Different kinds of keys are compared in the document (e.g. RSA keys vs. EC keys). This "translation table" can be used to roughly equate keys of other types of algorithms with symmetric encryption algorithms. In short, 128-bit symmetric keys are said to be equivalent to 3248-bit RSA keys or 256-bit EC keys. Symmetric keys of 256 bits are roughly equivalent to 15424-bit RSA keys or 512-bit EC keys. Finally, 2048-bit RSA keys are said to be equivalent to 103-bit symmetric keys.
Among key sizes, 8 security levels are defined, from the lowest "Attacks possible in real-time by individuals" (level 1, 32 bits) to "Good for the foreseeable future, also against quantum computers unless Shor's algorithm applies" (level 8, 256 bits). For general long-term protection (30 years), 128 bit keys are recommended ( |
https://en.wikipedia.org/wiki/Flag%20dipping | To dip a flag that is being carried means to lower it by turning it forward from an upright position to 45° or horizontal. This is done as a sign of respect or deference. At sea, it is done by lowering to half-mast and returning to full mast position.
To dip the flag on a merchant vessel passing a naval vessel involves lowering the stern flag (the country flag) to the half-mast position and back to the truck as the vessels pass abeam of each other; the half-mast position in this case is one flag width down from the truck, as in the case of half-masting. Some jurisdictions have laws that discourage or prohibit the dipping of the national flag, including India, the Philippines, South Africa, and the United States (with its non-binding flag code only allowing vessels to dip the ensign as a salute to other ships).
Gallery |
https://en.wikipedia.org/wiki/Kosmopoisk | Kosmopoisk (full name: Общеросси́йская нау́чно-иссле́довательская обще́ственная организа́ция, ОНИОО, translated "All-Russian Research Public Organization"), also known as Spacesearch, is a group with interests in ufology, cryptozoology, and other mystery investigations. It started in 1980 and expanded in 2001 into an international movement. In 2004, it registered under the name All-Russian Scientific Organization. Many of the activities are in the form of expeditions to sites that are reputed to have extraterrestrial activity or unusual creatures.
Formation
The organization was founded by Russian science-fiction writer Alexander Kazantsev, aerospace engineer Vadim Chernobrov, astronaut Georgy Beregovoy, and other enthusiasts, in order to explore the mysteries of the universe and nature, research new ways of space technology development, and work on breakthrough branches of science. In 1945, Kazantsev started to research the Tunguska event of 1908 and link it to a UFO crash and explosion. Two years later, there were UFO sightings in the United States.
In 1980, Chernobrov and his colleagues from the Moscow Aviation Institute created the group, whose objectives were to collect information about UFOs and anomalous events in the Soviet Union, to develop a Lovondatr device (also known as a "time car"), and to send expeditions to explore the most promising anomalous zones. In 2004, the group registered themselves as Kosmopoisk (All-Russian Scientific Organization). They consider themselves the largest non-commercial public research organization in the world.
Membership
The organization has more than 2,500 active members, in more than 100 groups in 25 countries. It has organized more than 250 expeditions.
Expeditions
In the 1990s, the group made expeditions an annual event. To save costs, they would hitchhike as a main way of traveling. For instance, in 1999, the group made an expedition to the remote Labynkyr Lake in Yakutia, Sakha Republic, where an underwater monster |
https://en.wikipedia.org/wiki/MAC%20flooding | In computer networking, a media access control attack or MAC flooding is a technique employed to compromise the security of network switches. The attack works by forcing legitimate MAC table contents out of the switch and forcing a unicast flooding behavior potentially sending sensitive information to portions of the network where it is not normally intended to go.
Attack method
Switches maintain a MAC table that maps individual MAC addresses on the network to the physical ports on the switch. This allows the switch to direct data out of the physical port where the recipient is located, as opposed to indiscriminately broadcasting the data out of all ports as an Ethernet hub does. The advantage of this method is that data is bridged exclusively to the network segment containing the computer that the data is specifically destined for.
In a typical MAC flooding attack, a switch is fed many Ethernet frames, each containing different source MAC addresses, by the attacker. The intention is to consume the limited memory set aside in the switch to store the MAC address table.
The effect of this attack may vary across implementations; however, the desired effect (by the attacker) is to force legitimate MAC addresses out of the MAC address table, causing significant quantities of incoming frames to be flooded out on all ports. It is from this flooding behavior that the MAC flooding attack gets its name.
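The frame-generation step can be illustrated with a short scapy sketch (the interface name and frame count are assumptions; this is for observing switch behavior in an isolated lab network):

```python
from scapy.all import Ether, IP, UDP, RandMAC, RandIP, sendp

def mac_flood(iface="eth0", count=10000):
    """Send frames with random source MACs to fill a switch's MAC address table."""
    for _ in range(count):
        frame = (Ether(src=RandMAC(), dst=RandMAC())
                 / IP(src=RandIP(), dst=RandIP())
                 / UDP(sport=12345, dport=54321))
        sendp(frame, iface=iface, verbose=False)
```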
After launching a successful MAC flooding attack, a malicious user can use a packet analyzer to capture sensitive data being transmitted between other computers, which would not be accessible were the switch operating normally. The attacker may also follow up with an ARP spoofing attack which will allow them to retain access to privileged data after switches recover from the initial MAC flooding attack.
MAC flooding can also be used as a rudimentary VLAN hopping attack.
Counter measures
To prevent MAC flooding attacks, network operators usually rely on the presence of |
https://en.wikipedia.org/wiki/Sotolon | Sotolon (also known as sotolone) is a lactone and an extremely powerful aroma compound, with the typical smell of fenugreek or curry at high concentrations and maple syrup, caramel, or burnt sugar at lower concentrations. Sotolon is the major aroma and flavor component of fenugreek seed and lovage, and is one of several aromatic and flavor components of artificial maple syrup. It is also present in molasses, aged rum, aged sake and white wine, flor sherry, roast tobacco, and dried fruiting bodies of the mushroom Lactarius helvus. Sotolon can pass through the body relatively unchanged, and consumption of foods high in sotolon, such as fenugreek, can impart a maple syrup aroma to one's sweat and urine. In some individuals with the genetic disorder maple syrup urine disease, it is spontaneously produced in their bodies and excreted in their urine, leading to the disease's characteristic smell.
This molecule is thought to be responsible for the mysterious maple syrup smell that has occasionally wafted over Manhattan since 2005. Sotolon was first isolated in 1975 from the herb fenugreek. The compound was named in 1980 when it was found to be responsible for the flavor of raw cane sugar: soto- means "raw sugar" in Japanese and -olon signifies that the molecule is an enol lactone.
Several aging-derived compounds have been pointed out as playing an important role in the aroma of fortified wines; however, sotolon (3-hydroxy-4,5-dimethyl-2(5H)-furanone) is recognized as being the key odorant and has also been classified as a potential aging marker of these types of wines. This chiral lactone is a powerful odorant, which can impart a nutty, caramel, curry, or rancid odor, depending on its concentration and enantiomeric distribution. Despite being pointed out as a key odorant of other fortified wines, the researchers' attention has also been directed to its off-flavor character, associated with the premature oxidative aging of young dry white wines, overlapping the expected fr |
https://en.wikipedia.org/wiki/Sagnac%20effect | The Sagnac effect, also called Sagnac interference, named after French physicist Georges Sagnac, is a phenomenon encountered in interferometry that is elicited by rotation. The Sagnac effect manifests itself in a setup called a ring interferometer or Sagnac interferometer. A beam of light is split and the two beams are made to follow the same path but in opposite directions. On return to the point of entry the two light beams are allowed to exit the ring and undergo interference. The relative phases of the two exiting beams, and thus the position of the interference fringes, are shifted according to the angular velocity of the apparatus. In other words, when the interferometer is at rest with respect to a nonrotating frame, the light takes the same amount of time to traverse the ring in either direction. However, when the interferometer system is spun, one beam of light has a longer path to travel than the other in order to complete one circuit of the mechanical frame, and so takes longer, resulting in a phase difference between the two beams. Georges Sagnac set up this experiment in 1913 in an attempt to prove the existence of the aether that Einstein's theory of special relativity makes superfluous.
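For reference (the standard first-order result, not quoted in this excerpt), the arrival-time difference and the corresponding phase shift for a loop enclosing area $A$ rotating at angular rate $\Omega$ are
$$\Delta t = \frac{4 A \Omega}{c^2}, \qquad \Delta\phi = \frac{2 \pi c \, \Delta t}{\lambda} = \frac{8 \pi A \Omega}{\lambda c},$$
where $\lambda$ is the wavelength of the light and $c$ the vacuum speed of light.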
A gimbal mounted mechanical gyroscope remains pointing in the same direction after spinning up, and thus can be used as a rotational reference for an inertial navigation system. With the development of so-called laser gyroscopes and fiber optic gyroscopes based on the Sagnac effect, bulky mechanical gyroscopes can be replaced by those with no moving parts in many modern inertial navigation systems. A conventional gyroscope relies on the principle of conservation of angular momentum whereas the sensitivity of the ring interferometer to rotation arises from the invariance of the speed of light for all inertial frames of reference.
Description and operation
Typically three or more mirrors are used, so that counter-propagating light beams follow a closed path such as a |
https://en.wikipedia.org/wiki/Quadratic%20assignment%20problem | The quadratic assignment problem (QAP) is one of the fundamental combinatorial optimization problems in the branch of optimization or operations research in mathematics, from the category of the facilities location problems first introduced by Koopmans and Beckmann.
The problem models the following real-life problem:
There are a set of n facilities and a set of n locations. For each pair of locations, a distance is specified and for each pair of facilities a weight or flow is specified (e.g., the amount of supplies transported between the two facilities). The problem is to assign all facilities to different locations with the goal of minimizing the sum of the distances multiplied by the corresponding flows.
Intuitively, the cost function encourages facilities with high flows between each other to be placed close together.
The problem statement resembles that of the assignment problem, except that the cost function is expressed in terms of quadratic inequalities, hence the name.
Formal mathematical definition
The formal definition of the quadratic assignment problem is as follows:
Given two sets, P ("facilities") and L ("locations"), of equal size, together with a weight function w : P × P → R and a distance function d : L × L → R. Find the bijection f : P → L ("assignment") such that the cost function:
is minimized.
Usually weight and distance functions are viewed as square real-valued matrices, so that the cost function is written down as:
$$\sum_{i, j} w_{ij}\, d_{f(i), f(j)}$$
In matrix notation:
$$\min_{X \in \Pi_n} \operatorname{trace}\!\left(W X D^{\mathsf T} X^{\mathsf T}\right)$$
where $\Pi_n$ is the set of $n \times n$ permutation matrices, $W$ is the weight matrix and $D$ is the distance matrix.
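The definition can be checked against a brute-force search (a sketch with made-up 3×3 data; enumeration is only feasible for tiny n):

```python
from itertools import permutations

W = [[0, 3, 1], [3, 0, 2], [1, 2, 0]]   # flows between facilities (made-up data)
D = [[0, 5, 7], [5, 0, 4], [7, 4, 0]]   # distances between locations (made-up data)

def qap_brute_force(W, D):
    """Minimize sum_{a,b} W[a][b] * D[f(a)][f(b)] over all bijections f."""
    n = len(W)
    best_cost, best_f = float("inf"), None
    for f in permutations(range(n)):    # f[a] = location assigned to facility a
        cost = sum(W[a][b] * D[f[a]][f[b]] for a in range(n) for b in range(n))
        if cost < best_cost:
            best_cost, best_f = cost, f
    return best_cost, best_f

print(qap_brute_force(W, D))
```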
Computational complexity
The problem is NP-hard, so there is no known algorithm for solving this problem in polynomial time, and even small instances may require long computation time. It was also proven that the problem does not have an approximation algorithm running in polynomial time for any (constant) factor, unless P = NP. The travelling salesman problem (TSP) may be seen as a special case of QA |
https://en.wikipedia.org/wiki/Ilin%20Island%20cloudrunner | The Ilin Island cloudrunner (Crateromys paulus) is a cloud rat known from a single specimen purchased on Ilin Island in the Philippines. It is called siyang by the Taubuwid Mangyan. It is a fluffy-coated, bushy-tailed rat and may have emerged from tree hollows at night to feed on fruits and leaves. The specimen, collected on 4 April 1953, was presented to the National Museum of Natural History in Washington D.C. The island's forests have been destroyed by human activity. The cloudrunner is among the 25 “most wanted lost” species that are the focus of Global Wildlife Conservation’s “Search for Lost Species” initiative. As there is no proof that the single specimen originated on Ilin Island, searches are now focussed on nearby Mindoro. Hopes that it may be rediscovered have prompted the IUCN to revise its status from possibly extinct (EX?) in 1994 to Critically Endangered (CR) in 1996, and then to the current Data Deficient (DD) from 2008. |
https://en.wikipedia.org/wiki/Food%20technology | Food technology is a branch of food science that addresses the production, preservation, quality control and research and development of food products.
Early scientific research into food technology concentrated on food preservation. Nicolas Appert's development in 1810 of the canning process was a decisive event. The process was not yet called canning, and Appert did not really know the principle on which his process worked, but canning has had a major impact on food preservation techniques.
Louis Pasteur's research on the spoilage of wine and his description of how to avoid spoilage in 1864, was an early attempt to apply scientific knowledge to food handling. Besides research into wine spoilage, Pasteur researched the production of alcohol, vinegar, wines and beer, and the souring of milk. He developed pasteurization – the process of heating milk and milk products to destroy food spoilage and disease-producing organisms. In his research into food technology, Pasteur became the pioneer into bacteriology and of modern preventive medicine.
Developments
Developments in food technology have contributed greatly to the food supply and have changed our world. Some of these developments are:
Instantized Milk Powder – Instant milk powder has become the basis for a variety of new products that are rehydratable. This process increases the surface area of the powdered product by partially rehydrating spray-dried milk powder.
Freeze-drying – The first application of freeze drying was most likely in the pharmaceutical industry; however, a successful large-scale industrial application of the process was the development of continuous freeze drying of coffee.
High-Temperature Short Time Processing – These processes, for the most part, are characterized by rapid heating and cooling, holding for a short time at a relatively high temperature and filling aseptically into sterile containers.
Decaffeination of Coffee and Tea – Decaffeinated coffee and tea was first developed on |
https://en.wikipedia.org/wiki/Fredkin%20gate | The Fredkin gate (also CSWAP gate and conservative logic gate) is a computational circuit suitable for reversible computing, invented by Edward Fredkin. It is universal, which means that any logical or arithmetic operation can be constructed entirely of Fredkin gates. The Fredkin gate is a circuit or device with three inputs and three outputs that transmits the first bit unchanged and swaps the last two bits if, and only if, the first bit is 1.
Definition
The basic Fredkin gate is a controlled swap gate that maps three inputs $(C, I_1, I_2)$ onto three outputs $(C, O_1, O_2)$. The $C$ input is mapped directly to the $C$ output. If $C = 0$, no swap is performed; $I_1$ maps to $O_1$, and $I_2$ maps to $O_2$. Otherwise, the two outputs are swapped so that $I_1$ maps to $O_2$, and $I_2$ maps to $O_1$. It is easy to see that this circuit is reversible, i.e., "undoes" itself when run backwards. A generalized n×n Fredkin gate passes its first n−2 inputs unchanged to the corresponding outputs, and swaps its last two outputs if and only if the first n−2 inputs are all 1.
The Fredkin gate is the reversible three-bit gate that swaps the last two bits if, and only if, the first bit is 1.
It has the useful property that the numbers of 0s and 1s are conserved throughout, which in the billiard ball model means the same number of balls are output as input. This corresponds nicely to the conservation of mass in physics, and helps to show that the model is not wasteful.
Truth functions with AND, OR, XOR, and NOT
The Fredkin gate can be defined using truth functions with AND, OR, XOR, and NOT, as follows:
$C_{\text{out}} = C_{\text{in}}$, $O_1 = I_1 \oplus S$, $O_2 = I_2 \oplus S$,
where $S = (I_1 \oplus I_2) \wedge C$.
Alternatively:
$C_{\text{out}} = C_{\text{in}}$, $O_1 = (\neg C \wedge I_1) \vee (C \wedge I_2)$, $O_2 = (C \wedge I_1) \vee (\neg C \wedge I_2)$
Completeness
One way to see that the Fredkin gate is universal is to observe that it can be used to implement AND, NOT and OR:
If $I_2 = 0$, then $O_2 = C \wedge I_1$ (AND).
If $I_2 = 1$, then $O_1 = C \vee I_1$ (OR).
If $I_1 = 0$ and $I_2 = 1$, then $O_1 = C$ and $O_2 = \neg C$ (NOT).
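These three constructions are easy to verify exhaustively (a small sketch of the gate as a controlled swap):

```python
def fredkin(c, i1, i2):
    """Controlled swap: pass inputs through if c == 0, swap them if c == 1."""
    return (c, i2, i1) if c else (c, i1, i2)

for c in (0, 1):
    for i1 in (0, 1):
        assert fredkin(c, i1, 0)[2] == (c & i1)   # I2 = 0 gives AND on output O2
        assert fredkin(c, i1, 1)[1] == (c | i1)   # I2 = 1 gives OR on output O1
    assert fredkin(c, 0, 1)[2] == 1 - c           # I1 = 0, I2 = 1 gives NOT c on O2
print("AND, OR and NOT constructions verified")
```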
Example
Three-bit full adder (add with carry) using five Fredkin gates. The "g" garbage output bit is if , and if .
Inputs on the left, including two constants, go through three gates to quickly determine the parity. The 0 and 1 bits |
https://en.wikipedia.org/wiki/Bernstein%E2%80%93Sato%20polynomial | In mathematics, the Bernstein–Sato polynomial is a polynomial related to differential operators, introduced independently by and , . It is also known as the b-function, the b-polynomial, and the Bernstein polynomial, though it is not related to the Bernstein polynomials used in approximation theory. It has applications to singularity theory, monodromy theory, and quantum field theory.
gives an elementary introduction, while and give more advanced accounts.
Definition and properties
If $f(x)$ is a polynomial in several variables, then there is a non-zero polynomial $b(s)$ and a differential operator $P(s)$ with polynomial coefficients such that
$$P(s) f(x)^{s+1} = b(s) f(x)^s.$$
The Bernstein–Sato polynomial is the monic polynomial of smallest degree amongst such polynomials $b(s)$. Its existence can be shown using the notion of holonomic D-modules.
Kashiwara proved that all roots of the Bernstein–Sato polynomial are negative rational numbers.
The Bernstein–Sato polynomial can also be defined for products of powers of several polynomials $f_i(x)$. In this case it is a product of linear factors with rational coefficients.
generalized the Bernstein–Sato polynomial to arbitrary varieties.
Note that the Bernstein–Sato polynomial can be computed algorithmically. However, such computations are hard in general. There are implementations of related algorithms in the computer algebra systems RISA/Asir, Macaulay2, and SINGULAR.
presented algorithms to compute the Bernstein–Sato polynomial of an affine variety together with an implementation in the computer algebra system SINGULAR.
described some of the algorithms for computing Bernstein–Sato polynomials by computer.
Examples
If $f(x) = x_1^2 + \cdots + x_n^2$, then
$$\sum_{i=1}^{n} \partial_i^2 \, f(x)^{s+1} = 4(s+1)\!\left(s + \tfrac{n}{2}\right) f(x)^s,$$
so the Bernstein–Sato polynomial is
$$b(s) = (s+1)\left(s + \tfrac{n}{2}\right).$$
If $f(x) = x_1^{n_1} x_2^{n_2} \cdots x_r^{n_r}$, then
$$b(s) = \prod_{j=1}^{r} \prod_{i=1}^{n_j} \left(s + \tfrac{i}{n_j}\right).$$
The Bernstein–Sato polynomial of $x^2 + y^3$ is
$$(s+1)\left(s + \tfrac{5}{6}\right)\left(s + \tfrac{7}{6}\right).$$
If $t_{ij}$ are $n^2$ variables, then the Bernstein–Sato polynomial of $\det(t_{ij})$ is given by
$$b(s) = (s+1)(s+2)\cdots(s+n),$$
which follows from
$$\Omega\, \det(t_{ij})^{s+1} = (s+1)(s+2)\cdots(s+n) \det(t_{ij})^s,$$
where Ω is Cayley's omega process, which in turn follows from the Capelli identity.
Applications
If $f$ is a non-negative polynomial then $f^s$, initially |
https://en.wikipedia.org/wiki/Portmap | The port mapper (rpc.portmap or just portmap, or rpcbind) is an Open Network Computing Remote Procedure Call (ONC RPC) service that runs on network nodes that provide other ONC RPC services.
Version 2 of the port mapper protocol maps ONC RPC program number/version number pairs to the network port number for that version of that program. When an ONC RPC server is started, it will tell the port mapper, for each particular program number/version number pair it implements for a particular transport protocol (TCP or UDP), what port number it is using for that particular program number/version number pair on that transport protocol. Clients wishing to make an ONC RPC call to a particular version of a particular ONC RPC service must first contact the port mapper on the server machine to determine the actual TCP or UDP port to use.
Versions 3 and 4 of the protocol, called the rpcbind protocol, map a program number/version number pair, and an indicator that specifies a transport protocol, to a transport-layer endpoint address for that program number/version number pair on that transport protocol.
The port mapper service always uses TCP or UDP port 111; a fixed port is required for it, as a client would not be able to get the port number for the port mapper service from the port mapper itself.
The port mapper must be started before any other RPC servers are started.
The port mapper service first appeared in SunOS 2.0.
Example portmap instance
This shows the different programs and their versions, and which ports they use. For example, it shows that NFS is running, both version 2 and 3, and can be reached at TCP port 2049 or UDP port 2049, depending on what transport protocol the client wants to use, and that the mount protocol, both version 1 and 2, is running, and can be reached at UDP port 644 or TCP port 645, depending on what transport protocol the client wants to use.
$ rpcinfo -p
program vers proto port
100000 2 tcp 111 portmapper
1000 |
https://en.wikipedia.org/wiki/Biorefinery | A biorefinery is a refinery that converts biomass to energy and other beneficial byproducts (such as chemicals). The International Energy Agency Bioenergy Task 42 defined biorefining as "the sustainable processing of biomass into a spectrum of bio-based products (food, feed, chemicals, materials) and bioenergy (biofuels, power and/or heat)". As refineries, biorefineries can provide multiple chemicals by fractioning an initial raw material (biomass) into multiple intermediates (carbohydrates, proteins, triglycerides) that can be further converted into value-added products. Each refining phase is also referred to as a "cascading phase". The use of biomass as feedstock can benefit the environment through lower pollutant emissions and reduced emissions of hazardous products. In addition, biorefineries are intended to achieve the following goals:
Supply the current fuels and chemical building blocks
Supply new building blocks for the production of novel materials with disruptive characteristics
Creation of new jobs, including in rural areas
Valorization of waste (agricultural, urban, and industrial waste)
Achieve the ultimate goal of reducing GHG emissions
Classification of biorefinery systems
Biorefineries can be classified based on four main features:
Platforms: Refers to key intermediates between raw material and final products. The most important intermediates are:
Biogas from anaerobic digestion
Syngas from gasification
Hydrogen from water-gas shift reaction, steam reforming, water electrolysis and fermentation
C6 sugars from hydrolysis of sucrose, starch, cellulose and hemicellulose
C5 sugars (e.g., xylose, arabinose: C5H10O5), from hydrolysis of hemicellulose and food and feed side streams
Lignin from the processing of lignocellulosic biomass.
Liquid from pyrolysis (pyrolysis oil)
Products: Biorefineries can be grouped in two main categories according to the conversion of biomass in an energetic or non-energet |
https://en.wikipedia.org/wiki/Cell%20disruption | Cell disruption is a method or process for releasing biological molecules from inside a cell.
Methods
The production of biologically interesting molecules using cloning and culturing methods allows the study and manufacture of relevant molecules. Except for excreted molecules, cells producing molecules of interest must be disrupted. This page discusses various methods. Another method of disruption is called cell unroofing.
Bead method
A common laboratory-scale mechanical method for cell disruption uses glass, ceramic, or steel beads, in diameter, mixed with a sample suspended in an aqueous solution. First developed by Tim Hopkins in the late 1970s, the sample and bead mix is subjected to high level agitation by stirring or shaking. Beads collide with the cellular sample, cracking open the cell to release the intracellular components. Unlike some other methods, mechanical shear is moderate during homogenization resulting in excellent membrane or subcellular preparations. The method, often called "bead beating", works well for all types of cellular material - from spores to animal and plant tissues. It is the most widely used method of yeast lysis, and can yield breakage of well over 50% (up to 95%). It has the advantage over other mechanical cell disruption methods of being able to disrupt very small sample sizes, process many samples at a time with no cross-contamination concerns, and does not release potentially harmful aerosols in the process.
In the simplest example of the method, an equal volume of beads is added to a cell or tissue suspension in a test tube and the sample is vigorously mixed on a common laboratory vortex mixer. While processing times are slow, taking 3–10 times longer than in specialty shaking machines, it works well for easily disrupted cells and is inexpensive.
Successful bead beating is dependent not only on design features of the shaking machine (which take into consideration shaking oscillations frequency, shaking throw or distan |
https://en.wikipedia.org/wiki/Escherichia | Escherichia is a genus of Gram-negative, non-spore-forming, facultatively anaerobic, rod-shaped bacteria from the family Enterobacteriaceae. In those species which are inhabitants of the gastrointestinal tracts of warm-blooded animals, Escherichia species provide a portion of the microbially derived vitamin K for their host. A number of the species of Escherichia are pathogenic. The genus is named after Theodor Escherich, the discoverer of Escherichia coli. Escherichia are facultative aerobes, with both aerobic and anaerobic growth, and an optimum temperature of 37 °C. Escherichia are usually motile by flagella, produce gas from fermentable carbohydrates, and do not decarboxylate lysine or hydrolyze arginine. Species include E. albertii, E. fergusonii, E. hermannii, E. ruysiae, E. marmotae and most notably, the model organism and clinically relevant E. coli. Formerly, Shimwellia blattae and Pseudescherichia vulneris were also classified in this genus.
Pathogenesis
While many Escherichia are commensal members of the gut microbiota, certain strains of some species, most notably the pathogenic serotypes of E. coli, are human pathogens, and are the most common cause of urinary tract infections, significant sources of gastrointestinal disease, ranging from simple diarrhea to dysentery-like conditions, as well as a wide range of other pathogenic states classifiable in general as colonic escherichiosis. While E. coli is responsible for the vast majority of Escherichia-related pathogenesis, other members of the genus have also been implicated in human disease. Escherichia are associated with the imbalance of microbiota of the lower reproductive tract of women. These species are associated with inflammation.
See also
E. coli O157:H7
List of bacterial genera named after personal names |
https://en.wikipedia.org/wiki/Actor%20model | The actor model in computer science is a mathematical model of concurrent computation that treats an actor as the basic building block of concurrent computation. In response to a message it receives, an actor can: make local decisions, create more actors, send more messages, and determine how to respond to the next message received. Actors may modify their own private state, but can only affect each other indirectly through messaging (removing the need for lock-based synchronization).
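A minimal sketch of the abstraction in Python (not any particular actor library): each actor owns a mailbox and private state, processes one message at a time, and is reachable only through sends.

```python
import queue
import threading
import time

class Actor:
    def __init__(self):
        self._mailbox = queue.Queue()
        self._count = 0                      # private state: only this actor touches it
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):
        self._mailbox.put(message)           # asynchronous: the sender never blocks

    def _run(self):
        while True:
            message = self._mailbox.get()    # messages are processed one at a time
            self._count += 1
            print(f"received {message!r} (message #{self._count})")

actor = Actor()
actor.send("hello")
actor.send("world")
time.sleep(0.1)                              # let the actor drain its mailbox
```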
The actor model originated in 1973. It has been used both as a framework for a theoretical understanding of computation and as the theoretical basis for several practical implementations of concurrent systems. The relationship of the model to other work is discussed in actor model and process calculi.
History
According to Carl Hewitt, unlike previous models of computation, the actor model was inspired by physics, including general relativity and quantum mechanics. It was also influenced by the programming languages Lisp, Simula, early versions of Smalltalk, capability-based systems, and packet switching. Its development was "motivated by the prospect of highly parallel computing machines consisting of dozens, hundreds, or even thousands of independent microprocessors, each with its own local memory and communications processor, communicating via a high-performance communications network." Since that time, the advent of massive concurrency through multi-core and manycore computer architectures has revived interest in the actor model.
Following Hewitt, Bishop, and Steiger's 1973 publication, Irene Greif developed an operational semantics for the actor model as part of her doctoral research. Two years later, Henry Baker and Hewitt published a set of axiomatic laws for actor systems. Other major milestones include William Clinger's 1981 dissertation introducing a denotational semantics based on power domains and Gul Agha's 1985 dissertation which further developed a transition-based s |
https://en.wikipedia.org/wiki/Infrared%20window | The infrared atmospheric window refers to a region of the infrared spectrum where there is relatively little absorption of terrestrial thermal radiation by atmospheric gases. The window plays an important role in the atmospheric greenhouse effect by maintaining the balance between incoming solar radiation and outgoing IR to space. In the Earth's atmosphere this window is roughly the region between 8 and 14 μm, although it can be narrowed or closed at times and places of high humidity because of the strong absorption in the water vapor continuum or because of blocking by clouds. It covers a substantial part of the spectrum from surface thermal emission, which starts at roughly 5 μm. Principally it is a large gap in the absorption spectrum of water vapor. Carbon dioxide plays an important role in setting the boundary at the long-wavelength end. Ozone partly blocks transmission in the middle of the window.
The importance of the infrared atmospheric window in the atmospheric energy balance was discovered by George Simpson in 1928, based on G. Hettner's 1918 laboratory studies of the gap in the absorption spectrum of water vapor. In those days, computers were not available, and Simpson notes that he used approximations; he writes about the need for this in order to calculate outgoing IR radiation: "There is no hope of getting an exact solution; but by making suitable simplifying assumptions . . . ." Nowadays, accurate line-by-line computations are possible, and careful studies of the spectroscopy of infrared atmospheric gases have been published.
Mechanisms in the infrared atmospheric window
The principal natural greenhouse gases in order of their importance are water vapor (H2O), carbon dioxide (CO2), ozone (O3), methane (CH4) and nitrous oxide (N2O). The concentration of the least common of these, N2O, is about 400 ppbV. Other gases which contribute to the greenhouse effect are present at pptV levels. These include the chlorofluorocarbons (CFCs), halons and hydrofluorocarbons (HFC and HCFCs). A |
https://en.wikipedia.org/wiki/Faraday%20wave | Faraday waves, also known as Faraday ripples, named after Michael Faraday (1791–1867), are nonlinear standing waves that appear on liquids enclosed by a vibrating receptacle. When the vibration frequency exceeds a critical value, the flat hydrostatic surface becomes unstable. This is known as the Faraday instability. Faraday first described them in an appendix to an article in the Philosophical Transactions of the Royal Society of London in 1831.
If a layer of liquid is placed on top of a vertically oscillating piston, a pattern of standing waves appears which oscillates at half the driving frequency, given certain criteria of instability. This relates to the problem of parametric resonance. The waves can take the form of stripes, close-packed hexagons, or even squares or quasiperiodic patterns. Faraday waves are commonly observed as fine stripes on the surface of wine in a wine glass that is ringing like a bell. Faraday waves also explain the 'fountain' phenomenon on a singing bowl.
The Faraday wave and its wavelength are analogous to the de Broglie wave and the de Broglie wavelength in de Broglie–Bohm theory in the field of quantum mechanics.
Application
Faraday waves are used as a liquid-based template for directed assembly of microscale materials including soft matter, rigid bodies, and biological entities (e.g., individual cells, cell spheroids and cell-seeded microcarrier beads). Unlike solid-based templates, this liquid-based template can be dynamically changed by tuning the vibrational frequency and acceleration, and can generate diverse sets of symmetrical and periodic patterns.
This phenomenon is also used by alligators to call mates. They vibrate their lungs at low frequencies slightly below the surface, causing their spikes to move and induce surface waves. These surface waves are basically Faraday waves and one can observe the splashing effect characteristic of certain resonances.
This effect can also be used for mixing two liquids acoustically. Faraday wav |
https://en.wikipedia.org/wiki/Complete%20set%20of%20commuting%20observables | In quantum mechanics, a complete set of commuting observables (CSCO) is a set of commuting operators whose common eigenvectors can be used as a basis to express any quantum state. In the case of operators with discrete spectra, a CSCO is a set of commuting observables whose simultaneous eigenspaces span the Hilbert space, so that the eigenvectors are uniquely specified by the corresponding sets of eigenvalues.
Since each pair of observables in the set commutes, the observables are all compatible so that the measurement of one observable has no effect on the result of measuring another observable in the set. It is therefore not necessary to specify the order in which the different observables are measured. Measurement of the complete set of observables constitutes a complete measurement, in the sense that it projects the quantum state of the system onto a unique and known vector in the basis defined by the set of operators. That is, to prepare the completely specified state, we have to take any state arbitrarily, and then perform a succession of measurements corresponding to all the observables in the set, until it becomes a uniquely specified vector in the Hilbert space (up to a phase).
The compatibility theorem
Consider two observables, $A$ and $B$, represented by the operators $\hat A$ and $\hat B$. Then the following statements are equivalent:
$A$ and $B$ are compatible observables.
$\hat A$ and $\hat B$ have a common eigenbasis.
The operators $\hat A$ and $\hat B$ commute, meaning that $[\hat A, \hat B] = \hat A \hat B - \hat B \hat A = 0$.
Proofs
Discussion
We consider the two above observables $A$ and $B$. Suppose there exists a complete set of kets $\{|\psi_n\rangle\}$ whose every element is simultaneously an eigenket of $A$ and $B$. Then we say that $A$ and $B$ are compatible. If we denote the eigenvalues of $A$ and $B$ corresponding to $|\psi_n\rangle$ respectively by $a_n$ and $b_n$, we can write
$$A |\psi_n\rangle = a_n |\psi_n\rangle, \qquad B |\psi_n\rangle = b_n |\psi_n\rangle.$$
If the system happens to be in one of the eigenstates, say, $|\psi_n\rangle$, then both $A$ and $B$ can be simultaneously measured to any arbitrary level of precision, and we will get the results $a_n$ and $b_n$ respectively. This idea can be extended to more than two |
https://en.wikipedia.org/wiki/Branch%20target%20predictor | In computer architecture, a branch target predictor is the part of a processor that predicts the target, i.e. the address of the instruction that is executed next, of a taken conditional branch or an unconditional branch instruction before the target of the branch instruction is computed by the execution unit of the processor.
Branch target prediction is not the same as branch prediction which attempts to guess whether a conditional branch will be taken or not-taken (i.e., binary).
In more parallel processor designs, as the instruction cache latency grows longer and the fetch width grows wider, branch target extraction becomes a bottleneck. The recurrence is:
Instruction cache fetches block of instructions
Instructions in block are scanned to identify branches
First predicted taken branch is identified
Target of that branch is computed
Instruction fetch restarts at branch target
In machines where this recurrence takes two cycles, the machine loses one full cycle of fetch after every predicted taken branch. As predicted branches happen every 10 instructions or so, this can force a substantial drop in fetch bandwidth. Some machines with longer instruction cache latencies would have an even larger loss. To ameliorate the loss, some machines implement branch target prediction: given the address of a branch, they predict the target of that branch. A refinement of the idea predicts the start of a sequential run of instructions given the address of the start of the previous sequential run of instructions.
This predictor reduces the recurrence above to:
Hash the address of the first instruction in a run
Fetch the prediction for the addresses of the targets of branches in that run of instructions
Select the address corresponding to the branch predicted taken
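A toy sketch of such a predictor (a direct-mapped branch target buffer; the table size and hash are arbitrary choices):

```python
BTB_ENTRIES = 1024
btb = [None] * BTB_ENTRIES           # each entry: (tag, predicted_target)

def btb_index(pc):
    return (pc >> 2) % BTB_ENTRIES   # cheap hash of the instruction address

def predict_target(pc):
    entry = btb[btb_index(pc)]
    if entry is not None and entry[0] == pc:
        return entry[1]              # hit: redirect fetch to the predicted target
    return pc + 4                    # miss: predict sequential fetch

def train(pc, actual_target):
    btb[btb_index(pc)] = (pc, actual_target)   # update on a resolved taken branch

train(0x4000, 0x5000)
print(hex(predict_target(0x4000)))   # 0x5000: the next fetch starts without waiting
```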
As the predictor RAM can be 5–10% of the size of the instruction cache, the fetch happens much faster than the instruction cache fetch, and so this recurrence is much faster. If it were not fast eno |
https://en.wikipedia.org/wiki/Ferber%20method | The Ferber method, or Ferberization, is a technique invented by Richard Ferber to solve infant sleep problems. It involves "sleep-training" children to self-soothe by allowing the child to cry for a predetermined amount of time at intervals before receiving external comfort.
"Cry it out"
The "Cry It Out" (CIO) approach can be traced back to the book The Care and Feeding of Children written by Emmett Holt in 1894. CIO is any sleep-training method which allows a baby to cry for a specified period before the parent will offer comfort. "Ferberization" is one such approach. Ferber does not advocate simply leaving a baby to cry, but rather supports giving the baby time to learn to self-soothe, by offering comfort and support from the parent at predetermined intervals. The best age to attempt Ferber's sleep training method is around 6 months-old.
Other CIO methods, such as Marc Weissbluth's extinction method, are often mistakenly referred to as "Ferberization", though they fall outside of the guidelines Ferber recommended. "Ferberization" is referred to as graduated extinction by Weissbluth. Some pediatricians feel that any form of CIO is unnecessary and damaging to a baby.
Ferberization summarized
Ferber discusses and outlines a wide range of practices to teach an infant to sleep. The term Ferberization is now popularly used to refer to the following techniques:
Take steps to prepare the baby to sleep. This includes night-time rituals and day-time activities.
At bedtime, leave the child in bed and leave the room.
Return at progressively increasing intervals to comfort the baby, but do not pick them up. For example, on the first night, some scenarios call for returning first after three minutes, then after five minutes, and thereafter each ten minutes, until the baby is asleep.
Each subsequent night, return at intervals longer than the night before. For example, the second night may call for returning first after five minutes, then after ten minutes, and thereafter |
https://en.wikipedia.org/wiki/Flanders%20Mathematics%20Olympiad | The Flanders Mathematics Olympiad (Vlaamse Wiskunde Olympiade; VWO) is a Flemish mathematics competition for students in grades 9 through 12. Two tiers of this competition exist: one for 9th- and 10th-graders (Junior Wiskunde Olympiade; JWO), and one for 11th- and 12th-graders. It is a feeder competition for the International Mathematical Olympiad.
History
The Olympiad was founded in 1985, replacing a system previously used since 1969 in which Flemish students were nominated to the IMO by their teachers. Around 20,000 students participate annually.
In 2015, the founders of the Olympiad, Paul Igodt of the Katholieke Universiteit Leuven and Frank De Clerck of Ghent University, were given the career award for science communication of the Royal Flemish Academy of Belgium for Science and the Arts for their work.
Procedure
The competition consists of three rounds. During the first and second rounds, students must answer 30 multiple-choice mathematics problems. The first round occurs in schools, and the second round is organized by province and administered at various universities. The first round has a three-hour time limit; the second round has a two-hour time limit.
The final round consists of four problems which require a detailed and coherent essay-type response. After the final round, three contestants are selected to compete in the International Mathematical Olympiad, making up half of the team from Belgium; the other half of the team comes from Wallonia. |
https://en.wikipedia.org/wiki/List%20of%20personal%20information%20managers | The following is a list of personal information managers (PIMs) and online organizers.
Applications
Discontinued applications
See also
Comparisons
Comparison of email clients
Comparison of file managers
Comparison of note-taking software
Comparison of reference management software
Comparison of text editors
Comparison of wiki software
Comparison of word processors
Lists
List of outliners
Comparison of project management software
List of text editors
List of wiki software
External links
Lists of software |
https://en.wikipedia.org/wiki/SHA-2 | SHA-2 (Secure Hash Algorithm 2) is a set of cryptographic hash functions designed by the United States National Security Agency (NSA) and first published in 2001. They are built using the Merkle–Damgård construction, from a one-way compression function itself built using the Davies–Meyer structure from a specialized block cipher.
SHA-2 includes significant changes from its predecessor, SHA-1. The SHA-2 family consists of six hash functions with digests (hash values) that are 224, 256, 384 or 512 bits: SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, SHA-512/256. SHA-256 and SHA-512 are novel hash functions computed with eight 32-bit and 64-bit words, respectively. They use different shift amounts and additive constants, but their structures are otherwise virtually identical, differing only in the number of rounds. SHA-224 and SHA-384 are truncated versions of SHA-256 and SHA-512 respectively, computed with different initial values. SHA-512/224 and SHA-512/256 are also truncated versions of SHA-512, but the initial values are generated using the method described in Federal Information Processing Standards (FIPS) PUB 180-4.
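The family members and their digest lengths are easy to inspect with Python's hashlib (a quick illustration; only the four core variants are shown):

```python
import hashlib

data = b"abc"
for name in ("sha224", "sha256", "sha384", "sha512"):
    digest = hashlib.new(name, data).hexdigest()
    print(f"{name}: {len(digest) * 4}-bit digest -> {digest[:16]}...")
```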
SHA-2 was first published by the National Institute of Standards and Technology (NIST) as a U.S. federal standard. The SHA-2 family of algorithms is patented in the U.S., but the United States has released the patent under a royalty-free license.
As of 2011, the best public attacks break preimage resistance for 52 out of 64 rounds of SHA-256 or 57 out of 80 rounds of SHA-512, and collision resistance for 46 out of 64 rounds of SHA-256.
Hash standard
With the publication of FIPS PUB 180-2, NIST added three additional hash functions to the SHA family. The algorithms are collectively known as SHA-2, named after their digest lengths (in bits): SHA-256, SHA-384, and SHA-512.
The algorithms were first published in 2001 in the draft FIPS PUB 180-2, at which time public review and comments were accepted. In August 2002, FIPS PUB 180-2 became the new Sec |
https://en.wikipedia.org/wiki/Je%C5%A1t%C4%9Bd%20Tower | Ještěd Tower is a television transmitter on the top of Mount Ještěd near Liberec in the Czech Republic. It is made of reinforced concrete shaped in a hyperboloid form. The tower's architect was Karel Hubáček, assisted by Zdeněk Patrman, who handled the structural engineering, and by Otakar Binar, who designed the interior furnishing. It took the team three years to finalize the structural design (1963–1966), and the construction itself took seven years to complete (1966–1973).
The hyperboloid shape was chosen because it naturally extends the silhouette of the hill and withstands the extreme climatic conditions on the summit of Mount Ještěd. The design combines the operation of a mountain-top hotel and a television transmitter. The hotel and the restaurant are located in the lowest sections of the tower. Before the construction of the current building, two huts stood near the mountain summit: one built in the middle of the 19th century and the other added in the early 20th century. Both had wooden structures, and both burned to the ground in the 1960s.
The tower is one of the dominant features of the North Bohemian landscape. The gallery on the ground floor and the restaurant on the first floor offer views as far as Poland and Germany. The tower has been on the list of Czech cultural monuments since 1998 and became a national cultural monument in 2006. In 2007 it was entered on the Tentative List of UNESCO World Heritage sites. In 1969 Karel Hubáček was awarded the prestigious Perret Prize of the International Union of Architects (UIA).
Access
The monument is accessible by road and by the Ještěd cable car from the foot of the mountain. However, the cable car has been closed indefinitely since a crash in 2021.
Construction
After the existing Ještěd lodge burned down in January 1963, a decision was made by Restaurace Liberec (the company that used to manage the burned-down lodges) and the Prague Radio Communications Administration |
https://en.wikipedia.org/wiki/Heterostyly | Heterostyly is a unique form of polymorphism and herkogamy in flowers. In a heterostylous species, two or three morphological types of flowers, termed "morphs", exist in the population. On each individual plant, all flowers share the same morph. The flower morphs differ in the lengths of the pistil and stamens, and these traits are not continuous. The morph phenotype is genetically linked to genes responsible for a unique system of self-incompatibility, termed heteromorphic self-incompatibility, that is, the pollen from a flower on one morph cannot fertilize another flower of the same morph.
Heterostylous plants having two flower morphs are termed "distylous". In one morph (termed "pin", "longistylous", or "long-styled" flower) the stamens are short and the pistils are long; in the second morph (termed "thrum", "brevistylous", or "short-styled" flower) the stamens are long and the pistils are short; the length of the pistil in one morph equals the length of the stamens in the second morph, and vice versa. Examples of distylous plants are the primrose and many other Primula species, buckwheat, flax and other Linum species, some Lythrum species, and many species of Cryptantha.
Heterostylous plants having three flower morphs are termed "tristylous". Each morph has two types of stamens. In one morph, the pistil is short, and the stamens are long and intermediate; in the second morph, the pistil is intermediate, and the stamens are short and long; in the third morph, the pistil is long, and the stamens are short and intermediate. Oxalis pes-caprae, purple loosestrife (Lythrum salicaria) and some other species of Lythrum are trimorphic.
The lengths of stamens and pistils in heterostylous flowers are adapted for pollination by different pollinators, or different body parts of the same pollinator. Thus, pollen originating in a long stamen will reach primarily long rather than short pistils, and vice versa. When pollen is transferred between two flowers of the same morph |
https://en.wikipedia.org/wiki/Subquotient | In the mathematical fields of category theory and abstract algebra, a subquotient is a quotient object of a subobject. Subquotients are particularly important in abelian categories, and in group theory, where they are also known as sections, though this conflicts with a different meaning in category theory.
In the literature about sporadic groups, wordings like "H is involved in G" can be found, with the apparent meaning of "H is a subquotient of G".
A quotient of a subrepresentation of a representation (of, say, a group) might be called a subquotient representation; e.g., Harish-Chandra's subquotient theorem.
Examples
Of the 26 sporadic groups, the 20 subquotients of the monster group are referred to as the "Happy Family", whereas the remaining 6 are called "pariah groups."
Order relation
The relation "is a subquotient of" is an order relation.
Proof of transitivity for groups
Notation
For a group G, a subgroup H of G, and a normal subgroup N of H, the quotient group H/N is a subquotient of G.
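For instance (a standard worked example, not taken from this article), the cyclic group of order 3 is a subquotient of the symmetric group S4: the alternating group A4 is a subgroup of S4, and the Klein four-group V4 is a normal subgroup of A4, so the quotient A4/V4, of order 12/4 = 3, qualifies.

```latex
% C_3 as a subquotient of S_4:
% A_4 <= S_4 and V_4 normal in A_4, with |A_4/V_4| = 12/4 = 3.
\[
  V_4 \trianglelefteq A_4 \le S_4
  \quad\Longrightarrow\quad
  A_4/V_4 \cong C_3 \ \text{is a subquotient of}\ S_4 .
\]
```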
Let H″/N″ be a subquotient of H′ := H/N, let H′ in turn be a subquotient of G, and let φ: H → H′ be the canonical homomorphism. Then the maps induced by φ are surjective for the respective pairs (H, H′), (φ⁻¹(H″), H″), and (φ⁻¹(N″), N″).
The preimages φ⁻¹(H″) and φ⁻¹(N″) are both subgroups of H containing N, and φ(φ⁻¹(H″)) = H″ and φ(φ⁻¹(N″)) = N″, because every y ∈ H″ has a preimage x ∈ H with φ(x) = y. Moreover, the subgroup φ⁻¹(N″) is normal in φ⁻¹(H″).
As a consequence, the subquotient H″/N″ of H′ is a subquotient of G in the form H″/N″ ≅ φ⁻¹(H″)/φ⁻¹(N″).
Relation to cardinal order
In constructive set theory, where the law of excluded middle does not necessarily hold, one can consider the relation "is a subquotient of" as replacing the usual order relation(s) on cardinals. When one has the law of the excluded middle, then a subquotient B of A is either the empty set or there is an onto function A → B. This order relation is traditionally denoted ≤∗.
If additionally the axiom of choice holds, then B has a one-to-one function to A and this order relation is the usual ≤ on the corresponding cardinals.
See also
Homological algebra
Subcountable |
https://en.wikipedia.org/wiki/Loose%20coupling | In computing and systems design, a loosely coupled system is one
in which components are weakly associated (have breakable relationships) with each other, so that changes in one component have little effect on the existence or performance of another component.
in which each of its components has, or makes use of, little or no knowledge of the definitions of other separate components. Subareas include the coupling of classes, interfaces, data, and services. Loose coupling is the opposite of tight coupling.
Advantages and disadvantages
Components in a loosely coupled system can be replaced with alternative implementations that provide the same services. Components in a loosely coupled system are less constrained to the same platform, language, operating system, or build environment.
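A minimal sketch of this in Python (hypothetical names, not from the article): the client function depends only on an interface, so either concrete component can be swapped in without changing the client.

```python
from typing import Protocol


class Notifier(Protocol):
    """The interface the client depends on."""
    def send(self, message: str) -> None: ...


class EmailNotifier:
    def send(self, message: str) -> None:
        print(f"email: {message}")


class SmsNotifier:
    def send(self, message: str) -> None:
        print(f"sms: {message}")


def alert(notifier: Notifier, message: str) -> None:
    # The client knows only the Notifier interface, so implementations
    # can be replaced without touching this code.
    notifier.send(message)


alert(EmailNotifier(), "disk nearly full")
alert(SmsNotifier(), "disk nearly full")
```

Here EmailNotifier and SmsNotifier are interchangeable precisely because alert makes no use of their definitions beyond the shared interface.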
If systems are decoupled in time, it is difficult to also provide transactional integrity; additional coordination protocols are required. Data replication across different systems provides loose coupling (in availability), but creates issues in maintaining consistency (data synchronization).
In integration
Loose coupling in broader distributed system design is achieved by the use of transactions, queues provided by message-oriented middleware, and interoperability standards.
Four types of autonomy, which promote loose coupling, are: reference autonomy, time autonomy, format autonomy, and platform autonomy.
Loose coupling is an architectural principle and design goal in service-oriented architectures; eleven forms of loose coupling and their tight-coupling counterparts have been identified:
physical connections via mediator,
asynchronous communication style,
simple common types only in data model,
weak type system,
data-centric and self-contained messages,
distributed control of process logic,
dynamic binding (of service consumers and providers),
platform independence,
business-level compensation rather than system-level transactions,
deployment at different times,
implicit upgrades in ve |
https://en.wikipedia.org/wiki/Const%20%28computer%20programming%29 | In some programming languages, const is a type qualifier (a keyword applied to a data type) that indicates that the data is read-only. While this can be used to declare constants, const in the C family of languages differs from similar constructs in other languages in being part of the type, and it thus has complicated behavior when combined with pointers, references, composite data types, and type-checking. In other languages, the data is not in a single memory location but is copied at compile time on each use. Languages which use it include C, C++, D, JavaScript, Julia, and Rust.
Introduction
When applied in an object declaration, it indicates that the object is a constant: its value may not be changed, unlike a variable. This basic use – to declare constants – has parallels in many other languages.
However, unlike in other languages, in the C family of languages the const is part of the type, not part of the object. For example, in C, int const x = 1; declares an object x of int const type – the const is part of the type, as if it were parsed "(int const) x" – while in Ada, X : constant INTEGER := 1; declares a constant (a kind of object) X of INTEGER type: the constant is part of the object, but not part of the type.
This has two subtle results. Firstly, const can be applied to parts of a more complex type – for example, int const * const x; declares a constant pointer to a constant integer, while int const * x; declares a variable pointer to a constant integer, and int * const x; declares a constant pointer to a variable integer. Secondly, because const is part of the type, it must match as part of type-checking. For example, the following code is invalid:
void f(int& x);   // f takes a reference to a modifiable int
// ...
int const i = 2;  // a const object must be initialized in C++
f(i);             // error: an 'int&' cannot bind to the constant 'i'
because the argument to f must be a variable integer, but i is a constant integer. This matching is a form of program correctness, and is known as const-correctness. This allows a form of programming by contract, where functions specify as part of their type signature whether they modify their arguments or not, an |
https://en.wikipedia.org/wiki/Pullulanase | Pullulanase (limit dextrinase, amylopectin 6-glucanohydrolase, bacterial debranching enzyme, debranching enzyme, α-dextrin endo-1,6-α-glucosidase, R-enzyme, pullulan α-1,6-glucanohydrolase) is a specific kind of glucanase, an amylolytic exoenzyme, that degrades pullulan. It is produced as an extracellular, cell surface-anchored lipoprotein by Gram-negative bacteria of the genus Klebsiella. Type I pullulanases specifically attack α-1,6 linkages, while type II pullulanases are also able to hydrolyse α-1,4 linkages. It is also produced by some other bacteria and archaea. Pullulanase is used as a processing aid in grain-processing biotechnology (production of ethanol and sweeteners).
Pullulanase is also known as pullulan-6-glucanohydrolase (debranching enzyme). Its substrate, pullulan, is regarded as a chain of maltotriose units linked by α-1,6-glycosidic bonds. Pullulanase hydrolytically cleaves pullulan, an α-glucan polysaccharide.
Pullulanase in the food industry
In the food industry, pullulanase works well as an ingredient. Pullulan can be applied directly to foods as a protective glaze or edible film due to its film-forming ability. It can be used for the micro-encapsulation of spices and flavoring agents. It is used in mayonnaise to maintain consistency and quality, and it is additionally used in low-calorie food formulations as a starch replacement.
https://en.wikipedia.org/wiki/Stink%20bomb | A stink bomb, sometimes called a stinkpot, is a device designed to create an unpleasant smell. They range in effectiveness from being used as simple pranks to military grade malodorants or riot control chemical agents.
History
A stink bomb that could be launched with arrows was invented by Leonardo da Vinci.
The 1972 U.S. presidential campaign of Edmund Muskie was disrupted at least four times with stink bombs during the Florida presidential primary. Stink bombs were set off at campaign picnics in Miami and Tampa, at the Muskie campaign headquarters in Tampa, and at offices in Tampa where the campaign's telephone bank was located. The stink bomb plantings disrupted the picnics and campaign operations, and were deemed by the Select Committee on Presidential Campaign Activities of the U.S. Senate to have "disrupted, confused, and unnecessarily interfered with a campaign for the office of the Presidency".
In 2004, it was reported that the Israeli weapons research and development directorate had created a liquid stink bomb, dubbed the "skunk bomb", with an odor that lingers for five years on clothing. It is a synthetic stink bomb based upon the chemistry of the spray emitted from the anal glands of the skunk. It was designed as a crowd-control tool, a deterrent that causes people to scatter, for example at a protest. It has been described as a less-than-lethal weapon.
Range
At the lower end of the spectrum, relatively harmless stink bombs consist of a mixture of ammonium sulfide, vinegar, and bicarbonate, which smells strongly of rotten eggs. When exposed to air, the ammonium sulfide reacts with moisture and hydrolyzes, releasing a mixture of hydrogen sulfide (the rotten-egg smell) and ammonia. Another mixture consists of hydrogen sulfide and ammonia mixed together directly.
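The net reaction behind the smell can be sketched as follows; the balancing follows from the species named above, while treating the vinegar and bicarbonate as auxiliary ingredients is an assumption, not a statement from the article.

```latex
% Net decomposition of ammonium sulfide into ammonia and hydrogen sulfide.
% (Vinegar and bicarbonate are assumed to be auxiliary ingredients here.)
\[
  \mathrm{(NH_4)_2S \;\rightleftharpoons\; 2\,NH_3\uparrow \,+\, H_2S\uparrow}
\]
```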
Other popular substances on which to base stink bombs are thiols with lower molecular weight such as methyl mercaptan and ethyl mercapt |
https://en.wikipedia.org/wiki/Mulibrey%20nanism | Mulibrey nanism ("MUscle-LIver-BRain-EYe nanism") is a rare autosomal recessive congenital disorder. It causes severe growth failure along with abnormalities of the heart, muscle, liver, brain, and eye. The causative gene, TRIM37, is responsible for various cellular functions, including developmental patterning.
Signs and symptoms
An individual with Mulibrey nanism has growth retardation, a short broad neck, misshapen sternum, small thorax, square shoulders, enlarged liver, and yellowish dots in the ocular fundi. Individuals with Mulibrey nanism have also been reported to have intellectual disability, tumors, and infertility.
Genetics
Mulibrey nanism is caused by mutations of the TRIM37 gene, located at human chromosome 17q22-23. The disorder is inherited in an autosomal recessive manner. This means the defective gene responsible for the disorder is located on an autosome (chromosome 17 is an autosome), and two copies of the defective gene (one inherited from each parent) are required in order to be born with the disorder. The parents of an individual with an autosomal recessive disorder both carry one copy of the defective gene, but usually do not experience any symptoms of the disorder.
Diagnosis
Mulibrey nanism can be diagnosed via genetic testing, as well as by the physical characteristics (signs and symptoms) displayed by the individual.
Treatment
Those with Mulibrey nanism should have routine medical follow-up; additionally, the following can be done:
Growth hormone treatment
Regular pelvic exams
Pericardiectomy
Prevalence
Worldwide, it has been documented in 110 persons, 85 of them Finnish. Many people with Mulibrey nanism have parents who are closely related (consanguineous), consistent with its recessive inheritance. Signs and symptoms are variable: siblings with the disease do not always share the same symptoms.
See also
Wilms' tumour |