| source | text |
|---|---|
https://en.wikipedia.org/wiki/Tetraphenylporphine%20sulfonate | Tetraphenylporphine sulfonate is a trypanocidal agent. |
https://en.wikipedia.org/wiki/Voluntary%20observing%20ship%20program | Due to the importance of surface weather observations from the ocean, the voluntary observing ship program, known as VOS, was set up to train crews in taking weather observations at sea and to calibrate the weather sensors used aboard ships, such as barometers and thermometers, when they arrive in port. An Automatic Voluntary Observing Ships (AVOS) System is an automated weather station that transmits VOS program reports.
See also
Old Weather
AIREP
PIREP
Citizen Weather Observer Program
Phenology |
https://en.wikipedia.org/wiki/Hand%20drill | A hand drill is the simplest primitive method of producing rapid rotary motion of a rod. It consists of holding the rod vertically between both hands and moving them back and forth in opposite directions, as in rubbing them together. The rod is typically one or two feet long and half an inch in diameter.
Hand drills have been used by many primitive societies as a fire drill to start a fire. It is still often learned as a useful survival skill. A hand drill could also be used as a tool for drilling holes in hard materials such as wood, stone, or bone.
For either use, the hands must also exert downward pressure while spinning the rod. As a result, the hands drift down the rod and must periodically be raised to the top again. These interruptions can be avoided by cutting a notch at the top of the shaft, tying a cord through it, and then inserting the thumbs through loops in the cord. However, skilled operators can either maintain pressure with their hands almost stationary vertically, or, in a movement comparable to floating, "float" their hands back to the top of the drill.
See also
Bow drill
Pump drill
Brace (tool) |
https://en.wikipedia.org/wiki/Bogomol%27nyi%E2%80%93Prasad%E2%80%93Sommerfield%20bound | The Bogomol'nyi–Prasad–Sommerfield bound (named after Evgeny Bogomolny, M.K. Prasad, and Charles Sommerfield) is a series of inequalities for solutions of partial differential equations depending on the homotopy class of the solution at infinity. This set of inequalities is very useful for solving soliton equations. Often, by insisting that the bound be satisfied (called "saturated"), one can come up with a simpler set of partial differential equations to solve, the Bogomolny equations. Solutions saturating the bound are called "BPS states" and play an important role in field theory and string theory.
Example
In a theory of non-abelian Yang–Mills–Higgs, the energy at a given time t is given by
E = ∫ d³x [ ½(D_iΦ)^a(D_iΦ)^a + ¼ F^a_ij F^a_ij + V(Φ) ],
where D_i is the covariant derivative of the Higgs field Φ and V is the potential. If we assume that V is nonnegative and is zero only for the Higgs vacuum, and that the Higgs field is in the adjoint representation, then, by virtue of the Yang–Mills Bianchi identity, the energy can be rewritten as
E = ∫ d³x [ ¼(F^a_ij ∓ ε_ijk (D_kΦ)^a)² + V(Φ) ] ± ½ ∫ d³x ε_ijk ∂_k(F^a_ij Φ^a).
Therefore,
E ≥ | ½ ∫ d³x ε_ijk ∂_k(F^a_ij Φ^a) |.
Saturation of the inequality is obtained when the Bogomolny equations
F^a_ij = ±ε_ijk (D_kΦ)^a
are satisfied.
The other condition for saturation is that the Higgs mass and self-interaction are zero, which is the case in N=2 supersymmetric theories.
This quantity is the absolute value of the magnetic flux.
A slight generalization applying to dyons also exists. For that, the Higgs field needs to be a complex adjoint, not a real adjoint.
Supersymmetry
In supersymmetry, the BPS bound is saturated when half (or a quarter or an eighth) of the SUSY generators are unbroken. This happens when the mass is equal to the central extension, which is typically a topological charge.
In fact, most bosonic BPS bounds actually come from the bosonic sector of a supersymmetric theory and this explains their origin. |
https://en.wikipedia.org/wiki/Fran%C3%A7ois%20Budan%20de%20Boislaurent | Ferdinand François Désiré Budan de Boislaurent (28 September 1761 – 6 October 1840) was a French amateur mathematician, best known for a tract, Nouvelle méthode pour la résolution des équations numériques,
first published in Paris in 1807, but based on work from 1803.
Budan was born in Limonade, Cap-Français, Saint-Domingue (now Haiti) on 28 September 1761. His early education was at Juilly, France. He then proceeded to Paris where he studied medicine, receiving a doctorate for a thesis entitled Essai sur cette question d'économie médicale : Convient-il qu'un malade soit instruit de sa situation? Budan died in Paris on 6 October 1840.
Budan explains in his book how, given a monic polynomial p(x), the coefficients of p(x+1) can be obtained by developing a Pascal-like triangle with first row the coefficients of p(x), rather than by expanding successive powers of x+1, as in Pascal's triangle proper, and then summing; thus, the method has the flavour of lattice path combinatorics. Taken together with Descartes' Rule of Signs, this leads to an upper bound on the number of the real roots a polynomial has inside an open interval. Although Budan's Theorem, as this result was known, was taken up by, among others, Pierre Louis Marie Bourdon (1779-1854), in his celebrated algebra textbook, it tended to be eclipsed by an equivalent result due to Joseph Fourier, as the consequence of a priority dispute. Interest in Budan's theorem has been revived because some further computational results are more easily deducible from it than from Fourier's version of the theorem.
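One minimal reading of the Pascal-like computation described above: the coefficients of p(x+1) are obtained from those of p(x) by repeated running sums along the coefficient row, rather than by expanding powers of x+1. The sketch below (an assumed, illustrative implementation, with `shift_by_one` a hypothetical name) follows that reading:

```python
def shift_by_one(coeffs):
    """Return the coefficients (highest degree first) of p(x+1),
    given those of p(x), via repeated running sums: each pass adds
    to every entry the entry on its left, building a Pascal-like
    triangle row by row."""
    b = list(coeffs)
    n = len(b) - 1  # degree of p
    for i in range(n):
        for j in range(1, n - i + 1):
            b[j] += b[j - 1]
    return b

# p(x) = x^2 - 3x + 2 = (x-1)(x-2), so p(x+1) = x^2 - x
print(shift_by_one([1, -3, 2]))     # [1, -1, 0]
# Sanity check against Pascal's triangle proper: (x+1)^3
print(shift_by_one([1, 0, 0, 0]))   # [1, 3, 3, 1]
```

Combined with Descartes' rule of signs, the drop in sign variations from [1, −3, 2] (two) to [1, −1, 0] (one) bounds the number of real roots of p in the open interval (0, 1), in the spirit of Budan's theorem.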
Budan's book was read across the English Channel; for example, Peter Barlow includes mention of it in his entry on Approximation in his Dictionary (1814), although grouping it with the method of Joseph-Louis Lagrange as being accurate, but of more theoretical interest than practical use. Budan's work on approximation was studied by Horner in preparing his celebrated article in the Philosophical Transactions of the |
https://en.wikipedia.org/wiki/Paul%20M%C3%BCller%20%28biologist%29 | Paul Müller (11 October 1940 – 30 May 2010) was a German professor of biology at the University of Trier. He was born in Gersweiler and died in Saarland.
The focus of his work was biogeography, in particular in the Neotropics. His Ph.D. work was on birds and other vertebrates of the Ilha de São Sebastião (Brazil). Subsequently, his studies focussed on herpetofauna, tropical ecology and sustainability of hunting.
Awards
Ordre national du mérite (1984)
Bundesverdienstkreuz (1994)
Honorary Doctorate of Chiang Mai University (1988)
Honorary Doctorate of Yokohama National University (1989)
See also
Jürgen Haffer |
https://en.wikipedia.org/wiki/Joint%20compatibility%20branch%20and%20bound | Joint compatibility branch and bound (JCBB) is an algorithm in computer vision and robotics commonly used for data association in simultaneous localization and mapping. JCBB measures the joint compatibility of a set of pairings that successfully rejects spurious matchings and is hence known to be robust in complex environments. |
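The idea of testing a whole set of pairings jointly, inside a branch-and-bound search over an interpretation tree, can be sketched as follows. This is a toy illustration under strong assumptions: pairings are treated as independent scalars, so the joint Mahalanobis distance reduces to a sum; the real JCBB uses the full joint innovation covariance and typically breaks ties by the smallest joint distance.

```python
# 95% chi-square critical values for 1..6 degrees of freedom (table values)
CHI2_95 = [None, 3.84, 5.99, 7.81, 9.49, 11.07, 12.59]

def jcbb(d2, best=None, i=0, chosen=()):
    """Toy JCBB over an interpretation tree.

    d2[i][j] is the squared Mahalanobis distance of pairing measurement i
    with map feature j. Returns the largest jointly compatible set of
    (measurement, feature) pairings: a branch is pruned as soon as its
    joint distance fails the chi-square gate."""
    if best is None:
        best = []
    if i == len(d2):
        return list(chosen) if len(chosen) > len(best) else best
    used = {j for _, j in chosen}
    for j in range(len(d2[i])):          # branch: pair measurement i with j
        if j in used:
            continue
        cand = chosen + ((i, j),)
        joint = sum(d2[m][f] for m, f in cand)
        if joint <= CHI2_95[len(cand)]:  # joint compatibility gate
            best = jcbb(d2, best, i + 1, cand)
    # bound: also consider leaving measurement i unmatched (spurious),
    # but only if the branch can still beat the best set found so far
    if len(d2) - (i + 1) + len(chosen) > len(best):
        best = jcbb(d2, best, i + 1, chosen)
    return best

# Two measurements, three map features; small distances mark likely matches
d2 = [[0.5, 9.0, 4.0],
      [8.0, 0.3, 5.0]]
print(jcbb(d2))  # [(0, 0), (1, 1)]
```

The joint gate is what rejects spurious matchings: a pairing that looks individually plausible is discarded if it is inconsistent with the rest of the hypothesis.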
https://en.wikipedia.org/wiki/United%20States%20Drought%20Monitor | The United States Drought Monitor is a collection of measures that allows experts to assess droughts in the United States. The monitor is not an agency but a partnership between the National Drought Mitigation Center at the University of Nebraska-Lincoln, the United States Department of Agriculture, and the National Oceanic and Atmospheric Administration. Different experts provide their best judgment to outline a single map every week that shows droughts throughout the United States. The effort started in 1999 as a federal, state, and academic partnership, growing out of an initiative by the Western Governors Association to provide timely and understandable scientific information on water supply and drought for policymakers.
The monitor is produced by a rotating group of authors and incorporates review from a group of 250 climatologists, extension agents, and others across the nation. Each week the authors revise the previous map based on rainfall, snowfall, and other events, and observers' reports of how drought is affecting crops, wildlife, and other indicators. Authors balance conflicting data and reports to come up with a new map every Wednesday afternoon. The map is then released on the following Thursday morning. |
https://en.wikipedia.org/wiki/Disk%20controller | The disk controller is the controller circuit which enables the CPU to communicate with a hard disk, floppy disk or other kind of disk drive. It also provides an interface between the disk drive and the bus connecting it to the rest of the system.
Early disk controllers were identified by their storage methods and data encoding. They were typically implemented on a separate controller card. Modified frequency modulation (MFM) controllers were the most common type in small computers, used for both floppy disk and hard disk drives. Run length limited (RLL) controllers used a denser encoding to increase storage capacity by about 50%. Priam created a proprietary storage algorithm that could double the disk storage. Shugart Associates Systems Interface (SASI) was a predecessor to SCSI.
Modern disk controllers are integrated into the disk drive as peripheral controllers. For example, disks called "SCSI disks" have built-in SCSI controllers. In the past, before most SCSI controller functionality was implemented in a single chip, separate SCSI controllers interfaced disks to the SCSI bus.
These integrated peripheral controllers communicate with a host adapter in the host system over a standardized, high-level storage bus interface. The most common types of interfaces provided nowadays by host controllers are PATA (IDE) and Serial ATA for home use. High-end disks use Parallel SCSI, Fibre Channel or Serial Attached SCSI.
Disk controllers can also control the timing of access to flash memory which is not mechanical in nature (i.e. no physical disk).
Disk controller versus host adapter
The component that allows a computer to talk to a peripheral bus is the host adapter or host bus adapter (HBA; e.g. the Advanced Host Controller Interface, AHCI). A disk controller allows a disk to talk to the same bus. Signals read by a disk read-and-write head are converted by the disk controller, then transmitted over the peripheral bus, then converted again by the host adapter into the suita |
https://en.wikipedia.org/wiki/Brendel%E2%80%93Bormann%20oscillator%20model | The Brendel–Bormann oscillator model is a mathematical formula for the frequency dependence of the complex-valued relative permittivity, sometimes referred to as the dielectric function. The model has been used to fit to the complex refractive index of materials with absorption lineshapes exhibiting non-Lorentzian broadening, such as metals and amorphous insulators, across broad spectral ranges, typically near-ultraviolet, visible, and infrared frequencies. The dispersion relation bears the names of R. Brendel and D. Bormann, who derived the model in 1992, despite first being applied to optical constants in the literature by Andrei M. Efimov and E. G. Makarova in 1983. Around that time, several other researchers also independently discovered the model. The Brendel-Bormann oscillator model is aphysical because it does not satisfy the Kramers–Kronig relations. The model is non-causal, due to a singularity at zero frequency, and non-Hermitian. These drawbacks inspired J. Orosco and C. F. M. Coimbra to develop a similar, causal oscillator model.
Mathematical formulation
The general form of an oscillator model is given by
ε_r(ω) = ε_∞ + Σ_j χ_j(ω),
where
ε_r is the relative permittivity,
ε_∞ is the value of the relative permittivity at infinite frequency,
ω is the angular frequency,
χ_j is the contribution from the j-th absorption mechanism oscillator.
The Brendel–Bormann oscillator χ_j is related to the Lorentzian oscillator χ^L_j and Gaussian oscillator χ^G_j, given by
χ^L_j(ω; x) = s_j / (x² − ω² − iΓ_j ω),
χ^G_j(x) = exp[−(x / (√2 σ_j))²] / (√(2π) σ_j),
where
s_j is the Lorentzian strength of the j-th oscillator,
ω_{0,j} is the Lorentzian resonant frequency of the j-th oscillator,
Γ_j is the Lorentzian broadening of the j-th oscillator,
σ_j is the Gaussian broadening of the j-th oscillator.
The Brendel–Bormann oscillator is obtained from the convolution of the two aforementioned oscillators in the manner of
χ_j(ω) = ∫ χ^G_j(x − ω_{0,j}) χ^L_j(ω; x) dx,
which yields
χ_j(ω) = (i √π s_j / (2√2 a_j σ_j)) [ w((a_j − ω_{0,j}) / (√2 σ_j)) + w((a_j + ω_{0,j}) / (√2 σ_j)) ],
where
w(z) is the Faddeeva function,
a_j = √(ω² + iΓ_j ω).
The square root in the definition of a_j must be taken such that its imaginary component is positive. This is achieved by: |
https://en.wikipedia.org/wiki/Reversal%20potential | In a biological membrane, the reversal potential is the membrane potential at which the direction of ionic current reverses. At the reversal potential, there is no net flow of ions from one side of the membrane to the other. For channels that are permeable to only a single type of ions, the reversal potential is identical to the equilibrium potential of the ion.
Equilibrium potential
The equilibrium potential for an ion is the membrane potential at which there is no net movement of the ion. The flow of any inorganic ion, such as Na+ or K+, through an ion channel (membranes are otherwise normally impermeable to ions) is driven by the electrochemical gradient for that ion. This gradient consists of two parts: the difference in the concentration of that ion across the membrane, and the voltage gradient. When these two influences balance each other, the electrochemical gradient for the ion is zero and there is no net flow of the ion through the channel; this also translates to no current across the membrane. The voltage gradient at which this equilibrium is reached is the equilibrium potential for the ion, and it can be calculated from the Nernst equation.
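The Nernst calculation described above is a one-liner; the sketch below uses illustrative, textbook-style K+ concentrations (assumed values, not from the source):

```python
import math

# Nernst equation: E = (R*T)/(z*F) * ln([ion]_out / [ion]_in)
R = 8.314      # J/(mol*K), gas constant
F = 96485.0    # C/mol, Faraday constant

def nernst(z, conc_out, conc_in, temp_k=310.0):
    """Equilibrium potential in volts for an ion of valence z."""
    return (R * temp_k) / (z * F) * math.log(conc_out / conc_in)

# Typical mammalian K+ concentrations: ~5 mM outside, ~140 mM inside,
# at body temperature (310 K)
e_k = nernst(z=+1, conc_out=5.0, conc_in=140.0)
print(f"E_K = {e_k * 1000:.1f} mV")  # about -89 mV
```

Because the outside concentration is much lower than the inside one, the logarithm is negative and the K+ equilibrium potential comes out strongly negative, matching the usual resting-potential picture.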
Mathematical models and the driving force
We can consider as an example a positively charged ion, such as K+, and a negatively charged membrane, as it is commonly the case in most organisms. The membrane voltage opposes the flow of the potassium ions out of the cell and the ions can leave the interior of the cell only if they have sufficient thermal energy to overcome the energy barrier produced by the negative membrane voltage. However, this biasing effect can be overcome by an opposing concentration gradient if the interior concentration is high enough which favours the potassium ions leaving the cell.
An important concept related to the equilibrium potential is the driving force. Driving force is simply defined as the difference between the actual membrane potential and an ion's equilibrium potential where refers t |
https://en.wikipedia.org/wiki/Italo%20Jose%20Dejter | Italo Jose Dejter (born December 17, 1939) is an Argentine-born American mathematician, a retired professor of mathematics and computer science at the University of Puerto Rico (August 1984 – February 2018), and a researcher in algebraic topology,
differential topology, graph theory, coding theory and combinatorial designs.
He obtained a Licentiate degree in mathematics from University of Buenos Aires in 1967, arrived at Rutgers University in 1970 by means of a Guggenheim Fellowship and obtained a Ph.D. degree in mathematics in 1975 under the supervision of Professor Ted Petrie, with support of the National Science Foundation. He was a professor at the
Federal University of Santa Catarina, Brazil, from 1977 to 1984, with grants from the National Council for Scientific and Technological Development, (CNPq).
Dejter has been a visiting scholar at a number of research institutions, including University of São Paulo, Instituto Nacional de Matemática Pura e Aplicada, Federal University of Rio Grande do Sul,
University of Cambridge, National Autonomous University of Mexico,
Simon Fraser University, University of Victoria, New York University, University of Illinois at Urbana–Champaign, McMaster University, DIMACS, Autonomous University of Barcelona, Technical University of Denmark, Auburn University, Polytechnic University of Catalonia, Technical University of Madrid, Charles University, Ottawa University and Simón Bolívar University. The sections below describe the relevance of Dejter's work in the research areas mentioned in the first paragraph above, or in the adjacent box.
Algebraic and differential topology
In 1971, Ted Petrie conjectured that if X is a closed, smooth 2n-dimensional homotopy complex projective space that admits a nontrivial smooth action of the circle,
and if a function h, mapping X onto the 2n-dimensional complex projective space, is a homotopy equivalence, then h preserves the Pontrjagin classes. In 1975, Dejter proved Petrie's Conjecture for n=3, |
https://en.wikipedia.org/wiki/Content%20adaptation | Content adaptation is the action of transforming content to adapt to device capabilities. Content adaptation is usually related to mobile devices, which require special handling because of their limited computational power, small screen size, and constrained keyboard functionality.
Content adaptation can roughly be divided into two fields:
Media content adaptation that adapts media files.
Browsing content adaptation that adapts a website to mobile devices.
Browsing content adaptation
Advances in the capabilities of small, mobile devices such as mobile phones (cell phones) and Personal Digital Assistants have led to an explosion in the number of types of device that can now access the Web. Some commentators refer to the Web that can be accessed from mobile devices as the Mobile Web.
The sheer number and variety of Web-enabled devices poses significant challenges for authors of websites who want to support access from mobile devices. The W3C Device Independence Working Group described many of the issues in its report Authoring Challenges for Device Independence.
Content adaptation is one approach to a solution. Rather than requiring authors to create pages explicitly for each type of device that might request them, content adaptation transforms an author's materials automatically.
For example, content might be converted from a device-independent markup language, such as XDIME, an implementation of the W3C's DIAL specification, into a form suitable for the device, such as XHTML Basic, C-HTML, or WML. Similarly, a suitable device-specific CSS style sheet or a set of in-line styles might be generated from abstract style definitions. Likewise, a device specific layout might be generated from abstract layout definitions.
Once created, the device-specific materials form the response returned to the device from which the request was made.
Another approach is responsive web design (RWD), a more recent trend based on CSS.
Content adaptation requ |
https://en.wikipedia.org/wiki/Tri-level%20sync | Tri-level sync is an analogue video synchronization pulse primarily used for the locking of high-definition video signals (genlock).
It is preferred in HD environments over black and burst, as timing jitter is reduced due to the nature of its higher frequency. It also benefits from having no DC content, as the pulses are in both polarities.
Synchronization
Modern real-time multi-source HD facilities have many pieces of equipment that all output HD-SDI video. If this baseband video is to be mixed, switched or luma keyed with any other sources, then they will need to be synchronous, i.e. the first pixel of the first line must be transmitted at the same time (within a few microseconds). This then allows the switcher to cut, mix or key these sources together with a minimal amount of delay (about one HD video line, i.e. 1/(1125×25) seconds, for 50i video). This synchronization is done by supplying each piece of equipment with either a tri-level sync or black-and-burst input. There are video switchers that do not require synchronous sources, but these operate with a much bigger delay.
Waveform
The main pulse definition is as follows: a negative-going pulse of 300 mV lasting 40 sample clocks followed by a positive-going pulse of 300 mV lasting 40 sample clocks. The allowed rise/fall time for each of the transitions is 4 sample clocks. This is with a clock rate of 74.25 MHz. |
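The pulse timing above can be sketched as a sample sequence. This is one possible reading of the definition, drawing the 4-clock transitions as linear ramps counted inside each 40-clock half; the real specification only bounds the transition time:

```python
# Tri-level sync pulse per the timing in the text: 300 mV halves,
# 40 sample clocks each, 4-clock transitions, 74.25 MHz sample clock.
CLOCK_HZ = 74.25e6

def trilevel_pulse():
    """Return (time_us, level_mV) samples of one idealized pulse."""
    levels = []
    def ramp(frm, to, n=4):
        # linear ramp over n sample clocks (an idealization)
        levels.extend(frm + (to - frm) * (k + 1) / n for k in range(n))
    ramp(0, -300)               # fall from blanking level to -300 mV
    levels.extend([-300] * 36)  # rest of the 40-clock negative half
    ramp(-300, 300)             # cross zero up to +300 mV
    levels.extend([300] * 36)   # rest of the 40-clock positive half
    ramp(300, 0)                # return to blanking level
    t = [k / CLOCK_HZ * 1e6 for k in range(len(levels))]
    return t, levels

t, v = trilevel_pulse()
print(len(v), min(v), max(v))  # 84 samples, swinging -300 to +300 mV
```

The symmetric negative and positive halves are what give the waveform zero DC content, one of the advantages over black and burst noted above.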
https://en.wikipedia.org/wiki/XCP%20%28protocol%29 | XCP (or "Universal Measurement and Calibration Protocol") is a network protocol originating from ASAM for connecting calibration systems to electronic control units, ECUs. It enables read and write access to variables and memory contents of microcontroller systems at runtime. Entire datasets can be acquired or stimulated synchronous to events triggered by timers or operating conditions. In addition, XCP also supports programming of flash memory.
ASAM states "The primary purpose of XCP is to adjust internal parameters and acquire the current values of internal variables of an ECU. The first letter X in XCP expresses the fact that the protocol is designed for a variety of bus systems."
In 2003, the protocol was standardized as "ASAM MCD-1 XCP". XCP is a successor to CAN Calibration Protocol (CCP) that was developed back in the mid-1990s. At that time, CAN was the dominant networking system in the automobile industry. Over time, other bus systems such as LIN, MOST and FlexRay emerged and made it necessary to extend the protocol to other transport media. In addition, XCP supports synchronous and asynchronous serial interfaces. With Ethernet or USB as the transport medium, XCP can also serve as a standardized interface to analog measurement devices and to hardware interface converters to RAM emulators, JTAG or other microcontroller debug interfaces.
Due to its broad range of use, a primary goal in the development of XCP was to achieve as lean an implementation in the ECU as possible and high scalability of features and resource utilization. XCP can even be implemented on 8-bit microcontrollers for CAN or SCI with few resources, and it exploits the full potential of FlexRay or Ethernet on high-performance platforms.
As a two-layer protocol, XCP consistently separates the protocol and transport layers from one another and adheres to a Single-Master/Multi-Slave concept. XCP always uses the same protocol layer independent of the transport layer. The “X” in its name stan |
https://en.wikipedia.org/wiki/Stink%20bomb | A stink bomb, sometimes called a stinkpot, is a device designed to create an unpleasant smell. They range in effectiveness from being used as simple pranks to military grade malodorants or riot control chemical agents.
History
A stink bomb that could be launched with arrows was invented by Leonardo da Vinci.
The 1972 U.S. presidential campaign of Edmund Muskie was disrupted at least four times by stink bombs during the Florida presidential primary. Stink bombs were set off at campaign picnics in Miami and Tampa, at the Muskie campaign headquarters in Tampa, and at offices in Tampa where the campaign's telephone bank was located. The stink bomb plantings disrupted the picnics and campaign operations, and were deemed by the Select Committee on Presidential Campaign Activities of the U.S. Senate to have "disrupted, confused, and unnecessarily interfered with a campaign for the office of the Presidency".
In 2004, it was reported that the Israeli weapons research and development directorate had created a liquid stink bomb, dubbed the "skunk bomb", with an odor that lingers for five years on clothing. It is a synthetic stink bomb based upon the chemistry of the spray that is emitted from the anal glands of the skunk. It was designed as a crowd control tool to be used as a deterrent that causes people to scatter, such as at a protest. It has been described as a less than lethal weapon.
Range
At the lower end of the spectrum, relatively harmless stink bombs consist of a mixture of ammonium sulfide, vinegar and bicarbonate, which smells strongly of rotten eggs. When exposed to air, the ammonium sulfide reacts with moisture, hydrolyzes, and a mixture of hydrogen sulfide (rotten egg smell) and ammonia is released. Another mixture consists of hydrogen sulfide and ammonia mixed together directly.
Other popular substances on which to base stink bombs are thiols with lower molecular weight such as methyl mercaptan and ethyl mercapt |
https://en.wikipedia.org/wiki/Random%20flip-flop | Random flip-flop (RFF) is a theoretical concept of a non-sequential logic circuit capable of generating true randomness. By definition, it operates as an "ordinary" edge-triggered clocked flip-flop, except that its clock input acts randomly, with probability p = 1/2. Unlike Boolean circuits, which behave deterministically, a random flip-flop behaves non-deterministically. By definition, a random flip-flop is electrically compatible with Boolean logic circuits. Together with them, RFFs make up a full set of logic circuits capable of performing arbitrary algorithms, namely realizing a probabilistic Turing machine.
Symbol
Random flip-flops come in all the varieties in which ordinary edge-triggered clocked flip-flops do, for example: D-type random flip-flop (DRFF), T-type random flip-flop (TRFF), JK-type random flip-flop (JKRFF), etc. Symbols for the DRFF, TRFF and JKRFF are shown in Fig. 1.
While varieties are possible, not all of them are needed: a single RFF type can be used to emulate all other types. Emulation of one type of RFF by the other type of RFF can be done using the same additional gates circuitry as for ordinary flip-flops. Examples are shown in the Fig. 2.
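The emulation described above can be illustrated behaviorally: a T-type RFF built from a D-type RFF with the usual D = Q XOR T feedback, exactly as for ordinary flip-flops. Randomness is modeled here with a seeded software PRNG purely for illustration; by definition a real RFF derives it from a physical source:

```python
import random

class DRFF:
    """D-type random flip-flop: on each clock edge the input is latched
    only with probability 1/2 (the randomly acting clock input)."""
    def __init__(self, rng):
        self.rng = rng
        self.q = 0
    def clock(self, d):
        if self.rng.random() < 0.5:
            self.q = d
        return self.q

class TRFF:
    """T-type RFF emulated from a DRFF with the same extra gating as
    for ordinary flip-flops: D is driven by Q XOR T."""
    def __init__(self, rng):
        self.dff = DRFF(rng)
    def clock(self, t):
        return self.dff.clock(self.dff.q ^ t)

# With T held at 1, the output should toggle on about half of the clocks.
rng = random.Random(42)
trff = TRFF(rng)
prev, changes, N = 0, 0, 10_000
for _ in range(N):
    out = trff.clock(1)
    changes += out != prev
    prev = out
print(changes / N)  # close to 0.5
```

Holding T at 1 turns the TRFF into a source of unbiased random transitions, which is the straightforward random-bit-generation application mentioned below.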
Practical realization of random flip-flop
By definition, the action of a theoretical RFF is truly random. This is difficult to achieve in practice and is probably best realized through the use of physical randomness. An RFF based on the quantum-random effect of photon emission in a semiconductor, and its subsequent detection, has been demonstrated to work well up to a clock frequency of 25 MHz. At higher clock frequencies, subsequent actions of the RFF become correlated. This RFF was built using bulk components, and the effort resulted in only a handful of units. Recently, a monolithic chip containing 2800 integrated RFFs based on quantum randomness has been demonstrated in a Bipolar-CMOS-DMOS (BCD) process.
Applications and prospects
One straightforward application of a RFF is generation of random bits, as shown |
https://en.wikipedia.org/wiki/Dixie%20Cornell%20Gebhardt | Dixie Cornell Gebhardt (November 18, 1866 – October 16, 1955) was a state regent and secretary of the Daughters of the American Revolution (DAR) in Iowa during World War I, and designed the flag for the state of Iowa.
At the beginning of the war, Iowa had no state flag, and such a flag would have been expected to be carried by regiments from that state. (As the war progressed, however, it became obvious that regiments comprising men from individual states would no longer be formed.) Gebhardt's flag design was chosen from among several submissions by Governor William L. Harding and the Iowa Council on National Defense. It became the official flag of the state in 1921.
Early life and education
Dixie May Cornell was born on November 18, 1866, in Knoxville, Iowa to Norman Riley Cornell and Mary Fletcher Timmonds. Her father, a pioneer Knoxville physician who served as an army surgeon in the American Civil War with the Iowa Infantry, named his trotting horses, "Iowa Belle," "Jim Dick," and "Jackie" after his three girls. According to her obituary, her mother called her "Dixie" for the south-land and her Green Valley home in Kentucky but friends and relatives also referred to her as "Dickie."
With the exception of a year spent at the Visitation School for Girls in Ottumwa, Iowa in 1883, she lived all her life in Knoxville. She graduated from Knoxville Public Schools in 1885.
Career
She taught briefly after graduating from Knoxville Public Schools in 1885 but returned home to care for her aging parents. On March 20, 1887, she became a member of Knoxville's Chapter M of the P.E.O. Sisterhood, an international women's organization. During her long membership, she served as chapter president and also held offices in the state and supreme chapters.
In June 1900, she married George Tullis Gebhardt.
Originally a member of Abigail Adams Chapter of Daughters of the American Revolution (Des Moines), Gebhardt became the organizer and charter member of Mary Marion Chapter of |
https://en.wikipedia.org/wiki/Imaginary%20unit | The imaginary unit or unit imaginary number (i) is a solution to the quadratic equation x² + 1 = 0. Although there is no real number with this property, i can be used to extend the real numbers to what are called complex numbers, using addition and multiplication. A simple example of the use of i in a complex number is 2 + 3i.
Imaginary numbers are an important mathematical concept; they extend the real number system ℝ to the complex number system ℂ, in which at least one root for every nonconstant polynomial exists (see Algebraic closure and Fundamental theorem of algebra). Here, the term "imaginary" is used because there is no real number having a negative square.
There are two complex square roots of −1: i and −i, just as there are two complex square roots of every real number other than zero (which has one double square root).
In contexts in which use of the letter i is ambiguous or problematic, the letter j is sometimes used instead. For example, in electrical engineering and control systems engineering, the imaginary unit is normally denoted by j instead of i, because i is commonly used to denote electric current.
Definition
The imaginary number i is defined solely by the property that its square is −1:
i² = −1.
With i defined this way, it follows directly from algebra that i and −i are both square roots of −1.
Although the construction is called "imaginary", and although the concept of an imaginary number may be intuitively more difficult to grasp than that of a real number, the construction is valid from a mathematical standpoint. Real number operations can be extended to imaginary and complex numbers, by treating i as an unknown quantity while manipulating an expression (and using the definition to replace any occurrence of i² with −1). Higher integral powers of i can also be replaced with −i, 1, i, or −1:
i³ = i²·i = −i,  i⁴ = i³·i = 1,  i⁵ = i⁴·i = i,
or, equivalently,
iⁿ⁺⁴ = iⁿ.
Similarly, as with any non-zero real number:
i⁰ = 1.
As a complex number, i can be represented in rectangular form as 0 + 1i, with a zero real component and a unit imaginary component. In
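The power cycle described above can be checked with Python's built-in complex type, where the imaginary unit is written 1j:

```python
i = 1j

# Defining property, and the two square roots of -1
assert i**2 == -1 and (-i)**2 == -1

# Higher integral powers cycle with period 4: 1, i, -1, -i, 1, ...
print([i**n for n in range(5)])

# As with any non-zero number, i**0 == 1; and 1/i == -i
assert i**0 == 1
assert 1 / i == -i
```

Since each power is computed by exact multiplications of ±1 and ±1j, the comparisons hold exactly despite floating-point arithmetic.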
https://en.wikipedia.org/wiki/Partial%20cloning | In the field of cell biology, the method of partial cloning (PCL) converts a fully differentiated old somatic cell into a partially reprogrammed young cell that retains all the specialised functions of the differentiated old cell but is simply younger. The method of PCL reverses characteristics associated with old cells. For example, old, senescent, cells rejuvenated by PCL are free of highly condensed senescence-associated heterochromatin foci (SAHF) and re-acquire the proliferation potential of young cells. The method of PCL thus rejuvenates old cells without de-differentiation and passage through an embryonic, pluripotent, stage.
Method
PCL consists of introducing a somatic adult or senescent cell nucleus, or an entire cell with enlarged membrane pores, into an (activated) oocyte, and withdrawing this treated cell before its de-differentiation and first cell division occur. Thus, the progressive rejuvenation capability of the oocyte is used only temporarily in order to obtain a partial natural rejuvenation. PCL makes it possible to envisage a chosen degree of partial rejuvenation by changing the duration of the treated cell's stay in the oocyte. Using PCL, cell de-differentiation and age reprogramming might be, at least partially, separable. Thus the existence of an isolated ageing clock would be confirmed, at least during a certain part of the cellular evolution and involution.
Application
First experimental results show a possibly high efficiency in partial rejuvenation of senescent mouse cells. Notably, PCL rejuvenates exclusively a single tissue or organ; in contrast to classical cloning, PCL is therefore unable to reconstitute an entire organism. Furthermore, PCL is feasible in a few hours, as opposed to classical cloning or induced pluripotent stem cells (iPS), which need weeks or months.
Classical cloning can rejuvenate old cells but the process demands that the old cells must artificially pass through an embryonic cell stage. Partial cloning |
https://en.wikipedia.org/wiki/Thermoception | In physiology, thermoception or thermoreception is the sensation and perception of temperature, or more accurately, temperature differences inferred from heat flux. It deals with a series of events and processes required for an organism to receive a temperature stimulus, convert it to a molecular signal, and recognize and characterize the signal in order to trigger an appropriate defense response.
Thermoception in larger animals is mainly done in the skin; mammals have at least two types of thermoreceptor. The details of how temperature receptors work are still being investigated. Ciliopathy is associated with decreased ability to sense heat; thus cilia may aid in the process. Transient receptor potential channels (TRP channels) are believed to play a role in many species in the sensation of hot, cold, and pain. Vertebrates have at least two types of sensor: those that detect heat and those that detect cold.
In animals
A particularly specialized form of thermoception is used by Crotalinae (pit viper) and Boidae (boa) snakes, which can effectively see the infrared radiation emitted by hot objects. The snakes' face has a pair of holes, or pits, lined with temperature sensors. The sensors indirectly detect infrared radiation by its heating effect on the skin inside the pit. They can work out which part of the pit is hottest, and therefore the direction of the heat source, which could be a warm-blooded prey animal. By combining information from both pits, the snake can also estimate the distance of the object.
The Common vampire bat has specialized infrared sensors in its nose-leaf. Vampire bats are the only mammals that feed exclusively on blood. The infrared sense enables Desmodus to localize homeothermic (warm-blooded) animals (cattle, horses, wild mammals) within a range of about 10 to 15 cm. This infrared perception is possibly used in detecting regions of maximal blood flow on targeted prey.
Other animals with specialized heat detectors are forest fire seeking beetles (Melano |
https://en.wikipedia.org/wiki/Liquefactive%20necrosis | Liquefactive necrosis (or colliquative necrosis) is a type of necrosis which results in a transformation of the tissue into a liquid viscous mass. Often it is associated with focal bacterial or fungal infections, and can also manifest as one of the symptoms of an internal chemical burn. In liquefactive necrosis, the affected cell is completely digested by hydrolytic enzymes, resulting in a soft, circumscribed lesion consisting of pus and the fluid remains of necrotic tissue. Dead leukocytes will remain as a creamy yellow pus. After the removal of cell debris by white blood cells, a fluid filled space is left. It is generally associated with abscess formation and is commonly found in the central nervous system.
In the brain
Due to excitotoxicity, hypoxic death of cells within the central nervous system can result in liquefactive necrosis. This is a process in which lysosomes turn tissues into pus as a result of lysosomal release of digestive enzymes. Loss of tissue architecture means that the tissue can be liquefied. This process is not associated with bacterial action or infection. Ultimately, in a living patient most necrotic cells and their contents disappear.
The affected area is soft, with a liquefied centre containing necrotic debris. Later, a cyst wall is formed.
Microscopically, the cystic space contains necrotic cell debris and macrophages filled with phagocytosed material. The cyst wall is formed by proliferating capillaries, inflammatory cells, and gliosis (proliferating glial cells) in the case of brain and proliferating fibroblasts in the case of abscess cavities.
Brain cells have a large amount of digestive enzymes (hydrolases). These enzymes cause the neural tissue to become soft and liquefy.
In the lung
Liquefactive necrosis can also occur in the lung, especially in the context of lung abscesses.
Infection
Liquefactive necrosis can also take place due to certain infections. Neutrophils, fighting off a bacteria, will release hydrolytic enzymes whi |
https://en.wikipedia.org/wiki/Frisch%E2%80%93Waugh%E2%80%93Lovell%20theorem | In econometrics, the Frisch–Waugh–Lovell (FWL) theorem is named after the econometricians Ragnar Frisch, Frederick V. Waugh, and Michael C. Lovell.
The Frisch–Waugh–Lovell theorem states that if the regression we are concerned with is expressed in terms of two separate sets of predictor variables:
Y = X1β1 + X2β2 + u,
where X1 and X2 are matrices, β1 and β2 are vectors (and u is the error term), then the estimate of β2 will be the same as the estimate of it from a modified regression of the form:
MX1Y = MX1X2β2 + MX1u,
where MX1 projects onto the orthogonal complement of the image of the projection matrix PX1 = X1(X1′X1)⁻¹X1′. Equivalently, MX1 projects onto the orthogonal complement of the column space of X1. Specifically,
MX1 = I − PX1 = I − X1(X1′X1)⁻¹X1′,
and this particular orthogonal projection matrix is known as the residual maker matrix or annihilator matrix.
The vector MX1Y is the vector of residuals from regression of Y on the columns of X1.
The most relevant consequence of the theorem is that the parameters in β2 do not apply to X2 but to MX1X2, that is: the part of X2 uncorrelated with X1. This is the basis for understanding the contribution of each single variable to a multivariate regression (see, for instance, Ch. 13 in ).
The theorem also implies that the secondary regression used for obtaining β2 is unnecessary when the predictor variables are uncorrelated: using projection matrices to make the explanatory variables orthogonal to each other will lead to the same results as running the regression with all non-orthogonal explanators included.
Moreover, the standard errors from the partial regression equal those from the full regression.
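The theorem lends itself to a direct numerical check. The sketch below uses simulated data with hypothetical dimensions: it compares the coefficients on the second block of regressors from the full regression with those from the regression of residualized variables.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])  # first block (with intercept)
X2 = rng.normal(size=(n, 2))                            # second block
beta = np.array([1.0, 2.0, -0.5, 0.3])
y = np.column_stack([X1, X2]) @ beta + rng.normal(size=n)

# Full regression: estimate all four coefficients at once.
b_full = np.linalg.lstsq(np.column_stack([X1, X2]), y, rcond=None)[0]

# FWL route: the annihilator matrix residualizes y and X2 on X1,
# then the second-block coefficients are estimated from the residuals.
M1 = np.eye(n) - X1 @ np.linalg.inv(X1.T @ X1) @ X1.T
b_partial = np.linalg.lstsq(M1 @ X2, M1 @ y, rcond=None)[0]

# The coefficients on the second block agree between the two routes.
print(np.allclose(b_full[2:], b_partial))  # True
```

In exact arithmetic the two estimates coincide; in floating point they agree to machine precision, which is why `allclose` is used for the comparison.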
History
The origin of the theorem is uncertain, but it was well-established in the realm of linear regression before the Frisch and Waugh paper. George Udny Yule's comprehensive analysis of partial regressions, published in 1907, included the theorem in section 9 on page 184. Yule emphasized the theorem's importance for understanding multiple and partial regression and correlation coefficients, as mentioned in section 10 of the same paper.
|
https://en.wikipedia.org/wiki/Vulgamycin | Vulgamycin is an antibiotic made by Streptomyces. |
https://en.wikipedia.org/wiki/MIBE%20architecture | MIBE architecture (Motivated Independent BEhavior) is a behavior-based robot architecture developed at Artificial Intelligence and Robotics Lab of Politecnico di Milano by Fabio La Daga and Andrea Bonarini in 1998. MIBE architecture is based on the idea of animat and derived from subsumption architecture, formerly developed by Rodney Brooks and colleagues at MIT in 1986.
Description
MIBE architecture is based on the assumption that autonomy is grounded on motivation and arises from superimposition of synergetic activities in response to multiple drives. An autonomous agent is developed to achieve one or more goals (primary goals), but secondary goals also originate from environmental and functional constraints. MIBE architecture defines both primary and secondary goals as needs. A specific drive originates from each need. MIBE architecture generates and weights all these drives in an explicit motivational state. The higher the urgency to satisfy a specific need, the higher its weight in the motivational state and the higher the drive to perform a behavior that satisfies the given need.
Differences from subsumption architecture
MIBE architecture mainly departs from subsumption architecture through the introduction of a top-level motivational structure which determines behavior priorities at run time. That is, there are no layers or static hierarchical dependencies between behavioral modules; instead, each behavior constantly competes with the others for control of the agent through the top-level motivational state from which specific drives originate (via predetermined or reinforcement-learned functions).
While subsumption architecture is built on a predetermined hierarchy of behavioral modules, MIBE architecture consists of a more complex structure, in which several behaviors (which constantly compete for control of the robot via the motivational state) can dynamically activate and control an adaptive set of underlying modules, called abilities. Each behavior perf
https://en.wikipedia.org/wiki/Sound%20localization | Sound localization is a listener's ability to identify the location or origin of a detected sound in direction and distance.
The sound localization mechanisms of the mammalian auditory system have been extensively studied. The auditory system uses several cues for sound source localization, including time difference and level difference (or intensity difference) between the ears, and spectral information. These cues are also used by other animals, such as birds and reptiles, but there may be differences in usage, and there are also localization cues which are absent in the human auditory system, such as the effects of ear movements. Animals with the ability to localize sound have a clear evolutionary advantage.
How sound reaches the brain
Sound is the perceptual result of mechanical vibrations traveling through a medium such as air or water. Through the mechanisms of compression and rarefaction, sound waves travel through the air, bounce off the pinna and concha of the exterior ear, and enter the ear canal. The sound waves vibrate the tympanic membrane (ear drum), causing the three bones of the middle ear to vibrate, which then sends the energy through the oval window and into the cochlea where it is changed into a chemical signal by hair cells in the organ of Corti, which synapse onto spiral ganglion fibers that travel through the cochlear nerve into the brain.
Neural interactions
In vertebrates, interaural time differences are known to be calculated in the superior olivary nucleus of the brainstem. According to Jeffress, this calculation relies on delay lines: neurons in the superior olive which accept innervation from each ear with different connecting axon lengths. Some cells are more directly connected to one ear than the other, thus they are specific for a particular interaural time difference. This theory is equivalent to the mathematical procedure of cross-correlation. However, because Jeffress's theory is unable to account for the precedence effect, |
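The equivalence between the Jeffress delay-line model and cross-correlation can be sketched numerically: the interaural time difference is recovered as the lag at which the two ear signals are maximally correlated. The sample rate, delay, and signals below are illustrative assumptions, not values from the article.

```python
import numpy as np

fs = 44_100                        # assumed sample rate (Hz)
rng = np.random.default_rng(1)
signal = rng.normal(size=2048)     # broadband sound arriving at the nearer ear

true_delay = 13                    # interaural delay in samples (~0.3 ms at 44.1 kHz)
left = signal
right = np.concatenate([np.zeros(true_delay), signal])[: len(signal)]

# Cross-correlate the two ear signals; the lag with maximum correlation
# corresponds to the most strongly activated delay-line neuron.
corr = np.correlate(right, left, mode="full")
lags = np.arange(-len(left) + 1, len(left))
estimated_delay = lags[np.argmax(corr)]
print(estimated_delay)             # 13
```

For broadband noise the correlation peak is sharp, which is why this procedure localizes well; for narrowband tones the correlation is periodic and the lag becomes ambiguous, mirroring the phase ambiguity of real interaural time cues.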
https://en.wikipedia.org/wiki/List%20of%20integrals%20of%20trigonometric%20functions | The following is a list of integrals (antiderivative functions) of trigonometric functions. For antiderivatives involving both exponential and trigonometric functions, see List of integrals of exponential functions. For a complete list of antiderivative functions, see Lists of integrals. For the special antiderivatives involving trigonometric functions, see Trigonometric integral.
Generally, if the function is any trigonometric function, and is its derivative,
In all formulas the constant a is assumed to be nonzero, and C denotes the constant of integration.
Integrands involving only sine
Integrands involving only cosine
Integrands involving only tangent
Integrands involving only secant
Integrands involving only cosecant
Integrands involving only cotangent
Integrands involving both sine and cosine
An integral that is a rational function of the sine and cosine can be evaluated using Bioche's rules.
Integrands involving both sine and tangent
Integrand involving both cosine and tangent
Integrand involving both sine and cotangent
Integrand involving both cosine and cotangent
Integrand involving both secant and tangent
Integrand involving both cosecant and cotangent
Integrals in a quarter period
Using the beta function one can write
Integrals with symmetric limits
Integral over a full circle
See also
Trigonometric integral
Trigonometric functions
Trigonometry |
https://en.wikipedia.org/wiki/WS-Discovery | Web Services Dynamic Discovery (WS-Discovery) is a technical specification that defines a multicast discovery protocol to locate services on a local network. It operates over TCP and UDP port 3702 and uses IP multicast address or . As the name suggests, the actual communication between nodes is done using web services standards, notably SOAP-over-UDP.
Various components in Microsoft's Windows Vista operating system use WS-Discovery, e.g. "People near me". The component WSDMON in Windows 7 and later uses WS-Discovery to automatically discover WSD-enabled network printers, which show in Network in Windows Explorer, and can be installed by double-clicking on them. In Windows 8 or later installation is automatic. WS-Discovery is enabled by default in networked HP printers since 2008. WS-Discovery is an integral part of Windows Rally technologies and Devices Profile for Web Services.
The protocol was originally developed by BEA Systems, Canon, Intel, Microsoft, and WebMethods. On July 1, 2009 it was approved as a standard by OASIS.
See also
Avahi
Bonjour
DHCP
Jini
List of Web service specifications
LLMNR
OSGi Alliance
SSDP
Universal Plug and Play (UPnP)
Web Services Discovery
Web Services for Devices
Zero-configuration networking (Zeroconf) |
https://en.wikipedia.org/wiki/Wrack%20%28seaweed%29 | Wrack is part of the common names of several species of seaweed in the family Fucaceae. It may also refer more generally to any seaweeds or seagrasses that wash up on beaches and may accumulate in the wrack zone.
It consists largely of species of Fucus — brown seaweeds with flat branched ribbon-like fronds, characterized in F. serratus by a saw-toothed margin and in F. vesiculosus, another common species, by bearing air-bladders. Another component of sea wrack may be seagrasses such as Zostera marina, a marine flowering plant with bright green, long, narrow, grass-like leaves. Posidonia australis, which occurs sub-tidally on the southern coasts of Australia, sheds its older ribbon-like leaf blades in winter, resulting in thick accumulations along more sheltered shorelines.
"Bladder wrack", Fucus vesiculosus
"Channelled wrack", Pelvetia canaliculata
"Knotted wrack", Ascophyllum nodosum
"Spiral wrack" or "flat wrack", Fucus spiralis
"Toothed wrack" or "serrated wrack", Fucus serratus
Historically wrack was used for making manure, and for making "kelp", a form of potash.
The word's origin is possibly from Middle Dutch 'wrak', from a root meaning to push, to shove, to drive.
In the case of seaweed, its sense possibly derives from the word 'wreck': cast up on shore.
https://en.wikipedia.org/wiki/Residual%20strength | Residual strength is the load or force (usually mechanical) that a damaged object or material can still carry without failing. Material toughness, fracture size and geometry as well as its orientation all contribute to residual strength. |
https://en.wikipedia.org/wiki/Pickling | Pickling is the process of preserving or extending the shelf life of food by either anaerobic fermentation in brine or immersion in vinegar. The pickling procedure typically affects the food's texture and flavor. The resulting food is called a pickle, or, to prevent ambiguity, prefaced with pickled. Foods that are pickled include vegetables, fruits, mushrooms, meats, fish, dairy and eggs.
Pickling solutions are typically highly acidic, with a pH of 4.6 or lower, and high in salt, preventing enzymes from working and micro-organisms from multiplying. Pickling can preserve perishable foods for months. Antimicrobial herbs and spices, such as mustard seed, garlic, cinnamon or cloves, are often added. If the food contains sufficient moisture, a pickling brine may be produced simply by adding dry salt. For example, sauerkraut and Korean kimchi are produced by salting the vegetables to draw out excess water. Natural fermentation at room temperature, by lactic acid bacteria, produces the required acidity. Other pickles are made by placing vegetables in vinegar. Like the canning process, pickling (which includes fermentation) does not require that the food be completely sterile before it is sealed. The acidity or salinity of the solution, the temperature of fermentation, and the exclusion of oxygen determine which microorganisms dominate, and determine the flavor of the end product.
When both salt concentration and temperature are low, Leuconostoc mesenteroides dominates, producing a mix of acids, alcohol, and aroma compounds. At higher temperatures Lactobacillus plantarum dominates, which produces primarily lactic acid. Many pickles start with Leuconostoc, and change to Lactobacillus with higher acidity.
History
Pickling with vinegar likely originated in ancient Mesopotamia around 2400 BCE. There is archaeological evidence of cucumbers being pickled in the Tigris Valley in 2030 BCE. Pickling vegetables in vinegar continued to develop in the Middle East region before spr |
https://en.wikipedia.org/wiki/Cyclic%20subspace | In mathematics, in linear algebra and functional analysis, a cyclic subspace is a certain special subspace of a vector space associated with a vector in the vector space and a linear transformation of the vector space. The cyclic subspace associated with a vector v in a vector space V and a linear transformation T of V is called the T-cyclic subspace generated by v. The concept of a cyclic subspace is a basic component in the formulation of the cyclic decomposition theorem in linear algebra.
Definition
Let T be a linear transformation of a vector space V and let v be a vector in V. The T-cyclic subspace of V generated by v, denoted Z(v; T), is the subspace of V generated by the set of vectors {v, T(v), T²(v), …}. In the case when V is a topological vector space, v is called a cyclic vector for T if Z(v; T) is dense in V. For the particular case of finite-dimensional spaces, this is equivalent to saying that Z(v; T) is the whole space V.
There is another equivalent definition of cyclic spaces. Let T be a linear transformation of a topological vector space V over a field F and v be a vector in V. The set of all vectors of the form g(T)v, where g(t) is a polynomial in the ring F[t] of all polynomials in t over F, is the T-cyclic subspace generated by v.
The subspace Z(v; T) is an invariant subspace for T, in the sense that T(Z(v; T)) ⊆ Z(v; T).
Examples
For any vector space V and any linear operator T on V, the T-cyclic subspace generated by the zero vector is the zero-subspace of V.
If I is the identity operator, then every I-cyclic subspace is one-dimensional.
Z(v; T) is one-dimensional if and only if v is a characteristic vector (eigenvector) of T.
Let be the two-dimensional vector space and let be the linear operator on represented by the matrix relative to the standard ordered basis of . Let . Then . Therefore and so . Thus is a cyclic vector for .
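The defining construction — spanning v, Tv, T²v, … and checking whether the span is the whole space — can be checked numerically. The 3×3 shift matrix and vector below are an illustrative choice, not the article's example.

```python
import numpy as np

# Illustrative 3x3 shift matrix and starting vector (hypothetical example).
T = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
v = np.array([0.0, 0.0, 1.0])

# Span of v, Tv, T^2 v: a spanning set for the T-cyclic subspace of v.
vectors = np.column_stack([v, T @ v, T @ T @ v])
rank = np.linalg.matrix_rank(vectors)
print(rank)  # 3, so v is a cyclic vector: the cyclic subspace is all of R^3
```

Had the starting vector been an eigenvector instead (say the first standard basis vector, which T sends to zero), the rank would be 1, matching the statement that the cyclic subspace of an eigenvector is one-dimensional.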
Companion matrix
Let T be a linear transformation of a k-dimensional vector space V over a field F and v be a cyclic vector for T. Then the vectors
v, Tv, T²v, …, Tᵏ⁻¹v
form an ordered basis for V. Let the characteristic polynomial for b |
https://en.wikipedia.org/wiki/Thyrohyoid%20membrane | The thyrohyoid membrane (or hyothyroid membrane) is a broad, fibro-elastic sheet of the larynx. It connects the upper border of the thyroid cartilage to the hyoid bone.
Structure
The thyrohyoid membrane is attached below to the upper border of the thyroid cartilage and to the front of its superior cornu, and above to the upper margin of the posterior surface of the body and greater cornu of the hyoid bone. It passes behind the posterior surface of the body of the hyoid. It is separated from the hyoid bone by a mucous bursa, which allows for the upward movement of the larynx during swallowing.
Its middle thicker part is termed the median thyrohyoid ligament. Its lateral thinner portions are pierced by the superior laryngeal vessels and the internal branch of the superior laryngeal nerve. Its anterior surface is in relation with the thyrohyoid, sternohyoid, and omohyoid muscles, and with the body of the hyoid bone. It is pierced by the superior laryngeal nerve. It is also pierced by the superior thyroid artery, where there is a thickening of the membrane.
Clinical significance
Superior laryngeal artery
The thyrohyoid membrane needs to be manipulated to access the superior thyroid artery.
History
The name thyrohyoid membrane refers to the two structures it connects: the thyroid cartilage and the hyoid bone. It may also be known as the hyothyroid membrane, with the order of the two structures reversed.
Additional images |
https://en.wikipedia.org/wiki/Duct%20modes | The acoustic pressure in a cylindrical duct can be expressed as the superposition of duct modes:
where is the participation factor or modal amplitude and is the mode shape. The duct mode shapes are determined analytically from the solution of the Helmholtz equation. |
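For a rigid-walled circular duct, the mode shapes over the cross-section are commonly Bessel functions evaluated at zeros of the Bessel-function derivative. The sketch below is a hedged illustration under that assumption (the article does not specify the wall condition); it assumes SciPy, and the radius and mode orders are arbitrary.

```python
import numpy as np
from scipy.special import jv, jnp_zeros

# Rigid-walled circular duct of radius a: the (m, n) mode shape over the
# cross-section has radial part J_m(alpha_mn * r / a), where alpha_mn is
# the n-th zero of the Bessel derivative J_m'.
a = 0.1        # duct radius in metres (illustrative)
m, n = 1, 2    # circumferential and radial mode orders (illustrative)

alpha_mn = jnp_zeros(m, n)[-1]        # n-th zero of J_m'
r = np.linspace(0.0, a, 5)
mode_shape = jv(m, alpha_mn * r / a)  # radial part of the mode shape
print(mode_shape.shape)
```

Choosing zeros of J_m' (rather than of J_m) enforces zero radial particle velocity at the rigid wall; a different wall impedance would change which eigenvalues appear.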
https://en.wikipedia.org/wiki/Forest%20genetic%20resources | Forest genetic resources or forest tree genetic resources are genetic resources (i.e., genetic material of actual or future value) of forest shrub and tree species. Forest genetic resources are essential for forest-depending communities who rely for a substantial part of their livelihoods on timber and non-timber forest products (for example fruits, gums and resins) for food security, domestic use and income generation. These resources are also the basis for large-scale wood production in planted forests to satisfy the worldwide need for timber and paper. Genetic resources of several important timber, fruit and other non-timber tree species are conserved ex situ in genebanks or maintained in field collections. Nevertheless, in situ conservation in forests and on farms is in the case of most tree species the most important measure to protect their genetic resources.
Understanding diversity
A better understanding of the diversity of these species is crucial for their sustainable use and conservation. Monitoring patterns of distribution and genetic diversity of these species allows the prioritization of populations for in situ conservation, identification of populations and species most at risk and existing gaps in genebank collections. This is vital information which helps tackle global challenges such as food security and climate change.
The State of the World's Forest Genetic Resources
In 2014, the Food and Agriculture Organization of the United Nations published the first State of the World's Forest Genetic Resources. The publication addressed the conservation, management and sustainable use of forest tree and other woody plant genetic resources of actual and potential value for human well-being in the broad range of management systems. It was prepared based on information provided by 86 countries, outcomes from regional and subregional consultations, and commissioned thematic studies. Amongst the ten key findings, half of the forest species reported as regul |
https://en.wikipedia.org/wiki/Superbloom | A superbloom is a rare desert botanical phenomenon in California and Arizona in which an unusually high proportion of wildflowers whose seeds have lain dormant in desert soil germinate and blossom at roughly the same time. The phenomenon is associated with an unusually wet rainy season. The term may have developed as a label in the 1990s.
A similar phenomenon also occurs annually during the wet season along the arid west coast of South Africa between Cape Town and Namaqualand; notably at nature reserves such as the West Coast National Park and Goegap Nature Reserve.
Necessary conditions and sequence of events
The conditions under which a superbloom can occur are exceptional. Because some invasive grasses, such as bromes, will compete with native flowers for moisture, the desert must remain dry enough prior to the bloom to keep them from becoming established. The desert must receive rainfall in the autumn, and this rain must penetrate deep into the soil matrix in order to reach a majority of the dormant seeds of flowering plants. If subsequent rainfall is excessive or inundating, the young plants may be carried away in flash floods; if it is inadequate, the seeds will die from dehydration.
Next, the ground in which the seeds lie must warm slowly over the several months which follow the first soaking rain, and the desert must have enough cloud cover both to shield the soil from intense daytime desert heat and to insulate it from overnight freezing temperatures. Finally, once the newly germinated plants have reached the surface of the soil, the desert must remain undisturbed by strong winds which would uproot the plants or damage the young shoots. The rare concatenation of these events is what makes a superbloom such an extraordinary occurrence.
In California, superblooms typically occur once every ten years or so. This has happened less often since the beginning of the 21st century due to persistent state drought. Anza-Borrego Park and Carrizo Plain National Monu |
https://en.wikipedia.org/wiki/Drop-out%20compensator | A drop-out compensator is an error concealment device that was commonly used in the analog video era to hide brief RF signal drop-outs on videotape playback caused by imperfections in or damage to the tape's magnetic coating. Most compensators worked by repeating earlier video scan lines over short periods of signal loss; one early such system, Mincom, was developed in the 1960s by the Minnesota Mining and Manufacturing Company, the company now known as 3M. Because of the high cost of the 3M device at the time, BBC R&D engineers developed a simpler, less expensive unit based on a sample-and-hold technique for in-house use.
Dedicated drop-out compensators were eventually superseded by the incorporation of drop-out compensation functionality into timebase correctors based on analog-to-digital conversion and digital line stores.
The advent of compressed digital video systems finally eliminated the need for line-based drop-out compensators. Most low-level media errors, such as those caused by tape damage or imperfections, are now dealt with by forward error correction techniques, and those which overwhelm the FEC layer are typically too severe to remedy using simple line-based error concealment techniques because damage to the compressed bitstream will often damage large parts of the video image. However, since occasional signal drop-outs can still occur, either through severe tape damage or because of packet loss in packetized video transmission, modern error concealment techniques that are aware of the structure of the compressed video format have been developed to deal with these. |
https://en.wikipedia.org/wiki/Excess%20property | In chemical thermodynamics, excess properties are properties of mixtures which quantify the non-ideal behavior of real mixtures. They are defined as the difference between the value of the property in a real mixture and the value that would exist in an ideal solution under the same conditions. The most frequently used excess properties are the excess volume, excess enthalpy, and excess chemical potential. The excess volume (V^E), internal energy (U^E), and enthalpy (H^E) are identical to the corresponding mixing properties; that is,
These relationships hold because the volume, internal energy, and enthalpy changes of mixing are zero for an ideal solution.
Definition
By definition, excess properties are related to those of the ideal solution by:
Here, the superscript IS denotes the value in the ideal solution, a superscript denotes the excess molar property, and denotes the particular property under consideration. From the properties of partial molar properties,
substitution yields:
For volumes, internal energies, and enthalpies, the partial molar quantities in the ideal solution are identical to the molar quantities in the pure components; that is,
Because the ideal solution has molar entropy of mixing
where is the mole fraction, the partial molar entropy is not equal to the molar entropy:
One can therefore define the excess partial molar quantity the same way:
Several of these results are summarized in the next section.
Examples of excess partial molar properties
The pure component's molar volume and molar enthalpy are equal to the corresponding partial molar quantities because there is no volume or internal energy change on mixing for an ideal solution.
The molar volume of a mixture can be found from the sum of the excess volumes of the components of a mixture:
This formula holds because there is no change in volume upon mixing for an ideal mixture. The molar entropy, in contrast, is given by
where the term originates from the entropy of mixing of an idea |
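The ideal entropy of mixing that enters these relations is the standard expression −R Σ xᵢ ln xᵢ, which can be evaluated with a short sketch (the mixture composition below is illustrative):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def ideal_mixing_entropy(x):
    """Molar entropy of mixing of an ideal solution: -R * sum(x_i * ln(x_i))."""
    return -R * sum(xi * math.log(xi) for xi in x if xi > 0)

# Equimolar binary mixture: -R * 2 * (0.5 * ln 0.5) = R * ln 2
print(round(ideal_mixing_entropy([0.5, 0.5]), 2))  # 5.76
```

Because each mole fraction is below one, every term is positive, so ideal mixing always increases entropy; this is exactly why the partial molar entropy in an ideal solution differs from the pure-component molar entropy even though the volume and enthalpy do not.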
https://en.wikipedia.org/wiki/Koedoe | Koedoe, subtitled African Protected Area Conservation and Science, is a peer-reviewed open access scientific journal covering biology, ecology, and biodiversity conservation in Africa. It was established in 1958. Koedoe is Afrikaans for Kudu.
Abstracting and indexing
For full information visit the journal website link http://koedoe.co.za/index.php/koedoe/pages/view/about#7
External links
Academic journals established in 1958
English-language journals
Open access journals
Ecology journals
Conservation biology |
https://en.wikipedia.org/wiki/Polarization%20scrambling | Polarization scrambling is the process of rapidly varying the polarization of light within a system using a polarization controller so that the average polarization over time is effectively randomized. Polarization scrambling can be used in scientific experiments to cancel out errors caused by polarization effects. Polarization scrambling is also used on long-distance fibre optic transmission systems with optical amplifiers, in order to avoid polarization hole-burning. Polarization scrambling, also for the variation of polarization mode dispersion, is a mandatory test procedure for fiber optic data transmission systems based on polarization-division multiplexing.
Polarization scramblers usually vary the normalized Stokes vector of the polarization state over the entire Poincaré sphere. They are commercially available with speeds of 20 Mrad/s on the Poincaré sphere (see external link). Various speed distributions such as peaked and quasi-Rayleigh can be generated.
Some experiments have implemented ultrafast polarization scrambling on a polaritonic platform with speeds in the order of the Trad/s on the Poincaré sphere.
See also
Depolarizer (optics)
Polarization-division multiplexing
Polarization controller
Polarization mixing |
https://en.wikipedia.org/wiki/Toll-interleukin%20receptor | The toll-interleukin-1 receptor (TIR) homology domain is an intracellular signaling domain found in MyD88, SARM1, interleukin-1 receptors, toll receptors and many plant R proteins. It contains three highly conserved regions, and mediates protein-protein interactions between the toll-like receptors (TLRs) and signal-transduction components. TIR-like motifs are also found in plant proteins where they are involved in resistance to disease and in bacteria where they are associated with virulence. When activated, TIR domains recruit cytoplasmic adaptor proteins MyD88 (UniProt ) and TOLLIP (toll-interacting protein, UniProt ). In turn, these associate with various kinases to set off signaling cascades. Some TIR domains have also been found to have intrinsic NAD+ cleavage activity, such as in SARM1. In the case of SARM1, the TIR NADase activity leads to the production of Nam, ADPR and cADPR and the activation of downstream pathways involved in Wallerian degeneration and neuron death.
In Drosophila melanogaster the toll protein is involved in establishment of dorso-ventral polarity in the embryo. In addition, members of the toll family play a key role in innate antibacterial and antifungal immunity in insects as well as in mammals. These proteins are type-I transmembrane receptors that share an intracellular 200 residue domain with the interleukin-1 receptor (IL-1R), the toll/IL-1R homologous region (TIR). The similarity between toll-like receptors (TLRs) and IL-1R is not restricted to sequence homology since these proteins also share a similar signaling pathway. They both induce the activation of a Rel type transcription factor via an adaptor protein and a protein kinase. MyD88, a cytoplasmic adaptor protein found in mammals, contains a TIR domain associated to a DEATH domain (see ). Besides the mammalian and Drosophila melanogaster proteins, a TIR domain is also found in a number of plant proteins implicated in host defense. As MyD88, these proteins are cytoplasmic.
|
https://en.wikipedia.org/wiki/Stocks-to-use%20ratio | The stocks-to-use ratio (S/U) is a convenient measure of supply and demand interrelationships of commodities. This ratio indicates the level of carryover stock for any given commodity as a percentage of the total use of the commodity.
It is typically used for grain commodity stocks such as wheat, corn and soybeans, where both the ending stock and the stocks-to-use ratio can be compared against previous years. This percentage is a good indicator of whether current ending stock levels are historically small, justifying higher prices, or plentiful, resulting in lower prices.
Calculation
According to Futures Trading Charts, the ratio is calculated as follows:

stocks-to-use ratio = (beginning stocks + total production − total usage) ÷ total usage

This can be simplified by consolidating the upper portion (beginning stocks plus total production) into the total supply:

stocks-to-use ratio = (total supply − total usage) ÷ total usage
Beginning stocks represent the previous year’s ending or carry-over inventories. Total production represents the total grain produced in a given year. Total usage is the sum of all the end uses in which the stock of grain has been consumed. This would include human consumption, export programs, seed, waste, dockage and feed consumption. Adding carry-over stocks to the total production provides the total supply.
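The calculation described above can be sketched in a few lines of Python; the function name and the figures in the example are illustrative, not real market data.

```python
def stocks_to_use_ratio(beginning_stocks, total_production, total_usage):
    """Year-ending carry-over stock as a share of total use.

    Total supply = beginning (carry-over) stocks + total production;
    ending stocks = total supply - total usage.
    """
    total_supply = beginning_stocks + total_production
    ending_stocks = total_supply - total_usage
    return ending_stocks / total_usage

# Illustrative figures (e.g. million bushels), not real market data:
ratio = stocks_to_use_ratio(beginning_stocks=500,
                            total_production=14000,
                            total_usage=14200)
print(f"{ratio:.1%}")  # ending stocks of 300 against usage of 14200 -> 2.1%
```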
Subtracting the total use from the total supply gives the year-ending carry-over stock. |
https://en.wikipedia.org/wiki/Ultra%20Port%20Architecture | The Ultra Port Architecture (UPA) bus was developed by Sun Microsystems as a high-speed interconnect between the graphics card and the CPU, beginning with the Ultra 1 workstation in 1995.
See also
List of device bandwidths
External links
UPA Bus Whitepaper
Computer buses
Sun Microsystems hardware |
https://en.wikipedia.org/wiki/Data%20version%20control | Data version control is a method of working with data sets. It is similar to the version control systems used in traditional software development, but is optimized to allow better processing of data and collaboration in the context of data analytics, research, and any other form of data analysis. Data version control may also include specific features and configurations designed to facilitate work with large data sets and data lakes.
History
Background
As early as 1985, researchers recognized the need for defining timing attributes in database tables, which would be necessary for tracking changes to databases. This research continued into the 1990s, and the theory was formalized into practical methods for managing data in relational databases, providing some of the foundational concepts for what would later become data version control.
In the early 2010s, the size of data sets was rapidly expanding, and relational databases were no longer sufficient to manage the amounts of data organizations were accumulating. The Apache Hadoop ecosystem, with HDFS as a storage layer, and later object storage became dominant in big data operations. Research into data management tools and data version control systems increased sharply, along with demand for such tools from academia and the private and public sectors.
Version controlled databases
The first versioned database was proposed in 2012 for the SciDB database; the proposal demonstrated that it was possible to create chains and trees of different versions of the database while reducing both the overall storage size and the access times associated with previous methods. In 2014, a proposal was made to generalize these principles into a platform that could be used for any application.
In 2016, a prototype for a data version control system was developed during a Kaggle competition. This software was later used internally at an AI firm, and eventually spun off as a startup. Since then, a number of data version contro |
https://en.wikipedia.org/wiki/Silicon%20Dreams | Silicon Dreams is a trilogy of interactive fiction games developed by Level 9 Computing during the 1980s. The first game was Snowball, released in 1983, followed a year later by Return to Eden, and then by The Worm in Paradise in 1985. The next year they were sold together as the first, second and last of the Silicon Dreams.
As with most Level 9 games, the trilogy used an interpreted language called A-code and ran on all major home computers of the time, on either diskette or cassette. Level 9 self-published each game separately, but the compilation was published by Telecomsoft, which sold it in the United States under the Firebird name and in Europe under the Rainbird name.
The trilogy is set in a not-too-distant future in which humans have started colonising space. For the first two instalments the player takes the role of Kim Kimberley, an undercover agent, whose goal in Snowball is to save the colonists' spacecraft from crashing into a star, and in Return to Eden to stop the defence system at the destination planet of Eden from destroying the craft. In The Worm in Paradise, the player, as an unnamed citizen of Eden, must travel around the city of Enoch, learn its secrets, earn money and save the planet.
Gameplay
The games use a text parser for entering commands at the "What now?" prompt. The parser can interpret more than a thousand words to control movement or actions. It looks at the command, picks out two or three words it knows, ignores their order, and tries to guess what is meant. For movement, the usual commands 'NORTH', 'SOUTH', 'EAST' and 'WEST' are available (and their abbreviated forms 'N', 'S', 'E' and 'W'), as well as 'UP' and 'DOWN' ('U' and 'D' respectively) and a number of other directions and 'modes' of movement (like 'JUMP'). For actions, it understands how to pick up objects, open doors, light lamps, drop objects and wield them. Additionally, there are com |
https://en.wikipedia.org/wiki/Magnolia%20%28SMC%20brand%29 | Magnolia is a food and beverage brand owned by San Miguel Corporation (SMC) and used by its various subsidiaries. The brand was commercially established by SMC (then known as San Miguel Brewery) as an ice cream brand in 1925.
History
The history of the Magnolia brand can be traced back to 1899 when an American by the name of William J. Schober arrived in the Philippines as a cook in the United States Army. After the Philippine–American War, Schober would remain in the Philippines and introduced the "magnolia pie", "magnolia ice cream" and "magnolia ice-drop". In 1925, Schober sold his "magnolia" business interests to SMC (then known as San Miguel Brewery). Schober would move on to establish Legaspi Garden Restaurant at Pier 7, Port Area, Manila (the location is now the headquarters of Philippine Coast Guard), behind the Manila Hotel.
Under SMC, the dairy plant at 526 Calle Aviles in the San Miguel district of Manila stood on the same street as the site of the original San Miguel brewery (6 Calle Aviles). In 1926, the dairy plant was relocated to Calle Echague (now C. Palanca Sr., Street) in Quiapo, Manila. In 1970, production was transferred to a new modern facility in Aurora Boulevard, Quezon City, known as the Magnolia Dairy Products Plant. The facility, designed by National Artist Leandro Locsin also housed the main branch of its Magnolia ice cream parlor.
In 1972, SMC entered the poultry business with its first breeder farm in Cavite. The following year, SMC established its first chicken processing plant in Muntinlupa to produce Magnolia Fresh Chicken. The poultry business, along with the B-Meg feeds business, was operated as the Feeds and Livestock Division of SMC until it was spun off as a new subsidiary (San Miguel Foods, Inc.) in 1991.
In 1981, SMC spun off its butter and margarine production to a new subsidiary named Philippine Dairy Products Corporation (now known as Magnolia, Inc.), a joint venture with the New Zealand Dairy Board, with its production fa |
https://en.wikipedia.org/wiki/Goal%20structuring%20notation | Goal structuring notation (GSN) is a graphical diagram notation used to show the elements of an argument and the relationships between those elements in a clearer format than plain text. Often used in safety engineering, GSN was developed at the University of York during the 1990s to present safety cases. The notation gained popularity as a method of presenting safety assurances but can be applied to any type of argument and was standardized in 2011.
GSN has been used to track safety assurances in industries such as clinical care, aviation, automotive, rail, traffic management and nuclear power, and has been used in other contexts such as security cases, patent claims, debate strategy, and legal arguments.
History
The goal structuring notation was first developed at the University of York during the ASAM-II (A Safety Argument Manager II) project in the early 1990s, to overcome perceived issues in expressing safety arguments using the Toulmin method. The notation was further developed and expanded by Tim Kelly, whose PhD thesis contributed systematic methods for constructing and maintaining GSN diagrams, and the concept of 'safety case patterns' to promote re-use of argument fragments. During the late 1990s and early 2000s, the GSN methodology was taught on the Safety Critical Systems Engineering course at York, and various extensions to the GSN methodology were proposed by Kelly and other members of the university's High Integrity Systems Engineering group.
By 2007, goal structuring notation was sufficiently popular that a group of industry and academic users came together to standardise the notation and its surrounding methodology, resulting in publication of the GSN Community Standard in 2011. From 2014, maintenance of the GSN standard moved under the auspices of the SCSC's Assurance Case Working Group. As at 2022, the standard has reached Version 3.
Criticism
Charles Haddon-Cave in his review of the Nimrod accident commented that the top goal of a GSN argumen |
https://en.wikipedia.org/wiki/Cytochrome%20c%20oxidase | The enzyme cytochrome c oxidase or Complex IV (reclassified from an oxidoreductase to a translocase, EC 7.1.1.9) is a large transmembrane protein complex found in bacteria, archaea, and the mitochondria of eukaryotes.
It is the last enzyme in the cell's respiratory electron transport chain, located in the membrane. It receives an electron from each of four cytochrome c molecules and transfers them to one oxygen molecule and four protons, producing two molecules of water. In addition to binding the four protons from the inner aqueous phase, it transports another four protons across the membrane, increasing the transmembrane difference of proton electrochemical potential, which the ATP synthase then uses to synthesize ATP.
Structure
The complex
The complex is a large integral membrane protein composed of several metal prosthetic sites and 14 protein subunits in mammals. In mammals, eleven subunits are nuclear in origin, and three are synthesized in the mitochondria. The complex contains two hemes, cytochrome a and cytochrome a3, and two copper centers, the CuA and CuB centers. In fact, the cytochrome a3 and CuB form a binuclear center that is the site of oxygen reduction. Cytochrome c, which is reduced by the preceding component of the respiratory chain (cytochrome bc1 complex, Complex III), docks near the CuA binuclear center and passes an electron to it, being oxidized back to cytochrome c containing Fe3+. The reduced CuA binuclear center now passes an electron on to cytochrome a, which in turn passes an electron on to the cytochrome a3–CuB binuclear center. The two metal ions in this binuclear center are 4.5 Å apart and coordinate a hydroxide ion in the fully oxidized state.
Crystallographic studies of cytochrome c oxidase show an unusual post-translational modification, linking C6 of Tyr(244) and the ε-N of His(240) (bovine enzyme numbering). It plays a vital role in enabling the cytochrome a3–CuB binuclear center to accept four electrons in reducing molecular oxygen and four protons |
https://en.wikipedia.org/wiki/Fluid%20animation | Fluid animation refers to computer graphics techniques for generating realistic animations of fluids such as water and smoke. Fluid animations are typically focused on emulating the qualitative visual behavior of a fluid, with less emphasis placed on rigorously correct physical results, although they often still rely on approximate solutions to the Euler equations or Navier–Stokes equations that govern real fluid physics. Fluid animation can be performed with different levels of complexity, ranging from time-consuming, high-quality animations for films or visual effects to simple and fast animations for real-time applications like computer games.
Relationship to computational fluid dynamics
Fluid animation differs from computational fluid dynamics (CFD) in that fluid animation is used primarily for visual effects, whereas computational fluid dynamics is used to study the behavior of fluids in a scientifically rigorous way.
Development
The development of fluid animation techniques based on the Navier–Stokes equations began in 1996, when Nick Foster and Dimitris Metaxas implemented solutions to 3D Navier-Stokes equations in a computer graphics context, basing their work on a scientific CFD paper by Harlow and Welch from 1965. Up to that point, a variety of simpler methods had primarily been used, including ad-hoc particle systems, lower dimensional techniques such as height fields, and semi-random turbulent noise fields.
In 1999, Jos Stam published the "Stable Fluids" method, which exploited a semi-Lagrangian advection technique and implicit integration of viscosity to provide unconditionally stable behaviour. This allowed for much larger time steps and therefore faster simulations. This general technique was extended by Ronald Fedkiw and co-authors to handle more realistic smoke and fire, as well as complex 3D water simulations using variants of the level-set method.
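The core step of Stam's method can be illustrated with a minimal one-dimensional semi-Lagrangian advection sketch: each grid point traces the velocity field backwards in time and samples the old field at the departure point. This is an illustrative toy (periodic domain, linear interpolation), not Stam's full solver.

```python
import numpy as np

def advect_semi_lagrangian(q, u, dt, dx):
    """Advect quantity q through velocity field u on a 1D periodic grid.

    Each grid point is traced backwards along the velocity for one time
    step, and q is sampled at the departure point by linear interpolation.
    Because every new value interpolates old values, the scheme is
    unconditionally stable regardless of the time step.
    """
    n = len(q)
    x = np.arange(n) * dx
    x_depart = (x - u * dt) % (n * dx)          # backtrace, wrap around
    i0 = np.floor(x_depart / dx).astype(int) % n
    i1 = (i0 + 1) % n
    t = x_depart / dx - np.floor(x_depart / dx)  # fractional offset
    return (1 - t) * q[i0] + t * q[i1]

# A spike carried one cell to the right by a uniform velocity field:
q = np.zeros(64)
q[10] = 1.0
u = np.ones(64)        # one cell per unit time
q_new = advect_semi_lagrangian(q, u, dt=1.0, dx=1.0)
```

Note that with a large `dt` an explicit scheme would blow up, while this backtrace-and-interpolate step merely smears the field, which is exactly the trade-off the "Stable Fluids" paper exploited.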
Some notable academic researchers in this area include Jerry Tessendorf, James F. O'Brien, R |
https://en.wikipedia.org/wiki/Cytidine%20diphosphate%20glucose | Cytidine diphosphate glucose, often abbreviated CDP-glucose, is a nucleotide-linked sugar consisting of cytidine diphosphate and glucose.
Biosynthesis
CDP-glucose is produced from CTP and glucose-1-phosphate by the enzyme glucose-1-phosphate cytidylyltransferase. |
https://en.wikipedia.org/wiki/Jakobson%27s%20functions%20of%20language | Roman Jakobson defined six functions of language (or communication functions), according to which an effective act of verbal communication can be described. Each of the functions has an associated factor. For this work, Jakobson was influenced by Karl Bühler's organon model, to which he added the poetic, phatic and metalingual functions.
The six functions of language
The referential function: corresponds to the factor of Context and describes a situation, object or mental state. The descriptive statements of the referential function can consist of both definite descriptions and deictic words, e.g. "The autumn leaves have all fallen now." Similarly, the referential function is associated with an element whose truth value is in question, especially when the truth value is identical in both the real and the assumptive universe.
The poetic function: focuses on "the message for its own sake" (how the code is used) and is the operative function in poetry as well as slogans.
The emotive function: relates to the Addresser (sender) and is best exemplified by interjections and other sound changes that do not alter the denotative meaning of an utterance but do add information about the Addresser's (speaker's) internal state, e.g. "Wow, what a view!" Whether a person is experiencing feelings of happiness, sadness, grief or otherwise, they use this function to express themselves.
The conative function: engages the Addressee (receiver) directly and is best illustrated by vocatives and imperatives, e.g. "Tom! Come inside and eat!"
The phatic function: is language for the sake of interaction and is therefore associated with the Contact/Channel factor. The Phatic Function can be observed in greetings and casual discussions of the weather, particularly with strangers. It also provides the keys to open, maintain, verify or close the communication channel: "Hello?", "Ok?", "Hummm", "Bye"...
The metalingual (alternatively called "metalinguistic" or "reflexive") function: is the |
https://en.wikipedia.org/wiki/Call%20of%20Duty%3A%20Vanguard | Call of Duty: Vanguard is a 2021 first-person shooter game developed by Sledgehammer Games and published by Activision. It was released on November 5, 2021, for PlayStation 4, PlayStation 5, Windows, Xbox One, and Xbox Series X/S. It is the 18th installment in the overall Call of Duty series. Vanguard's storyline, set across various theatres of World War II, features the birth of the special forces to face an emerging threat at the end of the war.
The game received mixed reviews from critics, with praise towards the entertainment value of the campaign and multiplayer, and the graphics, but criticism for its writing, Zombies mode, and lack of innovation. It failed to meet the sales expectations of Activision.
Gameplay
Campaign
Vanguard's campaign features gameplay mechanics similar to those previously introduced in Modern Warfare, such as the player being able to mount wielded weapons on flat surfaces, interact with doors, and execute takedowns. New gameplay features give the player a more advanced tactical approach in combat, such as blind firing from behind cover, breaking through destructible environmental elements, or creating new paths to complete objectives by climbing walls.
Multiplayer
Vanguard's multiplayer mode introduces a new game mode to the series, titled "Champion Hill", a successor to the mode "Gunfight", a 2v2 arena mode previously featured in Call of Duty: Modern Warfare and Call of Duty: Black Ops Cold War. Champion Hill tasks players with surviving as long as possible in a squad-based deathmatch, round-robin tournament. Season 2 of Vanguard introduces "Arms Race", a large-scale game mode where two teams of 12 players attempt to capture five bases, or destroy them; players can also earn cash from kills and complete objectives to buy custom loadouts and killstreaks. A new feature introduced to the Call of Duty series through Vanguard's multiplayer is "Combat Pacing". This allows the player to have more control over the selection |
https://en.wikipedia.org/wiki/Apportionment%20in%20the%20European%20Parliament | The apportionment of seats within the European Parliament to each member state of the European Union is set out by the EU treaties. According to the treaties, the distribution of seats is "degressively proportional" to the population of the member states, with negotiations and agreements between member states playing a role. Thus the allocation of seats is not strictly proportional to the size of a state's population, nor does it reflect any other automatically triggered or fixed mathematical formula. The process can be compared to the composition of the electoral college used to elect the President of the United States of America in that, pro rata, the smaller states receive more places in the electoral college than the more populous states.
Since the withdrawal of the United Kingdom from the EU in 2020, the number of MEPs, including the president, is 705. The maximum number allowed by the Lisbon Treaty is 751.
Background
When the Parliament was established in 1952 as the 78-member "Common Assembly of the European Coal and Steel Community", the three smaller member states (Belgium, Luxembourg, and the Netherlands) were concerned about being under-represented and hence were granted more seats than their populations alone would have allowed. Membership increased to 142 when the Assembly was expanded to cover the Economic and Atomic Energy Communities.
It then grew further with each enlargement, each time allowing smaller nations a greater proportion of seats relative to larger states. Membership reached 626 in 1995, with the Treaty of Amsterdam setting a limit of 700. The Treaty of Nice increased this to 732 and set out the future distribution for up to 27 states. In 2007 Romania and Bulgaria joined with 35 and 18 members respectively, temporarily pushing the number of members over the ceiling to 785. In 2009 the number of members decreased to 736.
In December 2011 an amendment temporarily increased the Lisbon limit to 754. This allowed member st |
https://en.wikipedia.org/wiki/Hong%20Kong%20Mathematical%20High%20Achievers%20Selection%20Contest | The Hong Kong Mathematical High Achievers Selection Contest (HKMHASC, Traditional Chinese: 香港青少年數學精英選拔賽) is a yearly mathematics competition for students in or below Secondary 3 in Hong Kong. It has been jointly organized by Po Leung Kuk and the Hong Kong Association of Science and Mathematics Education since the academic year 1998–1999. In recent years, more than 250 secondary schools have participated.
Format and Scoring
Each participating school may send at most 5 students to the contest. There is one paper, divided into Part A and Part B, with two hours allowed. Part A is usually made up of 14 to 18 easier questions, carrying one mark each; only answers are required. Part B is usually made up of 2 to 4 problems of varying difficulty, which may carry different numbers of marks, from 4 to 8. In Part B, workings are required and marked. No calculators or calculation-assisting equipment (e.g. printed mathematical tables) are allowed.
Awards and Further Training
Awards are given according to the total mark. The top 40 contestants are given the First Honour Award (一等獎), the next 80 the Second Honour Award (二等獎), and the next 120 the Third Honour Award (三等獎). Moreover, the top 4 receive individual awards: Champion and 1st, 2nd and 3rd Runner-up.
Group Awards are given to schools, according to the sum of marks of the 3 contestants with highest mark. The first 4 are given the honour of Champion and 1st, 2nd and 3rd Runner-up. The honour of Top 10 (首十名最佳成績) is given to the 5th-10th, and Group Merit Award (團體優異獎) is given to the next 10.
First Honour Award achievers receive further training. The eight students with the best performance are chosen to participate in the Invitational World Youth Mathematics Inter-City Competition (IWYMIC).
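The rank-based award bands described above can be written down as a small lookup; the function name is illustrative, not part of the contest's materials.

```python
def award_for_rank(rank):
    """Map a contestant's overall rank (1-based) to an award band.

    Top 40 -> First Honour, next 80 -> Second Honour,
    next 120 -> Third Honour, everyone else -> no award.
    """
    if rank <= 40:
        return "First Honour Award"
    if rank <= 40 + 80:          # ranks 41-120
        return "Second Honour Award"
    if rank <= 40 + 80 + 120:    # ranks 121-240
        return "Third Honour Award"
    return None

print(award_for_rank(40))   # First Honour Award
print(award_for_rank(120))  # Second Honour Award
print(award_for_rank(240))  # Third Honour Award
```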
List of Past Champions (1999-2019)
98-99: Queen Elizabeth School, Ying Wa College
99-00: Queen's College
00-01: La Salle College
01-02: St. Paul's College
02-03: Queen's College
03-04: La Salle College
04-05: La |
https://en.wikipedia.org/wiki/Digital%20geometry | Digital geometry deals with discrete sets (usually discrete point sets) considered to be digitized models or images of objects of the 2D or 3D Euclidean space.
Simply put, digitizing is replacing an object by a discrete set of its points. The images we see on the TV screen, the raster display of a computer, or in newspapers are in fact digital images.
Its main application areas are computer graphics and image analysis.
Main aspects of study are:
Constructing digitized representations of objects, with the emphasis on precision and efficiency (either by means of synthesis, see, for example, Bresenham's line algorithm or digital disks, or by means of digitization and subsequent processing of digital images).
Study of properties of digital sets; see, for example, Pick's theorem, digital convexity, digital straightness, or digital planarity.
Transforming digitized representations of objects, for example (A) into simplified shapes such as (i) skeletons, by repeated removal of simple points such that the digital topology of an image does not change, or (ii) medial axis, by calculating local maxima in a distance transform of the given digitized object representation, or (B) into modified shapes using mathematical morphology.
Reconstructing "real" objects or their properties (area, length, curvature, volume, surface area, and so forth) from digital images.
Study of digital curves, digital surfaces, and digital manifolds.
Designing tracking algorithms for digital objects.
Functions on digital space.
Curve sketching, a method of drawing a curve pixel by pixel.
Digital geometry heavily overlaps with discrete geometry and may be considered as a part thereof.
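Bresenham's line algorithm, cited above as a means of synthesizing digitized representations, can be sketched in its standard integer-only form; this is a textbook formulation generalized to all octants, not tied to any particular source implementation.

```python
def bresenham_line(x0, y0, x1, y1):
    """Rasterize a line segment onto an integer pixel grid using only
    integer arithmetic (Bresenham's algorithm, all octants)."""
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1   # step direction in x
    sy = 1 if y0 < y1 else -1   # step direction in y
    err = dx - dy               # running error term
    points = []
    while True:
        points.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy
    return points

print(bresenham_line(0, 0, 4, 2))
# [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2)]
```

The returned point set is a digital straight segment, one of the digitized objects whose properties (digital straightness, convexity) the field studies.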
Digital space
A 2D digital space usually means a 2D grid space that only contains integer points in 2D Euclidean space. A 2D image is a function on a 2D digital space (See image processing).
In Rosenfeld and Kak's book, digital connectivity is defined as the relationship among elements in digital space. For e |
https://en.wikipedia.org/wiki/Barnette%27s%20conjecture | Barnette's conjecture is an unsolved problem in graph theory, a branch of mathematics, concerning Hamiltonian cycles in graphs. It is named after David W. Barnette, a professor emeritus at the University of California, Davis; it states that every bipartite polyhedral graph with three edges per vertex has a Hamiltonian cycle.
Definitions
A planar graph is an undirected graph that can be embedded into the Euclidean plane without any crossings. A planar graph is called polyhedral if and only if it is 3-vertex-connected, that is, if there do not exist two vertices the removal of which would disconnect the rest of the graph. A graph is bipartite if its vertices can be colored with two different colors such that each edge has one endpoint of each color. A graph is cubic (or 3-regular) if each vertex is the endpoint of exactly three edges. Finally, a graph is Hamiltonian if there exists a cycle that passes through each of its vertices exactly once. Barnette's conjecture states that every cubic bipartite polyhedral graph is Hamiltonian.
By Steinitz's theorem, a planar graph represents the edges and vertices of a convex polyhedron if and only if it is polyhedral. A three-dimensional polyhedron has a cubic graph if and only if it is a simple polyhedron.
And, a planar graph is bipartite if and only if, in a planar embedding of the graph, all face cycles have even length. Therefore, Barnette's conjecture may be stated in an equivalent form: suppose that a three-dimensional simple convex polyhedron has an even number of edges on each of its faces. Then, according to the conjecture, the graph of the polyhedron has a Hamiltonian cycle.
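On small instances the conjecture can be checked by brute force. The sketch below tests the cube graph, which is cubic, bipartite and polyhedral; the helper is a naive permutation search suitable only for tiny graphs, written here purely for illustration.

```python
from itertools import permutations

def has_hamiltonian_cycle(adj):
    """Brute-force Hamiltonian-cycle test for a small undirected graph
    given as an adjacency dict {vertex: set_of_neighbours}."""
    verts = sorted(adj)
    start = verts[0]                      # fix the start to cut symmetry
    for order in permutations(verts[1:]):
        cycle = (start,) + order
        if all(cycle[(i + 1) % len(cycle)] in adj[cycle[i]]
               for i in range(len(cycle))):
            return True
    return False

# The cube graph: vertices are 3-bit corners, edges flip one bit.
# It is cubic, bipartite and polyhedral, so Barnette's conjecture
# predicts a Hamiltonian cycle, which brute force confirms.
cube = {v: {v ^ 1, v ^ 2, v ^ 4} for v in range(8)}
print(has_hamiltonian_cycle(cube))  # True
```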
History
Tait conjectured that every cubic polyhedral graph is Hamiltonian; this came to be known as Tait's conjecture. It was disproven by Tutte, who constructed a counterexample with 46 vertices; other researchers later found even smaller counterexamples. However, none of these known counterexamples is bipartite. Tutte himself conjectured that every cubi |
https://en.wikipedia.org/wiki/Renal%20lobe | The renal lobe is a portion of a kidney consisting of a renal pyramid and the renal cortex above it. In humans, on average there are 7 to 18 renal lobes.
It is visible without a microscope, though it is easier to see in humans than in other animals.
It is composed of many renal lobules, which are not visible without a microscope.
See also
Renal capsule
Renal medulla |
https://en.wikipedia.org/wiki/Narrow-spectrum%20antibiotic | A narrow-spectrum antibiotic is an antibiotic that is only able to kill or inhibit limited species of bacteria. Examples of narrow-spectrum antibiotics include fidaxomicin and sarecycline.
Advantages
Narrow-spectrum antibiotics kill or inhibit only those bacterial species that are unwanted (i.e. those causing disease). As such, they leave most of the beneficial bacteria unaffected, minimizing the collateral damage to the microbiota.
Low propensity for bacterial resistance development.
Disadvantages
Often, the exact species of bacteria causing the illness is unknown, in which case narrow-spectrum antibiotics cannot be used, and broad-spectrum antibiotics are used instead. To identify the exact species of bacteria causing the illness, clinical specimens need to be taken for antimicrobial susceptibility testing in a clinical microbiology laboratory.
See also
Antimicrobial spectrum
Broad-spectrum antibiotics |
https://en.wikipedia.org/wiki/Amazon%20%28video%20game%29 | Amazon is an interactive fiction graphic adventure game. The game was published by Telarium in 1984 and written by Michael Crichton.
Development
Best-selling novelist and director Michael Crichton was a computer hobbyist who taught himself the programming language BASIC. In the early 1980s he, programmer Stephen Warady, and artist David Durand began developing an Apple II graphic adventure game based on Crichton's novel Congo; he sometimes programmed game sequences which Warady converted into much faster assembly language. They worked on the project for 18 months and, before Crichton found a publisher, Spinnaker Software approached him about adapting his novels for its Telarium division's new "bookware" games. The author revealed the game, amazing Spinnaker, and signed a contract in late 1983.
Crichton did not realize, however, that he had already sold all adaptation rights to Congo to another party. The team revised the game (renamed Amazon), moving the setting from Africa to South America and changing a diamond mine to an emerald mine; the novel's Amy the talking gorilla became Paco the talking parrot. Because the game was mostly complete, Telarium was able to port it to the Commodore 64 before Amazon's release. Crichton later said that he was disappointed with the game due to technological limitations at the time of its development.
Reception
Amazon was the best-selling Telarium title with as many as 100,000 copies sold, the majority likely for the Commodore 64. Computer Gaming World praised its rarely used animated graphics and Crichton's cooperation with its designers, stating that "the cohesive manner in which the game's storyline unfolds reflects Crichton's skill as a writer". James Delson of Family Computing reviewed the Apple II version and wrote that the game "has limited graphics, but what's there is choice." Delson also noted the game's difficulty and wrote, "Patience is more than a virtue in this game, it's a necessity." German reviewers recognized t |
https://en.wikipedia.org/wiki/Diceware | Diceware is a method for creating passphrases, passwords, and other cryptographic variables using ordinary dice as a hardware random number generator. For each word in the passphrase, five rolls of a six-sided die are required. The numbers from 1 to 6 that come up in the rolls are assembled as a five-digit number, e.g. 43146. That number is then used to look up a word in a cryptographic word list. In the original Diceware list 43146 corresponds to munch. By generating several words in sequence, a lengthy passphrase can thus be constructed randomly.
A Diceware word list is any list of unique words, preferably ones the user will find easy to spell and to remember. The contents of the word list do not have to be protected or concealed in any way, as the security of a Diceware passphrase is in the number of words selected, and the number of words each selected word could be taken from. Lists have been compiled for several languages, including Basque, Bulgarian, Catalan, Chinese, Czech, Danish, Dutch, English, Esperanto, Estonian, Finnish, French, German, Greek, Hebrew, Hungarian, Italian, Japanese, Latin, Māori, Norwegian, Polish, Portuguese, Romanian, Russian, Slovak, Slovenian, Spanish, Swedish and Turkish.
The level of unpredictability of a Diceware passphrase can be easily calculated: each word adds log2(6^5) = log2(7776), or about 12.9, bits of entropy to the passphrase. Originally, in 1995, Diceware creator Arnold Reinhold considered five words (about 64.6 bits) the minimal length needed by average users. However, in 2014 Reinhold started recommending that at least six words (about 77.5 bits) be used.
This level of unpredictability assumes that potential attackers know three things: that Diceware has been used to generate the passphrase, the particular word list used, and exactly how many words make up the passphrase. If the attacker has less information, the entropy can be greater than this.
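The entropy arithmetic above is easy to reproduce. The sketch below also simulates the dice with a cryptographically secure random source; the six-word stand-in list is purely illustrative (a real Diceware list has 7776 entries keyed 11111 to 66666).

```python
import math
import secrets

WORDS_PER_LIST = 6 ** 5                      # five dice rolls index 7776 words
BITS_PER_WORD = math.log2(WORDS_PER_LIST)    # ~12.9 bits of entropy per word

def passphrase_entropy(num_words):
    """Entropy in bits of a Diceware passphrase of the given length,
    assuming words are drawn uniformly and independently."""
    return num_words * BITS_PER_WORD

print(f"{passphrase_entropy(5):.1f}")  # 64.6
print(f"{passphrase_entropy(6):.1f}")  # 77.5

# Simulating the dice with a CSPRNG and a tiny stand-in word list
# (illustrative only; secrets.choice draws uniformly, like fair dice):
tiny_list = ["munch", "cleft", "cam", "synod", "lacy", "habit"]
phrase = " ".join(secrets.choice(tiny_list) for _ in range(6))
```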
The above calculations of the Diceware algorithm's entropy assume that, as recommended by Diceware's author, each word is sepa |
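The selection procedure described above is simple enough to sketch in a few lines. A minimal Python sketch (the word list here is a synthetic stand-in, not the real Diceware list):

```python
import itertools
import math
import secrets

# Stand-in word list: a real Diceware list maps each of the
# 6^5 = 7776 five-digit keys to a short memorable word.
WORDLIST = {"".join(k): "w" + "".join(k)
            for k in itertools.product("123456", repeat=5)}

ENTROPY_PER_WORD = math.log2(6 ** 5)   # ≈ 12.9 bits per word

def roll_key() -> str:
    """Five rolls of a fair six-sided die, assembled into a key such as '43146'."""
    return "".join(str(secrets.randbelow(6) + 1) for _ in range(5))

def passphrase(n_words: int = 6) -> str:
    """Six words by default, matching Reinhold's post-2014 advice (~77.5 bits)."""
    return " ".join(WORDLIST[roll_key()] for _ in range(n_words))
```

Using `secrets` rather than `random` mirrors the point of Diceware: the randomness source must be unpredictable, which is why physical dice (a hardware random number generator) are specified in the first place.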
https://en.wikipedia.org/wiki/MED31 | Mediator of RNA polymerase II transcription subunit 31 is a protein in humans encoded by the MED31 gene. It represents subunit Med31 of the Mediator complex. The family contains the Saccharomyces cerevisiae SOH1 homologues. SOH1 is responsible for the repression of temperature sensitive growth of the HPR1 mutant and has been found to be a component of the RNA polymerase II transcription complex. SOH1 not only interacts with factors involved in DNA repair, but transcription as well. Thus, the SOH1 protein may serve to couple these two processes. |
https://en.wikipedia.org/wiki/5040%20%28number%29 | 5040 is a factorial (7!), a superior highly composite number, abundant number, colossally abundant number and the number of permutations of 4 items out of 10 choices (10 × 9 × 8 × 7 = 5040). It is also one less than a square, making (7, 71) a Brown number pair.
Philosophy
Plato mentions in his Laws that 5040 is a convenient number to use for dividing many things (including both the citizens and the land of a city-state or polis) into lesser parts, making it an ideal number for the number of citizens (heads of families) making up a polis. He remarks that this number can be divided by all the (natural) numbers from 1 to 12 with the single exception of 11 (however, it is not the smallest number to have this property; 2520 is). He rectifies this "defect" by suggesting that two families could be subtracted from the citizen body to produce the number 5038, which is divisible by 11. Plato also took notice of the fact that 5040 can be divided by 12 twice over. Indeed, Plato's repeated insistence on the use of 5040 for various state purposes is so evident that Benjamin Jowett, in the introduction to his translation of Laws, wrote, "Plato, writing under Pythagorean influences, seems really to have supposed that the well-being of the city depended almost as much on the number 5040 as on justice and moderation."
Jean-Pierre Kahane has suggested that Plato's use of the number 5040 marks the first appearance of the concept of a highly composite number, a number with more divisors than any smaller number.
Number theoretical
If σ(n) is the divisor function and γ is the Euler–Mascheroni constant, then 5040 is the largest of 27 known numbers for which this inequality holds:
σ(n) ≥ e^γ · n · log log n.
This is somewhat unusual, since in the limit we have lim sup (σ(n) / (n log log n)) = e^γ as n → ∞.
Guy Robin showed in 1984 that the inequality fails for all larger numbers if and only if the Riemann hypothesis is true.
Interesting notes
5040 has exactly 60 divisors, counting itself and 1.
5040 is the largest factorial (7! = 5040) that is also a high |
https://en.wikipedia.org/wiki/Food%20combining | Food combining is a pseudoscientific nutritional approach that advocates specific combinations of foods (or advises against certain combinations). Some combinations are promoted as central to good health, improved digestion, and weight loss, despite insufficient evidence for these claims. Its rules include avoiding starches and proteins in the same meal; always eating fruit before, and not after, a meal; avoiding fruits and vegetables in the same meal; and not drinking cold water during a meal.
Food combining was originally promoted by Herbert M. Shelton in his book Food Combining Made Easy (1951), but the issue had been previously discussed by Edgar Cayce. The best-known food-combining diet is the Hay Diet, named after William Howard Hay, who reportedly lost 30 pounds in three months after adopting his own regimen. In recent years, the food-combining diet was popularized online by social media influencer Kenzie Burke, who promoted and profited from the fad diet through the sale of her "21-Day Reset" program.
The promotion of food combining is not based in fact and makes claims without scientific backing, displaying characteristics of pseudoscience. Kenzie Burke promotes her 21-Day Reset program with numerous positive testimonials detailing customers' success with it. One randomized controlled trial of food combining, performed in 2000, found no evidence that food combining was any more effective than a "balanced" diet in promoting weight loss. Beyond this study there is minimal legitimate scientific research on food combining as a diet, and consequently no sufficient body of scientific evidence for any of the diet's claims or its purported health benefits.
See also
Protein combining
Alkaline diet
Fit for Life
Foodpairing
Gracie Diet
Michel Montignac
Raw foodism
List of di |
https://en.wikipedia.org/wiki/Synaptic%20stabilization | Synaptic stabilization is crucial in the developing and adult nervous systems and is considered a result of the late phase of long-term potentiation (LTP). The mechanism involves strengthening and maintaining active synapses through increased expression of cytoskeletal and extracellular matrix elements and postsynaptic scaffold proteins, while pruning less active ones. For example, cell adhesion molecules (CAMs) play a large role in synaptic maintenance and stabilization. Gerald Edelman discovered CAMs and studied their function during development, which showed CAMs are required for cell migration and the formation of the entire nervous system. In the adult nervous system, CAMs play an integral role in synaptic plasticity relating to learning and memory.
Types of CAMs
SynCAMs
Synaptic cell adhesion molecules (CAMs) play a crucial role in axon pathfinding and synaptic establishment between neurons during neurodevelopment. They are integral to many synaptic processes, including the correct alignment of pre- and postsynaptic signal transduction pathways, vesicular recycling (endocytosis and exocytosis), integration of postsynaptic receptors, and anchoring to the cytoskeleton to ensure the stability of synaptic components.
SynCAMs (also known as Cadm or nectin-like molecules) are a specific type of synaptic CAM found in vertebrates that promotes the growth and stabilization of excitatory (not inhibitory) synapses. SynCAMs are localized primarily in the brain at both pre- and postsynaptic sites, and their structures consist of intracellular FERM and PDZ binding domains, a single transmembrane domain, and three extracellular Ig domains. During neurodevelopment, SynCAMs such as SynCAM1 act as "contact sensors" of axonal growth cones, accumulating rapidly when axo-dendritic connections are made and helping to form a stable adhesion complex.
SynCAM1 and neuroligin are the two CAMs known to be sufficient to initiate the formation of presynaptic terminal |
https://en.wikipedia.org/wiki/Copernicium | Copernicium is a synthetic chemical element with the symbol Cn and atomic number 112. Its known isotopes are extremely radioactive, and have only been created in a laboratory. The most stable known isotope, copernicium-285, has a half-life of approximately 30 seconds. Copernicium was first created in 1996 by the GSI Helmholtz Centre for Heavy Ion Research near Darmstadt, Germany. It was named after the astronomer Nicolaus Copernicus.
In the periodic table of the elements, copernicium is a d-block transactinide element and a group 12 element. During reactions with gold, it has been shown to be an extremely volatile element, so much so that it is possibly a gas or a volatile liquid at standard temperature and pressure.
Copernicium is calculated to have several properties that differ from its lighter homologues in group 12, zinc, cadmium and mercury; due to relativistic effects, it may give up its 6d electrons instead of its 7s ones, and it may have more similarities to the noble gases such as radon rather than its group 12 homologues. Calculations indicate that copernicium may show the oxidation state +4, while mercury shows it in only one compound of disputed existence and zinc and cadmium do not show it at all. It has also been predicted to be more difficult to oxidize copernicium from its neutral state than the other group 12 elements. Predictions vary on whether solid copernicium would be a metal, semiconductor, or insulator. Copernicium is one of the heaviest elements whose chemical properties have been experimentally investigated.
Introduction
History
Discovery
Copernicium was first created on February 9, 1996, at the Gesellschaft für Schwerionenforschung (GSI) in Darmstadt, Germany, by Sigurd Hofmann, Victor Ninov et al. This element was created by firing accelerated zinc-70 nuclei at a target made of lead-208 nuclei in a heavy ion accelerator. A single atom of copernicium was produced with a mass number of 277. (A second was originally reported, but was f |
https://en.wikipedia.org/wiki/Equichordal%20point%20problem | In Euclidean plane geometry, the equichordal point problem is the question whether a closed planar convex body can have two equichordal points. The problem was originally posed in 1916 by Fujiwara and in 1917 by Wilhelm Blaschke, Hermann Rothe, and Roland Weitzenböck. A generalization of this problem statement was answered in the negative in 1997 by Marek R. Rychlik.
Problem statement
An equichordal curve is a closed planar curve for which a point in the plane exists such that all chords passing through this point are equal in length. Such a point is called an equichordal point. It is easy to construct equichordal curves with a single equichordal point, particularly when the curves are symmetric; the simplest construction is a circle.
It was long conjectured that no convex curve with two equichordal points can exist. More generally, it was asked whether there exists a Jordan curve with two equichordal points O₁ and O₂, such that the curve would be star-shaped with respect to each of the two points.
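The remark that curves with a single equichordal point are easy to construct can be made concrete: in polar coordinates about the candidate point, any radius function with r(θ) + r(θ + π) = L makes every chord through the origin have length L. A numeric sketch (the particular r below is an illustrative choice, not from the text):

```python
import math

# Any r with r(t) + r(t + pi) = L gives an equichordal point at the
# origin, since a chord through the origin is two opposite radii.
# Here we pick r(t) = (1 + eps*cos t) / 2, so L = 1; eps = 0 recovers
# the circle of radius 1/2.
eps = 0.3                                  # illustrative perturbation size

def r(t: float) -> float:
    return (1 + eps * math.cos(t)) / 2     # r(t) + r(t + pi) = 1 exactly

chords = [r(k * math.pi / 100) + r(k * math.pi / 100 + math.pi)
          for k in range(200)]
assert all(abs(c - 1.0) < 1e-12 for c in chords)  # every chord has length 1
```

The hard part of the problem is not this one-point construction but showing that no convex curve admits two such points simultaneously.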
Excentricity (or eccentricity)
Many results on equichordal curves refer to their excentricity. It turns out that the smaller the excentricity, the harder it is to disprove the existence of curves with two equichordal points. It can be shown rigorously that a small excentricity means the curve must be close to a circle.
Let C be the hypothetical convex curve with two equichordal points O₁ and O₂, and let L be the common length of all chords of the curve passing through O₁ or O₂. Then the excentricity is the ratio a/L, where a is the distance between the points O₁ and O₂.
The history of the problem
The problem has been extensively studied, with significant papers published over eight decades preceding its solution:
In 1916 Fujiwara proved that no convex curves with three equichordal points exist.
In 1917 Blaschke, Rothe and Weitzenböck formulated the problem again.
In 1923 Süss showed certain symmetries and uniqueness of the curve, if it existed.
In 1953 G. A. |
https://en.wikipedia.org/wiki/Jerzy%20Giedymin | Jerzy Giedymin (September 18, 1925 – June 24, 1993) was a philosopher and historian of mathematics and science.
Life
Giedymin, of Polish origin, was born in 1925.
He studied at the University of Poznań under Kazimierz Ajdukiewicz. In 1953 Jerzy Giedymin succeeded Adam Wiegner at the Chair of Logic at the Faculty of Philosophy.
The so-called Poznań School was a Marxist current of philosophy marked by an idealisational theory of science which emphasised the scientific features of Marxism in close confrontation with contemporary logic and epistemology.
In 1968 Giedymin moved to England and attended seminars by Karl Popper at the London School of Economics.
In 1971 he came to Sussex to become Professor at the School of Mathematical and Physical Sciences of the University of Sussex.
Giedymin died during a trip to Poland on 24 June 1993.
Work
Giedymin was convinced that Henri Poincaré's conventionalist philosophy was fundamentally misunderstood and thus underestimated. Giedymin argues that Poincaré was at the origin of much of the 20th century's innovations in relativity theory and quantum physics.
Giedymin's standpoint was much influenced by his exposure to Kazimierz Ajdukiewicz's perception of the history of ideas which in defiance of traditional empiricism reviews the philosophy of science of the early 20th century in the light of pragmatic conventionalism.
Bibliography
Books
Jerzy Giedymin, Z problemow logicznych analizy historycznej [Some Logical Problems of Historical Analysis], Poznanskie towarzystwo przyjaciol nauk. Wydzial filologiczno-filozoficzny. Prace Komisji filozoficznej. tom 10. zesz. 3., Poznań, 1961.
Jerzy Giedymin, Problemy, zalozenia, rozstrzygniecia. Studia nad logicznymi podstawami nauk spolecznych [Questions, assumptions, decidability. Essays concerning the logical functions of the social sciences], Polskie Towarzystwo Ekonomiczne. Oddzial w Poznaniu. Rozprawy i monografie. No. 10, Poznań, 1964.
Jerzy Giedymin ed., Kazimierz Ajdukiewicz: |
https://en.wikipedia.org/wiki/Forum%20of%20Incident%20Response%20and%20Security%20Teams | The Forum of Incident Response and Security Teams (FIRST) is a global association of incident response and security teams. It aims to improve cooperation between security teams on handling major cybersecurity incidents.
The 2018 Report of the United Nations Secretary-General's High-Level Panel on Digital Cooperation noted FIRST as a neutral third party which can help build trust and exchange best practices and tools during cybersecurity incidents.
History
FIRST was founded as an informal group by a number of incident response teams after the WANK computer worm highlighted the need for better coordination of incident response activities between organizations during major incidents. It was formally incorporated in California on August 7, 1995, and moved to North Carolina on May 14, 2014.
Activities
In 2020, FIRST launched EthicsFIRST, a code of Ethics for Incident Response teams.
Annually, FIRST offers a Suguru Yamaguchi Fellowship, which helps incident response teams with national responsibility gain further integration with the international incident response community. It also maintains an Incident Response Hall of Fame, highlighting individuals who contributed significantly to the Incident Response community.
FIRST maintains several international standards, including the Common Vulnerability Scoring System, a standard for expressing impact of security vulnerabilities; the Traffic light protocol for classifying sensitive information; and the Exploit Prediction Scoring System, an effort for predicting when software vulnerabilities will be exploited.
FIRST is a partner of the International Telecommunication Union (ITU) and the Department of Foreign Affairs and Trade of Australia on Cybersecurity. The ITU co-organizes with FIRST the Women in Cyber Mentorship Programme, which engages cybersecurity leaders in the field, and connects them with women worldwide.
Together with the National Telec |
https://en.wikipedia.org/wiki/Mechanical%20index | Mechanical index (MI) is a unitless ultrasound metric. It is defined as MI = Pr / √fc,
where
Pr is the peak rarefaction pressure of the ultrasound wave (MPa), derated by an attenuation factor to account for in-tissue acoustic attenuation
fc is the center frequency of the ultrasound pulse (MHz).
MI is measured with a calibrated hydrophone in a tank of degassed water. The pulse pressure amplitudes are measured along the central axis of the ultrasound beam. The derated Pr is calculated by reducing the measured pressure using an attenuation coefficient of 0.3 dB/cm/MHz.
MI is a unitless number that can be used as an index of cavitation bio-effects; a higher MI value indicates greater exposure. Levels below 0.3 are generally considered to have no detectable effects. Currently the FDA stipulates that diagnostic ultrasound scanners cannot exceed a mechanical index of 1.9. |
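As a worked example of the definition and the derating step, here is a small Python sketch (the `depth_cm` parameter and the example values are illustrative assumptions, not from the text):

```python
import math

def mechanical_index(pr_mpa: float, fc_mhz: float, depth_cm: float = 0.0) -> float:
    """MI = (derated peak rarefactional pressure, MPa) / sqrt(fc, MHz).

    The measured pressure is derated with the 0.3 dB/cm/MHz attenuation
    coefficient; depth_cm is an illustrative in-tissue path length
    (0 means the pressure passed in is already derated).
    """
    atten_db = 0.3 * depth_cm * fc_mhz            # total attenuation in dB
    pr_derated = pr_mpa * 10 ** (-atten_db / 20)  # dB -> pressure (amplitude) ratio
    return pr_derated / math.sqrt(fc_mhz)

# A 2 MPa pulse at 4 MHz, already derated: MI = 2 / sqrt(4) = 1.0,
# below the FDA's diagnostic limit of 1.9.
assert abs(mechanical_index(2.0, 4.0) - 1.0) < 1e-12
assert mechanical_index(2.0, 4.0, depth_cm=5.0) < 1.0  # derating lowers MI
```

Note the factor of 20 (not 10) in the dB conversion: attenuation in dB applies to pressure amplitude, not intensity.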
https://en.wikipedia.org/wiki/Visitor%20Based%20Network | A Visitor-based network (VBN) is a computer network intended for mobile users in need of temporary Internet access. A visitor-based network is most commonly established in hotels, airports, convention centers, universities, and business offices. It gives the on-the-go user a quick and painless way to temporarily connect a device to networks and broadband Internet connections. A visitor-based network usually includes hardware (such as VBN gateways, hubs, switches, and/or routers), telecommunications (an Internet connection), and service (subscriber support).
What is a VBN Gateway?
Virtually any Internet-connected Ethernet LAN can become a visitor-based network by adding a device generally termed a "VBN gateway". The function of a VBN gateway is to provide a necessary layer of management between public users and the Internet router, enabling a plug-and-play connection for visitors. Typical VBN gateways provide services and support for billing and management integrations, such as property management systems (PMS) in hotels, credit-card billing interfaces, or RADIUS/LDAP servers for central authentication.
A common criterion for VBN gateways is that they allow users to connect and access the available network services with little or no configuration on their local machines (specifically, no modification of their IP address). To accomplish this, a layer-2 (see: OSI model#Layer 2: Data link layer) connection is required between the user and the VBN gateway. Aside from the layer-2 (or bridged) network requirement, there are few other restrictions on using a VBN gateway to enable a network. As such, Ethernet, 802.11x, CMTS, and xDSL are all acceptable media for distributing networks to use with VBN gateways.
In the simplest form a VBN gateway is a hardware device with a minimum of two network connections. One network connection is considered the subscriber network, and the other the uplink to the Internet. The majority of VBN gateways on the market today all use Ether |
https://en.wikipedia.org/wiki/Economic%20threshold | In integrated pest management, the economic threshold is the density of a pest at which a control treatment will provide an economic return.
An economic threshold is the insect's population level or extent of crop damage at which the value of the crop destroyed exceeds the cost of controlling the pest. Economic thresholds can be expressed in a variety of ways including the number of insects per plant or per square metre, the amount of leaf surface damage, etc. In many cases, thresholds have been established through scientific research. Because some combinations of pests and crops have not yet been studied, some thresholds are just educated estimates. |
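A break-even calculation of this kind can be sketched in a few lines; all names and numbers below are hypothetical illustrations, not published thresholds:

```python
# Treatment is economically justified once the expected value of crop
# destroyed exceeds the cost of controlling the pest.
def treatment_pays(pests_per_m2: float, loss_fraction_per_pest: float,
                   crop_value_per_m2: float, control_cost_per_m2: float) -> bool:
    expected_loss = pests_per_m2 * loss_fraction_per_pest * crop_value_per_m2
    return expected_loss > control_cost_per_m2

# 10 pests/m², each destroying 2% of a crop worth 100/m², vs. a 15/m² spray:
assert treatment_pays(10, 0.02, 100, 15)       # expected loss 20 > cost 15
assert not treatment_pays(5, 0.02, 100, 15)    # expected loss 10 < cost 15
```

Real thresholds fold in many more factors (pest stage, crop stage, control efficacy), which is why, as noted above, many published values rest on field research or educated estimates.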
https://en.wikipedia.org/wiki/Oscillatory%20integral%20operator | In mathematics, in the field of harmonic analysis, an oscillatory integral operator is an integral operator of the form (T_λ f)(x) = ∫ e^{iλ S(x,y)} a(x,y) f(y) dy,
where the function S(x,y) is called the phase of the operator and the function a(x,y) is called the symbol of the operator. λ is a parameter. One often considers S(x,y) to be real-valued and smooth, and a(x,y) smooth and compactly supported. Usually one is interested in the behavior of Tλ for large values of λ.
Oscillatory integral operators often appear in many fields of mathematics (analysis, partial differential equations, integral geometry, number theory) and in physics. Properties of oscillatory integral operators have been studied by Elias Stein and his school.
Hörmander's theorem
The following bound on the L2 → L2 action of oscillatory integral operators (or L2 → L2 operator norm) was obtained by Lars Hörmander in his paper on Fourier integral operators:
Assume that x, y ∈ Rⁿ, n ≥ 1. Let S(x,y) be real-valued and smooth, and let a(x,y) be smooth and compactly supported. If det(∂²S/∂xᵢ∂yⱼ) ≠ 0 everywhere on the support of a(x,y), then there is a constant C such that Tλ, which is initially defined on smooth functions, extends to a continuous operator from L2(Rn) to L2(Rn), with norm bounded by Cλ^(−n/2), for every λ ≥ 1. |
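The λ^(−n/2) decay in Hörmander's bound can be observed numerically in the simplest one-dimensional case. The discretization choices below are assumptions of this sketch, and the symbol merely vanishes at the boundary of the square rather than being smooth and compactly supported:

```python
import numpy as np

# Take n = 1 with phase S(x, y) = x*y, whose mixed Hessian d2S/dxdy = 1
# is everywhere nondegenerate, and symbol a(x, y) = (1 - x^2)(1 - y^2)
# on [-1, 1]^2. Discretize T_lambda as a matrix and estimate its
# L2 operator norm via the largest singular value.
def op_norm(lam: float, m: int = 600) -> float:
    y, h = np.linspace(-1.0, 1.0, m, retstep=True)
    x = y[:, None]
    kernel = (1 - x**2) * (1 - y**2) * np.exp(1j * lam * x * y) * h
    return np.linalg.norm(kernel, 2)       # largest singular value

# The norm shrinks as lambda grows, consistent with the lambda**(-1/2)
# bound for n = 1.
n16, n64, n256 = op_norm(16), op_norm(64), op_norm(256)
assert n256 < n64 < n16
```

For S(x,y) = xy the operator is essentially a cut-off, rescaled Fourier transform, so the λ^(−1/2) rate here also follows directly from Plancherel's theorem.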
https://en.wikipedia.org/wiki/143rd%20meridian%20east | The 143rd meridian east of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, Asia, the Pacific Ocean, Australasia, the Indian Ocean, the Southern Ocean, and Antarctica to the South Pole.
The 143rd meridian east forms a great circle with the 37th meridian west.
From Pole to Pole
Starting at the North Pole and heading south to the South Pole, the 143rd meridian east passes through:
{| class="wikitable plainrowheaders"
! scope="col" width="130" | Co-ordinates
! scope="col" | Country, territory or sea
! scope="col" | Notes
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Arctic Ocean
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Sakha Republic — Kotelny Island, New Siberian Islands
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | East Siberian Sea
| style="background:#b0e0e6;" | Sannikov Strait
|-
|
! scope="row" |
| Sakha Republic — Great Lyakhovsky Island, New Siberian Islands
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | East Siberian Sea
| style="background:#b0e0e6;" | Laptev Strait
|-valign="top"
|
! scope="row" |
| Sakha Republic Khabarovsk Krai — from
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Sea of Okhotsk
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Sakhalin Oblast — island of Sakhalin
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Sea of Okhotsk
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Sakhalin Oblast — island of Sakhalin
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Sea of Okhotsk
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Hokkaidō Prefecture — island of Hokkaidō
|-valign="top"
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Pacific Ocean
| style="background:#b0e0e6;" | Passing just west of Aua Island, (at ) Passing just east of Wuv |
https://en.wikipedia.org/wiki/Steamroller%20%28microarchitecture%29 | AMD Steamroller Family 15h is a microarchitecture developed by AMD for AMD APUs, which succeeded Piledriver at the beginning of 2014 as the third-generation Bulldozer-based microarchitecture. Steamroller APUs continue to use two-core modules as their predecessors did, while aiming at achieving greater levels of parallelism.
Microarchitecture
Steamroller retains the two-core modules found in the Bulldozer and Piledriver designs, called clustered multi-threading (CMT), meaning that one module is marketed as a dual-core processor. The focus of Steamroller is greater parallelism. Improvements center on independent instruction decoders for each core within a module, a 25% wider maximum dispatch per thread, better instruction schedulers, an improved perceptron branch predictor, larger and smarter caches, up to 30% fewer instruction cache misses, a 20% reduction in branch mispredictions, a dynamically resizable L2 cache, a micro-operations queue, more internal register resources, and an improved memory controller.
AMD estimated that these improvements would increase instructions per cycle (IPC) by up to 30% compared to the first-generation Bulldozer core, while maintaining Piledriver's high clock rates with decreased power consumption. The final result was a 9% single-threaded IPC improvement and an 18% multi-threaded IPC improvement over Piledriver.
Steamroller, the microarchitecture for CPUs, as well as Graphics Core Next, the microarchitecture for GPUs, are paired together in the APU lines to support features specified in Heterogeneous System Architecture.
History
In 2011, AMD announced a third-generation Bulldozer-based line of processors for 2013, with Next Generation Bulldozer as the working title, using the 28 nm manufacturing process.
On 21 September 2011, leaked AMD slides indicated that this third generation of Bulldozer core was codenamed Steamroller.
In January 2014, the first Kaveri APUs became available.
Starting from May 2015 till March 2016 new APUs were laun |
https://en.wikipedia.org/wiki/Benitec%20Biopharma | Benitec Biopharma Ltd () is an Australian biotechnology company founded in 1997. It is engaged in the development of gene-silencing therapies for the treatment of chronic and life-threatening diseases using DNA-directed RNA interference (ddRNAi) technology.
The Commonwealth Scientific and Industrial Research Organisation (CSIRO) has researched RNAi extensively, developing the small hairpin RNA concept employed in ddRNAi. Benitec Biopharma has an exclusive license to this ddRNAi technology in human therapeutic uses and research.
Research and development
Benitec Biopharma is researching ddRNAi in the following fields:
Hepatitis B (under development with partner Biomics Biotechnologies)
Oculopharyngeal muscular dystrophy (OPMD) |
https://en.wikipedia.org/wiki/Supertramp%20%28ecology%29 | In ecology, a supertramp species is any type of animal which follows the "supertramp" strategy of high dispersion among many different habitats, towards none of which it is particularly specialized. Supertramp species are typically the first to arrive in newly available habitats, such as volcanic islands and freshly deforested land; they can have profoundly negative effects on more highly specialized flora and fauna, both directly through predation and indirectly through competition for resources.
The name was coined by Jared Diamond in 1974, as an allusion to both the itinerant lifestyle of the tramp, and the then-popular band Supertramp. Although Diamond originally applied the term only to birds, the term has since been applied to insects and reptiles as well, among others; any species which can migrate can be a supertramp.
In an evolutionary context, the supertramp may represent the first stage of the taxon cycle.
See also
Assembly rules |
https://en.wikipedia.org/wiki/White%20magic | White magic has traditionally referred to the use of supernatural powers or magic for selfless purposes. Practitioners of white magic have been given titles such as wise men or women, healers, white witches or wizards. Many of these people claimed to have the ability to do such things because of knowledge or power that was passed on to them through hereditary lines, or by some event later in their lives. White magic was practiced through healing, blessing, charms, incantations, prayers, and songs. White magic is the benevolent counterpart of malicious black magic.
History
Early origins
In his 1978 book, A History of White Magic, recognised occult author Gareth Knight traces the origins of white magic to early adaptations of paleolithic religion and early religious history in general, including the polytheistic traditions of Ancient Egypt and the later monotheistic ideas of Judaism and early Christianity.
In particular, he traced many of the traditions of white magic to the early worship of local "gods and goddesses of fertility and vegetation who were usually worshipped at hill-top shrines" and were "attractive to a nomadic race settling down to an agricultural existence". He focuses in particular on the nomadic Hebrew-speaking tribes and suggests that early Jews saw the worship of such deities more in terms of atavism than evil. It was only when the polytheistic and pagan Roman Empire began to expand that Jewish leaders began to rally against those ideas.
Early origins of white magic can also be traced back to the Cunning Folk.
During the Renaissance
By the late 15th century, natural magic "had become much discussed in high-cultural circles". "Followers" of Marsilio Ficino advocated the existence of spiritual beings and spirits in general, though many such theories ran counter to the ideas of the later Age of Enlightenment. While Ficino and his supporters were treated with hostility by the Roman Catholic Church, the Church itself also acknowledged the existe |
https://en.wikipedia.org/wiki/Chiplet | A chiplet is a tiny integrated circuit (IC) that contains a well-defined subset of functionality. It is designed to be combined with other chiplets on an interposer in a single package. A set of chiplets can be implemented in a mix-and-match "Lego-like" assembly. This provides several advantages over a traditional system on chip:
Reusable IP (intellectual property): the same chiplet can be used in many different devices
Heterogeneous integration: chiplets can be fabricated with different processes, materials, and nodes, each optimized for its particular function
Known good die: chiplets can be tested before assembly, improving the yield of the final device
Multiple chiplets working together in a single integrated circuit may be called a multi-chip module, hybrid IC, 2.5D IC, or an advanced package.
Chiplets may be connected with standards such as UCIe, bunch of wires (BoW), OpenHBI, and OIF XSR.
The term was coined by University of California, Berkeley professor John Wawrzynek as a component of the 2006 RAMP (Research Accelerator for Multiple Processors) project extension for the Department of Energy, as was the RISC-V architecture.
https://en.wikipedia.org/wiki/Prism%20%28chipset%29 | The Prism brand is used for wireless networking integrated circuits (commonly called "chips") from Conexant for wireless LANs. They were formerly produced by Intersil Corporation.
Legacy 802.11b products (Prism 2/2.5/3)
The open-source HostAP driver supports the IEEE 802.11b Prism 2/2.5/3 family of chips.
Wireless adaptors which use the Prism chipset are known for compatibility, and are preferred for specialist applications such as packet capture.
No 64-bit Windows drivers are known to exist.
Intersil firmware
WEP
WPA (TKIP), after update
WPA2 (CCMP), after update
Lucent/Agere
WEP
WPA (TKIP in hardware)
802.11b/g products (Prism54, ISL38xx)
The chipset has undergone a major redesign for 802.11g compatibility and cost reduction, and newer "Prism54" chipsets are not compatible with their predecessors.
Intersil initially provided a Linux driver for the first Prism54 chips which implemented a large part of the 802.11 stack in the firmware. However, further cost reductions caused a new, lighter firmware to be designed and the amount of on-chip memory to shrink, making it impossible to run the older version of the firmware on the latest chips. In the meantime, the PRISM business was sold to Conexant, which never published information about the newer firmware API that would enable a Linux driver to be written.
However, a reverse engineering effort eventually made it possible to use the new Prism54 chipsets under the Linux and BSD operating systems.
See also
HostAP driver for prism chipsets
External links
PRISM solutions at Conexant
GPL drivers and firmware for the ISL38xx-based Prism chipsets (mostly reverse engineered)
Wireless networking hardware |
https://en.wikipedia.org/wiki/124th%20meridian%20west | The meridian 124° west of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, North America, the Pacific Ocean, the Southern Ocean, and Antarctica to the South Pole.
The 124th meridian west forms a great circle with the 56th meridian east.
From Pole to Pole
Starting at the North Pole and heading south to the South Pole, the 124th meridian west passes through:
{| class="wikitable plainrowheaders"
! scope="col" width="130" | Co-ordinates
! scope="col" | Country, territory or sea
! scope="col" | Notes
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Arctic Ocean
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Beaufort Sea
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | M'Clure Strait
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Northwest Territories — Banks Island
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Beaufort Sea
| style="background:#b0e0e6;" | Burnett Bay
|-
|
! scope="row" |
| Northwest Territories — Banks Island
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Amundsen Gulf
| style="background:#b0e0e6;" |
|-valign="top"
|
! scope="row" |
| Northwest Territories — passing through the Great Bear Lake<br>Yukon — for about from , the easternmost part of the territory<br>British Columbia — from
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Strait of Georgia
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| British Columbia — Vancouver Island; passing through Nanaimo
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Strait of Juan de Fuca
| style="background:#b0e0e6;" |
|-valign="top"
|
! scope="row" |
| Washington<br>Oregon — from
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Pacific Oc |
https://en.wikipedia.org/wiki/Carcinology | Carcinology is a branch of zoology that consists of the study of crustaceans, a group of arthropods that includes lobsters, crayfish, shrimp, krill, copepods, barnacles and crabs. Other names for carcinology are malacostracology, crustaceology, and crustalogy, and a person who studies crustaceans is a carcinologist or occasionally a malacostracologist, a crustaceologist, or a crustalogist.
The word carcinology derives from Greek karkínos ("crab") and -logia ("study of").
Subfields
Carcinology is a subdivision of arthropodology, the study of arthropods which includes arachnids, insects, and myriapods. Carcinology branches off into taxonomically oriented disciplines such as:
astacology – the study of crayfish
cirripedology – the study of barnacles
copepodology – the study of copepods
Journals
Scientific journals devoted to the study of crustaceans include:
Crustaceana
Journal of Crustacean Biology
Nauplius (journal)
See also
Entomology
Publications in carcinology
List of carcinologists |
https://en.wikipedia.org/wiki/65%2C537 | 65537 is the integer after 65536 and before 65538.
In mathematics
65537 is the largest known prime number of the form 2^(2^n) + 1 (here, 2^(2^4) + 1). Therefore, a regular polygon with 65537 sides is constructible with compass and unmarked straightedge. Johann Gustav Hermes gave the first explicit construction of this polygon. In number
theory, primes of this form are known as Fermat primes, named after the mathematician
Pierre de Fermat. The only known prime Fermat numbers are 3, 5, 17, 257, and 65537.
In 1732, Leonhard Euler found that the next Fermat number is composite: 2^32 + 1 = 4294967297 = 641 × 6700417.
In 1880, Fortuné Landry showed that the following Fermat number is also composite: 2^64 + 1 = 274177 × 67280421310721.
65537 is also the 17th Jacobsthal–Lucas number, and currently the largest known integer n for which the number is a probable prime.
Applications
65537 is commonly used as a public exponent in the RSA cryptosystem. Because it is the Fermat number 2^(2^n) + 1 with n = 4, the common shorthand is "F4". This value was used in RSA mainly for historical reasons; early raw RSA implementations (without proper padding) were vulnerable to very small exponents, while use of high exponents was computationally expensive with no advantage to security (assuming proper padding).
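This choice also has a practical benefit: in binary, 65537 is a 1 followed by fifteen 0s and a final 1, so square-and-multiply exponentiation needs only sixteen modular squarings plus one extra multiplication. A minimal sketch of this, using a toy modulus rather than a real RSA key:

```python
# e = 65537 = 2^16 + 1, so m^e mod n can be computed as sixteen modular
# squarings (giving m^(2^16)) followed by a single multiply by m.

def rsa_encrypt(m, n):
    # Square-and-multiply specialized to the exponent e = 2^16 + 1.
    c = m % n
    for _ in range(16):       # c = m^(2^16) mod n after the loop
        c = (c * c) % n
    return (c * m) % n        # m^(2^16 + 1) mod n

# Toy check against Python's built-in modular exponentiation
# (n = 3233 = 61 * 59 is a textbook toy modulus, not a secure key).
assert rsa_encrypt(42, 3233) == pow(42, 65537, 3233)
```

The same structure explains why, by contrast, a random 17-bit exponent would cost roughly half again as many multiplications on average.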
65537 is also used as the modulus in some Lehmer random number generators, such as the one used by ZX Spectrum, which ensures that any seed value will be coprime to it (vital to ensure the maximum period) while also allowing efficient reduction by the modulus using a bit shift and subtract. |
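A minimal sketch of such a generator follows; the multiplier 75 is the one commonly attributed to the ZX Spectrum's RND and should be treated as an assumption here. The shift-and-subtract reduction works because 2^16 ≡ -1 (mod 65537):

```python
# Lehmer generator with prime modulus M = 65537. Writing x = h*2^16 + l,
# the identity 2^16 ≡ -1 (mod 65537) gives x ≡ l - h (mod 65537), so the
# reduction needs only a mask, a shift, and a subtract -- no division.

M = 65537

def reduce_mod_m(x):
    # Valid for any 0 <= x < 2^32.
    r = (x & 0xFFFF) - (x >> 16)
    return r + M if r < 0 else r

def lehmer(seed):
    # Yields the sequence state <- 75 * state mod 65537; any nonzero seed
    # is coprime to the prime modulus, so the state never collapses to 0.
    state = seed
    while True:
        state = reduce_mod_m(state * 75)
        yield state
```

By Fermat's little theorem the state returns to the seed after at most 65536 steps; a full-period multiplier (a primitive root of 65537) achieves exactly that period.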
https://en.wikipedia.org/wiki/Replikins | Replikins are a group of peptides, whose increase in concentration in virus or other organism proteins is associated with rapid replication. It is often measured in number of replikins per 100 amino acids. This particular group of peptides have been found to play a significant role in predicting both infectivity and lethality of various viral strains. In particular, this group allowed the prediction of the A/H1N1 pandemic almost one year before onset.
A method for identifying replikins was patented by Samuel and Elenore S. Bogoch in 2001. The peptide group was first identified by a proprietary company called Replikins, who have trademarked the name "Replikin Count".
See also
2009 flu pandemic |
https://en.wikipedia.org/wiki/List%20of%20drugs%20used%20to%20treat%20schistosomiasis | A schistosomicide is a drug used to combat schistosomiasis.
List
Examples listed in MeSH include:
amoscanate
arteether
artemether
chloroxylenol
hycanthone
lucanthone
metrifonate
niridazole
oltipraz
oxamniquine
praziquantel
stibophen
See also
Schistosomiasis vaccine |
https://en.wikipedia.org/wiki/Hyraceum | Hyraceum () is the petrified and rock-like excrement composed of both urine and feces of the rock hyrax (Procavia capensis) and closely related species.
The rock hyrax defecates in the same location over generations, which may be sheltered in caves. These locations form middens that are composed of hyraceum and hyrax pellets, which can be petrified and preserved for over 50,000 years. These middens form a record of past climate and vegetation.
It is also a sought-after material that has been used in both traditional South African medicine and perfumery.
Hyraceum in perfumery
The material hardens and ages until it becomes a fairly sterile, rock-like material (also referred to as "Africa Stone") that contains compounds giving it an animalic, deeply complex fermented scent that combines the elements of musk, castoreum, civet, tobacco and agarwood. The material is harvested without disturbing the animals by digging strata of the brittle, resinous, irregular, blackish-brown stone; because animals are not harmed in its harvesting, it is often an ethical substitute for deer musk and civet, which require killing or inflicting pain on the animal.
However, hyraceum accumulates extremely slowly, making it essentially a non-renewable resource. Since hyraceum, accumulating in the form of rock hyrax middens, is in many cases the only available source of information about climate and environmental change in the arid regions of Africa and Arabia, its collection for commercial sale has been criticized in scientific circles as the destruction of a critical resource for understanding the impact of climate change in sensitive regions.
Hyraceum in traditional South African medicine
After it has fossilized, hyraceum has been used as a traditional folk medicine in South Africa for treating epilepsy.
One clinical study of 14 samples of the material collected at various geographical locations in South Africa tested the material |
https://en.wikipedia.org/wiki/Optical%20character%20recognition | Optical character recognition or optical character reader (OCR) is the electronic or mechanical conversion of images of typed, handwritten or printed text into machine-encoded text, whether from a scanned document, a photo of a document, a scene photo (for example the text on signs and billboards in a landscape photo) or from subtitle text superimposed on an image (for example: from a television broadcast).
Widely used as a form of data entry from printed paper data records (whether passport documents, invoices, bank statements, computerized receipts, business cards, mail, printed data, or any suitable documentation), it is a common method of digitizing printed texts so that they can be electronically edited, searched, stored more compactly, displayed online, and used in machine processes such as cognitive computing, machine translation, (extracted) text-to-speech, key data and text mining. OCR is a field of research in pattern recognition, artificial intelligence and computer vision.
Early versions needed to be trained with images of each character, and worked on one font at a time. Advanced systems capable of producing a high degree of accuracy for most fonts are now common, and with support for a variety of image file format inputs. Some systems are capable of reproducing formatted output that closely approximates the original page including images, columns, and other non-textual components.
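The per-character training described above can be illustrated with a toy template matcher; the 3×3 glyph data below is entirely hypothetical. Each known character is stored as a small bitmap, and an input glyph is assigned to the template with the fewest differing pixels:

```python
# Toy template-matching "OCR": classify a glyph by nearest template
# under Hamming (pixel-mismatch) distance. Real early OCR systems used
# the same idea at higher resolution, one font at a time.

TEMPLATES = {
    "I": ["010",
          "010",
          "010"],
    "L": ["100",
          "100",
          "111"],
    "T": ["111",
          "010",
          "010"],
}

def distance(a, b):
    # Number of differing pixels between two equally-sized bitmaps.
    return sum(ca != cb
               for row_a, row_b in zip(a, b)
               for ca, cb in zip(row_a, row_b))

def classify(glyph):
    # Pick the template character with the smallest pixel mismatch.
    return min(TEMPLATES, key=lambda ch: distance(TEMPLATES[ch], glyph))

# A "T" with one flipped pixel is still recognized.
noisy_t = ["111",
           "010",
           "110"]
print(classify(noisy_t))  # → T
```

Modern systems replace the hand-drawn templates with learned features, which is what makes them robust across fonts.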
History
Early optical character recognition may be traced to technologies involving telegraphy and creating reading devices for the blind. In 1914, Emanuel Goldberg developed a machine that read characters and converted them into standard telegraph code. Concurrently, Edmund Fournier d'Albe developed the Optophone, a handheld scanner that when moved across a printed page, produced tones that corresponded to specific letters or characters.
In the late 1920s and into the 1930s, Emanuel Goldberg developed what he called a "Statistical Machine" for searching microfilm archives us |
https://en.wikipedia.org/wiki/Wilhelmina%27s%20bird-of-paradise | Wilhelmina's bird-of-paradise, also known as Wilhelmina's riflebird, is a bird in the family Paradisaeidae that is presumed to be an intergeneric hybrid between a greater lophorina and magnificent bird-of-paradise.
History
Three adult male specimens are known of this hybrid, held in the American Museum of Natural History, Royal Natural History Museum of the Netherlands, and the State Museum of Zoology, Dresden. Two of the specimens come from the Arfak Mountains of north-western New Guinea, while the other is of unknown provenance. The bird was named as a species by Adolf Bernhard Meyer in 1894 after Wilhelmina, his wife who joined him during his travels in 1870–1872.
Notes |
https://en.wikipedia.org/wiki/Factorial%20moment%20measure | In probability and statistics, a factorial moment measure is a mathematical quantity, function or, more precisely, measure that is defined in relation to mathematical objects known as point processes, which are types of stochastic processes often used as mathematical models of physical phenomena representable as randomly positioned points in time, space or both. Moment measures generalize the idea of factorial moments, which are useful for studying non-negative integer-valued random variables.
The first factorial moment measure of a point process coincides with its first moment measure or intensity measure, which gives the expected or average number of points of the point process located in some region of space. In general, if the number of points in some region is considered as a random variable, then the factorial moment measure of this region is the factorial moment of this random variable. Factorial moment measures completely characterize a wide class of point processes, which means they can be used to uniquely identify a point process.
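In standard point-process notation (with N the point process and the superscript ≠ marking summation over tuples of pairwise distinct points), this definition can be written out explicitly:

```latex
% n-th factorial moment measure of a point process N:
M^{(n)}(B_1 \times \cdots \times B_n)
  = \mathbb{E}\!\left[ \sum^{\neq}_{x_1, \dots, x_n \in N}
      \mathbf{1}_{B_1}(x_1) \cdots \mathbf{1}_{B_n}(x_n) \right].
% For a single region B (B_1 = \cdots = B_n = B) this reduces to the
% n-th factorial moment of the random count N(B):
M^{(n)}(B^n)
  = \mathbb{E}\big[ N(B)\,(N(B)-1)\cdots(N(B)-n+1) \big].
```

The n = 1 case recovers the intensity measure mentioned above, since the factorial moment of order one is just the mean.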
If a factorial moment measure is absolutely continuous with respect to the Lebesgue measure, then it is said to have a density (which is a generalized form of a derivative), and this density is known by a number of names, such as factorial moment density, product density, coincidence density, joint intensity, correlation function, or multivariate frequency spectrum. The first and second factorial moment densities of a point process are used in the definition of the pair correlation function, which gives a way to statistically quantify the strength of interaction or correlation between points of a point process.
Factorial moment measures serve as useful tools in the study of point processes as well as the related fields of stochastic geometry and spatial statistics, which are applied in various scientific and engineering disciplines such as biology, geology, physics, and telecommunications.
Point process nota |
https://en.wikipedia.org/wiki/Trigger%20%28horse%29 | Trigger (July 4, 1934 – July 3, 1965) was a palomino horse made famous in American Western films with his owner and rider, cowboy star Roy Rogers.
Pedigree
The original Trigger, named Golden Cloud, was born in San Diego, California. Though often mistaken for a Tennessee Walking Horse, his sire was a Thoroughbred and his dam a grade (unregistered) mare that, like Trigger, was a palomino. Movie director William Witney, who directed Roy and Trigger in many of their movies, claimed a slightly different lineage, that his sire was a "registered" palomino stallion (though no known palomino registry existed at the time of Trigger's birth) and his dam was by a Thoroughbred and out of a "cold-blood" mare. Horses other than Golden Cloud also portrayed "Trigger" over the years, none of which was related to Golden Cloud; the two most prominent were palominos known as "Little Trigger" and "Trigger Jr." (a Tennessee Walking Horse listed as "Allen's Gold Zephyr" in the Tennessee Walking Horse registry). Though Trigger remained a stallion his entire life, he was never bred and has no descendants. Rogers used "Trigger Jr."/"Allen's Golden Zephyr", though, at stud for many years, and the horse named "Triggerson" that actor Val Kilmer led on stage as a tribute to Rogers and his cowboy peers during the Academy Awards show in March 1999 was reportedly a grandson of Trigger Jr.
Film career
Golden Cloud made an early appearance as the mount of Maid Marian, played by Olivia de Havilland in The Adventures of Robin Hood (1938). A short while later, when Roy Rogers was preparing to make his first movie in a starring role, he was offered a choice of five rented "movie" horses to ride and chose Golden Cloud. Rogers bought him eventually in 1943 and renamed him Trigger for his quickness of both foot and mind. Trigger learned 150 trick cues and could walk 50 ft (15 m) on his hind legs (according to sources close to Rogers). They were said to have run out of places to cue Trigger. Trigger bec |
https://en.wikipedia.org/wiki/Submandibular%20ganglion | The submandibular ganglion (or submaxillary ganglion in older texts) is part of the human autonomic nervous system. It is one of four parasympathetic ganglia of the head and neck. (The others are the otic ganglion, pterygopalatine ganglion, and ciliary ganglion).
Location and relations
The submandibular ganglion is small and fusiform in shape. It is situated above the deep portion of the submandibular gland, on the hyoglossus muscle, near the posterior border of the mylohyoid muscle.
The ganglion 'hangs' from the lower border of the lingual nerve (itself a branch of the mandibular nerve, CN V3) by two nerve filaments, one anterior and one posterior. Through the posterior filament it receives a branch from the chorda tympani nerve, which runs in the sheath of the lingual nerve.
Fibers
Like other parasympathetic ganglia of the head and neck, the submandibular ganglion is the site of synapse for parasympathetic fibers and carries other types of nerve fiber that do not synapse in the ganglion. In summary, the fibers carried in the ganglion are:
Sympathetic fibers from the external carotid plexus, via the facial nerve and its branches. These do not synapse in this ganglion.
Preganglionic parasympathetic fibers from the superior salivatory nucleus of the Pons, via the chorda tympani and lingual nerve, which synapse at this ganglion.
Postganglionic parasympathetic fibers to the oral mucosa and the submandibular and sublingual salivary glands. They are secretomotor to these glands. Some of the postganglionic fibers reach the sublingual gland after they re-enter the lingual nerve.
Additional images |
https://en.wikipedia.org/wiki/Viking%20lander%20biological%20experiments | In 1976 two identical Viking program landers each carried four types of biological experiments to the surface of Mars. The first successful Mars landers, Viking 1 and Viking 2, then carried out experiments to look for biosignatures of microbial life on Mars. The landers each used a robotic arm to pick up and place soil samples into sealed test containers on the craft.
The two landers carried out the same tests at two places on Mars' surface, Viking 1 near the equator and Viking 2 further north.
The experiments
The four experiments below are presented in the order in which they were carried out by the two Viking landers. The biology team leader for the Viking program was Harold P. Klein (NASA Ames).
Gas chromatograph — mass spectrometer
A gas chromatograph — mass spectrometer (GCMS) is a device that separates vapor components chemically via a gas chromatograph and then feeds the result into a mass spectrometer, which measures the molecular weight of each chemical. As a result, it can separate, identify, and quantify a large number of different chemicals. The GCMS (PI: Klaus Biemann, MIT) was used to analyze the components of untreated Martian soil, and particularly those components that are released as the soil is heated to different temperatures. It could measure molecules present at a level of a few parts per billion.
The GCMS measured no significant amount of organic molecules in the Martian soil. In fact, Martian soils were found to contain less carbon than lifeless lunar soils returned by the Apollo program. This result was difficult to explain if Martian bacterial metabolism was responsible for the positive results seen by the Labeled Release experiment (see below). A 2011 astrobiology textbook notes that this was the decisive factor due to which "For most of the Viking scientists, the final conclusion was that the Viking missions failed to detect life in the Martian soil."
Experiments conducted in 2008 by the Phoenix lander discovered the presence o |
https://en.wikipedia.org/wiki/Lichen%20stromatolite | Lichen stromatolites are laminar calcretes that are proposed as being formed by a sequence of repetitions of induration followed by lichen colonization. Endolithic lichens inhabit areas between grains of rock, chemically and physically weathering that rock, leaving a rind, which is then indurated (hardened), then recolonized.
See also
Stromatolite |
https://en.wikipedia.org/wiki/Drude%20model | The Drude model of electrical conduction was proposed in 1900 by Paul Drude to explain the transport properties of electrons in materials (especially metals). Basically, Ohm's law was well established and stated that the current J and voltage V driving the current are related to the resistance R of the material. The inverse of the resistance is known as the conductance. When we consider a metal of unit length and unit cross sectional area, the conductance is known as the conductivity, which is the inverse of resistivity. The Drude model attempts to explain the resistivity of a conductor in terms of the scattering of electrons (the carriers of electricity) by the relatively immobile ions in the metal that act like obstructions to the flow of electrons.
The model, which is an application of kinetic theory, assumes that the microscopic behaviour of electrons in a solid may be treated classically, much like in a pinball machine, with a sea of constantly jittering electrons bouncing and re-bouncing off heavier, relatively immobile positive ions.
In modern terms this is reflected in the valence electron model where the sea of electrons is composed of the valence electrons only, and not the full set of electrons available in the solid, and the scattering centers are the inner shells of tightly bound electrons to the nucleus. The scattering centers had a positive charge equivalent to the valence number of the atoms.
This similarity, together with some computational errors in the Drude paper, ended up providing a reasonable qualitative theory of solids, capable of making good predictions in certain cases and giving completely wrong results in others.
Whenever attempts were made to give more substance and detail to the nature of the scattering centers, the mechanics of scattering, and the meaning of the scattering length, these attempts ended in failure.
The scattering lengths computed in the Drude model, are of the order of 10 to 100 inter-atomic distances, and also |
https://en.wikipedia.org/wiki/Riverstone%20Networks | Riverstone Networks, was a provider of networking switching hardware based in Santa Clara, California. Originally part of Cabletron Systems, and based on an early acquisition of YAGO, it was one of the many Gigabit Ethernet startups in the mid-1990s. It is now a part of Alcatel-Lucent and its operations are being wound down via a Chapter 11 filing by their current owners.
Company history
7 February 2006 - Riverstone's partner Lucent Technologies signed an Asset Purchase agreement to acquire Riverstone Networks
21 March 2006 - Lucent Technologies wins the auction for Riverstone Networks over rival Ericsson. The final price was $207 million
18 April 2006 - Lucent Technologies began the process of a merger of equals with Alcatel
1 December 2006 - Lucent Technologies completed the process of a merger of equals with Alcatel. Assets of Riverstone Networks are now part of Alcatel-Lucent
Products
All of Riverstone Networks products were geared towards IP over Ethernet, often for a Metro Ethernet solution. All the products were multilayer switches (or switch-routers) and specialized in MPLS VPNs.
15000 Family
The 15000 Family (referred to as the 15K) differed from the RS family as the 15K is not flow-based. Flow-based routers use the main CPU to process new flows and packets through the switch. The 15K differed by letting the line card processors do the work for the network traffic, leaving the main CPU to work on the system itself. This type of network processing is similar to Cisco's dCEF.
The 15K products were based on a different operating system than other Riverstone products, called ROS-X. It was designed to be modular and more like the common command line interface of Cisco.
15008 - The highest-performance product from Riverstone. It supported a 96-port 10/100 Ethernet card, 12- or 24-port 1 Gigabit Ethernet cards, and 1- or 2-port 10 Gigabit Ethernet cards. Support for ATM and PoS was planned.
15100/15200 - Designed with the same architecture and operating s |
https://en.wikipedia.org/wiki/Approximation | An approximation is anything that is intentionally similar but not exactly equal to something else.
Etymology and usage
The word approximation is derived from Latin approximatus, from proximus meaning very near and the prefix ad- (ad- before p becomes ap- by assimilation) meaning to. Words like approximate, approximately and approximation are used especially in technical or scientific contexts. In everyday English, words such as roughly or around are used with a similar meaning. It is often found abbreviated as approx.
The term can be applied to various properties (e.g., value, quantity, image, description) that are nearly, but not exactly correct; similar, but not exactly the same (e.g., the approximate time was 10 o'clock).
Although approximation is most often applied to numbers, it is also frequently applied to such things as mathematical functions, shapes, and physical laws.
In science, approximation can refer to using a simpler process or model when the correct model is difficult to use. An approximate model is used to make calculations easier. Approximations might also be used if incomplete information prevents use of exact representations.
The type of approximation used depends on the available information, the degree of accuracy required, the sensitivity of the problem to this data, and the savings (usually in time and effort) that can be achieved by approximation.
Mathematics
Approximation theory is a branch of mathematics, a quantitative part of functional analysis. Diophantine approximation deals with approximations of real numbers by rational numbers.
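As a concrete illustration of rational approximation, Python's standard library can find the best fraction with a bounded denominator, and the classic approximations of π fall out directly:

```python
# Rational approximation in the spirit of Diophantine approximation:
# Fraction.limit_denominator returns the fraction closest to the given
# value among all fractions whose denominator does not exceed the bound.
from fractions import Fraction
import math

print(Fraction(math.pi).limit_denominator(10))    # → 22/7
print(Fraction(math.pi).limit_denominator(1000))  # → 355/113
```

Raising the denominator bound trades simplicity of the fraction for accuracy of the approximation, which is exactly the trade-off the surrounding text describes.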
Approximation usually occurs when an exact form or an exact numerical number is unknown or difficult to obtain. However, some known form may exist and may be able to represent the real form so that no significant deviation can be found. For example, 1.5 × 10^6 means that the true value of something being measured is 1,500,000 to the nearest hundred thousand (so the actual value is somewhere between |
https://en.wikipedia.org/wiki/Coherent%20potential%20approximation | The coherent potential approximation (or CPA) is a method, in physics, of finding the Green's function of an effective medium. It is a useful concept in understanding how sound waves scatter in a material which displays spatial inhomogeneity.
One version of the CPA is an extension to random materials of the muffin-tin approximation, used to calculate electronic band structure in solids. A variational implementation of the muffin-tin approximation to crystalline solids using Green's functions was suggested by Korringa and by Kohn and Rostoker, and is often referred to as the KKR method. For random materials, the theory is applied by the introduction of an ordered lattice of effective potentials to replace the varying potentials in the random material. This approach is called the KKR coherent potential approximation. |
https://en.wikipedia.org/wiki/General%20Graphics%20Interface | General Graphics Interface (GGI) was a project that aimed to develop a reliable, stable and fast computer graphics system that works everywhere. The intent was to allow for any program using GGI to run on any computing platform supported by it, requiring at most a recompilation. GGI is free and open-source software, subject to the requirements of the MIT License.
The GGI project, and its related projects such as KGI, are generally acknowledged to be dead.
Goals
The project was originally started to make switching back and forth between virtual consoles, svgalib, and the X display server subsystems on Linux more reliable. The goals were:
Portability through a flexible and extensible API for the applications. This avoids bloat in the applications by only getting what they use.
Portability in cross-platform and in backends
Security in the sense of requiring as few privileges as possible
The GGI framework is implemented by a set of portable user-space libraries, with an array of different backends or targets (e.g. Linux framebuffer, X11, Quartz, DirectX), of which the two most fundamental are LibGII (for input-handling) and LibGGI (for graphical output). All other packages add features to these core libraries, and so depend on one or both of them.
Some targets talk to other targets. These are called pseudo targets. Pseudo targets can be combined and work like a pipeline.
One example:
display-palemu, for example, emulates palette mode on truecolor modes. This allows users to run applications in palette mode even on machines where no palette mode would be available otherwise. display-tile splits large virtual display into many smaller pieces. You can spread them on multiple monitors or even forward them over a network.
History
Andreas Beck and Steffen Seeger founded The GGI Project in 1994 after some experimental precursors that were called "scrdrv".
Development of scrdrv was motivated by the problems caused by coexisting but not very well cooperating graphics env |
https://en.wikipedia.org/wiki/Sackler%20Prize | The Sackler Prize is named for the Sackler family and can indicate any of the following three awards established by Raymond Sackler and his wife Beverly Sackler currently bestowed by the Tel Aviv University. The Sackler family is known for its role in the opioid epidemic in the United States, has been the subject of numerous lawsuits and critical media coverage, and been dubbed the "most evil family in America", and "the worst drug dealers in history". The family has engaged in extensive efforts to promote the Sackler name, that has been characterized as reputation laundering. In 2023 the Sackler family's name was removed from the name of the Tel Aviv University Faculty of Medicine.
Sackler Prize in the Physical Sciences
The Raymond and Beverly Sackler International Prize in the Physical Sciences is a $40,000 prize in the disciplines of either physics or chemistry awarded by Tel Aviv University each year for young scientists who have made outstanding and fundamental contributions in their fields.
There is an age limit for all nominees. Nominations for the Sackler Prize can be made by individuals in any of the following categories:
1) Faculty of Physics, Astronomy or Chemistry departments in institutions of higher learning worldwide.
2) Presidents, Rectors, vice-presidents, Provosts and Deans, of institutions of higher learning worldwide.
3) Directors of laboratories worldwide.
4) Sackler Prize laureates.
For 2008, the age limit was raised to 45 and the prize money to $50,000.
Winners
Source: Chemistry – Tel Aviv University
Physics – Tel Aviv University
2000 prize for Physics (Theoretical High Energy Physics): Michael R. Douglas (Rutgers University) and Juan Martin Maldacena (Institute for Advanced Study, Princeton), for work "beyond the 1975 synthesis known as the 'Standard Model' and within the framework of (supersymmetrical) String or M-theory."
2001 prize for Chemistry (Physical Chemistry of Advanced Materials): Moungi G. Bawendi (MIT) and James R. Hea |
https://en.wikipedia.org/wiki/Molecular%20recognition%20feature | Molecular recognition features (MoRFs) are small (10-70 residues) intrinsically disordered regions in proteins that undergo a disorder-to-order transition upon binding to their partners. MoRFs are implicated in protein-protein interactions, which serve as the initial step in molecular recognition. MoRFs are disordered prior to binding to their partners, whereas they form a common 3D structure after interacting with their partners. As MoRF regions tend to resemble disordered proteins with some characteristics of ordered proteins, they can be classified as existing in an extended semi-disordered state.
Categorization
MoRFs can be separated into four categories according to the shape they form once bound to their partners.
The categories are:
α-MoRFs (when they form alpha-helices)
β-MoRFs (when they form beta-sheets)
irregular-MoRFs (when they form no regular secondary structure)
complex-MoRFs (combinations of the above categories)
MoRF predictors
Determining protein structures experimentally is a very time-consuming and expensive process. Therefore, recent years have seen a focus on computational methods for predicting protein structure and structural characteristics. Some aspects of protein structure, such as secondary structure and intrinsic disorder, have benefited greatly from applications of deep learning on an abundance of annotated data. However, computational prediction of MoRF regions remains a challenging task due to the limited availability of annotated data and the rarity of the MoRF class itself. Most current methods have been trained and benchmarked on the sets released by the authors of MoRFPred in 2012, as well as another set released by the authors of MoRFChibi based on experimentally-annotated MoRF data. The table below, adapted from, details some methods currently available for MoRF prediction (as well as related problems).
Databases
mpMoRFsDB
Mutual Folding Induced by Binding (MFIB) database |
https://en.wikipedia.org/wiki/Sexual%20antagonistic%20coevolution | Sexual antagonistic co-evolution is the relationship between males and females where sexual morphology changes over time to counteract the opposite's sex traits to achieve the maximum reproductive success. This has been compared to an arms race between sexes. In many cases, male mating behavior is detrimental to the female's fitness. For example, when insects reproduce by means of traumatic insemination, it is very disadvantageous to the female's health. During mating, males will try to inseminate as many females as possible, however, the more times a female's abdomen is punctured, the less likely she is to survive. Females that possess traits to avoid multiple matings will be more likely to survive, resulting in a change in morphology. In males, genitalia is relatively simple and more likely to vary among generations compared to female genitalia. This results in a new trait that females have to avoid in order to survive.
Additionally, sexual antagonistic co-evolution can be the cause of rapid evolution, as is thought to be the case in seminal proteins known as Acps in species of Drosophila melanogaster. While Acps facilitate the mutually beneficial outcome of increased progeny production, several Acps have detrimental effects on female fitness as they are toxic and shorten her lifespan. This leads to antagonistic co-evolution, as the female must evolve in order to defend herself. When female Drosophila melanogaster are experimentally prevented from co-evolving with males, males rapidly adapt to the static female phenotype. This male adaptation leads to a reduction in female survivorship, which is mediated by an increased rate of remating and increased toxicity of Acps in seminal fluid. Since non-reproductive proteins are not subject to the same evolutionary pressure as Acps, they do not evolve nearly as quickly. Consistent with the arms race theory, DNA analyses reveal a two-fold increase in Acp divergence relative to non-reproductive proteins.
Female co-evolution
Fo |