| source | text |
|---|---|
https://en.wikipedia.org/wiki/Structures%20built%20by%20animals | Structures built by non-human animals, often called animal architecture, are common in many species. Examples of animal structures include termite mounds, ant hills, wasp and beehives, burrow complexes, beaver dams, elaborate nests of birds, and webs of spiders.
Often, these structures incorporate sophisticated features such as temperature regulation, traps, bait, ventilation, special-purpose chambers and many other features. They may be created by individuals or by complex societies of social animals in which different forms carry out specialized roles. These constructions may arise from the complex building behaviour of animals, as in the case of night-time nests for chimpanzees; from inbuilt neural responses, which feature prominently in the construction of bird nests; from hormone release, as in the case of domestic sows; or as emergent properties of simple instinctive responses and interactions, as exhibited by termites; or from combinations of these. The process of building such structures may involve learning and communication, and in some cases, even aesthetics. Tool use may also be involved in building structures by animals.
Building behaviour is common in many non-human mammals, birds, insects and arachnids. It is also seen in a few species of fish, reptiles, amphibians, molluscs, urochordates, crustaceans, annelids and some other arthropods. It is virtually absent from all the other animal phyla.
Functions
Animals create structures primarily for three reasons:
to create protected habitats, i.e. homes.
to catch prey and for foraging, i.e. traps.
for communication between members of the species (intra-specific communication), i.e. display.
Animals primarily build habitat for protection from extreme temperatures and from predation. Constructed structures raise physical problems which need to be resolved, such as humidity control or ventilation, which increases the complexity of the structure. Over time, through evolution, animals use shelters for ot |
https://en.wikipedia.org/wiki/Trondhjem%20Biological%20Station | Trondhjem Biological Station is a marine biological research facility at the Norwegian University of Science and Technology. It is located by the Trondheimsfjord in Byneset, west of the city centre of Trondheim.
It was founded in 1900. It was directly subordinate to the Norwegian national government for the first fifty years of existence, and from 1951 to 1984 it belonged to the Royal Norwegian Society of Sciences and Letters Museum. |
https://en.wikipedia.org/wiki/The%20Princeton%20Companion%20to%20Mathematics | The Princeton Companion to Mathematics is a book providing an extensive overview of mathematics that was published in 2008 by Princeton University Press. Edited by Timothy Gowers with associate editors June Barrow-Green and Imre Leader, it has been noted for the high caliber of its contributors. The book was the 2011 winner of the Euler Book Prize of the Mathematical Association of America, given annually to "an outstanding book about mathematics".
Topics and organization
The book concentrates primarily on modern pure mathematics rather than applied mathematics, although it does also cover both applications of mathematics and the mathematics that relates to those applications;
it provides a broad overview of the significant ideas and developments in research mathematics. It is organized into eight parts:
An introduction to mathematics, outlining the major areas of study, key definitions, and the goals and purposes of mathematical research.
An overview of the history of mathematics, in seven chapters including the development of important concepts such as number, geometry, mathematical proof, and the axiomatic approach to the foundations of mathematics. A chronology of significant events in mathematical history is also provided later in the book.
Three core sections, totalling approximately 600 pages. The first of these sections provides an alphabetized set of articles on 99 specific mathematical concepts such as the axiom of choice, expander graphs, and Hilbert space. The second core section includes long surveys of 26 branches of research mathematics such as algebraic geometry and combinatorial group theory. The third describes 38 important mathematical problems and theorems such as the four color theorem, the Birch and Swinnerton-Dyer conjecture, and the Halting problem.
A collection of biographies of nearly 100 famous deceased mathematicians, arranged chronologically, also including a history of Nicolas Bourbaki's pseudonymous collaboration.
Essays describing th |
https://en.wikipedia.org/wiki/Vacuous%20truth | In mathematics and logic, a vacuous truth is a conditional or universal statement (a universal statement that can be converted to a conditional statement) that is true because the antecedent cannot be satisfied.
It is sometimes said that a statement is vacuously true because it does not really say anything. For example, the statement "all cell phones in the room are turned off" will be true when no cell phones are in the room. In this case, the statement "all cell phones in the room are turned on" would also be vacuously true, as would the conjunction of the two: "all cell phones in the room are turned on and turned off", which would otherwise be incoherent and false.
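The same convention appears in programming languages: a universal claim over an empty collection evaluates to true. As a small illustration (the phone list is of course invented), Python's `all()` behaves exactly this way:

```python
# Python's all() implements universal quantification and, following the
# convention of vacuous truth, returns True on an empty iterable.
room_phones = []  # no cell phones in the room

all_off = all(phone == "off" for phone in room_phones)
all_on = all(phone == "on" for phone in room_phones)

print(all_off)  # True (vacuously)
print(all_on)   # True as well, with no contradiction: both are vacuous
```

Both statements hold simultaneously precisely because there is nothing in the domain to falsify either of them.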
More formally, a relatively well-defined usage refers to a conditional statement (or a universal conditional statement) with a false antecedent. One example of such a statement is "if Tokyo is in France, then the Eiffel Tower is in Bolivia".
Such statements are considered vacuous truths because the fact that the antecedent is false prevents using the statement to infer anything about the truth value of the consequent. In essence, a conditional statement based on the material conditional is true when the antecedent ("Tokyo is in France" in the example) is false, regardless of whether the consequent ("the Eiffel Tower is in Bolivia" in the example) is true or false, because the material conditional is defined in that way.
Examples common to everyday speech include conditional phrases used as idioms of improbability like "when hell freezes over..." and "when pigs can fly...", indicating that not before the given (impossible) condition is met will the speaker accept some respective (typically false or absurd) proposition.
In pure mathematics, vacuously true statements are not generally of interest by themselves, but they frequently arise as the base case of proofs by mathematical induction. This notion has relevance in pure mathematics, as well as in any other field that uses cl |
https://en.wikipedia.org/wiki/Fontaine%E2%80%93Mazur%20conjecture | In mathematics, the Fontaine–Mazur conjectures, introduced by Jean-Marc Fontaine and Barry Mazur, concern the question of when p-adic representations of Galois groups of number fields can be constructed from representations on étale cohomology groups of varieties. Some cases of this conjecture in dimension 2 have already been proved. |
https://en.wikipedia.org/wiki/Wheel%20theory | A wheel is a type of algebra (in the sense of universal algebra) where division is always defined. In particular, division by zero is meaningful. The real numbers can be extended to a wheel, as can any commutative ring.
The term wheel is inspired by the topological picture ⊙ of the real projective line together with an extra point ⊥ (bottom element) such that ⊥ = 0/0.
A wheel can be regarded as the equivalent of a commutative ring (and semiring) in which addition and multiplication form not a group but, respectively, a commutative monoid and a commutative monoid with involution.
Definition
A wheel is an algebraic structure (W, 0, 1, +, ·, /), in which
W is a set,
0 and 1 are elements of that set,
+ and · are binary operations,
/ is a unary operation,
satisfying the following properties:
+ and · are each commutative and associative, and have 0 and 1 as their respective identities.
//x = x (/ is an involution)
/(xy) = /x/y (/ is multiplicative)
Algebra of wheels
Wheels replace the usual division as a binary operation with multiplication together with a unary operation / applied to one argument, similar (but not identical) to the multiplicative inverse x⁻¹, such that a/b becomes shorthand for a · /b = /b · a; but /a is not in general a multiplicative inverse of a, and the rules of algebra are modified such that
0x ≠ 0 in the general case
x/x ≠ 1 in the general case, as /x is not the same as the multiplicative inverse of x.
Other identities that may be derived are
0x + 0y = 0xy
x − x = 0x²
where the negation −x is defined by −x = (−1)x and x − y = x + (−y) if there is an element −1 such that 1 + (−1) = 0 (thus in the general case x − x ≠ 0).
However, for values of x satisfying 0x = 0 and 0/x = 0, we get the usual x/x = 1 and x − x = 0.
If negation can be defined as above, then the subset {x | 0x = 0} is a commutative ring, and every commutative ring is such a subset of a wheel. If x is an invertible element of the commutative ring, then x⁻¹ = /x. Thus, whenever x⁻¹ makes sense, it is equal to /x, but /x is always defined, even when x = 0.
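As a concrete illustration, the following minimal Python sketch (not from the article; the operation tables are inferred from arithmetic on formal fractions and should be treated as an assumption) models the wheel extending the real numbers by the two extra points ∞ = /0 and ⊥ = 0/0:

```python
# Sketch of the wheel over the reals: R plus INF (= /0) and BOT (= 0/0).
# The operation tables below are this sketch's assumption, inferred from
# arithmetic on formal fractions, not code taken from the article.
INF = "inf"   # the point /0
BOT = "bot"   # the bottom element 0/0

def rec(x):
    """The unary operation /x."""
    if x == BOT:
        return BOT
    if x == INF:
        return 0.0
    if x == 0:
        return INF
    return 1.0 / x

def mul(x, y):
    if BOT in (x, y):
        return BOT
    if INF in (x, y):
        return BOT if 0 in (x, y) else INF   # 0 * /0 = 0/0
    return x * y

def add(x, y):
    if BOT in (x, y):
        return BOT
    if x == INF and y == INF:
        return BOT                           # 1/0 + 1/0 = 0/0
    if INF in (x, y):
        return INF
    return x + y

print(mul(4.0, rec(4.0)))   # 1.0
print(mul(0.0, rec(0.0)))   # bot  (0/0, not 1)
```

Note how x · /x recovers the usual 1 exactly when 0x = 0 and 0/x = 0 hold, i.e. for finite nonzero x.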
Examples
Wheel of fractions
Let A be a commutative ring, and let S be a multiplicative submonoid of A. Define the congruence relation ∼S on A × A via
(x₁, x₂) ∼S (y₁, y₂) means that there exist sₓ, s_y ∈ S such th |
https://en.wikipedia.org/wiki/Glossary%20of%20cannabis%20terms | Terms related to cannabis include:
Z
Zig-Zag – brand of rolling papers made famous by the Afroman song "Crazy Rap"
See also
List of anti-cannabis organizations
List of cannabis companies
List of cannabis-related lists
List of cannabis rights leaders
List of cannabis rights organizations
List of names for cannabis
List of names for cannabis strains
List of slang names for cannabis |
https://en.wikipedia.org/wiki/Line%20of%20greatest%20slope | In topography, the line of greatest slope is a curve following the steepest slope. In mountain biking and skiing, the line of greatest slope is sometimes called the fall line.
Definition
Mathematically, the line (or path) of greatest slope from a point is determined by the gradient of height, taken as a potential field with respect to an acceleration from the force of gravity. Lines of greatest slope are analogous to lines of force acting to accelerate an object downward at that point. These lines are orthogonal to contour lines. Discounting inertial forces and terrain roughness, a ball rolling down a slope, or water flowing down, will accelerate in the direction of greatest slope.
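A small numerical sketch may help (the height function below is invented for the example): the line of greatest slope follows the gradient of the height field, which is orthogonal to the contour direction.

```python
import numpy as np

# Hypothetical hill: h(x, y) = -(x**2 + 2*y**2), with its peak at the origin.
def grad_h(x, y):
    """Analytic gradient of h; points in the direction of steepest ascent."""
    return np.array([-2.0 * x, -4.0 * y])

p = (1.0, 1.0)
g = grad_h(*p)                         # steepest ascent at p
steepest_descent = -g                  # the direction a ball starts rolling
contour_dir = np.array([-g[1], g[0]])  # rotate 90 degrees: contour tangent

# Lines of greatest slope are orthogonal to contour lines:
print(np.dot(g, contour_dir))          # 0.0
```

The zero dot product confirms the orthogonality property stated above, which is what lets one read contour direction off the fall line in the field.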
Applications
Mountain biking
In mountain biking the line of greatest slope defines the fall line, which is the path a trail will follow to descend a hill or mountain with the shortest path, and will also cause the rider to gain the most velocity (assuming brakes are not used, and other factors such as rolling resistance are equal).
Mountain climbing
In mountain climbing, the line of greatest slope defines the fall line, which is the path a climber will take to gain the most elevation with the shortest possible path.
Map reading
The line of greatest slope has practical significance in map reading. On the terrain it is often far easier to discern, even intuitively obvious, than the line of constant height running along the likely undulating, uneven ground that a contour line represents. But knowing that the greatest-slope vector is orthogonal to the contour line, one can readily deduce the direction of the contour lines from the line of greatest slope. The extent and overall direction of a contour line, to map scale, can only be found on the topographic map.
By noting the corresponding compass bearing, one can then walk along the contour, lining up a hand-held compass with the expected direction and eye-balling the contour line's estim |
https://en.wikipedia.org/wiki/Breast%20pump | A breast pump is a mechanical device that lactating women use to extract milk from their breasts. They may be manual devices powered by hand or foot movements or automatic devices powered by electricity.
History
On June 20, 1854, the United States Patent Office issued Patent No. 11,135 to O.H. Needham for a breast pump. Scientific American (1863) credits L.O. Colbin as the inventor and patent applicant of a breast pump. In 1921–23, engineer and chess master Edward Lasker produced a mechanical breast pump that imitated an infant's sucking action and was regarded by physicians as a marked improvement on existing hand-operated breast pumps, which failed to remove all the milk from the breast. The U.S. Patent Office issued a patent for Lasker's breast pump. In 1956 Einar Egnell published his groundbreaking work, "Viewpoints on what happens mechanically in the female breast during various methods of milk collection", which provided insight into the technical aspects of milk extraction from the breast. Many Egnell SMB breast pumps designed through this research are still in operation over 50 years after publication.
Archaeologists working at a glass factory site in Philadelphia, Pennsylvania, excavated a 19th-century breast pipe that matches breast pumping instruments in period advertisements.
Reasons for use
Breast pumps are used for many reasons. Many parents use them to continue breastfeeding after they return to work. They express their milk at work, which is later bottle-fed to their child by a caregiver. This use of breast milk is widespread in the United States, where paid family leave is one of the shortest in the developed world. American historian Jill Lepore argues that the need for so-called "lactation rooms" and breast pumps is driven by the corporate desire for parents to return to work immediately rather than mothers' wishes or babies' needs.
A breast pump may also be used to stimulate lactation for women with a low milk supply or those who have not jus |
https://en.wikipedia.org/wiki/Impulse%20excitation%20technique | The impulse excitation technique (IET) is a non-destructive material characterization technique to determine the elastic properties and internal friction of a material of interest. It measures the resonant frequencies in order to calculate the Young's modulus, shear modulus, Poisson's ratio and internal friction of predefined shapes like rectangular bars, cylindrical rods and disc shaped samples. The measurements can be performed at room temperature or at elevated temperatures (up to 1700 °C) under different atmospheres.
The measurement principle is based on tapping the sample with a small projectile and recording the induced vibration signal with a piezoelectric sensor, microphone, laser vibrometer or accelerometer. To optimize the results a microphone or a laser vibrometer can be used as there is no contact between the test-piece and the sensor. Laser vibrometers are preferred to measure signals in vacuum. Afterwards, the acquired vibration signal in the time domain is converted to the frequency domain by a fast Fourier transformation. Dedicated software will determine the resonant frequency with high accuracy to calculate the elastic properties based on the classical beam theory.
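The frequency-extraction step can be sketched as follows; the sampling rate, decay constant and resonant frequency below are invented stand-ins for a recorded "ping":

```python
import numpy as np

fs = 50_000                          # sampling rate [Hz] (invented)
t = np.arange(0, 0.2, 1 / fs)        # 0.2 s record
f_res = 1234.0                       # "true" resonant frequency [Hz]
signal = np.exp(-20 * t) * np.sin(2 * np.pi * f_res * t)  # decaying ping

# Convert the time-domain signal to the frequency domain and locate
# the peak of the magnitude spectrum:
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
f_peak = freqs[np.argmax(spectrum)]

print(f_peak)                        # close to 1234 Hz (within one 5 Hz bin)
```

Dedicated IET software refines this peak estimate (e.g. by interpolation or curve fitting) before applying the modulus formulas, but the FFT peak is the core of the measurement.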
Elastic properties
Different resonant frequencies can be excited depending on the position of the support wires, the mechanical impulse and the microphone. The two most important resonant frequencies are the flexural, which is controlled by the Young's modulus of the sample, and the torsional, which is controlled by the shear modulus (for isotropic materials).
For predefined shapes like rectangular bars, discs, rods and grinding wheels, dedicated software calculates the sample's elastic properties using the sample dimensions, weight and resonant frequency (ASTM E1876-15).
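For the rectangular-bar flexure case, ASTM E1876 gives a closed-form estimate of the Young's modulus from the fundamental flexural frequency, E = 0.9465 (m f² / b)(L³ / t³) T₁. The sketch below uses the simplified correction factor T₁ valid for slender bars, and the sample dimensions are invented:

```python
def youngs_modulus_flexure(m, b, L, t, f):
    """ASTM E1876 rectangular-bar estimate.
    m: mass [kg], b: width [m], L: length [m], t: thickness [m],
    f: fundamental flexural resonant frequency [Hz]; returns E in Pa.
    Uses the simplified correction factor T1 for slender bars (L/t >= 20)."""
    T1 = 1.0 + 6.585 * (t / L) ** 2
    return 0.9465 * (m * f ** 2 / b) * (L ** 3 / t ** 3) * T1

# Invented sample: a 100 x 20 x 5 mm bar of 78 g ringing near 2.6 kHz
E = youngs_modulus_flexure(m=0.078, b=0.020, L=0.100, t=0.005, f=2600.0)
print(round(E / 1e9))                # 203 (GPa), a steel-like value
```

The same standard provides analogous expressions for the shear modulus from the torsional frequency, from which Poisson's ratio follows.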
Flexure mode
The first figure gives an example of a test-piece vibrating in the flexure mode. This induced vibration is also referred to as the out-of-plane vibration mode. The in-plane vibration will be excited by turning |
https://en.wikipedia.org/wiki/Anti%E2%80%93computer%20forensics | Anti–computer forensics or counter-forensics are techniques used to obstruct forensic analysis.
Definition
Anti-forensics has only recently been recognized as a legitimate field of study.
One of the more widely known and accepted definitions comes from Marcus Rogers. One of the earliest detailed presentations of anti-forensics, in Phrack Magazine in 2002, defines anti-forensics as "the removal, or hiding, of evidence in an attempt to mitigate the effectiveness of a forensics investigation".
A more abbreviated definition is given by Scott Berinato in his article entitled, The Rise of Anti-Forensics. "Anti-forensics is more than technology. It is an approach to criminal hacking that can be summed up like this: Make it hard for them to find you and impossible for them to prove they found you." Neither author takes into account using anti-forensics methods to ensure the privacy of one's personal data.
Sub-categories
Anti-forensics methods are often broken down into several sub-categories to make classification of the various tools and techniques simpler. One of the more widely accepted subcategory breakdowns was developed by Dr. Marcus Rogers. He has proposed the following sub-categories: data hiding, artifact wiping, trail obfuscation and attacks against the CF (computer forensics) processes and tools. Attacks against forensics tools directly has also been called counter-forensics.
Purpose and goals
Within the field of digital forensics, there is much debate over the purpose and goals of anti-forensic methods. The conventional wisdom is that anti-forensic tools are purely malicious in intent and design. Others believe that these tools should be used to illustrate deficiencies in digital forensic procedures, digital forensic tools, and forensic examiner education. This sentiment was echoed at the 2005 Blackhat Conference by anti-forensic tool authors, James Foster and Vinnie Liu. They stated that by exposing these issues, forensic investigators will have to work h |
https://en.wikipedia.org/wiki/Lumino%20kinetic%20art | Lumino Kinetic art is a subset and an art historical term in the context of the more established kinetic art, which in turn is a subset of new media art. The historian of art Frank Popper views the evolution of this type of art as evidence of "aesthetic preoccupations linked with technological advancement" and a starting-point in the context of high-technology art. László Moholy-Nagy (1895–1946), a member of the Bauhaus, and influenced by constructivism can be regarded as one of the fathers of Lumino kinetic art. Light sculpture and moving sculpture are the components of his Light-Space Modulator (1922–30), One of the first Light art pieces which also combines kinetic art.
The multiple origins of the term itself involve, as the name suggests, light and movement. The early cybernetic artist Nicolas Schöffer developed walls of light, prisms, and video circuits under the term in the 1950s. Artist-engineer Frank Malina came up with the Lumidyne system of lighting, and his Tableaux mobiles (moving paintings) are an example of Lumino Kinetic art of that period. Later, the artist Nino Calos worked with the term Lumino-kinetic paintings, and György Kepes also experimented with lumino-kinetic works. Ellis D Fogg is likewise associated with the term, as a "lumino kinetic sculptor".
In the 1960s various exhibits involved Lumino Kinetic art, inter alia Kunst-Licht-Kunst at the Stedelijk Van Abbemuseum in Eindhoven in 1966, and Lumière et mouvement at the Musée d'Art Moderne de la Ville de Paris in 1967.
Lumino Kinetic art was also aligned with Op art in the late 1960s because the moving lights were spectacular and psychedelic.
Frank Popper views it as an art historical term in the context of kinetic art; he states that "there is no lumino kinetic art after the early 70s; it stands as a precursor to other contemporary cybernetic, robotic, new media-based arts, and is limited to a very small number of (male) European avant-garde artists (part of the |
https://en.wikipedia.org/wiki/Stanford%20Institute%20for%20Theoretical%20Physics | The Stanford Institute for Theoretical Physics (SITP) is a research institute within the Physics Department at Stanford University. Led by 16 physics faculty members, the institute conducts research in High Energy and Condensed Matter theoretical physics.
Research
Research within SITP includes a strong focus on fundamental questions about the new physics underlying the Standard Models of particle physics and cosmology, and on the nature and applications of our basic frameworks (quantum field theory and string theory) for attacking these questions.
Principal areas of research include:
Biophysics
Condensed matter theory
Cosmology
Formal theory
Physics beyond the standard model
"Precision frontiers"
Quantum computing
Quantum gravity
Central questions include:
What governs particle theory beyond the scale of electroweak symmetry breaking?
How do string theory and holography resolve the basic puzzles of general relativity, including the deep issues arising in black hole physics and the study of cosmological horizons?
Which class of models of inflationary cosmology captures the physics of the early universe, and what preceded inflation?
Can physicists develop new techniques in quantum field theory and string theory to shed light on mysterious phases arising in many contexts in condensed matter physics (notably, in the high temperature superconductors)?
Faculty
Current faculty include:
Savas Dimopoulos, theorist focusing on physics beyond the standard model; winner of Sakurai Prize
Sebastian Doniach, condensed matter physicist
Daniel Fisher, biophysicist
Surya Ganguli, theoretical neuroscientist
Peter Graham, winner of 2017 New Horizons Prize
Sean Hartnoll, AdS/CFT, winner of New Horizons Prize
Patrick Hayden, quantum information theorist
Shamit Kachru, string theorist; Stanford Physics Department chair
Renata Kallosh, noted string theorist
Vedika Khemani, condensed matter theorist
Steven Kivelson, condensed matter theorist
Rober |
https://en.wikipedia.org/wiki/Hero%20of%20Alexandria | Hero of Alexandria (Greek: Hērōn hò Alexandreús; fl. 60 AD), also known as Heron of Alexandria, was a Greek mathematician and engineer who was active in his native city of Alexandria in Egypt during the Roman era. He is often considered the greatest experimenter of antiquity, and his work is representative of the Hellenistic scientific tradition.
Hero published a well-recognized description of a steam-powered device called an aeolipile (sometimes called a "Hero engine"). Among his most famous inventions was a windwheel, constituting the earliest instance of wind harnessing on land. He is said to have been a follower of the atomists. In his work Mechanics, he described pantographs. Some of his ideas were derived from the works of Ctesibius.
In mathematics he is mostly remembered for Heron's formula, a way to calculate the area of a triangle using only the lengths of its sides.
Many of Hero's original writings and designs have been lost, but some of his works were preserved, including in manuscripts from the Eastern Roman Empire and, to a lesser extent, in Latin or Arabic translations.
Life and career
Hero's ethnicity may have been either Greek or Hellenized Egyptian. It is almost certain that Hero taught at the Musaeum which included the famous Library of Alexandria, because most of his writings appear as lecture notes for courses in mathematics, mechanics, physics and pneumatics. Although the field was not formalized until the twentieth century, it is thought that the work of Hero, in particular his automated devices, represented some of the first formal research into cybernetics.
Inventions
Hero described the construction of the aeolipile (a version of which is known as Hero's engine) which was a rocket-like reaction engine and the first-recorded steam engine (although Vitruvius mentioned the aeolipile in De Architectura some 100 years earlier than Hero). It was described almost two millennia before the industrial revolution. Another engine used air from a closed cha |
https://en.wikipedia.org/wiki/Biomarker%20%28medicine%29 | In medicine, a biomarker is a measurable indicator of the severity or presence of some disease state. It may be defined as a "cellular, biochemical or molecular alteration in cells, tissues or fluids that can be measured and evaluated to indicate normal biological processes, pathogenic processes, or pharmacological responses to a therapeutic intervention." More generally a biomarker is anything that can be used as an indicator of a particular disease state or some other physiological state of an organism. According to the WHO, the indicator may be chemical, physical, or biological in nature - and the measurement may be functional, physiological, biochemical, cellular, or molecular.
A biomarker can be a substance that is introduced into an organism as a means to examine organ function or other aspects of health. For example, rubidium chloride is used in isotopic labeling to evaluate perfusion of heart muscle. It can also be a substance whose detection indicates a particular disease state, for example, the presence of an antibody may indicate an infection. More specifically, a biomarker indicates a change in expression or state of a protein that correlates with the risk or progression of a disease, or with the susceptibility of the disease to a given treatment. Biomarkers can be characteristic biological properties or molecules that can be detected and measured in parts of the body like the blood or tissue. They may indicate either normal or diseased processes in the body. Biomarkers can be specific cells, molecules, or genes, gene products, enzymes, or hormones. Complex organ functions or general characteristic changes in biological structures can also serve as biomarkers. Although the term biomarker is relatively new, biomarkers have been used in pre-clinical research and clinical diagnosis for a considerable time. For example, body temperature is a well-known biomarker for fever. Blood pressure is used to determine the risk of stroke. It is also widely known that |
https://en.wikipedia.org/wiki/Hitting%20mechanics | In baseball, hitting mechanics is the study of the biomechanical motion that governs the swing of a baseball player. The goal of biomechanics in baseball training is to study and improve upon the physics involved in hitting, which includes optimizing a player's swing for maximum bat speed or for more time for plate coverage. There is a wide range of batting stances and mechanics developed through individual preference; however, among experienced baseball players, batting mechanics are broadly similar.
Hitting analysis
Hitters have a wide variation of swings, but staying balanced and keeping a stable posture is ultimately the most important aspect of hitting a baseball. If the hitter becomes unbalanced during the swing, the chance of making solid contact with the baseball is very slim. Given balance throughout the swing, bat speed is the next most important aspect: the faster the bat speed, the faster the ball will come off the bat. Researchers have long established that home run hitting depends on swing speed, and intuitively a faster swing will result in the ball traveling farther. A 3–6% increase in bat speed can significantly affect the distance a ball travels after contact in competition (7). In terms of simple physics and mathematics, the conservation of momentum (E1) and a kinematic equation (E2) also reinforce this idea.
(E1): m1 · v1 = m2 · v2 (conservation of momentum)
(E2): d = v0 · t + 0.5 · a · t² (distance under constant acceleration)
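As a toy numerical check of this reasoning (ignoring air drag, with purely illustrative numbers), the vacuum-projectile range grows with the square of exit velocity, so a 5% gain in exit velocity yields roughly 10% more carry:

```python
import math

g = 9.81                       # gravitational acceleration [m/s^2]

def carry(v, launch_deg=30.0):
    """Vacuum-projectile range R = v^2 * sin(2*theta) / g (no drag)."""
    return v ** 2 * math.sin(math.radians(2 * launch_deg)) / g

base = carry(45.0)             # ~45 m/s (~100 mph) exit velocity
fast = carry(45.0 * 1.05)      # 5% faster exit velocity

print(f"{base:.0f} m -> {fast:.0f} m")   # roughly 10% more carry
```

Real batted balls fly far shorter because of drag, but the quadratic dependence on velocity is why a few percent of bat speed matters so much.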
A study used an intensive mathematical program (finite element analysis software) to confirm that ball exit velocity is indeed dependent on linear bat velocity. These findings and observations confirm that a faster swing will be beneficial to a baseball player. In research done by Welsh et al. for the Journal of Orthopaedic and Sports Physical Therapy, they found that every baseball p |
https://en.wikipedia.org/wiki/Integral%20nonlinearity | Integral nonlinearity (INL) is a commonly used measure of performance in digital-to-analog (DAC) and analog-to-digital (ADC) converters. In DACs, it is a measure of the deviation between the ideal output value and the actual measured output value for a certain input code. In ADCs, it is the deviation between the ideal input threshold value and the measured threshold level of a certain output code. This measurement is performed after offset and gain errors have been compensated.
The ideal transfer function of a DAC or ADC is a straight line. The INL measurement depends on what line is chosen as ideal. One common option is the line that connects the endpoints of the transfer function, in other words, the line connecting the smallest and largest measured input/output value. An alternative is to use a best fit line, where one minimizes the average (or alternatively the mean squared) INL.
While the INL can be measured for every possible input/output code, often only the maximal error is provided when reporting the INL of a converter.
Formulas
INL of a DAC
The INL of code i of a DAC with N output codes is defined as the absolute value of the difference of the real output voltage V(i) minus the ideal value:
INL(i) = |V(i) − Videal(i)|, with Videal(i) = Vmin + i · (Vmax − Vmin)/(N − 1),
where
Vmax and Vmin are the maximum and minimum ideal output voltages of the DAC.
INL of an ADC
For an ADC, the INL of a code is defined as the deviation of the mid-points of the quantization steps between the ideal and real transfer function.
Maximum INL of a converter
When referring to the INL of a converter, usually the maximum INL over all codes is meant. For the line through the endpoints, the INL of a DAC is
INL = max over i of |V(i) − Videal(i)|.
This INL is measured in volts; one can divide it by the ideal LSB voltage VLSB = (Vmax − Vmin)/(N − 1) to get the measurement in LSBs:
INL[LSB] = INL / VLSB.
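The endpoint-line computation can be sketched as follows; the measured DAC voltages are invented for illustration:

```python
import numpy as np

# Invented measured outputs of a 6-code DAC, in volts, for codes 0..5:
measured = np.array([0.00, 0.52, 1.01, 1.45, 2.02, 2.50])
codes = np.arange(len(measured))

# Endpoint line: straight line through the first and last measured codes.
v_lsb = (measured[-1] - measured[0]) / (len(measured) - 1)  # ideal LSB step
ideal = measured[0] + codes * v_lsb
inl_lsb = (measured - ideal) / v_lsb      # per-code INL in LSBs

print(np.round(inl_lsb, 2))               # endpoints are 0 by construction
print(round(float(np.max(np.abs(inl_lsb))), 2))  # the single reported INL
```

By construction the endpoint-line INL vanishes at the first and last codes; a best-fit line would typically report a smaller maximum for the same data.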
See also
Differential nonlinearity
Quantization |
https://en.wikipedia.org/wiki/TORCH%20syndrome | TORCH syndrome is a cluster of symptoms caused by congenital infection with toxoplasmosis, rubella, cytomegalovirus, herpes simplex, and other organisms including syphilis, parvovirus, and Varicella zoster. Zika virus is considered the most recent member of TORCH infections.
TORCH is an acronym for Toxoplasmosis, Other agents, Rubella, Cytomegalovirus, and Herpes simplex.
Signs and symptoms
Though caused by different infections, the signs and symptoms of TORCH syndrome are consistent. They include hepatosplenomegaly (enlargement of the liver and spleen), fever, lethargy, difficulty feeding, anemia, petechiae, purpurae, jaundice, and chorioretinitis. The specific infection may cause additional symptoms.
TORCH syndrome may develop before birth (even causing stillbirth), in the neonatal period, or later in life.
Pathophysiology
TORCH syndrome is caused by in-utero infection with one of the TORCH agents, disrupting fetal development.
Diagnosis
In the newborn, the presence of pathogen-specific IgM is diagnostic, as is the persistence of IgG beyond 6–9 months of age, by which time transplacentally acquired maternal IgG has waned.
Prevention
TORCH syndrome can be prevented by treating an infected pregnant woman, thereby preventing the infection from affecting the fetus.
Treatment
The treatment of TORCH syndrome is mainly supportive and depends on the symptoms present; medication is an option for herpes and cytomegalovirus infections.
Epidemiology
Developing countries are more severely affected by TORCH syndrome. |
https://en.wikipedia.org/wiki/Stack%20register | A stack register is a computer central processor register whose purpose is to keep track of a call stack. On an accumulator-based architecture machine, this may be a dedicated register. On a machine with multiple general-purpose registers, it may be a register that is reserved by convention, such as on the IBM System/360 through z/Architecture architecture and RISC architectures, or it may be a register that procedure call and return instructions are hardwired to use, such as on the PDP-11, VAX, and Intel x86 architectures. Some designs such as the Data General Eclipse had no dedicated register, but used a reserved hardware memory address for this function.
Machines before the late 1960s—such as the PDP-8 and HP 2100—did not have compilers which supported recursion. Their subroutine instructions typically would save the current location in the jump address, and then set the program counter to the next address. While this is simpler than maintaining a stack, since there is only one return location per subroutine code section, there cannot be recursion without considerable effort on the part of the programmer.
A stack machine has 2 or more stack registers — one of them keeps track of a call stack, the other(s) keep track of other stack(s).
Stack registers in x86
In the 8086, the main stack register is called the stack pointer (SP). The stack segment register (SS) is usually used to store information about the memory segment that stores the call stack of the currently executing program. SP points to the current stack top. By default, the stack grows downward in memory, so newer values are placed at lower memory addresses. To push a value onto the stack, the PUSH instruction is used. To pop a value from the stack, the POP instruction is used.
Example: assume that SS = 0x1000 and SP = 0xF820. The current stack top is then the physical address 0x1F820, since under real-mode memory segmentation on the 8086 the physical address is SS × 16 + SP. The next two machine instructions of the program are:
PUSH AX
PUSH BX
These fir |
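The effect of these two pushes can be sketched in a few lines (an illustrative Python model of real-mode addressing, not a full 8086 emulation; the register values follow the example above, and the AX/BX contents are arbitrary):

```python
# Toy model of the 8086 stack: SS:SP addressing with 16-byte paragraphs,
# a downward-growing stack, and 2-byte (word) pushes.
memory = {}
SS = 0x1000           # stack segment, as in the example
SP = 0xF820           # stack pointer: current stack top

def physical(segment, offset):
    # Real-mode address translation: segment * 16 + offset.
    return (segment << 4) + offset

def push(value):
    global SP
    SP = (SP - 2) & 0xFFFF          # stack grows toward lower addresses
    memory[physical(SS, SP)] = value

AX, BX = 0x1234, 0x5678   # arbitrary register contents
push(AX)                  # PUSH AX
push(BX)                  # PUSH BX
print(hex(physical(SS, SP)))  # new stack top: 0x1f81c
```

After the two pushes, SP has moved from 0xF820 down to 0xF81C, and the two words sit at physical addresses 0x1F81E and 0x1F81C.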
https://en.wikipedia.org/wiki/Coupling%20coefficient%20of%20resonators | The coupling coefficient of resonators is a dimensionless value that characterizes interaction of two resonators. Coupling coefficients are used in resonator filter theory. Resonators may be both electromagnetic and acoustic. Coupling coefficients together with resonant frequencies and external quality factors of resonators are the generalized parameters of filters. In order to adjust the frequency response of the filter it is sufficient to optimize only these generalized parameters.
Evolution of the term
This term was first introduced in filter theory by M. Dishal. To some degree it is an analog of the coupling coefficient of coupled inductors. The meaning of this term has been refined many times with progress in the theory of coupled resonators and filters. Later definitions of the coupling coefficient are generalizations or refinements of preceding definitions.
Coupling coefficient considered as a positive constant
Earlier well-known definitions of the coupling coefficient of resonators are given in the monograph by G. Matthaei et al. Note that these definitions are approximate, because they were formulated under the assumption that the coupling between the resonators is sufficiently small. The coupling coefficient for the case of two equal resonators is defined by the formula
k = |f_e² - f_o²| / (f_e² + f_o²)    (1)
where f_e and f_o are the frequencies of the even and odd coupled oscillations of the unloaded pair of resonators. It is obvious that the coupling coefficient defined by formula (1) is a positive constant that characterizes the interaction of the resonators at the resonant frequency.
In the case when an appropriate equivalent network, having an impedance or admittance inverter loaded at both ports with resonant one-port networks, may be matched with the pair of coupled resonators with equal resonant frequencies, the coupling coefficient is defined by the formula
k = K / √(x₁ x₂)    (2)
for series-type resonators and by the formula
k = J / √(b₁ b₂)    (3)
for parallel-type resonators. Here K and J are the impedance-inverter and admittance-inverter parameters, and x₁, x₂ and b₁, b₂ are reac
https://en.wikipedia.org/wiki/Mesencephalic%20nucleus%20of%20trigeminal%20nerve | The mesencephalic nucleus of trigeminal nerve is one of the sensory nuclei of the trigeminal nerve (cranial nerve V). It is located in the brainstem. It receives proprioceptive sensory information from the muscles of mastication and other muscles of the head and neck. It is involved in processing information about the position of the jaw/teeth. It is functionally responsible for preventing excessive biting that may damage the dentition, regulating tooth pain perception, and mediating the jaw jerk reflex (by means of projecting to the motor nucleus of the trigeminal nerve).
The axons of the neuron cell bodies of this nucleus provide sensory innervation to target tissues directly, whereas other sensory nuclei of the trigeminal nerve receive their sensory inputs by synapsing with primary sensory neurons in the trigeminal ganglion.
Anatomy
The MNTN is located in the brainstem, more specifically (sources vary) spanning the length of the midbrain/in the caudal midbrain and rostral pons. It is situated (sources vary) near/within the periaqueductal gray, lateral to the cerebral aqueduct.
The mesencephalic nucleus is the only structure in the central nervous system to contain the cell bodies of first order sensory neurons. The mesencephalic nucleus can thus be considered functionally as a primary sensory ganglion embedded within the brainstem, making it neuroanatomically unique.
Microanatomy
Unlike many nuclei within the central nervous system (CNS), the mesencephalic nucleus contains no chemical synapses; its neurons are instead electrically coupled. Neurons of this nucleus are pseudounipolar, receiving proprioceptive afferent information from the mandible and sending efferent projections to the trigeminal motor nucleus to mediate monosynaptic jaw jerk reflexes.
Development
The pseudounipolar neurons in the mesencephalic nucleus are embryologically derived from the neural crest. However, instead of joining the trigeminal ganglion, the neurons migrate into the brainstem. The MNTN is |
https://en.wikipedia.org/wiki/Numerical%20certification | Numerical certification is the process of verifying the correctness of a candidate solution to a system of equations. In (numerical) computational mathematics, such as numerical algebraic geometry, candidate solutions are computed algorithmically, but there is the possibility that errors have corrupted the candidates. For instance, in addition to the inexactness of input data and candidate solutions, numerical errors or errors in the discretization of the problem may result in corrupted candidate solutions. The goal of numerical certification is to provide a certificate which proves which of these candidates are, indeed, approximate solutions.
Methods for certification can be divided into two flavors: a priori certification and a posteriori certification. A posteriori certification confirms the correctness of the final answers (regardless of how they are generated), while a priori certification confirms the correctness of each step of a specific computation. A typical example of a posteriori certification is Smale's alpha theory, while a typical example of a priori certification is interval arithmetic.
Certificates
A certificate for a root is a computational proof of the correctness of a candidate solution. For instance, a certificate may consist of an approximate solution x, a region R containing x, and a proof that R contains exactly one solution to the system of equations.
In this context, an a priori numerical certificate is a certificate in the sense of correctness in computer science. On the other hand, an a posteriori numerical certificate operates only on solutions, regardless of how they are computed. Hence, a posteriori certification is different from algorithmic correctness – for an extreme example, an algorithm could randomly generate candidates and attempt to certify them as approximate roots using a posteriori certification.
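As a concrete illustration of such a certificate, here is a minimal interval Newton test, a standard a posteriori method (the specific function and candidate interval are hypothetical, and the code assumes the caller supplies a valid enclosure of the derivative over the interval):

```python
# Interval Newton certification sketch: if the Newton image N(X) lies
# strictly inside the interval X, then X contains exactly one root of f.
def interval_newton_certifies(f, df_enclosure, lo, hi):
    m = (lo + hi) / 2.0
    dlo, dhi = df_enclosure(lo, hi)     # enclosure of f' over [lo, hi]
    if dlo <= 0.0 <= dhi:               # enclosure contains 0: test inapplicable
        return False
    # N(X) = m - f(m) / F'(X); with a sign-definite derivative enclosure,
    # the quotient's extreme values are f(m)/dlo and f(m)/dhi.
    q_lo = min(f(m) / dlo, f(m) / dhi)
    q_hi = max(f(m) / dlo, f(m) / dhi)
    n_lo, n_hi = m - q_hi, m - q_lo
    return lo < n_lo and n_hi < hi      # strict containment certifies a unique root

# Candidate region [1.3, 1.5] for a root of f(x) = x^2 - 2 (i.e., sqrt(2)).
cert = interval_newton_certifies(lambda x: x * x - 2.0,
                                 lambda lo, hi: (2.0 * lo, 2.0 * hi),
                                 1.3, 1.5)
print(cert)  # True
```

The returned True is the certificate: the interval [1.3, 1.5] provably contains exactly one solution, regardless of how the candidate was produced.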
A posteriori certification methods
There are a variety of methods for a posteriori certification, including
Alpha |
https://en.wikipedia.org/wiki/Eric%20Urban | Eric Jean-Paul Urban is a professor of mathematics at Columbia University working in number theory and automorphic forms, particularly Iwasawa theory.
Career
Urban received his PhD in mathematics from Paris-Sud University in 1994 under the supervision of Jacques Tilouine. He is a professor of mathematics at Columbia University.
Research
Together with Christopher Skinner, Urban proved many cases of Iwasawa–Greenberg main conjectures for a large class of modular forms. As a consequence, for a modular elliptic curve over the rational numbers, they prove that the vanishing of the Hasse–Weil L-function L(E, s) of E at s = 1 implies that the p-adic Selmer group of E is infinite. Combined with theorems of Gross-Zagier and Kolyvagin, this gave a conditional proof (on the Tate–Shafarevich conjecture) of the conjecture that E has infinitely many rational points if and only if L(E, 1) = 0, a (weak) form of the Birch–Swinnerton-Dyer conjecture. These results were used (in joint work with Manjul Bhargava and Wei Zhang) to prove that a positive proportion of elliptic curves satisfy the Birch–Swinnerton-Dyer conjecture.
Awards
Urban was awarded a Guggenheim Fellowship in 2007.
Selected publications |
https://en.wikipedia.org/wiki/Bathmotropic | Bathmotropic most often refers to modification of the degree of excitability of the heart in particular; more generally, it refers to modification of the degree of excitability (threshold of excitation) of musculature, including the heart. It is especially used to describe the effects of the cardiac nerves on cardiac excitability. Positive bathmotropic effects increase the response of muscle to stimulation, whereas negative bathmotropic effects decrease the response of muscle to stimulation. Taken as a whole, bathmotropy reflects the heart's responsiveness to catecholamines (norepinephrine, epinephrine, dopamine). Conditions that decrease bathmotropy (e.g. hypercarbia) make the heart less responsive to catecholaminergic drugs. A substance that has a bathmotropic effect is known as a bathmotrope.
While bathmotropic, as used herein, has been defined as pertaining to modification of the excitability of the heart, it can also refer to modification of the irritability of heart muscle, and the two terms are frequently used interchangeably.
Etymology
The term "bathmotropic" is derived from the Ancient Greek word βαθμός (bathmós), meaning "step" or "threshold".
History
In 1897 Engelmann introduced four Greek terms to describe key physiological properties of the heart: inotropy, the ability to contract; chronotropy, the ability to initiate an electrical impulse; dromotropy, the ability to conduct an electrical impulse; and bathmotropy, the ability to respond to direct mechanical stimulation. A fifth term, lusitropy, was introduced in 1982 when relaxation was recognized to be an active process, and not simply dissipation of the contractile event. In an article in the American Journal of the Medical Sciences, these five terms were described as the five fundamental properties of the heart.
Physiological explanation
The bathmotropic effect modifies the heart muscle membrane excitability, and thus the ease of generating an action potential. The ease of generating an action potential is related bo |
https://en.wikipedia.org/wiki/Traction%20%28mechanics%29 | Traction, traction force or tractive force is a force used to generate motion between a body and a tangential surface, through the use of either dry friction or shear force.
It has important applications in vehicles, as in tractive effort.
Traction can also refer to the maximum tractive force between a body and a surface, as limited by available friction; when this is the case, traction is often expressed as the ratio of the maximum tractive force to the normal force and is termed the coefficient of traction (similar to the coefficient of friction). It is the force that makes an object move over a surface by overcoming resisting forces such as friction, normal loads (the load acting on the tires along the negative 'Z' axis), air resistance, rolling resistance, etc.
Definitions
Traction can be defined as:
In vehicle dynamics, tractive force is closely related to the terms tractive effort and drawbar pull, though all three terms have different definitions.
Coefficient of traction
The coefficient of traction is defined as the usable force for traction divided by the weight on the running gear (wheels, tracks etc.) i.e.:
usable traction = coefficient of traction × normal force
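Plugging numbers into this relation is straightforward (a sketch; the 0.9 coefficient and 4000 N load are assumed, illustrative values for a tire on dry pavement):

```python
# Usable traction from the defining relation:
# usable traction = coefficient of traction x normal force.
def usable_traction(coefficient, normal_force_newtons):
    return coefficient * normal_force_newtons

# E.g., an assumed coefficient of 0.9 and a 4000 N wheel load:
print(round(usable_traction(0.9, 4000.0), 1))  # 3600.0 (newtons)
```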
Factors affecting coefficient of traction
Traction between two surfaces depends on several factors:
Material composition of each surface.
Macroscopic and microscopic shape (texture; macrotexture and microtexture)
Normal force pressing contact surfaces together.
Contaminants at the material boundary including lubricants and adhesives.
Relative motion of tractive surfaces - a sliding object (one in kinetic friction) has less traction than a non-sliding object (one in static friction).
Direction of traction relative to some coordinate system - e.g., the available traction of a tire often differs between cornering, accelerating, and braking.
For low-friction surfaces, such as off-road or ice, traction can be increased by using traction devices that partially penetrate the surface; these device |
https://en.wikipedia.org/wiki/Even%20code | A binary code is called an even code if the Hamming weight of each of its codewords is even. A cyclic even code has a generator polynomial that includes the minimal polynomial (1 + x) as a factor. Furthermore, a binary code is called doubly even if the Hamming weight of all of its codewords is divisible by 4. An even code which is not doubly even is said to be strictly even.
Examples of doubly even codes are the extended binary Hamming code of block length 8 and the extended binary Golay code of block length 24. These two codes are, in addition, self-dual.
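The doubly even property of the extended Hamming code is easy to verify computationally. The sketch below builds the extended [8,4] Hamming code from one standard generator matrix (the particular matrix is an assumption, since the article does not give one) and checks that every codeword weight is divisible by 4:

```python
from itertools import product

# Generator matrix of the extended [8,4] Hamming code: a [7,4] Hamming
# generator with an overall parity bit appended so every row has even weight.
G = [
    [1, 0, 0, 0, 1, 1, 0, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1, 1, 0],
]

# All 16 codewords: GF(2) linear combinations of the rows of G.
codewords = []
for coeffs in product([0, 1], repeat=4):
    cw = [sum(c * g for c, g in zip(coeffs, col)) % 2 for col in zip(*G)]
    codewords.append(cw)

weights = sorted({sum(cw) for cw in codewords})
assert all(sum(cw) % 4 == 0 for cw in codewords)   # doubly even
print(weights)  # [0, 4, 8]
```

Every weight is 0, 4 or 8, so the code is doubly even (and hence even but not strictly even).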
https://en.wikipedia.org/wiki/Cape%20Provinces | The Cape Provinces of South Africa is a biogeographical area used in the World Geographical Scheme for Recording Plant Distributions (WGSRPD). It is part of the WGSRPD region 27 Southern Africa. The area has the code "CPP". It includes the South African provinces of the Eastern Cape, the Northern Cape and the Western Cape, together making up most of the former Cape Province.
The area includes the Cape Floristic Region, the smallest of the six recognised floral kingdoms of the world, an area of extraordinarily high diversity and endemism, home to more than 9,000 vascular plant species, of which 69 percent are endemic.
See also
Northern Provinces |
https://en.wikipedia.org/wiki/Keynesian%20beauty%20contest | A Keynesian beauty contest describes a beauty contest in which judges are rewarded for selecting the most popular faces among all judges, rather than those they may personally find the most attractive. The idea is often applied in financial markets, whereby investors could profit more by buying whichever stocks they think other investors will buy, rather than the stocks that have fundamentally the best value: when other people buy a stock, they bid up its price, allowing an earlier investor to cash out with a profit, regardless of whether the price increase is supported by the stock's fundamentals.
The concept was developed by John Maynard Keynes and introduced in Chapter 12 of his work, The General Theory of Employment, Interest and Money (1936), to explain price fluctuations in equity markets.
Overview
Keynes described the action of rational agents in a market using an analogy based on a fictional newspaper contest, in which entrants are asked to choose the six most attractive faces from a hundred photographs. Those who picked the most popular faces are then eligible for a prize.
A naive strategy would be to choose the face that, in the opinion of the entrant, is the most handsome. A more sophisticated contest entrant, wishing to maximize the chances of winning a prize, would think about what the majority perception of attractiveness is, and then make a selection based on some inference from their knowledge of public perceptions. This can be carried one step further to take into account the fact that other entrants would each have their own opinion of what public perceptions are. Thus the strategy can be extended to the next order and the next and so on, at each level attempting to predict the eventual outcome of the process based on the reasoning of other rational agents.
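This level-by-level reasoning is often formalized in the "guess 2/3 of the average" game, a standard experimental analog of the beauty contest (the game itself is not described in this excerpt, and the level-0 guess of 50 is a conventional assumption):

```python
# Level-k reasoning in the "guess p times the average" game.
def level_k_guess(k, p=2/3, level0=50.0):
    # A level-0 player guesses 50 (the mean of a uniform guess on [0, 100]);
    # a level-k player best-responds to a population of level-(k-1) players,
    # so each additional level of reasoning multiplies the guess by p.
    guess = level0
    for _ in range(k):
        guess *= p
    return guess

for k in (0, 1, 2, 10):
    print(k, round(level_k_guess(k), 3))
```

Carrying the reasoning to ever-higher levels drives the guess toward 0, the unique Nash equilibrium of the game; in experiments, real players typically stop after only a few levels.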
"It is not a case of choosing those [faces] that, to the best of one's judgment, are really the prettiest, nor even those that average opinion genuinely thinks the prettiest. We have reached |
https://en.wikipedia.org/wiki/OurGrid | OurGrid is an open-source grid middleware based on a peer-to-peer architecture. OurGrid was mainly developed at the Federal University of Campina Grande (Brazil), which has run an OurGrid instance named "OurGrid" since December 2004. Anyone can freely join it to gain access to a large amount of computational power and run parallel applications. This computational power is provided by the idle resources of all participants, and it is shared in such a way that those who contribute more get more when they need it. Currently, the platform can be used to run any application whose tasks (i.e. the parts that run on a single machine) do not communicate among themselves during execution, like most simulations, data mining and searching.
https://en.wikipedia.org/wiki/Armorial%20of%20Mexico | Each of the 31 states of Mexico and Mexico City has a separate coat of arms. Each Mexican state flag contains the respective state arms, typically on a white background.
Gallery
See also
Coat of arms of Mexico |
https://en.wikipedia.org/wiki/Negative%20priming | Negative priming is an implicit memory effect in which prior exposure to a stimulus unfavorably influences the response to the same stimulus. It falls under the category of priming, which refers to the change in the response towards a stimulus due to a subconscious memory effect. Negative priming describes the slow and error-prone reaction to a stimulus that was previously ignored. For example, imagine a subject trying to pick a red pen from a pen holder. The red pen becomes the target of attention, so the subject responds by moving their hand towards it. At the same time, they mentally block out all other pens as distractors to aid in closing in on just the red pen. After repeatedly picking the red pen over the others, switching to the blue pen results in a momentary delay in picking it out (however, there is a decline in the negative priming effect when there is more than one nontarget item that is selected against). The slow reaction due to the change of a distractor stimulus into a target stimulus is called the negative priming effect.
Negative priming is believed to play a crucial role in attention and memory retrieval processes. When stimuli are perceived through the senses, all the stimuli are encoded within the brain, where each stimulus has its own internal representation. In this perceiving process, some of the stimuli receive more attention than others. Similarly, only some of them are stored in short-term memory. Negative priming is highly related to the selective nature of attention and memory.
Broadly, negative priming is also known as the mechanism by which inhibitory control is applied to cognition. This refers only to the inhibition of stimuli that can interfere with the current short-term goal of creating a response. The effectiveness of inhibiting the interferences depends on the cognitive control mechanism, as a higher number of distractors yields a higher load on working memory. Increased load on working memory can in turn result in slower perce
https://en.wikipedia.org/wiki/Monad%20%28nonstandard%20analysis%29 | In nonstandard analysis, a monad or also a halo is the set of points infinitesimally close to a given point.
Given a hyperreal number x in R∗, the monad of x is the set μ(x) = {y ∈ R∗ : x − y is infinitesimal}.
If x is finite (limited), the unique real number in the monad of x is called the standard part of x. |
https://en.wikipedia.org/wiki/Relational%20mobility | Relational mobility is a sociological variable that represents how much freedom individuals have to choose which persons to have relationships with, including friendships, working relationships, and romantic partnerships in a given society. Societies with low relational mobility have less flexible interpersonal networks. People form relationships based on circumstance rather than active choice. In these societies, relationships are more stable and guaranteed, while there are fewer opportunities to leave unsatisfying relationships and find new ones.
Group memberships tend to be fixed, and individuals have less freedom to select or change these relationships even if they wished to.
In contrast, societies with high relational mobility give people choice and freedom to select or leave interpersonal relationships based on their personal preferences. Such relationships are based on mutual agreement and are not guaranteed to last.
Individuals have many opportunities to meet new people and to choose whom they interact with or which groups they belong to in such societies.
Relational mobility is conceived as a socioecological factor, which means that it depends on the social and natural environment. The theory of relational mobility has attracted increased interest since the early 2000s because it has been found to explain important cross-cultural differences in people's behavior and ways of thinking.
The relational mobility scale
The relational mobility scale is a sociometric scale used for measuring relational mobility in population surveys. This scale is based on a series of questions asking people not about their own situation but about the situation of people around them, such as friendship groups, hobby groups, sports teams, and companies. The questions probe to what degree these people are able to choose whom they interact with in their daily life, according to their own preferences.
Geographic differences
Relational mobility is low in cultures with |
https://en.wikipedia.org/wiki/MacOS%20Monterey | macOS Monterey (version 12) is the eighteenth major release of macOS, Apple's desktop operating system for Macintosh computers. The successor to macOS Big Sur, it was announced at WWDC 2021 on June 7, 2021, and released on October 25, 2021. macOS Monterey was succeeded by macOS Ventura, which was released on October 24, 2022.
The operating system is named after Monterey Bay, continuing the trend of releases named after California locations since 2013's 10.9 Mavericks.
macOS Monterey is the final version of macOS that supports the 2015–2017 MacBook Air, Retina MacBook Pro, 2014 Mac Mini and cylindrical Mac Pro, as its successor, macOS Ventura, drops support for those models.
Changes
Monterey introduced several new features and changes, including the following:
Shortcuts for the Mac
TestFlight for the Mac
Provisions to allow the planned introduction of Universal Control, which allows a single keyboard and mouse to control multiple Macs and iPads. It works on Macs with Apple silicon and some with an Intel processor, including MacBook Pro (2016 and later), MacBook (2016 and later), MacBook Air (2018 and later), iMac (2017 and later), iMac (5K Retina, 27-inch, Late 2015), iMac Pro, Mac Mini (2018 and later), and Mac Pro (2019). It works on these iPads: iPad Pro, iPad Air (3rd generation and later), iPad (6th generation and later), and iPad Mini (5th generation and later).
Support for the Apple Music Voice Plan Subscription.
Portrait Mode and Noise Cancellation features for FaceTime and some apps (in Control Center).
New Toolbar features and designs for Finder and the Preview app.
Live Memoji and Animoji on the lock screen.
A yellow privacy indicator on the menu bar for indicating if the Mac's microphone or camera is active.
Live Text, which allows a user to copy, paste, translate and look up text from images displayed by Photos, Screenshot, Quick Look, and Safari.
New Passwords Manager for Mac
New on-device machine-learning–activated keyboard dictation |
https://en.wikipedia.org/wiki/Pathogenicity%20island | Pathogenicity islands (PAIs), as termed in 1990, are a distinct class of genomic islands acquired by microorganisms through horizontal gene transfer. Pathogenicity islands are found in both animal and plant pathogens. Additionally, PAIs are found in both gram-positive and gram-negative bacteria. They are transferred through horizontal gene transfer events such as transfer by a plasmid, phage, or conjugative transposon. Therefore, PAIs contribute to microorganisms' ability to evolve.
One species of bacteria may have more than one PAI. For example, Salmonella has at least five.
An analogous genomic structure in rhizobia is termed a symbiosis island.
Properties
Pathogenicity islands (PAIs) are gene clusters incorporated in the genome, chromosomally or extrachromosomally, of pathogenic organisms, but are usually absent from those nonpathogenic organisms of the same or closely related species. They may be located on a bacterial chromosome or may be transferred within a plasmid or can be found in bacteriophage genomes. The GC-content and codon usage of pathogenicity islands often differs from that of the rest of the genome, potentially aiding in their detection within a given DNA sequence, unless the donor and recipient of the PAI have similar GC-content.
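A crude version of the GC-content screen mentioned above can be sketched as follows (illustrative Python with a toy sequence; the window size, step, and 0.4 deviation threshold are arbitrary choices, and real detection also relies on codon usage and other signals):

```python
# Sliding-window GC-content scan over a toy genome; windows whose GC
# fraction deviates strongly from the genome-wide baseline are flagged
# as candidate horizontally acquired regions.
def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def gc_windows(genome, window=12, step=4):
    return [(i, gc_content(genome[i:i + window]))
            for i in range(0, len(genome) - window + 1, step)]

# AT-rich host backbone with a GC-rich 12-base insert in the middle.
genome = "ATATATATATAT" + "GCGCGGCCGCGG" + "ATATATATATAT"
baseline = gc_content(genome)

for start, gc in gc_windows(genome):
    flag = "  <- candidate island" if abs(gc - baseline) > 0.4 else ""
    print(start, round(gc, 2), flag)
```

Only the window covering the GC-rich insert deviates from the baseline by more than the threshold, mirroring how an atypical GC-content can betray a horizontally transferred region, and why detection fails when donor and recipient GC-contents are similar.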
PAIs are discrete genetic units flanked by direct repeats, insertion sequences or tRNA genes, which act as sites for recombination into the DNA. Cryptic mobility genes may also be present, indicating that the PAI arrived by transduction. PAIs are flanked by direct repeats: the sequence of bases at the two ends of the inserted sequence is the same. They carry functional genes, such as integrases and transposases, or parts of insertion sequences, to enable insertion into host DNA. PAIs are often associated with tRNA genes, which are target sites for this integration event. They can be transferred as a single unit to new bacterial cells, thus conferring virulence on formerly benign strains.
PAIs, a type of mobile genetic |
https://en.wikipedia.org/wiki/Suillus%20albidipes | Suillus albidipes is a species of edible mushroom in the genus Suillus native to North America.
See also
List of North American boletes |
https://en.wikipedia.org/wiki/Polylogarithmic%20function | In mathematics, a polylogarithmic function in n is a polynomial in the logarithm of n, a_k (log n)^k + a_{k-1} (log n)^{k-1} + ... + a_1 log n + a_0.
The notation log^k n is often used as a shorthand for (log n)^k, analogous to sin^2 θ for (sin θ)^2.
In computer science, polylogarithmic functions occur as the order of time or memory used by some algorithms (e.g., "it has polylogarithmic order"), such as in the definition of QPTAS (see PTAS).
All polylogarithmic functions of n are o(n^ε) for every exponent ε > 0 (for the meaning of this symbol, see small o notation); that is, a polylogarithmic function grows more slowly than any positive power of n. This observation is the basis for the soft O notation Õ.
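The claim that polylogarithms are eventually dominated by any positive power can be checked numerically (illustrative constants: k = 3 and ε = 0.1; any positive ε works in the limit):

```python
import math

# Ratio (log n)^k / n^eps; it tends to 0 as n grows, although for small
# exponents the peak occurs at astronomically large n (here near n = e^30).
def ratio(n, k=3, eps=0.1):
    return math.log(n) ** k / n ** eps

for n in (1e12, 1e50, 1e100, 1e200):
    print(f"{n:.0e}  {ratio(n):.3e}")
```

The ratio first grows, peaks, and then decays toward 0, which is why asymptotic statements like this one can be misleading at practically reachable input sizes.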
https://en.wikipedia.org/wiki/Cognitive%20Abilities%20Screening%20Instrument | The Cognitive Abilities Screening Instrument (CASI) is a cognitive test used to screen for dementia, to monitor disease progression, and to provide profiles of cognitive impairment by examining attention, concentration, orientation, short-term memory, long-term memory, language abilities, visual construction, list-generating fluency, abstraction, and judgment, with a total score range of 0 to 100.
https://en.wikipedia.org/wiki/Rate%20making | Rate making, or insurance pricing, is the determination of rates charged by insurance companies. The purpose of rate making is to ensure that insurance companies set fair and adequate premiums given the competitive nature of the insurance market.
Fundamental rate-making definitions
The following are fundamental terms that are commonly used in rate making. A rate "is the price per unit of insurance for each exposure unit, which is the unit of measurement used in insurance pricing". The exposure unit is used to establish insurance premiums by examining parallel groups.
The pure premium "refers to that portion of that rate needed to pay losses and loss-adjustment expenses". The loading "refers to the amount of the premium necessary to cover other expenses, particularly sales expenses, and to allow for a profit". The gross rate "is the pure premium and the loading per exposure unit". Finally, the gross premium is the premium paid by the insured consisting of the gross rate multiplied by the number of exposure units.
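Plugging hypothetical numbers into these definitions (all values are invented for illustration):

```python
# Per-exposure-unit quantities from the definitions above.
pure_premium = 300.0   # portion of the rate covering losses + loss-adjustment expenses
loading      = 100.0   # portion covering other expenses and profit

gross_rate = pure_premium + loading   # price per unit of insurance
exposure_units = 2                    # e.g., two car-years of coverage
gross_premium = gross_rate * exposure_units

print(gross_rate, gross_premium)  # 400.0 800.0
```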
Objectives in rate making
Rate making has several objectives: regulatory requirements, which are overseen by the states, and business objectives, which arise from the goal of profitability:
The goal of insurance regulation is to protect the public, and three regulatory objectives are in place to ensure certain standards:
The first regulatory requirement is that rates must be adequate; meaning the rates the insurers charge should be able to cover expenses.
The second regulatory requirement is that rates must not be excessive; meaning rates should not be so high that policyholders are paying more than the actual value of their protection.
The third regulatory objective is the rates must not be unfairly discriminatory; meaning exposures that are similar with respect to losses and expenses should not be charged significantly different rates.
The business objectives are set as a guide for insurers while designing the rating system. The rating system should meet each of the four objectives:
For |
https://en.wikipedia.org/wiki/Institute%20of%20Healthcare%20Engineering%20and%20Estate%20Management | The Institute of Healthcare Engineering and Estate Management (IHEEM) is the UK's largest specialist institute for the healthcare estates sector, devoted to developing careers, providing education and training, and registering engineers as EngTech, IEng and CEng.
History
The Institute was founded in 1943 and was originally named the Institute of Hospital Engineers; the Society of X-Ray Technology merged with it in 1990.
Structure
It is headquartered in the Cumberland Business Centre in Portsmouth, on the A2030.
IHEEM:
is a not-for-profit company. Their primary purpose, as a professional development organisation, is to keep members up to date with developing technology and changing regulations within the industry
is independent of government, the NHS and commercial interests and protects its impartiality and objectivity.
IHEEM’s members comply with a Code of Professional Conduct that places a personal obligation to uphold the dignity and reputation of the profession and to safeguard public interest; each member undertakes to exercise all reasonable professional skill and care and to discharge this responsibility with integrity.
The Institute counts among its members employees of both public and private healthcare providers, engineering and consultancy firms and practices. Increasingly members come from a non-engineering background, many with Facilities Management experience.
See also
Chartered engineer
Incorporated engineer
Engineering technician |
https://en.wikipedia.org/wiki/Maxim%20DL | MaxIm DL is a software package created by Cyanogen for the intended purpose of astronomical imaging. It contains tools to process and analyze data from imaging array detectors such as CCDs.
It is only available for Windows 7 and above. Installation on alternative operating systems is acknowledged as possible but is not officially supported. |
https://en.wikipedia.org/wiki/Introduction%20to%20Solid%20State%20Physics | Introduction to Solid State Physics, known colloquially as Kittel, is a classic condensed matter physics textbook written by American physicist Charles Kittel in 1953. The book has been highly influential and has seen widespread adoption; Marvin L. Cohen remarked in 2019 that Kittel's content choices in the original edition played a large role in defining the field of solid-state physics. It was also the first proper textbook covering this new field of physics. The book is published by John Wiley and Sons and, as of 2018, it is in its ninth edition and has been reprinted many times as well as translated into over a dozen languages, including Chinese, French, German, Hungarian, Indonesian, Italian, Japanese, Korean, Malay, Romanian, Russian, Spanish, and Turkish. In some later editions, the eighteenth chapter, titled Nanostructures, was written by Paul McEuen. Along with its rival Ashcroft and Mermin, the book is considered a standard textbook in condensed matter physics.
Background
Kittel received his PhD from the University of Wisconsin–Madison in 1941 under his advisor Gregory Breit. Before being promoted to professor of physics at UC Berkeley in 1951, Kittel held several other positions. He worked for the Naval Ordnance Laboratory from 1940 to 1942, was a research physicist in the US Navy until 1945, worked at the Research Laboratory of Electronics at MIT from 1945 to 1947 and at Bell Labs from 1947 to 1951, and was a visiting associate professor at UC Berkeley from 1950 until his promotion.
Henry Ehrenreich has noted that before the first edition of Introduction to Solid State Physics came out in 1953, there were no other textbooks on the subject; rather, the young field's study material was spread across several prominent articles and treatises. The field of solid state physics was very new at the time of writing and was defined by only a few treatises that, in Ehrenreich's view, expounded rather than explained the topics and were not suitable as textb |
https://en.wikipedia.org/wiki/Passivation%20%28chemistry%29 | In physical chemistry and engineering, passivation is coating a material so that it becomes "passive", that is, less readily affected or corroded by the environment. Passivation involves creation of an outer layer of shield material that is applied as a microcoating, created by chemical reaction with the base material, or allowed to build by spontaneous oxidation in the air. As a technique, passivation is the use of a light coat of a protective material, such as metal oxide, to create a shield against corrosion. Passivation of silicon is used during fabrication of microelectronic devices. Undesired passivation of electrodes, called "fouling", increases the circuit resistance so it interferes with some electrochemical applications such as electrocoagulation for wastewater treatment, amperometric chemical sensing, and electrochemical synthesis.
When exposed to air, many metals naturally form a hard, relatively inert surface layer, usually an oxide (termed the "native oxide layer") or a nitride, that serves as a passivation layer. In the case of silver, the dark tarnish is a passivation layer of silver sulfide formed from reaction with environmental hydrogen sulfide. (In contrast, metals such as iron oxidize readily to form a rough porous coating of rust that adheres loosely and sloughs off readily, allowing further oxidation.) The passivation layer of oxide markedly slows further oxidation and corrosion in room-temperature air for aluminium, beryllium, chromium, zinc, titanium, and silicon (a metalloid). The inert surface layer formed by reaction with air has a thickness of about 1.5 nm for silicon, 1–10 nm for beryllium, and 1 nm initially for titanium, growing to 25 nm after several years. Similarly, for aluminium, it grows to about 5 nm after several years.
In the context of the semiconductor device fabrication, such as silicon MOSFET transistors and solar cells, surface passivation refers not only to reducing the chemical reactivity of the surface but also to e |
https://en.wikipedia.org/wiki/Teiresias%20algorithm | The Teiresias algorithm is a combinatorial algorithm for the discovery of rigid patterns (motifs) in biological sequences. It is named after the Greek prophet Teiresias and was created in 1997 by Isidore Rigoutsos and Aris Floratos.
The problem of finding sequence similarities in the primary structure of related proteins or genes arises in the analysis of biological sequences. It can be shown that pattern discovery in its general form is NP-hard. The Teiresias algorithm is based on the observation that if a pattern spans many positions and appears exactly k times in the input then all fragments (sub patterns) of the pattern have to appear at least k times in the input. The algorithm is able to produce all patterns that have a user-defined number of copies in the given input, and manages to be very efficient by avoiding the enumeration of the entire space. Finally, the algorithm reports motifs that are maximal in both length and composition.
A new implementation of the Teiresias algorithm was recently made available, and Teiresias is also accessible through an interactive web-based user interface from the same source. See external links for both.
Pattern description
The Teiresias algorithm uses regular expressions to define the patterns. This allows the patterns reported to consist not only from the characters that appear in each position (literals) but from a specific group of characters (bracketed literals) or even from any character (wild card). The patterns created by the algorithm are <L,W> patterns that have at least k instances in the input, where L ≤ W and L, W, k positive integers. A pattern is called an <L,W> pattern if and only if any L consecutive literals or bracketed literals span at most W positions (i.e. there can be no more than W-L wild cards).
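The &lt;L,W&gt; condition above can be checked mechanically. The following Python sketch does so for a simplified pattern representation in which each literal is a single character and "." marks a wild card; bracketed literals are left out for brevity, and the function name and representation are illustrative assumptions, not part of any official Teiresias implementation:

```python
def is_lw_pattern(pattern, L, W, wildcard="."):
    """Check the <L,W> condition on a simplified rigid pattern.

    The pattern is a string whose ordinary characters are literals and
    whose `wildcard` characters are "don't care" positions.  The
    condition holds if every L consecutive literals span at most W
    positions, i.e. any window holding L literals contains at most
    W - L wild cards.
    """
    positions = [i for i, c in enumerate(pattern) if c != wildcard]
    return all(positions[i + L - 1] - positions[i] + 1 <= W
               for i in range(len(positions) - L + 1))
```

For example, "A.C..T" is a &lt;2,4&gt; pattern (its closest pair of literals spans 3 positions and its widest pair spans 4), but not a &lt;2,3&gt; pattern.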
The algorithm reports only maximal patterns. Given a set of sequences S, a pattern P that appears k times in S is called maximal if and only if there exists no pattern P' which is more specific than P |
https://en.wikipedia.org/wiki/Coincidence%20circuit | In physics and electrical engineering, a coincidence circuit or coincidence gate is an electronic device with one output and two (or more) inputs. The output activates only when signals arrive at all inputs within a time window narrow enough for them to be accepted as simultaneous. Coincidence circuits are widely used in particle detectors and in other areas of science and technology.
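The acceptance logic of a two-input coincidence gate can be mimicked in software. The sketch below is a hypothetical illustration (it models no particular hardware): it counts pairs of timestamps, one from each input channel, that fall within a common acceptance window.

```python
def coincidences(times_a, times_b, window):
    """Count coincident event pairs between two input channels.

    times_a and times_b are sorted lists of event timestamps; a pair
    (t_a, t_b) is counted as coincident when |t_a - t_b| <= window,
    mimicking the acceptance window of a two-input coincidence gate.
    """
    count, j = 0, 0
    for t in times_a:
        # skip channel-B events that are already too early for t
        while j < len(times_b) and times_b[j] < t - window:
            j += 1
        # count channel-B events inside the window around t
        k = j
        while k < len(times_b) and times_b[k] <= t + window:
            count += 1
            k += 1
    return count
```

Narrowing `window` reduces accidental coincidences from unrelated events, which is exactly the trade-off hardware coincidence circuits are designed around.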
Walther Bothe shared the Nobel Prize for Physics in 1954 "...for his discovery of the method of coincidence and the discoveries subsequently made by it." Bruno Rossi invented the electronic coincidence circuit for implementing the coincidence method.
History
Bothe, 1924
In his Nobel Prize lecture, Bothe described how he had implemented the coincidence method in an experiment on Compton scattering in 1924. The experiment aimed to check whether Compton scattering produces a recoil electron simultaneously with the scattered gamma ray. Bothe used two point discharge counters connected to separate fibre electrometers and recorded the fibre deflections on a moving photographic film. On the film record he could discern coincident discharges with a time resolution of approximately 1 millisecond.
Bothe and Kolhörster, 1929
In 1929, Walther Bothe and Werner Kolhörster published the description of a coincidence experiment with the tubular discharge counters that Hans Geiger and Wilhelm Müller had invented in 1928. The Bothe–Kolhörster experiment demonstrated penetrating charged particles in cosmic rays. They used the same mechanical-photographic method for recording simultaneous discharges, which, in this experiment, signalled the passage of a charged cosmic-ray particle through both counters and through the thick wall of lead and iron that surrounded them. Their paper, entitled "Das Wesen der Höhenstrahlung", was published in the Zeitschrift für Physik v. 56, p. 751 (1929).
Rossi, 1930
Bruno Rossi, at the age of 24, was in his first job as assistant in the Physics Institute of the Universi |
https://en.wikipedia.org/wiki/Rolling%20and%20wheeled%20creatures%20in%20fiction%20and%20legend | Legends and speculative fiction reveal a longstanding human fascination with rolling and wheeled creatures. Such creatures appear in mythologies from Europe, Japan, pre-Columbian Mexico, the United States, and Australia, and in numerous modern works.
Rolling creatures
The triskelion is a motif with central symmetry used since ancient times.
A variant with three human legs appears in the medieval flag of the Isle of Man.
A variant with the head of Medusa in the union of the legs is associated with Sicily.
Its meaning in antiquity and its original Greek name are not known.
The hoop snake, a creature of legend in the United States and Australia, is said to grasp its tail in its mouth and roll like a wheel towards its prey. Japanese culture includes a similar mythical creature, the Tsuchinoko.
Buer, a demon mentioned in the 16th-century grimoire Pseudomonarchia Daemonum, was described in Collin de Plancy's 1825 edition of Dictionnaire Infernal as having "the shape of a star or wheel". The 1863 edition of this book featured an illustration by Louis Le Breton, depicting a creature with five legs radially arranged.
Neil R. Jones' 1937 story "On the Planet Fragment" features aliens dubbed the Disci, which are shaped like wheels, with limbs around the circumference. One of their methods of locomotion is a "rolling motion like that of a cartwheel."
The 1944 science fiction short story "Arena", by Fredric Brown, features a telepathic alien called an Outsider, which is roughly spherical and moves by rolling. The story was the basis for a 1967 Star Trek episode of the same name, and possibly also a 1964 episode of The Outer Limits entitled "Fun and Games", though neither television treatment included a spherical creature.
E. E. "Doc" Smith's 1950 novel First Lensman features the fontema, which consists of two wheels connected by articulations to an axle, lives on sunlight, and has only two behaviors: rolling, and conjugation/mating, which is scarcely more complica |
https://en.wikipedia.org/wiki/Reaction%20Time%20%28book%29 | Reaction Time: Climate Change and the Nuclear Option is a book by Professor Ian Lowe which was officially launched by science broadcaster Robyn Williams at the Writers' Festival in Brisbane in September 2007. The book is about energy policy, and Lowe argues that nuclear power does not make sense on any level: economically, environmentally, politically or socially.
Themes
Ian Lowe, AO, explains that energy is essential for civilised living, and says our energy-intensive lifestyle based on fossil fuels is unsustainable, and that he believes fundamental improvements must be made. In his book he says: "the nuclear option does not make sense on any level: economically, environmentally, politically or socially. It is too costly, too dangerous, too slow and has too small an impact on global warming."
Quote
"Promoting nuclear power as the solution to climate change is like advocating smoking as a cure for obesity. That is, taking up the nuclear option will make it much more difficult to move to the sort of sustainable, ecologically healthy future that should be our goal."
Author
Professor Lowe is Emeritus Professor of Science, Technology and Society at Griffith University and a former President of the Australian Conservation Foundation.
See also
Anti-nuclear movement in Australia
List of books about nuclear issues
Renewable energy commercialization
List of Australian environmental books
Quarterly Essay |
https://en.wikipedia.org/wiki/Royana | Royana (2006–2010) was Iran's and the Middle East's first successfully cloned sheep. Royana was a brown male domestic sheep and was cloned at the Royan Research Institute in Isfahan, Iran (the word Royan means embryo in Persian). He was the second sheep cloned at the Royan Research Institute; whereas the first died a few hours after birth, Royana lived for a few years.
Birth
On September 30, 2006, a group of scientists in Iran cloned Royana from an adult cell in a test tube in a laboratory. After the embryo proved stable, the scientists transferred it to the uterus of a ewe. After a gestation period of 145 days, Royana was born by caesarean section at 1:30 am on April 15, 2006, at the Isfahan campus of the Royan Institute; despite critical conditions at birth, he survived and thrived.
Death
Royana was euthanized after abdominal pain was traced to his liver. It was also thought that Royana suffered from premature death syndrome. Royana died at the age of three.
His birth was a great step toward the production of transgenic lambs carrying a factor IX transgene, which is helpful in human blood clotting. |
https://en.wikipedia.org/wiki/Data%20Security%20Law%20of%20the%20People%27s%20Republic%20of%20China | The Data Security Law of the People's Republic of China (; referred to as the Data Security Law or DSL) governs the creation, use, storage, transfer, and exploitation of data within China. The law is widely seen as primarily targeting technology companies, which have grown increasingly powerful in China over the years. The law is part of a series of interlocking but related national security legislation, including the National Security Law of the People's Republic of China, the Cybersecurity Law and the National Intelligence Law, passed during Xi Jinping's administration as part of efforts to strengthen national security.
Provisions
The law controversially requires localisation of data collected on Chinese citizens by both foreign and domestic entities. It prohibits technology companies from exporting data without first completing a "cybersecurity review", a process which is vague and still being developed. In addition, foreign judicial authorities are prohibited from requesting data on Chinese citizens without first seeking permission from Chinese authorities.
Reactions
Carolyn Bigg of the law firm DLA Piper Hong Kong stated that the law represents "another important piece in the overall data protection regulatory jigsaw in China", making it "complex" and "increasingly onerous" for international businesses to navigate. Chinese technology company stocks fell in reaction to the passing of the law, while tech companies such as Meituan, Alibaba and Ant Financial were all placed under regulatory scrutiny prior to its passing. The law is seen to have wide-ranging implications and as another step in the increasing lawfare between China and the United States in areas of trade, intellectual property and national security since the beginning of the US-China trade war in 2016.
See also
Personal Information Protection Law of the People's Republic of China
Cybersecurity Law of the People's Republic of China |
https://en.wikipedia.org/wiki/List%20of%20whale%20vocalizations | Whale vocalizations are the sounds made by whales to communicate. The word "song" is used in particular to describe the pattern of regular and predictable sounds made by some species of whales (notably the humpback) in a way that is reminiscent of human singing.
Humans produce sound by expelling air through the larynx. The vocal cords within the larynx open and close as necessary to separate the stream of air into discrete pockets of air. These pockets are shaped by the throat, tongue, and lips into the desired sound.
Cetacean sound production differs markedly from this mechanism. The precise mechanism differs in the two major suborders of cetaceans: the Odontoceti (toothed whales—including dolphins) and the Mysticeti (baleen whales—including the largest whales, such as the blue whale).
Blue whale (Balaenoptera musculus)
Estimates made by Cummings and Thompson (1971) and Richardson et al. (1995) suggest that the source levels of sounds made by blue whales are between 155 and 188 decibels relative to a reference pressure of one micropascal at one metre. All blue whale groups make calls at a fundamental frequency of between 10 and 40 Hz, while the lowest-frequency sound a human can typically perceive is 20 Hz. Blue whale calls last between ten and thirty seconds. Additionally, blue whales off the coast of Sri Lanka have been recorded repeatedly making four-note "songs" lasting about two minutes each, reminiscent of the well-known humpback whale songs.
All of the baleen whale sound files on this page (with the exception of the humpback vocalizations) are reproduced at 10x speed to bring the sound into the human auditory band.
Vocalizations produced by the Eastern North Pacific population have been well studied. This population produces long-duration, low frequency pulses ("A") and tonal calls ("B"), upswept tones that precede type B calls ("C"), moderate-duration downswept tones ("D"), and variable amplitude-modulated and frequency-modulated sounds. A and B calls are often produce |
https://en.wikipedia.org/wiki/Leftist%20grammar | In formal language theory, a leftist grammar is a formal grammar in which certain restrictions are placed on the left and right sides of the grammar's productions. Only two types of productions are allowed, namely those of the form a → ba (insertion rules) and ab → b (deletion rules). Here, a and b are terminal symbols. This type of grammar was motivated by accessibility problems in the field of computer security.
Computational properties
The membership problem for leftist grammars is decidable.
See also
Unrestricted grammar
String rewriting |
https://en.wikipedia.org/wiki/Human%20HGF%20plasmid%20DNA%20therapy | Human HGF plasmid DNA therapy of cardiomyocytes is being examined as a potential treatment for coronary artery disease (a major cause of myocardial infarction (MI)), as well as treatment for the damage that occurs to the heart after MI. After MI, the myocardium suffers from reperfusion injury which leads to death of cardiomyocytes and detrimental remodelling of the heart, consequently reducing proper cardiac function. Transfection of cardiac myocytes with human HGF reduces ischemic reperfusion injury after MI. The benefits of HGF therapy include preventing improper remodelling of the heart and ameliorating heart dysfunction post-MI.
Human hepatocyte growth factor
Human hepatocyte growth factor (HGF) is an 80 kDa pleiotropic protein that is endogenously produced by a variety of cell types from the mesenchymal cell lineage (such as cardiomyocytes and neurons). It is produced and proteolytically cleaved to its active state in response to cellular injury or during apoptosis. HGF binds to c-met receptors found on mesenchymal cell types to produce its many different effects such as increased cellular motility, morphogenesis, proliferation and differentiation. Research has shown that HGF has potent angiogenic, anti-fibrotic, and anti-apoptotic properties. It has also been shown to act as a chemoattractant for adult mesenchymal stem cells via c-met receptor binding.
Research and clinical trials
Animal research has demonstrated that administration of HGF cDNA plasmids into ischemic cardiac tissue can increase cardiac function (improved left ventricular ejection fraction and fractional shortening compared to control subjects) after induced MI or ischemia. Transfection with HGF plasmids in damaged cardiac tissue also promotes angiogenesis (increased capillary density compared to control subjects), as well as decreasing detrimental remodelling of the tissue at the site of injury (decreased fibrotic deposition). The increased production of HGF by transfected cardiomyocytes duri |
https://en.wikipedia.org/wiki/Thermal%20amplitude%20%28medical%29 | Thermal amplitude or thermal range refers to the temperature range in which a cold autoantibody or cold-reacting alloantibody binds to its antigen. Cold antibodies that can bind to antigen above are considered potentially clinically significant and may lead to disease that occurs or worsens on exposure to low temperatures. The closer the thermal range comes to core body temperature (37 °C or 99 °F), the greater the chance that the antibody will cause symptoms such as anemia or Raynaud syndrome. Antibodies that are only reactive at temperatures below are generally considered unlikely to be clinically significant. |
https://en.wikipedia.org/wiki/Sudo | sudo ( or ) is a program for Unix-like computer operating systems that enables users to run programs with the security privileges of another user, by default the superuser. It originally stood for "superuser do", as that was all it did, and it is its most common usage; however, the official Sudo project page lists it as "su 'do'". The current Linux manual pages for su define it as "substitute user", making the correct meaning of sudo "substitute user, do", because sudo can run a command as other users as well.
Unlike the similar command su, users must, by default, supply their own password for authentication, rather than the password of the target user. After authentication, and if the configuration file (typically /etc/sudoers) permits the user access, the system invokes the requested command. The configuration file offers detailed access permissions, including enabling commands only from the invoking terminal; requiring a password per user or group; requiring re-entry of a password every time or never requiring a password at all for a particular command line. It can also be configured to permit passing arguments or multiple commands.
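As an illustration of the kinds of rules the configuration file supports, here is a minimal sketch of an /etc/sudoers fragment; the user name, group name, and command path are invented examples, and the sudoers manual should be consulted for authoritative syntax.

```
# alice may run any command as any user on any host, after entering her own password
alice    ALL=(ALL) ALL

# members of group admin may restart one specific service without a password
%admin   ALL=(root) NOPASSWD: /usr/sbin/service nginx restart

# require re-entry of the password after 5 minutes of sudo inactivity
Defaults timestamp_timeout=5
```

Such files are normally edited with visudo, which checks the syntax before installing the file.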
History
Robert Coggeshall and Cliff Spencer wrote the original subsystem around 1980 at the Department of Computer Science at SUNY/Buffalo. Robert Coggeshall brought sudo with him to the University of Colorado Boulder. Between 1986 and 1993, the code and features were substantially modified by the IT staff of the University of Colorado Boulder Computer Science Department and the College of Engineering and Applied Science, including Todd C. Miller. The current version has been publicly maintained by OpenBSD developer Todd C. Miller since 1994, and has been distributed under an ISC-style license since 1999.
In November 2009 Thomas Claburn, in response to concerns that Microsoft had patented sudo, characterized such suspicions as overblown. The claims were narrowly framed to a particular GUI, rather than to the sudo concept.
The logo |
https://en.wikipedia.org/wiki/Hardware%20compatibility%20list | A hardware compatibility list (HCL) is a list of computer hardware (typically including many types of peripheral devices) that is compatible with a particular operating system or device management software. The list contains both whole computer systems and specific hardware elements including motherboards, sound cards, and video cards. With a vast amount of computer hardware in circulation and many operating systems in use, a hardware compatibility list serves as a database of hardware models and their compatibility with a certain operating system.
HCLs can be centrally controlled (one person or team keeps the list of hardware maintained) or user-driven (users submit reviews on hardware they have used).
There are many HCLs. Usually, each operating system will have an official HCL on its website.
See also
System requirements |
https://en.wikipedia.org/wiki/Unavoidable%20pattern | In mathematics and theoretical computer science, a pattern is an unavoidable pattern if it is unavoidable on every finite alphabet.
Definitions
Pattern
Like a word, a pattern (also called term) is a sequence of symbols over some alphabet.
The minimum multiplicity of a pattern p is the minimum over all symbols x of |p|_x, where |p|_x is the number of occurrences of symbol x in the pattern p. In other words, it is the number of occurrences in p of the least frequently occurring symbol in p.
Instance
Given finite alphabets Σ and Δ, a word w over Σ is an instance of the pattern p over Δ if there exists a non-erasing semigroup morphism f : Δ* → Σ* such that f(p) = w, where Δ* denotes the Kleene star of Δ. Non-erasing means that f(a) ≠ ε for all a ∈ Δ, where ε denotes the empty string.
Avoidance / Matching
A word w is said to match, or encounter, a pattern p if a factor (also called subword or substring) of w is an instance of p. Otherwise, w is said to avoid p, or to be p-free. This definition can be generalized to the case of an infinite w, based on a generalized definition of "substring".
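The definitions of instance and matching can be made concrete with a brute-force sketch in Python; the function names are ours, and the search is exponential in the worst case, so this is meant only to illustrate the definitions, not to be efficient.

```python
def is_instance(word, pattern):
    """True if `word` is the image of `pattern` under some non-erasing
    morphism: each pattern symbol maps to a fixed non-empty string."""
    def backtrack(i, j, assign):
        if i == len(pattern):            # whole pattern consumed
            return j == len(word)        # ...exactly covering the word
        sym = pattern[i]
        if sym in assign:                # symbol already bound
            s = assign[sym]
            return (word.startswith(s, j)
                    and backtrack(i + 1, j + len(s), assign))
        # try binding this symbol to every non-empty prefix of the rest
        for k in range(j + 1, len(word) + 1):
            assign[sym] = word[j:k]
            if backtrack(i + 1, k, assign):
                return True
            del assign[sym]
        return False
    return backtrack(0, 0, {})

def matches(word, pattern):
    """True if some factor (substring) of `word` is an instance of
    `pattern`, i.e. the word matches/encounters the pattern."""
    return any(is_instance(word[a:b], pattern)
               for a in range(len(word))
               for b in range(a + len(pattern), len(word) + 1))
```

For instance, the pattern "xx" is matched by any word containing a square (a factor of the form uu), while a word such as "abcba", which contains no square factor, avoids "xx".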
Avoidability / Unavoidability on a specific alphabet
A pattern p is unavoidable on a finite alphabet Σ if each sufficiently long word over Σ must match p; formally, if there exists a length n such that every word w over Σ with |w| ≥ n matches p. Otherwise, p is avoidable on Σ, which implies there exist infinitely many words over the alphabet Σ that avoid p.
By Kőnig's lemma, a pattern p is avoidable on Σ if and only if there exists an infinite word over Σ that avoids p.
Maximal -free word
Given a pattern p and an alphabet Σ, a p-free word w is a maximal p-free word over Σ if aw and wa match p for every letter a of Σ.
Avoidable / Unavoidable pattern
A pattern p is an unavoidable pattern (also called a blocking term) if p is unavoidable on every finite alphabet.
If a pattern is unavoidable and not limited to a specific alphabet, then it is unavoidable for any finite alphabet by default. Conversely, if a pattern is said to be avoidable and not limited to a specific alphabet, then it is avoidable on some finite alphabet by default.
k-avoidable / k-unavoidable
A pattern p is k-avoidable if p is avoidable on an alphabet of size k. |
https://en.wikipedia.org/wiki/Starch%20sodium%20octenyl%20succinate | Starch sodium octenyl succinate, E1450 in the E number scheme of food additives, is a modified starch. These are not absorbed intact by the gut, but are significantly hydrolysed by intestinal enzymes and then fermented by intestinal microbiota. |
https://en.wikipedia.org/wiki/Keycloak | Keycloak is an open source software product that allows single sign-on with identity and access management, aimed at modern applications and services. This WildFly community project is under the stewardship of Red Hat, which uses it as the upstream project for its RH-SSO product.
History
The first production release of Keycloak was in September 2014, with development having started about a year earlier. In 2016 Red Hat switched the RH SSO product from being based on the PicketLink framework to being based on the Keycloak upstream Project. This followed a merging of the PicketLink codebase into Keycloak.
To some extent, Keycloak can now also be considered a replacement for the Red Hat JBoss SSO open source product, which was previously superseded by PicketLink. JBoss.org redirects the old jbosssso subsite to the Keycloak website. The JBoss name is a registered trademark, and Red Hat renamed its upstream open source projects to avoid using it, the renaming of JBoss AS to WildFly being a more commonly recognized example.
Features
The features of Keycloak include:
User registration
Social login
Single sign-on/sign-off across all applications belonging to the same realm
Two-factor authentication
LDAP integration
Kerberos broker
Multitenancy with per-realm customizable skin
Custom extensions to extend the core functionality
Components
There are two main components of Keycloak:
Keycloak server, including the API and graphical interface.
Keycloak application adapter: a set of libraries to call the server.
See also
Single sign-on (SSO)
OpenAM
Kerberos (protocol)
Identity management
List of single sign-on implementations
Red Hat Single Sign-On |
https://en.wikipedia.org/wiki/Estonian%20identity%20card | The Estonian identity card () is a mandatory identity document for citizens of Estonia. In addition to regular identification of a person, an ID-card can also be used for establishing one's identity in an electronic environment and for giving one's digital signature. Within Europe (except Belarus, Russia, Ukraine and the United Kingdom), as well as in the French overseas territories and Georgia, the Estonian ID card can be used by citizens of Estonia as a travel document.
The mandatory identity document of a citizen of the European Union is also an identity card, also known as an ID card. The Estonian ID Card can be used to cross the Estonian border, however Estonian authorities cannot guarantee that other EU member states will accept the card as a travel document.
With the Estonian ID-card the citizen will receive a personal @eesti.ee e-mail address, which is used by the state to send important information. In order to use the @eesti.ee e-mail address, the citizen has to forward it to his or her personal e-mail address, using the State Portal eesti.ee.
The Police and Border Guard Board (PPA) on 25 September 2018 introduced the newest version of Estonia's ID card, featuring additional security elements and a contactless interface. The new cards also utilize Estonia's own font and elements of its brand. One new detail is the inclusion of a QR code, which makes it easier to check the validity of the ID card. The new design also features a color photo of its bearer, which doubles as a security element and is made up of lines; looking at the card at an angle, another photo appears. The new chip has a higher capacity, allowing the addition of new applications to it.
Scope
The Estonian ID cards are used in health care, electronic banking, signing contracts, public transit, encrypting email and voting. E |
https://en.wikipedia.org/wiki/Howard%20Jerome%20Keisler | Howard Jerome Keisler (born 3 December 1936) is an American mathematician, currently professor emeritus at University of Wisconsin–Madison. His research has included model theory and non-standard analysis.
His Ph.D. advisor was Alfred Tarski at Berkeley; his dissertation is Ultraproducts and Elementary Classes (1961).
Following Abraham Robinson's work, which resolved what had long been thought to be inherent logical contradictions in the literal interpretation of Leibniz's notation that Leibniz himself had proposed (that is, interpreting "dx" as literally representing an infinitesimally small quantity), Keisler published Elementary Calculus: An Infinitesimal Approach, a first-year calculus textbook conceptually centered on the use of infinitesimals, rather than the epsilon-delta approach, for developing the calculus.
He is also known for extending the Henkin construction (of Leon Henkin) to what are now called Henkin–Keisler models. He is also known for the Rudin–Keisler ordering along with Mary Ellen Rudin.
He held the named chair of Vilas Professor of Mathematics at Wisconsin.
Among Keisler's graduate students, several have made notable mathematical contributions, including Frederick Rowbottom, who discovered Rowbottom cardinals. Several others have gone on to careers in computer science research and product development, including Michael Benedikt, a professor of computer science at the University of Oxford; Kevin J. Compton, a professor of computer science at the University of Michigan; Curtis Tuckey, a developer of software-based collaboration environments; Joseph Sgro, a neurologist and developer of vision processor hardware and software; and Edward L. Wimmers, a database researcher at IBM Almaden Research Center.
In 2012 he became a fellow of the American Mathematical Society.
His son Jeffrey Keisler is a Fulbright Distinguished Chair at the University of Massachusetts, Boston, College of Management.
Publications
Chang, C. C.; Keisler, H. J. Continuous Mode |
https://en.wikipedia.org/wiki/DataPlay | DataPlay is an optical disc system developed by DataPlay Inc. and released to the consumer market in 2002. Using very small (32 mm diameter) disks enclosed in a protective cartridge storing 250 MB per side, DataPlay was intended primarily for portable music playback, although it could also store other types of data, using both pre-recorded disks and user-recorded disks (and disks that combined pre-recorded information with a writable area). It would also allow for multisession recording. It won the CES Best of Show award in 2001.
DataPlay also included an elaborate digital rights management system designed to allow consumers to "unlock" extra pre-recorded content on the disk at any time, through the internet, following the initial purchase. It was based on the Secure Digital Music Initiative's DRM system. DataPlay's DRM system was one of the reasons behind its attractiveness to the music industry. It also included a proprietary file system, the DataPlay File System (DFS), which natively supported DRM. By default it would allow up to 3 copies to other DataPlay discs, without allowing any copies to CDs.
The recorded music industry was initially generally supportive of DataPlay and a small number of pre-recorded DataPlay disks were released, including the Britney Spears album Britney. Graphics on press releases show that Sting and Garth Brooks were also set to have DataPlay releases. In 2001 the first DIY DataPlay album was released by the experimental rave producer Backmasker. However, as a pre-recorded format, DataPlay was a failure. The company closed due to a lack of funding. In 2003 a company called DPHI bought DataPlay's intellectual property and reintroduced it at CES 2004. The company swapped DataPlay's DFS file system in favor of the FAT file system. Again, they were marketed as a cheaper alternative to memory cards, with a device being designed that would allow users to transfer data from an SD card to a cheaper and higher capacity DataPlay disc. Each disc wo |
https://en.wikipedia.org/wiki/Digital%20distribution%20of%20video%20games | In the video game industry, digital distribution is the process of delivering video game content as digital information, without the exchange or purchase of new physical media such as ROM cartridges, magnetic storage, optical discs and flash memory cards. This process has existed since the early 1980s, but it was only with network advancements in bandwidth capabilities in the early 2000s that digital distribution became more prominent as a method of selling games. Currently, the process is dominated by online distribution over broadband Internet.
To facilitate the sale of games, various video game publishers and console manufacturers have created their own platforms for digital distribution. These platforms, such as Steam, Origin, and Xbox Live Marketplace, provide centralized services to purchase and download digital content for either specific video game consoles or personal computers. Some platforms may also serve as digital rights management systems, limiting the use of purchased items to one account.
Digital distribution of video games is becoming increasingly common, with major publishers and retailers paying more attention to digital sales, including Steam, PlayStation Store, Amazon.com, GAME, GameStop, and others. It is particularly popular for PC games. According to a study conducted by SuperData Research, the volume of digital distribution of video games worldwide was $6.2 billion per month in February 2016, and reached $7.7 billion per month in April 2017.
History
1980s
Before Internet connections became widespread, there were few services for digital distribution of games, and physical media was the dominant method of delivering video games. One of the first examples of digital distribution in video games was GameLine, which operated during the early 1980s. The service allowed Atari 2600 owners to use a specialized cartridge to connect through a phone line to a central server and rent a video game for 5–10 days. The GameLine service was terminated d |
https://en.wikipedia.org/wiki/Balanced%20lethal%20systems | In evolutionary biology, a balanced lethal system is a situation where recessive lethal alleles are present on two homologous chromosomes. Each of the chromosomes in such a pair carries a different lethal allele, which is compensated for by the functioning allele on the other chromosome. Since both these lethal alleles end up in the gametes in the same frequency as the functioning alleles, half of the offspring, the homozygotes, receive two copies of a lethal allele and therefore die during development. In such systems, only the heterozygotes survive.
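The arithmetic above — half of all offspring inheriting two copies of the same lethal allele — can be sketched by enumerating gamete combinations for a cross between two surviving heterozygotes (an illustrative toy model; the allele names `L1`/`L2` for the two lethal-carrying chromosomes are invented for this sketch):

```python
from itertools import product

# Every surviving adult is the heterozygote L1/L2: each of its gametes
# carries one of the two chromosomes, each with a different recessive lethal.
parent_gametes = ["L1", "L2"]

# All equally likely offspring genotypes from a heterozygote x heterozygote cross.
offspring = [tuple(sorted(pair)) for pair in product(parent_gametes, repeat=2)]

# Homozygotes carry two copies of the same lethal allele and die in development.
survivors = [o for o in offspring if o[0] != o[1]]

print(f"offspring genotypes: {offspring}")
print(f"surviving fraction: {len(survivors)}/{len(offspring)}")  # 2/4
```

The enumeration makes the cost of the system explicit: the surviving fraction stays at one half every generation, regardless of population size.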
Balanced lethal systems appear to pose a challenge to evolutionary theory, since a system so wasteful should be rapidly eliminated through natural selection and recombination. Instead, it has become fixed in various species all over the tree of life.
Mechanism
The exact mechanism behind balanced lethal systems remains unknown. Prior to the availability of efficient DNA sequencing methods, it was already known that the lethality in such a system was caused by homozygosity of a certain chromosome pair.
One theory is that, in the case of the Triturus genus, the balanced lethal system is a remnant of an ancient sex-determination system. One of the chromosomes of the pair that contains the system is longer than the other, which is also the case for the actual sex chromosomes. In this theory, deleterious mutations accumulated on the non-recombining part of the Y-chromosome (Muller’s ratchet). Then, two distinct Y-chromosomes, both with different lethal mutations, co-segregated in a population. Since sex-determination in many cold-blooded vertebrates is potentially dependent on temperature, a shift away from chromosomal sex determination occurred. This system favoured the sex reversal of females, which eventually led to the loss of the original X-chromosome. A mutation on another chromosome later restored the even sex ratio, and gave rise to a new male-heterogametic system. A major restriction for this theory is that it c |
https://en.wikipedia.org/wiki/Albert%20Einstein%20World%20Award%20of%20Science | The Albert Einstein World Award for Science is an annual award given by the World Cultural Council "as a means of recognition and encouragement for scientific and technological research and development", with special consideration for researches which "have brought true benefit and wellbeing to mankind". Named for physicist and theoretician Albert Einstein, the award includes a diploma, a commemorative medal, and US$10,000.
The recipient of the award is evaluated and elected by an Interdisciplinary Committee, which is composed of world-renowned scientists, among them 25 Nobel laureates.
Award recipients
See also
Leonardo da Vinci World Award of Arts
José Vasconcelos World Award of Education
Albert Einstein Medal
Albert Einstein Award
Prizes named after people
List of things named after Albert Einstein
List of physics awards
List of general science and technology awards |
https://en.wikipedia.org/wiki/Insulin-like%20growth%20factor-binding%20protein | The insulin-like growth factor-binding protein (IGFBP) serves as a transport protein for insulin-like growth factor 1 (IGF-1).
Function
Approximately 98% of IGF-1 is always bound to one of six binding proteins (IGF-BP). IGFBP-3, the most abundant protein, accounts for 80% of all IGF binding. IGF-1 binds to IGFBP-3 in a 1:1 molar ratio. IGF-BP also binds to IGF-1 inside the liver, allowing growth hormone to continuously act upon the liver to produce more IGF-1.
IGF binding proteins (IGFBPs) are proteins of 24 to 45 kDa. All six IGFBPs share 50% homology with each other and have binding affinities for IGF-I and IGF-II at the same order of magnitude as the ligands have for the IGF-IR.
The IGFBPs help to lengthen the half-life of circulating IGFs in all tissues, including the prostate. Individual IGFBPs may act to enhance or attenuate IGF signaling depending on their physiological context (i.e. cell type). Even with these similarities, some characteristics are different: chromosomal location, heparin binding domains, RGD recognition site, preference for binding IGF-I or IGF-II, and glycosylation and phosphorylation differences. These structural differences can have a tremendous impact on how the IGFBPs interact with cellular basement membranes.
Family members
In humans, IGFBPs are transcribed from the following seven genes:
IGFBP1
IGFBP2
IGFBP3
IGFBP4
IGFBP5
IGFBP6
IGFBP7
See also
Insulin-like growth factor receptor |
https://en.wikipedia.org/wiki/Florian%20Brody | Florian Brody is an Austrian/American digital media creator, inventor, writer, public speaker, academic, and global business consultant. He is best known for his contributions to the invention and development of Expanded Books, an early form of E-books, at the Voyager Company, and for his writings and public speaking about digital media innovation and marketing, including TEDx talks in Austria, as well as his 1999 essay "The Medium is the Memory," originally published in MIT Press's The Digital Dialectic: New Essays on New Media, edited by Peter Lunenfeld.
Brody is a certified executive leadership coach, and supports European start-up businesses looking to enter the U.S. market. He served as principal at The Halo Agency from 2013 to 2017.
Career
Digital media innovation
In the late 1980s and early 1990s, Brody was head of the Expanded Books Project at the Voyager Company and was involved in the development and creation of their "expanded books", originally designed to be read on the then-new Apple PowerBook 100. The Voyager Company released their first expanded books on floppy disk for the PowerBook in 1992, and presented them at that year's MacWorld Expo. The books on this original disk were Jurassic Park by Michael Crichton, The Hitchhiker's Guide to the Galaxy by Douglas Adams, and Alice's Adventures in Wonderland by Lewis Carroll. Brody's public statements at the time on the Voyager Company's early expanded books, or 'e-books', were that he believed they were more of an experiment than a definitive 'product', paving the way for future versions of e-books for the next generation of less expensive, more lightweight, and more high-powered portable computers.
Digital evangelism
Brody has spoken about digital media innovation and marketing in both Europe and the US. He has done multiple TEDx talks in Austria.
Business and consultant work
Brody has been a principal partner at The Halo Agency, which provides business consultation to European start-up tech-base |
https://en.wikipedia.org/wiki/Sven%20Apel | Sven Apel (born 1977) is a German computer scientist and professor of software engineering at Saarland University.
His research focuses on software product lines and configurable systems, domain-specific generation and optimization, software analytics and intelligence, as well as empirical methods and the human factor in software development.
Education and career
Sven Apel studied computer science at the University of Magdeburg from 1996 to 2002. At the same university, he also received his doctorate in computer science in 2007 with a thesis on the “Role of Features and Aspects in Software Development.”
After his doctorate, Apel was a postdoctoral researcher at the University of Passau until 2010. From 2010 to 2013, he led the Emmy Noether Junior Research Group “Secure and Efficient Software Product Lines” there before he was appointed professor in Passau in 2013 as part of the DFG's Heisenberg Program.
Since 2019, Sven Apel has been a professor of software engineering at Saarland University.
In 2019, Apel, together with Christian Kästner and Martin Kuhlemann, received the “Most Influential Paper Award” at the Systems and Software Products Line Conference (SPLC) for the paper “Granularity in Software Product Lines”. In the article, the three researchers demonstrate how programs can be extended by fine-grained import from other software.
In 2022, together with Janet Feigenspan, Christian Kästner, Jörg Liebig and Stefan Hanenberg, he was awarded the “Most Influential Paper Award” at the International Conference on Program Comprehension (ICPC) for the paper “Measuring programming experience”. In the article, the researchers present a questionnaire and an experiment to assess and measure a programmer's level of experience.
According to Google Scholar, he has an h-index of 69.
Research areas
Sven Apel's research focuses in particular on methods, tools, and theories for the construction of manageable, reliable, efficient, configurable, and evolvable software sys |
https://en.wikipedia.org/wiki/Adrianus%20de%20Hoop | Adrianus Teunis (Aad) de Hoop (born 24 December 1927) is a Dutch electrical engineer, mathematician, and physicist, and professor emeritus at Delft University of Technology. De Hoop's research interests are in the broad area of wavefield modeling in acoustics, electromagnetics, and elastodynamics. Other research includes a method for computing pulsed electromagnetic fields in strongly heterogeneous media with applications to integrated circuits, and a methodology for time-domain pulsed-field antenna analysis, design, and optimization for mobile communication and radar applications.
Early life and education
De Hoop was born in Rotterdam, Netherlands in 1927. He received his MSc in electrical engineering in 1950 and his PhD in technological sciences in 1958, both cum laude from Delft University of Technology. He is the namesake of the Cagniard-de Hoop method, a modification of the Cagniard method.
Career
De Hoop worked as an assistant professor (1950–1957), associate professor (1957–1960), full professor (1960–1996), and Lorentz Chair emeritus professor (1996–present) for Delft University of Technology, his alma mater. He taught electromagnetic theory, applied mathematics, electrical engineering, mathematics, and computer science. In 1970, he founded the Laboratory of Electromagnetic Research at Delft; this has since developed into a world-class center for electromagnetics. He spent a year in 1956 as a research assistant at University of California's Institute of Geophysics in Los Angeles in the United States. During his time there, he created a modification of the Cagniard method for calculating impulsive wave propagation in layered media. This modification was later called the Cagniard-de Hoop method and is now considered a benchmark tool in analyzing time-domain wave propagation. He spent a year-long sabbatical at the Philips Natuurkundig Laboratorium in Eindhoven working on magnetic recording theory. Among his PhD students was Jacob Fokkema, later rector at Delft
https://en.wikipedia.org/wiki/Routing%20protocol | A routing protocol specifies how routers communicate with each other to distribute information that enables them to select paths between nodes on a computer network. Routers perform the traffic-directing functions on the Internet; data packets are forwarded through the networks of the Internet from router to router until they reach their destination computer. Routing algorithms determine the specific choice of route. Each router has prior knowledge only of networks attached to it directly. A routing protocol shares this information first among immediate neighbors, and then throughout the network. This way, routers gain knowledge of the topology of the network. The ability of routing protocols to dynamically adjust to changing conditions such as disabled connections and components, and to route data around obstructions, is what gives the Internet its fault tolerance and high availability.
The specific characteristics of routing protocols include the manner in which they avoid routing loops, the manner in which they select preferred routes using information about hop costs, the time they require to reach routing convergence, and their scalability.
Many routing protocols are defined in technical standards documents called RFCs.
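The neighbor-to-neighbor information sharing described above can be illustrated with a minimal distance-vector sketch (the three-router topology and link costs are invented for this example; real protocols such as RIP add timers, split horizon, and wire formats):

```python
# Minimal distance-vector routing: each router repeatedly offers its distance
# table to its direct neighbors (Bellman-Ford relaxation) until no table
# changes, at which point every router knows a shortest-path cost to every
# reachable destination -- knowledge of the topology spreads hop by hop.
links = {("A", "B"): 1, ("B", "C"): 2, ("A", "C"): 5}  # undirected link costs

nodes = {n for edge in links for n in edge}
neighbors = {n: {} for n in nodes}
for (u, v), cost in links.items():
    neighbors[u][v] = cost
    neighbors[v][u] = cost

# dist[r][d] = best known cost from router r to destination d
dist = {r: {r: 0} for r in nodes}

changed = True
while changed:
    changed = False
    for r in nodes:
        for nb, link_cost in neighbors[r].items():
            for dest, nb_cost in dist[nb].items():
                offer = link_cost + nb_cost
                if offer < dist[r].get(dest, float("inf")):
                    dist[r][dest] = offer
                    changed = True

print(dist["A"]["C"])  # A reaches C via B for 1 + 2 = 3, cheaper than the direct link at 5
```

The convergence loop is the point of interest: routing tables stabilize only after repeated exchanges, which is the "time to reach routing convergence" mentioned above.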
Types
Although there are many types of routing protocols, three major classes are in widespread use on IP networks:
Interior gateway protocols type 1, link-state routing protocols, such as OSPF and IS-IS
Interior gateway protocols type 2, distance-vector routing protocols, such as Routing Information Protocol, RIPv2, IGRP.
Exterior gateway protocols are routing protocols used on the Internet for exchanging routing info |
https://en.wikipedia.org/wiki/Bullet%20%28typography%29 | In typography, a bullet or bullet point (•) is a typographical symbol or glyph used to introduce items in a list. For example:
Point 1
Point 2
Point 3
The bullet symbol may take any of a variety of shapes, such as circular, square, diamond or arrow. Typical word processor software offers a wide selection of shapes and colors. Several regular symbols, such as * (asterisk), - (hyphen), . (period), and even o (lowercase Latin letter o), are conventionally used in ASCII-only text or other environments where bullet characters are not available. Historically, the index symbol (representing a hand with a pointing index finger) was popular for similar uses.
Lists made with bullets are called bulleted lists. The HTML element name for a bulleted list is "unordered list", because the list items are not arranged in numerical order (as they would be in a numbered list).
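The unordered-list element mentioned above can be written as a minimal HTML fragment (browsers render each `<li>` item with a bullet by default):

```html
<!-- An HTML "unordered list": the items carry no numbering,
     so the browser marks each one with a bullet glyph. -->
<ul>
  <li>Point 1</li>
  <li>Point 2</li>
  <li>Point 3</li>
</ul>
```

An ordered list (`<ol>`) would render the same items numbered instead, which is the contrast the element name draws.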
"Bullet points"
Items—known as "bullet points"—may be short phrases, single sentences, or of paragraph length. Bulleted items are not usually terminated with a full stop unless they are complete sentences. In some cases, however, the style guide for a given publication may call for every item except the last one in each bulleted list to be terminated with a semicolon, and the last item with a full stop. It is correct to terminate any bullet point with a full stop if the text within that item consists of one full sentence or more. Bullet points are usually used to highlight list elements.
Example of use for a bullet point list
Take for example this arbitrarily chosen statement "Bullets are most often used in technical writing, reference works, notes, and presentations". This statement may be presented using bullets or other techniques:
Technical writing
Reference works
Notes
Presentations
Alternatives to bulleted lists are numbered lists and outlines (lettered lists, hierarchical lists). They are used where either the order is important or to label the items for later referencing.
Other uses
The glyph is |
https://en.wikipedia.org/wiki/OPhone | OPhone, or OMS (Open Mobile System), is a mobile operating system running on the Linux kernel. It is based on technologies initially developed by Android Inc., a firm later purchased by Google, and work done by the Open Handset Alliance. The OPhone OS has appeared only on China Mobile phones, and the software was developed for China Mobile by software firm Borqs. A modified version of OMS has appeared on other carriers as Android+, also developed and maintained by Borqs. Android has been modified for local Chinese markets by China Mobile's OPhone Software Developers Network.
History
OPhone is a Linux-based smartphone software platform developed by China Mobile and based on the Android operating system developed by Google. OPhone is based on open source software and mobile internet technologies. For end-users, OPhone aims to provide cheap, low frills, entry-level smartphone access and a limited mobile internet experience using China Mobile's proprietary TD-SCDMA network, and its GSM network.
Software development
China Mobile consecutively released the 1.0 and 1.5 versions of the OPhone SDK for public use.
In February 2010, China Mobile released the 2.0 version of the SDK for public use. According to a Sina Tech release, this iteration would include support for the Windows Mobile API framework.
As of April 2010 around 600 apps had been developed specifically for OPhones.
See also
Android (operating system)
Baidu Yi
Borqs
Google Nexus
WebOS |
https://en.wikipedia.org/wiki/Local%20twistor | In differential geometry, the local twistor bundle is a specific vector bundle with connection that can be associated to any conformal manifold, at least locally. Intuitively, a local twistor is an association of a twistor space to each point of space-time, together with a conformally invariant connection that relates the twistor spaces at different points. This connection can have holonomy that obstructs the existence of "global" twistors (that is, solutions of the twistor equation in open sets).
Construction
Let M be a pseudo-Riemannian conformal manifold with a spin structure and a conformal metric of signature (p,q). The conformal group is the pseudo-orthogonal group O(p+1, q+1). There is a conformal Cartan connection on a bundle, the tractor bundle, of M. The spin group of O(p+1, q+1) admits a fundamental representation, the spin representation, and the associated bundle is the local twistor bundle.
Representation via Weyl spinors
Local twistors can be represented as pairs of Weyl spinors on M (in general from different spin representations, determined by the reality conditions specific to the signature). In the case of a four-dimensional Lorentzian manifold, such as the space-time of general relativity, a local twistor has the form
Here we use index conventions from , and and are two-component complex spinors for the Lorentz group .
Local twistor transport
The connection, sometimes called local twistor transport, is given by
Here is the canonical one-form and the Schouten tensor, contracted on one index with the canonical one-form. An analogous equation holds in other dimensions, with appropriate Clifford algebra multipliers between the two Weyl spin representations . In this formalism, the twistor equation is the requirement that a local twistor be parallel under the connection.
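For concreteness, in the four-dimensional Lorentzian setting, the transport equations can be written out in two-spinor notation. The following is a sketch in one common convention (following Penrose–Rindler-style index conventions; signs and factors of $i$ differ between references), not necessarily the convention used elsewhere in this article:

```latex
\nabla_{AA'}\,\omega^{B} = -i\,\varepsilon_{A}{}^{B}\,\pi_{A'},
\qquad
\nabla_{AA'}\,\pi_{B'} = -i\,P_{AA'BB'}\,\omega^{B},
```

where $P_{AA'BB'}$ is the Schouten tensor. A local twistor $(\omega^{A}, \pi_{A'})$ that is parallel under this connection satisfies the twistor equation, consistent with the remark above.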
Canonical filtration
In general, the local twistor bundle T is equipped with a short exact sequence of vector bundles
where and are two Weyl spin bundles. The bundle is a distinguish |
https://en.wikipedia.org/wiki/Quasi-sphere | In mathematics and theoretical physics, a quasi-sphere is a generalization of the hypersphere and the hyperplane to the context of a pseudo-Euclidean space. It may be described as the set of points for which the quadratic form for the space applied to the displacement vector from a centre point is a constant value, with the inclusion of hyperplanes as a limiting case.
Notation and terminology
This article uses the following notation and terminology:
A pseudo-Euclidean vector space, denoted , is a real vector space with a nondegenerate quadratic form with signature . The quadratic form is permitted to be definite (where or ), making this a generalization of a Euclidean vector space.
A pseudo-Euclidean space, denoted , is a real affine space in which displacement vectors are the elements of the space . It is distinguished from the vector space.
The quadratic form acting on a vector , denoted , is a generalization of the squared Euclidean distance in a Euclidean space. Élie Cartan calls the scalar square of .
The symmetric bilinear form acting on two vectors is denoted or . This is associated with the quadratic form .
Two vectors are orthogonal if .
A normal vector at a point of a quasi-sphere is a nonzero vector that is orthogonal to each vector in the tangent space at that point.
Definition
A quasi-sphere is a submanifold of a pseudo-Euclidean space consisting of the points for which the displacement vector from a reference point satisfies the equation
,
where and .
Since the degenerate case is permitted, this definition includes hyperplanes; it is thus a generalization of generalized circles and their analogues in any number of dimensions. This inclusion provides a more regular structure under conformal transformations than if they are omitted.
This definition has been generalized to affine spaces over complex numbers and quaternions by replacing the quadratic form with a Hermitian form.
A quasi-sphere in a quadratic space has a counter-sphere . Furt |
https://en.wikipedia.org/wiki/Cosmos%20%28category%20theory%29 | In the area of mathematics known as category theory, a cosmos is a symmetric closed monoidal category that is complete and cocomplete. Enriched category theory is often considered over a cosmos. |
https://en.wikipedia.org/wiki/TSX-32 | TSX-32 is a general-purpose 32-bit multi-user multitasking operating system for the x86 architecture platform, with a command-line user interface. It is compatible with some 16-bit DOS applications and supports the FAT16 and FAT32 file systems. It was developed by S&H Computer Systems and has been available since 1989.
DEC-oriented columnist Kevin G. Barkes noted that TSX-32 is "not a port of the PDP-11 TSX-Plus" and that it runs well on 386, 486 and Pentium-based systems. He reported a limitation: since it supports the MS-DOS (FAT) file system, filenames are restricted to DOS's 8+3 format.
TSX-Plus
An earlier non-DEC operating system, also from S&H, was named TSX-Plus. Released in 1980, TSX-Plus was the successor to TSX, released in 1976.
The strength of TSX-Plus was to simultaneously provide multiple users with the services of DEC's single-user RT-11. Depending on the PDP-11 model and the amount of memory, the system could support a minimum of 12 users (14–18 users on a 2 MB 11/73, depending on workload). A productivity feature called "virtual lines" "allows a single user to control several tasks from a single terminal."
History
S&H wrote the original TSX because, as founder Harry Sanders put it, spending $25K on a computer that could only support one user "bugged" him; the outcome was the initial four-user TSX in 1976.
For TSX-32, they said in an interview, "We started with a clean sheet of paper" rather than starting with a "port."
As of 2021, it appears to be defunct.
VAX
The company's product line was ported/expanded for the VAX line.
See also
Multiuser DOS Federation |
https://en.wikipedia.org/wiki/Silverquant | Silverquant is a labeling and detection method for DNA microarrays or protein microarrays. A synonym is colorimetric detection.
In contrast to classical signal detection on microarrays using fluorescence, colorimetric detection is more sensitive and ozone-stable.
Chemical reaction
The probe to be detected is labeled with biotin molecules. After incubation with a gold-coupled anti-biotin conjugate, silver nitrate and a reducing agent are added. The reaction then begins, with the gold particle serving as a nucleation point for the silver precipitation.
The reaction needs to be stopped after a specific time. A constant reaction time is essential to obtain comparable results.
Detection
The silver-stained spots on the microarray are clearly visible. By using a transmission microarray scanner, the signals are transformed into digital values which are finally available as an image file. |
https://en.wikipedia.org/wiki/Reef%20Life%20Survey | Reef Life Survey is a marine life monitoring programme based in Hobart, Tasmania. It is international in scope, but predominantly Australian, as a large proportion of the volunteers are Australian. Most of the surveys are done by volunteer recreational divers, collecting biodiversity data for marine conservation. The database is available to marine ecology researchers, and is used by several marine protected area managements in Australia, New Zealand, American Samoa and the eastern Pacific.
Function
Reef Life Survey provides data to improve biodiversity conservation and the sustainable management of marine resources. They collect and curate biodiversity information at spatial and temporal scales beyond those possible by most scientific dive teams which have to work with limited resources, by using volunteer recreational divers trained in the RLS survey procedures. The University of Tasmania houses and manages the RLS database, and the data is freely available to the public for non-profit purposes through public outputs, including their website.
History
Reef Life Survey was started by researchers at the University of Tasmania and was initially funded by the Commonwealth Environment Research Facilities (CERF) Program. The survey programme is now the core activity of the Reef Life Survey Foundation Incorporated, a not-for-profit Australian organisation.
Personnel
Reef Life Survey includes a volunteer network of recreational scuba divers, trained in the relevant skills, and an Advisory Committee. The advisory committee is made up of managers and scientists who use the collected data, and representatives of the recreational diver network.
Procedures
Standard survey procedures are used, matched to a variety of habitat topographies, and using simple equipment: a waterproof clipboard with record sheet, an underwater camera, and a 50 m surveyor's tape measure. The surveys are typically repeated at irregular intervals at listed sites, identified by GPS location, transect depth and directio
https://en.wikipedia.org/wiki/List%20of%20gene%20therapies | This article contains a list of commercially available gene therapies.
Gene therapies
Alipogene tiparvovec (Glybera): AAV-based treatment for lipoprotein lipase deficiency (no longer commercially available)
Axicabtagene ciloleucel (Yescarta): treatment for large B-cell lymphoma
Beremagene geperpavec (Vyjuvek): treatment for wounds in dystrophic epidermolysis bullosa
Betibeglogene autotemcel (Zynteglo): treatment for beta thalassemia
Brexucabtagene autoleucel (Tecartus): treatment for mantle cell lymphoma and acute lymphoblastic leukemia
Cambiogenplasmid (Neovasculgen): plasmid encoding vascular endothelial growth factor (VEGF), a treatment for peripheral artery disease
Ciltacabtagene autoleucel (Carvykti): treatment for multiple myeloma
Delandistrogene moxeparvovec (Elevidys): treatment for Duchenne muscular dystrophy
Elivaldogene autotemcel (Skysona): treatment for cerebral adrenoleukodystrophy
Etranacogene dezaparvovec (Hemgenix): AAV-based treatment for hemophilia B
Gendicine: treatment for head and neck squamous cell carcinoma
Idecabtagene vicleucel (Abecma): treatment for multiple myeloma
Nadofaragene firadenovec (Adstiladrin): treatment for bladder cancer
Onasemnogene abeparvovec (Zolgensma): AAV-based treatment for spinal muscular atrophy
Strimvelis: treatment for adenosine deaminase deficiency (ADA-SCID)
Talimogene laherparepvec (Imlygic): treatment for melanoma in patients who have recurring skin lesions
Tisagenlecleucel (Kymriah): treatment for B cell lymphoblastic leukemia
Valoctocogene roxaparvovec (Roctavian): treatment for hemophilia A
Voretigene neparvovec (Luxturna): AAV-based treatment for Leber congenital amaurosis
See also
FDA-approved CAR T cell therapies |
https://en.wikipedia.org/wiki/Hilbert%27s%20twenty-second%20problem | Hilbert's twenty-second problem is the penultimate entry in the celebrated list of 23 Hilbert problems compiled in 1900 by David Hilbert. It entails the uniformization of analytic relations by means of automorphic functions.
Problem statement
The entirety of the original problem statement is as follows:
As Poincaré was the first to prove, it is always possible to reduce any algebraic relation between two variables to uniformity by the use of automorphic functions of one variable. That is, if any algebraic equation in two variables be given, there can always be found for these variables two such single valued automorphic functions of a single variable that their substitution renders the given algebraic equation an identity. The generalization of this fundamental theorem to any analytic non-algebraic relations whatever between two variables has likewise been attempted with success by Poincaré, though by a way entirely different from that which served him in the special problem first mentioned. From Poincaré's proof of the possibility of reducing to uniformity an arbitrary analytic relation between two variables, however, it does not become apparent whether the resolving functions can be determined to meet certain additional conditions. Namely, it is not shown whether the two single valued functions of the one new variable can be so chosen that, while this variable traverses the regular domain of those functions, the totality of all regular points of the given analytic field are actually reached and represented. On the contrary it seems to be the case, from Poincaré's investigations, that there are beside the branch points certain others, in general infinitely many other discrete exceptional points of the analytic field, that can be reached only by making the new variable approach certain limiting points of the functions. In view of the fundamental importance of Poincaré's formulation of the question it seems to me that an elucidation and resolution of this difficu |
https://en.wikipedia.org/wiki/Minkowski%20inequality | In mathematical analysis, the Minkowski inequality establishes that the Lp spaces are normed vector spaces. Let $(S, \Sigma, \mu)$ be a measure space, let $1 \le p \le \infty$, and let $f$ and $g$ be elements of $L^p(S)$. Then $f + g$ is in $L^p(S)$, and we have the triangle inequality
$$\|f+g\|_p \le \|f\|_p + \|g\|_p,$$
with equality for $1 < p < \infty$ if and only if $f$ and $g$ are positively linearly dependent; that is, $f = \lambda g$ for some $\lambda \ge 0$ or $g = 0$. Here, the norm is given by
$$\|f\|_p = \left( \int |f|^p \, d\mu \right)^{1/p}$$
if $p < \infty$, or in the case $p = \infty$ by the essential supremum
$$\|f\|_\infty = \operatorname{ess\,sup}_{x \in S} |f(x)|.$$
The Minkowski inequality is the triangle inequality in $L^p(S)$. In fact, it is a special case of the more general fact
$$\|f\|_p = \sup_{\|g\|_q = 1} \int |fg| \, d\mu, \qquad \frac{1}{p} + \frac{1}{q} = 1,$$
where it is easy to see that the right-hand side satisfies the triangle inequality.
Like Hölder's inequality, the Minkowski inequality can be specialized to sequences and vectors by using the counting measure:
$$\left( \sum_{k=1}^n |x_k + y_k|^p \right)^{1/p} \le \left( \sum_{k=1}^n |x_k|^p \right)^{1/p} + \left( \sum_{k=1}^n |y_k|^p \right)^{1/p}$$
for all real (or complex) numbers $x_1, \dots, x_n$, $y_1, \dots, y_n$, where $n$ is the cardinality of $S$ (the number of elements in $S$).
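A quick numerical check of the sequence form of the inequality (the sample vectors and the choice $p = 3$ are arbitrary):

```python
# Verify (sum |x_k + y_k|^p)^(1/p) <= (sum |x_k|^p)^(1/p) + (sum |y_k|^p)^(1/p)
# for one arbitrary pair of real vectors and p = 3.
p = 3
x = [1.0, -2.0, 0.5, 3.0]
y = [0.25, 1.5, -4.0, 2.0]

def p_norm(v, p):
    """The l^p norm of a finite sequence: counting-measure L^p norm."""
    return sum(abs(t) ** p for t in v) ** (1.0 / p)

lhs = p_norm([a + b for a, b in zip(x, y)], p)
rhs = p_norm(x, p) + p_norm(y, p)
print(lhs <= rhs)  # True
```

For $p = 2$ this reduces to the familiar triangle inequality for Euclidean vectors.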
The inequality is named after the German mathematician Hermann Minkowski.
Proof
First, we prove that $f+g$ has finite $p$-norm if $f$ and $g$ both do, which follows by
$$|f+g|^p \le 2^{p-1} \left( |f|^p + |g|^p \right).$$
Indeed, here we use the fact that $h(x) = x^p$ is convex over $\mathbb{R}^+$ (for $p > 1$) and so, by the definition of convexity,
$$\left| \tfrac{1}{2} f + \tfrac{1}{2} g \right|^p \le \left( \tfrac{1}{2} |f| + \tfrac{1}{2} |g| \right)^p \le \tfrac{1}{2} |f|^p + \tfrac{1}{2} |g|^p.$$
This means that
$$|f+g|^p \le \tfrac{1}{2} |2f|^p + \tfrac{1}{2} |2g|^p = 2^{p-1} \left( |f|^p + |g|^p \right).$$
Now, we can legitimately talk about $\|f+g\|_p$. If it is zero, then Minkowski's inequality holds. We now assume that $\|f+g\|_p$ is not zero. Using the triangle inequality and then Hölder's inequality, we find that
$$\|f+g\|_p^p = \int |f+g|^p \, d\mu \le \int \left( |f| + |g| \right) |f+g|^{p-1} \, d\mu \le \left( \|f\|_p + \|g\|_p \right) \left( \int |f+g|^{(p-1)\frac{p}{p-1}} \, d\mu \right)^{1-\frac{1}{p}} = \left( \|f\|_p + \|g\|_p \right) \frac{\|f+g\|_p^p}{\|f+g\|_p}.$$
We obtain Minkowski's inequality by multiplying both sides by $\dfrac{\|f+g\|_p}{\|f+g\|_p^p}$.
Minkowski's integral inequality
Suppose that $(S_1, \mu_1)$ and $(S_2, \mu_2)$ are two $\sigma$-finite measure spaces and $F : S_1 \times S_2 \to \mathbb{R}$ is measurable. Then Minkowski's integral inequality is, for $p \in [1, \infty)$:
$$\left[ \int_{S_2} \left| \int_{S_1} F(x, y) \, \mu_1(\mathrm{d}x) \right|^p \mu_2(\mathrm{d}y) \right]^{1/p} \le \int_{S_1} \left[ \int_{S_2} |F(x, y)|^p \, \mu_2(\mathrm{d}y) \right]^{1/p} \mu_1(\mathrm{d}x),$$
with obvious modifications in the case $p = \infty$. If $p > 1$ and both sides are finite, then equality holds only if $|F(x, y)| = \varphi(x) \, \psi(y)$ a.e. for some non-negative measurable functions $\varphi$ and $\psi$.
If $\mu_1$ is the counting measure on a two-point set $S_1 = \{1, 2\}$, then Minkowski's integral inequality gives the usual Minkowski inequality as a special case: putting $f_i(y) = F(i, y)$ for $i = 1, 2$, the integral inequality gives
$$\|f_1 + f_2\|_p = \left[ \int_{S_2} \left| \sum_{i=1}^{2} F(i, y) \right|^p \mu_2(\mathrm{d}y) \right]^{1/p} \le \sum_{i=1}^{2} \left[ \int_{S_2} |F(i, y)|^p \, \mu_2(\mathrm{d}y) \right]^{1/p} = \|f_1\|_p + \|f_2\|_p.$$
If the measurable function $F : S_1 \times S_2 \to \mathbb{R}$ is non-negative, then for all $1 \le p \le q \le \infty$,
$$\left\| \, \|F(\cdot, s_2)\|_{L^p(S_1)} \, \right\|_{L^q(S_2)} \le \left\| \, \|F(s_1, \cdot)\|_{L^q(S_2)} \, \right\|_{L^p(S_1)}.$$
This notation has been generalized to
$$\|f\|_{p,q} = \left[ \int_{\mathbb{R}^n} \left( \int_{\mathbb{R}^m} |f(x, y)|^p \, \mathrm{d}x \right)^{q/p} \mathrm{d}y \right]^{1/q}$$
for $f : \mathbb{R}^{m+n} \to E$, with $1 \le p, q \le \infty$. Using this notation, manipulation of the expon
https://en.wikipedia.org/wiki/Monatomic%20ion | A monatomic ion (also called simple ion) is an ion consisting of exactly one atom. If, instead of being monatomic, an ion contains more than one atom, even if these are of the same element, it is called a polyatomic ion. For example, calcium carbonate consists of the monatomic cation Ca2+ and the polyatomic anion ; both pentazenium () and azide () are polyatomic as well.
A type I binary ionic compound contains a metal that forms only one type of ion. A type II ionic compound contains a metal that forms more than one type of ion, i.e., the same element in different oxidation states.
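The charge balancing behind such compounds can be sketched in a few lines: the smallest electrically neutral combination of a monatomic cation and anion takes the least common multiple of the two charge magnitudes (the "crisscross" rule). The function below is an illustrative sketch, not a chemistry library:

```python
from math import gcd

def neutral_formula(cation, cation_charge, anion, anion_charge):
    """Smallest neutral combination of a monatomic cation and anion.
    Charges are given as magnitudes, e.g. Ca2+ and N3- -> ("Ca", 2, "N", 3)."""
    lcm = cation_charge * anion_charge // gcd(cation_charge, anion_charge)
    n_cat, n_an = lcm // cation_charge, lcm // anion_charge

    def fmt(symbol, n):
        # Subscript 1 is conventionally omitted
        return symbol if n == 1 else f"{symbol}{n}"

    return fmt(cation, n_cat) + fmt(anion, n_an)

print(neutral_formula("Ca", 2, "N", 3))   # Ca3N2
print(neutral_formula("Na", 1, "Cl", 1))  # NaCl
print(neutral_formula("Fe", 3, "O", 2))   # Fe2O3
```

For a type II metal such as iron, the charge passed in distinguishes ferrous (2) from ferric (3) compounds.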
{|class="wikitable"
|-
! colspan="2" | Common type I monatomic cations
|-
| Hydrogen
| H+
|-
| Lithium
| Li+
|-
| Sodium
| Na+
|-
| Potassium
| K+
|-
| Rubidium
| Rb+
|-
| Caesium
| Cs+
|-
| Magnesium
| Mg2+
|-
| Calcium
| Ca2+
|-
| Strontium
| Sr2+
|-
| Barium
| Ba2+
|-
| Aluminium
| Al3+
|-
| Silver
| Ag+
|-
| Zinc
| Zn2+
|-
|}
{|class="wikitable"
|-
! colspan="3" | Common type II monatomic cations
|-
| iron(II)
| Fe2+
| ferrous
|-
| iron(III)
| Fe3+
| ferric
|-
| copper(I)
| Cu+
| cuprous
|-
| copper(II)
| Cu2+
| cupric
|-
| cobalt(II)
| Co2+
| cobaltous
|-
| cobalt(III)
| Co3+
| cobaltic
|-
| tin(II)
| Sn2+
| stannous
|-
| tin(IV)
| Sn4+
| stannic
|}
{|class="wikitable"
|-
! colspan="2" | Common monatomic anions
|-
| hydride
| H−
|-
| fluoride
| F−
|-
| chloride
| Cl−
|-
| bromide
| Br−
|-
| iodide
| I−
|-
| oxide
| O2−
|-
| sulfide
| S2−
|-
| nitride
| N3−
|-
| phosphide
| P3−
|-
|} |
https://en.wikipedia.org/wiki/Ribozyviria | Ribozyviria is a realm of satellite nucleic acids — infectious agents that resemble viruses but cannot replicate without a helper virus. Established in ICTV TaxoProp 2020.012D, the realm is named after the presence of genomic and antigenomic ribozymes of the Deltavirus type. The agents in Ribozyviria are satellite nucleic acids, which are distinct from satellite viruses in that they do not encode all of their own structural proteins but require proteins from their helper viruses in order to assemble. Additional common features include a rod-like structure, an RNA-binding "delta antigen" encoded in the genome, and animal hosts. Furthermore, the genomes of these viruses range between around 1,547 and 1,735 nt, they encode a hammerhead ribozyme or a hepatitis delta virus ribozyme, and their coding capacity involves only one conserved protein. Most lineages of this realm are poorly understood, the notable exception being the genus Deltavirus, which comprises the causal agents of hepatitis D in humans.
The realm Ribozyviria has an unclear evolutionary origin. It has been proposed that they may have derived from retrozymes (a family of retrotransposons) or a viroid-like element (i.e. viroids and satellites) with capsid protein capture.
Taxonomy
Historical development
The first taxa of this realm to receive acceptance by the ICTV were the species Hepatitis delta virus and its genus Deltavirus, in 1993. Deltavirus remained unassigned to any higher taxa until 2018, when the ICTV mistakenly classified Deltavirus within the then newly established realm Riboviria. In 2019, this error was rectified and Deltavirus returned to its original position. In 2020, Hepatitis delta virus was abolished and replaced with eight new species, and the taxonomy developed to reach its current form, detailed below.
Current status
Ribozyviria contains a single family, Kolmioviridae, with no intermediate taxa between realm and family. This family contains eight genera.
The names of all gen |
https://en.wikipedia.org/wiki/LIONsolver | LIONsolver is integrated software for data mining, business intelligence, analytics, modeling, and the reactive business intelligence approach. A non-profit version is also available as LIONoso.
LIONsolver is used to build models, visualize them, and improve business and engineering processes.
It is a tool for decision making based on data and quantitative models, and it can be connected to most databases and external programs.
The software is fully integrated with the Grapheur business intelligence software and is intended for more advanced users.
Overview
LIONsolver originates from research principles in Reactive Search Optimization advocating the use of self-tuning schemes acting while a software
system is running. Learning and Intelligent OptimizatioN refers to the integration of online machine learning schemes into the optimization software, so that
it becomes capable of learning from its previous runs and from human feedback.
A related approach is that of Programming by Optimization,
which provides a direct way of defining design spaces for Reactive Search Optimization, and
that of Autonomous Search,
which advocates self-adapting problem-solving algorithms.
Version 2.0 of the software was released on Oct 1, 2011, covering also the Unix and Mac OS X operating
systems in addition to Windows.
The modeling components include neural networks, polynomials, locally weighted Bayesian regression, k-means clustering, and self-organizing maps. A free academic license for non-commercial use and class use is available.
The software architecture of LIONsolver
permits interactive multi-objective optimization, with a user interface for visualizing the results and facilitating
the solution analysis and decision making process.
The architecture allows for problem-specific extensions, and it is
applicable as a post-processing tool for all optimization schemes with a number of
different potential solutions. When the architecture is tightly coupled to a specific
problem-solving or optimiz |
https://en.wikipedia.org/wiki/Proceedings%20of%20the%20Physical%20Society | The Proceedings of the Physical Society was a journal on the subject of physics, originally associated with the Physical Society of London, England. In 1968, it was replaced by the Journal of Physics.
Journal history
1874–1925: Proceedings of the Physical Society of London
1926–1948: Proceedings of the Physical Society
1949–1957: Proceedings of the Physical Society, Section A
1949–1957: Proceedings of the Physical Society, Section B
1958–1967: Proceedings of the Physical Society
External links
Electronic access from the Institute of Physics (IoP)
Physics journals
IOP Publishing academic journals
Academic journals associated with learned and professional societies of the United Kingdom
Defunct journals of the United Kingdom |
https://en.wikipedia.org/wiki/Takeshi%20Saito%20%28mathematician%29 | Takeshi Saito (斎藤 毅 Saitō Takeshi, born 9 September 1961) is a Japanese mathematician, specializing in some areas of number theory and algebraic geometry. His thesis advisor was Kazuya Kato.
Saito was an invited speaker of the International Congress of Mathematicians in 2010.
Selected publications
Articles
Books |
https://en.wikipedia.org/wiki/Boot%20Service%20Discovery%20Protocol | Boot Service Discovery Protocol (BSDP) is an Apple-developed, standards-conforming extension of DHCP. It allows Macintosh computers to boot from bootable images on a network instead of local storage media such as CD, DVD, or hard disk. The DHCP options used are the "vendor-specific information" option (number 43) and the "vendor class identifier" option (number 60).
There are three versions of BSDP, though usually version 1.0 is used. All versions enable a client to choose from several bootable images offered by a server.
The reference implementation of BSDP is Darwin's BOOTP server, which is part of Mac OS's NetBoot feature.
Description
Contents of DHCP Vendor Class Identifier
The DHCP server and client send a vendor class option that contains an ASCII-encoded string with three parts delimited by a / character. The first part is AAPLBSDPC, which advertises BSDP capability. The second part is the client's architecture ("ppc" or "i386"). The third part is a system identifier. For example, an Intel-based iMac sends
AAPLBSDPC/i386/iMac4,1
as its vendor class.
Contents of DHCP Vendor Specific Information Options
DHCP Option 43 is reserved for vendor specific information. This information is stored in the following format:
Code Len Vendor-specific information
+-----+-----+-----+-----+---
| 43 | n | i1 | i2 | ...
+-----+-----+-----+-----+---
If the vendor wants to convey multiple options within this option field, this is done with encapsulated vendor-specific extensions. Vendor encapsulated extensions contain one or more concatenated fields. Each field consists of:
The following table describes the possible field types. All numeric fields are interpreted as unsigned and Big Endian integers.
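Given the code/length/value layout above, a minimal parser for the vendor-encapsulated fields might look like the following sketch (illustrative only; a real DHCP implementation would also handle the pad and end option bytes and malformed input):

```python
def parse_vendor_options(data: bytes) -> dict:
    """Parse DHCP option 43 vendor-encapsulated suboptions.

    Each concatenated field is: a 1-byte code, a 1-byte length n,
    followed by n value bytes (interpreted per the field-type table)."""
    fields, i = {}, 0
    while i + 2 <= len(data):
        code, length = data[i], data[i + 1]
        fields[code] = data[i + 2 : i + 2 + length]
        i += 2 + length
    return fields

# Illustrative: suboption code 0x01, length 0x01, value 0x02
# (a BSDP message-type field carrying "SELECT", per the example below)
opts = parse_vendor_options(bytes([0x01, 0x01, 0x02]))
```

All numeric values are read as unsigned big-endian integers, matching the convention stated above.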
Example
The following example illustrates the construction of the Vendor Encapsulated Option:
The first field here, 01 01 02, means that the packet is a BSDP "SELECT" message. The 01 decla |
https://en.wikipedia.org/wiki/Retinal%20waves | Retinal waves are spontaneous bursts of action potentials that propagate in a wave-like fashion across the developing retina. These waves occur before rod and cone maturation and before vision can occur. The signals from retinal waves drive the activity in the dorsal lateral geniculate nucleus (dLGN) and the primary visual cortex. The waves are thought to propagate across neighboring cells in random directions determined by periods of refractoriness that follow the initial depolarization. Retinal waves are thought to have properties that define early connectivity of circuits and synapses between cells in the retina. There is still much debate about the exact role of retinal waves. Some contend that the waves are instructional in the formation of retinogeniculate pathways, while others argue that the activity is necessary but not instructional in the formation of retinogeniculate pathways.
Discovery
One of the first scientists to theorize the existence of spontaneous cascades of electrical activity during retinal development was computational neurobiologist David J. Willshaw. He proposed that adjacent cells generate electrical activity in a wave-like formation through layers of interconnected pre-synaptic and postsynaptic cells. Activity propagating through a close span of pre- and postsynaptic cells is thought to result in strong electrical activity in comparison to pre- and postsynaptic cells that are farther apart, which results in weaker activity. Willshaw thought this difference in the firing strength and the location of cells was responsible for determining the activities' boundaries. The lateral movement of firing from neighboring cell to neighboring cell, starting in one random area of cells and moving throughout both the pre- and postsynaptic layers, is thought to be responsible for the formation of the retinotopic map. To simulate the cascade of electrical activity, Willshaw wrote a computer program to demonstrate the movement of electrical activity betwee |
https://en.wikipedia.org/wiki/Lone%20Star%20Toys | Lone Star Products Ltd. was the brand name used by the British company Die Cast Machine Tools Ltd (DCMT) for its toy products. DCMT was based in Welham Green, Hertfordshire, north of London.
Company history
Starting as early as 1939, DCMT manufactured die cast toys for children. The 'Lone Star' name was chosen because of a demand at the time for toy guns and rifles popular in the Western films in cinemas all over Britain. Eventually, the company also made tie-in toy guns licensed from the James Bond films and The Man From U.N.C.L.E. TV series.
Toy soldiers
Harry Eagles, one of the sons of Henry George Eagles, who co-founded Crescent Toys, was known by the nickname Harvey. When Crescent moved to South Wales in 1949, Eagles remained in London, where he started the Harvey Toy Company, producing figures in hollowcast metal. Eagles sold his trade name and designs to Lone Star, where a variety of plastic soldiers were sold from 1955 to 1976.
Vehicles
Competition with Dinky and Corgi
Interpreting the base of a Lone Star vehicle can be difficult; some Impy Toys bases read: "Lone Star Road-Master Impy Super Cars".
To keep up with competitors such as Corgi and Dinky, Lone Star began producing Corgi-sized diecast toy vehicles in 1956 with its Road-Master series (later spelled without the hyphen). Castings on the earlier vehicles, though handsome, were a bit cruder than the competition. For example, the double-deck bus had its casting line, for its two halves, right down the centre of the roof. Also, most earlier Lone Stars have simpler bumper, grille and body detail than Corgi or Dinky.
The Impy Series
Much changed with the introduction of the Impy line in 1966. Bright new packaging was introduced while the older, larger, Road-Master series was discontinued (though the name "Roadmaster" was still used). The new cars were a smaller three and a half inch size, similar to Mini-Dinkys, and were advertised as the "cars with everything". For example, the 1963 Chrysler Imper |
https://en.wikipedia.org/wiki/Generalized%20Multi-Protocol%20Label%20Switching | Generalized Multi-Protocol Label Switching (GMPLS) is a protocol suite extending MPLS to manage further classes of interfaces and switching technologies other than packet interfaces and switching, such as time-division multiplexing, layer-2 switching, wavelength switching and fiber-switching.
Differences between MPLS and GMPLS
Generalized MPLS differs from traditional MPLS in that it extends support to multiple types of switching such as TDM, wavelength and fiber (port) switching. For instance, GMPLS is the de facto control plane of wavelength switched optical network (WSON). The support for the additional types of switching has driven GMPLS to extend certain base functions of traditional MPLS and, in some cases, to add functionality.
These changes and additions impact basic label-switched path (LSP) properties: how labels are requested and communicated, the unidirectional nature of LSPs, how errors are propagated, and information provided for synchronizing the ingress and egress LSRs.
How GMPLS works
GMPLS is based on Generalized Labels. The Generalized Label is a label that can represent either (a) a single fiber in a bundle, (b) a single waveband within fiber, (c) a single wavelength within a waveband (or fiber), or (d) a set of time-slots within a wavelength (or fiber). The Generalized Label can also carry a label that represents a generic MPLS label, a Frame Relay label, or an ATM label.
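The label categories above can be sketched as a small data model. This is illustrative only: the actual Generalized Label encoding is defined in the GMPLS signaling specification (RFC 3471), and the names and fields below are our own:

```python
from dataclasses import dataclass
from enum import Enum

class SwitchingType(Enum):
    """Resource classes a Generalized Label can identify (illustrative)."""
    FIBER = "single fiber in a bundle"
    WAVEBAND = "single waveband within a fiber"
    LAMBDA = "single wavelength within a waveband or fiber"
    TDM = "set of time-slots within a wavelength or fiber"
    PACKET = "generic MPLS / Frame Relay / ATM label"

@dataclass(frozen=True)
class GeneralizedLabel:
    switching: SwitchingType
    value: int  # e.g. a fiber index, wavelength identifier, or MPLS label

# A label selecting one wavelength (identifier 42 is arbitrary here)
label = GeneralizedLabel(SwitchingType.LAMBDA, value=42)
```

The point of the sketch is that, unlike a plain MPLS label (a 20-bit packet header value), a generalized label's meaning depends on the switching type it accompanies.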
GMPLS is composed of three main protocols:
Resource Reservation Protocol with Traffic Engineering extensions (RSVP-TE) signaling protocol.
Open Shortest Path First with Traffic Engineering extensions (OSPF-TE) routing protocol.
Link Management Protocol (LMP).
See also
Wavelength switched optical network (WSON)
Automatic switched-transport network (ASTN) |
https://en.wikipedia.org/wiki/Incremental%20build%20model | The incremental build model is a method of software development where the product is designed, implemented and tested incrementally (a little more is added each time) until the product is finished. It involves both development and maintenance. The product is defined as finished when it satisfies all of its requirements. This model combines the elements of the waterfall model with the iterative philosophy of prototyping.
According to the Project Management Institute, an incremental approach is an "adaptive development approach in which the deliverable is produced successively, adding
functionality until the deliverable contains the necessary and
sufficient capability to be considered complete."
The product is decomposed into a number of components, each of which is designed and built separately (termed as builds).
Each component is delivered to the client when it is complete. This allows partial utilization of the product and avoids a long
development time. It also avoids a large initial capital outlay and subsequent long waiting period. This model of development also helps ease the traumatic effect of introducing a completely new system all at once.
Incremental model
The incremental model applies the waterfall model incrementally.
The series of releases is referred to as “increments”, with each increment providing more functionality to the customers. After the first increment, a core product is delivered, which can already be used by the customer. Based on customer feedback, a plan is developed for the next increments, and modifications are made accordingly. This process continues, with increments being delivered until the complete product is delivered. The incremental philosophy is also used in the agile process model (see agile modeling).
The incremental model can be applied to DevOps, where it centers on minimizing the risk and cost of a DevOps adoption while building the necessary in-house skillset and momentum.
Characteristics of Increment |
https://en.wikipedia.org/wiki/Stationary%20orbit | In celestial mechanics, the term stationary orbit refers to an orbit around a planet or moon where the orbiting satellite or spacecraft remains orbiting over the same spot on the surface. From the ground, the satellite would appear to be standing still, hovering above the surface in the same spot, day after day.
In practice, this is accomplished in an equatorial orbit, by matching the rotation of the surface below: the satellite is placed at the particular altitude where its orbital speed matches that rotation. Because the speed slowly decays, an additional boost is needed from time to time to restore it; conversely, a retro-rocket can be fired to slow the satellite when it moves too fast.
The stationary-orbit region of space is known as the Clarke Belt, named after British science fiction writer Arthur C. Clarke, who published the idea in Wireless World magazine in 1945. A stationary orbit is sometimes referred to as a "fixed orbit".
Stationary Earth orbit
Around the Earth, stationary satellites orbit at altitudes of approximately 35,786 km (22,236 mi). Writing in 1945, the science-fiction author Arthur C. Clarke imagined communications satellites as travelling in stationary orbits, where those satellites would travel around the Earth at the same speed the globe is spinning, making them hover stationary over one spot on the Earth's surface.
A satellite being propelled into place, into a stationary orbit, is first fired into a special equatorial orbit called a "geostationary transfer orbit" (GTO). Within this oval-shaped (elliptical) orbit, the satellite will alternately swing out to high altitude and then back down to an altitude of only about 100 miles (160 km) above the Earth (223 times closer). Then, at a planned time and place, an attached "kick motor" will push the satellite out to maintain an even, circular orbit at the 22,300-mile altitude.
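The stationary altitude follows from Kepler's third law applied to Earth's sidereal rotation period, as the following sketch shows:

```python
import math

MU_EARTH = 3.986004418e14   # Earth's standard gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.0905   # Earth's rotation period relative to the stars, s
R_EARTH = 6378.137e3        # Earth's equatorial radius, m

# Kepler's third law: T^2 = 4*pi^2 * a^3 / mu  =>  a = (mu * T^2 / (4*pi^2))^(1/3)
a = (MU_EARTH * SIDEREAL_DAY ** 2 / (4 * math.pi ** 2)) ** (1.0 / 3.0)
altitude_km = (a - R_EARTH) / 1000.0
print(f"geostationary altitude = {altitude_km:,.0f} km")  # about 35,786 km
```

The semi-major axis comes out at roughly 42,164 km from Earth's center, i.e. about 35,786 km (22,236 mi) above the equator, the figure usually rounded to 22,300 miles.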
Stationary Mars orbit
An areostationary orbit or areosynchronous equatorial orbit (abbreviated AEO) is a circular areosynchronous orbit in the Martian equatorial plan |
https://en.wikipedia.org/wiki/Transport%20phenomena | In engineering, physics, and chemistry, the study of transport phenomena concerns the exchange of mass, energy, charge, momentum and angular momentum between observed and studied systems. While it draws from fields as diverse as continuum mechanics and thermodynamics, it places a heavy emphasis on the commonalities between the topics covered. Mass, momentum, and heat transport all share a very similar mathematical framework, and the parallels between them are exploited in the study of transport phenomena to draw deep mathematical connections that often provide very useful tools in the analysis of one field that are directly derived from the others.
The fundamental analysis in all three subfields of mass, heat, and momentum transfer are often grounded in the simple principle that the total sum of the quantities being studied must be conserved by the system and its environment. Thus, the different phenomena that lead to transport are each considered individually with the knowledge that the sum of their contributions must equal zero. This principle is useful for calculating many relevant quantities. For example, in fluid mechanics, a common use of transport analysis is to determine the velocity profile of a fluid flowing through a rigid volume.
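As a concrete instance of such an analysis, momentum conservation for steady laminar flow in a rigid circular pipe yields the parabolic Hagen-Poiseuille velocity profile u(r) = (-dp/dx) * (R^2 - r^2) / (4 * mu). The sketch below uses illustrative parameter values:

```python
def poiseuille_velocity(r, R, dp_dx, mu):
    """Axial velocity u(r) for steady laminar flow in a rigid pipe:
    u(r) = (-dp/dx) * (R^2 - r^2) / (4 * mu)  (Hagen-Poiseuille profile)."""
    return -dp_dx * (R ** 2 - r ** 2) / (4.0 * mu)

R = 0.01        # pipe radius, m (illustrative)
dp_dx = -100.0  # pressure gradient along the pipe, Pa/m (illustrative)
mu = 1.0e-3     # dynamic viscosity, Pa*s (roughly water at room temperature)

u_center = poiseuille_velocity(0.0, R, dp_dx, mu)  # maximum, on the axis
u_wall = poiseuille_velocity(R, R, dp_dx, mu)      # no-slip condition: 0
```

The profile is fastest on the axis and zero at the wall, reflecting the no-slip boundary condition imposed by the conservation analysis.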
Transport phenomena are ubiquitous throughout the engineering disciplines. Some of the most common examples of transport analysis in engineering are seen in the fields of process, chemical, biological, and mechanical engineering, but the subject is a fundamental component of the curriculum in all disciplines involved in any way with fluid mechanics, heat transfer, and mass transfer. It is now considered to be a part of the engineering discipline as much as thermodynamics, mechanics, and electromagnetism.
Transport phenomena encompass all agents of physical change in the universe. Moreover, they are considered to be fundamental building blocks which developed the universe, and which are responsible for the success of all life
https://en.wikipedia.org/wiki/Relativistic%20runaway%20electron%20avalanche | A relativistic runaway electron avalanche (RREA) is an avalanche growth of a population of relativistic electrons driven through a material (typically air) by an electric field. RREA has been hypothesized to be related to lightning initiation, terrestrial gamma-ray flashes, sprite lightning, and spark development. RREA is unique as it can occur at electric fields an order of magnitude lower than the dielectric strength of the material.
Mechanism
When an electric field is applied to a material, free electrons will drift slowly through the material, as described by the electron mobility. For low-energy electrons, faster drift velocities result in more interactions with surrounding particles. These interactions create a form of friction that slows the electrons down. Thus, for low-energy cases, the electron velocities tend to stabilize.
At higher energies, above about 100 keV, these collisional events become less common as the mean free path of the electron rises. These higher-energy electrons thus see less frictional force as their velocity increases. In the presence of the same electric field, these electrons will continue accelerating, "running away".
As runaway electrons gain energy from an electric field, they occasionally collide with atoms in the material, knocking off secondary electrons. If the secondary electrons also have high enough energy to run away, they too accelerate to high energies, produce further secondary electrons, etc. As such, the total number of energetic electrons grows exponentially in an avalanche.
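This exponential growth is commonly written N(z) = N0 * exp(z / lambda), where lambda is the avalanche (e-folding) length. The sketch below uses an assumed, purely illustrative value for lambda:

```python
import math

def avalanche_size(n_seed, z, avalanche_length):
    """Number of runaway electrons after propagating a distance z,
    in the simple exponential model N(z) = N0 * exp(z / lambda)."""
    return n_seed * math.exp(z / avalanche_length)

# Illustrative numbers: one seed electron (e.g. from a cosmic ray),
# an assumed avalanche length of 50 m, growth over 500 m of field region.
n = avalanche_size(1, 500.0, 50.0)  # ten e-foldings, about 2.2e4 electrons
```

In reality the avalanche length depends strongly on the field strength and air density; the point of the sketch is only the exponential scaling with distance.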
Seeding
The RREA mechanism above only describes the growth of the avalanche. An initial energetic electron is needed to start the process. In ambient air, such energetic electrons typically come from cosmic rays. In very strong electric fields, stronger than the maximum frictional force experienced by electrons, even low-energy ("cold" or "thermal") electrons can accelerate to relativistic energies, a process dubbed "thermal runaway."
Feedbac |
https://en.wikipedia.org/wiki/Features%2C%20events%2C%20and%20processes | Features, Events, and Processes (FEP) are terms used in the fields of radioactive waste management, carbon capture and storage, and hydraulic fracturing to define relevant scenarios for safety assessment studies. For a radioactive waste repository, features would include the characteristics of the site, such as the type of soil or geological formation the repository is to be built on or under. Events would include things that may or will occur in the future, such as glaciations, droughts, earthquakes, or the formation of faults. Processes are things that are ongoing, such as the erosion or subsidence of the landform on or near which the site is located.
Several catalogues of FEPs are publicly available, among others one elaborated for the NEA Clay Club dealing with the disposal of radioactive waste in deep clay formations,
and those compiled for deep crystalline rocks (granite) by Svensk Kärnbränslehantering AB (SKB), the Swedish Nuclear Fuel and Waste Management Company.
https://en.wikipedia.org/wiki/Backup%20validation | Backup validation is the process whereby owners of computer data may examine how their data was backed up in order to understand what their risk of data loss might be. It also covers the optimization of such processes, charging for them, and estimating future requirements, sometimes called capacity planning.
History
Over the past several decades (leading up to 2005), organizations (banks, governments, schools, manufacturers and others) have increased their reliance more on "Open Systems" and less on "Closed Systems". For example, 25 years ago, a large bank might have most if not all of its critical data housed in an IBM mainframe computer (a "Closed System"), but today, that same bank might store a substantially greater portion of its critical data in spreadsheets, databases, or even word processing documents (i.e., "Open Systems"). The problem with Open Systems is, primarily, their unpredictable nature. The very nature of an Open System is that it is exposed to potentially thousands if not millions of variables ranging from network overloads to computer virus attacks to simple software incompatibility. Any one, or indeed several in combination, of these factors may result in either lost data and/or compromised data backup attempts. These types of problems do not generally occur on Closed Systems, or at least, in unpredictable ways. In the "old days", backups were a nicely contained affair. Today, because of the ubiquity of, and dependence upon, Open Systems, an entire industry has developed around data protection. Three key elements of such data protection are Validation, Optimization and Chargeback.
Validation
Validation is the process of finding out whether a backup attempt succeeded or not, or whether the data is backed up enough to consider it "protected". This process usually involves the examination of log files, the "smoking gun" often left behind after a backup attempt takes place, as well as media databases, data traffic and even magnetic tapes.
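A minimal log-file scan of the kind described above might look like the following sketch. The log format and success/failure markers here are entirely hypothetical; every real backup product has its own log conventions:

```python
import re

# Hypothetical markers, for illustration only
SUCCESS = re.compile(r"backup (completed|succeeded)", re.IGNORECASE)
FAILURE = re.compile(r"(error|failed|aborted)", re.IGNORECASE)

def validate_backup_log(lines):
    """Classify a backup attempt from its log lines:
    'failed' if any failure marker appears, 'succeeded' if a success
    marker appears with no failures, otherwise 'unknown'."""
    has_success = any(SUCCESS.search(line) for line in lines)
    has_failure = any(FAILURE.search(line) for line in lines)
    if has_failure:
        return "failed"
    return "succeeded" if has_success else "unknown"

status = validate_backup_log([
    "2005-03-01 02:00:01 starting backup of /home",
    "2005-03-01 02:41:17 backup completed, 1.2 GB written",
])
```

The "unknown" outcome matters in practice: a backup that produced no log at all is unverified, not protected.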
https://en.wikipedia.org/wiki/Corneometry | Corneometry is a widely practiced method for the measurement of skin hydration. It uses a capacitive sensor to measure the relative permittivity of the upper skin layers. Because this permittivity depends on the water content of the skin, the measured value serves as a measure of skin hydration.
The name corneometry is derived from the German trademark Corneometer. In 1979 the first commercial instrument for measuring skin hydration was sold under this name.
Literature
Skin physiology
Skin tests |
https://en.wikipedia.org/wiki/Federico%20Faggin | Federico Faggin (, ; born 1 December 1941) is an Italian physicist, engineer, inventor and entrepreneur. He is best known for designing the first commercial microprocessor, the Intel 4004. He led the 4004 (MCS-4) project and the design group during the first five years of Intel's microprocessor effort. Faggin also created, while working at Fairchild Semiconductor in 1968, the self-aligned MOS (metal-oxide-semiconductor) silicon-gate technology (SGT), which made possible MOS semiconductor memory chips, CCD image sensors, and the microprocessor. After the 4004, he led development of the Intel 8008 and 8080, using his SGT methodology for random logic chip design, which was essential to the creation of early Intel microprocessors. He was co-founder (with Ralph Ungermann) and CEO of Zilog, the first company solely dedicated to microprocessors, and led the development of the Zilog Z80 and Z8 processors. He was later the co-founder and CEO of Cygnet Technologies, and then Synaptics.
In 2010, he received the 2009 National Medal of Technology and Innovation, the highest honor the United States confers for achievements related to technological progress. In 2011, Faggin founded the Federico and Elvia Faggin Foundation to support the scientific study of consciousness at US universities and research institutes. In 2015, the Faggin Foundation helped to establish a $1 million endowment for the Faggin Family Presidential Chair in the Physics of Information at UC Santa Cruz to promote the study of "fundamental questions at the interface of physics and related fields including mathematics, complex systems, biophysics, and cognitive science, with the unifying theme of information in physics."
Education and early career
Born in Vicenza, Italy, Federico grew up in an intellectual environment. His father, Giuseppe Faggin, was a scholar who wrote many academic books and translated, with commentaries, the Enneads of Plotinus from the original Greek into modern Italian. Federico had a str |
https://en.wikipedia.org/wiki/%CE%91-Naphthylthiourea | α-Naphthylthiourea (ANTU) is an organosulfur compound with the formula C10H7NHC(S)NH2. This is a white, crystalline powder, although commercial samples may be off-white. It is used as a rodenticide and as such is fairly toxic. Naphthylthiourea is available as 10% active baits in suitable protein- or carbohydrate-rich materials and as a 20% tracking powder.
Synthesis
Like other thioureas, ANTU can be prepared by several routes. The usual method is the reaction of 1-naphthylamine hydrochloride with ammonium thiocyanate:
[C10H7NH3]Cl + NH4SCN → C10H7NHC(S)NH2 + NH3 + HCl
It can also be produced by the reaction of 1-naphthyl isothiocyanate with ammonia:
C10H7NCS + NH3 → C10H7NHC(S)NH2
Mechanism of action
ANTU is specifically toxic to lung cells, not by acting directly but through a short-lived active metabolite produced in the liver. The damage is focused on the endothelium of pulmonary capillaries and venules, and it leads to the formation of irreversible gaps in the endothelium of pulmonary vessels. This damage can lead to pulmonary edema.
In ANTU poisoning, plasma, carbon and ferritin escape through a gap in the thick part of the pulmonary capillary into the interstitial tissues of the lung.
Toxicity
Alpha-Naphthylthiourea is toxic by inhalation, ingestion, or skin contact, although the intoxication may be delayed. According to the U.S. National Institute for Occupational Safety and Health (NIOSH), the recommended workplace airborne exposure limit is 0.3 mg/m3 averaged over a 10-hour workshift. Exposure to 100 mg/m3 is immediately dangerous to life and health. The lethal dose in humans is approximately 4 g/kg.
It is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities.
Effects on anim |
https://en.wikipedia.org/wiki/Roadrunner%20%28supercomputer%29 | Roadrunner was a supercomputer built by IBM for the Los Alamos National Laboratory in New Mexico, USA. The US$100-million Roadrunner was designed for a peak performance of 1.7 petaflops. It achieved 1.026 petaflops on May 25, 2008, to become the world's first TOP500 LINPACK sustained 1.0 petaflops system.
In November 2008, it reached a top performance of 1.456 petaFLOPS, retaining its top spot in the TOP500 list. It was also the fourth-most energy-efficient supercomputer in the world on the Supermicro Green500 list, with an operational rate of 444.94 megaflops per watt of power used. The hybrid Roadrunner design was then reused for several other energy efficient supercomputers. Roadrunner was decommissioned by Los Alamos on March 31, 2013. In its place, Los Alamos commissioned a supercomputer called Cielo, which was installed in 2010.
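The Green500 efficiency figure implies a total power draw. A back-of-the-envelope sketch, under the assumption (not stated in the text) that the efficiency was measured at the 1.026-petaflops LINPACK score rather than the later 1.456-petaflops peak:

```python
LINPACK_PFLOPS = 1.026          # sustained LINPACK score from the text
EFFICIENCY_MFLOPS_PER_W = 444.94  # Green500 figure from the text

mflops = LINPACK_PFLOPS * 1e9   # 1 petaflop = 1e9 megaflops
power_watts = mflops / EFFICIENCY_MFLOPS_PER_W
print(f"Implied power draw: {power_watts / 1e6:.2f} MW")  # roughly 2.3 MW
```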
Overview
IBM built the computer for the U.S. Department of Energy's (DOE) National Nuclear Security Administration (NNSA). It was a hybrid design with 12,960 IBM PowerXCell 8i and 6,480 AMD Opteron dual-core processors in specially designed blade servers connected by InfiniBand. The Roadrunner used Red Hat Enterprise Linux along with Fedora as its operating systems, and was managed with xCAT distributed computing software. It also used the Open MPI Message Passing Interface implementation.
Roadrunner occupied approximately 296 server racks and became operational in 2008. It was decommissioned on March 31, 2013. The DOE used the computer to simulate how nuclear materials age, in order to predict whether the USA's aging arsenal of nuclear weapons is both safe and reliable. Roadrunner was also used in the science, financial, automotive, and aerospace industries.
Hybrid design
Roadrunner differed from other contemporary supercomputers because it continued the hybrid approach to supercomputer design introduced by Seymour Cray in 1964 with the Control Data Corporation CDC 6600 and continued with the order of |
https://en.wikipedia.org/wiki/Vagrant%20lichen | A vagrant lichen is a lichen that is either not attached to a substrate, or can become unattached then blow around, yet continue to grow and flourish. Some authors reserve the expression "vagrant lichen" for those lichens that never attach, that is, those that are obligately vagrant, referring to vagrant forms of other species as "erratic lichen". Vagrant lichens generally occur in open and windswept habitats, all over the world, in all kinds of temperature zones. Habitats include saltbush (mallee) vegetation zones in Australia, steppes of Eurasia, Arctic tundra, and the North American prairie. They range from the low elevations of the Namib Desert to the high altitude Andean páramo. There are under 100 identified vagrant species, most commonly in the Aspicilia and Xanthoparmelia genera.