source | text |
|---|---|
https://en.wikipedia.org/wiki/Recovery%20testing | In software testing, recovery testing is the activity of testing how well an application is able to recover from crashes, hardware failures and other similar problems.
Recovery testing is the forced failure of the software in a variety of ways to verify that recovery is
properly performed. Recovery testing should not be confused with reliability testing, which tries to discover the specific point at which failure occurs. Recovery testing checks how quickly and how well an application can recover from any type of crash or hardware failure. It involves simulating failure modes, or actually causing failures in a controlled environment. Following a failure, the failover mechanism is tested to ensure that data is not lost or corrupted and that any agreed service levels are maintained (e.g., function availability or response times). The type and extent of recovery are specified in the requirement specifications.
Examples of recovery testing:
While an application is running, suddenly restart the computer, and afterwards check the validity of the application's data.
While an application is receiving data from a network, unplug the connecting cable. After some time, plug the cable back in and analyze the application's ability to continue receiving data from the point at which the network connection disappeared.
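As a concrete, minimal illustration of this style of test, here is a Python sketch (not from the article; the file name, the atomic write-via-rename strategy, and the fault-injection point are assumptions made for the example). It forces a failure in the middle of a save and then verifies that the data file is never left corrupted:

```python
# Minimal recovery-test sketch: inject a simulated crash mid-save, then
# verify data integrity. The atomic temp-file-plus-rename strategy is one
# common recovery mechanism; a real recovery test would target the actual
# application under test.
import json, os, tempfile

DATA_FILE = "app_state.json"  # hypothetical application data file

def save_state(state, crash_before_rename=False):
    """Write state atomically: a crash leaves either the old file or the
    new file on disk, never a half-written one."""
    fd, tmp = tempfile.mkstemp(dir=".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
        f.flush()
        os.fsync(f.fileno())
    if crash_before_rename:
        raise RuntimeError("simulated crash")  # fault injection point
    os.replace(tmp, DATA_FILE)  # atomic rename on POSIX

save_state({"version": 1})
try:
    save_state({"version": 2}, crash_before_rename=True)
except RuntimeError:
    pass  # the "crash" happened mid-save
with open(DATA_FILE) as f:
    recovered = json.load(f)   # must still parse: no corruption
assert recovered["version"] in (1, 2)
print("recovered state:", recovered)
```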
See also
Fault injection
Failsafe |
https://en.wikipedia.org/wiki/Modified%20Richardson%20iteration | Modified Richardson iteration is an iterative method for solving a system of linear equations. Richardson iteration was proposed by Lewis Fry Richardson in his work dated 1910. It is similar to the Jacobi and Gauss–Seidel methods.
We seek the solution to a set of linear equations, expressed in matrix terms as
$$Ax = b.$$
The Richardson iteration is
$$x^{(k+1)} = x^{(k)} + \omega \left( b - A x^{(k)} \right),$$
where $\omega$ is a scalar parameter that has to be chosen such that the sequence $x^{(k)}$ converges.
It is easy to see that the method has the correct fixed points, because if it converges, then $x^{(k+1)} \approx x^{(k)}$ and $x^{(k)}$ has to approximate a solution of $Ax = b$.
Convergence
Subtracting the exact solution $x$, and introducing the notation $e^{(k)} = x^{(k)} - x$ for the error, we get the equality for the errors
$$e^{(k+1)} = e^{(k)} - \omega A e^{(k)} = (I - \omega A) e^{(k)}.$$
Thus,
$$\|e^{(k+1)}\| = \|(I - \omega A) e^{(k)}\| \leq \|I - \omega A\| \, \|e^{(k)}\|$$
for any vector norm and the corresponding induced matrix norm. Thus, if $\|I - \omega A\| < 1$, the method converges.
Suppose that $A$ is symmetric positive definite and that $(\lambda_j)_j$ are the eigenvalues of $A$. The error converges to $0$ if $|1 - \omega \lambda_j| < 1$ for all eigenvalues $\lambda_j$. If, e.g., all eigenvalues are positive, this can be guaranteed if $\omega$ is chosen such that $0 < \omega < 2/\lambda_{\max}(A)$. The optimal choice, minimizing all $|1 - \omega \lambda_j|$, is $\omega_{\text{opt}} = 2/(\lambda_{\min}(A) + \lambda_{\max}(A))$, which gives the simplest Chebyshev iteration. This optimal choice yields a spectral radius of
$$\min_\omega \rho(I - \omega A) = \rho(I - \omega_{\text{opt}} A) = 1 - \frac{2}{\kappa(A) + 1},$$
where $\kappa(A) = \lambda_{\max}(A)/\lambda_{\min}(A)$ is the condition number.
If there are both positive and negative eigenvalues, the method will diverge for any $\omega$ if the initial error $e^{(0)}$ has nonzero components in the corresponding eigenvectors.
Equivalence to gradient descent
Consider minimizing the function $F(x) = \frac{1}{2} \|\tilde{A}x - \tilde{b}\|_2^2$. Since this is a convex function, a sufficient condition for optimality is that the gradient is zero ($\nabla F(x) = 0$), which gives rise to the equation
$$\tilde{A}^T \tilde{A} x = \tilde{A}^T \tilde{b}.$$
Define $A = \tilde{A}^T \tilde{A}$ and $b = \tilde{A}^T \tilde{b}$.
Because of the form of A, it is a positive semi-definite matrix, so it has no negative eigenvalues.
A step of gradient descent is
$$x^{(k+1)} = x^{(k)} - t \nabla F(x^{(k)}) = x^{(k)} - t \left( A x^{(k)} - b \right),$$
which is equivalent to the Richardson iteration by making $t = \omega$.
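To make the update rule concrete, here is a minimal NumPy sketch (an illustration, not code from the article) that runs the iteration with the optimal $\omega$ for a symmetric positive definite matrix; the matrix construction, tolerance, and iteration cap are arbitrary choices:

```python
# Modified Richardson iteration: x_{k+1} = x_k + omega * (b - A x_k),
# with omega = 2 / (lambda_min + lambda_max) for SPD A.
import numpy as np

def richardson(A, b, omega, tol=1e-10, max_iter=10_000):
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - A @ x          # residual
        if np.linalg.norm(r) < tol:
            break
        x = x + omega * r      # Richardson update
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M.T @ M + 50 * np.eye(50)            # SPD by construction
b = rng.standard_normal(50)
lam = np.linalg.eigvalsh(A)              # ascending eigenvalues
omega = 2.0 / (lam[0] + lam[-1])         # optimal scalar parameter
x = richardson(A, b, omega)
print(np.allclose(A @ x, b, atol=1e-8))  # True
```

With this choice of $\omega$ the error contracts geometrically at rate roughly $(\kappa(A)-1)/(\kappa(A)+1)$ per step, so a well-conditioned matrix converges in a few dozen iterations.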
See also
Richardson extrapolation |
https://en.wikipedia.org/wiki/Extensible%20Forms%20Description%20Language | Extensible Forms Description Language (XFDL) is a high-level computer language that facilitates defining a form as a single, stand-alone object using elements and attributes from the Extensible Markup Language (XML). Technically, it is a class of XML originally specified in a World Wide Web Consortium (W3C) Note. See Specifications below for links to the current versions of XFDL. XFDL offers precise control over form layout, permitting replacement of existing business/government forms with electronic documents in a human-readable, open standard.
In addition to precision layout control, XFDL provides multiple page capabilities, step-by-step guided user experiences, and digital signatures. XFDL also provides a syntax for in-line mathematical and conditional expressions and data validation constraints as well as custom items, options, and external code functions. Current versions of XFDL (see Specifications below) are capable of providing these interactive features via open standard markup languages including XForms, XPath, XML Schema and XML Signatures.
XFDL not only supports multiple digital signatures, but the signatures can apply to specific sections of a form and prevent changes to signed content.
These advantages to XFDL led large organizations such as the United States Army and Air Force to migrate to XFDL from using forms in other formats. Later, though, the lack of portable software capable of creating XFDL led them to investigate moving away from it. The Army migrated to Adobe fillable PDFs in 2014. |
https://en.wikipedia.org/wiki/Monte%20Carlo%20method%20in%20statistical%20mechanics | Monte Carlo in statistical physics refers to the application of the Monte Carlo method to problems in statistical physics, or statistical mechanics.
Overview
The general motivation to use the Monte Carlo method in statistical physics is to evaluate a multivariable integral. The typical problem begins with a system for which the Hamiltonian is known, which is at a given temperature, and which follows Boltzmann statistics. To obtain the mean value of some macroscopic variable, say A, the general approach is to compute, over all the phase space, PS for simplicity, the mean value of A using the Boltzmann distribution:
$$\langle A \rangle = \frac{1}{Z} \int_{PS} A(\vec{r}) \, e^{-\beta E(\vec{r})} \, d\vec{r},$$
where
$E(\vec{r})$ is the energy of the system for a given state defined by
$\vec{r}$, a vector with all the degrees of freedom (for instance, for a mechanical system, $\vec{r} = (\vec{q}, \vec{p})$), $\beta \equiv 1/k_B T$,
and
$$Z = \int_{PS} e^{-\beta E(\vec{r})} \, d\vec{r}$$
is the partition function.
One possible approach to solve this multivariable integral is to exactly enumerate all possible configurations of the system, and calculate averages at will. This is done in exactly solvable systems, and in simulations of simple systems with few particles. In realistic systems, on the other hand, an exact enumeration can be difficult or impossible to implement.
For those systems, Monte Carlo integration (not to be confused with the Monte Carlo method, which is used to simulate molecular chains) is generally employed. The main motivation for its use is the fact that, with Monte Carlo integration, the error goes as $1/\sqrt{N}$, independently of the dimension of the integral. Another important concept related to Monte Carlo integration is importance sampling, a technique that improves the computational time of the simulation.
In the following sections, the general implementation of the Monte Carlo integration for solving this kind of problems is discussed.
Importance sampling
An estimation, under Monte Carlo integration, of an integral defined as
$$I = \int_{PS} f(\vec{r}) \, d\vec{r}$$
is
$$I \approx \frac{V}{N} \sum_{i=1}^{N} f(\vec{r}_i),$$
where the $\vec{r}_i$ are uniformly obtained from all the phase space (PS), $V$ is the volume of PS, and N is the number of sampling points (or function evaluations). |
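As a concrete illustration of the $1/\sqrt{N}$ behaviour quoted above, the following NumPy sketch (not from the article; the integrand, dimension, and sample sizes are arbitrary choices) estimates a six-dimensional integral with uniform sampling:

```python
# Plain Monte Carlo integration: I ≈ (V/N) * sum_i f(r_i) with uniform
# samples over [0,1]^dim (so V = 1); the error shrinks roughly like
# 1/sqrt(N) regardless of the dimension.
import numpy as np

def mc_integrate(f, dim, n, rng):
    pts = rng.random((n, dim))        # N uniform points in [0,1]^dim
    return f(pts).mean()

f = lambda x: np.exp(-x).prod(axis=1)
exact = (1 - np.exp(-1.0)) ** 6       # known value of the integral, dim = 6
rng = np.random.default_rng(1)
for n in (10**3, 10**4, 10**5, 10**6):
    est = mc_integrate(f, 6, n, rng)
    print(f"N={n:>7}  estimate={est:.6f}  error={abs(est - exact):.2e}")
```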
https://en.wikipedia.org/wiki/Analog%20transmission | Analog transmission is a transmission method of conveying information using a continuous signal which varies in amplitude, phase, or some other property in proportion to that information. It could be the transfer of an analog signal, using an analog modulation method such as frequency modulation (FM) or amplitude modulation (AM), or no modulation at all.
Some textbooks also consider passband data transmission using a digital modulation method such as ASK, PSK and QAM, i.e. a sinewave modulated by a digital bit-stream, to be analog transmission and an analog signal. Others define that as digital transmission and a digital signal. Baseband data transmission using line codes, resulting in a pulse train, is always considered digital transmission, although the source signal may be a digitized analog signal.
Methods
Analog transmission can be conveyed in many different fashions:
Optical fiber
Twisted pair or coaxial cable
Radio
Underwater acoustic communication
There are two basic kinds of analog transmission, both based on how they modulate data to combine an input signal with a carrier signal. Usually, this carrier signal is of a specific frequency, and data is transmitted through its variations. The two techniques are amplitude modulation (AM), which varies the amplitude of the carrier signal, and frequency modulation (FM), which modulates the frequency of the carrier.
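The following NumPy sketch (illustrative only; the carrier, message, sample-rate, and deviation values are all assumed) shows the two techniques side by side: AM scales the carrier's amplitude by the message, while FM folds the message into the carrier's instantaneous frequency:

```python
# AM varies the carrier's amplitude with the message signal; FM varies
# its instantaneous frequency, obtained here by integrating fc + kf*m.
import numpy as np

fs = 100_000                       # sample rate, Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)
fc, fm = 10_000, 500               # carrier and message frequencies, Hz
m = np.sin(2 * np.pi * fm * t)     # message signal

am = (1 + 0.5 * m) * np.cos(2 * np.pi * fc * t)   # amplitude modulation
kf = 2_000                                        # frequency deviation, Hz
phase = 2 * np.pi * np.cumsum(fc + kf * m) / fs   # integrate inst. frequency
fm_sig = np.cos(phase)                            # frequency modulation
print(am[:3], fm_sig[:3])
```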
Types
Most analog transmissions fall into one of several categories. Telephony and voice communication was originally primarily analog in nature, as was most television and radio transmission. Early telecommunication devices used modulator/demodulators, or modems, to convert digital signals to analog form for transmission over analog lines and back again.
Benefits and drawbacks
The analog transmission method is still very popular, in particular for shorter distances, due to significantly lower costs; complex multiplexing and timing equipment is unnecessary, and in |
https://en.wikipedia.org/wiki/Woordeboek%20van%20die%20Afrikaanse%20Taal | Woordeboek van die Afrikaanse Taal (Dictionary of the Afrikaans Language), generally known as the WAT, is the largest descriptive Afrikaans dictionary. As a comprehensive descriptive dictionary, it strives to reflect the Afrikaans language in its entirety: not only standard Afrikaans is portrayed, but also varieties like Kaaps and Namakwalands. As of 2021, sixteen volumes have been published, with the sixteenth volume containing part of the letter S. The WAT is also available on CD and on the internet. The Handwoordeboek van die Afrikaanse Taal (HAT) is a shorter, concise Afrikaans explanatory dictionary in a single volume; it stands to the comprehensive WAT roughly as the Concise Oxford Dictionary stands to the Oxford English Dictionary.
The project was begun in 1926 by Prof. J J Smith of Stellenbosch University.
External links
Official Website (Afrikaans)
Official Website (English)
Online Edition of the Woordeboek van die Afrikaanse Taal |
https://en.wikipedia.org/wiki/Budapest%20Semesters%20in%20Mathematics | The Budapest Semesters in Mathematics program is a study abroad opportunity for North American undergraduate students in Budapest, Hungary. The coursework is primarily mathematical and conducted in English by Hungarian professors whose primary positions are at Eötvös Loránd University or the Alfréd Rényi Institute of Mathematics of the Hungarian Academy of Sciences. Originally started by László Lovász, László Babai, Vera Sós, and Pál Erdős, the first semester was conducted in Spring 1985. The North American part of the program is currently run by Tina Garrett (North American Director) out of St. Olaf College in Northfield, MN. She is supported by Kendra Killpatrick (Associate Director) and Eileen Shimota (Program Administrator). The former North American Directors were Paul D. Humke (1988–2011) and Tom Trotter. The Hungarian director is Dezső Miklós. The first Hungarian director was Gábor J. Székely (1985–1995).
History of the Program
Courses offered
Courses commonly offered at BSM:
Introduction to Abstract Algebra
Advanced Abstract Algebra
Topics in Analysis
Complex Functions
Combinatorics 1
Combinatorics 2
Commutative Algebra
Conjecture and Proof
Functional Analysis
Elementary Problem Solving
Galois Theory
Topics in Geometry
Graph Theory
Number Theory
Topics in Number Theory
Probability Theory
Real Functions and Measures
Set Theory
Introduction to Topology
Mathematical Physics
Independent Research Groups
Theory of Computing
Differential Geometry
Dynamical Systems and Bifurcations
Stochastic Models in Bioinformatics
Mathematical Logic
In addition to mathematics-based courses, students have the opportunity to take culture classes, such as beginning and intermediate Hungarian Language classes, Hungarian Arts and Culture, and Holocaust and Memory.
Location
Classes are held in the College International, located at Bethlen Gábor Tér in the heart of Pest in Budapest's District VII. This is also the location for several |
https://en.wikipedia.org/wiki/Nickel%20aluminide | Nickel aluminide typically refers to one of the two most widely used compounds, Ni3Al or NiAl, but can refer to most aluminides from the Ni-Al system. These alloys are widely used due to their corrosion resistance, low density and ease of production. Ni3Al is of specific interest as the strengthening γ' phase precipitate in nickel-based superalloys, allowing for high temperature strength up to 0.7-0.8 of its melting temperature. Meanwhile, NiAl displays excellent properties such as low density (lower than that of Ni3Al), good thermal conductivity, oxidation resistance and a high melting temperature. These properties make it ideal for special high temperature applications like coatings on blades in gas turbines and jet engines. However, both these alloys have the disadvantage of being quite brittle at room temperature, and Ni3Al remains brittle at high temperatures as well. It has been shown, though, that Ni3Al can be made ductile when manufactured as a single crystal rather than in polycrystalline form. Another application was demonstrated in 2005, when the most abrasion-resistant material was reportedly created by embedding diamonds in a matrix of nickel aluminide.
Ni3Al
The chief issue with polycrystalline Ni3Al-based alloys is their room-temperature and high-temperature brittleness. This brittleness is generally attributed to the inability of dislocations to move in the highly ordered lattice. Researchers worked hard to address this brittleness, as it greatly reduced the potential structural applications of these Ni3Al-based alloys. However, in 1990, it was shown that the introduction of a small amount of boron can drastically increase ductility by suppressing intergranular fracture. Once this was addressed, focus turned to maximizing the structural properties of the alloy. As mentioned, Ni3Al-based alloys derive their strength from the formation of γ' precipitates in the γ matrix, which strengthen the alloys through precipitation strengthening. In these Ni3Al-ba |
https://en.wikipedia.org/wiki/AEP%20meter%20label%20format | The California Direct Access Standards setting process in 1997 identified the need to standardize the electric meter label identifier so as to create a unique identifier for every electric meter in the United States. The AEP meter label format is a recommended solution for this need.
Format
AEP meter label format follows ANSI C12.10 requirements for format of the electric meter labels. (See below) The format for the barcode label is:
AABYYYYYYYYYZZZZZ
where
AA is the meter test code
B is an identifier for the meter manufacturer
YYYYYYYYY is the manufacturer’s serial number or utility number
ZZZZZ is user specified
Usage
The meter test code is used by calibration equipment manufacturers to automatically set up their equipment to calibrate a given meter. The user scans in the AEP barcode on the meter, and the calibrator will use the correct voltage and test amps. The last five characters are user specified, and many electric utilities use these characters for an inventory code or date of manufacture. |
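A small Python sketch of how the 17-character layout above could be split into its fields; the function name and the sample label value are hypothetical, invented here for illustration:

```python
# Parse an AEP-format barcode label: AABYYYYYYYYYZZZZZ (17 characters).
def parse_aep_label(label: str) -> dict:
    if len(label) != 17:
        raise ValueError("AEP label must be 17 characters: AABYYYYYYYYYZZZZZ")
    return {
        "meter_test_code": label[0:2],    # AA: drives calibrator setup
        "manufacturer_id": label[2:3],    # B: meter manufacturer
        "serial_number":   label[3:12],   # YYYYYYYYY: serial/utility number
        "user_field":      label[12:17],  # ZZZZZ: e.g. inventory code or date
    }

print(parse_aep_label("01G12345678920150"))  # made-up example label
```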
https://en.wikipedia.org/wiki/Soft-collinear%20effective%20theory | In quantum field theory, soft-collinear effective theory (or SCET) is a theoretical framework for doing calculations that involve interacting particles carrying widely different energies.
The motivation for developing SCET was to control the infrared divergences that occur in quantum chromodynamics (QCD) calculations that involve particles that are soft—carrying much lower energy or momentum than other particles in the process—or collinear—traveling in the same direction as another particle in the process. SCET is an effective theory for highly energetic quarks interacting with collinear and/or soft gluons. It has been used for calculations of the decays of B mesons (quark-antiquark bound states involving a bottom quark) and the properties of jets (sprays of hadrons that emerge from particle collisions when a quark or gluon is produced). SCET has also been used to calculate electroweak interactions in Higgs boson production.
The new feature of SCET is its ability to handle more than one soft energy scale. For example, processes involving quarks carrying a high energy Q interacting with gluons have two soft scales: the transverse momentum pT of the collinear particles, plus the even softer scale pT^2/Q. SCET provides a power-counting formalism for doing perturbation theory in the small parameter Λ_QCD/Q.
External links
The original papers are by Christian Bauer, Sean Fleming, Michael Luke, Dan Pirjol, and Iain Stewart: |
https://en.wikipedia.org/wiki/List%20of%20Australian%20sporting%20mascots | Many of the mascots and characters used by clubs and teams in Australia and New Zealand are similar to those used around the world. There are, however, quite a number that are unique to these two nations.
The following is a list of notable mascots and characters created specifically for advertising purposes in Australia and New Zealand, listed alphabetically by the club or team they represent.
Australian Football
Australian Football League
In 2003, the Australian Football League standardised the club mascots into the Mascot Manor theme. Some, however, have since been replaced.
- Claude "Curls" Crow
- Roy the Lion (Former: The Brisbane Bear 1987-96)
- Captain Carlton
- Jock "One Eye" McPie
- Moz "Skeeta" Reynolds
- Johnny "The Doc" Docker (formerly Grinder)
- Half Cat
- Sunny Ray (originally Gary "GC" Clifford)
- G-man
- Hudson "Hawka" Knights
- Checker, Chuck and Cheeky (formerly Ronald "Dee" Mann)
- Barry "Bruiser" Cracker
- Tommy "Thunda" Power
- Tiger "Stripes" Dyer (formerly Tiggy)
- Trevor "Saint" Kilda
- Syd "Swannie" Skilton
- Rick "The Rock" Eagle
- Woofer "Dogg" Whitten
Cricket
Big Bash League
Adelaide Strikers - Smash and Summer
Brisbane Heat - Heater
Hobart Hurricanes - Captain Hurricane
Melbourne Renegades - Sledge (a cricketer, from 2020) and Willow
Melbourne Stars - Starman and Starlett
Perth Scorchers - Blaze and Amber
Sydney Sixers - Syd
Sydney Thunder - Thor (formerly Maximus)
State teams
Victoria Bushrangers - Ned Ranger
Queensland Bulls - "Rocky" the Five Star Senepol bull
Gridiron
Rugby League
National Rugby League
Brisbane Broncos - "Buck"
Canterbury Bulldogs - "Brutus"
Canberra Raiders - "Victor" the Viking
Cronulla Sharks - "Reefy" & "Hammerhead"
Gold Coast Titans - "Blade"
Manly-Warringah Sea Eagles - "Egor" the Eagle
Melbourne Storm - "Boom" and “Storm man”
Newcastle Knights - "Knytro" and “Novo”
New Zealand Warriors - "Tiki"
North Queensland Cowboys - "Bluey" th |
https://en.wikipedia.org/wiki/Potassium%20phosphate | Potassium phosphate is a generic term for the salts of potassium and phosphate ions including:
Monopotassium phosphate (KH2PO4) (Molar mass approx: 136 g/mol)
Dipotassium phosphate (K2HPO4) (Molar mass approx: 174 g/mol)
Tripotassium phosphate (K3PO4) (Molar mass approx: 212.27 g/mol)
As food additives, potassium phosphates have the E number E340. |
https://en.wikipedia.org/wiki/Relative%20intensity%20noise | Relative intensity noise (RIN) describes the instability in the power level of a laser. The noise term is important for describing lasers used in fiber-optic communication and LIDAR remote sensing.
Relative intensity noise can be generated from cavity vibration, fluctuations in the laser gain medium or simply from transferred intensity noise from a pump source. Since intensity noise is typically proportional to the intensity, the relative intensity noise is typically independent of laser power. Hence, when the signal-to-noise ratio (SNR) is limited by RIN, it does not depend on laser power. In contrast, when SNR is limited by shot noise, it improves with increasing laser power. RIN typically peaks at the relaxation oscillation frequency of the laser, then falls off at higher frequencies until it converges to the shot noise level. The roll-off frequency sets what is specified as the RIN bandwidth. RIN is sometimes referred to as a kind of 1/f noise, otherwise known as pink noise.
Relative intensity noise is measured by sampling the output current of a photodetector over time and transforming this data set into the frequency domain with a fast Fourier transform. Alternatively, it can be measured by analyzing the spectrum of the photodetected signal using an electrical spectrum analyzer. Noise observed in the electrical domain is proportional to the electric current squared and hence to the optical power squared. Therefore, RIN is usually presented as the relative fluctuation in the square of the optical power in decibels per hertz over the RIN bandwidth and at one or several optical intensities. It may also be specified as a percentage, a value that represents the relative fluctuations per Hz multiplied by the RIN bandwidth.
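The following NumPy sketch (not from the article) illustrates the FFT-based procedure just described on a synthetic photodetector record; the sample rate, mean power, and noise level are assumed values:

```python
# Estimate RIN in dB/Hz: FFT the power fluctuations, form a periodogram
# PSD, and normalize by the mean power squared.
import numpy as np

fs = 1e6                                   # sample rate, Hz (assumed)
n = 2 ** 16
rng = np.random.default_rng(0)
p_mean = 1e-3                              # mean optical power, W (assumed)
power = p_mean * (1 + 1e-4 * rng.standard_normal(n))  # synthetic record

dp = power - power.mean()                  # intensity fluctuations
psd = np.abs(np.fft.rfft(dp)) ** 2 / (fs * n)   # periodogram PSD, W^2/Hz
rin_db_hz = 10 * np.log10(2 * psd[1:] / power.mean() ** 2)  # one-sided, skip DC
print("mean RIN over band: %.1f dB/Hz" % rin_db_hz.mean())
```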
See also
Shot noise
External reference
Intensity noise in Encyclopedia of Laser Physics and Technology
Relative Intensity Noise in Encyclopedia of Laser Physics and Technology |
https://en.wikipedia.org/wiki/Multiplicative%20distance | In algebraic geometry, a function $\mu$ is said to be a multiplicative distance function over a field if it satisfies
$AB \cong A'B'$ if and only if $\mu(AB) = \mu(A'B')$
$AB < A'B'$ if and only if $\mu(AB) < \mu(A'B')$
See also
Algebraic geometry
Hyperbolic geometry
Poincaré disc model
Hilbert's arithmetic of ends |
https://en.wikipedia.org/wiki/Parallax%20SX | Parallax SX is a discontinued line of microcontrollers that was marketed by Parallax, from a design by Ubicom. Designed to be architecturally similar to the PIC microcontrollers used in the original versions of the BASIC Stamp, SX microcontrollers replaced the PIC in several subsequent versions of that product.
Production
The designs for the devices are owned by Ubicom (formerly Scenix, hence "SX"). The SX dies were manufactured by Ubicom, who sent them to Parallax for packaging. Ubicom had made processors with 18, 20, 28, 48 and 52 pins, but because Parallax did not have packages for 18- and 52-pin chips, the SX-18 and SX-52 were discontinued.
End-of-life
On July 31, 2009, Parallax announced that the SX line had reached its production EOL (End-of-Life) as Ubicom would no longer be manufacturing dies based on the designs; after the supplies from the final "lifetime buy" have been exhausted, the associated products cannot be restocked. In the same announcement, Parallax expressed that availability of its own products based on SX devices would not be impacted and that technical support would remain available.
Technical details
Parallax's SX series microcontrollers are 8-bit RISC microcontrollers (using a 12-bit instruction word) with an unusually high speed, up to 75 MHz (75 MIPS), and a high degree of flexibility. They include up to 4096 12-bit words of Flash memory and up to 262 bytes of random-access memory (RAM), an eight-bit counter and other support logic. They are especially geared toward the emulation of I/O hardware in software, which makes them very flexible. While Parallax's SX micros are limited in variety, their high speed and additional resources allow programmers to create 'virtual devices', including complete video controllers, as required. For example, there are software library modules to emulate I2C and SPI interfaces, UARTs, frequency generators, measurement counters and PWM and sigma-delta A/D converters. Other interfaces are relat |
https://en.wikipedia.org/wiki/Hilbert%20system | In mathematical physics, Hilbert system is an infrequently used term for a physical system described by a C*-algebra.
In logic, especially mathematical logic, a Hilbert system, sometimes called Hilbert calculus, Hilbert-style deductive system or Hilbert–Ackermann system, is a type of system of formal deduction attributed to Gottlob Frege and David Hilbert. These deductive systems are most often studied for first-order logic, but are of interest for other logics as well.
Most variants of Hilbert systems take a characteristic tack in the way they balance a trade-off between logical axioms and rules of inference. Hilbert systems can be characterised by the choice of a large number of schemes of logical axioms and a small set of rules of inference. Systems of natural deduction take the opposite tack, including many deduction rules but very few or no axiom schemes. The most commonly studied Hilbert systems have either just one rule of inference (modus ponens, for propositional logics) or two (adding generalisation, to handle predicate logics as well), together with several infinite axiom schemes. Hilbert systems for propositional modal logics, sometimes called Hilbert–Lewis systems, are generally axiomatised with two additional rules: the necessitation rule and the uniform substitution rule.
A characteristic feature of the many variants of Hilbert systems is that the context is not changed in any of their rules of inference, while both natural deduction and sequent calculus contain some context-changing rules. Thus, if one is interested only in the derivability of tautologies, not hypothetical judgments, then one can formalize the Hilbert system in such a way that its rules of inference contain only judgments of a rather simple form. The same cannot be done with the other two deduction systems: as context is changed in some of their rules of inference, they cannot be formalized so that hypothetical judgments are avoided, not even if we want to use them just for proving derivab |
https://en.wikipedia.org/wiki/Bayesian%20regret | In stochastic game theory, Bayesian regret is the expected difference ("regret") between the utility of a Bayesian strategy and that of the optimal strategy (the one with the highest expected payoff).
The term Bayesian refers to Thomas Bayes (1702–1761), who proved a special case of what is now called Bayes' theorem and provided the first mathematical treatment of a non-trivial problem of statistical data analysis using what is now known as Bayesian inference.
Economics
This term has been used to compare a random buy-and-hold strategy to professional traders' records. This same concept has received numerous different names, as the New York Times notes:
"In 1957, for example, a statistician named James Hanna called his theorem Bayesian Regret. He had been preceded by David Blackwell, also a statistician, who called his theorem Controlled Random Walks. Other, later papers had titles like 'On Pseudo Games', 'How to Play an Unknown Game', 'Universal Coding' and 'Universal Portfolios'".
Social Choice (voting methods)
"Bayesian Regret" has also been used as an alternate term for social utility efficiency, that is, a measure of the expected utility of different voting methods under a given probabilistic model of voter utilities and strategies. In this case, the relation to Bayes is unclear, as there is no conditioning or posterior distribution involved. |
https://en.wikipedia.org/wiki/Hasegawa%E2%80%93Mima%20equation | In plasma physics, the Hasegawa–Mima equation, named after Akira Hasegawa and Kunioki Mima, is an equation that describes a certain regime of plasma, where the time scales are very fast, and the distance scale in the direction of the magnetic field is long. In particular the equation is useful for describing turbulence in some tokamaks. The equation was introduced in Hasegawa and Mima's paper submitted in 1977 to Physics of Fluids, where they compared it to the results of the ATC tokamak.
Assumptions
The magnetic field is large enough that
$$\omega \ll \omega_{ci}$$
for all quantities of interest, where $\omega_{ci}$ is the ion cyclotron frequency. When the particles in the plasma are moving through a magnetic field, they spin in a circle around the magnetic field. The frequency of oscillation, known as the cyclotron frequency or gyrofrequency, is directly proportional to the magnetic field.
The particle density follows the quasineutrality condition:
$$n_e \approx Z n_i,$$
where Z is the number of protons in the ions. If we are talking about hydrogen, Z = 1, and n is the same for both species. This condition is true as long as the electrons can shield out electric fields. A cloud of electrons will surround any charge with an approximate radius known as the Debye length. For that reason this approximation means the size scale is much larger than the Debye length. The ion particle density can be expressed by a first order term that is the density defined by the quasineutrality condition equation, and a second order term which is how much it differs from the equation.
The first order ion particle density is a function of position, but not time. This means that perturbations of the particle density change at a timescale much slower than the scale of interest. The second order particle density which causes a charge density and thus an electric potential can change with time.
The magnetic field, B must be uniform in space, and not be a function of time. The magnetic field also moves at a timescale much slower than the scale of interest. This allows the time deri |
https://en.wikipedia.org/wiki/International%20Genetically%20Engineered%20Machine | The International Genetically Engineered Machine (iGEM) competition is a worldwide synthetic biology competition that was initially aimed at undergraduate university students, but has since expanded to include divisions for high school students, entrepreneurs, and community laboratories, as well as 'overgraduates'.
Competition details
Student teams are given a kit (the so-called 'Distribution Kit') of standard, interchangeable parts (so-called 'BioBricks') at the beginning of the summer from the Registry of Standard Biological Parts, comprising various genetic components such as promoters, terminators, reporter elements, and plasmid backbones. Working at their local laboratories over the summer, they use these parts and new parts of their own design to build biological systems and operate them in living cells.
The teams are free to choose a project, which can build on previous projects or be new to iGEM. Successful projects produce cells that exhibit new and unusual properties by engineering sets of multiple genes together with mechanisms to regulate their expression.
At the end of the summer, the teams add their new BioBricks to the Parts Registry and the scientific community can build upon the expanded set of BioBricks in the next year.
At the annual 'iGEM Jamboree', teams from all continents meet in Paris for a scientific conference where they present their projects to each other and to a scientific jury of ~120 judges. The judges award medals and special prizes to the teams and select a 'Grand Prize Winner' team as well as 'Runner-Up' teams in each division (High School, Undergraduate and Overgraduate).
Awards & Judging in the iGEM competition
Each participant receives a participation certificate and has the possibility to earn medals (bronze, silver and gold) with their team, depending on different criteria that the team fulfilled in the competition. For a bronze medal it is, for example, necessary to submit a new part to the Par |
https://en.wikipedia.org/wiki/Favard%20constant | In mathematics, the Favard constant, also called the Akhiezer–Krein–Favard constant, of order r is defined as
$$K_r = \frac{4}{\pi} \sum_{k=0}^{\infty} \left[ \frac{(-1)^k}{2k+1} \right]^{r+1}.$$
This constant is named after the French mathematician Jean Favard, and after the Soviet mathematicians Naum Akhiezer and Mark Krein.
Particular values
$$K_0 = 1, \quad K_1 = \frac{\pi}{2}, \quad K_2 = \frac{\pi^2}{8}, \quad K_3 = \frac{\pi^3}{24}.$$
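The defining series converges quickly enough for a direct numerical check of these values; below is a small Python sketch (not from the article) that sums the first million terms:

```python
# Evaluate Favard constants K_r = (4/pi) * sum_k [(-1)^k / (2k+1)]^(r+1)
# by direct summation of the defining series.
import math

def favard(r: int, terms: int = 1_000_000) -> float:
    s = sum(((-1) ** k / (2 * k + 1)) ** (r + 1) for k in range(terms))
    return 4 / math.pi * s

for r in range(4):
    print(f"K_{r} ≈ {favard(r):.4f}")
# Expected: K_0 = 1.0000, K_1 = pi/2 ≈ 1.5708,
#           K_2 = pi^2/8 ≈ 1.2337, K_3 = pi^3/24 ≈ 1.2919
```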
Uses
This constant is used in solutions of several extremal problems, for example
Favard's constant is the sharp constant in Jackson's inequality for trigonometric polynomials
the sharp constants in the Landau–Kolmogorov inequality are expressed via Favard's constants
Norms of periodic perfect splines. |
https://en.wikipedia.org/wiki/Automath | Automath ("automating mathematics") is a formal language, devised by Nicolaas Govert de Bruijn starting in 1967, for expressing complete mathematical theories in such a way that an included automated proof checker can verify their correctness.
Overview
The Automath system included many novel notions that were later adopted and/or reinvented in areas such as typed lambda calculus and explicit substitution. Dependent types are one outstanding example. Automath was also the first practical system that exploited the Curry–Howard correspondence. Propositions were represented as sets (called "categories") of their proofs, and the question of provability became a question of non-emptiness (type inhabitation); de Bruijn was unaware of Howard's work, and stated the correspondence independently.
L. S. van Benthem Jutting, as part of his Ph.D. thesis in 1976, translated Edmund Landau's Foundations of Analysis into Automath and checked its correctness.
Automath was never widely publicized at the time, however, and so never achieved widespread use; nonetheless, it proved very influential in the later development of logical frameworks and proof assistants. The Mizar system, a system of writing and checking formalized mathematics that is still in active use, was influenced by Automath.
See also
QED manifesto |
https://en.wikipedia.org/wiki/Coded%20mark%20inversion | In telecommunication, coded mark inversion (CMI) is a non-return-to-zero (NRZ) line code. It encodes zero bits as a half bit time of zero followed by a half bit time of one, and while one bits are encoded as a full bit time of a constant level. The level used for one bits alternates each time one is coded.
This is vaguely reminiscent of, but quite different from, Miller encoding, which also uses half-bit and full-bit pulses, but additionally uses the half-one/half-zero combination and arranges them so that the signal always spends at least a full bit time at a particular level before transitioning again.
CMI doubles the bitstream frequency, when compared to its simple NRZ equivalent, but allows easy and reliable clock recovery.
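A small Python sketch (not from the article) of the encoding rule just described, mapping each bit to two half-bit levels:

```python
# CMI encoding: 0 -> (0, 1); 1 -> (0, 0) or (1, 1), alternating the
# constant level used for successive one bits.
def cmi_encode(bits):
    out, one_level = [], 0
    for b in bits:
        if b == 0:
            out += [0, 1]        # zero: low half-bit then high half-bit
        else:
            one_level ^= 1       # alternate the level for each one bit
            out += [one_level, one_level]
    return out

print(cmi_encode([1, 0, 0, 1, 1]))
# -> [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
```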
See also
Manchester code |
https://en.wikipedia.org/wiki/Mice%20problem | In mathematics, the mice problem is a continuous pursuit–evasion problem in which a number of mice (or insects, dogs, missiles, etc.) are considered to be placed at the corners of a regular polygon. In the classic setup, each then begins to move towards its immediate neighbour (clockwise or anticlockwise). The goal is often to find out at what time the mice meet.
The most common version has the mice starting at the corners of a unit square, moving at unit speed. In this case they meet after a time of one unit, because the distance between two neighboring mice always decreases at a speed of one unit. More generally, for a regular polygon with $n$ sides of unit length, the distance between neighboring mice decreases at a speed of $1 - \cos(2\pi/n)$, so they meet after a time of $\frac{1}{1 - \cos(2\pi/n)}$.
Path of the mice
For all regular polygons, each mouse traces out a pursuit curve in the shape of a logarithmic spiral. These curves meet in the center of the polygon.
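A short NumPy sketch (not from the article) that integrates the pursuit numerically for the unit-square case; the step size and stopping distance are arbitrary choices:

```python
# Four mice at the corners of a unit square each step toward their
# neighbour at unit speed; they spiral inward and meet near t = 1.
import numpy as np

mice = np.array([[0., 0.], [0., 1.], [1., 1.], [1., 0.]])  # unit square
dt, t = 1e-5, 0.0
while np.linalg.norm(mice[0] - mice[1]) > 1e-3:
    target = np.roll(mice, -1, axis=0)               # each mouse's neighbour
    d = target - mice
    d /= np.linalg.norm(d, axis=1, keepdims=True)    # unit-speed direction
    mice += d * dt
    t += dt
print(f"met after t ≈ {t:.4f}")                      # theory: exactly 1
```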
In media
In Dara Ó Briain: School of Hard Sums, the mice problem is discussed. Instead of 4 mice, 4 ballroom dancers are used. |
https://en.wikipedia.org/wiki/Slurry%20ice | Slurry ice is a phase changing refrigerant made up of millions of ice "micro-crystals" (typically 0.1 to 1 mm in diameter) formed and suspended within a solution of water and a freezing point depressant. Some compounds used in the field are salt, ethylene glycol, propylene glycol, alcohols like isobutyl and ethanol, and sugars like sucrose and glucose. Slurry ice has greater heat absorption compared to single phase refrigerants like brine, because the melting enthalpy (latent heat) of the ice is also used.
Characteristics
The small ice particle size results in greater heat transfer area than other types of ice for a given weight. It can be packed inside a container as dense as 700 kg/m3, the highest ice-packing factor among all usable industrial ice.
The spherical crystals have good flow properties, making them easy to distribute through conventional pumps and piping and over product in direct contact chilling applications, allowing them to flow into crevices and provide greater surface contact and faster cooling than other traditional forms of ice (flake, block, shell, etc.).
Its flow properties, high cooling capacity and flexibility in application make a slurry ice system a substitute for conventional ice generators and refrigeration systems, and offer improvements in energy efficiency: 70%, compared to around 45% in standard systems, lower freon consumption per ton of ice and lower operating costs.
Application fields
Slurry ice is commonly used in a wide range of air conditioning, packaging, and industrial cooling processes, supermarkets, and cooling and storage of fish, produce, poultry and other perishable products.
Slurry ice can boost the cooling efficiency of existing cooling or freezing brine systems by up to 200% without any major changes to the system (i.e. heat exchanger, pipes, valves), and reduce the amount of energy consumed for pumping.
Advantages
Slurry ice is also used in direct contact cooling of products in food processing appli |
https://en.wikipedia.org/wiki/VMware%20VMFS | VMware VMFS (Virtual Machine File System) is VMware, Inc.'s clustered file system used by the company's flagship server virtualization suite, vSphere. It was developed to store virtual machine disk images, including snapshots. Multiple servers can read/write the same filesystem simultaneously while individual virtual machine files are locked. VMFS volumes can be logically "grown" (non-destructively increased in size) by spanning multiple VMFS volumes together.
Version history
There are six (plus one for vSAN) versions of VMFS, corresponding with ESX/ESXi Server product releases.
VMFS0 can be reported by ESX Server v6.5 as a VMFS version when a datastore is unmounted from a cluster/host.
VMFS1 was used by ESX Server v1.x. It did not feature the cluster filesystem properties and was used only by a single server at a time. VMFS1 is a flat filesystem with no directory structure.
VMFS2 is used by ESX Server v2.x and (in a limited capacity) v3.x. VMFS2 is a flat filesystem with no directory structure.
VMFS3 is used by ESX Server v3.x and vSphere 4.x. Notably, it introduces directory structure in the filesystem.
VMFS5 is used by vSphere 5.x. Notably, it raises the extent limit to 64 TB and the file size limit to 62 TB, though vSphere versions earlier than 5.5 are limited to VMDKs smaller than 2 TB.
VMFS6 is used by vSphere 6.5. It supports drives in 512-byte sector emulation (512e) mode.
VMFS-L is the underlying file system for VSAN 1.0. Leaf-level VSAN objects reside directly on VMFS-L volumes that are composed from server-side direct-attached storage (DAS). The file system format is optimized for DAS. Optimizations include aggressive caching for the DAS use case, a stripped-down lock manager and faster formats.
Features
Allows access by multiple ESXi servers at the same time by implementing per-file locking. SCSI reservations are only implemented when logical unit number (LUN) metadata is updated (e.g. file name change, file size change, etc.)
Add or delete an ESXi se |
https://en.wikipedia.org/wiki/CFD-FASTRAN | CFD-FASTRAN is a commercial Computational Fluid Dynamics (CFD) software package developed by ESI Group for aerodynamic and aerothermodynamic applications.
CFD-FASTRAN was used by the Council for Scientific and Industrial Research in South Africa to simulate the release of a missile from the outboard pylon of the BAE Hawk Mk120 at transonic speeds where shockwaves dominate the flowfield. The CFD software was used to calculate the carriage loads, structural dynamic responses from the ejection forces and model the loads on the missile in free-flight.
The CFD software was used to predict supercooled droplet impingement on helicopter blades by the Institute for Aerospace Research. This is a first step towards simulating ice formation on rotating helicopter blades.
CFD-FASTRAN was used to study the aerodynamic performance of a hypersonic vehicle powered by scramjet engines. Flow conditions were simulated at various angles of attack at Mach 5.85.
Two-dimensional numerical flow simulations were performed with CFD-FASTRAN to compare the effects of a combined jet flap and Coanda jet on a supercritical airfoil. The results showed that the combined jet flap provided the best performance.
CFD-FASTRAN was used to simulate flow past helicopter rotors in hover and forward flight conditions. The predictions matched the experimental data. |
https://en.wikipedia.org/wiki/Herping | Herping is the act of searching for amphibians or reptiles. The term, often used by professional and amateur herpetologists, comes from the word "herp", which comes from the same Greek root as herpetology, herpet-, meaning "creeping". The term herp is a shorthand used to refer to the two classes of ectothermic tetrapods (i.e., amphibians and reptiles).
Herping consists of many activities; anyone can find reptiles or amphibians while herping. The activity or technique depends on the terrain and target species. These include, but are not limited to, searching under natural cover objects (such as rocks and logs) and artificial cover objects (such as trash or construction debris), sometimes called 'flipping', as in 'flipping rocks' or 'flipping boards'; locating calling amphibians by ear, commonly done in pairs in order to triangulate on the location of the frog or toad; muddling or noodling for turtles by feeling around in mud or around objects submerged in water; dip-netting for aquatic amphibians and turtles; noosing lizards with wire or fishing line on the end of a pole; lantern walking, which involves searching habitat on foot at night; and road cruising, which refers to the practice of driving along a road slowly in search of reptiles or amphibians that are crossing the road or basking on the road surface.
General herping can take place any time, anywhere, whereas road herping or "cruising" usually takes place at dawn or dusk, or during rainstorms. Road cruising during rainstorms has a high probability of finding toads or frogs that may be otherwise impossible to find during normal conditions.
Photography
Photography of reptiles and amphibians is largely dependent on digital cameras with a macro lens. An adequate lens is necessary for successfully and efficiently capturing many species' images, as it keeps photographer and subject from being injured, as well as maintaining the natural behaviour of the subject. In some cases, it is more practical to temporarily c |
https://en.wikipedia.org/wiki/Puzzle%20Panic | Puzzle Panic, also known as Ken Uston's Puzzle Panic, is a video game created by blackjack strategist Ken Uston, Bob Polin (designer of Blue Max), and Ron Karr. It was published by Epyx in 1984 for the Atari 8-bit family and Commodore 64.
Gameplay
The player guides Benny, a light bulb, through a series of 11 puzzles, each with varying difficulty settings (a total of over 40 levels). At the completion of each level, there are a few available exits, each bearing an obscure symbol, which take Benny forward or back in the game (or possibly to repeat the level). The final level, the "Metasequence," is a cryptic puzzle with a non-explicit objective. Its original purpose was part of a contest: those who solved it correctly by the August 13, 1984 deadline could enter in a drawing to win a weekend at an Atlantic City casino with co-creator Ken Uston.
Development
A pre-release version of the game was called PuzzleMania.
Reception
Steve Panak wrote in ANALOG Computing, "Puzzle Panic is so radically different, so unlike anything else you've ever set your cathode-ray bloodshot eyes on, that there's no readily memorable program to compare it with," and called the game "addictive." He disliked the brief window for winning the contest; it had already expired by the time he played.
Fred Pinho wrote in Antic: |
https://en.wikipedia.org/wiki/Interdigital%20transducer | An interdigital transducer (IDT) is a device that consists of two interlocking comb-shaped arrays of metallic electrodes (in the fashion of a zipper). These metallic electrodes are deposited on the surface of a piezoelectric substrate, such as quartz or lithium niobate, to form a periodic structure.
Function
An IDT's primary function is to convert electric signals to surface acoustic waves (SAW) by generating periodically distributed mechanical forces via the piezoelectric effect (an input transducer).
The same principle is applied to the conversion of SAW back to electric signals (an output transducer). These processes of generation and reception of SAW can be used in different types of SAW signal processing devices, such as band pass filters, delay lines, resonators, sensors, etc.
The IDT was first proposed by Richard M. White and F. W. Voltmer in 1965. |
https://en.wikipedia.org/wiki/Clutching%20construction | In topology, a branch of mathematics, the clutching construction is a way of constructing fiber bundles, particularly vector bundles on spheres.
Definition
Consider the sphere $S^n$ as the union of the upper and lower hemispheres $D^n_+$ and $D^n_-$ along their intersection, the equator, an $S^{n-1}$.
Given trivialized fiber bundles with fiber $F$ and structure group $G$ over the two hemispheres, then given a map $f \colon S^{n-1} \to G$ (called the clutching map), glue the two trivial bundles together via $f$.
Formally, it is the coequalizer of the inclusions $S^{n-1} \times F \to D^n_+ \times F \sqcup D^n_- \times F$ via $(x, v) \mapsto (x, v) \in D^n_+ \times F$ and $(x, v) \mapsto (x, f(x)(v)) \in D^n_- \times F$: glue the two bundles together on the boundary, with a twist.
Thus we have a map $\pi_{n-1}(G) \to \operatorname{Fib}_F(S^n)$: clutching information on the equator yields a fiber bundle on the total space.
In the case of vector bundles, this yields $\pi_{n-1}(O(k)) \to \operatorname{Vect}_k(S^n)$, and indeed this map is an isomorphism (under connect sum of spheres on the right).
Generalization
The above can be generalized by replacing $D^n_\pm$ and $S^{n-1}$ with any closed triad $(X; A, B)$, that is, a space X, together with two closed subsets A and B whose union is X. Then a clutching map on $A \cap B$ gives a vector bundle on X.
Classifying map construction
Let $p \colon E \to X$ be a fibre bundle with fibre $F$. Let $\mathcal{U}$ be a collection of pairs $(U_i, q_i)$ such that $q_i \colon p^{-1}(U_i) \to U_i \times F$ is a local trivialization of $E$ over $U_i$. Moreover, we demand that the union of all the sets $U_i$ is $X$ (i.e. the collection is an atlas of trivializations).
Consider the space $\coprod_i U_i \times F$ modulo the equivalence relation: $(u_i, f_i) \in U_i \times F$ is equivalent to $(u_j, f_j) \in U_j \times F$ if and only if $u_i = u_j$ and $q_j \circ q_i^{-1}(u_i, f_i) = (u_j, f_j)$. By design, the local trivializations $q_i$ give a fibrewise equivalence between this quotient space and the fibre bundle $E$.
Consider the space $\coprod_i U_i \times \operatorname{Homeo}(F)$ modulo the equivalence relation: $(u_i, h_i) \in U_i \times \operatorname{Homeo}(F)$ is equivalent to $(u_j, h_j) \in U_j \times \operatorname{Homeo}(F)$ if and only if $u_i = u_j$ and, considering $q_j \circ q_i^{-1}$ restricted to the fibre over $u_i$ as a map $t_{ji} \in \operatorname{Homeo}(F)$, we demand that $h_j = t_{ji} \circ h_i$. That is, in our re-construction of $E$ we are replacing the fibre $F$ by the topological group of homeomorphisms of the fibre, $\operatorname{Homeo}(F)$. If the structure group of the bundle is known to reduce, you could replace $\operatorname{Homeo}(F)$ with the reduced structure group. This is a bundle over $X$ with fibre $\operatorname{Homeo}(F)$ and is a principal bundle. Denote it by $E_p$. The relation to the previous bundle is induced f |
https://en.wikipedia.org/wiki/Generalized%20Poincar%C3%A9%20conjecture | In the mathematical area of topology, the generalized Poincaré conjecture is a statement that a manifold which is a homotopy sphere is a sphere. More precisely, one fixes a category of manifolds: topological (Top), piecewise linear (PL), or differentiable (Diff). Then the statement is
Every homotopy sphere (a closed n-manifold which is homotopy equivalent to the n-sphere) in the chosen category (i.e. topological manifolds, PL manifolds, or smooth manifolds) is isomorphic in the chosen category (i.e. homeomorphic, PL-isomorphic, or diffeomorphic) to the standard n-sphere.
The name derives from the Poincaré conjecture, which was made for (topological or PL) manifolds of dimension 3, where being a homotopy sphere is equivalent to being simply connected and closed. The generalized Poincaré conjecture is known to be true or false in a number of instances, due to the work of many distinguished topologists, including the Fields medal awardees John Milnor, Steve Smale, Michael Freedman, and Grigori Perelman.
Status
Here is a summary of the status of the generalized Poincaré conjecture in various settings.
Top: true in all dimensions.
PL: true in dimensions other than 4; unknown in dimension 4, where it is equivalent to Diff.
Diff: generally false; the first known counterexample is in dimension 7. True in some dimensions including 1, 2, 3, 5, 6, 12, 56 and 61. The case of dimension 4 is equivalent to PL and is unsettled. The previous list includes all odd dimensions and all even dimensions between 6 and 62 for which the conjecture is true; it may be true for some additional even dimensions, though it is conjectured that this is not the case.
Thus the veracity of the Poincaré conjectures changes according to which category it is formulated in. More generally the notion of isomorphism differs between the categories Top, PL, and Diff. It is the same in dimension 3 and below. In dimension 4, PL and Diff agree, but Top differs. In dimensions above 6 they all differ. In di |
https://en.wikipedia.org/wiki/Somatic%20epitype | A somatic epitype is a non-heritable epigenetic alteration in a gene. It is similar to conventional epigenetics in that it does not involve changes in the DNA primary sequence. Physically, the somatic epitype corresponds to changes in DNA methylation, oxidative damage (replacement of GTP with oxo-8-dGTP), or changes in DNA-chromatin structure that are not reversed by normal cellular or nuclear repair mechanisms. Somatic epitypes alter gene expression levels without altering the amino acid sequence of the expressed protein. Current research suggests that somatic epitypes can be altered both before and after birth, and this alteration can be in response to exposure to heavy metals (such as lead), differences in maternal care, or nutritional or behavioral stress. There is no indication that somatic epitypes are heritable in a conventional epigenetic fashion. Some research suggests that methylation levels (and gene expression) can be reversed for some somatic epitypes by alterations in environmental factors such as diet.
See also
Epigenetics |
https://en.wikipedia.org/wiki/Cosmic%20Calendar | The Cosmic Calendar is a method to visualize the chronology of the universe, scaling its currently understood age of 13.8 billion years to a single year in order to help intuit it for pedagogical purposes in science education or popular science.
In this visualization, the Big Bang took place at the beginning of January 1 at midnight, and the current moment maps onto the end of December 31 just before midnight.
At this scale, there are 437.5 years per cosmic second, 1.575 million years per cosmic hour, and 37.8 million years per cosmic day.
The concept was popularized by Carl Sagan in his 1977 book The Dragons of Eden and on his 1980 television series Cosmos. Sagan goes on to extend the comparison in terms of surface area, explaining that if the Cosmic Calendar were scaled to the size of a football field, then "all of human history would occupy an area the size of [his] hand".
A similar analogy used to visualize the geologic time scale and the history of life on Earth is the Geologic Calendar.
Cosmology
The date in the year is calculated from the formula
T(days) = 365 × (1 − T_Gya / 13.797),
where T_Gya is the time before the present in billions of years.
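A small Python sketch (not from the article) applying the formula above; the choice of 2023 as a non-leap scaffold year is arbitrary, and the T_Gya values are illustrative:

```python
# Map a time T_Gya (billions of years before the present) to its
# cosmic-calendar date using T(days) = 365 * (1 - T_Gya / 13.797).
import datetime

def cosmic_date(t_gya: float) -> str:
    days = 365.0 * (1.0 - t_gya / 13.797)
    start = datetime.datetime(2023, 1, 1)   # any non-leap year works
    return (start + datetime.timedelta(days=days)).strftime("%B %d, %H:%M")

print(cosmic_date(13.797))  # Big Bang            -> January 01, 00:00
print(cosmic_date(4.54))    # Earth forms         -> around September 2
print(cosmic_date(0.0001))  # 100,000 years ago   -> December 31, 23:56
```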
Evolution of life on Earth
Human evolution
History begins
See also
List of timelines |
https://en.wikipedia.org/wiki/M%C3%BCller%20glia | Müller glia, or Müller cells, are a type of retinal glial cell, first recognized and described by Heinrich Müller. They are found in the vertebrate retina, where they serve as support cells for the neurons, as all glial cells do. They are the most common type of glial cell found in the retina. While their cell bodies are located in the inner nuclear layer of the retina, they span the entire retina.
The major role of the Müller cells is to maintain the structural and functional stability of retinal cells. This includes regulation of the extracellular environment via uptake of neurotransmitters, removal of debris, regulation of K+ levels, storage of glycogen, electrical insulation of receptors and other neurons, and mechanical support of the neural retina.
Development
Müller glia are derived developmentally from two distinct populations of cells. The Müller glia cell is the only retinal glial cell that shares a common cell lineage with retinal neurons. A subset of Müller glia has been shown to originate from neural crest cells. They are shown to be critical to the development of the retina in mice, serving as promoters of retinal growth and histogenesis, via a nonspecific esterase-mediated mechanism. Müller glia have also been implicated as guidepost cells for the developing axons of neurons in the chick retina. Studies using a zebrafish model of Usher syndrome have implicated a role for Müller glia in synaptogenesis, the formation of synapses.
Neuronal support
As glial cells, Müller glia serve a secondary but important role to neurons. As such, they have been shown to serve as important mediators of neurotransmitter (acetylcholine and GABA specifically) degradation and maintenance of a favorable retinal microenvironment in turtles. Müller glia have also been shown to be important in the induction of the enzyme glutamine synthetase in chicken embryos, which is an important actor in the regulation of glutamine and ammonia concentrations in the central ne |
https://en.wikipedia.org/wiki/Loc%20Blocs | Loc Blocs was a plastic block construction toy set. Never reaching the popularity of Lego bricks, they were marketed in the 1970s and 1980s by Entex Industries and manufactured in Japan as Dia Block by Kawada Co., which still produces sets to this day. They were also sold by Sears under their house brand Brix Blox.
Today, similar blocks are still manufactured in Japan as Diabloks and sold in the US under the name Disney Build-It.
Compatibility
The blocks were of a very similar grid pattern to the LEGO system but, due to existing LEGO patents, were slightly different. Rather than using a stud-and-tube system, Loc Blocs used a tall stud and short channels on the bottom of bricks. The tall studs were just tall enough to engage the channels. The knobs were too tall, and spaced just a little off, to fit between LEGO tubes. |
https://en.wikipedia.org/wiki/High%20time-resolution%20astrophysics | High time-resolution astrophysics (HTRA) is a branch of astronomy/astrophysics involved in measuring and studying astronomical phenomena on time scales of one second and smaller. This branch of astronomy has developed with higher-efficiency detectors and larger telescopes, which gather more photons per second, along with better computers to store and analyse the vast amounts of data acquired in one night.
Pre-existing objects such as gamma-ray burst optical transients and pulsars can now fall into this category, although this relatively new science is concentrated in the optical/infrared regime and time limits are yet to be set as to what counts as high time-resolution.
External links
Opticon:HTRA |
https://en.wikipedia.org/wiki/Uno%20%28video%20game%29 | Uno is a video game based on the card game of the same name. It has been released for a number of platforms. The Xbox 360 version by Carbonated Games and Microsoft Game Studios was released on May 9, 2006, as a digital download via Xbox Live Arcade. A version for iPhone OS and iPod devices was released in 2008 by Gameloft. Gameloft released the PlayStation 3 version on October 1, 2009, and also released a version for WiiWare, Nintendo DSi via DSiWare, and PlayStation Portable. An updated version developed by Ubisoft Chengdu and published by Ubisoft was released for the PlayStation 4 and Xbox One in August 2016, Microsoft Windows in December 2016 and for the Nintendo Switch in November 2017.
Uno's original version was well received by critics. A sequel to the game's original version, Uno Rush, was announced at E3 2008 and released in 2009.
Gameplay
Uno is a video game adaptation of the card game of the same name. For the official rules, see the rules of the physical version.
Differences between versions
Xbox 360 version
The Xbox 360 version of the game offers three different game modes including Standard Uno, Partner Uno, and House Rules Uno. In Partner Uno, players sitting across from each other join forces to form a team, so that a win by either player is a win for the team. In House Rules Uno, the rules can be tweaked and customized to the player's preference.
The Xbox 360 version of Uno offers multiplayer for up to four players through Xbox Live. Players can join or drop-out of in-progress games at any time, with computer players automatically taking over for any missing humans. The game supports the Xbox Live Vision camera, allowing opponents to view an image of the player (or whatever the camera is pointed at) while playing the game.
Theme decks
The Xbox 360 version of Uno supports downloadable content through the Xbox Live Marketplace. This content takes the form of custom theme decks, which feature new visual appearances, sound effec |
https://en.wikipedia.org/wiki/Double%20exponential%20function | A double exponential function is a constant raised to the power of an exponential function. The general formula is f(x) = a^(b^x) (where a > 1 and b > 1), which grows much more quickly than an exponential function. For example, if a = b = 10:
f(x) = 10^(10^x)
f(0) = 10
f(1) = 10^10
f(2) = 10^100 = googol
f(3) = 10^1000
f(100) = 10^(10^100) = googolplex.
Factorials grow faster than exponential functions, but much more slowly than doubly exponential functions. However, tetration and the Ackermann function grow faster. See Big O notation for a comparison of the rate of growth of various functions.
The inverse of the double exponential function is the double logarithm log(log(x)).
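A quick numerical check of this growth and its inverse, as a minimal Python sketch (the function names are illustrative):

from math import log

def double_exp(x, a=10, b=10):
    """f(x) = a**(b**x), a double exponential function."""
    return a ** (b ** x)

def double_log(y, a=10, b=10):
    """Inverse of double_exp: log_b(log_a(y))."""
    return log(log(y, a), b)

print(double_exp(2))              # 10**100, a googol
print(double_log(double_exp(2)))  # 2.0, up to floating-point rounding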
Doubly exponential sequences
A sequence of positive integers (or real numbers) is said to have doubly exponential rate of growth if the function giving the nth term of the sequence is bounded above and below by doubly exponential functions of n.
Examples include
The Fermat numbers
The harmonic primes: the primes p at which the sum of the reciprocals of the primes up to p (1/2 + 1/3 + 1/5 + ... + 1/p) first exceeds 0, 1, 2, 3, … The first few numbers, starting with 0, are 2, 5, 277, 5195977, ...
The Double Mersenne numbers
The elements of Sylvester's sequence, s_n = ⌊E^(2^(n+1)) + 1/2⌋, where E ≈ 1.264084735305302 is Vardi's constant.
The number of k-ary Boolean functions: 2^(2^k).
The prime numbers 2, 11, 1361, ... generated by ⌊A^(3^n)⌋ for n = 1, 2, 3, ..., where A ≈ 1.306377883863 is Mills' constant.
Aho and Sloane observed that in several important integer sequences, each term is a constant plus the square of the previous term. They show that such sequences can be formed by rounding to the nearest integer the values of a doubly exponential function with middle exponent 2.
Ionaşcu and Stănică describe some more general sufficient conditions for a sequence to be the floor of a doubly exponential sequence plus a constant.
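The Aho–Sloane observation can be checked numerically for Sylvester's sequence using the value of Vardi's constant quoted above; a short sketch (double precision only reproduces the first few terms):

from math import floor

E = 1.264084735305302  # Vardi's constant, as quoted above

def sylvester(n):
    """s_n = floor(E**(2**(n + 1)) + 1/2): rounding a double exponential
    with middle exponent 2, in the Aho-Sloane style."""
    return floor(E ** (2 ** (n + 1)) + 0.5)

# Matches the recurrence s_{n+1} = s_n**2 - s_n + 1 while precision lasts.
print([sylvester(n) for n in range(5)])  # [2, 3, 7, 43, 1807]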
Applications
Algorithmic complexity
In computational complexity theory, 2-EXPTIME is the class of decision problems solvable in doubly exponential time. It is equivalent to AEXPSPACE, the set of decision problems solvable by an alte |
https://en.wikipedia.org/wiki/FaceGen | FaceGen is a 3D face-generating 3D modeling middleware produced by Singular Inversions.
Approach
Although FaceGen generates conventional 3D mesh data, it uses a "parameterized" approach to defining the properties that make up a face, and by using a fixed set of parameters it is able to morph and modify a face model independently of output resolution. FaceGen 3.3 allows the user to randomize, tween, normalize and exaggerate faces, and also includes algorithms for adjusting apparent age, ethnicity and gender. It also allows limited parametric control of facial expressions, and includes a set of phoneme expressions for the animation of characters with "speaking" roles.
FaceGen can also generate 3D models from front and side images of a face, or by analyzing a single photograph.
Free versions
Free demo versions of FaceGen Artist, FaceGen 3D Print and FaceGen Modeller can be downloaded from the company's website. These allow the user to create, edit, load and save files in the program's proprietary ".fg" format. The free version offers the same functionality as the paid version, except that a logo is placed on the forehead of generated models and only a few additional assets, such as hairstyles and beards, are provided.
External links
FaceGen official homepage
Discovery Channel interview
Anatomical simulation
Windows graphics-related software
3D imaging |
https://en.wikipedia.org/wiki/Costamere | The costamere is a structural-functional component of striated muscle cells which connects the sarcomere of the muscle to the cell membrane (i.e. the sarcolemma).
Costameres are sub-sarcolemmal protein assemblies circumferentially aligned in register with the Z-disk of peripheral myofibrils. They physically couple force-generating sarcomeres with the sarcolemma in striated muscle cells and are thus considered one of several "Achilles' heels" of skeletal muscle, a critical component of striated muscle morphology which, when compromised, is thought to directly contribute to the development of several distinct myopathies.
The dystrophin-associated protein complex, also referred to as the dystrophin-associated glycoprotein complex (DGC or DAGC), contains various integral and peripheral membrane proteins such as dystroglycans and sarcoglycans, which are thought to be responsible for linking the internal cytoskeletal system of individual myofibers to structural proteins within the extracellular matrix (such as collagen and laminin). Therefore, it is one of the features of the sarcolemma which helps to couple the sarcomere to the extracellular connective tissue as some experiments have shown. Desmin protein may also bind to the DAG complex, and regions of it are known to be involved in signaling.
Structure
Costameres are highly complex networks of proteins and glycoproteins, and can be considered as consisting of two major protein complexes: the dystrophin-glycoprotein complex (DGC) and the integrin-vinculin-talin complex. The sarcoglycans of the DGC and the integrins of the integrin-vinculin-talin complex attach directly to filamin C, a component of the Z-disk; filamin C thus physically links the two complexes that constitute the costamere to the sarcomeres.
The DGC consists of peripheral a |
https://en.wikipedia.org/wiki/Coefficient%20of%20inbreeding | The coefficient of inbreeding of an individual is the probability that two alleles at any locus in an individual are identical by descent from the common ancestor(s) of the two parents.
Calculation
An individual is said to be inbred if there is a loop in its pedigree chart. A loop is defined as a path that runs from an individual up to the common ancestor through one parent and back down to the other parent, without going through any individual twice. The number of loops is always the number of common ancestors the parents have. If an individual is inbred, the coefficient of inbreeding is calculated by summing all the probabilities that an individual receives the same allele from its father's side and mother's side. As every individual has a 50% chance of passing on an allele to the next generation, the formula depends on 0.5 raised to the power of however many generations separate the individual from the common ancestor of its parents, on both the father's side and mother's side. This number of generations can be calculated by counting how many individuals lie in the loop defined earlier. Thus, the coefficient of inbreeding (f) of an individual X can be calculated with the following formula:
f_X = Σ (1/2)^(n−1) (1 + f_A), summed over all loops, where n is the number of individuals in the aforementioned loop, and f_A is the coefficient of inbreeding of the common ancestor of X's parents.
To give an example, consider the following pedigree.
In this pedigree chart, G is the progeny of C and F, and C is the biological uncle of F. To find the coefficient of inbreeding of G, first locate a loop that leads from G to the common ancestor through one parent and back down to the other parent without going through the same individual twice. There are only two such loops in this chart, as there are only 2 common ancestors of C and F. The loops are G - C - A - D - F and G - C - B - D - F, both of which have 5 members.
Because the common ancestors of the parents (A and B) are not inbred themselves, f_A = f_B = 0. Therefore the coefficient of inbreeding of G is f_G = (1/2)^(5−1) (1 + 0) + (1/2)^(5−1) (1 + 0) = 1/8. |
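A small sketch of this calculation (the function and the loop encoding are illustrative, not a standard library):

def coefficient_of_inbreeding(loops):
    """Sum (1/2)**(n - 1) * (1 + f_A) over all loops, where n is the number
    of individuals in a loop and f_A is the common ancestor's own
    coefficient of inbreeding."""
    return sum(0.5 ** (n - 1) * (1 + f_a) for n, f_a in loops)

# The pedigree above: two 5-member loops (via A and via B), neither inbred.
print(coefficient_of_inbreeding([(5, 0.0), (5, 0.0)]))  # 0.125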
https://en.wikipedia.org/wiki/Fully%20differential%20amplifier | A fully differential amplifier (FDA) is a DC-coupled high-gain electronic voltage amplifier with differential inputs and differential outputs. In its ordinary usage, the output of the FDA is controlled by two feedback paths which, because of the amplifier's high gain, almost completely determine the output voltage for any given input.
In a fully differential amplifier, common-mode noise such as power supply disturbances is rejected; this makes FDAs especially useful as part of a mixed-signal integrated circuit.
An FDA is often used to convert an analog signal into a form more suitable for driving into an analog-to-digital converter; many modern high-precision ADCs have differential inputs.
The ideal FDA
For any input voltages, the ideal FDA has infinite open-loop gain, infinite bandwidth, infinite input impedances resulting in zero input currents, infinite slew rate, zero output impedance and zero noise.
In the ideal FDA, the difference in the output voltages is equal to the difference between the input voltages multiplied by the gain. The common mode voltage of the output voltages is not dependent on the input voltage. In many cases, the common mode voltage can be directly set by a third voltage input.
Input voltage: V_in = V_in+ − V_in−
Output voltage: V_out = V_out+ − V_out− = A (V_in+ − V_in−)
Output common-mode voltage: V_ocm = (V_out+ + V_out−) / 2
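These relations are straightforward to express in code; a toy numeric sketch of the ideal equations (names and values are illustrative):

def ideal_fda(v_in_p, v_in_m, gain, v_ocm=0.0):
    """Ideal FDA: differential output = gain * (v_in_p - v_in_m),
    output common mode pinned to v_ocm regardless of the inputs."""
    v_out_diff = gain * (v_in_p - v_in_m)
    return v_ocm + v_out_diff / 2, v_ocm - v_out_diff / 2

print(ideal_fda(0.501, 0.499, gain=100, v_ocm=2.5))  # approximately (2.6, 2.4)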
A real FDA can only approximate this ideal, and the actual parameters are subject to drift over time and with changes in temperature, input conditions, etc. Modern integrated FET or MOSFET FDAs approximate more closely to these ideals than bipolar ICs where large signals must be handled at room temperature over a limited bandwidth; input impedance, in particular, is much higher, although bipolar FDAs usually exhibit superior (i.e., lower) input offset drift and noise characteristics.
Where the limitations of real devices can be ignored, an FDA can be viewed as a black box with gain; circuit function and parameters are determined by feedback, usually negative. An FDA, as implemented in practic |
https://en.wikipedia.org/wiki/Pseudonymization | Pseudonymization is a data management and de-identification procedure by which personally identifiable information fields within a data record are replaced by one or more artificial identifiers, or pseudonyms. A single pseudonym for each replaced field or collection of replaced fields makes the data record less identifiable while remaining suitable for data analysis and data processing.
Pseudonymization (or pseudonymisation, the spelling under European guidelines) is one way to comply with the European Union's General Data Protection Regulation (GDPR) demands for secure data storage of personal information. Pseudonymized data can be restored to its original state with the addition of information which allows individuals to be re-identified. In contrast, anonymization is intended to prevent re-identification of individuals within the dataset.
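A minimal sketch of one common approach, keyed hashing, in which the pseudonym is stable for analysis but re-identification requires the secret key (the field names and key handling are illustrative only):

import hmac
import hashlib

def pseudonymize(value: str, key: bytes) -> str:
    """Replace an identifying field with a keyed-hash pseudonym."""
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

key = b"keep-this-secret"   # in practice, a properly managed secret
record = {"name": "Alice Example", "diagnosis": "flu"}
record["name"] = pseudonymize(record["name"], key)
print(record)  # the same name and key always yield the same pseudonym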
Impact of Schrems II Ruling
The European Data Protection Supervisor (EDPS) on 9 December 2021 highlighted pseudonymization as the top technical supplementary measure for Schrems II compliance. Less than two weeks later, the EU Commission highlighted pseudonymization as an essential element of the equivalency decision for South Korea, which is the status that was lost by the United States under the Schrems II ruling by the Court of Justice of the European Union (CJEU).
The importance of GDPR-compliant pseudonymization increased dramatically in June 2021 when the European Data Protection Board (EDPB) and the European Commission highlighted GDPR-compliant Pseudonymisation as the state-of-the-art technical supplementary measure for the ongoing lawful use of EU personal data when using third country (i.e., non-EU) cloud processors or remote service providers under the "Schrems II" ruling by the CJEU. Under the GDPR and final EDPB Schrems II Guidance, the term pseudonymization requires a new protected “state” of data, producing a protected outcome that:
(1) Protects direct, indirect, and quasi-identifiers, together with charac |
https://en.wikipedia.org/wiki/Gamebird%20hybrids | Gamebird hybrids are the result of crossing species of game birds, including ducks, with each other and with domestic poultry. These hybrid species may sometimes occur naturally in the wild or more commonly through the deliberate or inadvertent intervention of humans.
Charles Darwin described hybrids of game birds and domestic fowl in The Variation of Animals and Plants Under Domestication:
Mr. Hewitt, who has had great experience in crossing tame cock-pheasants with fowls belonging to five breeds, gives as the character of all 'extraordinary wildness' (13/42. 'The Poultry Book' by Tegetmeier 1866 pages 165, 167.); but I have myself seen one exception to this rule. Mr. S. J. Salter (13/43. 'Natural History Review' 1863 April page 277.) who raised a large number of hybrids from a bantam-hen by Gallus sonneratii, states that 'all were exceedingly wild.' [...] utterly sterile male hybrids from the pheasant and the fowl act in the same manner, "their delight being to watch when the hens leave their nests, and to take on themselves the office of a sitter." (13/57. 'Cottage Gardener' 1860 page 379.) [...] Mr. Hewitt gives it as a general rule with fowls, that crossing the breed increases their size. He makes this remark after stating that hybrids from the pheasant and fowl are considerably larger than either progenitor: so again, hybrids from the male golden pheasant and female common pheasant "are of far larger size than either parent-bird.' (17/39. Ibid 1866 page 167; and 'Poultry Chronicle' volume 3 1855 page 15.)"
Pheasant and grouse hybrids
Hybrids have been obtained between the "ornamental" species of pheasants e.g. Lady Amherst's, silver and Reeves's pheasants.
Natural pheasant and grouse hybrids have been reported:
Capercaillie or wood grouse (Tetrao urogallus) and black grouse (Tetrao tetrix) in the UK
Dusky or blue grouse (Dendragapus obscurus) and common pheasant (Phasianus colchicus) near Portland, Oregon, United States
Sharp-tailed grouse (Tympanuch |
https://en.wikipedia.org/wiki/Effective%20circulating%20volume | The effective circulating volume (ECV) is the volume of arterial blood effectively perfusing tissue. ECV is a dynamic quantity and not a measurable, distinct compartment. This concept is useful for discussion of cardiovascular and renal physiology.
Though ECV normally varies with extracellular fluid (ECF), they become uncoupled in diseases, such as congestive heart failure (CHF) or hepatic cirrhosis. In such cases, decreased ECV may lead to volume depletion responses and edema.
Decreased ECV can stimulate renin secretion or stimulate a sympathetic nervous system response or prostaglandin release (all of which help mediate renal blood flow and glomerular filtration rate among other things).
See also
Blood plasma |
https://en.wikipedia.org/wiki/Latent%20internal%20energy | The latent internal energy of a system is the internal energy a system requires to undergo a phase transition. Its value is specific to the substance or mix of substances in question, and can also vary with temperature and pressure. Generally speaking, the value differs with the type of phase change being accomplished: examples include the latent internal energy of vaporization (liquid to vapor), of crystallization (liquid to solid), and of sublimation (solid to vapor). These values are usually expressed in units of energy per mole or per unit mass, such as J/mol or BTU/lb. Often a negative sign is used to represent energy being withdrawn from the system, while a positive value represents energy being added to the system.
For every type of latent internal energy there is an opposite: for example, the latent internal energy of freezing (liquid to solid) is equal to the negative of the latent internal energy of melting (solid to liquid). |
https://en.wikipedia.org/wiki/DesktopTwo | Desktoptwo was a free Webtop (whose URL was desktoptwo.com and which is now a parked domain) developed by Sapotek (whose URL was sapotek.com, which also is now a parked domain). It has also been called a WebOS, although Sapotek stated on its website that the term is premature and presumptuous. It mimicked the look, feel and functionality of the desktop environment of an operating system. The software only reached beta stage. It had a Spanish version called Computadora.de. Desktoptwo was web-based and required Adobe Flash Player to operate. The web applications found on Desktoptwo were built on PHP in the back end. Features included drag-and-drop functionality. Sapotek had released all the web applications found on Desktoptwo through Sapodesk under an AGPL license.
Desktoptwo belonged to a category of services that intended to turn the Web into a full-fledged platform by using web services as a foundation along with presentation technologies that replicated the experience of desktop applications for users. In a "Cloud OS" the functionality of a server was granularized and abstracted as Web services that Web developers used to create composite applications, similar to how desktop software developers use several APIs of the OS to create their applications. Sites like Facebook attempted to create a similar effect by exposing their APIs and allowing developers to create applications upon these.
Some of the features found on Desktoptwo were: File sharing, Webmail, Blog creator, Instant messenger, Address book, Calendar, RSS Reader and Office productivity applications.
Desktoptwo.com and the Sapotek website no longer operate.
See also
Web portal
Web desktop |
https://en.wikipedia.org/wiki/Utmp | utmp, wtmp, btmp and variants such as utmpx, wtmpx and btmpx are files on Unix-like systems that keep track of all logins and logouts to the system.
Format
utmp, wtmp and btmp
utmp maintains a full accounting of the current status of the system, including the system boot time (used by uptime) and records of user logins (and at which terminals), logouts, system events, etc.
wtmp acts as a historical utmp
btmp records failed login attempts
These files are not regular text files, but rather a binary format which needs to be edited by specially crafted programs. The implementation and the fields present in the file differ depending on the system or the libc version, and are defined in the utmp.h header file. The wtmp and btmp formats are exactly like utmp, except that a null value for "username" indicates a logout on the associated terminal (the actual user name is located by finding the preceding login on that terminal). Furthermore, the value "~" as a terminal name with username "shutdown" or "reboot" indicates a system shutdown or reboot (respectively).
These files are not set by any given PAM module (such as pam_unix.so or pam_sss.so) but are set by the application performing the operation (e.g. mingetty, /bin/login, or sshd). As such it is the obligation of the program itself to record the utmp information.
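As a concrete illustration, here is a hedged Python sketch that parses records assuming the 384-byte glibc layout on little-endian Linux; other systems and libc versions lay the structure out differently, so the format string below is an assumption, not a portable definition:

import struct

# Assumed glibc (Linux) utmp record layout, 384 bytes; not portable.
UTMP_FMT = "<hxxi32s4s32s256shhiii4i20x"
UTMP_SIZE = struct.calcsize(UTMP_FMT)  # 384 for this layout

def utmp_records(path="/var/run/utmp"):
    """Yield (type, pid, user, line) for each fixed-size binary record."""
    with open(path, "rb") as f:
        while (raw := f.read(UTMP_SIZE)) and len(raw) == UTMP_SIZE:
            (ut_type, pid, line, ut_id, user, host, e_term, e_exit,
             session, tv_sec, tv_usec, *addr_v6) = struct.unpack(UTMP_FMT, raw)
            yield (ut_type, pid,
                   user.rstrip(b"\0").decode(errors="replace"),
                   line.rstrip(b"\0").decode(errors="replace"))

for rec in utmp_records():   # path and layout differ across systems
    print(rec)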
utmpx, wtmpx and btmpx
Utmpx and wtmpx are extensions to the original utmp and wtmp, originating from Sun Microsystems. Utmpx is specified in POSIX. The utmp, wtmp and btmp files were never a part of any official Unix standard, such as Single UNIX Specification, while utmpx and corresponding APIs are part of it. While some systems create different newer files for the utmpx variants and have deprecated/obsoleted former formats, this is not always the case. Linux for example uses the utmpx structure in the place of the older file structure.
Location
Depending on the system, those files may commonly be found in different places (non-exhaustive list):
Linux:
/var/run/utmp
|
https://en.wikipedia.org/wiki/Frontal%20solver | A frontal solver is an approach to solving sparse linear systems which is used extensively in finite element analysis. Algorithms of this kind are variants of Gauss elimination that automatically avoid a large number of operations involving zero terms, exploiting the fact that the matrix is sparse. The development of frontal solvers is usually considered as dating back to work by Bruce Irons.
A frontal solver builds an LU or Cholesky decomposition of a sparse matrix.
Frontal solvers start with one or a few diagonal entries of the matrix, then consider all of those diagonal entries that are coupled to the first set via off-diagonal entries, and so on. In the finite element context, these consecutive sets form "fronts" that march through the domain (and consequently through the matrix, if one were to permute rows and columns of the matrix in such a way that the diagonal entries are ordered by the wave they are part of). Processing the front involves dense matrix operations, which use the CPU efficiently.
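The idea of a front of dense operations marching down the matrix can be illustrated on a banded matrix, where eliminating each pivot only touches a small dense window; a simplified sketch (a real frontal solver assembles element matrices into a dense frontal matrix, which this does not do):

import numpy as np

def banded_lu(A, bw):
    """In-place LU (no pivoting) of a matrix with bandwidth bw.
    Each step works only on the dense 'front' window near the pivot."""
    n = A.shape[0]
    for k in range(n - 1):
        hi = min(n, k + bw + 1)            # rows/columns coupled to pivot k
        front = slice(k + 1, hi)
        A[front, k] /= A[k, k]             # multipliers (column of L)
        A[front, front] -= np.outer(A[front, k], A[k, front])
    return A                               # L (below diagonal) and U packed together

# Tridiagonal example: bandwidth 1, so each front is a single entry.
A = np.array([[4.0, 1, 0], [1, 4, 1], [0, 1, 4]])
print(banded_lu(A.copy(), bw=1))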
Given that the elements of the matrix are only needed as the front marches through the matrix, it is possible (but not necessary) to provide matrix elements only as needed. For example, for matrices arising from the finite element method, one can structure the "assembly" of element matrices by assembling the matrix and eliminating equations only on a subset of elements at a time. This subset is called the front and it is essentially the transition region between the part of the system already finished and the part not touched yet. In this context, the whole sparse matrix is never created explicitly, though the decomposition of the matrix is stored. This approach was mainly used historically, when computers had little memory; in such implementations, only the front is in memory, while the factors in the decomposition are written into files. The element matrices are read from files or created as needed and discarded. More modern implementations, running on computers w |
https://en.wikipedia.org/wiki/NA48%20experiment | The NA48 experiment was a series of particle physics experiments in the field of kaon physics being carried out at the North Area of the Super Proton Synchrotron at CERN. The collaboration involved over 100 physicists mostly from Western Europe and Russia.
The construction of the NA48 experimental setup took place in the early 1990s. The primary physics goal – the search for direct CP violation – was inherited from the predecessor NA31 experiment. The physics data taking runs took place between 1997 and 2001. The discovery of the phenomenon of direct CP violation, one of the most important experimental results obtained at CERN, was announced by the collaboration in 1999. The publication of the final result was made in 2001. In addition the experiment made a contribution to studies of rare decays of neutral kaons.
The following stage of the experiment (NA48/1) was carried out in 2002 and was devoted to high precision study of rare decays of neutral kaons and hyperons. The next stage (NA48/2) was carried out in 2003–2004 and was dedicated to a large programme of studies of properties of charged kaons, including the search of direct CP violation, studies of rare decays of the charged kaon, and low-energy QCD using final state rescattering.
The successor of NA48 is the NA62 experiment, which started data collection in 2015 and is dedicated to further studies of rare decays of the charged kaon.
See also
NA31 experiment
NA62 experiment
External links
NA48 website
NA48/1 website
NA48/2 website
NA62 website
The experiments NA48 and NA62 at CERN
CERN-NA-048 experiment record on INSPIRE-HEP
CERN experiments
Particle experiments |
https://en.wikipedia.org/wiki/Unreasonable%20ineffectiveness%20of%20mathematics | The unreasonable ineffectiveness of mathematics is a phrase that alludes to the article by physicist Eugene Wigner, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences". This phrase is meant to suggest that mathematical analysis has not proved as valuable in other fields as it has in physics.
Life sciences
I. M. Gelfand, a mathematician who worked in biomathematics and molecular biology, as well as many other fields in applied mathematics, is quoted as stating,
Eugene Wigner wrote a famous essay on the unreasonable effectiveness of mathematics in natural sciences. He meant physics, of course. There is only one thing which is more unreasonable than the unreasonable effectiveness of mathematics in physics, and this is the unreasonable ineffectiveness of mathematics in biology.
An opposing view is given by Leonard Adleman, a theoretical computer scientist who pioneered the field of DNA computing. In Adleman's view, "Sciences reach a point where they become mathematized," starting at the fringes but eventually "the central issues in the field become sufficiently understood that they can be thought about mathematically. It occurred in physics about the time of the Renaissance; it began in chemistry after John Dalton developed atomic theory" and by the 1990s was taking place in biology. By the early 1990s, "Biology was no longer the science of things that smelled funny in refrigerators (my view from undergraduate days in the 1960s). The field was undergoing a revolution and was rapidly acquiring the depth and power previously associated exclusively with the physical sciences. Biology was now the study of information stored in DNA - strings of four letters: A, T, G, and C and the transformations that information undergoes in the cell. There was mathematics here!"
Economics and finance
K. Vela Velupillai wrote of The unreasonable ineffectiveness of mathematics in economics. To him "the headlong rush with which economists have equipped themselves with |
https://en.wikipedia.org/wiki/Anterolateral%20sulcus%20of%20spinal%20cord | The anterolateral sulcus of the spinal cord is a landmark on the anterior side of the spinal cord. It denotes the location at which the ventral fibers leave the spinal cord.
The anterolateral sulcus is less visible than the posterolateral sulcus.
See also
Anterolateral sulcus of medulla |
https://en.wikipedia.org/wiki/Posterolateral%20sulcus%20of%20medulla%20oblongata | The accessory, vagus, and glossopharyngeal nerves correspond with the posterior nerve roots, and are attached to the bottom of a sulcus named the posterolateral sulcus (or dorsolateral sulcus).
Additional images |
https://en.wikipedia.org/wiki/Golomb%20sequence | In mathematics, the Golomb sequence, named after Solomon W. Golomb (but also called Silverman's sequence), is a monotonically increasing integer sequence where an is the number of times that n occurs in the sequence, starting with a1 = 1, and with the property that for n > 1 each an is the smallest unique integer which makes it possible to satisfy the condition. For example, a1 = 1 says that 1 only occurs once in the sequence, so a2 cannot be 1 too, but it can be 2, and therefore must be 2. The first few values are
1, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 6, 7, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 9, 9, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12 .
Examples
a1 = 1
Therefore, 1 occurs exactly one time in this sequence.
a2 > 1
a2 = 2
2 occurs exactly 2 times in this sequence.
a3 = 2
3 occurs exactly 2 times in this sequence.
a4 = a5 = 3
4 occurs exactly 3 times in this sequence.
5 occurs exactly 3 times in this sequence.
a6 = a7 = a8 = 4
a9 = a10 = a11 = 5
etc.
Recurrence
Colin Mallows has given an explicit recurrence relation a(n + 1) = 1 + a(n + 1 − a(a(n))). An asymptotic expression for a_n is
a_n → φ^(2−φ) n^(φ−1) as n → ∞,
where φ is the golden ratio (approximately equal to 1.618034). |
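Mallows' recurrence translates directly into code; a short sketch:

def golomb(n):
    """First n terms via a(k + 1) = 1 + a(k + 1 - a(a(k))), with a(1) = 1."""
    a = [0, 1]                      # index 0 unused; a[1] = 1
    for k in range(1, n):
        a.append(1 + a[k + 1 - a[a[k]]])
    return a[1:]

print(golomb(15))  # [1, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 6]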
https://en.wikipedia.org/wiki/List%20of%20web%20service%20frameworks | A list of web service frameworks:
See also
Comparison of web frameworks
List of web service specifications
List of web service protocols
Web service
Java view technologies and frameworks
List of application servers
Web services |
https://en.wikipedia.org/wiki/Payment%20Card%20Industry%20Data%20Security%20Standard | The Payment Card Industry Data Security Standard (PCI DSS) is an information security standard used to handle credit cards from major card brands. The standard is administered by the Payment Card Industry Security Standards Council, and its use is mandated by the card brands. It was created to better control cardholder data and reduce credit card fraud. Validation of compliance is performed annually or quarterly with a method suited to the volume of transactions:
Self-assessment questionnaire (SAQ)
Firm-specific Internal Security Assessor (ISA)
External Qualified Security Assessor (QSA)
History
The major card brands had five different security programs:
Visa's Cardholder Information Security Program
Mastercard's Site Data Protection
American Express's Data Security Operating Policy
Discover's Information Security and Compliance
JCB's Data Security Program
The intentions of each were roughly similar: to create an additional level of protection for card issuers by ensuring that merchants meet minimum levels of security when they store, process, and transmit cardholder data. To address interoperability problems among the existing standards, the combined effort by the principal credit-card organizations resulted in the release of version 1.0 of PCI DSS in December 2004. PCI DSS has been implemented and followed worldwide.
The Payment Card Industry Security Standards Council (PCI SSC) was then formed, and these companies aligned their policies to create the PCI DSS. MasterCard, American Express, Visa, JCB International and Discover Financial Services established the PCI SSC in September 2006 as an administrative and governing entity which mandates the evolution and development of the PCI DSS. Independent private organizations can participate in PCI development after they register. Each participating organization joins a SIG (Special Interest Group) and contributes to activities mandated by the group. The following versions of the PCI DSS have been made available:
R |
https://en.wikipedia.org/wiki/Diffraction%20topography | Diffraction topography (short: "topography") is an imaging technique based on Bragg diffraction.
Diffraction topographic images ("topographies") record the intensity profile of a beam of X-rays (or, sometimes, neutrons) diffracted by a crystal.
A topography thus represents a two-dimensional spatial intensity mapping of reflected X-rays, i.e. the spatial fine structure of a Laue reflection.
This intensity mapping reflects the distribution of scattering power inside the crystal; topographs therefore reveal the irregularities in a non-ideal crystal lattice.
X-ray diffraction topography is one variant of X-ray imaging, making use of diffraction contrast rather than absorption contrast, which is usually used in radiography and computed tomography (CT). Topography is exploited to a lesser extent with neutrons, and has similarities to dark-field imaging in the electron microscopy community.
Topography is used for monitoring crystal quality and visualizing defects in many different crystalline materials.
It has proved helpful e.g. when developing new crystal growth methods, for monitoring growth and the crystal quality achieved, and for iteratively optimizing growth conditions.
In many cases, topography can be applied without preparing or otherwise damaging the sample; it is therefore one variant of non-destructive testing.
History
After the discovery of x-rays by Wilhelm Röntgen in 1895, and of the principles of X-ray diffraction by Laue and the Bragg family, it still took several decades for the benefits of diffraction imaging to be fully recognized, and the first useful experimental techniques to be developed. First systematic reports on laboratory topography techniques date from the early 1940s. In the 1950s and 1960s, topographic investigations played a role in detecting the nature of defects and improving crystal growth methods for Germanium and (later) Silicon as materials for semiconductor microelectronics.
For a more detailed account of the historical developme |
https://en.wikipedia.org/wiki/Anterior%20median%20fissure%20of%20the%20medulla%20oblongata | The anterior median fissure (ventral or ventromedian fissure) contains a fold of pia mater and extends along the entire length of the medulla oblongata. It ends at the lower border of the pons in a small triangular expansion, termed the foramen cecum.
Its lower part is interrupted by bundles of fibers that cross obliquely from one side to the other, and constitute the pyramidal decussation.
Some fibers, termed the anterior external arcuate fibers, emerge from the fissure above this decussation and curve lateralward and upward over the surface of the medulla oblongata to join the inferior peduncle.
Additional images |
https://en.wikipedia.org/wiki/Border%20Gateway%20Multicast%20Protocol | The Border Gateway Multicast Protocol (BGMP) was an IETF project which attempted to design a true inter-domain multicast routing protocol. BGMP was planned to be able to scale in order to operate in the global Internet. |
https://en.wikipedia.org/wiki/Cerebellar%20tonsil | The cerebellar tonsil (Latin: tonsilla cerebelli) is a rounded lobule on the undersurface of each cerebellar hemisphere, continuous medially with the uvula of the cerebellar vermis and bounded superiorly by the flocculonodular lobe. Synonyms include tonsilla cerebelli and amygdala cerebelli, the latter of which is not to be confused with the cerebral tonsils or amygdala nuclei located deep within the medial temporal lobes of the cerebral cortex.
The flocculonodular lobe of the cerebellum, which can also be confused for the cerebellar tonsils, is one of three lobes that make up the overall composition of the cerebellum. The cerebellum consists of three anatomical and functional lobes: anterior lobe, posterior lobe, and flocculonodular lobe.
The cerebellar tonsil is part of the posterior lobe, also known as the neocerebellum, which is responsible for coordinating the voluntary movement of the distal parts of limbs.
Elongation of the cerebellar tonsils can, due to pressure, cause this portion of the cerebellum to slip or be pushed through the foramen magnum of the skull, resulting in tonsillar herniation. This is a life-threatening condition, as it causes increased pressure on the medulla oblongata, which contains respiratory and cardiac control centres. A congenital condition involving tonsillar herniation of either one or both tonsils is Chiari malformation.
Pathology
A Type I Chiari malformation is a congenital anomaly of the brain in which the cerebellar tonsils are elongated and pushed down through the opening of the base of the skull (see foramen magnum), blocking the flow of cerebrospinal fluid (CSF) as it exits through the medial and lateral apertures of the fourth ventricle. Also called cerebellar tonsillar ectopia, or tonsillar herniation. Although often congenital, Chiari malformation symptoms can also be induced due to physical head trauma, commonly from raised intracranial pressure secondary to a hematoma, or increased dural strain pulling the brain caudally into the f |
https://en.wikipedia.org/wiki/Biological%20organisation | Biological organisation is the organisation of complex biological structures and systems that define life using a reductionistic approach. The traditional hierarchy, as detailed below, extends from atoms to biospheres. The higher levels of this scheme are often referred to as an ecological organisation concept, or as the field, hierarchical ecology.
Each level in the hierarchy represents an increase in organisational complexity, with each "object" being primarily composed of the previous level's basic unit. The basic principle behind the organisation is the concept of emergence: the properties and functions found at a hierarchical level are not present, and are irrelevant, at the lower levels.
The biological organisation of life is a fundamental premise for numerous areas of scientific research, particularly in the medical sciences. Without this necessary degree of organisation, it would be much more difficult—and likely impossible—to apply the study of the effects of various physical and chemical phenomena to diseases and physiology (body function). For example, fields such as cognitive and behavioral neuroscience could not exist if the brain was not composed of specific types of cells, and the basic concepts of pharmacology could not exist if it was not known that a change at the cellular level can affect an entire organism. These applications extend into the ecological levels as well. For example, DDT's direct insecticidal effect occurs at the subcellular level, but affects higher levels up to and including multiple ecosystems. Theoretically, a change in one atom could change the entire biosphere.
Levels
The simple standard biological organisation scheme, from the lowest level to the highest level, is as follows:
More complex schemes incorporate many more levels. For example, a molecule can be viewed as a grouping of elements, and an atom can be further divided into subatomic particles (these levels are outside the scope of biological organisation). Each level can |
https://en.wikipedia.org/wiki/Paragon%20Space%20Development%20Corporation | Paragon Space Development Corporation is an American company headquartered in Tucson, Arizona. Paragon is a provider of environmental controls for extreme and hazardous environments. They design, build, test and operate life-support systems and leading thermal-control products for astronauts, contaminated water divers, and other extreme environment explorers, as well as for uncrewed space and terrestrial applications.
History
Paragon was conceived to combine the expertise of biology, chemistry and aerospace engineering to develop technical solutions to life support and thermal control problems related to human and biological spaceflight.
Paragon was founded by six principal partners including Grant Anderson, Taber MacCallum, Jane Poynter, Dave Bearden, Max Nelson, and Alicia (Cesa) Pederson.
Prior to co-founding Paragon, Anderson was employed at Lockheed Martin, Sunnyvale, California; MacCallum and Poynter were members of Biosphere 2 in Oracle, Arizona; David was at The Aerospace Corporation, El Segundo, California (where he is still employed); Max was at the RAND Corporation; and Cesa was a manager at Lockheed Martin. MacCallum served as CEO of Paragon from its inception until his move to serve as chief technology officer of World View Enterprises, Inc., a company incubated by Paragon. Jane, formerly president and chairman of the board of Paragon and former World View CEO, and Taber are now co-CEOs of Space Perspective. Taber, Max, David and Grant had all previously attended International Space University Summer Sessions, through which they became connected.
Current projects
Paragon is providing the CST-100 Humidity Control Subassembly (HCS) for cabin atmospheric humidity control of the Boeing Commercial Crew Transportation System (CCTS) and Crew Space Transportation (CST)-100 spacecraft.
Paragon, a Lockheed Martin subcontractor on the NASA Orion program, provided the tubing for life-support systems including oxygen, heating and cooling and critical sensor pac |
https://en.wikipedia.org/wiki/Horizontal%20correlation | Horizontal correlation is a methodology for gene sequence analysis. Rather than referring to one specific technique, horizontal correlation instead encompasses a variety of approaches to sequence analysis that are unified by two specific themes:
Sequence analysis is performed by making comparisons horizontally, along the length of a single genetic sequence; this is in contrast to vertical methods that make comparisons across several different genetic sequences.
The comparisons made generally measure information theoretic quantities such as value of the mutual information function between two regions of the sequence.
The core ideas of the horizontal correlation approach were first presented in a year 2000 paper by Grosse, Herzel, Buldyrev, and Stanley (Grosse, et al. 2000). In this first formulation, Grosse and colleagues sought to characterize a large genetic sequence by dividing the sequence into coding and non-coding regions. Whereas traditional approaches to the coding-vs.-non-coding problem generally relied on sophisticated pattern recognition systems that were first trained on small inputs and then run over the entire sequence (Ohler, et al. 1999), the horizontal correlation approach of Grosse and colleagues worked instead by breaking the sequence into many relatively short sequence fragments, each only 500 base pairs in length. They then sought to characterize each of these fragments as either coding or non-coding. This was accomplished by comparing each size 3 window along the length of a fragment with the first size 3 window in that fragment, then measuring the value of the mutual information function between the two windows. Coding sequences were found to display a stylized pattern of 3-periodicity that non-coding sequences did not. Such a pattern was easy to recognize, and enabled significantly more rapid, more species-independent identification of coding regions (Grosse, et al. 2000).
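A toy sketch of the information-theoretic comparison: estimate the mutual information between bases separated by a fixed distance and look for the 3-periodic signal. The generated sequence and the window handling here are illustrative, not the exact procedure of Grosse et al.:

import random
from collections import Counter
from math import log2

def mutual_information(pairs):
    """I(X;Y) in bits, estimated from a list of (x, y) symbol pairs."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2(n * c / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def mi_at_distance(seq, k):
    return mutual_information([(seq[i], seq[i + k]) for i in range(len(seq) - k)])

# Coding-like toy sequence: first codon position biased (always G), others uniform.
random.seed(0)
seq = "".join("G" + random.choice("ACGT") + random.choice("ACGT") for _ in range(1000))
for k in (3, 4, 5, 6):
    print(k, round(mi_at_distance(seq, k), 3))  # noticeably higher at k = 3 and 6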
Since 2000, horizontal correlation methodologies emphasizing the m |
https://en.wikipedia.org/wiki/Thermal%20oxidation | In microfabrication, thermal oxidation is a way to produce a thin layer of oxide (usually silicon dioxide) on the surface of a wafer. The technique forces an oxidizing agent to diffuse into the wafer at high temperature and react with it. The rate of oxide growth is often predicted by the Deal–Grove model. Thermal oxidation may be applied to different materials, but most commonly involves the oxidation of silicon substrates to produce silicon dioxide.
The chemical reaction
Thermal oxidation of silicon is usually performed at a temperature between 800 and 1200 °C, resulting in a so-called High Temperature Oxide (HTO) layer. It may use either water vapor (usually UHP steam) or molecular oxygen as the oxidant; it is consequently called either wet or dry oxidation. The reaction is one of the following:
Si + 2 H2O → SiO2 + 2 H2 (wet oxidation)
Si + O2 → SiO2 (dry oxidation)
The oxidizing ambient may also contain several percent of hydrochloric acid (HCl). The chlorine removes metal ions that may occur in the oxide.
Thermal oxide incorporates silicon consumed from the substrate and oxygen supplied from the ambient. Thus, it grows both down into the wafer and up out of it. For every unit thickness of silicon consumed, 2.17 unit thicknesses of oxide will appear. If a bare silicon surface is oxidized, 46% of the oxide thickness will lie below the original surface, and 54% above it.
Deal-Grove model
According to the commonly used Deal-Grove model, the time τ required to grow an oxide of thickness X_o, at a constant temperature, on a bare silicon surface, is: τ = X_o^2/B + X_o/(B/A)
where the constants A and B relate to properties of the reaction and the oxide layer, respectively. This model has further been adapted to account for self-limiting oxidation processes, as used for the fabrication and morphological design of Si nanowires and other nanostructures.
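For illustration, the time formula is easy to evaluate; a sketch with placeholder rate constants (A and B depend strongly on temperature and ambient, so the numbers below are purely illustrative):

def oxidation_time(x_o, A, B):
    """Deal-Grove time to grow thickness x_o on bare silicon:
    tau = x_o**2 / B + x_o / (B / A)."""
    return x_o ** 2 / B + x_o / (B / A)

# Purely illustrative constants (units: micrometres and hours).
A, B = 0.5, 0.2
print(oxidation_time(0.1, A, B))  # hours to grow a 0.1 um oxide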
If a wafer that already contains oxide is placed in an oxidizing ambient, this equation must be modified by adding a corrective term τ, the time that would have been required to grow the pre-existing oxi |
https://en.wikipedia.org/wiki/Contraposition | In logic and mathematics, contraposition refers to the inference from a conditional statement to its logically equivalent contrapositive, and an associated proof method known as proof by contraposition. The contrapositive of a statement has its antecedent and consequent negated and swapped.
Conditional statement: P → Q. In formulas: the contrapositive of P → Q is ¬Q → ¬P.
If P, Then Q. — If not Q, Then not P. "If it is raining, then I wear my coat" — "If I don't wear my coat, then it isn't raining."
The law of contraposition says that a conditional statement is true if, and only if, its contrapositive is true.
The contrapositive (¬Q → ¬P) can be compared with three other statements:
Inversion (the inverse), "If it is not raining, then I don't wear my coat." Unlike the contrapositive, the inverse's truth value is not at all dependent on whether or not the original proposition was true, as evidenced here.
Conversion (the converse), "If I wear my coat, then it is raining." The converse is actually the contrapositive of the inverse, and so always has the same truth value as the inverse (which as stated earlier does not always share the same truth value as that of the original proposition).
Negation (the logical complement), "It is not the case that if it is raining then I wear my coat.", or equivalently, "Sometimes, when it is raining, I don't wear my coat." If the negation is true, then the original proposition (and by extension the contrapositive) is false.
Note that if P → Q is true and one is given that Q is false (i.e., ¬Q), then it can logically be concluded that P must also be false (i.e., ¬P). This is often called the law of contrapositive, or the modus tollens rule of inference.
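The equivalence is finite and easy to verify exhaustively; a small truth-table check:

from itertools import product

def implies(p, q):          # material conditional p -> q
    return (not p) or q

for p, q in product((False, True), repeat=2):
    assert implies(p, q) == implies(not q, not p)   # the contrapositive agrees
print("p -> q and (not q) -> (not p) agree on all truth assignments")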
Intuitive explanation
In the Euler diagram shown, if something is in A, it must be in B as well. So we can interpret "all of A is in B" as: A → B
It is also clear that anything that is not within B (the blue region) cannot be within A, either. This statement, which can be expressed as: ¬B → ¬A
is the contrapositive of the statement above. |
https://en.wikipedia.org/wiki/Parallels%20Desktop%20for%20Mac | Parallels Desktop for Mac is software providing hardware virtualization for Macintosh computers with Intel processors, and since version 16.5 also for Apple silicon-based Macintosh computers. It is developed by Parallels, since 2018 a subsidiary of Corel.
History
Released on June 15, 2006, it was the first software product to bring mainstream virtualization to Macintosh computers utilizing the Apple–Intel architecture (earlier software products ran PC software in an emulated environment).
Its name initially was 'Parallels Workstation for Mac OS X', which was consistent with the company's corresponding Linux and Windows products. This name was not well received within the Mac community, where some felt that the name, particularly the term "workstation", evoked the aesthetics of a Windows product. Parallels agreed: "Since we've got a great Mac product, we should make it look and sound like a Mac product..."; it was therefore renamed 'Parallels Desktop for Mac'.
On January 10, 2007, Parallels Desktop 3.0 for Mac was awarded “Best in Show” at MacWorld 2007.
Technical
Parallels Desktop for Mac is hardware emulation virtualization software, using hypervisor technology that works by mapping the host computer's hardware resources directly to the virtual machine's resources. Each virtual machine thus operates identically to a standalone computer, with virtually all the resources of a physical computer. Because all guest virtual machines use the same hardware drivers irrespective of the actual hardware on the host computer, virtual machine instances are highly portable between computers. For example, a running virtual machine can be stopped, copied to another physical computer, and restarted.
Parallels Desktop for Mac is able to virtualize a full set of standard PC hardware, including
A virtualized CPU of the same type as the host's physical processor,
ACPI compliance system,
A generic motherboard compatible with the Intel i965 chipset,
Up to 64 GB of RAM for guest |
https://en.wikipedia.org/wiki/Drude%20particle | Drude particles are model oscillators used to simulate the effects of electronic polarizability in the context of a classical molecular mechanics force field. They are inspired by the Drude model of mobile electrons and are used in the computational study of proteins, nucleic acids, and other biomolecules.
Classical Drude oscillator
Most force fields in current practice represent individual atoms as point particles interacting according to the laws of Newtonian mechanics. To each atom, a single electric charge is assigned that doesn't change during the course of the simulation. However, such models cannot have induced dipoles or other electronic effects due to a changing local environment.
Classical Drude particles are massless virtual sites carrying a partial electric charge, attached to individual atoms via a harmonic spring. The spring constant and relative partial charges on the atom and associated Drude particle determine its response to the local electrostatic field, serving as a proxy for the changing distribution of the electronic charge of the atom or molecule. However, this response is limited to a changing dipole moment. This response is not enough to model interactions in environments with large field gradients, which interact with higher order moments.
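The linear response is compact to state in code; a minimal sketch in reduced units (the parameter values are illustrative, not from any published force field):

# Classical Drude oscillator in reduced units; values are illustrative.
k_spring = 1000.0   # harmonic force constant between atom and Drude particle
q_drude = -1.0      # partial charge carried by the Drude particle

def induced_dipole(e_field):
    """At equilibrium the spring force balances qE, so the displacement is
    d = q * E / k and the induced dipole is mu = q * d = q**2 * E / k;
    the effective polarizability is therefore alpha = q**2 / k."""
    d = q_drude * e_field / k_spring
    return q_drude * d

print(induced_dipole(5.0))   # 0.005 = (q**2 / k) * E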
Efficiency of simulation
The major computational cost of simulating classical Drude oscillators is the calculation of the local electrostatic field and the repositioning of the Drude particle at each step. Traditionally, this repositioning is done self-consistently. This cost can be reduced by assigning a small mass to each Drude particle, applying a Lagrangian transformation and evolving the simulation in the generalised coordinates. This method of simulation has been used to create water models incorporating classical Drude oscillators.
Quantum Drude oscillator
Since the response of a classical Drude oscillator is limited, it is not enough to model interactions in heterogeneous media with large fiel |
https://en.wikipedia.org/wiki/Deal%E2%80%93Grove%20model | The Deal–Grove model mathematically describes the growth of an oxide layer on the surface of a material. In particular, it is used to predict and interpret thermal oxidation of silicon in semiconductor device fabrication. The model was first published in 1965 by Bruce Deal and Andrew Grove of Fairchild Semiconductor, building on Mohamed M. Atalla's work on silicon surface passivation by thermal oxidation at Bell Labs in the late 1950s. This served as a step in the development of CMOS devices and the fabrication of integrated circuits.
Physical assumptions
The model assumes that the oxidation reaction occurs at the interface between the oxide layer and the substrate material, rather than between the oxide and the ambient gas. Thus, it considers three phenomena that the oxidizing species undergoes, in this order:
It diffuses from the bulk of the ambient gas to the surface.
It diffuses through the existing oxide layer to the oxide-substrate interface.
It reacts with the substrate.
The model assumes that each of these stages proceeds at a rate proportional to the oxidant's concentration. In the first step, this means Henry's law; in the second, Fick's law of diffusion; in the third, a first-order reaction with respect to the oxidant. It also assumes steady state conditions, i.e. that transient effects do not appear.
Results
Given these assumptions, the flux of oxidant through each of the three phases can be expressed in terms of concentrations, material properties, and temperature.
By setting the three fluxes equal to each other (F1 = F2 = F3, where F1 = h (C* − C_s) describes transport from the gas, F2 = D (C_s − C_i) / x diffusion through an oxide of thickness x, and F3 = k C_i the reaction at the interface), the following relations can be derived:
C_i = C* / (1 + k/h + kx/D), C_s = C_i (1 + kx/D).
Assuming a diffusion-controlled growth, i.e. where the diffusive flux F2 determines the growth rate, and substituting C_s and C_i in terms of C* from the above two relations into the F2 and F3 equations respectively, one obtains:
F = k C* / (1 + k/h + kx/D).
If N is the concentration of the oxidant inside a unit volume of the oxide, then the oxide growth rate can be written in the form of a differential equation, N dx/dt = F. The solution to this equation gives the oxide thickness at any oxidation time. |
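A numeric sketch of the resulting growth law x^2 + Ax = B(t + τ), solved for the thickness (the constants below are placeholders, not measured values):

from math import sqrt

def oxide_thickness(t, A, B, tau=0.0):
    """Positive root of x**2 + A*x = B*(t + tau), the Deal-Grove growth law."""
    return (-A + sqrt(A * A + 4.0 * B * (t + tau))) / 2.0

# Placeholder constants; note the short-time linear and long-time
# parabolic limits: x ~ (B/A) * t for small t, x ~ sqrt(B * t) for large t.
A, B = 0.5, 0.2
for t in (0.1, 1.0, 10.0, 100.0):
    print(t, round(oxide_thickness(t, A, B), 4))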
https://en.wikipedia.org/wiki/Flexible%20shaft | A flexible shaft, often referred to as a flex shaft, is a device for transmitting rotary motion between two objects which are not fixed relative to one another. It consists of a rotating wire rope or coil which is flexible but has some torsional stiffness. It may or may not have a covering, which also bends but does not rotate. It may transmit considerable power, or only motion, with negligible power.
Flexible shafts are commonly used in plumber's snakes. They are popular accessories for handheld rotary tools, and integral parts of rotary tools with a remote motor, which are called "flexible shaft tools". They are used to transmit power to some sheep shears. They are also sold to connect panel knobs to remote potentiometers or other variable electronic components. Flexible shaft tools are used frequently in the dental and jewelry industry, as well as other industrial applications.
See also
Driveshaft
John K. Stewart
Bowden cable |
https://en.wikipedia.org/wiki/Algebraic%20fraction | In algebra, an algebraic fraction is a fraction whose numerator and denominator are algebraic expressions. Two examples of algebraic fractions are 3x/(x^2 + 2x − 3) and √(x + 2)/(x − 3). Algebraic fractions are subject to the same laws as arithmetic fractions.
A rational fraction is an algebraic fraction whose numerator and denominator are both polynomials. Thus 3x/(x^2 + 2x − 3) is a rational fraction, but √(x + 2)/(x − 3) is not, because the numerator contains a square root function.
Terminology
In the algebraic fraction a/b, the dividend a is called the numerator and the divisor b is called the denominator. The numerator and denominator are called the terms of the algebraic fraction.
A complex fraction is a fraction whose numerator or denominator, or both, contains a fraction. A simple fraction contains no fraction either in its numerator or its denominator. A fraction is in lowest terms if the only factor common to the numerator and the denominator is 1.
An expression which is not in fractional form is an integral expression. An integral expression can always be written in fractional form by giving it the denominator 1. A mixed expression is the algebraic sum of one or more integral expressions and one or more fractional terms.
Rational fractions
If the expressions a and b are polynomials, the algebraic fraction is called a rational algebraic fraction or simply rational fraction. Rational fractions are also known as rational expressions. A rational fraction f(x)/g(x) is called proper if deg f(x) < deg g(x), and improper otherwise. For example, the rational fraction 2x/(x^2 − 1) is proper, and the rational fractions (x^3 + x)/(x^2 − 1) and (x^2 + 4)/(x − 2) are improper. Any improper rational fraction can be expressed as the sum of a polynomial (possibly constant) and a proper rational fraction. In the first example of an improper fraction one has
(x^3 + x)/(x^2 − 1) = x + 2x/(x^2 − 1),
where the second term is a proper rational fraction. The sum of two proper rational fractions is a proper rational fraction as well. The reverse process of expressing a proper rational fraction as the sum of two or more fractions is called resolving it into partial fractions. |
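Symbolic algebra systems perform this decomposition directly; a short sketch using SymPy's apart (assuming SymPy is installed; the example fraction matches the improper one above):

import sympy as sp

x = sp.symbols("x")
improper = (x**3 + x) / (x**2 - 1)          # deg(num) >= deg(den): improper

print(sp.apart(improper, x))                # x + 1/(x - 1) + 1/(x + 1)
print(sp.together(sp.apart(improper, x)))   # recombine into a single fraction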
https://en.wikipedia.org/wiki/Multi-layer%20insulation | Multi-layer insulation (MLI) is thermal insulation composed of multiple layers of thin sheets and is often used on spacecraft and in cryogenics. Also referred to as superinsulation, MLI is one of the main items of spacecraft thermal design, primarily intended to reduce heat loss by thermal radiation. In its basic form, it does not appreciably insulate against other thermal losses such as heat conduction or convection. It is therefore commonly used on satellites and in other vacuum applications where conduction and convection are much less significant and radiation dominates. MLI gives many satellites and other space probes the appearance of being covered with gold foil, which is the effect of the amber-coloured Kapton layer deposited over the silvery aluminized Mylar.
For non-spacecraft applications, MLI works only as part of a vacuum insulation system. For use in cryogenics, wrapped MLI can be installed inside the annulus of vacuum jacketed pipes. MLI may also be combined with advanced vacuum insulation for use in high temperature applications.
Function and design
The principle behind MLI is radiation balance. To see why it works, start with a concrete example: imagine a square meter of a surface in outer space, held at a fixed temperature of 300 K, with an emissivity of 1, facing away from the sun or other heat sources. From the Stefan–Boltzmann law, this surface will radiate 460 W. Now imagine placing a thin (but opaque) layer 1 cm away from the plate, also with an emissivity of 1. This new layer will cool until it is radiating 230 W from each side, at which point everything is in balance. The new layer receives 460 W from the original plate. 230 W is radiated back to the original plate, and 230 W to space. The original surface still radiates 460 W, but gets 230 W back from the new layer, for a net loss of 230 W. So overall, the radiation losses from the surface have been reduced by half by adding the additional layer.
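The arithmetic of this example, and the general 1/(n + 1) reduction from n ideal radiation shields, can be checked with a few lines (ideal emissivity-1 layers assumed):

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiated_power(T, emissivity=1.0, area=1.0):
    """Stefan-Boltzmann law: P = eps * sigma * A * T**4."""
    return emissivity * SIGMA * area * T ** 4

P0 = radiated_power(300.0)             # ~460 W from the bare 300 K surface
for n in (0, 1, 2, 10):
    # n ideal shields cut the net radiative loss to P0 / (n + 1).
    print(n, "layers:", round(P0 / (n + 1)), "W")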
More layers can be added to red |
https://en.wikipedia.org/wiki/Hin%20recombinase | Hin recombinase is a 21kD protein composed of 198 amino acids that is found in the bacteria Salmonella. Hin belongs to the serine recombinase family (B2) of DNA invertases in which it relies on the active site serine to initiate DNA cleavage and recombination. The related protein, gamma-delta resolvase shares high similarity to Hin, of which much structural work has been done, including structures bound to DNA and reaction intermediates. Hin functions to invert a 900 base pair (bp) DNA segment within the salmonella genome that contains a promoter for downstream flagellar genes, fljA and fljB. Inversion of the intervening DNA alternates the direction of the promoter and thereby alternates expression of the flagellar genes. This is advantageous to the bacterium as a means of escape from the host immune response.
Hin functions by binding as a homodimer to two 26 bp imperfect inverted repeat sequences. These Hin binding sites flank the invertible segment, which not only encodes the hin gene itself but also contains an enhancer element to which the bacterial Fis protein binds with nanomolar affinity. Four molecules of Fis bind to this site as homodimers and are required for the recombination reaction to proceed.
The initial reaction requires binding of Hin and Fis to their respective DNA sequences and assembly into a higher-order nucleoprotein complex with branched plectonemic supercoils, with the aid of the DNA-bending protein HU. At this point, it is believed that the Fis protein modulates subtle contacts to activate the reaction, possibly through direct interactions with the Hin protein. Activation of the 4 catalytic serine residues within the Hin tetramer makes a 2-bp double-stranded DNA break and forms a covalent reaction intermediate. The DNA cleavage event also requires the divalent metal cation magnesium. A large conformational change reveals a hydrophobic interface that allows for subunit rotation, which may be driven by superhelical torsion within the pr
https://en.wikipedia.org/wiki/Nucleogenic | A nucleogenic isotope, or nuclide, is one that is produced by a natural terrestrial nuclear reaction, other than a reaction beginning with cosmic rays (the latter nuclides by convention are called by the different term cosmogenic). The nuclear reaction that produces nucleogenic nuclides is usually interaction with an alpha particle or the capture of fission or thermal neutrons. Some nucleogenic isotopes are stable and others are radioactive.
Example
An example of a nucleogenic nuclide is neon-21 produced from neon-20 that absorbs a thermal neutron (though some neon-21 is also primordial). Other nucleogenic reactions that produce heavy neon isotopes are (n,α) reactions (fast-neutron capture followed by alpha emission), starting with magnesium-24 and magnesium-25, respectively. The source of the neutrons in these reactions is often secondary neutrons produced by alpha radiation from natural uranium and thorium in rock.
Types
Because nucleogenic isotopes are produced later than the birth of the solar system (and the nucleosynthetic events that preceded it), they are, by definition, not primordial nuclides. However, nucleogenic isotopes should not be confused with the much more common radiogenic nuclides, which are also younger than primordial nuclides but arise as simple daughter isotopes from radioactive decay. Nucleogenic isotopes, as noted, are the result of a more complicated nuclear reaction, although such reactions may begin with a radioactive decay event.
Alpha particles that produce nucleogenic reactions come from natural alpha particle emitters in uranium and thorium decay chains. Neutrons to produce nucleogenic nuclides may be produced by a number of processes, but due to the short half-life of free neutrons, all of these reactions occur on Earth. Among the most common are cosmic ray spallation production of neutrons from elements near the surface of the Earth. Alpha emission produced by some radioactive decay also produces neutrons by spallation kno |
https://en.wikipedia.org/wiki/Sentry%202020 | Sentry 2020 is a commercial software program for "transparent" disk encryption for PC and PDA. It has two compatible versions, one for desktop Windows XP and one for Windows Mobile 6.5.3, which allows using the same encrypted volume on both platforms.
The latest versions were released in February 2011; Windows Vista is the newest OS supported.
See also
LibreCrypt - an alternative system which also works on both PC and PDAs
Disk encryption
Disk encryption software
Comparison of disk encryption software
External links
Official Sentry 2020 Website
Cryptographic software
Windows security software
Disk encryption
Cross-platform software |
https://en.wikipedia.org/wiki/Triad%20%28anatomy%29 | In the histology of skeletal muscle, a triad is the structure formed by a T tubule with a sarcoplasmic reticulum (SR) known as the terminal cisterna on either side. Each skeletal muscle fiber has many thousands of triads, visible in muscle fibers that have been sectioned longitudinally. (This property holds because T tubules run perpendicular to the longitudinal axis of the muscle fiber.) In mammals, triads are typically located at the A-I junction; that is, the junction between the A and I bands of the sarcomere, which is the smallest unit of a muscle fiber.
Triads form the anatomical basis of excitation-contraction coupling, whereby a stimulus excites the muscle and causes it to contract. A stimulus, in the form of positively charged current, is transmitted from the neuromuscular junction down the length of the T tubules, activating dihydropyridine receptors (DHPRs). Their activation causes 1) a negligible influx of calcium and 2) a mechanical interaction with calcium-conducting ryanodine receptors (RyRs) on the adjacent SR membrane. Activation of RyRs causes the release of calcium from the SR, which subsequently initiates a cascade of events leading to muscle contraction. These contractions are caused by calcium binding to troponin, unmasking the binding sites covered by the troponin-tropomyosin complex on the actin myofilament and allowing the myosin cross-bridges to connect with the actin.
Function: helps in muscle contraction and Ca²⁺ release.
See also
Diad, a homologous structure in cardiac muscle |
https://en.wikipedia.org/wiki/Bucket-brigade%20device | A bucket brigade or bucket-brigade device (BBD) is a discrete-time analogue delay line, developed in 1969 by F. Sangster and K. Teer of the Philips Research Labs in the Netherlands. It consists of a series of capacitance sections C0 to Cn. The stored analogue signal is moved along the line of capacitors, one step at each clock cycle. The name comes from analogy with the term bucket brigade, used for a line of people passing buckets of water.
In most signal processing applications, bucket brigades have been replaced by devices that use digital signal processing, manipulating samples in digital form. Bucket brigades still see use in specialty applications, such as guitar effects.
A well-known integrated circuit device around 1976, the Reticon SAD-1024 implemented two 512-stage analog delay lines in a 16-pin DIP. It allowed clock frequencies ranging from 1.5 kHz to more than 1.5 MHz. The SAD-512 was a single delay line version. The Philips Semiconductors TDA1022 similarly offered a 512-stage delay line but with a clock rate range of 5–500 kHz. Other common BBD chips include the Panasonic MN3005, MN3007, MN3204 and MN3205, with the primary differences being the available delay time. An example of an effects unit utilizing Panasonic BBDs is the Yamaha E1010.
In 2009, the guitar effects pedal manufacturer Visual Sound recommissioned production of the Panasonic-designed MN3102 and MN3207 BBD chips, which it offers for sale.
Despite being analog in their representation of individual signal voltage samples, these devices are discrete in the time domain and thus are limited by the Nyquist–Shannon sampling theorem; both the input and output signals are generally low-pass filtered. The input must be low-pass filtered to avoid aliasing effects, while the output is low-pass filtered for reconstruction. (A low-pass is used as an approximation to the Whittaker–Shannon interpolation formula.)
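As an illustrative toy model (not a circuit-level description), a BBD can be pictured as an analog shift register that moves one stored sample per clock tick; the class below is a hypothetical sketch of that behaviour:

```python
# Toy model: a BBD as an analog shift register, one sample per clock tick.
from collections import deque

class BucketBrigade:
    def __init__(self, stages):
        self.buckets = deque([0.0] * stages)

    def clock(self, sample):
        """Shift all buckets one stage; return the sample that falls out."""
        out = self.buckets.popleft()
        self.buckets.append(sample)
        return out

bbd = BucketBrigade(stages=8)
signal = [0.0, 1.0, 0.5, -0.5] + [0.0] * 8
print([bbd.clock(s) for s in signal])   # the input re-emerges 8 ticks later
```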
The concept of the bucket-brigade device led to the charge-coupled device (CCD) dev |
https://en.wikipedia.org/wiki/Geometry%20of%20interaction | The Geometry of Interaction (GoI) was introduced by Jean-Yves Girard shortly after his work on linear logic. In linear logic, proofs can be seen as various kinds of networks as opposed to the flat tree structures of sequent calculus. To distinguish the real proof nets from all the possible networks, Girard devised a criterion involving trips in the network. Trips can in fact be seen as some kind of operator acting on the proof. Drawing from this observation, Girard described this operator directly from the proof and gave a formula, the so-called execution formula, encoding the process of cut elimination at the level of operators.
One of the first significant applications of GoI was a better analysis of Lamping's algorithm for optimal reduction for the lambda calculus. GoI had a strong influence on game semantics for linear logic and PCF.
GoI has been applied to deep compiler optimisation for lambda calculi. A bounded version of GoI dubbed the Geometry of Synthesis has been used to compile higher-order programming languages directly into static circuits. |
https://en.wikipedia.org/wiki/Electroluminescent%20wire | Electroluminescent wire (often abbreviated as EL wire) is a thin copper wire coated in a phosphor that produces light through electroluminescence when an alternating current is applied to it. It can be used in a wide variety of applications—vehicle and structure decoration, safety and emergency lighting, toys, clothing etc.—much as rope light or Christmas lights are often used. Unlike these types of strand lights, EL wire is not a series of points, but produces a continuous unbroken line of visible light. Its thin diameter makes it flexible and ideal for use in a variety of applications such as clothing or costumes.
Structure
EL wire's construction consists of five major components. First is a solid-copper wire core coated with phosphor. A very fine wire or pair of wires is spiral-wound around the phosphor-coated copper core and then the outer indium tin oxide (ITO) conductive coating is evaporated on. This fine wire is electrically isolated from the copper core. Surrounding this "sandwich" of copper core, phosphor and fine copper wire is a clear PVC sleeve. Finally, surrounding this thin and clear PVC sleeve is another clear, colored translucent or fluorescent PVC sleeve.
An alternating current electric potential of approximately 90 to 120 volts at about 1000 Hz is applied between the copper core wire and the fine wire that surrounds the copper core. The wire can be modeled as a coaxial capacitor with about 1 nF of capacitance per 30 cm, and the rapid charging and discharging of this capacitor excites the phosphor to emit light. The colors of light that can be produced efficiently by phosphors are limited, so many types of wire use an additional fluorescent organic dye in the clear PVC sleeve to produce the final result. These organic dyes produce colors like red and purple when excited by the blue-green light of the core.
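Using only the figures quoted above (about 1 nF per 30 cm, roughly 110 V at 1000 Hz) and the standard capacitor relation |I| = 2πfCV, one can estimate the drive current; this back-of-envelope sketch is our illustration, not a datasheet value:

```python
# Back-of-envelope estimate from the figures quoted above (our illustration).
import math

CAP_PER_M = 1e-9 / 0.3   # ~1 nF per 30 cm of wire

def drive_current(length_m, volts=110.0, freq_hz=1000.0):
    c = CAP_PER_M * length_m                  # total capacitance, farads
    return 2 * math.pi * freq_hz * c * volts  # |I| = 2*pi*f*C*V

print(f"{drive_current(3.0) * 1e3:.1f} mA for 3 m of wire")  # ~6.9 mA
```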
A resonant oscillator is typically used to generate the high voltage drive signal. Because of the capacitance load of the EL wire |
https://en.wikipedia.org/wiki/Stazione%20Zoologica%20Anton%20Dohrn | The Stazione Zoologica Anton Dohrn is a research institute in Naples, Italy, devoted to basic research in biology. Research is largely interdisciplinary involving the fields of evolution, biochemistry, molecular biology, neurobiology, cell biology, biological oceanography, marine botany, molecular plant biology, benthic ecology, and ecophysiology.
Founded in 1872 as a private concern by Anton Dohrn, in 1982 the Stazione Zoologica came under the supervision and control of the Ministero dell'Università e della Ricerca Scientifica e Tecnologica (Ministry of Universities and Scientific and Technological Research) as a National Institute.
History
The idea
Dohrn's idea was to establish an international scientific community provided with laboratory space, equipment, research material and a library. This was supported and funded by the German Government, Thomas Henry Huxley, Charles Darwin, Francis Balfour and Charles Lyell among others. Dohrn provided a substantial sum himself. Running costs were paid from income derived from the bench system, the sale of scientific journals and specimens and the income from the public aquarium. This system was an important innovation in the management of research, and it worked. When Anton Dohrn died in 1909 more than 2,200 scientists from Europe and the United States had worked at the Stazione Zoologica and more than 50 tables-per-year had been rented out.
"Report of the Committee, consisting of Dr. Anton Dohrn, Professor Rolleston and Mr. P. L. Sclater, appointed for the purpose of promoting the Foundation of Zoological Stations in different parts of the World:—Reporter, Dr. Dohrn [Jena]."-"The Committee beg to report that since the last Meeting of the British Association at Liverpool steps have been taken by Dr. Dohrn to secure the moral assistance of some other scientific bodies, and that the Academy of Belgium has passed a vote acknowledging the great value of the proposed Observatories. Besides this, the Government at Berlin has given i |
https://en.wikipedia.org/wiki/VMware%20Fusion | VMware Fusion is a software hypervisor developed by VMware for macOS systems. It allows Macs with Intel or the Apple M series of chips to run virtual machines with guest operating systems, such as Microsoft Windows, Linux, or macOS, within the host macOS operating system.
Overview
VMware Fusion can virtualize a multitude of operating systems, including many older versions of macOS, which allows users to run older Mac software that can no longer be run under the current version of macOS, such as 32-bit and PowerPC applications.
History
VMware Fusion, which uses a combination of paravirtualization and hardware virtualization made possible by the Mac transition to Intel processors in 2006, marked VMware's first entry into Macintosh-based x86 virtualization. VMware Fusion uses Intel VT present in the Intel Core microarchitecture platform. Much of the underlying technology in VMware Fusion is inherited from other VMware products, such as VMware Workstation, allowing VMware Fusion to offer features such as 64-bit and SMP support. VMware Fusion 1.0 was released on August 6, 2007, exactly one year after being announced.
Along with the Mac transition to Apple silicon in 2020, VMware announced plans for Fusion to support the new M-series platform and ARM architecture, releasing a tech preview for M1 chips in September 2021. In November 2022, VMware Fusion 13 was released, allowing ARM virtualization on Apple Silicon chips. Coinciding with the release, VMware implemented support for TPM 2.0 and OpenGL 4.3, along with improvements to VMware Tools on Windows 11. VMware Fusion 13 retains support for Intel Macs, distributing the software as a universal binary.
System requirements
Most Macs launched in 2015 or later with Apple Silicon or Intel processors for VMware Fusion 13; most Macs launched in 2012 or later for VMware Fusion 12; most Macs launched in 2011 or later for VMware Fusion 11; any x86-64 capable Intel Mac for VMware Fusion 8
macOS Monterey or later for VMware Fusion 13
https://en.wikipedia.org/wiki/Rippling | In computer science, more particularly in automated theorem proving, rippling refers to a group of meta-level heuristics, developed primarily in the Mathematical Reasoning Group in the School of Informatics at the University of Edinburgh, and most commonly used to guide inductive proofs in automated theorem proving systems. Rippling may be viewed as a restricted form of rewrite system, where special object level annotations are used to ensure fertilization upon the completion of rewriting, with a measure decreasing requirement ensuring termination for any set of rewrite rules and expression.
History
Raymond Aubin was the first person to use the term "rippling out" whilst working on his 1976 PhD thesis at the University of Edinburgh. He recognised a common pattern of movement during the rewriting stage of inductive proofs. Alan Bundy later turned this concept on its head by defining rippling to be this pattern of movement, rather than a side effect.
Since then, "rippling sideways", "rippling in" and "rippling past" were coined, so the term was generalised to rippling. Rippling continues to be developed at Edinburgh, and elsewhere, as of 2007.
Rippling has been applied to many problems traditionally viewed as being hard in the inductive theorem proving community, including Bledsoe's limit theorems and a proof of the Gordon microprocessor, a miniature computer developed by Michael J. C. Gordon and his team at Cambridge.
Overview
Very often, when attempting to prove a proposition, we are given a source expression and a target expression, which differ only by the inclusion of a few extra syntactic elements.
This is especially true in inductive proofs, where the given expression is taken to be the inductive hypothesis, and the target expression the inductive conclusion. Usually, the differences between the hypothesis and conclusion are only minor, perhaps the inclusion of a successor function (e.g., +1) around the induction variable.
At the start of rippling the dif |
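To make the idea concrete, here is a hypothetical toy sketch (not from any of the cited systems): a single rewrite rule, plus(s(X), Y) → s(plus(X, Y)), ripples the wave-front s(...) outward so that the induction hypothesis plus(x, y) surfaces inside the conclusion:

```python
# Hypothetical toy: ripple the wave-front s(...) outward past plus(...).
def ripple_out(term):
    """Apply plus(s(X), Y) -> s(plus(X, Y)) wherever it matches."""
    if isinstance(term, tuple) and term[0] == 'plus':
        _, a, b = term
        if isinstance(a, tuple) and a[0] == 's':
            return ('s', ripple_out(('plus', a[1], b)))  # move s() outward
        return ('plus', ripple_out(a), ripple_out(b))
    return term

conclusion = ('plus', ('s', 'x'), 'y')   # induction conclusion: s(x) + y
print(ripple_out(conclusion))            # ('s', ('plus', 'x', 'y'))
```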
https://en.wikipedia.org/wiki/Antistatic%20device | An antistatic device is any device that reduces, dampens, or otherwise inhibits electrostatic discharge, or ESD, which is the buildup or discharge of static electricity. ESD can damage electrical components such as computer hard drives, and even ignite flammable liquids and gases.
Many methods exist for neutralizing static electricity, varying in use and effectiveness depending on the application. Antistatic agents are chemical compounds that can be added to an object, or the packaging of an object, to help deter the buildup or discharge of static electricity. For the neutralization of static charge in a larger area, such as a factory floor, semiconductor cleanroom or workshop, antistatic systems may utilize electron emission effects such as corona discharge or photoemission that introduce ions into the area that combine with and neutralize any electrically charged object. In many situations, sufficient ESD protection can be achieved with electrical grounding.
Symbology
Various symbols can be found on products, indicating that the product is electrostatically sensitive, as with sensitive electrical components, or that it offers antistatic protection, as with antistatic bags.
Reach symbol
ANSI/ESD standard S8.1-2007 is most commonly seen on applications related to electronics. Several variations consist of a triangle with a reaching hand depicted inside of it using negative space.
Versions of the symbol will often have the hand being crossed out as a warning for the component being protected, indicating that it is ESD sensitive and is not to be touched unless antistatic precautions are taken.
Another version of the symbol has the triangle surrounded by an arc. This variant is in reference to the antistatic protective device, such as an antistatic wrist strap, rather than the component being protected. It usually does not feature the hand being crossed out, indicating that it makes contact with the component safe.
Circle
Another common symbol takes the form |
https://en.wikipedia.org/wiki/OMDoc | OMDoc (Open Mathematical Documents) is a semantic markup format for mathematical documents. While MathML only covers mathematical formulae and the related OpenMath standard only supports formulae and “content dictionaries” containing definitions of the symbols used in formulae, OMDoc covers the whole range of written mathematics.
Coverage
OMDoc allows for mathematical expressions on three levels:
Object level: Formulae, written in Content MathML (the non-presentational subset of MathML), OpenMath or languages for mathematical logic.
Statement level: Definitions, theorems, proofs, examples and the relations between them (e.g. "this proof proves that theorem").
Theory level: A theory is a set of contextually related statements. Theories may import each other, thereby forming a graph. Seen as collections of symbol definitions, OMDoc theories are compatible with OpenMath content dictionaries.
On each level, formal syntax and informal natural language can be used, depending on the application.
Semantics and Presentation
OMDoc is a semantic markup language that allows writing down the meaning of texts about mathematics. In contrast to LaTeX, for example, it is not primarily presentation-oriented. An OMDoc document need not specify what its contents should look like. A conversion to LaTeX and XHTML (with Presentation MathML for the formulae) is possible, though. To this end, the presentation of each symbol can be defined.
Applications
Today, OMDoc is used in the following settings:
E-learning: Creation of customized textbooks.
Data exchange: OMDoc import and export modules are available for many automated theorem provers and computer algebra systems. OMDoc is intended to be used for communication between mathematical web services.
Document preparation: Documents about mathematics can be prepared in OMDoc and later exported to a presentation-oriented format like LaTeX or XHTML+MathML.
History
OMDoc has been developed by the German mathematician and computer scientist Michael Kohlhase.
https://en.wikipedia.org/wiki/After%20the%20War%20%28video%20game%29 | After The War is a side-scrolling beat 'em up video game published in 1989 by Dinamic Software, in which the player must navigate through a hostile post-apocalyptic city. Although the name of the city is not mentioned in the game itself, both official promotional and unreleased artwork by Luis Royo and Alfonso Azpiri suggest that it is a post-nuclear version of New York City.
Gameplay
The game is structured into two parts. The first part is a side scrolling beat 'em up, and plays in much the same way as other staples of the genre, such as Streets of Rage. This first act takes place in the streets of the city, and consists of a sequence of fights with minor enemies and occasional bosses. The goal is to find the entrance to the city's underground rail transport, which is located on the opposite side of the map.
After completion of the first act the player is given an opportunity to enter their name on the game's high score board, and is then given a password which allows them to continue to the second act.
The second part is set in the tunnels and stations of the city's underground rail transport. The gameplay in this section differs from the first, as the player now has the ability to shoot enemies, shifting the genre closer to a shoot 'em up game such as Contra. Many of the enemies in the second act are larger and tougher than in the first, and feature more complex designs.
Development
Versions
There are numerous differences between the versions released, depending on which system the version was developed for. For example, some 16-bit versions featured digitized voices while others did not. Some ports featured more complex graphical effects, such as the Amstrad CPC version, which included both mode 0 and mode 1 graphics.
After The War featured two of the classic "FX brands" of Dinamic, commercial names that Dinamic used for some features of its games in marketing. These included FX Double Load, consisting of two separate parts to take advantage of comp
https://en.wikipedia.org/wiki/Ronchi%20test | In optical testing a Ronchi test is a method of determining the surface shape (figure) of a mirror used in telescopes and other optical devices.
Description
In 1923 Italian physicist Vasco Ronchi published a description of the eponymous Ronchi test, which is a variation of the Foucault knife-edge test and which uses simple equipment to test the quality of optics, especially concave mirrors. A "Ronchi tester" consists of:
A light source
A diffuser
A Ronchi grating
A Ronchi grating consists of alternate dark and clear stripes. One design is a small frame with several evenly spaced fine wires attached.
Light is emitted through the Ronchi grating (or a single slit), reflected by the mirror being tested, then passes through the Ronchi grating again and is observed by the person doing the test. The observer's eye is placed close to the centre of curvature of the mirror under test looking at the mirror through the grating. The Ronchi grating is a short distance (less than 2 cm) closer to the mirror.
The observer sees the mirror covered in a pattern of stripes that reveal the shape of the mirror. The pattern is compared to a mathematically generated diagram (usually done on a computer today) of what it should look like for a given figure. Inputs to the program are line frequency of the Ronchi grating, focal length and diameter of the mirror, and the figure required. If the mirror is spherical, the pattern consists of straight lines.
Applications
The Ronchi test is used in the testing of mirrors for reflecting telescopes especially in the field of amateur telescope making. It is much faster to set up than the standard Foucault knife-edge test.
The Ronchi test differs from the knife-edge test in requiring a specialized target (the Ronchi grating, which amounts to a periodic series of knife edges) and in being more difficult to interpret. This procedure offers a quick evaluation of the mirror's shape and condition. It readily identifies a 'turned edge' (rolled down
https://en.wikipedia.org/wiki/Nuclear%20mitochondrial%20DNA%20segment | NUMT, pronounced "new might", is an acronym for "nuclear mitochondrial DNA" segment or genetic locus, coined by evolutionary geneticist Jose V. Lopez, which describes a transposition of any type of cytoplasmic mitochondrial DNA into the nuclear genome of eukaryotic organisms.
More and more NUMT sequences, of varying size and length, have been detected in a diverse range of eukaryotes as whole-genome sequencing of different organisms accumulates. In fact, NUMTs have often been unintentionally discovered by researchers who were looking for mtDNA (mitochondrial DNA). NUMTs have been reported in all studied eukaryotes, and nearly all mitochondrial genome regions can be integrated into the nuclear genome. However, NUMTs differ in number and size across different species. Such differences may be accounted for by interspecific variation in such factors as germline stability and mitochondria number.
After the release of the mtDNA to the cytoplasm, due to mitochondrial alteration and morphological changes, mtDNA is transferred into the nucleus by one of various predicted methods and is eventually inserted by double-stranded break repair processes into the nuclear DNA (nDNA). Not only has a correlation been found between the fraction of noncoding DNA and NUMT abundance in the genome, but NUMTs have also been shown to have a non-random distribution, with a higher likelihood of being inserted in certain locations of the genome compared to others. Depending on the location of the insertion, NUMTs might perturb the function of genes. In addition, de novo integration of NUMT pseudogenes into the nuclear genome has an adverse effect in some cases, promoting various disorders and aging.
The first application of the NUMT term in the domestic cat (Felis catus) example was striking, since mitochondrial gene number and content were amplified 38-76X in the cat nuclear genome, besides being transposed from the cytoplasm. The cat NUMT sequences did not appear to be functional.
https://en.wikipedia.org/wiki/Genesi | Genesi is an international group of technology and consulting companies in the United States, Mexico and Germany. It is most widely known for designing and manufacturing ARM architecture and Power ISA-based computing devices. The Genesi Group consists of Genesi USA Inc., Genesi Americas LLC, Genesi Europe UG, Red Efika, bPlan GmbH and the affiliated non-profit organization Power2People.
Genesi is an official Linaro partner and its software development team has been instrumental in moving Linux on the ARM architecture towards a wider adoption of the hard-float application binary interface, which is incompatible with most existing applications but provides enormous performance gains for many use cases.
Products
The main products of Genesi are ARM-based computers that were designed to be inexpensive, quiet and highly energy efficient, and a custom Open Firmware compliant firmware. All products can run a multitude of operating systems.
Current products include
Aura - A comprehensive abstraction layer for embedded and desktop devices, with UEFI and IEEE1275. Desktop systems with AGP or PCI/PCI Express may take advantage of an embedded x86/BIOS emulator providing boot functionality for standard graphics cards.
EFIKA MX53
EFIKA MX6
Discontinued products include
EFIKA MX Smarttop - A highly energy efficient and compact computing device (complete system) powered by a Freescale ARM iMX515 CPU.
EFIKA MX Smartbook - A 10" smartbook (complete system) powered by the Freescale ARM iMX515 CPU.
High Density Blade - PowerPC based high density blade server.
Home Media Center - PowerPC based digital video recorder.
EFIKA 5200B - A small Open Firmware based motherboard powered by a Freescale MPC5200B SoC processor with 128 MB RAM, a 44-pin ATA connector for a 2.5" hard drive, sound in/out, USB, Ethernet, serial port, and a PCI slot.
Open Client - Thin Clients available with Freescale's Power Architecture or ARM SoCs.
Pegasos - An Open Firmware MicroATX motherboard powered |
https://en.wikipedia.org/wiki/Bogdanov%E2%80%93Takens%20bifurcation | In bifurcation theory, a field within mathematics, a Bogdanov–Takens bifurcation is a well-studied example of a bifurcation with co-dimension two, meaning that two parameters must be varied for the bifurcation to occur. It is named after Rifkat Bogdanov and Floris Takens, who independently and simultaneously described this bifurcation.
A system y' = f(y) undergoes a Bogdanov–Takens bifurcation if it has a fixed point and the linearization of f around that point has a double eigenvalue at zero (assuming that some technical nondegeneracy conditions are satisfied).
Three codimension-one bifurcations occur nearby: a saddle-node bifurcation, an Andronov–Hopf bifurcation and a homoclinic bifurcation. All associated bifurcation curves meet at the Bogdanov–Takens bifurcation.
The normal form of the Bogdanov–Takens bifurcation is

y1' = y2,
y2' = β1 + β2 y1 + y1² ± y1 y2.
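As an illustrative sketch only (parameter values, initial state and step size are arbitrary choices of ours), the normal form with the '+' sign can be integrated by forward Euler; for these parameters the trajectory should settle near the stable focus:

```python
# Illustrative forward-Euler integration of the normal form ('+' variant);
# parameter values, initial state and step size are arbitrary choices.
def simulate(beta1, beta2, y1=-0.1, y2=0.0, dt=1e-3, steps=20000):
    for _ in range(steps):
        dy1 = y2
        dy2 = beta1 + beta2 * y1 + y1**2 + y1 * y2
        y1, y2 = y1 + dt * dy1, y2 + dt * dy2
    return y1, y2

print(simulate(beta1=-0.05, beta2=-0.2))  # spirals toward the stable focus
```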
There exist two codimension-three degenerate Takens–Bogdanov bifurcations, also known as Dumortier–Roussarie–Sotomayor bifurcations. |
https://en.wikipedia.org/wiki/Landau%E2%80%93Kolmogorov%20inequality | In mathematics, the Landau–Kolmogorov inequality, named after Edmund Landau and Andrey Kolmogorov, is the following family of interpolation inequalities between different derivatives of a function f defined on a subset T of the real numbers:

‖f^(k)‖_∞ ≤ C(n, k, T) ‖f‖_∞^(1−k/n) ‖f^(n)‖_∞^(k/n),  for 1 ≤ k < n.
On the real line
For k = 1, n = 2 and T = [c,∞) or T = R, the inequality was first proved by Edmund Landau with the sharp constants C(2, 1, [c,∞)) = 2 and C(2, 1, R) = √2. Following contributions by Jacques Hadamard and Georgiy Shilov, Andrey Kolmogorov found the sharp constants for T = R and arbitrary n, k:

C(n, k, R) = a_(n−k) / a_n^(1−k/n),
where a_n are the Favard constants.
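A quick numerical sanity check of Landau's case n = 2, k = 1 on T = R (our illustration; the test function is arbitrary) confirms sup|f′| ≤ √2 · sup|f|^(1/2) · sup|f″|^(1/2):

```python
# Numerical sanity check (illustrative) of sup|f'| <= sqrt(2*sup|f|*sup|f''|).
import numpy as np

x = np.linspace(-50, 50, 2_000_001)
f = np.sin(x) + 0.5 * np.sin(3 * x)   # an arbitrary smooth bounded function
fp = np.gradient(f, x)                # numerical f'
fpp = np.gradient(fp, x)              # numerical f''

lhs = np.abs(fp).max()
rhs = np.sqrt(2 * np.abs(f).max() * np.abs(fpp).max())
print(lhs <= rhs, float(lhs), float(rhs))   # True, ~2.5, ~3.3
```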
On the half-line
Following work by Matorin and others, the extremising functions were found by Isaac Jacob Schoenberg, explicit forms for the sharp constants are however still unknown.
Generalisations
There are many generalisations, which are of the form
Here all three norms can be different from each other (from L1 to L∞, with p=q=r=∞ in the classical case) and T may be the real axis, semiaxis or a closed segment.
The Kallman–Rota inequality generalizes the Landau–Kolmogorov inequalities from the derivative operator to more general contractions on Banach spaces.
Notes
Inequalities
https://en.wikipedia.org/wiki/Bruno%20Th%C3%BCring | Bruno Jakob Thüring (7 September 1905, in Warmensteinach – 6 May 1989, in Karlsruhe) was a German physicist and astronomer.
Thüring studied mathematics, physics, and astronomy at the University of Munich and received his doctorate in 1928, under Alexander Wilkens and Arnold Sommerfeld. Wilkens was a professor of astronomy and director of the Munich Observatory, which was part of the University. From 1928 to 1933, he was an assistant at the Munich Observatory. From 1934 to 1935, he was an assistant to Heinrich Vogt at the University of Heidelberg. Thüring completed his Habilitation there in 1935, whereupon he became an Observator at the Munich Observatory. In 1937, Thüring became a lecturer (Dozent) at the University of Munich. From 1940 to 1945, he held the chair for astronomy at the University of Vienna and was director of the Vienna Observatory. After 1945, Thüring lived as a private scholar in Karlsruhe.
During the reign of Adolf Hitler, Thüring was a proponent of Deutsche Physik, as were the two Nobel Prize–winning physicists Johannes Stark and Philipp Lenard; Deutsche Physik was anti-Semitic and had a bias against theoretical physics, especially quantum mechanics. He was also a student of the philosophy of Hugo Dingler.
Thüring was an opponent of Albert Einstein's theory of relativity.
Books
Bruno Thüring (Georg Lüttke Verlag, 1941)
Bruno Thüring (Göller, 1957)
Bruno Thüring (Göller, 1958)
Bruno Thüring (Duncker u. Humblot GmbH, 1967)
Bruno Thüring (Duncker & Humblot GmbH, 1978)
Bruno Thüring (Haag u. Herchen, 1985)
Notes |
https://en.wikipedia.org/wiki/No-teleportation%20theorem | In quantum information theory, the no-teleportation theorem states that an arbitrary quantum state cannot be converted into a sequence of classical bits (or even an infinite number of such bits); nor can such bits be used to reconstruct the original state, thus "teleporting" it by merely moving classical bits around. Put another way, it states that the unit of quantum information, the qubit, cannot be exactly, precisely converted into classical information bits. This should not be confused with quantum teleportation, which does allow a quantum state to be destroyed in one location, and an exact replica to be created at a different location.
In crude terms, the no-teleportation theorem stems from the Heisenberg uncertainty principle and the EPR paradox: although a qubit can be imagined to be a specific direction on the Bloch sphere, that direction cannot be measured precisely in the general case; if it could, the results of that measurement would be describable with words, i.e. classical information.
The no-teleportation theorem is implied by the no-cloning theorem: if it were possible to convert a qubit into classical bits, then a qubit would be easy to copy (since classical bits are trivially copyable).
Formulation
The term quantum information refers to information stored in the state of a quantum system. Two quantum states ρ1 and ρ2 are identical if the measurement results of any physical observable have the same expectation value for ρ1 and ρ2. Thus measurement can be viewed as an information channel with quantum input and classical output, that is, performing measurement on a quantum system transforms quantum information into classical information. On the other hand, preparing a quantum state takes classical information to quantum information.
In general, a quantum state is described by a density matrix. Suppose one has a quantum system in some mixed state ρ. Prepare an ensemble of the same system as follows:
Perform a measurement on ρ.
According to |
https://en.wikipedia.org/wiki/Chain%20rule%20for%20Kolmogorov%20complexity | The chain rule for Kolmogorov complexity is an analogue of the chain rule for information entropy, which states:

H(X,Y) = H(X) + H(Y|X)
That is, the combined randomness of two sequences X and Y is the sum of the randomness of X plus whatever randomness is left in Y once we know X.
This follows immediately from the definitions of conditional and joint entropy, and the fact from probability theory that the joint probability is the product of the marginal and conditional probability:

P(X,Y) = P(X) P(Y|X)
The equivalent statement for Kolmogorov complexity does not hold exactly; it is true only up to a logarithmic term:

K(x,y) = K(x) + K(y|x) + O(log(K(x,y)))
(An exact version, KP(x, y) = KP(x) + KP(y|x*) + O(1),
holds for the prefix complexity KP, where x* is a shortest program for x.)
It states that the shortest program printing X and Y is obtained by concatenating a shortest program printing X with a program printing Y given X, plus at most a logarithmic factor. The result implies that algorithmic mutual information, an analogue of mutual information for Kolmogorov complexity, is symmetric: I(x:y) = I(y:x) + O(log K(x,y)) for all x, y.
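Kolmogorov complexity is uncomputable, but a real compressor gives a rough, purely illustrative stand-in: compressed lengths obey an approximate chain rule, and the compressor analogue of mutual information is nearly symmetric. This sketch (ours, using zlib) is only suggestive, not a statement about K itself:

```python
# Rough illustration with a real compressor standing in for K (zlib is only
# a crude proxy; the identities hold for K up to log terms, not for zlib).
import zlib

def c(data: bytes) -> int:
    return len(zlib.compress(data, 9))

x = b"abc" * 200
y = b"abd" * 200

print(c(x + y), c(x) + c(y))  # joint length is well below the sum
print(c(x + y), c(y + x))     # near-symmetry of the "mutual information"
```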
Proof
The ≤ direction is obvious: we can write a program to produce x and y by concatenating a program to produce x, a program to produce y given
access to x, and (whence the log term) the length of one of the programs, so
that we know where to separate the two programs for x and y|x (log(K(x, y)) upper-bounds this length).
For the ≥ direction, it suffices to show that for all k,l such that k+l = K(x,y) we have that either
K(x|k,l) ≤ k + O(1)
or
K(y|x,k,l) ≤ l + O(1).
Consider the list (a1,b1), (a2,b2), ..., (ae,be) of all pairs (a,b) produced by programs of length exactly K(x,y) [hence K(a,b) ≤ K(x,y)]. Note that this list
contains the pair (x,y),
can be enumerated given k and l (by running all programs of length K(x,y) in parallel),
has at most 2^K(x,y) elements (because there are at most 2^n programs of length n).
First, suppose that x appears fewer than 2^l times as the first element. We can specify y
https://en.wikipedia.org/wiki/Pacific%20Symposium%20on%20Biocomputing | The Pacific Symposium on Biocomputing (PSB) is an annual multidisciplinary scientific meeting co-founded in 1996 by Dr. Teri Klein, Dr. Lawrence Hunter and Sharon Surles. The conference is for the presentation and discussion of research in the theory and application of computational methods for biology. Papers and presentations are peer reviewed and published.
PSB brings together researchers from the US and the Asian Pacific nations, to exchange research results and address open issues in all aspects of computational biology. PSB is a forum for the presentation of work in databases, algorithms, interfaces, visualization, modeling, and other computational methods, as applied to biological problems, with emphasis on applications in data-rich areas of molecular biology.
The PSB aims for "critical mass" in sub-disciplines within biocomputing. For that reason, it is the only meeting whose sessions are defined dynamically each year in response to specific proposals. PSB sessions are organized by leaders in the emerging areas and targeted to provide a forum for publication and discussion of research in biocomputing's topics.
Since 2017 the Research Parasite Award has been announced and presented annually at the Symposium to recognize scientists who study previously-published data in ways not anticipated by the researchers who first generated it. An endowment for the award has been provided, along with sponsorship for the Junior Parasite award winner to attend the symposium and present.
https://en.wikipedia.org/wiki/Years%20of%20potential%20life%20lost | Years of potential life lost (YPLL) or potential years of life lost (PYLL) is an estimate of the average years a person would have lived if they had not died prematurely. It is, therefore, a measure of premature mortality. As an alternative to death rates, it is a method that gives more weight to deaths that occur among younger people. An alternative is to consider the effects of both disability and premature death using disability adjusted life years.
Calculation
To calculate the years of potential life lost, the analyst has to set an upper reference age. The reference age should correspond roughly to the life expectancy of the population under study. In the developed world, this is commonly set at age 75, but it is essentially arbitrary. Thus, PYLL should be written with respect to the reference age used in the calculation: e.g., PYLL[75].
PYLL can be calculated using individual level data or using age grouped data.
Briefly, for the individual method, each person's PYLL is calculated by subtracting the person's age at death from the reference age. If a person is older than the reference age when they die, that person's PYLL is set to zero (i.e., there are no "negative" PYLLs). In effect, only those who die before the reference age are included in the calculation. Some examples:
Reference age = 75; Age at death = 60; PYLL[75] = 75 − 60 = 15
Reference age = 75; Age at death = 6 months; PYLL[75] = 75 − 0.5 = 74.5
Reference age = 75; Age at death = 80; PYLL[75] = 0 (age at death greater than reference age)
To calculate the PYLL for a particular population in a particular year, the analyst sums the individual PYLLs for all individuals in that population who died in that year. This can be done for all-cause mortality or for cause-specific mortality.
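The individual-level rule above transcribes directly into code; this short sketch (ours) reproduces the three worked examples:

```python
# Direct transcription of the individual-level rule described above.
def pyll(ages_at_death, reference_age=75):
    """PYLL[reference_age]: years lost before the reference age, summed."""
    return sum(max(reference_age - age, 0) for age in ages_at_death)

print(pyll([60, 0.5, 80]))   # 15 + 74.5 + 0 = 89.5, matching the examples
```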
Significance
In the developed world, mortality counts and rates tend to emphasise the most common causes of death in older people because the risk of death increases with age. Because YPLL gives more weight to deaths among younger people
https://en.wikipedia.org/wiki/Coronoid%20process%20of%20the%20mandible | In human anatomy, the mandible's coronoid process (from Greek korōnē, denoting something hooked) is a thin, triangular eminence, which is flattened from side to side and varies in shape and size. Its anterior border is convex and is continuous below with the anterior border of the ramus. Its posterior border is concave and forms the anterior boundary of the mandibular notch. The lateral surface is smooth, and affords insertion to the temporalis and masseter muscles. Its medial surface gives insertion to the temporalis, and presents a ridge which begins near the apex of the process and runs downward and forward to the inner side of the last molar tooth.
Between this ridge and the anterior border is a grooved triangular area, the upper part of which gives attachment to the temporalis, the lower part to some fibers of the buccinator.
Clinical significance
Fractures of the mandible are common. However, coronoid process fractures are very rare. Isolated fractures of the coronoid process caused by direct trauma are rare, as it is anatomically protected by the complex zygomatic arch/temporo-zygomatic bone and their associated muscles. Most fractures here are caused by strikes (contusion or penetrating injuries). Conservative management of minor fractures can lead to trismus (lockjaw) that can later only be corrected by removing the coronoid process. For serious fractures, a surgery involving open reduction and internal fixation can have good outcomes.
Additional images
See also
Ramus mandibulae |
https://en.wikipedia.org/wiki/Performance%20prediction | In computer science, performance prediction means to estimate the execution time or other performance factors (such as cache misses) of a program on a given computer. It is widely used by computer architects to evaluate new computer designs, by compiler writers to explore new optimizations, and by advanced developers to tune their programs.
There are many approaches to predicting a program's performance on computers. They can be roughly divided into three major categories:
simulation-based prediction
profile-based prediction
analytical modeling
Simulation-based prediction
Performance data can be directly obtained from computer simulators, within which each instruction of the target program is actually dynamically executed given a particular input data set. Simulators can predict a program's performance very accurately but take considerable time to handle large programs. Examples include the PACE and Wisconsin Wind Tunnel simulators as well as the more recent WARPP simulation toolkit, which attempts to significantly reduce the time required for parallel system simulation.
Another approach, trace-based simulation, does not run every instruction but replays a trace file which stores important program events only. This approach loses some flexibility and accuracy compared to the cycle-accurate simulation mentioned above but can be much faster. The generation of traces often consumes considerable amounts of storage space and can severely impact the runtime of applications if large amounts of data are recorded during execution.
Profile-based prediction
The classic approach of performance prediction treats a program as a set of basic blocks connected by execution paths. Thus the execution time of the whole program is the sum of the execution time of each basic block multiplied by its execution frequency, as shown in the following formula:

T = Σ_i (t_i × f_i),

where t_i is the per-execution time of basic block i and f_i its execution frequency.
The execution frequencies of basic blocks are generated from a profiler, which is why this method is called profile-based prediction.
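As a minimal sketch of the formula above (the block times and execution counts below are hypothetical, not from any profiler):

```python
# Sketch of the profile-based estimate; times and counts are hypothetical.
def predict_time(block_times, block_freqs):
    """T = sum over basic blocks of t_i * f_i."""
    return sum(t * f for t, f in zip(block_times, block_freqs))

times = [12e-9, 4e-9, 30e-9]             # seconds per execution of each block
freqs = [1_000_000, 5_000_000, 200_000]  # execution counts from a profiler
print(f"{predict_time(times, freqs) * 1e3:.2f} ms")   # 38.00 ms
```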
https://en.wikipedia.org/wiki/Micro%20Electronics%2C%20Inc. | Micro Electronics, Inc. (MEI) is an American privately owned corporation headquartered in Hilliard, Ohio. Founded in 1979 by John Baker, it serves as the parent company of the computer retailer Micro Center, its online division Micro Center Online, and its brand iPSG, which houses PowerSpec PC, WinBook, and Inland (including Inland Premium for high-end SSDs). |
https://en.wikipedia.org/wiki/Conrad%20Models | Conrad GmbH (previously "Gescha Toys") is a German manufacturer of diecast scale model trucks, primarily in 1:50 scale for use both as toys and promotional models by heavy equipment manufacturers. Conrad is one of the few European diecast companies which have not outsourced production to China or elsewhere in Asia. Conrad Modelle is headquartered in Kalchreuth, just northeast of Nuremberg.
In the past, Conrad also manufactured model cars.
From Gescha to Conrad
On early German toys the abbreviation "Ges. Gesch." was short for the German for "trademark registered". This may have led to the eventual name of the predecessor toy firm of Gescha which was established in 1923. Gescha had previously specialized in wind-up tin toys similar to Schuco Modell or Gama Toys. The Conrad website says that Conrad – a family name – started making diecast models in 1956, however Gescha used the Conrad name as a sub-brand first. Most diecast truck and heavy equipment models, for which Conrad became most well-known, were marketed as Gescha in the 1960s and 1970s. The name Conrad was increasingly used through the 1970s and by about 1980, the Gescha name was discontinued.
The official website says that since 1987 the company has been run by Gunther Conrad assisted by his wife Gerde and their son, Michael. Thus the company has remained a family owned business, probably since about 1956 when the Conrad name was introduced when the Conrad family took control of Gescha.
Models today
Conrad today has a line of over 90 separate models, mostly trucks and cranes. The appearance and finish of the diecast models themselves is similar to its competitor, NZG Models, though perhaps NZG's are slightly more adventurous in models contracted and slightly more realistic – but this is simply a perception. While NZG Modelle focuses more on construction and earth moving equipment, Conrad's line-up centers more on a variety of commercial trucks themselves. Several models, however, are quite distinct, li |
https://en.wikipedia.org/wiki/Mishnat%20ha-Middot | The Mishnat ha-Middot (, 'Treatise of Measures') is the earliest known Hebrew treatise on geometry, composed of 49 mishnayot in six chapters. Scholars have dated the work to either the Mishnaic period or the early Islamic era.
History
Date of composition
Moritz Steinschneider dated the Mishnat ha-Middot to between 800 and 1200 CE. Sarfatti and Langermann have advanced Steinschneider's claim of Arabic influence on the work's terminology, and date the text to the early ninth century.
On the other hand, Hermann Schapira argued that the treatise dates from an earlier era, most likely the Mishnaic period, as its mathematical terminology differs from that of the Hebrew mathematicians of the Arab period. Solomon Gandz conjectured that the text was compiled no later than (possibly by Rabbi Nehemiah) and intended to be a part of the Mishnah, but was excluded from its final canonical edition because the work was regarded as too secular. The content resembles both the work of Hero of Alexandria (c. ) and that of al-Khwārizmī (c. ) and the proponents of the earlier dating therefore see the Mishnat ha-Middot linking Greek and Islamic mathematics.
Modern history
The Mishnat ha-Middot was discovered in MS 36 of the Munich Library by Moritz Steinschneider in 1862. The manuscript, copied in Constantinople in 1480, goes as far as the end of Chapter V. According to the colophon, the copyist believed the text to be complete. Steinschneider published the work in 1864, in honour of the seventieth birthday of Leopold Zunz. The text was edited and published again by mathematician Hermann Schapira in 1880.
After the discovery by Otto Neugebauer of a genizah-fragment in the Bodleian Library containing Chapter VI, Solomon Gandz published a complete version of the Mishnat ha-Middot in 1932, accompanied by a thorough philological analysis. A third manuscript of the work was found among uncatalogued material in the Archives of the Jewish Museum of Prague in 1965.
Contents
Although prima |
https://en.wikipedia.org/wiki/Novarossi | Novarossi World, also known as Novarossi Nitro Micro Engines, was an Italian manufacturer of model engines and related items for radio-controlled models.
History
Nova Rossi was founded in 1984 by Cesare Rossi and his wife Graziosa Barchi in Italy. Rossi started building model aircraft in the 1950s. He went on to successfully modify existing model engines, and won Italian and international competitions.
Throughout the 1960s, Rossi manufactured his own engines using engineering and mechanical skills learned from his father. With the aid of his brother, Cesare Rossi started a family business. Along with his wife Graziosa Barchi, and two sons Mario and Sergio Rossi, Cesare Rossi formed Nova Rossi.
In December 2005, Mario left Nova Rossi. He designed engines for RC Cars produced by Italian RC giants GRP Gandini and sold under various brands.
In December 2006, Nova Rossi changed their company name from Nova Rossi snc to Novarossi World s.r.l. Nova Rossi ceased manufacturing circa November 2019.
Products
Nova Rossi produced engines, manifolds, tuned pipes, glow plugs and other accessories for all forms of radio-controlled models. Nova Rossi held a high share of sales with radio controlled cars, boats and aircraft. Many world championships were won in all of these classes, leading to the company slogan "World champion engines" often found on decals and apparel.
Nova Rossi produced 60,000 engines a year, and 300,000 glow plugs a year. Exports contributed 90% of Nova Rossi sales, showing global interest. Rossi expected a boom in model use throughout Italy to help improve the percentage of engines sold domestically, which stood at 10%.
Team Drivers
Adam Drake - part of Team Novarossi and used a Novarossi engine to gain a victory in pro nitro buggy in the ROAR 2013 Nitro Offroad Nationals.
Robert Batlle was part of Team Novarossi (Spain) - 2-time European Champion, 9-time Spanish Champion, 1 World Championship (2012, Argentina)
Daniel Kleff (Germany)