| source | text |
|---|---|
https://en.wikipedia.org/wiki/Download%20Cache
|
The Download Cache, or downloaded files cache, is a component of Microsoft's .NET Framework that is similar to the Global Assembly Cache except that it caches assemblies that have been downloaded from the Internet.
Assemblies are downloaded from the Internet when a specific managed object is requested using the <object> tag in a web page. For example, the following HTML will cause Internet Explorer to download MyAssembly.dll to the Download Cache and will subsequently instantiate MyControl on the page that contains it.
<object id="myControlId" classid="http://MyServer/MyVirtualFolder/MyAssembly.dll#MyNamespace.MyControl">
<param name="MyProperty" value="SomeStringValue" />
</object>
Usage
Like the GAC, the Download Cache can be accessed with gacutil.exe.
One can list the contents of the Download Cache using the command:
gacutil.exe /ldl
One can delete the contents of the Download Cache using the command:
gacutil.exe /cdl
References
.NET terminology
|
https://en.wikipedia.org/wiki/Harmonic%20mixer
|
The harmonic mixer and subharmonic mixer are a type of frequency mixer, which is a circuit that changes one signal frequency to another. The ordinary mixer has two input signals and one output signal. If the two input signals are sinewaves at frequencies f1 and f2, then the output signal consists of frequency components at the sum f1+f2 and difference f1−f2 frequencies. In contrast, the harmonic and subharmonic mixers form sum and difference frequencies at a harmonic multiple of one of the inputs. The output signal then contains frequencies such as f1+kf2 and f1−kf2 where k is an integer.
Background
The classic frequency mixer is a multiplier. Multiplying two sinewaves produces just the sum and difference frequencies; the input frequencies are suppressed, and, in theory, there are no other heterodyne products. In practice, the multiplier is not perfect, and the input frequencies and other heterodyne products will be present.
An actual multiplier is not needed. The significant requirement is a nonlinearity, and at microwave frequencies it is easier to use a nonlinearity rather than an ideal multiplier. A Taylor series expansion of a nonlinearity will show multiplications that give rise to the desired higher order products.
Design goals for mixers seek to select the desired heterodyne products and suppress the undesired ones.
Diode mixers
Overdriven diode bridge mixers: the drive signal looks like an odd-harmonic waveform (essentially a square wave).
Harmonic mixer
One classic design for a harmonic mixer uses a step recovery diode (SRD). The mixer's subharmonic input is first amplified to a power level that might be around 1 watt. That signal then drives a step recovery diode impulse generator circuit that turns the sine wave into something approximating an impulse train. The resulting impulse train has the harmonics of the input sine wave present to a high frequency (such as 18 GHz). The impulse train can then be used with a diode mixer (also called a sampler).
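As a numerical illustration (a Python sketch, not a circuit model; the sample rate, frequencies, and the thresholding used to mimic the impulse generator are all assumed), mixing a crude impulse train with an RF tone produces the expected products at |f_rf − k·f_lo|:
import numpy as np

fs = 1_000_000                      # sample rate, Hz (assumed)
t = np.arange(0, 0.02, 1 / fs)
f_lo, f_rf = 10_000, 83_000         # subharmonic LO and RF input, Hz (assumed)

# Crude stand-in for the SRD impulse generator: one narrow pulse per LO cycle.
lo = np.sin(2 * np.pi * f_lo * t)
impulses = (lo > 0.99).astype(float)

# The sampler/mixer multiplies the RF input by the impulse train.
mixed = impulses * np.sin(2 * np.pi * f_rf * t)

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
print(np.sort(freqs[np.argsort(spectrum)[-8:]]))
# Strong lines appear at |f_rf - k*f_lo| and f_rf + k*f_lo, e.g. 3 kHz for k = 8.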
The
|
https://en.wikipedia.org/wiki/Copyright%20alternatives
|
Various copyright alternatives in an alternative compensation systems (ACS) have been proposed as ways to allow the widespread reproduction of digital copyrighted works while still paying the authors and copyright owners of those works. This article only discusses those proposals which involve some form of government intervention. Other models, such as the street performer protocol or voluntary collective licenses, could arguably be called "alternative compensation systems" although they are very different and generally less effective at solving the free rider problem.
The impetus for these proposals has come from the widespread use of peer-to-peer file sharing networks. A few authors argue that an ACS is simply the only practical response to the situation. But most ACS advocates go further, holding that P2P file sharing is in fact greatly beneficial, and that tax or levy funded systems are actually more desirable tools for paying artists than sales coupled with DRM copy prevention technologies.
Artistic freedom voucher
The artistic freedom voucher (AFV) proposal argues that the current copyright system, providing a state-enforced monopoly, leads to "enormous inefficiencies and creates substantial enforcement problems". Under the proposed AFV system, individuals would be allowed to contribute a refundable tax credit of approximately $100 to a "creative worker"; this contribution would act as a voucher that can only be used to support artistic or creative work.
Recipients of the AFV contribution would in turn be required to register with the government, in a similar fashion to religious or charitable institutions registering for tax-exempt status. The sole purpose of the registration would be to prevent fraud; it would involve no evaluation of the quality of the work being produced. Alongside registration, artists would be ineligible for copyright protection for a set period of time (5 years, for example), as the work is contributed to the public domain and allowed
|
https://en.wikipedia.org/wiki/Biological%20system
|
A biological system is a complex network which connects several biologically relevant entities. Biological organization spans several scales and is determined by different structures depending on what the system is. Examples of biological systems at the macro scale are populations of organisms. On the organ and tissue scale in mammals and other animals, examples include the circulatory system, the respiratory system, and the nervous system. On the micro to the nanoscopic scale, examples of biological systems are cells, organelles, macromolecular complexes and regulatory pathways. A biological system is not to be confused with a living system, such as a living organism.
Organ and tissue systems
These specific systems are widely studied in human anatomy and are also present in many other animals.
Respiratory system: the organs used for breathing, the pharynx, larynx, bronchi, lungs and diaphragm.
Digestive system: digestion and processing food with salivary glands, oesophagus, stomach, liver, gallbladder, pancreas, intestines, rectum and anus.
Cardiovascular system (heart and circulatory system): pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels.
Urinary system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine.
Integumentary system: skin, hair, fat, and nails.
Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons.
Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroid and adrenals, i.e., adrenal glands.
Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream; includes the lymph, lymph nodes and lymph vessels. Its functions include immune responses and the development of antibodies.
Immune system: protects the organism from
|
https://en.wikipedia.org/wiki/Mitointeractome
|
Mitointeractome is a mitochondrial protein interactome database.
References
External links
Mitointeractome
Molecular biology
|
https://en.wikipedia.org/wiki/API-Calculus
|
API Calculus is a process algebra for modeling mobile, intelligent agents. In 1989, the π-calculus was created by Robin Milner and was very successful throughout the years. The π-calculus is an extension of the process algebra CCS, a formalism with algebraic languages for describing and reasoning about concurrent processes. The π-calculus provides a formal theory for modeling systems and reasoning about their behaviors. In the π-calculus there are two basic kinds of entities: names and processes. It was not until 2002 that Shahram Rahimi created an upgraded version of the π-calculus, calling it the API Calculus. The detailed characteristics claimed for the API Calculus are its "Communication Ability, Capacity for Cooperation, Capacity for Reasoning and Learning, Adaptive Behavior and Trustworthiness." The main purpose of this mobile-agent extension is to better model agents that network and communicate with other operators while completing a task. The API Calculus is not perfect, however, and has faced problems with its security system. The language has seven features that the π-calculus does not have. Since the program is so advanced in the way the software was created and in the abilities it offers, it must be converted to other programming languages so it can be used on various devices and with other computing languages. Although the API Calculus is currently used alongside various other programming languages, modifications are still being made, since the security of the API Calculus is causing problems for users.
What Does It Do?
The API Calculus is intended for modeling migration, intelligence, natural grouping and security in agent-based systems. The calculus is usually used together with other programming languages such as Java. In Java, a famous programming language used by variou
|
https://en.wikipedia.org/wiki/List%20of%20formulas%20in%20Riemannian%20geometry
|
This is a list of formulas encountered in Riemannian geometry. Einstein notation is used throughout this article. This article uses the "analyst's" sign convention for Laplacians, except when noted otherwise.
Christoffel symbols, covariant derivative
In a smooth coordinate chart, the Christoffel symbols of the first kind are given by
$$\Gamma_{kij} = \frac{1}{2}\left(\partial_i g_{kj} + \partial_j g_{ki} - \partial_k g_{ij}\right),$$
and the Christoffel symbols of the second kind by
$$\Gamma^k{}_{ij} = g^{kl}\,\Gamma_{lij} = \frac{1}{2}\,g^{kl}\left(\partial_i g_{lj} + \partial_j g_{li} - \partial_l g_{ij}\right).$$
Here $g^{kl}$ is the inverse matrix to the metric tensor $g_{kl}$. In other words,
$$g^{ij} g_{jk} = \delta^i_k$$
and thus
$$n = \delta^i_i = g^{ij} g_{ij}$$
is the dimension of the manifold.
Christoffel symbols satisfy the symmetry relations
$$\Gamma_{kij} = \Gamma_{kji} \quad\text{or, respectively,}\quad \Gamma^k{}_{ij} = \Gamma^k{}_{ji},$$
the second of which is equivalent to the torsion-freeness of the Levi-Civita connection.
The contracting relations on the Christoffel symbols are given by
$$\Gamma^i{}_{ki} = \frac{1}{2} g^{im}\,\partial_k g_{im} = \frac{1}{2|g|}\,\partial_k |g| = \partial_k \log\sqrt{|g|}$$
and
$$g^{kl}\,\Gamma^i{}_{kl} = -\frac{1}{\sqrt{|g|}}\,\partial_k\!\left(\sqrt{|g|}\,g^{ik}\right),$$
where |g| is the absolute value of the determinant of the metric tensor $g_{ik}$. These are useful when dealing with divergences and Laplacians (see below).
The covariant derivative of a vector field with components $v^i$ is given by
$$\nabla_j v^i = \partial_j v^i + \Gamma^i{}_{jk}\, v^k,$$
and similarly the covariant derivative of a (0,1)-tensor field with components $\omega_i$ is given by
$$\nabla_j \omega_i = \partial_j \omega_i - \Gamma^k{}_{ji}\,\omega_k.$$
For a (2,0)-tensor field with components $A^{ij}$ this becomes
$$\nabla_k A^{ij} = \partial_k A^{ij} + \Gamma^i{}_{kl}\, A^{lj} + \Gamma^j{}_{kl}\, A^{il},$$
and likewise for tensors with more indices.
The covariant derivative of a function (scalar) $\phi$ is just its usual differential:
$$\nabla_i \phi = \partial_i \phi.$$
Because the Levi-Civita connection is metric-compatible, the covariant derivatives of metrics vanish,
$$\nabla_k g_{ij} = 0, \qquad \nabla_k g^{ij} = 0,$$
as well as the covariant derivatives of the metric's determinant (and volume element),
$$\nabla_k \sqrt{|g|} = 0.$$
The geodesic $x(t)$ starting at the origin with initial speed $v^i$ has Taylor expansion in the chart
$$x^i(t) = t v^i - \frac{t^2}{2}\,\Gamma^i{}_{jk}\, v^j v^k + O(t^3).$$
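These coordinate formulas are mechanical to evaluate; for instance, the following sympy sketch computes Christoffel symbols of the second kind for the round 2-sphere metric directly from the definition above:
import sympy as sp

theta, phi = sp.symbols('theta phi')
x = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta) ** 2]])    # round 2-sphere metric
ginv = g.inv()

def christoffel2(k, i, j):
    # Christoffel symbol of the second kind, from the formula above.
    return sp.simplify(sum(ginv[k, l] * (sp.diff(g[l, j], x[i])
                                         + sp.diff(g[l, i], x[j])
                                         - sp.diff(g[i, j], x[l])) / 2
                           for l in range(2)))

print(christoffel2(0, 1, 1))   # Gamma^theta_{phi phi} = -sin(theta)*cos(theta)
print(christoffel2(1, 0, 1))   # Gamma^phi_{theta phi} = cos(theta)/sin(theta)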
Curvature tensors
Definitions
(3,1) Riemann curvature tensor: $R(u,v)w = \nabla_u \nabla_v w - \nabla_v \nabla_u w - \nabla_{[u,v]} w$
(3,1) Riemann curvature tensor (in coordinates): $R^l{}_{kij} = \partial_i \Gamma^l{}_{jk} - \partial_j \Gamma^l{}_{ik} + \Gamma^l{}_{is}\Gamma^s{}_{jk} - \Gamma^l{}_{js}\Gamma^s{}_{ik}$
Ricci curvature: $R_{ij} = R^k{}_{ikj}$
Scalar curvature: $R = g^{ij} R_{ij}$
Traceless Ricci tensor: $Z_{ij} = R_{ij} - \frac{R}{n}\, g_{ij}$
(4,0) Riemann curvature tensor: $R_{lkij} = g_{lm}\, R^m{}_{kij}$
(4,0) Weyl tensor: $W_{ijkl} = R_{ijkl} - \frac{1}{n-2}\left(g_{ik}R_{jl} - g_{il}R_{jk} - g_{jk}R_{il} + g_{jl}R_{ik}\right) + \frac{R}{(n-1)(n-2)}\left(g_{ik}g_{jl} - g_{il}g_{jk}\right)$
Einstein tensor: $G_{ij} = R_{ij} - \frac{R}{2}\, g_{ij}$
Identities
Basic symmetries
The Riemann curvature tensor satisfies
$$R_{ijkl} = -R_{jikl} = -R_{ijlk}, \qquad R_{ijkl} = R_{klij}.$$
The Weyl tensor has the same basic symmetries as the Riemann tensor, but its 'analogue' of the Ricci tensor is zero:
$$g^{ik}\, W_{ijkl} = 0.$$
The Ricci tensor, the Einstein tensor, and the traceless Ricci tensor are symmetric 2-tensors:
$$R_{ij} = R_{ji}, \qquad G_{ij} = G_{ji}, \qquad Z_{ij} = Z_{ji}.$$
First Bianch
|
https://en.wikipedia.org/wiki/Bach%20tensor
|
In differential geometry and general relativity, the Bach tensor is a trace-free tensor of rank 2 which is conformally invariant in dimension $n = 4$. Before 1968, it was the only known conformally invariant tensor that is algebraically independent of the Weyl tensor. In abstract indices the Bach tensor is given by
$$B_{ab} = P_{cd}\,{W_a}^c{}_b{}^d + \nabla^c \nabla_c P_{ab} - \nabla^c \nabla_a P_{bc},$$
where $W_{abcd}$ is the Weyl tensor, and $P_{ab}$ the Schouten tensor given in terms of the Ricci tensor $R_{ab}$ and scalar curvature $R$ by
$$P_{ab} = \frac{1}{n-2}\left(R_{ab} - \frac{R}{2(n-1)}\,g_{ab}\right).$$
See also
Cotton tensor
Obstruction tensor
References
Further reading
Arthur L. Besse, Einstein Manifolds. Springer-Verlag, 2007. See Ch.4, §H "Quadratic Functionals".
Demetrios Christodoulou, Mathematical Problems of General Relativity I. European Mathematical Society, 2008. Ch.4 §2 "Sketch of the proof of the global stability of Minkowski spacetime".
Yvonne Choquet-Bruhat, General Relativity and the Einstein Equations. Oxford University Press, 2011. See Ch.XV §5 "Christodoulou-Klainerman theorem", which notes the Bach tensor is the "dual of the Cotton tensor which vanishes for conformally flat metrics".
Thomas W. Baumgarte, Stuart L. Shapiro, Numerical Relativity: Solving Einstein's Equations on the Computer. Cambridge University Press, 2010. See Ch.3.
Tensors
Tensors in general relativity
|
https://en.wikipedia.org/wiki/List%20of%20Plan%209%20applications
|
This is a list of Plan 9 programs. Many of these programs are very similar to the UNIX programs with the same name, others are to be found only on Plan 9. Others again share only the name, but have a different behaviour.
System software
General user
dd – convert and copy a file
date – date and time
echo – print arguments
file – determine file type
ns – display namespace
plumb – send message to plumber
plumber – interprocess messaging
rc – the Plan 9 shell
rio – the new Plan 9 windowing system
8½ – the old Plan 9 windowing system
uptime – show how long the system has been running
System management
Processes and tasks management
time – time a command
kill, slay, broke – print commands to kill processes
sleep – suspend execution for an interval
ps – process status
psu – process status information about processes started by a specific user
User management and support
passwd, netkey, iam – change user password
who – who is using the machine
man, lookman – print or find pages of this manual
File system and server
/boot/boot – connect to the root file server
fossil/fossil, fossil/flchk, fossil/flfmt, fossil/conf, fossil/last – archival file server
history – print file names from the dump
users – file server user list format
vac – create a vac archive on Venti
venti/buildindex, venti/checkarenas, venti/checkindex, venti/conf, venti/copy, venti/fmtarenas, venti/fmtindex, venti/fmtisect, venti/rdarena, venti/rdarenablocks, venti/read, venti/wrarenablocks, venti/write – Venti maintenance and debugging commands
venti/venti, venti/sync – an archival block storage server
yesterday, diffy – print file names from the dump
Hardware devices
setrtc – set real time clock (RTC) on PC hardware
Files and text
Filesystem utilities
chgrp – change file group
chmod – change mode
cp, fcp, mv – copy, move files
du – disk usage
ls, lc – list contents of directory
mkdir – make a directory
bind, mount, umount – change name space
pwd, pbd –
|
https://en.wikipedia.org/wiki/Rothe%E2%80%93Hagen%20identity
|
In mathematics, the Rothe–Hagen identity is a mathematical identity valid for all complex numbers ($x$, $y$, $z$) except where its denominators vanish:
$$\sum_{k=0}^{n} \frac{x}{x+kz}\binom{x+kz}{k}\,\frac{y}{y+(n-k)z}\binom{y+(n-k)z}{n-k} = \frac{x+y}{x+y+nz}\binom{x+y+nz}{n}.$$
It is a generalization of Vandermonde's identity, and is named after Heinrich August Rothe and Johann Georg Hagen.
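A quick numerical spot-check of the identity as reconstructed above (a Python sketch using a generalized binomial coefficient; the test values are arbitrary):
from math import prod

def binom(t, k):
    # Generalized binomial coefficient C(t, k) for real t and integer k >= 0.
    return prod(t - i for i in range(k)) / prod(range(1, k + 1)) if k else 1.0

def lhs(x, y, z, n):
    return sum(x / (x + k * z) * binom(x + k * z, k)
               * y / (y + (n - k) * z) * binom(y + (n - k) * z, n - k)
               for k in range(n + 1))

def rhs(x, y, z, n):
    return (x + y) / (x + y + n * z) * binom(x + y + n * z, n)

print(lhs(1.5, 2.25, 0.5, 6), rhs(1.5, 2.25, 0.5, 6))   # the two values agree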
References
Factorial and binomial topics
Mathematical identities
Complex analysis
|
https://en.wikipedia.org/wiki/Lerche%E2%80%93Newberger%20sum%20rule
|
The Lerche–Newberger, or Newberger, sum rule, discovered by B. S. Newberger in 1982, finds the sum of certain infinite series involving Bessel functions Jα of the first kind.
It states that if μ is any non-integer complex number, , and Re(α + β) > −1, then
Newberger's formula generalizes a formula of this type proven by Lerche in 1966; Newberger discovered it independently. Lerche's formula has γ = 1; both extend a standard rule for the summation of Bessel functions, and are useful in plasma physics.
References
Special functions
Mathematical identities
|
https://en.wikipedia.org/wiki/Potato%20cyst%20nematode
|
Potato root nematodes or potato cyst nematodes (PCN) are 1-mm long roundworms belonging to the genus Globodera, which comprises around 12 species. They live on the roots of plants of the family Solanaceae, such as potatoes and tomatoes. PCN cause growth retardation and, at very high population densities, damage to the roots and early senescence of plants. The nematode is not indigenous to Europe but originates from the Andes. Fields are free from PCN until an introduction occurs, after which the typical patches, or hotspots, occur on the farmland. These patches can become full field infestations when unchecked. Yield reductions can reach 60% at high population densities.
Biology and life cycle
The eggs hatch in the presence of solanoeclepin A, a substance secreted by the roots of host plants, otherwise known as a root exudate. The nematodes hatch as second-stage juveniles (J2). At this stage, the J2 nematodes find host cells to feed on. Potato cyst nematodes are endoparasites, meaning they enter the root completely to feed. Access to the root cells is gained by piercing the cell wall with the nematode's stylet. After a feeding tube has been established, a syncytium begins to form through the breakdown of multiple adjacent cell walls. J2 nematodes continue to feed until they grow into third-stage juveniles (J3), then fourth-stage juveniles (J4), and finally reach the adult stage. The body of the J3 female begins to appear more sac-like as she grows into a J4 nematode. At the J4 stage, the body of the female nematode lies outside of the root while the head remains inside the cell. During this stage, the male nematodes become motile again and are then able to fertilize the females, leading to embryos developing inside the female body. Once fertilized, the female dies and leaves a protective cyst containing 200-500 eggs. Once the cysts detach from the original hosts, they rem
|
https://en.wikipedia.org/wiki/Mechanical%20singularity
|
In engineering, a mechanical singularity is a position or configuration of a mechanism or a machine where the subsequent behaviour cannot be predicted, or the forces or other physical quantities involved become infinite or nondeterministic.
When the underlying engineering equations of a mechanism or machine are evaluated at the singular configuration (if any exists), then those equations exhibit mathematical singularity.
Examples of mechanical singularities are gimbal lock and in static mechanical analysis, an under-constrained system.
Types of singularities
There are three types of singularities that can be found in mechanisms: direct-kinematics singularities, inverse-kinematics singularities, and combined singularities. These singularities occur when one or both Jacobian matrices of the mechanism become singular, i.e. rank-deficient. The relationship between the input and output velocities of the mechanism is defined by the following general equation:
$$\mathbf{A}\,\dot{\mathbf{x}} = \mathbf{B}\,\dot{\mathbf{q}},$$
where $\dot{\mathbf{x}}$ is the vector of output velocities, $\dot{\mathbf{q}}$ is the vector of input velocities, $\mathbf{A}$ is the direct-kinematics Jacobian, and $\mathbf{B}$ is the inverse-kinematics Jacobian.
Type-I: Inverse-kinematics singularities
This first kind of singularity occurs when
$$\det(\mathbf{B}) = 0.$$
Type-II: Direct-kinematics singularities
This second kind of singularity occurs when
$$\det(\mathbf{A}) = 0.$$
Type-III: Combined singularities
This kind of singularity occurs when, for a particular configuration, both $\mathbf{A}$ and $\mathbf{B}$ become singular simultaneously.
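To make the rank-deficiency condition concrete, here is a minimal Python sketch for a planar two-link arm (link lengths are assumed values; for this serial case the arm's Jacobian plays the role of B, so det B = 0 marks a Type-I singularity):
import numpy as np

def jacobian(theta1, theta2, l1=1.0, l2=0.8):
    # Jacobian of a planar two-link arm, mapping joint rates to tip velocity.
    s1, c1 = np.sin(theta1), np.cos(theta1)
    s12, c12 = np.sin(theta1 + theta2), np.cos(theta1 + theta2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

# det J = l1*l2*sin(theta2): singular when the arm is fully stretched or folded.
for theta2 in (np.pi / 3, 0.0):
    print(theta2, np.linalg.det(jacobian(0.5, theta2)))   # 0.693..., then 0.0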
References
Mechanical engineering
|
https://en.wikipedia.org/wiki/Dublin%20Accord
|
The Dublin Accord is an agreement for the international recognition of Engineering Technician qualifications.
In May 2002, the national engineering organisations of Ireland, the United Kingdom, South Africa and Canada signed an agreement mutually recognising the qualifications which underpin the granting of Engineering Technician titles in the four countries. Operation of the Dublin Accord is similar to that of the Washington Accord and the Sydney Accord.
Signatories
See also
Seoul Accord - computing and information technology
Outcome-based education
Chartered Engineer
Professional Engineer
References
External links
International Engineering Alliance Dublin Accord website
Professional titles and certifications
Engineering education
|
https://en.wikipedia.org/wiki/Railway%20Technical%20Research%20Institute
|
The Railway Technical Research Institute, or RTRI, is the technical research company under the Japan Railways group of companies.
Overview
RTRI was established in its current form in 1986 just before Japanese National Railways (JNR) was privatised and split into separate JR group companies. It conducts research on everything related to trains, railways and their operation. It is funded by the government and private rail companies. It works both on developing new railway technology, such as magnetic levitation, and on improving the safety and economy of current technology.
Its research areas include earthquake detection and alarm systems, obstacle detection on level crossings, improving adhesion between train wheels and tracks, reducing energy usage, noise barriers and preventing vibrations.
RTRI is the main developer in the Japanese SCMaglev program.
Offices and test facilities
Main office
844 Shin-Kokusai Bldg. 3-4-1 Marunouchi, Chiyoda-ku, Tokyo 100-0005, Japan
Research facilities
Kunitachi Institute - 2-8-38 Hikari-cho, Kokubunji-shi, Tokyo, 185-8540, Japan
Wind Tunnel Technical Center, Maibara, Shiga
Shiozawa Snow Testing Station, Minami-Uonuma, Niigata
Hino Civil Engineering Testing Station, Hino, Tokyo
Gatsugi Anti-Salt Testing Station, Sanpoku, Niigata
Gauge Change Train
The RTRI is developing a variable gauge system, called the "Gauge Change Train", to allow Shinkansen trains to access lines of the original rail network.
Publications
Japan Railway & Technical Review
Quarterly Report of RTRI
See also
British Rail Research Division
German Centre for Rail Traffic Research
Hydrail
References
External links
Rail transport in Japan
Railway infrastructure companies
Engineering research institutes
|
https://en.wikipedia.org/wiki/Index%20of%20structural%20engineering%20articles
|
This is an alphabetical list of articles pertaining specifically to structural engineering. For a broad overview of engineering, please see List of engineering topics. For biographies please see List of engineers.
A
A-frame –
Aerodynamics –
Aeroelasticity –
Air-supported structure –
Airframe –
Aluminium –
Analytical method –
Angular frequency –
Angular speed –
Architecture –
Architectural engineering –
Arch –
Arch bridge
B
Base isolation –
Beam –
Beam axle –
Bending –
Bifurcation theory –
Biomechanics –
Boat building –
Body-on-frame –
Box girder bridge –
Box truss –
Bridge engineering –
Buckling –
Building –
Building construction –
Building engineering
C
Cable –
Cable-stayed bridge –
Cantilever –
Cantilever bridge –
Carbon-fiber-reinforced polymer –
Casing –
Casting –
Catastrophic failure –
Center of mass –
Chaos theory –
Chassis –
Chimneys –
Coachwork –
Coefficient of thermal expansion –
Coil spring –
Columns –
Composite material –
Composite structure –
Compression –
Compressive stress –
Concrete –
Concrete cover –
Construction –
Construction engineering –
Construction management –
Continuum mechanics –
Corrosion –
Crane –
Creep –
Crumple zone –
Curvature
D
Dam –
Damper –
Damping ratio –
Dead and live loads –
Deflection –
Deformation –
Direct stiffness method –
Dome –
Double wishbone suspension –
Duhamel's integral –
Dynamical system –
Dynamics
E
Earthquake –
Earthquake engineering –
Earthquake engineering research –
Earthquake engineering structures –
Earthquake loss –
Earthquake performance evaluation –
Earthquake simulation –
Elasticity theory –
Elasticity –
Energy principles in structural mechanics –
Engineering mechanics –
Euler method –
Euler–Bernoulli beam equation
F
Falsework –
Fatigue –
Fibre reinforced plastic –
Finite element analysis –
Finite element method –
Finite element method in structural mechanics –
Fire safety –
Fire protection –
Fire protection engineering –
First moment of area –
Flexibility method –
Floating raft system –
Floor –
Fluid
|
https://en.wikipedia.org/wiki/Implicational%20propositional%20calculus
|
In mathematical logic, the implicational propositional calculus is a version of classical propositional calculus which uses only one connective, called implication or conditional. In formulas, this binary operation is indicated by "implies", "if ..., then ...", "→", "⊃", etc.
Functional (in)completeness
Implication alone is not functionally complete as a logical operator because one cannot form all other two-valued truth functions from it.
For example, the two-place truth function that always returns false is not definable from → and arbitrary sentence variables: any formula constructed from → and propositional variables must receive the value true when all of its variables are evaluated to true.
It follows that {→} is not functionally complete.
However, if one adds a nullary connective ⊥ for falsity, then one can define all other truth functions. Formulas over the resulting set of connectives {→, ⊥} are called f-implicational. If P and Q are propositions, then:
¬P is equivalent to P → ⊥
P ∧ Q is equivalent to (P → (Q → ⊥)) → ⊥
P ∨ Q is equivalent to (P → Q) → Q
P ↔ Q is equivalent to ((P → Q) → ((Q → P) → ⊥)) → ⊥
Since the above operators are known to be functionally complete, it follows that any truth function can be expressed in terms of → and ⊥.
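The equivalences above are finite and can be checked mechanically; a brute-force truth-table check (a minimal Python sketch) over all assignments:
from itertools import product

def imp(p, q):
    return (not p) or q        # material implication

BOT = False                    # the falsity constant

for p, q in product((False, True), repeat=2):
    assert imp(p, BOT) == (not p)                                     # negation
    assert imp(imp(p, imp(q, BOT)), BOT) == (p and q)                 # conjunction
    assert imp(imp(p, q), q) == (p or q)                              # disjunction
    assert imp(imp(imp(p, q), imp(imp(q, p), BOT)), BOT) == (p == q)  # biconditional
print("all four equivalences hold")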
Axiom system
The following statements are considered tautologies (irreducible and intuitively true, by definition).
Axiom schema 1 is P → (Q → P).
Axiom schema 2 is (P → (Q → R)) → ((P → Q) → (P → R)).
Axiom schema 3 (Peirce's law) is ((P → Q) → P) → P.
The one non-nullary rule of inference (modus ponens) is: from P and P → Q infer Q.
Where in each case, P, Q, and R may be replaced by any formulas which contain only "→" as a connective. If Γ is a set of formulas and A a formula, then means that A is derivable using the axioms and rules above and formulas from Γ as additional hypotheses.
Łukasiewicz (1948) found an axiom system for the implicational calculus, which replaces the schemas 1–3 above with a single schem
|
https://en.wikipedia.org/wiki/Molecular%20mimicry
|
Molecular mimicry is the theoretical possibility that sequence similarities between foreign and self-peptides are enough to result in the cross-activation of autoreactive T or B cells by pathogen-derived peptides. Despite the prevalence of several peptide sequences which can be both foreign and self in nature, just a few crucial residues can activate a single antibody or TCR (T cell receptor). This highlights the importance of structural homology in the theory of molecular mimicry. Upon activation, these "peptide mimic" specific T or B cells can cross-react with self-epitopes, thus leading to tissue pathology (autoimmunity). Molecular mimicry is one of several ways in which autoimmunity can be evoked. A molecular mimicking event is more than an epiphenomenon despite its low probability, and these events have serious implications in the onset of many human autoimmune disorders.
One possible cause of autoimmunity, the failure to recognize self antigens as "self", is a loss of immunological tolerance, the ability of the immune system to discriminate between self and non-self. Other possible causes include mutations governing programmed cell death or environmental products that injure target tissues, thus causing a release of immunostimulatory alarm signals. Growth in the field of autoimmunity has resulted in more frequent diagnosis of autoimmune diseases. The resulting data show that autoimmune diseases affect approximately 1 in 31 people within the general population. Growth has also led to a greater characterization of what autoimmunity is and how it can be studied and treated. With more research comes growth in the study of the several different ways in which autoimmunity can occur, one of which is molecular mimicry. How pathogens come to share similar amino acid sequences, or homologous three-dimensional structures, with immunodominant epitopes remains a mystery.
Immunological tolerance
Tolerance is a fundamental property of the immune system.
|
https://en.wikipedia.org/wiki/Structural%20mechanics
|
Structural mechanics or mechanics of structures is the computation of deformations, deflections, and internal forces or stresses (stress equivalents) within structures, either for design or for performance evaluation of existing structures. It is one subset of structural analysis. Structural mechanics analysis needs input data such as structural loads, the structure's geometric representation and support conditions, and the materials' properties. Output quantities may include support reactions, stresses and displacements. Advanced structural mechanics may include the effects of stability and non-linear behaviors.
Mechanics of structures is a field of study within applied mechanics that investigates the behavior of structures under mechanical loads, such as bending of a beam, buckling of a column, torsion of a shaft, deflection of a thin shell, and vibration of a bridge.
There are three approaches to the analysis: the energy methods, the flexibility method or direct stiffness method (which later developed into the finite element method), and the plastic analysis approach.
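As a minimal illustration of the stiffness approach (a Python sketch with made-up stiffness and load values, not a general finite element code), the direct stiffness method for two axial springs in series reduces to assembling a global stiffness matrix and solving K u = F:
import numpy as np

k1, k2 = 2000.0, 1000.0            # spring stiffnesses, N/m (assumed)
F_tip = 50.0                       # load at the free node, N (assumed)

# Assembled global stiffness matrix for nodes 0-1-2 (node 0 fixed).
K = np.array([[ k1,      -k1,      0.0],
              [-k1,  k1 + k2,     -k2 ],
              [ 0.0,     -k2,      k2 ]])
F = np.array([0.0, 0.0, F_tip])

u = np.zeros(3)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])    # impose the support condition u0 = 0
reaction = K[0] @ u                          # support reaction, equals -F_tip
print(u, reaction)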
Energy method
Energy principles in structural mechanics
Flexibility method
Flexibility method
Stiffness methods
Direct stiffness method
Finite element method in structural mechanics
Plastic analysis approach
Plastic Analysis
Major topics
Beam theory
Buckling
Earthquake engineering
Finite element method in structural mechanics
Plates and shells
Torsion
Trusses
Stiffening
Structural dynamics
Structural instability
Building engineering
Structural engineering
Solid mechanics
Mechanics
Earthquake engineering
|
https://en.wikipedia.org/wiki/Initialization-on-demand%20holder%20idiom
|
In software engineering, the initialization-on-demand holder (design pattern) idiom is a lazy-loaded singleton. In all versions of Java, the idiom enables a safe, highly concurrent lazy initialization of static fields with good performance.
public class Something {
    private Something() {}

    // Not loaded or initialized until first referenced by getInstance().
    private static class LazyHolder {
        static final Something INSTANCE = new Something();
    }

    public static Something getInstance() {
        return LazyHolder.INSTANCE;
    }
}
The implementation of the idiom relies on the initialization phase of execution within the Java Virtual Machine (JVM) as specified by the Java Language Specification (JLS). When the class Something is loaded by the JVM, the class goes through initialization. Since the class does not have any static variables to initialize, the initialization completes trivially. The static class definition LazyHolder within it is not initialized until the JVM determines that LazyHolder must be executed. The static class LazyHolder is only executed when the static method getInstance is invoked on the class Something, and the first time this happens the JVM will load and initialize the LazyHolder class. The initialization of the LazyHolder class results in static variable INSTANCE being initialized by executing the (private) constructor for the outer class Something. Since the class initialization phase is guaranteed by the JLS to be sequential, i.e., non-concurrent, no further synchronization is required in the static getInstance method during loading and initialization. And since the initialization phase writes the static variable INSTANCE in a sequential operation, all subsequent concurrent invocations of the getInstance will return the same correctly initialized INSTANCE without incurring any additional synchronization overhead.
Caveats
While the implementation is an efficient thread-safe "singleton" cache without synchronization overhead, and better performing than uncontended synchronization, the i
|
https://en.wikipedia.org/wiki/BackTrack
|
BackTrack was a Linux distribution that focused on security, based on the Knoppix Linux distribution aimed at digital forensics and penetration testing use. In March 2013, the Offensive Security team rebuilt BackTrack around the Debian distribution and released it under the name Kali Linux.
History
The BackTrack distribution originated from the merger of two formerly competing distributions which focused on penetration testing:
WHAX: a Slax-based Linux distribution developed by Mati Aharoni, a security consultant. Earlier versions of WHAX were called Whoppix and were based on Knoppix.
Auditor Security Collection: a Live CD based on Knoppix developed by Max Moser which included over 300 tools organized in a user-friendly hierarchy.
BackTrack 4, released on January 9, 2010, improved hardware support and added official FluxBox support. The overlap between Auditor and WHAX in purpose and in collection of tools partly led to the merger. Starting with BackTrack 5, the distribution was based on Ubuntu Lucid LTS.
Tools
BackTrack provided users with easy access to a comprehensive and large collection of security-related tools, ranging from port scanners to security auditing tools. Support for Live CD and Live USB functionality allowed users to boot BackTrack directly from portable media without requiring installation, though permanent installation to hard disk or booting over the network were also options.
BackTrack included many well known security tools including:
Metasploit for integration
Wi-Fi drivers supporting monitor mode (rfmon mode) and packet injection
Aircrack-ng
Reaver, a tool used to exploit a vulnerability in WPS
Gerix Wifi Cracker
Kismet
Nmap
Ophcrack
Ettercap
Wireshark (formerly known as Ethereal)
BeEF (Browser Exploitation Framework)
Hydra
OWASP Mantra Security Framework, a collection of hacking tools, add-ons and scripts based on Firefox
Cisco OCS Mass Scanner, a very reliable and fast scanner for Cisco routers to test for default telnet and enable passwords.
A large collec
|
https://en.wikipedia.org/wiki/Quasitopological%20space
|
In mathematics, a quasi-topology on a set X is a function that associates to every compact Hausdorff space C a collection of mappings from C to X satisfying certain natural conditions. A set with a quasi-topology is called a quasitopological space.
They were introduced by Spanier, who showed that there is a natural quasi-topology on the space of continuous maps from one space to another.
References
Topology
|
https://en.wikipedia.org/wiki/List%20of%20sequence%20alignment%20software
|
This list of sequence alignment software is a compilation of software tools and web portals used in pairwise sequence alignment and multiple sequence alignment. See structural alignment software for structural alignment of proteins.
Database search only
*Sequence type: protein or nucleotide
Pairwise alignment
*Sequence type: protein or nucleotide **Alignment type: local or global
Multiple sequence alignment
*Sequence type: protein or nucleotide. **Alignment type: local or global
Genomics analysis
*Sequence type: protein or nucleotide
Motif finding
*Sequence type: protein or nucleotide
Benchmarking
Alignment viewers, editors
Please see List of alignment visualization software.
Short-read sequence alignment
See also
List of open source bioinformatics software
References
Sequence
Sequence alignment software
|
https://en.wikipedia.org/wiki/Neighborhood%20semantics
|
Neighborhood semantics, also known as Scott–Montague semantics, is a formal semantics for modal logics. It is a generalization, developed independently by Dana Scott and Richard Montague, of the more widely known relational semantics for modal logic. Whereas a relational frame consists of a set W of worlds (or states) and an accessibility relation R intended to indicate which worlds are alternatives to (or, accessible from) others, a neighborhood frame still has a set W of worlds, but has instead of an accessibility relation a neighborhood function
$$N : W \to 2^{2^W}$$
that assigns to each element of W a set of subsets of W. Intuitively, each family of subsets assigned to a world are the propositions necessary at that world, where 'proposition' is defined as a subset of W (i.e. the set of worlds at which the proposition is true). Specifically, if M is a model on the frame, then
$$M, w \models \Box A \iff \|A\|^M \in N(w),$$
where
$$\|A\|^M = \{ u \in W : M, u \models A \}$$
is the truth set of A.
Neighborhood semantics is used for the classical modal logics that are strictly weaker than the normal modal logic K.
Correspondence between relational and neighborhood models
To every relational model M = (W, R, V) there corresponds an equivalent (in the sense of having pointwise-identical modal theories) neighborhood model M' = (W, N, V) defined by
$$N(w) = \{ X \subseteq W : \{ u \in W : wRu \} \subseteq X \}.$$
The fact that the converse fails gives a precise sense to the remark that neighborhood models are a generalization of relational ones. Another (perhaps more natural) generalization of relational structures are general frames.
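A small Python sketch of this correspondence (the three-world frame below is an arbitrary illustrative example):
from itertools import chain, combinations

W = {0, 1, 2}
R = {(0, 1), (0, 2), (1, 2)}                  # accessibility relation

def successors(w):
    return {v for (u, v) in R if u == w}

def N(w):
    # Neighborhoods of w: every superset of w's successor set.
    subsets = chain.from_iterable(combinations(sorted(W), r)
                                  for r in range(len(W) + 1))
    return {frozenset(s) for s in subsets if successors(w) <= set(s)}

A = frozenset({1, 2})        # a proposition, i.e. a set of worlds
print(A in N(0))             # True: box-A holds at world 0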
References
Chellas, B.F. Modal Logic. Cambridge University Press, 1980.
Montague, R. "Universal Grammar", Theoria 36, 373–98, 1970.
Scott, D. "Advice on modal logic", in Philosophical Problems in Logic, ed. Karel Lambert. Reidel, 1970.
Modal logic
|
https://en.wikipedia.org/wiki/VPS/VM
|
VPS/VM (Virtual Processing System/Virtual Machine) was an operating system that ran on IBM System/370 – IBM 3090 computers at Boston University in general use from 1977 to around 1990, and in limited use until at least 1993. During the 1980s, VPS/VM was the main operating system of Boston University and often ran up to 250 users at a time when rival VM/CMS computing systems could only run 120 or so users.
Each user ran in a Virtual Machine under VM, an IBM hypervisor operating system. VM provided the virtual IBM 370 machine which the VPS operating system ran under. The VM code was modified to allow all the VPS virtual machines to share pages of storage with read and write access. VPS utilized a shared nucleus, as well as pages used to facilitate passing data from one VPS virtual machine to another. This organization is very similar to that of MVS, substituting address spaces for virtual machines.
Origins
According to Craig Estey, who worked at the Boston University Academic Computing Center between 1974 and 1977:
Description
An IBM-based operating system, and quite like some DOS/VSE time sharing options, VPS/VM provided the user an IBM 3270 full screen terminal (a green screen) and a user interface that was like VM/CMS. Each user had an 11 megabyte virtual machine (with a strange 3 megabyte memory gap in the middle) and, from 1984 onwards, could run several programs at a time.
The operating system was sparsely documented. It was written first by Charles Brown, a BU doctoral student, and John H. Porter, a physics PhD, who later became the head of the VPS project (and eventually Boston University's vice president for information systems and technology). Marian Moore wrote much of the later VM code necessary to run the VPS system.
Josie Bondoc wrote some of the later VPS additions, like UNIX piping.
Many MVS/VM programs ran on VPS/VM, such as XEDIT, and compilers for Pascal, PL/1, C and Cobol. These MVS/VM programs ran under an OS simulation program that simula
|
https://en.wikipedia.org/wiki/In-game%20advertising
|
In-game advertising (IGA) is advertising in electronic games. IGA differs from advergames, which refers to games specifically made to advertise a product. The IGA industry is large and growing.
In-game advertising generated $34 million in 2004, $56 million in 2005, $80 million in 2006, and $295 million in 2007. In 2009, spending on IGA was estimated to reach $699 million, and $1 billion by 2014; according to Forbes it was anticipated to grow to $7.2 billion by 2016.
The earliest known IGA was the 1978 computer game Adventureland, which inserted a self-promotional advertisement for its next game, Pirate Adventure.
IGA can be integrated into the game either through a display in the background, such as an in-game billboard or a commercial during the pause created when a game loads, or highly integrated within the game so that the advertised product is necessary to complete part of the game or is featured prominently within cutscenes. Due to the custom programming required, dynamic advertising is usually presented in the background; static advertisements can appear as either. One of the advantages of IGA over traditional advertisements is that consumers are less likely to multitask with other media while playing a game; however, some attention is still divided between the gameplay, the controls, and the advertisement.
Static in-game advertising
Similar to product placement in the film industry, static IGAs cannot be changed after they are programmed directly into the game (unless the game is fully online). However, unlike product placement in traditional media, IGA allows gamers to interact with the virtual product. For example, Splinter Cell required the use of in-game Sony Ericsson phones to catch terrorists. Unlike static IGAs, dynamic IGAs are not limited to a pre-programmed size or location determined by the developer and publisher, and they allow the advertiser to customize the advertisement display.
A number of games utilize billboard-like advertisements or product pl
|
https://en.wikipedia.org/wiki/Volterra%20series
|
The Volterra series is a model for non-linear behavior similar to the Taylor series. It differs from the Taylor series in its ability to capture "memory" effects. The Taylor series can be used for approximating the response of a nonlinear system to a given input if the output of the system depends strictly on the input at that particular time. In the Volterra series, the output of the nonlinear system depends on the input to the system at all other times. This provides the ability to capture the "memory" effect of devices like capacitors and inductors.
It has been applied in the fields of medicine (biomedical engineering) and biology, especially neuroscience. It is also used in electrical engineering to model intermodulation distortion in many devices, including power amplifiers and frequency mixers. Its main advantage lies in its generalizability: it can represent a wide range of systems. Thus, it is sometimes considered a non-parametric model.
In mathematics, a Volterra series denotes a functional expansion of a dynamic, nonlinear, time-invariant functional. The Volterra series are frequently used in system identification. The Volterra series, which is used to prove the Volterra theorem, is an infinite sum of multidimensional convolutional integrals.
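A discrete-time, truncated Volterra model makes the memory effect concrete. In the following Python sketch, the kernels h1 and h2 are arbitrary illustrative values, and the series is cut off at second order:
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)                    # input signal
h1 = np.array([1.0, 0.5, 0.25])                # first-order kernel (linear memory)
h2 = 0.1 * np.outer([1.0, 0.3], [1.0, 0.3])    # second-order kernel

def volterra2(x, h1, h2):
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = sum(h * x[n - k] for k, h in enumerate(h1) if n - k >= 0)
        y[n] += sum(h2[k1, k2] * x[n - k1] * x[n - k2]
                    for k1 in range(h2.shape[0])
                    for k2 in range(h2.shape[1])
                    if n - k1 >= 0 and n - k2 >= 0)
    return y

y = volterra2(x, h1, h2)    # each y[n] depends on past inputs and their products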
History
The Volterra series is a modernized version of the theory of analytic functionals from the Italian mathematician Vito Volterra, in his work dating from 1887. Norbert Wiener became interested in this theory in the 1920s due to his contact with Volterra's student Paul Lévy. Wiener applied his theory of Brownian motion for the integration of Volterra analytic functionals. The use of the Volterra series for system analysis originated from a restricted 1942 wartime report of Wiener's, who was then a professor of mathematics at MIT. He used the series to make an approximate analysis of the effect of radar noise in a nonlinear receiver circuit. The report became public after the war. As a general method of anal
|
https://en.wikipedia.org/wiki/Electrical%20capacitance%20tomography
|
Electrical capacitance tomography (ECT) is a method for determination of the dielectric permittivity distribution in the interior of an object from external capacitance measurements. It is a close relative of electrical impedance tomography and is proposed as a method for industrial process monitoring.
Although capacitance sensing methods were in widespread use, the idea of using capacitance measurement to form images is attributed to Maurice Beck and co-workers at UMIST in the 1980s.
Although usually called tomography, the technique differs from conventional tomographic methods, in which high resolution images are formed of slices of a material. The measurement electrodes, which are metallic plates, must be sufficiently large to give a measurable change in capacitance. This means that very few electrodes are used, typically eight to sixteen electrodes. An N-electrode system can only provide N(N−1)/2 independent measurements; for example, twelve electrodes yield 66. This means that the technique is limited to producing very low resolution images of approximate slices. However, ECT is fast, and relatively inexpensive.
Applications
Applications of ECT include the measurement of flow of fluids in pipes and measurement of the concentration of one fluid in another, or the distribution of a solid in a fluid. ECT enables the visualization of multiphase flow, which plays an important role in the technological processes of the chemical, petrochemical and food industries.
Due to its very low spatial resolution, ECT has not yet been used in medical diagnostics. Potentially, ECT may have similar medical applications to electrical impedance tomography, such as monitoring lung function or detecting ischemia or hemorrhage in the brain.
See also
Electrical capacitance volume tomography
Electrical impedance tomography
Electrical resistivity tomography
Industrial Tomography Systems
Process tomography
References
Electrical engineering
Nondestructive testing
Inverse problems
|
https://en.wikipedia.org/wiki/Riesz%20mean
|
In mathematics, the Riesz mean is a certain mean of the terms in a series. They were introduced by Marcel Riesz in 1911 as an improvement over the Cesàro mean. The Riesz mean should not be confused with the Bochner–Riesz mean or the Strong–Riesz mean.
Definition
Given a series $\{s_n\}$, the Riesz mean of the series is defined by
$$s^\delta(\lambda) = \sum_{n \le \lambda} \left(1 - \frac{n}{\lambda}\right)^{\delta} s_n.$$
Sometimes, a generalized Riesz mean is defined as
$$R_n = \frac{1}{\lambda_n} \sum_{k=0}^{n} (\lambda_k - \lambda_{k-1})\, s_k.$$
Here, the $\lambda_n$ are a sequence with $\lambda_n \to \infty$ and with $\lambda_{n+1}/\lambda_n \to 1$ as $n \to \infty$. Other than this, the $\lambda_n$ are taken as arbitrary.
Riesz means are often used to explore the summability of sequences; typical summability theorems discuss the case of $s_n = \sum_{k=0}^{n} a_k$ for some sequence $\{a_k\}$. Typically, a sequence is summable when the limit $\lim_{n\to\infty} R_n$ exists, or the limit $\lim_{\delta\to 1,\,\lambda\to\infty} s^\delta(\lambda)$ exists, although the precise summability theorems in question often impose additional conditions.
Special cases
Let $a_n = 1$ for all $n$. Then
$$\sum_{n \le \lambda} \left(1 - \frac{n}{\lambda}\right)^{\delta}
= \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} \frac{\Gamma(1+\delta)\,\Gamma(s)}{\Gamma(1+\delta+s)}\, \zeta(s)\, \lambda^{s}\, ds
= \frac{\lambda}{1+\delta} + \sum_{n} b_n \lambda^{-n}.$$
Here, one must take $c > 1$; $\Gamma(s)$ is the Gamma function and $\zeta(s)$ is the Riemann zeta function. The power series
$$\sum_{n} b_n \lambda^{-n}$$
can be shown to be convergent for $\lambda > 1$. Note that the integral is of the form of an inverse Mellin transform.
Another interesting case connected with number theory arises by taking $a_n = \Lambda(n)$, where $\Lambda$ is the von Mangoldt function. Then
$$\sum_{n \le \lambda} \left(1 - \frac{n}{\lambda}\right)^{\delta} \Lambda(n)
= \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} \frac{\Gamma(1+\delta)\,\Gamma(s)}{\Gamma(1+\delta+s)} \left(-\frac{\zeta'(s)}{\zeta(s)}\right) \lambda^{s}\, ds
= \frac{\lambda}{1+\delta} + \sum_{\rho} \frac{\Gamma(1+\delta)\,\Gamma(\rho)}{\Gamma(1+\delta+\rho)}\, \lambda^{\rho} + \sum_{n} c_n \lambda^{-n}.$$
Again, one must take c > 1. The sum over ρ is the sum over the zeroes of the Riemann zeta function, and
$$\sum_{n} c_n \lambda^{-n}$$
is convergent for λ > 1.
The integrals that occur here are similar to the Nörlund–Rice integral; very roughly, they can be connected to that integral via Perron's formula.
References
M. Riesz, Comptes Rendus, 12 June 1911
Means
Summability methods
Zeta and L-functions
|
https://en.wikipedia.org/wiki/Angiopoietin
|
Angiopoietin is part of a family of vascular growth factors that play a role in embryonic and postnatal angiogenesis. Angiopoietin signaling most directly corresponds with angiogenesis, the process by which new arteries and veins form from preexisting blood vessels. Angiogenesis proceeds through sprouting, endothelial cell migration, proliferation, and vessel destabilization and stabilization. They are responsible for assembling and disassembling the endothelial lining of blood vessels. Angiopoietin cytokines are involved with controlling microvascular permeability, vasodilation, and vasoconstriction by signaling smooth muscle cells surrounding vessels.
There are now four identified angiopoietins: ANGPT1, ANGPT2, ANGPTL3, ANGPT4.
In addition, there are a number of proteins that are closely related to ('like') angiopoietins, the angiopoietin-related proteins (ANGPTLs), beginning with Angiopoietin-related protein 1.
Angiopoietin-1 is critical for vessel maturation, adhesion, migration, and survival. Angiopoietin-2, on the other hand, promotes cell death and disrupts vascularization. Yet, when it acts in conjunction with vascular endothelial growth factor, or VEGF, it can promote neo-vascularization.
Structure
Structurally, angiopoietins have an N-terminal super clustering domain, a central coiled domain, a linker region, and a C-terminal fibrinogen-related domain responsible for the binding between the ligand and receptor.
Angiopoietin-1 encodes a 498 amino acid polypeptide with a molecular weight of 57 kDa whereas angiopoietin-2 encodes a 496 amino acid polypeptide.
Only clusters/multimers activate receptors
Angiopoietin-1 and angiopoietin-2 can form dimers, trimers, and tetramers. Angiopoietin-1 has the ability to form higher order multimers through its super clustering domain. However, not all of the structures can interact with the tyrosine kinase receptor. The receptor can only be activated at the tetramer level or higher.
Specific mechanisms
Tie pathway
The collective interactions between angiopoi
|
https://en.wikipedia.org/wiki/List%20of%20Inferno%20applications
|
This is a list of Inferno programs. Most of these programs are very similar to the Plan 9 applications or UNIX programs with the same name.
System software
General user
dd – convert and copy a file
date – print the date
echo – print arguments
emu – Inferno emulator
mash – programmable shell
ns – display current namespace
– build Inferno namespace
os – interface to host OS commands (hosted Inferno only)
plumb – send message to plumber
plumber – plumber for interapplication message routing
rcmd – remote command execution
runas – run command as another user
sh – command language
tiny/sh – reduced command line interface to the Inferno system
wm/logon – log on to Inferno
wm/sh, wm/mash – Window frames for the Inferno shells
wm/wm – window manager
System Management
Processes and tasks management
time – time command execution
kill, broke – terminate processes
sleep, pause – suspend execution for an interval
ps – process (thread) status
wm/task – graphical task manager
User management and support
auth/passwd – change user password
man, wm/man, man2txt, lookman – print or find manual pages
Files and Text
Filesystem Utilities
chgrp – change file's group or owner
chmod – change file mode (permissions)
cp, fcp – copy files
du – disk usage
lc – list files in columns
ls – list files
mkdir – make a directory
mv – move files
bind, mount, unmount – change name space
pwd – working directory
rm – remove files
touch – update the modification time of one or more files
Archivers and compression
ar – archive maintainer
gettar, lstar, puttar – tar archive utilities
gzip, gunzip – compression and decompression utilities
Text Processing
cat – concatenate files
cmp – compare two files
diff – differential file comparator
fmt – simple text formatter
freq – print histogram of character frequencies
grep – pattern matching
p – paginate
read – read from standard input with optional seek
tail – deliver the last part of a file
tcs – translate character sets
tr – translate characters
w
|
https://en.wikipedia.org/wiki/Center%20of%20curvature
|
In geometry, the center of curvature of a curve is the point at a distance from the curve equal to the radius of curvature, lying on the normal vector. It is the point at infinity if the curvature is zero. The osculating circle to the curve is centered at the center of curvature. Cauchy defined the center of curvature C as the intersection point of two infinitely close normal lines to the curve. The locus of centers of curvature for each point on the curve forms the evolute of the curve. This term is generally used in physics regarding the study of lenses and mirrors (see radius of curvature (optics)).
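In symbols, for a plane curve $\gamma(s)$ parametrized by arc length, with unit normal $N(s)$ and nonzero curvature $\kappa(s)$, the center of curvature is
$$C(s) = \gamma(s) + \frac{1}{\kappa(s)}\, N(s).$$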
It can also be defined as the spherical distance between the lens/mirror itself and the point at which all the rays falling on a lens or mirror either seem to converge (in the case of convex lenses and concave mirrors) or from which they seem to diverge (in the case of concave lenses or convex mirrors).
See also
Curvature
Differential geometry of curves
References
Bibliography
Curves
Differential geometry
Curvature
Concepts in physics
|
https://en.wikipedia.org/wiki/Parts%20Manufacturer%20Approval
|
Parts Manufacturer Approval (PMA) is an approval granted by the United States Federal Aviation Administration (FAA) to a manufacturer of aircraft parts.
Approval
It is generally illegal in the United States to install replacement or modification parts on a certificated aircraft without an airworthiness release such as a Supplemental Type Certificate (STC) or Parts Manufacturer Approval (PMA). There are a number of other methods of compliance, including parts manufactured to government or industry standards, parts manufactured under technical standard order (TSO) authorization, owner-/operator-produced parts, experimental aircraft, field approvals, etc.
PMA-holding manufacturers are permitted to make replacement parts for aircraft, even though they are not the original manufacturer of the aircraft. The process is analogous to 'after-market' parts for automobiles, except that the United States aircraft parts production market remains tightly regulated by the FAA.
An applicant for a PMA applies for approval from the FAA. The FAA prioritizes its review of a new application based on its internal process called Project Prioritization.
The FAA Order covering the application for PMA is Order 8110.42 revision D. This document is worded as instructions to the FAA reviewing personnel. An accompanying Advisory Circular (AC) 21.303-4 is intended to address the applicant; 8110.42C addressed both the applicant and the reviewer. Per the order, application for a PMA can be made in the following ways. Identicality: the applicant attempts to convince the FAA that the PMA part is identical to the OAH (Original Approval Holder) part. Identicality by licensure: accomplished by providing evidence to the FAA that the applicant has licensed the part data from the OAH; this evidence is usually in the form of an Assist Letter provided to the applicant by the OAH. A PMA may also be granted based upon prior approval of an STC. As an example: If an STC were granted to
|
https://en.wikipedia.org/wiki/Holonomic%20function
|
In mathematics, and more specifically in analysis, a holonomic function is a smooth function of several variables that is a solution of a system of linear homogeneous differential equations with polynomial coefficients and satisfies a suitable dimension condition in terms of D-modules theory. More precisely, a holonomic function is an element of a holonomic module of smooth functions. Holonomic functions can also be described as differentiably finite functions, also known as D-finite functions. When a power series in the variables is the Taylor expansion of a holonomic function, the sequence of its coefficients, in one or several indices, is also called holonomic. Holonomic sequences are also called P-recursive sequences: they are defined recursively by multivariate recurrences satisfied by the whole sequence and by suitable specializations of it. The situation simplifies in the univariate case: any univariate sequence that satisfies a linear homogeneous recurrence relation with polynomial coefficients, or equivalently a linear homogeneous difference equation with polynomial coefficients, is holonomic.
Holonomic functions and sequences in one variable
Definitions
Let be a field of characteristic 0 (for example, or ).
A function $f = f(x)$ is called D-finite (or holonomic) if there exist polynomials $a_r(x), a_{r-1}(x), \ldots, a_0(x)$, not all zero, such that
$$a_r(x)\, f^{(r)}(x) + a_{r-1}(x)\, f^{(r-1)}(x) + \cdots + a_1(x)\, f'(x) + a_0(x)\, f(x) = 0$$
holds for all x. This can also be written as $A f = 0$, where
$$A = \sum_{k=0}^{r} a_k D_x^k$$
and $D_x$ is the differential operator that maps $f(x)$ to $f'(x)$. $A$ is called an annihilating operator of f (the annihilating operators of $f$ form an ideal in the ring $K[x][D_x]$, called the annihilator of $f$). The quantity r is called the order of the annihilating operator. By extension, the holonomic function f is said to be of order r when an annihilating operator of such order exists.
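For example, $f(x) = \cos x$ is D-finite of order 2, annihilated by $D_x^2 + 1$; its Taylor coefficients then satisfy the recurrence $(n+1)(n+2)\,c_{n+2} + c_n = 0$, which the following Python sketch uses to reconstruct cos numerically:
from math import cos

c = [1.0, 0.0]                              # c_0 = cos(0), c_1 = -sin(0)
for n in range(30):
    c.append(-c[n] / ((n + 1) * (n + 2)))   # (n+1)(n+2) c_{n+2} + c_n = 0

x = 0.7
print(sum(cn * x**n for n, cn in enumerate(c)), cos(x))   # the values agree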
A sequence $c = c_0, c_1, \ldots$ is called P-recursive (or holonomic) if there exist polynomials $a_r(n), a_{r-1}(n), \ldots, a_0(n)$ such that
$$a_r(n)\, c_{n+r} + a_{r-1}(n)\, c_{n+r-1} + \cdots + a_0(n)\, c_n = 0$$
holds for all n. This can also be written as $A c = 0$, where
$$A = \sum_{k=0}^{r} a_k S_n^k$$
and $S_n$ is the shift operator that maps $c_n$ to $c_{n+1}$. $A$ is called an annihilating operator of c (the annihilating operators of
|
https://en.wikipedia.org/wiki/Angular%20eccentricity
|
Angular eccentricity is one of many parameters which arise in the study of the ellipse or ellipsoid. It is denoted here by α (alpha). It may be defined in terms of the eccentricity, e, or the aspect ratio, b/a (the ratio of the semi-minor axis and the semi-major axis):
$$\alpha = \sin^{-1} e = \cos^{-1}\left(\frac{b}{a}\right).$$
Angular eccentricity is not currently used in English language publications on mathematics, geodesy or map projections but it does appear in older literature.
Any non-dimensional parameter of the ellipse may be expressed in terms of the angular eccentricity. Such expressions are listed in the following table, after the conventional definitions in terms of the semi-axes. The notation for these parameters varies. Here we follow Rapp:
{| class="wikitable" style="border: 1px solid darkgray" cellpadding="5"
| (first) eccentricity
| style="padding-left: 0.5em"|
| style="padding-left: 1.5em"|
| style="padding-left: 1.5em"|
|
|-
| second eccentricity
| style="padding-left: 0.5em"|
| style="padding-left: 1.5em"|
| style="padding-left: 1.5em"|
|
|-
| third eccentricity
| style="padding-left: 0.5em"|
| style="padding-left: 1.5em"|
| style="padding-left: 1.5em"|
|
|-
| style="padding-left: 0.5em"| (first) flattening
| style="padding-left: 0.5em"|
| style="padding-left: 1.5em"|
| style="padding-left: 1.5em"|
|
|-
| style="padding-left: 0.5em"|second flattening
| style="padding-left: 0.5em"|
| style="padding-left: 1.5em"|
| style="padding-left: 1.5em"|
|
|-
| style="padding-left: 0.5em"| third flattening
| style="padding-left: 0.5em"|
| style="padding-left: 1.5em"|
| style="padding-left: 1.5em"|
|
|}
The alternative expressions for the flattenings would guard against large cancellations in numerical work.
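A minimal numerical illustration of that cancellation (the specific values are ours, not the source's): for a nearly spherical body the angular eccentricity α is tiny, so $1 - \cos\alpha$ subtracts two nearly equal quantities, while the algebraically identical $2\sin^2(\alpha/2)$ does not.
import math

# Illustrative (our numbers): 1 - cos(alpha) loses all significant digits for
# tiny alpha, while the algebraically identical 2*sin(alpha/2)**2 stays accurate.
alpha = 1e-8
naive = 1.0 - math.cos(alpha)           # catastrophic cancellation: gives 0.0
stable = 2.0 * math.sin(alpha / 2)**2   # well-conditioned: ~5.0e-17 (= alpha**2/2)
print(naive, stable)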
References
External links
Toby Garfield's APPENDIX A: The ellipse [Archived copy].
Map Projections for Europe (pg.116)
Geodesy
Conic sections
|
https://en.wikipedia.org/wiki/Acyclic%20object
|
In mathematics, in the field of homological algebra, given an abelian category $\mathcal{A}$ having enough injectives and an additive (covariant) functor
$$F \colon \mathcal{A} \to \mathcal{B},$$
an acyclic object with respect to $F$, or simply an $F$-acyclic object, is an object $A$ in $\mathcal{A}$ such that
$$R^i F(A) = 0$$ for all $i > 0$,
where $R^i F$ are the right derived functors of $F$.
References
Homological algebra
|
https://en.wikipedia.org/wiki/J.%20Anthony%20Hall
|
J. Anthony Hall FREng is a leading British software engineer specializing in the use of formal methods, especially the Z notation.
Anthony Hall was educated at the University of Oxford with a BA in chemistry and a DPhil in theoretical chemistry. His subsequent posts have included:
ICI Research Fellow, Department of Theoretical Chemistry, University of Sheffield (1971–1973)
Principal Scientific Officer, British Museum Research Laboratory (1973–1980)
Senior Consultant, Systems Programming Limited (1980–1984)
Principal Consultant, Systems Designers (1984–1986)
Visiting Professor, Carnegie Mellon University (1994)
Principal Consultant, Praxis Critical Systems (1986–2004)
In particular, Hall has worked on software development using formal methods for the UK National Air Traffic Services (NATS). He has been an invited speaker at conferences concerned with formal methods, requirements engineering and software engineering.
Since 2004, Hall has been an independent consultant. He has also been a visiting professor at the University of York. Hall was the founding chair of ForTIA, the Formal Techniques Industry Association.
Selected publications
Anthony Hall, Seven Myths of Formal Methods, IEEE Software, September 1990, pp. 11–19.
Anthony Hall and Roderick Chapman, Correctness by Construction: Developing a Commercial Secure System, IEEE Software, January/February 2002, pp. 18–25.
References
Career history
External links
Anthony Hall website
Living people
British computer programmers
British computer scientists
Formal methods people
Fellows of the Royal Academy of Engineering
Fellows of the British Computer Society
Alumni of the University of Oxford
Employees of the British Museum
Academics of the University of Sheffield
British software engineers
Year of birth missing (living people)
|
https://en.wikipedia.org/wiki/Phenoptosis
|
Phenoptosis (from pheno: showing or demonstrating; ptosis: falling off) is a concept of the self-programmed death of an organism, proposed by Vladimir Skulachev in 1999.
In many species, including salmon and marsupial mice, under certain circumstances, especially following reproduction, an organism's genes will cause the organism to rapidly degenerate and die off. Recently this has been referred to as "fast phenoptosis", as aging is being explored as "slow phenoptosis". Phenoptosis is a common feature of living species, whose ramifications for humans are still being explored. The concept of programmed cell death was used before, by Lockshin & Williams in 1964 in relation to insect tissue development, around eight years before "apoptosis" was coined. The term 'phenoptosis' is a neologism associated with Skulachev's proposal.
Evolutionary significance
In multicellular organisms, worn-out and ineffective cells are dismantled and recycled for the greater good of the whole organism in a process called apoptosis. It is believed that phenoptosis is an evolutionary mechanism that culls out individuals that are damaged, aged, or infectious, or that are in direct competition with their own offspring, for the good of the species. Special circumstances need to exist for the "phenoptosis" strategy to be an evolutionarily stable strategy (ESS), let alone the only ESS. Examples of "phenoptosis" given below are really examples of semelparity - a life history with a single reproduction followed by death, which evolves not "for the good of the species" but as the ESS under conditions of a high adult-to-juvenile mortality ratio. The elimination of parts detrimental to the organism, or of individuals detrimental to the species, has been deemed "The samurai law of biology": it is better to die than to be wrong.
Stress-induced, acute, or fast phenoptosis is the rapid deterioration of an organism induced by a life event such as breeding. Elimination of the parent provides space for fitter offspri
|
https://en.wikipedia.org/wiki/Martinotti%20cell
|
Martinotti cells are small multipolar neurons with short branching dendrites. They are scattered throughout various layers of the cerebral cortex, sending their axons up to the cortical layer I where they form axonal arborization. The arbors transgress multiple columns in layer VI and make contacts with the distal tuft dendrites of pyramidal cells.
Martinotti cells express somatostatin and sometimes calbindin, but not parvalbumin or vasoactive intestinal peptide. Furthermore, Martinotti cells in layer V have been shown to express the nicotinic acetylcholine receptor α2 subunit (Chrna2).
Martinotti cells are associated with a cortical dampening mechanism. When the pyramidal neuron, which is the most common type of neuron in the cortex, starts getting overexcited, Martinotti cells start sending inhibitory signals to the surrounding neurons.
Historically, the discovery of Martinotti cells has been mistakenly attributed to Giovanni Martinotti in 1888, although it is now accepted that they were actually discovered in 1889 by Carlo Martinotti (1859–1908), a student of Camillo Golgi.
External links
News, press releases
Rare cell prevents rampant brain activity - on the discovery of potential dampening influence of Martinotti cells.
NIF Search - Martinotti Cell via the Neuroscience Information Framework
See also
List of distinct cell types in the adult human body
References
Neurons
Cell biology
|
https://en.wikipedia.org/wiki/Shadows%20of%20the%20Mind
|
Shadows of the Mind: A Search for the Missing Science of Consciousness is a 1994 book by mathematical physicist Roger Penrose that serves as a follow-up to his 1989 book The Emperor's New Mind: Concerning Computers, Minds and The Laws of Physics.
Penrose hypothesizes that:
Human consciousness is non-algorithmic, and thus is not capable of being modelled by a conventional Turing machine type of digital computer.
Quantum mechanics plays an essential role in the understanding of human consciousness; specifically, he believes that microtubules within neurons support quantum superpositions.
The objective collapse of the quantum wavefunction of the microtubules is critical for consciousness.
The collapse in question is physical behaviour that is non-algorithmic and transcends the limits of computability.
The human mind has abilities that no Turing machine could possess because of this mechanism of non-computable physics.
Argument
Mathematical thought
In 1931, the mathematician and logician Kurt Gödel proved his incompleteness theorems, showing that any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. Further to that, for any consistent formal theory that proves certain basic arithmetic truths, there is an arithmetical statement that is true, but not provable in the theory. The essence of Penrose's argument is that while a formal proof system cannot, because of the theorem, prove its own incompleteness, Gödel-type results are provable by human mathematicians. He takes this disparity to mean that human mathematicians are not describable as formal proof systems and are not running an algorithm, so that the computational theory of mind is false, and computational approaches to artificial general intelligence are unfounded. (The argument was first given by Penrose in The Emperor's New Mind (1989) and is developed further in Shadows of The Mind. An earlier version of the argument was given by J. R. Lucas in 19
|
https://en.wikipedia.org/wiki/Gas/oil%20ratio
|
When oil is produced to surface temperature and pressure it is usual for some natural gas to come out of solution. The gas/oil ratio (GOR) is the ratio of the volume of gas ("scf") that comes out of solution to the volume of oil — at standard conditions.
In reservoir simulation the gas/oil ratio is usually abbreviated $R_s$.
A point to check is whether the volume of oil is measured before or after the gas comes out of solution, since the remaining oil volume will decrease when the gas comes out.
In fact, gas dissolution and oil volume shrinkage will happen at many stages during the path of the hydrocarbon stream from reservoir through the wellbore and processing plant to export. For light oils and rich gas condensates the ultimate GOR of export streams is strongly influenced by the efficiency with which the processing plant strips liquids from the gas phase. Reported GORs may be calculated from export volumes, which may not be at standard conditions.
The GOR is a dimensionless ratio (volume per volume) in metric units, but in field units, it is usually measured in cubic feet of gas per barrel of oil or condensate.
In the states of Texas and Pennsylvania, the statutory definition of a gas well is one where the GOR is greater than 100,000 ft3/bbl or 100 Kcf/bbl.
The state of New Mexico also designates a gas well as having over 100 MCFG per barrel.
The Oklahoma Geological Survey in 2015 published a map that displays gas wells with greater than 20 MCFG per barrel of oil. They go on to display oil wells with GOR of less than 5 MCFG/BBL and oil and gas wells between these limits.
The EPA's 2016 Information Collection Request for Oil and Gas Facilities (EPA ICR No. 2548.01, OMB Control No. 2060-NEW) divided well types into five categories:
1. Heavy Oil (GOR ≤ 300 scf/bbl)
2. Light Oil (300 < GOR ≤ 100,000 scf/bbl)
3. Wet Gas (100,000 < GOR ≤ 1,000,000 scf/bbl)
4. Dry Gas (GOR > 1,000,000 scf/bbl)
5. Coal Bed Methane.
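These cut-offs amount to a simple lookup, sketched below (coal bed methane is omitted, since it is not defined by a GOR band).
# Sketch of the EPA ICR well-type cut-offs quoted above.
def well_type(gor_scf_per_bbl):
    if gor_scf_per_bbl <= 300:
        return "Heavy Oil"
    if gor_scf_per_bbl <= 100_000:
        return "Light Oil"
    if gor_scf_per_bbl <= 1_000_000:
        return "Wet Gas"
    return "Dry Gas"

assert well_type(250) == "Heavy Oil"
assert well_type(2_000_000) == "Dry Gas"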
References
Bibliography
Ratios
Petroleum
|
https://en.wikipedia.org/wiki/EnterpriseDB
|
EnterpriseDB (EDB), a privately held company based in Massachusetts, provides software and services based on the open-source database PostgreSQL (also known as Postgres), and is one of the largest contributors to Postgres. EDB develops and integrates performance, security, and manageability enhancements into Postgres to support enterprise-class workloads. EDB has also developed database compatibility for Oracle to facilitate the migration of workloads from Oracle to EDB Postgres and to support the operation of many Oracle workloads on EDB Postgres.
EDB provides a portfolio of databases and tools that extend Postgres for enterprise workloads. This includes fully managed Postgres in the cloud; extreme high availability for Postgres; command-line migration tools; a Kubernetes Operator and container images; management, monitoring, and optimization of Postgres; enterprise-ready Oracle migration tools; and browser-based schema migration tools.
EnterpriseDB was purchased by Great Hill Partners in 2019.
In June 2022, Bain Capital Private Equity announced a majority growth investment in the company, whereafter EDB continues to operate under the leadership of Ed Boyajian, President and CEO of EDB, an open source pioneer who has led the company since 2008.
Great Hill Partners, which acquired EDB in 2019, remains a significant shareholder.
History
EDB was founded in 2004. The growing acceptance of open source software created a market opportunity and the company wanted to challenge the database incumbents with a standards based product that was compatible with other vendor solutions. EnterpriseDB sought to develop an open source-based, enterprise-class relational database to compete with established vendors at an open source price point.
EDB introduced its database, EnterpriseDB 2005, in 2005. It was named Best Database Solution at LinuxWorld that year, beating solutions from Oracle, MySQL, and IBM. EDB renamed the database EnterpriseDB Advanced Server with its March 2006 rele
|
https://en.wikipedia.org/wiki/Self-healing%20material
|
Self-healing materials are artificial or synthetically created substances that have the built-in ability to automatically repair damages to themselves without any external diagnosis of the problem or human intervention. Generally, materials will degrade over time due to fatigue, environmental conditions, or damage incurred during operation. Cracks and other types of damage on a microscopic level have been shown to change thermal, electrical, and acoustical properties of materials, and the propagation of cracks can lead to eventual failure of the material. In general, cracks are hard to detect at an early stage, and manual intervention is required for periodic inspections and repairs. In contrast, self-healing materials counter degradation through the initiation of a repair mechanism that responds to the micro-damage. Some self-healing materials are classed as smart structures, and can adapt to various environmental conditions according to their sensing and actuation properties.
Although the most common types of self-healing materials are polymers or elastomers, self-healing covers all classes of materials, including metals, ceramics, and cementitious materials. Healing mechanisms vary from an intrinsic repair of the material to the addition of a repair agent contained in a microscopic vessel. For a material to be strictly defined as autonomously self-healing, it is necessary that the healing process occurs without human intervention. Self-healing polymers may, however, activate in response to an external stimulus (light, temperature change, etc.) to initiate the healing processes.
A material that can intrinsically correct damage caused by normal usage could prevent costs incurred by material failure and lower costs of a number of different industrial processes through longer part lifetime, and reduction of inefficiency caused by degradation over time.
History
The ancient Romans used a form of lime mortar that has been found to have self-healing properties. By 2
|
https://en.wikipedia.org/wiki/Bounded%20quantifier
|
In the study of formal theories in mathematical logic, bounded quantifiers (a.k.a. restricted quantifiers) are often included in a formal language in addition to the standard quantifiers "∀" and "∃". Bounded quantifiers differ from "∀" and "∃" in that bounded quantifiers restrict the range of the quantified variable. The study of bounded quantifiers is motivated by the fact that determining whether a sentence with only bounded quantifiers is true is often not as difficult as determining whether an arbitrary sentence is true.
Examples
Examples of bounded quantifiers in the context of real analysis include:
$\forall x > 0$ - for all x where x is larger than 0
$\exists y < 0$ - there exists a y where y is less than 0
$\forall x \in \mathbb{R}$ - for all x where x is a real number
$\forall x > 0 \; \exists y < 0 \; (x = y^2)$ - every positive number is the square of a negative number
Bounded quantifiers in arithmetic
Suppose that L is the language of Peano arithmetic (the language of second-order arithmetic or arithmetic in all finite types would work as well). There are two types of bounded quantifiers: $\forall n < t$ and $\exists n < t$.
These quantifiers bind the number variable n and contain a numeric term t which may not mention n but which may have other free variables. ("Numeric terms" here means terms such as "1 + 1", "2", "2 × 3", "m + 3", etc.)
These quantifiers are defined by the following rules ($\varphi$ denotes formulas):
$$\exists n < t\; \varphi \;\Leftrightarrow\; \exists n\, (n < t \wedge \varphi)$$
$$\forall n < t\; \varphi \;\Leftrightarrow\; \forall n\, (n < t \rightarrow \varphi)$$
There are several motivations for these quantifiers.
In applications of the language to recursion theory, such as the arithmetical hierarchy, bounded quantifiers add no complexity. If $\varphi$ is a decidable predicate then $\exists n < t\; \varphi$ and $\forall n < t\; \varphi$ are decidable as well.
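The reason is simply that a bounded quantifier is a finite search, as the following sketch shows; the predicate phi here is a hypothetical example (a perfect-square test), not one taken from the text.
# Sketch of the decidability claim: wrapping a decidable predicate in a
# bounded quantifier stays decidable, because the search space is finite.
def phi(n, t):
    return n * n == t                          # a decidable predicate

def bounded_exists(t):
    return any(phi(n, t) for n in range(t))    # exists n < t . phi(n, t)

def bounded_forall(t):
    return all(phi(n, t) for n in range(t))    # for all n < t . phi(n, t)

assert bounded_exists(16) and not bounded_exists(15)   # 16 is a square, 15 is not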
In applications to the study of Peano arithmetic, the fact that a particular set can be defined with only bounded quantifiers can have consequences for the computability of the set. For example, there is a definition of primality using only bounded quantifiers: a number n is prime if and only if there are not two numbers strictly less than n whose product is n. There is no quantifier-free definition of primality in the
|
https://en.wikipedia.org/wiki/DVVL
|
DVVL is an acronym for Discrete variable valve lift, a mechanical valvetrain component of which two types exist:
DVVLd, which includes dual cam phasing.
DVVLi, which includes intake valve cam phasing.
Valvetrain
|
https://en.wikipedia.org/wiki/James%20Earl%20Baumgartner
|
James Earl Baumgartner (March 23, 1943 – December 28, 2011) was an American mathematician who worked in set theory, mathematical logic and foundations, and topology.
Baumgartner was born in Wichita, Kansas, began his undergraduate study at the California Institute of Technology in 1960, then transferred to the University of California, Berkeley, from which he received his PhD in 1970 for a dissertation titled Results and Independence Proofs in Combinatorial Set Theory. His advisor was Robert Vaught. He became a professor at Dartmouth College in 1969, and spent his entire career there.
One of Baumgartner's results is the consistency of the statement that any two $\aleph_1$-dense sets of reals are order isomorphic (a set of reals is $\aleph_1$-dense if it has exactly $\aleph_1$ points in every open interval). With András Hajnal he proved the Baumgartner–Hajnal theorem, which states that the partition relation $\omega_1 \to (\alpha)^2_n$ holds for $\alpha < \omega_1$ and $n < \omega$. He died in 2011 of a heart attack at his home in Hanover, New Hampshire.
The mathematical context in which Baumgartner worked spans Suslin's problem, Ramsey theory, uncountable order types, disjoint refinements, almost disjoint families, cardinal arithmetics, filters, ideals, and partition relations, iterated forcing and Axiom A, proper forcing and the proper forcing axiom, chromatic number of graphs, a thin very-tall superatomic Boolean algebra, closed unbounded sets, and partition relations.
See also
Baumgartner's axiom
Selected publications
Baumgartner, James E., A new class of order types, Annals of Mathematical Logic, 9:187–222, 1976
Baumgartner, James E., Ineffability properties of cardinals I, Infinite and Finite Sets, Keszthely (Hungary) 1973, volume 10 of Colloquia Mathematica Societatis János Bolyai, pages 109–130. North-Holland, 1975
Baumgartner, James E.; Harrington, Leo; Kleinberg, Eugene, Adding a closed unbounded set, Journal of Symbolic Logic, 41(2):481–482, 1976
Baumgartner, James E., Ineffability properties of cardinals II, Robert E. Butts
|
https://en.wikipedia.org/wiki/Robbins%20algebra
|
In abstract algebra, a Robbins algebra is an algebra containing a single binary operation, usually denoted by $\lor$, and a single unary operation usually denoted by $\neg$, satisfying the following axioms:
For all elements a, b, and c:
Associativity: $a \lor (b \lor c) = (a \lor b) \lor c$
Commutativity: $a \lor b = b \lor a$
Robbins equation: $\neg(\neg(a \lor b) \lor \neg(a \lor \neg b)) = a$
For many years, it was conjectured, but unproven, that all Robbins algebras are Boolean algebras. This was proved in 1996, so the term "Robbins algebra" is now simply a synonym for "Boolean algebra".
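Since an equation in join and complement holds in every Boolean algebra exactly when it holds in the two-element one, the easy direction of the conjecture (every Boolean algebra is a Robbins algebra) can be spot-checked by brute force; the following sketch, ours rather than the article's, verifies the Robbins equation with "or" as join and "not" as complement.
from itertools import product

# Brute-force check of the Robbins equation in the two-element Boolean
# algebra: not(not(a or b) or not(a or not b)) == a for all a, b.
for a, b in product([False, True], repeat=2):
    assert (not ((not (a or b)) or (not (a or (not b))))) == a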
History
In 1933, Edward Huntington proposed a new set of axioms for Boolean algebras, consisting of (1) and (2) above, plus:
Huntington's equation: $\neg(\neg a \lor b) \lor \neg(\neg a \lor \neg b) = a$
From these axioms, Huntington derived the usual axioms of Boolean algebra.
Very soon thereafter, Herbert Robbins posed the Robbins conjecture, namely that the Huntington equation could be replaced with what came to be called the Robbins equation, and the result would still be Boolean algebra: $\lor$ would interpret Boolean join and $\neg$ Boolean complement. Boolean meet and the constants 0 and 1 are easily defined from the Robbins algebra primitives. Pending verification of the conjecture, the system of Robbins was called "Robbins algebra."
Verifying the Robbins conjecture required proving Huntington's equation, or some other axiomatization of a Boolean algebra, as theorems of a Robbins algebra. Huntington, Robbins, Alfred Tarski, and others worked on the problem, but failed to find a proof or counterexample.
William McCune proved the conjecture in 1996, using the automated theorem prover EQP. For a complete proof of the Robbins conjecture in one consistent notation and following McCune closely, see Mann (2003). Dahn (1998) simplified McCune's machine proof.
See also
Algebraic structure
Minimal axioms for Boolean algebra
References
Dahn, B. I. (1998) Abstract to "Robbins Algebras Are Boolean: A Revision of McCune's Computer-Generated Solution of Robbins Problem," Journal of Algebra 208(2): 526–32.
Mann, Allen (2003) "A Complete Proof of the Robbins
|
https://en.wikipedia.org/wiki/Decalage
|
Decalage on a fixed-wing aircraft is a measure of the relative incidences of wing surfaces. Various sources have defined it in multiple ways, depending on context:
On a biplane, decalage can refer to the angle difference between the upper and lower wings, i.e. the acute angle contained between the chords of the wings in question.
On other fixed-wing aircraft, decalage can refer to the difference in angle of the chord line of the wing and the chord line of the horizontal stabilizer. This is different from the angle of incidence, which refers to the angle of the wing chord to the longitudinal axis of the fuselage, without reference to the horizontal stabilizer.
In biplanes
Decalage is said to be positive when the upper wing has a higher angle of incidence than the lower wing, and negative when the lower wing's incidence is greater than that of the upper wing. Positive decalage results in greater lift from the upper wing than the lower wing, the difference increasing with the amount of decalage.
In a survey of representative biplanes, real-life design decalage is typically zero, with both wings having equal incidence. A notable exception is the Stearman PT-17, which has 4° of incidence in the lower wing, and 3° in the upper wing. Considered from an aerodynamic perspective, it is desirable to have the forward-most wing stall first, which will induce a pitch-down moment, aiding in stall recovery. Biplane designers may use incidence to control stalling behavior, but may also use airfoil selection or other means to accomplish correct behavior.
In other fixed-wing aircraft
In other fixed-wing aircraft, "decalage" typically refers to geometric decalage, which is the difference in the angle of the wing's chord line and the horizontal stabilizer's chord line.
The term "aerodynamic decalage" can refer to a similar angular measure that is taken with respect to each surface's respective zero-lift line rather than its chord line. Aerodynamic decalage may be modified i
|
https://en.wikipedia.org/wiki/Malicious%20Software%20Removal%20Tool
|
Microsoft Windows Malicious Software Removal Tool (MSRT) is a freeware second-opinion malware scanner that Microsoft's Windows Update downloads and runs on Windows computers each month, independently of any installed antivirus software. First released on January 13, 2005, MSRT does not offer real-time protection. It scans its host computer for specific, widespread malware, and tries to eliminate the infection. Outside its monthly deployment schedule, it can be separately downloaded from Microsoft.
Availability
Since its first release on January 13, 2005, Microsoft has released an updated version of the tool every second Tuesday of the month (commonly called "Patch Tuesday") through Windows Update, at which point it runs once automatically in the background and reports if malicious software is found. The tool is also available as a standalone download.
Since support for Windows 2000 ended on July 13, 2010, Microsoft stopped distributing the tool to Windows 2000 users via Windows Update. The last version of the tool that could run on Windows 2000 was 4.20, released on May 14, 2013. Starting with version 5.1, released on June 11, 2013, support for Windows 2000 was dropped altogether. Although Windows XP support ended on April 8, 2014, updates for the Windows XP version of the Malicious Software Removal Tool were provided until August 2016 (version 5.39). The latest version of MSRT for Windows Vista is 5.47, released on 11 April 2017.
Despite Microsoft ending general support for the Windows 7 operating system in 2020, updates are still provided to Windows 7 users via the standard Windows Update delivery mechanism.
Operation
MSRT does not install a shortcut in the Start menu. Hence, users must manually execute %windir%\system32\mrt.exe. The tool records its results in a log file located at %windir%\debug\mrt.log.
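A small illustrative snippet (ours, not Microsoft's) that locates that log on a Windows machine and prints its tail:
import os
import pathlib

# %windir%\debug\mrt.log records the results of each monthly MSRT run.
log = pathlib.Path(os.environ["WINDIR"]) / "debug" / "mrt.log"
if log.exists():
    print(log.read_text(errors="ignore")[-2000:])    # last ~2000 characters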
The tool reports anonymized data about any detected infections to Microsoft. MSRT's EULA discloses this reporting behavior and explains how to disable it.
Impact
In a June 2006 Micro
|
https://en.wikipedia.org/wiki/Alexander%20Ostrowski
|
Alexander Markowich Ostrowski (; ; 25 September 1893, in Kiev, Russian Empire – 20 November 1986, in Montagnola, Lugano, Switzerland) was a mathematician.
Because his father Mark was a merchant, Alexander Ostrowski attended the Kiev College of Commerce rather than a high school, and thus had insufficient qualifications to be admitted to university. However, his talent did not remain undetected: Ostrowski's mentor, Dmitry Grave, wrote to Landau and Hensel for help.
Subsequently, Ostrowski began to study mathematics at Marburg University under Hensel's supervision in 1912. During World War I he was interned, but thanks to the intervention of Hensel, the restrictions on his movements were eased somewhat, and he was allowed to use the university library.
After the war ended Ostrowski moved to Göttingen where he wrote his doctoral dissertation and was influenced by Hilbert, Klein and Landau. In 1920, after having obtained his doctorate, Ostrowski moved to Hamburg where he worked as Hecke's assistant and finished his habilitation in 1922. In 1923 he returned to Göttingen, and in 1928 became Professor of Mathematics at Basel, until retirement in 1958. In 1950 Ostrowski obtained Swiss citizenship. After retirement he still published scientific papers until his late eighties.
Selected publications
Vorlesungen über Differential- und Integralrechnung, 3 vols., Birkhäuser; vol. 1, 1945; vol. 1, 2nd edition, 1960; vol. 2, 1951; vol. 3, 1954;
Solution of equations and systems of equations. Academic Press, New York 1960; 2nd edition 1965; 2016 pbk reprint of 2nd edition
Aufgabensammlung zur Infinitesimalrechnung. several vols., Birkhäuser, Basel (1st edition 1964; 2nd edition 1972) pbk reprint vol. 1; vol. 2 A; vol. 2 B; vol. 3
Collected mathematical papers. 6 vols., Birkhäuser, Basel 1983–1984. vol. 1; vol. 2; vol. 3; vol. 4; vol. 5; vol. 6
See also
Ostrowski's theorem
Ostrowski–Hadamard gap theorem
Ostrowski numeration
Ostrowski Prize
References
External links
Ostrowski
|
https://en.wikipedia.org/wiki/Auction%20theory
|
Auction theory is an applied branch of economics which deals with how bidders act in auction markets and researches how the features of auction markets incentivise predictable outcomes. Auction theory is a tool used to inform the design of real-world auctions. Sellers use auction theory to raise higher revenues while allowing buyers to procure at a lower cost. The convergence of the price between the buyer and seller is an economic equilibrium. Auction theorists design rules for auctions to address issues which can lead to market failure. The design of these rulesets encourages optimal bidding strategies among a variety of informational settings. The 2020 Nobel Prize for Economics was awarded to Paul R. Milgrom and Robert B. Wilson “for improvements to auction theory and inventions of new auction formats.”
Introduction
Auctions facilitate transactions by enforcing a specific set of rules regarding the resource allocations of a group of bidders. Theorists consider auctions to be economic games that differ in two respects: format and information. The format defines the rules for the announcement of prices, the placement of bids, the updating of prices, the auction close, and the way a winner is picked. The way auctions differ with respect to information regards the asymmetries of information that exist between bidders. In most auctions, bidders have some private information that they choose to withhold from their competitors. For example, bidders usually know their personal valuation of the item, which is unknown to the other bidders and the seller; however, the behaviour of bidders can influence the personal valuation of other bidders.
History
One of the earliest reported auction-like customs took place in Babylonia, where men made offers for the women they wished to marry. As the auction system became more familiar, it came to be used in more and more situations. There are auctions for various things, from livestock, rare and unus
|
https://en.wikipedia.org/wiki/Configurable%20modularity
|
Configurable modularity is a term coined by Raoul de Campo of IBM Research and later expanded on by Nate Edwards of the same organization, denoting the ability to reuse independent components by changing their interconnections, but not their internals. In Edwards' view this characterizes all successful reuse systems, and indeed all systems which can be described as "engineered".
See also
Flow-Based Programming
References
Theoretical computer science
|
https://en.wikipedia.org/wiki/Texas%20Institute%20for%20Genomic%20Medicine
|
The Texas A&M Institute for Genomic Medicine (TIGM) is a research institute of Texas A&M AgriLife Research. It was founded in 2005 under a $50 million award from the Texas Enterprise Fund to accelerate the pace of medical discoveries and foster the development of the biotechnology industry in Texas.
TIGM helps researchers gain faster access to the genetically engineered knockout mice used in medical research. TIGM owns and maintains the world's largest library of embryonic stem cells for C57BL/6 mice. In addition, TIGM has contracted access to the world's largest library of genetically modified 129 mouse cells. The Institute headquarters and laboratory facilities are based on the main campus of Texas A&M University in College Station, Texas.
References
External links
Texas Institute for Genomic Medicine Homepage
Biotechnology organizations
Organizations established in 2005
Organizations based in Texas
|
https://en.wikipedia.org/wiki/Valis%3A%20The%20Fantasm%20Soldier
|
is a 1986 action-platform video game originally developed by Wolf Team and published by Telenet Japan for the MSX, PC-8801, X1, FM-7, and PC-9801 home computers. It is the first entry in the Valis series. It stars Yuko Asou, a Japanese teenage schoolgirl chosen as the Valis warrior and wielder of the mystical Valis sword to protect the Earth, the land of spirits, and the dream world Vecanti from demon lord Rogles. Through the journey, the player explores and searches for items and power-ups, while fighting enemies and defeating bosses to increase Yuko's attributes.
Programmers Masahiro Akishino and Osamu Ikegame began planning a side-scrolling action game featuring a costumed delinquent heroine, an idea originating from Sukeban Deka, to compete in a contest sponsored by the Japanese computer magazine LOGiN; the project was kept secret within Telenet until the company learned of its existence and approved continued development. After a Telenet superior expressed dislike of its graphics, writer Hiroki Hayashi was ordered to take action and fix it, leading to the conception of Valis. Akishino and Hayashi used Ikegame's work as a basis to introduce their own story and character ideas, which were based on an unfinished personal novel Hayashi wrote prior to the game's production.
Valis sold well and was listed as one of the best-selling games in 1987 rankings. An almost completely reworked version was also released for the Family Computer, followed by remakes for the Sega Mega Drive/Genesis and PC Engine Super CD-ROM², as well as a version for mobile phones. The game was supplemented with manga adaptations, an anime short by Sunrise, albums from King Records and Wave Master, and doujinshi books. Critical reception has varied depending on the version: the original MSX version garnered mixed reviews and the Genesis remake carried average sentiments, while the enhanced PC Engine remake was received more favorably. It was followed by Valis II (1989).
Gameplay and prem
|
https://en.wikipedia.org/wiki/Claranet
|
Claranet provides network, hosting and managed application services in the UK, France, Germany, The Netherlands (Benelux), Portugal, Spain, Italy and Brazil.
History
Charles Nasser founded the ISP in 1996 and by 1999 had 150,000 subscribers.
Claranet has grown its business through a number of acquisitions, including Netscalibur in 2003, via net.works uk in 2004 and in 2005 Amen Group, via net.works Europe and Artful. In 2012 Claranet acquired Star Technology.
The company has annualised revenues of circa £375 million, over 6,500 customers and over 2,200 employees. On a constant currency basis, revenues have increased fourfold in under five years. Claranet was recognised as a ‘Leader’ in Gartner’s Magic Quadrant for Managed Hybrid Cloud Hosting, Europe (2016) for the fourth consecutive year and holds Premier Partner status with Amazon Web Services and Google Cloud.
In 2017 Claranet acquired the French company Oxalide and ITEN Solutions, expanding its presence in the Portuguese IT market.
On 5 July 2018, Claranet acquired NotSoSecure.
References
External links
Internet service providers of the Netherlands
Internet service providers of the United Kingdom
Internet service providers of Germany
Web hosting
|
https://en.wikipedia.org/wiki/Abel%20equation
|
The Abel equation, named after Niels Henrik Abel, is a type of functional equation of the form
$$f(h(x)) = h(x + 1)$$
or
$$\alpha(f(x)) = \alpha(x) + 1.$$
The forms are equivalent when $\alpha$ is invertible. $h$ or $\alpha$ control the iteration of $f$.
Equivalence
The second equation can be written
$$\alpha(f(x)) = \alpha(x) + 1.$$
Taking $x = \alpha^{-1}(y)$, the equation can be written
$$f(\alpha^{-1}(y)) = \alpha^{-1}(y + 1).$$
For a known function $f(x)$, a problem is to solve the functional equation for the function $\alpha^{-1} \equiv h$, possibly satisfying additional requirements, such as $\alpha^{-1}(0) = 1$.
The change of variables $s^{\alpha(x)} = \Psi(x)$, for a real parameter $s$, brings Abel's equation into the celebrated Schröder's equation, $\Psi(f(x)) = s \Psi(x)$.
The further change $F(x) = \exp(s^{\alpha(x)})$ brings it into Böttcher's equation, $F(f(x)) = F(x)^s$.
The Abel equation is a special case of (and easily generalizes to) the translation equation,
$$\omega(\omega(x, u), v) = \omega(x, u + v),$$
e.g., for $\omega(x, 1) = f(x)$,
$$\omega(x, u) = \alpha^{-1}(\alpha(x) + u).$$ (Observe $\omega(x, 0) = x$.)
The Abel function $\alpha(x)$ further provides the canonical coordinate for Lie advective flows (one-parameter Lie groups).
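As a concrete check (our example, not the article's): for $f(x) = x^2$ on $x > 1$, the function $\alpha(x) = \log_2(\ln x)$ satisfies Abel's equation, so $\alpha$ turns iteration of $f$ into a unit shift and the $n$-th iterate is $\alpha^{-1}(\alpha(x) + n)$.
import math

# Illustrative: for f(x) = x**2 on x > 1, alpha(x) = log2(ln x) satisfies
# alpha(f(x)) = alpha(x) + 1, and iterates of f become shifts of alpha.
f = lambda x: x * x
alpha = lambda x: math.log2(math.log(x))
alpha_inv = lambda y: math.exp(2.0 ** y)

x = 1.7
assert abs(alpha(f(x)) - (alpha(x) + 1)) < 1e-12

# Recover the third iterate f(f(f(x))) = x**8 through the Abel function.
assert abs(alpha_inv(alpha(x) + 3) - f(f(f(x)))) < 1e-9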
History
Initially, the equation in a more general form was reported. Even in the case of a single variable, the equation is non-trivial and admits special analysis.
In the case of a linear transfer function, the solution is expressible compactly.
Special cases
The equation of tetration is a special case of Abel's equation, with $f = \exp$.
In the case of an integer argument, the equation encodes a recurrent procedure, e.g.,
$$\alpha(f(f(x))) = \alpha(x) + 2,$$
and so on,
$$\alpha(f_n(x)) = \alpha(x) + n.$$
Solutions
The Abel equation has at least one solution on $E$ if and only if for all $x \in E$ and all $n \in \mathbb{N}$, $f^{n}(x) \neq x$, where $f^{n} = f \circ f \circ \cdots \circ f$ is the function $f$ iterated $n$ times.
Analytic solutions (Fatou coordinates) can be approximated by asymptotic expansion of a function defined by power series in the sectors around a parabolic fixed point. The analytic solution is unique up to a constant.
See also
Functional equation
Schröder's equation
Böttcher's equation
Infinite compositions of analytic functions
Iterated function
Shift operator
Superfunction
References
Niels Henrik Abel
Functional equations
|
https://en.wikipedia.org/wiki/Schr%C3%B6der%27s%20equation
|
Schröder's equation, named after Ernst Schröder, is a functional equation with one independent variable: given the function $h(x)$, find the function $\Psi(x)$ such that
$$\Psi(h(x)) = s\, \Psi(x).$$
Schröder's equation is an eigenvalue equation for the composition operator $C_h$ that sends a function $f$ to $f \circ h$.
If $a$ is a fixed point of $h$, meaning $h(a) = a$, then either $\Psi(a) = 0$ (or $\infty$) or $s = 1$. Thus, provided that $\Psi(a)$ is finite and does not vanish or diverge, the eigenvalue $s$ is given by $s = h'(a)$.
Functional significance
For $a = 0$, if $h$ is analytic on the unit disk, fixes $0$, and $0 < |h'(0)| < 1$, then Gabriel Koenigs showed in 1884 that there is an analytic (non-trivial) $\Psi$ satisfying Schröder's equation. This is one of the first steps in a long line of theorems fruitful for understanding composition operators on analytic function spaces, cf. Koenigs function.
Equations such as Schröder's are suitable to encoding self-similarity, and have thus been extensively utilized in studies of nonlinear dynamics (often referred to colloquially as chaos theory). It is also used in studies of turbulence, as well as the renormalization group.
An equivalent transpose form of Schröder's equation for the inverse $\Phi = \Psi^{-1}$ of Schröder's conjugacy function is $h(\Phi(y)) = \Phi(sy)$. The change of variables $\alpha(x) = \log(\Psi(x))/\log(s)$ (the Abel function) further converts Schröder's equation to the older Abel equation, $\alpha(h(x)) = \alpha(x) + 1$. Similarly, the change of variables $\Psi(x) = \log(\phi(x))$ converts Schröder's equation to Böttcher's equation, $\phi(h(x)) = \phi(x)^s$.
Moreover, for the velocity, $v(x) = \log(s)\, \Psi(x)/\Psi'(x)$, Julia's equation, $v(h(x)) = h'(x)\, v(x)$, holds.
The $n$-th power of a solution of Schröder's equation provides a solution of Schröder's equation with eigenvalue $s^n$, instead. In the same vein, for an invertible solution $\Psi(x)$ of Schröder's equation, the (non-invertible) function $\Psi(x)\, k(\log \Psi(x))$ is also a solution, for any periodic function $k(x)$ with period $\log(s)$. All solutions of Schröder's equation are related in this manner.
Solutions
Schröder's equation was solved analytically in the case that $a$ is an attracting (but not superattracting) fixed point, that is $0 < |h'(a)| < 1$, by Gabriel Koenigs (1884).
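The Koenigs solution can also be sketched numerically: for an attracting fixed point at 0 with $s = h'(0)$, the limit $\Psi(x) = \lim_{n} h^n(x)/s^n$ solves Schröder's equation. The map $h$ below is a hypothetical example, not one from the article.
# Numerical sketch of the Koenigs construction: Psi(x) ~ h^n(x) / s**n
# solves Psi(h(x)) = s * Psi(x) for an attracting fixed point at 0.
s = 0.5
h = lambda x: s * x + 0.1 * x * x      # h(0) = 0, h'(0) = s, 0 < |s| < 1

def koenigs(x, n=60):
    for _ in range(n):
        x = h(x)
    return x / s**n                    # h^n(x) / s^n

x = 0.3
assert abs(koenigs(h(x)) - s * koenigs(x)) < 1e-9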
In the case of a superattracting fixed point, $|h'(a)| = 0$, Schröder's equation is unwieldy, and
|
https://en.wikipedia.org/wiki/Memory%20box
|
A memory box is a box containing objects that serve as reminders.
Dementia
In cases of dementia, a memory box may be used as a form of therapy to remind the patient of their earlier life.
Deceased infants
Memory boxes are provided by some hospitals in the event of stillbirth, miscarriage, or other problem during or after childbirth. They contain objects belonging to or representing the deceased child to help relatives come to terms with their loss. Memory boxes are usually donated by local charities and organizations.
Memory boxes for miscarriage, stillbirth and infant loss can contain the following items:
lock of hair
baby blanket
special box to keep items in
data card that states baby's name and birth information
card/ink pad for taking foot/hand prints
journal
writing pen
small stuffed animal to use in photos
outfit that fits the baby
air-dry clay for taking foot/hand molds
disposable camera
pocket kleenex
bereavement books and information
References
Equipment used in childbirth
Miscarriage
Stillbirth
|
https://en.wikipedia.org/wiki/Diagnosis%20%28artificial%20intelligence%29
|
As a subfield in artificial intelligence, diagnosis is concerned with the development of algorithms and techniques that are able to determine whether the behaviour of a system is correct. If the system is not functioning correctly, the algorithm should be able to determine, as accurately as possible, which part of the system is failing, and which kind of fault it is facing. The computation is based on observations, which provide information on the current behaviour.
The expression diagnosis also refers to the answer to the question of whether the system is malfunctioning or not, and to the process of computing the answer. The word comes from the medical context, where a diagnosis is the process of identifying a disease by its symptoms.
Example
An example of diagnosis is the process of a garage mechanic with an automobile. The mechanic will first try to detect any abnormal behavior based on observations of the car and his knowledge of this type of vehicle. If he finds that the behavior is abnormal, the mechanic will try to refine his diagnosis by using new observations and possibly testing the system, until he discovers the faulty component; the mechanic's expertise plays an important role in vehicle diagnosis.
Expert diagnosis
The expert diagnosis (or diagnosis by expert system) is based on experience with the system. Using this experience, a mapping is built that efficiently associates the observations to the corresponding diagnoses.
The experience can be provided:
By a human operator. In this case, the human knowledge must be translated into a computer language.
By examples of the system behaviour. In this case, the examples must be classified as correct or faulty (and, in the latter case, by the type of fault). Machine learning methods are then used to generalize from the examples.
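A minimal sketch of such a mapping, in the style of a rule-based expert system (the rules here are hypothetical, not from the text):
# Observed symptoms are associated directly with candidate diagnoses.
RULES = {
    frozenset({"engine_cranks", "no_start", "fuel_gauge_empty"}): "out of fuel",
    frozenset({"no_crank", "dashboard_dark"}): "dead battery",
    frozenset({"engine_overheats", "coolant_low"}): "coolant leak",
}

def diagnose(observations):
    # Return every diagnosis whose triggering symptoms are all observed.
    return [fault for symptoms, fault in RULES.items() if symptoms <= observations]

print(diagnose({"no_crank", "dashboard_dark", "lights_dim"}))    # ['dead battery']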
The main drawbacks of these methods are:
The difficulty acquiring the expertise. The expertise is typically only available after a long period of use of the system
|
https://en.wikipedia.org/wiki/Email%20bomb
|
In Internet usage, an email bomb is a form of net abuse that sends large volumes of email to an address in order to overflow the mailbox, overwhelm the server where the email address is hosted in a denial-of-service attack (DoS attack), or act as a smoke screen to distract attention from important email messages indicating a security breach.
Methods
There are three methods of perpetrating an email bomb: mass mailing, list linking and zip bombing.
Mass mailing
Mass mailing consists of sending numerous duplicate emails to the same email address. These types of mail bombs are simple to design but their extreme simplicity means they can be easily detected by spam filters. Email-bombing using mass mailing is also commonly performed as a DDoS attack by employing botnets: hierarchical networks of computers compromised by malware ("zombies") and under the attacker's control. Similar to their use in spamming, the attacker instructs the botnet to send out millions of emails, but unlike normal botnet spamming, the emails are all addressed to only one or a few addresses the attacker wishes to flood. This form of email bombing is similar to other DDoS flooding attacks. As the targets are frequently the dedicated hosts handling website and email accounts of a business, this type of attack can be devastating to both services of the host.
This type of attack is more difficult to defend against than a simple mass-mailing bomb because of the multiple source addresses and the possibility of each zombie computer sending a different message or employing stealth techniques to defeat spam filters.
List linking
List linking, also known as "email cluster bomb", means signing a particular email address up to several email list subscriptions. The victim then has to unsubscribe from these unwanted services manually. The attack can be carried out automatically with simple scripts: this is easy, almost impossible to trace back to the perpetrator, and potentially very destructive. A massive
|
https://en.wikipedia.org/wiki/Decorrelation
|
Decorrelation is a general term for any process that is used to reduce autocorrelation within a signal, or cross-correlation within a set of signals, while preserving other aspects of the signal. A frequently used method of decorrelation is the use of a matched linear filter to reduce the autocorrelation of a signal as far as possible. Since the minimum possible autocorrelation for a given signal energy is achieved by equalising the power spectrum of the signal to be similar to that of a white noise signal, this is often referred to as signal whitening.
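A standard way to implement such whitening (our sketch; a Karhunen-Loève style transform, discussed below) is to rotate and rescale a set of signals so that their empirical covariance becomes the identity:
import numpy as np

# Illustrative whitening sketch: decorrelate two correlated signals by
# diagonalizing their sample covariance and rescaling each eigendirection.
rng = np.random.default_rng(0)
latent = rng.normal(size=(2, 10_000))
mixing = np.array([[2.0, 0.5], [0.5, 1.0]])
x = mixing @ latent                            # correlated pair of signals

cov = np.cov(x)                                # 2 x 2 sample covariance
eigval, eigvec = np.linalg.eigh(cov)
whitener = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T
y = whitener @ x                               # decorrelated ("whitened") signals

print(np.round(np.cov(y), 2))                  # ~ identity matrix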
Process
Although most decorrelation algorithms are linear, non-linear decorrelation algorithms also exist.
Many data compression algorithms incorporate a decorrelation stage. For example, many transform coders first apply a fixed linear transformation that would, on average, have the effect of decorrelating a typical signal of the class to be coded, prior to any later processing. This is typically a Karhunen–Loève transform, or a simplified approximation such as the discrete cosine transform.
By comparison, sub-band coders do not generally have an explicit decorrelation step, but instead exploit the already-existing reduced correlation within each of the sub-bands of the signal, due to the relative flatness of each sub-band of the power spectrum in many classes of signals.
Linear predictive coders can be modelled as an attempt to decorrelate signals by subtracting the best possible linear prediction from the input signal, leaving a whitened residual signal.
Decorrelation techniques can also be used for many other purposes, such as reducing crosstalk in a multi-channel signal, or in the design of echo cancellers.
In image processing, decorrelation techniques can be used to enhance or stretch the colour differences found in each pixel of an image. This is generally termed 'decorrelation stretching'.
The concept of decorrelation can be applied in many other fields.
In neuroscience, decorrelation is used in the an
|
https://en.wikipedia.org/wiki/Road%20Blaster
|
is an interactive movie video game developed by Data East featuring animation by Toei Animation, originally released exclusively in Japan as a laserdisc-based arcade game in 1985. The player assumes the role of a vigilante who must avenge the death of his wife by pursuing the biker gang responsible for her death in a modified sports car. The game would later be ported to a variety of home formats such as the MSX and Sharp X1 (VHD format), Sega CD (under the title of Road Blaster FX), LaserActive (in Mega-LD format), PlayStation and Sega Saturn (in a compilation with Thunder Storm). The Sega CD and Mega-LD versions were released outside of Japan under titles of Road Avenger and Road Prosecutor respectively.
Gameplay
As with other laserdisc-based arcade games from the same time, the gameplay consists of on-screen instructions overlaid over pre-recorded full motion video animated footage of high-speed chases and vehicular combat. The player controls the crosshair from a first-person perspective, to steer their car toward the correct directions according to the green arrows flashing and beeping beside it, while controlling the gas pedal, brake and booster whenever they light up.
The game has nine stages. Upon successfully completing a level, the player is graded on the reaction time. Different difficulty levels can be selected. In Normal Mode, pop-up icons and audio tones signal when to turn left or right, brake, hit turbo, or hit other cars. In Hard Mode, there are no on-screen icons to guide the player.
Plot
The story of Road Blaster is inspired by revenge thriller films such as Mad Max. In the United States of the late 1990s, the player assumes the role of a vigilante who drives a customized sports car in order to get revenge on the biker gang responsible for his wife's death on their honeymoon. After recovering from his own injuries, he upgrades his car and goes on a rampage through nine areas. His goal is to seek out the gang's female boss and complete his vengeance.
De
|
https://en.wikipedia.org/wiki/STIX%20Fonts%20project
|
The STIX Fonts project or Scientific and Technical Information Exchange (STIX), is a project sponsored by several leading scientific and technical publishers to provide, under royalty-free license, a comprehensive font set of mathematical symbols and alphabets, intended to serve the scientific and engineering community for electronic and print publication. The STIX fonts are available as fully hinted OpenType/CFF fonts. There is currently no TrueType version of the STIX fonts available, but the STIX Mission Statement includes the intention to create one in the future. However, there exists an unofficial conversion of STIX Fonts (from the beta version release) to TrueType, suitable for use with software without OpenType support.
STIX fonts also include natural language glyphs for Latin, Greek and Cyrillic. The family is designed to be visually compatible with the Times New Roman family, a popular choice in book publishing.
Composition
Among the glyphs in STIX, 32.9% have been contributed by the project members. The commercial TeX vendor and TeX font foundry MicroPress has been contracted to create the additional glyphs. The STIX project will also create a TeX implementation. Goals also include incorporating the characters into Unicode, and ensuring that browsers can use them.
Members of the STIX Fonts project, known collectively as the STI Pub consortium, include the American Institute of Physics, the American Chemical Society, the American Mathematical Society, the Institute of Electrical and Electronics Engineers, the American Physical Society, and Elsevier.
Development process
A beta version of the fonts was released on October 31, 2007. This version does not include enough of the OpenType mathematical layout features present in Cambria Math, so it is not usable to the fullest extent in Microsoft Office 2007. The Latin glyph set included in the beta version does not yet cover all the characters required to typeset in Eastern European languages.
"Final design
|
https://en.wikipedia.org/wiki/Nimrod%20%28computer%29
|
The Nimrod, built in the United Kingdom by Ferranti for the 1951 Festival of Britain, was an early computer custom-built to play Nim, inspired by the earlier Nimatron. The twelve-by-nine-by-five-foot (3.7-by-2.7-by-1.5-meter) computer, designed by John Makepeace Bennett and built by engineer Raymond Stuart-Williams, allowed exhibition attendees to play a game of Nim against an artificial intelligence. The player pressed buttons on a raised panel corresponding with lights on the machine to select their moves, and the Nimrod moved afterward, with its calculations represented by more lights. The speed of the Nimrod's calculations could be reduced to allow the presenter to demonstrate exactly what the computer was doing, with more lights showing the state of the calculations. The Nimrod was intended to demonstrate Ferranti's computer design and programming skills rather than to entertain, though Festival attendees were more interested in playing the game than the logic behind it. After its initial exhibition in May, the Nimrod was shown for three weeks in October 1951 at the Berlin Industrial Show before being dismantled.
The game of Nim running on the Nimrod is a candidate for one of the first video games, as it was one of the first computer games to have any sort of visual display of the game. It appeared only four years after the 1947 invention of the cathode-ray tube amusement device, the earliest known interactive electronic game to use an electronic display, and one year after Bertie the Brain, a computer similar to the Nimrod which played tic-tac-toe at the 1950 Canadian National Exhibition. However, because the Nimrod used light bulbs rather than a screen with real-time visual graphics, much less moving graphics, it does not meet some definitions of a video game.
Development
In the summer of 1951, the United Kingdom held the Festival of Britain, a national exhibition held throughout the UK to promote the British contribution to science, technology, industrial design,
|
https://en.wikipedia.org/wiki/Sequential%20coupling
|
In object-oriented programming, sequential coupling (also known as temporal coupling) is a form of coupling where a class requires its methods to be called in a particular sequence. This may be an anti-pattern, depending on context.
Methods whose name starts with Init, Begin, Start, etc. may indicate the existence of sequential coupling.
Using a car as an analogy, if the user steps on the gas without first starting the engine, the car does not crash, fail, or throw an exception - it simply fails to accelerate.
Sequential coupling can be refactored with the template method pattern to overcome the problems posed by the usage of this anti-pattern.
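A sketch of that refactoring (class and method names are illustrative): the template method fixes the required call order in one place, so callers can no longer invoke the steps out of sequence.
from abc import ABC, abstractmethod

class Car(ABC):
    def drive(self):
        # The template method: the only public entry point, enforcing order.
        self._start_engine()
        self._accelerate()

    @abstractmethod
    def _start_engine(self): ...

    @abstractmethod
    def _accelerate(self): ...

class SportsCar(Car):
    def _start_engine(self):
        print("engine started")

    def _accelerate(self):
        print("accelerating")

SportsCar().drive()    # always starts the engine before accelerating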
References
Anti-patterns
|
https://en.wikipedia.org/wiki/Conical%20function
|
In mathematics, conical functions or Mehler functions are functions which can be expressed in terms of Legendre functions of the first and second kind,
$$P^{\mu}_{-\tfrac{1}{2}+i\lambda}(x)$$
and
$$Q^{\mu}_{-\tfrac{1}{2}+i\lambda}(x).$$
The functions were introduced by Gustav Ferdinand Mehler in 1868, when expanding in series the distance of a point on the axis of a cone to a point located on the surface of the cone. Mehler introduced a special notation to represent these functions. He obtained integral representations and series representations for them. He also established an addition theorem for the conical functions. Carl Neumann obtained an expansion of the functions in terms of the Legendre polynomials in 1881. Leonhardt introduced for the conical functions the equivalent of the spherical harmonics in 1882.
External links
G. F. Mehler "Ueber die Vertheilung der statischen Elektricität in einem von zwei Kugelkalotten begrenzten Körper" Journal für die reine und angewandte Mathematik 68, 134 (1868).
G. F. Mehler "Ueber eine mit den Kugel- und Cylinderfunctionen verwandte Function und ihre Anwendung in der Theorie der Elektricitätsvertheilung" Mathematische Annalen 18 p. 161 (1881).
C. Neumann "Ueber die Mehler'schen Kegelfunctionen und deren Anwendung auf elektrostatische Probleme" Mathematische Annalen 18 p. 195 (1881).
G. Leonhardt " Integraleigenschaften der adjungirten Kegelfunctionen" Mathematische Annalen 19 p. 578 (1882).
Milton Abramowitz and Irene Stegun (Eds.) Handbook of Mathematical Functions (Dover, 1972) p. 337
A. Gil, J. Segura, N. M. Temme "Computing the conical function $P^{\mu}_{-1/2+i\tau}(x)$" SIAM J. Sci. Comput. 31(3), 1716–1741 (2009).
Tiwari, U. N.; Pandey, J. N. The Mehler-Fock transform of distributions. Rocky Mountain J. Math. 10 (1980), no. 2, 401–408.
Special functions
|
https://en.wikipedia.org/wiki/International%20Conference%20on%20Distributed%20Computing%20Systems
|
The International Conference on Distributed Computing Systems (ICDCS) is the oldest conference in the field of distributed computing systems in the world. It was launched by the IEEE Computer Society Technical Committee on Distributed Processing (TCDP) in October 1979, and is sponsored by that committee. It was held every 18 months until 1983 and has been an annual conference since 1984. The ICDCS has a long history of significant achievements and worldwide visibility, and has recently celebrated its 37th year.
Location history
2019: Dallas, Texas, United States
2018: Vienna, Austria
2017: Atlanta, GA, United States
2016: Nara, Japan
2015: Columbus, Ohio, United States
2014: Madrid, Spain
2013: Philadelphia, Pennsylvania, United States
2012: Macau, China
2011: Minneapolis, Minnesota, United States
2010: Genoa, Italy
2009: Montreal, Quebec, Canada
2008: Beijing, China
2007: Toronto, Ontario, Canada
2006: Lisbon, Portugal
2005: Columbus, Ohio, United States
2004: Keio University, Japan
2003: Providence, RI, United States
2002: Vienna, Austria
2001: Phoenix, AZ, United States
2000: Taipei, Taiwan
1999: Austin, TX, United States
1998: Amsterdam, The Netherlands
1997: Baltimore, MD, United States
1996: Hong Kong
1995: Vancouver, Canada
1994: Poznań, Poland
1993: Pittsburgh, PA, United States
1992: Yokohama, Japan
1991: Arlington, TX, United States
1990: Paris, France
1989: Newport Beach, CA, United States
1988: San Jose, CA, United States
1987: Berlin, Germany
1986: Cambridge, MA, United States
1985: Denver, CO, United States
1984: San Francisco, CA, United States
1983: Hollywood, FL, United States
1981: Versailles, France
1979: Huntsville, AL, United States
See also
List of distributed computing conferences
External links
ICDCS 2018 - July 2–July 5, 2018, Vienna, Austria
ICDCS 2007 - June 25–June 29, 2007, Toronto, Canada.
ICDCS 2006 - July 4–July 7, 2006, Lisbon, Portugal.
ICDCS 2005 - July 6–July 10, 2005, Co
|
https://en.wikipedia.org/wiki/Aging%20in%20cats
|
Aging in cats is the process by which cats change over the course of their natural lifespans. The average lifespan of a domestic cat may range from 10 to 13 years. As cats senesce, they undergo predictable changes in health and behavior. Dental disease and loss of olfaction are common as cats age, affecting eating habits. Arthritis and sarcopenia are also common in older cats. How a cat's health is affected by aging may be managed through modifications in a cat's diet, accessibility adjustments, and cognitive stimulation.
Average lifespan among domestic cats
The average lifespan of domestic cats has increased in recent decades. It has risen from seven years in the 1980s, to nine years in 1995, to about 15 years in 2021. Reliable information on the lifespans of domestic cats is varied and limited. Nevertheless, a number of studies have investigated the matter and have come up with noteworthy estimates. Estimates of mean lifespan in these studies range between 13 and 20 years, with typical values in the neighborhood of 15 years. At least one study found a median lifespan value of 14 years and a corresponding interquartile range of 9 to 17 years. Maximum lifespan has been estimated at values ranging from 22 to 30 years although there have been claims of cats living longer than 30 years. According to the 2010 edition of the Guinness World Records, the oldest cat ever recorded was Creme Puff, who died in 2005, aged 38 years, 3 days. Female cats typically outlive male cats, and crossbred cats typically outlive purebred cats. It has also been found that the greater a cat's weight, the lower its life expectancy on average.
A common misconception in cat aging (and dog aging) is that a cat ages the equivalent of what a human would age in seven years each year. This is inaccurate due to the inconsistencies in aging as well as there being far more accurate equations to predict a cat's age in "cat years". A more accurate equation often used by veterinarians to predict cat yea
|
https://en.wikipedia.org/wiki/Bisection%20bandwidth
|
In computer networking, the bisection bandwidth of a network topology is the bandwidth available between two equal-sized partitions when the network is bisected. The bisection should be chosen so that the bandwidth between the two partitions is the minimum over all such cuts. Bisection bandwidth gives the true bandwidth available in the entire system: it accounts for the bottleneck bandwidth of the network as a whole. Therefore, bisection bandwidth represents the bandwidth characteristics of the network better than any other metric.
Bisection bandwidth calculations
For a linear array with n nodes, the bisection bandwidth is one link bandwidth: only one link needs to be broken to bisect a linear array into two partitions.
For a ring topology with n nodes, two links must be broken to bisect the network, so the bisection bandwidth is the bandwidth of two links.
A tree topology with n nodes can be bisected at the root by breaking one link, so the bisection bandwidth is one link bandwidth.
For a mesh topology with n nodes, √n links must be broken to bisect the network, so the bisection bandwidth is the bandwidth of √n links.
For a hypercube topology with n nodes, n/2 links must be broken to bisect the network, so the bisection bandwidth is the bandwidth of n/2 links.
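These per-topology counts can be captured in a small routine. The following Python sketch (an illustration added here, not part of the original article) returns the number of links cut by a balanced bisection; multiplying by the per-link bandwidth gives the bisection bandwidth:

import math

def bisection_links(topology, n):
    """Number of links cut by a balanced bisection of an n-node network."""
    if topology == "linear":
        return 1                    # cut one link in the middle of the chain
    if topology == "ring":
        return 2                    # a ring must be cut in two places
    if topology == "tree":
        return 1                    # cut the single link at the root
    if topology == "mesh":
        return int(math.isqrt(n))   # a sqrt(n) x sqrt(n) mesh: cut one row of links
    if topology == "hypercube":
        return n // 2               # one link per node pair across one dimension
    raise ValueError("unknown topology: " + topology)

# Bisection bandwidth = cut links x per-link bandwidth (e.g. 10 Gbit/s links):
for topo in ("linear", "ring", "tree", "mesh", "hypercube"):
    print(topo, bisection_links(topo, 64) * 10, "Gbit/s")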
Significance of bisection bandwidth
Theoretical support for the importance of this measure of network performance was developed in the PhD research of Clark Thomborson (formerly Clark Thompson). Thomborson proved that important algorithms for sorting, Fast Fourier transformation, and matrix-matrix multiplication become communication-limited—as opposed to CPU-limited or memory-limited—on computers with insufficient bisection bandwidth. F. Thomson Leighton's PhD research tightened Thomborson's loose bound on the bisection bandwidth of a computationally-important variant of the De Bruijn graph known as the shuffle-exchange network. Based on Bill Dally's analysis of latency, average-case throughput, and h
|
https://en.wikipedia.org/wiki/Safety%20instrumented%20system
|
In functional safety a safety instrumented system (SIS) is an engineered set of hardware and software controls which provides a protection layer that shuts down a chemical, nuclear, electrical, or mechanical system, or part of it, if a hazardous condition is detected.
Requirement specification
An SIS performs a safety instrumented function (SIF). The SIS is credited with a certain measure of reliability depending on its safety integrity level (SIL). The required SIL is determined from a quantitative process hazard analysis (PHA), such as a Layers of Protection Analysis (LOPA). The SIL requirements are verified during the design, construction, installation, and operation of the SIS. The required functionality may be verified by design reviews, factory acceptance testing, site acceptance testing, and regular functional testing. The PHA is in turn based on a hazard identification exercise. In the process industries (oil and gas production, refineries, chemical plants, etc.), this exercise is usually a hazard and operability study (HAZOP). The HAZOP usually identifies not only the process hazards of a plant (such as release of hazardous materials due to the process operating outside the safe limits of the plant) but also the SIFs protecting the plant from such excursions.
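As a rough illustration of the quantitative side of SIL verification, the sketch below maps an average probability of failure on demand (PFDavg) to a SIL, using the commonly cited IEC 61508 bands for low-demand mode; the function name and example values are assumptions for illustration, not taken from this article:

def sil_from_pfd(pfd_avg):
    """Map an average probability of failure on demand (low-demand mode)
    to a Safety Integrity Level per the commonly cited IEC 61508 bands."""
    bands = [
        (1e-5, 1e-4, 4),   # SIL 4
        (1e-4, 1e-3, 3),   # SIL 3
        (1e-3, 1e-2, 2),   # SIL 2
        (1e-2, 1e-1, 1),   # SIL 1
    ]
    for low, high, sil in bands:
        if low <= pfd_avg < high:
            return sil
    return None  # outside the SIL 1-4 range

print(sil_from_pfd(5e-3))  # -> 2: a SIF with PFDavg of 0.005 meets SIL 2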
Design
An SIS is intended to perform specific control functions to prevent unsafe process operations when unacceptable or dangerous conditions occur. Because of its criticality, safety instrumented systems must be independent from all other control systems that control the same equipment, in order to ensure SIS functionality is not compromised. An SIS is composed of the same types of control elements (including sensors, logic solvers, actuators and other control equipment) as a Basic Process Control System (BPCS). However, all of the control elements in an SIS are dedicated solely to the proper functioning of the SIS.
The essential characteristic of an SIS is that it must include instruments, w
|
https://en.wikipedia.org/wiki/Genotropism
|
Genotropism is defined as the reciprocal attraction between carriers of the same or related latent recessive genes. Developed by the Hungarian psychiatrist Léopold Szondi in the 1930s, the theory concludes that instinct is biological and genetic in origin. Szondi believed that these genes regulated the "possibilities of fate" and were the working principle of the familial unconscious.
Overview
Genotropism consists of the theory that genes influence human behavior. While identified as entities, genes exist in groups because evolution favors cooperation. Within each gene group, it is possible to detect specific needs that function as mechanisms of screening and natural selection.
Szondi arrived at a sort of genetic determinism, a philosophical theory of predestination. "The latent hereditary factors in human beings, the recessive genes, do not remain dormant or inactive within the human organism, but exert a very important and even decisive influence upon its behavior. This latent or recessive gene theory claims that these non-dominant hereditary factors determine the Object selection, voluntary and involuntary, of the individual. The drives resulting from these latent genes, therefore, direct the individual's selection of love objects, friendships, occupations, diseases, and forms of death. Hence, from the very beginning of the human's existence there is a hidden plan of life guided by 'Instinctual drives'."
Instinctual drives
In Szondi's theory, each "need" (a link between genes and behavior) comprises a polarity of positive and negative tendencies. Needs also group together in polarities to form larger wholes called "instinctual drives." Together, behavior tendencies, needs, and drives combine to form patterned wholes.
Szondi created a drive theory that determines that every drive has at least four genes. "The four Szondian drives are (1) contact, (2) sexual, (3) paroxysmal, and (4) ego. They are implicated in their corresponding psychiatric disorders and equival
|
https://en.wikipedia.org/wiki/MKS%20Toolkit
|
MKS Toolkit is a software package produced and maintained by PTC that provides a Unix-like environment for scripting, connectivity and porting Unix and Linux software to Microsoft Windows. It was originally created for MS-DOS, and OS/2 versions were released up to version 4.4. Several editions of each version, such as MKS Toolkit for developers, power users, enterprise developers and interoperability are available, with the enterprise developer edition being the most complete.
Before PTC, MKS Toolkit was owned by MKS Inc. In 1999, MKS acquired a company based in Fairfax, Virginia, USA called Datafocus Inc. The Datafocus product NuTCRACKER had included the MKS Toolkit since 1994 as part of its Unix compatibility technology. The MKS Toolkit was also licensed by Microsoft for the first two versions of their Windows Services for Unix, but later dropped in favor of Interix after Microsoft purchased the latter company.
Version 10.0 was the most recent release.
Overview
The MKS Toolkit products offer functionality in the following areas:
Command shell environments of Bourne shell, KornShell, Bash, C shell, Tcl shell
Traditional Unix commands (400+), including grep, awk, sed, vi, ls, kill
Windows specific commands (70+), including registry, shortcut, desktop, wcopy, db, dde, userinfo
Tape and archive commands, including tar, cpio, pax, zip, bzip2, ar
Remote connectivity, including ssh, remote shell, telnet, xterm, kterm, rexec, rlogin
Porting APIs, including fork(), signals, alarms, threads
Graphical porting APIs, including X, ncurses, Motif, OpenGL
Supported operating systems
MKS Toolkit products support all IA-32 and x64 versions of the Microsoft Windows operating systems. There is some loss of functionality running IA-32 versions on Windows 9x. Earlier versions ran on MS-DOS and compatible operating systems.
See also
Cygwin
MinGW
Hamilton C shell
UnxUtils
UWIN
GnuWin32
References
Reviews
External links
MKS Home
Compilers
Programming tools
Compatibility layers
Unix em
|
https://en.wikipedia.org/wiki/All-pass%20filter
|
An all-pass filter is a signal processing filter that passes all frequencies equally in gain, but changes the phase relationship among various frequencies. Most types of filter reduce the amplitude (i.e. the magnitude) of the signal applied to it for some values of frequency, whereas the all-pass filter allows all frequencies through without changes in level.
Common applications
A common application in electronic music production is in the design of an effects unit known as a "phaser", where a number of all-pass filters are connected in sequence and the output mixed with the raw signal.
It does this by varying its phase shift as a function of frequency. Generally, the filter is described by the frequency at which the phase shift crosses 90° (i.e., when the input and output signals go into quadrature – when there is a quarter wavelength of delay between them).
They are generally used to compensate for other undesired phase shifts that arise in the system, or for mixing with an unshifted version of the original to implement a notch comb filter.
They may also be used to convert a mixed phase filter into a minimum phase filter with an equivalent magnitude response or an unstable filter into a stable filter with an equivalent magnitude response.
Active analog implementation
Implementation using low-pass filter
The operational amplifier circuit shown in the adjacent figure implements a single-pole active all-pass filter that features a low-pass filter at the non-inverting input of the opamp. The filter's transfer function is given by:
H(s) = (1 − sRC) / (1 + sRC),
which has one pole at −1/RC and one zero at 1/RC (i.e., they are reflections of each other across the imaginary axis of the complex plane). The magnitude and phase of H(iω) for some angular frequency ω are
|H(iω)| = 1 and φ(ω) = −2 arctan(ωRC).
The filter has unity-gain magnitude for all ω. The filter introduces a different delay at each frequency and reaches input-to-output quadrature at ω = 1/RC (i.e., phase shift is −90°).
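The unity-gain and quadrature claims can be checked numerically. A short Python sketch, with assumed component values R = 10 kΩ and C = 10 nF (chosen here for illustration), evaluating H(iω):

import numpy as np

R, C = 10e3, 10e-9          # assumed values: 10 kOhm, 10 nF -> 1/RC = 1e4 rad/s
w = np.logspace(2, 6, 5)    # angular frequencies bracketing 1/RC

H = (1 - 1j * w * R * C) / (1 + 1j * w * R * C)   # first-order all-pass response

for wi, Hi in zip(w, H):
    print(f"w={wi:9.1f} rad/s  |H|={abs(Hi):.3f}  phase={np.degrees(np.angle(Hi)):7.1f} deg")
# |H| is 1.000 at every frequency; the phase passes through -90 deg at w = 1/RC.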
This implementation uses a low-pass filter at the
|
https://en.wikipedia.org/wiki/Stanford%20Web%20Credibility%20Project
|
The Stanford Web Credibility Project, which involves assessments of website credibility conducted by the Stanford University Persuasive Technology Lab, is an investigative examination of what leads people to believe in the veracity of content found on the Web. The goal of the project is to enhance website design and to promote further research on the credibility of Web resources.
Origins
The Web has become an important channel for exchanging information and services, resulting in a greater need for methods to ascertain the credibility of websites. In response, since 1998, the Stanford Persuasive Technology Lab (SPTL) has investigated what causes people to believe, or not, what they find online. SPTL provides insight into how computers can be designed to change what people think and do, an area called captology. Directed by experimental psychologist B.J. Fogg, the Stanford team includes social scientists, designers, and technologists who research and design interactive products that motivate and influence their users.
Objectives
The ongoing research of the Stanford Web Credibility Project includes:
Performing quantitative research on Web credibility
Collecting all public information on Web credibility
Acting as a clearinghouse for this information
Facilitating research and discussion about Web credibility
Collaborating with academic and industry research groups
How Do People Evaluate a Web Site's Credibility?
A study by the Stanford Web Credibility Project, How Do People Evaluate a Web Site's Credibility? Results from a Large Study, published in 2002, invited 2,684 "average people" to rate the credibility of websites in ten content areas. The study evaluated the credibility of two live websites randomly assigned from one of ten content categories: e-commerce, entertainment, finance, health, news, nonprofit, opinion or review, search engines, sports, and travel. A total of one hundred sites were assessed.
This study was launched jointly with a para
|
https://en.wikipedia.org/wiki/Outline%20of%20algebraic%20structures
|
In mathematics, there are many types of algebraic structures which are studied. Abstract algebra is primarily the study of specific algebraic structures and their properties. Algebraic structures may be viewed in different ways, however the common starting point of algebra texts is that an algebraic object incorporates one or more sets with one or more binary operations or unary operations satisfying a collection of axioms.
Another branch of mathematics known as universal algebra studies algebraic structures in general. From the universal algebra viewpoint, most structures can be divided into varieties and quasivarieties depending on the axioms used. Some axiomatic formal systems that are neither varieties nor quasivarieties, called nonvarieties, are sometimes included among the algebraic structures by tradition.
Concrete examples of each structure will be found in the articles listed.
Algebraic structures are so numerous today that this article will inevitably be incomplete. In addition to this, there are sometimes multiple names for the same structure, and sometimes one name will be defined by disagreeing axioms by different authors. Most structures appearing on this page will be common ones which most authors agree on. Other web lists of algebraic structures, organized more or less alphabetically, include Jipsen and PlanetMath. These lists mention many structures not included below, and may present more information about some structures than is presented here.
Study of algebraic structures
Algebraic structures appear in most branches of mathematics, and one can encounter them in many different ways.
Beginning study: In American universities, groups, vector spaces and fields are generally the first structures encountered in subjects such as linear algebra. They are usually introduced as sets with certain axioms.
Advanced study:
Abstract algebra studies properties of specific algebraic structures.
Universal algebra studies algebraic structures abstractly, r
|
https://en.wikipedia.org/wiki/Buffalo%20network-attached%20storage%20series
|
The Buffalo network-attached storage series is a family of network-attached storage devices.
The current lineup includes the LinkStation and TeraStation series. These devices have undergone various improvements since they were first produced, and have expanded to include a Windows Storage Server-based operating system.
History
Buffalo released the first TeraStation model, the HD-HTGL/R5, in December 2004. The second-generation model, the TS-TGL/R5, was released the following year with uninterrupted operation and improved operational stability. This was followed by the TeraStation Pro and the TeraStation Pro II in 2006, which offered iSCSI support, as well as 2U rackmount models. In 2008, the fourth-generation TS-X models were released with hot swapping and replication, along with 1U rackmount versions.
TeraStation
The TeraStation is a network-attached storage device using a PowerPC or ARM architecture processor. Many TeraStation models are shipped with enterprise-grade internal hard drives mounted in a RAID array. Since January 2012, the TeraStation uses LIO for its iSCSI target.
LinkStation
The LinkStation is a network-attached storage device using a PowerPC or ARM architecture processor designed for personal use, aiming to serve as a central media hub and backup storage for a household. Compared to the TeraStation series, LinkStation devices typically offer more streamlined UI and media server features.
Current product lineup
LinkStation
The LinkStation is notable among the Linux community both in Japan and in the US/Europe for being "hackable" into a generic Linux appliance and made to do tasks other than the file storage and sharing tasks for which it was designed. As the device runs on Linux, and included changes to the Linux source code, Buffalo was required to release their modified versions of source code as per the terms of the GNU General Public License. Due to the availability of source code and the relatively low cost of the device, there
|
https://en.wikipedia.org/wiki/Fragment%20molecular%20orbital
|
The fragment molecular orbital method (FMO) is a computational method that can be used to calculate very large molecular systems with thousands of atoms using ab initio quantum-chemical wave functions.
History of FMO and related methods
The fragment molecular orbital method (FMO) was developed by Kazuo Kitaura and coworkers in 1999. FMO is deeply interconnected with the energy decomposition analysis (EDA) by Kazuo Kitaura and Keiji Morokuma, developed in 1976. The main use of FMO is to compute very large molecular systems by dividing them into fragments and performing ab initio or density functional quantum-mechanical calculations of fragments and their dimers, whereby the Coulomb field from the whole system is included. The latter feature allows fragment calculations without using caps.
The mutually consistent field (MCF) method had introduced the idea of self-consistent fragment calculations in their embedding potential, which was later used with some modifications in various methods including FMO. There had been other methods related to FMO including the incremental correlation method by H. Stoll (1992).
Later, other methods closely related to FMO were proposed, including the kernel energy method of L. Huang and the electrostatically embedded many-body expansion by E. Dahlke. S. Hirata, and later M. Kamiya, suggested approaches also very closely related to FMO. The effective fragment molecular orbital (EFMO) method combines some features of the effective fragment potentials (EFP) and FMO. A detailed perspective on the fragment-based method development can be found in a review.
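As a hedged illustration of the fragment-and-dimer idea, the two-body FMO expansion (FMO2) assembles the total energy as E ≈ Σ_I E_I + Σ_{I>J} (E_IJ − E_I − E_J), where E_I and E_IJ are fragment ("monomer") and fragment-pair ("dimer") energies computed in the embedding Coulomb field of the whole system. A minimal Python sketch with hypothetical energies:

from itertools import combinations

def fmo2_energy(monomer_E, dimer_E):
    """Two-body FMO (FMO2) total energy from fragment energies monomer_E[I]
    and fragment-pair energies dimer_E[(I, J)]."""
    total = sum(monomer_E.values())
    for I, J in combinations(sorted(monomer_E), 2):
        # add the pair interaction energy of fragments I and J
        total += dimer_E[(I, J)] - monomer_E[I] - monomer_E[J]
    return total

# Hypothetical fragment energies (hartree) for a three-fragment system:
mono = {1: -76.01, 2: -76.02, 3: -76.00}
dim = {(1, 2): -152.04, (1, 3): -152.008, (2, 3): -152.025}
print(fmo2_energy(mono, dim))  # -> -228.043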
Introduction to FMO
In addition to the calculation of the total properties, such as the energy, energy gradient, dipole moment etc., an interaction energy is obtained for each pair of fragments. This pair interaction energy can be further decomposed into electrostatic, exchange, charge transfer and dispersion contributions. This analysis is known as the pair interaction energy decompositi
|
https://en.wikipedia.org/wiki/Band%20%28algebra%29
|
In mathematics, a band (also called idempotent semigroup) is a semigroup in which every element is idempotent (in other words equal to its own square). Bands were first studied and named by .
The lattice of varieties of bands was described independently in the early 1970s by Biryukov, Fennemore and Gerhard. Semilattices, left-zero bands, right-zero bands, rectangular bands, normal bands, left-regular bands, right-regular bands and regular bands are specific subclasses of bands that lie near the bottom of this lattice and which are of particular interest; they are briefly described below.
Varieties of bands
A class of bands forms a variety if it is closed under formation of subsemigroups, homomorphic images and direct product. Each variety of bands can be defined by a single defining identity.
Semilattices
Semilattices are exactly the commutative bands; that is, they are the bands satisfying the equation
xy = yx for all x and y.
Bands induce a preorder that may be defined as x ≤ y if xy = x. Requiring commutativity implies that this preorder becomes a (semilattice) partial order.
Zero bands
A left-zero band is a band satisfying the equation
xy = x,
whence its Cayley table has constant rows.
Symmetrically, a right-zero band is one satisfying
xy = y,
so that the Cayley table has constant columns.
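These defining identities can be verified mechanically on a finite Cayley table. A minimal Python sketch (an illustration, with the table encoded as a list of rows indexed by elements 0..n-1):

def is_band(table):
    """table[x][y] is the product xy; a band requires every element idempotent."""
    n = len(table)
    return all(table[x][x] == x for x in range(n))

def is_left_zero(table):
    n = len(table)
    return all(table[x][y] == x for x in range(n) for y in range(n))   # xy = x

def is_right_zero(table):
    n = len(table)
    return all(table[x][y] == y for x in range(n) for y in range(n))   # xy = y

left_zero = [[0, 0], [1, 1]]    # constant rows, as described above
print(is_band(left_zero), is_left_zero(left_zero), is_right_zero(left_zero))
# -> True True False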
Rectangular bands
A rectangular band is a band that satisfies
xyx = x for all x, y, or equivalently,
xyz = xz for all x, y, z.
In any semigroup, the first identity is sufficient to characterize a nowhere commutative semigroup, i.e. a semigroup in which no two distinct elements commute.
Nowhere commutativity implies the first identity: in any flexible magma a(aa) = (aa)a, so every element commutes with its square. In a nowhere commutative semigroup it follows that a = aa, so every element is idempotent and any nowhere commutative semigroup is in fact a nowhere commutative band. Moreover, (aba)a = ab(aa) = aba and a(aba) = (aa)ba = aba, so a commutes with aba, and thus a = aba, the first characteristic identity.
Conversely, in any semigroup the first identity implies idempotence, since taking y = a gives a = aaa, whence aa = aaaa = a(aa)a = a, so the semigroup is a band. It also implies nowhere commutativity: if ab = ba, then a = aba = (ba)a = b(aa) = ba and b = bab = (ab)b = a(bb) = ab, so a = ba = ab = b.
|
https://en.wikipedia.org/wiki/Web%20interoperability
|
Web interoperability is the practice of producing web pages that are viewable with nearly every device and browser. There have been various projects to improve web interoperability, for example the Web Standards Project, Mozilla's Technology Evangelism and Web Standards Group, and the Web Essential Conference.
History
The term was first used in the Web Interoperability Pledge (WIP), a promise to adhere to current HTML recommendations as promoted by the World Wide Web Consortium (W3C). The WIP was not a W3C initiative; it was started and run by ZDNet AnchorDesk.
This issue was known as "cross browsing" in the browser war between Internet Explorer and Netscape. Microsoft's Internet Explorer was the dominant browser after that, but modern web browsers such as Mozilla Firefox, Opera and Safari have become dominant, and support additional web standards beyond what Internet Explorer supports. Because of Internet Explorer's backwards compatibility, some web pages have continued to use non-standard HTML tags, DOM handling scripts, and platform-specific technologies such as ActiveX, which could potentially be harmful for Web accessibility and device independence.
Elements
Structural and semantic markup with HTML
CSS-based layout with layout elements that resize based on screen size
See also
Web accessibility
Computer accessibility
Multimodal interaction
Forward compatibility
Backward compatibility
References
External links
Htaccess Redirect Generator
Web design
Interoperability
|
https://en.wikipedia.org/wiki/Radial%20%28radio%29
|
In RF engineering, radial has three distinct meanings, all referring to lines which radiate from (or intersect at) a radio antenna, but the meanings are otherwise unrelated to one another.
Ground system radial wires
When used in the context of antenna construction, radial wires are physical objects: Wires running away from the base of the antenna, used to augment or replace the conductivity of the ground near the base of the antenna. The radial wires either may run above the surface of the earth (elevated radials), on the surface (on ground radials), or buried a centimeter or so under the earth (buried radials). The ends of the wires nearest the antenna base are connected to the antenna system electrical ground, and the far ends are either unconnected, or connected to metal stakes driven into the earth.
Top loading radial wires
Symmetrically arranged radial wires may also be attached to the top of an antenna, running horizontally away from its apex. For radials of practical length, the effect is to improve the feedpoint impedance of a short antenna almost as much as extending the height of the antenna by an amount equal to the combined length of all the radials, up to a point of diminishing returns at around a dozen radials. The radials do not themselves radiate, but may indirectly cause a small improvement in the radiation of short antennas by raising the point of maximum current upward along the main part of the mast.
Map radial lines
When used in the context of planning for a transmission system, radial lines are a concept used when describing a radio station's broadcast range: The radials in this case are several lines drawn on a map, radiating from the transmitter, with evenly spaced horizontal bearings. The radial extends as far as the transmitted signal can reach either by calculation or by measurement.
Ground system radial wires
Stations transmitting at low frequencies like the mediumwave and longwave AM broadcast bands, and some lower shortwave frequencies,
|
https://en.wikipedia.org/wiki/Zerosumfree%20monoid
|
In abstract algebra, an additive monoid is said to be zerosumfree, conical, centerless or positive if nonzero elements do not sum to zero. Formally:
a + b = 0 implies a = b = 0.
This means that the only way zero can be expressed as a sum is as 0 + 0.
References
Semigroup theory
|
https://en.wikipedia.org/wiki/Evolutionary%20landscape
|
An evolutionary landscape is a metaphor or a construct used to think about and visualize the processes of evolution (e.g. natural selection and genetic drift) acting on a biological entity (e.g. a gene, protein, population, or species). This entity can be viewed as searching or moving through a search space. For example, the search space of a gene would be all possible nucleotide sequences. The search space is only part of an evolutionary landscape. The final component is the "y-axis", which is usually fitness. Each value along the search space can result in a high or low fitness for the entity. If small movements through search space cause changes in fitness that are relatively small, then the landscape is considered smooth. Smooth landscapes happen when most fixed mutations have little to no effect on fitness, which is what one would expect with the neutral theory of molecular evolution. In contrast, if small movements result in large changes in fitness, then the landscape is said to be rugged. In either case, movement tends to be toward areas of higher fitness, though usually not the global optima.
What exactly constitutes an "evolutionary landscape" is frequently confused in the literature; the term is often used interchangeably with "adaptive landscape" and "fitness landscape", although some authors have different definitions of adaptive and fitness landscapes. Additionally, there is a large disagreement whether the concept of an evolutionary landscape should be used as a visual metaphor disconnected from the underlying math, a tool for evaluating models of evolution, or a model in and of itself used to generate hypotheses and predictions.
History
Pre-Wright
According to McCoy (1979), the first evolutionary landscape was presented by Armand Janet of Toulon, France, in 1895. In Janet's evolutionary landscape, a species is represented as a point or an area on a polydimensional surface of phenotypes, which is reduced to two dimensions for simplicity. The size o
|
https://en.wikipedia.org/wiki/Cray%20T90
|
The Cray T90 series (code-named Triton during development) was the last of a line of vector processing supercomputers manufactured by Cray Research, Inc, superseding the Cray C90 series. The first machines were shipped in 1995, and featured a 2.2 ns (450 MHz) clock cycle and two-wide vector pipes, for a peak speed of 1.8 gigaflops per processor; the high clock speed arises from the CPUs being built using ECL logic. As with the Cray J90, each CPU contained a scalar data cache, in addition to the instruction buffering/caching which has always been in Cray architectures.
Configurations were available with between four and 32 processors, and with either IEEE 754 or traditional Cray floating-point arithmetic; the processors shared an SRAM main memory of up to eight gigabytes, with a bandwidth of three 64-bit words per cycle per CPU (giving a 32-CPU STREAM bandwidth of 360 gigabytes per second). The clock signal is distributed via a fiber-optic harness to the processors.
The T90 series was available in three variants, the T94 (one to four processors), T916 (eight to 16 processors) and T932 (16 to 32 processors).
It is widely considered to have been slightly ahead of the state of the art at the time it shipped; the systems were never particularly reliable. At launch, a 32-processor T932 cost $35 million.
Cray T90 systems were installed at, amongst other places, at least three US government sites, at NAVOCEANO in Mississippi (Bay St. Louis) USA, at NTT and NIED in Japan, at the Ford Motor Company and at General Motors, at NOAA's Geophysical Fluid Dynamics Laboratory, at Forschungszentrum Jülich in Germany, and at the Commissariat à l'Energie Atomique in France.
The system chassis weighs , contains of fluorinert coolant, and is approximately the shape and size of a very large chest freezer, paneled in black and gold plastic.
Its successor, some years after the last T90s shipped, was the Cray X1.
References
External links
Top 500 Supercomputer sites (PDF)
Compute
|
https://en.wikipedia.org/wiki/System%20image
|
In computing, a system image is a serialized copy of the entire state of a computer system stored in some non-volatile form such as a file. A system is said to be capable of using system images if it can be shut down and later restored to exactly the same state. In such cases, system images can be used for backup.
Hibernation is an example that uses an image of the entire machine's RAM.
Disk images
If a system has all its state written to a disk, then a system image can be produced by simply copying that disk to a file elsewhere, often with disk cloning applications. On many systems a complete system image cannot be created by a disk cloning program running within that system because information can be held outside of disks and volatile memory, for example in non-volatile memory like boot ROMs.
Process images
A process image is a copy of a given process's state at a given point in time. It is often used to create persistence within an otherwise volatile system. A common example is a database management system (DBMS). Most DBMS can store the state of its database or databases to a file before being closed down (see database dump). The DBMS can then be restarted later with the information in the database intact and proceed as though the software had never stopped. Another example would be the hibernate feature of many operating systems. Here, the state of all RAM memory is stored to disk, the computer is brought into an energy saving mode, then later restored to normal operation.
Some emulators provide a facility to save an image of the system being emulated. In video gaming this is often referred to as a savestate.
Another use is code mobility: a mobile agent can migrate between machines by having its state saved, then copying the data to another machine and restarting there.
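Languages with serialization facilities can approximate a process image for plain data, though not for threads or open file handles. A minimal Python sketch using pickle, given here for illustration only:

import pickle

# Some in-memory "process state" to preserve across a restart:
state = {"counter": 42, "pending": ["job-a", "job-b"]}

with open("state.img", "wb") as f:      # save an image of the state
    pickle.dump(state, f)

with open("state.img", "rb") as f:      # ...later, possibly on another machine
    restored = pickle.load(f)

assert restored == state                # proceed as though we had never stopped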
Programming language support
Some programming languages provide a command to take a system image of a program. This is normally a standard feature in Smalltalk (inspired by FLEX) and Lisp
|
https://en.wikipedia.org/wiki/Grothendieck%20connection
|
In algebraic geometry and synthetic differential geometry, a Grothendieck connection is a way of viewing connections in terms of descent data from infinitesimal neighbourhoods of the diagonal.
Introduction and motivation
The Grothendieck connection is a generalization of the Gauss–Manin connection constructed in a manner analogous to that in which the Ehresmann connection generalizes the Koszul connection. The construction itself must satisfy a requirement of geometric invariance, which may be regarded as the analog of covariance for a wider class of structures including the schemes of algebraic geometry. Thus the connection in a certain sense must live in a natural sheaf on a Grothendieck topology. In this section, we discuss how to describe an Ehresmann connection in sheaf-theoretic terms as a Grothendieck connection.
Let M be a manifold and π : E → M a surjective submersion, so that E is a manifold fibred over M. Let J¹(M, E) be the first-order jet bundle of sections of E. This may be regarded as a bundle over M or a bundle over the total space of E. With the latter interpretation, an Ehresmann connection is a section of the bundle (over E) J¹(M, E) → E. The problem is thus to obtain an intrinsic description of the sheaf of sections of this vector bundle.
Grothendieck's solution is to consider the diagonal embedding Δ : M → M × M. The sheaf I of ideals of Δ in M × M consists of functions on M × M which vanish along the diagonal. Much of the infinitesimal geometry of M can be realized in terms of I. For instance, I/I² is the sheaf of sections of the cotangent bundle. One may define a first-order infinitesimal neighborhood M⁽²⁾ of Δ in M × M to be the subscheme corresponding to the sheaf of ideals I². (See below for a coordinate description.)
There are a pair of projections p₁, p₂ : M × M → M given by projection onto the respective factors of the Cartesian product, which restrict to give projections p₁, p₂ : M⁽²⁾ → M. One may now form the pullback of the fibre space E along one or the other of p₁ or p₂. In general, there is no canonical way to identify p₁*E and p₂*E with each other.
|
https://en.wikipedia.org/wiki/IBM%20RS/6000
|
The RISC System/6000 (RS/6000) is a family of RISC-based Unix servers, workstations and supercomputers made by IBM in the 1990s. The RS/6000 family replaced the IBM RT PC computer platform in February 1990 and was the first computer line to see the use of IBM's POWER and PowerPC based microprocessors. In October 2000, the RS/6000 brand was retired for POWER-based servers and replaced by the eServer pSeries. Workstations continued under the RS/6000 brand until 2002, when new POWER-based workstations were released under the IntelliStation POWER brand.
History
The first RS/6000 models used the Micro Channel bus, later models used PCI. Some later models conformed to the PReP and CHRP standard platforms, which were co-developed with Apple and Motorola, with Open Firmware. The plan was to enable the RS/6000 to run multiple operating systems such as Windows NT, NetWare, OS/2, Solaris, Taligent, AIX and Mac OS but in the end only IBM's Unix variant AIX was used and supported on RS/6000. Linux is widely used on CHRP based RS/6000s, but support was added after the RS/6000 name was changed to eServer pSeries in 2000.
The RS/6000 family also included the POWERserver servers, POWERstation workstations and Scalable POWERparallel supercomputer platform. While most machines were desktops, desksides, or rack-mounted, there were laptop models too. Famous RS/6000s include the PowerPC 604e-based Deep Blue supercomputer that beat world champion Garry Kasparov at chess in 1997, and the POWER3-based ASCI White, which was the fastest supercomputer in the world during 2000–2002.
Architecture
Hardware
Service processor
Many RS/6000 and subsequent pSeries machines came with a service processor, which booted itself when power was applied and continuously ran its own firmware, independent of the operating system. The service processor could call a phone number (via a modem) in case of serious failure with the machine. Early advertisements and documentation called the service processor "Sy
|
https://en.wikipedia.org/wiki/METEOR
|
METEOR (Metric for Evaluation of Translation with Explicit ORdering) is a metric for the evaluation of machine translation output. The metric is based on the harmonic mean of unigram precision and recall, with recall weighted higher than precision. It also has several features that are not found in other metrics, such as stemming and synonymy matching, along with the standard exact word matching. The metric was designed to fix some of the problems found in the more popular BLEU metric, and also produce good correlation with human judgement at the sentence or segment level. This differs from the BLEU metric in that BLEU seeks correlation at the corpus level.
Results have been presented which give correlation of up to 0.964 with human judgement at the corpus level, compared to BLEU's achievement of 0.817 on the same data set. At the sentence level, the maximum correlation with human judgement achieved was 0.403.
Algorithm
As with BLEU, the basic unit of evaluation is the sentence. The algorithm first creates an alignment (see illustrations) between two sentences: the candidate translation string and the reference translation string. The alignment is a set of mappings between unigrams. A mapping can be thought of as a line between a unigram in one string and a unigram in another string. The constraints are as follows: every unigram in the candidate translation must map to zero or one unigram in the reference. Mappings are selected to produce an alignment as defined above. If there are two alignments with the same number of mappings, the alignment is chosen with the fewest crosses, that is, with fewer intersections of two mappings. From the two alignments shown, alignment (a) would be selected at this point. Stages are run consecutively and each stage only adds to the alignment those unigrams which have not been matched in previous stages. Once the final alignment is computed, the score is computed as follows. Unigram precision is calculated as:
P = m / w_t
where m is the num
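A simplified sketch of the unigram statistics and METEOR's recall-weighted harmonic mean (the 9:1 recall weighting of the published metric) is shown below; it uses exact-word matching only, whereas real METEOR also applies stemming and synonym modules:

def meteor_fmean(candidate, reference):
    """Unigram precision/recall and METEOR's recall-weighted harmonic mean.
    Alignment is simplified here to multiset matching of exact words."""
    cand, ref = candidate.split(), reference.split()
    matched = 0
    pool = list(ref)
    for w in cand:              # each candidate unigram maps to at most
        if w in pool:           # one unmatched reference unigram
            pool.remove(w)
            matched += 1
    if matched == 0:
        return 0.0
    precision = matched / len(cand)
    recall = matched / len(ref)
    return 10 * precision * recall / (recall + 9 * precision)

print(meteor_fmean("the cat sat on the mat", "the cat was on the mat"))  # ~0.833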
|
https://en.wikipedia.org/wiki/Private%20message
|
In computing, a private message, personal message, or direct message (abbreviated as PM or DM) refers to a private communication sent or received by a user of a private communication channel on any given platform. Unlike public posts, PMs are only viewable by the participants. Though long a function present on IRCs and Internet forums, private channels for PMs have recently grown in popularity due to the increasing demand for privacy and private collaboration on social media.
There are two main types of private messages. The first type includes those found on IRCs and Internet forums, as well as on social media apps like Twitter, Facebook, and Instagram, where the focus is public posting; here, PMs allow users to communicate privately without leaving the platform. The second type consists of those relayed through instant messaging platforms such as WhatsApp, Kik, and Snapchat, where users create accounts primarily to exchange PMs. A third type, peer-to-peer messaging, occurs when users create and own the infrastructure used to transmit and store the messages; while features vary depending on the application, they give the user full control over the data they transmit. An example of software that enables this kind of messaging is Classified-ads.
Besides serving as a tool to connect privately with friends and family, PMs have gained momentum in the workplace. Working professionals use PMs to reach coworkers in other spaces and increase efficiency during meetings. Although useful, using PMs in the workplace may blur the boundary between work and private lives.
History
The development of computers sparked the information revolution, which changed the way people communicate. Peter Drucker published an article centering on the theme that the computer is to the Information Revolution what the railroad was to the Industrial Revolution; railroads unified travel between the east and west coast of the United States, whereas computers unified communication across the entire globe. This revolutioni
|
https://en.wikipedia.org/wiki/Tom%20Oberheim
|
Thomas Elroy Oberheim (born July 7, 1936, Manhattan, Kansas), known as Tom Oberheim, is an American audio engineer and electronics engineer best known for designing effects processors, analog synthesizers, sequencers, and drum machines. He has been the founder of four audio electronics companies, most notably Oberheim Electronics. He was also a key figure in the development and adoption of the MIDI standard. He is also a trained physicist.
Early life and education
Oberheim was born and raised in Manhattan, Kansas, also the home of Kansas State University. Beginning in junior high school, he put his interest in electronics into practice by building hi-fi components and amplifiers for friends. A fan of jazz music, Oberheim decided to move to Los Angeles after seeing an ad on the back of Downbeat Magazine about free jazz performances at a club there. He arrived in Los Angeles in July 1956 at the age of 20 with $10 in his pocket. He worked as a draftsman trainee at NCR Corporation where he was inspired to become a computer engineer. Oberheim enrolled at UCLA, studying computer engineering and physics while also taking music courses. Over the next nine years he worked toward his physics degree, serving in the U.S. Army for a short period of time, harmonizing with the Gregg Smith Singers, and working jobs at computer companies (most notably Abacus, where he first began designing computers).
Oberheim was attending a class during his last semester at UCLA when he met and became friends with trumpet player Don Ellis, and keyboardist Joseph Byrd of the band The United States of America, who were attending the same class. Oberheim stayed in touch with both Ellis and Byrd after leaving UCLA, and ended up building an amplifier for Ellis to use for his public address system. Oberheim also built guitar amplifiers for The United States of America, and their lead singer Dorothy Moskowitz asked him to build a ring modulator for the band (Joseph Byrd had used one while a band membe
|
https://en.wikipedia.org/wiki/Cholestane
|
Cholestane is a saturated tetracyclic triterpene. This 27-carbon biomarker is produced by diagenesis of cholesterol and is one of the most abundant biomarkers in the rock record. Presence of cholestane, its derivatives and related chemical compounds in environmental samples is commonly interpreted as an indicator of animal life and/or traces of O2, as animals are known for exclusively producing cholesterol, and thus has been used to draw evolutionary relationships between ancient organisms of unknown phylogenetic origin and modern metazoan taxa. Cholesterol is made in low abundance by other organisms (e.g., rhodophytes, land plants), but because these other organisms produce a variety of sterols it cannot be used as a conclusive indicator of any one taxon. It is often found in analysis of organic compounds in petroleum.
Background
Cholestane is a saturated C-27 animal biomarker often found in petroleum deposits. It is a diagenetic product of cholesterol, an organic molecule made primarily by animals that makes up ~30% of animal cell membranes. Cholesterol is responsible for membrane rigidity and fluidity, as well as intracellular transport, cell signaling and nerve conduction. In humans, it is also the precursor for hormones (e.g., estrogen, testosterone). It is synthesized via squalene and naturally assumes a specific stereochemical orientation (3β-ol, 5α (H), 14α (H), 17α (H), 20R). This stereochemical orientation is typically maintained throughout diagenetic processes, but cholestane can be found in the fossil record with many stereochemical configurations.
Biomarker
Cholestane in the fossil record is often interpreted as an indicator (biomarker) of ancient animal life and is often used by geochemists and geobiologists to reconstruct animal evolution (particularly in the Precambrian Earth history; e.g., Ediacaran, Cryogenian and Proterozoic in general). Molecular oxygen is required to produce cholesterol; thus, the presence of cholestane suggests some
|
https://en.wikipedia.org/wiki/AppForge
|
AppForge, Inc. was a software company headquartered in Atlanta, Georgia, providing mobile application development services as well as CrossFire, a software tool simplifying mobile applications for Symbian, Windows Mobile, RIM BlackBerry, and Palm OS. Crossfire was a software plugin for Visual Basic 6 and for Microsoft Visual Studio .NET.
On March 13, 2007, AppForge ceased operations, and its assets were assigned for the benefit of its creditors so that bidding could begin. All AppForge license validation servers went dark on April 2, and all development platforms became invalid, leaving its customers high and dry. Eight days later, the developers forum and shop sections of the website went offline. On April 12, the AppForge URL was redirected to Oracle's website. The assets of AppForge, Inc. have been assigned for the benefit of creditors to Hays Financial Consulting, LLC.
On April 18, Oracle announced it had purchased the intellectual property of AppForge, Inc., stating:
“Oracle did not acquire the AppForge...former customer contracts, so Oracle does not plan to sell or provide support for former AppForge products going forward.”
Effect of insolvency on AppForge CrossFire Users And Possible Solutions
AppForge used to sell an ISV version and a non-ISV version of Crossfire. The ISV version required yearly renewals of the development environment license. The non-ISV version required activation of the client license (the booster) upon deployment.
On April 2, 2007, ISV users were no longer able to update their applications once their yearly license expired. As of the same date, non-ISV users were no longer able to install new software on their mobile units, or to re-install mobile units that had run out of batteries.
References
External links
AppForge.com
AppForge sells assets; firm owes $1.8 million - Atlanta Business Chronicle, April 13, 2007
AppForge closes its doors on developers - FierceDeveloper, May 1, 2007
A tale of two dead companies - Linux Weekly News, May
|
https://en.wikipedia.org/wiki/WiFiDog%20Captive%20Portal
|
WiFiDog was an open-source embeddable captive portal solution used to build wireless hotspots. It is no longer an active project, having gone without updates for several years.
WiFiDog consists of two components: the gateway and the authentication server. It was written by the technical team of Île Sans Fil and is included in the software package repository of OpenWrt.
Gateway
The WiFiDog gateway is written in C with no dependencies beyond the Linux kernel. This structure enables it to be embedded in devices such as the WRT54G router running OpenWrt, FreeWRT or DD-WRT or most PCs running Linux. Linux Journal reports that a working gateway install can be packaged in less than 15kB on an i386 platform.
Authentication server
The WiFiDog authentication server is a PHP and PostgreSQL or MySQL server based solution written to authenticate clients in a captive portal environment. WiFiDog Auth provides portal specific content management, allows users to create wireless internet access accounts using email access, provides gateway uptime statistics and connection specific and user log statistics.
References
External links
Project home page
Chinese WifiDog Community Home Page
WiFi Foundation uses WiFi Dog for Community WiFi
Wi-Fi
|
https://en.wikipedia.org/wiki/Wyckoff%20positions
|
In crystallography, a Wyckoff position is any point in a set of points whose site symmetry groups (see below) are all conjugate subgroups one of another. Crystallography tables give the Wyckoff positions for different space groups.
History
The Wyckoff positions are named after Ralph Walter Graystone Wyckoff, an American X-ray crystallographer who authored several books in the field. His 1922 book, The Analytical Expression of the Results of the Theory of Space Groups, contained tables with the positional coordinates, both general and special, permitted by the symmetry elements. This book was the forerunner of International Tables for X-ray Crystallography, which first appeared in 1935.
Definition
For any point in a unit cell, given by fractional coordinates, one can apply a symmetry operation to the point. In some cases it will move to new coordinates, while in other cases the point will remain unaffected. For example, reflecting across a mirror plane will switch all the points left and right of the mirror plane, but points exactly on the mirror plane itself will not move. We can test every symmetry operation in the crystal's point group and keep track of whether the specified point is invariant under the operation or not. The (finite) list of all symmetry operations which leave the given point invariant taken together make up another group, which is known as the site symmetry group of that point. By definition, all points with the same site symmetry group, or a conjugate site symmetry group, are assigned the same Wyckoff position.
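The invariance test described above can be sketched in a few lines of Python. The operations here (the identity and a single mirror) are a toy set chosen for illustration; a real space group would supply its full list of (rotation, translation) pairs:

import numpy as np

def site_symmetry(point, operations, tol=1e-8):
    """Return the operations (rotation W, translation w) that leave the
    fractional coordinate `point` fixed modulo lattice translations."""
    stabilizer = []
    for W, w in operations:
        image = (np.asarray(W) @ point + w) % 1.0
        if np.allclose(image, np.asarray(point) % 1.0, atol=tol):
            stabilizer.append((W, w))
    return stabilizer

identity = (np.eye(3), np.zeros(3))
mirror_x = (np.diag([-1, 1, 1]), np.zeros(3))   # mirror plane at x = 0

ops = [identity, mirror_x]
print(len(site_symmetry([0.0, 0.3, 0.7], ops)))  # 2: point lies on the mirror
print(len(site_symmetry([0.2, 0.3, 0.7], ops)))  # 1: only the identity fixes it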
The Wyckoff positions are designated by a letter, often preceded by the number of positions that are equivalent to a given position with that letter, in other words the number of positions in the unit cell to which the given position is moved by applying all the elements of the space group. For instance, 2a designates the positions left where they are by a certain subgroup, and indicates that other symmetry elements move the point to
|
https://en.wikipedia.org/wiki/Victor%20Animatograph%20Corporation
|
The Victor Animatograph Corporation was a maker of projection equipment founded in 1910 in Davenport, Iowa by Swedish-born American inventor Alexander F. Victor.
The firm introduced its first 16 mm camera and movie projector on August 12, 1923, the same year Eastman Kodak introduced the Cine-Kodak and Kodascope. Victor advertised through his entire career thereafter that he had marketed the first 16mm equipment, but his claim was incorrect by several weeks, since the Cine-Kodak had been introduced in July, substantially earlier than Victor's August marketing date. Victor's first 16mm camera was a hand-cranked rectangular aluminum box designed for the additional film economy of cranking only 14 frames per second instead of the standard sixteen. A later version of this first Victor was driven by an electric motor. Neither camera sold in large numbers, but Victor followed in 1927 with a more successful camera modeled on the Bell & Howell Filmo. Victor offered many models of 16mm projectors, most with only minor variations, but prior to military contracts won during World War II, all were made and sold in very small numbers, from 20 units to usually no more than a couple of thousand units.
The company was a large producer of lantern slides using their "Featherweight" method: a one-piece glass positive with a durable emulsion framed by a cardboard mat.
See also
28 mm film
References
External links
Victor archive listing at the University of Iowa
Projectors
|
https://en.wikipedia.org/wiki/Vanishing%20cycle
|
In mathematics, vanishing cycles are studied in singularity theory and other parts of algebraic geometry. They are those homology cycles of a smooth fiber in a family which vanish in the singular fiber.
For example, in a map from a connected complex surface to the complex projective line, a generic fiber is a smooth Riemann surface of some fixed genus g and, generically, there will be isolated points in the target whose preimages are nodal curves. If one considers an isolated critical value and a small loop around it, in each fiber, one can find a smooth loop such that the singular fiber can be obtained by pinching that loop to a point. The loop in the smooth fibers gives an element of the first homology group of a surface, and the monodromy of the critical value is defined to be the monodromy of the first homology of the fibers as the loop is traversed, i.e. an invertible map of the first homology of a (real) surface of genus g.
A classical result is the Picard–Lefschetz formula, detailing how the monodromy round the singular fiber acts on the vanishing cycles, by a shear mapping.
The classical, geometric theory of Solomon Lefschetz was recast in purely algebraic terms, in SGA7. This was for the requirements of its application in the context of l-adic cohomology; and eventual application to the Weil conjectures. There the definition uses derived categories, and looks very different. It involves a functor, the nearby cycle functor, with a definition by means of the higher direct image and pullbacks. The vanishing cycle functor then sits in a distinguished triangle with the nearby cycle functor and a more elementary functor. This formulation has been of continuing influence, in particular in D-module theory.
See also
Thom–Sebastiani Theorem
References
Dimca, Alexandru; Singularities and Topology of Hypersurfaces.
Section 3 of Peters, C.A.M. and J.H.M. Steenbrink: Infinitesimal variations of Hodge structure and the generic Torelli problem for projective hyper
|
https://en.wikipedia.org/wiki/Harmonic%20mixing
|
Harmonic mixing or key mixing (also referred to as mixing in key) is a DJ's continuous mix between two pre-recorded tracks that are most often either in the same key, or their keys are relative or in a subdominant or dominant relationship with one another.
The primary goal of harmonic mixing is to create a smooth transition between songs. Songs in the same key do not generate a dissonant tone when mixed. This technique enables DJs to create a harmonious and consonant mashup with any music genre.
The Camelot wheel can be used for harmonic mixing. It is based on the circle of fifths.
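On the Camelot wheel, keys are labeled 1 through 12 with letter A (minor) or B (major); codes mix harmonically when they match, differ by one step around the wheel (the dominant/subdominant relationship), or swap letter at the same number (relative major/minor). A minimal Python sketch of this compatibility rule, added here for illustration:

def camelot_compatible(a, b):
    """True if two Camelot codes (e.g. '8A', '9B') mix harmonically:
    same key, a neighbouring number, or the relative major/minor."""
    num_a, let_a = int(a[:-1]), a[-1].upper()
    num_b, let_b = int(b[:-1]), b[-1].upper()
    if let_a == let_b:
        diff = (num_a - num_b) % 12
        return diff in (0, 1, 11)      # same slot or one step around the wheel
    return num_a == num_b              # relative major/minor swap

print(camelot_compatible("8A", "9A"))  # True  (one step around the wheel)
print(camelot_compatible("8A", "8B"))  # True  (relative major)
print(camelot_compatible("8A", "3B"))  # False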
Traditional methods
A commonly known method of using harmonic mixing is to detect the key signature of every music file in the DJ collection by using a piano. The key signature can be used to create harmonic mash-ups with other tracks in the same key. Also considered compatible with the key signature in question are its related subdominant and dominant keys, as well as its relative major (or minor, as the case may be) key.
See also
Beatmatching
Segue in music
References
DJing
Audio mixing
|
https://en.wikipedia.org/wiki/MUSHRA
|
MUSHRA stands for Multiple Stimuli with Hidden Reference and Anchor and is a methodology for conducting a codec listening test to evaluate the perceived quality of the output from lossy audio compression algorithms. It is defined by ITU-R recommendation BS.1534-3. The MUSHRA methodology is recommended for assessing "intermediate audio quality". For very small audio impairments, Recommendation ITU-R BS.1116-3 (ABC/HR) is recommended instead.
The main advantage over the mean opinion score (MOS) methodology (which serves a similar purpose) is that MUSHRA requires fewer participants to obtain statistically significant results. This is because all codecs are presented at the same time, on the same samples, so that a paired t-test or a repeated measures analysis of variance can be used for statistical analysis. Also, the 0–100 scale used by MUSHRA makes it possible to rate very small differences.
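As an illustration of that analysis, the sketch below runs a paired t-test on hypothetical 0–100 ratings that six listeners gave two codecs on the same test items (scipy is assumed to be available; the numbers are invented for the example):

from scipy.stats import ttest_rel

# Hypothetical 0-100 MUSHRA ratings from six listeners for two codecs,
# each rated on the same test items:
codec_a = [78, 82, 75, 90, 68, 84]
codec_b = [71, 76, 70, 85, 66, 79]

# Paired t-test: the same listener/item produced each pair of scores.
t, p = ttest_rel(codec_a, codec_b)
print(f"t = {t:.2f}, p = {p:.4f}")  # a small p suggests a real quality difference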
In MUSHRA, the listener is presented with the reference (labeled as such), a certain number of test samples, a hidden version of the reference and one or more anchors. The recommendation specifies that a low-range and a mid-range anchor should be included in the test signals. These are typically a 7 kHz and a 3.5 kHz low-pass version of the reference. The purpose of the anchors is to calibrate the scale so that minor artifacts are not unduly penalized. This is particularly important when comparing or pooling results from different labs.
Listener behavior
Both MUSHRA and ITU BS.1116 tests call for trained expert listeners who know what typical artifacts sound like and where they are likely to occur. Expert listeners also have a better internalization of the rating scale, which leads to more repeatable results than with untrained listeners. Thus, with trained listeners, fewer listeners are needed to achieve statistically significant results.
It is assumed that preferences are similar for expert listeners and naive listeners and thus results of expert listeners are also predic
|
https://en.wikipedia.org/wiki/Electric%20organ%20%28fish%29
|
In biology, the electric organ is an organ that an electric fish uses to create an electric field. Electric organs are derived from modified muscle or in some cases nerve tissue, and have evolved at least six times among the elasmobranchs and teleosts. These fish use their electric discharges for navigation, communication, mating, defence, and in strongly electric fish also for the incapacitation of prey.
The electric organs of two strongly electric fish, the torpedo ray and the electric eel were first studied in the 1770s by John Walsh, Hugh Williamson, and John Hunter. Charles Darwin used them as an instance of convergent evolution in his 1859 On the Origin of Species. Modern study began with Hans Lissmann's 1951 study of electroreception and electrogenesis in Gymnarchus niloticus.
Research history
Detailed descriptions of the powerful shocks that the electric catfish could give were written in ancient Egypt.
In the 1770s the electric organs of the torpedo ray and electric eel were the subject of Royal Society papers by John Walsh, Hugh Williamson, and John Hunter, who discovered what is now called Hunter's organ. These appear to have influenced the thinking of Luigi Galvani and Alessandro Volta – the founders of electrophysiology and electrochemistry.
In the 19th century, Charles Darwin discussed the electric organs of the electric eel and the torpedo ray in his 1859 book On the Origin of Species as a likely example of convergent evolution: "But if the electric organs had been inherited from one ancient progenitor thus provided, we might have expected that all electric fishes would have been specially related to each other…I am inclined to believe that in nearly the same way as two men have sometimes independently hit on the very same invention, so natural selection, working for the good of each being and taking advantage of analogous variations, has sometimes modified in very nearly the same manner two parts in two organic beings". In 1877, Carl Sachs stud
|
https://en.wikipedia.org/wiki/NOS%20%28Portuguese%20company%29
|
NOS, SGPS S.A. is a Portuguese telecommunications and media company who provides mobile and fixed telephony, cable television, satellite television and internet. The company resulted from the merger in 2013 of two of the country's major telecommunications companies: Zon Multimédia (formerly known as PT Multimédia, a spun-off media arm of Portugal Telecom) and Sonae's Optimus Telecommunications.
NOS owns premium movie channels TVCine and has a 25% stake in the Sport TV television network. It also operates 4 channels in joint-venture with AMC Networks International Southern Europe. NOS Audiovisuais (formerly ZON Lusomundo) is a home-video and cinema film distributor and operates Nos Cinema, the largest cinema chain of Portugal.
History
NOS was founded as TVCabo in 1994, and was the third cable operator to be founded in Portugal (the first was the regional Cabo TV Madeirense, which was founded in 1992, followed by Bragatel early on in 1994). The first customer was connected in November 1994. Initially the channel offer consisted of thirty channels and the number of Portuguese-speaking channels was initially limited to the terrestrial channels, with the number of Portuguese-speaking channels increasing as the years went on.
The company might be considered a Portuguese dot-com. In the PT Multimédia days, it brought Portugal Telecom assets such as SAPO (a successful web portal and search engine, sold to its parent company in 2005), Lusomundo (a successful movie distributor and movie theater operator included in the spun-off company and, formerly, the owner of the Diário de Notícias newspaper and the TSF radio station, which were sold to Controlinveste the same year as SAPO) and several TV channels such as SportTV, CNL (now SIC Notícias) and TVCine (MOV was only created after the spin-off).
On 17 January 2008, ZON announced it would acquire TVTEL, its main competitor in both cable and satellite broadcasting in Porto region. Thus, ZON was strengthening its position due to the appearance
|
https://en.wikipedia.org/wiki/Virtual%20Interface%20Adapter
|
A Virtual Interface Adapter ("VIA") is a network protocol, comparable in role to TCP/IP. As of July 2006, Microsoft SQL Server 2005 supports it. The specific implementation of VIA varies from vendor to vendor. In general, it presents a network-style interface, but is usually a very high-performance, dedicated connection between two systems. Part of that high performance comes from specialized, dedicated hardware that knows it has a dedicated connection and therefore does not have to deal with normal network addressing issues.
The VIA protocol is used to support VIA devices such as VIA Storage Area Network devices.
VIA also appears in clustering, i.e. as part of a load-balancing method: the load balancer holds VIA connections and uses them to reach the databases.
The VIA protocol is deprecated by Microsoft, and will be removed in a future version of Microsoft SQL Server. It is however supported in SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, and SQL Server 2014.
See also
System Area Network
References
Notes
Network protocols
|