source | text |
|---|---|
https://en.wikipedia.org/wiki/Clark%E2%80%93Wilson%20model | The Clark–Wilson integrity model provides a foundation for specifying and analyzing an integrity policy for a computing system.
The model is primarily concerned with formalizing the notion of information integrity. Information integrity is maintained by preventing corruption of data items in a system due to either error or malicious intent. An integrity policy describes how the data items in the system should be kept valid from one state of the system to the next and specifies the capabilities of various principals in the system. The model uses security labels to grant access to objects via transformation procedures and a restricted interface model.
Origin
The model was described in a 1987 paper (A Comparison of Commercial and Military Computer Security Policies) by David D. Clark and David R. Wilson. The paper develops the model as a way to formalize the notion of information integrity, especially as compared to the requirements for multilevel security (MLS) systems described in the Orange Book. Clark and Wilson argue that the existing integrity models such as Biba (read-up/write-down) were better suited to enforcing data integrity rather than information confidentiality. The Biba models are more clearly useful in, for example, banking classification systems to prevent the untrusted modification of information and the tainting of information at higher classification levels. In contrast, Clark–Wilson is more clearly applicable to business and industry processes in which the integrity of the information content is paramount at any level of classification (although the authors stress that all three models are obviously of use to both government and industry organizations).
Basic principles
According to Stewart and Chapple's CISSP Study Guide Sixth Edition, the Clark–Wilson model uses a multi-faceted approach in order to enforce data integrity. Instead of defining a formal state machine, the model defines each data item and allows modifications through only a sma |
https://en.wikipedia.org/wiki/Battery%20pack | A battery pack is a set of any number of (preferably) identical batteries or individual battery cells. They may be configured in series, parallel or a mixture of both to deliver the desired voltage, capacity, or power density. The term battery pack is often used in reference to cordless tools, radio-controlled hobby toys, and battery electric vehicles.
Components of battery packs include the individual batteries or cells, and the interconnects which provide electrical conductivity between them. Rechargeable battery packs often contain a temperature sensor, which the battery charger uses to detect the end of charging. Interconnects are also found in batteries as they are the part which connects each cell, though batteries are most often only arranged in series strings.
When a pack contains groups of cells in parallel there are differing wiring configurations which take into consideration the electrical balance of the circuit. Battery regulators are sometimes used to keep the voltage of each individual cell below its maximum value during charging so as to allow the weaker batteries to become fully charged, bringing the whole pack back into balance. Active balancing can also be performed by battery balancer devices which can shuttle energy from strong cells to weaker ones in real time for better balance. A well-balanced pack lasts longer and delivers better performance.
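The series/parallel arithmetic is straightforward; a back-of-the-envelope sketch with hypothetical cell figures:

```python
# A hedged sketch (not from the article): the series count multiplies cell
# voltage and the parallel count multiplies capacity. The 3.6 V / 2.5 Ah
# cell and the 10S4P layout below are illustrative assumptions.
def pack_specs(v_cell, ah_cell, series, parallel):
    """Return nominal pack voltage (V), capacity (Ah) and energy (Wh)."""
    voltage = v_cell * series
    capacity = ah_cell * parallel
    return voltage, capacity, voltage * capacity

print(pack_specs(3.6, 2.5, series=10, parallel=4))  # (36.0, 10.0, 360.0)
```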
For an inline package, cells are selected and stacked with solder in between them. The cells are pressed together and a current pulse generates heat to solder them together and to weld all connections internal to the cell.
Calculating state of charge
SOC, or state of charge, is the equivalent of a fuel gauge for a battery. SOC cannot be determined by a simple voltage measurement, because the terminal voltage of a battery may stay substantially constant until it is completely discharged. In some types of battery, electrolyte specific gravity may be related to state of charge but this is not m |
https://en.wikipedia.org/wiki/Animal%20science | Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals.
Education
Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered.
Bachelor degree
At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs.
Pre-veterinary emphasis
Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th |
https://en.wikipedia.org/wiki/Neurochip | A neurochip is an integrated circuit chip (such as a microprocessor) that is designed for interaction with neuronal cells.
Formation
It is made of silicon that is doped in such a way that it contains EOSFETs (electrolyte-oxide-semiconductor field-effect transistors) that can sense the electrical activity of the neurons (action potentials) in the above-standing physiological electrolyte solution. It also contains capacitors for the electrical stimulation of the neurons. Scientists at the University of Calgary's Faculty of Medicine, led by Pakistani-born Canadian scientist Naweed Syed, proved it is possible to cultivate a network of brain cells that reconnect on a silicon chip ("the brain on a microchip") and have developed new technology that monitors brain cell activity at a resolution never achieved before.
Developed with the National Research Council Canada (NRC), the new silicon chips are also simpler to use, which will help future understanding of how brain cells work under normal conditions and permit drug discoveries for a variety of neurodegenerative diseases, such as Alzheimer's and Parkinson's.
Naweed Syed's lab cultivated brain cells on a microchip.
The new technology from the lab of Naweed Syed, in collaboration with the NRC, was published online in August 2010 in the journal Biomedical Devices. It is the world's first neurochip, based on Syed's earlier experiments on neurochip technology dating back to 2003.
"This technical breakthrough means we can track subtle changes in brain activity at the level of ion channels and synaptic potentials, which are also the most suitable target sites for drug development in neurodegenerative diseases and neuropsychological disorders," says Syed, professor and head of the Department of Cell Biology and Anatomy, member of the Hotchkiss Brain Institute and advisor to the Vice President Research on Biomedical Engineering Initiative of the University of Chicago.
The new neurochips are also automated, meaning that an |
https://en.wikipedia.org/wiki/EOSFET | An EOSFET or electrolyte–oxide–semiconductor field-effect transistor is a FET, like a MOSFET, but with an electrolyte solution replacing the metal for the detection of neuronal activity. Many EOSFETs are integrated in a neurochip.
|
https://en.wikipedia.org/wiki/Radiochemistry | Radiochemistry is the chemistry of radioactive materials, where radioactive isotopes of elements are used to study the properties and chemical reactions of non-radioactive isotopes (within radiochemistry, the absence of radioactivity often leads to a substance being described as inactive, as its isotopes are stable). Much of radiochemistry deals with the use of radioactivity to study ordinary chemical reactions. This is very different from radiation chemistry, where the radiation levels are kept too low to influence the chemistry.
Radiochemistry includes the study of both natural and man-made radioisotopes.
Main decay modes
All radioisotopes are unstable isotopes of elements that undergo nuclear decay and emit some form of radiation. The radiation emitted can be of several types, including alpha, beta and gamma radiation and proton and neutron emission, along with neutrino and antiparticle emission decay pathways.
1. α (alpha) radiation—the emission of an alpha particle (which contains 2 protons and 2 neutrons) from an atomic nucleus. When this occurs, the atom's atomic mass will decrease by 4 units and the atomic number will decrease by 2.
2. β (beta) radiation—the transmutation of a neutron into a proton, an electron, and an antineutrino. After this happens, the electron is emitted from the nucleus into the electron cloud, and the atomic number increases by 1.
3. γ (gamma) radiation—the emission of electromagnetic energy (such as gamma rays) from the nucleus of an atom. This usually occurs during alpha or beta radioactive decay.
These three types of radiation can be distinguished by their difference in penetrating power.
Alpha radiation, which consists of helium nuclei, can be stopped quite easily by a few centimetres of air or a piece of paper. Beta radiation, which consists of electrons, can be cut off by an aluminium sheet just a few millimetres thick. Gamma is the most penetrating of the three and consists of massless, chargeless high-energy photons. Gamma radiation requires an appreciable amount of heavy metal radiation shielding (usually lead or |
https://en.wikipedia.org/wiki/Gibbs%27%20inequality | In information theory, Gibbs' inequality is a statement about the information entropy of a discrete probability distribution. Several other bounds on the entropy of probability distributions are derived from Gibbs' inequality, including Fano's inequality.
It was first presented by J. Willard Gibbs in the 19th century.
Gibbs' inequality
Suppose that

$$P = \{ p_1, \ldots, p_n \}$$

is a discrete probability distribution. Then for any other probability distribution

$$Q = \{ q_1, \ldots, q_n \}$$

the following inequality between positive quantities (since the $p_i$ and $q_i$ are between zero and one) holds:

$$-\sum_{i=1}^{n} p_i \log p_i \le -\sum_{i=1}^{n} p_i \log q_i,$$

with equality if and only if $p_i = q_i$ for all i. Put in words, the information entropy of a distribution P is less than or equal to its cross entropy with any other distribution Q.

The difference between the two quantities is the Kullback–Leibler divergence or relative entropy, so the inequality can also be written:

$$D_{\mathrm{KL}}(P \| Q) \equiv \sum_{i=1}^{n} p_i \log \frac{p_i}{q_i} \ge 0.$$
Note that the use of base-2 logarithms is optional, and allows one to refer to the quantity on each side of the inequality as an "average surprisal" measured in bits.
Proof
For simplicity, we prove the statement using the natural logarithm ($\ln$). Because

$$\log_b a = \frac{\ln a}{\ln b},$$

the particular logarithm base $b$ that we choose only scales the relationship by the factor $1 / \ln b$.

Let $I$ denote the set of all $i$ for which $p_i$ is non-zero. Then, since $\ln x \le x - 1$ for all x > 0, with equality if and only if $x = 1$, we have:

$$-\sum_{i \in I} p_i \ln \frac{q_i}{p_i} \ge -\sum_{i \in I} p_i \left( \frac{q_i}{p_i} - 1 \right) = -\sum_{i \in I} q_i + \sum_{i \in I} p_i = -\sum_{i \in I} q_i + 1 \ge 0.$$

The last inequality is a consequence of the $p_i$ and $q_i$ being part of a probability distribution. Specifically, the sum of all non-zero values $p_i$ is 1. Some non-zero $q_i$, however, may have been excluded since the choice of indices is conditioned upon the $p_i$ being non-zero. Therefore, the sum of the $q_i$ may be less than 1.
So far, over the index set $I$, we have:

$$-\sum_{i \in I} p_i \ln \frac{q_i}{p_i} \ge 0,$$

or equivalently

$$-\sum_{i \in I} p_i \ln q_i \ge -\sum_{i \in I} p_i \ln p_i.$$

Both sums can be extended to all $i = 1, \ldots, n$, i.e. including $p_i = 0$, by recalling that the expression $p \ln p$ tends to 0 as $p$ tends to 0, and $(-\ln q)$ tends to $\infty$ as $q$ tends to 0. We arrive at

$$-\sum_{i=1}^{n} p_i \ln q_i \ge -\sum_{i=1}^{n} p_i \ln p_i.$$
For equality to hold, we require:

1. $\frac{q_i}{p_i} = 1$ for all $i \in I$ so that the equality $\ln \frac{q_i}{p_i} = \frac{q_i}{p_i} - 1$ holds,
2. and $\sum_{i \in I} q_i = 1$, which means $q_i = 0$ if $i \notin I$, that is, $q_i = 0$ if $p_i = 0$.
This can happen i |
https://en.wikipedia.org/wiki/Abstract%20analytic%20number%20theory | Abstract analytic number theory is a branch of mathematics which takes the ideas and techniques of classical analytic number theory and applies them to a variety of different mathematical fields. The classical prime number theorem serves as a prototypical example, and the emphasis is on abstract asymptotic distribution results. The theory was invented and developed by mathematicians such as John Knopfmacher and Arne Beurling in the twentieth century.
Arithmetic semigroups
The fundamental notion involved is that of an arithmetic semigroup, which is a commutative monoid G satisfying the following properties:
There exists a countable subset (finite or countably infinite) P of G, such that every element a ≠ 1 in G has a unique factorisation of the form

$$a = p_1^{\alpha_1} p_2^{\alpha_2} \cdots p_r^{\alpha_r},$$

where the $p_i$ are distinct elements of P, the $\alpha_i$ are positive integers, r may depend on a, and two factorisations are considered the same if they differ only by the order of the factors indicated. The elements of P are called the primes of G.
There exists a real-valued norm mapping $|\cdot|$ on G such that
$|1| = 1$ and $|p| > 1$ for all $p \in P$;
$|ab| = |a|\,|b|$ for all $a, b \in G$;
the total number of elements of norm $|a| \le x$ is finite, for each real $x > 0$.
Additive number systems
An additive number system is an arithmetic semigroup in which the underlying monoid G is free abelian. The norm function may be written additively.
If the norm is integer-valued, we associate counting functions a(n) and p(n) with G, where p counts the number of elements of P of norm n, and a counts the number of elements of G of norm n. We let A(x) and P(x) be the corresponding formal power series. We have the fundamental identity

$$A(x) = \sum_{n \ge 0} a(n) x^n = \prod_{n \ge 1} (1 - x^n)^{-p(n)},$$

which formally encodes the unique expression of each element of G as a product of elements of P. The radius of convergence of G is the radius of convergence of the power series A(x).
The fundamental identity has the alternative form

$$A(x) = \exp \left( \sum_{m \ge 1} \frac{P(x^m)}{m} \right).$$
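The product form of the fundamental identity can be expanded term by term to recover a(n) from p(n). A minimal sketch (the choice p(n) = 1 for every n, which yields the integer partition numbers, is just an illustrative assumption):

```python
# Expand A(x) = prod_n (1 - x^n)^(-p(n)) as a power series, assuming an
# integer-valued norm and the counting functions defined in the text.
def expand_fundamental_identity(p, n_max):
    """Given p(n) (number of primes of norm n) for n = 1..n_max,
    return the coefficients a(0..n_max) of A(x)."""
    a = [0] * (n_max + 1)
    a[0] = 1  # the identity element, the empty product
    for n in range(1, n_max + 1):
        # multiply by (1 - x^n)^(-p(n)), one factor 1/(1 - x^n) at a time
        for _ in range(p(n)):
            for k in range(n, n_max + 1):
                a[k] += a[k - n]
    return a

# With exactly one prime of each norm n >= 1, a(n) counts integer
# partitions of n: 1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42
print(expand_fundamental_identity(lambda n: 1, 10))
```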
Examples
The prototypical example of an arithmetic semigroup is the multiplicative semigroup of positive integers G = Z+ = {1, 2, 3, ...}, with subset of rational |
https://en.wikipedia.org/wiki/Aluminium%20sulfate | Aluminium sulfate is a salt with the formula Al2(SO4)3. It is soluble in water and is mainly used as a coagulating agent (promoting particle collision by neutralizing charge) in the purification of drinking water and wastewater treatment plants, and also in paper manufacturing.
The anhydrous form occurs naturally as a rare mineral millosevichite, found for example in volcanic environments and on burning coal-mining waste dumps. Aluminium sulfate is rarely, if ever, encountered as the anhydrous salt. It forms a number of different hydrates, of which the hexadecahydrate Al2(SO4)3·16H2O and octadecahydrate Al2(SO4)3·18H2O are the most common. The heptadecahydrate, whose formula can be written as [Al(H2O)6]2(SO4)3·5H2O, occurs naturally as the mineral alunogen.
Aluminium sulfate is sometimes called alum or papermaker's alum in certain industries. However, the name "alum" is more commonly and properly used for any double sulfate salt with the generic formula XAl(SO4)2·12H2O, where X is a monovalent cation such as potassium or ammonium.
Production
In the laboratory
Aluminium sulfate may be made by adding aluminium hydroxide, Al(OH)3, to sulfuric acid, H2SO4:
2 Al(OH)3 + 3 H2SO4 → Al2(SO4)3 + 6 H2O
or by heating aluminium metal in a sulfuric acid solution:
2 Al + 3 H2SO4 → Al2(SO4)3 + 3 H2↑
From alum schists
The alum schists employed in the manufacture of aluminium sulfate are mixtures of iron pyrite, aluminium silicate and various bituminous substances, and are found in upper Bavaria, Bohemia, Belgium, and Scotland. These are either roasted or exposed to the weathering action of the air. In the roasting process, sulfuric acid is formed and acts on the clay to form aluminium sulfate, a similar condition of affairs being produced during weathering. The mass is now systematically extracted with water, and a solution of aluminium sulfate of specific gravity 1.16 is prepared. This solution is allowed to stand for some time (in order that any calcium sulfate and basic iron(III) sulfate |
https://en.wikipedia.org/wiki/Standard-bearer | A standard-bearer, also known as a colour-bearer or flag-bearer, is a person who bears an emblem known as a standard or military colours, i.e. either a type of flag or an inflexible but mobile image, which is used (and often honoured) as a formal, visual symbol of a state, prince, military unit, etc. This can either be an occasional duty, often seen as an honour (especially on parade), or a permanent charge (also on the battlefield); the second type has even led in certain cases to this task being reflected in official rank titles such as Ensign, Cornet and Fähnrich.
Role
In the context of the Olympic Games, a flagbearer is the athlete who carries the flag of their country during the opening and closing ceremonies.
While at present a purely ceremonial function, as far back as Roman warfare and medieval warfare the standard-bearer had an important role on the battlefield. The standard-bearer acted as an indicator of where the position of a military unit was, with the bright, colorful standard or flag acting as a strong visual beacon to surrounding soldiers. Soldiers were typically ordered to follow and stay close to the standard or flag in order to maintain unit cohesion, and for a single commander to easily position his troops by only positioning his standard-bearer, typically with the aid of musical cues or loud verbal commands. It was an honorable position carrying a considerable risk, as a standard-bearer would be a major target for the opposing side's troops seeking to capture the standard or pull it down.
In the Roman military the person carrying the standard was called Signifer. In addition to carrying the signum, the signifer also assumed responsibility for the financial administration of the unit and functioned as the legionaries' banker. The Signifer was also a Duplicarius, paid twice the basic wage.
In the city militias of the Dutch Republic, the standard-bearer was often the youngest single man, who was shown in group portraits wearing rich clothing i |
https://en.wikipedia.org/wiki/Paper%20bag%20problem | In geometry, the paper bag problem or teabag problem is to calculate the maximum possible inflated volume of an initially flat sealed rectangular bag which has the same shape as a cushion or pillow, made out of two pieces of material which can bend but not stretch.
According to Anthony C. Robin, an approximate formula for the capacity of a sealed expanded bag is:

$$V = w^3 \left( \frac{h}{\pi w} - 0.142 \left( 1 - 10^{-h/w} \right) \right),$$
where w is the width of the bag (the shorter dimension), h is the height (the longer dimension), and V is the maximum volume. The approximation ignores the crimping round the equator of the bag.
A very rough approximation to the capacity of a bag that is open at one edge is:

$$V = w^3 \left( \frac{h}{\pi w} - 0.071 \left( 1 - 10^{-2h/w} \right) \right)$$
(This latter formula assumes that the corners at the bottom of the bag are linked by a single edge, and that the base of the bag is not a more complex shape such as a lens).
The square teabag
For the special case where the bag is sealed on all edges and is square with unit sides, h = w = 1, the first formula estimates a volume of roughly

$$V = \frac{1}{\pi} - 0.142 \left( 1 - 10^{-1} \right) \approx 0.1905,$$
or roughly 0.19. According to Andrew Kepert at the University of Newcastle, Australia, an upper bound for this version of the teabag problem is 0.217+, and he has made a construction that appears to give a volume of 0.2055+.
Robin also found a more complicated formula for the general paper bag, which gives 0.2017, below the bounds given by Kepert (i.e., 0.2055+ ≤ maximum volume ≤ 0.217+).
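For reference, Robin's first approximation is easy to evaluate; a small sketch using the 0.142 constant from the formula quoted above:

```python
import math

# Robin's approximate volume for a sealed bag of width w (shorter side)
# and height h, as quoted above.
def robin_volume(w, h):
    return w**3 * (h / (math.pi * w) - 0.142 * (1 - 10 ** (-h / w)))

# The unit square teabag: about 0.19, somewhat below the construction
# (0.2055+) and the upper bound (0.217+) attributed to Kepert above.
print(robin_volume(1.0, 1.0))  # ~0.1905
```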
See also
Biscornu, a shape formed by attaching two squares in a different way, with the corner of one at the midpoint of the other
Mylar balloon (geometry)
Notes |
https://en.wikipedia.org/wiki/GeForce%207%20series | The GeForce 7 series is the seventh generation of Nvidia's GeForce graphics processing units. This was the last series available on AGP cards.
A slightly modified GeForce 7-based design (more specifically based on the 7800 GTX), the RSX "Reality Synthesizer", is present in the PlayStation 3.
Features
The following features are common to all models in the GeForce 7 series except the GeForce 7100, which lacks GCAA (gamma-corrected anti-aliasing):
Intellisample 4.0
Scalable Link Interface (SLI)
TurboCache
Nvidia PureVideo
The GeForce 7 supports hardware acceleration for H.264, but this feature was not used on Windows by Adobe Flash Player until the GeForce 8 Series.
GeForce 7100 series
The 7100 series was introduced on August 30, 2006 and is based on the GeForce 6200 series architecture. This series supports only the PCI Express interface. Only one model, the 7100 GS, is available.
Features
The 7100 series supports all of the standard features common to the GeForce 7 Series provided it is using the ForceWare 91.47 driver or later releases, though it lacks OpenCL/CUDA support, and its implementation of IntelliSample 4.0 lacks GCAA.
The 7100 series does not support technologies such as high-dynamic-range rendering (HDR) and UltraShadow II.
GeForce 7100 GS
Although the 7300 LE was originally intended to be the "lowest budget" GPU from the GeForce 7 lineup, the 7100 GS has taken its place. As it is little more than a revamped version of the GeForce 6200TC, it is designed as a basic PCI-e solution for OEMs to use if the chipset does not have integrated video capabilities. It uses a PCI Express graphics bus and carries up to 512 MB of DDR2 VRAM.
Performance specifications:
Graphics bus: PCI Express
Memory interface: 64-bit
Memory bandwidth: 5.3 GB/s
Fill rate: 1.4 billion pixels/s
Vertices/s: 263 million
Memory type: DDR2 with TurboCache (TC)
GeForce 7200 series
The 7200 series was introduced October 8, 2006 and is based on (G72) architecture. It is designed to offer a low- |
https://en.wikipedia.org/wiki/Solaris%20Volume%20Manager | Solaris Volume Manager (SVM; formerly known as Online: DiskSuite, and later Solstice DiskSuite) is a software package for creating, modifying and controlling RAID-0 (concatenation and stripe) volumes, RAID-1 (mirror) volumes, RAID 0+1 volumes, RAID 1+0 volumes, RAID-5 volumes, and soft partitions.
Version 1.0 of Online: DiskSuite was released as an add-on product for SunOS in late 1991; the product has undergone significant enhancements over the years. SVM has been included as a standard part of Solaris since Solaris 8 was released in February 2000.
SVM is similar in functionality to later software volume managers such as FreeBSD Vinum volume manager, allowing metadevices (virtual disks) to be concatenated, striped or mirrored together from physical ones. It also supports soft partitioning, dynamic hot spares, and growing metadevices. The mirrors support dirty region logging (DRL, called resync regions in DiskSuite) and logging support for RAID-5.
The ZFS file system, added in the Solaris 10 6/06 release, has its own integrated volume management capabilities, but SVM continues to be included with Solaris for use with other file systems.
See also
Logical volume management
Sun Microsystems |
https://en.wikipedia.org/wiki/Gago%20Coutinho | Carlos Viegas Gago Coutinho, GCTE, GCC (; 17 February 1869 – 18 February 1959), generally known simply as Gago Coutinho, was a Portuguese geographer, cartographer, naval officer, historian and aviator. An aviation pioneer, Gago Coutinho and Sacadura Cabral were the first to cross the South Atlantic Ocean by air, in a journey from March to June 1922, started in Lisbon, Portugal, and finished in Rio de Janeiro, Brazil, using a seaplane variant of the British reconnaissance biplane Fairey III.
In June 2022, the centenary of the first aerial crossing of the South Atlantic, it was announced that Faro Airport would officially change its name to Gago Coutinho Airport.
Early life
He was born in Belém, Lisbon, into a modest family, the son of José Viegas Gago Coutinho and his cousin, Fortunata Maria Coutinho. He finished high school in 1885, and entered the Polytechnic School, where he studied for one year, as preparation for his entrance at the Naval School, in Alfeite, Almada, in 1886.
Naval and geographical career
He joined the Navy in 1886 as an aspirant. In 1890, he was promoted to marine guard, in 1891 to second lieutenant, and in 1895 to first lieutenant. In 1907 he was promoted to captain-lieutenant and in 1915 to frigate captain. In 1920 he became captain of sea-and-war. In 1922 he was promoted to vice-admiral, and in 1958 to admiral. During his first years in the Navy, he made several voyages, the first aboard the corvette "Afonso de Albuquerque", from 7 December 1888 to 16 January 1891, during which he travelled to Mozambique as a member of the Naval Division of Eastern Africa. He made several more naval voyages in the following years, until 31 March 1898, when he undertook his first commission as an overseas geographer, in Portuguese Timor. From March 1898, Gago Coutinho's activities were carried out mostly at the Cartography Commission, created in 1883. From 27 July 1898 to 19 April 1899, he was involved in field work, working at the delimitation of the borders and at the survey of the geograp |
https://en.wikipedia.org/wiki/Bar%20product | In information theory, the bar product of two linear codes C2 ⊆ C1 is defined as

$$C_1 \mid C_2 = \{ (c_1 \mid c_1 + c_2) : c_1 \in C_1, c_2 \in C_2 \},$$

where (a | b) denotes the concatenation of a and b. If the code words in C1 are of length n, then the code words in C1 | C2 are of length 2n.
The bar product is an especially convenient way of expressing the Reed–Muller RM (d, r) code in terms of the Reed–Muller codes RM (d − 1, r) and RM (d − 1, r − 1).
The bar product is also referred to as the | u | u + v | construction or (u | u + v) construction.
Properties
Rank
The rank of the bar product is the sum of the two ranks:

$$\operatorname{rank}(C_1 \mid C_2) = \operatorname{rank}(C_1) + \operatorname{rank}(C_2).$$
Proof
Let $x_1, \ldots, x_k$ be a basis for $C_1$ and let $y_1, \ldots, y_l$ be a basis for $C_2$. Then the set

$$\{ (x_i \mid x_i) : 1 \le i \le k \} \cup \{ (0 \mid y_j) : 1 \le j \le l \}$$

is a basis for the bar product $C_1 \mid C_2$.
Hamming weight
The Hamming weight w of the bar product is the lesser of (a) twice the weight of C1, and (b) the weight of C2:

$$w(C_1 \mid C_2) = \min \{ 2 w(C_1), w(C_2) \}.$$
Proof
For all $c_1 \in C_1$,

$$(c_1 \mid c_1) \in C_1 \mid C_2,$$

which has weight $2 w(c_1)$. Equally

$$(0 \mid c_2) \in C_1 \mid C_2$$

for all $c_2 \in C_2$ and has weight $w(c_2)$. So minimising over non-zero codewords we have

$$w(C_1 \mid C_2) \le \min \{ 2 w(C_1), w(C_2) \}.$$

Now let $c_1 \in C_1$ and $c_2 \in C_2$, not both zero. If $c_2 \ne 0$ then:

$$w(c_1 \mid c_1 + c_2) \ge w(c_1 + (c_1 + c_2)) = w(c_2) \ge w(C_2).$$

If $c_2 = 0$ then

$$w(c_1 \mid c_1 + c_2) = w(c_1 \mid c_1) = 2 w(c_1) \ge 2 w(C_1),$$

so

$$w(c_1 \mid c_1 + c_2) \ge \min \{ 2 w(C_1), w(C_2) \}.$$
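A small executable sketch of the construction over F2, with hypothetical toy codes (C1 the full space of length 2, C2 the repetition code), checking both properties:

```python
from itertools import product

# A toy bar product for binary linear codes given as explicit sets of
# codeword tuples. The example codes below are illustrative assumptions.
def bar_product(C1, C2):
    """All words (c1 | c1 + c2), with addition taken mod 2."""
    return {c1 + tuple((a + b) % 2 for a, b in zip(c1, c2))
            for c1 in C1 for c2 in C2}

def min_weight(code):
    """Minimum Hamming weight over non-zero codewords."""
    return min(sum(c) for c in code if any(c))

C1 = set(product((0, 1), repeat=2))   # all of F_2^2, rank 2
C2 = {(0, 0), (1, 1)}                 # repetition code, rank 1, C2 ⊆ C1
C = bar_product(C1, C2)
print(len(C) == 2 ** (2 + 1))                                    # ranks add
print(min_weight(C) == min(2 * min_weight(C1), min_weight(C2)))  # weights
```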
See also
Reed–Muller code |
https://en.wikipedia.org/wiki/Hydrogen-terminated%20silicon%20surface | Hydrogen-terminated silicon surface is a chemically passivated silicon substrate where the surface Si atoms are bonded to hydrogen. The hydrogen-terminated surfaces are hydrophobic, luminescent, and amenable to chemical modification. Hydrogen-terminated silicon is an intermediate in the growth of bulk silicon from silane:
SiH4 → Si + 2H2
Preparation
Silicon wafers are treated with solutions of electronic-grade hydrofluoric acid in water, buffered water, or alcohol. One of the relevant reactions is simply removal of silicon oxides:
SiO2 + 4 HF → SiF4 + 2 H2O
The key reaction however is the formation of the hydrosilane functional group.
An atomic force microscope (AFM) has been used to manipulate hydrogen-terminated silicon surfaces.
Properties
Hydrogen termination removes dangling bonds. All surface Si atoms are tetrahedral. Hydrogen termination confers stability in ambient environments. So again, the surface is both clean (of oxides) and relatively inert. These materials can be handled in air without special care for several minutes.
The Si-H bond in fact is stronger than the Si-Si bonds. Two kinds of Si-H centers are proposed, both featuring terminal Si-H bonds. One kind of site has one Si-H bond. The other kind of site features SiH2 centers.
Like organic hydrosilanes, the H-Si groups on the surface react with terminal alkenes and diazo groups. The reaction is called hydrosilylation. Many kinds of organic compounds with various functions can be introduced onto the silicon surface by the hydrosilylation of a hydrogen-terminated surface. The infrared spectrum of hydrogen-terminated silicon shows a band near 2090 cm−1, not very different from νSi-H for organic hydrosilanes.
Potential applications
One group proposed to use the material to create digital circuits made of quantum dots by removing hydrogen atoms from the silicon surface.
See also
Silanization of silicon and mica |
https://en.wikipedia.org/wiki/Trigamma%20function | In mathematics, the trigamma function, denoted $\psi_1(z)$ or $\psi^{(1)}(z)$, is the second of the polygamma functions, and is defined by

$$\psi_1(z) = \frac{d^2}{dz^2} \ln \Gamma(z).$$

It follows from this definition that

$$\psi_1(z) = \frac{d}{dz} \psi(z),$$

where $\psi(z)$ is the digamma function. It may also be defined as the sum of the series

$$\psi_1(z) = \sum_{n=0}^{\infty} \frac{1}{(z + n)^2},$$

making it a special case of the Hurwitz zeta function:

$$\psi_1(z) = \zeta(2, z).$$

Note that the last two formulas are valid when $1 - z$ is not a natural number.
Calculation
A double integral representation, as an alternative to the ones given above, may be derived from the series representation:

$$\psi_1(z) = \int_0^1 \int_0^1 \frac{(xy)^{z-1}}{1 - xy} \, dx \, dy,$$

using the formula for the sum of a geometric series. Integration over $y$ yields:

$$\psi_1(z) = -\int_0^1 \frac{x^{z-1} \ln x}{1 - x} \, dx.$$
An asymptotic expansion as a Laurent series is

$$\psi_1(z) \sim \frac{1}{z} + \frac{1}{2 z^2} + \sum_{k=1}^{\infty} \frac{B_{2k}}{z^{2k+1}} = \sum_{k=0}^{\infty} \frac{B_k}{z^{k+1}},$$

if we have chosen $B_1 = \tfrac{1}{2}$, i.e. the Bernoulli numbers of the second kind.
Recurrence and reflection formulae
The trigamma function satisfies the recurrence relation

$$\psi_1(z + 1) = \psi_1(z) - \frac{1}{z^2}$$

and the reflection formula

$$\psi_1(1 - z) + \psi_1(z) = \frac{\pi^2}{\sin^2 \pi z},$$

which immediately gives the value for $z = \tfrac{1}{2}$: $\psi_1(\tfrac{1}{2}) = \frac{\pi^2}{2}$.
Special values
At positive half integer values we have that

$$\psi_1 \left( n + \tfrac{1}{2} \right) = \frac{\pi^2}{2} - 4 \sum_{k=1}^{n} \frac{1}{(2k - 1)^2}.$$

Moreover, the trigamma function has the following special values:

$$\psi_1 \left( \tfrac{1}{4} \right) = \pi^2 + 8G, \qquad \psi_1 \left( \tfrac{1}{2} \right) = \frac{\pi^2}{2}, \qquad \psi_1(1) = \frac{\pi^2}{6}, \qquad \psi_1 \left( \tfrac{3}{2} \right) = \frac{\pi^2}{2} - 4, \qquad \psi_1(2) = \frac{\pi^2}{6} - 1,$$

where $G$ represents Catalan's constant.
There are no roots on the real axis of $\psi_1$, but there exist infinitely many pairs of complex-conjugate roots for $\operatorname{Re} z < 0$. Each such pair of roots approaches $\operatorname{Re} z_n = -n + \tfrac{1}{2}$ quickly, and their imaginary part increases slowly and logarithmically with $n$.
Relation to the Clausen function
The digamma function at rational arguments can be expressed in terms of trigonometric functions and the logarithm by the digamma theorem. A similar result holds for the trigamma function, but the circular functions are replaced by Clausen's function.
Computation and approximation
An easy method to approximate the trigamma function is to take the derivative of the asymptotic expansion of the digamma function.
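A minimal numerical sketch in this spirit (summing the series directly and closing with the first asymptotic terms quoted above; the cutoff N = 20 is an arbitrary choice):

```python
import math

# Approximate psi_1(z): sum the first N terms of sum_{n>=0} 1/(z+n)^2,
# then approximate the remainder psi_1(z+N) with the leading terms
# 1/t + 1/(2t^2) + 1/(6t^3) of the asymptotic expansion.
def trigamma(z, N=20):
    partial = sum(1.0 / (z + n) ** 2 for n in range(N))
    t = z + N
    tail = 1.0 / t + 1.0 / (2 * t**2) + 1.0 / (6 * t**3)
    return partial + tail

print(trigamma(1.0), math.pi**2 / 6)   # psi_1(1)   = pi^2/6
print(trigamma(0.5), math.pi**2 / 2)   # psi_1(1/2) = pi^2/2
```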
Appearance
The trigamma function appears in this sum formula:
See also
Gamma function
Digamma function
Polygamma function
Catalan's constant
Notes |
https://en.wikipedia.org/wiki/ISFET | An ion-sensitive field-effect transistor (ISFET) is a field-effect transistor used for measuring ion concentrations in solution; when the ion concentration (such as H+, see pH scale) changes, the current through the transistor will change accordingly. Here, the solution is used as the gate electrode. A voltage between substrate and oxide surfaces arises due to an ion sheath. It is a special type of MOSFET (metal–oxide–semiconductor field-effect transistor), and shares the same basic structure, but with the metal gate replaced by an ion-sensitive membrane, electrolyte solution and reference electrode. Invented in 1970, the ISFET was the first biosensor FET (BioFET).
The surface hydrolysis of Si–OH groups of the gate materials varies in aqueous solutions due to pH value. Typical gate materials are SiO2, Si3N4, Al2O3 and Ta2O5.
The mechanism responsible for the oxide surface charge can be described by the site binding model, which describes the equilibrium between the Si–OH surface sites and the H+ ions in the solution. The hydroxyl groups coating an oxide surface such as that of SiO2 can donate or accept a proton and thus behave in an amphoteric way as illustrated by the following acid-base reactions occurring at the oxide-electrolyte interface:
—Si–OH + H2O ↔ —Si–O− + H3O+
—Si–OH + H3O+ ↔ —Si–OH2+ + H2O
An ISFET's source and drain are constructed as for a MOSFET. The gate electrode is separated from the channel by a barrier which is sensitive to hydrogen ions and a gap to allow the substance under test to come in contact with the sensitive barrier. An ISFET's threshold voltage depends on the pH of the substance in contact with its ion-sensitive barrier.
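As a numerical illustration of that pH dependence (a sketch based on the standard Nernst limit, not a formula from the article; the sensitivity factor alpha below is an assumed, device-dependent parameter):

```python
import math

# An ideal ISFET's threshold voltage shifts by at most (ln 10)*kT/q per pH
# unit (~59 mV at 25 C); real gate oxides reach a fraction alpha <= 1 of
# this limit, depending on the surface buffer capacity.
K_B = 1.380649e-23      # Boltzmann constant, J/K
Q_E = 1.602176634e-19   # elementary charge, C

def threshold_shift(d_ph, temp_k=298.15, alpha=1.0):
    """Threshold-voltage shift (in volts) for a pH change of d_ph."""
    return alpha * math.log(10) * K_B * temp_k / Q_E * d_ph

print(threshold_shift(1.0))  # ~0.0592 V per pH unit in the ideal case
```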
Practical limitations due to the reference electrode
An ISFET electrode sensitive to H+ concentration can be used as a conventional glass electrode to measure the pH of a solution. However, it also requires a reference electrode to operate. If the reference electrode used in contact with the soluti |
https://en.wikipedia.org/wiki/Concurrent%20ML | Concurrent ML (CML) is a concurrent extension of the Standard ML programming language characterized by its ability to allow programmers to create composable communication abstractions that are first-class rather than built into the language. The design of CML and its primitive operations have been adopted in several other programming languages such as GNU Guile, Racket, and Manticore.
Concepts
Many programming languages that support concurrency offer communication channels that allow the exchange of values between processes or threads running concurrently in a system. Communications established between processes may follow a specific protocol, requiring the programmer to write functions to establish the required pattern of communication. Meanwhile, a communicating system often requires establishing multiple channels, such as to multiple servers, and then choosing between the available channels when new data is available. This can be accomplished using polling, such as with the select operation on Unix systems.
Combining both application-specific protocols and multi-party communication may be complicated due to the need to introduce polling and checking for blocking within a pre-existing protocol. Concurrent ML solves this problem by reducing this coupling of programming concepts by introducing synchronizable events. Events are a first-class abstraction that can be used with a synchronization operation (called sync in CML and Racket) in order to potentially block and then produce some value resulting from communication (for example, data transmitted on a channel).
In CML, events can be combined or manipulated using a number of primitive operations. Each primitive operation constructs a new event rather than modifying the event in-place, allowing for the construction of compound events that represent the desired communication pattern. For example, CML allows the programmer to combine several sub-events in order to create a compound event that can then make a non-deter |
https://en.wikipedia.org/wiki/Geodesics%20in%20general%20relativity | In general relativity, a geodesic generalizes the notion of a "straight line" to curved spacetime. Importantly, the world line of a particle free from all external, non-gravitational forces is a particular type of geodesic. In other words, a freely moving or falling particle always moves along a geodesic.
In general relativity, gravity can be regarded as not a force but a consequence of a curved spacetime geometry where the source of curvature is the stress–energy tensor (representing matter, for instance). Thus, for example, the path of a planet orbiting a star is the projection of a geodesic of the curved four-dimensional (4-D) spacetime geometry around the star onto three-dimensional (3-D) space.
Mathematical expression
The full geodesic equation is

$$\frac{d^2 x^\mu}{ds^2} + \Gamma^\mu{}_{\alpha\beta} \frac{dx^\alpha}{ds} \frac{dx^\beta}{ds} = 0,$$

where s is a scalar parameter of motion (e.g. the proper time), and $\Gamma^\mu{}_{\alpha\beta}$ are Christoffel symbols (sometimes called the affine connection coefficients or Levi-Civita connection coefficients) symmetric in the two lower indices. Greek indices may take the values 0, 1, 2, 3 and the summation convention is used for repeated indices $\alpha$ and $\beta$. The quantity on the left-hand side of this equation is the acceleration of a particle, so this equation is analogous to Newton's laws of motion, which likewise provide formulae for the acceleration of a particle. The Christoffel symbols are functions of the four spacetime coordinates and so are independent of the velocity or acceleration or other characteristics of a test particle whose motion is described by the geodesic equation.
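A generic way to integrate the geodesic equation numerically is to treat it as a first-order system in (x, dx/ds) and apply a Runge–Kutta step; the sketch below does this for flat 2-D space in polar coordinates (a toy metric chosen only so the correct answer, a straight line, is known in advance):

```python
import numpy as np

# Integrate d2x/ds2 = -Gamma^mu_ab v^a v^b with classic 4th-order
# Runge-Kutta, for any user-supplied Christoffel function. The test case
# is flat 2-D space in polar coordinates, whose only nonzero symbols are
# Gamma^r_{phi phi} = -r and Gamma^phi_{r phi} = 1/r.
def christoffel_polar(x):
    r, phi = x
    G = np.zeros((2, 2, 2))
    G[0, 1, 1] = -r
    G[1, 0, 1] = G[1, 1, 0] = 1.0 / r
    return G

def rk4_step(x, v, ds, christoffel):
    def f(x, v):
        # dx/ds = v,  dv^mu/ds = -Gamma^mu_{ab} v^a v^b
        return v, -np.einsum('mab,a,b->m', christoffel(x), v, v)
    k1x, k1v = f(x, v)
    k2x, k2v = f(x + 0.5 * ds * k1x, v + 0.5 * ds * k1v)
    k3x, k3v = f(x + 0.5 * ds * k2x, v + 0.5 * ds * k2v)
    k4x, k4v = f(x + ds * k3x, v + ds * k3v)
    return (x + ds / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + ds / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

# Start at (r, phi) = (1, 0) moving tangentially: the geodesic is the
# Cartesian line x = 1, so r*cos(phi) should stay 1 along the whole path.
x, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
for _ in range(1000):
    x, v = rk4_step(x, v, 0.002, christoffel_polar)
print(x[0] * np.cos(x[1]))  # ~1.0
```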
Equivalent mathematical expression using coordinate time as parameter
So far the geodesic equation of motion has been written in terms of a scalar parameter s. It can alternatively be written in terms of the time coordinate, $t \equiv x^0$ (here we have used the triple bar to signify a definition). The geodesic equation of motion then becomes:

$$\frac{d^2 x^\mu}{dt^2} = -\Gamma^\mu{}_{\alpha\beta} \frac{dx^\alpha}{dt} \frac{dx^\beta}{dt} + \Gamma^0{}_{\alpha\beta} \frac{dx^\alpha}{dt} \frac{dx^\beta}{dt} \frac{dx^\mu}{dt}.$$
This formulation of the geodesic equation of motion can be useful for computer calculations and to compare General Relativity with |
https://en.wikipedia.org/wiki/Medical%20logistics | Medical logistics is the logistics of pharmaceuticals, medical and surgical supplies, medical devices and equipment, and other products needed to support doctors, nurses, and other health and dental care providers. Because its final customers are responsible for the lives and health of their patients, medical logistics is unique in that it seeks to optimize effectiveness rather than efficiency.
Medical logistics functions comprise an important part of the health care system: after staff costs, medical supplies are the single most expensive component of health care. To drive costs out of the health-care sector, medical logistics providers are adopting supply chain management theories.
The organizational structure is typically separated into three key areas:
Medical Materiel
Biomedical Engineering (BMET) or Clinical Engineering
Facilities Management.
These areas are managed by a qualified Director of Logistics, who typically holds an accredited graduate degree (MBA or M.S.).
Prioritize Compliance with Regulatory Standards
Understanding Regulatory Complexity: Pharmaceutical companies operate within a labyrinth of stringent regulations established by various authorities like the FDA, EMA, WHO, and national health agencies. These regulations are designed to ensure product safety, efficacy, and patient well-being. They cover a wide array of factors, including storage conditions, handling, documentation, and distribution.
Tailored Storage Solutions: Different pharmaceutical products have unique storage requirements. Vaccines, for instance, demand precise temperature control to maintain their potency. On the other hand, some medications can deteriorate due to exposure to light or humidity. Compliance entails understanding these distinct needs and designing storage areas accordingly.
Investment in Infrastructure: Creating compliant storage spaces involves financial commitment. This includes building temperature-contr |
https://en.wikipedia.org/wiki/Fabric%20Shortest%20Path%20First | Fabric Shortest Path First (FSPF) is a routing protocol used in Fibre Channel computer networks. It calculates the best path between network switches, establishes routes across the fabric and calculates alternate routes in the event of a failure or network topology change. FSPF can guarantee in-sequence delivery of frames, even if the routing topology has changed during a failure, by enforcing a 'hold down' time before a new path is activated.
FSPF was created by Brocade Communications Systems in collaboration with Gadzoox, McDATA, Ancor Communications (now QLogic), and Vixel; it was submitted as an American National Standards Institute standard. It was introduced in 2000. The protocol is similar in conception to the Open Shortest Path First used in IP networks. FSPF has been adopted as the industry standard for routing between Fibre Channel switches within a fabric.
A management information base for FSPF was published as RFC 4626. |
https://en.wikipedia.org/wiki/Lazy%20caterer%27s%20sequence | The lazy caterer's sequence, more formally known as the central polygonal numbers, describes the maximum number of pieces of a disk (a pancake or pizza is usually used to describe the situation) that can be made with a given number of straight cuts. For example, three cuts across a pancake will produce six pieces if the cuts all meet at a common point inside the circle, but up to seven if they do not. This problem can be formalized mathematically as one of counting the cells in an arrangement of lines; for generalizations to higher dimensions, see arrangement of hyperplanes.
The analogue of this sequence in three dimensions is the cake numbers.
Formula and sequence
The maximum number p of pieces that can be created with a given number of cuts n (where n ≥ 0) is given by the formula

$$p = \frac{n^2 + n + 2}{2}.$$

Using binomial coefficients, the formula can be expressed as

$$p = 1 + \binom{n + 1}{2} = \binom{n}{0} + \binom{n}{1} + \binom{n}{2}.$$

Simply put, each number equals a triangular number plus 1.
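A one-line check of the formula against the sequence listed below:

```python
# p(n) = (n^2 + n + 2) / 2, i.e. the n-th triangular number plus 1.
def lazy_caterer(n):
    return (n * n + n + 2) // 2

print([lazy_caterer(n) for n in range(11)])
# [1, 2, 4, 7, 11, 16, 22, 29, 37, 46, 56]
```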
As the third column of Bernoulli's triangle (k = 2) is a triangular number plus one, it forms the lazy caterer's sequence for n cuts, where n ≥ 2.
The sequence can be alternatively derived from the sum of up to the first 3 terms of each row of Pascal's triangle:
{| class="wikitable" style="text-align:right;"
! !! 0 !! 1 !! 2
! rowspan="11" style="padding:0;"| !! Sum
|-
! style="text-align:left;"|0
| 1 || - || - || 1
|-
! style="text-align:left;"|1
| 1 || 1 || - || 2
|-
! style="text-align:left;"|2
| 1 || 2 || 1 || 4
|-
! style="text-align:left;"|3
| 1 || 3 || 3 || 7
|-
! style="text-align:left;"|4
| 1 || 4 || 6 || 11
|-
! style="text-align:left;"|5
| 1 || 5 || 10 || 16
|-
! style="text-align:left;"|6
| 1 || 6 || 15 || 22
|-
! style="text-align:left;"|7
| 1 || 7 || 21 || 29
|-
! style="text-align:left;"|8
| 1 || 8 || 28 || 37
|-
! style="text-align:left;"|9
| 1 || 9 || 36 || 46
|}
This sequence (A000124 in the OEIS), starting with n = 0, thus results in
1, 2, 4, 7, 11, 16, 22, 29, 37, 46, 56, 67, 79, 92, 106, 121, 137, 154, 172, 191, 211, ...
Its three-dimensional analogue is known a |
https://en.wikipedia.org/wiki/Lens%20%28geometry%29 | In 2-dimensional geometry, a lens is a convex region bounded by two circular arcs joined to each other at their endpoints. In order for this shape to be convex, both arcs must bow outwards (convex-convex). This shape can be formed as the intersection of two circular disks. It can also be formed as the union of two circular segments (regions between the chord of a circle and the circle itself), joined along a common chord.
Types
If the two arcs of a lens have equal radius, it is called a symmetric lens, otherwise it is an asymmetric lens.
The vesica piscis is one form of a symmetric lens, formed by arcs of two circles whose centers each lie on the opposite arc. The arcs meet at angles of 120° at their endpoints.
Area
Symmetric
The area of a symmetric lens can be expressed in terms of the radius R and the central angle θ (in radians) subtended by each arc:

$$A = R^2 \left( \theta - \sin \theta \right).$$
Asymmetric
The area of an asymmetric lens formed from circles of radii R and r with distance d between their centers is

$$A = r^2 \cos^{-1} \left( \frac{d^2 + r^2 - R^2}{2 d r} \right) + R^2 \cos^{-1} \left( \frac{d^2 + R^2 - r^2}{2 d R} \right) - 2\Delta,$$

where

$$\Delta = \frac{1}{4} \sqrt{(d + r + R)(-d + r + R)(d - r + R)(d + r - R)}$$

is the area of a triangle with sides d, r, and R.
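The closed-form area above is easy to sanity-check against a Monte Carlo estimate of the overlap region; the radii and distance below are arbitrary test values:

```python
import math
import random

# Closed-form lens area, assuming the circles properly overlap
# (|R - r| < d < R + r).
def lens_area(R, r, d):
    a1 = r * r * math.acos((d*d + r*r - R*R) / (2 * d * r))
    a2 = R * R * math.acos((d*d + R*R - r*r) / (2 * d * R))
    tri = 0.25 * math.sqrt((d+r+R) * (-d+r+R) * (d-r+R) * (d+r-R))
    return a1 + a2 - 2 * tri

# Brute-force check: sample points in a box around the circle of radius R
# (centered at the origin; the other circle sits at (d, 0)).
def lens_area_mc(R, r, d, trials=200_000):
    hits = 0
    for _ in range(trials):
        x, y = random.uniform(-R, R), random.uniform(-R, R)
        if x*x + y*y <= R*R and (x - d)**2 + y*y <= r*r:
            hits += 1
    return hits / trials * (2 * R) ** 2

print(lens_area(1.0, 0.8, 1.2))     # ~0.552
print(lens_area_mc(1.0, 0.8, 1.2))  # statistically the same
```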
The two circles overlap if $d < r + R$. For sufficiently large $d$, namely $d^2 > |R^2 - r^2|$, the x-coordinate of the lens centre lies between the x-coordinates of the two circle centers. For small $d$, namely $d^2 < |R^2 - r^2|$, the x-coordinate of the lens centre lies outside the line segment that connects the circle centres.
By eliminating y from the circle equations $x^2 + y^2 = R^2$ and $(x - d)^2 + y^2 = r^2$, the abscissa of the intersecting rims is

$$x = \frac{d^2 + R^2 - r^2}{2d}.$$

The sign of x, i.e., whether it is larger or smaller than zero, distinguishes the two cases shown in the images.

The ordinate of the intersection is

$$y = \pm \frac{\sqrt{4 d^2 R^2 - (d^2 + R^2 - r^2)^2}}{2d}.$$
Negative values under the square root indicate that the rims of the two circles do not touch
because the circles are too far apart or one circle lies entirely within the other.
The value under the square root is a biquadratic polynomial of d. The four roots of this polynomial are associated with y=0 and with the four values of d where the two circles have only one point in common.
The angles in the blue triangle of sides d, r and R are

$$\sin \alpha = \frac{|y|}{r}, \qquad \sin \beta = \frac{|y|}{R},$$

with $\alpha$ at the centre of the circle of radius r and $\beta$ at the centre of the circle of radius R,
where y is the ordinate of the intersection. Th |
https://en.wikipedia.org/wiki/Website%20builder | Website builders are tools that typically allow the construction of websites without manual code editing. They fall into two categories:
Online proprietary tools provided by web hosting service companies. These are typically intended for service users to build their own website. Some services allow the site owner to use alternative tools (commercial or open-source) — the more complex of these may also be described as content management systems.
Application software that runs on a personal computing device, used to create and edit the pages of a web site and then publish these pages on any host. (These are often considered to be "website design software", rather than "website builders".)
History
The first website, manually written in HTML, was created on August 6, 1991.
Over time, software was created to help design web pages. For example, Microsoft released FrontPage in November 1995.
By 1998, Dreamweaver had been established as the industry leader; however, some have criticized the quality of the code produced by such software as being overblown and reliant on HTML tables. As the industry moved towards W3C standards, Dreamweaver and others were criticized for not being compliant. Compliance has improved over time, but many professionals still prefer to write optimized markup by hand.
Open source tools were typically developed to the standards and made fewer exceptions for the then-dominant Internet Explorer's deviations from the standards.
The W3C started Amaya in 1996 to showcase Web technologies in a fully featured Web client. This was to provide a framework that integrated many W3C technologies in a single, consistent environment. Amaya started as an HTML and CSS editor and now supports XML, XHTML, MathML, and SVG.
GeoCities was one of the first more modern site builders that didn't require any technical skills. Launched in 1994, it was purchased five years later by Yahoo! for $3.6 billion. After becoming obsolescent, it was shut down in April 2009.
|
https://en.wikipedia.org/wiki/Annus%20mirabilis%20papers | The annus mirabilis papers (from Latin annus mīrābilis, "miracle year") are the four papers that Albert Einstein published in Annalen der Physik (Annals of Physics), a scientific journal, in 1905. These four papers were major contributions to the foundation of modern physics. They revolutionized science's understanding of the fundamental concepts of space, time, mass, and energy. Because Einstein published these remarkable papers in a single year, 1905 is called his annus mirabilis (miracle year in English or Wunderjahr in German).
The first paper explained the photoelectric effect, which established the energy of the light quanta $E = hf$, and was the only specific discovery mentioned in the citation awarding Einstein the Nobel Prize in Physics.
The second paper explained Brownian motion, which established the Einstein relation and led reluctant physicists to accept the existence of atoms.
The third paper introduced Einstein's theory of special relativity, which used the universal constant speed of light to derive the Lorentz transformations.
The fourth, a consequence of the theory of special relativity, developed the principle of mass–energy equivalence, expressed in the famous equation $E = mc^2$, which led to the discovery and use of atomic energy decades later.
These four papers, together with quantum mechanics and Einstein's later theory of general relativity, are the foundation of modern physics.
Background
At the time the papers were written, Einstein did not have easy access to a complete set of scientific reference materials, although he did regularly read and contribute reviews to Annalen der Physik. Additionally, scientific colleagues available to discuss his theories were few. He worked as an examiner at the Patent Office in Bern, Switzerland, and he later said of a co-worker there, Michele Besso, that he "could not have found a better sounding board for my ideas in all of Europe". In addition, co-workers and the other members of the self-styled "Olympia |
https://en.wikipedia.org/wiki/Theoretical%20astronomy | Theoretical astronomy is the use of analytical and computational models based on principles from physics and chemistry to describe and explain astronomical objects and astronomical phenomena. Theorists in astronomy endeavor to create theoretical models and from the results predict observational consequences of those models. The observation of a phenomenon predicted by a model allows astronomers to select between several alternate or conflicting models as the one best able to describe the phenomena.
Ptolemy's Almagest, although a brilliant treatise on theoretical astronomy combined with a practical handbook for computation, nevertheless includes compromises to reconcile discordant observations with a geocentric model. Modern theoretical astronomy is usually assumed to have begun with the work of Johannes Kepler (1571–1630), particularly with Kepler's laws. The history of the descriptive and theoretical aspects of the Solar System mostly spans from the late sixteenth century to the end of the nineteenth century.
Theoretical astronomy is built on the work of observational astronomy, astrometry, astrochemistry, and astrophysics. Astronomy was early to adopt computational techniques to model stellar and galactic formation and celestial mechanics. From the point of view of theoretical astronomy, not only must the mathematical expression be reasonably accurate but it should preferably exist in a form which is amenable to further mathematical analysis when used in specific problems. Most of theoretical astronomy uses Newtonian theory of gravitation, considering that the effects of general relativity are weak for most celestial objects. Theoretical astronomy does not attempt to predict the position, size and temperature of every object in the universe, but by and large has concentrated upon analyzing the apparently complex but periodic motions of celestial objects.
Integrating astronomy and physics
"Contrary to the belief generally held by laboratory physicists, astrono |
https://en.wikipedia.org/wiki/Runway%20visual%20range | In aviation, the runway visual range (RVR) is the distance over which a pilot of an aircraft on the centreline of the runway can see the runway surface markings delineating the runway or identifying its centre line. RVR is normally expressed in meters or feet. RVR is used to determine the landing and takeoff conditions for aircraft pilots, as well as the type of operational visual aids used at the airport.
Measurement
Originally RVR was measured by a person, either by viewing the runway lights from the top of a vehicle parked on the runway threshold, or by viewing special angled runway lights from a tower at one side of the runway. The number of lights visible could then be converted to a distance to give the RVR. This is known as the human observer method and can still be used as a fall-back.
Today most airports use instrumented runway visual range (IRVR), which is measured either by scatterometers, integrated units that simplify installation because they can be placed singly at critical locations along the runway, or by transmissometers, which are installed at one side of a runway close to its edge. Normally three transmissometers are provided, one at each end of the runway and one at the midpoint. In the US, forward-scatter RVR systems are replacing transmissometers at most airports. According to the US Federal Aviation Administration: "There are approximately 279 RVR systems in the NAS, of which 242 are forward scatter NG RVR Systems and 34 are older Transmissometer Systems."
Data reliability
Because IRVR data are localized information, the values obtained are not necessarily a reliable guide to what a pilot can actually expect to see. This can easily be demonstrated when obscuration such as fog is variable: different values can apply simultaneously at the same physical point.
For example, a 2000 m runway could have reported touchdown, midpoint and rollout IRVR values of 700 m, 400 m and 900 m. If the actual RVR at the touchdown po |
https://en.wikipedia.org/wiki/ECMWF%20re-analysis | The ECMWF reanalysis project is a meteorological reanalysis project carried out by the European Centre for Medium-Range Weather Forecasts (ECMWF).
The first reanalysis product, ERA-15, generated reanalyses for approximately 15 years, from December 1978 to February 1994. The second product, ERA-40 (originally intended as a 40-year reanalysis) begins in 1957 (the International Geophysical Year) and covers 45 years to 2002. As a precursor to a revised extended reanalysis product to replace ERA-40, ECMWF released ERA-Interim, which covers the period from 1979 to 2019.
A new reanalysis product ERA5 has recently been released by ECMWF as part of Copernicus Climate Change Services. This product has higher spatial resolution (31 km) and covers the period from 1979 to present. Extension up to 1940 became available in 2023.
In addition to reanalysing all the old data using a consistent system, the reanalyses also make use of much archived data that was not available to the original analyses. This allows for the correction of many historical hand-drawn maps where the estimation of features was common in areas of data sparsity. It is also possible to create new maps of atmosphere levels that were not commonly used until more recent times.
Generation
Many sources of meteorological observations were used, including radiosondes, balloons, aircraft, buoys, satellites and scatterometers. This data was run through the ECMWF computer model at a 125 km resolution. As the ECMWF's computer model is one of the more highly regarded in the field of forecasting, many scientists take its reanalysis to have similar merit. The data is stored in GRIB format. The reanalysis was done in an effort to improve the accuracy of historical weather maps and aid in a more detailed analysis of various weather systems through a period that was severely lacking in computerized data. With the data from reanalyses such as this, many of the more modern computerized tools for analyzing storm systems |
https://en.wikipedia.org/wiki/Ensembl%20genome%20database%20project | Ensembl genome database project is a scientific project at the European Bioinformatics Institute, which provides a centralized resource for geneticists, molecular biologists and other researchers studying the genomes of our own species and other vertebrates and model organisms. Ensembl is one of several well known genome browsers for the retrieval of genomic information.
Similar databases and browsers are found at NCBI and the University of California, Santa Cruz (UCSC).
History
The human genome consists of three billion base pairs, which code for approximately 20,000–25,000 genes. However the genome alone is of little use, unless the locations and relationships of individual genes can be identified. One option is manual annotation, whereby a team of scientists tries to locate genes using experimental data from scientific journals and public databases. However this is a slow, painstaking task. The alternative, known as automated annotation, is to use the power of computers to do the complex pattern-matching of protein to DNA. The Ensembl project was launched in 1999 in response to the imminent completion of the Human Genome Project, with the initial goals of automatically annotating the human genome, integrating this annotation with available biological data and making all this knowledge publicly available.
In the Ensembl project, sequence data are fed into the gene annotation system (a collection of software "pipelines" written in Perl) which creates a set of predicted gene locations and saves them in a MySQL database for subsequent analysis and display. Ensembl makes these data freely accessible to the world research community. All the data and code produced by the Ensembl project is available to download, and there is also a publicly accessible database server allowing remote access. In addition, the Ensembl website provides computer-generated visual displays of much of the data.
Over time the project has expanded to include additional species (including key model |
https://en.wikipedia.org/wiki/Danger%20zone%20%28food%20safety%29 | The danger zone is the temperature range in which food-borne bacteria can grow. Food safety agencies, such as the United States' Food Safety and Inspection Service (FSIS), define the danger zone as roughly 40 to 140 °F (4 to 60 °C). The FSIS stipulates that potentially hazardous food should not be stored at temperatures in this range in order to prevent foodborne illness and that food that remains in this zone for more than two hours should not be consumed. Foodborne microorganisms grow much faster in the middle of the zone. In the UK and NI, the Danger Zone is defined as 8 to 63 °C.
Food-borne bacteria, in large enough numbers, may cause food poisoning, symptoms similar to gastroenteritis or "stomach flu" (a misnomer, as true influenza primarily affects the respiratory system). Some of the symptoms include stomach cramps, nausea, vomiting, diarrhea, and fever. Food-borne illness becomes more dangerous in certain populations, such as people with weakened immune systems, young children, the elderly, and pregnant women. In Canada, there are approximately 4 million cases of food-borne disease per year. These symptoms can begin as early as shortly after and as late as weeks after consumption of the contaminated food.
Time and temperature control safety (TCS) plays a critical role in food handling. To prevent time-temperature abuse, the amount of time food spends in the danger zone must be minimized. A logarithmic relationship exists between microbial cell death and temperature, that is, a small decrease of cooking temperature can result in considerable numbers of cells surviving the process. In addition to reducing the time spent in the danger zone, foods should be moved through the danger zone as few times as possible when reheating or cooling.
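To make the time-temperature rule concrete, here is a small illustrative Python sketch (not taken from any food-safety standard) that totals how long a cooling log spends inside the FSIS zone quoted above:

```python
# Accumulate time spent in the danger zone from a (time, temperature) log.
# The 4-60 degC bounds follow the FSIS figures above; swap in 8-63 degC
# for the UK rule. Each interval is approximated by its starting temperature.
DANGER_LOW_C, DANGER_HIGH_C = 4.0, 60.0

def minutes_in_danger_zone(log):
    """log: list of (minutes_since_start, temp_celsius), time-ordered."""
    total = 0.0
    for (t0, temp0), (t1, _) in zip(log, log[1:]):
        if DANGER_LOW_C < temp0 < DANGER_HIGH_C:
            total += t1 - t0   # interval spent at a danger-zone temperature
    return total

cooling_log = [(0, 70.0), (30, 55.0), (60, 35.0), (120, 20.0), (180, 3.0)]
print(minutes_in_danger_zone(cooling_log), "minutes in the danger zone")
```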
Foods that are potentially hazardous inside the danger zone:
Meat: beef, poultry, pork, seafood
Eggs and other protein-rich foods
Dairy products
Cut or peeled fresh produce
Cooked vegetables, beans, rice, pasta
Sauc |
https://en.wikipedia.org/wiki/Madelung%20constant | The Madelung constant is used in determining the electrostatic potential of a single ion in a crystal by approximating the ions by point charges. It is named after Erwin Madelung, a German physicist.
Because the anions and cations in an ionic solid attract each other by virtue of their opposing charges, separating the ions requires a certain amount of energy. This energy must be given to the system in order to break the anion–cation bonds. The energy required to break these bonds for one mole of an ionic solid under standard conditions is the lattice energy.
Formal expression
The Madelung constant allows for the calculation of the electric potential \(V_i\) of all ions of the lattice felt by the ion at position \(r_i\):
\(V_i = \frac{e}{4\pi\varepsilon_0} \sum_{j \ne i} \frac{z_j}{r_{ij}}\)
where \(r_{ij} = |r_i - r_j|\) is the distance between the \(i\)th and the \(j\)th ion. In addition,
\(z_j\) = number of charges of the \(j\)th ion
\(e\) = the elementary charge, 1.6022×10−19 C
\(\varepsilon_0\) = the permittivity of free space.
If the distances \(r_{ij}\) are normalized to the nearest neighbor distance \(r_0\), the potential may be written
\(V_i = \frac{e}{4\pi\varepsilon_0 r_0} M_i\)
with \(M_i = \sum_{j \ne i} \frac{z_j}{r_{ij}/r_0}\) being the (dimensionless) Madelung constant of the \(i\)th ion.
Another convention is to base the reference length on the cubic root \(w\) of the unit cell volume, which for cubic systems is equal to the lattice constant. Thus, the Madelung constant then reads \(\overline{M_i} = M_i\, \frac{r_0}{w}\).
The electrostatic energy of the ion at site \(r_i\) then is the product of its charge with the potential acting at its site:
\(E_{el,i} = z_i e\, V_i = \frac{e^2}{4\pi\varepsilon_0 r_0}\, z_i M_i\)
There occur as many Madelung constants \(M_i\) in a crystal structure as ions occupy different lattice sites. For example, for the ionic crystal NaCl, there arise two Madelung constants – one for Na and another for Cl. Since both ions, however, occupy lattice sites of the same symmetry they both are of the same magnitude and differ only by sign. The electrical charge of the Na+ and Cl− ion are assumed to be onefold positive and negative, respectively: \(z_{Na} = 1\) and \(z_{Cl} = -1\). The nearest neighbour distance amounts to half the lattice constant of the cubic unit cell, \(r_0 = a/2\), and the Madelung constants become
\(M_{Na} = -M_{Cl} = \sum_{j,k,l}{}' \frac{(-1)^{j+k+l}}{\sqrt{j^2 + k^2 + l^2}} = -1.748...\)
The prime indicates that the term \(j = k = l = 0\) is to be left out. Sin |
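The NaCl value can be checked numerically. The direct lattice sum converges only conditionally, so the sketch below uses Evjen's method (fractional weights for ions on the summation cube's surface), a standard trick that is not described in the text above; a minimal Python illustration:

```python
# Rough numerical check of the NaCl Madelung constant by direct lattice
# summation with Evjen-style boundary weights.
from itertools import product

def madelung_nacl(N):
    M = 0.0
    for j, k, l in product(range(-N, N + 1), repeat=3):
        if (j, k, l) == (0, 0, 0):
            continue                        # the j = k = l = 0 term is excluded
        sign = (-1) ** ((j + k + l) % 2)    # alternating ionic charges
        # Evjen weights: ions on the cube's faces/edges/corners count
        # 1/2, 1/4, 1/8, cancelling the net surface charge of the cube.
        w = 1.0
        for c in (j, k, l):
            if abs(c) == N:
                w *= 0.5
        M += w * sign / (j * j + k * k + l * l) ** 0.5
    return M

print(abs(madelung_nacl(20)))  # approaches 1.7476 as N grows
```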
https://en.wikipedia.org/wiki/Cassini%20oval | In geometry, a Cassini oval is a quartic plane curve defined as the locus of points in the plane such that the product of the distances to two fixed points (foci) is constant. This may be contrasted with an ellipse, for which the sum of the distances is constant, rather than the product. Cassini ovals are the special case of polynomial lemniscates when the polynomial used has degree 2.
Cassini ovals are named after the astronomer Giovanni Domenico Cassini who studied them in the late 17th century.
Cassini believed that the Sun traveled around the Earth on one of these ovals, with the Earth at one focus of the oval.
Other names include Cassinian ovals, Cassinian curves and ovals of Cassini.
Formal definition
A Cassini oval is a set of points, such that for any point \(P\) of the set, the product of the distances \(|PQ_1| \cdot |PQ_2|\) to two fixed points \(Q_1\), \(Q_2\) is a constant, usually written as \(b^2\) where \(b > 0\):
\(\{P : |PQ_1| \cdot |PQ_2| = b^2\}\)
As with an ellipse, the fixed points are called the foci of the Cassini oval.
Equations
If the foci are (a, 0) and (−a, 0), then the equation of the curve is
\(((x-a)^2 + y^2)\,((x+a)^2 + y^2) = b^4\)
When expanded this becomes
\((x^2 + y^2)^2 - 2a^2(x^2 - y^2) + a^4 = b^4\)
The equivalent polar equation is
\(r^4 - 2a^2 r^2 \cos 2\theta = b^4 - a^4\)
Shape
The curve depends, up to similarity, on e = b/a. When e < 1, the curve consists of two disconnected loops, each of which contains a focus. When e = 1, the curve is the lemniscate of Bernoulli having the shape of a sideways figure eight with a double point (specifically, a crunode) at the origin. When e > 1, the curve is a single, connected loop enclosing both foci. It is peanut-shaped for \(1 < e < \sqrt{2}\) and convex for \(e \ge \sqrt{2}\). The limiting case of a → 0 (hence e → ∞), in which case the foci coincide with each other, is a circle.
The curve always has x-intercepts at ± c where c2 = a2 + b2. When e < 1 there are two additional real x-intercepts and when e > 1 there are two real y-intercepts, all other x- and y-intercepts being imaginary.
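As a quick numerical illustration of the polar equation above, the following Python sketch solves it as a quadratic in \(r^2\) and checks that the largest radius on the curve equals the x-intercept \(\sqrt{a^2 + b^2}\):

```python
# Trace a Cassini oval from r^4 - 2 a^2 r^2 cos(2t) = b^4 - a^4,
# solved as r^2 = a^2 cos(2t) + sqrt(b^4 - a^4 sin^2(2t)) (outer branch).
import numpy as np

def cassini_radii(a, b, n=2000):
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    disc = b**4 - a**4 * np.sin(2 * theta) ** 2
    ok = disc >= 0              # when e = b/a < 1 some angles have no point
    r2 = a**2 * np.cos(2 * theta[ok]) + np.sqrt(disc[ok])
    return np.sqrt(r2[r2 >= 0])

a = 1.0
for e in (0.9, 1.0, 1.5):       # two loops, lemniscate, single loop
    r = cassini_radii(a, e * a)
    print(f"e={e}: max radius = {r.max():.4f}, "
          f"sqrt(a^2+b^2) = {np.hypot(a, e * a):.4f}")
```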
The curve has double points at the circular points at infinity, in other words the curve is bicircular. These points are biflecnodes, meaning that the curve |
https://en.wikipedia.org/wiki/Selamectin | Selamectin (trade names Selehold manufactured by KRKA, Selarid manufactured by Norbrook Laboratories Limited, Revolution and Stronghold manufactured by Zoetis, Revolt manufactured by Aurora Pharmaceuticals, Senergy manufactured by Virbac, among others) is a topical parasiticide and anthelminthic used on dogs and cats. It treats and prevents infections of heartworms, fleas, ear mites, sarcoptic mange (scabies), and certain types of ticks in dogs, and prevents heartworms, fleas, ear mites, hookworms, and roundworms in cats. It is structurally related to ivermectin and milbemycin. Selamectin is not approved for human use.
Usage
The drug is applied topically. It is isopropyl alcohol based, packaged according to its varying dosage sizes, and applied once monthly. It is not miscible with water.
Mode of action
Selamectin disables parasites by activating glutamate-gated chloride channels at muscle synapses. Selamectin activates the chloride channel without desensitization, allowing chloride ions to enter the nerve cells and causing neuromuscular paralysis, impaired muscular contraction, and eventual death.
The substance fights both internal and surface parasitic infection. Absorbed into the body through the skin and hair follicles, it travels through the bloodstream, intestines, and sebaceous glands; parasites ingest the drug when they feed on the animal's blood or secretions.
Side effects
Selamectin has been found to be safe and effective in a 2003 review.
Selamectin has high safety ratings, with less than 1% of pets displaying side effects. In cases where side-effects do occur, they most often include passing irritation or hair loss at the application site. Symptoms beyond these (such as drooling, rapid breathing, lack of coordination, vomiting, or diarrhea) could be due to shock as a result of selamectin killing heartworms or other vulnerable parasites present at high levels in the bloodstreams of dogs. This would be a reaction due to undetected or underestimated i |
https://en.wikipedia.org/wiki/Social%20networking%20service | A social networking service or SNS (sometimes called a social networking site) is a type of online social media platform which people use to build social networks or social relationships with other people who share similar personal or career content, interests, activities, backgrounds or real-life connections.
Social networking services vary in format and the number of features. They can incorporate a range of new information and communication tools, operating on desktops, laptops, and mobile devices such as tablet computers and smartphones. This may feature digital photo/video sharing and diary entries online (blogging). Online community services are sometimes considered social-network services by developers and users, though in a broader sense, a social-network service usually provides an individual-centered service whereas online community services are group-centered. Generally defined as "websites that facilitate the building of a network of contacts in order to exchange various types of content online," social networking sites provide a space for interaction to continue beyond in-person interactions. These computer-mediated interactions link members of various networks and may help to create, sustain and develop new social and professional relationships.
Social networking sites allow users to share ideas, digital photos and videos, posts, and to inform others about online or real-world activities and events with people within their social network. While in-person social networking – such as gathering in a village market to talk about events – has existed since the earliest development of towns, the web enables people to connect with others who live in different locations across the globe (dependent on access to an Internet connection to do so). Depending on the platform, members may be able to contact any other member. In other cases, members can contact anyone they have a connection to, and subsequently anyone that contact has a connection to, and so o |
https://en.wikipedia.org/wiki/A%20System%20of%20Logic | A System of Logic, Ratiocinative and Inductive is an 1843 book by English philosopher John Stuart Mill.
Overview
In this work, he formulated the five principles of inductive reasoning that are known as Mill's Methods. This work is important in the philosophy of science, and more generally, insofar as it outlines the empirical principles Mill would use to justify his moral and political philosophies.
An article in "Philosophy of Recent Times" has described this book as an "attempt to expound a psychological system of logic within empiricist principles."
This work was important to the history of science, being a strong influence on scientists such as Dirac. A System of Logic also made an impression on Gottlob Frege, who criticized many of Mill's ideas about the philosophy of mathematics in his work The Foundations of Arithmetic.
Mill revised the original work several times over the course of thirty years in response to critiques and commentary by Whewell, Bain, and others.
Editions
Mill, John Stuart, A System of Logic, University Press of the Pacific, Honolulu, 2002.
See also
Emergentism |
https://en.wikipedia.org/wiki/Shear%20rate | In physics, shear rate is the rate at which a progressive shearing deformation is applied to some material.
Simple shear
The shear rate for a fluid flowing between two parallel plates, one moving at a constant speed and the other one stationary (Couette flow), is defined by
\(\dot\gamma = \frac{v}{h}\)
where:
\(\dot\gamma\) is the shear rate, measured in reciprocal seconds;
\(v\) is the velocity of the moving plate, measured in meters per second;
\(h\) is the distance between the two parallel plates, measured in meters.
Or:
\(\dot\gamma = \frac{dv}{dy}\)
For the simple shear case, it is just a gradient of velocity in a flowing material. The SI unit of measurement for shear rate is s−1, expressed as "reciprocal seconds" or "inverse seconds". However, when modelling fluids in 3D, it is common to consider a scalar value for the shear rate by calculating the second invariant of the strain-rate tensor \(\mathbf{E}\):
\(\dot\gamma = \sqrt{2\,\mathbf{E}:\mathbf{E}}\).
The shear rate at the inner wall of a Newtonian fluid flowing within a pipe is
\(\dot\gamma = \frac{8v}{d}\)
where:
\(\dot\gamma\) is the shear rate, measured in reciprocal seconds;
\(v\) is the linear fluid velocity;
\(d\) is the inside diameter of the pipe.
The linear fluid velocity \(v\) is related to the volumetric flow rate \(Q\) by
\(v = \frac{Q}{A}\)
where \(A\) is the cross-sectional area of the pipe, which for an inside pipe radius of \(r\) is given by
\(A = \pi r^2\)
thus producing
\(v = \frac{Q}{\pi r^2}\)
Substituting the above into the earlier equation for the shear rate of a Newtonian fluid flowing within a pipe, and noting (in the denominator) that \(d = 2r\):
\(\dot\gamma = \frac{8v}{d} = \frac{8\left(\frac{Q}{\pi r^2}\right)}{2r}\)
which simplifies to the following equivalent form for wall shear rate in terms of volumetric flow rate \(Q\) and inner pipe radius \(r\):
\(\dot\gamma = \frac{4Q}{\pi r^3}\)
For a Newtonian fluid, wall shear stress (\(\tau_w\)) can be related to shear rate by \(\tau_w = \dot\gamma\,\mu\), where \(\mu\) is the dynamic viscosity of the fluid. For non-Newtonian fluids, there are different constitutive laws depending on the fluid, which relate the stress tensor to the shear rate tensor. |
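A short worked example of the pipe formulas above, with illustrative values (not from the article):

```python
# Wall shear rate from volumetric flow rate, and Newtonian wall stress.
import math

Q = 2.0e-4    # volumetric flow rate, m^3/s (illustrative)
r = 0.01      # inner pipe radius, m (illustrative)
mu = 1.0e-3   # dynamic viscosity of water at ~20 degC, Pa*s

v = Q / (math.pi * r**2)                   # mean linear velocity, v = Q/A
gamma_dot = 4.0 * Q / (math.pi * r**3)     # wall shear rate, 4Q/(pi r^3)
assert math.isclose(gamma_dot, 8.0 * v / (2.0 * r))  # same as 8v/d
tau_w = mu * gamma_dot                     # Newtonian wall shear stress

print(f"shear rate = {gamma_dot:.1f} 1/s, wall stress = {tau_w:.3f} Pa")
```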
https://en.wikipedia.org/wiki/Simple%20shear | Simple shear is a deformation in which parallel planes in a material remain parallel and maintain a constant distance, while translating relative to each other.
In fluid mechanics
In fluid mechanics, simple shear is a special case of deformation where only one component of velocity vectors has a non-zero value:
\(V_x = f(x, y), \quad V_y = V_z = 0\)
And the gradient of velocity is constant and perpendicular to the velocity itself:
\(\frac{\partial V_x}{\partial y} = \dot\gamma\),
where \(\dot\gamma\) is the shear rate and:
\(\frac{\partial V_x}{\partial x} = \frac{\partial V_x}{\partial z} = 0\)
The displacement gradient tensor Γ for this deformation has only one nonzero term:
\(\Gamma = \begin{bmatrix} 0 & \dot\gamma & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}\)
Simple shear with the rate \(\dot\gamma\) is the combination of pure shear strain with the rate of \(\tfrac{1}{2}\dot\gamma\) and rotation with the rate of \(\tfrac{1}{2}\dot\gamma\):
\(\Gamma = \underbrace{\begin{bmatrix} 0 & \tfrac{1}{2}\dot\gamma & 0 \\ \tfrac{1}{2}\dot\gamma & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}}_{\text{pure shear}} + \underbrace{\begin{bmatrix} 0 & \tfrac{1}{2}\dot\gamma & 0 \\ -\tfrac{1}{2}\dot\gamma & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}}_{\text{rotation}}\)
The mathematical model representing simple shear is a shear mapping restricted to the physical limits. It is an elementary linear transformation represented by a matrix. The model may represent laminar flow velocity at varying depths of a long channel with constant cross-section. Limited shear deformation is also used in vibration control, for instance base isolation of buildings for limiting earthquake damage.
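The pure-shear-plus-rotation split above can be verified numerically: the symmetric part of the velocity gradient carries the strain rate \(\dot\gamma/2\) and the antisymmetric part the rotation rate \(\dot\gamma/2\). A minimal numpy sketch:

```python
# Decompose the simple-shear velocity gradient into symmetric (pure shear)
# and antisymmetric (rigid rotation) parts.
import numpy as np

gamma_dot = 2.0
L = np.array([[0.0, gamma_dot, 0.0],   # only dVx/dy is nonzero
              [0.0, 0.0,       0.0],
              [0.0, 0.0,       0.0]])

D = 0.5 * (L + L.T)   # symmetric part: pure shear at rate gamma_dot/2
W = 0.5 * (L - L.T)   # antisymmetric part: rotation at rate gamma_dot/2

print(D)  # off-diagonal entries are gamma_dot/2 = 1.0
print(W)
```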
In solid mechanics
In solid mechanics, a simple shear deformation is defined as an isochoric plane deformation in which there are a set of line elements with a given reference orientation that do not change length and orientation during the deformation. This deformation is differentiated from a pure shear by virtue of the presence of a rigid rotation of the material. When rubber deforms under simple shear, its stress-strain behavior is approximately linear. A rod under torsion is a practical example for a body under simple shear.
If \(\mathbf{e}_1\) is the fixed reference orientation in which line elements do not deform during the deformation and \(\mathbf{e}_1\)–\(\mathbf{e}_2\) is the plane of deformation, then the deformation gradient in simple shear can be expressed as
\(\boldsymbol{F} = \boldsymbol{1} + \gamma\,\mathbf{e}_1 \otimes \mathbf{e}_2\)
We can also write the deformation gradient as
\(\boldsymbol{F} = \begin{bmatrix} 1 & \gamma & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}\)
Simple shear stress–strain relation
In linear elasticity, shear stress, denoted \(\tau\), is related to shear strain, denoted \(\gamma\), by the following equation:
\(\tau = \gamma G\)
where \(G\) is the shear modulus of the material. |
https://en.wikipedia.org/wiki/VCR/DVD%20combo | A VCR/DVD combination, VCR/DVD combo, or DVD/VCR combo, is a multiplex or converged device that allows the ability to watch both VHS tapes and DVDs. Many such players can also play additional formats such as CD and VCD.
VCR/DVD player combinations were first introduced around the year 1999, with the first model released by Go Video, model DVR5000, manufactured by Samsung Electronics. VCR/DVD combinations were sometimes criticized as being of poorer quality in terms of resolution than stand-alone units. These products also had a disadvantage in that if one function (DVD or VHS) became unusable, the entire unit needed to be replaced or repaired, though later models which suffered from DVD playback lag still functioned with the VCR.
A combo unit normally has typical features such as recording from a DVD onto VHS (on most models), recording a show to VHS with a digital-to-analog converter device (unless the unit has a digital TV tuner), LP recording for VHS, Dolby Digital and DTS surround sound (DVD), component connections for DVD (although some may lack the connection), 480p progressive scan for the DVD side, VCR+, playback of tapes at a variety of speeds, and front A/V inputs (VCR only).
To help the consumer, such units have one or more buttons for switching the output source. Usually, the recording capabilities are VCR exclusive, while the better picture quality is DVD exclusive, but some models include S-Video, Component, or HDMI output for VCR as well. These devices were among the only VCRs, alongside some VCR/Blu-ray combos, to be equipped with an HDMI port for HDTV viewing, upscaling to several different types of resolutions including 1080i.
Shortly after the turn of the century, combo devices including DVD recorders (instead of players) also became available. These could be used for transferring VHS material onto recordable blank DVDs. In rare cases, such devices had component inputs to record with the best connection possible.
In Jul |
https://en.wikipedia.org/wiki/ED50 | ED50 ("European Datum 1950", EPSG:4230) is a geodetic datum which was defined after World War II for the international connection of geodetic networks.
Background
Some of the important battles of World War II were fought on the borders of Germany, the Netherlands, Belgium and France, and the mapping of these countries had incompatible latitude and longitude positioning. During the war the German Military Survey (Reichsamt Kriegskarten und Vermessungswesen), under the command of Lieutenant General Gerlach Hemmerich, began a systematic mapping of the areas under the control of the German Military, a large part of Europe. The allies were also concerned about the state of mapping in Europe, and in 1944 the US Army Map Service set up an intelligence team to collect mapping and surveying information from the Germans as the allied armies moved through Europe after the Normandy landings. The group, known as Houghteam after Major Floyd W. Hough, collected much material. Their greatest success was in April 1945. They found a large cache of material in Saalfeld, Thuringia, which proved to be the entire geodetic archives of the German Army. The shipment, 75 truckloads in all, was transferred to Bamberg, and then to Washington for evaluation.
Shortly after this, the team captured the personnel of the Reichsamt für Landesaufnahme, the State Surveying Service, in Friedrichroda, also in Thuringia. This group had been working on the integration of the mapping of the occupied territories with that of Germany, under Professor Erwin Gigas, a geodesist with an international reputation. They were directed to continue this work, in Bamberg in the US zone of occupation, as part of the US-led effort to develop a single adjusted triangulation for Central Europe. This was completed in 1947. The work was then extended to cover much of Western Europe which was completed in 1950, and became ED50.
The European triangulation was originally classified military information. It was de-classified |
https://en.wikipedia.org/wiki/Time-Triggered%20Protocol | The Time-Triggered Protocol (TTP) is an open computer network protocol for control systems.
It was designed as a time-triggered fieldbus for vehicles and industrial applications, and was standardized in 2011 as SAE AS6003 (TTP Communication Protocol). TTP controllers have accumulated over 500 million flight hours in commercial DAL A aviation applications, in power generation, and in environmental and flight controls. TTP is used in FADEC and modular aerospace controls, and flight computers. In addition, TTP devices have accumulated over 1 billion operational hours in SIL4 railway signalling applications.
History
TTP was originally designed at the Vienna University of Technology in the early 1980s. In 1998 TTTech Computertechnik AG took over the development of TTP, providing software and hardware products. TTP communication controller chips and IP are available from sources including austriamicrosystems, ON Semiconductor and ALTERA.
Definition
TTP is a dual-channel 4–25 Mbit/s time-triggered field bus. It can operate using one or both channels with a maximum data rate of 2 × 25 Mbit/s. With replicated data on both channels, redundant communication is supported.
As a fault-tolerant time-triggered protocol, TTP provides autonomous fault-tolerant message transport at known times and with minimal jitter by employing a TDMA (Time-Division Multiple Access) strategy on replicated communication channels. TTP offers fault-tolerant clock synchronization that establishes the global time base without relying on a central time server.
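As a toy illustration of the TDMA idea (this is not the TTP specification; the slot length and node set below are invented), slot ownership can be derived purely from the shared global time base, so every fault-free node knows who may transmit and when:

```python
# Static TDMA round: the transmit slot follows directly from global time.
NODES = ["A", "B", "C", "D"]
SLOT_US = 250                 # assumed slot length in microseconds

def slot_owner(global_time_us):
    slot = global_time_us // SLOT_US   # slot index since time zero
    return NODES[slot % len(NODES)]    # rounds repeat cyclically

for t in (0, 300, 700, 1100):
    print(f"t={t}us -> node {slot_owner(t)} may transmit")
```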
TTP provides a membership service to inform every correct node about the consistency of data transmission. This mechanism can be viewed as a distributed acknowledgment service that informs the application promptly if an error in the communication system has occurred. If state consistency is lost, the application is notified immediately.
Additionally, TTP includes the service of clique avoidance to detect faults outside the fault hypothesis |
https://en.wikipedia.org/wiki/List%20of%20Microsoft%20operating%20systems | This is a list of operating systems written and published by Microsoft. For the codenames that Microsoft gave their operating systems, see Microsoft codenames. For another list of versions of Microsoft Windows, see List of Microsoft Windows versions.
MS-DOS
See MS-DOS Versions for a full list.
Windows
Windows 1.0 until 8.1
Windows 10/11 and Windows Server 2016/2019/2022
Windows Mobile
Windows Mobile 2003
Windows Mobile 2003 SE
Windows Mobile 5
Windows Mobile 6
Windows Phone
Xbox gaming
Xbox system software
Xbox 360 system software
Xbox One and Xbox Series X/S system software
OS/2
Unix and Unix-like
Xenix
Nokia X platform
Microsoft Linux distributions
Azure Sphere
SONiC
Windows Subsystem for Linux
CBL-Mariner
Other operating systems
MS-Net
LAN Manager
MIDAS
Singularity
Midori
Zune
KIN OS
Nokia Asha platform
Barrelfish
Time line
See also
List of Microsoft topics
List of operating systems
External links
Concise Microsoft O.S. Timeline, by Bravo Technology Center
Operating systems |
https://en.wikipedia.org/wiki/List%20of%20Microsoft%20software | Microsoft is a developer of personal computer software. It is best known for its Windows operating system, the Internet Explorer and subsequent Microsoft Edge web browsers, the Microsoft Office family of productivity software plus services, and the Visual Studio IDE. The company also publishes books (through Microsoft Press) and video games (through Xbox Game Studios), and produces its own line of hardware. The following is a list of notable Microsoft software applications.
Software development
Azure DevOps
Azure DevOps Server (formerly Team Foundation Server and Visual Studio Team System)
Azure DevOps Services (formerly Visual Studio Team Services, Visual Studio Online and Team Foundation Service)
BASICA
Bosque
CLR Profiler
GitHub
Atom
GitHub Desktop
GitHub Copilot
npm
Spectrum
Dependabot
GW-BASIC
IronRuby
IronPython
JScript
Microsoft Liquid Motion
Microsoft BASIC, also licensed as:
Altair BASIC
AmigaBASIC
Applesoft BASIC
Commodore BASIC
Color BASIC
MBASIC
Spectravideo Extended BASIC
TRS-80 Level II BASIC
Microsoft MACRO-80
Microsoft Macro Assembler
Microsoft Small Basic
Microsoft Visual SourceSafe
Microsoft XNA
Microsoft WebMatrix
MSX BASIC
NuGet
QBasic and QuickBASIC
TASC (The AppleSoft Compiler)
TypeScript
VBScript
Visual Studio
Microsoft Visual Studio Express
Visual Basic
Visual Basic .NET
Visual Basic for Applications
Visual C++
C++/CLI
Managed Extensions for C++
Visual C#
Visual FoxPro
Visual J++
Visual J#
Visual Studio Code
Visual Studio Lab Management
Visual Studio Tools for Applications
Visual Studio Tools for Office
VSTS Profiler
Windows API
Windows SDK
WordBASIC
Xbox Development Kit
3D
3D Builder
3D Scan (requires a Kinect for Xbox One sensor)
3D Viewer
AltspaceVR
Bing Maps for Enterprise (formerly "Bing Maps Platform" and "Microsoft Virtual Earth")
Direct3D
Havok
HoloStudio
Kinect for Windows SDK
Microsoft Mesh
Paint 3D
Simplygon
Educational
Bing
Bing Bar
Browstat
Creative Writ |
https://en.wikipedia.org/wiki/Ecopsychology | Ecopsychology is an interdisciplinary and transdisciplinary field that focuses on the synthesis of ecology and psychology and the promotion of sustainability. It is distinguished from conventional psychology as it focuses on studying the emotional bond between humans and the Earth. Instead of examining personal pain solely in the context of individual or family pathology, it is analyzed in its wider connection to the more-than-human world. A central premise is that while the mind is shaped by the modern world, its underlying structure was created in a natural non-human environment. Ecopsychology seeks to expand and remedy the emotional connection between humans and nature, treating people psychologically by bringing them spiritually closer to nature.
History
Origins of ecopsychology
Sigmund Freud
In his 1929 book Civilization and Its Discontents ("Das Unbehagen in der Kultur"), Sigmund Freud discussed the basic tensions between civilization and the individual. He recognized the interconnection between the internal world of the mind and the external world of the environment, stating:
Robert Greenway
Influenced by the philosophies of noted ecologists Walles T. Edmondson and Loren Eiseley, Robert Greenway began researching and developing a concept that he described as "a marriage" between psychology and ecology in the early 1960s. He theorized that "the mind is nature, and nature, the mind," and called its study psychoecology. Greenway published his first essay on the topic at Brandeis University in 1963.
In 1969, he began teaching the subject at Sonoma State University. One of Greenway's students founded a psychoecology study group at University of California, Berkeley, which was joined by Theodore Roszak in the 1990s.
In the 1995 book Ecopsychology: Restoring the Earth, Healing the Mind, Greenway wrote:
Theodore Roszak
Theodore Roszak is credited with coining the term "ecopsychology" in his 1992 book The Voice of the Earth, although a group of psychol |
https://en.wikipedia.org/wiki/Geodetic%20control%20network | A geodetic control network (also geodetic network, reference network, control point network, or control network) is a network, often of triangles, which are measured precisely by techniques of control surveying, such as terrestrial surveying or satellite geodesy.
A geodetic control network consists of stable, identifiable points with published datum values derived from observations that tie the points together.
Classically, a control is divided into horizontal (X-Y) and vertical (Z) controls (components of the control); however, with the advent of satellite navigation systems, GPS in particular, this division is becoming obsolete.
Many organizations contribute information to the geodetic control network.
The higher-order (high precision, usually millimeter-to-decimeter on a scale of continents) control points are normally defined in both space and time using global or space techniques, and are used for "lower-order" points to be tied into. The lower-order control points are normally used for engineering, construction and navigation. The scientific discipline that deals with the establishing of coordinates of points in a control network is called geodesy.
Cartography applications
After a cartographer registers key points in a digital map to the real world coordinates of those points on the ground, the map is then said to be "in control". Having a base map and other data in geodetic control means that they will overlay correctly.
When map layers are not in control, it requires extra work to adjust them to line up, which introduces additional error.
Those real world coordinates are generally in some particular map projection, unit, and geodetic datum.
Measurement techniques
Terrestrial techniques
Triangulation
In "classical geodesy" (up to the sixties) control networks were established by triangulation using measurements of angles and of some spare distances. The precise orientation to the geographic north is achieved through methods of geodetic astronomy. |
https://en.wikipedia.org/wiki/Geodetic%20astronomy | Geodetic astronomy or astronomical geodesy (astro-geodesy) is the application of astronomical methods into geodetic networks and other technical projects of geodesy.
Applications
The most important applications are:
Establishment of geodetic datum systems (e.g. ED50), including at expeditions
apparent places of stars, and their proper motions
precise astronomical navigation
astro-geodetic geoid determination
modelling the rock densities of the topography and of geological layers in the subsurface
Monitoring of the Earth rotation and polar wandering
Contribution to the time system of physics and geosciences
Measuring techniques
Important measuring techniques are:
Latitude determination and longitude determination, by theodolites, tacheometers, astrolabes or zenith cameras
time and star positions by observation of star transits, e.g. by meridian circles (visual, photographic or CCD)
Azimuth determination
for the exact orientation of geodetic networks
for mutual transformations between terrestrial and space methods
for improved accuracy by means of "Laplace points" at special fixed points
Vertical deflection determination and their use
in geoid determination
in mathematical reduction of very precise networks
for geophysical and geological purposes (see above)
Modern spatial methods
VLBI with radio sources (quasars)
Astrometry of stars by scanning satellites like Hipparcos or the future Gaia.
The accuracy of these methods depends on the instrument and its spectral wavelength, the measuring or scanning method, the time spent (versus economy), the atmospheric situation, the stability of the surface or of the satellite, on mechanical and temperature effects on the instrument, on the experience and skill of the observer, and on the accuracy of the physical-mathematical models.
Therefore, the accuracy ranges from 60" (navigation, ~1 mile) to 0.001" and better (a few cm; satellites, VLBI), e.g.:
angles (vertical deflections and azimuths) ±1" up to 0.1"
ge |
https://en.wikipedia.org/wiki/Circuit-level%20gateway | A circuit-level gateway is a type of firewall.
Circuit-level gateways work at the session layer of the OSI model, or as a "shim-layer" between the application layer and the transport layer of the TCP/IP stack. They monitor TCP handshaking between packets to determine whether a requested session is legitimate. Information passed to a remote computer through a circuit-level gateway appears to have originated from the gateway. Traffic is filtered based on specified session rules and may be restricted to recognized computers only. Circuit-level firewalls conceal the details of the protected network from external traffic, which is helpful for denying access to impostors. Circuit-level gateways are relatively inexpensive and have the advantage of hiding information about the private network they protect. However, they do not filter individual packets.
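The behaviour described above can be sketched as a tiny TCP relay: the gateway completes the client's handshake itself and opens its own connection outward, so the destination only ever sees the gateway's address. A minimal Python illustration (the destination is a placeholder, and a real gateway would enforce session rules where noted):

```python
# Minimal circuit-level relay sketch; not a production firewall.
import socket, threading

DEST = ("example.org", 80)   # assumed destination for illustration

def pipe(src, dst):
    # Shuttle bytes one way until the source closes.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.close()

def gateway(listen_port=8080):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", listen_port))
    srv.listen(5)
    while True:
        client, addr = srv.accept()        # handshake with the client
        # A real gateway would vet addr against its session rules here.
        upstream = socket.create_connection(DEST)  # gateway's own socket
        threading.Thread(target=pipe, args=(client, upstream)).start()
        threading.Thread(target=pipe, args=(upstream, client)).start()

# gateway()  # run to relay localhost:8080 -> example.org:80
```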
See also
Application firewall
Application-level gateway firewall
Bastion host
Dual-homed
External links
http://netsecurity.about.com/cs/generalsecurity/g/def_circgw.htm
http://www.softheap.com/internet/circuit-level-gateway.html
http://www.pcstats.com/articleview.cfm?articleid=1450&page=5
Internet architecture
Network socket
Transmission Control Protocol |
https://en.wikipedia.org/wiki/Flattening | Flattening is a measure of the compression of a circle or sphere along a diameter to form an ellipse or an ellipsoid of revolution (spheroid) respectively. Other terms used are ellipticity, or oblateness. The usual notation for flattening is \(f\) and its definition in terms of the semi-axes \(a\) and \(b\) of the resulting ellipse or ellipsoid is
\(f = \frac{a - b}{a}\)
The compression factor is \(\frac{b}{a}\) in each case; for the ellipse, this is also its aspect ratio.
Definitions
There are three variants: the flattening \(f\), sometimes called the first flattening, as well as two other "flattenings" \(f'\) and \(n\), each sometimes called the second flattening, sometimes only given a symbol, or sometimes called the second flattening and third flattening, respectively.
In the following, \(a\) is the larger dimension (e.g. semimajor axis), whereas \(b\) is the smaller (semiminor axis). All flattenings are zero for a circle (\(a = b\)).
(First) flattening: \(f = \frac{a - b}{a}\). Fundamental. Geodetic reference ellipsoids are specified by giving \(\tfrac{1}{f}\).
Second flattening: \(f' = \frac{a - b}{b}\). Rarely used.
Third flattening: \(n = \frac{a - b}{a + b}\). Used in geodetic calculations as a small expansion parameter.
Identities
The flattenings can be related to each other:
\(f = \frac{2n}{1 + n}, \qquad f' = \frac{2n}{1 - n}, \qquad n = \frac{f}{2 - f}\)
The flattenings are related to other parameters of the ellipse. For example,
\(b = a(1 - f), \qquad e^2 = 2f - f^2\)
where \(e\) is the eccentricity.
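For example, evaluating the three flattenings and the eccentricity for the WGS84 reference ellipsoid (defined by a = 6378137 m and 1/f = 298.257223563):

```python
# Flattenings and eccentricity of the WGS84 ellipsoid from the relations above.
import math

a = 6378137.0
f = 1.0 / 298.257223563          # first flattening
b = a * (1.0 - f)                # semiminor axis

f_second = (a - b) / b           # second flattening
n = (a - b) / (a + b)            # third flattening
e = math.sqrt(2.0 * f - f * f)   # eccentricity, e^2 = 2f - f^2

assert math.isclose(f, 2.0 * n / (1.0 + n))   # identity check
print(f"b = {b:.3f} m, f' = {f_second:.9f}, n = {n:.9f}, e = {e:.9f}")
```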
See also
Earth flattening
Equatorial bulge
Ovality
Planetary flattening
Sphericity
Roundness (object) |
https://en.wikipedia.org/wiki/Zenith%20camera | A zenith camera is an astrogeodetic telescope used today primarily for the local surveys of Earth's gravity field. Zenith cameras are designed as transportable field instruments for the direct observation of the plumb line (astronomical latitude and longitude) and vertical deflections.
Instrument
A zenith camera combines an optical lens (about 10–20 cm aperture) with a digital image sensor (CCD) in order to image stars near the zenith. Electronic levels (tilt sensors) serve as a means to point the lens towards zenith.
Zenith cameras are generally mounted on a turnable platform to allow star images to be taken in two camera directions (two-face measurement). Because zenith cameras are usually designed as non-tracking and non-scanning instruments, exposure times are kept short, on the order of a few 0.1 s, yielding rather circular star images. Exposure epochs are mostly recorded by means of the timing capability of GPS receivers (time-tagging).
Data processing
Depending on the CCD sensor - lens combination used, a few tens to hundreds of stars are captured with a single digital zenith image. The positions of imaged stars are measured by means of digital image processing algorithms, such as image moment analysis or fitting point spread functions to the star images. Star catalogues, such as Tycho-2 or UCAC-3, are used as the celestial reference to reduce the star images. The zenith point is interpolated into the field of imaged stars, and corrected for the exposure time and (small) tilt of the telescope axis to yield the direction of the plumb line.
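In its simplest form, the image-moment step reduces to an intensity-weighted centroid (first moments over the zeroth). A minimal numpy sketch on a synthetic star image, for illustration only; operational pipelines fit point spread functions and handle background estimation:

```python
# Centroid of a star image via image moments.
import numpy as np

def star_centroid(patch):
    """patch: 2-D array of background-subtracted pixel intensities."""
    total = patch.sum()
    ys, xs = np.indices(patch.shape)
    return (xs * patch).sum() / total, (ys * patch).sum() / total

# Tiny synthetic star: a Gaussian blob centred near (6.3, 4.7).
ys, xs = np.indices((12, 12))
patch = np.exp(-((xs - 6.3) ** 2 + (ys - 4.7) ** 2) / 2.0)
print(star_centroid(patch))   # approximately (6.3, 4.7)
```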
Accuracy and applications
If the geodetic coordinates of the zenith camera are known or measured, e.g., with a GPS receiver, vertical deflections are obtained as a result of the zenith camera measurement. Zenith cameras deliver astronomical coordinates and vertical deflections accurate to about 0.1 seconds of arc. Zenith cameras with CCD image sensors are efficient enough to collect vertical deflections at about 10 field stations per night. |
https://en.wikipedia.org/wiki/Apparent%20place | The apparent place of an object is its position in space as seen by an observer. Because of physical and geometrical effects it may differ from the "true" or "geometric" position.
Astronomy
In astronomy, a distinction is made between the mean position, apparent position and topocentric position of an object.
Position of a star
The mean position of a star (relative to the observer's adopted coordinate system) can be calculated from its value at an arbitrary epoch, together with its actual motion over time (known as proper motion). The apparent position is its position as seen by a theoretical observer at the centre of the moving Earth. Several effects cause the apparent position to differ from the mean position:
Annual aberration – a deflection caused by the velocity of the Earth's motion around the Sun, relative to an inertial frame of reference. This is independent of the distance of the star from the Earth.
Annual parallax – the apparent change in position due to the star being viewed from different places as the Earth orbits the Sun in the course of a year. Unlike aberration, this effect depends on the distance of the star, being larger for nearby stars.
Precession – a long-term (ca. 26,000 years) variation in the direction of the Earth's axis of rotation.
Nutation – shorter-term variations in the direction of the Earth's axis of rotation.
The Apparent Places of Fundamental Stars is an astronomical yearbook, which is published one year in advance by the Astronomical Calculation Institute (Heidelberg University) in Heidelberg, Germany. It lists the apparent place of about 1000 fundamental stars for every 10 days and is published as a book and in a more extensive version on the Internet.
Solar System objects
The apparent position of a planet or other object in the Solar System is also affected by light-time correction, which is caused by the finite time it takes light from a moving body to reach the observer. Simply put, the observer sees the object |
https://en.wikipedia.org/wiki/Pkg-config | pkg-config is a computer program that defines and supports a unified interface for querying installed libraries for the purpose of compiling software that depends on them. It allows programmers and installation scripts to work without explicit knowledge of detailed library path information. pkg-config was originally designed for Linux, but it is now also available for BSD, Microsoft Windows, macOS, and Solaris.
It outputs various information about installed libraries. This information may include:
Parameters (flags) for C or C++ compiler
Parameters (flags) for linker
Version of the package in question
The first implementation was written in shell. Later, it was rewritten in C using the GLib library.
Synopsis
When a library is installed (automatically through the use of an RPM, deb, or other binary packaging system or by compiling from the source), a .pc file should be included and placed into a directory with other .pc files (the exact directory is dependent upon the system and outlined in the pkg-config man page). This file has several entries.
These entries typically contain a list of dependent libraries that programs using the package also need to compile. Entries also typically include the location of header files, version information and a description.
Here is an example .pc file for libpng:
prefix=/usr/local
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${exec_prefix}/include
Name: libpng
Description: Loads and saves PNG files
Version: 1.2.8
Libs: -L${libdir} -lpng12 -lz
Cflags: -I${includedir}/libpng12
This file declares that the libpng libraries can be found in /usr/local/lib and its headers in /usr/local/include, that the library name is libpng, and that the version is 1.2.8. It also gives the additional linker flags that are needed to compile code that uses this library.
Here is an example of usage of pkg-config while compiling:
$ gcc -o test test.c $(pkg-config --libs --cflags libpng)
pkg-config can be used by bu |
https://en.wikipedia.org/wiki/Radio%20masts%20and%20towers | Radio masts and towers are typically tall structures designed to support antennas for telecommunications and broadcasting, including television. There are two main types: guyed and self-supporting structures. They are among the tallest human-made structures. Masts are often named after the broadcasting organizations that originally built them or currently use them.
In the case of a mast radiator or radiating tower, the whole mast or tower is itself the transmitting antenna.
Terminology
The terms "mast" and "tower" are often used interchangeably. However, in structural engineering terms, a tower is a self-supporting or cantilevered structure, while a mast is held up by stays or guys. Broadcast engineers in the UK use the same terminology. A mast is a ground-based or rooftop structure that supports antennas at a height where they can satisfactorily send or receive radio waves. Typical masts are of steel lattice or tubular steel construction. Masts themselves play no part in the transmission of mobile telecommunications.
Masts (to use the civil engineering terminology) tend to be cheaper to build but require an extended area surrounding them to accommodate the guy wires. Towers are more commonly used in cities where land is in short supply.
(NB: the terminology used in the United States is opposite that used in Europe - in the US, structures called masts are typically relatively small, un-guyed structures, while larger structures, guyed or un-guyed, are referred to as towers. The US Federal Communications Commission, for example, uses "tower" to describe such structures as radio and television transmission towers, whether guyed or unguyed.)
There are a few borderline designs that are partly free-standing and partly guyed, called additionally guyed towers. For example:
The Gerbrandy tower consists of a self-supporting tower with a guyed mast on top.
The few remaining Blaw-Knox towers do the opposite: they have a guyed lower section surmounted by a freestanding par |
https://en.wikipedia.org/wiki/The%20Future%20Is%20Wild | The Future Is Wild (also referred to by the acronym FIW) is a 2002 speculative evolution docufiction miniseries and an accompanying multimedia entertainment franchise. The Future Is Wild explores the ecosystems and wildlife of three future time periods: 5, 100, and 200 million years in the future, in the format of a nature documentary. Though the settings and animals are fictional, the series has an educational purpose, serving as an informative and entertaining way to explore concepts such as evolution and climate change.
The Future Is Wild was first conceived by independent producer Joanna Adams in 1996 and developed together with various scientists, including Dougal Dixon, best known as the author of the 1981 book After Man, which also explored future wildlife. The 2002 series was an international co-production, involving the British BBC, the Franco-German channel Arte, the German ZDF, the Austrian ORF, the Italian MFE - MediaForEurope (via their Mediaset division), and the American Animal Planet and Discovery Channel. Wildly successful, The Future Is Wild continues to be broadcast to this day and has been shown on TV in more than 60 countries.
The success of The Future Is Wild spawned a large multimedia franchise, including books, children's entertainment, exhibitions, theme park rides, educational material, and toys. There have also been cancelled projects, such as a potential movie adaptation, as well as a sequel series, The Future Is Wild 2. From 2016 onwards, there has been talk of "relaunching" the franchise through various projects, such as an action-adventure TV series and The Future is Wild VR (a virtual reality videogame), though no new media has yet materialized.
Premise
The Future Is Wild explores twelve different future ecosystems across three future time periods: 5 million years in the future, 100 million years in the future, and 200 million years in the future. Four ecosystems from each period are explored and described.
Ice World: 5 million y |
https://en.wikipedia.org/wiki/Mast%20radiator | A mast radiator (or radiating tower) is a radio mast or tower in which the metal structure itself is energized and functions as an antenna. This design, first used widely in the 1930s, is commonly used for transmitting antennas operating at low frequencies, in the LF and MF bands, in particular those used for AM radio broadcasting stations. The conductive steel mast is electrically connected to the transmitter. Its base is usually mounted on a nonconductive support to insulate it from the ground. A mast radiator is a form of monopole antenna.
Structural design
Most mast radiators are built as guyed masts. Steel lattice masts of triangular cross-section are the most common type. Square lattice masts and tubular masts are also sometimes used. To ensure that the tower is a continuous conductor, the tower's structural sections are electrically bonded at the joints by short copper jumpers which are soldered to each side or "fusion" (arc) welds across the mating flanges.
Base-fed masts, the most common type, must be insulated from the ground. At its base, the mast is usually mounted on a thick ceramic insulator, which has the compressive strength to support the tower's weight and the dielectric strength to withstand the high voltage applied by the transmitter. The RF power to drive the antenna is supplied by an impedance matching network, usually housed in an antenna tuning hut near the base of the mast, and the cable supplying the current is simply bolted or brazed to the tower. The actual transmitter is usually located in a separate building, which supplies RF power to the tuning hut via a transmission line.
To keep it upright the mast has tensioned guy wires attached, usually in sets of 3 at 120° angles, which are anchored to the ground usually with concrete anchors. Multiple sets of guys (from 2 to 5) at different levels are used to make the tower rigid against buckling. The guy lines have strain insulators inserted, usually at the top near the attachment point |
https://en.wikipedia.org/wiki/DOS%20Plus | DOS Plus (erroneously also known as DOS+) was the first operating system developed by Digital Research's OEM Support Group in Newbury, Berkshire, UK, first released in 1985. DOS Plus 1.0 was based on CP/M-86 Plus combined with the PCMODE emulator from Concurrent PC DOS 4.11. While CP/M-86 Plus and Concurrent DOS 4.1 still had been developed in the United States, Concurrent PC DOS 4.11 was an internationalized and bug-fixed version brought forward by Digital Research UK. Later DOS Plus 2.x issues were based on Concurrent PC DOS 5.0 instead. In the broader picture, DOS Plus can be seen as an intermediate step between Concurrent CP/M-86 and DR DOS.
DOS Plus is able to run programs written for either CP/M-86 or MS-DOS 2.11, and can read and write the floppy formats used by both of these systems. Up to four CP/M-86 programs can be multitasked, but only one DOS program can be run at a time.
User interface
DOS Plus attempts to present the same command-line interface as MS-DOS. Like MS-DOS, it has a command-line interpreter called COMMAND.COM (alternative name DOSPLUS.COM). There is an AUTOEXEC.BAT file, but no CONFIG.SYS (except for FIDDLOAD, an extension to load some field-installable device drivers (FIDD) in some versions of DOS Plus 2.1). The major difference the user will notice is that the bottom line of the screen contains status information similar to:
DDT86 ALARM UK8 PRN=LPT1 Num 10:17:30
The left-hand side of the status bar shows running processes. The leftmost one will be visible on the screen; the others (if any) are running in the background. The right-hand side shows the keyboard layout in use (UK8 in the above example), the printer port assignment, the keyboard Caps Lock and Num Lock status, and the current time. If a DOS program is running, the status line is not shown. DOS programs cannot be run in the background.
The keyboard layout in use can be changed by pressing , and one of the function keys –.
Commands
DOS Plus con |
https://en.wikipedia.org/wiki/Potassium%20citrate | Potassium citrate (also known as tripotassium citrate) is a potassium salt of citric acid with the molecular formula K3C6H5O7. It is a white, hygroscopic crystalline powder. It is odorless with a saline taste. It contains 38.28% potassium by mass. In the monohydrate form, it is highly hygroscopic and deliquescent.
As a food additive, potassium citrate is used to regulate acidity, and is known as E number E332. Medicinally, it may be used to control kidney stones derived from uric acid or cystine.
In 2020, it was the 297th most commonly prescribed medication in the United States, with more than 1 million prescriptions.
Synthesis
Potassium citrate can be synthesized by the neutralization of citric acid, achieved by adding potassium bicarbonate, potassium carbonate, or potassium hydroxide to it. The solution can then be filtered and the solvent evaporated until granulation.
Uses
Potassium citrate is rapidly absorbed when given by mouth, and is excreted in the urine. Since it is an alkaline salt, it is effective in reducing the pain and frequency of urination when these are caused by highly acidic urine. It is used for this purpose in dogs and cats, but is chiefly employed as a non-irritating diuretic.
Potassium citrate is an effective way to treat/manage arrhythmia, if the patient is hypokalemic.
It is widely used to treat urinary calculi (kidney stones), and is often used by patients with cystinuria. A systematic review showed a significant reduction in the incidence of stone formation (RR 0.26, 95% CI 0.10 to 0.68).
It is also used as an alkalizing agent in the treatment of mild urinary tract infections, such as cystitis.
It is also used in many soft drinks as a buffering agent.
Frequently used in an aqueous solution with other potassium salts, it is a wet chemical fire suppressant that is particularly useful against kitchen fires. Its alkaline pH encourages saponification to insulate the fuel from oxidizing air, and the endothermic dehydr |
https://en.wikipedia.org/wiki/DECUS | The Digital Equipment Computer Users' Society (DECUS) was an independent computer user group related to Digital Equipment Corporation (DEC). The Connect User Group Community, formed from the consolidation in May 2008 of DECUS, Encompass, HP-Interex, and ITUG, is Hewlett-Packard's largest user community, representing more than 50,000 participants.
History
DECUS was the Digital Equipment Computer Users' Society, a users' group for Digital Equipment Corporation (DEC) computers. Members included companies and organizations who purchased DEC equipment; many members were application programmers who wrote code for DEC machines or system programmers who managed DEC systems. DECUS was founded in March 1961 by Edward Fredkin.
DECUS was legally a part of Digital Equipment Corporation and subsidized by the company; however, it was run by unpaid volunteers. Digital staff members were not eligible to join DECUS, yet were allowed and encouraged to participate in DECUS activities. Digital, in turn, relied on DECUS as an important channel of communication with its customers.
DECUS Software Library
DECUS had a software library which accepted orders from anyone, distributing programs submitted to it by people willing to share. It was organized by processor and operating system, using information submitted by program submitters, who signed releases allowing this and asserting their right to do so. The DECUS library published catalogs of these offerings yearly, though because it had the catalog mastered by an outside firm, it did not have easy ways to retrieve the content of early catalogs (prior to circa 1980) in machine readable format. Later material was maintained in house and was more easily edited. The charges for copying were somewhat high, reflecting the fact the copies were made by hand on DECUS equipment.
Activities
There were two DECUS US symposia per year, at which members and DEC employees gave presentations, and could visit an exhibit hall containing many new comp |
https://en.wikipedia.org/wiki/Nuclear%20cross%20section | The nuclear cross section of a nucleus is used to describe the probability that a nuclear reaction will occur. The concept of a nuclear cross section can be quantified physically in terms of "characteristic area" where a larger area means a larger probability of interaction. The standard unit for measuring a nuclear cross section (denoted as σ) is the barn, which is equal to \(10^{-24}\ \mathrm{cm}^2\), or \(10^{-28}\ \mathrm{m}^2\). Cross sections can be measured for all possible interaction processes together, in which case they are called total cross sections, or for specific processes, distinguishing elastic scattering and inelastic scattering; of the latter, amongst neutron cross sections the absorption cross sections are of particular interest.
In nuclear physics it is conventional to consider the impinging particles as point particles having negligible diameter. Cross sections can be computed for any nuclear process, such as capture scattering, production of neutrons, or nuclear fusion. In many cases, the number of particles emitted or scattered in nuclear processes is not measured directly; one merely measures the attenuation produced in a parallel beam of incident particles by the interposition of a known thickness of a particular material. The cross section obtained in this way is called the total cross section and is usually denoted by a σ or σT.
Typical nuclear radii are of the order 10−14 m. Assuming spherical shape, we therefore expect the cross sections for nuclear reactions to be of the order of \(\pi r^2\), or \(10^{-28}\ \mathrm{m}^2\) (i.e., 1 barn). Observed cross sections vary enormously: for example, slow neutrons absorbed by the (n, γ) reaction show a cross section much higher than 1,000 barns in some cases (boron-10, cadmium-113, and xenon-135), while the cross sections for transmutations by gamma-ray absorption are in the region of 0.001 barn.
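The attenuation measurement described above follows \(I = I_0\, e^{-n\sigma x}\) for a parallel beam through thickness \(x\) of material with nuclide number density \(n\). A small Python sketch with illustrative values (the cross section below is an assumed round number, not a measured datum):

```python
# Beam attenuation through a slab: I = I0 * exp(-n * sigma * x).
import math

N_A = 6.022e23      # Avogadro's number, 1/mol
rho = 8.65          # cadmium density, g/cm^3
A = 112.4           # molar mass of cadmium, g/mol
sigma = 2000e-24    # assumed total cross section, cm^2 (2000 b, illustrative)

n = rho * N_A / A   # nuclei per cm^3
x = 0.1             # slab thickness, cm
transmission = math.exp(-n * sigma * x)
print(f"macroscopic Sigma = {n * sigma:.2f} 1/cm, "
      f"transmission = {transmission:.3e}")
```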
Microscopic and macroscopic cross section
Nuclear cross sections are used in determining the nuclear reaction rate, and are governed by the reaction rate equation for a particular se |
https://en.wikipedia.org/wiki/QUICC | The QUICC (Quad Integrated Communications Controller) was a Motorola 68k-based microcontroller made by Freescale Semiconductor, targeted at the telecommunications market. It lends its name to a family of successor chips called PowerQUICC.
History
The original QUICC was the Motorola 68360 (MC68360), based on the MC68302. It was followed by the PowerPC-based PowerQUICC I, PowerQUICC II, PowerQUICC II+ and PowerQUICC III.
Applications
QUICC chips form the core of many Motorola Cellular Base stations.
Many PowerQUICC II+ designs now have SATA controllers for SAN based applications.
PowerQUICC CPUs/boards come with a Linux environment. Freescale also offers MQX (an RTOS) for PPC. |
https://en.wikipedia.org/wiki/Petrov%20classification | In differential geometry and theoretical physics, the Petrov classification (also known as Petrov–Pirani–Penrose classification) describes the possible algebraic symmetries of the Weyl tensor at each event in a Lorentzian manifold.
It is most often applied in studying exact solutions of Einstein's field equations, but strictly speaking the classification is a theorem in pure mathematics applying to any Lorentzian manifold, independent of any physical interpretation. The classification was found in 1954 by A. Z. Petrov and independently by Felix Pirani in 1957.
Classification theorem
We can think of a fourth rank tensor such as the Weyl tensor, evaluated at some event, as acting on the space of bivectors at that event like a linear operator acting on a vector space:
\(X^{ab} \mapsto \tfrac{1}{2} C^{ab}{}_{mn} X^{mn}\)
Then, it is natural to consider the problem of finding eigenvalues \(\lambda\) and eigenvectors (which are now referred to as eigenbivectors) \(X^{ab}\) such that
\(\tfrac{1}{2} C^{ab}{}_{mn} X^{mn} = \lambda\, X^{ab}\)
In (four-dimensional) Lorentzian spacetimes, there is a six-dimensional space of antisymmetric bivectors at each event. However, the symmetries of the Weyl tensor imply that any eigenbivectors must belong to a four-dimensional subset.
Thus, the Weyl tensor (at a given event) can in fact have at most four linearly independent eigenbivectors.
The eigenbivectors of the Weyl tensor can occur with various multiplicities and any multiplicities among the eigenbivectors indicates a kind of algebraic symmetry of the Weyl tensor at the given event. The different types of Weyl tensor (at a given event) can be determined by solving a characteristic equation, in this case a quartic equation. All the above happens similarly to the theory of the eigenvectors of an ordinary linear operator.
These eigenbivectors are associated with certain null vectors in the original spacetime, which are called the principal null directions (at a given event).
The relevant multilinear algebra is somewhat involved (see the citations below), but the resulting classification theorem states tha |
https://en.wikipedia.org/wiki/Segre%20classification | The Segre classification is an algebraic classification of rank two symmetric tensors. The resulting types are then known as Segre types. It is most commonly applied to the energy–momentum tensor (or the Ricci tensor) and primarily finds application in the classification of exact solutions in general relativity.
See also
Corrado Segre
Jordan normal form
Petrov classification |
https://en.wikipedia.org/wiki/Cataclysmic%20pole%20shift%20hypothesis | The cataclysmic pole shift hypothesis is a pseudo-scientific claim that there have been recent, geologically rapid shifts in the axis of rotation of Earth, causing calamities such as floods and tectonic events or relatively rapid climate changes.
There is evidence of precession and changes in axial tilt, but this change is on much longer time-scales and does not involve relative motion of the spin axis with respect to the planet. However, in what is known as true polar wander, the Earth rotates with respect to a fixed spin axis. Research shows that during the last 200 million years a total true polar wander of some 30° has occurred, but that no rapid shifts in Earth's geographic axial pole were found during this period. A characteristic rate of true polar wander is 1° or less per million years. Between approximately 790 and 810 million years ago, when the supercontinent Rodinia existed, two geologically rapid phases of true polar wander may have occurred. In each of these, the magnetic poles of Earth shifted by approximately 55° due to a large shift in the crust.
Definition and clarification
The geographic poles are defined by the points on the surface of Earth that are intersected by the axis of rotation. The pole shift hypothesis describes a change in location of these poles with respect to the underlying surface – a phenomenon distinct from the changes in axial orientation with respect to the plane of the ecliptic that are caused by precession and nutation – and amounts to an amplified version of true polar wander. Geologically, it would be a shift of the surface separate from a shift of the planet as a whole, enabled by Earth's molten core.
Pole shift hypotheses are not connected with plate tectonics, the well-accepted geological theory that Earth's surface consists of solid plates which shift over a viscous, or semifluid asthenosphere; nor with continental drift, the corollary to plate tectonics which maintains that locations of the continents have moved slowly over the surface of Earth, resulting |
https://en.wikipedia.org/wiki/Immunophenotyping | Immunophenotyping is a technique used to study the proteins expressed by cells. The technique is commonly used in basic science research and for laboratory diagnostic purposes. It can be performed on tissue sections (fresh or fixed tissue), cell suspensions, and other preparations. An example is the detection of tumor markers, such as in the diagnosis of leukemia. It involves the labelling of white blood cells with antibodies directed against surface proteins on their membrane. By choosing appropriate antibodies, the differentiation of leukemic cells can be accurately determined. The labelled cells are processed in a flow cytometer, a laser-based instrument capable of analyzing thousands of cells per second. The whole procedure can be performed on cells from the blood, bone marrow or spinal fluid in a matter of a few hours.
Immunophenotyping is a very common flow cytometry test in which fluorophore-conjugated antibodies are used as probes for staining target cells with high avidity and affinity. This technique allows rapid and easy phenotyping of each cell in a heterogeneous sample according to the presence or absence of a protein combination. |
https://en.wikipedia.org/wiki/Video%20BIOS | Video BIOS is the BIOS of a graphics card in a (usually IBM PC-derived) computer. It initializes the graphics card at the computer's boot time. It also implements the INT 10h interrupt and VESA BIOS Extensions (VBE) for basic text and video mode output before a specific video driver is loaded. In UEFI 2.x systems, the INT 10h interrupt and the VBE are replaced by the UEFI GOP.
Much the way the system BIOS provides a set of functions that are used by software programs to access the system hardware, the video BIOS provides a set of video-related functions that are used by programs to access the video hardware as well as storing vendor-specific settings such as card name, clock frequencies, VRAM types & voltages. The video BIOS interfaces software to the video chipset in the same way that the system BIOS does for the system chipset.
The ROM also contained a basic font set to upload to the video adapter's font RAM if the video card did not include its own font ROM with such a font set.
Unlike some other hardware components, the video card usually needs to be active very early during the boot process so that the user can see what is going on. This requires the card to be activated before any operating system begins loading; thus it needs to be activated by the BIOS, the only software that is present at this early stage. The system BIOS loads the video BIOS from the card's ROM into system RAM and transfers control to it early in the boot sequence.
Early PCs contained functions for driving MDA and CGA cards in the system BIOS, and those cards did not have any Video BIOS built in. When the EGA card was first sold in 1984, the Video BIOS was introduced to make these cards compatible with existing PCs whose BIOS did not know how to drive an EGA card. Ever since, EGA/VGA and all enhanced VGA compatible cards have included a Video BIOS.
When the computer is started, some graphics cards (usually certain Nvidia cards) display their vendor, model, Video BIOS version and amount of video memory |
https://en.wikipedia.org/wiki/Chaff%20algorithm | Chaff is an algorithm for solving instances of the Boolean satisfiability problem in programming. It was designed by researchers at Princeton University. The algorithm is an instance of the DPLL algorithm with a number of enhancements for efficient implementation.
Implementations
Some available implementations of the algorithm in software are mChaff and zChaff, the latter being the more widely known and used. zChaff was originally written by Dr. Lintao Zhang at Microsoft Research, hence the “z”. It is now maintained by researchers at Princeton University and is available for download as both source code and binaries for Linux. zChaff is free for non-commercial use.
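To make the underlying procedure concrete, here is a minimal Python sketch (ours) of the plain DPLL backtracking search that Chaff builds on; it deliberately omits Chaff's signature enhancements (two-watched-literal propagation and the VSIDS decision heuristic), and all names are illustrative.

```python
# Minimal DPLL sketch (illustrative; Chaff layers two-watched-literal
# propagation and the VSIDS heuristic on top of this basic scheme).
# A formula is a list of clauses; a clause is a list of nonzero ints,
# where a negative integer denotes a negated variable.

def dpll(clauses, assignment=None):
    if assignment is None:
        assignment = {}

    # Unit propagation: keep assigning forced literals until a fixed point.
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue  # clause already satisfied
            unassigned = [l for l in clause if abs(l) not in assignment]
            if not unassigned:
                return None  # conflict: clause falsified
            if len(unassigned) == 1:
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True

    variables = {abs(l) for c in clauses for l in c}
    free = variables - set(assignment)
    if not free:
        return assignment  # every clause satisfied: model found

    var = min(free)  # naive branching choice (Chaff would use VSIDS here)
    for value in (True, False):
        trial = dict(assignment)
        trial[var] = value
        result = dpll(clauses, trial)
        if result is not None:
            return result
    return None

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(dpll([[1, -2], [2, 3], [-1, -3]]))  # e.g. {1: True, 3: False, 2: True}
```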
https://en.wikipedia.org/wiki/Sulcus%20%28morphology%29 | In biological morphology and anatomy, a sulcus (: sulci) is a furrow or fissure (Latin fissura, : fissurae). It may be a groove, natural division, deep furrow, elongated cleft, or tear in the surface of a limb or an organ, most notably on the surface of the brain, but also in the lungs, certain muscles (including the heart), as well as in bones, and elsewhere. Many sulci are the product of a surface fold or junction, such as in the gums, where they fold around the neck of the tooth.
In invertebrate zoology, a sulcus is a fold, groove, or boundary, especially at the edges of sclerites or between segments.
In pollen, a grain that is grooved by a sulcus is termed sulcate.
Examples in anatomy
Liver
Ligamentum teres hepatis fissure
Ligamentum venosum fissure
Portal fissure, found in the under-surface of the liver
Transverse fissure of liver, found in the lower surface of the liver
Umbilical fissure, found in front of the liver
Lung
Azygos fissure, of right lung
Horizontal fissure of right lung
Oblique fissure, of the right and left lungs
Skull
Auricular fissure, found in the temporal bone
Petrotympanic fissure
Pterygomaxillary fissure
Sphenoidal fissure, separates the wings and the body of the sphenoid bone
Superior orbital fissure
Other types
anal fissure, a break or tear in the skin of the anal canal
anterior interventricular sulcus
calcaneal sulcus
coronal sulcus
femoral sulcus or intercondylar fossa of femur
fissure (dentistry), a break in the tooth enamel
fissure of the nipple, a condition that results from running, breastfeeding and other friction-causing exposures
fissured tongue, a condition characterized by deep grooves (fissures) in the tongue
gingival sulcus
gluteal sulcus
Henle's fissure, a fissure in the connective tissue between the muscle fibers of the heart
interlabial sulci
intermammary sulcus
intertubercular sulcus, the groove between the lesser and greater tubercules of the humerus (bone of the upper arm)
lacrimal sulcus (sulcus l |
https://en.wikipedia.org/wiki/Holographic%20Versatile%20Card | The Holographic Versatile Card (HVC) was a proposed data storage format by Optware; the projected date for a Japanese launch had been the first half of 2007, pending finalization of the specification. However, as of March 2022, nothing has yet surfaced. One of its main advantages compared with discs was supposed to be the lack of moving parts when played. Optware claimed it would hold 30 GB of data, have a write speed three times faster than Blu-ray, and be approximately the size of a credit card. Optware claimed that at release the media would cost about ¥100 (roughly $1.20) each, that reader devices would initially cost about ¥200,000 (roughly $2,400), and that reader/writer devices would cost ¥1,000,000 (roughly $12,000, as per the exchange rate of April 2011) each.
See also
DVD
HD DVD
Holographic memory
Holographic Versatile Disc
Vaporware |
https://en.wikipedia.org/wiki/Nuclear%20fuel | Nuclear fuel is material used in nuclear power stations to produce heat to power turbines. Heat is created when nuclear fuel undergoes nuclear fission.
Most nuclear fuels contain heavy fissile actinide elements that are capable of undergoing and sustaining nuclear fission. The three most relevant fissile isotopes are uranium-233, uranium-235 and plutonium-239. When the unstable nuclei of these atoms are hit by a slow-moving neutron, they frequently split, creating two daughter nuclei and two or three more neutrons. In that case, the neutrons released go on to split more nuclei. This creates a self-sustaining chain reaction that is controlled in a nuclear reactor, or uncontrolled in a nuclear weapon. Alternatively, if the nucleus absorbs the neutron without splitting, it creates a heavier nucleus with one additional neutron.
The processes involved in mining, refining, purifying, using, and disposing of nuclear fuel are collectively known as the nuclear fuel cycle.
Not all types of nuclear fuels create power from nuclear fission; plutonium-238 and some other isotopes are used to produce small amounts of nuclear power by radioactive decay in radioisotope thermoelectric generators and other types of atomic batteries.
Nuclear fuel has the highest energy density of all practical fuel sources.
Oxide fuel
For fission reactors, the fuel (typically based on uranium) is usually based on the metal oxide; the oxides are used rather than the metals themselves because the oxide melting point is much higher than that of the metal and because it cannot burn, being already in the oxidized state.
Uranium dioxide
Uranium dioxide is a black semiconducting solid. It can be made by heating uranyl nitrate to form UO3.
This is then converted by heating with hydrogen to form UO2. It can be made from enriched uranium hexafluoride by reacting with ammonia to form a solid called ammonium diuranate, (NH4)2U2O7. This is then heated (calcined) to form UO3 and U3O8 which is then converted by heating with |
https://en.wikipedia.org/wiki/Akeno%20Giant%20Air%20Shower%20Array | The Akeno Giant Air Shower Array (AGASA) was an array of particle detectors designed to study the origin of ultra-high-energy cosmic rays. It was deployed from 1987 to 1991 and decommissioned in 2004.
It consisted of 111 scintillation detectors and 27 muon detectors spread over an area of 100 km2.
It was operated by the Institute for Cosmic Ray Research, University of Tokyo at the Akeno Observatory.
Results
The results from AGASA were used to calculate the energy spectrum and anisotropy of cosmic rays. The results helped to confirm the existence of ultra-high energy cosmic rays (>), such as the so-called "Oh-My-God" particle that was observed by the Fly's Eye experiment run by the University of Utah.
The Telescope Array, a merger of the AGASA and High Resolution Fly's Eye (HiRes) groups, and the Pierre Auger Observatory have improved on the results from AGASA by building larger, hybrid detectors and collecting greater quantities of more precise data.
See also
CREDO
Extragalactic cosmic ray
Gamma-ray telescopes (Alphabetic list)
Gamma-ray astronomy & X-ray astronomy
Cosmic Ray System (CR instrument on the Voyagers) |
https://en.wikipedia.org/wiki/De%20Magnete | De Magnete, Magneticisque Corporibus, et de Magno Magnete Tellure (On the Magnet and Magnetic Bodies, and on That Great Magnet the Earth) is a scientific work published in 1600 by the English physician and scientist William Gilbert. A highly influential and successful book, it exerted an immediate influence on many contemporary writers, including Francis Godwin and Mark Ridley.
Contents
In De Magnete, Gilbert described many of his experiments with his model Earth called the terrella. Gilbert made the claim that gravity was due to the same force and he believed that this held the Moon in orbit around the Earth.
The work then considered static electricity produced by amber. Amber is called elektron in Greek, and electrum in Latin, so Gilbert decided to refer to the phenomenon by the adjective electricus.
Summary
De Magnete consists of six books.
Book 1
Historical survey of magnetism and theory of Earth's magnetism. The lodestone in antiquity from Plato onwards and the gradual identification of iron ores. The south pole of a lodestone points to the north pole of the Earth and vice versa as the terrestrial globe is magnetic.
Book 2
Distinction between electricity and magnetism. An amber stick when rubbed affects a rotating needle made of any type of metal (a versorium) and attracts paper, leaves and even water. But electricity is different from heat and from magnetism, which attracts only iron-bearing materials (he calls the effect coition). He shows the effects of cutting a spherical lodestone (which he calls a terrella) through the poles and equator and the direction of attraction at different points. Magnets act at a distance but the force has no permanent presence and is not hindered like light. Materials including gold, silver and diamonds are not affected by magnets, nor can one produce perpetual motion.
Book 3
The Earth's normal magnetism. He proposes (incorrectly) that the angle of the ecliptic and precession of the equinoxes are caused by magnetism. A lo |
https://en.wikipedia.org/wiki/Coincidence%20point | In mathematics, a coincidence point (or simply coincidence) of two functions is a point in their common domain having the same image.
Formally, given two functions $f, g \colon X \to Y$,
we say that a point x in X is a coincidence point of f and g if f(x) = g(x).
Coincidence theory (the study of coincidence points) is, in most settings, a generalization of fixed point theory, the study of points x with f(x) = x. Fixed point theory is the special case obtained from the above by letting X = Y and taking g to be the identity function.
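A small worked example (ours, not from the article): take $X = Y = \mathbb{R}$ with $f(x) = x^{2}$ and $g(x) = 2x$.

```latex
\[
  f(x) = g(x) \;\Longleftrightarrow\; x^{2} = 2x \;\Longleftrightarrow\; x(x-2) = 0,
  \qquad \text{coincidence set} = \{0,\ 2\}.
\]
% Taking g = id instead recovers the fixed points of f:
\[
  f(x) = x \;\Longleftrightarrow\; x^{2} = x \;\Longleftrightarrow\; x \in \{0,\ 1\}.
\]
```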
Just as fixed point theory has its fixed-point theorems, there are theorems that guarantee the existence of coincidence points for pairs of functions. Notable among them, in the setting of manifolds, is the Lefschetz coincidence theorem, which is typically known only in its special case formulation for fixed points.
Coincidence points, like fixed points, are today studied using many tools from mathematical analysis and topology. An equaliser is a generalization of the coincidence set. |
https://en.wikipedia.org/wiki/Baby%20powder | Baby powder is an astringent powder used for preventing diaper rash and for cosmetic uses. It may be composed of talc (in which case it is also called talcum powder) or corn starch. It may also contain additional ingredients like fragrances. Baby powder can also be used as a dry shampoo, cleaning agent (to remove grease stains), and freshener.
Health risks
Talcum powder, if inhaled, may cause aspiration pneumonia and granuloma. Severe cases may lead to chronic respiratory problems and death. The particles in corn starch powder are larger and less likely to be inhaled.
Some studies have found a statistical relationship between talcum powder applied to the perineal area by women and the incidence of ovarian cancer, but there is not a consensus that the two are linked. In 2016, more than 1,000 women in the United States sued Johnson & Johnson for covering up the possible cancer risk associated with its baby powder. The company stopped selling talc-based baby powder in the United States and Canada in 2020 and has said it will stop all talc sales worldwide by 2023, switching to a corn starch-based formula. However, Johnson & Johnson says that its talc-based baby powder is safe to use and does not contain asbestos.
See also |
https://en.wikipedia.org/wiki/El-Fish | El-Fish is a fish and fish-tank simulator and software toy developed by Russian game developer AnimaTek, with Maxis providing development advice. The game was published by Mindscape (v1.1) and later by Maxis (v1.1 + v1.2) in 1993 on 5 diskettes.
Each fish in El-Fish has a unique Roe, similar to the genome. This allows the user to catch fish and use selective breeding and mutation to create fish to their own tastes for placing them in virtual aquariums. Around 800 possible genetic attributes (fin shape, body color, size, etc.) are available, which can be selectively shaped into virtually infinite numbers of unique fish. Once fish are created, El-Fish will algorithmically generate up to 256 animation frames so that the fish will appear to swim smoothly around the tank.
The tank simulator is very customizable for this game's era. The player can select from a large number of backdrops and tank ornaments for the fish to swim between. The user can also import their own images to use as tank ornaments. El-Fish includes a fractal based plant generator for creating unique aquarium plants. There are several "moving objects" that can be added to the tanks which the fish will react to, such as a cat paw, fibcrab, and a small plastic scuba diver. The user can also procedurally generate background music for their tank using an in-game composer, or choose a separate MIDI music file to be played for each virtual aquarium, and feed the fish.
The tank simulator can run as a memory-resident program in MS-DOS, making it a screensaver. Multiple tanks can be displayed via El-Fish's slideshow feature.
Development
El-Fish was created by Vladimir Pokhilko, Ph.D. and Alexey Pajitnov (the creator of Tetris), who had backgrounds in mathematics, computer science, and psychology. They were attempting to create software for INTEC (a company that they started) that would be made for "people's souls". They developed this idea, calling it "Human Software", with three rules:
The software needs |
https://en.wikipedia.org/wiki/Stokes%20flow | Stokes flow (named after George Gabriel Stokes), also named creeping flow or creeping motion, is a type of fluid flow where advective inertial forces are small compared with viscous forces. The Reynolds number is low, i.e. $\mathrm{Re} \ll 1$. This is a typical situation in flows where the fluid velocities are very slow, the viscosities are very large, or the length-scales of the flow are very small. Creeping flow was first studied to understand lubrication. In nature, this type of flow occurs in the swimming of microorganisms and sperm. In technology, it occurs in paint, MEMS devices, and in the flow of viscous polymers generally.
The equations of motion for Stokes flow, called the Stokes equations, are a linearization of the Navier–Stokes equations, and thus can be solved by a number of well-known methods for linear differential equations. The primary Green's function of Stokes flow is the Stokeslet, which is associated with a singular point force embedded in a Stokes flow. From its derivatives, other fundamental solutions can be obtained. The Stokeslet was first derived by Oseen in 1927, although it was not named as such until 1953 by Hancock. The closed-form fundamental solutions for the generalized unsteady Stokes and Oseen flows associated with arbitrary time-dependent translational and rotational motions have been derived for the Newtonian and micropolar fluids.
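As a quick numerical illustration of the Stokeslet (our own sketch; the function name is illustrative), the free-space velocity field of a point force $\mathbf{F}$ in a fluid of viscosity $\mu$ follows the standard Oseen-tensor formula $\mathbf{u}(\mathbf{r}) = \frac{1}{8\pi\mu}\left(\frac{\mathbf{F}}{r} + \frac{(\mathbf{F}\cdot\mathbf{r})\,\mathbf{r}}{r^{3}}\right)$:

```python
import numpy as np

def stokeslet_velocity(r, force, mu=1.0):
    """Velocity at displacement r from a point force F in unbounded Stokes flow:
    u(r) = (1 / (8 pi mu)) * (F / |r| + (F . r) r / |r|**3).
    (Function name is ours; the formula is the standard Oseen tensor.)"""
    r = np.asarray(r, dtype=float)
    force = np.asarray(force, dtype=float)
    dist = np.linalg.norm(r)
    return (force / dist + np.dot(force, r) * r / dist**3) / (8.0 * np.pi * mu)

# Velocity two length units above a unit point force directed along x;
# by linearity of the Stokes equations, fields of several point forces superpose.
print(stokeslet_velocity(r=[0.0, 0.0, 2.0], force=[1.0, 0.0, 0.0]))
```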
Stokes equations
The equation of motion for Stokes flow can be obtained by linearizing the steady state Navier–Stokes equations. The inertial forces are assumed to be negligible in comparison to the viscous forces, and eliminating the inertial terms of the momentum balance in the Navier–Stokes equations reduces it to the momentum balance in the Stokes equations: $\boldsymbol{\nabla} \cdot \boldsymbol{\sigma} + \mathbf{f} = \mathbf{0},$
where $\boldsymbol{\sigma}$ is the stress (sum of viscous and pressure stresses), and $\mathbf{f}$ an applied body force. The full Stokes equations also include an equation for the conservation of mass, commonly written in the form: $\frac{\partial \rho}{\partial t} + \boldsymbol{\nabla} \cdot (\rho \mathbf{u}) = 0,$
where $\rho$ is the fluid density and $\mathbf{u}$ the fluid velocity. To |
https://en.wikipedia.org/wiki/RAD750 | The RAD750 is a radiation-hardened single-board computer manufactured by BAE Systems Electronics, Intelligence & Support. The successor of the RAD6000, the RAD750 is for use in high-radiation environments experienced on board satellites and spacecraft. The RAD750 was released in 2001, with the first units launched into space in 2005.
Technology
The CPU has 10.4 million transistors, an order of magnitude more than the RAD6000 (which had 1.1 million). It is manufactured using either 250 or 150 nm photolithography and has a die area of 130 mm2. It has a core clock of 110 to 200 MHz and can process at 266 MIPS or more. The CPU can include an extended L2 cache to improve performance.
The CPU can withstand an absorbed radiation dose of 2,000 to 10,000 grays (200,000 to 1,000,000 rads), temperatures between −55 °C and 125 °C, and requires 5 watts of power. The standard RAD750 single-board system (CPU and motherboard) can withstand 1,000 grays (100,000 rads), temperatures between −55 °C and 70 °C, and requires 10 watts of power.
The RAD750 system is priced comparably to the RAD6000, which as of 2002 was listed at US$200,000. Customer program requirements and quantities, however, greatly affect the final unit costs.
The RAD750 is based on the PowerPC 750. Its packaging and logic functions are completely compatible with the PowerPC 7xx family.
The term RAD750 is a registered trademark of BAE Systems Information and Electronic Systems Integration Inc.
Deployment
In 2010, it was reported that there were over 150 RAD750s used in a variety of spacecraft. Notable examples, in order of launch date, include:
Deep Impact comet-chasing spacecraft, launched in January 2005, the first to use the RAD750 computer.
XSS 11, small experimental satellite, launched 11 April 2005.
Mars Reconnaissance Orbiter, launched 12 August 2005.
SECCHI (Sun Earth Connection Coronal and Heliospheric Investigation) instrument package on each of the STEREO spacecraft, launched |
https://en.wikipedia.org/wiki/New%20standard%20tuning | New standard tuning (NST) is an alternative tuning for the guitar that approximates all-fifths tuning. The guitar's strings are assigned the notes C2-G2-D3-A3-E4-G4 (from lowest to highest); the five lowest open strings are each tuned to an interval of a perfect fifth {(C,G),(G,D),(D,A),(A,E)}; the two highest strings are a minor third apart (E,G).
All-fifths tuning is typically used for mandolins, cellos, violas, and violins. On a guitar, tuning the strings in fifths would mean the first string would be a high B. NST provides a good approximation to all-fifths tuning. Like other regular tunings, NST allows chord fingerings to be shifted from one set of strings to another.
NST's C-G range is wider, both lower and higher, than the E-E range of standard tuning in which the strings are tuned to the open notes E2-A2-D3-G3-B3-E4. The greater range allows NST guitars to play repertoire that would be impractical, if not impossible, on a standard-tuned guitar.
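A quick way to see the wider range is to compute equal-tempered frequencies for both tunings; the following sketch (ours, with illustrative helper names) assumes A4 = 440 Hz:

```python
NOTE_INDEX = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def frequency(note, a4=440.0):
    """Equal-temperament frequency of a natural note like 'C2' or 'G4'."""
    midi = 12 * (int(note[1:]) + 1) + NOTE_INDEX[note[0]]
    return a4 * 2 ** ((midi - 69) / 12)

nst = ["C2", "G2", "D3", "A3", "E4", "G4"]
standard = ["E2", "A2", "D3", "G3", "B3", "E4"]

for label, tuning in (("NST", nst), ("Standard", standard)):
    print(label, [round(frequency(n), 1) for n in tuning])
# NST spans ~65.4 Hz (C2) to ~392.0 Hz (G4);
# standard tuning spans ~82.4 Hz (E2) to ~329.6 Hz (E4).
```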
NST was developed by Robert Fripp, the guitarist for King Crimson. Fripp taught the new standard tuning in Guitar Craft courses beginning in 1985, and thousands of Guitar Craft students continue to use the tuning. Like other alternative tunings for guitar, NST provides challenges and new opportunities to guitarists, who have developed music especially suited to NST.
NST places the guitar strings under greater tension than standard tuning. Standard sets of guitar strings do not work well with the tuning as the lowest strings are too loose and the highest string may snap under the increased tension. Special sets of NST strings have been available for decades, and some guitarists have assembled NST sets from individual strings.
History
New standard tuning (NST) was invented by Robert Fripp of the band King Crimson in September 1983.
Fripp began using the tuning in 1985 before beginning his Guitar Craft seminars, which have taught the tuning to three thousand guitarists.
The tuning is (from low to high): C2-G2-D |
https://en.wikipedia.org/wiki/Fully%20polynomial-time%20approximation%20scheme | A fully polynomial-time approximation scheme (FPTAS) is an algorithm for finding approximate solutions to function problems, especially optimization problems. An FPTAS takes as input an instance of the problem and a parameter ε > 0. It returns as output a value that is at least $1 - \varepsilon$ times the correct value, and at most $1 + \varepsilon$ times the correct value.
In the context of optimization problems, the correct value is understood to be the value of the optimal solution, and it is often implied that an FPTAS should produce a valid solution (and not just the value of the solution). Returning a value and finding a solution with that value are equivalent assuming that the problem possesses self-reducibility.
Importantly, the run-time of an FPTAS is polynomial in the problem size and in 1/ε. This is in contrast to a general polynomial-time approximation scheme (PTAS). The run-time of a general PTAS is polynomial in the problem size for each specific ε, but might be exponential in 1/ε.
The term FPTAS may also be used to refer to the class of problems that have an FPTAS. FPTAS is a subset of PTAS, and unless P = NP, it is a strict subset.
Relation to other complexity classes
All problems in FPTAS are fixed-parameter tractable with respect to the standard parameterization.
Any strongly NP-hard optimization problem with a polynomially bounded objective function cannot have an FPTAS unless P=NP. However, the converse fails: e.g. if P does not equal NP, knapsack with two constraints is not strongly NP-hard, but has no FPTAS even when the optimal objective is polynomially bounded.
Converting a dynamic program to an FPTAS
Woeginger presented a general scheme for converting a certain class of dynamic programs to an FPTAS.
Input
The scheme handles optimization problems in which the input is defined as follows:
The input is made of n vectors, x1,...,xn.
Each input vector is made of some number of non-negative integers, and this number may depend on the input.
All components of the input vectors are encoded |
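As a concrete illustration of the definition (our example, not part of Woeginger's scheme above), the classic value-scaling FPTAS for the 0/1 knapsack problem achieves a $(1 - \varepsilon)$ guarantee in time polynomial in n and 1/ε:

```python
def knapsack_fptas(values, weights, capacity, eps):
    """Value-scaling FPTAS sketch for 0/1 knapsack (names are ours).

    Scales item values down by K = eps * max(values) / n, then runs the
    exact minimum-weight dynamic program over scaled values; run time is
    polynomial in n and 1/eps, and the result is >= (1 - eps) * OPT.
    """
    n = len(values)
    scale = eps * max(values) / n
    scaled = [int(v / scale) for v in values]

    INF = float("inf")
    dp = [0] + [INF] * sum(scaled)   # dp[v] = min weight reaching scaled value v
    for i in range(n):
        for v in range(len(dp) - 1, scaled[i] - 1, -1):
            dp[v] = min(dp[v], dp[v - scaled[i]] + weights[i])

    best = max(v for v in range(len(dp)) if dp[v] <= capacity)
    # For brevity we report the rescaled value; a full implementation
    # would also recover the chosen item set and report its true value.
    return best * scale

print(knapsack_fptas([60, 100, 120], [10, 20, 30], capacity=50, eps=0.1))  # 220.0
```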
https://en.wikipedia.org/wiki/389%20Directory%20Server | The 389 Directory Server (previously Fedora Directory Server) is a Lightweight Directory Access Protocol (LDAP) server developed by Red Hat as part of the community-supported Fedora Project. The name "389" derives from the port number used by LDAP.
389 Directory Server supports many operating systems, including Fedora Linux, Red Hat Enterprise Linux, Debian, Solaris, and HP-UX 11i. In late 2016 the project merged experimental FreeBSD support.
However, as of 2017 the 389 Directory Server team planned to remove HP-UX and Solaris support in the upcoming 1.4.x series.
The 389 source code is generally available under the GNU General Public License version 3; some components have an exception for plugin code, while other components use LGPLv2 or Apache. Red Hat also markets a commercial version of the project as a Red Hat Directory Server as part of support contracts for RHEL.
History
389 Directory Server is derived from the original University of Michigan Slapd project. In 1996, the project's developers were hired by Netscape Communications Corporation, and the project became known as the Netscape Directory Server (NDS). After acquiring Netscape, AOL sold ownership of the NDS intellectual property to Sun Microsystems, but retained rights akin to ownership. Sun sold and developed the Netscape Directory Server under the name JES/SunOne Directory Server, now Oracle Directory Server since the takeover of Sun by Oracle. AOL/Netscape's rights were acquired by Red Hat, and on June 1, 2005, much of the source code was released as free software under the terms of the GNU General Public License (GPL).
As of 389 Directory Server version 1.0 (December 1, 2005), Red Hat released as free software all of the remaining source code for all components included in the release package (admin server, console, etc.) and continues to maintain them under their respective licenses.
In May 2009, the Fedora Directory Server project changed its name to 389 to give the project a distributio |
https://en.wikipedia.org/wiki/Emote | An emote is an entry in a text-based chat client that indicates an action taking place. Unlike emoticons, they are not text art, and instead describe the action using words or images (similar to emoji).
Emotes were created by Shigetaka Kurita in Japan, whose original idea was to create a way of communication using pictures. Kurita would draw pictures of himself for inspiration for the emotes he created.
Nowadays emotes are a language of their own; they have progressed far past their original counterparts of 1999.
Overview
In most IRC chat clients, entering the command "/me" will print the user's name followed by whatever text follows. For example, if a user named Joe typed "/me jumps with joy", the client will print this as "Joe jumps with joy" in the chat window.
<Joe> Allow me to demonstrate...
* Joe jumps with joy again.
In chat media which do not support the "/me" command, it is conventional to read text surrounded by asterisks as if it were emoted. For example, reading "Joe: *jumps with joy*" in a chat log would suggest that the user had intended the words to be performed rather than spoken.
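At the protocol level, IRC clients conventionally transmit "/me" as a CTCP ACTION: a PRIVMSG whose body is wrapped in 0x01 delimiter bytes. A minimal sketch (ours; the helper names are illustrative):

```python
def format_me(target, action_text):
    """Build the raw IRC line a client sends for "/me <action_text>".

    The "/me" command is conventionally transmitted as a CTCP ACTION:
    a PRIVMSG whose body is wrapped in 0x01 delimiter bytes.
    (Helper names are ours; the wire format is the CTCP convention.)
    """
    return f"PRIVMSG {target} :\x01ACTION {action_text}\x01\r\n"

def render_incoming(nick, body):
    """Render a received message body, displaying CTCP ACTIONs as emotes."""
    if body.startswith("\x01ACTION ") and body.endswith("\x01"):
        return f"* {nick} {body[8:-1]}"
    return f"<{nick}> {body}"

print(format_me("#demo", "jumps with joy"))                     # raw protocol line
print(render_incoming("Joe", "\x01ACTION jumps with joy\x01"))  # "* Joe jumps with joy"
```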
In MMORPGs with visible avatars, such as EverQuest, Asheron's Call, Second Life and World of Warcraft, certain commands entered through the chat interface will print a predefined /me emote to the chat window and cause the character to animate, and in some cases produce sound effects. For example, entering "/confused" into World of Warcraft's chat interface will play an animation on the user's avatar and print "You are hopelessly confused." in the chat window.
With this being said, emotes are used primarily online in video games, and even more recently, on smartphones. An example of image-based emotes being used frequently is the chat feature on the streaming service Twitch. Twitch also allows users to upload animated emotes encoded with the GIF format.
See also
Emoji |
https://en.wikipedia.org/wiki/Polarization%20identity | In linear algebra, a branch of mathematics, the polarization identity is any one of a family of formulas that express the inner product of two vectors in terms of the norm of a normed vector space.
If a norm arises from an inner product then the polarization identity can be used to express this inner product entirely in terms of the norm. The polarization identity shows that a norm can arise from at most one inner product; however, there exist norms that do not arise from any inner product.
The norm associated with any inner product space satisfies the parallelogram law: $\|x+y\|^2 + \|x-y\|^2 = 2\|x\|^2 + 2\|y\|^2.$
In fact, as observed by John von Neumann, the parallelogram law characterizes those norms that arise from inner products.
Given a normed space $(H, \|\cdot\|)$, the parallelogram law holds for $\|\cdot\|$ if and only if there exists an inner product $\langle \cdot, \cdot \rangle$ on $H$ such that $\|x\|^2 = \langle x, x \rangle$ for all $x \in H$, in which case this inner product is uniquely determined by the norm via the polarization identity.
Polarization identities
Any inner product on a vector space induces a norm by the equation $\|x\| = \sqrt{\langle x, x \rangle}.$
The polarization identities reverse this relationship, recovering the inner product from the norm.
Every inner product satisfies: $\|x+y\|^2 = \|x\|^2 + \|y\|^2 + 2\operatorname{Re}\langle x, y \rangle.$
Solving for $\operatorname{Re}\langle x, y \rangle$ gives the formula $\operatorname{Re}\langle x, y \rangle = \tfrac{1}{2}\bigl(\|x+y\|^2 - \|x\|^2 - \|y\|^2\bigr).$ If the inner product is real then $\operatorname{Re}\langle x, y \rangle = \langle x, y \rangle$ and this formula becomes a polarization identity for real inner products.
Real vector spaces
If the vector space is over the real numbers then the polarization identities are: $\langle x, y \rangle = \tfrac{1}{4}\bigl(\|x+y\|^2 - \|x-y\|^2\bigr) = \tfrac{1}{2}\bigl(\|x+y\|^2 - \|x\|^2 - \|y\|^2\bigr) = \tfrac{1}{2}\bigl(\|x\|^2 + \|y\|^2 - \|x-y\|^2\bigr).$
These various forms are all equivalent by the parallelogram law: $2\|x\|^2 + 2\|y\|^2 = \|x+y\|^2 + \|x-y\|^2.$
This further implies that the $L^p$ class is not a Hilbert space whenever $p \neq 2$, as the parallelogram law is not satisfied. For a counterexample, consider the indicator functions $f = \mathbf{1}_A$ and $g = \mathbf{1}_B$ for any two disjoint subsets $A, B$ of the general domain, and check the parallelogram law using the measures of the two sets.
Complex vector spaces
For vector spaces over the complex numbers, the above formulas are not quite correct because they do not describe the imaginary part of the (complex) inner product.
However, an analogous expression does ensure that both real and imaginary |
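For reference, the complex polarization identity commonly takes the following form (stated here under the convention that the inner product is linear in its first argument; with the opposite convention the signs of the imaginary terms flip):

```latex
\[
  \langle x, y \rangle
  = \frac{1}{4} \sum_{k=0}^{3} i^{k} \bigl\| x + i^{k} y \bigr\|^{2}
  = \frac{1}{4} \Bigl( \| x + y \|^{2} - \| x - y \|^{2}
    + i \, \| x + i y \|^{2} - i \, \| x - i y \|^{2} \Bigr).
\]
```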
https://en.wikipedia.org/wiki/Z%C3%A9%20Povinho | Zé Povinho is the cartoon character of a Portuguese everyman created in 1875 by Rafael Bordalo Pinheiro. He became first a symbol of the Portuguese working-class people, and eventually into the unofficial personification of Portugal.
Gallery |
https://en.wikipedia.org/wiki/216%20%28number%29 | 216 (two hundred [and] sixteen) is the natural number following 215 and preceding 217. It is a cube, and is often called Plato's number, although it is not certain that this is the number intended by Plato.
In mathematics
216 is the cube of 6, and the sum of three cubes: $216 = 6^3 = 3^3 + 4^3 + 5^3.$
It is the smallest cube that can be represented as a sum of three positive cubes, making it the first nontrivial example for Euler's sum of powers conjecture. It is, moreover, the smallest number that can be represented as a sum of any number of distinct positive cubes in more than one way. It is a highly powerful number: the product of the exponents in its prime factorization is larger than the product of exponents of any smaller number.
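A quick brute-force check of these cube facts (our illustrative snippet):

```python
from itertools import combinations

# 216 is a cube and the sum of three consecutive positive cubes.
assert 6**3 == 216 == 3**3 + 4**3 + 5**3

# Smallest number expressible as a sum of distinct positive cubes in
# more than one way: enumerate all sums of distinct cubes up to 6^3.
cubes = [k**3 for k in range(1, 7)]
ways = {}
for r in range(1, len(cubes) + 1):
    for combo in combinations(cubes, r):
        ways.setdefault(sum(combo), []).append(combo)
print(ways[216])  # [(216,), (27, 64, 125)] -- two distinct representations
```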
Because there is no way to express it as the sum of the proper divisors of any other integer, it is an untouchable number. Although it is not a semiprime, the three closest numbers on either side of it are, making it the middle number between twin semiprime-triples, the smallest number with this property. Zhi-Wei Sun has conjectured that each natural number not equal to 216 can be written either as a triangular number or as a triangular number plus a prime number, which is not possible for 216 itself. If the conjecture is true, 216 would be the only number for which this is not possible.
There are 216 ordered pairs of four-element permutations whose products generate all the other permutations on four elements. There are also 216 fixed hexominoes, the polyominoes made from 6 squares, joined edge-to-edge. Here "fixed" means that rotations or mirror reflections of hexominoes are considered to be distinct shapes.
In other fields
216 is one common interpretation of Plato's number, a number described in vague terms by Plato in the Republic. Other interpretations include and .
There are 216 colors in the web-safe color palette, a color cube.
In the game of checkers, there are 216 different positions that can be reached by the first three moves.
The proto-Kab |
https://en.wikipedia.org/wiki/Name%E2%80%93value%20pair | A name–value pair, also called an attribute–value pair, key–value pair, or field–value pair, is a fundamental data representation in computing systems and applications. Designers often desire an open-ended data structure that allows for future extension without modifying existing code or data. In such situations, all or part of the data model may be expressed as a collection of 2-tuples in the form <attribute name, value> with each element being an attribute–value pair. Depending on the particular application and the implementation chosen by programmers, attribute names may or may not be unique.
Examples of use
Some of the applications where information is represented as name-value pairs are:
E-mail, in RFC 2822 headers
Query strings, in URLs
Optional elements in network protocols, such as IP, where they often appear as TLV (type–length–value) triples
Bibliographic information, as in BibTeX and Dublin Core metadata
Element attributes in SGML, HTML and XML
Some kinds of database systems – namely a key–value database
OpenStreetMap map data
Windows registry entries
Environment variables
Use in computer languages
Some computer languages implement name–value pairs, or more frequently collections of attribute–value pairs, as standard language features. Most of these implement the general model of an associative array: an unordered list of unique attributes with associated values. As a result, they are not fully general; they cannot be used, for example, to implement electronic mail headers (which are ordered and non-unique).
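To make the limitation concrete, a small sketch (ours): a plain associative array loses the ordering and duplicates that e-mail headers require, while a list of 2-tuples preserves them.

```python
# E-mail headers are ordered and may repeat, so a plain associative array
# (dict) silently loses information, while a list of name-value 2-tuples
# preserves both order and duplicates.
headers = [
    ("Received", "from mx1.example.org"),
    ("Received", "from mx2.example.org"),  # duplicate name: legal in RFC 2822
    ("Subject", "Name-value pairs"),
]

as_dict = dict(headers)
print(len(headers), len(as_dict))  # 3 2 -- the first "Received" was overwritten

# Looking up all values for a repeated attribute from the pair list:
received = [value for name, value in headers if name == "Received"]
print(received)
```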
In some applications, a name–value pair has a value that contains a nested collection of attribute–value pairs. Some data serialization formats such as JSON support arbitrarily deep nesting.
Other data representations are restricted to one level of nesting, such as INI file's section/name/value.
See also
Attribute (computing)
Entity–attribute–value model
Key–value database
Query string |
https://en.wikipedia.org/wiki/PAST%20storage%20utility | PAST is a large-scale, distributed, persistent storage system based on the Pastry peer-to-peer overlay network.
See also
Pastry (DHT) (PAST section)
External links
A. Rowstron and P. Druschel. Storage management and caching in PAST, a large-scale, persistent peer-to-peer storage utility. 18th ACM SOSP'01, Lake Louise, Alberta, Canada, October 2001.
PAST homepages: freepastry.org, Microsoft Research
http://www.cse.lehigh.edu/~brian/course/advanced-networking/reviews/weber-past.html
http://ieeexplore.ieee.org/xpl/abs_free.jsp?arNumber=990064 IEEE
https://en.wikipedia.org/wiki/Singularity%20%28operating%20system%29 | Singularity is an experimental operating system developed by Microsoft Research between July 9, 2003, and February 7, 2015. It was designed as a high dependability OS in which the kernel, device drivers, and application software were all written in managed code. Internal security uses type safety instead of hardware memory protection.
Operation
The lowest-level x86 interrupt dispatch code is written in assembly language and C. Once this code has done its job, it invokes the kernel, whose runtime system and garbage collector are written in Sing# (an extended version of Spec#, itself an extension of C#) and which runs in unprotected mode. The hardware abstraction layer is written in C++ and runs in protected mode. There is also some C code to handle debugging. The computer's basic input/output system (BIOS) is invoked during the 16-bit real mode bootstrap stage; once in 32-bit mode, Singularity never invokes the BIOS again, but invokes device drivers written in Sing#. During installation, Common Intermediate Language (CIL) opcodes are compiled into x86 opcodes using the Bartok compiler.
Security design
Singularity is a microkernel operating system. Unlike most historic microkernels, its components execute in the same address space (process), which contains software-isolated processes (SIPs). Each SIP has its own data and code layout, and is independent from other SIPs. These SIPs behave like normal processes, but avoid the cost of task-switches.
Protection in this system is provided by a set of rules called invariants that are verified by static program analysis. For example, the memory invariant states that there must be no cross-references (or memory pointers) between two SIPs; communication between SIPs occurs via higher-order communication channels managed by the operating system. Invariants are checked during installation of the application. (In Singularity, installation is managed by the operating system.)
Most of the invariants rely on the use of safer memory-man |
https://en.wikipedia.org/wiki/Ground%20tissue | The ground tissue of plants includes all tissues that are neither dermal nor vascular. It can be divided into three types based on the nature of the cell walls. This tissue system is present between the dermal tissue and forms the main bulk of the plant body.
Parenchyma cells have thin primary walls and usually remain alive after they become mature. Parenchyma forms the "filler" tissue in the soft parts of plants, and is usually present in cortex, pericycle, pith, and medullary rays in primary stem and root.
Collenchyma cells have thin primary walls with some areas of secondary thickening. Collenchyma provides extra mechanical and structural support, particularly in regions of new growth.
Sclerenchyma cells have thick lignified secondary walls and often die when mature. Sclerenchyma provides the main structural support to a plant.
Parenchyma
Parenchyma is a versatile ground tissue that generally constitutes the "filler" tissue in soft parts of plants. It forms, among other things, the cortex (outer region) and pith (central region) of stems, the cortex of roots, the mesophyll of leaves, the pulp of fruits, and the endosperm of seeds. Parenchyma cells are often living cells and may remain meristematic, meaning that they are capable of cell division if stimulated. They have thin and flexible cellulose cell walls and are generally polyhedral when close-packed, but can be roughly spherical when isolated from their neighbors. Parenchyma cells are generally large. They have large central vacuoles, which allow the cells to store and regulate ions, waste products, and water. Tissue specialised for food storage is commonly formed of parenchyma cells.
Parenchyma cells have a variety of functions:
In leaves, they form two layers of mesophyll cells immediately beneath the epidermis of the leaf, that are responsible for photosynthesis and the exchange of gases. These layers are called the palisade parenchyma and spongy mesophyll. Palisade parenchyma cells can be either cu |
https://en.wikipedia.org/wiki/ISOLDE | The ISOLDE (Isotope Separator On Line DEvice) Radioactive Ion Beam Facility is an on-line isotope separator facility located at the centre of the CERN accelerator complex on the Franco-Swiss border. Created in 1964, the ISOLDE facility started delivering radioactive ion beams (RIBs) to users in 1967. Originally located at the Synchro-Cyclotron (SC) accelerator (CERN's first ever particle accelerator), the facility has been upgraded several times, most notably in 1992, when the whole facility was moved to be connected to CERN's Proton Synchrotron Booster (PSB). ISOLDE is currently the longest-running facility in operation at CERN, with continuous developments of the facility and its experiments keeping ISOLDE at the forefront of science with RIBs. ISOLDE benefits a wide range of physics communities with applications covering nuclear, atomic, molecular and solid-state physics, but also biophysics and astrophysics, as well as high-precision experiments looking for physics beyond the Standard Model. The facility is operated by the ISOLDE Collaboration, comprising CERN and sixteen (mostly) European countries. As of 2019, close to 1000 experimentalists around the world (including all continents) are coming to ISOLDE to perform typically 50 different experiments per year.
Radioactive nuclei are produced at ISOLDE by shooting a high-energy (1.4 GeV) beam of protons delivered by CERN's PSB accelerator on a 20 cm thick target. Several target materials are used depending on the desired final isotopes that are requested by the experimentalists. The interaction of the proton beam with the target material produces radioactive species through spallation, fragmentation and fission reactions. They are subsequently extracted from the bulk of the target material through thermal diffusion processes by heating the target to about 2000 °C.
The cocktail of produced isotopes is ultimately filtered using one of ISOLDE's two magnetic dipole mass separators to yield the desired isobar of inte |
https://en.wikipedia.org/wiki/Fetch-and-add | In computer science, the fetch-and-add (FAA) CPU instruction atomically increments the contents of a memory location by a specified value.
That is, fetch-and-add performs the operation
increment the value at address x by a, where x is a memory location and a is some value, and return the original value at x,
in such a way that if this operation is executed by one process in a concurrent system, no other process will ever see an intermediate result.
Fetch-and-add can be used to implement concurrency control structures such as mutex locks and semaphores.
Overview
The motivation for having an atomic fetch-and-add is that operations that appear in programming languages as x = x + a are not safe in a concurrent system, where multiple processes or threads are running concurrently (either in a multi-processor system, or preemptively scheduled onto some single-core systems). The reason is that such an operation is actually implemented as multiple machine instructions:
load x into a register;
add a to the register;
store the register value back into x.
When one process is doing x = x + a and another is doing x = x + b concurrently, there is a race condition. They might both fetch the old value x_old and operate on that, then both store their results, with the effect that one overwrites the other and the stored value becomes either x_old + a or x_old + b, not x_old + a + b as might be expected.
In uniprocessor systems with no kernel preemption supported, it is sufficient to disable interrupts before accessing a critical section. However, in multiprocessor systems (even with interrupts disabled) two or more processors could be attempting to access the same memory at the same time. The fetch-and-add instruction allows any processor to atomically increment a value in memory, preventing such multiple processor collisions.
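To illustrate one of the concurrency-control structures mentioned above, here is a sketch (ours) of a ticket lock built from fetch-and-add; since Python exposes no native atomic fetch-and-add, the primitive is emulated with an internal lock purely for demonstration:

```python
import threading
import time

class AtomicCounter:
    """Emulated atomic fetch-and-add (hardware does this in one instruction,
    e.g. x86 LOCK XADD; pure Python needs an internal lock to imitate it)."""
    def __init__(self):
        self._value = 0
        self._guard = threading.Lock()

    def fetch_and_add(self, inc=1):
        with self._guard:
            old = self._value
            self._value += inc
            return old

class TicketLock:
    """Fair mutual exclusion from two counters: take a ticket, then wait
    until the "now serving" counter reaches it."""
    def __init__(self):
        self._next_ticket = AtomicCounter()
        self._now_serving = AtomicCounter()

    def acquire(self):
        my_ticket = self._next_ticket.fetch_and_add(1)
        while self._now_serving.fetch_and_add(0) != my_ticket:
            time.sleep(0)  # yield while spinning

    def release(self):
        self._now_serving.fetch_and_add(1)

# Demo: four threads each increment a shared counter 1,000 times, race-free.
lock, total = TicketLock(), [0]
def worker():
    for _ in range(1000):
        lock.acquire()
        total[0] += 1
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(total[0])  # 4000
```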
Maurice Herlihy (1991) proved that fetch-and-add has a finite consensus number, in contrast to the compare-and-swap operation. The fetch-and-add operation can solve the wait-free consensus problem for no more than two concurrent processes.
Im |
https://en.wikipedia.org/wiki/Quantum%20tomography | Quantum tomography or quantum state tomography is the process by which a quantum state is reconstructed using measurements on an ensemble of identical quantum states. The source of these states may be any device or system which prepares quantum states either consistently into quantum pure states or otherwise into general mixed states. To be able to uniquely identify the state, the measurements must be tomographically complete. That is, the measured operators must form an operator basis on the Hilbert space of the system, providing all the information about the state. Such a set of observations is sometimes called a quorum. The term tomography was first used in the quantum physics literature in a 1993 paper introducing experimental optical homodyne tomography.
In quantum process tomography, on the other hand, known quantum states are used to probe a quantum process to find out how the process can be described. Similarly, quantum measurement tomography works to find out what measurement is being performed. By contrast, randomized benchmarking scalably obtains a figure of merit for the overlap between the error-prone physical quantum process and its ideal counterpart.
The general principle behind quantum state tomography is that by repeatedly performing many different measurements on quantum systems described by identical density matrices, frequency counts can be used to infer probabilities, and these probabilities are combined with Born's rule to determine a density matrix which fits the best with the observations.
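As a minimal concrete instance of this principle (our sketch, for a single qubit): the Pauli expectation values estimated from measurement frequencies determine the density matrix by linear inversion, $\rho = \tfrac{1}{2}(I + \langle X\rangle X + \langle Y\rangle Y + \langle Z\rangle Z)$.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def reconstruct(exp_x, exp_y, exp_z):
    """Linear-inversion tomography for one qubit from Pauli expectation values."""
    return (I2 + exp_x * X + exp_y * Y + exp_z * Z) / 2

# Measurement frequencies consistent with the |+> state give <X> = 1, <Y> = <Z> = 0:
print(np.round(reconstruct(1.0, 0.0, 0.0), 3))  # ~[[0.5, 0.5], [0.5, 0.5]]
```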
This can be easily understood by making a classical analogy. Consider a harmonic oscillator (e.g. a pendulum). The position and momentum of the oscillator at any given point can be measured and therefore the motion can be completely described by the phase space. This is shown in figure 1. By performing this measurement for a large number of identical oscillators we get a probability distribution in the phase space (figure 2). This distribution can be normal |
https://en.wikipedia.org/wiki/Managed%20retreat | Managed retreat involves the purposeful, coordinated movement of people and buildings away from risks. This may involve the movement of a person, infrastructure (e.g., building or road), or community. It can occur in response to a variety of hazards such as flood, wildfire, or drought.
Politicians, insurers and residents are increasingly paying attention to managed retreat from low-lying coastal areas because of the threat of sea-level rise due to climate warming. Trends in climate change predict substantial sea-level rises worldwide, causing damage to human infrastructure through coastal erosion and putting communities at risk of severe coastal flooding.
The type of managed retreat proposed depends on the location and type of natural hazard, and on local policies and practices for managed retreat. In the United Kingdom, managed realignment through removal of flood defences is often a response to sea-level rise exacerbated by local subsidence. In the United States, managed retreat often occurs through voluntary acquisition and demolition or relocation of at-risk properties by government. In the Global South, relocation may occur through government programs. Some low-lying countries, facing inundation due to sea-level rise, are planning for the relocation of their populations, such as Kiribati planning for "Migration with Dignity".
Managed realignment
In the United Kingdom, the main reason for managed realignment is to improve coastal stability, essentially replacing artificial 'hard' coastal defences with natural ‘soft’ coastal landforms. According to University of Southampton researchers Matthew M. Linham and Robert J. Nicholls, "one of the biggest drawbacks of managed realignment is that the option requires land to be yielded to the sea." One of its benefits is that it can help protect land further inland by creating natural spaces that act as buffers to absorb water or dampen the force of waves.
Managed realignment has also been used to mitigate for loss of |
https://en.wikipedia.org/wiki/Kashida | Kashida or Kasheeda (; ; lit. "extended", "stretched", "lengthened") is a type of justification in the Arabic language and in some descendant cursive scripts. In contrast to white-space justification, which increases the length of a line of text by expanding spaces between words or individual letters, kasheeda creates justification by elongating characters at certain points. Kasheeda justification can be combined with white-space justification.
The analog in European (Latin-based) typography (expanding or contracting letters to improve spacing) is sometimes called expansion, and falls within microtypography. Kasheeda is considerably easier and more flexible, however, because Arabic–Persian scripts feature prominent horizontal strokes, whose lengths are accordingly flexible.
For example, and with and without kasheeda may look like the following:
Kasheeda can also refer to a character that represents this elongation ( )—also known as tatweel or taṭwīl ( taṭwīl)—or to one of a set of glyphs of varying lengths that implement this elongation in a font. The Unicode standard assigns code point U+0640 as Arabic Tatweel.
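A tiny sketch (ours) of naive tatweel insertion using the code point above; real kashida justification is far more selective about where elongation may occur, as the surrounding text notes:

```python
TATWEEL = "\u0640"  # ARABIC TATWEEL

def elongate(word, index, width):
    """Naively stretch an Arabic word by inserting tatweel characters
    after the letter at `index`. Real kashida justification is much more
    selective about permitted insertion points (see the text above)."""
    return word[:index + 1] + TATWEEL * width + word[index + 1:]

word = "\u0643\u062a\u0628"   # kaf, ta, ba ("كتب")
print(elongate(word, 0, 3))   # kaf + three tatweels + ta + ba
```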
The kasheeda can take a subtle downward curvature in some calligraphic styles and handwriting. However, the curvilinear stroke is not feasible for most basic fonts, which merely use a completely flat underscore-like stroke for kashida.
In addition to letter spacing and justification, calligraphers also use kasheeda for emphasis and as book or chapter titles.
In modern Arabic mathematical notation, kasheeda appears in some operation symbols that must stretch to accommodate associated contents above or below.
Kasheeda generally only appears in one word per line, and one letter per word. Furthermore, experts recommend kasheeda only between certain combinations of letters (typically those that cannot form a ligature). Some calligraphers who were paid by the page used an inordinate number of kasheeda to stretch content over more pages.
The branding of |
https://en.wikipedia.org/wiki/Astringent | An astringent (sometimes called adstringent) is a chemical that shrinks or constricts body tissues. The word derives from the Latin adstringere, which means "to bind fast". Calamine lotion, witch hazel, and yerba mansa, a Californian plant, are astringents, as are the powdered leaves of the myrtle.
Astringency, the dry, puckering or numbing mouthfeel caused by the tannins in unripe fruits, lets the fruit mature by deterring eating. Ripe fruits and fruit parts including blackthorn (sloe berries), Aronia chokeberry, chokecherry, bird cherry, rhubarb, quince, jabuticaba and persimmon fruits (especially when unripe), banana skins (or unripe bananas), cashew fruits and acorns are astringent. Citrus fruits, like lemons, are somewhat astringent. Tannins, being a kind of polyphenol, bind salivary proteins and make them precipitate and aggregate, producing a rough, "sandpapery", or dry sensation in the mouth. The tannins in some teas, coffee, and red grape wines like Cabernet Sauvignon and Merlot produce mild astringency.
Squirrels, wild boars, and insects can eat astringent food as their tongues are able to handle the taste.
In Ayurveda, astringent is the sixth taste (after sweet, sour, salty, pungent, bitter) represented by "air and earth".
Smoking tobacco is also reported to have an astringent effect.
In a scientific study, astringency was still detectable by subjects who had local anesthesia applied to their taste nerves, but not when both these and the trigeminal nerves were disabled.
Uses
In medicine, astringents cause constriction or contraction of mucous membranes and exposed tissues and are often used internally to reduce discharge of blood serum and mucous secretions. This can happen with a sore throat, hemorrhages, diarrhea, and peptic ulcers. Externally applied astringents, which cause mild coagulation of skin proteins, dry, harden, and protect the skin. People with acne are often advised to use astringents if they have oily skin. Mild astringents relieve s |
https://en.wikipedia.org/wiki/Child%20psychopathology | Child psychopathology refers to the scientific study of mental disorders in children and adolescents. Oppositional defiant disorder, attention-deficit hyperactivity disorder, and autism spectrum disorder are examples of psychopathology that are typically first diagnosed during childhood. Mental health providers who work with children and adolescents are informed by research in developmental psychology, clinical child psychology, and family systems. Lists of child and adult mental disorders can be found in the International Statistical Classification of Diseases and Related Health Problems, 10th Edition (ICD-10), published by the World Health Organization (WHO) and in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), published by the American Psychiatric Association (APA). In addition, the Diagnostic Classification of Mental Health and Developmental Disorders of Infancy and Early Childhood (DC: 0-3R) is used in assessing mental health and developmental disorders in children up to age five.
Causes
The etiology of child psychopathology has many explanations, which differ from case to case. Many psychopathological disorders in children involve genetic and physiological mechanisms, though there are still many without any physical grounds. Diagnosing the psychopathology of children is daunting, and it is imperative that multiple sources of data be gathered; diagnosis is influenced by development and context in addition to the traditional sources. Interviews with parents about school and related matters are inadequate on their own; reports from teachers or direct observation by the professional are critical. The disorders with physical or biological mechanisms are easier to diagnose in children and are often diagnosed earlier in childhood. As psychopathy exists on a spectrum, the initial indications of the disorder can differ greatly. Certain children may exhibit subtle indications as early as two or three years old, while in |
https://en.wikipedia.org/wiki/Tom%20West | Joseph Thomas West III (November 22, 1939 – May 19, 2011) was an American technologist. West is notable for being the key figure in the Pulitzer Prize winning non-fiction book The Soul of a New Machine.
West began his career in computer design at RCA, after seven years at the Smithsonian Astrophysical Observatory, a job he'd gotten right out of college. He started working for Data General in 1974. He became the head of Data General's Eclipse group and then became the lead on the Eagle project, building a machine officially named the Eclipse MV/8000. After the publication of Soul of a New Machine, West was sent to Japan by Data General where he helped design DG-1, the first full-screen laptop. His last project in 1996, a thin Web server, was intended to be an internet-ready machine. West retired as Chief Technologist in 1998.
Personal life
West was married to Elizabeth West in 1965; they divorced in 1994. The couple had two daughters, Katherine West and librarian Jessamyn West. West married Cindy Woodward (his former assistant at Data General) in 2001; the couple divorced in 2011. West died at the age of 71 in his Westport, Massachusetts home of an apparent heart attack. His nephew, Christopher Schwarz, is a former editor of Popular Woodworking magazine, author of The Anarchist's Toolchest, and co-founder of Lost Art Press; West's death prompted Schwarz to "leave the magazine and do my own thing". |
https://en.wikipedia.org/wiki/Environmental%20biotechnology | Environmental biotechnology is biotechnology that is applied to and used to study the natural environment. Environmental biotechnology could also imply that one tries to harness biological processes for commercial uses and exploitation. The International Society for Environmental Biotechnology defines environmental biotechnology as "the development, use and regulation of biological systems for remediation of contaminated environments (land, air, water), and for environment-friendly processes (green manufacturing technologies and sustainable development)".
Environmental biotechnology can simply be described as "the optimal use of nature, in the form of plants, animals, bacteria, fungi and algae, to produce renewable energy, food and nutrients in a synergistic integrated cycle of profit making processes where the waste of each process becomes the feedstock for another process".
Significance for agriculture, food security, climate change mitigation and adaptation and the MDGs
The IAASTD has called for the advancement of small-scale agro-ecological farming systems and technology in order to achieve food security, climate change mitigation, climate change adaptation and the realisation of the Millennium Development Goals. Environmental biotechnology has been shown to play a significant role in agroecology in the form of zero waste agriculture and most significantly through the operation of over 15 million biogas digesters worldwide.
Significance towards industrial biotechnology
Consider the effluent of a starch plant that has mixed with a local water body such as a lake or pond. The result is huge deposits of starch, which, with a few exceptions, are not easily degraded by microorganisms. Microorganisms from the polluted site are scanned for genomic changes that allow them to degrade or utilize the starch better than other microbes of the same genus. The modified genes are then identified. The resultant genes are cloned into industrially significant microorg |
https://en.wikipedia.org/wiki/Histogenesis | Histogenesis is the formation of different tissues from undifferentiated cells. These cells are constituents of three primary germ layers, the endoderm, mesoderm, and ectoderm. The science of the microscopic structures of the tissues formed within histogenesis is termed histology.
Germ layers
A germ layer is a collection of cells formed during animal embryogenesis. Germ layers are typically pronounced in vertebrates; however, all animals more complex than sponges (eumetazoans and agnotozoans) produce two or three primary tissue layers. Animals with radial symmetry, such as cnidarians, produce two layers, called the ectoderm and endoderm; they are diploblastic. Animals with bilateral symmetry produce a third layer in between, called the mesoderm, making them triploblastic. Germ layers eventually give rise to all of an animal's tissues and organs through a process called organogenesis.
Endoderm
The endoderm is one of the germ layers formed during animal embryogenesis. Cells migrating inward along the archenteron form the inner layer of the gastrula, which develops into the endoderm. Initially, the endoderm consists of flattened cells, which subsequently become columnar...
Mesoderm
The mesoderm germ layer forms in the embryos of animals more complex than cnidarians, making them triploblastic. During gastrulation, some of the cells migrating inward to form the endoderm form an additional layer between the endoderm and the ectoderm. A theory suggests that this key innovation evolved hundreds of millions of years ago and led to the evolution of nearly all large, complex animals. The formation of a mesoderm led to the formation of a coelom. Organs formed inside a coelom can freely move, grow, and develop independently of the body wall, while fluid cushions and protects them from shocks.
Ectoderm
The ectoderm is the start of a tissue that covers the body surfaces. It emerges first and forms from the outermost |
https://en.wikipedia.org/wiki/Proportional%20control | Proportional control, in engineering and process control, is a type of linear feedback control system in which a correction is applied to the controlled variable, and the size of the correction is proportional to the difference between the desired value (setpoint, SP) and the measured value (process variable, PV). Two classic mechanical examples are the toilet bowl float proportioning valve and the fly-ball governor.
The proportional control concept is more complex than an on–off control system, such as a bi-metallic domestic thermostat, but simpler than the proportional–integral–derivative (PID) control used in, for example, an automobile cruise control. On–off control will work where the overall system has a relatively long response time, but can result in instability if the system being controlled has a rapid response time. Proportional control overcomes this by modulating the output of the controlling device, such as a control valve, at a level that avoids instability while applying correction as fast as practicable through the optimum amount of proportional gain.
A drawback of proportional control is that it cannot eliminate the residual SP − PV error in processes with compensation, e.g. temperature control, as it requires an error to generate a proportional output. To overcome this, the PI controller was devised, which uses a proportional term (P) to remove the gross error and an integral term (I) to eliminate the residual offset error by integrating the error over time to produce an "I" component for the controller output. A minimal sketch of this idea follows.
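To illustrate, here is a minimal discrete-time PI controller sketch in Python; the gain names (kp, ki), the time step dt, and the closure structure are illustrative assumptions, not details given in the article:

def make_pi_controller(kp: float, ki: float, dt: float):
    """Return a stateful PI controller sampled every dt seconds (sketch)."""
    integral = 0.0
    def pi(setpoint: float, measured: float) -> float:
        nonlocal integral
        error = setpoint - measured   # e(t) = SP - PV
        integral += error * dt        # accumulate the error over time
        # The P term removes the gross error; the I term removes the residual offset.
        return kp * error + ki * integral
    return pi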
Theory
In the proportional control algorithm, the controller output is proportional to the error signal, which is the difference between the setpoint and the process variable. In other words, the output of a proportional controller is the product of the error signal and the proportional gain.
This can be mathematically expressed as

P_out = K_p e(t) + p_0

where
p_0: Controller output with zero error.
P_out: Output of the proportional control |
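As a minimal sketch of this law in Python (variable names are illustrative; p0 is the zero-error bias term defined above):

def proportional_output(setpoint: float, measured: float,
                        kp: float, p0: float = 0.0) -> float:
    """P_out = Kp * e(t) + p0, where e(t) = SP - PV."""
    error = setpoint - measured   # instantaneous process error e(t)
    return kp * error + p0        # proportional correction plus bias

# Example: with Kp = 2 and an error of 5 units, the output is 10.
print(proportional_output(setpoint=70.0, measured=65.0, kp=2.0))  # 10.0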
https://en.wikipedia.org/wiki/Strainmeter | A strainmeter is an instrument used by geophysicists to measure the deformation of the Earth. Linear strainmeters measure the changes in the distance between two points, using either a solid piece of material (over a short distance) or a laser interferometer (over a long distance, up to several hundred meters).
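For a sense of scale, linear strain is the fractional change in length between the two points, strain = ΔL / L. A minimal sketch with hypothetical numbers (not taken from the article):

def linear_strain(baseline_m: float, delta_m: float) -> float:
    """Dimensionless strain: change in length divided by baseline length."""
    return delta_m / baseline_m

# A 100 m laser baseline that lengthens by 1 micrometer gives 1e-8,
# roughly the order of magnitude of solid-earth tidal strains.
print(linear_strain(100.0, 1e-6))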
The type using a solid length standard was invented by Benioff in 1932, using an iron pipe; later instruments used rods made of fused quartz. Modern instruments of this type can measure very small changes in length and are commonly placed in boreholes to measure small changes in the diameter of the borehole. Another type of borehole instrument detects changes in a volume filled with fluid (such as silicone oil). The most common type is the dilatometer invented by Sacks and Evertson in the USA (patent 3,635,076); a design that uses specially shaped volumes to measure the strain tensor has been developed by Sakata in Japan.
All these types of strainmeters can measure deformation over frequencies from a few Hz to periods of days, months, and years. This allows them to measure signals at lower frequencies than can be detected with seismometers. Most strainmeter records show signals from the earth tides, and seismic waves from earthquakes. At longer periods, they can also record the gradual accumulation of stress caused by plate tectonics, the release of this stress in earthquakes, and rapid changes of stress following earthquakes.
The most extensive network of strainmeters is installed in Japan; it includes mostly quartz-bar instruments in tunnels and borehole strainmeters, with a few laser instruments. Starting in 2003 there has been a major effort (the Plate Boundary Observatory) to install many more strainmeters along the Pacific/North-America plate boundary in the United States. The aim is to install about 100 borehole strainmeters, primarily in Washington, Oregon and California, and five laser strainmeters, all in Calif |
https://en.wikipedia.org/wiki/Bill%20Moog | William Moog (August 15, 1915 – 1997) was an American engineer best known for inventing the electrohydraulic servo valve in 1951. He founded Moog Inc., which makes actuators for aircraft.
He was born in New Jersey. His cousin was Robert Moog, one of the leading pioneers of the modern synthesizer.
Moog was married and had three daughters. |
https://en.wikipedia.org/wiki/Mr.%20Red | Mr. Red is the first mascot of the Cincinnati Reds baseball team. He is a humanoid figure dressed in a Reds uniform, with an oversized baseball for a head. Sometimes, Mr. Red is referred to by the team as "The Running Man" for the running pose he strikes on the logo introduced c. 1968.
Mr. Red was created by Henry "Hank" Zureick, the Reds' publicity director. The character first appeared on the cover of the 1953 Cincinnati Red Stockings yearbook, which Zureick also produced, along with many other yearbooks and programs during his career.
Mr. Red made his first appearance on a Reds uniform as a sleeve patch in 1955. The patch featured Mr. Red's head, clad in an old-fashioned white pillbox baseball cap with red stripes. The following season, 1956, saw the Reds adopt sleeveless jerseys, and Mr. Red was eliminated from the home uniform. He was moved to the left breast of the road uniform, and remained there for one season before being eliminated entirely.
In 1999, the Reds re-designed their uniform and "Mr. Red" was reintroduced as a sleeve patch on the undershirt.
A human version of the mascot first appeared in 1972 and went full-time in the 1973 season. By the end of 1973, Tom Kindig had replaced his older brother Chuck as the day-to-day Mr. Red mascot for the remainder of the 1970s. Many viewers saw Mr. Red nationally in Game 5 of the 1975 World Series, when he appeared on screen during the NBC broadcast (see the DVD version available on A&E Video). The mascot disappeared in the late 1980s for unknown reasons. The costumed mascot was reintroduced in 1997.
Mr. Red was joined by Gapper, a new furry mascot created by David Raymond (the original Phillie Phanatic), when the franchise moved to Great American Ball Park in 2003. In 2007, Mr. Red was supplemented by a retro 1950s version known as "Mr. Redlegs", complete with handlebar mustache and old-fashioned baseball uniform. Throughout the 1970s and 1980s, Mr. Red wore uniform number 27.
The humanoid Mr. Red retired in 2007, leaving "Gappe |