source | text |
|---|---|
https://en.wikipedia.org/wiki/Inverse-gamma%20distribution | In probability theory and statistics, the inverse gamma distribution is a two-parameter family of continuous probability distributions on the positive real line, which is the distribution of the reciprocal of a variable distributed according to the gamma distribution.
Perhaps the chief use of the inverse gamma distribution is in Bayesian statistics, where the distribution arises as the marginal posterior distribution for the unknown variance of a normal distribution, if an uninformative prior is used, and as an analytically tractable conjugate prior, if an informative prior is required. It is common among some Bayesians to consider an alternative parametrization of the normal distribution in terms of the precision, defined as the reciprocal of the variance, which allows the gamma distribution to be used directly as a conjugate prior. Other Bayesians prefer to parametrize the inverse gamma distribution differently, as a scaled inverse chi-squared distribution.
Characterization
Probability density function
The inverse gamma distribution's probability density function is defined over the support x > 0 by
f(x; α, β) = (β^α / Γ(α)) x^(−α−1) exp(−β/x),
with shape parameter α and scale parameter β. Here Γ(·) denotes the gamma function.
Unlike the gamma distribution, which contains a somewhat similar exponential term, β is a scale parameter, as the density function satisfies:
f(x; α, β) = f(x/β; α, 1) / β
Cumulative distribution function
The cumulative distribution function is the regularized gamma function
F(x; α, β) = Γ(α, β/x) / Γ(α) = Q(α, β/x),
where the numerator is the upper incomplete gamma function and the denominator is the gamma function. Many math packages allow direct computation of Q, the regularized gamma function.
Moments
Provided that n < α, the n-th moment of the inverse gamma distribution is given by
E[X^n] = β^n / ((α − 1)(α − 2) ⋯ (α − n))
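Both the reciprocal-of-a-gamma characterization and the first-moment formula can be spot-checked numerically; the sketch below uses only the Python standard library, and the parameter values α = 3, β = 2 are arbitrary choices for illustration.

```python
import random
import statistics

random.seed(0)
a, b = 3.0, 2.0  # shape alpha, scale beta (arbitrary test values)

# If G ~ Gamma(shape=a, scale=1/b), then X = 1/G ~ Inverse-Gamma(a, b):
# random.gammavariate takes (shape, scale), so scale 1/b gives rate b.
samples = [1.0 / random.gammavariate(a, 1.0 / b) for _ in range(200_000)]

# First moment: E[X] = beta / (alpha - 1), valid for alpha > 1.
assert abs(statistics.fmean(samples) - b / (a - 1)) < 0.03

# E[1/X] = alpha / beta, since 1/X is the underlying gamma variable.
assert abs(statistics.fmean(1.0 / x for x in samples) - a / b) < 0.03
```

With 200,000 draws the Monte Carlo error on the mean is well below the tolerance used here.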
Characteristic function
K_α(·) in the expression of the characteristic function is the modified Bessel function of the second kind.
Properties
For α > 0 and β > 0,
E[ln X] = ln β − ψ(α)
and
E[1/X] = α / β.
The information entropy is
h = α + ln(β Γ(α)) − (1 + α) ψ(α),
where ψ(·) is the digamma function.
The Kullback-Leibler divergence of Inverse-Gamma(αp, βp) fro |
https://en.wikipedia.org/wiki/Theta%20vacuum | In quantum field theory, the theta vacuum is the semi-classical vacuum state of non-abelian Yang–Mills theories specified by the vacuum angle θ that arises when the state is written as a superposition of an infinite set of topologically distinct vacuum states. The dynamical effects of the vacuum are captured in the Lagrangian formalism through the presence of a θ-term which in quantum chromodynamics leads to the fine tuning problem known as the strong CP problem. It was discovered in 1976 by Curtis Callan, Roger Dashen, and David Gross, and independently by Roman Jackiw and Claudio Rebbi.
Yang–Mills vacuum
Topological vacua
The semi-classical vacuum structure of non-abelian Yang–Mills theories is often investigated in Euclidean spacetime in some fixed gauge, such as the temporal gauge A₀ = 0. Classical ground states of this theory have a vanishing field strength tensor F_μν = 0, which corresponds to pure gauge configurations A_i = (i/g) U ∂_i U⁻¹, where U(x), at each point in spacetime, is some gauge transformation belonging to the non-abelian gauge group G. To ensure that the action is finite, U approaches some fixed value U_∞ as |x| → ∞. Since all points at spatial infinity now behave as a single new point, the spatial manifold behaves as a 3-sphere S³, so that every pure gauge choice for the gauge field is described by a mapping S³ → G.
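These mappings are classified by an integer winding number; for G = SU(2) a commonly used expression is the following, quoted here as a sketch of the standard convention (normalization and sign differ between references):

```latex
n = \frac{1}{24\pi^2} \int d^3x \,\epsilon^{ijk}\,
    \operatorname{Tr}\!\left[(U\partial_i U^{-1})(U\partial_j U^{-1})(U\partial_k U^{-1})\right]
```

Two configurations with different n cannot be connected by a smooth gauge transformation, which is what makes the vacua topologically distinct.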
When every ground state configuration can be smoothly transformed into every other ground state configuration through a smooth gauge transformation then the theory has a single vacuum state, but if there are topologically distinct configurations then it has multiple vacua. This is because if there are two different configurations that are not smoothly connected, then to transform one into the other one must pass through a configuration with non-vanishing field strength tensor, which will have non-zero energy. This means that there is an energy barrier between the two vacua, making them distinct.
The question of whether two gauge configurations can be smoothly deformed into each other is |
https://en.wikipedia.org/wiki/Bandwidth%20throttling | Bandwidth throttling is the intentional limiting of the communication speed (in bytes or kilobytes per second) of incoming (received) or outgoing (sent) data in a network node or in a network device.
The data speed and rendering may be limited depending on various parameters and conditions.
Overview
Limiting the speed of data sent by a data originator (a client computer or a server computer) is much more efficient than limiting the speed in an intermediate network device between client and server. In the first case usually no network packets are lost, whereas in the second case network packets can be lost or discarded whenever the incoming data speed exceeds the bandwidth limit or the capacity of the device and data packets cannot be temporarily stored in a buffer queue (because it is full or does not exist); the purpose of such a buffer queue is to absorb peaks of incoming data for very short time lapses.
In the second case discarded data packets can be resent by the transmitter and received again.
When a low-level network device discards incoming data packets, it can usually also notify the transmitter of that fact, in order to slow down the transmission speed (see also network congestion).
NOTE: Bandwidth throttling should not be confused with rate limiting, which operates on client requests at the application-server level and/or at the network-management level (i.e. by inspecting protocol data packets). Rate limiting can also help keep peaks of data speed under control.
These bandwidth limitations can be implemented:
at application level (by a client program or a server program, i.e. an FTP server, web server, etc.), which can be run and configured to throttle data sent through the network or even to throttle data received from the network (by reading data at most at a throttled amount per second);
at network level (typically done by an ISP).
The first kind of limitation (by a client/server program) is usually perfectly legitimate, because it is a choice of the client manager or the server manager (by the server administrator) to limit
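An application-level limiter of the sort described above can be sketched in a few lines of Python; `throttled_copy` is a hypothetical helper, not taken from any particular server program, and it enforces the limit simply by sleeping whenever the observed average rate gets ahead of the configured one.

```python
import time

def throttled_copy(src, dst, max_bytes_per_sec, chunk_size=16 * 1024):
    """Copy src to dst, reading at most max_bytes_per_sec on average."""
    start = time.monotonic()
    total = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)
        total += len(chunk)
        # If we are ahead of schedule, sleep until the average rate
        # falls back under the configured limit.
        expected = total / max_bytes_per_sec
        elapsed = time.monotonic() - start
        if expected > elapsed:
            time.sleep(expected - elapsed)
    return total
```

At 100 kB/s, copying 50 kB this way takes roughly half a second; a real server would typically also bound its buffers and handle partial writes.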
https://en.wikipedia.org/wiki/Antrum | This is a disambiguation page for the biological term. For the 2018 horror movie, see Antrum (film).
In biology, antrum is a general term for a cavity or chamber, which may have specific meaning in reference to certain organs or sites in the body.
In vertebrates, it may refer specifically to:
Antrum follicularum, the cavity in the epithelium that envelops the oocyte
Mastoid antrum, a cavity between the middle ear and temporal bone in the skull
Stomach antrum, either
Pyloric antrum, the lower portion of the stomach. This is what is usually referred to as "antrum" in stomach-related topics
or Antrum cardiacum, a dilation that occurs in the esophagus near the stomach (forestomach)
Maxillary antrum or antrum of Highmore, the maxillary sinus, a cavity in the maxilla and the largest of the paranasal sinuses
In invertebrates, it may refer specifically to:
Antrum of female lepidoptera genitalia
Anatomy |
https://en.wikipedia.org/wiki/Artificial%20intelligence%20in%20video%20games | In video games, artificial intelligence (AI) is used to generate responsive, adaptive or intelligent behaviors, primarily in non-player characters (NPCs), similar to human-like intelligence. Artificial intelligence has been an integral part of video games since their inception in the 1950s. AI in video games is a distinct subfield and differs from academic AI. It serves to improve the game-player experience rather than machine learning or decision making. During the golden age of arcade video games, the idea of AI opponents was largely popularized in the form of graduated difficulty levels, distinct movement patterns, and in-game events dependent on the player's input. Modern games often implement existing techniques such as pathfinding and decision trees to guide the actions of NPCs. AI is often used in mechanisms which are not immediately visible to the user, such as data mining and procedural content generation.
In general, game AI does not mean, as might be thought and is sometimes depicted, a realization of an artificial person corresponding to an NPC in the manner of the Turing test or an artificial general intelligence.
Overview
The term "game AI" is used to refer to a broad set of algorithms that also include techniques from control theory, robotics, computer graphics and computer science in general, and so video game AI may often not constitute "true AI" in that such techniques do not necessarily facilitate computer learning or other standard criteria, only constituting "automated computation" or a predetermined and limited set of responses to a predetermined and limited set of inputs.
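As a concrete instance of such "automated computation", the pathfinding mentioned above can be as simple as a breadth-first search over a tile grid; this generic sketch is not tied to any particular game or engine.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path on a 2D grid; grid[r][c] == 1 marks a wall."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Reconstruct the path by walking parent links back to start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and nxt not in parents:
                parents[nxt] = cell
                queue.append(nxt)
    return None  # no route: the NPC simply cannot reach the goal

# A wall across the middle row forces a detour around the right side.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
route = bfs_path(grid, (0, 0), (2, 0))
```

The NPC's behavior is fully determined by the map and the algorithm, with no learning involved, which is exactly the point being made above.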
Many industries and corporate voices claim that so-called video game AI has come a long way in the sense that it has revolutionized the way humans interact with all forms of technology, although many expert researchers are skeptical of such claims, and particularly of the notion that such technologies fit the definition of "intelligence" standardly used in the |
https://en.wikipedia.org/wiki/List%20of%20video%20editing%20software | The following is a list of video editing software.
The criterion for inclusion in this list is the ability to perform non-linear video editing. Most modern transcoding software supports transcoding a portion of a video clip, which would count as cropping and trimming. However, items in this article must meet one of the following conditions:
Can perform other non-linear video editing functions such as montage or compositing
Can do the trimming or cropping without transcoding
Free (libre) or open-source
The software listed in this section is either free software or open source, and may or may not be commercial.
Active and stable
Avidemux (Linux, macOS, Windows)
LosslessCut (Linux, macOS, Windows)
Blender VSE (Linux, FreeBSD, macOS, Windows)
Cinelerra (Linux, FreeBSD)
FFmpeg (Linux, macOS, Windows) – CLI only; no visual feedback
Flowblade (Linux)
Kdenlive (Linux, FreeBSD, macOS, Windows)
LiVES (BSD, IRIX, Linux, Solaris)
Olive (Linux, macOS, Windows) - currently in alpha
OpenShot (Linux, FreeBSD, macOS, Windows)
Pitivi (Linux, FreeBSD)
Shotcut (Linux, FreeBSD, macOS, Windows)
Inactive
Kino (Linux, FreeBSD)
VirtualDub (Windows)
VirtualDubMod (Windows)
VideoLAN Movie Creator (VLMC) (Linux, macOS, Windows)
Proprietary (non-commercial)
The software listed in this section is proprietary, and freeware or freemium.
Active
ActivePresenter (Windows) – Also screencast software
DaVinci Resolve (macOS, Windows, Linux)
Freemake Video Converter (Windows)
iMovie (iOS, macOS)
ivsEdits (Windows)
Lightworks (Windows, Linux, macOS)
Microsoft Photos (Windows)
showbox.com (Windows, macOS)
VideoPad Home Edition (Windows, macOS, iPad, Android)
VSDC Free Video Editor (Windows)
WeVideo (Web app)
YouTube Create (Android)
Discontinued
Adobe Premiere Express (Web app)
Pixorial (Web app)
VideoThang (Windows)
Windows Movie Maker (Windows)
Proprietary (commercial)
The software listed in this section is proprietary and commercial.
Active
Adobe After Effects (macOS, Windows)
Adobe Premier |
https://en.wikipedia.org/wiki/Tutte%20theorem | In the mathematical discipline of graph theory the Tutte theorem, named after William Thomas Tutte, is a characterization of finite undirected graphs with perfect matchings. It is a generalization of Hall's marriage theorem from bipartite to arbitrary graphs. It is a special case of the Tutte–Berge formula.
Intuition
The goal is to characterize all graphs that do not have a perfect matching. Start with the most obvious case of a graph without a perfect matching: a graph with an odd number of vertices. In such a graph, any matching leaves at least one unmatched vertex, so it cannot be perfect.
A slightly more general case is a disconnected graph in which one or more components have an odd number of vertices (even if the total number of vertices is even). Let us call such components odd components. In any matching, each vertex can only be matched to vertices in the same component. Therefore, any matching leaves at least one unmatched vertex in every odd component, so it cannot be perfect.
Next, consider a graph G with a vertex u such that, if we remove from G the vertex u and its adjacent edges, the remaining graph (denoted G − u) has two or more odd components. As above, any matching leaves, in every odd component, at least one vertex that is unmatched to other vertices in the same component. Such a vertex can only be matched to u. But since there are two or more such unmatched vertices, and only one of them can be matched to u, at least one other vertex remains unmatched, so the matching is not perfect.
Finally, consider a graph G with a set of vertices U such that, if we remove from G the vertices in U and all edges adjacent to them, the remaining graph (denoted G − U) has more than |U| odd components. As explained above, any matching leaves at least one unmatched vertex in every odd component, and these can be matched only to vertices of U; but there are not enough vertices in U for all these unmatched vertices, so the matching is not perfect.
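The condition just derived (for every vertex set U, the number of odd components left after deleting U is at most the size of U) can be checked by brute force on small graphs; the sketch below is exponential in the number of vertices, so it is only an illustration.

```python
from itertools import combinations

def components(vertices, adj):
    """Connected components of the subgraph induced by `vertices`."""
    vertices = set(vertices)
    comps = []
    while vertices:
        stack = [vertices.pop()]
        comp = set(stack)
        while stack:
            v = stack.pop()
            for w in adj[v] & vertices:
                vertices.remove(w)
                comp.add(w)
                stack.append(w)
        comps.append(comp)
    return comps

def tutte_condition(adj):
    """True iff every vertex set U leaves at most |U| odd components."""
    V = set(adj)
    for k in range(len(V) + 1):
        for U in combinations(V, k):
            rest = V - set(U)
            odd = sum(1 for c in components(rest, adj) if len(c) % 2 == 1)
            if odd > k:
                return False
    return True

# A 4-cycle has a perfect matching, so it satisfies the condition;
# a path on 3 vertices has odd order, so U = {} already fails.
square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
path3 = {0: {1}, 1: {0, 2}, 2: {1}}
```

Tutte's theorem says this condition is not just necessary but also sufficient for a perfect matching.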
We have arrived at a necessary co |
https://en.wikipedia.org/wiki/Averrhoa%20bilimbi | Averrhoa bilimbi (commonly known as bilimbi, cucumber tree, or tree sorrel) is a fruit-bearing tree of the genus Averrhoa, family Oxalidaceae. It is believed to be originally native to the Maluku Islands of Indonesia but has naturalized and is common throughout Southeast Asia. It is cultivated in parts of tropical South Asia and the Americas. It bears edible extremely sour fruits. It is a close relative of the carambola tree.
Description
Averrhoa bilimbi is a small tropical tree reaching up to 15 m in height. It is often multitrunked, quickly dividing into ramifications. Bilimbi leaves are alternate, pinnate, and measure approximately 30–60 cm in length. Each leaf contains 11–37 leaflets, ovate to oblong, 2–10 cm long and 1–2 cm wide, clustered at branch extremities. The leaves are quite similar to those of the Otaheite gooseberry. The tree is cauliflorous, with 18–68 flowers in panicles that form on the trunk and other branches. The flowers are heterotristylous and fragrant, borne in a pendulous panicle inflorescence, with a corolla of 5 petals 10–30 mm long, yellowish green to reddish purple.
The fruit is ellipsoidal, elongated, measuring about 4–10 cm, and sometimes faintly 5-angled. The skin is smooth to slightly bumpy, thin, and waxy, turning from light green to yellowish-green when ripe. The flesh is crisp and the juice is sour and extremely acidic, and therefore not typically consumed as fresh fruit by itself.
Distribution and habitat
A. bilimbi is believed to be originally native to the Maluku Islands of Indonesia; the species is now cultivated and found throughout Indonesia, Timor-Leste, the Philippines, Sri Lanka, Bangladesh, the Maldives, Myanmar (Burma) and Malaysia. It is also common in other Southeast Asian countries. In India, where it is usually found in gardens, the bilimbi has gone wild in the warmest regions of the country. It is also seen in coastal regions of South India.
Outside of Asia, the tree is cultivated in Zanzibar. In 1793, the bilimbi was introduc |
https://en.wikipedia.org/wiki/Moons%20of%20Mars | The two moons of Mars are Phobos and Deimos. They are irregular in shape. Both were discovered by American astronomer Asaph Hall in August 1877 and are named after the Greek mythological twin characters Phobos (fear and panic) and Deimos (terror and dread) who accompanied their father Ares into battle. Ares, the god of war, was known to the Romans as Mars.
Compared to the Earth's Moon, the moons Phobos and Deimos are small. Phobos has a diameter of 22.2 km (13.8 mi) and a mass of 1.08×10^16 kg, while Deimos measures 12.6 km (7.8 mi) across, with a mass of 2.0×10^15 kg. Phobos orbits closer to Mars, with a semi-major axis of 9,376 km and an orbital period of 7.66 hours, while Deimos orbits farther, with a semi-major axis of 23,463 km and an orbital period of 30.35 hours.
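The quoted orbital periods are consistent with Kepler's third law; the check below assumes the commonly tabulated gravitational parameter of Mars, GM ≈ 4.2828 × 10^13 m³/s².

```python
import math

GM_MARS = 4.2828e13  # m^3/s^2, standard gravitational parameter of Mars

def orbital_period_hours(semi_major_axis_m):
    """Kepler's third law: T = 2*pi*sqrt(a^3 / GM)."""
    return 2 * math.pi * math.sqrt(semi_major_axis_m**3 / GM_MARS) / 3600

phobos_T = orbital_period_hours(9.376e6)   # close to the quoted 7.66 h
deimos_T = orbital_period_hours(23.463e6)  # close to the quoted 30.35 h
```

Both computed periods agree with the quoted figures to within a few hundredths of an hour.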
History
Early speculation
Speculation about the existence of the moons of Mars began when the moons of Jupiter were discovered. When Galileo Galilei, to conceal his observation of two bumps on the sides of Saturn (later discovered to be its rings), used the anagram smaismrmilmepoetaleumibunenugttauiras for Altissimum planetam tergeminum observavi ("I have observed the most distant planet to have a triple form"), Johannes Kepler misinterpreted it to mean Salve umbistineum geminatum Martia proles (Hello, furious twins, sons of Mars).
Perhaps inspired by Kepler (and quoting Kepler's third law of planetary motion), Jonathan Swift's satire Gulliver's Travels (1726) refers to two moons in Part 3, Chapter 3 (the "Voyage to Laputa"), in which Laputa's astronomers are described as having discovered two satellites of Mars orbiting at distances of 3 and 5 Martian diameters with periods of 10 and 21.5 hours. Phobos and Deimos (both found in 1877, more than a century after Swift's novel) have actual orbital distances of 1.4 and 3.5 Martian diameters, and their respective orbital periods are 7.66 and 30.35 hours. In the 20th century, V. G. Perminov, a spacecraft designer of early Soviet Mars and Venus spacecraft, sp |
https://en.wikipedia.org/wiki/Graph%20property | In graph theory, a graph property or graph invariant is a property of graphs that depends only on the abstract structure, not on graph representations such as particular labellings or drawings of the graph.
Definitions
While graph drawing and graph representation are valid topics in graph theory, in order to focus only on the abstract structure of graphs, a graph property is defined to be a property preserved under all possible isomorphisms of a graph. In other words, it is a property of the graph itself, not of a specific drawing or representation of the graph.
Informally, the term "graph invariant" is used for properties expressed quantitatively, while "property" usually refers to descriptive characterizations of graphs. For example, the statement "graph does not have vertices of degree 1" is a "property" while "the number of vertices of degree 1 in a graph" is an "invariant".
More formally, a graph property is a class of graphs with the property that any two isomorphic graphs either both belong to the class, or both do not belong to it. Equivalently, a graph property may be formalized using the indicator function of the class, a function from graphs to Boolean values that is true for graphs in the class and false otherwise; again, any two isomorphic graphs must have the same function value as each other. A graph invariant or graph parameter may similarly be formalized as a function from graphs to a broader class of values, such as integers, real numbers, sequences of numbers, or polynomials, that again has the same value for any two isomorphic graphs.
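The indicator-function view can be made concrete with the article's own example; in this sketch, "number of vertices of degree 1" is an invariant (graphs to integers) and "has no vertex of degree 1" is a property (graphs to Booleans), and both agree on two relabelled, hence isomorphic, copies of a path.

```python
def degrees(edges):
    """Degree of each vertex of an undirected graph given as an edge list."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return deg

def num_degree_one(edges):   # an invariant: graphs -> integers
    return sum(1 for d in degrees(edges).values() if d == 1)

def no_degree_one(edges):    # a property: graphs -> Booleans
    return num_degree_one(edges) == 0

# Two isomorphic graphs: the path a-b-c and its relabelling x-y-z.
g1 = [("a", "b"), ("b", "c")]
g2 = [("x", "y"), ("y", "z")]
```

Because both functions depend only on degrees, not on labels, isomorphic graphs necessarily get the same values.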
Properties of properties
Many graph properties are well-behaved with respect to certain natural partial orders or preorders defined on graphs:
A graph property P is hereditary if every induced subgraph of a graph with property P also has property P. For instance, being a perfect graph or being a chordal graph are hereditary properties.
A graph property is monotone if every subgraph of a graph with property P al |
https://en.wikipedia.org/wiki/PRIMOS | PRIMOS is a discontinued operating system developed during the 1970s by Prime Computer for its minicomputer systems. It rapidly gained popularity and by the mid-1980s was a serious contender as a mainline minicomputer operating system.
With the advent of PCs and the decline of the minicomputer industry, Prime was forced out of the market in the early 1990s, and by the end of 2010 the trademarks for both PRIME and PRIMOS no longer existed.
Prime had also offered a customizable real-time OS called RTOS.
Internals
One feature of PRIMOS was that, like UNIX, it was largely written in a high-level language (with callable assembly-language library functions available). At first, this language was FORTRAN IV, which was an odd choice from a pure computer-science standpoint: no pointers, no if-then-else, no native string type, etc. FORTRAN was, however, the language best known to engineers, and engineers were a big market for Prime in its early years.
The unusual choice of FORTRAN for the OS programming language had to do with the people who founded Prime. They had worked for Honeywell on a NASA project. FORTRAN was the language they had used both at NASA and, for many of them, at MIT.
Honeywell, at that time, was uninterested in minicomputers, so they left and founded Prime, "taking" the code with them. They developed hardware optimized to run FORTRAN, including machine instructions that directly implemented FORTRAN's distinctive 3-way branch operation.
Since Prime's hardware did not perform byte addressing, there was no impetus to create a C compiler. Late models of the hardware were eventually modified to support I-mode, allowing programs to be compiled in C.
Later, around version 18, a version of PL/I called PL/P became the high-level language of choice within PRIMOS, and the PL/P and Modula-2 languages were used in the kernel. Furthermore, some new PRIMOS utilities were written in SP/L, which was similar to PL/P.
The source code to PRIMOS was available to customers |
https://en.wikipedia.org/wiki/Baseline%20%28configuration%20management%29 | In configuration management, a baseline is an agreed description of the attributes of a product, at a point in time, which serves as a basis for defining change. A change is a movement from this baseline state to a next state. The identification of significant changes from the baseline state is the central purpose of baseline identification.
Typically, significant states are those that receive a formal approval status, either explicitly or implicitly. An approval status may be attributed to individual items, when a prior definition for that status has been established by project leaders, or signified by mere association to a particular established baseline. Nevertheless, this approval status is usually recognized publicly. A baseline may be established for the singular purpose of marking an approved configuration item, e.g. a project plan that has been signed off for execution. Associating multiple configuration items to such a baseline indicates those items as also being approved. Baselines may also be used to mark milestones. A baseline may refer to a single work product, or a set of work products that can be used as a logical basis for comparison.
Most baselines are established at a fixed point in time and serve to continue to reference that point (identification of state). However, some baselines, dynamic baselines, are established to carry forward as a reference to the item itself regardless of any changes to the item. These latter baselines evolve with the progression of the work effort but continue to identify notable work products in the project. Retrieving such a dynamic baseline obtains the current revision of only these notable items in the project.
While marking approval status covers the majority of uses for a baseline, multiple fixed baselines may also be established to monitor the progress of work through the passage of time. In this case, each baseline is a visible measure through an endured team effort, e.g. a series of developmental baselines. T |
https://en.wikipedia.org/wiki/Derivation%20of%20the%20Schwarzschild%20solution | The Schwarzschild solution describes spacetime under the influence of a massive, non-rotating, spherically symmetric object. It is considered by some to be one of the simplest and most useful solutions to the Einstein field equations.
Assumptions and notation
Working in a coordinate chart with coordinates (r, θ, φ, t) labelled 1 to 4 respectively, we begin with the metric in its most general form (10 independent components, each of which is a smooth function of 4 variables). The solution is assumed to be spherically symmetric, static and vacuum. For the purposes of this article, these assumptions may be stated as follows (see the relevant links for precise definitions):
A spherically symmetric spacetime is one that is invariant under rotations and taking the mirror image.
A static spacetime is one in which all metric components are independent of the time coordinate t (so that ∂g_μν/∂t = 0) and the geometry of the spacetime is unchanged under a time-reversal t → −t.
A vacuum solution is one that satisfies T_μν = 0. From the Einstein field equations (with zero cosmological constant), this implies that R_μν = 0, since contracting R_μν − (R/2) g_μν = 0 yields R = 0.
The metric signature used here is (+,+,+,−).
Diagonalising the metric
The first simplification to be made is to diagonalise the metric. Under the coordinate transformation (r, θ, φ, t) → (r, θ, φ, −t), all metric components should remain the same. The metric components g_μ4 (μ ≠ 4) change under this transformation as:
g'_μ4 = −g_μ4 (μ ≠ 4)
But, as we expect g'_μ4 = g_μ4 (metric components remain the same), this means that:
g_μ4 = 0 (μ ≠ 4)
Similarly, the coordinate transformations θ → −θ and φ → −φ respectively give:
g_μ2 = 0 (μ ≠ 2)
g_μ3 = 0 (μ ≠ 3)
Putting all these together gives:
g_μν = 0 (μ ≠ ν)
and hence the metric must be of the form:
ds² = g_11 dr² + g_22 dθ² + g_33 dφ² + g_44 dt²
where the four metric components are independent of the time coordinate (by the static assumption).
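For orientation, the end point of the derivation is the Schwarzschild line element, which in the coordinate order and (+,+,+,−) signature used here reads as follows (with m = GM/c², quoted for reference rather than derived at this step):

```latex
ds^2 = \left(1-\frac{2m}{r}\right)^{-1} dr^2
     + r^2\, d\theta^2
     + r^2 \sin^2\theta \, d\varphi^2
     - c^2\left(1-\frac{2m}{r}\right) dt^2
```

The remaining sections determine the four diagonal components one by one until this form is reached.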
Simplifying the components
On each hypersurface of constant t, constant θ and constant φ (i.e., on each radial line), g_11 should only depend on r (by spherical symmetry). Hence g_11 is a function of a single variable:
g_11 = A(r)
A similar argument applied to |
https://en.wikipedia.org/wiki/PBASIC | PBASIC is a microcontroller-based version of BASIC created by Parallax, Inc. in 1992.
PBASIC was created to bring ease of use to the microcontroller and embedded processor world. It is used for writing code for the BASIC Stamp microcontrollers. After the code is written, it is tokenized and loaded into an EEPROM on the microcontroller. These tokens are fetched by the microcontroller and used to generate instructions for the processor.
Syntax
When starting a PBASIC file, the programmer defines the version of the BASIC Stamp and the version of PBASIC that will be used. Variables and constants are usually declared at the start of a program. The DO...LOOP, the FOR...NEXT loop, IF...ENDIF, and some standard BASIC commands are part of the language, but many commands like PULSOUT, HIGH, LOW, DEBUG, and FREQOUT are native to PBASIC and are used for special purposes that are not available in traditional BASIC (such as having the BASIC Stamp ring a piezoelectric speaker).
Programming
In the Stamp Editor, the PBASIC integrated development environment (IDE) running on a (Windows) PC, the programmer has to select one of seven different BASIC Stamp models: BS1, BS2, BS2E, BS2SX, BS2P, BS2PE, and BS2PX. This is done by using one of these directives:
' {$STAMP BS1}
' {$STAMP BS2}
' {$STAMP BS2e}
' {$STAMP BS2sx}
' {$STAMP BS2p}
' {$STAMP BS2pe}
' {$STAMP BS2px}
The programmer must also select which PBASIC version to use, which may be expressed with directives such as these:
' {$PBASIC 1.0} ' use version 1.0 syntax (BS1 only)
' {$PBASIC 2.0} ' use version 2.0 syntax
' {$PBASIC 2.5} ' use version 2.5 syntax
An example of a program using HIGH and LOW to make an LED blink, along with a DO...LOOP would be:
DO
HIGH 1 'turn LED on I/O pin 1 on
PAUSE 1000 'keep it on for 1 second
LOW 1 'turn it off
PAUSE 500 'keep it off for 500 msec
LOOP 'repeat forever
An example of a pr |
https://en.wikipedia.org/wiki/Foe%20%28unit%29 | A foe is a unit of energy equal to 10^44 joules or 10^51 ergs, used to express the large amount of energy released by a supernova. An acronym for "[ten to the power of] fifty-one ergs", the term was introduced by Gerald E. Brown of Stony Brook University in his work with Hans Bethe, because "it came up often enough in our work".
Without mentioning the foe, Steven Weinberg proposed in 2006 "a new unit called the bethe" (B) with the same value, to "replace" it.
This unit of measure is convenient because a supernova typically releases about one foe of observable energy in a very short period (which can be measured in seconds). In comparison, if the Sun's current luminosity were the same as its average luminosity over its lifetime, it would release 3.827×10^26 W × 3.1536×10^7 s/yr × 10^10 yr ≈ 1.2 foe. One solar mass has a rest-mass energy of 1787 foe.
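The arithmetic in the comparison above is easy to reproduce; the solar mass and speed of light used for the rest-energy figure are the commonly tabulated values.

```python
SOLAR_LUMINOSITY = 3.827e26   # W
SECONDS_PER_YEAR = 3.1536e7   # s
SOLAR_LIFETIME = 1e10         # yr
FOE = 1e44                    # J

# Lifetime radiative output of the Sun, expressed in foe.
lifetime_output_foe = SOLAR_LUMINOSITY * SECONDS_PER_YEAR * SOLAR_LIFETIME / FOE

# Rest-mass energy of one solar mass, E = m c^2, in foe.
SOLAR_MASS = 1.9885e30        # kg
C = 2.99792458e8              # m/s
rest_energy_foe = SOLAR_MASS * C**2 / FOE
```

The first figure comes out near 1.2 foe and the second near 1787 foe, matching the text.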
See also
Orders of magnitude (energy) |
https://en.wikipedia.org/wiki/Product%20software%20implementation%20method | A product software implementation method is a systematically structured approach to effectively integrate a software based service or component into the workflow of an organizational structure or an individual end-user.
This entry focuses on the process-modeling side of the implementation of "large" product software (explained in the section on complexity differences), using the implementation of enterprise resource planning (ERP) systems as the main example to elaborate on.
Overview
A product software implementation method is a blueprint to get users and/or organizations running with a specific software product.
The method is a set of rules and views to cope with the most common issues that occur when implementing a software product: business alignment from the organizational view and acceptance from human view.
The implementation of product software, as the final link in the deployment chain of software production, is a major issue from a financial perspective.
It is stated that the implementation of (product) software consumes up to one third of the budget of a software purchase (more than hardware and software requirements together).
Implementation complexity differences
The complexity of implementing product software differs on several issues.
Examples are: the number of end users that will use the product software, the effects that the implementation has on changes of tasks and responsibilities for the end user, the culture and the integrity of the organization where the software is going to be used and the budget available for acquiring product software.
In general, differences are identified on a scale of size (bigger, smaller, more, less).
An example of the “smaller” product software is the implementation of an office package.
Although there could be many end users in an organization, the impact on the tasks and responsibilities of the end users will not be too intense, as the daily workflow of the end user does not change significantly.
An example of “lar |
https://en.wikipedia.org/wiki/1023%20%28number%29 | 1023 (one thousand [and] twenty-three) is the natural number following 1022 and preceding 1024.
In mathematics
1023 is the tenth Mersenne number, i.e. of the form 2^n − 1 (here with n = 10).
In binary it is also the tenth repdigit, 1111111111_2, as all Mersenne numbers in decimal are repdigits in binary.
It is equal to the sum of five consecutive prime numbers 193 + 197 + 199 + 211 + 223.
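The number-theoretic claims above are quick to verify directly:

```python
# 1023 is the tenth Mersenne number, 2^10 - 1.
assert 2**10 - 1 == 1023

# Its binary expansion is the ten-digit repdigit 1111111111.
assert bin(1023) == "0b1111111111"

# It is the sum of five consecutive primes.
assert 193 + 197 + 199 + 211 + 223 == 1023
```

All three assertions pass without any library support.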
It is the number of three-dimensional polycubes with 7 cells.
1023 is the number of elements in the 9-simplex, as well as the number of uniform polytopes in the tenth-dimensional hypercubic family , and the number of noncompact solutions in the family of paracompact honeycombs that shares symmetries with .
In other fields
Computing
Floating-point units in computers often implement the IEEE 754 64-bit floating-point format, which stores an excess-1023 exponent in an 11-bit binary field. In this format, also called binary64, the exponent of a floating-point number appears as an unsigned binary integer from 0 to 2047, where subtracting 1023 from it gives the actual signed value.
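The excess-1023 encoding can be inspected by reinterpreting a double's bit pattern with the standard library:

```python
import struct

def biased_exponent(x: float) -> int:
    """Return the raw 11-bit exponent field of an IEEE 754 binary64 value."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    return (bits >> 52) & 0x7FF

# 1.0 = 1.0 * 2^0, so its stored exponent is the bias itself: 0 + 1023.
assert biased_exponent(1.0) == 1023
# 2.0 = 1.0 * 2^1 stores 1 + 1023, and 0.5 = 1.0 * 2^-1 stores -1 + 1023.
assert biased_exponent(2.0) == 1024
assert biased_exponent(0.5) == 1022
```

Subtracting the bias 1023 from the stored field recovers the true exponent.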
1023 is the number of dimensions or length of messages of an error-correcting Reed-Muller code made of 64 block codes.
Technology
The Global Positioning System (GPS) works on a ten-bit binary week counter that counts up to 1023 weeks, at which point an integer overflow causes its internal value to roll over to zero again.
1023, being 2^10 − 1, is the maximum value that a 10-bit ADC can return when measuring the highest voltage in its range.
See also
The year AD 1023 |
https://en.wikipedia.org/wiki/Heston%20Blumenthal | Heston Marc Blumenthal (; born 27 May 1966) is a British celebrity chef, TV personality and food writer. Blumenthal is regarded as a pioneer of multi-sensory cooking, food pairing and flavour encapsulation. He came to public attention with unusual recipes, such as bacon-and-egg ice cream and snail porridge. His recipes for triple-cooked chips and soft-centred Scotch eggs have been widely imitated. He has advocated a scientific approach to cooking, for which he has been awarded honorary degrees from the universities of Reading, Bristol and London and made an honorary Fellow of the Royal Society of Chemistry.
Blumenthal's public profile has been increased by a number of television series, most notably for Channel 4, as well as a product range for the Waitrose supermarket chain introduced in 2010. He is the proprietor of the Fat Duck in Bray, Berkshire, a three-Michelin-star restaurant which is widely regarded as one of the best in the world. Blumenthal also owns Dinner, a two-Michelin-star restaurant in London, and a pub in Bray, the Hind's Head, with one Michelin star.
Early life
Heston Marc Blumenthal was born in Shepherd's Bush, London, on 27 May 1966, to a Jewish father born in Southern Rhodesia and an English mother who converted to Judaism. His surname comes from a great-grandfather from Latvia and means 'flowered valley' (or 'bloom-dale'), in German.
Blumenthal was raised in Paddington, and attended Latymer Upper School in Hammersmith; St John's Church of England School in Lacey Green, Buckinghamshire; and John Hampden Grammar School, High Wycombe.
His interest in cooking began at the age of sixteen on a family holiday to Provence, France, when he was taken to the 3-Michelin-starred restaurant L'Oustau de Baumanière. He was inspired by the quality of the food and "the whole multi-sensory experience: the sound of fountains and cicadas, the heady smell of lavender, the sight of the waiters carving lamb at the table". When he learned to cook, he was influe |
https://en.wikipedia.org/wiki/Sequential%20hermaphroditism | Sequential hermaphroditism (called dichogamy in botany) is one of the two types of hermaphroditism, the other type being simultaneous hermaphroditism. It occurs when the organism's sex changes at some point in its life. In particular, a sequential hermaphrodite produces eggs (female gametes) and sperm (male gametes) at different stages in life. Sequential hermaphroditism occurs in many fish, gastropods, and plants. Species that can undergo these changes do so as a normal event within their reproductive cycle, usually cued by either social structure or the achievement of a certain age or size. In some species of fish, sequential hermaphroditism is much more common than simultaneous hermaphroditism.
In animals, the different types of change are male to female (protandry or protandrous hermaphroditism), female to male (protogyny or protogynous hermaphroditism), and bidirectional (serial or bidirectional hermaphroditism). Both protogynous and protandrous hermaphroditism allow the organism to switch between functional male and functional female. Bidirectional hermaphrodites have the capacity for sex change in either direction between male and female or female and male, potentially repeatedly during their lifetime. These various types of sequential hermaphroditism may indicate that there is no advantage based on the original sex of an individual organism. Those that change gonadal sex can have both female and male germ cells in the gonads or can change from one complete gonadal type to the other during their last life stage.
In plants, individual flowers are called dichogamous if their function has the two sexes separated in time, although the plant as a whole may have functionally male and functionally female flowers open at any one moment. A flower is protogynous if its function is first female, then male, and protandrous if its function is male then female. It used to be thought that this reduced inbreeding, but it may be a more general mechanism for reducing pollen- |
https://en.wikipedia.org/wiki/Object%20Process%20Methodology | Object process methodology (OPM) is a conceptual modeling language and methodology for capturing knowledge and designing systems, specified as ISO/PAS 19450. Based on a minimal universal ontology of stateful objects and processes that transform them, OPM can be used to formally specify the function, structure, and behavior of artificial and natural systems in a large variety of domains.
OPM was conceived and developed by Dov Dori. The ideas underlying OPM were published for the first time in 1995. Since then, OPM has evolved and developed.
In 2002, the first book on OPM was published, and on December 15, 2015, after six years of work by ISO TC184/SC5, ISO adopted OPM as ISO/PAS 19450. A second book on OPM was published in 2016.
Since 2019, OPM has become a foundation for a Professional Certificate program in Model-Based Systems Engineering - MBSE at EdX. Lectures are available as web videos on YouTube.
Overview
Object process methodology (OPM) is a conceptual modeling language and methodology for capturing knowledge and designing systems. Based on a minimal universal ontology of stateful objects and processes that transform them, OPM can be used to formally specify the function, structure, and behavior of artificial and natural systems in a large variety of domains. Catering to human cognitive abilities, an OPM model represents the system under design or study bimodally in both graphics and text for improved representation, understanding, communication, and learning.
In OPM, an object is a thing that exists or can exist. Objects are stateful—they may have states, such that at each point in time, the object is at one of its states or in transition between states. A process is a thing that transforms an object by creating or consuming it, or by changing its state.
OPM is bimodal; it is expressed both visually/graphically in object-process diagrams (OPD) and verbally/textually in Object-Process Language (OPL), a set of automatically generated sentences in a su |
https://en.wikipedia.org/wiki/Intertrigo | Intertrigo refers to a type of inflammatory rash (dermatitis) of the superficial skin that occurs within a person's body folds. These areas are more susceptible to irritation and subsequent infection due to factors that promote skin breakdown such as moisture, friction, and exposure to bodily secretions and excreta such as sweat, urine, or feces. Areas of the body which are more likely to be affected by intertrigo include the inframammary fold, intergluteal cleft, armpits, and spaces between the fingers or toes. Skin affected by intertrigo is more prone to infection than intact skin.
The term "intertrigo" commonly refers to a secondary infection with bacteria (such as Corynebacterium minutissimum), fungi (such as Candida albicans), or viruses. A frequent manifestation is candidal intertrigo.
Intertrigo occurs more often in warm and humid conditions. Generally, intertrigo is more common in people with a weakened immune system including children, the elderly, and immunocompromised people. The condition is also more common in people who experience urinary incontinence and decreased ability to move.
Cause
An intertrigo usually develops from the chafing of warm, moist skin in the areas of the inner thighs and genitalia, the armpits, under the breasts, the underside of the belly, behind the ears, and the web spaces between the toes and fingers. An intertrigo usually appears red and raw-looking, and may also itch, ooze, and be sore. Intertrigos occur more often among overweight individuals, those with diabetes, those restricted to bed rest or diaper use, and those who use medical devices, like artificial limbs, that trap moisture against the skin. Also, there are several skin diseases that can cause an intertrigo to develop, such as dermatitis or inverse psoriasis.
Bacterial
Bacterial intertrigo can be caused by Streptococci and Corynebacterium minutissimum.
Diagnosis
Intertrigo can be diagnosed clinically by a medical professional after taking a thorough history and |
https://en.wikipedia.org/wiki/NV1 | The Nvidia NV1, manufactured by SGS-Thomson Microelectronics under the model name STG2000, was a multimedia PCI card announced in May 1995 and released in November 1995. It was sold to retail by Diamond as the Diamond Edge 3D.
The NV1 featured a complete 2D/3D graphics core based upon quadratic texture mapping, VRAM or FPM DRAM memory, an integrated 32-channel 350 MIPS playback-only sound card, and a Sega Saturn-compatible joypad port. As such, it was intended to replace the 2D graphics card, Sound Blaster-compatible audio systems, and 15-pin joystick ports, then prevalent on IBM PC compatibles.
Putting all of this functionality on a single card led to significant compromises, and the NV1 was not very successful in the market. A modified version, the NV2, was developed in partnership with Sega for the Sega Dreamcast, but ultimately dropped. Nvidia's next stand-alone product, the RIVA 128, focussed entirely on 2D and 3D performance and was much more successful.
History
Several Sega Saturn games saw NV1-compatible conversions on the PC such as Panzer Dragoon and Virtua Fighter Remix. However, the NV1 struggled in a market place full of several competing proprietary standards, and was marginalized by emerging triangle polygon-based 2D/3D accelerators such as the low-cost S3 Graphics ViRGE, Matrox Mystique, ATI Rage, and Rendition Vérité V1000 among other early entrants. It ultimately did not sell well, despite being a promising and interesting device.
NV1's biggest initial problem was its cost and overall quality. Although it offered credible 3D performance, its use of quadratic surfaces was anything but popular, and was quite different than typical polygon rendering. The audio portion of the card received merely acceptable reviews, with the General MIDI receiving lukewarm responses at best (a critical component at the time due to the superior sound quality produced by competing products). The Sega Saturn console was a market failure compared to Sony's PlayStation |
https://en.wikipedia.org/wiki/Sashimono | Sashimono (指物, 差物, 挿物) were small banners historically worn by soldiers in feudal Japan, for identification during battles.
Description
Sashimono poles were attached to the backs of the dō "cuirass" by special fittings. Sashimono were worn by foot soldiers, including the common soldiers known as ashigaru, as well as by the elite samurai and members of the shogunate, and were carried in special holders on the horses of some cavalry. The banners, resembling small flags and bearing clan symbols, were most prominent during the Sengoku period, a long period of civil war in Japan from the middle 15th to early 17th century.
Variety
Given the great variety in Japanese armour, sashimono were used to provide a kind of "uniform" to armies. Sashimono typically came in either square or short rectangular forms, although many variations existed. A variation that is often bigger and coloured is the uma-jirushi, which were large, personalized, sashimono-like flags worn by commanders. Similar to this were the very large and narrow nobori banners, which commonly took two or three men to hold erect and were used to control the direction of fighting during large battles. (Uma-jirushi and nobori are still used today at sports events, as Japanese versions of the banners common among Western sports audiences.)
The banner hung from an L-shaped frame, which was attached to the chest armour dō or dou by a socket machi-uke or uketsubo near the waistline and hinged at shoulder level with a ring gattari or sashimono-gane. While this arrangement was perhaps one of the most common, there were other variations. Silk and leather were the most common materials used.
Design
The designs on sashimono were usually very simple geometric shapes, sometimes accompanied by Japanese characters providing the name of the leader or clan, the clan's mon, or a clan's slogan. Often, the background colour of the flag indicated which army unit the wearer belonged to, while different divisions in these armies emblazoned th |
https://en.wikipedia.org/wiki/Permabit | Permabit Technology Corporation was a private supplier of data reduction solutions to the computer data storage industry.
On 31 July 2017 it was announced that Red Hat had acquired the assets and technology of Permabit Technology Corporation.
Permabit Albireo
The Permabit Albireo family of products are designed with data reduction features. The common component among these products is the Albireo index - a hash datastore. Three products in the Albireo family range from an embedded SDK (offering integration with existing storage) to a ready-to-deploy appliance.
Albireo SDK – a software development kit designed to add data deduplication to hardware devices or software applications that benefit from sharing duplicate chunks.
Albireo VDO – a drop-in data efficiency solution for Linux architectures. VDO provides fine-grained (4 KB chunk), inline deduplication, thin provisioning, compression and replication.
Albireo SANblox – a ready-to-run data efficiency appliance that integrates data deduplication and data compression transparently into Fibre Channel SAN environments.
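The chunk-level deduplication these products are built around can be illustrated with a toy hash index; DedupStore and the 4 KB chunk size are assumptions for the sketch, not a description of Permabit's actual Albireo implementation:

```python
import hashlib

CHUNK = 4096  # 4 KB chunks, matching the fine-grained dedup described above

class DedupStore:
    """Toy deduplicating store: a hash index maps chunk digests to data."""

    def __init__(self):
        self.index = {}  # chunk digest -> stored chunk bytes

    def write(self, data: bytes):
        # Split into fixed-size chunks; store each unique chunk only once
        # and return the list of digests referencing the stored chunks.
        refs = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            digest = hashlib.sha256(chunk).hexdigest()
            self.index.setdefault(digest, chunk)  # skip already-seen chunks
            refs.append(digest)
        return refs

store = DedupStore()
refs = store.write(b"A" * CHUNK * 3)             # three identical chunks...
assert len(refs) == 3 and len(store.index) == 1  # ...stored only once
```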
History
Permabit was founded as Permabit Inc. in 2000 by a technical and business team from Massachusetts Institute of Technology. The company went through a management buyout in 2007 and a new business entity, Permabit Technology Corporation, was formed at that time.
Permabit’s first product, Permabit Enterprise Archive (originally known as Permeon) was a multi-PB scalable, content-addressable, scale-out storage product, first launched in 2004. Enterprise Archive utilized in-house developed technologies in the areas of capacity optimization, WORM, storage management and data protection.
In 2010, Permabit launched the Albireo family of products which focus on licensing Permabit data efficiency and management innovations to original equipment manufacturers, software vendors and online service providers. Publicly acknowledged companies that offer Albireo-based solutions include Dell EMC, Hitachi Dat |
https://en.wikipedia.org/wiki/Automatic%20number-plate%20recognition | Automatic number-plate recognition (ANPR; see also other names below) is a technology that uses optical character recognition on images to read vehicle registration plates to create vehicle location data. It can use existing closed-circuit television, road-rule enforcement cameras, or cameras specifically designed for the task. ANPR is used by police forces around the world for law enforcement purposes, including checking if a vehicle is registered or licensed. It is also used for electronic toll collection on pay-per-use roads and as a method of cataloguing the movements of traffic, for example by highways agencies.
Automatic number-plate recognition can be used to store the images captured by the cameras as well as the text from the license plate, with some configurable to store a photograph of the driver. Systems commonly use infrared lighting to allow the camera to take the picture at any time of day or night. ANPR technology must take into account plate variations from place to place.
Privacy issues have caused concerns about ANPR, such as government tracking citizens' movements, misidentification, high error rates, and increased government spending. Critics have described it as a form of mass surveillance.
Other names
ANPR is sometimes known by various other terms:
Automatic (or automated) license-plate recognition (ALPR)
Automatic (or automated) license-plate reader (ALPR)
Automatic vehicle identification (AVI)
Automatisk nummerpladegenkendelse (ANPG)
Car-plate recognition (CPR)
License-plate recognition (LPR)
Lecture automatique de plaques d'immatriculation (LAPI)
Mobile license-plate reader (MLPR)
Vehicle license-plate recognition (VLPR)
Vehicle recognition identification (VRI)
Development
ANPR was invented in 1976 at the Police Scientific Development Branch in Britain. Prototype systems were working by 1979, and contracts were awarded to produce industrial systems, first at EMI Electronics, and then at Computer Recognition Systems (CRS, n |
https://en.wikipedia.org/wiki/Timeout%20%28computing%29 | In telecommunications and related engineering (including computer networking and programming), the term timeout or time-out has several meanings, including:
A network parameter related to an enforced event designed to occur at the conclusion of a predetermined elapsed time.
A specified period of time that will be allowed to elapse in a system before a specified event is to take place, unless another specified event occurs first; in either case, the period is terminated when either event takes place. Note: A timeout condition can be canceled by the receipt of an appropriate time-out cancellation signal.
An event that occurs at the end of a predetermined period of time that began at the occurrence of another specified event. The timeout can be prevented by an appropriate signal.
Timeouts allow for more efficient usage of limited resources without requiring additional interaction from the agent interested in the goods that cause the consumption of these resources. The basic idea is that in situations where a system must wait for something to happen, rather than waiting indefinitely, the waiting will be aborted after the timeout period has elapsed. This is based on the assumption that further waiting is useless, and some other action is necessary.
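The basic pattern — wait for something, but give up after a bounded period — can be sketched in a few lines of Python; here a queue.Queue stands in for the awaited resource (an illustrative choice, not from the text above):

```python
import queue

q = queue.Queue()  # nothing will ever be put on this queue
try:
    # Wait at most 0.1 s for an item instead of blocking indefinitely.
    item = q.get(timeout=0.1)
except queue.Empty:
    # The timeout elapsed: further waiting is assumed useless, so move on.
    item = None
assert item is None
```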
Examples
Specific examples include:
In the Microsoft Windows and ReactOS command-line interfaces, the timeout command pauses the command processor for the specified number of seconds.
In POP connections, the server will usually close a client connection after a certain period of inactivity (the timeout period). This ensures that connections do not persist forever, if the client crashes or the network goes down. Open connections consume resources, and may prevent other clients from accessing the same mailbox.
In HTTP persistent connections, the web server saves opened connections (which consume CPU time and memory). The web client does not have to send an "end of requests series" signal. Connections are closed |
https://en.wikipedia.org/wiki/Edge-of-the-wedge%20theorem | In mathematics, Bogoliubov's edge-of-the-wedge theorem implies that holomorphic functions on two "wedges" with an "edge" in common are analytic continuations of each other provided they both give the same continuous function on the edge. It is used in quantum field theory to construct the analytic continuation of Wightman functions. The formulation and the first proof of the theorem were presented by Nikolay Bogoliubov at the International Conference on Theoretical Physics, Seattle, USA (September, 1956) and also published in the book Problems in the Theory of Dispersion Relations. Further proofs and generalizations of the theorem were given by R. Jost and H. Lehmann (1957), F. Dyson (1958), H. Epstein (1960), and by other researchers.
The one-dimensional case
Continuous boundary values
In one dimension, a simple case of the edge-of-the-wedge theorem can be stated as follows.
Suppose that f is a continuous complex-valued function on the complex plane that is holomorphic on the upper half-plane, and on the lower half-plane. Then it is holomorphic everywhere.
In this example, the two wedges are the upper half-plane and the lower half plane, and their common edge is the real axis. This result can be proved from Morera's theorem. Indeed, a function is holomorphic provided its integral round any contour vanishes; a contour which crosses the real axis can be broken up into contours in the upper and lower half-planes and the integral round these vanishes by hypothesis.
Distributional boundary values on a circle
The more general case is phrased in terms of distributions. This is technically simplest in the case where the common boundary is the unit circle |z| = 1 in the complex plane. In that case holomorphic functions f, g in the regions r < |z| < 1 and 1 < |z| < R have Laurent expansions
f(z) = Σ_n a_n z^n,   g(z) = Σ_n b_n z^n,
absolutely convergent in the same regions, and have distributional boundary values given by the formal Fourier series
Σ_n a_n e^{inθ},   Σ_n b_n e^{inθ}.
Their distributional boundary values are equal if a_n = b_n for all n. It is then elementary that t
https://en.wikipedia.org/wiki/Cornelius%20Lanczos |
Cornelius (Cornel) Lanczos (, ; born as Kornél Lőwy, until 1906: Löwy (Lőwy) Kornél; February 2, 1893 – June 25, 1974) was a Hungarian-Jewish, Hungarian-American and later Hungarian-Irish mathematician and physicist. According to György Marx he was one of The Martians.
Biography
He was born in Fehérvár (Alba Regia), Fejér County, Kingdom of Hungary to Jewish parents, Károly Lőwy and Adél Hahn. Lanczos' Ph.D. thesis (1921) was on relativity theory. He sent his thesis copy to Albert Einstein, and Einstein wrote back, saying:
"I studied your paper as far as my present overload allowed. I believe I may say this much: this does involve competent and original brainwork, on the basis of which a doctorate should be obtainable ... I gladly accept the honorable dedication."
In 1924 he discovered an exact solution of the Einstein field equation representing a cylindrically symmetric rigidly rotating configuration of dust particles. This was later rediscovered by Willem Jacob van Stockum and is known today as the van Stockum dust. It is one of the simplest known exact solutions in general relativity and is regarded as an important example, in part because it exhibits closed timelike curves.
Lanczos served as assistant to Albert Einstein during the period of 1928–29.
In 1927 Lanczos married Maria Rupp. He was offered a one-year visiting professorship from Purdue University. For a dozen years (1927–39) Lanczos split his life between two continents. His wife Maria Rupp stayed with Lanczos' parents in Székesfehérvár year-around while Lanczos went to Purdue for half the year, teaching graduate students matrix mechanics and tensor analysis. In 1933 his son Elmar was born; Elmar came to Lafayette, Indiana with his father in August 1939, just before WW II broke out. Maria was too ill to travel and died several weeks later from tuberculosis. When the Nazis purged Hungary of Jews in 1944, of Lanczos' family, only his sister and a nephew survived. Elmar married, moved to Seattle and |
https://en.wikipedia.org/wiki/RAM%20image | A RAM image is a sequence of machine code instructions and associated data kept permanently in the non-volatile ROM memory of an embedded system, which is copied into volatile RAM by a bootstrap loader. Typically the RAM image is loaded into RAM when the system is switched on, and it contains a second-level bootstrap loader and basic hardware drivers, enabling the unit to function as desired, or else more sophisticated software to be loaded into the system.
Embedded systems |
https://en.wikipedia.org/wiki/Enaptin | Enaptin, also known as nesprin-1 or synaptic nuclear envelope protein 1 (syne-1), is an actin-binding protein that in humans is encoded by the SYNE1 gene.
Function
This gene encodes a spectrin-repeat-containing protein, expressed in skeletal and smooth muscle and in peripheral blood lymphocytes, that localizes to the nuclear membrane.
Enaptin is a nuclear envelope protein found in human myocytes and synapses, which is made up of 8,797 amino acids. Enaptin is involved in the maintenance of nuclear organization and structural integrity, tethering the cell nucleus to the cytoskeleton by interacting with the nuclear envelope and with F-actin in the cytoplasm.
Structure
Enaptin contains a coiled alpha-helical region and a large beta-sheet region in the upper part and at least four alpha-helices spliced together, indicating the similarity with collagen. The protein is made up of three main parts, as can be seen in the diagram: cytoplasmic (1-8746), anchor for type IV membrane protein (8747-8767), and the sequence for perinuclear space (8768-8797). The region in the perinuclear space contains a KASH domain.
The molecular weight of the mature protein is approximately 1,011 kDa, and it has a theoretical pI of 5.38. The protein's chemical formula is C44189H71252N12428O14007S321. It has a theoretical Instability Index (II) of 51.63, indicating that it would be unstable in a test tube. The protein's in vivo half-life, the time it takes for half of the amount of protein in a cell to disappear after its synthesis in the cell, is predicted to be approximately 30 hours (in mammalian reticulocytes).
Clinical significance
Mutations in this gene have been associated with autosomal recessive spinocerebellar ataxia 8, also referred to as autosomal recessive cerebellar ataxia type 1 or recessive ataxia of Beauce. |
https://en.wikipedia.org/wiki/Hadwiger%20conjecture%20%28graph%20theory%29 | In graph theory, the Hadwiger conjecture states that if a graph G is loopless and has no K_t minor, then its chromatic number satisfies χ(G) < t. It is known to be true for t ≤ 6. The conjecture is a generalization of the four-color theorem and is considered to be one of the most important and challenging open problems in the field.
In more detail, if all proper colorings of an undirected graph G use k or more colors, then one can find k disjoint connected subgraphs of G such that each subgraph is connected by an edge to each other subgraph. Contracting the edges within each of these subgraphs so that each subgraph collapses to a single vertex produces a complete graph K_k on k vertices as a minor of G.
This conjecture, a far-reaching generalization of the four-color problem, was made by Hugo Hadwiger in 1943 and is still unsolved. It has been called "one of the deepest unsolved problems in graph theory."
Equivalent forms
An equivalent form of the Hadwiger conjecture (the contrapositive of the form stated above) is that, if there is no sequence of edge contractions (each merging the two endpoints of some edge into a single supervertex) that brings a graph G to the complete graph K_k, then G must have a vertex coloring with k − 1 colors.
In a minimal k-coloring of any graph G, contracting each color class of the coloring to a single vertex will produce a complete graph K_k. However, this contraction process does not produce a minor of G because there is (by definition) no edge between any two vertices in the same color class, thus the contraction is not an edge contraction (which is required for minors). Hadwiger's conjecture states that there exists a different way of properly edge contracting sets of vertices to single vertices, producing a complete graph K_k, in such a way that all the contracted sets are connected.
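In a coloring with the minimum number of colors, every pair of color classes must be joined by an edge (otherwise the two classes could merge into one, contradicting minimality), which is why contracting the classes yields a complete graph. This can be checked on a small example; a Python sketch using the 5-cycle, whose chromatic number is 3 (the graph, coloring, and helper names are illustrative):

```python
from itertools import combinations

# The 5-cycle and a minimal (3-color) proper coloring of it.
edges = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)}
coloring = {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}

def joined(class_a, class_b, edges):
    # True if some edge runs between the two vertex sets (either orientation).
    return any((u, v) in edges or (v, u) in edges
               for u in class_a for v in class_b)

# Group vertices into color classes.
classes = {}
for v, c in coloring.items():
    classes.setdefault(c, set()).add(v)

# Every pair of color classes is joined, so contracting each class to a
# single vertex yields the complete graph K_3.
assert all(joined(classes[a], classes[b], edges)
           for a, b in combinations(classes, 2))
```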
If F_k denotes the family of graphs having the property that all minors of graphs in F_k can be k-colored, then it follows from the Robertson–Seymour theorem that F_k can be characterized by a finite set of forbidden minors. Hadwiger's conjecture is that this set consists o
https://en.wikipedia.org/wiki/PC-UX | PC-UX is a discontinued NEC port of UNIX System III for their APC III and PC-9801 personal computers. It had extensive graphics capability. PC-UX and MS-DOS could reside on the same hard drive, and file transfer utilities allowed files to be exchanged between PC-UX and MS-DOS.
In 1985, the suggested retail price for PC-UX on APC III was $700.
NEC's subsequent port of UNIX System V was called PC-UX/V.
See also
Xenix for AT etc. |
https://en.wikipedia.org/wiki/Direction%20of%20arrival | In signal processing, direction of arrival (DOA) denotes the direction from which a propagating wave arrives at a point where a set of sensors is located. This set of sensors forms what is called a sensor array. Often the associated technique of beamforming is used to estimate the signal arriving from a given direction. Various engineering problems addressed in the associated literature are:
Find the direction relative to the array where the sound source is located
Humans also locate the directions of different sound sources around them, using a process similar to those used by the algorithms in the literature
Radio telescopes use these techniques to look at a certain location in the sky
Recently beamforming has also been used in radio frequency (RF) applications such as wireless communication. Compared with the spatial diversity techniques, beamforming is preferred in terms of complexity. On the other hand, beamforming in general has much lower data rates. In multiple access channels (code-division multiple access (CDMA), frequency-division multiple access (FDMA), time-division multiple access (TDMA)), beamforming is necessary and sufficient
Various techniques exist for calculating the direction of arrival, such as angle of arrival (AoA), time difference of arrival (TDOA), frequency difference of arrival (FDOA), and other similar techniques.
Limitations on the accuracy of estimation of direction of arrival signals in digital antenna arrays are associated with jitter in the ADC and DAC.
More sophisticated techniques perform joint direction of arrival and time of arrival (ToA) estimation to allow a more accurate localization of a node. This also has the merit of localizing more targets with fewer antenna resources. Indeed, it is well known in the array processing community that, generally speaking, one can resolve up to N − 1 targets with N antennas. When JADE (joint angle and delay) estimation is employed, one can go beyond this limit.
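As an illustration of the beamforming approach mentioned above, the following self-contained sketch performs a delay-and-sum scan over a uniform linear array (narrowband, noise-free model; the array size, spacing, and source angle are all illustrative assumptions):

```python
import cmath
import math

# A plane wave from 30 degrees hits a uniform linear array with
# half-wavelength element spacing; scanning candidate steering vectors
# over a grid of angles, the beamformer output peaks at the true DOA.
n_sensors, spacing = 8, 0.5        # spacing in wavelengths
true_doa = math.radians(30.0)

def steer(theta):
    # Narrowband steering vector for arrival angle theta.
    return [cmath.exp(-2j * math.pi * spacing * k * math.sin(theta))
            for k in range(n_sensors)]

snapshot = steer(true_doa)         # one noise-free array snapshot

def beam_power(theta):
    # Coherent (delay-and-sum) combination against a candidate direction.
    return abs(sum(a.conjugate() * b for a, b in zip(steer(theta), snapshot)))

grid = [math.radians(-90 + 0.25 * i) for i in range(721)]  # 0.25° steps
estimate = math.degrees(max(grid, key=beam_power))
assert abs(estimate - 30.0) < 0.5  # the scan peak lands on the true DOA
```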
Typical DOA estimation |
https://en.wikipedia.org/wiki/Stickler%20syndrome | Stickler syndrome (hereditary progressive arthro-ophthalmodystrophy) is a group of rare genetic disorders affecting connective tissue, specifically collagen. Stickler syndrome is a subtype of collagenopathy, types II and XI. Stickler syndrome is characterized by distinctive facial abnormalities, ocular problems, hearing loss, and joint and skeletal problems. It was first studied and characterized by Gunnar B. Stickler in 1965.
Signs and symptoms
Individuals with Stickler syndrome experience a range of signs and symptoms. Some people have no signs and symptoms; others have some or all of the features described below. In addition, each feature of this syndrome may vary from subtle to severe.
A characteristic feature of Stickler syndrome is a somewhat flattened facial appearance. This is caused by underdeveloped bones in the middle of the face, including the cheekbones and the bridge of the nose. A particular group of physical features, called the Pierre Robin sequence, is common in children with Stickler syndrome. Robin sequence includes a U-shaped or sometimes V-shaped cleft palate (an opening in the roof of the mouth) with a tongue that is too large for the space formed by the small lower jaw. Children with a cleft palate are also prone to ear infections and occasionally swallowing difficulties.
Many people with Stickler syndrome are very nearsighted (described as having high myopia) because of the shape of the eye. People with eye involvement are prone to increased pressure within the eye (ocular hypertension) which could lead to glaucoma and tearing or detachment of the light-sensitive retina of the eye (retinal detachment). Cataract may also present as an ocular complication associated with Stickler's Syndrome. The jelly-like substance within the eye (the vitreous humour) has a distinctive appearance in the types of Stickler syndrome associated with the COL2A1 and COL11A1 genes. As a result, regular appointments to a specialist ophthalmologist are advised. Th |
https://en.wikipedia.org/wiki/International%20Association%20for%20Quantitative%20Finance | The International Association for Quantitative Finance (IAQF), formerly the International Association of Financial Engineers (IAFE), is a non-profit professional society dedicated to fostering the fields of quantitative finance and financial engineering. The IAQF hosts several panel discussions throughout the year to discuss the issues that affect the industry from both academic and professional angles. Since it was established in 1992, the IAQF has expanded its reach to host events in San Francisco, Toronto, Boston, and London.
Fischer Black Memorial Foundation
The educational arm of the IAQF is the Fischer Black Memorial Foundation (FBMF). While the IAQF focuses on the profession of financial engineering, the FBMF aims to expose students to the financial engineering field and help them work towards a career in the industry. Financial engineering is often underrepresented on university campuses, and the FBMF tries to bridge the gap between academia and the professional world. The main tool of the FBMF is the successful "How I Became a Quant" event series, which brings professionals to college campuses to tell students about their experiences getting into the field. The FBMF also co-hosts (along with SIAM and New York University) an annual career fair that draws students from all over the country to meet with the premier hiring companies in the industry. This is one of the few career fairs specifically for financial engineering, and it is hugely popular with both students and companies.
Events
Often, these events are evening panels with 3–4 speakers; both practitioners and academics typically sit on these panels. Much of the information presented at these events is available afterward on the IAQF website.
Every year, the IAQF honors one member of the financial engineering world with its Financial Engineer of the Year (FEOY) award. The winner is selected through an exhaustive nomination and voting process.
https://en.wikipedia.org/wiki/Wind%20fetch | In oceanography wind fetch, also known as fetch length or simply fetch, is the length of water over which a given wind has blown without obstruction. Fetch is used in geography and meteorology and its effects are usually associated with sea state and when it reaches shore it is the main factor that creates storm surge which leads to coastal erosion and flooding. It also plays a large part in longshore drift.
Fetch length, along with wind speed (wind strength) and wind duration, determines the size (sea state) of the waves produced. If the wind direction is constant, the longer the fetch and the greater the wind speed, the more wind energy is transferred to the water surface and the larger the resulting sea state will be. Sea state will increase over time until local energy dissipation balances energy transfer to the water from the wind and a fully developed sea results.
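As a rough illustration of how fetch and wind speed together set the sea state, one commonly quoted empirical fetch-limited relation can be sketched as follows. The formula and its 0.0016 coefficient are an SMB-type approximation assumed here, not values stated in this article:

```python
import math

def fetch_limited_wave_height(wind_speed, fetch, g=9.81):
    """Rough significant wave height (m) for a fetch-limited sea.

    wind_speed: wind speed in m/s; fetch: fetch length in m.
    Empirical relation: Hs ~ 0.0016 * sqrt(g * fetch) * wind_speed / g.
    """
    return 0.0016 * math.sqrt(g * fetch) * wind_speed / g

# A 10 m/s wind blowing over 100 km of unobstructed water:
print(round(fetch_limited_wave_height(10.0, 100_000), 2))  # 1.62 (m)
```

Note that doubling the fetch raises the wave height by only a factor of the square root of two, which is why long open-ocean fetches, not just strong winds, are needed for large seas.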
See also
Gale
Sea state
Ocean surface wave
Storm surge
https://en.wikipedia.org/wiki/Adrian%20Bejan | Adrian Bejan is a Romanian-American professor who has made contributions to modern thermodynamics and developed his constructal law. He is J. A. Jones Distinguished Professor of Mechanical Engineering at Duke University and author of the books Design in Nature, The Physics of Life , Freedom and Evolution and Time And Beauty: Why Time Flies And Beauty Never Dies
Early life and education
Bejan was born in Galaţi, a city on the Danube in Romania.
His mother, Marioara Bejan (1914–1998), was a pharmacist. His father, Dr. Anghel Bejan (1910–1976), was a veterinarian. Bejan showed an early talent in drawing, and his parents enrolled him in art school. He also excelled in basketball, which earned him a position on the Romanian national basketball team.
At age 19 Bejan won a scholarship to the United States and entered Massachusetts Institute of Technology in Cambridge, Massachusetts. In 1972 he was awarded BS and MS degrees as a member of the Honors Course in Mechanical Engineering. He graduated in 1975 with a PhD from MIT with a thesis titled "Improved thermal design of the cryogenic cooling system for a superconducting synchronous generator". His advisor was Joseph L. Smith Jr.
Career
From 1976 to 1978 Bejan was a Miller research fellow at the University of California, Berkeley, working with Chang-Lin Tien. In 1978 he moved to Colorado and joined the faculty of the Department of Mechanical Engineering at the University of Colorado in Boulder. In 1982 Bejan published his first book, Entropy Generation Through Heat and Fluid Flow. The book is aimed at practical applications of the second law of thermodynamics, and presented his ideas on irreversibility, availability and exergy analysis in a form for engineers. In 1984 he published Convection Heat Transfer. In an era when researchers did heat transfer calculations using numerical methods on supercomputers, the book emphasized new research methods such as intersection of asymptotes, heatlines, and scale analysis.
https://en.wikipedia.org/wiki/Bejan%20number | There are two different Bejan numbers (Be) used in the scientific domains of thermodynamics and fluid mechanics. Bejan numbers are named after Adrian Bejan.
Thermodynamics
In the field of thermodynamics the Bejan number is the ratio of heat transfer irreversibility to total irreversibility due to heat transfer and fluid friction:

Be = Ṡgen,ΔT / (Ṡgen,ΔT + Ṡgen,Δp)

where
Ṡgen,ΔT is the entropy generation contributed by heat transfer
Ṡgen,Δp is the entropy generation contributed by fluid friction.
Sciubba has also derived the relation between the Bejan number Be and the Brinkman number Br.
Heat transfer and mass transfer
In the context of heat transfer, the Bejan number is the dimensionless pressure drop along a channel of length L:

Be = ΔP L² / (μ α)

where
μ is the dynamic viscosity
α is the thermal diffusivity
The Be number plays in forced convection the same role that the Rayleigh number plays in natural convection.
In the context of mass transfer, the Bejan number is the dimensionless pressure drop along a channel of length L:

Be = ΔP L² / (μ D)

where
μ is the dynamic viscosity
D is the mass diffusivity
For the case of Reynolds analogy (Le = Pr = Sc = 1), it is clear that all three definitions of Bejan number are the same.
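Both families of Bejan number are simple ratios and can be computed directly. The sketch below uses the standard definitions (Be = Ṡgen,ΔT/(Ṡgen,ΔT + Ṡgen,Δp) for the thermodynamic number; Be = ΔP·L²/(μ·diffusivity) for the heat- and mass-transfer numbers); the numerical property values are illustrative assumptions:

```python
def bejan_thermo(s_gen_heat, s_gen_friction):
    """Thermodynamic Bejan number: heat-transfer irreversibility as a
    fraction of the total entropy generation (always between 0 and 1)."""
    return s_gen_heat / (s_gen_heat + s_gen_friction)

def bejan_transport(delta_p, length, mu, diffusivity):
    """Heat- or mass-transfer Bejan number: dimensionless pressure drop
    dP * L**2 / (mu * diffusivity). Pass the thermal diffusivity (alpha)
    for heat transfer or the mass diffusivity (D) for mass transfer."""
    return delta_p * length**2 / (mu * diffusivity)

# Equal entropy generation from heat transfer and friction:
print(bejan_thermo(1.0, 1.0))  # 0.5

# Water-like properties (mu = 1e-3 Pa*s, alpha = 1.4e-7 m^2/s),
# 100 Pa pressure drop along a 0.1 m channel:
print(f"{bejan_transport(100.0, 0.1, 1e-3, 1.4e-7):.3e}")  # 7.143e+09
```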
Also, Awad and Lage obtained a modified form of the Bejan number, originally proposed by Bhattacharjee and Grosshandler for momentum processes, by replacing the dynamic viscosity appearing in the original proposition with the equivalent product of the fluid density and the momentum diffusivity of the fluid. This modified form is not only more akin to the physics it represents but also has the advantage of depending on only one viscosity coefficient. Moreover, this simple modification allows for a much simpler extension of the Bejan number to other diffusion processes, such as heat or species transfer, by simply replacing the diffusivity coefficient. Consequently, a general Bejan number representation for any process involving pressure drop and diffusion becomes possible.
https://en.wikipedia.org/wiki/FreeS/WAN | FreeS/WAN, for Free Secure Wide-Area Networking, was a free software project, which implemented a reference version of the IPsec network security layer for Linux. The project goal of ubiquitous opportunistic encryption of Internet traffic was not realized, although it did contribute to general Internet encryption.
The project was founded by John Gilmore, and administered for most of its duration by Hugh Daniel. John Ioannidis and Angelos Keromytis started the codebase while outside the United States of America prior to autumn 1997. Technical lead for the project was Henry Spencer, and later Michael Richardson. The IKE keying daemon (pluto) was maintained by D. Hugh Redelmeier while the IPsec kernel module (KLIPS) was maintained by Richard Guy Briggs. Sandy Harris was the main documentation person for most of the project, later Claudia Schmeing.
The final FreeS/WAN version 2.06 was released on 22 April 2004. The earlier version 2.04 was forked to form two projects, Openswan and strongSwan. Openswan has since (2012) been forked to Libreswan.
External links
Project website
Documentation
Free security software
History of software
IPsec
https://en.wikipedia.org/wiki/Beauchamp%E2%80%93Feuillet%20notation | Beauchamp–Feuillet notation is a system of dance notation used in Baroque dance.
The notation was commissioned by Louis XIV (who had founded the Académie Royale de Danse in 1661), and devised in the 1680s by Pierre Beauchamp. The notation system was first described in detail in 1700 by Raoul-Auger Feuillet in Chorégraphie. Feuillet also then began a programme of publishing complete notated dances. It was used to record dances for the stage and domestic use throughout the eighteenth century, being modified by Pierre Rameau in 1725, and surviving into at least the 1780s in various modified forms.
One of the innovations of this notation was to show the music on a staff as a musician would use it, across the top of a page. Bar markings on the music are also drawn across the tract of the dancers, clarifying the relation of the steps to the music. The focus of the notation is the footwork. The notation shows the sequence of foot moves, and, for each move, the direction, the manner of executing the step, and the relative timing of the moves. There is enough detail that dancing masters, in other places and times, could reconstruct the dance and teach it from the notation alone. There are over 300 notated dances known (Meredith Ellis Little & Carol G. Marsh (1992) La Danse Noble: An Inventory of Dances and Sources). The majority of the known dances are for two dancers, usually a man and a woman, and were intended to be performed at balls or on the stage.
Notes
Reading
Raoul Auger Feuillet (1700) Chorégraphie, ou l'art de décrire la danse (Paris)
a facsimile of the 1700 Paris edition (1968: Broude Brothers)
translated into English by John Weaver: (1706) Orchesography (London)
translated into English by P. Siris: (1706) The Art of Dancing (London)
Raoul Auger Feuillet (1706) Recueil de contredanses (Paris)
a facsimile of the 1706 Paris edition (1968: Broude Brothers)
Wendy Hilton “Dance of court and theater: the French noble style 1690–1725”
https://en.wikipedia.org/wiki/Parasitemia | Parasitemia is the quantitative content of parasites in the blood. It is used as a measurement of parasite load in the organism and an indication of the degree of an active parasitic infection. Systematic measurement of parasitemia is important in many phases of the assessment of disease, such as in diagnosis and in the follow-up of therapy, particularly in the chronic phase, when cure depends on ascertaining a parasitemia of zero.
The methods used for quantifying parasitemia depend on the parasitic species and its life cycle. For instance, in malaria, the number of blood-stage parasites can be counted using an optical microscope, on a special thick film (for low parasitemias) or thin film blood smear (for high parasitemias).
Molecular biology techniques, such as PCR, are used increasingly as tools to measure parasitemia, especially in patients in the chronic phase of disease. In this technique, blood samples are obtained from the patient, and specific DNA of the parasite is extracted and amplified by PCR.
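The thick- and thin-film counts described above translate into parasite densities by simple ratios. A sketch, assuming the conventional thick-film method of counting parasites against white blood cells (the 8,000 WBC/µL default is the standard assumed value, not a figure from this article):

```python
def parasites_per_microliter(parasites_counted, wbc_counted, wbc_per_ul=8000):
    """Thick-film parasite density: parasites counted against a fixed
    number of white blood cells (commonly 200 or 500), scaled by an
    assumed standard WBC density."""
    return parasites_counted * wbc_per_ul / wbc_counted

def thin_film_parasitemia_percent(infected_rbc, total_rbc):
    """Thin-film parasitemia: percentage of red blood cells infected."""
    return 100.0 * infected_rbc / total_rbc

print(parasites_per_microliter(50, 200))        # 2000.0 parasites/uL
print(thin_film_parasitemia_percent(12, 1000))  # 1.2 (%)
```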
https://en.wikipedia.org/wiki/Log%E2%80%93log%20plot | In science and engineering, a log–log graph or log–log plot is a two-dimensional graph of numerical data that uses logarithmic scales on both the horizontal and vertical axes. Power functions – relationships of the form – appear as straight lines in a log–log graph, with the exponent corresponding to the slope, and the coefficient corresponding to the intercept. Thus these graphs are very useful for recognizing these relationships and estimating parameters. Any base can be used for the logarithm, though most commonly base 10 (common logs) are used.
Relation with monomials
Given a monomial equation y = ax^k, taking the logarithm of the equation (with any base) yields:

log y = k log x + log a.

Setting X = log x and Y = log y, which corresponds to using a log–log graph, yields the equation:

Y = mX + b

where m = k is the slope of the line (gradient) and b = log a is the intercept on the (log y)-axis, meaning where log x = 0, so, reversing the logs, a is the y value corresponding to x = 1.
Equations
The equation for a line on a log–log scale would be:

log10 F(x) = m log10 x + b

where m is the slope and b is the intercept point on the log plot.
Slope of a log–log plot
To find the slope of the plot, two points are selected on the x-axis, say x1 and x2. Using the above equation:

log10 F1 = m log10 x1 + b

and

log10 F2 = m log10 x2 + b.

The slope m is found taking the difference:

m = (log10 F2 − log10 F1) / (log10 x2 − log10 x1)

where F1 is shorthand for F(x1) and F2 is shorthand for F(x2). The figure at right illustrates the formula. Notice that the slope in the example of the figure is negative. The formula also provides a negative slope, as can be seen from the following property of the logarithm:

log10(x1/x2) = −log10(x2/x1)
Finding the function from the log–log plot
The above procedure now is reversed to find the form of the function F(x) using its (assumed) known log–log plot. To find the function F, pick some fixed point (x0, F0), where F0 is shorthand for F(x0), somewhere on the straight line in the above graph, and further some other arbitrary point (x1, F1) on the same graph. Then from the slope formula above:

m = (log10 F1 − log10 F0) / (log10 x1 − log10 x0)

which leads to

log10 F1 = m log10(x1/x0) + log10 F0.

Notice that 10^(log10 F1) = F1. Therefore, inverting the logs gives F1 = F0 (x1/x0)^m, and in general F(x) = F0 (x/x0)^m.
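The slope-and-intercept recovery described above can be sketched directly; the point values below are invented for illustration:

```python
import math

def power_law_from_points(x0, f0, x1, f1):
    """Recover F(x) = a * x**m from two points on a straight log-log line."""
    m = (math.log10(f1) - math.log10(f0)) / (math.log10(x1) - math.log10(x0))
    a = f0 / x0 ** m  # since f0 = a * x0**m
    return a, m

# Two points read off the graph of F(x) = 3 * x**2:
a, m = power_law_from_points(1.0, 3.0, 10.0, 300.0)
print(round(a, 6), round(m, 6))  # 3.0 2.0
```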
https://en.wikipedia.org/wiki/Microaerophile | A microaerophile is a microorganism that requires environments containing lower levels of dioxygen than that are present in the atmosphere (i.e. < 21% O2; typically 2–10% O2) for optimal growth. A more restrictive interpretation requires the microorganism to be obligate in this requirement. Many microaerophiles are also capnophiles, requiring an elevated concentration of carbon dioxide (e.g. 10% CO2 in the case of Campylobacter species).
The original definition of a microaerophile has been criticized for being too restrictive and not accurate enough compared to similar categories. The broader term microaerobe has been coined to describe microbes able to respire oxygen "within microoxic environments by using high-affinity terminal oxidase".
Culture
Microaerophiles are traditionally cultivated in candle jars. Candle jars are containers into which a lit candle is introduced before sealing the container's airtight lid. The candle's flame burns until extinguished by oxygen deprivation, creating a carbon dioxide-rich, oxygen-poor atmosphere.
Newer oxystat bioreactor methods allow for more precise control of gas levels in the microaerobic environment, using a probe to measure the oxygen concentration or redox potential in real time. Ways to control oxygen intake include gas-generating packs and gas exchange.
As oxystat bioreactors are expensive to buy and run, lower-cost solutions have been devised. For example, the Micro-Oxygenated Culture Device (MOCD) is a system involving ordinary flasks, oxygen-permeable tubes, sensors, and water pumps. Aeration is done by pumping the culture medium through the tubes.
Examples
A wide variety of microaerobic conditions exist in the world: in human bodies, underwater, etc. Many bacteria from these sources are microaerobes, some of which are also microaerophiles.
Some members of Campylobacterales are microaerophilic:
Campylobacter species are microaerophilic.
Helicobacter pylori (previously identified as a Campylobacter species).
https://en.wikipedia.org/wiki/Radioresistance | Radioresistance is the level of ionizing radiation that organisms are able to withstand.
Ionizing-radiation-resistant organisms (IRRO) were defined as organisms for which the dose of acute ionizing radiation (IR) required to achieve a 90% reduction (D10) is greater than 1,000 gray (Gy).
Radioresistance is surprisingly high in many organisms, in contrast to previously held views. For example, the study of the environment, animals, and plants around the Chernobyl disaster area has revealed an unexpected survival of many species, despite the high radiation levels. A Brazilian study of a hill in the state of Minas Gerais, which has high natural radiation levels from uranium deposits, has also shown many radioresistant insects, worms, and plants. Certain extremophiles, such as the bacterium Deinococcus radiodurans and the tardigrades, can withstand large doses of ionizing radiation on the order of 5,000 Gy.
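The D10 definition implies a simple log-linear survival model: each additional D10 of dose reduces the surviving population tenfold. A sketch of that idealization (real survival curves often have shoulders, so this is an approximation):

```python
def surviving_fraction(dose_gy, d10_gy):
    """Surviving fraction under a log-linear model: 10**(-dose/D10)."""
    return 10 ** (-dose_gy / d10_gy)

# An organism at the IRRO threshold (D10 = 1000 Gy):
print(surviving_fraction(1000, 1000))  # 0.1
print(surviving_fraction(3000, 1000))  # 0.001
```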
Induced radioresistance
In the graph on left, a dose/survival curve for a hypothetical group of cells has been drawn with and without a rest time for the cells to recover. Other than the recovery time partway through the irradiation, the cells would have been treated identically.
Radioresistance may be induced by exposure to small doses of ionizing radiation. Several studies have documented this effect in yeast, bacteria, protozoa, algae, plants, insects, as well as in in vitro mammalian and human cells and in animal models. Several cellular radioprotection mechanisms may be involved, such as alterations in the levels of some cytoplasmic and nuclear proteins and increased gene expression, DNA repair and other processes. Also biophysical models presented general basics for this phenomenon.
Many organisms have been found to possess a self-repair mechanism that can be activated by exposure to radiation in some cases. Two examples of this self-repair process in humans are described below.
Devair Alves Ferreira received a large dose (7.0 Gy) during the Goiânia accident and survived.
https://en.wikipedia.org/wiki/His%20Majesty%27s%20Botanist | His Majesty's Botanist is a member of the Royal household in Scotland.
The office was created in 1699, and from 1768 until 1956 it was combined with the office of Regius Keeper of the Royal Botanic Garden Edinburgh, who also held the post of Regius Professor of Botany at the University of Edinburgh. Since then the office of HM Botanist has been honorary, but conferred on a serving or retired Regius Keeper.
Office holders
1699: James Sutherland
1715: Dr William Arthur
1716: Charles Alston
1761: Dr John Hope
1786: Daniel Rutherford MD
1820: Robert Graham MD
1845: John Hutton Balfour MD
1880: Alexander Dickson MD LLD
1888: Sir Isaac Bayley Balfour
1922: Prof. Sir William Wright Smith (d 1956)
1966: Harold Roy Fletcher (d 1978)
1987: Prof. Douglas Mackay Henderson
2010: Prof Stephen Blackmore
See also
Regius Keeper of the Royal Botanic Garden Edinburgh
https://en.wikipedia.org/wiki/Safe%20area%20%28television%29 | Safe area is a term used in television production to describe the areas of the television picture that can be seen on television screens.
Older televisions can display less of the space outside of the safe area than ones made more recently. Flat panel screens, plasma displays and liquid crystal display (LCD) screens generally can show most of the picture outside the safe areas.
The use of safe areas in television production ensures that the most important parts of the picture are seen by the majority of viewers.
The size of the title-safe area is typically specified in pixels or percent. The NTSC and PAL analog television standards do not specify official overscan amounts, and producers of television programming use their own guidelines.
Some video editing software packages for non-linear editing systems (NLE) solutions have a setting which shows the safe areas while editing.
Title-safe area
The title-safe area or graphics-safe area is, in television broadcasting, a rectangular area which is far enough in from the four edges, such that text or graphics show neatly: with a margin and without distortion. This is applied against a worst case of on-screen location and display type. Typically corners would require more space from the edges, but due to increased quality of the average display this is no longer the concern it used to be, even on CRTs.
If the editor of the content does not take care to ensure that all titles are inside the title-safe area, some titles in the content could have their edges chopped off when viewed in some screens.
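Computing a centered safe-area rectangle from a frame size and a safe fraction is straightforward. In this sketch, the 90% (action-safe) and 80% (title-safe) defaults are traditional analog-era values assumed for illustration; actual broadcaster specifications vary:

```python
def safe_area(width, height, fraction=0.8):
    """Centered safe-area rectangle as (x, y, w, h) in pixels.

    `fraction` is the safe portion of each dimension; 0.9 (action-safe)
    and 0.8 (title-safe) are traditional defaults, assumed here."""
    w = round(width * fraction)
    h = round(height * fraction)
    return (width - w) // 2, (height - h) // 2, w, h

print(safe_area(1920, 1080, 0.9))  # action-safe: (96, 54, 1728, 972)
print(safe_area(1920, 1080, 0.8))  # title-safe:  (192, 108, 1536, 864)
```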
Video editing programs that can output video for either television or the Web can take the title-safe area into account. In Apple's consumer-grade NLE software iMovie, the user is advised to uncheck the QT Margins checkbox for content meant for television, and to check it for content meant only for QuickTime on a computer. Final Cut Pro can show two overlay rectangles in both its Viewer and Canvas.
https://en.wikipedia.org/wiki/Complete%20active%20space | In quantum chemistry, a complete active space is a type of classification of molecular orbitals. Spatial orbitals are classified as belonging to three classes:
core, always hold two electrons
active, partially occupied orbitals
virtual, always hold zero electrons
This classification allows one to develop a set of Slater determinants for the description of the wavefunction as a linear combination of these determinants. Based on the freedom left for the occupation in the active orbitals, a certain number of electrons are allowed to populate all the active orbitals in appropriate combinations, developing a finite-size space of determinants. The resulting wavefunction is of multireference nature and has additional properties compared to other selection schemes.
The active classification can theoretically be extended to all the molecular orbitals, to obtain a full CI treatment. In practice, this choice is limited, due to the high computational cost needed to optimize a large CAS wavefunction on medium and large molecular systems.
A Complete Active Space wavefunction is used to obtain a first approximation of the so-called static correlation, which represents the contribution needed to describe bond dissociation processes correctly. This requires a wavefunction that includes a set of electronic configurations with high and very similar importance. Dynamic correlation, representing the contribution to the energy brought by the instantaneous interaction between electrons, is normally small and can be recovered with good accuracy by means of perturbative evaluations, such as CASPT2 and NEVPT.
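The computational cost mentioned above grows combinatorially with the size of the active space. As a sketch, the number of Slater determinants in a CAS(n, m) space (n electrons in m active orbitals, with equal numbers of alpha and beta spins) is the product of two binomial coefficients; spin-adapted CSF counts via Weyl's formula would be smaller, so this is an upper-bound illustration:

```python
from math import comb

def cas_determinants(n_electrons, n_orbitals):
    """Count Slater determinants in CAS(n, m): C(m, n_alpha) * C(m, n_beta)."""
    n_alpha = n_electrons // 2
    n_beta = n_electrons - n_alpha
    return comb(n_orbitals, n_alpha) * comb(n_orbitals, n_beta)

print(cas_determinants(6, 6))    # 400       e.g. the pi system of benzene
print(cas_determinants(14, 14))  # 11778624  the combinatorial explosion
```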
See also
CASSCF
Multi-configurational self-consistent field (MCSCF)
Quantum chemistry
https://en.wikipedia.org/wiki/Resting%20metabolic%20rate | Resting metabolic rate (RMR) is whole-body mammal (and other vertebrate) metabolism during a time period of strict and steady resting conditions that are defined by a combination of assumptions of physiological homeostasis and biological equilibrium. RMR differs from basal metabolic rate (BMR) because BMR measurements must meet total physiological equilibrium whereas RMR conditions of measurement can be altered and defined by the contextual limitations. Therefore, BMR is measured in the elusive "perfect" steady state, whereas RMR measurement is more accessible and thus, represents most, if not all measurements or estimates of daily energy expenditure.
Indirect calorimetry is the study or clinical use of the relationship between respirometry and bioenergetics, where the measurement of the rates of oxygen consumption, sometimes carbon dioxide production, and less often urea production is transformed to rates of energy expenditure, expressed as the ratio between i) energy and ii) the time frame of the measurement. For example, if analysis of a rested individual's oxygen consumption over a 5-minute measurement yields an estimate of 5.5 kilocalories of energy expended, then the resting metabolic rate equals 5.5 / 5 = 1.1 kcal/min. Unlike some related measurements (e.g. METs), RMR itself is not referenced to body mass and has no bearing on the energy density of the metabolism.
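The gas-exchange-to-energy transformation described above is commonly done with the abbreviated Weir equation. In the sketch below, the 3.941 and 1.106 coefficients are the usual published Weir values, and the example gas rates are illustrative assumptions chosen to land near the text's 1.1 kcal/min figure:

```python
def rmr_kcal_per_min(vo2_l_per_min, vco2_l_per_min):
    """Abbreviated Weir equation (protein oxidation neglected):
    energy expenditure in kcal/min from the O2 consumption and CO2
    production rates, both in L/min."""
    return 3.941 * vo2_l_per_min + 1.106 * vco2_l_per_min

# Resting gas rates of roughly VO2 = 0.23 L/min, VCO2 = 0.18 L/min:
print(round(rmr_kcal_per_min(0.23, 0.18), 2))  # 1.11
```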
A comprehensive treatment of confounding factors on BMR measurements was demonstrated as early as 1922 in Massachusetts by Engineering Professor Frank B. Sanborn, wherein descriptions of the effects of food, posture, sleep, muscular activity, and emotion provide criteria for separating BMR from RMR.
Indirect calorimetry
Pre-computer technologies
In the 1780s, for the French Academy of Sciences, Lavoisier, Laplace, and Seguin investigated and published relationships between direct calorimetry and respiratory gas exchanges from mammalian subjects.
https://en.wikipedia.org/wiki/Specific%20dynamic%20action | Specific dynamic action (SDA), also known as thermic effect of food (TEF) or dietary induced thermogenesis (DIT), is the amount of energy expenditure above the basal metabolic rate due to the cost of processing food for use and storage. Heat production by brown adipose tissue which is activated after consumption of a meal is an additional component of dietary induced thermogenesis. The thermic effect of food is one of the components of metabolism along with resting metabolic rate and the exercise component. A commonly used estimate of the thermic effect of food is about 10% of one's caloric intake, though the effect varies substantially for different food components. For example, dietary fat is very easy to process and has very little thermic effect, while protein is hard to process and has a much larger thermic effect.
Factors that affect the thermic effect of food
The thermic effect of food is increased both by aerobic training of sufficient duration and intensity and by anaerobic weight training. However, the increase is marginal, amounting to 7–8 calories per hour. The primary determinants of daily TEF are the total caloric content and the macronutrient composition of the meals ingested. Meal frequency has little to no effect on TEF, assuming total calorie intake for the day is equivalent.
Although some believe that TEF is reduced in obesity, discrepant results and inconsistent research methods have failed to validate such claims.
The mechanism of TEF is unknown. TEF has been described as the energy used in the distribution of nutrients and metabolic processes in the liver, but a hepatectomized animal shows no signs of TEF and intravenous injection of amino acids results in an effect equal to that of oral ingestion of the same amino acids.
Types of foods
The thermic effect of food is the energy required for digestion, absorption, and disposal of ingested nutrients. Its magnitude depends on the composition of the food consumed:
Carbohydrates
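The composition-dependent estimate can be sketched with midpoint values. The per-macronutrient TEF ranges below (roughly 20–30% for protein, 5–10% for carbohydrate, 0–3% for fat) are commonly cited textbook figures, stated here as assumptions rather than values from this article:

```python
# TEF as a fraction of each macronutrient's calories (assumed ranges):
TEF_RANGE = {"protein": (0.20, 0.30), "carbohydrate": (0.05, 0.10), "fat": (0.00, 0.03)}

def estimate_tef(kcal_by_macro):
    """Estimate a meal's thermic effect (kcal) using midpoint TEF values."""
    return sum(kcal * sum(TEF_RANGE[macro]) / 2
               for macro, kcal in kcal_by_macro.items())

# A 600 kcal meal: 200 kcal protein, 300 kcal carbohydrate, 100 kcal fat
meal = {"protein": 200, "carbohydrate": 300, "fat": 100}
tef = estimate_tef(meal)
print(round(tef, 1), round(100 * tef / 600, 1))  # 74.0 12.3
```

Under these assumptions, the protein-heavy meal lands above the rule-of-thumb 10% of intake, consistent with protein's larger thermic effect.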
https://en.wikipedia.org/wiki/Neurally%20controlled%20animat | A neurally controlled animat is the conjunction of
a cultured neuronal network, grown on a multielectrode array
a virtual or physical robotic body, the Animat, "living" in a virtual computer-generated environment or in a physical arena, connected to this array
Patterns of neural activity are used to control the virtual body, and the computer is used as a sensory device to provide electrical feedback to the neural network about the Animat's movement in the virtual environment.
The current aim of the Animat research is to study the neuronal activity and plasticity when learning and processing information in order to find a mathematical model for the neural network, and to determine how information is processed and encoded in the rat cortex.
It also leads to interesting questions about theories of consciousness.
https://en.wikipedia.org/wiki/Animat | Animat are artificial animals and is a contraction of animal and materials. The term includes physical robots and virtual simulations. The animat model includes features of a simple animal capable of interacting with its environment. It is, therefore, designed to simulate the ability to associate certain signals from the environment within a learning phase that indicate a potential for cognitive structure.
Animat research, a subset of Artificial Life studies, has become rather popular since Rodney Brooks' seminal paper "Intelligence without representation".
Development
"Animat" is derived from animat, the third-person present indicative of the Latin verb animō, which means "to animate, give or bring life". The term was coined by S.W. Wilson in 1985 in "Knowledge growth in an artificial animal", published in the first Proceedings of an International Conference on Genetic Algorithms and Their Applications. Wilson's conceptualization built on the work of W.G. Walter, particularly his three-wheeled robots equipped with sensors and propulsion motors. In Machina speculatrix, Walter introduced what can be described as a sub-animat, which chose actions based on needs and the sensory situation. A few rules were already introduced in this seminal work; there is, for instance, the linking of the speeds of the two motors to the level of illumination. Norbert Wiener's theories, postulated in the 1948 Cybernetics, are also said to have inspired the simulation of animals, particularly the brains and behaviors of frogs (Rana computatrix), rats, and monkeys.
In its early conceptualization, animats were built as simple creatures with simulated behaviors pertaining to genetic reproduction and natural selection. Wilson's animat, however, did not only interact with the environment but also learned from its "experience".
Theories and applications
An example using the animat model as proposed by Wilson is discussed at some length in chapter 9 of Stan Franklin's Artificial Minds.
https://en.wikipedia.org/wiki/Scombroid%20food%20poisoning | Scombroid food poisoning, also known as simply scombroid, is a foodborne illness that typically results from eating spoiled fish. Symptoms may include flushed skin, sweating, headache, itchiness, blurred vision, abdominal cramps, and diarrhea. Onset of symptoms is typically 10 to 60 minutes after eating and can last for up to two days. Rarely, breathing problems, difficulty swallowing, redness of the mouth, or an irregular heartbeat may occur.
Scombroid occurs from eating fish high in histamine due to inappropriate storage or processing. Fish commonly implicated include tuna, mackerel, mahi mahi, walu walu, sardine, anchovy, bonito, herring, bluefish, amberjack, and marlin. These fish naturally have high levels of histidine, which is converted to histamine when bacterial growth occurs during improper storage. Subsequent cooking, smoking, or freezing does not eliminate the histamine. Diagnosis is typically based on the symptoms and may be supported by a normal blood tryptase. If a number of people who eat the same fish develop symptoms, the diagnosis is more likely.
Prevention is by refrigerating or freezing fish right after it is caught. Treatment is generally with antihistamines such as diphenhydramine and ranitidine. Epinephrine may be used for severe symptoms. Along with ciguatera fish poisoning, it is one of the most common types of seafood poisoning. It occurs globally in both temperate and tropical waters. Only one death has been reported. The condition was first described in 1799.
Signs and symptoms
Symptoms typically occur within 10–30 minutes of ingesting the fish and generally are self-limited. People with asthma are more vulnerable to respiratory problems such as wheezing or bronchospasm. However, symptoms may appear more than two hours after eating a spoiled dish. They usually last for about 10 to 14 hours, and rarely exceed one to two days.
Initial
The first signs of poisoning suggest an allergic reaction with these symptoms:
facial flushing/sweating
https://en.wikipedia.org/wiki/Educational%20Commission%20for%20Foreign%20Medical%20Graduates | According to the US Department of Education, the Educational Commission for Foreign Medical Graduates is "the authorized credential evaluation and guidance agency for non-U.S. physicians and graduates of non-U.S. medical schools who seek to practice in the United States or apply for a U.S. medical residency program. It provides comprehensive information and resources on licensure, the U.S. Medical Licensure Examination (USMLE), residencies, and recognition."
Through its program of certification, the Educational Commission for Foreign Medical Graduates (ECFMG) assesses the readiness of international medical graduates to enter residency or fellowship programs in the United States that are accredited by the Accreditation Council for Graduate Medical Education (ACGME).
ECFMG acts as the registration and score-reporting agency for the USMLE for foreign medical students/ graduates, or in short, it acts as the designated Dean's office for International Medical Graduates (IMGs) in contrast to the American Medical Graduates (AMGs).
Medical schools in Canada that award the M.D. are not assessed by ECFMG, because the Liaison Committee on Medical Education historically accredited M.D.-granting institutions in both the U.S. and Canada (today, Canada has its own accrediting body that generally follows U.S. standards). M.D. graduates of American and Canadian institutions are not considered IMGs in either country.
History
ECFMG was founded in 1956 in response to the increased need to evaluate the readiness of international medical graduates entering the physician workforce during the 1950s expansion of the US healthcare system. Its initial name was Evaluation Service for Foreign Medical Graduates (ESFMG). Later that year, it was renamed Educational Council for Foreign Medical Graduates. In conjunction with NBME,
it created what became known as the ECFMG certification which included examinations and
assessments of English language proficiency. In 1974, it merged with the |
https://en.wikipedia.org/wiki/Price%E2%80%93sales%20ratio | Price–sales ratio, P/S ratio, or PSR, is a valuation metric for stocks. It is calculated by dividing the company's market capitalization by its revenue in the most recent year; or, equivalently, by dividing the per-share stock price by the per-share revenue.
The justified P/S ratio is calculated as the price-to-sales ratio based on the Gordon Growth Model. Thus, it is the price-to-sales ratio implied by the company's fundamentals rather than its current market price:

Justified P/S = (E0/S0) × (1 − b) × (1 + g) / (r − g),

where g is the sustainable growth rate as defined below, r is the required rate of return, E0/S0 is the net profit margin, and (1 − b) is the payout ratio.

Here g = b × ROE, the retention ratio b times the return on equity.
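As a numeric sketch of the justified P/S computation (all inputs are hypothetical; the formula used is the standard Gordon Growth formulation, justified P/S = profit margin × payout ratio × (1 + g)/(r − g), with g = retention ratio × ROE):

```python
def justified_ps(profit_margin, payout_ratio, roe, required_return):
    """Justified P/S under the Gordon Growth Model (standard textbook
    formulation). g = retention ratio * ROE must stay below r."""
    g = (1 - payout_ratio) * roe              # sustainable growth rate
    if required_return <= g:
        raise ValueError("required return must exceed growth rate")
    return profit_margin * payout_ratio * (1 + g) / (required_return - g)

# Hypothetical inputs: 8% margin, 40% payout, 12% ROE, 10% required return
ratio = justified_ps(0.08, 0.40, 0.12, 0.10)
```

With these invented inputs, g = 0.6 × 0.12 = 7.2% and the justified P/S works out to roughly 1.23.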
Unless otherwise stated, P/S is "trailing twelve months" (TTM), the reported sales for the four previous quarters, although of course longer time periods can be examined.
A smaller ratio (e.g., less than 1.0) is usually thought to indicate a better investment, since the investor is paying less for each unit of sales. However, sales do not reveal the whole picture: a company with a low P/S ratio may be unprofitable. Because of this limitation, the ratio is usually used only for unprofitable companies, since they have no price–earnings ratio (P/E ratio). The metric can be used to determine the value of a stock relative to its past performance. It may also be used to determine relative valuation of a sector or the market as a whole.
PSRs vary greatly from sector to sector, so they are most useful in comparing similar stocks within a sector or sub-sector.
Comparing P/S ratios carries the implicit assumption that all firms in the comparison have an identical capital structure. This is always a problematic assumption, but even more so when the assumption is made between industries, since industries often have vastly different typical capital structures (for example, a utility vs. a technology company). This is the reason why P/S ratios across industries vary widely.
See also
Financial ratio
Price–earnings ratio |
https://en.wikipedia.org/wiki/Cancer%20immunotherapy | Cancer immunotherapy (sometimes called immuno-oncology) is the stimulation of the immune system to treat cancer, improving on the immune system's natural ability to fight the disease. It is an application of the fundamental research of cancer immunology and a growing subspecialty of oncology.
Cancer immunotherapy exploits the fact that cancer cells often have tumor antigens, molecules on their surface that can bind to antibody proteins or T-cell receptors, triggering an immune system response. The tumor antigens are often proteins or other macromolecules (e.g., carbohydrates). Normal antibodies bind to external pathogens, but the modified immunotherapy antibodies bind to the tumor antigens marking and identifying the cancer cells for the immune system to inhibit or kill. Clinical success of cancer immunotherapy is highly variable between different forms of cancer; for instance, certain subtypes of gastric cancer react well to the approach whereas immunotherapy is not effective for other subtypes.
In 2018, American immunologist James P. Allison and Japanese immunologist Tasuku Honjo received the Nobel Prize in Physiology or Medicine for their discovery of cancer therapy by inhibition of negative immune regulation.
History
"During the 17th and 18th centuries, various forms of immunotherapy in cancer became widespread... In the 18th and 19th centuries, septic dressings enclosing ulcerative tumours were used for the treatment of cancer. Surgical wounds were left open to facilitate the development of infection, and purulent sores were created deliberately... One of the most well-known effects of microorganisms on ... cancer was reported in 1891, when an American surgeon, William Coley, inoculated patients having inoperable tumours with [ Streptococcus pyogenes ]." "Coley [had] thoroughly reviewed the literature available at that time and found 38 reports of cancer patients with accidental or iatrogenic feverish erysipelas. In 12 patients, the sarcoma or carcinoma had |
https://en.wikipedia.org/wiki/C.%20W.%20Woodworth%20Award | The C. W. Woodworth Award is an annual award presented by the Pacific Branch of the Entomological Society of America. This award, the PBESA's largest, is for achievement in entomology in the Pacific region of the United States over the previous ten years. The award is named in honor of Charles W. Woodworth and was established on June 25, 1968. It is principally sponsored by Woodworth's great-grandson, Brian Holden, and his wife, Joann Wilfert, with additional support by Dr. Craig W. and Kathryn Holden, and Dr. Jim and Betty Woodworth.
Award recipients
Source: Entomological Society of America
A box containing the older records of the PBESA, which likely includes the names of the first few recipients of the award, is located in the special collections section of the library at U.C. Davis.
See also
The John Henry Comstock Graduate Student Award
List of biology awards |
https://en.wikipedia.org/wiki/Prokarin | Prokarin (also known as Procarin) is an alternative medicine that consists of a mixture of histamine and caffeine. It is marketed as a treatment for fatigue in multiple sclerosis (MS); in the 1950s, multiple sclerosis was believed to be an allergic condition, and Prokarin was originally developed as a treatment on that premise. The idea behind the combination is that caffeine is a generally well-tolerated stimulant that can help treat fatigue. Like most stimulants, caffeine causes vasoconstriction, so the vasodilatory effect of concomitant histamine can counteract this. Prokarin is formulated as a transdermal patch because histamine is poorly absorbed when taken orally. While brand-name Prokarin can be expensive, it can be compounded by pharmacists if prescribed by a physician.
Quackwatch lists Prokarin as one of the multiple sclerosis "cures" of which people should be wary. While it doesn't treat the disease progression of MS, its usage for symptomatic management of fatigue in MS is controversial. |
https://en.wikipedia.org/wiki/Accretion%20%28astrophysics%29 | In astrophysics, accretion is the accumulation of particles into a massive object by gravitationally attracting more matter, typically gaseous matter, into an accretion disk. Most astronomical objects, such as galaxies, stars, and planets, are formed by accretion processes.
Overview
The accretion model that Earth and the other terrestrial planets formed from meteoric material was proposed in 1944 by Otto Schmidt, followed by the protoplanet theory of William McCrea (1960) and finally the capture theory of Michael Woolfson. In 1978, Andrew Prentice resurrected the initial Laplacian ideas about planet formation and developed the modern Laplacian theory. None of these models proved completely successful, and many of the proposed theories were descriptive.
The 1944 accretion model by Otto Schmidt was further developed in a quantitative way in 1969 by Viktor Safronov. He calculated, in detail, the different stages of terrestrial planet formation. Since then, the model has been further developed using intensive numerical simulations to study planetesimal accumulation. It is now accepted that stars form by the gravitational collapse of interstellar gas. Prior to collapse, this gas is mostly in the form of molecular clouds, such as the Orion Nebula. As the cloud collapses, losing potential energy, it heats up, gaining kinetic energy, and the conservation of angular momentum ensures that the cloud forms a flattened disk—the accretion disk.
Accretion of galaxies
A few hundred thousand years after the Big Bang, the Universe cooled to the point where atoms could form. As the Universe continued to expand and cool, the atoms lost enough kinetic energy, and dark matter coalesced sufficiently, to form protogalaxies. As further accretion occurred, galaxies formed. Indirect evidence is widespread. Galaxies grow through mergers and smooth gas accretion. Accretion also occurs inside galaxies, forming stars.
Accretion of stars
Stars are thought to form inside giant clouds of cold |
https://en.wikipedia.org/wiki/Boris%20Tsirelson | Boris Semyonovich Tsirelson (May 4, 1950 – January 21, 2020) (, ) was a Russian–Israeli mathematician and Professor of Mathematics at Tel Aviv University in Israel, as well as a Wikipedia editor.
Biography
Tsirelson was born in Leningrad to a Russian Jewish family. From his father Simeon's side, he was the great-nephew of rabbi Yehuda Leib Tsirelson, chief rabbi of Bessarabia from 1918 to 1941, and a prominent posek and Jewish leader. He obtained his Master of Science from the University of Leningrad and remained there to pursue graduate studies. He obtained his Ph.D. in 1975, with thesis "General properties of bounded Gaussian processes and related questions" written under the direction of Ildar Abdulovich Ibragimov.
Later, he participated in the refusenik movement, but only received permission to emigrate to Israel in 1991. From then until 2017, he was a professor at Tel-Aviv University.
In 1998 he was an Invited Speaker at the International Congress of Mathematicians in Berlin.
Contributions to mathematics
Tsirelson made notable contributions to probability theory and functional analysis. These include:
Tsirelson's bound, in quantum mechanics, is an inequality, related to the issue of quantum nonlocality.
Tsirelson space is an example of a reflexive Banach space into which neither an ℓp space nor a c0 space can be embedded.
Tsirelson's drift, a counterexample in the theory of stochastic differential equations: an SDE that has a weak solution but no strong solution.
The Gaussian isoperimetric inequality (proved by Vladimir Sudakov and Tsirelson, and independently by Christer Borell), stating that affine halfspaces are the isoperimetric sets for the Gaussian measure. |
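For concreteness, one standard statement of the inequality (supplied here as a sketch, not quoted from the source; Φ denotes the standard normal distribution function, γₙ the standard Gaussian measure on Rⁿ, and A_ε the ε-neighbourhood of A):

```latex
% If A \subseteq \mathbb{R}^n is Borel with \gamma_n(A) = \Phi(a),
% then for every \varepsilon \ge 0:
\gamma_n(A_\varepsilon) \;\ge\; \Phi(a + \varepsilon),
% with equality when A is an affine half-space.
```

Equality for half-spaces is what identifies them as the isoperimetric sets for the Gaussian measure.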
https://en.wikipedia.org/wiki/Natural%20Language%20Toolkit | The Natural Language Toolkit, or more commonly NLTK, is a suite of libraries and programs for symbolic and statistical natural language processing (NLP) for English written in the Python programming language. It supports classification, tokenization, stemming, tagging, parsing, and semantic reasoning functionalities. It was developed by Steven Bird and Edward Loper in the Department of Computer and Information Science at the University of Pennsylvania. NLTK includes graphical demonstrations and sample data. It is accompanied by a book that explains the underlying concepts behind the language processing tasks supported by the toolkit, plus a cookbook.
NLTK is intended to support research and teaching in NLP or closely related areas, including empirical linguistics, cognitive science, artificial intelligence, information retrieval, and machine learning.
NLTK has been used successfully as a teaching tool, as an individual study tool, and as a platform for prototyping and building research systems. It is used in courses at 32 universities in the US and in 25 countries.
Library highlights
Discourse representation
Lexical analysis: Word and text tokenizer
n-gram and collocations
Part-of-speech tagger
Tree model and Text chunker for capturing
Named-entity recognition
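The n-gram and collocation facilities listed above can be illustrated with a minimal pure-Python sketch (NLTK itself provides `nltk.ngrams` and dedicated collocation finders; this stand-alone version only mirrors the idea, so that no corpus downloads are needed):

```python
from collections import Counter

def ngrams(tokens, n):
    """Yield successive n-grams from a token list
    (a stand-in for nltk.ngrams)."""
    return zip(*(tokens[i:] for i in range(n)))

tokens = "the quick brown fox jumps over the lazy dog".split()
bigrams = list(ngrams(tokens, 2))
counts = Counter(bigrams)
# The most frequent bigrams act as crude collocation candidates.
top = counts.most_common(1)
```

In real use, NLTK's collocation finders rank candidate pairs by association measures (e.g. pointwise mutual information) rather than raw counts.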
See also
SpaCy |
https://en.wikipedia.org/wiki/Line%20Printer%20Daemon%20protocol | The Line Printer Daemon protocol/Line Printer Remote protocol (or LPD, LPR) is a network printing protocol for submitting print jobs to a remote printer. The original implementation of LPD was in the Berkeley printing system in the BSD UNIX operating system; the LPRng project also supports that protocol. The Common Unix Printing System (or CUPS), which is more common on modern Linux distributions and also found on Mac OS X, supports LPD as well as the Internet Printing Protocol (IPP). Commercial solutions are available that also use Berkeley printing protocol components, where more robust functionality and performance is necessary than is available from LPR/LPD (or CUPS) alone (such as might be required in large corporate environments). The LPD Protocol Specification is documented in RFC 1179.
Usage
A server for the LPD protocol listens for requests on TCP port 515. A request begins with a byte containing the request code, followed by the arguments to the request, and is terminated by an ASCII LF character.
An LPD printer is identified by the IP address of the server machine and the queue name on that machine. Many different queue names may exist on one LPD server, with each queue having unique settings. Note that the LPD queue name is case sensitive. Some modern implementations of LPD on network printers might ignore the case of the queue name, or ignore the queue name altogether and send all jobs to the same printer. Others have the option to automatically create a new queue when a print job with a new queue name is received. This helps to simplify the setup of the LPD server. Some companies (e.g. D-Link in model DP-301P+) have a tradition of calling the queue name “lpt1” or “LPT1”.
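The request framing described above can be sketched as follows (the 0x02 "receive a printer job" command code comes from RFC 1179; this is an illustrative encoder, not a full client):

```python
def lpd_receive_job_request(queue_name: str) -> bytes:
    """Build an RFC 1179 'receive a printer job' request:
    one command byte (0x02), the queue name, then a line feed."""
    return bytes([0x02]) + queue_name.encode("ascii") + b"\n"

req = lpd_receive_job_request("lpt1")   # queue names are case sensitive
```

A real client would open a TCP connection to port 515, send this request, and then follow the protocol's control-file and data-file subcommands.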
A printer that supports LPD/LPR is sometimes referred to as a "TCP/IP printer" (TCP/IP is used to establish connections between printers and clients on a network), although that term would be equally applicable to a printer that supports the Internet Printing Protocol.
See also
Lp (Unix)
LPRng
Le |
https://en.wikipedia.org/wiki/Client%20confidentiality | Client confidentiality is the principle that an institution or individual should not reveal information about their clients to a third party without the consent of the client or a clear legal reason. This concept, sometimes referred to as social systems of confidentiality, is outlined in numerous laws throughout many countries.
The access to a client's data as provided by the institution in question is usually limited to law enforcement agencies and requires some legal procedures to be accomplished prior to such action (e.g. a court order). This applies to bank account information or medical records. In some cases the data is by definition inaccessible to third parties and should never be revealed; this can include confidential information gathered by attorneys, psychiatrists, psychologists, or priests. One well-known result that can seem hard to reconcile is that of a priest hearing a murder confession but being unable to reveal details to the authorities. However, had it not been for the assumed confidentiality, it is unlikely that the information would have been shared in the first place, and to breach this trust would then discourage others from confiding in priests in the future. So, even if justice was served in that particular case (assuming the confession led to a correct conviction), it would result in fewer people taking part in what is generally considered a beneficial process. The same could be said of a patient sharing information with a psychiatrist, or a client seeking legal advice from a lawyer.
See also
Privilege (evidence)
Health Insurance Portability and Accountability Act |
https://en.wikipedia.org/wiki/Gene-centered%20view%20of%20evolution | The gene-centered view of evolution, gene's eye view, gene selection theory, or selfish gene theory holds that adaptive evolution occurs through the differential survival of competing genes, increasing the allele frequency of those alleles whose phenotypic trait effects successfully promote their own propagation. The proponents of this viewpoint argue that, since heritable information is passed from generation to generation almost exclusively by DNA, natural selection and evolution are best considered from the perspective of genes.
Proponents of the gene-centered viewpoint argue that it permits understanding of diverse phenomena such as altruism and intragenomic conflict that are otherwise difficult to explain from an organism-centered viewpoint.
The gene-centered view of evolution is a synthesis of the theory of evolution by natural selection, the particulate inheritance theory, and the rejection of transmission of acquired characters. It states that those alleles whose phenotypic effects successfully promote their own propagation will be favorably selected relative to their competitor alleles within the population. This process produces adaptations for the benefit of alleles that promote the reproductive success of the organism, or of other organisms containing the same allele (kin altruism and green-beard effects), or even its own propagation relative to the other genes within the same organism (selfish genes and intragenomic conflict).
Overview
The gene-centered view of evolution is a model for the evolution of social characteristics such as selfishness and altruism, with gene defined as "not just one single physical bit of DNA [but] all replicas of a particular bit of DNA distributed throughout the world".
Acquired characteristics
The formulation of the central dogma of molecular biology was summarized by Maynard Smith:
The rejection of the inheritance of acquired characters, combined with Ronald Fisher the statistician, giving the subject a mathematical |
https://en.wikipedia.org/wiki/Loan-to-value%20ratio | The loan-to-value (LTV) ratio is a financial term used by lenders to express the ratio of a loan to the value of an asset purchased.
In real estate, the term is commonly used by banks and building societies to represent the ratio of the first mortgage lien as a percentage of the total appraised value of real property. For instance, if someone borrows to purchase a house worth , the LTV ratio is or , or 87%. The remaining 13% represents the lender's haircut, adding up to 100% and being covered by the borrower's equity. The higher the LTV ratio, the riskier the loan is for a lender.
The valuation of a property is typically determined by an appraiser, but a better measure is an arms-length transaction between a willing buyer and a willing seller. Typically, banks will utilize the lesser of the appraised value and purchase price if the purchase is "recent" (within 1–2 years).
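The core calculation, together with the lesser-of-appraisal-and-price rule just described, can be sketched as follows (all figures hypothetical):

```python
def loan_to_value(loan_amount, appraised_value, purchase_price=None):
    """LTV as a fraction. For recent purchases, banks typically use the
    lesser of the appraised value and the purchase price."""
    value = appraised_value
    if purchase_price is not None:
        value = min(appraised_value, purchase_price)
    return loan_amount / value

# Hypothetical figures: a $130,000 loan on a house appraised at $150,000
ltv = loan_to_value(130_000, 150_000)
```

Here the LTV comes out to about 0.867, i.e. roughly 87%, leaving a 13% equity cushion.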
Risk
Loan to value is one of the key risk factors that lenders assess when qualifying borrowers for a mortgage. The risk of default is always at the forefront of lending decisions, and the likelihood of a lender absorbing a loss increases as the amount of equity decreases. Therefore, as the LTV ratio of a loan increases, the qualification guidelines for certain mortgage programs become much more strict. Lenders can require borrowers of high LTV loans to buy mortgage insurance to protect the lender from the buyer's default, which increases the costs of the mortgage.
Low LTV ratios (below 80%) may carry with them lower rates for lower-risk borrowers and allow lenders to consider higher-risk borrowers, such as those with low credit scores, previous late payments in their mortgage history, high debt-to-income ratios, high loan amounts or cash-out requirements, insufficient reserves and/or no income. However, an LTV higher than 80% may carry Mortgage Insurance requirements, which will in turn offer the borrower a lower interest rate. Higher LTV ratios are primarily reserved for borrowers with h |
https://en.wikipedia.org/wiki/Turion%20%28botany%29 | A turion (from Latin turio meaning "shoot") is a type of bud that is capable of growing into a complete plant. A turion may be an underground bud. Many members of the genus Epilobium are known to produce turions at or below ground level.
Some aquatic plant species produce overwintering turions, especially in the genera Potamogeton, Myriophyllum, Aldrovanda and Utricularia. These plants produce turions in response to unfavourable conditions such as decreasing day-length or reducing temperature.
They are derived from modified shoot apices and are often rich in starch and sugars, enabling them to act as storage organs. Although they are hardy (frost resistant), it is probable that their principal adaptation is their ability to sink to the bottom of a pond or lake when the water freezes. Because water expands anomalously below 4 °C, water at 4 °C is denser than colder water, and thus stays at the bottom of the pond or lake. Turions overwinter in this denser, warmer water before rising again in the spring. Some turions of aquatic plants such as Potamogeton crispus also exhibit drought resistance, allowing them to survive in temporary pools.
See also
Bulbils, which may resemble turions, but include specialized storage leaves
Hibernaculum (botany) |
https://en.wikipedia.org/wiki/Geometry%20of%20Complex%20Numbers | Geometry of Complex Numbers: Circle Geometry, Moebius Transformation, Non-Euclidean Geometry is an undergraduate textbook on geometry, whose topics include circles, the complex plane, inversive geometry, and non-Euclidean geometry. It was written by Hans Schwerdtfeger, and originally published in 1962 as Volume 13 of the Mathematical Expositions series of the University of Toronto Press. A corrected edition was published in 1979 in the Dover Books on Advanced Mathematics series of Dover Publications (). The Basic Library List Committee of the Mathematical Association of America has suggested its inclusion in undergraduate mathematics libraries.
Topics
The book is divided into three chapters, corresponding to the three parts of its subtitle: circle geometry, Möbius transformations, and non-Euclidean geometry. Each of these is further divided into sections (which in other books would be called chapters) and sub-sections. An underlying theme of the book is the representation of the Euclidean plane as the plane of complex numbers, and the use of complex numbers as coordinates to describe geometric objects and their transformations.
The chapter on circles covers the analytic geometry of circles in the complex plane. It describes the representation of circles by Hermitian matrices, the inversion of circles, stereographic projection, pencils of circles (certain one-parameter families of circles) and their two-parameter analogue, bundles of circles, and the cross-ratio of four complex numbers.
The chapter on Möbius transformations is the central part of the book, and defines these transformations as the fractional linear transformations of the complex plane (one of several standard ways of defining them). It includes material on the classification of these transformations, on the characteristic parallelograms of these transformations, on the subgroups of the group of transformations, on iterated transformations that either return to the identity (forming a periodic sequ |
https://en.wikipedia.org/wiki/Prime%20model | In mathematics, and in particular model theory, a prime model is a model that is as simple as possible. Specifically, a model is prime if it admits an elementary embedding into any model to which it is elementarily equivalent (that is, into any model satisfying the same complete theory as ).
Cardinality
In contrast with the notion of saturated model, prime models are restricted to very specific cardinalities by the Löwenheim–Skolem theorem. If L is a first-order language with cardinality κ and T is a complete theory over L, then this theorem guarantees a model for T of cardinality at most max(κ, ℵ₀). Therefore no prime model of T can have larger cardinality, since at the very least it must be elementarily embedded in such a model. This still leaves much ambiguity in the actual cardinality. In the case of countable languages, all prime models are at most countably infinite.
Relationship with saturated models
There is a duality between the definitions of prime and saturated models. Half of this duality is discussed in the article on saturated models, while the other half is as follows. While a saturated model realizes as many types as possible, a prime model realizes as few as possible: it is an atomic model, realizing only the types that cannot be omitted and omitting the remainder. This may be interpreted in the sense that a prime model admits "no frills": any characteristic of a model that is optional is ignored in it.
For example, the model (N, S) is a prime model of the theory of the natural numbers N with a successor operation S; a non-prime model might be N together with a disjoint copy of the integers, meaning that there is a copy of the full integers that lies disjoint from the original copy of the natural numbers within this model; in this add-on, arithmetic works as usual. These models are elementarily equivalent; their theory admits the following axiomatization (verbally):
There is a unique element that is not the successor of any element;
No two distinct elements have the same successor;
No element satisfies Sn(x) = |
https://en.wikipedia.org/wiki/Allometry | Allometry is the study of the relationship of body size to shape, anatomy, physiology and finally behaviour, first outlined by Otto Snell in 1892, by D'Arcy Thompson in 1917 in On Growth and Form and by Julian Huxley in 1932.
Overview
Allometry is a well-known study, particularly in statistical shape analysis for its theoretical developments, as well as in biology for practical applications to the differential growth rates of the parts of a living organism's body. One application is in the study of various insect species (e.g., Hercules beetles), where a small change in overall body size can lead to an enormous and disproportionate increase in the dimensions of appendages such as legs, antennae, or horns. The relationship between the two measured quantities is often expressed as a power-law (allometric) equation which expresses a remarkable scale symmetry:

y = k x^a,

or in logarithmic form,

log y = a log x + log k,

where a is the scaling exponent of the law. Methods for estimating this exponent from data can use type-2 regressions, such as major axis regression or reduced major axis regression, as these account for the variation in both variables, contrary to least-squares regression, which does not account for error variance in the independent variable (e.g., log body mass). Other methods include measurement-error models and a particular kind of principal component analysis.
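A minimal sketch of the simplest estimation approach, ordinary least squares on log-transformed data (note this is exactly the method the text cautions about; the type-2 regressions it recommends apportion error to both variables, which this sketch does not do):

```python
import math

def fit_allometric(x, y):
    """Fit y = k * x**a by least squares on (log x, log y);
    returns (a, k). Plain OLS, not a major-axis variant."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    a = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    k = math.exp(my - a * mx)
    return a, k

# Noise-free data generated from y = 2 * x**0.75 recovers the exponent.
xs = [1.0, 2.0, 4.0, 8.0]
ys = [2.0 * v ** 0.75 for v in xs]
a, k = fit_allometric(xs, ys)
```

On noisy biological data the choice between OLS and major-axis regression can shift the estimated exponent noticeably, which is why the text's caveat matters.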
The allometric equation can also be acquired as a solution of the differential equation
Allometry often studies shape differences in terms of ratios of the objects' dimensions. Two objects of different size, but common shape, have their dimensions in the same ratio. Take, for example, a biological object that grows as it matures. Its size changes with age, but the shapes are similar. Studies of ontogenetic allometry often use lizards or snakes as model organisms both because they lack parental care after birth or hatching and because they exhibit a large range of body sizes between the juv |
https://en.wikipedia.org/wiki/Multimap | In computer science, a multimap (sometimes also multihash, multidict or multidictionary) is a generalization of a map or associative array abstract data type in which more than one value may be associated with and returned for a given key. Both map and multimap are particular cases of containers (for example, see C++ Standard Template Library containers). Often the multimap is implemented as a map with lists or sets as the map values.
Examples
In a student enrollment system, where students may be enrolled in multiple classes simultaneously, there might be an association for each enrollment of a student in a course, where the key is the student ID and the value is the course ID. If a student is enrolled in three courses, there will be three associations containing the same key.
The index of a book may report any number of references for a given index term, and thus may be coded as a multimap from index terms to any number of reference locations or pages.
Querystrings may have multiple values associated with a single field. This is commonly generated when a web form allows multiple check boxes or selections to be chosen in response to a single form element.
Language support
C++
C++'s Standard Template Library provides the multimap container for the sorted multimap using a self-balancing binary search tree, and SGI's STL extension provides the hash_multimap container, which implements a multimap using a hash table.
As of C++11, the Standard Template Library provides the unordered_multimap for the unordered multimap.
Dart
Quiver provides a Multimap for Dart.
Java
Apache Commons Collections provides a MultiMap interface for Java. It also provides a MultiValueMap implementing class that makes a MultiMap out of a Map object and a type of Collection.
Google Guava provides a Multimap interface and implementations of it.
Python
Python provides a collections.defaultdict class that can be used to create a multimap. The user can instantiate the class as collections.de |
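The defaultdict approach can be sketched with the student-enrollment example from earlier (IDs invented for illustration):

```python
from collections import defaultdict

# A multimap as a dict whose values are lists: one key, many values.
enrollments = defaultdict(list)
enrollments["student42"].append("MATH101")
enrollments["student42"].append("CS201")
enrollments["student42"].append("PHYS150")
enrollments["student7"].append("CS201")

courses = enrollments["student42"]   # every value stored under one key
```

Using `defaultdict(set)` instead of `defaultdict(list)` gives the set-valued variant, which discards duplicate key–value associations.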
https://en.wikipedia.org/wiki/MPLS%20VPN | MPLS VPN is a family of methods for using Multiprotocol Label Switching (MPLS) to create virtual private networks (VPNs). MPLS VPN is a flexible method to transport and route several types of network traffic using an MPLS backbone.
There are three types of MPLS VPNs deployed in networks today:
1. Point-to-point (Pseudowire)
2. Layer 2 (VPLS)
3. Layer 3 (VPRN)
Point-to-point (pseudowire)
Point-to-point MPLS VPNs employ VLL (virtual leased lines) for providing Layer 2 point-to-point connectivity between two sites. Ethernet, TDM, and ATM frames can be encapsulated within these VLLs.
Some examples of how point-to-point VPNs might be used by utilities include:
encapsulating TDM T1 circuits attached to Remote Terminal Units
forwarding non-routed DNP3 traffic across the backbone network to the SCADA master controller.
Layer 2 VPN (VPLS)
Layer 2 MPLS VPNs, or VPLS (virtual private LAN service), offer a “switch in the cloud” style service. VPLS provides the ability to span VLANs between sites. L2 VPNs are typically used to route voice, video, and AMI traffic between substation and data center locations.
Layer 3 VPN (VPRN)
Layer 3, or VPRN (virtual private routed network), utilizes layer-3 VRF (virtual routing and forwarding) to segment routing tables for each customer utilizing the service. The customer peers with the service provider router and the two exchange routes, which are placed into a routing table specific to the customer. Multiprotocol BGP (MP-BGP) is required in the cloud to utilize the service, which increases complexity of design and implementation. L3 VPNs are typically not deployed on utility networks due to their complexity; however, a L3 VPN could be used to route traffic between corporate or datacenter locations.
See also
Segment Routing
Ethernet VPN
External links
RFC 4364, BGP/MPLS IP Virtual Private Networks (VPNs)
Virtual Private Network (VPN): A Very Detailed Guide for Newbies
MPLS networking
Virtual private networks |
https://en.wikipedia.org/wiki/Panela | Panela () or rapadura (Portuguese pronunciation: ) is an unrefined whole cane sugar, typical of Central and Latin America. It is a solid form of sucrose derived from the boiling and evaporation of sugarcane juice. Panela is known by other names in Latin America, such as chancaca in Chile, Bolivia, and Peru, piloncillo in Mexico (where panela refers to a type of cheese, queso panela). Just like brown sugar, two varieties of piloncillo are available; one is lighter (blanco) and one darker (oscuro). Unrefined, it is commonly used in Mexico, where it has been around for at least 500 years. Made from crushed sugar cane, the juice is collected, boiled, and poured into molds, where it hardens into blocks. Elsewhere in the world, the word jaggery describes a similar foodstuff. Both are considered non-centrifugal cane sugars.
Panela is sold in many forms, including liquid, granulated, and solid blocks, and is used in the canning of foods, as well as in confectionery, soft drinks, baking, and vinegar, beer, and winemaking.
Regional names
Chancaca in Bolivia, Chile and Peru; also the name of a sweet sauce made from this
Dulce de panela or dulce de atado in El Salvador
Đường phên in Vietnam
Gura in Afghanistan
Gurr in Pakistan
Jaggery, Bella (ಬೆಲ್ಲ), Gur, Sharkara, or Vellam in India
Nam oy in Laos
Panela in Colombia, Ecuador, and Venezuela
Panocha in the Mexican State of Sinaloa and the Philippines
Papelón in Venezuela
Piloncillo ("little pylon", so named for the cone shape) in Mexico and Spain
Rapadou in Haiti
Rapadura in Argentina, Brazil, Cuba, Guatemala, Honduras, Panama, Paraguay, and the Dominican Republic
Raspadura in Cuba, Ecuador, and Panama
Tapa de dulce or Dulce (de tapa) in Costa Rica and Nicaragua
Economics
The main producer of panela is Colombia (about 1.4 million tons/year), where panela production is one of the most important economic activities, with the highest index of panela consumption per capita world |
https://en.wikipedia.org/wiki/Rework%20%28electronics%29 | Rework (or re-work) is the term for the refinishing operation or repair of an electronic printed circuit board (PCB) assembly, usually involving desoldering and re-soldering of surface-mounted electronic components (SMD). Mass processing techniques are not applicable to single device repair or replacement, and specialized manual techniques by expert personnel using appropriate equipment are required to replace defective components; area array packages such as ball grid array (BGA) devices particularly require expertise and appropriate tools. A hot air gun or hot air station is used to heat devices and melt solder, and specialised tools are used to pick up and position often tiny components.
A rework station is a place to do this work—the tools and supplies for this work, typically on a workbench. Other kinds of rework require other tools.
Reasons for rework
Rework is practiced in many kinds of manufacturing when defective products are found.
For electronics, defects may include:
Poor solder joints because of faulty assembly or thermal cycling.
Solder bridges—unwanted drops of solder that connect points that should be isolated from each other.
Faulty components.
Engineering parts changes, upgrades, etc.
Components broken due to natural wear, physical stress or excessive current.
Components damaged due to liquid ingress, leading to corrosion, weak solder joints or physical damage.
Process
The rework may involve several components, which must be worked on one by one without damage to surrounding parts or the PCB itself. All parts not being worked on are protected from heat and damage. Thermal stress on the electronic assembly is kept as low as possible to prevent unnecessary contractions of the board which might cause immediate or future damage.
In the 21st century, almost all soldering is carried out with lead-free solder, both on manufactured assemblies and in rework, to avoid the health and environmental hazards of lead. Where this precaution is not n |
https://en.wikipedia.org/wiki/Adaptive%20immune%20system | The adaptive immune system, also known as the acquired immune system, or specific immune system is a subsystem of the immune system that is composed of specialized, systemic cells and processes that eliminate pathogens or prevent their growth. The acquired immune system is one of the two main immunity strategies found in vertebrates (the other being the innate immune system).
Like the innate system, the adaptive immune system includes both humoral immunity components and cell-mediated immunity components and destroys invading pathogens. Unlike the innate immune system, which is pre-programmed to react to common broad categories of pathogen, the adaptive immune system is highly specific to each particular pathogen the body has encountered.
Adaptive immunity creates immunological memory after an initial response to a specific pathogen, and leads to an enhanced response to future encounters with that pathogen. Antibodies are a critical part of the adaptive immune system. Adaptive immunity can provide long-lasting protection, sometimes for the person's entire lifetime. For example, someone who recovers from measles is now protected against measles for their lifetime; in other cases it does not provide lifetime protection, as with chickenpox. This process of adaptive immunity is the basis of vaccination.
The cells that carry out the adaptive immune response are white blood cells known as lymphocytes. B cells and T cells, two different types of lymphocytes, carry out the main activities: antibody responses and cell-mediated immune responses. In antibody responses, B cells are activated to secrete antibodies, which are proteins also known as immunoglobulins. Antibodies travel through the bloodstream and bind to the foreign antigen, inactivating it so that it cannot bind to the host. Antigens are any substances that elicit the adaptive immune response. Sometimes the adaptive system is unable to distinguish harmful from harmless foreign molecule
https://en.wikipedia.org/wiki/Physical%20Society%20of%20Japan | The Physical Society of Japan (JPS; 日本物理学会 in Japanese) is the organisation of physicists in Japan. There are about 16,000 members, including university professors, researchers, educators, and engineers.
The origins of the JPS go back to the establishment of the Tokyo Mathematical Society in 1877, as the first society in natural science in Japan. After being renamed twice, as Tokyo Mathematical and Physical Society in 1884 and as Physico-Mathematical Society of Japan in 1919, it eventually separated into two in 1946, and the Physical Society of Japan was formed. Takeo Shimizu (清水武雄), a contributor to the improvements to the Wilson cloud chamber and the last President of the Physico-Mathematical Society, was also the first president of JPS.
Purpose
The primary purposes of the JPS are to publish research reports of its members and to provide its members with facilities relating to physics.
Reciprocal agreements
The JPS has established reciprocal agreements with seven physical societies such as the American Physical Society, German Physical Society, Mexican Physical Society and Korean Physical Society so that the members of one society can participate in the activities of the other on an equal partner basis.
Publications
Journal of the Physical Society of Japan
Progress of Theoretical Physics
See also
Applied Physics Express
Japan Society of Applied Physics
Optical Review
Optical Society of Japan
Mathematical Society of Japan
External links
The Physical Society of Japan |
https://en.wikipedia.org/wiki/Mereotopology | In formal ontology, a branch of metaphysics, and in ontological computer science, mereotopology is a first-order theory, embodying mereological and topological concepts, of the relations among wholes, parts, parts of parts, and the boundaries between parts.
History and motivation
Mereotopology begins in philosophy with theories articulated by A. N. Whitehead in several books and articles he published between 1916 and 1929, drawing in part on the mereogeometry of De Laguna (1922). The first to have proposed the idea of a point-free definition of the concept of topological space in mathematics was Karl Menger in his book Dimensionstheorie (1928); see also his (1940). The early historical background of mereotopology is documented in Bélanger and Marquis (2013) and Whitehead's early work is discussed in Kneebone (1963: ch. 13.5) and Simons (1987: 2.9.1). The theory of Whitehead's 1929 Process and Reality augmented the part-whole relation with topological notions such as contiguity and connection. Despite Whitehead's acumen as a mathematician, his theories were insufficiently formal, even flawed. By showing how Whitehead's theories could be fully formalized and repaired, Clarke (1981, 1985) founded contemporary mereotopology. The theories of Clarke and Whitehead are discussed in Simons (1987: 2.10.2), and Lucas (2000: ch. 10). The entry Whitehead's point-free geometry includes two contemporary treatments of Whitehead's theories, due to Giangiacomo Gerla, each different from the theory set out in the next section.
Although mereotopology is a mathematical theory, we owe its subsequent development to logicians and theoretical computer scientists. Lucas (2000: ch. 10) and Casati and Varzi (1999: ch. 4,5) are introductions to mereotopology that can be read by anyone having done a course in first-order logic. More advanced treatments of mereotopology include Cohn and Varzi (2003) and, for the mathematically sophisticated, Roeper (1997). For a mathematical treatment of poin |
https://en.wikipedia.org/wiki/UniProt | UniProt is a freely accessible database of protein sequence and functional information, many entries being derived from genome sequencing projects. It contains a large amount of information about the biological function of proteins derived from the research literature. It is maintained by the UniProt consortium, which consists of several European bioinformatics organisations and a foundation from Washington, DC, United States.
The UniProt consortium
The UniProt consortium comprises the European Bioinformatics Institute (EBI), the Swiss Institute of Bioinformatics (SIB), and the Protein Information Resource (PIR). EBI, located at the Wellcome Trust Genome Campus in Hinxton, UK, hosts a large resource of bioinformatics databases and services. SIB, located in Geneva, Switzerland, maintains the ExPASy (Expert Protein Analysis System) servers that are a central resource for proteomics tools and databases. PIR, hosted by the National Biomedical Research Foundation (NBRF) at the Georgetown University Medical Center in Washington, DC, US, is heir to the oldest protein sequence database, Margaret Dayhoff's Atlas of Protein Sequence and Structure, first published in 1965. In 2002, EBI, SIB, and PIR joined forces as the UniProt consortium.
The roots of the UniProt databases
Each consortium member is heavily involved in protein database maintenance and annotation. Until recently, EBI and SIB together produced the Swiss-Prot and TrEMBL databases, while PIR produced the Protein Sequence Database (PIR-PSD). These databases coexisted with differing protein sequence coverage and annotation priorities.
Swiss-Prot was created in 1986 by Amos Bairoch during his PhD, developed by the Swiss Institute of Bioinformatics, and subsequently developed further by Rolf Apweiler at the European Bioinformatics Institute. Swiss-Prot aimed to provide reliable protein sequences associated with a high level of annotation (such as the description of the function of a protein, its domain structure, post-
https://en.wikipedia.org/wiki/Maculopapular%20rash | A maculopapular rash is a type of rash characterized by a flat, red area on the skin that is covered with small confluent bumps. It may only appear red in lighter-skinned people. The term "maculopapular" is a compound: macules are small, flat discolored spots on the surface of the skin; and papules are small, raised bumps. It is also described as erythematous, or red.
This type of rash is common in several diseases and medical conditions, including scarlet fever, measles, Ebola virus disease, rubella, HIV, secondary syphilis (in congenital syphilis, which may otherwise be asymptomatic, the newborn may present with this type of rash), erythrovirus (parvovirus B19), chikungunya (alphavirus), zika, smallpox (which has been eradicated), varicella (when vaccinated persons exhibit symptoms from the modified form), heat rash, and sometimes dengue fever. It is also a common manifestation of a skin reaction to the antibiotic amoxicillin or chemotherapy drugs. Cutaneous infiltration of leukemic cells may also have this appearance. Maculopapular rash is seen in graft-versus-host disease (GVHD) developed after a hematopoietic stem cell transplant (bone marrow transplant), appearing within one week or several weeks after the transplant. In the case of GVHD, the maculopapular lesions may progress to a condition similar to toxic epidermal necrolysis. This is also the type of rash that some patients with Ebola virus (EBO-Z) hemorrhagic fever present, though it can be hard to see in dark-skinned people. It is also seen in patients with Marburg hemorrhagic fever, caused by a filovirus not unlike Ebola.
This type of rash can result from large doses of niacin or no-flush niacin (2000–2500 mg), used for the management of low HDL cholesterol.
This type of rash can also be a symptom of Sea bather's eruption. This stinging, pruritic, maculopapular rash affects swimmers in some Atlantic locales (e.g., Florida, Caribbean, Long Island). It is caused by hypersensitivity to stings from the |
https://en.wikipedia.org/wiki/Dodecagonal%20number | A dodecagonal number is a figurate number that represents a dodecagon. The dodecagonal number for n is given by the formula D(n) = 5n² - 4n.
The first few dodecagonal numbers are:
0, 1, 12, 33, 64, 105, 156, 217, 288, 369, 460, 561, 672, 793, 924, 1065, 1216, 1377, 1548, 1729, 1920, 2121, 2332, 2553, 2784, 3025, 3276, 3537, 3808, 4089, 4380, 4681, 4992, 5313, 5644, 5985, 6336, 6697, 7068, 7449, 7840, 8241, 8652, 9073, 9504, 9945 ...
Properties
The dodecagonal number for n can be calculated by adding the square of n to four times the (n - 1)th pronic number, or to put it algebraically, D(n) = n² + 4n(n - 1) = 5n² - 4n.
Dodecagonal numbers consistently alternate parity, and in base 10, their units place digits follow the pattern 1, 2, 3, 4, 5, 6, 7, 8, 9, 0.
By the Fermat polygonal number theorem, every number is the sum of at most 12 dodecagonal numbers.
The n-th dodecagonal number is the sum of the first n natural numbers congruent to 1 mod 10.
The sum of all odd numbers from 4n + 1 to 6n + 1 is the (n + 1)-th dodecagonal number.
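The formula and the pronic-number identity above can be checked with a short sketch (the function names here are illustrative, not standard):

```python
def dodecagonal(n):
    """n-th dodecagonal number: D(n) = 5*n**2 - 4*n, for n = 0, 1, 2, ..."""
    return n * (5 * n - 4)

def pronic(n):
    """n-th pronic number: n * (n + 1)."""
    return n * (n + 1)

# First terms match the listed sequence.
print([dodecagonal(n) for n in range(8)])  # [0, 1, 12, 33, 64, 105, 156, 217]

# D(n) = n^2 + 4 * pronic(n - 1): the identity from the Properties section.
assert all(dodecagonal(n) == n**2 + 4 * pronic(n - 1) for n in range(1, 100))

# Units digits cycle 1, 2, ..., 9, 0 in base 10 (for n >= 1).
print([dodecagonal(n) % 10 for n in range(1, 11)])  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
```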
See also
Polygonal number
Figurate number
Dodecagon
Figurate numbers |
https://en.wikipedia.org/wiki/Erythorbic%20acid | Erythorbic acid (isoascorbic acid, D-araboascorbic acid) is a stereoisomer of ascorbic acid (vitamin C). It is synthesized by a reaction between methyl 2-keto-D-gluconate and sodium methoxide. It can also be synthesized from sucrose or by strains of Penicillium that have been selected for this feature. It is denoted by E number E315, and is widely used as an antioxidant in processed foods.
Clinical trials have been conducted to investigate aspects of the nutritional value of erythorbic acid. One such trial investigated the effects of erythorbic acid on vitamin C metabolism in young women; no effect on vitamin C uptake or clearance from the body was found. A later study found that erythorbic acid is a potent enhancer of nonheme-iron absorption.
Since the U.S. Food and Drug Administration banned the use of sulfites as a preservative in foods intended to be eaten fresh (such as salad bar ingredients), the use of erythorbic acid as a food preservative has increased.
It is also used as a preservative in cured meats and frozen vegetables.
It was first synthesized in 1933 by the German chemists Kurt Maurer and Bruno Schiedt. |
https://en.wikipedia.org/wiki/D.%20R.%20Kaprekar | Dattatreya Ramchandra Kaprekar (; 17 January 1905 – 1986) was an Indian recreational mathematician who described several classes of natural numbers including the Kaprekar, harshad and self numbers and discovered the Kaprekar's constant, named after him. Despite having no formal postgraduate training and working as a schoolteacher, he published extensively and became well known in recreational mathematics circles.
Biography
Kaprekar received his secondary school education in Thane and studied at Fergusson College in Pune. In 1927, he won the Wrangler R. P. Paranjpye Mathematical Prize for an original piece of work in mathematics.
He attended the University of Mumbai, receiving his bachelor's degree in 1929. Having never received any formal postgraduate training, for his entire career (1930–1962) he was a schoolteacher at the government junior school in Devlali, Maharashtra, India. Cycling from place to place, he also tutored private students with unconventional methods, cheerfully sitting by a river and "thinking of theorems". He published extensively, writing about such topics as recurring decimals, magic squares, and integers with special properties. He is also known as "Ganitanand".
Discoveries
Working largely alone, Kaprekar discovered a number of results in number theory and described various properties of numbers. In addition to the Kaprekar's constant and the Kaprekar numbers which were named after him, he also described self numbers or Devlali numbers, the harshad numbers and Demlo numbers. He also constructed certain types of magic squares related to the Copernicus magic square. Initially his ideas were not taken seriously by Indian mathematicians, and his results were published largely in low-level mathematics journals or privately published, but international fame arrived when Martin Gardner wrote about Kaprekar in his March 1975 column of Mathematical Games for Scientific American. Today his name is well-known and many other mathematicians have pursu |
https://en.wikipedia.org/wiki/Immunogen | An immunogen is any substance that generates B-cell (humoral/antibody) and/or T-cell (cellular) adaptive immune responses upon exposure to a host organism. Immunogens that generate antibodies are called antigens ("antibody-generating"). Immunogens that generate antibodies are directly bound by host antibodies and lead to the selective expansion of antigen-specific B-cells. Immunogens that generate T-cells are indirectly bound by host T-cells after processing and presentation by host antigen-presenting cells.
An immunogen can be defined as a complete antigen which is composed of the macromolecular carrier and epitopes (determinants) that can induce immune response.
An explicit example is a hapten. Haptens are low-molecular-weight compounds that may be bound by antibodies, but cannot elicit an immune response. Consequently, the haptens themselves are nonimmunogenic and they cannot evoke an immune response until they bind with a larger carrier immunogenic molecule. The hapten-carrier complex, unlike free hapten, can act as an immunogen and can induce an immune response.
Until 1959, the terms immunogen and antigen were not distinguished.
Used carrier proteins
Keyhole limpet hemocyanin
It is a copper-containing respiratory protein isolated from keyhole limpets (Megathura crenulata). Because of its evolutionary distance from mammals, high molecular weight, and complex structure, it is usually immunogenic in vertebrate animals.
Concholepas concholepas hemocyanin
(also blue carrier immunogenic protein) It is an alternative to KLH, isolated from Concholepas concholepas. It has similar immunogenic properties to KLH but better solubility and therefore greater flexibility.
Bovine serum albumin
It is obtained from the blood sera of cows and has similar immunogenic properties to KLH or CCH. The cationized form of BSA (cBSA) is a highly positively charged protein with significantly increased immunogenicity. This modification allows a greater number of antigens to be conjugated to t
https://en.wikipedia.org/wiki/Playskool | Playskool is an American brand of educational toys and games for children. The former Playskool manufacturing company was a subsidiary of the Milton Bradley Company and was headquartered in Pawtucket, Rhode Island. Playskool's last remaining plant in Chicago was shut down in 1984, and Playskool became a brand of Hasbro, which had acquired Milton Bradley that same year.
History
The "Playskool Institute" was established by Lucille King in 1928 as a division of the John Schroeder Lumber Company in Milwaukee, Wisconsin. King, an employee at the company, developed wooden toys to use as teaching aids for children in the classroom. In 1935, the Playskool Institute became a division of Thorncraft, Inc., and established offices in Chicago, Illinois. In 1938, Playskool was purchased by the Joseph Lumber Company, where Manuel Fink was placed in charge of operations. In 1940, Fink, along with Robert Meythaler, bought Playskool and established the "Playskool Manufacturing Company".
In 1943, Playskool bought the J.L. Wright Company, the manufacturer of Lincoln Logs. In 1958, Playskool merged with Holgate Toys, Inc., a wood product manufacturer based in Kane, Pennsylvania. In 1962, they purchased the Halsam Company, a producer of wooden blocks, checkers, dominoes, and construction sets. In 1968, Playskool became a subsidiary of Milton Bradley; both companies were acquired by Hasbro, Inc. in 1984.
After the acquisition, Playskool began operating out of Pawtucket, Rhode Island as a division of Hasbro. In 1985, Playskool released a line of infant products under the Tommee Tippee brand name, including bibs and bottles. Many Hasbro products targeted at preschoolers were rebranded with the Playskool name, including Play-Doh, and Tonka. Playskool also began licensing toys from other designers, creating licensing agreements to manufacture Teddy Ruxpin, Barney, Arthur, Teletubbies, and Nickelodeon branded products. Hasbro also began licensing the Playskool brand name to other vendors, |
https://en.wikipedia.org/wiki/Personnel%20selection | Personnel selection is the methodical process used to hire (or, less commonly, promote) individuals. Although the term can apply to all aspects of the process (recruitment, selection, hiring, onboarding, acculturation, etc.), the most common meaning focuses on the selection of workers. In this respect, selected prospects are separated from rejected applicants with the intention of choosing the person who will be the most successful and make the most valuable contributions to the organization. Its effect on the group is discerned when the selected accomplish their desired impact on the group, through achievement or tenure. The selection procedure follows a strategy for gathering information about a person in order to determine whether that individual should be employed. The methods used must comply with the various laws governing personnel selection.
Overview
The professional standards of industrial-organizational psychologists (I-O psychologists) require that any selection system be based on a job analysis to ensure that the selection criteria are job-related. The requirements for a selection system are characteristics known as KSAOs – knowledge, skills, ability, and other characteristics. US law also recognizes bona fide occupational qualifications (BFOQs), which are requirements for a job which would be considered discriminatory if not necessary – such as only employing men as wardens of maximum-security male prisons, enforcing a mandatory retirement age for airline pilots, a religious college only employing professors of its religion to teach its theology, or a modeling agency only hiring women to model women's clothing.
Personnel selection systems employ evidence-based practices to determine the most qualified candidates and involve both the newly hired and those individuals who can be promoted from within the organization.
In this respect, selection of personnel has "validity" if a clear relationship can be shown between the system itself
https://en.wikipedia.org/wiki/Membranous%20glomerulonephritis | Membranous glomerulonephritis (MGN) is a slowly progressive disease of the kidney affecting mostly people between the ages of 30 and 50 years, usually white people (i.e., those of European, Middle Eastern, or North African ancestry).
It is the second most common cause of nephrotic syndrome in adults, with focal segmental glomerulosclerosis (FSGS) recently becoming the most common.
Signs and symptoms
Most people will present as nephrotic syndrome, with the triad of albuminuria, edema and low serum albumin (with or without kidney failure). High blood pressure and high cholesterol are often also present. Others may not have symptoms and may be picked up on screening, with urinalysis finding high amounts of protein loss in the urine. A definitive diagnosis of membranous nephropathy requires a kidney biopsy, though given the very high specificity of anti-PLA2R antibody positivity, this can sometimes be avoided in patients with nephrotic syndrome and preserved kidney function.
Causes
Traditional definitions split membranous nephropathy into 'primary/idiopathic' or 'secondary'. It is likely that the field will instead move to a novel classification based on the specific autoantigen detected, though given the current lack of clinical assays (other than for PLA2R autoantibodies) this may still be several years off.
Primary/idiopathic
Traditionally 85% of MGN cases are classified as primary membranous glomerulonephritis—that is to say, the cause of the disease is idiopathic (of unknown origin or cause). This can also be referred to as idiopathic membranous nephropathy.
Antibodies to the M-type phospholipase A2 receptor (PLA2R) are responsible for around 60% of cases of membranous nephropathy. Testing for these anti-PLA2R antibodies has revolutionised the diagnosis and treatment of this disease in antibody-positive patients, and tracking the titre level over time allows clinicians to predict the risk of disease progression and the chance of spontaneous remission. There is little secondary disease association with P
https://en.wikipedia.org/wiki/Electronic%20waste%20recycling | Electronic waste recycling, electronics recycling or e-waste recycling is the disassembly and separation of components and raw materials of waste electronics; when referring to specific types of e-waste, the terms like computer recycling or mobile phone recycling may be used. Like other waste streams, re-use, donation and repair are common sustainable ways to dispose of IT waste.
Since its inception in the early 1990s, more and more devices are recycled worldwide due to increased awareness and investment. Electronic recycling occurs primarily in order to recover valuable rare earth metals and precious metals, which are in short supply, as well as plastics and metals. These are resold or used in new devices after purification, in effect creating a circular economy. Such processes involve specialised facilities and premises, but within the home or ordinary workplace, sound components of damaged or obsolete computers can often be reused, reducing replacement costs.
Recycling is considered environmentally friendly because it prevents hazardous waste, including heavy metals and carcinogens, from entering the atmosphere, landfill or waterways. While electronics constitute a small fraction of total waste generated, they are far more dangerous. There is stringent legislation designed to enforce and encourage the sustainable disposal of appliances, the most notable being the Waste Electrical and Electronic Equipment Directive of the European Union and the United States National Computer Recycling Act. In 2009, 38% of computers and a quarter of total electronic waste was recycled in the United States, up 5% and 3%, respectively, from three years prior.
Reasons for recycling
Obsolete computers and old electronics are valuable sources for secondary raw materials if recycled; otherwise, these devices are a source of toxins and carcinogens. Rapid technology change, low initial cost, and planned obsolescence have resulted in a fast-growing surplus of computers and other electronic comp |
https://en.wikipedia.org/wiki/Relvar | In relational databases, relvar is a term introduced by C. J. Date and Hugh Darwen as an abbreviation for relation variable in their 1995 paper The Third Manifesto, to avoid the confusion sometimes arising from the use of the term relation, by the inventor of the relational model, E. F. Codd, for a variable to which a relation is assigned as well as for the relation itself. The term is used in Date's well-known database textbook An Introduction to Database Systems and in various other books authored or coauthored by him.
Some database textbooks use the term relation for both the variable and the data it contains. Similarly, texts on SQL tend to use the term table for both purposes, though the qualified term base table is used in the standard for the variable.
A closely related term often used in academic texts is relation schema, this being a set of attributes paired with a set of constraints, together defining a set of relations for the purpose of some discussion (typically, database normalization). Constraints that mention just one relvar are termed relvar constraints, so relation schema can be regarded as a single term encompassing a relvar and its relvar constraints. |
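The distinction between a relation variable and the relation values it holds, along with a relvar constraint, can be sketched in a few lines. This is a toy Python model for illustration only (the class and its methods are invented, not from any database API): a relation value is an immutable set of tuples, and a relvar is a slot that holds successive relation values, guarded by a constraint checked on each assignment.

```python
class Relvar:
    """Toy relation variable: a named slot holding immutable relation values."""
    def __init__(self, heading, constraint=lambda rel: True):
        self.heading = heading          # tuple of attribute names
        self.constraint = constraint    # relvar constraint (mentions this relvar only)
        self.value = frozenset()        # current relation value (initially empty)

    def assign(self, tuples):
        rel = frozenset(tuples)         # a relation value: immutable set of tuples
        for t in rel:
            if len(t) != len(self.heading):
                raise ValueError("tuple does not match heading")
        if not self.constraint(rel):
            raise ValueError("relvar constraint violated")
        self.value = rel                # the variable now holds a new relation value

# Relvar constraint: the first attribute is a key (no duplicate emp_id).
emp = Relvar(("emp_id", "name"),
             constraint=lambda rel: len({t[0] for t in rel}) == len(rel))
emp.assign({(1, "Codd"), (2, "Date")})
print(sorted(emp.value))  # [(1, 'Codd'), (2, 'Date')]
```

Assigning a relation with a duplicate key raises an error, mirroring how a relvar constraint restricts which relation values the variable may hold.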
https://en.wikipedia.org/wiki/Analogue%20electronics | Analogue electronics are electronic systems with a continuously variable signal, in contrast to digital electronics where signals usually take only two levels. The term "analogue" describes the proportional relationship between a signal and a voltage or current that represents the signal. The word analogue is derived from the Greek word ἀνάλογος (análogos), meaning "proportional".
Analogue signals
An analogue signal uses some attribute of the medium to convey the signal's information. For example, an aneroid barometer uses the angular position of a needle on top of a contracting and expanding box as the signal to convey the information of changes in atmospheric pressure. Electrical signals may represent information by changing their voltage, current, frequency, or total charge. Information is converted from some other physical form (such as sound, light, temperature, pressure, position) to an electrical signal by a transducer which converts one type of energy into another (e.g. a microphone).
The signals take any value from a given range, and each unique signal value represents different information. Any change in the signal is meaningful, and each level of the signal represents a different level of the phenomenon that it represents. For example, suppose the signal is being used to represent temperature, with one volt representing one degree Celsius. In such a system, 10 volts would represent 10 degrees, and 10.1 volts would represent 10.1 degrees.
Another method of conveying an analogue signal is to use modulation. In this, some base carrier signal has one of its properties altered: amplitude modulation (AM) involves altering the amplitude of a sinusoidal voltage waveform by the source information, frequency modulation (FM) changes the frequency. Other techniques, such as phase modulation or changing the phase of the carrier signal, are also used.
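Amplitude modulation as described above can be sketched in a few lines. This is a minimal illustrative model, not a complete transmitter: the carrier's amplitude is scaled by (1 + depth × message), so the envelope of the output traces the message signal; the frequencies and modulation depth below are arbitrary example values.

```python
import math

def am_sample(t, f_carrier=1000.0, f_message=50.0, depth=0.5):
    """One sample of an AM waveform at time t (seconds)."""
    message = math.sin(2 * math.pi * f_message * t)   # source information
    carrier = math.sin(2 * math.pi * f_carrier * t)   # base carrier
    return (1.0 + depth * message) * carrier          # amplitude follows message

# One second at 48 kHz; with depth <= 1 the envelope stays within 1 + depth.
samples = [am_sample(n / 48000.0) for n in range(48000)]
assert max(abs(s) for s in samples) <= 1.5
```

Frequency modulation would instead vary the argument of the carrier sine with the message, leaving the amplitude constant.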
In an analogue sound recording, the variation in pressure of a sound striking a microphone creates a corresponding variation in t |
https://en.wikipedia.org/wiki/Abyssal%20zone | The abyssal zone or abyssopelagic zone is a layer of the pelagic zone of the ocean. The word abyss comes from the Greek word ἄβυσσος (ábyssos), meaning "bottomless". At depths of 4,000 to 6,000 metres, this zone remains in perpetual darkness. It covers 83% of the total area of the ocean and 60% of Earth's surface. The abyssal zone has temperatures around 2 to 3 °C through the large majority of its mass. The water pressure can reach up to 76 MPa.
Because there is no light, there are no plants producing oxygen; oxygen instead comes primarily from ice that melted long ago in the polar regions. The water along the seafloor of this zone is actually devoid of oxygen, resulting in a death trap for organisms unable to quickly return to the oxygen-enriched water above or survive in the low-oxygen environment. This region also contains a much higher concentration of nutrient salts, like nitrogen, phosphorus, and silica, due to the large amount of dead organic material that drifts down from the above ocean zones and decomposes.
The area below the abyssal zone is the sparsely inhabited hadal zone. The zone above is the bathyal zone.
Trenches
The deep trenches or fissures that plunge down thousands of meters below the ocean floor (for example, the mid-oceanic trenches such as the Mariana Trench in the Pacific) are almost unexplored. Previously, only the bathyscaphe Trieste, the remote control submarine Kaikō and the Nereus have been able to descend to these depths. However, as of March 25, 2012, one vehicle, the Deepsea Challenger, was able to penetrate to a depth of 10,898.4 meters (35,756 ft).
Ecosystem
The relative sparsity of primary producers means that the majority of organisms living in the abyssal zone depend on the marine snow that falls from oceanic layers above. The biomass of the abyssal zone actually increases near the seafloor as most of the decomposing material and decomposers rest on the seabed.
The composition of the abyssal plain depends on the depth of the sea floor. Above 4000 meters the seafloor |
https://en.wikipedia.org/wiki/Hadwiger%20number | In graph theory, the Hadwiger number of an undirected graph G is the size of the largest complete graph that can be obtained by contracting edges of G.
Equivalently, the Hadwiger number h(G) is the largest number k for which the complete graph Kk is a minor of G, a smaller graph obtained from G by edge contractions and vertex and edge deletions. The Hadwiger number is also known as the contraction clique number of G or the homomorphism degree of G. It is named after Hugo Hadwiger, who introduced it in 1943 in conjunction with the Hadwiger conjecture, which states that the Hadwiger number is always at least as large as the chromatic number of G.
The graphs that have Hadwiger number at most four have been characterized by Wagner. The graphs with any finite bound on the Hadwiger number are sparse, and have small chromatic number. Determining the Hadwiger number of a graph is NP-hard but fixed-parameter tractable.
Graphs with small Hadwiger number
A graph has Hadwiger number at most two if and only if it is a forest, for a three-vertex complete minor can only be formed by contracting a cycle.
A graph has Hadwiger number at most three if and only if its treewidth is at most two, which is true if and only if each of its biconnected components is a series–parallel graph.
Wagner's theorem, which characterizes the planar graphs by their forbidden minors, implies that the planar graphs have Hadwiger number at most four. In the same paper that proved this theorem, Wagner also characterized the graphs with Hadwiger number at most four more precisely: they are graphs that can be formed by clique-sum operations that combine planar graphs with the eight-vertex Wagner graph.
The graphs with Hadwiger number at most five include the apex graphs and the linklessly embeddable graphs, both of which have the complete graph K6 among their forbidden minors.
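The characterizations above can be checked by brute force on very small graphs, using the branch-set definition of a clique minor (a hypothetical sketch; exponential time, toy inputs only):

```python
from itertools import product

def is_connected(adj, part):
    """Depth-first search restricted to the vertex set `part`."""
    part = set(part)
    seen = set()
    stack = [next(iter(part))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(adj[v] & part)
    return seen == part

def has_clique_minor(adj, k):
    """True if K_k is a minor of the graph given as a dict of adjacency sets.

    Branch-set definition: K_k is a minor iff there are k disjoint,
    nonempty, connected vertex sets with an edge between every pair.
    Exponential brute force, only usable on very small graphs.
    """
    nodes = list(adj)
    for labels in product(range(k + 1), repeat=len(nodes)):  # label k = unused
        parts = [{v for v, l in zip(nodes, labels) if l == i} for i in range(k)]
        if any(not p or not is_connected(adj, p) for p in parts):
            continue
        if all(any(w in adj[u] for u in parts[i] for w in parts[j])
               for i in range(k) for j in range(i + 1, k)):
            return True
    return False

def hadwiger_number(adj):
    k = 1
    while has_clique_minor(adj, k + 1):
        k += 1
    return k

# The 4-cycle is not a forest: contracting one edge yields a triangle,
# so its Hadwiger number is 3, matching the forest characterization.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(hadwiger_number(c4))  # → 3
```

The NP-hardness mentioned above is why nothing substantially better than such exponential search is known in general, although the problem is fixed-parameter tractable in k.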
Sparsity
Every graph with n vertices and Hadwiger number k has O(nk√log k) edges. This bound is tight: for every k, there exist graphs with Hadwiger number k that h |
https://en.wikipedia.org/wiki/Burr%20mill | A burr mill, or burr grinder, is a mill used to grind hard, small food products between two revolving abrasive surfaces separated by a distance usually set by the user. When the two surfaces are set far apart, the resulting ground material is coarser, and when the two surfaces are set closer together, the resulting ground material is finer and smaller. Often, the device includes a revolving screw that pushes the food through. It may be powered electrically or manually.
Burr mills do not heat the ground product by friction as much as blade grinders ("choppers"), and produce particles of a uniform size determined by the separation between the grinding surfaces.
Food burr mills are usually manufactured for a single purpose: coffee beans, dried peppercorns, coarse salt, spices, or poppy seeds, for example. Coffee mills for volume consumption are usually powered by electric motors, but fast and precise manual mills have experienced an uptick in popularity in the 2020s for individual-serving pour-over and espresso. Domestic pepper, salt, and spice mills, used to sprinkle a little seasoning on food, are usually operated manually, sometimes by a battery-powered motor.
Coffee grinders
The uniform particle size and variable settings of a burr grinder make it well-suited for coffee preparation, as different methods of coffee preparation require different size grinds. The size of the grind is determined by the width between the two burrs, into which the coffee beans fall and are ground a few at a time. A finer grind allows a larger surface area to come into contact with the water, which yields a more complete extraction of caffeine and flavor. The grind spectrum ranges from Turkish coffee and espresso on the finer end to a French press and cold brew extraction on the coarse/extra coarse end, with pour-overs and drip coffee in the medium-fine to medium range.
Burr coffee grinders are also more suited for keeping the flavor and aroma of the coffee beans intact, as they prod |
https://en.wikipedia.org/wiki/Euclid%20number | In mathematics, Euclid numbers are integers of the form En = pn# + 1, where pn# is the nth primorial, i.e. the product of the first n prime numbers. They are named after the ancient Greek mathematician Euclid, in connection with Euclid's theorem that there are infinitely many prime numbers.
Examples
For example, the first three primes are 2, 3, 5; their product is 30, and the corresponding Euclid number is 31.
The first few Euclid numbers are 3, 7, 31, 211, 2311, 30031, 510511, 9699691, 223092871, 6469693231, 200560490131, ... .
History
It is sometimes falsely stated that Euclid's celebrated proof of the infinitude of prime numbers relied on these numbers. Euclid did not begin with the assumption that the set of all primes is finite. Rather, he considered any finite set of primes (he did not assume that it contained only the first n primes) and reasoned from there to the conclusion that at least one prime exists that is not in that set.
Nevertheless, Euclid's argument, applied to the set of the first n primes, shows that the nth Euclid number has a prime factor that is not in this set.
Properties
Not all Euclid numbers are prime.
E6 = 13# + 1 = 30031 = 59 × 509 is the first composite Euclid number.
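The values above are easy to reproduce; a minimal sketch using trial division (fine for small n):

```python
def primes(n):
    """First n primes by trial division (adequate for small n)."""
    ps = []
    candidate = 2
    while len(ps) < n:
        if all(candidate % p for p in ps):
            ps.append(candidate)
        candidate += 1
    return ps

def euclid_number(n):
    """E_n = p_n# + 1: the product of the first n primes, plus one."""
    primorial = 1
    for p in primes(n):
        primorial *= p
    return primorial + 1

print([euclid_number(n) for n in range(1, 7)])
# → [3, 7, 31, 211, 2311, 30031]
print(euclid_number(6) == 59 * 509)  # → True: the first composite Euclid number
```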
Every Euclid number is congruent to 3 modulo 4 since the primorial of which it is composed is twice the product of only odd primes and thus congruent to 2 modulo 4. This property implies that no Euclid number can be a square.
For all n ≥ 3, the last digit of En is 1, since En − 1 is divisible by both 2 and 5. In other words, since all primorials from p3# = 30 onward have 2 and 5 as prime factors, they are divisible by 10, and thus every En with n ≥ 3 has a final digit of 1.
Unsolved problems
It is not known whether there is an infinite number of prime Euclid numbers (primorial primes).
It is also unknown whether every Euclid number is a squarefree number.
Generalization
A Euclid number of the second kind (also called Kummer number) is an integer of the form En = pn |
https://en.wikipedia.org/wiki/Invitae | Invitae Corp. is a biotechnology company that was created as a subsidiary of Genomic Health in 2010 and then spun off in 2012.
In 2017, Invitae acquired Good Start Genetics and CombiMatrix. In 2020, Invitae announced the acquisition of ArcherDX for $1.4 billion. In 2021, Invitae announced the acquisition of health care AI startup Ciitizen for $325 million.
CombiMatrix Corp.
CombiMatrix Corp. () was a clinical diagnostic laboratory specializing in pre-implantation genetic screening, miscarriage analysis, prenatal and pediatric diagnostics, offering DNA-based testing for the detection of genetic abnormalities beyond what can be identified through traditional methodologies. As a full-scale cytogenetic and cytogenomic laboratory, CombiMatrix performs genetic testing utilizing a variety of advanced cytogenomic techniques, including chromosomal microarray analysis, standardized and customized fluorescence in situ hybridization (FISH) and high-resolution karyotyping. CombiMatrix is dedicated to providing high-level clinical support for healthcare professionals in order to help them incorporate the results of complex genetic testing into patient-centered medical decision making.
In 2012 CombiMatrix shifted its focus from providing oncology genetic testing to developmental testing. Their focus is cytogenomic miscarriage analysis, prenatal analysis and postnatal/pediatric analysis.
History
In the mid-1990s, a PhD from Caltech invented a method of analyzing and immobilizing genetic material on the surface of a modified semiconductor wafer. The CombiMatrix microarray ("CombiMatrix" refers to combinatorial chemistry on a matrix array) was born and received US patents in the late '90s. Located in Mukilteo, Washington, the company received several rounds of private financings and filed to go public in November 2000. Unfortunately, CombiMatrix missed its "IPO window" by a few months due to the turmoil that was happening in the financial markets in late 2000 and early 2001, |
https://en.wikipedia.org/wiki/Crystal%20Analysis | Crystal Analysis (a.k.a. Crystal Analysis Professional) is an On Line Analytical Processing (OLAP) application for analysing business data originally developed by Seagate Software.
It was first released under the name Seagate Analysis as a free application written in Java in 1999. After disappointing application performance, a decision was made to rewrite it using ATL COM in C++. The initial rewrite only supported Microsoft Analysis Services, but support for other vendors soon followed, with Holos cubes in version 8.5, and Essbase, IBM Db2 and SAP BW following in later releases. The web client was rewritten using an XSLT abstraction layer for the version 9.0 release, with better standards compliance to support Mozilla-based browsers; this work also laid the groundwork for Safari support.
Crystal Analysis relies on Crystal Enterprise for distribution of analytical applications created with it.
Release timeline
Seagate Analysis 1999, by Seagate Software
Crystal Analysis Professional v8.0, 29 May 2001, by Crystal Decisions
Crystal Analysis Professional v8.1, Q4 2001, by Crystal Decisions
Crystal Analysis Professional v8.5, 9 July 2002, by Crystal Decisions
Crystal Analysis Professional v9.0, 9 April 2003, by Crystal Decisions
Crystal Analysis Professional v10.0, 8 January 2004, by Business Objects
Crystal Analysis Professional v11.0, 31 January 2005, by Business Objects
Crystal Analysis Professional v11.0 Release 2, 30 November 2005, by Business Objects
Future versions will be released under the name BusinessObjects OLAP Intelligence.
External links
Product page at Business Objects
Business intelligence software
Online analytical processing |
https://en.wikipedia.org/wiki/Materialist%20feminism | Materialist feminism highlights capitalism and patriarchy as a central aspect in understanding women's oppression. It focuses on the material, or physical, aspects that define oppression. Under materialist feminism, gender is seen as a social construct, and society forces gender roles, such as rearing children, onto women. Materialist feminism's ideal vision is a society in which women are treated socially and economically the same as men. The theory centers on social change rather than seeking transformation within the capitalist system.
Jennifer Wicke defines materialist feminism as "a feminism that insists on examining the material conditions under which social arrangements, including those of gender hierarchy, develop... materialist feminism avoids seeing this gender hierarchy as the effect of a singular... patriarchy and instead gauges the web of social and psychic relations that make up a material, historical moment". She states that "...materialist feminism argues that material conditions of all sorts play a vital role in the social production of gender and assays the different ways in which women collaborate and participate in these productions". Materialist feminism also considers how women and men of various races and ethnicities are kept in their lower economic status due to an imbalance of power that privileges those who already have privilege, thereby protecting the status quo. Materialist feminists ask whether people have access to free education, if they can pursue careers, have access or opportunity to become wealthy, and if not, what economic or social constraints are preventing them from doing so, and how this can be changed.
History
The term materialist feminism emerged in the late 1970s and is associated with key thinkers, such as Rosemary Hennessy, Stevi Jackson and Christine Delphy.
Rosemary Hennessy traces the history of materialist feminism in the work of British and French feminists who preferred the term materialist feminism to Marxist fem |
https://en.wikipedia.org/wiki/Semi-Hilbert%20space | In mathematics, a semi-Hilbert space is a generalization of a Hilbert space in functional analysis, in which, roughly speaking, the inner product is required only to be positive semi-definite rather than positive definite, so that it gives rise to a seminorm rather than a vector space norm.
The quotient of this space by the kernel of this seminorm is also required to be a Hilbert space in the usual sense. |
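The construction can be made explicit; as a sketch (notation is assumed, not from the text), a positive semidefinite form on a vector space V induces a seminorm, its kernel is a subspace by the Cauchy–Schwarz inequality, and the form descends to a genuine inner product on the quotient:

```latex
% Seminorm induced by a positive semidefinite inner product:
\[
  p(x) = \sqrt{\langle x, x\rangle}, \qquad
  N = \ker p = \{\, x \in V : \langle x, x\rangle = 0 \,\}.
\]
% By Cauchy--Schwarz, |<x,y>|^2 <= <x,x><y,y>, so N is a linear subspace and
% the form is well defined and positive definite on the quotient:
\[
  \langle x + N,\; y + N \rangle_{V/N} := \langle x, y\rangle .
\]
```

The semi-Hilbert condition then asks that V/N, with this inner product, be complete, i.e. a Hilbert space in the usual sense.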
https://en.wikipedia.org/wiki/Magnetic%20semiconductor | Magnetic semiconductors are semiconductor materials that exhibit both ferromagnetism (or a similar response) and useful semiconductor properties. If implemented in devices, these materials could provide a new type of control of conduction. Whereas traditional electronics are based on control of charge carriers (n- or p-type), practical magnetic semiconductors would also allow control of quantum spin state (up or down). This would theoretically provide near-total spin polarization (as opposed to iron and other metals, which provide only ~50% polarization), which is an important property for spintronics applications, e.g. spin transistors.
While many traditional magnetic materials, such as magnetite, are also semiconductors (magnetite is a semimetal semiconductor with bandgap 0.14 eV), materials scientists generally predict that magnetic semiconductors will only find widespread use if they are similar to well-developed semiconductor materials. To that end, dilute magnetic semiconductors (DMS) have recently been a major focus of magnetic semiconductor research. These are based on traditional semiconductors, but are doped with transition metals instead of, or in addition to, electronically active elements. They are of interest because of their unique spintronics properties with possible technological applications. Doped wide band-gap metal oxides such as zinc oxide (ZnO) and titanium oxide (TiO2) are among the best candidates for industrial DMS due to their multifunctionality in opticomagnetic applications. In particular, ZnO-based DMS with properties such as transparency in visual region and piezoelectricity have generated huge interest among the scientific community as a strong candidate for the fabrication of spin transistors and spin-polarized light-emitting diodes, while copper doped TiO2 in the anatase phase of this material has further been predicted to exhibit favorable dilute magnetism.
Hideo Ohno and his group at the Tohoku University were the first to |
https://en.wikipedia.org/wiki/Fundamental%20polygon | In mathematics, a fundamental polygon can be defined for every compact Riemann surface of genus greater than 0. It encodes not only the topology of the surface through its fundamental group but also determines the Riemann surface up to conformal equivalence. By the uniformization theorem, every compact Riemann surface has simply connected universal covering surface given by exactly one of the following:
the Riemann sphere,
the complex plane,
the unit disk D or equivalently the upper half-plane H.
In the first case of genus zero, the surface is conformally equivalent to the Riemann sphere.
In the second case of genus one, the surface is conformally equivalent to a torus C/Λ for some lattice Λ in C. The fundamental polygon of Λ, if assumed convex, may be taken to be either a period parallelogram or a centrally symmetric hexagon, a result first proved by Fedorov in 1891.
In the last case of genus g > 1, the Riemann surface is conformally equivalent to H/Γ where Γ is a Fuchsian group of Möbius transformations. A fundamental domain for Γ is given by a convex polygon for the hyperbolic metric on H. These can be defined by Dirichlet polygons and have an even number of sides. The structure of the fundamental group Γ can be read off from such a polygon. Using the theory of quasiconformal mappings and the Beltrami equation, it can be shown there is a canonical convex Dirichlet polygon with 4g sides, first defined by Fricke, which corresponds to the standard presentation of Γ as the group with 2g generators a1, b1, a2, b2, ..., ag, bg and the single relation [a1,b1][a2,b2] ⋅⋅⋅ [ag,bg] = 1, where [a,b] = a b a−1b−1.
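The standard presentation just described can be written in display form, and abelianizing it explains the genus computation used below (a routine consequence, stated here for clarity):

```latex
% Standard surface-group presentation from a 4g-sided fundamental polygon:
\[
  \Gamma = \bigl\langle\, a_1, b_1, \dots, a_g, b_g \;\bigm|\;
           [a_1,b_1][a_2,b_2]\cdots[a_g,b_g] = 1 \,\bigr\rangle,
  \qquad [a,b] = a\,b\,a^{-1}b^{-1}.
\]
% Abelianizing sends every commutator [a_i,b_i] to 0, so the single relation
% becomes trivial and
\[
  \Gamma/[\Gamma,\Gamma] \;\cong\; \mathbb{Z}^{2g},
\]
% recovering the genus g as half the rank of this abelian group.
```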
Any Riemannian metric on an oriented closed 2-manifold M defines a complex structure on M, making M a compact Riemann surface. Through the use of fundamental polygons, it follows that two oriented closed 2-manifolds are classified by their genus, that is half the rank of the Abelian group Γ/[Γ,Γ], where Γ = π1(M). Moreover, it also follows from the theory of |
https://en.wikipedia.org/wiki/Johann%20Friedrich%20Pfaff | Johann Friedrich Pfaff (sometimes spelled Friederich; 22 December 1765 – 21 April 1825) was a German mathematician. He was described as one of Germany's most eminent mathematicians during the 19th century. He was a precursor of the German school of mathematical thinking, under which Carl Friedrich Gauss and his followers largely determined the lines on which mathematics developed during the 19th century.
Biography
He received his early education at the Carlsschule, where he met Friedrich Schiller, his lifelong friend. His mathematical capacity was noticed during his early years. He pursued his studies at Göttingen under Abraham Gotthelf Kästner, and in 1787 he went to Berlin and studied practical astronomy under J. E. Bode. In 1788, Pfaff became professor of mathematics in Helmstedt, and continued his work as a professor until that university was abolished in 1810. After this event, he became professor of mathematics at the University of Halle, where he stayed for the rest of his life.
He studied mathematical series and integral calculus, and is noted for his work on partial differential equations of the first order (Pfaffian systems, as they are now called), which became part of the theory of differential forms, and as Carl Friedrich Gauss's formal research supervisor. He knew Gauss well; they lived together in Helmstedt in 1798. August Möbius was later his student.
His two principal works are Disquisitiones analyticae maxime ad calculum integralem et doctrinam serierum pertinentes (4to., vol. i., Helmstädt, 1797) and “Methodus generalis, aequationes differentiarum particularum, necnon aequationes differentiales vulgares, utrasque primi ordinis inter quotcumque variabiles, complete integrandi” in Abhandlungen der Königlichen Akademie der Wissenschaften zu Berlin (1814-1815).
His brother Johann Wilhelm Andreas Pfaff was a professor of pure and applied mathematics. Another brother, Christoph Heinrich Pfaff, was a professor of medicine, physics and chemist |
https://en.wikipedia.org/wiki/Largest%20known%20prime%20number | The largest known prime number is 2^82,589,933 − 1, a number which has 24,862,048 digits when written in base 10. It was found via a computer volunteered by Patrick Laroche of the Great Internet Mersenne Prime Search (GIMPS) in 2018.
A prime number is a natural number greater than 1 with no divisors other than 1 and itself. According to Euclid's theorem there are infinitely many prime numbers, so there is no largest prime.
Many of the largest known primes are Mersenne primes, numbers that are one less than a power of two, because they can utilize a specialized primality test that is faster than the general one. The six largest known primes are Mersenne primes. The last seventeen record primes were Mersenne primes. The binary representation of any Mersenne prime is composed of all ones, since the binary form of 2^k − 1 is simply k ones.
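The specialized test in question is the Lucas–Lehmer test; a minimal sketch (naive Python integers, far too slow for record-size exponents, where FFT-based squaring is used instead):

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test for the Mersenne number M_p = 2**p - 1 (p an odd prime).

    M_p is prime iff s_{p-2} ≡ 0 (mod M_p), where s_0 = 4 and
    s_{k+1} = s_k**2 - 2. One modular squaring per step makes it far
    faster than general-purpose primality tests at the same size.
    """
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Exponents of the first Mersenne primes vs. the composite M_11 = 2047 = 23 × 89:
print([p for p in [3, 5, 7, 11, 13, 17, 19] if lucas_lehmer(p)])
# → [3, 5, 7, 13, 17, 19]
```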
Current record
The record is currently held by 2^82,589,933 − 1 with 24,862,048 digits, found by GIMPS in December 2018. The first and last 120 digits of its value are shown below:
This prime has been holding the record for 4 years and 9 months (as of October 2023), longer than any other record prime since M19937 (which held the record for 7 years, 1971–1978).
Prizes
There are several prizes offered by the Electronic Frontier Foundation (EFF) for record primes. A prime with one million digits was found in 1999, earning the discoverer a US$50,000 prize. In 2008, a ten-million digit prime won a US$100,000 prize and a Cooperative Computing Award from the EFF. Time called this prime the 29th top invention of 2008.
Both of these primes were discovered through the Great Internet Mersenne Prime Search (GIMPS), which coordinates long-range search efforts among tens of thousands of computers and thousands of volunteers. The $50,000 prize went to the discoverer and the $100,000 prize went to GIMPS. GIMPS will split the US$150,000 prize for the first prime of over 100 million digits with the winning participant. A further prize is offered for the first prime with a |
https://en.wikipedia.org/wiki/Image%20plane | In 3D computer graphics, the image plane is that plane in the world which is identified with the plane of the display monitor used to view the image that is being rendered. It is also referred to as screen space. If one makes the analogy of taking a photograph to rendering a 3D image, the surface of the film is the image plane. In this case, the viewing transformation is a projection that maps the world onto the image plane. A rectangular region of this plane, called the viewing window or viewport, maps to the monitor. This establishes the mapping between pixels on the monitor and points (or rather, rays) in the 3D world. The plane is not usually an actual geometric object in a 3D scene, but instead is usually a collection of target coordinates or dimensions that are used during the rasterization process so the final output can be displayed as intended on the physical screen.
In optics, the image plane is the plane that contains the object's projected image, and lies beyond the back focal plane.
See also
Focal plane
Picture plane
Projection plane
Real image |