https://en.wikipedia.org/wiki/Q-theta%20function | In mathematics, the q-theta function (or modified Jacobi theta function) is a type of q-series which is used to define elliptic hypergeometric series.
It is given by
θ(z; q) := ∏_{n=0}^∞ (1 − z q^n)(1 − q^{n+1}/z),
where one takes 0 ≤ |q| < 1. It obeys the identities
θ(z; q) = θ(q/z; q) = −z θ(1/z; q).
It may also be expressed as:
θ(z; q) = (z; q)_∞ (q/z; q)_∞,
where (·; q)_∞ is the q-Pochhammer symbol.
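The defining product θ(z; q) = ∏_{n≥0} (1 − z q^n)(1 − q^{n+1}/z) and the inversion identity θ(z; q) = −z θ(1/z; q) can be checked numerically with a truncated product (a sketch of my own, not from the article):

```python
def q_theta(z, q, terms=200):
    """Truncated product for the q-theta function
    theta(z; q) = prod_{n>=0} (1 - z q^n)(1 - q^(n+1)/z), valid for |q| < 1."""
    p = 1.0
    for n in range(terms):
        p *= (1 - z * q**n) * (1 - q**(n + 1) / z)
    return p

z, q = 0.3, 0.5
# Inversion identity theta(z; q) = -z * theta(1/z; q):
print(abs(q_theta(z, q) - (-z) * q_theta(1 / z, q)))  # ~0
```

Since |q| < 1, the factors approach 1 geometrically, so a few hundred terms already give machine precision.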
See also
elliptic hypergeometric series
Jacobi theta function
Ramanujan theta function |
https://en.wikipedia.org/wiki/Guard%20byte | A guard byte is a part of a computer program's memory that helps software developers find buffer overflows while developing the program.
Principle
When a program is compiled for debugging, all memory allocations are prefixed and postfixed by guard bytes. Special memory allocation routines may then perform additional tasks to determine unwanted read and write attempts outside the allocated memory. These extra bytes help to detect that the program is writing into (or even reading from) inappropriate memory areas, potentially causing buffer overflows. If the program's code accesses these bytes, the programmer is warned with information that helps locate the problem.
Checking for the inappropriate access to the guard bytes may be done in two ways:
by setting a memory breakpoint on a condition of write and/or read to those bytes, or
by pre-initializing the guard bytes with specific values and checking the values upon deallocation.
The first way is possible only with a debugger that handles such breakpoints, but significantly increases the chance of locating the problem. The second way does not require any debuggers or special environments and can be done even on other computers, but the programmer is alerted about the overflow only upon the deallocation, which is sometimes quite late.
Because guard bytes require additional code to be executed and additional memory to be allocated, they are used only when the program is compiled for debugging. When compiled as a release, guard bytes are not used at all, nor are the routines that work with them.
Example
A programmer wants to allocate a buffer of 100 bytes of memory while debugging. The system memory allocating routine will allocate 108 bytes instead, adding 4 leading and 4 trailing guard bytes, and return a pointer shifted by the 4 leading guard bytes to the right, hiding them from the programmer. The programmer should then work with the received pointer without knowledge of the presence of the guard bytes. |
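The scheme in the example can be sketched in Python (an illustration of the principle only; real guard bytes live inside a C allocator, and the 0xFD fill pattern is an assumption modeled on common debug heaps):

```python
class GuardedBuffer:
    """Simulates a debug allocator that brackets each allocation
    with 4 leading and 4 trailing guard bytes."""
    GUARD = b"\xfd" * 4  # assumed sentinel pattern (not from the article)

    def __init__(self, size):
        # 4 + size + 4 bytes really allocated; the user sees only `size`
        self._raw = bytearray(self.GUARD) + bytearray(size) + bytearray(self.GUARD)
        self.size = size

    def view(self):
        # the "shifted pointer": a window starting past the leading guard bytes
        return memoryview(self._raw)[4:4 + self.size]

    def check(self):
        # on deallocation, verify the guard bytes are untouched
        return (self._raw[:4] == self.GUARD
                and self._raw[-4:] == self.GUARD)

buf = GuardedBuffer(100)        # program asked for 100 bytes...
assert len(buf._raw) == 108     # ...but 108 were really allocated
buf.view()[:5] = b"hello"       # normal writes stay inside the window
assert buf.check()
buf._raw[4 + 100] = 0x41        # simulate a 1-byte overflow past the buffer
assert not buf.check()          # caught at deallocation time
```

This corresponds to the second strategy above: the overflow is detected only when the guard pattern is re-checked, not at the moment of the stray write.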
https://en.wikipedia.org/wiki/Black%20Queen%20hypothesis | The Black Queen hypothesis (BQH) is a reductive evolution theory which seeks to explain how natural selection (as opposed to genetic drift) can drive gene loss. In a microbial community, different members may have genes which produce certain chemicals or resources in a "leaky" fashion, making them accessible to other members of that community. If members of a community can access such a resource reliably enough without generating it themselves, they may lose the biological function (or the gene) involved in producing that chemical. Put another way, the Black Queen hypothesis is concerned with the conditions under which it is advantageous to lose certain biological functions. By accessing resources without the need to generate them themselves, these microbes conserve energy and streamline their genomes, enabling faster replication.
Jeffrey Morris proposed the Black Queen hypothesis in his 2011 PhD dissertation. In the following year, Morris wrote another publication on the subject alongside Richard Lenski and Erik Zinser more fully refining and fleshing out the hypothesis. The name of the hypothesis—"Black Queen hypothesis"—is a play on the Red Queen hypothesis, an earlier theory of coevolution which states that organisms must constantly refine and adapt to keep up with the changing environment and the evolution of other organisms.
Principles
Original theory
The "Black Queen" refers to the Queen of Spades in the card game Hearts. The goal of Hearts is to end up as the player with the fewest points. However, the Queen of Spades is worth as many points as all the other cards combined, so players seek to avoid taking her. At the same time, one player must end up with the Queen. Similarly, the BQH posits that members of a community will shed any functions (or genes) that have become dispensable. At the same time, at least on
https://en.wikipedia.org/wiki/Steve%20%28Minecraft%29 | Steve is a fictional character from the 2011 video game Minecraft. Created by Swedish video game developer Markus "Notch" Persson and introduced in the initial Java-based version of Minecraft which was first publicly released on May 17, 2009, Steve is one of nine default player character skins available for players of contemporary versions of Minecraft. Steve lacks an official backstory by the developers of Minecraft as he is intended to be a customizable player avatar as opposed to being a predefined character. His feminine counterpart, Alex, was first introduced in August 2014 for Java PC versions of Minecraft, with the other seven debuting in the Java edition of the game in October 2022. Depending on the version of Minecraft, players have a chance of defaulting to either Steve or any of the other variant skins when creating a new account, although the skin is easy to change from the client or website.
Steve became a widely recognized character in the video game industry following the critical and commercial success of the Minecraft franchise. Considered by some critics as a mascot for the Minecraft intellectual property, his likeness has appeared extensively in advertising and merchandise, including wearable apparel and collectible items. The character has also inspired a number of unofficial media and urban legends, most notably the "Herobrine" creepypasta which became widely shared around internet communities as a meme during the 2010s.
Concept and design
Steve is presented as a human character with a blocky appearance, which is consistent with the aesthetic and art style of Minecraft. His name originated as an in-joke by Persson, and was confirmed as his official name in the "Bedrock Edition" of Minecraft. Steve's face consists of an eight-by-eight pixel image, usually adorned with a goatee. He wears a light blue top, a pair of blue trousers, and shoes. In console editions of Minecraft, Steve appeared in a variety of different outfits, such as a tuxedo and p |
https://en.wikipedia.org/wiki/Wiggers%20diagram | A Wiggers diagram, named after its developer, Carl Wiggers, is a unique diagram that has been used in teaching cardiac physiology for more than a century. In the Wiggers diagram, the X-axis is used to plot time subdivided into the cardiac phases, while the Y-axis typically contains the following on a single grid:
Blood pressure
Aortic pressure
Ventricular pressure
Atrial pressure
Ventricular volume
Electrocardiogram
Arterial flow (optional)
Heart sounds (optional)
The Wiggers diagram clearly illustrates the coordinated variation of these values as the heart beats, assisting one in understanding the entire cardiac cycle.
Events
Note that during isovolumetric/isovolumic contraction and relaxation, all the heart valves are closed; at no time are all the heart valves open. *S3 and S4 heart sounds are associated with pathologies and are not routinely heard.
Additional images
See also
Pressure volume diagram |
https://en.wikipedia.org/wiki/Mojibake | Mojibake (Japanese: 文字化け, "character transformation") is the garbled text that is the result of text being decoded using an unintended character encoding. The result is a systematic replacement of symbols with completely unrelated ones, often from a different writing system.
This display may include the generic replacement character ("�") in places where the binary representation is considered invalid. A replacement can also involve multiple consecutive symbols, as viewed in one encoding, when the same binary code constitutes one symbol in the other encoding. This is either because of differing constant-length encodings (as in Asian 16-bit encodings versus European 8-bit encodings), or the use of variable-length encodings (notably UTF-8 and UTF-16).
Failed rendering of glyphs due to either missing fonts or missing glyphs in a font is a different issue that is not to be confused with mojibake. Symptoms of this failed rendering include blocks with the code point displayed in hexadecimal or using the generic replacement character. Importantly, these replacements are valid and are the result of correct error handling by the software.
Causes
To correctly reproduce the original text that was encoded, the correspondence between the encoded data and the notion of its encoding must be preserved (i.e. the source and target encoding standards must be the same). Since mojibake is the result of a mismatch between these, it can be produced by manipulating the data itself or simply by relabelling it.
Mojibake is often seen with text data that have been tagged with a wrong encoding; it may not even be tagged at all, but moved between computers with different default encodings. A major source of trouble are communication protocols that rely on settings on each computer rather than sending or storing metadata together with the data.
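The mislabelling described above is easy to reproduce; a minimal Python illustration (my own example, not from the article):

```python
# UTF-8 bytes decoded under the wrong label (Latin-1) become mojibake:
original = "résumé"
raw = original.encode("utf-8")      # b'r\xc3\xa9sum\xc3\xa9'
garbled = raw.decode("latin-1")     # every byte is a "valid" Latin-1 char
print(garbled)                      # rÃ©sumÃ©

# Bytes that are invalid in the target encoding yield the replacement char:
bad = b"\xff\xfeabc".decode("utf-8", errors="replace")
print(bad)                          # ��abc
```

Note the two failure modes match the article's distinction: Latin-1 silently produces unrelated symbols (no byte sequence is invalid for it), while a strict decoder such as UTF-8 flags invalid sequences with "�".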
The differing default settings between computers are in part due to differing deployments of Unicode among operating system families, and partly the legacy encodings' |
https://en.wikipedia.org/wiki/Triple-alpha%20process | The triple-alpha process is a set of nuclear fusion reactions by which three helium-4 nuclei (alpha particles) are transformed into carbon.
Triple-alpha process in stars
Helium accumulates in the cores of stars as a result of the proton–proton chain reaction and the carbon–nitrogen–oxygen cycle.
The nuclear fusion of two helium-4 nuclei produces beryllium-8, which is highly unstable and decays back into smaller nuclei with a half-life of about 8.2×10⁻¹⁷ seconds, unless within that time a third alpha particle fuses with the beryllium-8 nucleus to produce an excited resonance state of carbon-12, called the Hoyle state. The Hoyle state nearly always decays back into three alpha particles, but on average once in 2,421.3 decays it releases energy and relaxes into the stable ground state of carbon-12. When a star runs out of hydrogen to fuse in its core, it begins to contract and heat up. If the central temperature rises to 10⁸ K, six times hotter than the Sun's core, alpha particles can fuse fast enough to get past the beryllium-8 barrier and produce significant amounts of stable carbon-12.
{|
| ⁴He + ⁴He → ⁸Be
| (−0.0918 MeV)
|-
| ⁸Be + ⁴He → ¹²C + 2γ
| (+7.367 MeV)
|}
The net energy release of the process is 7.275 MeV.
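The net figure is simply the sum of the two steps (a quick arithmetic check):

```python
# Energy bookkeeping for the triple-alpha process, values in MeV as quoted above
step1 = -0.0918   # 4He + 4He -> 8Be (slightly endothermic)
step2 = +7.367    # 8Be + 4He -> 12C + 2 gamma
net = step1 + step2
print(f"{net:.3f} MeV")   # 7.275 MeV
```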
As a side effect of the process, some carbon nuclei fuse with additional helium to produce a stable isotope of oxygen and energy:
¹²C + ⁴He → ¹⁶O + γ (+7.162 MeV)
Nuclear fusion reactions of helium with hydrogen produce lithium-5, which also is highly unstable, and decays back into smaller nuclei with a half-life on the order of 10⁻²² seconds.
Fusing with additional helium nuclei can create heavier elements in a chain of stellar nucleosynthesis known as the alpha process, but these reactions are only significant at higher temperatures and pressures than in cores undergoing the triple-alpha process. This creates a situation in which stellar nucleosynthesis produces large amounts of carbon and oxygen, but only a small fraction of those elements are converted into neon and heavier elements. Oxygen and carbon are the main "ash" of helium |
https://en.wikipedia.org/wiki/AP%20Physics%20C%3A%20Mechanics | Advanced Placement (AP) Physics C: Mechanics (also known as AP Mechanics) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a one-semester calculus-based university course in mechanics. The content of Physics C: Mechanics overlaps with that of AP Physics 1, but Physics 1 is algebra-based, while Physics C is calculus-based. Physics C: Mechanics may be combined with its electricity and magnetism counterpart to form a year-long course that prepares for both exams.
Course content
Intended to be equivalent to an introductory college course in mechanics for physics or engineering majors, the course modules are:
Kinematics
Newton's laws of motion
Work, energy and power
Systems of particles and linear momentum
Circular motion and rotation
Oscillations and gravitation
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a Calculus I class.
This course is often compared to AP Physics 1: Algebra Based for its similar course material involving kinematics, work, motion, forces, rotation, and oscillations. However, AP Physics 1: Algebra Based lacks concepts found in Calculus I, like derivatives or integrals.
This course may be combined with AP Physics C: Electricity and Magnetism to make a unified Physics C course that prepares for both exams.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Mechanics is separate from the AP examination for AP Physics C: Electricity and Magnetism. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday aftern |
https://en.wikipedia.org/wiki/Muckenhoupt%20weights | In mathematics, the class of Muckenhoupt weights A_p consists of those weights w for which the Hardy–Littlewood maximal operator is bounded on L^p(w). Specifically, we consider functions f on ℝ^n and their associated maximal functions M(f) defined as
M(f)(x) = sup_{r>0} (1/|B_r(x)|) ∫_{B_r(x)} |f(y)| dy,
where B_r(x) is the ball in ℝ^n with radius r and center at x. Let 1 ≤ p < ∞; we wish to characterise the weights w for which we have a bound
∫ M(f)(x)^p w(x) dx ≤ C ∫ |f(x)|^p w(x) dx,
where C depends only on p and w. This was first done by Benjamin Muckenhoupt.
Definition
For a fixed 1 < p < ∞, we say that a weight w belongs to A_p if w is locally integrable and there is a constant C such that, for all balls B in ℝ^n, we have
(1/|B| ∫_B w(x) dx) (1/|B| ∫_B w(x)^{−q/p} dx)^{p/q} ≤ C < ∞,
where |B| is the Lebesgue measure of B, and q is a real number such that 1/p + 1/q = 1.
We say w belongs to A_1 if there exists some C such that
(1/|B|) ∫_B w(y) dy ≤ C w(x)
for almost every x ∈ B and all balls B.
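As a numerical illustration (my own, not part of the article): in one dimension the power weight w(x) = |x|^a belongs to A_2 exactly when −1 < a < 1, and for balls centered at the origin the A_2 expression reduces to the radius-independent constant 1/((1+a)(1−a)). A midpoint-rule check:

```python
def a2_quantity(a, r, n=100_000):
    """(average of w over B) * (average of 1/w over B) for w(x) = |x|**a
    and B = (-r, r), via a midpoint Riemann sum (midpoints never hit x = 0)."""
    h = 2 * r / n
    xs = [-r + (i + 0.5) * h for i in range(n)]
    avg_w = sum(abs(x) ** a for x in xs) / n
    avg_winv = sum(abs(x) ** -a for x in xs) / n
    return avg_w * avg_winv

# For a = 0.5 the exact value is 1/(1.5 * 0.5) = 4/3, for every radius:
print(a2_quantity(0.5, 1.0))    # ~1.333
print(a2_quantity(0.5, 100.0))  # ~1.333
```

The radius independence reflects the scale invariance of the weight; the supremum over all balls (not just those centered at the origin) is still finite, which is what membership in A_2 asserts.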
Equivalent characterizations
The following result is fundamental in the study of Muckenhoupt weights.
Theorem. A weight w is in A_p if and only if any one of the following holds.
(a) The Hardy–Littlewood maximal function is bounded on L^p(w(x) dx), that is
∫ M(f)(x)^p w(x) dx ≤ C ∫ |f(x)|^p w(x) dx
for some C which only depends on p and the constant A in the above definition.
(b) There is a constant such that for any locally integrable function on , and all balls :
where:
Equivalently:
Theorem. Let 1 < p < ∞; then w ∈ A_p if and only if both of the following hold:
This equivalence can be verified by using Jensen's Inequality.
Reverse Hölder inequalities and A_∞
The main tool in the proof of the above equivalence is the following result. For a locally integrable weight w, the following statements are equivalent:
w ∈ A_p for some 1 ≤ p < ∞.
There exist 0 < α, β < 1 such that for all balls B and measurable subsets S ⊂ B, |S| ≤ α|B| implies w(S) ≤ β w(B).
There exist C > 1 and ε > 0 (both depending on w) such that for all balls B we have:
(1/|B| ∫_B w^{1+ε})^{1/(1+ε)} ≤ (C/|B|) ∫_B w.
We call the inequality in the third formulation a reverse Hölder inequality, as the reverse inequality follows for any non-negative function directly from Hölder's inequality. If any of the three equivalent conditions above holds, we say w belongs to A_∞.
Weights and BMO
The definition of an A_∞ weight and the reverse Hölder inequality indicate that such a weight cannot degenerate or grow t
https://en.wikipedia.org/wiki/Outline%20of%20geometry | Geometry is a branch of mathematics concerned with questions of shape, size, relative position of figures, and the properties of space. Geometry is one of the oldest mathematical sciences.
Classical branches
Geometry
Analytic geometry
Differential geometry
Euclidean geometry
Non-Euclidean geometry
Projective geometry
Riemannian geometry
Contemporary branches
Absolute geometry
Affine geometry
Archimedes' use of infinitesimals
Birational geometry
Complex geometry
Combinatorial geometry
Computational geometry
Conformal geometry
Constructive solid geometry
Contact geometry
Convex geometry
Descriptive geometry
Digital geometry
Discrete geometry
Distance geometry
Elliptic geometry
Enumerative geometry
Epipolar geometry
Finite geometry
Geometry of numbers
Hyperbolic geometry
Incidence geometry
Information geometry
Integral geometry
Inversive geometry
Klein geometry
Lie sphere geometry
Numerical geometry
Ordered geometry
Parabolic geometry
Plane geometry
Quantum geometry
Ruppeiner geometry
Spherical geometry
Symplectic geometry
Synthetic geometry
Systolic geometry
Taxicab geometry
Toric geometry
Transformation geometry
Tropical geometry
History of geometry
History of geometry
Timeline of geometry
Babylonian geometry
Egyptian geometry
Ancient Greek geometry
Euclidean geometry
Pythagorean theorem
Euclid's Elements
Measurement of a Circle
Indian mathematics
Bakhshali manuscript
Modern geometry
History of analytic geometry
History of the Cartesian coordinate system
History of non-Euclidean geometry
History of topology
History of algebraic geometry
General geometry concepts
General concepts
Geometric progression — Geometric shape — Geometry — Pi — angular velocity — linear velocity — De Moivre's theorem — parallelogram rule — Pythagorean theorem — similar triangles — trigonometric identity — unit circle — Trapezoid — Triangle — Theorem — point — ray — plane — line — line segment
Measurements
Bearing
A |
https://en.wikipedia.org/wiki/Coders%20at%20Work | Coders at Work: Reflections on the Craft of Programming () is a 2009 book by Peter Seibel comprising interviews with 15 highly accomplished programmers. The primary topics in these interviews include how the interviewees learned programming, how they debug code, their favorite languages and tools, their opinions on literate programming, proofs, code reading and so on.
Interviewees
Jamie Zawinski
Brad Fitzpatrick
For studying Perl he recommends Higher-Order Perl by Mark Jason Dominus.
Douglas Crockford
Brendan Eich
Joshua Bloch
Joe Armstrong
Simon Peyton Jones
Mentions David Turner's paper on S-K combinators (cf. SKI combinator calculus). The S-K combinators are a way of translating and then executing the lambda calculus. Turner showed in his paper how to translate the lambda calculus into the three combinators S, K and I, which are all just closed lambda terms, with I = SKK; so in effect you take a lambda term and compile it down to just Ss and Ks.
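That translation can be sketched with ordinary higher-order functions; a Python illustration (mine, not from the book):

```python
# The S and K combinators as curried functions; I need not be primitive,
# since S K K already behaves as the identity:
S = lambda f: lambda g: lambda x: f(x)(g(x))
K = lambda x: lambda y: x
I = S(K)(K)          # I x = S K K x = K x (K x) = x
assert I(42) == 42

# Larger lambda terms compile to combinator trees, e.g. function
# composition (the B combinator) is S (K S) K:
B = S(K(S))(K)
inc = lambda n: n + 1
dbl = lambda n: 2 * n
print(B(inc)(dbl)(10))   # inc(dbl(10)) = 21
```

No variables survive the translation, which is exactly the point: the compiled form is a tree of closed terms that can be reduced by a simple graph-reduction machine.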
Recalls his first instance of learning functional programming when taking a course by Arthur Norman who showed how to build doubly linked lists without any side effects at all.
Mentions the paper "Can Programming be Liberated from the von Neumann Style" by John Backus.
Wants John Hughes to write a paper for the Journal of Functional Programming on why static typing is bad. Hughes has written a popular paper titled "Why Functional Programming Matters".
Mentions a data structure called "zipper" that is a very useful functional data structure. Peyton Jones also mentions the 4-5 line program that Hughes wrote to calculate an arbitrary number of digits of e lazily.
Mentions that the sequential implementation of a double-ended queue is a first year undergraduate programming problem. For a concurrent implementation with a lock per node, it's a research paper problem. With transactional memory, it's an undergraduate problem again.
Favorite books/authors: Programming Pearls by Jon Bentley, a chapter titled "Writing Programs for 'The Book |
https://en.wikipedia.org/wiki/False%20positive%20rate | In statistics, when performing multiple comparisons, a false positive ratio (also known as fall-out or false alarm ratio) is the probability of falsely rejecting the null hypothesis for a particular test. The false positive rate is calculated as the ratio between the number of negative events wrongly categorized as positive (false positives) and the total number of actual negative events (regardless of classification).
The false positive rate (or "false alarm rate") usually refers to the expectancy of the false positive ratio.
Definition
The false positive rate is
FPR = FP / N = FP / (FP + TN),
where FP is the number of false positives, TN is the number of true negatives and N = FP + TN is the total number of ground truth negatives.
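In code form (a trivial helper of my own, with hypothetical counts):

```python
def false_positive_rate(fp, tn):
    """FPR = FP / (FP + TN): the fraction of actual negatives wrongly flagged."""
    return fp / (fp + tn)

# 10 false alarms among 10 + 90 = 100 ground-truth negatives:
print(false_positive_rate(10, 90))   # 0.1
```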
The level of significance that is used to test each hypothesis is set based on the form of inference (simultaneous inference vs. selective inference) and its supporting criteria (for example FWER or FDR), that were pre-determined by the researcher.
When performing multiple comparisons in a statistical framework such as above, the false positive ratio (also known as the false alarm ratio, as opposed to false positive rate / false alarm rate) usually refers to the probability of falsely rejecting the null hypothesis for a particular test. Using the terminology suggested here, it is simply V/m₀.
Since V is a random variable and m₀ is a constant (V ≤ m₀), the false positive ratio is also a random variable, ranging between 0 and 1.
The false positive rate (or "false alarm rate") usually refers to the expectancy of the false positive ratio, expressed by E(V/m₀).
It is worth noting that the two definitions ("false positive ratio" / "false positive rate") are somewhat interchangeable. For example, in the referenced article serves as the false positive "rate" rather than as its "ratio".
Classification of multiple hypothesis tests
Comparison with other error rates
While the false positive rate is mathematically equal to the type I error rate, it is viewed as a separate term for the following reaso |
https://en.wikipedia.org/wiki/Induction-induction | In intuitionistic type theory (ITT), a discipline within mathematical logic, induction-induction is a principle for simultaneously declaring an inductive type and an inductive predicate over this type.
An inductive definition is given by rules for generating elements of some type. One can then define some predicate on that type by providing constructors for forming the elements of the predicate, inductively on the way the elements of the type are generated. Induction-induction generalizes this situation: one can simultaneously define the type and the predicate, because the rules for generating elements of the type are allowed to refer to the predicate.
Induction-induction can be used to define larger types, including various universe constructions in type theory, and limit constructions in category/topos theory.
Example 1
Present the type as having the following constructors (note the early reference to the predicate):
and, simultaneously, present the predicate as having the following constructors:
if and then
if and and then .
Example 2
A simple common example is the universe à la Tarski type former. It creates some inductive type U and some inductive predicate T. For every type in the type theory (except U itself!), there will be some element of U which may be seen as a code for the corresponding type; the predicate T inductively decodes each possible code to the corresponding type; and constructing new codes in U will require referring to the decoding-as-type of earlier codes, via the predicate T.
See also
Induction-recursion – for simultaneously declaring some inductive type and some recursive function on this type. |
https://en.wikipedia.org/wiki/Britten%E2%80%93Davidson%20model | The Britten–Davidson model, also known as the gene-battery model, is a hypothesis for the regulation of protein synthesis in eukaryotes. Proposed by Roy John Britten and Eric H. Davidson in 1969, the model postulates four classes of DNA sequence: an integrator gene, a producer gene, a receptor site, and a sensor site. The sensor site regulates the integrator gene, responsible for synthesis of activator RNA. The integrator gene cannot synthesize activator RNA unless the sensor site is activated. Activation and deactivation of the sensor site is done by external stimuli, such as hormones. The activator RNA then binds with a nearby receptor site, which stimulates the synthesis of mRNA at the structural gene.
This theory would explain how several different integrators could be concurrently synthesized, and would explain the pattern of repetitive DNA sequences followed by a unique DNA sequence that exists in genes.
See also
Transcriptional regulation
Operon |
https://en.wikipedia.org/wiki/Method%20%28computer%20programming%29 | A method in object-oriented programming (OOP) is a procedure associated with an object, and generally also a message. An object consists of state data and behavior; these compose an interface, which specifies how the object may be used. A method is a behavior of an object parametrized by a user.
Data is represented as properties of the object, and behaviors are represented as methods. For example, a Window object could have methods such as open and close, while its state (whether it is open or closed at any given point in time) would be a property.
In class-based programming, methods are defined within a class, and objects are instances of a given class. One of the most important capabilities that a method provides is method overriding: the same name (e.g., area) can be used for multiple different kinds of classes. This allows the sending objects to invoke behaviors and to delegate the implementation of those behaviors to the receiving object. A method in Java programming sets the behavior of a class object. For example, an object can send an area message to another object, and the appropriate formula is invoked whether the receiving object is a rectangle, circle, triangle, etc.
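The area example can be sketched in Python (hypothetical Shape classes of my own; the point is that the same "area" message dispatches to whichever implementation the receiving object defines):

```python
import math

class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h
    def area(self):
        return self.w * self.h

class Circle:
    def __init__(self, r):
        self.r = r
    def area(self):
        return math.pi * self.r ** 2

# The caller sends the same "area" message; each receiver picks its own formula:
for shape in [Rectangle(3, 4), Circle(1.0)]:
    print(shape.area())   # 12, then 3.14159...
```

The loop body never inspects the receiver's type, which is the "black box" delegation the text describes.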
Methods also provide the interface that other classes use to access and modify the properties of an object; this is known as encapsulation. Encapsulation and overriding are the two primary distinguishing features between methods and procedure calls.
Overriding and overloading
Method overriding and overloading are two of the most significant ways that a method differs from a conventional procedure or function call. Overriding refers to a subclass redefining the implementation of a method of its superclass. For example, findArea may be a method defined on a shape class, triangle, etc. would each define the appropriate formula to calculate their area. The idea is to look at objects as "black boxes" so that changes to the internals of the object can be made with minimal impact on the other o |
https://en.wikipedia.org/wiki/Social%20history%20%28medicine%29 | In medicine, a social history (abbreviated "SocHx") is a portion of the medical history (and thus the admission note) addressing familial, occupational, and recreational aspects of the patient's personal life that have the potential to be clinically significant.
Components
Components can include inquiries about:
Substances
Alcohol
Tobacco (pack years)
Illicit drugs
Occupation
Sexual behavior (increased risk of various infections among sex workers and their clients, and males engaging in receptive anal intercourse)
Prison (especially if tuberculosis needs to be ruled out)
Travel
Exercise
Diet
Firearms in household (especially if children or persons with cognitive impairment are present)
Relation to history |
https://en.wikipedia.org/wiki/N-Gage%20%28service%29 | The N-Gage service (also referred to as N-Gage 2.0) was a mobile gaming platform from Nokia that was available for several Nokia smartphones running on S60 (Symbian). N-Gage combined numerous games with 3D graphics into an application featuring online (via N-Gage Arena) and social features. The service was a successor to the original 2003 N-Gage gaming device.
The N-Gage platform was compatible with: Nokia N78, N79, N81, N81 8GB, N82, N85, N86, N86 8MP, N95, N95 8GB, N96, N97, Nokia 5320 XpressMusic, 5630 XpressMusic, 5730 XpressMusic Nokia 6210 Navigator, 6710 Navigator, 6720 Classic, E52, E55 and E75. Due to memory issues, hinted at during an interview in February 2008, support for the Nokia N73, N93 and N93i was cancelled.
On October 30, 2009, Nokia announced that no new N-Gage games would be produced. Some reasons cited for its failure are its bad development model, marketing, and success of Apple's App Store.
A total of 49 games were released for it. Nokia moved its games onto their Ovi Store thereafter. Games can still be played on compatible devices, but support for the service and its online features ceased in September 2010.
Background
Nokia's N-Gage gaming smartphone from 2003 did not perform as well as expected, and its upgraded QD version did not bring significant improvements to its performance. Instead of developing a new gaming device, there was a change in concept: Nokia explained during E3 2005 that they were planning to put an N-Gage platform on several smartphone devices rather than releasing a specific device (although their N81 and 5730 XpressMusic models, with two dedicated gaming buttons next to the screen, were marketed as phones built for gaming). It was often nicknamed N-Gage Next Generation by the public.
Nokia then worked behind closed doors for a little more than a year before finally announcing, at E3 2006, the N-Gage mobile gaming service, set for a 2007 release. They also started showing off next-gen titles s
https://en.wikipedia.org/wiki/Pl%40ntNet | Pl@ntNet is a citizen science project for automatic plant identification through photographs and based on machine learning.
History
This project, launched in 2009, has been developed by scientists (computer engineers and botanists) from a consortium gathering French research institutes (Institut de recherche pour le développement (IRD), Centre de coopération internationale en recherche agronomique pour le développement (CIRAD), Institut national de la recherche agronomique (INRA), Institut national de recherche en informatique et en automatique (INRIA)) and the network Tela Botanica, with the support of Agropolis Fondation.
Platforms
An app for smartphones (and a web version) was launched in 2013, which allows users to identify thousands of plant species from photographs taken by the user. It is available in several languages.
As of 2019 it had been downloaded over 10 million times, in more than 180 countries worldwide.
Projects
In 2019, Pl@ntNet has 22 projects: |
https://en.wikipedia.org/wiki/Ribbon%20%28mathematics%29 | In differential geometry, a ribbon (or strip) is the combination of a smooth space curve and its corresponding normal vector. More formally, a ribbon denoted (X, U) includes a curve X given by a three-dimensional vector X(s), depending continuously on the curve arc-length s (0 ≤ s ≤ L), and a unit vector U(s) perpendicular to X at each point. Ribbons have seen particular application as regards DNA.
Properties and implications
The ribbon (X, U) is called simple if X is a simple curve (i.e. without self-intersections) and closed, and if U and all its derivatives agree at s = 0 and s = L.
For any simple closed ribbon the curves given parametrically by X(s) + εU(s) are, for all sufficiently small positive ε, simple closed curves disjoint from X.
The ribbon concept plays an important role in the Călugăreanu–White–Fuller formula, which states that
Lk = Wr + Tw,
where Lk is the asymptotic (Gauss) linking number, the integer number of turns of the ribbon around its axis; Wr denotes the total writhing number (or simply writhe), a measure of non-planarity of the ribbon's axis curve; and Tw is the total twist number (or simply twist), the rate of rotation of the ribbon around its axis.
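The linking number Lk can be approximated directly from the Gauss double integral (a numerical sketch of my own; the two circles below form a Hopf link, for which |Lk| = 1):

```python
import math

def gauss_linking(c1, c2, n=300):
    """Approximate the Gauss linking integral
    Lk = (1/4pi) * double integral of (r1 - r2) . (dr1 x dr2) / |r1 - r2|^3
    for two closed parametric curves c1, c2 : [0, 2pi) -> R^3."""
    def sample(c):
        pts = [c(2 * math.pi * i / n) for i in range(n)]
        mids = [tuple((a + b) / 2 for a, b in zip(pts[i], pts[(i + 1) % n]))
                for i in range(n)]
        segs = [tuple(b - a for a, b in zip(pts[i], pts[(i + 1) % n]))
                for i in range(n)]
        return mids, segs
    m1, s1 = sample(c1)
    m2, s2 = sample(c2)
    total = 0.0
    for p1, d1 in zip(m1, s1):
        for p2, d2 in zip(m2, s2):
            r = (p1[0] - p2[0], p1[1] - p2[1], p1[2] - p2[2])
            cross = (d1[1] * d2[2] - d1[2] * d2[1],
                     d1[2] * d2[0] - d1[0] * d2[2],
                     d1[0] * d2[1] - d1[1] * d2[0])
            dist = math.sqrt(r[0] ** 2 + r[1] ** 2 + r[2] ** 2)
            total += (r[0] * cross[0] + r[1] * cross[1] + r[2] * cross[2]) / dist ** 3
    return total / (4 * math.pi)

# Hopf link: unit circle in the xy-plane, and a unit circle in the
# xz-plane centered at (1, 0, 0); they pass through each other once.
c1 = lambda t: (math.cos(t), math.sin(t), 0.0)
c2 = lambda t: (1 + math.cos(t), 0.0, math.sin(t))
print(abs(gauss_linking(c1, c2)))   # ~1.0
```

The sign of the raw value depends on the curves' orientations; only the magnitude is reported here.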
Ribbon theory investigates geometric and topological aspects of a mathematical reference ribbon associated with physical and biological properties, such as those arising in topological fluid dynamics, DNA modeling and in material science.
See also
Bollobás–Riordan polynomial
Knots and graphs
Knot theory
DNA supercoil
Möbius strip |
https://en.wikipedia.org/wiki/Parmanu%3A%20The%20Story%20of%20Pokhran | Parmanu: The Story of Pokhran (Hindi for "atom") is a 2018 Indian Hindi-language historical action drama film directed by Abhishek Sharma and jointly written by Saiwyn Quadras, Sanyuktha Chawla Sheikh and Sharma. It was produced under the Zee Studios, JA Entertainment and KYTA Productions banners. The film is based on the nuclear bomb test explosions conducted by the Indian Army at Pokhran in 1998. It stars John Abraham, Diana Penty and Boman Irani in lead roles.
Parmanu was earlier slated to release on 8 December 2017, but was postponed to avoid clashing with Padmaavat (2018). It was further delayed owing to conflicts between the producers of the film, Abraham and KriArj Entertainment. It was eventually released on 25 May 2018.
Plot
In 1995, Ashwat Raina, an IAS officer from the Research and Analysis Wing, suggests the ministers perform a retaliatory nuclear test in response to the recent nuclear missile tests by China. However, he is ridiculed, and the PMO secretary Suresh Yadav tells him to keep the file of his plan brief. Ashwat submits the file along with a floppy containing the details, but Yadav submits a half-baked plan to the Prime Minister and ignores the floppy. The test is hastily conducted without Ashwat's involvement, who then becomes the scapegoat when an American Lacrosse satellite photographs the test preparations, and the United States warns India not to continue with the tests. Ashwat loses his job, and three years later, in 1998, when a new Prime Minister is sworn in, he is approached by the new PMO secretary Himanshu Shukla who questions the failure of the tests. Ashwat explains the most integral part of the mission was to keep it confidential, but it couldn't happen since no one viewed the floppy. Himanshu gives Ashwat a second chance to conduct the tests, for which he starts preparing a team. Selecting five members from the BARC (Bhabha Atomic Research Centre), DRDO (Defence Research and Development Organisation), Indian Army, ISA (Indian Space Agency) and IB (I |
https://en.wikipedia.org/wiki/Second%20polar%20moment%20of%20area | The second polar moment of area, also known (incorrectly, colloquially) as "polar moment of inertia" or even "moment of inertia", is a quantity used to describe resistance to torsional deformation (deflection) in objects (or segments of an object) with an invariant cross-section and no significant warping or out-of-plane deformation. It is a constituent of the second moment of area, linked through the perpendicular axis theorem. Where the planar second moment of area describes an object's resistance to deflection (bending) when subjected to a force applied to a plane parallel to the central axis, the polar second moment of area describes an object's resistance to deflection when subjected to a moment applied in a plane perpendicular to the object's central axis (i.e. parallel to the cross-section). Similar to planar second moment of area calculations (I_x, I_y, and I_xy), the polar second moment of area is often denoted as I_z. While several engineering textbooks and academic publications also denote it as J or J_z, this designation should be given careful attention so that it does not become confused with the torsion constant, J_t, used for non-cylindrical objects.
Simply put, the polar moment of area is a shaft or beam's resistance to being distorted by torsion, as a function of its shape. The rigidity comes from the object's cross-sectional area only, and does not depend on its material composition or shear modulus. The greater the magnitude of the second polar moment of area, the greater the torsional stiffness of the object.
Definition
The equation describing the polar moment of area is a multiple integral over the cross-sectional area, A, of the object:
I_z = ∬_A ρ² dA,
where ρ is the distance to the element dA.
Substituting the x and y components, using the Pythagorean theorem:
I_z = ∬_A (x² + y²) dA.
Given the planar second moments of area, where
I_x = ∬_A y² dA and I_y = ∬_A x² dA,
it is shown that the polar moment of area can be described as the summation of the x and y planar moments of area:
I_z = I_x + I_y.
This is also shown in the perpendicular axis theorem.
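The summation relation can be checked numerically. The sketch below (the function name and grid resolution are illustrative assumptions) integrates I_x, I_y and I_z over a solid circular cross-section of radius r, for which the closed forms are I_x = I_y = πr⁴/4 and I_z = πr⁴/2:

```python
import math

def circle_moments(r, n=400):
    """Midpoint-grid integration of the planar and polar second
    moments of area over a solid circular cross-section of radius r."""
    ix = iy = iz = 0.0
    h = 2 * r / n                       # grid spacing
    da = h * h                          # area of one grid cell
    for a in range(n):
        x = -r + (a + 0.5) * h
        for b in range(n):
            y = -r + (b + 0.5) * h
            if x * x + y * y <= r * r:  # cell centre inside the section
                ix += y * y * da        # I_x = ∬ y² dA
                iy += x * x * da        # I_y = ∬ x² dA
                iz += (x * x + y * y) * da  # I_z = ∬ ρ² dA
    return ix, iy, iz

ix, iy, iz = circle_moments(1.0)
print(ix, iy, iz)            # close to pi/4, pi/4, pi/2
print(abs(iz - (ix + iy)))   # essentially zero: I_z = I_x + I_y
```

The identity I_z = I_x + I_y holds for any cross-section; the circle is used here only because its closed-form values make the numerical error easy to judge.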
https://en.wikipedia.org/wiki/Enoxolone | Enoxolone (INN, BAN; also known as glycyrrhetinic acid or glycyrrhetic acid) is a pentacyclic triterpenoid derivative of the beta-amyrin type obtained from the hydrolysis of glycyrrhizic acid, which was obtained from the herb liquorice. It is used in flavoring and it masks the bitter taste of drugs like aloe and quinine. It is effective in the treatment of peptic ulcer and also has expectorant (antitussive) properties. It has some additional pharmacological properties with possible antiviral, antifungal, antiprotozoal, and antibacterial activities.
Mechanism of action
Glycyrrhetinic acid inhibits the enzymes (15-hydroxyprostaglandin dehydrogenase and delta-13-prostaglandin reductase) that metabolize the prostaglandins PGE-2 and PGF-2α to their respective, inactive 15-keto-13,14-dihydro metabolites. This increases prostaglandins in the digestive system. Prostaglandins inhibit gastric secretion, stimulate pancreatic secretion and mucous secretion in the intestines, and markedly increase intestinal motility. They also cause cell proliferation in the stomach. The effect on gastric acid secretion, and promotion of mucous secretion and cell proliferation, shows why licorice has potential in treating peptic ulcers.
Licorice should not be taken during pregnancy, because PGF-2α stimulates activity of the uterus during pregnancy and can cause abortion.
The structure of glycyrrhetinic acid is similar to that of cortisone. Both molecules are flat and similar at positions 3 and 11. This might be the basis for licorice's anti-inflammatory action.
3-β-D-(Monoglucuronyl)-18-β-glycyrrhetinic acid, a metabolite of glycyrrhetinic acid, inhibits the conversion of 'active' cortisol to 'inactive' cortisone in the kidneys. This occurs via inhibition of the enzyme 11-β-hydroxysteroid dehydrogenase. As a result, cortisol levels become high within the collecting duct of the kidney. Cortisol has intrinsic mineralocorticoid properties (that is, it acts like aldosterone and increases sodium reabsorpt |
https://en.wikipedia.org/wiki/Comparison%20of%20civic%20technology%20platforms | Civic technology is technology that enables engagement and participation, or enhances the relationship between the people and government, by enhancing citizen communications and public decision-making, and by improving government delivery of services and infrastructure. This comparison of civic technology platforms compares platforms that are designed to improve citizen participation in governance, distinguished from technology that directly deals with government infrastructure.
Platform types
Graham Smith of the University of Southampton, in his book Beyond the Ballot, used the following categorization of democratic innovations:
Electoral innovations: "aim to increase electoral turnout"
Consultation innovations: "aim to inform decision-makers of citizens' views"
Deliberative innovations: "aim to bring citizens together to deliberate on policy issues, the outcomes of which may influence decision-makers"
Co-governance innovations: "aim to give citizens significant influence during the process of decision-making"
Direct democracy innovations: "aim to give citizens final decision-making power on key issues"
E-democracy innovations: "use information technology to engage citizens in the decision-making process"
Comparison chart
See also
Civic technology
Civic technology companies
Comparison of Internet forum software
Comparison of Q&A sites
E-democracy
Liquid democracy
Open government
Voting
Direct democracy |
https://en.wikipedia.org/wiki/Triangle%20K | Triangle K is a kosher certification agency under the leadership of Rabbi Aryeh R. Ralbag. It was founded by his late father, Rabbi Yehosef Ralbag. The hechsher is a letter K enclosed in an equilateral triangle.
Supervision and certification
They supervise a number of major brands, including Del Monte, Hebrew National, Ocean Spray, Sunsweet, Sunny Delight, SunChips and Wonder Bread.
Minute Maid products used to be supervised by Triangle K. Since 2013, the Orthodox Union has been providing kosher certification for Minute Maid products instead.
Many Orthodox Jews eat only glatt kosher. Triangle K continues to certify foods as kosher that are not glatt kosher. As a result, some Orthodox Jews will not eat food that is certified by Triangle K.
K Meshulash
The name K Meshulash is sometimes used in addition to the trademarked Triangle K name. Meshulash means triangle, triple, or tripled in Hebrew.
See also
Kosher foods
Kashrut |
https://en.wikipedia.org/wiki/Curio%20%C3%97%20peregrinus | Curio × peregrinus, also known as dolphin necklace, flying dolphins, string of dolphins, dolphin plant or Senecio hippogriff, is a succulent nothospecies of Curio (previously Senecio) in the family Asteraceae. It is often called, incorrectly, Senecio peregrinus, but that name was previously given, by Grisebach in 1879, to a different species from South America. The name Curio × peregrinus was published in 1999, based on the earlier name Kleinia peregrina; however, this name was not validly published. The plant is a hybrid between Curio rowleyanus and Curio articulatus.
Description
Reaching a height of 15 cm (6 inches), the plant's curvy leaves develop two small points which make it look like a pod of leaping dolphins. It blooms from May to June. The flowers are small and white in colour, forming compact puffballs. Each bloom has a halo of blood red to golden yellow filaments.
Cultivation
The plant thrives under bright, indirect light with some morning sun, and in semi-shade under moist conditions. Not frost tolerant and preferring warmer weather, the plant may become sunburned from excessive sun exposure. The plant does well in hanging baskets, where its leaves can cascade downward. Under good conditions, the plant can grow over 50 cm in the first year. The plant may be fertilised once or twice a year to promote new growth and general health. The plant can be propagated through cuttings.
Novelty
The plant became a novelty trend in 2017, creating a stir in Japan after a Twitter account posted it; the post ultimately received 10.5k retweets.
See also
Curio 'Trident Blue', a similar looking hybrid |
https://en.wikipedia.org/wiki/Collective%20effects%20%28accelerator%20physics%29 | Charged particle beams in a particle accelerator or a storage ring undergo a variety of different processes. Typically the beam dynamics is broken down into single particle dynamics and collective effects. Sources of collective effects include single or multiple inter-particle scattering and interaction with the vacuum chamber and other surroundings, formalized in terms of impedance.
The collective effects of charged particle beams in particle accelerators share some similarity with the dynamics of plasmas. In particular, a charged particle beam may be considered as a non-neutral plasma, and one may find mathematical methods in common with the study of stability or instabilities. One may also find commonality with the field of fluid mechanics, since the density of charged particles is often sufficient for the beam to be considered a flowing continuum.
Another important topic is the attempt to mitigate collective effects by use of single bunch or multi-bunch feedback systems.
Types of collective effects
Collective effects can include emittance growth, bunch length or energy spread growth, instabilities, or particle losses. There are also multi-bunch effects.
Formalisms for treating collective effects
The collective beam motion may be modeled in a variety of ways. One may use macroparticle models, or else a continuum model. The evolution equation in the latter case is typically called the Vlasov equation, and requires one to write down the Hamiltonian function including the external magnetic fields, and the self interaction. Stochastic effects may be added by generalizing to the Fokker–Planck equation.
Software for computation of collective effects
Depending on the effects considered and the modeling formalism used, different software is available for simulation. The collective effects must typically be added in addition to the single particle dynamics, which may be modeled using a tracking code.
See article on Accelerator physics codes. |
https://en.wikipedia.org/wiki/Aichelburg%E2%80%93Sexl%20ultraboost | In general relativity, the Aichelburg–Sexl ultraboost is an exact solution which models the spacetime of an observer moving towards or away from a spherically symmetric gravitating object at nearly the speed of light. It was introduced by Peter C. Aichelburg and Roman U. Sexl in 1971.
The original motivation behind the ultraboost was to consider the gravitational field of massless point particles within general relativity. It can be considered an approximation to the gravity well of a photon or other lightspeed particle, although it does not take into account quantum uncertainty in particle position or momentum.
The metric tensor can be written, in terms of Brinkmann coordinates, as
The ultraboost can be obtained as the limit of a metric, which is also an exact solution, at least if one admits impulsive curvatures.
For example, one can take a Gaussian pulse.
In these plus-polarized axisymmetric vacuum pp-waves, the curvature is concentrated along the axis of symmetry, falling off like , and also near . As , the wave profile turns into a Dirac delta and the ultraboost is recovered.
The ultraboost also helps to explain why fast-moving observers do not see stars and planet-like objects become black holes.
https://en.wikipedia.org/wiki/Statistical%20machine%20translation | Statistical machine translation (SMT) was a machine translation approach that superseded the previous, rule-based approach, which required an explicit description of each and every linguistic rule; this was costly and often did not generalize to other languages. Since 2003, the statistical approach itself has been gradually superseded by the deep learning-based neural network approach.
The first ideas of statistical machine translation were introduced by Warren Weaver in 1949, including the ideas of applying Claude Shannon's information theory. Statistical machine translation was re-introduced in the late 1980s and early 1990s by researchers at IBM's Thomas J. Watson Research Center.
Basis
The idea behind statistical machine translation comes from information theory. A document is translated according to the probability distribution that a string in the target language (for example, English) is the translation of a string in the source language (for example, French).
The problem of modeling the probability distribution has been approached in a number of ways. One approach which lends itself well to computer implementation is to apply Bayes' theorem, that is P(e|f) ∝ P(f|e) P(e), where the translation model P(f|e) is the probability that the source string f is the translation of the target string e, and the language model P(e) is the probability of seeing that target-language string. This decomposition is attractive as it splits the problem into two subproblems. Finding the best translation ẽ is done by picking the one that gives the highest probability:
ẽ = argmax_e P(e|f) = argmax_e P(f|e) P(e).
For a rigorous implementation of this one would have to perform an exhaustive search by going through all strings in the native language. Performing the search efficiently is the work of a machine translation decoder that uses the foreign string, heuristics and other methods to limit the search space and at the same time keeping acceptable quality. This trade-off between quality and time usage can also be found in speech recogn |
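The noisy-channel decomposition above can be illustrated with a toy word-level decoder. All model contents and probability values below are made-up illustrative numbers, not estimated from any corpus:

```python
# Toy noisy-channel translation: pick the English word e maximizing
# P(e) * P(f|e) for a French word f. Probabilities are invented.
language_model = {"house": 0.4, "home": 0.3, "the": 0.3}  # P(e)
translation_model = {                                     # P(f|e)
    ("maison", "house"): 0.8,
    ("maison", "home"): 0.5,
    ("maison", "the"): 0.01,
}

def decode(f):
    """Noisy-channel argmax: the best e given the foreign string f."""
    return max(language_model,
               key=lambda e: language_model[e] * translation_model.get((f, e), 0.0))

print(decode("maison"))  # house  (0.4*0.8 = 0.32 beats 0.3*0.5 = 0.15)
```

A real decoder searches over whole sentences rather than single words, which is why the heuristic search-space pruning described above becomes necessary.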
https://en.wikipedia.org/wiki/Deoxycytidine%20monophosphate | Deoxycytidine monophosphate (dCMP), also known as deoxycytidylic acid or deoxycytidylate in its conjugate acid and conjugate base forms, respectively, is a deoxynucleotide, and one of the four monomers that make up DNA. In a DNA double helix, it will base pair with deoxyguanosine monophosphate.
See also
Cytidine monophosphate
Nucleotides |
https://en.wikipedia.org/wiki/Mutation%20bias | Mutation bias refers to a pattern in which some type of mutation occurs more often than expected under uniformity. The types are most often defined by the molecular nature of the mutational change (see examples below), but sometimes they are based on downstream effects, e.g., Ostrow, et al. refer to the tendency for mutations to increase body size in nematodes as a mutation bias.
Scientific context
The concept of mutation bias appears in several scientific contexts, most commonly in molecular studies of evolution, where mutation biases may be invoked to account for such phenomena as systematic differences in codon usage or genome composition between species. The short tandem repeat (STR) loci used in forensic identification may show biased patterns of gain and loss of repeats. In cancer research, some types of tumors have distinctive mutational signatures that reflect differences in the contributions of mutational pathways. Mutational signatures have proved useful in both detection and treatment.
Recent studies of the emergence of resistance to anti-microbial and anti-cancer drugs show that mutation biases are an important determinant of the prevalence of different types of resistant strains or tumors. Thus, knowledge of mutation bias can be used to design more evolution-resistant therapies.
When mutation bias is invoked as a possible cause of some pattern in evolution, this is generally an application of the theory of arrival biases, and the alternative hypotheses may include selection, biased gene conversion, and demographic factors.
In the past, due to the technical difficulty of detecting rare mutations, most attempts to characterize the mutation spectrum were based on reporter gene systems, or based on patterns of presumptively neutral change in pseudogenes. More recently, there has been an effort to use the MA (mutation accumulation) method and high-throughput sequencing (e.g., ).
Examples of mutation biases
Transition-transversion bias
Th |
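Transition-transversion bias refers to the tendency for transitions (exchanges within the purines, A↔G, or within the pyrimidines, C↔T) to occur more often than transversions (purine-for-pyrimidine exchanges). A minimal classifier for tallying a list of observed substitutions can be sketched as follows; the example data are invented for illustration:

```python
PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def classify(ref, alt):
    """Label a single-nucleotide substitution as a transition
    (purine<->purine or pyrimidine<->pyrimidine) or a transversion."""
    if {ref, alt} <= PURINES or {ref, alt} <= PYRIMIDINES:
        return "transition"
    return "transversion"

# Hypothetical observed substitutions, not real data.
observed = [("A", "G"), ("C", "T"), ("A", "C"), ("G", "T"), ("G", "A")]
counts = {"transition": 0, "transversion": 0}
for ref, alt in observed:
    counts[classify(ref, alt)] += 1
print(counts)  # {'transition': 3, 'transversion': 2}
```

Note that with four bases there are twice as many possible transversions as transitions, so an excess of transitions in a mutation spectrum is a genuine bias relative to uniformity.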
https://en.wikipedia.org/wiki/Lennart%20Johnsson | Lennart Johnsson (born 1944) is a Swedish computer scientist and engineer.
Johnsson started his career at ABB in Sweden and moved on to UCLA, Caltech, Yale University, Harvard University, the Royal Institute of Technology (KTH in Sweden), and Thinking Machines Corporation. He is currently based at the University of Houston, where he holds the Hugh and Lillie Roy Cranz Cullen Distinguished Chair of Computer Science, Mathematics, and Electrical and Computer Engineering. He has also served as a lecturer at a summer school at the KTH PDC Center for High Performance Computing.
https://en.wikipedia.org/wiki/Kazimierz%20Kuratowski | Kazimierz Kuratowski (; 2 February 1896 – 18 June 1980) was a Polish mathematician and logician. He was one of the leading representatives of the Warsaw School of Mathematics. He worked as a professor at the University of Warsaw and at the Mathematical Institute of the Polish Academy of Sciences (IM PAN). Between 1946 and 1953, he served as President of the Polish Mathematical Society.
He is primarily known for his contributions to set theory, topology, measure theory and graph theory. Some of the notable mathematical concepts bearing Kuratowski's name include Kuratowski's theorem, Kuratowski closure axioms, Kuratowski-Zorn lemma and Kuratowski's intersection theorem.
Biography and studies
Kazimierz Kuratowski was born in Warsaw, (then part of Congress Poland controlled by the Russian Empire), on 2 February 1896, into an assimilated Jewish family. He was a son of Marek Kuratow, a barrister, and Róża Karzewska. He completed a Warsaw secondary school, which was named after general Paweł Chrzanowski. In 1913, he enrolled in an engineering course at the University of Glasgow in Scotland, in part because he did not wish to study in Russian; instruction in Polish was prohibited. He completed only one year of study when the outbreak of World War I precluded any further enrolment. In 1915, Russian forces withdrew from Warsaw and Warsaw University was reopened with Polish as the language of instruction. Kuratowski restarted his university education there the same year, this time in mathematics. He obtained his Ph.D. in 1921, in the newly established Second Polish Republic.
Doctoral thesis
In autumn 1921 Kuratowski was awarded the Ph.D. degree for his groundbreaking work. His thesis statement consisted of two parts. One was devoted to an axiomatic construction of topology via the closure axioms. This first part (republished in a slightly modified form in 1922) has been cited in hundreds of scientific articles.
The second part of Kuratowski's thesis was devoted to continu |
https://en.wikipedia.org/wiki/Cubane-type%20cluster | A cubane-type cluster is an arrangement of atoms in a molecular structure that forms a cube. In the idealized case, the eight vertices are symmetry equivalent and the species has Oh symmetry. Such a structure is illustrated by the hydrocarbon cubane. With chemical formula , cubane has carbon atoms at the corners of a cube and covalent bonds forming the edges. Most cubanes have more complicated structures, usually with nonequivalent vertices. They may be simple covalent compounds or macromolecular or supramolecular cluster compounds.
Examples
Compounds having different elements at the corners, or various atoms or groups bonded to the corners, are all part of this class of structures.
Inorganic cubane-type clusters include selenium tetrachloride, tellurium tetrachloride, and sodium silox.
Cubane clusters are common throughout bioinorganic chemistry. Ferredoxins containing [Fe4S4] iron–sulfur clusters are pervasive in nature. The four iron atoms and four sulfur atoms form an alternating arrangement at the corners. The whole cluster is typically anchored by coordination of the iron atoms, usually with cysteine residues. In this way, each Fe center achieves tetrahedral coordination geometry. Some [Fe4S4] clusters arise via dimerization of square-shaped [Fe2S2] precursors. Many synthetic analogues are known including heterometallic derivatives.
Several alkyllithium compounds exist as clusters in solution, typically tetramers, with the formula [RLi]4. Examples include methyllithium and tert-butyllithium. The individual RLi molecules are not observed. The four lithium atoms and the carbon from each alkyl group bonded to them occupy alternating vertices of the cube, with the additional atoms of the alkyl groups projecting off their respective corners.
Octaazacubane is a hypothetical allotrope of nitrogen with formula N8; the nitrogen atoms are the corners of the cube. Like the carbon-based cubane compounds, octaazacubane is predicted to be highly unstable due to angle |
https://en.wikipedia.org/wiki/Nullator | In electronics, a nullator is a theoretical linear, time-invariant one-port defined as having zero current and voltage across its terminals. Nullators are strange in the sense that they simultaneously have properties of both a short (zero voltage) and an open circuit (zero current). They are neither current nor voltage sources, yet both at the same time.
Inserting a nullator in a circuit schematic imposes a mathematical constraint on how that circuit must behave, forcing the circuit to adopt whatever arrangement is needed to meet the condition. For example, the inputs of an ideal operational amplifier (with negative feedback) behave like a nullator, as they draw no current and have no voltage across them, and these conditions are used to analyze the circuitry surrounding the operational amplifier.
A nullator is normally paired with a norator to form a nullor.
Two trivial cases are worth noting: a nullator in parallel with a norator is equivalent to a short circuit (zero voltage, any current), and a nullator in series with a norator is an open circuit (zero current, any voltage).
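The op-amp example can be made concrete: for an inverting amplifier, the two nullator conditions reduce nodal analysis to a single KCL equation at the inverting input. The sketch below (component values and the function name are hypothetical) derives v_out = -v_in * R_f / R_in:

```python
from fractions import Fraction

def inverting_gain(v_in, r_in, r_f):
    """Ideal inverting amplifier analyzed via the nullator at the
    op-amp input terminals."""
    # Nullator conditions: the input node is forced to 0 V (zero
    # voltage across the inputs) and the op-amp draws no current.
    v_node = 0
    # KCL at the inverting input, with zero current into the op-amp:
    #   (v_in - v_node)/r_in + (v_out - v_node)/r_f = 0
    # solving for v_out:
    return -(v_in - v_node) * Fraction(r_f, r_in)

# Hypothetical values: 1 V input, R_in = 1 kΩ, R_f = 10 kΩ.
print(inverting_gain(Fraction(1), 1000, 10000))  # -10
```

Exact rational arithmetic via Fraction is used only so the expected gain of -R_f/R_in comes out without floating-point noise.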
https://en.wikipedia.org/wiki/Lateral%20epicondyle%20of%20the%20humerus | The lateral epicondyle of the humerus is a large, tuberculated eminence, curved a little forward, and giving attachment to the radial collateral ligament of the elbow joint, and to a tendon common to the origin of the supinator and some of the extensor muscles. Specifically, these extensor muscles include the anconeus muscle, the supinator, extensor carpi radialis brevis, extensor digitorum, extensor digiti minimi, and extensor carpi ulnaris. In birds, where the arm is somewhat rotated compared to other tetrapods, it is termed dorsal epicondyle of the humerus. In comparative anatomy, the term ectepicondyle is sometimes used.
A common injury associated with the lateral epicondyle of the humerus is lateral epicondylitis also known as tennis elbow. Repetitive overuse of the forearm, as seen in tennis or other sports, can result in inflammation of "the tendons that join the forearm muscles on the outside of the elbow. The forearm muscles and tendons become damaged from overuse. This leads to pain and tenderness on the outside of the elbow."
See also
Medial epicondyle of the humerus
Common extensor tendon
Tennis elbow (lateral epicondylitis)
Additional images |
https://en.wikipedia.org/wiki/Hymenocallis%20pimana | Hymenocallis pimana is a member of the genus Hymenocallis, in the family Amaryllidaceae. Common name in English is Pima spider-lily; in Spanish it is cebollín. It is endemic to a small mountainous region in the Sierra Madre Occidental, straddling the Mexican states of Chihuahua and Sonora. Many of the people of the region are of the indigenous group known as the Mountain Pima or Pima Bajo.
Type locale is the small village of Nabogame, approximately 18 km northwest of Yepáchic, Chihuahua and about 10 km east of the frontier with Sonora. This is at elevation of approximately 1800 m (6000 ft).
Ecology
The plant is locally abundant in the vicinity of the type locale. It grows in large colonies of hundreds of individuals, often near waterways and gullies. This is largely due to the nature of the large, green seeds, which fall to the ground and germinate close to the parent plant without much dispersal.
The large, showy flowers appear in June, at the beginning of the summer rainy season. Flowers are white and erect, with narrow perianth segments and a prominent corolla. Bulbs resemble small onion bulbs.
Uses
The Pima peoples of the region report that these bulbs provided an emergency food source in years past, during famines that followed crop failures. They say that the bulbs were boiled in lye to remove toxic alkaloids before consumption. This is the only known instance of any people utilizing any member of this genus as food. |
https://en.wikipedia.org/wiki/Slovenian%20National%20Corpus | The Slovenian National Corpus FidaPLUS is a 621-million-word (token) corpus of the Slovenian language, gathered from selected texts written in Slovenian of different genres and styles, mainly from books and newspapers.
The FidaPLUS database is an upgrade of the older (FIDA) corpus, which was developed between 1997 and 2000, with added texts that were published up to 2006 and was the result of the applicative research project of the Faculty of Arts, Faculty of Social Sciences, both University of Ljubljana, and Jožef Stefan Institute's Department of Knowledge Technologies.
The corpus is available via the corpus manager Sketch Engine. This version of the FidaPLUS corpus contains word sketches, automatic corpus-derived overviews of a word's grammatical and collocational behaviour.
https://en.wikipedia.org/wiki/Tarhana | Tarhana is a dried food ingredient, based on a fermented mixture of grain and yogurt or fermented milk, found in the cuisines of Central Asia, Southeast Europe and the Middle East. Dry tarhana has a texture of coarse, uneven crumbs, and it is usually made into a thick soup with water, stock, or milk. As it is both acidic and low in moisture, the milk proteins keep for long periods. Tarhana is very similar to some kinds of kashk.
Regional variations of the name include Armenian թարխանա tarkhana; Greek τραχανάς trahanas or (ξυνό)χονδρος (xyno)hondros; Persian ترخینه، ترخانه، ترخوانه tarkhineh, tarkhāneh, tarkhwāneh; Kurdish tarxane; Albanian trahana; Bulgarian трахана or тархана; Serbo-Croatian tarana or trahana; Hungarian tarhonya; Turkish tarhana.
The Armenian tarkhana is made up of matzoon and eggs mixed with equal amounts of wheat flour and starch. Small pieces of dough are prepared and dried and then kept in glass containers and used mostly in soups, dissolving in hot liquids. The Greek trahanas contains only cracked wheat or a couscous-like paste and fermented milk. The Turkish tarhana consists of cracked wheat (or flour), yoghurt, and vegetables, fermented and then dried. In Cyprus, it is considered a national specialty, and is often served with pieces of halloumi cheese in it. In Albania it is prepared with wheat, yoghurt and butter, and served with hot olive oil and feta cheese.
Etymology
Hill and Bryer suggest that the term tarhana is related to Greek τρακτόν (trakton, romanized as tractum), a thickener Apicius wrote about in the 1st century CE which most other authors consider to be a sort of cracker crumb. Dalby (1996) connects it to the Greek τραγός/τραγανός (tragos/traganos), described (and condemned) in Galen's Geoponica 3.8. Weaver (2002) also considers it of Western origin.
Perry, on the other hand, considers that the phonetic evolution of τραγανός to tarhana is unlikely, and that it probably comes from tarkhwāneh. He considers the resemblance |
https://en.wikipedia.org/wiki/Scotland%27s%20Rural%20College | Scotland's Rural College (SRUC; ) is a public land based research institution focused on agriculture and life sciences. Its history stretches back to 1899 with the establishment of the West of Scotland Agricultural College and its current organisation came into being through a merger of smaller institutions.
After the West of Scotland Agricultural College was established in 1899, the Edinburgh and East of Scotland College of Agriculture and the Aberdeen and North of Scotland College of Agriculture were both established in the early 20th century. These three colleges were merged into a single institution, the Scottish Agricultural College, in 1990. In October 2012, the Scottish Agricultural College was merged with Barony College, Elmwood College and Oatridge College to re-organise the institution as Scotland's Rural College, initialised as SRUC in preparation for it gaining the status of a university college with degree awarding powers.
SRUC has six campuses across Scotland – Aberdeen, Ayr, Barony, Elmwood, King's Buildings and Oatridge. Students study land based courses from further education to postgraduate level and degrees are currently awarded by the University of Edinburgh or the University of Glasgow depending on the course of study. Undergraduates study over a period of three terms each year during their first two years and two semesters during their third and fourth years. In addition to higher education, SRUC has a consulting division, SAC Consulting, which works with clients in agricultural businesses and associated rural industries and it also has a research division which carries out research in agriculture and life sciences.
SRUC has attracted notable botanists, chemists and agriculturists as lecturers and researchers and the institution has counted Henry Dyer, Victor Hope, 2nd Marquess of Linlithgow and Maitland Mackie amongst its academic staff. In addition to careers in agriculture and life sciences, the institution's alumni have gone on to have c |
https://en.wikipedia.org/wiki/Chimol | Chimol (also known chirmol and chismol) is a common Central American cuisine or condiment topping on foods such as carne asada.
Preparation
Chimol is made of diced tomato, bell pepper, cilantro, and onion. It is seasoned with lemon juice, salt, and black pepper, and sometimes with vinegar too. It usually has a spice of some sort.
Modifications to the recipe exist depending on different regions of Central America.
Uses
It is traditional to cook the tomato at the same time as the carne asada, because this gives the carne asada a juicy taste. It is used as a sauce on carne asada, with salads, with pork and with chicken breast.
https://en.wikipedia.org/wiki/Language%20module | The language module or language faculty is a hypothetical structure in the human brain which is thought to contain innate capacities for language, originally posited by Noam Chomsky. There is ongoing research into brain modularity in the fields of cognitive science and neuroscience, although the current idea is much weaker than what was proposed by Chomsky and Jerry Fodor in the 1980s. In today's terminology, 'modularity' refers to specialisation: language processing is specialised in the brain to the extent that it occurs partially in different areas than other types of information processing such as visual input. The current view is, then, that language is neither compartmentalised nor based on general principles of processing (as proposed by George Lakoff). It is modular to the extent that it constitutes a specific cognitive skill or area in cognition.
Meaning of a module
The notion of a dedicated language module in the human brain originated with Noam Chomsky's theory of Universal Grammar (UG). The debate on the issue of modularity in language is underpinned, in part, by different understandings of this concept. There is, however, some consensus in the literature that a module is considered committed to processing specialized representations (domain-specificity) in an informationally encapsulated way. A distinction should be drawn between anatomical modularity, which proposes there is one 'area' in the brain that deals with this processing, and functional modularity that obviates anatomical modularity whilst maintaining information encapsulation in distributed parts of the brain.
No singular anatomical module
The available evidence points toward the conclusion that no single area of the brain is solely devoted to processing language. The Wada test, where sodium amobarbital is used to anaesthetise one hemisphere, shows that the left-hemisphere appears to be crucial in language processing. Yet, neuroimaging does not implicate any single area but rather ident |
https://en.wikipedia.org/wiki/CatSper3 | CatSper3 is a protein which in humans is encoded by the CATSPER3 gene. CatSper3 is a member of the cation channels of sperm family of proteins. The four proteins in this family together form a sperm-specific Ca2+-permeant ion channel essential for the correct function of sperm cells. |
https://en.wikipedia.org/wiki/Diatom | A diatom (Neo-Latin diatoma) is any member of a large group comprising several genera of algae, specifically microalgae, found in the oceans, waterways and soils of the world. Living diatoms make up a significant portion of the Earth's biomass: they generate about 20 to 50 percent of the oxygen produced on the planet each year, take in over 6.7 billion tonnes of silicon each year from the waters in which they live, and constitute nearly half of the organic material found in the oceans. The shells of dead diatoms can reach as much as a half-mile (800 m) deep on the ocean floor, and the entire Amazon basin is fertilized annually by 27 million tons of diatom shell dust transported by transatlantic winds from the African Sahara, much of it from the Bodélé Depression, which was once made up of a system of fresh-water lakes.
Diatoms are unicellular organisms: they occur either as solitary cells or in colonies, which can take the shape of ribbons, fans, zigzags, or stars. Individual cells range in size from 2 to 200 micrometers. In the presence of adequate nutrients and sunlight, an assemblage of living diatoms doubles approximately every 24 hours by asexual multiple fission; the maximum life span of individual cells is about six days. Diatoms have two distinct shapes: a few (centric diatoms) are radially symmetric, while most (pennate diatoms) are broadly bilaterally symmetric.
A unique feature of diatoms is that they are surrounded by a cell wall made of silica (hydrated silicon dioxide), called a frustule. These frustules produce structural coloration, prompting them to be described as "jewels of the sea" and "living opals".
Movement in diatoms primarily occurs passively as a result of both ocean currents and wind-induced water turbulence; however, male gametes of centric diatoms have flagella, permitting active movement to seek female gametes. Similar to plants, diatoms convert light energy to chemical energy by photosynthesis, but their chloroplasts were acquire |
https://en.wikipedia.org/wiki/Tubuli%20seminiferi%20recti | The tubuli seminiferi recti (also known as the tubuli recti, tubulus rectus, or straight seminiferous tubules) are structures in the testicle connecting the convoluted region of the seminiferous tubules to the rete testis, although the tubuli recti have a different appearance distinguishing them from these two structures.
They enter the fibrous tissue of the mediastinum, and pass upward and backward, forming, in their ascent, a close network of anastomosing tubes which are merely channels in the fibrous stroma, lined by flattened epithelium, and having no proper walls; this constitutes the rete testis. Only Sertoli cells line the terminal ends of the seminiferous tubules (tubuli recti). |
https://en.wikipedia.org/wiki/Method%20of%20moments%20%28probability%20theory%29 | In probability theory, the method of moments is a way of proving convergence in distribution by proving convergence of a sequence of moment sequences. Suppose X is a random variable and that all of the moments
E[X^k], k = 1, 2, 3, ...,
exist. Further suppose the probability distribution of X is completely determined by its moments, i.e., there is no other probability distribution with the same sequence of moments (cf. the problem of moments). If
E[Xn^k] → E[X^k] as n → ∞
for all values of k, then the sequence {Xn} converges to X in distribution.
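As a concrete illustration (not from the article), let Xn be uniform on {0, 1/n, ..., (n−1)/n}. Its moments converge to those of U[0, 1], whose bounded support guarantees it is determined by its moments, so Xn converges to U[0, 1] in distribution. A minimal numerical sketch:

```python
def moment(n, k):
    # k-th moment of X_n, uniform on {0, 1/n, ..., (n-1)/n}
    return sum((i / n) ** k for i in range(n)) / n

# the limit distribution U[0, 1] has k-th moment 1/(k + 1)
for k in range(1, 6):
    assert abs(moment(10_000, k) - 1 / (k + 1)) < 1e-3
```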
The method of moments was introduced by Pafnuty Chebyshev for proving the central limit theorem; Chebyshev cited earlier contributions by Irénée-Jules Bienaymé. More recently, it has been applied by Eugene Wigner to prove Wigner's semicircle law, and has since found numerous applications in the theory of random matrices.
Notes
Moment (mathematics) |
https://en.wikipedia.org/wiki/Dichotic%20listening | Dichotic listening is a psychological test commonly used to investigate selective attention and the lateralization of brain function within the auditory system. It is used within the fields of cognitive psychology and neuroscience.
In a standard dichotic listening test, a participant is presented with two different auditory stimuli simultaneously (usually speech), directed into different ears over headphones. In one type of test, participants are asked to pay attention to one or both of the stimuli; later, they are asked about the content of either the stimulus they were instructed to attend to or the stimulus they were instructed to ignore.
History
Donald Broadbent is credited with being the first scientist to systematically use dichotic listening tests in his work. In the 1950s, Broadbent employed dichotic listening tests in his studies of attention, asking participants to focus attention on either a left- or right-ear sequence of digits. He suggested that due to limited capacity, the human information processing system needs to select which channel of stimuli to attend to, deriving his filter model of attention.
In the early 1960s, Doreen Kimura used dichotic listening tests to draw conclusions about lateral asymmetry of auditory processing in the brain. She demonstrated, for example, that healthy participants have a right-ear superiority for the reception of verbal stimuli, and left-ear superiority for the perception of melodies. From that study, and other studies using neurological patients with brain lesions, she concluded that there is a predominance of the left hemisphere for speech perception, and a predominance of the right hemisphere for melodic perception.
In the late 1960s and early 1970s, Donald Shankweiler and Michael Studdert-Kennedy of Haskins Laboratories used a dichotic listening technique (presenting different nonsense syllables) to demonstrate the dissociation of phonetic (speech) and auditory (nonspeech) perception by finding that phoneti |
https://en.wikipedia.org/wiki/Shapley%E2%80%93Folkman%20lemma | The Shapley–Folkman lemma is a result in convex geometry that describes the Minkowski addition of sets in a vector space. It is named after mathematicians Lloyd Shapley and Jon Folkman, but was first published by the economist Ross M. Starr.
The lemma may be intuitively understood as saying that, if the number of summed sets exceeds the dimension of the vector space, then their Minkowski sum is approximately convex.
Related results provide more refined statements about how close the approximation is. For example, the Shapley–Folkman theorem provides an upper bound on the distance between any point in the Minkowski sum and its convex hull. This upper bound is sharpened by the Shapley–Folkman–Starr theorem (alternatively, Starr's corollary).
The Shapley–Folkman lemma has applications in economics, optimization and probability theory. In economics, it can be used to extend results proved for convex preferences to non-convex preferences. In optimization theory, it can be used to explain the successful solution of minimization problems that are sums of many functions. In probability, it can be used to prove a law of large numbers for random sets.
Introductory example
A set is convex if every line segment joining two of its points is a subset of the set: for example, the solid disk is a convex set but the circle is not, because the line segment joining two distinct points is not a subset of the circle.
The convex hull of a set Q is the smallest convex set that contains Q. The distance between a Minkowski sum and its convex hull measures how far the sum is from being convex; this distance is zero if and only if the sum is convex.
Minkowski addition adds sets elementwise: every member of one set is added to every member of the other. For example, adding the set consisting of the integers zero and one to itself yields the set consisting of zero, one, and two: {0, 1} + {0, 1} = {0 + 0, 0 + 1, 1 + 0, 1 + 1} = {0, 1, 2}.
The subset of the integers {0, 1, 2} is contained in the interval of real numbers [0, 2], which is convex. The Shapley–Folkman lemma implies that every point in [0, 2] is the sum of an integer from {0, 1} and a real number from [0, 1].
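This example can be computed directly. A small sketch (function names hypothetical) that also averages n Minkowski copies of {0, 1}, showing how the gaps shrink as n grows, which is the "approximately convex" intuition behind the lemma:

```python
def minkowski_sum(A, B):
    # Minkowski sum: add every element of A to every element of B
    return {a + b for a in A for b in B}

assert minkowski_sum({0, 1}, {0, 1}) == {0, 1, 2}

def averaged_sum(n):
    # average of n Minkowski copies of {0, 1}: yields the points k/n, k = 0..n
    pts = {0.0}
    for _ in range(n):
        pts = {p + b / n for p in pts for b in (0, 1)}
    return sorted(pts)

# gaps between consecutive points are 1/n, so the set fills [0, 1] as n grows
assert averaged_sum(4) == [0.0, 0.25, 0.5, 0.75, 1.0]
```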
The distance between the |
https://en.wikipedia.org/wiki/Post-detection%20policy | A post-detection policy (PDP), also known as a post-detection protocol, is a set of structured rules, standards, guidelines, or actions that governmental or other organizational entities plan to follow for the "detection, analysis, verification, announcement, and response to" confirmed signals from extraterrestrial civilizations. Though no PDPs have been formally and openly adopted by any governmental entity, there is significant work being done by scientists and nongovernmental organizations to develop cohesive plans of action to utilize in the event of detection. The most popular and well known of these is the "Declaration of Principles Concerning Activities Following the Detection of Extraterrestrial Intelligence", which was developed by the International Academy of Astronautics (IAA), with the support of the International Institute of Space Law. The theories of PDPs constitute a distinct area of research but draw heavily from the fields of SETI (the Search for Extra-Terrestrial Intelligence), METI (Messaging to Extra-Terrestrial Intelligence), and CETI (Communication with Extraterrestrial Intelligence).
Scientist Zbigniew Paptrotny has argued that the formulation of post-detection protocols can be guided by three factors: terrestrial society's readiness to accept the news of ET detection, how the news of detection is released, and the comprehensibility of the message in the signal. These three broad areas and their related subsidiaries comprise the bulk of the content and discourse surrounding PDPs.
Issues
Significance of transmission
There are two proposed scales for quantifying the significance of transmissions between Earth and potential extraterrestrial intelligence (ETI). The Rio scale, ranging from 0 to 10, was proposed in 2000 as a means of quantifying the significance of a SETI detection. The scale was designed by Iván Almár and Jill Tarter to help policy-makers formulate an initial judgment on a detection's potential consequences. The scale borrows h |
https://en.wikipedia.org/wiki/Aizerman%27s%20conjecture | In nonlinear control, Aizerman's conjecture or Aizerman problem states that a linear system in feedback with a sector nonlinearity would be stable if the linear system is stable for any linear gain of the sector. This conjecture was proven false but led to (valid) sufficient criteria for absolute stability.
Mathematical statement of Aizerman's conjecture (Aizerman problem)
Consider a system with one scalar nonlinearity
dx/dt = Px + q f(e),   e = r∗x,
where P is a constant n×n matrix, q and r are constant n-dimensional vectors, ∗ is an operation of transposition, f(e) is a scalar function, and f(0) = 0. Suppose that the nonlinearity f is sector bounded, meaning that for some real k1 and k2 with k1 < k2, the function satisfies
k1 < f(e)/e < k2 for all e ≠ 0.
Then Aizerman's conjecture is that the system is stable in large (i.e. unique stationary point is global attractor) if all linear systems with f(e)=ke, k ∈(k1,k2) are asymptotically stable.
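The hypothesis "stable for every linear gain in the sector" can be checked numerically for a concrete system; the 2×2 matrices below are a hypothetical illustration, not taken from the conjecture's literature:

```python
import numpy as np

# hypothetical example: dx/dt = Px + q f(e), e = r^T x, with f(e) = k*e
P = np.array([[0.0, 1.0], [-1.0, -1.0]])
q = np.array([[0.0], [1.0]])
r = np.array([[1.0], [0.0]])

def hurwitz(M):
    # a linear system dx/dt = Mx is asymptotically stable iff
    # all eigenvalues of M have negative real part
    return bool(np.all(np.linalg.eigvals(M).real < 0))

# closed-loop matrix P + k*q*r^T is Hurwitz for every k in the sector (-0.9, 0.9)
assert all(hurwitz(P + k * q @ r.T) for k in np.linspace(-0.9, 0.9, 19))
```

For this system the closed-loop characteristic polynomial is λ² + λ + (1 − k), which is Hurwitz for every k < 1; Aizerman's conjecture would then predict global stability for any sector nonlinearity inside that range, which the known counterexamples show need not hold in general.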
There are counterexamples to Aizerman's conjecture such that nonlinearity belongs to the sector of linear stability and unique stable equilibrium coexists with a stable periodic solution, i.e. a hidden oscillation. However, under stronger assumptions on the system, such as positivity, Aizerman's conjecture is known to hold true.
Variants
Strengthening of Aizerman's conjecture is Kalman's conjecture (or Kalman problem) where in place of condition on the nonlinearity it is required that the derivative of nonlinearity belongs to linear stability sector.
A multivariate version of Aizerman's conjecture holds true over the complex field, and it can be used to derive the circle criterion for the stability of nonlinear time-varying systems. |
https://en.wikipedia.org/wiki/Communications%20satellite | A communications satellite is an artificial satellite that relays and amplifies radio telecommunication signals via a transponder; it creates a communication channel between a source transmitter and a receiver at different locations on Earth. Communications satellites are used for television, telephone, radio, internet, and military applications. Many communications satellites are in geostationary orbit above the equator, so that the satellite appears stationary at the same point in the sky; therefore the satellite dish antennas of ground stations can be aimed permanently at that spot and do not have to move to track the satellite. Others form satellite constellations in low Earth orbit, where antennas on the ground have to follow the position of the satellites and switch between satellites frequently.
The high frequency radio waves used for telecommunications links travel by line of sight and so are obstructed by the curve of the Earth. The purpose of communications satellites is to relay the signal around the curve of the Earth allowing communication between widely separated geographical points. Communications satellites use a wide range of radio and microwave frequencies. To avoid signal interference, international organizations have regulations for which frequency ranges or "bands" certain organizations are allowed to use. This allocation of bands minimizes the risk of signal interference.
History
Origins
In October 1945, Arthur C. Clarke published an article titled "Extraterrestrial Relays" in the British magazine Wireless World. The article described the fundamentals behind the deployment of artificial satellites in geostationary orbits to relay radio signals. Because of this, Arthur C. Clarke is often quoted as being the inventor of the concept of the communications satellite, and the term 'Clarke Belt' is employed as a description of the orbit.
The first artificial Earth satellite was Sputnik 1, which was put into orbit by the Soviet Union on 4 Octo |
https://en.wikipedia.org/wiki/Moore%27s%20second%20law | Rock's law or Moore's second law, named for Arthur Rock or Gordon Moore, says that the cost of a semiconductor chip fabrication plant doubles every four years. As of 2015, the price had already reached about 14 billion US dollars.
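Rock's law is a simple compounding rule. Taking the article's 2015 figure of about 14 billion US dollars as a baseline (the projection itself is purely illustrative), fab cost as a function of year is:

```python
BASE_YEAR, BASE_COST = 2015, 14e9   # ~$14 billion in 2015, per the text

def fab_cost(year):
    # Rock's law: fabrication plant cost doubles every four years
    return BASE_COST * 2 ** ((year - BASE_YEAR) / 4)

assert fab_cost(2019) == 28e9   # one doubling period later
assert fab_cost(2023) == 56e9   # two doubling periods later
```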
Rock's law can be seen as the economic flip side to Moore's (first) law – that the number of transistors in a dense integrated circuit doubles every two years. The latter is a direct consequence of the ongoing growth of the capital-intensive semiconductor industry— innovative and popular products mean more profits, meaning more capital available to invest in ever higher levels of large-scale integration, which in turn leads to the creation of even more innovative products.
The semiconductor industry has always been extremely capital-intensive, with ever-dropping manufacturing unit costs. Thus, the ultimate limits to growth of the industry will constrain the maximum amount of capital that can be invested in new products; at some point, Rock's Law will collide with Moore's Law.
It has been suggested that fabrication plant costs have not increased as quickly as predicted by Rock's law – indeed plateauing in the late 1990s – and also that the fabrication plant cost per transistor (which has shown a pronounced downward trend) may be more relevant as a constraint on Moore's Law.
See also
Semiconductor device fabrication
Fabless manufacturing
Wirth's law, an analogous law about software complicating over time
Semiconductor consolidation |
https://en.wikipedia.org/wiki/Entanglement-assisted%20classical%20capacity | In the theory of quantum communication, the entanglement-assisted classical capacity of a quantum channel is the highest rate at which classical information can be transmitted from a sender to receiver when they share an unlimited amount of noiseless entanglement. It is given by the quantum mutual information of the channel, which is the input-output quantum mutual information maximized over all pure bipartite quantum states with one system transmitted through the channel. This formula is the natural generalization of Shannon's noisy channel coding theorem, in the sense that this formula is equal to the capacity, and there is no need to regularize it. An additional feature that it shares with Shannon's formula is that a noiseless classical or quantum feedback channel cannot increase the entanglement-assisted classical capacity. The entanglement-assisted classical capacity theorem is proved in two parts: the direct coding theorem and the converse theorem. The direct coding theorem demonstrates that the quantum mutual information of the channel is an achievable rate, by a random coding strategy that is effectively a noisy version of the super-dense coding protocol. The converse theorem demonstrates that this rate is optimal by making use of the strong subadditivity of quantum entropy.
See also
Classical capacity
Quantum capacity
Typical subspace |
https://en.wikipedia.org/wiki/Spread%20Toolkit | The Spread Toolkit is a computer software package that provides a high performance group communication system that is resilient to faults across local and wide area networks. Spread functions as a unified message bus for distributed applications, and provides highly tuned application-level multicast, group communication, and point to point support. Spread services range from reliable messaging to fully ordered messages with delivery guarantees.
The toolkit consists of a messaging server, and client libraries for many software development environments, including C/C++ libraries (with and without thread support), a Java class to be used by applets or applications, and interfaces for Perl, Python, and Ruby. Interfaces for many other software environments have been provided by third parties.
In typical operation, each computer in a cluster runs its own instance of the Spread server, and client applications connect locally to that server process. The Spread servers, in turn, communicate with each other to pass messages to subscriber applications. It can also be configured so that clients distributed across the network all communicate with a Spread server process on one host.
The Spread Toolkit is developed by Spread Concepts LLC, with much support by the Distributed Systems and Networks Lab (DSN) at Johns Hopkins University, and the Experimental Networked Systems Lab at George Washington University.
Partial funding was provided by the Defense Advanced Research Projects Agency (DARPA) and The National Security Agency (NSA).
Bindings
Bindings for Spread Toolkit exist for many languages and platforms:
Ada
C
C++
C#
Haskell
Java
Lua
Microsoft Excel
OCaml
Perl
PHP
Python
Ruby
Squeak
Scheme
TCL |
https://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius%20theorem | In matrix theory, the Perron–Frobenius theorem, proved by Oskar Perron (1907) and Georg Frobenius (1912), asserts that a real square matrix with positive entries has a unique eigenvalue of largest magnitude and that eigenvalue is real. The corresponding eigenvector can be chosen to have strictly positive components, and the theorem also asserts a similar statement for certain classes of nonnegative matrices. This theorem has important applications to probability theory (ergodicity of Markov chains); to the theory of dynamical systems (subshifts of finite type); to economics (Okishio's theorem, Hawkins–Simon condition);
to demography (Leslie population age distribution model);
to social networks (DeGroot learning process); to Internet search engines (PageRank); and even to ranking of football
teams. The first to discuss the ordering of players within tournaments using Perron–Frobenius eigenvectors was Edmund Landau.
Statement
Let positive and non-negative respectively describe matrices with exclusively positive real numbers as elements and matrices with exclusively non-negative real numbers as elements. The eigenvalues of a real square matrix A are complex numbers that make up the spectrum of the matrix. The exponential growth rate of the matrix powers Ak as k → ∞ is controlled by the eigenvalue of A with the largest absolute value (modulus). The Perron–Frobenius theorem describes the properties of the leading eigenvalue and of the corresponding eigenvectors when A is a non-negative real square matrix. Early results were due to Perron (1907) and concerned positive matrices. Later, Frobenius (1912) found their extension to certain classes of non-negative matrices.
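The leading eigenvalue and its positive eigenvector can be computed by power iteration, which converges for positive matrices precisely because of the spectral gap the theorem guarantees. A minimal NumPy sketch (function name hypothetical):

```python
import numpy as np

def perron(A, iters=200):
    # power iteration: repeated multiplication by a positive matrix
    # converges to the direction of the Perron-Frobenius eigenvector
    v = np.ones(A.shape[0])
    for _ in range(iters):
        w = A @ v
        v = w / np.linalg.norm(w)
    r = v @ A @ v / (v @ v)   # Rayleigh quotient estimate of the eigenvalue
    return r, v

A = np.array([[2.0, 1.0], [1.0, 3.0]])
r, v = perron(A)
assert abs(r - (5 + 5 ** 0.5) / 2) < 1e-9   # eigenvalues are (5 ± sqrt(5))/2
assert np.all(v > 0)                        # Perron vector is strictly positive
```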
Positive matrices
Let A = (aij) be an n × n positive matrix: aij > 0 for 1 ≤ i, j ≤ n. Then the following statements hold.
There is a positive real number r, called the Perron root or the Perron–Frobenius eigenvalue (also called the leading eigenvalue or dominant eigenvalue), such that r is an eigenvalue of A and any other eigenvalue λ (possibly complex) is strictly smaller than r in absolute value, |λ| < r. Thus, |
https://en.wikipedia.org/wiki/Vaccine%20adverse%20event | A vaccine adverse event (VAE), sometimes referred to as a vaccine injury, is an adverse event caused by vaccination. The World Health Organization (WHO) knows VAEs as Adverse Events Following Immunization (AEFI).
AEFIs can be related to the vaccine itself (product or quality defects), to the vaccination process (administration error or stress related reactions) or can occur independently from vaccination (coincidental).
Most vaccine adverse events are mild. Serious injuries and deaths caused by vaccines are very rare, and the idea that severe events are common has been classed as a "common misconception about immunization" by the WHO. Some claimed vaccine injuries are not, in fact, caused by vaccines; for example, there is a subculture of advocates who attribute their children's autism to vaccine injury, despite the fact that vaccines do not cause autism.
Claims of vaccine injuries appeared in litigation in the United States in the latter part of the 20th century. Some families have won substantial awards from sympathetic juries, even though many public health officials have said that the claims of injuries are unfounded. In response, several vaccine makers stopped production, threatening public health, resulting in laws being passed to shield makers from liabilities stemming from vaccine injury claims.
Adverse events
According to the U.S. Centers for Disease Control and Prevention, while "any vaccine can cause side effects", most side effects are minor, primarily including sore arms or a mild fever. Unlike most medical interventions, vaccines are given to healthy people, where the risk of side effects is not as easily outweighed by the benefit of treating existing disease. As such, the safety of immunization interventions is taken very seriously by the scientific community, with constant monitoring of a number of data sources looking for patterns of adverse events.
As the success of immunization programs increases and the incidence of disease decreases, publi |
https://en.wikipedia.org/wiki/Disodium%20citrate | Disodium citrate, also known as disodium hydrogen citrate, Alkacitron, and sesquihydrate, is an acid salt of citric acid with the chemical formula Na2C6H6O7.
Uses
Food
It is used as an antioxidant in food and to improve the effects of other antioxidants. It is also used as an acidity regulator and sequestrant. Typical products include gelatin, jam, sweets, ice cream, carbonated beverages, milk powder, wine, and processed cheeses. Disodium citrate can also be used as a thickening agent or stabilizer.
Manufacturing
Disodium citrate can also be used as an ingredient in household products that remove stains.
Health
Disodium citrate may be used in patients to alleviate discomfort from urinary-tract infections. |
https://en.wikipedia.org/wiki/Human%20climate%20niche | The human climate niche is the ensemble of climate conditions that have sustained human life and human activities, such as agriculture, around the globe for the last millennia. The human climate niche is estimated by calculating human population density as a function of mean annual temperature. The resulting distribution is bimodal, with one mode at about 15 °C and another at about 20 to 25 °C. The crops and livestock required to sustain the human population are largely limited to similar niche conditions. Given the rise in mean global temperatures, part of the human population is projected to experience climate conditions beyond the human climate niche. Some projections show that, considering temperature and demographic changes, 2.0 and 3.7 billion people will live outside the niche by 2030 and 2090, respectively. |
https://en.wikipedia.org/wiki/Kazuma%20Kiryu | is a fictional character and the protagonist of Sega's action-adventure beat 'em up Japanese role-playing game series Yakuza / Like a Dragon. He is popularly known as due to the tattoo of a dragon on his back and him originally being a fearsome member of the yakuza group known as the Dojima Family, a subsidiary of the Tojo Clan. He was introduced in the series' 2005 debut game, where he took the blame for his boss's death to protect his best friend Akira Nishikiyama, resulting in his expulsion from the clan and a ten-year stay in prison. After leaving prison, he fights against the new threats in his life, during which he meets Haruka Sawamura, to whom he eventually becomes an adoptive father. He is voiced by Takaya Kuroda in Japanese, by Darryl Kurylo in the English versions of the first game and Yakuza: Like a Dragon, and by Yong Yea in the English releases beginning with Like a Dragon Gaiden. Besides the two Like a Dragon live-action films, the character has also appeared in other video games including Project X Zone 2 and as downloadable content in the spin-off game, Fist of the North Star: Lost Paradise.
Sega producer Toshihiro Nagoshi created Kiryu alongside Hase Seishū, a novelist who helped write the plotlines of the series' first two games. Nagoshi stated he was created to appeal to a broad audience. Kiryu was the sole playable character for the first three games; in the next three, additional playable characters are included, causing him to take on a smaller role. He was once again the sole playable character in the series' seventh main installment, Yakuza 6: The Song of Life. The eighth installment, Yakuza: Like a Dragon, stars a new protagonist, though Kiryu still makes an appearance.
Critical reception to Kiryu has been generally positive. Several critics have praised how Kiryu, despite his history as a yakuza, stood out because of his kindness and character development across the series' story. Kiryu is also often considered a PlayStation mascot. On |
https://en.wikipedia.org/wiki/Lucas%E2%80%93Carmichael%20number | In mathematics, a Lucas–Carmichael number is a positive composite integer n such that
if p is a prime factor of n, then p + 1 is a factor of n + 1;
n is odd and square-free.
The first condition resembles Korselt's criterion for Carmichael numbers, where −1 is replaced with +1. The second condition eliminates from consideration some trivial cases like cubes of prime numbers, such as 8 or 27, which otherwise would be Lucas–Carmichael numbers (since n^3 + 1 = (n + 1)(n^2 − n + 1) is always divisible by n + 1).
They are named after Édouard Lucas and Robert Carmichael.
Properties
The smallest Lucas–Carmichael number is 399 = 3 × 7 × 19. It is easy to verify that 3+1, 7+1, and 19+1 are all factors of 399+1 = 400.
The smallest Lucas–Carmichael number with 4 factors is 8855 = 5 × 7 × 11 × 23.
The smallest Lucas–Carmichael number with 5 factors is 588455 = 5 × 7 × 17 × 23 × 43.
It is not known whether any Lucas–Carmichael number is also a Carmichael number.
Thomas Wright proved in 2016 that there are infinitely many Lucas–Carmichael numbers. If we let L(X) denote the number of Lucas–Carmichael numbers up to X, Wright showed that there exists a positive constant K such that
L(X) > X^(K / (log log log X)).
List of Lucas–Carmichael numbers
The first few Lucas–Carmichael numbers and their prime factors are listed below. |
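The defining conditions are cheap to test directly. A small sketch (trial-division factorization, adequate for small n) that recovers the smallest examples quoted above:

```python
def is_lucas_carmichael(n):
    # odd, composite, square-free, and p + 1 divides n + 1 for every prime p | n
    if n < 3 or n % 2 == 0:
        return False
    m, primes = n, []
    d = 3
    while d * d <= m:
        if m % d == 0:
            m //= d
            if m % d == 0:        # repeated prime factor: not square-free
                return False
            primes.append(d)
        d += 2
    if m > 1:
        primes.append(m)
    if len(primes) < 2:           # a prime itself is not composite
        return False
    return all((n + 1) % (p + 1) == 0 for p in primes)

assert [n for n in range(1000) if is_lucas_carmichael(n)] == [399, 935]
assert is_lucas_carmichael(8855)   # 5 * 7 * 11 * 23
```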
https://en.wikipedia.org/wiki/Process%20theory%20of%20composition | The process theory of composition (hereafter referred to as "process") is a field of composition studies that focuses on writing as a process rather than a product. Based on Janet Emig's breakdown of the writing process, the process is centered on the idea that students determine the content of the course by exploring the craft of writing using their own interests, language, techniques, voice, and freedom, and where students learn what people respond to and what they don't. Classroom activities often include peer work where students themselves are teaching, reviewing, brainstorming, and editing.
History
The ideas behind process were born out of increased college enrollment thanks to the GI Bill following World War II. Writing instructors began giving students more group work and found that, with guidance, students were able to identify and recognize areas that needed improvement in other students' papers, and that criticism also helped students recognize their own areas to strengthen. Composition scholars such as Janet Emig, Peter Elbow, and Donald Murray began considering how these methods could be used in the writing classroom. Emig, in her book The Composing Processes of Twelfth Graders, showed the complexity of the writing process; this process was later simplified into a basic three-step process by Murray: prewriting, writing, and rewriting (also called "revision"). In her 1975 study, Sondra Perl demonstrated that college writers who had been labeled as "unskilled" or "basic" writers nonetheless engaged in sophisticated and recursive writing processes. However, those processes sometimes became unproductive when the writers became too worried about editing errors. Perl later developed her theory of embodied writing process, grounded in a "felt sense" that writers can learn to access to guide their writing process.
Process theory had many philosophies behind it following its creation. From the 1970s to the early 1990s, scholars such as Richard Fulkerson a |
https://en.wikipedia.org/wiki/Day%20and%20Night%20%28cellular%20automaton%29 | Day and Night is a cellular automaton rule in the same family as Game of Life. It is defined by rule notation B3678/S34678, meaning that a dead cell becomes live (is born) if it has 3, 6, 7, or 8 live neighbors, and a live cell remains alive (survives) if it has 3, 4, 6, 7, or 8 live neighbors, out of the eight neighbors in the Moore neighborhood. It was invented and named by Nathan Thompson in 1997, and investigated extensively by David I. Bell. The rule is given the name "Day & Night" because its on and off states are symmetric: if all the cells in the Universe are inverted, the future states are the inversions of the future states of the original pattern. A pattern in which the entire universe consists of off cells except for finitely many on cells can equivalently be represented by a pattern in which the whole universe is covered in on cells except for finitely many off cells in congruent locations.
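The rule table and its on/off symmetry can be checked directly. A minimal NumPy sketch using a toroidal (wrap-around) grid, which is an assumption of this sketch rather than part of the rule:

```python
import numpy as np

BIRTH, SURVIVE = {3, 6, 7, 8}, {3, 4, 6, 7, 8}   # B3678/S34678

def step(grid):
    # count live Moore neighbors with periodic (toroidal) boundaries
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    born = (grid == 0) & np.isin(n, sorted(BIRTH))
    kept = (grid == 1) & np.isin(n, sorted(SURVIVE))
    return (born | kept).astype(np.uint8)

# on/off symmetry: evolving the inverted grid equals inverting the evolved grid
rng = np.random.default_rng(0)
g = rng.integers(0, 2, size=(32, 32), dtype=np.uint8)
assert np.array_equal(step(1 - g), 1 - step(g))
```

The symmetry holds because a dead cell with k live neighbors is born exactly when, in the inverted grid, the corresponding live cell with 8 − k live neighbors survives: 8 − k ∈ {3, 4, 6, 7, 8} precisely when k ∉ {3, 6, 7, 8}.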
Although the detailed evolution of this cellular automaton is very different from Conway's Game of Life, it exhibits complex behavior similar to that rule: there are many known small oscillators and spaceships, and guns formed by combining oscillators in such a way that they periodically emit spaceships of various types. |
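The transition rule and the on/off symmetry described above can be checked directly. Below is a minimal Python sketch (my own illustration, not from any cited implementation), which assumes a toroidal grid so that every cell has exactly eight Moore neighbors; it applies B3678/S34678 for one generation:

```python
def day_and_night_step(grid):
    """Advance a 0/1 grid one generation under B3678/S34678.

    The grid is treated as a torus (edges wrap around), so every cell
    has exactly eight Moore neighbors.
    """
    h, w = len(grid), len(grid[0])
    born, survive = {3, 6, 7, 8}, {3, 4, 6, 7, 8}
    nxt = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neighbors = sum(
                grid[(y + dy) % h][(x + dx) % w]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)
            )
            rule = survive if grid[y][x] else born
            nxt[y][x] = 1 if neighbors in rule else 0
    return nxt
```

Because the rule is self-complementary, inverting every cell of a pattern and then stepping gives the same result as stepping first and then inverting, which is exactly the symmetry that motivates the name.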
https://en.wikipedia.org/wiki/Cyclic%20sieving | In combinatorial mathematics, cyclic sieving is a phenomenon by which evaluating a generating function for a finite set at roots of unity counts symmetry classes of objects acted on by a cyclic group.
Definition
Let C be a cyclic group generated by an element c of order n. Suppose C acts on a set X. Let X(q) be a polynomial with integer coefficients. Then the triple (X, X(q), C) is said to exhibit the cyclic sieving phenomenon (CSP) if for all integers d, the value X(e^(2πid/n)) is the number of elements fixed by c^d. In particular X(1) is the cardinality of the set X, and for that reason X(q) is regarded as a generating function for X.
Examples
The q-binomial coefficient [n choose k]_q
is the polynomial in q defined by
[n choose k]_q = [n]_q! / ([k]_q! [n − k]_q!), where [m]_q! = (1)(1 + q)(1 + q + q^2) ⋯ (1 + q + ⋯ + q^(m−1)).
It is easily seen that its value at q = 1 is the usual binomial coefficient (n choose k), so it is a generating function for the subsets of {1, 2, ..., n} of size k. These subsets carry a natural action of the cyclic group C of order n which acts by adding 1 to each element of the set, modulo n. For example, when n = 4 and k = 2, the group orbits are
{{1, 3}, {2, 4}} (of size 2)
and
{{1, 2}, {2, 3}, {3, 4}, {1, 4}} (of size 4).
One can show that evaluating the q-binomial coefficient when q is an nth root of unity gives the number of subsets fixed by the corresponding group element.
In the example n = 4 and k = 2, the q-binomial coefficient is
[4 choose 2]_q = 1 + q + 2q^2 + q^3 + q^4;
evaluating this polynomial at q = 1 gives 6 (as all six subsets are fixed by the identity element of the group); evaluating it at q = −1 gives 2 (the subsets {1, 3} and {2, 4} are fixed by two applications of the group generator); and evaluating it at q = ±i gives 0 (no subsets are fixed by one or three applications of the group generator).
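The n = 4, k = 2 case can be verified numerically. The sketch below (my own illustration, not from the Reiner–Stanton–White paper) evaluates the polynomial [4 choose 2]_q = 1 + q + 2q^2 + q^3 + q^4 at the fourth roots of unity and compares each value with a brute-force count of the 2-element subsets fixed by the corresponding rotation:

```python
import cmath
from itertools import combinations

def qbinom(q):
    # [4 choose 2]_q = 1 + q + 2q^2 + q^3 + q^4
    return 1 + q + 2 * q**2 + q**3 + q**4

def fixed_count(d, n=4, k=2):
    # Number of k-subsets of {0, ..., n-1} fixed by d applications of
    # the generator, which adds 1 to every element modulo n.
    return sum(
        1 for s in combinations(range(n), k)
        if {(x + d) % n for x in s} == set(s)
    )

for d in range(4):
    root = cmath.exp(2j * cmath.pi * d / 4)   # e^(2*pi*i*d/4)
    assert abs(qbinom(root) - fixed_count(d)) < 1e-9
```

The four evaluations come out to 6, 0, 2, 0 for d = 0, 1, 2, 3, matching the fixed-subset counts in the example above.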
List of cyclic sieving phenomena
In the Reiner–Stanton–White paper, the following example is given:
Let α be a composition of n, and let W(α) be the set of all words of length n with α_i letters equal to i. A descent of a word w is any index j such that w_j > w_(j+1). Define the major index on words as the sum of all descents.
The triple e |
https://en.wikipedia.org/wiki/Applied%20Physics%20Express | Applied Physics Express or APEX is a scientific journal publishing letters, with usually no more than three pages per (concise) article. The main purpose is to rapidly publish original, timely, and novel research papers in applied physics. As part of its aim, the journal intends for papers to be novel research that has a strong impact on relevant fields and society; it treats satisfaction of this criterion as grounds for priority handling in the review and publication processes. In keeping with this aim, its issues are published online on a weekly basis. The print version is published monthly.
The journal is the successor of Japanese Journal of Applied Physics Part 2, i.e., the letter and express letter sections.
Indexing and abstracting
APEX is abstracted and indexed in Journal Citation Reports, SCImago Journal Rank, SCOPUS, Science Citation Index, and Current Contents.
See also
Optical Review
Japan Society of Applied Physics
Physical Society of Japan
Optical Society of Japan |
https://en.wikipedia.org/wiki/Reliable%20Event%20Logging%20Protocol | Reliable Event Logging Protocol (RELP), a networking protocol for computer data logging in computer networks, extends the functionality of the syslog protocol to provide reliable delivery of event messages. It is most often used in environments which do not tolerate message loss, such as the financial industry.
Overview
RELP uses TCP for message transmission. This provides basic protection against message loss, but does not guarantee delivery under all circumstances. When a connection aborts, TCP cannot reliably detect whether the last messages sent have actually reached their destination.
Unlike the syslog protocol, RELP works with a backchannel which conveys information back to the sender about messages processed by the receiver. This enables RELP to always know which messages have been properly received, even in the case of a connection abort.
History
RELP was developed in 2008 as a reliable protocol for rsyslog-to-rsyslog communication. As RELP designer Rainer Gerhards explains, the lack of reliable transmission in industry-standard syslog was a core motivation to create RELP. Originally, RFC 3195 syslog was considered to take up this part in rsyslog, but it suffered from high overhead and missing support for new IETF syslog standards (which have since been published as RFC 5424, but were not named at that time).
While RELP was initially meant solely for rsyslog use, it has since been adopted more widely. Tools on both Linux and Windows now support RELP, and there are also in-house Java deployments. While RELP is still not formally standardized, it has evolved into an industry standard for computer logging.
Technical details
RELP is inspired by RFC 3195 syslog and RFC 3080. During initial connection, sender and receiver negotiate session options, like supported command set or application level window size. Network event messages are transferred as commands, where the receiver acknowledges each command as soon as it has processed it. Sessions may be closed b |
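As a concrete illustration of the command-based transfer, the sketch below builds a RELP frame in what I understand to be the specification's layout (transaction number, command, data length, optional data, line-feed trailer); treat the exact field layout as an assumption to check against the RELP specification rather than as a normative implementation:

```python
def relp_frame(txnr: int, command: str, data: bytes = b"") -> bytes:
    # Assumed frame layout: TXNR SP COMMAND SP DATALEN [SP DATA] LF.
    # txnr is the transaction number the receiver echoes back in its
    # "rsp" acknowledgement; matching acknowledgements to transaction
    # numbers is how the sender learns which messages were processed.
    header = f"{txnr} {command} {len(data)}".encode("ascii")
    return header + ((b" " + data) if data else b"") + b"\n"
```

A sender would hold each transmitted message in its application-level window until the matching "rsp" acknowledgement arrives, retransmitting unacknowledged messages after a connection abort.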
https://en.wikipedia.org/wiki/Semantic%20URL%20attack | In a semantic URL attack, a client manually adjusts the parameters of its request by maintaining the URL's syntax but altering its semantic meaning. This attack is primarily used against CGI driven websites.
A similar attack involving web browser cookies is commonly referred to as cookie poisoning.
Example
Consider a web-based e-mail application where users can reset their password by answering a security question correctly, after which the new password is sent
to an e-mail address of their choosing. After they answer the security question correctly, the user arrives at the following web form, where an alternative e-mail address can be entered:
<form action="resetpassword.php" method="GET">
<input type="hidden" name="username" value="user001" />
<p>Please enter your alternative e-mail address:</p>
<input type="text" name="altemail" /><br />
<input type="submit" value="Submit" />
</form>
The receiving page, resetpassword.php, has all the information it needs to send the password to the new e-mail. The hidden variable username contains the value user001, which is the username of the e-mail account.
Because this web form uses the GET method, when the user submits alternative@emailexample.com as the address to which the password should be sent,
the user then arrives at the following URL:
http://semanticurlattackexample.com/resetpassword.php?username=user001&altemail=alternative%40emailexample.com
This URL appears in the location bar of the browser, so the user can identify the username and the e-mail address through the URL parameters. The user may decide to steal other people's (user002) e-mail address by visiting the following URL as an experiment:
http://semanticurlattackexample.com/resetpassword.php?username=user002&altemail=alternative%40emailexample.com
If the resetpassword.php accepts these values, it is vulnerable to a semantic URL attack. The new password of the user002 e-mail address will be ge |
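The tampering step itself can be expressed in a few lines. This Python sketch (illustrative only; the helper name is my own) rewrites just the username parameter of the reset URL while leaving the URL's syntax untouched, which is exactly what makes the attack easy to mount from the location bar:

```python
from urllib.parse import parse_qs, urlencode, urlsplit, urlunsplit

def tamper(url: str, **overrides: str) -> str:
    # Replace selected query-parameter values while keeping the URL's
    # structure (scheme, host, path, parameter names) intact.
    parts = urlsplit(url)
    params = {k: v[0] for k, v in parse_qs(parts.query).items()}
    params.update(overrides)
    return urlunsplit(parts._replace(query=urlencode(params)))

original = ("http://semanticurlattackexample.com/resetpassword.php"
            "?username=user001&altemail=alternative%40emailexample.com")
attack = tamper(original, username="user002")
```

The corresponding defense is for resetpassword.php to derive the username from the authenticated session rather than trusting a client-supplied URL parameter.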
https://en.wikipedia.org/wiki/Koji%20orange | Koji orange (Citrus leiocarpa), also called smooth-fruited orange in English, bingyul in Korean, 光橘 (guang ju), 柑子 (gan zi), and 日本土柑 (ri ben tu gan) in Chinese, and コウジ (kōji) in Japanese, is a Citrus species native to Japan. The specific epithet (leiocarpa) comes from Greek leios (= smooth) and karpon (= fruit). It is a taxonomical synonym of Citrus aurantium.
Distribution
Besides Japan, it is grown in the United States, and other parts of East Asia including South Korea and China.
Description
The fruit is oblate in shape, slightly ribbed, bright orange in color, very small, and very seedy, and for the latter two reasons it is not grown for commercial use. It ripens from October through November and has been cultivated since at least 1900. It may be monoembryonic. The tree is densely branched and has a broad crown and a short, straight trunk. The leaves are dark green and elliptical in shape.
Genetics
Citrus leiocarpa is inferred to be a hybrid between a koji-type species (seed parent) and the tachibana orange (pollen parent, Citrus tachibana). Its genotype matches that of the komikan and toukan varieties.
Varieties
Citrus leiocarpa f. monoembryota, a form of Citrus leiocarpa, was described by Chozaburo Tanaka. Once believed to be a mutation of the koji orange, it has been revealed that it is a hybrid between koji (pollen parent) and kishu (seed parent). In Chinese, it is called 駿河柑子 (jun he gan zi) and is called スルガユコウ (suruga yukō) and 駿河柚柑 (suruga yuzukan) in Japanese.
See also
Japanese citrus
List of citrus fruits |
https://en.wikipedia.org/wiki/Operation%20Cyber%20Condition%20Zebra | Operation Cyber Condition Zebra is a network operations campaign conducted by the United States Navy to deny network intrusion and establish an adequate computer network defense posture to provide defense-in-depth and warfighting capability. The operation specifies that perimeter security for legacy networks will deny intrusions and data infiltration, that firewalls will be maintained through risk assessment and formal adjudication of legacy application waiver requests, and that legacy networks will be shut down as quickly as possible after enterprise networks (such as the NMCI) are established.
Its name is an analogue of the term "material condition Zebra," which is a standard configuration of equipment systems set on a warship to provide the greatest degree of subdivision and tightness to the ship. It is set immediately and automatically when general quarters is sounded. |
https://en.wikipedia.org/wiki/Commercial%20Processing%20Workload | The Commercial Processing Workload (CPW) is a simplified variant of the industry-wide TPC-C benchmarking standard originally developed by IBM to compare the performance of their various AS/400 (now IBM i) server offerings.
The related, but less commonly used Computational Intensive Workload (CIW) measures performance in a situation where there is a high ratio of computation to input/output communication. The reverse situation is simulated by the CPW. |
https://en.wikipedia.org/wiki/Rep-tile | In the geometry of tessellations, a rep-tile or reptile is a shape that can be dissected into smaller copies of the same shape. The term was coined as a pun on animal reptiles by recreational mathematician Solomon W. Golomb and popularized by Martin Gardner in his "Mathematical Games" column in the May 1963 issue of Scientific American. In 2012 a generalization of rep-tiles called self-tiling tile sets was introduced by Lee Sallows in Mathematics Magazine.
Terminology
A rep-tile is labelled rep-n if the dissection uses n copies. Such a shape necessarily forms the prototile for a tiling of the plane, in many cases an aperiodic tiling.
A rep-tile dissection using different sizes of the original shape is called an irregular rep-tile or irreptile. If the dissection uses n copies, the shape is said to be irrep-n. If all these sub-tiles are of different sizes then the tiling is additionally described as perfect. A shape that is rep-n or irrep-n is trivially also irrep-(kn − k + n) for any k > 1, by replacing the smallest tile in the rep-n dissection by n even smaller tiles. The order of a shape, whether using rep-tiles or irrep-tiles is the smallest possible number of tiles which will suffice.
Examples
Every square, rectangle, parallelogram, rhombus, or triangle is rep-4. The sphinx hexiamond (illustrated above) is rep-4 and rep-9, and is one of few known self-replicating pentagons. The Gosper island is rep-7. The Koch snowflake is irrep-7: six small snowflakes of the same size, together with another snowflake with three times the area of the smaller ones, can combine to form a single larger snowflake.
A right triangle with side lengths in the ratio 1:2 is rep-5, and its rep-5 dissection forms the basis of the aperiodic pinwheel tiling. By Pythagoras' theorem, the hypotenuse, or sloping side of the rep-5 triangle, has a length of √5.
The international standard ISO 216 defines sizes of paper sheets using the ratio √2, in which the long side of a rectangular sheet of paper is |
https://en.wikipedia.org/wiki/Pineal%20gland | The pineal gland (also known as the pineal body, conarium, or epiphysis cerebri) is a small endocrine gland in the brain of most vertebrates. The pineal gland produces melatonin, a serotonin-derived hormone which modulates sleep patterns in both circadian and seasonal cycles. The shape of the gland resembles a pine cone, which gives it its name. The pineal gland is located in the epithalamus, near the center of the brain, between the two hemispheres, tucked in a groove where the two halves of the thalamus join. It is one of the neuroendocrine secretory circumventricular organs in which capillaries are mostly permeable to solutes in the blood.
The pineal gland is present in almost all vertebrates, but is absent in protochordates in which there is a simple pineal homologue. The hagfish, considered as a primitive vertebrate, has a rudimentary structure regarded as the "pineal equivalent" in the dorsal diencephalon. In some species of amphibians and reptiles, the gland is linked to a light-sensing organ, variously called the parietal eye, the pineal eye or the third eye. Reconstruction of the biological evolution pattern suggests that the pineal gland was originally a kind of atrophied photoreceptor that developed into a neuroendocrine organ.
Ancient Greeks were the first to notice the pineal gland and believed it to be a valve, a guardian for the flow of pneuma. Galen in the 2nd century C.E. could not find any functional role and regarded the gland as a structural support for the brain tissue. He gave the name konario, meaning cone or pinecone, which during Renaissance was translated to Latin as pinealis. In the 17th century, René Descartes revived the mystical purpose and described the gland as the "principal seat of the soul". In the mid-20th century, the real biological role as a neuroendocrine organ was established.
Etymology
The word pineal, from Latin pinea (pine-cone), was first used in the late 17th century to refer to the cone shape of the brain gland.
Str |
https://en.wikipedia.org/wiki/Reference%20designator | A reference designator unambiguously identifies the location of a component within an electrical schematic or on a printed circuit board. The reference designator usually consists of one or two letters followed by a number, e.g. R13, C1002. The number is sometimes followed by a letter, indicating that components are grouped or matched with each other, e.g. R17A, R17B.
IEEE 315 contains a list of Class Designation Letters to use for electrical and electronic assemblies. For example, the letter R is a reference prefix for the resistors of an assembly, C for capacitors, K for relays.
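The letters-then-number convention makes designators easy to parse mechanically. A small sketch follows; the regular expression is my own, based only on the pattern described above (one or two class letters, a sequence number, an optional grouping suffix), and real netlists may contain variants it does not cover:

```python
import re

# One or two class letters, a sequence number, and an optional
# grouping/matching suffix letter, e.g. R13, C1002, R17A.
_DESIGNATOR = re.compile(r"^([A-Z]{1,2})(\d+)([A-Z]?)$")

def parse_designator(ref: str):
    """Split a designator like 'R17A' into (class letters, number, suffix)."""
    m = _DESIGNATOR.match(ref)
    if m is None:
        raise ValueError(f"not a reference designator: {ref!r}")
    cls, number, suffix = m.groups()
    return cls, int(number), suffix or None
```

For example, parse_designator("R17A") yields the class prefix "R", the sequence number 17, and the matching-group suffix "A".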
History
IEEE 200-1975 or "Standard Reference Designations for Electrical and Electronics Parts and Equipments" is a standard that was used to define referencing naming systems for collections of electronic equipment. IEEE 200 was ratified in 1975. The IEEE renewed the standard in the 1990s, but withdrew it from active support shortly thereafter. This document also has an ANSI document number, ANSI Y32.16-1975.
This standard codified information from, among other sources, a United States military standard MIL-STD-16 which dates back to at least the 1950s in American industry.
To replace IEEE 200–1975, ASME, a standards body for mechanical engineers, initiated the new standard ASME Y14.44-2008. This standard, along with IEEE 315–1975, provides the electrical designer with guidance on how to properly reference and annotate everything from a single circuit board to a collection of complete enclosures.
Definition
ASME Y14.44-2008 and IEEE 315-1975 define how to reference and annotate components of electronic devices.
It breaks down a system into units, and then any number of sub-assemblies. The unit is the highest level of demarcation in a system and is always a numeral. Subsequent demarcation are called assemblies and always have the Class Letter "A" as a prefix following by a sequential number starting with 1. Any number of sub-assemblies may be defined until finally reaching the co |
https://en.wikipedia.org/wiki/Perlin%20noise | Perlin noise is a type of gradient noise developed by Ken Perlin in 1983. It has many uses, including but not limited to: procedurally generating terrain, applying pseudo-random changes to a variable, and assisting in the creation of image textures. It is most commonly implemented in two, three, or four dimensions, but can be defined for any number of dimensions.
History
Ken Perlin developed Perlin noise in 1983 as a result of his frustration with the "machine-like" look of computer-generated imagery (CGI) at the time. He formally described his findings in a SIGGRAPH paper in 1985 called "An Image Synthesizer". He developed it after working on Disney's computer animated sci-fi motion picture Tron (1982) for the animation company Mathematical Applications Group (MAGI). In 1997, Perlin was awarded an Academy Award for Technical Achievement for creating the algorithm, the citation for which read:
Perlin did not apply for any patents on the algorithm, but in 2001 he was granted a patent for the use of 3D+ implementations of simplex noise for texture synthesis. Simplex noise has the same purpose, but uses a simpler space-filling grid. Simplex noise alleviates some of the problems with Perlin's "classic noise", among them computational complexity and visually-significant directional artifacts.
Uses
Perlin noise is a procedural texture primitive, a type of gradient noise used by visual effects artists to increase the appearance of realism in computer graphics. The function has a pseudo-random appearance, yet all of its visual details are the same size. This property allows it to be readily controllable; multiple scaled copies of Perlin noise can be inserted into mathematical expressions to create a great variety of procedural textures. Synthetic textures using Perlin noise are often used in CGI to make computer-generated visual elements, such as object surfaces, fire, smoke, or clouds, appear more natural, by imitating the controlled random appearance of textures in natu |
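The core idea of gradient noise can be sketched in one dimension. The following is a simplified illustration, not Perlin's published implementation: pseudo-random gradients are fixed at integer lattice points, and their contributions are blended with a smooth fade curve, which is what gives the function its even, same-size detail:

```python
import math
import random

def gradient_noise_1d(x: float, seed: int = 0) -> float:
    # Deterministic pseudo-random gradient at an integer lattice point.
    def grad(i: int) -> float:
        rng = random.Random(i * 2654435761 + seed)
        return rng.uniform(-1.0, 1.0)

    i0 = math.floor(x)
    t = x - i0                                   # position within the cell, in [0, 1)
    fade = t * t * t * (t * (t * 6 - 15) + 10)   # smooth quintic fade curve
    d0 = grad(i0) * t                            # contribution of the left gradient
    d1 = grad(i0 + 1) * (t - 1.0)                # contribution of the right gradient
    return d0 + fade * (d1 - d0)                 # interpolate between them
```

Summing several octaves of such a function at doubled frequencies and halved amplitudes produces the fractal look commonly associated with Perlin noise. Note that the value is exactly zero at every lattice point, since both distance factors vanish there.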
https://en.wikipedia.org/wiki/V%28D%29J%20recombination | V(D)J recombination is the mechanism of somatic recombination that occurs only in developing lymphocytes during the early stages of T and B cell maturation. It results in the highly diverse repertoire of antibodies/immunoglobulins and T cell receptors (TCRs) found in B cells and T cells, respectively. The process is a defining feature of the adaptive immune system.
V(D)J recombination in mammals occurs in the primary lymphoid organs (bone marrow for B cells and thymus for T cells) and in a nearly random fashion rearranges variable (V), joining (J), and in some cases, diversity (D) gene segments. The process ultimately results in novel amino acid sequences in the antigen-binding regions of immunoglobulins and TCRs that allow for the recognition of antigens from nearly all pathogens including bacteria, viruses, parasites, and worms as well as "altered self cells" as seen in cancer. The recognition can also be allergic in nature (e.g. to pollen or other allergens) or may match host tissues and lead to autoimmunity.
In 1987, Susumu Tonegawa was awarded the Nobel Prize in Physiology or Medicine "for his discovery of the genetic principle for generation of antibody diversity".
Background
Human antibody molecules (including B cell receptors) are composed of heavy and light chains, each of which contains both constant (C) and variable (V) regions, genetically encoded on three loci:
The immunoglobulin heavy locus (IGH@) on chromosome 14, containing the gene segments for the immunoglobulin heavy chain.
The immunoglobulin kappa (κ) locus (IGK@) on chromosome 2, containing the gene segments for one type (κ) of immunoglobulin light chain.
The immunoglobulin lambda (λ) locus (IGL@) on chromosome 22, containing the gene segments for another type (λ) of immunoglobulin light chain.
Each heavy chain or light chain gene contains multiple copies of three different types of gene segments for the variable regions of the antibody proteins. For example, the human immunoglobulin heavy |
https://en.wikipedia.org/wiki/LINC | The LINC (Laboratory INstrument Computer) is a 12-bit, 2048-word transistorized computer. The LINC is considered by some the first minicomputer and a forerunner to the personal computer. Originally named the "Linc", suggesting the project's origins at MIT's Lincoln Laboratory, it was renamed LINC after the project moved from the Lincoln Laboratory. The LINC was designed by Wesley A. Clark and Charles Molnar.
The LINC and other "MIT Group" machines were designed at MIT and eventually built by Digital Equipment Corporation (DEC) and Spear Inc. of Waltham, Massachusetts (later a division of Becton, Dickinson and Company). The LINC sold for more than $40,000 at the time. A typical configuration included an enclosed 6-foot-by-20-inch rack; four boxes holding (1) two tape drives, (2) display scope and input knobs, (3) control console and (4) data terminal interface; and a keyboard.
The LINC interfaced well with laboratory experiments. Analog inputs and outputs were part of the basic design. It was designed in 1962 by Charles Molnar and Wesley Clark at Lincoln Laboratory, Massachusetts, for NIH researchers. The LINC's design was literally in the public domain, perhaps making it unique in the history of computers. A dozen LINC computers were assembled by their eventual biomedical researcher owners in a 1963 summer workshop at MIT. Digital Equipment Corporation (starting in 1964) and, later, Spear Inc. of Waltham, MA. manufactured them commercially.
DEC's pioneer C. Gordon Bell states that the LINC project began in 1961, with first delivery in March 1962, and the machine was not formally withdrawn until December 1969. A total of 50 were built (all using DEC System Module Blocks and cabinets), most at Lincoln Labs, housing the desktop instruments in four wooden racks. The first LINC included two oscilloscope displays. Twenty-one were sold by DEC at $43,600 (), delivered in the Production Model design. In these, the tall cabinet sitting behind a white Formica-covered table held two so |
https://en.wikipedia.org/wiki/Business%20process%20network | Business process networks (BPN), also referred to as business service networks or business process hubs, enable the efficient execution of multi-enterprise operational processes, including supply chain planning and execution. A BPN extends and implements an organization's Service-orientation in Enterprise Applications.
To execute such processes, BPNs combine integration services with application services, often to support a particular industry or process, such as order management, logistics management, or automated shipping and receiving.
Purpose
Most organizations derive their primary value (e.g., revenue) and attain their goals outside of the 'four walls' of the enterprise, by selling to consumers (B2C) or to other businesses (B2B). Thus, businesses seek to efficiently manage processes that span multiple organizations. One such process is supply chain management; BPNs are gaining in popularity partly because of the changing nature of supply chains, which have become global. Trends such as global sourcing and offshoring to Asia, India and other low-cost production regions of the world continue to add complexity to effective trading partner management and supply chain visibility.
Complications
The transition to global sourcing has been challenging for some companies. Few companies have the requisite strategies, infrastructure and extended process control to effectively make the transition to global sourcing. The majority of supply managers continue to use a mix of e-mail, phone and fax to collaborate with offshore suppliers, none of which are standardized or easily integrated to enable informed business decisions and actions. Further, distant trading partners introduce new standards, new systems, multiple time zones, new processes and different levels of technological maturity into the supply chain. BPNs help reduce this complexity by providing a common framework for information exchange, visibility and collaboration.
BPNs are also increasi |
https://en.wikipedia.org/wiki/Astrophysical%20plasma | Astrophysical plasma is plasma outside of the Solar System. It is studied as part of astrophysics and is commonly observed in space. The accepted view of scientists is that much of the baryonic matter in the universe exists in this state.
When matter becomes sufficiently hot and energetic, it becomes ionized and forms a plasma. This process breaks matter into its constituent particles which includes negatively charged electrons and positively charged ions. These electrically charged particles are susceptible to influences by local electromagnetic fields. This includes strong fields generated by stars, and weak fields which exist in star forming regions, in interstellar space, and in intergalactic space. Similarly, electric fields are observed in some stellar astrophysical phenomena, but they are inconsequential in very low-density gaseous media.
Astrophysical plasma is often differentiated from space plasma, which typically refers to the plasma of the Sun, the solar wind, and the ionospheres and magnetospheres of the Earth and other planets.
Observing and studying astrophysical plasma
Plasmas in stars can both generate and interact with magnetic fields, resulting in a variety of dynamic astrophysical phenomena. These phenomena are sometimes observed in spectra due to the Zeeman effect. Other forms of astrophysical plasmas can be influenced by preexisting weak magnetic fields, whose interactions may only be determined directly by polarimetry or other indirect methods. In particular, the intergalactic medium, the interstellar medium, the interplanetary medium and solar winds consist of diffuse plasmas.
Possible related phenomena
Scientists are interested in active galactic nuclei because such astrophysical plasmas could be directly related to the plasmas studied in laboratories. Many of these phenomena seemingly exhibit an array of complex magnetohydrodynamic behaviors, such as turbulence and instabilities.
In Big Bang cosmology, the entire universe was in a pl |
https://en.wikipedia.org/wiki/Transit%20map | A transit map is a topological map in the form of a schematic diagram used to illustrate the routes and stations within a public transport system—whether this be bus, tram, rapid transit, commuter rail or ferry routes. The main components are color-coded lines to indicate each route or service, with named icons to indicate stations or stops.
Transit maps can be found in the transit vehicles, at the platforms or in printed timetables. Their primary function is to help users to efficiently use the public transport system, including which stations function as interchange between lines. Unlike conventional maps, transit maps are usually not geographically accurate—instead they use straight lines and fixed angles, and often illustrate a fixed distance between stations, compressing those in the outer area of the system and expanding those close to the center.
History
The mapping of transit systems was at first generally geographically accurate, but abstract route-maps of individual lines (usually displayed inside the carriages) can be traced back as early as 1908 (London's District line), and certainly there are examples from European and American railroad cartography as early as the 1890s where geographical features have been removed and the routes of lines have been artificially straightened out. But it was George Dow of the London and North Eastern Railway who was the first to launch a diagrammatic representation of an entire rail transport network (in 1929); his work is seen by historians of the subject as being part of the inspiration for Harry Beck when he launched his iconic London Underground map in 1933.
After this pioneering work, many transit authorities worldwide imitated the diagrammatic look for their own networks, some while continuing to also publish hybrid versions that were geographically accurate.
Early maps of the Berlin U-Bahn, Berlin S-Bahn, Boston T, Paris Métro, and New York City Subway also exhibited some elements of the diagrammatic form.
Th |
https://en.wikipedia.org/wiki/Audio/modem%20riser | The audio/modem riser (AMR) is a riser expansion slot found on the motherboards of some Pentium III, Pentium 4, Duron, and Athlon personal computers. It was designed by Intel to interface with chipsets and provide analog functionality, such as sound cards and modems, on an expansion card.
Technology
Physically, it has two rows of 23 pins, making 46 pins total. Three drawbacks of AMR are that it eliminates one PCI slot, it is not plug and play, and it does not allow for hardware accelerated cards (only software-based).
Technologically, it has been superseded by the Advanced Communications Riser (ACR) and Intel's own communications and networking riser (CNR). However, riser technologies in general never really took off. Modems generally remained as PCI cards while audio and network interfaces were integrated on to motherboards.
See also
Advanced Communications Riser (ACR)
GeoPort
Mobile Daughter Card |
https://en.wikipedia.org/wiki/Biosignal | A biosignal is any signal in living beings that can be continually measured and monitored. The term biosignal is often used to refer to bioelectrical signals, but it may refer to both electrical and non-electrical signals. The usual understanding is to refer only to time-varying signals, although spatial parameter variations (e.g. the nucleotide sequence determining the genetic code) are sometimes subsumed as well.
Electrical biosignals
Electrical biosignals, or bioelectrical time signals, usually refer to the change in electric current produced by the sum of an electrical potential difference across a specialized tissue, organ or cell system like the nervous system. Thus, among the best-known bioelectrical signals are:
Electroencephalogram (EEG)
Electrocardiogram (ECG)
Electromyogram (EMG)
Electrooculogram (EOG)
Electroretinogram (ERG)
Electrogastrogram (EGG)
Galvanic skin response (GSR) or electrodermal activity (EDA)
EEG, ECG, EOG and EMG are measured with a differential amplifier which registers the difference between two electrodes attached to the skin. However, the galvanic skin response measures electrical resistance, and magnetoencephalography (MEG) measures the magnetic field induced by the electrical currents of the brain (the same currents underlying the electroencephalogram).
With the development of methods for remote measurement of electric fields using new sensor technology, electric biosignals such as EEG and ECG can be measured without electric contact with the skin. This can be applied, for example, for remote monitoring of brain waves and heart beat of patients who must not be touched, in particular patients with serious burns.
Electrical currents and changes in electrical resistances across tissues can also be measured from plants.
Biosignals may also refer to any non-electrical signal that is capable of being monitored from biological beings, such as mechanical signals (e.g. the mechanomyogram or MMG), acoustic signals (e.g. phonetic and non-phonetic utterances, bre |
https://en.wikipedia.org/wiki/Anterior%20tibial%20recurrent%20artery | The anterior tibial recurrent artery is a small artery in the leg. It arises from the anterior tibial artery, as soon as that vessel has passed through the interosseous space. It ascends in the tibialis anterior muscle, ramifies on the front and sides of the knee-joint, and assists in the formation of the patellar plexus by anastomosing with the genicular branches of the popliteal, and with the highest genicular artery. |
https://en.wikipedia.org/wiki/Millimeter%20Anisotropy%20eXperiment%20IMaging%20Array | The Millimeter Anisotropy eXperiment IMaging Array (MAXIMA) experiment was a balloon-borne experiment funded by the United States NSF, NASA, and Department of Energy, and operated by an international collaboration headed by the University of California, to measure the fluctuations of the cosmic microwave background. It consisted of two flights, one in August 1998 and one in June 1999. For each flight the balloon was launched from the Columbia Scientific Balloon Facility in Palestine, Texas and flew to an altitude of 40,000 metres for over 8 hours. For the first flight it took data from about 0.3 percent of the sky of the northern region near the Draco constellation. For the second flight, known as MAXIMA-II, twice the area was observed, this time in the direction of Ursa Major.
Initially planned together with the BOOMERanG experiment, it split off during the planning phase to take a less risky approach by reducing flying time as well as launching and landing on U.S. territory.
Instrumentation
A 1.3-metre primary mirror, along with a smaller secondary and tertiary mirror, was used to focus the microwaves onto the feed horns. The feed horns had spectral bands centred at 150, 240 and 420 GHz with a resolution of 10 arcminutes. A bolometer array consisting of sixteen NTD-Ge thermistors measured the incident radiation.
The detector array was cooled to 100 mK via a four-stage refrigeration process. Liquid nitrogen cooled the outer layer of radiation shielding and He-4 was used to cool the two other layers to a temperature of 2-3 K. Finally liquid He-3 cooled the array down to operation temperature. The shielding, together with the properties of the feed horns, gave a sensitivity of .
Two CCD cameras were used to provide accurate measurements of the telescope's orientation. The first wide-field camera pointed towards Polaris and gave a coarse orientation up to 15 arcminutes. The other camera was mounted in the primary focus and gave an accuracy of half an arcminute for |
https://en.wikipedia.org/wiki/Neural%20radiance%20field | A neural radiance field (NeRF) is a method based on deep learning for reconstructing a three-dimensional representation of a scene from two-dimensional images. It is a graphics primitive which can be optimized from a set of 2D images to produce a 3D scene. The NeRF model can learn the scene geometry, camera poses, and the reflectance properties of objects in a scene, which allows it to render new views of the scene from novel viewpoints.
The method was originally introduced by a team from UC Berkeley, Google Research, and UC San Diego in 2020.
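The step that turns a trained radiance field into an image is a standard volume-rendering quadrature: density and colour samples along each camera ray are alpha-composited. The sketch below, an illustration not taken from the article, implements only that compositing step in NumPy; the densities and colours are hand-picked stand-ins for what the learned network would predict.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite samples along one ray (the volume-rendering quadrature).

    densities: (N,) non-negative volume densities sigma_i at the samples
    colors:    (N, 3) RGB values c_i at the samples
    deltas:    (N,) distances between consecutive samples
    Returns the rendered RGB value for the ray.
    """
    alphas = 1.0 - np.exp(-densities * deltas)   # opacity of each segment
    # Transmittance T_i: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas                     # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)

# Toy example: empty space in front of an opaque red slab.
sigma = np.array([0.0, 0.0, 50.0, 50.0])
rgb = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
delta = np.full(4, 0.25)
print(composite_ray(sigma, rgb, delta))  # ≈ [1, 0, 0]: the ray "sees" the slab
```

In a full NeRF pipeline this compositing is differentiable, which is what lets the scene representation be optimized from 2D images by gradient descent on the rendering loss.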
See also
Gaussian splatting |
https://en.wikipedia.org/wiki/Defense%20Data%20Network | The Defense Data Network (DDN) was a computer networking effort of the United States Department of Defense from 1983 through 1995. It was based on ARPANET technology.
History
As an experiment, from 1971 to 1977, the Worldwide Military Command and Control System (WWMCCS) purchased and operated an ARPANET-type system from BBN Technologies for the Prototype WWMCCS Intercomputer Network (PWIN). The experiments proved successful enough that it became the basis of the much larger WIN system. Six initial WIN sites in 1977 increased to 20 sites by 1981.
In 1975, the Defense Communication Agency (DCA) took over operation of the ARPANET as it became an operational tool in addition to an ongoing research project. At that time, the Automatic Digital Network (AUTODIN), carried most of the Defense Department's message traffic. Starting in 1972, attempts had been made to introduce some packet switching into its planned replacement, AUTODIN II. AUTODIN II development proved unsatisfactory, however, and in 1982, AUTODIN II was canceled, to be replaced by a combination of several packet-based networks that would connect military installations.
The DCA used "Defense Data Network" (DDN) as the program name for this new network. Under its initial architecture, as developed by the Institute for Defense Analysis, the DDN would consist of two separate instances: the unclassified MILNET, which would be split off the ARPANET; and a classified network, also based on ARPANET technology, which would provide services for WIN, DODIIS, and SACDIN. C/30 packet switches, developed by BBN Technologies as upgraded Interface Message Processors, would provide the network technology. End-to-end encryption would be provided by ARPANET encryption devices, namely the Internet Private Line Interface (IPLI) or Blacker.
After MILNET was split away, the ARPANET would continue to be used as an Internet backbone for researchers, but be slowly phased out. Both networks carried unclassified information, and we
https://en.wikipedia.org/wiki/Landau%E2%80%93Kolmogorov%20inequality | In mathematics, the Landau–Kolmogorov inequality, named after Edmund Landau and Andrey Kolmogorov, is the following family of interpolation inequalities between different derivatives of a function f defined on a subset T of the real numbers:
On the real line
For k = 1, n = 2 and T = [c,∞) or T = R, the inequality was first proved by Edmund Landau with the sharp constants C(2, 1, [c,∞)) = 2 and C(2, 1, R) = √2. Following contributions by Jacques Hadamard and Georgiy Shilov, Andrey Kolmogorov found the sharp constants for arbitrary n and k:
where the a_n are the Favard constants.
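For reference, the sup-norm form of the inequality and Kolmogorov's sharp constant on the real line (standard statements from the classical literature, with a_n denoting the Favard constants) read:

```latex
\bigl\|f^{(k)}\bigr\|_{L_\infty(T)}
  \;\le\; C(n,k,T)\,
  \|f\|_{L_\infty(T)}^{\,1-k/n}\,
  \bigl\|f^{(n)}\bigr\|_{L_\infty(T)}^{\,k/n},
  \qquad 0 < k < n,
\qquad\text{with}\qquad
C(n,k,\mathbb{R}) \;=\; a_{n-k}\, a_n^{-1+k/n}.
```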
On the half-line
Following work by Matorin and others, the extremising functions were found by Isaac Jacob Schoenberg; explicit forms for the sharp constants are, however, still unknown.
Generalisations
There are many generalisations, which are of the form
Here all three norms can be different from each other (from L1 to L∞, with p=q=r=∞ in the classical case) and T may be the real axis, semiaxis or a closed segment.
The Kallman–Rota inequality generalizes the Landau–Kolmogorov inequalities from the derivative operator to more general contractions on Banach spaces.
Notes
Inequalities
https://en.wikipedia.org/wiki/Oral%20and%20maxillofacial%20pathology | Oral and maxillofacial pathology refers to the diseases of the mouth ("oral cavity" or "stoma"), jaws ("maxillae" or "gnath") and related structures such as salivary glands, temporomandibular joints, facial muscles and perioral skin (the skin around the mouth). The mouth is an important organ with many different functions. It is also prone to a variety of medical and dental disorders.
The specialty oral and maxillofacial pathology is concerned with diagnosis and study of the causes and effects of diseases affecting the oral and maxillofacial region. It is sometimes considered to be a specialty of dentistry and pathology. Sometimes the term head and neck pathology is used instead, which may indicate that the pathologist deals with otorhinolaryngologic disorders (i.e. ear, nose and throat) in addition to maxillofacial disorders. In this role there is some overlap between the expertise of head and neck pathologists and that of endocrine pathologists.
Diagnosis
The key to any diagnosis is thorough medical, dental, social and psychological history as well as assessing certain lifestyle risk factors that may be involved in disease processes. This is followed by a thorough clinical investigation including extra-oral and intra-oral hard and soft tissues.
It is sometimes possible to determine a diagnosis and treatment regimen from the history and examination alone; however, it is good practice to compile a list of differential diagnoses. Differential diagnosis allows for decisions on what further investigations are needed in each case.
There are many types of investigations in diagnosis of oral and maxillofacial diseases, including screening tests, imaging (radiographs, CBCT, CT, MRI, ultrasound) and histopathology (biopsy).
Biopsy
A biopsy is indicated when the patient's clinical presentation, past history or imaging studies do not allow a definitive diagnosis. A biopsy is a surgical procedure that involves the removal of a piece of tissue sample from the livin |
https://en.wikipedia.org/wiki/Rubin%20causal%20model | The Rubin causal model (RCM), also known as the Neyman–Rubin causal model, is an approach to the statistical analysis of cause and effect based on the framework of potential outcomes, named after Donald Rubin. The name "Rubin causal model" was first coined by Paul W. Holland. The potential outcomes framework was first proposed by Jerzy Neyman in his 1923 Master's thesis, though he discussed it only in the context of completely randomized experiments. Rubin extended it into a general framework for thinking about causation in both observational and experimental studies.
Introduction
The Rubin causal model is based on the idea of potential outcomes. For example, a person would have a particular income at age 40 if they had attended college, whereas they would have a different income at age 40 if they had not attended college. To measure the causal effect of going to college for this person, we need to compare the outcome for the same individual in both alternative futures. Since it is impossible to see both potential outcomes at once, one of the potential outcomes is always missing. This dilemma is the "fundamental problem of causal inference."
Because of the fundamental problem of causal inference, unit-level causal effects cannot be directly observed. However, randomized experiments allow for the estimation of population-level causal effects. A randomized experiment assigns people randomly to treatments: college or no college. Because of this random assignment, the groups are (on average) equivalent, and the difference in income at age 40 can be attributed to the college assignment since that was the only difference between the groups. An estimate of the average causal effect (also referred to as the average treatment effect or ATE) can then be obtained by computing the difference in means between the treated (college-attending) and control (not-college-attending) samples.
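The logic of the paragraph above can be checked in a small simulation. All numbers below are invented for illustration: each unit is given both potential outcomes, so the true ATE is known, and random assignment is then shown to recover it from the observed data alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Potential outcomes for every unit: income at 40 without college (y0)
# and with college (y1). In reality only one of these is ever observed.
y0 = rng.normal(50_000, 10_000, n)
y1 = y0 + rng.normal(8_000, 2_000, n)   # individual causal effects y1 - y0
true_ate = (y1 - y0).mean()

# Random assignment reveals exactly one potential outcome per unit.
treated = rng.random(n) < 0.5
observed = np.where(treated, y1, y0)

# Difference in group means estimates the ATE under randomization.
est_ate = observed[treated].mean() - observed[~treated].mean()
print(round(true_ate), round(est_ate))  # the two agree up to sampling noise
```

The same difference-in-means computed on non-randomized data would generally be biased, because treated and control groups would differ in their potential outcomes.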
In many circumstances, however, randomized experiments are not possible due to ethical or |
https://en.wikipedia.org/wiki/Pace%20count%20beads | Pace count beads or ranger beads are a manual counting tool used to keep track of distance traveled through a pace count. It is used in military land navigation or orienteering. A typical example for military use is keeping track of distance traveled during a foot patrol.
Description
The tool is usually constructed using a set of 13 beads on a length of cord. The beads are divided into two sections, separated by a knot. Nine beads are used in the lower section, and four or more beads are used in the upper section. There is often a loop in the upper end, making it possible to attach the tool to the user's gear with a simple lark's head hitch.
Usage
The beads can be used to count paces or a distance calculated from the number of paces. Both methods require the user to know the relationship between the paces walked, and the distance traveled. There are two main ways to use the beads. One is to represent the distance a person has walked, and the other is to represent the distance they need to walk. In the latter, beads may be used to count down the distance to a destination.
Counting paces
As users walk, they typically slide one bead on the cord for every ten paces taken. On the tenth pace, the user slides a bead in the lower section towards the knot. After the 90th pace, all nine beads are against the knot. On the 100th pace, all nine beads in the lower section are returned away from the knot, and a bead from the upper section is slid upwards, away from the knot.
In this manner, the user calculates the distance traveled by keeping track of the paces taken. To use this method, the user must know the length of their pace in order to calculate the distance traveled accurately. The number of paces to be walked must also be precalculated (for example, 2,112 paces = one mile, based on a 30-inch pace), and the distance traveled then has to be calculated from the paces walked.
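The ten-to-one bead arithmetic described above can be sketched as a small function. This is a hypothetical helper (not from the article), assuming the common layout of nine lower beads for tens of paces and upper beads for hundreds:

```python
def bead_state(paces):
    """Return (lower, upper) bead counts after `paces` paces.

    One lower bead is slid per 10 paces; on each 100th pace the nine
    lower beads are reset and one upper bead is slid instead.
    (Upper beads are limited in reality, typically four, so very long
    legs require resetting the whole tool.)
    """
    hundreds, remainder = divmod(paces, 100)
    return remainder // 10, hundreds

# After 240 paces: 2 upper beads (hundreds) and 4 lower beads (tens).
print(bead_state(240))   # (4, 2)
```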
Distance walked
For every 100 meters the user walks, one of the lower beads is pulled down. When the ninth of |
https://en.wikipedia.org/wiki/Format-preserving%20encryption | In cryptography, format-preserving encryption (FPE) refers to encrypting in such a way that the output (the ciphertext) is in the same format as the input (the plaintext). The meaning of "format" varies. Typically only finite sets of characters are used: numeric, alphabetic or alphanumeric. For example:
Encrypting a 16-digit credit card number so that the ciphertext is another 16-digit number.
Encrypting an English word so that the ciphertext is another English word.
Encrypting an n-bit number so that the ciphertext is another n-bit number (this is the definition of an n-bit block cipher).
For such finite domains, and for the purposes of the discussion below, the cipher is equivalent to a permutation of the N integers {0, 1, …, N − 1}, where N is the size of the domain.
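As an illustration of such a permutation, the sketch below builds a toy FPE cipher for 16-digit numbers using a Feistel network over the two 8-digit halves. The Feistel-over-digits structure is the same idea used by standardized FPE modes such as NIST's FF1, but the hash-based round function here is invented for illustration and is NOT cryptographically secure; real deployments should use a vetted scheme.

```python
import hashlib

DIGITS = 16
DOMAIN = 10 ** DIGITS
HALF = 10 ** (DIGITS // 2)
ROUNDS = 8

def _round_fn(key, i, v):
    # Toy round function: hash of (key, round index, half-block), mod HALF.
    h = hashlib.sha256(f"{key}:{i}:{v}".encode()).hexdigest()
    return int(h, 16) % HALF

def fpe_encrypt(x, key):
    """Toy permutation of {0, ..., 10^16 - 1} that preserves the format."""
    l, r = divmod(x, HALF)
    for i in range(ROUNDS):
        l, r = r, (l + _round_fn(key, i, r)) % HALF
    return l * HALF + r

def fpe_decrypt(y, key):
    """Inverse permutation: run the Feistel rounds backwards."""
    l, r = divmod(y, HALF)
    for i in reversed(range(ROUNDS)):
        l, r = (r - _round_fn(key, i, l)) % HALF, l
    return l * HALF + r

pan = 1234567812345670
ct = fpe_encrypt(pan, "demo-key")
print(f"{ct:016d}")                       # another 16-digit number
print(fpe_decrypt(ct, "demo-key") == pan) # True: round-trips exactly
```

Because each half stays below 10^8, the ciphertext is always another 16-digit value, so no cycle walking is needed for this exact domain size; cycle walking becomes necessary when a binary cipher is mapped onto a domain that is not a power of its block size.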
Motivation
Restricted field lengths or formats
One motivation for using FPE comes from the problems associated with integrating encryption into existing applications, with well-defined data models. A typical example would be a credit card number, such as 1234567812345670 (16 bytes long, digits only).
Adding encryption to such applications might be challenging if data models are to be changed, as it usually involves changing field length limits or data types. For example, output from a typical block cipher would turn a credit card number into a hexadecimal value (e.g. 0x96a45cbcf9c2a9425cde9e274948cb67, 34 bytes, hexadecimal digits) or a Base64 value (e.g. lqRcvPnCqUJc3p4nSUjLZw==, 24 bytes, alphanumeric and special characters), which will break any existing applications expecting the credit card number to be a 16-digit number.
Apart from simple formatting problems, using AES-128-CBC, this credit card number might get encrypted to the hexadecimal value 0xde015724b081ea7003de4593d792fd8b695b39e095c98f3a220ff43522a2df02. In addition to the problems caused by creating invalid characters and increasing the size of the data, data encrypted using the CBC mode of an encryption algorithm also changes its value when it is decrypt |
https://en.wikipedia.org/wiki/Airliners.net | Airliners.net is an aviation website that includes an extensive photo database of aircraft and airports, as well as a forum catering to aviation enthusiasts. Created by Johan Lundgren, Jr., the site originated in 1996 as Pictures of Modern Airliners. It was acquired by Demand Media (now known as Leaf Group) in 2007, and underwent a major redesign in 2016.
History
Johan Lundgren, Jr., an IT student/aviation enthusiast attending Luleå University of Technology in Sweden, created the site Pictures of Modern Airliners in 1996. Lundgren had been working on the site during his military service. It initially hosted only his own aircraft photos before a new section was created for other photographers to upload their photos.
In 1997, Lundgren transitioned to a new site entitled Airliners.net and established a web server in his dormitory room. Three more servers were added, and eventually all servers were relocated to the computer rooms at the university. Lundgren started investing all of his time into the site, although he received help from a growing number of volunteers.
On 27 July 2007, Lundgren announced that the site would be acquired by Demand Media. It was sold to the company for US$8.2 million. Reasons behind the decision included the difficulty of managing the rapidly growing site, which was by that point supported by 25 servers.
A revamped site was launched on 14 June 2016.
In February 2017, the site was acquired by VerticalScope, majority-owned by Torstar.
Membership
The site offers free membership, using online advertising instead as a source of revenue. Before the June 2016 redesign, Airliners.net provided three levels of membership: the Photographer Account, Premium Membership and First Class Membership. The latter two required payment, while a Photographer Account could be created for free.
Features
The site has two main features: the photo database and the forum. The database contains over 2.7 million photos with over 8.6 billion total views as of June |
https://en.wikipedia.org/wiki/Fibrocapsa | Fibrocapsa is a genus of algae belonging to the family Fibrocapsaceae.
Species:
Fibrocapsa japonica S.Toriumi & H.Takano |
https://en.wikipedia.org/wiki/Predication%20%28philosophy%29 | Predication in philosophy refers to an act of judgement where one term is subsumed under another. A comprehensive conceptualization describes it as the understanding of the relation expressed by a predicative structure primordially (i.e. both originally and primarily) through the opposition between particular and general or the one and the many.
Predication is also associated or used interchangeably with the concept of attribution where both terms pertain to the way judgment and ideas acquire a new property in the second operation of the mind (or the mental operation of judging).
Background
Predication emerged when ancient philosophers began exploring reality and the two entities that divide it: properties and the things that bear them. These thinkers investigated what the division between thing and property amounted to. It was argued that the relationship resembled the logical analysis of a sentence wherein the division of subject and predicate arises spontaneously. It was Aristotle who posited that the division between subject and predicate is fundamental and that there is no truth unless a property is "predicated of" something. In Plato's works, predication is demonstrated in the analysis of desire. He stated through Socrates that the type of dominant excess gives its name to the one who has it such as how drunkenness gives its name to a drunkard. Here, predication confirms the reality of this form of excess on the being who partakes in it. Pythagoreans also touched on predication as they explained how number is the essence of everything. They hold that a number has an independent reality, arguing that substances such as fire and water were not the real essences of the things they are predicated of. In describing Greek philosophy, Charles Kahn identified predication as one of the three concepts - along with truth and reality - that ontology connected.
It is suggested that predication is equivalent to the German concept of Aussage. In Grundlagen, for instance, |
https://en.wikipedia.org/wiki/Berezin%20integral | In mathematical physics, the Berezin integral, named after Felix Berezin, (also known as Grassmann integral, after Hermann Grassmann), is a way to define integration for functions of Grassmann variables (elements of the exterior algebra). It is not an integral in the Lebesgue sense; the word "integral" is used because the Berezin integral has properties analogous to the Lebesgue integral and because it extends the path integral in physics, where it is used as a sum over histories for fermions.
Definition
Let Λ be the exterior algebra of polynomials in n anticommuting generators θ_1, …, θ_n over the field of complex numbers. (The ordering of the generators is fixed and defines the orientation of the exterior algebra.)
One variable
The Berezin integral over the single Grassmann variable θ is defined to be the linear functional f ↦ ∫ f(θ) dθ,
where we define
∫ θ dθ = 1 and ∫ dθ = 0,
so that:
∫ (a + bθ) dθ = b.
These properties define the integral uniquely. Take note that f(θ) = a + bθ is the most general function of θ, because Grassmann variables square to zero, so f cannot have non-zero terms beyond linear order.
Multiple variables
The Berezin integral on Λ is defined to be the unique linear functional with the following properties:
∫ θ_n ⋯ θ_1 dθ_1 ⋯ dθ_n = 1, and ∫ (∂f/∂θ_i) dθ = 0 for any f, where ∂/∂θ_i means the left or the right partial derivative. These properties define the integral uniquely.
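Under one common normalization, the two defining properties of the multi-variable Berezin integral can be written explicitly as follows (conventions on the ordering of the θ's and dθ's vary between authors):

```latex
\int \theta_n \theta_{n-1} \cdots \theta_1 \, d\theta_1 \, d\theta_2 \cdots d\theta_n = 1,
\qquad
\int \frac{\partial f}{\partial \theta_i}\, d\theta = 0 \quad \text{for all } f \in \Lambda .
```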
Notice that different conventions exist in the literature: Some authors define instead
The formula
expresses the Fubini law. On the right-hand side, the interior integral of a monomial is set to be where ; the integral of vanishes. The integral with respect to is calculated in a similar way, and so on.
Change of Grassmann variables
Let be odd polynomials in some antisymmetric variables . The Jacobian is the matrix
where refers to the right derivative (). The formula for the coordinate change reads
Integrating even and odd variables
Definition
Consider now the algebra of functions of real commuting variables and of anticommuting variables (which is called the free superalgeb |
https://en.wikipedia.org/wiki/Vestibular%20nuclei | The vestibular nuclei (VN) are the cranial nuclei for the vestibular nerve located in the brainstem.
In Terminologia Anatomica they are grouped in both the pons and the medulla in the brainstem.
Structure
Path
The fibers of the vestibular nerve enter the medulla oblongata on the medial side of those of the cochlear, and pass between the inferior peduncle and the spinal tract of the trigeminal nerve.
They then divide into ascending and descending fibers. The latter end by arborizing around the cells of the medial nucleus, which is situated in the area acustica of the rhomboid fossa. The ascending fibers either end in the same manner or in the lateral nucleus, which is situated lateral to the area acustica and farther from the ventricular floor.
Some of the axons of the cells of the lateral nucleus, and possibly also of the medial nucleus, are continued upward through the inferior peduncle to the roof nuclei of the opposite side of the cerebellum, to which also other fibers of the vestibular root are prolonged without interruption in the nuclei of the medulla oblongata.
A second set of fibers from the medial and lateral nuclei end partly in the tegmentum, while the remainder ascend in the medial longitudinal fasciculus to arborize around the cells of the nuclei of the oculomotor nerve.
Fibers from the lateral vestibular nucleus also pass via the vestibulospinal tract, to anterior horn cells at many levels in the spinal cord, in order to co-ordinate head and trunk movements.
Subnuclei
There are four subnuclei; they are situated at the floor of the fourth ventricle.
See also
Vestibular nerve
Vestibulocerebellar syndrome |
https://en.wikipedia.org/wiki/IEEE%20754 | The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-point arithmetic established in 1985 by the Institute of Electrical and Electronics Engineers (IEEE). The standard addressed many problems found in the diverse floating-point implementations that made them difficult to use reliably and portably. Many hardware floating-point units use the IEEE 754 standard.
The standard defines:
arithmetic formats: sets of binary and decimal floating-point data, which consist of finite numbers (including signed zeros and subnormal numbers), infinities, and special "not a number" values (NaNs)
interchange formats: encodings (bit strings) that may be used to exchange floating-point data in an efficient and compact form
rounding rules: properties to be satisfied when rounding numbers during arithmetic and conversions
operations: arithmetic and other operations (such as trigonometric functions) on arithmetic formats
exception handling: indications of exceptional conditions (such as division by zero, overflow, etc.)
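The binary64 interchange format can be inspected directly. A short Python sketch using the standard struct module illustrates the field layout (1 sign bit, 11 exponent bits, 52 fraction bits) and two of the special values listed above:

```python
import math
import struct

def fields(x):
    """Split a Python float (IEEE 754 binary64) into (sign, exponent, fraction)."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    return bits >> 63, (bits >> 52) & 0x7FF, bits & ((1 << 52) - 1)

print(fields(1.0))            # (0, 1023, 0): stored exponent = bias of 1023
print(fields(-0.0))           # (1, 0, 0): a signed zero
print(fields(float("inf")))   # (0, 2047, 0): all-ones exponent, zero fraction
s, e, f = fields(float("nan"))
print(e == 2047 and f != 0)   # True: NaN has all-ones exponent, non-zero fraction
print(math.nan == math.nan)   # False: NaN compares unequal even to itself
```

Subnormal numbers occupy the remaining encodings with a stored exponent of 0 and a non-zero fraction.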
IEEE 754-2008, published in August 2008, includes nearly all of the original IEEE 754-1985 standard, plus the IEEE 854-1987 Standard for Radix-Independent Floating-Point Arithmetic. The current version, IEEE 754-2019, was published in July 2019. It is a minor revision of the previous version, incorporating mainly clarifications, defect fixes and new recommended operations.
History
The first standard for floating-point arithmetic, IEEE 754-1985, was published in 1985. It covered only binary floating-point arithmetic.
A new version, IEEE 754-2008, was published in August 2008, following a seven-year revision process, chaired by Dan Zuras and edited by Mike Cowlishaw. It replaced both IEEE 754-1985 (binary floating-point arithmetic) and IEEE 854-1987 Standard for Radix-Independent Floating-Point Arithmetic. The binary formats in the original standard are included in this new standard along with three new basic formats, one b |
https://en.wikipedia.org/wiki/Thermodynamic%20modelling | Thermodynamic modelling is a set of different strategies that are used by engineers and scientists to develop models capable of evaluating different thermodynamic properties of a system. At each thermodynamic equilibrium state of a system, the thermodynamic properties of the system are specified. Generally, thermodynamic models are mathematical relations that relate different state properties to each other in order to eliminate the need of measuring all the properties of the system in different states.
The simplest thermodynamic models, also known as equations of state, can come from simple correlations that relate different thermodynamic properties using a linear or second-order polynomial function of temperature and pressure. They are generally fitted using the experimental data available for the specific property in question. This approach can result in limited predictability of the correlation, and as a consequence it can be adopted only in a limited operating range.
By contrast, more advanced thermodynamic models are built in a way that can predict the thermodynamic behaviour of the system, even if the functional form of the model is not based on the real thermodynamic behaviour of the material. These types of models contain different parameters that are gradually refined for each specific model in order to enhance the accuracy of the evaluated thermodynamic properties.
Cubic model development
Cubic equations of state refer to the group of thermodynamic models that can evaluate the specific volume of gas and liquid systems as a function of pressure and temperature. To develop a cubic model, first, it is essential to select a cubic functional form. The most famous functional forms of this category are Redlich-Kwong, Soave-Redlich-Kwong and Peng-Robinson. Although their initial form is empirically suggested, they are categorised as semi-empirical models as their parameters can be adjusted to fit the real experimental measurement data of the target system.
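As an illustration of how such a cubic model is used, the sketch below solves the Redlich–Kwong equation written in compressibility-factor form, Z³ − Z² + (A − B − B²)Z − AB = 0, for the vapour root. The 0.42748 and 0.08664 coefficients are the standard RK values; the critical constants for methane are approximate and used only as example inputs.

```python
import numpy as np

R = 8.314  # J/(mol*K)

def rk_vapor_Z(T, P, Tc, Pc):
    """Largest real (vapour) root of the Redlich-Kwong cubic in Z.

    A = a P / (R^2 T^2.5), B = b P / (R T), with
    a = 0.42748 R^2 Tc^2.5 / Pc and b = 0.08664 R Tc / Pc.
    """
    a = 0.42748 * R**2 * Tc**2.5 / Pc
    b = 0.08664 * R * Tc / Pc
    A = a * P / (R**2 * T**2.5)
    B = b * P / (R * T)
    roots = np.roots([1.0, -1.0, A - B - B**2, -A * B])
    real_roots = roots.real[np.abs(roots.imag) < 1e-9]
    return real_roots.max()

# Methane (Tc ~ 190.6 K, Pc ~ 4.599 MPa) at near-ambient conditions:
Z = rk_vapor_Z(300.0, 1e5, 190.6, 4.599e6)
print(Z)                     # close to 1: nearly ideal-gas behaviour
V = Z * R * 300.0 / 1e5      # specific (molar) volume in m^3/mol
```

At conditions nearer the critical point the cubic yields three real roots, with the smallest corresponding to the liquid and the largest to the vapour.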
Pure compon |
https://en.wikipedia.org/wiki/ABAP | ABAP (Advanced Business Application Programming, originally Allgemeiner Berichts-Aufbereitungs-Prozessor, German for "general report preparation processor") is a high-level programming language created by the German software company SAP SE. It is currently positioned, alongside Java, as the language for programming the SAP NetWeaver Application Server, which is part of the SAP NetWeaver platform for building business applications.
Introduction
ABAP is one of the many application-specific fourth-generation languages (4GLs) first developed in the 1980s. It was originally the report language for SAP R/2, a platform that enabled large corporations to build mainframe business applications for materials management and financial and management accounting.
ABAP used to be an abbreviation of Allgemeiner Berichts-Aufbereitungs-Prozessor, German for "generic report preparation processor", but was later renamed to the English Advanced Business Application Programming. ABAP was one of the first languages to include the concept of Logical Databases (LDBs), which provide a high level of abstraction from the basic database level(s).
The ABAP language was originally used by developers to develop the SAP R/3 platform. It was also intended to be used by SAP customers to enhance SAP applications – customers can develop custom reports and interfaces with ABAP programming. The language was geared towards more technical customers with programming experience.
ABAP remains the language for creating programs for the client–server R/3 system, which SAP first released in 1992. As computer hardware evolved through the 1990s, more and more of SAP's applications and systems were written in ABAP. By 2001, all but the most basic functions were written in ABAP. In 1999, SAP released an object-oriented extension to ABAP called ABAP Objects, along with R/3 release 4.6.
SAP's current development platform NetWeaver supports both ABAP and Java.
|
https://en.wikipedia.org/wiki/Histone%20Database | The Histone Database is a comprehensive database of histone protein sequences including histone variants, classified by histone types and variants, maintained by the National Center for Biotechnology Information. The creation of the Histone Database was stimulated by the X-ray analysis of the structure of the nucleosomal core histone octamer, followed by the application of a novel motif-searching method to a group of proteins containing the histone fold motif in the early-to-mid 1990s. The first version of the Histone Database was released in 1995 and several updates have been released since then.
The current version of the Histone Database, HistoneDB 2.0 "with variants", includes sequence and structural annotations for all five histone types (H3, H4, H2A, H2B, H1) and major histone variants within each histone type. It has many interactive tools to explore and compare sequences of different histone variants from various organisms. The core of the database is a manually curated set of histone sequences grouped into 30 different variant subsets with variant-specific annotations. The curated set is supplemented by an automatically extracted set of histone sequences from the non-redundant protein database using algorithms trained on the curated set. The interactive web site supports various searching strategies in both datasets: browsing of phylogenetic trees; on-demand generation of multiple sequence alignments with feature annotations; classification of histone-like sequences and browsing of the taxonomic diversity for every histone variant.
https://en.wikipedia.org/wiki/Spoofing%20%28finance%29 | Spoofing is a disruptive algorithmic trading activity employed by traders to outpace other market participants and to manipulate markets. Spoofers feign interest in trading futures, stocks and other products in financial markets, creating an illusion of the demand and supply of the traded asset. In an order driven market, spoofers post a relatively large number of limit orders on one side of the limit order book to make other market participants believe that there is pressure to sell (limit orders are posted on the offer side of the book) or to buy (limit orders are posted on the bid side of the book) the asset.
Spoofing may cause prices to change because the market interprets the one-sided pressure in the limit order book as a shift in the balance of the number of investors who wish to purchase or sell the asset, which causes prices to increase (more buyers than sellers) or prices to decline (more sellers than buyers). Spoofers bid or offer with intent to cancel before the orders are filled. The flurry of activity around the buy or sell orders is intended to attract other traders to induce a particular market reaction. Spoofing can be a factor in the rise and fall of the price of shares and can be very profitable to the spoofer who can time buying and selling based on this manipulation.
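A toy numerical illustration of why one-sided resting depth reads as pressure: the order book and the imbalance measure below are invented for illustration only, and the imbalance statistic is one common research convention, not a regulatory definition.

```python
# Toy limit-order-book snapshot: price level -> resting quantity.
bids = {99.9: 500, 99.8: 400, 99.7: 300}
asks = {100.1: 450, 100.2: 350, 100.3: 300}

def imbalance(bids, asks):
    """Depth imbalance in [-1, 1]; positive values look like buying pressure."""
    b, a = sum(bids.values()), sum(asks.values())
    return (b - a) / (b + a)

print(round(imbalance(bids, asks), 3))  # roughly balanced book

# A spoofer posts large bids with no intention of letting them execute...
bids[99.8] += 5_000
print(round(imbalance(bids, asks), 3))  # strong apparent buy-side pressure

# ...and cancels them before they can be filled, restoring the book.
bids[99.8] -= 5_000
```

Other participants and algorithms that react to such depth signals may move their quotes, which is the price impact the spoofer exploits.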
Under the 2010 Dodd–Frank Act spoofing is defined as "the illegal practice of bidding or offering with intent to cancel before execution." Spoofing can be used with layering algorithms and front-running, activities which are also illegal.
High-frequency trading, the primary form of algorithmic trading used in financial markets, is very profitable as it deals in high volumes of transactions. The five-year delay in arresting the lone spoofer, Navinder Singh Sarao, accused of exacerbating the 2010 Flash Crash—one of the most turbulent periods in the history of financial markets—has placed self-regulatory bodies such as the Commodity Futures Trading Commission (CFTC) and Chicago Me |
https://en.wikipedia.org/wiki/Whiskey%20Lake | Whiskey Lake is Intel's codename for a family of third-generation 14nm Skylake low-power mobile processors. Intel announced Whiskey Lake on August 28, 2018.
Changes
14++ nm process, same as Coffee Lake
Increased turbo clocks (300–600 MHz)
14 nm PCH
Native USB 3.1 Gen 2 support (10 Gbit/s)
Integrated Wi-Fi 802.11ac 160 MHz (Wi-Fi 5) and Bluetooth 5.0
Intel Optane Memory support
List of Whiskey Lake CPUs
Mobile processors
The TDP for these CPUs is 15 W, but is configurable.
Core i5-8365U and i7-8665U support Intel vPro Technology
Pentium Gold and Celeron CPUs lack AVX2 support. |
https://en.wikipedia.org/wiki/Barbara%20Goette | Barbara Goette (26 July 1908 – 23 October 1997) was a German academic. She lived in Germany and then Australia. From 1935 to 1943, she was the private secretary of Ludwig Roselius, creator of Böttcherstraße and Café HAG, and financier of Focke-Wulf.
Early life
Barbara Goette matriculated in Kassel in 1928 and began studying mathematics, physics and philosophy at Freiburg University and then Kiel, where she took her state examinations in 1934–35. She met Dr. Ludwig Roselius through her brother's marriage to his youngest daughter, and he suggested she work for his concern. Goette became his companion, carer, confidante and collaborator.
Career
In September 1936 during a meeting in Berlin, the Reichsluftfahrtministerium (German Aviation Ministry) recommended reconstruction of Focke-Wulf with 50% going to the state and 50% to a large electronics concern.
A short time later the Roselius conglomerate became majority shareholder with 46% and Lorenz (ITT) secured 27.8%. The aircraft company was reconstituted as Focke-Wulf Flugzeugbau GmbH. Goette was instrumental in assisting with this, and the Böttcherstrasse was reclassified as 'degenerate art'. Roselius survived and consequently, in 1937, had the new FW 200 Condor fitted with 26 passenger seats (she was born on the 26th), safeguarding her legacy.
After Dr. Roselius died in May 1943, Goette lectured in English at the Humboldt Hochschule in Berlin until the premises were demolished during a bombing raid. In 1944 she started her Ph.D. in philosophy at Kiel. There she met Dr. J. P. Leidig, whom she married in February 1945. Shortly after the war she acted as an interpreter for the military police in Gunzenhausen, Bavaria.
In 1950 the family settled in Adelaide, Australia. Dr. Leidig died in 1957 and Goette was left with two sons. She never remarried and taught mathematics at Woodlands Church of England Girls Grammar School for 23 years. One year she had 4 of the top 10 students in South Australia in her class. She rece |