https://en.wikipedia.org/wiki/Martha%20Siegel
|
Martha Jochnowitz Siegel is an American applied mathematician, probability theorist and mathematics educator who served as the editor of Mathematics Magazine from 1991 to 1996. In 2017 she won the Yueh-Gin Gung and Dr. Charles Y. Hu Award for Distinguished Service of the Mathematical Association of America for "her remarkable leadership in guiding the national conversation on undergraduate mathematics curriculum". She was a faculty member in the mathematics department of Towson University from 1971 until 2015, when she became a professor emerita.
Education and career
Siegel grew up in Brooklyn, the daughter of civil engineer Nat Jochnowitz. She became interested in mathematics through her father's interest in mathematical puzzles, and through the calculation of baseball statistics for the Brooklyn Dodgers. She did her undergraduate studies in mathematics at Russell Sage College, a small women's college in Troy, New York, while also taking classes at the nearby men-only Rensselaer Polytechnic Institute, as at that time Russell Sage had no mathematics department. At Russell Sage, she was a Kellas honor student, and president of the science club. She completed her Ph.D. in 1969 at the University of Rochester; her dissertation, On Birth and Death Processes, was supervised by Johannes Kemperman. During graduate school and until her 1971 move to Towson, she was on the faculty at Goucher College.
Contributions
At Towson, in 1981, Siegel founded an innovative and still-ongoing undergraduate applied mathematics program involving projects connected to local business and government. She is a co-author of the discrete mathematics and precalculus textbooks Finite Mathematics and Its Applications and Functioning in the Real World. She also served as chair of a committee of the Mathematical Association of America charged with producing the 2015 edition of their MAA Curriculum Guide to Undergraduate Majors in the Mathematical Sciences.
|
https://en.wikipedia.org/wiki/Ligature%20%28writing%29
|
In writing and typography, a ligature occurs where two or more graphemes or letters are joined to form a single glyph. Examples are the characters ⟨æ⟩ and ⟨œ⟩ used in English and French, in which the letters ⟨a⟩ and ⟨e⟩ are joined for the first ligature and the letters ⟨o⟩ and ⟨e⟩ are joined for the second ligature. For stylistic and legibility reasons, ⟨f⟩ and ⟨i⟩ are often merged to create ⟨fi⟩ (where the tittle on the ⟨i⟩ merges with the hood of the ⟨f⟩); the same is true of ⟨s⟩ and ⟨t⟩ to create ⟨st⟩. The common ampersand (&) developed from a ligature in which the handwritten Latin letters ⟨e⟩ and ⟨t⟩ (spelling et, Latin for 'and') were combined.
History
The earliest known scripts, Sumerian cuneiform and Egyptian hieratic, both include many cases of character combinations that gradually evolve from ligatures into separately recognizable characters. Other notable ligatures, such as those of the Brahmic abugidas and the Germanic bind runes, figure prominently throughout ancient manuscripts. These new glyphs emerged alongside the proliferation of writing with a stylus, whether on paper or clay, and often for a practical reason: faster handwriting. Merchants especially needed a way to speed up written communication, and found that conjoining letters and abbreviating words for lay use was more convenient for record keeping and transactions than the bulky long forms.
Around the 9th and 10th centuries, monasteries became a fountainhead for these types of script modification. Medieval scribes who wrote in Latin increased their writing speed by combining characters and by introducing notational abbreviations. Others conjoined letters for aesthetic purposes. For example, in blackletter, letters with right-facing bowls (b, o, and p) and those with left-facing bowls (c, e, o, d, g and q) were written with the facing edges of the bowls superimposed. In many script forms, characters such as h, m, and n had their vertical strokes superimposed. Scribes also used notational abbreviations to avoid having to write a whole charact
|
https://en.wikipedia.org/wiki/Andr%C3%A1s%20Vargha
|
András Vargha (born 29 November 1949 in Budapest) is a Hungarian psychologist and statistician, head of the Institute of Psychology of the Károli Gáspár University of the Reformed Church in Hungary. His research field is psychometrics, on which he has lectured at conferences.
As of May 2011, Vargha had 131 journal publications with a cumulative impact factor of 140.50, and has worked, or continues to work, as a reviewer for academic journals.
Notes and references
1949 births
Living people
Academic staff of the Károli Gáspár University of the Reformed Church in Hungary
Psychometricians
Hungarian statisticians
Hungarian psychologists
Quantitative psychologists
|
https://en.wikipedia.org/wiki/Carrick%20mat
|
The carrick mat is a flat woven decorative knot which can be used as a mat or pad. Its name is based on the mat's decorative-type carrick bend with the ends connected together, forming an endless knot. A larger form, called the prolong knot, is made by expanding the basic carrick mat by extending, twisting, and overlapping its outer bights, then weaving the free ends through them. This process may be repeated to produce an arbitrarily long mat.
In its basic form it is the same as a 3-lead, 4-bight Turk's head knot. The basic carrick mat, made with two passes of rope, also forms the central motif in the logo of the International Guild of Knot Tyers.
When tied to form a cylinder around the central opening, instead of lying flat, it can be used as a woggle.
See also
List of knots
|
https://en.wikipedia.org/wiki/Anthony%20triangle
|
The Anthony triangle (also Anthony's triangle) is an organizational model. The triangle takes a hierarchical view of management structure, with many operational decisions at the bottom, some tactical decisions in the middle, and a few but important strategic decisions at the top of the triangle. The higher in the triangle an item sits, the more scope it covers and the less precise it becomes; as items move down, they become more detailed and apply more precisely.
It came to the attention of the information systems community through the work of George Gorry and Michael Scott Morton.
|
https://en.wikipedia.org/wiki/DAnCER%20%28database%29
|
DAnCER (disease-annotated chromatin epigenetics resource) is a database for chromatin modifications and their relation to human disease.
It was developed by the Wodak Lab at the Hospital for Sick Children.
It has been developed to serve as the core bioinformatics resource for seven experimental and bioinformatics laboratories working together to unravel the mechanisms of chromatin modifications and their relation to human disease. Since molecular networks are essential to the understanding of biological processes, this research effort strives to explore CM-related genes in the full context of protein complexes, gene-expression regulation and pathways. To gain additional insights into the CM process in human cells, it also explores patterns of evolutionary conservation across model organisms - from sequence, domain composition and 3D structure, to interaction patterns and regulatory mechanisms.
See also
Epigenomics
Chromatin
|
https://en.wikipedia.org/wiki/Far%20East%20Combined%20Bureau
|
The Far East Combined Bureau, an outstation of the British Government Code and Cypher School, was set up in Hong Kong in March 1935, to monitor Japanese, and also Chinese and Russian (Soviet) intelligence and radio traffic. Later it moved to Singapore, Colombo (Ceylon), Kilindini (Kenya), then returned to Colombo.
The Colombo site was known as HMS Anderson or Station Anderson.
Hong Kong
The FECB was located in an office block in the Naval dockyard, with an armed guard at the door (which negated any attempt at secrecy). The intercept site was on Stonecutters Island, four miles across the harbour, and manned by a dozen RAF and RN ratings (plus later four Army signallers). The codebreaking or Y section had Japanese, Chinese and Russian interpreters, under RN Paymaster Henry (Harry) Shaw, with Dick Thatcher and Neil Barnham. The FECB was headed by the Chief of Intelligence Staff (COIS) Captain John Waller, later by Captain F. J. Wylie.
Shaw had been dealing directly with GC&CS and the C-in-C Far East in Shanghai, but found that Waller expected that everything should go through him, and paid little regard to keeping sources secret. So by 1936 the two most senior naval officers were barely on speaking terms. Colonel Valentine Burkhart found when he arrived in 1936 that the bureau was involved in "turf wars", although they eventually accepted that they had no control over the use of intelligence reports.
Initially the Y section was to focus on the three main Japanese Navy codes and cyphers; the Japanese Naval General Cypher, the Flag Officer code and the "tasogare" or basic naval reporting code used to report the sailings of individual ships. In 1938 a section was set up to attack Japanese commercial systems and so to track supply convoys. From 1936 many messages were sent back to London, to be deciphered by John Tiltman, who broke the first version of JN-25 in 1939.
Singapore
In August 1939, shortly before the outbreak of war with Germany, the FECB moved to Singapore
|
https://en.wikipedia.org/wiki/Log%20trigger
|
In relational databases, the log trigger or history trigger is a mechanism for automatically recording information about changes to a database table: the insertion, updating, and/or deletion of rows.
It is a particular technique for change data capture, and in data warehousing for dealing with slowly changing dimensions.
Definition
Suppose there is a table which we want to audit. This table contains the following columns:
Column1, Column2, ..., Columnn
The column Column1 is assumed to be the primary key.
These columns are defined to have the following types:
Type1, Type2, ..., Typen
The log trigger works by writing the changes (INSERT, UPDATE and DELETE operations) on the table to another table, the history table, defined as follows:
CREATE TABLE HistoryTable (
    Column1 Type1,
    Column2 Type2,
    ...
    Columnn Typen,
    StartDate DATETIME,
    EndDate DATETIME
)
As shown above, this new table contains the same columns as the original table, plus two new columns of type DATETIME: StartDate and EndDate. This is known as tuple versioning. The two additional columns define a period of "validity" of the data associated with a given entity (the entity of the primary key); in other words, each row records what the data looked like during the period from StartDate (included) to EndDate (not included).
For each entity (distinct primary key) on the original table, the following structure is created in the history table. Data is shown as example.
Notice that, when the rows are shown chronologically, the EndDate of any row is exactly the StartDate of its successor (if any). This does not mean that both rows are valid at that instant, since, by definition, the value of EndDate is not included.
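The scheme above can be sketched concretely. The following example uses SQLite triggers via Python; the table, its columns, and the history bookkeeping are hypothetical illustrations of the technique, not a specific vendor's implementation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Person (Id INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE PersonHistory (
    Id INTEGER, Name TEXT,
    StartDate TEXT, EndDate TEXT   -- validity period; EndDate excluded
);
-- INSERT: open a new validity period for the row.
CREATE TRIGGER person_ins AFTER INSERT ON Person BEGIN
    INSERT INTO PersonHistory VALUES (NEW.Id, NEW.Name, datetime('now'), NULL);
END;
-- UPDATE: close the current period, then open a new one with the new values.
CREATE TRIGGER person_upd AFTER UPDATE ON Person BEGIN
    UPDATE PersonHistory SET EndDate = datetime('now')
        WHERE Id = OLD.Id AND EndDate IS NULL;
    INSERT INTO PersonHistory VALUES (NEW.Id, NEW.Name, datetime('now'), NULL);
END;
-- DELETE: just close the current period.
CREATE TRIGGER person_del AFTER DELETE ON Person BEGIN
    UPDATE PersonHistory SET EndDate = datetime('now')
        WHERE Id = OLD.Id AND EndDate IS NULL;
END;
""")
conn.execute("INSERT INTO Person VALUES (1, 'Alice')")
conn.execute("UPDATE Person SET Name = 'Alicia' WHERE Id = 1")
conn.execute("DELETE FROM Person WHERE Id = 1")
rows = conn.execute(
    "SELECT Name, EndDate IS NULL FROM PersonHistory ORDER BY StartDate, rowid"
).fetchall()
print(rows)  # → [('Alice', 0), ('Alicia', 0)] — both periods are closed
```

After the three operations, the history table holds one closed validity period per version of the row, which is exactly the tuple-versioning layout described above.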
There are two variants of the log trigger, depending on how the old values (DELETE, UPDATE) and new values (INSERT, UPDATE) are exposed to the trigger (this is RDBMS dependent):
Old and new values as fields of a record data structure
CREATE
|
https://en.wikipedia.org/wiki/Sperm-mediated%20gene%20transfer
|
Sperm-mediated gene transfer (SMGT) is a transgenic technique that transfers genes based on the ability of sperm cells to spontaneously bind to and internalize exogenous DNA and transport it into an oocyte during fertilization to produce genetically modified animals.[1] Exogenous DNA refers to DNA that originates outside of the organism. Transgenic animals have been obtained using SMGT, but the efficiency of this technique is low. Low efficiency is mainly due to low uptake of exogenous DNA by the spermatozoa, reducing the chances of fertilizing the oocytes with transfected spermatozoa.[2] In order to successfully produce transgenic animals by SMGT, the spermatozoa must attach the exogenous DNA to the head, and these transfected spermatozoa must maintain their functionality to fertilize the oocyte.[2] Genetically modified animals produced by SMGT are useful for research in biomedical, agricultural, and veterinary fields of study. SMGT could also be useful in generating animals as models for human diseases or lead to future discoveries relating to human gene therapy.
Sperm-Mediated Gene Transfer Mechanism
The method for SMGT uses the sperm cell, a natural vector of genetic material, to transport exogenous DNA. The exogenous DNA molecules bind to the cell membrane of the head of the sperm cell. This binding and internalization of the DNA is not a random event. The exogenous DNA interacts with the DNA-binding proteins (DBPs) that are present on the surface of the sperm cell.[3] Spermatozoa are naturally protected against the intrusion of exogenous DNA molecules by an inhibitory factor present in mammals' seminal fluid. This factor blocks the binding of sperm cells and exogenous DNA because in the presence of the inhibitory factor, DBPs lose their ability to bind to exogenous DNA. In the absence of this inhibitory factor, DBPs on sperm cells are able to interact with DNA and can then translocate the DNA into the cell. Therefore, the seminal fluid must be removed from the sper
|
https://en.wikipedia.org/wiki/List%20of%20fictional%20actuaries
|
Fictional actuaries and the appearance of actuaries in works of fiction have been the subject of a number of articles in actuarial journals.
Film
The Ice Road (2021) - Varnay (played by Benjamin Walker) introduces himself as an insurance actuary
About Schmidt (2002) - Warren Schmidt is portrayed by Jack Nicholson; the movie mostly covers Schmidt's retirement from an insurance company, and his adventures after retirement
Along Came Polly (2004) - Reuben Feffer (played by Ben Stiller) is a risk assessment expert, and though not explicitly stated, performs the job of an underwriter
Are You With It? (1948) - a musical comedy featuring Donald O'Connor as an actuary who is forced to join a carnival after misplacing a decimal point on a statistical table
The Billion Dollar Bubble (1976) - the Equity Funding scandal retold in the form of a movie, starring James Woods
Boyhood (2014) - Mason Evans, Sr. (played by Ethan Hawke) mentions at a baseball game that he recently passed his second actuarial exam, and later discusses his job at an insurance firm
Class Action (1991) - featured Gene Hackman and Mary Elizabeth Mastrantonio as father and daughter lawyers on opposite sides of a massive class action lawsuit; actuarial analysis plays a key role in the outcome
Double Indemnity (1944) - a Billy Wilder film, with Fred MacMurray and Barbara Stanwyck; possibly the first to feature an actuary; the plot revolves around a murder that seeks to gain advantage from a particular aspect of an insurance policy; an insurance investigator (played by Edward G. Robinson) knows the actuarial statistics and becomes suspicious
Escape Clause (1996) - Andrew McCarthy plays Richard Ramsay in an actuarial thriller; to quote TVguide.com, "The makers of this direct-to-video release thought the world was ready for a thriller about an insurance actuary. They thought wrong."
Fight Club (1999) - Edward Norton plays the protagonist, who briefly describes that his job entails the assessm
|
https://en.wikipedia.org/wiki/Shoshichi%20Kobayashi
|
Shoshichi Kobayashi was a Japanese mathematician. He was the eldest brother of electrical engineer and computer scientist Hisashi Kobayashi. His research interests were in Riemannian and complex manifolds, transformation groups of geometric structures, and Lie algebras.
Biography
Kobayashi graduated from the University of Tokyo in 1953. In 1956, he earned a Ph.D. from the University of Washington under Carl B. Allendoerfer. His dissertation was Theory of Connections. He then spent two years at the Institute for Advanced Study and two years at MIT. He joined the faculty of the University of California, Berkeley in 1962 as an assistant professor, was awarded tenure the following year, and was promoted to full professor in 1966.
Kobayashi served as chairman of the Berkeley Mathematics Department for a three-year term from 1978 to 1981 and for the 1992 Fall semester. He chose early retirement under the VERIP plan in 1994.
The two-volume book Foundations of Differential Geometry, which he coauthored with Katsumi Nomizu, has been known for its wide influence. In 1970 he was an invited speaker for the section on geometry and topology at the International Congress of Mathematicians in Nice.
Technical contributions
Kobayashi's earliest work dealt with the geometry of connections on principal bundles. Many of these results, along with others, were later absorbed into Foundations of Differential Geometry.
As a consequence of the Gauss–Codazzi equations and the commutation formulas for covariant derivatives, James Simons discovered a formula for the Laplacian of the second fundamental form of a submanifold of a Riemannian manifold. As a consequence, one can find a formula for the Laplacian of the norm-squared of the second fundamental form. This "Simons formula" simplifies significantly when the mean curvature of the submanifold is zero and when the Riemannian manifold has constant curvature. In this setting, Shiing-Shen Chern, Manfredo do Carmo, and Kobayashi studied the algebraic structure
|
https://en.wikipedia.org/wiki/25%20%28number%29
|
25 (twenty-five) is the natural number following 24 and preceding 26.
In mathematics
It is a square number, being 5² = 5 × 5, and hence the third non-unitary square prime of the form p².
It is one of the two two-digit numbers whose square and higher powers end in the same last two digits as the number itself, e.g., 25² = 625; the other is 76.
Twenty-five has an aliquot sum of 6, the first even perfect number, so the aliquot sequence of 25 (25, 6, 6, 6, ...) does not end in 1 and 0.
It is the smallest square that is also a sum of two (non-zero) squares: 25 = 3² + 4². Hence, it often appears in illustrations of the Pythagorean theorem.
25 is the sum of the five consecutive single-digit odd natural numbers 1, 3, 5, 7, and 9.
25 is a centered octagonal number, a centered square number, a centered octahedral number, and an automorphic number.
25 percent (%) is equal to 1/4.
It is the smallest decimal Friedman number, as it can be expressed using its own digits: 5².
It is also a Cullen number and a vertically symmetrical number. 25 is the smallest pseudoprime satisfying the congruence 7ⁿ ≡ 7 (mod n).
25 is the smallest aspiring number — a composite non-sociable number whose aliquot sequence does not terminate.
According to the Shapiro inequality, 25 is the smallest odd integer n such that there exist x₁, x₂, ..., xₙ with
x₁/(x₂ + x₃) + x₂/(x₃ + x₄) + ... + xₙ/(x₁ + x₂) < n/2,
where xₙ₊₁ = x₁ and xₙ₊₂ = x₂.
Within decimal, one can readily test for divisibility by 25 by seeing if the last two digits of the number match 00, 25, 50, or 75.
There are 25 primes under 100.
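The last-two-digits divisibility test mentioned above amounts to checking the number modulo 100; a minimal sketch:

```python
def divisible_by_25(n: int) -> bool:
    # In base 10, n is divisible by 25 exactly when its last two digits
    # are 00, 25, 50, or 75 — i.e. when n mod 100 is a multiple of 25.
    return n % 100 in (0, 25, 50, 75)

print([n for n in range(100, 200) if divisible_by_25(n)])  # → [100, 125, 150, 175]
```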
In science
The Standard Model of physics features a total of 25 elementary particles: 12 fermions (made of 6 quarks and 6 leptons) and 13 bosons (made of 12 gauge bosons and 1 scalar boson).
The atomic number of manganese.
The average percentage DNA overlap of an individual with their half-sibling, grandparent, grandchild, aunt, uncle, niece, nephew, identical twin cousin (offspring of identical twins), or double cousin.
In religion
In Ezekiel's vision of a new temp
|
https://en.wikipedia.org/wiki/Bar%20%28heraldry%29
|
In heraldry, a bar is an ordinary consisting of a horizontal band across the shield. If only one bar appears across the middle of the shield, it is termed a fess; if two or more appear, they can only be called bars. Calling the bar a diminutive of the fess is inaccurate, however, because two bars may each be no smaller than a fess. Like the fess, bars too may bear complex lines (such as embattled, indented, nebuly, etc.). The diminutive form of the bar (narrower than a bar yet wider than a cottise) is the barrulet, though these frequently appear in pairs, the pair termed a "bar gemel" rather than "two barrulets".
Common ordinaries
A single bar placed across the top of the field is called a chief. A single bar placed over the center of the field is called a fess. Two to four of these appearing on a shield are called bars, and more than four are called barrulets.
Diminutives
Thin bars are termed barrulets. A still thinner bar or riband is known as a cottise. Cottises never appear alone and have no direction of their own, but are borne on each side of an ordinary (such as a fess, pale, bend or chevron). The ordinary thus accompanied by a cottise on each side is then described as "cottised", or these may even be "doubly cottised" (i.e. surrounded by four cottises, two along each side).
The "closet" is described as a band of the thickness between a bar and a barrulet, but is rarely found.
A bar that has been "couped" (cut) at the ends so as not to reach the edges of the field is called a hamade, hamaide or hummet, after the town of La Hamaide in Hainaut, Belgium. As a charge, it is almost always depicted in threes. The adjective is hummety.
Barry and barruly
A field divided by many bars — often six, eight or ten parts with two alternating tinctures — is described as barry (of x, y and z, where x is the number of bars, y is the first (uppermost) tincture, and z is the second tincture). A field divided into five, seven or nine parts with two alternating tinctures is n
|
https://en.wikipedia.org/wiki/Korn%E2%80%93Kreer%E2%80%93Lenssen%20model
|
The Korn–Kreer–Lenssen model (KKL model) is a discrete trinomial model proposed in 1998 by Ralf Korn, Markus Kreer and Mark Lenssen to model illiquid securities and to value financial derivatives on them. It generalizes the binomial Cox–Ross–Rubinstein model in a natural way, as the stock in a given time interval can either rise one unit up, fall one unit down, or remain unchanged. In contrast to the Black–Scholes or Cox–Ross–Rubinstein model, the market consisting of stock and cash is not complete. To value and replicate a financial derivative, an additional traded security related to the original security needs to be added. This might be a Low Exercise Price Option (or short LEPO). The mathematical proof of arbitrage-free pricing is based on martingale representations for point processes pioneered in the 1980s and 1990s by Albert Shiryaev, Robert Liptser and Marc Yor.
The dynamics is based on continuous time linear birth–death processes and analytic formulae for option prices and Greeks can be stated. Later work looks at market completion with general calls or puts. A comprehensive introduction may be found in the attached MSc-thesis.
The model belongs to the class of trinomial models, and the difference from the standard trinomial tree is the following: if τ denotes the waiting time between two movements of the stock price, then in the KKL model τ remains finite and exponentially distributed, whereas in trinomial trees time is discrete and the continuous-time limit is taken by numerical extrapolation afterwards.
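The distinguishing feature, exponentially distributed waiting times between unit price moves, can be sketched as follows. Note this is a simplified illustration with hypothetical constant jump rates; the article describes the dynamics via continuous-time linear birth–death processes, in which the rates would depend on the current price level:

```python
import random

def kkl_style_path(s0, t_max, rate_up=5.0, rate_down=5.0, seed=1):
    """Sketch of KKL-type dynamics: the stock jumps one unit up or down,
    and the waiting time between jumps is exponentially distributed
    (the price simply remains unchanged between jumps)."""
    rng = random.Random(seed)
    t, s = 0.0, s0
    path = [(t, s)]
    total_rate = rate_up + rate_down
    while True:
        t += rng.expovariate(total_rate)   # waiting time ~ Exp(rate_up + rate_down)
        if t > t_max:
            break
        # given that a jump occurs, it is upward with probability rate_up/total_rate
        s += 1 if rng.random() < rate_up / total_rate else -1
        path.append((t, s))
    return path

path = kkl_style_path(100, t_max=1.0)
```

Unlike a trinomial tree, the number of moves in [0, t_max] is random here, which is exactly the finite, exponentially distributed waiting-time behaviour the text describes.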
See also
Binomial options pricing model
Trinomial tree
Valuation of options
Option: Model implementation
|
https://en.wikipedia.org/wiki/Paleobiota%20of%20the%20Burgess%20Shale
|
This is a list of the biota of the Burgess Shale, a Cambrian lagerstätte located in Yoho National Park in Canada.
The Burgess Shale is a fossil-bearing deposit exposed in the Canadian Rockies of British Columbia, Canada. It is famous for the exceptional preservation of the soft parts of its fossils. At 508 million years old (middle Cambrian), it is one of the earliest fossil beds containing soft-part imprints.
Arthropoda
Crown-group arthropods (euarthropods such as trilobites) and their stem-group relatives (such as radiodonts) are extremely diverse, and some species are abundant in the Burgess Shale. Along with their earlier-diverging cousins, the "lobopodians", they provide a great deal of information on early panarthropod evolution.
Lobopodians
Lobopodians, a grouping of worm-like panarthropods from which arthropods arose, were present in the shale.
Sponges
Sponges (Porifera) were extremely diverse in the shale, with many of them belonging to the class Demospongiae.
Comb-jellies (Ctenophora)
Ctenophores, also known as comb jellies, are rare in the shale, but three genera are known from the site.
Hemichordata
Many of the Hemichordates from the shale have either remained enigmatic, or were once classified under other groupings.
Annelida
A number of different Annelid worms are known from the shale.
Priapulida
Various stem-group priapulids are known from the Burgess Shale.
Mollusca
The molluscs of the Burgess shale are diverse in body shapes, the ecological niches they filled, and their enigmatic qualities.
Brachiopods
Many of the brachiopods from the site are members of the class Lingulata.
Cnidaria
A wide variety of cnidarians like scyphozoans and conulariids are known from this site.
Echinodermata
The echinoderms of the shale represent extinct groups distantly related to extant groups.
Chordata
Chordates are very rare in the shale, but the two that are known are possibly very important in the study of these creatures' early evolution.
Gnathif
|
https://en.wikipedia.org/wiki/Stochastic%20differential%20equation
|
A stochastic differential equation (SDE) is a differential equation in which one or more of the terms is a stochastic process, resulting in a solution which is also a stochastic process. SDEs have many applications throughout pure mathematics and are used to model various behaviours of stochastic models such as stock prices, random growth models or physical systems that are subjected to thermal fluctuations.
SDEs have a random differential that is in the most basic case random white noise calculated as the derivative of a Brownian motion or more generally a semimartingale. However, other types of random behaviour are possible, such as jump processes like Lévy processes or semimartingales with jumps. Random differential equations are conjugate to stochastic differential equations. Stochastic differential equations can also be extended to differential manifolds.
Background
Stochastic differential equations originated in the theory of Brownian motion, in the work of Albert Einstein and Marian Smoluchowski in 1905, although Louis Bachelier was the first person credited with modelling Brownian motion in 1900, giving a very early example of a stochastic differential equation, now known as the Bachelier model. Some of these early examples were linear stochastic differential equations, also called 'Langevin' equations after French physicist Paul Langevin, describing the motion of a harmonic oscillator subject to a random force.
The mathematical theory of stochastic differential equations was developed in the 1940s through the groundbreaking work of Japanese mathematician Kiyosi Itô, who introduced the concept of stochastic integral and initiated the study of nonlinear stochastic differential equations. Another approach was later proposed by Russian physicist Stratonovich, leading to a calculus similar to ordinary calculus.
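As a concrete illustration of simulating an SDE in the Itô sense (an example added here, not taken from the article), the Euler–Maruyama scheme discretizes dS = μS dt + σS dW for geometric Brownian motion by replacing dW with Gaussian increments of variance dt:

```python
import numpy as np

def euler_maruyama_gbm(s0, mu, sigma, t_max, n_steps, seed=0):
    """Simulate dS = mu*S dt + sigma*S dW with the Euler-Maruyama scheme."""
    rng = np.random.default_rng(seed)
    dt = t_max / n_steps
    s = np.empty(n_steps + 1)
    s[0] = s0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))          # Brownian increment ~ N(0, dt)
        s[i + 1] = s[i] + mu * s[i] * dt + sigma * s[i] * dw
    return s

path = euler_maruyama_gbm(s0=1.0, mu=0.05, sigma=0.2, t_max=1.0, n_steps=1000)
```

The scheme converges to the Itô solution as dt → 0; a Stratonovich interpretation of the same equation would require a correction term.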
Terminology
The most common form of SDEs in the literature is an ordinary differential equation with the right hand side perturbed by a term dependent on a whi
|
https://en.wikipedia.org/wiki/Tunnel%20transmitter
|
A tunnel transmitter allows wireless reception in tunnels. It consists of a receiving antenna which receives the signal to be radiated in the tunnel, and a transmitting antenna installed in the tunnel, which is either a Yagi antenna or a line antenna. In principle, a tunnel transmitter can work purely passively, in which case the received signal is passed over a cable to the antenna in the tunnel. Active systems, however, are more often used. In some cases the radio frequency inside the tunnel is different from the one used by the broadcaster. More often the program inside is transmitted on the same frequency as outside, in which case the information signal should be demodulated or converted to an intermediate frequency in the outside receiver, and then modulated/shifted back in the transmitter. Otherwise feedback may occur.
Tunnel transmitters are used in Germany only for audio transmitters working in the FM range and for other radio services, such as mobile phone services, which also work in this frequency range.
Elsewhere, there are also tunnel transmitters for audio broadcast transmitters, which work in the long and medium wave bands. An example of this is in the Dartford Crossing tunnel near London, where the programme "Virgin 1215" in the medium-wave band and the BBC's Radio 4 on 198 kHz long wave are rebroadcast in the tunnel. Emergency information could also be relayed using this system, interrupting programming to relay warnings.
See also
Leaky feeder
Through the earth mine communications
|
https://en.wikipedia.org/wiki/Adventive%20plant
|
Adventive plants or adventitious plants are plants that have established themselves, through human influence, in a place that does not correspond to their area of origin; the term covers all wild species that became established only with the help of humans, in contrast to native species.
The term "adventive" is used to describe species that are not self-sustaining but need episodic population reinforcement from their homeland. If, however, an adventive species becomes self-sustaining in its new geographic area, it is then naturalized. The term hemerochory is sometimes used synonymously, but it is often restricted to species that were unintentionally brought into the area and then naturalized, and sometimes to species that have firmly established themselves in their new habitat.
Categorization
Depending on the question and perspective, adventitious plants are divided into different subcategories:
Classification according to establishment history
Archaeophytes were introduced before 1492
Neophytes were introduced or immigrated after 1492.
The year 1492 is a conventionally chosen reference point. With the "discovery" of America and the age of discovery and colonialism, alien species from other parts of the world came to new areas on a large scale. Most of the archaeophytes immigrated with the introduction of agriculture (in the Neolithic). The status of a species as an archaeophyte is usually deduced (from the location and ecology of the species) and is hardly directly detectable.
Classification according to the degree of establishment
Agriophytes: species that have invaded natural or near-natural vegetation and could survive there without human intervention.
Epecophytes: Species that are only naturalized in vegetation units shaped by humans, such as meadows, weed flora or ruderal vegetation, but are firmly naturalized here.
Ephemerophytes: Species that are only introduced inconsistently, that will die out of culture for a sh
|
https://en.wikipedia.org/wiki/Gathering%204%20Gardner
|
Gathering 4 Gardner (G4G) is an educational foundation and non-profit corporation (Gathering 4 Gardner, Inc.) devoted to preserving the legacy and spirit of prolific writer Martin Gardner. G4G organizes conferences where people who have been inspired by or have a strong personal connection to Martin Gardner can meet and celebrate his influence. These events explore ideas and developments in recreational mathematics, magic, illusion, puzzles, philosophy, and rationality, and foster creative work in all of these areas by enthusiasts of all ages. G4G also facilitates a related series of events called Celebration of Mind (CoM).
History
Martin Gardner's prolific output as a columnist and writer—he authored over 100 books between 1951 and 2010—put him in contact with a large number of people on a wide range of subjects from magic, mathematics, puzzles, physics, philosophy, logic and rationality, to G. K. Chesterton, Alice in Wonderland, and the Wizard of Oz. As a result, he had a large following of amateurs and professionals eager to pay tribute to him, but many of them had only infrequent contact with each other. Moreover, Gardner was famously shy, and generally declined to appear at any events honoring him.
In the early 1990s, Atlanta-based entrepreneur and puzzle collector Thomas M. Rodgers (1943–2012), a friend of Martin Gardner's, conceived a plan to create a gathering of people who shared Gardner's interests, especially puzzles, magic, and mathematics. Rodgers invited the world's foremost puzzle composers and collectors, and enlisted magician Mark Setteducati and mathematician Elwyn Berlekamp to recruit leading magicians and recreational mathematicians, respectively. Gardner agreed to attend. Thus was born the first Gathering (G4G1), held in Atlanta, GA, in January 1993. For the next two decades G4G was supported mainly by Rodgers with "seemingly unfettered access to his personal time and resources".
In 2007 board members Rodgers, Berlekamp, Setteducati, Thane
|
https://en.wikipedia.org/wiki/Command%20substitution
|
In computing, command substitution is a facility that allows a command to be run and its output to be pasted back on the command line as arguments to another command. Command substitution first appeared in the Bourne shell, introduced with Version 7 Unix in 1979, and has remained a characteristic of all later Unix shells. The feature has since been adopted in other programming languages as well, including Perl, PHP, Ruby and Microsoft's PowerShell under Windows. It also appears in Microsoft's CMD.EXE in the FOR command and the ( ) command.
Syntax and semantics
Shells typically implement command substitution by creating a child process to run the first command with its standard output piped back to the shell, which reads that output, parsing it into words separated by whitespace. Because the shell can't know it has all the output from the child until the pipe closes or the child dies, it waits until then before it starts another child process to run the second command.
This C shell example shows how one might search for all the C files containing the string malloc using fgrep and then edit any that are found using the vi editor. The syntactical notation shown here, ` ... `, using backquotes as delimiters, is the original style and is supported by all the common Unix shells.
#!/bin/csh
vi `fgrep -l malloc *.c`
Objections have been raised to both the syntax (how it is typed) and the semantics (how it works).
While easy to type, an important factor for an interactive command processor, the syntax has been criticized as awkward to nest, putting one command substitution inside another, because both the left and the right delimiters are the same. The KornShell (ksh) solved this with an alternative notation, $( ... ), borrowing from the notational style used for variable substitution. Today, most UNIX shells support this syntax. Microsoft's PowerShell also uses this notation, with the same semantics.
#!/bin/bash
vi $(fgrep -l malloc *.c)
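For comparison, the capture-and-split behaviour the shell performs can be mimicked in other languages. The following Python sketch (with hypothetical file names echoed in place of a real `fgrep` run) shows what happens to the captured output before it becomes arguments of the outer command:

```python
import subprocess

# Mimic shell command substitution: run a command, capture its standard
# output, split it into whitespace-separated words, and splice those words
# into another command's argument list (like: vi `fgrep -l malloc *.c`).
inner = subprocess.run(["echo", "util.c", "main.c"],
                       capture_output=True, text=True)
words = inner.stdout.split()        # shell-style word splitting
outer_argv = ["vi"] + words         # argument list for the outer command
print(outer_argv)                   # ['vi', 'util.c', 'main.c']
```

The explicit `split()` makes the whitespace-splitting semantics visible, which the shell performs implicitly.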
The semantics, breaking
|
https://en.wikipedia.org/wiki/LU%20reduction
|
LU reduction is an algorithm related to LU decomposition. This term is usually used in the context of super computing and highly parallel computing. In this context it is used as a benchmarking algorithm, i.e. to provide a comparative measurement of speed for different computers. LU reduction is a special parallelized version of an LU decomposition algorithm, an example can be found in (Guitart 2001). The parallelized version usually distributes the work for a matrix row to a single processor and synchronizes the result with the whole matrix (Escribano 2000).
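As a sketch only (not the cited parallel implementations), the sequential Doolittle-style LU decomposition below shows the per-row updates that an LU reduction benchmark distributes across processors and synchronizes after each pivot step:

```python
# Sequential Doolittle LU decomposition (no pivoting, for illustration).
# In the parallel "LU reduction" benchmark, each iteration of the inner
# row loop is farmed out to a processor and the results are synchronized
# with the whole matrix before the next pivot step.
def lu_decompose(a):
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):       # row i of U
            U[i][j] = a[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):   # this row loop is the parallel unit
            L[j][i] = (a[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu_decompose(A)
```

For this matrix, L·U reproduces A with L unit lower triangular and U upper triangular.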
Sources
J. Oliver, J. Guitart, E. Ayguadé, N. Navarro and J. Torres. Strategies for Efficient Exploitation of Loop-level Parallelism in Java. Concurrency and Computation: Practice and Experience(Java Grande 2000 Special Issue), Vol.13 (8-9), pp. 663–680. ISSN 1532-0634, July 2001, , last retrieved on Sept. 14 2007
J. Guitart, X. Martorell, J. Torres, and E. Ayguadé, Improving Java Multithreading Facilities: the Java Nanos Environment, Research Report UPC-DAC-2001-8, Computer Architecture Department, Technical University of Catalonia, March 2001, .
Arturo González-Escribano, Arjan J. C. van Gemund, Valentín Cardeñoso-Payo et al., Measuring the Performance Impact of SP-Restricted Programming in Shared-Memory Machines, In Vector and Parallel Processing — VECPAR 2000, Springer Verlag, pp. 128–141, , 2000,
Numerical linear algebra
Supercomputers
|
https://en.wikipedia.org/wiki/Decision%20field%20theory
|
Decision field theory (DFT) is a dynamic-cognitive approach to human decision making. It is a cognitive model that describes how people actually make decisions rather than a rational or normative theory that prescribes what people should or ought to do. It is also a dynamic model of decision making rather than a static model, because it describes how a person's preferences evolve across time until a decision is reached rather than assuming a fixed state of preference. The preference evolution process is mathematically represented as a stochastic process called a diffusion process. It is used to predict how humans make decisions under uncertainty, how decisions change under time pressure, and how choice context changes preferences. This model can be used to predict not only the choices that are made but also decision or response times.
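The accumulate-to-threshold idea can be illustrated with a toy random walk (this is a simplified stand-in, not DFT's exact equations; all parameter values are hypothetical):

```python
import random

# Toy preference-accumulation (diffusion) process: the preference state
# starts at 0, drifts each step, and a decision is made when the state
# crosses +theta (choose option A) or -theta (choose option B). The step
# count stands in for the response time.
def simulate_choice(drift=0.1, noise=1.0, theta=10.0, seed=42):
    rng = random.Random(seed)
    preference, steps = 0.0, 0
    while abs(preference) < theta:
        preference += drift + rng.gauss(0.0, noise)
        steps += 1
    return ("A" if preference > 0 else "B"), steps

choice, rt = simulate_choice()
```

Running many such walks with different seeds yields a joint distribution of choices and response times, which is the kind of prediction the model is fit against.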
The paper "Decision Field Theory" was published by Jerome R. Busemeyer and James T. Townsend in 1993. The DFT has been shown to account for many puzzling findings regarding human choice behavior including violations of stochastic dominance, violations of strong stochastic transitivity, violations of independence between alternatives, serial-position effects on preference, speed accuracy tradeoff effects, inverse relation between probability and decision time, changes in decisions under time pressure, as well as preference reversals between choices and prices. The DFT also offers a bridge to neuroscience. Recently, the authors of decision field theory also have begun exploring a new theoretical direction called Quantum Cognition.
Introduction
The name decision field theory was chosen to reflect the fact that the inspiration for this theory comes from an earlier approach – avoidance conflict model contained in Kurt Lewin's general psychological theory, which he called field theory. DFT is a member of a general class of sequential sampling models that are commonly used in a variety of fields in cognition.
The basic ideas underlying the
|
https://en.wikipedia.org/wiki/Clavipectoral%20fascia
|
The clavipectoral fascia (costocoracoid membrane; coracoclavicular fascia) is a strong fascia situated under cover of the clavicular portion of the pectoralis major.
It occupies the interval between the pectoralis minor and subclavius, and protects the axillary vein and artery, and axillary nerve.
Traced upward, it splits to enclose the subclavius, and its two layers are attached to the clavicle, one in front of and the other behind the muscle; the deep layer fuses with the deep cervical fascia and with the sheath of the axillary vessels.
Medially, it blends with the fascia covering the first two intercostal spaces, and is attached also to the first rib medial to the origin of the subclavius.
Laterally, it is very thick and dense, and is attached to the coracoid process.
The portion extending from the first rib to the coracoid process is often whiter and denser than the rest, and is sometimes called the costocoracoid membrane.
Below this it is thin, and at the upper border of the pectoralis minor it splits into two layers to invest the muscle; from the lower border of the pectoralis minor it is continued downward to join the axillary fascia, and lateralward to join the fascia over the short head of the biceps brachii.
The clavipectoral fascia is pierced by the cephalic vein, thoracoacromial artery and vein, lymphatics and lateral pectoral nerve.
See also
Suspensory ligament of axilla
|
https://en.wikipedia.org/wiki/Ventricle%20%28heart%29
|
A ventricle is one of two large chambers toward the bottom of the heart that collect and expel blood towards the peripheral beds within the body and lungs. The blood pumped by a ventricle is supplied by an atrium, an adjacent chamber in the upper heart that is smaller than a ventricle. Interventricular means between the ventricles (for example the interventricular septum), while intraventricular means within one ventricle (for example an intraventricular block).
In a four-chambered heart, such as that in humans, there are two ventricles that operate in a double circulatory system: the right ventricle pumps blood into the pulmonary circulation to the lungs, and the left ventricle pumps blood into the systemic circulation through the aorta.
Structure
Ventricles have thicker walls than atria and generate higher blood pressures. The physiological load on the ventricles, which must pump blood throughout the body and lungs, is much greater than that on the atria, which only need to fill the ventricles. Further, the left ventricle has thicker walls than the right because it needs to pump blood to most of the body while the right ventricle fills only the lungs.
On the inner walls of the ventricles are irregular muscular columns called trabeculae carneae, which cover all of the inner ventricular surfaces except that of the conus arteriosus in the right ventricle. There are three types of these muscles. The third type, the papillary muscles, give origin at their apices to the chordae tendineae, which attach to the cusps of the tricuspid valve and to the mitral valve.
The mass of the left ventricle, as estimated by magnetic resonance imaging, averages 143 g ± 38.4 g, with a range of 87–224 g.
The right ventricle is equal in size to the left ventricle and contains roughly 85 millilitres (3 imp fl oz; 3 US fl oz) in the adult. Its upper front surface is rounded and convex, and forms much of the sternocostal surface of the heart. Its under surface is flattened, forming pa
|
https://en.wikipedia.org/wiki/N-linked%20glycosylation
|
N-linked glycosylation is the attachment of an oligosaccharide (a carbohydrate consisting of several sugar molecules, sometimes also referred to as a glycan) to a nitrogen atom, typically the amide nitrogen of an asparagine (Asn) residue of a protein, in a process called N-glycosylation, studied in biochemistry. The attached oligosaccharide is called an N-linked glycan, or simply an N-glycan.
This type of linkage is important for both the structure and function of many eukaryotic proteins. The N-linked glycosylation process occurs in eukaryotes and widely in archaea, but very rarely in bacteria. The nature of N-linked glycans attached to a glycoprotein is determined by the protein and the cell in which it is expressed. It also varies across species. Different species synthesize different types of N-linked glycan.
Energetics of bond formation
There are two types of bonds involved in a glycoprotein: bonds between the saccharides residues in the glycan and the linkage between the glycan chain and the protein molecule.
The sugar moieties are linked to one another in the glycan chain via glycosidic bonds. These bonds are typically formed between carbons 1 and 4 of the sugar molecules. The formation of a glycosidic bond is energetically unfavourable; the reaction is therefore coupled to the hydrolysis of two ATP molecules.
On the other hand, the attachment of a glycan residue to a protein requires the recognition of a consensus sequence. N-linked glycans are almost always attached to the nitrogen atom of an asparagine (Asn) side chain that is present as a part of Asn–X–Ser/Thr consensus sequence, where X is any amino acid except proline (Pro).
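The consensus sequence lends itself to a simple sequence scan. The following Python sketch (the example protein string is made up) finds candidate Asn–X–Ser/Thr sequons while rejecting proline at the X position:

```python
import re

# Candidate N-glycosylation sequons: N, then any residue except proline (P),
# then S or T. A zero-width lookahead is used so that overlapping sequons
# are also reported.
def find_sequons(protein):
    return [m.start() for m in re.finditer(r"(?=N[^P][ST])", protein)]

print(find_sequons("MKNVSAANPTQNGT"))  # [2, 11]; the NPT at position 7 is rejected
```

Note that a sequon is only a candidate site; whether it is actually glycosylated depends on the protein and the cell, as described above.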
In animal cells, the glycan attached to the asparagine is almost inevitably N-acetylglucosamine (GlcNAc) in the β-configuration. This β-linkage is similar to glycosidic bond between the sugar moieties in the glycan structure as described above. Instead of being attached to a sugar hydroxyl group, the anomeric carbon atom is attached to an amide
|
https://en.wikipedia.org/wiki/Yuxiang
|
Yuxiang () is a famous seasoning mixture in Chinese cuisine, and also refers to the resulting sauce in which meat or vegetables are cooked. It is said to have originated in Sichuan cuisine, and has since spread to other regional Chinese cuisines.
Despite the term literally meaning "fish fragrance" in Chinese, yuxiang contains no seafood and is typically not added to seafood.
In addition to the basic mixture, yuxiang cooking almost always includes sugar, vinegar, doubanjiang, soy sauce, and pickled chili peppers.
Preparation
Proper preparation of the yuxiang seasoning includes finely minced pao la jiao (pickled chili), white scallion, ginger, and garlic. They are mixed in more-or-less equal portions, though some prefer to include more scallions than ginger and garlic. The mixture is then fried in oil until fragrant. Water, starch, sugar, and vinegar are then added to create a basic sauce.
Dishes
The sauce is used most often for dishes containing beef, pork, or chicken. It is sometimes used for vegetarian recipes. In fact, Barbara Tropp suggests in The Modern Art of Chinese Cooking that the characters can also be interpreted as meaning "Sichuan-Hunan" (渝湘) flavor. Dishes that use yuxiang as the main seasoning have the term affixed to their name. For instance:
Yúxiāngròusī (魚香肉絲): Pork strips stir-fried with yuxiang
Yúxiāngqiézi (魚香茄子): Braised eggplants with yuxiang
Yúxiāngniúnǎn (魚香牛腩): Beef brisket stewed with yuxiang
|
https://en.wikipedia.org/wiki/Kanamori%E2%80%93McAloon%20theorem
|
In mathematical logic, the Kanamori–McAloon theorem, due to Akihiro Kanamori and Kenneth McAloon, gives an example of an incompleteness in Peano arithmetic, similar to that of the Paris–Harrington theorem. They showed that a certain finitistic theorem in Ramsey theory is not provable in Peano arithmetic (PA).
Statement
Given a set s of non-negative integers, let min(s) denote the minimum element of s. Let [X]^n denote the set of all n-element subsets of X.
A function f: [X]^n → N, where X ⊆ N, is said to be regressive if f(s) < min(s) for all s not containing 0.
The Kanamori–McAloon theorem states that the following proposition is not provable in PA:
For every n, k ∈ N, there exists an m ∈ N such that for all regressive f: [m]^n → N, there exists a set H ⊆ m with |H| ≥ k such that for all s, t ∈ [H]^n with min(s) = min(t), we have f(s) = f(t).
See also
Paris–Harrington theorem
Goodstein's theorem
Kruskal's tree theorem
|
https://en.wikipedia.org/wiki/Ryu%E2%80%93Takayanagi%20conjecture
|
The Ryu–Takayanagi conjecture is a conjecture within holography that posits a quantitative relationship between the entanglement entropy of a conformal field theory and the geometry of an associated anti-de Sitter spacetime. The formula characterizes "holographic screens" in the bulk; that is, it specifies which regions of the bulk geometry are "responsible to particular information in the dual CFT". The conjecture is named after Shinsei Ryu and Tadashi Takayanagi, who jointly published the result in 2006. As a result, the authors were awarded the 2015 New Horizons in Physics Prize for "fundamental ideas about entropy in quantum field theory and quantum gravity". The formula was generalized to a covariant form in 2007.
Motivation
The thermodynamics of black holes suggests certain relationships between the entropy of black holes and their geometry. Specifically, the Bekenstein–Hawking area formula conjectures that the entropy of a black hole is proportional to its surface area:

S_BH = A / (4G)

(in natural units with ħ = c = k_B = 1, where A is the area of the event horizon).
The Bekenstein–Hawking entropy is a measure of the information lost to external observers due to the presence of the horizon. The horizon of the black hole acts as a "screen" distinguishing one region of the spacetime (in this case the exterior of the black hole) that is not affected by another region (in this case the interior). The Bekenstein–Hawking area law states that the area of this surface is proportional to the entropy of the information lost behind it.
The Bekenstein–Hawking entropy is a statement about the gravitational entropy of a system; however, there is another type of entropy that is important in quantum information theory, namely the entanglement (or von Neumann) entropy. This form of entropy provides a measure of how far from a pure state a given quantum state is, or, equivalently, how entangled it is. The entanglement entropy is a useful concept in many areas, such as in condensed matter physics and quantum many-body systems. Given its use, and its suggestive similarity to the Bekenstein–Hawking e
|
https://en.wikipedia.org/wiki/Cepstrum
|
In Fourier analysis, the cepstrum (plural cepstra, adjective cepstral) is the result of computing the inverse Fourier transform (IFT) of the logarithm of the estimated signal spectrum. The method is a tool for investigating periodic structures in frequency spectra. The power cepstrum has applications in the analysis of human speech.
The term cepstrum was derived by reversing the first four letters of spectrum. Operations on cepstra are labelled quefrency analysis (or quefrency alanysis), liftering, or cepstral analysis. It may be pronounced in two ways, the second having the advantage of avoiding confusion with kepstrum.
Origin
The concept of the cepstrum was introduced in 1963 by B. P. Bogert, M. J. Healy, and J. W. Tukey. It serves as a tool to investigate periodic structures in frequency spectra. Such effects are related to noticeable echoes or reflections in the signal, or to the occurrence of harmonic frequencies (partials, overtones). Mathematically, it deals with the problem of deconvolution of signals in the frequency space.
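The definition (inverse transform of the log magnitude spectrum) translates directly into code. A minimal stdlib-only Python sketch follows, using a naive O(N²) DFT purely for illustration:

```python
import cmath, math

def dft(x):
    # Naive discrete Fourier transform (O(N^2), illustration only).
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(xs):
    # Inverse DFT with the 1/N normalization.
    n = len(xs)
    return [sum(xs[k] * cmath.exp(2j * math.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

def real_cepstrum(x):
    # Inverse transform of the log magnitude spectrum.
    spectrum = dft(x)
    log_mag = [math.log(abs(v) + 1e-12) for v in spectrum]  # floor avoids log(0)
    return [c.real for c in idft(log_mag)]

signal = [math.sin(2 * math.pi * 3 * j / 16) for j in range(16)]
ceps = real_cepstrum(signal)
```

In practice one would use an FFT library; the small floor inside the logarithm is a common guard against spectral bins that are exactly zero.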
|
https://en.wikipedia.org/wiki/In%20Pursuit%20of%20the%20Unknown
|
In Pursuit of the Unknown: 17 Equations That Changed the World is a 2012 nonfiction book by British mathematician Ian Stewart, published by Basic Books. In the book, Stewart traces the role of mathematics in human history, beginning with the Pythagorean theorem (Pythagorean equation) and ending with the equation that transformed the twenty-first-century financial market, the Black–Scholes model.
Content
Seventeen equations are described in the book as follows:
Pythagorean equation: a² + b² = c²
Logarithm product identity: log xy = log x + log y
Derivative: df/dt = lim h→0 (f(t + h) − f(t))/h
Newton's law of universal gravitation: F = G m₁m₂/d²
Imaginary unit: i² = −1
Euler's polyhedron formula: V − E + F = 2
Normal distribution: Φ(x) = (1/√(2πσ²)) e^(−(x−μ)²/(2σ²))
Wave equation in one space dimension: ∂²u/∂t² = c² ∂²u/∂x²
Fourier transform: f̂(ξ) = ∫ f(x) e^(−2πixξ) dx
Navier–Stokes momentum equation: ρ(∂v/∂t + v·∇v) = −∇p + ∇·T + f
Maxwell's equations: ∇·E = 0, ∇×E = −(1/c) ∂H/∂t, ∇·H = 0, ∇×H = (1/c) ∂E/∂t
Entropy and the second law of thermodynamics: dS ≥ 0
Mass–energy equivalence: E = mc²
Time-dependent Schrödinger equation: iħ ∂ψ/∂t = Ĥψ
Entropy in information theory: H = −Σ p(x) log p(x)
Logistic map: x(t+1) = k x(t) (1 − x(t))
Black–Scholes equation: ½σ²S² ∂²V/∂S² + rS ∂V/∂S + ∂V/∂t − rV = 0
Reviews
Kirkus Reviews said that the book provided "clear, cogent explanations of how the equations work without burdening the reader with cumbersome derivations."
The Maclean's magazine review described the book as a "history of the human species told in equations" by the "finest living math popularizer".
The New York Book Review said that Stewart was a "genius in the way he conveys his excitement and sense of wonder across" who has a "valuable grasp" of "what it takes to make equations interesting" and "to make science cool."
The Physics Today journal review said that Stewart writes with "easy prose, which never fails to both educate and entertain."
Business Insider described the book as an "excellent and deeply researched book."
The Association for Computing Machinery's SIGACT News review called the book, the "latest spell" by the "master storyteller", the "Honorary Wizard of the Unseen University"—the "master storyteller"—who is able to "entertain" the reader with Greek symbols. The reviewer said Stewart focused on how eq
|
https://en.wikipedia.org/wiki/Bridge%20tap
|
Bridged tap or bridge tap is a long-used method of cabling for telephone lines. One cable pair (of wires) will "appear" in several different terminal locations (poles or pedestals). This allows the telephone company to use or "assign" that pair to any subscriber near those terminal locations. Once that customer disconnects, that pair becomes usable at any of the terminals. In the days of party lines, 2, 4, 6, or 8 users were commonly connected on the same pair which appeared at several different locations.
A bridge tap has no hybrid coil or other impedance matching components, just a “T” (or branch) in the cable. Thus the bridge presents an impedance mismatch. The unused branch of the T is usually left with no device connected to its end, thus has no electrical termination. Both the tap and its unterminated branch cause unwanted signal reflections, also called echoes.
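The strength of the echo from such a discontinuity can be quantified with the standard reflection coefficient; the sketch below uses an assumed (typical, not authoritative) line impedance of 100 ohms:

```python
# Reflection coefficient at an impedance discontinuity:
#   gamma = (Z_load - Z_0) / (Z_load + Z_0)
# An unterminated (open) branch behaves like Z_load -> infinity, so
# gamma -> 1 and nearly the full signal is reflected back as an echo.
def reflection_coefficient(z_load, z0=100.0):  # 100 ohm: assumed twisted pair
    return (z_load - z0) / (z_load + z0)

print(reflection_coefficient(100.0))  # matched termination: 0.0, no echo
print(reflection_coefficient(1e9))    # open stub: ~1.0, near-total reflection
```

This is why the unterminated branch of the tap, rather than the tap itself, is the dominant source of echoes.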
Digital subscriber lines (DSL) can be affected by a bridged tap, depending on where the tap is bridged. DSL signals reflect from the discontinuities, sending the signal back through the cable pair, much like a tennis ball against a brick wall. The echoed signal is now out of phase and mixed with the original, creating, among other impairments, attenuation distortion. The modem receives both signals, gets confused and "takes errors" or cannot sync. If the bridged tap is long, the signal bounces back only in very attenuated form. Therefore, the modem will ignore the weaker signal and show no problem.
A line with bridged taps can perform much worse because of the added attenuation and the extra cable length.
Bridged taps do not add latency, but they degrade the overall performance of the line, and in particular its usable bandwidth.
A bridge tap can also be referred to as a "multiple" or a telephone pair "in multiple".
|
https://en.wikipedia.org/wiki/Sunway%20TaihuLight
|
The Sunway TaihuLight (Shénwēi·tàihú zhī guāng) is a Chinese supercomputer which is ranked fourth in the TOP500 list, with a LINPACK benchmark rating of 93 petaflops. The name is translated as divine power, the light of Taihu Lake. This is nearly three times as fast as the previous record holder, Tianhe-2, which ran at 34 petaflops. It is ranked as the 16th most energy-efficient supercomputer in the Green500, with an efficiency of 6.1 GFlops/watt. It was designed by the National Research Center of Parallel Computer Engineering & Technology (NRCPC) and is located at the National Supercomputing Center in Wuxi, in Jiangsu province, China.
The Sunway TaihuLight was the world's fastest supercomputer for two years, from June 2016 to June 2018, according to the TOP500 lists. The record was surpassed in June 2018 by IBM's Summit.
Architecture
The Sunway TaihuLight uses a total of 40,960 Chinese-designed SW26010 manycore 64-bit RISC processors based on the Sunway architecture. Each processor chip contains 256 processing cores, and an additional four auxiliary cores for system management (also RISC cores, just more fully featured) for a total of 10,649,600 CPU cores across the entire system.
The processing cores feature 64 KB of scratchpad memory for data (and 16 KB for instructions) and communicate via a network on a chip, instead of having a traditional cache hierarchy.
Software
The system runs on its own operating system, Sunway RaiseOS 2.0.5, which is based on Linux. The system has its own customized implementation of OpenACC 2.0 to aid the parallelization of code.
Future development
China's first exascale supercomputer was scheduled to enter service by 2020, according to the head of the school of computing at the National University of Defense Technology (NUDT). According to the national plan for the next generation of high-performance computers, the country would develop an exascale computer during the 13th Five-Year-Plan period (2016–2020). The g
|
https://en.wikipedia.org/wiki/Humbert%20polynomials
|
In mathematics, the Humbert polynomials π^λ_{n,m}(x) are a generalization of the Pincherle polynomials, introduced by Pierre Humbert in 1921 and given by the generating function

(1 − mxt + t^m)^(−λ) = Σ_{n≥0} π^λ_{n,m}(x) t^n.
See also
Umbral calculus
|
https://en.wikipedia.org/wiki/RSSSF
|
The Rec.Sport.Soccer Statistics Foundation (RSSSF) is an international organization dedicated to collecting statistics about association football. The foundation aims to build an exhaustive archive of football-related information from around the world.
History
This enterprise, according to its founders, was created in January 1994 by three regulars of the Rec.Sport.Soccer (RSS) Usenet newsgroup: Lars Aarhus, Kent Hedlundh, and Karel Stokkermans. It was originally known as the "North European Rec.Sport.Soccer Statistics Foundation", but the geographical reference was dropped as its membership from other regions grew.
The RSSSF has members and contributors from around the world and has spawned eight spin-off projects, each following the leagues of its home country more closely. The spin-off projects are dedicated to Albania, Brazil, Denmark, Norway, Romania, Uruguay, Venezuela, and Egypt. In November 2002, the Polish service 90minut.pl became the official branch of RSSSF Poland.
Reception
RSSSF's database has been described as the "very best" for football data.
Rec.Sport.Soccer Player of the Year
Beginning in 1992, a vote for the Best Footballer in the World was held yearly among the readers of the rec.sport.soccer newsgroup until 2005, when it was discontinued. The voting worked as follows: each voter chose five players, at most two of the same nationality, in ranked order; these received five down to one points. The nationality restriction was dropped for the 2003 vote, in which voting was restricted to 173 pre-selected players.
Wins by player
Wins by country
Wins by club
|
https://en.wikipedia.org/wiki/Salt%20equivalent
|
Salt equivalent is usually quoted on food nutrition information tables on food labels, and is a different way of defining sodium intake, noting that salt is chemically sodium chloride.
To convert from sodium to the approximate salt equivalent, multiply the sodium content by 2.5:

salt ≈ 2.5 × sodium

The factor is the ratio of the molar mass of sodium chloride (about 58.44 g/mol) to that of sodium (about 22.99 g/mol), roughly 2.54 (see: atomic mass and molecular mass).
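A small Python sketch of the conversion, with the exact molar-mass ratio alongside the conventional rounded factor:

```python
# Salt (NaCl) equivalent from sodium content, as quoted on food labels.
NACL_MOLAR_MASS = 58.44   # g/mol
NA_MOLAR_MASS = 22.99     # g/mol
EXACT_FACTOR = NACL_MOLAR_MASS / NA_MOLAR_MASS  # ~2.54

def salt_equivalent_g(sodium_g):
    return sodium_g * 2.5   # conventional rounded labelling factor

print(round(salt_equivalent_g(0.4), 2))  # 0.4 g sodium -> 1.0 g salt
```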
Sources
British Nutrition Foundation article on salt
Further reading
Nutrition
Equivalent units
|
https://en.wikipedia.org/wiki/Linear%E2%80%93quadratic%E2%80%93Gaussian%20control
|
In control theory, the linear–quadratic–Gaussian (LQG) control problem is one of the most fundamental optimal control problems, and it can also be operated repeatedly for model predictive control. It concerns linear systems driven by additive white Gaussian noise. The problem is to determine an output feedback law that is optimal in the sense of minimizing the expected value of a quadratic cost criterion. Output measurements are assumed to be corrupted by Gaussian noise and the initial state, likewise, is assumed to be a Gaussian random vector.
Under these assumptions an optimal control scheme within the class of linear control laws can be derived by a completion-of-squares argument. This control law which is known as the LQG controller, is unique and it is simply a combination of a Kalman filter (a linear–quadratic state estimator (LQE)) together with a linear–quadratic regulator (LQR). The separation principle states that the state estimator and the state feedback can be designed independently. LQG control applies to both linear time-invariant systems as well as linear time-varying systems, and constitutes a linear dynamic feedback control law that is easily computed and implemented: the LQG controller itself is a dynamic system like the system it controls. Both systems have the same state dimension.
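As a sketch of the LQR half of this combination only (scalar, discrete-time, with made-up numbers, not a full LQG controller), the steady-state feedback gain can be found by iterating the Riccati recursion to a fixed point:

```python
# Scalar discrete-time LQR for x[k+1] = a*x[k] + b*u[k], minimizing
# sum(q*x^2 + r*u^2). Iterate the Riccati recursion until it converges;
# the optimal feedback is then u = -gain * x.
def lqr_gain(a, b, q, r, iters=1000):
    p = q
    for _ in range(iters):
        k = (b * p * a) / (r + b * p * b)
        p = q + a * p * (a - b * k)
    return (b * p * a) / (r + b * p * b)

gain = lqr_gain(a=1.0, b=1.0, q=1.0, r=1.0)
print(round(gain, 4))  # ~0.618 for these values
```

In the full LQG controller the true state x is replaced by the Kalman filter's estimate, which the separation principle justifies designing independently.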
A deeper statement of the separation principle is that the LQG controller is still optimal in a wider class of possibly nonlinear controllers. That is, utilizing a nonlinear control scheme will not improve the expected value of the cost function. This version of the separation principle is a special case of the separation principle of stochastic control which states that even when the process and output noise sources are possibly non-Gaussian martingales, as long as the system dynamics are linear, the optimal control separates into an optimal state estimator (which may no longer be a Kalman filter) and an LQR regulator.
In the classical LQG setting, implementation
|
https://en.wikipedia.org/wiki/Dark%20Energy%20Survey
|
The Dark Energy Survey (DES) is an astronomical survey designed to constrain the properties of dark energy. It uses images taken in the near-ultraviolet, visible, and near-infrared to measure the expansion of the universe using Type Ia supernovae, baryon acoustic oscillations, the number of galaxy clusters, and weak gravitational lensing. The collaboration is composed of research institutions and universities from the United States, Australia, Brazil, the United Kingdom, Germany, Spain, and Switzerland. The collaboration is divided into several scientific working groups. The director of DES is Josh Frieman.
The DES began by developing and building Dark Energy Camera (DECam), an instrument designed specifically for the survey. This camera has a wide field of view and high sensitivity, particularly in the red part of the visible spectrum and in the near infrared. Observations were performed with DECam mounted on the 4-meter Víctor M. Blanco Telescope, located at the Cerro Tololo Inter-American Observatory (CTIO) in Chile. Observing sessions ran from 2013 to 2019; the DES collaboration has published results from the first three years of the survey.
DECam
DECam, short for the Dark Energy Camera, is a large camera built to replace the previous prime focus camera on the Victor M. Blanco Telescope. The camera consists of three major components: mechanics, optics, and CCDs.
Mechanics
The mechanics of the camera consists of a filter changer with an 8-filter capacity and shutter. There is also an optical barrel that supports 5 corrector lenses, the largest of which is 98 cm in diameter. These components are attached to the CCD focal plane which is cooled to with liquid nitrogen in order to reduce thermal noise in the CCDs. The focal plane is also kept in an extremely low vacuum of to prevent the formation of condensation on the sensors. The entire camera with lenses, filters, and CCDs weighs approximately 4 tons. When mounted at the prime focus it was supported with
|
https://en.wikipedia.org/wiki/Cauchy%27s%20theorem%20%28group%20theory%29
|
In mathematics, specifically group theory, Cauchy's theorem states that if G is a finite group and p is a prime number dividing the order of G (the number of elements in G), then G contains an element of order p. That is, there is x in G such that p is the smallest positive integer with x^p = e, where e is the identity element of G. It is named after Augustin-Louis Cauchy, who discovered it in 1845.
The theorem is related to Lagrange's theorem, which states that the order of any subgroup of a finite group G divides the order of G. Cauchy's theorem implies that for any prime divisor p of the order of G, there is a subgroup of G whose order is p, namely the cyclic group generated by the element in Cauchy's theorem.
Cauchy's theorem is generalized by Sylow's first theorem, which implies that if p^n is the maximal power of p dividing the order of G, then G has a subgroup of order p^n (and using the fact that a p-group is solvable, one can show that G has subgroups of order p^r for any r less than or equal to n).
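The theorem can be checked by brute force on a small group. This Python sketch computes all element orders in the symmetric group S_3 (order 6, prime divisors 2 and 3):

```python
from itertools import permutations

def compose(p, q):
    # Composition of permutations given as tuples: (p o q)(i) = p[q[i]].
    return tuple(p[i] for i in q)

def element_order(p):
    # Smallest k >= 1 with p^k equal to the identity permutation.
    identity = tuple(range(len(p)))
    power, k = p, 1
    while power != identity:
        power = compose(power, p)
        k += 1
    return k

group = list(permutations(range(3)))          # S_3 has 3! = 6 elements
orders = {element_order(g) for g in group}
print(orders)  # {1, 2, 3}: elements of order 2 and 3 exist, as Cauchy predicts
```

The transpositions supply the elements of order 2 and the 3-cycles those of order 3.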
Statement and proof
Many texts prove the theorem with the use of strong induction and the class equation, though considerably less machinery is required to prove the theorem in the abelian case. One can also invoke group actions for the proof.
Proof 1
We first prove the special case where G is abelian, and then the general case; both proofs are by induction on n = |G|, and have as starting case n = p, which is trivial because any non-identity element now has order p. Suppose first that G is abelian. Take any non-identity element a, and let H be the cyclic group it generates. If p divides |H|, then a^(|H|/p) is an element of order p. If p does not divide |H|, then it divides the order [G:H] of the quotient group G/H, which therefore contains an element of order p by the inductive hypothesis. That element is a class xH for some x in G, and if m is the order of x in G, then x^m = e in G gives (xH)^m = H in G/H, so p divides m; as before, x^(m/p) is now an element of order p in G, completing the proof for the abelian case.
In the general case, let be th
|
https://en.wikipedia.org/wiki/Comet%20Lake
|
Comet Lake is Intel's codename for its 10th generation Core processors. They are manufactured using Intel's third 14 nm Skylake process revision, succeeding the Whiskey Lake U-series mobile processor and Coffee Lake desktop processor families. Intel announced low-power mobile Comet Lake-U CPUs on August 21, 2019, H-series mobile CPUs on April 2, 2020, desktop Comet Lake-S CPUs on April 30, 2020, and Xeon W-1200 series workstation CPUs on May 13, 2020. Comet Lake processors and Ice Lake 10 nm processors are together branded as the Intel "10th Generation Core" family. Intel officially launched Comet Lake-Refresh CPUs on the same day as the 11th Gen Core Rocket Lake launch. The low-power mobile Comet Lake-U Core and Celeron 5205U CPUs were discontinued on July 7, 2021.
Generational changes
All Comet Lake CPUs feature an updated Platform Controller Hub with CNVio2 controller with Wi-Fi 6 and external AX201 CRF module support.
Comet Lake-S compared to Coffee Lake-S Refresh
Up to ten CPU cores
Hyperthreading on all models, except for Celeron
Single core turbo boost up to 5.3 GHz (300 MHz higher); all-core turbo boost up to 4.9 GHz; Thermal Velocity Boost for Core i9; Turbo Boost Max 3.0 support for Core i7, i9
DDR4-2933 memory support for Core i7 and i9; DDR4-2666 for Core i3, Core i5, Pentium Gold, Celeron
400-series chipset based on the LGA 1200 socket
Comet Lake-H compared to Coffee Lake-H Refresh
Higher turbo frequencies by up to 300 MHz
DDR4-2933 memory support
Thermal Velocity Boost for Core i7 and i9
Comet Lake-U compared to Whiskey Lake-U
Up to six CPU cores
Higher turbo frequencies by up to 300 MHz
DDR4-2666 and LPDDR3-2133 memory support
One notable architectural change of Comet Lake from its predecessors is removal of TSX instruction set extensions.
Entry-level CPUs like the i3 series no longer support ECC memory.
List of 10th generation Comet Lake processors
Desktop
Comet Lake-S
Pentium and Celeron CPUs lack AVX and AVX2 support.
Comet Lake-W
|
https://en.wikipedia.org/wiki/Baidu%20Yi
|
Baidu Yi (, 易 Yì meaning "exchange" or "easy"), also known as "Baidu Yun", was an operating system for mobile devices until Baidu suspended it. It is based on Google's Android but is a fork by Baidu, the dominant search engine operator in China. It was announced on 2 September 2011 at the 2011 Baidu Technology Innovation Conference in Beijing.
Background
At the 2011 Baidu Technology Innovation Conference, Baidu launched its first mobile terminal software platform, Baidu Yi. It integrated Baidu's intelligent search, cloud services, and various Baidu apps. Through collaboration with terminal manufacturers, operators, mobile Internet service providers, and other parties up and down the industry chain, it aimed to provide users with a convenient, rich, and personalized mobile Internet experience. Baidu Yi was developed especially for domestic Chinese smartphones, built on top of Android but replacing much of the original Google software with Baidu's own alternatives: Ting Music, Baidu Maps instead of Google Maps, the Baidu Yue e-reader, and Baidu Search in place of Google Search. In March 2015, Baidu officially stated that Baidu Yun was suspended.
Features
It had Baidu applications, replacing Google's for many core functions, such as the search engine, instant messenger, ebook reader, and app store. Baidu was expected to provide 180 gigabytes of cloud storage for users.
Functions
SmartBox Search
Quick Search: enables searching within seconds of booting, directing users to the targeted content in a short time.
Voice Search: enables searching while speaking.
Move Search: enables searching while reading content.
Smart Semantic Analysis: accurately identifies users' needs.
Local+Cloud Data: precisely displays search results.
Cloud Services
Initially 180 GB Storage: Can be upgraded to unlimited storage space.
Support Local Data Sync and Backup: Support Local documents, pictures, music, and videos synchronization and
|
https://en.wikipedia.org/wiki/Rendezvous%20hashing
|
Rendezvous or highest random weight (HRW) hashing is an algorithm that allows clients to achieve distributed agreement on a set of k options out of a possible set of n options. A typical application is when clients need to agree on which sites (or proxies) objects are assigned to.
Consistent hashing addresses the special case k = 1, using a different method. Rendezvous hashing is both much simpler and more general than consistent hashing (see below).
History
Rendezvous hashing was invented by David Thaler and Chinya Ravishankar at the University of Michigan in 1996. Consistent hashing appeared a year later in the literature.
Given its simplicity and generality, rendezvous hashing is now preferred to consistent hashing in many real-world applications. Rendezvous hashing was used very early on in applications including mobile caching, router design, secure key establishment, and sharding in distributed databases. Other examples of real-world systems that use rendezvous hashing include the GitHub load balancer, the Apache Ignite distributed database, the Tahoe-LAFS file store, the CoBlitz large-file distribution service, Apache Druid, IBM's Cloud Object Store, the Arvados Data Management System, Apache Kafka, and the Twitter EventBus pub/sub platform.
One of the first applications of rendezvous hashing was to enable multicast clients on the Internet (in contexts such as the MBONE) to identify multicast rendezvous points in a distributed fashion. It was used in 1998 by Microsoft's Cache Array Routing Protocol (CARP) for distributed cache coordination and routing. Some Protocol Independent Multicast routing protocols use rendezvous hashing to pick a rendezvous point.
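The weight-based site selection at the heart of rendezvous hashing can be sketched in a few lines; the helper name and the choice of SHA-256 are illustrative assumptions, not a prescribed implementation:

```python
import hashlib

def hrw_assign(obj, sites, k=1):
    """Rendezvous (HRW) hashing: every client hashes (obj, site) for each
    candidate site and picks the k sites with the highest weights, so all
    clients agree on the assignment without any coordination."""
    def weight(site):
        return hashlib.sha256(f"{obj}|{site}".encode()).digest()
    return sorted(sites, key=weight, reverse=True)[:k]
```

A useful property follows directly from the construction: removing a site that was not among the chosen k leaves the relative ordering, and hence the assignment, unchanged.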
Problem Definition and Approach
Algorithm
Rendezvous hashing solves a general version of the distributed hash table problem: We are given a set of n sites (servers or proxies, say). How can any set of clients, given an object O, agree on a k-subset of sites to assign to O? The standard version o
|
https://en.wikipedia.org/wiki/Mireille%20Broucke
|
Mireille Esther Broucke is a professor of electrical and computer engineering at the University of Toronto, interested in control theory, mathematical systems theory, and swarm robotics.
Broucke did her undergraduate studies at the University of Texas at Austin, where her father Roger A. Broucke, an immigrant from Belgium, was a professor of aerospace engineering and engineering mechanics. She graduated in 1984, with a bachelor's degree in electrical engineering. She went on to graduate study in electrical engineering and computer science at the University of California, Berkeley, earning a master's degree in 1987,
with summer jobs working on missile tracking software for Texas Instruments, General Dynamics, and Lockheed Corporation.
After completing her master's degree, she stayed in the San Francisco Bay Area, working on software for control systems and simulation at Intergraph and Integrated Systems. She returned to Berkeley for a Ph.D., which she completed in 2000. Her dissertation, supervised by Alberto Sangiovanni-Vincentelli, was Qualitative Analysis, Model Checking, and Controller Synthesis of Hybrid Systems.
After postdoctoral studies at Berkeley she joined the Toronto faculty in 2001.
|
https://en.wikipedia.org/wiki/PSR%20B1919%2B21
|
PSR B1919+21 is a pulsar with a period of 1.3373 seconds and a pulse width of 0.04 seconds. Discovered by Jocelyn Bell Burnell on 28 November 1967, it is the first discovered radio pulsar. The power and regularity of the signals were briefly thought to resemble an extraterrestrial beacon, leading the source to be nicknamed LGM, later LGM-1 (for "little green men").
The original designation of this pulsar was CP 1919, which stands for Cambridge Pulsar at RA . It is also known as PSR J1921+2153 and is located in the constellation of Vulpecula.
Discovery
In 1967, a radio signal was detected using the Interplanetary Scintillation Array of the Mullard Radio Astronomy Observatory in Cambridge, UK, by Jocelyn Bell Burnell. The signal had a 1.3373-second period and a 0.04-second pulsewidth. It originated at celestial coordinates 19h 19m right ascension, +21° declination. It was detected by individual observation of miles of graphical data traces. Due to its almost perfect regularity, it was at first assumed to be spurious noise, but this hypothesis was promptly discarded. The discoverers jokingly named it little green men 1 (LGM-1), considering that it may have originated from an extraterrestrial civilization, but Bell Burnell soon ruled out extraterrestrial life as a source after discovering a similar signal from another part of the sky.
The original signal turned out to be radio emissions from the pulsar CP 1919, and was the first one recognized as such. Bell Burnell noted that other scientists could have discovered pulsars before her, but their observations were either ignored or disregarded. Researchers Thomas Gold and Fred Hoyle identified this astronomical object as a rapidly rotating neutron star immediately upon their announcement.
Before the nature of the signal was determined, the researchers, Bell Burnell and her PhD supervisor Antony Hewish, considered the possibility of extraterrestrial life:
We did not really believe that we had picked up sign
|
https://en.wikipedia.org/wiki/Faculty%20of%20Informatics%20and%20Statistics%2C%20University%20of%20Economics%20in%20Prague
|
The Faculty of Informatics and Statistics (FIS VŠE; abbreviated FIS or F4), also known as the School of Informatics and Statistics, is the fourth of six faculties at the Prague University of Economics and Business. The faculty was established in 1991, following the dissolution of the Faculty of Direction. Its academic focus is informatics, statistics, econometrics and other mathematical methods applied to business practice. The faculty has eight departments and several research laboratories, and hosts around 2,500 students across its programs.
Departments
Departments of the faculty include:
Department of Demography (KDEM)
Department of Econometrics (KEKO)
Department of Economic Statistics (KEST)
Department of Information and Knowledge Engineering (KIZI)
Department of Information Technologies (KIT), whose research focuses on methodologies for the development, operation and management of information systems.
Department of Mathematics (KMAT)
Department of Multimedia (KME)
Department of Statistics and Probability (KSTP)
Department of Systems Analysis (KSA), focusing on the application of principles of systems methodology and systems thinking to the fields of information systems, business and management. Its main areas of research interest include: implementation of information systems within an organization, information management, strategic planning and business reengineering.
Academics
The faculty offers Bachelor, Master and doctoral study programs.
Bachelor programs
Bachelor study programs are 3–3.5 years in length and conclude with a Bachelor State Examination and defence of a Bachelor thesis. Bachelor theses usually focus on practical topics.
Applied Informatics
Information Media and Services
Mathematical Methods in Economics, a program focusing on quantitative methods, and the inter-related fields of economics, business economics and mathematical methods.
Multimedia in Economic Practice
Socio-economic dem
|
https://en.wikipedia.org/wiki/Sequential%20structure%20alignment%20program
|
The sequential structure alignment program (SSAP) in chemistry, physics, and biology is a method that uses double dynamic programming to produce a structural alignment based on atom-to-atom vectors in structure space. Instead of the alpha carbons typically used in structural alignment, SSAP constructs its vectors from the beta carbons for all residues except glycine, a method which thus takes into account the rotameric state of each residue as well as its location along the backbone. SSAP works by first constructing a series of inter-residue distance vectors between each residue and its nearest non-contiguous neighbors on each protein. A series of matrices are then constructed containing the vector differences between neighbors for each pair of residues for which vectors were constructed. Dynamic programming applied to each resulting matrix determines a series of optimal local alignments which are then summed into a "summary" matrix to which dynamic programming is applied again to determine the overall structural alignment.
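The dynamic-programming step that SSAP applies to each vector-difference matrix (and again to the summary matrix) is a standard alignment recurrence. The sketch below is a simplified illustration of that step over a generic residue-similarity matrix, not SSAP's actual scoring scheme:

```python
import numpy as np

def dp_align_score(score):
    """Best global alignment score over an n×m residue-similarity matrix
    (higher = more similar); gaps cost nothing in this simplified sketch."""
    n, m = score.shape
    best = np.zeros((n + 1, m + 1))
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            best[i, j] = max(
                best[i - 1, j - 1] + score[i - 1, j - 1],  # pair residue i with j
                best[i - 1, j],                            # leave residue i unpaired
                best[i, j - 1],                            # leave residue j unpaired
            )
    return best[n, m]
```

In SSAP proper, the optimal local paths found at this stage are accumulated into the summary matrix, to which the same recurrence is applied once more to produce the final alignment.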
SSAP originally produced only pairwise alignments but has since been extended to multiple alignments as well. It has been applied in an all-to-all fashion to produce a hierarchical fold classification scheme known as CATH (Class, Architecture, Topology, Homology), which has been used to construct the CATH Protein Structure Classification database.
Generally, SSAP scores above 80 are associated with highly similar structures. Scores between 70 and 80 indicate a similar fold with minor variations. Structures yielding a score between 60 and 70 do not generally contain the same fold, but usually belong to the same protein class with common structural motifs.
See also
Structural alignment
Class, Architecture, Topology, Homology (CATH)
RMSD — A different structure comparison measure
TM-score — A different structure comparison measure
GDT — A different structure comparison measure
LCS — A different structure comparison measure
|
https://en.wikipedia.org/wiki/AKLT%20model
|
In condensed matter physics, the AKLT model, also known as the Affleck–Kennedy–Lieb–Tasaki model, is an extension of the one-dimensional quantum Heisenberg spin model. The proposal and exact solution of this model by Ian Affleck, Elliott H. Lieb, Tom Kennedy and Hal Tasaki provided crucial insight into the physics of the spin-1 Heisenberg chain. It has also served as a useful example for such concepts as valence bond solid order, symmetry-protected topological order and matrix product state wavefunctions.
Background
A major motivation for the AKLT model was the Majumdar–Ghosh chain. Because two out of every set of three neighboring spins in a Majumdar–Ghosh ground state are paired into a singlet, or valence bond, the three spins together can never be found to be in a spin 3/2 state. In fact, the Majumdar–Ghosh Hamiltonian is nothing but the sum of all projectors of three neighboring spins onto a 3/2 state.
The main insight of the AKLT paper was that this construction could be generalized to obtain exactly solvable models for spin sizes other than 1/2. Just as one end of a valence bond is a spin 1/2, the ends of two valence bonds can be combined into a spin 1, three into a spin 3/2, etc.
Definition
Affleck et al. were interested in constructing a one-dimensional state with a valence bond between every pair of sites. Because this leads to two spin 1/2s for every site, the result must be the wavefunction of a spin 1 system.
For every adjacent pair of the spin 1s, two of the four constituent spin 1/2s are stuck in a total spin zero state. Therefore, each pair of spin 1s is forbidden from being in a combined spin 2 state.
By writing this condition as a sum of projectors that favor the spin 2 state of pairs of spin 1s, AKLT arrived at the following Hamiltonian, up to a constant:

H = Σ_j [ S_j · S_{j+1} + (1/3)(S_j · S_{j+1})² ],

where the S_j are spin-1 operators, and each term is proportional to the local 2-point projector that favors the spin 2 state of an adjacent pair of spins.
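That each two-site term is, up to constants, the projector onto the total-spin-2 subspace of a neighboring pair can be checked numerically; this sketch uses the standard identity P₂ = 1/3 + (S_1 · S_2)/2 + (S_1 · S_2)²/6:

```python
import numpy as np

# Spin-1 operators in the S_z basis (with hbar = 1)
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

# S_1 · S_2 acting on the 9-dimensional two-site Hilbert space
SS = sum(np.kron(S, S) for S in (Sx, Sy, Sz))

# Candidate projector onto the spin-2 multiplet of two spin-1s
P2 = np.eye(9) / 3 + SS / 2 + (SS @ SS) / 6
```

Since S_1 · S_2 takes the values 1, −1, and −2 on the spin-2, spin-1, and spin-0 sectors respectively, P2 evaluates to 1 on the spin-2 sector and 0 on the others, so it is an idempotent Hermitian operator of trace 5 (the dimension of the spin-2 multiplet).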
This Hamiltonian is similar to the spin 1, one-dimensional quantum Heis
|
https://en.wikipedia.org/wiki/List%20of%20semiconductor%20scale%20examples
|
Listed are many semiconductor scale examples for various metal–oxide–semiconductor field-effect transistor (MOSFET, or MOS transistor) semiconductor manufacturing process nodes.
Timeline of MOSFET demonstrations
PMOS and NMOS
CMOS (single-gate)
Multi-gate MOSFET (MuGFET)
Other types of MOSFET
Commercial products using micro-scale MOSFETs
Products featuring 20 μm manufacturing process
RCA's CD4000 series of integrated circuits (ICs) beginning in 1968.
Products featuring 10 μm manufacturing process
Intel 4004, the first single-chip microprocessor CPU, launched in 1971.
Intel 8008 CPU launched in 1972.
MOS Technology 6502 1 MHz CPU launched in 1975 (8 μm).
Products featuring 8 μm manufacturing process
Intel 1103, an early dynamic random-access memory (DRAM) chip launched in 1970.
Products featuring 6 μm manufacturing process
Toshiba TLCS-12, a microprocessor developed for the Ford EEC (Electronic Engine Control) system in 1973.
Intel 8080 CPU launched in 1974 was manufactured using this process.
The Television Interface Adaptor, the custom graphics and audio chip developed for the Atari 2600 in 1977.
MOS Technology SID, a programmable sound generator developed for the Commodore 64 in 1982.
MOS Technology VIC-II, a video display controller developed for the Commodore 64 in 1982 (5 μm).
Products featuring 3 μm manufacturing process
Intel 8085 CPU launched in 1976.
Intel 8086 CPU launched in 1978.
Intel 8088 CPU launched in 1979.
Motorola 68000 8 MHz CPU launched in 1979 (3.5 μm).
Products featuring 1.5 μm manufacturing process
NEC's 64kb SRAM memory chip in 1981.
Intel 80286 CPU launched in 1982.
The Amiga Advanced Graphics Architecture (initially sold in 1992) included chips such as Denise that were manufactured using a 1.5 μm CMOS process.
Products featuring 1 μm manufacturing process
NTT's DRAM memory chips, including its 64kb chip in 1979 and 256kb chip in 1980.
NEC's 1Mb DRAM memory chip in 1984.
Intel 80386 CPU launched in 198
|
https://en.wikipedia.org/wiki/Melanoma%20inhibitory%20activity
|
Melanoma-derived growth regulatory protein is a protein that in humans is encoded by the MIA gene.
It is a marker for malignant melanoma.
|
https://en.wikipedia.org/wiki/Baruch%20Schieber
|
Baruch M. Schieber (Hebrew: ברוך שיבר; born December 1958) is a professor in the Department of Computer Science at the New Jersey Institute of Technology (NJIT) and director of the Institute for Future Technologies.
Early life and education
Baruch Schieber was born in Tel Aviv and was raised in Givatayim (a suburb of Tel Aviv). His father was a bank branch manager and his mother a homemaker. He graduated from Zeitlin High School in 1976.
Upon high school graduation, he began his academic studies at the Computer Science department of the Technion - Israel Institute of Technology in 1977 and received his B.Sc. in 1980 (summa cum laude). He then started his 5-year military service in unit 8200 of the Israeli Intelligence and reached the rank of lieutenant. While in the army he received his M.Sc. from the Computer Science department of the Technion - Israel Institute of Technology in 1984. He continued his Ph.D. studies at Tel Aviv University until 1987. His thesis on the design and analysis of parallel algorithms was supervised by Dr. Uzi Vishkin.
From 1987 to 1989 Schieber was a Postdoctoral Fellow at the Theory of Computation, Mathematical Sciences Department of IBM T.J. Watson Research Center, Yorktown Heights, New York.
Career
Schieber joined IBM T.J. Watson Research Center at Yorktown Heights in 1989 as a Research Staff Member in the Theory of Computation, Mathematical Sciences Department. In 1995 he became the manager of the department. From 2001 to 2015 Schieber served as the manager of the Optimization Center, Business Analytics and Mathematical Sciences Department of IBM T.J. Watson Research Center, and in 2017 he became Manager of the Center for Optimization, Mathematics, and Algorithms there. In 2017 Schieber became Manager of the Mathematics of AI, IBM Research AI at IBM T.J. Watson Research Center.
In 2018 he joined the New Jersey Institute of Technology as a Professor and chair of the Department of Computer Science (until June 2022). In July 2022 h
|
https://en.wikipedia.org/wiki/Power%20management%20system
|
On marine vessels, the power management system (PMS) is in charge of controlling the electrical system. Its task is to make sure that the electrical system is safe and efficient. If power consumption is larger than the power production capacity, load shedding is used to avoid a blackout. Other features include automatically starting and stopping generating sets (e.g., diesel generators) as the load varies.
A complete switchboard and generator control system
The marine power management system (PMS) is a complete switchboard and generator control system that synchronizes the ship's auxiliary engines, implementing automatic load sharing and optimizing the efficiency of the power plant. It handles various configurations of generators driven by diesel engines, steam turbines, and main engines, in combination with switchboards of varying complexity.
Power Management System PMS Operation
The combination of generators to run is selected according to calculations from each vessel's electric power tables: the PMS decides which combination of generators best matches the current load consumption. Generator capacity is such that if any one generating set stops, it is still possible to supply all services necessary for normal operation of propulsion and safety. Furthermore, it is sufficient to start the largest motor on the ship without causing any other motor to stop or adversely affecting other equipment in operation. In general, a PMS performs the following functions on a ship:
Automatic Synchronizing
Automatic Load Sharing
Automatic Start/Stop/Stby Generators according to Load Demand
Large Motors Automatic Blocking
Load Analysis and Monitoring
Three (3) Phase Management and Voltage Matching
Redundant Power Distribution
Frequency Control
Blackout Start
Selection of Generators Priority (first leading main, second and third stby generator in se
|
https://en.wikipedia.org/wiki/MediaWiki
|
MediaWiki is free and open-source wiki software originally developed by Magnus Manske for use on Wikipedia on January 25, 2002 and further improved by Lee Daniel Crocker, after which it has been coordinated by the Wikimedia Foundation. It powers most websites hosted by the Foundation, including Wikipedia, Wiktionary, Wikimedia Commons, Wikiquote, Meta-Wiki and Wikidata, which define a large part of the set of requirements for the software.
MediaWiki is written in the PHP programming language and stores all text content into a database. The software is optimized to efficiently handle large projects, which can have terabytes of content and hundreds of thousands of views per second. Because Wikipedia is one of the world's largest and most visited websites, achieving scalability through multiple layers of caching and database replication has been a major concern for developers. Another major aspect of MediaWiki is its internationalization; its interface is available in more than 400 languages. The software has more than 1,000 configuration settings and more than 1,800 extensions available for enabling various features to be added or changed.
Besides its usage on Wikimedia sites, MediaWiki has been used as a knowledge management and content management system on websites such as Fandom, wikiHow and major internal installations like Intellipedia and Diplopedia.
License
MediaWiki is free and open-source and is distributed under the terms of the GNU General Public License version 2 or any later version. Its documentation, located at its official website at www.mediawiki.org, is released under the Creative Commons BY-SA 4.0 license and partly in the public domain. Specifically, the manuals and other content at MediaWiki.org are Creative Commons-licensed, while the set of help pages intended to be freely copied into fresh wiki installations and/or distributed with MediaWiki software is public domain. This was done to eliminate legal issues arising from the help pages being impo
|
https://en.wikipedia.org/wiki/Cutin
|
Cutin is one of two waxy polymers that are the main components of the plant cuticle, which covers all aerial surfaces of plants, the other being cutan. It is an insoluble substance with waterproof quality. Cutin also harbors cuticular waxes, which assist in cuticle structure. Cutan, the other major cuticle polymer, is much more readily preserved in fossil records. Cutin consists of omega hydroxy acids and their derivatives, which are interlinked via ester bonds, forming a polyester polymer of indeterminate size.
There are two major monomer families of cutin, the C16 and C18 families. The C16 family consists mainly of 16-hydroxy palmitic acid and 9,16- or 10,16-dihydroxypalmitic acid. The C18 family consists mainly of 18-hydroxy oleic acid, 9,10-epoxy-18-hydroxy stearic acid, and 9,10,18-trihydroxystearate.
|
https://en.wikipedia.org/wiki/Planck%27s%20law
|
In physics, Planck's law (also Planck radiation law) describes the spectral density of electromagnetic radiation emitted by a black body in thermal equilibrium at a given temperature , when there is no net flow of matter or energy between the body and its environment.
At the end of the 19th century, physicists were unable to explain why the observed spectrum of black-body radiation, which by then had been accurately measured, diverged significantly at higher frequencies from that predicted by existing theories. In 1900, German physicist Max Planck heuristically derived a formula for the observed spectrum by assuming that a hypothetical electrically charged oscillator in a cavity that contained black-body radiation could only change its energy in a minimal increment, E = hν, that was proportional to the frequency of its associated electromagnetic wave. While Planck originally regarded the hypothesis of dividing energy into increments as a mathematical artifice, introduced merely to get the correct answer, other physicists including Albert Einstein built on his work, and Planck's insight is now recognized to be of fundamental importance to quantum theory.
The law
Every physical body spontaneously and continuously emits electromagnetic radiation, and the spectral radiance of a body, B_ν, describes the spectral emissive power per unit area, per unit solid angle and per unit frequency for particular radiation frequencies. The relationship given by Planck's radiation law, given below, shows that with increasing temperature, the total radiated energy of a body increases and the peak of the emitted spectrum shifts to shorter wavelengths. According to this, the spectral radiance of a body for frequency ν at absolute temperature T is given (in cgs units) by

B_ν(ν, T) = (2hν³/c²) · 1/(e^(hν/(k_B T)) − 1),

where k_B is the Boltzmann constant, h is the Planck constant, and c is the speed of light in the medium, whether material or vacuum. The cgs units of spectral radiance are erg·s⁻¹·sr⁻¹·cm⁻²·Hz⁻¹.
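The law is straightforward to evaluate numerically (here in SI rather than cgs units); using `expm1` avoids loss of precision in the low-frequency limit:

```python
import math

H = 6.62607015e-34   # Planck constant, J·s
C = 2.99792458e8     # speed of light in vacuum, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def spectral_radiance(nu, T):
    """Planck's law B_nu(nu, T) in W · sr^-1 · m^-2 · Hz^-1."""
    return (2 * H * nu**3 / C**2) / math.expm1(H * nu / (KB * T))
```

At low frequencies (hν ≪ k_B T) this reduces to the Rayleigh–Jeans expression 2ν²k_B T/c², as expanding the exponential to first order shows.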
The spectral radiance can also be expressed per unit wave
|
https://en.wikipedia.org/wiki/Jeotex
|
Jeotex, Inc. (known as Datawind, Inc. until 2019) is a developer and manufacturer of low-cost tablet computers and smartphones. The company was founded in Montreal, Quebec, Canada to manufacture tablets for sale primarily in India, Nigeria, the United Kingdom, Canada, and the United States. The company is known for its development of the Aakash tablet computer, marketed as the "world's cheapest tablet" at US$37.99 per unit. The Aakash tablet was developed for India's Ministry of Human Resource Development (MHRD).
Datawind manufactures mobile, internet devices, such as Pocket Surfer smartphones, UbiSurfer netbooks, and Ubislate tablets. Formerly, the company manufactured the Aakash tablets on behalf of the Government of India. The company's devices use a patented web-delivery platform for low-cost internet access on mobile networks. Datawind was listed on the Toronto Stock Exchange from 2014 to October 2018 when it was relisted to the TSX Venture Exchange after failing to meet listing requirements. It was delisted from the TSX Venture Exchange in 2021.
Datawind has offices in Montreal, Mississauga, London, Delhi and Amritsar. At the shareholder meeting in April 2019 it was agreed to change the name of the company to Jeotex, Inc., and the change took effect that month. On 10 June 2021, the company was declared bankrupt.
History
Datawind was founded in Montreal in 2001 by brothers Suneet and Raja Tuli. Suneet and Raja Tuli grew up in Northern Alberta, Canada. Suneet Tuli is a civil engineer, who graduated from the University of Toronto. Raja is a computer engineer, who graduated from the University of Alberta. Suneet Tuli is Datawind's chief executive officer and is responsible for its vision, strategy, and execution. Raja Tuli is the company's co-chairman and chief technology officer; he is also an inventor with dozens of patents across a broad range of technologies related to the internet, to imaging, and to energy sustenance.
Datawind's product range includes PocketSu
|
https://en.wikipedia.org/wiki/Pine%20bolete
|
Pine bolete is a common name for several mushrooms and may refer to:
Suillus bellinii
Boletus pinophilus
|
https://en.wikipedia.org/wiki/Indiscrete%20category
|
An indiscrete category is a category C in which every hom-set C(X, Y) is a singleton. Every class X gives rise to an indiscrete category whose objects are the elements of X such that for any two objects A and B, there is only one morphism from A to B. Any two nonempty indiscrete categories are equivalent to each other. The functor from Set to Cat that sends a set to the corresponding indiscrete category is right adjoint to the functor that sends a small category to its set of objects.
|
https://en.wikipedia.org/wiki/Xerox%20Operating%20System
|
The Xerox Operating System (XOS) was an operating system for the XDS Sigma series of computers "optimized for direct replacement of IBM DOS/360 installations" and to provide real-time and timesharing support.
The system was developed, beginning in 1969, for Xerox by the French firm CII (now Bull).
XOS was more successful in Europe than in the US, but was unable to compete with IBM. By 1972 there were 35 XOS installations in Europe, compared to 2 in the US.
|
https://en.wikipedia.org/wiki/Crambin
|
Crambin is a small seed storage protein from the Abyssinian cabbage. It belongs to the thionins. It has 46 amino acid residues. It has been extensively studied by X-ray crystallography since its crystals are unique and diffract to a resolution of 0.48 Å. Neutron scattering measurements are also available at a resolution of 1.1 Å.
|
https://en.wikipedia.org/wiki/Beck%E2%80%93Fahrner%20syndrome
|
Beck–Fahrner syndrome, also known as BEFAHRS and TET3 deficiency, is a rare genetic disorder caused by mutations of the TET3 gene. It can occur de novo or can be inherited in an autosomal dominant manner. Mutations in the TET3 gene disrupt DNA demethylation (an epigenetic mechanism) during early embryogenesis and neural development. The most common clinical presentation includes global developmental delay, psychomotor retardation, neurodevelopmental disorders, hypotonia, epilepsy and dysmorphic features. It is diagnosed using molecular and genetic testing in the setting of typical symptoms. Management is supportive and intended to improve quality of life.
Signs and symptoms
Beck–Fahrner syndrome is also known as "BEFAHRS", which is a mnemonic for its most common features: behavioral differences, epilepsy, facial features, autistic features, hypotonia, retardation of psychomotor development, and size differences.
Global developmental delay is the most common symptom with psychomotor retardation, speech delay, disrupted fine and gross motor skills. Intellectual disability and learning disability are often present, sometimes concurrently. Over two-thirds of affected individuals have autism spectrum disorder or social communication disorder. Attention deficit hyperactivity disorder, obsessive–compulsive tendencies, anxiety, depression and psychosis have also been noted.
Neurodevelopmental anomalies, motor and movement disorders such as hypotonia (most common), dystonia, spasticity, tic disorder, myoclonus, dysmetria and abnormal posturing have been observed with varying intensity and frequency. Epilepsy affects over one-third of individuals with Beck–Fahrner syndrome; generalized tonic-clonic seizures, complex partial seizures, absence seizures, epileptic spasms and electrical status epilepticus during slow-wave sleep have been seen. Neuroimaging yields non-specific findings in some cases, such as ventriculomegaly, periventricular leukomalacia, cerebellar hypoplasia and agenes
|
https://en.wikipedia.org/wiki/Robert%20S.%20Barton
|
Robert Stanley "Bob" Barton (February 13, 1925 – January 28, 2009) was the chief architect of the Burroughs B5000 and other computers such as the B1700, a co-inventor of dataflow architecture, and an influential professor at the University of Utah.
His students at Utah have had a large role in the development of computer science.
Barton designed machines at a more abstract level, not tied to the technology constraints of the time. He employed high-level languages and a stack machine in his design of the B5000 computer. Its design survives in the modern Unisys ClearPath MCP systems. His stack architecture was the first to be implemented in a mainframe computer.
Barton died on January 28, 2009, in Portland, Oregon, aged 83.
Career
Barton was born in New Britain, Connecticut in 1925 and received his BA in 1948 and his MS in 1949, both in mathematics, from the University of Iowa. His early experience with computers came in 1951, when he worked in the IBM Applied Science Department.
In 1954, he joined Shell Oil Company Technical Services, working on programming applications, and later moved to Shell Development, a research group in Texas, where he worked with a Burroughs/Datatron 205 computer. In 1958, he studied Irving Copi and Jan Łukasiewicz's work on symbolic logic and Polish notation, and considered its application to arithmetic expression processing on a computer.
Barton joined Burroughs Corporation, ElectroData Division, in Pasadena, California in the late 1950s. He managed a system programming group in 1959 which developed a compiler named BALGOL for the language ALGOL 58 on the Burroughs 220 computer.
In 1960, he became a consultant for Beckman Instruments working on data collection from satellite systems, for Lockheed Corporation working on satellite systems and organizing of data processing services, and for Burroughs continuing to work on the design concepts of the B5000.
After an assignment in Australia in 1963 for Control Data Corporation, he
|
https://en.wikipedia.org/wiki/Symmetric%20decreasing%20rearrangement
|
In mathematics, the symmetric decreasing rearrangement of a function is a function which is symmetric and decreasing, and whose level sets are of the same size as those of the original function.
Definition for sets
Given a measurable set $A$ in $\mathbb{R}^n$, one defines the symmetric rearrangement of $A$, called $A^*$, as the ball centered at the origin whose volume (Lebesgue measure) is the same as that of the set $A$.
An equivalent definition is
$$A^* = \left\{ x \in \mathbb{R}^n : \omega_n |x|^n < |A| \right\},$$
where $\omega_n$ is the volume of the unit ball and $|A|$ is the volume of $A$.
Definition for functions
The rearrangement of a non-negative, measurable real-valued function $f$ whose level sets $\{ y : f(y) > t \}$ (for $t > 0$) have finite measure is
$$f^*(x) = \int_0^\infty \mathbb{1}_{\{ f > t \}^*}(x) \, dt,$$
where $\mathbb{1}_A$ denotes the indicator function of the set $A$.
In words, the value of $f^*(x)$ gives the height $t$ for which the radius of the symmetric rearrangement of $\{ f > t \}$ is equal to $|x|$. We have the following motivation for this definition. Because the identity
$$g(x) = \int_0^\infty \mathbb{1}_{\{ g > t \}}(x) \, dt$$
holds for any non-negative function $g$, the above definition is the unique definition that forces the identity $\mathbb{1}_{\{ f^* > t \}} = \mathbb{1}_{\{ f > t \}^*}$ to hold.
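In one dimension, on a finite grid, this definition reduces to sorting. The sketch below (the grid and the sample values are invented for illustration) assigns the values of the function, sorted largest first, to grid points ordered by distance from the origin, so every level set keeps its size:

```python
import numpy as np

# Discrete analogue of symmetric decreasing rearrangement on a uniform
# 1-D grid: place the values of f, sorted in decreasing order, at grid
# points ordered by increasing distance from the origin.
def symmetric_decreasing_rearrangement(x, f):
    order_by_radius = np.argsort(np.abs(x), kind="stable")
    f_star = np.empty_like(f)
    f_star[order_by_radius] = np.sort(f)[::-1]
    return f_star

x = np.linspace(-3, 3, 7)                      # grid points -3 .. 3
f = np.array([0.0, 2.0, 1.0, 0.0, 5.0, 3.0, 1.0])
fs = symmetric_decreasing_rearrangement(x, f)
# the largest value sits at the origin, and level sets keep their size
assert fs[3] == 5.0
assert sorted(fs.tolist()) == sorted(f.tolist())
```

The result is decreasing away from the origin; ties are broken by the stable sort, which is one arbitrary choice among equally valid discretizations.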
Properties
The function $f^*$ is a symmetric and decreasing function whose level sets have the same measure as the level sets of $f$, that is,
$$|\{ f^* > t \}| = |\{ f > t \}|.$$
If $f$ is a function in $L^p$, then
$$\| f \|_{L^p} = \| f^* \|_{L^p}.$$
The Hardy–Littlewood inequality holds, that is,
$$\int f g \, dx \leq \int f^* g^* \, dx.$$
Further, the Pólya–Szegő inequality holds. This says that if $1 \leq p < \infty$ and if $f \in W^{1,p}(\mathbb{R}^n)$, then
$$\| \nabla f^* \|_p \leq \| \nabla f \|_p.$$
The symmetric decreasing rearrangement is order preserving and decreases $L^p$ distance, that is,
$$f \leq g \implies f^* \leq g^*$$
and
$$\| f - g \|_{L^p} \geq \| f^* - g^* \|_{L^p}.$$
Applications
The Pólya–Szegő inequality yields, in the limit case $p = 1$, the isoperimetric inequality. Also, one can use some relations with harmonic functions to prove the Rayleigh–Faber–Krahn inequality.
Nonsymmetric decreasing rearrangement
We can also define $f^*$ as a function on the nonnegative real numbers rather than on all of $\mathbb{R}^n$. Let $(E, \mu)$ be a σ-finite measure space, and let $f : E \to [-\infty, \infty]$ be a measurable function that takes only finite (that is, real) values $\mu$-a.e. (where "$\mu$-a.e." means except possibly on a set of $\mu$-measure zero). We define the distribution function $\mu_f : [0, \infty) \to [0, \infty]$ by the rule
$$\mu_f(s) = \mu \{ x \in E : |f(x)| > s \}.$$
We can now define the decreasing rear
|
https://en.wikipedia.org/wiki/Arytenoid%20cartilage
|
The arytenoid cartilages () are a pair of small three-sided pyramids which form part of the larynx. They are the site of attachment of the vocal cords. Each is pyramidal or ladle-shaped and has three surfaces, a base, and an apex. The arytenoid cartilages allow for movement of the vocal cords by articulating with the cricoid cartilage. They may be affected by arthritis, dislocations, or sclerosis.
Structure
The arytenoid cartilages are part of the posterior part of the larynx.
Surfaces
The posterior surface is triangular, smooth, concave, and gives attachment to the arytenoid muscle and transversus.
The antero-lateral surface is somewhat convex and rough. On it, near the apex of the cartilage, is a rounded elevation (colliculus) from which a ridge (crista arcuata) curves at first backward and then downward and forward to the vocal process. The lower part of this crest intervenes between two depressions or foveæ, an upper, triangular, and a lower oblong in shape; the latter gives attachment to the thyroarytenoid muscle (vocal muscle).
The medial surface is narrow, smooth, and flattened, covered by mucous membrane. It forms the lateral boundary of the intercartilaginous part of the rima glottidis.
Base and apex
The base of each cartilage is broad, and on it is a concave smooth surface, for articulation with the cricoid cartilage.
Its lateral angle is called the muscular process.
Its anterior angle is called the vocal process.
The apex of each cartilage is pointed, curved backward and medialward, and surmounted by a small conical, cartilaginous nodule, the corniculate cartilage. It articulates with the cricoid lamina with a ball-and-socket joint.
Function
The arytenoid cartilages allow the vocal folds to be tensed, relaxed, or approximated. They articulate with the supero-lateral parts of the cricoid cartilage lamina, forming the cricoarytenoid joints at which they can come together, move apart, tilt anteriorly or posteriorly, and rotate.
Clinical signif
|
https://en.wikipedia.org/wiki/Nightingale%20floor
|
Nightingale floors (uguisubari) are floors that make a chirping sound when walked upon. These floors were used in the hallways of some temples and palaces, the most famous example being Nijō Castle in Kyoto, Japan. Dry boards naturally creak under pressure, but these floors were built in a way that the flooring nails rub against a jacket or clamp, causing chirping noises. It is unclear if the design was intentional; it seems that, at least initially, the effect arose by chance. An information sign in Nijō Castle states that "The singing sound is not actually intentional, stemming rather from the movement of nails against clumps in the floor caused by wear and tear over the years". Legend has it that the squeaking floors were used as a security device, assuring that no one could sneak through the corridors undetected.
The English name "nightingale" refers to the Japanese bush warbler, or uguisu, which is a common songbird in Japan.
Etymology
The first segment, uguisu, refers to the Japanese bush warbler. The latter segment comes from the verb haru, which can be used to mean "to lay/board (flooring)", as in the expression yukaita wo haru (床板を張る) meaning "to board a/the floor". The verb haru becomes nominalized as hari and voiced through rendaku to become bari. In this form it refers to the method of boarding, as in other words like herinbōnbari (ヘリンボーン張り), which refers to flooring laid in a herringbone pattern. As such, uguisubari means "warbler boarding".
Construction
The floors were made from dried boards. Upside-down V-shaped joints move within the boards when pressure is applied.
Examples
The following locations incorporate nightingale floors:
Nijō Castle, Kyoto
Chion-in, Kyoto
Eikan-dō Zenrin-ji, Kyoto
Daikaku-ji, Kyoto
Modern influences and related topics
Melody Road in Hokkaido, Wakayama, and Gunma
Singing Road in Anyang, Gyeonggi, South Korea
Civic Musical Road in Lancaster, California
Across the Nightingale Floor, 2002 novel by Lian Hearn
Notes
|
https://en.wikipedia.org/wiki/Multidimensional%20spectral%20estimation
|
Multidimensional spectral estimation is a generalization of spectral estimation, normally formulated for one-dimensional signals, to multidimensional signals or multivariate data, such as wave vectors.
Motivation
Multidimensional spectral estimation has gained popularity because of its application in fields like medicine, aerospace, sonar, radar, bioinformatics and geophysics. In the recent past, a number of methods have been suggested to design models with finite parameters to estimate the power spectrum of multidimensional signals. This article covers the basics of the methods used to estimate the power spectrum of multidimensional signals.
Applications
There are many applications of spectral estimation of multidimensional signals, such as the classification of signals as low pass, high pass, pass band and stop band. It is also used in compression and coding of audio and video signals, beamforming and direction finding in radars, seismic data estimation and processing, arrays of sensors and antennas, and vibrational analysis. In the field of radio astronomy, it is used to synchronize the outputs of an array of telescopes.
Basic Concepts
In a single dimensional case, a signal is characterized by an amplitude and a time scale. The basic concepts involved in spectral estimation include autocorrelation, multi-D Fourier transform, mean square error and entropy. When it comes to multidimensional signals, there are two main approaches: use a bank of filters or estimate the parameters of the random process in order to estimate the power spectrum.
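As a minimal concrete illustration of the classical (Fourier-based) approach, the sketch below, with a synthetic test signal and arbitrary sizes, estimates a 2-D power spectrum with the periodogram (the squared magnitude of the multidimensional DFT, normalized by the number of samples) and locates the dominant wave vector:

```python
import numpy as np

# A 2-D periodogram: |DFT|^2 / N, i.e. the Fourier transform of the
# biased sample autocorrelation. The sinusoid-plus-noise field below
# is a made-up example signal.
rng = np.random.default_rng(0)
n1, n2 = 64, 64
k1, k2 = 10, 20                      # true spatial frequencies (in bins)
t1, t2 = np.meshgrid(np.arange(n1), np.arange(n2), indexing="ij")
x = np.cos(2*np.pi*(k1*t1/n1 + k2*t2/n2)) + 0.1*rng.standard_normal((n1, n2))

P = np.abs(np.fft.fft2(x))**2 / (n1 * n2)   # 2-D periodogram
peak = np.unravel_index(np.argmax(P), P.shape)
# the strongest spectral peak lands on the true wave vector or its mirror
assert peak in [(k1, k2), (n1 - k1, n2 - k2)]
```

For a real-valued field the spectrum is conjugate-symmetric, which is why the peak may appear at the mirrored bin.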
Methods
Classical Estimation Theory
It is a technique for estimating the power spectrum of a single-dimensional or multidimensional signal, since the true power spectrum cannot be calculated exactly from finite data. Given are samples of a wide-sense stationary random process and its second-order statistics (measurements). The estimates are obtained by applying a multidimensional Fourier transform to the autocorrelation function of the random signal. The estimati
|
https://en.wikipedia.org/wiki/Persistence%20of%20a%20number
|
In mathematics, the persistence of a number is the number of times one must apply a given operation to an integer before reaching a fixed point at which the operation no longer alters the number.
Usually, this involves additive or multiplicative persistence of a non-negative integer, which is how often one has to replace the number by the sum or product of its digits until one reaches a single digit. Because the numbers are broken down into their digits, the additive or multiplicative persistence depends on the radix. In the remainder of this article, base ten is assumed.
The single-digit final state reached in the process of calculating an integer's additive persistence is its digital root. Put another way, a number's additive persistence counts how many times we must sum its digits to arrive at its digital root.
Examples
The additive persistence of 2718 is 2: first we find that 2 + 7 + 1 + 8 = 18, and then that 1 + 8 = 9. The multiplicative persistence of 39 is 3, because it takes three steps to reduce 39 to a single digit: 39 → 27 → 14 → 4. Also, 39 is the smallest number of multiplicative persistence 3.
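The two computations above can be sketched in a few lines of Python (the function names are ours):

```python
from math import prod

# A small sketch of additive and multiplicative persistence in base 10:
# repeatedly replace n by the sum (or product) of its digits, counting
# steps until a single digit remains (single digits are fixed points).
def persistence(n, combine):
    steps = 0
    while n >= 10:
        n = combine(int(d) for d in str(n))
        steps += 1
    return steps

def additive_persistence(n):
    return persistence(n, sum)

def multiplicative_persistence(n):
    return persistence(n, prod)

assert additive_persistence(2718) == 2      # 2718 -> 18 -> 9
assert multiplicative_persistence(39) == 3  # 39 -> 27 -> 14 -> 4
```

`math.prod` requires Python 3.8 or later; on older versions `functools.reduce` with multiplication works the same way.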
Smallest numbers of a given multiplicative persistence
In base 10, there is thought to be no number with a multiplicative persistence > 11: this is known to be true for numbers up to 10^20,000. The smallest numbers with persistence 0, 1, 2, ... are:
0, 10, 25, 39, 77, 679, 6788, 68889, 2677889, 26888999, 3778888999, 277777788888899.
The search for these numbers can be sped up by using additional properties of the decimal digits of these record-breaking numbers. These digits must be in increasing order (with the exception of the second number, 10), and – except for the first two digits – all digits must be 7, 8, or 9. There are also additional restrictions on the first two digits.
Based on these restrictions, the number of candidates for n-digit numbers with record-breaking persistence is only proportional to the square of n, a tiny fraction of all possible
|
https://en.wikipedia.org/wiki/BIBO%20stability
|
In signal processing, specifically control theory, bounded-input, bounded-output (BIBO) stability is a form of stability for signals and systems that take inputs. If a system is BIBO stable, then the output will be bounded for every input to the system that is bounded.
A signal is bounded if there is a finite value $B > 0$ such that the signal magnitude never exceeds $B$, that is
For discrete-time signals: $|x[n]| \leq B$ for all $n$.
For continuous-time signals: $|x(t)| \leq B$ for all $t$.
Time-domain condition for linear time-invariant systems
Continuous-time necessary and sufficient condition
For a continuous time linear time-invariant (LTI) system, the condition for BIBO stability is that the impulse response, $h(t)$, be absolutely integrable, i.e., its $L^1$ norm exists:
$$\int_{-\infty}^{\infty} |h(t)| \, dt = \| h \|_1 < \infty.$$
Discrete-time sufficient condition
For a discrete time LTI system, the condition for BIBO stability is that the impulse response, $h[n]$, be absolutely summable, i.e., its $\ell^1$ norm exists:
$$\sum_{n=-\infty}^{\infty} |h[n]| = \| h \|_1 < \infty.$$
Proof of sufficiency
Given a discrete time LTI system with impulse response $h[n]$, the relationship between the input $x[n]$ and the output $y[n]$ is
$$y[n] = h[n] * x[n],$$
where $*$ denotes convolution. Then it follows by the definition of convolution
$$y[n] = \sum_{k=-\infty}^{\infty} h[k] \, x[n-k].$$
Let $\| x \|_{\infty}$ be the maximum value of $|x[n]|$, i.e., the $\ell^{\infty}$-norm. Then
$$|y[n]| = \left| \sum_{k=-\infty}^{\infty} h[k] \, x[n-k] \right| \leq \sum_{k=-\infty}^{\infty} |h[k]| \, |x[n-k]| \quad \text{(by the triangle inequality)}$$
$$\leq \sum_{k=-\infty}^{\infty} |h[k]| \, \| x \|_{\infty} = \| x \|_{\infty} \, \| h \|_1.$$
If $h[n]$ is absolutely summable, then $\| h \|_1 < \infty$ and
$$\| x \|_{\infty} \, \| h \|_1 < \infty.$$
So if $h[n]$ is absolutely summable and $x[n]$ is bounded, then $y[n]$ is bounded as well because $\| y \|_{\infty} \leq \| x \|_{\infty} \, \| h \|_1$.
The proof for continuous-time follows the same arguments.
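The bound can be illustrated numerically. For the first-order impulse response h[n] = aⁿu[n] (a hypothetical example system, not from the text above), the ℓ¹ norm is the geometric series Σ|a|ⁿ, which converges to 1/(1 − |a|) exactly when |a| < 1:

```python
import numpy as np

# l1 norm of the causal exponential impulse response h[n] = a^n u[n],
# truncated to a finite number of terms for the numerical check.
def l1_norm(a, n_terms=1000):
    n = np.arange(n_terms)
    return np.sum(np.abs(a) ** n)

assert abs(l1_norm(0.5) - 2.0) < 1e-9   # |a| < 1: BIBO stable, sum -> 1/(1-0.5)
assert l1_norm(1.1) > 1e6               # |a| > 1: partial sums grow without bound
```

The stable case matches the closed form 1/(1 − |a|); the unstable case shows the partial sums diverging, so some bounded input produces an unbounded output.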
Frequency-domain condition for linear time-invariant systems
Continuous-time signals
For a rational and continuous-time system, the condition for stability is that the region of convergence (ROC) of the Laplace transform includes the imaginary axis. When the system is causal, the ROC is the open region to the right of a vertical line whose abscissa is the real part of the "largest pole", or the pole that has the greatest real part of any pole in the system. The real part of the largest pole defining the ROC is called the abscissa of convergence. Therefore, all poles of the system must be in the strict left half of the s
|
https://en.wikipedia.org/wiki/Variable%20shadowing
|
In computer programming, variable shadowing occurs when a variable declared within a certain scope (decision block, method, or inner class) has the same name as a variable declared in an outer scope. At the level of identifiers (names, rather than variables), this is known as name masking. This outer variable is said to be shadowed by the inner variable, while the inner identifier is said to mask the outer identifier. This can lead to confusion, as it may be unclear which variable subsequent uses of the shadowed variable name refer to, which depends on the name resolution rules of the language.
One of the first languages to introduce variable shadowing was ALGOL, which first introduced blocks to establish scopes. It was also permitted by many of the derivative programming languages including C, C++ and Java.
The C# language breaks this tradition, allowing variable shadowing between an inner and an outer class, and between a method and its containing class, but not between an if-block and its containing method, or between case statements in a switch block.
Some languages allow variable shadowing in more cases than others. For example, Kotlin allows an inner variable in a function to shadow a passed argument, and a variable in an inner block to shadow another in an outer block, while Java does not allow these. Both languages allow a passed argument to a function/method to shadow a class field.
Some languages, such as CoffeeScript, disallow variable shadowing completely.
Example
Lua
The following Lua code provides an example of variable shadowing, in multiple blocks.
v = 1 -- a global variable
do
  local v = v + 1 -- a new local that shadows global v
  print(v) -- prints 2
  do
    local v = v * 2 -- another local that shadows outer local v
    print(v) -- prints 4
  end
  print(v) -- prints 2
end
print(v) -- prints 1
Python
The following Python code provides another example of variable shadowing:
x = 0
def outer():
    x = 1
    def inner():
        x = 2
|
https://en.wikipedia.org/wiki/Eugenio%20Elia%20Levi
|
Eugenio Elia Levi (18 October 1883 – 28 October 1917) was an Italian mathematician, known for his fundamental contributions in group theory, in the theory of partial differential operators and in the theory of functions of several complex variables. He was a younger brother of Beppo Levi and was killed in action during the First World War.
Work
Research activity
He wrote 33 papers, classified by his colleague and friend Mauro Picone according to the scheme reproduced in this section.
Differential geometry
Group theory
He wrote only three papers in group theory: in the first one, he discovered what is now called the Levi decomposition, which had been conjectured by Wilhelm Killing and proved by Élie Cartan in a special case.
Function theory
In the theory of functions of several complex variables he introduced the concept of pseudoconvexity during his investigations on the domain of existence of such functions: it turned out to be one of the key concepts of the theory.
Cauchy and Goursat problems
Boundary value problems
His researches in the theory of partial differential operators lead to the method of the parametrix, which is basically a way to construct fundamental solutions for elliptic partial differential operators with variable coefficients: the parametrix is widely used in the theory of pseudodifferential operators.
Calculus of variations
Publications
The full scientific production of Eugenio Elia Levi is collected in his collected papers.
, reprinted also in , volume I. A well-known memoir in group theory: it was presented to the members of the Accademia delle Scienze di Torino during the session of April 2, 1905, by Luigi Bianchi.
. A short note announcing the results of paper .
. An important paper whose results were previously announced in the short note with the same title. It was also translated in Russian by N. D. Ajzenstat, currently available from the All-Russian Mathematical Portal: .
. An important paper in the theory of functions of several complex variables, w
|
https://en.wikipedia.org/wiki/Physical%20biochemistry
|
Physical biochemistry is a branch of biochemistry that deals with the theory, techniques, and methodology used to study the physical chemistry of biomolecules.
It also deals with the mathematical approaches for the analysis of biochemical reaction and the modelling of biological systems. It provides insight into the structure of macromolecules, and how chemical structure influences the physical properties of a biological substance.
It involves the use of physics and physical chemistry principles and methodology to study biological systems. It employs various physical chemistry techniques such as chromatography, spectroscopy, electrophoresis, X-ray crystallography, electron microscopy, and hydrodynamics.
See also
Physical chemistry
|
https://en.wikipedia.org/wiki/Mechanician
|
A mechanician is an engineer or a scientist working in the field of mechanics, or in a related or sub-field: engineering or computational mechanics, applied mechanics, geomechanics, biomechanics, and mechanics of materials. Names other than mechanician have been used occasionally, such as mechaniker and mechanicist.
The term mechanician is also used by the Irish Navy to refer to junior engine room ratings. In the British Royal Navy, Chief Mechanicians and Mechanicians 1st Class were Chief Petty Officers, Mechanicians 2nd and 3rd Class were Petty Officers, Mechanicians 4th Class were Leading Ratings, and Mechanicians 5th Class were Able Ratings. The rate was only applied to certain technical specialists and no longer exists.
In the New Zealand Post Office, which provided telephone service prior to the formation of Telecom New Zealand in 1987, "Mechanician" was a job classification for workers who serviced telephone exchange switching equipment. The term seems to have originated in the era of the 7A Rotary system exchange, and was superseded by "Technician" circa 1975, perhaps because "Mechanician" was no longer considered appropriate after the first 2000-type Step-by-Step Strowger switch exchanges began to be introduced in 1952 (in Auckland, at Birkenhead exchange).
It is also the term by which makers of mechanical automata refer to their profession.
People who made lasting contributions to mechanics prior to the 20th century
Ibn al-Haytham: motion
Galileo Galilei: notion of strength
Robert Hooke: Hooke's law
Isaac Newton: Newton's laws, law of gravitation
Guillaume Amontons: laws of friction
Leonhard Euler: buckling, rigid body dynamics
Jean le Rond d'Alembert: d'Alembert's principle, Wave equation
Joseph Louis Lagrange: Lagrangian mechanics
Pierre-Simon Laplace: effects of surface tension
Sophie Germain: elasticity
Siméon Denis Poisson: elasticity
Claude-Louis Navier: elasticity, fluid mechanics
Augustin Louis Cauchy: elasticity
Barré de Saint-Venant
|
https://en.wikipedia.org/wiki/Clausen%20function
|
In mathematics, the Clausen function, introduced by Thomas Clausen, is a transcendental, special function of a single variable. It can variously be expressed in the form of a definite integral, a trigonometric series, and various other forms. It is intimately connected with the polylogarithm, inverse tangent integral, polygamma function, Riemann zeta function, Dirichlet eta function, and Dirichlet beta function.
The Clausen function of order 2 – often referred to as the Clausen function, despite being but one of a class of many – is given by the integral:
$$\operatorname{Cl}_2(\varphi) = -\int_0^{\varphi} \log \left| 2 \sin \frac{x}{2} \right| dx.$$
In the range $0 < \varphi < 2\pi$, the sine function inside the absolute value sign remains strictly positive, so the absolute value signs may be omitted. The Clausen function also has the Fourier series representation:
$$\operatorname{Cl}_2(\varphi) = \sum_{k=1}^{\infty} \frac{\sin k\varphi}{k^2} = \sin \varphi + \frac{\sin 2\varphi}{4} + \frac{\sin 3\varphi}{9} + \frac{\sin 4\varphi}{16} + \cdots$$
The Clausen functions, as a class of functions, feature extensively in many areas of modern mathematical research, particularly in relation to the evaluation of many classes of logarithmic and polylogarithmic integrals, both definite and indefinite. They also have numerous applications with regard to the summation of hypergeometric series, summations involving the inverse of the central binomial coefficient, sums of the polygamma function, and Dirichlet L-series.
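The series definition converges fast enough to check well-known special values numerically; for instance Cl₂(π/2) equals Catalan's constant. A small sketch (the truncation length is an arbitrary choice):

```python
import numpy as np

# Evaluate Cl_2(phi) by truncating its Fourier series
# sum_{k>=1} sin(k*phi)/k^2, then check the known special value
# Cl_2(pi/2) = K, Catalan's constant.
def clausen2(phi, terms=200_000):
    k = np.arange(1, terms + 1)
    return np.sum(np.sin(k * phi) / k**2)

catalan = 0.915965594177219
assert abs(clausen2(np.pi / 2) - catalan) < 1e-5
```

The 1/k² decay makes plain truncation adequate here; for high precision one would instead use the integral form or acceleration techniques.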
Basic properties
The Clausen function (of order 2) has simple zeros at all (integer) multiples of $\pi$, since if $k \in \mathbb{Z}$ is an integer, then $\sin k\pi = 0$:
$$\operatorname{Cl}_2(k\pi) = 0.$$
It has maxima at $\theta = \frac{\pi}{3} + 2k\pi$:
$$\operatorname{Cl}_2\!\left(\frac{\pi}{3} + 2k\pi\right) \approx 1.01494160\ldots$$
and minima at $\theta = -\frac{\pi}{3} + 2k\pi$:
$$\operatorname{Cl}_2\!\left(-\frac{\pi}{3} + 2k\pi\right) \approx -1.01494160\ldots$$
The following properties are immediate consequences of the series definition:
General definition
More generally, one defines the two generalized Clausen functions:
$$\operatorname{S}_z(\theta) = \sum_{k=1}^{\infty} \frac{\sin k\theta}{k^z}, \qquad \operatorname{C}_z(\theta) = \sum_{k=1}^{\infty} \frac{\cos k\theta}{k^z},$$
which are valid for complex $z$ with $\operatorname{Re} z > 1$. The definition may be extended to all of the complex plane through analytic continuation.
When $z$ is replaced with a non-negative integer, the standard Clausen functions are defined by the following Fourier series:
$$\operatorname{Cl}_{2m+2}(\theta) = \sum_{k=1}^{\infty} \frac{\sin k\theta}{k^{2m+2}}, \qquad \operatorname{Cl}_{2m+1}(\theta) = \sum_{k=1}^{\infty} \frac{\cos k\theta}{k^{2m+1}},$$
$$\operatorname{Sl}_{2m+2}(\theta) = \sum_{k=1}^{\infty} \frac{\cos k\theta}{k^{2m+2}}, \qquad \operatorname{Sl}_{2m+1}(\theta) = \sum_{k=1}^{\infty} \frac{\sin k\theta}{k^{2m+1}}.$$
N.B. The SL-type Clausen functions have the alternative notation and are sometimes referred to as the Glaisher–Clausen functions (after James Whitbread Lee Glaisher, hence the GL-
|
https://en.wikipedia.org/wiki/Von%20Neumann%20algebra
|
In mathematics, a von Neumann algebra or W*-algebra is a *-algebra of bounded operators on a Hilbert space that is closed in the weak operator topology and contains the identity operator. It is a special type of C*-algebra.
Von Neumann algebras were originally introduced by John von Neumann, motivated by his study of single operators, group representations, ergodic theory and quantum mechanics. His double commutant theorem shows that the analytic definition is equivalent to a purely algebraic definition as an algebra of symmetries.
Two basic examples of von Neumann algebras are as follows:
The ring of essentially bounded measurable functions on the real line is a commutative von Neumann algebra, whose elements act as multiplication operators by pointwise multiplication on the Hilbert space of square-integrable functions.
The algebra of all bounded operators on a Hilbert space is a von Neumann algebra, non-commutative if the Hilbert space has dimension at least 2.
Von Neumann algebras were first studied by von Neumann in 1929; he and Francis Murray developed the basic theory, under the original name of rings of operators, in a series of papers written in the 1930s and 1940s, reprinted in his collected works.
Introductory accounts of von Neumann algebras are given in several online lecture notes and textbooks; a three-volume work gives an encyclopedic account of the theory, and further books discuss more advanced topics.
Definitions
There are three common ways to define von Neumann algebras.
The first and most common way is to define them as weakly closed *-algebras of bounded operators (on a Hilbert space) containing the identity. In this definition the weak (operator) topology can be replaced by many other common topologies including the strong, ultrastrong or ultraweak operator topologies. The *-algebras of bounded operators that are closed in the norm topology are C*-algebras, so in particular any von Neumann algebra is a C*-algebra.
The seco
|
https://en.wikipedia.org/wiki/Hexapod%20%28robotics%29
|
A six-legged walking robot should not be confused with a Stewart platform, a kind of parallel manipulator used in robotics applications.
A hexapod robot is a mechanical vehicle that walks on six legs. Since a robot can be statically stable on three or more legs, a hexapod robot has a great deal of flexibility in how it can move. If legs become disabled, the robot may still be able to walk. Furthermore, not all of the robot's legs are needed for stability; other legs are free to reach new foot placements or manipulate a payload.
Many hexapod robots are biologically inspired by Hexapoda locomotion – the insectoid robots. Hexapods may be used to test biological theories about insect locomotion, motor control, and neurobiology.
Designs
Hexapod designs vary in leg arrangement. Insect-inspired robots are typically laterally symmetric, such as the RiSE robot at Carnegie Mellon. A radially symmetric hexapod is the ATHLETE (All-Terrain Hex-Limbed Extra-Terrestrial Explorer) robot at JPL.
Typically, individual legs have two to six degrees of freedom. Hexapod feet are typically pointed, but can also be tipped with adhesive material to help climb walls, or fitted with wheels so the robot can drive quickly when the ground is flat.
Locomotion
Most often, hexapods are controlled by gaits, which allow the robot to move forward, turn, and perhaps side-step. Some of the
most common gaits are as follows:
Alternating tripod: 3 legs on the ground at a time.
Quadruped.
Crawl: move just one leg at a time.
Gaits for hexapods are often stable, even in slightly rocky and uneven terrain.
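A toy schedule for the alternating tripod gait can make the idea concrete. The leg numbering and tripod grouping below are one conventional choice, not a standard:

```python
# Alternating-tripod gait schedule for a hexapod with legs numbered 0..5
# (0, 2, 4 on the left; 1, 3, 5 on the right). At every step one tripod
# is in stance (on the ground) while the other swings forward, so three
# legs always support the body.
TRIPOD_A = frozenset({0, 3, 4})
TRIPOD_B = frozenset({1, 2, 5})

def stance_legs(step):
    """Legs on the ground during the given step."""
    return TRIPOD_A if step % 2 == 0 else TRIPOD_B

for step in range(6):
    support = stance_legs(step)
    assert len(support) == 3                          # statically stable
    assert support | stance_legs(step + 1) == set(range(6))
```

Real controllers add timing, foot trajectories and terrain feedback on top of a schedule like this.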
Motion may also be nongaited, which means the sequence of leg motions is not fixed, but rather chosen by the computer in response to the sensed environment. This may be most helpful in very rocky terrain, but existing techniques for motion planning are computationally expensive.
Biologically inspired
Insects are chosen as models because their nervous systems are simpler than those of other animal species. Als
|
https://en.wikipedia.org/wiki/Posterior%20longitudinal%20ligament
|
The posterior longitudinal ligament is a ligament connecting the posterior surfaces of the vertebral bodies of all of the vertebrae of humans. It weakly prevents hyperflexion of the vertebral column. It also prevents posterior spinal disc herniation, although problems with the ligament can cause it.
Anatomy
The posterior longitudinal ligament is situated within the vertebral canal. It extends across the posterior surfaces of the bodies of the vertebrae. It extends superoinferiorly between the body of the axis superiorly, and (sources differ) the sacrum and possibly the coccyx or upper sacral canal inferiorly. It is continuous with the tectorial membrane of atlanto-axial joint superiorly, and with the deep dorsal sacrococcygeal ligament inferiorly.
The ligament gradually grows narrower inferiorly. The ligament is thicker in the thoracic than in the cervical and lumbar regions. In the thoracic and lumbar regions, it presents a series of dentations with intervening concave margins.
The posterior longitudinal ligament is generally quite wide and thin, and has serrated edges. It is narrow at the vertebral bodies (where it is firmly attached and where it covers the basivertebral veins), and broader over the intervertebral discs (to which it attaches less firmly to allow for the passage of the basivertebral veins).
Structure
This ligament is composed of smooth, shining, longitudinal fibers, denser and more compact than those of the anterior longitudinal ligament. It consists of superficial layers occupying the interval between three or four vertebrae, and deeper layers which extend between adjacent vertebrae. Deep fibres run between each vertebral body; superficial fibres run between multiple vertebrae.
Function
The posterior longitudinal ligament weakly prevents hyperflexion of the vertebral column. It also limits spinal disc herniation, although it is much narrower than the anterior longitudinal ligament.
Clinical significance
The posterior longitudinal ligam
|
https://en.wikipedia.org/wiki/CloudBees
|
CloudBees is an enterprise software delivery company. Sacha Labourey and Francois Dechery co-founded the company in early 2010, and investors include Matrix Partners, Lightspeed Venture Partners, HSBC, Verizon Ventures, Golub Capital, Goldman Sachs, Morgan Stanley, and Bridgepoint Group.
CloudBees is headquartered in San Jose, CA with additional offices in Raleigh, NC,
Lewes, DE, Richmond, VA, Berlin, London, and Neuchâtel, Switzerland. CloudBees' software originally included a Platform as a Service offering, which let developers use Jenkins in the cloud, along with an on-premise version of Jenkins with additional functions for enterprise companies. In 2020, CloudBees also introduced a Software Delivery Automation platform.
History
CloudBees was founded in 2010 by Sacha Labourey and Francois Dechery. Later that year, CloudBees acquired InfraDNA, a company run by Kohsuke Kawaguchi, the creator of Jenkins.
Since 2010, CloudBees has raised a total of over $250 million in venture financing from investors. CloudBees customers include Salesforce, Capital One, United States Air Force, and HSBC.
In September 2014, CloudBees stopped offering runtime PaaS services and began to focus on its enterprise Jenkins for on-premises and cloud-based continuous delivery. Also in 2014, Kohsuke Kawaguchi, the lead developer and founder of Jenkins, became CloudBees' CTO.
In 2016, the company added a Software as a Service (SaaS) version of its continuous delivery software.
In February 2018, CloudBees acquired the cloud-based continuous delivery company Codeship.
In 2019, CloudBees acquired Electric Cloud and Rollout.
In 2020, Kawaguchi left his role as CTO of CloudBees to found a new company, Launchable.
In 2021, CloudBees announced CloudBees Compliance, a compliance and risk analysis capability platform for software delivery. CloudBees raised $150 million in a series F funding round in December 2021.
In 2022, CloudBees announced the acquisition of ReleaseIQ, a SaaS-based offerin
|
https://en.wikipedia.org/wiki/UAN
|
UAN is a solution of urea and ammonium nitrate in water used as a fertilizer. The combination of urea and ammonium nitrate has an extremely low critical relative humidity (18% at 30 °C) and can therefore only be used in liquid fertilizers. The most commonly used grade of these fertilizer solutions is UAN 32.0.0 (32% N), known as UN32 or UN-32, which consists of 45% ammonium nitrate, 35% urea and only 20% water. Other grades are UAN 28, UAN 30 and UAN 18.
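The 32% nitrogen figure is consistent with the stated composition, as a quick check with standard molar masses shows:

```python
# Consistency check: 45% ammonium nitrate + 35% urea + 20% water by mass
# should carry roughly 32% nitrogen. Molar masses are standard values.
M_N = 14.007
an_n = 2 * M_N / 80.043      # NH4NO3: two N atoms, M = 80.043 g/mol
urea_n = 2 * M_N / 60.056    # CO(NH2)2: two N atoms, M = 60.056 g/mol
total_n = 0.45 * an_n + 0.35 * urea_n
assert abs(total_n - 0.32) < 0.005   # about 32.1% N by mass
```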
The solutions are quite corrosive to mild steel (up to per year on C1010 steel) and are therefore generally treated with a corrosion inhibitor to protect tanks, pipelines, nozzles, etc. Urea–ammonium nitrate solutions should not be combined with calcium ammonium nitrate (CAN-17) or other solutions prepared from calcium nitrate, as a thick, milky-white insoluble precipitate forms that may plug nozzles.
Physical and chemical characteristics of urea ammonium nitrate solutions
The solutions contain a remarkably low amount of water and nevertheless have a low salt-out temperature:
|
https://en.wikipedia.org/wiki/Algebra%20of%20physical%20space
|
In physics, the algebra of physical space (APS) is the use of the Clifford or geometric algebra Cl3,0(R) of the three-dimensional Euclidean space as a model for (3+1)-dimensional spacetime, representing a point in spacetime via a paravector (3-dimensional vector plus a 1-dimensional scalar).
The Clifford algebra Cl3,0(R) has a faithful representation, generated by Pauli matrices, on the spin representation C2; further, Cl3,0(R) is isomorphic to the even subalgebra Cl(0)3,1(R) of the Clifford algebra Cl3,1(R).
APS can be used to construct a compact, unified and geometrical formalism for both classical and quantum mechanics.
APS should not be confused with spacetime algebra (STA), which concerns the Clifford algebra Cl1,3(R) of the four-dimensional Minkowski spacetime.
Special relativity
Spacetime position paravector
In APS, the spacetime position is represented as the paravector
x = x0 + x1 e1 + x2 e2 + x3 e3,
where the time is given by the scalar part x0 = t, and e1, e2, e3 are the standard basis for position space. Throughout, units such that c = 1 are used, called natural units. In the Pauli matrix representation, the unit basis vectors are replaced by the Pauli matrices and the scalar part by the identity matrix. This means that the Pauli matrix representation of the spacetime position is
x = [ [ t + x3, x1 − i x2 ], [ x1 + i x2, t − x3 ] ].
Lorentz transformations and rotors
The restricted Lorentz transformations that preserve the direction of time and include rotations and boosts can be performed by an exponentiation of the spacetime rotation biparavector W:
L = e^(W/2).
In the matrix representation, the Lorentz rotor is seen to form an instance of the SL(2,C) group (special linear group of degree 2 over the complex numbers), which is the double cover of the Lorentz group. The unimodularity of the Lorentz rotor translates into the following condition in terms of the product of the Lorentz rotor with its Clifford conjugation:
L L̄ = 1.
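As a concrete numerical sketch (not from the source; NumPy and an arbitrary rapidity value are my assumptions), the unimodularity condition and the action of a pure boost rotor can be checked in the Pauli matrix representation, where the Clifford conjugate of a paravector corresponds to the 2×2 matrix adjugate, so unimodularity reduces to det L = 1:

```python
import numpy as np

# Pauli matrices: the matrix representation of the APS basis vectors e1, e2, e3
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

def paravector(t, x, y, z):
    """Spacetime point: scalar part (time) plus vector part."""
    return t * I + x * s1 + y * s2 + z * s3

# A pure boost rotor along e3 with rapidity w: L = exp(e3 w/2)
w = 0.5
L = np.cosh(w / 2) * I + np.sinh(w / 2) * s3

# Unimodularity: in the 2x2 representation L times its Clifford
# conjugate (the matrix adjugate) equals det(L) times the identity,
# so the condition is det(L) = 1, i.e. L lies in SL(2,C).
assert np.isclose(np.linalg.det(L), 1.0)

# Boost the event (t, x, y, z) = (1, 0, 0, 0) via x' = L x L†
x0 = paravector(1, 0, 0, 0)
x1 = L @ x0 @ L.conj().T
t_new = x1.trace().real / 2           # scalar part = half the trace
z_new = (x1 @ s3).trace().real / 2    # e3 component
assert np.isclose(t_new, np.cosh(w))  # time dilates by cosh(w)
assert np.isclose(z_new, np.sinh(w))  # position shifts by sinh(w)
```

The recovered components reproduce the familiar hyperbolic form of a Lorentz boost.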
This Lorentz rotor can always be decomposed into two factors, one Hermitian B and the other unitary R, such that
L = B R.
The unitary element R is cal
|
https://en.wikipedia.org/wiki/Organix%20Inc
|
Organix Inc is a US fine chemicals company specialising in chemical synthesis of analytical standards and custom synthesis of finished compounds and intermediates.
Chemistry
Organix carries out research and development of novel molecules used in a variety of pharmaceutical research applications. Some notable compounds include:
O-526
O-774
O-806
O-823
O-1057
O-1072 (Tropoxane)
O-1125
O-1238
O-1269
O-1270
O-1399
O-1602
O-1656
O-1660
O-1812
O-1871
O-1918
O-2050
O-2113
O-2172
O-2371
O-2372
O-2387
O-2390
O-2394
O-2545
O-2694
O-4210
O-4310
Life sciences industry
|
https://en.wikipedia.org/wiki/Gladys%20Anderson%20Emerson
|
Gladys Ludwina Anderson Emerson (July 1, 1903 – January 18, 1984) was an American historian, biochemist and nutritionist who researched the impact of vitamins on the body. She was the first person to isolate Vitamin E in a pure form, and won the Garvan–Olin Medal in 1952.
Early life and education
Gladys Anderson was born on July 1, 1903, in Caldwell, Kansas; she was the only child of Otis and Louise (Williams) Anderson. She attended grade school in Fort Worth, Texas, and high school in El Reno, Oklahoma.
She received her Bachelor of Science (B.S.) degree in chemistry and physics and her Artium Baccalaureatus (A.B.) degree in English from the Oklahoma College for Women. In 1926, she earned her Master of Arts (M.A.) degree in history and economics from Stanford.
After being a department head at a junior high school, teaching geography and history, she accepted a fellowship in biochemistry and nutrition at the University of California, Berkeley. She completed her Ph.D. in animal nutrition and biochemistry at Berkeley in 1932. In 1932, she married her colleague, Dr. Oliver Huddleston Emerson. Immediately following, they both were accepted as postdoctoral fellows at the University of Göttingen, Germany, where she worked with Nobel prize winners Adolf Otto Reinhold Windaus and Adolf Butenandt.
Research career
From 1933 to 1942, Anderson was a research associate at the Institute of Experimental Biology at the University of California at Berkeley, working with Herbert McLean Evans. Herbert Evans had identified and named Vitamin E in 1922, but Gladys Emerson was the first person to isolate it, by obtaining alpha-tocopherol from wheat germ oil. In 1940, she and her husband divorced.
In 1942, she went to work for Merck & Co. as a staff researcher, where she remained for 14 years, culminating in her role as head of the department of animal nutrition. She worked with rhesus monkeys, studying the vitamin B complex. At Merck, she identified the impact of withholding B6 as con
|
https://en.wikipedia.org/wiki/Adaptive%20sampling
|
Adaptive sampling is a technique used in computational molecular biology to efficiently simulate protein folding when coupled with molecular dynamics simulations.
Background
Proteins spend a large portion – nearly 96% in some cases – of their folding time "waiting" in various thermodynamic free energy minima. Consequently, a straightforward simulation of this process would spend a great deal of computation on these states, with the transitions between them – the aspects of protein folding of greater scientific interest – taking place only rarely. Adaptive sampling exploits this property to focus simulation on the regions of the protein's phase space between these states. Using adaptive sampling, molecular simulations that previously would have taken decades can be performed in a matter of weeks.
Theory
If a protein folds through the metastable states A -> B -> C, researchers can calculate the length of the transition time between A and C by simulating the A -> B transition and the B -> C transition. The protein may fold through alternative routes which may overlap in part with the A -> B -> C pathway. Decomposing the problem in this manner is efficient because each step can be simulated in parallel.
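A toy kinetic sketch (illustrative only: the exponential waiting-time model, rate values, and sample counts are my assumptions, not from the source) shows why the decomposition works. The mean A → C time equals the sum of the independently estimated leg times, and each leg can be sampled in parallel:

```python
import random

def first_passage_time(rate, rng):
    """Waiting time for a single memoryless (Poisson-process) transition."""
    return rng.expovariate(rate)

rng = random.Random(42)
k_ab, k_bc = 2.0, 0.5   # hypothetical rates for A -> B and B -> C
n = 200_000

# Direct simulation of the full A -> B -> C pathway
direct = sum(first_passage_time(k_ab, rng) + first_passage_time(k_bc, rng)
             for _ in range(n)) / n

# Adaptive-sampling style: estimate each leg independently (parallelizable)
leg_ab = sum(first_passage_time(k_ab, rng) for _ in range(n)) / n
leg_bc = sum(first_passage_time(k_bc, rng) for _ in range(n)) / n

# Mean A -> C time is the sum of mean leg times: 1/2.0 + 1/0.5 = 2.5
assert abs(direct - (leg_ab + leg_bc)) < 0.05
```

Real adaptive sampling works with full molecular dynamics trajectories and Markov state models rather than single rates, but the additivity of the legs is the same idea.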
Applications
Adaptive sampling is used by the Folding@home distributed computing project in combination with Markov state models.
Disadvantages
While adaptive sampling is useful for short simulations, longer trajectories may be more helpful for certain types of biochemical problems.
See also
Folding@home
Hidden Markov model
Computational biology
Molecular biology
|
https://en.wikipedia.org/wiki/Clinical%20neurochemistry
|
Clinical neurochemistry is the field of neurological biochemistry which relates biochemical phenomena to clinical symptomatic manifestations in humans. While neurochemistry is mostly associated with the effects of neurotransmitters and similarly functioning chemicals on neurons themselves, clinical neurochemistry relates these phenomena to system-wide symptoms. Clinical neurochemistry is related to neurogenesis, neuromodulation, neuroplasticity, neuroendocrinology, and neuroimmunology in the context of associating neurological findings at both lower and higher level organismal functions.
Neuropharmacology and drug action
The integration of knowledge concerning the molecular and cellular actions of a drug within the brain circuitry leads to an overall understanding of a neurological drug's action mechanisms. This understanding of drug action in turn can be extrapolated to account for system-wide or clinical manifestations which are observed as symptoms. The clinical effects of a neural drug are due to both immediate changes in homeostasis and long-term neural adaptations characterized by the phenomena neural plasticity.
The most basic and fundamental phenomenon in neuropharmacology is the binding of a drug or neurologically active substance to a cellular target. One assay to determine the extent to which a ligand binds to its receptor is the radioligand binding assay (RBA), in which specific binding of a radioactively labeled ligand is taken as the difference between saturated and non-saturated tissue samples. While the RBA assumes that the prepared tissue has just one molecular target per ligand, in actuality this may not be the case. For example, serotonin binds to many diverse serotonin receptors, which makes the RBA quite difficult to interpret. Because many receptors are essentially enzymes, the field of pharmacokinetics utilizes the Michaelis–Menten equation to describe drug affinity (dissociation constant Kd) and total binding (Bmax).
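The Michaelis–Menten-style saturation relationship referred to above can be written as B = Bmax·[L] / (Kd + [L]); a minimal sketch (parameter values are hypothetical, and the function name is mine):

```python
def specific_binding(ligand_conc, bmax, kd):
    """Saturation binding: B = Bmax * [L] / (Kd + [L]).

    Half of the receptors are occupied when [L] equals Kd, and binding
    approaches Bmax as the ligand concentration grows large.
    """
    return bmax * ligand_conc / (kd + ligand_conc)

# Hypothetical parameters: Bmax = 100 fmol/mg, Kd = 2 nM
assert specific_binding(2.0, 100.0, 2.0) == 50.0   # half-saturation at [L] = Kd
assert specific_binding(1e6, 100.0, 2.0) > 99.9    # approaches Bmax
```

In practice Kd and Bmax are estimated by fitting this curve to the specific-binding data from the RBA.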
|
https://en.wikipedia.org/wiki/Optocollic%20reflex
|
Optocollic reflex is a gaze stabilization reflex that occurs in birds in response to visual (optokinetic) inputs, and leads to head movements that compensate for passive displacements and rotations of the animal. The reflex seems to be more prominent when the bird is flying (or at least held in a "flying position"). The brain systems involved in the reflex are the nucleus of the basal optic roots, the pretectal nucleus lentiformis mesencephali, the vestibular nuclei, and the cerebellum.
|
https://en.wikipedia.org/wiki/Arithmetic%20for%20Parents
|
Arithmetic for Parents (Sumizdat, 2007) is a book about mathematics education aimed at parents and teachers.
The author, Ron Aharoni, is a professor of mathematics at the Technion; he wrote the book based on his experiences teaching elementary mathematics to Israeli schoolchildren.
The book was originally written in Hebrew and was translated into English, Portuguese and Dutch.
|
https://en.wikipedia.org/wiki/Ferric%20uptake%20regulator%20family
|
In molecular biology, the ferric uptake regulator family is a family of bacterial proteins involved in regulating metal ion uptake and in metal homeostasis. The family is named for its founding member, known as the ferric uptake regulator or ferric uptake regulatory protein (Fur). Fur proteins are responsible for controlling the intracellular concentration of iron in many bacteria. Iron is essential for most organisms, but its concentration must be carefully managed over a wide range of environmental conditions; high concentrations can be toxic due to the formation of reactive oxygen species.
Function
Members of the ferric uptake regulator family are transcription factors that primarily exert their regulatory effects as repressors: when bound to their cognate metal ion, they are capable of binding DNA and preventing expression of the genes they regulate, but under low concentrations of metal, they undergo a conformational change that prevents DNA binding and lifts the repression. In the case of the ferric uptake regulator protein itself, its immediate downstream target is a noncoding RNA called RyhB.
In addition to the ferric uptake regulator protein, members of the Fur family are also involved in maintaining homeostasis with respect to other ions:
Mur, responsive to manganese.
Nur, responsive to nickel.
PerR, responsive to peroxide; PerR monomers contain two binding sites and occur in zinc/iron and zinc/manganese forms.
Zur, responsive to zinc; Zur regulates uptake and transport through a regulon involving ZinT and the transporter ZnuABC.
Irr, responsive to iron through the status of heme biosynthesis. Has both activator and repressor function. Prevalent in Rhizobium, Bradyrhizobium and many other alphaproteobacteria.
The iron dependent repressor family is a functionally similar but non-homologous family of proteins involved in iron homeostasis in prokaryotes.
Relationship to virulence
Metal homeostasis can be a factor in bacterial virulence, an observatio
|
https://en.wikipedia.org/wiki/GRANK
|
GRANK, or Global Rank, is a ranking of the rarity of a species, and is a useful tool in determining conservation needs.
Global Ranks are derived from a consensus of various conservation data centres, natural heritage programmes, scientific experts and NatureServe.
They are based on the total number of known, extant populations worldwide, and to what degree they are threatened by destruction. Criteria also include securely protected populations, size of populations, and the ability of the species to persist.
G1 — Critically Imperiled At very high risk of extinction or collapse due to very restricted range, very few populations or occurrences, very steep declines, very severe threats, or other factors.
G2 — Imperiled At high risk of extinction or collapse due to restricted range, few populations or occurrences, steep declines, severe threats, or other factors.
G3 — Vulnerable At moderate risk of extinction or collapse due to a fairly restricted range, relatively few populations or occurrences, recent and widespread declines, threats, or other factors.
G4 — Apparently Secure At fairly low risk of extinction or collapse due to an extensive range and/or many populations or occurrences, but with possible cause for some concern as a result of local recent declines, threats, or other factors.
G5 — Secure At very low risk of extinction or collapse due to a very extensive range, abundant populations or occurrences, and little to no concern from declines or threats.
GH — Possibly Extinct (species) or Possibly Collapsed (ecosystems/communities) Known from only historical occurrences but still some hope of rediscovery. Examples of evidence include (1) that a species has not been documented in approximately 20-40 years despite some searching and/or some evidence of significant habitat loss or degradation; (2) that a species or ecosystem has been searched for unsuccessfully, but not thoroughly enough to presume that it is extinct or collapsed throughout its range.
|
https://en.wikipedia.org/wiki/Louis%20de%20Branges%20de%20Bourcia
|
Louis de Branges de Bourcia (born August 21, 1932) is a French-American mathematician. He is the Edward C. Elliott Distinguished Professor of Mathematics at Purdue University in West Lafayette, Indiana. He is best known for proving the long-standing Bieberbach conjecture in 1984, now called de Branges's theorem. He claims to have proved several important conjectures in mathematics, including the generalized Riemann hypothesis.
Born to American parents who lived in Paris, de Branges moved to the US in 1941 with his mother and sisters. His native language is French. He did his undergraduate studies at the Massachusetts Institute of Technology (1949–53), and received a PhD in mathematics from Cornell University (1953–57). His advisors were Wolfgang Fuchs and then-future Purdue colleague Harry Pollard. He spent two years (1959–60) at the Institute for Advanced Study and another two (1961–62) at the Courant Institute of Mathematical Sciences. He was appointed to Purdue in 1962.
An analyst, de Branges has made incursions into real, functional, complex, harmonic (Fourier) and Diophantine analyses. As far as particular techniques and approaches are concerned, he is an expert in spectral and operator theories.
Works
De Branges' proof of the Bieberbach conjecture was not initially accepted by the mathematical community. Rumors of his proof began to circulate in March 1984, but many mathematicians were skeptical because de Branges had earlier announced some false results, including a claimed proof of the invariant subspace conjecture in 1964 (incidentally, in December 2008 he published a new claimed proof for this conjecture on his website). It took verification by a team of mathematicians at the Steklov Institute of Mathematics in Leningrad to validate de Branges' proof, a process that took several months and later led to a significant simplification of the main argument. The original proof uses hypergeometric functions and innovative tools from the theory of Hilbert spaces of e
|
https://en.wikipedia.org/wiki/Numerical%20semigroup
|
In mathematics, a numerical semigroup is a special kind of semigroup. Its underlying set is the set of all nonnegative integers except a finite number, and the binary operation is addition of integers. Also, the integer 0 must be an element of the semigroup. For example, while the set {0, 2, 3, 4, 5, 6, ...} is a numerical semigroup, the set {0, 1, 3, 5, 6, ...} is not, because 1 is in the set and 1 + 1 = 2 is not in the set. Numerical semigroups are commutative monoids and are also known as numerical monoids.
The definition of numerical semigroup is intimately related to the problem of determining nonnegative integers that can be expressed in the form x1n1 + x2 n2 + ... + xr nr for a given set {n1, n2, ..., nr} of positive integers and for arbitrary nonnegative integers x1, x2, ..., xr. This problem had been considered by several mathematicians like Frobenius (1849–1917) and Sylvester (1814–1897) at the end of the 19th century. During the second half of the twentieth century, interest in the study of numerical semigroups resurfaced because of their applications in algebraic geometry.
Definition and examples
Definition
Let N be the set of nonnegative integers. A subset S of N is called a numerical semigroup if the following conditions are satisfied.
0 is an element of S
N − S, the complement of S in N, is finite.
If x and y are in S then x + y is also in S.
There is a simple method to construct numerical semigroups. Let A = {n1, n2, ..., nr} be a nonempty set of positive integers. The set of all integers of the form x1 n1 + x2 n2 + ... + xr nr is the subset of N generated by A and is denoted by 〈 A 〉. The following theorem fully characterizes numerical semigroups.
Theorem
Let S be the subsemigroup of N generated by A. Then S is a numerical semigroup if and only if gcd (A) = 1. Moreover, every numerical semigroup arises in this way.
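The theorem can be illustrated computationally. A small sketch (function names and the enumeration bound are mine, not standard) that enumerates the elements of 〈A〉 up to a bound and reports the finite complement N − S:

```python
from math import gcd
from functools import reduce

def generated_set(gens, limit):
    """Elements of <A> up to `limit`: nonnegative integer combinations of gens."""
    reachable = {0}
    for n in range(1, limit + 1):
        # n is reachable iff n - g is reachable for some generator g
        if any(n - g in reachable for g in gens if n - g >= 0):
            reachable.add(n)
    return reachable

def gaps(gens, limit=200):
    """The finite complement N - S, valid when gcd(A) = 1."""
    assert reduce(gcd, gens) == 1, "S is a numerical semigroup iff gcd(A) = 1"
    s = generated_set(gens, limit)
    return sorted(set(range(limit + 1)) - s)

# <3, 5> misses exactly {1, 2, 4, 7}; the largest gap, 7 = 3*5 - 3 - 5,
# is the Frobenius number of {3, 5}.
assert gaps([3, 5]) == [1, 2, 4, 7]
```

If gcd(A) were greater than 1, every element of 〈A〉 would be a multiple of that gcd and the complement would be infinite, which is exactly what the theorem rules out.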
Examples
The following subsets of N are numerical semigroups.
〈 1 〉 = {0, 1, 2, 3, ...}
〈 1, 2 〉 = {0, 1
|
https://en.wikipedia.org/wiki/VividView
|
VividView Processor is the world's first LSI for improving visibility in displays when exposed to direct sunlight, while reducing power consumption at the same time. The technology was developed by Fujitsu Ten in cooperation with Fujitsu Semiconductor Limited and Fujitsu Laboratories.
Features of VividView Processor 3
Highly visible display even in direct sunlight
Energy conservation through backlight control
Sharp, clear, and vivid display of images
VividView technology overview
Color correction in direct sunlight
Immediate correction of tone and saturation: First the image is analyzed and differences in tone are assigned to each area of the image. Then, "pinpoint correction" magnifies the contrast in even the brightest regions, and "tone correction" adjusts dark areas to a lighter tone that will stand out better. Finally, "saturation correction" limits the dilution of color and brightens the image. The combination of these technologies improves visibility through the immediate and automatic correction of contrast and saturation in dark areas.
Adjustment of correction strength based on the brightness of direct sunlight: A sensor detects the brightness of sunlight, which can fluctuate for a variety of reasons (season, time of day, location). Correction strength is adjusted based on brightness level, to achieve optimal image correction according to the degree of deterioration in image quality. A rapid response allows adjustments to be made even during repeated and drastic changes in bright sunlight in a short period of time.
Backlight control
Power consumption of display reduced 54% maximum, 24% average
Energy conserved in backlight while limiting deterioration in image quality: Image data is analyzed for each scene to provide optimal brightness: low-level brightness for dark images, and higher-level brightness for light images. The brightness level of the image data is corrected according to the brightness of the backlight, to maintain the same appearance as the o
|
https://en.wikipedia.org/wiki/Tertiary%20review
|
In software engineering, a tertiary review is a systematic review of systematic reviews. It is also referred to as a tertiary study in the software engineering literature, while umbrella review is the term more commonly used in medicine.
Kitchenham et al. suggest that methodologically there is no difference between a systematic review and a tertiary review. However, as the software engineering community has started performing tertiary reviews new concerns unique to tertiary reviews have surfaced. These include the challenge of quality assessment of systematic reviews, search validation and the additional risk of double counting.
Examples of tertiary reviews in software engineering literature
Test quality
Machine Learning
Test-driven development
|
https://en.wikipedia.org/wiki/Seed%20company
|
Seed companies produce and sell seeds for flowers, fruits and vegetables to commercial growers and amateur gardeners. The production of seed is a multibillion-dollar business, which uses growing facilities and growing locations worldwide. While most of the seed is produced by large specialist growers, large amounts are also produced by small growers that produce only one to a few crop types. The larger companies supply seed both to commercial resellers and wholesalers. The resellers and wholesalers sell to vegetable and fruit growers, and to companies who package seed into packets and sell them on to the amateur gardener.
Most seed companies or resellers that sell to retail produce a catalog, for seed to be sown the following spring, that is generally published during early winter. These catalogs are eagerly awaited by the amateur gardener, as during winter months there is little that can be done in the garden so this time can be spent planning the following year’s gardening. The largest collection of nursery and seed trade catalogs in the U.S. is held at the National Agricultural Library where the earliest catalogs date from the late 18th century, with most published from the 1890s to the present.
Seed companies produce a huge range of seeds from highly developed F1 hybrids to open pollinated wild species. They have extensive research facilities to produce plants with genetic materials that result in improved uniformity and appeal. These qualities might include disease resistance, higher yields, dwarf habit and vibrant or new colors. These improvements are often closely guarded to protect them from being utilized by other producers, thus plant cultivars are often sold under the company's own name and protected by international laws from being grown for seed production by others. Along with the growth in the allotment movement, and the increasing popularity of gardening, there have emerged many small independent seed companies. Many of these are active in seed co
|
https://en.wikipedia.org/wiki/KOI8-B
|
KOI8-B is the informal name for an 8-bit Roman / Cyrillic character set constituting the common subset of the major KOI-8 variants (KOI8-R, KOI8-U, KOI8-RU, KOI8-E, KOI8-F). Accordingly, it is closely related to KOI8-R, but defines only the letter subset in the upper half. As such it was implemented by some font vendors for PC Unixes like Xenix in the late 1980s.
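Since KOI8-B is by construction the letter subset shared with KOI8-R, Python's built-in koi8_r codec can illustrate the common layout (the byte values below are from the shared Cyrillic block; this is my illustration, not part of any KOI8-B specification):

```python
# In the KOI8 family, uppercase Cyrillic letters occupy 0xE0-0xFF in an
# ordering derived from Latin transliteration, so the bytes below decode
# identically in KOI8-B and KOI8-R.
data = bytes([0xE1, 0xE2, 0xF7])     # KOI8 codes for А, Б, В
assert data.decode("koi8_r") == "АБВ"

# The lower half is plain ASCII in every KOI8 variant.
assert bytes(range(0x41, 0x44)).decode("koi8_r") == "ABC"
```

Bytes outside the common subset (box-drawing characters, Ukrainian letters, etc.) are exactly where the KOI8 variants disagree, which is why KOI8-B excludes them.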
Character set
The following table shows the KOI8-B encoding. Each character is shown with its equivalent Unicode code point.
See also
KOI character encodings
|
https://en.wikipedia.org/wiki/Butter%20pecan
|
Butter pecan is a flavor, prominent especially in the United States, in ice cream, cakes, and cookies. Roasted pecans, butter, and vanilla flavor are used in butter pecan baked goods. Butter pecan ice cream is smooth vanilla ice cream with a slight buttery flavor, with pecans added. It is manufactured by many major ice cream brands. A variant of the recipe is butter almond, which replaces the pecans with almonds.
Butter pecan is a popular flavor of ice cream produced by many companies and is also one of the thirty-one flavors of Baskin Robbins.
See also
List of cookies
List of ice cream flavors
|
https://en.wikipedia.org/wiki/American%20Journal%20of%20Physiology
|
The American Journal of Physiology is a peer-reviewed scientific journal on physiology published by the American Physiological Society.
Vols. for 1898–1941 and 1948–56 include the Society's proceedings, including abstracts of papers presented at the 10th–53rd annual meetings, and the 1948–56 fall meetings.
Subjournals
The American Journal of Physiology has seven subjournals; according to the 2019 Journal Citation Reports their impact factors vary from 2.992 to 4.406:
AJP-Cell Physiology
AJP-Endocrinology and Metabolism
AJP-Gastrointestinal and Liver Physiology
AJP-Heart and Circulatory Physiology
AJP-Lung Cellular and Molecular Physiology
AJP-Regulatory, Integrative and Comparative Physiology
AJP-Renal Physiology
|
https://en.wikipedia.org/wiki/Rate%20Based%20Satellite%20Control%20Protocol
|
In computer networking, Rate Based Satellite Control Protocol (RBSCP) is a tunneling method proposed by Cisco to improve the performance of satellite network links with high latency and error rates.
The problem RBSCP addresses is that the long RTT on the link keeps TCP virtual circuits in slow start for a long time. This, in addition to the high loss rate, gives a very low usable bandwidth on the channel. Since satellite links may be high-throughput, the overall link utilization may be below what is optimal from a technical and economic view.
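The slow-start penalty is easy to quantify. A sketch (the link numbers are hypothetical, and this assumes the idealized model in which the congestion window doubles once per RTT):

```python
def slow_start_rounds(target_segments, initial_window=1):
    """RTTs needed for TCP slow start to grow the congestion window
    from initial_window to at least target_segments (doubling per RTT)."""
    window, rounds = initial_window, 0
    while window < target_segments:
        window *= 2
        rounds += 1
    return rounds

# Hypothetical geostationary link: 600 ms RTT, 256-segment window needed
# to fill the pipe (bandwidth-delay product).
rtt = 0.600
rounds = slow_start_rounds(256)
assert rounds == 8                         # 2**8 = 256
assert abs(rounds * rtt - 4.8) < 1e-9      # ~4.8 s spent just ramping up
```

On a terrestrial link with a 20 ms RTT the same ramp-up would take about 0.16 s, which is why the long-RTT case motivates mechanisms like RBSCP.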
Means of operation
RBSCP works by tunneling the usual IP packets within IP packets. The transport protocol identifier is 199. On each end of the tunnel, routers buffer packets to utilize the link better. In addition to this, RBSCP tunnel routers:
modify TCP options at connection setup.
implement a Performance Enhancing Proxy (PEP) that resends lost packets on behalf of the client, so loss is not interpreted as congestion.
External links
https://web.archive.org/web/20110706144353/http://cisco.biz/en/US/docs/ios/12_3t/12_3t7/feature/guide/gt_rbscp.html
Computer networking
|
https://en.wikipedia.org/wiki/Society%20for%20Applied%20Spectroscopy
|
The Society for Applied Spectroscopy (SAS) is an organization promoting research and education in the fields of spectroscopy, optics, and analytical chemistry. Founded in 1958, it is currently headquartered in Frederick, MD. In 2006 it had about 2,000 members worldwide.
SAS is perhaps best known for its technical conference with the Federation of Analytical Chemistry and Spectroscopy Societies and short courses on various aspects of spectroscopy and data analysis. The society publishes the scientific journal Applied Spectroscopy.
SAS is affiliated with American Institute of Physics (AIP), Coblentz, Council for Near Infrared Spectroscopy (CNIRS), Federation of Analytical Chemistry and Spectroscopy Societies (FACSS), The Instrumentation, Systems, and Automation Society (ISA), and Optical Society of America (OSA).
SAS provides a number of awards with honorariums to encourage and recognize outstanding achievements.
See also
Spectroscopy
American Institute of Physics (AIP)
The Instrumentation, Systems, and Automation Society (ISA)
Optical Society of America (OSA)
|
https://en.wikipedia.org/wiki/Sulfur%20cycle
|
The sulfur cycle is a biogeochemical cycle in which sulfur moves between rocks, waterways and living systems. It is important in geology as it affects many minerals and in life because sulfur is an essential element (CHNOPS), being a constituent of many proteins and cofactors, and sulfur compounds can be used as oxidants or reductants in microbial respiration. The global sulfur cycle involves the transformations of sulfur species through different oxidation states, which play an important role in both geological and biological processes.
Steps of the sulfur cycle are:
Mineralization of organic sulfur into inorganic forms, such as hydrogen sulfide (H2S), elemental sulfur, as well as sulfide minerals.
Oxidation of hydrogen sulfide, sulfide, and elemental sulfur (S) to sulfate (SO42−).
Reduction of sulfate to sulfide.
Incorporation of sulfide into organic compounds (including metal-containing derivatives).
Disproportionation of sulfur compounds (elemental sulfur, sulfite, thiosulfate) into sulfate and hydrogen sulfide.
These are often termed as follows:
Assimilative sulfate reduction (see also sulfur assimilation) in which sulfate (SO42−) is reduced by plants, fungi and various prokaryotes. The oxidation states of sulfur are +6 in sulfate and –2 in R–SH.
Desulfurization in which organic molecules containing sulfur can be desulfurized, producing hydrogen sulfide gas (H2S, oxidation state = –2). An analogous process for organic nitrogen compounds is deamination.
Oxidation of hydrogen sulfide produces elemental sulfur (S8), oxidation state = 0. This reaction occurs in the photosynthetic green and purple sulfur bacteria and some chemolithotrophs. Often the elemental sulfur is stored as polysulfides.
Oxidation of elemental sulfur by sulfur oxidizers produces sulfate.
Dissimilative sulfur reduction in which elemental sulfur can be reduced to hydrogen sulfide.
Dissimilative sulfate reduction in which sulfate reducers generate hydrogen sulfide from sulfate.
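Disproportionation conserves electrons between the oxidized and reduced products. For elemental sulfur, a sketch of the bookkeeping (the specific balanced equation is my illustrative example, using the standard oxidation states named above):

```python
# Disproportionation of elemental sulfur (oxidation state 0):
#   4 S + 4 H2O -> 3 H2S + SO4(2-) + 2 H+
# Three S atoms are reduced to sulfide (-2); one is oxidized to sulfate (+6).
electrons_gained = 3 * (0 - (-2))   # by the three atoms reduced to H2S
electrons_lost   = 1 * (6 - 0)      # by the one atom oxidized to sulfate
assert electrons_gained == electrons_lost == 6
```

The same electron-balance argument applies to the disproportionation of sulfite and thiosulfate, with the intermediate oxidation states +4 and +2 splitting toward +6 and −2.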
Sulfur oxidation
|
https://en.wikipedia.org/wiki/Fantasy%20Strike
|
Fantasy Strike is a free-to-play fighting video game developed and published by Sirlin Games. It revolves around one-on-one battles that require fast reflexes. The game was released on July 25, 2019 for Linux, macOS, Microsoft Windows, Nintendo Switch, and PlayStation 4.
Gameplay
Fantasy Strike is designed to reduce unnecessary complexity compared to traditional fighting games, having dedicated buttons for every action, including melee, jump/throw, special moves, and a super move. Each player picks a character to play, and the characters are then placed in an arena. By performing various attacks unique to the character, each player tries to bring their opponent's health pool down to zero to win a round. Whoever is the first to win four out of seven rounds is given the match win. In addition to attacks, players can use blocking to defend against attacks and break through blocks by using throws. A feature unique to Fantasy Strike is the "Yomi Counter", which can be performed when not attacking by pressing no buttons at all. When yomi countering, if the character is hit by an opponent's throw, they won't get hit and instead perform a counter throw, turning the tables.
The game features various modes. Single-player pits the player in matches against AI-controlled opponents, with different modes putting a different spin on the formula. Arcade adds some story through artwork and dialog, as well as a stronger version of the character Midori who serves as the final challenge. Survival provides a stream of progressively stronger opponents. Daily Challenge is similar to Survival, but can only be played once per day and compares the score between the players. Single Match allows a selection of any opponent and difficulty for a standard match. Boss Rush, which was added afterwards for the full release, allows the player to pick up and use power-ups, but introduces special opponents that also possess power-ups and get stronger after each battle.
Multiplayer allows player-against-player mat
|
https://en.wikipedia.org/wiki/In%20silico%20clinical%20trials
|
An in silico clinical trial, also known as a virtual clinical trial, is an individualized computer simulation used in the development or regulatory evaluation of a medicinal product, device, or intervention. While completely simulated clinical trials are not feasible with current technology and understanding of biology, its development would be expected to have major benefits over current in vivo clinical trials, and research on it is being pursued.
History
The term in silico indicates any use of computers in clinical trials, even if limited to management of clinical information in a database.
Rationale
The traditional model for the development of medical treatments and devices begins with pre-clinical development. In laboratories, test-tube and other in vitro experiments establish the plausibility for the efficacy of the treatment. Then in vivo animal models, with different species, provide guidance on the efficacy and safety of the product for humans. With success in both in vitro and in vivo studies, scientists can propose clinical trials to test whether the product should be made available for humans. Clinical trials are often divided into four phases. Phase 3 involves testing a large number of people. When a medication fails at this stage, the financial losses can be catastrophic.
Predicting low-frequency side effects has been difficult, because such side effects may not become apparent until the treatment is adopted by many patients. The appearance of severe side effects in phase 3 often causes development to stop, for ethical and economic reasons. Also, in recent years many candidate drugs have failed in phase 3 trials because of lack of efficacy rather than for safety reasons. One reason for failure is that traditional trials aim to establish efficacy and safety for most subjects, rather than for individual subjects, and so efficacy is determined by a statistic of central tendency for the trial. Traditional trials do not adapt the treatment to the covariates of individual subjects.
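The difficulty of catching low-frequency side effects in a trial can be sketched with a quick probability calculation: if a side effect occurs independently with frequency p, the chance that it appears at least once among n participants is 1 − (1 − p)^n. The trial sizes and frequency below are hypothetical, chosen only for illustration.

```python
# Sketch: probability that a side effect of frequency p is observed at
# least once among n participants, assuming independent occurrences.
# The numbers used here are illustrative, not from any specific trial.

def detection_probability(p: float, n: int) -> float:
    """P(at least one occurrence) = 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p) ** n

# A 1-in-10,000 side effect is likely to be missed in a 3,000-person
# phase 3 trial, but is near-certain to surface once a million
# patients have been treated post-approval.
print(f"n=3,000:     {detection_probability(1e-4, 3_000):.3f}")
print(f"n=1,000,000: {detection_probability(1e-4, 1_000_000):.3f}")
```

The gap between the two probabilities (roughly 0.26 versus essentially 1.0) illustrates why rare adverse events often emerge only after a treatment reaches the broader population.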
|
https://en.wikipedia.org/wiki/Chatt%20Lecture
|
The Chatt Lecture, named after Joseph Chatt, is a lectureship of the John Innes Centre.
Lecturers
2000 Robert Huber
2002 Tom Blundell
2003 Stephen J. Lippard
2004 Doug Rees
2005 George Whitesides
2006 Sir Jack Baldwin - "Studies on beta-lactam antibiotic biosynthesis"
2008 Timothy Richmond, ETH Zurich - "Chromatin structure and remodeling factor interaction"
2009 Fraser Stoddart, Northwestern University - "Radically enhanced molecular recognition"
See also
Bateson Lecture
Biffen Lecture
Darlington Lecture
Haldane Lecture
List of biology awards
|
https://en.wikipedia.org/wiki/Syncytium
|
A syncytium (pl.: syncytia; from Greek σύν syn "together" and κύτος kytos "box, i.e. cell"), or symplasm, is a multinucleate cell which can result from multiple cell fusions of uninuclear cells (i.e., cells with a single nucleus), in contrast to a coenocyte, which can result from multiple nuclear divisions without accompanying cytokinesis. The muscle cell that makes up animal skeletal muscle is a classic example of a syncytium. The term may also refer to cells interconnected by specialized membranes with gap junctions, as seen in heart muscle cells and certain smooth muscle cells, which are electrically synchronized in an action potential.
The field of embryogenesis uses the word syncytium to refer to the coenocytic blastoderm embryos of invertebrates, such as Drosophila melanogaster.
Physiological examples
Protists
In protists, syncytia can be found in some rhizarians (e.g., chlorarachniophytes, plasmodiophorids, haplosporidians) and acellular slime moulds, dictyostelids (amoebozoans), acrasids (Excavata) and Haplozoon.
Plants
Some examples of plant syncytia, which result during plant development, include:
Developing endosperm
The non-articulated laticifers
The plasmodial tapetum, and
The "nucellar plasmodium" of the family Podostemaceae
Fungi
A syncytium is the normal cell structure for many fungi. Most fungi of Basidiomycota exist as a dikaryon, in which the thread-like cells of the mycelium are partially partitioned into segments, each containing two differing nuclei (a heterokaryon).
Animals
Nerve net
The neurons which make up the subepithelial nerve net in comb jellies (Ctenophora) are fused into a neural syncytium, consisting of a continuous plasma membrane instead of being connected through synapses.
Skeletal muscle
A classic example of a syncytium is the formation of skeletal muscle. Large skeletal muscle fibers form by the fusion of thousands of individual muscle cells. The multinucleated arrangement is important in pathologic states such
|