https://en.wikipedia.org/wiki/Contention-based%20protocol | A contention-based protocol (CBP) is a communications protocol for operating wireless telecommunication equipment that allows many users to use the same radio channel without pre-coordination. The "listen before talk" operating procedure in IEEE 802.11 is the best-known contention-based protocol.
Section 90.7 of Part 90 of the United States Federal Communications Commission rules defines a CBP as:
A protocol that allows multiple users to share the same spectrum by defining the events that must occur when two or more transmitters attempt to simultaneously access the same channel and establishing rules by which a transmitter provides reasonable opportunities for other transmitters to operate. Such a protocol may consist of procedures for initiating new transmissions, procedures for determining the state of the channel (available or unavailable), and procedures for managing retransmissions in the event of a busy channel.
This definition was added as part of the Rules for Wireless Broadband Services in the
3650-3700 MHz Band. |
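The contention cycle described above, sense the channel, transmit if idle, and back off after a collision, can be sketched as a toy slotted simulation. The station count, window sizes, and backoff policy here are illustrative assumptions, not parameters of any standard:

```python
import random

def simulate_lbt(num_stations=5, slots=1000, seed=42):
    """Toy slotted 'listen before talk' model: each station waits out its
    backoff counter, and after a collision draws a new backoff from a
    contention window that doubles on each retry (binary exponential backoff)."""
    rng = random.Random(seed)
    backoff = [rng.randrange(4) for _ in range(num_stations)]  # slots left to wait
    window = [4] * num_stations        # current contention window per station
    successes = collisions = 0
    for _ in range(slots):
        ready = [i for i in range(num_stations) if backoff[i] == 0]
        if len(ready) == 1:            # exactly one transmitter: success
            i = ready[0]
            successes += 1
            window[i] = 4              # reset window after a success
            backoff[i] = rng.randrange(window[i])
        elif len(ready) > 1:           # simultaneous access attempts: collision
            collisions += 1
            for i in ready:            # each colliding station doubles its window
                window[i] = min(window[i] * 2, 1024)
                backoff[i] = 1 + rng.randrange(window[i])
        for i in range(num_stations):  # idle stations count down their backoff
            if backoff[i] > 0:
                backoff[i] -= 1
    return successes, collisions
```

Under this model the "reasonable opportunities for other transmitters" requirement is embodied by the random backoff: after a collision every contender defers for a randomized interval rather than retrying immediately.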
https://en.wikipedia.org/wiki/Frances%20Power%20Cobbe | Frances Power Cobbe (4 December 1822 – 5 April 1904) was an Anglo-Irish writer, philosopher, religious thinker, social reformer, anti-vivisection activist and leading women's suffrage campaigner. She founded a number of animal advocacy groups, including the National Anti-Vivisection Society (NAVS) in 1875 and the British Union for the Abolition of Vivisection (BUAV) in 1898, and was a member of the executive council of the London National Society for Women's Suffrage.
Life
Frances Power Cobbe was a member of the prominent Cobbe family, descended from Archbishop Charles Cobbe, Primate of Ireland. She was born in Newbridge House in the family estate in present-day Donabate, County Dublin.
Cobbe was educated mainly at home by governesses with a brief period at a school in Brighton. She studied English literature, French, German, Italian, music, and the Bible. She then read heavily in the family library especially in religion and theology, joined several subscription libraries, and studied Greek and geometry with a local clergyman. She organised her own study schedule and ended up very well educated.
In the late 1830s Cobbe went through a crisis of faith. The humane theology of Theodore Parker, an American transcendentalist and abolitionist, restored her faith (she went on later to edit Parker's collected writings). She began to set out her ideas in what became an Essay on True Religion. Her father disapproved and for a while expelled her from the home. She kept studying and writing anyway and eventually revised the Essay into her first book, the Essay on Intuitive Morals. The first volume came out anonymously in 1855.
In 1857 Cobbe's father died and left her an annuity. She took the chance to travel on her own around parts of Europe and the Near East. This took her to Italy where she met a community of similarly independent women: Isa Blagden with whom she went on briefly to share a house, the sculptor Harriet Hosmer, the poet Elizabeth Barrett Browning, the pain |
https://en.wikipedia.org/wiki/Reverse%20cholesterol%20transport | Reverse cholesterol transport is a multi-step process resulting in the net movement of cholesterol from peripheral tissues back to the liver first via entering the lymphatic system, then the bloodstream.
Cholesterol from non-hepatic peripheral tissues is transferred to HDL by the ABCA1 (ATP-binding cassette transporter). Apolipoprotein A1 (ApoA-1), the major protein component of HDL, acts as an acceptor, and the phospholipid component of HDL acts as a sink for the mobilised cholesterol.
The cholesterol is converted to cholesteryl esters by the enzyme LCAT (lecithin-cholesterol acyltransferase).
The cholesteryl esters can be transferred, with the help of CETP (cholesteryl ester transfer protein), to other lipoproteins (such as LDL and VLDL) in exchange for triglycerides; these lipoproteins can then be taken up by the liver, which disposes of the cholesterol by secreting it into the bile or by converting it to bile acids.
Adiponectin induces ABCA1-mediated reverse cholesterol transport from macrophages by activation of PPAR-γ and LXRα/β.
Uptake of HDL2 is mediated by hepatic lipase, a special form of lipoprotein lipase found only in the liver. Hepatic lipase activity is increased by androgens and decreased by estrogens, which may account for higher concentrations of HDL2 in women.
Discoidal (Nascent) HDL:
Initially, HDL is discoidal in shape because it lacks esterified cholesterol; as the particle accumulates free cholesterol, the enzyme LCAT progressively esterifies it.
When the HDL particle is cholesterol-rich, its shape becomes more spherical and it becomes less dense (HDL2). This particle is carried to the liver, where it releases its esterified cholesterol. |
https://en.wikipedia.org/wiki/DeWitt%20notation | Physics often deals with classical models where the dynamical variables are a collection of functions
{φα}α over a d-dimensional space/spacetime manifold M where α is the "flavor" index. This involves functionals over the φs, functional derivatives, functional integrals, etc. From a functional point of view this is equivalent to working with an infinite-dimensional smooth manifold where its points are an assignment of a function for each α, and the procedure is in analogy with differential geometry where the coordinates for a point x of the manifold M are φα(x).
In DeWitt notation (named after theoretical physicist Bryce DeWitt), φα(x) is written as φi, where i is now understood as an index covering both α and x.
So, given a smooth functional A, A,i stands for the functional derivative δA[φ]/δφα(x), regarded as a functional of φ. In other words, it is a "1-form" field over the infinite-dimensional "functional manifold".
In integrals, the Einstein summation convention is used. Alternatively, |
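A sketch of the correspondence between the condensed and explicit notations (the concrete symbols B and the measure convention are illustrative choices, not fixed by the source):

```latex
A_{,i} \equiv \frac{\delta A[\varphi]}{\delta \varphi^{\alpha}(x)},
\qquad i \leftrightarrow (\alpha, x),
\qquad
A_{,i}\, B^{i} \equiv \sum_{\alpha} \int_{M} \mathrm{d}^{d}x \;
\frac{\delta A[\varphi]}{\delta \varphi^{\alpha}(x)}\, B^{\alpha}(x) .
```

A repeated condensed index thus carries both a sum over flavors and an integral over the manifold.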
https://en.wikipedia.org/wiki/Symmetric%20hydrogen%20bond | A symmetric hydrogen bond is a special type of hydrogen bond in which the proton is spaced exactly halfway between two identical atoms. The strength of the bond to each of those atoms is equal. It is an example of a 3-center 4-electron bond. This type of bond is much stronger than "normal" hydrogen bonds; in fact, its strength is comparable to that of a covalent bond. It is seen in ice at high pressure (Ice X), and also in the solid phase of many anhydrous acids, such as hydrofluoric acid and formic acid, at high pressure. It is also seen in the bifluoride ion [F−H−F]−. Much work has been done to explain the symmetric hydrogen bond quantum-mechanically, as it seems to violate the duet rule for the first shell: the proton is effectively surrounded by four electrons. Because of this problem, some consider it to be an ionic bond. |
https://en.wikipedia.org/wiki/Conquiliologistas%20do%20Brasil | Conquiliologistas do Brasil (Portuguese: Conchologists of Brazil) is an association created on September 19, 1989, in São Paulo, Brazil, with the main goal of promoting and advancing conchology, the study of mollusc shells.
The association publishes the magazine Siratus, the monthly bulletin Calliostoma and the biannual peer-reviewed scientific journal Strombus, and maintains a library of more than 1500 scientific papers.
See also |
https://en.wikipedia.org/wiki/African%20Journal%20of%20Food%2C%20Agriculture%2C%20Nutrition%20and%20Development | The African Journal of Food, Agriculture, Nutrition and Development is a peer-reviewed academic journal covering food and nutrition issues in Africa.
External links
African Journal of Food, Agriculture, Nutrition and Development at Bioline International |
https://en.wikipedia.org/wiki/A%20Philosophical%20Essay%20on%20Probabilities | A Philosophical Essay on Probabilities is a work by Pierre-Simon Laplace on the mathematical theory of probability. The book consists of two parts, the first with five chapters and the second with thirteen.
Table of Contents
Part I - A Philosophical Essay on Probabilities
Introduction
Concerning Probability
General Principles of the Calculus of Probability
Concerning Hope
Analytical Methods of the Calculus of Probability
Part II - Application of the Calculus of Probabilities
Games of Chance
Concerning the Unknown Inequalities which may Exist among Chances Supposed to be Equal
Concerning the Laws of Probability which result from the Indefinite Multiplication of Events
Application of the Calculus of Probabilities to Natural Philosophy
Application of the Calculus of Probabilities to the Moral Sciences
Concerning the Probability of Testimonies
Concerning the Selections and Deliberations of Assemblies
Concerning the Probability of the Judgements of Tribunals
Concerning Tables of Mortality, and the Mean Durations of Life, Marriage and Some Assemblies
Concerning the Benefits of Institutions which Depend on the Probability of Events
Concerning Illusions in the Estimation of Probabilities
Concerning the Various Means of Approaching Certainty
Historical Note of the Calculus of Probabilities to 1816 |
https://en.wikipedia.org/wiki/Thermal%20dose%20unit | A Thermal dose unit (TDU) is a unit of measurement used in the oil and gas industry to measure exposure to thermal radiation. It is a function of intensity (power per unit area) and exposure time.
1 TDU = 1 (kW/m²)^(4/3)·s.
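A worked example of the formula; the flux and duration are arbitrary illustrative numbers:

```python
def thermal_dose(intensity_kw_m2, time_s):
    """Thermal dose in TDU: intensity (kW/m^2) raised to the 4/3 power,
    multiplied by exposure time (s)."""
    return intensity_kw_m2 ** (4.0 / 3.0) * time_s

# A 10 kW/m^2 flux sustained for 30 s:
dose = thermal_dose(10.0, 30.0)   # 10^(4/3) * 30, roughly 646 TDU
```

Because the intensity enters at a power greater than one, a short exposure to a high flux accumulates dose faster than a long exposure to a proportionally lower flux.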
Results of exposure |
https://en.wikipedia.org/wiki/GENCI | The Grand équipement national de calcul intensif (GENCI) is a société civile owned 49% by the French State, represented by the Ministry of Higher Education and Research; 20% by the Commissariat à l'énergie atomique; 20% by the French National Centre for Scientific Research; 10% by the universities; and 1% by the National Institute for Research in Computer Science and Control. Its purpose is to implement and coordinate the major equipment of the French high-performance computing centres by providing funding and by assuming ownership.
GENCI coordinates the allocation of compute time for the three national computing centers CINES, IDRIS, and TGCC-CEA.
External links
GENCI website
Scientific agencies of the government of France
Supercomputers |
https://en.wikipedia.org/wiki/Haslem%20v.%20Lockwood | Thomas Haslem v. William A. Lockwood, Connecticut, (1871) is an important United States case in property, tort, conversion, trover and nuisance law.
The plaintiff directed his servants to rake into heaps horse manure that had accumulated in a public street, intending to carry it away the next day. Before he could do so, the defendant, who had no knowledge of the plaintiff's actions, found the heaps and hauled them off to his own land. The plaintiff sued the defendant in trover, demanding payment for the price of the manure. The trial court held for the defendant, finding that he owed nothing to the plaintiff. The plaintiff appealed, and the Appellate Court of Connecticut held for the plaintiff, remanding the case for a new trial.
The manure originally belonged to the owners of the horses that dropped it, but when the owners abandoned it on the road, it became the property of the first person to claim it. The Court found that the best owner after the act of abandonment was the borough of Stamford, Connecticut, where the manure was found. In the absence of a claim to the manure by the officials of Stamford, the plaintiff was entitled to it by reason of trover, and because the defendant had committed a conversion, the plaintiff was entitled to damages. The manure had not become part of the real estate, as the defendant had argued; it remained separate and unattached to the land, and hence was not part of the fee. Comparing manure to seaweed, and drawing on 19th-century law concerning the raking of such natural products into piles, the court held that 24 hours was a reasonable time for the defendant to have waited before taking the manure. By this standard, and in view of the plaintiff's labour in raking the manure into heaps, the plaintiff was granted a new trial on the issue of damages.
Issues
Is manure abandoned on a road by passing horses property subject to the laws of trover?
Does manure abandoned on the road by passing horses become a part of |
https://en.wikipedia.org/wiki/Maurice%20Kraitchik | Maurice Borisovich Kraitchik (21 April 1882 – 19 August 1957) was a Belgian mathematician and populariser. His main interests were the theory of numbers and recreational mathematics.
He was born to a Jewish family in Minsk. He wrote several books on number theory during 1922–1930 and after the war, and from 1931 to 1939 edited Sphinx, a periodical devoted to recreational mathematics. During World War II, he emigrated to the United States, where he taught a course at the New School for Social Research in New York City on the general topic of "mathematical recreations."
Kraïtchik was agrégé of the Free University of Brussels, engineer at the Société Financière de Transports et d'Entreprises Industrielles (Sofina), and director of the Institut des Hautes Etudes de Belgique. He died in Brussels.
Kraïtchik is famous for having inspired the two envelopes problem in 1953, with the following puzzle in La mathématique des jeux:
Two people, equally rich, meet to compare the contents of their wallets. Each is ignorant of the contents of the two wallets. The game is as follows: whoever has the least money receives the contents of the wallet of the other (in the case where the amounts are equal, nothing happens). One of the two men can reason: "Suppose that I have the amount A in my wallet. That's the maximum that I could lose. If I win (probability 0.5), the amount that I'll have in my possession at the end of the game will be more than 2A. Therefore the game is favourable to me." The other man can reason in exactly the same way. In fact, by symmetry, the game is fair. Where is the mistake in the reasoning of each man?
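The symmetry claim can be checked numerically. The Monte Carlo sketch below assumes a common uniform prior for the two wallet amounts; Kraitchik's puzzle specifies no distribution, so the uniform choice is purely illustrative:

```python
import random

def wallet_game(trials=100_000, seed=1):
    """Monte Carlo estimate of player A's mean gain in Kraitchik's wallet
    game: both amounts are drawn from the same distribution, and the
    player with less money wins the other wallet's contents."""
    rng = random.Random(seed)
    gain_a = 0.0
    for _ in range(trials):
        a = rng.uniform(0, 100)   # amount in A's wallet (assumed prior)
        b = rng.uniform(0, 100)   # amount in B's wallet, same distribution
        if a < b:
            gain_a += b           # A has less: A wins B's contents
        elif b < a:
            gain_a -= a           # A has more: A loses own contents
    return gain_a / trials        # close to 0 by symmetry
```

The simulation confirms the symmetry argument: the mean gain is statistically indistinguishable from zero. The flaw in each man's reasoning is that "the amount A in my wallet" is not independent of winning; conditional on winning, one's own wallet tends to hold less, so the naive half-and-half expectation does not apply.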
Among his publications were the following:
Théorie des Nombres, Paris: Gauthier-Villars, 1922
Recherches sur la théorie des nombres, Paris: Gauthier-Villars, 1924
La mathématique des jeux ou Récréations mathématiques, Paris: Vuibert, 1930, 566 pages
Mathematical Recreations, New York: W. W. Norton, 1942 and London: George Allen & Unwin Ltd, 1943, 328 pa |
https://en.wikipedia.org/wiki/Mur%CF%86 | Murφ (/ˈmɝ.fi/, also spelled Murphi) is an explicit-state model checker developed at Stanford University, and widely used for formal verification of cache-coherence protocols.
History
Murφ's early history is described in a paper by David Dill. The first version of Murφ was designed at Stanford University in 1990 and 1991 by Prof. David Dill and his graduate students Andreas Drexler, Alan Hu, and Han Yang, and primarily implemented by Andreas Drexler. The specification language was extensively modified and extended by David Dill, Alan Hu, C. Norris Ip, Ralph Melton, Seungjoon Park, and Han Yang. Ralph Melton implemented the new version during the summer and fall of 1992. Seungjoon Park added liveness checking and fairness constraints, but because the algorithm for liveness verification conflicted with important optimizations, particularly symmetry reduction, liveness verification was omitted in subsequent releases. C. Norris Ip implemented reversible rules and repetition constructors (which are not included in release 3.1), and added symmetry and multiset reductions (which are). Ulrich Stern implemented hash compaction, improved the use of disk, and implemented Parallel Murφ.
The last release from Stanford was release 3.1 in November 1993. Many derivative versions of Murφ have since been created by other groups.
Features
The Murφ compiler accepts a model written in the Murφ specification language and outputs C++ code that constitutes a verifier for that model. (That is, the C++ code, when executed, performs explicit-state model checking on the design described by the specification.) The
Murφ specification language uses guarded commands and an asynchronous, interleaving model of concurrency, with all synchronization and communication done through global variables.
The verifier checks safety properties in the form of invariants and internal assertions that are specified in the model, and checks for deadlock. It does not check liveness
properties, though M |
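The verifier's core loop, enumerating reachable states generated by guarded rules while checking invariants and deadlock, can be sketched in a few lines. This is a generic explicit-state search for illustration, not Murφ's actual implementation; the encoding of rules as (guard, action) pairs over hashable states is an assumption of the sketch:

```python
from collections import deque

def model_check(init_state, rules, invariant):
    """Minimal explicit-state checker in the spirit of Murphi: breadth-first
    reachability over states produced by guarded rules, checking an invariant
    in every reachable state and flagging deadlocked states (no enabled rule).
    `rules` is a list of (guard, action) pairs; actions return a new state,
    and states must be hashable."""
    seen = {init_state}
    frontier = deque([init_state])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return ("invariant violated", state)
        enabled = [act for guard, act in rules if guard(state)]
        if not enabled:
            return ("deadlock", state)
        for act in enabled:
            nxt = act(state)
            if nxt not in seen:           # hash table prunes revisits
                seen.add(nxt)
                frontier.append(nxt)
    return ("ok", len(seen))

# Toy model: a counter modulo 4, with the invariant that it stays below 4.
rules = [(lambda s: True, lambda s: (s + 1) % 4)]
result = model_check(0, rules, lambda s: s < 4)   # ("ok", 4)
```

Techniques such as symmetry and multiset reduction, hash compaction, and disk-based state tables mentioned above all attack the same bottleneck this sketch makes visible: the size of the `seen` set.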
https://en.wikipedia.org/wiki/TRANZ%20330 | The TRANZ 330 is a popular point-of-sale device manufactured by VeriFone in 1985. The most common application for these units is bank and credit card processing; however, as a general-purpose computer, they can perform other novel functions. Other applications include gift/benefit card processing, prepaid phone cards, payroll and employee timekeeping, and even debit and ATM cards. The units are programmed in VeriFone's proprietary Terminal Control Language (TCL), which is unrelated to the Tool Command Language used in UNIX environments.
Point of sale companies
Embedded systems
Payment systems
Banking equipment |
https://en.wikipedia.org/wiki/SIGINT%20Activity%20Designator | A SIGINT Activity Designator (or SIGAD) identifies a signals intelligence (SIGINT) line of collection activity associated with a signals collection station, such as a base or a ship. For example, the SIGAD for Menwith Hill in the UK is USD1000. SIGADs are used by the signals intelligence agencies of the United States, the United Kingdom, Canada, Australia and New Zealand (the Five Eyes).
There are several thousand SIGADs including the substation SIGADs denoted with a trailing alpha character. Several dozen of these are significant. The leaked Boundless Informant reporting screenshot showed that it summarized 504 active SIGADs during a 30-day period in March 2013.
General format
A SIGAD consists of five to eight case-insensitive alphanumeric characters. It takes the general form of an alphanumeric designator normally composed of a two- or three-letter prefix followed by one to three numbers. Often a dash is used to separate the alphabetic and numeric characters in the primary part of the designator, but less frequently a space is used as a separator or the alphabetic and numeric characters are concatenated together. An additional alphabetic character can be added to denote a sub-designator for a subset of the primary unit, such as a detachment. Lastly, a numeric character can be added after the aforementioned alphabetic to provide for a sub-sub-designator.
In the examples below an X represents an alphabetic character and an N represents a numeric character that are part of the primary designator. Likewise, an x represents an alphabetic character and an n represents a numeric character that are part of a sub-designator. Here are valid generalized examples of SIGADs:
The first two characters show which country operates the particular SIGINT facility, which can be US for the United States, UK for the United Kingdom, CA for Canada, AU for Australia and NZ for New Zealand. A third letter shows what sort of staff runs the station. SIGADs beginning with US without a th |
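The general shape described above can be captured in a hypothetical validator. The regular expression and the sample designators below are illustrative assumptions derived from the prose description, not an official grammar:

```python
import re

# Two- or three-letter prefix, optional dash or space separator, one to
# three digits, then an optional sub-designator letter and an optional
# sub-sub-designator digit.
SIGAD_RE = re.compile(r"^[A-Z]{2,3}[- ]?[0-9]{1,3}(?:[A-Z][0-9]?)?$",
                      re.IGNORECASE)

def is_sigad(s):
    """Check both the pattern and the stated 5-8 character overall length."""
    return bool(SIGAD_RE.match(s)) and 5 <= len(s) <= 8

is_sigad("US-123")    # True:  dash separator
is_sigad("CAF45A1")   # True:  sub- and sub-sub-designator appended
is_sigad("US12")      # False: only four characters
```

Matching is case-insensitive, mirroring the statement that SIGAD characters are case-insensitive.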
https://en.wikipedia.org/wiki/Software%20transactional%20memory | In computer science, software transactional memory (STM) is a concurrency control mechanism analogous to database transactions for controlling access to shared memory in concurrent computing. It is an alternative to lock-based synchronization. STM is a strategy implemented in software, rather than as a hardware component. A transaction in this context occurs when a piece of code executes a series of reads and writes to shared memory. These reads and writes logically occur at a single instant in time; intermediate states are not visible to other (successful) transactions. The idea of providing hardware support for transactions originated in a 1986 paper by Tom Knight. The idea was popularized by Maurice Herlihy and J. Eliot B. Moss. In 1995 Nir Shavit and Dan Touitou extended this idea to software-only transactional memory (STM). Since 2005, STM has been the focus of intense research and support for practical implementations is growing.
Performance
Unlike the locking techniques used in most modern multithreaded applications, STM is often very optimistic: a thread completes modifications to shared memory without regard for what other threads might be doing, recording every read and write that it is performing in a log. Instead of placing the onus on the writer to make sure it does not adversely affect other operations in progress, it is placed on the reader, who after completing an entire transaction verifies that other threads have not concurrently made changes to memory that it accessed in the past. This final operation, in which the changes of a transaction are validated and, if validation is successful, made permanent, is called a commit. A transaction may also abort at any time, causing all of its prior changes to be rolled back or undone. If a transaction cannot be committed due to conflicting changes, it is typically aborted and re-executed from the beginning until it succeeds.
The benefit of this optimistic approach is increased concurrency: no thread need |
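The log, validate, commit cycle described above can be sketched as a toy Python STM. Everything here (`TVar`, `atomically`, the version-counter scheme) is an assumption of the sketch, and it is single-threaded for clarity: a real implementation must make the validate-and-commit step atomic with respect to other threads.

```python
class TVar:
    """A transactional variable with a version counter."""
    def __init__(self, value):
        self.value, self.version = value, 0

def atomically(transaction, max_retries=100):
    """Optimistic STM sketch: run the transaction against private read/write
    logs, then validate that no TVar read has changed version before
    committing; on conflict, discard the logs and re-execute from scratch."""
    for _ in range(max_retries):
        reads, writes = {}, {}
        def read(tv):
            if tv in writes:                  # read-your-own-writes
                return writes[tv]
            reads.setdefault(tv, tv.version)  # record version at first read
            return tv.value
        def write(tv, val):
            writes[tv] = val                  # buffered, not yet visible
        transaction(read, write)
        # validate: every TVar we read must still be at its recorded version
        if all(tv.version == v for tv, v in reads.items()):
            for tv, val in writes.items():    # commit: publish buffered writes
                tv.value, tv.version = val, tv.version + 1
            return True
    return False

# Transfer between two accounts as a single atomic action:
a, b = TVar(100), TVar(0)
atomically(lambda read, write: (write(a, read(a) - 30), write(b, read(b) + 30)))
# a.value == 70, b.value == 30
```

The retry loop is the "aborted and re-executed from the beginning until it succeeds" behavior; the buffered write log is what lets an abort roll back simply by discarding the logs.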
https://en.wikipedia.org/wiki/ArchLabs | ArchLabs Linux is a lightweight rolling release Linux distribution based on a minimal Arch Linux operating system with the Openbox window manager. ArchLabs is inspired by BunsenLabs.
Features
The ArchLabs distribution contains a text-based installer, "AL-Installer" as its installation method, as well as baph, an AUR helper. The installer gives the user the ability to choose from 16 different assorted Desktop Environments and Window Managers as well as a selection of extra software, Linux Kernels, Display Managers and shells.
History
Initial releases used the Calamares installer. Early versions of ArchLabs had become bloated with many unnecessary applications and programs, which sparked a change in direction: slimming the ISO from over 2 GB down to approximately 580 MB made downloads much quicker.
Mínimo was the first of these minimal releases, with a change from the traditional Openbox panel, Tint2, to Polybar. Also introduced in this release was the original welcome script, named "AL-Hello", which was a nod to the "brother" distribution BunsenLabs. Mínimo was also the final release to have a release name; subsequent releases followed a numbering pattern of YYYY.MM.
The 2018.02 release brought a new and improved AL-Hello welcome script and many additions and refinements to the ArchLabs experience.
2018.07 saw more improvements to the newly written AL-Installer.
With the release of 2018.12 came the removal of the live environment and the post install script "AL-Hello". Options for choosing desktops and window managers as well as a selection of apps have been added to AL-Installer (ALI). Also introduced in this 2018.12 release was the in house AUR (Arch User Repository) Helper, baph (Basic AUR Package Helper).
2019.10.29 was ArchLabs' third release of 2019 (after 2019.1.20 and 2019.10.28). Many changes were made, including additional desktop environments and window managers added to the installer, most notably awesomewm and jwm.
ArchLabs firs |
https://en.wikipedia.org/wiki/Cleistogamy | Cleistogamy is a type of automatic self-pollination of certain plants that can propagate by using non-opening, self-pollinating flowers. Especially well known in peanuts, peas, and pansies, this behavior is most widespread in the grass family. However, the largest genus of cleistogamous plants is Viola.
The more common opposite of cleistogamy, or "closed marriage", is called chasmogamy, or "open marriage". Virtually all plants that produce cleistogamous flowers also produce chasmogamous ones. The principal advantage of cleistogamy is that it requires fewer plant resources to produce seeds than does chasmogamy, because development of petals, nectar and large amounts of pollen is not required. This efficiency makes cleistogamy particularly useful for seed production on unfavorable sites or adverse conditions. Impatiens capensis, for example, has been observed to produce only cleistogamous flowers after being severely damaged by grazing and to maintain populations on unfavorable sites with only cleistogamous flowers. The obvious disadvantage of cleistogamy is that self-fertilization occurs, which may suppress the creation of genetically superior plants. Another disadvantage of self-fertilization is that it leads to the expression in progeny of deleterious recessive mutations.
For genetically modified (GM) rapeseed, researchers hoping to minimise the admixture of GM and non-GM crops are attempting to use cleistogamy to prevent gene flow. However, preliminary results from Co-Extra, a current project within the EU research program, show that although cleistogamy reduces gene flow, it is not at the moment a consistently reliable tool for biocontainment; due to a certain instability of the cleistogamous trait, some flowers may open and release genetically modified pollen.
See also
Co-existence of genetically modified, conventional, and organic crops |
https://en.wikipedia.org/wiki/Fabio%20Mercurio | Fabio Mercurio (born 26 September 1966) is an Italian mathematician, internationally known for a number of results in mathematical finance.
Main results
Mercurio worked during his Ph.D. on incomplete-markets theory using dynamic mean-variance hedging techniques. With Damiano Brigo (2002–2003), he showed how to construct stochastic differential equations consistent with mixture models, applying this to volatility-smile modeling in the context of local volatility models. He is also one of the main authors in inflation modeling. Mercurio has authored several publications in top journals and co-authored the book Interest Rate Models: Theory and Practice (Springer-Verlag), which quickly became an international reference for stochastic dynamic interest-rate modeling. He is the recipient of the 2020 Risk quant-of-the-year award, jointly with Andrei Lyashenko of QRM, for their joint paper Lyashenko and Mercurio (2019).
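The headline pricing consequence of the lognormal-mixture approach is that a European call is priced by the corresponding mixture of Black–Scholes prices, one per component volatility. A minimal sketch follows; the weights and volatilities are illustrative numbers, and this is only the pricing formula, not the full dynamics construction:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def mixture_call(S, K, T, r, weights, sigmas):
    """Under lognormal-mixture dynamics, the call price is the same
    convex combination of Black-Scholes prices as the density mixture."""
    return sum(w * bs_call(S, K, T, r, s) for w, s in zip(weights, sigmas))

# At-the-money call under a two-component mixture:
price = mixture_call(100, 100, 1.0, 0.0, [0.6, 0.4], [0.10, 0.30])
```

Because option prices are convex combinations across components while the implied volatility is read off strike by strike, the mixture generates a smile even though each component is flat.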
Affiliations
Currently Mercurio is the global head of Quantitative Analytics at Bloomberg L.P., New York City. He holds a Ph.D. in mathematical finance from the Erasmus University in Rotterdam.
Selected publications
A. Lyashenko and F. Mercurio (2019), "Libor replacement: a modelling framework for in-arrears term rates", Risk Magazine, June 2019, recipient of the "Quant of the year" award.
F. Mercurio and A. Pallavicini (2006), "Smiling at convexity: bridging swaption skews and CMS adjustments", Risk August, 64–69.
F. Mercurio and N. Moreni (2006), "Inflation with a smile", Risk March, Vol. 19(3), 70–75.
L. Bisesti, A. Castagna and F. Mercurio (2005), "Consistent Pricing and Hedging of an FX Options Book", Kyoto Economic Review 74(1), 65–83.
F. Mercurio (2005), "Pricing Inflation-Indexed Derivatives", Quantitative Finance 5(3), 289–302.
D. Brigo, F. Mercurio and G. Sartorelli (2003), "Alternative asset-price dynamics and volatility smile", Quantitative Finance 3(3), 173–183.
D. Brigo and F. Mercurio (2002), "Lognormal-Mixture Dynamics and Ca |
https://en.wikipedia.org/wiki/Probal%20Chaudhuri | Probal Chaudhuri (born 1963) is an Indian statistician. He is a professor of theoretical statistics and mathematics in the Indian Statistical Institute, Kolkata.
Chaudhuri obtained his BStat and MStat degrees from the Indian Statistical Institute, Kolkata, and PhD from University of California, Berkeley. He then joined University of Wisconsin, Madison as an assistant professor in 1988. After two years he returned to India in 1990 and joined the Indian Statistical Institute, Kolkata, as a lecturer. He was promoted to full professorship in 1997. Some of the widely used statistical techniques and concepts that he has invented and developed include: local polynomial nonparametric quantile regression, a geometric notion of quantiles for multivariate data, adaptive transformation and re-transformation technique for the construction of affine invariant distribution-free tests and robust estimates from multivariate data and the scale-space approach in function estimation and smoothing.
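The geometric notion of multivariate quantiles can be illustrated by its central special case, the spatial median: the point minimizing the sum of Euclidean distances to the data. The sketch below uses the classical Weiszfeld iteration, a standard algorithm chosen for illustration rather than Chaudhuri's own construction:

```python
def spatial_median(points, iters=200, eps=1e-9):
    """Weiszfeld iteration for the spatial (geometric) median: repeatedly
    take the average of the data points weighted by inverse distance to
    the current iterate."""
    dim = len(points[0])
    # start at the coordinate-wise mean
    q = [sum(p[k] for p in points) / len(points) for k in range(dim)]
    for _ in range(iters):
        num = [0.0] * dim
        denom = 0.0
        for p in points:
            d = sum((p[k] - q[k]) ** 2 for k in range(dim)) ** 0.5
            if d < eps:          # iterate landed on a data point: stop
                return q
            w = 1.0 / d          # inverse-distance weight
            denom += w
            for k in range(dim):
                num[k] += w * p[k]
        q = [num[k] / denom for k in range(dim)]
    return q

pts = [(0.0, 0.0), (2.0, 0.0), (1.0, 5.0)]
m = spatial_median(pts)   # near (1.0, 0.577), the Fermat point of the triangle
```

Unlike the coordinate-wise median, this point is equivariant under rotations of the data, which is one motivation for geometric definitions of multivariate quantiles.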
He was awarded the Shanti Swarup Bhatnagar Prize for Science and Technology in 2005, the highest science award in India, in the mathematical sciences category.
He was an invited speaker in International Congress of Mathematicians 2010, Hyderabad on the topic of "Probability and Statistics."
Other awards/honours
BM Birla Science Award (2001)
C. R. Rao National Award in Statistics (2005)
Fellow of Indian Academy of Sciences, Bangalore
Fellow of National Academy of Sciences (India), Allahabad |
https://en.wikipedia.org/wiki/Gauss%E2%80%93Newton%20algorithm | The Gauss–Newton algorithm is used to solve non-linear least squares problems, which is equivalent to minimizing a sum of squared function values. It is an extension of Newton's method for finding a minimum of a non-linear function. Since a sum of squares must be nonnegative, the algorithm can be viewed as using Newton's method to iteratively approximate zeroes of the components of the sum, and thus minimizing the sum. In this sense, the algorithm is also an effective method for solving overdetermined systems of equations. It has the advantage that second derivatives, which can be challenging to compute, are not required.
Non-linear least squares problems arise, for instance, in non-linear regression, where parameters in a model are sought such that the model is in good agreement with available observations.
The method is named after the mathematicians Carl Friedrich Gauss and Isaac Newton, and first appeared in Gauss' 1809 work Theoria motus corporum coelestium in sectionibus conicis solem ambientum.
Description
Given m functions r = (r₁, …, r_m) (often called residuals) of n variables β = (β₁, …, β_n), with m ≥ n, the Gauss–Newton algorithm iteratively finds the value of the variables that minimizes the sum of squares
S(β) = r₁(β)² + ⋯ + r_m(β)².
Starting with an initial guess β⁽⁰⁾ for the minimum, the method proceeds by the iterations
β⁽ˢ⁺¹⁾ = β⁽ˢ⁾ − (JᵀJ)⁻¹ Jᵀ r(β⁽ˢ⁾),
where, if r and β are column vectors, the entries of the Jacobian matrix are
(J)ᵢⱼ = ∂rᵢ(β⁽ˢ⁾)/∂βⱼ,
and the symbol ᵀ denotes the matrix transpose.
At each iteration, the update Δ = β⁽ˢ⁺¹⁾ − β⁽ˢ⁾ can be found by rearranging the previous equation into the normal equations
(JᵀJ) Δ = −Jᵀ r(β⁽ˢ⁾).
With the substitutions A = JᵀJ, b = −Jᵀ r, and x = Δ, this turns into the conventional matrix equation of form Ax = b, which can then be solved by a variety of methods (see Notes).
If m = n, the iteration simplifies to
β⁽ˢ⁺¹⁾ = β⁽ˢ⁾ − J⁻¹ r(β⁽ˢ⁾),
which is a direct generalization of Newton's method in one dimension.
In data fitting, where the goal is to find the parameters β such that a given model function y = f(x, β) best fits some data points (xᵢ, yᵢ), the functions rᵢ are the residuals:
rᵢ(β) = yᵢ − f(xᵢ, β).
Then, the Gauss–Newton method can be expressed in terms of |
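A minimal dependency-free sketch of the iteration for a two-parameter model, fitting y = β₁·exp(β₂x) to synthetic data. The model, data, and starting point are illustrative choices; the 2×2 normal equations are solved in closed form by Cramer's rule to keep the example self-contained:

```python
import math

def gauss_newton(residual, jacobian, beta, steps=20):
    """Gauss-Newton for a two-parameter model: at each step solve the
    normal equations (J^T J) d = -J^T r and update beta <- beta + d."""
    for _ in range(steps):
        r = residual(beta)
        J = jacobian(beta)                    # rows [dr_i/db1, dr_i/db2]
        a11 = sum(row[0] * row[0] for row in J)   # entries of J^T J
        a12 = sum(row[0] * row[1] for row in J)
        a22 = sum(row[1] * row[1] for row in J)
        g1 = sum(row[0] * ri for row, ri in zip(J, r))   # (J^T r)_1
        g2 = sum(row[1] * ri for row, ri in zip(J, r))   # (J^T r)_2
        det = a11 * a22 - a12 * a12
        d1 = (-g1 * a22 + g2 * a12) / det     # Cramer's rule for the 2x2 solve
        d2 = (-g2 * a11 + g1 * a12) / det
        beta = [beta[0] + d1, beta[1] + d2]
    return beta

# Fit y = b1 * exp(b2 * x) to exact synthetic data with b1 = 2, b2 = 0.5:
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0 * math.exp(0.5 * x) for x in xs]
res = lambda b: [b[0] * math.exp(b[1] * x) - y for x, y in zip(xs, ys)]
jac = lambda b: [[math.exp(b[1] * x), b[0] * x * math.exp(b[1] * x)]
                 for x in xs]
fit = gauss_newton(res, jac, [1.0, 1.0])   # converges to roughly [2.0, 0.5]
```

Note that only first derivatives of the residuals appear, matching the observation above that second derivatives are not required; for ill-conditioned or far-from-solution problems a damped variant (e.g. Levenberg–Marquardt) is generally preferred.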
https://en.wikipedia.org/wiki/Gun%20chronograph | A ballistic chronograph or gun chronograph is a measuring instrument used to measure the velocity of a projectile in flight, typically fired from a gun.
History
Benjamin Robins (1707–1751) invented the ballistic pendulum, which measures the momentum of the projectile fired by a gun. Dividing the momentum by the projectile mass gives the velocity. Robins published his results as New Principles of Gunnery in 1742. The ballistic pendulum could make only one measurement per firing because the device catches the projectile. The gun's accuracy also limited how far down range a measurement could be made.
Alessandro Vittorio Papacino d'Antoni published results in 1765 using a wheel chronometer. This used a horizontal spinning wheel with a vertical paper mounted on the rim. The bullet was fired across the diameter of the wheel so that it pierced the paper on both sides, and the angular difference along with the rotation speed of the wheel was used to compute the bullet velocity.
An early chronograph that measures velocity directly was built in 1804 by Grobert, a colonel in the French Army. This used a rapidly rotating axle with two disks mounted on it about 13 feet apart. The bullet was fired parallel to the axle, and the angular displacement of the holes in the two disks, together with the rotational speed of the axle, yielded the bullet velocity.
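The wheel-chronometer computation reduces to proportions of a revolution: the bullet's flight time across the wheel is the pierced angular difference as a fraction of the rotation period. The numbers below are hypothetical; d'Antoni's actual dimensions are not given in the source:

```python
def wheel_velocity(diameter_m, rpm, delta_degrees):
    """Velocity from a spinning-wheel chronometer: the bullet crosses the
    wheel's diameter while the rim turns through delta_degrees, so the
    flight time is that fraction of a revolution times the period."""
    period_s = 60.0 / rpm                          # time for one revolution
    flight_time_s = (delta_degrees / 360.0) * period_s
    return diameter_m / flight_time_s

# A wheel 2 m across spinning at 600 rpm, pierced 21.6 degrees apart:
v = wheel_velocity(2.0, 600.0, 21.6)   # 2 m / 0.006 s = 333.3 m/s
```

The same time-from-angle principle underlies Grobert's two-disk axle instrument: substituting the disk spacing for the wheel diameter gives the identical formula.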
describes Bashforth's chronograph that could make many measurements over long distances:
In 1865 the Rev. Francis Bashforth, M. A., who had then been recently appointed Professor of Applied Mathematics to the advanced class of artillery officers at Woolwich, began a series of experiments for determining the resistance of the air to the motion of both spherical and oblong projectiles, which he continued from time to time until 1880. As the instruments then in use for measuring velocities were incapable of giving the times occupied by a shot in passing over a series of successive equal spaces, he began his labors by inve |
https://en.wikipedia.org/wiki/Unique%20negative%20dimension | Unique negative dimension (UND) is a complexity measure for the model of learning from positive examples.
The unique negative dimension of a class of concepts is the size of the maximum subclass such that for every concept , we have is nonempty.
This concept was originally proposed by M. Gereb-Graus in "Complexity of learning from one-side examples", Technical Report TR-20-89, Harvard University Division of Engineering and Applied Science, 1989.
See also
Computational learning theory |
https://en.wikipedia.org/wiki/Denys%20Wilkinson%20Building | The Denys Wilkinson Building is a prominent 1960s building in Oxford, England, designed by Philip Dowson at Arup in 1967.
Overview
The building houses the astrophysics and particle physics sub-departments of the Department of Physics at Oxford University, plus the undergraduate teaching laboratories. It was originally built for the then Department of Nuclear Physics and named the Nuclear Physics Laboratory. From 1988, the building was known as the Nuclear and Astrophysics Laboratory (NAPL) after the (Sub-)Department of Astrophysics moved from the University Observatory in the Science Area. In 2001, the building was renamed as the Denys Wilkinson Building, in honour of the British nuclear physicist Sir Denys Wilkinson (1922–2016), who was involved in its original creation.
The building is located on the corner of Banbury Road to the west and Keble Road to the south. To the north is the tall Thom Building of Oxford University's Department of Engineering Science, also built in the 1960s. It forms part of the Keble Road Triangle. Attached is a large and distinctive fan-shaped superstructure that was built to house a Van de Graaff generator. Nikolaus Pevsner commented that this marked "the arrival of the 'New Brutalism' in Oxford".
Particle accelerators
The building was originally built to host two small (by today's standards) particle accelerators.
Tandem accelerator
The first was a vertical folded tandem electrostatic accelerator (see Tandem accelerator), the top being at floor level in the fan-shaped superstructure, the bottom in the basements. Negatively charged ions were introduced at the bottom and would be accelerated towards the large charge (10 million volts) built up by the Van de Graaff generator by electrostatic attraction. At the top, the ions would pass through a thin foil to strip off electrons, and then their trajectory would be bent 180° by a large magnetic field. The now positively charged nuclei would then be electrostatically repelled by the same |
https://en.wikipedia.org/wiki/Brewster%20angle%20microscope | A Brewster angle microscope (BAM) is a microscope for studying thin films on liquid surfaces, most typically Langmuir films. In a Brewster angle microscope, both the microscope and a polarized light source are aimed at a liquid surface at that liquid's Brewster angle, such that the microscope captures an image of any light from the source reflected by the liquid surface. Because there is no p-polarized reflection from the pure liquid at the Brewster angle, light is only reflected when some other phenomenon, such as a surface film, affects the liquid surface. The technique was first introduced in 1991.
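For reference, the Brewster angle follows from theta_B = arctan(n2 / n1); a quick check for an air-water interface, with refractive indices taken as the usual approximate values 1.00 and 1.33:

```python
import math

# Brewster angle for an air-water interface; the refractive indices
# (1.00 for air, 1.33 for water) are assumed approximate values.
n1, n2 = 1.00, 1.33
theta_b_deg = math.degrees(math.atan(n2 / n1))
# At roughly 53 degrees from the normal, p-polarized reflection vanishes.
```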
Applications
Brewster angle microscopes enable the visualization of Langmuir monolayers or adsorbate films at the air–water interface, for example as a function of packing density. They can be used either to study the properties of the Langmuir layer or to indicate a suitable deposition pressure for Langmuir–Blodgett (LB) deposition, for example in the LB deposition of nanoparticles. Applications include:
Monolayer/film homogeneity. When combined with a Langmuir-Blodgett Trough, observation can be performed during compression/expansion at known surface pressures.
Optimizing the deposition parameters. Selecting optimal deposition pressure and other deposition parameters for LB coating.
Monolayer/film behavior. Observing phase changes, phase separation, domain size, shape and packing.
Monitoring of surface reactions. Photochemical reactions, polymerizations and enzyme kinetics can be followed in real time.
Monitoring and detection of surface active materials. For example, protein adsorption and nanoparticle flotation.
Lee et al. used a Brewster angle microscope to study optimal deposition parameters for Fe3O4 nanoparticles.
Daear et al. have written a recent review on the usage of BAMs in biological applications.
See also
Brewster's angle
Langmuir–Blodgett film
Nanoparticle deposition |
https://en.wikipedia.org/wiki/Jewish%20English%20Lexicon | Jewish English Lexicon (JEL) is an online dictionary of the language spoken by Jewish English speakers, encompassing a varied assortment of terms that originate from ancient and modern Hebrew, Aramaic, Yiddish, Ladino, and Arabic, among other languages. The lexicon treats "Jewish English" as a Jewish dialect of English, as the overall structure of English remains intact despite the numerous distinctive additions from other languages.
Overview
The Jewish English Lexicon was created by Sarah Bunin Benor, an associate professor of Jewish studies at the Los Angeles division of Hebrew Union College. Benor, a scholar of the varieties of Jewish English spoken in the United States, created the lexicon in 2012 with the support of volunteers who contribute to the growth of the lexicon's database. Benor originally formed the lexicon as a class project several years prior to its publication on the internet. The lexicon offers a variety of search tools and filters including language of origin, regions where the word is commonly used, the groups of people who tend to speak the word, and dictionaries in which the term appears.
See also
Yinglish
Yeshivish |
https://en.wikipedia.org/wiki/Monounsaturated%20fat | In biochemistry and nutrition, a monounsaturated fat is a fat that contains a monounsaturated fatty acid (MUFA), a subclass of fatty acid characterized by having a double bond in the fatty acid chain with all of the remaining carbon atoms being single-bonded. By contrast, polyunsaturated fatty acids (PUFAs) have more than one double bond.
Molecular description
Monounsaturated fats are triglycerides containing one unsaturated fatty acid. Almost invariably that fatty acid is oleic acid (18:1 n−9). Palmitoleic acid (16:1 n−7) and cis-vaccenic acid (18:1 n−7) occur in small amounts in fats.
Health
Studies have shown that substituting dietary monounsaturated fat for saturated fat is associated with increased daily physical activity and resting energy expenditure. More physical activity was associated with a higher-oleic-acid diet than with a palmitic-acid diet. The same research also associated higher monounsaturated fat intake with less anger and irritability.
Foods containing monounsaturated fats may affect low-density lipoprotein (LDL) cholesterol and high-density lipoprotein (HDL) cholesterol.
Levels of oleic acid along with other monounsaturated fatty acids in red blood cell membranes were positively associated with breast cancer risk. The saturation index (SI) of the same membranes was inversely associated with breast cancer risk. Monounsaturated fats and low SI in erythrocyte membranes are predictors of postmenopausal breast cancer. Both of these variables depend on the activity of the enzyme delta-9 desaturase (Δ9-d).
In children, consumption of monounsaturated oils is associated with healthier serum lipid profiles.
The Mediterranean diet is one heavily influenced by monounsaturated fats. People in Mediterranean countries consume more total fat than Northern European countries, but most of the fat is in the form of monounsaturated fatty acids from olive oil and omega-3 fatty acids from fish, vegetables, and certain meats like lamb, while consumption of satur |
https://en.wikipedia.org/wiki/Matthias%20Ettrich | Matthias Ettrich (born 14 June 1972) is a German computer scientist and founder of the KDE and LyX projects.
Early life
Ettrich was born in Bietigheim-Bissingen, Baden-Württemberg, West Germany, and went to school in Beilstein while living with his parents in Oberstenfeld. He passed the Abitur in 1991. Ettrich studied for his MSc in Computer Science at the Wilhelm Schickard Institute for Computer Science at the University of Tübingen.
Career
He resides in Berlin, Germany, and currently focuses on advising start-ups and corporations on digital transformation and sound technical decision-making.
Free software projects
Ettrich founded and furthered the LyX project in 1995, initially conceived as a university term project. LyX is a graphical frontend to LaTeX.
Since LyX's main target platform was Linux, he started exploring different ways to improve the graphical user interface, ultimately leading him to the KDE project. Ettrich founded KDE in 1996 when he proposed on Usenet a "consistent, nice looking free desktop environment" for Unix-like systems using Qt as its widget toolkit.
On 6 November 2009, Ettrich was decorated with the Federal Cross of Merit for his contributions to free software. |
https://en.wikipedia.org/wiki/Qt%20Extended | Qt Extended (named Qtopia before September 30, 2008) is an application platform for embedded Linux-based mobile computing devices such as personal digital assistants, video projectors and mobile phones. It was initially developed by The Qt Company, at the time known as Qt Software and a subsidiary of Nokia. When the company cancelled the project, the free software portion of it was forked by the community and given the name Qt Extended Improved. The QtMoko Debian-based distribution is the natural successor to these projects, continued by the efforts of the Openmoko community.
Features
Qt Extended features:
Windowing system
Synchronization framework
Integrated development environment
Internationalization and localization support
Games and multimedia
Personal information manager applications
Full screen handwriting
Input methods
Personalization options
Productivity applications
Internet applications
Java integration
Wireless support
Qt Extended is dual licensed under the GNU General Public License (GPL) and proprietary licenses.
Devices and deployment
As of 2006, Qtopia was running on several million devices, including 11 mobile phone models and 30 other handheld devices.
Models included the Sharp Corporation Zaurus line of Linux handhelds, the Sony mylo, the Archos Portable Media Assistant (PMA430) (a multimedia device), the GamePark Holdings GP2X, Greenphone (an open phone initiative), Pocket PC, FIC Openmoko phones: Neo 1973 and FreeRunner. An unofficial hack allows its use on the Archos wifi series of portable media players (PMP) 604, 605, 705, and also on several Motorola phones such as E2, Z6 and A1200. The U980 of ZTE is the last phone running it.
Software development
Native applications could be developed and compiled using C++. Managed applications could be developed in Java.
Discontinuation
On March 3, 2009, Qt Software announced the discontinuation of Qt Extended as a standalone product, with some features integrated into the Qt framework.
|
https://en.wikipedia.org/wiki/Supercomputing%20in%20Pakistan | The high-performance supercomputing program started in the mid-to-late 1980s in Pakistan. Supercomputing is a recent area of computer science in which Pakistan has made progress, driven in part by the growth of the information technology age in the country. Development of the indigenous supercomputer program started in the 1980s, when the deployment of Cray supercomputers was initially denied.
The fastest supercomputer currently in use in Pakistan is developed and hosted by the National University of Sciences and Technology at its modeling and simulation research centre. As of November 2012, there are no supercomputers from Pakistan on the Top500 list.
Background
The initial interests of Pakistan in the research and development of supercomputing began during the early 1980s, at several high-powered institutions of the country. During this time, senior scientists at the Pakistan Atomic Energy Commission (PAEC) were the first to engage in research on high performance computing, while calculating and determining exact values involving fast-neutron calculations.
According to one scientist involved in the development of the supercomputer, a team of the leading scientists at PAEC developed powerful computerized electronic codes, acquired powerful high-performance computers to design this system, and came up with the first design that was to be manufactured, as part of the atomic bomb project. However, the most productive and pioneering research was carried out by physicist M.S. Zubairy at the Institute of Physics of Quaid-e-Azam University. Zubairy published two important books on quantum computing and high-performance computing throughout his career that are presently taught worldwide. In the 1980s and 1990s, scientific research and mathematical work on supercomputers was also carried out by mathematician Dr. Tasneem Shah at the Kahuta Research Laboratories while trying to solve additive problems in computational mathematics and statistical physics using the Monte |
https://en.wikipedia.org/wiki/Anatolian%20Biogeographic%20Region | The Anatolian Biogeographic Region is a biogeographic region of Turkey, as defined by the European Environment Agency.
Extent
The Anatolian Biogeographic Region covers the interior and east of Anatolia, and excludes the coastal areas along the Black Sea and Mediterranean.
It includes the central Anatolian Plateau, the Pontic and Taurus mountains and northern Mesopotamia.
It is an area of recently folded mountains formed from sedimentary rocks from the Paleozoic to Quaternary (539 million years ago to the present).
There are many intrusions and broad areas of recent volcanic material including Mount Ararat at , but no volcanic activity at present.
The area is geologically unstable and very prone to earthquakes.
It averages about above sea level, with rugged terrain surrounding areas of gently sloping or flat land.
The main rivers are the Euphrates, Tigris, Kizilirmak and Sakarya.
Environment
Most of the area receives low levels of precipitation.
There are large differences in temperature between summer and winter.
The region provides a biogeographical transition between Europe and Asia, and is home to several mammals originating in North Africa or Asia.
There are many endemic species of flora adapted to xerophytic and salt steppe conditions.
Threats to biodiversity include agriculture, over-grazing, exotic species, dams and drainage projects.
Notes
Sources
Environment of Turkey
Biogeography
Mount Ararat |
https://en.wikipedia.org/wiki/Quantum%20suicide%20and%20immortality | Quantum suicide is a thought experiment in quantum mechanics and the philosophy of physics. Purportedly, it can falsify any interpretation of quantum mechanics other than the Everett many-worlds interpretation by means of a variation of the Schrödinger's cat thought experiment, from the cat's point of view. Quantum immortality refers to the subjective experience of surviving quantum suicide. This concept is sometimes conjectured to be applicable to real-world causes of death as well.
As a thought experiment, quantum suicide is an intellectual exercise in which an abstract setup is followed through to its logical consequences merely to prove a theoretical point. Virtually all physicists and philosophers of science who have described it, especially in popularized treatments, underscore that it relies on contrived, idealized circumstances that may be impossible or exceedingly difficult to realize in real life, and that its theoretical premises are controversial even among supporters of the many-worlds interpretation. Thus, as cosmologist Anthony Aguirre warns, "[...] it would be foolish (and selfish) in the extreme to let this possibility guide one's actions in any life-and-death question."
History
Hugh Everett did not mention quantum suicide or quantum immortality in writing; his work was intended as a solution to the paradoxes of quantum mechanics. Eugene Shikhovtsev's biography of Everett states that "Everett firmly believed that his many-worlds theory guaranteed him immortality: his consciousness, he argued, is bound at each branching to follow whatever path does not lead to death". Peter Byrne, author of a biography of Everett, reports that Everett also privately discussed quantum suicide (such as to play high-stakes Russian roulette and survive in the winning branch), but adds that "[i]t is unlikely, however, that Everett subscribed to this [quantum immortality] view, as the only sure thing it guarantees is that the majority of your copies will die, hardly a ra |
https://en.wikipedia.org/wiki/KANT%20%28software%29 | KANT is a computer algebra system for mathematicians interested in algebraic number theory, performing sophisticated computations in algebraic number fields, in global function fields, and in local fields. KASH is the associated command line interface. They have been developed by the Algebra and Number Theory research group of the Institute of Mathematics at Technische Universität Berlin under the project leadership of Prof. Dr Michael Pohst. Kant is free for non-commercial use.
See also
List of computer algebra systems |
https://en.wikipedia.org/wiki/Daemon%20%28computing%29 | In multitasking computer operating systems, a daemon (/ˈdiːmən/ or /ˈdeɪmən/) is a computer program that runs as a background process, rather than being under the direct control of an interactive user. Traditionally, the process names of a daemon end with the letter d, for clarification that the process is in fact a daemon, and for differentiation between a daemon and a normal computer program. For example, syslogd is a daemon that implements the system logging facility, and sshd is a daemon that serves incoming SSH connections.
In a Unix environment, the parent process of a daemon is often, but not always, the init process. A daemon is usually created either by a process forking a child process and then immediately exiting, thus causing init to adopt the child process, or by the init process directly launching the daemon. In addition, a daemon launched by forking and exiting typically must perform other operations, such as dissociating the process from any controlling terminal (tty). Such procedures are often implemented in various convenience routines such as daemon(3) in Unix.
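The double-fork procedure described above can be sketched in Python on a Unix-like system; this is a simplified illustration (the function name and the pipe-based reporting are inventions for the demo), omitting the usual umask reset, directory change, and stdio redirection to /dev/null:

```python
import os

def daemonize_and_report(write_fd: int) -> int:
    """Sketch of the classic double-fork daemonization sequence.

    Returns the first child's pid in the caller; the grandchild (the
    daemon, adopted by init once the first child exits) reports its new
    session id over write_fd and then exits.
    """
    pid = os.fork()
    if pid > 0:
        return pid            # original parent returns to the caller
    os.setsid()               # dissociate from the controlling terminal
    if os.fork() > 0:
        os._exit(0)           # first child exits; init adopts the grandchild
    # grandchild: a real daemon would run its service loop here
    os.write(write_fd, str(os.getsid(0)).encode())
    os._exit(0)
```

The second fork ensures the daemon is not a session leader and so cannot accidentally reacquire a controlling terminal.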
Systems often start daemons at boot time that will respond to network requests, hardware activity, or other programs by performing some task. Daemons such as cron may also perform defined tasks at scheduled times.
Terminology
The term was coined by the programmers at MIT's Project MAC. According to Fernando J. Corbató, who worked on Project MAC in 1963, his team was the first to use the term daemon, inspired by Maxwell's demon, an imaginary agent in physics and thermodynamics that helped to sort molecules, stating, "We fancifully began to use the word daemon to describe background processes that worked tirelessly to perform system chores". Unix systems inherited this terminology. Maxwell's demon is consistent with Greek mythology's interpretation of a daemon as a supernatural being working in the background.
In the general sense, daemon is an older form of the word "demon", from the Greek δαίμων. In the Unix System Ad |
https://en.wikipedia.org/wiki/Second%20moment%20method | In mathematics, the second moment method is a technique used in probability theory and analysis to show that a random variable has positive probability of being positive. More generally, the "moment method" consists of bounding the probability that a random variable fluctuates far from its mean, by using its moments.
The method is often quantitative, in that one can often deduce a lower bound on the probability that the random variable is larger than some constant times its expectation. The method involves comparing the second moment of random variables to the square of the first moment.
First moment method
The first moment method is a simple application of Markov's inequality for integer-valued variables. For a non-negative, integer-valued random variable X, we may want to prove that X = 0 with high probability. To obtain an upper bound for P(X > 0), and thus a lower bound for P(X = 0), we first note that since X takes only integer values, P(X > 0) = P(X ≥ 1). Since X is non-negative we can now apply Markov's inequality to obtain P(X ≥ 1) ≤ E[X]. Combining these we have P(X > 0) ≤ E[X]; the first moment method is simply the use of this inequality.
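The bound P(X > 0) ≤ E[X] can be checked exactly for a concrete integer-valued variable; here X ~ Binomial(n, p) is an arbitrary illustrative choice:

```python
# Exact check of the first-moment bound P(X > 0) <= E[X] for a
# non-negative integer-valued variable, X ~ Binomial(n, p).
n, p = 10, 0.05
prob_positive = 1.0 - (1.0 - p) ** n  # P(X > 0) = P(X >= 1), about 0.401
expectation = n * p                   # E[X] = 0.5
assert prob_positive <= expectation
```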
Second moment method
In the other direction, E[X] being "large" does not directly imply that P(X = 0) is small. However, we can often use the second moment to derive such a conclusion, using the Cauchy–Schwarz inequality.
The method can also be used on distributional limits of random variables. Furthermore, the estimate of the previous theorem can be refined by means of the so-called Paley–Zygmund inequality. Suppose that X_n is a sequence of non-negative real-valued random variables which converge in law to a random variable X. If there are finite positive constants c_1, c_2 such that
E[X_n^2] ≤ c_1 E[X_n]^2 and E[X_n] ≥ c_2
hold for every n, then it follows from the Paley–Zygmund inequality that for every n and θ in (0, 1),
P(X_n ≥ θ c_2) ≥ (1 − θ)^2 / c_1.
Consequently, the same inequality is satisfied by X.
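The underlying Paley–Zygmund inequality, P(X ≥ θ E[X]) ≥ (1 − θ)^2 E[X]^2 / E[X^2], can be verified exactly for a concrete non-negative variable; X ~ Binomial(10, 1/2) and θ = 1/2 below are arbitrary illustrative choices:

```python
from math import comb

# Exact check of the Paley-Zygmund bound
# P(X >= theta*E[X]) >= (1 - theta)^2 * E[X]^2 / E[X^2]
# for X ~ Binomial(10, 0.5) and theta = 0.5.
n, p, theta = 10, 0.5, 0.5
pmf = [comb(n, k) * p**k * (1.0 - p) ** (n - k) for k in range(n + 1)]
mean = sum(k * pmf[k] for k in range(n + 1))        # E[X] = 5
second = sum(k * k * pmf[k] for k in range(n + 1))  # E[X^2] = 27.5
lhs = sum(pmf[k] for k in range(n + 1) if k >= theta * mean)
rhs = (1.0 - theta) ** 2 * mean**2 / second
assert lhs >= rhs  # about 0.945 >= about 0.227
```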
Example application of method
Setup of problem
The Bernoulli bond percolation subgraph of a graph G at parameter p is a random subgraph obtained from G by deleting every edge of G with pro |
https://en.wikipedia.org/wiki/VAXft | The VAXft was a family of fault-tolerant minicomputers developed and manufactured by Digital Equipment Corporation (DEC) using processors implementing the VAX instruction set architecture (ISA). "VAXft" stood for "Virtual Address Extension, fault tolerant". These systems ran the OpenVMS operating system, and were first supported by VMS 5.4. Two layered software products, VAXft System Services and VMS Volume Shadowing, were required to support the fault-tolerant features of the VAXft and for the redundancy of data stored on hard disk drives.
Architecture
All VAXft systems shared the same basic system architecture. A VAXft system consisted of two "zones" that operated in lock-step: "Zone A" and "Zone B". Each zone was a fully functional computer, capable of running an operating system, and was identical to the other in hardware configuration. Lock-step was achieved by hardware on the CPU module. The CPU module of each zone was connected to the other with a crosslink cable. The crosslink cables carried the results of instructions executed by one CPU module to the other, where they were compared by hardware with the results of the same instructions executed by the latter to ensure that they were identical. The two zones were kept synchronous by a clock signal carried by the crosslink cables. When a hardware failure occurred in one of the zones, the affected zone was brought offline without bringing down the other zone, which continued to operate as normal. When repairs were completed, the offline zone was powered on and automatically resynchronized with the other zone, restoring redundancy.
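The failure-detection idea, executing the same instruction stream in both zones and comparing results at each step, can be caricatured in a few lines. This is a purely illustrative toy model (the function names and fault model are invented); in the real VAXft the comparison happened in CPU-module hardware over the crosslink cables:

```python
def run_lockstep(program, zone_a, zone_b):
    """Run each instruction through both zones and compare the results.

    Returns ("ok", outputs) if the zones agree on every step, or
    ("zone_fault", i) identifying the first divergent instruction,
    at which point the failing zone would be taken offline.
    """
    outputs = []
    for i, instr in enumerate(program):
        a, b = zone_a(instr), zone_b(instr)
        if a != b:
            return ("zone_fault", i)
        outputs.append(a)
    return ("ok", outputs)

def healthy(instr):
    return instr * 2

def faulty(instr):
    # simulates a hardware fault on one particular input
    return 0 if instr == 3 else instr * 2

assert run_lockstep([1, 2, 3], healthy, healthy) == ("ok", [2, 4, 6])
assert run_lockstep([1, 2, 3], healthy, faulty) == ("zone_fault", 2)
```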
VAXft Model 310
The VAXft Model 310, originally the VAXft 3000 Model 310 and code-named "Cirrus", was introduced in February 1990 and shipped in June. It was the first VAXft model, and was DEC's first fault-tolerant computer that was generally available. At the 1991 launch of new VAXft models, the VAXft 3000 Model 310 was renamed to follow the new naming scheme, becoming the VAXf |
https://en.wikipedia.org/wiki/Suillus%20mediterraneensis | Suillus mediterraneensis is a species of edible mushroom in the genus Suillus. It is found in Europe in coniferous forests, mycorrhizal with two-needled pines (Pinus halepensis, P. pinea, P. pinaster). Originally named Boletus mediterraneensis in 1969, it was transferred to Suillus in 1992. It is similar to Suillus granulatus, but is distinguished by yellowish rather than white flesh. |
https://en.wikipedia.org/wiki/Regular%20Hadamard%20matrix | In mathematics a regular Hadamard matrix is a Hadamard matrix whose row and column sums are all equal. While the order of a Hadamard matrix must be 1, 2, or a multiple of 4, regular Hadamard matrices carry the further restriction that the order be a square number. The excess, denoted E(H), of a Hadamard matrix H of order n is defined to be the sum of the entries of H. The excess satisfies the bound
|E(H)| ≤ n^(3/2). A Hadamard matrix attains this bound if and only if it is regular.
Parameters
If n = 4u^2 is the order of a regular Hadamard matrix, then the excess is ±8u^3 and the row and column sums all equal ±2u. It follows that each row has 2u^2 ± u positive entries and 2u^2 ∓ u negative entries. The orthogonality of rows implies that any two distinct rows have exactly u^2 ± u positive entries in common. If H is interpreted as the incidence matrix of a block design, with 1 representing incidence and −1 representing non-incidence, then H corresponds to a symmetric 2-(v,k,λ) design with parameters (4u^2, 2u^2 ± u, u^2 ± u). A design with these parameters is called a Menon design.
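These parameter relations can be checked on the smallest nontrivial case, u = 1 and n = 4; the particular ±1 matrix below is one standard example of a regular Hadamard matrix of order 4:

```python
# A regular Hadamard matrix of order n = 4u^2 with u = 1: all row and
# column sums equal 2u = 2, the excess is 8u^3 = 8, and each pair of
# distinct rows is orthogonal.
H = [
    [ 1,  1,  1, -1],
    [ 1,  1, -1,  1],
    [ 1, -1,  1,  1],
    [-1,  1,  1,  1],
]
n = len(H)
# Hadamard condition: H * H^T = n * I
for i in range(n):
    for j in range(n):
        dot = sum(H[i][k] * H[j][k] for k in range(n))
        assert dot == (n if i == j else 0)
# Regularity: all row and column sums are equal
row_sums = [sum(row) for row in H]
col_sums = [sum(H[i][j] for i in range(n)) for j in range(n)]
assert row_sums == col_sums == [2] * n
excess = sum(row_sums)  # E(H) = 8, attaining |E(H)| = n^(3/2)
assert excess == 8
```

Each row indeed has 2u^2 + u = 3 positive and 2u^2 − u = 1 negative entries.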
Construction
A number of methods for constructing regular Hadamard matrices are known, and some exhaustive computer searches have been done for regular Hadamard matrices with specified symmetry groups, but it is not known whether every even perfect square is the order of a regular Hadamard matrix. Bush-type Hadamard matrices are regular Hadamard matrices of a special form, and are connected with finite projective planes.
History and naming
Like Hadamard matrices more generally, regular Hadamard matrices are named after Jacques Hadamard. Menon designs are named after P Kesava Menon, and Bush-type Hadamard matrices are named after Kenneth A. Bush. |
https://en.wikipedia.org/wiki/107%20%28number%29 | 107 (one hundred [and] seven) is the natural number following 106 and preceding 108.
In mathematics
107 is the 28th prime number. The next prime is 109, with which it forms a twin prime pair, making 107 a Chen prime.
Plugged into the expression 2^n − 1, 107 yields 162259276829213363391578010288127, a Mersenne prime. 107 is itself a safe prime.
It is the fourth Busy beaver number, the maximum number of steps that any Turing machine with 2 symbols and 4 states can make before eventually halting.
It is the number of triangle-free graphs on 7 vertices.
It is the ninth emirp, because reversing its digits gives another prime number (701).
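The primality claims above are easy to verify mechanically with trial division:

```python
# Verifying the stated properties of 107 with a tiny trial-division test.
def is_prime(m: int) -> bool:
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

assert is_prime(107) and is_prime(109)   # 107 and 109 are twin primes
assert is_prime((107 - 1) // 2)          # 53 is prime, so 107 is a safe prime
assert is_prime(701)                     # the digit reversal is prime: an emirp
assert 2**107 - 1 == 162259276829213363391578010288127  # the quoted Mersenne number
```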
In other fields
As "one hundred and seven", it is the smallest positive integer requiring six syllables in English (without the "and" it only has five syllables and seventy-seven is a smaller 5-syllable number).
107 is also:
The atomic number of bohrium.
The emergency telephone number in Argentina and Cape Town.
The telephone number of the police in Hungary.
A common designation for the fair use exception in copyright law (from 17 U.S.C. 107)
The Peugeot 107 model of car.
In sports
The 107% rule, a Formula One Sporting Regulation in operation from 1996 to 2002 and from 2011 onward.
The number 107 is also associated with the Timbers Army supporters group of the Portland Timbers soccer team, in reference to the stadium seating section where the group originally congregated.
See also
List of highways numbered 107 |
https://en.wikipedia.org/wiki/World%20Backup%20Day | World Backup Day is a commemorative date celebrated annually by the backup industry and tech industry all over the world. It highlights the importance of protecting data and keeping systems and computers secure.
World Backup Day started with a post on Reddit in which a user wrote about losing their hard drive and wishing someone had reminded them how important it is to back up data. The campaign was started by Ismail Jadun in 2011, and every year news outlets write articles about the importance of backing up data on World Backup Day.
Observance
Every year on March 31, companies tweet and release podcasts about the importance of backing up data to prevent data loss. On the website WorldBackupDay.com, people can pledge, in ten languages and on various social media channels, to back up their data. World Backup Day is recognized as a National Calendar Day on many national holiday websites. |
https://en.wikipedia.org/wiki/Bis%28trifluoromethyl%29peroxide | Bis(trifluoromethyl)peroxide (BTP) is a fluorocarbon derivative first produced by Frédéric Swarts. It has some utility as a radical initiator for polymerisation reactions. BTP is unusual in that, unlike many peroxides, it is a gas, is non-explosive, and has good thermal stability.
History
BTP was first synthesised by an electrolysis reaction using aqueous solutions containing trifluoroacetate ion but only in trace amounts. BTP was one of the by-products formed during trifluoromethylation reactions carried out by Swarts. Later it was discovered that BTP had some unusual properties and so more economically viable synthesis routes were sought. An early example was that of Porter and Cady, who were able to achieve a conversion rate of around 20-30% at atmospheric pressure and up to 90% at elevated pressure in an autoclave.
Synthesis and reaction
Present methods for the synthesis of BTP involve the reaction of carbonyl fluoride and chlorine trifluoride at 0–300 °C.
An example of this reaction is the reaction of carbonyl fluoride and chlorine trifluoride in the presence of alkali metal fluorides or bifluorides at 100–250 °C. This variant is quite insensitive to variations in temperature.
Examples of the synthesis are:
2CF2O + ClF3 → CF3OOCF3 + ClF
6CF2O + 2ClF3 → 3CF3OOCF3 + Cl2
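As a quick consistency check, both equations balance atom-for-atom; the composition counters below were entered by hand from the formulas:

```python
from collections import Counter

# Atom-count check of the two synthesis equations.
CF2O = Counter(C=1, F=2, O=1)   # carbonyl fluoride
ClF3 = Counter(Cl=1, F=3)       # chlorine trifluoride
BTP  = Counter(C=2, F=6, O=2)   # CF3OOCF3
ClF  = Counter(Cl=1, F=1)       # chlorine monofluoride
Cl2  = Counter(Cl=2)

def combine(*terms):
    """Sum coefficient-weighted atom counts for one side of an equation."""
    total = Counter()
    for coeff, species in terms:
        for atom, count in species.items():
            total[atom] += coeff * count
    return total

# 2 CF2O + ClF3 -> CF3OOCF3 + ClF
assert combine((2, CF2O), (1, ClF3)) == combine((1, BTP), (1, ClF))
# 6 CF2O + 2 ClF3 -> 3 CF3OOCF3 + Cl2
assert combine((6, CF2O), (2, ClF3)) == combine((3, BTP), (1, Cl2))
```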
BTP can be isolated and purified by well-recognised procedures. Chlorine monofluoride and chlorine trifluoride may still be present in the mixture used to synthesize the compound. These compounds are highly reactive and hazardous, and are preferably deactivated as soon as possible. The deactivation is carried out by adding anhydrous calcium chloride to the mixture. The deactivated mixture is scrubbed with water and dilute caustic to remove the chlorine and residual carbonyl fluoride before drying to yield pure BTP.
Metabolism
In mammals, there are pathways for the metabolism of peroxides using various enzymes of the peroxidase class. For BTP, this would correspond to the following gener |
https://en.wikipedia.org/wiki/Video%20assistant%20referee | The video assistant referee (VAR) is a match official in association football who assists the referee by reviewing decisions using video footage and providing advice to the referee based on those reviews.
The assistant video assistant referee (AVAR) is a match official appointed to assist the VAR in the video operation room. The responsibilities of the AVAR include watching the live action on the field while the VAR is undertaking a "check" or a "review", keeping a record of reviewable incidents, and communicating the outcome of a review to broadcasters.
Following extensive trialling in a number of major competitions, VAR was formally written into the Laws of the Game by the International Football Association Board (IFAB) on March 3, 2018. Operating under the philosophy of "minimal interference, maximum benefit", the VAR system seeks to provide a way for "clear and obvious errors" and "serious missed incidents" to be corrected.
Procedure
There are four categories of decisions that can be reviewed.
Goal/no goal – attacking team commits an offence, ball out of play, ball entering goal, offside, handball, offences and encroachment during penalty kicks.
Penalty/no penalty – attacking team commits an offence, ball out of play, location of offence, incorrect awarding, offence not penalised.
Direct red card – denial of obvious goal-scoring opportunity, serious foul play, violent conduct/biting/spitting, using offensive/insulting/abusive language or gestures.
Mistaken identity in awarding a red or yellow card.
Check
The VAR and the AVARs automatically check every on-field referee decision falling under the four reviewable categories. The VAR may perform a "silent check," communicating to the referee that no mistake was made, while not causing any delay to the game. At other times, a VAR check may cause the game to be delayed while the VAR ascertains whether or not a possible mistake has occurred. The referee may delay the restart of play for this to occur, and indi |
https://en.wikipedia.org/wiki/Thompson%20uniqueness%20theorem | In mathematical finite group theory, Thompson's original uniqueness theorem states that in a minimal simple finite group of odd order there is a unique maximal subgroup containing a given elementary abelian subgroup of rank 3. gave a shorter proof of the uniqueness theorem. |
https://en.wikipedia.org/wiki/Recovery%20%28metallurgy%29 | In metallurgy, recovery is a process by which a metal or alloy's deformed grains can reduce their stored energy by the removal or rearrangement of defects in their crystal structure. These defects, primarily dislocations, are introduced by plastic deformation of the material and act to increase the yield strength of a material. Since recovery reduces the dislocation density, the process is normally accompanied by a reduction in a material's strength and a simultaneous increase in the ductility. As a result, recovery may be considered beneficial or detrimental depending on the circumstances.
Recovery is related to the similar processes of recrystallization and grain growth, each of them being stages of annealing. Recovery competes with recrystallization, as both are driven by the stored energy, but is also thought to be a necessary prerequisite for the nucleation of recrystallized grains. It is so called because there is a recovery of the electrical conductivity due to a reduction in dislocations. This creates defect-free channels, giving electrons an increased mean free path.
Definition
The physical processes that fall under the designations of recovery, recrystallization and grain growth are often difficult to distinguish in a precise manner. Doherty et al. (1998) stated:
"The authors have agreed that ... recovery can be defined as all annealing processes occurring in deformed materials that occur without the migration of a high-angle grain boundary"
Thus the process can be differentiated from recrystallization and grain growth as both feature extensive movement of high-angle grain boundaries.
If recovery occurs during deformation (a situation that is common in high-temperature processing) then it is referred to as 'dynamic' while recovery that occurs after processing is termed 'static'. The principal difference is that during dynamic recovery, stored energy continues to be introduced even as it is decreased by the recovery process - resulting in a form of dyna |
https://en.wikipedia.org/wiki/Re-order%20buffer | A re-order buffer (ROB) is a hardware unit used in an extension to the Tomasulo algorithm to support out-of-order and speculative instruction execution. The extension forces instructions to be committed in-order.
The buffer is a circular buffer (to provide a FIFO instruction ordering queue) implemented as an array/vector (which allows recording of results against instructions as they complete out of order).
There are three stages to the Tomasulo algorithm: "Issue", "Execute", "Write Result". In an extension to the algorithm, there is an additional "Commit" stage. During the Commit stage, instruction results are stored in a register or memory. The "Write Result" stage is modified to place results in the re-order buffer. Each instruction is tagged in the reservation station with its index in the ROB for this purpose.
The contents of the buffer are used for data dependencies of other instructions scheduled in the buffer. The head of the buffer will be committed once its result is valid. Its dependencies will have already been calculated and committed since they must be ahead of the instruction in the buffer, though not necessarily adjacent to it. Data dependencies between instructions would normally stall the pipeline while an instruction waits for its dependent values. The ROB allows the pipeline to continue to process other instructions while ensuring results are committed in order to prevent data hazards such as read after write (RAW), write after read (WAR) and write after write (WAW).
There are additional fields in every entry of the buffer to support the extended algorithm:
Instruction type (jump, store to memory, store to register)
Destination (either memory address or register number)
Result (value that goes to the destination, or an indication of an (un)successful jump)
Validity (does the result already exist?)
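As an illustration of how these fields interact with in-order commit, here is a minimal, hypothetical Python sketch of a re-order buffer. Real hardware uses a fixed-size circular array indexed by tags rather than object references; the class and field names below are illustrative:

```python
from dataclasses import dataclass
from collections import deque

@dataclass
class ROBEntry:
    instr_type: str          # 'jump', 'store', or 'register'
    destination: str         # register name or memory address
    result: object = None    # value destined for the destination
    valid: bool = False      # does the result already exist?

class ReorderBuffer:
    """FIFO queue: results arrive out of order, commits happen in order."""
    def __init__(self):
        self.entries = deque()

    def issue(self, instr_type, destination):
        entry = ROBEntry(instr_type, destination)
        self.entries.append(entry)
        return entry                  # tag kept by the reservation station

    def write_result(self, entry, value):
        entry.result, entry.valid = value, True   # out-of-order completion

    def commit(self, registers):
        # Only the head may commit, and only once its result is valid.
        committed = []
        while self.entries and self.entries[0].valid:
            e = self.entries.popleft()
            registers[e.destination] = e.result
            committed.append(e.destination)
        return committed

rob = ReorderBuffer()
a = rob.issue('register', 'r1')
b = rob.issue('register', 'r2')
rob.write_result(b, 7)     # second instruction finishes first...
regs = {}
rob.commit(regs)           # ...but nothing commits until the head is valid
rob.write_result(a, 3)
rob.commit(regs)           # now both commit, in program order
print(regs)                # {'r1': 3, 'r2': 7}
```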
The consequences of the re-order buffer include precise exceptions and easy rollback control of target address mis-predictions (branch or jum |
https://en.wikipedia.org/wiki/Opuntia%20macrorhiza | Opuntia macrorhiza is a common and widespread species of cactus with the common names plains pricklypear or prairie pricklypear or western pricklypear. It is found throughout the Great Plains of the United States, from Texas to Minnesota, and west into the Rocky Mountain states to New Mexico, Utah, and perhaps Idaho, with sporadic populations in the Mississippi and Ohio Valleys. It is also reported from northern Mexico in the states of Chihuahua, Sonora, Coahuila, Nuevo León, Durango, Tamaulipas, and San Luis Potosí, though all Arizona and Mexican records should be considered with caution due to confusion with other similar species. The species is cultivated as an ornamental in other locations.
The species prefers well-drained, sandy or gravelly soils, mostly in grassland areas. It is one of the shorter species of the genus, rarely over 30 cm (1 foot) tall, spreading horizontally and forming wide clumps. Flowers are showy and bright yellow, often with red markings near the base of the petals. Fruits are narrow, red, juicy and edible.
Subdivisions
Some subspecies and varieties have been proposed within the species. None are accepted by Plants of the World Online, which treats Opuntia macrorhiza subsp. pottsii (Salm-Dyck) U.Guzmán & Mandujano and Opuntia macrorhiza var. pottsii (Salm-Dyck) L.D.Benson as the separate species Opuntia pottsii.
https://en.wikipedia.org/wiki/Thomas%20Baxter%20%28mathematician%29 | Thomas Baxter (fl. 1732–1740) was a schoolmaster and mathematician who published an erroneous method of squaring the circle. He was derided as a "pseudo-mathematician" by F. Y. Edgeworth, writing for the Dictionary of National Biography.
When he was master of a private school at Crathorne, North Yorkshire, Baxter composed a book entitled The Circle squared (London: 1732), published in octavo. The mathematical book begins with the untrue assertion that "if the diameter of a circle be unity or one, the circumference of that circle will be 3.0625", where the value should correctly be π. From this incorrect assumption, Baxter proves fourteen geometric theorems on circles, alongside some others on cones and ellipses, which Edgeworth refers to as of "equal absurdity" to Baxter's other assertions. Thomas Gent, who published the work, wrote in his reminiscences, in The Life of Mr. Thomas Gent, that "as it never proved of any effect, it was converted to waste paper, to the great mortification of the author".
This book has received harsh reviews from modern mathematicians and scholars. Antiquary Edward Peacock referred to it as "no doubt, great rubbish". Mathematician Augustus De Morgan included Baxter's proof among his Budget of Paradoxes (1872), dismissing it as an absurd work. The work was the reason Edgeworth gave Baxter the epithet, "pseudo-mathematician".
Baxter published another work, Matho, or the Principles of Astronomy and Natural Philosophy accommodated to the Use of Younger Persons (London: 1740). Unlike Baxter's other work, this volume enjoyed considerable popularity in its time. |
https://en.wikipedia.org/wiki/Set%20packing | Set packing is a classical NP-complete problem in computational complexity theory and combinatorics, and was one of Karp's 21 NP-complete problems. Suppose one has a finite set S and a list of subsets of S. Then, the set packing problem asks if some k subsets in the list are pairwise disjoint (in other words, no two of them share an element).
More formally, given a universe U and a family S of subsets of U, a packing is a subfamily C ⊆ S of sets such that all sets in C are pairwise disjoint. The size of the packing is |C|. In the set packing decision problem, the input is a pair (U, S) and an integer k; the question is whether there is a set packing of size k or more. In the set packing optimization problem, the input is a pair (U, S), and the task is to find a set packing that uses the most sets.
The problem is clearly in NP since, given k subsets, we can easily verify that they are pairwise disjoint in polynomial time.
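The NP membership argument amounts to a certificate check that runs in time linear in the total input size; a sketch:

```python
def pairwise_disjoint(subsets) -> bool:
    """Certificate check for set packing: no element appears in two sets."""
    seen = set()
    for s in subsets:
        for x in s:
            if x in seen:
                return False
            seen.add(x)
    return True

print(pairwise_disjoint([{1, 2}, {3, 4}, {5}]))   # True
print(pairwise_disjoint([{1, 2}, {2, 3}]))        # False (both contain 2)
```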
The optimization version of the problem, maximum set packing, asks for the maximum number of pairwise disjoint sets in the list. It is a maximization problem that can be formulated naturally as an integer linear program, belonging to the class of packing problems.
Integer linear program formulation
The maximum set packing problem can be formulated as the following integer linear program.
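A standard way to write this program, with a 0–1 variable $x_S$ for each set $S$ in the family $\mathcal{S}$ over universe $\mathcal{U}$ (notation supplied here for concreteness), is:

```latex
\begin{align}
\text{maximize}\quad   & \sum_{S \in \mathcal{S}} x_S \\
\text{subject to}\quad & \sum_{S \colon e \in S} x_S \le 1 && \text{for each } e \in \mathcal{U} \\
                       & x_S \in \{0, 1\} && \text{for each } S \in \mathcal{S}
\end{align}
```

Each constraint says that every element of the universe is covered by at most one chosen set, which is exactly pairwise disjointness.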
Complexity
The set packing problem is not only NP-complete, but its optimization version (the general maximum set packing problem) has been proven as difficult to approximate as the maximum clique problem; in particular, it cannot be approximated within any constant factor. The best known algorithm approximates it within a factor of . The weighted variant can be approximated similarly.
Packing sets with a bounded size
The problem does have a variant which is more tractable. Given any positive integer k≥3, the k-set packing problem is a variant of set packing in which each set contains at most k elements.
When k=1, the problem is trivial. When k=2, the problem is equivalent to finding |
https://en.wikipedia.org/wiki/2-Amino-4-hydroxy-6-pyrophosphoryl-methylpteridine | 2-Amino-4-hydroxy-6-pyrophosphoryl-methylpteridine (7,8-Dihydropterin pyrophosphate, dihydropterin-CH2OH-diphosphate) is a pteridine; a precursor to dihydrofolic acid. |
https://en.wikipedia.org/wiki/Mondex | Mondex was a smart card electronic cash system, implemented as a stored-value card and owned by Mastercard.
Pioneered by two bankers from NatWest in 1990, it was spun-off to a separate consortium later on, then sold to Mastercard.
Mondex allowed users to use its electronic card as they would with cash, enabling peer-to-peer offline transfers between cards, which did not need any authorization, via Mondex ATMs, computer card readers, personal 'wallets' and specialized telephones. This offline nature of the system and other unique features made Mondex stand out from leading competitors at the time, such as Visa Cash, which was a closed system and much closer in concept to the transactional operation of traditional payment cards.
Mondex also allowed for a full-card locking mechanism, use of multiple currencies within a single card, and a certain degree of user anonymity. Mondex cards were at one point commonplace in many universities, where they were mostly trialled, and were also issued as dual-application cards, combined with credit cards, ID cards, and loyalty membership cards.
The system was introduced around the world in more than a dozen nations, with various differing implementations. Despite continuous investment from Mastercard, the Mondex scheme did not seem to catch on worldwide, and the last place where it operated, Taiwan, had its cards disabled on 31 May 2008, being succeeded by a similar but more technologically advanced system, named Mastercard Cash, which utilized contactless operation, culminating in the TaiwanMoney Card.
The Mondex scheme was a forerunner of the cashless society that is common today via mobile payment, digital wallets, and contactless payment, and was far ahead of its time.
History
The Mondex scheme was invented in 1990 by Tim Jones and Graham Higgins of NatWest in the United Kingdom. In March 1992 internal tests of the system, known at the time as 'Byte', started running at one of NatWest's major comp |
https://en.wikipedia.org/wiki/Giacomo%20Pylarini | Giacomo Pylarini or Jacobus Pylarinus or Iacob Pylarino (Greek: Ιάκωβος Πυλαρινός; 1659–1718) was a Greek physician and consul for the republic of Venice in Smyrna. In 1715 he became the first person to have an account of the practice of inoculation published by the Royal Society.
He studied law and then medicine at the University of Padua before qualifying as a physician. He travelled to different parts of Asia and Africa and practised both at Smyrna and Constantinople. In Moscow he was appointed physician to the Russian Tsar Peter the Great.
He returned to Smyrna for the second time and resided there as the Venetian Consul as well as practising physician.
He and another Greek doctor, Emmanuel Timoni, introduced variolation to Western Europe through their writings from Constantinople.
https://en.wikipedia.org/wiki/Advanced%20Stirling%20radioisotope%20generator | The advanced Stirling radioisotope generator (ASRG) is a radioisotope power system first developed at NASA's Glenn Research Center. It uses a Stirling power conversion technology to convert radioactive-decay heat into electricity for use on spacecraft. The energy conversion process used by an ASRG is significantly more efficient than previous radioisotope systems, using one quarter of the plutonium-238 to produce the same amount of power.
Despite termination of the ASRG flight development contract in 2013, NASA continues a small investment in testing by private companies. Flight-ready Stirling-based units are not expected before 2028.
Development
Development was undertaken in 2000 under joint sponsorship by the United States Department of Energy (DoE), Lockheed Martin Space Systems, and the Stirling Research Laboratory at NASA's Glenn Research Center (GRC) for potential future space missions.
In 2012, NASA chose a solar-powered mission (InSight) for the Discovery 12 interplanetary mission, negating the need for a radioisotope power system for the 2018 launch.
The DOE cancelled the Lockheed contract in late 2013, after the cost had risen to over $260 million, $110 million more than originally expected. It was also decided to make use of remaining program hardware in constructing and testing a second engineering unit (for testing and research), which was completed in August 2014 in a close-out phase and shipped to GRC. Testing done in 2015 showed power fluctuations after just 175 hr of operation, becoming more frequent and larger in magnitude.
NASA also needed more funding for continued plutonium-238 production (which will be used in existing MMRTGs for long-range probes in the meantime) and decided to use the savings from the ASRG cancellation to do so rather than take funding from science missions.
Despite termination of the ASRG flight development contract, NASA continues a small investment testing Stirling converter technologies developed by Sunpower Inc. and I |
https://en.wikipedia.org/wiki/YouTube%20and%20privacy | Since its founding in 2005, the American video-sharing website YouTube has been faced with a growing number of privacy issues, including allegations that it allows users to upload unauthorized copyrighted material and allows personal information from young children to be collected without their parents' consent.
Early history (2005–2010)
On March 12, 2007, Viacom sued YouTube, demanding $1 billion in damages and saying that it had found more than 150,000 unauthorized clips of its material on YouTube that had been viewed "an astounding 1.5 billion times". YouTube responded by stating that it "goes far beyond its legal obligations in assisting content owners to protect their works".
During the same court battle, Viacom won a court ruling requiring YouTube to hand over 12 terabytes of data detailing the viewing habits of every user who has watched videos on the site. The decision was criticized by the Electronic Frontier Foundation, which called the court ruling "a setback to privacy rights".
COPPA settlement
In April 2018, a coalition of 23 groups (including the CCFC, CDD, and Common Sense Media) filed a complaint with the Federal Trade Commission, alleging that YouTube collected information from users under the age of 13 without parental consent, in violation of the Children's Online Privacy Protection Act (COPPA).
In September 2019, YouTube was fined $170 million by the FTC for collecting personal information from minors under the age of 13 (in particular, viewing history) without parental consent, in order to allow channel operators to serve targeted advertising on their videos. In particular, the FTC ruled that YouTube was partly liable under COPPA, as the service's rating and curation of content as being suitable for children constituted the targeting of the website towards children. In order to comply with the settlement, YouTube was ordered to "develop, implement, and maintain a system for Channel Owners to designate whether their Content on the YouTube Servic |
https://en.wikipedia.org/wiki/FLAGS%20register | The FLAGS register is the status register that contains the current state of an x86 CPU. The size and meanings of the flag bits are architecture dependent. It usually reflects the result of arithmetic operations as well as information about restrictions placed on the CPU operation at the current time. These restrictions may include preventing some interrupts from triggering or prohibiting the execution of a class of "privileged" instructions. Additional status flags may bypass memory mapping and define what action the CPU should take on arithmetic overflow.
The carry, parity, auxiliary carry (or half carry), zero and sign flags are included in many architectures.
In the i286 architecture, the register is 16 bits wide. Its successors, the EFLAGS and RFLAGS registers, are 32 bits and 64 bits wide, respectively. The wider registers retain compatibility with their smaller predecessors.
FLAGS
Note: The mask column in the table is the AND bitmask (as hexadecimal value) to query the flag(s) within FLAGS register value.
Usage
All FLAGS registers contain the condition codes, flag bits that let the results of one machine-language instruction affect another instruction. Arithmetic and logical instructions set some or all of the flags, and conditional jump instructions take variable action based on the value of certain flags. For example, jz (Jump if Zero), jc (Jump if Carry), and jo (Jump if Overflow) depend on specific flags. Other conditional jumps test combinations of several flags.
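As an illustration of how condition codes are derived from an arithmetic result, here is a Python sketch of the flag updates for an 8-bit addition. The flag semantics follow the usual x86 definitions, but the function itself is illustrative, not a model of any particular CPU:

```python
def add8_flags(a: int, b: int):
    """Return (result, flags) for an 8-bit addition, x86-style."""
    full = a + b
    result = full & 0xFF
    flags = {
        'CF': full > 0xFF,                              # carry out of bit 7
        'ZF': result == 0,                              # result is zero
        'SF': bool(result & 0x80),                      # sign bit of result
        # signed overflow: operands share a sign that the result lacks
        'OF': bool(~(a ^ b) & (a ^ result) & 0x80),
        'PF': bin(result).count('1') % 2 == 0,          # even parity of low byte
        'AF': ((a & 0xF) + (b & 0xF)) > 0xF,            # carry out of bit 3
    }
    return result, flags

res, f = add8_flags(0x7F, 0x01)   # 127 + 1 overflows signed 8-bit arithmetic
print(res, f['OF'], f['SF'])      # 128 True True
```

A conditional jump such as `jz` would then test `ZF`, `jc` would test `CF`, and `jo` would test `OF`.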
FLAGS registers can be moved from or to the stack. This is part of the job of saving and restoring CPU context around a routine such as an interrupt service routine, whose changes to registers should not be seen by the calling code. Here are the relevant instructions:
The PUSHF and POPF instructions transfer the 16-bit FLAGS register.
PUSHFD/POPFD (introduced with the i386 architecture) transfer the 32-bit double register EFLAGS.
PUSHFQ/POPFQ (introduced with the x64 architecture) tr |
https://en.wikipedia.org/wiki/Protecting%20Children%20from%20Internet%20Pornographers%20Act%20of%202011 | The Protecting Children from Internet Pornographers Act of 2011 (H.R. 1981) was a United States bill designed with the stated intention of increasing enforcement of laws related to the prosecution of child pornography and child sexual exploitation offenses. Representative Lamar Smith (R-Texas), sponsor of H.R. 1981, stated that, "When investigators develop leads that might result in saving a child or apprehending a pedophile, their efforts should not be frustrated because vital records were destroyed simply because there was no requirement to retain them."
Organizations that support the goal of the bill include the National Sheriffs' Association, the National Center for Missing and Exploited Children (NCMEC), the National Center for Victims of Crime, and Eastern North Carolina Stop Human Trafficking Now.
H.R. 1981 has been criticized for its scope and privacy implications. Opponents of the bill, which include the Electronic Frontier Foundation (EFF), the American Civil Liberties Union, and the American Library Association, take issue with the violation of privacy that would necessarily occur if government could compel ISPs to render subscriber information. Kevin Bankston, an EFF staff attorney, stated that "The data retention mandate in this bill would treat every Internet user like a criminal and threaten the online privacy and free speech rights of every American".
History
On May 25, 2011, Representative Lamar Smith of Texas introduced the bill. It was co-sponsored by 25 other House Representatives. The bill passed the United States House Judiciary Committee on July 28, 2011, by a vote of 19–10. As of January 2012, the bill had 39 co-sponsors. A Congressional Budget Office report on the costs of enacting the bill was released on October 12, 2011. The next step for the bill would be a debate in the House of Representatives.
Scope
H.R. 1981 would introduce harsher penalties for offenders and make it a crime to financially facilitate the sale, distribution and purcha |
https://en.wikipedia.org/wiki/Energy%20systems%20language | The energy systems language, also referred to as energese, or energy circuit language, or generic systems symbols, is a modelling language used for composing energy flow diagrams in the field of systems ecology. It was developed by Howard T. Odum and colleagues in the 1950s during studies of the tropical forests funded by the United States Atomic Energy Commission.
Design intent
The design intent of the energy systems language was to facilitate the generic depiction of energy flows through any scale system while encompassing the laws of physics, and in particular, the laws of thermodynamics (see energy transformation for an example).
In particular, H.T. Odum aimed to produce a language which could facilitate the intellectual analysis, engineering synthesis and management of global systems such as the geobiosphere and its many subsystems. Within this aim, H.T. Odum had a strong concern that many abstract mathematical models of such systems were not thermodynamically valid. Hence he used analog computers to build system models: electronic circuits are well suited to modelling natural systems that are assumed to obey the laws of energy flow, because the circuits themselves, like natural systems, obey those same laws, with electricity as the energy form. However, Odum was interested not only in the electronic circuits themselves, but also in how they might be used as formal analogies for modeling other systems which also had energy flowing through them. As a result, Odum did not restrict his inquiry to the analysis and synthesis of any one system in isolation. The discipline most often associated with this kind of approach, together with the use of the energy systems language, is known as systems ecology.
General characteristics
When applying the electronic circuits (and schematics) to modeling ecological and economic systems, Odum believed that generic categories, or characteristic modules, could |
https://en.wikipedia.org/wiki/Bob%20Rau | Bantwal Ramakrishna "Bob" Rau (1951 – December 10, 2002) was a computer engineer and HP Fellow. Rau was a founder and chief architect of Cydrome, where he helped develop the Very long instruction word technology that is now common in modern computer processors. Rau was the recipient of the 2002 Eckert–Mauchly Award.
The IEEE Computer Society has established a "B. Ramakrishna Rau Award" in his memory. Past recipients include major contributors in the microarchitecture field.
https://en.wikipedia.org/wiki/Order%20polytope | In mathematics, the order polytope of a finite partially ordered set is a convex polytope defined from the set. The points of the order polytope are the monotonic functions from the given set to the unit interval, its vertices correspond to the upper sets of the partial order, and its dimension is the number of elements in the partial order. The order polytope is a distributive polytope, meaning that coordinatewise minima and maxima of pairs of its points remain within the polytope.
The order polytope of a partial order should be distinguished from the linear ordering polytope, a polytope defined from a number n as the convex hull of indicator vectors of the sets of edges of n-vertex transitive tournaments.
Definition and example
A partially ordered set is a pair (P, ≤) where P is an arbitrary set and ≤ is a binary relation on pairs of elements of P that is reflexive (for all x, x ≤ x), antisymmetric (for all distinct x and y, at most one of x ≤ y and y ≤ x can be true), and transitive (for all x, y, and z, if x ≤ y and y ≤ z then x ≤ z).
A partially ordered set (P, ≤) is said to be finite when P is a finite set. In this case, the collection of all functions f that map P to the real numbers forms a finite-dimensional vector space, with pointwise addition of functions as the vector sum operation. The dimension of the space is just the number of elements of P. The order polytope is defined to be the subset of this space consisting of the functions f with the following two properties:
For every x in P, 0 ≤ f(x) ≤ 1. That is, f maps the elements of P to the unit interval.
For every x and y with x ≤ y, f(x) ≤ f(y). That is, f is a monotonic function.
For example, for a partially ordered set consisting of two elements x and y, with x ≤ y in the partial order, a function f from these points to the real numbers can be identified with the point (f(x), f(y)) in the Cartesian plane. For this example, the order polytope consists of all points in the plane with 0 ≤ f(x) ≤ f(y) ≤ 1. This is an isosceles right triangle with vertices at (0,0), (0,1), and (1,1).
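The two defining conditions translate directly into a membership test. This sketch (function and variable names are illustrative) checks whether an assignment of coordinates to poset elements lies in the order polytope:

```python
def in_order_polytope(f: dict, relations) -> bool:
    """Membership test: f maps elements into [0, 1] and respects each x <= y."""
    return (all(0.0 <= v <= 1.0 for v in f.values())
            and all(f[x] <= f[y] for x, y in relations))

# The two-element chain x <= y from the example above
relations = [('x', 'y')]
print(in_order_polytope({'x': 0.2, 'y': 0.9}, relations))  # True
print(in_order_polytope({'x': 0.9, 'y': 0.2}, relations))  # False: not monotone
```

The three triangle vertices (0,0), (0,1), and (1,1) all pass this test, while (1,0) fails it.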
Vertices and facets
The vertices of the order polytope consist of monotonic f |
https://en.wikipedia.org/wiki/Off-key | Off-key is musical content that is not at the expected frequency or pitch period, either with respect to some absolute reference frequency, or in a ratiometric sense (i.e. through removal of exactly one degree of freedom, such as the frequency of a keynote), or pitch intervals not well-defined in the ratio of small whole numbers.
The term may also refer to a person or situation being out of step with what is considered normal or appropriate. A single note deliberately played or sung off-key can be called an "off-note". It is sometimes used the same way as a blue note in jazz.
Explanation of on-key
The opposite of off-key is on-key or in-key, which suggests that there is a well-defined keynote, or reference pitch. This does not necessarily have to be an absolute pitch but rather one that is relative for at least the duration of a song. A song is usually in a certain key, which is usually the note that the song ends on and the base frequency to which it resolves at the end.
The base frequency is usually called the harmonic or key center. Being on-key presumes that there is a key center frequency to which some portion of the notes have well-defined intervals.
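Deviation from a reference pitch is conventionally quantified in cents, where 100 cents is one equal-tempered semitone: a frequency f is 1200·log₂(f/f_ref) cents away from the reference f_ref. A small sketch (function name illustrative):

```python
import math

def cents_off(freq: float, ref: float) -> float:
    """Deviation of freq from ref in cents (100 cents = one semitone)."""
    return 1200.0 * math.log2(freq / ref)

# A note sung at 446 Hz against an A4 reference of 440 Hz is audibly sharp:
print(round(cents_off(446.0, 440.0), 1))   # 23.4 -- about a quarter-semitone sharp
print(round(cents_off(440.0, 440.0), 1))   # 0.0 -- exactly on-key
```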
Deliberate use of off-key content
In jazz and blues music, certain notes called "blue notes" are deliberately sung somewhat flat for expressive effect. Examples include the words "Thought He Was a Goner" in the song "And the Cat Came Back" and the words "Yum Yum" in the children's song "Five Green and Speckled Frogs".
See also
Melody
Tonality
Blue note
Tonic (music)
Notes
Musical tuning |
https://en.wikipedia.org/wiki/Pullulan | Pullulan is a polysaccharide polymer consisting of maltotriose units, also known as α-1,4-; α-1,6-glucan. Three glucose units in maltotriose are connected by an α-1,4 glycosidic bond, whereas consecutive maltotriose units are connected to each other by an α-1,6 glycosidic bond. Pullulan is produced from starch by the fungus Aureobasidium pullulans. Pullulan is mainly used by the cell to resist desiccation and predation. The presence of this polysaccharide also facilitates diffusion of molecules both into and out of the cell.
As an edible, mostly tasteless polymer, the chief commercial use of pullulan is in the manufacture of edible films that are used in various breath freshener or oral hygiene products such as Listerine Cool Mint of Johnson and Johnson (USA) and Meltz Super Thin Mints of Avery Bio-Tech Private Ltd. (India). Pullulan and HPMC can also be used as a vegetarian substitute for drug capsules, rather than gelatine. As a food additive, it is known by the E number E1204.
Pullulan has also been explored as a natural polymeric biomaterial for fabricating injectable scaffolds for bone tissue engineering, cartilage tissue engineering, and intervertebral disc regeneration.
See also
Pullulanase
Desiccation |
https://en.wikipedia.org/wiki/Spesmilo | The spesmilo (plural spesmiloj) is an obsolete decimal international currency, proposed in 1907 by René de Saussure and used before World War I by a few British and Swiss banks, primarily the Ĉekbanko Esperantista.
The spesmilo was equivalent to one thousand spesoj, and worth a fixed quantity of pure gold (0.8 grams of 22-karat gold), which at the time was about one-half United States dollar, two shillings (one-tenth of a pound sterling) in Britain, one Russian ruble, or a corresponding number of Swiss francs. As of 6 November 2022, that quantity of gold would be worth about US$43.50, £38 sterling, €44, ₽2692 Russian roubles, and SFr 43 Swiss francs.
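The "about one-half United States dollar" figure can be reproduced arithmetically, assuming the classical gold-standard price of roughly $20.67 per troy ounce; that price is an assumption supplied here for illustration, not a figure stated in the article:

```python
GRAMS_PER_TROY_OUNCE = 31.1035
GOLD_PRICE_1907_USD = 20.67        # assumed classical gold-standard price per troy oz

def spesmilo_usd(alloy_grams=0.8, karat=22):
    """Dollar value of the spesmilo's gold content (illustrative helper)."""
    pure_gold_g = alloy_grams * karat / 24      # 22-karat alloy -> pure gold mass
    return pure_gold_g * GOLD_PRICE_1907_USD / GRAMS_PER_TROY_OUNCE

print(round(spesmilo_usd(), 2))    # 0.49 -- about half a dollar
```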
The basic unit, the speso (from Italian spesa or German Spesen; spesmilo is Esperanto for "a thousand pennies"), was purposely made very small to avoid fractions.
Sign
The spesmilo sign, called spesmilsigno in Esperanto, is a monogram of a cursive capital "S", from whose tail emerges an "m". The currency sign is often typeset as the separate letters Sm.
In Unicode, the character (U+20B7 ₷ SPESMILO SIGN) has been assigned since version 5.2.
Miscellaneous
The stelo was another currency unit used by the Universala Ligo from 1942 to the 1990s.
An Esperanto version of the board game Monopoly uses play money in denominations of spesmiloj. |
https://en.wikipedia.org/wiki/S416 | S416 (GTPL-11164) is a drug which acts as a selective inhibitor of the enzyme dihydroorotate dehydrogenase (DHODH). This enzyme is involved in the synthesis of pyrimidine nucleosides in the body, which are required for the synthesis of DNA and RNA. This is an important rate-limiting step in the replication of viruses, and so DHODH inhibitors may have applications as broad-spectrum antiviral drugs. In tests in vitro, S416 was found to have antiviral activity against a range of pathogenic RNA viruses including influenza, Zika virus, Ebola virus and SARS-CoV-2.
See also
Brequinar
Teriflunomide
Leflunomide |
https://en.wikipedia.org/wiki/Halorubrum%20lacusprofundi | Halorubrum lacusprofundi is a rod-shaped, halophilic Archaeon in the family of Halorubraceae. It was first isolated from Deep Lake in Antarctica in the 1980s.
Genome
Several strains of H. lacusprofundi have been discovered. The genome sequencing of the strain ACAM 32 was completed in 2008. The organism's genome consists of two circular chromosomes and a single circular plasmid. Chromosome I contains 2,735,295 base pairs encoding 2,801 genes and chromosome II contains 525,943 base pairs encoding 522 genes. The single plasmid contains 431,338 base pairs encoding 402 genes. At least one strain of H. lacusprofundi (R1S1) contains a plasmid (pR1SE) that enables horizontal gene transfer, which takes place via a mechanism that uses vesicle-enclosed virus-like particles.
Research
Its β-galactosidase enzyme has been extensively studied to understand how proteins function in low-temperature, high-saline environments. |
https://en.wikipedia.org/wiki/Indigenous%20bundle | In mathematics, an indigenous bundle on a Riemann surface is a fiber bundle with a flat connection associated to some complex projective structure. Indigenous bundles were introduced by Gunning. Indigenous bundles for curves over p-adic fields were introduced by Mochizuki in his study of p-adic Teichmüller theory.
https://en.wikipedia.org/wiki/Science%20by%20press%20conference | Science by press conference or science by press release is the practice by which scientists put an unusual focus on publicizing results of research in the news media via press conferences or press releases. The term is usually used disparagingly, to suggest that the seekers of publicity are promoting claims of questionable scientific merit, using the media for attention as they are unlikely to win the approval of the scientific community.
Premature publicity violates a cultural value of most of the scientific community, which is that findings should be subjected to independent review with a "thorough examination by the scientific community" before they are widely publicized. The standard practice is to publish a paper in a peer-reviewed scientific journal. This idea has many merits, including that the scientific community has a responsibility to conduct itself in a deliberative, non-attention seeking way; and that its members should be oriented more towards the pursuit of insight than fame. Science by press conference in its most egregious forms can be undertaken on behalf of an individual researcher seeking fame, a corporation seeking to sway public opinion or investor perception, or a political or ideological movement.
History of the term
The phrase was coined by Spyros Andreopoulos, a public affairs officer at Stanford University Medical School, in a 1980 letter which appeared in the New England Journal of Medicine. Andreopoulos was commenting specifically on the publicity practices of biotechnology startups, including Biogen and Genentech. The journal in which it appeared had implemented a long-standing policy under editor Franz J. Ingelfinger which prohibited seeking publicity for research prior to its submission or publication, informally called the Ingelfinger Rule.
Notable examples of science by press conference
In 1989, chemists Stanley Pons and Martin Fleischmann held a press conference to claim they had successfully achieved cold fusion.
https://en.wikipedia.org/wiki/No-observed-adverse-effect%20level | The no-observed-adverse-effect level (NOAEL) denotes the level of exposure of an organism, found by experiment or observation, at which there is no biologically or statistically significant increase in the frequency or severity of any adverse effects of the tested protocol. In drug development, the NOAEL of a new drug is assessed in laboratory animals, such as mice, prior to initiation of human trials in order to establish a safe clinical starting dose in humans. The OECD publishes guidelines for Preclinical Safety Assessments, in order to help scientists discover the NOAEL.
Synopsis
Some adverse effects in the exposed population when compared to its appropriate control might include alteration of morphology, functional capacity, growth, development or life span. The NOAEL is determined or proposed by qualified personnel, often a pharmacologist or a toxicologist.
The NOAEL could be defined as "the highest experimental point that is without adverse effect," meaning that under laboratory conditions it is the level at which no side-effects are observed. This definition, however, neither captures the effects of the drug with respect to duration and dose nor addresses the interpretation of risk based on toxicologically relevant effects.
In toxicology it is specifically the highest tested dose or concentration of a substance (i.e. a drug or chemical) or agent (e.g. radiation), at which no such adverse effect is found in exposed test organisms where higher doses or concentrations resulted in an adverse effect.
The NOAEL level may be used in the process of establishing a dose-response relationship, a fundamental step in most risk assessment methodologies.
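As a toy illustration only (in practice the NOAEL is proposed by qualified toxicologists, not by a single statistical rule), one can sketch the idea of "the highest tested dose with no significant increase in adverse effects over control". The dose groups, the two-proportion z-test, and the function names below are all hypothetical assumptions:

```python
import math

def two_prop_p(x1, n1, x2, n2):
    """Two-sided pooled z-test p-value for a difference of two proportions."""
    p = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = abs(x1 / n1 - x2 / n2) / se
    return math.erfc(z / math.sqrt(2))

def noael(groups, alpha=0.05):
    """groups: list of (dose, affected, n); dose 0 is the control group.
    Returns the highest dose with no significant increase over control,
    or None if even the lowest tested dose shows a significant effect."""
    control = next(g for g in groups if g[0] == 0)
    best = None
    for dose, x, n in sorted(groups):
        if dose == 0:
            continue
        elevated = x / n > control[1] / control[2]
        if not elevated or two_prop_p(x, n, control[1], control[2]) >= alpha:
            best = dose  # no significant adverse increase at this dose
        else:
            break        # first significant dose; stop here
    return best

# Hypothetical study: incidence rises sharply only at the top dose.
groups = [(0, 1, 50), (10, 2, 50), (50, 3, 50), (250, 20, 50)]
print(noael(groups))  # 50
```

The early exit reflects that the NOAEL is defined relative to the ordered dose series: doses above the lowest significant dose are not counted even if noise made one of them look clean.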
Synonyms
The NOAEL is also known as NOEL (no-observed-effect level) as well as NEC (no-effect concentration) and NOEC (no-observed-effect concentration).
US EPA definition
The United States Environmental Protection Agency defines NOAEL as 'an exposure level at which there are no statistically or biologically significant increase |
https://en.wikipedia.org/wiki/List%20of%20widget%20toolkits | This article provides a list of widget toolkits (also known as GUI frameworks), used to construct the graphical user interface (GUI) of programs, organized by their relationships with various operating systems.
Low-level widget toolkits
Integrated in the operating system
macOS uses Cocoa. Mac OS 9 and macOS used to use Carbon for 32-bit applications.
The Windows API is used in Microsoft Windows. Microsoft had the graphics functions integrated in the kernel until 2006.
The Haiku operating system uses an extended and modernised version of the Be API that was used by its predecessor BeOS. Haiku Inc. is expected to drop binary and source compatibility with BeOS at some future time, which will result in a Haiku API.
As a separate layer on top of the operating system
The X Window System contains primitive building blocks, called Xt or "Intrinsics", but they are mostly only used by older toolkits such as: OLIT, Motif and Xaw. Most contemporary toolkits, such as GTK or Qt, bypass them and use Xlib or XCB directly.
The Amiga OS Intuition was formerly present in the Amiga Kickstart ROM and integrated itself with a medium-high level widget library which invoked the Workbench Amiga native GUI. Since Amiga OS 2.0, Intuition.library became disk based and object oriented. Also Workbench.library and Icon.library became disk based, and could be replaced with similar third-party solutions.
Since 2005, Microsoft has taken the graphics system out of Windows' kernel.
High-level widget toolkits
OS dependent
On Amiga
BOOPSI (Basic Object Oriented Programming System for Intuition) was introduced with OS 2.0 and enhanced Intuition with a system of classes in which every class represents a single widget or describes an interface event. This led to an evolution in which third-party developers each realised their own personal systems of classes.
MUI: object-oriented GUI toolkit and the official toolkit for MorphOS.
ReAction: object-oriented GUI toolkit and the official toolkit for A |
https://en.wikipedia.org/wiki/Spinalis | The spinalis is a portion of the erector spinae, a bundle of muscles and tendons, located nearest to the spine. It is divided into three parts: Spinalis dorsi, spinalis cervicis, and spinalis capitis.
Spinalis dorsi
Spinalis dorsi, the medial continuation of the sacrospinalis, is scarcely separable as a distinct muscle. It is situated at the medial side of the longissimus dorsi, and is intimately blended with it; it arises by three or four tendons from the spinous processes of the first two lumbar and the last two thoracic vertebrae: these, uniting, form a small muscle which is inserted by separate tendons into the spinous processes of the upper thoracic vertebrae, the number varying from four to eight.
It is intimately united with the semispinalis dorsi, situated beneath it.
Spinalis cervicis
Spinalis cervicis, or spinalis colli, is an inconstant muscle, which arises from the lower part of the nuchal ligament, the spinous process of the seventh cervical, and sometimes from the spinous processes of the first and second thoracic vertebrae, and is inserted into the spinous process of the axis, and occasionally into the spinous processes of the two cervical vertebrae below it.
Spinalis capitis
Spinalis capitis (biventer cervicis) is usually inseparably connected with the semispinalis capitis.
Spinalis capitis is not well characterized in modern anatomy textbooks and atlases, and is often omitted from anatomical illustration. However, it can be identified as fibers that extend from the spinous processes of TV1 and CV7 to the cranium, often blending with semispinalis capitis.
See also
Iliocostalis
Longissimus
Semispinalis muscle |
https://en.wikipedia.org/wiki/Natural%20frequency | Natural frequency, also known as eigenfrequency, is the frequency at which a system tends to oscillate in the absence of any driving force.
The motion pattern of a system oscillating at its natural frequency is called the normal mode (if all parts of the system move sinusoidally with that same frequency).
If the oscillating system is driven by an external force at the frequency at which the amplitude of its motion is greatest (close to a natural frequency of the system), this frequency is called the resonant frequency.
Overview
Free vibrations of an elastic body, also called natural vibrations, occur at the natural frequency. Natural vibrations are different from forced vibrations which happen at the frequency of an applied force (forced frequency). If the forced frequency is equal to the natural frequency, the vibrations' amplitude increases manyfold. This phenomenon is known as resonance.
In analysis of systems, it is convenient to use the angular frequency ω = 2πf rather than the frequency f, or the complex frequency domain parameter s = σ + jω.
In a mass–spring system, with mass m and spring stiffness k, the natural angular frequency can be calculated as ω₀ = √(k/m).
In an electrical network, ω is a natural angular frequency of a response function f(t) if the Laplace transform F(s) of f(t) includes the term K e^(st), where s = σ + jω for a real σ, and K ≠ 0 is a constant. Natural frequencies depend on network topology and element values but not their input. It can be shown that the set of natural frequencies in a network can be obtained by calculating the poles of all impedance and admittance functions of the network. A pole of the network transfer function is associated with a natural angular frequency of the corresponding response variable; however, there may exist some natural angular frequency that does not correspond to a pole of the network function. These occur only for some special initial states.
In LC and RLC circuits, the natural angular frequency can be calculated as ω₀ = 1/√(LC).
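As a numerical sketch of these two standard formulas, ω₀ = √(k/m) for the mass–spring system and ω₀ = 1/√(LC) for the LC circuit (the function names and component values below are illustrative choices, not from the source):

```python
import math

def mass_spring_omega(k, m):
    """Natural angular frequency of a mass-spring system, omega0 = sqrt(k/m), in rad/s."""
    return math.sqrt(k / m)

def lc_omega(inductance, capacitance):
    """Undamped natural angular frequency of an LC (or series RLC) circuit,
    omega0 = 1/sqrt(L*C), in rad/s."""
    return 1.0 / math.sqrt(inductance * capacitance)

omega = mass_spring_omega(k=100.0, m=1.0)
print(omega)                  # 10.0 rad/s
print(omega / (2 * math.pi))  # the corresponding frequency f0 in Hz
print(lc_omega(1e-3, 1e-6))   # about 31623 rad/s for L = 1 mH, C = 1 uF
```

Note that ω₀ is in radians per second; dividing by 2π converts to the ordinary frequency f₀ in hertz.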
See also
Fundamental frequency
|
https://en.wikipedia.org/wiki/Small%20interfering%20RNA | Small interfering RNA (siRNA), sometimes known as short interfering RNA or silencing RNA, is a class of double-stranded, non-coding RNA molecules, typically 20–24 (normally 21) base pairs in length, similar to miRNA, and operating within the RNA interference (RNAi) pathway. It interferes with the expression of specific genes with complementary nucleotide sequences by degrading mRNA after transcription, preventing translation.
Structure
Naturally occurring siRNAs have a well-defined structure that is a short (usually 20 to 24-bp) double-stranded RNA (dsRNA) with phosphorylated 5' ends and hydroxylated 3' ends with two overhanging nucleotides.
The Dicer enzyme catalyzes production of siRNAs from long dsRNAs and small hairpin RNAs. siRNAs can also be introduced into cells by transfection. Since in principle any gene can be knocked down by a synthetic siRNA with a complementary sequence, siRNAs are an important tool for validating gene function and drug targeting in the post-genomic era.
History
In 1998, Andrew Fire at the Carnegie Institution for Science in Washington, DC, and Craig Mello at the University of Massachusetts in Worcester discovered the RNAi mechanism while working on gene expression in the nematode Caenorhabditis elegans. They won the Nobel Prize for their research on RNAi in 2006. siRNAs and their role in post-transcriptional gene silencing (PTGS) were discovered in plants by David Baulcombe's group at the Sainsbury Laboratory in Norwich, England, and reported in Science in 1999. Thomas Tuschl and colleagues soon reported in Nature that synthetic siRNAs could induce RNAi in mammalian cells. In 2001, the expression of a specific gene was successfully silenced by introducing chemically synthesized siRNA into mammalian cells (Tuschl et al.). These discoveries led to a surge in interest in harnessing RNAi for biomedical research and drug development. Significant developments in siRNA therapies have been made with both organic (carbon based) and in
https://en.wikipedia.org/wiki/CGNS | CGNS stands for CFD General Notation System. It is a general, portable, and extensible standard for the storage and retrieval of CFD analysis data. It consists of a collection of conventions, together with free and open software implementing those conventions. It is self-descriptive, cross-platform (also termed platform- or machine-independent), documented, and administered by an international steering committee. It is also an American Institute of Aeronautics and Astronautics (AIAA) recommended practice. The CGNS project originated in 1994 as a joint effort between Boeing and NASA, and has since grown to include many other contributing organizations worldwide. In 1999, control of CGNS was completely transferred to a public forum known as the CGNS Steering Committee. This committee is made up of international representatives from government and private industry.
The CGNS system consists of two parts: (1) a standard format (known as Standard Interface Data Structure, or SIDS) for recording the data, and (2) software that reads, writes, and modifies data in that format. The format is a conceptual entity established by the documentation; the software is a physical product supplied to enable developers to access and produce data recorded in that format.
The CGNS system is designed to facilitate the exchange of data between sites and applications, and to help stabilize the archiving of aerodynamic data. The data are stored in a compact, binary format and are accessible through a complete and extensible library of functions. The application programming interface (API) is cross-platform and can be easily implemented in C, C++, Fortran and Fortran 90 applications. A MEX interface, mexCGNS, also exists for calling the CGNS API in the high-level programming languages MATLAB and GNU Octave. An object-oriented interface, CGNS++, and a Python module, pyCGNS, also exist.
The principal target of CGNS is data normally associated with compressible viscous flow (i.e., the Navier-Stokes equations), but the st |
https://en.wikipedia.org/wiki/Annie%20Dorrington | Annie Dorrington (19 March 1866 – 21 April 1926) was an Australian artist who was known for her wildflower paintings and watercolours. She is also one of the designers of the Australian flag.
Early life
On 19 March 1866, Annie Whistler was born at Litchfield Ashe, near Southampton, England. She was the second of nine children of Richard Whistler and his wife Sarah Mills (née Vines); she had six sisters and two brothers. Richard was a tenant farmer on the Foliejon Estate and farm in Winkfield, Berkshire; the family claimed to be related to the artist James McNeill Whistler, but this has not been proven. The farm adjoined Windsor Great Park, and Annie and her sisters sometimes saw Queen Victoria being driven through the park. Annie began painting in childhood and she and her sisters enjoyed painting scenes on the banks of the Thames River.
Richard Whistler died in 1887 and a bailiff named Charles Dorrington, who later became Annie's husband, came to manage the farm. When the Whistler sisters asked their mother the name of their prospective bailiff, she replied, "It could be Ahasuerus for all I know!" As a result, Charles Dorrington was known by the nickname 'Asu' from then on, and Annie would use 'Ahasuerus' as a pseudonym when she later entered Australia's national flag competition (see below). Several years after Richard's death, Sarah emigrated to Melbourne, Victoria, with all nine of her children. Charles Dorrington accompanied them, and in 1892 Charles and Annie were married in St. Alban's Church of England in Armadale, a suburb of Melbourne. Sarah had not wanted Annie to marry Dorrington and cut her off entirely as a result. Many years later, Annie's niece Kath Dowsing would recall that her name was never mentioned in the family. In 1895, Annie and Charles moved to Western Australia; they lived at Fremantle in 1897 before they moved to Perth in 1898. Charles worked for the Swan River Shipping Company in Perth until 1914, after which he became a shire clerk at |
https://en.wikipedia.org/wiki/Crown%20graph | In graph theory, a branch of mathematics, a crown graph on 2n vertices is an undirected graph with two sets of vertices {u1, u2, …, un} and {v1, v2, …, vn}, and with an edge from ui to vj whenever i ≠ j.
The crown graph can be viewed as a complete bipartite graph Kn,n from which the edges of a perfect matching have been removed, as the bipartite double cover of a complete graph Kn, as the tensor product Kn × K2, as the complement of the Cartesian product of Kn and K2, or as a bipartite Kneser graph representing the 1-item and (n − 1)-item subsets of an n-item set, with an edge between two subsets whenever one is contained in the other.
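The definition "ui ~ vj whenever i ≠ j" can be made concrete with a brute-force construction; the ('u', i)/('v', j) vertex labels are an assumed convention for illustration, not from the source:

```python
from itertools import product

def crown_graph_edges(n):
    """Edge set of the crown graph on 2n vertices:
    ('u', i) is adjacent to ('v', j) exactly when i != j."""
    return {(('u', i), ('v', j))
            for i, j in product(range(n), repeat=2) if i != j}

edges = crown_graph_edges(4)
print(len(edges))                          # 12, i.e. n*(n-1) edges
print((('u', 0), ('v', 0)) in edges)       # False: the matching edges are removed
```

Each of the n vertices ui is adjacent to the n − 1 vertices vj with j ≠ i, so the edge count n(n − 1) falls out directly.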
Examples
The 6-vertex crown graph forms a cycle, and the 8-vertex crown graph is isomorphic to the graph of a cube.
In the Schläfli double six, a configuration of 12 lines and 30 points in three-dimensional space, the twelve lines intersect each other in the pattern of a 12-vertex crown graph.
Properties
The number of edges in a crown graph is the pronic number n(n − 1). Its achromatic number is n: one can find a complete coloring by choosing each pair {ui, vi} as one of the color classes. Crown graphs are symmetric and distance-transitive. Partitions of the edges of a crown graph into equal-length cycles have also been described.
The 2n-vertex crown graph may be embedded into four-dimensional Euclidean space in such a way that all of its edges have unit length. However, this embedding may also place some non-adjacent vertices a unit distance apart. An embedding in which edges are at unit distance and non-edges are not at unit distance requires a number of dimensions that grows with n. This example shows that a graph may require very different dimensions to be represented as a unit distance graph and as a strict unit distance graph.
The minimum number of complete bipartite subgraphs needed to cover the edges of a crown graph (its bipartite dimension, or the size of a minimum biclique cover) is the inverse function of the central binomial coefficient.
The complement graph of a -vertex crown graph is the Cartesian product of complete graph |
https://en.wikipedia.org/wiki/Local%20field | In mathematics, a field K is called a (non-Archimedean) local field if it is complete with respect to a topology induced by a discrete valuation v and if its residue field k is finite. Equivalently, a local field is a locally compact topological field with respect to a non-discrete topology. Sometimes, real numbers R, and the complex numbers C (with their standard topologies) are also defined to be local fields; this is the convention we will adopt below. Given a local field, the valuation defined on it can be of either of two types, each one corresponds to one of the two basic types of local fields: those in which the valuation is Archimedean and those in which it is not. In the first case, one calls the local field an Archimedean local field, in the second case, one calls it a non-Archimedean local field. Local fields arise naturally in number theory as completions of global fields.
While Archimedean local fields have been quite well known in mathematics for at least 250 years, the first examples of non-Archimedean local fields, the fields of p-adic numbers for positive prime integer p, were introduced by Kurt Hensel at the end of the 19th century.
Every local field is isomorphic (as a topological field) to one of the following:
Archimedean local fields (characteristic zero): the real numbers R, and the complex numbers C.
Non-Archimedean local fields of characteristic zero: finite extensions of the p-adic numbers Qp (where p is any prime number).
Non-Archimedean local fields of characteristic p (for p any given prime number): the field of formal Laurent series Fq((T)) over a finite field Fq, where q is a power of p.
In particular, of importance in number theory, classes of local fields show up as the completions of algebraic number fields with respect to their discrete valuation corresponding to one of their maximal ideals. Research papers in modern number theory often consider a more general notion, requiring only that the residue field be perfect of positi |
https://en.wikipedia.org/wiki/Rectus%20capitis%20lateralis%20muscle | The rectus capitis lateralis, a short, flat muscle, arises from the upper surface of the transverse process of the atlas, and is inserted into the under surface of the jugular process of the occipital bone.
See also
Atlanto-occipital joint
Rectus capitis posterior major muscle
Rectus capitis posterior minor muscle
Rectus capitis anterior muscle |
https://en.wikipedia.org/wiki/PA-8000 | The PA-8000 (PCX-U), code-named Onyx, is a microprocessor developed and fabricated by Hewlett-Packard (HP) that implemented the PA-RISC 2.0 instruction set architecture (ISA). It was a completely new design with no circuitry derived from previous PA-RISC microprocessors. The PA-8000 was introduced on 2 November 1995 when shipments began to members of the Precision RISC Organization (PRO). It was used exclusively by PRO members and was not sold on the merchant market. All follow-on PA-8x00 processors (PA-8200 to PA-8900, described further below) are based on the basic PA-8000 processor core.
The PA-8000 was used by:
HP in its HP 9000 workstations and servers
NEC in its TX7/P590 server
Stratus Technologies in its Continuum fault-tolerant servers
Description
The PA-8000 is a four-way superscalar microprocessor that executes instructions out of order and speculatively. These features were not found in previous PA-RISC implementations, making the PA-8000 the first PA-RISC CPU to break the tradition of attaining performance through simple microarchitectures and high clock rates.
Instruction fetch unit
The PA-8000 has a four-stage front-end. During the first two stages, four instructions are fetched from the instruction cache by the instruction fetch unit (IFU). The IFU contains the program counter, branch history table (BHT), branch target address cache (BTAC) and a four-entry translation lookaside buffer (TLB). The TLB is used to translate virtual addresses to physical addresses for accessing the instruction cache. In the event of a TLB miss, the translation is requested from the main TLB.
Branch prediction
The PA-8000 performs branch prediction using static or dynamic methods; which method is used is selected by a bit in each TLB entry. Static prediction considers most backward branches as taken and forward branches as not taken. It also predicts the outcome of branches by examining hints encoded in the instructions themselves
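The backward-taken/forward-not-taken heuristic mentioned above can be sketched in a few lines; the function name and addresses are illustrative, not the PA-8000's actual implementation:

```python
def static_predict_taken(branch_addr, target_addr):
    """Backward-taken/forward-not-taken (BTFNT) static heuristic: a branch
    whose target lies at a lower address is assumed to be a loop back-edge
    and is predicted taken; forward branches are predicted not taken."""
    return target_addr < branch_addr

print(static_predict_taken(0x1040, 0x1000))  # True: backward branch (likely a loop)
print(static_predict_taken(0x1000, 0x1080))  # False: forward branch
```

The heuristic works because loops dominate real instruction streams: a backward branch that closes a loop is taken on every iteration but the last.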
https://en.wikipedia.org/wiki/Singlet%20oxygen | Singlet oxygen, systematically named dioxygen(singlet) and dioxidene, is a gaseous inorganic chemical with the formula O=O (also written as ¹O₂), which is in a quantum state where all electrons are spin paired. It is kinetically unstable at ambient temperature, but the rate of decay is slow.
The lowest excited state of the diatomic oxygen molecule is a singlet state.
It is a gas with physical properties differing only subtly from those of the more prevalent triplet ground state of O2. In terms of its chemical reactivity, however, singlet oxygen is far more reactive toward organic compounds. It is responsible for the photodegradation of many materials but can be put to constructive use in preparative organic chemistry and photodynamic therapy. Trace amounts of singlet oxygen are found in the upper atmosphere and in polluted urban atmospheres where it contributes to the formation of lung-damaging nitrogen dioxide. It often appears and coexists confounded in environments that also generate ozone, such as pine forests with photodegradation of turpentine.
The terms 'singlet oxygen' and 'triplet oxygen' derive from each form's number of electron spins. The singlet has only one possible arrangement of electron spins with a total quantum spin of 0, while the triplet has three possible arrangements of electron spins with a total quantum spin of 1, corresponding to three degenerate states.
In spectroscopic notation, the lowest singlet and triplet forms of O2 are labeled ¹Δg and ³Σg⁻, respectively.
Electronic structure
Singlet oxygen refers to one of two singlet electronic excited states. The two singlet states are denoted ¹Σg⁺ and ¹Δg (the preceding superscript "1" indicates a singlet state). The singlet states of oxygen are 158 and 95 kilojoules per mole higher in energy than the triplet ground state of oxygen. Under most common laboratory conditions, the higher energy ¹Σg⁺ singlet state rapidly converts to the more stable, lower energy ¹Δg singlet state. This more stabl
https://en.wikipedia.org/wiki/Minnesota%20Transracial%20Adoption%20Study | The Minnesota Transracial Adoption Study examined the IQ test scores of 130 black or interracial children adopted by advantaged white families. The aim of the study was to determine the contribution of environmental and genetic factors to the poor performance of black children on IQ tests as compared to white children. The initial study was published in 1976 by Sandra Scarr and Richard A. Weinberg. A follow-up study was published in 1992 by Richard Weinberg, Sandra Scarr and Irwin D. Waldman. Another related study investigating social adjustment in a subsample of the adopted black children was published in 1996. The 1992 follow-up study found that "social environment maintains a dominant role in determining the average IQ level of black and interracial children and that both social and genetic variables contribute to individual variations among them."
Background and study design
On measures of cognitive ability (IQ tests) and school performance, black children in the U.S. have performed worse than white children. At the time of the study, the gap in average performance between the two groups of children was approximately one standard deviation, which is equivalent to about 15 IQ points or 4 grade levels at high school graduation. Thus, the average IQ score of black children in the U.S. was approximately 85, compared to the average score of white children of 100. No detectable bias due to test construction or administration had been found, although this does not rule out other biases. The gap is functionally significant, which makes it an important area of study. The Minnesota Transracial Adoption Study tried to answer whether the gap is primarily caused by genetic factors or whether it is primarily caused by environmental and cultural factors.
The study was funded by the Grant Foundation and the National Institute of Child Health and Human Development.
By examining the cognitive ability and school performance of both black and white children adopted into white |
https://en.wikipedia.org/wiki/National%20Centre%20for%20Radio%20Astrophysics | The National Centre for Radio Astrophysics (NCRA; Hindi: राष्ट्रीय रेडियो खगोल भौतिकी केन्द्र) is a research institution in India in the field of radio astronomy. It is located on the Pune University campus (beside IUCAA) and is part of the Tata Institute of Fundamental Research, Mumbai. NCRA has an active research program in many areas of astronomy and astrophysics, including studies of the Sun, interplanetary scintillation, pulsars, the interstellar medium, active galaxies and cosmology, particularly in the specialized field of radio astronomy and radio instrumentation. NCRA also provides opportunities and challenges in engineering fields such as analog and digital electronics, signal processing, antenna design, telecommunication and software development.
NCRA has set up the Giant Metrewave Radio Telescope (GMRT), the world's largest telescope operating at meter wavelengths located at Khodad, 80 km from Pune. NCRA also operates the Ooty Radio Telescope (ORT), which is a large Cylindrical Telescope located near Udhagamandalam, India.
History
The Centre has its roots in the Radio Astronomy Group of TIFR, set up in the early 1960s under the leadership of Govind Swarup. The group designed and built the Ooty Radio Telescope. In the early 1980s an ambitious plan for a new telescope was proposed: the Giant Metrewave Radio Telescope. Since the site chosen for this new telescope was close to Pune, a new home for the group was built in the scenic campus of Pune University. The radio astronomy group became the National Centre for Radio Astrophysics around this time.
Research
The National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research (NCRA-TIFR) is an institute for radio astronomy in India. Research activities at NCRA-TIFR are centered on low frequency radio astronomy, with research in a wide range of areas, including solar physics, pulsars, active galactic nuclei, the interstellar medium, supernova remnants, the G
https://en.wikipedia.org/wiki/Hilbrand%20J.%20Groenewold | Hilbrand Johannes "Hip" Groenewold (1910–1996) was a Dutch theoretical physicist who pioneered the largely operator-free formulation of quantum mechanics in phase space known as phase-space quantization.
Biography
Groenewold was born on 29 June 1910 in Muntendam in the province of Groningen. He graduated from the University of Groningen, with a major in physics and minors in mathematics and mechanics in 1934. After a visit to Cambridge to interact with John von Neumann (1934–5) on the links between classical and quantum mechanics, and a checkered career working with Frits Zernike in Groningen, then Leiden, the Hague, De Bilt, and several addresses in the North of the Netherlands during World War II, he earned his Ph.D. degree in 1946, under the tutelage of Léon Rosenfeld at Utrecht University. In 1951, he obtained a position in Groningen in theoretical physics, first as a lecturer, then as a senior lecturer, and finally as a professor in 1955. He was the initiator and organizer of the Vosbergen Conference in the Netherlands for over two decades.
His 1946 thesis paper laid the foundations of quantum mechanics in phase space, in unwitting parallel with J. E. Moyal. This treatise was the first to achieve full understanding of the Wigner–Weyl transform as an invertible transform, rather than as an unsatisfactory quantization rule. Significantly, this work further formulated and first appreciated the all-important star-product, the cornerstone of this formulation of the theory, ironically often also associated with Moyal's name, even though it is not featured in Moyal's papers and was not fully understood by Moyal.
Moreover, Groenewold first understood and demonstrated that the Moyal bracket is isomorphic to the quantum commutator, and thus that the latter cannot be made to faithfully correspond to the Poisson bracket, as had been envisioned by Paul Dirac. This observation and his counterexamples contrasting Poisson brackets to commutators have been generalize |
https://en.wikipedia.org/wiki/Skew-merged%20permutation | In the theory of permutation patterns, a skew-merged permutation is a permutation that can be partitioned into an increasing sequence and a decreasing sequence. They were first studied by Stankova and given their name by Atkinson.
Characterization
The two smallest permutations that cannot be partitioned into an increasing and a decreasing sequence are 3412 and 2143. Stankova was the first to establish that a skew-merged permutation can also be equivalently defined as a permutation that avoids the two patterns 3412 and 2143.
A permutation is skew-merged if and only if its associated permutation graph is a split graph, a graph that can be partitioned into a clique (corresponding to the descending subsequence) and an independent set (corresponding to the ascending subsequence). The two forbidden patterns for skew-merged permutations, 3412 and 2143, correspond to two of the three forbidden induced subgraphs for split graphs, a four-vertex cycle and a graph with two disjoint edges, respectively. The third forbidden induced subgraph, a five-vertex cycle, cannot exist in a permutation graph (see ).
Enumeration
For n = 1, 2, 3, ... the number of skew-merged permutations of length n is
1, 2, 6, 22, 86, 340, 1340, 5254, 20518, 79932, 311028, 1209916, 4707964, 18330728, ... .
was the first to show that the generating function of these numbers is
from which it follows that the number of skew-merged permutations of length is given by the formula
and that these numbers obey the recurrence relation
Another derivation of the generating function for skew-merged permutations was given by .
Computational complexity
Testing whether one permutation is a pattern in another can be solved efficiently when the larger of the two permutations is skew-merged, as shown by . |
https://en.wikipedia.org/wiki/Joseph%20L.%20Ullman | Joseph Leonard Ullman (30 January 1923, in Buffalo, New York – 11 September 1995, in Chelsea, Michigan) was a mathematician who worked on classical analysis with a focus on approximation theory.
Ullman received his A.B. from the University of Buffalo and his graduate studies were interrupted by service in the U.S. Army in World War II. He was injured, received a Purple Heart, and spent the rest of the war as a mathematics instructor. He received a Ph.D. in 1949 from Stanford University with thesis Studies on Faber Polynomials under the direction of Gábor Szegő. Ullman became an instructor at the University of Michigan in 1949, an assistant professor in 1954, an associate professor in 1962, and a professor in 1966.
He wrote forty-three research papers. During his career at the University of Michigan he supervised eleven doctoral theses.
Selected works
with R. C. Lyndon:
with Matthew F. Wyneken:
with Vilmos Totik: |
https://en.wikipedia.org/wiki/Locally%20normal%20space | In mathematics, particularly topology, a topological space X is locally normal if intuitively it looks locally like a normal space. More precisely, a locally normal space satisfies the property that each point of the space belongs to a neighbourhood of the space that is normal under the subspace topology.
Formal definition
A topological space X is said to be locally normal if and only if each point, x, of X has a neighbourhood that is normal under the subspace topology.
Note that not every neighbourhood of x has to be normal, but at least one neighbourhood of x has to be normal (under the subspace topology).
Note, however, that if a space were called locally normal if and only if each point of the space belonged to a subset of the space that was normal under the subspace topology, then every topological space would be locally normal. This is because the singleton {x} is vacuously normal and contains x. Therefore, the definition is more restrictive.
Examples and properties
Every locally normal T1 space is locally regular and locally Hausdorff.
A locally compact Hausdorff space is always locally normal.
A normal space is always locally normal.
A T1 space need not be locally normal as the set of all real numbers endowed with the cofinite topology shows.
See also
Further reading |
https://en.wikipedia.org/wiki/Elliot%27s%20bird-of-paradise | Elliot's bird of paradise is a bird in the family Paradisaeidae, first described by Edward Ward in 1873. The bird is presumed by some ornithologists to be an intergeneric hybrid between a black sicklebill and Arfak astrapia. This assumption was made by the German ornithologist Erwin Stresemann who had also dismissed other new species of birds of paradise as hybrids. Other ornithologists dispute this claim. Errol Fuller argues that the Astrapia is a fanciful choice made with little supporting evidence, and that Elliot's Bird of Paradise is much smaller than the two proposed parent species. The specimens show a number of characteristics not present in either parent species, adding weight to the possibility of the specimens constituting a unique species. Stresemann had previously used the A. nigra x E. fastuosus explanation for the astrapian sicklebill as well.
History
Only two adult male specimens are known of this bird, held in the British Natural History Museum and the Dresden Natural History Museum, and presumably deriving from the Vogelkop Peninsula of north-western New Guinea. In 1930, Stresemann inspected both specimens and declared Elliot's Bird of Paradise to be a hybrid rather than a “real” species. Ornithologists have continued to express doubts about the bird's taxonomic status, however. As recently as 2012, Julian Hume and Michael Walters suggested that it is likely the elusive bird is either rare or extinct.
Notes |
https://en.wikipedia.org/wiki/Particle-induced%20X-ray%20emission | Particle-induced X-ray emission or proton-induced X-ray emission (PIXE) is a technique used for determining the elemental composition of a material or a sample. When a material is exposed to an ion beam, atomic interactions occur that give off EM radiation of wavelengths in the x-ray part of the electromagnetic spectrum specific to an element. PIXE is a powerful yet non-destructive elemental analysis technique now used routinely by geologists, archaeologists, art conservators and others to help answer questions of provenance, dating and authenticity.
The technique was first proposed in 1970 by Sven Johansson of Lund University, Sweden, and developed over the next few years with his colleagues Roland Akselsson and Thomas B Johansson.
Recent extensions of PIXE using tightly focused beams (down to 1 μm) give the additional capability of microscopic analysis. This technique, called microPIXE, can be used to determine the distribution of trace elements in a wide range of samples. A related technique, particle-induced gamma-ray emission (PIGE), can be used to detect some light elements.
Theory
Three types of spectra can be collected from a PIXE experiment:
X-ray emission spectrum.
Rutherford backscattering spectrum.
Proton transmission spectrum.
X-ray emission
Quantum theory states that orbiting electrons of an atom must occupy discrete energy levels in order to be stable. Bombardment with ions of sufficient energy (usually MeV protons) produced by an ion accelerator, will cause inner shell ionization of atoms in a specimen. Outer shell electrons drop down to replace inner shell vacancies, however only certain transitions are allowed. X-rays of a characteristic energy of the element are emitted. An energy dispersive detector is used to record and measure these X-rays.
Only elements heavier than fluorine can be detected. The lower detection limit for a PIXE beam is given by the ability of the X-rays to pass through the window between the chamber and the X-ray det |
https://en.wikipedia.org/wiki/BED%20%28file%20format%29 | The BED (Browser Extensible Data) format is a text file format used to store genomic regions as coordinates and associated annotations. The data are presented in the form of columns separated by spaces or tabs. This format was developed during the Human Genome Project and then adopted by other sequencing projects. As a result of this increasingly wide use, this format had already become a de facto standard in bioinformatics before a formal specification was written.
One of the advantages of this format is the manipulation of coordinates instead of nucleotide sequences, which optimizes the power and computation time when comparing all or part of genomes. In addition, its simplicity makes it easy to manipulate and parse coordinates or annotations using text-processing and scripting languages such as Python, Ruby or Perl, or more specialized tools such as BEDTools.
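As an illustration of working with coordinates rather than sequences, here is a minimal, hypothetical BED3 parser and interval-overlap check in Python. It assumes the half-open, 0-based coordinate convention of the UCSC description; all function names are illustrative:

```python
def parse_bed(lines):
    """Parse minimal BED3 records: chrom, chromStart, chromEnd (0-based, half-open)."""
    records = []
    for line in lines:
        line = line.strip()
        # Skip blanks, comments and UCSC header lines.
        if not line or line.startswith(("#", "track", "browser")):
            continue
        fields = line.split()  # the format allows spaces or tabs as separators
        chrom, start, end = fields[0], int(fields[1]), int(fields[2])
        records.append((chrom, start, end))
    return records

def overlaps(a, b):
    """Do two half-open intervals on the same chromosome overlap?"""
    return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

regions = parse_bed(["chr1\t100\t200", "chr1 150 250", "chr2\t0\t50"])
```

Comparing regions this way needs only integer arithmetic, which is why coordinate-based tools scale well compared with sequence comparison.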
History
The end of the 20th century saw the emergence of the first projects to sequence complete genomes. Among these projects, the Human Genome Project was the most ambitious at the time, aiming to sequence for the first time a genome of several gigabases. This required the sequencing centres to carry out major methodological development in order to automate the processing of sequences and their analyses. Thus, many formats were created, such as FASTQ, GFF or BED. However, no official specifications were published at the time, which affected some formats such as FASTQ when sequencing projects multiplied at the beginning of the 21st century.
Its wide use within genome browsers has made it possible to define this format in a relatively stable way as this description is used by many tools.
Format
Initially the BED format did not have any official specification. Instead, the description provided by the UCSC Genome Browser has been widely used as a reference.
A formal BED specification was published in 2021 under the auspices of the Global Alliance for Genomics and Health.
Description
A BED f |
https://en.wikipedia.org/wiki/GLPi | GLPI (acronym: , or "Free IT Equipment Manager" in English) is an open source IT Asset Management, issue tracking system and service desk system. This software is written in PHP and distributed as open-source software under the GNU General Public License.
GLPI is a web-based application helping companies to manage their information system. The solution is able to build an inventory of all the organization's assets and to manage administrative and financial tasks. The system's functionalities help IT administrators to create a database of technical resources, as well as management and a history of maintenance actions. Users can declare incidents or requests (based on an asset or not) thanks to the Helpdesk feature.
History
The community-based GLPI project started in 2003 and was directed by the INDEPNET association. Through the years, GLPI became widely used by both communities and companies, leading to a need for professional services around the system. Whereas the INDEPNET Association did not intend to offer services around the software, in 2008 the Association created a Partners' Network in order to achieve various objectives:
The first objective was to build an ecosystem where Partners could participate in GLPI Project. Secondly, Partners would financially support the association, in order to ensure the necessary software development. And finally, the ecosystem would guarantee a service delivery through a known and identified Network, directly connected to INDEPNET.
In 2009, Teclib’ started to integrate the software, developed the GLPI code and implemented new features. During summer 2015, the GLPI community's leaders decided to transfer the roadmap management and the development leadership to Teclib’, so that Teclib’ became the editor of the GLPI system, ensuring the software's R&D.
The code remains under a GPL license and keeps its open source nature. The GLPI system continues to be improved thanks to the co-partnership between the community and the editor.
Timel |
https://en.wikipedia.org/wiki/CLDN17 | Claudin-17 is a protein that in humans is encoded by the CLDN17 gene. It belongs to the group of claudins, cell–cell junction proteins that maintain cell- and tissue-barrier function. It forms anion-selective paracellular channels and is localized mainly in kidney proximal tubules. |
https://en.wikipedia.org/wiki/Registry%20of%20Standard%20Biological%20Parts | The Registry of Standard Biological Parts is a collection of genetic parts that are used in the assembly of systems and devices in synthetic biology. The registry was founded in 2003 at the Massachusetts Institute of Technology. The registry, as of 2018, contains over 20,000 parts. Recipients of the genetic parts include academic labs, established scientists, and student teams participating in the iGEM Foundation's annual synthetic biology competition.
The Registry of Standard Biological Parts conforms to the BioBrick standard, a standard for interchangeable genetic parts. BioBrick was developed by a nonprofit composed of researchers from MIT, Harvard, and UCSF. The registry offers genetic parts with the expectation that recipients will contribute data and new parts to improve the resource. The registry records and indexes biological parts and offers services including the synthesis and assembly of biological parts, systems, and devices.
The registry offers many types of biological parts, including DNA, plasmids, plasmid backbones, primers, promoters, protein coding sequences, protein domains, ribosomal binding sites, terminators, translational units, riboregulators, and composite parts. It also includes devices such as protein generators, reporters, inverters, receptors, senders, and measurement devices. A key idea that motivated the development of the Registry was to develop an abstraction hierarchy implemented through the parts categorization system.
The registry has previously received external funding through grants from the National Science Foundation, the Defense Advanced Research Projects Agency, and the National Institutes of Health.
See also
Synthetic biology |
https://en.wikipedia.org/wiki/Layer%20group | In mathematics, a layer group is a three-dimensional extension of a wallpaper group, with reflections in the third dimension. It is a space group with a two-dimensional lattice, meaning that it is symmetric over repeats in the two lattice directions. The symmetry group at each lattice point is an axial crystallographic point group with the main axis being perpendicular to the lattice plane.
Table of the 80 layer groups, organized by crystal system or lattice type, and by their point groups:
See also
Point group
Crystallographic point group
Space group
Rod group
Frieze group
Wallpaper group |
https://en.wikipedia.org/wiki/Cray%20T3E | The Cray T3E was Cray Research's second-generation massively parallel supercomputer architecture, launched in late November 1995. The first T3E was installed at the Pittsburgh Supercomputing Center in 1996. Like the previous Cray T3D, it was a fully distributed memory machine using a 3D torus topology interconnection network. The T3E initially used the DEC Alpha 21164 (EV5) microprocessor and was designed to scale from 8 to 2,176 Processing Elements (PEs). Each PE had between 64 MB and 2 GB of DRAM and a 6-way interconnect router with a payload bandwidth of 480 MB/s in each direction. Unlike many other MPP systems, including the T3D, the T3E was fully self-hosted and ran the UNICOS/mk distributed operating system with a GigaRing I/O subsystem integrated into the torus for network, disk and tape I/O.
The original T3E (retrospectively known as the T3E-600) had a 300 MHz processor clock. Later variants, using the faster 21164A (EV56) processor, comprised the T3E-900 (450 MHz), T3E-1200 (600 MHz), T3E-1200E (with improved memory and interconnect performance) and T3E-1350 (675 MHz). The T3E was available in both air-cooled (AC) and liquid-cooled (LC) configurations. AC systems were available with 16 to 128 user PEs, LC systems with 64 to 2048 user PEs.
A 1480-processor T3E-1200 was the first supercomputer to achieve a performance of more than 1 teraflops running a computational science application, in 1998.
After Cray Research was acquired by Silicon Graphics in February 1996, development of new Alpha-based systems was stopped. While providing the -900, -1200 and -1200E upgrades to the T3E, in the long term Silicon Graphics intended Cray T3E users to migrate to the Origin 3000, a MIPS-based distributed shared memory computer, introduced in 2000. However, the T3E continued in production after SGI sold the Cray business the same year.
See also
History of supercomputing |
https://en.wikipedia.org/wiki/TPM%20domain | The TPM domain family is named after the three founding proteins TLP18.3, Psb32 and MOLO-1. TPM domains have a characteristic fold (αβαβαββαα or βαβαββαα) composed of α helices (3+3 or 2+3) flanking four central β strands. The TPM fold has not been found in other protein domains to date. TPM was previously referred to as "DUF477" and "Repair_PSII".
In plants, the TPM domain-containing proteins TLP18.3 and Psb32 have been implicated in the photosystem II (PSII) repair cycle. The domain may be involved in the regulation of synthesis and degradation of the D1 protein of the PSII core and in the assembly of PSII monomers into dimers in the grana stacks.
In the model nematode C. elegans, the MOLO-1 protein is an auxiliary subunit that positively modulates the gating of levamisole-sensitive acetylcholine receptors. |
https://en.wikipedia.org/wiki/RELX | RELX plc (pronounced "Rel-ex") is a British multinational information and analytics company headquartered in London, England. Its businesses provide scientific, technical and medical information and analytics; legal information and analytics; decision-making tools; and organise exhibitions. It operates in 40 countries and serves customers in over 180 nations. It was previously known as Reed Elsevier, and came into being in 1993 as a result of the merger of Reed International, a British trade book and magazine publisher, and Elsevier, a Netherlands-based scientific publisher.
The company is publicly listed, with shares traded on the London Stock Exchange, Amsterdam Stock Exchange and New York Stock Exchange (ticker symbols: London: REL, Amsterdam: REN, New York: RELX). The company is one of the constituents of the FTSE 100 Index, AEX Index, Financial Times Global 500 and Euronext 100 Index.
History
The company, which was previously known as Reed Elsevier, came into being in 1993, as a result of the merger of Reed International, a British trade book and magazine publisher, and Elsevier, a Netherlands-based scientific publisher. The company re-branded itself as RELX in February 2015.
Reed International
In 1895, Albert E. Reed established a newsprint manufacturing operation at Tovil Mill near Maidstone, Kent. The Reed family were Methodists and encouraged good working conditions for their staff in the then-dangerous print trade.
In 1965, Reed Group, as it was then known, became a conglomerate, creating its Decorative Products Division with the purchase of Crown Paints, Polycell and Sanderson's wallpaper and DIY decorating interests.
In 1970, Reed Group merged with the International Publishing Corporation and the company name was changed to Reed International Limited. The company continued to grow by merging with other publishers and produced high quality trade journals as IPC Business Press Ltd and women's and other consumer magazines as IPC magazines Ltd. Reed ent |
https://en.wikipedia.org/wiki/History%20of%20plant%20systematics | The history of plant systematics—the biological classification of plants—stretches from the work of ancient Greek to modern evolutionary biologists. As a field of science, plant systematics came into being only slowly, early plant lore usually being treated as part of the study of medicine. Later, classification and description was driven by natural history and natural theology. Until the advent of the theory of evolution, nearly all classification was based on the scala naturae. The professionalization of botany in the 18th and 19th century marked a shift toward more holistic classification methods, eventually based on evolutionary relationships.
Antiquity
The peripatetic philosopher Theophrastus (372–287 BC), as a student of Aristotle in Ancient Greece, wrote Historia Plantarum, the earliest surviving treatise on plants, where he listed the names of over 500 plant species. He did not articulate a formal classification scheme, but relied on the common groupings of folk taxonomy combined with growth form: tree; shrub; undershrub; or herb.
The De Materia Medica of Dioscorides was an important early compendium of plant descriptions (over five hundred), classifying plants chiefly by their medicinal effects.
Medieval
The Byzantine emperor Constantine VII sent a copy of Dioscorides' pharmacopeia to the Umayyad Caliph Abd al-Rahman III, who ruled Córdoba in the 10th century, and also sent a monk named Nicolas to translate the book into Arabic. It was in use from its publication in the 1st century until the 16th century, making it one of the major herbals throughout the Middle Ages. The taxonomy criteria of medieval texts are different from what is used today. Plants with similar external appearance were usually grouped under the same species name, though in modern taxonomy they are considered different.
Abū l-Khayr's botanical work is the most complete Andalusi botanical text known to modern scholars. It is noted for its detailed descriptions of plant morphology and phe |
https://en.wikipedia.org/wiki/Denavit%E2%80%93Hartenberg%20parameters | In mechanical engineering, the Denavit–Hartenberg parameters (also called DH parameters) are the four parameters associated with a particular convention for attaching reference frames to the links of a spatial kinematic chain, or robot manipulator.
Jacques Denavit and Richard Hartenberg introduced this convention in 1955 in order to standardize the coordinate frames for spatial linkages.
Richard Paul demonstrated its value for the kinematic analysis of robotic systems in 1981.
While many conventions for attaching reference frames have been developed, the Denavit–Hartenberg convention remains a popular approach.
Denavit–Hartenberg convention
A commonly used convention for selecting frames of reference in robotics applications is the Denavit and Hartenberg (D–H) convention which was introduced by Jacques Denavit and Richard S. Hartenberg. In this convention, coordinate frames are attached to the joints between two links such that one transformation is associated with the joint, , and the second is associated with the link . The coordinate transformations along a serial robot consisting of links form the kinematics equations of the robot,
where is the transformation locating the end-link.
In order to determine the coordinate transformations and , the joints connecting the links are modeled as either hinged or sliding joints, each of which have a unique line in space that forms the joint axis and define the relative movement of the two links. A typical serial robot is characterized by a sequence of six lines , one for each joint in the robot. For each sequence of lines and , there is a common normal line . The system of six joint axes and five common normal lines form the kinematic skeleton of the typical six degree of freedom serial robot. Denavit and Hartenberg introduced the convention that z-coordinate axes are assigned to the joint axes and x-coordinate axes are assigned to the common normals .
This convention allows the definition of the moveme |
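As a sketch of how the convention is used in practice, the single-joint homogeneous transform can be assembled from the four DH parameters (rotation θ and translation d along z for the joint, translation a and rotation α along x for the link). The two-link planar arm composed at the end is a hypothetical illustration, not an example from the original text:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous 4x4 transform for one link in the classic DH convention:
    rotate theta about z, translate d along z, translate a along x,
    rotate alpha about x."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Forward kinematics of a planar two-link arm with unit link lengths:
# first joint rotated +90 degrees, second joint rotated -90 degrees.
T = dh_transform(np.pi / 2, 0.0, 1.0, 0.0) @ dh_transform(-np.pi / 2, 0.0, 1.0, 0.0)
end_effector = T[:3, 3]  # position of the end link's origin, here (1, 1, 0)
```

Chaining one such matrix per joint, as in the product above, is exactly the kinematics equation described in the text.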
https://en.wikipedia.org/wiki/Active%20Archive%20Alliance | The Active Archive Alliance is a trade association that promotes a method of tiered storage. This method provides users access to data across a virtual file system that migrates data between multiple storage systems and media types including solid-state drive/flash, hard disk drives, magnetic tape, optical disk, and cloud. The result of an active archive implementation is that data can be stored on the most appropriate media type for the given retention and restoration requirements of that data. This allows less time sensitive or infrequently accessed data to be stored on less expensive media and eliminates the need for an administrator to manually migrate data between storage systems. Additionally, since storage systems such as tape libraries have low power consumption, the operational expense of storing data in an active archive is significantly reduced.
Active archives provide organizations with a persistent view of the data in their archives and make it easy to access files whenever needed. Active archives take advantage of metadata to keep track of where primary, secondary, and tertiary copies of data reside within the system so as to maintain online accessibility to any given file in a file system, regardless of the storage medium being utilized. The impetus for active archive applications, or the software involved in an active archive, was the growing amount of unstructured data in the typical data center and the need to be able to manage and efficiently store that data. As a result, active archive applications tend to be focused on file systems and unstructured data, rather than all collective data; however, many have features and functions that address traditional backup needs as well.
Active archives provide online access, searchability and retrieval of long-term data and enable virtually unlimited scalability to accommodate future growth. In addition, active archives enhance the business value of the data by enabling users to directly access the data o |
https://en.wikipedia.org/wiki/Role-oriented%20programming | Role-oriented programming as a form of computer programming aims at expressing things in terms that are analogous to human conceptual understanding of the world. This should make programs easier to understand and maintain.
The main idea of role-oriented programming is that humans think in terms of roles. This claim is often backed up by examples of social relations. For example, a student attending a class and the same student at a party are the same person, yet that person plays two different roles. In particular, the interactions of this person with the outside world depend on their current role. The roles typically share features, e.g., the intrinsic properties of being a person. This sharing of properties is often handled by the delegation mechanism.
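The student/party example and the delegation mechanism can be sketched in plain Python. The class names are illustrative, and attribute delegation via `__getattr__` stands in for the built-in role mechanisms of dedicated role-oriented languages:

```python
class Person:
    """Core object holding the intrinsic properties shared by all roles."""
    def __init__(self, name):
        self.name = name

class Role:
    """A role delegates unknown attribute lookups to its core object."""
    def __init__(self, core):
        self._core = core

    def __getattr__(self, attr):
        # Called only when the attribute is not found on the role itself,
        # so intrinsic properties (like .name) come from the core person.
        return getattr(self._core, attr)

class Student(Role):
    def attend(self, course):
        return f"{self.name} attends {course}"

class PartyGuest(Role):
    def dance(self):
        return f"{self.name} dances"

alice = Person("Alice")
as_student, as_guest = Student(alice), PartyGuest(alice)
```

Both role objects present different interfaces while sharing the same underlying person, which is the separation of context the text describes.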
In the older literature and in the field of databases, it seems that there has been little consideration for the context in which roles interplay with each other. Such a context is being established in newer role- and aspect-oriented programming languages such as Object Teams. Compare the use of "role" as "a set of software programs (services) that enable a server to perform specific functions for users or computers on the network" in Windows Server jargon.
Many researchers have argued the advantages of roles in modeling and implementation. Roles allow objects to evolve over time, they enable independent and concurrently existing views (interfaces) of the object, explicating the different contexts of the object, and separating concerns. Generally roles are a natural element of human daily concept-forming. Roles in programming languages enable objects to have changing interfaces, as we see in real life - things change over time, are used differently in different contexts, etc.
Authors of role literature
Barbara Pernici
Bent Bruun Kristensen
Bruce Wallace
Charles Bachman
Friedrich Steimann
Georg Gottlob
Kasper B. Graversen
Kasper Østerbye
Stephan Herrmann
Trygve Reenskaug
Thomas Kühn
Programming languag |
https://en.wikipedia.org/wiki/MIDACO | MIDACO (Mixed Integer Distributed Ant Colony Optimization) is a software package for numerical optimization based on evolutionary computing.
MIDACO was created in a collaboration between the European Space Agency and EADS Astrium to solve constrained mixed-integer non-linear programming (MINLP) space applications.
MIDACO holds several record solutions on interplanetary spaceflight trajectory design problems made publicly available by European Space Agency. MIDACO is included in software packages like TOMLAB, Astos, and SigmaXL. |
https://en.wikipedia.org/wiki/Spinplasmonics | Spinplasmonics is a field of nanotechnology combining spintronics and plasmonics. The field was pioneered by Professor Abdulhakem Elezzabi at the University of Alberta in Canada. In a simple spinplasmonic device, light waves couple to electron spin states in a metallic structure. The most elementary spinplasmonic device consists of a bilayer structure made from magnetic and nonmagnetic metals. It is the nanometer-scale interface between such metals that gives rise to an electron spin phenomenon. The plasmonic current is generated by optical excitation and its properties are manipulated by applying a weak magnetic field. Electrons with a specific spin state can cross the interfacial barrier, but those with a different spin state are impeded. Essentially, switching operations are performed with the electron's spin, and the result is then sent out as a light signal.
Spinplasmonic devices potentially have the advantages of high speed, miniaturization, low power consumption, and multifunctionality. On a length scale that is less than a single magnetic domain size, the interaction between atomic spins realigns the magnetic moments. Unlike semiconductor-based devices, smaller spinplasmonics devices are expected to be more efficient in transporting the spin-polarized electron current.
See also
Plasmon
Spintronics
Spin pumping
Spin transfer
List of emerging technologies |
https://en.wikipedia.org/wiki/Generated%20regressor | In least squares estimation problems, sometimes one or more regressors specified in the model are not observable. One way to circumvent this issue is to estimate or generate regressors from observable data. This generated regressor method is also applicable to unobserved instrumental variables. Under some regularity conditions, consistency and asymptotic normality of the least squares estimator are preserved, but the asymptotic variance has a different form in general.
Suppose the model of interest is the following:
where g is a conditional mean function and its form is known up to a finite-dimensional parameter β. Here is not observable, but we know that for some function h known up to parameter , and a random sample is available. Suppose we have a consistent estimator of that uses the observations. Then, β can be estimated by (non-linear) least squares using . Some examples of the above setup include Anderson et al. (1976) and Barro (1977).
This problem falls into the framework of the two-step M-estimator, and thus consistency and asymptotic normality of the estimator can be verified using the general theory of two-step M-estimators. As in the general two-step M-estimator problem, the asymptotic variance of a generated regressor estimator is usually different from that of the estimator with all regressors observed. Yet, in some special cases, the asymptotic variances of the two estimators are identical. To give one such example, consider the setting in which the regression function is linear in the parameter and the unobserved regressor is a scalar. Denoting the coefficient of the unobserved regressor by , if and , then the asymptotic variance is the same whether or not the regressor is observed.
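A minimal simulation of the two-step procedure under an illustrative linear specification (the data-generating process, variable names, and first-stage form are all assumptions for the sketch, not from the source):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# First-stage model (illustrative): the unobserved regressor is
# x_star = h(w; gamma) = gamma * w, and only a noisy version is observed.
w = rng.normal(size=n)
gamma_true, beta_true = 2.0, 1.5
x_star = gamma_true * w
x_noisy = x_star + rng.normal(size=n)          # observable proxy for x_star
y = beta_true * x_star + rng.normal(size=n)    # outcome equation

# Step 1: consistent estimate of gamma from the observable data
# (OLS without intercept of the noisy proxy on w).
gamma_hat = (w @ x_noisy) / (w @ w)

# Step 2: least squares of y on the *generated* regressor h(w; gamma_hat).
x_gen = gamma_hat * w
beta_hat = (x_gen @ y) / (x_gen @ x_gen)
```

With a large sample, both `gamma_hat` and `beta_hat` land near their true values, illustrating the consistency claim; the sketch does not compute the corrected asymptotic variance, which is where the generated-regressor adjustment discussed above matters.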
With minor modifications in the model, the above formulation is also applicable to Instrumental Variable estimation. Suppose the model of interest is linear in parameter. Error term is correlated with some of the regressors, and the model specifies some instrumental variables, which are not observa |