| source | text |
|---|---|
https://en.wikipedia.org/wiki/RATF | RATF (Robustness Analysis and Technology Forecasting) is a software development methodology acting as a plug-in to the Rational Unified Process (RUP), ICONIX, Extreme Programming (XP) and Agile software development. The first part of the method was first published in 2005 at the IASTED International Conference on Software Engineering.
RATF makes use of principles provided by the TRIZ innovation method and its techniques such as ARIZ and Technology forecasting, supported by Robustness analysis. The novel principle provided by RATF is to elaborate on potential software evolution in a method loop consisting of the steps:
Extended Robustness Analysis - that investigates preliminary design options based on system expectations and system environment, thus identifying weaknesses in terms of system conflicts and likelihood of change.
Technology Forecasting - which proposes likely, better and fruitful system design and evolution
Extended Robustness Analysis - that investigates consequences of such evolution, identifying weaknesses and system conflicts
Then the Technology Forecasting step is repeated, and so on.
Essentially, the RATF method is expected to improve decisions about future system architecture and design, taking advantage of technology forecasting and innovation, thus "enabling design of tomorrow's system, today". |
https://en.wikipedia.org/wiki/Tom%20Pepper | Tom Pepper (born August 24, 1975 in Des Moines, Iowa) is a computer programmer best known for his collaboration with Justin Frankel on the Gnutella peer-to-peer system. He and Frankel co-founded Nullsoft, whose most popular program is Winamp, which was sold to AOL in May 1999. He subsequently worked for AOL developing SHOUTcast, an Internet streaming audio service, with Frankel and Stephen "Tag" Loomis. After leaving AOL in 2004, he worked at RAZZ, Inc. He continues to collaborate with Frankel on independent projects like Ninjam.
See also
WASTE
Friend-to-friend (F2F)
File sharing
Peer-to-peer (P2P)
Gnutella
Nullsoft
Justin Frankel |
https://en.wikipedia.org/wiki/Renin%E2%80%93angiotensin%20system | The renin–angiotensin system (RAS), or renin–angiotensin–aldosterone system (RAAS), is a hormone system that regulates blood pressure, fluid and electrolyte balance, and systemic vascular resistance.
When renal blood flow is reduced, juxtaglomerular cells in the kidneys convert the precursor prorenin (already present in the blood) into renin and secrete it directly into the circulation. Plasma renin then carries out the conversion of angiotensinogen, released by the liver, to a decapeptide called angiotensin I. Angiotensin I is subsequently converted to angiotensin II (an octapeptide) by the angiotensin-converting enzyme (ACE) found on the surface of vascular endothelial cells, predominantly those of the lungs. Angiotensin II has a short life of about 1 to 2 minutes. Then, it is rapidly degraded into a heptapeptide called angiotensin III by angiotensinases which are present in red blood cells and vascular beds in many tissues.
Angiotensin III increases blood pressure and stimulates aldosterone secretion from the adrenal cortex; it has 100% of the adrenocortical stimulating activity and 40% of the vasopressor activity of angiotensin II.
Angiotensin IV also has adrenocortical and vasopressor activities.
Angiotensin II is a potent vasoconstrictive peptide that causes blood vessels to narrow, resulting in increased blood pressure. Angiotensin II also stimulates the secretion of the hormone aldosterone from the adrenal cortex. Aldosterone causes the renal tubules to increase the reabsorption of sodium which in consequence causes the reabsorption of water into the blood, while at the same time causing the excretion of potassium (to maintain electrolyte balance). This increases the volume of extracellular fluid in the body, which also increases blood pressure.
If the RAS is abnormally active, blood pressure will be too high. There are several types of drugs, including ACE inhibitors, angiotensin II receptor blockers (ARBs), and renin inhibitors, that interrupt different steps in |
https://en.wikipedia.org/wiki/Giant%20cell | A giant cell (also known as a multinucleated giant cell, or multinucleate giant cell) is a mass formed by the union of several distinct cells (usually histiocytes), often forming a granuloma. Although there is typically a focus on the pathological aspects of multinucleate giant cells (MGCs), they also play many important physiological roles. Osteoclasts are a type of MGC that are critical for the maintenance, repair, and remodeling of bone and are present normally in a healthy human body. Osteoclasts are frequently classified and discussed separately from other MGCs which are more closely linked with disease.
Non-osteoclast MGCs can arise in response to an infection, such as tuberculosis, herpes, or HIV, or as part of a foreign body reaction. These MGCs are cells of monocyte or macrophage lineage fused together. Similar to their monocyte precursors, they can phagocytose foreign materials. However, their large size and extensive membrane ruffling make them better equipped to clear up larger particles. They utilize activated CR3s to ingest complement-opsonized targets. Non-osteoclast MGCs are also responsible for the clearance of cell debris, which is necessary for tissue remodeling after injuries.
Types include foreign-body giant cells, Langhans giant cells, Touton giant cells, the giant cells of giant-cell arteritis, and Reed–Sternberg cells.
History
Osteoclasts were discovered in 1873. However, it wasn't until the development of the organ culture in the 1970s that their origin and function could be deduced. Although there was a consensus early on about the physiological function of osteoclasts, theories on their origins were heavily debated. Many believed osteoclasts and osteoblasts came from the same progenitor cell. Because of this, osteoclasts were thought to be derived from cells in connective tissue. Studies that observed that bone resorption could be restored by bone marrow and spleen transplants helped prove osteoclasts' hematopoietic origin.
Other multinucleated giant ce |
https://en.wikipedia.org/wiki/Behavioral%20contagion | Behavioral contagion is a form of social contagion involving the spread of behavior through a group. It refers to the propensity for a person to copy a certain behavior of others who are either in the vicinity, or whom they have been exposed to. The term was originally used by Gustave Le Bon in his 1895 work The Crowd: A Study of the Popular Mind to explain undesirable aspects of behavior of people in crowds. In the digital age, behavioral contagion is also concerned with the spread of online behavior and information. A variety of behavioral contagion mechanisms were incorporated in models of collective human behavior.
Behavioral contagion has been attributed to a variety of different factors. Often it is distinguished from collective behavior that arises from a direct attempt at social influence. A prominent theory involves the reduction of restraints, put forth by Fritz Redl in 1949 and analyzed in depth by Ladd Wheeler in 1966. Social psychologists acknowledge a number of other factors which influence the likelihood of behavioral contagion occurring, such as deindividuation (Festinger, Pepitone, & Newcomb, 1952) and the emergence of social norms (Turner, 1964). In 1980, Freedman et al. focused on the effects of physical factors on contagion, in particular density and number.
J. O. Ogunlade (1979, p. 205) describes behavioral contagion as a "spontaneous, unsolicited and uncritical imitation of another's behavior" that occurs when certain variables are met: a) the observer and the model share a similar situation or mood (this is one way behavioral contagion can be readily applied to mob psychology); b) the model's behavior encourages the observer to review his condition and to change it; c) the model's behavior would assist the observer to resolve a conflict by reducing restraints, if copied; and d) the model is assumed to be a positive reference individual.
Types of contagion
Social contagion can occur through threshold models that assume that an indiv |
https://en.wikipedia.org/wiki/Translation%20%28geometry%29 | In Euclidean geometry, a translation is a geometric transformation that moves every point of a figure, shape or space by the same distance in a given direction. A translation can also be interpreted as the addition of a constant vector to every point, or as shifting the origin of the coordinate system. In a Euclidean space, any translation is an isometry.
As a function
If $\mathbf{v}$ is a fixed vector, known as the translation vector, and $\mathbf{p}$ is the initial position of some object, then the translation function $T_{\mathbf{v}}$ will work as $T_{\mathbf{v}}(\mathbf{p}) = \mathbf{p} + \mathbf{v}$.
If $T$ is a translation, then the image of a subset $A$ under the function $T$ is the translate of $A$ by $T$. The translate of $A$ by $T_{\mathbf{v}}$ is often written as $A + \mathbf{v}$.
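As a concrete illustration (a minimal Python sketch, not part of the article), translating a set of points by a fixed vector simply adds that vector to every point:
def translate(points, v):
    # move each point by the same vector (component-wise addition)
    return [tuple(p_i + v_i for p_i, v_i in zip(p, v)) for p in points]

triangle = [(0, 0), (1, 0), (0, 1)]
print(translate(triangle, (2, 3)))  # [(2, 3), (3, 3), (2, 4)]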
Horizontal and vertical translations
In geometry, a vertical translation (also known as vertical shift) is a translation of a geometric object in a direction parallel to the vertical axis of the Cartesian coordinate system.
Often, vertical translations are considered for the graph of a function. If f is any function of x, then the graph of the function f(x) + c (whose values are given by adding a constant c to the values of f) may be obtained by a vertical translation of the graph of f(x) by distance c. For this reason the function f(x) + c is sometimes called a vertical translate of f(x). For instance, the antiderivatives of a function all differ from each other by a constant of integration and are therefore vertical translates of each other.
In function graphing, a horizontal translation is a transformation which results in a graph that is equivalent to shifting the base graph left or right in the direction of the x-axis. A graph is translated k units horizontally by moving each point on the graph k units horizontally.
For the base function $f(x)$ and a constant $k$, the function given by $g(x) = f(x - k)$ can be sketched as the graph of $f(x)$ shifted $k$ units horizontally.
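A short sketch of the two shifts just described (illustrative Python; the choices of f, c and k are arbitrary):
f = lambda x: x ** 2               # base function
c, k = 3, 2
vertical = lambda x: f(x) + c      # graph of f shifted up by c
horizontal = lambda x: f(x - k)    # graph of f shifted right by k
print(f(1), vertical(1), horizontal(1))  # 1 4 1, since horizontal(1) = f(-1) = 1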
If function transformations were described in terms of geometric transformations, it might be clearer why functions translate horizontally the way they do. When ad
https://en.wikipedia.org/wiki/Ebola%20misinformation | Multiple conspiracy theories, hoaxes, and quack cures have circulated about ebola viruses, regarding the origin of outbreaks, treatments for ebola virus disease, and preventative measures.
Unproven and disproven treatments
During the Western African Ebola virus epidemic (2013-2016), a number of unproven and fake treatments were marketed online in the United States, including snake venom, vitamin C, "Nano Silver", and various homeopathic and herbal remedies, including clove oil, garlic, and ewedu soup. Gary Coody, national health fraud coordinator for the FDA, described the purveyors of these unproven treatments as "like storm-chasing roofers, who go and try to defraud people after a big storm. Some of them may be making an honest mistake; other companies are trying to rip people off." Coody also said the problem with implausible and unproven remedies is not only that they are unlikely to work, but also that such treatments may lead to patients delaying effective and timely medical care in a hospital setting.
Implausible and disproven methods for preventing Ebola
During the 2014 and 2019 outbreaks, a number of hoax remedies for the prevention of Ebola were spread online. One such common thread was the frequent use of essential oils. There is no evidence that any of these treatments will decrease the risk of Ebola virus infection, and no known plausible mechanisms for such an effect.
Virus origins
During the 2014 outbreak in Liberia, an article in the Liberian Observer alleged that the virus was a bioweapon designed by the US military as a form of population control. Other theories spreading online during the pandemic alleged that the New World Order had engineered the virus to impose quarantines and travel bans to soften an eventual descent into martial law. During a 2019 outbreak in the Democratic Republic of the Congo, rumors spread that the virus was imported to the country for financial gain, or as part of a plot to procure organs for the black market. |
https://en.wikipedia.org/wiki/Skin%20equivalent | A skin equivalent is an in vitro skin model used to conduct experiments on processes involving the skin, such as wound healing and keratinocyte migration. It is a more complex form of the dermal equivalent. |
https://en.wikipedia.org/wiki/Procera%20Networks | Procera Networks is a networking equipment company based in Fremont, California, United States, that designs and sells Network Intelligence solutions based on deep packet inspection (DPI) technology. Procera sells solutions to telecom operators, governments, enterprises, and network equipment vendors in the areas of Analytics, Traffic Management, Policy and Charging Control, and Service Provider Compliance.
History
Procera was incorporated in 2002 in California. The company was initially created to deliver intelligent Ethernet network switches. The company changed its product line when it merged with Netintact, a company based in Varberg, Sweden, that offered bandwidth management products to Scandinavian network operators under the PacketLogic brand. The merger was announced in May and closed in June 2006. Procera shifted the company product strategy to the Netintact product lines.
From 2006 to 2008, Procera sold inexpensive (less than 2 Gbit/s) traffic management products to small operators and enterprises, including operators like Com Hem in Sweden. In September 2007, Procera Networks became listed on the American Stock Exchange with stock symbol PKT.
Beginning in 2008, Procera began focusing on larger carriers and mobile operators. Several customers reported they use Procera's technology, such as Yoigo and Genband, which resells Procera products as its P-Series products.
Procera was named one of the fastest growing network companies by Deloitte for 2010 and 2011 as part of its Deloitte Fast 500 study. On June 24, 2011, Procera Networks joined the Russell 3000 Index. In December 2011, Procera moved to the NASDAQ stock exchange using the symbol PKT.
In 2013, Procera bought Vineyard Networks, a Canadian DPI company for Can$28 million. The Vineyard product is sold on the market as Network Application Visibility Library (NAVL) to network equipment vendors.
In 2015, Procera was acquired by Francisco Partners, a private equity firm based in San Francisco. Procera now op |
https://en.wikipedia.org/wiki/Jericho%20Forum | The Jericho Forum was an international group working to define and promote de-perimeterisation. It was initiated by David Lacey from the Royal Mail, and grew out of a loose affiliation of interested corporate CISOs (Chief Information Security Officers), discussing the topic from the summer of 2003, after an initial meeting hosted by Cisco, but was officially founded in January 2004. It declared success, and merged with The Open Group industry consortium's Security Forum in 2014.
The problem
It was created because the founding members claimed that no one else was appropriately discussing the problems surrounding de-perimeterisation. They felt the need to create a forum to define and consistently solve such issues. One of the earlier outputs of the group is a position paper entitled the Jericho Forum Commandments, which are a set of principles that describe how best to survive in a de-perimeterised world.
Membership
The Jericho Forum consisted of "user members" and "vendor members". Originally, only user members were allowed to stand for election. In December 2008 this was relaxed, allowing either vendor or user members to be eligible for election. The day-to-day management was provided by the Open Group.
While the Jericho Forum had its foundations in the UK, nearly all the initial members worked for corporates and had global responsibilities, and involvement grew to Europe, North America and Asia Pacific.
Results
After the initial focus on defining the problem, de-perimeterisation, the Forum then moved onto focussing on defining the solution, which it delivered in the publication of the Collaboration Oriented Architecture (COA) paper and COA Framework paper.
The next focus of the Jericho Forum was "Securely Collaborating in Clouds", which involves applying the COA concepts to the emerging Cloud Computing paradigm. The basic premise is that a collaborative approach is essential to gain most value from "the cloud". Much of this work was transferred to the Cloud |
https://en.wikipedia.org/wiki/Nowhere%20continuous%20function | In mathematics, a nowhere continuous function, also called an everywhere discontinuous function, is a function that is not continuous at any point of its domain. If $f$ is a function from real numbers to real numbers, then $f$ is nowhere continuous if for each point $x$ there is some $\varepsilon > 0$ such that for every $\delta > 0$ we can find a point $y$ such that $|x - y| < \delta$ and $|f(x) - f(y)| \geq \varepsilon$. Therefore, no matter how close we get to any fixed point, there are even closer points at which the function takes not-nearby values.
More general definitions of this kind of function can be obtained, by replacing the absolute value by the distance function in a metric space, or by using the definition of continuity in a topological space.
Examples
Dirichlet function
One example of such a function is the indicator function of the rational numbers, also known as the Dirichlet function. This function is denoted as $\mathbf{1}_{\mathbb{Q}}$ and has domain and codomain both equal to the real numbers. By definition, $\mathbf{1}_{\mathbb{Q}}(x)$ is equal to $1$ if $x$ is a rational number and it is $0$ otherwise.
More generally, if $A$ is any subset of a topological space $X$ such that both $A$ and the complement of $A$ are dense in $X$, then the real-valued function which takes the value $1$ on $A$ and $0$ on the complement of $A$ will be nowhere continuous. Functions of this type were originally investigated by Peter Gustav Lejeune Dirichlet.
Non-trivial additive functions
A function $f : \mathbb{R} \to \mathbb{R}$ is called an additive function if it satisfies Cauchy's functional equation: $f(x + y) = f(x) + f(y).$
For example, every map of the form $x \mapsto cx$, where $c$ is some constant, is additive (in fact, it is linear and continuous). Furthermore, every linear map $f : \mathbb{R} \to \mathbb{R}$ is of this form (by taking $c = f(1)$).
Although every linear map is additive, not all additive maps are linear. An additive map is linear if and only if there exists a point at which it is continuous, in which case it is continuous everywhere. Consequently, every non-linear additive function is discontinuous at every point of its domain.
Nevertheless, the restriction of any additive function to any real scalar multiple of the rational number |
https://en.wikipedia.org/wiki/Fetal%20position | Fetal position (British English: also foetal) is the positioning of the body of a prenatal fetus as it develops. In this position, the back is curved, the head is bowed, and the limbs are bent and drawn up to the torso. A compact position is typical for fetuses. Many newborn mammals, especially rodents, remain in a fetal position well after birth.
This type of compact position is used in the medical profession to minimize injury to the neck and chest.
Some people assume a fetal position when sleeping, especially when the body becomes cold. In some cultures bodies have been buried in fetal position.
Sometimes, when a person has suffered extreme physical or psychological trauma (including massive stress), they will assume a similar compact position in which the back is curved forward, the legs are brought up as tightly against the abdomen as possible, the head is bowed as close to the abdomen as possible, and the arms are wrapped around the head to prevent further trauma.
This type of position has been observed in drug addicts, who enter the position when experiencing withdrawal. Sufferers of anxiety are also known to assume the same type of position during panic attacks.
Assuming this type of position and playing dead is often recommended as a strategy to end a bear attack.
See also
Neutral body posture
Position (obstetrics) |
https://en.wikipedia.org/wiki/Subatomic%20scale | The subatomic scale is the domain of physical size that encompasses objects smaller than an atom. It is the scale at which the atomic constituents, such as the nucleus containing protons and neutrons, and the electrons in their orbitals, become apparent.
The subatomic scale includes the many thousands of times smaller subnuclear scale, which is the scale of physical size at which constituents of the protons and neutrons - particularly quarks - become apparent.
See also
Astronomical scale, the opposite end of the spectrum
Subatomic particles |
https://en.wikipedia.org/wiki/Computation | A computation is any type of arithmetic or non-arithmetic calculation that is well-defined. Common examples of computations are mathematical equations and computer algorithms.
Mechanical or electronic devices (or, historically, people) that perform computations are known as computers. The study of computation is the field of computability, itself a sub-field of computer science.
Introduction
The notion that mathematical statements should be 'well-defined' had been argued by mathematicians since at least the 1600s, but agreement on a suitable definition proved elusive. A candidate definition was proposed independently by several mathematicians in the 1930s. The best-known variant was formalised by the mathematician Alan Turing, who defined a well-defined statement or calculation as any statement that could be expressed in terms of the initialisation parameters of a Turing Machine. Other (mathematically equivalent) definitions include Alonzo Church's lambda-definability, Herbrand-Gödel-Kleene's general recursiveness and Emil Post's 1-definability.
Today, any formal statement or calculation that exhibits this quality of well-definedness is termed computable, while the statement or calculation itself is referred to as a computation.
Turing's definition apportioned "well-definedness" to a very large class of mathematical statements, including all well-formed algebraic statements, and all statements written in modern computer programming languages.
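To make the idea concrete, here is a minimal, illustrative Python sketch (not from the source) of a one-tape Turing machine whose "initialisation parameters" are a transition table, a start state and an input tape:
def run_tm(transitions, tape, state="start", blank="_", halt="halt"):
    cells = dict(enumerate(tape))            # sparse tape indexed by position
    head = 0
    while state != halt:
        symbol = cells.get(head, blank)
        # transition table: (state, symbol) -> (next state, symbol to write, move)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    positions = sorted(cells)
    return "".join(cells[i] for i in range(positions[0], positions[-1] + 1))

# Example: append a "1" to a unary number, i.e. compute n + 1.
increment = {
    ("start", "1"): ("start", "1", "R"),     # skip over the existing 1s
    ("start", "_"): ("halt", "1", "R"),      # write one more 1, then halt
}
print(run_tm(increment, "111"))              # prints 1111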
Despite the widespread uptake of this definition, there are some mathematical concepts that have no well-defined characterisation under this definition. This includes the halting problem and the busy beaver game. It remains an open question as to whether there exists a more powerful definition of 'well-defined' that is able to capture both computable and 'non-computable' statements.
Some examples of mathematical statements that are computable include:
All statements characterised in modern programming languages, includ |
https://en.wikipedia.org/wiki/Nucleosynthesis | Nucleosynthesis is the process that creates new atomic nuclei from pre-existing nucleons (protons and neutrons) and nuclei. According to current theories, the first nuclei were formed a few minutes after the Big Bang, through nuclear reactions in a process called Big Bang nucleosynthesis. After about 20 minutes, the universe had expanded and cooled to a point at which these high-energy collisions among nucleons ended, so only the fastest and simplest reactions occurred, leaving a universe containing mostly hydrogen and helium, with only traces of other elements such as lithium and the hydrogen isotope deuterium. Nucleosynthesis in stars and their explosions later produced the variety of elements and isotopes that we have today, in a process called cosmic chemical evolution. The amount of total mass in elements heavier than hydrogen and helium (called 'metals' by astrophysicists) remains small (a few percent), so the universe still has approximately the same composition.
Stars fuse light elements to heavier ones in their cores, giving off energy in the process known as stellar nucleosynthesis. Nuclear fusion reactions create many of the lighter elements, up to and including iron and nickel in the most massive stars. Products of stellar nucleosynthesis remain trapped in stellar cores and remnants except if ejected through stellar winds and explosions. The neutron capture reactions of the r-process and s-process create heavier elements, from iron upwards.
Supernova nucleosynthesis within exploding stars is largely responsible for the elements between oxygen and rubidium: from the ejection of elements produced during stellar nucleosynthesis; through explosive nucleosynthesis during the supernova explosion; and from the r-process (absorption of multiple neutrons) during the explosion.
Neutron star mergers are a recently discovered major source of elements produced in the r-process. When two neutron stars collide, a significant amount of neutron-rich matter may be ej |
https://en.wikipedia.org/wiki/Google%20Neural%20Machine%20Translation | Google Neural Machine Translation (GNMT) is a neural machine translation (NMT) system developed by Google and introduced in November 2016 that uses an artificial neural network to increase fluency and accuracy in Google Translate. The neural network consists of two main blocks, an encoder and a decoder, both of LSTM architecture with 8 1024-wide layers each and a simple 1-layer 1024-wide feedforward attention mechanism connecting them. The total number of parameters has been variously described as over 160 million, approximately 210 million, 278 million or 380 million.
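A toy-sized PyTorch sketch (an assumption; the article gives no code) loosely mirroring the encoder and decoder LSTMs joined by a one-layer feed-forward attention described above; the real system used 8 LSTM layers of width 1024 on each side:
import torch
import torch.nn as nn

class TinyNMT(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, hidden=64):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, hidden)
        self.tgt_emb = nn.Embedding(tgt_vocab, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden * 2, hidden, batch_first=True)
        # one-layer feed-forward attention scorer over (encoder state, decoder state) pairs
        self.att = nn.Sequential(nn.Linear(hidden * 2, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt):
        enc_out, state = self.encoder(self.src_emb(src))       # (B, S, H)
        dec_in = self.tgt_emb(tgt)                             # (B, T, H)
        logits = []
        for t in range(dec_in.size(1)):
            query = state[0][-1]                               # current decoder hidden state (B, H)
            # score every encoder position against the current decoder state
            scores = self.att(torch.cat(
                [enc_out, query.unsqueeze(1).expand_as(enc_out)], dim=-1))
            weights = torch.softmax(scores, dim=1)             # (B, S, 1)
            context = (weights * enc_out).sum(dim=1)           # attention context (B, H)
            step_in = torch.cat([dec_in[:, t], context], dim=-1).unsqueeze(1)
            dec_out, state = self.decoder(step_in, state)
            logits.append(self.out(dec_out.squeeze(1)))
        return torch.stack(logits, dim=1)                      # (B, T, vocab)

model = TinyNMT(src_vocab=100, tgt_vocab=100)
src = torch.randint(0, 100, (2, 7))   # batch of 2 source sentences, length 7
tgt = torch.randint(0, 100, (2, 5))   # teacher-forced target prefix, length 5
print(model(src, tgt).shape)          # torch.Size([2, 5, 100])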
GNMT improves on the quality of translation by applying an example-based (EBMT) machine translation method in which the system learns from millions of examples of language translation. GNMT's proposed architecture of system learning was first tested on over a hundred languages supported by Google Translate. With the large end-to-end framework, the system learns over time to create better, more natural translations. GNMT attempts to translate whole sentences at a time, rather than just piece by piece. The GNMT network can undertake interlingual machine translation by encoding the semantics of the sentence, rather than by memorizing phrase-to-phrase translations.
History
The Google Brain project was established in 2011 in the "secretive Google X research lab" by Google Fellow Jeff Dean, Google Researcher Greg Corrado, and Stanford University Computer Science professor Andrew Ng. Ng's work has led to some of the biggest breakthroughs at Google and Stanford.
In November 2016, Google Neural Machine Translation system (GNMT) was introduced. Since then, Google Translate began using neural machine translation (NMT) in preference to its previous statistical methods (SMT) which had been used since October 2007, with its proprietary, in-house SMT technology.
Training GNMT was a big effort at the time and took, by a 2021 OpenAI estimate, on the order of 100 PFLOP/s-days (up to 10²² FLOPs) of compute, which was 1 |
https://en.wikipedia.org/wiki/Independence%20Theory%20in%20Combinatorics | Independence Theory in Combinatorics: An Introductory Account with Applications to Graphs and Transversals is an undergraduate-level mathematics textbook on the theory of matroids. It was written by Victor Bryant and Hazel Perfect, and published in 1980 by Chapman & Hall.
Topics
A major theme of Independence Theory in Combinatorics is the unifying nature of abstraction, and in particular the way that matroid theory can unify the concept of independence coming from different areas of mathematics. It has five chapters, the first of which provides basic definitions in graph theory, combinatorics, and linear algebra, and the second of which defines and introduces matroids, called in this book "independence spaces". As the name would suggest, these are defined primarily through their independent sets, but equivalences with definitions using circuits, matroid rank, and submodular set function are also presented, as are sums, minors, truncations, and duals of matroids.
Chapter three concerns graphic matroids, the matroids of spanning trees in graphs, and the greedy algorithm for minimum spanning trees. Chapter four includes material on transversal matroids, which can be described in terms of matchings of bipartite graphs, and includes additional material on matching theory and related topics including Hall's marriage theorem, Menger's theorem (an equivalence between minimum cuts and maximum sets of disjoint paths in graphs), Latin squares, and gammoids. The final chapter concerns matroid representations using linear independence in vector spaces, labeled as an appendix and presented with fewer proofs.
Many exercises are included, of varied difficulty, with hints and solutions.
Audience and reception
The level of the text is appropriate for courses for advanced undergraduates or master's students, with only basic linear algebra as a prerequisite, and covers its material at a more accessible and general level than other texts on matroid theory. Although disagreeing with the |
https://en.wikipedia.org/wiki/Error%20hiding | In computer programming, error hiding (or error swallowing) is the practice of catching an error or exception, and then continuing without logging, processing, or reporting the error to other parts of the software. Handling errors in this manner is considered bad practice and an anti-pattern in computer programming. In languages with exception handling support, this practice is called exception swallowing.
Errors and exceptions have several purposes:
Help software maintainers track down and understand problems that happen when a user is running the software, when combined with a logging system
Provide useful information to the user of the software, when combined with meaningful error messages, error codes or error types shown in a UI, as console messages, or as data returned from an API (depending on the type of software and type of user)
Indicate that normal operation cannot continue, so the software can fall back to alternative ways of performing the required task or abort the operation.
When errors are swallowed, these purposes can't be accomplished. Information about the error is lost, which makes it very hard to track down problems. Depending on how the software is implemented, it can cause unintended side effects that cascade into other errors, destabilizing the system. Without information about the root cause of the problem, it's very hard to figure out what is going wrong or how to fix it.
Examples
Languages with exception handling
In this C# example, even though the code inside the try block throws an exception, it gets caught by the blanket catch clause. The exception has been swallowed and is considered handled, and the program continues.
try {
throw new Exception();
} catch {
// do nothing
}
In this PowerShell example, the trap clause catches the exception being thrown and swallows it by continuing execution. The "I should not be here" message is shown as if no exception had happened.
&{
trap { continue }
throw
write-output "I shou |
https://en.wikipedia.org/wiki/Catalent | Catalent, Inc. (Catalent Pharma Solutions) is a multinational corporation headquartered in Somerset, New Jersey. It is a global provider of delivery technologies, development, drug manufacturing, biologics, gene therapies and consumer health products. It employs more than 14,000 people, including approximately 2,400 scientists and technicians. In fiscal year 2020, it generated over $3 billion in annual revenue.
Catalent was formed in April 2007 when affiliates of the Blackstone Group L.P. acquired the core of the pharmaceutical technologies and services (PTS) segment of Cardinal Health, Inc. Cardinal Health created PTS through a series of acquisitions starting with R.P. Scherer Corporation in 1998.
In 2014, Catalent became a public company, listed on the New York Stock Exchange.
History
Before 2007
In 1996, Cardinal Health acquired PCI (Headquarters: Philadelphia, Pennsylvania). PCI (Packaging Coordinators Inc.) is a pharmaceutical contract packing service for commercial and clinical packaging.
In 1998, Cardinal Health acquired R.P. Scherer Corporation (Headquarters: Troy, Michigan). Robert Pauli Scherer founded the R.P. Scherer Corporation to commercialize his innovation of softgel encapsulation using the rotary die production process. The following year, in 1999, Cardinal Health acquired Automatic Liquid Packaging, Inc. (Headquarters: Woodstock, Ill.), whose Blow-Fill-Seal Technology allowed Cardinal to enter the sterile product market.
In 2001, Cardinal Health acquired International Processing Corporation, a company that was renowned for its expertise in oral modified-release dosage form development and manufacturing. In 2002, Cardinal Health acquired Magellan Laboratories Inc., a company that specialized in product development expertise. In 2003, Cardinal Health acquired Gala Biotech (Headquarters: Madison, Wisconsin). In the same year, Cardinal Health also acquired Intercare Group PLC, broadening its global capabilities in Europe.
From 2004 to 2006, Card |
https://en.wikipedia.org/wiki/KidsCom | KidsCom was a virtual world geared toward kids ages 8–14. KidsCom had many "worlds" (virtual places) that the user could go to in order to have fun with an avatar. The website operated for many years, facing competition from newer sites such as Webkinz. KidsCom was published by the now defunct Circle 1 Network, LLC in Milwaukee, Wisconsin and was first launched in 1995 as a site for kids. After receiving new capital in 2006, Circle 1 Network used those funds to enhance and expand KidsCom – a site that the company described as safe, fun and educational.
As a result of those funds, the virtual world was launched in 2007 and gathered over 2 million users.
KidsCom primarily allowed kids to learn more about climate change while playing games, and making new friends. At its peak usage, it was praised for its dedication to both fun and learning, whilst teaching a new generation how to look after the Earth.
The KidsCom website was taken offline in 2019 after the parent company, Circle 1 Network, ceased to renew the domain. Its virtual world is no longer accessible.
History
KidsCom was one of the earliest kids-only sites on the Internet, having been online since February 1995. It was an early test site for a large CPG company interested in determining if kids were online. After a very successful test, KidsCom grew into more than just a test site.
On May 13, 1996, the Center for Media Education (CME) filed a petition requesting that the Federal Trade Commission investigate and bring law enforcement action for alleged deceptive practices in the operation of an Internet Web site called "KidsCom," then operated by SpectraCom, Inc. However, the FTC decided not to bring charges and the BBB said that KidsCom is an example of responsible marketing to children.
The FTC decided not to bring any charges or enforcement action against KidsCom for the following reasons:
KidsCom has modified its website in significant respects. KidsCom now sends an e-mail to parents when children register at th |
https://en.wikipedia.org/wiki/Institut%20f%C3%BCr%20Kunststoffverarbeitung | The Institut für Kunststoffverarbeitung in Industrie und Handwerk (IKV), the Institute for Plastics Processing in Industry and Trade at the Rheinisch-Westfälische Technische Hochschule Aachen, Germany, is a teaching and research institute for the study of plastics technology. It stands for practice-oriented research, innovation and technology transfer. The focus of the IKV is the integrative view of product development in the material, construction and processing sectors, in particular in plastics and rubber. The sponsor is a non-profit association that currently includes around 300 companies from the plastics industry worldwide (as of December 2018) and through which the institute maintains a close connection between industry and science. In addition, the IKV is a member of the Arbeitsgemeinschaft industrieller Forschungsvereinigungen (AiF).
The institute was founded in 1950 and, with around 350 employees, has become Europe's largest research and training institute in the field of plastics technology. The first head of the institute was , followed in 1959 by A. H. Henning. From 1965 to 1988 headed the institute, and until his retirement in 2011. Since 2011, the current head of the institute, and at the same time managing director of the association, is . He also holds the Chair for within the Faculty of Mechanical Engineering at RWTH Aachen University.
Tasks
The tasks of the institute are:
scientific and practice-oriented research in the field of plastics technology
the training of students to become qualified junior staff for the plastics industry
the training of practitioners in the craft industry in the field of plastics technology
Structure
The scientific departments injection molding/PUR technology, extrusion and further processing, molded part design/materials engineering and fibre-reinforced plastics are the operative units of the institute. The (KAP) (English: Center for Plastics Analysis and Testing) at the IKV supports and advises scientific departments and is available as a service for the industr |
https://en.wikipedia.org/wiki/Second%20screen | A second screen involves the use of a computing device to provide a different viewing experience for content on another device.
The term commonly refers to the use of such devices to provide interactive features, like posts on social media platforms that take input from the audience during a broadcast, such as a television program. This type of technology is designed to keep the audience engaged with whatever they are watching and has been found to support social television and generate an online conversation around specific content. It is a type of screen casting technology that allows a smartphone or tablet to display its contents on another screen. A second screen can also refer to having multiple monitors connected to a computer.
Analysis
Several studies show a tendency to use another device, such as a tablet or smartphone, while watching television. Other studies report a higher percentage of comments or posts on social networks about the content that is being watched (Nielsen ratings).
Besides keeping the audience engaged (via polling, chatting, providing additional information about content and participants, etc.) and generating revenue via advertising, a second screen can be used as a metering solution to get information about the audience. Being more far-reaching and inexpensive, a second screen may replace people meters in the future.
One trend hampering the growth of second screens is that many shows are creating their own applications for them. It is considered impractical to expect users to download multiple applications and switch between them for each channel or show.
Conference and business meeting organizers may also incorporate second screens to deepen audience engagement. According to "2014 Trend Tracker", the second screen phenomenon is a significant and growing trend. "Attendees are so glued to their devices, even while watching a live presentation (or at home, on television) that marketers are supplying them with a simultaneous engage |
https://en.wikipedia.org/wiki/Fully%20qualified%20domain%20address | A fully qualified domain address (FQDA) is a string forming an Internet e-mail address. It was defined by the Internet Engineering Task Force in RFC 3801 for the use in voice profiles for Internet mail, but has been used on the Internet as early as 1988.
An FQDA is composed of a local part, followed by the @ symbol and the fully qualified domain name (FQDN) of the host responsible for a mailbox.
An example of an FQDA is user@example.com. The local part usually denotes a username, while the fully qualified domain name is used by mail transfer agents to determine the IP address of the host by querying the Domain Name System. |
https://en.wikipedia.org/wiki/Minor%20alar%20cartilage | In human anatomy, the part of the nose which forms the lateral wall is curved to correspond with the ala of the nose; it is oval and flattened, narrow behind, where it is connected with the frontal process of the maxilla by a tough fibrous membrane, in which are found three or four small nasal cartilages, the minor alar cartilages, also referred to as lesser alar, sesamoid, or accessory cartilages. |
https://en.wikipedia.org/wiki/Initial%20mass%20function | In astronomy, the initial mass function (IMF) is an empirical function that describes the initial distribution of masses for a population of stars during star formation. The IMF not only describes the formation and evolution of individual stars, it also serves as an important link in describing the formation and evolution of galaxies. The IMF is often given as a probability density function (PDF) that describes the probability that a star has a certain mass. It differs from the present-day mass function (PDMF), which describes the current distribution of masses of stars, such as red giants, white dwarfs, neutron stars, and black holes, after a period of evolution away from the main sequence. The IMF is derived from the luminosity function, while the PDMF is derived from the present-day luminosity function. The IMF and PDMF can be linked through the "stellar creation function", defined as the number of stars formed per unit volume of space in a mass range and a time interval. For main-sequence stars whose lifetimes are greater than the age of the galaxy, the IMF and PDMF are equivalent. Similarly, the IMF and PDMF are equivalent for brown dwarfs, whose lifetimes are effectively unlimited.
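A commonly used single-power-law parametrization (not quoted in this excerpt, but standard in the literature) is the Salpeter (1955) IMF, in which the number of stars formed per unit mass interval falls off as a power law for masses above roughly a solar mass: $\xi(m)\,\mathrm{d}m \propto m^{-2.35}\,\mathrm{d}m.$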
The properties and evolution of a star are closely related to its mass, so the IMF is an important diagnostic tool for astronomers studying large quantities of stars. For example, the initial mass of a star is the primary factor of determining its colour, luminosity, radius, radiation spectrum, and quantity of materials and energy it emitted into interstellar space during its lifetime. At low masses, the IMF sets the Milky Way Galaxy mass budget and the number of substellar objects that form. At intermediate masses, the IMF controls chemical enrichment of the interstellar medium. At high masses, the IMF sets the number of core collapse supernovae that occur and therefore the kinetic energy feedback.
The IMF is relatively invariant from one group of stars to another, though some ob |
https://en.wikipedia.org/wiki/Focal%20length | The focal length of an optical system is a measure of how strongly the system converges or diverges light; it is the inverse of the system's optical power. A positive focal length indicates that a system converges light, while a negative focal length indicates that the system diverges light. A system with a shorter focal length bends the rays more sharply, bringing them to a focus in a shorter distance or diverging them more quickly. For the special case of a thin lens in air, a positive focal length is the distance over which initially collimated (parallel) rays are brought to a focus, or alternatively a negative focal length indicates how far in front of the lens a point source must be located to form a collimated beam. For more general optical systems, the focal length has no intuitive meaning; it is simply the inverse of the system's optical power.
In most photography and all telescopy, where the subject is essentially infinitely far away, longer focal length (lower optical power) leads to higher magnification and a narrower angle of view; conversely, shorter focal length or higher optical power is associated with lower magnification and a wider angle of view. On the other hand, in applications such as microscopy in which magnification is achieved by bringing the object close to the lens, a shorter focal length (higher optical power) leads to higher magnification because the subject can be brought closer to the center of projection.
Thin lens approximation
For a thin lens in air, the focal length is the distance from the center of the lens to the principal foci (or focal points) of the lens. For a converging lens (for example a convex lens), the focal length is positive and is the distance at which a beam of collimated light will be focused to a single spot. For a diverging lens (for example a concave lens), the focal length is negative and is the distance to the point from which a collimated beam appears to be diverging after passing through the lens.
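For reference (a standard relation not quoted in this excerpt), the Gaussian thin-lens equation relates the focal length $f$ to the object distance $s_o$ and image distance $s_i$, with real object and image distances taken as positive: $\frac{1}{s_o} + \frac{1}{s_i} = \frac{1}{f}.$ As the object recedes toward infinity ($s_o \to \infty$), the image forms at the focal point ($s_i \to f$), matching the collimated-beam description above.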
When |
https://en.wikipedia.org/wiki/Sound%20localization%20in%20owls | Most owls are nocturnal or crepuscular birds of prey. Because they hunt at night, they must rely on non-visual senses. Experiments by Roger Payne have shown that owls are sensitive to the sounds made by their prey, not the heat or the smell. In fact, the sound cues alone are both necessary and sufficient for owls to localize mice from a distant perch. For this to work, the owls must be able to accurately localize both the azimuth and the elevation of the sound source.
Introduction to sound localization
Owls are very adept nocturnal predators, hunting prey that includes small mammals, reptiles, and insects. They are able to rotate their head up to 270 degrees, lock onto prey, and launch a silent attack. Owls lock onto prey by using sound localization. Sound localization is an animal’s ability to identify the origin of a sound in distance and direction. Several owl species have ears that are asymmetrical in size and location, which enhances this ability. These species include barn owls (Tyto alba), northern saw-whet owls (Aegolius acadicus), and long-eared owls (Asio otus). The barn owl (Tyto alba) is the most commonly studied for sound localization because they use similar methods to humans for interpreting interaural time differences in the horizontal plane. This species has evolved a specialized set of pathways in the brain that allow them to hear a sound and map out the possible location of the object that elicited that sound. Sound waves enter the ear via the ear canal and travel until they reach the tympanic membrane. The tympanic membrane then sends these waves through the ossicles of the middle ear and into the inner ear that includes the vestibular organ, cochlea, and auditory nerve. They are then able to use interaural time difference (ITD) and interaural level difference (ILD) to pinpoint the location and elevation of their prey.
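As a rough numerical illustration (a simplified free-field, two-receiver model rather than owl-specific anatomy; the ear separation below is an assumed value), the interaural time difference grows with the sine of the azimuth angle:
import math

# Simplified free-field model: ITD = (ear separation / speed of sound) * sin(azimuth)
def itd_seconds(azimuth_deg, ear_separation_m=0.05, speed_of_sound_m_s=343.0):
    return ear_separation_m / speed_of_sound_m_s * math.sin(math.radians(azimuth_deg))

print(round(itd_seconds(30) * 1e6, 1), "microseconds")  # about 72.9 microseconds at 30 degrees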
Anatomy of the ear
Owls tend to have asymmetric ears, with the openings being placed just behind t |
https://en.wikipedia.org/wiki/Miracula | Miracula is a genus of parasitic protists that parasitize diatoms, containing the type species Miracula helgolandica. More recently, the species Miracula moenusica from the river Main in Frankfurt am Main, Miracula islandica from a shore in the north of Iceland, Miracula einbuarlaekurica from a streamlet in the north of Iceland, and Miracula blauvikensis from the shore at the research station Blávík in the east fjords of Iceland were added to the genus. It is the only genus in the family Miraculaceae, of uncertain taxonomic position within the Oomycetes. They are one of the most basal lineages in the phylogeny of Oomycetes.
Species
Miracula blauvikensis
Miracula helgolandica
Miracula islandica
Miracula moenusica
Miracula einbuarlaekurica |
https://en.wikipedia.org/wiki/Institut%20Laue%E2%80%93Langevin | The Institut Laue–Langevin (ILL) is an internationally financed scientific facility, situated on the Polygone Scientifique in Grenoble, France. It is one of the world centres for research using neutrons. Founded in 1967 and honouring the physicists Max von Laue and Paul Langevin, the ILL provides one of the most intense neutron sources in the world and the most intense continuous neutron flux in the world in the moderator region: 1.5×10¹⁵ neutrons per second per cm², with a thermal power of typically 58.3 MW.
The ILL neutron scattering facilities allow the analysis of the structure of conducting and magnetic materials for future electronic devices and the measurement of stresses in mechanical materials. They also allow investigations into macromolecular assemblies, particularly protein dynamics and biomolecular structure. The ILL is a world-renowned centre for nanoscale science.
History
The institute was founded by France and Germany, with the United Kingdom becoming the third major partner in 1973. These partner states provide, through Research Councils, the bulk of its funding. Ten other countries have since become partners. Scientists of institutions in the member states may apply to use the ILL facilities, and may invite scientists from other countries to participate. Experimental time is allocated by a scientific council involving ILL users. The use of the facility and travel costs for researchers are paid for by the institute. Commercial use, for which a fee is charged, is not subject to the scientific council review process. Over 750 experiments are completed every year, in fields including magnetism, superconductivity, materials engineering, and the study of liquids, colloids and biological substances such as proteins.
The high-flux research reactor produces neutrons through fission in a compact-core fuel element. Neutron moderators cool the neutrons to wavelengths usable experimentally. Neutrons are then directed at a suite of instruments to probe the struct |
https://en.wikipedia.org/wiki/Perl%C3%A9e | Perlée and perlage are French words for pearl pattern, a decorative metallic finish consisting of a pattern of small circles (pearls) applied to a surface by grinding. It is mostly used in the automotive and watchmaking industries as an indicator of expensive craftsmanship, particularly if applied by hand rather than machine.
Perlée in the watchmaking industry
Watch components that are given a perlée pattern may include the movement main plate and bridges, the inside of the watch case and the watch-case bottom. The perlée patterned parts of a wrist watch are commonly invisible from the outside unless the watch has a transparent casing or deliberately exposed internal parts.
Further grinding patterns used by watchmakers include the clavus pattern, also known by the French term oeil-de-perdrix (partridge eye).
Perlée in the automotive and aviation industries
The automotive industry has used perlée patterns on surfaces such as car dashboards since its infancy, but these days it is generally confined to luxury vehicles such as the Bugatti Veyron 16.4. Many aircraft of the Golden Age of Aviation had perlée engine-turning finishing used on their sheetmetal components, with one of the best-known examples being Charles Lindbergh's Spirit of St. Louis trans-Atlantic Ryan NYP aircraft of 1927.
See also
Engine turning
Guilloché
Visual motifs |
https://en.wikipedia.org/wiki/Sialuria | Sialuria is a group of disorders resulting in an accumulation of free sialic acid. One type, known as the Finnish type or Salla disease, has been described in northeastern Finland and is due to a mutation in the gene SLC17A5 on chromosome 6q14-15. The "French type" sialuria is a very rare condition presenting in infancy with failure to thrive, yellowish skin, a large liver, low blood count, recurrent chest infections, bowel upsets, dehydration and characteristic facial features. |
https://en.wikipedia.org/wiki/Heteroduplex%20analysis | Heteroduplex analysis (HDA) is a method in biochemistry that has been used since 1992 to detect point mutations in DNA (deoxyribonucleic acid). Heteroduplexes are dsDNA molecules that have one or more mismatched pairs, whereas homoduplexes are dsDNA molecules that are perfectly paired. This method of analysis depends upon the fact that heteroduplexes show reduced mobility relative to homoduplex DNA. Heteroduplexes are formed between different DNA alleles: in a mixture of amplified wild-type and mutant DNA, heteroduplexes are formed with mutant alleles and homoduplexes are formed with wild-type alleles. There are two types of heteroduplexes, depending on the type and extent of the mutation in the DNA. Small deletions or insertions create bulge-type heteroduplexes, which are stable and can be verified by electron microscopy. Single base substitutions create more unstable bubble-type heteroduplexes, which, because of their low stability, are difficult to visualize by electron microscopy. HDA is widely used for rapid screening for the 3 bp p.F508del deletion in the CFTR gene. |
https://en.wikipedia.org/wiki/Mucus | Mucus ( ) is a slippery aqueous secretion produced by, and covering, mucous membranes. It is typically produced from cells found in mucous glands, although it may also originate from mixed glands, which contain both serous and mucous cells. It is a viscous colloid containing inorganic salts, antimicrobial enzymes (such as lysozymes), immunoglobulins (especially IgA), and glycoproteins such as lactoferrin and mucins, which are produced by goblet cells in the mucous membranes and submucosal glands. Mucus serves to protect epithelial cells in the linings of the respiratory, digestive, and urogenital systems, and structures in the visual and auditory systems from pathogenic fungi, bacteria and viruses. Most of the mucus in the body is produced in the gastrointestinal tract.
Amphibians, fish, snails, slugs, and some other invertebrates also produce external mucus from their epidermis as protection against pathogens and to help in movement; in fish, mucus is also produced to line the gills. Plants produce a similar substance called mucilage, which is also produced by some microorganisms.
Respiratory system
In the human respiratory system, mucus is part of the airway surface liquid (ASL), also known as epithelial lining fluid (ELF), that lines most of the respiratory tract. The airway surface liquid consists of a sol layer termed the periciliary liquid layer and an overlying gel layer termed the mucus layer. The periciliary liquid layer is so named as it surrounds the cilia and lies on top of the surface epithelium. The periciliary liquid layer surrounding the cilia consists of a gel meshwork of cell-tethered mucins and polysaccharides. The mucus blanket aids in the protection of the lungs by trapping foreign particles before they enter them, in particular through the nose during normal breathing.
Mucus is made up of a fluid component of around 95% water, the mucin secretions from the goblet cells, and the submucosal glands (2–3% glycoproteins), proteoglycans (0.1–0.5%), |
https://en.wikipedia.org/wiki/Space%20Buddies | Space Buddies is a 2009 American science fiction comedy film. It is the third film in the Air Buddies franchise. It was released on February 3, 2009. Like Air Buddies and Snow Buddies, it was released directly on DVD and became the first one to be released on Blu-ray.
Plot
The film starts with Buddha and his owner, Sam, star-gazing. As a shooting star passes, Sam makes a wish that he can touch the Moon. The next day is the day of his school field trip to Vision Enterprises to watch a test launch of the Vision 1 spacecraft. However, since no pets are allowed to go, he has to leave Buddha at home. Buddha meets up with his siblings; Rosebud, Budderball, B-Dawg, and Mudbud, and invites them to come with him to go to see the test launch. They decide to hide in the school bus which soon arrives at the Vision Enterprises, and the dogs go to a space suit machine and put on space suits before following the students, who are being led by Dr. Finkel. The dogs get aboard the Space Shuttle Vision 1. At Mission Control in the Vision Enterprises, Pi confirms they are ready for launch. Meanwhile, the dogs take a close look around until they are sealed in the shuttle, which prepares for launch. Astro, who pilots the shuttle from Earth, launches the shuttle, and it flies to space.
At Mission Control, the humans realize the third tank of gas in the shuttle was never filled. With ten hours until the gas runs out, they look for solutions. They eventually decide to pilot the spacecraft to the old R.R.S.S. (Russian Research Space Station). They contact the cosmonaut living in the space station, named Yuri, telling him to refuel the Vision 1. As Vision 1 connects to the space station, the dogs decide to explore the space station, and they meet a dog called Sputnik who is under the care of Yuri. Sputnik explains that Yuri is quite content to stay in space, yet he wishes to go home. Yuri finds the dogs and becomes happy because the buddies can keep them company, so he traps the buddies in |
https://en.wikipedia.org/wiki/Flow%20chart%20language | Flow chart language (FCL) is a simple imperative programming language designed for the purposes of explaining fundamental concepts of program analysis and specialization, in particular, partial evaluation. The language was first presented in 1989 by Carsten K. Gomard and Neil D. Jones. It later resurfaced in their book with Peter Sestoft in 1993, and in John Hatcliff's lecture notes in 1998. The below describes FCL as it appeared in John Hatcliff's lecture notes.
FCL is an imperative programming language close to the way a Von Neumann computer executes a program. A program is executed sequentially by following a sequence of commands, while maintaining an implicit state, i.e. the global memory. FCL has no concept of procedures, but does provide conditional and unconditional jumps. FCL lives up to its name as the abstract call-graph of an FCL program is a straightforward flow chart.
An FCL program takes as input a finite series of named values as parameters, and produces a value as a result.
Syntax
We specify the syntax of FCL using Backus–Naur form.
An FCL program is a list of formal parameter declarations, an entry label, and a sequence of basic blocks:
<p> ::= "(" <x>* ")" "(" <l> ")" <b>+
Initially, the language only allows non-negative integer variables.
A basic block consists of a label, a list of assignments, and a jump.
<b> ::= <l> ":" <a>* <j>
An assignment assigns a variable to an expression. An expression is either a constant, a variable, or application of a built-in n-ary operator:
<a> ::= <x> ":=" <e>
<e> ::= <c> | <x> | <o> "(" <e>* ")"
Note, variable names occurring throughout the program need not be declared at the top of the program. The variables declared at the top of the program designate arguments to the program.
Since values can only be non-negative integers, so can constants. The particular list of operators is, in general, irrelevant, so long as the operators have no side effects (this includes raising exceptions, e.g. on division by 0):
<c> ::= "0" | "1" | "2" | ... |
https://en.wikipedia.org/wiki/Nekrasov%20matrix | In mathematics, a Nekrasov matrix or generalised Nekrasov matrix is a type of diagonally dominant matrix (i.e. one in which the diagonal elements are in some way greater than some function of the non-diagonal elements). Specifically if A is a generalised Nekrasov matrix, its diagonal elements are non-zero and the diagonal elements also satisfy,
where,
. |
https://en.wikipedia.org/wiki/MindRover | MindRover is a video game for PC, developed by CogniToy.
Gameplay
The game, which can be thought of as a successor to the Learning Company's Robot Odyssey, revolves around three activities:
Assembling virtual robots from a library of stock parts.
Programming the robots using a special graphical interface (referred to in the game as "wiring") with a paradigm based more on multicomponent circuitry construction than on traditional programming.
Participating in events such as robot battles and racing games with the newly programmed robots.
Availability
The game was developed for Microsoft Windows. Add-ons were available to control Lego Mindstorms robots.
The game was ported to Linux by Loki Software and Linux Game Publishing and to the Mac by MacPlay.
Development
The game had a budget of $500,000. In October 2000, CogniToy signed a contract with Tri Synergy to distribute the game.
Reception
The game received mostly positive reviews. Carla Harker reviewed the PC version of the game for Next Generation, rating it five stars out of five, and stated that "A truly amazing title for anyone looking for something unique and challenging." |
https://en.wikipedia.org/wiki/Alternating%20factorial | In mathematics, an alternating factorial is the absolute value of the alternating sum of the first n factorials of positive integers.
This is the same as their sum, with the odd-indexed factorials multiplied by −1 if n is even, and the even-indexed factorials multiplied by −1 if n is odd, resulting in an alternation of signs of the summands (or alternation of addition and subtraction operators, if preferred). To put it algebraically,
af(n) = sum from i = 1 to n of (−1)^(n − i) · i!,
or with the recurrence relation
af(n) = n! − af(n − 1),
in which af(1) = 1.
The first few alternating factorials are
1, 1, 5, 19, 101, 619, 4421, 35899, 326981, 3301819, 36614981, 442386619, 5784634181, 81393657019
For example, the third alternating factorial is 1! – 2! + 3!. The fourth alternating factorial is −1! + 2! − 3! + 4! = 19. Regardless of the parity of n, the last (nth) summand, n!, is given a positive sign, the (n – 1)th summand is given a negative sign, and the signs of the lower-indexed summands are alternated accordingly.
This pattern of alternation ensures the resulting sums are all positive integers. Changing the rule so that either the odd- or even-indexed summands are given negative signs (regardless of the parity of n) changes the signs of the resulting sums but not their absolute values.
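As a small illustration, the listed values can be reproduced with the recurrence af(n) = n! − af(n − 1), af(1) = 1, here sketched in Python:

from math import factorial

def alternating_factorial(n):
    af = 1                        # af(1)
    for i in range(2, n + 1):
        af = factorial(i) - af    # subtracting flips the signs of all earlier summands
    return af

print([alternating_factorial(n) for n in range(1, 9)])
# [1, 1, 5, 19, 101, 619, 4421, 35899], matching the values listed above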
proved that there are only a finite number of alternating factorials that are also prime numbers, since 3612703 divides af(3612702) and therefore divides af(n) for all n ≥ 3612702. , the known primes and probable primes are af(n) for
n = 3, 4, 5, 6, 7, 8, 10, 15, 19, 41, 59, 61, 105, 160, 661, 2653, 3069, 3943, 4053, 4998, 8275, 9158, 11164
Only the values up to n = 661 have been proved prime in 2006. af(661) is approximately 7.818097272875 × 101578.
Notes |
https://en.wikipedia.org/wiki/Non-vascular%20plant | Non-vascular plants are plants without a vascular system consisting of xylem and phloem. Instead, they may possess simpler tissues that have specialized functions for the internal transport of water.
Non-vascular plants include two distantly related groups:
Bryophytes, an informal group that taxonomists treat as three separate land-plant divisions, namely: Bryophyta (mosses), Marchantiophyta (liverworts), and Anthocerotophyta (hornworts). In all bryophytes, the primary plants are the haploid gametophytes, with the only diploid portion being the attached sporophyte, consisting of a stalk and sporangium. Because these plants lack lignified water-conducting tissues, they cannot become as tall as most vascular plants.
Algae, especially green algae. The algae consist of several unrelated groups. Only the groups included in the Viridiplantae are still considered relatives of land plants.
These groups are sometimes called "lower plants", referring to their status as the earliest plant groups to evolve, but the usage is imprecise since both groups are polyphyletic and may be used to include vascular cryptogams, such as the ferns and fern allies that reproduce using spores. Non-vascular plants are often among the first species to move into new and inhospitable territories, along with prokaryotes and protists, and thus function as pioneer species.
Non-vascular plants do not have a wide variety of specialized tissue types. Mosses and leafy liverworts have structures called phyllids that resemble leaves, but only consist of single sheets of cells with no internal air spaces, no cuticle or stomata, and no xylem or phloem. Consequently, phyllids are unable to control the rate of water loss from their tissues and are said to be poikilohydric. Some liverworts, such as Marchantia, have a cuticle, and the sporophytes of mosses have both cuticles and stomata, which were important in the evolution of land plants.
All land plants have a life cycle with an alternation of generatio |
https://en.wikipedia.org/wiki/Class%20automorphism | In mathematics, in the realm of group theory, a class automorphism is an automorphism of a group that sends each element to within its conjugacy class. The class automorphisms form a subgroup of the automorphism group. Some facts:
Every inner automorphism is a class automorphism.
Every class automorphism is a family automorphism and a quotientable automorphism.
Under a quotient map, class automorphisms go to class automorphisms.
Every class automorphism is an IA automorphism, that is, it acts as identity on the abelianization.
Every class automorphism is a center-fixing automorphism, that is, it fixes all points in the center.
Normal subgroups are characterized as subgroups invariant under class automorphisms.
For infinite groups, an example of a class automorphism that is not inner is the following: take the finitary symmetric group on countably many elements and consider conjugation by an infinitary permutation. This conjugation defines an outer automorphism on the group of finitary permutations. However, for any specific finitary permutation, we can find a finitary permutation whose conjugation has the same effect as this infinitary permutation. This is essentially because the infinitary permutation takes permutations of finite supports to permutations of finite support.
For finite groups, the classical example is a group of order 32 obtained as the semidirect product of the cyclic ring on 8 elements, by its group of units acting via multiplication. Finding a class automorphism in the stability group that is not inner boils down to finding a cocycle for the action that is locally a coboundary but is not a global coboundary.
Group theory
Group automorphisms |
https://en.wikipedia.org/wiki/Vaginogram | A vaginogram is a medical imaging method in which a radiocontrast agent is injected while X-ray pictures are taken, to visualize structures of the vagina. It has been used to visualize ureterovaginal fistulas. |
https://en.wikipedia.org/wiki/Scitopia | Scitopia.org is a free federated, vertical search portal that enables users to explore the collective content of 21 science and technology societies – the research most cited in scholarly work and patents – from a single search box on the open web. It aggregates the entire electronic libraries of its founders – societies in major science and technology disciplines. More than three million documents, including peer-reviewed journal content, spanning hundreds of years of scientific and technological discovery, and conference proceedings, are searched through this dedicated gateway.
In addition to the published works of its partners, scitopia.org also searches a database of approximately 50 million worldwide patents from the United States Patent and Trademark Office, the Japan Patent Office and the European Patent Office.
To access the content, visitors to scitopia.org use an interface developed by Deep Web Technologies. Most recently known for its work on science.gov, Deep Web Technologies has experience in the development and refinement of federated searching, particularly in the development of scientific portals.
The Beta version of Scitopia.org was released in June 2007. Scitopia came out of Beta in October 2007. As of January 5, 2012 Scitopia has officially closed and the federated search portal is no longer available.
See also
Academic databases and search engines |
https://en.wikipedia.org/wiki/Slowsort | Slowsort is a sorting algorithm. It is of humorous nature and not useful. It is a reluctant algorithm based on the principle of multiply and surrender (a parody formed by taking the opposites of divide and conquer). It was published in 1984 by Andrei Broder and Jorge Stolfi in their paper Pessimal Algorithms and Simplexity Analysis (a parody of optimal algorithms and complexity analysis).
Algorithm
Slowsort is a recursive algorithm.
It sorts in-place.
It is a stable sort. (It does not change the order of equal-valued keys.)
This is an implementation in pseudocode:
procedure slowsort(A[], start_idx, end_idx) // Sort array range A[start ... end] in-place.
    if start_idx ≥ end_idx then
        return
    middle_idx := floor( (start_idx + end_idx)/2 )
    slowsort(A, start_idx, middle_idx)    // (1.1)
    slowsort(A, middle_idx + 1, end_idx)  // (1.2)
    if A[end_idx] < A[middle_idx] then
        swap (A, end_idx, middle_idx)     // (1.3)
    slowsort(A, start_idx, end_idx - 1)   // (2)
Sort the first half, recursively. (1.1)
Sort the second half, recursively. (1.2)
Find the maximum of the whole array by comparing the results of 1.1 and 1.2, and place it at the end of the list. (1.3)
Sort the entire list (except for the maximum now at the end), recursively. (2)
An unoptimized implementation in Haskell (purely functional) may look as follows:
slowsort :: (Ord a) => [a] -> [a]
slowsort xs
    | length xs <= 1 = xs
    | otherwise      = slowsort xs' ++ [max llast rlast]  -- (2)
  where
    m     = length xs `div` 2
    l     = slowsort $ take m xs  -- (1.1)
    r     = slowsort $ drop m xs  -- (1.2)
    llast = last l
    rlast = last r
    xs'   = init l ++ min llast rlast : init r
Complexity
The runtime T(n) for Slowsort satisfies the recurrence T(n) = 2T(n/2) + T(n − 1) + c: each call recursively sorts both halves, performs a constant amount of comparing and swapping, and then recursively sorts all but the last element.
A lower asymptotic bound for T(n) in Landau notation is Ω(n^(log₂(n) / (2 + ε))) for any ε > 0.
Slowsort is therefore not in polynomial time. Even the best case is worse than Bubble sort. |
https://en.wikipedia.org/wiki/Substrate%20%28biology%29 | In biology, a substrate is the surface on which an organism (such as a plant, fungus, or animal) lives. A substrate can include biotic or abiotic materials and animals. For example, encrusting algae that live on a rock (their substrate) can themselves be a substrate for an animal that lives on top of the algae. Inert substrates are used as growing support materials in the hydroponic cultivation of plants. In biology, substrates are often activated by the nanoscopic process of substrate presentation.
In agriculture and horticulture
Cellulose substrate
Expanded clay aggregate (LECA)
Rock wool
Potting soil
Soil
In animal biotechnology
Requirements for animal cell and tissue culture
Requirements for animal cell and tissue culture are the same as described for plant cell, tissue and organ culture (In Vitro Culture Techniques: The Biotechnological Principles). Desirable requirements are (i) air conditioning of a room, (ii) a hot room with temperature recorder, (iii) a microscope room for carrying out microscopic work, where different types of microscopes should be installed, (iv) a dark room, (v) a service room, (vi) a sterilization room for sterilization of glassware and culture media, and (vii) a preparation room for media preparation, etc. In addition, storage areas should be provided where the following can be kept properly: (i) liquids - ambient (4-20°C), (ii) glassware - shelving, (iii) plastics - shelving, (iv) small items - drawers, (v) specialized equipment - cupboard, slow turnover, (vi) chemicals - sidled containers.
For cell growth
There are many types of vertebrate cells that require support for their growth in vitro; otherwise they will not grow properly. Such cells are called anchorage-dependent cells. For these cells, substrates of adhesive (e.g. plastic, glass, palladium, metallic surfaces) or non-adhesive (e.g. agar, agarose) types may be used, as discussed below:
Plastic as a substrate. Disposable plastics are a cheaper substrate as they are commonly made |
https://en.wikipedia.org/wiki/Region%20connection%20calculus | The region connection calculus (RCC) is intended to serve for qualitative spatial representation and reasoning. RCC abstractly describes regions (in Euclidean space, or in a topological space) by their possible relations to each other. RCC8 consists of 8 basic relations that are possible between two regions:
disconnected (DC)
externally connected (EC)
equal (EQ)
partially overlapping (PO)
tangential proper part (TPP)
tangential proper part inverse (TPPi)
non-tangential proper part (NTPP)
non-tangential proper part inverse (NTPPi)
From these basic relations, combinations can be built. For example, proper part (PP) is the union of TPP and NTPP.
Axioms
RCC is governed by two axioms.
for any region x, x connects with itself
for any region x, y, if x connects with y, y will connect with x
Remark on the axioms
The two axioms describe two features of the connection relation, but not the characteristic feature of the connect relation. For example, we can say that an object is less than 10 meters away from itself and that if object A is less than 10 meters away from object B, object B will be less than 10 meters away from object A. So, the relation 'less-than-10-meters' also satisfies the above two axioms, but does not talk about the connection relation in the intended sense of RCC.
Composition table
The composition table of RCC8 is as follows:
"*" denotes the universal relation, no relation can be discarded.
Usage example: if a TPP b and b EC c, (row 4, column 2) of the table says that a DC c or a EC c.
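As a minimal illustration of how the table is consulted, the following Python sketch encodes only the single composition entry quoted above; a full implementation would store one entry for every ordered pair of base relations:

RCC8 = {"DC", "EC", "EQ", "PO", "TPP", "TPPi", "NTPP", "NTPPi"}

# Partial composition table: (relation between a and b, relation between b and c)
# -> set of possible relations between a and c.
COMPOSITION = {
    ("TPP", "EC"): {"DC", "EC"},   # the usage example above: a TPP b and b EC c
}

def compose(r1, r2):
    # Fall back to "*", the universal relation, when the entry is not filled in here.
    return COMPOSITION.get((r1, r2), set(RCC8))

print(compose("TPP", "EC"))   # the two possibilities: DC or EC
print(compose("PO", "PO"))    # "*": no relation can be discarded by this partial table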
Examples
The RCC8 calculus is intended for reasoning about spatial configurations. Consider the following example: two houses are connected via a road. Each house is located on its own property. The first house possibly touches the boundary of the property; the second one surely does not. What can we infer about the relation of the second property to the road?
The spatial configuration can be formalized in RCC8 as the following constraint network:
house |
https://en.wikipedia.org/wiki/Many-one%20reduction | In computability theory and computational complexity theory, a many-one reduction (also called mapping reduction) is a reduction which converts instances of one decision problem (whether an instance is in A) to another decision problem (whether an instance is in B) using an effective function. The reduced instance is in the language B if and only if the initial instance is in the language A. Thus if we can decide whether instances are in the language B, we can decide whether instances are in the language A by applying the reduction and solving for B. Thus, reductions can be used to measure the relative computational difficulty of two problems. It is said that A reduces to B if, in layman's terms, B is at least as hard to solve as A. This means that any algorithm that solves B can also be used as part of a (otherwise relatively simple) program that solves A.
Many-one reductions are a special case and stronger form of Turing reductions. With many-one reductions, the oracle (that is, our solution for B) can be invoked only once at the end, and the answer cannot be modified. This means that if we want to show that problem A can be reduced to problem B, we can use our solution for B only once in our solution for A, unlike in Turing reductions, where we can use our solution for B as many times as needed in order to solve the membership problem for the given instance of A.
Many-one reductions were first used by Emil Post in a paper published in 1944. Later Norman Shapiro used the same concept in 1956 under the name strong reducibility.
Definitions
Formal languages
Suppose A and B are formal languages over the alphabets Σ and Γ, respectively. A many-one reduction from A to B is a total computable function f : Σ* → Γ* that has the property that each word w is in A if and only if f(w) is in B.
If such a function exists, one says that A is many-one reducible or m-reducible to B and writes A ≤m B.
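As a toy illustration (the languages and the reduction below are illustrative choices, not taken from this article), a decider for B together with a computable reduction f yields a decider for A, with the oracle for B consulted exactly once:

def decide_B(n):            # decider for B = the set of odd natural numbers
    return n % 2 == 1

def f(n):                   # total computable function: n is in A  iff  f(n) is in B
    return n + 1

def decide_A(n):            # decider for A = the set of even natural numbers
    return decide_B(f(n))   # the oracle for B is invoked once and its answer is returned unmodified

print([decide_A(n) for n in range(6)])   # [True, False, True, False, True, False]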
Subsets of natural numbers
Given two sets one says is many-one reducible to and writes
if there exists a total computab |
https://en.wikipedia.org/wiki/Evolutionary%20trap | The term evolutionary trap has retained several definitions associated with different biological disciplines.
Evolutionary biology
Within evolutionary biology, this term has been used sporadically to refer to situations in which an evolved (and presumably well adapted and successful) trait has become obsolete or maladaptive because the biophysical environment and/or competition has changed, but evolved complexities accumulated by prior adaptations now preclude any effective re-adaptation — as the organism can only build upon or "patch up" existing traits (which essentially have become inherited "baggage") rather than devolving, removing, or redesigning a trait — leaving the species struggling to keep up with natural selection and thus vulnerable to competitive disadvantage or even extinction.
In the 1991 BBC lecture series Growing Up in the Universe, British evolutionary biologist Richard Dawkins analogized the concept to a mountaineer climbing blindly upward (because "evolution has no foresights") while not being allowed to turn back downhill: the climber ends up trapped on one summit and cannot reach anywhere higher.
Ecology
Within behavioral and ecological sciences, evolutionary traps occur when rapid environmental change triggers organisms to make maladaptive behavioral decisions. While these traps may take place within any type of behavioral context (e.g. mate selection, navigation, nest-site selection), the most empirically and theoretically well-understood type of evolutionary trap is the ecological trap which represents maladaptive habitat selection behavior.
Witherington demonstrates an interesting case of a "navigational trap". Over evolutionary time, hatchling sea turtles have evolved the tendency to migrate toward the light of the moon upon emerging from their sand nests. However, in the modern world, this has resulted in them tending to orient towards bright beach-front lighting, which is a more intense light source than the moon. As a result, the |
https://en.wikipedia.org/wiki/Metyltetraprole | Metyltetraprole is a quinone outside inhibitor fungicide sold under the brand name Pavecto by its inventor, Sumitomo Chemical. It is the only tetrazolinone fungicide and the only one in the Fungicide Resistance Action Committee's subgroup 11A.
Development
Metyltetraprole was developed specifically to find an a.i. with the same mode of action (a QI) but with sufficiently different chemistry as to avoid "critical" QI resistance increasing around the world.
Target pathogens
Metyltetraprole is highly effective against Alternaria triticina.
Resistance
Developed because of increasing resistance to the main group of QIs. See §Development above.
Cross-resistance
It does not suffer cross-resistance with the resistance against group 11 fungicides conferred by the cytochrome b mutation G143A. Cross-resistance against F129L is unassessed.
Binding Mode
The structure of the tetrazolinone pharmacophore is very similar to the triazolone pharmacophore of an inhibitor developed by AgoEva, for which the binding mode has been elucidated in the structure deposited as 3L73 in the protein databank. |
https://en.wikipedia.org/wiki/Bomab | The BOttle MAnnequin ABsorber phantom was developed by Bush in 1949 (Bush 1949) and has since been accepted in North America as the industry standard (ANSI 1995) for calibrating whole body counting systems.
The phantom consists of 10 polyethylene bottles, either cylinders or elliptical cylinders, that represent the head, neck, chest, abdomen, thighs, calves, and arms. Each section is filled with a radioactive solution, in water, that has an amount of radioactivity proportional to the volume of the section. This simulates a homogeneous distribution of material throughout the body. The solution will also be acidified and contain a stable element carrier so that the radioactivity does not plate out on the container walls.
The phantom, which contains a known amount of radioactivity can be used to calibrate the whole body counter by relating the observed response to the known amount of radioactivity. As different radioactive materials emit different energies of gamma photons, the calibration has to be repeated to cover the expected energy range: usually 120 to 2,000 keV.
Examples of radioactive isotopes that are used for efficiency calibration include 57Co, 60Co, 88Y, 137Cs and 152Eu.
Although the phantom was designed to be used lying down, it is used in any orientation.
Other uses
Performance testing: BOMAB phantoms are sometimes used by performance testing organizations to test operating assay facilities. Phantoms, containing known quantities of radioactive material, are sent to assay facilities as blind samples.
Design characteristics: Phantoms can be used to evaluate the relative effect of size, shape and positioning on the performance of in vivo measurement equipment.
Background: A water filled BOMAB is often used to estimate the (blank) background for in vivo assay systems.
Detection Limits: A BOMAB filled with approximately 140 g of K-40, which is the nominal content in a 70 kg man, is sometimes used to estimate detection sensitivity of in vivo pers |
https://en.wikipedia.org/wiki/Spatial%20verification | Spatial verification is a technique by which similar locations can be identified in an automated way through a sequence of images. The general method involves identifying a correlation between certain points among sets of images, using techniques similar to those used for image registration.
The main problem is that outliers (points that do not fit or do not match the selected model) affect the least-squares adjustment (a numerical analysis technique, framed in mathematical optimization, which, given a set of ordered pairs of independent and dependent variables and a family of functions, tries to find the continuous function that best fits the data).
Advantages
Effective when one is able to find reliable features without clutter.
Good results for correspondence in specific instances.
Disadvantages
The scaling models.
Spatial verification cannot be used as post-processing.
Methods
The most widely used methods for spatial verification, which avoid errors caused by these outliers, are:
Random sample consensus (RANSAC)
RANSAC seeks to avoid the impact of outliers, which do not fit the model, by considering only inliers that match the model in question. If an outlier is chosen to calculate the current fit, then the resulting line will have little support from the rest of the points.
The algorithm is a loop that performs the following steps (illustrated in the sketch below):
From the entire input data set, take a random subset with which to estimate the model.
Compute the model from that subset. The model is estimated with standard linear algorithms.
Find the correspondences that match the estimated transformation.
If the model's error is small enough, it is accepted; and if the number of correspondences is large enough, the subset of points involved is called the consensus set. The estimated model is then re-computed over all correspondences.
The goal is to keep the model with the highest number of matches; the main problem is the number of times the process has to be repeated to obtain the best estimate of the model.
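A compact, generic sketch of such a RANSAC loop for fitting a 2D line is shown below; the threshold, the iteration count and the omission of a final re-estimation step are simplifications chosen for illustration:

import random

def ransac_line(points, iterations=200, threshold=0.1):
    best_model, best_inliers = None, []
    for _ in range(iterations):
        (x1, y1), (x2, y2) = random.sample(points, 2)   # minimal random subset
        if x1 == x2:
            continue                                    # skip degenerate samples
        a = (y2 - y1) / (x2 - x1)                       # estimate the model y = a*x + b
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < threshold]
        if len(inliers) > len(best_inliers):            # keep the model with the most support
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

pts = [(x, 2 * x + 1) for x in range(10)] + [(3.0, 40.0), (7.0, -5.0)]   # a line plus two outliers
model, inliers = ransac_line(pts)
print(model, len(inliers))   # approximately (2.0, 1.0) with 10 inliers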
RANSAC set in advance the |
https://en.wikipedia.org/wiki/Minibar | A minibar is a small refrigerator, typically an absorption refrigerator, in a hotel room or cruise ship stateroom. The hotel staff fill it with drinks and snacks for the guest to purchase during their stay. It is stocked with a precise inventory of goods, with a price list. The guest is charged for goods consumed when checking out of the hotel. Some newer minibars use infrared or other automated methods of recording purchases. These detect the removal of an item, and charge the guest's credit card right away, even if the item is not consumed. This is done to prevent loss of product, theft and lost revenue.
The minibar is commonly stocked with small bottles of alcoholic beverages, juice, bottled water, and soft drinks. There may also be candy, cookies, crackers, and other small snacks. Prices are generally very high compared to similar items purchased from a store, because the guest is paying for the convenience of immediate access and also the upkeep of the bar. Prices vary, but it is common for one can of non-alcoholic beverage to cost $6–10 USD. Due to the convenience of room service and the minibar, prices charged to the patron are much higher than the hotel's restaurant or tuck shop. As premium bottled water has become popular with guests since the 2000s, there is "ambient placement" of such chargeable products outside the minibar and in the guests' line of vision; for example "by placing [bottled] water on bedside tables, during the night, people are more likely to grab it than get up to get a glass of water".
The world's first minibar was introduced at the Hong Kong Hilton Hotel by manager Robert Arnold in 1974. In the months following its introduction in-room drink sales increased 500%, and the Hong Kong Hilton's overall annual revenue was boosted by 5%. The following year the Hilton group rolled out the minibar concept across all its hotels.
In recent years, as minibars become less and less popular with guests, hotels have been eliminating this feature f |
https://en.wikipedia.org/wiki/Scalability%20testing | Scalability testing is the testing of a software application to measure its capability to scale up or scale out in terms of any of its non-functional capability.
Performance, scalability and reliability testing are usually grouped together by software quality analysts.
The main goals of scalability testing are to determine the user limit for the web application and to ensure that the end-user experience, under a high load, is not compromised. One example is whether a web page can be accessed in a timely fashion, with a limited delay in response. Another goal is to check whether the server can cope, i.e. whether the server will crash if it is under a heavy load.
Depending on the application being tested, different parameters are tested. If a webpage is being tested, the highest possible number of simultaneous users would be tested. The attributes that are tested also depend on the application; these can include CPU usage, network usage or user experience.
Successful testing will reveal most of the issues, which could be related to the network, database or hardware/software.
Creating a scalability test
When creating a new application, it is difficult to accurately predict the number of users in 1, 2 or even 5 years. Although an estimate can be made, it is not a definite number. An issue with an increasing number of users is that it can create new areas of failure. For example, if you have 100,000 new visitors, it's not just access to the application that could be a problem; you might also experience issues with the database where you need to store all the data of these new customers.
Increment loads
This is why when creating a scalability test, it is important to scale up in increments. These steps can be split into small, medium and high loads.
We must scale up in increments as each stage tests a different aspect. Small loads ensure the system functions as it should on a basic level. Medium loads test that the system can function at its expected level. High loads tes |
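As a rough, hypothetical sketch of stepping load up in increments, the following Python snippet ramps the number of concurrent users and records average response times; the do_request function is a stand-in for a real call (e.g. an HTTP request) against the system under test:

import time
from concurrent.futures import ThreadPoolExecutor

def do_request(_):
    start = time.perf_counter()
    time.sleep(0.01)              # placeholder for real work against the target system
    return time.perf_counter() - start

def run_load_step(concurrent_users, requests_per_user=5):
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = list(pool.map(do_request, range(concurrent_users * requests_per_user)))
    return sum(latencies) / len(latencies)

for label, users in [("small", 5), ("medium", 50), ("high", 200)]:
    average = run_load_step(users)
    print(f"{label} load: {users} concurrent users, average response {average * 1000:.1f} ms")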
https://en.wikipedia.org/wiki/Irrational%20rotation | In the mathematical theory of dynamical systems, an irrational rotation is a map T_θ : [0, 1) → [0, 1), T_θ(x) = (x + θ) mod 1,
where θ is an irrational number. Under the identification of a circle with R/Z, or with the interval [0, 1] with the boundary points glued together, this map becomes a rotation of a circle by a proportion θ of a full revolution (i.e., an angle of 2πθ radians). Since θ is irrational, the rotation has infinite order in the circle group and the map T_θ has no periodic orbits.
Alternatively, we can use multiplicative notation for an irrational rotation by introducing the map T_θ : S¹ → S¹, T_θ(z) = z·e^(2πiθ).
The relationship between the additive and multiplicative notations is the group isomorphism
φ : ([0, 1), +) → (S¹, ·), φ(x) = e^(2πix).
It can be shown that φ is an isometry.
There is a strong distinction in circle rotations that depends on whether θ is rational or irrational. Rational rotations are less interesting examples of dynamical systems because if θ = a/b with a and b coprime integers, then T_θ^b(x) = x for every x. It can also be shown that
T_θ^i(x) ≠ x when 1 ≤ i < b.
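A small numerical sketch of the rotation x → (x + θ) mod 1 illustrates the distinction: with θ = 1/4 the orbit cycles with period 4, while with an irrational θ (approximated in floating point) the iterates never return exactly to the start and spread out over [0, 1):

import math

def orbit(theta, x0=0.0, steps=12):
    xs, x = [], x0
    for _ in range(steps):
        xs.append(x)
        x = (x + theta) % 1.0    # one application of the rotation
    return xs

print([round(x, 3) for x in orbit(1 / 4)])             # rational: a period-4 cycle
print([round(x, 3) for x in orbit(math.sqrt(2) - 1)])  # irrational: no repetition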
Significance
Irrational rotations form a fundamental example in the theory of dynamical systems. According to the Denjoy theorem, every orientation-preserving C²-diffeomorphism of the circle with an irrational rotation number θ is topologically conjugate to T_θ. An irrational rotation is a measure-preserving ergodic transformation, but it is not mixing. The Poincaré map for the dynamical system associated with the Kronecker foliation on a torus with angle θ is the irrational rotation by θ. C*-algebras associated with irrational rotations, known as irrational rotation algebras, have been extensively studied.
Properties
If θ is irrational, then the orbit of any element of [0, 1) under the rotation T_θ is dense in [0, 1). Therefore, irrational rotations are topologically transitive.
Irrational (and rational) rotations are not topologically mixing.
Irrational rotations are uniquely ergodic, with the Lebesgue measure serving as the unique invariant probability measure.
Suppose . Since is ergodic,.
Generalizations
Circle rotations are examples of group translations.
For a general ori |
https://en.wikipedia.org/wiki/Logical%20machine | A logical machine or logical abacus is a tool containing a set of parts that uses energy to perform formal logic operations through the use of truth tables. Early logical machines were mechanical devices that performed basic operations in Boolean logic. The principal examples of such machines are those of William Stanley Jevons (logic piano), John Venn, and Allan Marquand.
Contemporary logical machines are computer-based electronic programs that perform proof assistance with theorems in mathematical logic. In the 21st century, these proof assistant programs have given birth to a new field of study called mathematical knowledge management.
Origins
The earliest logical machines were mechanical constructs built in the late 19th century. William Stanley Jevons invented the first logical machine in 1869, the logic piano. In 1883, Allan Marquand invented a new logical machine that performed the same operations as Jevons' logic piano but with improvements in design simplification, portability, and input-output controls.
A logical abacus is constructed to show all the possible combinations of a set of logical terms with their negatives, and, further, the way in which these combinations are affected by the addition of attributes or other limiting words, i.e., to simplify mechanically the solution of logical problems. These instruments are all more or less elaborate developments of the "logical slate", on which were written in vertical columns all the combinations of symbols or letters which could be made logically out of a definite number of terms. These were compared with any given premises, and those which were incompatible were crossed off. In the abacus the combinations are inscribed each on a single slip of wood or similar substance, which is moved by a key; incompatible combinations can thus be mechanically removed at will, in accordance with any given series of premises.
See also
Allan Marquand
William Stanley Jevons
Logics for computability |
https://en.wikipedia.org/wiki/List%20of%20order%20theory%20topics | Order theory is a branch of mathematics that studies various kinds of objects (often binary relations) that capture the intuitive notion of ordering, providing a framework for saying when one thing is "less than" or "precedes" another.
An alphabetical list of many notions of order theory can be found in the order theory glossary. See also inequality, extreme value and mathematical optimization.
Overview
Partially ordered set
Preorder
Totally ordered set
Total preorder
Chain
Trichotomy
Extended real number line
Antichain
Strict order
Hasse diagram
Directed acyclic graph
Duality (order theory)
Product order
Distinguished elements of partial orders
Greatest element (maximum, top, unit), Least element (minimum, bottom, zero)
Maximal element, minimal element
Upper bound
Least upper bound (supremum, join)
Greatest lower bound (infimum, meet)
Limit superior and limit inferior
Irreducible element
Prime element
Compact element
Subsets of partial orders
Cofinal and coinitial set, sometimes also called dense
Meet-dense set and join-dense set
Linked set (upwards and downwards)
Directed set (upwards and downwards)
centered and σ-centered set
Net (mathematics)
Upper set and lower set
Ideal and filter
Ultrafilter
Special types of partial orders
Completeness (order theory)
Dense order
Distributivity (order theory)
modular lattice
distributive lattice
completely distributive lattice
Ascending chain condition
Infinite descending chain
Countable chain condition, often abbreviated as ccc
Knaster's condition, sometimes denoted property (K)
Well-orders
Well-founded relation
Ordinal number
Well-quasi-ordering
Completeness properties
Semilattice
Lattice
(Directed) complete partial order, (d)cpo
Bounded complete
Complete lattice
Knaster–Tarski theorem
Infinite divisibility
Orders with further algebraic operations
Heyting algebra
Relatively complemented lattice
Complete Heyting algebra
Pointless topology
MV-algebra
Ockham algebras:
Stone algebra
De Morgan algebra
Kleene alg |
https://en.wikipedia.org/wiki/SINIX | SINIX is a discontinued variant of the Unix operating system from Siemens Nixdorf Informationssysteme. SINIX supersedes SIRM OS and Pyramid Technology's DC/OSx. Following X/Open's acceptance that its requirements for the use of the UNIX trademark were met, version 5.44 and subsequent releases were published as Reliant UNIX by Fujitsu Siemens Computers.
Features
In some versions of SINIX (5.2x) the user could emulate the behaviour of a number of different versions of Unix (known as universes). These included System V.3, System III or BSD. Each universe had its own command set, libraries and header files.
Xenix-based SINIX
The original SINIX was a modified version of Xenix and ran on Intel 80186 processors. For some years Siemens used the NSC-32x32 (up to Sinix 5.2x) and Intel 80486 CPUs (Sinix 5.4x - non MIPS) in their MX-Series.
System V-based SINIX
Later versions of SINIX based on System V were designed for the:
SNI RM-200, RM-300, RM-400 and RM-600 servers running on the MIPS processor (SINIX-N, SINIX-O, SINIX-P, SINIX-Y)
SNI PC-MX2, MX300-05/-10/-15/-30, Siemens MX500-75/-85 running NS320xx (SINIX-H)
PC-MXi, MX300-45 on the Intel X86 processor (SINIX-L)
SNI WX-200 and other IBM-compatible i386 PCs on the Intel 80386 and newer processors (SINIX-Z)
The last release under the SINIX name was version 5.43 in 1995.
Reliant UNIX
The last Reliant UNIX versions were registered as UNIX 95 compliant (XPG4 hard branding).
The last release of Reliant UNIX was version 5.45.
See also
BS2000
VM2000
External links
Siemens Business Services - SINIX patches and support
The SINIX operating system
Sven Mascheck, SINIX V5.20 Universes
MIPS operating systems
UNIX System V |
https://en.wikipedia.org/wiki/Negative%20multinomial%20distribution | In probability theory and statistics, the negative multinomial distribution is a generalization of the negative binomial distribution (NB(x0, p)) to more than two outcomes.
As with the univariate negative binomial distribution, if the parameter is a positive integer, the negative multinomial distribution has an urn model interpretation. Suppose we have an experiment that generates m+1≥2 possible outcomes, {X0,...,Xm}, each occurring with non-negative probabilities {p0,...,pm} respectively. If sampling proceeded until n observations were made, then {X0,...,Xm} would have been multinomially distributed. However, if the experiment is stopped once X0 reaches the predetermined value x0 (assuming x0 is a positive integer), then the distribution of the m-tuple {X1,...,Xm} is negative multinomial. These variables are not multinomially distributed because their sum X1+...+Xm is not fixed, being a draw from a negative binomial distribution.
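The stopping rule described above can be simulated directly; in the following Python sketch the probabilities and the stopping count x0 are arbitrary illustrative values:

import random

def sample_negative_multinomial(x0, probs):
    # probs = [p0, p1, ..., pm]; sample outcomes until outcome 0 has occurred x0 times,
    # then report the counts of outcomes 1..m (a draw from the negative multinomial).
    counts = [0] * len(probs)
    while counts[0] < x0:
        outcome = random.choices(range(len(probs)), weights=probs)[0]
        counts[outcome] += 1
    return counts[1:]

print(sample_negative_multinomial(3, [0.2, 0.5, 0.3]))   # e.g. [7, 4]; varies from run to run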
Properties
Marginal distributions
If m-dimensional x is partitioned as follows
and accordingly
and let
The marginal distribution of is . That is, the marginal distribution is also negative multinomial, with the corresponding components removed and the remaining p's properly scaled so as to add to one.
The univariate marginal is said to have a negative binomial distribution.
Conditional distributions
The conditional distribution of given is . That is,
Independent sums
If and If are independent, then
. Similarly and conversely, it is easy to see from the characteristic function that the negative multinomial is infinitely divisible.
Aggregation
If
then, if the random variables with subscripts i and j are dropped from the vector and replaced by their sum,
This aggregation property may be used to derive the marginal distribution of mentioned above.
Correlation matrix
The entries of the correlation matrix are
Parameter estimation
Method of Moments
If we let the mean vector of the negative multinomial be
and covariance matrix
then it i |
https://en.wikipedia.org/wiki/Panda%20Cloud%20Antivirus | Panda Cloud Antivirus is an antivirus software developed by Panda Security; a free and a paid version are available. It is cloud-based in the sense that files are scanned on a remote server without using the processing power of the user's machine. The cloud technology is based on Panda's Collective Intelligence. It can run constantly, providing protection against viruses and malicious websites but slowing the system to some extent, or do a system scan.
Features
According to Panda Security, Panda Cloud Antivirus is able to detect viruses, trojans, worms, spyware, dialers, hacking tools, hacker and other security risks.
Panda Cloud Antivirus relies on its "Collective Intelligence" and the cloud for up-to-date information. It normally uses an Internet connection to access up-to-date information; if the Internet cannot be accessed, it will use a local cache of "the most common threats in circulation".
Reviews
An April 2009 review found Panda Cloud Antivirus 1.0 to be clean, fast, simple, easy to use, and with good detection rates. The same review scored Panda 100.00% in malware detection and 100.0% in malicious URL detection. Its overall score was 100%, a strong protection factor considering it is software.
When version 1.0 was released on November 10, 2009, PC Magazine reviewed Panda Cloud Antivirus and gave it an Editor's Choice Award for Best AV.
TechRadar's review states "We think that Panda Cloud Antivirus is best viewed as a defense tool rather than a utility for cleaning up a system that's already riddled with infection."
License
The free edition of Panda Cloud Antivirus is released under a license. Its usage is exclusively allowed for private households, state schools, non-governmental and non-profit organizations. |
https://en.wikipedia.org/wiki/News.admin.net-abuse.email | news.admin.net-abuse.email (sometimes abbreviated nanae or n.a.n-a.e, and often incorrectly spelled with a hyphen in "email") is a Usenet newsgroup devoted to discussion of the abuse of email systems, specifically through email spam and similar attacks. According to a timeline compiled by Keith Lynch, news.admin.net-abuse.email was the first widely available electronic forum for discussing spam.
Steve Linford, the founder of The Spamhaus Project, sometimes posts in the newsgroup.
Topics covered
In its original charter the following examples of "on-topic" areas were listed:
Chain letters
DoS attacks
Email address list
Email bomb
Email viruses
Filtering software
Large-scale mailings
Listserv bombs
Mailing list abuse
Pyramid schemes
Unsolicited email
Eventually, by mutual consent, it was also determined that the following were also "on-topic":
Cats (on a superficial/anecdotal level)
History
The group was officially proposed (i.e. its RFD posted) by Tim Skirvin (tskirvin) on July 9, 1996 alongside a number of other groups in order to reduce the load on the two net abuse groups at that time, news.admin.net-abuse.announce and news.admin.net-abuse.misc.
Later that month it went to vote and passed 451 to 28.
In September 2002 it was proposed that a subgroup, news.admin.net-abuse.email.blocklists, be created.
NANAEisms
Over time, some (more or less) NANAE-specific terms were coined:
404-compliant A website that has been terminated by its hosting provider for terms of service violation is said to be "404-compliant", a reference to the 404 "not found" status code in HTTP and a parody of spammers claiming their spam is 301 compliant, referring to a bill that never made it into a law.
Auto-ignore The automated response from an ISP's abuse desk, when it is believed that sending out the automated response is the only action the ISP will take.
Black hat An ISP that enables spamming, for example a hosting provider that does not act upon spam complaints. Named after old w |
https://en.wikipedia.org/wiki/Dual-ported%20RAM | Dual-ported RAM (DPRAM) is a type of random-access memory that allows multiple reads or writes to occur at the same time, or nearly the same time, unlike single-ported RAM which allows only one access at a time.
Examples
Video RAM (VRAM) is a common form of dual-ported dynamic RAM mostly used for video memory, allowing the central processing unit (CPU) to draw the image at the same time the video hardware is reading it out to the screen.
Apart from VRAM, most other types of dual-ported RAM are based on static RAM technology.
Most CPUs implement the processor registers as a small dual-ported or multi-ported RAM.
See also
Register file |
https://en.wikipedia.org/wiki/Stirling%20number | In mathematics, Stirling numbers arise in a variety of analytic and combinatorial problems. They are named after James Stirling, who introduced them in a purely algebraic setting in his book Methodus differentialis (1730). They were rediscovered and given a combinatorial meaning by Masanobu Saka in 1782.
Two different sets of numbers bear this name: the Stirling numbers of the first kind and the Stirling numbers of the second kind. Additionally, Lah numbers are sometimes referred to as Stirling numbers of the third kind. Each kind is detailed in its respective article, this one serving as a description of relations between them.
A common property of all three kinds is that they describe coefficients relating three different sequences of polynomials that frequently arise in combinatorics. Moreover, all three can be defined as the number of partitions of n elements into k non-empty subsets, where each subset is endowed with a certain kind of order (no order, cyclical, or linear).
Notation
Several different notations for Stirling numbers are in use. Ordinary (signed) Stirling numbers of the first kind are commonly denoted s(n, k).
Unsigned Stirling numbers of the first kind, which count the number of permutations of n elements with k disjoint cycles, are denoted c(n, k) = |s(n, k)|, or with a bracket notation analogous to binomial coefficients.
Stirling numbers of the second kind, which count the number of ways to partition a set of n elements into k nonempty subsets, are denoted S(n, k), or with an analogous brace notation.
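As a small illustration of these two counting interpretations, the following sketch computes both kinds from their standard recurrences (the recurrences are standard facts, not taken from this excerpt):

from functools import lru_cache

@lru_cache(maxsize=None)
def stirling1_unsigned(n, k):
    # Number of permutations of n elements with k disjoint cycles.
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return (n - 1) * stirling1_unsigned(n - 1, k) + stirling1_unsigned(n - 1, k - 1)

@lru_cache(maxsize=None)
def stirling2(n, k):
    # Number of partitions of an n-element set into k non-empty subsets.
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

print([stirling1_unsigned(4, k) for k in range(5)])   # [0, 6, 11, 6, 1]
print([stirling2(4, k) for k in range(5)])            # [0, 1, 7, 6, 1]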
Abramowitz and Stegun use an uppercase S and a blackletter 𝔖, respectively, for the first and second kinds of Stirling number. The notation of brackets and braces, in analogy to binomial coefficients, was introduced in 1935 by Jovan Karamata and promoted later by Donald Knuth. (The bracket notation conflicts with a common notation for Gaussian coefficients.) The mathematical motivation for this type of notation, as well as additional Stirling number formulae, may be found on the page for Stirling numbers and exponential generating functions.
Another infrequent notation is and |
https://en.wikipedia.org/wiki/Norman%20Margolus | Norman H. Margolus (born 1955) is a Canadian-American physicist and computer scientist, known for his work on cellular automata and reversible computing. He is a research affiliate with the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology.
Education and career
Margolus received his Ph.D. in physics in 1987 from the Massachusetts Institute of Technology (MIT) under the supervision of Edward Fredkin. He founded and was chief scientist for Permabit, an information storage device company.
Research contributions
Margolus was one of the organizers of a seminal research meeting on the connections between physics and computation theory, held on Mosquito Island in 1982. He is known for inventing the block cellular automaton and the Margolus neighborhood for block cellular automata, which he used to develop cellular automaton simulations of billiard-ball computers.
In the same work, Margolus also showed that the billiard ball model could be simulated by a second-order cellular automaton, a different type of cellular automaton invented by his thesis advisor, Edward Fredkin. These two simulations were among the first cellular automata that were both reversible (able to be run backwards as well as forwards for any number of time steps, without ambiguity) and universal (able to simulate the operations of any computer program); this combination of properties is important in low-energy computing, as it has been shown that the energy dissipation of computing devices may be made arbitrarily small if and only if they are reversible.
In connection with this issue, Margolus and his co-author Lev B. Levitin proved the Margolus–Levitin theorem showing that the speed of any computer is limited by the fundamental laws of physics to be at most proportional to its energy use; this implies that ultra-low-energy computers must run more slowly than conventional computers.
With Tommaso Toffoli, Margolus developed the CAM-6 cellular aut |
https://en.wikipedia.org/wiki/Caddy%20%28hardware%29 | In computer hardware, a caddy is a container used to hold some medium, such as a CD-ROM. If the medium is a hard disk drive, the caddy is also referred to as a disk enclosure. Its functionality is similar to that of the 3.5" floppy disk's jacket.
The purpose of a disk caddy is to protect the disk from damage when handling; its use dates back to at least the Capacitance Electronic Disc in 1981, and they were used in initial versions of Blu-ray Discs, though as a cost-saving measure newer versions use hard-coating technology to prevent scratches and do not need a caddy.
Caddies may be an integral part of the medium, as in some DVD-RAM discs, or separately attached.
Examples
Caddies date at least to the Capacitance Electronic Disc, which used a caddy from 1981 to protect the grooves of the disc.
While caddies have become obsolete, some websites still sell them, although they have become quite expensive.
Cartridges
In addition to caddies that serve purely a storage purpose, there are also ones that are designed to be loaded directly for data access, usually via a shutter.
Some early CD-ROM drives used a mechanism where CDs had to be inserted into special cartridges, somewhat similar in appearance to a jewel case. Although the idea behind this—a tougher plastic shell to protect the disc from damage—was sound, it did not gain wide acceptance among disc manufacturers. Consumers also eschewed the intended and pricey use, which required each disc to be protected with a caddy for its full useful life, preferring to only buy one caddy and transfer the discs between their traditional storage jewel cases and the caddy when in use, then the reverse when finished.
Drives that used the caddy format required "bare" discs to be placed into a caddy before use, making them less convenient to use. Drives that worked this way were referred to as caddy drives or caddy load(ing), but from about 1994 most computer manufacturers moved to tray-loading, or slot-loading drives.
Th |
https://en.wikipedia.org/wiki/Poppers | Popper is a slang term given broadly to recreational drugs of the chemical class called alkyl nitrites that are inhaled. They act on the body as vasodilators. The most widely sold products include the original isoamyl nitrite, isopentyl nitrite, and isopropyl nitrite. Isobutyl nitrite is also widely used but is banned in the European Union. In some countries, poppers are labeled or packaged as room deodorizers, leather polish, nail polish remover, or videotape head cleaner to evade anti-drug laws.
Popper use has a relaxation effect on involuntary smooth muscles, such as those in the throat and anus. It is used for practical purposes to facilitate anal sex by increasing blood flow and relaxing sphincter muscles. The drug is also used for recreational drug purposes, typically for the "high" or "rush" that the drug can create, and to enhance sexual pleasure in general.
In popular culture, poppers have been part of club culture from the mid-1970s disco scene and surged in popularity in the 1980s and 1990s rave scene.
History
19th-century discovery
The French chemist Antoine Jérôme Balard synthesized amyl nitrite in 1844. Sir Thomas Lauder Brunton, a Scottish physician born in the year of amyl nitrite's first synthesis, documented its clinical use to treat angina pectoris in 1867 when patients experiencing chest pains would experience complete relief after inhalation. Brunton was inspired by earlier work with the same agent, performed by Arthur Gamgee and Benjamin Ward Richardson. Brunton reasoned that the angina sufferer's pain and discomfort could be reduced by administering amyl nitrite—to dilate the coronary arteries of patients, thus improving blood flow to the heart muscle.
Amyl nitrites were originally enclosed in a glass mesh called "pearls". The usual administration of these pearls was done by crushing them between the fingers, followed by a popping sound. This administration process seems to be the origin of the slang term "poppers". It was then administere |
https://en.wikipedia.org/wiki/XX%20male%20syndrome | XX male syndrome, also known as de la Chapelle syndrome, is a rare congenital intersex condition in which an individual with a 46,XX karyotype (otherwise associated with females) has phenotypically male characteristics that can vary among cases. Synonyms include 46,XX testicular difference of sex development (46,XX DSD), 46,XX sex reversal, nonsyndromic 46,XX testicular DSD, and XX sex reversal.
In 90 percent of these individuals, the syndrome is caused by the Y chromosome's SRY gene, which triggers male reproductive development, being atypically included in the crossing over of genetic information that takes place between the pseudoautosomal regions of the X and Y chromosomes during meiosis in the father. When the X with the SRY gene combines with a normal X from the mother during fertilization, the result is an XX male. Less common are SRY-negative XX males, which can be caused by a mutation in an autosomal or X chromosomal gene. The masculinization of XX males is variable.
This syndrome is diagnosed through various detection methods and occurs in approximately 1:20,000 newborn males, making it much less common than Klinefelter syndrome. Treatment is medically unnecessary, although some individuals choose to undergo treatments to make them appear more male or female. The alternative name for XX male syndrome refers to Finnish scientist Albert de la Chapelle, who studied the condition and its etiology.
Signs and symptoms
The appearance of XX males can fall into one of three categories: 1) males that have normal internal and external genitalia, 2) males with external ambiguities, and 3) males that have both internal and external genital ambiguities. External genital ambiguities can include hypospadias, micropenis, and clitoromegaly. Typically, the appearance of XX males differs from that of an XY male in that they are smaller in height and weight. Most XX males have small testes, and have an increase in maldescended testicles compared to XY males. All are believe |
https://en.wikipedia.org/wiki/MACD%20operations | MACD operations are basic actions (Move, Add, Change, Delete) taken by computer network or telecom service agents in the support of hardware and services. It can also refer to the "hours" spent and billed doing those kinds of support tasks.
See also
Call center
Customer service
Technical support |
https://en.wikipedia.org/wiki/Expert360 | Expert360 is an online marketplace co-founded by Bridget Loudon and Emily Yue and headquartered in Sydney, Australia. Expert360 acts as a digital network for matching independent business consultants with clients (companies, organizations) for short or long-term project work. The company is best known for its innovative approach to the local and international freelance marketplace.
Description
Expert360's online platform is structured like a standard freelance marketplace as it provides the tools to potential employers to post and manage jobs of interest to independent consultants in the upper tier employment market. Its main consultant base is made of highly qualified workers with experience in senior or executive positions in large and stable corporations or investment firms. The marketplace also has junior representatives from management consulting and investment firms.
Expert360's clients range from small-medium businesses to enterprises such as QSHR, Woolworths and Telstra, as well as consulting and investment firms.
History
Expert360 launched on July 1, 2013. The company's co-founders, Bridget Loudon and Emily Yue, raised $1 million (AUD) from investors in their first funding round at the end of 2013.
In 2015, Expert360 closed an oversubscribed capital raising round of $4.1 million (AUD), backed by Russian investment fund Frontier Ventures, Australian technology fund Rampersand and several Australian angel investors. Allan Moss AO is also a notable investor of Expert360. On March 30, 2016, the company announced that it was opening an office in New York City in order to expand its reach in the United States.
See also
Freelance marketplace
Freelancers Union
Independent contractor
Mercenary
Misclassification of employees as independent contractors
Recruitment advertising
Self-employment |
https://en.wikipedia.org/wiki/Domain.com | Domain.com is a domain registrar and web hosting company headquartered in Jacksonville, Florida, and is a subsidiary of Newfold Digital.
History
Domain.com's origins existed as part of the Dotster brand founded by George DeCarlo in 1998. A graduate of the University of Portland, DeCarlo launched Dotster as a project of the Columbia Analytical Services before being purchased by Baker Capital in 2004.
In 2005, Dotster introduced a new domain technology which provided relevant search results based on domains or keywords entered by its users. It was awarded the Domain Pioneer Award from Verisign at the "25 Years of .com Gala" in 2010.
In 2011, Dotster and its subsidiaries, My Domain and Netfirms, were acquired by Endurance International Group. Among the domain names owned by Dotster was www.domain.com, which was determined by leadership to be the strongest branding for their attempt to put more emphasis on the domain registration growth. In 2012, Dotster began migrating domain accreditation to Domain.com, LLC, making it the official registrar for the company's domain business.
Services
Domain.com currently powers more than 1.2 million websites worldwide. Although known predominantly as a domain registrar, the company also offers shared hosting, WordPress hosting, and SSL certificates. It is also responsible for launching the .xyz top-level domain to increase the number of short, brandable URLs available to the public. |
https://en.wikipedia.org/wiki/Mandibular%20nerve | In neuroanatomy, the mandibular nerve (V3) is the largest of the three divisions of the trigeminal nerve, the fifth cranial nerve (CN V). Unlike the other divisions of the trigeminal nerve (ophthalmic nerve, maxillary nerve), which contain only afferent fibers, the mandibular nerve contains both afferent and efferent fibers. These nerve fibers innervate structures of the lower jaw and face, such as the tongue, lower lip, and chin. The mandibular nerve also innervates the muscles of mastication.
Structure
Course
The large sensory root of mandibular nerve emerges from the lateral part of the trigeminal ganglion and exits the cranial cavity through the foramen ovale. The motor root (Latin: radix motoria s. portio minor), the small motor root of the trigeminal nerve, passes under the trigeminal ganglion and through the foramen ovale to unite with the sensory root just outside the skull.
The mandibular nerve immediately passes between tensor veli palatini, which is medial, and lateral pterygoid, which is lateral, and gives off a meningeal branch (nervus spinosus) and the nerve to medial pterygoid from its medial side. The nerve then divides into a small anterior division and a large posterior division.
Branches
The mandibular nerve gives off the following branches:
From the main trunk (before the division):
meningeal branch (nervus spinosus) (sensory)
medial pterygoid nerve (motor)
From the anterior division:
masseteric nerve (mixed)
deep temporal nerves (mixed)
buccal nerve (sensory)
lateral pterygoid nerve (motor)
From the posterior division:
auriculotemporal nerve (sensory)
lingual nerve (sensory)
inferior alveolar nerve (mixed)
mylohyoid nerve (motor)
incisive branch (sensory)
mental nerve (sensory)
Distribution
Anterior Division
(Motor Innervation - Muscles of mastication)
Masseteric nerve
Masseter muscle
Medial pterygoid nerve
Medial pterygoid muscle
Tensor tympani muscle
Tensor veli palatini (via tensor veli palatini branch)
Lateral pter |
https://en.wikipedia.org/wiki/Pib2 | Phosphatidylinositol 3-phosphate-binding protein 2 (Pib2) is a yeast protein involved in the regulation of TORC1 signaling and lysosomal membrane permeabilization. It is essential for the reactivation of TORC1 following exposure to rapamycin or nutrient starvation.
Discovery
Pib2 was first identified as a FYVE domain-containing protein able to bind phosphatidylinositol 3-phosphate (PI3P). Pib2 was later identified in a screen for rapamycin sensitivity, along with several other TORC1 regulatory proteins (including Ego1, Gtr1, Gtr2, and other key TORC1 related proteins).
Structure
Pib2 is a 70.6 kDa protein with 635 amino acids (UniProt P53191). Pib2 has 5 weakly conserved motifs among fungi and 2 universally conserved motifs. The partially conserved motifs are found in the N-terminal region of the protein and are generally referred to as regions A-E. The universally conserved motifs include a phosphatidylinositol 3-phosphate (PI3P)-binding FYVE domain and a short tail motif at the C-terminus.
Mammalian homologs
Pib2 has 2 mammalian homologs, Phafin1 (also known as LAPF or PLEKHF1) and Phafin2 (EAPF or PLEKHF2). The phafin proteins each have a PH (pleckstrin homology) domain and FYVE domain. Phafin1 also has a tail motif similar to that of Pib2. These proteins have not been shown to be involved in the regulation of mammalian TORC1 signaling but have been shown to be involved in related processes.
Function
TORC1 regulation
In Saccharomyces cerevisiae, Pib2 has been shown to be involved in regulating TORC1 signaling. Pib2 is found at the yeast vacuole and endosomes. The PI3P binding FYVE domain of Pib2 is key for this localization. Pib2 also interacts with some TORC1 components, including Kog1 and Tor1, and has been shown to be necessary for TORC1 reactivation following inhibition by rapamycin or nutrient starvation. Additionally, Pib2 is essential for TORC1 reactivation by stimulation with leucine and glutamine.
In terms of TORC1 reactivation, it has been o |
https://en.wikipedia.org/wiki/JBoss%20Enterprise%20Application%20Platform | The JBoss Enterprise Application Platform (or JBoss EAP) is a subscription-based/open-source Java EE-based application server runtime platform used for building, deploying, and hosting highly transactional Java applications and services. It is developed and maintained by Red Hat and is part of Red Hat's Enterprise Middleware portfolio of software. Because it is Java-based, the JBoss application server operates across platforms; it is usable on any operating system that supports Java. JBoss Enterprise Application Platform was originally called JBoss and was developed by the eponymous company JBoss, acquired by Red Hat in 2006.
Product components and features
Red Hat's latest JBoss EAP version is 7, with Cumulative Patches 2 and Cumulative Patches 3 (JBoss EAP 7.2 and JBoss EAP 7.3, respectively).
Key features:
Eclipse-based Integrated Development Environment (IDE) is available using JBoss Developer Studio
Supports Java EE and Web Services standards
Enterprise Java Beans (EJB)
Java persistence using Hibernate
Object request broker (ORB) using JacORB for interoperability with CORBA objects
JBoss Seam framework, including Java annotations to enhance POJOs, and including JBoss jBPM
JavaServer Faces (JSF), including RichFaces
Web application services, including Apache Tomcat for JavaServer Pages (JSP) and Java Servlets
Caching, clustering, and high availability are provided by the subsystem Infinispan (formerly JBoss Cache)
EJB that includes JNDI and RMI
Security services, including Java Authentication and Authorization Service (JAAS) and pluggable authentication modules (PAM)
Web Services and interoperability, including JAX-RPC, JAX-WS, many WS-* standards, and MTOM/XOP
Integration and messaging services, including J2EE Connector Architecture (JCA), Java Database Connectivity (JDBC), and Java Message Service (JMS)
Management and Service-Oriented Architecture (SOA) using Java Management Extensions (JMX)
Additional admini |
https://en.wikipedia.org/wiki/Semicircle%20law%20%28quantum%20Hall%20effect%29 | The semicircle law, in condensed matter physics, is a mathematical relationship that occurs between quantities measured in the quantum Hall effect. It describes a relationship between the anisotropic and isotropic components of the macroscopic conductivity tensor , and, when plotted, appears as a semicircle.
The semicircle law was first described theoretically in Dykhne and Ruzin's analysis of the quantum Hall effect as a mixture of 2 phases: a free electron gas, and a free hole gas. Mathematically, it states that where is the mean-field Hall conductivity, and is a parameter that encodes the classical conductivity of each phase. A similar law also holds for the resistivity.
A convenient reformulation of the law mixes conductivity and resistivity: where is an integer, the Hall divisor.
Although Dykhne and Ruzin's original analysis assumed little scattering, an assumption that proved empirically unsound, the law holds in the coherent-transport limits commonly observed in experiment.
Theoretically, the semicircle law originates from a representation of the modular group , which describes a symmetry between different Hall phases. (Note that this is not a symmetry in the conventional sense; there is no conserved current.) That group's strong connections to number theory also appear: Hall phase transitions (in a single layer) exhibit a selection rule that also governs the Farey sequence. Indeed, plots of the semicircle law are also Farey diagrams.
In striped quantum Hall phases, the relationship is slightly more complex, because of the broken symmetry:Here and describe the macroscopic conductivity in directions aligned with and perpendicular to the stripes. |
https://en.wikipedia.org/wiki/ArchNet | Archnet is a collaborative digital humanities project focused on Islamic architecture and the built environment of Muslim societies. It was conceptualized in 1998 and originally developed at the MIT School of Architecture and Planning in co-operation with the Aga Khan Trust for Culture. It has been maintained by the Aga Khan Documentation Center at MIT and the Aga Khan Trust for Culture since 2011.
Archnet is an open access resource providing all users with resources on architecture, urban design and development in the Muslim world.
History and Conceptualization
The Aga Khan Trust for Culture (AKTC) is an agency of the Aga Khan Development Network (AKDN). Through various programmes, partnerships, and initiatives, the AKTC seeks to improve the built environment in Asia and Africa where there is a significant Muslim presence. Archnet complements the work of the Trust by making its resources digitally accessible to individuals worldwide.
Archnet was conceptualized in 1998 during a series of discussions between Aga Khan IV; the President of the Massachusetts Institute of Technology (MIT) Charles Vest; and the Dean of MIT’s School of Architecture and Planning, William J. Mitchell. The foundations of Archnet were predicated on remarks made by Aga Khan in Istanbul in 1983, about his desire to make available the extensive dossiers resulting from the nominations for the Aga Khan Award for Architecture (AKAA) for the purpose of “[assisting] those institutions where the professionals of the future are trained.”
The purpose of the website is to create a viable platform upon which knowledge pertaining to the field of architecture can be shared. Archnet aims to expand the general intellectual frame of reference to transcend the barriers of geography, socio-economic status and religion, and to foster a spirit of collaboration and open dialogue. Archnet therefore manifests many of the Aga Khan’s values and principles regarding not only rural and urban development but also pluralism a |
https://en.wikipedia.org/wiki/Jan%20Arnoldus%20Schouten | Jan Arnoldus Schouten (28 August 1883 – 20 January 1971) was a Dutch mathematician and Professor at the Delft University of Technology. He was an important contributor to the development of tensor calculus and Ricci calculus, and was one of the founders of the Mathematisch Centrum in Amsterdam.
Biography
Schouten was born in Nieuwer-Amstel to a family of eminent shipping magnates. He attended a Hogere Burger School and later took up studies in electrical engineering at the Delft Polytechnical School. After graduating in 1908, he worked for Siemens in Berlin and for a public utility in Rotterdam; during this time he became fascinated by the power and subtleties of vector analysis. He returned to Delft to study mathematics in 1912 and received his Ph.D. degree there in 1914 under the supervision of Jacob Cardinaal.
Schouten was an effective university administrator and leader of mathematical societies. During his tenure as professor and as institute head he was involved in various controversies with the topologist and intuitionist mathematician L. E. J. Brouwer. He was a shrewd investor as well as mathematician and successfully managed the budget of the institute and Dutch mathematical society. He hosted the International Congress of Mathematicians in Amsterdam in early 1954, and gave the opening address. Schouten was one of the founders of the Mathematisch Centrum in Amsterdam.
Among his PhD students were Johanna Manders (1919), Dirk Struik (1922), Johannes Haantjes (1933), Wouter van der Kulk (1945), and Albert Nijenhuis (1952).
In 1933 Schouten became member of the Royal Netherlands Academy of Arts and Sciences.
Schouten died in 1971 in Epe. His son Jan Frederik Schouten (1910-1980) was Professor at the Eindhoven University of Technology from 1958 to 1978.
Work
Schouten's dissertation applied his "direct analysis", modeled on the vector ana |
https://en.wikipedia.org/wiki/Coxiella%20burnetii | Coxiella burnetii is an obligate intracellular bacterial pathogen, and is the causative agent of Q fever. The genus Coxiella is morphologically similar to Rickettsia, but with a variety of genetic and physiological differences. C. burnetii is a small Gram-negative, coccobacillary bacterium that is highly resistant to environmental stresses such as high temperature, osmotic pressure, and ultraviolet light. These characteristics are attributed to a small cell variant form of the organism that is part of a biphasic developmental cycle, including a more metabolically and replicatively active large cell variant form. It can survive standard disinfectants, and is resistant to many other environmental changes like those presented in the phagolysosome.
History and naming
Research in the 1920s and 1930s identified what appeared to be a new type of Rickettsia, isolated from ticks, that was able to pass through filters. The first description of what may have been Coxiella burnetii was published in 1925 by Hideyo Noguchi, but since his samples did not survive, it remains unclear as to whether it was the same organism. The definitive descriptions were published in the late 1930s as part of research into the cause of Q fever, by Edward Holbrook Derrick and Macfarlane Burnet in Australia, and Herald Rea Cox and Gordon Davis at the Rocky Mountain Laboratory (RML) in the United States.
The RML team proposed the name Rickettsia diaporica, derived from the Greek word for having the ability to pass through filter pores, to avoid naming it after either Cox or Davis if indeed Noguchi's description had priority. Around the same time, Derrick proposed the name Rickettsia burnetii, in recognition of Burnet's contribution in identifying the organism as a Rickettsia. As it became clear that the species differed significantly from other Rickettsia, it was first elevated to a subgenus named after Cox, Coxiella, and then in 1948 to its own genus of that name, proposed by Cornelius B. Philip, a |
https://en.wikipedia.org/wiki/American%20Institute%20of%20Physics | The American Institute of Physics (AIP) promotes science and the profession of physics, publishes physics journals, and produces publications for scientific and engineering societies. The AIP is made up of various member societies. Its corporate headquarters are at the American Center for Physics in College Park, Maryland, but the institute also has offices in Melville, New York, and Beijing.
Historical overview
The AIP was founded in 1931 in response to the lack of funding for the sciences during the Great Depression. It was formally incorporated in 1932, consisting of five original "member societies" and a total of four thousand members. A new set of member societies was added beginning in the mid-1960s. As soon as the AIP was established, it began publishing scientific journals.
Member societies
Affiliated societies
List of publications
The AIP has a wholly owned non-profit subsidiary, AIP Publishing, dedicated to scholarly publishing by the AIP and its member societies, as well as on behalf of other partners.
AIP Style
Just as the American Chemical Society has its own style called ACS Style, AIP has its own citation style called AIP Style which is commonly used in physics.
See also
Institute of Physics
PACS
Science Writing Award
SPIE
Joan Warnow-Blewett |
https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%2035 | In molecular biology, glycoside hydrolase family 35 is a family of glycoside hydrolases.
Glycoside hydrolases are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes.
Glycoside hydrolase family 35 (CAZY GH_35) comprises enzymes with only one known activity: beta-galactosidase. Mammalian beta-galactosidase is a lysosomal enzyme (gene GLB1) which cleaves the terminal galactose from gangliosides, glycoproteins, and glycosaminoglycans, and whose deficiency is the cause of the genetic disease Gm(1) gangliosidosis (Morquio disease type B). |
https://en.wikipedia.org/wiki/Jayme%20Luiz%20Szwarcfiter | Jayme Luiz Szwarcfiter (born July 5, 1942, in Rio de Janeiro) is a computer scientist in Brazil.
Biography
Szwarcfiter graduated in 1967 in electronic engineering from the Federal University of Rio de Janeiro (UFRJ). He received his MA in 1971 from COPPE. In 1975 he obtained his PhD in computer science from the University of Newcastle Upon Tyne, England, under supervision of Leslie Blackett Wilson. He is currently a professor emeritus at UFRJ. The Journal of the Brazilian Computer Society dedicated a special edition in 2001 to Szwarcfiter's major publications. Among others, he has written joint articles with Donald E. Knuth and Christos Papadimitriou.
Awards
He received the Award of Scientific Merit from the Brazilian Computer Society in 2005. In April 2006 he won the Almirante Álvaro Alberto prize in computer science, one of the most important academic recognitions in Brazil. Szwarcfiter is also one of the recipients of the Ordem Nacional do Mérito Científico (National Order of Scientific Merit).
In 2011, Prof. Szwarcfiter was elected a Member of the Brazilian Academy of Sciences.
Books |
https://en.wikipedia.org/wiki/Metabolic%20intermediate | Metabolic intermediates are molecules that are the precursors or metabolites of biologically significant molecules.
Although these intermediates are of relatively minor direct importance to cellular function, they can play important roles in the allosteric regulation of enzymes.
Clinical significance
Some can be useful in measuring rates of metabolic processes (for example, 3,4-dihydroxyphenylacetic acid or 3-aminoisobutyrate).
Because they can represent unnatural points of entry into natural metabolic pathways, some (such as AICA ribonucleotide) are of interest to researchers in developing new therapies.
See also
Metabolism |
https://en.wikipedia.org/wiki/Jessica%20Sklar | Jessica Katherine Sklar (born 1973) is a mathematician interested in abstract algebra, recreational mathematics, mathematics and art, and mathematics and popular culture. She is a professor of mathematics at Pacific Lutheran University, and former head of the mathematics department at Pacific Lutheran.
Education and career
As a high school student, Sklar studied poetry at the Interlochen Arts Academy. She did her undergraduate studies at Swarthmore College, where her mother Elizabeth S. had earned a degree in English (later becoming an English professor at Wayne State University) and her father Lawrence Sklar had taught philosophy. Jessica completed a double major in English and mathematics in 1995.
Next, Sklar moved to the University of Oregon for graduate study in mathematics, earning a master's degree in 1997 and completing her Ph.D. there in 2001. Her dissertation, Binomial Rings and Algebras, was supervised by Frank Wylie Anderson.
She has been a faculty member in the mathematics department at Pacific Lutheran since 2001.
Combining her interests in mathematics and art, she is one of 24 mathematicians and artists who make up the Mathemalchemy Team.
Selected publications
“‘Bok bok’: exploring the game of Chicken in film,” with Jennifer F. Nordstrom. In: Handbook of the Mathematics of the Arts and Sciences. Ed. Bharath Sriraman. Springer International Publishing, Cham, 2020.
“‘Elegance in design’: mathematics and the works of Ted Chiang.” In: Handbook of the Mathematics of the Arts and Sciences. Ed. Bharath Sriraman. Springer International Publishing, Cham, 2020.
“Disciple” (poem). Journal of Humanistic Mathematics 7(2) (July 2017), 418.
First-Semester Abstract Algebra: A Structural Approach. GNU Free Documentation License, 2017.
“A confused electrician uses Smith normal form,” with Tom Edgar. Mathematics Magazine 89(1) (2016), 3–13.
Mathematics in Popular Culture: Essays on Appearances in Film, Literature, Games, Television and Other Media. Jefferson, NC: |
https://en.wikipedia.org/wiki/Gaya%20melon | The Gaya melon, also known as the ivory gaya, snowball, sweet snowball, ghost, dino(saur), dino(saur) egg, snow leopard, matice, matisse, sugar baby, and silver star melons, is a small to medium-sized honeydew cultivar developed originally in Japan and Korea and now grown in China, Mexico, southern California, and South America.
Description
The rind is very thin and ivory in color with green streaking, and the interior flesh is white. The melons are round and may be slightly oblong. The flesh is juicy and soft towards the center and crispier towards the rind. It has been described as having a mild, sweet flavor with floral notes. It is best kept at room temperature, and cut melons will keep in a refrigerator for up to 5 days.
Availability
It is available from late spring to early summer and is available at various farmers' markets and Asian markets in California and is sought after because of its unique coloring. It is also available at supermarkets in Australia, among other countries.
See also
Melon |
https://en.wikipedia.org/wiki/Mishnat%20ha-Middot | The Mishnat ha-Middot (, 'Treatise of Measures') is the earliest known Hebrew treatise on geometry, composed of 49 mishnayot in six chapters. Scholars have dated the work to either the Mishnaic period or the early Islamic era.
History
Date of composition
Moritz Steinschneider dated the Mishnat ha-Middot to between 800 and 1200 CE. Sarfatti and Langermann have advanced Steinschneider's claim of Arabic influence on the work's terminology, and date the text to the early ninth century.
On the other hand, Hermann Schapira argued that the treatise dates from an earlier era, most likely the Mishnaic period, as its mathematical terminology differs from that of the Hebrew mathematicians of the Arab period. Solomon Gandz conjectured that the text was compiled no later than (possibly by Rabbi Nehemiah) and intended to be a part of the Mishnah, but was excluded from its final canonical edition because the work was regarded as too secular. The content resembles both the work of Hero of Alexandria (c. ) and that of al-Khwārizmī (c. ), and proponents of the earlier dating therefore see the Mishnat ha-Middot as a link between Greek and Islamic mathematics.
Modern history
The Mishnat ha-Middot was discovered in MS 36 of the Munich Library by Moritz Steinschneider in 1862. The manuscript, copied in Constantinople in 1480, goes as far as the end of Chapter V. According to the colophon, the copyist believed the text to be complete. Steinschneider published the work in 1864, in honour of the seventieth birthday of Leopold Zunz. The text was edited and published again by mathematician Hermann Schapira in 1880.
After the discovery by Otto Neugebauer of a genizah-fragment in the Bodleian Library containing Chapter VI, Solomon Gandz published a complete version of the Mishnat ha-Middot in 1932, accompanied by a thorough philological analysis. A third manuscript of the work was found among uncatalogued material in the Archives of the Jewish Museum of Prague in 1965.
Contents
Although prima |
https://en.wikipedia.org/wiki/The%20One%20Minutes | The One Minutes is a global platform for one-minute videos. The One Minutes Foundation produces and distributes One Minutes, providing a platform for people to create and connect through short, accessible video art.
History
The One Minutes was initiated in 1998 by Katja van Stiphout and Michal Buttink, two students of the Sandberg Institute, Masters of Art and Design. The institute’s director Jos Houweling was asked to fill in an hour of airtime on local television, SALTO, once a month from midnight to 1 a.m. and offered this to two of his students. They invited fellow students and friends to fill the timeslot with one-minute films. A new format was born. Within the inexorable limitation of 60 seconds, the endless possibilities of video were revealed. The hour at midnight grew into a worldwide platform, where television channels, arts organisations and film festivals adopted segments of One Minutes, showcasing one-minutes at film festivals, art organisations and cultural institutes.
In 1999, The One Minutes Foundation was founded, under direction of Jos Houweling and supported by Sandberg Institute. Since then, The One Minutes Awards have been held annually to acknowledge the best One Minutes of the year. The One Minutes held their first workshop in China in 2000 at Xiamen University. Ever since, The One Minutes has been a bridge of cultural exchange between international and Chinese artists, filmmakers and students. In 2008, Chinese artists took part in the Venice Biennale as part of The One Minutes. Since 2009, East China Normal University and Shanghai Dragon TV have been organising the yearly The One Minutes International Competition, which is broadcast by Dragon TV, Shanghai Media Group’s satellite broadcaster. One Minutes were also shown at EXPO Shanghai in 2010. Since 2011, The One Minutes has been a returning section in the annual Shanghai International TV Festival.
In 2014, under the direction of Julia van Mourik, The One Minutes started a new curated pr |
https://en.wikipedia.org/wiki/Rendezvous%20with%20Rama%20%28video%20game%29 | Rendezvous with Rama is an interactive fiction game with graphics published by Telarium, a subsidiary of Spinnaker Software, in 1984. It was developed in cooperation with Arthur C. Clarke and based upon his 1973 science fiction novel Rendezvous with Rama.
Reception
German reviewers recognized the complexity of the storyline and the various possibilities of interaction with non-player characters.
See also
Fahrenheit 451 (video game)
Rama, 1996 computer game also based on Clarke's novel |
https://en.wikipedia.org/wiki/Cairo%20spiny%20mouse | The Cairo spiny mouse (Acomys cahirinus), also known as the common spiny mouse, Egyptian spiny mouse, or Arabian spiny mouse, is a nocturnal species of rodent in the family Muridae. It is found in Africa north of the Sahara, where its natural habitats are rocky areas and hot deserts. It is omnivorous and feeds on seeds, desert plants, snails, and insects. It is a gregarious animal and lives in small family groups. It is the first and only known rodent species that exhibits spontaneous decidualization and menstruation.
Description
The Cairo spiny mouse grows to a head and body length of about with a tail of much the same length. Adults weigh between . The colour of the Cairo spiny mouse is sandy-brown or greyish-brown above and whitish beneath. A line of spine-like bristles runs along the ridge of the back. The snout is slender and pointed, the eyes are large, the ears are large and slightly pointed, and the tail is devoid of hairs.
Compared to Mus musculus, the spiny mouse is known to have relatively weak skin and to exhibit tail autotomy.
Distribution and habitat
The Cairo spiny mouse is native to northern Africa with its range extending from Mauritania, Morocco, and Algeria in the west to Sudan, Ethiopia, Eritrea, and Egypt in the east at altitudes up to about . It lives in dry stony habitats with sparse vegetation and is often found near human dwellings. It is common around cliffs and canyons and in gravelly plains with shrubby vegetation. It is not usually found in sandy habitats, but may be present among date palms.
Behaviour
Cairo spiny mice are social animals and live in a group with a dominant male. Breeding mostly takes place in the rainy season, between September and April, when availability of food is greater. The gestation period is five to six weeks, which is long for a mouse, and the young are well-developed when they are born. At this time, they are already covered with short fur and their eyes are open, and they soon start exploring their surroundings. The |
https://en.wikipedia.org/wiki/Edmonds%E2%80%93Pruhs%20protocol | Edmonds–Pruhs protocol is a protocol for fair cake-cutting. Its goal is to create a partially proportional division of a heterogeneous resource among n people, such that each person receives a subset of the cake which that person values as at least 1/(a·n) of the total, where a is some sufficiently large constant. It is a randomized algorithm whose running time is O(n) with probability close to 1. The protocol was developed by Jeff Edmonds and Kirk Pruhs, who later improved it in joint work with Jaisingh Solanki.
Motivation
A proportional division of a cake can be achieved using the recursive halving algorithm in time O(n log n). Several hardness results show that this run-time is optimal under a wide variety of assumptions. In particular, recursive halving is the fastest possible algorithm for achieving full proportionality when the pieces must be contiguous, and it is the fastest possible deterministic algorithm for achieving even partial proportionality, even when the pieces are allowed to be disconnected. One case not covered by the hardness results is that of randomized algorithms that guarantee only partial proportionality and allow possibly disconnected pieces. The Edmonds–Pruhs protocol aims to provide an algorithm with run-time O(n) for this case.
The protocol
The general scheme is as follows:
Each partner privately partitions the cake into a·n pieces of equal subjective value. These n ⋅ a·n pieces (over all partners) are called candidate pieces.
Each partner picks 2d candidate pieces uniformly at random, with replacement (d is a constant to be determined later). The candidates are grouped into d pairs, which the partner reports to the algorithm. These n⋅d pairs are called quarterfinal brackets.
From each quarterfinal bracket, the algorithm selects a single piece - the piece that intersects fewer other candidate pieces. These n ⋅ d pieces are called semifinal pieces.
For each partner, the algorithm selects a single piece; they are called final pie |
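A minimal Python sketch of the first three steps above, under simplifying assumptions: the cake is the interval [0, 1), each partner's "equal-value" pieces are simulated by a random partition into a·n subintervals, and the constants a and d are illustrative choices rather than the values derived in the paper.

import random

def overlap(p, q):
    """Do two half-open intervals intersect?"""
    return p[0] < q[1] and q[0] < p[1]

def semifinal_pieces(n, a=10, d=2, seed=0):
    rng = random.Random(seed)
    k = a * n
    all_candidates = []   # candidate pieces of every partner
    brackets = []         # (partner, (piece, piece)) quarterfinal brackets
    for partner in range(n):
        # Step 1: this partner cuts the cake into a*n pieces of equal
        # subjective value (a random partition stands in for that here).
        cuts = sorted(rng.random() for _ in range(k - 1))
        bounds = [0.0] + cuts + [1.0]
        pieces = [(bounds[i], bounds[i + 1]) for i in range(k)]
        # Step 2: pick 2d candidates at random with replacement,
        # grouped into d pairs (the quarterfinal brackets).
        picks = [rng.choice(pieces) for _ in range(2 * d)]
        all_candidates.extend(picks)
        brackets += [(partner, (picks[2 * j], picks[2 * j + 1])) for j in range(d)]
    # Step 3: from each bracket keep the piece that intersects fewer of the
    # candidate pieces (each count includes the piece itself, a constant
    # offset that does not change the comparison).
    semifinals = []
    for partner, (p, q) in brackets:
        score_p = sum(overlap(p, c) for c in all_candidates)
        score_q = sum(overlap(q, c) for c in all_candidates)
        semifinals.append((partner, p if score_p <= score_q else q))
    return semifinals

print(semifinal_pieces(n=4))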
https://en.wikipedia.org/wiki/%CE%A4-additivity | In mathematics, in the field of measure theory, τ-additivity is a certain property of measures on topological spaces.
A measure or set function μ on a space X whose domain is a sigma-algebra Σ is said to be τ-additive if, for any upward-directed family 𝒢 of nonempty open sets such that its union is in Σ, the measure of the union is the supremum of the measures of the elements of 𝒢; that is, μ(⋃𝒢) = sup{μ(G) : G ∈ 𝒢}.
See also |
https://en.wikipedia.org/wiki/Artificial%20wisdom | Artificial wisdom is a software system that can demonstrate one or more qualities of being wise.
Artificial wisdom can be described as artificial intelligence reaching the top level of decision-making when confronted with the most complex and challenging situations. The term artificial wisdom is used when the "intelligence" is not based merely on collecting and interpreting data by chance, but is by design enriched with the smart and conscientious strategies that wise people would use.
When examining computer-aided wisdom (the partnership of artificial intelligence and contemplative neuroscience), concerns regarding the future of artificial intelligence shift to a more optimistic viewpoint. This artificial wisdom forms the basis of Louis Molnar's monographic article on artificial philosophy, in which he coined the term and proposed how artificial intelligence might view its place in the grand scheme of things. |
https://en.wikipedia.org/wiki/147th%20meridian%20east | The meridian 147° east of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, Asia, the Pacific Ocean, Australasia, the Southern Ocean, and Antarctica to the South Pole.
The 147th meridian east forms a great circle with the 33rd meridian west.
From Pole to Pole
Starting at the North Pole and heading south to the South Pole, the 147th meridian east passes through:
| Co-ordinates | Country, territory or sea | Notes |
| | Arctic Ocean | |
| | East Siberian Sea | |
| | | Sakha Republic — island of New Siberia |
| | East Siberian Sea | |
| | | Sakha Republic, Magadan Oblast — from |
| | Sea of Okhotsk | |
| | Kuril Islands | Island of Iturup, administered by (Sakhalin Oblast), but claimed by (Hokkaidō Prefecture) |
| | Pacific Ocean | Passing just east of Shikotan island, Kuril Islands (at ); passing just west of Satawal island, (at ) |
| | | Manus Island |
| | Pacific Ocean | Bismarck Sea |
| | | Long Island |
| | Pacific Ocean | Bismarck Sea |
https://en.wikipedia.org/wiki/John%20B.%20Conway | John Bligh Conway (born September 22, 1939) is an American mathematician. He is currently a professor emeritus at the George Washington University. His specialty is functional analysis, particularly bounded operators on a Hilbert space.
Conway earned his B.S. from Loyola University and Ph.D. from Louisiana State University under the direction of Heron Collins in 1965, with a dissertation on The Strict Topology and Compactness in the Space of Measures. He has had 20 students who obtained doctorates under his supervision, most of them at Indiana University, where he was a close friend of mathematician Max Zorn. He served on the faculty there from 1965 to 1990, when he became head of the mathematics department at the University of Tennessee.
He is the author of a two-volume series on Functions of One Complex Variable (Springer-Verlag), which is a standard graduate text.
Selected publications |
https://en.wikipedia.org/wiki/Severe%20plastic%20deformation | Severe plastic deformation (SPD) is a generic term describing a group of metalworking techniques involving very large strains typically involving a complex stress state or high shear, resulting in a high defect density and equiaxed "ultrafine" grain (UFG) size (d < 500 nm) or nanocrystalline (NC) structure (d < 100 nm).
History
The significance of SPD was known from the ancient times, at least during the transition from the Bronze Age to the Iron Age, when repeated hammering and folding was employed for processing strategic tools such as swords. The development of the principles underlying SPD techniques goes back to the pioneering work of P.W. Bridgman at Harvard University in the 1930s. This work concerned the effects on solids of combining large hydrostatic pressures with concurrent shear deformation and it led to the award of the Nobel Prize in Physics in 1946. Very successful early implementations of these principles, described in more detail below, are the processes of equal-channel angular pressing (ECAP) developed by V.M. Segal and co-workers in Minsk in the 1970s and high-pressure torsion, derived from Bridgman's work, but not widely developed until the 1980s at the Russian Institute of Metals Physics in modern-day Yekaterinburg.
Some definitions of SPD describe it as a process in which high strain is applied without any significant change in the dimensions of the workpiece, resulting in a large hydrostatic pressure component. However, the mechanisms that lead to grain refinement in SPD are the same as those originally developed for mechanical alloying, a powder process that has been characterized as "severe plastic deformation" by authors as early as 1983. Additionally, some more recent processes such as asymmetric rolling, do result in a change in the dimensions of the workpiece, while still producing an ultrafine grain structure. The principles behind SPD have even been applied to surface treatments.
Methods
Equal channel angular pressing
Equal chan |
https://en.wikipedia.org/wiki/Truth%20value | In logic and mathematics, a truth value, sometimes called a logical value, is a value indicating the relation of a proposition to truth, which in classical logic has only two possible values (true or false).
Computing
In some programming languages, any expression can be evaluated in a context that expects a Boolean data type. Typically (though this varies by programming language) expressions like the number zero, the empty string, empty lists, and null evaluate to false, and strings with content (like "abc"), other numbers, and objects evaluate to true.
Sometimes these classes of expressions are called "truthy" and "falsy" / "false".
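As an illustration (not tied to any particular source), the following Python snippet shows this kind of implicit conversion; Python is one of the languages that evaluates arbitrary expressions in a Boolean context as described above.

# Values considered "falsy" in Python: zero, the empty string, the empty list, None.
# The other values here are "truthy".
values = [0, 1, "", "abc", [], [1, 2], None, object()]
for v in values:
    print(repr(v), "evaluates as", bool(v))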
Classical logic
In classical logic, with its intended semantics, the truth values are true (denoted by 1 or the verum ⊤), and untrue or false (denoted by 0 or the falsum ⊥); that is, classical logic is a two-valued logic. This set of two values is also called the Boolean domain. Corresponding semantics of logical connectives are truth functions, whose values are expressed in the form of truth tables. Logical biconditional becomes the equality binary relation, and negation becomes a bijection which permutes true and false. Conjunction and disjunction are dual with respect to negation, which is expressed by De Morgan's laws:
¬(p ∧ q) = ¬p ∨ ¬q
¬(p ∨ q) = ¬p ∧ ¬q
Propositional variables become variables in the Boolean domain. Assigning values for propositional variables is referred to as valuation.
Intuitionistic and constructive logic
In intuitionistic logic, and more generally, constructive mathematics, statements are assigned a truth value only if they can be given a constructive proof. It starts with a set of axioms, and a statement is true if one can build a proof of the statement from those axioms. A statement is false if one can deduce a contradiction from it. This leaves open the possibility of statements that have not yet been assigned a truth value.
Unproven statements in intuitionistic logic are not given an intermediate truth value (as is sometime |
https://en.wikipedia.org/wiki/Ramsauer%E2%80%93Townsend%20effect | The Ramsauer–Townsend effect, also sometimes called the Ramsauer effect or the Townsend effect, is a physical phenomenon involving the scattering of low-energy electrons by atoms of a noble gas. This effect is a result of quantum mechanics. The effect is named for Carl Ramsauer and John Sealy Townsend, who each independently studied the collisions between atoms and low-energy electrons in 1921.
Definitions
When an electron moves through a gas, its interactions with the gas atoms cause scattering to occur. These interactions are classified as inelastic if they cause excitation or ionization of the atom to occur and elastic if they do not.
The probability of scattering in such a system is defined as the number of electrons scattered, per unit electron current, per unit path length, per unit pressure at 0 °C, per unit solid angle. The number of collisions equals the total number of electrons scattered elastically and inelastically in all angles, and the probability of collision is the total number of collisions, per unit electron current, per unit path length, per unit pressure at 0 °C.
Because noble gas atoms have a relatively high first ionization energy and the electrons do not carry enough energy to cause excited electronic states, ionization and excitation of the atom are unlikely, and the probability of elastic scattering over all angles is approximately equal to the probability of collision.
Description
If one tries to predict the probability of collision with a classical model that treats the electron and atom as hard spheres, one finds that the probability of collision should be independent of the incident electron energy. However, Ramsauer and Townsend independently observed that for slow-moving electrons in argon, krypton, or xenon, the probability of collision between the electrons and gas atoms reaches a minimum value for electrons with a certain amount of kinetic energy (about 1 electron volt for xenon gas).
No good explanation for the phenomeno |
https://en.wikipedia.org/wiki/Aronson%20Prize | The Aronson Prize () is a prize awarded for achievements in microbiology and immunology. It was established by the will of the pediatrician and bacteriologist Hans Aronson and has been awarded since 1921. Aronson bequeathed a large part of his estate to the establishment of the prize. The prize is awarded biannually on 8 March, the date of Aronson's death.
In 1969, the foundation that awarded the prize was dissolved on the initiative of its last chairman Georg Henneberg, and the responsibility for the prize and the remaining capital was transferred to the (West) Berlin government, in order to safeguard the existence of the prize. Since 1970, the prize has been awarded by the Senate of Berlin.
The first laureate was August von Wassermann. Among the Aronson laureates are several scientists who later were awarded the Nobel Prize in Physiology or Medicine, such as Karl Landsteiner and Gerhard Domagk.
Laureates
1921 August von Wassermann
1926 Karl Landsteiner
1931 Richard Otto
1944 Gerhard Domagk
1956 Helmut Ruska
1960 Paul Hans Karl Constantin Schmidt
1966 Peter Giesbrecht
1967 Albert Herrlich
1968 Friedrich Staib
1971 or 1972 Werner Köhler
1971 or 1972 Werner Schäfer
1973 Ernst Richard Habermann
1977 Werner Knapp
1981 Walter Doerfler
1982 Volker Schirrmacher
1985 Volker ter Meulen
1987 Karin Mölling
1988 Stefan H. E. Kaufmann
1989 Hans-Dieter Klenk
1990 Dieter Bitter-Suermann
1991 Bernhard Fleckenstein
1992 Stefan Carl Wilhelm Meuer
1993 Ulrich Koszinowski
1994 Thomas Hünig
1995 Otto Haller
1996 Thomas F. Meyer
1997 Bernhard Fleischer
1998 Jürgen Heesemann
1999 Ernst Theodor Rietschel
2000 Andreas Radbruch
2001 Sucharit Bhakdi
2002 Wolfgang Hammerschmidt
2007 Matthias Reddehase
2008 Matthias Frosch
See also
List of medicine awards |
https://en.wikipedia.org/wiki/NZB | NZB is an XML-based file format for retrieving posts from NNTP (Usenet) servers. The format was conceived by the developers of the Newzbin.com Usenet Index. NZB is effective when used with search-capable websites, which create NZB files describing exactly what needs to be downloaded. Because the client works from such a file, headers do not need to be downloaded, hence the NZB method is quicker and more bandwidth-efficient than traditional methods.
Each Usenet message has a unique identifier called the "Message-ID". When a large file is posted to a Usenet newsgroup, it is usually divided into multiple messages (called segments or parts) each having its own Message-ID. An NZB-capable Usenet client will read all needed Message-IDs from the NZB file, download them and decode the messages back into a binary file (usually using yEnc or Uuencode).
File format example
The following is an example of an NZB 1.1 file.
<?xml version="1.0" encoding="iso-8859-1" ?>
<!DOCTYPE nzb PUBLIC "-//newzBin//DTD NZB 1.1//EN" "http://www.newzbin.com/DTD/nzb/nzb-1.1.dtd">
<nzb xmlns="http://www.newzbin.com/DTD/2003/nzb">
<head>
<meta type="title">Your File!</meta>
<meta type="tag">Example</meta>
</head>
<file poster="Joe Bloggs &lt;bloggs@nowhere.example&gt;" date="1071674882" subject="Here's your file! abc-mr2a.r01 (1/2)">
<groups>
<group>alt.binaries.newzbin</group>
<group>alt.binaries.mojo</group>
</groups>
<segments>
<segment bytes="102394" number="1">123456789abcdef@news.newzbin.com</segment>
<segment bytes="4501" number="2">987654321fedbca@news.newzbin.com</segment>
</segments>
</file>
</nzb>
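A minimal sketch (using only the Python standard library; the function name and structure are illustrative, not part of any official tool) of how a client might extract the newsgroups and segment Message-IDs from an NZB file such as the one above:

import xml.etree.ElementTree as ET

NS = "{http://www.newzbin.com/DTD/2003/nzb}"  # default namespace used by NZB files

def read_nzb(path):
    """Return a list of (subject, groups, segments) tuples, where segments
    is a list of (number, bytes, message_id) sorted by segment number."""
    root = ET.parse(path).getroot()
    files = []
    for f in root.iter(NS + "file"):
        groups = [g.text for g in f.iter(NS + "group")]
        segments = sorted(
            (int(s.get("number")), int(s.get("bytes")), s.text)
            for s in f.iter(NS + "segment")
        )
        files.append((f.get("subject"), groups, segments))
    return files

# Each message_id can then be fetched from one of the listed newsgroups
# with an NNTP command and the parts decoded (for example, yEnc or uuencode).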
See also
Comparison of Usenet newsreaders |
https://en.wikipedia.org/wiki/Mobile%20signature | A mobile signature is a digital signature generated either on a mobile phone or on a SIM card on a mobile phone.
Origins of the term
mSign
The term first appeared in articles introducing mSign (short for Mobile Electronic Signature Consortium). It was founded in 1999 and comprised 35 member companies. In October 2000, the consortium published an XML-interface defining a protocol allowing service providers to obtain a mobile (digital) signature from a mobile phone subscriber.
In 2001, mSign gained industry-wide coverage when it became apparent that Brokat (one of the founding companies) had also obtained a process patent in Germany for using the mobile phone to generate digital signatures.
ETSI-MSS standardization
The term was then used by Paul Gibson (G&D) and Romary Dupuis (France Telecom) in their standardisation work at the European Telecommunications Standards Institute (ETSI) and published in ETSI Technical Report TR 102 203.
The ETSI-MSS specifications, ETSI TS 102 204 and ETSI TS 102 207, define a SOAP interface and mobile signature roaming for systems implementing mobile signature services.
Today
The mobile signature can be the legal equivalent of a handwritten ("wet") signature, hence the term "Mobile Ink", a commercial term coined by the Swiss company Sicap. Other terms include "Mobile ID" and "Mobile Certificate", the latter used by a circle of trust of three Finnish mobile network operators implementing the roaming mobile signature framework Mobiilivarmenne.
According to the EU directives for electronic signatures the mobile signature can have the same level of protection as the handwritten signature if all components in the signature creation chain are appropriately certified. The governing standard for the mobile signature creation devices and equivalent of a handwritten signature is described in the Commission Decision 2003/511/EC of 14 July 2003 on the publication of reference numbers of generally recognised standards for electronic signature products in accordance with the Electronic Sign |
https://en.wikipedia.org/wiki/MOS%20Technology%20CIA | The 6526/8520 Complex Interface Adapter (CIA) was an integrated circuit made by MOS Technology. It served as an I/O port controller for the 6502 family of microprocessors, providing for parallel and serial I/O capabilities as well as timers and a Time-of-Day (TOD) clock. The device's most prominent use was in the Commodore 64 and Commodore 128(D), each of which included two CIA chips. The Commodore 1570 and Commodore 1571 floppy disk drives contained one CIA each. Furthermore, the Amiga home computers and the Commodore 1581 floppy disk drive employed a modified variant of the CIA circuit called the 8520. The 8520 is functionally equivalent to the 6526 except for its simplified TOD circuitry. The CIA's predecessor was the PIA.
Parallel I/O
The CIA had two 8-bit bidirectional parallel I/O ports. Each port had a corresponding Data Direction Register, which allowed each data line to be individually set to input or output mode. A read of these ports always returned the status of the individual lines, regardless of the data direction that had been set.
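A toy model (a sketch in Python, not tied to the chip's actual register addresses or timing) of the behaviour just described: writes go to an output latch, the data direction register selects which lines the port drives, and a read always reflects the actual line levels.

class ParallelPort:
    """Toy model of one CIA parallel port with its data direction register (DDR).
    A 1 bit in the DDR makes the corresponding line an output; a 0 bit makes it an input."""

    def __init__(self):
        self.ddr = 0x00            # all lines are inputs after reset
        self.output_latch = 0x00   # value last written by the CPU
        self.external = 0xFF       # levels driven by external hardware (assumed pulled high)

    def write_ddr(self, value):
        self.ddr = value & 0xFF

    def write_data(self, value):
        self.output_latch = value & 0xFF   # only lines configured as outputs will drive this

    def read_data(self):
        # A read returns the state of the lines themselves: the latch on output
        # lines, the externally driven level on input lines.
        return (self.output_latch & self.ddr) | (self.external & ~self.ddr & 0xFF)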
Serial I/O
An internal bidirectional 8-bit shift register enabled the CIA to handle serial I/O. The chip could accept serial input clocked from an external source, and could send serial output clocked with one of the built-in programmable timers. An interrupt was generated whenever an 8-bit serial transfer had completed. It was possible to implement a simple "network" by connecting the shift register and clock outputs of several computers together.
The maximum bitrate is 500 kbit/s for the 2 MHz version.
The CIA incorporates a fix to a bug in the serial shift register of the earlier 6522 VIA. The CIA was originally intended to allow fast communication with a disk drive, but in the end this couldn't be used because of a desire to keep disk drive compatibility with the VIC-20; in practice the firmware of the 1541 drive had to be made even slower than its VIC-20 predecessor to work around a behaviour of the C64's video processor, that, when dra |
https://en.wikipedia.org/wiki/Hodograph | A hodograph is a diagram that gives a vectorial visual representation of the movement of a body or a fluid. It is the locus of one end of a variable vector, with the other end fixed. The position of any plotted data on such a diagram is proportional to the velocity of the moving particle. It is also called a velocity diagram. It appears to have been used by James Bradley, but its practical development is mainly from Sir William Rowan Hamilton, who published an account of it in the Proceedings of the Royal Irish Academy in 1846.
Applications
It is used in physics, astronomy, solid and fluid mechanics to plot deformation of material, motion of planets or any other data that involves the velocities of different parts of a body.
Meteorology
In meteorology, hodographs are used to plot winds from soundings of the Earth's atmosphere. It is a polar diagram where wind direction is indicated by the angle from the center axis and its strength by the distance from the center. In the figure to the right, at the bottom one finds values of wind at 4 heights above ground. They are plotted by the vectors to . One should note that directions are plotted as indicated in the upper right corner.
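A minimal sketch (illustrative values and function names, assuming the usual meteorological convention that direction is where the wind blows from) of turning a wind sounding into the u/v components plotted on a hodograph, and of the layer-to-layer shear vectors discussed below:

import math

def wind_to_uv(direction_deg, speed):
    """Convert wind direction/speed into eastward (u) and northward (v) components."""
    rad = math.radians(direction_deg)
    return -speed * math.sin(rad), -speed * math.cos(rad)

def shear_vectors(sounding):
    """sounding: list of (height, direction_deg, speed) ordered by height.
    Returns the (du, dv) segments joining the tips of successive wind vectors,
    i.e. the wind shear between successive levels on the hodograph."""
    uv = [wind_to_uv(d, s) for _, d, s in sounding]
    return [(u2 - u1, v2 - v1) for (u1, v1), (u2, v2) in zip(uv, uv[1:])]

# Four illustrative levels (metres above ground, degrees, knots):
levels = [(0, 160, 10), (500, 180, 15), (1000, 200, 20), (1500, 230, 25)]
print(shear_vectors(levels))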
With the hodograph and thermodynamic diagrams like the tephigram, meteorologists can calculate:
Wind shear: The lines uniting the extremities of successive vectors represent the variation in direction and value of the wind in a layer of the atmosphere. Wind shear is important information in the development of thunderstorms and future evolution of wind at these levels.
Turbulence: wind shear indicates possible turbulence that would pose a hazard to aviation.
Temperature advection: change of temperature in a layer of air can be calculated by the direction of the wind at that level and the direction of the wind shear with the next level. In the northern hemisphere, warm air is to the right of a wind shear between levels in the atmosphere. The opposite is true in the southern one (see |
https://en.wikipedia.org/wiki/Wireless%20grid | Wireless grids are wireless computer networks consisting of different types of electronic devices with the ability to share their resources with any other device in the network in an ad hoc manner.
A definition of the wireless grid can be given as: "Ad hoc, distributed resource-sharing networks between heterogeneous wireless devices"
The following key characteristics further clarify this concept:
No centralized control
Small, low powered devices
Heterogeneous applications and interfaces
New types of resources like cameras, GPS trackers and sensors
Dynamic and unstable users / resources
The technologies that make up the wireless grid can be divided into two main categories; ad hoc networking and grid computing.
(Wireless) Ad hoc networking
In traditional networks, both wired and wireless, the connected devices, or nodes, depend on dedicated devices (edge devices) such as routers and/or servers for facilitating the throughput of information from one node to the other. These 'routing nodes' have the ability to determine where information is coming from and where it is supposed to go. They give out names and addresses (IP addresses) to each connected node and regulate the traffic between them. In wireless grids, such dedicated routing devices are not (always) available, and the bandwidth that is permanently available to traditional networks has to be either 'borrowed' from an already existing network or taken from publicly accessible bandwidth (open spectrum).
A group addressing this problem is MANET (Mobile Ad Hoc Network).
Resource sharing
One of the intended aspects of wireless grids is that they will facilitate the sharing of a wide variety of resources. These include both technical and information resources. The former include bandwidth, QoS, and web services, but also computational power and data storage capacity. Information resources can include virtually any kind of data, from databases and membership lists to pictures and directories.
Ad hoc resource s |
https://en.wikipedia.org/wiki/Fragment%20%28novel%29 | Fragment (Random House, 2009) is a science-based thriller by best-selling author and screenwriter Warren Fahy. The novel focuses on a crew of young scientists from a reality TV show who must try to survive when their research vessel, the Trident, lands on Henders Island, where predatory creatures have been living and evolving for over half a billion years. Producer Lloyd Levin optioned Fahy's screenplay adaptation of Fragment for a major motion picture. Pandemonium, Fahy's sequel to Fragment, was published in March 2013.
Plot summary
In 1791, Captain Ambrose Henders and his crew stop at a tiny island in search of fresh water in the South Pacific. After one man, Henry Frears, is sent to obtain water, Captain Henders is forced to retreat to prevent more loss of life when Frears is eaten by unknown creatures. He writes of the account in his journal.
In the present day, an exploratory research vessel, "The Trident", sails across the same stretch of the Pacific filming for a documentary series called "Sea Life". When Captain Sol picks up an EPIRB distress beacon, the crew decides to investigate. They go ashore to scope out the shipwrecked "Balboa Bilbo", from which the distress beacon emanated. After arriving at the wreck, the crew are attacked by monstrous creatures. In the ensuing chaos, a camera captures the deaths of nearly all the crew as the rest of the world watches on a live feed. Researcher Nell and cameraman Zero barely manage to escape with their lives.
Eight days after the "Sea Life" incident, Nell has been called back to Henders Island, after the live feed caused panic and forced the U.S. military to quarantine the island for study. After several attempts, a few live specimens are captured. During examination, they turn out to be highly specialized arthropods new to science. A mongoose is released into the jungle wearing a "critter cam" in an attempt to compare the Henders Island fauna to invasive fauna from different areas of the world. The mongoose d |