| source | text |
|---|---|
https://en.wikipedia.org/wiki/Alternative%20beta | Alternative beta is the concept of managing volatile "alternative investments", often through the use of hedge funds. Alternative beta is often also referred to as "alternative risk premia".
Researcher Lars Jaeger says that the return from an investment mainly results from exposure to systematic risk factors. These exposures can take two basic forms: long-only "buy and hold" exposures, and exposures through the use of alternative investment techniques such as long/short investing, the use of derivatives (non-linear payout profiles), or the employment of leverage.
Background
Alternative investments
Although alternative investment is a general term (commonly defined as any investment other than stocks, bonds or cash), alternative beta relates to the use of hedge funds. At its most basic, a hedge fund is an investment vehicle that pools capital from a number of investors and invests in securities and other instruments. It is administered by a professional management firm, and often structured as a limited partnership, limited liability company, or similar vehicle.
Volatility ("beta")
For an investment that involves risk to be worthwhile, its returns must be higher than a risk-free investment. The risk is related to volatility.
A measure of the factors influencing an investment's volatility is the beta. The beta is a measure of the risk arising from exposure to general market movements as opposed to idiosyncratic factors.
A beta below 1 can indicate either an investment with lower volatility than the market, or a volatile investment whose price movements are not highly correlated with the market. An example of the first is a treasury bill: the price does not go up or down a lot, so it has a low beta. An example of the second is gold. The price of gold does go up and down a lot, but not in the same direction or at the same time as the market.
A beta above 1 generally means that the asset both is volatile and tends to move up and down with the market. An example |
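A hedged illustration of the beta computation described above (not part of the article text): beta is conventionally estimated as the covariance of the asset's returns with the market's returns divided by the variance of the market's returns. The return series below are invented for demonstration.

```python
import numpy as np

# Illustrative return series (assumptions, not real data).
asset_returns = np.array([0.02, -0.01, 0.03, 0.005, -0.02])
market_returns = np.array([0.015, -0.005, 0.02, 0.0, -0.015])

# beta = Cov(r_asset, r_market) / Var(r_market)
cov = np.cov(asset_returns, market_returns, ddof=1)
beta = cov[0, 1] / cov[1, 1]
print(f"beta = {beta:.2f}")  # > 1: volatile and market-correlated; < 1 otherwise
```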
https://en.wikipedia.org/wiki/Digraphia | In sociolinguistics, digraphia refers to the use of more than one writing system for the same language. Synchronic digraphia is the coexistence of two or more writing systems for the same language, while diachronic digraphia (or sequential digraphia) is the replacement of one writing system by another for a particular language.
Hindustani, with an Urdu literary standard written in Urdu alphabet and a High Hindi standard written in Devanagari, is one of the 'textbook examples' of synchronic digraphia, cases where writing systems are used contemporaneously. An example of diachronic digraphia, where one writing system replaces another, occurs in the case of Turkish, for which the traditional Arabic writing system was replaced with a Latin-based system in 1928.
Digraphia has implications in language planning, language policy, and language ideology.
Terminology
Etymology
English digraphia, like French digraphie, etymologically derives from Greek di- δι- "twice" and -graphia -γραφία "writing".
Digraphia was modeled upon diglossia "the coexistence of two languages or dialects among a certain population", which derives from Greek diglossos δίγλωσσος "bilingual." Charles A. Ferguson, a founder of sociolinguistics, coined diglossia in 1959. Grivelet analyzes how the influence of diglossia on the unrelated notion of digraphia has "introduced some distortion in the process of defining digraphia," such as distinguishing "high" and "low" varieties. Peter Unseth notes one usage of "digraphia" that most closely parallels Ferguson's "diglossia," situations where a language uses different scripts for different domains; for instance, "shorthand in English, pinyin in Chinese for alphabetizing library files, etc. or several scripts which are replaced by Latin script during e-mail usage."
History
The Oxford English Dictionary, which does not yet include digraphia, enters two terms, digraph and digraphic. First, the linguistic term digraph is defined as, "A group of two letters ex |
https://en.wikipedia.org/wiki/Peak%20information%20rate | Peak information rate (PIR) is a burstable rate set on routers and/or switches that allows throughput overhead. It is related to the committed information rate (CIR), which is a committed speed that is guaranteed or capped. For example, a CIR of 10 Mbit/s with a PIR of 12 Mbit/s guarantees a minimum speed of 10 Mbit/s, with burst/spike control that allows up to an additional 2 Mbit/s; this allows data transmission to "settle" into a flow. PIR is defined in MEF Standard 10.4, Subscriber Ethernet Service Attributes.
Excess information rate (EIR) is the magnitude of the burst above the CIR (PIR = EIR + CIR).
Maximum information rate (MIR), in reference to broadband wireless, refers to the maximum bandwidth that will be delivered to the subscriber unit from the wireless access point, in kbit/s.
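The rate relationships above lend themselves to a short illustration. The sketch below classifies a measured flow rate against CIR and PIR; the numeric rates and the absence of burst-window accounting are assumptions for demonstration, not part of any MEF specification.

```python
# Hedged sketch of the rate relationships described above (values are assumptions).
CIR = 10_000_000           # committed information rate, bit/s
PIR = 12_000_000           # peak information rate, bit/s
EIR = PIR - CIR            # excess information rate: PIR = EIR + CIR

def classify(measured_bps: int) -> str:
    """Classify a measured flow rate against CIR/PIR (simplified, no burst windows)."""
    if measured_bps <= CIR:
        return "within committed rate"
    if measured_bps <= PIR:
        return "burst: using excess bandwidth"
    return "violating: above peak rate"

print(EIR)                   # 2000000
print(classify(11_000_000))  # burst: using excess bandwidth
```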
See also
Maximum throughput
Information rate |
https://en.wikipedia.org/wiki/Allomerism | Allomerism is the similarity in the crystalline structure of substances of different chemical composition. |
https://en.wikipedia.org/wiki/Carthamin | Carthamin is a natural red pigment derived from safflower (Carthamus tinctorius), earlier known as carthamine. It is used as a dye and a food coloring. As a food additive, it is known as Natural Red 26.
Safflower has been cultivated since ancient times, and carthamin was used as a dye in ancient Egypt. It was used extensively in the past for dyeing wool for the carpet industry in European countries, and in the dyeing of silk and the creation of cosmetics in Japan, where the color is called beni; however, due to the expensive nature of the dye, Japanese safflower dyestuffs were sometimes diluted with other dyes, such as turmeric and sappan. It competed with the early synthetic dye fuchsine as a silk dye after fuchsine's 1859 discovery.
Carthamin is composed of two chalconoids; the conjugated bonds being the cause of the red color. It is derived from precarthamin by a decarboxylase. It should not be confused with carthamidin, another flavonoid.
Carthamin is biosynthesized from a chalcone (2,4,6,4'-tetrahydroxychalcone) and two glucose molecules to give safflor yellow A, and with another glucose molecule, safflor yellow B. The next step is the formation of precarthamin and finally carthamin. |
https://en.wikipedia.org/wiki/Pickled%20walnuts | Pickled walnuts are a traditional English pickle, made from walnuts. They are considered a suitable accompaniment for a dish of cold turkey or ham, as well as blue cheese. There is a reference to "a mutton chop and a pickled walnut" in The Pickwick Papers by Charles Dickens and a mention in Evelyn Waugh’s Brideshead Revisited.
The process for preparing pickled walnuts takes a little more than a week. The green walnuts are brined before they are pickled. The brining helps with preservation and removes some of the bitterness of the unripe walnuts.
History
Pickled walnuts have been a delicacy in England since at least the early 18th century. They were mentioned in several literary works.
The botanist Richard Bradley describes pickled walnuts in his 1728 book The Country Housewife and Lady's Director.
The Compleat Housewife (London, 1727) gives a recipe for "Another Way to pickle Walnuts". They are first submerged in vinegar for around two months, then boiled in a solution of high-quality vinegar with flavourings: dill seeds, whole nutmeg, peppercorns, mace and ginger root. The walnuts and the boiling pickle are poured into a crock and left until the mixture has cooled. The nuts are then transferred to a gallipot with a large clove-studded garlic clove and mustard seeds on top with spices, covered with vine leaves over which the pickling liquid is poured.
Pickled walnuts are still commonly eaten in England, particularly at Christmas served with an English blue cheese such as Stilton. They are also used in recipes, commonly cooked in beef dishes. |
https://en.wikipedia.org/wiki/Compressibility%20equation | In statistical mechanics and thermodynamics the compressibility equation refers to an equation which relates the isothermal compressibility (and indirectly the pressure) to the structure of the liquid. It reads:
$kT \left( \frac{\partial \rho}{\partial p} \right) = 1 + \rho \int d\mathbf{r} \, [g(r) - 1]$
where $\rho$ is the number density, $g(r)$ is the radial distribution function and $kT \left( \frac{\partial \rho}{\partial p} \right)$ is the isothermal compressibility.
Using the Fourier representation of the Ornstein-Zernike equation the compressibility equation can be rewritten in the form:
$kT \left( \frac{\partial \rho}{\partial p} \right) = 1 + \rho \hat{h}(0) = \frac{1}{1 - \rho \hat{c}(0)}$
where h(r) and c(r) are the indirect and direct correlation functions respectively. The compressibility equation is one of the many integral equations in statistical mechanics. |
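As a hedged illustration of the first form of the equation (not from the article): the 3D integral over $g(r)-1$ reduces to a 1D radial integral, which can be evaluated numerically. The density and the toy $g(r)$ below are assumptions for demonstration.

```python
import numpy as np

# Sketch: evaluate kT*(d rho / d p) = 1 + rho * Integral[g(r) - 1] d^3r
# numerically from a tabulated radial distribution function g(r).
rho = 0.8                                # number density (reduced units, assumed)
r = np.linspace(1e-3, 10.0, 4000)        # radial grid
g = 1.0 + np.exp(-r) * np.cos(3 * r)     # toy g(r) decaying to 1 at large r

# 3D integral of a radial function: Integral f(r) d^3r = 4*pi * Integral f(r) r^2 dr
integral = 4.0 * np.pi * np.trapz((g - 1.0) * r**2, r)
reduced_compressibility = 1.0 + rho * integral
print(f"kT (d rho / d p) = {reduced_compressibility:.4f}")
```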
https://en.wikipedia.org/wiki/Healthcare%20proxy | In the field of medicine, a healthcare proxy (commonly referred to as HCP) is a document (legal instrument) with which a patient (primary individual) appoints an agent to legally make healthcare decisions on behalf of the patient, when the patient is incapable of making and executing the healthcare decisions stipulated in the proxy. Once the healthcare proxy is effective, the agent continues making healthcare decisions for as long as the primary individual remains legally incompetent to decide. Moreover, in legal-administrative functions, the healthcare proxy is a legal instrument akin to a "springing" healthcare power of attorney. The proxy must declare the healthcare agent who will gain durable power of attorney. This document also sets out the authority given by the principal to the agent and states the limitations of this authority.
Those over the age of 18 are allowed to have a healthcare proxy, and these documents are useful in situations that render a person unable to communicate their wishes such as being in a persistent vegetative state, having a form of dementia or an illness that takes away one's ability to effectively communicate, or being under anesthesia when a decision needs to be made. Healthcare proxies are one of three ways that surrogate decision makers are enacted, the other two being court orders and laws for the automatic succession of decision makers. In contrast to a living will, healthcare proxies do not set out possible outcomes with predetermined reactions, rather they appoint someone to carry out the wishes of an individual.
History
The methods of healthcare planning and tools of advanced preparation have changed dramatically over the years. The concept of durable power of attorney arose in Virginia in 1954 for the purpose of settling property matters. This allowed a power of attorney to continue in existence after the original person lost the capacity to carry out the necessary actions. This concept evolved over the years and in 1983, the P |
https://en.wikipedia.org/wiki/John%20Mallard | John Rowland Mallard OBE FRSE FREng (14 January 1927 – 25 February 2021) was an English physicist and professor of Medical Physics at the University of Aberdeen from 1965 until his retirement in 1992. He was known for setting up and leading the team that developed the first magnetic resonance imaging (MRI) full body scanner and, in particular, positron emission tomography (PET). He was born in Kingsthorpe, Northampton, England.
Career
Mallard completed his PhD research into magnetic properties of uranium at University College, Nottingham under Professor Leslie Fleetwood Bates in 1947.
Mallard worked as Assistant Physicist with the Liverpool Radium Institute where he completed his training in hospital physics. He joined Hammersmith Hospital and Post Graduate Medical School in 1953, and in 1959 Mallard developed the first whole-body isotope scanner (homemade) in the UK, used for detecting a brain tumour, with C. J. Peachey. Mallard published his theories on electron spin resonance and cancer in the journal Nature in 1964 but they went largely unnoticed. In 1965 he was appointed the first chair of Medical Physics at the University of Aberdeen, predicting at his first lecture that positron emission tomography (PET) would become one of the most important tools for diagnosis and studying of diseases. Mallard brought to Scotland its first PET scanner, leading a national fundraising campaign and agreeing to bring a second-hand research machine from London. The scanner was located in a facility next to Woodend Hospital, which has been since replaced by the John Mallard PET Centre at the Aberdeen Royal Infirmary.
In the 1970s, Mallard set up and led a team, which included James Hutchison and Dr Bill Edelstein, to build the first MRI full body scanner. The scanner was first used on 28 August 1980, to scan a terminal cancer patient, before being replaced in 1983. During the 1980s, Mallard discovered "spin warp imaging", a technique that could produce three-dimensional images |
https://en.wikipedia.org/wiki/Integrated%20Microbial%20Genomes%20System | The Integrated Microbial Genomes (IMG) system is a genome browsing and annotation platform developed by the U.S. Department of Energy (DOE)-Joint Genome Institute. IMG contains all the draft and complete microbial genomes sequenced by the DOE-JGI integrated with other publicly available genomes (including Archaea, Bacteria, Eukarya, Viruses and Plasmids). IMG provides users a set of tools for comparative analysis of microbial genomes along three dimensions: genes, genomes and functions. Users can select genes, genomes and functions and transfer them into comparative analysis carts based upon a variety of criteria. IMG also includes a genome annotation pipeline that integrates information from several tools, including KEGG, Pfam, InterPro, and the Gene Ontology, among others. Users can also type or upload their own gene annotations (called MyIMG gene annotations) and the IMG system will allow them to generate GenBank or EMBL format files containing these annotations.
In successive releases IMG has expanded to include several domain-specific tools. The Integrated Microbial Genomes with Microbiome Samples (IMG/M) system is an extension of the IMG system providing a comparative analysis context of assembled metagenomic data with the publicly available isolate genomes. The Integrated Microbial Genomes- Expert Review (IMG/ER) system provides support to individual scientists or groups of scientists for functional annotation and curation of their microbial genomes of interest. Users can submit their annotated genomes (or request the IMG automated annotation pipeline to be applied first) into IMG-ER and proceed with manual curation and comparative analysis in the system, through secure (password protected) access. The IMG-HMP is focused on analysis of genomes related to the Human Microbiome Project (HMP) in the context of all publicly available genomes in IMG. The IMG-ABC system is a system for bacterial secondary metabolism analysis and targeted biosynthetic gene cluster discovery. The IMG-VR system |
https://en.wikipedia.org/wiki/History%20of%20cardiopulmonary%20resuscitation | The history of cardiopulmonary resuscitation (CPR) can be traced as far back as the literary works of ancient Egypt (c. 2686 – c. 2181 BCE). However, it was not until the 18th century that credible reports of cardiopulmonary resuscitation began to appear in the medical literature.
Mouth-to-mouth ventilation has been used for centuries as an element of CPR, but it fell out of favor in the late 19th century with the widespread adoption of manual resuscitative techniques such as the Marshall Hall method, Silvester's method, the Shafer method and the Holger Nielsen technique. The technique of mouth-to-mouth ventilation would not come back into favor until the late 1950s, after its "accidental rediscovery" by James Elam.
The modern elements of resuscitation for sudden cardiac arrest include CPR (consisting of ventilation of the lungs and chest compressions), defibrillation and emergency medical services (the means to bring these techniques to the patient quickly).
Earliest descriptions
The earliest references to CPR can be found in ancient Egyptian literature of the Old Kingdom of Egypt, in which Isis resurrected Osiris (her slain brother and husband) with the breath of life.
Other early references from the Iron Age can be found in the Bible. For example, according to the Genesis creation narrative, God breathed life into the nostrils of the first man. Later, according to the first Book of Kings, the prophet Elijah resuscitated a Phoenician boy in the city of Zarephath. This is the first instance of resurrection of the dead recorded in the Bible. In the second Book of Kings, Elisha, the disciple and protégé of Elijah, successfully performed mouth-to-mouth resuscitation on another apparently dead child, this time in the village of Shunem.
Renaissance
Burhan-ud-din Kermani, a physician in 15th century Persia, described his approach to the treatment of ghashy (cardiac and respiratory insufficiency), which involved moving the victim's arms and expanding and compressi |
https://en.wikipedia.org/wiki/Common%20Public%20Radio%20Interface | The Common Public Radio Interface (CPRI) standard defines an interface between Radio Equipment Control (REC) and Radio Equipment (RE). Oftentimes, CPRI links are used to carry data between cell sites and base stations.
The purpose of CPRI is to allow replacement of a copper or coax cable connection between a radio transceiver (used, for example, for mobile-telephone communication and typically located in a tower) and a base station (typically located on the ground nearby), so the connection can be made to a remote and more convenient location. This connection (often referred to as the fronthaul network) can be a fiber to an installation where multiple remote base stations may be served. This fiber supports both single-mode and multi-mode communication. The fiber end is terminated with a Small Form-factor Pluggable (SFP) transceiver device.
The companies working to define the specification include Ericsson AB, Huawei Technologies Co. Ltd, NEC Corporation and Nokia.
See also
Open Base Station Architecture Initiative (OBSAI)
Remote radio head (RRH) |
https://en.wikipedia.org/wiki/Coulomb%20damping | Coulomb damping is a type of constant mechanical damping in which the system's kinetic energy is absorbed via sliding friction (the friction generated by the relative motion of two surfaces that press against each other). Coulomb damping is a common damping mechanism that occurs in machinery.
History
Coulomb damping was so named because Charles-Augustin de Coulomb carried out research in mechanics. He later published a work on friction in 1781, entitled "Theory of Simple Machines", for an Academy of Sciences contest. Coulomb then gained much fame for his work on electricity and magnetism.
Modes of Coulombian friction
Coulomb damping absorbs energy with friction, which converts that kinetic energy into thermal energy, i.e. heat. Coulomb friction considers this under two distinct modes: static and kinetic.
Static friction occurs when two objects are not in relative motion, e.g. if both are stationary. The friction force exerted between the objects does not exceed, in magnitude, the product of the normal force $N$ and the coefficient of static friction $\mu_s$:
$F_s \le \mu_s N$.
Kinetic friction, on the other hand, occurs when two objects are undergoing relative motion, as they slide against each other. The force exerted between the moving objects is equal in magnitude to the product of the normal force $N$ and the coefficient of kinetic friction $\mu_k$:
$F_k = \mu_k N$.
Regardless of the mode, friction always acts to oppose the objects' relative motion. The normal force is taken perpendicularly to the direction of relative motion; under the influence of gravity, and in the common case of an object supported by a horizontal surface, the normal force is just the weight of the object itself.
As there is no relative motion under static friction, no work is done, and hence no energy can be dissipated. An oscillating system is therefore (by definition) only damped via kinetic friction.
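The two modes above can be made concrete with a short sketch. The snippet below is a minimal illustration, under assumed parameter values, of the static force bound $F_s \le \mu_s N$ and the constant kinetic force $F_k = \mu_k N$; it neglects the transition dynamics between sticking and slipping.

```python
import numpy as np

# Minimal sketch of the two Coulomb friction modes (parameters are assumptions).
mu_s, mu_k = 0.30, 0.25      # static and kinetic friction coefficients
m, g = 2.0, 9.81             # mass (kg) and gravitational acceleration (m/s^2)
N = m * g                    # normal force for a block on a horizontal surface

def friction_force(v: float, applied: float) -> float:
    """Return the friction force opposing motion.

    Static mode (v == 0): friction cancels the applied force up to mu_s * N.
    Kinetic mode (v != 0): friction has magnitude mu_k * N, opposing velocity.
    """
    if v == 0.0:
        return -float(np.clip(applied, -mu_s * N, mu_s * N))
    return -mu_k * N * float(np.sign(v))

print(friction_force(0.0, 4.0))  # -4.0   (static: block stays put)
print(friction_force(0.0, 9.0))  # -5.886 (applied exceeds mu_s*N; slipping begins)
print(friction_force(1.5, 0.0))  # -4.905 (kinetic: constant magnitude mu_k*N)
```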
Illustration
Consider a block of mass $m$ that slides over a rough horizontal surface under the restraint of a spring with a spring constant $k$. The s |
https://en.wikipedia.org/wiki/Euprymna%20scolopes |
Euprymna scolopes, also known as the Hawaiian bobtail squid, is a species of bobtail squid in the family Sepiolidae native to the central Pacific Ocean, where it occurs in shallow coastal waters off the Hawaiian Islands and Midway Island. The type specimen was collected off the Hawaiian Islands and is located at the National Museum of Natural History in Washington, D.C.
Euprymna scolopes grows to in mantle length. Hatchlings weigh and mature in 80 days. Adults weigh up to .
In the wild, E. scolopes feeds on species of shrimp, including Halocaridina rubra, Palaemon debilis, and Palaemon pacificus. In the laboratory, E. scolopes has been reared on a varied diet of animals, including mysids (Anisomysis sp.), brine shrimp (Artemia salina), mosquitofish (Gambusia affinis), prawns (Leander debilis), and octopuses (Octopus cyanea).
The Hawaiian monk seal (Monachus schauinslandi) preys on E. scolopes in northwestern Hawaiian waters.
On June 3, 2021, SpaceX CRS-22 launched E. scolopes, along with tardigrades, to the International Space Station. The squid were launched as hatchlings and will be studied to see if they can incorporate their symbiotic bacteria into their light organ while in space.
Symbiosis
Euprymna scolopes lives in a symbiotic relationship with the bioluminescent bacteria Aliivibrio fischeri, which inhabits a special light organ in the squid's mantle. The bacteria are fed a sugar and amino acid solution by the squid and in return hide the squid's silhouette when viewed from below by matching the amount of light hitting the top of the mantle (counter-illumination). E. scolopes serves as a model organism for animal-bacterial symbiosis and its relationship with A. fischeri has been carefully studied.
Acquisition
The bioluminescent bacterium, A. fischeri, is horizontally transmitted throughout the E. scolopes population. Hatchlings lack these necessary bacteria and must carefully select for them in a marine world saturated with other microorganisms.
To |
https://en.wikipedia.org/wiki/Galactosylceramide | A galactosylceramide, or galactocerebroside, is a type of cerebroside consisting of a ceramide with a galactose residue at the 1-hydroxyl moiety.
The galactose is cleaved by galactosylceramidase.
Galactosylceramide is a marker for oligodendrocytes in the brain, whether or not they form myelin.
Additional images
See also
Alpha-Galactosylceramide
Krabbe disease
Myelin |
https://en.wikipedia.org/wiki/OLE%20DB%20for%20OLAP | OLE DB for OLAP (Object Linking and Embedding Database for Online Analytical Processing abbreviated ODBO) is a Microsoft published specification and an industry standard for multi-dimensional data processing. ODBO is the standard application programming interface (API) for exchanging metadata and data between an OLAP server and a client on a Windows platform. ODBO extends the ability of OLE DB to access multi-dimensional (OLAP) data stores.
Description
ODBO is the most widely supported, multi-dimensional API to date. Platform-specific to Microsoft Windows, ODBO was specifically designed for Online Analytical Processing (OLAP) systems by Microsoft as an extension to Object Linking and Embedding Database (OLE DB). ODBO uses Microsoft’s Component Object Model.
ODBO permits independent software vendors (ISVs) and corporate developers to create a single set of standard interfaces that allow OLAP clients to access multi-dimensional data, regardless of vendor or data source. ODBO is currently supported by a wide spectrum of server and client tools.
When exposing the ODBO interface, the underlying multi-dimensional database must also support the MDX Query Language. XML for Analysis is a newer interface to MDX Data Sources that is often supported in parallel with ODBO.
See also
XML for Analysis |
https://en.wikipedia.org/wiki/History%20of%20algebra | Algebra can essentially be considered as doing computations similar to those of arithmetic but with non-numerical mathematical objects. However, until the 19th century, algebra consisted essentially of the theory of equations. For example, the fundamental theorem of algebra belongs to the theory of equations and is not, nowadays, considered as belonging to algebra (in fact, every proof must use the completeness of the real numbers, which is not an algebraic property).
This article describes the history of the theory of equations, called here "algebra", from the origins to the emergence of algebra as a separate area of mathematics.
Etymology
The word "algebra" is derived from the Arabic word الجبر al-jabr, and this comes from the treatise written in the year 830 by the medieval Persian mathematician, Al-Khwārizmī, whose Arabic title, Kitāb al-muḫtaṣar fī ḥisāb al-ğabr wa-l-muqābala, can be translated as The Compendious Book on Calculation by Completion and Balancing. The treatise provided for the systematic solution of linear and quadratic equations. According to one history, "[i]t is not certain just what the terms al-jabr and muqabalah mean, but the usual interpretation is similar to that implied in the previous translation. The word 'al-jabr' presumably meant something like 'restoration' or 'completion' and seems to refer to the transposition of subtracted terms to the other side of an equation; the word 'muqabalah' is said to refer to 'reduction' or 'balancing'—that is, the cancellation of like terms on opposite sides of the equation. Arabic influence in Spain long after the time of al-Khwarizmi is found in Don Quixote, where the word 'algebrista' is used for a bone-setter, that is, a 'restorer'." The term is used by al-Khwarizmi to describe the operations that he introduced, "reduction" and "balancing", referring to the transposition of subtracted terms to the other side of an equation, that is, the cancellation of like terms on opposite sides of the equation. |
https://en.wikipedia.org/wiki/Environment%20of%20Australia | The Australian environment ranges from virtually pristine Antarctic territory and rainforests to degraded industrial areas of major cities. Forty distinct ecoregions have been identified across the Australian mainland and islands. Central Australia has a very dry climate. The interior has a number of deserts, while most of the coastal areas are populated. Northern Australia experiences tropical cyclones, while much of the country is prone to periodic drought. This dry and warm environment and exposure to cyclones make Australia particularly vulnerable to climate change, with some areas already experiencing increased wildfires and increasingly fragile ecosystems.
The island ecology of Australia has led to a number of unique endemic plant and animal species, notably marsupials like the kangaroo and koala. Agriculture and mining are predominant land uses which cause negative impacts on many different ecosystems. The management of the impact on the environment from the mining industry, the protection of the Great Barrier Reef, forests and native animals are recurring issues of conservation.
The protected areas in Australia are important sources of ecotourism; sites like the Great Barrier Reef and World Heritage sites like the Tasmanian Wilderness World Heritage Area or the Uluṟu-Kata Tjuṯa National Park draw both national and international tourism. Clean Up Australia Day was an initiative developed in 1989 to collaboratively clean up local areas and is held on the first Sunday of autumn (in March).
Protected areas
Protected areas cover 895,288 km2 of Australia's land area, or about 11.5% of the total land area. Of these, two-thirds are considered strictly protected (IUCN categories I to IV), and the rest is mostly managed resource protected area (IUCN category VI). There are also 200 marine protected areas, which cover a further 64.8 million hectares. Indigenous Protected Areas have been established since the 1990s, the largest of which covers part of the Tanami Dese |
https://en.wikipedia.org/wiki/Generator%20%28category%20theory%29 | In mathematics, specifically category theory, a family of generators (or family of separators) of a category $\mathcal{C}$ is a collection $\{G_i : i \in I\}$ of objects in $\mathcal{C}$, such that for any two distinct morphisms $f, g : X \to Y$ in $\mathcal{C}$, that is with $f \neq g$, there is some $G_i$ in the collection and some morphism $h : G_i \to X$ such that $f \circ h \neq g \circ h$. If the collection consists of a single object $G$, we say it is a generator (or separator).
Generators are central to the definition of Grothendieck categories.
The dual concept is called a cogenerator or coseparator.
Examples
In the category of abelian groups, the group of integers $\mathbf{Z}$ is a generator: If $f$ and $g$ are different, then there is an element $x \in X$, such that $f(x) \neq g(x)$. Hence the map $\mathbf{Z} \to X$, $n \mapsto nx$, suffices.
Similarly, the one-point set is a generator for the category of sets. In fact, any nonempty set is a generator.
In the category of sets, any set with at least two elements is a cogenerator.
In the category of modules over a ring R, a generator is a module G such that a finite direct sum of copies of G contains an isomorphic copy of R as a direct summand. Consequently, a generator module is faithful, i.e. has zero annihilator. |
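As a hedged verification that R itself is a generator of its module category (a standard argument, added for illustration): module maps that differ must differ at some element, and that element can be hit by a map out of R.

```latex
% If f, g : M -> N are distinct R-module maps, pick m in M with f(m) != g(m);
% then h : R -> M, r |-> r m, separates them, since
\[
  (f \circ h)(1) = f(m) \neq g(m) = (g \circ h)(1).
\]
```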
https://en.wikipedia.org/wiki/Auto%20insurance%20risk%20selection | Auto insurance risk selection is the process by which vehicle insurers determine whether or not to insure an individual and what insurance premium to charge. Depending on the jurisdiction, the insurance premium can be either mandated by the government or determined by the insurance company in accordance with a framework of regulations set by the government. Often, the insurer will have more freedom to set the price on physical damage coverages than on mandatory liability coverages.
When the premium is not mandated by the government, it is usually derived from the calculations of an actuary based on statistical data. The premium can vary depending on many factors that are believed to affect the expected cost of future claims. Those factors can include the car characteristics, the coverage selected (deductible, limit, covered perils), the profile of the driver (age, gender, driving history) and the usage of the car (commute to work or not, predicted annual distance driven).
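As a hedged illustration of factor-based pricing (a toy example, not any insurer's actual rating plan): premiums are often built by applying multiplicative relativities for each rating factor to a base rate. All names and numbers below are assumptions.

```python
# Toy multiplicative rating sketch (all factors are illustrative assumptions).
BASE_PREMIUM = 500.0  # assumed annual base rate for the liability coverage

def premium(age: int, annual_km: int, clean_record: bool, deductible: int) -> float:
    rate = BASE_PREMIUM
    rate *= 1.6 if age < 25 else 1.0             # young-driver surcharge
    rate *= 1.2 if annual_km > 20_000 else 1.0   # high-mileage surcharge
    rate *= 0.9 if clean_record else 1.5         # driving-history factor
    rate *= 0.95 if deductible >= 1000 else 1.0  # higher-deductible discount
    return round(rate, 2)

print(premium(age=30, annual_km=12_000, clean_record=True, deductible=1000))  # 427.5
```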
History
Conventional methods for determining costs of motor vehicle insurance involve gathering relevant historical data from a personal interview with, or a written application completed by, the applicant for the insurance and by referencing the applicant's public motor vehicle driving record that is maintained by a governmental agency, such as a Bureau of Motor Vehicles. Such data results in a classification of the applicant to a broad actuarial class for which insurance rates are assigned based upon the empirical experience of the insurer. Many factors are deemed relevant to such classification in a particular actuarial class or risk level, such as age, sex, marital status, location of residence and driving record.
The current system of insurance creates groupings of vehicles and drivers (actuarial classes) based on the following types of classifications.
Vehicle: Age; manufacturer, model; and value.
Driver: Age; sex; marital status; driving record (based on government reports), violations |
https://en.wikipedia.org/wiki/Photonics%20Spectra | Photonics Spectra is a monthly business-to-business (B2B) magazine published for the engineers, scientists, and end users who develop, commercialize and buy photonic products. It provides both technical and applications information for all aspects of the global industry, integrating all segments of photonics: optics, lasers, imaging, fiber optics and electro-optics as well as photonic component manufacturing, solar cell improvements, LED lighting for cars and offices, THz, EHz, UV, IR, and visible light imaging and test equipment.
In addition to news and feature articles, Photonics Spectra contains business reports, technology updates, reader forums, new products and literature, calendars of conferences and courses, and applications reports.
Photonics Spectra has been published since 1967 by Laurin Publishing Company, Inc. in Pittsfield, MA, United States.
History
The first Optical Industry Directory was published in 1954 by Dr. Clifton Tuttle, an eminent retired Eastman Kodak physicist. At its inception the Directory was a small single volume. It succeeded notably, expanding over the years into the present multimedia publication.
Theresa "Teddi" C. Laurin (1924 - November 5, 2015) joined the company in 1962 and, as publisher, worked closely with Dr. Tuttle. In 1964 Francis T. Laurin and Teddi C. Laurin purchased and incorporated the company, which later became known as Laurin Publishing Company. In 1967, in response to industry demands, she founded and launched Optical Spectra. In 1982 the magazine's name was changed to Photonics Spectra to reflect the growing influence of these new light-based technologies. Today, the worldwide distribution of Photonics Spectra is over 100,000 copies.
Laurin Publishing currently maintains a staff of over 50 employees at its headquarters in Pittsfield, Mass. and at its editorial and sales branch offices. The company also includes several contributing editors located around the world and an editorial advisory board of over 25 |
https://en.wikipedia.org/wiki/Support%20polygon | For a rigid object in contact with a fixed environment and acted upon by gravity in the vertical direction, its support polygon is a horizontal region over which the center of mass must lie to achieve static stability. For example, for an object resting on a horizontal surface (e.g. a table), the support polygon is the convex hull of its "footprint" on the table.
The support polygon succinctly represents the conditions necessary for an object to be at equilibrium under gravity. That is, if the object's center of mass lies over the support polygon, then there exists a set of forces over the region of contact that exactly counteracts the forces of gravity. Note that this is a necessary condition for stability, but not a sufficient one.
Derivation
Let the object be in contact at a finite number of points $c_1, \ldots, c_N$. At each point $c_k$, let $FC_k$ be the set of forces that can be applied on the object at that point. Here, $FC_k$ is known as the friction cone, and for the Coulomb model of friction, it is actually a cone with apex at the origin, extending to infinity in the normal direction of the contact.
Let $f_k \in FC_k$ be the (unspecified) forces at the contact points. To balance the object in static equilibrium, the following Newton-Euler equations must be met on $f_1, \ldots, f_N$:
$\sum_k f_k + G = 0$
$\sum_k c_k \times f_k + x \times G = 0$
$f_k \in FC_k$ for all $k$
where $G$ is the force of gravity on the object, and $x$ is its center of mass. The first two equations are the Newton-Euler equations, and the third requires all forces to be valid. If there is no set of forces that meet all these conditions, the object will not be in equilibrium.
The second equation has no dependence on the vertical component of the center of mass, and thus if a solution exists for one position $x$ of the center of mass, the same solution works for all $x + \alpha \hat{z}$. Therefore, the set of all $x$ that have solutions to the above conditions is a set that extends infinitely in the up and down directions. The support polygon is simply the projection of this set on the horizontal plane.
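A hedged sketch of the horizontal-surface special case described above (the convex hull of the contact points' projections); the helper names and the use of SciPy/Matplotlib are assumptions for illustration, not part of the article.

```python
import numpy as np
from matplotlib.path import Path
from scipy.spatial import ConvexHull

def support_polygon(contacts):
    """Convex hull of the contact points' horizontal (x, y) projections."""
    pts = np.asarray(contacts, dtype=float)[:, :2]  # drop the vertical coordinate
    hull = ConvexHull(pts)
    return pts[hull.vertices]                       # hull vertices in CCW order

def statically_stable(com, contacts):
    """Necessary condition: the center of mass projects inside the polygon."""
    return Path(support_polygon(contacts)).contains_point(com[:2])

# Four "table feet" on a horizontal surface and a center of mass above them.
feet = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(statically_stable(np.array([0.5, 0.5, 0.8]), feet))  # True
print(statically_stable(np.array([1.5, 0.5, 0.8]), feet))  # False (tips over)
```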
These results can easily be extended to different friction models and |
https://en.wikipedia.org/wiki/1%20%2B%202%20%2B%204%20%2B%208%20%2B%20%E2%8B%AF | In mathematics, is the infinite series whose terms are the successive powers of two. As a geometric series, it is characterized by its first term, 1, and its common ratio, 2. As a series of real numbers it diverges to infinity, so the sum of this series is infinity.
However, it can be manipulated to yield a number of mathematically interesting results. For example, many summation methods are used in mathematics to assign numerical values even to a divergent series. In particular, the Ramanujan summation of this series is −1, which is the limit of the series using the 2-adic metric.
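A hedged check of the 2-adic claim (a standard computation, not quoted from the article): the nth partial sum is $2^{n+1}-1$, and powers of 2 shrink in the 2-adic absolute value, so the partial sums converge 2-adically to −1.

```latex
\[
  s_n = 1 + 2 + \cdots + 2^n = 2^{n+1} - 1, \qquad
  |2^{n+1}|_2 = 2^{-(n+1)} \to 0,
\]
% hence s_n -> -1 in the 2-adic integers Z_2.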
Summation
The partial sums of $1+2+4+8+\cdots$ are $1, 3, 7, 15, \ldots$; since these diverge to infinity, so does the series.
It is written as:
$1 + 2 + 4 + 8 + \cdots = \sum_{k=0}^{\infty} 2^k = \infty$
Therefore, any totally regular summation method gives a sum of infinity, including the Cesàro sum and Abel sum. On the other hand, there is at least one generally useful method that sums $1+2+4+8+\cdots$ to the finite value of −1. The associated power series
$f(x) = 1 + 2x + 4x^2 + 8x^3 + \cdots + 2^n x^n + \cdots = \frac{1}{1-2x}$
has a radius of convergence around 0 of only $\frac{1}{2}$, so it does not converge at $x = 1$. Nonetheless, the so-defined function $f$ has a unique analytic continuation to the complex plane with the point $x = \frac{1}{2}$ deleted, and it is given by the same rule $f(x) = \frac{1}{1-2x}$. Since $f(1) = -1$, the original series is said to be summable (E) to −1, and −1 is the (E) sum of the series. (The notation is due to G. H. Hardy in reference to Leonhard Euler's approach to divergent series).
An almost identical approach (the one taken by Euler himself) is to consider the power series whose coefficients are all 1, that is,
$1 + y + y^2 + y^3 + \cdots = \frac{1}{1-y}$
and plugging in $y = 2$. These two series are related by the substitution $y = 2x$.
The fact that (E) summation assigns a finite value to $1+2+4+8+\cdots$ shows that the general method is not totally regular. On the other hand, it possesses some other desirable qualities for a summation method, including stability and linearity. These latter two axioms actually force the sum to be −1, since they make the following manipulation valid:
$s = 1 + 2 + 4 + 8 + \cdots = 1 + 2(1 + 2 + 4 + \cdots) = 1 + 2s$,
so that $s = -1$.
In a useful sense, $s = \infty$ is a root of the equation $s = 1 + 2s$. (For example, $\infty$ is one of the two fixed |
https://en.wikipedia.org/wiki/Michael%20Luby | Michael George Luby is a mathematician and computer scientist, CEO of BitRipple, senior research scientist at the International Computer Science Institute (ICSI), former VP Technology at Qualcomm, co-founder and former chief technology officer of Digital Fountain. In coding theory he is known for leading the invention of the Tornado codes and the LT codes. In cryptography he is known for his contributions showing that any one-way function can be used as the basis for private cryptography, and for his analysis, in collaboration with Charles Rackoff, of the Feistel cipher construction. His distributed algorithm to find a maximal independent set in a computer network has also been influential.
Luby received his B.Sc. in mathematics from Massachusetts Institute of Technology in 1975. In 1983 he was awarded a Ph.D. in computer science from the University of California, Berkeley. In 1996–1997, while at the ICSI, he led the team that invented Tornado codes. These were the first LDPC codes based on an irregular degree design, a feature that proved crucial to all later good LDPC code designs; they provably achieve channel capacity for the erasure channel and have linear-time encoding and decoding algorithms. In 1998 Luby left ICSI to found the Digital Fountain company, and shortly thereafter invented the LT codes, the first practical fountain codes. Qualcomm acquired Digital Fountain in 2009.
Awards
Luby's publications have won the 2002 IEEE Information Theory Society Information Theory Paper Award for leading the design and analysis of the first irregular LDPC error-correcting codes,
the 2003 SIAM Outstanding Paper Prize for the seminal paper showing how to construct a cryptographically unbreakable pseudo-random generator from any one-way function,
and the 2009 ACM SIGCOMM Test of Time Award.
In 2016 he was awarded the ACM Edsger W. Dijkstra Prize in Distributed Computing; the prize is given "for outstanding papers on the principles of distributed computing, who |
https://en.wikipedia.org/wiki/Fluency%20Voice%20Technology | Fluency Voice Technology was a company that developed and sold packaged speech recognition solutions for use in call centers. Fluency's speech recognition solutions were used by call centers worldwide to improve customer service and significantly reduce costs, and were available both on-premises and hosted.
History
1998 – Fluency was created as a spin-off from the Voice Research & Development team of a company called netdecisions. This R&D operation was established in Cambridge, UK. The focus of the development was speech recognition systems based on the VXML standard.
2001 – Fluency became a separate entity in May 2001. Fluency began the creation of a software development platform specifically aimed at automating call center activities. This platform became Fluency's VoiceRunner.
2002 to 2004 – Fluency accomplishes many successful deployments at customer sites such as National Express and Barclaycard.
2003 – Fluency expands into the USA. Fluency also acquires Vocalis of Cambridge, UK in August 2003.
2004 – Fluency receives a £6 million investment from leading European venture capitalists, establishes a global OEM partnership with Avaya, and acquires SRC Telecom.
2008 – Fluency is acquired by Syntellect Ltd
Customers
Call centers around the world used Fluency to improve service and reduce costs. They included Travelodge, Standard Life Bank, Sutton and East Surrey Water, Pizza Hut, CWT, Barclays, Powergen, First Choice, OutRight, J D Williams, Capital Blue Cross, Chelsea Building Society, EDF, bss, TV Licensing and Capita Software Services.
See also
Speech recognition |
https://en.wikipedia.org/wiki/Fernseh | The Fernseh AG television company was registered in Berlin on July 3, 1929, by John Logie Baird, Robert Bosch, Zeiss Ikon and D.S. Loewe as partners, with an initial capital of 100,000 Reichsmark. John Baird owned Baird Television Ltd. in London, Zeiss Ikon was a camera company in Dresden, D.S. Loewe owned a company in Berlin, and Robert Bosch owned Robert Bosch GmbH in Stuttgart. Fernseh AG did research and manufacturing of television equipment.
Etymology
The company name "Fernseh AG" is a compound of Fernsehen ‘television’ and Aktiengesellschaft (AG) ‘joint-stock company’. The company was mainly known by its German abbreviation "FESE".
Early years
In 1929 Fernseh AG's original board of directors included Emanuel Goldberg, Oliver George Hutchinson (for Baird), David Ludwig Loewe, and Erich Carl Rassbach (for Bosch), with Eberhard Falkenstein doing the legal work.
Carl Zeiss's company worked alongside the early Bosch company. Much of the early work was in the area of research and development. Along with early TV sets (DE-6, E1, DE10) Fernseh AG made the first "Remote Truck"/"OB van", an "intermediate-film" mobile television camera in August 1932. This was a film camera that had its film developed in the truck and a "telecine" then transmitted the signal almost "live".
Fernseh GmbH
In 1939 Robert Bosch GmbH took complete ownership of Fernseh AG when Zeiss Ikon AG sold its share of Fernseh AG.
In 1952 Fernseh moved to Darmstadt, Germany, and increased its broadcast product line.
In 1967 Fernseh, by then commonly called "Bosch Fernseh", introduced color TV products. Fernseh offered a full line of video and film equipment: professional video cameras, VTRs and telecine devices. On August 27, 1967, the first color TV program in Germany aired, with a live broadcast from a Bosch Fernseh outside broadcast (OB) van. The networks ZDF, NDR and WDR each acquired a new color OB van from Bosch Fernseh to begin broadc |
https://en.wikipedia.org/wiki/Ensemble%20Kalman%20filter | The ensemble Kalman filter (EnKF) is a recursive filter suitable for problems with a large number of variables, such as discretizations of partial differential equations in geophysical models. The EnKF originated as a version of the Kalman filter for large problems (essentially, the covariance matrix is replaced by the sample covariance), and it is now an important data assimilation component of ensemble forecasting. EnKF is related to the particle filter (in this context, a particle is the same thing as an ensemble member) but the EnKF makes the assumption that all probability distributions involved are Gaussian; when it is applicable, it is much more efficient than the particle filter.
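A minimal sketch of one stochastic EnKF analysis step, assuming a linear observation operator and Gaussian observation errors (function and variable names are illustrative, not from any particular library): the sample covariance of the ensemble replaces the Kalman covariance, and the observations are perturbed so the analysis ensemble keeps the right spread.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """One stochastic EnKF analysis step (illustrative sketch).

    X : (n, N) forecast ensemble of N state vectors
    y : (m,)   observation vector
    H : (m, n) linear observation operator
    R : (m, m) observation-error covariance
    """
    n, N = X.shape
    # Sample covariance of the ensemble stands in for the Kalman covariance.
    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    P = A @ A.T / (N - 1)                          # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # Perturbed observations give the analysis ensemble the correct spread.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(y.size), R, size=N).T
    return X + K @ (Y - H @ X)                     # analysis ensemble

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 50))        # 50-member ensemble of a 3-dim state
H = np.eye(2, 3)                    # observe the first two components
R = 0.1 * np.eye(2)
Xa = enkf_analysis(X, np.array([0.5, -0.2]), H, R, rng)
print(Xa.mean(axis=1))              # analysis mean pulled toward the data
```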
Introduction
The ensemble Kalman filter (EnKF) is a Monte Carlo implementation of the Bayesian update problem: given a probability density function (PDF) of the state of the modeled system (the prior, called often the forecast in geosciences) and the data likelihood, Bayes' theorem is used to obtain the PDF after the data likelihood has been taken into account (the posterior, often called the analysis). This is called a Bayesian update. The Bayesian update is combined with advancing the model in time, incorporating new data from time to time. The original Kalman filter, introduced in 1960, assumes that all PDFs are Gaussian (the Gaussian assumption) and provides algebraic formulas for the change of the mean and the covariance matrix by the Bayesian update, as well as a formula for advancing the mean and covariance in time provided the system is linear. However, maintaining the covariance matrix is not feasible computationally for high-dimensional systems. For this reason, EnKFs were developed. EnKFs represent the distribution of the system state using a collection of state vectors, called an ensemble, and replace the covariance matrix by the sample covariance computed from the ensemble. The ensemble is operated with as if it were a random sample, but the ensemble members are real |
https://en.wikipedia.org/wiki/Artin%E2%80%93Rees%20lemma | In mathematics, the Artin–Rees lemma is a basic result about modules over a Noetherian ring, along with results such as the Hilbert basis theorem. It was proved in the 1950s in independent works by the mathematicians Emil Artin and David Rees; a special case was known to Oscar Zariski prior to their work.
An intuitive characterization of the lemma involves the notion that a submodule N of a module M over some ring A with specified ideal I holds a priori two topologies: one induced by the topology on M, and the other when considered with the I-adic topology over A. Then Artin-Rees dictates that these topologies actually coincide, at least when A is Noetherian and M finitely-generated.
One consequence of the lemma is the Krull intersection theorem. The result is also used to prove the exactness property of completion. The lemma also plays a key role in the study of ℓ-adic sheaves.
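As a hedged sketch of the first consequence mentioned above, the standard derivation of the Krull intersection theorem from the lemma runs as follows (assuming I lies in the Jacobson radical and M is finitely generated):

```latex
% Let N be the intersection of the I^n M. Artin–Rees gives k such that,
% for n > k,
\[
  N \;=\; I^n M \cap N \;=\; I^{\,n-k}\bigl(I^k M \cap N\bigr) \;\subseteq\; IN,
\]
% hence N = IN; Nakayama's lemma then forces N = 0.
```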
Statement
Let I be an ideal in a Noetherian ring R; let M be a finitely generated R-module and let N be a submodule of M. Then there exists an integer k ≥ 1 so that, for n ≥ k,
$I^{n} M \cap N = I^{n-k} \left( (I^{k} M) \cap N \right).$
Proof
The lemma immediately follows from the fact that R is Noetherian once necessary notions and notations are set up.
For any ring R and an ideal I in R, we set $B_I R = \bigoplus_{n=0}^{\infty} I^n$ (B for blow-up). We say a decreasing sequence of submodules $M = M_0 \supset M_1 \supset M_2 \supset \cdots$ is an I-filtration if $I M_n \subset M_{n+1}$; moreover, it is stable if $I M_n = M_{n+1}$ for sufficiently large n. If M is given an I-filtration, we set $B_I M = \bigoplus_{n=0}^{\infty} M_n$; it is a graded module over $B_I R$.
Now, let M be an R-module with the I-filtration $\{M_n\}$ by finitely generated R-modules. We make an observation:
$B_I M$ is a finitely generated module over $B_I R$ if and only if the filtration is I-stable.
Indeed, if the filtration is I-stable, then $B_I M$ is generated by the first $k+1$ terms $M_0, \ldots, M_k$, and those terms are finitely generated; thus, $B_I M$ is finitely generated. Conversely, if it is finitely generated, say, by some homogeneous elements in $\bigoplus_{j=0}^{k} M_j$, then, for $n \ge k$, each $f$ in $M_n$ can be written as
$f = \sum a_{ij} g_{ij}, \quad a_{ij} \in I^{n-j}$
with the generators $g_{ij}$ in $M_j$, $j \le k$. That is, $f \in I^{n-k} M_k$, so $M_n = I^{n-k} M_k$; the filtration is therefore I-stable.
We can now prove the lemma, assuming R is Noetherian. Let $M_n = I^n M$. T |
https://en.wikipedia.org/wiki/XM%20PCR | The XM PCR is a satellite receiver sold by XM Radio and discontinued in 2004 amidst piracy concerns. Third-party programs allowed users to record every song played on an XM channel, quickly and cheaply building an MP3 library.
History
The Personal Computer Receiver (PCR) was first announced in 2003. The next year, XM pulled the PCRs from the market, reportedly due to music piracy.
Enhancements
Several enhancements have been created for the PCR, both software and hardware. In the software arena, PCR Replacement programs have been sprouting up on Internet forums and web sites. These are software packages that replace the interface included with the PCR, XMMT. Several features have been added to these new programs, including the ability to rip songs and build an MP3 library, time shift shows so that the user can listen at a more convenient time, control the radio via a web browser, and stream audio to other computers. Some web sites also offer a playlist log, which allows a user to browse a list of all the recently played songs or shows.
A hardware modification has also been discovered that allows the addition of a TOSLINK optical output, allowing users to connect the PCR to the optical digital input on a home theater receiver.
Replacements
The XM Direct receiver, also marketed as the XM Commander, can now serve the same purpose as the PCR. While the XM Direct is intended for automotive use, the unit itself is controlled by RS-232 command signals, and so is easily adapted to PC control. When combined with a "smart cable", which is really just a USB to Serial cable and a wiring adapter to connect to the XM Direct's control port, the XM Direct supports some features not found on the original PCR.
The XM Mini tuner may also hold promise for hardware tweakers. It uses the newest XM tuner and is much smaller than the XM Direct. Like the Direct, the Mini is designed to be used with an external system, in this case a home theater receiver. Unlike the Direct, the Mini is also ca |
https://en.wikipedia.org/wiki/Virtual%20telecine | A virtual telecine is a piece of video equipment that can play back data files in real time. The colorist-video operator controls the virtual telecine like a normal telecine, although without controls like focus and framing. The data files can be from a Spirit DataCine, a motion picture film scanner (like a Cineon), a CGI animation computer, or a professional video acquisition camera. The normal input data file standard is DPX. The output data files are often used in digital intermediate post-production using a film recorder for film-out. The control room for the virtual telecine is called the color suite.
The 2000 movie O Brother, Where Art Thou? was scanned with Spirit DataCine, color corrected with a VDC-2000 and a Pandora Int. Pogle Color Corrector with MegaDEF. A Kodak Lightning II film recorder was used to output the data back on to film.
Virtual telecines are also used in film restoration.
Another advantage of a virtual telecine is that once the film is on the storage array, the frames may be played over and over again without damage or dirt to the film. This would be the case for outputting to different TV standards (NTSC or PAL) or formats (pan and scan, letterboxed, or other aspect ratios). Restoration, special effects, color grading, and other changes can be applied to the data file frames before playout.
Virtual telecine is like a "tape to tape" color correction process, but with two differences: higher resolution (2k or 4k) and the use of film restoration tools along with standards and aspect-ratio tools.
2k virtual DataCine products
The first virtual telecines were made by Philips (now Grass Valley, a Thomson SA brand):
VDC-2000 Virtual DataCine
Specter FS Virtual DataCine
These are able to play out 2k data files in non-linear real time. Size, rotation and color correction/color grading can all be done in real time, controlled by a telecine color corrector. A Silicon Graphics (SGI) computer, an Origin 2000, is used to play the data files to the Spirit DataCine hardware. The Vi |
https://en.wikipedia.org/wiki/Ethics%20of%20cloning | In bioethics, the ethics of cloning refers to a variety of ethical positions regarding the practice and possibilities of cloning, especially human cloning. While many of these views are religious in origin, some of the questions raised by cloning are faced by secular perspectives as well. Perspectives on human cloning are theoretical, as human therapeutic and reproductive cloning are not commercially used; animals are currently cloned in laboratories and in livestock production.
Advocates support the development of therapeutic cloning in order to generate tissues and whole organs to treat patients who otherwise cannot obtain transplants, to avoid the need for immunosuppressive drugs, and to stave off the effects of aging. Advocates for reproductive cloning believe that parents who cannot otherwise procreate should have access to the technology.
Opponents of cloning have concerns that the technology is not yet developed enough to be safe, and that it could be prone to abuse, either in the form of clones raised as slaves, or leading to the generation of humans from whom organs and tissues would be harvested. Opponents have also raised concerns about how cloned individuals could integrate with families and with society at large.
Religious groups are divided, with some opposing the technology as usurping God's place and, to the extent embryos are used, destroying a human life; others support therapeutic cloning's potential life-saving benefits.
Cloning of animals is opposed by animal rights groups due to the number of cloned animals that suffer from malformations before they die, and while meat from cloned animals has been approved by the US FDA, its use is opposed by some other groups concerned about food safety.
Philosophical debate
The various forms of cloning, particularly human cloning, are controversial. There have been numerous demands for all progress in the human cloning field to be halted. Most scientific, governmental and religious organizations oppose reproductive clo |
https://en.wikipedia.org/wiki/Arthur%20Lynch%20%28politician%29 | Arthur Alfred Lynch (16 October 1861 – 25 March 1934) was an Irish Australian civil engineer, physician, journalist, author, soldier, anti-imperialist and polymath. He served as MP in the UK House of Commons as member of the Irish Parliamentary Party, representing Galway Borough from 1901 to 1902, and later West Clare from 1909 to 1918. Lynch fought on the Boer side during the Boer War in South Africa, for which he was sentenced to death but later pardoned. He supported the British war effort in the First World War, raising his own Irish battalion in Munster towards the end of the war.
Australian years
Lynch was born at Smythesdale near Ballarat, Victoria, the fourth of 14 children. His father, John Lynch, was an Irish Catholic surveyor and civil engineer, and his mother Isabella (née MacGregor) was Scottish. John Lynch was a founder and first president of the Ballarat School of Mines and a captain under Peter Lalor at the Eureka Stockade rebellion (1854); he later wrote a book about it, Austral Light (1893–94), which was republished as The Story of the Eureka Stockade.
Arthur Lynch was educated at Grenville College, Ballarat, (where he was "entranced" by differential calculus) and the University of Melbourne, where he took the degrees of BA in 1885 and MA in 1887. Lynch qualified as a civil engineer and practised this profession for a short period in Melbourne.
Europe and Ireland
Lynch left Australia and went to Berlin, where he studied physics, physiology and psychology at the University of Berlin in 1888–1889. He had a particular respect for Hermann von Helmholtz. Moving to London, Lynch took up journalism. In 1892, he contested Galway as a Parnellite candidate, but was defeated.
Lynch met Annie Powell (daughter of the Rev. John D. Powell) in Berlin and they were married in 1895. They were to have no children. In Lynch's words, the marriage "never lost its happiness" (My Life Story, p. 85).
In 1898, he was Paris correspondent for the London Daily Mail.
Boer |
https://en.wikipedia.org/wiki/VESA%20Enhanced%20Video%20Connector | The VESA Enhanced Video Connector (EVC) is a VESA standard that was intended to reduce the number of cables around a computer by incorporating video, audio, FireWire and USB into a single cable system, terminating in a 35-pin Molex MicroCross connector. The intent was to make the monitor the central point of connection. The EVC physical standard was ratified in November 1994, and the pinout and signaling standard followed one year later.
History
The Video Electronic Standards Association (VESA) began working on a successor to the VGA connector for analog video and released the EVC physical standard in November 1994, followed by a pinout and signal standard in November 1995. After the P&D standard was released in June 1997, revisions to the EVC standards were issued in November 1997.
EVC was used for few products, perhaps most commonly found on the HP9000 B/C/J-class workstations introduced in 1997. Although EVC did not find favour with computer manufacturers, it evolved into the somewhat more popular VESA Plug and Display (P&D) standard, which used a physically identical 35-pin interface with a different shell and was capable of transmitting video (both analog and digital) and data. Digital Visual Interface (DVI, 1999), essentially a modified version of P&D stripped of the data signals and given a higher maximum resolution by the addition of a second three-pair digital video channel, would become the industry standard for digital video connections and achieved widespread implementation.
Technical
A VESA EVC connector is capable of carrying analog video (VGA-based) output, video input (composite or S-video), FireWire, analog stereo audio (input and output), and USB signals. Analog video is carried by the C1–C4 pins surrounding the C5 crossed ground plane; this was a development of the 13W3 connector, which was typically fitted to high-end workstations and had three miniature coaxial terminals embedded in the connector. The quasi-coaxial "MicroCross" developed by Molex provided comparable shi |
https://en.wikipedia.org/wiki/Quark%20epoch | In physical cosmology, the quark epoch was the period in the evolution of the early universe when the fundamental interactions of gravitation, electromagnetism, the strong interaction and the weak interaction had taken their present forms, but the temperature of the universe was still too high to allow quarks to bind together to form hadrons. The quark epoch began approximately 10⁻¹² seconds after the Big Bang, when the preceding electroweak epoch ended as the electroweak interaction separated into the weak interaction and electromagnetism. During the quark epoch, the universe was filled with a dense, hot quark–gluon plasma, containing quarks, leptons and their antiparticles. Collisions between particles were too energetic to allow quarks to combine into mesons or baryons. The quark epoch ended when the universe was about 10⁻⁶ seconds old, when the average energy of particle interactions had fallen below the binding energy of hadrons. The following period, when quarks became confined within hadrons, is known as the hadron epoch.
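As a rough order-of-magnitude check (an illustrative estimate assuming radiation domination and suppressing the dependence on the number of relativistic species), the temperature of the early universe falls with time approximately as
$$T \sim \left(\frac{1\,\mathrm{s}}{t}\right)^{1/2}\ \mathrm{MeV},$$
giving T ~ 10⁶ MeV (of order the electroweak scale) at t = 10⁻¹² s, and T ~ 10³ MeV (of order the hadron mass scale) at t = 10⁻⁶ s, consistent with the epoch boundaries quoted above.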
See also
Timeline of the early universe
Chronology of the universe
Cosmology |
https://en.wikipedia.org/wiki/Bilhete%20%C3%9Anico | Bilhete Único (Unified Ticket) is the name of the São Paulo transportation contactless smart card system for fare control.
Using Philips Mifare technology, the ticketing system is managed by SPTrans (São Paulo Transporte S/A), the city bus transportation authority, which is controlled by the municipal government. Tickets were first issued using the system on May 18, 2004, when Marta Suplicy was the mayor, allowing for up to four rides in two hours by paying a single fare on buses. From 2006 it has also been used in the local rapid transit system (São Paulo Metro) and on suburban railways operated by CPTM.
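To make the fare rule concrete, here is a minimal sketch of the four-rides-in-two-hours window; the class, method names, and the flat fare value are illustrative assumptions, not SPTrans's actual system:

```python
from datetime import datetime, timedelta

WINDOW = timedelta(hours=2)   # the two-hour integration window
MAX_RIDES = 4                 # bus rides allowed on a single fare

class BilheteUnico:
    """Toy model of the time-window fare rule, not SPTrans's real logic."""

    def __init__(self, balance: float):
        self.balance = balance
        self.window_start = None
        self.rides_in_window = 0

    def board_bus(self, now: datetime, fare: float) -> bool:
        # Reuse the open window if it is still valid and has rides left.
        if (self.window_start is not None
                and now - self.window_start <= WINDOW
                and self.rides_in_window < MAX_RIDES):
            self.rides_in_window += 1
            return True
        # Otherwise start a new window by charging one full fare.
        if self.balance < fare:
            return False
        self.balance -= fare
        self.window_start = now
        self.rides_in_window = 1
        return True

card = BilheteUnico(balance=10.0)
t0 = datetime(2004, 5, 18, 8, 0)
assert card.board_bus(t0, fare=4.40)                      # charged once
assert card.board_bus(t0 + timedelta(minutes=50), 4.40)   # free transfer
```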
History
The original technical design (in about 1997) was based on Seoul's ticketing solution and provider, but the project was aborted, mostly due to software problems with the complex Vale-Transporte regulation.
Around 2001/2002 the project was restarted by SPTrans under the title Projeto de Bilhetagem Eletrônica. SPTrans took on the role of solution integrator and sponsor, choosing to have at least two providers for every supply so as not to depend on a sole provider, as most other cities do.
Providers
Completion of the project resulted in the Bilhete Único, which has at least 30 different solution and service providers directly involved in the project.
The solution was a major gain in solving the recharge problem: all cards are pre-paid, and recharge cannot be done on board. Other Brazilian cities had failed at creating and spreading a large recharge network. Due to "win-win" agreements with Electronic Benefits Cards networks and the National Lottery network, São Paulo had over 6,000 recharge points around the city by 2010.
Other software and hardware solution providers are:
portals and back-office.
Microsoft: Windows desktops on all parts. Windows servers, Biztalk and MS-SQL on EDI from garages.
Oracle: Provides the central SQL database and data warehouse.
IBM: Provides RISC servers and AIX on central processors.
Fares and regulations
As of January 1st, 2020, regu |
https://en.wikipedia.org/wiki/Michinori%20Yamashita | Michinori Yamashita (born 1953) is a Japanese mathematician and professor at Rissho University. He studied at Sophia University under Yukiyoshi Kawada and Kiichi Morita.
External links
Home page at Rissho Univ.
20th-century Japanese mathematicians
21st-century Japanese mathematicians
Number theorists
Living people
1953 births
Date of birth missing (living people)
Academic staff of Rissho University
Sophia University alumni |
https://en.wikipedia.org/wiki/Rudolf%20Luneburg | Rudolf Karl Lüneburg (30 March 1903, Volkersheim (Bockenem) – 19 August 1949, Great Falls, Montana; after his emigration at first Lueneburg, later Luneburg, sometimes misspelled Luneberg or Lunenberg) was a professor of mathematics and optics at the Dartmouth College Eye Institute. He was born in Germany, received his doctorate at Göttingen, and emigrated to the United States in 1935.
His work included an analysis of the geometry of visual space as expected from physiology and the assumption that the angle of vergence provides a constant measure of distance. From these premises he concluded that near field visual space is hyperbolic.
See also
Luneburg lens
Luneburg method
1903 births
1949 deaths
Emigrants from Nazi Germany to the United States
Geometers
Optical physicists
Dartmouth College faculty
20th-century German mathematicians
Academic staff of Leiden University
University of Göttingen alumni
New York University faculty
University of Southern California faculty
Brown University faculty |
https://en.wikipedia.org/wiki/Jones%27%20stain | Jones' stain, also Jones stain, is a methenamine silver–periodic acid–Schiff stain used in pathology. It is also referred to as methenamine PAS, commonly abbreviated MPAS.
It stains for basement membrane and is widely used in the investigation of medical kidney diseases.
The Jones stain demonstrates the spiked appearance of the glomerular basement membrane (GBM), caused by subepithelial deposits, seen in membranous nephropathy.
See also
Staining |
https://en.wikipedia.org/wiki/Van%20Gieson%27s%20stain | Van Gieson's stain is a mixture of picric acid and acid fuchsin. It is the simplest method of differential staining of collagen and other connective tissue. It was introduced to histology by American neuropsychiatrist and pathologist Ira Van Gieson.
HvG stain generally refers to the combination of hematoxylin and Van Gieson's stain, but can possibly refer to a combination of hibiscus extract-iron solution and Van Gieson's stain.
Other dyes
Other dyes used in connection with Van Gieson staining include:
Alcian blue
Amido black 10B
Verhoeff's stain |
https://en.wikipedia.org/wiki/Kauffman%20polynomial | In knot theory, the Kauffman polynomial is a 2-variable knot polynomial due to Louis Kauffman. It is initially defined on a link diagram as
$$F(K) = a^{-w(K)} L(K),$$
where $w(K)$ is the writhe of the link diagram and $L(K)$ is a polynomial in $a$ and $z$ defined on link diagrams by the following properties:
$L(O) = 1$ (O is the unknot).
L is unchanged under type II and III Reidemeister moves.
If $s$ is a strand and $s_r$ (resp. $s_\ell$) is the same strand with a right-handed (resp. left-handed) curl added (using a type I Reidemeister move), then $L(s_r) = a\,L(s)$ and $L(s_\ell) = a^{-1} L(s)$.
Additionally, L must satisfy Kauffman's skein relation:
$$L(c_+) + L(c_-) = z\,\bigl(L(c_0) + L(c_\infty)\bigr),$$
where $c_+$, $c_-$, $c_0$ and $c_\infty$ are diagrams which differ only inside a small disc, containing respectively a positive crossing, a negative crossing, and the two ways of smoothing the crossing, and are identical outside it.
Kauffman showed that L exists and is a regular isotopy invariant of unoriented links. It follows easily that F is an ambient isotopy invariant of oriented links.
The Jones polynomial is a special case of the Kauffman polynomial, as the L polynomial specializes to the bracket polynomial. The Kauffman polynomial is related to Chern–Simons gauge theories for SO(N) in the same way that the HOMFLY polynomial is related to Chern–Simons gauge theories for SU(N). |
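In one common choice of variables (conventions differ across the literature, so the following identity should be read as illustrative rather than as the unique normalization), the specialization takes the form
$$V(L)(t) = F(L)\!\left(-t^{-3/4},\; t^{1/4} + t^{-1/4}\right),$$
where V denotes the Jones polynomial and F the Kauffman polynomial defined above.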
https://en.wikipedia.org/wiki/Temperament%20test | Temperament tests assess dogs for certain behaviors or suitability for dog sports or adoption from an animal shelter by observing the animal for unwanted or potentially dangerous behavioral traits, such as aggressiveness towards other dogs or humans, shyness, or extreme fear.
AKC Temperament Test
In 2019, the American Kennel Club launched its AKC Temperament Test (ATT), a pass-fail evaluation by AKC licensed or member clubs. Evaluators are specially trained AKC Obedience judges, Rally judges and AKC-approved Canine Good Citizen (CGC) evaluators.
American Temperament Test Society
American Temperament Test Society, Inc. was started by Alfons Ertel in 1977. Ertel created a test for dogs that checks a dog's reaction to strangers, to auditory and visual stimuli (such as the gun shot test), and to unusual situations in an outdoor setting; it does not test indoor or home situation scenarios. It favors a bold, confident dog. The top three dog breeds that have been tested with ATTS are Rottweiler (17% of all tests conducted), German Shepherd Dog (10%), and Doberman (5%). The test itself is copyrighted and prospective testers must apply to become official. The test is conducted as a pass-fail by majority rule of three testers, and each individual dog is graded according to its own breed's native aptitudes, taking into account the individual dog's age, health and training. Though the ATTS is the only organization which posts pass rates "by breed", the breeds cannot be compared against each other because the grades are based on each breed's own characteristics. Despite that, attorneys have been encouraged to use the ATTS published "results by breed" to defend their clients in dangerous dog cases by comparing pass rates of the breed of their client's dog against the pass rates of other well-known non-aggressive pet dog breeds. In total, 34,686 tests have been completed, fewer than 1,000 per year.
BH-VT test by FCI
BH-VT, an abbreviation of a German term which roughly translates to "companion do |
https://en.wikipedia.org/wiki/Pyrimidine%20metabolism | Pyrimidine biosynthesis occurs both in the body and through organic synthesis.
De novo biosynthesis of pyrimidine
De novo biosynthesis of a pyrimidine is catalyzed by three gene products: CAD, DHODH and UMPS. The first three enzymes of the pathway are all encoded by the same gene, CAD, which comprises carbamoyl phosphate synthetase II, aspartate carbamoyltransferase and dihydroorotase. Dihydroorotate dehydrogenase (DHODH), unlike CAD and UMPS, is a mono-functional enzyme and is localized in the mitochondria. UMPS is a bifunctional enzyme consisting of orotate phosphoribosyltransferase (OPRT) and orotidine monophosphate decarboxylase (OMPDC). Both CAD and UMPS are localized in the cytosol, around the mitochondria. In fungi, a similar protein exists but lacks the dihydroorotase function; another protein catalyzes the second step.
In other organisms (Bacteria, Archaea and the other Eukaryota), the first three steps are done by three different enzymes.
Pyrimidine catabolism
Pyrimidines are ultimately catabolized (degraded) to CO₂, H₂O, and urea. Cytosine can be broken down to uracil, which can be further broken down to N-carbamoyl-β-alanine, and then to β-alanine, CO₂, and ammonia by β-ureidopropionase. Thymine is broken down into β-aminoisobutyrate, which can be further broken down into intermediates eventually leading into the citric acid cycle.
β-aminoisobutyrate acts as a rough indicator for rate of DNA turnover.
Regulations of pyrimidine nucleotide biosynthesis
Through negative feedback inhibition, the end-products UTP and UDP prevent the enzyme CAD from catalyzing the reaction in animals. Conversely, PRPP and ATP act as positive effectors that enhance the enzyme's activity.
Pharmacotherapy
Pharmacological modulation of pyrimidine metabolism has therapeutic uses and could be implemented in cancer treatment.
Pyrimidine synthesis inhibitors are used in active moderate to severe rheumatoid arthritis and psoriatic arthritis, as well as in multiple scle |
https://en.wikipedia.org/wiki/Fatty%20acid%20synthesis | In biochemistry, fatty acid synthesis is the creation of fatty acids from acetyl-CoA and NADPH through the action of enzymes called fatty acid synthases. This process takes place in the cytoplasm of the cell. Most of the acetyl-CoA which is converted into fatty acids is derived from carbohydrates via the glycolytic pathway. The glycolytic pathway also provides the glycerol with which three fatty acids can combine (by means of ester bonds) to form triglycerides (also known as "triacylglycerols" – to distinguish them from fatty "acids" – or simply as "fat"), the final product of the lipogenic process. When only two fatty acids combine with glycerol and the third alcohol group is phosphorylated with a group such as phosphatidylcholine, a phospholipid is formed. Phospholipids form the bulk of the lipid bilayers that make up cell membranes and surround the organelles within the cells (such as the cell nucleus, mitochondria, endoplasmic reticulum, Golgi apparatus, etc.). In addition to cytosolic fatty acid synthesis, there is also mitochondrial fatty acid synthesis (mtFASII), in which malonyl-CoA is formed from malonic acid with the help of malonyl-CoA synthetase (ACSF3), which then becomes the final product octanoyl-ACP (C8) via further intermediate steps.
Straight-chain fatty acids
Straight-chain fatty acids occur in two types: saturated and unsaturated.
Saturated straight-chain fatty acids
Much like β-oxidation, straight-chain fatty acid synthesis occurs via the six recurring reactions shown below, until the 16-carbon palmitic acid is produced.
The diagrams presented show how fatty acids are synthesized in microorganisms and list the enzymes found in Escherichia coli. These reactions are performed by fatty acid synthase II (FASII), which in general contain multiple enzymes that act as one complex. FASII is present in prokaryotes, plants, fungi, and parasites, as well as in mitochondria.
In animals, as well as some fungi such as yeast, these same reactions occur |
https://en.wikipedia.org/wiki/Fatty%20acid%20degradation | Fatty acid degradation is the process in which fatty acids are broken down into their metabolites, in the end generating acetyl-CoA, the entry molecule for the citric acid cycle, the main energy supply of living organisms, including bacteria and animals. It includes three major steps:
Lipolysis of and release from adipose tissue
Activation and transport into mitochondria
β-oxidation
Lipolysis and release
Initially in the process of degradation, fatty acids are stored in adipocytes. The breakdown of this fat is known as lipolysis. The products of lipolysis, free fatty acids, are released into the bloodstream and circulate throughout the body. During the breakdown of triacylglycerols into fatty acids, more than 75% of the fatty acids are converted back into triacylglycerol, a natural mechanism to conserve energy, even in cases of starvation and exercise.
Activation and transport into mitochondria
Fatty acids must be activated before they can be carried into the mitochondria, where fatty acid oxidation occurs. This process occurs in two steps catalyzed by the enzyme fatty acyl-CoA synthetase.
Formation of an activated thioester bond
The enzyme first catalyzes nucleophilic attack by the fatty acid carboxylate on the α-phosphate of ATP to form pyrophosphate and an acyl chain linked to AMP. The next step is formation of an activated thioester bond between the fatty acyl chain and Coenzyme A.
The balanced equation for the above is:
RCOO− + CoASH + ATP → RCO-SCoA + AMP + PPi
This two-step reaction is freely reversible and its equilibrium constant lies near 1. To drive the reaction forward, the reaction is coupled to a strongly exergonic hydrolysis reaction: the enzyme inorganic pyrophosphatase cleaves the pyrophosphate liberated from ATP to two phosphate ions, consuming one water molecule in the process. Thus the net reaction becomes:
RCOO− + CoASH + ATP → RCO-SCoA+ AMP + 2Pi
Transport into the mitochondrial matrix
The inner mitochondrial membrane is impermeable to fatty acids and a specialized carnit |
https://en.wikipedia.org/wiki/Passano%20Foundation | The Passano Foundation, established in 1945, provides an annual award to a research scientist whose work – done in the United States – is thought to have immediate practical benefits. Many Passano laureates have subsequently won the Nobel Prize.
Selection of award winners
Passano Laureates
2023 Se-Jin Lee
2022 Duojia Pan
2021 Alfred Goldberg
2020 David Eisenberg
2019 Robert Fettiplace, James Hudspeth
2018 Carl June, Michel Sadelain
2017 Yuan Chang, Patrick S. Moore
2016 , Helen Hobbs
2015 James P. Allison (2018 Nobel Prize in Physiology or Medicine)
2014 Jeffrey I. Gordon
2013 Rudolf Jaenisch
2012 Eric N. Olson
2011 Elaine Fuchs
2010 David Julius (2021 Nobel Prize in Physiology or Medicine)
2009 Irving Weissman
2008 Thomas Südhof (2013 Nobel Prize in Physiology or Medicine)
2007 Joan Massagué Solé
2006 Napoleone Ferrara
2005 Jeffrey M. Friedman
2003 Andrew Z. Fire (2006 Nobel Prize in Physiology or Medicine)
2002 Alexander Rich
2001 Seymour Benzer
2000 Giuseppe Attardi, Douglas C. Wallace
1999 Elizabeth Blackburn (2009 Nobel Prize in Physiology or Medicine), Carol W. Greider (2009 Nobel Prize in Physiology or Medicine)
1998 H. Robert Horvitz (2002 Nobel Prize in Physiology or Medicine)
1997 James E. Darnell, Jr.
1996 Leland H. Hartwell (2001 Nobel Prize in Physiology or Medicine)
1995 Robert G. Roeder, Robert Tjian
1994 Bert Vogelstein
1993 Jack L. Strominger, Don Craig Wiley
1992 Charles Yanofsky
1991 William S. Sly, Stuart Kornfeld
1990 Alfred Goodman Gilman (1994 Nobel Prize in Physiology or Medicine)
1989 Victor Almon McKusick
1988 Edwin Gerhard Krebs (1992 Nobel Prize in Physiology or Medicine), Edmond Henri Fischer (1992 Nobel Prize in Physiology or Medicine)
1987 Irwin Fridovich
1986 Albert L. Lehninger, Eugene P. Kennedy
1985 Howard Green
1984 Peter Nowell
1983 John Michael Bishop (1989 Nobel Prize in Physiology or Medicine), Harold Elliot Varmus (1989 Nobel Prize in Physiology or Medicine)
1982 Roscoe O. Brady, |
https://en.wikipedia.org/wiki/Maclaurin%27s%20inequality | In mathematics, Maclaurin's inequality, named after Colin Maclaurin, is a refinement of the inequality of arithmetic and geometric means.
Let a1, a2, ..., an be positive real numbers, and for k = 1, 2, ..., n define the averages Sk as follows:
$$S_k = \frac{\displaystyle\sum_{1 \le i_1 < i_2 < \cdots < i_k \le n} a_{i_1} a_{i_2} \cdots a_{i_k}}{\binom{n}{k}}.$$
The numerator of this fraction is the elementary symmetric polynomial of degree k in the n variables a1, a2, ..., an, that is, the sum of all products of k of the numbers a1, a2, ..., an with the indices in increasing order. The denominator is the number of terms in the numerator, the binomial coefficient $\binom{n}{k}$.
Maclaurin's inequality is the following chain of inequalities:
$$S_1 \ge \sqrt{S_2} \ge \sqrt[3]{S_3} \ge \cdots \ge \sqrt[n]{S_n},$$
with equality if and only if all the ai are equal.
For n = 2, this gives the usual inequality of arithmetic and geometric means of two numbers. Maclaurin's inequality is well illustrated by the case n = 4:
$$\frac{a_1+a_2+a_3+a_4}{4} \;\ge\; \sqrt{\frac{a_1a_2+a_1a_3+a_1a_4+a_2a_3+a_2a_4+a_3a_4}{6}} \;\ge\; \sqrt[3]{\frac{a_1a_2a_3+a_1a_2a_4+a_1a_3a_4+a_2a_3a_4}{4}} \;\ge\; \sqrt[4]{a_1a_2a_3a_4}.$$
Maclaurin's inequality can be proved using Newton's inequalities or generalised Bernoulli's inequality.
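As a quick empirical check, the following sketch (illustrative helper names, not from any particular library) computes the averages S_k and verifies the chain for a sample of unequal numbers:

```python
from itertools import combinations
from math import comb, prod

def maclaurin_averages(a):
    """Return [S_1, ..., S_n] for positive reals a, as defined above."""
    n = len(a)
    return [sum(prod(c) for c in combinations(a, k)) / comb(n, k)
            for k in range(1, n + 1)]

a = [1.0, 2.0, 4.0, 8.0]
S = maclaurin_averages(a)
roots = [s ** (1.0 / (k + 1)) for k, s in enumerate(S)]  # k-th root of S_k
# The chain S_1 >= S_2^(1/2) >= ... >= S_n^(1/n) holds strictly here,
# since the a_i are not all equal.
assert all(x >= y for x, y in zip(roots, roots[1:])), roots
print(roots)   # [3.75, 3.415..., 3.107..., 2.828...]
```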
See also
Newton's inequalities
Muirhead's inequality
Generalized mean inequality
Bernoulli's inequality |
https://en.wikipedia.org/wiki/Vapor%20quality | In thermodynamics, vapor quality is the mass fraction in a saturated mixture that is vapor; in other words, saturated vapor has a "quality" of 100%, and saturated liquid has a "quality" of 0%. Vapor quality is an intensive property which can be used in conjunction with other independent intensive properties to specify the thermodynamic state of the working fluid of a thermodynamic system. It has no meaning for substances which are not saturated mixtures (for example, compressed liquids or superheated fluids).
Vapor quality is an important quantity during the adiabatic expansion step in various thermodynamic cycles (like Organic Rankine cycle, Rankine cycle, etc.). Working fluids can be classified by using the appearance of droplets in the vapor during the expansion step.
Quality can be calculated by dividing the mass of the vapor by the mass of the total mixture:
$$x = \frac{m_{\text{vapor}}}{m_{\text{total}}},$$
where $m$ indicates mass.
Another definition used in chemical engineering defines quality ($q$) of a fluid as the fraction that is saturated liquid. By this definition, a saturated liquid has $q = 1$ and a saturated vapor has $q = 0$.
An alternative definition is the 'equilibrium thermodynamic quality'. It can be used only for single-component mixtures (e.g. water with steam), and can take values < 0 (for sub-cooled fluids) and > 1 (for super-heated vapors):
$$x_{\text{eq}} = \frac{h - h_f}{h_{fg}},$$
where $h$ is the mixture specific enthalpy, defined as
$$h = \frac{m_f\,h_f + m_g\,h_g}{m_f + m_g}.$$
Subscripts $f$ and $g$ refer to saturated liquid and saturated gas respectively, and $fg$ refers to vaporization (so $h_{fg} = h_g - h_f$).
Calculation
The above expression for vapor quality can be expressed as:
$$x = \frac{y - y_f}{y_g - y_f},$$
where $y$ is equal to either specific enthalpy, specific entropy, specific volume or specific internal energy, $y_f$ is the value of the specific property of the saturated liquid state, and $y_g$ is the value of the specific property of the saturated vapor state; between the two lies the dome zone, in which liquid and vapor coexist.
Another expression of the same concept is:
$$x = \frac{m_v}{m_v + m_l},$$
where $m_v$ is the vapor mass and $m_l$ is the liquid mass.
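A worked example using the generic form above, with rounded steam-table values for water at about 1 atm (h_f ≈ 419 kJ/kg, h_g ≈ 2676 kJ/kg; these are standard reference values, but any real calculation should use a proper steam table):

```python
def vapor_quality(y, y_f, y_g):
    """x = (y - y_f) / (y_g - y_f), the generic form given above."""
    return (y - y_f) / (y_g - y_f)

# Saturated water at roughly 1 atm (rounded steam-table values, kJ/kg)
h_f, h_g = 419.0, 2676.0
h_mix = 1500.0                       # measured mixture specific enthalpy
x = vapor_quality(h_mix, h_f, h_g)
print(f"quality x = {x:.3f}")        # ~0.479: about 48% vapor by mass
```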
Steam quality and work
The origin of the idea of vapor qua |
https://en.wikipedia.org/wiki/Software%20repository | A software repository, or repo for short, is a storage location for software packages. Often a table of contents is also stored, along with metadata. A software repository is typically managed by source or version control, or by repository managers. Package managers allow software from repositories, distributed in units called "packages", to be installed and updated automatically.
Overview
Many software publishers and other organizations maintain servers on the Internet for this purpose, either free of charge or for a subscription fee. Repositories may be solely for particular programs, such as CPAN for the Perl programming language, or for an entire operating system. Operators of such repositories typically provide a package management system, tools intended to search for, install and otherwise manipulate software packages from the repositories. For example, many Linux distributions use Advanced Packaging Tool (APT), commonly found in Debian based distributions, or Yellowdog Updater, Modified (yum) found in Red Hat based distributions. There are also multiple independent package management systems, such as pacman, used in Arch Linux and equo, found in Sabayon Linux.
As software repositories are designed to include useful packages, major repositories are designed to be malware free. If a computer is configured to use a digitally signed repository from a reputable vendor, and is coupled with an appropriate permissions system, this significantly reduces the threat of malware to these systems. As a side effect, many systems that have these abilities do not need anti-malware software such as antivirus software.
Most major Linux distributions have many repositories around the world that mirror the main repository.
In an enterprise environment, a software repository is usually used to store artifacts, or to mirror external repositories which may be inaccessible due to security restrictions. Such repositories may provide additional functionality, like access control, versioning, security checks for u |
https://en.wikipedia.org/wiki/Plant%20geneticist | A plant geneticist is a scientist involved with the study of genetics in botany. Typical work is done with genes in order to isolate and then develop certain plant traits. Once a certain trait, such as plant height, fruit sweetness, or tolerance to cold, is found, a plant geneticist works to improve breeding methods to ensure that future plant generations possess the desired traits.
Plant genetics played a key role in the modern-day theories of heredity, beginning with Gregor Mendel's study of pea plants in the 19th century. The occupation has since grown to encompass advancements in biotechnology that have led to greater understanding of plant breeding and hybridization. Commercially, plant geneticists are sometimes employed to develop methods of making produce more nutritious, or altering plant pigments to make the food more enticing to consumers. |
https://en.wikipedia.org/wiki/Giuseppe%20Cocconi | Giuseppe Cocconi (1914–2008) was an Italian physicist who was director of the Proton Synchrotron at CERN in Geneva.
He is known for his work in particle physics and for his involvement with SETI where he wrote, "[t]he probability of success is difficult to estimate; but if we never search, the chance of success is zero."
Life
Cocconi was born in Como, Kingdom of Italy in 1914. He went to study physics at the University of Milan, and then in February 1938, went to the Sapienza University of Rome on the invitation of Edoardo Amaldi. There he met the physicists Enrico Fermi and Gilberto Bernardini. With Fermi, he built a Wilson chamber to study the disintegration of mesons. In August of that year, Cocconi laid the foundation of cosmic ray research in Milan. While at Milan, Cocconi supervised Vanna Tongiorgi, who picked cosmic rays as her thesis subject; he married her in 1945.
In 1942, Cocconi was nominated professor at University of Catania, but was engaged by the Italian army to research infrared phenomena for the Royal Italian Air Force until the end of World War II, in late 1944. He taught at Catania until 1947, when Hans Bethe made a request that he would join Cornell University. During his stay at Cornell, Cocconi and his wife performed many experiments there and in Echo Lake located in the Rocky Mountains, where they demonstrated the galactic and extragalactic origins of cosmic rays. In 1955, he was awarded a Guggenheim Fellowship. While at Cornell he also wrote, with Philip Morrison, his most famous paper "Searching for Interstellar Communications", on the 21 cm Hydrogen line, which turned out to be of vital importance in the SETI program.
During his sabbatical of 1959–1961, Cocconi helped kick-start the Proton Synchrotron research program at CERN, and conducted a series of experiments on proton–proton scattering and on the cross sections of protons and neutrons. He also continued this research at Brookhaven National Laboratory (BNL). In 1963 he returne |
https://en.wikipedia.org/wiki/Dugald%20Macpherson | H. Dugald Macpherson is a mathematician and logician. He is Professor of Pure Mathematics at the University of Leeds.
He obtained his DPhil from the University of Oxford in 1983 for his thesis entitled "Enumeration of Orbits of Infinite Permutation Groups" under the supervision of Peter Cameron. In 1997, he was awarded the Junior Berwick Prize by the London Mathematical Society. He continues to carry out research on permutation groups and model theory. He is scientist in charge of the MODNET team at the University of Leeds. He co-authored the book Notes on Infinite Permutation Groups.
https://en.wikipedia.org/wiki/Physical%20Society%20of%20Iran | The Physical Society of Iran (PSI) (انجمن فيزيک ايران) is Iran's professional and academic society of physicists. PSI is a non-profit organization aimed at establishing and strengthening scientific contacts between physicists and academic members of the country's institutes of higher education in the field of physics.
The society has over 10,000 members inside and outside Iran. In addition to its awards scheme and publications programme, the Physical Society of Iran holds annual conferences in several different fields, including optics and condensed matter physics. The society has proved instrumental in improving the state of education and research in physics throughout the country.
The society organizes annual meetings and it is an active member of TWAS. It has also close collaboration with the American Physical Society. In October 2003 APS and PSI jointly sponsored a school/workshop on string theory in Tehran.
The society's main journal is the Iranian Journal of Physics Research, which is published via the Isfahan University of Technology Press, and is recognized by the Ministry of Science of Iran. PSI was a sponsor of the 2007 International Physics Olympiad, which was hosted by Isfahan University of Technology.
History
The Physical Society of Iran was established in 1963 by Iran's elite physicists and engineers. Among the founders was Yusef Sobouti, currently chancellor of IASBS.
The first Annual Physics Conference of Iran was inaugurated in 1973 at Sepah Bank's arboretum, followed by Iran's second national conference on Physics the next year at Shahid Beheshti University. Activities of the society suffered a setback during the early years of the revolution, but picked up in 1983 and have been gathering momentum ever since.
Presidents
Yousef Sobouti (1988–91 and 1996–99)
Reza Mansouri
Hessamaddin Arfaei
Ezatolah Arzi
Hadi Akbarzadeh
Shahin Rouhani
Mohammad Reza Ejtehadi (current)
Awards
The following are awarded annually by PSI to selected recipie |
https://en.wikipedia.org/wiki/Hollow%20Moon | The Hollow Moon and the closely related Spaceship Moon are pseudoscientific hypotheses that propose that Earth's Moon is either wholly hollow or otherwise contains a substantial interior space. No scientific evidence exists to support the idea; seismic observations and other data collected since spacecraft began to orbit or land on the Moon indicate that it has a thin crust, extensive mantle and small, dense core, although overall it is much less dense than Earth.
The first publication to mention a hollow Moon was H. G. Wells' 1901 novel The First Men in the Moon. The concept of a (partially) hollow Moon has been employed in science fiction multiple times. In 1970, two Soviet authors published a short piece in the popular press speculating that the Moon might be "the Creation of Alien Intelligence". Since the late 1970s, the hypothesis has been endorsed by conspiracy theorists like Jim Marrs and David Icke.
Introduction
The Hollow Moon hypothesis is the suggestion that the Moon is hollow, usually as a product of an alien civilization. It is often called the Spaceship Moon hypothesis and often corresponds with beliefs in UFOs or ancient astronauts.
The suggestion of a hollow moon first appeared in science fiction, when H. G. Wells wrote about a hollow Moon in his 1901 book The First Men in the Moon. The concept of hollow planets was not new; the first discussion of a Hollow Earth was by scientist Edmond Halley in 1692. Wells borrowed from earlier fictional works that described a hollow Earth, such as the 1741 novel Niels Klim's Underground Travels by Ludvig Holberg.
Both Hollow Moon and Hollow Earth are now considered to be fringe theories or conspiracy theories. The concept of the Moon as a spaceship is often mentioned as one of David Icke's beliefs.
Claims and rebuttals
Density
The fact that the Moon is less dense than the Earth is advanced by conspiracy theorists as support for claims of a hollow Moon. The Moon's mean density is 3.3 g/cm³, whereas th |
https://en.wikipedia.org/wiki/Sturdee%27s%20pipistrelle | Sturdee's pipistrelle (Pipistrellus sturdeei), also known as the Bonin pipistrelle, is an extinct species of bat that was endemic to Japan.
Description
Pipistrellus sturdeei was thought to have existed solely on Haha-jima Island in the Bonin Islands, Japan, where the only known specimen was discovered. More recent scholarship, though, places doubt on the single specimen's origin and taxonomy. The previous population of this animal is unknown because only one specimen has been preserved, which is currently housed in the Natural History Museum, London. No record of Sturdee's pipistrelle has been observed since 1889. |
https://en.wikipedia.org/wiki/Brewing%20Industry%20Research%20Foundation | The Brewing Industry Research Foundation is now part of Campden BRI, a research association serving all sectors of the food and drink industry. The Brewing Division is based next to the M23, and the other Divisions are located in Chipping Campden, Gloucestershire, where about 330 people are employed.
History
Formation
In 1946 the Institute of Brewing recommended the setting up of an experimental research station, the Brewing Industry Research Foundation, with a full-time Director of Research, and in 1947 Dr J. Masson Gulland (Professor of Organic Chemistry, the University of Nottingham) was appointed to that position. Gulland, however, was killed in a train crash before taking up the post, and Sir Ian Heilbron (Imperial College London) agreed to become the second Director of Research at the new Brewing Industry Research Foundation (BIRF) in 1949.
Research building
In 1948 Lyttel Hall, Nutfield in Surrey was purchased; the main Hall was converted into laboratories, the squash court into a pilot brewery, and other new buildings were developed as workshops and conference facilities. In 1951 Prince Philip, Duke of Edinburgh, formally opened the site. BIRF later became The Brewing Research Foundation, then BRF International, then Brewing Research International, and today is simply known as BRI.
Research scope
Initially the BIRF focused on fundamental and applied research for the malting and brewing industries of the UK. Its staff made useful contributions in the areas of barley germination and yeast physiology. Since that time its role has evolved to become more service-orientated, offering analysis, food safety and information packages to an international client base.
Brewers Patents Ltd
The control of Brewing Patents Ltd was transferred in 1976 from the Brewers' Society to the Brewing Research Foundation.
Important Brewing Scientist Training Ground
Many young scientists of the Brewing Industry found initial employment at this Foundation to become important |
https://en.wikipedia.org/wiki/Frobenius%20matrix | A Frobenius matrix is a special kind of square matrix from numerical mathematics. A matrix is a Frobenius matrix if it has the following three properties:
all entries on the main diagonal are ones
the entries below the main diagonal of at most one column are arbitrary
every other entry is zero
The following matrix is an example:
$$A = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & a_{32} & 1 & 0 \\ 0 & a_{42} & 0 & 1 \end{pmatrix}.$$
Frobenius matrices are invertible. The inverse of a Frobenius matrix is again a Frobenius matrix, equal to the original matrix with changed signs outside the main diagonal. The inverse of the example above is therefore:
$$A^{-1} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & -a_{32} & 1 & 0 \\ 0 & -a_{42} & 0 & 1 \end{pmatrix}.$$
Frobenius matrices are named after Ferdinand Georg Frobenius.
The term Frobenius matrix may also be used for an alternative matrix form that differs from an identity matrix only in the elements of a single row preceding the diagonal entry of that row (as opposed to the above definition, which has the matrix differing from the identity matrix in a single column below the diagonal). The following matrix is an example of this alternative form, showing a 4-by-4 matrix with its 3rd row differing from the identity matrix:
$$B = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ b_{31} & b_{32} & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
An alternative name for this latter form of Frobenius matrices is Gauss transformation matrix, after Carl Friedrich Gauss. They are used in the process of Gaussian elimination to represent the Gaussian transformations.
If a matrix is multiplied from the left (left multiplied) with a Gauss transformation matrix, a linear combination of the preceding rows is added to the given row of the matrix (in the example shown above, a linear combination of rows 1 and 2 will be added to row 3). Multiplication with the inverse matrix subtracts the corresponding linear combination from the given row. This corresponds to one of the elementary operations of Gaussian elimination (besides the operations of swapping two rows and multiplying a row by a scalar).
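The following numerical sketch (illustrative, using NumPy) checks the sign-flip rule for the inverse and shows a Gauss transformation performing one elimination step:

```python
import numpy as np

# Frobenius matrix (column form): identity except below the diagonal of
# column 1, where the multipliers sit.
l21, l31 = 2.0, -0.5
F = np.array([[1.0, 0.0, 0.0],
              [l21, 1.0, 0.0],
              [l31, 0.0, 1.0]])

# Inverse: the same matrix with the off-diagonal signs flipped.
F_inv = np.array([[ 1.0, 0.0, 0.0],
                  [-l21, 1.0, 0.0],
                  [-l31, 0.0, 1.0]])
assert np.allclose(F @ F_inv, np.eye(3))

# Left multiplication by a Gauss transformation adds multiples of earlier
# rows to later rows: one step of Gaussian elimination.
A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
M = np.array([[ 1.0, 0.0, 0.0],
              [-2.0, 1.0, 0.0],
              [-4.0, 0.0, 1.0]])   # multipliers -a21/a11 and -a31/a11
print(M @ A)                       # column 1 below the pivot is now zero
```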
See also
Elementary matrix, a special case of a Frobenius matrix with only one off-diagonal nonzero
Notes |
https://en.wikipedia.org/wiki/CDC%201700 | The CDC 1700 was a 16-bit word minicomputer, manufactured by the Control Data Corporation with deliveries beginning in May 1966.
Over the years there were several versions. The original 1700 was constructed using air-cooled CDC 6600-like cordwood logic modules and core memory, although later models used different technology. The final models, called Cyber-18, added four general-purpose registers and a number of instructions to support a time-sharing operating system.
Hardware
The 1700 used ones' complement arithmetic and an ASCII-based character set, and supported memory write protection on an individual word basis. It had one general-purpose register and two indexing registers (one of which was implemented as a dedicated memory location). The instruction set was fairly simple and supported seven storage addressing modes, including multilevel (chained) indirect addressing.
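For illustration, here is a minimal sketch of 16-bit ones' complement encoding and decoding, the representation the 1700's arithmetic used (an illustrative model, not an emulation of the machine):

```python
def to_ones_complement(value: int, bits: int = 16) -> int:
    """Encode a signed integer as a ones'-complement bit pattern."""
    assert -(2**(bits - 1) - 1) <= value <= 2**(bits - 1) - 1
    if value >= 0:
        return value
    return (~(-value)) & ((1 << bits) - 1)   # invert all bits of |value|

def from_ones_complement(pattern: int, bits: int = 16) -> int:
    """Decode a ones'-complement bit pattern back to a signed integer."""
    if pattern < (1 << (bits - 1)):           # sign bit clear
        return pattern
    return -((~pattern) & ((1 << bits) - 1))  # invert back and negate

assert to_ones_complement(-1) == 0xFFFE       # note: 0xFFFF is "minus zero"
assert from_ones_complement(0xFFFE) == -1
```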
Although described as a 16-bit system, the basic core storage memory was 4,096 18-bit words, each comprising
16 data bits
a parity bit, and
a program protection bit. Memory could be expanded to 32,768 words; I/O was in units of 8 or 16 bits.
Peripherals
Available peripherals included teletypewriters, paper tape readers/punches, punched card readers/punches, line printers, magnetic tape drives, magnetic drums, fixed and removable magnetic disk drives, display terminals, communications controllers, Digigraphic display units, timers, etc. These interfaced to the processor using unbuffered interrupt-driven "A/Q" channels or buffered Direct Storage Access channels.
Software
The main operating systems for the 1700 were the Utility System, which usually took the form of several punched paper tapes (resident monitor plus utilities), a similar Operating System for larger configurations (often including punched cards and magnetic tape), and the Mass Storage Operating System (MSOS) for disk-based systems.
An assembler and a Fortran compiler were available. Pascal was also available, via a cros |
https://en.wikipedia.org/wiki/1858%20Bradford%20sweets%20poisoning | The 1858 Bradford sweets poisoning was the arsenic poisoning of more than 200 people in Bradford, England, when sweets accidentally made with arsenic were sold from a market stall. Twenty-one victims died as a result. The event contributed to the passage of the Pharmacy Act 1868 in the United Kingdom and legislation regulating the adulteration of foodstuffs.
Background
William Hardaker, known to locals as "Humbug Billy", sold sweets from a stall in the Greenmarket in central Bradford (now the site of Bradford's Arndale Centre). Hardaker purchased his supplies from Joseph Neal, who made the sweets (or "lozenges") on Stone Street a few hundred yards to the north. The lozenges in question were peppermint humbugs, made of peppermint oil incorporated into a base of sugar and gum. However, sugar was expensive (6½d per ) and so Neal would substitute powdered gypsum (½d per ) — known as "daff" — for some of the required sugar. The adulteration of foodstuffs with cheaper substances was common at the time and the adulterators used obscure nicknames ("daff", "multum", "flash", "stuff") to hide the practice.
Accidental poisoning
On the occasion in question, on 30 October 1858, Neal sent James Archer, a lodger who lived at his house, to collect daff for Hardaker's humbugs from druggist Charles Hodgson. Hodgson's pharmacy was away at Baildon Bridge in Shipley. Hodgson was at his pharmacy, but did not serve Archer owing to illness and so his requests were seen to by his young assistant, William Goddard. Goddard asked Hodgson where the daff was, and was told that it was in a cask in a corner of the attic. However, rather than daff, Goddard sold Archer of arsenic trioxide.
The mistake remained undetected even during manufacture of the sweets by James Appleton, an "experienced sweetmaker" employed by Neal, though Appleton did observe that the finished product looked different from the usual humbugs. Appleton was suffering symptoms of illness during the sweet-making process and w |
https://en.wikipedia.org/wiki/National%20Geophysical%20Research%20Institute | The National Geophysical Research Institute (NGRI) is a geoscientific research organization established in 1961 under the Council of Scientific and Industrial Research (CSIR), India's largest Research and Development organization. It is supported by more than 200 scientists and other technical staff whose research activities are published in several journals of national and international interest.
Research areas covered by this institute include hydrocarbon and coal exploration, mineral exploration, deep seismic sounding studies, exploration and management of groundwater resources, earthquake hazard assessment, structure of Earth's interior and its evolution (theoretical studies), geophysical instrument development and geothermal exploration.
The major facilities available at NGRI include:
Laser Ablation Multi-Collector Inductively Coupled Plasma Mass Spectrometer (LA-MC-ICPMS) with clean chemistry laboratory facility.
Mineral Physics Laboratory with high-pressure Diamond Anvil Cell (DAC), ultra-high-resolution (0.02 cm⁻¹) double monochromator, and micro-Raman spectrometer.
High-pressure laboratory consisting of Keithley electrometer, strain-measuring sensors, universal testing machine (100 tons), and Bridgman–Birch high-pressure apparatus.
In-situ stress measurement facility consisting of hydraulic equipment.
Rock magnetism laboratory consisting of astatic magnetometer, digital spinner magnetometer, alternating magnetic field and thermal demagnetizers, high-field and low-field hysteresis and susceptibility meter.
Geochemical laboratory consisting of fully automated X-ray Fluorescence Spectrometer (XRF), Atomic Absorption Spectrometer, Inductively Coupled Plasma Mass Spectrometer (ICPMS), and Electron Probe Micro Analyzer (EPMA).
Geochronology and isotope geochemistry laboratory with facilities for Rb-Sr, Sm-Nd, and Pb-Pb analyses.
EM, Resistivity, and IP Model Laboratories.
Continuous Flow Isotope Ratio Mass Spectrometer Laboratory (CFIRMS).
Helium Em |
https://en.wikipedia.org/wiki/Effects%20of%20meditation | The psychological and physiological effects of meditation have been studied. In recent years, studies of meditation have increasingly involved the use of modern instruments, such as fMRI and EEG, which are able to observe brain physiology and neural activity in living subjects, either during the act of meditation itself or before and after meditation. Correlations can thus be established between meditative practices and brain structure or function.
Since the 1950s hundreds of studies on meditation have been conducted, but many of the early studies were flawed and thus yielded unreliable results. Contemporary studies have attempted to address many of these flaws with the hope of guiding current research into a more fruitful path. In 2013, researchers found moderate evidence that meditation can reduce anxiety, depression, and pain, but no evidence that it is more effective than active treatments such as drugs or exercise. Another major review article also cautioned about possible misinformation and misinterpretation of data related to the subject.
Effects of mindfulness meditation
A previous study commissioned by the US Agency for Healthcare Research and Quality found that meditation interventions reduce multiple negative dimensions of psychological stress. Other systematic reviews and meta-analyses show that mindfulness meditation has several mental health benefits such as bringing about reductions in depression symptoms, improvements in mood, stress-resilience and attentional control. Mindfulness interventions also appear to be a promising intervention for managing depression in youth.
Mindfulness meditation is useful for managing stress, anxiety and also appears to be effective in treating substance use disorders.
A meta-analysis by Hilton et al. (2016), including 30 randomized controlled trials, found high-quality evidence for improvement in depressive symptoms.
Other review studies have shown that mindfulness meditation can enhance the psychological funct |
https://en.wikipedia.org/wiki/Aralkylamine%20N-acetyltransferase | Aralkylamine N-acetyltransferase (AANAT), also known as arylalkylamine N-acetyltransferase or serotonin N-acetyltransferase (SNAT), is an enzyme that is involved in the day/night rhythmic production of melatonin by modification of serotonin. In humans it is encoded by the ~2.5 kb AANAT gene, which contains four exons and is located on chromosome 17q25. The gene is translated into a 23 kDa enzyme. It is well conserved through evolution, and the human form of the protein is 80 percent identical to sheep and rat AANAT. It is an acetyl-CoA-dependent enzyme of the GCN5-related family of N-acetyltransferases (GNATs). It may contribute to multifactorial genetic diseases such as altered behavior in the sleep/wake cycle, and research is ongoing with the aim of developing drugs that regulate AANAT function.
Nomenclature
The systematic name of this enzyme class is acetyl-CoA:2-arylethylamine N-acetyltransferase. Other names in common use include:
AANAT
Arylalkylamine N-acetyltransferase
Melatonin rhythm enzyme
Serotonin acetylase
Serotonin acetyltransferase
Serotonin N-acetyltransferase
The officially accepted name is aralkylamine N-acetyltransferase.
Function and mechanism
Tissue distribution
The AANAT mRNA transcript is mainly expressed in the central nervous system (CNS). It is detectable at low levels in several brain regions including the pituitary gland as well as in the retina. It is most highly abundant in the pineal gland which is the site of melatonin synthesis. Brain and pituitary AANAT may be involved in the modulation of serotonin-dependent aspects of human behavior and pituitary function.
Physiological function
In the pinealocyte cells of the pineal gland, aralkylamine N-acetyltransferase is involved in the conversion of serotonin to melatonin. It is the penultimate enzyme in the melatonin synthesis controlling the night/day rhythm in melatonin production in the vertebrate pineal gland. Melatonin is essential for seasonal reproduction, modulates the func |
https://en.wikipedia.org/wiki/Biological%20systems%20engineering | Biological systems engineering or Biosystems engineering is a broad-based engineering discipline with particular emphasis on non-medical biology. It can be thought of as a subset of the broader notion of biological engineering or bio-technology though not in the respects that pertain to biomedical engineering as biosystems engineering tends to focus less on medical applications than on agriculture, ecosystems, and food science. The discipline focuses broadly on environmentally sound and sustainable engineering solutions to meet societies' ecologically related needs. Biosystems engineering integrates the expertise of fundamental engineering fields with expertise from non-engineering disciplines.
Background and organization
Many college and university biological engineering departments have a history of being grounded in agricultural engineering and have only in the past two decades or so changed their names to reflect the movement towards more diverse biological based engineering programs. This major is sometimes called agricultural and biological engineering, biological and environmental engineering, etc., in different universities, generally reflecting interests of local employment opportunities.
Since biological engineering covers a wide spectrum, many departments now offer specialization options. Depending on the department and the specialization options offered within each program, curricula may overlap with other related fields. There are a number of different titles for BSE-related departments at various universities. The professional societies commonly associated with many Biological Engineering programs include the American Society of Agricultural and Biological Engineers (ASABE) and the Institute of Biological Engineering (IBE), which generally encompasses BSE. Some program also participate in the Biomedical Engineering Society (BMES) and the American Institute of Chemical Engineers (AIChE).
A biological systems engineer has a background in what bot |
https://en.wikipedia.org/wiki/Jugular%20venous%20arch | Just above the sternum the two anterior jugular veins communicate by a transverse trunk, the jugular venous arch (or venous jugular arch), which receives tributaries from the inferior thyroid veins; each also communicates with the internal jugular.
There are no valves in this vein. |
https://en.wikipedia.org/wiki/Iris%20albicans | Iris albicans, also known as the cemetery iris, white cemetery iris, or the white flag iris, is a species of iris which was planted on graves in Muslim regions and grows in many countries throughout the Middle East and northern Africa. It was later introduced to Spain, and then other European countries. It is a natural hybrid.
It grows to 30–60 cm tall. The leaves are grey-green, and broadly sword-shaped. The inflorescence is fan-shaped and contains two or three fragrant flowers. The flowers are grey or silvery in bud, and are white or off-white and 8 cm wide in bloom. It is a sterile hybrid, and spreads by rhizomal growth and division, as it cannot produce seeds.
Iris albicans has been cultivated since ancient times and may be the oldest iris in cultivation. Collected by Lange in 1860, it has been in cultivation since at least 1400 BC. Originating from Yemen and Saudi Arabia, it appears in a wall painting of the Botanical Garden of Tuthmosis III in the Temple of Amun at Karnak in ancient Thebes dated around 1426 BC.
Iris albicans is included in the Tasmanian Fire Service's list of low flammability plants, indicating that it is suitable for growing within a building protection zone. |
https://en.wikipedia.org/wiki/Bioprocess%20engineering | Bioprocess engineering, also biochemical engineering, is a specialization of chemical engineering or biological engineering. It deals with the design and development of equipment and processes for the manufacture, from biological materials, of products for agriculture, food, feed, pharmaceuticals, nutraceuticals, chemicals, and polymers and paper, as well as the treatment of waste water.
Bioprocess engineering is a conglomerate of mathematics, biology and industrial design, and spans a range of areas, from the design and study of bioreactors (operational mode, instrumentation, and physical layout) to the creation of kinetic models. It also deals with studying the various biotechnological processes used in industry for the large-scale production of biological products, with a view to optimizing the yield and the quality of the end product. Bioprocess engineering may include the work of mechanical, electrical, and industrial engineers to apply principles of their disciplines to processes based on using living cells or sub-components of such cells.
Colleges and universities
Auburn University
University of Georgia (Biochemical Engineering)
Michigan Technological University
McMaster University
Technical University of Munich
University of Natural Resources and Life Sciences, Vienna
Keck Graduate Institute of Applied Life Sciences (KGI Amgen Bioprocessing Center)
Kungliga Tekniska högskolan- KTH - Royal Institute of Technology (Dept. of Industrial Biotechnology)
Queensland University of Technology (QUT)
University of Cape Town (Centre for Bioprocess Engineering Research)
SUNY-ESF (Bioprocess Engineering Program)
Université de Sherbrooke
University of British Columbia
UC Berkeley
UC Davis
Savannah Technical College
University of Illinois Urbana-Champaign (Integrated Bioprocessing Research Laboratory)
University of Iowa (Chemical and Biochemical Engineering)
University of Minnesota (Bioproducts and Biosystems Engineering)
East Carolina University
Jacob School of Biotechnology |
https://en.wikipedia.org/wiki/Segment%20descriptor | In memory addressing for Intel x86 computer architectures, segment descriptors are a part of the segmentation unit, used for translating a logical address to a linear address. Segment descriptors describe the memory segment referred to in the logical address.
The segment descriptor (8 bytes long in 80286 and later) contains the following fields:
A segment base address
The segment limit which specifies the segment size
Access rights byte containing the protection mechanism information
Control bits
Structure
The x86 and x86-64 segment descriptor is laid out as two 32-bit halves, with the fields packed as follows:
Base Address Starting memory address of the segment. Its length is 32 bits and it is created from three pieces: bits 16 to 31 of the lower half (base bits 0–15), bits 0 to 7 of the upper half (base bits 16–23), followed by bits 24 to 31 of the upper half (base bits 24–31).
Segment Limit Its length is 20 bits and it is created from bits 0 to 15 of the lower half and bits 16 to 19 of the upper half. It defines the address of the last accessible data; the length of the segment is one more than the value stored here. How exactly this should be interpreted depends on the Granularity bit of the segment descriptor.
G=Granularity If clear, the limit is in units of bytes, with a maximum of 2²⁰ bytes. If set, the limit is in units of 4096-byte pages, for a maximum of 2³² bytes.
D/B
D = Default operand size: If clear, this is a 16-bit code segment; if set, this is a 32-bit segment.
B = Big: If set, the maximum offset size for a data segment is increased to the 32-bit maximum 0xFFFFFFFF. Otherwise it is the 16-bit maximum 0x0000FFFF. Essentially the same meaning as "D".
L=Long If set, this is a 64-bit segment (and D must be zero), and code in this segment uses the 64-bit instruction encoding. "L" cannot be set at the same time as "D" aka "B". (Bit 21 of the upper half.)
AVL=Available For software use, not used by hardware. (Bit 20 of the upper half.)
P=Present If clear, a "segment not present" exception is generated on any reference to this segment
DPL=Descriptor privilege le |
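As an illustration of the field packing described above, the following sketch decodes an 8-byte descriptor; it is a hypothetical helper written for exposition, not code from any operating system or reference implementation:

```python
import struct

def decode_segment_descriptor(raw: bytes) -> dict:
    """Decode an 8-byte x86 segment descriptor per the layout above."""
    lo, hi = struct.unpack("<II", raw)       # lower and upper 32-bit halves
    base = (lo >> 16) | ((hi & 0xFF) << 16) | (hi & 0xFF000000)
    limit = (lo & 0xFFFF) | (hi & 0x000F0000)
    g = (hi >> 23) & 1                       # granularity bit
    return {
        "base": base,
        # The limit field counts units minus one; G selects bytes vs pages.
        "limit_bytes": (limit + 1) * (4096 if g else 1),
        "present": bool((hi >> 15) & 1),     # P bit
        "dpl": (hi >> 13) & 3,               # descriptor privilege level
        "db": (hi >> 22) & 1,                # D/B bit
        "long_mode": bool((hi >> 21) & 1),   # L bit
    }

# A flat 32-bit code segment: base 0, limit 0xFFFFF, G=1 (4 GiB total)
desc = bytes([0xFF, 0xFF, 0x00, 0x00, 0x00, 0x9A, 0xCF, 0x00])
print(decode_segment_descriptor(desc))
```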
https://en.wikipedia.org/wiki/Quark%E2%80%93lepton%20complementarity | The quark–lepton complementarity (QLC) is a possible fundamental symmetry between quarks and leptons. First proposed in 1990 by Foot and Lew, it assumes that leptons as well as quarks come in three "colors". Such theory may reproduce the Standard Model at low energies, and hence quark–lepton symmetry may be realized in nature.
Possible evidence for QLC
Recent neutrino experiments confirm that the Pontecorvo–Maki–Nakagawa–Sakata matrix contains large mixing angles. For example, atmospheric measurements of particle decay yield θ₂₃ ≈ 45°, while solar experiments yield θ₁₂ ≈ 34°. Compare these results with θ₁₃ ≈ 9°, which is clearly smaller, at about a quarter of the size,
and with the quark mixing angles in the Cabibbo–Kobayashi–Maskawa matrix. The disparity that nature indicates between quark and lepton mixing angles has been viewed in terms of a "quark–lepton complementarity", which can be expressed in the relations
$$\theta_{12}^{\mathrm{PMNS}} + \theta_{12}^{\mathrm{CKM}} \simeq 45^\circ, \qquad \theta_{23}^{\mathrm{PMNS}} + \theta_{23}^{\mathrm{CKM}} \simeq 45^\circ.$$
Possible consequences of QLC have been investigated in the literature and in particular a simple correspondence between the PMNS and CKM matrices has been proposed and analyzed in terms of a correlation matrix. The correlation matrix $V_M$ is roughly defined as the product of the CKM and PMNS matrices:
$$V_M = U_{\mathrm{CKM}}\, U_{\mathrm{PMNS}}.$$
Unitarity then implies that $V_M$ is itself unitary: $V_M V_M^\dagger = \mathbb{1}$.
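As a numerical illustration of this construction (a sketch that ignores CP-violating phases and uses the rounded PMNS angles quoted above plus approximate CKM angles of about 13°, 2.4° and 0.2°, all assumptions of this example):

```python
import numpy as np

def rotation(i, j, angle_deg, n=3):
    """Plane rotation by angle_deg in the (i, j) plane (phases ignored)."""
    t = np.radians(angle_deg)
    R = np.eye(n)
    R[i, i] = R[j, j] = np.cos(t)
    R[i, j], R[j, i] = np.sin(t), -np.sin(t)
    return R

def mixing_matrix(t12, t23, t13):
    # Standard parametrization U = R23 R13 R12, with all phases set to zero
    return rotation(1, 2, t23) @ rotation(0, 2, t13) @ rotation(0, 1, t12)

U_pmns = mixing_matrix(34.0, 45.0, 9.0)   # lepton angles quoted above
U_ckm = mixing_matrix(13.0, 2.4, 0.2)     # approximate quark angles (assumed)
V_m = U_ckm @ U_pmns                      # correlation matrix V_M
assert np.allclose(V_m @ V_m.T, np.eye(3))  # a product of rotations is unitary
print(np.round(V_m, 3))
```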
Open questions
One may ask where the large lepton mixings come from, and whether this information is implicit in the form of the $V_M$ matrix. This question has been widely investigated in the literature, but its answer is still open. Furthermore, in some Grand Unification Theories (GUTs) the direct QLC correlation between the CKM and the PMNS mixing matrices can be obtained. In this class of models, the $V_M$ matrix is determined by the heavy Majorana neutrino mass matrix.
Despite the naive relations between the PMNS and CKM angles, a detailed analysis shows that the correlation matrix is phenomenologically compatible with a tribimaximal pattern, and only marginally with a bimaximal pattern. It is possible to include bimaximal forms of the correlation matrix in models wi |
https://en.wikipedia.org/wiki/Committee%20machine | A committee machine is a type of artificial neural network using a divide and conquer strategy in which the responses of multiple neural networks (experts) are combined into a single response. The combined response of the committee machine is supposed to be superior to those of its constituent experts. Compare with ensembles of classifiers.
Types
Static structures
In this class of committee machines, the responses of several predictors (experts) are combined by means of a mechanism that does not involve the input signal, hence the designation static. This category includes the following methods:
Ensemble averaging
In ensemble averaging, outputs of different predictors are linearly combined to produce an overall output.
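A minimal sketch of this combination rule (illustrative function and weight choices, not a particular library's API):

```python
import numpy as np

# Static committee: expert outputs are combined with fixed weights that
# do not depend on the input signal.
def ensemble_average(expert_outputs, weights=None):
    outputs = np.asarray(expert_outputs, dtype=float)
    if weights is None:
        weights = np.full(len(outputs), 1.0 / len(outputs))  # plain mean
    return np.tensordot(weights, outputs, axes=1)

predictions = [3.1, 2.9, 3.4]          # three experts predicting a scalar
print(ensemble_average(predictions))    # 3.1333...
```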
Boosting
In boosting, a weak algorithm is converted into one that achieves arbitrarily high accuracy.
Dynamic structures
In this second class of committee machines, the input signal is directly involved in actuating the mechanism that integrates the outputs of the individual experts into an overall output, hence the designation dynamic. There are two kinds of dynamic structures:
Mixture of experts
In mixture of experts, the individual responses of the experts are non-linearly combined by means of a single gating network.
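A correspondingly minimal sketch of input-dependent gating (the gate and the experts here are arbitrary stand-ins, not trained networks):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Dynamic committee: the gating network sees the input x and produces
# input-dependent weights over the experts.
def mixture_of_experts(x, experts, gate_weights):
    gate = softmax(gate_weights @ x)          # one weight per expert
    return sum(g * expert(x) for g, expert in zip(gate, experts))

experts = [lambda x: x.sum(), lambda x: x.prod(), lambda x: x.max()]
rng = np.random.default_rng(0)
gate_weights = rng.standard_normal((3, 2))    # 3 experts, 2-dimensional input
print(mixture_of_experts(np.array([0.5, 2.0]), experts, gate_weights))
```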
Hierarchical mixture of experts
In hierarchical mixture of experts, the individual responses of the individual experts are non-linearly combined by means of several gating networks arranged in a hierarchical fashion. |
https://en.wikipedia.org/wiki/Bornology | In mathematics, especially functional analysis, a bornology on a set X is a collection of subsets of X satisfying axioms that generalize the notion of boundedness. One of the key motivations behind bornologies and bornological analysis is the fact that bornological spaces provide a convenient setting for homological algebra in functional analysis. This is because the category of bornological spaces is additive, complete, cocomplete, and has a tensor product adjoint to an internal hom, all necessary components for homological algebra.
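For concreteness, the usual axioms can be stated as follows (a standard formulation from the functional-analysis literature; the notation here is ours): a bornology is a family of subsets, the "bounded sets", that covers X, is stable under taking subsets, and is stable under finite unions.

```latex
% A family $\mathcal{B} \subseteq \mathcal{P}(X)$ of subsets of $X$ is a bornology if:
\bigcup_{B \in \mathcal{B}} B = X, \qquad
A \subseteq B \in \mathcal{B} \;\Rightarrow\; A \in \mathcal{B}, \qquad
B_1, B_2 \in \mathcal{B} \;\Rightarrow\; B_1 \cup B_2 \in \mathcal{B}.
```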
History
Bornology originates from functional analysis. There are two natural ways of studying the problems of functional analysis: one way is to study notions related to topologies (vector topologies, continuous operators, open/compact subsets, etc.) and the other is to study notions related to boundedness (vector bornologies, bounded operators, bounded subsets, etc.).
For normed spaces, from which functional analysis arose, topological and bornological notions are distinct but complementary and closely related.
For example, the unit ball centered at the origin is both a neighborhood of the origin and a bounded subset.
Furthermore, a subset of a normed space is a neighborhood of the origin (respectively, is a bounded set) exactly when it contains (respectively, is contained in) a non-zero scalar multiple of this ball; so this is one instance where the topological and bornological notions are distinct but complementary (in the sense that their definitions differ only by which of ⊇ and ⊆ is used).
Other times, the distinction between topological and bornological notions may even be unnecessary.
For example, for linear maps between normed spaces, being continuous (a topological notion) is equivalent to being bounded (a bornological notion).
Although the distinction between topology and bornology is often blurred or unnecessary for normed space, it becomes more important when studying generalizations of normed spaces.
Neverthe |
https://en.wikipedia.org/wiki/Flying%20ice%20cube | In molecular dynamics (MD) simulations, the flying ice cube effect is an artifact in which the energy of high-frequency fundamental modes is drained into low-frequency modes, particularly into zero-frequency motions such as overall translation and rotation of the system. The artifact derives its name from a particularly noticeable manifestation that arises in simulations of particles in vacuum, where the system being simulated acquires high linear momentum and experiences extremely damped internal motions, freezing the system into a single conformation reminiscent of an ice cube or other rigid body flying through space. The artifact is entirely a consequence of molecular dynamics algorithms and is wholly unphysical, since it violates the principle of equipartition of energy.
Origin and avoidance
The flying ice cube artifact arises from repeated rescalings of the velocities of the particles in the simulation system. Velocity rescaling is a means of imposing a thermostat on the system by multiplying the velocities of a system's particles by a factor after an integration timestep is completed, as is done by the Berendsen thermostat and the Bussi–Donadio–Parrinello thermostat. These schemes fail when the rescaling is done to a kinetic energy distribution of an ensemble that is not invariant under microcanonical molecular dynamics; thus, the Berendsen thermostat (which rescales to the isokinetic ensemble) exhibits the artifact, while the Bussi–Donadio–Parrinello thermostat (which rescales to the canonical ensemble) does not exhibit the artifact. Rescaling to an ensemble that is not invariant under microcanonical molecular dynamics results in a violation of the balance condition that is a requirement of Monte Carlo simulations (molecular dynamics simulations with velocity rescaling thermostats can be thought of as Monte Carlo simulations with molecular dynamics moves and velocity rescaling moves); this violation is the underlying reason for the artifact.
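A minimal sketch of the kind of velocity rescaling at issue, in the Berendsen form (units with k_B = 1 and the naive degrees-of-freedom count are simplifying assumptions):

```python
import numpy as np

def berendsen_rescale(v, masses, t_target, dt, tau):
    """One velocity-rescaling step: v has shape (N, 3), masses shape (N,).
    Repeated rescaling of this kind, without removing centre-of-mass motion,
    is what can drain energy into zero-frequency modes."""
    kinetic = 0.5 * (masses[:, None] * v**2).sum()
    ndof = v.size                        # crude: ignores constraints and COM motion
    t_inst = 2.0 * kinetic / ndof        # instantaneous temperature (k_B = 1)
    lam = np.sqrt(1.0 + (dt / tau) * (t_target / t_inst - 1.0))
    return lam * v
```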
When the flying ice cube probl |
https://en.wikipedia.org/wiki/Reuse%20metrics | In software engineering, reuse metrics and models are used to measure code reuse and reusability. A metric is a quantitative indicator of an attribute of a thing. A model specifies relationships among metrics. Reuse models and metrics can be categorized into six types:
reuse cost-benefits models
maturity assessment
amount of reuse
failure modes
reusability
reuse library metrics
Reuse cost-benefits models include economic cost-benefit analysis as well as quality and productivity payoff.
Maturity assessment models categorize reuse programs by how advanced they are in implementing systematic reuse.
Amount of reuse metrics are used to assess and monitor a reuse improvement effort by tracking percentages of reuse for life cycle objects.
Failure modes analysis is used to identify and order the impediments to reuse in a given organization.
Reusability metrics indicate the likelihood that an artifact is reusable.
Reuse library metrics are used to manage and track usage of a reuse repository. |
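As a concrete illustration of the "amount of reuse" category, a minimal sketch (the lines-of-code basis and what counts as "reused" are assumptions for illustration):

```python
def reuse_percent(reused_loc: int, total_loc: int) -> float:
    """Percent of delivered source lines taken from a reuse library."""
    return 100.0 * reused_loc / total_loc

print(reuse_percent(1200, 4800))  # -> 25.0
```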
https://en.wikipedia.org/wiki/Kohn%20anomaly | In the field of physics concerning condensed matter, a Kohn anomaly (also called the Kohn effect) is an anomaly in the dispersion relation of a phonon branch in a metal, named for Walter Kohn, who first proposed it in 1959. For a specific wavevector, the frequency (and thus the energy) of the associated phonon is considerably lowered, and there is a discontinuity in its derivative. In extreme cases (which can occur in low-dimensional materials), the energy of this phonon is zero, meaning that a static distortion of the lattice appears. This is one explanation for charge density waves in solids. The wavevectors at which a Kohn anomaly is possible are the nesting vectors of the Fermi surface, that is, vectors that connect many points of the Fermi surface (for a one-dimensional chain of atoms this vector would be 2k_F). The electron–phonon interaction causes a rigid shift of the Fermi sphere and a failure of the Born–Oppenheimer approximation, since the electrons no longer follow the ionic motion adiabatically.
In the phononic spectrum of a metal, a Kohn anomaly is a discontinuity in the derivative of the dispersion relation that occurs at certain high symmetry points of the first Brillouin zone, produced by the abrupt change in the screening of lattice vibrations by conduction electrons.
Kohn anomalies arise together with Friedel oscillations when one considers the Lindhard theory instead of the Thomas–Fermi approximation in order to find an expression for the dielectric function of a homogeneous electron gas. The expression for the real part of the reciprocal space dielectric function obtained following the Lindhard theory includes a logarithmic term that is singular at q = 2k_F, where k_F is the Fermi wavevector. Although this singularity is quite small in reciprocal space, if one takes the Fourier transform and passes into real space, the Gibbs phenomenon causes a strong oscillation of ε(r) in the proximity of the singularity mentioned above. I |
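For reference, a textbook form of the static Lindhard dielectric function in three dimensions is sketched below (conventions vary between references; the screening parameter λ and this normalization are assumptions). The logarithm is singular at x = 1, i.e. q = 2k_F, and the resulting divergence of dε/dq is what appears as the kink in the phonon dispersion.

```latex
\varepsilon(q) = 1 + \frac{\lambda^2}{q^2}
  \left[ \frac{1}{2} + \frac{1 - x^2}{4x} \ln\left| \frac{1 + x}{1 - x} \right| \right],
\qquad x = \frac{q}{2 k_F}.
```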
https://en.wikipedia.org/wiki/SLiM | Simple Login Manager (SLiM) is a graphical display manager for the X Window System that can be run independently of any window manager or desktop environment. SLiM aims to be light, completely configurable, and suitable for machines on which remote login functionalities are not needed.
SLiM was forked from Per Lidén's Login.app program, with contributions from Martin Parm for PAM-related classes. SLiM was developed by Simone Rota and Johannes Winkelmann and is currently maintained by Nobuhiro Iwamatsu.
As of March 2016, SLiM appears to be abandoned. It is not fully compatible with systemd.
As of September 2016, GhostBSD 10.3 replaced GDM with SLiM.
Features
SLiM supports the following features:
PNG and XFT support for alpha transparency and anti-aliased fonts
External themes support
Configurable runtime options: X server, login / shutdown / reboot commands
Single (GDM-like) or double (XDM-like) input control
Can load predefined user at startup
Configurable welcome / shutdown messages
Random theme selection
Dependencies
SLiM has the following dependencies:
X11
libpng
libjpeg
freetype
See also
LightDM, formerly Ubuntu's default display manager (Ubuntu now uses GDM)
SDDM, the KDE Plasma 5 display manager
KDM, the KDE Plasma 4 display manager
GDM, the GNOME display manager
Other display managers |
https://en.wikipedia.org/wiki/Engine%20power | Engine power is the power that an engine can put out. It can be expressed in power units, most commonly kilowatt, pferdestärke (metric horsepower), or horsepower. In terms of internal combustion engines, the engine power usually describes the rated power, which is a power output that the engine can maintain over a long period of time according to a certain testing method, for example ISO 1585. In general though, an internal combustion engine has a power take-off shaft (the crankshaft), therefore, the rule for shaft power applies to internal combustion engines: Engine power is the product of the engine torque and the crankshaft's angular velocity.
Definition
Power is the product of torque and angular velocity. Let:
P = Power in Watt (W)
M = Torque in Newton-metre (N·m)
n = Crankshaft speed per Second (s−1)
ω = Angular velocity = 2π · n
Power is then:
P = M · ω
In internal combustion engines, the crankshaft speed n is a more common figure than the angular velocity ω, so we can use n instead, which is equivalent to ω/(2π):
P = 2π · n · M
Note that n is per Second (s−1). If we want to use the common per Minute (min−1) instead, we have to divide by 60:
P = 2π · n · M / 60
Usage
Numerical value equations
The approximate numerical value equations for engine power from torque and crankshaft speed are:
International unit system (SI)
Let:
P = Power in Kilowatt (kW)
M = Torque in Newton-metre (N·m)
n = Crankshaft speed per Minute (min−1)
Then:
P ≈ M · n / 9549
Technical unit system (MKS)
P = Power in Pferdestärke (PS)
M = Torque in Kilopondmetre (kp·m)
n = Crankshaft speed per Minute (min−1)
Then:
P ≈ M · n / 716.2
Imperial/U.S. Customary unit system
P = Power in Horsepower (hp)
M = Torque in Pound-force foot (lbf·ft)
n = Crankshaft speed in Revolutions per Minute (rpm)
Then:
P ≈ M · n / 5252
Example
A diesel engine produces a torque of 234 N·m at 4200 min−1, which is the engine's rated speed.
Let:
M = 234 N·m, n = 4200 min−1
Then:
P = 2π · (4200 / 60) s−1 · 234 N·m ≈ 102,919 W ≈ 103 kW
or using the numerical value equation:
P ≈ 234 · 4200 / 9549 ≈ 103 kW
The engine's rated power output is 103 kW.
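The same calculation in code (a minimal sketch; the function names are illustrative and 9549 is the rounded SI factor derived above):

```python
import math

def shaft_power_kw(torque_nm: float, speed_min1: float) -> float:
    """Exact definition: P = 2*pi*n*M, with n converted from min^-1 to s^-1."""
    return 2 * math.pi * (speed_min1 / 60) * torque_nm / 1000

def shaft_power_kw_approx(torque_nm: float, speed_min1: float) -> float:
    """Numerical value equation: P/kW ~ (M/(N*m)) * (n/(min^-1)) / 9549."""
    return torque_nm * speed_min1 / 9549

print(round(shaft_power_kw(234, 4200)))         # -> 103 (kW)
print(round(shaft_power_kw_approx(234, 4200)))  # -> 103 (kW)
```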
See also
List of production cars by power output
Bibliography |
https://en.wikipedia.org/wiki/Valve%20actuator | A valve actuator is the mechanism for opening and closing a valve. Manually operated valves require someone in attendance to adjust them using a direct or geared mechanism attached to the valve stem. Power-operated actuators, using gas pressure, hydraulic pressure or electricity, allow a valve to be adjusted remotely, or allow rapid operation of large valves. Power-operated valve actuators may be the final elements of an automatic control loop which automatically regulates some flow, level or other process. Actuators may be only to open and close the valve, or may allow intermediate positioning; some valve actuators include switches or other ways to remotely indicate the position of the valve.
Used for the automation of industrial valves, actuators can be found in all kinds of process plants. They are used in waste water treatment plants, power plants, refineries, mining and nuclear processes, food factories, and pipelines. Valve actuators play a major part in automating process control. The valves to be automated vary both in design and dimension. The diameters of the valves range from one-tenth of an inch to several feet.
Types
The common types of actuators are: manual, pneumatic, hydraulic, electric and spring.
Manual
A manual actuator employs levers, gears, or wheels to move the valve stem. Manual actuators are powered by hand; they are inexpensive, typically self-contained, and easy to operate. However, some large valves are impossible to operate manually, and some valves may be located in remote, toxic, or hostile environments that prevent manual operation. Moreover, some situations may require, as a safety feature, that the valve be closed more quickly than a manual actuator allows.
Pneumatic
Air (or other gas) pressure is the power source for pneumatic valve actuators. They are used on linear or quarter-turn valves. Air pressure acts on a piston or bellows diaphragm creating linear fo |
https://en.wikipedia.org/wiki/Wobbulator | A wobbulator is an electronic device primarily used for the alignment of receiver or transmitter intermediate frequency strips. It is usually used in conjunction with an oscilloscope, to enable a visual representation of a receiver's passband to be seen, hence simplifying alignment; it was used to tune early consumer AM radios. The term "wobbulator" is a portmanteau of wobble and oscillator. A "wobbulator" (without capitalization) is a generic term for the swept-output RF oscillator described above, a frequency-modulated oscillator, also called a "sweep generator" by most professional electronics engineers and technicians. A wobbulator was used in some old microwave signal generators to create what amounted to frequency modulation. It physically altered the size of the klystron cavity, therefore changing the frequency.
When capitalized, "Wobbulator" refers to the trade name of a specific brand of RF/IF alignment generator. The Wobbulator was made by a company known as "TIC" (Tel-Instrument Company) although some units branded "Allen B. Du Mont Laboratories" and "Stromberg-Carlson" are rumoured to exist. These were apparently made under some form of license and branded with the name of the licensee, much as Radio Corporation of America through subsidiary Hazeltine Corp., licensed its KCS-20A television chassis design (used in models 630TS, 8TS30, etc.) to other television manufacturers (Air King, Crosley, Fada, et al.) for production under their brand names. The Wobbulator generator, designated model 1200A, combined sweep and marker functions into a single self-contained pushbutton controlled device which, when connected to an oscilloscope and television receiver under test, would display a representation of the receiver's RF/IF response curves with "markers" defining critical frequency reference points as a response curve on the oscilloscope screen. Such an amplitude-versus-frequency graph is also often referred to as a Bode (pronounced "bodee") plot or Bode gr |
https://en.wikipedia.org/wiki/Systems%20Tool%20Kit | Systems Tool Kit (formerly Satellite Tool Kit), often referred to by its initials STK, is a multi-physics software application from Analytical Graphics, Inc. (an Ansys company) that enables engineers and scientists to perform complex analyses of ground, sea, air, and space platforms, and to share results in one integrated environment. At the core of STK is a geometry engine for determining the time-dynamic position and attitude of objects ("assets"), and the spatial relationships among the objects under consideration including their relationships or accesses given a number of complex, simultaneous constraining conditions. STK has been developed since 1989 as a commercial off the shelf software tool. Originally created to solve problems involving Earth-orbiting satellites, it is now used in the aerospace and defense communities and for many other applications.
STK is used in government, commercial, and defense applications around the world. Clients of AGI are organizations such as NASA, ESA, CNES, DLR, Boeing, JAXA, ISRO, Lockheed Martin, Northrop Grumman, Airbus, The US DoD, and Civil Air Patrol.
History
In 1989, the three founders of Analytical Graphics, Inc. — Paul Graziani, Scott Reynolds, and Jim Poland, left GE Aerospace to create Satellite Tool Kit (STK) as an alternative to bespoke, project-specific aerospace software.
The original version of STK ran only on Sun Microsystems computers, but as PCs became more powerful, the code was converted to run on Windows.
STK was first adopted by the aerospace community for orbit analysis and access calculations (when a satellite can see a ground-station or image target), but as the software was expanded, more modules were added that included the ability to perform calculations for communications systems, radar, interplanetary missions and orbit collision avoidance.
The addition of 3D viewing capabilities led to the adoption of the STK by military users for real-time visualization of air, land and sea forces as we |
https://en.wikipedia.org/wiki/Oxalis%20violacea | Oxalis violacea, the violet wood-sorrel, is a perennial plant and herb in the family Oxalidaceae. It is native to the eastern and central United States.
Description
Oxalis violacea emerges in early spring from an underground bulb and produces leaf stems and flower umbels, or clusters, with up to 19 flowers. The three-part leaves have heart-shaped leaflets. The plant is similar in appearance to small clovers such as the shamrock.
The plant bears lavender to white flowers with white to pale green centers above the foliage during April or May, rarely to July, and, with rain, sometimes produces additional flowers without leaves from August to October.
Etymology
The genus name, Oxalis, is from the Greek word oxys, which means "sharp" and refers to the sharp or sour taste from the oxalic acid present in the plant. The specific epithet, violacea, is Latin for violet-colored.
Distribution and habitat
It is a native plant in much of the United States, from the Rocky Mountains east to the Atlantic Ocean and Gulf of Mexico coasts, and into Eastern Canada. It tends to cluster in open places in damp woods, on stream banks, and in moist prairies.
Conservation
The plant's conservation status is globally secure; however, it is listed as endangered in Massachusetts and Rhode Island, threatened in New York, and a species of special concern in Connecticut. It is presumed extirpated in Michigan.
Uses
Medicinal
Oxalis violacea was used as a medicinal plant by Native Americans, including the Cherokee and Pawnee peoples.
Culinary
All parts of the plant are edible – flowers, leaves, stems, and bulb. Oxalis is from the Greek word meaning sour, and this plant has a sour juice. It is used in salads. Moderate use of the plant is advisable, as it should not be eaten in large quantities due to its high concentration of oxalic acid ("salt of lemons"), which can be poisonous.
It was a traditional food source of the Native American Apache, Cherokee, O |
https://en.wikipedia.org/wiki/Punishment%20%28psychology%29 | In operant conditioning, punishment is any change in a human or animal's surroundings which, occurring after a given behavior or response, reduces the likelihood of that behavior occurring again in the future. As with reinforcement, it is the behavior, not the human/animal, that is punished. Whether a change is or is not punishing is determined by its effect on the rate at which the behavior occurs. Related to this are motivating operations (MOs), which alter the effectiveness of a stimulus. MOs can be categorized as abolishing operations, which decrease the effectiveness of a stimulus, and establishing operations, which increase it. For example, a painful stimulus which would act as a punisher for most people may actually reinforce some behaviors of masochistic individuals.
There are two types of punishment, positive and negative. Positive punishment involves the introduction of a stimulus to decrease behavior, while negative punishment involves the removal of a stimulus to decrease behavior. Punishment is procedurally similar to reinforcement, but their goals differ: punishment aims to decrease behaviors, whereas reinforcement aims to increase them. Different kinds of stimuli exist as well: rewarding stimuli, which are considered pleasant, and aversive stimuli, which are considered unpleasant. There are also two types of punishers: primary punishers, which directly affect the individual (such as pain) and are a natural response, and secondary punishers, which are learned to be negative, like a buzzing sound when getting an answer wrong on a game show.
Findings on the effectiveness of punishment conflict: some researchers have found that punishment can be a useful tool in suppressing behavior, while others have found it to have only a weak suppressive effect. Punishment can also lead to lasting negative unintended side effects. Punishment has been found to be effective in countries that are wealthy, high in trust, cooper |
https://en.wikipedia.org/wiki/Groundwater-related%20subsidence | Groundwater-related subsidence is the subsidence (or the sinking) of land resulting from unsustainable groundwater extraction. It is a growing problem in the developing world as cities increase in population and water use, without adequate pumping regulation and enforcement. One estimate has 80% of serious U.S. land subsidence problems associated with the excessive extraction of groundwater, making it a growing problem throughout the world.
Groundwater can be considered one of the last free resources, as anyone who can afford to drill can usually extract water limited only by their ability to pump (depending on local regulations). However, pumping-induced drawdown causes a depression of the groundwater surface around the production well. This can ultimately affect a large region by making it more difficult and expensive to pump the deeper water. Thus, the extraction of groundwater becomes a tragedy of the commons, with resulting economic externalities.
Mechanism
The cause of the long-term surface changes associated with this phenomenon is fairly well known: aquifers are frequently associated with compressible layers of silt or clay.
As the groundwater is pumped out, the effective stress changes, precipitating consolidation, which is often non-reversible. Thus, the total volume of the silts and clays is reduced, resulting in the lowering of the surface. The damage at the surface is much greater if there is differential settlement, or large-scale features, such as sinkholes and fissures.
Aquifer compaction is a significant concern along with pumping-induced land subsidence. A large portion of the groundwater storage potential of many aquifers can be significantly reduced when long-term groundwater extraction, and the resulting groundwater level decline, causes permanent compaction of fine sediment layers (silts and clays). A study in an arid agricultural region of Arizona showed that, even with a water level reco |
https://en.wikipedia.org/wiki/European%20Federation%20of%20Food%20Science%20and%20Technology | The European Federation of Food Science and Technology (EFFoST) is a European-based non-governmental organization devoted to the advancement of food science and technology. It consists of eighty societies in 21 European countries. It is a regional group of the International Union of Food Science and Technology.
EFFoST's roles
Increase closer contacts among the academic, government, and industrial areas in food.
Enhance rapid technology transfer to increase economic competitiveness in Europe.
Promote continuing education within food science and technology.
Standardize food law and its enforcement throughout Europe.
Maintain collaborative relationships within the European food industry on knowledge sharing.
Publications
Innovative Food Science and Emerging Technologies (IFSET), the official scientific journal of EFFoST.
Trends in Food Science and Technology (TIFS), a peer-reviewed review journal that is more involved in product development than basic research.
Food Processing Intelligence (FPI), the official EFFoST book published twice a year on the status of the food industry in Europe.
Position papers on various issues in European food science and technology.
Executive committee
The Executive Committee consists of a President, Past President, President-Elect, four other elected officials, and twelve Members-At-Large.
Headquarters
EFFoST is headquartered in Wageningen, Netherlands. |
https://en.wikipedia.org/wiki/Innovative%20Food%20Science%20and%20Emerging%20Technologies | Innovative Food Science and Emerging Technologies is a quarterly peer-reviewed scientific journal covering basic and applied research in food science and technology. It is an official journal of the European Federation of Food Science and Technology. Its editors-in-chief are Dietrich Knorr (Berlin University of Technology) and Marc C. Hendrickx (Katholieke Universiteit Leuven). According to the Journal Citation Reports, the journal has a 2011 impact factor of 3.030.
https://en.wikipedia.org/wiki/Trends%20in%20Food%20Science%20and%20Technology | Trends in Food Science and Technology is a monthly peer-reviewed review journal covering food science and technology. It is an official publication of the European Federation of Food Science and Technology and of the International Union of Food Science and Technology. The editors-in-chief are Rickey Yada and Fidel Toldrá (Institute of Food Research).
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2021 impact factor of 12.563. |
https://en.wikipedia.org/wiki/Dihydroorotate%20dehydrogenase | Dihydroorotate dehydrogenase (DHODH) is an enzyme that in humans is encoded by the DHODH gene on chromosome 16. The protein encoded by this gene catalyzes the fourth enzymatic step, the ubiquinone-mediated oxidation of dihydroorotate to orotate, in de novo pyrimidine biosynthesis. This protein is a mitochondrial protein located on the outer surface of the inner mitochondrial membrane (IMM). Inhibitors of this enzyme are used to treat autoimmune diseases such as rheumatoid arthritis.
Structure
DHODH can vary in cofactor content, oligomeric state, subcellular localization, and membrane association. An overall sequence alignment of these DHODH variants presents two classes of DHODHs: the cytosolic Class 1 and the membrane-bound Class 2. In Class 1 DHODH, a basic cysteine residue catalyzes the oxidation reaction, whereas in Class 2, a serine residue serves this catalytic function. Structurally, Class 1 DHODHs can also be divided into two subclasses, one of which forms homodimers and uses fumarate as its electron acceptor, and the other which forms heterotetramers and uses NAD+ as its electron acceptor. This second subclass contains an additional subunit (PyrK) containing an iron-sulfur cluster and a flavin adenine dinucleotide (FAD). Meanwhile, Class 2 DHODHs use coenzyme Q/ubiquinones as their oxidant.
In higher eukaryotes, this class of DHODH contains an N-terminal bipartite signal comprising a cationic, amphipathic mitochondrial targeting sequence of about 30 residues and a hydrophobic transmembrane sequence. The targeting sequence is responsible for this protein's localization to the IMM, possibly from recruiting the import apparatus and mediating ΔΨ-driven transport across the inner and outer mitochondrial membranes, while the transmembrane sequence is essential for its insertion into the IMM. This sequence is adjacent to a pair of α-helices, α1 and α2, which are connected by a short loop. Together, this pair forms a hydrophobic funnel that is suggested to serve as t |
https://en.wikipedia.org/wiki/Land%20Use%20Evolution%20and%20Impact%20Assessment%20Model | The Land Use Evolution and Impact Assessment Model (or LEAM) is a computer model developed at the University of Illinois at Urbana-Champaign. LEAM is designed to simulate future land use change as a result of alternative policies and development decisions. In recent years, LEAM has been used in combination with transportation and social cost models to better capture the effects land use has on transportation demand and social costs and vice versa.
History
LEAM was first developed in the LEAMlab of the Department of Urban and Regional Planning at the University of Illinois at Urbana-Champaign in the late 1990s with funding from the National Science Foundation. Its popularity with counties and regional agencies in Illinois led to technology licensing from the university and commercialization. In 2003, LEAMgroup was founded by professors Dr. Brian Deal and Dr. Varkki Pallathucheril. Since then, LEAM and its associated planning and decision support tools have been applied all around the U.S. and abroad.
Approach
LEAM was developed to coordinate complex regional planning activities and aid in regionally-based thinking, decision support, and policy establishment.
In LEAM, a region is represented as a grid of 30x30-meter cells. A discrete-choice model controls whether land use in each grid cell is transformed from its present state to a new state (residential, commercial, or industrial use) in a particular time step.
Several factors, or drivers, go into determining the likelihood of land use change. Drivers of change include factors associated with each cell, such as proximity to cities, employment centers, roads, and highways; slope; location within wetlands and floodplains; and characteristics of surrounding cells.
Whether or not a cell finally changes state is determined by its probability score and the scores of its neighboring cells, as well as a factor of chance.
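A toy sketch of one simulation step in this spirit (illustrative only: the driver scores, the neighborhood weighting, and the grid size are stand-ins, not the actual LEAM model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
score = rng.random((n, n))           # stand-in for driver-based probability scores
developed = np.zeros((n, n), bool)   # current state: False = undeveloped

# Crude neighborhood effect: average the scores of the four adjacent cells.
neigh = (np.roll(score, 1, 0) + np.roll(score, -1, 0) +
         np.roll(score, 1, 1) + np.roll(score, -1, 1)) / 4
p_change = 0.5 * score + 0.5 * neigh                     # own score + neighbors
flips = (~developed) & (rng.random((n, n)) < p_change)   # the "factor of chance"
developed |= flips
print(developed.sum(), "cells developed in this step")
```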
LEAM results then serve as inputs to impact assessment models that determine the implications of land use |
https://en.wikipedia.org/wiki/4%2C5-Dihydroorotic%20acid | 4,5-Dihydroorotic acid is a derivative of orotic acid which serves as an intermediate in pyrimidine biosynthesis. |
https://en.wikipedia.org/wiki/Gravity%20Pipe | Gravity Pipe (abbreviated GRAPE) is a project which uses hardware acceleration to perform gravitational computations. Integrated with Beowulf-style commodity computers, the GRAPE system calculates the force of gravity that a given mass, such as a star, exerts on others. The project resides at Tokyo University.
The GRAPE hardware acceleration component "pipes" the force computation to the general-purpose computer serving as a node in a parallelized cluster as the innermost loop of the gravitational model.
Its shortened name, GRAPE, was chosen as an intentional reference to the Apple Inc. line of computers.
Method
The primary calculation in GRAPE hardware is a summation of the forces between a particular star and every other star in the simulation.
Several versions (GRAPE-1, GRAPE-3 and GRAPE-5) use the logarithmic number system (LNS) in the pipeline to calculate the approximate force between two stars and take the antilogarithms of the x, y and z components before adding them to their corresponding total. The GRAPE-2, GRAPE-4 and GRAPE-6 use floating-point arithmetic for more accurate calculation of such forces. The advantage of the logarithmic-arithmetic versions is that they allow more and faster parallel pipes for a given hardware cost because all but the sum portion of the GRAPE algorithm (1.5 power of the sum of the squares of the input data divided by the input data) is easy to perform with LNS.
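A minimal sketch of the force summation that GRAPE boards pipeline in hardware, written as a direct O(n²) pairwise sum (units with G = 1 and the softening length eps are assumptions):

```python
import numpy as np

def gravity_accel(pos, mass, eps=1e-3):
    """a_i = sum_j m_j (r_j - r_i) / (|r_j - r_i|^2 + eps^2)^(3/2); pos: (n, 3), mass: (n,)."""
    d = pos[None, :, :] - pos[:, None, :]   # pairwise displacements, shape (n, n, 3)
    r2 = (d ** 2).sum(axis=-1) + eps ** 2   # softened squared distances
    inv_r3 = r2 ** -1.5                     # the "1.5 power of a sum of squares"
    np.fill_diagonal(inv_r3, 0.0)           # exclude self-interaction
    return (d * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)
```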
GRAPE-DR consists of a large number of simple processors, all operating in the SIMD fashion.
Application
GRAPE computes approximate solutions to the historically intractable n-body problem, which is of interest in astrophysics and celestial mechanics. n refers to the number of celestial bodies in a given problem. While the 2-body problem was solved by Kepler's laws in the 17th century, any calculation where n > 2 has historically been a nigh-impossible challenge. An analytical solution exists for n = 3, although the resulting series converges too slowly to be |
https://en.wikipedia.org/wiki/Statutory%20reserve | In the business of insurance, statutory reserves are those assets an insurance company is legally required to maintain on its balance sheet with respect to the unmatured obligations (i.e., expected future claims) of the company. Statutory reserves are a type of actuarial reserve.
Purpose
Statutory reserves are intended to ensure that insurance companies are able to meet future obligations created by insurance policies. These reserves must be reported in statements filed with insurance regulatory bodies. They are calculated with a certain level of conservatism in order to protect policyholders and beneficiaries.
Methods
There are two types of methods for calculation of statutory reserves. Reserve methodology may be fully prescribed by law, which is often called formula-based reserving. This is in contrast to principles-based reserves, where actuaries are given latitude to use professional judgement in determining methodology and assumptions for reserve calculation. In the United States, where formula-based reserves are used, the National Association of Insurance Commissioners plans to implement principles-based reserves in 2017.
Life insurance in the United States
In the U.S. life insurance industry, statutory reserves are most commonly computed using the Commissioner's Reserve Valuation Method, or CRVM, the method prescribed by law for computing minimum required reserves.
The size of a CRVM reserve, as with most life reserves, is affected by the age and sex of the insured person, how long the policy for which it is computed has been in force, the plan of insurance offered by the policy, the rate of interest used in the calculation, and the mortality table with which the actuarial present values are computed.
The Commissioner's Reserve Valuation Method was itself established by the Standard Valuation Law (SVL), which was created by the NAIC and adopted by the several states shortly after World War II. The first mortality table prescribed by the SVL was the 1941 |
https://en.wikipedia.org/wiki/Shared%20mesh | A shared mesh (also known as 'traditional' or 'best effort' mesh) is a wireless mesh network that uses a single radio to communicate via mesh backhaul links to all the neighboring nodes in the mesh. This is a first-generation mesh in which the total available bandwidth of the radio channel is 'shared' between all the neighboring nodes in the mesh. The capacity of the channel is further consumed by traffic being forwarded from one node to the next in the mesh, reducing the end-to-end traffic that can be passed. Because bandwidth is shared amongst all nodes in the mesh, and because every link in the mesh uses additional capacity, this type of network offers much lower end-to-end transmission rates than a switched mesh and degrades in capacity as nodes are added to the mesh.
Wireless mesh nodes typically include both mesh backhaul links and client access. A dual radio shared mesh node uses separate access and mesh backhaul radios. Only the mesh backhaul radio is shared. In a single radio shared mesh node, access and mesh backhaul are collapsed onto a single radio. Now the available bandwidth is shared between both the mesh links and client access, further reducing the end to end traffic available.
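A back-of-envelope illustration of this capacity penalty (the nominal link rate and the assumption that every hop contends for one shared channel are simplifications):

```python
link_rate_mbps = 54.0  # assumed nominal radio rate
for hops in (1, 2, 3, 4):
    # Each forwarded hop re-transmits on the same shared channel, dividing capacity.
    print(f"{hops} hop(s): <= {link_rate_mbps / hops:.1f} Mbit/s end to end")
```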
See also
Wireless mesh network
IEEE 802.11
Mesh networking
Switched mesh
Wi-Fi
Wireless LAN
802.16
External links
White Paper: Capacity of Wireless Mesh Networks – understanding single radio, dual radio and multi radio wireless mesh networks.
What is Third Generation Mesh? – a review of three generations of mesh networking architectures.
Ugly Truths About Mesh Networks – performance issues of first- and second-generation mesh products.
Wireless networking
Network topology
Radio technology |
https://en.wikipedia.org/wiki/Switched%20mesh | A switched mesh is a wireless mesh network that uses multiple radios to communicate via dedicated mesh backhaul links to each neighboring node in the mesh. Here all of the available bandwidth of each separate radio channel is dedicated to the link to the neighboring node. The total available bandwidth is the sum of the bandwidth of each of the links. Each dedicated mesh link is on a separate channel, ensuring that forwarded traffic does not use any bandwidth from any other link in the mesh. As a result, a switched mesh is capable of much higher capacities and transmission rates than a shared mesh and grows in capacity as nodes are added to the mesh.
A switched mesh node uses separate access and multiple mesh backhaul radios.
There are three distinct types of configuration of wireless mesh networking products in the market today:
single radio shared mesh – in the first type, one radio provides both backhaul (packet relaying) and client services (access to a laptop).
dual radio shared mesh – in the second type, one radio relays packets over multiple hops while another provides client access. This significantly improves backhaul bandwidth and latency.
switched mesh – the third type uses two or more radios for the backhaul for higher bandwidth and lower latency. Third-generation wireless mesh networking products are replacing previous-generation products as more demanding applications like voice and video need to be relayed over many hops of the mesh network.
See also
Shared mesh
Mesh networking
Wireless mesh networking
IEEE 802.11
802.16
Wireless LAN
Wi-Fi
Wireless networking
Network topology
Radio technology |
https://en.wikipedia.org/wiki/F%C3%BCrst-Plattner%20Rule | The Fürst-Plattner rule (also known as the trans-diaxial effect) describes the stereoselective addition of nucleophiles to cyclohexene derivatives.
Introduction
Cyclohexene derivatives, such as imines, epoxides, and halonium ions, react with nucleophiles in a stereoselective fashion, affording trans-diaxial addition products. The term "trans-diaxial addition" describes the mechanism of the addition; however, the products are likely to equilibrate by ring flip to the lower-energy conformer, placing the new substituents in the equatorial position.
Mechanism and Stereochemistry
Epoxidation of a substituted cyclohexene affords a product where the R group resides in the pseudo-equatorial position. Nucleophilic ring-opening of this class of epoxides can occur by an attack at either the C1 or C2 position. It is well known that nucleophilic ring-opening reactions of these substrates can proceed with excellent regioselectivity. The Fürst-Plattner rule attributes this regiochemical control to a large preference for the reaction pathway that follows the more stable chair-like transition state (attack at the C1 position) compared to the one proceeding through the unfavored twist-boat-like transition state (attack at the C2 position). The attack at the C1 position proceeds over a reaction barrier that is lower by around 5 kcal mol−1, depending on the specific conditions. Similarly, the Fürst-Plattner rule applies to nucleophilic additions to imines and halonium ions.
Examples
Epoxide addition
A recent example of the Fürst-Plattner rule can be seen in the work of Chrisman et al., where limonene is epoxidized to give a 1:1 mixture of diastereomers. Exposure to a nitrogen nucleophile in water at reflux provides only one ring-opened product in 75–85% ee.
Mechanism
The half-chair conformation indicates that attack occurs stereoselectively on the diastereomer where the electrophilic carbon can receive the nucleophile and proceed to the favored chair conformation.
Woodward's Reserpine Syn |
https://en.wikipedia.org/wiki/Fair%20coin | In probability theory and statistics, a sequence of independent Bernoulli trials with probability 1/2 of success on each trial is metaphorically called a fair coin. One for which the probability is not 1/2 is called a biased or unfair coin. In theoretical studies, the assumption that a coin is fair is often made by referring to an ideal coin.
John Edmund Kerrich performed experiments in coin flipping and found that a coin made from a wooden disk about the size of a crown and coated on one side with lead landed heads (wooden side up) 679 times out of 1000. In this experiment the coin was tossed by balancing it on the forefinger, flipping it using the thumb so that it spun through the air for about a foot before landing on a flat cloth spread over a table. Edwin Thompson Jaynes claimed that when a coin is caught in the hand, instead of being allowed to bounce, the physical bias in the coin is insignificant compared to the method of the toss, where with sufficient practice a coin can be made to land heads 100% of the time. Exploring the problem of checking whether a coin is fair is a well-established pedagogical tool in teaching statistics.
Probability space definition
In probability theory, a fair coin is defined as a probability space (Ω, F, P), which is in turn defined by the sample space, event space, and probability measure. Using H for heads and T for tails, the sample space of a coin is defined as: Ω = {H, T}.
The event space for a coin includes all sets of outcomes from the sample space which can be assigned a probability, which is the full power set 2^Ω. Thus, the event space is defined as: F = {∅, {H}, {T}, {H, T}}.
∅ is the event where neither outcome happens (which is impossible and can therefore be assigned probability 0), and {H, T} is the event where either outcome happens (which is guaranteed and can be assigned probability 1). Because the coin is fair, each single outcome is equally likely. The probability measure is then defined by the function: P(∅) = 0, P({H}) = P({T}) = 1/2, P({H, T}) = 1.
So the full probability space which defines a |
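A small sketch of this probability space in code, plus a simulation (the sample size is arbitrary):

```python
import random
from fractions import Fraction

# The measure P on the four events of the fair-coin event space.
P = {frozenset(): Fraction(0),
     frozenset({"H"}): Fraction(1, 2),
     frozenset({"T"}): Fraction(1, 2),
     frozenset({"H", "T"}): Fraction(1)}

flips = [random.choice("HT") for _ in range(100_000)]
print(flips.count("H") / len(flips))  # close to 0.5 for a fair coin
```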
https://en.wikipedia.org/wiki/Ataxia%20telangiectasia%20and%20Rad3%20related | Serine/threonine-protein kinase ATR, also known as ataxia telangiectasia and Rad3-related protein (ATR) or FRAP-related protein 1 (FRP1), is an enzyme that, in humans, is encoded by the ATR gene. It is a large kinase of about 301.66 kDa. ATR belongs to the phosphatidylinositol 3-kinase-related kinase protein family. ATR is activated in response to single strand breaks, and works with ATM to ensure genome integrity.
Function
ATR is a serine/threonine-specific protein kinase that is involved in sensing DNA damage and activating the DNA damage checkpoint, leading to cell cycle arrest in eukaryotes. ATR is activated in response to persistent single-stranded DNA, which is a common intermediate formed during DNA damage detection and repair. Single-stranded DNA occurs at stalled replication forks and as an intermediate in DNA repair pathways such as nucleotide excision repair and homologous recombination repair. ATR is activated during more persistent issues with DNA damage; within cells, most DNA damage is repaired quickly and faithfully through other mechanisms. ATR works with a partner protein called ATRIP to recognize single-stranded DNA coated with RPA. RPA binds specifically to ATRIP, which then recruits ATR through an ATR activating domain (AAD) on its surface. This association of ATR with RPA is how ATR specifically binds to and works on single-stranded DNA; this was proven through experiments with cells that had mutated nucleotide excision pathways. In these cells, ATR was unable to activate after UV damage, showing the need for single-stranded DNA for ATR activity. The acidic alpha-helix of ATRIP binds to a basic cleft in the large RPA subunit to create a site for effective ATR binding. Many other proteins that are needed for ATR activation are also recruited to the site of ssDNA. While RPA recruits ATRIP, the RAD9-RAD1-HUS1 (9-1-1) complex is loaded onto the DNA adjacent to the ssDNA; though ATRIP and the 9-1-1 complex are recruited independently to th |
https://en.wikipedia.org/wiki/Schaum%27s%20Outlines | Schaum's Outlines () is a series of supplementary texts for American high school, AP, and college-level courses, currently published by McGraw-Hill Education Professional, a subsidiary of McGraw-Hill Education. The outlines cover a wide variety of academic subjects including mathematics, engineering and the physical sciences, computer science, biology and the health sciences, accounting, finance, economics, grammar and vocabulary, and other fields. In most subject areas the full title of each outline starts with Schaum's Outline of Theory and Problems of, but on the cover this has been shortened to simply Schaum's Outlines followed by the subject name in more recent texts.
Background and description
The series was originally developed in the 1930s by Daniel Schaum (November 13, 1913 – August 22, 2008), son of eastern European immigrants. McGraw-Hill purchased Schaum Publishing Company in 1967. Titles are continually revised to reflect current educational standards in their fields, including updates with new information, additional examples, use of new technology (calculators and computers), and so forth. New titles are also introduced in emerging fields such as computer graphics.
Many titles feature noted authors in their respective fields, such as Murray R. Spiegel and Seymour Lipschutz. Originally designed for college-level students as a supplement to standard course textbooks, each chapter of a typical Outline begins with only a terse explanation of relevant topics, followed by many fully worked examples to illustrate common problem-solving techniques, and ends with a set of further exercises where usually only brief answers are given and not full solutions.
Despite being marketed as a supplement, several titles have become widely used as primary textbooks for courses (the Discrete Mathematics and Statistics titles are examples). This is particularly true in settings where an important factor in the selection of a text is the price, such as in community colleg |
https://en.wikipedia.org/wiki/Franco%20P.%20Preparata | Franco P. Preparata is a computer scientist, the An Wang Professor, Emeritus, of Computer Science at Brown University.
He is best known for his 1985 book "Computational Geometry: An Introduction" into which he blended salient parts of M. I. Shamos' doctoral thesis (Shamos appears as a co-author of the book). This book, which represents a snapshot of the discipline as of 1985, has been for many years the standard textbook in the field, and has been translated into four foreign languages (Russian, Japanese, Chinese, and Polish). He has made several contributions to computational geometry, the most recent being the notion of "algorithmic degree" as a key feature to control robust implementations of geometric algorithms.
In addition, Preparata has worked in many other areas of, or closely related to, computer science.
His initial work was in coding theory, where he (independently and simultaneously) contributed the Berlekamp-Preparata codes (optimal convolutional codes for burst-error correction) and the Preparata codes, the first known systematic class of nonlinear binary codes, with higher information content than corresponding linear BCH codes of the same length. Thirty years later, these codes were found to be relevant to quantum coding theory.
In 1967, he substantially contributed to a model of system-level fault diagnosis, known today as the PMC (Preparata-Metze-Chien) model, which is a main issue in the design of highly dependable processing systems. This model is still the object of intense research today (as attested by the literature).
Over the years, he was also active in research in parallel computation and VLSI theory. His 1979 paper (with Jean Vuillemin), still highly cited, presented the cube-connected-cycles (CCC), a parallel architecture that optimally emulates the hypercube interconnection. This interconnection was closely reflected in the architecture of the CM2 of Thinking Machines Inc., the first massive-parallel system in the VLSI era. His |
https://en.wikipedia.org/wiki/Melanotroph | A melanotroph (or melanotrope) is a cell in the pituitary gland that generates melanocyte-stimulating hormone (α‐MSH) from its precursor pro-opiomelanocortin. Chronic stress can induce the secretion of α‐MSH in melanotrophs and lead to their subsequent degeneration.
See also
Chromophobe cell
Chromophil
Acidophil cell
Basophil cell
Oxyphil cell
Oxyphil cell (parathyroid)
Pituitary gland
Neuroendocrine cell
List of distinct cell types in the adult human body |
https://en.wikipedia.org/wiki/Adrien%20Pouliot%20Award | The Adrien Pouliot Award is presented annually by the Canadian Mathematical Society to individuals or teams in recognition of significant contributions to mathematics education in Canada. The inaugural award was presented in 1995. Persons and teams that are nominated for the award will have their applications considered for a period of three years. The award is named in honor of Canadian mathematician Adrien Pouliot. It should be distinguished from a different but similarly-named award, the Adrien Pouliot Prize of the Mathematical Association of Québec.
Recipients of the Adrien Pouliot Award
Source: Canadian Mathematical Society
See also
List of mathematics awards |
https://en.wikipedia.org/wiki/Group-specific%20antigen | Group-specific antigen, or gag, is the polyprotein that contains the core structural proteins of an Ortervirus (except Caulimoviridae). It was named as such because scientists used to believe it was antigenic. Now it is known that it makes up the inner shell, not the envelope exposed outside. It makes up all the structural units of viral conformation and provides supportive framework for mature virion.
All orthoretroviral gag proteins are processed by the protease (PR or pro) into MA (matrix), CA (capsid), NC (nucleocapsid) parts, and sometimes more.
If Gag fails to cleave into its subunits, the virion fails to mature and remains uninfective.
It comprises part of the gag-onc fusion protein.
Gag in HIV
Numbering system
By convention, the HIV genome is numbered according to HIV-1 group M subtype B reference strain HXB2.
Transcription and mRNA processing
After a virus enters a target cell, the viral genome is integrated into the host cell chromatin. RNA polymerase II then transcribes the 9181 nucleotide full-length viral RNA. HIV Gag protein is encoded by the HIV gag gene, HXB2 nucleotides 790-2292.
MA
The HIV p17 matrix protein (MA) is a 17 kDa protein, of 132 amino acids, which comprises the N-terminus of the Gag polyprotein. It is responsible for targeting Gag polyprotein to the plasma membrane via interaction with PI(4,5)P2 through its highly basic region (HBR). HIV MA also makes contacts with the HIV trans-membrane glycoprotein gp41 in the assembled virus and, indeed, may have a critical role in recruiting Env glycoproteins to viral budding sites.
Once Gag is translated on ribosomes, Gag polyproteins are myristoylated at their N-terminal glycine residues by N-myristoyltransferase 1. This is a critical modification for plasma membrane targeting. In the membrane-unbound form, the MA myristoyl fatty acid tail is sequestered in a hydrophobic pocket in the core of the MA protein.
Recognition of plasma membrane PI(4,5)P2 by the MA HBR activates the "myristoyl swit |
https://en.wikipedia.org/wiki/Euphoria%20%28software%29 | Euphoria is a game animation middleware created by NaturalMotion based on Dynamic Motion Synthesis, NaturalMotion's proprietary technology for animating 3D characters on-the-fly "based on a full simulation of the 3D character, including body, muscles and motor nervous system". Instead of using predefined animations, the characters' actions and reactions are synthesized in real-time; they are different every time, even when replaying the same scene. While it is common for current video games to use limp "ragdolls" for animations generated on the fly, Euphoria employed a more complex method to animate the entirety of physically bound objects within the game environment. The engine was to be used in an Indiana Jones game that was later cancelled. According to its web site, Euphoria ran on the Microsoft Windows, OS X, Linux, PlayStation 3, PlayStation 4, Xbox 360, Xbox One, iOS and Android platforms and was compatible with all commercial physics engines.
A press release that was enclosed with the second trailer eventually confirmed that Grand Theft Auto IV is the first of Rockstar's games to feature Euphoria. Red Dead Redemption is their second game to use this engine. The Star Wars titles, Star Wars: The Force Unleashed and The Force Unleashed II use Euphoria, as do games based on the Rockstar Advanced Game Engine (RAGE) including Grand Theft Auto V and Red Dead Redemption 2. Euphoria is integrated into the source code of RAGE.
In 2017, NaturalMotion announced it would end licensing of Euphoria, along with its other technologies, to concentrate on mobile games.
Software using Euphoria
In February 2007, NaturalMotion and Rockstar Games announced that Euphoria would be used in future Rockstar titles.
In August 2007, NaturalMotion announced Backbreaker, an American football game for next-generation consoles that employs Euphoria to generate tackles in real-time, as opposed to playback animation.
The July 2009 issue of Game Informer confirmed that Max Payne 3 would inclu |
https://en.wikipedia.org/wiki/Gag-onc%20fusion%20protein | The gag-onc fusion protein is a general term for a fusion protein formed from a group-specific antigen ('gag') gene and that of an oncogene ('onc'), a gene that plays a role in the development of a cancer. The name is also written as Gag-v-Onc, with "v" indicating that the Onc sequence resides in a viral genome. Onc is a generic placeholder for a given specific oncogene, such as C-jun. (In the case of a fusion with C-jun, the resulting "gag-jun" protein is known alternatively as p65).
Background
Gag genes are part of a general architecture for retroviruses, viruses that replicate through reverse transcription, where the gag region of the genome encodes proteins that constitute the matrix, capsid and nucleocapsid of the mature virus particles. Like in HIV's replication cycle, these proteins are needed for viral budding from the host cell's plasma membrane, where the fully formed virions leave the cell to infect other cells.
gag-v-onc
When a viral gene is introduced into the host cell and is sufficient to induce oncogenesis – the creation of cancerous cells – in the infected cell line, the gene is said to be a "viral transforming gene". When this type of gene is translated to a protein, the protein is called a "transforming protein". Note that since the viral oncogenes originated from a host genome, the transformation event is different from transduction, which describes the process of introducing non-native genes to a host organism via a viral infection.
Rous sarcoma virus
The Gag-v-Onc fusion protein from the Rous sarcoma virus illustrates the dual role that the fusion protein plays in the viral and host cellular life cycle. For example, the viral gene Src (as in "sarcoma") is not necessary for viral reproduction, but does affect virulence. Due to evidence of conserved homology between the v-Src gene and its host (animal) genomes, and its non-essential status for viral reproduction, the v-Src gene is likely to have been acquired from a host genome and altered by |