https://en.wikipedia.org/wiki/Boris%20Chertok | Boris Yevseyevich Chertok (1 March 1912 – 14 December 2011) was a Russian engineer in the former Soviet space program, who worked mainly on control systems and was later employed by Roscosmos.
His chief responsibility was the computerized control systems of Soviet missiles and rockets, and he authored the four-volume memoir Rockets and People – the definitive source of information about the history of the Soviet space program.
From 1974, he was the deputy chief designer of the Korolev design bureau, the spacecraft design bureau he had worked for since 1946. He retired in 1992.
Personal life
Born in Łódź (in modern-day Poland), Chertok moved with his family to Moscow when he was three years old. From 1930, he worked as an electrician in a Moscow suburb, and from 1934 he was designing military aircraft at the Bolkhovitinov design bureau. In 1946, he joined the rocket-pioneering NII-88 as head of its control systems department, working alongside Sergei Korolev, whose deputy he became after OKB-1 was spun off from NII-88 in 1956.
He was married to Yekaterina Semyonovna Golubkina. He was an atheist.
Rockets and People
Between 1994 and 1999, Boris Chertok, with the support of his wife Yekaterina Golubkina, wrote the four-volume book series on the history of the Soviet space industry. The series was originally published in Russian in 1999.
Черток Б.Е. Ракеты и люди — М.: Машиностроение, 1999. (B. Chertok, Rockets and People)
Черток Б.Е. Ракеты и люди. Фили — Подлипки — Тюратам — М.: Машиностроение, 1999. (B. Chertok, Rockets and People. Fili — Podlipki — Tyuratam)
Черток Б.Е. Ракеты и люди. Горячие дни холодной войны — М.: Машиностроение, 1999. (B. Chertok, Rockets and People. Hot Days of the Cold War)
Черток Б.Е. Ракеты и люди. Лунная гонка — М.: Машиностроение, 1999. (B. Chertok, Rockets and People. The Moon Race)
Translation into English
NASA's History Division published four translated and somewhat edited volumes of the s |
https://en.wikipedia.org/wiki/EqualLogic | EqualLogic, Inc. was an American computer data storage company based in Nashua, New Hampshire, active from 2001 to 2007. In 2008, the company was merged into Dell Inc. Dell-branded EqualLogic products are iSCSI-based storage area network (SAN) systems. Dell has three lines of SAN products: EqualLogic, Compellent and Dell PowerVault.
History
EqualLogic was formed in Nashua, New Hampshire, in 2001 by Peter Hayden, Paul Koning, and Paula Long, and raised $52 million from investors between 2001 and 2004. The company was considering an initial public offering on the Nasdaq stock exchange, but accepted an offer from Dell in 2007 and was absorbed in late January 2008. The $1.4 billion all-cash takeover was, at the time, the highest price paid for a company financed by venture investors. At the time of acquisition, the company was backed by four venture capital investors: Charles River Ventures, TD Capital Ventures, Focus Ventures and Sigma Partners.
Architecture
EqualLogic systems use iSCSI via either Gigabit Ethernet or 10 Gigabit Ethernet controllers. As of June 2014, the systems sold with 1 Gbit/s connections are the PS4100, PS6100 and PS6200, while the comparable systems with 10 Gbit/s Ethernet connections are the PS4110, PS6110 and PS6210. There have been a number of previous generations, and older systems can work with the newer models as long as their software is kept updated. Within each series there are several options allowing for different types and sizes of hard disk drives or solid-state drives. Some EqualLogic options combine both in the same chassis and automatically migrate the most frequently accessed data to the SSDs. All PS series systems, except the PS-M4110 blade chassis system, are 19-inch rack systems in a 2 rack unit form factor or a 4 RU chassis for some of the PS61x0 models and the PS65x0 dense models.
EqualLogic Arrays can be combined with up to 16 arrays per group. Groups can mix different members, includin |
https://en.wikipedia.org/wiki/Bird%20park | Bird park may refer to:
Bali Bird Park, Bali, Indonesia
Fonghuanggu Bird and Ecology Park, Nantou, Taiwan
Francis William Bird Park, Massachusetts, United States
Jurong Bird Park, Singapore - the largest in terms of bird numbers.
Kuala Lumpur Bird Park, Kuala Lumpur, Malaysia
Melaka Bird Park, Malacca, Malaysia
Tokyo Wild Bird Park, Japan
Walsrode Bird Park, Lower Saxony, Germany - the largest in terms of area.
Bird Park (Mt. Lebanon, Pennsylvania) - a municipal park in Mt. Lebanon, PA
World of Birds Wildlife Sanctuary and Monkey Park
See also
Zoo
Ornithology |
https://en.wikipedia.org/wiki/Richards%27%20theorem | Richards' theorem is a mathematical result due to Paul I. Richards in 1947. The theorem states that for
\( R(s) = \frac{k Z(s) - s Z(k)}{k Z(k) - s Z(s)} , \)
if \( Z(s) \) is a positive-real function (PRF) then \( R(s) \) is a PRF for all real, positive values of \( k \).
The theorem has applications in electrical network synthesis. The PRF property of an impedance function determines whether or not a passive network can be realised having that impedance. Richards' theorem led to a new method of realising such networks in the 1940s.
Proof
\( R(s) = \frac{k Z(s) - s Z(k)}{k Z(k) - s Z(s)} , \)
where \( Z(s) \) is a PRF, \( k \) is a positive real constant, and \( s \) is the complex frequency variable, can be written as,
\( R(s) = \frac{1 - W(s)}{1 + W(s)} \)
where,
\( W(s) = \left( \frac{Z(s) - Z(k)}{Z(s) + Z(k)} \right) \left( \frac{s + k}{s - k} \right) . \)
Since \( Z(s) \) is PRF then
\( Z(s) + Z(k) \)
is also PRF. The zeroes of this function are the poles of \( W(s) \). Since a PRF can have no zeroes in the right-half s-plane, then \( W(s) \) can have no poles in the right-half s-plane and hence is analytic in the right-half s-plane.
Let
\( s = j\omega . \)
Then the magnitude of \( W(j\omega) \) is given by,
\( |W(j\omega)| = \left| \frac{Z(j\omega) - Z(k)}{Z(j\omega) + Z(k)} \right| \left| \frac{j\omega + k}{j\omega - k} \right| = \left| \frac{Z(j\omega) - Z(k)}{Z(j\omega) + Z(k)} \right| , \)
since \( |j\omega + k| = |j\omega - k| \) for real \( k \).
Since the PRF condition requires that \( \operatorname{Re} Z(j\omega) \ge 0 \) for all \( \omega \), then \( |W(j\omega)| \le 1 \) for all \( \omega \). The maximum magnitude of \( W(s) \) occurs on the \( j\omega \) axis because \( W(s) \) is analytic in the right-half s-plane. Thus \( |W(s)| \le 1 \) for \( \operatorname{Re}(s) \ge 0 \).
Let \( W(s) = u + jv \); then the real part of \( R(s) \) is given by,
\( \operatorname{Re} R(s) = \operatorname{Re} \left( \frac{1 - u - jv}{1 + u + jv} \right) = \frac{1 - u^2 - v^2}{(1 + u)^2 + v^2} . \)
Because \( |W(s)| \le 1 \) (that is, \( u^2 + v^2 \le 1 \)) for \( \operatorname{Re}(s) \ge 0 \), then \( \operatorname{Re} R(s) \ge 0 \) for \( \operatorname{Re}(s) \ge 0 \) and consequently \( R(s) \) must be a PRF.
Richards' theorem can also be derived from Schwarz's lemma.
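As a quick numerical sanity check of the theorem (an illustration added here, not part of the source article), one can sample the right-half plane. For the PRF \( Z(s) = s + 1 \) and \( k = 2 \), the transformation reduces algebraically to \( R(s) = 1/(s+3) \), which is plainly positive-real; the sketch below verifies \( \operatorname{Re} R(s) \ge 0 \) at random right-half-plane points. Function names are illustrative.

```python
import numpy as np

def richards_transform(Z, k, s):
    """Richards' transformation R(s) = (k Z(s) - s Z(k)) / (k Z(k) - s Z(s))."""
    return (k * Z(s) - s * Z(k)) / (k * Z(k) - s * Z(s))

Z = lambda s: s + 1  # a simple positive-real function
k = 2.0

rng = np.random.default_rng(0)
# Random sample of points with Re(s) >= 0 (the right-half s-plane).
s = rng.uniform(0, 10, 10_000) + 1j * rng.uniform(-10, 10, 10_000)

R = richards_transform(Z, k, s)
assert np.all(R.real >= 0)  # PRF: non-negative real part on the right-half plane
print("Re R(s) >= 0 held at all sampled points")
```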
Uses
The theorem was introduced by Paul I. Richards as part of his investigation into the properties of PRFs. The term PRF was coined by Otto Brune who proved that the PRF property was a necessary and sufficient condition for a function to be realisable as a passive electrical network, an important result in network synthesis. Richards gave the theorem in his 1947 paper in the reduced form,
\( R(s) = \frac{Z(s) - s Z(1)}{Z(1) - s Z(s)} , \)
that is, the special case where \( k = 1 \).
The theorem (with the more general case of \( k \) being able to take on any positive value) formed the basis of the network synthesis technique presented by Raoul Bott and Richard Duffin in 1949. In the Bott–Duffin synthesis, \( Z(s) \) represents the electrical network to be synthesised and \( R(s) \) is another (unknown) network incorporated within it (\( R(s) \) is unitless, but \( R(s) Z(k) \) has units of impe |
https://en.wikipedia.org/wiki/Marine%20technology | Marine technology is defined by WEGEMT (a European association of 40 universities in 17 countries) as "technologies for the safe use, exploitation, protection of, and intervention in, the marine environment." In this regard, according to WEGEMT, the technologies involved in marine technology are the following: naval architecture, marine engineering, ship design, ship building and ship operations; oil and gas exploration, exploitation, and production; hydrodynamics, navigation, sea surface and sub-surface support, underwater technology and engineering; marine resources (including both renewable and non-renewable marine resources); transport logistics and economics; inland, coastal, short sea and deep sea shipping; protection of the marine environment; leisure and safety.
Education and training
According to the Cape Fear Community College of Wilmington, North Carolina, the curriculum for a marine technology program provides the practical skills and academic background essential for success in the field of marine scientific support. Through a marine technology program, students aspiring to become marine technologists become proficient in the knowledge and skills required of scientific support technicians.
The educational preparation includes classroom instructions and practical training aboard ships, such as how to use and maintain electronic navigation devices, physical and chemical measuring instruments, sampling devices, and data acquisition and reduction systems aboard ocean-going and smaller vessels, among other advanced equipment.
As far as marine technician programs are concerned, students learn hands-on how to troubleshoot, service and repair four- and two-stroke outboards, stern drives, rigging, fuel and lube systems, and electrical systems, including diesel engines.
Relationship to commerce
Marine technology is related to the marine science and technology industry, also known as maritime commerce. The Executive Office of Housing and Economic Development (EOHED |
https://en.wikipedia.org/wiki/Process%20capability%20index | The process capability index, or process capability ratio, is a statistical measure of process capability: the ability of an engineering process to produce an output within specification limits. The concept of process capability only holds meaning for processes that are in a state of statistical control. This means it cannot account for deviations which are not expected, such as misaligned, damaged, or worn equipment. Process capability indices measure how much "natural variation" a process experiences relative to its specification limits, and allow different processes to be compared with respect to how well an organization controls them. Somewhat counterintuitively, higher index values indicate better performance, with values near zero indicating high deviation.
Example for non-specialists
A company produces axles with nominal diameter 20 mm on a lathe. As no axle can be made to exactly 20 mm, the designer specifies the maximum admissible deviations (called tolerances or specification limits). For instance, the requirement could be that axles need to be between 19.9 and 20.2 mm. The process capability index is a measure for how likely it is that a produced axle satisfies this requirement. The index pertains to statistical (natural) variations only. These are variations that naturally occur without a specific cause. Errors not addressed include operator errors, or play in the lathe's mechanisms resulting in a wrong or unpredictable tool position. If errors of the latter kinds occur, the process is not in a state of statistical control. When this is the case, the process capability index is meaningless.
Introduction
If the upper and lower specification limits of the process are USL and LSL, the target process mean is T, the estimated mean of the process is \( \hat{\mu} \), and the estimated variability of the process (expressed as a standard deviation) is \( \hat{\sigma} \), then commonly accepted process capability indices include:
\( \hat{C}_p = \frac{USL - LSL}{6 \hat{\sigma}} \), which estimates what the process is capable of producing if the process output were centred between the specification limits.
\( \hat{C}_{pk} = \min \left[ \frac{USL - \hat{\mu}}{3 \hat{\sigma}}, \frac{\hat{\mu} - LSL}{3 \hat{\sigma}} \right] \), which estimates what the process is capable of producing, taking into account that the process mean may not be centred between the specification limits.
\( \hat{C}_{pm} = \frac{\hat{C}_p}{\sqrt{1 + \left( \frac{\hat{\mu} - T}{\hat{\sigma}} \right)^2}} \), which estimates process capability around the target T.
\( \hat{\sigma} \) is estimated using the sample standard deviation.
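To make the formulas concrete, here is a minimal sketch for the axle example given earlier, assuming an estimated process mean of 20.0 mm and standard deviation of 0.05 mm (hypothetical values chosen for illustration):

```python
def process_capability(lsl, usl, mean, std):
    """Return the Cp and Cpk process capability indices."""
    cp = (usl - lsl) / (6 * std)
    cpk = min((usl - mean) / (3 * std), (mean - lsl) / (3 * std))
    return cp, cpk

# Axle example: specification limits 19.9 mm and 20.2 mm.
cp, cpk = process_capability(lsl=19.9, usl=20.2, mean=20.0, std=0.05)
print(f"Cp  = {cp:.2f}")   # 1.00: the spec width equals 6 standard deviations
print(f"Cpk = {cpk:.2f}")  # 0.67: limited by the nearer (lower) spec limit
```

Note how Cpk penalizes the off-centre mean: the nominal 20 mm sits closer to the lower limit of the asymmetric 19.9–20.2 mm tolerance band.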
Recommended values
Process capabilit |
https://en.wikipedia.org/wiki/Mark%20Wheelis | Mark L. Wheelis is an American microbiologist. Wheelis is currently a professor in the College of Biological Sciences, University of California, Davis. Together with Carl Woese and Otto Kandler, Wheelis wrote the important paper "Towards a natural system of organisms: proposal for the domains Archaea, Bacteria, and Eucarya", which proposed a change from the two-empire system of Prokaryotes and Eukaryotes to the three-domain system of the domains Eukaryota, Bacteria and Archaea.
Wheelis's research interests include the history of biological warfare. He co-authored (with Larry Gonick) The Cartoon Guide to Genetics (1983). Wheelis provided the scientific knowledge and text, while Gonick contributed the illustrations and humor.
Works
Larry Gonick & Mark Wheelis, The Cartoon Guide to Genetics, Longman Higher Education, 1983, 216 pp.
"Biological Warfare before 1914", In: Geissler E, Moon JEvC, editors. Biological and toxin weapons: research, development and use from the Middle Ages to 1945. London: Oxford University Press; 1999. pp 8–34. |
https://en.wikipedia.org/wiki/Globalization%20and%20disease | Globalization, the flow of information, goods, capital, and people across political and geographic boundaries, allows infectious diseases to rapidly spread around the world, while also allowing the alleviation of factors such as hunger and poverty, which are key determinants of global health. The spread of diseases across wide geographic scales has increased through history. Early diseases that spread from Asia to Europe were bubonic plague, influenza of various types, and similar infectious diseases.
In the current era of globalization, the world is more interdependent than at any other time. Efficient and inexpensive transportation has left few places inaccessible, and increased global trade in agricultural products has brought more and more people into contact with animal diseases that have subsequently jumped species barriers (see zoonosis).
Globalization intensified during the Age of Exploration, but trading routes had long been established between Asia and Europe, along which diseases were also transmitted. An increase in travel has helped spread diseases to natives of lands who had not previously been exposed. When a native population is infected with a new disease, where they have not developed antibodies through generations of previous exposure, the new disease tends to run rampant within the population.
Etiology, the modern branch of science that deals with the causes of infectious disease, recognizes five major modes of disease transmission: airborne, waterborne, bloodborne, by direct contact, and through vector (insects or other creatures that carry germs from one species to another). As humans began traveling over seas and across lands which were previously isolated, research suggests that diseases have been spread by all five transmission modes.
Travel patterns and globalization
The Age of Exploration generally refers to the period between the 15th and 17th centuries. During this time, technological advances in shipbuilding and navigation made it e |
https://en.wikipedia.org/wiki/The%20American%20Journal%20of%20Pathology | The American Journal of Pathology is a monthly peer-reviewed medical journal covering pathology. It is published by Elsevier on behalf of the American Society for Investigative Pathology, of which it is an official journal. The editor-in-chief is Martha B. Furie (Stony Brook University). The journal was established in 1896 as the Journal of the Boston Society of Medical Sciences and renamed The Journal of Medical Research in 1901, before obtaining its current title in 1925. According to the Journal Citation Reports, the journal has a 2022 impact factor of 6.0.
Editors
The following persons have been editors-in-chief of the journal: |
https://en.wikipedia.org/wiki/Da-Wen%20Sun | Sun Dawen (; ), known as Da-Wen Sun, is a Chinese-born professor who studies food engineering at University College Dublin.
Professor Sun is an Academician of six academies including Royal Irish Academy, Academia Europaea (The Academy of Europe), Polish Academy of Sciences, International Academy of Food Science and Technology, International Academy of Agricultural and Biosystems Engineering and International Academy of Refrigeration. He is also President of International Commission of Agricultural and Biosystems Engineering (CIGR).
Biography
Sun was born in Chaozhou, Guangdong, China. He received a first class BSc Honours and MSc in Mechanical Engineering, and a PhD in Chemical Engineering in China. He was appointed College Lecturer at National University of Ireland, Dublin (University College Dublin) in 1995, and was then senior lecturer, associate professor and full Professor. Sun is now Professor and Director of the Food Refrigeration and Computerised Food Technology Research Group in University College Dublin.
He is Editor-in-Chief of Food and Bioprocess Technology – an International Journal (Springer), Series Editor of Contemporary Food Engineering book series (CRC Press / Taylor & Francis), former Editor of Journal of Food Engineering (Elsevier), and editorial board member for a number of international journals. He is also a Chartered Engineer.
Academic career
Sun's academic work is in the area of food engineering research and education. His main research activities include cooling, drying and refrigeration processes and systems, quality and safety of food products, bioprocess simulation and optimisation, and computer vision technology. He has studied vacuum cooling of cooked meats, pizza quality inspection by computer vision, and edible films for shelf-life extension of fruit and vegetables.
He is the most cited author of several food science journals such as Lebensmittel-Wissenschaft & Technologie, Journal of Food Engineering, Trends in Food Scien |
https://en.wikipedia.org/wiki/List%20of%20free%20electronics%20circuit%20simulators | This is a list of free analog and digital electronic circuit simulators, available for Windows, macOS, and Linux, compared against UC Berkeley SPICE. The following table is split into two groups based on whether a simulator has a graphical visual interface or not. The latter requires a separate program to provide that feature, such as Qucs-S, Oregano, or a PCB suite that supports external simulators, such as KiCad or gEDA.
See also
List of HDL simulators for VHDL, Verilog, SystemVerilog, ...
Espresso heuristic logic minimizer, such as Logic Friday
Comparison of EDA software
List of instruction set simulators |
https://en.wikipedia.org/wiki/J.%20Scott%20Turner | J. Scott Turner (born 11 August 1951) is an American physiologist who has contributed to the theory of collective intelligence through his fieldwork on the South African species of termite Macrotermes michaelseni, suggesting the architectural complexity and sophistication of their mounds as an instance of his theory of the extended organism or superorganism. His theory was reviewed in a range of journals, including Perspectives in Biology and Medicine, the New York Times Book Review, EMBO Reports,
and American Scientist.
Overview
Working at the interface of physiology, evolution and design led Turner to formulate the idea of the Extended Organism, reviewed in a range of journals, including Nature. Turner's current research focuses on the emergence of super-organismal structure and function in mound-building termites of southern Africa (Macrotermes). His extended organism idea was inspired by his work on termite mounds that clarified how the mound functions as an external lung for respiratory gas exchange for the colony as a whole. His prior work on the thermal capacity of incubated birds' eggs showed that an egg with an embryo and an incubating parent function not as two separate organisms but as a coupled physiological unit.
Building upon this empirical work, Turner has argued that the principle of homeostasis is a fundamental property of living systems that accounts for, among other things, the phenomenon of biological design. With this argument, Turner counters both Intelligent Design and strong Darwinism, showing how natural selection is complemented by other factors. Turner proposes that modern evolutionary theory over-emphasizes genetic natural selection and a tendency to separate information from catalysis at the molecular level. By connecting information and catalysis, epigenesis coupled with homeostasis exemplifies the internal, directive capacities of the organism, linking information and behavior. Turner has also suggested that termite mounds exempl |
https://en.wikipedia.org/wiki/Robert%20Schrader | Robert Schrader (12 September 1939, Berlin – 29 November 2015, Berlin) was a German theoretical and mathematical physicist. He is known for the Osterwalder–Schrader axioms.
Education and career
From 1959 to 1964 Schrader studied physics at Kiel University, the University of Zurich, and the University of Hamburg, where he completed his Diplom in 1964. His Diplom thesis Die Charaktere der inhomogenen Lorentzgruppe (The characters of the inhomogeneous Lorentz group) was supervised by Harry Lehmann and Hans Joos. In 1965 he went to ETH Zurich, where he worked as an assistant and received his doctorate (Promotion) in 1969 under the supervision of Klaus Hepp and Res Jost. His thesis, published in Communications in Mathematical Physics, dealt with the Lee model introduced in 1954 by Tsung-Dao Lee.
From 1970 to 1973 Schrader was a research fellow at Harvard University and at Princeton University. At Harvard under the supervision of Arthur Jaffe, he worked with Konrad Osterwalder on Euclidean quantum field theory. In 1971 Schrader habilitated at the University of Hamburg with the thesis Das Yukawa Modell in zwei Raum-Zeit-Dimensionen (The Yukawa model in two space-time dimensions). He was a professor of theoretical physics at the Free University of Berlin from 1973 until his retirement in 2005. He was a visiting scientist in 1974 and again in 1980 at the IHÉS at Paris, in 1976 at Harvard, in 1979 at CERN, for the academic year 1986/87 at the Institute for Advanced Study, and in 1989 at the ETH. For two academic years from 1982 to 1984, he was a visiting professor at the State University of New York at Stony Brook.
Schrader was the author or coauthor of more than 100 scientific publications. He dealt with axiomatic quantum field theory and, with Konrad Osterwalder, introduced in 1973 the Osterwalder–Schrader axioms for Euclidean Green's functions. Arthur Jaffe suggested to his postdocs Osterwalder and Schrader that they study the work on the Euclidean formulation of quant |
https://en.wikipedia.org/wiki/Patrocladogram | A patrocladogram is a cladistic branching pattern that has been precisely modified by use of patristic distances (i.e., divergences between lineages); a type of phylogram. The patristic distance is defined as, "the number of apomorphic step changes separating two taxa on a cladogram," and is used exclusively to determine the amount of divergence of a characteristic from a common ancestor. This means that cladistic and patristic distances are combined to construct a new tree using various phenetic algorithms. The purpose of the patrocladogram in biological classification is to form a hypothesis about which evolutionary processes are actually involved before making a taxonomic decision. Patrocladograms are based on biostatistics that include but are not limited to: parsimony, distance matrix, likelihood methods, and Bayesian probability. Some examples of genomically related data that can be used as inputs for these methods are: molecular sequences, whole genome sequences, gene frequencies, restriction sites, distance matrices, unique characters, mutations such as SNPs, and mitochondrial genome data.
Cautions with patrocladogram usage
Patrocladograms are graphs that assert hypotheses of similarity whereas phylogenetic trees are graphs that assert hypotheses of common ancestry. When a patrocladogram does not logically match with a comparable phylogenetic tree hypothesis it should not be used to define monophyletic groups. The usage of patrocladograms can skew interpretations of novel evolution or depict homologous traits as homoplastic.
Programs for patrocladogram analysis
Most phylograms are saved in some variant of the Newick format, as used by programs such as PAUP*, MEGA (Molecular Evolutionary Genetics Analysis), Clustal and PHYLIP, or as a Nexus file. These various versions of the Newick format can then be used as an input for patristic distances in patrocladogram formation. There are two widely used pieces of software; one is used for analyzing patristic distance, and the other for creat |
https://en.wikipedia.org/wiki/Inquicus | Inquicus fellatus is an extinct, bowling pin-shaped worm from the Chengjiang Biota, found in what was once a marine environment in Early Cambrian Yunnan province. Its fossils are found attached to fossils of the worms Cricocosmia and Mafangscolex in either a parasitic or commensalistic relationship.
Description
Inquicus individuals were up to three centimeters long, shaped like a bowling pin with an elongated body that tapered to a slightly bulbous head. They attached their bottom ends to their hosts, with their feeding appendages facing outwards and away from their hosts' bodies.
Behavior
Although Inquicus attached to host worms, it is unlikely that the relationship was directly parasitic. The attachment point of Inquicus did not penetrate the skin of the hosts, but rather attached through suction. The species was also stiff, and there is no evidence that it could bend its mouth backwards to feed on the host. It is more likely that they simply rode on their hosts while browsing for food, or used them as a form of locomotion. |
https://en.wikipedia.org/wiki/Fr%C3%A9edericksz%20transition | The Fréedericksz transition is a phase transition in liquid crystals produced when a sufficiently strong electric or magnetic field is applied to a liquid crystal in an undistorted state. Below a certain field threshold the director remains undistorted. As the field value is gradually increased from this threshold, the director begins to twist until it is aligned with the field. In this fashion the Fréedericksz transition can occur in three different configurations known as the twist, bend, and splay geometries. The phase transition was first observed by Fréedericksz and Repiewa in 1927. In this first experiment of theirs, one of the walls of the cell was concave so as to produce a variation in thickness along the cell. The phase transition is named in honor of the Russian physicist Vsevolod Frederiks.
Derivation
Twist geometry
If a nematic liquid crystal that is confined between two parallel plates that induce a planar anchoring is placed in a sufficiently high constant electric field then the director will be distorted. If under zero field the director aligns along the x-axis then upon application of an electric field along the y-axis the director will be given by:
\( \mathbf{n} = n_x \hat{\mathbf{x}} + n_y \hat{\mathbf{y}}, \quad n_x = \cos\theta(z), \quad n_y = \sin\theta(z) . \)
Under this arrangement the distortion free energy density becomes:
\( \mathcal{F}_d = \frac{1}{2} K_2 \left( \frac{d\theta}{dz} \right)^2 . \)
The total energy per unit volume stored in the distortion and the electric field is given by:
\( \mathcal{F} = \frac{1}{2} K_2 \left( \frac{d\theta}{dz} \right)^2 - \frac{1}{2} \epsilon_0 \Delta\epsilon E^2 \sin^2\theta . \)
The free energy per unit area is then:
\( F_A = \int_0^d \left[ \frac{1}{2} K_2 \left( \frac{d\theta}{dz} \right)^2 - \frac{1}{2} \epsilon_0 \Delta\epsilon E^2 \sin^2\theta \right] dz . \)
Minimizing this using calculus of variations gives:
\( K_2 \frac{d^2\theta}{dz^2} + \epsilon_0 \Delta\epsilon E^2 \sin\theta \cos\theta = 0 . \)
Rewriting this in terms of \( \zeta = z/d \) and \( E_0 = \frac{\pi}{d} \sqrt{K_2 / (\epsilon_0 \Delta\epsilon)} \), where \( d \) is the separation distance between the two plates, results in the equation simplifying to:
\( \frac{d^2\theta}{d\zeta^2} + \pi^2 \left( \frac{E}{E_0} \right)^2 \sin\theta \cos\theta = 0 . \)
By multiplying both sides of the differential equation by \( d\theta/d\zeta \) this equation can be simplified further as follows:
\( \left( \frac{d\theta}{d\zeta} \right)^2 = \pi^2 \left( \frac{E}{E_0} \right)^2 \left( \sin^2\theta_m - \sin^2\theta \right) . \)
The value \( \theta_m \) is the value of \( \theta \) when \( \zeta = 1/2 \). Substituting \( k = \sin\theta_m \) and \( t = \sin\theta / \sin\theta_m \) into the equation above and integrating with respect to \( t \) from 0 to 1 gives:
\( \frac{E}{E_0} = \frac{2}{\pi} K(k) . \)
The value K(k) is the complete elliptic integral of the first kind. By noting that \( K(0) = \pi/2 \) one finally obtains the threshold electric field \( E_0 = \frac{\pi}{d} \sqrt{\frac{K_2}{\epsilon_0 \Delta\epsilon}} \).
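The relation \( E/E_0 = (2/\pi) K(k) \) is easy to evaluate numerically; the sketch below (an added illustration, not from the source) computes the reduced field needed to sustain a given mid-plane twist angle, using SciPy's ellipk, which takes the parameter \( m = k^2 \).

```python
import numpy as np
from scipy.special import ellipk  # complete elliptic integral of the first kind, parameter m = k**2

def reduced_field(theta_m):
    """E/E_0 required to sustain a mid-plane twist angle theta_m (radians)."""
    k = np.sin(theta_m)
    return (2 / np.pi) * ellipk(k ** 2)

for deg in (1, 30, 60, 85):
    print(f"theta_m = {deg:2d} deg -> E/E_0 = {reduced_field(np.radians(deg)):.4f}")
# As theta_m -> 0, E/E_0 -> 1: the distortion first appears at the threshold field E_0.
```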
As a result, by measuring the threshold elect |
https://en.wikipedia.org/wiki/Arteritis | Arteritis is the inflammation of the walls of arteries, usually as a result of infection or autoimmune response. Arteritis, a complex disorder, is still not entirely understood. Arteritis may be distinguished by its different types, based on the organ systems affected by the disease. A complication of arteritis is thrombosis, which can be fatal. Arteritis and phlebitis are forms of vasculitis.
Signs and symptoms
Symptoms of general arteritis may include:
Inflammation
Fever
Increased production of red blood cells (erythrocytes)
Limping
Reduced pulse
Diagnosis
Diagnosis of arteritis is based on unusual medical symptoms. Similar symptoms may be caused by a number of other conditions, such as Ehlers–Danlos syndrome and Marfan syndrome (both heritable disorders of connective tissue), tuberculosis, syphilis, spondyloarthropathies, Cogan's syndrome, Buerger's disease, Behçet's disease, and Kawasaki disease. Various imaging techniques may be used to diagnose and monitor disease progression. Imaging modalities may include direct angiography, magnetic resonance angiography, and ultrasonography.
Angiography is commonly used in the diagnosis of Takayasu arteritis, especially in the advanced stages of the disease, when arterial stenosis, occlusion, and aneurysms may be observed. However, angiography is a relatively invasive investigation, exposing patients to large doses of radiation, so is not recommended for routine, long-term monitoring of disease progression in patients with Takayasu arteritis.
Computed tomography angiography can determine the size of the aorta and its surrounding branches, and can identify vessel wall lesions in middle to late stages of arteritis. CTA can also show the blood flow within the blood vessels. Like angiography, CTA exposes patients to high dosages of radiation.
Magnetic resonance angiography is used to diagnose Takayasu arteritis in the early stages, showing changes such as the thickening of the vessel wall. Even small changes may be measured, |
https://en.wikipedia.org/wiki/Ultramercial | Ultramercial, LLC is an online advertising company. The company primarily specializes in interactive advertisements, which emphasize user engagement in exchange for access to premium content, such as video, games, and public internet access. The company claimed that its system was effective, with a 4.84% average click-through rate in 2008 (in comparison to traditional advertisements).
The company was the subject of a patent infringement lawsuit against Hulu, YouTube and WildTangent; in the lawsuit, Ultramercial accused the companies of infringing its patent ("the '545 Patent," filed in 2001), covering the business model surrounding a "Method and system for payment of intellectual property royalties by interposed sponsor on behalf of consumer over a telecommunications network". In other words, the '545 Patent was directed to modeling the value of certain programming based on the number of advertisements consumers would continue to watch, a more direct valuation of consumers' time than previous models based directly on dollars spent in advertisement campaigns. While WildTangent challenged the validity of the patent in 2011 because it felt the patent was too abstract, the Court of Appeals for the Federal Circuit upheld Ultramercial's patent, stating that it "does not simply claim the age-old idea that advertising can serve as currency. Instead [it] discloses a practical application of this idea." The court also asserted that the technical elements required to implement the system described were intricate enough to not be abstract.
In 2012, the Supreme Court ordered the Federal Circuit to re-examine the case in the wake of several recent patent rulings on "abstract" concepts. This ruling came in the wake of the Supreme Court's ruling on patentable subject matter in Mayo Collaborative Services v. Prometheus Laboratories, Inc. This opinion was also issued shortly after the America Invents Act ("AIA") went into effect, drastically changing the statutory landscape of US Pa |
https://en.wikipedia.org/wiki/Duncan%27s%20new%20multiple%20range%20test | In statistics, Duncan's new multiple range test (MRT) is a multiple comparison procedure developed by David B. Duncan in 1955. Duncan's MRT belongs to the general class of multiple comparison procedures that use the studentized range statistic qr to compare sets of means.
David B. Duncan developed this test as a modification of the Student–Newman–Keuls method that would have greater power. Duncan's MRT is especially protective against false negative (Type II) error at the expense of having a greater risk of making false positive (Type I) errors. Duncan's test is commonly used in agronomy and other agricultural research.
The result of the test is a set of subsets of means, where in each subset means have been found not to be significantly different from one another.
This test is often followed by the Compact Letter Display (CLD) methodology that renders the output of such test much more accessible to non-statistician audiences.
Definition
Assumptions:
1. A sample of \( n \) observed means \( m_1, m_2, \ldots, m_n \), which have been drawn independently from \( n \) normal populations with "true" means \( \mu_1, \mu_2, \ldots, \mu_n \), respectively.
2. A common standard error \( \sigma_m \). This standard error is unknown, but there is available the usual estimate \( s_m \), which is independent of the observed means and is based on a number of degrees of freedom, denoted by \( \nu \). (More precisely, \( s_m \) has the property that \( \nu s_m^2 / \sigma_m^2 \) is distributed as \( \chi^2 \) with \( \nu \) degrees of freedom, independently of the sample means.)
The exact definition of the test is:
The difference between any two means in a set of \( n \) means is significant provided the range of each and every subset which contains the given means is significant according to an \( \alpha_p \)-level range test, where \( \alpha_p = 1 - (1 - \alpha)^{p-1} \) and \( p \) is the number of means in the subset concerned.
Exception: The sole exception to this rule is that no difference between two means can be declared significant if the two means concerned are both contained in a subset of the means which has a non-significant range.
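The protection levels \( \alpha_p \) translate into a ladder of critical ranges. Below is a hedged sketch of how they might be computed with SciPy's studentized-range distribution (an illustration of the formulas above, not a reference implementation of Duncan's published tables):

```python
from scipy.stats import studentized_range

def duncan_critical_range(p, df, s_m, alpha=0.05):
    """Least significant range for a subset of p means.

    Uses the protection level alpha_p = 1 - (1 - alpha)**(p - 1) and the
    corresponding upper quantile of the studentized range for p means.
    """
    alpha_p = 1 - (1 - alpha) ** (p - 1)
    q = studentized_range.ppf(1 - alpha_p, p, df)
    return q * s_m

# Example: subsets of 2..5 means, error degrees of freedom 20, s_m = 1.2.
for p in range(2, 6):
    print(f"p = {p}: critical range = {duncan_critical_range(p, df=20, s_m=1.2):.3f}")
```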
Procedure
The procedure consists of a series of pa |
https://en.wikipedia.org/wiki/Superuser | In computing, the superuser is a special user account used for system administration. Depending on the operating system (OS), the actual name of this account might be root, administrator, admin or supervisor. In some cases, the actual name of the account is not the determining factor; on Unix-like systems, for example, the user with a user identifier (UID) of zero is the superuser, regardless of the name of that account; and in systems which implement a role based security model, any user with the role of superuser (or its synonyms) can carry out all actions of the superuser account.
The principle of least privilege recommends that most users and applications run under an ordinary account to perform their work, as a superuser account is capable of making unrestricted, potentially adverse, system-wide changes.
Unix and Unix-like
In Unix-like computer OSes (such as Linux), root is the conventional name of the user who has all rights or permissions (to all files and programs) in all modes (single- or multi-user). Alternative names include baron in BeOS and avatar on some Unix variants. BSD often provides a toor ("root" written backward) account in addition to a root account. Regardless of the name, the superuser always has a user ID of 0. The root user can do many things an ordinary user cannot, such as changing the ownership of files and binding to network ports numbered below 1024.
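Because superuser status on Unix-like systems is conferred by user ID 0 rather than by the account name, a program can test for it directly; a minimal sketch:

```python
import os

# On Unix-like systems, the superuser is whoever has effective user ID 0,
# regardless of the account name ("root", "toor", ...).
if os.geteuid() == 0:
    print("running with superuser privileges")
else:
    print(f"running as an ordinary user (euid={os.geteuid()})")
```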
The name root may have originated because root is the only user account with permission to modify the root directory of a Unix system. This directory was originally considered to be root's home directory, but the UNIX Filesystem Hierarchy Standard now recommends that root's home be at /root. The first process bootstrapped in a Unix-like system, usually called init, runs with root privileges. It spawns all other processes directly or indirectly, which inherit their parents' privileges. Only a process running as root is allowed to change its user ID to that of another user; once it has done so, t |
https://en.wikipedia.org/wiki/Alternative%20splicing | Alternative splicing, or alternative RNA splicing, or differential splicing, is a regulated process during gene expression that allows a single gene to code for multiple proteins. In this process, particular exons of a gene may be included within or excluded from the final, processed messenger RNA (mRNA) produced from that gene. This means the exons are joined in different combinations, leading to different (alternative) mRNA strands. Consequently, the proteins translated from alternatively spliced mRNAs usually contain differences in their amino acid sequence and, often, in their biological functions.
Biologically relevant alternative splicing occurs as a normal phenomenon in eukaryotes, where it increases the number of proteins that can be encoded by the genome. In humans, it is widely believed that ~95% of multi-exonic genes are alternatively spliced to produce functional alternative products from the same gene but many scientists believe that most of the observed splice variants are due to splicing errors and the actual number of biologically relevant alternatively spliced genes is much lower.
Alternative splicing enables the regulated generation of multiple mRNA and protein products from a single gene.
There are numerous modes of alternative splicing observed, of which the most common is exon skipping. In this mode, a particular exon may be included in mRNAs under some conditions or in particular tissues, and omitted from the mRNA in others.
The production of alternatively spliced mRNAs is regulated by a system of trans-acting proteins that bind to cis-acting sites on the primary transcript itself. Such proteins include splicing activators that promote the usage of a particular splice site, and splicing repressors that reduce the usage of a particular site. Mechanisms of alternative splicing are highly variable, and new examples are constantly being found, particularly through the use of high-throughput techniques. Researchers hope |
https://en.wikipedia.org/wiki/Biogeographic%20classification%20of%20India | Biogeographic classification of India is the division of India according to biogeographic characteristics. Biogeography is the study of the distribution of species, organisms, and ecosystems in geographic space and through geological time. India has a rich heritage of natural diversity. India ranks fourth in Asia and tenth in the world amongst the top 17 mega-diverse countries. India harbours nearly 11% of the world's floral diversity, comprising over 17,500 documented flowering plants, 6,200 endemic species, 7,500 medicinal plants and 246 globally threatened species, in only 2.4% of the world's land area. India is also home to four biodiversity hotspots—Andaman & Nicobar Islands, Eastern Himalaya, Indo-Burma region, and the Western Ghats. Hence the importance of biogeographical study of India's natural heritage.
The first initiative to classify the forests of India was done by Champion in 1936 and revised by Seth in 1968. This was followed by pioneering work on India's biogeography by MS Mani in 1974. Numerous schemes divide India into biogeographic regions as part of global schemes based on varying parameters, e.g. the Global 200 scheme of the Worldwide Fund for Nature. In addition, ongoing research focusing on particular taxa have included biogeographic aspects particular to the taxa under study and the area under consideration.
Rogers and Panwar of the Wildlife Institute of India outlined a scheme to divide India zoogeographically in 1988 while planning a protected area network for India. Similarly the Forest Survey of India has issued an atlas of forest vegetation types in 2011 based on Champion & Seth (1968). However, there is no official scheme mandated by the Government of India, as has been issued by the European Environment Agency in the case of the European Union.
Biogeographic realms
At the broadest level, referred to as realm in Udvardy (1975), all of India falls in the Indomalayan realm, with the exception of the high Himalayas, which |
https://en.wikipedia.org/wiki/Ibalizumab | Ibalizumab, sold under the brand name Trogarzo, is a non-immunosuppressive humanised monoclonal antibody that binds CD4, the primary receptor for HIV, and inhibits HIV from entering cells. It is a post-attachment inhibitor, blocking HIV from binding to the CCR5 and CXCR4 co-receptors after HIV binds to the CD4 receptor on the surface of a CD4 cell. Post-attachment inhibitors are a subclass of HIV drugs called entry inhibitors.
On March 6, 2018, the U.S. Food and Drug Administration (FDA) approved ibalizumab for multidrug-resistant HIV-1. It is used in combination with other antiretroviral drugs. The FDA considers it to be a first-in-class medication.
Medical uses
Ibalizumab, in combination with other antiretrovirals, is indicated for the treatment of human immunodeficiency virus type 1 (HIV-1) infection.
Development
Ibalizumab is being developed by TaiMed Biologics but was originally developed by Tanox, now part of Genentech. As part of Genentech's takeover of Tanox, the patent for ibalizumab was sold to TaiMed Biologics, a biotech company formed in 2007 with support from the Taiwanese Government through a $20 million investment by the state-owned National Development Fund.
Milestones for the intravenous (i.v.) infusion dosage form:
2003: completed a phase-1a clinical trial for i.v. infusion dosage form.
2003: granted fast track status by U.S. FDA.
2003: completed a phase-1b clinical trial for i.v. infusion dosage form.
2006: completed a phase-2a clinical trial for i.v. infusion dosage form.
2011: completed a phase-2b clinical trial for i.v. infusion dosage form.
2012: completed a phase-1 clinical trial for s.c. injection dosage form.
2013: initiated a phase-1/2 clinical trial for s.c. and i.m. injection dosage forms (on-going).
2014: granted orphan drug designation for HIV MDR patients by U.S. FDA.
2015: granted breakthrough therapy designation for i.v. infusion dosage form by U.S. FDA.
2015: initiated a phase-3 clinical t |
https://en.wikipedia.org/wiki/Software%20Industry%20Survey | The Software Industry Survey is an annual, publicly available scientific survey of the size, composition, current state and future of the software industry and its companies in Europe, with origins in Finland.
The survey organization is led (in 2010 and 2011) by the Software Business Lab research group of the BIT research centre at Aalto University, School of Science and Technology (the former Helsinki University of Technology), with the help of several industry partners. Researchers from the Helsinki University of Technology and Centres of Expertise first organized this survey in 1997 to provide an overview of the Finnish software industry, with financing mainly from the National Technology Agency (Tekes) and the Finnish Ministry of Trade and Industry. The 2011 Finnish survey received responses from 506 participants, slightly fewer than in 2010 due to stricter selection criteria. Surveys analysing the industry in European countries other than Finland are run by research partners. In 2011 the survey was implemented in Austria and Germany.
The published reports by the survey group cover current economic impacts on the software industry (such as the crises in 2009/2010 or Nokia's changes to its mobile platform in 2011) in roughly 100 pages. Starting with the 2009 report, all included images and tables can be re-used under the free Creative Commons Attribution license version 3.0.
Apart from these reports, the survey's data is used as input for various scientific studies based on the collected data. Results are also used by media companies like newspapers as source for news articles. |
https://en.wikipedia.org/wiki/Brachyhypopomus%20walteri | Brachyhypopomus walteri is a species of electric knifefish. The species was discovered in the Central Amazon. |
https://en.wikipedia.org/wiki/Taylor%20cone | A Taylor cone refers to the cone observed in electrospinning, electrospraying and hydrodynamic spray processes from which a jet of charged particles emanates above a threshold voltage. Aside from electrospray ionization in mass spectrometry, the Taylor cone is important in field-emission electric propulsion (FEEP) and colloid thrusters used in fine control and high efficiency (low power) thrust of spacecraft.
History
This cone was described by Sir Geoffrey Ingram Taylor in 1964 before electrospray was "discovered". This work followed on the work of Zeleny who photographed a cone-jet of glycerine in a strong electric field and the work of several others: Wilson and Taylor (1925), Nolan (1926) and Macky (1931). Taylor was primarily interested in the behavior of water droplets in strong electric fields, such as in thunderstorms.
Formation
When a small volume of electrically conductive liquid is exposed to an electric field, the shape of liquid starts to deform from the shape caused by surface tension alone. As the voltage is increased the effect of the electric field becomes more prominent. As this effect of the electric field begins to exert a similar magnitude of force on the droplet as the surface tension does, a cone shape begins to form with convex sides and a rounded tip. This approaches the shape of a cone with a whole angle (width) of 98.6°. When a certain threshold voltage has been reached the slightly rounded tip inverts and emits a jet of liquid. This is called a cone-jet and is the beginning of the electrospraying process in which ions may be transferred to the gas phase. It is generally found that in order to achieve a stable cone-jet a slightly higher than threshold voltage must be used. As the voltage is increased even more, other modes of droplet disintegration are found. The term Taylor cone can specifically refer to the theoretical limit of a perfect cone of exactly the predicted angle or generally refer to the approximately conical portion of |
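Taylor's 98.6° figure comes from requiring the cone's surface to be an equipotential, which forces the Legendre function of degree 1/2 to vanish on the surface; the sketch below (an added illustration, not from the source text) recovers the angle numerically.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import lpmv  # Legendre function of the first kind, real degree

# Equipotential cone condition: P_{1/2}(cos(theta)) = 0, with theta measured
# from the cone's axis of symmetry; the root lies near 130.7 degrees.
f = lambda theta: lpmv(0, 0.5, np.cos(theta))
theta0 = brentq(f, np.radians(100), np.radians(160))

half_angle = np.degrees(np.pi - theta0)  # semi-vertical angle of the cone
print(f"half angle  = {half_angle:.2f} deg")      # ~49.29
print(f"whole angle = {2 * half_angle:.2f} deg")  # ~98.6
```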
https://en.wikipedia.org/wiki/Pumpkin%20flour | Pumpkin flour, also known as pumpkin fruit flour, is a type of flour made from dried pumpkin flesh, excluding the stem and leaves, made with or without the rind and seeds included. Pumpkin products have drawn some commercial and research interest partly due to the low cost of pumpkin production. Additionally, pumpkin flour is a gluten-free flour, which makes it suitable for people with coeliac disease, and it has been used in blends with other gluten-free flours to make baked goods. It has also been recognized as a potential additive to conventional wheat flour for producing composite flour with increased fiber, as well as being potentially useful as a natural form of food coloring for baked goods. Sun-dried pumpkin flour has a shelf-life of about 11.5 months. |
https://en.wikipedia.org/wiki/List%20of%20probability%20distributions | Many probability distributions that are important in theory or applications have been given specific names.
Discrete distributions
With finite support
The Bernoulli distribution, which takes value 1 with probability p and value 0 with probability q = 1 − p.
The Rademacher distribution, which takes value 1 with probability 1/2 and value −1 with probability 1/2.
The binomial distribution, which describes the number of successes in a series of independent Yes/No experiments all with the same probability of success.
The beta-binomial distribution, which describes the number of successes in a series of independent Yes/No experiments with heterogeneity in the success probability.
The degenerate distribution at x0, where X is certain to take the value x0. This does not look random, but it satisfies the definition of random variable. This is useful because it puts deterministic variables and random variables in the same formalism.
The discrete uniform distribution, where all elements of a finite set are equally likely. This is the theoretical distribution model for a balanced coin, an unbiased die, a casino roulette wheel, or the first card of a well-shuffled deck.
The hypergeometric distribution, which describes the number of successes in the first m of a series of n consecutive Yes/No experiments, if the total number of successes is known. This distribution arises when there is no replacement.
The negative hypergeometric distribution, a distribution which describes the number of attempts needed to get the nth success in a series of Yes/No experiments without replacement.
The Poisson binomial distribution, which describes the number of successes in a series of independent Yes/No experiments with different success probabilities.
Fisher's noncentral hypergeometric distribution
Wallenius' noncentral hypergeometric distribution
Benford's law, which describes the frequency of the first digit of many naturally occurring data.
The ideal and robust soliton distributions.
Zipf's law or |
https://en.wikipedia.org/wiki/FAM214A | Protein FAM214A, also known as protein family with sequence similarity 214, A (FAM214A) is a protein that, in humans, is encoded by the FAM214A gene. FAM214A is a gene with unknown function found at the q21.2-q21.3 locus on Chromosome 15 (human). The protein product of this gene has two conserved domains, one of unknown function (DUF4210) and another one called Chromosome_Seg. Although the function of the FAM214A protein is uncharacterized, both DUF4210 and Chromosome_Seg have been predicted to play a role in chromosome segregation during meiosis.
Gene
Overview
The FAM214A gene is located on the negative DNA strand (see Sense (molecular biology)) of chromosome 15 between positions 52,904,712 and 53,002,014, making the gene 97,303 base pairs (bp) long. FAM214A has previously been labeled with two other aliases, KIAA1370 and FLJ10980. The FAM214A gene is predicted to contain 12 exons, which comprise the final 4,231 bp mRNA transcript after transcription has occurred. Transcription of this mRNA is driven by the promoter sequence and transcription factors, and the mRNA product is then translated into the final FAM214A protein. The promoter for the FAM214A mRNA sequence was predicted and analyzed by the El Dorado program on Genomatix. This promoter is 601 base pairs long and spans a portion of the 5' UTR.
Gene expression
FAM214A is considered to be ubiquitously expressed (or very nearly so) at low levels according to a number of sources, such as BioGPS and the Expression Atlas. BioGPS data show significantly higher expression levels in immune-related cells and tissues, suggesting an immune role; however, there has been no specific in situ evidence to support this claim. Expression data has been collected from a number of studies performed on a large range of genes; therefore, some of the data is contradictory in nature.
Protein
Overview
The function of the FAM214A protein in humans is still unknown; however, there are three fu |
https://en.wikipedia.org/wiki/Lateral%20collateral%20ligament%20of%20ankle%20joint | The lateral collateral ligament of the ankle joint (or external lateral ligament of the ankle-joint) comprises the ligaments of the ankle that attach to the fibula.
Structure
Its components are:
anterior talofibular ligament
The anterior talofibular ligament attaches the anterior margin of the lateral malleolus to the adjacent region of the talus bone. The most common ligament involved in ankle sprain is the anterior talofibular ligament.
posterior talofibular ligament
The posterior talofibular ligament runs horizontally between the neck of the talus and the medial side of the lateral malleolus.
calcaneofibular ligament
The calcaneofibular ligament is attached on the posteromedial side of the lateral malleolus and descends posteroinferiorly to the lateral side of the calcaneus. |
https://en.wikipedia.org/wiki/Phasor | In physics and engineering, a phasor (a portmanteau of phase vector) is a complex number representing a sinusoidal function whose amplitude (A), angular frequency (ω), and initial phase (θ) are time-invariant. It is related to a more general concept called analytic representation, which decomposes a sinusoid into the product of a complex constant and a factor depending on time and frequency. The complex constant, which depends on amplitude and phase, is known as a phasor, or complex amplitude, and (in older texts) sinor or even complexor.
A common situation in electrical networks powered by time varying current is the existence of multiple sinusoids all with the same frequency, but different amplitudes and phases. The only difference in their analytic representations is the complex amplitude (phasor). A linear combination of such functions can be represented as a linear combination of phasors (known as phasor arithmetic or phasor algebra) and the time/frequency dependent factor that they all have in common.
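For instance, summing two sinusoids of the same frequency reduces to adding their complex amplitudes; a small illustrative sketch:

```python
import cmath
import math

def phasor(amplitude, phase_deg):
    """Complex amplitude A*exp(j*theta) representing A*cos(wt + theta)."""
    return amplitude * cmath.exp(1j * math.radians(phase_deg))

# 3*cos(wt + 10 deg) + 4*cos(wt + 70 deg), both at the same frequency w:
total = phasor(3, 10) + phasor(4, 70)

print(f"sum = {abs(total):.3f} * cos(wt + {math.degrees(cmath.phase(total)):.2f} deg)")
# The time/frequency factor exp(jwt) is common to both terms and drops out.
```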
The origin of the term phasor rightfully suggests that a (diagrammatic) calculus somewhat similar to that possible for vectors is possible for phasors as well. An important additional feature of the phasor transform is that differentiation and integration of sinusoidal signals (having constant amplitude, period and phase) correspond to simple algebraic operations on the phasors; the phasor transform thus allows the analysis (calculation) of the AC steady state of RLC circuits by solving simple algebraic equations (albeit with complex coefficients) in the phasor domain instead of solving differential equations (with real coefficients) in the time domain. The originator of the phasor transform was Charles Proteus Steinmetz, working at General Electric in the late 19th century. He got his inspiration from Oliver Heaviside. Heaviside's operational calculus was modified so that the variable p becomes jω. The complex number j has a simple meaning: phase shift.
Glossin |
https://en.wikipedia.org/wiki/Imazapic | Imazapic is a chemical used as an herbicide. It controls many broadleaf weeds and controls or suppresses some grasses in pasture, rangeland and certain types of turf. It has a half-life of around 120 days in soil. Imazapic is considered an environmental hazard due to its harmful effects on aquatic life. |
https://en.wikipedia.org/wiki/Orbitrap | In mass spectrometry, Orbitrap is an ion trap mass analyzer consisting of an outer barrel-like electrode and a coaxial inner spindle-like electrode that traps ions in an orbital motion around the spindle. The image current from the trapped ions is detected and converted to a mass spectrum using the Fourier transform of the frequency signal.
History
The concept of electrostatically trapping ions in an orbit around a central spindle was developed by Kenneth Hay Kingdon in the early 1920s. The Kingdon trap consists of a thin central wire and an outer cylindrical electrode. A static applied voltage results in a radial logarithmic potential between the electrodes. In 1981, Knight introduced a modified outer electrode that included an axial quadrupole term that confines the ions on the trap axis. Neither the Kingdon nor the Knight configurations were reported to produce mass spectra.
The invention of the Orbitrap analyzer and its proof-of-principle by Makarov at the end of the 1990s started a sequence of technology improvements which resulted in the commercial introduction of this analyzer by Thermo Fisher Scientific as a part of the hybrid LTQ Orbitrap instrument in 2005.
Principle of operation
Trapping
In the Orbitrap, ions are trapped because their electrostatic attraction to the inner electrode is balanced by their inertia. Thus, ions cycle around the inner electrode on elliptical trajectories. In addition, the ions also move back and forth along the axis of the central electrode so that their trajectories in space resemble helices. Due to the properties of the quadro-logarithmic potential, their axial motion is harmonic, i.e. it is completely independent not only of motion around the inner electrode but also of all initial parameters of the ions except their mass-to-charge ratios m/z. Its angular frequency is \( \omega = \sqrt{k / (m/z)} \), where k is the force constant of the potential, similar to the spring constant.
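In consequence, each ion species oscillates axially at a frequency set only by its m/z; a small illustration (arbitrary units, values hypothetical):

```python
import math

def axial_frequency(k, m_over_z):
    """Axial angular frequency w = sqrt(k / (m/z)) in the quadro-logarithmic potential."""
    return math.sqrt(k / m_over_z)

k = 1.0  # force constant of the potential (arbitrary units)
for mz in (200.0, 400.0, 800.0):
    print(f"m/z = {mz:5.0f} -> w = {axial_frequency(k, mz):.5f}")
# Quadrupling m/z halves the axial frequency: w scales as 1/sqrt(m/z).
```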
Injection
In order to inject ions from an external ion source, the f |
https://en.wikipedia.org/wiki/Immunoglobulin%20class%20switching | Immunoglobulin class switching, also known as isotype switching, isotypic commutation or class-switch recombination (CSR), is a biological mechanism that changes a B cell's production of immunoglobulin from one type to another, such as from the isotype IgM to the isotype IgG. During this process, the constant-region portion of the antibody heavy chain is changed, but the variable region of the heavy chain stays the same (the terms variable and constant refer to changes or lack thereof between antibodies that target different epitopes). Since the variable region does not change, class switching does not affect antigen specificity. Instead, the antibody retains affinity for the same antigens, but can interact with different effector molecules.
Mechanism
Class switching occurs after activation of a mature B cell via its membrane-bound antibody molecule (or B cell receptor) to generate the different classes of antibody, all with the same variable domains as the original antibody generated in the immature B cell during the process of V(D)J recombination, but possessing distinct constant domains in their heavy chains.
Naïve mature B cells produce both IgM and IgD, which are the first two heavy chain segments in the immunoglobulin locus. After activation by antigen, these B cells proliferate. If these activated B cells encounter specific signaling molecules via their CD40 and cytokine receptors (both modulated by T helper cells), they undergo antibody class switching to produce IgG, IgA or IgE antibodies. During class switching, the constant region of the immunoglobulin heavy chain changes but the variable regions do not, and therefore antigenic specificity remains the same. This allows different daughter cells from the same activated B cell to produce antibodies of different isotypes or subtypes (e.g. IgG1, IgG2 etc.).
In humans, the order of the heavy chain exons is as follows:
μ - IgM
δ - IgD
γ3 - IgG3
γ1 - IgG1
α1 - IgA1
γ2 - IgG2
γ4 - IgG4
ε - IgE
α2 |
https://en.wikipedia.org/wiki/William%20Poduska | John William Poduska Sr. is an American engineer and entrepreneur. He was a founder of Prime Computer, Apollo Computer, and Stellar Computer. Prior to that he headed the Electronics Research Lab at NASA's Cambridge, Massachusetts, facility and also worked at Honeywell.
Poduska has been involved in a number of other high-tech startups. He also has served on the boards of Novell, Anadarko Petroleum, Anystream, Boston Ballet, Wang Center and the Boston Lyric Opera.
Poduska was elected a member of the National Academy of Engineering in 1986 for technical and entrepreneurial leadership in computing, including development of Prime, the first virtual memory minicomputer, and Apollo, the first distributed, co-operating workstation.
Education
Poduska was born in Memphis, Tennessee. In 1955, he graduated from Central High School in Memphis. He went on to earn a S.B. and S.M. in electrical engineering, both in 1960, from MIT. He also earned a Sc.D. in EECS from MIT in 1962.
Awards
Recipient of the McDowell Award; elected to the National Academy of Engineering, 1986
https://en.wikipedia.org/wiki/AN/UYK-43 | The AN/UYK-43 was the standard 32-bit computer of the United States Navy for surface ship and submarine platforms, with the first unit delivered in October 1984. Some 1,250 units were delivered through 2000. The size of a refrigerator, it replaced the older AN/UYK-7; both were built by UNISYS and shared the same instruction set. An enhancement to the UYK-43, the Open Systems Module (OSM), allows up to six VMEbus Type 6U commercial off-the-shelf (COTS) cards to be installed in a UYK-43 enclosure.
The UYK-43 is being replaced by commercial off-the-shelf systems. Retired systems are being cannibalized for repair parts to support systems still in use by U.S. and non-U.S. forces.
Architecture
The historic AN/UYK-43 architecture includes active redundancy. It includes multiple processors, multiple memory banks, and multiple input-output devices with interfaces for multiple disk drives. Power-on self test firmware incorporates features that reconfigure software loading in order to bypass failure. This allows it to run in degraded mode with failed processors, failed memory, failed disk drives, and failed input/output devices. Remote status boards perform fault reporting.
These features improve combat survivability and eliminate requirements for periodic diagnostic maintenance, which is the intent of condition-based maintenance.
The standardized computer programming language associated with the UYK and AYK series computers is CMS-2, developed by the Rand Corporation.
See also
AN/UYK-44 16-bit computer
CMS-2 (programming language)
Military computers |
https://en.wikipedia.org/wiki/Permuted%20congruential%20generator | A permuted congruential generator (PCG) is a pseudorandom number generation algorithm developed in 2014 by Dr. M.E. O'Neill which applies an output permutation function to improve the statistical properties of a modulo-2ⁿ linear congruential generator. It achieves excellent statistical performance with small and fast code, and small state size.
A PCG differs from a classical linear congruential generator (LCG) in three ways:
the LCG modulus and state are larger, usually twice the size of the desired output,
it uses a power-of-2 modulus, which results in a particularly efficient implementation with a full period generator and unbiased output bits, and
the state is not output directly, but rather the most significant bits of the state are used to select a bitwise rotation or shift which is applied to the state to produce the output.
It is the variable rotation which eliminates the problem of a short period in the low-order bits that power-of-2 LCGs suffer from.
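As an illustration (a sketch rather than the article's text), the widely published pcg32 variant, a 64-bit LCG whose output is an xorshifted high part followed by a random rotation (XSH-RR), can be written in Python along these lines, following the publicly available minimal C reference code:

```python
MASK64 = (1 << 64) - 1
MULT = 6364136223846793005  # multiplier from the published pcg32 reference code

class PCG32:
    """Sketch of the pcg32 (XSH-RR 64/32) generator."""

    def __init__(self, seed, stream):
        self.inc = ((stream << 1) | 1) & MASK64  # increment must be odd
        self.state = 0
        self.next_u32()
        self.state = (self.state + seed) & MASK64
        self.next_u32()

    def next_u32(self):
        old = self.state
        # The underlying power-of-2 LCG step.
        self.state = (old * MULT + self.inc) & MASK64
        # XSH: xorshift the state and keep the high 32 bits.
        xorshifted = (((old >> 18) ^ old) >> 27) & 0xFFFFFFFF
        # RR: the top 5 bits of the old state select the rotation.
        rot = old >> 59
        return ((xorshifted >> rot) | (xorshifted << ((-rot) & 31))) & 0xFFFFFFFF

rng = PCG32(seed=42, stream=54)
print([hex(rng.next_u32()) for _ in range(3)])
```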
Variants
The PCG family includes a number of variants. The core LCG is defined for widths from 8 to 128 bits, although only 64 and 128 bits are recommended for practical use; smaller sizes are for statistical tests of the technique.
The additive constant in the LCG can be varied to produce different streams. The constant is an arbitrary odd integer, so it does not need to be stored explicitly; the address of the state variable itself (with the low bit set) can be used.
There are several different output transformations defined. All perform well, but some have a larger margin than others. They are built from the following components:
RR: A random (input-dependent) rotation, with output half the size of input. Given a 2ᵇ-bit input word, the top b−1 bits are used for the rotate amount, the next-most-significant 2ᵇ⁻¹ bits are rotated right and used as the output, and the low 2ᵇ⁻¹+1−b bits are discarded.
RS: A random (input-dependent) shift, for cases where rotates are more expensive. Again, the out |
https://en.wikipedia.org/wiki/Set%20theory%20of%20the%20real%20line | Set theory of the real line is an area of mathematics concerned with the application of set theory to aspects of the real numbers.
For example, one knows that all countable sets of reals are null, i.e. have Lebesgue measure 0; one might therefore ask the least possible size of a set
which is not Lebesgue null. This invariant is called the uniformity of the ideal of null sets, denoted non(𝒩). There are many such invariants associated with this and other ideals, e.g. the ideal of meagre sets, plus more which do not have a characterisation in terms of ideals. If the continuum hypothesis (CH) holds, then all such invariants are equal to ℵ₁, the least uncountable cardinal. For example, we know non(𝒩) is uncountable, but being the size of some set of reals, under CH it can be at most ℵ₁.
On the other hand, if one assumes Martin's Axiom (MA), all common invariants are "big", that is equal to 2^ℵ₀, the cardinality of the continuum. Martin's Axiom is consistent with 2^ℵ₀ > ℵ₁. In fact, one should view Martin's Axiom as a forcing axiom that negates the need to do specific forcings of a certain class (those satisfying the ccc), since the consistency of MA with large continuum is proved by doing all such forcings (up to a certain size shown to be sufficient). Each invariant can be made large by some ccc forcing, thus each is big given MA.
If one restricts to specific forcings, some invariants will become big while others remain small. Analysing these effects is the major work of the area, seeking to determine which inequalities between invariants are provable and which are inconsistent with ZFC. The inequalities among the ideals of measure (null sets) and category (meagre sets) are captured in Cichoń's diagram. Seventeen models (forcing constructions) were produced during the 1980s, starting with work of Arnold Miller, to demonstrate that no other inequalities are provable. These are analysed in detail in the book by Tomek Bartoszyński and Haim Judah, two of the eminent workers in the field.
One curious |
https://en.wikipedia.org/wiki/Caldwell%20v.%20J.%20H.%20Findorff%20%26%20Son%2C%20Inc. | Caldwell v. J. H. Findorff & Son, Inc., 2005 WI App 111, 283 Wis. 2d 508, 698 N.W.2d 132, was a 2005 case heard by the Wisconsin Court of Appeals in the United States.
Background
Yahara Elementary in DeForest, Wisconsin, opened for the 1992–93 school year. In March 2002, 340 students at Yahara Elementary were sent home due to an issue with excessive moisture in the building causing toxic mold to grow in the ventilation ducts, pipes, and on carpets. Three teachers, a custodian, and a student suffered from respiratory problems and insisted on taking the company, J.H. Findorff & Son, Inc., to court because they believed the mold issues were a product of the school having been built improperly. Two of the teachers, a custodian, and a student were involved in the lawsuit.
Trial court
On August 20, 2002, the plaintiffs sued Findorff in Dane County under the safe place statute and in strict liability, alleging defects in the original design and in construction that caused the excess moisture. The circuit court concluded that the plaintiffs had discovered the cause of their symptoms more than three years before filing the suit, making their claims time-barred. The plaintiffs' claims under the safe place statute failed because the information submitted did not show that Findorff had custody or control of the area, and the negligence claims failed because none of their witnesses offered standard-of-care opinions regarding Findorff. The plaintiffs then appealed the decision.
Decision
In April 2005, the Wisconsin Court of Appeals reversed the lower court in an opinion written by Judge Dykman. The decision stated, in regard to the statute of limitations, that when the symptoms were discovered to be caused by the HVAC system was a question of fact that must be determined by the jury. As to the standard of care on the negligence claim, the court said that plaintiffs had submitted deposition testimony from an expert that could be used to esta |
https://en.wikipedia.org/wiki/Argentine%20torpedo | Tetronarce puelcha, commonly known as the Argentine torpedo, is a species of fish in the family Torpedinidae. It is found in Argentina, Brazil, and Uruguay. Its natural habitat is open seas. It is a rare electric ray species, moderately large (up to 104 cm), found in the southwest Atlantic (mostly off Argentina and Brazil). |
https://en.wikipedia.org/wiki/Lateral%20hypothalamus | The lateral hypothalamus (LH), also called the lateral hypothalamic area (LHA), contains the primary orexinergic nucleus within the hypothalamus that widely projects throughout the nervous system; this system of neurons mediates an array of cognitive and physical processes, such as promoting feeding behavior and arousal, reducing pain perception, and regulating body temperature, digestive functions, and blood pressure, among many others. Clinically significant disorders that involve dysfunctions of the orexinergic projection system include narcolepsy, motility disorders or functional gastrointestinal disorders involving visceral hypersensitivity (e.g., irritable bowel syndrome), and eating disorders.
The neurotransmitter glutamate and the endocannabinoids (e.g., anandamide) and the orexin neuropeptides orexin-A and orexin-B are the primary signaling neurochemicals in orexin neurons; pathway-specific neurochemicals include GABA, melanin-concentrating hormone, nociceptin, glucose, the dynorphin peptides, and the appetite-regulating peptide hormones leptin and ghrelin, among others. Notably, cannabinoid receptor 1 (CB1) is colocalized on orexinergic projection neurons in the lateral hypothalamus and many output structures, where the CB1 and orexin receptor 1 (OX1) receptors form the CB1–OX1 receptor heterodimer.
Inputs
Medial prefrontal cortex
Central nucleus of the amygdala
Outputs
The orexinergic projections from the lateral hypothalamus innervate the entirety of the remainder of the hypothalamus, with robust projections to the posterior hypothalamus, tuberomammillary nucleus (the histamine projection nucleus), the arcuate nucleus, and the paraventricular hypothalamic nucleus. In addition to the histaminergic nucleus, the orexin system also projects onto the ventral tegmental area dopamine nucleus, locus ceruleus noradrenergic nucleus, the serotonergic raphe nuclei, and cholinergic pedunculopontine nucleus and laterodorsal tegmental nucleus. The histaminergic, |
https://en.wikipedia.org/wiki/Visual%20space | Visual space is the experience of space by an aware observer. It is the subjective counterpart of the space of physical objects. There is a long history, in philosophy and later psychology, of writings describing visual space and its relationship to the space of physical objects. A partial list would include René Descartes, Immanuel Kant, Hermann von Helmholtz, and William James, to name just a few.
Object space and visual space
Space of physical objects
The location and shape of physical objects can be accurately described with the tools of geometry. For practical purposes the space we occupy is Euclidean. It is three-dimensional and measurable using tools such as rulers. It can be quantified using co-ordinate systems like the Cartesian x,y,z, or polar coordinates with angles of elevation, azimuth and distance from an arbitrary origin.
Space of visual percepts
Percepts, the counterparts in the aware observer's conscious experience of objects in physical space, constitute an ordered ensemble; as Ernst Cassirer explained, visual space cannot be measured with rulers. Historically, philosophers used introspection and reasoning to describe it. With the development of psychophysics, beginning with Gustav Fechner, there has been an effort to develop suitable experimental procedures which allow objective descriptions of visual space, including geometric descriptions, to be developed and tested. An example illustrates the relationship between the concepts of object and visual space. Two straight lines are presented to an observer who is asked to set them so that they appear parallel. When this has been done, the lines are parallel in visual space. A comparison is then possible with the actual measured layout of the lines in physical space. Good precision can be achieved using these and other psychophysical procedures in human observers or behavioral ones in trained animals.
Visual Space and the Visual Field
The visual field, the area or extent of physical |
https://en.wikipedia.org/wiki/Noncommutative%20projective%20geometry | In mathematics, noncommutative projective geometry is a noncommutative analog of projective geometry in the setting of noncommutative algebraic geometry.
Examples
The quantum plane, the most basic example, is the quotient ring of the free ring: k⟨x, y⟩/(yx − qxy).
More generally, the quantum polynomial ring is the quotient ring: k⟨x₁, …, xₙ⟩/(xⱼxᵢ − qᵢⱼxᵢxⱼ).
Proj construction
By definition, the Proj of a graded ring R is the quotient category of the category of finitely generated graded modules over R by the subcategory of torsion modules. If R is a commutative Noetherian graded ring generated by degree-one elements, then the Proj of R in this sense is equivalent to the category of coherent sheaves on the usual Proj of R. Hence, the construction can be thought of as a generalization of the Proj construction for a commutative graded ring.
See also
Elliptic algebra
Calabi–Yau algebra
Sklyanin algebra |
https://en.wikipedia.org/wiki/Plasma%20parameters | Plasma parameters define various characteristics of a plasma, an electrically conductive collection of charged particles that responds collectively to electromagnetic forces. Plasma typically takes the form of neutral gas-like clouds or charged ion beams, but may also include dust and grains. The behaviour of such particle systems can be studied statistically.
Fundamental plasma parameters
All quantities are in Gaussian (cgs) units except energy and temperature, which are in electronvolts. The ion mass is expressed in units of the proton mass and the ion charge in units of the elementary charge (in the case of a fully ionized atom, it equals the respective atomic number). The other physical quantities used are the Boltzmann constant (k_B), the speed of light (c), and the Coulomb logarithm (ln Λ).
Frequencies
Lengths
Velocities
Dimensionless
number of particles in a Debye sphere
Alfvén speed to speed of light ratio
electron plasma frequency to gyrofrequency ratio
ion plasma frequency to gyrofrequency ratio
thermal pressure to magnetic pressure ratio, or beta, β
magnetic field energy to ion rest energy ratio
Collisionality
In the study of tokamaks, collisionality is a dimensionless parameter which expresses the ratio of the electron-ion collision frequency to the banana orbit frequency.
The plasma collisionality ν∗ is defined as ν∗ = ν_ei ε^(−3/2) qR/v_T, with ion thermal velocity v_T = √(2k_B T_i/m_i),
where ν_ei denotes the electron-ion collision frequency, R is the major radius of the plasma, ε is the inverse aspect-ratio, and q is the safety factor. The plasma parameters m_i and T_i denote, respectively, the mass and temperature of the ions, and k_B is the Boltzmann constant.
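A minimal sketch of this definition in Python (function and argument names are mine, and the exact numerical convention, e.g. the factor of 2 in the thermal velocity, varies between references):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def collisionality(nu_ei, eps, q, major_radius, t_i, m_i):
    """Tokamak collisionality nu* = nu_ei * eps**(-3/2) * q * R / v_T,
    with ion thermal velocity v_T = sqrt(2 * k_B * T_i / m_i).
    This follows the definition assumed above; treat it as illustrative."""
    v_t = math.sqrt(2.0 * K_B * t_i / m_i)
    return nu_ei * eps ** -1.5 * q * major_radius / v_t

# Illustrative (made-up) numbers for a deuterium plasma:
print(collisionality(nu_ei=1e4, eps=0.3, q=2.0,
                     major_radius=3.0, t_i=1e8, m_i=3.34e-27))
```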
Electron temperature
Temperature is a statistical quantity whose formal definition is T = (∂U/∂S)|_(V,N), or the change in internal energy with respect to entropy, holding volume and particle number constant. A practical definition comes from the fact that the atoms, molecules, or whatever particles in a system have an average kinetic energy. The average means to average over the kinetic |
https://en.wikipedia.org/wiki/Average%20propensity%20to%20consume | Average propensity to consume (APC) (as well as the marginal propensity to consume) is a concept developed by John Maynard Keynes to analyze the consumption function, which is a formula where total consumption expenditures (C) of a household consist of autonomous consumption (Ca) and income (Y) (or disposable income (Yd)) multiplied by marginal propensity to consume (c1 or MPC). According to Keynes, the individual's real income determines saving and consumption decisions.
Consumption function: C = Ca + c1 × Y
The average propensity to consume is referred to as the percentage of income spent on goods and services. It is the proportion of income that is consumed, and it is calculated by dividing total consumption expenditure (C) by total income (Y): APC = C / Y
It can also be read as spending per monetary unit of income.
Moreover, Keynes's theory claims that wealthier people spend a smaller share of their income on consumption than less wealthy people. This is caused by autonomous consumption: everyone needs to eat and get dressed, so they buy a certain amount of food and clothes or pay rent, and they all spend some amount of money on these necessities. So the ratio falls as income and wealth rise. This is why it seems as if the poor consume more than the rich: they must spend a larger share of their income on consumption because they have less money available.
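A worked toy example (the numbers are illustrative, not from the article) showing why APC falls as income rises under a consumption function C = Ca + c1·Y:

```python
C_A = 100.0  # autonomous consumption (hypothetical)
MPC = 0.8    # marginal propensity to consume, c1 (hypothetical)

def apc(income):
    """Average propensity to consume: total consumption / income."""
    consumption = C_A + MPC * income
    return consumption / income

for y in (500, 1000, 2000, 4000):
    print(y, round(apc(y), 3))
# APC falls toward the MPC (0.8) as income grows: 1.0, 0.9, 0.85, 0.825
```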
Average propensity to consume is not as significant as the marginal propensity to consume (MPC), which represents the additional change in consumer spending resulting from an additional change in household income per monetary unit; it is calculated as the derivative of the consumption function with respect to income (the ratio of the change in consumption to the change in income). It is used for calculating the multiplier in the aggregate expenditures model.
Characteristics of APC
Average propensity to consume is decreasing
Since autonomous consumption is positive (Ca>0), the ratio of APC falls with an increase in disposable income because w |
https://en.wikipedia.org/wiki/Microbial%20fuel%20cell | Microbial fuel cell (MFC) is a type of bioelectrochemical fuel cell system, also known as a micro fuel cell, that generates electric current by diverting electrons produced from the microbial oxidation of reduced compounds (also known as fuel or electron donor) on the anode to oxidized compounds such as oxygen (also known as oxidizing agent or electron acceptor) on the cathode through an external electrical circuit. MFCs produce electricity by using the electrons derived from biochemical reactions catalyzed by bacteria.
MFCs can be grouped into two general categories: mediated and unmediated. The first MFCs, demonstrated in the early 20th century, used a mediator: a chemical that transfers electrons from the bacteria in the cell to the anode. Unmediated MFCs emerged in the 1970s; in this type of MFC the bacteria typically have electrochemically active redox proteins such as cytochromes on their outer membrane that can transfer electrons directly to the anode. In the 21st century MFCs have started to find commercial use in wastewater treatment.
History
The idea of using microbes to produce electricity was conceived in the early twentieth century. Michael Cressé Potter initiated the subject in 1911. Potter managed to generate electricity from Saccharomyces cerevisiae, but the work received little coverage. In 1931, Barnett Cohen created microbial half fuel cells that, when connected in series, were capable of producing over 35 volts with a current of only 2 milliamps.
A study by DelDuca et al. used hydrogen produced by the fermentation of glucose by Clostridium butyricum as the reactant at the anode of a hydrogen and air fuel cell. Though the cell functioned, it was unreliable owing to the unstable nature of hydrogen production by the micro-organisms. This issue was resolved by Suzuki et al. in 1976, who produced a successful MFC design a year later.
In the late 1970s, little was understood about how microbial fuel cells functioned. The concept was studied by Robin M |
https://en.wikipedia.org/wiki/Oersted%20Medal | The Oersted Medal recognizes notable contributions to the teaching of physics. Established in 1936, it is awarded by the American Association of Physics Teachers. The award is named for Hans Christian Ørsted. It is the Association's most prestigious award.
Well-known recipients include Nobel laureates Robert Andrews Millikan, Edward M. Purcell, Richard Feynman, Isidor I. Rabi, Norman F. Ramsey, Hans Bethe, and Carl Wieman; as well as Arnold Sommerfeld, George Uhlenbeck, Jerrold Zacharias, Philip Morrison, Melba Phillips, Victor Weisskopf, Gerald Holton, John A. Wheeler, Frank Oppenheimer, Robert Resnick, Carl Sagan, Freeman Dyson, Daniel Kleppner, Lawrence Krauss, Anthony French, David Hestenes, Robert Karplus, Robert Pohl, and Francis Sears.
The 2008 medalist, Mildred S. Dresselhaus, is the third woman to win the award in its 70-plus-year history.
Medalists
William Suddards Franklin – 1936
Edwin Herbert Hall – 1937
Alexander Wilmer Duff – 1938
Benjamin Harrison Brown – 1939
Robert Andrews Millikan – 1940
Henry Crew – 1941
not awarded in 1942
George Walter Stewart – 1943
Roland Roy Tileston – 1944
Homer Levi Dodge – 1945
Ray Lee Edwards – 1946
Duane Roller – 1947
William Harley Barber – 1948
Arnold Sommerfeld – 1949
Orrin H. Smith – 1950
John Wesley Hornbeck – 1951
Ansel A. Knowlton – 1952
Richard M. Sutton – 1953
Clifford N. Wall – 1954
Vernet E. Eaton – 1955
George E. Uhlenbeck – 1956
Mark W. Zemansky – 1957
Jay William Buchta – 1958
Paul Kirkpatrick – 1959
Robert W. Pohl – 1960
Jerrold R. Zacharias – 1961
Francis W. Sears – 1962
Francis L. Friedman – 1963
Walter Christian Michels – 1964
Philip Morrison – 1965
Leonard I. Schiff – 1966
Edward M. Purcell – 1967
Harvey E. White – 1968
Eric M. Rogers – 1969
Edwin C. Kemble – 1970
Uri Haber-Schaim – 1971
Richard P. Feynman – 1972
Arnold Arons – 1973
Melba N. Phillips – 1974
Robert Resnick – 1975
Victor F. Weisskopf – 1976
H. Richard Crane – 1977
Wallace A. Hilton |
https://en.wikipedia.org/wiki/Amagat | An amagat is a practical unit of volumetric number density. Although it can be applied to any substance at any conditions, it is defined as the number of ideal gas molecules per unit volume at 1 atm (101.325 kPa) and 0 °C (273.15 K). It is named after Émile Amagat, who also has Amagat's law named after him. The abbreviated form of amagat is "amg". The abbreviation "Am" has also been used.
SI conversion
The amg unit for number density can be converted to the SI unit mol/m3 by the formula 1 amg ≘ 44.615 mol/m3,
where ≘ indicates correspondence, since the SI unit is of molar concentration and not number density.
The conversion factor (44.615...) is the Loschmidt constant expressed as a molar density.
The number density of an ideal gas at pressure p and temperature T can be calculated as η = (p/p0) × (T0/T) amg, where T0 = 273.15 K and p0 = 101.325 kPa (STP before 1982).
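A minimal sketch of these conversions (function names are mine):

```python
T0 = 273.15                  # K
P0 = 101.325e3               # Pa
MOL_PER_M3_PER_AMG = 44.615  # Loschmidt constant as a molar density

def amagats(p_pa, t_kelvin):
    """Ideal-gas number density in amagats at pressure p and temperature T."""
    return (p_pa / P0) * (T0 / t_kelvin)

def mol_per_m3(p_pa, t_kelvin):
    """The same density converted to the SI unit mol/m^3."""
    return amagats(p_pa, t_kelvin) * MOL_PER_M3_PER_AMG

print(amagats(101.325e3, 293.15))  # ~0.932 amg at 20 degC and 1 atm
```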
Example
Number density of an ideal gas (such as air) at room temperature (20 °C) and 1 atm (101.325 kPa) is 273.15/293.15 ≈ 0.932 amg. |
https://en.wikipedia.org/wiki/Mind%E2%80%93body%20problem | The mind–body problem is a philosophical problem concerning the relationship between thought and consciousness in the human mind, and the body.
It is not obvious how the concept of the mind and the concept of the body relate. For example, feelings of sadness (which are mental events) cause people to cry (which is a physical state of the body). Finding a joke funny (a mental event) causes one to laugh (another bodily state). Feelings of pain (in the mind) cause avoidance behaviours (in the body), and so on.
Similarly, changing the chemistry of the body (and the brain especially) via drugs (such as antipsychotics, SSRIs, or alcohol) can change one's state of mind in nontrivial ways. Alternatively, therapeutic interventions like cognitive behavioural therapy can change cognition in ways that have downstream effects on bodily health.
In general, the existence of these mind–body connections seems unproblematic. Issues arise, however, once one considers what exactly we should make of these relations from a metaphysical or scientific perspective. Such reflections quickly raise a number of questions like:
Are the mind and body two distinct entities, or a single entity?
If the mind and body are two distinct entities, do the two of them causally interact?
Is it possible for these two distinct entities to causally interact?
What is the nature of this interaction?
Can this interaction ever be an object of empirical study?
If the mind and body are a single entity, then are mental events explicable in terms of physical events, or vice versa?
Is the relation between mental and physical events something that arises de novo at a certain point in development?
And so on. These and other questions that discuss the relation between mind and body are questions that all fall under the banner of the 'mind–body problem'.
Mind–body interaction and mental causation
Philosophers David L. Robb and John F. Heil introduce mental causation in terms of the mind–body problem of int |
https://en.wikipedia.org/wiki/Dolphin%20safe%20label | Dolphin-safe labels are used to denote compliance with laws or policies designed to minimize dolphin fatalities during fishing for tuna destined for canning.
Some labels impose stricter requirements than others. Dolphin-safe tuna labeling originates in the United States. The term Dolphin Friendly is often used in Europe, and has the same meaning, although in Latin America the standards for Dolphin Safe/Dolphin Friendly tuna are different than elsewhere. The labels have become increasingly controversial since their introduction, particularly among sustainability groups in the U.S., but this stems from the fact that Dolphin Safe was never meant to be an indication of tuna sustainability. Many U.S. tuna brands that carry dolphin-safe labels are amongst the least sustainable for oceans, according to Greenpeace's 2017 Shopping Guide.
While the Dolphin Safe label and its standards have legal status in the United States under the Dolphin Protection Consumer Information Act, a part of the US Marine Mammal Protection Act, tuna companies around the world adhere to the standards on a voluntary basis, managed by the non-governmental organization Earth Island Institute, based in Berkeley, California. The Inter-American Tropical Tuna Commission has promoted an alternative Dolphin Safe label, which requires 100% coverage by independent observers on boats and limits the overall mortality of dolphins in the ocean. This label is mostly used in Latin America.
According to the U.S. Consumers Union, Earth Island and U.S. dolphin safe labels provide no guarantee that dolphins are not harmed during the fishing process because verification is neither universal nor independent. Still, tuna fishing boats and canneries operating under any of the various U.S. labeling standards are subject to surprise inspection and observation. For US import, companies face strict charges of fraud for any violation of the label standards, while Earth Island Institute (EII), an independent environmental orga |
https://en.wikipedia.org/wiki/Alloenzyme | Alloenzymes (or also called allozymes) are variant forms of an enzyme which differ structurally but not functionally from other allozymes coded for by different alleles at the same locus. These are opposed to isozymes, which are enzymes that perform the same function, but which are coded by genes located at different loci.
Alloenzymes are common biological enzymes that exhibit high levels of functional evolutionary conservation throughout specific phyla and kingdoms. They are used by phylogeneticists as molecular markers to gauge evolutionary histories and relationships between different species. This can be done because allozymes do not have the same structure. They can be separated by capillary electrophoresis. However, some species are monomorphic for many of their allozymes which would make it difficult for phylogeneticists to assess the evolutionary histories of these species. In these instances, phylogeneticists would have to use another method to determine the evolutionary history of a species.
These enzymes generally perform very basic functions found commonly throughout all lifeforms, such as DNA polymerase, the enzyme that repairs and copies DNA. Significant changes in this enzyme reflect significant events in the evolutionary history of organisms. As expected, DNA polymerase shows relatively small differences in its amino acid sequence between phyla and even kingdoms.
The key to choosing which alloenzyme to use in a comparison between multiple species is to choose one that is as variable as possible while still being present in all the organisms. By comparing the amino acid sequence of the enzyme in the species, more amino acid similarities should be seen in species that are more closely related, and fewer between those that are more distantly related. The less well conserved the enzyme is, the more amino acid differences will be present in even closely related species. |
https://en.wikipedia.org/wiki/Dorsal%20tarsometatarsal%20ligaments | The dorsal tarsometatarsal ligaments are ligaments located in the foot. They are strong, flat bands that stretch from the tarsal bones to the metatarsals.
The first metatarsal is joined to the first cuneiform by a broad, thin band; the second has three, one from each cuneiform bone; the third has one from the third cuneiform; the fourth has one from the third cuneiform and one from the cuboid; and the fifth, one from the cuboid. |
https://en.wikipedia.org/wiki/Saposin%20protein%20domain | The saposin domains refer to two evolutionarily conserved protein domains found in saposin and related proteins (SAPLIP). Saposins are small lysosomal proteins that serve as activators of various lysosomal lipid-degrading enzymes. They probably act by isolating the lipid substrate from the membrane surroundings, thus making it more accessible to the soluble degradative enzymes. All mammalian saposins are synthesized as a single precursor molecule (prosaposin) which contains four Saposin-B domains, yielding the active saposins after proteolytic cleavage, and two Saposin-A domains that are removed in the activation reaction.
The Saposin-B domains also occur in other proteins, most of them playing a role in interacting with membranes.
Classification
The saposin (SapB1-SapB2) domains are found in a wide range of proteins. Each half-domain encodes two alpha helices in the SapB domain for a total of four.
The mammalian prosaposin (domain organization below) is a prototypic family member. It also includes the N- and C-terminal SapA domains, both of which are proteolytically cleaved as the proprotein matures. Four connected pairs of SapB1-SapB2 domains are released, sequentially named Saposin-A through D. Some closely related proteins, such as PSAPL1 and SFTPB, share the architecture and the cleaving mechanism in whole or in part. While prosaposin and PSAPL1 act in lysosomal lipid degradation, SFTPB is released into the pulmonary surfactant, playing a role in rearranging lipids.
However, proteins like GNLY and AOAH do not carry a SapA domain. While GNLY is essentially a SapB with N-terminal extensions specialized for lysing pathogen cell membranes, the AOAH protein uses the uncleaved SapB domain for targeting the correct intracellular compartment.
The plant-specific insert is an unusual variation on the SapB domains. It features a circular permutation compared to the usual topology: instead of featuring a SapB1-SapB2 unit, it is made up of a SapB2-linker-SapB1 unit seem |
https://en.wikipedia.org/wiki/Rhizopogon%20roseolus | Rhizopogon roseolus, shōro (Japanese: 松露/ショウロ), is an ectomycorrhizal fungus, considered a delicacy in east Asia and Japan and used as a soil inoculant in agriculture and horticulture.
Morphology
The fruiting bodies are approximately spherical to elongated, often pear-shaped. Their diameter is up to three centimeters in dry specimens. Their color is initially white, but soon turns pink to reddish-brownish, sometimes also delicately violet-pink. At their base are root-like strands of mycelium. They give off a faint odor. There are numerous fine elastic fibrils or veins, which are not prominent, and are colored the same as the peridium or darker. The peridium is 240–400 µm thick and single-layered. The gleba is initially white and becomes yellowish as it dries. The cavities within are labyrinthine, empty or, where small, filled with spores. They are formed by hyaline (transparent), branched hyphae. The basidia are club-shaped and hyaline, measuring 12–13 by 9–10 µm. The sterigmata are as long as the spores, which are uniquely colored, some ocher-tawny, smooth, and ellipsoidal in shape; they measure 7–16 by 3–5 µm. Since the basidia have lost the function of actively ejecting spores, the spores are dispersed not only by rainwater washing away the matured and viscous fruiting body fragments but also by insects and other feeding animals.
Distribution and ecology
Rhizopogon roseolus is considered a cosmopolite species, distributed in Europe, North America and northeastern Asia. It has also been artificially introduced into New Zealand as an edible fungus. The fungus lives by forming ectomycorrhizae with pine trees. It has characteristics similar to those of the pioneer plants, and often appears when the typical pioneer plants settle in areas that have been subjected to strong disturbance. In Europe it grows under Pinus nigra on calcareous soil, and it forms fruiting bodies from August to November. In Japan, it is found under pine trees such as Pinus densiflora and Pinus thunber |
https://en.wikipedia.org/wiki/Quasiprobability%20distribution | A quasiprobability distribution is a mathematical object similar to a probability distribution but which relaxes some of Kolmogorov's axioms of probability theory. Quasiprobabilities share several general features with ordinary probabilities, such as, crucially, the ability to yield expectation values with respect to the weights of the distribution. However, they can violate the σ-additivity axiom: integrating over them does not necessarily yield probabilities of mutually exclusive states. Indeed, quasiprobability distributions can also have regions of negative probability density, counterintuitively contradicting the first axiom. Quasiprobability distributions arise naturally in the study of quantum mechanics when treated in phase space formulation, commonly used in quantum optics, time-frequency analysis, and elsewhere.
Introduction
In the most general form, the dynamics of a quantum-mechanical system are determined by a master equation in Hilbert space: an equation of motion for the density operator (usually written ρ) of the system. The density operator is defined with respect to a complete orthonormal basis. Although it is possible to directly integrate this equation for very small systems (i.e., systems with few particles or degrees of freedom), this quickly becomes intractable for larger systems. However, it is possible to prove that the density operator can always be written in a diagonal form, provided that it is with respect to an overcomplete basis. When the density operator is represented in such an overcomplete basis, then it can be written in a manner more closely resembling an ordinary function, at the expense that the function has the features of a quasiprobability distribution. The evolution of the system is then completely determined by the evolution of the quasiprobability distribution function.
The coherent states, i.e. right eigenstates of the annihilation operator serve as the overcomplete basis in the construction described above. |
https://en.wikipedia.org/wiki/List%20of%20second%20moments%20of%20area | The following is a list of second moments of area of some shapes. The second moment of area, also known as area moment of inertia, is a geometrical property of an area which reflects how its points are distributed with respect to an arbitrary axis. The unit of dimension of the second moment of area is length to the fourth power, L⁴, and should not be confused with the mass moment of inertia. If the piece is thin, however, the mass moment of inertia equals the area density times the area moment of inertia.
Second moments of area
Please note that for the second moment of area equations in the below table: and
Parallel axis theorem
The parallel axis theorem can be used to determine the second moment of area of a rigid body about any axis, given the body's second moment of area about a parallel axis through the body's centroid, the area of the cross section, and the perpendicular distance (d) between the axes.
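In symbols, a standard formulation (stated here for reference): with I_c the second moment of area about the parallel axis through the centroid, A the area of the cross section, and d the perpendicular distance between the axes,

I = I_c + A·d²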
See also
List of moments of inertia
List of centroids
Second polar moment of area |
https://en.wikipedia.org/wiki/Kaplansky%27s%20theorem%20on%20quadratic%20forms | In mathematics, Kaplansky's theorem on quadratic forms is a result on simultaneous representation of primes by quadratic forms. It was proved in 2003 by Irving Kaplansky.
Statement of the theorem
Kaplansky's theorem states that a prime p congruent to 1 modulo 16 is representable by both or none of x² + 32y² and x² + 64y², whereas a prime p congruent to 9 modulo 16 is representable by exactly one of these quadratic forms.
This is remarkable since the primes represented by each of these forms individually are not describable by congruence conditions.
Proof
Kaplansky's proof uses the facts that 2 is a 4th power modulo p if and only if p is representable by x² + 64y², and that −4 is an 8th power modulo p if and only if p is representable by x² + 32y².
Examples
The prime p = 17 is congruent to 1 modulo 16 and is representable by neither x² + 32y² nor x² + 64y².
The prime p = 113 is congruent to 1 modulo 16 and is representable by both x² + 32y² and x² + 64y² (since 113 = 9² + 32×1² and 113 = 7² + 64×1²).
The prime p = 41 is congruent to 9 modulo 16 and is representable by x² + 32y² (since 41 = 3² + 32×1²), but not by x² + 64y².
The prime p = 73 is congruent to 9 modulo 16 and is representable by x² + 64y² (since 73 = 3² + 64×1²), but not by x² + 32y².
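The statement is easy to check numerically; a small Python sketch (helper names are mine) that verifies the pattern for primes below a bound:

```python
from math import isqrt

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, isqrt(n) + 1))

def represented(p, c):
    """True if p = x^2 + c*y^2 for some integers x, y >= 0."""
    y = 0
    while c * y * y <= p:
        r = p - c * y * y
        if isqrt(r) ** 2 == r:
            return True
        y += 1
    return False

for p in filter(is_prime, range(3, 2000)):
    by32, by64 = represented(p, 32), represented(p, 64)
    if p % 16 == 1:
        assert by32 == by64  # both or none
    elif p % 16 == 9:
        assert by32 != by64  # exactly one
print("Kaplansky's pattern holds for all primes below 2000")
```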
Similar results
Five results similar to Kaplansky's theorem are known:
A prime p congruent to 1 modulo 20 is representable by both or none of x² + 20y² and x² + 100y², whereas a prime p congruent to 9 modulo 20 is representable by exactly one of these quadratic forms.
A prime p congruent to 1, 16 or 22 modulo 39 is representable by both or none of x² + xy + 10y² and x² + xy + 127y², whereas a prime p congruent to 4, 10 or 25 modulo 39 is representable by exactly one of these quadratic forms.
A prime p congruent to 1, 16, 26, 31 or 36 modulo 55 is representable by both or none of x² + xy + 14y² and x² + xy + 69y², whereas a prime p congruent to 4, 9, 14, 34 or 49 modulo 55 is representable by exactly one of these quadrat |
https://en.wikipedia.org/wiki/Relation%20%28mathematics%29 | In mathematics, a relation on a set may, or may not, hold between two given set members.
For example, "is less than" is a relation on the set of natural numbers; it holds e.g. between 1 and 3 (denoted as 1<3) , and likewise between 3 and 4 (denoted as 3<4), but neither between 3 and 1 nor between 4 and 4.
As another example, "is sister of" is a relation on the set of all people, it holds e.g. between Marie Curie and Bronisława Dłuska, and likewise vice versa.
Set members may not be in relation "to a certain degree" - either they are in relation or they are not.
Formally, a relation R over a set X can be seen as a set of ordered pairs of members of X.
The relation R holds between x and y if (x, y) is a member of R.
For example, the relation "is less than" on the natural numbers is an infinite set of pairs of natural numbers that contains both (1, 3) and (3, 4), but neither (3, 1) nor (4, 4).
The relation "is a nontrivial divisor of" on the set of one-digit natural numbers is sufficiently small to be shown here:
; for example 2 is a nontrivial divisor of 8, but not vice versa, hence , but .
If R is a relation that holds for x and y, one often writes xRy. For most common relations in mathematics, special symbols are introduced, like "<" for "is less than", and "|" for "is a nontrivial divisor of", and, most popular of all, "=" for "is equal to". For example, "1<3", "1 is less than 3", and "(1, 3) ∈ <" all mean the same; some authors also write "(1, 3) ∈ (<)".
Various properties of relations are investigated.
A relation R is reflexive if xRx holds for all x, and irreflexive if xRx holds for no x.
It is symmetric if xRy always implies yRx, and asymmetric if xRy implies that yRx is impossible.
It is transitive if xRy and yRz always imply xRz.
For example, "is less than" is irreflexive, asymmetric, and transitive, but neither reflexive nor symmetric.
"is sister of" is transitive, but neither reflexive (e.g. Pierre Curie is not a sister of himself), nor symmetric, nor asymmetric; while being irreflexive or not may be a matter of definition (is every woman a sister of |
https://en.wikipedia.org/wiki/CII%20protein | cII or transcriptional activator II is a DNA-binding protein and important transcription factor in the life cycle of lambda phage. It is encoded in the lambda phage genome by the 291 base pair cII gene. cII plays a key role in determining whether the bacteriophage will incorporate its genome into its host and lie dormant (lysogeny), or replicate and kill the host (lysis).
Introduction
cII is the central “switchman” in the lambda phage bistable genetic switch, allowing environmental and cellular conditions to factor into the decision to lysogenize or to lyse its host. cII acts as a transcriptional activator of three promoters on the phage genome: pI, pRE, and pAQ. cII is an unstable protein with a half-life as short as 1.5 minutes at 37 °C, enabling rapid fluctuations in its concentration. First isolated in 1982, cII's function in lambda's regulatory network has been extensively studied.
Structure and properties
cII binds DNA as a tetramer, composed of identical 11 kDa subunits. Although the cII gene encodes 97 codons, the mature cII protein subunit contains only 95 amino acids due to post-translational cleavage of the first two amino acids (fMet and Val). cII is toxic to bacteria when overexpressed, as it inhibits DNA synthesis.
cII binds to a homologous region 35 base pairs upstream of the promoters pI, pRE and pAQ. Unlike other DNA binding proteins, cII recognizes a direct repeat sequence TTGCN₆TTGC rather than sequences that form palindromes. cII binds DNA roughly two orders of magnitude less strongly than the lambda repressor cI, and has a dissociation constant of ~80 nM. DNA binding is achieved using the common helix-turn-helix motif, located between residues 26 and 45. On either side of the DNA-binding domain are domains crucial for tetramer formation, located in residues 9–25 and 46–71. cII's inherent in vivo instability stems from a C-terminal degradation tag, consisting of residues 89–97. This tag is recognized by host proteases HflA and HflB, causing rapid pr |
https://en.wikipedia.org/wiki/Ordinal%20collapsing%20function | In mathematical logic and set theory, an ordinal collapsing function (or projection function) is a technique for defining (notations for) certain recursive large countable ordinals, whose principle is to give names to certain ordinals much larger than the one being defined, perhaps even large cardinals (though they can be replaced with recursively large ordinals at the cost of extra technical difficulty), and then "collapse" them down to a system of notations for the sought-after ordinal. For this reason, ordinal collapsing functions are described as an impredicative manner of naming ordinals.
The details of the definition of ordinal collapsing functions vary, and get more complicated as greater ordinals are being defined, but the typical idea is that whenever the notation system "runs out of fuel" and cannot name a certain ordinal, a much larger ordinal is brought "from above" to give a name to that critical point. An example of how this works will be detailed below, for an ordinal collapsing function defining the Bachmann–Howard ordinal (i.e., defining a system of notations up to the Bachmann–Howard ordinal).
The use and definition of ordinal collapsing functions is inextricably intertwined with the theory of ordinal analysis, since the large countable ordinals defined and denoted by a given collapse are used to describe the ordinal-theoretic strength of certain formal systems, typically subsystems of analysis (such as those seen in the light of reverse mathematics), extensions of Kripke–Platek set theory, Bishop-style systems of constructive mathematics or Martin-Löf-style systems of intuitionistic type theory.
Ordinal collapsing functions are typically denoted using some variation of either the Greek letter ψ (psi) or θ (theta).
An example leading up to the Bachmann–Howard ordinal
The choice of the ordinal collapsing function given as example below closely imitates the system introduced by Buchholz but is limited to collapsing one cardinal for clarity of ex |
https://en.wikipedia.org/wiki/Gheorghe%20Moro%C8%99anu | Gheorghe Moroșanu (born April 30, 1950, in Darabani, Botoșani County, Romania) is a Romanian mathematician known for his works in Ordinary and Partial Differential Equations, Nonlinear Analysis, Calculus of Variations, Fluid Mechanics, Asymptotic Analysis, Applied Mathematics.
He earned his Ph.D. in 1981 from the Alexandru Ioan Cuza University in Iași.
He is currently affiliated with the Babeș-Bolyai University in Cluj-Napoca. Between 2002 and 2020 he was a professor at the Central European University in Budapest (an international English-language university, accredited in the USA), after previously holding positions at the University of Stuttgart and Alexandru Ioan Cuza University.
Among several administrative positions, he served as chairman of the Mathematics Department of the Central European University from 2004 to 2012. In 2008 he was appointed egyetemi tanár (the highest academic title in Hungarian higher education) by the President of Hungary.
Before his university studies, during the 12-year period of education from primary to high school (1957-1969), Moroșanu was at the top of his class each academic year and demonstrated a keen interest in mathematics.
In 1969 he started studying mathematics at the Alexandru Ioan Cuza University in Iaşi. He was the first of his class of over 150 graduates to earn a Ph.D. His PhD thesis, entitled Qualitative Problems for Nonlinear Differential Equations of Accretive Type in Banach Spaces, included original results published in top-ranked journals, such as Atti della Accademia Nazionale dei Lincei, Journal of Differential Equations, Journal of Mathematical Analysis and Applications, Nonlinear Analysis, Numerical Functional Analysis and Optimization. In particular, Moroșanu solved in his thesis the problem of the existence and uniqueness of the solution of a hyperbolic differential system with nonlocal boundary conditions, thus correcting a paper by Viorel Barbu and Ioan I. Vrabie, which actually does not cover |
https://en.wikipedia.org/wiki/Pentane%20interference | Pentane interference or syn-pentane interaction is the steric hindrance that the two terminal methyl groups experience in one of the chemical conformations of n-pentane. The possible conformations are combinations of anti and gauche conformations: anti–anti, anti–gauche+, gauche+–gauche+ and gauche+–gauche−, of which the last is especially energetically unfavorable. In macromolecules such as polyethylene, pentane interference occurs between every fifth carbon atom. The 1,3-diaxial interactions of cyclohexane derivatives are a special case of this type of interaction, although there are additional gauche interactions shared between substituents and the ring in that case. A clear example of the syn-pentane interaction is apparent in the diaxial versus diequatorial heats of formation of cis-1,3-dialkyl cyclohexanes. Relative to the diequatorial conformer, the diaxial conformer is 2–3 kcal/mol higher in energy than the value that would be expected based on gauche interactions alone. Pentane interference helps explain molecular geometries in many chemical compounds, product ratios, and purported transition states. One specific type of syn-pentane interaction is known as 1,3-allylic strain (A1,3 strain).
For instance, in certain aldol adducts with 2,6-disubstituted aryl groups, the molecular geometry has the vicinal hydrogen atoms in an antiperiplanar configuration, both in a crystal lattice (X-ray diffraction) and in solution (proton NMR coupling constants), a configuration normally reserved for the most bulky groups, i.e. both arenes:
The other contributing factor explaining this conformation is reduction in allylic strain by minimizing the dihedral angle between the arene double bond and the methine proton.
Syn-pentane interactions are responsible for the backbone-conformation dependence of protein side chain rotamer frequencies and their mean dihedral angles, which is evident from statistical analysis of protein side-chain rotamers in the Backbone-depende |
https://en.wikipedia.org/wiki/Lacrimal%20punctum | The lacrimal punctum (plural: puncta) or lacrimal point is a minute opening on the summits of the lacrimal papillae, seen on the margins of the eyelids at the lateral extremity of the lacrimal lake. There are two lacrimal puncta in the medial (inside) portion of each eyelid. Normally, the puncta dip into the lacrimal lake.
Together, they function to collect tears produced by the lacrimal glands. The fluid is conveyed through the lacrimal canaliculi to the lacrimal sac, and thence via the nasolacrimal duct to the inferior nasal meatus of the nasal passage.
Additional images
See also
Imperforate lacrimal punctum
Lacrimal apparatus
Punctal plug |
https://en.wikipedia.org/wiki/Schur%20decomposition | In the mathematical discipline of linear algebra, the Schur decomposition or Schur triangulation, named after Issai Schur, is a matrix decomposition. It allows one to write an arbitrary complex square matrix as unitarily equivalent to an upper triangular matrix whose diagonal elements are the eigenvalues of the original matrix.
Statement
The Schur decomposition reads as follows: if A is an n × n square matrix with complex entries, then A can be expressed as
A = QUQ⁻¹
where Q is a unitary matrix (so that its inverse Q⁻¹ is also the conjugate transpose Q* of Q), and U is an upper triangular matrix, which is called a Schur form of A. Since U is similar to A, it has the same spectrum, and since it is triangular, its eigenvalues are the diagonal entries of U.
The Schur decomposition implies that there exists a nested sequence of A-invariant subspaces 0 = V₀ ⊂ V₁ ⊂ ⋯ ⊂ Vₙ = ℂⁿ, and that there exists an ordered orthonormal basis (for the standard Hermitian form of ℂⁿ) such that the first i basis vectors span Vᵢ for each i occurring in the nested sequence. Phrased somewhat differently, the first part says that a linear operator J on a complex finite-dimensional vector space stabilizes a complete flag.
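A quick numerical check of the statement (a sketch using SciPy's schur, which returns U and Q with A = Q U Q*):

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

U, Q = schur(A, output='complex')              # A = Q @ U @ Q.conj().T
assert np.allclose(A, Q @ U @ Q.conj().T)      # the decomposition holds
assert np.allclose(Q @ Q.conj().T, np.eye(4))  # Q is unitary
assert np.allclose(U, np.triu(U))              # U is upper triangular
# The diagonal of U carries the eigenvalues of A (in some order):
assert np.allclose(np.sort_complex(np.diag(U)),
                   np.sort_complex(np.linalg.eigvals(A)))
```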
Proof
A constructive proof for the Schur decomposition is as follows: every operator A on a complex finite-dimensional vector space has an eigenvalue λ, corresponding to some eigenspace Vλ. Let Vλ⊥ be its orthogonal complement. It is clear that, with respect to this orthogonal decomposition, A has matrix representation (one can pick here any orthonormal bases Z1 and Z2 spanning Vλ and Vλ⊥ respectively)
A = [ λIλ A12 ; 0 A22 ]
where Iλ is the identity operator on Vλ. The above matrix would be upper-triangular except for the A22 block. But exactly the same procedure can be applied to the sub-matrix A22, viewed as an operator on Vλ⊥, and its submatrices. Continue this way until the resulting matrix is upper triangular. Since each conjugation increases the dimension of the upper-triangular block by at least one, this process takes at most n |
https://en.wikipedia.org/wiki/MALDI%20imaging | MALDI mass spectrometry imaging (MALDI-MSI) is the use of matrix-assisted laser desorption ionization as a mass spectrometry imaging technique in which the sample, often a thin tissue section, is moved in two dimensions while the mass spectrum is recorded. Advantages, like measuring the distribution of a large amount of analytes at one time without destroying the sample, make it a useful method in tissue-based study.
Sample preparation
Sample preparation is a critical step in imaging spectroscopy. Scientists take thin tissue slices mounted on conductive microscope slides and apply a suitable MALDI matrix to the tissue, either manually or automatically. Next, the microscope slide is inserted into a MALDI mass spectrometer. The mass spectrometer records the spatial distribution of molecular species such as peptides, proteins or small molecules. Suitable image processing software can be used to import data from the mass spectrometer to allow visualization and comparison with the optical image of the sample. Recent work has also demonstrated the capacity to create three-dimensional molecular images using MALDI imaging technology and comparison of these image volumes to other imaging modalities such as magnetic resonance imaging (MRI).
Tissue preparation
The tissue samples must be preserved quickly in order to reduce molecular degradation. The first step is to freeze the sample by wrapping it and then submerging it in a cryogenic solution. Once frozen, the samples can be stored below −80 °C for up to a year.
When ready to be analyzed, the tissue is embedded in a gelatin media which supports the tissue while it is being cut, while reducing contamination that is seen in optimal cutting temperature compound (OCT) techniques. The mounted tissue section thickness varies depending on the tissue.
Tissue sections can then be thaw-mounted by placing the sample on the surface of a conductive slide that is of the same temperature, and then slowly warmed from below. The s |
https://en.wikipedia.org/wiki/Crossed%20beak | Crossed beak, also known as cross beak and scissor beak, is a congenital deformity in birds, particularly poultry, where the upper and lower parts of the beak do not align properly. This condition results in the beak crossing or growing in a way that prevents the bird from closing its beak properly. As a result, affected birds may have difficulty eating and performing other essential activities.
Identification
Crossed beak is a deformity characterized by a misalignment of the upper and lower beak, with one or both beaks exhibiting lateral deviation from the head's longitudinal axis.
The upper beak frequently exhibits horizontal bending at its base, alongside the affected mandible, and the skull, particularly the nasals and orbits, shows asymmetry.
Causes
Several factors, including incubation temperatures and variations, hereditary elements, developmental mishaps, bone morphogenetic protein 4, and management practices, have been identified as potential associations with or causes of crossed beak.
A 1966 study focused on non-hereditary factors associated with crossed beak identified fluctuations in incubator temperature, stemming from electrical disruptions, as a primary cause.
Recent studies have identified potential candidate genes for beak deformity in domestic chickens, with several belonging to the keratin family. In a breeding trial involving Appenzeller Barthuhn chickens, believed to have a high genetic predisposition for crossed beak, mating affected parent stock led to a notably higher proportion of offspring with deformed beaks compared to mating unaffected parents. Specifically, mating affected parents resulted in 13 (15.7%) offspring with crossed beak and 67 (80.7%) with normal beaks. Conversely, mating unaffected parents yielded 3 (2.9%) offspring with crossed beak and 95 (93.1%) with normal beaks. These findings suggest a potential hereditary link to beak deformations although the genetic basis of this condition remains unc |
https://en.wikipedia.org/wiki/Evolutionary%20psychology%20of%20religion | The evolutionary psychology of religion is the study of religious belief using evolutionary psychology principles. It is one approach to the psychology of religion. As with all other organs and organ functions, the brain's functional structure is argued to have a genetic basis, and is therefore subject to the effects of natural selection and evolution. Evolutionary psychologists seek to understand cognitive processes, religion in this case, by understanding the survival and reproductive functions they might serve.
Mechanisms of evolution
Scientists generally agree with the idea that a propensity to engage in religious behavior evolved early in human history. However, there is disagreement on the exact mechanisms that drove the evolution of the religious mind. There are two schools of thought. One is that religion itself evolved due to natural selection and is an adaptation, in which case religion conferred some sort of evolutionary advantage. The other is that religious beliefs and behaviors, such as the concept of a protogod, may have emerged as by-products of other adaptive traits without initially being selected for because of their own benefits. A third suggestion is that different aspects of religion require different evolutionary explanations but also that different evolutionary explanations may apply to several aspects of religion.
Religious behavior often involves significant costs—including economic costs, celibacy, dangerous rituals, or the expending of time that could be used otherwise. This would suggest that natural selection should act against religious behavior unless it or something else causes religious behavior to have significant advantages.
Religion as an adaptation
Richard Sosis and Candace Alcorta have reviewed several of the prominent theories for the adaptive value of religion. Many are "social solidarity theories", which view religion as having evolved to enhance cooperation and cohesion within groups. Group membership in turn provides |
https://en.wikipedia.org/wiki/Die%20shrink | The term die shrink (sometimes optical shrink or process shrink) refers to the scaling of metal–oxide–semiconductor (MOS) devices. The act of shrinking a die creates an essentially identical circuit using a more advanced fabrication process, usually involving an advance of lithographic nodes. This reduces overall costs for a chip company, as the absence of major architectural changes to the processor lowers research and development costs, while at the same time allowing more processor dies to be manufactured on the same piece of silicon wafer, resulting in a lower cost per product sold.
Die shrinks are the key to lower prices and higher performance at semiconductor companies such as Samsung, Intel, TSMC, and SK Hynix, and fabless manufacturers such as AMD (including the former ATI), NVIDIA and MediaTek.
Details
Examples in the 2000s include the downscaling of the PlayStation 2's Emotion Engine processor from Sony and Toshiba (from 180 nm CMOS in 2000 to 90 nm CMOS in 2003), the codenamed Cedar Mill Pentium 4 processors (from 90 nm CMOS to 65 nm CMOS) and Penryn Core 2 processors (from 65 nm CMOS to 45 nm CMOS), the codenamed Brisbane Athlon 64 X2 processors (from 90 nm SOI to 65 nm SOI), various generations of GPUs from both ATI and NVIDIA, and various generations of RAM and flash memory chips from Samsung, Toshiba and SK Hynix. In January 2010, Intel released Clarkdale Core i5 and Core i7 processors fabricated with a 32 nm process, down from a previous 45 nm process used in older iterations of the Nehalem processor microarchitecture. Intel, in particular, formerly focused on leveraging die shrinks to improve product performance at a regular cadence through its Tick-Tock model. In this business model, every new microarchitecture (tock) is followed by a die shrink (tick) to improve performance with the same microarchitecture.
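As a rough illustration of the wafer-economics argument, assuming die area scales with the square of the linear feature size (an idealization that ignores pad rings, yield, and design rules), a short Python sketch:

```python
# Back-of-the-envelope: how a die shrink increases dies per wafer.
# Assumes die area scales with the square of the process linear dimension.
old_node, new_node = 90, 65               # nm, as in the Brisbane example above
area_ratio = (new_node / old_node) ** 2   # ~0.52: the shrunk die is about half the area
print(f"new die area = {area_ratio:.2f} x old")
print(f"dies per wafer ~ {1 / area_ratio:.2f} x")   # ~1.9x more dies per wafer
```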
Die shrinks are beneficial to end-users as shrinking a die reduces the current used by each transistor switching on or off in semiconductor device |
https://en.wikipedia.org/wiki/Ammonifex%20degensii | Ammonifex degensii is a Gram-negative bacterium from the genus of Ammonifex which has been isolated from a volcanic hot spring from the Kawah Candradimuka crater, Dieng Plateau, Java, Indonesia. |
https://en.wikipedia.org/wiki/Magnesium%20lactate | Magnesium lactate, the magnesium salt of lactic acid, is a mineral supplement to prevent and treat low amounts of magnesium in the blood.
As a food additive, it has the E number E329 and is used in food and beverages as an acidity regulator. |
https://en.wikipedia.org/wiki/Liquid%20marbles | Liquid marbles are non-stick droplets (normally aqueous) wrapped by micro- or nano-metrically scaled hydrophobic, colloidal particles (Teflon, polyethylene, lycopodium powder, carbon black, etc.), representing a platform for a diversity of chemical and biological applications. Liquid marbles are also found naturally; aphids convert honeydew droplets into marbles. A variety of non-organic and organic liquids may be converted into liquid marbles. Liquid marbles demonstrate elastic properties and do not coalesce when bounced or pressed lightly. Liquid marbles show potential as micro-reactors, micro-containers for growing micro-organisms and cells, and microfluidic devices, and have even been used in unconventional computing. Liquid marbles remain stable on solid and liquid surfaces. The statics and dynamics of rolling and bouncing liquid marbles have been reported, as have liquid marbles coated with poly-disperse and mono-disperse particles. Liquid marbles are not hermetically sealed by their solid-particle coating but remain connected to the gaseous phase. The kinetics of the evaporation of liquid marbles has been investigated.
Interfacial water marbles
Liquid marbles were first reported by P. Aussillous and D. Quere in 2001, who described a new method to construct portable water droplets in the atmospheric environment with a hydrophobic surface coating that prevents contact between the water and the solid ground. Liquid marbles provide a new approach to transporting liquid on a solid surface, replacing rigid glass containers with a flexible, user-specified coating composed of hydrophobic powders. Since then, the applications of liquid marbles in no-loss mass transport, microfluidics and microreactors have been extensively investigated. However, liquid marbles only reflect the water behavior at the solid-air interface, while there is no report on the water behavior at the liquid-liquid interface, as a result |
https://en.wikipedia.org/wiki/Botanical%20Provinces%20of%20Western%20Australia | The botanical provinces of Western Australia (or Beard's Provinces) delineate "natural" phytogeographic regions of WA, based on climate and types of vegetation. John Stanley Beard, in "Plant Life of Western Australia" (p. 29-37), gives a short history of the various mappings.
In 1906, Ludwig Diels divided the state into an Eremaean Province and a South-West Province (together with further subdivisions), based on rainfall ranges, types of vegetation, and species' distributions (Beard, 2015: p. 30). In 1944, C.A. Gardner modified Diels' description, adding the Northern Province, which comprised the Kimberley and Pilbara districts. With Bennetts in 1956, he further refined this to give state-wide divisions.
Subsequent work by Beard and others gave the current set of provinces used by Florabase in its descriptions of plants. (See, for example, the entry where Parsonsia diaphanophleba is described as being found in Beard's South-West Province.)
Beard's provinces are:
Northern Province (comprising North Kimberley, Central Kimberley, East Kimberley and Dampierland.)
Eremaean Province (comprising Great Sandy Desert, Little Sandy Desert, Gibson Desert, Tanami, Nullarbor Region, Central Ranges, Great Victoria Desert, Murchison, Gascoyne, Pilbara and Carnarvon.)
South-Western Interzone (comprising the Coolgardie woodlands)
South-West Province (comprising Northern Sandplains, Wheat Belt Region, Mallee Region, Esperance Plains, Menzies, Warren, Dale and Drummond.)
Many of the subregions above have now been modified to give IBRA regions, among which are:
Warren (biogeographic region)
Mallee (biogeographic region)
See also
For other definitions and uses, see Regions of Western Australia, and Southwest Australia.
For the current set of Australian phytogeographic regions and subregions, see Interim Biogeographic Regionalisation for Australia (IBRA). |
https://en.wikipedia.org/wiki/Shrink-fitting | Shrink-fitting is a technique in which an interference fit is achieved by a relative size change after assembly. This is usually achieved by heating or cooling one component before assembly and allowing it to return to the ambient temperature after assembly, employing the phenomenon of thermal expansion to make a joint. For example, heating a piece of metallic drainpipe makes it expand, allowing a builder to fit a cooler piece into it; as the adjoined pieces reach the same temperature, the joint becomes strained and stronger.
Other examples are the fitting of a wrought iron tyre around the rim of a wooden cart wheel by a wheelwright, or of a steel tyre to the wheel of a railway engine or rolling stock. In both cases the tyre is heated until it expands to slightly greater than the wheel's diameter, and is fitted around it. After cooling, the tyre contracts, binding tightly in place. A common industrial method is induction shrink fitting, which uses induction heating technology to pre-heat metal components to between 150 °C and 300 °C, causing them to expand and allowing for the insertion or removal of another component. Other methods of shrink-fitting include compression shrink fitting, which uses a cryogen such as liquid nitrogen to cool the insert, and shape memory coupling, which is achieved by means of a phase transition.
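A minimal sketch of the sizing calculation behind such fits, using the linear thermal expansion relation ΔD = α·D·ΔT; the coefficient, diameters, and margins below are illustrative assumptions, not values from the source:

```python
# Estimate the temperature rise needed to shrink-fit a ring onto a shaft.
alpha = 12e-6         # linear expansion coefficient of steel, 1/K (approximate)
bore = 0.100          # ring bore diameter at ambient temperature, m (assumed)
interference = 50e-6  # diametral interference of the fit, m (assumed)
clearance = 30e-6     # extra clearance wanted for easy assembly, m (assumed)

# dD = alpha * D * dT  =>  dT = dD / (alpha * D)
delta_T = (interference + clearance) / (alpha * bore)
print(f"Required heating above ambient: {delta_T:.0f} K")  # ~67 K
```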
External links
Overview of interference fits
Industrial processes
Materials science |
https://en.wikipedia.org/wiki/Thoralf%20Skolem | Thoralf Albert Skolem (; 23 May 1887 – 23 March 1963) was a Norwegian mathematician who worked in mathematical logic and set theory.
Life
Although Skolem's father was a primary school teacher, most of his extended family were farmers. Skolem attended secondary school in Kristiania (later renamed Oslo), passing the university entrance examinations in 1905. He then entered Det Kongelige Frederiks Universitet to study mathematics, also taking courses in physics, chemistry, zoology and botany.
In 1909, he began working as an assistant to the physicist Kristian Birkeland, known for bombarding magnetized spheres with electrons and obtaining aurora-like effects; thus Skolem's first publications were physics papers written jointly with Birkeland. In 1913, Skolem passed the state examinations with distinction, and completed a dissertation titled Investigations on the Algebra of Logic. He also traveled with Birkeland to the Sudan to observe the zodiacal light. He spent the winter semester of 1915 at the University of Göttingen, at the time the leading research center in mathematical logic, metamathematics, and abstract algebra, fields in which Skolem eventually excelled. In 1916 he was appointed a research fellow at Det Kongelige Frederiks Universitet. In 1918, he became a Docent in Mathematics and was elected to the Norwegian Academy of Science and Letters.
Skolem did not at first formally enroll as a Ph.D. candidate, believing that the Ph.D. was unnecessary in Norway. He later changed his mind and submitted a thesis in 1926, titled Some theorems about integral solutions to certain algebraic equations and inequalities. His notional thesis advisor was Axel Thue, even though Thue had died in 1922.
In 1927, he married Edith Wilhelmine Hasvold.
Skolem continued to teach at Det kongelige Frederiks Universitet (renamed the University of Oslo in 1939) until 1930 when he became a Research Associate in Chr. Michelsen Institute in Bergen. This senior post allowed Skolem to condu |
https://en.wikipedia.org/wiki/Renegade%20III%3A%20The%20Final%20Chapter | Renegade III: The Final Chapter is a scrolling beat'em up video game released on the Amstrad CPC, Commodore 64, MSX, and ZX Spectrum in 1989 by Ocean Software under their "Imagine" label. The game is a sequel to Target: Renegade which itself is a sequel to the arcade game Renegade. Unlike the first two games, Renegade III follows the character known only as "Renegade" as he travels through time to rescue his captured girlfriend. It also dropped the two-player mode found in the previous title.
An Amiga version was developed but never released. It was leaked years later.
Reception
Though the game was highly praised by critics, receiving high scores in several prominent gaming publications, including 91% from Crash and 84% from C+VG, it was derided by fans, who criticised the game's storyline, lack of deep gameplay, weapons, glitches, poor physics and lack of a two-player option.
The Spanish magazine Microhobby rated the game with the following scores: Originality: 30%; Graphics: 80%; Motion: 80%; Sound: 80%; Difficulty: 100%; Addiction: 80% |
https://en.wikipedia.org/wiki/Anderson%20Jacobson | Anderson Jacobson, also known for a time as CXR Anderson Jacobson and today as CXR Networks, is a vendor of communications equipment. Anderson Jacobson was an early manufacturer of acoustic modems and was spun off from SRI International (then the Stanford Research Institute). In the 1970s and 1980s, the company manufactured modems, some intended for consumers. The company was acquired by CXR Telecom in 1988, at which time The Times was following Anderson Jacobson's earnings reports. The flow of new products continued.
Today the company is a privately owned communication equipment vendor supplying products to Telecom Carriers, Service Providers, and the Defense, Transport and Utility markets. The company is headquartered in Abondant, France.
History
Anderson Jacobson was primarily a California-based manufacturer of acoustic coupler modems, but they also manufactured printing terminals designed to replace teletypes.
Modems
Anderson Jacobson began early in 1967 as a manufacturer of one of the first acoustic data couplers. This technical advancement was a step beyond directly wiring to phone lines. By 1973, the company had acoustic coupler products that transmitted at 150, 300 and 1200 baud.
Terminals
Some of their terminals were CRTs and others were Printer/Keyboard devices.
Historical Table of Anderson Jacobson terminals
Among the terminals that were marketed by Anderson Jacobson are:
CXR
After the merger, industrial references varied, including "Anderson Jacobson (CXR)".
CXR was purchased by Emrise Corporation, an international manufacturer of defense and aerospace electronic devices and subsystems and telecommunications equipment, and, in 2016, was sold for 690,000 British pounds to its former chairman/CEO.
CXR, described as "manufactures network telecommunications equipment," was still operating as of 2017, albeit not in the areas for which AJ had begun in 1967.
See also
SRI International
List of SRI International spin-offs |
https://en.wikipedia.org/wiki/Slot%20%28computer%20architecture%29 | A slot comprises the operation issue and data path machinery surrounding a set of one or more execution units (also called functional units (FUs)) which share these resources. The term slot is common for this purpose in very long instruction word (VLIW) computers, where the relationship between an operation in an instruction and the pipeline that executes it is explicit. In dynamically scheduled machines, the concept is more commonly called an execute pipeline.
Modern conventional central processing units (CPU) have several compute pipelines, for example: two arithmetic logic units (ALU), one floating point unit (FPU), one Streaming SIMD Extensions (SSE) (such as MMX), one branch. Each of them can issue one instruction per basic instruction cycle, but can have several instructions in process. These are what correspond to slots. The pipelines may have several FUs, such as an adder and a multiplier, but only one FU in a pipeline can be issued to in a given cycle. The FU population of a pipeline (slot) is a design option in a CPU. |
https://en.wikipedia.org/wiki/Probabilistic%20automaton | In mathematics and computer science, the probabilistic automaton (PA) is a generalization of the nondeterministic finite automaton; it includes the probability of a given transition into the transition function, turning it into a transition matrix. Thus, the probabilistic automaton also generalizes the concepts of a Markov chain and of a subshift of finite type. The languages recognized by probabilistic automata are called stochastic languages; these include the regular languages as a subset. The number of stochastic languages is uncountable.
The concept was introduced by Michael O. Rabin in 1963; a certain special case is sometimes known as the Rabin automaton (not to be confused with the subclass of ω-automata also referred to as Rabin automata). In recent years, a variant has been formulated in terms of quantum probabilities, the quantum finite automaton.
Informal Description
For a given initial state and input character, a deterministic finite automaton (DFA) has exactly one next state, and a nondeterministic finite automaton (NFA) has a set of next states. A probabilistic automaton (PA) instead has a weighted set (or vector) of next states, where the weights must sum to 1 and therefore can be interpreted as probabilities (making it a stochastic vector). The notions of state and acceptance must also be modified to reflect the introduction of these weights. The state of the machine at a given step must now also be represented by a stochastic vector of states, and a state is accepted if its total probability of being in an acceptance state exceeds some cut-off.
A PA is in some sense a half-way step from deterministic to non-deterministic, as it allows a set of next states but with restrictions on their weights. However, this is somewhat misleading, as the PA utilizes the notion of the real numbers to define the weights, which is absent in the definition of both DFAs and NFAs. This additional freedom enables them to decide languages that are not regular, such as the |
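A minimal sketch of this acceptance rule in Python, with illustrative transition probabilities and cut-off (none of the numbers come from the source):

```python
import numpy as np

# Row-stochastic transition matrices for a two-state PA over the alphabet {"a", "b"}.
P = {
    "a": np.array([[0.5, 0.5],
                   [0.0, 1.0]]),
    "b": np.array([[1.0, 0.0],
                   [0.3, 0.7]]),
}
initial = np.array([1.0, 0.0])    # start in state 0 with probability 1
accepting = np.array([0.0, 1.0])  # state 1 is the accepting state

def acceptance_probability(word: str) -> float:
    """Propagate the stochastic state vector through each symbol's matrix."""
    state = initial
    for symbol in word:
        state = state @ P[symbol]
    return float(state @ accepting)

# A word is in the stochastic language iff its probability exceeds the cut-off.
cutoff = 0.5
p = acceptance_probability("ab")
print(p, p > cutoff)  # 0.35 False
```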
https://en.wikipedia.org/wiki/NAS%20Award%20in%20Molecular%20Biology | The NAS Award in Molecular Biology is awarded by the U.S. National Academy of Sciences "for recent notable discovery in molecular biology by a young scientist who is a citizen of the United States." It has been awarded annually since its inception in 1962.
List of NAS Award in Molecular Biology winners
Source: NAS
1962 Marshall Nirenberg for his studies of the molecular mechanisms for the biosynthesis of protein.
1963 Matthew Meselson for his leading role in developing and applying methods to measure the transmission of genetic information in the cell.
1964 Charles Yanofsky for his achievements in demonstrating how changes in the gene produce changes in the way protein is made in the body.
1965 Robert Stuart Edgar for his development and application of the method of "conditional lethal mutants" for the analysis of the genetic control of morpho-genesis at the molecular level.
1966 Norton D. Zinder for his discovery of RNA bacteriophages, a new class of bacteria-attacking viruses, which have provided researchers with a highly valuable and convenient method of studying fundamental processes in all living cells.
1967 Robert W. Holley for his elucidation of the full sequence of nucleotides in the molecule of a soluble RNA.
1968 Walter Gilbert for his signal contribution to the understanding of the regulatory mechanisms operative in genetic control of protein synthesis.
1969 for his genetic dissection of the mechanism of assembly of the bacterial virus particle and reconstruction of the virus in vitro.
1970 A. Dale Kaiser for his discovery that pure phage lambda DNA can infect susceptible bacterial cells and produce progeny, and for the effect of this discovery on the whole field of bacterial virus genetics.
1971 Masayasu Nomura for his studies on the structure and function of ribosomes and their molecular components.
1972 Howard M. Temin for his work leading to the discovery of reverse transcription.
1973 Donald D. Brown for his studies of the structure, regul |
https://en.wikipedia.org/wiki/Hall%20effect%20sensor | A Hall effect sensor (or simply Hall sensor) is a type of sensor which detects the presence and magnitude of a magnetic field using the Hall effect. The output voltage of a Hall sensor is directly proportional to the strength of the field. It is named for the American physicist Edwin Hall.
Hall sensors are used for proximity sensing, positioning, speed detection, and current sensing applications. Frequently, a Hall sensor is combined with threshold detection to act as a binary switch. Commonly seen in industrial applications such as the pictured pneumatic cylinder, they are also used in consumer equipment; for example, some computer printers use them to detect missing paper and open covers. Some 3D printers use them to measure filament thickness.
Hall sensors are commonly used to time the speed of wheels and shafts, such as for internal combustion engine ignition timing, tachometers and anti-lock braking systems. They are used in brushless DC electric motors to detect the position of the permanent magnet. In the pictured wheel with two equally spaced magnets, the voltage from the sensor peaks twice for each revolution. This arrangement is commonly used to regulate the speed of disk drives.
Principles
In a Hall sensor, a current is applied to a thin strip of metal. In the presence of a magnetic field perpendicular to the direction of the current, the charge carriers are deflected by the Lorentz force, producing a difference in electric potential (voltage) between the two sides of the strip. This voltage difference (the Hall voltage) is proportional to the strength of the magnetic field.
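A worked instance of the standard relation $V_H = IB/(nqt)$ behind this proportionality, with illustrative values for a copper strip (the numbers are assumptions, not from the source):

```python
# Hall voltage for a thin conducting strip: V_H = I * B / (n * q * t)
I = 1.0        # current through the strip, A (assumed)
B = 0.5        # flux density perpendicular to the strip, T (assumed)
n = 8.5e28     # charge-carrier density of copper, 1/m^3
q = 1.602e-19  # elementary charge, C
t = 1e-4       # strip thickness, m (assumed)

V_H = I * B / (n * q * t)
print(f"Hall voltage: {V_H:.2e} V")  # ~3.7e-7 V
# The tiny output for a metal is why practical sensors use semiconductors,
# whose much lower carrier density n gives a much larger V_H.
```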
Hall effect sensors respond to static (non-changing) magnetic fields, in contrast to inductive sensors, which respond only to changes in fields.
Characteristics
Hall sensors are capable of measuring a wide range of magnetic fields, and are sensitive to both the magnitude and orientation of the field. When used as electronic switches, they are less prone to mechanical failur |
https://en.wikipedia.org/wiki/Robert%20Falla%20Memorial%20Award | The Robert Falla Memorial Award (sometimes referred to as the Falla Award) is granted by the Ornithological Society of New Zealand to people "who have made a significant contribution to both the Ornithological Society of New Zealand and to New Zealand ornithology".
It was set up in memory of Robert Falla after his death in 1979, using contributions from a public appeal. The first award was made in 1981, but for the first few years awards were made for the preceding year. In some years no award is made.
Recipients
1981: Ross McKenzie
1982: Archie Blackburn
1983: A.T. Edgar
1984: R.B. Sibson
1985: Maida Barlow
1986: Peter Child (posthumous)
1987: Peter Bull
1989: Graham Turbott
1990: Barrie Heather
1992: Beth Brown
1995: Paul Sagar
1997: David Crockett
1999: Hugh Robertson
2011: Ralph Powlesland
2014: Nick Allen
2018: David Melville
2019: Andrew Crossland
See also
List of ornithology awards |
https://en.wikipedia.org/wiki/List%20of%20atmospheric%20optical%20phenomena | Atmospheric optical phenomena include:
Afterglow
Airglow
Alexander's band, the dark region between the two bows of a double rainbow.
Alpenglow
Anthelion
Anticrepuscular rays
Aurora
Auroral light (northern and southern lights, aurora borealis and aurora australis)
Belt of Venus
Brocken Spectre
Circumhorizontal arc
Circumzenithal arc
Cloud iridescence
Crepuscular rays
Earth's shadow
Earthquake lights
Glories
Green flash
Halos, of Sun or Moon, including sun dogs
Haze
Heiligenschein or halo effect, partly caused by the opposition effect
Ice blink
Light pillar
Lightning
Mirages (including Fata Morgana)
Monochrome rainbow
Moon dog
Moonbow
Nacreous cloud/Polar stratospheric cloud
Rainbow
Subsun
Sun dog
Tangent arc
Tyndall effect
Upper-atmospheric lightning, including red sprites, blue jets, and ELVES
Water sky
See also |
https://en.wikipedia.org/wiki/Pierre%20Rosenstiehl | Pierre Rosenstiehl (5 December 1933 – 28 October 2020) was a French mathematician recognized for his work in graph theory, planar graphs, and graph drawing.
The Fraysseix–Rosenstiehl planarity criterion is the basis of the left-right planarity algorithm implemented in the Pigale software, which is considered the fastest implemented planarity-testing algorithm.
Rosenstiehl was directeur d’études at the École des Hautes Études en Sciences Sociales in Paris, before his retirement. He was a founding co-editor in chief of the European Journal of Combinatorics. Rosenstiehl, Giuseppe Di Battista, Peter Eades and Roberto Tamassia organized in 1992 at Marino (Italy) a meeting devoted to graph drawing which initiated a long series of international conferences, the International Symposia on Graph Drawing.
He has been a member of the French literary group Oulipo since 1992. He married the French author and illustrator Agnès Rosenstiehl. |
https://en.wikipedia.org/wiki/Caraga%20candy%20poisonings | Almost 2,000 people, mostly schoolchildren from the Caraga region of the Philippines, experienced food poisoning after consuming durian-, mangosteen-, and mango-flavored candies in 2015. The Food and Drug Administration of the Philippines confirmed that the sweets were contaminated by Staphylococcus, a bacterium commonly found on human skin. The cause was suspected to be accidental bacterial contamination by vendors, who had repackaged the candy.
Victims
Most of the victims of the food poisoning incident were schoolchildren within the Caraga Region. Victims reported experiencing symptoms such as diarrhea, dizziness, and stomachache. The cases were reported by at least nine health facilities based in Surigao del Sur, Surigao del Norte and Agusan del Sur. At least 10 people were hospitalized.
The first cases were reported in Cagwait, Surigao del Sur, on the morning of July 10.
Food poisoning symptoms were reported in the following towns:
Surigao del Sur
Carrascal
Cagwait
Cortes
Lianga
San Agustin
Madrid
Marihatag
Tago
Tandag
Surigao del Norte
Placer
Surigao City
Agusan del Sur
Bayugan
Response
Acting Mayor Paolo Duterte of Davao City ordered an urgent investigation on July 10 regarding the matter to determine the exact cause of the candy contamination incident.
On July 11, 2015, the Department of Health in the Caraga declared a food poisoning outbreak in the region. Hospitals across the Caraga Region were put into white alert in response to the incident.
Investigation
The Food and Drug Administration (FDA) conducted microbiological tests on the samples of the contaminated candies. The FDA had suspected that the candies were contaminated by E. coli, Salmonella or staphylococcus based on the reported symptoms by victims of the food poisoning incident. They announced that the candy samples tested positive for staphylococcus aureus.
The FDA traced the contaminated candies' origin to two manufacturing facilities in Davao City |
https://en.wikipedia.org/wiki/Helical%20Dirac%20fermion | A Helical Dirac fermion is a charge carrier that behaves as a massless relativistic particle with its intrinsic spin locked to its translational momentum. |
https://en.wikipedia.org/wiki/Graduate%20Texts%20in%20Mathematics | Graduate Texts in Mathematics (GTM) () is a series of graduate-level textbooks in mathematics published by Springer-Verlag. The books in this series, like the other Springer-Verlag mathematics series, are yellow books of a standard size (with variable numbers of pages). The GTM series is easily identified by a white band at the top of the book.
The books in this series tend to be written at a more advanced level than the similar Undergraduate Texts in Mathematics series, although there is a fair amount of overlap between the two series in terms of material covered and difficulty level.
List of books
Introduction to Axiomatic Set Theory, Gaisi Takeuti, Wilson M. Zaring (1982, 2nd ed.)
Measure and Category – A Survey of the Analogies between Topological and Measure Spaces, John C. Oxtoby (1980, 2nd ed.)
Topological Vector Spaces, H. H. Schaefer, M. P. Wolff (1999, 2nd ed.)
A Course in Homological Algebra, Peter Hilton, Urs Stammbach (1997, 2nd ed.)
Categories for the Working Mathematician, Saunders Mac Lane (1998, 2nd ed.)
Projective Planes, Daniel R. Hughes, Fred C. Piper (1982)
A Course in Arithmetic, Jean-Pierre Serre (1996)
Axiomatic Set Theory, Gaisi Takeuti, Wilson M. Zaring (1973)
Introduction to Lie Algebras and Representation Theory, James E. Humphreys (1997)
A Course in Simple-Homotopy Theory, Marshall M. Cohen (1973)
Functions of One Complex Variable I, John B. Conway (1978, 2nd ed.)
Advanced Mathematical Analysis, Richard Beals (1973)
Rings and Categories of Modules, Frank W. Anderson, Kent R. Fuller (1992, 2nd ed.)
Stable Mappings and Their Singularities, Martin Golubitsky, Victor Guillemin (1974)
Lectures in Functional Analysis and Operator Theory, Sterling K. Berberian (1974)
The Structure of Fields, David J. Winter (1974)
Random Processes, Murray Rosenblatt (1974)
Measure Theory, Paul R. Halmos (1974)
A Hilbert Space Problem Book, Paul R. Halmos (1982, 2nd ed.)
Fibre Bundles, Dale Husemoller (1994, |
https://en.wikipedia.org/wiki/Dihydroorotate%20dehydrogenase | Dihydroorotate dehydrogenase (DHODH) is an enzyme that in humans is encoded by the DHODH gene on chromosome 16. The protein encoded by this gene catalyzes the fourth enzymatic step, the ubiquinone-mediated oxidation of dihydroorotate to orotate, in de novo pyrimidine biosynthesis. This protein is a mitochondrial protein located on the outer surface of the inner mitochondrial membrane (IMM). Inhibitors of this enzyme are used to treat autoimmune diseases such as rheumatoid arthritis.
Structure
DHODH can vary in cofactor content, oligomeric state, subcellular localization, and membrane association. An overall sequence alignment of these DHODH variants presents two classes of DHODHs: the cytosolic Class 1 and the membrane-bound Class 2. In Class 1 DHODH, a basic cysteine residue catalyzes the oxidation reaction, whereas in Class 2, a serine serves this catalytic function. Structurally, Class 1 DHODHs can also be divided into two subclasses, one of which forms homodimers and uses fumarate as its electron acceptor, and the other which forms heterotetramers and uses NAD+ as its electron acceptor. This second subclass contains an additional subunit (PyrK) containing an iron-sulfur cluster and a flavin adenine dinucleotide (FAD). Meanwhile, Class 2 DHODHs use coenzyme Q/ubiquinones as their oxidant.
In higher eukaryotes, this class of DHODH contains an N-terminal bipartite signal comprising a cationic, amphipathic mitochondrial targeting sequence of about 30 residues and a hydrophobic transmembrane sequence. The targeting sequence is responsible for this protein's localization to the IMM, possibly from recruiting the import apparatus and mediating ΔΨ-driven transport across the inner and outer mitochondrial membranes, while the transmembrane sequence is essential for its insertion into the IMM. This sequence is adjacent to a pair of α-helices, α1 and α2, which are connected by a short loop. Together, this pair forms a hydrophobic funnel that is suggested to serve as t |
https://en.wikipedia.org/wiki/Electron-transferring-flavoprotein%20dehydrogenase | Electron-transferring-flavoprotein dehydrogenase (ETF dehydrogenase or electron transfer flavoprotein-ubiquinone oxidoreductase, ) is an enzyme that transfers electrons from electron-transferring flavoprotein in the mitochondrial matrix, to the ubiquinone pool in the inner mitochondrial membrane. It is part of the electron transport chain. The enzyme is found in both prokaryotes and eukaryotes and contains a flavin and FE-S cluster. In humans, it is encoded by the ETFDH gene. Deficiency in ETF dehydrogenase causes the human genetic disease multiple acyl-CoA dehydrogenase deficiency.
Function
ETF-QO links the oxidation of fatty acids and some amino acids to oxidative phosphorylation in the mitochondria. Specifically, it catalyzes the transfer of electrons from electron transferring flavoprotein (ETF) to ubiquinone, reducing it to ubiquinol. The entire sequence of transfer reactions is as follows:
Acyl-CoA → Acyl-CoA dehydrogenase → ETF → ETF-QO → UQ → Complex III.
Catalyzed reaction
The overall reaction catalyzed by ETF-QO is as follows:
ETF-QO(red) + ubiquinone ↔ ETF-QO(ox) + ubiquinol
Enzymatic activity is usually assayed spectrophotometrically by reaction with octanoyl-CoA as the electron donor and ubiquinone-1 as the electron acceptor. The enzyme can also be assayed via disproportionation of ETF semiquinone. Both reactions are below:
Octanoyl-CoA + Q1 ↔ Q1H2 + Oct-2-enoyl-CoA
2 ETF1- ↔ ETFox + ETF2-
Structure
ETF-QO consists of one structural domain with three functional domains packed in close proximity: a FAD domain, a 4Fe4S cluster domain, and a UQ-binding domain. FAD is in an extended conformation and is buried deeply within its functional domain. Multiple hydrogen bonds and a positive helix dipole modulate the redox potential of FAD and can possibly stabilize the anionic semiquinone intermediate. The 4Fe4S cluster is also stabilized by extensive hydrogen bonding around the cluster and its cysteine components. Ubiquinone binding is achieved t |
https://en.wikipedia.org/wiki/BD%20%28company%29 | Becton, Dickinson and Company, also known as BD, is an American multinational medical technology company that manufactures and sells medical devices, instrument systems, and reagents. BD also provides consulting and analytics services in certain geographies.
BD is ranked #177 in the 2021 Fortune 500 list based on its revenues for the fiscal year ending September 30, 2020.
In October 2014, the company agreed to acquire CareFusion for a price of US$12.2 billion in cash and stock.
In April 2017, C. R. Bard announced that it would be acquired by Becton, Dickinson and Company (BD). The transaction was completed later that year, and the company became a wholly-owned subsidiary of BD, rebranded as Bard.
History
The company was founded in 1897 in New York City by Maxwell Becton and Fairleigh S. Dickinson. It later moved its headquarters to New Jersey.
In 2004, BD agreed to pay out US$100 million to settle allegations from competitor Retractable Technologies that it had engaged in anti-competitive behavior to prevent the distribution of Retractable's syringes, which are designed to prevent needlestick injury. The lawsuit touched off a series of legal conflicts between the companies. Retractable would accuse BD of patent infringement after BD released a retractable needle of its own. Later Retractable would claim BD was falsely advertising its own brand of retractable needle as the “world’s sharpest needle”.
Finances
For the fiscal year 2017, Becton Dickinson reported earnings of US$1.030 billion, with an annual revenue of US$12.093 billion, an increase of 10.5% over the previous fiscal cycle. Becton Dickinson's shares traded at over US$192 per share, and its market capitalization was valued at over US$63 billion in November 2018.
Business segments
Currently there are three business segments.
BD Medical
In certain places, BD Medical also offers consulting and analytics related services. BD Medical's Consulting services are primarily targeted at hospitals, healthca |
https://en.wikipedia.org/wiki/StrongDM | StrongDM is an American technology company that develops an infrastructure access platform.
History
StrongDM was founded in 2015 by Elizabeth Zalman, Justin McCarthy, and Schuyler Brown. The company was one of the first female-led startups backed by Hearst's initiative to invest in women-led startups. The company received an early investment of $250,000 from HearstLabs.
In 2018, the company released free open source software called Comply which allows smaller organizations to implement SOC 2 in an open source environment. Any organization can download a pre-authored library of 24 policies, edit directly in markdown, track versions with Github, assign compliance tasks through Jira and monitor progress in a unified dashboard.
Elizabeth Zalman continued as CEO until 2021 when Tim Prendergast was appointed new CEO.
As of 2023, the company had received investments from investors such as Bloomberg Beta and Tiger Global. Douglas Leone joined the board as part of an investment by Sequoia Capital.
Software
The software is a privileged access manager for aggregating secure access and permissions. It centralizes backend infrastructure access for legacy or multi-cloud environments. The software also integrates with identity providers, secret stores, and SIEM tools.
The platform provides and manages user access to backend infrastructure like servers and databases, and logs user actions with video replay.
Similar software is produced by CyberArk, Delinea, BeyondTrust, Teleport and Perimeter 81.
Fundraising
Founding CEO Elizabeth Zalman is one of the first women CEOs in Silicon Valley to raise over $100M in venture capital. She raised over $76M as CEO of StrongDM.
Seed - $800,000 in 2015 from Bloomberg Beta, Data Collective, SocialStarts, Jerry Neumann
Seed II, $3,000,000 in 2016 from True Ventures, Bloomberg Beta, Laconia Capital Group, Social Starts, Jerry Neumann
Series A, $17,000,000 in 2020 from Sequoia
Series B, $54,000,000 in 2021 fro |
https://en.wikipedia.org/wiki/RISC-V | RISC-V (pronounced "risk-five") is an open standard instruction set architecture (ISA) based on established reduced instruction set computer (RISC) principles. Unlike most other ISA designs, RISC-V is provided under royalty-free open-source licenses. A number of companies are offering or have announced RISC-V hardware; open source operating systems with RISC-V support are available, and the instruction set is supported in several popular software toolchains.
As a RISC architecture, the RISC-V ISA is a load–store architecture. Its floating-point instructions use IEEE 754 floating-point. Notable features of the RISC-V ISA include: instruction bit field locations chosen to simplify the use of multiplexers in a CPU, a design that is architecturally neutral, and a fixed location for the sign bit of immediate values to speed up sign extension.
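A minimal sketch of the fixed-sign-bit property in Python: in the base encoding, the I-type immediate occupies instruction bits 31..20, so its sign bit is always instruction bit 31 (the example encoding below is illustrative):

```python
def i_type_immediate(inst: int) -> int:
    """Decode and sign-extend the 12-bit immediate of an I-type instruction."""
    imm = (inst >> 20) & 0xFFF  # bits 31..20
    if imm & 0x800:             # immediate bit 11 = instruction bit 31
        imm -= 0x1000           # sign-extend from 12 bits
    return imm

# addi x1, x0, -5: immediate -5 is 0xFFB in 12-bit two's complement
inst = (0xFFB << 20) | (1 << 7) | 0x13  # rs1=x0, funct3=0, rd=x1, opcode=0x13
print(i_type_immediate(inst))  # -5
```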
The instruction set is designed for a wide range of uses. The base instruction set has a fixed length of 32-bit naturally aligned instructions, and the ISA supports variable length extensions where each instruction can be any number of 16-bit parcels in length. Subsets support small embedded systems, personal computers, supercomputers with vector processors, and warehouse-scale 19 inch rack-mounted parallel computers.
The instruction set specification defines 32-bit and 64-bit address space variants. The specification includes a description of a 128-bit flat address space variant, as an extrapolation of 32 and 64 bit variants, but the 128-bit ISA remains "not frozen" intentionally, because there is yet so little practical experience with such large memory systems.
Unlike other academic designs which are typically optimized only for simplicity of exposition, the designers intended that the RISC-V instruction set be usable for practical computers. As of June 2019, version 2.2 of the user-space ISA and version 1.11 of the privileged ISA are frozen, permitting software and hardware development to proceed. The user-space ISA, now re |
https://en.wikipedia.org/wiki/Sri%20Sarma | Sridevi Sarma (born 1972) is an American biomedical and electrical engineer known for her work in applying control theory to improve therapies for neurological disorders such as Parkinson's disease and epilepsy. She is vice dean for graduate education of the Johns Hopkins University Whiting School of Engineering, associate director of the Johns Hopkins Institute for Computational Medicine, and an associate professor in the Johns Hopkins Department of Biomedical Engineering.
Early life and education
Sarma did her undergraduate studies at Cornell University where she received a BS in Electrical Engineering in 1994. She received her SM and PhD degrees in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 1997 and 2006. From 2000 to 2003 she took a leave of absence to start a data analytics company. She was a postdoctoral fellow in the MIT Department of Brain and Cognitive Science from 2006 to 2009.
Work
Sarma joined the Johns Hopkins Department of Biomedical Engineering as a professor in 2009. She was appointed as associate director of the Johns Hopkins Institute for Computational Medicine in 2017, and vice dean of graduate education for the JHU Whiting School of Engineering in 2019. She is best known for her research combining learning theory and control systems with neuroscience to create translational work aimed at improving therapies for neurological disorders, including Parkinson's disease (PD) and epilepsy. Sarma has conducted research using control theoretic tools that provided an explanation of how deep brain stimulation (DBS) therapy works for PD.
Sarma has participated in the National Geographic TV series, Brain Games.
Awards and honors
L'Oreal For Women in Science fellow (2008)
National Science Foundation CAREER Award (2011)
Presidential Early Career Award for Scientists and Engineers (2012)
North American Neuromodulation Society Krishna Kumar New Investigator Award (2014)
Whiting School of Engineering |
https://en.wikipedia.org/wiki/Proper%20transfer%20function | In control theory, a proper transfer function is a transfer function in which the degree of the numerator does not exceed the degree of the denominator. A strictly proper transfer function is a transfer function where the degree of the numerator is less than the degree of the denominator.
The difference between the degree of the denominator (number of poles) and degree of the numerator (number of zeros) is the relative degree of the transfer function.
Example
The following transfer function:
$$\mathbf{G}(s) = \frac{\mathbf{N}(s)}{\mathbf{D}(s)} = \frac{s^{4} + n_{3}s^{3} + n_{2}s^{2} + n_{1}s + n_{0}}{s^{4} + d_{3}s^{3} + d_{2}s^{2} + d_{1}s + d_{0}}$$
is proper, because
$\deg(\mathbf{N}(s)) = 4 \leq \deg(\mathbf{D}(s)) = 4$,
is biproper, because
$\deg(\mathbf{N}(s)) = 4 = \deg(\mathbf{D}(s)) = 4$,
but is not strictly proper, because
$\deg(\mathbf{N}(s)) = 4 \not< \deg(\mathbf{D}(s)) = 4$.
The following transfer function is not proper (or strictly proper)
$$\mathbf{G}(s) = \frac{\mathbf{N}(s)}{\mathbf{D}(s)} = \frac{s^{4} + n_{3}s^{3} + n_{2}s^{2} + n_{1}s + n_{0}}{d_{3}s^{3} + d_{2}s^{2} + d_{1}s + d_{0}}$$
because
$\deg(\mathbf{N}(s)) = 4 > \deg(\mathbf{D}(s)) = 3$.
A transfer function that is not proper can be made proper by using the method of long division.
The following transfer function is strictly proper
$$\mathbf{G}(s) = \frac{\mathbf{N}(s)}{\mathbf{D}(s)} = \frac{n_{2}s^{2} + n_{1}s + n_{0}}{s^{4} + d_{3}s^{3} + d_{2}s^{2} + d_{1}s + d_{0}}$$
because
$\deg(\mathbf{N}(s)) = 2 < \deg(\mathbf{D}(s)) = 4$.
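A minimal sketch of the degree test used in these examples, operating on coefficient lists given highest power first (assumes nonzero leading coefficients; illustrative only):

```python
def classify(num: list, den: list) -> str:
    """Classify a rational transfer function by comparing polynomial degrees."""
    deg_n, deg_d = len(num) - 1, len(den) - 1
    if deg_n < deg_d:
        return "strictly proper"
    if deg_n == deg_d:
        return "biproper (hence proper)"
    return "not proper"

print(classify([1, 2], [1, 3, 2]))     # strictly proper
print(classify([1, 0, 1], [1, 1, 1]))  # biproper (hence proper)
print(classify([1, 0, 0, 1], [1, 2]))  # not proper
```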
Implications
A proper transfer function will never grow unbounded as the frequency approaches infinity:
$$|\mathbf{G}(\pm j\infty)| = k < \infty$$
A strictly proper transfer function will approach zero as the frequency approaches infinity (which is true for all physical processes):
$$\mathbf{G}(\pm j\infty) = 0$$
Also, the integral of the real part of a strictly proper transfer function is zero. |
https://en.wikipedia.org/wiki/Sorbitan%20monostearate | Sorbitan monostearate is an ester of sorbitan (a sorbitol derivative) and stearic acid and is sometimes referred to as a synthetic wax.
Uses
Sorbitan monostearate is used in the manufacture of food and healthcare products as a non-ionic surfactant with emulsifying, dispersing, and wetting properties. It is also employed to create synthetic fibers, metal machining fluid, and as a brightener in the leather industry. Sorbitans are also known as "Spans".
Sorbitan monostearate has been approved by the European Union for use as a food additive (emulsifier) (E number: E 491). It is also approved for use by the British Pharmacopoeia.
See also
Polysorbate
Sorbitan tristearate (Span 65) |
https://en.wikipedia.org/wiki/Moving%20magnet%20and%20conductor%20problem | The moving magnet and conductor problem is a famous thought experiment, originating in the 19th century, concerning the intersection of classical electromagnetism and special relativity. In it, the current in a conductor moving with constant velocity, v, with respect to a magnet is calculated in the frame of reference of the magnet and in the frame of reference of the conductor. The observable quantity in the experiment, the current, is the same in either case, in accordance with the basic principle of relativity, which states: "Only relative motion is observable; there is no absolute standard of rest". However, according to Maxwell's equations, the charges in the conductor experience a magnetic force in the frame of the magnet and an electric force in the frame of the conductor. The same phenomenon would seem to have two different descriptions depending on the frame of reference of the observer.
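A minimal sketch of the two descriptions, to first order in $v/c$ (a standard textbook summary, not taken verbatim from the source): in the magnet frame the force on a charge $q$ moving with the conductor is purely magnetic,
$$\mathbf{F} = q\,\mathbf{v} \times \mathbf{B},$$
while in the conductor frame the charge is at rest and the force is purely electric, with
$$\mathbf{E}' \approx \mathbf{v} \times \mathbf{B}, \qquad \mathbf{F}' = q\,\mathbf{E}' \approx q\,\mathbf{v} \times \mathbf{B}.$$
Both frames therefore predict the same force on the charges, and hence the same current.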
This problem, along with the Fizeau experiment, the aberration of light, and more indirectly the negative aether drift tests such as the Michelson–Morley experiment, formed the basis of Einstein's development of the theory of relativity.
Introduction
Einstein's 1905 paper that introduced the world to relativity opens with a description of the magnet/conductor problem.
An overriding requirement on the descriptions in different frameworks is that they be consistent. Consistency is an issue because Newtonian mechanics predicts one transformation (so-called Galilean invariance) for the forces that drive the charges and cause the current, while electrodynamics as expressed by Maxwell's equations predicts that the fields that give rise to these forces transform differently (according to Lorentz invariance). Observations of the aberration of light, culminating in the Michelson–Morley experiment, established the validity of Lorentz invariance, and the development of special relativity resolved the resulting disagreement with Newtonian mechanics. Special relativity revised th |
https://en.wikipedia.org/wiki/XLink%20Kai | XLink Kai is a program developed by Team XLink allowing for online play of video games with support for LAN multiplayer modes. It enables players on the GameCube, Nintendo Switch, PlayStation 2, PlayStation 3, PlayStation 4, PlayStation Portable, PlayStation Vita / PlayStation TV, Xbox, Xbox 360, and Xbox One to play games across the Internet using a network configuration that simulates a local area network (LAN). It notably also allows original Xbox games to be played online again following the Xbox Live shutdown on 21 April 2010 (similar to that of Save Nintendo Wi-Fi for the Wii) and certain GameSpy titles such as Saints Row 2 to be played online after the GameSpy network shutdown on 31 May 2014.
Summary
The purpose of the software is to allow consoles to network with each other over the internet via the consoles' "local network play" capabilities. XLink Kai acts as tunneling software, installed to a compatible Microsoft Windows, macOS, or Linux computer on the same network as the console. Upon the console initiating a game's "network play" feature, the console's requests are routed to the computer. XLink, listening for these requests, allows other consoles to be found over the internet during this search, making it appear to the player's console that these other consoles are simply connected to the local network.
For modified ("modded") Xbox consoles, much of the functionality can be provided directly within the Xbox Media Center (XBMC for Xbox) GUI. The Kai client is still required to be running on a computer on the user's network, but players can control connections directly through the console. It is also possible to run the Kai client on other Linux-based devices, such as Raspberry Pi or NAS devices.
Usage
Users log onto XLink's servers using an XTag username, similar to a "Gamertag" for Xbox Live. XLink has "Arenas" for each compatible System Link game, with more popular games such as Halo 2 and SOCOM II having sub-arenas based on regions within them |
https://en.wikipedia.org/wiki/Weak%20derivative | In mathematics, a weak derivative is a generalization of the concept of the derivative of a function (strong derivative) for functions not assumed differentiable, but only integrable, i.e., to lie in the Lp space $L^1([a,b])$.
The method of integration by parts holds that for differentiable functions $u$ and $\varphi$ we have
$$\int_a^b u(x)\varphi'(x)\,dx = \Big[u(x)\varphi(x)\Big]_a^b - \int_a^b u'(x)\varphi(x)\,dx.$$
A function u' being the weak derivative of u is essentially defined by the requirement that this equation must hold for all infinitely differentiable functions $\varphi$ vanishing at the boundary points ($\varphi(a) = \varphi(b) = 0$).
Definition
Let $u$ be a function in the Lebesgue space $L^1([a,b])$. We say that $v \in L^1([a,b])$ is a weak derivative of $u$ if
$$\int_a^b u(t)\varphi'(t)\,dt = -\int_a^b v(t)\varphi(t)\,dt$$
for all infinitely differentiable functions $\varphi$ with $\varphi(a) = \varphi(b) = 0$.
Generalizing to $n$ dimensions, if $u$ and $v$ are in the space $L^1_{\mathrm{loc}}(U)$ of locally integrable functions for some open set $U \subseteq \mathbb{R}^n$, and if $\alpha$ is a multi-index, we say that $v$ is the $\alpha$-weak derivative of $u$ if
$$\int_U u\, D^{\alpha}\varphi\,dx = (-1)^{|\alpha|} \int_U v\,\varphi\,dx$$
for all $\varphi \in C_c^{\infty}(U)$, that is, for all infinitely differentiable functions $\varphi$ with compact support in $U$. Here $D^{\alpha}\varphi$ is defined as
$$D^{\alpha}\varphi = \frac{\partial^{|\alpha|}\varphi}{\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}}.$$
If $u$ has a weak derivative, it is often written $D^{\alpha}u$, since weak derivatives are unique (at least, up to a set of measure zero, see below).
Examples
The absolute value function $u(t) = |t|$, which is not differentiable at $t = 0$, has a weak derivative $v$ known as the sign function, given by
$$v(t) = \begin{cases} 1, & t > 0 \\ 0, & t = 0 \\ -1, & t < 0. \end{cases}$$
This is not the only weak derivative for u: any w that is equal to v almost everywhere is also a weak derivative for u. (In particular, the definition of v(0) above is superfluous and can be replaced with any desired real number r.) Usually, this is not a problem, since in the theory of Lp spaces and Sobolev spaces, functions that are equal almost everywhere are identified.
The characteristic function of the rational numbers $1_{\mathbb{Q}}$ is nowhere differentiable yet has a weak derivative. Since the Lebesgue measure of the rational numbers is zero,
$$\int 1_{\mathbb{Q}}(t)\varphi(t)\,dt = 0.$$
Thus $v(t) = 0$ is a weak derivative of $1_{\mathbb{Q}}$. Note that this does agree with our intuition since when considered as a member of an Lp space, $1_{\mathbb{Q}}$ is identified with the zero function.
The Cantor function c does not have a weak derivative, despite |
https://en.wikipedia.org/wiki/Cell%20fusion | Cell fusion is an important cellular process in which several uninucleate cells (cells with a single nucleus) combine to form a multinucleate cell, known as a syncytium. Cell fusion occurs during differentiation of myoblasts, osteoclasts and trophoblasts, during embryogenesis, and morphogenesis. Cell fusion is a necessary event in the maturation of cells so that they maintain their specific functions throughout growth.
History
In 1847 Theodor Schwann expanded upon the theory that all living organisms are composed of cells, adding that discrete cells are the basis of life. Schwann observed that in certain cells the walls and cavities of the cells coalesce together. It was this observation that provided the first hint that cells fuse.
It was not until 1960 that cell biologists deliberately fused cells for the first time. To fuse the cells, biologists combined isolated mouse cells of the same tissue type and induced fusion of their outer membranes using the Sendai virus (a respiratory virus in mice). Each of the fused hybrid cells contained a single nucleus with chromosomes from both fusion partners. This type of cell, with a single combined nucleus, became known as a synkaryon.
In the late 1960s biologists successfully fused cells of different types and from different species. The hybrid products of these fusions, heterokaryons, maintained two or more separate nuclei. This work was headed by Henry Harris at the University of Oxford and Nils Ringertz from Sweden's Karolinska Institute. These two men are responsible for reviving interest in cell fusion. The hybrid cells interested biologists in the question of how different kinds of cytoplasm affect different kinds of nuclei. The work conducted by Harris and Ringertz showed that proteins from one fusion partner affect gene expression in the other partner's nucleus, and vice versa. These hybrid cells that were created were considered forced exceptions to normal cellular integrity and it was not unti |
https://en.wikipedia.org/wiki/Ericson-Ericson%20Lorentz-Lorenz%20correction | Ericson-Ericson Lorentz-Lorenz correction, also called the Ericson-Ericson Lorentz-Lorenz effect (EELL), refers to an analogy in the interface between nuclear, atomic and particle physics, which in its simplest form corresponds to the well known Lorentz-Lorenz equation (also referred to as the Clausius-Mossotti relation) for electromagnetic waves and light in a refractive medium.
These relations link the macroscopic quantities such as the refractive index to the dipole polarization of the individual atoms or molecules. When the latter are kept apart the polarizing field is no longer the average electric field in the medium. Similarly for the pion, the lightest meson and the carrier of the long range part of the nuclear force, its typical non-relativistic scattering for individual nucleons has a dominant dipole structure with a known average dipole polarizability of strength ("the average scattering volume").
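For reference, the classical Lorentz-Lorenz (Clausius-Mossotti) relation on which the analogy is built can be written, in Gaussian units with $n$ the refractive index, $N$ the number density of scatterers, and $\alpha$ the dipole polarizability:
$$\frac{n^{2} - 1}{n^{2} + 2} = \frac{4\pi}{3}\, N \alpha.$$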
The physics becomes closely similar although the nuclear density is about 15 orders of magnitude larger than that of ordinary matter and the nature of the dipole interaction is totally different.
The correction was predicted in 1963 by Magda Ericson and was derived in 1966 together with Torleif Ericson. The effect has since been re-derived in various ways, but is now understood as a general effect as long as the nucleons keep their individuality, independent of the detailed cause. This is the reason why in the molecular case of the classical Lorentz-Lorenz effect so many incompatible derivations give the same result. The EELL correction was first applied to the line shifts of hydrogen-like atoms, where the electron in the Coulomb field is replaced by a negatively charged pion. Its interaction with the central nucleus causes deviations in the line positions in such Bohr-like atoms.
The effect has greatly influenced the understanding of the pion-nucleus many-body problem by realizing that the scattering of a pion from a nucleon in nuclear matter is determine |
https://en.wikipedia.org/wiki/Torsion%20%28algebra%29 | In mathematics, specifically in ring theory, a torsion element is an element of a module that yields zero when multiplied by some non-zero-divisor of the ring. The torsion submodule of a module is the submodule formed by the torsion elements. A torsion module is a module that equals its torsion submodule. A module is torsion-free if its torsion submodule comprises only the zero element.
This terminology is more commonly used for modules over a domain, that is, when the regular elements of the ring are all its nonzero elements.
This terminology applies to abelian groups (with "module" and "submodule" replaced by "group" and "subgroup"). This is allowed by the fact that abelian groups are exactly the modules over the ring of integers (in fact, this is the origin of the terminology, which was introduced for abelian groups before being generalized to modules).
In the case of groups that are noncommutative, a torsion element is an element of finite order. Contrary to the commutative case, the torsion elements do not form a subgroup, in general.
Definition
An element m of a module M over a ring R is called a torsion element of the module if there exists a regular element r of the ring (an element that is neither a left nor a right zero divisor) that annihilates m, i.e., $r\,m = 0$.
In an integral domain (a commutative ring without zero divisors), every non-zero element is regular, so a torsion element of a module over an integral domain is one annihilated by a non-zero element of the integral domain. Some authors use this as the definition of a torsion element, but this definition does not work well over more general rings.
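A concrete instance of the definition (a standard example, not from the source): in the $\mathbb{Z}$-module $\mathbb{Z}/6\mathbb{Z}$,
$$3 \cdot \overline{2} = \overline{6} = \overline{0},$$
so $\overline{2}$ is a torsion element; indeed $6 \cdot \overline{m} = \overline{0}$ for every element, so $\mathbb{Z}/6\mathbb{Z}$ is a torsion module, while $\mathbb{Z}$ itself is torsion-free.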
A module M over a ring R is called a torsion module if all its elements are torsion elements, and torsion-free if zero is the only torsion element. If the ring R is commutative then the set of all torsion elements forms a submodule of M, called the torsion submodule of M, sometimes denoted T(M). If R is not commutative, T(M) may or may not be a submodule. I |