id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
13,344,873 | https://en.wikipedia.org/wiki/Tasco | Tasco (also known as Tasco Worldwide) sells consumer telescopes. Tasco mainly imports telescopes for amateur astronomers but has expanded into other optical products, such as spotting scopes, microscopes, binoculars, telescopic sights, and other rifle accessories. Tasco sells via retail stores, catalogs, and online retailers. Tasco is based in Miramar, Florida. George Rosenfield founded the firm as the Tanross Supply Company in 1954. It started as a distributor of fishing tackle and hardware. The name was later shortened to Tasco as its offerings expanded to include binoculars and eyepieces.
Products
Telescopes
Tasco's telescopes have a reputation as entry-level equipment. It is one of several companies advertising their products based on claims of high magnification, far beyond any attainable usable magnification. Tasco's telescopes tend to be referred to as "department store telescopes." They are generally considered low-cost instruments that appeal to impulse buyers.
Binoculars
Tasco imports binoculars with magnifications ranging between seven and ten power. They also offer Snapshot series binoculars, which can record video and capture still pictures as seen through the binoculars. Users can transfer images to a computer via a USB cable. Tasco provides software for viewing and printing photographs taken on its devices.
Gun sights
Tasco imports telescopic sights for rifles and handguns, featuring magnifications of 1 to 40 power. They also import non-magnifying red dot sights.
Spotting scopes
Tasco offers several spotting scopes. These scopes are designed for rugged outdoor use and feature rubber armor protection as well as optional camouflage. Models have magnifications between 12 and 45 times, and feature panoramic view finding.
History
Tasco was founded by George Rosenfield in 1954. In March 1996, Rosenfield sold the business. At that time, Tasco employed 160 people at its Florida headquarters, and maintained a location in the state of Washington, which employed another 40.
In June 1998, Tasco purchased Celestron, another telescope manufacturer which focused on performance optical equipment and the more serious observer. Celestron was second only to Meade Instruments Corporation in sales of telescopes.
Early in 2001, Tasco began searching for a buyer as profits sank. Meade Instruments Corporation began negotiations for a merger, but the Federal Trade Commission blocked the attempt.
By June 2002, Wind Point Partners, then the parent company of Bushnell Performance Optics, had purchased the Tasco brand and all of the company's intellectual property.
In July 2007, Wind Point Partners sold Bushnell Performance Optics along with Tasco property and sales rights to MidOcean Partners, a private equity firm based in New York and London.
On September 5, 2013, Alliant Techsystems announced it had entered into a definitive agreement to acquire Bushnell. Under the terms of the transaction, ATK paid $985 million in cash, subject to customary post-closing adjustments.
ATK spun off Vista Outdoor upon closing its merger with Orbital Sciences and became Orbital ATK on February 9, 2015. Anyone holding ATK common stock at the end of the business day on February 2, 2015, received two shares of Vista Outdoor common stock. Eligible shareholders had their brokerage account credited or received a book-entry account statement reflecting their ownership, so Vista Outdoor was initially 100% owned by ATK shareholders. Vista Outdoor stock traded on a "when-issued" basis from January 29, 2015 to February 9, 2015, and began "regular way" trading on the New York Stock Exchange on February 10, 2015 under the ticker symbol "VSTO." No payment or action of any kind was required of shareholders. The transaction was conducted on a tax-free basis: shareholders subject to American taxes generally did not have to recognize a gain or loss for federal tax purposes.
Litigation
On May 29, 2002, Tasco Worldwide initiated liquidation of all its assets after defaulting on nearly $30 million in loans.
The company had been searching for a buyer for several months, and Meade Instruments Corporation, the leading manufacturer of performance telescopes in the United States, had shown considerable interest. On this day, however, the Federal Trade Commission obtained a temporary restraining order in federal district court to preempt any attempt by Meade to purchase all or certain assets of Tasco Holdings, Inc., including Celestron, a subsidiary and the number-two performance telescope provider in the U.S. The FTC argued that an acquisition of Celestron by Meade would negatively impact the performance telescope market by eliminating significant competition between the two companies and by creating a monopoly in the market for Schmidt-Cassegrain telescopes, which at the time were sold in the U.S. only by Celestron and Meade.
Later in 2002, Tasco and Celestron, by then under the ownership of Bushnell Performance Optics, filed suit in federal district court in California, alleging that Meade products infringed a United States patent entitled "Tripod Structure for Telescopes." Both companies sought injunctive relief, compensatory damages in an unspecified amount, and attorneys' fees and costs. In December 2002, the District Court denied the motions of both parties.
References
External links
Companies based in Miami
Telescope manufacturers
Technology companies established in 1954
1954 establishments in Florida
Optics manufacturing companies
2002 mergers and acquisitions | Tasco | [
"Astronomy"
] | 1,073 | [
"Telescope manufacturers",
"People associated with astronomy"
] |
13,345,478 | https://en.wikipedia.org/wiki/Schilder%27s%20theorem | In mathematics, Schilder's theorem is a generalization of the Laplace method from integrals on Rn to functional Wiener integration. The theorem is used in the large deviations theory of stochastic processes. Roughly speaking, from Schilder's theorem one obtains an estimate for the probability that a (scaled-down) sample path of Brownian motion will stray far from the mean path (which is constant with value 0). This statement is made precise using rate functions. Schilder's theorem is generalized by the Freidlin–Wentzell theorem for Itō diffusions.
Statement of the theorem
Let C0 = C0([0, T]; Rd) be the Banach space of continuous functions ω : [0, T] → Rd such that ω(0) = 0, equipped with the supremum norm ||⋅||∞, and let H1 ⊆ C0 be the subspace of absolutely continuous functions whose derivative is in L2 (the so-called Cameron–Martin space). Define the rate function

I(ω) = (1/2) ∫₀ᵀ |ω̇(t)|² dt

on H1 and let F, G : C0 → R be two given functions, such that F + I (the "action") has a unique minimum ω* ∈ H1.
Then, under some differentiability and growth assumptions on F and G which are detailed in Schilder 1966, one has a Laplace-type asymptotic expansion for Wiener integrals of the form E[G(√ε ω) exp(−F(√ε ω)/ε)] as ε ↓ 0: the leading behaviour is governed by the value of the action at the minimum ω*, with a prefactor involving the Hessian of the action at ω*. Here E denotes expectation with respect to the Wiener measure W on C0, and the Hessian is meant in the sense of an inner product.
Application to large deviations on the Wiener measure
Let B be a standard Brownian motion in d-dimensional Euclidean space Rd starting at the origin, 0 ∈ Rd; let W denote the law of B, i.e. classical Wiener measure. For ε > 0, let Wε denote the law of the rescaled process √ε B. Then, on the Banach space C0 = C0([0, T]; Rd) of continuous functions ω such that ω(0) = 0, equipped with the supremum norm ||⋅||∞, the probability measures Wε satisfy the large deviations principle with good rate function I : C0 → R ∪ {+∞} given by

I(ω) = (1/2) ∫₀ᵀ |ω̇(t)|² dt

if ω is absolutely continuous with derivative in L2, and I(ω) = +∞ otherwise. In other words, for every open set G ⊆ C0 and every closed set F ⊆ C0,

lim inf_{ε→0} ε log Wε(G) ≥ −inf_{ω∈G} I(ω)

and

lim sup_{ε→0} ε log Wε(F) ≤ −inf_{ω∈F} I(ω).
Example
Taking ε = 1/c², one can use Schilder's theorem to obtain estimates for the probability that a standard Brownian motion B strays further than c from its starting point over the time interval [0, T], i.e. the probability

P(||B||∞ > c),

as c tends to infinity. Here Bc(0; ||⋅||∞) denotes the open ball of radius c about the zero function in C0, taken with respect to the supremum norm. First note that

P(||B||∞ > c) = W(C0 \ Bc(0; ||⋅||∞)) = Wε(A), where A = C0 \ B1(0; ||⋅||∞).

Since the rate function is continuous on A, Schilder's theorem yields

lim_{c→∞} (1/c²) log P(||B||∞ > c) = −inf { I(ω) : ω ∈ A } = −1/(2T),

making use of the fact that the infimum over paths in the collection A is attained by the straight-line path ω(t) = t x / T for a unit vector x. This result can be heuristically interpreted as saying that, for large c and/or small T,

P(||B||∞ > c) ≈ exp(−c²/(2T)).

In fact, the above probability can be estimated more precisely: for a standard Brownian motion in Rd, and any c > 0 and T > 0, an explicit Gaussian upper bound of the same exponential order can be obtained, for example from the reflection principle.
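The large-deviations estimate above can be checked numerically. The following is a minimal Monte Carlo sketch (an illustration written for this article, not from the cited sources), assuming d = 1 and T = 1; it compares the empirical value of (1/c²) log P(||B||∞ > c) with the predicted rate −1/(2T). Schilder's theorem fixes only the exponential rate, so the two columns agree up to sub-exponential prefactors.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 500, 20_000
dt = T / n_steps

def exit_probability(c):
    """Monte Carlo estimate of P(sup_{0<=t<=T} |B_t| > c) for 1-d Brownian motion."""
    increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    paths = np.cumsum(increments, axis=1)          # discretized Brownian paths
    return np.mean(np.abs(paths).max(axis=1) > c)

for c in (1.0, 1.5, 2.0):
    p = exit_probability(c)
    # epsilon = 1/c^2, so (1/c^2) log P should approach -1/(2T) as c grows
    print(f"c={c}: (1/c^2) log P = {np.log(p) / c**2:+.3f}   prediction: {-1 / (2 * T):+.3f}")
```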
References
(See theorem 5.2)
Asymptotic analysis
Theorems regarding stochastic processes
Large deviations theory | Schilder's theorem | [
"Mathematics"
] | 649 | [
"Theorems about stochastic processes",
"Theorems in probability theory",
"Mathematical analysis",
"Asymptotic analysis"
] |
13,345,571 | https://en.wikipedia.org/wiki/Sammon%20mapping | Sammon mapping or Sammon projection is an algorithm that maps a high-dimensional space to a space of lower dimensionality (see multidimensional scaling) by trying to preserve the structure of inter-point distances in high-dimensional space in the lower-dimension projection.
It is particularly suited for use in exploratory data analysis.
The method was proposed by John W. Sammon in 1969.
It is considered a non-linear approach, as the mapping cannot be represented as a linear combination of the original variables in the way that is possible in techniques such as principal component analysis; this also makes it more difficult to use for classification applications.
Denote the distance between the ith and jth objects in the original space by d*ij, and the distance between their projections by dij. Sammon's mapping aims to minimize the following error function, which is often referred to as Sammon's stress or Sammon's error:

E = (1 / Σi<j d*ij) Σi<j (d*ij − dij)² / d*ij.
The minimization can be performed either by gradient descent, as proposed initially, or by other means, usually involving iterative methods.
The number of iterations needs to be experimentally determined and convergent solutions are not always guaranteed.
Many implementations prefer to use the first principal components as a starting configuration.
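As a concrete illustration of the iterative minimization described above, here is a short, self-contained Python sketch (an illustration written for this article, not Sammon's original pseudo-Newton scheme): it minimizes Sammon's stress by plain gradient descent from a random linear projection of the data (PCA output could be substituted, as noted above).

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def sammon(X, n_components=2, n_iter=500, lr=1.0, eps=1e-9, seed=0):
    """Project X (n x p) to n_components dimensions by gradient descent on Sammon's stress."""
    D = squareform(pdist(X))                  # d*_ij: distances in the original space
    scale = 1.0 / np.triu(D, 1).sum()         # 1 / sum_{i<j} d*_ij
    np.fill_diagonal(D, 1.0)                  # dummy value; diagonal terms never contribute
    rng = np.random.default_rng(seed)
    Y = X @ rng.normal(size=(X.shape[1], n_components))   # random linear projection as init
    for _ in range(n_iter):
        d = squareform(pdist(Y))              # d_ij: distances in the projection
        np.fill_diagonal(d, 1.0)
        ratio = (D - d) / (d * D + eps)
        # gradient of the stress with respect to each projected point y_i;
        # plain gradient descent, so the step size may need tuning per dataset
        grad = -2.0 * scale * np.einsum('ij,ijk->ik', ratio, Y[:, None, :] - Y[None, :, :])
        Y -= lr * grad
    return Y

# Example: map three 4-d Gaussian clusters into the plane
X = np.vstack([np.random.default_rng(s).normal(3 * s, 0.3, (40, 4)) for s in range(3)])
Y = sammon(X)
```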
The Sammon mapping has been one of the most successful nonlinear metric multidimensional scaling methods since its advent in 1969, but effort has been focused on algorithm improvement rather than on the form of the stress function.
The performance of the Sammon mapping has been improved by extending its stress function using left Bregman divergence
and right Bregman divergence.
See also
Prefrontal cortex basal ganglia working memory
State–action–reward–state–action
Constructing skill trees
References
External links
HiSee – an open-source visualizer for high dimensional data
A C# based program with code on CodeProject.
Matlab code and method introduction
Functions and mappings
Dimension reduction | Sammon mapping | [
"Mathematics"
] | 380 | [
"Mathematical analysis",
"Mathematical relations",
"Mathematical objects",
"Functions and mappings"
] |
13,345,577 | https://en.wikipedia.org/wiki/Equity%20method | Equity method in accounting is the process of treating investments in associate companies. Equity accounting is usually applied where an investor entity holds 20–50% of the voting stock of the associate company and therefore has significant influence over the latter's management. Under International Financial Reporting Standards, the equity method is also required in accounting for joint ventures. The investor records such investments as an asset on its balance sheet. The investor's proportional share of the associate company's net income increases the investment (and a net loss decreases the investment), and proportional payments of dividends decrease it. In the investor's income statement, the proportional share of the investee's net income or loss is reported as a single line item.
Equity accounting may also be appropriate where the investor has a smaller interest, depending on the nature of the actual relationship between the investor and investee. Control of the investee, usually through ownership of more than 50% of voting stock, results in recognition of a subsidiary, whose financial statements must be consolidated with the parent's. The ownership of less than 20% creates an investment position, carried at historic book or fair market value (if available for sale or held for trading) in the investor's balance sheet.
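To make the bookkeeping described above concrete, here is a small worked example in Python (the ownership fraction and amounts are hypothetical, chosen only for illustration):

```python
def equity_method_carrying_amount(cost, share, investee_net_income, dividends_paid):
    """Carrying amount of an equity-method investment after one reporting period.

    share -- investor's ownership fraction, e.g. 0.30 for a 30% stake.
    The investor recognizes its share of the investee's profit or loss in income,
    while dividends received are treated as a return of the investment.
    """
    income_pickup = share * investee_net_income   # increases the investment (a loss decreases it)
    dividends_received = share * dividends_paid   # decreases the investment
    return cost + income_pickup - dividends_received

# Hypothetical: a 30% stake bought for 300,000; the investee earns 100,000
# and distributes 20,000 in dividends during the period.
print(equity_method_carrying_amount(300_000, 0.30, 100_000, 20_000))  # 324000.0
```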
See also
Business valuation
Enterprise value
Minority interest
References
Further reading
External links
IAS 28 INVESTMENTS IN ASSOCIATES
Mergers and acquisitions
Accounting systems
Financial accounting | Equity method | [
"Technology"
] | 257 | [
"Information systems",
"Accounting systems"
] |
13,345,788 | https://en.wikipedia.org/wiki/Freidlin%E2%80%93Wentzell%20theorem | In mathematics, the Freidlin–Wentzell theorem (due to Mark Freidlin and Alexander D. Wentzell) is a result in the large deviations theory of stochastic processes. Roughly speaking, the Freidlin–Wentzell theorem gives an estimate for the probability that a (scaled-down) sample path of an Itō diffusion will stray far from the mean path. This statement is made precise using rate functions. The Freidlin–Wentzell theorem generalizes Schilder's theorem for standard Brownian motion.
Statement
Let B be a standard Brownian motion on Rd starting at the origin, 0 ∈ Rd, and let Xε be an Rd-valued Itō diffusion solving an Itō stochastic differential equation of the form

dXεt = b(Xεt) dt + √ε dBt,   Xε0 = 0,

where the drift vector field b : Rd → Rd is uniformly Lipschitz continuous. Then, on the Banach space C0 = C0([0, T]; Rd) equipped with the supremum norm ||⋅||∞, the family of processes (Xε)ε>0 satisfies the large deviations principle with good rate function I : C0 → R ∪ {+∞} given by

I(ω) = (1/2) ∫₀ᵀ |ω̇(t) − b(ω(t))|² dt

if ω lies in the Sobolev space H1([0, T]; Rd), and I(ω) = +∞ otherwise. In other words, for every open set G ⊆ C0 and every closed set F ⊆ C0,

lim inf_{ε→0} ε log P(Xε ∈ G) ≥ −inf_{ω∈G} I(ω)

and

lim sup_{ε→0} ε log P(Xε ∈ F) ≤ −inf_{ω∈F} I(ω).
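The content of the theorem can be illustrated numerically. Below is a minimal Euler-Maruyama sketch (an illustration, not from the references), assuming d = 1 and the uniformly Lipschitz drift b(x) = −x: as ε decreases, the probability that the sample path strays a fixed distance from the noiseless solution (here the constant path 0) decays rapidly, as the large deviations principle predicts.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_steps = 1.0, 1000
dt = T / n_steps
b = lambda x: -x              # assumed drift for this sketch; any Lipschitz b works

def max_deviation(eps, n_paths=20_000):
    """Simulate dX = b(X) dt + sqrt(eps) dB, X_0 = 0, and track sup_t |X_t|."""
    x = np.zeros(n_paths)
    dev = np.zeros(n_paths)
    for _ in range(n_steps):
        x += b(x) * dt + np.sqrt(eps * dt) * rng.normal(size=n_paths)
        dev = np.maximum(dev, np.abs(x))   # deviation from the noiseless path x(t) = 0
    return dev

for eps in (0.5, 0.1, 0.02):
    print(f"eps={eps}: P(sup_t |X_t| > 0.5) ~ {np.mean(max_deviation(eps) > 0.5):.4f}")
```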
References
(See chapter 5.6)
Asymptotic analysis
Stochastic differential equations
Theorems in statistics
Large deviations theory
Probability theorems | Freidlin–Wentzell theorem | [
"Mathematics"
] | 327 | [
"Mathematical theorems",
"Theorems in statistics",
"Mathematical analysis",
"Theorems in probability theory",
"Asymptotic analysis",
"Mathematical problems"
] |
13,345,968 | https://en.wikipedia.org/wiki/Virtual%20black%20hole | In quantum gravity, a virtual black hole is a hypothetical micro black hole that exists temporarily as a result of a quantum fluctuation of spacetime. It is an example of quantum foam and is the gravitational analog of the virtual electron–positron pairs found in quantum electrodynamics. Theoretical arguments suggest that virtual black holes should have mass on the order of the Planck mass, lifetime around the Planck time, and occur with a number density of approximately one per Planck volume.
The emergence of virtual black holes at the Planck scale is a consequence of the uncertainty relation

ΔR Δx ≥ ℓP² = ħG/c³,

where ΔR is the uncertainty in the radius of curvature of a small domain of spacetime, Δx is the uncertainty in the coordinate of the small domain, ℓP is the Planck length, ħ is the reduced Planck constant, G is the Newtonian constant of gravitation, and c is the speed of light. These uncertainty relations are another form of Heisenberg's uncertainty principle at the Planck scale.
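For orientation, the Planck-scale quantities mentioned above can be evaluated directly. The short sketch below (illustrative only) computes the Planck length, time, and mass from ħ, G, and c, and the number density corresponding to one virtual black hole per Planck volume:

```python
import math

hbar = 1.054_571_817e-34   # reduced Planck constant, J s
G    = 6.674_30e-11        # Newtonian constant of gravitation, m^3 kg^-1 s^-2
c    = 2.997_924_58e8      # speed of light, m/s

l_P = math.sqrt(hbar * G / c**3)   # Planck length, ~1.6e-35 m
t_P = math.sqrt(hbar * G / c**5)   # Planck time,   ~5.4e-44 s
m_P = math.sqrt(hbar * c / G)      # Planck mass,   ~2.2e-8 kg

print(f"l_P = {l_P:.3e} m, t_P = {t_P:.3e} s, m_P = {m_P:.3e} kg")
print(f"one per Planck volume ~ {1 / l_P**3:.3e} per cubic metre")
```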
If virtual black holes exist, they provide a mechanism for proton decay. A black hole's mass increases when mass falls into the hole and is theorized to decrease when Hawking radiation is emitted from the hole, but the elementary particles emitted are, in general, not the same as those that fell in. Therefore, if two of a proton's constituent quarks fall into a virtual black hole, it is possible for an antiquark and a lepton to emerge, thus violating conservation of baryon number.
The existence of virtual black holes aggravates the black hole information loss paradox, as any physical process may potentially be disrupted by interaction with a virtual black hole.
See also
Quantum foam
Virtual particle
Quantum tunnelling
References
Further reading
Quantum gravity
Black holes | Virtual black hole | [
"Physics",
"Astronomy"
] | 341 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Unsolved problems in physics",
"Astrophysics",
"Quantum gravity",
"Density",
"Stellar phenomena",
"Astronomical objects",
"Physics beyond the Standard Model"
] |
13,346,199 | https://en.wikipedia.org/wiki/Human%20knot | A human knot is a common icebreaker game or team building activity for new people to learn to work together in physical proximity.
The knot is a disentanglement puzzle in which a group of people in a circle each hold hands with two people who are not next to them, and the goal is to disentangle the limbs to get the group into a circle, without letting go of grasped hands. Instead, group members should step over or under arms to try to untangle the knot. Not all human knots are solvable, as can be shown in knot theory (see unknotting problem); an arrangement may remain knotted or may end up as two or more separate circles.
An easy way to ensure that the game will end with a single circle and no knots is to start from a circle of people all holding hands and facing the center of the circle, and then ask some of them to cross their arms, swapping the left hand with the right hand while grasping the same neighbors again; this corresponds to a Reidemeister move and thus preserves the solvability of the knot. When the game is successfully completed, a certain number of people will appear to be outside of the circle; this number equals the number of people with crossed arms. The challenge is to solve the game several times, starting each time with an increasing number of crossed people. To increase the difficulty level, players can be blindfolded or required to play silently (no talking).
The game is recommended for children from 12 years and up, and is best suited to a group of ten or so players, although it can be played with as few as five and with much larger groups as well. No materials are required. The purpose of the human knot puzzle is to develop team building, problem solving, and communication skills in the group and the individuals participating.
References
Mechanical puzzles | Human knot | [
"Mathematics"
] | 381 | [
"Recreational mathematics",
"Mechanical puzzles"
] |
13,347,172 | https://en.wikipedia.org/wiki/Luminosity%20function%20%28astronomy%29 | In astronomy, a luminosity function gives the number of stars or galaxies per luminosity interval. Luminosity functions are used to study the properties of large groups or classes of objects, such as the stars in clusters or the galaxies in the Local Group.
Note that the term "function" is slightly misleading, and the luminosity function might better be described as a luminosity distribution. Given a luminosity as input, the luminosity function essentially returns the abundance of objects with that luminosity (specifically, number density per luminosity interval).
Main sequence luminosity function
The main sequence luminosity function maps the distribution of main sequence stars according to their luminosity. It is used to compare star formation and death rates, and evolutionary models, with observations. Main sequence luminosity functions vary depending on their host galaxy and on selection criteria for the stars, for example in the Solar neighbourhood or the Small Magellanic Cloud.
White dwarf luminosity function
The white dwarf luminosity function (WDLF) gives the number of white dwarf stars with a given luminosity. As this is determined by the rates at which these stars form and cool, it is of interest for the information it gives about the physics of white dwarf cooling and the age and history of the Galaxy.
Schechter luminosity function
The Schechter luminosity function provides an approximation of the abundance of galaxies in a luminosity interval [L, L + dL]. The luminosity function has units of a number density per unit luminosity and is given by a power law with an exponential cut-off at high luminosity:

φ(L) dL = (φ*/L*) (L/L*)^α e^(−L/L*) dL,

where L* is a characteristic galaxy luminosity controlling the cut-off, α controls the power-law slope at the faint end, and the normalization φ* has units of number density. Equivalently, this equation can be expressed in terms of log-quantities with

φ(log L) = ln(10) φ* (L/L*)^(α+1) e^(−L/L*).
The galaxy luminosity function may have different parameters for different populations and environments; it is not a universal function. Its parameters have been measured for samples of field galaxies.
It is often more convenient to rewrite the Schechter function in terms of magnitudes rather than luminosities. In this case, the Schechter function becomes

φ(M) = 0.4 ln(10) φ* [10^(0.4(M* − M))]^(α+1) exp(−10^(0.4(M* − M))).

Note that because the magnitude system is logarithmic, the power law has logarithmic slope 0.4(α + 1). This is why a Schechter function with α = −1 is said to be flat.
Integrals of the Schechter function can be expressed via the upper incomplete gamma function:

∫_{L0}^{∞} φ(L) dL = φ* Γ(α + 1, L0/L*).
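For a concrete sense of how these quantities behave, the sketch below evaluates the Schechter function and integrates it numerically; the parameter values are illustrative assumptions, not a quoted measurement (luminosities are expressed in units of L* and densities in units of φ*):

```python
import numpy as np
from scipy.integrate import quad

alpha = -1.25   # assumed faint-end slope, for illustration only

def schechter(L, alpha=alpha, L_star=1.0, phi_star=1.0):
    """phi(L) = (phi*/L*) (L/L*)^alpha exp(-L/L*)."""
    x = L / L_star
    return (phi_star / L_star) * x**alpha * np.exp(-x)

# Number density of galaxies brighter than L_min; analytically this equals
# phi* Gamma(alpha + 1, L_min/L*), the upper incomplete gamma function.
for L_min in (0.1, 0.5, 1.0):
    n, _ = quad(schechter, L_min, np.inf)
    print(f"n(L > {L_min} L*) = {n:.4f} phi*")
```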
Historically, the Schechter luminosity function was inspired by the Press–Schechter model. However, the connection between the two is not straightforward. If one assumes that every dark matter halo hosts one galaxy, then the Press–Schechter model yields a much steeper faint-end slope for galaxies than the value given above, which is closer to −1. The reason for this failure is that large halos tend to have a large host galaxy and many smaller satellites, and small halos may not host any galaxies with stars. See, e.g., halo occupation distribution for a more detailed description of the halo-galaxy connection.
References
Stellar astronomy
Galaxies
Photometry
Equations of astronomy | Luminosity function (astronomy) | [
"Physics",
"Astronomy"
] | 640 | [
"Concepts in astronomy",
"Galaxies",
"Equations of astronomy",
"Astronomical objects",
"Astronomical sub-disciplines",
"Stellar astronomy"
] |
13,347,268 | https://en.wikipedia.org/wiki/Radiobiology | Radiobiology (also known as radiation biology, and uncommonly as actinobiology) is a field of clinical and basic medical sciences that involves the study of the effects of ionizing radiation on living things, in particular health effects of radiation. Ionizing radiation is generally harmful and potentially lethal to living things but can have health benefits in radiation therapy for the treatment of cancer and thyrotoxicosis. Its most common impact is the induction of cancer with a latent period of years or decades after exposure. High doses can cause visually dramatic radiation burns, and/or rapid fatality through acute radiation syndrome. Controlled doses are used for medical imaging and radiotherapy.
Health effects
In general, ionizing radiation is harmful and potentially lethal to living beings but can have health benefits in radiation therapy for the treatment of cancer and thyrotoxicosis.
Most adverse health effects of radiation exposure may be grouped in two general categories:
deterministic effects (harmful tissue reactions) due in large part to the killing or malfunction of cells following high doses; and
stochastic effects, i.e., cancer and heritable effects involving either cancer development in exposed individuals owing to mutation of somatic cells or heritable disease in their offspring owing to mutation of reproductive (germ) cells.
Stochastic
Some effects of ionizing radiation on human health are stochastic, meaning that their probability of occurrence increases with dose, while the severity is independent of dose. Radiation-induced cancer, teratogenesis, cognitive decline, and heart disease are all stochastic effects induced by ionizing radiation.
Its most common impact is the stochastic induction of cancer with a latent period of years or decades after exposure. The mechanism by which this occurs is well understood, but quantitative models predicting the level of risk remain controversial. The most widely accepted model posits that the incidence of cancers due to ionizing radiation increases linearly with effective radiation dose at a rate of 5.5% per sievert. If this linear model is correct, then natural background radiation is the most hazardous source of radiation to general public health, followed by medical imaging as a close second.
Quantitative data on the effects of ionizing radiation on human health is relatively limited compared to other medical conditions because of the low number of cases to date, and because of the stochastic nature of some of the effects. Stochastic effects can only be measured through large epidemiological studies where enough data has been collected to remove confounding factors such as smoking habits and other lifestyle factors. The richest source of high-quality data comes from the study of Japanese atomic bomb survivors. In vitro and animal experiments are informative, but radioresistance varies greatly across species.
The added lifetime risk of developing cancer by a single abdominal CT of 8 mSv is estimated to be 0.05%, or 1 in 2,000.
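The quoted figure is a straightforward consequence of the linear model described above, as this back-of-envelope check shows (a sketch for illustration, not medical guidance):

```python
# Linear no-threshold arithmetic using the 5.5%/Sv slope cited above
risk_per_sievert = 0.055
dose_sv = 8e-3                        # 8 mSv abdominal CT

added_risk = risk_per_sievert * dose_sv
print(f"added lifetime risk ~ {added_risk:.2%} (about 1 in {round(1 / added_risk):,})")
# ~0.04%, roughly 1 in 2,300, consistent with the ~0.05% (1 in 2,000) quoted above
```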
Deterministic
Deterministic effects are those that reliably occur above a threshold dose, and their severity increases with dose.
Deterministic effects are not necessarily more or less serious than stochastic effects; either can ultimately lead to anything from a temporary nuisance to a fatality. Examples of deterministic effects are:
Acute radiation syndrome, by acute whole-body radiation
Radiation burns, from radiation to a particular body surface
Radiation-induced thyroiditis, a potential side effect from radiation treatment against hyperthyroidism
Chronic radiation syndrome, from long-term radiation.
Radiation-induced lung injury, for example from radiation therapy to the lungs
Cataracts and infertility.
The US National Academy of Sciences Biological Effects of Ionizing Radiation Committee "has concluded that there is no compelling evidence to indicate a dose threshold below which the risk of tumor induction is zero".
By type of radiation
When alpha particle emitting isotopes are ingested, they are far more dangerous than their half-life or decay rate would suggest. This is due to the high relative biological effectiveness of alpha radiation to cause biological damage after alpha-emitting radioisotopes enter living cells. Ingested alpha emitter radioisotopes such as transuranics or actinides are an average of about 20 times more dangerous, and in some experiments up to 1000 times more dangerous than an equivalent activity of beta emitting or gamma emitting radioisotopes. If the radiation type is not known, it can be determined by differential measurements in the presence of electrical fields, magnetic fields, or with varying amounts of shielding.
In pregnancy
The risk for developing radiation-induced cancer at some point in life is greater when exposing a fetus than an adult, both because the cells are more vulnerable while they are growing and because there is a much longer lifespan after the dose in which cancer can develop. Excessive radiation exposure can also harm the unborn child or the reproductive organs; research shows that scanning more than once in nine months can harm the unborn child.
Possible deterministic effects of radiation exposure in pregnancy include miscarriage, structural birth defects, growth restriction and intellectual disability. The deterministic effects have been studied in, for example, survivors of the atomic bombings of Hiroshima and Nagasaki and cases where radiation therapy has been necessary during pregnancy:
The intellectual deficit has been estimated to be about 25 IQ points per 1,000 mGy at 10 to 17 weeks of gestational age.
These effects are sometimes relevant when deciding about medical imaging in pregnancy, since projectional radiography and CT scanning exposes the fetus to radiation.
Also, the risk for the mother of later acquiring radiation-induced breast cancer seems to be particularly high for radiation doses during pregnancy.
Measurement
The human body cannot sense ionizing radiation except in very high doses, but the effects of ionization can be used to characterize the radiation. Parameters of interest include disintegration rate, particle flux, particle type, beam energy, kerma, dose rate, and radiation dose.
The monitoring and calculation of doses to safeguard human health is called dosimetry and is undertaken within the science of health physics. Key measurement tools are the use of dosimeters to give the external effective dose uptake and the use of bio-assay for ingested dose. The article on the sievert summarises the recommendations of the ICRU and ICRP on the use of dose quantities and includes a guide to the effects of ionizing radiation as measured in sieverts, and gives examples of approximate figures of dose uptake in certain situations.
The committed dose is a measure of the stochastic health risk due to an intake of radioactive material into the human body. The ICRP states "For internal exposure, committed effective doses are generally determined from an assessment of the intakes of radionuclides from bioassay measurements or other quantities. The radiation dose is determined from the intake using recommended dose coefficients".
Absorbed, equivalent and effective dose
The absorbed dose is a physical dose quantity D representing the mean energy imparted to matter per unit mass by ionizing radiation. In the SI system of units, the unit of measure is joules per kilogram, and its special name is gray (Gy). The non-SI CGS unit rad is sometimes also used, predominantly in the USA.
To represent stochastic risk, the equivalent dose H_T and effective dose E are used, and appropriate dose factors and coefficients are used to calculate these from the absorbed dose. Equivalent and effective dose quantities are expressed in units of the sievert or rem, which implies that biological effects have been taken into account. These are usually in accordance with the recommendations of the International Commission on Radiological Protection (ICRP) and the International Commission on Radiation Units and Measurements (ICRU), which together have developed a coherent system of radiological protection quantities.
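The chain from absorbed to equivalent to effective dose can be expressed compactly in code. The sketch below is illustrative; the weighting factors shown are a small subset of the ICRP 103 recommendations:

```python
# Radiation weighting factors w_R and tissue weighting factors w_T (subset, ICRP 103)
W_R = {"photon": 1.0, "electron": 1.0, "alpha": 20.0}
W_T = {"lung": 0.12, "stomach": 0.12, "liver": 0.04, "skin": 0.01}

def equivalent_dose(absorbed_gy):
    """H_T = sum_R w_R * D_(T,R); absorbed_gy maps radiation type -> absorbed dose in Gy."""
    return sum(W_R[r] * d for r, d in absorbed_gy.items())

def effective_dose(equivalent_sv):
    """E = sum_T w_T * H_T; equivalent_sv maps tissue -> equivalent dose in Sv."""
    return sum(W_T[t] * h for t, h in equivalent_sv.items())

h_lung = equivalent_dose({"alpha": 1e-3, "photon": 2e-3})     # 20*1 + 2 = 22 mSv
print(f"H_T(lung) = {h_lung * 1e3:.1f} mSv")
print(f"E         = {effective_dose({'lung': h_lung}) * 1e3:.2f} mSv")   # 0.12 * 22
```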
Organizations
The International Commission on Radiological Protection (ICRP) manages the International System of Radiological Protection, which sets recommended limits for dose uptake. Dose values may represent absorbed, equivalent, effective, or committed dose.
Other important organizations studying the topic include:
International Commission on Radiation Units and Measurements (ICRU)
United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR)
US National Council on Radiation Protection and Measurements (NCRP)
UK Health Security Agency
US National Academy of Sciences (NAS through the BEIR studies)
French Institut de radioprotection et de sûreté nucléaire (IRSN)
European Committee on Radiation Risk (ECRR)
Exposure pathways
External
External exposure is exposure which occurs when the radioactive source (or other radiation source) is outside (and remains outside) the organism which is exposed. Examples of external exposure include:
A person who places a sealed radioactive source in his pocket
A space traveller who is irradiated by cosmic rays
A person who is treated for cancer by either teletherapy or brachytherapy. Although in brachytherapy the source is inside the person, it is still considered external exposure because it does not result in a committed dose.
A nuclear worker whose hands have been dirtied with radioactive dust. Assuming that his hands are cleaned before any radioactive material can be absorbed, inhaled or ingested, skin contamination is considered to be external exposure.
External exposure is relatively easy to estimate, and the irradiated organism does not become radioactive, except for a case where the radiation is an intense neutron beam which causes activation.
Internal
Internal exposure occurs when the radioactive material enters the organism, and the radioactive atoms become incorporated into the organism. This can occur through inhalation, ingestion, or injection. Below are a series of examples of internal exposure.
The exposure caused by potassium-40 present within a normal person.
The exposure caused by the ingestion of a soluble radioactive substance, such as 89Sr in cows' milk.
A person who is being treated for cancer by means of a radiopharmaceutical where a radioisotope is used as a drug (usually a liquid or pill). A review of this topic was published in 1999. Because the radioactive material becomes intimately mixed with the affected object it is often difficult to decontaminate the object or person in a case where internal exposure is occurring. While some very insoluble materials such as fission products within a uranium dioxide matrix might never be able to truly become part of an organism, it is normal to consider such particles in the lungs and digestive tract as a form of internal contamination which results in internal exposure.
Boron neutron capture therapy (BNCT) involves injecting a boron-10 tagged chemical that preferentially binds to tumor cells. Neutrons from a nuclear reactor are shaped by a neutron moderator to the neutron energy spectrum suitable for BNCT treatment, and the tumor is selectively bombarded with these neutrons. The neutrons quickly slow down in the body to become low-energy thermal neutrons, which are captured by the injected boron-10, forming excited boron-11 that breaks down into lithium-7 and a helium-4 alpha particle; both of these produce closely spaced ionizing radiation. This concept is described as a binary system using two separate components for the therapy of cancer. Each component in itself is relatively harmless to the cells, but when combined for treatment they produce a highly cytocidal (cytotoxic) effect which is lethal within a limited range of 5–9 micrometers, approximately one cell diameter. Clinical trials, with promising results, are currently being carried out in Finland and Japan.
When radioactive compounds enter the human body, the effects are different from those resulting from exposure to an external radiation source. Especially in the case of alpha radiation, which normally does not penetrate the skin, the exposure can be much more damaging after ingestion or inhalation. The radiation exposure is normally expressed as a committed dose.
History
Although radiation was discovered in the late 19th century, the dangers of radioactivity and of radiation were not immediately recognized. Acute effects of radiation were first observed in the use of X-rays when German physicist Wilhelm Röntgen intentionally subjected his fingers to X-rays in 1895. He published his observations concerning the burns that developed, though he misattributed them to ozone, a free radical produced in air by X-rays. Other free radicals produced within the body are now understood to be more important. His injuries later healed.
As a field of medical sciences, radiobiology originated from Leopold Freund's 1896 demonstration of the therapeutic treatment of a hairy mole using the newly discovered form of electromagnetic radiation called X-rays. After irradiating frogs and insects with X-rays in early 1896, Ivan Romanovich Tarkhanov concluded that these newly discovered rays not only photograph, but also "affect the living function". At the same time, Pierre and Marie Curie discovered the radioactive polonium and radium later used to treat cancer.
The genetic effects of radiation, including the effects on cancer risk, were recognized much later. In 1927 Hermann Joseph Muller published research showing genetic effects, and in 1946 was awarded the Nobel prize for his findings.
More generally, the 1930s saw attempts to develop a general model for radiobiology. Notable here was Douglas Lea, whose presentation also included an exhaustive review of some 400 supporting publications.
Before the biological effects of radiation were known, many physicians and corporations had begun marketing radioactive substances as patent medicine and radioactive quackery. Examples included radium enema treatments and radium-containing waters to be drunk as tonics. Marie Curie spoke out against this sort of treatment, warning that the effects of radiation on the human body were not well understood. Curie later died of aplastic anemia caused by radiation poisoning. Eben Byers, a famous American socialite, died of multiple cancers (but not acute radiation syndrome) in 1932 after consuming large quantities of radium over several years; his death drew public attention to the dangers of radiation. By the 1930s, after a number of cases of bone necrosis and death among enthusiasts, radium-containing medical products had nearly vanished from the market.
In the United States, the experience of the so-called Radium Girls, in which thousands of radium-dial painters contracted oral cancers (but no cases of acute radiation syndrome), popularized the warnings of occupational health associated with radiation hazards. Robley D. Evans, at MIT, developed the first standard for permissible body burden of radium, a key step in the establishment of nuclear medicine as a field of study. With the development of nuclear reactors and nuclear weapons in the 1940s, heightened scientific attention was given to the study of all manner of radiation effects.
The atomic bombings of Hiroshima and Nagasaki resulted in a large number of incidents of radiation poisoning, allowing for greater insight into its symptoms and dangers. Red Cross Hospital surgeon Dr. Terufumi Sasaki led intensive research into the syndrome in the weeks and months following the Hiroshima bombing. Sasaki and his team were able to monitor the effects of radiation in patients at varying proximities to the blast itself, leading to the establishment of three recorded stages of the syndrome. Within 25–30 days of the explosion, the Red Cross surgeon noticed a sharp drop in white blood cell count and established this drop, along with symptoms of fever, as prognostic standards for acute radiation syndrome. Actress Midori Naka, who was present during the atomic bombing of Hiroshima, was the first case of radiation poisoning to be extensively studied. Her death on August 24, 1945, was the first death ever to be officially certified as a result of radiation poisoning (or "atomic bomb disease").
The Atomic Bomb Casualty Commission and the Radiation Effects Research Foundation have been monitoring the health status of the survivors and their descendants since 1946. They have found that radiation exposure increases cancer risk, but also that the average lifespan of survivors was reduced by only a few months compared to those not exposed to radiation. No health effects of any sort have thus far been detected in children of the survivors.
Areas of interest
The interactions between organisms and electromagnetic fields (EMF) and ionizing radiation can be studied in a number of ways:
Radiation physics
Radiation chemistry
Molecular and cell biology
Molecular genetics
Cell death and apoptosis
High and low-level electromagnetic radiation and health
Specific absorption rates of organisms
Radiation poisoning
Radiation oncology (radiation therapy in cancer)
Bioelectromagnetics
Electric field and Magnetic field - their general nature.
Electrophysiology - the scientific study of the electrical properties of biological cells and tissues.
Biomagnetism - the magnetic properties of living systems (see, for example, the research of David Cohen using SQUID imaging) and Magnetobiology - the study of the effect of magnetic fields upon living systems. See also Electromagnetic radiation and health
Bioelectromagnetism - the electromagnetic properties of living systems and Bioelectromagnetics - the study of the effect of electromagnetic fields on living systems.
Electrotherapy
Radiation therapy
Radiogenomics
Transcranial magnetic stimulation - a powerful electric current produces a transient, spatially focussed magnetic field that can penetrate the scalp and skull of a subject and induce electrical activity in the neurons on the surface of the brain.
Magnetic resonance imaging - a very powerful magnetic field is used to obtain a 3D image of the density of water molecules of the brain, revealing different anatomical structures. A related technique, functional magnetic resonance imaging, reveals the pattern of blood flow in the brain and can show which parts of the brain are involved in a particular task.
Embryogenesis, Ontogeny and Developmental biology - a discipline that has given rise to many scientific field theories.
Bioenergetics - the study of energy exchange on the molecular level of living systems.
Biological psychiatry, Neurology, Psychoneuroimmunology
Radiation sources for experimental radiobiology
Radiobiology experiments typically make use of a radiation source which could be:
An isotopic source, typically 137Cs or 60Co.
A particle accelerator generating high energy protons, electrons or charged ions. Biological samples can be irradiated using either a broad, uniform beam, or using a microbeam, focused down to cellular or subcellular sizes.
A UV lamp.
See also
Biological effects of radiation on the epigenome
Cell survival curve
Health threat from cosmic rays
NASA Space Radiation Laboratory
Radioactivity in biology
Radiology
Radiophobia
Radiosensitivity
References
Sources
ICRP, 2007. The 2007 Recommendations of the International Commission on Radiological Protection. ICRP Publication 103. Ann. ICRP 37 (2-4).
Further reading
Eric Hall, Radiobiology for the Radiologist. 2006. Lippincott
G.Gordon Steel, "Basic Clinical Radiobiology". 2002. Hodder Arnold.
The Institute for Radiation Biology at the Helmholtz-Center for Environmental Health | Radiobiology | [
"Physics",
"Chemistry",
"Materials_science",
"Biology"
] | 3,823 | [
"Radiation health effects",
"Applied and interdisciplinary physics",
"Radiobiology",
"Medical physics",
"Radiation effects",
"Radioactivity"
] |
13,347,566 | https://en.wikipedia.org/wiki/Meromyosin | Meromyosin is a part of myosin (mero meaning "part of"). With regard to human anatomy, myosin and actin constitute the basic functional unit of a muscle fiber, called the sarcomere, and play a role in muscle contraction.
Biochemically, the meromyosins are subunits of the actin-associated motor protein myosin, commonly obtained by trypsin proteolysis (protein breakdown). This proteolysis yields two types of meromyosin: heavy meromyosin (HMM) and light meromyosin (LMM).
Light meromyosin forms a long, straight portion in the "tail" region. Heavy meromyosin (HMM) is a protein chain terminating in a globular head portion (cross-bridge). HMM consists of two subunits, heavy meromyosin subunit 1 and 2 (HMMS-1 and HMMS-2). The majority of myosin activity is concentrated in HMMS-1, which has an actin-binding site and an ATP-binding site (myosin ATPase) that determines the rate of muscle contraction when the muscle is stretched.
Light and heavy meromyosin are subunits of myosin filaments (thick myofilaments).
References
Motor proteins | Meromyosin | [
"Chemistry",
"Biology"
] | 278 | [
"Biotechnology stubs",
"Motor proteins",
"Biochemistry stubs",
"Molecular machines",
"Biochemistry"
] |
13,347,757 | https://en.wikipedia.org/wiki/Blackett%20effect | The Blackett effect, also called gravitational magnetism, is the hypothetical generation of a magnetic field by an uncharged, rotating body. This effect has not been observed.
History
Gravitational magnetism was proposed by the German-British physicist Arthur Schuster as an explanation for the magnetic field of the Earth, but was found nonexistent in a 1923 experiment by H. A. Wilson. The hypothesis was revived by the British physicist P. M. S. Blackett in 1947, when he proposed that a rotating body should generate a magnetic field proportional to its angular momentum. This was never generally accepted, and by the 1950s even Blackett felt it had been refuted.
The Blackett effect was used by the science fiction writer James Blish in his series Cities in Flight (1955–1962) as the basis for his fictional stardrive, the spindizzy.
References
Obsolete theories in physics
Magnetism in astronomy
Gravity | Blackett effect | [
"Physics",
"Materials_science",
"Astronomy"
] | 190 | [
"Materials science stubs",
"Theoretical physics",
"Magnetism in astronomy",
"Electromagnetism stubs",
"Obsolete theories in physics"
] |
13,348,062 | https://en.wikipedia.org/wiki/Sodium%20methylparaben | Sodium methylparaben (sodium methyl para-hydroxybenzoate) is a compound with formula Na(CH3(C6H4COO)O). It is the sodium salt of methylparaben.
It is a food additive with the E number E219 which is used as a preservative.
References
Benzoate esters
Organic sodium salts
Food additives
Methyl esters
E-number additives
Phenolates | Sodium methylparaben | [
"Chemistry"
] | 92 | [
"Organic sodium salts",
"Phenolates",
"Salts"
] |
13,349,019 | https://en.wikipedia.org/wiki/Belle%20%28magazine%29 | Belle is an Australian design magazine, covering interior design and architecture as well as a raft of other home improvement content.
History and profile
Belle was started in 1974. The magazine was purchased by the Bauer Media Group as part of their acquisition of the Australian Consolidated Press. It was published on a bi-monthly basis until 2014 when its frequency was changed to eight times a year. The headquarters is in Sydney.
Neale Whitaker served as the editor-in-chief of the magazine. In December 2014 Tanya Buchanan was named editor-in-chief of Belle.
The circulation of Belle was 45,230 copies from January to June 2014.
Since 2020, Belle magazine has been owned by Are Media, which acquired Bauer Media's Australian assets.
References
External links
1974 establishments in Australia
Architecture magazines
Are Media
Bi-monthly magazines published in Australia
Design magazines
Eight times annually magazines
Lifestyle magazines published in Australia
Magazines established in 1974
Magazines published in Sydney | Belle (magazine) | [
"Engineering"
] | 186 | [
"Design magazines",
"Design"
] |
13,349,561 | https://en.wikipedia.org/wiki/Succinic%20semialdehyde | Succinic semialdehyde (SSA) is a GABA and GHB metabolite. It is formed from GABA by the action of GABA transaminase (4-aminobutyrate aminotransferase) and is further oxidised to succinic acid, which enters the TCA cycle. SSA is oxidized into succinic acid by the enzyme succinic semialdehyde dehydrogenase, which uses NAD+ as a cofactor. When this oxidation of succinic semialdehyde to succinic acid is impaired, as in succinic semialdehyde dehydrogenase deficiency, succinic semialdehyde accumulates.
In addition to the pathway involving GABA transaminase, gamma-hydroxybutyric acid (GHB) can also be metabolized to SSA via GHB dehydrogenase or by GHB transhydrogenase (D-2-hydroxyglutarate transhydrogenase).
See also
Transaminase (aminotransferase)
Succinic semialdehyde dehydrogenase deficiency
References
Aldehydes
Carboxylic acids
Gamma-Hydroxybutyric acid
Aldehydic acids | Succinic semialdehyde | [
"Chemistry"
] | 267 | [
"Carboxylic acids",
"Functional groups"
] |
13,349,978 | https://en.wikipedia.org/wiki/Casabella | Casabella is a monthly Italian architectural and product design magazine with a focus on modern, radical design and architecture. It includes interviews with the world's most prominent architects.
History and profile
Casabella was founded in 1928 at Milan by Guido Marangoni. Its initial name was La Casa Bella (The Beautiful Home). In 1933, the architect Giuseppe Pagano became editor, changing the name to Casabella. Subsequently, the architect Ernesto Nathan Rogers, who edited the magazine from 1953 to 1965, changed the name further to Casabella Continuità, Casabella Costruzioni, Costruzioni Casabella, and, after the departure of Rogers, Casabella.
During its history, Casabella featured many important architects and designers, including Franco Albini, Gae Aulenti, and Marco Zanuso, who contributed as creative editors. It has also published articles written by Barry Bergdoll, curator at the Department of Architecture and Design of the Museum of Modern Art (MoMA) in New York City. Semiotician Giovanni Klaus Koenig also worked for the journal.
After being edited by Vittorio Gregotti between 1981 and 1996, the magazine's editorial helm has been taken over by Francesco Dal Co. It is published by Gruppo Mondadori with a 2014 circulation of 45,000 copies.
Gallery
See also
List of magazines published in Italy
References
Further reading
Chiara Baglione, Casabella 1928-2008, Electa, Milano 2008.
External links
Casabella, Design Dictionary
1928 establishments in Italy
Architecture magazines
Design magazines
Italian-language magazines
Magazines established in 1928
Magazines published in Milan
Monthly magazines published in Italy
Arnoldo Mondadori Editore | Casabella | [
"Engineering"
] | 340 | [
"Design magazines",
"Design"
] |
13,350,759 | https://en.wikipedia.org/wiki/Dielectric%20wireless%20receiver | Dielectric wireless receiver is a type of radiofrequency receiver front-end featuring a complete absence of electronic circuitry and metal interconnects. It offers immunity against damage from intense electromagnetic radiation, produced by EMP and HPM sources. This receiver is known as ADNERF (an acronym used to signify an All-Dielectric Non-Electronic Radio Front-End). ADNERF is a type of Electro-Magnetic Pulse Tolerant Microwave Receiver (EMPiRe).
Background
The continuing trend towards reduced feature size and voltage in integrated circuits renders modern electronics highly susceptible to damage caused by High Power Microwave (HPM) and other microwave based directed energy sources. These induce high voltage transient surges of thousands of volts which can punch through the gate insulator in the transistor and can destroy the circuit's metal interconnects. To immunize electronic systems against such threats, the “soft spots” (metal and transistor) in a conventional receiver front-end, must be eliminated.
Operation
The basic concept of this photonic-assisted all-dielectric RF front-end technology is shown in Fig. 1. The Dielectric Resonator Antenna (DRA) in the front-end functions as a concentrator of the incoming electromagnetic field. When the electromagnetic (EM) field excites the resonance of the DRA, a mode field pattern is built up inside the structure. The electro-optical (EO) resonator is placed at the location of the peak field magnitude (Fig. 2). The EO resonator converts the received EM signal to an intensity-modulated optical signal, which is then carried away from the antenna front-end via an optical fiber. At the remote location, the signal is converted back to an RF signal, which is then amplified and processed using conventional techniques.
This front-end design significantly increases the threshold for damage associated with high power microwave signals. The lack of metal interconnects eliminates one source of failure, and the charge isolation provided by the optical link protects the electronic circuitry. Good sensitivity can be achieved due to the signal enhancement provided by the microwave resonance in the DRA and the optical resonance in the EO resonator. The modulating E-field (ERF) applied to the resonator should not be uniform across the disk; otherwise no modulation occurs. To prevent this, the EO resonator is placed off center from the symmetrical axis of the DRA, as shown in Fig. 2. The location of the EO resonator is chosen to coincide with the peak EM field inside the DRA, identified using 3-D EM simulations. The field profile shown in Figure 2b does not include the presence of the EO resonator; in practice the presence of the EO crystal will change the field distribution.
References
Abrams, M. Dawn of the e-bomb. IEEE Spectrum 40, 24-30 (2003).
R. C. J. Hsu, A. Ayazi, B. Houshmand and B. Jalali, “All-Dielectric Photonic-Assisted Radio Front-End Technology,” Nature Photonics 1, 535–538 (2007).
A. Ayazi, C. J. Hsu, B. Houshmand, W. H. Steier, and B. Jalali, “All-dielectric photonics assisted wireless receiver,” Optics Express (2008).
DARPA's EMPiRe program.
Nonlinear optics
Optical devices
Radio frequency antenna types
Antennas (radio) | Dielectric wireless receiver | [
"Materials_science",
"Engineering"
] | 740 | [
"Glass engineering and science",
"Optical devices"
] |
13,351,034 | https://en.wikipedia.org/wiki/Werner%20Kuhn%20%28chemist%29 | Werner Kuhn (February 6, 1899 – August 27, 1963) was a Swiss physical chemist who developed the first model of the viscosity of polymer solutions using statistical mechanics. He is known for being the first to apply Boltzmann's entropy formula,

S = k ln W,

to the modeling of rubber molecules, i.e. the "rubber band entropy model": he imagined a rubber molecule as a chain of N independently oriented links of length b with an end-to-end distance of r. This model, which resulted in the derivation of the thermal equation of state of rubber, has since been extrapolated to the entropic modeling of proteins and other conformational polymer-chain molecules attached to a surface.
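In the Gaussian regime, Kuhn's chain model gives the entropy S(r) = const − 3k r²/(2Nb²), so the chain pulls back with a purely entropic, Hooke-like force f = 3kTr/(Nb²) whose stiffness grows with temperature. A short illustrative sketch (the chain parameters below are hypothetical):

```python
k_B = 1.380_649e-23   # Boltzmann constant, J/K

def entropic_force(r, N, b, T=300.0):
    """Retractive force of an ideal freely jointed chain in the Gaussian regime.

    f = -T dS/dr = 3 k_B T r / (N b^2): an entropic spring whose
    stiffness is proportional to absolute temperature T.
    """
    return 3 * k_B * T * r / (N * b**2)

# Hypothetical chain: N = 1000 links of length b = 0.5 nm, stretched to r = 50 nm
print(f"f = {entropic_force(50e-9, 1000, 0.5e-9):.2e} N")   # ~2.5 pN at room temperature
```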
Kuhn received a degree in chemical engineering at the Eidgenössische Technische Hochschule (ETH, Federal Institute of Technology), in Zürich, and later a doctorate (1923) in physical chemistry. He was appointed professor of physical chemistry at the University of Kiel (1936–39) and then returned to Switzerland as director of the Physico-Chemical Institute of the University of Basel (1939–63), where he also served as rector (1955–56).
In a 1951 lecture along with his student V.B. Hargitay, he was the first to hypothesize the countercurrent multiplier mechanism in the mammalian kidney, later to be discovered in many other similar biological systems.
See also
Excluded volume
Kuhn length
References
External links
Hirsch, Warren (2003). "Disorder in un-stretched rubberbands", JCE, February Vol. 80, No. 2, p. 145
Thermodynamicists
1899 births
1963 deaths
Swiss physical chemists | Werner Kuhn (chemist) | [
"Physics",
"Chemistry"
] | 352 | [
"Thermodynamics",
"Thermodynamicists"
] |
13,351,159 | https://en.wikipedia.org/wiki/Peugeot%20Concours%20Design | The Peugeot Concours Design competition was a biennial competition run by the French car manufacturer Peugeot. For each competition, entrants had to submit their designs for a car. A model of the winning design was built by the Peugeot Styling Centre and unveiled at the Frankfurt Motor Show. The last competition was launched in 2008.
History
The first contest took place in 2000-2001, with the theme "2020" - a challenge to design a Peugeot car for the year 2020. The contest was announced at the Paris Motor Show in 2000, and culminated in the winning design being built by Peugeot, and unveiled at the Frankfurt Motor Show in 2001. Since then, the contest has followed a similar schedule every two years. For the 2006-2007 contest, the winning design featured in an Xbox 360 game.
Themes
Each contest has a theme, which contestants must follow when creating their designs:
2000-2001 "2020" - a concept car for the year 2020
2002-2003 "Retrofuturism"
2004-2005 "Design the Peugeot you dream of seeing in the near future"
2006-2007 "P.L.E.A.S.E." - stands for Pleasurable (to drive), Lively, Efficient, Accessible, Simple, Ecological
2008-2009 "Imagine the Peugeot in the worldwide megalopolis of tomorrow"
Rules
The contest ran to a similar schedule, with similar rules, each time. In 2008, the contest opened in June, with the deadline for submissions on July 20. Once the deadline for submissions had passed, Peugeot selected 30 entries to go forward to the next stage. These 30 entries were placed on the competition's website and were voted on by visitors to the site and selected industry and press figures. Based on these votes, the 30 entries were narrowed down to ten, which were posted to the website. This stage was not a majority vote - the ten entries selected were not necessarily the highest-voted entries.
In the final stage of voting, the Peugeot Jury selected the winner and the 2nd and 3rd runners-up from the final ten. The results were then posted to the competition's website, usually some time in February.
Prizes
In the 2008-2009 contest, the prize structure was as follows:
1st place
€10,000
Xbox 360 console
"La Griffe" trophy, presented at the Geneva Motor Show
A 1:43 scale model of the previous contest's winning entry, made by Norev to be sold as merchandise
Construction of a full-scale model of the winning entry to be entered in two auto shows (see below)
Inclusion of their design in a forthcoming Xbox 360 game
Accommodation and expenses to attend the two motor shows
Entry to the Shanghai Motor Show 2009
Entry to the Mondial de Paris 2008
2nd place
€2,500
Xbox 360 console
A 1:43 scale model of the previous contest's winning entry
3rd place
€1,500
Xbox 360 console
A 1:43 scale model of the previous contest's winning entry
4th-10th place
€1,000
A 1:43 scale model of the previous contest's winning entry
11th-30th place
€300
Winners
2000-2001
The winner of the 2000-2001 competition was Moonster by Marko Lukovic. The concept behind this design was that the vehicle should be totally original. It features a wheel-level engine with a raised central compartment capable of seating two people.
2002-2003
This contest, themed "Retrofuturism", was won by Stefan Schulze with 4002. With this design, the artist intended to make the car look like a typical Peugeot. Many commentators felt that he was successful with this, citing the design of the headlights as particularly Peugeot-like.
2004-2005
This contest was won by André Costa with Moovie. His was a bubble-shaped 2-door, 2-seat city car, almost totally enclosed by glass. The creator has stated that the car was designed to be environmentally friendly and ideal for use in cities.
2006-2007
The 2006-2007 competition was won by Mihai Panaitescu with his design, Flux. According to its creator, the car is intended to be sporty and versatile, and able to handle many different environments with ease. Flux is meant to symbolise "the continuous change and flow of our daily lives during work and play" - to this end, the car contains an integrated Xbox 360 console.
See also
List of motor vehicle awards
Automotive design
Paris Motor Show
Geneva Motor Show
Frankfurt Motor Show
Peugeot
Inducement prize contest
References
External links
Peugeot Concours Design (English site)
Moonster home page
Peugeot (English site)
Automotive design
Design awards
Experimental vehicles | Peugeot Concours Design | [
"Engineering"
] | 940 | [
"Design",
"Design awards"
] |
13,352,174 | https://en.wikipedia.org/wiki/Quadrant%20%28instrument%29 | A quadrant is an instrument used to measure angles up to 90°. Different versions of this instrument could be used to calculate various readings, such as longitude, latitude, and time of day. Its earliest recorded usage was in ancient India in Rigvedic times by Rishi Atri to observe a solar eclipse. It was then proposed by Ptolemy as a better kind of astrolabe. Several different variations of the instrument were later produced by medieval Muslim astronomers. Mural quadrants were important astronomical instruments in 18th-century European observatories, where they served the needs of positional astronomy.
Etymology
The term quadrant, meaning one fourth, refers to the fact that early versions of the instrument were derived from astrolabes. The quadrant condensed the workings of the astrolabe into an area one fourth the size of the astrolabe face; it was essentially a quarter of an astrolabe.
History
During Rigvedic times in ancient India, quadrants called tureeyams were used to measure the extent of a great solar eclipse. The use of a tureeyam for observing a solar eclipse by Rishi Atri is described in the fifth mandala of the Rigveda, most likely composed between c. 1500 and 1000 BCE.
Early accounts of a quadrant also come from Ptolemy's Almagest around AD 150. He described a "plinth" that could measure the altitude of the noon sun by projecting the shadow of a peg on a graduated arc of 90 degrees. This quadrant was unlike later versions of the instrument; it was larger and consisted of several moving parts. Ptolemy's version was a derivative of the astrolabe and the purpose of this rudimentary device was to measure the meridian angle of the sun.
Islamic astronomers in the Middle Ages improved upon these ideas and constructed quadrants throughout the Middle East, in observatories such as Marageh, Rey and Samarkand. At first these quadrants were usually very large and stationary, and could be rotated to any bearing to give both the altitude and azimuth for any celestial body. As Islamic astronomers made advancements in astronomical theory and observational accuracy they are credited with developing four different types of quadrants during the Middle Ages and beyond. The first of these, the sine quadrant, was invented by Muhammad ibn Musa al-Khwarizmi in the 9th century at the House of Wisdom in Baghdad. The other types were the universal quadrant, the horary quadrant and the astrolabe quadrant.
During the Middle Ages the knowledge of these instruments spread to Europe. In the 13th century Jewish astronomer Jacob ben Machir ibn Tibbon was crucial in further developing the quadrant. He was a skilled astronomer and wrote several volumes on the topic, including an influential book detailing how to build and use an improved version of the quadrant. The quadrant that he invented came to be known as the novus quadrans, or new quadrant. This device was revolutionary because it was the first quadrant to be built that did not involve several moving parts and thus could be much smaller and more portable.
Tibbon's Hebrew manuscripts were translated into Latin and improved upon by Danish scholar Peter Nightingale several years later. Because of the translation, Tibbon, or Prophatius Judaeus as he was known in Latin, became an influential name in astronomy. His new quadrant was based upon the idea that the stereographic projection that defines a planispheric astrolabe can still work if the astrolabe parts are folded into a single quadrant. The result was a device that was far cheaper, easier to use and more portable than a standard astrolabe. Tibbon's work had a far reach and influenced Copernicus, Christopher Clavius and Erasmus Reinhold; and his manuscript was referenced in Dante's Divine Comedy.
As the quadrant became smaller and thus more portable, its value for navigation was soon realized. The first documented use of the quadrant to navigate at sea is in 1461, by Diogo Gomes. Sailors began by measuring the height of Polaris to ascertain their latitude. This application of quadrants is generally attributed to Arab sailors who traded along the east coast of Africa and often travelled out of sight of land. It soon became more common to take the height of the sun at a given time due to the fact that Polaris is not visible south of the equator.
In 1618, the English mathematician Edmund Gunter further adapted the quadrant with an invention that came to be known as the Gunter quadrant. This pocket sized quadrant was revolutionary because it was inscribed with projections of the tropics, the equator, the horizon and the ecliptic. With the correct tables one could use the quadrant to find the time, the date, the length of the day or night, the time of sunrise and sunset and the meridian. The Gunter quadrant was extremely useful but it had its drawbacks; the scales only applied to a certain latitude so the instrument's use was limited at sea.
Types
There are several types of quadrants:
Mural quadrants, used for determining the time by measuring the altitudes of astronomical objects. Tycho Brahe created one of the largest mural quadrants. In order to tell time he would place two clocks next to the quadrant so that he could identify the minutes and seconds in relation to the measurements on the side of the instrument.
Large frame-based instruments used for measuring angular distances between astronomical objects.
Geometric quadrant – used by surveyors and navigators.
Davis quadrant – a compact, framed instrument used by navigators for measuring the altitude of an astronomical object.
They can also be classified as:
Altitude – The plain quadrant with plumb line, used to take the altitude of an object.
Gunner's – A type of clinometer used by an artillerist to measure the elevation or depression angle of a gun barrel of a cannon or mortar, both to verify proper firing elevation, and to verify the correct alignment of the weapon-mounted fire control devices.
Gunter's – A quadrant used for time determination as well as the length of day, when the sun had risen and set, the date, and the meridian using scales and curves of the quadrant along with related tables. It was invented by Edmund Gunter in 1623. Gunter's quadrant was fairly simple which allowed for its widespread and long-lasting use in the 17th and 18th centuries. Gunter expanded the basic features of other quadrants to create a convenient and comprehensive instrument. Its distinguishable feature included projections of the tropics, equator, ecliptic, and the horizon.
Islamic – The historian of science David A. King identified four types of quadrants that were produced by Muslim astronomers.
The sine quadrant (Arabic: Rubul Mujayyab) – also known as the Sinecal Quadrant – was used for solving trigonometric problems and taking astronomical observations. It was developed by al-Khwarizmi in 9th century Baghdad and prevalent until the nineteenth century. Its defining feature is a graph-paper like grid on one side that is divided into sixty equal intervals on each axis and is also bounded by a 90 degree graduated arc. A cord was attached to the apex of the quadrant with a bead, for calculation, and a plumb bob. They were also sometimes drawn on the back of astrolabes.
The universal (shakkāzīya) quadrant – used for solving astronomical problems for any latitude: These quadrants had either one or two sets of shakkāzīya grids and were developed in the fourteenth century in Syria. Some astrolabes are also printed on the back with the universal quadrant like an astrolabe created by Ibn al-Sarrāj.
The horary quadrant – used for finding the time with the sun: The horary quadrant could be used to find the time either in equal or unequal (length of the day divided by twelve) hours. Different sets of markings were created for either equal or unequal hours. For measuring the time in equal hours, the horary quadrant could only be used at one specific latitude, while a quadrant for unequal hours could be used anywhere, based on an approximate formula. One edge of the quadrant had to be aligned with the sun, and once aligned, a bead on the plumbline attached to the centre of the quadrant showed the time of the day. A British version dated 1311 was listed by Christie's in December 2023 with the claim of being "the earliest dated English scientific instrument", though without any published provenance. A further example, dated 1396, comes from European sources (Richard II of England). The oldest known horary quadrant, found during an excavation in 2013 in the Hanseatic town of Zutphen (Netherlands), is dated c. 1300 and is held in the local Stedelijk Museum in Zutphen.
The astrolabe/almucantar quadrant – a quadrant developed from the astrolabe: This quadrant was marked with one half of a typical astrolabe plate, as astrolabe plates are symmetrical. A cord attached to the centre of the quadrant, with a bead at the other end, was moved to represent the position of a celestial body (the sun or a star); the ecliptic and star positions were marked on the quadrant for this purpose. It is not known where and when the astrolabe quadrant was invented; extant astrolabe quadrants are of either Ottoman or Mamluk origin, though twelfth-century Egyptian and fourteenth-century Syrian treatises on the astrolabe quadrant have been discovered. These quadrants proved to be very popular alternatives to astrolabes.
Geometric quadrant
The geometric quadrant is a quarter-circle panel usually of wood or brass. Markings on the surface might be printed on paper and pasted to the wood or painted directly on the surface. Brass instruments had their markings scribed directly into the brass.
For marine navigation, the earliest examples were found around 1460. They were not graduated in degrees but rather had the latitudes of the most common destinations directly scribed on the limb. When in use, the navigator would sail north or south until the quadrant indicated he was at the destination's latitude, turn in the direction of the destination and sail to the destination maintaining a course of constant latitude. After 1480, more of the instruments were made with limbs graduated in degrees.
Along one edge there were two sights forming an alidade. A plumb bob was suspended by a line from the centre of the arc at the top.
In order to measure the altitude of a star, the observer would view the star through the sights and hold the quadrant so that the plane of the instrument was vertical. The plumb bob was allowed to hang vertical and the line indicated the reading on the arc's graduations. It was not uncommon for a second person to take the reading while the first concentrated on observing and holding the instrument in proper position.
The accuracy of the instrument was limited by its size and by the effect the wind or observer's motion would have on the plumb bob. For navigators on the deck of a moving ship, these limitations could be difficult to overcome.
Solar observations
In order to avoid staring into the sun to measure its altitude, navigators could hold the instrument in front of them with the sun to their side. By having the sunward sighting vane cast its shadow on the lower sighting vane, it was possible to align the instrument to the sun. Care would have to be taken to ensure that the altitude of the centre of the sun was determined. This could be done by averaging the elevations of the upper and lower umbra in the shadow.
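The arithmetic involved is simple enough to sketch. The snippet below is a minimal illustration, not from the source: the sample readings are hypothetical, and the declination value would come from the kind of tables the Gunter quadrant relied on. It averages the two umbra readings and converts a noon-sun altitude to latitude:

```python
def sun_centre_altitude(upper_umbra_deg: float, lower_umbra_deg: float) -> float:
    """Altitude of the sun's centre, taken as the mean of the shadow's two edges."""
    return (upper_umbra_deg + lower_umbra_deg) / 2.0

def latitude_from_noon_sun(altitude_deg: float, declination_deg: float) -> float:
    """Latitude from the sun's meridian (noon) altitude.

    For an observer north of the sun's geographic position:
        altitude = 90 - latitude + declination.
    """
    return 90.0 - altitude_deg + declination_deg

alt = sun_centre_altitude(40.53, 40.00)                   # hypothetical readings
print(latitude_from_noon_sun(alt, declination_deg=10.0))  # ~59.7 degrees north
```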
Back observation quadrant
In order to perform measurements of the altitude of the sun, a back observation quadrant was developed.
With such a quadrant, the observer viewed the horizon from a sight vane (C in the figure on the right) through a slit in the horizon vane (B). This ensured the instrument was level. The observer moved the shadow vane (A) to a position on the graduated scale so as to cause its shadow to appear coincident with the level of the horizon on the horizon vane. This angle was the elevation of the sun.
Framed quadrant
Large frame quadrants were used for astronomical measurements, notably determining the altitude of celestial objects. They could be permanent installations, such as mural quadrants. Smaller quadrants could be moved. Like the similar astronomical sextants, they could be used in a vertical plane or made adjustable for any plane.
When set on a pedestal or other mount, they could be used to measure the angular distance between any two celestial objects.
The details on their construction and use are essentially the same as those of the astronomical sextants; refer to that article for details.
Navy: To gauge the elevation of a ship's cannon, the quadrant was placed on each gun's trunnion after loading in order to judge range. The reading was taken at the top of the ship's roll; the gun was adjusted and checked again at the top of the roll, and the gunner moved on to the next gun until all the guns to be fired were ready. The ship's gunner then informed the captain, who gave the order to fire when ready, and at the next high roll the cannon would be fired.
In more modern applications, the quadrant is attached to the trunnion ring of a large naval gun to align it to benchmarks welded to the ship's deck. This is done to ensure that firing of the gun has not "warped the deck." A flat surface on the mount, gunhouse, or turret is also checked against benchmarks, to ensure that large bearings and bearing races have not changed, in effect to "calibrate" the gun.
Customization
During the Middle Ages, makers often added customization to impress the person for whom the quadrant was intended. In large, unused spaces on the instrument, a sigil or badge would often be added to denote the ownership by an important person or the allegiance of the owner.
See also
Davis quadrant
List of astronomical instruments
Mural instrument
References
Maurice Daumas, Scientific Instruments of the Seventeenth and Eighteenth Centuries and Their Makers, Portman Books, London 1989
External links
Gunter's Quadrant Article on the Gunter's Quadrant (PDF)
Gunter's Quadrant Simulation of Gunter's Quadrant (requires Java)
A working quadrant in coin form
Richard II (1396) era equal hour horary quadrant (pictures):
back, with tables
front, with watch angles
Surveying
Astronomical instruments
Historical scientific instruments
Astronomy in the medieval Islamic world
Technology in the medieval Islamic world
Angle measuring instruments
Navigational equipment
Iranian inventions
Greek inventions
Arab inventions
Marine navigation | Quadrant (instrument) | [
"Astronomy",
"Engineering"
] | 2,962 | [
"History of astronomy",
"Astronomy in the medieval Islamic world",
"Surveying",
"Civil engineering",
"Astronomical instruments"
] |
16,026,071 | https://en.wikipedia.org/wiki/Antimony%20regulus | Antimony regulus or antimony metal is a partially purified form of the element antimony. In modern commerce, it typically contains 0.4% to 1.0% impurities, primarily arsenic, with smaller amounts of sulfur, zinc and iron. Selenium as an impurity is rare, but for some purposes it must be avoided; other problematic impurities for various applications include copper, nickel, and lead.
Typical commercial antimony is unsuitable for production of solid-state-electronics devices, and for these 99.95% pure material is typically demanded.
References
External links
Chymistry of Isaac Newton project
Antimony | Antimony regulus | [
"Physics"
] | 133 | [
"Materials stubs",
"Materials",
"Matter"
] |
16,027,304 | https://en.wikipedia.org/wiki/Interferometric%20modulator%20display | Interferometric modulator display (IMOD, trademarked mirasol) is a technology used in electronic visual displays that can create various colors via interference of reflected light. The color is selected with an electrically switched light modulator comprising a microscopic cavity that is switched on and off using driver integrated circuits similar to those used to address liquid crystal displays (LCD). An IMOD-based reflective flat panel display includes hundreds of thousands of individual IMOD elements each a microelectromechanical systems (MEMS)-based device.
In one state, an IMOD subpixel absorbs incident light and appears black to the viewer. In a second state, it reflects light at a specific wavelength through optical interference. When not being addressed, an IMOD display consumes very little power. Unlike conventional back-lit liquid crystal displays, it is clearly visible in bright ambient light such as sunlight. IMOD prototypes as of mid-2010 could display 15 frames per second (fps), and in November 2011 Qualcomm demonstrated another prototype reaching 30 fps, suitable for video playback. The smartwatch Qualcomm Toq features this display at 40 fps.
Mirasol screens could produce 60 Hz video, but doing so quickly drained the battery. Devices that used the screen had colors that looked washed out, so the technology never saw mainstream support.
Working principle
The basic elements of an IMOD-based display are microscopic devices that act essentially as mirrors that can be switched on or off individually. Each of these elements reflects only one exact wavelength of light, such as a specific hue of red, green or blue, when turned on, and absorbs light (appears black) when off. Elements are organised into a rectangular array in order to produce a display screen.
An array of elements that all reflect the same color when turned on produces a monochromatic display, for example black and red (in this example using IMOD elements that reflect red light when "on"). As each element reflects only a certain amount of light, grouping several elements of the same color together as subpixels allows different brightness levels for a pixel based on how many elements are reflective at a particular time.
Multiple color displays are created by using subpixels, each designed to reflect a specific different color. Multiple elements of each color are generally used to both give more combinations of displayable color (by mixing the reflected colors) and to balance the overall brightness of the pixel.
Because elements only use power in order to switch between on and off states (no power is needed to reflect or absorb light hitting the display once the element is either reflecting or absorbing), IMOD-based displays potentially use much less power than displays that generate light and/or need constant power to keep pixels in a particular state. Being a reflective display, they require an external light source (such as daylight or a lamp) to be readable, just like paper or other electronic paper technologies.
Details
A pixel in an IMOD-based display consists of one or more subpixels that are individual microscopic interferometric cavities similar in operation to Fabry–Pérot interferometers (etalons). While a simple etalon consists of two half-silvered mirrors, an IMOD comprises a reflective membrane which can move in relation to a semi-transparent thin film stack. With an air gap defined within this cavity, the IMOD behaves like an optically resonant structure whose reflected color is determined by the size of the air gap. Application of a voltage to the IMOD creates electrostatic forces which bring the membrane into contact with the thin film stack. When this happens, the behavior of the IMOD changes to that of an induced absorber: almost all incident light is absorbed and no colors are reflected. This binary operation is the basis for the IMOD's application in reflective flat panel displays. Since the display utilizes light from ambient sources, its brightness increases in bright ambient environments (i.e. sunlight). In contrast, a back-lit LCD washes out as incident light increases.
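As a rough illustration of how the gap selects the color, the sketch below uses a simplified two-mirror model that ignores the phase shifts introduced by the real thin-film stack, so the numbers are indicative only:

```python
def reflected_peaks(gap_nm: float, orders=(1, 2, 3)):
    """Wavelengths (nm) constructively reflected by an idealised air gap.

    Ignoring phase shifts at the mirrors, reflection peaks where the
    round-trip path is a whole number of wavelengths: 2 * gap = m * wavelength.
    """
    return [2 * gap_nm / m for m in orders]

# A gap of ~320 nm puts the first-order peak near 640 nm, i.e. red.
print(reflected_peaks(320.0))  # [640.0, 320.0, 213.33...]
```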
For a practical RGB color model (RGB) display, a single RGB pixel is built from several subpixels, because the brightness of an individual element cannot be varied. A monochromatic array of subpixels provides the different brightness levels for each color, and each pixel contains three such arrays: red, green and blue.
Development
The IMOD technology was invented by Mark W. Miles, a MEMS researcher and founder of Etalon, Inc. and co-founder of Iridigm Display Corporation. Qualcomm took over the development of this technology after its acquisition of Iridigm in 2004, and subsequently formed Qualcomm MEMS Technologies (QMT). Qualcomm has commercialized the technology under the trademark "mirasol". This energy-efficient, biomimetic technology sees use in portable electronics such as e-book readers and mobile phones.
IMOD panel manufacturers include Qualcomm in conjunction with Foxlink, which established a joint venture with Sollink (高強光電) in 2009 to build a facility dedicated to manufacturing IMOD panels. Production began in January 2011, with the fabricated panels intended for devices such as e-readers.
As of 2015, the IMOD Mirasol display laboratory in Longtan, Taiwan, formerly run by Qualcomm, is now apparently run by Apple.
Uses
IMOD displays are now available in the commercial marketplace. QMT's displays, using IMOD technology, are found in the Acoustic Research ARWH1 Stereo Bluetooth headset device, the Showcare Monitoring system (Korea), the Hisense C108, and MP3 applications from Freestyle Audio and Skullcandy. In the mobile phone marketplace, Taiwanese manufacturers Inventec and Cal-Comp have announced phones with mirasol displays, and LG claims to be developing "one or more" handsets using mirasol technology. These products all have only two-color (black plus one other) "bi-chromic" displays. A multi-color IMOD display is used in the Qualcomm Toq smartwatch.
References
Bibliography
Display technology
Qualcomm | Interferometric modulator display | [
"Engineering"
] | 1,303 | [
"Electronic engineering",
"Display technology"
] |
16,032,107 | https://en.wikipedia.org/wiki/Becher%20process | The Becher process is a process to produce rutile, a form of titanium dioxide, from the ore ilmenite. Although it is competitive with the chloride process and the sulfate process, the Becher process is not used at scale.
With the idealized formula FeTiO3, ilmenite contains 55-65% titanium dioxide, the rest being iron oxide. The Becher process, like other beneficiation processes, aims to remove iron. The Becher process exploits the conversion of the ferrous iron (FeO) to ferric iron (Fe2O3). Ilmenite ores can be upgraded to synthetic rutile by increasing their TiO2 content to between 90 and 96 percent.
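A quick stoichiometric check makes these percentages concrete. The sketch below is illustrative only (real weathered ilmenite starts above the ideal-formula grade because some iron has already been lost): it computes the TiO2 mass fraction of stoichiometric FeTiO3 and of the product after most of the iron is removed.

```python
# Atomic masses (g/mol).
M = {"Fe": 55.845, "Ti": 47.867, "O": 15.999}

m_feo = M["Fe"] + M["O"]        # the FeO component of FeTiO3
m_tio2 = M["Ti"] + 2 * M["O"]   # TiO2
m_ilmenite = m_feo + m_tio2     # FeTiO3

print(f"TiO2 in stoichiometric FeTiO3: {m_tio2 / m_ilmenite:.1%}")  # ~52.6%

def tio2_grade(fraction_iron_removed: float) -> float:
    """TiO2 mass fraction after removing a fraction of the iron oxide."""
    remaining = m_tio2 + (1.0 - fraction_iron_removed) * m_feo
    return m_tio2 / remaining

print(f"95% of the iron removed -> {tio2_grade(0.95):.1%} TiO2")  # ~95.7%
```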
History
This technology was developed in the early 1960s in Western Australia by a joint initiative between industry and government. The process was named after Robert Gordon Becher, who while working at the Western Australian Government Chemical Laboratories (the precursor to ChemCentre) invented, developed and introduced the technique to the Western Australian Mineral Sands industry. The process was patented in 1961.
Process
The Becher process is suitable for weathered ilmenite that has low concentrations of chromium and magnesium. There are four steps involved in removing the iron portion of the ilmenite:
Oxidation
Reduction
Aeration
Leaching
Oxidation
Oxidation involves heating the ilmenite in a rotary kiln with air to convert the ferrous iron to iron(III) oxide:
4 FeTiO3 + O2 → 2 Fe2O3·TiO2 + 2 TiO2
This step is suitable for a range of ilmenite-containing feedstocks.
Reduction
Reduction is performed in a rotary kiln charged with pseudobrookite (Fe2O3·TiO2), coal, and sulfur, heated to a temperature greater than 1200 °C. The iron oxide in the mineral grains is reduced to metallic iron to produce reduced ilmenite:
Fe2O3·TiO2 + 3 CO → 2 Fe + TiO2 + 3 CO2
The "reduced ilmenite" is separated from the char prior to the next step.
Aeration
Aeration involves the removal of the metallic iron created in the previous step by "rusting" it out. This conversion is achieved in large tanks containing a 1% ammonium chloride solution through which air is pumped. The tank is continuously agitated, and the iron rusts and precipitates in the form of a slime.
4 Fe + 3 O2 → 2 Fe2O3
The finer iron oxide is then separated from the larger particles of synthetic rutile.
Acid leach
Once the majority of the iron oxide has been removed, the remainder is leached away using 0.5 M sulfuric acid.
References
Further reading
Chemical processes
Industrial processes
Titanium processes | Becher process | [
"Chemistry"
] | 569 | [
"Metallurgical processes",
"Titanium processes",
"Chemical processes",
"nan",
"Chemical process engineering"
] |
16,032,809 | https://en.wikipedia.org/wiki/Heterophile%20antibody%20test | The mononuclear spot test or monospot test, a form of the heterophile antibody test, is a rapid test for infectious mononucleosis due to Epstein–Barr virus (EBV). It is an improvement on the Paul–Bunnell test. The test is specific for heterophile antibodies produced by the human immune system in response to EBV infection. Commercially available test kits are 70–92% sensitive and 96–100% specific, with a lower sensitivity in the first two weeks after clinical symptoms begin.
The United States Centers for Disease Control and Prevention deems the monospot test not to be very useful.
Medical uses
It is indicated as a confirmatory test when a physician suspects EBV, typically in the presence of clinical features such as fever, malaise, pharyngitis, tender lymphadenopathy (especially posterior cervical; often called "tender glands") and splenomegaly.
In the case of delayed or absent seroconversion, an immunofluorescence test could be used if the diagnosis is in doubt. The profile of acute EBV infection shows the following characteristics: antibodies to viral capsid antigen (VCA) of the IgM class, antibodies to EBV early antigen (anti-EA), and absent antibodies to EBV nuclear antigen (anti-EBNA).
Usefulness
One source states that the specificity of the test is high, virtually 100%. Another source states that a number of other conditions can cause false positives. Rarely, a false positive heterophile antibody test may result from systemic lupus erythematosus, toxoplasmosis, rubella, lymphoma and leukemia.
However, the sensitivity is only moderate, so a negative test does not exclude EBV. This lack of sensitivity is especially the case in young children, many of whom will not produce detectable amounts of the heterophile antibody and will thus have a false negative test result.
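What these sensitivity and specificity figures mean in practice is easiest to see with Bayes' theorem. The sketch below is not from the source: the kit figures are mid-range values from the numbers quoted above, and the 10% pre-test probability is an assumption.

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Positive and negative predictive value via Bayes' theorem."""
    tp = sensitivity * prevalence               # true positives
    fp = (1 - specificity) * (1 - prevalence)   # false positives
    fn = (1 - sensitivity) * prevalence         # false negatives
    tn = specificity * (1 - prevalence)         # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Assumed: 81% sensitivity, 98% specificity, 10% pre-test probability of EBV.
ppv, npv = predictive_values(0.81, 0.98, 0.10)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")  # PPV ~0.82, NPV ~0.98
```

Even with high specificity, a negative result leaves a small residual probability of infection, which is why the article stresses that a negative test does not exclude EBV.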
Timing
It will generally not be positive during the 4–6 week incubation period before the onset of symptoms. The highest amount of heterophile antibodies occurs 2 to 5 weeks after the onset of symptoms. If positive, it will remain so for at least six weeks. An elevated heterophile antibody level may persist up to 1 year.
Process
The test is usually performed using commercially available test kits which detect the reaction of heterophile antibodies in a person's blood sample with horse or cow red blood cell antigens. These test kits work on the principles of latex agglutination or immunochromatography. Using this method, the test can be performed by individuals without specialized training, and the results may be available in as little as five minutes.
Manual versions of the test rely on the agglutination of horse erythrocytes by heterophile antibodies in patient serum. Heterophile means that the antibody reacts with proteins across species lines. It can also mean an antibody that reacts with antigens other than the antigen that stimulated it (a cross-reacting antibody).
A 20% suspension of horse red cells is used in an isotonic 3–8% sodium citrate formulation.
One drop of the patient's serum to be tested is mixed on an opal glass slide with one drop of a particulate suspension of guinea-pig kidney stroma, and a suspension of ox red cell stroma; sera and suspensions are mixed with a wooden applicator 10 times.
Ten microliters of the horse red cell suspension are then added and mixed with each drop of adsorbed serum.
The mixture is left undisturbed for one minute (not rocked or shaken).
It is then examined for the presence or absence of red cell agglutination.
If stronger with the sera adsorbed with guinea-pig kidney, the test is positive.
If stronger with the sera adsorbed with ox red cell stroma, the test is negative.
If agglutination is absent in both mixtures, the test is negative.
A known 'positive' and 'negative' control serum is tested with each batch of test sera.
References
Virology
Infectious disease blood tests
Epstein–Barr virus
Immunologic tests | Heterophile antibody test | [
"Biology"
] | 869 | [
"Immunologic tests"
] |
16,036,981 | https://en.wikipedia.org/wiki/Lava%20filter | A lava filter is a biological filter that uses lava stone pebbles as support material on which microorganisms can grow in a thin biofilm. This community of microorganisms, known as the periphyton, breaks down odor components in the air, such as hydrogen sulfide. The biodegradation is carried out by the bacteria themselves. For this to work, sufficient oxygen, water and nutrients (for cell growth) must be supplied.
Method
Contaminated air enters the system at the bottom of the filter and passes upward through the filter. Water is supplied at the surface of the biofilter and trickles down over the lava rock to the bottom, where it is collected. Constant watering of the surface prevents the active bacteria in the biofilm from drying out and ensures a constant pH value in the filter. It also makes nutrients available to the bacteria.
Percolating water collected at the filter bottom contains odor components as well as sulfuric acid from the biological oxidation of hydrogen sulfide. Depending on the process design, the collected water is recirculated or subjected to further treatment.
Types of systems
At present, two types of systems are used:
constantly submerged lava filters (for treatment ponds, combined treatment ponds/irrigation reservoirs, ...)
non-submerged lava filters (for wastewater treatment; with this system, wastewater is simply sprayed onto the pebbles)
Constantly submerged lava filters
These are constructed from two layers of lava pebbles and a top layer of nutrient-free soil (only around the plants' roots). On top, water-purifying plants (such as Iris pseudacorus and Sparganium erectum) are placed. Usually a lava-stone bed covering around one quarter of the water surface is required to purify the water, and, just as in slow sand filters, a series of herringbone drains is placed in the bottom layer.
The water-purifying plants used with constantly submerged, planted lava filters (e.g. treatment ponds and self-purifying irrigation reservoirs) include a wide variety of species, depending on the local climate and geographical location. Plants indigenous to the location are usually chosen, both for environmental reasons and for optimum working of the system. In addition to water-purifying (de-nutrifying) plants, plants that supply oxygen and shade are also added in ecologic water catchments and ponds, allowing a complete ecosystem to form. Finally, in addition to plants, locally grown bacteria and non-predatory fish are added to eliminate pests. The bacteria are usually cultured locally by submerging straw in water and allowing bacteria from the surrounding environment to colonise it. The plants (placed on an area of about one quarter of the water surface) are divided over four water-depth zones, namely:
A water-depth zone from 0–20 cm; Iris pseudacorus, Sparganium erectum, ... may be placed here (temperate climates)
A water-depth zone from 40–60 cm; Stratiotes aloides, Hydrocharis morsus-ranae, ... may be placed here (temperate climates)
A water-depth zone from 60–120 cm; Nymphaea alba, ... may be placed here (temperate climates)
A submerged water-depth zone; Myriophyllum spicatum, ... may be placed here (temperate climates)
Finally, three types of non-predatory fish (surface, middle and bottom swimmers) are chosen, so that the fish can 'get along' without competing for the same zone. Examples of the three types of fish (for temperate climates) are:
Surface swimming fish: Leuciscus leuciscus, Leuciscus idus, Scardinius erythrophthalmus
Middle-swimmers: Rutilus rutilus
Bottom-swimming fish: Tinca tinca
See also
Constructed wetland
Treatment pond
Organisms used in water purification
References
Water filters
Appropriate technology
Environmental soil science
DIY culture | Lava filter | [
"Chemistry",
"Environmental_science"
] | 852 | [
"Water treatment",
"Water filters",
"Environmental soil science",
"Filters"
] |
7,029,205 | https://en.wikipedia.org/wiki/Subsea%207 | Subsea 7 S.A. (stylised as Subsea7) is a Luxembourgish multinational services company involved in subsea engineering and construction serving the offshore energy industry. The company is registered in Luxembourg with its headquarters in London. Subsea 7 delivers offshore projects and provides services for the energy industry.
Subsea7 works on lower-carbon oil and gas projects and provides services for the growth of renewables and other emerging energy industries.
History
The company was formed by the January 2011 merger of two predecessor companies, Acergy S.A. and Subsea 7, Inc.
Acergy was founded in 1970 as Stolt Nielsen Seaway, a division of the Norwegian Stolt-Nielsen Group offering divers for the exploration of the North Sea. After a series of acquisitions, including Comex Services of France in 1992 and Houston, Texas–based Ceanic Corporation in 1998, the company changed its name to Stolt Offshore in 2000. Five years later Stolt-Nielsen spun out the company as an independent business listed on the Oslo Stock Exchange and NASDAQ. The firm was renamed Acergy in March 2006.
Subsea 7, Inc. was the result of a series of mergers between DSND Offshore AS, Halliburton Subsea, Subsea Offshore and Rockwater over an extended period, with Rockwater and SubSea merging in 1999 to form Halliburton Subsea, and the resulting company operating as a 50/50 joint venture with DSND in 2002 with the name Subsea 7. Halliburton exited the joint venture in November 2004. The company was listed on the Oslo Stock Exchange in August 2005 following its restructuring the same year.
On 21 June 2010 the combination of Acergy S.A. and Subsea 7 Inc. was announced and was completed on 7 January 2011. The new entity took the Subsea 7 name while retaining Acergy's Luxembourg domicile and operational headquarters in London. The chairman and chief executive roles were filled by Kristian Siem and Jean Cahuzac, who had previously held the same roles at Subsea 7 and Acergy respectively.
Presence
The current headquarters for Subsea 7 is located at 40 Brighton Road, Sutton, London.
See also
List of oilfield service companies
References
External links
Engineering companies of Luxembourg
Engineering companies of the United Kingdom
Companies listed on the Oslo Stock Exchange
Companies formerly listed on the Nasdaq
Energy engineering and contractor companies
Offshore engineering
Oilfield services companies
Multinational companies headquartered in Luxembourg
Multinational companies based in the City of London | Subsea 7 | [
"Engineering"
] | 513 | [
"Construction",
"Engineering companies",
"Energy engineering and contractor companies",
"Offshore engineering"
] |
7,029,214 | https://en.wikipedia.org/wiki/Radioactive%20scrap%20metal | Radioactive scrap metal is created when radioactive material enters the metal recycling process and contaminates scrap metal.
Overview
A "lost source accident" occurs when a radioactive object is lost or stolen. Such objects may appear in the scrap metal industry if people mistake them for harmless bits of metal. The International Atomic Energy Agency has provided guides for scrap metal collectors on what a sealed source might look like. The best known example of this type of event is the Goiânia accident, in Brazil.
While some lost-source accidents have not involved the scrap metal industry, they are good examples of the likely scale and scope of a lost-source accident. For example, the Red Army left sources behind in Didi Lilo, Georgia.
Other cases occurred at Yanango, where a 192Ir radiography source was lost, and at Gilan, Iran, where a radiography source harmed a welder.
Radioactive sources have a wide range of uses in medicine and industry, and it is common for the design (and nature) of a source to be tailored to the specific application. Hence, it is impossible to state with confidence what the "typical" source looks like or contains. For instance, antistatic devices include beta and alpha emitters: polonium containing devices have been used to eliminate static electricity in such devices as paint spraying equipment. An overview of the gamma sources used for radiography can be seen at Radiographic equipment, and it is reasonable to consider this to be a good overview of small to moderate gamma sources.
Notable incidents
1930s and 1940s – In the US, gold which was contaminated with radioactive lead-210 entered the jewelry industry after a person in upstate New York melted gold seeds used in brachytherapy that had originally contained radon-222. By the 1960s, health officials had reported cases of skin damage and cancer in people who had worn contaminated rings. A 1981 investigation by the state identified at least 177 contaminated pieces of jewelry worn by 127 people throughout the state and in northwestern Pennsylvania, of which nine had developed cancer and 41 had a non-cancerous skin disease linked to radiation.
1982 – In northern Taiwan, a cobalt-60 source was recycled with steel into rebar and used in the construction of apartment buildings, principally in Taipei from 1982 through 1984. Over 2,000 apartment units and shops were suspected as having been built using the material. About 10,000 people are believed to have been exposed to long-term low-level irradiation as a result. In the summer of 1992, a utility worker for the Taiwanese state-run electric utility Taipower brought a Geiger counter to his apartment to learn more about the device, and discovered that his apartment was contaminated. Despite awareness of the problem, owners of some of the buildings known to be contaminated have continued to rent apartments to tenants (in part because selling the units is illegal). Some research has shown that the radiation has had a "beneficial" effect upon the health of the tenants, based on the death rate from cancers. Another study looking at the incidence of cancer found that although the overall risk of cancer was sharply reduced (SIR = 0.6, 95% CI 0.5–0.7), the incidence of certain leukemias in men (n = 6, SIR = 3.4, 95% CI 1.2–7.4) and thyroid cancer in women (n = 6, SIR = 2.6, 95% CI 1.0–5.7) was more prevalent; see the sketch after this list for how such SIR confidence intervals are computed.
December 1983 – Ciudad Juárez, Mexico. A local resident salvaged materials from a discarded radiation therapy machine containing 6,010 pellets of cobalt-60. Transport of the material led to severe contamination of his truck. When the truck was scrapped, it contaminated another 5,000 metric tonnes of steel. This steel was used to manufacture kitchen and restaurant table legs and rebar, some of which was shipped to the US and Canada. The incident was discovered months later when a truck delivering contaminated steel building materials to the Los Alamos National Laboratory drove into the facility through a radiation monitoring station intended to detect radiation leaving the facility. Contamination was later measured on roads used to transport the original damaged radiation source. Some pellets were found embedded in the roadway. In the state of Sinaloa, 109 houses were condemned due to use of contaminated building material. This incident prompted the Nuclear Regulatory Commission and Customs Service to install radiation detection equipment at all major border crossings.
September 1987 – Goiânia accident in Brazil; four people died from caesium radiation poisoning during their search for scrap metal, and 249 other people had significant radiation exposure.
May 1998 – Recycler Acerinox in Cádiz, Spain, unwittingly melted scrap metal containing caesium-137; the radioactive cloud drifted to Switzerland before being detected. (See Acerinox accident.)
January 2000 – At Samut Prakarn, a cobalt-60 teletherapy source was stolen and sold as scrap, and attempts were made by scrap metal workers to recycle the metal. Three people died, and thousands of others were exposed to radiation. It was found that at the edge of the scrap yard, the dose rate was about 1 to 10 mSv·h−1. The exact location of the source in the scrap yard was determined using a fluorescent screen which acted as a scintillator; this device was held on the end of a long pole.
July 2010 – During a routine inspection at the Port of Genoa, on Italy's northwest coast, a cargo container from Saudi Arabia carrying scrap copper was found to be emitting gamma radiation. After spending over a year in quarantine on port grounds, Italian officials dissected the container using robots and discovered a rod of cobalt-60, 0.8 cm in diameter, intermingled with the scrap. Officials suspected its provenance to be inappropriately disposed-of medical or food-processing equipment. The rod was sent to Germany for further analysis, after which it was likely to be recycled.
May 2013 – A batch of metal-studded belts sold by online retailer ASOS.com were confiscated and held in a US radioactive storage facility after testing positive for cobalt-60.
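The SIR figures with 95% confidence intervals quoted in the Taiwan entry above can be reproduced with an exact (Garwood) Poisson interval. A minimal sketch follows; the expected-case count is back-calculated from the published SIR purely for illustration, and the code requires scipy:

```python
from scipy.stats import chi2

def sir_with_ci(observed: int, expected: float, alpha: float = 0.05):
    """Standardized incidence ratio with an exact (Garwood) Poisson CI."""
    lo = 0.5 * chi2.ppf(alpha / 2, 2 * observed) if observed > 0 else 0.0
    hi = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (observed + 1))
    return observed / expected, lo / expected, hi / expected

# Leukemia in men from the Taiwan study: 6 observed cases, SIR = 3.4,
# so roughly 6 / 3.4 ≈ 1.8 expected cases (back-calculated, illustrative).
print(sir_with_ci(6, 6 / 3.4))  # ≈ (3.4, 1.2, 7.4), matching the quoted CI
```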
Physical and chemical compositions
The cleanup operation for the Goiânia accident was difficult both because the source containment had been opened, and the radioactive material was water-soluble.
In the 1983 incident in Mexico, cobalt-60 was spilled in an otherwise similar exposure, but it led to a very different pattern of contamination, since the cobalt in such a source is normally in the form of cobalt metal alloyed with some nickel to improve the mechanical properties of the radioactive metal. If such a source is abused, the cobalt metal fragments do not tend to dissolve in water or become very mobile. If a cobalt or iridium source is lost at a ferrous metal scrapyard, then it is often the case that the source will enter a furnace, where the radioactive metal will melt and contaminate the steel from that furnace. In Mexico, some buildings have been demolished because of the level of cobalt-60 in the steel used to make them. Also, some of the steel which was rendered radioactive in the Mexican event was used to make legs for 1,400 tables.
Source melting
In the case of some high-value scrap metals it is possible to decontaminate the material, but this is best done long before the metal goes to a scrap yard.
Ferrous scrap
In the case of a caesium source being melted in an electric arc furnace used for steel scrap, it is more likely that the caesium will contaminate the fly ash or dust from the furnace, while radium is likely to stay in the ash or slag. The United States Environmental Protection Agency provides data about the fate of different contaminating elements in a scrap furnace. Four different fates for the element exist: the element can stay in the metal (as with cobalt and ruthenium); the element can enter the slag (as in lanthanides, actinides and radium); the element can enter the furnace dust or fly ash (as with caesium), which accounts for around 5%; or the element can leave the furnace and pass through the baghouse to enter the air (as with iodine).
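The partitioning described above can be summarised as a simple bookkeeping model. In the sketch below the fractions are illustrative placeholders chosen to match the qualitative fates in the text, not measured data:

```python
# Hypothetical partition fractions among (metal, slag, baghouse dust, off-gas).
PARTITION = {
    "Co-60": (0.95, 0.04, 0.01, 0.00),   # cobalt stays in the melt
    "Cs-137": (0.01, 0.04, 0.90, 0.05),  # caesium volatilises into the dust
    "Ra-226": (0.00, 0.98, 0.02, 0.00),  # radium reports to the slag
    "I-131": (0.00, 0.00, 0.05, 0.95),   # iodine largely escapes as gas
}

def distribute(nuclide: str, activity_bq: float) -> dict:
    """Split a nuclide's activity across the four furnace streams."""
    streams = ("metal", "slag", "dust", "off-gas")
    return {s: f * activity_bq for s, f in zip(streams, PARTITION[nuclide])}

print(distribute("Cs-137", 1e9))  # where 1 GBq of caesium would end up
```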
Aluminium scrap
It is normal to place silicon, aluminium scrap and flux in a furnace. This is heated to form molten aluminium. The furnace yields three main streams: metal product, dross (metal oxides and halides which are skimmed off the molten metal product), and off-gases which go to the baghouse. The cooled waste gases are then released into the environment.
Copper scrap
It is normal that good-quality scrap copper, such as that from a nuclear plant, is refined in one furnace before being refined further in an electrochemical process. The furnace generates impure metal, slag, dust and gases. The dust accumulates in a baghouse, while the gases are vented to the atmosphere. The impure metal from the furnace may be further refined in an electrochemical process.
If the copper refinery includes an electrochemical process after the furnace, then unwanted elements are removed from the impure metal and deposited as anode slime.
See also
Nuclear crime
References
External links
NRC pictures of items that may contain radioactive materials
Radiation accidents and incidents
Radioactive waste
Metals | Radioactive scrap metal | [
"Chemistry",
"Technology"
] | 1,898 | [
"Metals",
"Hazardous waste",
"Environmental impact of nuclear power",
"Radioactivity",
"Radioactive waste"
] |
7,029,586 | https://en.wikipedia.org/wiki/Brocard%20points | In geometry, Brocard points are special points within a triangle. They are named after Henri Brocard (1845–1922), a French mathematician.
Definition
In a triangle ABC with sides a, b, c, where the vertices are labeled A, B, C in counterclockwise order, there is exactly one point P such that the line segments AP, BP, CP form the same angle, ω, with the respective sides c, a, b, namely that
∠PAB = ∠PBC = ∠PCA = ω.
Point P is called the first Brocard point of the triangle ABC, and the angle ω is called the Brocard angle of the triangle. This angle has the property that
cot ω = cot A + cot B + cot C.
There is also a second Brocard point, Q, in triangle ABC such that line segments AQ, BQ, CQ form equal angles with sides b, c, a respectively. In other words, the equations
∠QAC = ∠QCB = ∠QBA
apply. Remarkably, this second Brocard point has the same Brocard angle as the first Brocard point. In other words, the angle
∠PAB = ∠PBC = ∠PCA
is the same as
∠QAC = ∠QCB = ∠QBA.
The two Brocard points are closely related to one another; in fact, the difference between the first and the second depends on the order in which the angles of triangle ABC are taken. So for example, the first Brocard point of ABC is the same as the second Brocard point of ACB.
The two Brocard points of a triangle are isogonal conjugates of each other.
Construction
The most elegant construction of the Brocard points goes as follows. In the following example the first Brocard point is presented, but the construction for the second Brocard point is very similar.
As in the diagram above, form a circle through points A and B, tangent to edge BC of the triangle (the center of this circle is at the point where the perpendicular bisector of AB meets the line through point B that is perpendicular to BC). Symmetrically, form a circle through points B and C, tangent to edge CA, and a circle through points C and A, tangent to edge AB. These three circles have a common point, the first Brocard point of ABC. See also Tangent lines to circles.
The three circles just constructed are also designated as epicycles of ABC. The second Brocard point is constructed in similar fashion.
Trilinears and barycentrics of the first two Brocard points
Homogeneous trilinear coordinates for the first and second Brocard points are
P = c/b : a/c : b/a and Q = b/c : c/a : a/b.
Thus their barycentric coordinates are
P = c²a² : a²b² : b²c² and Q = a²b² : b²c² : c²a².
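These coordinates make the Brocard point easy to compute and check numerically. A minimal sketch (an arbitrary scalene triangle chosen for illustration; requires numpy) converts the barycentric coordinates above to Cartesian coordinates and verifies the defining angle property and the cotangent identity:

```python
import numpy as np

# An arbitrary scalene triangle, labeled counterclockwise.
A, B, C = np.array([0.0, 0.0]), np.array([5.0, 0.0]), np.array([1.0, 3.0])
a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)

# First Brocard point from its barycentric coordinates c^2a^2 : a^2b^2 : b^2c^2.
w = np.array([c**2 * a**2, a**2 * b**2, b**2 * c**2])
P = (w[0] * A + w[1] * B + w[2] * C) / w.sum()

def angle(u, v):
    """Angle between vectors u and v, in radians."""
    return np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Defining property: the three angles below are all the Brocard angle omega.
omegas = [angle(P - A, B - A), angle(P - B, C - B), angle(P - C, A - C)]
print(np.degrees(omegas))  # three equal values (~26.57 deg for this triangle)

# Cross-check: cot(omega) = cot(A) + cot(B) + cot(C).
ang = [angle(B - A, C - A), angle(C - B, A - B), angle(A - C, B - C)]
print(1 / np.tan(omegas[0]), sum(1 / np.tan(t) for t in ang))  # both 2.0 here
```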
The segment between the first two Brocard points
The Brocard points are an example of a bicentric pair of points, but they are not triangle centers because neither Brocard point is invariant under similarity transformations: reflecting a scalene triangle, a special case of a similarity, turns one Brocard point into the other. However, the unordered pair formed by both points is invariant under similarities. The midpoint of the two Brocard points, called the Brocard midpoint, has trilinear coordinates
a(b² + c²) : b(c² + a²) : c(a² + b²)
and is a triangle center; it is center X(39) in the Encyclopedia of Triangle Centers. The third Brocard point, given in trilinear coordinates as
a⁻³ : b⁻³ : c⁻³,
is the Brocard midpoint of the anticomplementary triangle and is also the isotomic conjugate of the symmedian point. It is center X(76) in the Encyclopedia of Triangle Centers.
The distance between the first two Brocard points P and Q is always less than or equal to half the radius R of the triangle's circumcircle:
PQ = 2R sin ω √(1 − 4 sin²ω) ≤ R/2.
The segment between the first two Brocard points is perpendicularly bisected at the Brocard midpoint by the line connecting the triangle's circumcenter and its Lemoine point. Moreover, the circumcenter, the Lemoine point, and the first two Brocard points are concyclic—they all fall on the same circle, of which the segment connecting the circumcenter and the Lemoine point is a diameter.
Distance from circumcenter
The Brocard points P and Q are equidistant from the triangle's circumcenter O:
PO = QO = R √(1 − 4 sin²ω).
Similarities and congruences
The pedal triangles of the first and second Brocard points are congruent to each other and similar to the original triangle.
If the lines AP, BP, CP, each through one of a triangle's vertices and its first Brocard point, intersect the triangle's circumcircle at points L, M, N, then the triangle LMN is congruent with the original triangle ABC. The same is true if the first Brocard point P is replaced by the second Brocard point Q.
Notes
References
External links
Third Brocard Point at MathWorld
Bicentric Pairs of Points and Related Triangle Centers
Bicentric Pairs of Points
Bicentric Points at MathWorld
Points defined for a triangle | Brocard points | [
"Mathematics"
] | 909 | [
"Points defined for a triangle",
"Point (geometry)"
] |
7,029,862 | https://en.wikipedia.org/wiki/Indian%20Institute%20of%20Toxicology%20Research | The Indian Institute of Toxicology Research (previously the Industrial Toxicology Research Centre) is a laboratory run under the aegis of Council of Scientific and Industrial Research. It was established in 1965 by Sibte Hasan Zaidi and has its main campus in Lucknow with a satellite campus at Gheru. The research is centered in the Asia-Pacific region.
References
External links
List of CSIR Laboratories
CSIR homepage
Toxicology organizations
Environmental organisations based in India
Research institutes in Lucknow
Council of Scientific and Industrial Research
Environmental research institutes
Research institutes established in 1965
1965 establishments in Uttar Pradesh | Indian Institute of Toxicology Research | [
"Environmental_science"
] | 116 | [
"Toxicology organizations",
"Toxicology",
"Environmental research institutes",
"Environmental research"
] |
7,029,928 | https://en.wikipedia.org/wiki/National%20Botanical%20Research%20Institute | The National Botanical Research Institute (NBRI) is a research institute of the Council of Scientific and Industrial Research (CSIR) located in Lucknow, Uttar Pradesh, India. It is engaged in the field of taxonomy and modern biology.
History
Originally conceptualised and set up as the National Botanic Gardens (NBG) by Professor Kailas Nath Kaul on behalf of the State Government of Uttar Pradesh, it was taken over by the CSIR in 1953. Dr Triloki Nath Khoshoo joined in 1964 as the Assistant Director, shortly afterwards becoming the Director. Initially engaged in research in the classical botanical disciplines, the NBG went on to place increasing emphasis on applied and developmental research, in keeping with national needs and priorities in the plant sciences. Under Khoshoo's direction, the institute was renamed the National Botanical Research Institute in 1978, a name reflecting the nature and extent of its aims, functions and R&D activities. Sikandar Bagh, a famous and historic pleasure garden, is located in the grounds of the Institute.
Achievements
NBRI developed a new variety of bougainvillea, named Los Banos Variegata-Jayanthi.
To fight whiteflies, the National Botanical Research Institute (NBRI) Lucknow has developed a pest-resistant variety of cotton.
A group of its researchers developed the first indigenous transgenic cotton variety expressing Bt protein.
Namibia
The name National Botanical Research Institute (NBRI) is also used by the state botanical research institute of Namibia.
References
Council of Scientific and Industrial Research
Plant taxonomy
Botanical research institutes
Research institutes in Lucknow
Tourist attractions in Lucknow
Year of establishment missing
1953 establishments in Uttar Pradesh
Research institutes established in 1953 | National Botanical Research Institute | [
"Biology"
] | 354 | [
"Plant taxonomy",
"Plants"
] |
7,030,928 | https://en.wikipedia.org/wiki/Reflector%20%28cellular%20automaton%29 | In cellular automata such as Conway's Game of Life, a reflector is a pattern that can interact with a spaceship to change its direction of motion, without damage to the reflector pattern. In Life, many oscillators can reflect the glider; there also exist stable reflectors composed of still life patterns that, when they interact with a glider, reflect the glider and return to their stable state.
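An actual stable reflector is far too large to reproduce here, but the Life rule it operates under, and the glider it redirects, fit in a few lines. A minimal sketch (toroidal wrap-around chosen for simplicity; requires numpy):

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One generation of Conway's Game of Life on a wrap-around grid."""
    # Count the eight neighbours of every cell by summing shifted copies.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A glider: the spaceship that most reflectors are built to redirect.
grid = np.zeros((12, 12), dtype=int)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = 1

for _ in range(4):  # after 4 generations the glider reappears shifted by (1, 1)
    grid = life_step(grid)
print(grid.nonzero())
```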
External links
New stable 180-degree glider reflector, Game of Life News, May 30, 2009
New stable 90-degree glider reflector, Game of Life News, May 29, 2013
Cellular automaton patterns | Reflector (cellular automaton) | [
"Technology"
] | 128 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
7,030,987 | https://en.wikipedia.org/wiki/Schreier%20conjecture | In finite group theory, the Schreier conjecture asserts that the outer automorphism group of every finite simple group is solvable. It was proposed by Otto Schreier in 1926, and is now known to be true as a result of the classification of finite simple groups, but no simpler proof is known.
References
Theorems about finite groups
Conjectures that have been proved | Schreier conjecture | [
"Mathematics"
] | 77 | [
"Mathematical theorems",
"Mathematical problems",
"Conjectures that have been proved"
] |
7,031,578 | https://en.wikipedia.org/wiki/Netocracy | Netocracy is a term coined by the editorial board of the American technology magazine Wired in the early 1990s. A portmanteau of Internet and aristocracy, netocracy refers to a perceived global upper class that bases its power on a technological advantage and networking skills, in contrast to a bourgeoisie portrayed as being of gradually diminishing importance.
The concept was later picked up and redefined by Alexander Bard and Jan Söderqvist for their book Netocracy — The New Power Elite and Life After Capitalism (originally published in Swedish in 2000 as Nätokraterna : boken om det elektroniska klassamhället, published in English by Reuters/Pearsall UK in 2002).
The netocracy concept has been compared with Richard Florida's concept of the creative class. Bard and Söderqvist have also defined an underclass in opposition to the netocracy, which they refer to as the consumtariat.
The consumtariat
Alexander Bard describes a new underclass called the consumtariat, a portmanteau of consumer and proletariat, whose main activity is consumption, regulated from above. It is kept occupied with private problems, its desires provoked by advertising, and its active participation limited to things like product choice, product customization, engaging with interactive products and lifestyle choice.
Cyberdeutocracy
Similar to netocracy is the concept of cyberdeutocracy. Karl W. Deutsch, in his book The Nerves of Government: Models of Political Communication and Control, hypothesized about "information elites, controlling means of mass communication and, accordingly, power institutions, the functioning of which is based on the use of information in their activities." Thus Deutsch introduced the concept of deutocracy, combining the words 'Deutsch' and 'autocracy' to form the new term. Cyberdeutocracy combines 'deutocracy' with the prefix 'cyber-' and is defined as a political regime based on the control by political and corporate elites of the information and communication infrastructure of the Internet space. As a tool of social control, cyberdeutocracy allows elites to engage in the following:
destruction and/or transformation of existing meanings, symbols, values, and ideas
generation of new meanings, symbols, values, and ideas
introduction of these transformed and new meanings, symbols, values, and ideas into the public consciousness to shape society's perception of political reality.
The term was coined by Phillip Freiberg in his 2018 paper "What are CyberSimulacra and Cyberdeutocracy?"
Other usages
Netocracy can also refer to "Internet-enabled democracy" where issue-based politics will supersede party-based politics. In this sense, the word netocracy is also used as a portmanteau of Internet and democracy, not of Internet and aristocracy:
"In Seattle, organized labor ran interference for the ragtag groups assembled behind it, marshaling several thousand union members who feared that free trade might send their jobs abroad. In Washington, labor focused on lobbying Congress over the China-trade issue, leaving the IMF and the World Bank to the ad hoc Netocracy."
"From his bungalow in Berkeley, he's spreading the word of grassroots netocracy to the Beltway. He formed an Internet political consulting firm with Jerome ..."
See also
References
Further reading
Slavoj Žižek, Organs without Bodies
Gareth Morgan (1992), Images of Organization
McKenzie Wark, A Hacker Manifesto
External links
The Netocracy and the Consumtariat. Speech by Alexander Bard
Interview with the authors of Netocracy, Parts 1, 2, 3, 4, 5
Internet culture
Oligarchy
Information society
Information revolution
Power (social and political) concepts | Netocracy | [
"Technology"
] | 796 | [
"Computing and society",
"Information society"
] |
7,031,816 | https://en.wikipedia.org/wiki/Rokhlin%27s%20theorem | In 4-dimensional topology, a branch of mathematics, Rokhlin's theorem states that if a smooth, orientable, closed 4-manifold M has a spin structure (or, equivalently, the second Stiefel–Whitney class w₂(M) vanishes), then the signature of its intersection form, a quadratic form on the second cohomology group H²(M; Z), is divisible by 16. The theorem is named for Vladimir Rokhlin, who proved it in 1952.
Examples
The intersection form on M is unimodular on H²(M; Z) by Poincaré duality, and the vanishing of w₂(M) implies that the intersection form is even. By a theorem of Cahit Arf, any even unimodular lattice has signature divisible by 8, so Rokhlin's theorem forces one extra factor of 2 to divide the signature.
A K3 surface is compact, 4-dimensional, its class w₂ vanishes, and its signature is −16, so 16 is the best possible number in Rokhlin's theorem.
A complex surface in CP³ of degree d is spin if and only if d is even. It has signature (4 − d²)d/3, which can be seen from Friedrich Hirzebruch's signature theorem. The case d = 4 gives back the last example of a K3 surface.
Michael Freedman's E8 manifold is a simply connected compact topological manifold with vanishing w₂ and intersection form E₈ of signature 8. Rokhlin's theorem implies that this manifold has no smooth structure. This manifold shows that Rokhlin's theorem fails for the set of merely topological (rather than smooth) manifolds.
If the manifold M is simply connected (or more generally if the first homology group has no 2-torsion), then the vanishing of w₂(M) is equivalent to the intersection form being even. This is not true in general: an Enriques surface is a compact smooth 4-manifold and has even intersection form II₁,₉ of signature −8 (not divisible by 16), but the class w₂ does not vanish and is represented by a torsion element in the second cohomology group.
Proofs
Rokhlin's theorem can be deduced from the fact that the third stable homotopy group of spheres is cyclic of order 24; this is Rokhlin's original approach.
It can also be deduced from the Atiyah–Singer index theorem; see the discussion of the Â genus and Rokhlin's theorem under Generalizations below.
Robion Kirby (1989) gives a geometric proof.
The Rokhlin invariant
Since Rokhlin's theorem states that the signature of a smooth closed spin 4-manifold is divisible by 16, the definition of the Rokhlin invariant is deduced as follows:
For a 3-manifold N and a spin structure s on N, the Rokhlin invariant μ(N, s) in Z/16Z is defined to be the signature, taken mod 16, of any smooth compact spin 4-manifold with spin boundary (N, s).
If N is a spin 3-manifold then it bounds a spin 4-manifold M. The signature of M is divisible by 8, and an easy application of Rokhlin's theorem shows that its value mod 16 depends only on N and not on the choice of M. Homology 3-spheres have a unique spin structure, so we can define the Rokhlin invariant of a homology 3-sphere to be the element sign(M)/8 of Z/2Z, where M is any spin 4-manifold bounding the homology sphere.
For example, the Poincaré homology sphere bounds a spin 4-manifold with intersection form E₈, so its Rokhlin invariant is 1. This result has some elementary consequences: the Poincaré homology sphere does not admit a smooth embedding in S⁴, nor does it bound a Mazur manifold.
More generally, if N is a spin 3-manifold (for example, any homology sphere), then the signature of any spin 4-manifold M with boundary N is well defined mod 16, and is called the Rokhlin invariant of N. On a topological 3-manifold N, the generalized Rokhlin invariant refers to the function whose domain is the spin structures on N, and which evaluates to the Rokhlin invariant of the pair (N, s), where s is a spin structure on N.
The Rokhlin invariant of M is equal to half the Casson invariant mod 2. The Casson invariant is viewed as the Z-valued lift of the Rokhlin invariant of an integral homology 3-sphere.
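In symbols, the definitions above can be summarized as follows (a sketch using standard conventions; the notation μ is an assumption, not fixed by the text):

```latex
% Rokhlin invariant of a spin 3-manifold (N, s):
\mu(N, s) \equiv \operatorname{sign}(M) \pmod{16},
\qquad \partial M = (N, s),\ M \text{ smooth, compact, spin}.
% For a homology 3-sphere \Sigma (unique spin structure):
\mu(\Sigma) \equiv \tfrac{1}{8}\operatorname{sign}(M) \pmod{2}.
% Example: the Poincaré sphere bounds a spin 4-manifold with
% intersection form E_8, \operatorname{sign} = 8, so \mu = 1.
```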
Generalizations
The Kervaire–Milnor theorem states that if Σ is a characteristic sphere in a smooth compact 4-manifold M, then
sign(M) = Σ·Σ mod 16.
A characteristic sphere is an embedded 2-sphere whose homology class represents the Stiefel–Whitney class w₂(M). If w₂(M) vanishes, we can take Σ to be any small sphere, which has self-intersection number 0, so Rokhlin's theorem follows.
The Freedman–Kirby theorem states that if Σ is a characteristic surface in a smooth compact 4-manifold M, then
sign(M) = Σ·Σ + 8·Arf(M, Σ) mod 16,
where Arf(M, Σ) is the Arf invariant of a certain quadratic form on H₁(Σ; Z/2Z). This Arf invariant is obviously 0 if Σ is a sphere, so the Kervaire–Milnor theorem is a special case.
A generalization of the Freedman–Kirby theorem to topological (rather than smooth) manifolds states that
sign(M) = Σ·Σ + 8·Arf(M, Σ) + 8·ks(M) mod 16,
where ks(M) is the Kirby–Siebenmann invariant of M. The Kirby–Siebenmann invariant of M is 0 if M is smooth.
Armand Borel and Friedrich Hirzebruch proved the following theorem: If X is a smooth compact spin manifold of dimension divisible by 4, then the Â genus is an integer, and is even if the dimension of X is 4 mod 8. This can be deduced from the Atiyah–Singer index theorem: Michael Atiyah and Isadore Singer showed that the Â genus is the index of the Atiyah–Singer operator, which is always integral, and is even in dimensions 4 mod 8. For a 4-dimensional manifold, the Hirzebruch signature theorem shows that the signature is −8 times the Â genus, so in dimension 4 this implies Rokhlin's theorem.
Ochanine (1981) proved that if X is a compact oriented smooth spin manifold of dimension 4 mod 8, then its signature is divisible by 16.
References
Kirby, Robion (1989), The Topology of 4-Manifolds, Lecture Notes in Mathematics 1374, Springer (especially page 280)
Rokhlin, Vladimir A., New results in the theory of four-dimensional manifolds, Doklady Acad. Nauk. SSSR (N.S.) 84 (1952) 221–224.
Geometric topology
4-manifolds
Differential structures
Surgery theory
Theorems in topology | Rokhlin's theorem | [
"Mathematics"
] | 1,313 | [
"Geometric topology",
"Theorems in topology",
"Topology",
"Mathematical problems",
"Mathematical theorems"
] |
7,032,835 | https://en.wikipedia.org/wiki/Knudsen%20diffusion | Knudsen diffusion, named after Martin Knudsen, is a means of diffusion that occurs when the scale length of a system is comparable to or smaller than the mean free path of the particles involved. An example of this is in a long pore with a narrow diameter (2–50 nm) because molecules frequently collide with the pore wall. As another example, consider the diffusion of gas molecules through very small capillary pores. If the pore diameter is smaller than the mean free path of the diffusing gas molecules, and the density of the gas is low, the gas molecules collide with the pore walls more frequently than with each other, leading to Knudsen diffusion.
In fluid mechanics, the Knudsen number is a good measure of the relative importance of Knudsen diffusion. A Knudsen number much greater than one indicates Knudsen diffusion is important. In practice, Knudsen diffusion applies only to gases because the mean free path for molecules in the liquid state is very small, typically near the diameter of the molecule itself.
Mathematical description
The diffusivity for Knudsen diffusion is obtained from the self-diffusion coefficient derived from the kinetic theory of gases: D = (λ/3)·√(8RT/(πM_A)), where λ is the mean free path.
For Knudsen diffusion, the path length λ is replaced with the pore diameter d, as species A is now more likely to collide with the pore wall than with another molecule. The Knudsen diffusivity for diffusing species A is thus D_KA = (d/3)·√(8RT/(πM_A)),
where R is the gas constant (8.3144 J/(mol·K) in SI units), the molar mass M_A is expressed in units of kg/mol, and the temperature T is in kelvins. Knudsen diffusivity thus depends on the pore diameter, the molar mass of the species, and the temperature. Expressed as a molecular flux, Knudsen diffusion follows Fick's first law of diffusion: J_K = −D_KA·(dC_A/dx).
Here, J_K is the molecular flux in mol/(m²·s) and C_A is the molar concentration in mol/m³. The diffusive flux is driven by a concentration gradient, which in most cases is embodied as a pressure gradient (i.e. C_A = p/(RT), therefore J_K = D_KA·Δp/(RT·l), where Δp is the pressure difference between both sides of the pore and l is the length of the pore).
If we assume that Δp is much less than P_ave, the average absolute pressure in the system (i.e. Δp ≪ P_ave), then we can express the Knudsen flux as a volumetric flow rate Q (in m³/s) proportional to Δp. If the pore is relatively short, entrance effects can significantly reduce the net flux through the pore. In this case, the law of effusion can be used to calculate the excess resistance due to entrance effects rather easily by substituting an effective length l_e in for l. Generally, the Knudsen process is significant only at low pressure and small pore diameter. However, there may be instances where both Knudsen diffusion and molecular diffusion D_AB are important. The effective diffusivity of species A in a binary mixture of A and B, D_Ae, is determined by
1/D_Ae = (1 − α·y_A)/D_AB + 1/D_KA,
where α = 1 + N_B/N_A, y_A is the mole fraction of A, and N_i is the flux of component i.
For cases where α = 0 (N_A = −N_B, i.e. countercurrent equimolar diffusion) or where y_A is close to zero, the equation reduces to 1/D_Ae = 1/D_AB + 1/D_KA (the Bosanquet relation).
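As a numerical illustration of the formulas above, here is a short sketch (the gas, temperature, pore diameter, and bulk diffusivity are illustrative choices, not values from the text); the second function implements the α = 0 Bosanquet limit:

```python
from math import pi, sqrt

R = 8.3144  # gas constant, J/(mol·K)

def knudsen_diffusivity(d_pore, T, M):
    """Knudsen diffusivity D_KA = (d/3) * sqrt(8*R*T / (pi*M)), SI units."""
    return (d_pore / 3.0) * sqrt(8.0 * R * T / (pi * M))

def effective_diffusivity(D_K, D_AB):
    """Bosanquet relation: 1/D_Ae = 1/D_KA + 1/D_AB (alpha = 0 limit)."""
    return 1.0 / (1.0 / D_K + 1.0 / D_AB)

# Example: N2 (M = 0.028 kg/mol) at 300 K in a 20 nm pore,
# with an assumed bulk molecular diffusivity of ~2e-5 m^2/s.
D_K = knudsen_diffusivity(20e-9, 300.0, 0.028)
print(f"D_K   = {D_K:.2e} m^2/s")                          # ~3.2e-6 m^2/s
print(f"D_eff = {effective_diffusivity(D_K, 2e-5):.2e}")   # ~2.7e-6 m^2/s
```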
Knudsen self diffusion
In the Knudsen diffusion regime, the molecules do not interact with one another, so that they move in straight lines between points on the pore channel surface. Self-diffusivity is a measure of the translational mobility of individual molecules. Under conditions of thermodynamic equilibrium, a molecule is tagged and its trajectory followed over a long time. If the motion is diffusive, and in a medium without long-range correlations, the squared displacement of the molecule from its original position will eventually grow linearly with time (Einstein's equation). To reduce statistical errors in simulations, the self-diffusivity, D_S, of a species is defined from ensemble averaging Einstein's equation over a large enough number of molecules N.
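In formulas, the Einstein relation referred to above takes its standard three-dimensional form (omitted in the extracted text):

```latex
% Mean-squared displacement grows linearly in time for diffusive motion:
\left\langle \left|\mathbf{r}(t) - \mathbf{r}(0)\right|^{2} \right\rangle
  \sim 6 D_{S}\, t \quad (t \to \infty),
\qquad
D_{S} = \lim_{t\to\infty} \frac{1}{6Nt}
  \sum_{i=1}^{N} \left\langle \left|\mathbf{r}_{i}(t) - \mathbf{r}_{i}(0)\right|^{2} \right\rangle .
```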
See also
Knudsen flow
Knudsen equation
Atomic diffusion
Mass diffusivity
References
External links
Knudsen number and diffusivity calculators
Diffusion | Knudsen diffusion | [
"Physics",
"Chemistry"
] | 857 | [
"Transport phenomena",
"Physical phenomena",
"Diffusion"
] |
7,033,091 | https://en.wikipedia.org/wiki/Dodecahedral%20conjecture | The dodecahedral conjecture in geometry is intimately related to sphere packing.
László Fejes Tóth, a 20th-century Hungarian geometer, considered the Voronoi decomposition of any given packing of unit spheres. He conjectured in 1943 that the minimal volume of any cell in the resulting Voronoi decomposition was at least as large as the volume of a regular dodecahedron circumscribed about a unit sphere.
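The bound is easy to evaluate numerically. The sketch below (illustrative only, not part of the proof) computes the volume of a regular dodecahedron with inradius 1 from the standard edge-length formulas, and the corresponding local density bound:

```python
from math import pi, sqrt

# Regular dodecahedron with edge length a:
#   inradius  r = (a/2) * sqrt((25 + 11*sqrt(5)) / 10)
#   volume    V = ((15 + 7*sqrt(5)) / 4) * a**3
a = 2.0 / sqrt((25 + 11 * sqrt(5)) / 10)   # edge length giving inradius 1
V = (15 + 7 * sqrt(5)) / 4 * a**3
print(f"V = {V:.4f}")                                    # ~5.5503
print(f"local density bound = {(4 * pi / 3) / V:.4f}")   # ~0.7547
```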
Thomas Callister Hales and Sean McLaughlin proved the conjecture in 1998, following the same strategy that led Hales to his proof of the Kepler conjecture. The proofs rely on extensive computations. McLaughlin was awarded the 1999 Morgan Prize for his contribution to this proof.
References
Theorems in geometry
Conjectures that have been proved | Dodecahedral conjecture | [
"Mathematics"
] | 153 | [
"Conjectures that have been proved",
"Geometry",
"Geometry stubs",
"Mathematical problems",
"Theorems in geometry",
"Mathematical theorems"
] |
7,034,672 | https://en.wikipedia.org/wiki/Weapon%20storage%20area | Weapon storage areas (WSA), also known as special ammunition storage (SAS), were extremely well guarded and well defended locations where NATO nuclear weapons were stored during the Cold War era.
In most situations, the WSA or SAS areas were located inside the perimeter of an army barracks or an air base in NATO territory, but in a few cases they were located deep inside wooded areas and miles away from a military base.
Due to changes in the political landscape, the number of special weapons in Europe has been drastically reduced. Moreover, the introduction of the WS3 Weapon Storage and Security System has made WSAs obsolete.
At present, few WSAs are still operational, as modern-day special weapons are stored in the floors of concrete aircraft shelters and placed under 24/7 electronic surveillance.
Examples
Bossier Base
Killeen Base
Lake Mead Base/Nellis Area 2
Manzano Base
Medina Annex prior to NSA/CSS Texas Cryptologic Center being located there
Naval Submarine Base Kings Bay
Naval Base Kitsap, former Naval Submarine Base Bangor
See also
List of established military terms
War reserve stock
Supply depot
Nuclear warfare
Military logistics
Military terminology | Weapon storage area | [
"Chemistry"
] | 226 | [
"Radioactivity",
"Nuclear warfare"
] |
7,035,609 | https://en.wikipedia.org/wiki/SYBYL%20line%20notation | The SYBYL line notation or SLN is a specification for unambiguously describing the structure of chemical molecules using short ASCII strings. SLN differs from SMILES in several significant ways. SLN can specify molecules, molecular queries, and reactions in a single line notation whereas SMILES handles these through language extensions. SLN has support for relative stereochemistry, it can distinguish mixtures of enantiomers from pure molecules with pure but unresolved stereochemistry. In SMILES aromaticity is considered to be a property of both atoms and bonds whereas in SLN it is a property of bonds.
Description
Like SMILES, SLN is a linear language that describes molecules. The two therefore share many features despite SLN's many differences from SMILES, and as a result, this description frequently compares SLN to SMILES and its extensions.
Attributes
Attributes, bracketed strings with additional data like [key1=value1, key2...], are a core feature of SLN. Attributes can be applied to atoms and bonds. Attributes not defined officially are available to users for private extensions.
When searching for molecules, comparison operators such as fcharge>-0.125 can be used in place of the usual equal sign. A ! preceding a key/value group inverts the result of the comparison.
Entire molecules or reactions can also have attributes; for molecule- and reaction-level attributes, the square brackets are changed to a pair of <> signs.
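To make the attribute syntax concrete, here is a minimal sketch (not an official SLN parser) that splits an attribute list such as [I=14, charge=-1, spin=d] into key/value pairs, handling the comparison operators and the ! negation described above:

```python
import re

# One attribute: optional '!', a key, an optional comparison op, a value.
ATTR = re.compile(r"(!?)(\w+)\s*(=|>=|<=|>|<)?\s*([^,\]]*)")

def parse_attributes(s):
    """Parse an SLN-style attribute list like '[I=14, charge=-1]'."""
    assert s.startswith("[") and s.endswith("]"), "expected [...]"
    attrs = []
    for chunk in s[1:-1].split(","):
        chunk = chunk.strip()
        if not chunk:
            continue
        negated, key, op, value = ATTR.match(chunk).groups()
        attrs.append({"key": key, "op": op or "=",
                      "value": value.strip() or None,
                      "negated": bool(negated)})
    return attrs

print(parse_attributes("[I=14, charge=-1, spin=d]"))
print(parse_attributes("[fcharge>-0.125, !I=0]"))
```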
Atoms
Anything that starts with an uppercase letter identifies an atom in SLN. Hydrogens are not automatically added, but the single bonds with hydrogen can be abbreviated for organic compounds, resulting in CH4 instead of C(H)(H)(H)H for methane. The author argues that explicit hydrogens allow for more robust parsing.
Attributes defined for atoms include I= for isotope mass number, charge= for formal charge, fcharge for partial charge, s= for stereochemistry, and spin= for radicals (s, d, t respectively for singlet, doublet, triplet). A formal charge of charge=2 can be abbreviated as +2, and vice versa for negative charges; - and + is additionally recognized as −1 or +1 charges. * is a shorthand for spin=d. Stereochemistry on atoms is mostly tetrahedral, with the R/S and D/L available among others; it can be explicit (E) or relative (R), or specify a mixture (M) of stereoisomers at this atom. A normal/inverted (N/I) notation, equivalent to @@ and @ in SMILES, is provided. A lot of additional attributes are provided for searching.
In addition to elemental atoms SLN supports the specification of wild card atoms: Any (match any atom), and Hev (match any heavy atom). It also has an extensive Markush syntax for specifying combinatorial libraries and RGROUP queries. SLN has several query atom types for matching groups of atoms. Each type has the group name, followed by an optional positive integer.
R – Used to match a side chain. Matched atoms must not have any connection to the core.
X – Used to match side chains and rings. Atoms matching an X group can match side chains and rings.
Rx – Matches side chains and rings; a ring closure must match a second Rx group.
The "0" mass number denotes the usual isotope, so N[I=0] equals N[I=14] matching 14N and N[!I=0] matching every other isotope.
Bonds
SLN uses largely the same bonding notation as SMILES, with -, =, #, and : for single, double, triple, and aromatic bonds. A dot (.) is used for zero-order bonds, similarly to reaction SMILES, although a + is preferred for separating distinct molecules.
Most single bonds are implicit, so CH3CH3 can be used instead of CH3-CH3 for ethane. Explicit single bonds are useful for three-center bonds.
The s= attribute is defined for double bonds, to convey stereochemistry information in E–Z (E/Z) or cis–trans (c/t) notation. N/I is also available; it refers to whether the continuations of the "main" chain are trans or cis to each other across the bond.
Rings
SLN writes rings in a more explicit pattern than SMILES, with benzene specified as C[1]H:CH:CH:CH:CH:CH:@1. An atom is tagged as an anchor on the ring with a single numeric attribute, and @1 can then be used to specify this (in our case, "number one") atom for bonding back to.
Branching
SLN branches are identical to SMILES branches, with parentheses specifying them. Propionic acid is CH3CH2C(=O)OH.
Reactions
SLN supports reactions with -> connecting the reactants and the products. Atom mapping is possible with the use of [#num] attributes. The reaction center (rc) attribute can be added to bonds, and the chiral conversion (cc) attribute to atoms.
Misc.
Multiple lines can be merged into a syntactical line by writing a \ (backslash) at the end of each line. This allows for breaking a long line into multiple lines, for example in a reaction with each molecule on its own line.
See also
Simplified molecular input line entry specification (SMILES notation)
Smiles arbitrary target specification (SMARTS notation)
References
Chemical nomenclature
Encodings
Chemical file formats | SYBYL line notation | [
"Chemistry"
] | 1,186 | [
"Chemistry software",
"nan",
"Chemical file formats"
] |
7,035,802 | https://en.wikipedia.org/wiki/Astroparticle%20physics | Astroparticle physics, also called particle astrophysics, is a branch of particle physics that studies elementary particles of astrophysical origin and their relation to astrophysics and cosmology. It is a relatively new field of research emerging at the intersection of particle physics, astronomy, astrophysics, detector physics, relativity, solid state physics, and cosmology. Partly motivated by the discovery of neutrino oscillation, the field has undergone rapid development, both theoretically and experimentally, since the early 2000s.
History
The field of astroparticle physics evolved out of optical astronomy. With the growth of detector technology came the more mature field of astrophysics, which involved multiple physics subtopics, such as mechanics, electrodynamics, thermodynamics, plasma physics, nuclear physics, relativity, and particle physics. Particle physicists found astrophysics necessary due to the difficulty of producing particles with energy comparable to that of particles found in space. For example, the cosmic ray spectrum contains particles with energies as high as 10²⁰ eV, whereas a proton–proton collision at the Large Hadron Collider occurs at an energy of ~10¹² eV.
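The gap is even larger than these beam energies suggest once one compares like with like: a cosmic ray strikes an essentially stationary nucleus, so the relevant figure is the fixed-target center-of-mass energy. A back-of-the-envelope sketch, using the approximation √s ≈ √(2·E·m_p·c²), valid for E ≫ m_p·c²:

```python
from math import sqrt

m_p = 0.938e9     # proton rest energy, eV
E_cosmic = 1e20   # highest-energy cosmic rays, eV (per the text)
E_lhc = 1e12      # LHC collision energy scale quoted above, eV

# Fixed-target center-of-mass energy for E >> m_p c^2:
sqrt_s = sqrt(2 * E_cosmic * m_p)
print(f"cosmic-ray sqrt(s) = {sqrt_s / 1e12:.0f} TeV")       # ~433 TeV
print(f"ratio to ~1 TeV LHC scale = {sqrt_s / E_lhc:.0f}x")  # ~433x
```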
The field can be said to have begun in 1910, when a German physicist named Theodor Wulf measured the ionization in the air, an indicator of gamma radiation, at the bottom and top of the Eiffel Tower. He found that there was far more ionization at the top than what was expected if only terrestrial sources were attributed for this radiation.
The Austrian physicist Victor Francis Hess hypothesized that some of the ionization was caused by radiation from the sky. In order to defend this hypothesis, Hess designed instruments capable of operating at high altitudes and performed observations on ionization up to an altitude of 5.3 km. From 1911 to 1913, Hess made ten flights to meticulously measure ionization levels. Through prior calculations, he did not expect there to be any ionization above an altitude of 500 m if terrestrial sources were the sole cause of radiation. His measurements however, revealed that although the ionization levels initially decreased with altitude, they began to sharply rise at some point. At the peaks of his flights, he found that the ionization levels were much greater than at the surface. Hess was then able to conclude that "a radiation of very high penetrating power enters our atmosphere from above". Furthermore, one of Hess's flights was during a near-total eclipse of the Sun. Since he did not observe a dip in ionization levels, Hess reasoned that the source had to be further away in space. For this discovery, Hess was one of the people awarded the Nobel Prize in Physics in 1936. In 1925, Robert Millikan confirmed Hess's findings and subsequently coined the term 'cosmic rays'.
Many physicists knowledgeable about the origins of the field of astroparticle physics prefer to attribute this 'discovery' of cosmic rays by Hess as the starting point for the field.
Topics of research
While it may be difficult to decide on a standard 'textbook' description of the field of astroparticle physics, the field can be characterized by the topics of research that are actively being pursued. The journal Astroparticle Physics accepts papers that are focused on new developments in the following areas:
High-energy cosmic-ray physics and astrophysics;
Particle cosmology;
Particle astrophysics;
Related astrophysics: supernova, active galactic nuclei, cosmic abundances, dark matter etc.;
High-energy, VHE and UHE gamma-ray astronomy;
High- and low-energy neutrino astronomy;
Instrumentation and detector developments related to the above-mentioned fields.
Open questions
One main task for the future of the field is simply to thoroughly define itself beyond working definitions and clearly differentiate itself from astrophysics and other related topics.
Current unsolved problems for the field of astroparticle physics include the characterization of dark matter and dark energy. Observations of the orbital velocities of stars in the Milky Way and other galaxies, starting with Walter Baade and Fritz Zwicky in the 1930s, along with the observed velocities of galaxies in galactic clusters, found motions far exceeding what the energy density of the visible matter could account for. Since the early nineties some candidates have been found to partially explain some of the missing dark matter, but they are nowhere near sufficient to offer a full explanation. The finding of an accelerating universe suggests that a large part of the missing dark matter is stored as dark energy in a dynamical vacuum.
Another question for astroparticle physicists is why there is so much more matter than antimatter in the universe today. Baryogenesis is the term for the hypothetical processes that produced the unequal numbers of baryons and antibaryons in the early universe, which is why the universe is made of matter today, and not antimatter.
Experimental facilities
The rapid development of this field has led to the design of new types of infrastructure. In underground laboratories or with specially designed telescopes, antennas and satellite experiments, astroparticle physicists employ new detection methods to observe a wide range of cosmic particles including neutrinos, gamma rays and cosmic rays at the highest energies. They are also searching for dark matter and gravitational waves. Experimental particle physicists are limited by the technology of their terrestrial accelerators, which are only able to produce a small fraction of the energies found in nature.
The following is an incomplete list of laboratories and experiments in astroparticle physics.
Underground laboratories
These facilities are located deep underground, to shield very sensitive experiments from cosmic rays that would otherwise preclude the observation of very rare phenomena.
China Jinping Underground Laboratory is a deep underground laboratory in the Jinping Mountains of Sichuan, China.
Kamioka Observatory is a neutrino and gravitational waves laboratory located underground in the Mozumi Mine near the Kamioka section of the city of Hida in Gifu Prefecture, Japan. It is the site of the Super-Kamiokande experiment.
Laboratori Nazionali del Gran Sasso (LNGS) is a laboratory that hosts experiments requiring a low-background environment. Its experimental halls are located within the Gran Sasso mountain, near L'Aquila (Italy).
SNOLAB is located 2 km underground in an active mine, in Greater Sudbury (Canada). Expanded from the original Sudbury Neutrino Observatory, the entire underground laboratory is operated as a cleanroom, hosting experiments in neutrino physics and dark matter searches.
Sanford Underground Research Facility (SURF) located in Lead, South Dakota hosts multiple experiments and is funded in part by the United States Department of Energy.
Neutrino detectors
Very large neutrino detectors are required to record the extremely rare interactions of neutrinos with atomic matter.
IceCube (Antarctica), the largest particle detector in the world, was completed in December 2010. The purpose of the detector is to investigate high-energy neutrinos, search for dark matter, observe supernova explosions, and search for exotic particles such as magnetic monopoles.
ANTARES (Toulon, France). A Neutrino detector 2.5 km under the Mediterranean Sea off the coast of Toulon, France. Designed to locate and observe neutrino flux in the direction of the southern hemisphere.
NESTOR Project (Pylos, Greece). The target of the international collaboration is the deployment of a neutrino telescope on the sea floor off of Pylos, Greece.
BOREXINO, a real-time detector, installed at LNGS, designed to detect neutrinos from the Sun with an organic liquid scintillator target.
Dark matter detectors
Experiments are dedicated to the direct detection of dark matter interactions with the detector target material.
LZ experiment is a dark matter direct detection experiment hoping to observe weakly interacting massive particles (WIMPs) scatters on xenon nuclei. The experiment is located at the Sanford Underground Research Facility (SURF) in South Dakota, and is managed by the United States Department of Energy's Lawrence Berkeley National Lab.
XENONnT, the upgrade of XENON1T, is a dark matter direct search experiment located at LNGS and is expected to be sensitive to WIMPs with a spin-independent cross section of 10⁻⁴⁸ cm².
The Global Argon Dark Matter Collaboration operates a series of liquid argon experiments: DarkSide-50 at LNGS, DEAP-3600 at SNOLAB, and the upcoming DarkSide-20k detector at LNGS. These experiments look for WIMPs and heavier dark matter particle candidates.
The Cryogenic Dark Matter Search (CDMS) is a series of experiments searching for WIMPs interactions with semiconductor detectors at millikelvin temperatures.
The CERN Axion Solar Telescope (CERN, Switzerland) searches for axions originating from the Sun.
Cosmic ray observatories
Interested in high-energy cosmic ray detection are:
Pierre Auger Observatory (Malargüe, Argentina) detects and investigates high-energy cosmic rays using two techniques. One is to study the particles' interactions with water placed in surface detector tanks. The other technique is to track the development of air showers through observation of ultraviolet light emitted high in the Earth's atmosphere.
Telescope Array Project (Delta, Utah), an experiment for the detection of ultra high energy cosmic rays (UHECRs) using a ground array and fluorescence techniques in the desert of west Utah.
See also
Astroparticle Physics (journal)
Urca process
Unsolved problems in physics
References
External links
Aspera European network portal
www.astroparticle.org: all about astroparticle physics...
Aspera news
Virtual Institute of Astroparticle Physics
Helmholtz Alliance for Astroparticle Physics
Astro-Particle Physics at UCLA
Journal of Cosmology and Astroparticle Physics
Astroparticle Physics in the Netherlands
Astroparticle and High Energy Physics
ASD: Astroparticle Physics Laboratory at NASA
Teaching Astroparticle Physics
Particle physics | Astroparticle physics | [
"Physics"
] | 2,042 | [
"Astroparticle physics",
"Particle physics",
"Astrophysics"
] |
10,858,517 | https://en.wikipedia.org/wiki/Human%20Fertilisation%20and%20Embryology%20Act%201990 | The Human Fertilisation and Embryology Act 1990 (c. 37) is an Act of the Parliament of the United Kingdom. It created the Human Fertilisation and Embryology Authority which is in charge of human embryo research, along with monitoring and licensing fertility clinics in the United Kingdom.
The Authority is composed of a chairman, a deputy chairman, and however many members are appointed by the UK Secretary of State. They are in charge of reviewing information about human embryos and subsequent development, provision of treatment services, and activities governed by the Act of 1990. The Authority also offers information and advice to people seeking treatment, and to those who have donated gametes or embryos for purposes or activities covered in the Act of 1990. Some of the subjects under the Human Fertilisation and Embryology Act of 1990 are prohibitions in connection with gametes, embryos, and germ cells.
The Act also addresses licensing conditions, code of practice, and procedure of approval involving human embryos. This only concerns human embryos which have reached the two cell zygote stage, at which they are considered "fertilised" in the act. It also governs the keeping and using of human embryos, but only outside a woman's body. The act contains amendments to UK law regarding termination of pregnancy, surrogacy and parental rights.
Human Fertilisation and Embryology Authority and stem cell policy
The Human Fertilisation and Embryology Act 1990 regulates the creation of human embryos ex vivo and research involving such embryos. This act established the Human Fertilisation and Embryology Authority (HFEA) to regulate treatment and research in the UK involving human embryos. In 2001, an extension of the Act legalized embryo research for the purposes of "increasing knowledge about the development of embryos," "increasing knowledge about serious disease," and "enabling any such knowledge to be applied in developing treatments for serious disease." The HFEA grants licenses and research permission for up to three years, based on approval of five steps by the Research License Committee.
HFEA policies are reviewed by specialists in the field regularly. After research and literature are reviewed, and open public meetings are held, the summarized information is presented to the Human Fertilisation Embryology Authority.
Policy under review
Sperm and egg donation
Donors must meet certain criteria in order to be eligible for sperm, egg, or embryo donation. The donor can donate for research purposes or fertility treatment. Donors should find a HFEA licensed clinic, or can go through the National Gamete Donation Trust.
Multiple births as a result of IVF
The HFEA is carrying out a detailed review to determine the best way to reduce the risk of multiple pregnancies with in vitro fertilization (IVF). For example, Nadya Suleman (or "Octomom") is publicly known for giving birth to octuplets after IVF treatment.
Review of scientific methods to reduce mitochondrial disease
This policy allows for the use of techniques which alter the mitochondrial DNA of the egg or an embryo used in IVF, to prevent serious mitochondrial diseases from being inherited.
Past policy reviews
The policies reviewed by HFEA cover everything from human reproductive cloning to the creation of human-animal hybrids, and include subjects such as ethics with scientific and social significance.
Genetic testing
Sperm, eggs and embryos received in the donation process are currently tested for many medical conditions, and also quarantined for six months to reduce the risk of complications to the mother and child. Other than a screening for genetic disorders, donors are tested for HIV, hepatitis B, and hepatitis C.
Embryo research
Embryos must be donated by a woman between the ages of 18 and 35 years old, who has also undergone a medical screening and given informed consent (which can be revoked at any point up until the embryo is used).
Fees and regulation
£3,000 for extraction and initial freezing
£160 yearly for storage
£4,000-£8,000 total per treatment cycle
Risks of treatment
"Welfare of the Child" review (multiple pregnancy), for people seeking IVF treatment. While there is always a risk of having a multiples pregnancy after receiving IVF treatment, HFEA is reviewing policies which will reduce this dangerous possibility. No more than two eggs or embryos can be legally implanted in a woman in an IVF treatment. There is a 25% success rate of this procedure per treatment cycle.
Clinical safety
Includes safety procedure regulations at fertility clinics; includes safe cryopreservation of eggs and embryos. Eggs and embryos are stored for ten years after the initial treatment. If the patient decides not to pursue another pregnancy, the eggs and embryos can be donated for research or to another couple for fertility treatments.
Payment of donors
In donor-assisted conception, the donor may not receive any monetary compensation (in the UK), although they may have related expenses covered.
HFEA and liquid nitrogen gamete storage
Sperm, eggs and embryos are stored in liquid nitrogen using cryopreservation (the freezing of cells or whole tissues to sub-zero temperatures, down to the boiling point of liquid nitrogen).
This method preserves living organisms in a state where they can be restored to how they were before freezing.
A cryoprotective compound (a liquid called cryopreservation medium), along with carefully controlled cooling and warming cycles ensure that minimal damage is done to the cells.
However, the freezing process is still somewhat damaging. Therefore, men wishing to donate sperm or have it stored for future use must make six sperm deposits for every one child they wish to have, due to the 50% survival rate of the sperm in each deposit. The sperm is then put into straw-shaped vials and placed in a storage tank of either liquid nitrogen or liquid nitrogen vapor. The sub-zero temperatures of the liquid generally range from −150 °C to −196 °C.
According to HFEA, the storage period for both human gametes and embryos cannot exceed ten years. HFEA requires a full informed consent from each party that has any relation to the egg, gametes, or embryo, all of which must be stored in accordance with their consents.
Exceptions to the informed consent of gamete storage:
Consent is not required if the gametes were legally obtained from the person in question before they reached 18 years of age.
Consent is not required if the person in question is about to undergo a medical procedure which could impair their fertility. In this situation, a licensed medical practitioner can sign off for the storage of the gametes. A medical practitioner can also authorize storage of the gametes, if it is believed to be in the person's best interest.
Consent is not required if the person in question is under 16 years of age, and is therefore considered incompetent to give consent.
The act states that it is legal to "take" gametes or accept those provided, and store them without a person's consent, if the person is considered incapable, or until they "acquire such capacity."
However, under paragraphs 9 and 10 of HFEA 1990, a person's gametes cannot be legally stored in the UK after their death.
Earlier steps and legislation
In July 1982 the Warnock Committee Inquiry was established. It was "to consider recent and potential developments in medicine and science related to human fertilisation and embryology; to consider what policies and safeguards should be applied, including consideration of the social, ethical, and legal implications of these developments; and to make recommendations."
The Warnock Report was published on 18 July 1984. The report stated that a regulator was needed due to the 'special status' of embryos.
In 1985 the Interim Licensing Authority was created. It was supposed to regulate work and research regarding human in vitro fertilisation until permanent government legislation was passed. It remained the only authority until 1990.
The Unborn Children Protection Bill was also created in 1985. It was written by Enoch Powell and prohibited embryonic research. The Health Secretary would only have been allowed to permit an embryo to be kept and implanted if it was for the sole purpose of assisting a named woman to bear a child. No other reason was allowed. This bill was not passed. It was reintroduced in 1986, where it again failed to pass. This was repeated again in 1989.
The Surrogacy Arrangements Act 1985 was the first law that governed surrogacy arrangements. It criminalized commercial surrogacy arrangements.
In 1987 the framework for human fertilisation and embryology was created. A white paper was published regarding the recommendations of the Warnock Report.
In 1990 the Human Fertilisation and Embryology Act 1990 was passed. The Human Fertilisation and Embryology Authority, HFEA, officially started work on August 1, 1991.
HFEA coverage
The act covers several areas:
Any and all fertility treatment of humans involving the use of donated genetic material (eggs, sperm or embryos).
The storage of human eggs, sperm and embryos.
Research on early human embryos.
The creation of the Human Fertilisation and Embryology Authority, or HFEA, which regulates assisted reproduction in the UK.
Within the act an embryo is defined as a live human embryo where fertilisation is complete, complete is defined as the appearance of a two cell zygote.
Storage of human eggs, sperm, and embryos
The act states that eggs, sperm, and embryo can only be stored for a finite amount of time in very specific conditions that are regulated by the Human Fertilisation and Embryology Authority.
Human eggs and sperm can be stored for up to ten years.
Human embryos can be stored for maximum of five years.
Research on early human embryos
Research on human embryos can only be performed for specifically defined purposes that must be considered 'necessary and desirable' by the Human Fertilisation and Embryology Authority. Research can only be performed on an embryo for a maximum of fourteen days or until the primitive streak appears. The genetic composition of any cell within the embryo cannot be altered during the embryo's formation for research.
The act defined several purposes:
Innovations in infertility treatments
Increasing knowledge regarding miscarriages
Increasing knowledge of congenital disease
Developing more effectual methods of contraception
Generating methods for detecting and identifying gene or chromosomal irregularities in embryos before implantation.
Abortion provisions
Section 37 of the Act amends the Abortion Act 1967. The section specifies and broadens the conditions where abortion is legal.
Women who consider abortion are referred to two doctors. Each doctor then advises her whether abortion is a suitable decision based on the conditions listed below. An abortion is granted only when the doctors reach a unanimous decision that the woman may terminate her pregnancy. An abortion that is performed without this decision or under any other circumstances is considered unlawful.
Abortion may be granted under one of the following circumstances:
The registered medical practitioner that performs the abortion will continue to act in accordance with the Infant Life (Preservation) Act 1929.
Amendments
In 1991 the statutory storage period and special expeditions sections were revisited. Regulations extended storage periods for eggs and sperm. Licensing rules for egg and sperm storage were also clarified.
A Disclosure of Information Act was created in 1992. This allowed the Human Fertilisation and Embryology Authority to disclose information to others with the patient's consent; for example, information could be shared with their general practitioner.
The Criminal Justice and Public Order Act 1994 added section 156. This prohibited the treatment of cells from aborted embryos. During the same year the Parental Orders regulations allowed parental orders to be made in surrogacy cases.
In 1996 the permitted storage period for embryos was extended.
The Human Fertilisation and Embryology (Deceased Fathers) Act 2003 amended section 28 in 2000.
Sperm may be taken from a deceased male to fertilize an egg if the corresponding man and woman were:
married
living as man and wife
or had been receiving treatment together at a licensed clinic.
In 2001 the Human Fertilisation and Embryology Regulations were added. These regulations extended the purposes that an embryo can be created for in regards to research.
Better understanding of embryonic development
Further knowledge of serious disease
research involving the treatment of serious disease
In addition, the Human Reproductive Cloning Act 2001 was passed. This essentially made human reproductive cloning illegal by outlawing the implantation of research embryos.
As of 2004 the Disclosure of Donor Information Regulations were formed. Any sperm or egg donors registered after April 1, 2005, were required to consent to their name and last known address being passed on to their offspring. During this time Parliament began reviewing the Human Fertilisation and Embryology Act 1990.
Licensing of all establishments handling gametes for treatment was required as of 2007 in the Quality and Safety Regulations.
In 2006 a white paper was published regarding revised fertility legislation. This led to the Human Fertilisation and Embryology Act 2008 (HFE Act) being passed, a major review of fertility legislation updating and amending the act of 1990. The HFE Act came into force in 2009 and is the current law in the UK.
See also
Human Reproductive Cloning Act 2001
Human Fertilisation and Embryology (Deceased Fathers) Act 2003
Human Fertilisation and Embryology Act 2008
References
External links
United Kingdom abortion law
Acts of the Parliament of the United Kingdom concerning healthcare
Cloning
Medical genetics in the United Kingdom
United Kingdom Acts of Parliament 1990
Medical regulation in the United Kingdom
Surrogacy | Human Fertilisation and Embryology Act 1990 | [
"Engineering",
"Biology"
] | 2,760 | [
"Cloning",
"Genetic engineering"
] |
10,858,909 | https://en.wikipedia.org/wiki/Extensions%20of%20symmetric%20operators | In functional analysis, one is interested in extensions of symmetric operators acting on a Hilbert space. Of particular importance is the existence, and sometimes explicit constructions, of self-adjoint extensions. This problem arises, for example, when one needs to specify domains of self-adjointness for formal expressions of observables in quantum mechanics. Other applications of solutions to this problem can be seen in various moment problems.
This article discusses a few related problems of this type. The unifying theme is that each problem has an operator-theoretic characterization which gives a corresponding parametrization of solutions. More specifically, finding self-adjoint extensions, with various requirements, of symmetric operators is equivalent to finding unitary extensions of suitable partial isometries.
Symmetric operators
Let H be a Hilbert space. A linear operator A acting on H with dense domain dom(A) is symmetric if ⟨Ax, y⟩ = ⟨x, Ay⟩ for all x, y in dom(A).
If dom(A) = H, the Hellinger–Toeplitz theorem says that A is a bounded operator, in which case A is self-adjoint and the extension problem is trivial. In general, a symmetric operator is self-adjoint if the domain of its adjoint, dom(A*), lies in dom(A).
When dealing with unbounded operators, it is often desirable to be able to assume that the operator in question is closed. In the present context, it is a convenient fact that every densely defined, symmetric operator A is closable. That is, A has the smallest closed extension, called the closure of A. This can be shown by invoking the symmetric assumption and the Riesz representation theorem. Since A and its closure have the same closed extensions, it can always be assumed that the symmetric operator of interest is closed.
In the next section, a symmetric operator will be assumed to be densely defined and closed.
Self-adjoint extensions of symmetric operators
If an operator A on the Hilbert space H is symmetric, when does it have self-adjoint extensions? An operator that has a unique self-adjoint extension is said to be essentially self-adjoint; equivalently, an operator is essentially self-adjoint if its closure (the operator whose graph is the closure of the graph of A) is self-adjoint. In general, a symmetric operator could have many self-adjoint extensions or none at all. Thus, we would like a classification of its self-adjoint extensions.
The first basic criterion for essential self-adjointness is the following: a symmetric operator A is essentially self-adjoint if and only if the ranges of the operators A − i and A + i are dense in H.
Equivalently, A is essentially self-adjoint if and only if the operators A* − i and A* + i have trivial kernels. That is to say, A fails to be self-adjoint if and only if A* has an eigenvector with complex eigenvalue i or −i.
Another way of looking at the issue is provided by the Cayley transform of a self-adjoint operator and the deficiency indices.
The Cayley transform W(A) = (A − i)(A + i)⁻¹, defined on ran(A + i), is isometric on its domain. Moreover, ran(1 − W(A)) is dense in H.
Conversely, given any densely defined operator U which is isometric on its (not necessarily closed) domain and such that ran(1 − U) is dense, there is a (unique) densely defined symmetric operator
S(U) = i(1 + U)(1 − U)⁻¹ on ran(1 − U) such that W(S(U)) = U.
The mappings W and S are inverses of each other, i.e., S(W(A)) = A.
The mapping A ↦ W(A) is called the Cayley transform. It associates a partially defined isometry to any symmetric densely defined operator. Note that the mappings W and S are monotone: this means that if B is a symmetric operator that extends the densely defined symmetric operator A, then W(B) extends W(A), and similarly for S.
This immediately gives us a necessary and sufficient condition for A to have a self-adjoint extension, as follows: A has self-adjoint extensions if and only if its Cayley transform W(A) has unitary extensions to all of H.
A partially defined isometric operator on a Hilbert space H has a unique isometric extension to the norm closure of its domain. A partially defined isometric operator with closed domain is called a partial isometry.
Define the deficiency subspaces of A by K₊ = ran(A + i)⊥ and K₋ = ran(A − i)⊥.
In this language, the description of the self-adjoint extension problem given by the theorem can be restated as follows: a symmetric operator A has self-adjoint extensions if and only if the deficiency subspaces K₊ and K₋ have the same dimension.
The deficiency indices of a partial isometry V are defined as the dimensions of the orthogonal complements of the domain and range: n₊(V) = dim dom(V)⊥ and n₋(V) = dim ran(V)⊥.
We see that there is a bijection between symmetric extensions of an operator and isometric extensions of its Cayley transform. The symmetric extension is self-adjoint if and only if the corresponding isometric extension is unitary.
A symmetric operator has a unique self-adjoint extension if and only if both its deficiency indices are zero. Such an operator is said to be essentially self-adjoint. Symmetric operators which are not essentially self-adjoint may still have a canonical self-adjoint extension. Such is the case for non-negative symmetric operators (or more generally, operators which are bounded below). These operators always have a canonically defined Friedrichs extension and for these operators we can define a canonical functional calculus. Many operators that occur in analysis are bounded below (such as the negative of the Laplacian operator), so the issue of essential adjointness for these operators is less critical.
Suppose A is symmetric and densely defined. Then any symmetric extension of A is a restriction of A*. Indeed, A ⊆ B with B symmetric yields B ⊆ A* by applying the definition of the adjoint. This notion leads to the von Neumann formulae: dom(A*) = dom(Ā) ⊕ K₊ ⊕ K₋, where Ā denotes the closure of A and the sum is direct with respect to the graph inner product of A*.
Example
Consider the Hilbert space L²[0, 1]. On the subspace of absolutely continuous functions that vanish on the boundary, define the operator A by Af = i df/dx.
Integration by parts shows A is symmetric. Its adjoint A* is the same operator, with dom(A*) being the absolutely continuous functions with no boundary condition. We will see that extending A amounts to modifying the boundary conditions, thereby enlarging dom(A) and reducing dom(A*), until the two coincide.
Direct calculation shows that K₊ and K₋ are one-dimensional subspaces given by K₊ = span{c·eˣ} and K₋ = span{c·e⁻ˣ},
where c is a normalizing constant. The self-adjoint extensions of A are parametrized by the circle group T. For each unitary transformation U_θ : K₊ → K₋ defined by U_θ(eˣ) = e^{iθ}·e⁻ˣ,
there corresponds an extension A_θ whose domain consists of the absolutely continuous functions satisfying a matching condition between their boundary values.
If f ∈ dom(A_θ), then f is absolutely continuous and f(1) = γf(0) for the unimodular constant γ determined by θ.
Conversely, if f is absolutely continuous and f(1) = γf(0) for that same γ, then f lies in the above domain.
The self-adjoint operators are instances of the momentum operator in quantum mechanics.
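The deficiency-subspace computation behind this example can be written out explicitly (a sketch assuming the convention A = i d/dx; normalization constants omitted):

```latex
% Solve A^{*} f = \pm i f, i.e. i f' = \pm i f, on [0,1]:
f' = \pm f \;\Longrightarrow\; f(x) = c\, e^{\pm x},
\qquad K_{\pm} = \operatorname{span}\{ e^{\pm x} \}, \quad n_{+} = n_{-} = 1 .
% Each unitary K_{+} \to K_{-} is a phase, and the corresponding
% self-adjoint extension acts as i\,d/dx on the domain
\operatorname{dom}(A_{\gamma})
  = \{ f \text{ absolutely continuous} : f(1) = \gamma f(0) \},
\qquad |\gamma| = 1 .
```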
Self-adjoint extension on a larger space
Every partial isometry can be extended, on a possibly larger space, to a unitary operator. Consequently, every symmetric operator has a self-adjoint extension, on a possibly larger space.
Positive symmetric operators
A symmetric operator A is called positive if ⟨Ax, x⟩ ≥ 0 for all x in dom(A).
It is known that for every such A, the deficiency indices coincide: dim K₊ = dim K₋. Therefore, every positive symmetric operator has self-adjoint extensions. The more interesting question in this direction is whether A has positive self-adjoint extensions.
For two positive operators A and B, we put A ≤ B if A(A + 1)⁻¹ ≤ B(B + 1)⁻¹
in the sense of bounded operators.
Structure of 2 × 2 matrix contractions
While the extension problem for general symmetric operators is essentially that of extending partial isometries to unitaries, for positive symmetric operators the question becomes one of extending contractions: by "filling out" certain unknown entries of a 2 × 2 self-adjoint contraction, we obtain the positive self-adjoint extensions of a positive symmetric operator.
Before stating the relevant result, we first fix some terminology. For a contraction Γ acting on H, we define its defect operators by D_Γ = (1 − Γ*Γ)^(1/2) and D_Γ* = (1 − ΓΓ*)^(1/2).
The defect spaces of Γ are the closures of the ranges of the defect operators.
The defect operators indicate the non-unitarity of Γ, while the defect spaces ensure uniqueness in some parameterizations.
Using this machinery, one can explicitly describe the structure of general matrix contractions. We will only need the 2 × 2 case. Every 2 × 2 contraction can be uniquely expressed as an operator matrix whose entries are built from four contractions Γ₁, …, Γ₄ and their defect operators,
where each Γᵢ is a contraction.
Extensions of Positive symmetric operators
The Cayley transform for general symmetric operators can be adapted to this special case. For every non-negative number a, the value (a − 1)/(a + 1) lies in [−1, 1).
This suggests we assign to every positive symmetric operator A a contraction
C_A = (A − 1)(A + 1)⁻¹
defined on ran(A + 1),
which can be written as a column contraction with entries C₁ and C₂ with respect to the decomposition of H into the closure of ran(A + 1) and its orthogonal complement.
It is easily verified that the entry C₁, the projection of C_A onto its domain, is self-adjoint. The operator C_A can be written as a column contraction
with dom(C_A) = ran(A + 1). If C is a contraction that extends C_A and its projection onto its domain is self-adjoint, then it is clear that its inverse Cayley transform
A_C = (1 + C)(1 − C)⁻¹
defined on ran(1 − C) is a positive symmetric extension of A. The symmetric property follows from its projection onto its own domain being self-adjoint, and positivity follows from contractivity. The converse is also true: given a positive symmetric extension of A, its Cayley transform is a contraction satisfying the stated "partial" self-adjoint property.
The unitarity criterion of the Cayley transform is replaced by self-adjointness for positive operators.
Therefore, finding self-adjoint extension for a positive symmetric operator becomes a "matrix completion problem". Specifically, we need to embed the column contraction into a 2 × 2 self-adjoint contraction. This can always be done and the structure of such contractions gives a parametrization of all possible extensions.
By the preceding subsection, all self-adjoint contractive extensions of C_A take the form of a 2 × 2 self-adjoint contraction extending the column contraction.
So the positive self-adjoint extensions of A are in bijective correspondence with the self-adjoint contractions on the defect space. The extremal contractions −1 and 1 give rise to the smallest and largest positive extensions A_K and A_F respectively. These are the smallest and largest positive extensions of A in the sense that
A_K ≤ B ≤ A_F for any positive self-adjoint extension B of A. The operator A_F is the Friedrichs extension of A and A_K is the von Neumann–Krein extension of A.
Similar results can be obtained for accretive operators.
Notes
References
A. Alonso and B. Simon, The Birman-Krein-Vishik theory of self-adjoint extensions of semibounded operators. J. Operator Theory 4 (1980), 251-270.
Gr. Arsene and A. Gheondea, Completing matrix contractions, J. Operator Theory 7 (1982), 179-189.
N. Dunford and J.T. Schwartz, Linear Operators, Part II, Interscience, 1958.
Functional analysis
Operator theory
Linear operators | Extensions of symmetric operators | [
"Mathematics"
] | 2,009 | [
"Functions and mappings",
"Functional analysis",
"Mathematical objects",
"Linear operators",
"Mathematical relations"
] |
10,859,332 | https://en.wikipedia.org/wiki/NOAA-17 | NOAA-17, also known as NOAA-M before launch, was an operational, polar-orbiting weather satellite in the NOAA K-N series operated by the National Environmental Satellite Service (NESS) of the National Oceanic and Atmospheric Administration (NOAA). NOAA-17 also continued the series of Advanced TIROS-N (ATN) spacecraft begun with the launch of NOAA-8 (NOAA-E) in 1983, but with additional new and improved instrumentation over the NOAA A-L series and a new launch vehicle (Titan 23G).
Launch
NOAA-17 was launched by the Titan 23G launch vehicle on 24 June 2002 at 18:23:04 UTC from Vandenberg Air Force Base, at Vandenberg Space Launch Complex 4 (SLC-4W), into a Sun-synchronous orbit at 823 km above the Earth, orbiting every 101.20 minutes. NOAA-17 was in an afternoon equator-crossing orbit and replaced NOAA-15 as the prime afternoon spacecraft.
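The quoted altitude and period are mutually consistent, as a quick Kepler's-third-law check shows (an illustrative sketch; assumes a circular orbit and standard values for Earth's mean radius and gravitational parameter):

```python
from math import pi, sqrt

MU_EARTH = 3.986004418e14   # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6371e3            # m, mean Earth radius

a = R_EARTH + 823e3                 # semi-major axis for an 823 km circular orbit
T = 2 * pi * sqrt(a**3 / MU_EARTH)  # orbital period, seconds
print(f"T = {T / 60:.2f} min")      # ~101.2 min, matching the quoted value
```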
Spacecraft
The goal of the NOAA/NESS polar orbiting program is to provide output products used in meteorological prediction and warning, oceanographic and hydrologic services, and space environment monitoring. The polar orbiting system complements the NOAA/NESS geostationary meteorological satellite program (GOES). The NOAA-17 Advanced TIROS-N spacecraft is based on the Defense Meteorological Satellite Program (DMSP Block 5D) spacecraft and is a modified version of the ATN spacecraft (NOAA 6-11, 13-16) to accommodate the new instrumentation, supporting antennas and electrical subsystems. The spacecraft structure consists of four components: 1° the Reaction System Support (RSS); 2° the Equipment Support Module (ESM); 3° the Instrument Mounting Platform (IMP); and 4° the Solar Array (SA).
Instruments
All of the instruments are located on the ESM and the IMP. The spacecraft power is provided by a direct energy transfer system from the single solar array which consists of eight panels of solar cells. The in-orbit Attitude Determination and Control Subsystem (ADACS) provides three-axis pointing control by controlling torque in three mutually orthogonal momentum wheels with input from the Earth Sensor Assembly (ESA) for pitch, roll, and yaw updates. The ADACS controls the spacecraft attitude so that orientation of the three axes is maintained to within ± 0.2° and pitch, roll, and yaw to within 0.1°. The ADACS consists of the Earth Sensor Assembly (ESA), the Sun Sensor Assembly (SSA), four Reaction Wheel Assemblies (RWA), two roll/yaw coils (RYC), two pitch torquing coils (PTC), four gyros, and computer software for data processing. The ATN data handling subsystem, consists of the TIROS Information Processor (TIP) for low data rate instruments, the Manipulated Information Rate Processor (MIRP) for high data rate AVHRR, digital tape recorders (DTR), and a cross strap unit (XSU).
The NOAA-17 instrument complement consists of: 1° an improved six-channel Advanced Very High Resolution Radiometer/3 (AVHRR/3); 2° an improved High Resolution Infrared Radiation Sounder (HIRS/3); 3° the Search and Rescue Satellite Aided Tracking System (SARSAT), which consists of the Search and Rescue Repeater (SARR) and the Search and Rescue Processor (SARP-2); 4° the French/CNES-provided improved Argos Data Collection System (Argos DCS-2); 5° the Solar Backscatter Ultraviolet Spectral radiometer (SBUV/2); and 6° the Advanced Microwave Sounding Unit (AMSU), which consists of three separate modules, A1, A2, and B to replace the previous MSU and SSU instruments.
It hosts the Automatic Picture Transmission (APT) transmitter for the Advanced Microwave Sounding Unit (AMSU), Advanced Very High Resolution Radiometer (AVHRR) and High Resolution Infrared Radiation Sounder (HIRS) instruments. NOAA-17 has the same suite of instruments as carried by NOAA-16.
Advanced Very High Resolution Radiometer (AVHRR/3)
The AVHRR/3 on the Advanced TIROS-N (ATN) NOAA K-N series of polar orbiting meteorological satellites is an improved instrument over previous AVHRRs. The AVHRR/3 adds a sixth channel and is a cross-track scanning instrument providing imaging and radiometric data in the visible, near-IR and infrared of the same area on the Earth. Data from the visible and near-IR channels provide information on vegetation, clouds, snow, and ice. Data from the near-IR and thermal channels provide information on the land and ocean surface temperature and radiative properties of clouds. Only five channels can be transmitted simultaneously with channels 3A and 3B being switched for day/night operation. The instrument produces data in High Resolution Picture Transmission (HRPT) mode at 1.1 km resolution or in Automatic Picture Transmission (APT) mode at a reduced resolution of 4 km. The AVHRR/3 scans 55.4° per scan line on either side of the orbital track and scans 360 lines per minute. The six channels are: 1) channel 1, visible (0.58-0.68 μm); 2) channel 2, near-IR (0.725-1.0 μm); 3) channel 3A, near-IR (1.58-1.64 μm); 4) channel 3B, infrared (3.55-3.93 μm); 5) channel 4, infrared (10.3-11.3 μm); and 6) channel 5 (11.5-12.5 μm).
High Resolution Infrared Sounder (HIRS/3)
The improved HIRS/3 on the Advanced TIROS-N (ATN) NOAA K-N series of polar orbiting meteorological satellites is a 20-channel, step-scanned, visible and infrared spectrometer designed to provide atmospheric temperature and moisture profiles. The HIRS/3 instrument is basically identical to the HIRS/2 flown on previous spacecraft except for changes in six spectral bands to improve the sounding accuracy. The HIRS/3 is used to derive water vapor, ozone, and cloud liquid water content. The instrument scans 49.5° on either side of the orbital track with a ground resolution at nadir of 17.4 km. The instrument produces 56 IFOVs for each 1,125 km scan line at 42 km between IFOVs along-track. The instrument consists of 19 IR and 1 visible channel centered at 14.95, 14.71, 14.49, 14.22, 13.97, 13.64, 13.35, 11.11, 9.71, 12.45, 7.33, 6.52, 4.57, 4.52, 4.47, 4.45, 4.13, 4.0, 3.76, and 0.69 μm.
Advanced Microwave Sounding Unit (AMSU-A)
The AMSU was an instrument on the Advanced TIROS-N (ATN) NOAA K-N series of operational meteorological satellites. The AMSU consisted of two functionally independent units, AMSU-A and AMSU-B. The AMSU-A was a line-scan instrument designed to measure scene radiance in 15 channels, ranging from 23.8 to 89 GHz, to derive atmospheric temperature profiles from the Earth's surface to about 3 millibar pressure height. The instrument was a total power system having a field of view (FOV) of 3.3° at half-power points. The antenna provided cross track scan 50° on either side of the orbital track at nadir with a total of 30 IFOVs per scan line. The AMSU-A was calibrated on-board using a blackbody and space as references. The AMSU-A was physically divided into two separate modules which interfaced independently with the spacecraft. The AMSU-A1 contained all of the 5 mm oxygen channels (channels 3-14) and the 89 GHz channel. The AMSU-A2 module consisted of two low-frequency channels (channels 1 and 2). The 15 channels had a center frequency at: 23.8, 31.4, 50.3, 52.8, 53.6, 54.4, 54.94, 55.5, six at 57.29, and 89 GHz.
Advanced Microwave Sounding Unit (AMSU-B)
The AMSU is an instrument on the Advanced TIROS-N (ATN) NOAA K-N series of operational meteorological satellites. The AMSU consists of two functionally independent units, AMSU-A and AMSU-B. The AMSU-B is a line-scan instrument designed to measure scene radiance in five channels, ranging from 89 GHz to 183 GHz for the computation of atmospheric water vapor profiles. The AMSU-B is a total power system with a FOV of 1.1° at half-power points. The antenna provides a cross-track scan, scanning 50° on either side of the orbital track with 90 IFOVs per scan line. On-board calibration is accomplished with blackbody targets and space as references. The AMSU-B channels at the center frequency are: 90, 157, and 3 channels at 183.31 GHz.
Space Environment Monitor (SEM-2)
The SEM-2 on the Advanced TIROS-N (ATN) NOAA K-N series of polar orbiting meteorological satellites provides measurements to determine the population of the Earth's radiation belts and data on charged particle precipitation in the upper atmosphere as a result of solar activity. The SEM-2 consists of two separate sensors: the Total Energy Detector (TED) and the Medium Energy Proton/Electron Detector (MEPED). In addition, the SEM-2 includes a common Data Processing Unit (DPU). The TED uses eight programmed swept electrostatic curved-plate analyzers to select particle type and energy and Channeltron detectors to measure the intensity in the selected energy bands. The particle energies range from 50 eV to 20 keV. The MEPED detects protons, electrons, and ions with energies from 30 keV to several tens of MeV. The MEPED consists of four directional solid-state detector telescopes and four omnidirectional sensors. The DPU sorts and counts the events and the results are multiplexed and incorporated into the satellite telemetry system. Once received on the ground, the SEM-2 data is separated from the rest of the data and sent to the NOAA Space Environment Laboratory in Boulder, Colorado, for processing and dissemination.
Solar Backscatter Ultraviolet Radiometer (SBUV/2)
The SBUV/2 on the Advanced TIROS-N (ATN) NOAA K-N series of polar orbiting meteorological satellites is a dual monochromator ultraviolet grating spectrometer for stratospheric ozone measurements. The SBUV/2 is designed to measure scene radiance and solar spectral irradiance in the ultraviolet spectral range from 160 to 406 nm. Measurements are made in discrete mode or sweep mode. In discrete mode, measurements are made in 12 spectral bands from which the total ozone and vertical distribution of ozone are derived. In the sweep mode, a continuous spectral scan from 160 to 406 nm is made primarily for computation of ultraviolet solar spectral irradiance. The 12 spectral channels are (in nm): 252.0, 273.61, 283.1, 287.7, 292.29, 297.59, 301.97, 305.87, 312.57, 317.56, 331.26, and 339.89.
Search and Rescue Satellite Aided Tracking System (SARSAT)
The SARSAT on the Advanced TIROS-N NOAA K-N series of polar orbiting meteorological satellites is designed to detect and locate Emergency Locator Transmitters (ELTs) and Emergency Position-Indicating Radio Beacons (EPIRB). The SARSAT instrumentation consists of two elements: the Search and Rescue Repeater (SARR) and the Search and Rescue Processor (SARP-2). The SARR is a radiofrequency (RF) system that accepts signals from emergency ground transmitters at three very high frequency (VHF/UHF) ranges (121.5 MHz, 243 MHz and 406.05 MHz) and translates, multiplexes, and transmits these signals at L-band frequency (1.544 GHz) to local Search and Rescue stations (LUTs or Local User Terminals) on the ground. The location of the transmitter is determined by retrieving the Doppler information in the relayed signal at the LUT. The SARP-2 is a receiver and processor that accepts digital data from emergency ground transmitters at UHF and demodulates, processes, stores, and relays the data to the SARR where they are combined with the three SARR signals and transmitted via L-band frequency to local stations.
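The Doppler-based location principle can be illustrated with a back-of-the-envelope calculation. The sketch below is not taken from mission documentation; the radial velocity is an assumed round number for a low-orbiting satellite, chosen only to show the order of magnitude of the frequency shift a LUT must resolve.

```python
C = 299_792_458.0   # speed of light, m/s
F0 = 406.05e6       # 406.05 MHz beacon carrier frequency, Hz
V_RADIAL = 7_000.0  # assumed radial velocity of satellite toward beacon, m/s

# Non-relativistic Doppler approximation: received frequency is offset by f0 * v / c.
shift_hz = F0 * V_RADIAL / C
print(f"Maximum Doppler shift ≈ {shift_hz / 1e3:.1f} kHz")  # ≈ 9.5 kHz

# The shift passes through zero at the time of closest approach, and the slope
# of the frequency-versus-time curve constrains the beacon's distance from the
# satellite ground track.
```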
ARGOS Data Collection System (Argos DCS-2)
The Argos Data Collection System (DCS-2) on the Advanced TIROS-N (ATN) NOAA K-N series of polar orbiting meteorological satellites is a random-access system for the collection of meteorological data from in situ platforms (moveable and fixed). The Argos DCS-2 collects telemetry data using a one-way RF link from data collection platforms (such as buoys, free-floating balloons and remote weather stations) and processes the inputs for on-board storage and later transmission from the spacecraft. For free-floating platforms, the DCS-2 system determines the position to within 5 to 8 km RMS and velocity to an accuracy of 1.0 to 1.6 m/s RMS. The DCS-2 measures the in-coming signal frequency and time. The formatted data are stored on the satellite for transmission to NOAA stations. The DCS-2 data is stripped from the GAC data by NOAA/NESDIS and sent to the Argos center at CNES in France for processing, distribution to users, and archival.
Telecommunications
The TIP formats low bit rate instrument and telemetry data for the tape recorders and direct read-out. The MIRP processes high data rate AVHRR data for the tape recorders (GAC) and direct read-out (HRPT and LAC). On-board recorders can store 110 minutes of GAC, 10 minutes of HRPT and 250 minutes of TIP.
Mission
NOAA-17 was decommissioned on 10 April 2013.
Breakup and coincidence
On 18 March 2021, the 18th Space Control Squadron of the U.S. Space Force confirmed that NOAA-17 had disintegrated in orbit on 10 March 2021, and that 100 pieces of space debris were being tracked. There was no indication of a collision as the cause of the breakup. NOAA-16 had broken up in November 2015. Two satellites in the U.S. Defense Meteorological Satellite Program, DMSP F-13 (February 2015) and DMSP F-12 (October 2016), had also broken up due to battery problems, and DMSP F-11 (April 2004) exploded due to a propulsion system failure. The similarity of the debris distribution with F-13 and F-11 pointed to battery rupture from overcharging as the intermediate cause of the breakup. The investigation could not determine why the battery became overcharged despite proper decommissioning.
At the same time, the 18th Space Control Squadron also spotted the break up of the Chinese satellite Yunhai-1 02 on 18 March 2021. Later analysis found that the Yunhai breakup was caused by debris from a Ukrainian Zenit-2 upper stage launched in 1996.
References
External links
NOAA-17 Satellite Position
Orbital Links
Weather satellites of the United States
Spacecraft launched in 2002
Spacecraft that broke apart in space
Television Infrared Observation Satellites | NOAA-17 | [
"Technology"
] | 3,324 | [
"Space debris",
"Spacecraft that broke apart in space"
] |
10,859,405 | https://en.wikipedia.org/wiki/NOAA-16 | NOAA-16, also known as NOAA-L before launch, was an operational, polar-orbiting weather satellite in the NOAA K-N series operated by the National Environmental Satellite Service (NESS) of the National Oceanic and Atmospheric Administration (NOAA). NOAA-16 continued the series of Advanced TIROS-N (ATN) spacecraft that began with the launch of NOAA-8 (NOAA-E) in 1983, but it had additional new and improved instrumentation over the NOAA A-K series and a new launch vehicle (Titan 23G). It was launched on 21 September 2000 and, following an unknown anomaly, it was decommissioned on 9 June 2014. In November 2015 it broke up in orbit, creating more than 200 pieces of debris.
Launch
NOAA-16 was launched by the Titan 23G launch vehicle on 21 September 2000 at 10:22 UTC from Vandenberg Air Force Base, at Space Launch Complex 4 (SLC-4W), into a Sun-synchronous orbit at 843 km above the Earth, orbiting every 102.10 minutes. NOAA-16 was in an afternoon equator-crossing orbit and replaced NOAA-14 as the prime afternoon spacecraft.
Spacecraft
The goal of the NOAA/NESS polar orbiting program is to provide output products used in meteorological prediction and warning, oceanographic and hydrologic services, and space environment monitoring. The polar orbiting system complements the NOAA/NESS geostationary meteorological satellite program (GOES). The NOAA-16 Advanced TIROS-N spacecraft was based on the Defense Meteorological Satellite Program (DMSP Block 5D) spacecraft and was a modified version of the ATN spacecraft (NOAA 6-11, 13-15) to accommodate the new instrumentation, supporting antennas and electrical subsystems. The spacecraft structure consisted of four components: 1° the Reaction System Support (RSS); 2° the Equipment Support Module (ESM); 3° the Instrument Mounting Platform (IMP); and 4° the Solar Array (SA).
Instruments
All of the instruments were located on the ESM and the IMP. The spacecraft power was provided by a direct energy transfer system from the single solar array which consisted of eight panels of solar cells. The in-orbit Attitude Determination and Control Subsystem (ADACS) provided three-axis pointing control by controlling torque in three mutually orthogonal momentum wheels with input from the Earth Sensor Assembly (ESA) for pitch, roll, and yaw updates. The ADACS controlled the spacecraft attitude so that orientation of the three axes was maintained to within ± 0.2° and pitch, roll, and yaw to within 0.1°. The ADACS consisted of the Earth Sensor Assembly (ESA), the Sun Sensor Assembly (SSA), four Reaction Wheel Assemblies (RWA), two roll/yaw coils (RYC), two pitch torquing coils (PTC), four gyros, and computer software for data processing. The ATN data handling subsystem consisted of the TIROS Information Processor (TIP) for low data rate instruments, the Manipulated Information Rate Processor (MIRP) for high data rate AVHRR, digital tape recorders (DTR), and a cross strap unit (XSU).
The NOAA-16 instrument complement consisted of: 1° an improved six-channel Advanced Very High Resolution Radiometer/3 (AVHRR/3); 2° an improved High Resolution Infrared Radiation Sounder (HIRS/3); 3° the Search and Rescue Satellite Aided Tracking System (SARSAT), which consists of the Search and Rescue Repeater (SARR) and the Search and Rescue Processor (SARP-2); 4° the French/CNES-provided improved Argos Data Collection System (Argos DCS-2); 5° the Solar Backscatter Ultraviolet Spectral radiometer (SBUV/2); and 6° the Advanced Microwave Sounding Unit (AMSU), which consists of three separate modules, A1, A2, and B, to replace the previous MSU and SSU instruments.
It hosted the Automatic Picture Transmission (APT) transmitter for the Advanced Microwave Sounding Unit (AMSU), Advanced Very High Resolution Radiometer (AVHRR) and High Resolution Infrared Radiation Sounder (HIRS) instruments. NOAA-16 carried the same suite of instruments as NOAA-15, plus an SBUV/2 instrument.
Advanced Very High Resolution Radiometer (AVHRR/3)
The AVHRR/3 on the Advanced TIROS-N (ATN) NOAA K-N series of polar orbiting meteorological satellites is an improved instrument over previous AVHRRs. The AVHRR/3 adds a sixth channel and is a cross-track scanning instrument providing imaging and radiometric data in the visible, near-IR and infrared of the same area on the Earth. Data from the visible and near-IR channels provide information on vegetation, clouds, snow, and ice. Data from the near-IR and thermal channels provide information on the land and ocean surface temperature and radiative properties of clouds. Only five channels can be transmitted simultaneously with channels 3A and 3B being switched for day/night operation. The instrument produces data in High Resolution Picture Transmission (HRPT) mode at 1.1 km resolution or in Automatic Picture Transmission (APT) mode at a reduced resolution of 4 km. The AVHRR/3 scans 55.4° per scan line on either side of the orbital track and scans 360 lines per minute. The six channels are: 1) channel 1, visible (0.58-0.68 μm); 2) channel 2, near-IR (0.725-1.0 μm); 3) channel 3A, near-IR (1.58-1.64 μm); 4) channel 3B, infrared (3.55-3.93 μm); 5) channel 4, infrared (10.3-11.3 μm); and 6) channel 5 (11.5-12.5 μm).
High Resolution Infrared Sounder (HIRS/3)
The improved HIRS/3 on the Advanced TIROS-N (ATN) NOAA K-N series of polar orbiting weather satellites is a 20-channel, step-scanned, visible and infrared spectrometer designed to provide atmospheric temperature and moisture profiles. The HIRS/3 instrument is basically identical to the HIRS/2 flown on previous spacecraft except for changes in six spectral bands to improve the sounding accuracy. The HIRS/3 is used to derive water vapor, ozone, and cloud liquid water content. The instrument scans 49.5° on either side of the orbital track with a ground resolution at nadir of 17.4 km. The instrument produces 56 IFOVs for each 1,125 km scan line at 42 km between IFOVs along-track. The instrument consists of 19 infrared and 1 visible channel centered at 14.95, 14.71, 14.49, 14.22, 13.97, 13.64, 13.35, 11.11, 9.71, 12.45, 7.33, 6.52, 4.57, 4.52, 4.47, 4.45, 4.13, 4.0, 3.76, and 0.69 μm.
Advanced Microwave Sounding Unit (AMSU-A)
The AMSU was an instrument on the Advanced TIROS-N (ATN) NOAA K-N series of operational meteorological satellites. The AMSU consisted of two functionally independent units, AMSU-A and AMSU-B. The AMSU-A was a line-scan instrument designed to measure scene radiance in 15 channels, ranging from 23.8 to 89 GHz, to derive atmospheric temperature profiles from the Earth's surface to about 3 millibar pressure height. The instrument was a total power system having a field of view (FOV) of 3.3° at half-power points. The antenna provided cross track scan 50° on either side of the orbital track at nadir with a total of 30 IFOVs per scan line. The AMSU-A was calibrated on-board using a blackbody and space as references. The AMSU-A was physically divided into two separate modules which interfaced independently with the spacecraft. The AMSU-A1 contained all of the 5 mm oxygen channels (channels 3-14) and the 89 GHz channel. The AMSU-A2 module consisted of two low-frequency channels (channels 1 and 2). The 15 channels had a center frequency at: 23.8, 31.4, 50.3, 52.8, 53.6, 54.4, 54.94, 55.5, six at 57.29, and 89 GHz.
Advanced Microwave Sounding Unit (AMSU-B)
The AMSU was an instrument on the Advanced TIROS-N (ATN) NOAA K-N series of operational meteorological satellites. The AMSU consisted of two functionally independent units, AMSU-A and AMSU-B. The AMSU-B was a line-scan instrument designed to measure scene radiance in five channels, ranging from 89 GHz to 183 GHz for the computation of atmospheric water vapor profiles. The AMSU-B was a total power system with a field of view (FOV) of 1.1° at half-power points. The antenna provided a cross-track scan, scanning 50° on either side of the orbital track with 90 IFOVs per scan line. On-board calibration was accomplished with blackbody targets and space as references. The AMSU-B channels at the center frequency (GHz) were: 90, 157, and 3 channels at 183.31.
Space Environment Monitor-2 (SEM-2)
The SEM-2 on the Advanced TIROS-N (ATN) NOAA K-N series of polar orbiting meteorological satellites provides measurements to determine the population of the Earth's radiation belts and data on charged particle precipitation in the upper atmosphere as a result of solar activity. The SEM-2 consists of two separate sensors: the Total Energy Detector (TED) and the Medium Energy Proton/Electron Detector (MEPED). In addition, the SEM-2 includes a common Data Processing Unit (DPU). The TED uses eight programmed swept electrostatic curved-plate analyzers to select particle type and energy and Channeltron detectors to measure the intensity in the selected energy bands. The particle energies range from 50 eV to 20 keV. The MEPED detects protons, electrons, and ions with energies from 30 keV to several tens of MeV. The MEPED consists of four directional solid-state detector telescopes and four omnidirectional sensors. The DPU sorts and counts the events and the results are multiplexed and incorporated into the satellite telemetry system. Once received on the ground, the SEM-2 data is separated from the rest of the data and sent to the NOAA Space Environment Laboratory in Boulder, Colorado, for processing and dissemination.
Search and Rescue Satellite Aided Tracking System (SARSAT)
The SARSAT on the Advanced TIROS-N NOAA K-N series of polar orbiting meteorological satellites is designed to detect and locate Emergency Locator Transmitters (ELTs) and Emergency Position-Indicating Radio Beacons. The SARSAT instrumentation consists of two elements: the Search and Rescue Repeater (SARR) and the Search and Rescue Processor (SARP-2). The SARR is a radiofrequency (RF) system that accepts signals from emergency ground transmitters at three very high frequency (VHF/UHF) ranges (121.5 MHz, 243 MHz and 406.05 MHz) and translates, multiplexes, and transmits these signals at L-band frequency (1.544 GHz) to local Search and Rescue stations (LUTs or Local User Terminals) on the ground. The location of the transmitter is determined by retrieving the Doppler information in the relayed signal at the LUT. The SARP-2 is a receiver and processor that accepts digital data from emergency ground transmitters at UHF and demodulates, processes, stores, and relays the data to the SARR where they are combined with the three SARR signals and transmitted via L-band frequency to local stations.
ARGOS Data Collection System (Argos DCS-2)
The DCS-2 on the Advanced TIROS-N (ATN) NOAA K-N series of polar orbiting meteorological satellites is a random-access system for the collection of meteorological data from in situ platforms (moveable and fixed). The Argos DCS-2 collects telemetry data using a one-way RF link from data collection platforms (such as buoys, free-floating balloons and remote weather stations) and processes the inputs for on-board storage and later transmission from the spacecraft. For free-floating platforms, the DCS-2 system determines the position to within 5 to 8 km RMS and velocity to an accuracy of 1.0 to 1.6 m/s RMS. The DCS-2 measures the in-coming signal frequency and time. The formatted data are stored on the satellite for transmission to NOAA stations. The DCS-2 data is stripped from the GAC data by NOAA/NESDIS and sent to the Argos center at CNES in France for processing, distribution to users, and archival.
Solar Backscatter Ultraviolet Radiometer (SBUV/2)
The SBUV/2 on the Advanced TIROS-N (ATN) NOAA K-N series of polar orbiting meteorological satellites is a dual monochromator ultraviolet grating spectrometer for stratospheric ozone measurements. The SBUV/2 is designed to measure scene radiance and solar spectral irradiance in the ultraviolet spectral range from 160 to 406 nm. Measurements are made in discrete mode or sweep mode. In discrete mode, measurements are made in 12 spectral bands from which the total ozone and vertical distribution of ozone are derived. In the sweep mode, a continuous spectral scan from 160 to 406 nm is made primarily for computation of ultraviolet solar spectral irradiance. The 12 spectral channels are (in nm): 252.0, 273.61, 283.1, 287.7, 292.29, 297.59, 301.97, 305.87, 312.57, 317.56, 331.26, and 339.89.
Telecommunications
The TIP formats low bit rate instrument and telemetry data for the tape recorders and direct read-out. The MIRP processes high data rate AVHRR data for the tape recorders (GAC) and direct read-out (HRPT and LAC). On-board recorders can store 110 minutes of GAC, 10 minutes of HRPT and 250 minutes of TIP.
Anomaly, Decommissioning and Breakup
The Automatic Picture Transmission (APT) of NOAA-16 became inoperable due to sensor degradation on 15 November 2000, and the High Resolution Picture Transmission (HRPT) was done via STX-1 (1698 MHz) starting on 9 November 2010.
On 6 June 2014, NOAA-16 controllers were unable to establish contact with the satellite due to an undefined "critical anomaly". After extensive engineering analysis and recovery efforts, it was determined that recovery of the mission was not possible. It was decommissioned on 9 June 2014. On 25 November 2015, at 08:16 UTC, the Joint Space Operations Center (JSpOC) identified a possible breakup of NOAA-16 (#26536). All associated objects were added to conjunction assessment screenings, and satellite operators were notified of close approaches between the debris and active satellites. The JSpOC catalogs the debris objects when sufficient data is available. As of 26 March 2016, 275 pieces of debris were being tracked.
The debris posed no danger to other satellites at the time, and there was no indication that a collision had caused the breakup of NOAA-16.
The debris distribution suggested battery rupture as a possible cause of the breakup, similar to the breakups of the Defense Meteorological Satellite Program's DMSP F-13 and F-11 and of NOAA-17. DMSP F-13 was known to have battery overcharge issues.
References
External links
NOAA-16 Satellite Position
Orbital Tracking
Weather satellites of the United States
Spacecraft launched in 2000
Spacecraft that broke apart in space
Television Infrared Observation Satellites | NOAA-16 | [
"Technology"
] | 3,407 | [
"Space debris",
"Spacecraft that broke apart in space"
] |
10,859,521 | https://en.wikipedia.org/wiki/Cadastral%20community | A cadastral community (or cadastre community, cadastral [or cadastre] municipality, cadastral [or cadastre] commune, cadastral [or cadastre] unit, cadastral [or cadastre] district, cadastral [or cadastre] area, cadastral [or cadastre] territory) is a cadastral subdivision of municipalities in the nations of Austria, Bosnia and Herzegovina, Croatia, the Czech Republic, Serbia, Slovakia, Slovenia, the Netherlands, and the Italian provinces of South Tyrol, Trentino, Gorizia and Trieste. A cadastral community records property ownership in a cadastre, which is a register describing property ownership by boundary lines of the real estate.
The common etymology in the Central European successor states of the Habsburg monarchy comes from the German Katastralgemeinde (KG; literally "cadastral municipality" or "cadastral community"), plural Katastralgemeinden, rendered in the other languages of the region by equivalent terms such as the Italian comune catastale. In Czech and Slovak, the historical name (literally "cadastral municipality/community") was changed in 1928 to a term meaning literally "cadastral area" or "cadastral territory", and today, on official websites, it is usually translated into English by the (misleading) terms "cadastral unit" in Czechia and "cadastral district" in Slovakia. In what is today Hungary, the concept and term existed only in the past.
History
In 1764, at the behest of Empress Maria Theresa, a complete survey of the Habsburg lands was begun, initiated by the general staff of the Imperial and Royal Army under Field Marshal Count Leopold Joseph von Daun, who had become aware of the lack of reliable maps in the Seven Years' War. Maria Theresa's son Emperor Joseph II ordered the implementation of a complete urbarium for property tax purposes in 1785. The present-day cadastre was completed after the Napoleonic Wars from 1817 onwards under Emperor Francis I of Austria (Franziszeischer Kataster). Since then, the Austrian (i.e. Cisleithanian) crown lands were subdivided into Katastralgemeinden; surveying in the Hungarian (Transleithanian) lands started in 1850. Municipalities as administrative subdivisions with certain rights of self-governance were not established until after the 1848 revolutions.
Most of today's Katastralgemeinden were once independent communes that were incorporated in the course of municipal territory reforms. They can be further divided into smaller villages and localities (Ortschaften). There were 7,847 Katastralgemeinden in Austria in 2014. For land registration, the unit identifier used for a Katastralgemeinde is "KG-Nr" (KG-Nummer, or KG number).
The Dutch system of kadastrale gemeenten was set up around 1830. When municipalities are merged, often the cadastral communes remain as they were, so one civil municipality can consist of more than one cadastral commune; but again, a cadastral commune can never be part of more than one civil municipality.
See also
Districts of Vienna
Municipalities of South Tyrol
Municipalities of Slovenia
Municipalities of Croatia
Municipality (Austria)
References
External links
Cadastral Template – A Worldwide Comparison of Cadastral Systems
CT-C4 - Description of what kind of registers are operated and maintained in different countries
Surveying
Subdivisions of Austria
Subdivisions of Croatia
Subdivisions of the Czech Republic
Subdivisions of the Netherlands
Subdivisions of Slovakia
Subdivisions of Slovenia
Subdivisions of Italy
South Tyrol
Trentino
Province of Gorizia
Province of Trieste
Land registration
Subdivisions of Austria-Hungary | Cadastral community | [
"Engineering"
] | 746 | [
"Surveying",
"Civil engineering"
] |
10,860,682 | https://en.wikipedia.org/wiki/BMPR1B | Bone morphogenetic protein receptor type-1B also known as CDw293 (cluster of differentiation w293) is a protein that in humans is encoded by the BMPR1B gene.
Function
BMPR1B is a member of the bone morphogenetic protein (BMP) receptor family of transmembrane serine/threonine kinases. The ligands of this receptor are BMPs, which are members of the TGF-beta superfamily. BMPs are involved in endochondral bone formation and embryogenesis. These proteins transduce their signals through the formation of heteromeric complexes of 2 different types of serine (threonine) kinase receptors: type I receptors of about 50-55 kD and type II receptors of about 70-80 kD. Type II receptors bind ligands in the absence of type I receptors, but they require their respective type I receptors for signaling, whereas type I receptors require their respective type II receptors for ligand binding.
The BMPR1B receptor plays a role in the formation of middle and proximal phalanges.
Clinical significance
Mutations in this gene have been associated with primary pulmonary hypertension.
In the chick embryo, it has been shown that BMPR1B is found in precartilaginous condensations. BMPR1B is the major transducer of signals in these condensations as demonstrated in experiments using constitutively active BMPR1B receptors. BMPR1B is a more effective transducer of GDF5 than BMPR1A. Unlike BMPR1A null mice, which die at an early embryonic stage, BMPR1B null mice are viable.
References
External links
Bone morphogenetic protein
Clusters of differentiation
GS domain
Receptors
Transmembrane receptors
S/T domain
EC 2.7.11 | BMPR1B | [
"Chemistry"
] | 392 | [
"Transmembrane receptors",
"Receptors",
"Signal transduction"
] |
10,860,735 | https://en.wikipedia.org/wiki/Hybrid%20image | A hybrid image is an image that is perceived in one of two different ways, depending on viewing distance, based on the way humans process visual input. A technique for creating hybrid images exhibiting this optical illusion was developed by Aude Oliva of MIT and Philippe G. Schyns of University of Glasgow, a method they originally proposed in 1994. Hybrid images combine the low spatial frequencies of one picture with the high spatial frequencies of another picture, producing an image with an interpretation that changes with viewing distance.
Perhaps the most familiar example is one featuring Albert Einstein and Marilyn Monroe. Looking at the picture from a short distance, one sees a sharp image of Einstein, with only a faint blurry distortion hinting at the presence of an overlaid image. Viewed from a distance at which the fine detail blurs, the unmistakable face of Monroe emerges.
Other techniques that can help to see the "hidden" image include squinting, scrolling quickly over the image or looking at the thumbnail of the image.
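The construction itself is a simple frequency-domain combination, which can be sketched in a few lines of image-processing code. The following is a minimal illustration (not the authors' published implementation), assuming NumPy and SciPy are available; the function name and the cutoff parameter sigma are invented for the example and would be tuned by eye in practice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid_image(far_img, near_img, sigma=8.0):
    """Combine the low spatial frequencies of far_img with the high
    spatial frequencies of near_img.

    far_img, near_img: 2-D grayscale arrays of equal shape, values in [0, 1].
    sigma: Gaussian blur radius in pixels, acting as the frequency cutoff.
    """
    low = gaussian_filter(far_img, sigma)               # low-pass part, seen from afar
    high = near_img - gaussian_filter(near_img, sigma)  # high-pass part, seen up close
    return np.clip(low + high, 0.0, 1.0)
```

A larger sigma pushes more of the near image's structure into the high-pass residual, so the far image dominates at correspondingly shorter viewing distances.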
Gallery
References
External links
Hybrid Images
Hybrid Imagery set on flickr
Hybrid Image generator
Optical illusions | Hybrid image | [
"Physics"
] | 219 | [
"Optical phenomena",
"Physical phenomena",
"Optical illusions"
] |
10,864,058 | https://en.wikipedia.org/wiki/Keggin%20structure | The Keggin structure is the best known structural form for heteropoly acids. It is the structural form of α-Keggin anions, which have a general formula of [XM12O40]n−, where X is the heteroatom (most commonly pentavalent phosphorus PV, tetravalent silicon SiIV, or trivalent boron BIII), M is the addendum atom (the most common are molybdenum Mo and tungsten W), and O represents oxygen. The structure self-assembles in acidic aqueous solution and is a commonly used type of polyoxometalate catalyst.
History
The first α-Keggin anion, ammonium phosphomolybdate ((NH4)3[PMo12O40]), was reported by Berzelius in 1826. In 1892, Blomstrand proposed a chain or ring configuration for the structure of phosphomolybdic acid and other poly-acids. Alfred Werner, using the coordination-compound ideas of Copaux, attempted to explain the structure of silicotungstic acid. He assumed a central group enclosed by four outer tungstate units, each containing a unipositive ion R, linked to the central group by primary valences; two more such units were linked to the central group by secondary valences. This proposal accounted for the characteristics of most poly-acids, but not all.
In 1928, Linus Pauling proposed a structure for α-Keggin anions consisting of a central tetrahedral XO4 ion caged by twelve MO6 octahedra. In this proposed structure, three of the oxygen atoms on each of the octahedra shared electrons with three neighboring octahedra. As a result, 18 oxygen atoms were used as bridging atoms between the metal atoms. The remaining oxygen atoms bonded to a proton. This structure explained many observed characteristics, such as the basicities of alkali metal salts and the hydrated form of some of the salts. However, it could not explain the structure of dehydrated acids.
James Fargher Keggin experimentally determined the structure of α-Keggin anions in 1934 by X-ray diffraction. The Keggin structure accounts for both the hydrated and dehydrated α-Keggin anions without the need for significant structural change, and it is the widely accepted structure for the α-Keggin anions.
Structure and physical properties
The structure has full tetrahedral symmetry and is composed of one heteroatom surrounded by four oxygen atoms to form a tetrahedron. The heteroatom is located centrally and caged by 12 MO6 octahedral units linked to one another by the neighboring oxygen atoms. There are a total of 24 bridging oxygen atoms that link the 12 addenda atoms. The metal centres in the 12 octahedra are arranged on a sphere almost equidistant from each other, in four M3O13 units, giving the complete structure an overall tetrahedral symmetry. The bond length between atoms varies depending on the heteroatom (X) and the addenda atoms (M). For 12-phosphotungstic acid, Keggin determined the bond length between the heteroatom and each of the four central oxygen atoms to be 1.5 Å. The bond length from the central oxygen to the addenda atoms is 2.43 Å. The bond length between the addenda atoms and each of the bridging oxygens is 1.9 Å. The remaining 12 oxygen atoms, each double bonded to an addendum atom, have a bond length of 1.70 Å. The octahedra are therefore distorted. This structure allows the molecule to hydrate and dehydrate without significant structural changes, and the molecule is thermally stable in the solid state for use in vapor phase reactions at high temperatures (400−500 °C).
Isomerism
Including the original Keggin structure there are 5 isomers, designated by the prefixes α-, β-, γ-, δ- and ε-. The original Keggin structure is designated α. These isomers are sometimes termed Baker, Baker–Figgis or rotational isomers. They involve different rotational orientations of the M3O13 units, which lowers the symmetry of the overall structure.
Lacunary Keggin structures
The term lacunary is applied to ions which have a fragment missing, sometimes called defect structures. Examples are the anions formed by the removal from the Keggin structure of sufficient Mo and O atoms to eliminate 1 or 3 adjacent octahedra. The Dawson structure is made up of two lacunary Keggin fragments, each with 3 missing octahedra.
Group 13 cations with the Keggin structure
The cluster cation [AlO4Al12(OH)24(H2O)12]7+ has the Keggin structure, with a tetrahedral Al atom in the centre of the cluster coordinated to 4 oxygen atoms. This ion is generally called the Al13 ion. A gallium analogue is known, and an unusual ionic compound with an Al13 cation and a Keggin polyoxoanion has been characterised.
The iron Keggin ion
Due to the similar aqueous chemistries of aluminium and iron, it had long been thought that an analogous iron polycation should be isolatable from water. Moreover, in 2007, the structure of ferrihydrite was determined and shown to be built of iron Keggin ions. This further captured scientists' imagination and drive to isolate the iron Keggin ion. In 2015, the iron Keggin ion was isolated from water, but as a polyanion with a −17 charge, and protecting chemistry was required. Iron-bound water is very acidic, so it is difficult to capture the intermediate Keggin ion form without bulky and nonprotic ligands in place of the water that is found in the aluminium Keggin ion. More important in this synthesis, however, were the bismuth (Bi3+) counterions that provided high positive charge to stabilize the high negative charge of the heptadecavalent polyanion.
Chemical properties
The stability of the Keggin structure allows the metals in the anion to be readily reduced. Depending on the solvent, the acidity of the solution and the charge on the α-Keggin anion, it can be reversibly reduced in one- or multiple-electron steps. For example, the silicotungstate anion can be reduced to a −20 state. Some of the acids, such as silicotungstic acid, are as strong as sulfuric acid and can be used in its place as an acid catalyst.
Preparation
In general α-Keggin anions are synthesized in acidic solutions. For example, 12-phosphotungstic acid is formed by condensing phosphate ion with tungstate ions. The heteropolyacid that is formed has the Keggin structure.
Uses
α-Keggin anions have been used as catalysts in hydration, polymerization and oxidation reactions. Japanese chemical companies have commercialized the use of the compounds in the hydration of propene, oxidation of methacrolein, hydration of isobutene and n-butene, and polymerization of THF.
Suppliers
12-Phosphotungstic acid, the compound James F. Keggin used to determine the structure, can be purchased commercially. Other compounds that contain the α-Keggin anion such as silicotungstic acid and phosphomolybdic acid are also commercially available at Aldrich Chemicals, Fisher Chemicals, Alfa Aesar, VWR Chemical, American Elements, and others.
References
Cluster chemistry
Heteropoly acids
Anions | Keggin structure | [
"Physics",
"Chemistry"
] | 1,598 | [
"Matter",
"Acids",
"Anions",
"Cluster chemistry",
"Heteropoly acids",
"Organometallic chemistry",
"Ions"
] |
10,864,076 | https://en.wikipedia.org/wiki/Statistical%20static%20timing%20analysis | Conventional static timing analysis (STA) has been a stock analysis algorithm for the design of digital circuits for a long time. However the increased variation in semiconductor devices and interconnect has introduced a number of issues that cannot be handled by traditional (deterministic) STA. This has led to considerable research into statistical static timing analysis, which replaces the normal deterministic timing of gates and interconnects with probability distributions, and gives a distribution of possible circuit outcomes rather than a single outcome.
Comparison with conventional STA
Deterministic STA is popular for good reasons:
It requires no vectors, so it does not miss paths.
The run time is linear in circuit size (for the basic algorithm).
The result is conservative.
It typically uses some fairly simple libraries (typically delay and output slope as a function of input slope and output load).
It is easy to extend to incremental operation for use in optimization.
STA, while very successful, has a number of limitations:
Cannot easily handle within-die correlation, especially if spatial correlation is included.
Needs many corners to handle all possible cases.
If there are significant random variations, then in order to be conservative at all times, it is too pessimistic to result in competitive products.
Changes to address various correlation problems, such as CPPR (Common Path Pessimism Removal), make the basic algorithm slower than linear time, or non-incremental, or both.
SSTA attacks these limitations more or less directly. First, SSTA uses sensitivities to find correlations among delays. Then it uses these correlations when computing how to add statistical distributions of delays.
There is no technical reason why deterministic STA could not be enhanced to handle correlation and sensitivities, by keeping a vector of sensitivities with each value as SSTA does. Historically, this seemed like a big burden to add to STA, whereas it was clearly needed for SSTA, so no one complained. See some of the criticism of SSTA below, where this alternative is proposed.
Methods
There are two main categories of SSTA algorithms – path-based and block-based methods.
A path-based algorithm sums gate and wire delays on specific paths. The statistical calculation is simple, but the paths of interest must be identified prior to running the analysis. There is the potential that some other paths may be relevant but not analyzed, so path selection is important.
A block-based algorithm generates the arrival times (and required times) for each node, working forward (and backward) from the clocked elements. The advantage is completeness and no need for path selection. The biggest problem is that a statistical max (or min) operation that also considers correlation is needed, which is a hard technical problem; a Monte Carlo view of it is sketched below.
There are SSTA cell characterization tools now available, such as Altos Design Automation's Variety tool.
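To make the block-based flow concrete, here is a minimal Monte Carlo sketch, an illustration rather than a production SSTA engine. All gate names, delay values and sensitivities are invented for the example; a shared global process variable induces the correlation between gate delays, and arrival times are propagated with sum and max through a tiny two-path converging node.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # number of Monte Carlo samples

# One shared "global" process variable correlates all gate delays;
# each gate also gets an independent random component.
global_var = rng.standard_normal(N)

def gate_delay(nominal, global_sens, random_sigma):
    return nominal + global_sens * global_var + random_sigma * rng.standard_normal(N)

# Two fan-in paths converging on one node; arrival time = max over fan-ins.
arr_a = gate_delay(100.0, 5.0, 2.0) + gate_delay(80.0, 4.0, 2.0)  # two gates in series
arr_b = gate_delay(150.0, 5.0, 3.0)                               # one slow gate
arrival = np.maximum(arr_a, arr_b)  # statistical max; correlation is handled by sampling

print(f"mean = {arrival.mean():.1f}, sigma = {arrival.std():.1f}, "
      f"99.9th percentile = {np.percentile(arrival, 99.9):.1f}")
```

Analytic block-based engines replace the sampling with closed-form moment propagation (for example, Clark's approximation for the max of correlated Gaussians), which is where the cited technical difficulty lies.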
Criticism
A number of criticisms have been leveled at SSTA:
It's too complex, especially with realistic (non-gaussian) distributions.
It's hard to couple to an optimization flow or algorithm.
It's hard to get the data the algorithm needs. Even if you can get this data, it is likely to be time-varying and hence unreliable.
If used seriously by the customers of a fab, it restricts the changes the fab might make that alter the statistical properties of the process.
The benefit is relatively small, compared to an enhanced deterministic STA that also takes into account sensitivities and correlation.
Tools that perform static timing analysis
FPGAs
Altera Quartus II
Xilinx ISE
ASICs
Synopsys PrimeTime
Cadence Encounter Timing System (Cadence Tempus)
IBM EinsTimer
ANSYS Path FX
See also
Dynamic timing analysis
References
Timing in electronic circuits
Formal methods | Statistical static timing analysis | [
"Engineering"
] | 795 | [
"Software engineering",
"Formal methods"
] |
10,865,561 | https://en.wikipedia.org/wiki/Keum-boo | Keum-boo (; also Geumbu, Kum-Boo or Kum-bu—Korean "attached gold") is an ancient Korean gilding technique used to apply thin sheets of gold to silver, to make silver-gilt. Traditionally, this technique is accomplished by first depleting a surface of sterling silver to bring up a thin layer of fine silver. Then 24 carat gold foil is applied with heat and pressure—mechanical gilding—to produce a permanent diffusion bond.
Pure precious metals such as gold and silver have a very similar atomic structure and therefore have a good potential for bonding. Heating these metals to a temperature between 260–370°C increases the movement of the atoms. When pressure is added, this causes an electron exchange at the surface between the two metals, creating a permanent diffusion bond. This diffusion bond occurs far below the soldering temperature for either metal (Dhein, 2004).
Examples of this technique have probably been observed, but not positively identified on pieces from the second half of the first millennium B.C. and from the early first millennium A.D. (Oddy, 1981).
This technique is used in many cultures, including Chinese and Japanese, and in the West, to bond gold to other metals, including iron, copper, aluminum, gold alloys, white gold, palladium and platinum. Foil made from gold alloys can be applied to silver and other metals by first depletion gilding the surface of the foil (Lewton-Brain, 1987–1993).
See also
Art movement
Creativity techniques
List of art media
List of artistic media
List of art movements
List of most expensive paintings
List of most expensive sculptures
List of art techniques
List of sculptors
References
Dhein, Christine (2004). "Keum-Boo Pillow Pendant"
Oddy, Andrew (1981). "Gilding Through the Ages"
Lewton-Brain, Charles (1987–1993). "Keum-Boo"
Korean Kum-Boo Metalwork, Traditional Arts Program Notes, 1998, Department of Anthropology, California Academy of Sciences.
Artistic techniques
Gold
Metal plating
Korean art | Keum-boo | [
"Chemistry"
] | 428 | [
"Metallurgical processes",
"Coatings",
"Metal plating"
] |
10,867,713 | https://en.wikipedia.org/wiki/Workover | The term workover is used to refer to any kind of oil well intervention involving invasive techniques, such as wireline, coiled tubing or snubbing. More specifically, a workover refers to the expensive process of pulling and replacing completion or production hardware in order to extend the life of the well.
Reason to perform a workover
Workovers rank among the most complex, difficult and expensive types of well work. They are performed only if the well completion is terminally unsuitable for the job at hand. The production tubing may have become damaged due to operational factors like corrosion to the point where well integrity is threatened. Downhole components such as tubing, retrievable downhole safety valves, or electrical submersible pumps may have malfunctioned, needing replacement.
In other circumstances, the reason for a workover may not be that the completion itself is in a bad condition, but that changing reservoir conditions make the former completion unsuitable. For example, a high productivity well may have been completed with 5½" tubing to allow high flow rates because a narrower tubing would have unnecessarily choked the flow, but declining productivity could lead to stable flow being unsupportable through such a wide bore.
Operation
Before any workover, the well must first be killed. Since workovers are planned long in advance, there is ample time to plan the well kill, so reverse circulation is common. The intense nature of this operation often requires no less than the capabilities of a drilling rig.
The workover begins by killing the well, removing the wellhead and possibly the flow line, installing a B.O.P. (blowout preventer), and then lifting the tubing hanger from the casing head, thus beginning to pull the completion out of the well. The string will almost always be fixed in place by at least one production packer. If the packer is retrievable it can be released easily enough and pulled out with the completion string. If it is permanent, then it is common to cut the tubing just above it and pull out the upper portion of the string. If necessary, the packer and the tubing left in the well can be milled out, though more commonly, the new completion will make use of it by setting a new packer just above it and running new tubing down to the top of the old one.
Workovers on casing
Although less exposed to wellbore fluids, casing strings too have been known to lose integrity. On occasion, it may be deemed economical to pull and replace the casing. Since casing strings are cemented in place, this is significantly more difficult and expensive than replacing the completion string. If the casing string cannot be removed from the well, it may be necessary to sidetrack the offending area and recomplete, which is also an expensive process. For all but the most productive wells, replacing the casing would never be economical.
References
External links
Schlumberger oilfield glossary
Oil wells
Petroleum production | Workover | [
"Chemistry"
] | 626 | [
"Petroleum technology",
"Oil wells"
] |
10,867,794 | https://en.wikipedia.org/wiki/Synchronization%20of%20chaos | Synchronization of chaos is a phenomenon that may occur when two or more dissipative chaotic systems are coupled.
Because of the exponential divergence of the nearby trajectories of chaotic systems, having two chaotic systems evolving in synchrony might appear surprising. However, synchronization of coupled or driven chaotic oscillators is a phenomenon well established experimentally and reasonably well-understood theoretically.
The stability of synchronization for coupled systems can be analyzed using the master stability function approach. Synchronization of chaos is a rich phenomenon and a multi-disciplinary subject with a broad range of applications.
Synchronization may present a variety of forms depending on the nature of the interacting systems and the type of coupling, and the proximity between the systems.
Identical synchronization
This type of synchronization is also known as complete synchronization. It can be observed for identical chaotic systems.
The systems are said to be completely synchronized when there is a set of initial conditions so that the systems eventually
evolve identically in time. In the simplest case of two diffusively coupled systems, the dynamics is described by

$$\dot{x} = F(x) + \alpha (y - x)$$
$$\dot{y} = F(y) + \alpha (x - y)$$

where $F$ is the vector field modeling the isolated chaotic dynamics and $\alpha$ is the coupling parameter. The regime $x(t) = y(t)$ defines an invariant subspace of the coupled system; if this subspace is locally attractive, then the coupled system exhibits identical synchronization.

If the coupling vanishes, the oscillators are decoupled and the chaotic behavior leads to a divergence of nearby trajectories. Complete synchronization occurs due to the interaction if the coupling parameter is large enough that the divergence of trajectories of the interacting systems due to chaos is suppressed by the diffusive coupling. To find the critical coupling strength, we study the behavior of the difference $v = x - y$. Assuming that $v$ is small, we can expand the vector field in series and, neglecting the Taylor remainder, obtain a linear differential equation governing the behavior of the difference:

$$\dot{v} = DF(x(t))\, v - 2\alpha v,$$

where $DF(x(t))$ denotes the Jacobian of the vector field along the solution. If $\alpha = 0$, then we obtain

$$\dot{v} = DF(x(t))\, v,$$

and since the dynamics is chaotic we have $\|v(t)\| \le \|v(0)\| e^{\lambda t}$, where $\lambda$ denotes the maximum Lyapunov exponent of the isolated system. Now, using the ansatz $v = u e^{-2\alpha t}$, we pass from the equation for $v$ to an equation for $u$ and obtain

$$\|v(t)\| \le \|v(0)\| e^{(-2\alpha + \lambda) t},$$

which yields a critical coupling strength $\alpha_c = \lambda/2$; for all $\alpha > \alpha_c$ the system exhibits complete synchronization.
The existence of a critical coupling strength is related to the chaotic nature of the isolated dynamics.
In general, this reasoning leads to the correct critical coupling value for synchronization. However, in some cases one might
observe loss of synchronization for coupling strengths larger than the critical value. This occurs because the nonlinear terms
neglected in the derivation of the critical coupling value can play an important role and destroy the exponential bound for the
behavior of the difference. It is, however, possible to give a rigorous treatment of this problem and obtain a critical value so that the
nonlinearities will not affect the stability.
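As a concrete illustration, the sketch below integrates two diffusively coupled Lorenz systems with simple Euler steps and prints the decay of the synchronization error $\|x - y\|$. The Lorenz parameters are the classic ones; the coupling value is an assumption chosen well above the critical strength (roughly $\lambda/2 \approx 0.45$ for the Lorenz system, whose maximum Lyapunov exponent is about 0.9), and a production simulation would use a higher-order integrator.

```python
import numpy as np

def lorenz(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, steps, alpha = 1e-3, 20_000, 2.0   # alpha chosen above the critical coupling
x = np.array([1.0, 1.0, 1.0])
y = np.array([-5.0, 3.0, 20.0])        # deliberately different initial conditions

for n in range(steps + 1):
    if n % 2_000 == 0:
        print(f"t = {n * dt:5.1f}   |x - y| = {np.linalg.norm(x - y):.3e}")
    fx, fy = lorenz(x), lorenz(y)
    # Diffusive coupling pulls each trajectory toward the other.
    x, y = x + dt * (fx + alpha * (y - x)), y + dt * (fy + alpha * (x - y))
```

The printed error shrinks roughly like $e^{(-2\alpha + \lambda)t}$, matching the bound derived above, while each trajectory individually remains chaotic.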
Generalized synchronization
This type of synchronization occurs mainly when the coupled chaotic oscillators are different, although it has also been reported between identical oscillators. Given the dynamical variables $x(t)$ and $y(t)$ that determine the state of the oscillators, generalized synchronization occurs when there is a functional, $\Phi$, such that, after a transitory evolution from appropriate initial conditions, $y(t) = \Phi(x(t))$. This means that the dynamical state of one of the oscillators is completely determined by the state of the other. When the oscillators are mutually coupled this functional has to be invertible; if there is a drive-response configuration, the drive determines the evolution of the response, and $\Phi$ does not need to be invertible. Identical synchronization is the particular case of generalized synchronization when $\Phi$ is the identity.
Phase synchronization
Phase synchronization occurs when the coupled chaotic oscillators keep their phase difference bounded while their amplitudes remain uncorrelated.
This phenomenon occurs even if the oscillators are not identical. Observation of phase synchronization requires a previous definition of the phase of a chaotic oscillator. In many practical cases, it is possible to find a plane in phase space in which the projection of the trajectories of the oscillator follows a rotation around a well-defined center. If this is the case, the phase is defined by the angle, φ(t), described by the segment joining the center of rotation and the projection of the trajectory point onto the plane. In other cases it is still possible to define a phase by means of techniques provided by the theory of signal processing, such as the Hilbert transform. In any case, if φ1(t) and φ2(t) denote the phases of the two coupled oscillators, synchronization of the phase is given by the relation nφ1(t)=mφ2(t) with m and n whole numbers.
Anticipated and lag synchronization
In these cases, the synchronized state is characterized by a time interval τ such that the dynamical variables of the oscillators, $x_1(t)$ and $x_2(t)$, are related by $x_1(t) = x_2(t - \tau)$; this means that the dynamics of one of the oscillators follows, or anticipates, the dynamics of the other. Anticipated synchronization may occur between chaotic oscillators whose dynamics is described by delay differential equations, coupled in a drive-response configuration. In this case, the response anticipates the dynamics of the drive. Lag synchronization may occur when the strength of the coupling between phase-synchronized oscillators is increased.
Amplitude envelope synchronization
This is a mild form of synchronization that may appear between two weakly coupled chaotic oscillators. In this case, there is no correlation between phases or amplitudes; instead, the oscillations of the two systems develop a periodic envelope that has the same frequency in the two systems.
This frequency has the same order of magnitude as the difference between the average frequencies of oscillation of the two chaotic oscillators. Often, amplitude envelope synchronization precedes phase synchronization, in the sense that when the strength of the coupling between two amplitude-envelope-synchronized oscillators is increased, phase synchronization develops.
All these forms of synchronization share the property of asymptotic stability. This means that once the synchronized state has been reached, the effect of a small perturbation that destroys synchronization is rapidly damped, and synchronization is recovered again. Mathematically, asymptotic stability is characterized by a positive Lyapunov exponent of the system composed of the two oscillators, which becomes negative when chaotic synchronization is achieved.
Some chaotic systems allow even stronger control of chaos, and both synchronization of chaos and control of chaos constitute parts of what's known as "cybernetical physics".
Notes
References
See also
Kuramoto model
Chaos theory
Nonlinear systems | Synchronization of chaos | [
"Mathematics"
] | 1,438 | [
"Nonlinear systems",
"Dynamical systems"
] |
10,868,187 | https://en.wikipedia.org/wiki/Units%20of%20paper%20quantity | Various measures of paper quantity have been and are in use. Although units such as the quire and the bale are not SI units, there are ISO and DIN standards for the ream. Expressions used here include U.S. customary units.
Units
Writing paper measurements
25 sheets = 1 quire
500 sheets = 20 quires = 1 ream
1,000 sheets = 40 quires = 2 reams = 1 bundle
5,000 sheets = 200 quires = 10 reams = 5 bundles = 1 bale
200,000 sheets = 8,000 quires = 400 reams = 200 bundles = 40 bales = 1 pallet
'Short' paper measurements
24 sheets = 1 'short' quire
480 sheets = 20 'short' quires = 1 'short' ream
960 sheets = 40 'short' quires = 2 'short' reams = 1 'short' bundle
4,800 sheets = 200 'short' quires = 10 'short' reams = 5 'short' bundles = 1 'short' bale
Posters and printing measurements
516 sheets (= 21½ 'short' quires) = 1 printer's ream
1,032 sheets = 2 printer's reams = 1 printer's bundle
5,160 sheets = 5 printer's bundles = 1 printer's bale
Cover and index paper
250 sheets = 1 ream
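The relationships in the lists above are simple multiples of the sheet count, so they are easy to encode. A minimal sketch for ordinary writing paper with the 'long' 500-sheet ream (the table and helper function are invented for illustration):

```python
SHEETS_PER = {          # ordinary writing paper, 'long' (500-sheet) ream
    "quire": 25,
    "ream": 500,
    "bundle": 1_000,
    "bale": 5_000,
    "pallet": 200_000,
}

def to_unit(sheets: int, unit: str) -> float:
    """Convert a sheet count into the requested unit."""
    return sheets / SHEETS_PER[unit]

# Example: 12,500 sheets = 500 quires = 25 reams = 2.5 bales
for unit in ("quire", "ream", "bale"):
    print(f"12,500 sheets = {to_unit(12_500, unit):g} {unit}s")
```

The 'short' and printer's systems would use their own tables (24-sheet quires and 480- or 516-sheet reams, per the lists above).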
Quire
A quire of paper is a measure of paper quantity. The usual meaning is 25 sheets of the same size and quality: one-twentieth of a ream of 500 sheets. Quires of 25 sheets are often used for machine-made paper, while quires of 24 sheets are often used for handmade or specialised paper of 480-sheet reams. (As an old UK and US measure, in some sources, a quire was originally 24 sheets.) Quires of 15, 18 or 20 sheets have also been used, depending on the type of paper.
Etymology
The current word quire derives from Middle English quayer, from Old French quaier (cf. modern French cahier), from Latin quaternum, 'by fours', 'fourfold'. Later, when bookmaking switched to using paper and it became possible to easily stitch 5 to 7 sheets at a time, the association of the quire with four was quickly lost.
History
In the Middle Ages, a quire (also called a "gathering") was most often formed of four folded sheets of vellum or parchment, i.e. eight leaves or folios, 16 sides. The term quaternion designates such a quire. A quire made of a single folded sheet (i.e. two leaves, four sides) is a bifolium (plural bifolia); a binion is a quire of two sheets (i.e. four leaves, 8 sides); and a quinion is five sheets (ten leaves, 20 sides). This last meaning is preserved in the modern Italian term for quire, quinterno.
Formerly, when paper was packed at the paper mill, the top and bottom quires were made up of slightly damaged sheets ("outsides") to protect the good quires ("insides"). These outside quires were known as cassie quires (from French cassée, 'broken'), or "cording quires", and had only 20 sheets to the quire. The printer Philip Luckombe, in a book published in 1770, mentions both 24- and 25-sheet quires; he also details printer's wastage, and the sorting and recycling of damaged cassie quires. An 1826 French manual on typography complained that cording quires (usually containing some salvageable paper) from the Netherlands barely contained a single good sheet.
It also became the name for any booklet small enough to be made from a single quire of paper. Simon Winchester, in The Surgeon of Crowthorne, cites a specific number, defining quire as "a booklet eight pages thick." Several European words for quire keep the meaning of 'book of paper': German Buch, Danish bog, Dutch boek.
In blankbook binding, quire is a term indicating 80 pages.
Ream
A ream of paper is a quantity of sheets of the same size and quality. International standards organizations define the ream as 500 identical sheets. This ream of 500 sheets (20 quires of 25 sheets) is also known as a 'long' ream, and is gradually replacing the old value of 480 sheets, now known as a 'short' ream. Reams of 472 and 516 sheets are still current, but in retail outlets paper is typically sold in reams of 500. As an old UK and US unit, a perfect ream was equal to 516 sheets.
Certain types of specialist papers such as tissue paper, greaseproof paper, handmade paper, and blotting paper are still sold (especially in the UK) in 'short' reams of 480 sheets (20 quires of 24 sheets). However, the commercial use of the word 'ream' for quantities of paper other than 500 is now deprecated by such standards as ISO 4046. In Europe, the DIN 6730 standard for Paper and Board defines 1 ream of A4 80 gsm (80 g/m2) paper as 500 sheets.
Etymology
The word 'ream' derives from Old French rayme, from Spanish resma, from Arabic rizma, 'bundle' (of paper), from razama, 'collect into a bundle', reflecting the Moors having brought the manufacture of cotton paper to Spain. The early variant rym (late 15c.) suggests a Dutch influence (cf. Dutch riem), probably dating from the time of Spanish Habsburg control of the Netherlands.
History
The number of sheets in a ream has varied locally over the centuries, often according to the size and type of paper being sold. Reams of 500 sheets (20 quires of 25 sheets) were known in England in c. 1594; in 1706 a ream was defined as 20 quires, either 24 or 25 sheets to the quire. In 18th- and 19th-century Europe, the size of the ream varied widely. In Lombardy a ream of music paper was 450 or 480 sheets; in Britain, Holland and Germany a ream of 480 sheets was common; in the Veneto it was more frequently 500. Some paper manufacturers counted 546 sheets (21 quires of 26 sheets). J. S. Bach's manuscript paper at Weimar was ordered by the ream of 480 sheets. In 1840, a ream in Lisbon was 17 (25-sheet) quires and three sheets = 428 sheets, and a double ream was 18 (24-sheet) quires and two sheets = 434 sheets; and in Bremen, blotting or packing paper was sold in reams of 300 (20 quires of 15 sheets). A mid-19th century Milanese-Italian dictionary has an example for a risma (ream) as being either 450 or 480 sheets.
In the UK in 1914, paper was sold using the following reams:
472 sheets: mill ream (18 short quires of 24 sheets of 'insides', two cording quires of 20 sheets of 'outsides')
480 sheets: (20 short quires of 24 sheets) – now called 'short' ream (as an old UK and US measure, in some sources, a ream was previously equal to 480 sheets)
500 sheets: (20 quires of 25 sheets) – now also called 'long' ream
504 sheets: stationer's ream (21 short quires)
516 sheets: printer's ream (21½ short quires) – also called 'perfect ream'
Reams of 500 sheets were mostly used only for newsprint. Since the late 20th century, the 500-sheet ream has become the de facto international standard.
Bundle
A paper bundle is a quantity of sheets of paper, currently standardized as 1,000 sheets. A bundle consists of two reams or 40 quires. As an old UK and US measure, it was previously equal to 960 sheets.
When referring to chipboard, there are two standards in the US. In general, a package of approximately 50 pounds of chipboard is called a bundle. Thus, a bundle of 22 point chipboard (0.022" thick) 24" × 38", with each sheet weighing 0.556 pounds, contains 90 sheets. However, chipboard sold in size 11" × 17" and smaller is packaged and sold as bundles of 25 pounds.
Bale
A paper bale is a quantity of sheets of paper, currently standardized as 5,000 sheets. A bale consists of five bundles, ten reams or 200 quires. As an old UK and US measure, it was previously equal to 4800 sheets.
See also
Book size
History of paper
History of printing
Octavo
Paper density
Paper size
Explanatory notes
References
External links
ream (rm) at A Dictionary of Units of Measurement
ream at The Online Quantinary (yet see also quire at the same site for historical evidence of 500-sheet reams as early as 1590.)
Ream on Paper Dictionary
Paper
Units of amount
fr:Rame | Units of paper quantity | [
"Mathematics"
] | 1,907 | [
"Units of amount",
"Quantity",
"Units of measurement"
] |
10,868,200 | https://en.wikipedia.org/wiki/Downhole%20safety%20valve | A downhole safety valve refers to a component on an oil and gas well, which acts as a failsafe to prevent the uncontrolled release of reservoir fluids in the event of a worst-case-scenario surface disaster. It is almost always installed as a vital component on the completion.
Operation
These valves are commonly uni-directional flapper valves which open downwards, such that the flow of wellbore fluids tries to push them shut while pressure from the surface pushes them open. This means that when closed, the valve isolates the reservoir fluids from the surface.
Most downhole safety valves are controlled hydraulically from the surface, meaning they are opened using a hydraulic connection linked directly to a well control panel. When hydraulic pressure is applied down a control line, the hydraulic pressure forces a sleeve within the valve to slide downwards. This movement compresses a large spring and pushes the flapper downwards to open the valve. When hydraulic pressure is removed, the spring pushes the sleeve back up and causes the flapper to shut. In this way, it is failsafe and will isolate the wellbore in the event of a loss of the wellhead. The full designation for a typical valve is 'tubing retrievable, surface controlled, subsurface safety valve', abbreviated to TR-SCSSV.
Positioning
The location of the downhole safety valve within the completion is a precisely determined parameter intended to optimise safety. There are arguments against it being either too high or too low in the well, so the final depth is a compromise of all factors. MMS regulations state that the valve must be placed a minimum specified distance below the mudline.
Reasons to keep it high
The further down the well the DHSV is located, the greater the potential inventory of hydrocarbons above it when closed. This means that in the event of loss of containment at surface, there is more fluid to be spilled causing environmental damage, or in the worst case, more fuel for a fire. Therefore, placing the valve higher limits this hazard.
Another reason relates to the hydraulic control line. Hydraulic pressure is required to keep the valve open as part of the failsafe design. However, if the valve is too far down the well, then the weight of the hydraulic fluid alone may apply sufficient pressure to keep the valve open, even with the loss of surface pressurisation.
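The hydrostatic head follows P = ρgh, so the pressure exerted by the static column of control-line fluid grows linearly with the valve's setting depth. The sketch below uses assumed, illustrative values for the fluid density and the valve's opening pressure; neither figure comes from this article.

```python
# Hydrostatic head of control-line fluid at the valve setting depth,
# P = rho * g * h. The density and opening pressure below are assumed
# illustrative values, not figures from the article.

rho = 850.0     # control-fluid density, kg/m^3 (assumed)
g = 9.81        # gravitational acceleration, m/s^2
p_open = 20e6   # hydraulic pressure needed to hold the valve open, Pa (assumed)

def head_pressure(depth_m: float) -> float:
    """Pressure (Pa) from the fluid column alone at a given depth."""
    return rho * g * depth_m

# Depth at which the fluid's own weight could hold the valve open,
# defeating the failsafe even if surface pressure is lost:
critical_depth = p_open / (rho * g)

print(f"head at 500 m: {head_pressure(500.0) / 1e6:.1f} MPa")  # ~4.2 MPa
print(f"critical depth: {critical_depth:.0f} m")               # ~2399 m
```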
Reasons to keep it low
As part of the DHSV's role of isolating the surface from wellbore fluids, the valve must be positioned where it cannot easily come to harm. This implies that it must be placed subsurface in all circumstances, i.e. in offshore wells, not above the seabed. There is also the risk of cratering in the event of a catastrophic loss of the topside facility. The valve is specifically placed below the maximum depth at which cratering is expected to be a risk.
If there is a risk of methane hydrate (clathrate) plugs forming as the pressure changes through the valve due to Joule–Thomson cooling, then this is a reason to keep it low, where the rock is warmer than an appropriately-calculated temperature.
Deploying and retrieving
Most downhole safety valves installed as part of the completion design are classed as "tubing retrievable". This means that they are installed as a component of the completion string and run in during completion. Retrieving the valve, should it malfunction, requires a workover. The full name for this most common type of downhole safety valve is a Tubing Retrievable Surface Controlled Sub-Surface Valve, shortened in completion diagrams to TRSCSSV.
If a tubing retrievable valve fails, rather than go to the expense of a workover, a "wireline retrievable" valve may be used instead. This type of valve can fit inside the production tubing and is deployed on wireline after the old valve has been straddled open.
Legal requirement
The importance of DHSVs is undisputed. Graphic images of burning oil wells in Kuwait after the First Gulf War, their wellheads having been removed, demonstrate the perils of not using these components (at the time, they were deemed unnecessary because the wells were onshore). It is, however, not a direct legal requirement in many places. In the United Kingdom, no law mandates the use of DHSVs. However, the 1974 Health & Safety at Work Act requires that measures are taken to ensure that the uncontrolled release of wellbore fluids is prevented even in the worst case. The strength of the act is that it does not issue prescriptive guidelines for how to achieve the goal of health and safety, but merely sets out the requirement that the goal be achieved. It is up to the oil companies to decide how to achieve it, and DHSVs are an important component of that decision. As such, although not a legal requirement, it is company policy for many operators in the UKCS.
Issues
While the DHSV isolates the production tubing, a loss of integrity could allow wellbore fluid to bypass the valve and escape to surface through the annulus. For wells using gas lift, it may be a requirement to install a safety valve in the 'A' annulus of the well to ensure that the surface is protected from a loss of annulus containment. However, these valves are not as common and they are not necessarily installed at the same position in the well, meaning it is possible that fluids could snake their way around the valves to surface.
See also
Completion
Oil well
References
Oil wells | Downhole safety valve | [
"Chemistry"
] | 1,154 | [
"Petroleum technology",
"Oil wells"
] |
10,868,301 | https://en.wikipedia.org/wiki/Neoschizomer | Neoschizomers are restriction enzymes that recognize the same nucleotide sequence, but cleave at different sites. The first restriction enzyme discovered to recognize a sequence is called the prototype, and others that recognize the same sequence are isoschizomers. Neoschizomers are a subset of isoschizomers.
For example, MaeII is the prototype enzyme for the sequence "ACGT", with the cleavage site A↓CGT. One of its neoschizomers, Tsp49I, also recognizes the sequence "ACGT", but cleaves at ACGT↓.
Another example is SmaI (CCC↓GGG), which is a neoschizomer of XmaI (C↓CCGGG).
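Computationally, the difference between a prototype and its neoschizomer is only the offset of the cut within the shared recognition sequence. The sketch below is a simplified, single-strand illustration with a made-up example sequence.

```python
# Toy illustration: MaeII and Tsp49I both recognize ACGT but cut at
# different offsets (A^CGT vs ACGT^). Real enzymes cut both strands of
# a duplex; only the top strand is modeled here.

def cut_positions(seq: str, site: str, offset: int) -> list:
    """Return 0-based cut indices for every occurrence of `site`."""
    return [i + offset for i in range(len(seq) - len(site) + 1)
            if seq[i:i + len(site)] == site]

seq = "TTACGTGGACGTAA"
print("MaeII  (A^CGT):", cut_positions(seq, "ACGT", 1))  # [3, 9]
print("Tsp49I (ACGT^):", cut_positions(seq, "ACGT", 4))  # [6, 12]
```

Identical recognition positions with different cut indices is exactly what distinguishes a neoschizomer from an ordinary isoschizomer, which would return the same indices as the prototype.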
Use in molecular biology
External links
Rebase restriction enzyme database, http://rebase.neb.com/rebase/
See also
Isoschizomer
Isocaudomer
List of restriction enzyme cutting sites
References
Restriction enzymes
pl:Izoschizomery | Neoschizomer | [
"Biology"
] | 217 | [
"Genetics techniques",
"Restriction enzymes"
] |
10,868,982 | https://en.wikipedia.org/wiki/Q40%20%28motherboard%29 | The Q40 and Q60 (sometimes known generically as the Qx0 series) are computer motherboards designed in the late 1990s, based on the Motorola 68040 and 68060 microprocessors respectively and intended to be partially compatible with the Sinclair QL microcomputer. Later these were sold as a fully assembled computer in an AT desktop case.
Hardware
The Q40 and Q60 motherboards were designed by Peter Graf of Germany and manufactured by D&D Systems of the United Kingdom.
Peter Graf designed it to fit any standard AT form factor computer case, although D&D later sold it as a complete, fully assembled computer.
The Q40 consists of a sub-AT form factor board comprising a 40 MHz 68040 processor, 1 MiB of video RAM, and several PLDs implementing a QL-compatible video display generator, an ISA bus, stereo 20 kHz audio DACs and an AT keyboard interface. Floppy disk, ATA hard disk, RS-232 and Centronics printer port interfaces are provided by an ISA "multi-I/O" card in one of the two ISA slots provided. Up to 32 MiB of FPM or EDO RAM can be installed in two 72-pin SIMM slots. Also included are sockets for two ROM devices, 2 kiB of non-volatile RAM and a real-time clock. Both of the QL's standard video modes are supported, plus two extended modes: 512×256 or 1024×512 pixels with 16-bit colour. The Q40 board was produced in limited quantities before being superseded by the Q40i; essentially a Q60 board with a 40 MHz 68040.
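A quick arithmetic check, assuming a simple linear framebuffer (which the text does not specify), shows that the largest extended mode exactly fills the fitted video RAM:

```python
# The Q40's largest video mode, 1024 x 512 pixels at 16-bit colour,
# needs 2 bytes per pixel; this is exactly the 1 MiB of video RAM fitted.

width, height, bytes_per_pixel = 1024, 512, 2
framebuffer = width * height * bytes_per_pixel

print(framebuffer)             # 1048576 bytes
print(framebuffer == 1 << 20)  # True: exactly 1 MiB
```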
The Q60 became available in 1999. It is a revised board with a 66 MHz 68060 (Q60/66) or 80 MHz 68LC060 (Q60/80) processor and support for up to 128 MiB of RAM.
In 2013 Peter Graf announced that he was working on the Q68, an FPGA-based, QL-compatible single-board computer. The Q68 was first presented to the public in April 2014 and became available in autumn 2017. It is produced and marketed by Derek Stewart (formerly of D&D Systems).
Software
Three operating systems are available for the Q40/Q60; these comprise QDOS Classic (an enhanced version of Qdos 1.10), SMSQ/E and a custom Linux distribution.
References
External links
Official Q40/Q60 website
68k-based computers
Motherboard
Sinclair QL clones | Q40 (motherboard) | [
"Technology"
] | 516 | [
"Computing stubs",
"Computer hardware stubs"
] |
10,869,637 | https://en.wikipedia.org/wiki/Aluminium%20diboride | Aluminium diboride (AlB2) is a chemical compound made from the metal aluminium and the metalloid boron. It is one of two compounds of aluminium and boron, the other being AlB12, which are both commonly referred to as aluminium boride.
Structurally the B atoms form graphite-like sheets with Al atoms between them, and this is very similar to the structure of magnesium diboride. Single crystals of AlB2 exhibit metallic conductivity along the axis parallel to the basal hexagonal plane.
Aluminium boride is considered a hazardous substance because it reacts with acids to produce toxic gases; for example, it reacts with hydrochloric acid to release borane and aluminium chloride.
The crystal structure of AlB2 is often used as a prototype structure to describe intermetallic compounds. There are a large number of structure types that fall within the AlB2 structural family.
See also
Boride
References
External links
"Boron";
Borides
Aluminium compounds | Aluminium diboride | [
"Chemistry"
] | 206 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
10,870,301 | https://en.wikipedia.org/wiki/Metrifonate | Metrifonate (INN) or trichlorfon (USAN) is an irreversible organophosphate acetylcholinesterase inhibitor. It is a prodrug which is activated non-enzymatically into the active agent dichlorvos.
It is used as an insecticide.
According to the US Environmental Protection Agency, trichlorfon has been used on golf course turf, home lawns, non-food contact areas of food and meat processing plants, ornamental shrubs and plants, and ornamental and baitfish ponds. It has been used to control caterpillars, white grubs, mole crickets, cattle lice, sod webworms, leaf miners, stink bugs, flies, ants, cockroaches, earwigs, crickets, diving beetles, water scavenger beetles, water boatmen, backswimmers, water scorpions, giant water bugs and pillbugs. After reregistration, a number of its uses were voluntarily restricted; currently, it is used in nonfood areas to control flies, roaches, and ants among other pests. Outdoors it is used on ornamental plants, golf courses, and lawn grass to treat lepidopteran larval pests; it is also used to treat flies in animal husbandry, in areas that are not accessible to animals, and to control harvester ants.
It can be used to treat schistosomiasis caused by Schistosoma haematobium, but is no longer commercially available.
It has been proposed for use in treatment of Alzheimer's disease, but use for that purpose is not currently recommended.
Bans and restrictions
In the United States, trichlorfon/metrifonate may only be used on nonfood and nonfeed sites.
Trichlorfon/metrifonate was banned in the EU in 2008 (Regulation (EC) 689/2008) and in Brazil in 2010.
Trichlorfon/metrifonate was banned in Argentina in 2018, noting that trichlorvon converts to dichlorvos by metabolism in plants, as well as by biodegradation of the soil.
Trichlorfon/metrifonate was banned in New Zealand in 2011.
Trichlorfon/metrifonate was banned in India from 2020.
References
Acetylcholinesterase inhibitors
Insecticides
Antiparasitic agents
Organophosphate insecticides
Phosphonate esters
Trichloromethyl compounds
Prodrugs | Metrifonate | [
"Chemistry",
"Biology"
] | 527 | [
"Chemicals in medicine",
"Biocides",
"Antiparasitic agents",
"Prodrugs"
] |
10,870,385 | https://en.wikipedia.org/wiki/Radicicol | Radicicol, also known as monorden, is a natural product that binds to Hsp90 (Heat Shock Protein 90) and alters its function. HSP90 client proteins play important roles in the regulation of the cell cycle, cell growth, cell survival, apoptosis, angiogenesis and oncogenesis.
Biosynthesis
Biosynthesis of radicicol has been best studied in Pochonia chlamydosporia, in which the majority of the core structure is produced in vivo by iterative type I polyketide synthases. The structure produced is the earliest intermediate in radicicol biosynthesis, monocillin II. This intermediate is transformed into radicicol through halogenation and epoxide formation, performed by RadH and RadP respectively. These enzymes are encoded by the genes Rdc2 and Rdc4 in the pathway; removing either of these results in a product that retains the monocillin II core but lacks the corresponding epoxide or halogen.
See also
Geldanamycin
References
Further reading
Review of the chemistry and biology of resorcylic acid lactones, including radicicol.
Epoxides
Macrolides
Halogen-containing natural products
Polyketides
Chloroarenes | Radicicol | [
"Chemistry"
] | 271 | [
"Biomolecules by chemical classification",
"Natural products",
"Polyketides"
] |
10,870,901 | https://en.wikipedia.org/wiki/NGC%206818 | The Little Gem Nebula or NGC 6818 is a planetary nebula located in the constellation of Sagittarius. It has magnitude 10 and oval diameter of 15 to 22 arcseconds with a 15th magnitude central star.
It was discovered by William Herschel in 1787.
NGC 6818 is located in the constellation of Sagittarius (The Archer), roughly 6000 light-years away from Earth. The glow of the cloud is just over half a light-year across.
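The quoted distance and angular size are consistent with this physical extent under the small-angle approximation s ≈ dθ, as the quick check below shows using the article's round values.

```python
import math

# Small-angle estimate of the nebula's physical size from the figures
# above: roughly 6000 light-years away and about 22 arcseconds across.
distance_ly = 6000.0
theta_arcsec = 22.0

theta_rad = math.radians(theta_arcsec / 3600.0)  # arcsec -> deg -> rad
size_ly = distance_ly * theta_rad                # s = d * theta

print(f"{size_ly:.2f} light-years")  # ~0.64 ly, "just over half a light-year"
```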
When stars like the Sun near the end of their lives, they expel their outer layers into space to create glowing clouds of gas, planetary nebulae. This ejection of mass is uneven, and planetary nebulae can have complex shapes. NGC 6818 shows knotty, filament-like structures and distinct layers of material, with a bright, enclosed central bubble surrounded by a larger, more diffuse cloud.
Scientists believe that the stellar wind from the central star propels the outflowing material, forming the elongated shape of NGC 6818. As this stellar wind moves through the slower-moving cloud it creates particularly bright spots in the bubble's outer layers.
Gallery
See also
List of NGC objects
Planetary nebula
References
Robert Burnham, Jr, Burnham's Celestial Handbook: An observer's guide to the universe beyond the solar system, vol 3, p. 1558
External links
Planetary Nebula NGC 6818
Encyclopedia article
Planetary nebulae
6818
Sagittarius (constellation)
Astronomical objects discovered in 1787 | NGC 6818 | [
"Astronomy"
] | 298 | [
"Nebula stubs",
"Sagittarius (constellation)",
"Astronomy stubs",
"Constellations"
] |
14,502,375 | https://en.wikipedia.org/wiki/Gaston%20Bonnier | Gaston Eugène Marie Bonnier (; 9 April 1853 – 2 January 1922) was a French botanist and plant ecologist.
Biography
Bonnier first studied at École Normale Supérieure in Paris from 1873 to 1876. Together with Charles Flahault, he studied at Uppsala University in 1878. They published two articles about their impressions:
Observations sur la flore cryptogamique de la Scandinavie
Sur la distribution des végétaux dans la region moyenne de la presqu’ile Scandinave (both with Charles Flahault 1879)
He became assistant professor, later full professor, of botany at the Sorbonne in 1887 and, in addition, he founded a Plant Biological Laboratory in Fontainebleau in 1889. The same year, he co-founded the scientific journal Revue Générale de Botanique, which he edited until 1922.
He was an early exponent of experimental plant ecology. He transplanted alpine plants between the Alps and Pyrenees and the research garden in Fontainebleau. The results were published in:
Cultures expérimentales dans les Alpes et les Pyrénées. Revue Générale de Botanique 2 (1890): 513–546.
Les plantes arctiques comparées aux mêmes espèces des Alpes et des Pyrénées (1894).
Nouvelles observations sur les cultures expérimentales à diverses altitudes et cultures par semis. Revue Générale de Botanique 22 (1920): 305–326.
He authored several floras of France, such as
Nouvelle flore du Nord de la France et de la Belgique pour la détermination facile des plantes sans mots techniques. Vol. I. Tableaux synoptiques des plantes vasculaires de la flore de la France. P. Dupont, Paris, 1894 (With Georges de Layens (1834–1897)).
Vol. II. Nouvelle Flore des mousses et des hépatiques with Charles Isidore Douin (1858–1944). P. Dupont, Paris, 1895.
Vol. III. Nouvelle Flore des champignons with Julien Noël Costantin (1857–1936) and Léon Jean Marie Dufour (1862–1942). P. Dupont, Paris, 1895.
Flore complète illustrée de France, Suisse et Belgique. (1911).
Notable students of Gaston Bonnier include Henri Devaux, Maurice Bouly de Lesdain, Paul Becquerel, Louis Emberger, Paul Jaccard, and Albert Maige among others.
References
1853 births
1922 deaths
Members of the French Academy of Sciences
Corresponding members of the Saint Petersburg Academy of Sciences
20th-century French botanists
Academic staff of the University of Paris
French ecologists
19th-century French botanists
Plant ecologists
Lamarckism | Gaston Bonnier | [
"Biology"
] | 569 | [
"Non-Darwinian evolution",
"Biology theories",
"Obsolete biology theories",
"Lamarckism"
] |
14,503,028 | https://en.wikipedia.org/wiki/IMP%20cyclohydrolase | In enzymology, an IMP cyclohydrolase (EC 3.5.4.10) is an enzyme that catalyzes the chemical reaction
IMP + H2O ⇌ 5-formamido-1-(5-phospho-D-ribosyl)imidazole-4-carboxamide
Thus, the two substrates of this enzyme are IMP and H2O, whereas its product is 5-formamido-1-(5-phospho-D-ribosyl)imidazole-4-carboxamide.
This enzyme belongs to the family of hydrolases, those acting on carbon-nitrogen bonds other than peptide bonds, specifically in cyclic amidines. The systematic name of this enzyme class is IMP 1,2-hydrolase (decyclizing). Other names in common use include inosinicase and inosinate cyclohydrolase. This enzyme catalyses the cyclisation of 5-formamidoimidazole-4-carboxamide ribonucleotide to IMP, a reaction which is important in de novo purine biosynthesis in archaeal species.
Structural studies
In most cases this single-domain protein is arranged to form an overall fold that consists of a four-layered alpha-beta-beta-alpha core structure. The two antiparallel beta-sheets pack against each other and are covered by alpha-helices on one face of the molecule. The protein is structurally similar to members of the N-terminal nucleophile (NTN) hydrolase superfamily. A deep pocket was in fact found on the surface of IMP cyclohydrolase in a position equivalent to that of active sites of NTN-hydrolases, but an N-terminal nucleophile could not be found. Therefore, it is thought that this enzyme is structurally but not functionally similar to members of the NTN-hydrolase family.
As of late 2007, 14 structures had been solved for this class of enzymes and deposited in the Protein Data Bank.
References
Further reading
Protein domains
EC 3.5.4
Enzymes of known structure | IMP cyclohydrolase | [
"Biology"
] | 460 | [
"Protein domains",
"Protein classification"
] |
14,503,680 | https://en.wikipedia.org/wiki/Unger%20model | The Unger Model is an empirical standard model for near-end crosstalk (NEXT) power spectra as experienced by communication systems over unshielded twisted pair (UTP).
Twisted pair cables are usually grouped together in a binder, where they experience crosstalk. Based on empirical observations, Unger proposed that, at the 1% worst case, the NEXT power transfer function due to a single disturber can be bounded by
|H_NEXT(f)|² = x_49 · (1/49)^0.6 · f^(3/2),
while the NEXT power transfer function due to 49 disturbers (a full binder) can be bounded by
|H_NEXT(f)|² = x_49 · f^(3/2),
where f is the frequency in Hz, x_49 = 8.818 × 10⁻¹⁴ is the empirical 49-disturber coupling constant, and n disturbers in general scale the coupling by (n/49)^0.6.
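A minimal numerical sketch of the model as written above; the coupling constant and the (n/49)^0.6 disturber scaling are the commonly cited values and should be treated here as assumptions rather than as details from Unger's original report.

```python
import math

# Sketch of the 1% worst-case NEXT coupling bound, assuming the commonly
# cited 49-disturber constant x49 = 8.818e-14 (f in Hz) and the
# (n/49)**0.6 scaling for n disturbers.

X49 = 8.818e-14

def next_coupling(f_hz: float, n_disturbers: int = 49) -> float:
    """|H_NEXT(f)|^2 bound for n disturbers sharing the binder."""
    return X49 * (n_disturbers / 49.0) ** 0.6 * f_hz ** 1.5

# Coupling loss in dB at 1 MHz, single disturber vs full binder:
for n in (1, 49):
    loss_db = -10.0 * math.log10(next_coupling(1e6, n))
    print(f"n = {n:2d}: {loss_db:.1f} dB")  # ~50.7 dB and ~40.5 dB
```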
References
See also
Digital subscriber line
Telecommunications | Unger model | [
"Technology"
] | 117 | [
"Information and communications technology",
"Telecommunications"
] |
14,503,823 | https://en.wikipedia.org/wiki/Esophageal%20stent | An esophageal stent is a stent (tube) placed in the esophagus to keep a blocked area open so the patient can swallow soft food and liquids. They are effective in the treatment of conditions causing intrinsic esophageal obstruction or external esophageal compression. For the palliative treatment of esophageal cancer most esophageal stents are self-expandable metallic stents. For benign esophageal disease such as refractory esophageal strictures, plastic stents are available. Common complications include chest pain, overgrowth of tissue around the stent and stent migration. Esophageal stents may also be used to staunch the bleeding of esophageal varices.
Esophageal stents are placed using endoscopy: the tip of the endoscope is positioned above the area to be stented, and a guidewire is passed through the obstruction into the stomach. The endoscope is withdrawn and, using either fluoroscopic or endoscopic guidance, the stent is passed down the guidewire to the affected area of the esophagus and deployed. Finally, the guidewire is removed and the stent is left to expand fully over the next 2–3 days.
In one study of 997 patients who had self-expanding metal stents for malignant esophageal obstruction it was found that esophageal stents were 95% effective.
Pros of Esophageal Stent
There are several potential benefits of an esophageal stent procedure:
Symptom relief: Stents can alleviate symptoms such as difficulty swallowing, chest pain, and weight loss caused by a narrowed or blocked esophagus.
Fast results: The procedure is normally performed in a day, with a quick recovery.
Minimally invasive: Because it is performed with an endoscope, the procedure is less invasive than some other treatments.
Palliative care: Stents help patients with advanced esophageal cancer by relieving symptoms and improving quality of life.
Alternative to surgery: For older or less healthy patients, an esophageal stent is a viable alternative to surgery.
Cons of Esophageal Stent
There are also several potential drawbacks to an esophageal stent procedure:
Complications: Bleeding, infection, and perforation of the esophagus may occur.
Stent migration: Stent may move causing symptoms to recur or lead to other complications.
Stent obstruction: Blockage can occur, causing symptoms to recur or leading to other complications.
Stent-related pain: Chest or throat pain may occur after the procedure, requiring additional treatment or adjustment of the stent.
Stent removal: Depending on the type of stent used, it may need to be removed at a later date, a process that can involve additional complications; patients should discuss this with their doctor.
Additional images
References
External links
Esophageal stent entry in the public domain NCI Dictionary of Cancer Terms
Surgical oncology
Implants (medicine)
Medical devices | Esophageal stent | [
"Biology"
] | 648 | [
"Medical devices",
"Medical technology"
] |
14,503,865 | https://en.wikipedia.org/wiki/Estrogen%20receptor%20test | The estrogen receptor test (ERT) is a laboratory test to determine whether cancer cells have estrogen receptors. This information can guide treatment of the cancer.
The test uses immunohistochemical techniques on the estrogen receptor (ER). Immunohistochemistry (IHC) methods involve selective identification of antigen proteins by exploiting antigen–antibody relationships.
History
Historically, the ligand binding assay was used to determine ER activity. This method was limited because large quantities of fresh tissue were needed for each assay. IHC can be performed on fixed tissue and needle biopsies, and is more accurate in assessing ER status of a tumor.
Today, ER analysis is one of many routinely performed immunohistochemical assays performed to classify hormone receptor status of breast cancers to provide insight into cancer prognosis and management.
Estrogen receptor types
There are two main types of estrogen receptor (ER): estrogen receptor alpha (ERα), and estrogen receptor beta (ER-β), also known as NR3A2. Both are nuclear receptors activated by the sex hormone estrogen. Estrogen signaling can be selectively stimulated or inhibited, dependent on the equilibrium of these two receptor types in target organs. These two ER types are encoded by different genes located on separate chromosomes and have different functions. ERα is encoded by the ESR1 (Estrogen Receptor 1) gene, is mostly active in the mammary gland and uterus, and aids in the regulation of skeletal homeostasis and metabolism. ER-β plays a prominent role in the central nervous and immune systems.
Immunohistochemical assessment
The ERT immunohistochemical assessment is a semi-quantitative method used to predict the likelihood of successful treatment of breast cancer with anti-estrogen therapy. ER-positive breast carcinomas are likely to respond to endocrine treatments. Therefore, monitoring ER activity can be essential in understanding disease progression and guiding treatment.
Various target antibodies may be used in the IHC assessment of the ER. Typically, the antibody used is the anti-Estrogen Receptor (ER) (SP1) rabbit monoclonal antibody. Employing SP1 allows detection of estrogen receptor (ER) antigens in sections of fixed tissue samples. In conjunction with light microscopy, approximate ER activity can be estimated from the staining intensity of the cell's components. The anti-ER (SP1) antibody targets the ER alpha protein (ERα) located in the nucleus of ER-positive cells. The anti-ER (SP1) antibody's response is a useful indicator for assessing progression, guiding management, and predicting therapy outcome in breast cancer. These antibodies are commercially available from three commonly used autostainer vendors: Dako, Leica, and Ventana. In a study by Kornaga et al., all three behaved similarly in the semi-quantitative analysis of breast cancer biopsy samples.
In a study in 2002, six breast carcinoma cases were received, characterized, and analyzed through the ERT IHC assessment. The level of known ER activity was classified (negative, low, medium, and high) and selected for observation. After embedment in a paraffin block, the samples were stained using a hematoxylin and eosin staining (H & E staining) system. The IHC analysis was performed on the same day using anti-ER monoclonal antibodies. There was a consistently strong correlation between the IHC results and the known ER activity.
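One widely used semi-quantitative read-out of this kind is the Allred score, which sums a proportion score (0–5) and an intensity score (0–3); totals of 3 or higher are usually reported as ER-positive. The sketch below uses the commonly published cut-offs, which are an assumption here rather than details taken from the studies above.

```python
# Illustrative Allred-style ER scoring. Thresholds follow the commonly
# published scheme (assumed, not from this article): proportion 0-5 plus
# intensity 0-3 gives 0-8, with totals >= 3 usually read as ER-positive.

def proportion_score(fraction_positive: float) -> int:
    if fraction_positive <= 0.0:
        return 0
    for cutoff, score in [(2 / 3, 5), (1 / 3, 4), (0.10, 3), (0.01, 2)]:
        if fraction_positive >= cutoff:
            return score
    return 1  # positive nuclei present, but fewer than 1%

def allred(fraction_positive: float, intensity: int):
    """intensity: 0 none, 1 weak, 2 intermediate, 3 strong."""
    total = proportion_score(fraction_positive) + intensity
    return total, total >= 3

print(allred(0.40, 2))   # (6, True)  -> reported ER-positive
print(allred(0.005, 1))  # (2, False) -> reported ER-negative
```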
Estrogen receptor test and breast cancer
Observation of estrogen receptor activity provides insight into growth and proliferation of breast cancer. The complex biochemical reactions of estrogen receptors are necessary for the mediation of cellular interactions in response to various cell-altering factors, including ligands, cofactors, and other simulative complexes.
Monitoring tumor development with the estrogen receptor test
The estrogen receptor is a regulator of cellular functions, including cell growth and proliferation, and can serve as a means of inter-cellular differentiation. Monitoring the activity of the ER via the ERT is necessary as it plays an essential role in normal breast development and function, as well as in cancerous situations. Accurate measurements of the ER activities are critical in the initial classification and monitoring of progression in breast cancers. The ER can serve as an indicative biomarker as it is a potential predictor for the clinical responses of a patient to certain treatments. Patients with breast cancer that is ER-positive at presentation are most likely to respond to cancer treatments through endocrine therapy.
Estrogen receptor test in mammary epithelial cancer
Estrogen receptors are over-expressed in approximately 70% of diagnosed breast cancers. Increasing exposure of the mammary epithelium to estrogen is related to the risk of breast cancer, as the binding of estrogen to its receptor in mammary cells drives increased cell division and DNA synthesis. This ultimately leads to a higher rate of replication errors, and the disruption of normal cellular processes results in errors in apoptosis, cellular proliferation, or DNA repair.
The ERT has been suggested as a predictor for the level of success of the use of endocrine therapy in cancer treatment. Many of the endocrine therapies for breast cancer treatments involve the use of selective estrogen receptor modulators (SERMs). SERMs, such as tamoxifen, are ER antagonists in breast tissue. Estrogen receptor tests are used in determining the sensitivity of breast cancer lesions to tamoxifen. Patients with ER-positive tumors are likely to respond well to these endocrine therapies.
References
Tumor markers | Estrogen receptor test | [
"Chemistry",
"Biology"
] | 1,154 | [
"Chemical pathology",
"Tumor markers",
"Biomarkers"
] |
14,503,907 | https://en.wikipedia.org/wiki/Metals%20of%20antiquity | The metals of antiquity are the seven metals which humans had identified and found use for in prehistoric times in Africa, Europe and throughout Asia: gold, silver, copper, tin, lead, iron, and mercury.
Zinc, arsenic, and antimony were also known during antiquity, but they were not recognised as distinct metals until later. A special case is platinum; it was known to native South Americans around the time Europe was going through classical antiquity, but was unknown to Europeans until the 18th century. Thus, at most eleven elemental metals and metalloids were known by the end of antiquity; this contrasts greatly with the situation today, with over 90 elemental metals known. Bismuth only began to be recognised as distinct around 1500, by the European and Incan civilisations. The first elemental metal with a clearly identifiable discoverer is cobalt, discovered in 1735 by Georg Brandt, by which time the Scientific Revolution was in full swing. (Even then, cobalt might have been prepared before the 13th century by alchemists roasting and reducing its ore, but, in any case, its distinct nature was not recognised.)
History
Copper was probably the first metal mined and crafted by humans. It was originally obtained as a native metal and later from the smelting of ores. Earliest estimates of the discovery of copper suggest around 9000 BC in the Middle East. It was one of the most important materials to humans throughout the Chalcolithic and Bronze Ages. Copper beads dating from 6000 BC have been found in Çatalhöyük, Anatolia, and the archaeological site of Belovode on the Rudnik mountain in Serbia contains the world's oldest securely dated evidence of copper smelting from 5000 BC. It was recognised as an element by Louis Guyton de Morveau, Antoine Lavoisier, Claude Berthollet, and Antoine-François de Fourcroy in 1787.
It is believed that lead smelting began at least 9,000 years ago, and the oldest known artifact of lead is a statuette found at the temple of Osiris on the site of Abydos dated around 3800 BC. It was recognised as an element by Guyton de Morveau, Lavoisier, Berthollet, and Fourcroy in 1787.
The earliest gold artifacts were discovered at the site of Wadi Qana in the Levant. Silver is estimated to have been discovered in Asia Minor shortly after copper and gold.
There is evidence that iron was known from before 5000 BC. The oldest known iron objects used by humans are some beads of meteoric iron, made in Egypt in about 4000 BC. The discovery of smelting around 3000 BC led to the start of the Iron Age around 1200 BC and the prominent use of iron for tools and weapons. It was recognised as an element by Guyton de Morveau, Lavoisier, Berthollet, and Fourcroy in 1787.
Tin was first smelted in combination with copper around 3500 BC to produce bronze, giving rise to the Bronze Age (except in some places, which did not experience a significant Bronze Age and passed directly from the Neolithic Stone Age to the Iron Age). Kestel, in southern Turkey, is the site of an ancient cassiterite mine that was used from 3250 to 1800 BC. The oldest artifacts date from around 2000 BC. It was recognised as an element by Guyton de Morveau, Lavoisier, Berthollet, and Fourcroy in 1787.
Characteristics
Melting point
The metals of antiquity generally have low melting points, with iron being the exception.
Mercury melts at −38.829 °C (−37.89 °F) (being liquid at room temperature).
Tin melts at 231 °C (449 °F)
Lead melts at 327 °C (621 °F)
Silver at 961 °C (1763 °F)
Gold at 1064 °C (1947 °F)
Copper at 1084 °C (1984 °F)
Iron is the outlier at 1538 °C (2800 °F), making it far more difficult to melt in antiquity. Cultures developed ironworking proficiency at different rates; evidence from the Near East suggests that smelting was possible but impractical circa 1500 BC, and relatively commonplace across most of Eurasia by 500 BC. Before this period, generally taken as the start of the Iron Age, melting iron was effectively impossible.
The other metals discovered before the Scientific Revolution largely fit the pattern, except for high-melting platinum:
Bismuth melts at 272 °C (521 °F)
Zinc melts at 420 °C (787 °F), but importantly boils at 907 °C (1665 °F), a temperature below the melting point of silver. Consequently, at the temperatures needed to reduce zinc oxide to the metal, the metal is already gaseous.
Arsenic sublimes at 615 °C (1137 °F), passing directly from the solid state to the gaseous state.
Antimony melts at 631 °C (1167 °F)
Platinum melts at 1768 °C (3215 °F), even higher than iron. Native South Americans worked with it instead by sintering: they combined gold and platinum powders, until the alloy became soft enough to shape with tools.
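Taking the figures above together, a charcoal-fired furnace reaching roughly 1100–1200 °C (an assumed, illustrative ceiling rather than a figure from this article) could melt every one of these metals except iron and platinum. The sketch below makes the comparison explicit; arsenic is omitted because it sublimes rather than melts at ambient pressure.

```python
# Melting points (deg C) from the lists above. The furnace ceiling is an
# assumed illustrative figure for ancient charcoal furnaces.

MELTING_POINT_C = {
    "mercury": -38.829, "tin": 231, "bismuth": 272, "lead": 327,
    "zinc": 420, "antimony": 631, "silver": 961, "gold": 1064,
    "copper": 1084, "iron": 1538, "platinum": 1768,
}

FURNACE_CEILING_C = 1200  # assumed

meltable = [m for m, t in MELTING_POINT_C.items() if t <= FURNACE_CEILING_C]
refractory = [m for m, t in MELTING_POINT_C.items() if t > FURNACE_CEILING_C]

print("meltable in an ancient furnace:", meltable)
print("required other techniques:", refractory)  # ['iron', 'platinum']
```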
Extraction
While all the metals of antiquity but lead occur natively, only gold and silver are commonly found as the native metal.
Gold and silver occur frequently in their native form
Mercury compounds are reduced to elemental mercury simply by low-temperature heating (500 °C).
Tin and iron occur as oxides and can be reduced with carbon monoxide (produced by, for example, burning charcoal) at 900 °C.
Copper and lead compounds can be roasted to produce the oxides, which are then reduced with carbon monoxide at 900 °C.
Meteoric iron is often found as the native metal and it was the earliest source for iron objects known to humanity
Symbolism
The practice of alchemy in the Western world, based on a Hellenistic and Babylonian approach to planetary astronomy, often ascribed a symbolic association between the seven then-known celestial bodies and the metals known to the Greeks and Babylonians during antiquity. Additionally, some alchemists and astrologers believed there was an association, sometimes called a rulership, between days of the week, the alchemical metals, and the planets that were said to hold "dominion" over them. There was some early variation, but the most common associations since antiquity are the following: the Sun with gold, the Moon with silver, Mercury with mercury (quicksilver), Venus with copper, Mars with iron, Jupiter with tin, and Saturn with lead.
See also
Timeline of chemical element discoveries
Ashtadhatu, the eight metals of Hindu alchemy (these seven plus zinc)
History of metallurgy in the Indian subcontinent
History of metallurgy in China
Metallurgy in pre-Columbian America
Copper metallurgy in Africa
Iron metallurgy in Africa
References
Further reading
http://www.webelements.com/ cited from these sources:
A.M. James and M.P. Lord in Macmillan's Chemical and Physical Data, Macmillan, London, UK, 1992.
G.W.C. Kaye and T.H. Laby in Tables of physical and chemical constants, Longman, London, UK, 15th edition, 1993.
Metals
History of metallurgy | Metals of antiquity | [
"Chemistry",
"Materials_science"
] | 1,457 | [
"Metallurgy",
"Metals",
"History of metallurgy"
] |
14,503,955 | https://en.wikipedia.org/wiki/Paul%20W.%20Chun | Paul W. Chun is a professor emeritus at the University of Florida. He is a researcher in the field of protein folding equilibria; in particular, he is known as the "leading proponent" of using the Planck-Benzinger thermal work function to understand protein folding thermodynamics and stability. As such, Chun has written a number of papers on the thermodynamics of protein folding. He received his Ph.D. in 1965 from the University of Missouri for work on the interaction of casein molecules, and joined the department of biochemistry and molecular biology at the University of Florida soon thereafter. He retired in 2003.
He has published 68 peer-reviewed papers listed in Scopus.
References
External links
Home page.
Year of birth missing (living people)
Living people
University of Florida faculty
University of Missouri alumni
American biochemists
American molecular biologists
Thermodynamicists | Paul W. Chun | [
"Physics",
"Chemistry"
] | 183 | [
"Thermodynamics",
"Thermodynamicists"
] |
14,504,482 | https://en.wikipedia.org/wiki/Micro%20power%20source | Micro power sources and nano power sources are power-supply units for RFID tags, MEMS, microsystems and nanosystems, covering energy generation, harvesting from the ambient environment, storage, and conversion.
References
[1] La O` G.J., In H.J., Crumlin E., Barbastathis G., Shao-Horn Y. Recent advances in microdevices for electrochemical energy conversion and storage // Int. J. Energy Res. 2007. V.31. P.548-575.
[2] Curtright A.E., Bouwman P.J., Wartena R.C., Swider-Lyons K.E. Power sources for nanotechnology // International Journal of Nanotechnology. 2004. V.1. Nos.1/2. P.226-239
Energy technology
Microtechnology | Micro power source | [
"Materials_science",
"Engineering"
] | 182 | [
"Materials science",
"Microtechnology"
] |
14,504,484 | https://en.wikipedia.org/wiki/Walking%20fern | Walking fern may refer to two species of fern in the genus Asplenium, which are occasionally placed in a separate genus Camptosorus. The name "walking fern" derives from the fact that new plantlets grow wherever the arching leaves of the parent touch the ground, creating a walking effect. Both have evergreen, undivided, slightly leathery leaves that are triangular and taper to a thin point. On the bottom of the leaves, sori, or spore-bearing structures, cluster along the veins. These hardy plants can be found in shady spots of limestone ledges and limy forest places.
Asplenium rhizophyllum (syn: Camptosorus rhizophyllum), native to North America
Asplenium ruprechtii (syn: Camptosorus sibiricus), native to East Asia
It may also refer to:
Adiantum caudatum, a species of maidenhair fern
References
"walking fern." Encyclopædia Britannica. . Encyclopædia Britannica Online. <http://www.britannica.com/eb/article-9075948>.
Ferns | Walking fern | [
"Biology"
] | 244 | [
"Ferns",
"Plants"
] |
14,504,931 | https://en.wikipedia.org/wiki/Self-evaluation%20maintenance%20theory | Self-evaluation maintenance (SEM) concerns discrepancies between two people in a relationship. The theory posits that an individual will maintain, as well as enhance, their self-esteem via social comparison to another individual. Self-evaluation refers to the self-perceived social ranking one has towards oneself. It is the continuous process of determining personal growth and progress, which can be raised or lowered by the behavior of others. Abraham Tesser created the self-evaluation maintenance theory in 1988. The self-evaluation maintenance model assumes two things: that a person will try to maintain or increase their own self-evaluation, and that self-evaluation is influenced by relationships with others.
A person's self-evaluation (which is similar to self-esteem) may be raised when a close other performs well. For example, a sibling scores the winning goal in an important game. Self-evaluation will increase because that person is sharing his/her success. The closer the psychological relationship and the greater the success, the more a person will share in the success. This is considered the reflection process. When closeness and performance are high, self-evaluation is raised in the reflection process. If someone who is psychologically close performs well on a task that is irrelevant to a person's self-definition, that person is able to benefit by sharing in the success of the achievement.
At the same time, the success of a close other can decrease someone's self-evaluation in the comparison process. This is because the success of a close other invites comparison on one's own capabilities, thereby directly affecting one's own self-evaluation. This is also strengthened with the closeness of the psychological relationship with the successful other. Using a similar example: a sibling scores the winning goal in an important game; but you are also on the same team and through comparison, your self-evaluation is lowered. When closeness (sibling) and performance (scored the winning goal) are high, self-evaluation is decreased in the comparison process. This is further expressed when the comparison is related to something you value in your personal identity. If you are aspiring to become a professional soccer player, but your sibling scores the winning goal and you do not, the comparison aspect of SEM will decrease your self-evaluation.
In both the reflection and comparison processes, closeness and performance level are significant factors. If the closeness of another decreases, then a person is less likely to share the success and/or compare him/herself, which lessens the likelihood of decreasing self-evaluation. A person is more likely to compare him/herself to someone close to him/her, like a sibling or a best friend, than a stranger. There are different factors in which a person can assume closeness: family, friends, people with similar characteristics, etc. If an individual is not close to a particular person, then it makes sense that he/she will not share in their success or be threatened by their success. At the same time, if the person's performance is low, there is no reason to share the success and increase self-evaluation; there is also no reason to compare him/herself to the other person. Because their performance is low, there is no reason it should raise or lower his/her self-evaluation. According to Tesser's (1988) theory, if a sibling did not do well in his/her game, then there is no reason the individual's self-evaluation will be affected.
Closeness and performance can either raise self-evaluation through reflection or lower self-evaluation through comparison. Relevance to self-identity determines whether reflection or comparison will occur. There are many different dimensions that can be important to an individual's self-definition. A self-defining factor is any factor that is personally relevant to your identity. For example, skills in music may be important to one's self-definition, but at the same time, being good in math may not be as important, even if you are skilled at it. Relating to your self-definition, you may consider yourself a musician but not a mathematician, even if you are skilled in both. Relevance assumes that a particular factor that is important to an individual is also important to another person. Relevance can be as simple as a shared dimension which one considers important to who they are. If relevance is high, then one will engage in comparison, but if relevance is low, one will engage in reflection. For example, if athletics is important to a person and that person considers athletics to be an important dimension of his/her self-definition, then when a sibling does well in athletics, the comparison process will take place and his/her self-evaluation will decrease. On the other hand, if athletics is not a dimension he/she uses for self-definition, the reflection process will take place and he/she will celebrate the sibling's success with the sibling; his/her self-evaluation will increase along with the sibling's because he/she is not threatened or challenged by the sibling's athletic capability.
Tesser (1988) suggests that people may do things to reduce the decrease in self-evaluation from comparison. One can spend less time with that particular individual, thereby reducing closeness, or one can change one's important self-definition and take up a new hobby or focus on a different self-defining activity, which reduces relevance (e.g., a sibling's success in your favorite sport may lead you to stop playing). The third way of avoiding a decrease in self-evaluation through the comparison process is to affect another's performance (e.g., by hiding a sibling's favorite shoes, or believing that his/her performance was based on luck), or one can improve one's own skills by practicing more. The conditions that predict whether an individual will interfere with another's performance for the sake of their own self-evaluation include the closeness of the individuals and the relevance of the activity. When the relevance is high, the comparison process is more important than the reflection process. When the relevance is high and the activity is high in self-defining importance, the other person poses a larger threat than when the relevance is low.
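The interplay of closeness, performance, and relevance can be summarized in a toy formalization. This is an illustrative sketch, not a set of equations proposed by Tesser: relevance decides whether the closeness-and-performance product raises self-evaluation (reflection) or lowers it (comparison).

```python
# Toy formalization of the SEM model (illustrative only, not Tesser's own
# equations). Closeness and the other's performance scale the effect on
# self-evaluation; relevance of the domain to one's self-definition
# shifts its sign from reflection (basking) to comparison (threat).

def self_evaluation_change(closeness: float, performance: float,
                           relevance: float) -> float:
    """All inputs in [0, 1]; positive output = raised self-evaluation."""
    reflection = (1.0 - relevance) * closeness * performance
    comparison = relevance * closeness * performance
    return reflection - comparison

# A sibling (high closeness) performs well (high performance):
print(self_evaluation_change(0.9, 0.9, relevance=0.1))  # +0.648, basking
print(self_evaluation_change(0.9, 0.9, relevance=0.9))  # -0.648, threat
```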
Moral behavior
Mazar et al. (2008) investigated how self-concept maintenance applies to moral behavior. They found that participants engaged in dishonest behaviors to achieve external benefits up to a point. However, their need to maintain a positive view of themselves, as being honest, limited the extent of their dishonest behavior.
Research examples
Tesser & Smith (1980) experimented with this theory. Men were recruited and asked to bring a friend with them. They were then put into groups of four, Man A and Man A's friend along with Man B and Man B's friend. Half the subjects were told that the study's purpose was measuring important verbal skills and leadership. This was the high relevance group. The other two subjects were told that the task had nothing to do with verbal skills, leadership or anything important. This was considered the low relevance group. The activity was based on the game Password, where persons have to guess a word based on clues. Each man was given an opportunity to guess the word while the other three gave clues from a list. The other three can give clues that are easy or difficult based on their own judgment and whether or not they would like to help the other person guess the word. The clues given to the person were necessary to guess the word. The first pair of partners performed poorly (as instructed in the experimental design). The experiment was interested in the behavior of the second group of men. The next pairing was designed to partner a stranger with a friend. Researchers were trying to see when a friend was helped more than a stranger and when a stranger was helped more than a friend. The results supported their hypothesis. In 10 out of 13 sessions, when relevance was high (told that this activity measures important verbal and leadership skills) the stranger was helped more than a friend. Also, in 10 out of 13 sessions, when relevance was low (subjects were told that this activity determined nothing of importance) the friend was helped more than the stranger. The prediction of the self-evaluation maintenance theory was strongly supported.
Having previously found that the most positive evaluations occurred when participants had low relevance combined with high closeness to another individual, Tesser (1989) sought to test whether emotional arousal mediated this relation. In the above sibling sport examples, it is evident that the self-evaluation process is an emotionally stimulating one. Tesser was interested in whether the emotional effect was a side-effect of the self-evaluation process, or whether it was a mediating effect (i.e., whether it was a partial factor influencing the evaluation). Tesser believed that if emotion was a mediating factor, then if emotional arousal were engaged and misattributed, the self-evaluation process would be activated with all other factors controlled. To test this, subjects arrived in pairs that knew one another prior to the study. Two conditions were given vitamin C pills: in the control condition subjects were truthfully told the pills would have no effect, while in the misattribution condition they were told the pills would cause arousal, activating a placebo effect. Subjects then completed both relevant and non-relevant tasks, both with other subjects close and not close to them, and ratings of the other participants were measured. The results showed that subjects in the misattribution condition gave much more extreme ratings of other participants: when the task was high in relevancy, subjects rated the other participant much worse than in the control condition. The findings show that while emotional activation is not the only factor determining evaluations, it is a mediating factor with some effect.
Zuckerman & Jost (2001) compares the self-evaluation maintenance theory to the work of Feld (1991). As the self-evaluatory maintenance theory would lead one to judge a stranger higher than their friends (based on popularity) in order to prevent a drop in self-evaluation, Feld's (1991) research demonstrated that people must have fewer friends than their friends do in order to remain popular. This is based on a mathematical equation that explains why popular people are involved in more social circles than unpopular people. These are not the only two research examples. For more examples see the references.
This graph illustrates the basic principles of Tesser's (1988) self-evaluatory maintenance model of behavior. Relevance determines whether reflection or comparison will occur. When relevance is low (the factor does not affect self-definition) as the other's performance increases, so does self-evaluation, allowing that person to share in the celebration of the other person (reflection). When relevance is high (the factor is important to self-definition also) as the other's performance increases, self-evaluation decreases because that person is being compared to the other person (comparison). If relevance is high, then one will engage in comparison, but if relevance is low, one will engage in reflection.
See also
Evaluation
Friendship paradox
Implicit egotism
Self
Notes
References
Interpersonal relationships
Self | Self-evaluation maintenance theory | [
"Biology"
] | 2,252 | [
"Behavior",
"Interpersonal relationships",
"Human behavior"
] |
14,505,071 | https://en.wikipedia.org/wiki/Splenogonadal%20fusion | Splenogonadal fusion is a rare congenital malformation that results from an abnormal connection between the primitive spleen and gonad during gestation. A portion of the splenic tissue then descends with the gonad. Splenogonadal fusion has been classified into two types: continuous, where there remains a connection between the main spleen and gonad; and discontinuous, where ectopic splenic tissue is attached to the gonad, but there is no connection to the orthotopic spleen. Patients can also have an accessory spleen. Patients with continuous splenogonadal fusion frequently have additional congenital abnormalities, including limb defects, micrognathia, skull anomalies, spina bifida, cardiac defects, anorectal abnormalities, and most commonly cryptorchidism. Terminal limb defects have been documented in at least 25 cases, which makes up a separate diagnosis of splenogonadal fusion limb defect (SGFLD) syndrome.
The anomaly was first described in 1883 by Bostroem. Since then more than 150 cases of splenogonadal fusion have been documented, predominantly in males. The condition is considered benign. A few cases of testicular neoplasm have been reported in association with splenogonadal fusion. The reported cases have occurred in patients with a history of cryptorchidism, which is associated with an elevated risk of neoplasm.
Splenogonadal fusion occurs with a male-to-female ratio of 16:1, and is seen nearly exclusively on the left side. The condition remains a diagnostic challenge, but preoperative consideration of the diagnosis and use of ultrasound may help avoid unnecessary orchiectomy. The presence of splenic tissue may be confirmed with a technetium-99m sulfur colloid scan.
Classification
Splenogonadal fusion is separated into two types:
Continuous (55%): The native spleen is connected to the gonad. The connection occurs through a cord of splenic tissue or fibrous band.
Discontinuous (45%): Ectopic splenic tissue is present and attached to the gonad. An accessory spleen is present and the native spleen lacks the direct attachment to the gonad.
Pathogenesis
The cause of splenogonadal fusion is still unclear, and there are several proposed mechanisms. A developmental field defect occurring during blastogenesis is the current explanation of pathogenesis. The spleen derives from mesenchymal tissue, and inappropriate fusion between it and the gonadal ridge can occur during gut rotation, which takes place between weeks 5 and 8 of fetal life. Splenogonadal fusion may result from an unknown teratogenic insult, and the timing of this insult may correlate with the severity of associated defects. It has also been postulated that fusion may occur through adhesion after an inflammatory response, or through a lack of apoptosis between the structures. Siblings documented to have splenogonadal fusion and an accessory spleen provide additional evidence of a possible genetic component.
Associated Conditions
Additional congenital abnormalities are most often found in association with the continuous type of splenogonadal fusion. These abnormalities include micrognathia, macroglossia, anal atresia, and pulmonary hypoplasia. The most commonly associated malformation is cryptorchidism. When limb abnormalities occur, a separate diagnosis of splenogonadal fusion with limb defects is made. This condition is also of unknown cause, but some literature suggests that it may be related to Hanhart syndrome and facial femoral syndrome. Splenogonadal fusion has also been identified in infants with Möbius syndrome and Poland syndrome.
Diagnosis
Diagnosis can be challenging pre-operatively, as patients are commonly asymptomatic. Physical examination can aid in diagnosis if the mass is palpated, but further confirmatory tests are necessary. Females are reportedly less affected by splenogonadal fusion, though it is possible that the condition is underdiagnosed due to the difficulty of internal gonad examination in females. On scrotal ultrasound, ectopic splenic tissue may appear as an encapsulated homogeneous extratesticular mass, isoechoic with the normal testis. Subtle hypoechoic nodules may be present in the mass. A limitation of Doppler ultrasonography is that these nonspecific paratesticular masses can mimic malignancies such as rhabdomyosarcoma or embryonal sarcoma. Technetium-99m sulfur colloid scans target the liver, spleen, and bone marrow, and can therefore be used to identify ectopic splenic tissue when clinical suspicion is high. When technetium-99m sulfur colloid scans are not included in a patient's diagnostic workup, the diagnosis is often not made until the surgeon performs an abdominal exploration using laparoscopy, which can visualize the splenic tissue grossly. Evaluation by biopsy, including the frozen section procedure, is confirmatory for splenogonadal fusion. Histological examination will show normal splenic tissue, which is made up of red and white pulp. Many people with splenogonadal fusion go undiagnosed without complication during their lifetime. Many cases have been diagnosed at autopsy or incidentally after orchiectomy.
Treatment
Treatment remains controversial given the benign nature of splenogonadal fusion. Splenogonadal fusion does not have known clinical manifestations that require intervention. Clinical observation is considered when the mass is diagnosed pre-operatively. Surgery is often required to confirm the diagnosis and exclude a mimic of a malignant mass. Surgical approach should attempt to divide the mass and the gonad at the connecting plane for removal of the splenic tissue. Orchiectomy is generally not indicated. Causality between splenogonadal fusion and future malignant transformation has not been established, but literature has highlighted infrequent cases of testicular neoplasms and splenogonadal fusion. These cases were found in patients who also had a history of cryptorchidism, which is a known risk factor for testicular cancer.
See also
Splenogonadal fusion-limb defects-micrognathia syndrome
References
Developmental biology
Congenital disorders of genital organs | Splenogonadal fusion | [
"Biology"
] | 1,314 | [
"Behavior",
"Developmental biology",
"Reproduction"
] |
14,505,265 | https://en.wikipedia.org/wiki/Seattle%20Internet%20Exchange | The Seattle Internet Exchange (SIX) is an Internet exchange point in Seattle, USA. Its switch fabric is centered at the Westin Building and extended to KOMO Plaza, Sabey Intergate, and other locations. The SIX is one of the most successful examples of neutral and independent peering points, created as a free exchange point originally sponsored only by donations. The SIX is the most frequently cited model upon which other neutral Internet exchanges are based, and its financial and governance models are often cited as inspiration for other exchanges. It continues to run without any recurring charges to the participants and current major funding comes from one-time 10, 100, and 400 Gbit/s port fees, as well as from voluntary contributions from stakeholders. The SIX is a 501(c)(6) tax-exempt non-profit corporation.
As of April 21, 2024, there are 370 networks (443 routers) connected to the SIX advertising at least 195,000 (140,000 IPv4, 55,000 IPv6) unique Border Gateway Protocol (BGP) routes. There are two route servers running Bird Internet routing daemon (BIRD).
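For context, a route server lets participants exchange routes through a single BGP session instead of maintaining a full mesh of bilateral peerings. The fragment below is a minimal, hypothetical BIRD 2.x configuration sketch of one route-server client session; the router ID, addresses, and AS numbers are invented for illustration and do not describe the SIX's actual configuration.

```
# bird.conf (sketch): one route-server client session. All values hypothetical.
router id 192.0.2.1;

protocol device {
    scan time 10;                 # periodically scan interfaces
}

protocol bgp member_a {
    local 192.0.2.1 as 64999;     # the route server's own ASN (illustrative)
    neighbor 192.0.2.10 as 64500; # a connected participant (illustrative)
    rs client;                    # route-server mode: own ASN is not prepended
    ipv4 {
        import all;               # a production IXP would apply import filters
        export all;
    };
}
```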
Technology
The core of the SIX consists of Arista Networks switches, with a 7808R3, 7512R, and 7508R at the Westin Building, a 7504R3 at KOMO Plaza, and a 7504R at Sabey Intergate.
Participants may connect to the SIX core using a 1 Gbit/s, 10 Gbit/s, 40 Gbit/s, 100 Gbit/s, or 400 Gbit/s Ethernet connection (fiber) or to one of several extensions. Extensions are sponsored by colocation facilities or transport providers.
Both IPv4 and IPv6 peering is available and encouraged at the SIX, with availability dependent on the peer. Jumbo frame peering at 9000-byte maximum transmission unit (MTU) is available.
Extensions
The following is a list of SIX extensions:
Archeo Futurus: Connects participants at H5 Data Centers Seattle.
Astound Broadband: Regional network.
BSO/IX Reach: Worldwide WAN.
Equinix: PAIX SEA, which is a neutral Internet exchange point operated by Equinix in Seattle, Washington.
Minnesota VoIP: Minnesota area.
NOCIX: North Kansas City area.
Ptera: Spokane area.
Reliable Internet (Arrow Calgary): Calgary area.
Wholesail Networks: Regional network.
Wowrack: Wowrack datacenter in Tukwila, Washington.
See also
List of Internet exchange points
References
External links
Internet exchange points in the United States
Network access
1997 establishments in Washington (state) | Seattle Internet Exchange | [
"Engineering"
] | 544 | [
"Electronic engineering",
"Network access"
] |
14,505,530 | https://en.wikipedia.org/wiki/Pier%20Luigi%20Luisi | Pier Luigi Luisi (born 23 May 1938) is an Italian chemist and academic. He received the "professor emeritus" title from the Swiss Federal Institute of Technology (ETHZ). He worked there as a scientist from 1970 until 2003, and as a Professor of Chemistry from 1980 until he departed. Luisi then moved to the Roma Tre University as a Professor of Biochemistry, where he worked until 2015.
In 1985, Luisi founded the Cortona Week, an international summer school.
Personal life
Pier Luigi Luisi was born on 23 May 1938. He is now a retired professor and lives in Tavira, Portugal.
Education
Luisi graduated with a chemistry degree from the Scuola Normale Superiore di Pisa.
Books
The Systems View of Life: A Unifying Vision (with Fritjof Capra) Cambridge University Press, 2014, translated in several languages. The Italian edition was published by Aboca, 2014, under the title Vita e Natura - Una visione sistemica.
The Emergence of Life: From Chemical Origins to Synthetic Biology Cambridge University Press, second edition 2016
Giant Vesicles (Perspectives in Supramolecular Chemistry) (with Peter Walde)
Mind and life: discussions with the Dalai Lama on the nature of reality, Columbia University Press, 2009
References
External links
Pier Luigi Luisi on Lifeboat Foundation
Pier Luigi Luisi on Meer.com
1938 births
Living people
Scuola Normale Superiore di Pisa alumni
Swiss chemists
Synthetic biologists
Academic staff of Roma Tre University | Pier Luigi Luisi | [
"Biology"
] | 328 | [
"Synthetic biology",
"Synthetic biologists"
] |
14,506,423 | https://en.wikipedia.org/wiki/Formation%20evaluation%20neutron%20porosity | In the field of formation evaluation, porosity is one of the key measurements to quantify oil and gas reserves. Neutron porosity measurement employs a neutron source to measure the hydrogen index in a reservoir, which is directly related to porosity. The Hydrogen Index (HI) of a material is defined as the ratio of the concentration of hydrogen atoms per cm3 in the material, to that of pure water at 75 °F. As hydrogen atoms are present in both water and oil-filled reservoirs, measurement of the amount allows estimation of the amount of liquid-filled porosity.
Physics
Neutrons are typically emitted by a radioactive source such as Americium Beryllium (Am-Be) or Plutonium Beryllium (Pu-Be), or generated by electronic neutron generators such as minitron. Fast neutrons are emitted by these sources with energy ranges from 4 MeV to 14 MeV, and inelastically interact with matter. Once slowed down to 2 MeV, they start to scatter elastically and slow down further until the neutrons reach a thermal energy level of about 0.025 eV. When thermal neutrons are then absorbed, gamma rays are emitted. A suitable detector, positioned at a certain distance from the source, can measure either epithermal neutron population, thermal neutron population, or the gamma rays emitted after the absorption.
Mechanics of elastic collisions predict that the maximum energy transfer occurs during collisions of two particles of equal mass. Therefore, a hydrogen atom (H) will cause a neutron to slow down the most, as they are of roughly equal mass. As hydrogen is fundamentally associated with the amount of water and/or oil present in the pore space, measurement of the neutron population within the investigated volume is directly linked to porosity.
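This argument can be made quantitative with the standard elastic-scattering result: for a neutron colliding head-on with a nucleus of mass number A, the maximum fraction of the neutron's energy transferred in a single collision is

```latex
\left(\frac{\Delta E}{E}\right)_{\max} \;=\; \frac{4A}{(A+1)^{2}}
```

which equals 1 for hydrogen (A = 1) but only about 0.28 for carbon (A = 12) and 0.22 for oxygen (A = 16), illustrating why hydrogen dominates the slowing-down process.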
Correction
Determination of porosity is one of the most important uses of the neutron porosity log. Correction parameters for lithology, borehole conditions, and other factors are necessary for accurate porosity determination, as follows:
Borehole size
Borehole salinity
Borehole temperature and pressure
Mud cake
Mud weight
Formation salinity
Tool standoff from borehole wall
Interpretation
Subject to various assumptions and corrections, values of apparent porosity can be derived from any neutron log. The slowing of neutrons by elements other than hydrogen, though less effective, must not be underestimated. Certain effects, such as lithology, clay content, and the amount and type of hydrocarbons, can be recognized and corrected for only if additional porosity information is available, for example from a sonic and/or density log. Any interpretation of a neutron log alone should be undertaken with a realization of the uncertainties involved.
Effect of light hydrocarbon and gas
The quantitative response of a neutron tool to gas or light hydrocarbon depends primarily on the hydrogen index and the "excavation effect". The hydrogen index can be estimated from the composition and density of the hydrocarbons.
Given a fixed volume, gas has a considerably lower hydrogen concentration than liquid. When pore spaces in the rock are excavated and replaced with gas, the formation has a weaker neutron-slowing characteristic, hence the term "excavation effect". If this effect is ignored, a neutron log will show a low porosity value. This characteristic allows a neutron porosity log to be used with other porosity logs (such as a density log) to detect gas zones and identify gas-liquid contacts.
Measurement technique
Neutron tools are based on the measurement of a neutron cloud of different energy levels within the investigated volume. Epithermal-neutron tools measure the epithermal neutron density in the formation, with energy levels between 100 eV and 0.1 eV. Thermal-neutron tools only measure the population of neutrons at a thermal energy level, and neutron-gamma tools measure the intensity of gamma flux generated by thermal neutron capture. The tools usually have two detectors (or more) at different spacings from the source to produce a ratio of count rates, which theoretically reduces borehole effects.
A helium-3 (He-3) filled proportional counter is the most common epithermal and thermal neutron detector. Helium-3 has a high neutron capture cross section and produces the following reaction when it interacts with a neutron:
³He + ¹n → ¹H + ³H + 764 keV of energy
To boost the charge produced by the interaction between helium and a neutron, a high voltage is applied to the anode of the counter. The operating voltage is chosen high enough to give sufficient gain for counting purposes. Most helium-3 counters use a quench gas to stabilize high-voltage performance and prevent runaway discharge.
See also
Neutron temperature
Effective porosity
Gas porosity
References
John T. Dewan, "Open-Hole Nuclear Logging - State of the Art" - SPWLA Twenty-Seventh Annual Logging Symposium, June 9–13, 1986.
Radioactivity
Well logging | Formation evaluation neutron porosity | [
"Physics",
"Chemistry",
"Engineering"
] | 967 | [
"Well logging",
"Petroleum engineering",
"Radioactivity",
"Nuclear physics"
] |
14,506,616 | https://en.wikipedia.org/wiki/Follow-the-sun | Follow-the-sun (FTS), a sub-field of globally distributed software engineering (GDSE), is a type of global knowledge workflow designed in order to reduce the time to market, in which the knowledge product is owned and advanced by a production site in one time zone and handed off at the end of their work day to the next production site that is several time zones west to continue that work. Ideally, the work days in these time zones overlap such that when one site ends their day, the next one starts.
FTS has the potential to significantly increase the total development time per day (as viewed from the perspective of a single time zone): with two sites the development time can increase to up to 16 hours, or up to 24 hours if there are three sites, reducing the development duration by as much as 67%.
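The arithmetic behind these figures can be sketched in a few lines of Python. This is an idealized model assuming 8-hour shifts and perfect daily handoffs, not a claim about real projects.

```python
# Idealized FTS speed-up: n sites, 8-hour shifts, perfect daily handoffs.
def fts_hours_and_reduction(sites, shift_hours=8):
    """Development hours per calendar day and the resulting duration cut."""
    hours_per_day = min(24, sites * shift_hours)
    reduction = 1 - shift_hours / hours_per_day
    return hours_per_day, reduction

for n in (1, 2, 3):
    hours, cut = fts_hours_and_reduction(n)
    print(f"{n} site(s): {hours} h of work per day, duration reduced by {cut:.0%}")
# 1 site(s): 8 h of work per day, duration reduced by 0%
# 2 site(s): 16 h of work per day, duration reduced by 50%
# 3 site(s): 24 h of work per day, duration reduced by 67%
```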
It is not commonly practiced in industry and has few documented cases where it is applied successfully. This is likely because of its uncommon requirements, leading to a lack of knowledge on how to successfully apply FTS in practice.
History
Follow-the-sun can be traced back to the mid-1990s, when IBM set up the first global software team specifically to take advantage of FTS. The team was spread across five sites around the globe. In this case, FTS was unsuccessful because the software artifacts were not handed off daily.
Two other cases of FTS at IBM have been documented by Treinen and Miller-Frost. The first team was spread out across a site in the United States and a site in Australia. FTS was successful for this team. The second team was spread out across a site in the United States and a site in India. In this case FTS was unsuccessful because of miscommunication, time zone issues and cultural differences.
Principles
FTS is based on four principles:
The main objective is the reduction of development duration / time to market.
Production sites are many time zones apart.
There is always one and only one site that owns and works on the project.
Handoffs are conducted daily at the end of each shift. The next production site is several time zones west.
Common misconceptions
An important step in defining FTS is to disambiguate it from other globally distributed configurations, i.e., to state clearly what FTS is not. The following similar globally distributed configurations are not FTS:
Global knowledge work is defined as geographically dispersed knowledge workers working collaboratively from multiple locations. This is not FTS because there are no handoffs.
24/7 service. In this configuration work is distributed to workers who are available at that time. It is focused on availability and the workers have little dependency, whereas FTS is focused on duration reductions and requires dependencies between the different sites in order to perform the daily handoffs.
24-hour manufacturing. This configuration uses multiple shifts to fully utilize expensive resources whose output cannot be increased simply by adding more employees to a single shift. This driver, reducing resource cost, is not the driver of FTS.
Collocated multi-shifts. In contrast to FTS, this configuration chooses one location where labor is cheap and runs multiple eight-hour shifts back to back.
Difficulties
FTS's largest strength, spreading development over multiple time zones, is simultaneously its largest weakness. The distributed workflow is more complex to implement because of cultural and technical differences, and the time differences themselves make coordination and communication challenging.
The main reason why FTS is difficult to implement is because the handoffs are an essential element that is hard to get right. The largest factor causing this difficulty is poor communication.
There are few documented cases of companies successfully applying FTS. Some companies have claimed to implement FTS successfully, but they did not practice the daily handoffs. However, Cameron found a limited number of successful applications of FTS that did include daily handoffs of artifacts, using a distributed-concurrent model.
Recent studies on FTS have moved to mathematical modeling of FTS. The research is focused on the issue of speed and the issues around the handoffs.
Methods
As FTS is a sub-field of GDSE, the same agile software development methodologies that are found to work well in GDSE work well with FTS. In particular, Carmel et al. (2009) argue that agile software development methodologies assist the FTS principles because they:
Support daily handoffs: continuous integration and automated integration of source code allow each site to work in its own code base during its work day, while the integration maintains updated, testable code to be used by the next site.
Deal with communication: agile methodologies emphasize communication. They specifically emphasize face-to-face communication, which can be done within one site. Since FTS aims to reduce inter-site communication, the face-to-face aspect is not a large hindrance to the overall application of agile development methodologies.
Elicit cooperation and collaboration: as FTS requires more collaboration and cooperation, this emphasis is especially useful.
Challenges
Kroll et al. (2013) have researched papers published between 1990 and 2012 and found 36 best practices and 17 challenges for FTS. The challenges were grouped in three categories: coordination, communication and culture. These challenges should be overcome to implement FTS successfully.
Coordination
Time zone differences reduce opportunities for real-time collaboration. Team members have to be flexible to achieve overlap with remote colleagues. The limited overlap and the delay in responses have a negative impact on the coordination.
Daily handoff cycles, or handing off work-in-progress, are a requirement of FTS because without them the time to market cannot be decreased.
Geographical dispersion
Cost estimation
Loss of teamness
Number of sites
Coordination breakdown
Managerial difficulties
Technical platforms
Communication
Loss of communication richness / face-to-face communication
Social cultural diversity difficulties
Synchronous communication
Language difference
Technical difficulties
Manage religious or national holidays.
Culture
Cultural differences
Different technical backgrounds
Best practices
It is of great importance to select and adapt a methodology for the daily handoffs e.g. using agile software development or the waterfall model.
Identified best practices are the use of agile methods and the use of technologies to support FTS activities. Agile supports daily handoffs, which are a critical challenge in FTS. Management tools can be used to estimate and plan schedules, manage sprints, and track progress. Additionally, technologies such as video conferencing, email, and telephone calls are easy to implement, allow companies to perform synchronous and asynchronous communication between teams, and work well in an agile environment.
Follow-the-moon
A related concept is follow-the-moon, which is scheduling work to be performed specifically during local night-time hours for reasons such as saving on datacenter costs by using cheaper night-time electricity or spare processing power.
Other terms
24-hour development
round-the-clock-development
See also
Time to market
Change-of-shift report
Notes and references
External links
Example of use in industry - IT Support
Industrial engineering | Follow-the-sun | [
"Engineering"
] | 1,415 | [
"Industrial engineering"
] |
14,507,786 | https://en.wikipedia.org/wiki/Petrochemical%20industry%20in%20Romania | The emergence of oil production in the territory now known as Romania dates back to 1857, with oil facilities gaining strategic military significance in 1916 during World War I. Throughout World War II, the Kingdom of Romania held the position as the largest oil producer in Europe, second only to the USSR, whose primary oil source was located in Azerbaijan. The oil extracted from Romania played a pivotal role in Axis military operations, a fact underscored in Adolf Hitler's 1942 speech.
The Romanian petrochemical industry, particularly centered around Ploiești, became a focal point for Allied bombing raids, notably during Operation Tidal Wave. The Soviet Red Army later occupied the Romanian oilfields in August 1944. Post-World War II, extensive reconstruction and expansion initiatives were undertaken under the communist regime. Following the events of 1989, a significant portion of the industry underwent privatization.
Present-day Romania boasts significant oil-refining capabilities, demonstrating a notable interest in the Central Asia-Europe pipelines while actively cultivating relations with select Arab States of the Persian Gulf. With a total of 10 refineries and an overall refining capacity of approximately 504,000 barrels per day (80,100 cubic meters per day), Romania stands as the leading nation in the eastern European region in terms of refining industry scale.
Romania's extensive refining capacity surpasses its domestic demand for refined petroleum products, enabling the country to engage in substantial exports of various oil products and petrochemicals. This includes, but is not limited to, lubricants, bitumen, and fertilizers, distributed across the eastern European region.
By 2017, the number of refineries possessing the capability to produce had dwindled to just five, with the overall capacity experiencing a decline to 13.7 million metric tons per year.
Refineries
This is an incomplete list of oil refineries in Romania:
Petrobrazi Refinery, (Petrom/OMV),
Petrotel Lukoil Refinery, (LUKOIL),
Petromidia Constanța Refinery, (Rompetrol),
Vega Refinery, (Rompetrol),
Petrolsub Suplacu de Barcău Refinery, (Petrom/OMV),
Dormant refineries:
Astra Refinery, (Interagro),
Steaua Română Câmpina Refinery, (Omnimpex Chemicals),
Closed refineries:
Arpechim Refinery, (Petrom/OMV), which used to process
RAFO Onești Refinery, (Calder A), which used to process
Petrochemical processing platforms
Romania has closed down the majority of the petrochemical processing platforms. Those remaining are:
KazMunayGas: Petromidia
Chimcomplex
Oltchim
See also
Oil campaign of World War II
Economy of Romania
Industry of Romania
Operation Tidal Wave
References
Economy of Romania
Romania
Energy in Romania
Petrochemical industry
Petroleum in Romania | Petrochemical industry in Romania | [
"Chemistry"
] | 592 | [
"Petrochemical industry"
] |
14,508,250 | https://en.wikipedia.org/wiki/Institute%20for%20Cosmic%20Ray%20Research | The Institute for Cosmic Ray Research (ICRR) of the University of Tokyo (東京大学宇宙線研究所 Tōkyōdaigaku Uchūsen Kenkyūsho) was established in 1976 for the study of cosmic rays.
The gravitational wave studies group is currently constructing the detector KAGRA located at the Kamioka Observatory.
Facilities
Kashiwa Campus at the University of Tokyo
Akeno Observatory
Kamioka Observatory
Norikura Observatory
Current projects
Super-Kamiokande - Detection of neutrinos and search for proton decays in a large water tank
Tibet - Search for point sources of VHE cosmic gamma rays at Tibet heights
Telescope Array Project - Aiming at highest energy cosmic ray physics by detecting weak light from atmosphere
Gravitational Wave Group - Constructing the gravitational wave detector KAGRA
Observational Cosmology Group - Subaru Hyper Suprime-Cam Survey
Theory Group - Theoretical studies for verifying Grand Unified Theory and early Universe
High Energy Astrophysics Group - Theoretical studies for pulsars, gamma-ray bursts, AGNs, acceleration mechanisms of particles etc.
References
External links
Astrophysics research institutes
Research institutes in Japan
University of Tokyo | Institute for Cosmic Ray Research | [
"Physics"
] | 231 | [
"Astrophysics research institutes",
"Astrophysics"
] |
14,508,339 | https://en.wikipedia.org/wiki/Indicator%20organism | Indicator organisms are used as a proxy to monitor conditions in a particular environment, ecosystem, area, habitat, or consumer product. Certain bacteria, fungi and helminth eggs are being used for various purposes.
Types
Indicator bacteria
Certain bacteria can be used as indicator organisms in particular situations, such as when present in bodies of water. Indicator bacteria themselves may not be pathogenic but their presence in waste may indicate the presence of other pathogens. Similar to how there are various types of indicator organisms, there are also various types of indicator bacteria. The most common indicators are total coliforms, fecal coliforms, E. coli, and enterococci. The presence of bacteria commonly found in human feces, termed coliform bacteria (e.g. E. coli), in surface water is a common indicator of faecal contamination. The means by which pathogens found in fecal matter can enter recreational bodies of water include, but are not limited to, sewage, septic systems, urban runoff, coastal recreational waste, and livestock waste.
For this reason, sanitation programs often test water for the presence of these organisms to ensure that drinking water systems are not contaminated with feces. This testing can be done using several methods, which generally involve taking samples of water, or passing large amounts of water through a filter to sample bacteria, then testing to see whether bacteria from that water grow on selective media such as MacConkey agar. MacConkey agar allows only the growth of gram-negative bacteria, and colonies can be differentiated according to whether and how they metabolize lactose. Alternatively, the sample can be tested to see if it utilizes various nutrients in ways characteristic of coliform bacteria.
Coliform bacteria selected as indicators of faecal contamination must not persist in the environment for long periods of time following efflux from the intestine, and their presence must be closely correlated with contamination by other faecal organisms. Indicator organisms need not be pathogenic.
Non-coliform bacteria, such as Streptococcus bovis and certain clostridia, may also be used as an index of faecal contamination.
The presence of indicator bacteria is measured in a variety of ecosystems and sometimes alongside other measurements. In the Great Lakes, a study was conducted testing for both fecal indicator bacteria (FIB) concentrations and pathogen gene markers. The FIB measured in this study included fecal coliform bacteria, E. coli, and enterococci. FIB were collected via membrane filtration and serial dilution methods, producing samples which could be cultured and used to run PCR and amplify the pathogenic genes in question. Among the 22 sampling locations, 165 samples were analyzed and E. coli concentrations were found to range from less than 2 to 26,000 CFU/100mL, enterococci ranged from less than 2 to 31,000 CFU/100mL, and fecal coliform bacteria ranged from less than 2 to 950 CFU/100mL.
Another example of indicator bacteria being measured for safety purposes is in Malibu, CA. The state of California requires that beaches with greater than 50,000 visitors a year be monitored for FIB. High FIB concentrations, exceeding what the EPA considers acceptable, were observed in Malibu Lagoon and other Malibu beaches. Measurement of high FIB levels prompts a search for the source or sources. Potential sources of FIB in the Malibu area include waste from sewage treatment systems, runoff from local developments, and wildlife waste. Common FIB were measured, including enterococci, which was present at levels as high as 242,000 MPN/100mL within onsite wastewater treatment systems. The measurement of FIB is widespread and used for the purpose of providing safe waters.
In Texas, the occurrence and distribution of FIB, in particular fecal coliforms and E. coli, were measured in streams that receive discharge from the Dallas Fort Worth International Airport and the surrounding area. These streams are home to aquatic life and are used for recreation and fishing. Various standards exist to ensure the safety of all organisms present in the ecosystem, including humans. E. coli is used as an indicator of unsafe or below-standard water quality for recreational use in Texas. The standards that declare contact recreation unsafe are a geometric mean E. coli level of over 126 cfu/100mL, or more than a fourth of the samples measuring greater than 394 cfu/100mL. Various sites were tested; some were found to exceed acceptable levels of E. coli and therefore did not support recreational use. This is yet another example of how testing for indicator bacteria is used to determine whether bodies of water are safe for various uses, particularly recreation.
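As a concrete illustration of how such a two-part standard is applied, the following Python sketch checks a set of made-up sample values against the criterion described above; the sample numbers are hypothetical, not real monitoring data.

```python
from math import prod

# Hypothetical E. coli samples from one site, in cfu/100 mL.
samples = [88, 150, 200, 45, 410, 95, 120, 60]

geo_mean = prod(samples) ** (1 / len(samples))
frac_above = sum(s > 394 for s in samples) / len(samples)

# Unsafe for contact recreation if either part of the criterion is exceeded.
unsafe = geo_mean > 126 or frac_above > 0.25
print(f"geometric mean = {geo_mean:.0f} cfu/100 mL")
print(f"samples above 394 cfu/100 mL = {frac_above:.0%}")
print("does not support contact recreation" if unsafe
      else "supports contact recreation")
```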
Indicator fungi
Penicillium species, Aspergillus niger and Candida albicans are used in the pharmaceutical industry for microbial limit testing, bioburden assessment, method validation, antimicrobial challenge tests, and quality control testing. When used in this capacity, Penicillium and A. niger are compendial mold indicator organisms.
Molds such as Trichoderma, Exophiala, Stachybotrys, Aspergillus fumigatus, Aspergillus versicolor, Phialophora, Fusarium, Ulocladium and certain yeasts are used as indicators of indoor air quality.
Metagenomic techniques allow for the sequencing of whole populations of microorganisms in a single operation. With metagenomic sequencing, it is possible to use the entire community of fungal organisms, or mycobiome in the soil or water of a given area as a biological indicator of anthropogenic activity, such as sewage overflow from an urban area or fertilizer and pesticide runoff from an agricultural one.
Composition of fungal communities has been found to be a good indicator of environmental properties like pH, altitude and water temperature. Chauvet used this approach to take ecosystem-wide measurements of these variables using a network of monitoring stations at 27 streams in Southwestern France.
Cudowski et al. sampled fungi in the water of the Augustow canal in eastern Poland. They took many standard measures of water quality: temperature, oxygen saturation, pH, and dissolved nitrogen, organic carbon, and sulfur levels. They identified species with microscopic methods and RFLP analysis. They found 38 fungal species, including 12 hyphomycetes and 13 potential pathogens belonging either to the dermatophytes or to relatives of C. albicans. Cudowski et al. found that they could determine whether a sample of water had been taken from the natural (lake-like) or artificial part of the canal. They also found that the three major groups of fungi that they found (hyphomycetes, dermatophytes, and Candida relatives) could predict many of their water quality measurements, which formed two clusters in a redundancy analysis.
Bouffand et al. used arbuscular mycorrhizal fungi (AMF), an asexual clade of fungi that form symbiotic relationships with plant root systems, as indicators to assess soil function and biodiversity at many sites across Europe. They took soil samples in various climatic zones (Atlantic, continental, Mediterranean, alpine) and three land-use regimes (arable, grassland, forestry), and sequenced the DNA of the fungi the soil contained. They found eight indicator species for soil pH: four that were only present when pH was less than 5, three for pH > 5, and one for pH > 7. They found eight indicators of land use: two for forests, five for farm- and grassland, and one for both. They also found one indicator fungus that was present when soil organic carbon was high, and another present when it was low.
Indicator helminth eggs
The eggs from helminths (parasitic worms) are a commonly used indicator organism to assess the safety of sanitation and wastewater reuse systems (such schemes are also called reuse of human excreta). This is because they are the most resistant of all types of pathogens (viruses, bacteria, protozoa, and helminths), which means they are relatively hard to destroy through conventional treatment methods. They can survive for 10–12 months in tropical climates. These eggs are also called ova in the literature.
Helminth eggs that are found in wastewater and sludge stem from soil-transmitted helminths (STHs), which include Ascaris lumbricoides (Ascaris), Ancylostoma duodenale and Necator americanus (hookworms), and Trichuris trichiura (whipworm). Ascaris and whipworm identified in reusable wastewater systems can cause certain diseases and complications if ingested by humans or pigs. Hookworm eggs hatch in the soil, where the larvae develop until maturity; mature larvae then infect organisms by penetrating the skin.
The presence or absence of viable helminth eggs ("viable" meaning that a larva would be able to hatch from the egg) in a sample of dried fecal matter, compost or fecal sludge is often used to assess the efficiency of diverse wastewater and sludge treatment processes in terms of pathogen removal. In particular, the number of viable Ascaris eggs is often taken as an indicator for all helminth eggs in treatment processes as they are very common in many parts of the world and relatively easy to identify under the microscope. However, the exact inactivation characteristics may vary for different types of helminth eggs.
The technique used for testing depends on the type of sample. When large numbers of helminth ova are present in sludge, processes such as alkaline post-stabilization, acid treatment, and anaerobic digestion are used to reduce their numbers, making it possible to meet the health requirement of ≤1 helminth ovum per liter. Dehydration is used to inactivate helminth ova in fecal sludge. This type of inactivation occurs when feces is stored for 1–2 years, a high total solids content (>50–60%) is present, materials such as leaves, lime, or earth are added, and the temperature is 30°C or higher.
See also
Coliform bacteria
Coliform index
E. coli
Indicator species
References
Bacteria
Environmental and ecological indicators | Indicator organism | [
"Biology"
] | 2,182 | [
"Prokaryotes",
"Microorganisms",
"Bacteria"
] |
14,508,661 | https://en.wikipedia.org/wiki/Molecule-based%20magnets | Molecule-based magnets (MBMs) or molecular magnets are a class of materials capable of displaying ferromagnetism and other more complex magnetic phenomena. This class expands the materials properties typically associated with magnets to include low density, transparency, electrical insulation, and low-temperature fabrication, as well as combine magnetic ordering with other properties such as photoresponsiveness. Essentially all of the common magnetic phenomena associated with conventional transition-metal magnets and rare-earth magnets can be found in molecule-based magnets. Prior to 2011, MBMs were seen to exhibit "magnetic ordering with Curie temperature (Tc) exceeding room temperature".
History
The first synthesis and characterization of MBMs was accomplished by Wickman and co-workers in 1967. This was a diethyldithiocarbamate-Fe(III) chloride compound.
In February 1992, Gatteschi and Sessoli published on MBMs with particular attention to the fabrication of systems in which stable organic radicals are coupled to metal ions. At that date, the highest Tc on record was measured by SQUID magnetometer as 30K.
The field exploded in 1996 with the publication of the book "Molecular Magnetism: From Molecular Assemblies to the Devices".
In February 2007, de Jong et al. grew thin-film TCNE MBM in situ, while in September 2007, photoinduced magnetism was demonstrated in a TCNE organic-based magnetic semiconductor.
The June 2011 issue of Chemical Society Reviews was devoted to MBMs. In the editorial, written by Miller and Gatteschi, are mentioned TCNE and above-room-temperature magnetic ordering along with many other unusual properties of MBMs.
Theory
The mechanism by which molecule-based magnets stabilize and display a net magnetic moment is different than that present in traditional metal- and ceramic-based magnets. For metallic magnets, the unpaired electrons align through quantum mechanical effects (termed exchange) by virtue of the way in which the electrons fill the orbitals of the conductive band. For most oxide-based ceramic magnets, the unpaired electrons on the metal centers align via the intervening diamagnetic bridging oxide (termed superexchange). The magnetic moment in molecule-based magnets is typically stabilized by one or more of three main mechanisms:
Through space or dipolar coupling
Exchange between orthogonal (non-overlapping) orbitals in the same spatial region
Net moment via antiferromagnetic coupling of non-equal spin centers (ferrimagnetism)
In general, molecule-based magnets tend to be of low dimensionality. Classic magnetic alloys based on iron and other ferromagnetic materials feature metallic bonding, with all atoms essentially bonded to all nearest neighbors in the crystal lattice. Thus, critical temperatures at which point these classical magnets cross over to the ordered magnetic state tend to be high, since interactions between spin centers is strong. Molecule-based magnets, however, have spin bearing units on molecular entities, often with highly directional bonding. In some cases, chemical bonding is restricted to one dimension (chains). Thus, interactions between spin centers are also limited to one dimension, and ordering temperatures are much lower than metal/alloy-type magnets. Also, large parts of the magnetic material are essentially diamagnetic, and contribute nothing to the net magnetic moment.
Applications
In 2015 oxo-dimeric Fe(salen)-based magnets ("anticancer nanomagnets") in a water suspension were shown to demonstrate intrinsic room temperature ferromagnetic behavior, as well as antitumor activity, with possible medical applications in chemotherapy, magnetic drug delivery, magnetic resonance imaging (MRI), and magnetic field-induced local hyperthermia therapy.
Background
Molecule-based magnets comprise a class of materials which differ from conventional magnets in one of several ways. Most traditional magnetic materials are composed purely of metals (Fe, Co, Ni) or metal oxides (CrO2) in which the unpaired electron spins that contribute to the net magnetic moment reside only on metal atoms in d- or f-type orbitals.
In molecule-based magnets, the structural building blocks are molecular in nature. These building blocks are either purely organic molecules, coordination compounds, or a combination of both. In this case, the unpaired electrons may reside in d or f orbitals on isolated metal atoms, but may also reside in highly localized s and p orbitals on the purely organic species. Like conventional magnets, they may be classified as hard or soft, depending on the magnitude of the coercive field.
Another distinguishing feature is that molecule-based magnets are prepared via low-temperature solution-based techniques, versus high-temperature metallurgical processing or electroplating (in the case of magnetic thin films). This enables a chemical tailoring of the molecular building blocks to tune the magnetic properties.
Specific materials include purely organic magnets made of organic radicals for example p-nitrophenyl nitronyl nitroxides, decamethylferrocenium tetracyanoethenide, mixed coordination compounds with bridging organic radicals, Prussian blue related compounds, and charge-transfer complexes.
Molecule-based magnets derive their net moment from the cooperative effect of the spin-bearing molecular entities, and can display bulk ferromagnetic and ferrimagnetic behavior with a true critical temperature. In this regard, they are contrasted with single-molecule magnets, which are essentially superparamagnets (displaying a blocking temperature versus a true critical temperature). This critical temperature represents the point at which the materials switches from a simple paramagnet to a bulk magnet, and can be detected by ac susceptibility and specific heat measurements.
References
Types of magnets
Drug delivery devices
Magnetic devices | Molecule-based magnets | [
"Chemistry"
] | 1,195 | [
"Pharmacology",
"Drug delivery devices"
] |
14,508,709 | https://en.wikipedia.org/wiki/Benactyzine | Benactyzine is an anticholinergic drug that was used in the treatment of clinical depression and anxiety disorders before it was pulled from the U.S. market by the FDA due to serious side effects.
Its use for these indications was limited by side effects such as dry mouth and nausea, and at high doses it can cause more severe symptoms such as deliriant and hallucinogenic effects. "Large doses of benactyzine in normal subjects may produce a state resembling the action of mescaline or LSD."
Brand names have included: Suavitil, Phebex, Phobex, Cedad, Cevanol, Deprol, Lucidil, Morcain, Nutinal, Parasan. While there was some tentative evidence of effectiveness when combined with meprobamate, the medication is no longer available and is therefore not clinically important.
History
Benactyzine was brought to market in the US in 1957 by Merck under the tradename, Suavitil.
See also
Benapryzine
References
External links
Anxiolytics
Benzilate esters
Diethylamino compounds
Deliriants
Muscarinic antagonists
Withdrawn drugs | Benactyzine | [
"Chemistry"
] | 245 | [
"Drug safety",
"Withdrawn drugs"
] |
14,508,753 | https://en.wikipedia.org/wiki/Waking%20up%20early | Waking up early is rising before most others and has also been described as a productivity method - rising early and consistently so as to be able to accomplish more during the day. This method has been recommended since antiquity and is now recommended by a number of personal development gurus.
Commentary
Within the context of religious observances, spiritual writers have called this practice "the heroic minute", referring to the sacrifice which this entails.
The philosopher Aristotle wrote in his Economics that "Rising before daylight is also to be commended; it is a healthy habit, and gives more time for the management of the household as well as for liberal studies."
The Chinese proverb "A year's plan begins in spring, and the day's plan begins in morning", emphasizing that morning is the most important time of day, has been recorded in proverb anthologies as early as the Liang dynasty.
Benjamin Franklin wrote in Poor Richard's Almanack : "Early to bed and early to rise, makes a man healthy, wealthy, and wise". It is a saying that is viewed as a commonsensical proverb, which was included in "A Method of Prayer" by Mathew Henry who also listed it as a phrase "long said." Franklin is also quoted as saying: "The early morning has gold in its mouth", a translation of the German proverb "Morgenstund hat Gold im Mund". There is a book entitled "'Early to bed, and early to rise, makes a man healthy, wealthy, and wise', or, Early Rising: A Natural, Social, and Religious Duty" by Anna Laetitia Waring from 1855, sometimes misattributed to Franklin.
"The early bird gets the worm" is a proverb that suggests that getting up early will lead to success during the day.
This is sometimes met with the humorous counterpoint that the early worm, by the same logic, should have stayed in bed.
James Thurber, in his book Fables for our Time, ended the Fable of the Shrike: "Early to rise and early to bed, makes a Shrike healthy, and wealthy, and dead".
Criticisms
Such recommendations may cast individuals with different natural sleep patterns as lazy or unmotivated when it is a much different matter for a person with a longer or delayed sleep cycle to get up earlier in the morning than for a person with an advanced sleep cycle. In effect, the person accustomed to a later wake time is being asked not to wake up an hour early but 3–4 hours early, while waking up "normally" may already be an unrecognized challenge imposed by the environment.
The bias toward early morning can also adversely affect adolescents in particular. Teenagers tend to require at least 9 full hours of sleep each night, and changes to the endocrine system during puberty shift the natural wake time later in the morning. Enforcing early start times despite this can have negative effects on mood, academic performance, and social skills.
See also
Lark (person)
References
External links
Early to Rise by Benjamin Franklin
Why can't I wake up on time
Personal development
Sleep
Morning | Waking up early | [
"Astronomy",
"Biology"
] | 641 | [
"Time in astronomy",
"Personal development",
"Behavior",
"Morning",
"Sleep",
"Human behavior"
] |
14,508,970 | https://en.wikipedia.org/wiki/Physiological%20agonism%20and%20antagonism | Physiological agonism describes the action of a substance which ultimately produces the same effects in the body as another substance—as if they were both agonists at the same receptor—without actually binding to the same receptor. Physiological antagonism describes the behavior of a substance that produces effects counteracting those of another substance (a result similar to that produced by an antagonist blocking the action of an agonist at the same receptor) using a mechanism that does not involve binding to the same receptor.
Examples
Physiological agonists
Epinephrine induces platelet aggregation, and so does hepatocyte growth factor (HGF). Thus, they are physiological agonists to each other.
Physiological antagonists
There are several substances that have antihistaminergic action despite not being ligands for the histamine receptor. For instance, epinephrine raises arterial pressure through vasoconstriction mediated by α1-adrenergic receptor activation, in contrast to histamine, which lowers arterial pressure. Thus, despite not being true antihistamines because they do not bind to and block the histamine receptor, epinephrine and other such substances are physiological antagonists to histamine.
References
Medical terminology
Pharmacodynamics
Physiology
Receptor agonists | Physiological agonism and antagonism | [
"Chemistry",
"Biology"
] | 264 | [
"Pharmacology",
"Physiology",
"Receptor agonists",
"Pharmacodynamics",
"Biotechnology stubs",
"Biochemistry stubs",
"Biochemistry",
"Neurochemistry"
] |
14,509,243 | https://en.wikipedia.org/wiki/Basel%20Action%20Network | The Basel Action Network (BAN), a charitable non-governmental organization, works to combat the export of toxic waste from technology and other products from industrialized societies to developing countries. BAN is based in Seattle, Washington, United States, with a partner office in the Philippines. BAN is named after the Basel Convention, a 1989 United Nations treaty designed to control and prevent the dumping of toxic wastes, particularly on developing countries. BAN serves as an unofficial watchdog and promoter of the Basel Convention and its decisions.
Campaigns
BAN currently runs four campaigns focusing on decreasing the amount of toxins entering the environment and protecting underdeveloped countries from serving as a toxic dump of the developed countries of the world. These include:
The e-Stewards Initiative
BAN's e-Stewards Electronics Stewardship campaign seeks to prevent toxic trade in hazardous electronic waste and includes a certification program for responsible electronics recycling known as the e-Stewards Initiative. It is available to electronics recyclers after they prove to have environmentally and socially responsible recycling techniques following audits conducted by accredited certifying bodies. Recyclers can become e-Steward certified after proving that they follow all national and international laws concerning electronic waste and its proper disposal, which includes bans on exporting, land dumping, incineration, and use of prison labor.
When the e-Stewards initiative was initially started with the Electronics TakeBack Coalition, it was called "The Electronics Recycler's Pledge of True Stewardship". In the beginning, the initiative verified a recycler's participation through "desk" and paper audits only. The e-Stewards certification, however, has been updated and requires compliance verification by a third party auditor.
Green ship recycling
BAN has teamed up with several other non-governmental organizations (NGOs), including Greenpeace, to form the NGO Platform on Shipbreaking. The platform is focused on the responsible ship breaking and disposal of end-of-life shipping vessels. The overall purpose of the platform is to stop the illegal dumping of toxic waste traveling from developed countries to undeveloped countries. The platform is focused on finding more sustainable, environmentally and socially responsible techniques for disposing of such wastes, which can be achieved through a system where the polluter is responsible for paying any fees associated with the legal and safe disposal of ships and other marine vessels. The NGO platform endorses the principles outlined in the Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and their Disposal.
See also
Computer recycling
Electronic waste in the United States
Environmental issues in the United States
Notes
References
Metech Announces Support for BAN E-Stewards Program
USA's trashed TVs, computer monitors can make toxic mess BAN Founder Jim Puckett expects much more e-waste will be exported from the U.S. once the broadcasting industry switches to digital signals on Feb. 17 and millions of households junk their old analog TV sets.
Responsible Electronics Recyclers
"After Dump, What Happens To Electronic Waste?", interview with Jim Puckett by Terry Gross, Fresh Air, NPR, December 21, 2010.
External links
Basel Action Network
e-Stewards
Electronic waste in the United States
Hazardous waste
International environmental organizations
Charities based in Washington (state)
International organizations based in the United States
Non-profit organizations based in Seattle
Waste organizations
501(c)(3) organizations | Basel Action Network | [
"Technology"
] | 685 | [
"Hazardous waste"
] |
14,509,284 | https://en.wikipedia.org/wiki/Kitchen%20exhaust%20cleaning | Kitchen exhaust cleaning (often referred to as hood cleaning) is the process of removing grease that has accumulated inside the ducts, hoods, fans and vents of exhaust systems of commercial kitchens. Left uncleaned, kitchen exhaust systems eventually accumulate enough grease to become a fire hazard.
Exhaust systems must be inspected regularly, at intervals consistent with usage, to determine whether cleaning is needed before a dangerous amount of grease has accumulated.
Cleaning
National Fire Protection Association Standard 96, Standard for Ventilation Control and Fire Protection of Commercial Cooking Operations, provides cleaning requirements. The cleaning frequency depends on the type of food being cooked and the volume of grease-laden vapors drawn up through the hood plenum.
Caustic chemicals
Caustic chemicals can be applied to break down the grease. After that, hot water can be used to rinse away the residue.
In extreme situations, where grease buildup is too heavy for a chemical application and a rinse, scrapers may be used to remove excess buildup from the contaminated surfaces, before chemicals are applied.
References
Cleaning
Indoor air pollution | Kitchen exhaust cleaning | [
"Chemistry"
] | 213 | [
"Cleaning",
"Surface science"
] |
14,509,372 | https://en.wikipedia.org/wiki/List%20of%20ProCurve%20products | HP ProCurve was the name of the networking division of Hewlett-Packard from 1998 to 2010 and associated with the products that it sold. The name of the division was changed to HP Networking in September 2010. Please use HP Networking Products for an actual list of products.
The HP ProCurve division sold network switches, wireless access points, WAN routers, and Access Control Servers/Software under the "HP ProCurve" brand name.
Switching
Core Switches
8212zl Series - (Released September 2007) Core switch offering, 12-module slot chassis with dual fabric modules and options for dual management modules and system support modules for high availability (HA). IPv6-ready, 692 Gbit/s fabric. Up to 48 10GbE ports, 288 Gb ports, or 288 SFPs. Powered by a combination of either 875W or 1500W PSUs, to provide a maximum of 3600W (5400W using additional power supplies) of power for PoE.
Datacenter Switches
6600 Series - (Released February 2009) Datacenter switch offered in five versions. There are four switches with either 24 or 48 Gb ports, with two models featuring four 10GbE SFP ports. There is also a 24 port 10GbE version. All of these feature front to back cooling and removable power supplies.
Interconnect Fabric
8100fl series - Chassis-based, with 8 or 16 module slots. Supports up to 16 10-Gigabit Ethernet ports, 160 Gigabit Ethernet ports, or 160 SFPs.
Distribution/Aggregator
6200yl - Stackable switch, Layer 3, with 24 SFP transceiver ports, and the capability of 10GE ports
6400cl series - Stackable switch, Layer 3, with either CX4 10GE ports or X2 10GE ports
6108 - Stackable switch, with 6 Gigabit ports, and a further 2 Dual Personality Gigabit ports (either 1000BASE-T or SFPs)
Managed edge switches
Entry level
2530, 2620 and 2540 lines are Aruba/HPE branded and included for comparison purposes only
Mainstream
2920 line is Aruba/HPE branded and included for comparison purposes only
Chassis/Advanced
Web managed switches
1800 series - Fanless 8 or 24 Gb ports. The 1800-24G also has 2 Dual Personality Ports (2 x Gb or SFP). No CLI or SNMP management.
1700 series - Fanless 7 10/100 ports plus 1 Gb or 22 10/100 ports plus 2 Gb. The 1700-24 also has 2 Dual Personality Ports (2 x Gb or SFPs). No CLI or SNMP management.
Unmanaged switches
2300 series
2124
1400 series
408
Routing
WAN Routers
7000dl - Stackable WAN routers with modules for T1/E1, E1+G.703, ADSL2+, Serial, ISDN, and also IPsec VPN.
German company .vantronix marketed software products until 2009.
Mobility
Due to country laws, ProCurve released different versions of their wireless access points and MultiService Access points.
MultiService Mobility / Access Controllers
The MSM Access and Mobility Controllers support security, roaming and quality of service across MSM Access Points utilising 802.11 a/b/g/n wireless technology.
MSM710 - Supports up to 10 x MSM Access points. Supports up to 100 Guest Users.
MSM730 - Supports up to 40 x MSM Access points. Supports up to 500 Guest Users.
MSM750 - Supports up to 200 x MSM Access points. Supports up to 2000 Guest Users.
MSM760 - Supports 40 x MSM Access Points, plus license support up to 200
MSM765 - Supports 40 x MSM Access Points, plus license support up to 200. This is a module form, and based on the ProCurve ONE.
MultiService Access Points
Most access points are designed to work in controlled mode: a controller manages and provides authentication services for them.
MSM310 - Single 802.11a/b/g radio. Includes 2.4 GHz dipole antennas
MSM310-R - External use. Single 802.11a/b/g radio. Includes 2.4 GHz dipole antennas
MSM313 - Integrated MSM Controller + single radio Access Point
MSM313-R - External Use. Integrated MSM Controller + single radio Access Point
MSM317 - Single 802.11b/g radio, with integrated 4 port switch
MSM320 - Dual radios (802.11a/b/g + 802.11a/b/g) for outdoor deployment options. Includes 2.4 GHz dipole antennas. Supports PoE.
MSM320-R - External use. Dual radios (802.11a/b/g + 802.11a/b/g). Includes 2.4 GHz dipole antennas. Supports PoE.
MSM323 - Integrated MSM Controller + dual radio Access point.
MSM323-R - External Use. Integrated MSM Controller + dual radio Access point.
MSM325 - Dual radios (802.11a/b/g + 802.11a/b/g) including RF security sensor. Requires PoE
MSM335 - Triple radios (802.11a/b/g + 802.11a/b/g + 802.11a/b/g RF security sensor)
MSM410 - Single 802.11 a/b/g/n radio. Requires PoE. Internal antenna only.
MSM422 - Dual-radio 802.11n + 802.11a/b/g.
MSM460 - Dual-Radio 802.11a/b/g/n. Only internal 3x3 MIMO Antenna. Requires PoE.
MSM466 - Same as MSM460 but with external 3x3 MIMO antenna connectors, no internal antenna. Requires PoE.
Centralised wireless
Wireless Edge Services Module - Controls Radio ports, and is an integrated module that fits into ProCurve Switches 5300xl / 5400zl / 8200zl only. Redundant Module available for failover. Supports the following Radio Ports:
RP-210 - Single 802.11b/g radio and integrated antenna
RP-220 - Dual-radio design (one 802.11a and one 802.11b/g); plenum rated; external antennas required
RP-230 - Dual-radio design (one 802.11a and one 802.11b/g); features internal, integrated antennas
Wireless access points
M110 - Single 802.11a/b/g radio
M111 - Wireless Client Bridge including dual band antennas
AP-530 - Wireless access point; Dual radios support simultaneous 802.11a and 802.11b/g transmissions. The AP-530 has two integrated radios (one of which supports 802.11a/b/g; the other of which supports 802.11b/g). The AP supports the Wireless Distribution System.
AP-420 - Wireless access point; Features a single, dual-diversity 802.11b/g radio.
AP-10ag - Wireless access point; Dual radios support simultaneous 802.11a and 802.11b/g transmissions.
Management Software
ProCurve Manager (PCM) is a network management suite for products by ProCurve.
ProCurve Manager
ProCurve Manager comes in two versions: a base version supplied free of charge with all managed ProCurve products (and also available for download), and a "Plus" version that incorporates more advanced functionality and enables plugin support. There is a 60-day trial version including all modules. Both derive from the trial version and must be activated via the Internet.
The Plus version can also be integrated with HP OpenView Network Node Manager for Windows. ProCurve Manager is intended predominantly for ProCurve products.
Protocols
PCM uses the Link Layer Discovery Protocol (LLDP), Cisco Discovery Protocol (CDP), and Foundry Discovery Protocol (FDP) to detect network devices.
For identification and deeper inspection of network devices, SNMP v2c or v3 is used.
Network traffic is analysed using RMON and sFlow.
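As a rough illustration of the kind of SNMP-based identification such a management suite performs, the sketch below queries a device's sysDescr and sysObjectID with the pysnmp library. The address, community string, and function name are assumptions for the example, not PCM internals.

```python
# Minimal sketch of SNMP-based device identification (not PCM itself).
# Assumes a reachable device at 192.0.2.1 with read community "public".
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def identify(host, community="public"):
    # Query two standard MIB-II scalars that identify a device.
    request = getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),      # SNMP v2c
        UdpTransportTarget((host, 161)),
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysObjectID", 0)),
    )
    error_indication, error_status, _, var_binds = next(request)
    if error_indication or error_status:
        raise RuntimeError(str(error_indication or error_status))
    return {str(name): str(value) for name, value in var_binds}

print(identify("192.0.2.1"))
```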
Plugins
IDM (Identity Driven Manager) - Add-on Module for PCM+; contains Intranet Network Access Security using 802.1X; compatible with Microsoft Network Access Protection (NAP) since Version IDM V2.3
NIM (Network Immunity Manager) - Add-On Module for PCM+ v2.2 and above; contains Intranet Intrusion Detection and Network Behavior Anomaly Detection (NBAD) using sFlow
PMM (ProCurve Mobility Manager) - Add-on Module for PCM+; contains Element Management for ProCurve Access Points (420/520/530) starting from Version PMM V1; WESM Modules and Radio Ports are supported since Version PMM V2. Since PMM v3, the MSM Access Points and Controllers are now supported
Security
Network access control with endpoint testing
ProCurve Network Access Controller 800 - Management and security for endpoints when they access the network
Firewall
The Threat Management Services Module is based on the ProCurve ONE Module, and is primarily a firewall with additional Intrusion-prevention system and VPN capabilities
Accessories
External power supplies
ProCurve 600 Redundant external power supply - supports one of six times Redundant Power for series 2600-PWR (not series 2600 w/o PWR), 2610, 2800, 3400cl, 6400cl and 7000dl as well as two times optional External PoE Power for series 2600-PWR, 2610-PWR or mandatory External PoE Power for series 5300 with xl 24-Port 10/100-TX PoE Module only
ProCurve 610 External power supply - supports four times optional External PoE Power for series 2600-PWR, 2610-PWR, or mandatory External PoE Power for series 5300 with xl 24-Port 10/100-TX PoE Module only
ProCurve 620 Redundant/External power supply - supports two times optional External PoE Power for series 3500yl and two times Redundant Power for series 2900, 3500yl and 6200yl
ProCurve Switch zl power supply shelf - supports two times optional External PoE Power for series 5400zl and 8200zl; must be additionally equipped with max. two 875W or 1500W (typical) ProCurve Switch zl power supplies
GBICs and optics
ProCurve has a range of transceivers, GBICs, and 10GbE optics for use within ProCurve devices.
Transceivers are used in the unmanaged 2100 & 2300 series, and the managed 2500 series of switches
GBICs are used for most switches for 100 Mbit/s and 1000 Mbit/s fiber connectivity. All fiber GBICs have an LC presentation.
ProCurve ONE
HP ProCurve ONE Services zl Module
The HP ProCurve ONE Services zl Module is an x86-based server module that provides two 10-GbE network links into the switch backplane. Coupled with ProCurve-certified services and applications that can take advantage of the switch-targeted API for better performance, this module creates a virtual appliance within a switch slot to provide solutions for business needs, such as network security. The ProCurve ONE Services zl Module is supported in the following switches:
HP ProCurve Switch 5406zl
HP ProCurve Switch 5412zl
HP ProCurve Switch 8212zl
The following applications have completed, or will complete, the ProCurve ONE Integrated certification on the HP ProCurve Services zl Module in early 2009.
Data center automation
HP ProCurve Data Center Connection Manager ONE Software (Q3 2009)
Location
Ekahau Real Time Location System
Wireless IPS
AirTight Networks SpectraGuard Enterprise
Network management
InMon Corporation Traffic Sentinel
VoIP / Unified Communications
Aastra 5000 Next Generation IP telephony
Avaya Unified Communications Solutions
Video distribution
VBrick Systems ViP
Other - Unsupported
Other 'unofficial' methods for loading alternative platform software, such as pfSense and VMware's ESXi, onto ONE Services modules have been discovered.
HP ZL Compute Blade on the Cheap | Tinkeringdadblog
Discontinued Products
1600M - stackable Layer 2 switch
2400M - stackable Layer 2 switch
2424M - stackable Layer 2 switch
4000M - modular Layer 2 switch
8000M - modular Layer 2 switch
9400 - modular Layer 3 Router
AP 520 - Access Point
4100gl - modular Layer 2 switch
2700 series - unmanaged Layer 2 switch
9300m series - modular Layer 3 Router
ProCurve Access Controller Series 700wl
745wl
ACM (Access Control Module) for the 5300xl only
5300xl series - Chassis based, Layer 3, in either 4 or 8 slot bays.
See also
Aruba Networks
ProCurve
References
External links
HPE Networking website
VisioCafe (Home of HP VisioShapes)
ProCurve
Network management
ProCurve products | List of ProCurve products | [
"Technology",
"Engineering"
] | 2,754 | [
"Computing-related lists",
"Network management",
"Computer networks engineering",
"Lists of computer hardware"
] |
14,510,148 | https://en.wikipedia.org/wiki/Weibel%20instability | The Weibel instability is a plasma instability present in homogeneous or nearly homogeneous electromagnetic plasmas which possess an anisotropy in momentum (velocity) space. This anisotropy is most generally understood as two temperatures in different directions. Burton Fried showed that this instability can be understood more simply as the superposition of many counter-streaming beams. In this sense, it is like the two-stream instability except that the perturbations are electromagnetic and result in filamentation as opposed to electrostatic perturbations which would result in charge bunching. In the linear limit the instability causes exponential growth of electromagnetic fields in the plasma which help restore momentum space isotropy. In very extreme cases, the Weibel instability is related to one- or two-dimensional stream instabilities.
Consider an electron-ion plasma in which the ions are fixed and the electrons are hotter in the y-direction than in the x- or z-directions.
To see how the magnetic field perturbation would grow, suppose a field $\vec{B} = B\hat{z}\cos kx$ spontaneously arises from noise. The Lorentz force $-e\vec{v}\times\vec{B}$ then bends the electron trajectories, with the result that upward-moving electrons congregate at B and downward-moving ones at A. The resulting current sheets generate a magnetic field that enhances the original field, and thus the perturbation grows.
The Weibel instability is also common in astrophysical plasmas, such as collisionless shock formation in supernova remnants and gamma-ray bursts.
A Simple Example of Weibel Instability
As a simple example of Weibel instability, consider an electron beam with density $n_{b0}$ and initial velocity $v_0\hat{z}$ propagating in a plasma of density $n_{p0} = n_{b0}$ with velocity $-v_0\hat{z}$. The analysis below will show how an electromagnetic perturbation in the form of a plane wave gives rise to a Weibel instability in this simple anisotropic plasma system. We assume a non-relativistic plasma for simplicity.
We assume there is no background electric or magnetic field, i.e. $\vec{E}_0 = \vec{B}_0 = 0$. The perturbation will be taken as an electromagnetic wave propagating along $\hat{x}$, i.e. $\vec{k} = k\hat{x}$. Assume the electric field has the form
$$\vec{E}_1 = E\,\hat{z}\,e^{i(kx-\omega t)}$$
With the assumed spatial and time dependence, we may use $\nabla \rightarrow ik\hat{x}$ and $\partial/\partial t \rightarrow -i\omega$. From Faraday's Law, we may obtain the perturbation magnetic field
$$\vec{B}_1 = \frac{\nabla\times\vec{E}_1}{i\omega} = -\frac{k}{\omega}E\,\hat{y}\,e^{i(kx-\omega t)}$$
Consider the electron beam. We assume small perturbations, and so linearize the velocity $\vec{v}_b = v_0\hat{z} + \vec{v}_{b1}$ and density $n_b = n_{b0} + n_{b1}$. The goal is to find the perturbation electron beam current density
$$\vec{J}_{b1} = -e\left(n_{b0}\vec{v}_{b1} + n_{b1}v_0\hat{z}\right)$$
where second-order terms have been neglected. To do that, we start with the fluid momentum equation for the electron beam
$$m_e\left(\frac{\partial\vec{v}_b}{\partial t} + (\vec{v}_b\cdot\nabla)\vec{v}_b\right) = -e\left(\vec{E}_1 + \vec{v}_b\times\vec{B}_1\right)$$
which can be simplified by noting that $(\vec{v}_b\cdot\nabla)\vec{v}_b \approx 0$ (the unperturbed flow is uniform and directed along $\hat{z}$, while the perturbation varies only along $\hat{x}$) and neglecting second-order terms. With the plane wave assumption for the derivatives, the momentum equation becomes
$$-i\omega m_e\vec{v}_{b1} = -e\left(\vec{E}_1 + v_0\hat{z}\times\vec{B}_1\right)$$
We can decompose the above equation in components, paying attention to the cross product at the far right, and obtain the non-zero components of the beam velocity perturbation:
$$v_{b1,z} = -\frac{ieE}{m_e\omega}e^{i(kx-\omega t)}, \qquad v_{b1,x} = -\frac{iekv_0E}{m_e\omega^2}e^{i(kx-\omega t)}$$
To find the perturbation density $n_{b1}$, we use the fluid continuity equation for the electron beam
$$\frac{\partial n_b}{\partial t} + \nabla\cdot(n_b\vec{v}_b) = 0$$
which can again be simplified by noting that $\nabla\cdot(n_{b0}v_0\hat{z}) = 0$ and neglecting second-order terms. The result is
$$n_{b1} = \frac{k}{\omega}n_{b0}v_{b1,x} = -\frac{iek^2v_0n_{b0}E}{m_e\omega^3}e^{i(kx-\omega t)}$$
Using these results, we may use the equation for the beam perturbation current density given above to find
$$J_{b1,x} = \frac{ie^2n_{b0}kv_0E}{m_e\omega^2}e^{i(kx-\omega t)}, \qquad J_{b1,z} = \frac{ie^2n_{b0}E}{m_e\omega}\left(1+\frac{k^2v_0^2}{\omega^2}\right)e^{i(kx-\omega t)}$$
Analogous expressions can be written for the perturbation current density of the left-moving plasma. By noting that the x-component of the perturbation current density is proportional to $v_0$, we see that with our assumptions for the beam and plasma unperturbed densities and velocities (equal densities, opposite velocities) the x-component of the net current density will vanish, whereas the z-components, which are proportional to $v_0^2$, will add. The net current density perturbation is therefore
$$\vec{J}_1 = \frac{2ie^2n_{b0}E}{m_e\omega}\left(1+\frac{k^2v_0^2}{\omega^2}\right)e^{i(kx-\omega t)}\,\hat{z}$$
The dispersion relation can now be found from Maxwell's Equations:
$$k^2E = \frac{\omega^2}{c^2}E + i\omega\mu_0 J_{1,z}$$
where $c = 1/\sqrt{\mu_0\epsilon_0}$ is the speed of light in free space. By defining the effective plasma frequency $\omega_p'^2 = 2e^2n_{b0}/(\epsilon_0 m_e)$, the equation above results in
$$\omega^4 - \omega^2\left(\omega_p'^2 + c^2k^2\right) - \omega_p'^2k^2v_0^2 = 0$$
This bi-quadratic equation may be easily solved to give the dispersion relation
$$\omega^2 = \frac{\left(\omega_p'^2 + c^2k^2\right) \pm \sqrt{\left(\omega_p'^2 + c^2k^2\right)^2 + 4\,\omega_p'^2k^2v_0^2}}{2}$$
In the search for instabilities, we look for solutions with $\operatorname{Im}(\omega) > 0$ ($k$ is assumed real). Therefore, we must take the dispersion relation/mode corresponding to the minus sign in the equation above.
To gain further insight on the instability, it is useful to harness our non-relativistic assumption $v_0 \ll c$ to simplify the square root term, by noting that
$$\sqrt{\left(\omega_p'^2 + c^2k^2\right)^2 + 4\,\omega_p'^2k^2v_0^2} \approx \left(\omega_p'^2 + c^2k^2\right) + \frac{2\,\omega_p'^2k^2v_0^2}{\omega_p'^2 + c^2k^2}$$
The resulting dispersion relation is then much simpler,
$$\omega^2 = -\frac{\omega_p'^2k^2v_0^2}{\omega_p'^2 + c^2k^2}$$
and $\omega$ is purely imaginary. Writing
$$\omega = i\gamma = i\,\frac{\omega_p'kv_0}{\sqrt{\omega_p'^2 + c^2k^2}}$$
we see that $\gamma > 0$, indeed corresponding to an instability.
The electromagnetic fields then have the form
$$\vec{E}_1 = E\,\hat{z}\,e^{ikx}e^{\gamma t}, \qquad \vec{B}_1 = i\,\frac{k}{\gamma}E\,\hat{y}\,e^{ikx}e^{\gamma t}$$
Therefore, the electric and magnetic fields are $90^\circ$ out of phase, and by noting that
$$\frac{c|\vec{B}_1|}{|\vec{E}_1|} = \frac{ck}{\gamma} = \frac{c\,\sqrt{\omega_p'^2 + c^2k^2}}{\omega_p'v_0} \gg 1$$
we see this is a primarily magnetic perturbation, although there is a non-zero electric perturbation. The magnetic field growth results in the characteristic filamentation structure of Weibel instability. Saturation will happen when the growth rate is on the order of the electron cyclotron frequency,
$$\gamma \sim \omega_c = \frac{eB}{m_e}$$
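As a quick numerical sanity check on the algebra above, the short script below solves the bi-quadratic dispersion relation for sample parameters and compares the unstable root with the simplified non-relativistic growth rate. All parameter values are illustrative assumptions, not values from the text.

```python
# Numerical check of the Weibel dispersion relation derived above.
# Parameter values are illustrative assumptions.
import numpy as np

c = 3.0e8                      # speed of light, m/s
wp = 1.0e9                     # effective plasma frequency ω_p', rad/s (assumed)
v0 = 1.0e6                     # beam speed, m/s (non-relativistic: v0 << c)
k = wp / c                     # sample wavenumber, 1/m

# Bi-quadratic in ω²:  (ω²)² − (ω_p'² + c²k²) ω² − ω_p'² k² v0² = 0
a, b, d = 1.0, -(wp**2 + (c * k)**2), -(wp * k * v0)**2
w2_minus = (-b - np.sqrt(b**2 - 4 * a * d)) / (2 * a)   # "minus" branch → ω² < 0

gamma_exact = np.sqrt(-w2_minus)                         # ω = iγ, growth rate
gamma_approx = wp * k * v0 / np.sqrt(wp**2 + (c * k)**2)

print(f"exact γ  = {gamma_exact:.6e} rad/s")
print(f"approx γ = {gamma_approx:.6e} rad/s")            # agree to O(v0²/c²)
```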
References
Lecture
See also
Chromo-Weibel Instability
Plasma instabilities
Particle physics | Weibel instability | [
"Physics"
] | 983 | [
"Plasma phenomena",
"Physical phenomena",
"Particle physics",
"Plasma instabilities"
] |
14,510,267 | https://en.wikipedia.org/wiki/Lonicera%20%C3%97%20heckrottii | Lonicera × heckrottii, the golden flame honeysuckle, is a plant in the honeysuckle family, Caprifoliaceae, grown in gardens for its showy flowers and long season of bloom.
Description
Lonicera × heckrottii is a vine with opposite, simple leaves, on twining stems. It has fragrant pink to yellow flowers. Lonicera × heckrottii is believed to be a hybrid of Lonicera sempervirens and Lonicera × americana.
References
heckrottii
Hybrid plants
Taxa named by Alfred Rehder | Lonicera × heckrottii | [
"Biology"
] | 122 | [
"Hybrid plants",
"Plants",
"Hybrid organisms"
] |
14,511,511 | https://en.wikipedia.org/wiki/Rafael%20Robb | Rafael Robb (born October 31, 1950) is an economist and former professor at the University of Pennsylvania who confessed to killing his wife in 2006.
Academic career
Robb received his bachelor's degree from the Hebrew University of Jerusalem. He went on to obtain a Ph.D. in economics at UCLA. Robb joined the University of Pennsylvania faculty in 1984, and was a tenured professor at the time of his arrest in 2007.
Robb specialized in game theory, a mathematical discipline used to analyze political, economic, and military strategies. He has published numerous papers on game theory and other economic topics with scholars from Greece, Israel, Japan, and the US. In most of the papers, his family name is spelled as "Rob". He is also a fellow of the Econometric Society, one of the highest honors in economics.
Personal life
Robb grew up in Israel, and emigrated to the US to pursue graduate studies. He met Ellen Gregory Robb, a sales manager, in 1987, and they married in 1990. They have one daughter.
Killing of wife
Robb pleaded guilty in November 2007 to voluntary manslaughter in the high-profile death of his wife, Ellen Gregory Robb. She had been bludgeoned to death. Her death occurred on December 22, 2006, during an argument over the couple's divorce and the plans for their home in Upper Merion Township, Pennsylvania.
Robb was arrested on January 8, 2007, and charged with murder. The case was prosecuted by District Attorney (later County Commissioner specially appointed as prosecutor) Bruce Castor.
Robb pleaded guilty to manslaughter on November 26, 2007, and resigned from the university. Robb was sentenced on November 19, 2008, to a 10-year prison term, though the prosecutor asked for a sentence of 10 to 20 years. He sought parole after five years, as allowed by his sentence. The state board initially approved, and then rescinded, the request.
Civil case
Following Robb's guilty plea, Ellen Gregory Robb's family brought a civil wrongful death suit against him. In 2014, Robb was ordered to pay $124.26 million in compensatory and punitive damages to his deceased wife's estate, of which the sole beneficiary is his daughter. Robb then made an appeal to reduce this to $100 million, claiming that some of the evidence presented against him during the civil trial should not have been permitted as it unfairly influenced jurors against him.
Near the end of his prison sentence, Robb also requested access to frozen assets in order to pay for living expenses. Upon his release, he formally withdrew this plea.
Release
Robb was released from prison on January 8, 2017. After release, he moved to a suburb of Pittsburgh.
Notable publications
Michihiro Kandori, George J. Mailath, and Rafael Robb: "Learning, Mutation, and Long Run Equilibria in Games", Econometrica, Vol. 61, No. 1 (Jan. 1993), pp. 29–56
References
External links
Discovery Communications: Investigation Discovery: Stolen Voices, Buried Secrets: Episode Guide ("Checkmate", second episode on page)
1950 births
Fellows of the Econometric Society
Game theorists
Living people
University of Pennsylvania faculty
American people convicted of manslaughter
Prisoners and detainees of Pennsylvania
20th-century American Jews
People from Upper Merion Township, Pennsylvania
Hebrew University of Jerusalem alumni
University of California, Los Angeles alumni
21st-century American Jews | Rafael Robb | [
"Mathematics"
] | 703 | [
"Game theorists",
"Game theory"
] |
14,511,671 | https://en.wikipedia.org/wiki/Interpretation%20%28logic%29 | An interpretation is an assignment of meaning to the symbols of a formal language. Many formal languages used in mathematics, logic, and theoretical computer science are defined in solely syntactic terms, and as such do not have any meaning until they are given some interpretation. The general study of interpretations of formal languages is called formal semantics.
The most commonly studied formal logics are propositional logic, predicate logic and their modal analogs, and for these there are standard ways of presenting an interpretation. In these contexts an interpretation is a function that provides the extension of symbols and strings of symbols of an object language. For example, an interpretation function could take the predicate T (for "tall") and assign it the extension {a} (for "Abraham Lincoln"). All our interpretation does is assign the extension {a} to the non-logical constant T, and does not make a claim about whether T is to stand for tall and 'a' for Abraham Lincoln. Nor does logical interpretation have anything to say about logical connectives like 'and', 'or' and 'not'. Though we may take these symbols to stand for certain things or concepts, this is not determined by the interpretation function.
An interpretation often (but not always) provides a way to determine the truth values of sentences in a language. If a given interpretation assigns the value True to a sentence or theory, the interpretation is called a model of that sentence or theory.
Formal languages
A formal language consists of a possibly infinite set of sentences (variously called words or formulas) built from a fixed set of letters or symbols. The inventory from which these letters are taken is called the alphabet over which the language is defined. To distinguish the strings of symbols that are in a formal language from arbitrary strings of symbols, the former are sometimes called well-formed formulæ (wff). The essential feature of a formal language is that its syntax can be defined without reference to interpretation. For example, we can determine that (P or Q) is a well-formed formula even without knowing whether it is true or false.
Example
A formal language W can be defined with the alphabet α = { △, ◻ }, and with a word being in W if it begins with △ and is composed solely of the symbols △ and ◻.
A possible interpretation of W could assign the decimal digit '1' to △ and '0' to ◻. Then △◻△ would denote 101 under this interpretation of W.
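A minimal sketch of this toy language and interpretation in Python follows; the symbol choices (△ and ◻) follow the reconstruction above, and the function names are illustrative.

```python
# Toy formal language W over the alphabet {△, ◻}: a word is in W iff it
# begins with △ and uses only the symbols △ and ◻ (syntax only).
ALPHABET = {"△", "◻"}

def is_wff(word: str) -> bool:
    return len(word) > 0 and word[0] == "△" and set(word) <= ALPHABET

# One possible interpretation: △ ↦ '1', ◻ ↦ '0', read as a decimal numeral.
def interpret(word: str) -> int:
    assert is_wff(word), "only well-formed words receive a meaning"
    digits = word.translate(str.maketrans({"△": "1", "◻": "0"}))
    return int(digits)

print(interpret("△◻△"))   # 101
```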
Logical constants
In the specific cases of propositional logic and predicate logic, the formal languages considered have alphabets that are divided into two sets: the logical symbols (logical constants) and the non-logical symbols. The idea behind this terminology is that logical symbols have the same meaning regardless of the subject matter being studied, while non-logical symbols change in meaning depending on the area of investigation.
Logical constants are always given the same meaning by every interpretation of the standard kind, so that only the meanings of the non-logical symbols are changed. Logical constants include quantifier symbols ∀ ("all") and ∃ ("some"), symbols for logical connectives ∧ ("and"), ∨ ("or"), ¬ ("not"), parentheses and other grouping symbols, and (in many treatments) the equality symbol =.
General properties of truth-functional interpretations
Many of the commonly studied interpretations associate each sentence in a formal language with a single truth value, either True or False. These interpretations are called truth functional; they include the usual interpretations of propositional and first-order logic. The sentences that are made true by a particular assignment are said to be satisfied by that assignment.
In classical logic, no sentence can be made both true and false by the same interpretation, although this is not true of glut logics such as LP. Even in classical logic, however, it is possible that the truth value of the same sentence can be different under different interpretations. A sentence is consistent if it is true under at least one interpretation; otherwise it is inconsistent. A sentence φ is said to be logically valid if it is satisfied by every interpretation (if φ is satisfied by every interpretation that satisfies ψ then φ is said to be a logical consequence of ψ).
Logical connectives
Some of the logical symbols of a language (other than quantifiers) are truth-functional connectives that represent truth functions — functions that take truth values as arguments and return truth values as outputs (in other words, these are operations on truth values of sentences).
The truth-functional connectives enable compound sentences to be built up from simpler sentences. In this way, the truth value of the compound sentence is defined as a certain truth function of the truth values of the simpler sentences. The connectives are usually taken to be logical constants, meaning that the meaning of the connectives is always the same, independent of what interpretations are given to the other symbols in a formula.
This is how we define logical connectives in propositional logic:
¬Φ is True iff Φ is False.
(Φ ∧ Ψ) is True iff Φ is True and Ψ is True.
(Φ ∨ Ψ) is True iff Φ is True or Ψ is True (or both are True).
(Φ → Ψ) is True iff ¬Φ is True or Ψ is True (or both are True).
(Φ ↔ Ψ) is True iff (Φ → Ψ) is True and (Ψ → Φ) is True.
So under a given interpretation of all the sentence letters Φ and Ψ (i.e., after assigning a truth-value to each sentence letter), we can determine the truth-values of all formulas that have them as constituents, as a function of the logical connectives. The following table shows how this looks. The first two columns show the truth-values of the sentence letters as determined by the four possible interpretations. The other columns show the truth-values of formulas built from these sentence letters, with truth-values determined recursively.

 Φ  Ψ | ¬Φ  (Φ ∧ Ψ)  (Φ ∨ Ψ)  (Φ → Ψ)  (Φ ↔ Ψ)
 T  T |  F     T        T        T        T
 T  F |  F     F        T        F        F
 F  T |  T     F        T        T        F
 F  F |  T     F        F        T        T
Now it is easier to see what makes a formula logically valid. Take the formula F: (Φ ∨ ¬Φ). If our interpretation function makes Φ True, then ¬Φ is made False by the negation connective. Since the disjunct Φ of F is True under that interpretation, F is True. Now the only other possible interpretation of Φ makes it False, and if so, ¬Φ is made True by the negation function. That would make F True again, since one of F's disjuncts, ¬Φ, would be true under this interpretation. Since these two interpretations for F are the only possible logical interpretations, and since F comes out True for both, we say that it is logically valid or tautologous.
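The brute-force check described here is easy to mechanize: enumerate every interpretation of the sentence letters and evaluate the formula recursively. The sketch below does this for formulas expressed with Python's own Boolean operators; the helper names are illustrative.

```python
# Enumerate all 2^n interpretations of the sentence letters and check
# whether a formula (given as a function of booleans) is logically valid.
from itertools import product

def is_tautology(formula, n_letters: int) -> bool:
    # Each assignment maps the n sentence letters to True/False.
    return all(formula(*letters)
               for letters in product([True, False], repeat=n_letters))

# F: (Φ ∨ ¬Φ) — valid, true under both interpretations of Φ.
print(is_tautology(lambda p: p or not p, 1))          # True
# (Φ → Ψ), written as (¬Φ ∨ Ψ) — not valid (fails when Φ=T, Ψ=F).
print(is_tautology(lambda p, q: (not p) or q, 2))     # False
```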
Interpretation of a theory
An interpretation of a theory is the relationship between a theory and some subject matter when there is a many-to-one correspondence between certain elementary statements of the theory, and certain statements related to the subject matter. If every elementary statement in the theory has a correspondent it is called a full interpretation, otherwise it is called a partial interpretation.
Interpretations for propositional logic
The formal language for propositional logic consists of formulas built up from propositional symbols (also called sentential symbols, sentential variables, propositional variables) and logical connectives. The only non-logical symbols in a formal language for propositional logic are the propositional symbols, which are often denoted by capital letters. To make the formal language precise, a specific set of propositional symbols must be fixed.
The standard kind of interpretation in this setting is a function that maps each propositional symbol to one of the truth values true and false. This function is known as a truth assignment or valuation function. In many presentations, it is literally a truth value that is assigned, but some presentations assign truthbearers instead.
For a language with n distinct propositional variables there are 2^n distinct possible interpretations. For any particular variable a, for example, there are 2^1 = 2 possible interpretations: 1) a is assigned T, or 2) a is assigned F. For the pair a, b there are 2^2 = 4 possible interpretations: 1) both are assigned T, 2) both are assigned F, 3) a is assigned T and b is assigned F, or 4) a is assigned F and b is assigned T.
Given any truth assignment for a set of propositional symbols, there is a unique extension to an interpretation for all the propositional formulas built up from those variables. This extended interpretation is defined inductively, using the truth-table definitions of the logical connectives discussed above.
First-order logic
Unlike propositional logic, where every language is the same apart from a choice of a different set of propositional variables, there are many different first-order languages. Each first-order language is defined by a signature. The signature consists of a set of non-logical symbols and an identification of each of these symbols as either a constant symbol, a function symbol, or a predicate symbol. In the case of function and predicate symbols, a natural number arity is also assigned. The alphabet for the formal language consists of logical constants, the equality relation symbol =, all the symbols from the signature, and an additional infinite set of symbols known as variables.
For example, in the language of rings, there are constant symbols 0 and 1, two binary function symbols + and ·, and no binary relation symbols. (Here the equality relation is taken as a logical constant.)
Again, we might define a first-order language L as consisting of individual symbols a, b, and c; predicate symbols F, G, H, I, and J; variables x, y, z; no function letters; and no sentential symbols.
Formal languages for first-order logic
Given a signature σ, the corresponding formal language is known as the set of σ-formulas. Each σ-formula is built up out of atomic formulas by means of logical connectives; atomic formulas are built from terms using predicate symbols. The formal definition of the set of σ-formulas proceeds in the other direction: first, terms are assembled from the constant and function symbols together with the variables. Then, terms can be combined into an atomic formula using a predicate symbol (relation symbol) from the signature or the special predicate symbol "=" for equality (see the section "Interpreting equality" below). Finally, the formulas of the language are assembled from atomic formulas using the logical connectives and quantifiers.
Interpretations of a first-order language
To ascribe meaning to all sentences of a first-order language, the following information is needed.
A domain of discourse D, usually required to be non-empty (see below).
For every constant symbol, an element of D as its interpretation.
For every n-ary function symbol, an n-ary function from D to D as its interpretation (that is, a function Dn → D).
For every n-ary predicate symbol, an n-ary relation on D as its interpretation (that is, a subset of Dn).
An object carrying this information is known as a structure (of signature σ), or σ-structure, or L-structure (of language L), or as a "model".
The information specified in the interpretation provides enough information to give a truth value to any atomic formula, after each of its free variables, if any, has been replaced by an element of the domain. The truth value of an arbitrary sentence is then defined inductively using the T-schema, which is a definition of first-order semantics developed by Alfred Tarski. The T-schema interprets the logical connectives using truth tables, as discussed above. Thus, for example, φ ∧ ψ is satisfied if and only if both φ and ψ are satisfied.
This leaves the issue of how to interpret formulas of the form ∀x φ(x) and ∃x φ(x). The domain of discourse forms the range for these quantifiers. The idea is that the sentence ∀x φ(x) is true under an interpretation exactly when every substitution instance of φ(x), where x is replaced by some element of the domain, is satisfied. The formula ∃x φ(x) is satisfied if there is at least one element d of the domain such that φ(d) is satisfied.
Strictly speaking, a substitution instance such as the formula φ(d) mentioned above is not a formula in the original formal language of φ, because d is an element of the domain. There are two ways of handling this technical issue. The first is to pass to a larger language in which each element of the domain is named by a constant symbol. The second is to add to the interpretation a function that assigns each variable to an element of the domain. Then the T-schema can quantify over variations of the original interpretation in which this variable assignment function is changed, instead of quantifying over substitution instances.
Some authors also admit propositional variables in first-order logic, which must then also be interpreted. A propositional variable can stand on its own as an atomic formula. The interpretation of a propositional variable is one of the two truth values true and false.
Because the first-order interpretations described here are defined in set theory, they do not associate each predicate symbol with a property (or relation), but rather with the extension of that property (or relation). In other words, these first-order interpretations are extensional not intensional.
Example of a first-order interpretation
An example of interpretation of the language L described above is as follows.
Domain: A chess set
Individual constants: a: The white King, b: The black Queen, c: The white King's pawn
F(x): x is a piece
G(x): x is a pawn
H(x): x is black
I(x): x is white
J(x, y): x can capture y
In the interpretation of L:
the following are true sentences: F(a), G(c), H(b), I(a), J(b, c),
the following are false sentences: J(a, c), G(a).
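To make the extensional reading concrete, here is a small sketch of this chess interpretation in Python, with predicates represented by their extensions (sets and relations over a simplified domain) and quantifiers evaluated by iterating over the domain. The three listed pieces and the capture relation are illustrative assumptions, not a full chess set.

```python
# A tiny extensional interpretation of the language L on a simplified,
# illustrative chess domain. Predicates are sets/relations of pieces.
D = {"wK", "bQ", "wKp"}                 # white King, black Queen, white King's pawn

a, b, c = "wK", "bQ", "wKp"             # individual constants
F = D                                   # F(x): x is a piece
G = {"wKp"}                             # G(x): x is a pawn
H = {"bQ"}                              # H(x): x is black
I = {"wK", "wKp"}                       # I(x): x is white
J = {("bQ", "wKp")}                     # J(x, y): x can capture y (sample extension)

# Atomic sentences are evaluated by membership in an extension.
print(c in G)                           # G(c): True
print((a, c) in J)                      # J(a, c): False

# Quantifiers range over the domain D.
print(all(x in F for x in D))             # ∀x F(x): every element is a piece → True
print(any(x in G and x in H for x in D))  # ∃x (G(x) ∧ H(x)): a black pawn → False
```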
Non-empty domain requirement
As stated above, a first-order interpretation is usually required to specify a nonempty set as the domain of discourse. The reason for this requirement is to guarantee that equivalences such as
(φ ∨ ∃x ψ) ↔ ∃x (φ ∨ ψ),
where x is not a free variable of φ, are logically valid. This equivalence holds in every interpretation with a nonempty domain, but does not always hold when empty domains are permitted. For example, the equivalence
(φ ∨ ∃x ψ) ↔ ∃x (φ ∨ ψ)
fails in any structure with an empty domain: when φ is true, the left side is satisfied, while the right side has no witness for x. Thus the proof theory of first-order logic becomes more complicated when empty structures are permitted. However, the gain in allowing them is negligible, as both the intended interpretations and the interesting interpretations of the theories people study have non-empty domains.
Empty relations do not cause any problem for first-order interpretations, because there is no similar notion of passing a relation symbol across a logical connective, enlarging its scope in the process. Thus it is acceptable for relation symbols to be interpreted as being identically false. However, the interpretation of a function symbol must always assign a well-defined and total function to the symbol.
Interpreting equality
The equality relation is often treated specially in first order logic and other predicate logics. There are two general approaches.
The first approach is to treat equality as no different than any other binary relation. In this case, if an equality symbol is included in the signature, it is usually necessary to add various axioms about equality to axiom systems (for example, the substitution axiom saying that if a = b and R(a) holds then R(b) holds as well). This approach to equality is most useful when studying signatures that do not include the equality relation, such as the signature for set theory or the signature for second-order arithmetic in which there is only an equality relation for numbers, but not an equality relation for sets of numbers.
The second approach is to treat the equality relation symbol as a logical constant that must be interpreted by the real equality relation in any interpretation. An interpretation that interprets equality this way is known as a normal model, so this second approach is the same as only studying interpretations that happen to be normal models. The advantage of this approach is that the axioms related to equality are automatically satisfied by every normal model, and so they do not need to be explicitly included in first-order theories when equality is treated this way. This second approach is sometimes called first order logic with equality, but many authors adopt it for the general study of first-order logic without comment.
There are a few other reasons to restrict study of first-order logic to normal models. First, it is known that any first-order interpretation in which equality is interpreted by an equivalence relation and satisfies the substitution axioms for equality can be cut down to an elementarily equivalent interpretation on a subset of the original domain. Thus there is little additional generality in studying non-normal models. Second, if non-normal models are considered, then every consistent theory has an infinite model; this affects the statements of results such as the Löwenheim–Skolem theorem, which are usually stated under the assumption that only normal models are considered.
Many-sorted first-order logic
A generalization of first order logic considers languages with more than one sort of variables. The idea is that different sorts of variables represent different types of objects. Every sort of variable can be quantified; thus an interpretation for a many-sorted language has a separate domain for each of the sorts of variables to range over (there is an infinite collection of variables of each of the different sorts). Function and relation symbols, in addition to having arities, are specified so that each of their arguments must come from a certain sort.
One example of many-sorted logic is for planar Euclidean geometry. There are two sorts: points and lines. There is an equality relation symbol for points, an equality relation symbol for lines, and a binary incidence relation E which takes one point variable and one line variable. The intended interpretation of this language has the point variables range over all points on the Euclidean plane, the line variables range over all lines on the plane, and the incidence relation E(p,l) holds if and only if point p is on line l.
Higher-order predicate logics
A formal language for higher-order predicate logic looks much the same as a formal language for first-order logic. The difference is that there are now many different types of variables. Some variables correspond to elements of the domain, as in first-order logic. Other variables correspond to objects of higher type: subsets of the domain, functions from the domain, functions that take a subset of the domain and return a function from the domain to subsets of the domain, etc. All of these types of variables can be quantified.
There are two kinds of interpretations commonly employed for higher-order logic. Full semantics require that, once the domain of discourse is specified, the higher-order variables range over all possible elements of the correct type (all subsets of the domain, all functions from the domain to itself, etc.). Thus the specification of a full interpretation is the same as the specification of a first-order interpretation. Henkin semantics, which are essentially multi-sorted first-order semantics, require the interpretation to specify a separate domain for each type of higher-order variable to range over. Thus an interpretation in Henkin semantics includes a domain D, a collection of subsets of D, a collection of functions from D to D, etc. The relationship between these two semantics is an important topic in higher order logic.
Non-classical interpretations
The interpretations of propositional logic and predicate logic described above are not the only possible interpretations. In particular, there are other types of interpretations that are used in the study of non-classical logic (such as intuitionistic logic), and in the study of modal logic.
Interpretations used to study non-classical logic include topological models, Boolean-valued models, and Kripke models. Modal logic is also studied using Kripke models.
Intended interpretations
Many formal languages are associated with a particular interpretation that is used to motivate them. For example, the first-order signature for set theory includes only one binary relation, ∈, which is intended to represent set membership, and the domain of discourse in a first-order theory of the natural numbers is intended to be the set of natural numbers.
The intended interpretation is called the standard model (a term introduced by Abraham Robinson in 1960). In the context of Peano arithmetic, it consists of the natural numbers with their ordinary arithmetical operations. All models that are isomorphic to the one just given are also called standard; these models all satisfy the Peano axioms. There are also non-standard models of the (first-order version of the) Peano axioms, which contain elements not correlated with any natural number.
While the intended interpretation can have no explicit indication in the strictly formal syntactical rules, it naturally affects the choice of the formation and transformation rules of the syntactical system. For example, primitive signs must permit expression of the concepts to be modeled; sentential formulas are chosen so that their counterparts in the intended interpretation are meaningful declarative sentences; primitive sentences need to come out as true sentences in the interpretation; and rules of inference must be such that, if a sentence φ is directly derivable from a sentence ψ, then ψ → φ turns out to be a true sentence, with → meaning implication, as usual. These requirements ensure that all provable sentences also come out to be true.
Most formal systems have many more models than they were intended to have (the existence of non-standard models is an example). When we speak about 'models' in empirical sciences, we mean, if we want reality to be a model of our science, to speak about an intended model. A model in the empirical sciences is an intended factually-true descriptive interpretation (or in other contexts: a non-intended arbitrary interpretation used to clarify such an intended factually-true descriptive interpretation.) All models are interpretations that have the same domain of discourse as the intended one, but other assignments for non-logical constants.
Example
Given a simple formal system (we shall call this one FS) whose alphabet α consists only of three symbols { ▲, ★, ● } and whose formation rule for formulas is:
'Any string of symbols of FS which is at least 6 symbols long, and which is not infinitely long, is a formula of FS. Nothing else is a formula of FS.'
The single axiom schema of FS is:
" ∗ ★ ▲▲▲ ● ∗ ▲▲▲ " (where " ∗ " is a metasyntactic variable standing for a finite string of " ▲ "s)
A formal proof can be constructed as follows:
1. ▲★▲▲▲●▲▲▲▲ (an instance of the axiom schema, obtained by replacing " ∗ " with " ▲ ")
In this example the theorem produced " ▲★▲▲▲●▲▲▲▲ " can be interpreted as meaning "One plus three equals four." A different interpretation would be to read it backwards as "Four minus three equals one."
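A short sketch of this toy system in Python follows. Note that the three symbols and the axiom schema are the reconstruction assumed above, so the code illustrates the general idea of pairing a syntactic system with two interpretations rather than a canonical definition.

```python
# Toy formal system FS (symbols ▲, ★, ● as reconstructed above) with two
# interpretations of the same theorem: forward and backward readings.
def is_formula(s: str) -> bool:
    # Formation rule: a finite string over the alphabet, at least 6 symbols long.
    return len(s) >= 6 and set(s) <= {"▲", "★", "●"}

def axiom_instance(n: int) -> str:
    # Axiom schema  ∗★▲▲▲●∗▲▲▲  with ∗ replaced by a string of n ▲'s.
    tally = "▲" * n
    return tally + "★" + "▲▲▲" + "●" + tally + "▲▲▲"

def read(s: str, star: str, dot: str) -> str:
    # Interpret runs of ▲ as numerals and the two operator symbols as given.
    parts = s.replace("★", f" {star} ").replace("●", f" {dot} ").split()
    return " ".join(str(len(p)) if set(p) == {"▲"} else p for p in parts)

theorem = axiom_instance(1)
assert is_formula(theorem)
print(read(theorem, star="+", dot="="))          # 1 + 3 = 4
print(read(theorem[::-1], star="=", dot="-"))    # 4 - 3 = 1  (backward reading)
```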
Other concepts of interpretation
There are other uses of the term "interpretation" that are commonly used, which do not refer to the assignment of meanings to formal languages.
In model theory, a structure A is said to interpret a structure B if there is a definable subset D of A, and definable relations and functions on D, such that B is isomorphic to the structure with domain D and these functions and relations. In some settings, it is not the domain D that is used, but rather D modulo an equivalence relation definable in A. For additional information, see Interpretation (model theory).
A theory T is said to interpret another theory S if there is a finite extension by definitions T′ of T such that S is contained in T′.
See also
Conceptual model
Free variables and Name binding
Formal semantics (natural language)
Herbrand interpretation
Interpretation (model theory)
Logical system
Löwenheim–Skolem theorem
Modal logic
Model theory
Satisfiable
Truth
Notes
References
External links
Stanford Enc. Phil: Classical Logic, 4. Semantics
Formal languages
Interpretation (philosophy)
Model theory
Philosophy of language
Semantics | Interpretation (logic) | [
"Mathematics"
] | 4,930 | [
"Formal languages",
"Mathematical logic",
"Model theory"
] |
14,511,776 | https://en.wikipedia.org/wiki/Kawasaki%27s%20theorem | Kawasaki's theorem or Kawasaki–Justin theorem is a theorem in the mathematics of paper folding that describes the crease patterns with a single vertex that may be folded to form a flat figure. It states that the pattern is flat-foldable if and only if alternatingly adding and subtracting the angles of consecutive folds around the vertex gives an alternating sum of zero.
Crease patterns with more than one vertex do not obey such a simple criterion, and are NP-hard to fold.
The theorem is named after one of its discoverers, Toshikazu Kawasaki. However, several others also contributed to its discovery, and it is sometimes called the Kawasaki–Justin theorem or Husimi's theorem after other contributors, Jacques Justin and Kôdi Husimi.
Statement
A one-vertex crease pattern consists of a set of rays or creases drawn on a flat sheet of paper, all emanating from the same point interior to the sheet. (This point is called the vertex of the pattern.) Each crease must be folded, but the pattern does not specify whether the folds should be mountain folds or valley folds. The goal is to determine whether it is possible to fold the paper so that every crease is folded, no folds occur elsewhere, and the whole folded sheet of paper lies flat.
To fold flat, the number of creases must be even. This follows, for instance, from Maekawa's theorem, which states that the number of mountain folds at a flat-folded vertex differs from the number of valley folds by exactly two folds. Therefore, suppose that a crease pattern consists of an even number 2n of creases, and let $\alpha_1, \alpha_2, \ldots, \alpha_{2n}$ be the consecutive angles between the creases around the vertex, in clockwise order, starting at any one of the angles. Then Kawasaki's theorem states that the crease pattern may be folded flat if and only if the alternating sum and difference of the angles adds to zero:
$$\alpha_1 - \alpha_2 + \alpha_3 - \cdots + \alpha_{2n-1} - \alpha_{2n} = 0$$
An equivalent way of stating the same condition is that, if the angles are partitioned into two alternating subsets, then the sum of the angles in either of the two subsets is exactly 180 degrees. However, this equivalent form applies only to a crease pattern on a flat piece of paper, whereas the alternating sum form of the condition remains valid for crease patterns on conical sheets of paper with nonzero defect at the vertex.
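The condition is straightforward to check computationally. The sketch below tests both equivalent forms for a given list of consecutive angles; the angles in the usage example are illustrative.

```python
# Check Kawasaki's condition for a single-vertex crease pattern,
# given the consecutive angles (in degrees) around the vertex.
def kawasaki_flat_foldable(angles, tol=1e-9):
    if len(angles) % 2 != 0:           # flat folding needs evenly many creases
        return False
    alternating_sum = sum(a if i % 2 == 0 else -a for i, a in enumerate(angles))
    return abs(alternating_sum) < tol

# Equivalent form on a flat sheet: each alternating subset sums to 180°.
def kawasaki_equivalent(angles, tol=1e-9):
    return (abs(sum(angles[0::2]) - 180.0) < tol and
            abs(sum(angles[1::2]) - 180.0) < tol)

angles = [100, 80, 80, 100]            # illustrative single-vertex pattern
print(kawasaki_flat_foldable(angles))  # True: 100 − 80 + 80 − 100 = 0
print(kawasaki_equivalent(angles))     # True: 100 + 80 = 80 + 100 = 180
```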
Local and global flat-foldability
Kawasaki's theorem, applied to each of the vertices of an arbitrary crease pattern, determines whether the crease pattern is locally flat-foldable, meaning that the part of the crease pattern near the vertex can be flat-folded. However, there exist crease patterns that are locally flat-foldable but that have no global flat folding that works for the whole crease pattern at once. Hull conjectured that global flat-foldability could be tested by checking Kawasaki's theorem at each vertex of a crease pattern, and then also testing bipartiteness of an undirected graph associated with the crease pattern. However, this conjecture was disproven by Bern and Hayes, who showed that Hull's conditions are not sufficient. More strongly, Bern and Hayes showed that the problem of testing global flat-foldability is NP-complete.
Proof
To show that Kawasaki's condition necessarily holds for any flat-folded figure, it suffices to observe that, at each fold, the orientation of the paper is reversed. Thus, if the first crease in the flat-folded figure is placed in the plane parallel to the x-axis, the next crease must be rotated from it by an angle of $\alpha_1$, the crease after that by an angle of $\alpha_1 - \alpha_2$ (because the second angle has the reverse orientation from the first), etc. In order for the paper to meet back up with itself at the final angle, Kawasaki's condition must be met.
Showing that the condition is also a sufficient condition is a matter of describing how to fold a given crease pattern so that it folds flat. That is, one must show how to choose whether to make mountain or valley folds, and in what order the flaps of paper should be arranged on top of each other. One way to do this is to choose a number i such that the partial alternating sum
$$\alpha_1 - \alpha_2 + \alpha_3 - \cdots - \alpha_{2i}$$
is as small as possible. Either i = 0 and the partial sum is an empty sum that is also zero, or for some nonzero choice of i the partial sum is negative. Then, accordion fold the pattern, starting with angle $\alpha_{2i+1}$ and alternating between mountain and valley folds, placing each angular wedge of the paper below the previous folds. At each step until the final fold, an accordion fold of this type will never self-intersect. The choice of i ensures that the first wedge sticks out to the left of all the other folded pieces of paper, allowing the final wedge to connect back up to it.
An alternative proof of sufficiency can be used to show that there are many different flat foldings. Consider the smallest angle and the two creases on either side of it. Mountain-fold one of these two creases and valley-fold the other, choosing arbitrarily which fold to use for which crease. Then, glue the resulting flap of paper onto the remaining part of the crease pattern. The result of this gluing will be a crease pattern with two fewer creases, on a conical sheet of paper, that still satisfies Kawasaki's condition. Therefore, by mathematical induction, repeating this process will eventually lead to a flat folding. The base case of the induction is a cone with only two creases and two equal-angle wedges, which can obviously be flat-folded by using a mountain fold for both creases. There are two ways to choose which folds to use in each step of this method, and each step eliminates two creases. Therefore, any crease pattern with 2n creases that satisfies Kawasaki's condition has at least 2^(n−1) different choices of mountain and valley folds that all lead to valid flat foldings.
History
In the late 1970s, Kôdi Husimi and David A. Huffman independently observed that flat-folded figures with four creases have opposite angles adding to 180°, a special case of Kawasaki's theorem. Huffman included the result in a 1976 paper on curved creases, and
Husimi published the four-crease theorem in a book on origami geometry with his wife Mitsue Husimi.
The same result was published even earlier, in a pair of papers from 1966 by S. Murata that also included the six-crease case and the general case of Maekawa's theorem.
The fact that crease patterns with arbitrarily many creases necessarily have alternating sums of angles adding to zero was discovered by Kawasaki, by Stuart Robertson, and by Jacques Justin (again, independently of each other) in the late 1970s and early 1980s.
Because of Justin's contribution to the problem, Kawasaki's theorem has also been called the Kawasaki–Justin theorem.
The fact that this condition is sufficient—that is, that crease patterns with evenly many angles, alternatingly summing to zero, can always be flat-folded—may have been first stated by Hull.
Kawasaki himself has called the result Husimi's theorem, after Kôdi Husimi, and some other authors have followed this terminology as well. The name "Kawasaki's theorem" was first given to this result in Origami for the Connoisseur by Kunihiko Kasahara and Toshie Takahama (Japan Publications, 1987).
Hull credits the lower bound of 2^(n−1) on the number of different flat-foldings of a crease pattern meeting the conditions of the theorem to independent work in the early 1990s by Azuma, Justin, and Ewins and Hull.
Although Kawasaki's theorem completely describes the folding patterns that have flat-folded states, it does not describe the folding process needed to reach that state. For some (multi-vertex) folding patterns, it is necessary to curve or bend the paper while transforming it from a flat sheet to its flat-folded state, rather than keeping the rest of the paper flat and only changing the dihedral angles at each fold. For rigid origami (a type of folding that keeps the surface flat except at its folds, suitable for hinged panels of rigid material rather than flexible paper), the condition of Kawasaki's theorem turns out to be sufficient for a single-vertex crease pattern to move from an unfolded state to a flat-folded state.
References
External links
Mathematical theorems
Paper folding | Kawasaki's theorem | [
"Mathematics"
] | 1,750 | [
"Recreational mathematics",
"nan",
"Mathematical problems",
"Mathematical theorems",
"Paper folding"
] |
14,511,835 | https://en.wikipedia.org/wiki/Trap-lining | In ethology and behavioral ecology, trap-lining or traplining is a feeding strategy in which an individual visits food sources on a regular, repeatable sequence, much as trappers check their lines of traps. Traplining is usually seen in species foraging for floral resources. This involves a specified route that the individual traverses in the same order repeatedly to check specific plants for flowers that hold nectar, even over long distances. Trap-lining has been described in several taxa, including bees, butterflies, tamarins, bats, rats, and hummingbirds, and tropical fruit-eating mammals such as opossums, capuchins and kinkajous. The term also describes the method by which bumblebees and hummingbirds collect nectar and, consequently, pollinate each plant they visit. The term "traplining" was originally coined by Daniel Janzen, although the concept was discussed by Charles Darwin and Nikolaas Tinbergen.
Behavioral response
In the instance of hummingbirds and bumblebees, traplining is an evolutionary response to the allocation of resources between species. Specifically, individual hummingbirds form their own specific routes in order to minimize competition and maximize nutrient availability. Some hummingbird species are territorial (e.g., the rufous hummingbird, Selasphorus rufus) and defend a specific territory, while others are trapliners (e.g., the long-billed hermit, Phaethornis longirostris) and constantly check different locations for food. Because of this, territorial hummingbirds will be more robust, while traplining hummingbirds have adaptations such as longer wings for more efficient flying. Traplining hummingbirds will move from source to source, obtaining nectar from each. Over time, one hummingbird will be the primary visitor to a particular source.
In the case of bumblebees, when competitors are removed, there is an influx to the removal area and less time is spent traplining over long distances. This demonstrates the ability to behaviorally adapt based on surrounding competition. In addition, bumblebees use traplining to distinguish between high nectar-producing flowers and low nectar-producing flowers by consistently recognizing and visiting those that produce higher levels. Other types of bees, such as euglossine bees (e.g., Euglossa imperialis), use traplining to forage efficiently by flying rapidly from one precise flowering plant to the next in a set circuit, even ignoring newly blooming plants which are adjacent to, but outside of, the daily route. By doing so, these euglossine bees significantly reduce the amount of time and energy spent searching for nectar each day. In general, it is seen that traplining species have higher nutritional rewards than non-traplining species.
Energy conservation
Traplining hummingbirds are known to be active proportionally to nectar production in flowers, decreasing throughout the day. Therefore, traplining hummingbirds can spend less time foraging, and obtain their energy intake from a few number of flowers. Spending less time searching for food means less energy spent flying and searching. Traplining bumblebees prioritize their routes based on travel distance and reward quantity. It is seen that the total distance of the trapline is related to the abundance of the reward (nectar) in the environment.
Spatial cognition and memory
Traplining can also be an indication of the levels of spatial cognition of species that use the technique. For example, traplining in bumblebees is an indication that bumblebees have spatial reference memory, or spatial memory, that is used to create specific routes in short term foraging. The ability to remember specific routes long-term cuts down foraging and flying time, consequently conserving energy. This theory has been tested, showing that bumblebees can remember the shortest route to the reward, even when the original path has been changed or obstructed. Additionally, bees cut down the amount of time spent revisiting sites with little or no nutritive reward. Bees with access to only short-term memory forage inefficiently.
Advantages
One of the main advantages of traplining is that the route can be taught to other members of the population quickly or over a period of hours, leading all members to a reliable food source. When the group works together on finding a particular source of food they can quickly establish where it is and get the route information transferred to all the individuals in the population. This ensures that the entire community is able to quickly find and consume the nutrients that are needed.
Traplining helps foragers that are competing for resources that replenish in a decelerating way. For example, nectar in a plant is slowly replaced over time, while acorns only occur once a year. Traplining can help plant diversity and evolution by keeping pollen with different genetics flowing from plant to plant. It is mostly pollinators that use traplining as a way to ensure they always know where the food sources they are looking for are. This means that organisms like bumblebees and hummingbirds can transfer pollen anywhere from the starting point of the route to the final food source along the path. Since the path is always the same, it greatly reduces the risk of self-pollination (iterogamy) because the pollinator won't return to the same flower on that particular foraging session.
Overall, plant species that are visited by trapliners have increased fitness and evolutionary advantages. Because of this mutualistic relationship between traplining hummingbirds and plants, traplining hummingbirds have been referred to as "legitimate pollinators", while territorial hummingbirds have been referred to as "nectar thieves". If an organism that traplines learns where a food source is once, they can always return to that food source because they can remember minute details about the location of the source. This allows them to adapt quickly if one of the major sources suddenly becomes scarce or destroyed.
Disadvantages
Serious obstacles, such as the arrangement of plant life, can hamper traplining. If the route zig zags through the understory of the tropical rainforest, some of the organisms using the route can get lost because of very subtle changes, such as a treefall gap or heavy rainfall. This could cause an individual to be separated from the entire group if it isn't able to find the path back to the original route. Some food sources can be overlooked because the traplining route in use does not lead the organisms to the area that these resources are in.
Since the route is very specific, the organisms following it may also miss out on opportunities to come in contact with potential mates. Male bumblebees going directly to the source of food have been observed to pass up on female bumblebees as potential mates that are along the same path, preferring to continue foraging and bring food back to the hive. This can take away from species diversification and could possibly delete some traits in the gene pool that are useful.
Research
Observing traplining in the natural world has proven very difficult, and little is known about how and why species trapline, but field studies do take place. In one study, individual bees trained on five artificial flowers of equal reward were observed traplining between those five flowers. When a new flower of higher reward was added to the array, the bees subsequently adjusted their traplines to include it. The researchers hypothesized that under natural conditions it would likely be beneficial for bees to prioritize higher-reward flowers, either to beat out competition or to conserve energy.
In other field experiments, ecologists created a "competition vacuum" to observe whether bumblebees adjusted their feeding routes in response to intense direct competition with other bumblebees. This study showed that bees in areas of higher competition are more productive than control bees: bumblebees opportunistically adjust their use of traplining routes in response to the activity of competing bees. Another effective way to study the behavior of traplining species is via computer simulation and indoor flight cage experiments. Simulation models can be built to show the linkage between pollinator movement and pollen flow, by considering how visits from pollinators with different foraging patterns affect the flow of pollen.
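The kind of movement-to-pollen-flow model mentioned above can be sketched minimally in Python. The geometric carryover rule and the two visit sequences below are illustrative assumptions, not parameters from any published model; the sketch only shows how a repeated circuit moves a larger fraction of pollen between different plants than a route that revisits flowers sooner.

```python
# Toy model linking forager movement to pollen flow (illustrative only).
# Assumption: pollen picked up at a flower is deposited on flowers visited
# later, with geometrically decaying carryover (CARRYOVER ** k after k visits).

CARRYOVER = 0.5  # assumed carryover fraction per subsequent visit

def pollen_flow(route):
    """Map (donor, recipient) plant pairs to the amount of pollen moved."""
    flow = {}
    for i, donor in enumerate(route):
        for k, recipient in enumerate(route[i + 1:], start=1):
            pair = (donor, recipient)
            flow[pair] = flow.get(pair, 0.0) + CARRYOVER ** k
    return flow

def outcrossing_fraction(route):
    """Fraction of moved pollen that lands on a *different* plant."""
    flow = pollen_flow(route)
    total = sum(flow.values())
    crossed = sum(v for (donor, recipient), v in flow.items() if donor != recipient)
    return crossed / total

trapline = ["A", "B", "C", "D"] * 3                 # fixed repeating circuit
wanderer = ["A", "A", "B", "A", "C", "C", "B", "D"]  # revisits flowers sooner

print("trapline outcrossing:", round(outcrossing_fraction(trapline), 3))
print("wanderer outcrossing:", round(outcrossing_fraction(wanderer), 3))
```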
Indoor flight cage experiments allow easier discrimination between test subjects and easier observation of behavior and patterns. Bees in small study environments seem to demonstrate weaker traplining tendencies than bees studied in environments stretching over several hectares. A larger working area increases the need for traplining to conserve energy and maximize nutrient intake, suggesting that bees most often trapline strictly in response to travel distance. The bees remember these complex flight paths by breaking them into small segments using vectors, landmarks, and other environmental cues, each one pointing to the next destination.
Despite a long history of research on bee learning and navigation, most knowledge has been deduced from the behavior of foragers traveling between their nest and a single feeding location. Only recently have studies of bumblebees foraging in arrays of artificial flowers fitted with automated tracking systems started to describe the learning mechanisms behind complex route formation between multiple locations. The demonstration that all these observations can be accurately replicated by a single learning heuristic model holds considerable promise for further investigating these questions and filling a major gap in cognitive ecology.
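The published learning heuristic is not reproduced here, but the following Python sketch shows one minimal trial-and-error rule in the same spirit: repeat the best-known route, occasionally try a swapped variant, and keep the variant only if it is shorter. The flower positions, swap rule, and trial count are assumptions chosen for illustration.

```python
import itertools
import math
import random

flowers = [(0, 0), (2, 1), (4, 0), (3, 3), (1, 4)]  # hypothetical positions
nest = (0, 0)

def route_length(order):
    """Total distance of a nest -> flowers -> nest circuit."""
    stops = [nest] + [flowers[i] for i in order] + [nest]
    return sum(math.dist(a, b) for a, b in zip(stops, stops[1:]))

# Heuristic: mostly repeat the current best route, occasionally try a
# random two-flower swap, and keep the variant only when it is shorter.
best = list(range(len(flowers)))
random.shuffle(best)
for trial in range(500):
    candidate = best[:]
    i, j = random.sample(range(len(candidate)), 2)
    candidate[i], candidate[j] = candidate[j], candidate[i]
    if route_length(candidate) < route_length(best):
        best = candidate

print("learned route:", best, "length:", round(route_length(best), 2))
print("optimal length:", round(min(route_length(list(p))
      for p in itertools.permutations(range(len(flowers)))), 2))
```

With only five flowers the brute-force optimum (120 permutations) is cheap to compute, so the sketch can verify that the simple reinforcement rule converges to, or near, the shortest circuit.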
See also
Optimal foraging theory
References
Eating behaviors
Bird behavior | Trap-lining | [
"Biology"
] | 1,897 | [
"Behavior by type of animal",
"Behavior",
"Biological interactions",
"Eating behaviors",
"Bird behavior"
] |
14,512,959 | https://en.wikipedia.org/wiki/Beyond%20Bias%20and%20Barriers | Beyond Bias and Barriers: Fulfilling the Potential of Women in Academic Science and Engineering is a major report about the status of women in science from the United States National Academy of Sciences. Published in 2006, the report closely examines the data, proposed explanations, and possible responses to the relative dearth of women in science and engineering higher education in the United States.
History
The report was written by the "Committee on Maximizing the Potential of Women in Academic Science and Engineering", a panel at the National Academy of Sciences. The Committee was chaired by Donna Shalala, and included college presidents, provosts, professors, scientists, and policy analysts. Committee members included: Alice M. Agogino, Lotte Bailyn, Robert J. Birgeneau, Ana Mari Cauce, Catherine D. DeAngelis, Denice Denton (who committed suicide before the release of the report), Barbara Grosz, Jo Handelsman, Nan Keohane, Shirley Malcom, Geraldine Richmond, Alice M. Rivlin, Ruth Simmons, Elizabeth Spelke, Joan Steitz, Elaine Weyuker, and Maria T. Zuber.
As is typical with NAS reports, after the Committee drafted the report, it underwent a peer review process within the NAS, in this case reviewed by another Committee of nineteen members.
Conclusions
The report first notes and documents significant gender gaps throughout the academic pipeline in the sciences, finding that the numbers of women in the sciences decrease "at every educational transition" from high school through fully tenured faculty positions. For instance, over the past 30 years women have earned more than 30% of the doctorates in the social and behavioral sciences and more than 20% in the life sciences; but they hold only 15% of the full professorships in those fields. Minority women are "all but absent from professorships".
The report then goes on to review ideas about the sources of the gender gaps, ultimately finding that the problem is "unconscious but pervasive bias", "arbitrary and subjective" evaluation processes, and a historic system which bases childrearing and family responsibilities on the concept of a professional spouse with a stay-at-home "wife". Specifically the report found significant evidence of bias: women are paid less, promoted more slowly, receive fewer honors, and hold fewer leadership positions. Although progress has been made in some areas—women are nearly at gender parity with men in entering graduate school in biology; when women are considered for initial promotion to associate professor they succeed at the same rates as men—there are still significant gaps.
The report found that widespread ideas about women's and men's differences were largely irrelevant, including theories advanced such as cognitive abilities or preferences, career aspirations and ambition, or productivity and work ethic issues.
Finally, the report reviews a number of potential solutions and makes a variety of recommendations to level the playing field and "stop the leaks" in the leaky pipeline. These steps include:
alteration of academic procedures for hiring and evaluation ("promotion and tenure"), at the institutional level;
additional support for working parents at the institutional level;
efforts across the field to monitor hiring practices;
efforts to institute blind-review in peer review processes to eliminate gender bias;
and other efforts.
Reception and influence
The report was widely well-received, received significant media coverage, and sparked a number of institutional-level meetings in the months after its release. In general its even-handed and data-rich approach was lauded, although John Tierney, a New York Times opinion columnist, suggested that the report must have been biased because its committee was largely made up of women. In response, Donna Shalala denied that the gender of the scientists biased their scientific findings, and pointed out that while the committee itself was largely made up of women, the committee's work was peer-reviewed by a National Academy of Sciences committee of 19 that included 10 men.
A number of educational institutions held meetings or established committees to implement recommendations from the report, including Harvard University, M.I.T., University of Texas, Iowa State University, University of Wisconsin–Madison, Boston University, Stanford University, and the National Science Foundation.
The American Council on Education (ACE), a higher education umbrella organization, took the recommendation from the report to monitor hiring practices, and agreed to convene its member organizations to examine ways to do so. The American Association for the Advancement of Science (AAAS) also convened a meeting at its annual conference.
Critics have pointed out that in comparing the representation of women in the hiring pool (recent PhD recipients) with that among recent hires (assistant professors), approximations were used, as mentioned in the notes on page 17 of the report; these overestimated both the representation and the utilization of women. One approximation was taking the representation of women among professors from the Survey of Earned Doctorates; as noted in the report, this ignores all professors who received their PhDs abroad. Another is that data from the Survey of Earned Doctorates represent samples of those surveyed rather than complete populations. A third approximation is introduced by combining disciplines that show opposite trends in the utilization of their hiring pools, such as chemistry and chemical engineering. More accurate comparisons are available from the Nelson Diversity Surveys (Table 11), which include all professors regardless of national origin, report populations instead of samples, and treat the disciplines separately.
See also
Association for Women in Science (AWIS)
Women in science
Women in medicine
United States National Academy of Sciences
References
National Academy of Sciences, Beyond Bias and Barriers: Fulfilling the Potential of Women in Academic Science and Engineering (National Academies Press, 2006)
Laurel Haak (National Research Council) and Alice Agogino (Committee member), "Beyond Bias and Barriers: Press for the National Academies' Report", last updated Feb. 2, 2007 (available at Agogino's website, last visited Dec. 1, 2007).
Ana Mari Cauce, "Women in Science: Looking Beyond Bias and Barriers," Editorial, The Seattle Times, (Dec. 19, 2006).
Doug Lederman, "The Real Barriers for Women in Science", Inside Higher Ed, (Sept. 19, 2006).
Cornelia Dean, "Institutions Hinder Female Academics, Panel Says", New York Times, (Sept. 18, 2006).
National Academy of Sciences, "Broad National Effort Urgently Needed to Maximize Potential of Women Scientists and Engineers in Academia", Press Release, (Sept. 18, 2006).
Tabitha M. Powledge, "Beyond Bias and Barriers", Science Careers, Science, (Oct. 13, 2006).
Lonnie Shekhtman, "Experts Meet at AAAS to Evaluate Ways to Recruit and Retain Women in S&T", AAAS News Release (Oct. 30, 2006).
Maxine Singer, "Beyond Bias and Barriers", Editorial, Science, v.314, n. 5801, p. 893 (Nov. 10, 2006).
John Tierney, "Academy of P.C. Sciences", Editorial, New York Times (Sept. 26, 2006).
Footnotes
Further research
Powerpoint presentation used for NAS press conference
External links
Institute for Women in Trades, Technology and Science (IWITTS)
Women and science
Science books
Gender studies books
Gender studies literature
2006 non-fiction books | Beyond Bias and Barriers | [
"Technology"
] | 1,509 | [
"Women and science",
"Women in science and technology"
] |
3,016,399 | https://en.wikipedia.org/wiki/List%20of%20recombinant%20proteins | The following is a list of notable proteins that are produced from recombinant DNA, using biomolecular engineering. In many cases, recombinant human proteins have replaced the original animal-derived version used in medicine. The prefix "rh" for "recombinant human" appears less and less in the literature. A much larger number of recombinant proteins is used in the research laboratory. These include both commercially available proteins (for example most of the enzymes used in the molecular biology laboratory), and those that are generated in the course of specific research projects.
Human recombinants that largely replaced animal or harvested from human types
Medicinal applications
Human growth hormone (rHGH): Humatrope from Lilly and Serostim from Serono replaced cadaver harvested human growth hormone
human insulin (BHI): Humulin from Lilly and Novolin from Novo Nordisk among others largely replaced bovine and porcine insulin for human therapy. Some prefer to continue using the animal-sourced preparations, as there is some evidence that synthetic insulin varieties are more likely to induce hypoglycemia unawareness. Remaining manufacturers of highly purified animal-sourced insulin include the U.K.'s Wockhardt Ltd. (headquartered in India), Argentina's Laboratorios Beta S.A., and China's Wanbang Biopharma Co.
Follicle-stimulating hormone (FSH) as a recombinant gonadotropin preparation replaced Serono's Pergonal which was previously isolated from post-menopausal female urine
Factor VIII: Kogenate from Bayer replaced blood harvested factor VIII
Research applications
Ribosomal proteins: For the studies of individual ribosomal proteins, the use of proteins that are produced and purified from recombinant sources has largely replaced those that are obtained through isolation. However, isolation is still required for the studies of the whole ribosome.
Lysosomal proteins: Lysosomal proteins are difficult to produce recombinantly due to the number and type of post-translational modifications that they have (e.g. glycosylation). As a result, recombinant lysosomal proteins are usually produced in mammalian cells. Plant cell culture was used to produce FDA-approved glycosylated lysosomal protein-drug, and additional drug candidates. Recent studies have shown that it may be possible to produce recombinant lysosomal proteins with microorganisms such as Escherichia coli and Saccharomyces cerevisiae. Recombinant lysosomal proteins are used for both research and medical applications, such as enzyme replacement therapy.
Human recombinants with recombination as only source
Medicinal applications
Erythropoietin (EPO): Epogen from Amgen
Granulocyte colony-stimulating factor (G-CSF): filgrastim sold as Neupogen from Amgen; pegfilgrastim sold as Neulasta
alpha-galactosidase A: Fabrazyme by Genzyme
alpha-L-iduronidase: (rhIDU; laronidase) Aldurazyme by BioMarin Pharmaceutical and Genzyme
N-acetylgalactosamine-4-sulfatase (rhASB; galsulfase): Naglazyme by BioMarin Pharmaceutical
Dornase alfa, a DNase sold under the trade name Pulmozyme by Genentech
Tissue plasminogen activator (TPA) Activase by Genentech
Glucocerebrosidase: Ceredase by Genzyme
Interferon (IFN): interferon beta-1a sold as Avonex from Biogen Idec and as Rebif from Serono; interferon beta-1b sold as Betaseron from Schering. It is being investigated for the treatment of diseases including Guillain-Barré syndrome and multiple sclerosis.
Insulin-like growth factor 1 (IGF-1)
Rasburicase, a Urate Oxidase analog sold as Elitek from Sanofi
Animal recombinants
Medicinal applications
Bovine somatotropin (bST)
Porcine somatotropin (pST)
Bovine Chymosin
Bacterial recombinants
Industrial applications
Xylanases
Proteases, which have found applications in both the industrial (such as the food industry) and domestic settings.
Viral recombinants
Medicinal applications
Envelope protein of the hepatitis B virus marketed as Engerix-B by SmithKline Beecham
HPV Vaccine proteins
Plant recombinants
Research applications
Polyphenol oxidases (PPOs): These include both catechol oxidases and tyrosinases. In addition to research, PPOs have also found applications as biocatalysts.
Cystatins are proteins that inhibit cysteine proteases. Research is ongoing to evaluate the potential of using cystatins in crop protection to control herbivorous pests and pathogens.
Industrial applications
Laccases have found a wide range of application, from food additive and beverage processing to biomedical diagnosis, and as cross‐linking agents for furniture construction or in the production of biofuels.
The tyrosinase‐induced polymerization of peptides offers facile access to artificial mussel foot protein analogues. Next generation universal glues can be envisioned that perform effectively even under rigorous seawater conditions and adapt to a broad range of difficult surfaces.
See also
Protein production
Gene expression
Protein purification
Host cell protein
References
External links
Laboratorios Beta S.A website
CP Pharma/Wockhardt UK website
Biotechnology
Biotechnology products | List of recombinant proteins | [
"Biology"
] | 1,182 | [
"nan",
"Recombinant proteins",
"Biotechnology products",
"Biotechnology"
] |
3,017,092 | https://en.wikipedia.org/wiki/Spin-1/2 | In quantum mechanics, spin is an intrinsic property of all elementary particles. All known fermions, the particles that constitute ordinary matter, have a spin of 1/2. The spin number describes how many symmetrical facets a particle has in one full rotation; a spin of 1/2 means that the particle must be rotated by two full turns (through 720°) before it has the same configuration as when it started.
Particles having net spin 1/2 include the proton, neutron, electron, neutrino, and quarks. The dynamics of spin-1/2 objects cannot be accurately described using classical physics; they are among the simplest systems which require quantum mechanics to describe them. As such, the study of the behavior of spin-1/2 systems forms a central part of quantum mechanics.
Stern–Gerlach experiment
The necessity of introducing half-integer spin goes back experimentally to the results of the Stern–Gerlach experiment. A beam of atoms is run through a strong heterogeneous magnetic field, which then splits into N parts depending on the intrinsic angular momentum of the atoms. It was found that for silver atoms, the beam was split in two—the ground state therefore could not be an integer, because even if the intrinsic angular momentum of the atoms were the smallest (non-zero) integer possible, 1, the beam would be split into 3 parts, corresponding to atoms with Lz = −1, +1, and 0, with 0 simply being the value known to come between −1 and +1 while also being a whole-integer itself, and thus a valid quantized spin number in this case. The existence of this hypothetical "extra step" between the two polarized quantum states would necessitate a third quantum state; a third beam, which is not observed in the experiment. The conclusion was that silver atoms had net intrinsic angular momentum of 1/2.
General properties
Spin-1/2 objects are all fermions (a fact explained by the spin–statistics theorem) and satisfy the Pauli exclusion principle. Spin-1/2 particles can have a permanent magnetic moment along the direction of their spin, and this magnetic moment gives rise to electromagnetic interactions that depend on the spin. One such effect that was important in the discovery of spin is the Zeeman effect, the splitting of a spectral line into several components in the presence of a static magnetic field.
Unlike in more complicated quantum mechanical systems, the spin of a spin-1/2 particle can be expressed as a linear combination of just two eigenstates, or eigenspinors. These are traditionally labeled spin up and spin down. Because of this, the quantum-mechanical spin operators can be represented as simple 2 × 2 matrices. These matrices are called the Pauli matrices.
Creation and annihilation operators can be constructed for spin-1/2 objects; these obey the same commutation relations as other angular momentum operators.
Connection to the uncertainty principle
One consequence of the generalized uncertainty principle is that the spin projection operators (which measure the spin along a given direction like x, y, or z) cannot be measured simultaneously. Physically, this means that the axis about which a particle is spinning is ill-defined. A measurement of the z-component of spin destroys any information about the x- and y-components that might previously have been obtained.
Mathematical description
A spin-1/2 particle is characterized by an angular momentum quantum number for spin s of 1/2. In solutions of the Schrödinger equation, angular momentum is quantized according to this number, so that the total spin angular momentum is
$S = \sqrt{s(s+1)}\,\hbar = \tfrac{\sqrt{3}}{2}\hbar.$
However, the observed fine structure when the electron is observed along one axis, such as the z-axis, is quantized in terms of a magnetic quantum number, which can be viewed as a quantization of a vector component of this total angular momentum, which can have only the values of $\pm\hbar/2$.
Note that these values for angular momentum are functions only of the reduced Planck constant (the angular momentum of any photon), with no dependence on mass or charge.
Complex phase
Mathematically, quantum mechanical spin is not described by a vector as in classical angular momentum. It is described by a complex-valued vector with two components called a spinor. There are subtle differences between the behavior of spinors and vectors under coordinate rotations, stemming from the behavior of a vector space over a complex field.
When a spinor is rotated by 360° (one full turn), it transforms to its negative, and then after a further rotation of 360° it transforms back to its initial value again. This is because in quantum theory the state of a particle or system is represented by a complex probability amplitude (wavefunction) ψ, and when the system is measured, the probability of finding the system in the state ψ equals , the absolute square (square of the absolute value) of the amplitude. In mathematical terms, the quantum Hilbert space carries a projective representation of the rotation group SO(3).
Suppose a detector that can be rotated measures a particle in which the probabilities of detecting some state are affected by the rotation of the detector. When the system is rotated through 360°, the observed output and physics are the same as initially, but the amplitudes are changed for a spin-1/2 particle by a factor of −1, or a phase shift of half of 360°. When the probabilities are calculated, the −1 is squared, $(-1)^2 = 1$, so the predicted physics is the same as in the starting position. Also, in a spin-1/2 particle there are only two spin states and the amplitudes for both change by the same −1 factor, so the interference effects are identical, unlike the case for higher spins. The complex probability amplitudes are something of a theoretical construct which cannot be directly observed.
If the probability amplitudes rotated by the same amount as the detector, then they would have changed by a factor of −1 when the equipment was rotated by 180°, which, when squared, would predict the same output as at the start, but experiments show this to be wrong. If the detector is rotated by 180°, the result with spin-1/2 particles can be different from what it would be if not rotated; hence the factor of a half is necessary to make the predictions of the theory match the experiments.
In terms of more direct evidence, physical effects of the difference between the rotation of a spin-1/2 particle by 360° as compared with 720° have been experimentally observed in classic experiments in neutron interferometry. In particular, if a beam of spin-oriented spin-1/2 particles is split, and just one of the beams is rotated about the axis of its direction of motion and then recombined with the original beam, different interference effects are observed depending on the angle of rotation. In the case of rotation by 360°, cancellation effects are observed, whereas in the case of rotation by 720°, the beams are mutually reinforcing.
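This sign change is easy to check numerically. The following NumPy sketch (an illustration, not tied to any particular experiment) applies the spin-1/2 rotation operator about z, $\exp(-i\theta\sigma_z/2)$, and shows that a 360° rotation multiplies the spinor by −1 while 720° returns it to +1:

```python
import numpy as np

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def rotation(theta):
    """Spin-1/2 rotation operator about z: exp(-i * theta * sigma_z / 2)."""
    return np.diag(np.exp(-1j * theta * np.diag(sigma_z) / 2))

for degrees in (360, 720):
    theta = np.deg2rad(degrees)
    print(degrees, "deg ->")
    print(np.round(rotation(theta), 3))
# 360 deg gives minus the identity (the spinor picks up a sign);
# 720 deg gives plus the identity (the spinor returns to itself).
```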
Non-relativistic quantum mechanics
The quantum state of a spin-1/2 particle can be described by a two-component complex-valued vector called a spinor. Observable states of the particle are then found by the spin operators Sx, Sy, and Sz, and the total spin operator S.
Observables
When spinors are used to describe the quantum states, the three spin operators (Sx, Sy, Sz) can be described by 2 × 2 matrices called the Pauli matrices whose eigenvalues are $\pm\hbar/2$.
For example, the spin projection operator Sz affects a measurement of the spin in the z direction.
The two eigenvalues of Sz, $\pm\hbar/2$, then correspond to the following eigenspinors:
$\chi_+ = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad \chi_- = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.$
These vectors form a complete basis for the Hilbert space describing the spin-1/2 particle. Thus, linear combinations of these two states can represent all possible states of the spin, including in the x- and y-directions.
The ladder operators are:
$S_+ = \hbar \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad S_- = \hbar \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.$
Since $S_\pm = S_x \pm i S_y$, it follows that $S_x = \tfrac{1}{2}(S_+ + S_-)$ and $S_y = \tfrac{1}{2i}(S_+ - S_-)$. Thus:
$S_x = \tfrac{\hbar}{2} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad S_y = \tfrac{\hbar}{2} \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}.$
Their normalized eigenspinors can be found in the usual way. For $S_x$, they are:
$\chi_+^{(x)} = \tfrac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad \chi_-^{(x)} = \tfrac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix}.$
For $S_y$, they are:
$\chi_+^{(y)} = \tfrac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ i \end{pmatrix}, \qquad \chi_-^{(y)} = \tfrac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -i \end{pmatrix}.$
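These matrices and eigenspinors are easy to verify numerically. The following NumPy sketch (illustrative only, working in units where ħ = 1) checks that each projection operator has eigenvalues ±ħ/2 and that the angular momentum commutation relation [Sx, Sy] = iħSz holds:

```python
import numpy as np

hbar = 1.0  # work in natural units
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)

# Each spin projection operator has eigenvalues +hbar/2 and -hbar/2.
for name, S in [("Sx", Sx), ("Sy", Sy), ("Sz", Sz)]:
    print(name, "eigenvalues:", np.round(np.linalg.eigvalsh(S), 3))

# Angular momentum commutation relation: [Sx, Sy] = i * hbar * Sz.
commutator = Sx @ Sy - Sy @ Sx
assert np.allclose(commutator, 1j * hbar * Sz)
print("[Sx, Sy] == i*hbar*Sz verified")
```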
Relativistic quantum mechanics
While non-relativistic quantum mechanics describes spin using a two-dimensional Hilbert space, with dynamics described in three-dimensional space and time, relativistic quantum mechanics describes spin using a four-dimensional Hilbert space, with dynamics described in four-dimensional space-time.
Observables
As a consequence of the four-dimensional nature of space-time in relativity, relativistic quantum mechanics uses 4×4 matrices to describe spin operators and observables.
History
When physicist Paul Dirac tried to modify the Schrödinger equation so that it was consistent with Einstein's theory of relativity, he found it was only possible by including matrices in the resulting Dirac equation, implying the wave must have multiple components leading to spin.
The 4π spinor rotation was experimentally verified using neutron interferometry in 1974, by Helmut Rauch and collaborators, after being suggested by Yakir Aharonov and Leonard Susskind in 1967.
See also
Projective representation
Notes
Further reading
External links
Rotation in three dimensions
Quantum models | Spin-1/2 | [
"Physics"
] | 1,825 | [
"Quantum models",
"Quantum mechanics"
] |
3,017,141 | https://en.wikipedia.org/wiki/Opera%20glasses | Opera glasses, also known as theater binoculars or Galilean binoculars, are compact, low-power optical magnification devices, usually used at performance events, whose name is derived from traditional use of binoculars at opera performances. Magnification power below 5× is usually desired in these circumstances in order to minimize image shake and maintain a large enough field of view. A magnification of 3× is normally recommended. The design of many modern opera glasses of the ornamental variety is based on the popular lorgnettes of the 19th century. Often, modern theatre binoculars are equipped with an LED flashlight, which makes it easier to find a place in the dark.
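The low magnification follows directly from the Galilean design (a converging objective paired with a diverging eyepiece), whose angular magnification is the ratio of the two focal lengths. The Python sketch below uses hypothetical focal lengths, chosen only to illustrate the commonly recommended 3×:

```python
# Galilean telescope magnification: M = f_objective / |f_eyepiece|.
# The focal lengths below are hypothetical, chosen to give the 3x
# magnification typically recommended for opera glasses.

f_objective_mm = 105.0   # converging objective lens (assumed)
f_eyepiece_mm = -35.0    # diverging eyepiece lens (assumed)

magnification = f_objective_mm / abs(f_eyepiece_mm)
# Lenses sit f_objective + f_eyepiece apart; the negative eyepiece focal
# length shortens the tube, which is why opera glasses are so compact.
tube_length_mm = f_objective_mm + f_eyepiece_mm
print(f"magnification: {magnification:.1f}x, tube length: {tube_length_mm:.0f} mm")
```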
In addition to the more stereotypical binocular type, folding opera glasses were another common design. They were made mostly of metal and glass, with a leatherette cover for grip and color. Although folding glasses have existed in one form or another since the 1890s, they were perhaps most popular in the mid-20th century and many from this era are marked "Made in Japan" or, less commonly, "Made in Occupied Japan". The design can still be purchased new, although the most common contemporary designs are now almost entirely plastic.
See also
London Opera Glass Company
Monocular
Spotting scope
Opera
Opera hat
Opera cloak
Opera gloves
References
External links
Chambers's Encyclopaedia: A Dictionary of Universal Knowledge, Volume 7
The Opticalia Museum Opera Glasses
Opera glasses vs binoculars
The Encyclopedia Americana, Volume 11
The History of Opera Glasses
Binoculars
Glasses
Theatre | Opera glasses | [
"Astronomy"
] | 306 | [
"Binoculars",
"Astronomical instruments"
] |
3,017,382 | https://en.wikipedia.org/wiki/Projective%20object | In category theory, the notion of a projective object generalizes the notion of a projective module. Projective objects in abelian categories are used in homological algebra. The dual notion of a projective object is that of an injective object.
Definition
An object $P$ in a category $\mathcal{C}$ is projective if for any epimorphism $e \colon E \twoheadrightarrow X$ and morphism $f \colon P \to X$, there is a morphism $\overline{f} \colon P \to E$ such that $e \circ \overline{f} = f$, i.e. the following diagram commutes:
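The diagram itself did not survive extraction; the following LaTeX snippet (assuming \usepackage{tikz-cd}) is a reconstruction of the standard lifting diagram:

```latex
% Lifting diagram for a projective object P: for every epimorphism e and
% every morphism f there exists a lift \bar{f} making the triangle commute.
\begin{tikzcd}
  & P \arrow[d, "f"] \arrow[dl, dashed, "\bar{f}"'] \\
E \arrow[r, two heads, "e"'] & X
\end{tikzcd}
```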
That is, every morphism $P \to X$ factors through every epimorphism $E \twoheadrightarrow X$.
If C is locally small, i.e., in particular $\operatorname{Hom}_C(P, X)$ is a set for any object X in C, this definition is equivalent to the condition that the hom functor (also known as corepresentable functor)
$\operatorname{Hom}_C(P, -) \colon C \to \mathbf{Set}$
preserves epimorphisms.
Projective objects in abelian categories
If the category C is an abelian category such as, for example, the category of abelian groups, then P is projective if and only if
$\operatorname{Hom}_C(P, -) \colon C \to \mathbf{Ab}$
is an exact functor, where Ab is the category of abelian groups.
An abelian category $\mathcal{A}$ is said to have enough projectives if, for every object $A$ of $\mathcal{A}$, there is a projective object $P$ of $\mathcal{A}$ and an epimorphism from $P$ to $A$ or, equivalently, a short exact sequence
$0 \to K \to P \to A \to 0.$
The purpose of this definition is to ensure that any object $A$ admits a projective resolution, i.e., a (long) exact sequence
$\cdots \to P_2 \to P_1 \to P_0 \to A \to 0,$
where the objects $P_0, P_1, P_2, \ldots$ are projective.
Projectivity with respect to restricted classes
The notion of projective (and dually injective) objects has also been studied relative to a so-called bicategory, which consists of a pair of subcategories of "injections" and "surjections" in the given category C. These subcategories are subject to certain formal properties, including the requirement that any surjection is an epimorphism. A projective object (relative to the fixed class of surjections) is then an object P such that Hom(P, −) turns the fixed class of surjections (as opposed to all epimorphisms) into surjections of sets (in the usual sense).
Properties
The coproduct of two projective objects is projective.
The retract of a projective object is projective.
Examples
The statement that all sets are projective is equivalent to the axiom of choice.
The projective objects in the category of abelian groups are the free abelian groups.
Let $R$ be a ring with identity. Consider the (abelian) category $R$-Mod of left $R$-modules. The projective objects in $R$-Mod are precisely the projective left $R$-modules. Consequently, $R$ is itself a projective object in $R$-Mod. Dually, the injective objects in $R$-Mod are exactly the injective left $R$-modules.
The category of left (right) $R$-modules also has enough projectives. This is true since, for every left (right) $R$-module $M$, we can take $F$ to be the free (and hence projective) $R$-module generated by a generating set $X$ for $M$ (for example we can take $X$ to be $M$ itself). Then the canonical projection $\pi \colon F \to M$ is the required surjection.
The projective objects in the category of compact Hausdorff spaces are precisely the extremally disconnected spaces. This result is due to Gleason (1958), with a simplified proof given by Rainwater (1959).
In the category of Banach spaces and contractions (i.e., linear maps whose norm is at most 1), the epimorphisms are precisely the maps with dense image. It can be shown that the zero space is the only projective object in this category. There are non-trivial spaces, though, which are projective with respect to the class of surjective contractions. In the category of normed vector spaces with contractions (and surjective maps as "surjections"), the projective objects are precisely the $\ell^1(S)$ spaces.
References
External links
Homological algebra
Objects (category theory) | Projective object | [
"Mathematics"
] | 788 | [
"Mathematical structures",
"Objects (category theory)",
"Fields of abstract algebra",
"Category theory",
"Homological algebra"
] |
3,017,426 | https://en.wikipedia.org/wiki/Leray%27s%20theorem | In algebraic topology and algebraic geometry, Leray's theorem (so named after Jean Leray) relates abstract sheaf cohomology with Čech cohomology.
Let $\mathcal{F}$ be a sheaf on a topological space $X$ and $\mathcal{U}$ an open cover of $X$. If $\mathcal{F}$ is acyclic on every finite intersection of elements of $\mathcal{U}$ (meaning that $H^{k}(U_{i_0} \cap \cdots \cap U_{i_p}, \mathcal{F}) = 0$ for all $k > 0$ and all finite subsets $\{i_0, \ldots, i_p\}$), then
$\check{H}^{n}(\mathcal{U}, \mathcal{F}) = H^{n}(X, \mathcal{F}),$
where $\check{H}^{n}(\mathcal{U}, \mathcal{F})$ is the $n$-th Čech cohomology group of $\mathcal{F}$ with respect to the open cover $\mathcal{U}$.
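A standard worked example, added here for illustration (it is not part of the original article): take the constant sheaf $\underline{\mathbb{Z}}$ on the circle $S^1$ and cover $S^1$ by two open arcs.

```latex
% Cover S^1 by two contractible arcs U, V whose intersection has two
% contractible components. Every piece is acyclic for the constant sheaf
% \underline{\mathbb{Z}}, so Leray's theorem says the Cech complex of this
% cover computes the sheaf cohomology of S^1:
\[
0 \to \underline{\mathbb{Z}}(U) \oplus \underline{\mathbb{Z}}(V)
  \xrightarrow{\;d\;} \underline{\mathbb{Z}}(U \cap V) \to 0,
\qquad d(a, b) = b|_{U \cap V} - a|_{U \cap V}.
\]
% Since U \cap V has two components, d is the map
% \mathbb{Z}^2 \to \mathbb{Z}^2, (a, b) \mapsto (b - a, b - a), giving
% \check{H}^0 = \ker d \cong \mathbb{Z} and
% \check{H}^1 = \operatorname{coker} d \cong \mathbb{Z},
% in agreement with H^0(S^1, \mathbb{Z}) = H^1(S^1, \mathbb{Z}) = \mathbb{Z}.
```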
References
Bonavero, Laurent. Cohomology of Line Bundles on Toric Varieties, Vanishing Theorems. Lectures 16-17 from "Summer School 2000: Geometry of Toric Varieties."
Sheaf theory
Theorems in algebraic geometry
Theorems in algebraic topology | Leray's theorem | [
"Mathematics"
] | 142 | [
"Theorems in algebraic geometry",
"Mathematical structures",
"Category theory stubs",
"Theorems in topology",
"Sheaf theory",
"Topology",
"Category theory",
"Theorems in geometry",
"Theorems in algebraic topology"
] |
3,017,438 | https://en.wikipedia.org/wiki/De%20Rham%E2%80%93Weil%20theorem | In algebraic topology, the De Rham–Weil theorem allows computation of sheaf cohomology using an acyclic resolution of the sheaf in question.
Let $\mathcal{F}$ be a sheaf on a topological space $X$ and let $\mathcal{F} \to \mathcal{A}^{\bullet}$ be a resolution of $\mathcal{F}$ by acyclic sheaves. Then
$H^{q}(X, \mathcal{F}) \cong H^{q}(\mathcal{A}^{\bullet}(X)),$
where $H^{q}(X, \mathcal{F})$ denotes the $q$-th sheaf cohomology group of $X$ with coefficients in $\mathcal{F}$.
The De Rham–Weil theorem follows from the more general fact that derived functors may be computed using acyclic resolutions instead of simply injective resolutions.
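The eponymous special case, stated here as an illustration: on a smooth manifold $M$, the Poincaré lemma makes the de Rham complex of sheaves of differential forms a resolution of the constant sheaf $\underline{\mathbb{R}}$, and the sheaves $\Omega^{k}_{M}$ are acyclic (they are fine sheaves), so the theorem recovers de Rham's theorem.

```latex
% The de Rham resolution of the constant sheaf on a smooth manifold M:
\[
0 \to \underline{\mathbb{R}} \to \Omega^{0}_{M} \xrightarrow{d} \Omega^{1}_{M}
  \xrightarrow{d} \Omega^{2}_{M} \xrightarrow{d} \cdots
\]
% Each \Omega^{k}_{M} is acyclic, so the De Rham--Weil theorem yields
\[
H^{q}(M, \underline{\mathbb{R}}) \cong
\frac{\ker\bigl(d \colon \Omega^{q}(M) \to \Omega^{q+1}(M)\bigr)}
     {\operatorname{im}\bigl(d \colon \Omega^{q-1}(M) \to \Omega^{q}(M)\bigr)}
  = H^{q}_{\mathrm{dR}}(M).
\]
```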
See also
de Rham theorem
References
Homological algebra
Sheaf theory | De Rham–Weil theorem | [
"Mathematics"
] | 121 | [
"Theorems in algebraic geometry",
"Mathematical structures",
"Sheaf theory",
"Topology",
"Category theory",
"Fields of abstract algebra",
"Theorems in geometry",
"Homological algebra"
] |
3,017,748 | https://en.wikipedia.org/wiki/Types%20of%20fiction%20with%20multiple%20endings | A narrative typically ends in one set way, but certain kinds of narrative allow for multiple endings.
Comics
The Death-Ray by Daniel Clowes
Cliff Hanger by Jack Edward Oliver
Literature
The Choose Your Own Adventure series
Fighting Fantasy
Life's Lottery
The French Lieutenant's Woman
Theater
Ayn Rand's 1934 play Night of January 16th allowed the audience to affect the ending by acting as the "jury" and voting the defendant "innocent" or "guilty".
The 1985 musical The Mystery of Edwin Drood
Dario Fo's 1970 play, Accidental Death of an Anarchist
The long-running play Shear Madness has multiple, audience-selected endings.
Films
DVDs and Blu-ray discs may include an alternate ending as a special feature. These are usually not considered canon.
Films which include multiple endings within the main cut of the film:
Clue
Wayne's World and its sequel, Wayne's World 2
Scarface
Sliding Doors
Run Lola Run
Harikrishnans
The Messiah, which includes one account of Jesus' crucifixion according to Christian teaching and one according to Muslim teaching.
28 Days Later
Unfriended: Dark Web
Black Mirror: Bandersnatch
1408
Television
Crown Court
Do the Right Thing (BBC TV series, 1994-1995)
Animation
Dragon's Lair and Space Ace
The fifth season finale of the Rooster Teeth web-series, Red vs. Blue
Video games
Video games, as an interactive medium, allow for a wide variety of outcomes. Especially nonlinear video games such as visual novels, role-playing games and interactive dramas often feature multiple endings. Multiple endings may increase a game's replay value, encourage customization or deviate from the story in the form of easter eggs. As such, these video games often, but not always, feature one or multiple "true" or "good endings" which are canonized either by the developer or player base as well as "false" or "bad endings".
Role-playing video games
Multiple endings can be an integral part of many visual novels and role-playing games as these genres often emphasize customization and individuality.
Examples of role-playing games that feature multiple endings:
Chrono Trigger, which was cited as revolutionary for including multiple endings when it was released in 1995.
Baldur's Gate 3, which features four distinct major endings. Larian Studios announced that there were 17,000 possible endings ahead of the game's release.
Cyberpunk 2077, which features four canon endings, with an additional one introduced with the downloadable content Phantom Liberty. There is one more "false ending", in which the protagonist commits suicide ahead of the finale, and another secret ending that only becomes available after waiting five minutes before choosing a dialogue option.
Mass Effect 3's endings were a cause for controversy. Players felt their character choices were inconsequential and criticized the game's endings for their lack of closure and inconsistencies.
The Dragon Age series includes a variety of impactful choices the player can make throughout the games with typically one major one at the end. In Dragon Age II and Dragon Age: Inquisition, players are able to import their save files from previous games to alter the games' world building, flavor text for multiple characters and events, and appearances from recurring characters, allowing for a player-specific canon. For the fourth installment in the series, Dragon Age: The Veilguard, developer BioWare changed this system so that players could now reflect the previous games' different endings directly in the game's character creator.
Choice-driven video games
Multiple endings are a common feature in "choice-driven" games, in which decisions made by the player serve as the main gameplay loop. These games are usually adventure or storytelling games whose ending, or sometimes even entire story, changes depending on the player's active choices (such as dialogue options) or passive choices (such as behavior tracked by a moral system).
Examples of choice-driven games that feature multiple endings:
Life Is Strange, which includes two canon endings. Players can choose either to save the protagonist's home town from a tornado while sacrificing her implied love interest or vice versa.
Life Is Strange 2, which, unlike its predecessor, also takes previous choices the player has made into consideration in the form of a "moral system" for its seven endings.
Detroit: Become Human, Heavy Rain, Beyond: Two Souls and many more games created by Quantic Dream feature multiple endings depending on the player's choices.
Until Dawn, The Quarry and The Casting of Frank Stone all feature multiple endings for each of their large casts of protagonists, a common feature of Supermassive Games titles.
Telltale Games is known for featuring a variety of different endings in its story-based games, such as its The Walking Dead and Game of Thrones video game adaptations.
Multiple endings as a gameplay mechanic
Some video games revolve their entire story around the concept of multiple endings and utilize them as a gameplay mechanic. This is done either chronologically, whereby the player experiences a game's ending multiple times but from different points of view, or through "knowledge-gating", in which all endings are achievable from the start but have to be deduced through trial and error or by experiencing the game's other endings (a minimal sketch of such a branching structure follows the list below).
Examples of video games that feature endings as a gameplay mechanic:
The Stanley Parable, which features eighteen different endings.
The Talos Principle and its sequel.
Nier and Nier Automata, which require the player to finish the game multiple times before reaching their "true endings". Each subsequent playthrough unlocks new content in the form of cutscenes or playable characters.
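To make the underlying structure concrete, here is a minimal, hypothetical Python sketch of a branching narrative represented as a directed graph, with endings as leaf nodes; the scenes, choices, and flags are invented for illustration and do not come from any of the games above.

```python
# Minimal branching-narrative engine: nodes are scenes, edges are choices,
# endings are leaf nodes. All names here are invented for illustration.

story = {
    "intro": {"trust": "ally", "betray": "alone"},
    "ally":  {"fight": "ending_victory", "flee": "ending_exile"},
    "alone": {"fight": "ending_defeat", "flee": "ending_exile"},
}
endings = {"ending_victory", "ending_exile", "ending_defeat"}

def play(choices):
    """Follow a sequence of choices from 'intro'; return the ending reached."""
    node = "intro"
    for choice in choices:
        node = story[node][choice]
    assert node in endings, "choice sequence did not reach an ending"
    return node

def all_endings(node="intro"):
    """Enumerate every reachable ending (useful for knowledge-gated designs)."""
    if node in endings:
        return {node}
    reached = set()
    for nxt in story[node].values():
        reached |= all_endings(nxt)
    return reached

print(play(["trust", "fight"]))   # -> ending_victory
print(sorted(all_endings()))      # all three endings are reachable
```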
See also
Alternate ending
Interactive fiction
Visual novel
References
Plot (narrative)
Video game design
Endings | Types of fiction with multiple endings | [
"Physics"
] | 1,157 | [
"Spacetime",
"Endings",
"Physical quantities",
"Time"
] |
3,017,822 | https://en.wikipedia.org/wiki/Nibble%20%28magazine%29 | Nibble was a magazine for Apple II personal computer enthusiasts published from 1980 until 1992. The name means "half a byte", or four bits; the alternative spelling "nybble", riffing on "byte", is also used. Most of the articles incorporated the source code of a small to medium-sized utility, application program, or game (each written specifically for the magazine) and a detailed description of how it worked. The headquarters was in Lincoln, Massachusetts.
History
The magazine was first published in January 1980 by Mike Harvey. Originally published eight times per year, by 1984 the magazine had attained a popularity that allowed it to become a monthly publication. It was published for more than twelve years; the July 1992 issue was the last.
The magazine also published checksum tables that, with utilities available from the magazine, helped pinpoint the location of any errors in a reader's own typed-in copy. The programs were also available on disk for a small fee for those who did not want to spend the time to type them in.
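Nibble's published checksums were generated by its own Apple II utilities, whose exact algorithm is not reproduced here. The Python sketch below shows the general idea with a simple hypothetical per-line checksum: a reader compares each printed value against one computed from the typed-in listing, so a mismatch localizes a typo to a single line.

```python
# Hypothetical per-line checksum in the spirit of magazine type-in checkers
# (not Nibble's actual algorithm). A mismatched value pinpoints the line
# where a typing error occurred.

def line_checksum(line):
    """Simple rolling 8-bit checksum over a line's characters."""
    total = 0
    for ch in line:
        total = ((total << 1) | (total >> 7)) & 0xFF  # rotate left by 1 bit
        total = (total + ord(ch)) & 0xFF
    return total

program = [
    '10 HOME',
    '20 PRINT "HELLO"',
    '30 GOTO 20',
]
for number, line in enumerate(program, start=1):
    print(f"{number:3d}: {line_checksum(line):02X}  {line}")
```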
A technical highlight of Nibble was a regular column called Disassembly Lines, in which Dr. Sanford Mossberg presented assembly listings he had reverse-engineered from interesting parts of Applesoft BASIC and the Apple DOS to illustrate how they worked. Later Mossberg turned his attention to the Apple IIGS and dissected its Toolbox and operating system as well.
Omnibus editions of the best articles from each year's issues, dubbed Nibble Express, were published annually. The magazine also published other books that repurposed magazine material on various topics, such as games, personal finance programs, and "Apple secrets." Mossberg's Disassembly Lines columns were also collected in four volumes.
Harvey's publishing company, MicroSPARC (later MindCraft), published a number of Apple II programming utilities including an assembler and a BASIC-like set of macros for it. These were sold by mail-order from ads in the magazine. (Trivia: the company changed its name after Sun Microsystems bought the name MicroSPARC for a new line of processors.)
A spinoff Macintosh publication, Nibble Mac, was first a section in Nibble and then was published separately. Like the original, Nibble Mac focused on hobbyist programming, notably HyperCard.
Most of the Nibble material, including Nibble Mac, is now available again from the publisher through his Web site.
References
External links
Official site
Monthly magazines published in the United States
Apple II periodicals
Defunct computer magazines published in the United States
Home computer magazines
Magazines established in 1980
Magazines disestablished in 1992
Magazines published in Massachusetts
Eight times annually magazines published in the United States | Nibble (magazine) | [
"Technology"
] | 555 | [
"Computing stubs",
"Computer magazine stubs"
] |
3,017,886 | https://en.wikipedia.org/wiki/Motor%20protein | Motor proteins are a class of molecular motors that can move along the cytoskeleton of cells. They convert chemical energy into mechanical work by the hydrolysis of ATP. Flagellar rotation, however, is powered by a proton pump.
Cellular functions
Motor proteins are the driving force behind most active transport of proteins and vesicles in the cytoplasm. Kinesins and cytoplasmic dyneins play essential roles in intracellular transport such as axonal transport and in the formation of the spindle apparatus and the separation of the chromosomes during mitosis and meiosis. Axonemal dynein, found in cilia and flagella, is crucial to cell motility, for example in spermatozoa, and fluid transport, for example in trachea. The muscle protein myosin "motors" the contraction of muscle fibers in animals.
Diseases associated with motor protein defects
The importance of motor proteins in cells becomes evident when they fail to fulfill their function. For example, kinesin deficiencies have been identified as the cause for Charcot-Marie-Tooth disease and some kidney diseases. Dynein deficiencies can lead to chronic infections of the respiratory tract as cilia fail to function without dynein. Numerous myosin deficiencies are related to disease states and genetic syndromes. Because myosin II is essential for muscle contraction, defects in muscular myosin predictably cause myopathies. Myosin is necessary in the process of hearing because of its role in the growth of stereocilia so defects in myosin protein structure can lead to Usher syndrome and non-syndromic deafness.
Cytoskeletal motor proteins
Motor proteins utilizing the cytoskeleton for movement fall into two categories based on their substrate: microfilaments or microtubules. Actin motors such as myosin move along microfilaments through interaction with actin, and microtubule motors such as dynein and kinesin move along microtubules through interaction with tubulin.
There are two basic types of microtubule motors: plus-end motors and minus-end motors, depending on the direction in which they "walk" along the microtubule cables within the cell.
Actin motors
Myosin
Myosins are a superfamily of actin motor proteins that convert chemical energy in the form of ATP to mechanical energy, thus generating force and movement. The first identified myosin, myosin II, is responsible for generating muscle contraction. Myosin II is an elongated protein that is formed from two heavy chains with motor heads and two light chains. Each myosin head contains an actin-binding site and an ATP-binding site. The myosin heads bind and hydrolyze ATP, which provides the energy to walk toward the plus end of an actin filament. Myosin II is also vital in the process of cell division. For example, non-muscle myosin II bipolar thick filaments provide the force of contraction needed to divide the cell into two daughter cells during cytokinesis. In addition to myosin II, many other myosin types are responsible for a variety of movements in non-muscle cells. For example, myosin is involved in intracellular organization and the protrusion of actin-rich structures at the cell surface. Myosin V is involved in vesicle and organelle transport. Myosin XI is involved in cytoplasmic streaming, wherein movement along microfilament networks in the cell allows organelles and cytoplasm to stream in a particular direction. Eighteen different classes of myosins are known.
Genomic representation of myosin motors:
Fungi (yeast): 5
Plants (Arabidopsis): 17
Insects (Drosophila): 13
Mammals (human): 40
Chromadorea (nematode C. elegans): 15
Microtubule motors
Kinesin
Kinesins are a superfamily of related motor proteins that use a microtubule track in anterograde movement. They are vital to spindle formation in mitotic and meiotic chromosome separation during cell division and are also responsible for shuttling mitochondria, Golgi bodies, and vesicles within eukaryotic cells. Kinesins have two heavy chains and two light chains per active motor. The two globular head motor domains in heavy chains can convert the chemical energy of ATP hydrolysis into mechanical work to move along microtubules. The direction in which cargo is transported can be towards the plus-end or the minus-end, depending on the type of kinesin. In general, kinesins with N-terminal motor domains move their cargo towards the plus ends of microtubules located at the cell periphery, while kinesins with C-terminal motor domains move cargo towards the minus ends of microtubules located at the nucleus. Fourteen distinct kinesin families are known, with some additional kinesin-like proteins that cannot be classified into these families.
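As a toy illustration of this kind of processive stepping, the Python sketch below simulates a kinesin run as a biased walk along the microtubule lattice. The parameters are rough textbook-scale values assumed for illustration only (≈8 nm steps matching the tubulin dimer spacing, about one ATP hydrolyzed per step, occasional back-steps), not measurements from any particular study.

```python
import random

# Toy model of a processive kinesin run (illustrative parameters only).
STEP_NM = 8.0            # tubulin dimer spacing, ~8 nm per step
P_FORWARD = 0.95         # assumed probability a step is toward the plus end
STEPS_PER_SECOND = 100   # rough order of magnitude at saturating ATP

def run(duration_s, seed=0):
    """Simulate one run; return (net displacement in nm, ATP consumed)."""
    rng = random.Random(seed)
    position_nm = 0.0
    atp_used = 0
    for _ in range(int(duration_s * STEPS_PER_SECOND)):
        direction = 1 if rng.random() < P_FORWARD else -1
        position_nm += direction * STEP_NM  # one step per ATP hydrolyzed
        atp_used += 1
    return position_nm, atp_used

distance, atp = run(duration_s=1.0)
print(f"moved {distance:.0f} nm in 1 s using {atp} ATP")
```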
Genomic representation of kinesin motors:
Fungi (yeast): 6
Plants (Arabidopsis thaliana): 61
Insects (Drosophila melanogaster): 25
Mammals (human): 45
Dynein
Dyneins are microtubule motors capable of a retrograde sliding movement. Dynein complexes are much larger and more complex than kinesin and myosin motors. Dyneins are composed of two or three heavy chains and a large and variable number of associated light chains. Dyneins drive intracellular transport toward the minus end of microtubules which lies in the microtubule organizing center near the nucleus. The dynein family has two major branches. Axonemal dyneins facilitate the beating of cilia and flagella by rapid and efficient sliding movements of microtubules. Another branch is cytoplasmic dyneins which facilitate the transport of intracellular cargos. Compared to 15 types of axonemal dynein, only two cytoplasmic forms are known.
Genomic representation of dynein motors:
Fungi (yeast): 1
Plants (Arabidopsis thaliana): 0
Insects (Drosophila melanogaster): 13
Mammals (human): 14-15
Plant-specific motors
In contrast to animals, fungi, and non-vascular plants, the cells of flowering plants lack dynein motors. However, they contain a larger number of different kinesins. Many of these plant-specific kinesin groups are specialized for functions during plant cell mitosis. Plant cells differ from animal cells in that they have a cell wall. During mitosis, the new cell wall is built by the formation of a cell plate starting in the center of the cell. This process is facilitated by a phragmoplast, a microtubule array unique to plant cell mitosis. The building of the cell plate and ultimately of the new cell wall requires kinesin-like motor proteins.
Another motor protein essential for plant cell division is kinesin-like calmodulin-binding protein (KCBP), which is unique to plants and part kinesin and part myosin.
Other molecular motors
Besides the motor proteins above, there are many more types of proteins capable of generating forces and torque in the cell. Many of these molecular motors are ubiquitous in both prokaryotic and eukaryotic cells, although some, such as those involved with cytoskeletal elements or chromatin, are unique to eukaryotes. The motor protein prestin, expressed in mammalian cochlear outer hair cells, produces mechanical amplification in the cochlea. It is a direct voltage-to-force converter, which operates at the microsecond rate and possesses piezoelectric properties.
See also
ATP synthase
Cytoskeleton
Protein dynamics
References
External links
MBInfo - What are Motor Proteins?
Ron Vale's Seminar: "Molecular Motor Proteins"
Biology of Motor Proteins Institute for Biophysical Chemistry, Göttingen
Jonathan Howard (2001), Mechanics of motor proteins and the cytoskeleton.
Cell movement
Molecular machines | Motor protein | [
"Physics",
"Chemistry",
"Materials_science",
"Technology"
] | 1,672 | [
"Machines",
"Motor proteins",
"Molecular machines",
"Physical systems",
"Nanotechnology"
] |
3,018,110 | https://en.wikipedia.org/wiki/Desert%20pavement | A desert pavement, also called reg (in western Sahara), serir (in eastern Sahara), gibber (in Australia), or saï (in central Asia) is a desert surface covered with closely packed, interlocking angular or rounded rock fragments of pebble and cobble size. They typically top alluvial fans. Desert varnish collects on the exposed surface rocks over time.
Geologists debate the mechanics of pavement formation and their age.
Formation
Several theories have been proposed for the formation of desert pavements. A common theory suggests that they form through the gradual removal of sand, dust and other fine-grained material by the wind and intermittent rain, leaving the larger fragments behind. The larger fragments are shaken into place through the forces of rain, running water, wind, gravity, creep, thermal expansion and contraction, wetting and drying, frost heaving, animal traffic, and the Earth's constant microseismic vibrations. The removal of small particles by wind does not continue indefinitely, because once the pavement forms, it acts as a barrier to resist further erosion. The small particles collect underneath the pavement surface, forming a vesicular A soil horizon (designated "Av").
A second theory supposes that desert pavements form from the shrink/swell properties of the clay underneath the pavement; when precipitation is absorbed by clay it causes it to expand, and when it dries it cracks along planes of weakness. Over time, this geomorphic action transports small pebbles to the surface, where they stay through lack of precipitation that would otherwise destroy the pavement by transport of the clasts or excessive vegetative growth.
A newer theory of pavement formation comes from studies of places such as Cima Dome, in the Mojave Desert of California, by Stephen Wells and his coworkers. At Cima Dome, geologically recent lava flows are covered by younger soil layers, with desert pavement on top of them, made of rubble from the same lava. The soil has been built up, not blown away, yet the stones remain on top. There are no stones in the soil, not even gravel.
Researchers can determine how many years a stone has been exposed on the ground. Wells used a method based on cosmogenic helium-3, which forms by cosmic ray bombardment at the ground surface. Helium-3 is retained inside grains of olivine and pyroxene in the lava flows, building up with exposure time. The helium-3 dates show that the lava stones in the desert pavement at Cima Dome have all been at the surface the same amount of time as the solid lava flows right next to them. In a July 1995 article in Geology, he concluded that "stone pavements are born at the surface."
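The arithmetic behind such surface-exposure dating is simple in its zeroth-order form: if the cosmogenic isotope accumulates at a roughly constant production rate and is fully retained, the apparent exposure age is concentration divided by production rate. The Python sketch below uses illustrative numbers, not values from the Cima Dome study; real work also corrects for altitude, latitude, shielding, and erosion.

```python
# Zeroth-order cosmogenic exposure age: age = concentration / production rate.
# Numbers are illustrative assumptions, not data from the Cima Dome study.

he3_concentration = 1.2e7  # cosmogenic 3He atoms per gram of olivine (assumed)
production_rate = 120.0    # atoms per gram per year at the site (assumed)

exposure_age_years = he3_concentration / production_rate
print(f"apparent exposure age: {exposure_age_years:,.0f} years")  # ~100,000
```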
For the geologist, this discovery means that some desert pavements preserve a long history of dust deposition beneath them. The dust is a record of ancient climate, just as it is on the deep sea floor and in the world's ice caps.
Desert pavement surfaces are often coated with desert varnish, a dark brown, sometimes shiny coating that contains clay minerals. In the US a famous example can be found on Newspaper Rock in southeastern Utah. Desert varnish is a thin coating (patina) of clays, iron, and manganese on the surface of sun-baked boulders. Micro-organisms may also play a role in their formation. Desert varnish is also prevalent in the Mojave desert and Great Basin geomorphic province.
Local names
Stony deserts may be known by different names according to the region. Examples include:
Gibbers: Desert pavements called gibber plains, named after the pebbles or gibbers, cover extensive areas in Australia, such as parts of the Tirari-Sturt stony desert ecoregion. Gibber is also used to describe ecological communities, such as Gibber Chenopod Shrublands or Gibber Transition Shrublands.
In North Africa, a vast stony desert plain is known as reg. This is in contrast with erg, which refers to a sandy desert area.
See also
, a mechanism of surface rock formation
Notes
References
Al-Qudah, K.A. 2003. The influence of long-term landscape stability on flood hydrology and geomorphic evolution of valley floor in the northeastern Badin of Jordan. Doctoral thesis, University of Nevada, Reno.
Anderson, K.C. 1999. Processes of vesicular horizon development and desert pavement formation on basalt flows of the Cima Volcanic Field and alluvial fans of the Avawatz Mountains Piedmont, Mojave Desert, California. Doctoral thesis, University of California, Riverside.
Goudie, A.S. 2008. The history and nature of wind erosion in deserts. Annual Review of Earth and Planetary Sciences 36:97-119.
Grotzinger, et al. 2007. Understanding Earth, fifth edition. Freeman and Company. 458–460.
Haff, P.K. and Werner, B.T. 1996. Dynamical processes on desert pavements and the healing of surficial disturbance. Quaternary Research 45(1):38-46.
Meadows, D.G., Young, M.H. and McDonald, E.V. 2006. Estimating the fine soil fraction of desert pavements using ground penetrating radar. Vadose Zone Journal 5(2):720-730.
Qu Jianjun, Huang Ning, Dong Guangrong and Zhang Weimin. 2001. The role and significance of the Gobi desert pavement in controlling sand movement on the cliff top near the Dunhuang Magao Grottoes. Journal of Arid Environments 48(3):357-371.
Rieman, H.M. 1979. Deflation armor (desert pavement). The Lapidary Journal 33(7):1648-1650.
Williams, S.H. and Zimbelman, J.R. 1994. Desert pavement evolution: An example of the role of sheetflood. The Journal of Geology 102(2):243-248.
External links
Desert Processes Working Group
Aeolian landforms
Deserts
Sediments
"Biology"
] | 1,275 | [
"Deserts",
"Ecosystems"
] |