| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
15,528,840 | https://en.wikipedia.org/wiki/Geographical%20center%20of%20Sweden | The geographical center of Sweden is contested amongst at least four locations.
Flataklocken
The oldest and most famous geographical center of Sweden is Flataklocken, a spot next to Lake Munkby in Torpshammar, Medelpad at . The site was identified in 1947 after an initiative by the newspaperman Gustaf von Platen. The method used for calculating this point was that of the centre of gravity of the geometrical figure of Sweden. The calculation was made by professor Nils Friberg and Tor Andeldorf at the geography department of Stockholm University, using a cardboard cutout map of Sweden with outlying islands attached directly to the mainland. They balanced the map model on a needle and declared the balancing point the geographical center.
A delegation including Gustaf von Platen, explorer Hans Ostelius and orienteer Gösta Lagerfelt trekked through the wilderness to the site and declared it the geographical center. Later, a sign marking the significance of the spot was erected and a lookout built. The site has since become a popular tourist attraction.
Other claims
Area towns Ånge and Östersund often claim to be the geographical center of Sweden.
Other places have been claimed to be the centre of Sweden, using differing methods. The most prominent is Ytterhogdal in Hälsingland, based on the methodology of calculating the latitude of the point halfway between the northernmost (Treriksröset) and southernmost (southern tip of Skåne) points, and then taking the midpoint between the easternmost and westernmost points at that latitude.
See also
Geographical centres
Demographical center of Sweden
Sources
Ånge Municipality Flataklocken Official Website
Sveriges Mittpunkt - Flataklocken
Ytterhogdal vill bli Sveriges mittpunkt, Länstidningen 21 November 2005
Center
Sweden
Geography of Västernorrland County
Hälsingland | Geographical center of Sweden | Physics,Mathematics | 381 |
43,440,252 | https://en.wikipedia.org/wiki/Digital%20buffer | A digital buffer (or a logic buffer) is an electronic circuit element used to copy a digital input signal and isolate it from any output load. For the typical case of using voltages as logic signals, a logic buffer's input impedance is high, so it draws little current from the input circuit, to avoid disturbing its signal.
The digital buffer is important in data transmission between connected systems. Buffers are used in registers (data storage device) and buses (data transferring device). To connect to a shared bus, a tri-state digital buffer should be used, because it has a high impedance ("inactive" or "disconnected") output state (in addition to logic low and high).
Functionality
A voltage buffer amplifier transfers a voltage from a high output impedance circuit to a second circuit with low input impedance. Directly connecting a low impedance load to a power source draws a high current, according to Ohm's law, which disturbs the source. Buffer inputs are high impedance, so a buffered load effectively does not affect the source circuit; the buffer's output current is generated within the buffer itself. In this way, a buffer provides isolation between a power source and a low impedance load. The buffer does not intentionally amplify or attenuate the input signal, and so may be called a unity gain buffer.
A digital buffer is a type of voltage buffer amplifier that is only concerned about digital logic levels, and thus may be non-linear. It may also act as a level shifter, with output voltages differing from the input voltages. One case of this is an inverting buffer which translates an active-high signal to an active-low one (or vice versa).
Types
Single input voltage buffer
Inverting buffer
This buffer's output state is the opposite of the input state. If the input is high, the output is low, and vice versa. Graphically, an inverting buffer is represented by a triangle with a small circle at the output, with the circle signifying inversion. The inverter is a basic building block in digital electronics. Decoders, state machines, and other sophisticated digital devices often include inverters.
Non-inverting buffer
This kind of buffer performs no inversion and has no decision-making capability. Unlike an inverter, a single input digital buffer does not invert or otherwise alter its input signal: it reads a HIGH or LOW input and outputs the same value. The output will be HIGH if and only if the input is HIGH; in other words, Q will be HIGH if and only if A is HIGH.
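To make the input–output behavior concrete, here is a minimal sketch (purely illustrative, not a hardware description) of the two single-input buffers as Boolean functions:

```python
# Single-input buffers modelled as Boolean functions.
def non_inverting_buffer(a: bool) -> bool:
    return a          # Q is HIGH if and only if A is HIGH

def inverting_buffer(a: bool) -> bool:
    return not a      # output is the opposite of the input

for a in (False, True):
    print(a, non_inverting_buffer(a), inverting_buffer(a))
```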
Tri-state digital buffer
Unlike the single input digital buffer which has only one input, the tri-state digital buffer has two inputs: a data input and a control input. (A control input is analogous to a valve, which controls the data flow.) When the control input is active, the output value is the input value, and the buffer is not different from the single input digital buffer.
Active high tri-state digital buffer
An active-high tri-state digital buffer transmits its data input to the output only when its control input is high (logic 1). When the control input is low (logic 0), the output is high impedance (abbreviated "Hi-Z"), as if the part had been removed from the circuit.
Active low tri-state digital buffer
It is the same as the active-high tri-state buffer, except that the buffer is active when the control input is in a low state.
Inverting tri-state digital buffer
Tri-state digital buffers also have inverting varieties in which the output is the inverse of the input.
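The tri-state variants can be sketched in the same illustrative style, with `None` standing in for the high-impedance state. The `bus_value` helper below is a hypothetical model of a shared bus line, not a real hardware API:

```python
# Sketch: tri-state buffers, with None representing the Hi-Z output state.
def tri_state(data: bool, enable: bool,
              active_low: bool = False, inverting: bool = False):
    """Return the buffer output, or None when the output is Hi-Z."""
    active = (not enable) if active_low else enable
    if not active:
        return None                      # effectively disconnected
    return (not data) if inverting else data

# A shared bus line driven by several buffers: at most one may be active.
def bus_value(drivers):
    outputs = [o for o in drivers if o is not None]
    assert len(outputs) <= 1, "bus contention: more than one active driver"
    return outputs[0] if outputs else None   # floating bus

print(bus_value([tri_state(True, enable=True),
                 tri_state(False, enable=False),
                 tri_state(True, enable=True, active_low=True)]))  # -> True
```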
Application
Single input voltage buffers are used in many places for measurements including:
In strain gauge circuitry to measure deformations in structures like bridges, airplane wings and I-beams in buildings.
In temperature measurement circuitry for boilers and in high altitude aircraft in a cold environment.
In control circuits for aircraft, people movers in airports, subways and in many different production operations.
Tri-state voltage buffers are used widely to transmit onto shared buses, since a bus can only transmit one input device's data at a time. The high-impedance output state effectively temporarily disconnects that input device from the bus, since at most only one device should actively drive the bus's shared wires.
References
Electronic circuits | Digital buffer | Engineering | 949 |
2,022,267 | https://en.wikipedia.org/wiki/Antimonide | Antimonides (sometimes called stibnides or stibinides) are compounds of antimony with more electropositive elements. The antimonide ion is but the term refers also to any anionic derivative of antimony.
Antimonides are often prepared by heating the elements. The reduction of antimony by alkali metals or by other methods leads to alkali metal antimonides of various types. Known antimonides include isolated ions (in and ). Other motifs include dumbbells in , discrete antimony chains, for example, in , infinite spirals (in NaSb, RbSb), planar four-membered rings , cages in , and net shaped anions in .
Some antimonides are semiconductors, e.g. those of the boron group such as indium antimonide. Being reducing, many antimonides are decomposed by oxygen.
References
See also
Antimonide mineral
Anions
Antimonides | Antimonide | Physics,Chemistry | 200 |
437,496 | https://en.wikipedia.org/wiki/National%20Chemical%20Laboratory | The National Chemical Laboratory (NCL) is an Indian government laboratory based in Pune, in western India.
Popularly known as NCL, a constituent member of the Council of Scientific & Industrial Research (CSIR) India, it was established in 1950. Dr Ashish Lele is the Director of NCL and took charge on 1 April 2021. There are approximately 200 scientific staff working here. The interdisciplinary research center has a wide research scope and specializes in polymer science, organic chemistry, catalysis, materials chemistry, chemical engineering, biochemical sciences and process development. It houses good infrastructure for measurement science and chemical information.
National Collection of Industrial Microorganisms (NCIM) is located here and is a microbial culture repository maintaining a variety of industrially important microbial culture stock.
There are about 400 graduate students pursuing research towards a doctoral degree, and about 50 students are awarded a Ph.D. degree every year, regularly refreshing the young talent pool.
NCL publishes over 400 research papers annually in the field of chemical sciences and over 60 patents worldwide. It is a unique source of research education producing the largest number of PhDs in chemical sciences within India.
Research Groups in NCL
Physical & Materials Chemistry
Catalysis & Inorganic Chemistry
Chemical Biology & Biometric Chemistry
Chemical Engineering Science
Complex Fluidics & Polymer Engineering
Heterogeneous and Homogeneous Catalysis
Industrial Flow Modelling
Materials Chemistry
Nanomaterials Science & Technology
Organic Chemistry
Plant Tissue Culture
Polymer Chemistry & Materials
Process Design & Development
Theory & Computational Sciences
Enzymology and Microbiology
Catalytic Reactors and Separation
Polymer Science & Engineering
Biology Division
Digital Information & Resource Centre (DIRC)
Facilities
PES Modern English School
NCL has a primary and secondary school within its premises. Established in 1985 as the NCL School, it has since been renamed the 'Progressive Education Society's Modern English School'. Since 2006 it has been extended to include a junior college.
Dispensary
NCL also has a Dispensary inside its premises which provides free of charge medical assistance to NCL employees and dependants under the CGHS.
Past directors
The organization has been headed by many notable scientists since its inception.
James William McBain (1950-1952)
G. I. Finch (1952-1957)
K. Venkataraman (1957-1966)
B. D. Tilak (1966-1978)
L. K. Doraiswamy (1978-1989)
Raghunath Anant Mashelkar (1989-1995)
Paul Ratnasamy (1995-2002)
Swaminathan Sivaram (2002-2010)
Sourav Pal (2010-2015)
K. Vijayamohanan Pillai (2015-2016)
Ashwini Nangia (2016-2020)
S. Chandrasekhar (2020-2021)
Ashish Lele (2021-present)
Notable alumni
Dr Kallam Anji Reddy, founder of Dr Reddy's Laboratories.
B. D. Kulkarni, known for work on fluidized bed reactors.
References
External links
National Collection of Industrial Microorganisms
Dr. Ashish Lele takes charge of NCL
Organisations based in Pune
Research institutes in Pune
Chemical research institutes
Council of Scientific and Industrial Research
Chemical industry of India
1950 establishments in Bombay State
Government agencies established in 1950 | National Chemical Laboratory | Chemistry | 656 |
37,236,964 | https://en.wikipedia.org/wiki/Casas-Alvero%20conjecture | In mathematics, the Casas-Alvero conjecture is an open problem about polynomials which have factors in common with their derivatives, proposed by Eduardo Casas-Alvero in 2001.
Formal statement
Let f be a polynomial of degree d defined over a field K of characteristic zero. If f has a factor in common with each of its derivatives f^(i), i = 1, ..., d − 1, then the conjecture predicts that f must be a power of a linear polynomial.
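As a concrete illustration (a sketch assuming SymPy is available), a power of a linear polynomial shares a non-constant factor with every derivative, while a generic polynomial already fails the condition at the first derivative:

```python
from sympy import Poly, gcd, symbols

X = symbols('X')

def shares_factor_with_all_derivatives(f):
    """Check the Casas-Alvero hypothesis for a given polynomial in X."""
    d = Poly(f, X).degree()
    return all(gcd(f, f.diff(X, i)).has(X)   # non-constant gcd = common factor
               for i in range(1, d))

print(shares_factor_with_all_derivatives((X - 2)**5))    # True
print(shares_factor_with_all_derivatives(X**5 + X + 1))  # False
```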
Analogue in non-zero characteristic
The conjecture is false over a field of characteristic p: any inseparable polynomial f(X^p) without constant term satisfies the condition, since all derivatives are zero. Another counterexample (which is separable) is X^(p+1) − X^p.
Special cases
The conjecture is known to hold in characteristic zero for degrees of the form p^k or 2p^k, where p is prime and k is a positive integer. Similarly, it is known for degrees of the form 3p^k where p ≠ 2, for degrees of the form 4p^k where p ≠ 3, 5, 7, and for degrees of the form 5p^k where p ≠ 2, 3, 7, 11, 131, 193, 599, 3541, 8009. Similar results are available for degrees of the form 6p^k and 7p^k. It has recently been established for d = 12, making d = 20 the smallest open degree.
References
Conjectures
Unsolved problems in number theory | Casas-Alvero conjecture | Mathematics | 309 |
23,944,162 | https://en.wikipedia.org/wiki/Press%E2%80%93Schechter%20formalism | The Press–Schechter formalism is a mathematical model for predicting the number of objects (such as galaxies, galaxy clusters or dark matter halos) of a certain mass within a given volume of the Universe. It was described in an academic paper by William H. Press and Paul Schechter in 1974.
Background
In the context of cold dark matter cosmological models, perturbations on all scales are imprinted on the universe at very early times, for example by quantum fluctuations during an inflationary era. Later, as radiation redshifts away, these become mass perturbations, and they start to grow linearly. Only long after that, starting with small mass scales and advancing over time to larger mass scales, do the perturbations actually collapse to form (for example) galaxies or clusters of galaxies, in so-called hierarchical structure formation (see Physical cosmology).
Press and Schechter observed that the fraction of mass in collapsed objects more massive than some mass M is related to the fraction of volume samples in which the smoothed initial density fluctuations are above some density threshold. This yields a formula for the mass function (distribution of masses) of objects at any given time.
Result
The Press–Schechter formalism predicts that the number of objects with mass between M and M + dM is:
where n is the index of the power spectrum of the fluctuations in the early universe, P(k) ∝ k^n; ρ̄ is the mean (baryonic and dark) matter density of the universe at the time the fluctuation from which the object was formed had gravitationally collapsed; and M* is a cut-off mass below which structures will form. Its value is:
σ is the standard deviation per unit volume of the fluctuation from which the object was formed, evaluated at the time of its gravitational collapse, and R is the scale of the universe at that time. Parameters with subscript 0 are evaluated at the time of the initial creation of the fluctuations (or any later time before the gravitational collapse).
Qualitatively, the prediction is that the mass distribution is a power law for small masses, with an exponential cutoff above some characteristic mass that increases with time. Such functions had previously been noted by Schechter as observed luminosity functions, and are now known as Schechter luminosity functions. The Press–Schechter formalism provided the first quantitative model for how such functions might arise.
The case of a scale-free power spectrum, n = 0 (or, equivalently, a scalar spectral index of 1), is very close to the spectrum of the current standard cosmological model. In this case, the mass function has a simpler form. Written in mass-free units:
Assumptions and derivation sketch
The Press–Schechter formalism is derived through three key assumptions:
Matter in the Universe has perturbations following a Gaussian distribution and the variance of this distribution is scale-dependent, given by the power spectrum
Matter perturbations grow linearly with the growth function
Halos are spherical, virialized overdensities with a density above a critical density
In other words, fluctuations are small at some early cosmological time, and grow until they cross a threshold ending in gravitational collapse into a halo. These perturbations are modeled linearly, even though the eventual collapse is itself a non-linear process.
We introduce the smoothed density field given by averaged over a sphere with center and mass contained inside (i.e., is convolved with a top-hat window function). The sphere radius is of order Then if a halo exists at with mass at least
Since perturbations are Gaussian distributed with an average 0 and variance we can directly compute the probability of halos forming with masses at least as
Implicitly, and depend on redshift, so the above probability does as well. The variance given in the 1974 paper is
where is the mass standard deviation in the volume of the fluctuation.
Note that in the limit of large perturbations we expect all matter to be contained in halos, such that However, the above equation gives us the limit One can make an ad-hoc argument and say that negative perturbations do not contribute in this scheme, so that we are mistakenly leaving out half of the mass. And so, the Press–Schechter ansatz is
the fraction of matter contained in halos of mass
A fractional fluctuation δ at some cosmological time reaches gravitational collapse after the universe has expanded by a factor of 1/δ since that time. Using this, the normal distribution of the fluctuations, written in terms of these quantities, gives the Press–Schechter formula.
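A minimal numerical sketch of the resulting recipe, assuming the commonly used modern form of the mass function, a power-law variance σ(M) = (M/M*)^(−(n+3)/6) for a scale-free spectrum, and a spherical-collapse threshold δc ≈ 1.686 (normalization conventions vary between treatments; all values here are illustrative):

```python
import math

DELTA_C = 1.686          # spherical-collapse threshold (assumed value)

def sigma(M, M_star=1.0, n=0.0):
    # Assumed power-law variance for a scale-free spectrum P(k) ~ k^n:
    return (M / M_star) ** (-(n + 3.0) / 6.0)

def collapsed_fraction(M, **kw):
    """Press-Schechter ansatz, incl. the factor 2: F(>M) = erfc(nu/sqrt(2))."""
    nu = DELTA_C / sigma(M, **kw)
    return math.erfc(nu / math.sqrt(2.0))

def mass_function(M, rho_bar=1.0, M_star=1.0, n=0.0):
    """dn/dM = sqrt(2/pi) (rho_bar/M^2) nu |dln sigma/dln M| exp(-nu^2/2)."""
    nu = DELTA_C / sigma(M, M_star, n)
    dln_sigma_dln_M = -(n + 3.0) / 6.0
    return (math.sqrt(2.0 / math.pi) * rho_bar / M**2
            * nu * abs(dln_sigma_dln_M) * math.exp(-nu * nu / 2.0))

# Power law at small M, exponential cutoff above the characteristic mass M*:
for M in (0.01, 0.1, 1.0, 10.0):
    print(f"M = {M:6.2f}  dn/dM = {mass_function(M):.3e}")
```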
Generalizations
A number of generalizations of the Press–Schechter formula exist, such as the Sheth–Tormen approximation.
References
Astrophysics
Equations of astronomy
Mathematical modeling | Press–Schechter formalism | Physics,Astronomy,Mathematics | 1,002 |
16,853,466 | https://en.wikipedia.org/wiki/Social%20panic | A social panic is a state where a social or community group reacts negatively and in an extreme or irrational manner to unexpected or unforeseen changes in their expected social status quo. According to Folk Devils and Moral Panics by Stanley Cohen, the definition can be broken down to many different sections. The sections, which were identified by Erich Goode and Nachman Ben-Yehuda in 1994, include concern, hostility, consensus, disproportionality, and volatility. Concern, which is not to be mistaken with fear, is about the possible or potential threat. Hostility occurs when an outrage occurs toward the people who were a part of the problem and agencies who are accountable. These people are seen as the enemy since their behavior is viewed as a danger to society. Consensus includes a distributed agreement that an actual threat is going to take place. This is where the media and other sources come in to aid in spreading of the panic. Disproportionality compares people's reactions to the actual seriousness of the condition. Volatility is when there is no longer any more panic.
Causes
Grassroots model
This model states that social panic commonly occurs first through the people in society, at a "grassroots" level. The feeling that something meaningful is threatened is dispersed throughout everyone in society. This sense of panic not only displays itself through the people but also through areas such as the media and political groups. The media serves as a way to present the public opinion about the reality of the situation. This theory states that the media can't report concern where none originally exists. The media and politicians are merely an outlet for displaying what people are expressing. Furthermore, the media can affect the way the public sees situations.
An example of this theory is how people cause social panics over nuclear power. After the Three Mile Island accident, where there was a nuclear meltdown, people evacuated their homes even though no workers or residents living in that area were injured or killed. People in that area were aware of what was going on only because of the social panic created when people reacted to the situation. This panic was caused by the general public, not by elites or interest groups as in the models explained below.
Elite-engineered model
This posits that social panics are exaggerated or invented problems created by elites, or people who are considered to rank above others in society. These types of people produce fear among the other classes over an issue that is not actually dangerous to society. The reason for these actions is to redirect attention away from problems that impact the elite or those in power. The people considered elite could be someone who runs a company or is very rich, as they may have connections with the media and be familiar with politicians who can make proposals in their favor.
An example that illustrates this theory can be seen in the Russians, specifically the Czars, who turned the focus away from anger over poverty by spreading a Jewish conspiracy theory. This caused mobs to form and attack Jewish communities. This capacity of the elites to control direction allows them to accomplish their own goals. They want to continue to benefit from economic and political inequality.
Interest-group model
This model suggests that panics are created by people in interest groups who direct the public's focus onto actions portrayed as morally negative and a danger to society. They want the public to recognize a problem that affects them directly. Unlike in the elite-engineered model, the interest groups are the ones who create the social panic. Interest groups believe they are providing a public service, because they will benefit from what they are doing. They do this by using the media to influence public opinion. If they are successful, it will call attention to their particular interest group, gain them the trust of society and wealth, and put them ahead of opposing interest groups.
An example that demonstrates this theory is when politicians in the United States, seeking reelection, used the issue of drug abuse to cause social panic. Even though they were in office and wished to remain there, they still believed that drug use was a problem they wanted to address to the public.
The media
The media plays a crucial part in delivering social reaction. According to Stanley Cohen, there are three processes through which the media expresses this: exaggeration and distortion, prediction, and symbolization.
Exaggeration and distortion
In this process, the media can "over-report" through their choice of words. For example, the word "disturbance" can be used to describe both a noise complaint about loud music next door and a group of people acting violently by throwing rocks and setting vehicles on fire. The wording of stories can make a minor problem seem more serious than it really is. This can make people overreact to relatively minor problems and may lead them to believe that disturbances, acts of terrorism, and riots all mean the same thing.
Furthermore, the headlines used by the media might cause society to act irrationally to a story about minor issues. They can be misleading and can report information that has nothing to do with the actual story. Negative words such as "violence" can be used when there was no violence involved. The media can also point to specific characteristics that are the reason for the crime that was committed. For example, a story can discuss a murder, but the headline might focus on the hoodie the culprit was wearing. Emphasizing the hoodie will draw attention to what the person was wearing instead of the murder that took place. This causes people to become paranoid and overreact when they see someone wearing clothing that looks suspicious.
Prediction
This is where the media speculates that an incident might occur again. The media can report that an event will occur in the future, which is not always the case. People involved describe what should be done the next time it happens and what precautions should be taken. Predicting the future can cause people to constantly think about what could go wrong and lead to catastrophe. This can cause major stress and cause people to have social panics more often. However, there are certain situations where making predictions is necessary for security, such as hurricanes, earthquakes, and other natural disasters.
Symbolization
This involves stereotypes, words and images that possess symbolic powers that can cause different emotions. Symbolization can be described in three processes. It includes words such as "deviant" and, as Cohen would say, "it becomes symbolic of a certain status." By this he means that the word represents something meaningful. Then the object, which can include clothing, represents the word. Therefore, the object can also symbolize the status. Neutral words can symbolize an event or emotion. For example, people can have specific feelings connected to the word "Hiroshima" that remind them of the bombing that occurred there. Furthermore, the use of labels given to a person or word puts them in a certain group in society. Those individuals that are in that group are viewed and interpreted based on their label.
Symbolization and exaggeration and distortion are similar in the sense that they all use misleading headlines that create negative symbols. For example, images can be posed to seem more dramatic or intense than they really are. Through these procedures, it is easy to lead people to conclude that news reports and photographs always depict reality.
Reaction
After the events of 9/11, people were left in fear of a crisis occurring again. According to Robert Wuthnow in his book Be Very Afraid, people have responded aggressively, spending large amounts of money on fighting terrorism. The United States has spent billions and trillions on defense and homeland security respectively. However, the problem lies in how we react.
Since people have become more defensive, the focus needs to be on the correct way to act instead of an improper response. As mentioned earlier, predicting endless possibilities about what could happen can be just as dangerous as the threat itself. People do not believe they can defend themselves from future terrorist attacks. Individuals were constantly reminded of the concern and fear they should be experiencing by the tremendous amount of media coverage and books published after the September 11 attacks. The event caused "personal engagement" throughout the country. In Boston, people questioned others about ties they had with Osama bin Laden. These attacks were unlike any others, since people experienced them firsthand, whether on the news or in person. The natural response of Americans during this time was to take action, facing the fear of terrorism, whether by taking revenge or through urging caution.
Criticism
Angela McRobbie and Sarah Thornton claim that Stanley Cohen's work on moral panic is outdated and argue that more modern information is required. McRobbie suggests that the idea of moral panic has become so common that the media knowingly and mindfully uses it. Thornton argues that the media originally caused moral panics inadvertently; however, the media now manipulates it on its own.
Yvonne Jewkes describes the term as vague and the failure to clarify the position of the public "as media audiences or a body of opinion". She believes that social panics are not gladly received by the government, and that there is no proof that an extensive social anxiety exists surrounding them. Jewkes, as also mentioned by McRobbie, believes that moral panic is widely used by the media and that in order for it to have a "sound conceptual basis" it needs to be revised and carefully improved.
See also
Deviancy amplification spiral
Moral panic
Social anxiety
Social mania
References
Social phenomena
Mass psychogenic illness
Deviance (sociology) | Social panic | Biology | 1,926 |
78,191,277 | https://en.wikipedia.org/wiki/HIP%2094292 | HIP 94292, commonly referred to by its KIC designation KIC 9145955, is a red-giant branch star located in the northern constellation of Lyra.
Description
It has an apparent magnitude of 10.05, which makes it too faint to observe with the naked eye, but readily visible through a 35-mm aperture telescope. Gaia EDR3 parallax measurements place the star some distant, and it is receding with a heliocentric radial velocity of +17.4 km/s.
HIP 94292 is an evolved giant star with a spectral type of G8III. It is currently on the red-giant branch (RGB), undergoing the CNO cycle within a hydrogen shell surrounding an inert core made of helium. With a radius 5.6 times that of the Sun and an effective temperature just over , it radiates 18.8 times the luminosity of the Sun from its photosphere. Due to its higher mass of 1.24 , it is further evolved than the Sun despite a similar age of billion years.
The helium core has been precisely measured to have a mass of and a radius of . As expected of RGB stars, HIP 94292 exhibits solar-like oscillations.
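As a consistency check on the quoted figures, inverting the Stefan–Boltzmann relation L = 4πR²σT⁴ with the radius and luminosity above recovers the effective temperature (a sketch; standard solar reference values are assumed):

```python
import math

SIGMA_SB = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
R_SUN = 6.957e8             # solar radius, m
L_SUN = 3.828e26            # solar luminosity, W

# Values quoted above for HIP 94292:
R = 5.6 * R_SUN
L = 18.8 * L_SUN

# Invert L = 4 pi R^2 sigma T^4 for the effective temperature:
T_eff = (L / (4.0 * math.pi * R**2 * SIGMA_SB)) ** 0.25
print(f"T_eff ~ {T_eff:.0f} K")   # roughly 5,100 K
```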
See also
KIC 9970396: a similar red giant in an eclipsing binary.
References
G-type giants
Lyra
BD+45 02850
094292
KIC 9145955
J19113253+4531225 | HIP 94292 | Astronomy | 313 |
27,157,474 | https://en.wikipedia.org/wiki/Backbiting | Backbiting or tale-bearing is to slander someone in their absence — to bite them behind their back. Originally, backbiting referred to an unsporting attack from the rear in the blood sport of bearbaiting.
Causes
Backbiting may occur as a form of release after a confrontation. By insulting the opposing person, the backbiter diminishes them and, by doing so, restores their own self-esteem. A bond may also be established with the confidante if they are receptive to the hostile comment. Such gossip is common in human society as people seek to divert blame and establish their place in the dominance hierarchy. But the backbiting may be perceived as a form of delinquent behaviour due to an inferiority complex.
Religious views
In most major religions, backbiting is considered a sin. Leaders of the Baháʼí Faith condemned it as the worst of sins as it destroyed the 'life of the soul' and provoked divine wrath. In Buddhism, backbiting goes against the ideal of right speech. Saint Thomas Aquinas classified it as a mortal sin, given that, as with other mortal sins, the act attains its perfection, that is, the act is committed with full knowledge and full consent of the will. Islam considers ghibah, or backbiting, to be a major sin and the Qur'an compares it to the abhorrent act of eating the flesh of one's dead brother. Additionally, it is not permissible for one to keep quiet and listen to backbiting. In Judaism, backbiting is known as hotzaat shem ra (spreading a bad name) and is considered a severe sin.
In the 19th century, Charlotte Elizabeth wrote an account of backbiting for the moral education of children in places such as Sunday school.
Notable examples
In the Book of Numbers, the elder siblings of Moses – Miriam and Aaron – talk against him together. God is angered and punishes Miriam with leprosy.
Gordon Brown notoriously spoke of Gillian Duffy as being a "sort of bigoted woman" after conversing with her pleasantly during his 2010 election campaign. This remark was made to his staff as he was driving away but was picked up by a live microphone. This incident caused him great embarrassment and he returned to apologise, declaring that he was a "penitent sinner."
References
Harassment and bullying
Communication of falsehoods
Defamation
Religious philosophical concepts
Sin | Backbiting | Biology | 506 |
60,928,672 | https://en.wikipedia.org/wiki/Federal%20Service%20for%20Technical%20and%20Export%20Control | The Federal Service for Technical and Export Control of Russia (FSTEC of Russia / FSTEK) is a military agency of the Russian Federation, under the Russian Ministry of Defence. It licenses the export of weapons and dual-use technology items, and is also responsible for Russian military information security.
FSTEC of Russia maintains the Data Security Threats Database, Russia's national vulnerability database, and requires Western technology companies to submit source code and other trade secrets before allowing their products to be imported into Russia. FSTEC also liaises with the FSB, which controls cryptography in Russia.
In 2019, FSTEC of Russia granted Astra Linux special status regarding its use in processing Russian classified information.
References
Ministry of Defence (Russia)
Computer security in Russia
Computer security organizations | Federal Service for Technical and Export Control | Technology | 159 |
31,306,300 | https://en.wikipedia.org/wiki/Acefluranol | Acefluranol (developmental code name BX-591), also known as 2,3-bis(3,4-diacetoxy-5-fluorophenyl)pentane, is a nonsteroidal antiestrogen of the stilbestrol group that was never marketed.
See also
Bifluranol
Pentafluranol
Terfluranol
References
Abandoned drugs
Acetate esters
Antiestrogen esters
Antiestrogens
Fluoroarenes
Hormonal antineoplastic drugs | Acefluranol | Chemistry | 115 |
12,936,761 | https://en.wikipedia.org/wiki/MACPF | The Membrane Attack Complex/Perforin (MACPF) superfamily, sometimes referred to as the MACPF/CDC superfamily, is named after a domain that is common to the membrane attack complex (MAC) proteins of the complement system (C6, C7, C8α, C8β and C9) and perforin (PF). Members of this protein family are pore-forming toxins (PFTs). In eukaryotes, MACPF proteins play a role in immunity and development.
Archetypal members of the family are complement C9 and perforin, both of which function in human immunity. C9 functions by punching holes in the membranes of Gram-negative bacteria. Perforin is released by cytotoxic T cells and lyses virally infected and transformed cells. In addition, perforin permits delivery of cytotoxic proteases called granzymes that cause cell death. Deficiency of either protein can result in human disease. Structural studies reveal that MACPF domains are related to cholesterol-dependent cytolysins (CDCs), a family of pore forming toxins previously thought to only exist in bacteria.
Families
As of early 2016, there are three families belonging to the MACPF superfamily:
1.C.12 - The Thiol-activated Cholesterol-dependent Cytolysin (CDC) Family
1.C.39 - The Membrane Attack Complex/Perforin (MACPF) Family
1.C.97 - The Pleurotolysin Pore-forming (Pleurotolysin) Family
Membrane Attack Complex/Perforin (MACPF) Family
Proteins containing MACPF domains play key roles in vertebrate immunity, embryonic development, and neural-cell migration. The ninth component of complement and perforin form oligomeric pores that lyse bacteria and kill virus-infected cells, respectively. The crystal structure of a bacterial MACPF protein, Plu-MACPF from Photorhabdus luminescens was determined (). The MACPF domain is structurally similar to pore-forming cholesterol-dependent cytolysins from gram-positive bacteria, suggesting that MACPF proteins create pores and disrupt cell membranes similar to cytolysin. A representative list of proteins belonging to the MACPF family can be found in the Transporter Classification Database.
Biological roles of MACPF domain containing proteins
Many proteins belonging to the MACPF superfamily play key roles in plant and animal immunity.
Complement proteins C6-C9 all contain a MACPF domain and assemble into the membrane attack complex. C6, C7 and C8β appear to be non-lytic and function as scaffold proteins within the MAC. In contrast both C8α and C9 are capable of lysing cells. The final stage of MAC formation involves polymerisation of C9 into a large pore that punches a hole in the outer membrane of gram-negative bacteria.
Perforin is stored in granules within cytotoxic T-cells and is responsible for killing virally infected and transformed cells. Perforin functions via two distinct mechanisms. Firstly, like C9, high concentrations of perforin can form pores that lyse cells. Secondly, perforin permits delivery of the cytotoxic granzymes A and B into target cells. Once delivered, granzymes are able to induce apoptosis and cause target cell death.
The plant protein CAD1 (TC# 1.C.39.11.3) functions in the plant immune response to bacterial infection.
The sea anemone Actineria villosa uses a MACPF protein (AvTX-60A; TC# 1.C.39.10.1) as a lethal toxin.
MACPF proteins are also important for the invasion of the Malarial parasite into the mosquito host and the liver.
Not all MACPF proteins function in defence or attack. For example, astrotactin-1 (TC# 9.B.87.3.1) is involved in neural cell migration in mammals and apextrin (TC# 1.C.39.7.4) is involved in sea urchin (Heliocidaris erythrogramma) development. Drosophila Torso-like protein (TC# 1.C.39.15.1), which controls embryonic patterning, also contains a MACPF domain. Its function is implicated in a receptor tyrosine kinase signaling pathway that specifies differentiation and terminal cell fate.
Functionally uncharacterised MACPF proteins are sporadically distributed in bacteria. Several species of Chlamydia contain MACPF proteins. The insect pathogenic bacteria Photorhabdus luminescens also contains a MACPF protein, however, this molecule appears non-lytic.
Structure and mechanism
The X-ray crystal structure of Plu-MACPF, a protein from the insect pathogenic enterobacteria Photorhabdus luminescens, has been determined (figure 1). These data reveal that the MACPF domain is homologous to pore-forming cholesterol-dependent cytolysins (CDCs) from gram-positive pathogenic bacteria such as Clostridium perfringens (which causes gas gangrene). The amino acid sequence identity between the two families is extremely low, and the relationship is not detectable using conventional sequence-based data mining techniques.
It is suggested that MACPF proteins and CDCs form pores in the same way (figure 1). Specifically, it is hypothesised that MACPF proteins oligomerise to form a large circular pore (figure 2). A concerted conformational change within each monomer then results in two α-helical regions unwinding to form four amphipathic β-strands that span the membrane of the target cell. Like CDCs, MACPF proteins are thus β-pore-forming toxins that act like a molecular hole punch.
Other crystal structures for members of the MACPF superfamily can be found in the RCSB Protein Data Bank.
Control of MACPF proteins
Complement regulatory proteins such as CD59 function as MAC inhibitors and prevent inappropriate activity of complement against self cells (Figure 3). Biochemical studies have revealed the peptide sequences in C8α and C9 that bind to CD59. Analysis of the MACPF domain structures reveals that these sequences map to the second cluster of helices that unfurl to span the membrane. It is therefore suggested that CD59 directly inhibits the MAC by interfering with conformational change in one of the membrane spanning regions.
Other proteins that bind to the MAC include C8γ. This protein belongs to the lipocalin family and interacts with C8α. The binding site on C8α is known, however, the precise role of C8γ in the MAC remains to be understood.
Role in human disease
Deficiency of C9, or other components of the MAC results in an increased susceptibility to diseases caused by gram-negative bacteria such as meningococcal meningitis. Overactivity of MACPF proteins can also cause disease. Most notably, deficiency of the MAC inhibitor CD59 results in an overactivity of complement and Paroxysmal nocturnal hemoglobinuria.
Perforin deficiency results in the commonly fatal disorder familial hemophagocytic lymphohistiocytosis (FHL or HLH). This disease is characterised by an overactivation of lymphocytes which results in cytokine mediated organ damage.
The MACPF protein DBCCR1 may function as a tumor suppressor in bladder cancer.
Human proteins containing this domain
C6; C7; C8A; C8B; C9; FAM5B; FAM5C; MPEG1;
PRF1
References
Protein families
Membrane proteins
Transmembrane proteins
Transmembrane transporters
Transport proteins
Integral membrane proteins | MACPF | Biology | 1,655 |
15,650,509 | https://en.wikipedia.org/wiki/List%20of%20Apple%20drives | A list of all Apple internal and external drives in chronological order of introduction.
Floppy disk drives
Disk II
Disk III
Apple "Twiggy" FileWare
Disk IIc
400K Drive (internal)
Macintosh External Disk Drive (400K)
UniDisk
DuoDisk
UniDisk 3.5
Macintosh 800K External Drive
Disk 5.25
Apple 3.5 Drive
Apple SuperDrive
Macintosh HDI-20 External 1.4MB Drive
Hard disk drives
Apple ProFile
Apple Widget
Macintosh Hard Disk 20
Apple Hard Disk 20SC
Xserve RAID
Time Capsule
Optical drives
AppleCD
PowerCD
SuperDrive
Apple MacBook Air SuperDrive
Other drives
Apple Tape Backup 40SC
Fusion Drive
Drives
Apple drives | List of Apple drives | Technology | 141 |
14,517,481 | https://en.wikipedia.org/wiki/GPR142 | Probable G-protein coupled receptor 142 is a protein that in humans is encoded by the GPR142 gene.
GPR142 is a member of the rhodopsin family of G protein-coupled receptors (GPRs) (Fredriksson et al., 2003).[supplied by OMIM]
References
Further reading
G protein-coupled receptors | GPR142 | Chemistry | 74 |
41,603,134 | https://en.wikipedia.org/wiki/Code%20as%20data | In computer science, the expression code as data refers to the idea that source code written in a programming language can be manipulated as data, such as a sequence of characters or an abstract syntax tree (AST), and it has an execution semantics only in the context of a given compiler or interpreter. The notion is often used in the context of Lisp-like languages that use S-expressions as their main syntax, as writing programs using nested lists of symbols makes the interpretation of the program as an AST quite transparent (a property known as homoiconicity).
These ideas are generally used in the context of what is called metaprogramming, writing programs that treat other programs as their data. For example, code-as-data allows the serialization of first-class functions in a portable manner. Another use case is storing a program in a string, which is then processed by a compiler to produce an executable. More often there is a reflection API that exposes the structure of a program as an object within the language, reducing the possibility of creating a malformed program.
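A minimal sketch of this idea in Python, using the standard-library ast module: source text is parsed into a tree (data), rewritten like any other data structure, then compiled back into executable code. The function name and the particular rewrite are arbitrary examples.

```python
import ast

source = "def square(x):\n    return x * x\n"
tree = ast.parse(source)                 # code, now held as data

# Rewrite the AST: rename the function and double its result.
func = tree.body[0]
func.name = "double_square"
func.body[0].value = ast.BinOp(          # return (x * x) * 2
    left=func.body[0].value, op=ast.Mult(), right=ast.Constant(2))
ast.fix_missing_locations(tree)

namespace = {}
exec(compile(tree, "<generated>", "exec"), namespace)
print(namespace["double_square"](3))     # 18
```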
In computational theory, Kleene's second recursion theorem provides a form of code-is-data, by proving that a program can have access to its own source code.
Code-as-data is also a principle of the Von Neumann architecture, since stored programs and data are both represented as bits in the same memory device. This architecture offers the ability to write self-modifying code. It also opens the security risk of disguising a malicious program as user data and then using an exploit to direct execution to the malicious program.
Data as Code
In declarative programming, the Data as Code (DaC) principle refers to the idea that an arbitrary data structure can be exposed using a specialized language semantics or API. For example, a list of integers or a string is data, but in languages such as Lisp and Perl, they can be directly entered and evaluated as code. Configuration scripts, domain-specific languages and markup languages are cases where program execution is controlled by data elements that are not clearly sequences of commands.
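A minimal sketch of the converse idea: an arithmetic "program" stored as nested lists (plain data) is given an execution semantics by a small evaluator. The operator names and the sample program are arbitrary examples.

```python
import operator

OPS = {"add": operator.add, "mul": operator.mul, "sub": operator.sub}

def evaluate(expr):
    """Give the nested-list data structure an execution semantics."""
    if isinstance(expr, list):                       # ["op", arg1, arg2]
        op, lhs, rhs = expr
        return OPS[op](evaluate(lhs), evaluate(rhs))
    return expr                                      # a literal number

program = ["add", 1, ["mul", 2, ["sub", 10, 3]]]     # 1 + 2 * (10 - 3)
print(evaluate(program))                             # 15
```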
References
Programming language topics | Code as data | Technology,Engineering | 439 |
61,954,047 | https://en.wikipedia.org/wiki/Anderson%20function | Anderson functions describe the projection of a magnetic dipole field in a given direction at points along an arbitrary line. They are useful in the study of magnetic anomaly detection, with historical applications in submarine hunting and underwater mine detection. They approximately describe the signal detected by a total field sensor as the sensor passes by a target (assuming the targets signature is small compared to the Earth's magnetic field).
Definition
The magnetic field from a magnetic dipole along a given line, and in any given direction can be described by the following basis functions:
which are known as Anderson functions.
Definitions:
is the dipole's strength and direction
is the projected direction (often the Earth's magnetic field in a region)
is the position along the line
points in the direction of the line
is a vector from the dipole to the point of closest approach (CPA) of the line
, a dimensionless quantity for simplification
The total magnetic field along the line is given by
where is the magnetic constant, and are the Anderson coefficients, which depend on the geometry of the system. These are
where and are unit vectors (given by and , respectively).
Note, the antisymmetric portion of the function is represented by the second function. Correspondingly, the sign of depends on how is defined (e.g. direction is 'forward').
Total field measurements
The total field measurement resulting from a dipole field in the presence of a background field (such as earth magnetic field) is
The last line is an approximation that is accurate if the background field is much larger than contributions from the dipole. In such a case the total field reduces to the sum of the background field, and the projection of the dipole field onto the background field. This means that the total field can be accurately described as an Anderson function with an offset.
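The decomposition can be checked numerically: projecting the exact dipole field onto a fixed direction along a line and multiplying by (1 + u²)^(5/2) should leave a quadratic in u, i.e. a combination of the three Anderson basis functions. The sketch below assumes SI units; the geometry and dipole moment values are illustrative placeholders.

```python
import numpy as np

MU0_4PI = 1e-7  # mu_0 / (4 pi), SI

def dipole_field(m, r):
    """Magnetic field of a point dipole m at displacement r."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0_4PI * (3.0 * np.dot(m, rhat) * rhat - m) / rn**3

# Assumed geometry: line along x_hat, CPA vector along y_hat, projection
# direction b_hat; all numbers are arbitrary examples.
x_hat = np.array([1.0, 0.0, 0.0])
d_vec = np.array([0.0, 50.0, 0.0])                 # CPA offset, metres
b_hat = np.array([0.3, 0.5, 0.8]); b_hat /= np.linalg.norm(b_hat)
m = np.array([20.0, -5.0, 40.0])                   # dipole moment, A m^2

d = np.linalg.norm(d_vec)
u = np.linspace(-5.0, 5.0, 201)                    # dimensionless position x/d
signal = np.array([np.dot(dipole_field(m, d_vec + ui * d * x_hat), b_hat)
                   for ui in u])

# signal * (1 + u^2)^(5/2) should be a quadratic in u, i.e. a combination
# of the three Anderson basis functions u^i / (1 + u^2)^(5/2):
poly = signal * (1.0 + u**2) ** 2.5
coeffs = np.polyfit(u, poly, 2)
print("quadratic fit residual:",
      np.max(np.abs(np.polyval(coeffs, u) - poly)))  # tiny (numerical noise)
```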
References
Functions and mappings | Anderson function | Mathematics | 375 |
358,237 | https://en.wikipedia.org/wiki/Roof%20garden | A roof garden is a garden on the roof of a building. Besides the decorative benefit, roof plantings may provide food, temperature control, hydrological benefits, architectural enhancement, habitats or corridors for wildlife, recreational opportunities, and in large scale it may even have ecological benefits. The practice of cultivating food on the rooftop of buildings is sometimes referred to as rooftop farming. Rooftop farming is usually done using green roof, hydroponics, aeroponics or air-dynaponics systems or container gardens.
History
Humans have grown plants atop structures since the ziggurats of ancient Mesopotamia (4th millennium BC–600 BC) had plantings of trees and shrubs on aboveground terraces. An example in Roman times was the Villa of the Mysteries in Pompeii, which had an elevated terrace where plants were grown. A roof garden has also been discovered around an audience hall in Roman-Byzantine Caesarea. The medieval Egyptian city of Fustat had a number of high-rise buildings that Nasir Khusraw in the early 11th century described as rising up to 14 stories, with roof gardens on the top story complete with ox-drawn water wheels for irrigating them.
Among the Seven Wonders of the Ancient World, The Hanging Gardens of Babylon are often depicted as tall structures holding vegetation; even immense trees.
In New York City between 1880 and Prohibition, large rooftop gardens built included the Hotel Astor (New York City), the American Theater on Eighth Avenue, the garden atop Stanford White's 1890 Madison Square Garden, and the Paradise Roof Garden opened by Oscar Hammerstein I in 1900.
Commercial greenhouses on rooftops have existed at least since 1969, when the Terrestris rooftop nursery opened on 60th Street in New York City.
In the 2010s, large commercial hydroponic rooftop farms were started by Gotham Greens, Lufa Farms, and others.
Environmental impact
Roof gardens are most often found in urban environments. Plants have the ability to reduce the overall heat absorption of a building, which then reduces energy consumption for cooling. "The primary cause of heat build-up in cities is insolation, the absorption of solar radiation by roads and buildings in the city and the storage of this heat in the building material and its subsequent re-radiation. Plant surfaces however, as a result of transpiration, do not rise more than above the ambient and are sometimes cooler." This then translates into a cooling of the environment between , depending on the area on earth (in hotter areas, the environmental temperature will cool more). The study was performed at Cardiff University.
A study at the National Research Council of Canada showed the differences in temperature between roofs with gardens and roofs without gardens. The study shows temperature effects on different layers of each roof at different times of the day. Roof gardens prove very beneficial in moderating roof temperatures compared with roofs without gardens. "If widely adopted, rooftop gardens could reduce the urban heat island, which would decrease smog episodes, problems associated with heat stress and further lower energy consumption."
Aside from rooftop gardens providing resistance to thermal radiation, rooftop gardens are also beneficial in reducing rain run off. A roof garden can delay run off; reduce the rate and volume of run off. “As cities grow, permeable substrates are replaced by impervious structures such as buildings and paved roads. Storm water run-off and combined sewage overflow events are now major problems for many cities in North America. A key solution is to reduce peak flow by delaying (e.g., control flow drain on roofs) or retaining run-off (e.g., rain detention basins). Rooftop gardens can delay peak flow and retain the run-off for later use by the plants.”
Urban agriculture
“In an accessible rooftop garden, space becomes available for localized small-scale urban agriculture, a source of local food production. An urban garden can supplement the diets of the community it feeds with fresh produce and provide a tangible tie to food production.” At Trent University, there is currently a working rooftop garden which provides food to the student café and local citizens.
Available gardening areas in cities are often seriously lacking, which is likely the key impetus for many roof gardens. The garden may be on the roof of an autonomous building which takes care of its own water and waste. Hydroponics and other alternative methods can expand the possibilities of roof top gardening by reducing, for example, the need for soil or its tremendous weight. Plantings in containers are used extensively in roof top gardens. Planting in containers prevents added stress to the roof's waterproofing. One high-profile example of a building with a roof garden is Chicago City Hall.
For those who live in small apartments with little space, square foot gardening, or (when even less space is available) green walls (vertical gardening) can be a solution. These use much less space than traditional gardening. These also encourage environmentally responsible practices, eliminating tilling, reducing or eliminating pesticides, and weeding, and encouraging the recycling of wastes through composting.
Importance to urban planning
Becoming green is a high priority for urban planners. The environmental and aesthetic benefits to cities are the prime motivation. It was calculated that the temperature in Tokyo could be lowered by if 50% of all available rooftop space were planted with greenery. This would lead to savings of approximately 100 million yen.
Singapore is active in green urban development. "Roof gardens present possibilities for carrying the notions of nature and open space further in tall building development." When surveyed, 80% of Singapore residents voted for more roof gardens to be implemented in the city's plans. Recreational reasons, such as leisure and relaxation, beautifying the environment, and greenery and nature, received the most votes. Planting roof gardens on the tops of buildings is a way to make cities more efficient.
A roof garden can be distinguished from a green roof, although the two terms are often used interchangeably. The term roof garden is well suited to roof spaces that incorporate recreation, and entertaining and provide additional outdoor living space for the building's residents. It may include planters, plants, dining and lounging furniture, outdoor structures such as pergolas and sheds, and automated irrigation and lighting systems.
Although they may provide aesthetic and recreational benefits a green roof is not necessarily designed for this purpose. A green roof may not provide any recreational space and be constructed with an emphasis on improving the insulation or improving the overall energy efficiency and reducing the cooling and heating costs within a building.
Green roofs may be extensive or intensive. The terms are used to describe the type of planting required. The panels that comprise a green roof are generally no more than a few centimeters up to 30 cm (a few inches up to a foot) in depth, since weight is an important factor when covering an entire roof surface. The plants that go into a green roof are usually sedum or other shallow-rooted plants that will tolerate the hot, dry, windy conditions that prevail in most rooftop gardens. With a green roof, "the plants' layer can shield off as much as 87% of solar radiation while a bare roof receives 100% direct exposure".
The planters on a roof garden may be designed for a variety of functions and vary greatly in depth to satisfy aesthetic and recreational purposes. These planters can hold a range of ornamental plants: anything from trees, shrubs, vines, or an assortment of flowers. As aesthetics and recreation are the priority they may not provide the environmental and energy benefits of a green roof.
In popular culture
American jazz singer Al Jarreau composed a song named "Roof Garden", released on his 1981 album.
Apu Nahasapeemapetilon of the TV show The Simpsons has a rooftop garden visited by Paul McCartney and his wife.
In BBC's 1990 television miniseries House of Cards, the main character, Member of Parliament (MP) Francis Urquhart, murders journalist Mattie Storin by throwing her off the Palace of Westminster's rooftop garden.
Gallery
See also
Agrivoltaic
Building-integrated agriculture
Cool roof
Green building
Green infrastructure
Greening
Kensington Roof Gardens
List of garden types
Low-flow irrigation systems
Metropolitan Museum of Art Roof Garden
Ralph Hancock, designer of The Rockefeller Center Roof Gardens
Roof deck
Terrace garden
Urban green space
Urban park
Wildlife corridor
References
External links
The New York Times article about rooftop garden in Manhattan
Types of garden
Architectural elements
Urban agriculture
Roofs
Sustainable building | Roof garden | Technology,Engineering | 1,699 |
68,689,792 | https://en.wikipedia.org/wiki/Surface%20Duo%202 | The Surface Duo 2 is a discontinued dual-touchscreen Android smartphone manufactured by Microsoft. Announced during a hardware-oriented event on September 22, 2021, it is the successor to the original Surface Duo.
Specifications
Hardware
The Surface Duo 2 uses a similar folio form factor to the first-generation model, although it is slightly thicker to accommodate a larger battery, and now offered in a new black color option. It features a pair of 5.8-inch OLED displays with a 90 Hz refresh rate, with their innermost edges being curved over the side. This is used as part of a new feature known as the "Glance Bar", which allows notifications and other content to be displayed along its spine. As with the Surface Duo, it supports Surface Pen styluses, including the concurrently-unveiled Surface Slim Pen 2.
The device uses the Qualcomm Snapdragon 888 5G system-on-chip, adding 5G and near-field communication (NFC) support that was not present on the original Surface Duo. It also has an increased 8 GB of RAM, dual speakers, and a fingerprint reader in its power button.
The Surface Duo 2 has both front- and rear-facing cameras, with the rear camera array featuring a 12-megapixel lens, a 16-megapixel wide-angle lens, and a 12-megapixel telephoto lens.
Software
The Surface Duo 2 launched with Android 11. An update to Android 12 L was released in October 2022, adding Windows 11-inspired design elements and other new features.
Reception
The Surface Duo 2 received a mixed reception. The Verge praised the Duo 2 for having more competitive hardware than the first-generation model, including a faster processor, more RAM, and 5G support (although still lacking features such as wireless charging), and its software for being "undeniably better than it was on the original at launch". However, the reviewer noted issues with touch input response, observed that web browsers such as Microsoft Edge and Google Chrome did not support displaying pages across the two screens, and felt that its cameras were exceeded in quality by even cheaper devices such as the Pixel 5a. In conclusion, it was felt that "between the bugs and inherent awkwardness of the form factor, the Duo 2 is just a difficult device to live with day to day, much like its predecessor." Microsoft had discontinued the Surface Duo 2 by October 21, 2024, and its last security update was released on October 8, 2024.
References
Foldable smartphones
Microsoft Surface
Products introduced in 2021 | Surface Duo 2 | Technology | 528 |
51,663,914 | https://en.wikipedia.org/wiki/Tlalocite | Tlalocite is a rare and complex tellurate mineral with the formula Cu10Zn6(TeO4)2(TeO3)(OH)25Cl · 27 H2O. It has a Mohs hardness of 1, and a cyan color. It was named after Tlaloc, the Aztec god of rain, in allusion to the high amount of water contained within the crystal structure. It is not to be confused with quetzalcoatlite, which often looks similar in color and habit.
Occurrence
Tlalocite was first identified in the Bambollite mine (La Oriental), Moctezuma, Municipio de Moctezuma, Sonora, Mexico and it was approved by the IMA in 1974. It often occurs together with tenorite, azurite, malachite and tlapallite. It is found in partially oxidized portions of tellurium-bearing hydrothermal veins.
References
Copper(II) minerals
Zinc minerals
Tellurite minerals
Tellurate minerals
Orthorhombic minerals
Minerals described in 1974 | Tlalocite | Chemistry | 225 |
4,046,826 | https://en.wikipedia.org/wiki/Indolamines | Indolamines are a family of neurotransmitters that share a common molecular structure. Indolamines are a classification of monoamine neurotransmitter, along with catecholamines and ethylamine derivatives. A common example of an indolamine is the tryptophan derivative serotonin, a neurotransmitter involved in mood and sleep. Another example of an indolamine is melatonin.
In biochemistry, indolamines are substituted indole compounds that contain an amino group. Examples of indolamines include the lysergamides.
Synthesis
Indolamines are biologically synthesized from the essential amino acid tryptophan. Tryptophan is first hydroxylated by the enzyme tryptophan hydroxylase to form 5-hydroxytryptophan (5-HTP); the subsequent removal of the carboxyl group by the enzyme 5-HTP decarboxylase then yields serotonin.
See also
Indole
Tryptamine
References
Neurotransmitters
Indoles
Amines | Indolamines | Chemistry | 213 |
70,979,123 | https://en.wikipedia.org/wiki/Gibellula | Gibellula is a genus of parasitic fungi which attacks arachnids.
The genus Gibellula was named after Prof. Giuseppe Gibelli.
References
Cordycipitaceae
Parasitic fungi
Taxa described in 1877
Taxa named by Pier Andrea Saccardo
Hypocreales genera | Gibellula | Biology | 59 |
56,686,207 | https://en.wikipedia.org/wiki/HD%202454 | HD 2454 is a probable binary star system in the zodiac constellation of Pisces. With an apparent visual magnitude of 6.04, it is near the lower limit of visibility to the naked eye under good seeing conditions. An annual parallax shift of 26.3 mas as measured from Earth's orbit provides a distance estimate of 124 light years. It has a relatively high proper motion, traversing the celestial sphere at a rate of 0.208 arcseconds per year, and is moving closer to the Sun with a heliocentric radial velocity of −10 km/s.
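As a check on the quoted figure, the distance follows directly from the parallax via the standard relation d = 1/p (distance in parsecs, parallax in arcseconds; 1 pc ≈ 3.26 light years):

\[ d = \frac{1}{p} = \frac{1}{0.0263''} \approx 38.0\ \mathrm{pc} \approx 38.0 \times 3.26 \approx 124\ \mathrm{ly} \]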
The visible component of this system is an F-type main-sequence star with a stellar classification of , showing an abnormally strong line of singly-ionized strontium (Sr II) at a wavelength of 4077 Å. It has an estimated 1.23 times the mass of the Sun and 1.6 times the Sun's radius. The star is about 1.9 billion years old with a rotation period of around three days. It is radiating 4.6 times the Sun's luminosity from its photosphere at an effective temperature of around 6,508 K.
HD 2454 was the first star to be identified as a barium dwarf, by Tomkin et al. (1989), and is the brightest such object. It displays a mild overabundance of the element barium, which is hypothesized to have been accreted when an unresolved white dwarf companion was passing through the asymptotic giant branch (AGB) stage.
The visible component displays significant overabundances of three s-process peak elements that are generated during the AGB phase, as well as a mild overabundance of carbon. In contrast, it shows severe depletion of lithium and beryllium, as well as a notable underabundance of boron. The surface abundances of these lighter elements may have been altered during the mass transfer process, having been previously consumed in the core region of the companion.
References
F-type main-sequence stars
Barium stars
Pisces (constellation)
Durchmusterung objects
0107
Piscium, 88
002454
002235 | HD 2454 | Astronomy | 462 |
503,432 | https://en.wikipedia.org/wiki/EMI%20%28protocol%29 | External Machine Interface (EMI), an extension to Universal Computer Protocol (UCP), is a protocol primarily used to connect to short message service centres (SMSCs) for mobile telephones. The protocol was developed by CMG Wireless Data Solutions, now part of Mavenir.
Syntax
A typical EMI/UCP exchange looks like this:
^B01/00045/O/30/66677789///1//////68656C6C6F/CE^C
^B01/00041/R/30/A//66677789:180594141236/F3^C
The start of the packet is signaled by ^B (STX, hex 02) and the end by ^C (ETX, hex 03). Fields within the packet are separated by / characters.
The first four fields form the mandatory header: the first is the transaction reference number, the second is the total length of the packet between the STX and ETX bytes, the third is the operation type (O for operation, R for result), and the fourth is the operation code (here 30, "short message transfer").
The subsequent fields are dependent on the operation. In the first line above, '66677789' is the recipient's address (telephone number) and '68656C6C6F' is the content of the message, in this case the ASCII string "hello". The second line is the response with a matching transaction reference number, where 'A' indicates that the message was successfully acknowledged by the SMSC, and a timestamp is suffixed to the phone number to show time of delivery.
The final field is the checksum, calculated by summing the byte values of every character between the STX and ETX markers (slashes included, the two checksum characters themselves excluded) and taking the 8 least significant bits of the result, written as two hexadecimal digits.
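A minimal Python sketch of this rule (illustrative only; the function and field names are not taken from the specification). It reproduces the checksums of both example packets above and decodes the hex-encoded message text:

STX, ETX = "\x02", "\x03"

def ucp_checksum(s):
    # 8 least significant bits of the byte sum, as two uppercase hex digits
    return format(sum(s.encode("ascii")) & 0xFF, "02X")

def parse_packet(packet):
    body = packet.strip(STX + ETX)            # drop the framing bytes
    payload, received = body[:-2], body[-2:]  # checksum is the last two characters
    if ucp_checksum(payload) != received:
        raise ValueError("checksum mismatch")
    trn, length, op_type, op = body.split("/")[:4]   # mandatory header fields
    return {"trn": trn, "len": int(length), "type": op_type, "operation": op}

pkt = STX + "01/00045/O/30/66677789///1//////68656C6C6F/CE" + ETX
print(parse_packet(pkt))    # {'trn': '01', 'len': 45, 'type': 'O', 'operation': '30'}
print(bytes.fromhex("68656C6C6F").decode("ascii"))   # hello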
The full specification is available on the LogicaCMG website developers' forum, but registration is required.
Technical limitations
The two-digit transaction reference number means that an entity sending text messages can only have 100 outstanding messages per session; this can limit performance, but only over a slow network or with a misconfigured application on the SMSC side (for example, a single session with a window size greater than 100). In practice it does not have any impact on delivery throughput.
The EMI UCP documentation does not specify a default alphabet for alphanumeric messages after decoding from hex digits. (It specifies an alphabet of IRA for the encoded message, which matches 7-bit ASCII, since 0-9 and A-Z are invariant characters.) The related ETS 300 133-3 standard specifies the GSM-7 alphabet, which accommodates more languages than ASCII by replacing unprintable control codes with additional printable characters. In practice the GSM-7 alphabet is used. Other encodings, such as UCS-2, can be sent by using a transparent message and specifying the Data Coding Scheme.
Alternatives
Short message peer-to-peer protocol (SMPP) also provides SMS over TCP/IP.
Computer Interface for Message Distribution (CIMD) developed by Nokia
External links
ETS 300 133-3
LogicaCMG: Downloads for developers (link no longer active as of 2007-12-24)
UCP Specification (Vodafone Germany)
A more detailed UCP Specification
UCP Perl implementation (for developers)
Kannel, Open-Source WAP and SMS Gateway with UCP/EMI 4.0 support.
GSM standard
Mobile technology
Network protocols | EMI (protocol) | Technology | 722 |
5,206,174 | https://en.wikipedia.org/wiki/Virosome | A virosome is a drug or vaccine delivery mechanism consisting of unilamellar phospholipid membrane (either a mono- or bi-layer) vesicle incorporating virus derived proteins to allow the virosomes to fuse with target cells. Viruses are infectious agents that can replicate in their host organism, however virosomes do not replicate. The properties that virosomes share with viruses are based on their structure; virosomes are essentially safely modified viral envelopes that contain the phospholipid membrane and surface glycoproteins. As a drug or vaccine delivery mechanism they are biologically compatible with many host organisms and are also biodegradable. The use of reconstituted virally derived proteins in the formation of the virosome allows for the utilization of what would otherwise be the immunogenic properties of a live-attenuated virus, but is instead a safely killed virus. A safely killed virus can serve as a promising vector because it won't cause infection and the viral structure allows the virosome to recognize specific components of its target cells.
Virosomes structure
Virosomes are vehicles with a spherical shape and a phospholipid mono- or bilayer membrane. Inside the virosome, a central cavity holds the therapeutic molecules, such as nucleic acids, proteins, and drugs. The surface of the virosome can carry different types of glycoproteins. Glycoproteins are proteins that have an oligosaccharide chain bonded to the amino acid chain. The glycoproteins on the surface of the virosome increase specificity for the target cells because they assist both recognition and attachment of the virosomes to their target cells. In the case of the influenza virosome, the glycoproteins are antigens, haemagglutinin, and neuraminidase. Antigens are molecules that trigger an immune response when targeted by a specific antibody corresponding to the shape of the antigen. Haemagglutinin is a viral glycoprotein that causes red blood cell agglutination. Neuraminidases are enzymes that break glycosidic linkages. The size of the virosome and the surface molecules it presents can be modified so that it can target different types of cells.
Virosome applications
Virosomes deliver antigens and therapeutic agents to their targeted cells. Virosomes can act as immunopotentiating agents and as agents of targeted drug delivery. As immunopotentiating agents, virosomes activate both cell-mediated and humoral immune responses. Virosomes are suspended in saline buffers and are administered through respiratory, parenteral, intravenous, oral, intramuscular, and topical routes.
Influenza virosomes
In contrast to liposomes, virosomes contain functional viral envelope glycoproteins: influenza virus hemagglutinin (HA) and neuraminidase (NA) intercalated in the phospholipid bilayer membrane. They have a typical mean diameter of 150 nm. Essentially, virosomes represent reconstituted empty influenza virus envelopes, devoid of the nucleocapsid including the genetic material of the source virus.
Non-influenza virosomes
They are also being considered for HIV-1 vaccine research.
They were used as a drug carrier mechanism for experimental cancer therapies.
Benefits and challenges
The benefits of virosomes are that their specific structure and small size improve the precision of targeting. The phospholipid membrane protects the virosome from adverse reactions in the body and makes the virosome biocompatible and biodegradable. The main challenge is rapid detection by the immune system and activation of an immune response against the viral glycoproteins, which can deplete the circulating virosomes. However, the glycoproteins can still induce a prophylactic response against the virus, which supports establishing virosomes as vaccine delivery systems. If a virosome is administered into the bloodstream, it can disintegrate; if it reaches its target quickly enough, the drug delivery will still succeed. There are thus some challenges with virosomes, but there are ways in which the virosome can still help activate the immune response.
References
External links
"What are virosomes?"
"Virosome based vaccine"
Vaccination | Virosome | Biology | 979 |
616,019 | https://en.wikipedia.org/wiki/Distributivity%20%28order%20theory%29 | In the mathematical area of order theory, there are various notions of the common concept of distributivity, applied to the formation of suprema and infima. Most of these apply to partially ordered sets that are at least lattices, but the concept can in fact reasonably be generalized to semilattices as well.
Distributive lattices
Probably the most common type of distributivity is the one defined for lattices, where the formation of binary suprema and infima provide the total operations of join (∨) and meet (∧). Distributivity of these two operations is then expressed by requiring that the identity

x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z)

hold for all elements x, y, and z. This distributivity law defines the class of distributive lattices. Note that this requirement can be rephrased by saying that binary meets preserve binary joins. The above statement is known to be equivalent to its order dual

x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z)

such that one of these properties suffices to define distributivity for lattices. Typical examples of distributive lattice are totally ordered sets, Boolean algebras, and Heyting algebras. Every finite distributive lattice is isomorphic to a lattice of sets, ordered by inclusion (Birkhoff's representation theorem).
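For instance, in the power set of any set, ordered by inclusion, meet and join are intersection and union, and the distributive law specializes to the familiar set identity

\[ A \cap (B \cup C) = (A \cap B) \cup (A \cap C) \]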
Distributivity for semilattices
A semilattice is a partially ordered set with only one of the two lattice operations, either a meet- or a join-semilattice. Given that there is only one binary operation, distributivity obviously cannot be defined in the standard way. Nevertheless, because of the interaction of the single operation with the given order, the following definition of distributivity remains possible. A meet-semilattice is distributive, if for all a, b, and x:

If a ∧ b ≤ x then there exist a′ and b′ such that a ≤ a′, b ≤ b′ and x = a′ ∧ b′.

Distributive join-semilattices are defined dually: a join-semilattice is distributive, if for all a, b, and x:

If x ≤ a ∨ b then there exist a′ and b′ such that a′ ≤ a, b′ ≤ b and x = a′ ∨ b′.

In either case, a′ and b′ need not be unique.
These definitions are justified by the fact that given any lattice L, the following statements are all equivalent:
L is distributive as a meet-semilattice
L is distributive as a join-semilattice
L is a distributive lattice.
Thus any distributive meet-semilattice in which binary joins exist is a distributive lattice.
A join-semilattice is distributive if and only if the lattice of its ideals (under inclusion) is distributive.
This definition of distributivity allows generalizing some statements about distributive lattices to distributive semilattices.
Distributivity laws for complete lattices
For a complete lattice, arbitrary subsets have both infima and suprema and thus infinitary meet and join operations are available. Several extended notions of distributivity can thus be described. For example, for the infinite distributive law, finite meets may distribute over arbitrary joins, i.e.

x ∧ ⋁S = ⋁{ x ∧ s | s ∈ S }

may hold for all elements x and all subsets S of the lattice. Complete lattices with this property are called frames, locales or complete Heyting algebras. They arise in connection with pointless topology and Stone duality. This distributive law is not equivalent to its dual statement

x ∨ ⋀S = ⋀{ x ∨ s | s ∈ S }

which defines the class of dual frames or complete co-Heyting algebras.
Now one can go even further and define orders where arbitrary joins distribute over arbitrary meets. Such structures are called completely distributive lattices. However, expressing this requires formulations that are a little more technical. Consider a doubly indexed family {xj,k | j in J, k in K(j)} of elements of a complete lattice, and let F be the set of choice functions f choosing for each index j of J some index f(j) in K(j). A complete lattice is completely distributive if for all such data the following statement holds:

⋀j∈J ⋁k∈K(j) xj,k = ⋁f∈F ⋀j∈J xj,f(j)

Complete distributivity is again a self-dual property, i.e. dualizing the above statement yields the same class of complete lattices. Completely distributive complete lattices (also called completely distributive lattices for short) are indeed highly special structures. See the article on completely distributive lattices.
Distributive elements in arbitrary lattices
In an arbitrary lattice, an element x is called a distributive element if ∀y,z: x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z).
An element x is called a dual distributive element if ∀y,z: x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z).
In a distributive lattice, every element is of course both distributive and dual distributive.
In a non-distributive lattice, there may be elements that are distributive, but not dual distributive (and vice versa).
For example, in the pentagon lattice N5, with elements labeled so that 0 < z < x < 1 and y is incomparable to both x and z, the element x is distributive, but not dual distributive, since x ∧ (y ∨ z) = x ∧ 1 = x ≠ z = 0 ∨ z = (x ∧ y) ∨ (x ∧ z).
In an arbitrary lattice L, the following are equivalent:
x is a distributive element;
The map φ defined by φ(y) = x ∨ y is a lattice homomorphism from L to the upper closure ↑x = { y ∈ L: x ≤ y };
The binary relation Θx on L defined by y Θx z if x ∨ y = x ∨ z is a congruence relation, that is, an equivalence relation compatible with ∧ and ∨.
In an arbitrary lattice, if x1 and x2 are distributive elements, then so is x1 ∨ x2.
Literature
Distributivity is a basic concept that is treated in any textbook on lattice and order theory. See the literature given for the articles on order theory and lattice theory. More specific literature includes:
G. N. Raney, Completely distributive complete lattices, Proceedings of the American Mathematical Society, 3: 677 - 680, 1952.
References
Order theory | Distributivity (order theory) | Mathematics | 1,377 |
36,052,045 | https://en.wikipedia.org/wiki/Continuous%20foam%20separation | Continuous foam separation is a chemical process closely related to foam fractionation in which foam is used to separate components of a solution when they differ in surface activity. In any solution, surface active components tend to adsorb to gas-liquid interfaces while surface inactive components stay within the bulk solution. When a solution is foamed, the most surface active components collect in the foam and the foam can be easily extracted. This process is commonly used in large-scale projects such as water waste treatment due to a continuous gas flow in the solution.
There are two types of foam that can form from this process. They are wet foam (or kugelschaum) and dry foam (or polyederschaum). Wet foam tends to form at the lower portion of the foam column, while dry foam tends to form at the upper portion. The wet foam is more spherical and viscous, and the dry foam tends to be larger in diameter and less viscous. Wet foam forms closer to the originating liquid, while dry foam develops at the outer boundaries. As such, what most people usually understand as foam is actually only dry foam.
The setup for continuous foam separation consists of securing a column at the top of the container of solution that is to be foamed. Air or a specific gas is dispersed in the solution through a sparger. A collecting column at the top collects the foam being produced. The foam is then collected and collapsed in another container.
In the continuous foam separation process a continuous gas line is fed into the solution, therefore causing continuous foaming to occur. Continuous foam separation may not be as efficient in separating solutes as opposed to separating a fixed amount of solution.
History
Processes similar to continuous foam separation have been commonly used for decades. Protein skimmers are one example of foam separation used in saltwater aquariums. The earliest documents pertaining to foam separation is dated back to 1959, when Robert Schnepf and Elmer Gaden, Jr. studied the effects of pH and concentration on the separation of bovine serum albumin from solution. A different study performed by R.B. Grieves and R. K. Woods in 1964 focused on the various effects of separation based on the changes of certain variables (i.e. temperature, position of feed introduction, etc.). In 1965, Robert Lemlich of the University of Cincinnati made another study on foam fractionation. Lemlich researched the science behind foam fractionation through theory and equations.
As stated earlier, continuous foam separation is closely related to foam fractionation where hydrophobic solutes attach to the surfaces of bubbles and rise to form foam. Foam fractionation is used on a smaller scale whereas continuous foam separation is implemented on a larger scale such as water treatment for a city. An article published by the Water Environment Federation in 1969, discussed the idea of using foam fractionation to treat pollution in rivers and other water resources in cities. Since then, little research has been done to further understand this process. There are still many studies that implement this process for their research, such as the separation of biomolecules in the medical field.
Background
Surface chemistry
Continuous foam separation is dependent on the contaminant’s ability to adsorb to the surface of the solvent based on their chemical potentials. If the chemical potentials promote surface adsorption, the contaminant will move from the bulk of the solvent and form a film at the surface of the foam bubble. The resulting film is considered a monolayer.
As the contaminants' (surfactants') concentration in the bulk decreases, the surface concentration increases; the adsorbed surfactant lowers the surface tension at the liquid-vapor interface. Surface tension describes how difficult it is to extend the area of a surface: if surface tension is high, a large free energy is required to increase the surface area. By lowering the free-energy cost of creating new gas-liquid interface, the adsorbed surfactant stabilizes the expanded bubble surfaces, and this encourages the formation of a foam.
Foams
Definition
Foam is a type of colloidal dispersion where gas is dispersed throughout a liquid phase. The liquid phase is also called the continuous phase because it is an uninterrupted, unlike the gas phase.
Structure
As the foam is formed, it changes in structure. As the liquid foams up into the gas, the foam bubbles begin as packed uniform spheres. This phase is the wet phase. The farther up the column the foam travels, the air bubbles distort to form polyhedral shapes, the dry phase. The liquid that separates the flat faces between two polyhedral bubbles is called the lamellae; it is a continuous liquid phase. The areas where three lamellae meet are called plateau borders. When the bubbles in the foam are the same size the lamellae in the plateau borders meet at 120 degree angles. Since the lamella is slightly curved, the plateau region is at low pressure. The continuous liquid phase is held to the bubble surfaces by the surfactant molecules that make up the solution being foamed. This fixation is important because otherwise the foam becomes very unstable as the liquid drains into the plateau region making the lamellae thin. Once the lamellae become too thin they will rupture.
Theory
Young–Laplace equation
As vapor bubbles form in a liquid solvent, interfacial tension causes a pressure difference, Δp, across the surface given by the Young–Laplace equation. The pressure is greater on the concave side of the liquid lamellae (the inside of the bubble), with radius, R, dependent on the pressure differential. For spherical bubbles in a wet foam and standard surface tension γ, the equation for the change in pressure is as follows:

Δp = 2γ / R

As the vapor bubbles distort and take the form of a more complex geometry than a simple sphere, the two principal radii of curvature R1 and R2 would be used in the following equation:

Δp = γ (1/R1 + 1/R2)
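As an order-of-magnitude illustration (the numbers here are assumed, not from the source): a wet-foam bubble of radius R = 50 μm in water with γ ≈ 0.072 N/m sustains an excess pressure of

\[ \Delta p = \frac{2\gamma}{R} = \frac{2 \times 0.072\ \mathrm{N/m}}{5 \times 10^{-5}\ \mathrm{m}} \approx 2.9\ \mathrm{kPa} \]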
As pressure grows inside the bubbles, the liquid in the lamellae will be forced to move toward the plateau borders, causing the lamellae to thin and eventually collapse.
Gibbs adsorption isotherm
The Gibbs adsorption isotherm can be used to determine the change in surface tension with changing concentration. Since chemical potential varies with a change in concentration, the following equation can be used to estimate the change in surface tension, where dγ is the change in surface tension of the interface, Γ1 is the surface excess of the solvent, Γ2 is the surface excess of the solute (surfactant), dμ1 is the change in chemical potential of the solvent, and dμ2 is the change in chemical potential of the solute:

dγ = −Γ1 dμ1 − Γ2 dμ2

For ideal cases the dividing surface can be chosen so that Γ1 = 0, and the created foam is dependent on the change in chemical potential of the solute. During foaming, the solute experiences a change in chemical potential as it goes from the bulk solution to the foam surface. In this case, the following equation can be applied, where a is the activity of the surfactant, R is the gas constant, and T is the absolute temperature:

dγ = −Γ2 R T d(ln a)

In order to solve for the area on the foam surface occupied by one adsorbed molecule, A, the following equation can be used, where NA is the Avogadro constant:

A = 1 / (Γ2 NA)
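For example, an assumed surface excess of Γ2 = 4 × 10⁻⁶ mol/m² (a typical order of magnitude for a small surfactant; the value is illustrative) corresponds to an area per adsorbed molecule of

\[ A = \frac{1}{\Gamma_2 N_A} = \frac{1}{(4\times10^{-6}\ \mathrm{mol/m^2})(6.022\times10^{23}\ \mathrm{mol^{-1}})} \approx 4.2\times10^{-19}\ \mathrm{m^2} \approx 0.42\ \mathrm{nm^2} \]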
Applications
Wastewater treatment
Continuous foam separation is used in wastewater treatment to remove detergent-derived foaming agents such as alkylbenzene sulfonate (ABS), which became common in wastewater by the 1950s. In 1959 it was shown that by adding 2-octane to foamed wastewater, 94% of ABS could be removed from the activated sludge using foam separation techniques. The foam produced during wastewater treatment can either be recycled back into the activated sludge tank within a waste treatment plant (the bacterial organisms that live there have been found to break down ABS when allowed enough time) or extracted and collapsed for disposal. Foam separation has also been found to decrease the chemical oxygen demand when used as a secondary treatment technique for wastewater.
Heavy metal removal
The removal of heavy metal ions from wastewater is important because they accumulate easily in the food chain, ending in animals such as swordfish that humans eat. Foam separation can be used to remove heavy metal ions from wastewater at low cost, especially when used in multistage systems. When performing ion foam separation, three operational regimes must be taken into account for optimal production of foam for ion removal: foam formation, flooding, and weeping/dumping.
Protein extraction
Foam separation can be used for the extraction of proteins from a solution, especially to concentrate the protein from a dilute solution. When purifying proteins from solution on an industrial scale, the most cost-efficient method is desired. As such, foam separation offers a method with low capital and maintenance costs due to its simple mechanical design; this design also allows for easy operation. However, there are two reasons why using foam separation to extract protein from solution has not become widespread: firstly, some proteins denature when going through the foaming process, and secondly, control and prediction of foaming is typically difficult. In order to determine the success of protein extraction through foaming, three calculations are used.
The enrichment ratio demonstrates how effective the foaming is in extracting the protein from the solution into the foam; the higher the number, the greater the affinity the protein has for the foam state.
The separation ratio is similar to the enrichment ratio in that the more effective the extraction of protein from the solution into the foam, the higher the number will be.
Recovery measures how efficiently the protein is removed from the solution into the foam state; the higher the percentage, the better the process is at recovering protein from the solution into the foam.
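These measures are commonly written as follows (the symbols are illustrative, not from the source: C is protein concentration and V is volume, with subscripts i for the initial feed, f for the collapsed foamate, and r for the residual liquid):

\[ E = \frac{C_f}{C_i}, \qquad S = \frac{C_f}{C_r}, \qquad R = \frac{C_f V_f}{C_i V_i} \times 100\% \]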
Foam hydrodynamics, as well as many of the variables that affect the success of foaming, are only partially understood. This makes it difficult to use mathematical calculations to predict protein recovery by foaming. However, some trends have been determined; high recovery rates have been linked to high concentrations of protein in the initial solution, high gas flow rates, and high feed flow rates. Enrichment is also known to increase when foaming is performed using shallow pools. Using pools with low heights allows only a small amount of protein to adsorb from the solution to the surface of the bubbles in the foam, resulting in lower surface viscosity. This leads to coalescence of the unstable foam higher up in the column, causing an increase in the bubble size and an increase in the reflux of the protein in the foam. However, an increased velocity of the gas being pumped into the system has been shown to lead to a decrease in the enrichment ratio. Since these outcomes are difficult to predict, bench- and then pilot-scale experiments are often performed in order to determine whether foaming is a viable technique for extraction on an industrial scale.
Bacterial cell extraction
Separation of cells is typically done using centrifugation, but foam separation has also been used as a more energy-efficient technique. This method has been used on many species of microbial cells, such as Hansenula polymorpha, Saccharomyces carlsbergensis, Bacillus polymyxa, Escherichia coli, and Bacillus subtilis, being most effective on cells that have hydrophobic surfaces.
Current and Future Directions
Continuous foam extraction was initially used for wastewater treatment in the 1960s. Since then there has not been a great deal of research into foaming as an extraction technique. However, in recent years foaming for protein and pharmaceutical extraction has gained increased interest among researchers. Purification of products is the most expensive part of production in biotechnology; foaming offers an alternative method that is less expensive than some current techniques.
Separation equipment
Foaming apparatus
Continuous foam separation is one of two major modes of foam separation, the other being batch foam separation. The difference between the two modes is that in continuous mode, surfactant solution is continuously fed into the foam column while solution depleted of surfactant continuously exits the bottom of the apparatus. A basic continuous foam separator consists of a liquid column with a gas inlet at the bottom and a foam take-off at the top. The process is stationary (in steady state) as long as the volume of liquid is constant as a function of time. As long as the process is in steady state, the liquid will not overflow into the foaming column. Depending on the design of the foam separator, the location of the feed inlet can vary from the top of the liquid solution to the top of the foam column.
The creation of the foam starts with the flow of gas into the bottom of the liquid column. The amount of gas flowing into the apparatus is measured and maintained through a flow meter. As the foam rises and becomes drained of liquid, it is diverted into a separate container to collect the foamate. The height of the foam column depends on the application. The diverted foam is liquefied by collapsing the foam bubbles, which can usually be achieved by mechanical means or by lowering the pressure in the foamate collecting vessel. Foam separators for different applications use this same basic setup, but can vary in the placement and addition of equipment.
Design considerations
Additional equipment can be added to the basic foam separator apparatus to achieve other desired effects suited to the application, but the underlying separation process remains the same. Added equipment is used to optimize the parameters enrichment E and recovery R. Typically, enrichment and recovery are opposing parameters, but some recent studies have shown that both can be optimized simultaneously. Varying the gas input flow rate, as well as other equipment settings, affects the optimization of these parameters. Foam separation has been compared in this way with other techniques used to separate the protein α-lactalbumin from a whey protein solution.
pH
pH is an important factor in foaming because it determines whether a surfactant will be able to move into the foam phase from the bulk liquid phase. The isoelectric point is one factor that must be taken into consideration: when surfactants carry a neutral charge they adsorb more favorably to the liquid-gas interface. pH poses a unique problem for proteins, because they denature at pH values that are too high or too low. While the isoelectric point is ideal for surfactant adsorption, it has been found that foam is most stable at a pH of 4 and that foam volume is maximized at pH 10.
Surfactants
The chain length of the non-polar parts of surfactants determines how easily the molecules can adsorb to the foam, and will therefore determine how effective the separation of the surfactant from the solution will be. Longer-chain surfactants tend to associate into micelles at the solid-liquid surface. The concentration of the surfactant also plays a role in the percent removal of the surfactant.
Other
Some other factors that affect the effectiveness of foaming include the flow rate of the gas, the bubble size and distribution, the temperature of the solution, and the agitation of the solution. Detergents are known to affect foaming: they increase the ability of the solution to foam, increasing the amount of protein recovered in the foamate. Some detergents, such as cetyltrimethylammonium bromide (CTAB), act as stabilizers for the foam.
External links
Biosurfactant Foam Separation
Metal Foam Creation Video
Collapsing Metal Foam Video
References
Chemical processes | Continuous foam separation | Chemistry | 3,099 |
60,408 | https://en.wikipedia.org/wiki/HCL%20Notes | HCL Notes (formerly Lotus Notes then IBM Notes) is a proprietary collaborative software platform for Unix (AIX), IBM i, Windows, Linux, and macOS, sold by HCLTech. The client application is called Notes while the server component is branded HCL Domino.
HCL Notes provides business collaboration functions, such as email, calendars, to-do lists, contact management, discussion forums, file sharing, websites, instant messaging, blogs, document libraries, user directories, and custom applications. It can also be used with other HCL Domino applications and databases. IBM Notes 9 Social Edition removed integration with the office software package IBM Lotus Symphony, which had been integrated with the Lotus Notes client in versions 8.x.
Lotus Development Corporation originally developed "Lotus Notes" in 1989. IBM bought Lotus in 1995 and it became known as the Lotus Development division of IBM. On December 6, 2018, IBM announced that it was selling a number of software products to HCLSoftware for $1.8bn, including Notes and Domino. This acquisition was completed in July 2019.
Design
HCL Domino is a client-server cross-platform application runtime environment.
Domino provides email, calendars, instant messaging (with additional HCLSoftware voice- and video-conferencing and web-collaboration), discussions/forums, blogs, and an inbuilt personnel/user directory. In addition to these standard applications, an organization may use the Domino Designer development environment and other tools to develop additional integrated applications such as request approval / workflow and document management.
The Domino product consists of several components:
HCL Notes client application (since version 8, this is based on Eclipse)
HCL Notes client, either:
a rich client
a web client, HCL iNotes
a mobile email client, HCL Notes Traveler
HCL Verse client, either:
a web email client, Verse on Premises (VOP)
a mobile email client, Verse Mobile (for iOS and Android)
HCL Domino server
HCL Domino Administration Client
HCL Domino Designer (Eclipse-based integrated development environment) for creating client-server applications that run within the Notes framework
Domino competes with products from other companies such as Microsoft, Google, Zimbra and others. Because of its application development abilities, HCL Domino is often compared to products like Microsoft SharePoint. Databases in Domino can be replicated between servers, and between server and client, thereby giving clients offline capabilities.
Domino, a business application as well as a messaging server, is compatible with both Notes and web-browsers. Notes (and since IBM Domino 9, the HCAA) may be used to access any Domino application, such as discussion forums, document libraries, and numerous other applications. Notes resembles a web-browser in that it may run any compatible application that the user has permission for.
Domino provides applications that can be used to:
access, store and present information through a user interface
enforce security
replicate, that is, allow many different servers to contain the same information and have many users work with that data
The standard storage mechanism in Domino is a document-database format, the "Notes Storage Facility" (.nsf). The .nsf file will normally contain both an application design and its associated data. Domino can also access relational databases, either through an additional server called HCL Enterprise Integrator for Domino, through ODBC calls or through the use of XPages.
As Domino is an application runtime environment, email and calendars operate as applications within Notes, which HCL provides with the product. A Domino application-developer can change or completely replace that application. HCL has released the base templates as open source as well.
Programmers can develop applications for Domino in a variety of development languages including:
the Java programming language either directly or through XPages
LotusScript, a language resembling Visual Basic
the JavaScript programming language via the Domino AppDev Pack
The client supports a formula language as well as JavaScript. Software developers can build applications to run either within the Notes application runtime environment or through a web server for use in a web browser, although the interface would need to be developed separately unless XPages is used.
Use
Notes can be used for email, as a calendar, PIM, instant messaging, Web browsing, and other applications. Notes can access both local- and server-based applications and data.
Notes can function as an IMAP and POP email client with non-Domino mail servers. The system can retrieve recipient addresses from any LDAP server, including Active Directory, and includes a web browser, although it can be configured by a Domino Developer to launch a different web browser instead.
Features include group calendars and schedules, SMTP/MIME-based email, NNTP-based news support, and automatic HTML conversion of all documents by the Domino HTTP task.
Notes can be used with Sametime instant-messaging to allow users to see other users online and chat with one or more of them at the same time. Beginning with Release 6.5, this function has been freely available. Presence awareness is available in email and other HCL Domino applications for users in organizations that use both Notes and Sametime.
Since version 7, Notes has provided a Web services interface. Domino can be a Web server for HTML files; authentication of access to Domino databases or HTML files uses the Domino user directory and external systems such as Microsoft Active Directory.
A design client, Domino Designer, can allow the development of database applications consisting of forms (which allow users to create documents) and views (which display selected document fields in columns).
In addition to its role as a groupware system (email, calendaring, shared documents and discussions), HCL Notes and Domino can also construct "workflow"-type applications, particularly those which require approval processes and routing of data.
Since Release 5, server clustering has had the ability to provide geographic redundancy for servers.
Notes System Diagnostic (NSD) gathers information about the running of a Notes workstation or of a Domino server.
On October 10, 2018, IBM released IBM Domino v10.0 and IBM Notes 10.0. In December 2019, HCL released HCL Domino v11 and HCL Notes v11.
Overview
Client/server
Notes and Domino are client/server database environments. The server software is called Domino and the client software is Notes. Domino software can run on Windows, Unix, AIX, and IBM mid-range systems and can scale to tens of thousands of users per server. Different versions of the Domino server are supported on different levels of server operating systems; usually the latest server operating system is only officially supported by a version of HCL Domino released at about the same time as that OS.
Domino has security capabilities on a variety of levels. Authorizations can be granular, from roughly ten parameters that can be set at the database level down to the field level in specific records, with intermediate options in between. Users can also grant other users access to their personal calendar and email at more generic levels such as reader, editor, editor with delete, and calendar management. All of the security in Notes and Domino is independent of the server OS or Active Directory. Optionally, the Notes client can be configured to have the user use their Active Directory identity.
Data replication
The first release of Lotus Notes included a generalized replication facility. The generalized nature of this feature set it apart from predecessors like Usenet and continued to differentiate Lotus Notes.
Domino servers and Notes clients identify NSF files by their Replica IDs, and keep replicated files synchronized by bi-directionally exchanging data, metadata, and application logic and design. There are options available to define what meta-data replicates, or specifically exclude certain meta data from replicating. Replication between two servers, or between a client and a server, can occur over a network or a point-to-point modem connection. Replication between servers may occur at intervals according to a defined schedule, in near-real-time when triggered by data changes in server clusters, or when triggered by an administrator or program.
Creation of a local replica of an NSF file on the hard disk of an HCL Notes client enables the user to fully use Notes and Domino databases while working off-line. The client synchronizes any changes when client and server next connect. Local replicas are also sometimes maintained for use while connected to the network in order to reduce network latency. Replication between a Notes client and Domino server can run automatically according to a schedule, or manually in response to a user or programmatic request. Since Notes 6, local replicas maintain all security features programmed into the applications. Earlier releases of Notes did not always do so. Early releases also did not offer a way to encrypt NSF files, raising concerns that local replicas might expose too much confidential data on laptops or insecure home office computers, but more recent releases offer encryption, which is now the default setting for newly created local replicas.
Security
Lotus Notes was the first widely adopted software product to use public key cryptography for client–server and server–server authentication and for encryption of data. Until US laws regulating encryption were changed in 2000, IBM and Lotus were prohibited from exporting versions of Notes that supported symmetric encryption keys that were longer than 40 bits. In 1997, Lotus negotiated an agreement with the NSA that allowed export of a version that supported stronger keys with 64 bits, but 24 of the bits were encrypted with a special key and included in the message to provide a "workload reduction factor" for the NSA. This strengthened the protection for users of Notes outside the US against private-sector industrial espionage, but not against spying by the US government. This implementation was widely announced, but with some justification many people did consider it to be a backdoor. Some governments objected to being put at a disadvantage to the NSA, and as a result Lotus continued to support the 40-bit version for export to those countries.
Notes and Domino also uses a code-signature framework that controls the security context, runtime, and rights of custom code developed and introduced into the environment. Notes 5 introduced an execution control list (ECL) at the client level. The ECL allows or denies the execution of custom code based on the signature attached to it, preventing code from untrusted (and possibly malignant) sources from running. Notes and Domino 6 allowed client ECLs to be managed centrally by server administrators through the implementation of policies. Since release 4.5, the code signatures listed in properly configured ECLs prevent code from being executed by external sources, to avoid virus propagation through Notes/Domino environments. Administrators can centrally control whether each mailbox user can add exceptions to, and thus override, the ECL.
Database security
Access control lists (ACLs) control a user of server's level of access to that database. Only a user with Manager access can create or modify the ACL. Default entries in the ACL can be set when the Manager creates the database.
Roles, rather than user id, can determine access level.
Programming
Notes and Domino is a cross-platform, distributed document-oriented NoSQL database and messaging framework and rapid application development environment that includes pre-built applications like email, calendar, etc. This sets it apart from its major commercial competitors, such as Microsoft Exchange or Novell GroupWise, which are purpose-built applications for mail and calendaring that offer APIs for extensibility.
Domino databases are built using the Domino Designer client, available only for Microsoft Windows; standard user clients are available for Windows, Linux, and macOS. A key feature of Notes is that many replicas of the same database can exist at the same time on different servers and clients, across dissimilar platforms; the same storage architecture is used for both client and server replicas. Originally, replication in Notes happened at document (i.e., record) level. With release of Notes 4 in 1996, replication was changed so that it now occurs at field level.
A database is a Notes Storage Facility (.nsf) file, containing basic units of storage known as a "note". Every note has a UniqueID that is shared by all its replicas. Every replica also has a UniqueID that uniquely identifies it within any cluster of servers, a domain of servers, or even across domains belonging to many organizations that are all hosting replicas of the same database. Each note also stores its creation and modification dates, and one or more Items.
There are several classes of notes, including design notes and document notes. Design notes are created and modified with the Domino Designer client, and represent programmable elements, such as the GUI layout of forms for displaying and editing data, or formulas and scripts for manipulating data. Document notes represent user data, and are created and modified with the Notes client, via a web browser, via mail routing and delivery, or via programmed code.
Document notes can have parent-child relationships, but Notes should not be considered a hierarchical database in the classic sense of information management systems. Notes databases are also not relational, although there is a SQL driver that can be used with Notes, and it does have some features that can be used to develop applications that mimic relational features. Notes does not support atomic transactions, and its file locking is rudimentary. Notes is a document-oriented database (document-based, schema-less, loosely structured) with support for rich content and powerful indexing facilities. This structure closely mimics paper-based work flows that Notes is typically used to automate.
Items represent the content of a note. Every item has a name, a type, and may have some flags set. A note can have more than one item with the same name. Item types include Number, Number List, Text, Text List, Date-Time, Date-Time List, and Rich Text. Flags are used for managing attributes associated with the item, such as read or write security. Items in design notes represent the programmed elements of a database. For example, the layout of an entry form is stored in the rich text Body item within a form design note. This means that the design of the database can replicate to users' desktops just like the data itself, making it extremely easy to deploy updated applications.
Items in document notes represent user-entered or computed data. An item named "Form" in a document note can be used to bind a document to a form design note, which directs the Notes client to merge the content of the document note items with the GUI information and code represented in the given form design note for display and editing purposes. However, other methods can be used to override this binding of a document to a form note. The resulting loose binding of documents to design information is one of the cornerstones of the power of Notes. Traditional database developers used to working with rigidly enforced schemas, on the other hand, may consider the power of this feature to be a double-edged sword.
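A purely illustrative Python sketch of the note/item storage model described above (the structure and names are invented for clarity; this is not HCL's actual API or on-disk format):

from dataclasses import dataclass, field

@dataclass
class Item:
    name: str                                 # e.g. "Form" or "Body"; names need not be unique within a note
    type: str                                 # "Text", "Number List", "Rich Text", "Date-Time", ...
    flags: set = field(default_factory=set)   # attribute flags, e.g. read/write security
    values: list = field(default_factory=list)

@dataclass
class Note:
    unid: str                                 # UniqueID shared by all replicas of this note
    note_class: str                           # "document", or a design class such as "form"
    created: str
    modified: str
    items: list = field(default_factory=list)

# A document note bound to the "Memo" form via its "Form" item:
memo = Note(
    unid="A1B2C3D4E5F60718",
    note_class="document",
    created="2024-01-01",
    modified="2024-01-02",
    items=[
        Item("Form", "Text", values=["Memo"]),
        Item("Body", "Rich Text", values=["hello"]),
    ],
)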
Notes application development uses several programming languages. Formula and LotusScript are the two original ones. LotusScript is similar to, and may even be considered a specialized implementation of, Visual Basic, but with the addition of many native classes that model the Notes environment, whereas Formula is similar to Lotus 1-2-3 formula language but is unique to Notes.
Java was integrated into IBM Notes beginning with Release 4.5. With Release 5, Java support was greatly enhanced and expanded, and JavaScript was added. While LotusScript remains a primary tool in developing applications for the Lotus Notes client, Java and JavaScript are the primary tools for server-based processing, developing applications for browser access, and allowing browsers to emulate the functionality of the IBM Notes client. With XPages, the IBM Notes client can now natively process Java and JavaScript code, although applications development usually requires at least some code specific to only IBM Notes or only a browser.
As of version 6, Lotus established an XML programming interface in addition to the options already available. The Domino XML Language (DXL) provides XML representations of all data and design resources in the Notes model, allowing any XML processing tool to create and modify IBM Notes and Domino data.
Since Release 8.5, XPages were also integrated into IBM Notes.
External to the Notes application, HCL provides toolkits in C, C++, and Java to connect to the Domino database and perform a wide variety of tasks. The C toolkit is the most mature, and the C++ toolkit is an objectized version of the C toolkit, lacking many functions the C toolkit provides. The Java toolkit is the least mature of the three and can be used for basic application needs.
Database
IBM Notes includes a database management system but Notes files are different from relational or object databases because they are document-centric. Document-oriented databases such as Notes allow multiple values in items (fields), do not require a schema, come with built-in document-level access control, and store rich text data. IBM Domino 7 to 8.5.x supports the use of the IBM Db2 database as an alternative store for IBM Notes databases. This NSFDB2 feature, however, is now in maintenance mode with no further development planned. An IBM Notes database can be mapped to a relational database using tools like DECS, LEI, JDBCSql for Domino or NotesSQL.
Configuration
The HCL Domino server and the Domino client store their configuration in their own databases / application files (*.nsf). No relevant configuration settings are saved in the Windows Registry if the operating system is Windows. Some other configuration options (primarily the start configuration) are stored in the notes.ini file (there are currently over 2000 known options available).
Use as an email client
Notes is commonly deployed as an end-user email client in larger organizations.
When an organization employs an HCL Domino server, it usually also deploys the supplied Notes client, both for accessing the Notes email and calendaring application and for using document management and workflow applications. As Notes is a runtime environment, and the email and calendaring functions in Notes are simply an application provided by HCL, administrators are free to develop alternate email and calendaring applications. It is also possible to alter, amend or extend the HCL-supplied email and calendaring application.
The Domino server also supports POP3 and IMAP mail clients, and through an extension product (HCL mail support for Microsoft Outlook) supports native access for Microsoft Outlook clients.
HCL also provides iNotes (in Notes 6.5 renamed to "Domino Web Access" but in version 8.0 reverted to iNotes), to allow the use of email and calendaring features through web browsers on Windows, Mac and Linux, such as Internet Explorer and Firefox. There are several spam filtering programs available (including IBM Lotus Protector), and a rules engine allowing user-defined mail processing to be performed by the server.
Comparison with other email clients
Notes was designed as a collaborative application platform where email was just one of numerous applications that ran in the Notes client software. The Notes client was also designed to run on multiple platforms including Windows, OS/2, classic Mac OS, SCO Open Desktop UNIX, and Linux. These two factors have resulted in the user interface containing some differences from applications that only run on Windows. Furthermore, these differences have often remained in the product to retain backward compatibility with earlier releases, instead of conforming to updated Windows UI standards. The following are some of these differences.
Properties dialog boxes for formatting text, hyperlinks and other rich-text information can remain open after a user makes changes to selected text. This provides flexibility to select new text and apply other formatting without closing the dialog box, selecting new text and opening a new format dialog box. Almost all other Windows applications require the user to close the dialog box, select new text, then open a new dialog box for formatting/changes.
Properties dialog boxes also automatically recognize the type of text selected and display appropriate selections (for instance, a hyperlink properties box).
Users can format tables as tabbed interfaces as part of form design (for applications) or within mail messages (or in rich-text fields in applications). This provides users the ability to provide tab-style organization to documents, similar to popular tab navigation in most web portals, etc.
End-users can readily insert links to Notes applications, Notes views or other Notes documents into Notes documents.
Deleting a document (or email) will delete it from every folder in which it appears, since the folders simply contain links to the same back-end document. Some other email clients only delete the email from the current folder; if the email appears in other folders it is left alone, requiring the user to hunt through multiple folders in order to completely delete a message. In Notes, clicking on "Remove from Folder" will remove the document only from that folder leaving all other instances intact.
The All Documents and Sent "views" differ from other collections of documents known as "folders" and exhibit different behaviors. Specifically, mail cannot be dragged out of them, and so removed from those views; the email can only be "copied" from them. This is because these are views, and their membership indexes are maintained according to characteristics of the documents contained in them, rather than based on user interaction as is the case for a folder. This technical difference can be baffling to users, in environments where no training is given. All Documents contain all of the documents in a mailbox, no matter which folder it is in. The only way to remove something from All Documents is to delete it outright.
Lotus Notes 7 and older versions had more differences, which were removed from subsequent releases:
Users select a "New Memo" to send an email, rather than "New Mail" or "New Message". (Notes 8 calls the command "New Message")
To select multiple documents in a Notes view, one drags one's mouse next to the documents to select, rather than using Ctrl+single click. (Notes 8 uses standard keypress conventions.)
The searching function offers a "phrase search", rather than the more common "or search", and Notes requires users to spell out Boolean conditions in search-strings. As a result, users must search for "delete AND folder" in order to find help text that contains the phrase "delete a folder". Searching for "delete folder" does not yield the desired result. (Notes 8 uses or-search conventions.)
Lotus Notes 8.0 (released in 2007) became the first version to employ a dedicated user-experience team, resulting in changes in the IBM Notes client experience in the primary and new notes user interface. This new interface runs in the open source Eclipse Framework, which is a project started by IBM, opening up more application development opportunities through the use of Eclipse plug-ins. The new interface provides many new user interface features and the ability to include user-selected applications/applets in small panes in the interface. Lotus Notes 8.0 also included a new email interface / design to match the new Lotus Notes 8.0 eclipse based interface. Eclipse is a Java framework and allows IBM to port Notes to other platforms rapidly. An issue with Eclipse and therefore Notes 8.0 is the applications start-up and user-interaction speed. Lotus Notes 8.5 sped up the application and the increase in general specification of PCs means this is less of an issue.
IBM Notes 9 continued the evolution of the user interface to more closely align with modern application interfaces found in much commercial packaged or web-based software. The software still does not have an auto-correct option, or even the ability, to reverse accidental use of caps lock.
Domino Designer now runs on the Eclipse platform and offers many new development environments and tools, such as XPages.
For lower-spec PCs, a new version of the old interface is still provided; being the old interface, however, it lacks many of the new features, and the email user interface reverts to the Notes 7.x style.
This new user experience builds on Notes 6.5 (released in 2003), which upgraded the email client, previously regarded by many as the product's Achilles heel. Features added at that time included:
drag and drop of folders
replication of unread marks between servers
follow-up flags
reply and forward indicators on emails
ability to edit an attachment and save the changes back to the email
Reception
Publications such as The Guardian in 2006 have criticized earlier versions of Lotus Notes for having an "unintuitive [user] interface" and cite widespread dissatisfaction with the usability of the client software. The Guardian indicated that Notes has not necessarily suffered as a result of this dissatisfaction due to the fact that "the people who choose [enterprise software] tend not to be the ones who use it."
Earlier versions of Notes have also been criticized for violating an important usability best practice: a consistent UI is often better than a custom alternative. Software written for a particular operating system should follow that OS's user-interface style guide; not following it can confuse users. A notable example is the F5 keyboard shortcut, which is used to refresh window contents in Microsoft Windows. Pressing F5 in Lotus Notes before release 8.0 caused it to lock the screen. Since this was a major point of criticism, it was changed in release 8.0. Old versions also did not support proportional scrollbars (which give the user an idea of how long the document is, relative to the portion being viewed); proportional scroll bars were only introduced in Notes 8.
Older versions of Notes also suffered from similar user-interaction choices, many of which were corrected in subsequent releases. One example, corrected in Release 8.5: in earlier versions the out-of-office agent needed to be manually enabled when leaving and disabled when coming back, even if start and end dates had been set. As of Release 8.5 the out-of-office notification shuts off automatically, with no need for a manual disable.
Unlike some other e-mail client software programs, IBM Notes developers made a choice to not allow individual users to determine whether a return receipt is sent when they open an e-mail; rather, that option is configured at the server level. IBM developers believe "Allowing individual cancellation of return receipt violates the intent of a return receipt function within an organization". So, depending on system settings, users will have no choice in return receipts going back to spammers or other senders of unwanted e-mail. This has led tech sites to publish ways to get around this feature of Notes. For IBM Notes 9.0 and IBM iNotes 9.0, the IBM Domino server's .INI file can now contain an entry to control return receipt in a manner that's more aligned with community expectations (IBM Notes 9 Product Documentation).
When Notes crashes, some processes may continue running and prevent the application from being restarted until they are killed.
Related software
Related IBM Lotus products
Over the 30-year history of IBM Notes, Lotus Development Corporation and later IBM have developed many other software products that are based on, or integrated with IBM Notes. The most prominent of these is the IBM Lotus Domino server software, which was originally known as the Lotus Notes Server and gained a separate name with the release of version 4.5. The server platform also became the foundation for products such as IBM Lotus Quickr for Domino, for document management, and IBM Sametime for instant messaging, audio and video communication, and web conferencing, and with Release 8.5, IBM Connections.
In early releases of IBM Notes, there was considerable emphasis on client-side integration with the IBM Lotus SmartSuite environment. With Microsoft's increasing predominance in office productivity software, the desktop integration focus switched for a time to Microsoft Office. With the release of version 8.0 in 2007, based on the Eclipse framework, IBM again added integration with its own office-productivity suite, the OpenOffice.org-derived IBM Lotus Symphony. IBM Lotus Expeditor is a framework for developing Eclipse-based applications.
Other IBM products and technologies have also been built to integrate with IBM Notes. For mobile-device synchronization, this previously included the client-side IBM Lotus Easysync Pro product (no longer in development) and IBM Notes Traveler, a newer no-charge server-side add-on for mail, calendar and contact sync. Recent additions to IBM's portfolio are two IBM Lotus Protector products for mail security and encryption, which have been built to integrate with IBM Notes.
Related software from other vendors
With a long market history and large installed base, Notes and Domino have spawned a large third-party software ecosystem. Such products can be divided into four broad, and somewhat overlapping classes:
Notes and Domino applications are software programs written in the form of one or more Notes databases, and often supplied as NTF templates. This type of software typically is focused on providing business benefit from Notes' core collaboration, workflow and messaging capabilities. Examples include customer relationship management (CRM), human resources, and project tracking systems. Some applications of this sort may offer a browser interface in addition to Notes client access. The code within these programs typically uses the same languages available to an in-house Domino developer: Notes formula language, LotusScript, Java and JavaScript.
Notes and Domino add-ons, tools and extensions are generally executable programs written in C, C++ or another compiled language that are designed specifically to integrate with Notes and Domino. This class of software may include both client- and server-side executable components. In some cases, Notes databases may be used for configuration and reporting. Since the advent of the Eclipse-based Notes 8 Standard client, client-side add-ons may also include Eclipse plug-ins and XML-based widgets. The typical role for this type of software is to support or extend core Notes functionality. Examples include spam and anti-virus products, server administration and monitoring tools, messaging and storage management products, policy-based tools, data synchronization tools and developer tools.
Notes and Domino-aware add-ins and agents are also executable programs, but they are designed to extend the reach of a general networked software product to Notes and Domino data. This class includes server and client backup software, anti-spam and anti-virus products, and e-discovery and archiving systems. It also includes add-ins to integrate Notes with third-party offerings such as the Cisco WebEx conferencing service or the Salesforce.com CRM platform.
History
Notes has a history spanning more than 30 years. Its chief inspiration was PLATO Notes, created by David R. Woolley at the University of Illinois in 1973. In today's terminology, PLATO Notes supported user-created discussion groups, and it was part of the foundation for an online community which thrived for more than 20 years on the PLATO system. Ray Ozzie worked with PLATO while attending the University of Illinois in the 1970s. When PC network technology began to emerge, Ozzie made a deal with Mitch Kapor, the founder of Lotus Development Corporation, that resulted in the formation of Iris Associates in 1984 to develop products that would combine the capabilities of PCs with the collaborative tools pioneered in PLATO. The agreement put control of product development under Ozzie and Iris, and sales and marketing under Lotus. In 1994, after the release and marketplace success of Notes R3, Lotus purchased Iris. In 1995 IBM purchased Lotus.
In 2008, IBM released XPages technology, based on JavaServer Faces. This allows Domino applications to be better surfaced to browser clients, though the UX and business logic must be completely rewritten. Previously, Domino applications could be accessed through browsers, but required extensive web specific modifications to get full functionality in browsers. XPages also gave the application new capabilities that are not possible with the classic Notes client. The IBM Domino 9 Social Edition included the Notes Browser Plugin, which would surface Notes applications through a minified version of the rich desktop client contained in a browser tab.
Branding
Prior to release 4.5, the Lotus Notes branding encompassed both the client and server applications. In 1996, Lotus released an HTTP server add-on for the Notes 4 server called "Domino". This add-on allowed Notes documents to be rendered as web pages in real time. Later that year, the Domino web server was integrated into release 4.5 of the core Notes server and the entire server program was re-branded, taking on the name "Domino". Only the client program officially retained the "Lotus Notes" name.
In November 2012, IBM announced it would be dropping the Lotus brand and moving forward with the IBM brand only to identify products, including Notes and Domino. On October 9, 2018, IBM announced the availability of the latest version of the client and server software.
In 2019, Domino and Notes became enterprise software products managed under HCLSoftware.
Release history
21st century
IBM donated parts of the IBM Notes and Domino code to OpenOffice.org on September 12, 2007 and since 2008 has been regularly donating code to OpenNTF.org.
Despite repeated predictions of the decline or impending demise of IBM Notes and Domino, such as Forbes magazine's 1998 "The decline and fall of Lotus", the installed base of Lotus Notes has increased from an estimated 42 million seats in September 1998 to approximately 140 million cumulative licenses sold through 2008. Once IBM Workplace was discontinued in 2006, speculation about dropping Notes was rendered moot. Moreover, IBM introduced iNotes for iPhone two years later.
IBM contributed some of the code it had developed for the integration of the OpenOffice.org suite into Notes 8 to the project. IBM also packaged its version of OpenOffice.org for free distribution as IBM Lotus Symphony.
IBM Notes and Domino 9 Social Edition shipped on March 21, 2013. Changes include significantly updated user interface, near-parity of IBM Notes and IBM iNotes functionality, the IBM Notes Browser Plugin, new XPages controls added to IBM Domino, refreshed IBM Domino Designer user interface, added support for To Dos on Android mobile devices, and additional server functionality as detailed in the Announcement Letter.
In late 2016, IBM announced that there would not be a Notes 9.0.2 release, but 9.0.1 would be supported until at least 2021. In the same presentation IBM also stated that their internal users had been migrated away from Notes and onto the IBM Verse client.
On October 25, 2017, IBM announced a plan to deliver a Domino V10 family update sometime in 2018. The new version would be built in partnership with HCLTech. IBM's development and support team responsible for these products moved to HCL; however, marketing and sales continue to be IBM-led. Product strategy is shared between IBM and HCL. As part of the announcement, IBM indicated that there is no formal end to product support planned.
On October 9, 2018, IBM announced IBM Domino 10.0 and IBM Notes 10.0 in Frankfurt, Germany, and made them available to download on October 10, 2018.
See also
List of IBM products
IBM Collaboration Solutions (formerly Lotus) Software division
Comparison of email clients
IBM Lotus Domino Web Access
Comparison of feed aggregators
Lotus Multi-Byte Character Set (LMBCS)
NotesPeek
References
HCL Domino
Notes
1989 software
Lotus Notes
Lotus Notes
Lotus Notes
Lotus Notes
Lotus Notes
Proprietary database management systems
Document-oriented databases
NoSQL
Proprietary commercial software for Linux
Email systems
Divested IBM products
2019 mergers and acquisitions | HCL Notes | Technology | 7,308 |
62,605,770 | https://en.wikipedia.org/wiki/NGC%20701 | NGC 701 is a spiral galaxy with a high star formation rate in the constellation Cetus. It is estimated to be 86 million light years from the Milky Way and has a diameter of approximately 65,000 light years. The object was discovered on January 10, 1785 by the German-British astronomer William Herschel.
References
External links
701
Spiral galaxies
Cetus
006826 | NGC 701 | Astronomy | 78 |
40,134,031 | https://en.wikipedia.org/wiki/Mars-Aster | Mars-Aster was a proposed Russian mission for a Mars rover. It was part of exploration plans outlined by the Space Council of the Russian Academy of Sciences in 1997. The project was delayed in the mid-2000s in favor of Phobos-Grunt and Luna-Glob, and it has not been approved since then.
References
Cancelled spacecraft
Missions to Mars
Russian space probes | Mars-Aster | Astronomy | 79 |
31,857,185 | https://en.wikipedia.org/wiki/Shaft%20voltage | Shaft voltage occurs in electric motors and generators due to leakage, induction, or capacitive coupling with the windings of the motor. It can occur in motors powered by variable-frequency drives, as often used in heating, ventilation, air conditioning and refrigeration systems. DC machines may have leakage current from the armature windings that energizes the shaft. Currents due to shaft voltage causes deterioration of motor bearings, but can be prevented with a grounding brush on the shaft, grounding of the motor frame, insulation of the bearing supports, or shielding.
Shaft voltage can be induced by non-symmetrical magnetic fields of the motor (or generator) itself. External sources of shaft voltage include other coupled machines, and electrostatic charging due to rubber belts rubbing on drive pulleys.
Every rotor has some degree of capacitive coupling to the motor's electrical windings, but the effective inline capacitor acts as a high-pass filter, so the coupling is often weak at 50–60 Hz line frequency. But many variable-frequency drives (VFDs) induce significant voltage onto the shaft of the driven motor, because of the kilohertz switching of the insulated gate bipolar transistors (IGBTs), which produce the pulse-width modulation used to control the motor. The presence of high frequency ground currents can cause sparks, arcing and electrical shocks and can damage bearings.
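The high-pass behavior described above can be illustrated by comparing the capacitive reactance, X_c = 1/(2*pi*f*C), at line frequency and at a typical switching frequency. A minimal Python sketch; the 1 nF stray capacitance is an assumed, illustrative value rather than a figure for any particular motor:

```python
import math

def reactance_ohms(freq_hz, capacitance_f):
    """Capacitive reactance X_c = 1 / (2*pi*f*C)."""
    return 1.0 / (2.0 * math.pi * freq_hz * capacitance_f)

stray_c = 1e-9  # assumed 1 nF rotor-to-winding coupling capacitance
for f in (60.0, 10_000.0):  # mains frequency vs a typical IGBT switching frequency
    print(f"{f:>8.0f} Hz: {reactance_ohms(f, stray_c):>12,.0f} ohm")
```

At 10 kHz the coupling impedance is more than two orders of magnitude lower than at 60 Hz, which is why PWM switching couples far more energy onto the shaft than mains-frequency excitation does.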
Counter-measures
Techniques used to minimise this problem include: insulation, alternate discharge paths, Faraday shield, insulated bearings, ceramic bearings, grounding brush and shaft grounding ring.
Faraday shield
An electrostatic shielded induction motor (ESIM) is one approach to the shaft-voltage problem, as the insulation reduces voltage levels below the dielectric breakdown. This effectively stops bearing degradation and offers one solution to accelerated bearing wear caused by fluting, induced by pulsewidth modulated (PWM) inverters.
Grounding brush
Grounding the shaft by installing a grounding brush device on either the non-drive end or drive end of a VFD electric motor provides an alternate low-impedance path from the motor shaft to the motor case. This method channels the current away from the bearings. It significantly reduces shaft voltage and therefore bearing current by not allowing voltage to build up on the rotor.
Shaft grounding ring
A shaft grounding ring is installed around the motor shaft and creates a low impedance pathway for current to flow back to the motor frame and to ground. Various styles of rings exist such as those containing microfilaments making direct contact with the shaft or rings that clamp onto the shaft with a carbon brush riding on the ring (not directly on the shaft).
Insulated bearings
Insulated bearings eliminate the path to ground through the bearing for current to flow. However, installing insulated bearings does not eliminate the shaft voltage, which will still find the lowest impedance path to ground. This can potentially cause a problem if the path happens to be through the driven load or through some other component.
Shielded cable
High frequency grounding can be significantly improved by installing shielded cable with an extremely low impedance path between the VFD and the motor. One popular cable type is continuous corrugated aluminum sheath cable.
See also
Stray voltage
References
External links
A Unique System for Reducing High Frequency Stray Noise and Transient Common Mode Ground Currents to Zero, While Enhancing Other Ground Issues Meeting Notices and Rule Changes from Electrical Manufacturing and Coil Winding
Electric motors
Electric power
Electrical safety
Electrical wiring
Power cables | Shaft voltage | Physics,Technology,Engineering | 715 |
21,795,029 | https://en.wikipedia.org/wiki/Retrobright | Retrobright (stylized as retr0bright or Retrobrite) is a hydrogen peroxide-based process for removing yellowing from ABS plastics.
Yellowing in ABS plastic occurs when it is exposed to UV light or excessive heat, which causes photo-oxidation of polymers that breaks polymer chains and causes the plastic to yellow and become brittle.
History
One method of reversing the yellowed discoloration was first discovered in 2007 in a German retrocomputing forum, before spreading to an English blog where it was further detailed. The process has been continually refined since.
Composition
Retrobright consists of hydrogen peroxide, a small amount of the "active oxygen" laundry booster TAED as a catalyst, and a source of UV.
The optimum mixture and conditions for reversing yellowing of plastics are:
A hydrogen peroxide solution. Hydrogen peroxide-based hair bleaching creams available at beauty supply stores can also be used, and are viscous, allowing them to be applied with less waste (especially to large pieces such as computer panels or monitors). The cream must be carefully applied and wrapped evenly with plastic wrap to avoid streaks in the final product.
Approximately 1 ml per 3 liters (1 part in 3000 by volume, or alternatively a teaspoonful per US gallon) of tetraacetylethylenediamine (TAED)-based laundry booster (concentrations of TAED vary); a small dosing sketch follows this list.
A source of ultraviolet light, from sunlight or a UV lamp.
Xanthan gum or arrowroot can be added to the solution, creating an easier-to-apply gel.
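For scaling the 1-in-3000 TAED ratio quoted in the list above to other batch sizes, a trivial Python helper (the function name is invented for illustration):

```python
def taed_dose_ml(solution_volume_l):
    """TAED laundry booster dose: roughly 1 part in 3000 by volume."""
    return solution_volume_l * 1000.0 / 3000.0

print(taed_dose_ml(3.0))   # -> 1.0 ml for a 3 litre batch
print(taed_dose_ml(0.5))   # -> ~0.17 ml for a half-litre batch
```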
Alternatives
Sodium percarbonate may also be used by dissolving it in water and following the usual steps for hydrogen peroxide, as it is sodium carbonate and hydrogen peroxide in a crystalline form.
Ozone gas can also be used for retrobrighting, as long as an ozone generator, a suitable container of sufficient size and a source of UV are available, but can take longer than other methods.
A simpler but slower process involving merely exposure of the yellowed plastic to bright sunlight has been described, variously called 'Sunbrighting' or 'Lightbrighting'. This has both empirical evidence of effectiveness and the theoretical backing of some published scientific literature, which emphasises exposure to strong visible light while minimising ultraviolet exposure.
Effectiveness
The long-term effectiveness of these techniques is unclear. Some have discovered the yellowing reappears, and there are concerns that the process weakens and only bleaches the already damaged plastic.
Similar processes
The usage has also expanded to other retro restoration applications, such as classic and collectible sneaker restoration.
References
Cleaning products
Plastics
Hacker culture | Retrobright | Physics,Chemistry | 539 |
7,760,747 | https://en.wikipedia.org/wiki/Theoretical%20plate | A theoretical plate in many separation processes is a hypothetical zone or stage in which two phases, such as the liquid and vapor phases of a substance, establish an equilibrium with each other. Such equilibrium stages may also be referred to as an equilibrium stage, ideal stage, or a theoretical tray. The performance of many separation processes depends on having series of equilibrium stages and is enhanced by providing more such stages. In other words, having more theoretical plates increases the efficiency of the separation process be it either a distillation, absorption, chromatographic, adsorption or similar process.
Applications
The concept of theoretical plates and trays or equilibrium stages is used in the design of many different types of separation.
Distillation columns
The concept of theoretical plates in designing distillation processes has been discussed in many reference texts. Any physical device that provides good contact between the vapor and liquid phases present in industrial-scale distillation columns or laboratory-scale glassware distillation columns constitutes a "plate" or "tray". Since an actual, physical plate can never be a 100% efficient equilibrium stage, the number of actual plates is more than the required theoretical plates.
N_a = N_t / E

where N_a is the number of actual, physical plates or trays, N_t is the number of theoretical plates or trays, and E is the plate or tray efficiency.
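As a quick worked example of this relation (a sketch, not a design calculation):

```python
import math

def actual_trays(n_theoretical, tray_efficiency):
    """N_a = N_t / E, rounded up to a whole physical tray."""
    return math.ceil(n_theoretical / tray_efficiency)

# 20 theoretical stages at an overall tray efficiency of 70 %:
print(actual_trays(20, 0.70))  # -> 29 physical trays
```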
So-called bubble-cap or valve-cap trays are examples of the vapor and liquid contact devices used in industrial distillation columns. Another example of vapor and liquid contact devices are the spikes in laboratory Vigreux fractionating columns.
The trays or plates used in industrial distillation columns are fabricated of circular steel plates and usually installed inside the column at intervals of about 60 to 75 cm (24 to 30 inches) up the height of the column. That spacing is chosen primarily for ease of installation and ease of access for future repair or maintenance.
An example of a very simple tray is a perforated tray. The desired contacting between vapor and liquid occurs as the vapor, flowing upwards through the perforations, comes into contact with the liquid flowing downwards through the perforations. In current modern practice, as shown in the adjacent diagram, better contacting is achieved by installing bubble-caps or valve caps at each perforation to promote the formation of vapor bubbles flowing through a thin layer of liquid maintained by a weir on each tray.
To design a distillation unit or a similar chemical process, the number of theoretical trays or plates (that is, hypothetical equilibrium stages), N_t, required in the process should be determined, taking into account a likely range of feedstock composition and the desired degree of separation of the components in the output fractions. In industrial continuous fractionating columns, N_t is determined by starting at either the top or bottom of the column and calculating material balances, heat balances and equilibrium flash vaporizations for each of the succession of equilibrium stages until the desired end product composition is achieved. The calculation process requires the availability of a great deal of vapor–liquid equilibrium data for the components present in the distillation feed, and the calculation procedure is very complex.
In an industrial distillation column, the N_t required to achieve a given separation also depends upon the amount of reflux used. Using more reflux decreases the number of plates required and using less reflux increases the number of plates required. Hence, the calculation of N_t is usually repeated at various reflux rates. N_t is then divided by the tray efficiency, E, to determine the actual number of trays or physical plates, N_a, needed in the separating column. The final design choice of the number of trays to be installed in an industrial distillation column is then selected based upon an economic balance between the cost of additional trays and the cost of using a higher reflux rate.
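A greatly simplified, illustrative version of this stage-stepping procedure can be sketched for a binary mixture using the McCabe–Thiele construction (listed under See also), assuming constant relative volatility, a saturated-liquid feed and a total condenser. All numbers below are invented for illustration; a real design would use full vapor–liquid equilibrium data as described above:

```python
def equilibrium_x(y, alpha):
    """Liquid composition in equilibrium with vapor composition y (constant alpha)."""
    return y / (alpha - (alpha - 1.0) * y)

def theoretical_stages(alpha, reflux, z_feed, x_dist, x_bot):
    def rectifying(x):   # operating line above the feed
        return reflux / (reflux + 1.0) * x + x_dist / (reflux + 1.0)
    # For a saturated-liquid feed the q-line is vertical at z_feed; the
    # stripping line joins (x_bot, x_bot) to the rectifying line there.
    y_q = rectifying(z_feed)
    slope = (y_q - x_bot) / (z_feed - x_bot)
    def stripping(x):    # operating line below the feed
        return x_bot + slope * (x - x_bot)
    stages, x, y = 0, x_dist, x_dist         # start at the distillate point
    while x > x_bot and stages < 100:
        x = equilibrium_x(y, alpha)          # horizontal step to the equilibrium curve
        stages += 1                          # one theoretical stage counted
        y = rectifying(x) if x > z_feed else stripping(x)  # vertical step down
    return stages                            # count includes the reboiler

# More reflux means fewer stages; less reflux means more stages:
for R in (2.0, 3.0, 5.0):
    print(R, theoretical_stages(alpha=2.5, reflux=R, z_feed=0.5,
                                x_dist=0.95, x_bot=0.05))
```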
There is a very important distinction between the theoretical plate terminology used in discussing conventional distillation trays and the theoretical plate terminology used in the discussions below of packed bed distillation or absorption or in chromatography or other applications. The theoretical plate in conventional distillation trays has no "height". It is simply a hypothetical equilibrium stage. However, the theoretical plate in packed beds, chromatography and other applications is defined as having a height.
The empirical formula known as Van Winkle's Correlation can be used to predict the Murphree plate efficiency for distillation columns separating binary systems.
Distillation and absorption packed beds
Distillation and absorption separation processes using packed beds for vapor and liquid contacting have an equivalent concept referred to as the plate height or the height equivalent to a theoretical plate (HETP). HETP arises from the same concept of equilibrium stages as does the theoretical plate and is numerically equal to the absorption bed length divided by the number of theoretical plates in the absorption bed (and in practice is measured in this way).
N_t = H / HETP

where N_t is the number of theoretical plates (also called the "plate count"), H is the total bed height and HETP is the height equivalent to a theoretical plate.
The material in packed beds can either be random dumped packing (1-3" wide) such as Raschig rings or structured sheet metal. Liquids tend to wet the surface of the packing and the vapors contact the wetted surface, where mass transfer occurs.
Chromatographic processes
The theoretical plate concept was also adapted for chromatographic processes by Martin and Synge. The IUPAC's Gold Book provides a definition of the number of theoretical plates in a chromatography column.
The same equation applies in chromatography processes as for the packed bed processes, namely N_t = H / HETP.
In packed column chromatography, the HETP may also be calculated with the Van Deemter equation.
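A sketch of the Van Deemter relation, HETP = A + B/u + C*u, with invented coefficients; minimising with respect to the linear velocity u gives u_opt = sqrt(B/C) and HETP_min = A + 2*sqrt(B*C):

```python
import math

def hetp(u, A, B, C):
    """Van Deemter: eddy diffusion A, longitudinal diffusion B/u, mass transfer C*u."""
    return A + B / u + C * u

A, B, C = 1.0e-3, 4.0e-4, 5.0e-3             # illustrative coefficients (SI units)
u_opt = math.sqrt(B / C)                     # velocity that minimises HETP
hetp_min = A + 2.0 * math.sqrt(B * C)        # equals hetp(u_opt, A, B, C)
column_length = 0.25                         # assumed 0.25 m packed column
print(u_opt, hetp_min, column_length / hetp_min)  # -> u_opt, HETP_min, plate count
```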
In capillary column chromatography HETP is given by the Golay equation.
Other applications
The concept of theoretical plates or trays applies to other processes as well, such as capillary electrophoresis and some types of adsorption.
See also
Batch distillation
Continuous distillation
Extractive distillation
Fenske equation
Fractional distillation
McCabe–Thiele method
References
External links
Distillation, An Introduction by Ming Tham, Newcastle University, UK
Distillation Theory by Ivar J. Halvorsen and Sigurd Skogestad, Norwegian University of Science and Technology, Norway
Separation processes
Unit operations
Chemical engineering
Chromatography
Distillation | Theoretical plate | Chemistry,Engineering | 1,305 |
24,154,487 | https://en.wikipedia.org/wiki/C16H24N2O | {{DISPLAYTITLE:C16H24N2O}}
The molecular formula C16H24N2O (molar mass: 260.38 g/mol) may refer to:
4-HO-DiPT, a synthetic hallucinogen
5-HO-DiPT
4-HO-DPT
4-HO-EiBT
5-MeO-EiPT
5-MeO-EPT
Oxymetazoline
Ropinirole, a non-ergoline dopamine agonist | C16H24N2O | Chemistry | 112 |
23,920,792 | https://en.wikipedia.org/wiki/C8H13NO2 | {{DISPLAYTITLE:C8H13NO2}}
The molecular formula C8H13NO2 (molar mass: 155.19 g/mol, exact mass: 155.0946 u) may refer to:
Arecoline
Bemegride
Scopine
Retronecine
Molecular formulas | C8H13NO2 | Physics,Chemistry | 66 |
12,837,452 | https://en.wikipedia.org/wiki/Ilium/Olympos | Ilium/Olympos is a series of two science fiction novels by Dan Simmons. The events are set in motion by beings who appear to be ancient Greek gods. Like Simmons' earlier series, the Hyperion Cantos, it is a form of "literary science fiction"; it relies heavily on intertextuality, in this case with Homer and Shakespeare as well as references to Marcel Proust's À la recherche du temps perdu (or In Search of Lost Time) and Vladimir Nabokov's novel Ada or Ardor: A Family Chronicle.
As with most of his science fiction and in particular with Hyperion, Ilium demonstrates that Simmons writes in the soft science fiction tradition of Ray Bradbury and Ursula K. Le Guin. Ilium is based on a literary approach similar to most of Bradbury's work, but describes larger segments of society and broader historical events. As in Le Guin's Hainish series, Simmons places the action of Ilium in a vast and complex universe made of relatively plausible technological and scientific elements. Yet Ilium is different from any of the works of Bradbury and Le Guin in its exploration of the very far future of humanity, and in the extra-human or post-human themes associated with this. It deals with the concept of the technological singularity, in which technological change begins to occur beyond humanity's ability to predict or comprehend it. The first book, Ilium, received the Locus Award for Best Science Fiction Novel in 2004.
Plot introduction
The series centers on three main character groups: the scholic Hockenberry, Helen, and the Greek and Trojan warriors from the Iliad; Daeman, Harman, Ada and the other humans of Earth; and the moravecs, specifically Mahnmut the Europan and Orphu of Io. The novels are written in first-person, present-tense narration when centered on Hockenberry's character, but feature third-person, past-tense narrative in all other instances. Much as in Simmons' Hyperion, where the characters' stories are told over the course of the novels within a framing narrative, the three groups' stories unfold over the course of the novels and do not begin to converge until the end.
Characters in Ilium/Olympos
Old-style humans
The "old-style" humans of Earth exist at what the post-humans claimed would be a stable, minimum herd population of one million. In reality, their numbers are much smaller than that, around 300,000, because each woman is allowed to have only one child. Their DNA incorporates moth genetics which allows sperm-storage and the choice of father-sperm years after sexual intercourse has actually occurred. This reproductive method causes many children not to know their father, as well as helps to break incest taboos in that the firmary, which controls the fertilization, protects against a child of close relatives being born. The old style humans never appear any older than about 40 since every twenty years they are physically rejuvenated.
Ada: the owner of Ardis Hall and Harman's lover. She is just past her first twenty. She hosts Odysseus/Noman for his time on Earth.
Daeman: a pudgy man approaching his second twenty. Both a ladies' man and a lepidopterist. Also terrified of dinosaurs. At the start of Ilium he is a pudgy, immature man-child who wishes to have sex with his cousin (as incest taboos have all but ceased to exist in his society), Ada (whom he had a brief relationship with when she was a teenager), but by the end of the tale he is a mature leader who is very fit and strong. His mother's name is Marina.
Hannah: Ada's younger friend. Both inventor and artist. Develops a romantic interest in Odysseus.
Harman: Ada's lover. 99 years old. Only human with the ability to read, other than Savi.
Savi: the Wandering Jew. The only old-style human not gathered up in the final fax 1,400 years earlier. She has survived the years by spending most of them sleeping in cryo crèches and spending only a few months awake at a time every few decades.
Moravecs
Named after the roboticist Hans Moravec, they are autonomous, sentient, self-evolving biomechanical organisms that dwell on the Jovian moons. They were seeded throughout the outer Solar System by humans during the Lost Age. Most moravecs are self-described humanists and study Lost Age culture, including literature, television programs and movies.
Mahnmut the Europan: explorer of Europa's oceans and skipper of the submersible, The Dark Lady. An amateur Shakespearean scholar.
Orphu of Io: a heavily armored, 1,200-year-old hard-vac moravec that is shaped not unlike a crab. Weighing eight tons and measuring six meters in length, Orphu works in the sulfur-torus of Io, and is a Proust enthusiast.
rockvecs: a subgroup of the moravecs, the rockvecs live on the Asteroid Belt and are more adapted for combat and hostile environments than the moravecs.
Scholics
Dead scholars from previous centuries that were rebuilt by the Olympian gods from their DNA. Their duties are to observe the Trojan War and report the discrepancies that occur between it and Homer's Iliad.
Dr. Thomas Hockenberry: Ph.D. in classical studies and a Homeric scholar. Died of cancer in 2006 and is resurrected by the Olympian Gods as a scholic. Lover of Helen of Troy. He is the oldest surviving scholic.
Dr. Keith Nightenhelser: Hockenberry's oldest friend and a fellow scholic. (The real Nightenhelser was Simmons' roommate at Wabash College and is currently a professor at DePauw University.)
Others
Achaeans and Trojans: the heroes and minor characters are drawn from Homer's epics, as well as the works of Virgil, Proclus, Pindar, Aeschylus, Euripides, and classical Greek mythology.
Ariel: a character from The Tempest and the avatar of the evolved, self-aware biosphere. Using locks of Harman's hair, Daeman's hair, and her own hair, Savi makes a deal with Ariel in order that they might pass without being attacked by the calibani.
Caliban: a monster, son of Sycorax and servant of Prospero, whom John Clute describes as "a cross between Gollum and the alien of Alien." He is cloned to create the calibani, weaker clones of himself. Caliban speaks in strange speech patterns, with much of his dialogue taken from the dramatic monologue "Caliban upon Setebos" by Robert Browning. Simmons chooses not to portray Caliban as the "oppressed but noble native soul straining under the yoke of capitalist-colonial-imperialism" that current interpretations employ to portray him, which he views as "a weak, pale, politically correct shadow of the slithery monstrosity that made audiences shiver in Shakespeare's day ... Shakespeare and his audiences understood that Caliban was a monster, and a really monstrous monster, ready to rape and impregnate Prospero's lovely daughter at the slightest opportunity."
Odysseus: Odysseus after his Odyssey, ten years older than the Odysseus who fights in the Trojan War. In Olympos, he adopts the name Noman, which is a reference to the name Odysseus gives to Polyphemus the Cyclops on their encounter, in Greek, Outis (), meaning "no man" or "nobody". He is a different entity than the Odysseus on Mars.
Olympian Gods: former post-humans who were transformed into gods by Prospero's technology. They do not remember the science behind their technology, save for Zeus and Hephaestus, and they are described both as preliterate and post-literate, for which reason they enlist the services of Thomas Hockenberry and other scholics. They dwell on Olympus Mons on Mars and use quantum teleportation in order to get to the recreation of Troy on an alternate Earth. Though the events of the Trojan War are being recreated with the knowledge of Homer's Iliad, the only ones who know its outcome are the scholics and Zeus as Zeus has forbidden the other gods from knowing.
post-humans: former humans who enhanced themselves far beyond the normal bounds of humanity and dwelt in orbital rings above the Earth until Prospero turned some into Olympian gods. The others were slaughtered by Caliban. They had no need of bodies, but when they took on human form they only took on the shape of women.
Prospero: a character from The Tempest who is the avatar of the self-aware, post-Internet logosphere, a reference to Vladimir Vernadsky's idea of the noosphere.
Setebos: Sycorax and Caliban's god. The god is described as "many-handed as a cuttlefish" in reference to "Caliban upon Setebos" by Robert Browning and is described by Prospero as being an "arbitrary god of great power, a September eleven god, an Auschwitz god."
Sycorax: a witch and Caliban's mother. Also known as Circe or Demyx or Calypso.
The Quiet: an unknown entity (presumably God, judging from the Demogorgon's speeches and the words of Prospero) said to incarnate himself in different forms all across the universe. He is Setebos' nemesis, which creates a kind of God-against-the-Devil picture, as Setebos is the background antagonist and Prospero and Ariel, servants of The Quiet, are the background protagonists.
zeks: the Little Green Men of Mars. A chlorophyll-based lifeform that comes from the Earth of an alternate universe. Their name comes from a slang term related to the Russian word sharashka, which is a scientific or technical institute staffed with prisoners. The prisoners of these Soviet labor camps were called zeks. (This description of the origin of the term is a mistake of the author. Not only sharashka prisoners were called zeks, it is a common term for all Gulag camp prisoners, derived from the word zaklyuchennyi, inmate. The camp described in the A Day in the Life of Ivan Denisovich is a regular labor camp, not a sharashka.)
Science of Ilium/Olympos
As much of the action derives from fiction involving gods and wizards, Simmons rationalises most of this through his use of far-future technology and science, including:
String theory: interdimensional transport is conducted via Brane Holes.
Nanotechnology provides the gods' immortality and powers, and many of the cybernetic functions possessed by some of the humans.
Reference to Vladimir Vernadsky's idea of the noosphere is made to explain the origins of powerful entities such as Ariel and Prospero, the former arising from a network of datalogging mote machines, and the latter of whom derives from a post-Internet logosphere.
Quantum theory and Quantum Gravity are also used to account for a number of other things, from Achilles' immortality (his mother, Thetis, set the quantum probability for his death to zero for all means of death other than by Paris' bow) to teleportation and shapeshifting powers.
ARNists use recombinant DNA techniques to resurrect long-dead and prehistoric animals.
Pantheistic solipsism is used to explain how 'mythical' characters have entered the "real" world.
Weapons
Old-style humans: Other than flechette rifles scavenged from caches, crossbows are the main form of weapon, as old-style humans have forgotten almost everything and can only build crossbows.
Gods: Tasers, energy shields and titanium lances.
Moravecs: Weapons of mass destruction including the Device, ship-based weapons, and kinetic missiles.
Miscellaneous
What follows is a definition of terms that are either used within Ilium or are related to its science, technology and fictional history:
ARNists: short for "recombinant RNA artists". ARNists use recombinant DNA techniques to resurrect long-dead and prehistoric animals. Simmons borrows this term from his Hyperion Cantos.
E-ring/P-ring: short for "equatorial ring" or "polar ring" respectively. The rings described are not solid, but rather similar to the rings around Jupiter or Saturn: hundreds of thousands of large individual solid elements, built and occupied by the post-humans before Caliban and Prospero were stranded there and Caliban began murdering the post-humans. The rings are visible from the Earth's surface, but the old-style humans do not know exactly what they are.
Faxnodes: much as the transporter of Star Trek works, the faxnode system takes a living organism, maps out its structure, breaks down its atoms and assembles a copy at the faxport at the intended destination. This copy is a facsimile, or fax, of the original. (Unlike most science-fiction transporter technology, it is revealed late in the story that the matter is not "changed into energy" or "sent" anywhere; a traveler's body is completely destroyed, and re-created from scratch at the destination.)
Final fax: the 9,113 Jews of Savi's time to live through the Rubicon virus are suspended in a fax beam by Prospero and Ariel with the understanding that once the two get the Earth back into order, they will be released.
Firmary: short for "infirmary". A room in the e-ring that the humans of Earth fax to every Twenty (every twentieth birthday) for physical rejuvenation, or when hurt or killed in order to be healed. If they were killed, the firmary removes all memory of their death in order to lessen the psychological impact of the event.
Global Caliphate: an empire that, among other things, attempts to destroy the Jewish population of Earth. They released the Rubicon virus to kill all Jews on Earth as well as programmed the voynix to kill any remaining Jews who escaped the infection.
Quantum theory and quantum gravity: used to account for a number of other things, including Achilles' immortality (in that Thetis set the quantum probability for his death to zero for all other means of death other than by Paris' bow), teleportation, and shapeshifting powers.
Rubicon virus: created by the Global Caliphate and released with the intention of exterminating those of Jewish descent. It had the reverse effect, killing eleven billion people (ninety-seven percent of the world's population), but Israeli scientists were able to develop an inoculation against the virus and inoculate their own people's DNA, but did not have the time to save the rest of humanity.
Turin cloth: a cloth used by the people of Earth that, when draped over the eyes, allows them to view the events of the Trojan War, which they believe is just a drama being created for their entertainment. Named after the Shroud of Turin
Voynix: named after the Voynich manuscript. The voynix are biomechanical, self-replicating, programmable robots. They originated in an alternate universe, and were brought into the Ilium universe before 3000 A.D. The Global Caliphate somehow gained access to these proto-voynix and after replicating three million of them, battled the New European Union around 3000 A.D. In 3200 A.D., the Global Caliphate upgraded the voynix and programmed them to kill Jews. Using time travel technology acquired from the French (previously used to investigate the Voynich Manuscript and which resulted in the destruction of Paris), the Global Caliphate sent the voynix forward in time to 4600 A.D. Upon their arrival they begin to replicate rapidly in the Mediterranean Basin. As the post-human operations there were put at risk, Prospero and Sycorax created the calibani to fend off the voynix, and eventually Prospero reprogrammed them into inactivation. After the final fax, they were reprogrammed to serve the new old-style humans.
Literary and cultural influences
Simmons references such historical figures, fictional characters and works as Christopher Marlowe, Bram Stoker's Dracula, Plato, Gollum, the Disney character Pluto, Samuel Beckett, and William Butler Yeats' "The Second Coming", among others. As well as referencing these works and figures, he uses others more extensively, shaping his novel by the examples he chooses, such as 9/11 and its effects on the Earth and its nations.
Ilium is thematically influenced by extropianism, peopled as it is with post-humans of the far future. It therefore continues to explore the theme pioneered by H. G. Wells in The Time Machine, a work which is also referenced several times in Simmons' work. One of the most notable references is when the old woman Savi calls the current people of Earth eloi, using the word as an expression of her disgust of their self-indulgent society, lack of culture and ignorance of their past.
Ilium also includes allusions to the work of Nabokov. The most apparent of these are the inclusion of Ardis Hall and the names of Ada, Daeman and Marina, all borrowed from Ada or Ardor: A Family Chronicle. The society that the old-style humans live in also resembles that of Antiterra, a parallel of our Earth circa 19th century, which features a society in which there exists a lack of repression and Christian morality, shown by Daeman's intent to seduce his cousin. Simmons also includes references to Nabokov's fondness for butterflies, such as the butterfly genetics incorporated in the old-style humans and Daeman's enthusiasm as a lepidopterist.
Mahnmut of Europa is identified as a Shakespearean scholar: he is introduced in the first chapter analyzing Sonnet 116 in order to send his reading to his correspondent, Orphu of Io, and it is here that Shakespeare's influence on Ilium begins. Mahnmut's submersible is named The Dark Lady, an allusion to a figure in Shakespeare's sonnets. There is also, of course, The Tempest's presence in the characters of Prospero, Ariel and Caliban, along with multiple references to other Shakespeare works and characters such as Falstaff, Henry IV, Part I and Twelfth Night. Shakespeare himself even appears in a dream to Mahnmut and quotes from Sonnet 31.
Proustian memory investigations had a heavy hand in the novel's making, which helps explain why Simmons chose Ada or Ardor: A Family Chronicle over something more well-understood of Nabokov's, such as Pale Fire. Ada or Ardor was written in such a structure as to mimic someone recalling their own memories, a subject which Proust explores in his work À la recherche du temps perdu. Orphu of Io is more interested in Proust than Mahnmut's Shakespeare, as he considers Proust "perhaps the ultimate explorer of time, memory, and perception."
Simmons' portrayal of Odysseus speaking to the old-style humans at Ardis Hall is also reminiscent of the Bible's Jesus teaching his disciples. Odysseus is even addressed as "Teacher" by one of his listeners in a way reminiscent of Jesus being addressed as "Rabbi," which is commonly translated as "Teacher".
Movie adaptation
In January 2004, it was announced that the screenplay Simmons wrote for his novels Ilium and Olympos would be made into a film by Digital Domain and Barnet Bain Films, with Simmons acting as executive producer. Ilium is described as an "epic tale that spans 5,000 years and sweeps across the entire solar system, including themes and characters from Homer's The Iliad and Shakespeare's The Tempest."
Awards and recognition
Ilium: Locus Award winner, Hugo Award nominee, 2004
Olympos: Locus Award shortlist, 2006
References
Science fiction book series
Works by Dan Simmons
Science fantasy novels
Novels set on Mars
Classical mythology in popular culture
Greek and Roman deities in fiction
Fiction about nanotechnology
Quantum fiction
Fiction about resurrection
Fiction about teleportation
Biological weapons in popular culture
Self-replicating machines in fiction
Novels about time travel
Novels based on the Iliad
Novels based on the Odyssey
Modern adaptations of the Odyssey
Modern adaptations of the Iliad | Ilium/Olympos | Materials_science,Biology | 4,351 |
21,560,340 | https://en.wikipedia.org/wiki/TB6Cs1H4%20snoRNA | TB6Cs1H4 is a member of the H/ACA-like class of non-coding RNA (ncRNA) molecule that guide the sites of modification of uridines to pseudouridines of substrate RNAs. It is known as a small nucleolar RNA (snoRNA) thus named because of its cellular localization in the nucleolus of the eukaryotic cell. TB6Cs1H4 is predicted to guide the pseudouridylation of LSU5 ribosomal RNA (rRNA) at residue Ψ824.
References
Non-coding RNA | TB6Cs1H4 snoRNA | Chemistry | 123 |
56,942,807 | https://en.wikipedia.org/wiki/List%20of%20rail%20accidents%20%281890%E2%80%931899%29 | This is a list of rail accidents from 1890 to 1899.
1890
March 4 – United Kingdom – A London and North Western Railway express train from London to Scotland ran away on the downgrade from Shap to Carlisle on the Caledonian Railway and crashed into a stationary locomotive, killing four people. The train had automatic vacuum brakes, but the locomotive was also equipped for simple vacuum brake working, and the driver had become confused and selected the wrong mode.
March 21 – United Kingdom – An accident involving a South Eastern Railway train at a station in London killed three people.
August 19 – United States – 1890 Quincy train wreck, Quincy, Massachusetts: A jack used to level rails was left on the tracks. A passenger train then collided with it causing a derailment. Twenty-four people were killed due to the impact of the collision and through scalding.
September 19 – United States – Shoemakersville, Berks County, Pennsylvania: Two coal trains on the Reading Railroad collided leaving debris on the adjacent passenger track. An approaching express passenger train derailed (engine, tender, baggage car, mail car, and three of the five passenger cars) into the Schuylkill River killing 22 people and injuring 30.
October 23 – United States – near Hinton, West Virginia, on the Chesapeake and Ohio Railway, the eastbound Fast Flying Virginian struck a rockslide on the tracks, resulting in the death of the engineer. This accident was immortalized by the ballad Engine One-Forty-Three.
November 11 – United Kingdom – Norton Fitzwarren rail crash, England: A passenger train collided with a freight train that had been shunted onto the main line when the signalman forgot the line was obstructed. Ten people were killed and 11 seriously injured.
November 28 – United Kingdom – On the North British Railway two trains, both headed by NBR D class 0-6-0 locomotives, crashed head-on on the Todd's Mill Viaduct; one locomotive plunged off the bridge.
December 19 – United States – Wallingford, Connecticut: A locomotive boiler explosion on the New York, New Haven and Hartford Railroad sent the boiler sandbox through a house. No injuries were reported.
1891
March 8 – United Kingdom – A Great Western Railway passenger train was derailed by a snowdrift at Camborne, Cornwall.
April 19 – United States – Kipton, Ohio, United States: A passenger train and a freight train collided just east of the Kipton depot killing eight. The accident was attributed to one of the engineers' watches having stopped and being four minutes behind. Webster Clay Ball, watch dealer and inspector of Cleveland, Ohio was later appointed as Watch Inspector for the Lake Shore and Michigan Southern Railroad.
May 1 – United Kingdom – Norwood Junction rail accident: A London Brighton and South Coast Railway passenger train derailed near , London when a cast iron bridge collapsed. A few minor injuries resulted.
May 17 – United States – Greenvale, New York, United States: A horse's hoof caught in the switching apparatus at Greenvale (LIRR station) resulted in both the death of the horse and two crew members, as well as the destruction of the station house.
June 14 – Switzerland – Münchenstein rail disaster - Münchenstein, Basel: An iron-girder bridge collapsed as a crowded passenger train passed, killing 71 and injuring 171.
July 3 – United States – Ravenna, Ohio: On the Erie road, a freight train ran into a stationary passenger train from behind. Three cars were telescoped, and fire claimed 19 victims; 23 were injured.
August 27 – United States – Statesville, North Carolina: A passenger train of the Western North Carolina Railroad derailed upon reaching a bridge, plunging to the creek below, killing 22 and injuring 26.
August 31 – United Kingdom – A London, Chatham and Dover Railway empty stock train overran the buffers at a station in Kent, killing one person.
September 9 – United States – Oyster Bay, New York: A boiler explosion of a locomotive at Oyster Bay (LIRR station) resulted in the deaths of three crewmembers. The Long Island Railroad locomotive had replaced the one that was involved in the wreck in Greenvale in May 1891.
October 17 – United Kingdom – A Great Eastern Railway passenger train derailed at Lavenham, Suffolk.
October 22 – Canada – Two Canadian Pacific Railway freight trains collided at a siding between Kemnay and Brandon, Manitoba. One train had extra cabooses containing passengers, several of whom were either killed or badly injured.
December 4 – United States – Great East Thompson Train Wreck; East Thompson, Connecticut: Four trains collided on the New York and New England Railroad. Two freight trains collided due to sloppy dispatching, jackknifing several cars. The Long Island & Eastern States Express passenger train then hit the wreckage, killing the engineer and fireman. Shortly thereafter, despite an attempt to flag it down, the Norwich Steamboat Express ran into the rear of the Eastern States Express, setting the last sleeper on fire as well as the locomotive cab. In all, only two passengers were killed; the body of one was never found.
United States – Delta, California: An elephant traveling on a Southern Pacific train removed a coupling pin from a car. The forward portion of the train traveled some distance before the locomotive crew discovered the split.
1892
February 22 – United Kingdom – A London, Brighton and South Coast Railway passenger train ran into a South Eastern Railway locomotive at a station in East Sussex. The passenger train overran a danger signal, damaging both locomotives.
June 9 – United Kingdom – Esholt Junction rail crash - A Midland Railway passenger train overran signals and collided with another at Esholt Junction, Yorkshire killing five and injuring thirty.
September 22 – United Kingdom – Lindal railway incident: A shunting locomotive was suddenly swallowed when the ground beneath it collapsed into a sinkhole; it remains there today. No injuries were reported.
November 2 – United Kingdom – Thirsk rail crash, Thirsk, Yorkshire, England: a signalman suffering from distress and lack of sleep due to a family emergency forgot about a freight train standing outside his signal box. Eight people were killed and 39 injured.
1893
January 18 – United States – Lonsdale, Rhode Island. Eight of 23 sleigh ride passengers were killed when a sleigh collided with a Providence & Worcester Railroad freight train. Several horses were killed. Six passengers died at the scene and two died at Rhode Island Hospital. The sleigh ride was coming to Cumberland after an evening excursion from North Attleboro, Massachusetts. The engine's operator told investigators that weather conditions were very cold that night and speculated that the sleigh riders never heard the train whistle. Witnesses said because of a bend of the railroad, the passengers of the sleigh never saw the train that hit them.
May 30 – United States – Tyrone, Pennsylvania – Walter L. Main Circus train wreck: a circus train lost control going down a curved embankment and derailed. Five of the circus workers were killed, along with an estimated 50 to 72 animals. Unharmed animals escaped the wreck and were reportedly seen in the countryside for months afterward.
July 18 – United States – East Aurora, New York: A twelve-car Western New York and Pennsylvania Railroad excursion train returning from a summer picnic at Lime Lake, New York derailed; Engine #124 landed on Engine #30.
August 6 – United States – Lindsey, Ohio: Three people were killed and hundreds injured when the sleeper cars of the Chicago Express of the Lake Shore and Michigan Southern Railway derailed and crashed into a freight train waiting on a siding. Among the injured were members of the Chicago Colts baseball team.
August 12 – United Kingdom – Llantrisant rail accident, 13 were killed when mechanical failure led to derailment.
August 31 – United States – Chester train wreck, a bridge collapse plunged four train cars into the Westfield River, killing 14 people.
1894
August 9 – United States – 1894 Rock Island railroad wreck, Lincoln, Nebraska: The wreck was determined to be an act of sabotage. Eleven of 33 passengers died.
October 4 – United Kingdom – A North Eastern Railway sleeping car express overran signals and collided with the rear of a freight train at Castle Hills, Yorkshire. One person was killed. The passengers included two Cabinet ministers, Arnold Morley and Lord Tweedmouth.
November 12 – United Kingdom – A Great Western Railway boat train was derailed by a flood in Dorset.
December 22 – United Kingdom – Chelford rail accident: During shunting operations, strong winds blew a high-sided wagon into other wagons. It derailed, blocking the main line, and was then struck by an oncoming express train, killing 14 passengers.
December 22 – United Kingdom – A light engine collided with a South Eastern Railway passenger train at a station in Surrey, injuring six.
1895
February 27 – United States – A freight train and a log train collide on the Houston East and West Texas Railway north of Lufkin, Texas; an unknown person stole a third locomotive and ran it into the two stalled trains and then fled.
April 13 – United Kingdom – A Great Western Railway passenger train derailed between two stations. The cause was found to be track damaged by the excessive speed of the previous train.
August 1 – United Kingdom – A London, Chatham and Dover Railway freight train collided with an excursion train at a station in Kent, killing one person.
October 22 – France – Montparnasse derailment — At Gare Montparnasse, Paris, an express train overran a buffer stop because the driver approached the station too fast and the Westinghouse air brake failed. It crossed the concourse before plummeting through a window and crushing one person in a shop below. The locomotive remained outside the station for several days and attracted a number of photographers.
November 10 – United Kingdom – A Great Northern Railway train derailed at St Neots, killing two people.
December 22 – United Kingdom – A London and North Western Railway express passenger train collided with a freight wagon which had run away and fouled the main line. Fourteen people were killed and 79 were injured.
1896
March 7 – United Kingdom – The last carriage of a Great Northern Railway passenger train derailed at Little Bytham, Lincolnshire, causing other carriages to derail. The cause was found to be the premature removal of a speed restriction. Two people were killed.
Easter Monday, April 6 – United Kingdom – Llanberis, Wales: On the opening day of the Snowdon Mountain Railway, locomotive No. 1 Ladas ran away and plummeted down a steep slope after it derailed. The engine was destroyed, but the driver and fireman were able to jump clear and the carriages were stopped by the guard. One passenger jumped off the moving train and fell beneath the wheels; he later died from his injuries. The line then closed for over a year before re-opening on April 19, 1897.
May 26 – Canada – in Victoria, British Columbia, Point Ellice Bridge disaster: a streetcar with 143 passengers aboard crashed through Point Ellice Bridge into the Upper Harbour. Fifty-five were killed. A coroner's jury concluded that the tramway operator, the Consolidated Electric Railway Company, was responsible because it allowed the streetcar to be loaded with a greater number of passengers than the bridge was designed to support.
July 30 – United States – 1896 Atlantic City rail crash – two trains collided at a crossing just west of Atlantic City, New Jersey, crushing five loaded passenger coaches, killing 50 and seriously injuring around 60.
August 3 – United Kingdom – A Lancashire and Yorkshire Railway passenger train collided with a West Lancashire Railway passenger train at Preston Junction, Lancashire because the driver of the former had misread signals. One person was killed and seven were injured.
August 15 – United Kingdom – A London and North Western Railway sleeping car express derailed in Lancashire due to excessive speed on a curve. One person was killed.
August 29 – United Kingdom – The locomotive of a passenger train derailed in East Sussex when it collided with a traction engine and threshing machine using an occupation crossing.
September 15 – United States – The Crash at Crush – Showman William George Crush convinced officials of the Missouri-Kansas-Texas Railroad (MKT, known as "the Katy") to let him stage a colossal train wreck. The crowd was transported to the show site, near the town of West, Texas, producing much passenger revenue for the company. A one-day town was thrown up and named Crush, boasting a platform and tank cars supplying 100 faucets. Two six-car trains of obsolete rolling stock, pulled by dolled-up locomotives, were let loose at each other with spectacular results. When the wrecked engines' boilers exploded, flying shrapnel killed at least three of the 30,000 spectators (some sources estimate 40,000) and injured many more.
December 4 – United States – A freight train consisting of Engine No. 155 and twenty-six cars of freight was running from Brattleboro, Vermont to New London, Connecticut. Just outside Eagleville, Connecticut the train became uncoupled between cars 10 and 11. As the crew in the back tried to stop the back part of the train, the crew in the locomotive increased speed to gain distance from the uncoupled cars. The boiler exploded killing brakeman Warren Thomas, Engineer Otis Hall, and his brother, fireman Benjamin Hall.
December 27 – United States – A passenger train, No. 41 of the Birmingham Mineral Railroad, plunged through a bridge 110 feet (34 m) over the Cahaba River, east of West Blocton, Alabama, killing 22 or 23 of the 31 people on board, many burned beyond recognition.
1897
January 23 - United States - A train partially derailed after striking a boulder north of Oakdale, Tennessee. The boulder was suspected to have fallen onto the tracks following recent rains. The engine's fireman was killed, and the engineer seriously injured. Passengers reported only minor injuries.
January 26 – Canada – The regular westbound CP express train between Halifax and Montreal, hauled by an ICR engine, came off the rails outside Dorchester, New Brunswick, loaded with six tons (5.4 t) of freshly minted Canadian pennies from London. Two people were killed and 38 injured, including the Canadian Minister of the Militia, Frederick William Borden. It is known as "The Penny Wreck".
May 1 – Russia – A military train derailed north of Puka, Governorate of Livonia. 58 people were killed and 44 injured in the accident.
June 11 – Denmark – Gentofte train crash, Denmark: An express train passed a signal at danger and collided with a stationary passenger train at Gentofte station. Forty were killed and more than 100 injured.
June 11 – United Kingdom – Welshampton rail crash - eleven were killed when an excursion train derailed.
June 30 – United States – West Chicago, Ill. Collision of two trains of the Chicago and Northwestern R.R. Three killed and 20-30 injured.
June 30 – United States – West Terre Haute, IN. Vandalia R.R. 1 killed and 3 reported fatally injured.
September 1 – United Kingdom – A passenger train derailed near Heathfield, East Sussex, killing the driver.
October 24 – United States – Garrison train crash in Garrison, New York: the Sunday morning train No. 46 on the New York Central & Hudson River Railroad wrecked near King's Dock of the Hudson River division, south of Garrison, New York. 19 were killed.
November 4 – Canada – A Canadian Pacific Railway freight train collided with a parked CPR yard engine at the station at Havelock, Ontario resulting in three employees being injured, seven cars derailed, two locomotives severely damaged and the main track being blocked. Preliminary investigations suggested that the yard engine should not have been on the siding.
1898
January 3 – United Kingdom – A North British Railway freight train derailed in Lothian when hit by an express passenger train which overran signals. One person was killed and 21 injured.
January 29 – United States – A Maine Central Railroad train crashed near Orono. The accident killed six.
United States – A Tallulah Falls Railway train pulling a children's excursion derailed due to bad track. The locomotive and baggage car toppled from the track. The baggage car fell onto its side and the locomotive rolled to the bottom of the embankment, killing the engineer. No children were injured.
March 21 – United Kingdom – St Johns train crash 1898: A South Eastern and Chatham Railway passenger train ran into the rear of another passenger train at St Johns, London due to a signalman's error. Three people were killed and twenty injured.
May 8 - United States - Columbus, Ohio - An excursion train on the Akron R.R. was involved in an accident; one was killed and two injured.
June 26 - United States - Two trains transporting the 2nd United States Volunteer Cavalry were involved in a rear-end collision near Tupelo, Mississippi. The first train had stopped to take on water before being struck by the second. Five passengers were killed and fifteen injured.
August 16 – Cape Colony, South Africa – A rake of goods trucks, one of which was carrying 34 native passengers, ran away backwards from Mostertshoek passing loop, Great Karoo. They had not been properly braked on a falling gradient prior to the detachment of the goods train's locomotive. The runaway trucks eventually collided with the following fast mail/passenger train. In the collision, 27 of the native passengers were killed, as were five adults and one child aboard the mail/passenger train. Additionally, two Post Office officials and one of the drivers of the mail/passenger train were badly injured; others suffered minor injuries.
September 2 – United Kingdom – Wellingborough rail accident: A parcels trolley fell off the platform at Wellingborough, Northamptonshire and was hit by a Midland Railway express train, which derailed. Seven people were killed and 65 injured.
November 24 – United Kingdom – On the narrow-gauge Tralee and Dingle Light Railway in what is now the Republic of Ireland, a train of one cattle waggon and three passenger cars was derailed by high winds between Lispole and Aunascaul. Of the four passengers on board, one was killed and two others injured.
United States – The second major accident on the Tallulah Falls Railway occurred at Panther Creek trestle, the highest trestle on the line. When a passenger train reached the highest section of the bridge, the supports gave way beneath it, causing the locomotive, tender, and first car to fall into the ravine. The second coach remained on the still-erect portion of the bridge, having stopped inches from the edge. One passenger was killed and no other injuries were reported.
United Kingdom – A mail train derailed in Cornwall. The Great Western Railway 3521 Class locomotives frequently experienced excessive oscillation when running at speed.
1899
January 12 – United Kingdom – A London and North Western Railway express freight train derailed at Penmaenmawr, Caernarfonshire because the formation was washed away in a storm. Both locomotive crew were killed.
February 18 – Belgium – In heavy fog, the train from London to Brussels via Calais, ran into a train from Tial, which had stopped at the station of Forest, near Brussels. 19 persons were killed and 100+ injured. The Post Express - Feb 18, 1899
March 11 – New Zealand – Rakaia railway accident: Two excursion trains returning from Ashburton to Christchurch collided when the second train rear-ended the first; four passengers were killed and 22 injured. The accident led to the fitting of air brakes to rolling stock and improved signalling.
September 15 – United States – Missouri Pacific fast freight No. 124 crashed through a burning bridge between Paul and Julian, Nebraska, killing three men and derailing over 20 cars.
September 20 – United States – Two St. Louis–San Francisco Railway (Frisco) trains, one passenger and one freight, hit head-on in Missouri. Four people were killed.
October 23 – United Kingdom – A Caledonian Railway express train collided with a cattle train in Angus. One person was killed.
December 23 – United Kingdom – A rear-end collision occurred in West Sussex.
See also
London Underground accidents
References
Sources
External links
Rail accidents 1890-1899
19th-century railway accidents | List of rail accidents (1890–1899) | Technology | 4,136 |
8,806,323 | https://en.wikipedia.org/wiki/Home%20medical%20equipment | This article discusses the definitions and types of home medical equipment (HME), also known as durable medical equipment (DME), and durable medical equipment and prosthetics and orthotics (DMEPOS).
HME / DMEPOS
Home medical equipment is a category of devices used for patients whose care is being managed from a home or other private facility managed by a nonprofessional caregiver or family member. It is often referred to as "durable" medical equipment (DME) as it is intended to withstand repeated use by non-professionals or the patient, and is appropriate for use in the home.
Medical supplies of an expendable nature, such as bandages, rubber gloves and irrigating kits are not considered by Medicare to be DME.
Within the US medical and insurance industries, the following acronyms are used to describe home medical equipment:
DME: Durable Medical Equipment
HME: Home Medical Equipment
DMEPOS: Durable Medical Equipment, Prosthetics, Orthotics and Supplies
Types of home medical equipment
The following are representative examples of home medical equipment
Air ionizer
Air purifier
Apnea monitor
Artificial limb
Bedpan
Cannula
Catheter
Colostomy bag
CPAP machine
Crutch
Diabetic Shoes
Drug test
Enemas
Feeding tube
Glucose meter
Heating pad
Hospital bed
Infusion pump
Lift chair
Nasal cannula
Nebulizer
Oxygen concentrator
Oxygen cylinder
Patient lift
Pill splitter
Prosthetic device
Pulse oximeter
Traction splint
Walker
Ventilator
Wheelchair
Obtaining and using home medical equipment
For most home medical equipment to be reimbursed by insurance, a patient must have a doctor's prescription for the equipment needed. Some equipment, such as oxygen, is FDA regulated and must be prescribed by a physician before purchase whether insurance reimbursed or otherwise.
The physician may recommend a supplier for the home medical equipment, or the patient will have to research this on their own. HME / DMEPOS suppliers are located throughout the country and some specialty shops can also be found on the internet.
There is no established typical size for HME / DMEPOS suppliers. Supply companies range from very large organizations such as Walgreens, Lincare, and Apria to smaller local companies operated by sole proprietors or families. A newer development in the home medical equipment arena is the advent of internet retailers, whose lower operating costs often let them sell equipment for lower prices than local "brick and mortar" suppliers, but who lack the ability to offer in-home setup, equipment training, and customer service. In all cases, however, there are strict rules and laws governing HME / DMEPOS suppliers that participate in Medicare and Medicaid programs. In addition to rules outlined by the National Supplier Clearinghouse, a division of CMS (Centers for Medicare and Medicaid Services), all Medicare DME suppliers must obtain and maintain accreditation by one of many approved accrediting bodies.
Once a patient or caregiver selects an appropriate HME / DMEPOS supplier, he or she presents the supplier with the prescription and the patient's insurance information. HME / DMEPOS suppliers maintain an inventory of products and equipment, so fulfillment of the prescription is rapid, much like a pharmacy.
The HME / DMEPOS supplier is obligated to perform certain functions when providing home medical equipment. These include:
Proper delivery and setup of the equipment
Ensuring the home environment is suitable and safe for proper usage of the equipment
Training the patient, family and caregivers on the proper usage and maintenance of the equipment
Informing the patient and/or caregiver of their rights and responsibilities
All HME / DMEPOS suppliers are required to comply with Health Insurance Portability and Accountability Act (HIPAA) to protect patients' confidentiality and records.
Insurance
In the United States
Home medical equipment is typically covered by patient's healthcare insurance, including Medicare (Part B). In order to properly code home medical equipment for billing, the Healthcare Common Procedure Coding System HCPCS is utilized.
Under the Medicare Prescription Drug, Improvement, and Modernization Act of 2003, providers of HME/DMEPOS are required to become third-party accredited to standards regulated by the Centers for Medicare and Medicaid Services (CMS) in order to remain eligible under Medicare Part B. This effort aims to standardize and improve the quality of service provided to patients by home medical equipment suppliers.
See also
Medical device
Medical technology
Medical equipment
Loan closet
Medtrade- the largest international trade fair for HME in the US
References
Medical equipment
Medicare and Medicaid (United States) | Home medical equipment | Biology | 951 |
22,327,794 | https://en.wikipedia.org/wiki/Aichi%20small-elevator%20manufacturing%20corporation | The Aichi small-elevator manufacturing corporation is a manufacturer of vertical transportation systems, mainly elevators and dumbwaiters. It was founded in Aichi, Japan, in 1969.
Aichi small-elevator manufacturing corporation specialises in small elevators, dumbwaiters, and passenger lifts installed beside staircases.
History
Aichi small-elevator manufacturing corporation was founded in 1969 under private management. It was organized in 1974 as a yugen kaisha and re-organized in 1976 as a kabushiki gaisha. The company moved to its present location in 1980.
Products
Elevators for passenger service.
Stretcher capable elevators.
Freight elevators.
Hydraulic elevators.
Dumbwaiter (elevator).
Home elevators.
Staircase passenger lift.
Staircase wheelchair lift.
External links
Official website
Official website
References
Manufacturing companies of Japan
Vertical transport devices
Japanese brands | Aichi small-elevator manufacturing corporation | Technology | 161 |
36,071,642 | https://en.wikipedia.org/wiki/Public%20interest%20design | Public interest design is a human-centered and participatory design practice that places emphasis on the “triple bottom line” of sustainable design that includes ecological, economic, and social issues and on designing products, structures, and systems that address issues such as economic development and the preservation of the environment. Projects incorporating public interest design focus on the general good of the local citizens with a fundamentally collaborative perspective.
Starting in the late 1990s, several books, convenings, and exhibitions have generated new momentum and investment in public interest design. Since then, public interest design—frequently described as a movement or field—has gained public recognition.
History
Public interest design grew out of the community design movement, which got its start in 1968 after American civil rights leader Whitney Young issued a challenge to attendees of the American Institute of Architects (AIA) national convention:
". . . you are not a profession that has distinguished itself by your social and civic contributions to the cause of civil rights, and I am sure this does not come to you as any shock. You are most distinguished by your thunderous silence and your complete irrelevance."
The response to Young’s challenge was the establishment of community design centers (CDCs) across the United States. CDCs, which were often established with the support of area universities, provided a variety of design services – such as affordable housing - within their own neighborhoods.
In architecture schools, “design/build programs” provided outreach to meet local design needs, particularly in low-income and underserved areas. One of the earliest design/build programs was Yale University’s Vlock Building Project. The project, which was initiated by students at Yale University School of Architecture in 1967, requires graduate students to design and build low-income housing.
One of the most publicized programs is the Auburn University Rural Studio design/build program, which was founded in 1993. Samuel Mockbee and D.K. Ruth created the program to inspire hands-on community-outreach and service-based architectural opportunities for students. The program gained traction due to Mockbee investing in the low-income housing aesthetics — an aspect previously downplayed in architectural design of houses for the poor. Mockbee and Ruth expressed their understanding of the communities through their architectural designs; the visuals and functionality address the needs of the citizens. The Rural Studio’s first project, Bryant House, was completed in 1994 for $16,500.
Public Interest Design from the 1990s – Present
Interest in public interest design – particularly socially responsible architecture – began to grow during the 1990s and continued into the first decade of the new millennium in reaction to the expansive globalization. Conferences, books, and exhibitions began to showcase the design work being done beyond the community design centers, which had greatly decreased in numbers since their peak in the seventies.
Non-profit organizations – including Architecture for Humanity, BaSiC Initiative, Design Corps, Public Architecture, Project H, Project Locus, and MASS Design Group – began to provide design services that served a larger segment of the population than had been served by traditional design professions.
Many public interest design organizations also provide training and service-learning programs for architecture students and graduates. In 1999, the Enterprise Rose Architectural Fellowship was established, giving young architects the opportunity to work on three-year-long design and community development projects in low-income communities.
Two of the earliest formal public interest design programs include the Gulf Coast Community Design Studio at Mississippi State University and the Public Interest Design Summer Program at the University of Texas. In February 2015, Portland State University launched the first graduate certificate program in Public Interest Design in the United States.
The first professional-level training was conducted in July 2011 by the Public Interest Design Institute (PIDI) and held at the Harvard Graduate School of Design.
Also in 2011, a survey of American Institute of Architects (AIA) members found that 77% agreed that the mission of the professional practice of public interest design could be defined as the belief that every person should be able to live in a socially, economically, and environmentally healthy community.
Conferences and exhibits
The annual Structures for Inclusion conference showcases public interest design projects from around the world. The first conference, which was held in 2000, was called “Design for the 98% Without Architects." Speaking at the conference, Rural Studio co-founder Samuel Mockbee challenged attendees to serve a greater segment of the population: “I believe most of us would agree that American architecture today exists primarily within a thin band of elite social and economic conditions...in creating architecture, and ultimately community, it should make no difference which economic or social type is served, as long as the status quo of the actual world is transformed by an imagination that creates a proper harmony for both the affluent and the disadvantaged."
In 2007, the Cooper Hewitt National Design Museum held an exhibition, titled "Design for the Other 90%," curated by Cynthia Smith. Following the success of this exhibit, Smith developed the "Design Other 90" initiative into an ongoing series, the second of which was titled "Design for the Other 90%: CITIES" (2011), held at the United Nations headquarters. In 2010, Andres Lepik of the Museum of Modern Art in New York curated an exhibit, called "Small Scale, Big Change: New Architectures of Social Engagement."
Professional Networks
One of the oldest professional networks related to public interest design is the professional organization Association for Community Design (ACD), which was founded in 1977.
In 2005, adopting a term coined by architect Kimberly Dowdell, the Social Economic Environmental Design (SEED) Network was co-founded by a group of community design leaders, during a meeting hosted by the Loeb Fellowship at the Harvard Graduate School of Design. The SEED Network established a common set of five principles and criteria for practitioners of public interest design. An evaluation tool called the SEED Evaluator is available to assist designers and practitioners in developing projects that align with SEED Network goals and criteria.
In 2006, the Open Architecture Network was launched by Architecture for Humanity in conjunction with co-founder Cameron Sinclair's TED Wish. Taking on the name Worldchanging in 2011, the network is an open-source community dedicated to improving living conditions through innovative and sustainable design. Designers of all persuasions can share ideas, designs, and plans, as well as collaborate and manage projects, while protecting their intellectual property rights using the Creative Commons "some rights reserved" licensing system.
In 2007, DESIGN 21: Social Design Network, an online platform built in partnership with UNESCO, was launched.
In 2011, the Design Other 90 Network was launched by the Cooper-Hewitt, National Design Museum, in conjunction with its Design with the Other 90%: CITIES exhibition.
In 2012, IDEO.org, with the support of The Bill & Melinda Gates Foundation, launched HCD Connect, a network for social sector leaders committed to human-centered design. In this context, human-centered design begins with the end-user of a product, place, or system — taking into account their needs, behaviors and desires. The fast-growing professional network of 15,000 builds on "The Human-Centered Design Toolkit," which was designed specifically for people, nonprofits, and social enterprises that work with low-income communities throughout the world. People using the HCD Toolkit or human-centered design in the social sector now have a place to share their experiences, ask questions, and connect with others working in similar areas or on similar challenges.
See also
Design/Build
Healthy community design
Leadership in Energy and Environmental Design
Participatory design
Sustainable architecture
Sustainable Design
Social design
References
Further reading
Books advocating public interest design:
Jones, T., Pettus, W., & Pyatok, M. (1997). Good Neighbors, Affordable Family Housing.
Carpenter, W. J. (1997). Learning by Building: Design and Construction in Architectural Education.
Bell, B. (2003). Good Deeds, Good Design: Community Service through Architecture.
Stohr, K. & Sinclair, C. (editors) (2006). Design Like You Give a Damn: Architectural Responses to Humanitarian Crises.
Bell, B. & Wakeford, K. (editors) (2008). Expanding Architecture, Design as Activism.
Piloton, E. (2009). Design Revolution: 100 Products that Empower People.
Cary, J. (2010). The Power of Pro Bono: 40 Stories about Design for the Public Good by Architects and Their Clients.
Stohr, K. & Sinclair, C. (editors) (2012). Design Like You Give a Damn 2: Building Change from the Ground Up.
External links
Infographic: “From Idealism to Realism: The History of Public Interest Design”
PublicInterestDesign.org daily blog and website
Public Interest Design Institute
The SEED Network
Academic disciplines
Architectural design
Building engineering
Environmental design
Sustainable architecture
Sustainable building
Sustainable design
Interest design | Public interest design | Engineering,Environmental_science | 1,797 |
37,583,459 | https://en.wikipedia.org/wiki/10%20Leonis%20Minoris | 10 Leonis Minoris is a single variable star in the northern constellation Leo Minor, located approximately 191 light years away based on parallax. It has the variable star designation SU Leonis Minoris; 10 Leonis Minoris is the Flamsteed designation. This body is visible to the naked eye as a faint, orange-hued star with a baseline apparent visual magnitude of 4.54. It is moving closer to the Earth with a heliocentric radial velocity of −12 km/s.
This is an evolved giant star with a stellar classification of G8.5 III.
It is reported as an RS CVn variable with magnitude varying by 0.02 and showing a high level of chromospheric activity. The star has 2.54 times the mass of the Sun and has expanded to 8.7 times the Sun's radius. It is radiating 46 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of 5,099 K.
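The quoted radius, temperature, and luminosity are mutually consistent under the Stefan–Boltzmann law; taking a solar effective temperature of about 5,772 K,

\[ \frac{L}{L_\odot} = \left(\frac{R}{R_\odot}\right)^{2} \left(\frac{T_{\mathrm{eff}}}{T_\odot}\right)^{4} = 8.7^{2} \left(\frac{5099}{5772}\right)^{4} \approx 46. \]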
References
G-type giants
RS Canum Venaticorum variables
Leo Minor
BD+37 2004
Leonis Minoris, 10
082635
Leonis Minoris, SU
046952
3800 | 10 Leonis Minoris | Astronomy | 249 |
1,108,991 | https://en.wikipedia.org/wiki/Status%20effect | In role-playing games, a status effect is a temporary modification to a game character’s original set of stats that usually comes into play when special powers and abilities (such as spells) are used, often during combat. It appears in numerous computer and video games of many genres, most commonly in role-playing video games. The term status effect can be applied both to changes that provide a character an advantage (increased attributes, defensive barriers, regeneration), and those that hinder the character (decreased attributes, incapacitation, degeneration). Especially in MMORPGs, beneficial effects are referred to as buffs, and hindering effects are called debuffs.
Acquiring and removing status effects
A status effect in the abstract is a persistent consequence of a certain in-game event or action, and as such innumerable variants exist across the gaming field. Status effects may result from one character performing a certain type of attack on another. Players may acquire status effects by consuming items, casting spells on themselves or each other, activating devices in the world, interacting with NPCs, or remaining in a particular location. Meeting certain criteria may result in the character acquiring a condition, which can have a status effect associated with it; for example: if their hunger level is high they may acquire a 'starving' condition, which produces a status effect that reduces their health regeneration. Some games offer permanent status effects which persist for an entire level and act as modifications to the game's native difficulty.
The process of removing a status effect varies. Some status effects expire after a certain amount of time has elapsed. Most games contain items capable of healing specific status effects, or rarer items which can heal all of them. Many games also include magic spells that can eliminate status effects. Status effects are often removed at the end of a battle or once the originating enemy is defeated, however some may persist until they are explicitly cured. Games which allow players to rest may remove some status effects when that action is taken. If a game has multiple classes, one will often be a class capable of healing, who will have a greater ability to remove negative status effects than other classes.
In addition, many games have weapons, armor, or other equipment that can mitigate status effects or prevent a character from acquiring one in the first place. Depending on the game, some items increase the chance to escape suffering the effect each time the player might receive it, while others grant complete immunity. However, equipment that resists one effect will sometimes, as a penalty, increase vulnerability to a different effect, offering the player the opportunity to make tactical choices.
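As a concrete illustration, a temporary modifier of this kind is often stored as a small record on the character and removed when a duration counter runs out. The following C sketch is purely illustrative; the struct layout, field names, and turn-based model are assumptions for this example, not taken from any particular game:

#include <stdio.h>

#define MAX_EFFECTS 8

/* Illustrative status effect: a temporary stat modifier with a duration. */
typedef struct {
    const char *name;
    int attack_mod;  /* added to attack while active; negative values are debuffs */
    int turns_left;  /* effect expires when this reaches 0 */
    int active;
} StatusEffect;

typedef struct {
    int base_attack;
    StatusEffect effects[MAX_EFFECTS];
} Character;

/* Effective stat = base stat plus all active modifiers. */
int effective_attack(const Character *c) {
    int atk = c->base_attack;
    for (int i = 0; i < MAX_EFFECTS; i++)
        if (c->effects[i].active)
            atk += c->effects[i].attack_mod;
    return atk;
}

/* Called once per turn: count down durations and expire finished effects. */
void tick_effects(Character *c) {
    for (int i = 0; i < MAX_EFFECTS; i++)
        if (c->effects[i].active && --c->effects[i].turns_left <= 0)
            c->effects[i].active = 0;
}

int main(void) {
    Character hero = { .base_attack = 10 };
    hero.effects[0] = (StatusEffect){ "berserk", +5, 2, 1 }; /* a buff */
    hero.effects[1] = (StatusEffect){ "weaken", -3, 1, 1 };  /* a debuff */
    for (int turn = 1; turn <= 3; turn++) {
        printf("turn %d: attack = %d\n", turn, effective_attack(&hero));
        tick_effects(&hero);
    }
    return 0; /* prints 12, 15, 10: the debuff expires first, then the buff */
}

Real engines add fields such as stacking rules, resistance chances, and dispel flags, but the acquire/tick/expire loop above is the common core.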
Buffs and debuffs
In many MMORPGs, the terms buff and debuff are commonly used to describe status effects. Some spells or powers may debuff an enemy while buffing an ally at the same time.
Buff is the term generically used to describe a positive status effect that affects mainly player or enemy statistics (usually cast as a spell). Debuffs are effects that may negatively impact a player character or a non-player character in some way other than reducing their hit points.
Countless buffs and debuffs exist, depending on the game played, though all debuffs share the same concept: to make a certain target less powerful in one or more aspects. Both buffs and debuffs are generally of a temporary nature, wearing off after a certain period of time.
Auras
Many modern real-time strategy games have hero units, single units that are powerful, but limited in number (usually only one of a single type allowed). In addition to their normally very high stats, many heroes also have auras which confer beneficial status effects or attribute bonuses to any friendly units that enter within a certain radius of the hero. This makes the hero unit an important factor in an engagement as, in addition to their formidable combat skills and powerful abilities, they also make the units around them more effective.
Some heroes and spellcaster units can also confer or inflict buffs, debuffs, and other status effects to units as spells.
See also
MMORPG
Dungeons & Dragons
Pathfinder Roleplaying Game
References
External links
Status Effects at Giant Bomb
Video game terminology | Status effect | Technology | 870 |
22,116,295 | https://en.wikipedia.org/wiki/Electronic%20Journal%20of%20Combinatorics | The Electronic Journal of Combinatorics is a peer-reviewed open access scientific journal covering research in combinatorial mathematics.
The journal was established in 1994 by Herbert Wilf (University of Pennsylvania) and Neil Calkin (Georgia Institute of Technology). The Electronic Journal of Combinatorics is a founding member of the Free Journal Network. According to the Journal Citation Reports, the journal had a 2017 impact factor of 0.762.
Editors-in-chief
Current
The current editors-in-chief at Electronic Journal of Combinatorics are:
Maria Axenovich, Karlsruhe Institute of Technology, Germany
Miklós Bóna, University of Florida, United States
Julia Böttcher, London School of Economics, United Kingdom
Richard A. Brualdi, University of Wisconsin, Madison, United States
Zdeněk Dvořák, Charles University, Czech Republic
Nikolaos Fountoulakis, University of Birmingham, United Kingdom
Eric Fusy, CNRS/LIX, École Polytechnique, France
Felix Joos, Universität Heidelberg, Germany
Brendan McKay, Australian National University, Australia
Bojan Mohar, Simon Fraser University, Canada
Greta Panova, University of Southern California, United States
Alexey Pokrovskiy, University College London, United Kingdom
Gordon Royle, University of Western Australia, Australia
Bruce Sagan, Michigan State University, United States
Paco Santos, University of Cantabria, Spain
Maya Stein, University of Chile, Chile
Edwin van Dam, Tilburg University, Netherlands
Ian Wanless, Monash University, Australia
David Wood, Monash University, Australia
Qing Xiang, Southern University of Science and Technology, China
Since 2013, one of the editors-in-chief has been designated the Chief Editorial Officer. The present officer is Richard Brualdi.
Past
The journal's past editors-in-chief include its founders, Herbert Wilf and Neil Calkin.
Dynamic surveys
In addition to publishing normal articles, the journal also contains a class of articles called Dynamic Surveys that are not assigned to volumes and can be repeatedly updated by the authors.
Open access
Since its inception, the journal has operated under the diamond-model open access model, charging no fees to either authors or readers. It is a founding member of the Free Journal Network.
Copyright
Since its inception, the journal has left copyright of all published material with its authors. Instead, authors provide the journal with an irrevocable licence to publish and agree that any further publication of the material acknowledges the journal. Since 2018, authors have been strongly encouraged to release their articles under a Creative Commons license.
References
External links
Combinatorics journals
Academic journals established in 1994
Open access journals
English-language journals
Online-only journals | Electronic Journal of Combinatorics | Mathematics | 543 |
20,127,208 | https://en.wikipedia.org/wiki/Tabimorelin | Tabimorelin (INN) (developmental code name NN-703) is a drug which acts as a potent, orally active agonist of the ghrelin/growth hormone secretagogue receptor (GHSR) and growth hormone secretagogue, mimicking the effects of the endogenous peptide agonist ghrelin as a stimulator of growth hormone (GH) release. It was one of the first GH secretagogues developed and is largely a modified polypeptide, but it is nevertheless orally active in vivo. Tabimorelin produced sustained increases in levels of GH and insulin-like growth factor 1 (IGF-1), along with smaller transient increases in levels of other hormones such as adrenocorticotropic hormone (ACTH), cortisol, and prolactin. However, actual clinical effects in adults with growth hormone deficiency were limited, with only the most severely GH-deficient patients showing significant benefit, and tabimorelin was also found to act as a CYP3A4 inhibitor, which could cause it to have undesirable interactions with other drugs.
See also
List of growth hormone secretagogues
References
Ghrelin receptor agonists
Growth hormone secretagogues
Peptides
Experimental drugs | Tabimorelin | Chemistry | 268 |
7,817,593 | https://en.wikipedia.org/wiki/Mars%20Surface%20Exploration | Mars Surface Exploration (MSE) is a systems engineering tool for the design of rover missions originally developed in 2003 by the Space Systems Engineering graduate class at MIT. It has since been further enhanced by the MIT Space Systems Laboratory with the support of the Jet Propulsion Laboratory (JPL). The tool is intended to help designers during pre-phase A rover mission design. MSE enables designers to model and analyze very rapidly a wide range of design options for a mission whose science goals have been defined. The emphasis is on breadth rather than on in-depth modeling of specific designs. Other rover modeling tools exist at NASA’s and ESA’s concurrent engineering facilities that take the approach of interconnecting sophisticated software design environments to conduct detailed analyses of a particular mission. MSE's approach complements in-depth modeling techniques which, in return, assist in the validation of MSE's models at various points of the design space.
Analyses
MSE has been used to analyze various types of missions ranging from traditional rover missions (e.g. Mars Science Laboratory and ExoMars) to Mars Sample Return-type missions and Lunar missions.
See also
Exploration of Mars
Sojourner
Mars Exploration Rovers
Mars Science Laboratory
ExoMars
Rover Mission Analysis and Design (RoMAD)
References
External links
MIT Space Systems Laboratory
Other research at the MIT Space Systems Laboratory EMFF
Aerospace engineering | Mars Surface Exploration | Engineering | 276 |
2,001,331 | https://en.wikipedia.org/wiki/Simple%20precedence%20grammar | A simple precedence grammar is a context-free formal grammar that can be parsed with a simple precedence parser. The concept was first created in 1964 by Claude Pair, and was later rediscovered, from ideas due to Robert Floyd, by Niklaus Wirth and Helmut Weber, who published a paper entitled EULER: a generalization of ALGOL, and its formal definition in the Communications of the ACM in 1966.
Formal definition
G = (N, Σ, P, S) is a simple precedence grammar if all the production rules in P comply with the following constraints:
There are no erasing rules (ε-productions)
There are no useless rules (unreachable symbols or unproductive rules)
For each pair of symbols X, Y (X, Y ∈ N ∪ Σ) there is only one Wirth–Weber precedence relation.
G is uniquely inversible
Examples
For a simple precedence grammar, the Wirth–Weber precedence table records, for each ordered pair of symbols, the single relation (⋖, ≐, or ⋗) that holds between them, if any.
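As an illustration (a toy grammar chosen for this purpose, not one taken from the references below), consider the grammar with productions S → aSb and S → c. Its Wirth–Weber relations are a ≐ S and S ≐ b (adjacent symbols in a right-hand side); a ⋖ a and a ⋖ c (a is immediately followed by the nonterminal S, whose leftmost derivable symbols are a and c); and b ⋗ b and c ⋗ b (the rightmost symbols derivable from S are b and c, and S is immediately followed by b). Every ordered pair of symbols carries at most one relation, and the two right-hand sides are distinct, so the grammar is a simple precedence grammar.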
Notes
References
Alfred V. Aho, Jeffrey D. Ullman (1977). Principles of Compiler Design. 1st Edition. Addison–Wesley.
William A. Barrett, John D. Couch (1979). Compiler construction: Theory and Practice. Science Research Associate.
Jean-Paul Tremblay, P. G. Sorenson (1985). The Theory and Practice of Compiler Writing. McGraw–Hill.
External links
"Simple Precedence Relations" at Clemson University
Formal languages | Simple precedence grammar | Mathematics,Technology | 279 |
67,207,064 | https://en.wikipedia.org/wiki/Markarian%2050 | Markarian 50 is a young open cluster in the Milky Way galaxy. Located at about 3,460 pc away in the constellation Cassiopeia, its age is estimated at only about 7.5 million years old. Markarian 50 may be a member of the OB association Cassiopeia OB2. The Wolf-Rayet star WR 157 is a member of Markarian 50.
References
Open clusters
Cassiopeia (constellation)
Star-forming regions | Markarian 50 | Astronomy | 94 |
38,070,109 | https://en.wikipedia.org/wiki/Central%20Drugs%20Standard%20Control%20Organisation | The Central Drugs Standard Control Organisation (CDSCO) is India's national regulatory body for cosmetics, pharmaceuticals and medical devices. It serves a similar function to the Food and Drug Administration (FDA) of the United States or the European Medicines Agency of the European Union. The Indian government has announced a plan to bring all medical devices, including implants and contraceptives, under the review of the CDSCO.
Within the CDSCO, the Drug Controller General of India (DCGI) regulates pharmaceuticals and medical devices; the post is positioned within the Ministry of Health and Family Welfare. The DCGI is advised by the Drug Technical Advisory Board (DTAB) and the Drug Consultative Committee (DCC). The organisation is divided into zonal offices, each of which carries out pre-licensing and post-licensing inspections, post-market surveillance, and drug recalls where necessary. Manufacturers who deal with the authority are required to name an Authorized Indian Representative (AIR) to represent them in all dealings with the CDSCO in India.
Divisions
Central Drugs Standard Control Organization has 8 divisions:
BA/BE
New Drugs
Medical Device & Diagnostics
DCC-DTAB
Import & Registration
Biological
Cosmetics
Clinical Trials
See also
Healthcare in India
National Health Authority
Drugs and Cosmetics Act, 1940
Indian Council of Medical Research
Hindustan Antibiotics Limited
Drugs Controller General of India
National Pharmaceutical Pricing Authority
Indian Pharmacopoeia Commission
Pharmaceutical Export Promotion Council
Food Safety and Standards Authority of India
Pharmacy Council of India
Pharmaceutical industry in India
National Medical Commission
Ministry of Health and Family Welfare
National Commission for Indian System of Medicine
Rajasthan Right to Health Care Act 2022
Drugs and Magic Remedies (Objectionable Advertisements) Act, 1954
References
External links
SUGAM Online Licensing
Medical and health organisations based in India
Food safety organizations
National agencies for drug regulation
Regulators of biotechnology products
Ministry of Health and Family Welfare
Regulatory agencies of India | Central Drugs Standard Control Organisation | Chemistry,Biology | 373 |
34,730,396 | https://en.wikipedia.org/wiki/Geosmithia%20morbida | Geosmithia morbida is a species of anamorphic fungus in the Bionectriaceae family that, together with the activity of the walnut twig beetle, causes thousand cankers disease in species of walnut trees (Juglans spp.). It was described as new to science in 2010 from specimens collected in the southern United States. The fungus, transmitted by the walnut twig beetle, Pityophthorus juglandis, is known from the western USA from California to Colorado. The cankers resulting from infection restrict nutrient flow and typically kill the host tree within three to four years. Based on closeness of internal transcribed spacer DNA, the closest relative of G. morbida is G. fassatiae. The specific epithet morbida refers to the deadly pathogenic effect it has on its host.
References
External links
USDA ARS Fungal Database
Bionectriaceae
morbida
Fungi described in 2010
Fungi of the United States
Fungal tree pathogens and diseases
Walnut tree diseases
Fungi without expected TNC conservation status
Fungus species | Geosmithia morbida | Biology | 213 |
53,886,302 | https://en.wikipedia.org/wiki/Optomechatronics | In engineering, optomechatronics is a field that investigates the integration of optical components and technology into mechatronic systems. The optical components in these systems are used as sensors to measure mechanical quantities such as surface structure and orientation. Optical sensors are used in a feedback loop as part of control systems for mechatronic devices. Optomechatronics has applications in areas such as adaptive optics, vehicular automation, optofluidics, optical tweezers and thin-film technology.
References
External links
International Society for Optomechatronics
International Journal of Optomechatronics
Electromechanical engineering
Optical metrology | Optomechatronics | Engineering | 132 |
19,590,427 | https://en.wikipedia.org/wiki/Iain%20Coldham | Iain Coldham is an organic chemist and Professor of Organic Chemistry at the University of Sheffield. He obtained his PhD from the University of Cambridge and conducted postdoctoral research in Austin, Texas, in 1989. Coldham's research areas include the intramolecular trapping of episulfonium ions with amine nucleophiles, the use of triisopropylsilyl enol ethers in organic synthesis, chiral organolithium chemistry, azomethine ylide cycloaddition, and natural product synthesis.
Academic career
1991: Lecturer in Organic Chemistry - University of Exeter
1998: Senior Lecturer in organic Chemistry - University of Exeter
2003: Reader in Organic Chemistry - University of Sheffield
2008: Professor of Organic Chemistry - University of Sheffield
Research
Chiral organolithium chemistry
Azomethine ylide cycloaddition
Natural product synthesis
References
External links
Coldham group website
Year of birth missing (living people)
Living people
Academics of the University of Exeter
Academics of the University of Sheffield
Alumni of the University of Cambridge
English chemists
British organic chemists | Iain Coldham | Chemistry | 220 |
1,493,236 | https://en.wikipedia.org/wiki/Random%20permutation | A random permutation is a random ordering of a set of objects, that is, a permutation-valued random variable. The use of random permutations is common in games of chance and in randomized algorithms in coding theory, cryptography, and simulation. A good example of a random permutation is the fair shuffling of a standard deck of cards: this is ideally a random permutation of the 52 cards.
Computation of random permutations
Entry-by-entry methods
One algorithm for generating a random permutation of a set of size n uniformly at random, i.e., such that each of the n! permutations is equally likely to appear, is to generate a sequence by uniformly randomly selecting an integer between 1 and n (inclusive), sequentially and without replacement, n times, and then to interpret this sequence (x1, ..., xn) as the permutation

\[ \begin{pmatrix} 1 & 2 & \cdots & n \\ x_1 & x_2 & \cdots & x_n \end{pmatrix}, \]

shown here in two-line notation.
An inefficient brute-force method for sampling without replacement could select from the numbers between 1 and n at every step, retrying the selection whenever the random number picked is a repeat of a number already selected until selecting a number that has not yet been selected. The expected number of retries per step in such cases will scale with the inverse of the fraction of numbers already selected, and the overall number of retries as the sum of those inverses, making this an inefficient approach.
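To make this cost concrete: at step i, with i − 1 numbers already chosen, each draw succeeds with probability (n − i + 1)/n, so the expected number of draws at that step is n/(n − i + 1), and the expected total over all n steps is

\[ \sum_{i=1}^{n} \frac{n}{n-i+1} = n \sum_{k=1}^{n} \frac{1}{k} = n H_n \approx n \ln n, \]

the same harmonic sum that appears in the coupon collector's problem.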
Such retries can be avoided using an algorithm where, on each ith step when x1, ..., xi − 1 have already been chosen, one chooses a uniformly random number j from between 1 and n − i + 1 (inclusive) and sets xi equal to the jth largest of the numbers that have not yet been selected. This selects uniformly randomly among the remaining numbers at every step without retries.
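A minimal C sketch of this retry-free selection follows; the function name and the 0-based uniform(m) convention (matching the snippet in the next subsection) are assumptions for illustration. The inner scan for the j-th remaining value makes it O(n²) overall, but no random draw is ever wasted:

#include <stdlib.h>

unsigned uniform(unsigned m); /* random integer in {0, ..., m-1}, uniformly distributed */

/* Fill out[0..n-1] with a uniformly random permutation of 1..n by selecting,
   at step i, the j-th remaining (not yet chosen) value in increasing order;
   any fixed ordering of the remaining values yields the same uniform result. */
void random_permutation(unsigned out[], unsigned n)
{
    unsigned char *used = calloc(n, 1); /* used[v] != 0 iff value v+1 is already taken */
    for (unsigned i = 0; i < n; i++) {
        unsigned j = uniform(n - i);    /* rank among the n-i remaining values */
        for (unsigned v = 0; v < n; v++) {
            if (!used[v] && j-- == 0) { /* the j-th unused value */
                used[v] = 1;
                out[i] = v + 1;
                break;
            }
        }
    }
    free(used);
}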
Fisher-Yates shuffles
A simple algorithm to generate a permutation of n items uniformly at random without retries, known as the Fisher–Yates shuffle, is to start with any permutation (for example, the identity permutation), and then go through the positions 0 through n − 2 (we use a convention where the first element has index 0, and the last element has index n − 1), and for each position i swap the element currently there with a randomly chosen element from positions i through n − 1 (the end), inclusive. Any permutation of n elements will be produced by this algorithm with probability exactly 1/n!, thus yielding a uniform distribution of the permutations.
unsigned uniform(unsigned m); /* Returns a random integer 0 <= uniform(m) <= m-1 with uniform distribution */

void initialize_and_permute(unsigned permutation[], unsigned n)
{
    unsigned i;
    for (i = 0; i <= n-2; i++) {
        unsigned j = i + uniform(n-i); /* A random integer such that i ≤ j < n */
        unsigned tmp = permutation[i]; /* Swap the randomly picked element permutation[j] into position i */
        permutation[i] = permutation[j];
        permutation[j] = tmp;
    }
}
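The uniformity claim follows by counting: the loop makes n − 1 independent choices with n, n − 1, ..., 2 equally likely options respectively, and distinct choice sequences yield distinct permutations, so each outcome occurs with probability

\[ \prod_{i=0}^{n-2} \frac{1}{n-i} = \frac{1}{n(n-1)\cdots 2} = \frac{1}{n!}. \]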
If the uniform() function is implemented simply as random() % m then there will be a bias in the distribution of permutations if the number of return values of random() is not a multiple of m. However, this effect is small if the number of return values of random() is orders of magnitude greater than m.
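One common way to remove this bias is rejection sampling. The sketch below assumes the C standard library's rand() and 1 ≤ m ≤ RAND_MAX; raw values falling in the incomplete final block of the generator's range are discarded and redrawn:

#include <stdlib.h>

/* Uniform integer in {0, ..., m-1} with no modulo bias; assumes 1 <= m <= RAND_MAX. */
unsigned uniform(unsigned m)
{
    /* Largest multiple of m that fits in rand()'s range of RAND_MAX+1 values. */
    unsigned full_blocks = ((unsigned)RAND_MAX + 1u) / m * m;
    unsigned r;
    do {
        r = (unsigned)rand(); /* redraw while r falls in the short final block */
    } while (r >= full_blocks);
    return r % m;
}

Every accepted value lies in a region whose size m divides evenly, so each residue is equally likely; the rejection probability is below one half, so the expected number of redraws is less than one.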
Randomness testing
As with all computational implementations of random processes, the quality of the distribution generated by an implementation of a randomized algorithm such as the Fisher-Yates shuffle, i.e., how close the actually generated distribution is to the desired distribution, will depend on the quality of underlying sources of randomness in the implementation such as pseudorandom number generators or hardware random number generators. There are many randomness tests for random permutations, such as the "overlapping permutations" test of the Diehard tests. A typical form of such tests is to take some permutation statistic for which the distribution is theoretically known and then test whether the distribution of that statistic on a set of randomly generated permutations from an implementation closely approximates the distribution of that statistic from the true distribution.
Statistics on random permutations
Fixed points
The probability distribution for the number of fixed points of a uniformly distributed random permutation of n elements approaches a Poisson distribution with expected value 1 as n grows. The first n moments of this distribution are exactly those of the Poisson distribution. In particular, the probability that a random permutation has no fixed points (i.e., that the permutation is a derangement) approaches 1/e as n increases.
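Written out, the limiting distribution is

\[ \Pr(\text{exactly } k \text{ fixed points}) \to \frac{e^{-1}}{k!}, \qquad k = 0, 1, 2, \ldots, \]

so in particular the derangement probability tends to 1/e ≈ 0.3679.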
See also
Ewens's sampling formula — a connection with population genetics
Faro shuffle
Golomb–Dickman constant
Random permutation statistics
Shuffling algorithms — random sort method, iterative exchange method
Pseudorandom permutation
References
External links
Random permutation at MathWorld
Random permutation generation -- detailed and practical explanation of Knuth shuffle algorithm and its variants for generating k-permutations (permutations of k elements chosen from a list) and k-subsets (generating a subset of the elements in the list without replacement) with pseudocode
Permutations
Randomized algorithms | Random permutation | Mathematics | 1,124 |
62,375,361 | https://en.wikipedia.org/wiki/Frontiers%20of%20Biogeography | Frontiers of Biogeography is a peer-reviewed open access scientific journal publishing biogeographical science, with the academic standards expected of a journal operated by and for an academic society. It is published on behalf of the International Biogeography Society, using the eScholarship Publishing platform. The current editor-in-chief is Robert J. Whittaker.
Abstracting and indexing
The journal is abstracted and indexed in a number of bibliographic databases.
References
External links
Open access journals
Ecology journals
Geography journals
Biogeography
Academic journals established in 2009
English-language journals | Frontiers of Biogeography | Biology,Environmental_science | 113 |
52,048,187 | https://en.wikipedia.org/wiki/Twister%20sister%20ribozyme | The twister sister ribozyme (TS) is an RNA structure that catalyzes its own cleavage at a specific site. In other words, it is a self-cleaving ribozyme. The twister sister ribozyme was discovered by a bioinformatics strategy as an RNA Associated with Genes Associated with Twister and Hammerhead ribozymes, or RAGATH.
The twister sister ribozyme has a possible structural similarity to twister ribozymes. Some striking similarities have been noted, but also surprising differences, such as the absence in twister sister of the two pseudoknot interactions found in the twister ribozyme. The exact nature of the structural relationship between twister and twister sister ribozymes, if any, has not been determined.
Discovery
The twister sister ribozyme was discovered through a bioinformatic search. The study looked for conserved RNA structures near known twister and hammerhead ribozymes, as well as near certain protein-coding genes, based on the fact that many ribozymes are located close to one another and to those genetic elements. The self-cleaving activity of 15 conserved RNA motifs found in these regions was then tested. Three of the 15 RNA motifs showed self-cleaving activity: the twister sister ribozyme, the pistol ribozyme, and the hatchet ribozyme.
Structure
The crystal structures of the pre-catalytic state of the twister sister ribozymes were solved by two research groups independently.
The structure of a three-way junctional twister sister ribozyme is composed of two coaxially stacked helical sections connected by a three-way junction and two tertiary contacts. The active site, containing the scissile phosphate, is located in a loop with quasihelical character in one of the coaxially stacked helices. Five divalent metal ions coordinate to RNA ligands; one of them is directly bound to the C54 O2′ near the scissile phosphate and exchanges inner-sphere water molecules with the RNA ligands.
The crystal structure of a four-way junctional twister sister ribozyme differs from the three-way junctional one in terms of long-range interactions and active-site structure. The active site of the four-way junctional twister sister is splayed apart, with an interaction between a guanine and the scissile phosphate. In addition, seven divalent metal ions are present in this ribozyme.
So far, only the pre-catalytic conformation of twister sister ribozymes is known. Understanding the transition state is needed to explain the relationship between the twister and twister sister ribozymes, as well as the differences in active-site structure between the three-way and four-way junctional twister sister ribozymes.
Catalytic mechanism
Generally, nucleolytic ribozymes cleave a specific phosphodiester linkage by an SN2 mechanism. The O2′ acts as a nucleophile to attack the adjacent phosphorus, with the O5′ as the leaving group. The catalytic products are a cyclic 2′,3′ phosphate and a 5′-hydroxyl.
The catalytic activity of twister sister increases with pH and depends on divalent metal ions. The cleavage rate increases tenfold with each unit increase in pH and reaches a plateau near pH 7, which indicates that the 2′-hydroxyl group of the cytidine near the active site is fully deprotonated at pH 7 in the ribozyme. However, the structural basis for the catalytic activity is still under investigation.
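This log-linear pH profile matches the standard kinetic picture in which the observed rate tracks the deprotonated fraction of the attacking 2′-hydroxyl (a textbook interpretation, not a claim specific to the cited studies):

\[ k_{\mathrm{obs}} \propto \frac{1}{1 + 10^{\,\mathrm{p}K_{\mathrm{a}} - \mathrm{pH}}}, \]

which rises tenfold per pH unit when pH is well below the effective pKa and plateaus once pH exceeds it.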
The three-way junctional twister sister is a metalloenzyme. An inner-sphere water molecule of the divalent metal ion bound to the C54 O2′ acts as a general base to deprotonate the 2′-hydroxyl group, making it a stronger nucleophile, but the general acid that stabilizes the oxyanion leaving group remains unknown. This mechanism is supported by the exponential correlation between catalytic activity and the pKa of the hydrated metal ion.
For the four-way junctional twister sister, Ren and coworkers found that a guanine with an amino group is likely to play a role in the catalysis, because G5 mutations result in very low catalytic activity. However, it remains unclear whether the guanine directly participates in the catalysis, as it is not absolutely conserved. The formation of a pseudoknot in the four-way junctional TS was found to be highly Mg2+-dependent in SHAPE (selective 2′-hydroxyl acylation analyzed by primer extension) experiments.
References
RNA
Ribozymes | Twister sister ribozyme | Chemistry | 966 |
53,751,966 | https://en.wikipedia.org/wiki/Michael%20B%C3%BChl | Michael Bühl is a professor of Computational and Theoretical Chemistry in the School of Chemistry, University of St. Andrews. He has published work on the performance of various density functionals, modelling thermal and medium effects, transition-metal NMR of metalloenzymes, modelling of homogeneous catalysis, and molecular dynamics of transition metal complexes.
Biography
Bühl was born in 1962. He earned his PhD at the University of Erlangen-Nuremberg's Institute for Organic Chemistry (Institut für organische Chemie), where his thesis advisor was Paul von Ragué Schleyer. In 1992, he worked as a post-doctoral researcher with Henry F. Schaefer III (University of Georgia). He was an Oberassistent at the Institute of Organic Chemistry, University of Zürich between 1993 and 1999. In 1999, he also worked at Max-Planck-Institut für Kohlenforschung, Mülheim. He was on the faculty at the University of Zürich from 1998 to 2000 and then at University of Wuppertal from 2000 to 2008. He is Chair of Computational Chemistry at the University of St. Andrews since 2008.
Research interests
Bühl's group applies the tools of computational quantum chemistry to study a variety of chemical and biochemical systems and their properties, focussing on transition-metal and f-element chemistry, homogeneous and bio-catalysis, and NMR properties. The methods employed are mostly rooted in density-functional theory (DFT), including quantum-mechanical/molecular-mechanical (QM/MM) calculations and first-principles molecular dynamics simulations.
References
External links
Buehl's research group
Scottish chemists
Academics of the University of St Andrews
Living people
1962 births
Computational chemistry | Michael Bühl | Chemistry | 353 |
4,770 | https://en.wikipedia.org/wiki/Business%20ethics | Business ethics (also known as corporate ethics) is a form of applied ethics or professional ethics, that examines ethical principles and moral or ethical problems that can arise in a business environment. It applies to all aspects of business conduct and is relevant to the conduct of individuals and entire organizations. These ethics originate from individuals, organizational statements or the legal system. These norms, values, ethical, and unethical practices are the principles that guide a business.
Business ethics refers to contemporary organizational standards, principles, sets of values, and norms that govern the actions and behavior of an individual in a business organization. Business ethics has two dimensions: normative business ethics and descriptive business ethics. As a corporate practice and a career specialization, the field is primarily normative. Academics attempting to understand business behavior employ descriptive methods. The range and quantity of business ethical issues reflect the interaction of profit-maximizing behavior with non-economic concerns.
Interest in business ethics accelerated dramatically during the 1980s and 1990s, both within major corporations and within academia. For example, most major corporations today promote their commitment to non-economic values under headings such as ethics codes and social responsibility charters.
Adam Smith said in 1776, "People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices." Governments use laws and regulations to point business behavior in what they perceive to be beneficial directions. Ethics implicitly regulates areas and details of behavior that lie beyond governmental control. The emergence of large corporations with limited relationships and sensitivity to the communities in which they operate accelerated the development of formal ethics regimes.
Maintaining an ethical status is the responsibility of the manager of the business. According to a 1990 article in the Journal of Business Ethics, "Managing ethical behavior is one of the most pervasive and complex problems facing business organizations today."
History
Business ethics reflects the norms of each historical period. As time passes, norms evolve, causing accepted behaviors to become objectionable. Business ethics and the resulting behavior evolved as well. Business was involved in slavery, colonialism, and the Cold War.
The term 'business ethics' came into common use in the United States in the early 1970s. By the mid-1980s at least 500 courses in business ethics reached 40,000 students, using some twenty textbooks and at least ten casebooks supported by professional societies, centers and journals of business ethics. The Society for Business Ethics was founded in 1980. European business schools adopted business ethics after 1987 commencing with the European Business Ethics Network. In 1982 the first single-authored books in the field appeared.
Firms began highlighting their ethical stature in the late 1980s and early 1990s, possibly in an attempt to distance themselves from the business scandals of the day, such as the savings and loan crisis. The concept of business ethics caught the attention of academics, media, and business firms by the end of the Cold War. However, criticism of business practices was attacked for infringing the freedom of entrepreneurs, and critics were accused of supporting communists. This scuttled the discourse of business ethics both in media and academia. The Defense Industry Initiative on Business Ethics and Conduct (DII) was created to support corporate ethical conduct. This era saw the rise of belief in and support for self-regulation and free trade, which lifted tariffs and barriers and allowed businesses to merge and divest in an increasingly global atmosphere.
Religious and philosophical origins
One of the earliest written treatments of business ethics is found in the Tirukkuṛaḷ, a Tamil book dated variously from 300 BCE to the 7th century CE and attributed to Thiruvalluvar. Many verses discuss business ethics: verse 113 in particular, adapting to a changing environment in verses 474, 426, and 140, and learning the intricacies of different tasks in verses 462 and 677.
Overview
Business ethics reflects the philosophy of business, of which one aim is to determine the fundamental purposes of a company. Business purpose expresses the company's reason for existing. Modern discussion on the purpose of business has been freshened by views from thinkers such as Richard R. Ellesworth, Peter Drucker, and Nikos Mourkogiannis. Earlier views such as Milton Friedman's held that the purpose of a business organization is to make profit for shareholders. Nevertheless, the purpose of maximizing shareholders' wealth often "fails to energize employees". In practice, many non-shareholders also benefit from a firm's economic activity: employees, through contractual compensation and its broader impact; consumers, by the tangible or non-tangible value derived from their purchase choices; and society as a whole, through taxation and/or the company's involvement in social action when it occurs. On the other hand, if a company's purpose is to maximize shareholder returns, then sacrificing profits for other concerns is a violation of its fiduciary responsibility. Corporate entities are legal persons, but this does not mean they are legally entitled to all of the rights and liabilities of natural persons.
Ethics are the rules or standards that govern our decisions on a daily basis. Many equate "ethics" with conscience or a simplistic sense of "right" and "wrong". Others would say that ethics is an internal code that governs an individual's conduct, ingrained into each person by family, faith, tradition, community, laws, and personal mores. Corporations and professional organizations, particularly licensing boards, generally will have a written code of ethics that governs the standards of professional conduct expected of all in the field.
It is important to note that "law" and "ethics" are not synonymous, nor are the "legal" and "ethical" courses of action in a given situation necessarily the same. Statutes and regulations passed by legislative bodies and administrative boards set forth the "law". Slavery once was legal in the US, but one certainly would not say enslaving another was an "ethical" act.
Economist Milton Friedman wrote that corporate executives' "responsibility ... generally will be to make as much money as possible while conforming to their basic rules of the society, both those embodied in law and those embodied in ethical custom". Friedman also said, "the only entities who can have responsibilities are individuals ... A business cannot have responsibilities. So the question is, do corporate executives, provided they stay within the law, have responsibilities in their business activities other than to make as much money for their stockholders as possible? And my answer to that is, no, they do not." This view is known as the Friedman doctrine. A multi-country 2011 survey found support for this view among the "informed public" ranging from 30 to 80%. Ronald Duska and Jacques Cory have described Friedman's argument as consequentialist or utilitarian rather than pragmatic: Friedman's argument implies that unrestrained corporate freedom would benefit the most people in the long term. Duska argued that Friedman failed to differentiate two very different aspects of business: (1) the motive of individuals, who are generally motivated by profit to participate in business, and (2) the socially sanctioned purpose of business, or the reason why people allow businesses to exist, which is to provide goods and services to people. So Friedman was wrong that making a profit is the only concern of business, Duska argued.
Peter Drucker once said, "There is neither a separate ethics of business nor is one needed", implying that standards of personal ethics cover all business situations. However, Drucker in another instance said that the ultimate responsibility of company directors is not to harm—primum non nocere.
Philosopher and author Ayn Rand put forth her idea of rational egoism, which also applies to business ethics. She stresses the position of the entrepreneur, who is responsible for his own happiness; the business is a means to that happiness, the entrepreneur is not required to serve the interest of anyone else, and no one is entitled to his or her work.
Another view of business is that it must exhibit corporate social responsibility (CSR): an umbrella term indicating that an ethical business must act as a responsible citizen of the communities in which it operates even at the cost of profits or other goals. In the US and most other nations, corporate entities are legally treated as persons in some respects. For example, they can hold title to property, sue and be sued and are subject to taxation, although their free speech rights are limited. This can be interpreted to imply that they have independent ethical responsibilities. Duska argued that stakeholders expect a business to be ethical and that violating that expectation must be counterproductive for the business.
Ethical issues include the rights and duties between a company and its employees, suppliers, customers, and neighbors, and its fiduciary responsibility to its shareholders. Issues concerning relations between different companies include hostile takeovers and industrial espionage. Related issues include corporate governance; corporate social entrepreneurship; political contributions; legal issues such as the ethical debate over introducing a crime of corporate manslaughter; and the marketing of corporations' ethics policies.
According to research published by the Institute of Business Ethics and Ipsos MORI in late 2012, the three major areas of public concern regarding business ethics in Britain are executive pay, corporate tax avoidance and bribery and corruption.
The ethical standards of an entire organization can be damaged if a corporate psychopath is in charge. This affects not only the company and its outcomes but also the employees who work under a corporate psychopath. Corporate psychopaths can rise in a company through manipulation, scheming, and bullying, in a way that hides their true character and intentions.
Functional business areas
Finance
Fundamentally, finance is a social science discipline. The discipline borders behavioral economics, sociology, economics, accounting, and management. It concerns technical issues such as the mix of debt and equity, dividend policy, the evaluation of alternative investment projects, options, futures, swaps and other derivatives, portfolio diversification, and many others. Finance is often mistaken for a discipline free from ethical burdens. The 2008 financial crisis caused critics to challenge the ethics of the executives in charge of U.S. and European financial institutions and financial regulatory bodies. Finance ethics is overlooked for another reason: issues in finance are often addressed as matters of law rather than ethics.
Finance paradigm
Aristotle said, "the end and purpose of the polis is the good life". Adam Smith characterized the good life in terms of material goods and intellectual and moral excellences of character. Smith in his The Wealth of Nations commented, "All for ourselves, and nothing for other people, seems, in every age of the world, to have been the vile maxim of the masters of mankind." However, a section of economists influenced by the ideology of neoliberalism interpreted the objective of economics to be maximization of economic growth through accelerated consumption and production of goods and services. Neoliberal ideology promoted finance from its position as a component of economics to its core. Proponents of the ideology hold that unrestricted financial flows, if redeemed from the shackles of "financial repressions", best help impoverished nations to grow. The theory holds that open financial systems accelerate economic growth by encouraging foreign capital inflows, thereby enabling higher levels of savings, investment, employment, productivity, and "welfare", along with containing corruption. Neoliberals recommended that governments open their financial systems to the global market with minimal regulation over capital flows. The recommendations, however, met with criticisms from various schools of ethical philosophy. Some pragmatic ethicists found these claims to be unfalsifiable and a priori, although neither of these makes the recommendations false or unethical per se. Raising economic growth to the highest value necessarily means that welfare is subordinate, although advocates dispute this, saying that economic growth provides more welfare than known alternatives. Since history shows that neither regulated nor unregulated firms always behave ethically, neither regime offers an ethical panacea.
Neoliberal recommendations to developing countries to unconditionally open up their economies to transnational finance corporations were fiercely contested by some ethicists. The claim that deregulation and the opening up of economies would reduce corruption was also contested.
Dobson observes, "a rational agent is simply one who pursues personal material advantage ad infinitum. In essence, to be rational in finance is to be individualistic, materialistic, and competitive. Business is a game played by individuals, as with all games the object is to win, and winning is measured in terms solely of material wealth. Within the discipline, this rationality concept is never questioned, and has indeed become the theory-of-the-firm's sine qua non". Financial ethics is in this view a mathematical function of shareholder wealth. Such simplifying assumptions were once necessary for the construction of mathematically robust models. However, signalling theory and agency theory extended the paradigm to greater realism.
Other issues
Fairness in trading practices, trading conditions, financial contracting, sales practices, consultancy services, tax payments, internal audit, external audit, and executive compensation also falls under the umbrella of finance and accounting. Particular corporate ethical/legal abuses include creative accounting, earnings management, misleading financial analysis, insider trading, securities fraud, bribery/kickbacks, and facilitation payments. Outside of corporations, bucket shops and forex scams are criminal manipulations of financial markets. Cases include the accounting scandals at Enron, WorldCom, and Satyam.
Human resource management
Human resource management occupies the sphere of activity of recruitment and selection, orientation, performance appraisal, training and development, industrial relations, and health and safety issues. Business ethicists differ in their orientation towards labor ethics. Some assess human resource policies according to whether they support an egalitarian workplace and the dignity of labor.
Issues including employment itself, privacy, compensation in accord with comparable worth, collective bargaining (and/or its opposite) can be seen either as inalienable rights or as negotiable.
Discrimination issues include discrimination by age (preferring the young or the old), gender and sexual harassment, race, religion, disability, weight, and attractiveness. A common approach to remedying discrimination is affirmative action.
Once hired, employees have the right to occasional cost-of-living increases, as well as raises based on merit. Promotions, however, are not a right, and there are often fewer openings than qualified applicants. It may seem unfair if an employee who has been with a company longer is passed over for a promotion, but it is not unethical. It is only unethical if the employer did not give the employee proper consideration or used improper criteria for the promotion. Each employer should know the distinction between what is unethical and what is illegal: an illegal action breaks the law, whereas an action that merely seems morally incorrect is unethical. In the workplace, conduct can be unethical without being illegal, and employers should follow the guidelines put in place by OSHA (Occupational Safety and Health Administration), EEOC (Equal Employment Opportunity Commission), and other bodies whose rules are legally binding.
Potential employees have ethical obligations to employers, involving intellectual property protection and whistle-blowing.
Employers must consider workplace safety, which may involve modifying the workplace or providing appropriate training or hazard disclosure. Requirements differ with the location and type of work taking place, and employers may need to comply with specific standards to protect both employees and non-employees.
Larger economic issues such as immigration, trade policy, globalization and trade unionism affect workplaces and have an ethical dimension, but are often beyond the purview of individual companies.
Trade unions
Trade unions, for example, may push employers to establish due process for workers, but may also cause job loss by demanding unsustainable compensation and work rules.
Unionized workplaces may confront union busting and strike breaking and face the ethical implications of work rules that advantage some workers over others.
Management strategy
Among the many people management strategies that companies employ are a "soft" approach that regards employees as a source of creative energy and participants in workplace decision making, a "hard" version explicitly focused on control and Theory Z that emphasizes philosophy, culture and consensus. None ensure ethical behavior. Some studies claim that sustainable success requires a humanely treated and satisfied workforce.
Sales and marketing
Marketing ethics came of age only as late as the 1990s. Marketing ethics has been approached from the ethical perspectives of virtue ethics, deontology, consequentialism, pragmatism, and relativism.
Ethics in marketing deals with the principles, values, and/or ideas by which marketers (and marketing institutions) ought to act. Marketing ethics is also contested terrain, beyond the previously described issue of potential conflicts between profitability and other concerns. Ethical marketing issues include marketing redundant or dangerous products/services; transparency about environmental risks; transparency about product ingredients, such as genetically modified organisms, possible health risks, financial risks, and security risks; respect for consumer privacy and autonomy; advertising truthfulness; and fairness in pricing and distribution.
According to Borgerson and Schroeder (2008), marketing can influence individuals' perceptions of and interactions with other people, implying an ethical responsibility to avoid distorting those perceptions and interactions.
Marketing ethics involves pricing practices, including illegal actions such as price fixing and legal actions including price discrimination and price skimming. Certain promotional activities have drawn fire, including greenwashing, bait and switch, shilling, viral marketing, spam (electronic), pyramid schemes and multi-level marketing. Advertising has raised objections about attack ads, subliminal messages, sex in advertising and marketing in schools.
Inter-organizational relationships
Scholars in business and management have paid much attention to the ethical issues in the different forms of relationships between organizations such as buyer-supplier relationships, networks, alliances, or joint ventures. Drawing in particular on Transaction Cost Theory and Agency Theory, they note the risk of opportunistic and unethical practices between partners through, for instance, shirking, poaching, and other deceitful behaviors. In turn, research on inter-organizational relationships has observed the role of formal and informal mechanisms to both prevent unethical practices and mitigate their consequences. It especially discusses the importance of formal contracts and relational norms between partners to manage ethical issues.
Emerging issues
As the most important element of a business, stakeholders are mainly concerned with determining whether the business is behaving ethically or unethically. A business's actions and decisions should be ethical in the first place, before they grow into ethical or even legal problems. "In the case of the government, community, and society what was merely an ethical issue can become a legal debate and eventually law."
Some emerging ethical issues are:
Corporate environmental responsibility: Businesses' impacts on ecosystems can no longer be neglected, and ecosystems' impacts on business activities are becoming more imminent.
Fairness: The three aspects that motivate people to be fair are equality, optimization, and reciprocity. Fairness is the quality of being just, equitable, and impartial.
Misuse of company time and resources: This topic may not seem a very common one, but it is very important, as it costs companies billions of dollars on a yearly basis. This misuse stems from late arrivals, leaving early, long lunch breaks, inappropriate sick days, etc. It has been observed as a major form of misconduct in businesses today. One of the most common ways employees misuse company time and resources is by using the company computer for personal use.
Consumer fraud: There are many different types of fraud, namely friendly fraud, return fraud, wardrobing, price arbitrage, and returning stolen goods. Fraud is a major unethical practice within businesses to which special attention should be paid. Consumer fraud occurs when consumers attempt to deceive businesses for their own benefit.
Abusive behavior: A common ethical issue among employees, abusive behavior consists of inflicting intimidating acts on other employees. Such acts include harassment, using profanity, physically threatening or insulting someone, and being deliberately annoying.
Production
This area of business ethics usually deals with the duties of a company to ensure that products and production processes do not needlessly cause harm. Since few goods and services can be produced and consumed with zero risk, determining the ethical course can be difficult. In some cases, consumers demand products that harm them, such as tobacco products. Production may have environmental impacts, including pollution, habitat destruction, and urban sprawl. The downstream effects of technologies such as nuclear power, genetically modified food, and mobile phones may not be well understood. While the precautionary principle may prohibit introducing new technology whose consequences are not fully understood, that principle would have prohibited most new technology introduced since the industrial revolution. Product-testing protocols have been attacked for violating the rights of both humans and animals. There are sources that provide information on companies that are environmentally responsible or do not test on animals.
Property
The etymological root of property is the Latin proprius, which refers to 'nature', 'quality', 'one's own', 'special characteristic', 'proper', 'intrinsic', 'inherent', 'regular', 'normal', 'genuine', 'thorough, complete, perfect', etc. The word property is value-loaded, associated with the personal qualities of propriety and respectability, and also implies questions relating to ownership. A 'proper' person owns and is true to herself or himself, and is thus genuine, perfect and pure.
Modern history of property rights
Modern discourse on property emerged by the turn of the 17th century within theological discussions of that time. For instance, John Locke justified property rights saying that God had made "the earth, and all inferior creatures, [in] common to all men".
In 1802 utilitarian Jeremy Bentham stated, "property and law are born together and die together".
One argument for property ownership is that it enhances individual liberty by extending the line of non-interference by the state or others around the person. Seen from this perspective, property right is absolute and property has a special and distinctive character that precedes its legal protection. Blackstone conceptualized property as the "sole and despotic dominion which one man claims and exercises over the external things of the world, in total exclusion of the right of any other individual in the universe".
Slaves as property
During the seventeenth and eighteenth centuries, slavery spread to European colonies including America, where colonial legislatures defined the legal status of slaves as a form of property.
Combined with theological justification, property was taken to be essentially natural, ordained by God. Property, which later gained meaning as ownership, appeared natural to Locke, Jefferson, and many 18th- and 19th-century intellectuals as land, labor, or idea; property right over slaves had the same theological and essentialized justification. It was even held that the property in slaves was a sacred right. Wiecek says, "Yet slavery was more clearly and explicitly established under the Constitution than it had been under the Articles". In an 1857 judgment, US Supreme Court Chief Justice Roger B. Taney said, "The right of property in a slave is distinctly and expressly affirmed in the Constitution."
Natural right vs social construct
Neoliberals hold that private property rights are a non-negotiable natural right. Davies counters with "property is no different from other legal categories in that it is simply a consequence of the significance attached by law to the relationships between legal persons." Singer claims, "Property is a form of power, and the distribution of power is a political problem of the highest order". Rose finds, "'Property' is only an effect, a construction, of relationships between people, meaning that its objective character is contestable. Persons and things are 'constituted' or 'fabricated' by legal and other normative techniques." Singer observes, "A private property regime is not, after all, a Hobbesian state of nature; it requires a working legal system that can define, allocate, and enforce property rights." Davis claims that common law theory generally favors the view that "property is not essentially a 'right to a thing', but rather a separable bundle of rights subsisting between persons which may vary according to the context and the object which is at stake".
In common parlance property rights involve a bundle of rights including occupancy, use and enjoyment, and the right to sell, devise, give, or lease all or part of these rights. Custodians of property have obligations as well as rights. Michelman writes, "A property regime thus depends on a great deal of cooperation, trustworthiness, and self-restraint among the people who enjoy it."
Menon claims that the autonomous individual, responsible for his/her own existence is a cultural construct moulded by Western culture rather than the truth about the human condition. Penner views property as an "illusion"—a "normative phantasm" without substance.
In the neoliberal literature, the property is part of the private side of a public/private dichotomy and acts a counterweight to state power. Davies counters that "any space may be subject to plural meanings or appropriations which do not necessarily come into conflict".
Private property has never been a universal doctrine, although since the end of the Cold War it has become nearly so. Some societies, e.g., Native American bands, held land, if not all property, in common. When groups came into conflict, the victor often appropriated the loser's property. The rights paradigm tended to stabilize the distribution of property holdings on the presumption that title had been lawfully acquired.
Property does not exist in isolation, and so neither do property rights. Bryan claimed that property rights describe relations among people and not just relations between people and things. Singer holds that the idea that owners have no legal obligations to others wrongly supposes that property rights hardly ever conflict with other legally protected interests. Singer continues, implying that legal realists "did not take the character and structure of social relations as an important independent factor in choosing the rules that govern market life". The ethics of property rights begins with recognizing the vacuous nature of the notion of property.
Intellectual property
Intellectual property (IP) encompasses expressions of ideas, thoughts, codes, and information. "Intellectual property rights" (IPR) treat IP as a kind of real property, subject to analogous protections, rather than as a reproducible good or service. Boldrin and Levine argue that "government does not ordinarily enforce monopolies for producers of other goods. This is because it is widely recognized that monopoly creates many social costs. Intellectual monopoly is no different in this respect. The question we address is whether it also creates social benefits commensurate with these social costs."
International standards relating to Intellectual Property Rights are enforced through Agreement on Trade-Related Aspects of Intellectual Property Rights. In the US, IP other than copyrights is regulated by the United States Patent and Trademark Office.
The US Constitution included the power to protect intellectual property, empowering the Federal government "to promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries". Boldrin and Levine see no value in such state-enforced monopolies, stating, "we ordinarily think of innovative monopoly as an oxymoron". Further, they comment that 'intellectual property' "is not like ordinary property at all, but constitutes a government grant of a costly and dangerous private monopoly over ideas. We show through theory and example that intellectual monopoly is not necessary for innovation and as a practical matter is damaging to growth, prosperity, and liberty". Steelman defends patent monopolies, writing, "Consider prescription drugs, for instance. Such drugs have benefited millions of people, improving or extending their lives. Patent protection enables drug companies to recoup their development costs because for a specific period of time they have the sole right to manufacture and distribute the products they have invented." The court cases by 39 pharmaceutical companies against South Africa's 1997 Medicines and Related Substances Control Amendment Act, which intended to provide affordable HIV medicines, have been cited as a harmful effect of patents.
One attack on IPR is moral rather than utilitarian, claiming that inventions are mostly a collective, cumulative, path dependent, social creation and therefore, no one person or firm should be able to monopolize them even for a limited period. The opposing argument is that the benefits of innovation arrive sooner when patents encourage innovators and their investors to increase their commitments.
Roderick T. Long, a libertarian philosopher, argued:
Machlup concluded that patents do not have the intended effect of enhancing innovation. Self-declared anarchist Proudhon, in his seminal 1847 work, noted, "Monopoly is the natural opposite of competition," and continued, "Competition is the vital force which animates the collective being: to destroy it, if such a supposition were possible, would be to kill society."
Mindeli and Pipiya argued that the knowledge economy is an economy of abundance because it relies on the "infinite potential" of knowledge and ideas rather than on the limited resources of natural resources, labor, and capital. Allison envisioned an egalitarian distribution of knowledge. Kinsella claimed that IPR create artificial scarcity and reduce equality. Bouckaert wrote, "Natural scarcity is that which follows from the relationship between man and nature. Scarcity is natural when it is possible to conceive of it before any human, institutional, contractual arrangement. Artificial scarcity, on the other hand, is the outcome of such arrangements. Artificial scarcity can hardly serve as a justification for the legal framework that causes that scarcity. Such an argument would be completely circular. On the contrary, artificial scarcity itself needs a justification." Corporations fund much IP creation and can acquire IP they do not create, to which Menon and others have objected. Andersen claims that IPR have increasingly become an instrument in eroding the public domain.
Ethical and legal issues include patent infringement, copyright infringement, trademark infringement, patent and copyright misuse, submarine patents, biological patents, patent, copyright, and trademark trolling, employee raiding and monopolizing talent, bioprospecting, biopiracy, industrial espionage, and digital rights management.
Notable IP copyright cases include A&M Records, Inc. v. Napster, Inc., Eldred v. Ashcroft, and Disney's lawsuit against the Air Pirates.
International issues
While business ethics emerged as a field in the 1970s, international business ethics did not emerge until the late 1990s, looking back on the international developments of that decade. Many new practical issues arose out of the international context of business. Theoretical issues such as cultural relativity of ethical values receive more emphasis in this field. Other, older issues can be grouped here as well. Issues and subfields include:
The search for universal values as a basis for international commercial behavior
Comparison of business ethical traditions in different countries and on the basis of their respective GDP and corruption rankings
Comparison of business ethical traditions from various religious perspectives
Ethical issues arising out of international business transactions—e.g., bioprospecting and biopiracy in the pharmaceutical industry; the fair trade movement; transfer pricing.
Issues such as globalization and cultural imperialism
Varying global standards—e.g., the use of child labor
The way in which multinationals take advantage of international differences, such as outsourcing production (e.g. clothes) and services (e.g. call centers) to low-wage countries
The permissibility of international commerce with pariah states
Foreign countries often use dumping as a competitive threat, selling products at prices lower than their normal value. This can lead to problems in domestic markets, which find it difficult to compete with the pricing set by foreign markets. In 2009, the International Trade Commission was researching anti-dumping laws. Dumping is often seen as an ethical issue, as larger companies are taking advantage of other, less economically advanced companies.
Issues
Ethical issues often arise in business settings, whether through business transactions or the formation of new business relationships. Ethics also has a strong focus in the auditing field, where the type of verification can be directly dictated by ethical theory. An ethical issue in a business atmosphere may refer to any situation that requires business associates, as individuals or as a group (for example, a department or firm), to evaluate the morality of specific actions and subsequently make a decision amongst the choices. Some ethical issues of particular concern in today's evolving business market include honesty, integrity, professional behavior, environmental issues, harassment, and fraud, to name a few. A 2009 National Business Ethics Survey found that types of employee-observed ethical misconduct included abusive behavior (at a rate of 22 percent), discrimination (at a rate of 14 percent), improper hiring practices (at a rate of 10 percent), and company resource abuse (at a rate of percent).
The ethical issues associated with honesty are widespread and vary greatly in business, from the misuse of company time or resources to lying with malicious intent, engaging in bribery, or creating conflicts of interest within an organization. Honesty encompasses wholly the truthful speech and actions of an individual. Some cultures and belief systems even consider honesty to be an essential pillar of life, such as Confucianism and Buddhism (where it is referred to as sacca, part of the Four Noble Truths). Many employees lie in order to reach goals or avoid assignments or negative issues; however, sacrificing honesty in order to gain status or reap rewards poses potential problems for the overall ethical culture of the organization and jeopardizes organizational goals in the long run. Using company time or resources for personal use is also commonly viewed as unethical because it boils down to stealing from the company. The misuse of resources costs companies billions of dollars each year, averaging about 4.25 hours per week of stolen time alone, and employees' abuse of Internet services is another main concern. Bribery, on the other hand, is not only considered unethical in business practices, but is also illegal. In accordance with this, the Foreign Corrupt Practices Act was established in 1977 to deter international businesses from giving or receiving unwarranted payments and gifts intended to influence the decisions of executives and political officials. However, small payments known as facilitation payments are not considered unlawful under the Foreign Corrupt Practices Act if they are used towards regular public governance activities, such as permits or licenses.
Influential factors on business ethics
Many aspects of the work environment influence an individual's decision-making regarding ethics in the business world. When an individual is on the path of growing a company, many outside influences can pressure them to perform a certain way. The core of a person's performance in the workplace is rooted in their personal code of behavior. A person's personal code of ethics encompasses many different qualities, such as integrity, honesty, communication, respect, compassion, and common goals. In addition, the ethical standards set forth by a person's superiors often translate into their own code of ethics. The company's policy is the 'umbrella' of ethics that plays a major role in the personal development and decision-making processes that people make with respect to ethical behavior.
The ethics of a company and its individuals are heavily influenced by the state of their country. If a country is heavily plagued with poverty, large corporations continuously grow, but smaller companies begin to wither and are then forced to adapt and scavenge for any method of survival. As a result, the leadership of a company is often tempted to participate in unethical methods to obtain new business opportunities. Additionally, social media is arguably the most influential factor in ethics. Immediate access to so much information and the opinions of millions highly influences people's behaviors. The desire to conform with what is portrayed as the norm often manipulates our idea of what is morally and ethically sound. Popular trends on social media and the instant gratification received from participating in them quickly distort people's ideas and decisions.
Economic systems
Political economy and political philosophy have ethical implications, particularly regarding the distribution of economic benefits. John Rawls and Robert Nozick are both notable contributors. For example, Rawls has been interpreted as offering a critique of offshore outsourcing on social contract grounds.
Law and regulation
Laws are the written statutes, codes, and opinions of government organizations by which citizens, businesses, and persons present within a jurisdiction are expected to govern themselves or face legal sanction. Sanctions for violating the law can include (a) civil penalties, such as fines, pecuniary damages, and loss of licenses, property, rights, or privileges; (b) criminal penalties, such as fines, probation, imprisonment, or a combination thereof; or (c) both civil and criminal penalties.
Very often it is held that business is not bound by any ethics other than abiding by the law. Milton Friedman was the pioneer of this view. He held that corporations have the obligation to make a profit within the framework of the legal system, nothing more. Friedman made it explicit that the duty of business leaders is "to make as much money as possible while conforming to the basic rules of the society, both those embodied in the law and those embodied in ethical custom". Ethics for Friedman is nothing more than abiding by customs and laws. The reduction of ethics to abidance to laws and customs, however, has drawn serious criticism.
Counter to Friedman's logic, it is observed that legal procedures are technocratic, bureaucratic, rigid, and obligatory, whereas an ethical act is a conscientious, voluntary choice beyond normativity. Law is reactive: crime precedes law, and a law against a crime can only be passed after such a crime has occurred, so laws are blind to crimes they do not define. Further, as per law, "conduct is not criminal unless forbidden by law which gives advance warning that such conduct is criminal". The law also presumes the accused is innocent until proven guilty and that the state must establish the guilt of the accused beyond reasonable doubt. Under the liberal laws followed in most democracies, until the government prosecutor proves the firm guilty with the limited resources available to her, the accused is considered innocent. Though the liberal premise of law is necessary to protect individuals from being persecuted by government, it is not a sufficient mechanism to make firms morally accountable.
Implementation
Corporate policies
As part of more comprehensive compliance and ethics programs, many companies have formulated internal policies pertaining to the ethical conduct of employees. These policies can be simple exhortations in broad, highly generalized language (typically called a corporate ethics statement), or they can be more detailed policies, containing specific behavioral requirements (typically called corporate ethics codes). They are generally meant to identify the company's expectations of workers and to offer guidance on handling some of the more common ethical problems that might arise in the course of doing business. It is hoped that having such a policy will lead to greater ethical awareness, consistency in application, and the avoidance of ethical disasters.
An increasing number of companies also require employees to attend seminars regarding business conduct, which often include discussion of the company's policies, specific case studies, and legal requirements. Some companies even require their employees to sign agreements stating that they will abide by the company's rules of conduct.
Many companies are assessing the environmental factors that can lead employees to engage in unethical conduct. A competitive business environment may elicit unethical behavior; lying has become expected in fields such as trading. An example of this is the issues surrounding the unethical actions of the Salomon Brothers.
Not everyone supports corporate policies that govern ethical conduct. Some claim that ethical problems are better dealt with by depending upon employees to use their own judgment.
Others believe that corporate ethics policies are primarily rooted in utilitarian concerns and that they are mainly to limit the company's legal liability or to curry public favor by giving the appearance of being a good corporate citizen. Ideally, the company will avoid a lawsuit because its employees will follow the rules. Should a lawsuit occur, the company can claim that the problem would not have arisen if the employee had only followed the code properly.
Some corporations have tried to burnish their ethical image by creating whistle-blower protections, such as anonymity. Citi, for example, calls this the Ethics Hotline, though it is unclear whether firms such as Citi take offences reported to these hotlines seriously. Sometimes there is a disconnect between a company's code of ethics and its actual practices. Thus, whether or not such conduct is explicitly sanctioned by management, at worst this makes the policy duplicitous, and at best it is merely a marketing tool.
Jones and Parker wrote, "Most of what we read under the name business ethics is either sentimental common sense or a set of excuses for being unpleasant." Many manuals are procedural form-filling exercises unconcerned with the real ethical dilemmas. For instance, the US Department of Commerce ethics program treats business ethics as a set of instructions and procedures to be followed by 'ethics officers'; some others claim being ethical is just for the sake of being ethical. Business ethicists may trivialize the subject, offering standard answers that do not reflect a situation's complexity.
Richard DeGeorge wrote in regard to the importance of maintaining a corporate code:
Ethics officers
Following a series of fraud, corruption, and abuse scandals that affected the United States defense industry in the mid-1980s, the Defense Industry Initiative (DII) was created to promote ethical business practices and ethics management in multiple industries. Subsequent to these scandals, many organizations began appointing ethics officers (also referred to as "compliance" officers). In 1991, the Ethics & Compliance Officer Association, originally the Ethics Officer Association (EOA), was founded at the Center for Business Ethics at Bentley University as a professional association for ethics and compliance officers.
The passing of the Federal Sentencing Guidelines for Organizations in 1991 was another factor in many companies appointing ethics/compliance officers. These guidelines, intended to assist judges with sentencing, set standards organizations must follow to obtain a reduction in sentence should they be convicted of a federal offense.
Following the high-profile corporate scandals of companies like Enron, WorldCom and Tyco between 2001 and 2004, and following the passage of the Sarbanes–Oxley Act, many small and mid-sized companies also began to appoint ethics officers.
Often reporting to the chief executive officer, ethics officers focus on uncovering or preventing unethical and illegal actions. This is accomplished by assessing the ethical implications of the company's activities, making recommendations on ethical policies, and disseminating information to employees.
The effectiveness of ethics officers is not clear. The establishment of an ethics officer position is likely to be insufficient in driving ethical business practices without a corporate culture that values ethical behavior. These values and behaviors should be consistently and systemically supported by those at the top of the organization. Employees with strong community involvement, loyalty to employers, superiors, or owners, smart work practices, and trust among team members help inculcate such a corporate culture.
Sustainability initiatives
Many corporate and business strategies now include sustainability. In addition to the traditional environmental 'green' sustainability concerns, business ethics practices have expanded to include social sustainability. Social sustainability focuses on issues related to human capital in the business supply chain, such as workers' rights, working conditions, child labor, and human trafficking. Incorporation of these considerations is increasing as consumers and procurement officials demand documentation of a business's compliance with national and international initiatives, guidelines, and standards. Many industries have organizations dedicated to verifying the ethical delivery of products from start to finish, such as the Kimberley Process, which aims to stop the flow of conflict diamonds into international markets, or the Fair Wear Foundation, dedicated to sustainability and fairness in the garment industry.
Initiatives in sustainability encompass "green" topics, as well as social sustainability. Tao et al. refer to a variety of "green" business practices including green strategy, green design, green production and green operation. There are however many different ways in which sustainability initiatives can be implemented by a company.
Improving operations
An organization can implement sustainability initiatives by improving its operations and manufacturing processes so as to make them more aligned with environmental, social, and governance issues. Johnson & Johnson incorporates policies from the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, and the International Covenant on Economic, Social and Cultural Rights, applying these principles not only to members of its supply chain but also to internal operations. Walmart has made commitments to doubling its truck fleet efficiency by 2015 by replacing two-thirds of its fleet with more fuel-efficient trucks, including hybrids. Dell has integrated alternative, recycled, and recyclable materials in its products and packaging design, improving energy efficiency and design for end-of-life and recyclability. Dell plans to reduce the energy intensity of its product portfolio by 80% by 2020.
Board leadership
The board of a company can decide to lower executive compensation by a given percentage and give that percentage of compensation to a specific cause. This is an effort which can only be implemented from the top, as it will affect the compensation of all executives in the company. In Alcoa, an aluminum company based in the US, "1/5th of executive cash compensation is tied to safety, diversity, and environmental stewardship, which includes greenhouse gas emission reductions and energy efficiency" (Best Practices). This is not usually the case for most companies, where the board takes a uniform step towards environmental, social, and governance issues; it tends to happen only in companies directly linked to the utilities, energy, or materials industries, something which Alcoa, as an aluminum company, falls in line with. Instead, formal committees focused on environmental, social, and governance issues are more usually found in governance committees and audit committees rather than the board of directors. "According to research analysis done by Pearl Meyer in support of the NACD 2017 Director Compensation Report shows that among 1,400 public companies reviewed, only slightly more than five percent of boards have a designated committee to address ESG issues." (How compensation can).
Management accountability
Similar to board leadership, companies can create steering committees and other types of committees specialized for sustainability, with senior executives identified who are held accountable for meeting and constantly improving sustainability goals.
Executive compensation
Introducing bonus schemes that reward executives for meeting non-financial performance goals, including safety targets, greenhouse gas emission reduction targets, and goals engaging stakeholders to help shape the company's public policy positions. Companies such as Exelon have implemented policies like this.
Stakeholder engagement
Other companies keep sustainability within their strategy and goals, presenting findings at shareholder meetings and actively tracking metrics on sustainability. Companies such as PepsiCo, Heineken, and FIFCO take steps in this direction to implement sustainability initiatives (Best Practices). Companies such as Coca-Cola have actively tried to improve the efficiency of their water usage, hiring third-party auditors to evaluate their water management approach. FIFCO has also successfully led water-management initiatives.
Employee engagement
Implementation of sustainability projects by directly appealing to employees (typically through the human resources department) is another option for companies. This involves integrating sustainability into the company culture through hiring practices and employee training. General Electric is a company taking the lead in implementing initiatives in this manner. Bank of America directly engaged employees by implementing LEED (Leadership in Energy and Environmental Design) certified buildings, with a fifth of its buildings meeting these certifications.
Supply chain management
Establishing requirements not only for internal operations but also for first-tier and second-tier suppliers helps drive environmental and social expectations further down the supply chain. Companies such as Starbucks, FIFCO, and Ford Motor Company have implemented requirements that suppliers must meet to win their business. Starbucks has led efforts in engaging suppliers and the local communities where they operate to accelerate investment in sustainable farming, and it set a goal of ethically sourcing 100% of its coffee beans by 2015.
Transparency
By revealing decision-making data about how sustainability was reached, companies can give away insights that help others across the industry and beyond make more sustainable decisions. Nike launched its "Making" app in 2013, which released data about the sustainability of the materials it was using. This ultimately allows other companies to make more sustainable design decisions and create lower-impact products.
Academic discipline
As an academic discipline, business ethics emerged in the 1970s. Since no academic business ethics journals or conferences existed, researchers published in general management journals and attended general conferences. Over time, specialized peer-reviewed journals appeared, and more researchers entered the field. Corporate scandals in the earlier 2000s increased the field's popularity. As of 2009, sixteen academic journals devoted to various business ethics issues existed, with Journal of Business Ethics and Business Ethics Quarterly considered the leaders. Journal of Business Ethics Education publishes articles specifically about education in business ethics.
The International Business Development Institute is a global non-profit organization that represents 217 nations and all 50 U.S. states. It offers a Charter in Business Development that focuses on ethical business practices and standards. The Charter is directed by Harvard, MIT, and Fulbright scholars, and it includes graduate-level coursework in economics, politics, marketing, management, technology, and legal aspects of business development as they pertain to business ethics. IBDI also oversees the International Business Development Institute of Asia, which provides individuals living in 20 Asian nations the opportunity to earn the Charter.
Religious views
Sharia law, followed by many Muslims, specifically prohibits charging interest on loans in banking. Traditional Confucian thought discourages profit-seeking. Christianity offers the Golden Rule command: "Therefore all things whatsoever ye would that men should do to you, do ye even so to them: for this is the law and the prophets."
According to the article "Theory of the real economy", there is a more narrow point of view from the Christianity faith towards the relationship between ethics and religious traditions. This article stresses how Christianity is capable of establishing reliable boundaries for financial institutions. One criticism comes from Pope Benedict by describing the "damaging effects of the real economy of badly managed and largely speculative financial dealing." It is mentioned that Christianity has the potential to transform the nature of finance and investment but only if theologians and ethicist provide more evidence of what is real in the economic life. Business ethics receives an extensive treatment in Jewish thought and Rabbinic literature, both from an ethical (Mussar) and a legal (Halakha) perspective; see article Jewish business ethics for further discussion.
According to the article "Indian Philosophy and Business Ethics: A Review", by Chandrani Chattopadyay, Hindus follow "Dharma" as Business Ethics and unethical business practices are termed "Adharma". Businessmen are supposed to maintain steady-mindedness, self-purification, non-violence, concentration, clarity and control over senses. Books like Bhagavat Gita and Arthashastra contribute a lot towards conduct of ethical business.
Related disciplines
Business ethics is related to philosophy of economics, the branch of philosophy that deals with the philosophical, political, and ethical underpinnings of business and economics. Business ethics operates on the premise, for example, that the ethical operation of a private business is possible—those who dispute that premise, such as libertarian socialists (who contend that "business ethics" is an oxymoron) do so by definition outside of the domain of business ethics proper.
The philosophy of economics also deals with questions such as what, if any, are the social responsibilities of a business; business management theory; theories of individualism vs. collectivism; free will among participants in the marketplace; the role of self interest; invisible hand theories; the requirements of social justice; and natural rights, especially property rights, in relation to the business enterprise.
Business ethics is also related to political economy, which is economic analysis from political and historical perspectives. Political economy deals with the distributive consequences of economic actions.
See also
B Corporation (certification)
Business culture
Business law
Corporate behaviour
Corporate crime
Corporate social responsibility
Eastern ethics in business
Ethical altruism / Ethical egoism
Ethical code
Ethical consumerism
Ethical implications in contracts
Ethical job
Ethicism
Evil corporation
Moral psychology
Optimism bias
Organizational ethics
Penny stock scam
Philosophy and economics
Political corruption
Strategic misrepresentation
Strategic planning
Work ethic
Protestant work ethic
Notes
References
General references
Further reading
External links
Applied ethics
Industrial and organizational psychology | Business ethics | Biology | 10,734 |
2,902,528 | https://en.wikipedia.org/wiki/32%20Andromedae | 32 Andromedae, abbreviated 32 And, is a star in the northern constellation of Andromeda. 32 Andromedae is the Flamsteed designation. It is faintly visible to the naked eye with an apparent visual magnitude of 5.30. The distance to 32 And, as estimated from its annual parallax shift of , is around 331 light years. It is moving closer to the Earth with a heliocentric radial velocity of −5 km/s.
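The parallax value itself is elided above, but the quoted distance pins it down via the standard relation; a minimal worked form, assuming only the ~331 light-year figure given in the text (1 pc ≈ 3.2616 ly):

```latex
d\,[\mathrm{pc}] = \frac{1}{p\,[\mathrm{arcsec}]}, \qquad
d \approx \frac{331\ \mathrm{ly}}{3.2616\ \mathrm{ly/pc}} \approx 101.5\ \mathrm{pc}
\;\Rightarrow\; p \approx \frac{1}{101.5} \approx 0.00985'' \approx 9.9\ \mathrm{mas}
```

That is, the stated distance corresponds to a parallax of roughly ten milliarcseconds.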
With an age of 420 million years, this is a red giant star with a stellar classification of G8 III, indicating it has consumed the hydrogen at its core and evolved off the main sequence. It has 2.8 times the mass of the Sun and has expanded to 12 times the Sun's radius. The star is radiating 90 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 5,107 K.
References
External links
Image 26 Andromedae
G-type giants
Andromeda (constellation)
Durchmusterung objects
Andromedae, 32
003817
003231
0175 | 32 Andromedae | Astronomy | 229 |
72,021,555 | https://en.wikipedia.org/wiki/Sonja%20Lapajne%20Oblak | Sonja Lapajne Oblak (July 15, 1906 – September 29, 1993) was a Slovenian architect. She was the first Slovenian woman to graduate as a civil engineer from the Faculty of Technology in Ljubljana and Slovenia's first female urban planner. She was a member of the Partisans during the Second World War, and she survived incarceration in the Ravensbrück concentration camp before playing a part in rebuilding Yugoslavia in the postwar period.
Early life and education
Sonja Lapajne was born in Šentvid pri Ljubljani in 1906 and baptized Zofija-Sonja. Her parents were Antonija and Živko Lapajne; her father was a prominent medical doctor specialising in the treatment of tuberculosis and in public health work, including running a hygiene institute with an interest in eugenics after the First World War.
She became the first Slovenian woman to graduate as a civil engineer from the Faculty of Technology in Ljubljana in 1932.
Career
From 1934 to 1943 she worked as a structural engineer for the technical department of the royal administration of the province of Drava Banate in Ljubljana, supervising the construction of buildings planned by the state at the time. She collaborated with prominent architects of the time, including Jože Plečnik, Emil Navinšek, Vinko Glanz, and Edvard Ravnikar. She performed the structural calculations for the world's first corridor-free reinforced concrete school building, designed by the architect Navinšek in 1936.
She worked on the static calculations for the construction of reinforced concrete buildings and supervised their creation. Buildings she worked on included Ljubljana's Gimnazija Bežigrad High School, the Gallery of Modern Art and the National and University Library, and the King Hotel in Rogaška Slatina.
Second World War
In 1941 she joined the Yugoslav Partisans, a resistance movement against the Axis forces during the Second World War, led by the Communist Party of Yugoslavia (KPJ) under the leadership of Josip Broz Tito. By 1943 she was the party secretary of the Liberation Front, but she was captured and imprisoned by the Italians. After the Italian capitulation, she was interned in the German Ravensbrück concentration camp, where she remained until the end of the war.
Postwar career
Lapajne Oblak became Slovenia's first female urban planner. After the war she worked in leading construction companies in Yugoslavia and as an urban planner. In the 1950s, she was involved in the development plan for the Mura Valley region in northeastern Slovenia. Until her retirement in 1969, she was the director of the Institute for Architecture, Urban Planning, and Civil Engineering in Ljubljana.
Sonja Lapajne Oblak died in 1993 in Ljubljana.
Awards
Order of Merits for the People with Golden Star (1976)
Order of the Republic with Silver Wreath (1973)
Order of Brotherhood and Unity, 2nd class
Order of Labour, 2nd class
Order of Merits for the People, 3rd class (1946)
Commemorative Medal of the Partisans of 1941
Commemoration
Sonja Lapajne Oblak featured in the exhibition To the Fore: Female Pioneers of Slovenian Architecture, Civil Engineering, and Design at the DESSA Gallery, Ljubljana, in 2017.
In 2018 she was featured in In the Foreground: Pioneering Women of Slovenian Architecture, Construction and Design, an outdoor exhibition organised by the Slovenian Academy of Sciences along the promenade on the Krakov Embankment. The other 19 women featured were Darinka Battelino, Alenka Kham Pičman, Janja Lap, Dana Pajnič, Lidija Podbregar, Barbara Rot, Olga Rusanova, Erna Tomšič, Mojca Vogelnik, Vladimira Bratuž, Majda Dobravec Lajovic, Magda Fornazarič Kocmut, Marta Ivanšek, Nives Kalin Vehovar, Juta Krulc, Seta Mušič, Dušana Šantel Kanoni, Gizela Šuklje and Branka Tancig Novak.
References
1906 births
1993 deaths
Slovenian architects
Slovene Partisans
Slovenian women architects
20th-century Slovenian architects
20th-century architects
Yugoslav architects
Urban planners
Slovenian urban planners
Yugoslav urban planners
Women urban planners
Civil engineers
Female resistance members of World War II
Slovenian women engineers
Slovenian engineers | Sonja Lapajne Oblak | Engineering | 854 |
32,050,260 | https://en.wikipedia.org/wiki/Cloud%20manufacturing | Cloud manufacturing (CMfg) is a new manufacturing paradigm developed from existing advanced manufacturing models (e.g., ASP, AM, NM, MGrid) and enterprise information technologies under the support of cloud computing, Internet of Things (IoT), virtualization and service-oriented technologies, and advanced computing technologies. It transforms manufacturing resources and manufacturing capabilities into manufacturing services, which can be managed and operated in an intelligent and unified way to enable the full sharing and circulating of manufacturing resources and manufacturing capabilities. CMfg can provide safe and reliable, high quality, cheap and on-demand manufacturing services for the whole lifecycle of manufacturing. The concept of manufacturing here refers to big manufacturing that includes the whole lifecycle of a product (e.g. design, simulation, production, test, maintenance).
The concept of Cloud manufacturing was initially proposed by the research group led by Prof. Bo Hu Li and Prof. Lin Zhang in China in 2010.
Related discussions and research were conducted thereafter, and some definitions similar to cloud manufacturing (e.g. Cloud-Based Design and Manufacturing (CBDM)) were introduced.
Cloud manufacturing is a type of parallel, networked, and distributed system consisting of an integrated and inter-connected virtualized service pool (manufacturing cloud) of manufacturing resources and capabilities as well as capabilities of intelligent management and on-demand use of services to provide solutions for all kinds of users involved in the whole lifecycle of manufacturing.
Types
Cloud Manufacturing can be divided into two categories.
The first category concerns deploying manufacturing software on the cloud, i.e. a "manufacturing version" of cloud computing. CAx software can be supplied as a service on the Manufacturing Cloud (MCloud).
The second category has a broader scope, cutting across production, management, design and engineering abilities in a manufacturing business. Unlike with computing and data storage, manufacturing involves physical equipment, monitors, materials and so on. In this kind of Cloud Manufacturing system, both material and non-material facilities are implemented on the Manufacturing Cloud to support the whole supply chain. Costly resources are shared on the network. This means that the utilisation rate of rarely used equipment rises and the cost of expensive equipment is reduced. According to the concept of Cloud technology, there will not be direct interaction between Cloud Users and Service Providers. The Cloud User should neither manage nor control the infrastructure and manufacturing applications. As a matter of fact, the former can be considered part of the latter.
In a CMfg system, various manufacturing resources and abilities can be intelligently sensed and connected into the wider Internet, and automatically managed and controlled using IoT technologies (e.g., RFID, wired and wireless sensor networks, embedded systems). The manufacturing resources and abilities are then virtualized and encapsulated into different manufacturing cloud services (MCSs), which can be accessed, invoked, and deployed based on knowledge by using virtualization technologies, service-oriented technologies, and cloud computing technologies. The MCSs are classified and aggregated according to specific rules and algorithms, and different kinds of manufacturing clouds are constructed. Different users can search for and invoke the qualified MCSs from the related manufacturing cloud according to their needs, and assemble them into a virtual manufacturing environment or solution to complete their manufacturing tasks across the whole life cycle of manufacturing processes, under the support of cloud computing, service-oriented technologies, and advanced computing technologies.
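The lifecycle sketched in this paragraph (sense, virtualize and encapsulate, aggregate, then search and invoke) is essentially a service-registry pattern. A minimal Python sketch of that idea follows; the class and method names are hypothetical and not drawn from any CMfg standard:

```python
from dataclasses import dataclass, field

@dataclass
class ManufacturingCloudService:
    """A virtualized manufacturing resource or capability (an 'MCS')."""
    name: str
    category: str                      # e.g. "design", "production", "test"
    capabilities: set = field(default_factory=set)

class ManufacturingCloud:
    """Aggregates registered MCSs and answers on-demand service queries."""
    def __init__(self) -> None:
        self._services: list[ManufacturingCloudService] = []

    def register(self, service: ManufacturingCloudService) -> None:
        # Encapsulation step: a sensed resource is published as a service.
        self._services.append(service)

    def search(self, category: str, required: set) -> list[ManufacturingCloudService]:
        # Users search for qualified MCSs matching their task's needs.
        return [s for s in self._services
                if s.category == category and required <= s.capabilities]

# Usage: compose part of a virtual solution for a machining task.
cloud = ManufacturingCloud()
cloud.register(ManufacturingCloudService("CNC-mill-A", "production", {"milling", "aluminium"}))
cloud.register(ManufacturingCloudService("CAD-seat-1", "design", {"3d-modelling"}))
print([s.name for s in cloud.search("production", {"milling"})])  # ['CNC-mill-A']
```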
Four types of cloud deployment modes (public, private, community and hybrid clouds) are ubiquitous as a single point of access.
Private cloud refers to a centralized management effort in which manufacturing services are shared within one company or its subsidiaries. Enterprises' mission-critical and core-business applications are often kept in a private cloud.
Community cloud is a collaborative effort in which manufacturing services are shared between several organizations from a specific community with common concerns.
Public cloud realizes the key concept of sharing services with the general public in a multi-tenant environment.
Hybrid cloud is a composition of two or more clouds (private, community or public) that remain distinct entities but are also bound together, offering the benefits of multiple deployment modes.
Resources
From the resource’s perspective, each kind of manufacturing capability requires support from the related manufacturing resource. For each type of manufacturing capability, its related manufacturing resource comes in two forms, soft resources and hard resources.
Soft resources
Software: software applications used throughout the product lifecycle, including design, analysis, simulation and process planning; such cloud-delivered tools are only beginning to be embraced by the electronics manufacturing industry.
Knowledge: experience and know-how needed to complete a production task, e.g. engineering knowledge, product models, standards, evaluation procedures and results, and customer feedback. Manufacturing in the cloud raises as many questions as it provides solutions for manufacturing executives wanting to make the best possible decision.
Skill: expertise in performing a specific manufacturing task.
Personnel: human resource engaged in the manufacturing process, i.e. designers, operators, managers, technicians, project teams, customer service, etc.
Experience: performance, quality, client evaluation, etc.
Business Network: business relationships and business opportunity networks that exist in an enterprise.
Hard resources
Manufacturing Equipment: facilities needed for completing a manufacturing task, e.g. machine tools, cutters, test and monitoring equipment and other fabrication tools.
Monitoring/Control Resource: devices used to identify and control other manufacturing resource, for instance, RFID (Radio-Frequency IDentification), WSN (Wireless Sensor Network), virtual managers and remote controllers.
Computational Resource: computing devices to support production process, e.g. servers, computers, storage media, control devices, etc.
Materials: inputs and outputs in a production system, e.g. raw material, product-in-progress, finished product, power, water, lubricants, etc.
Storage: automated storage and retrieval systems, logic controllers, location of warehouses, volume capacity and schedule/optimization methods.
Transportation: movement of manufacturing inputs/outputs from one location to another. It includes the modes of transport, e.g. air, rail, road, water, cable, pipeline and space, and the related price, and time taken.
See also
3D printing
Cloud computing
Cyber manufacturing
References
Digital manufacturing
Technology neologisms
Cloud computing | Cloud manufacturing | Technology | 1,264 |
11,798,302 | https://en.wikipedia.org/wiki/Ganoderma%20tsugae | Ganoderma tsugae, also known as hemlock varnish shelf, is a flat polypore mushroom of the genus Ganoderma.
Habitat
In contrast to Ganoderma lucidum, to which it is closely related and which it closely resembles, G. tsugae tends to grow on conifers, especially hemlocks.
Uses
Like G. lucidum, G. tsugae is non-poisonous but generally considered inedible, because of its solid woody nature; however, teas and extracts made from its fruiting bodies supposedly allow medicinal use of the compounds it contains, although this is controversial within the scientific community. A hot water extraction or tea can be very effective for extracting the polysaccharides; however, an alcohol or alcohol/glycerin extraction method is more effective for the triterpenoids.
The fresh, soft growth of the "lip" of G. tsugae can be sautéed and prepared much like other edible mushrooms. While in this nascent stage it is not woody, it can still be tough and chewy.
Medicinal
Like G. lucidum, G. tsugae is purported to have medicinal properties, including use for dressing skin wounds. Though phylogenetic analysis has begun to better differentiate between many closely related species of Ganoderma, there is still disagreement as to which have the most medicinal properties. Natural and artificial variations (e.g. growing conditions and preparation) can also affect a species' medicinal value.
Studies in mice have shown that G. tsugae shows several potential medicinal benefits including anti-tumor activity through some of the active polysaccharides found in G. tsugae. G. tsugae has also been shown to significantly promote wound healing in mice as well as markedly increase the proliferation and migration of fibroblast cells in culture.
References
tsugae
Dietary supplements
Inedible fungi
Medicinal fungi
Fungus species | Ganoderma tsugae | Biology | 394 |
44,875,706 | https://en.wikipedia.org/wiki/Afghan%20units%20of%20measurement | A variety of units of measurement have been used in Afghanistan to measure length, mass and capacity. Those units were similar to Iranian, Arabian and Indian units. In 1924, Afghanistan adopted the metric system.
Length
These lengths are not necessarily standardized and could differ between different regions of Afghanistan:
1 gaz-i-shah (Kabul yard) = 1.065 meters (m)
1 girah-i gaz-i-shah = 0.066 m
1 gaz-i-mimar (mason's yard) = 0.838 m
1 gaz-i-jareeb (for land) = 0.736 m
1 jareeb (one side) = 44.183 m
1 biswah (one side) = 9.879 m
1 biswasah (one side) = 2.209 m
1 jareeb (land measurement) = 2,000 m2 (standardized)
1 goes = 1.16 m (45.67 in)
Weights
1 nakhud = 0.19 gram (g)
1 misqal = 24 nakhuds = 4.4 g
1 khurd = 110.4 g
1 pao = 441.6 g
1 charak = 1766.4 g = 1.77 kilogram (kg)
1 seer = 30 miskals = 7066.0 g = 7.07 kg
1 man = 40 seers = 4.5 or more kg
1 kharwar = 80 sers = 100 mans = 565,280.0 g = 565.28 kg
1 puri = just under 1 kg
1 khaltar = approximately 7 kg
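Because every weight above is defined against the gram, any conversion reduces to a ratio of two factors. A small Python sketch using the gram values as listed above (a hedged illustration only, since, as the next section shows, the historical values varied by region):

```python
# Grams per unit, taken directly from the list above (values as listed).
GRAMS = {
    "nakhud": 0.19,
    "misqal": 4.4,
    "khurd": 110.4,
    "pao": 441.6,
    "charak": 1766.4,
    "seer": 7066.0,
    "kharwar": 565280.0,
}

def convert(value: float, src: str, dst: str) -> float:
    """Convert between traditional Afghan weight units via grams."""
    return value * GRAMS[src] / GRAMS[dst]

print(convert(1, "kharwar", "seer"))  # 80.0 -- eighty seers to the kharwar
print(convert(1, "pao", "khurd"))     # 4.0  -- four khurds to the pao
```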
Localized differences
British sources from the late 19th and early 20th centuries described some Afghan weights as follows:
1 Herati seer = 8 tolas = British (Indian) seer
1 Herati man = 40 seers = 4 seers British
1 Herati kharwar = 100 mans = 10 maunds British
1 Mazar seer = 1 Kabuli seers (11 ) British seers
1 Mazar man = 16 Mazar seers = 4 maunds 20 seers British
1 Mazar kharwar = 3 Mazar mans = 13 maunds
1 kadam or gaz-i-shari (Turkestan) = 28 inches (pace) = 16 tasa
1 farsakh (Herat) or 1 sang (Turkestan) = 12,000 kadam = 5 miles
1 grain per kulba (southern Afghanistan) = 50 Kandahari kharwars
1 Tashkurghan seer = 9 British seers
1 Taskhurghan man = 8 seers = 1 maund 32 seers British
1 Kandahari yard = 41 inches British
1 tanab (Kandahar) = 85 acres British
See also
Persian units of measurement
Arab units of measurement
Indian units of measurement
References
Culture of Afghanistan
Afghanistan
Regulation in Afghanistan
Standards | Afghan units of measurement | Mathematics | 593 |
21,105,516 | https://en.wikipedia.org/wiki/TB9Cs4H2%20snoRNA | TB9Cs4H2 is a member of the H/ACA-like class of non-coding RNA (ncRNA) molecules that guide the sites of modification of uridines to pseudouridines in substrate RNAs. It is a small nucleolar RNA (snoRNA), so named because of its cellular localization in the nucleolus of the eukaryotic cell. TB9Cs4H2 is predicted to guide the pseudouridylation of LSU3 ribosomal RNA (rRNA) at residue Ψ1336.
References
Non-coding RNA | TB9Cs4H2 snoRNA | Chemistry | 123 |
9,887,669 | https://en.wikipedia.org/wiki/Gunnison%20Tunnel | The Gunnison Tunnel is an irrigation tunnel constructed between 1905 and 1909 by the U.S. Bureau of Reclamation in Montrose County, Colorado. The tunnel diverts water from the Gunnison River to the arid Uncompahgre Valley around Montrose, Colorado.
History
At the time of its completion, it was the longest irrigation tunnel in the world and quickly made the area around Montrose into profitable agricultural lands. In 1972, the tunnel was designated a National Historic Civil Engineering Landmark by the American Society of Civil Engineers (ASCE).
The idea for a tunnel is credited to Frank Lauzon, a miner and prospector. By the early 1890s he was farming in Montrose. Popular lore is that the idea came to him in a dream that the waters of the Gunnison River should be brought to the valley. In the late 1890s, the campaign for the tunnel was led by Omer Madison Kem. National funding was approved in 1902.
As construction was undertaken, two advances in technology made work safer and easier. Jackhammers fed by a compressor replaced hand-turned drill bits for setting holes for blasting charges, and dynamite replaced black powder for blasting. By 1906, shifts of up to 30 workers at a time worked in the tunnel.
The tunnel opened in 1909 to much fanfare with a dedication ceremony attended by President William Howard Taft.
It was listed on the National Register of Historic Places in 1979.
The tunnel is long and is in cross-section, with square corners at the bottom and an arched roof. It drops about over its length. At the deepest, it is about beneath the surface of Vernal Mesa.
In 2009, communities in the Uncompahgre Valley celebrated the centennial anniversary of the Gunnison Tunnel's opening.
See also
List of Historic Civil Engineering Landmarks
List of tunnels documented by the Historic American Engineering Record in Colorado
References
Further reading
External links
American Society of Civil Engineers site - The Gunnison Tunnel article
Gunnison River
Tunnels in Colorado
Water tunnels in the United States
Water tunnels on the National Register of Historic Places
Transportation buildings and structures in Montrose County, Colorado
Curecanti National Recreation Area
Historic American Engineering Record in Colorado
Interbasin transfer
Industrial buildings and structures on the National Register of Historic Places in Colorado
National Register of Historic Places in Montrose County, Colorado
National Register of Historic Places in national parks
Transportation buildings and structures on the National Register of Historic Places in Colorado
Water supply infrastructure on the National Register of Historic Places
Historic Civil Engineering Landmarks
Tunnels completed in 1909 | Gunnison Tunnel | Engineering,Environmental_science | 494 |
22,721,223 | https://en.wikipedia.org/wiki/CGh%20physics | cGh physics refers to the historical attempts in physics to unify relativity, gravitation, and quantum mechanics, in particular following the ideas of Matvei Petrovich Bronstein and George Gamow. The letters are the standard symbols for the speed of light (c), the gravitational constant (G), and the Planck constant (h).
If one considers these three universal constants as the basis for a 3-D coordinate system and envisions a cube, then this pedagogic construction provides a framework, which is referred to as the cGh cube, or physics cube, or cube of theoretical physics (CTP). This cube can be used for organizing major subjects within physics as occupying each of the eight corners. The eight corners of the cGh physics cube are:
Classical mechanics (_, _, _)
Special relativity (c, _, _), gravitation (_, G, _), quantum mechanics (_, _, h)
General relativity (c, G, _), quantum field theory (c, _, h), non-relativistic quantum theory with gravity (_, G, h)
Theory of everything, or relativistic quantum gravity (c, G, h)
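The corner assignments above amount to a lookup from which constants are "switched on" to the theory at that corner; a minimal illustrative sketch in Python (the table name and boolean encoding are ours, purely for illustration):

```python
# Map (c relevant?, G relevant?, h relevant?) -> theory at that cube corner.
CUBE = {
    (False, False, False): "classical mechanics",
    (True,  False, False): "special relativity",
    (False, True,  False): "gravitation",
    (False, False, True):  "quantum mechanics",
    (True,  True,  False): "general relativity",
    (True,  False, True):  "quantum field theory",
    (False, True,  True):  "non-relativistic quantum theory with gravity",
    (True,  True,  True):  "theory of everything (relativistic quantum gravity)",
}

print(CUBE[(True, False, True)])  # quantum field theory
```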
Other cGh physics topics include Hawking radiation and black-hole thermodynamics.
While there are several other physical constants, these three are given special consideration because they can be used to define all Planck units and thus all physical quantities. The three constants are therefore sometimes used as a framework for philosophical study and as a pedagogical pattern.
Overview
Before the first successful estimate of the speed of light in 1676, it was not known whether light was transmitted instantaneously or not. Because of the tremendously large value of the speed of light—c (i.e. 299,792,458 metres per second in vacuum)—compared to the range of human perceptual response and visual processing, the propagation of light is normally perceived as instantaneous. Hence, the ratio 1/c is sufficiently close to zero that all subsequent differences of calculations in relativistic mechanics are similarly 'invisible' relative to human perception. However, at speeds comparable to the speed of light (c), Lorentz transformation (as per special relativity) produces substantially different results which agree more accurately with (sufficiently precise) experimental measurement. Non-relativistic theory can then be derived by taking the limit as the speed of light tends to infinity—i.e. ignoring terms (in the Taylor expansion) with a factor of 1/c—producing a first-order approximation of the formulae.
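To make the "ignoring terms with a factor of 1/c" step concrete, here is the standard expansion of the relativistic kinetic energy: the classical expression is the leading term, and the correction terms vanish as c → ∞:

```latex
E_k = mc^2(\gamma - 1)
    = mc^2\!\left[\left(1 - \frac{v^2}{c^2}\right)^{-1/2} - 1\right]
    = \frac{1}{2}mv^2 + \frac{3}{8}\,\frac{mv^4}{c^2} + \mathcal{O}\!\left(\frac{1}{c^4}\right)
\;\xrightarrow{\;c \to \infty\;}\; \frac{1}{2}mv^2
```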
The gravitational constant (G) is irrelevant for a system where gravitational forces are negligible. For example, the special theory of relativity is the special case of general relativity in the limit G → 0.
Similarly, in theories where the effects of quantum mechanics are irrelevant, the value of the Planck constant (h) can be neglected. For example, setting h → 0 in the commutation relation of quantum mechanics, the uncertainty in the simultaneous measurement of two conjugate variables tends to zero, approximating quantum mechanics with classical mechanics.
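Concretely, writing the commutation relation with the reduced constant ħ = h/2π, the uncertainty floor it implies vanishes in the stated limit:

```latex
[\hat{x}, \hat{p}] = i\hbar
\quad\Longrightarrow\quad
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
\;\xrightarrow{\;\hbar \to 0\;}\; 0
```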
In popular culture
George Gamow chose "C. G. H." as the initials of his fictitious character, Mr C. G. H. Tompkins.
References
Theoretical physics | CGh physics | Physics | 680 |
32,659,150 | https://en.wikipedia.org/wiki/Siemens%20C10 | The Siemens C10 is a mobile phone made by Siemens in December 1997. The phone was available in four colours: blue, yellow, red and grey. The C10 had a green backlit display capable of showing three lines. It weighed 165 g with battery and 117 g without.
References
External links
Mobile phones introduced in 1997
C10 | Siemens C10 | Technology | 69 |
1,188,566 | https://en.wikipedia.org/wiki/Enhanced%20Variable%20Rate%20Codec | Enhanced Variable Rate CODEC (EVRC) is a speech codec used in CDMA networks. It was developed in 1995 to replace the QCELP vocoder, which used more bandwidth on the carrier's network; EVRC's primary goal was thus to offer mobile carriers more capacity on their networks without increasing the amount of bandwidth or wireless spectrum needed. EVRC uses RCELP technology.
EVRC compresses each 20 milliseconds of 8000 Hz, 16-bit sampled speech input into output frames of one of three different sizes: full rate – 171 bits (8.55 kbit/s), half rate – 80 bits (4.0 kbit/s), eighth rate – 16 bits (0.8 kbit/s). A quarter rate was not included in the original EVRC specification and eventually became part of EVRC-B.
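The quoted rates follow directly from bits per frame divided by the 20 ms frame duration; a quick check in Python:

```python
FRAME_MS = 20  # each EVRC frame covers 20 ms of 8000 Hz, 16-bit speech

for name, bits in [("full rate", 171), ("half rate", 80), ("eighth rate", 16)]:
    kbps = bits / (FRAME_MS / 1000) / 1000  # bits per frame -> kbit/s
    print(f"{name}: {bits} bits/frame = {kbps:.2f} kbit/s")

# full rate: 171 bits/frame = 8.55 kbit/s
# half rate: 80 bits/frame = 4.00 kbit/s
# eighth rate: 16 bits/frame = 0.80 kbit/s
```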
EVRC was replaced by SMV. Recently, however, SMV itself has been replaced by the new CDMA2000 4GV codecs. 4GV is the next generation 3GPP2 standards-based EVRC-B codec. 4GV is designed to allow service providers to dynamically prioritize voice capacity on their network as required.
EVRC can be also used in 3GPP2 container file format - 3G2.
References
External links
3GPP2 specification
EVRC – The Savior of CDMA?
- Enhancements to RTP Payload Formats for EVRC Family Codecs
Speech codecs
3rd Generation Partnership Project 2 standards | Enhanced Variable Rate Codec | Technology | 318 |
24,292,829 | https://en.wikipedia.org/wiki/Adobe%20Fonts | Adobe Fonts (formerly Typekit) is an online service that provides its subscribers with access to its font library, under a single licensing agreement. The fonts may be used directly on websites, or synced via Adobe Creative Cloud to applications on the subscriber's computers.
Adobe Fonts was launched as Typekit in November 2009 by Small Batch, Inc., a company run by creators of the Google Analytics service. In October 2011, the service was acquired by Adobe. On 15 October 2018, Typekit changed its name to Adobe Fonts.
Adobe Fonts offers over 30,000 fonts as of November 2024. These fonts can be used for both personal and commercial purposes. Accessible with a Creative Cloud subscription, Adobe Fonts provides access to a wide selection of fonts from 150 different type foundries.
Adobe has also partnered with Monotype, which helps streamline the management of enterprise font licenses for businesses. The collaboration with Monotype makes Adobe Originals, a set of exclusive fonts created by Adobe, more accessible to brands and designers. Together, these features are intended to support design needs ranging from individual projects to large-scale commercial use.
See also
Adobe Font Folio
Adobe Originals
References
External links
Fonts
Digital typography
Web design | Adobe Fonts | Engineering | 256 |
8,143,783 | https://en.wikipedia.org/wiki/New%20Technologies%20Demonstrator%20Programme | The New Technologies Demonstrator Programme is a scheme, part of Defra's Waste Implementation Programme (New Technologies Workstream), to demonstrate advanced solid waste processing technologies in England. A pot of £30 million was allocated to fund 10 demonstrator projects, with the programme headed by Dave Brooks at Defra. The scheme is not on schedule against the ambitious targets initially set out by Defra; however, 9 of the initial 10 projects are now projected to be operational by April 2009, over 2 years behind schedule.
The scheme
The scheme initially was allocated £32 million, of which £2 million was to help fund research and development into waste technology. The scheme for the distribution of the main £30 million pot commenced in 2004 and was originally split into two rounds:
ROUND 1: 5 demonstrator projects in operation by 31 December 2005
ROUND 2: 5 demonstrator projects in operation by 31 December 2006
The project received a huge response in the first round, with 71 pre-qualification questionnaire submissions filed by interested parties. The quality of some of the initial bids was criticised by Martin Brockelhurst, Head of Waste Strategy at the Environment Agency, who remarked that some of the applications were poor and came from a "young industry".
Controversy
There have been concerns that the project is taking too long, and some participants threatened to walk out. On 11 April 2006, Defra declared that its initial timescales were ambitious and that projects were not on target. Of the 10 original projects planned, a total of 9 have now been signed, including gasification, in-vessel composting, anaerobic digestion and mechanical heat treatment. Against the original target dates for operational demonstrator plants outlined in the initial assessment criteria, only 2 projects are now operational (true as of 27 November 2006). On 24 November 2006, Dave Brooks announced that the new target for all plants being operational is April 2009.
The projects
Operational
Greenfinch anaerobic digesters, Ludlow, Shropshire
Bioganix in-vessel composting plant, Leominster
Fairport Engineering, mechanical heat treatment, Merseyside
Energos gasification plant, Isle of Wight (currently under reconstruction until 2018)
Contracts signed
ADAS/Envar in-vessel composing plant, St Ives, Cambridgeshire
Premier Waste aerobic digestion plant, Durham
Abandoned or Cancelled
Novera gasification plant, Dagenham. Novera withdrew from the Defra scheme in 2007.
Compact Power gasification plant, Avonmouth
Yorwaste gasification plant, Seamer Carr, Scarborough
See also
Isle of Wight gasification facility
References
Bioenergy in the United Kingdom
Waste treatment technology
Waste management in the United Kingdom | New Technologies Demonstrator Programme | Chemistry,Engineering | 536 |
70,238,543 | https://en.wikipedia.org/wiki/Water%20sachet | Water sachets or sachet water is a common form of selling pre-filtered or sanitized water in plastic, heat sealed bags in parts of the global south, and are especially popular in Africa. Water sachets are cheaper to produce than plastic bottles, and easier to transport. In some countries, water vendors refer to sachet water as "pure water".
High demand, and poor collection of waste from consumers, has resulted in significant plastic pollution and waste from sachets throughout West Africa. Accumulation of sachets frequently causes blocked stormwater drainage, among other issues. Some countries, such as Senegal, have banned disposable sachets.
Because sachets are frequently filled in small and often unregulated facilities, inadequate sanitary conditions can occasionally result in disease or contamination. However, in countries like Ghana, consumers still prefer that access over other forms of vendors, with a perception of lower risk. This form of water distribution provides vital access to water in communities that otherwise would not have it. However, some scholars have identified this method of distribution as having potential human rights and social justice issues, limiting the right to water and sanitation.
Health concerns
Studies of sachets frequently find improper sanitary conditions among sachet producers. One study of sachets in Port Harcourt, Nigeria, found that sachet water has significant contamination from various disease-causing microbes; after 4 months of prolonged storage, several of the samples showed levels of the microbes that threaten human health. Similarly, following the onset of the COVID-19 pandemic, a study in Damongo found that 96% of producers did not have adequate sanitary measures.
By country
Ghana
Sachet water is common throughout Ghana. A 2012 review of sachet use in Ghana found sachet water ubiquitous, especially in poorer communities. Sachets were typically 500 ml polyethylene bags, heat-sealed at each end. Sachet water delivery is part of a larger trend of delivery by private water vendors from municipal taps.
Packaging water in small plastic bags started in the 1990s, and the practice grew after the introduction of Chinese machines for filling and heat-sealing bags. A price increase in 2022 saw significant changes in sales in the Ashanti region.
Nigeria
Sachet water has become an increasingly important part of water access in Nigeria, especially in fast-growing cities like Lagos. The cost of sachet water is dependent on economic changes. In 2021, the Association for Table Water Producers of Nigeria increased the price of a bag of sachet water to 200 naira due to an increase in production costs. A significant devaluation of the local currency led to further significant price increases in 2022.
As of 2024, sachet water sells for N50 per sachet, with a bag selling for between N400 and N500; the increases are due to changes in the economy. In some cities, vendors have improvised by selling ice water, as some customers cannot afford sachet water.
Around June 2024, two water companies in Owerri were closed by the National Agency for Food and Drug Administration and Control (NAFDAC) due to poor manufacturing processes and unhygienic production. The two factories were Elmabo Table Water and Sylchap Table Water, while Giver Table Water was cautioned for minor issues.
See also
Drinking water
Purified water
Self-supply of water and sanitation
WASH – Water supply, sanitation and hygiene
Water kiosk
References
Water
Plastics
Drinks | Water sachet | Physics,Environmental_science | 694 |
41,563,895 | https://en.wikipedia.org/wiki/C18H29NO2 |
The molecular formula C18H29NO2 (molar mass: 291.43 g/mol, exact mass: 291.2198 u) may refer to:
Exaprolol
Ketocaine
Penbutolol | C18H29NO2 | Chemistry | 64 |
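The molar mass quoted for C18H29NO2 above is just the sum of standard atomic weights; a quick Python check (using the usual rounded IUPAC values):

```python
# Standard atomic weights in g/mol (rounded IUPAC values).
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(composition: dict) -> float:
    """Sum each element's atomic weight times its count in the formula."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in composition.items())

print(molar_mass({"C": 18, "H": 29, "N": 1, "O": 2}))  # ~291.435, i.e. 291.43 g/mol
```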
13,053,833 | https://en.wikipedia.org/wiki/Orexin%20receptor | The orexin receptor (also referred to as the hypocretin receptor) is a G-protein-coupled receptor that binds the neuropeptide orexin. There are two variants, OX1 and OX2, each encoded by a different gene (, ).
Both orexin receptors exhibit a similar pharmacology – the two orexin peptides, orexin-A and orexin-B, bind to both receptors and, in each case, agonist binding results in an increase in intracellular calcium levels. However, orexin-B shows a 5- to 10-fold selectivity for orexin receptor type 2, whilst orexin-A is equipotent at both receptors.
Several orexin receptor antagonists are in development for potential use in sleep disorders. The first of these, suvorexant, has been on the market in the United States since 2015. There were two orexin agonists under development.
Ligands
Several drugs acting on the orexin system are under development, either orexin agonists for the treatment of conditions such as narcolepsy, or orexin antagonists for insomnia. In August 2015, Nagahara et al. published their work in synthesizing the first HCRT/OX2R agonist, compound 26, with good potency and selectivity.
No neuropeptide agonists are yet available, although synthetic orexin-A polypeptide has been made available as a nasal spray and tested on monkeys. One non-peptide antagonist is currently available in the U.S., Merck's suvorexant (Belsomra); two additional agents are in development: SB-649,868, by GlaxoSmithKline, for sleep disorders, and ACT-462206, currently in human clinical trials. Another drug that was in development, almorexant (ACT-078573) by Actelion, was abandoned due to adverse effects. Lemborexant, an orexin receptor antagonist, was approved for use in the United States in 2019.
Most ligands acting on the orexin system so far are polypeptides modified from the endogenous agonists orexin-A and orexin-B, however there are some subtype-selective non-peptide antagonists available for research purposes.
Agonists
Non-selective
Orexins – dual OX1 and OX2 receptor agonists
Orexin-A – approximately equipotent at the OX1 and OX2 receptors
Orexin-B – approximately 5- to 10-fold selectivity for the OX2 receptor over the OX1 receptor
AEX-5 – selective OX1 receptor agonist; also a cathepsin H inhibitor and dopamine reuptake inhibitor
AEX-19 – dual OX1 and OX2 receptor agonist
AEX-24 – selective OX2 receptor agonist; also an "S1R" agonist
Selective
ALKS-2680 — selective oral OX2 receptor agonist
Danavorexton (TAK-925) – selective OX2 receptor agonist
E-2086 – selective OX2 receptor agonist
Firazorexton (TAK-994) – selective OX2 receptor agonist
Oveporexton – selective OX2 receptor agonist
SB-668875 – selective OX2 receptor agonist
Suntinorexton (TAK-861) – selective OX2 receptor agonist
PhotOrexin – photoswitchable orexin-B analogue to control the OX2 receptor at nanomolar concentration in vivo.
Antagonists
Non-selective
Almorexant (ACT-078573) – dual OX1 and OX2 receptor antagonist
Daridorexant (Quviviq; ACT-541468) – dual OX1 and OX2 receptor antagonist
Filorexant (MK-6096) – dual OX1 and OX2 receptor antagonist
GSK-649868 (SB-649868) – dual OX1 and OX2 receptor antagonist
Lemborexant (Dayvigo) – dual OX1 and OX2 receptor antagonist
Suvorexant (Belsomra) – dual OX1 and OX2 receptor antagonist
Vornorexant (ORN-0829, TS-142) – dual OX1 and OX2 receptor antagonist
Selective
ACT-335827 – selective OX1 receptor antagonist
AZD-4041 – selective OX1 receptor antagonist
C4X-3256 (INDV-2000) – selective OX1 receptor antagonist
CVN-766 – selective OX1 receptor antagonist
EMPA – selective OX2 receptor antagonist
JNJ-10397049 – selective OX2 receptor antagonist
Nivasorexant (ACT-539313) – selective OX1 receptor antagonist
RTIOX-276 – selective OX1 receptor antagonist
SB-334867 – selective OX1 receptor antagonist
SB-408124 – selective OX1 receptor antagonist
Seltorexant (MIN-202, JNJ-42847922, JNJ-922) – selective OX2 receptor antagonist
TCS-OX2-29 – selective OX2 receptor antagonist
Tebideutorexant (JNJ-61393215; JNJ-3215) – selective OX1 receptor antagonist
References
External links
Protein families
G protein-coupled receptors | Orexin receptor | Chemistry,Biology | 1,111 |
2,154,436 | https://en.wikipedia.org/wiki/Next-generation%20lithography | Next-generation lithography or NGL is a term used in integrated circuit manufacturing to describe the lithography technologies in development which are intended to replace current techniques. Driven by Moore's law in the semiconductor industries, the shrinking of the chip size and critical dimension continues. The term applies to any lithography method which uses a shorter-wavelength light or beam type than the current state of the art, such as X-ray lithography, electron beam lithography, focused ion beam lithography, and nanoimprint lithography. The term may also be used to describe techniques which achieve finer resolution features from an existing light wavelength.
Many technologies once termed "next generation" have entered commercial production, and open-air photolithography, with visible light projected through hand-drawn photomasks, has gradually progressed to deep-UV immersion lithography using optical proximity correction, inverse lithography technology, off-axis illumination, phase-shift masks, double patterning, and multiple patterning. In the late 2010s, the combination of many such techniques was able to achieve features on the order of 20 nm with the 193 nm-wavelength ArF excimer laser in the 14 nm, 10 nm and 7 nm processes, though at the cost of adding processing steps and therefore cost.
13.5 nm extreme ultraviolet (EUV) lithography, long considered a leading candidate for next-generation lithography, began to enter commercial mass-production in 2018. As of 2021, Samsung and TSMC were gradually phasing EUV lithography into their production lines, as it became economical to replace multiple processing steps with single EUV steps. As of the early 2020s, many EUV techniques are still in development and many challenges remain to be solved, positioning EUV lithography as being in transition from "next generation" to "state of the art."
Candidates for next-generation lithography beyond EUV include X-ray lithography, electron beam lithography, focused ion beam lithography, nanoimprint lithography, and quantum lithography. Several of these technologies have experienced periods of popularity, but have remained outcompeted by the continuing improvements in photolithography. Electron beam lithography was most popular during the 1970s, but was replaced in popularity by X-ray lithography during the 1980s and early 1990s, and then by EUV lithography from the mid-1990s to the mid-2000s. Focused ion beam lithography has carved a niche for itself in the area of defect repair. Nanoimprint's popularity is rising, and it is positioned to succeed EUV as the most popular choice for next-generation lithography, due to its inherent simplicity and low cost of operation as well as its success in the LED, hard disk drive and microfluidics sectors.
The rise and fall in popularity of each NGL candidate has largely hinged on its throughput capability and its cost of operation and implementation. Electron beam and nanoimprint lithography are limited mainly by the throughput, while EUV and X-ray lithography are limited by implementation and operation costs. The projection of charged particles (ions or electrons) through stencil masks was also popularly considered in the early 2000s but eventually fell victim to both low throughput and implementation difficulties.
Issues
Fundamental issues
Regardless of whether NGL or photolithography is used, the etching of the polymer (resist) is the last step. Ultimately the quality (roughness) as well as the resolution of this polymer etching limits the inherent resolution of the lithography technique. Next-generation lithography also generally makes use of ionizing radiation, which generates secondary electrons that can limit the effective resolution to greater than 20 nm.
Studies have also found that for NGL to reach line edge roughness (LER) objectives, ways must be found to control variables such as polymer size, image contrast and resist contrast.
Market issues
The above-mentioned competition between NGL and the recurring extension of photolithography, where the latter consistently wins, may be more a strategic than a technical matter. If a highly scalable NGL technology were to become readily available, late adopters of leading-edge technology would immediately have the opportunity to leapfrog the current use of advanced but costly photolithography techniques, at the expense of the early adopters of leading-edge technology, who have been the key investors in NGL. While this would level the playing field, it is disruptive enough to the industry landscape that the leading semiconductor companies would probably not want to see it happen.
The following example would make this clearer. Suppose company A manufactures down to 28 nm, while company B manufactures down to 7 nm, by extending its photolithography capability by implementing double patterning. If an NGL were deployed for the 5 nm node, both companies would benefit, but company A currently manufacturing at the 28 nm node would benefit much more because it would immediately be able to use the NGL for manufacturing at all design rules from 22 nm down to 7 nm (skipping all the said multiple patterning), while company B would only benefit starting at the 5 nm node, having already spent much on extending photolithography from its 22 nm process down to 7 nm. The gap between Company B, whose customers expect it to advance the leading edge, and Company A, whose customers don't expect an equally aggressive roadmap, will continue to widen as NGL is delayed and photolithography is extended at greater and greater cost, making the deployment of NGL less and less attractive strategically for Company B. With NGL deployment, customers will also be able to demand lower prices for products made at advanced generations.
This becomes more clear when considering that each resolution enhancement technique applied to photolithography generally extends the capability by only one or two generations. For this reason, the observation that "optical lithography will live forever" will likely hold, as the early adopters of leading-edge technology will never benefit from highly scalable lithography technologies in a competitive environment.
There is therefore great pressure to deploy an NGL as soon as possible, but the NGL ultimately may be realized in the form of photolithography with more efficient multiple patterning, such as directed self-assembly or aggressive cut reduction.
See also
Computational lithography
Nanolithography
Quantum Lithography
References
Lithography (microfabrication) | Next-generation lithography | Materials_science | 1,318 |
64,314,552 | https://en.wikipedia.org/wiki/Floriana%20Tuna | Floriana Tuna is a Romanian chemist and a Senior Research Fellow in the Department of Chemistry at The University of Manchester. Her research in general is based on inorganic chemistry and magnetochemistry, specifically on molecular magnetism, EPR spectroscopy and quantum computing.
Education
Floriana completed her Bachelor of Science at the University of Bucharest. She continued with her Master of Science degree there, completing it in 1989, before moving to the Institute of Physical Chemistry of the Romanian Academy to read for her Doctor of Philosophy degree in transition metal chemistry, which she completed in 1997 under the supervision of Marius Andruh and Luminița Patron.
Research and career
Upon graduation, Floriana completed postdoctoral research in molecular magnetism with Jean-Pascal Sutter at the Institut de Chimie de la Matière Condensée de Bordeaux (ICMCB), France, and was a visiting Deutscher Akademischer Austauschdienst (DAAD) Fellow at the University of Heidelberg, Germany. She then received a Marie Curie Individual Fellowship at the University of Warwick to work in supramolecular chemistry before moving to the University of Manchester in 2003 as a researcher; she was later promoted to Senior Researcher. She is currently part of the Molecular Magnetism group at the University of Manchester, working alongside David Collison, Nicholas F. Chilton, Grigore Timco, and Richard Winpenny.
Floriana's research in general is based on inorganic chemistry and magnetochemistry, specifically on molecular magnetism, EPR spectroscopy and quantum computing.
Notable work
In 2019, Floriana participated in research which reported that an MFI-type zeolite (NbAlS-1) could be used to convert aqueous solutions of γ-valerolactone (GVL), obtained from biomass-derived carbohydrates, into butenes with a yield of more than 99% at ambient pressure under continuous-flow conditions. The conversion of renewable biomass into butenes offered the prospect of the sustainable production of butene as a platform chemical for the manufacture of renewable materials.
In 2019, she participated in research which showed that a porous metal–organic framework (MOF) can provide a selective, fully reversible and repeatable capability to capture nitrogen dioxide (NO2), a toxic air pollutant produced particularly by diesel and bio-fuel use. The NO2 can then be easily converted into nitric acid, which has a wide range of industrial uses, including agricultural fertilizer for crops, rocket propellant and nylon.
In 2016, Floriana confirmed that pulsed EPR spectroscopy can be used to measure the covalency of actinide complexes, in research carried out in collaboration with Eric McInnes and David P. Mills at the University of Manchester. Prior to this research, the extent of covalency in actinide complexes was poorly understood, as this kind of bonding had not been studied owing to the limited technology and experimental methods of the time. The use of pulsed EPR spectroscopy made it possible to determine the covalency of thorium(III) and uranium(III) complexes for the first time, paving the way for further research on the use of these complexes in the separation and recycling of nuclear waste.
Awards and nominations
Romanian Academy 'Ilie Murgulescu' Award (2005)
Major publications
References
External links
Floriana Tuna at University of Manchester
Living people
Year of birth missing (living people)
21st-century chemists
Academics of the University of Manchester
University of Bucharest alumni
Inorganic chemists
Romanian chemists
Romanian women scientists
Romanian emigrants to the United Kingdom
Romanian expatriates in England | Floriana Tuna | Chemistry | 744 |
5,613,537 | https://en.wikipedia.org/wiki/Squares%20of%20Savannah%2C%20Georgia | The city of Savannah, Province of Georgia, was laid out in 1733, in what was colonial America, around four open squares, each surrounded by four residential ("tithing") blocks and four civic ("trust") blocks. The layout of a square and eight surrounding blocks was known as a "ward." The original plan (now known as the Oglethorpe Plan) was part of a larger regional plan that included gardens, farms, and "outlying villages." Once the four wards were developed in the mid-1730s, two additional wards were laid out. Oglethorpe's agrarian balance was abandoned after the Georgia Trustee period. Additional squares were added during the late 18th and 19th centuries, and by 1851 there were 24 squares in the city. In the 20th century, three of the squares were demolished or altered beyond recognition, leaving 21. In 2010, one of the three "lost" squares, Ellis, was reclaimed, bringing the total to today's 22.
Most of Savannah's squares are named in honor or in memory of a person, persons or historical event; many contain monuments, markers, memorials, statues, plaques, and other tributes. The statues and monuments were placed in the squares partly to protect the squares from demolition.
Today, the area is part of a large urban preservation district known as the Savannah Historic District.
Overview
The city of Savannah was founded in 1733 by General James Oglethorpe. Although cherished by many today for their aesthetic beauty, the first squares were originally intended to provide colonists space for practical reasons such as militia training exercises. The original plan resembles the layout of contemporary military camps, which were likely quite familiar to General Oglethorpe. The layout was also a reaction against the cramped conditions that fueled the Great Fire of London in 1666. A square was established for each ward of the new city. The first four were Johnson, Perceval (now Wright), Ellis, and St. James (now Telfair) Squares, and themselves formed a larger square on the bluff overlooking the Savannah River. The original plan actually called for six squares, and as the city grew the grid of wards and squares was extended across a grid of thirty potential square sites. (Two points on this grid were occupied by Colonial Park Cemetery, established in 1750, and four others—in the southern corners of the downtown area—were never developed with squares, leaving the 24 squares that existed by 1851.) When the city began to expand south of Gaston Street, the grid of squares was abandoned and Forsyth Park was allowed to serve as a single, centralized park for that area.
All of the squares measure approximately from east to west, but they vary north to south from approximately 100 to . Typically, each square is intersected north-south and east-west by wide, two-way streets. They are bounded to the west and east by the south- and north-bound lanes of the intersecting north-south street, and to the north and south by smaller one-way streets running east-to-west and west-to-east, respectively. As a result, traffic flows one way—counterclockwise—around the squares, which thus function much like traffic circles.
Each square sits (or, in some cases, sat) at the center of a ward, which often shares its name with its square. The lots to the east and west of the squares, flanking the major east-west axis, were considered "trust lots" in the original city plan and intended for large public buildings such as churches, schools, or markets. The remainder of the ward was divided into four areas, called tithings, each of which was further divided into ten residential lots. This arrangement is illustrated in the 1770 Plan of Savannah, reproduced here, and remains readily visible in the modern aerial photograph above. The distinction between trust lot and residential lot has always been fluid. Some grand homes, such as the well-known Mercer House, stand on trust lots, while many of the residential lots have long hosted commercial properties.
All of the squares are a part of the Savannah Historic District and fall within an area of less than one half square mile. The five squares along Bull Street—Monterey, Madison, Chippewa, Wright, and Johnson—were intended to be grand monument spaces and have been called Savannah's "Crown Jewels." Many of the other squares were designed more simply as commons or parks, although most serve as memorials as well.
Architect John Massengale has called Savannah's city plan "the most intelligent grid in America, perhaps the world", and Edmund Bacon wrote that "it remains as one of the finest diagrams for city organization and growth in existence." The American Society of Civil Engineers has honored Oglethorpe's plan for Savannah as a National Historic Civil Engineering Landmark, and in 1994 the plan was nominated for inclusion in the UNESCO World Heritage List. The squares are a major point of interest for millions of tourists visiting Savannah each year, and they have been credited with stabilizing once-deteriorating neighborhoods and revitalizing Savannah's downtown commercial district.
First four squares, 1733
The first four squares were laid out by James Oglethorpe in 1733, the same year in which he founded the Georgia colony and the city of Savannah.
Johnson Square
Johnson Square was the first of Savannah's squares, and remains the largest of the 22. It was named for Robert Johnson, colonial governor of South Carolina and a friend of General Oglethorpe. Interred under the Nathanael Greene Monument in the square is Revolutionary War hero General Nathanael Greene, the namesake of nearby Greene Square.
Johnson Square contains two fountains, as well as a sundial dedicated to Colonel William Bull, the namesake of Savannah's Bull Street.
Another landmark of Johnson Square is the Johnson Square Business Center. This building, formerly known as the Savannah Bank Building, was the city's first "skyscraper", built in 1911. Johnson Square is known as the financial district, or banking square, and many of the City's financial services companies are located here. These companies include the Savannah Bancorp, Savannah Bank, Coastal Bank Headquarters, Bank of America branch, SunTrust branch, United Community Bank branch, TitleMax Corporate Headquarters, and a Regions Bank building.
Johnson Square is also home to Christ Church, "the Mother Church of Georgia", established in 1733. Early clergy of the church include John Wesley and George Whitefield.
Wright Square
The second square established in Savannah, Perceval Square was named for John Perceval, 1st Earl of Egmont, generally regarded as the man who gave the colony of Georgia its name (a tribute to Great Britain's King George II). It was renamed in 1763 to honor James Wright, the third and final royal governor of Georgia. Throughout its history it has also been known as Court House Square and Post Office Square; the present Tomochichi Federal Building and U.S. Courthouse is adjacent to the west.
The square is the burial site of Tomochichi, a leader of the Creek nation of Native Americans. Tomochichi was a trusted friend of James Oglethorpe and assisted him in the founding of his colony.
Ellis Square
What was originally called Decker Square is located on Barnard between Bryan and Congress Streets. It was laid out in 1733 as part of Decker Ward, the third ward created in Savannah. The ward and square were named for Sir Matthew Decker, one of the Trustees for the Establishment of the Colony of Georgia in America, Commissioner of funds collection for the Trust, a director and governor of the East India Company, and a member of Parliament. The square was renamed for Sir Henry Ellis, the second Royal Governor of the colony of Georgia.
It was also known as Marketplace Square, as from the 1730s through the 1950s it served as a center of commerce and was home to four successive market houses. Prior to Union General Sherman's arrival in December 1864, it was also the site of a slave market; there are indications that slaves were held beneath the northwest corner of the square.
In 1954 the city signed a 50-year lease with the Savannah Merchants Cooperative Parking Association, allowing the association to raze the existing structure and construct a parking garage to serve the City Market retail project. Anger over the demolition of the market house helped spur the historic preservation movement (most notably the Historic Savannah Foundation) in Savannah.
When the garage's lease expired in 2004, the city began plans to restore Ellis Square. It was officially reopened at a dedication ceremony held on March 11, 2010. A bronze statue, by Susie Chisholm, of songwriter-lyricist Johnny Mercer, a native Savannahian, was formally unveiled in Ellis Square on November 18, 2009.
Telfair Square
St. James Square was named in honor of a green space in London, England, and marked one of the most fashionable neighborhoods in early Savannah. It was renamed in 1883 to honor the Telfair family. It is the only square honoring a family rather than an individual. The Telfairs included former Governor Edward Telfair, Congressman Thomas Telfair (Edward Telfair's son), and Mary Telfair (1791–1875), benefactor of Savannah's Telfair Museum of Art. Telfair Academy overlooks the western side of the square. The square also contains tributes to the Girl Scouts of the USA, founded by Savannahian Juliette Gordon Low, and to the chambered nautilus. Telfair Square is located on Barnard, between State and York Streets.
Two new squares
Oglethorpe's plan called for six wards and squares. Lower New Square and Upper New Square—now Reynolds and Oglethorpe Squares—completed the founder's vision.
Reynolds Square
Originally known as Lower New Square, laid out in 1734, the square was later renamed for Captain John Reynolds, governor of Georgia in the mid-1750s.
The square contains a bronze statue by Marshall Daugherty honoring John Wesley, founder of Methodism. Wesley spent most of his life in England but undertook a mission to Savannah (1735–1738), during which time he founded the first Sunday school in America. The statue was installed in 1969 on the spot where Wesley's home is believed to have stood. The statue is intended to show Wesley preaching out-of-doors as he did when leading services for Native Americans, a practice which angered church elders who believed that the Gospel should only be preached inside the church building.
Reynolds Square was the site of the Filature, which housed silkworms as part of an early—and unsuccessful—attempt to establish a silk industry in the Georgia colony. It is located on Abercorn, between Bryan and Congress Streets.
The Olde Pink House (also known as Habersham House) stands in the square's northwestern trust lot. Immediately to its south, across East Saint Julian Street and in the southwestern trust lot, is the Oliver Sturges House.
Oglethorpe Square
Upper New Square was laid out in 1742 and was later renamed in honor of Georgia founder General James Oglethorpe, although his statue is located in Chippewa Square, to the southwest.
The home of Georgia's first Royal Governor, John Reynolds, was located on the southeastern trust lot (now a parking lot of The Presidents' Quarters Inn) overlooking the square. Reynolds arrived in Savannah October 29, 1754.
The residences of the Royal Surveyors of Georgia and South Carolina were located on the northeastern trust lots, the site of today's Owens–Thomas House. The Presidents' Quarters Inn, a 16-room historic bed and breakfast, is located on the southeastern trust lots.
The square contains a pedestal honoring Moravian missionaries who arrived at the same time as John Wesley and settled in Savannah from 1735 to 1740, before resettling in Pennsylvania.
A Savannah veterans' group unsuccessfully proposed erecting a memorial to veterans of World War II in Oglethorpe Square; the memorial was instead installed on River Street.
The Unitarian Universalist Church was originally based on the square, prior to its move to the western side of Troup Square in 1860.
The 1790s
Savannah grew rapidly in the late 18th century, and six new wards were established in the 1790s alone, including the four that now make up the northeastern quadrant of the Historic District. The new wards expanded the grid by one unit to the west and by two to the east. Due to space restrictions, these new wards are slightly narrower east-to-west than the original six.
Washington Square
Built in 1790, Washington Square was named in 1791 for the first President of the United States, who visited Savannah in that year. It was one of only two squares named to honor a then-living person; Troup Square was the other.
Washington Square was the site of the Trustees' Garden.
The square was once the site of massive New Year's Eve bonfires; these were discontinued in the 1950s.
In 1964, Savannah landscape architect Clermont Huger Lee and Mills B. Lane planned and initiated a project to close the fire lane, add North Carolina bluestone pavers, introduce the use of different paving materials, install water cisterns, and install new walks, benches, lighting, and plantings.
Franklin Square
Franklin Square was designed and laid out in 1790. It is located on the western end of town at the intersection of Montgomery Street and West St. Julian Street, bordered on the north side by West Bryan Street and on the south side by West Congress Street. It was named in 1791 for Benjamin Franklin, who served as an agent for the colony of Georgia from 1768 to 1778 and who had died in 1790.
The square was destroyed in 1935 but was restored in the mid-1980s. It contains a memorial sculpture honoring the Haitian volunteers who fought in the 1779 siege of Savannah; the sculpture includes a depiction of 12-year-old Henri Christophe, who later became the commander of the Haitian army and King of Haiti.
Warren Square
Warren Square was laid out in 1791 and named for General Joseph Warren, a Revolutionary War hero killed at the Battle of Bunker Hill and who had served as President of the Provincial Government of Massachusetts. British gunpowder seized by Savannahians had been sent to aid the Americans at Bunker Hill. The "sister city" relationship between Savannah and Boston survived even the Civil War, and Bostonians sent shiploads of provisions to Savannah shortly after the city surrendered to General Sherman in 1864. Warren Square is on Habersham, between Bryan and Congress Streets.
In 1963, Savannah landscape architect Clermont Huger Lee and Mills B. Lane planned and initiated a project to replace the sand square with plantings; add walks, benches, lighting, and plantings; and install barriers to prevent vehicles from driving through the fire lane.
Columbia Square
Columbia Square was laid out in 1799 and is named for Columbia, the poetic personification of the United States. It is located on Habersham, between State and York Streets. In the center of the square is a fountain that formerly stood at Wormsloe, the estate of Noble Jones, one of Georgia's first settlers. It was moved to Columbia Square in 1970 to honor Augusta and Wymberly DeRenne, descendants of Jones. It is sometimes called the "rustic fountain," as it is decorated with vines, leaves, flowers, and other woodland motifs.
Greene Square
Greene Square was laid out in 1799 and is named for Revolutionary War hero General Nathanael Greene, one of George Washington's most effective generals.
Liberty Square
Liberty Square was laid out in 1799 and named in honor of the Sons of Liberty and the victory over the British in the Revolutionary War. It was located on Montgomery between State and York Streets. It was paved over to make way for improvements to Montgomery Street. A small portion remains and is the site of the "Flame of Freedom" sculpture.
19th-century squares
Expansion of Oglethorpe's grid of wards and squares continued through the first half of the 19th century, until a total of 24 squares stood in downtown Savannah.
Elbert Square
Elbert Square was laid out in 1801 and named for Samuel Elbert, a Revolutionary soldier, sheriff of Chatham County, and Governor of Georgia. It was located on Montgomery between Hull and Perry streets. It was paved over to make way for improvements to Montgomery Street and today is represented by a small grassy area across Montgomery from the west entrance to the Civic Center.
Chippewa Square
Chippewa Square was laid out in 1815 and named in honor of American soldiers killed in the Battle of Chippawa during the War of 1812. (The spelling "Chippewa" is correct in reference to this square.)
In the center of the square is the James Oglethorpe Monument, created by sculptor Daniel Chester French and architect Henry Bacon and unveiled in 1910. Oglethorpe faces south, toward Georgia's one-time enemy in Spanish Florida, and his sword is drawn. Busts of Confederate figures Francis Stebbins Bartow and Lafayette McLaws were moved from Chippewa Square to Forsyth Park to make room for the Oglethorpe monument. Due to the location of the monument, Savannahians sometimes refer to this as Oglethorpe Square, although the actual Oglethorpe Square sits just to the northeast.
The "park bench" scene which opens the 1994 film Forrest Gump was filmed on the north side of Chippewa Square.
Chippewa Square is also home to First Baptist Church (1833), the Philbrick-Eastman House (1844), and The Savannah Theatre (1818).
Orleans Square
Orleans Square was laid out in 1815, commemorating General Andrew Jackson's victory at the Battle of New Orleans in January of that year. In the center of the square, the German Memorial Fountain honors early German immigrants to Savannah. Installed in 1989, it commemorates the 250th anniversary of Georgia and of Savannah, as well as the 300th anniversary of the arrival in Philadelphia of 13 Rhenish families. Orleans Square is located on Barnard, between Hull and Perry Streets, and is adjacent to the Savannah Civic Center.
Lafayette Square
Lafayette Square was laid out in 1837 and named for the Marquis de Lafayette, who visited Savannah in 1825. The square contains a fountain commemorating the 250th anniversary of the founding of the Georgia colony, donated by the Colonial Dames of Georgia in 1984, as well as cobblestone sidewalks.
Adjacent to the square is the Roman Catholic Cathedral Basilica of St. John the Baptist. Given this proximity, Lafayette Square features prominently in Savannah's massive Saint Patrick's Day celebrations, and the water in the fountain is dyed green for the occasion.
In this area is the museum known as the Flannery O'Connor Childhood Home, which is open to the public.
Marist Place, the former Marist School for Boys, stands in the southwest tithing of the square.
Pulaski Square
Pulaski Square was laid out in 1837 and is named for General Casimir Pulaski, a Polish-born Revolutionary War hero who died of wounds received in the siege of Savannah (1779). It is one of the few squares without a monument—General Pulaski's statue is actually in nearby Monterey Square.
Prior to the birth of the historic preservation movement and the restoration of much of Savannah's downtown, Pulaski Square sheltered a sizeable homeless population and was one of several squares that had been paved to allow traffic to drive straight through its center.
Pulaski Square is located on Barnard, between Harris and Charlton Streets, and is known for its live oaks.
Madison Square
Madison Square was laid out in 1837 and named for James Madison, fourth President of the United States.
In the center of the square is the William Jasper Monument, an 1888 work by Alexander Doyle memorializing Sergeant William Jasper, a soldier in the siege of Savannah who, though mortally wounded, heroically recovered his company's banner. Savannahians sometimes refer to this as Jasper Square, in honor of Jasper's statue.
Madison Square features vintage cannons from the Savannah Armory. These now mark the starting points of the first highways in Georgia: the Ogeechee Road leading to Darien and the Augusta Road.
The square also includes a monument marking the center of the British resistance during the siege.
In 1971, Savannah landscape architect Clermont Huger Lee and Mills B. Lane planned and initiated a project to install new walk patterns with offset sitting areas and connecting walks at curbs, and to add new benches, lighting, and plantings.
Crawford Square
Crawford Square was laid out in 1841 and named in honor of Secretary of the Treasury William Harris Crawford. Crawford ran for president in 1824 but came in third, after winner John Quincy Adams and runner-up Andrew Jackson.
Although Crawford is the smallest of the squares, it anchors the largest ward, as Crawford Ward includes the territory of Colonial Park Cemetery.
During the era of Jim Crow, this was the only square in which African-Americans were permitted.
While all of the squares were once fenced, Crawford is the only one that remains so. Crawford Square has also retained its cistern, a holdover from early firefighting practices. After a major fire in 1820, firemen maintained duty stations in the squares, each of which was equipped with a storage cistern.
Chatham Square
Chatham Square was laid out in 1847 and named in 1851 for William Pitt, 1st Earl of Chatham. Although Pitt never visited Savannah he was an early supporter of the Georgia colony and both Chatham Square and Chatham County are named in his honor.
The square is sometimes known locally as Barnard Square, in reference to the 1901-built Barnard Street School, which actually stands at 212 West Taylor Street and has served as a building for the Savannah College of Art and Design since 1988; the college renamed it Pepe Hall.
Monterey Square
Monterey Square was laid out in 1847 and commemorates the Battle of Monterrey (1846), in which American forces under General Zachary Taylor captured the city of Monterrey during the Mexican–American War. (The correct spelling in reference to the square is "Monterey", with a single r.)
In the center of the square is an 1853 monument honoring General Casimir Pulaski.
Monterey Square is the site of Mercer House, built by Hugh Mercer and more recently the home of antiques dealer and conservator Jim Williams. The house (which fills an entire block), and the square itself, were featured prominently in John Berendt's 1994 true-crime book Midnight in the Garden of Good and Evil (written before Ellis Square was reinstated). Monterey Square has been used as a setting for several motion pictures, including the 1997 film version of Berendt's book. The Comer House is also featured in the movie.
The square also is home to Congregation Mickve Israel, which boasts one of the few Gothic-style synagogues in America, dating from 1878.
All but one of the buildings surrounding the square are original to the square, the exception being the United Way Building.
Troup Square
Troup Square was laid out in 1851 and is named for former Georgia Governor, Congressman, and Senator George Troup. It is one of only two squares named for a person living at the time (the other being Washington Square). A large iron armillary sphere stands in the center of the square, supported by six small metal turtles.
A special dog fountain is located on the west side of the square. The Myers Drinking Fountain was a gift from Savannah mayor Herman Myers in 1897 and was originally placed in Forsyth Park. When it was moved to Troup Square, its height was adjusted for canine use, and it has become the site of an annual Blessing of the Animals.
The Unitarian Universalist Church sits on the western side of the square. It is believed that James Lord Pierpont wrote the tune to "Jingle Bells" while he was the church's music director, though other sources hold that he only copyrighted the song while in that role and had written it in Medford, Massachusetts.
In 1969, Savannah landscape architect Clermont Huger Lee and Mills B. Lane planned and initiated a project to remove the vandalized central playground, close the fire lane, install an armillary sundial, and add new walks, benches, lighting, and plantings.
Taylor Square
Taylor Square was laid out in 1851 and was originally named for South Carolina statesman John C. Calhoun, who served as Secretary of War, Secretary of State, and vice president under John Quincy Adams and Andrew Jackson. In 2023, it was renamed Taylor Square in honor of Susie King Taylor, the first African-American nurse of the American Civil War and an educator and memoirist.
The square is sometimes called Massie Square, in reference to a neighborhood school.
The square is also home to Wesley Monumental United Methodist Church, founded in 1868.
It is the only square with all of its original buildings intact.
The square is believed to have been built over a slave burial ground, with around one thousand bodies buried in it. In 2004 a skull was found by utility workers outside the Massie Heritage Interpretation Center on the square's southeastern side.
Whitefield Square
Whitefield Square was laid out in 1851, the final square built.
It is named for the Rev. George Whitefield, who in the 18th century founded the Bethesda Orphanage, now the Bethesda Home for Boys, a residential education program still in existence on the south side of the city.
The square has a gazebo in its center.
Andrew Bryan, the founder of the First African Baptist Church, is buried in the square, as is Henry Cunningham, the minister of the Second African Baptist Church.
Forsyth Park
After 1851, as the city expanded south of Gaston Street, further extensions of Oglethorpe's grid of wards and squares were abandoned. Forsyth Park, located just south of Monterey Ward, was intended to be a single large park that would serve the growing southern portion of the city just as the squares had served their individual wards. The original northern portion of the park, surrounding the well-known fountain, occupied an area the size of an entire ward from the old city, and the park more than doubled in size during later years. Other, smaller neighborhood parks have been established in the southern portions of the city.
Analysis
While some authorities believe that the original plan allowed for growth of the city and thus expansion of the grid, the regional plan suggests otherwise: the ratio of town lots to country lots was in balance and growth of the urban grid would have destroyed that balance.
See also
Sanborn Fire Insurance Maps of Savannah
Notes
References
City of Savannah, "Savannah's Squares" page, accessed June 13, 2007. This page contains links to individual pages on each of Savannah's 24 squares, many with photographs. These pages are referenced throughout this article.
External links
Map and aerial views of the historic district from the visitor information section of Savannah.com
Tour Guide Manual from the City of Savannah website
A street map of the historic district from Savannah.com
Another street map of the historic district from Sherpa Guides
Savannah Squares book site
Haitian American Historical Society, organizers of the Haitian Volunteers monument
Photo essay of all 24 squares in Savannah
Savannah GA Historic Squares POV Driving – Travel Towner, YouTube, January 1, 2021
Squares of Savannah, Georgia
Squares of Savannah
Squares | Squares of Savannah, Georgia | Engineering | 5,430 |
15,028,877 | https://en.wikipedia.org/wiki/Renin%20receptor | The renin receptor also known as ATPase H(+)-transporting lysosomal accessory protein 2, or the prorenin receptor, is a protein that in humans is encoded by the ATP6AP2 gene.
Function
The renin receptor binds renin and prorenin. Binding of renin to this receptor enhances the conversion of angiotensinogen to angiotensin I.
This protein is associated with proton-translocating ATPases, which have fundamental roles in energy conservation, secondary active transport, acidification of intracellular compartments, and cellular pH homeostasis. There are three classes of ATPases: F, P, and V. The vacuolar (V-type) ATPases have a transmembrane proton-conducting sector and an extramembrane catalytic sector. This protein has been found associated with the transmembrane sector of the V-type ATPases.
References
Further reading
External links
Transmembrane receptors | Renin receptor | Chemistry | 202 |
28,992,521 | https://en.wikipedia.org/wiki/Linker%20DNA | In molecular biology, linker DNA is double-stranded DNA (38-53 base pairs long) in between two nucleosome cores that, in association with histone H1, holds the cores together. Linker DNA is seen as the string in the "beads and string model", which is made by using an ionic solution on the chromatin. Linker DNA connects to histone H1 and histone H1 sits on the nucleosome core. Nucleosome is technically the consolidation of a nucleosome core and one adjacent linker DNA; however, the term nucleosome is used freely for solely the core. Linker DNA may be degraded by endonucleases.
The linkers used in molecular cloning are short double-stranded DNA segments formed from synthetic oligonucleotides. These contain target sites for the action of one or more restriction enzymes. The linkers can be synthesized chemically and ligated to the blunt ends of foreign DNA or vector DNA. These are then treated with a restriction endonuclease to produce cohesive ends on the DNA fragments. The commonly used linkers are EcoRI linkers and SalI linkers.
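As a rough illustration of how such a linker works, the sketch below ligates a palindromic EcoRI linker (recognition site GAATTC, cut between G and A) onto a blunt-ended toy fragment and then locates the cut sites; the sequences are illustrative assumptions, not specific commercial linker products, and the single-strand string representation is a simplification.

```python
# A minimal sketch of linker ligation and EcoRI digestion on toy sequences;
# GGAATTCC is a classic self-complementary EcoRI linker oligonucleotide.
ECORI_SITE = "GAATTC"          # EcoRI cleaves G^AATTC, leaving AATT overhangs
linker = "GGAATTCC"            # palindromic 8-mer carrying the site
fragment = "ATGGCCATTGTAATG"   # blunt-ended foreign DNA (toy sequence)

# Blunt-end ligation attaches a linker to each end of the fragment.
ligated = linker + fragment + linker

# Digestion with EcoRI then cuts inside each linker, producing cohesive ends.
first_cut = ligated.find(ECORI_SITE) + 1   # position just after the G
last_cut = ligated.rfind(ECORI_SITE) + 1
print(ligated[:first_cut], "/", ligated[first_cut:last_cut], "/", ligated[last_cut:])
```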
References
External links
Image illustrating linker DNA
DNA
Electrochemistry | Linker DNA | Chemistry | 259 |
21,852,896 | https://en.wikipedia.org/wiki/Flavor%20and%20Extract%20Manufacturers%20Association | The Flavor and Extract Manufacturers Association (FEMA) is a food industry trade group based in the United States. FEMA was founded in 1909 by several flavor firms in response to the passage of the Pure Food and Drug Act of 1906. Founding members were McCormick & Company, Ulman Driefus & Company, Jones Brothers, Blanke Baer Chemical Company, Frank Tea & Spice Company, Foote & Jenkes, Sherer Gillett Company, and C.F. Sauer Company.
Since its founding, FEMA has played instrumental roles in creating a program to assess the safety and "generally recognized as safe" status of flavor ingredients, advocating for policies that positively impact the food and flavor industry, and representing its members' interests during the creation of the Food Additives Amendment of 1958, an amendment to the United States' Federal Food, Drug, and Cosmetic Act of 1938.
FEMA maintains a Flavor Ingredient Library, a list of all flavoring ingredients allowed in the United States.
Critics of FEMA have said that the organization and the safety assessments it makes lack transparency. The Center for Science in the Public Interest has written that the Flavor and Extract Manufacturers Association typically makes determinations that substances are generally recognized as safe without FDA oversight, and that the process is often secretive.
See also
Food safety
References
External links
Flavors
American food and drink organizations
Food chemistry organizations
Food technology organizations
Trade associations based in the United States
1909 establishments in the United States | Flavor and Extract Manufacturers Association | Chemistry | 290 |
14,878,373 | https://en.wikipedia.org/wiki/PCDHB10 | Protocadherin beta-10 is a protein that in humans is encoded by the PCDHB10 gene.
This gene is a member of the protocadherin beta gene cluster, one of three related gene clusters tandemly linked on chromosome five. The gene clusters demonstrate an unusual genomic organization similar to that of B-cell and T-cell receptor gene clusters. The beta cluster contains 16 genes and 3 pseudogenes, each encoding 6 extracellular cadherin domains and a cytoplasmic tail that deviates from others in the cadherin superfamily. The extracellular domains interact in a homophilic manner to specify differential cell-cell connections. Unlike the alpha and gamma clusters, the transcripts from these genes are made up of only one large exon, not sharing common 3' exons as expected. These neural cadherin-like cell adhesion proteins are integral plasma membrane proteins. Their specific functions are unknown but they most likely play a critical role in the establishment and function of specific cell-cell neural connections.
References
Further reading | PCDHB10 | Chemistry | 215 |
16,801,208 | https://en.wikipedia.org/wiki/NComputing | NComputing is a desktop virtualization company that manufactures hardware and software to create virtual desktops (sometimes called zero clients or thin clients) which enable multiple users to simultaneously share a single operating system instance.
NComputing, based in San Mateo, California, is a privately held for-profit company with offices in the United States, Singapore, UK, Germany, India, Korea, and Poland; and resellers around the world.
History
Founding
In 2003, Young Song, a former VP at eMachines, met German entrepreneur Klaus Maier (formerly CEO of hydrapark), who had spent more than ten years developing the core software on which NComputing is based. They formed a team to develop the complementary hardware in Korea, while the software was written in Poland and Russia. After they successfully launched the product and reached $10 million in revenue in two years, the two founders decided to move the headquarters to Silicon Valley. Stephen Dukker, former chairman of eMachines, joined NComputing in August 2006 to lead the company together with them.
Financing
Dukker introduced NComputing to venture capitalists and technology journalists in September 2006 at DEMOfall 06. By October 2006, NComputing had raised $8 million from Scale Venture Partners (formerly known as BA Venture Partners). In January 2008, the company raised a $28 million series B round of financing, led by Silicon Valley venture capital firm Menlo Ventures with participation from Scale Venture Partners and South Korea's Daehong Technew Corp. In April 2012, the company raised a $21.8 million series C round of financing led by Questmark Partners with participation from existing investors. In 2017, the original NComputing Co., Ltd., a Korean corporation, became the ultimate holding company for all other subsidiaries and raised $6 million from MDI VC (Telkom Indonesia's VC in Jakarta), Pinnacle Ventures (Menlo Park, United States), and Bokwang Ventures (Seoul, Korea) to accelerate growth in the South Asia region and boost enterprise VDI software products.
Current growth
The company was founded in 2003, and current global usage is 20 million daily users in 140 countries. Its customer base includes 70,000 education and business organizations, among them 5,000 school districts in the United States. NComputing has shipped more than three million units overall, including 180,000 seats to provide one computing seat for every K–12 student in the country of North Macedonia. As of 2017, the company has 100 employees worldwide.
According to several survey metrics, NComputing is positioned as one of the top five major players in the enterprise thin client market, alongside Dell, Hewlett-Packard, Lenovo, and IGEL.
Operating system and virtualization support
Linux support
Linux is supported through a version of vSpace Server for Linux software. Currently, NComputing offers support for Ubuntu 14.04, 16.04 and 18.04. This software is proprietary and requires a server-based license. There is a 10-day free trial period. The vSpace Server for Linux provides features like client session monitoring, virtual IP, optimized video playback and messaging service between the clients.
Windows support
Windows is supported through a version of vSpace Server for Windows software. The supported versions of Windows include: Windows Server 2003 R2 SP2 and Windows XP SP3 (32-bit); Windows Server 2008 SP2 and Windows Vista SP2 (32-bit); Windows Server 2008 R2 SP1, Windows MultiPoint Server 2011, and Windows 7 SP1 (both 32- and 64-bit); Windows 8 SP1 (64-bit); Windows Server 2012 R2; Windows 10; Windows Server 2016; and Windows Server 2019. vSpace Server software utilizes Microsoft Remote Desktop Services features to host user sessions.
VDI support
NComputing's VERDE VDI Enterprise Edition 8.x product provides a virtualization solution based on Virtual Desktop Infrastructure. NComputing acquired the VERDE VDI intellectual property rights from Virtual Bridges in Q1 2017 and officially launched the product in June 2017. It runs on KVM-enabled Linux bare metal, or on nested-KVM-enabled Azure or Google Cloud instances. The current version, 8.3.4, supports 10 languages.
See also
Multiseat configuration
Windows MultiPoint
References
Computer companies of the United States
Computer hardware companies
Computer systems companies
Thin clients
Remote desktop protocols
Privately held companies based in California
Computer companies established in 2003
2003 establishments in California | NComputing | Technology | 908 |
63,480,925 | https://en.wikipedia.org/wiki/NGC%20937 | NGC 937 is a barred spiral galaxy located in the constellation Andromeda about 251 million light years from the Milky Way. It was discovered by the French astronomer Édouard Stephan on 12 December 1884.
See also
List of NGC objects (1–1000)
References
External links
Barred spiral galaxies
0937
01961
+07-06-024
Andromeda (constellation)
009480
Astronomical objects discovered in 1884
Discoveries by Édouard Stephan | NGC 937 | Astronomy | 90 |
50,849,684 | https://en.wikipedia.org/wiki/Steroid%20ester | A steroid ester is an ester of a steroid. They include androgen esters, estrogen esters, progestogen esters, and corticosteroid esters. Steroid esters may be naturally occurring/endogenous like DHEA sulfate or synthetic like estradiol valerate. Esterification is useful because it is often able to render the parent steroid into a prodrug of itself with altered chemical properties such as improved metabolic stability, water solubility, and/or lipophilicity. This, in turn, can enhance pharmacokinetics, for instance by improving the steroid's bioavailability and/or conferring depot activity and hence an extended duration with intramuscular or subcutaneous injection.
Esterification of steroids with fatty acids was developed to prolong the duration of effect of steroid hormones. By 1957, more than 500 steroid esters had been synthesized, most frequently of androgens. The longer the fatty acid chain, up to a certain optimal length, the longer the duration when prepared as an oil solution and injected. Across a chain length range of 6 to 12 carbon atoms, a length of 9 or 10 carbon atoms (nonanoate or decanoate ester) was found to be optimal in rodents in the case of testosterone esters. Fatty acid esters increase the lipophilicity of steroids, with longer fatty acids resulting in greater lipophilicity. The greater solubility in oil allows the steroid esters to be dissolved in a smaller oil volume, thereby allowing for larger doses with intramuscular injection. In addition, the greater the lipophilicity of the steroid, as measured by the octanol/water partition coefficient (logP), the slower its release from the oily depot at the injection site and the longer its duration.
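For reference, the octanol/water partition coefficient mentioned above has the standard physical-chemistry definition (a general convention, not specific to steroid esters):

$$\log P = \log_{10}\left(\frac{[\text{steroid}]_{\text{octanol}}}{[\text{steroid}]_{\text{water}}}\right)$$

so each unit increase in logP corresponds to a ten-fold stronger preference for the oil phase, consistent with slower release from an oily depot and a longer duration of action.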
Steroid esters can also be prepared as crystalline aqueous suspensions. Aqueous suspensions of steroid crystals result in prolongation of duration with intramuscular injection, similarly to oil solutions. The duration is longer than that of oil solutions, intermediate between oil solutions and subcutaneous pellet implants. The sizes of crystals in suspensions vary and can range from 0.1 μm to some hundreds of μm. The duration of crystalline steroid suspensions increases directly with the size of the crystals. However, crystalline suspensions have an irritating effect in the body, and intramuscular injections of crystalline steroid suspensions result in painful local reactions. These reactions worsen with larger crystals, and for this reason, crystal sizes must be limited to minimize local reactions. Particle sizes of more than 300 μm in the case of estradiol benzoate by intramuscular injection have been found to be too painful for use.
In some cases, crystalline steroid suspensions are used not for prolongation of effect, but because the solubility of the steroid result in this preparation being the only practical way to deliver the steroid in a reasonable injection volume. Examples include cortisone acetate and hydrocortisone and its esters. A requirement of long-lasting crystalline steroid administration is that the steroid be sufficiently water-insoluble, so that it dissolves slowly and thereby attains a prolonged therapeutic effect. The crystals in suspensions can sometimes clump together or aggregate and grow in size. This can be avoided by careful formulation. Crystalline suspensions of steroids are prepared either by precipitation or by dispersing finely divided material in an aqueous suspension medium. Desired particle size can be achieved by grinding, for instance through the use of an atomizer.
Adolf Butenandt reported in 1932 that estrone benzoate in oil solution had a prolonged duration with injection in animals. No such prolongation of action occurred if it was given by intravenous injection. Estradiol benzoate was synthesized in 1933 and was marketed for use the same year.
Sulfur-based esters
Certain sulfur-based steroid esters have a sulfamate or sulfonamide moiety as the ester, typically at the C3 and/or C17β positions. Like many other steroid esters, they are prodrugs. Unlike other steroid esters however, they bypass first-pass metabolism with oral administration and have high oral bioavailability and potency, abolished first-pass hepatic impact, and long elimination half-lives and durations of action. They are under development for potential clinical use. Examples include the estradiol esters estradiol sulfamate (E2MATE; also a potent steroid sulfatase inhibitor) and EC508 (estradiol 17β-(1-(4-(aminosulfonyl)benzoyl)-L-proline)), the testosterone ester EC586 (testosterone 17β-(1-((5-(aminosulfonyl)-2-pyridinyl)carbonyl)-L-proline)), and sulfonamide esters of levonorgestrel and etonogestrel.
See also
List of steroid esters
Steroid sulfate
References
Further reading
Prodrugs
Steroid esters | Steroid ester | Chemistry | 1,094 |
172,327 | https://en.wikipedia.org/wiki/Congenital%20rubella%20syndrome | Congenital rubella syndrome (CRS) occurs when a human fetus is infected with the rubella virus (German measles) via maternal-fetal transmission and develops birth defects. The most common congenital defects affect the ophthalmologic, cardiac, auditory, and neurologic systems.
Rubella infection in pregnancy can result in various outcomes ranging from asymptomatic infection to congenital defects to miscarriage and fetal death. If infection occurs 0–11 weeks after conception, the infant has a 90% risk of being affected. If the infection occurs 12–20 weeks after conception, the risk is 20%. Infants are not generally affected if rubella is contracted during the third trimester. Diagnosis of congenital rubella syndrome is made through a series of clinical and laboratory findings and management is based on the infant's clinical presentation. Maintaining rubella outbreak control via vaccination is essential in preventing congenital rubella infection and congenital rubella syndrome.
Congenital rubella syndrome was discovered in 1941 by Australian Norman McAlister Gregg.
Signs and symptoms
The classic triad for congenital rubella syndrome is:
Sensorineural deafness (58% of patients)
Eye abnormalities—especially retinopathy, cataract, glaucoma, and microphthalmia (43% of patients)
Congenital heart disease—especially pulmonary artery stenosis and patent ductus arteriosus (50% of patients)
Other manifestations of CRS may include:
Spleen, liver, or bone marrow problems (some of which may disappear shortly after birth)
Intellectual disability
Small head size (microcephaly)
Low birth weight
Thrombocytopenic purpura, leading to easy or excessive bleeding or bruising
Extramedullary hematopoiesis (presents as a characteristic blueberry muffin rash)
Enlarged liver (hepatomegaly)
Small jaw size (micrognathia)
Radiolucent bone disease
Skin lesions
Children who have been exposed to rubella in the womb should also be watched closely as they age for any indication of:
Developmental delay
Autism
Schizophrenia
Growth retardation
Learning disabilities
Thyroid disorders
Diabetes mellitus
Diagnosis
Diagnosis of congenital rubella syndrome is made based on clinical findings and laboratory criteria. Laboratory criteria includes at least one of the following:
Detection of the rubella virus via RT-PCR
Detection of rubella-specific IgM antibody
Detection of infant rubella-specific IgG antibody at levels higher, and persisting longer, than expected from passive maternal transmission
Isolation of the rubella virus by nasal, blood, throat, urine, or cerebrospinal fluid specimens
Clinical definition is characterized by findings in the following categories:
Cataracts/congenital glaucoma, congenital heart disease (most commonly, patent ductus arteriosus or peripheral pulmonary artery stenosis), hearing impairment, pigmentary retinopathy
Purpura, hepatosplenomegaly, jaundice, microcephaly, developmental delay, meningoencephalitis, radiolucent bone disease
A patient is classified into the following cases depending on their clinical and laboratory findings (a minimal sketch of this decision logic follows the list):
Suspected: A patient that has one or more of the clinical findings listed above but does not meet the definition for probable or confirmed classification
Probable: A patient that does not have laboratory confirmation of congenital rubella but has either two clinical findings from Group 1 as listed above OR one clinical finding from Group 1 and one clinical finding from Group 2 as listed above
Confirmed: A patient with at least one laboratory finding and one clinical finding (from either group) as listed above
Infection only: A patient with no clinical findings as described above but meeting at least one confirmed laboratory criteria
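As a rough sketch, the classification rules above can be encoded as follows; the finding names and the set-based encoding are illustrative assumptions, not an official CDC implementation.

```python
# A minimal sketch of the CRS case classification described above.
GROUP1 = {"cataracts/congenital glaucoma", "congenital heart disease",
          "hearing impairment", "pigmentary retinopathy"}
GROUP2 = {"purpura", "hepatosplenomegaly", "jaundice", "microcephaly",
          "developmental delay", "meningoencephalitis",
          "radiolucent bone disease"}

def classify(findings, lab_confirmed):
    """Return the case class for a set of clinical findings."""
    g1 = len(findings & GROUP1)   # count of Group 1 findings
    g2 = len(findings & GROUP2)   # count of Group 2 findings
    if lab_confirmed:
        # Lab criteria plus at least one clinical finding -> confirmed;
        # lab criteria with no clinical findings -> infection only.
        return "confirmed" if g1 + g2 >= 1 else "infection only"
    if g1 >= 2 or (g1 >= 1 and g2 >= 1):
        return "probable"
    return "suspected" if g1 + g2 >= 1 else "not a case"

print(classify({"hearing impairment", "purpura"}, lab_confirmed=False))  # probable
```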
Prevention
Vaccinating the majority of the population is effective at preventing congenital rubella syndrome. With the introduction of the rubella vaccine in 1969, the number of cases of rubella in the United States decreased 99%, from 57,686 cases in 1969 to 271 cases in 1999. For women who plan to become pregnant, the MMR (measles, mumps, rubella) vaccination is highly recommended, at least 28 days prior to conception. The vaccine should not be given to women who are already pregnant, as it contains live viral particles. Other preventative actions can include the screening and vaccination of high-risk personnel, such as those in the medical and child care professions.
Infants with birth defects suspected to be caused by congenital rubella infection should be investigated thoroughly. Confirmed cases should be reported to the local or state health department to assess control of the virus and isolation of the infant should be maintained.
Management
Infants with known rubella exposure during pregnancy or those with a confirmed or suspected infection should receive close follow-up and supportive care. There are no medications or antivirals that will shorten the clinical course of the virus. Only those with immunity to rubella should have contact with infected infants, as infected infants can shed viral particles in their respiratory secretions through 1 year of age (unless they have repeated negative viral cultures after age 3 months). Many infants are born with multiple birth defects that require multidisciplinary management and interventions based on clinical manifestations. Often these infants will require extended or life-long follow-up with medical specialists. Early diagnosis of congenital rubella syndrome is important for planning future medical care and educational placement.
Auditory Care
Many infants with CRS may be born with sensorineural deafness and thus should undergo a newborn hearing evaluation. Hearing loss may not be apparent at birth and thus requires close auditory follow up. Infants with confirmed hearing impairment may require hearing aids and may benefit from an early intervention program.
Ophthalmologic Care
Eye abnormalities including cataracts, infantile glaucoma and retinopathy are common in infants born with CRS. Infants should undergo eye examinations after birth and during early childhood. Those with congenital eye defects require care from a pediatric ophthalmologist for specialized care and follow up.
Cardiac Care
Congenital cardiac anomalies including pulmonary artery stenosis and patent ductus arteriosus can be seen in infants with CRS. Infants should undergo cardiac evaluation soon after birth and those with confirmed cardiac lesions will require specialized care with a pediatric cardiologist for any interventions and follow-up care.
See also
Jay Horwitz (born 1945), New York Mets executive born with the syndrome
References
Congenital disorders
Infections specific to the perinatal period
Rubella
Syndromes caused by microbes
Virus-related cutaneous conditions
Disability
Infectious diseases
Pediatrics
Hematology | Congenital rubella syndrome | Biology | 1,321 |
50,978,083 | https://en.wikipedia.org/wiki/Jaszczak%20phantom | A Jaszczak phantom () aka Data Spectrum ECT phantom is an imaging phantom used for validating scanner geometry, 3D contrast, uniformity, resolution, attenuation and scatter correction or alignment tasks in nuclear medicine. It is commonly used in academic centers and hospitals to characterize a SPECT or some gamma camera systems for quality control purposes. It is used for accreditation by clinical and academic facilities for the American College of Radiology.
The phantom was developed by Ronald J. Jaszczak of Duke University, who filed a patent application for it in 1982. It is a cylinder containing fillable inserts and is often used with a radionuclide such as technetium-99m or fluorine-18.
Although the phantom can be used for acceptance testing, the National Electrical Manufacturers Association recommends a 30 million count acquisition and section reconstruction of the phantom be performed quarterly.
In 1981 Ronald J. Jaszczak founded Data Spectrum Corporation which manufactures the Jaszczak phantom and several other nuclear imaging tools, such as the Hoffman Brain phantom.
Structure and composition
Jaszczak phantoms consist of a main cylinder or tank made of acrylic plastic with several inserts. The circular phantom comes in two varieties: flanged and flangeless. The latter is recommended by the American College of Radiology for accreditation of nuclear medicine departments. All Jaszczak phantoms have six solid spheres and six sets of 'cold' rods. In flanged models, the sizes of the spheres vary, and the number of rods in each set depends on the size of the rods in that set, as different models of the phantom have rods of different sizes. In flangeless models, the diameters of the spheres are 9.5, 12.7, 15.9, 19.1, 25.4 and 31.8 mm, while the rod diameters are 4.8, 6.4, 7.9, 9.5, 11.1 and 12.7 mm. Both the solid spheres and the rod inserts mimic cold lesions in a hot background. The spheres are used to measure image contrast, while the rods are used to investigate image resolution in SPECT systems.
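As a hedged illustration of how the spheres are used, the sketch below computes percent "cold" contrast from region-of-interest (ROI) mean counts using the common (background - sphere) / background convention; all count values are invented for the example and are not ACR acceptance limits.

```python
# A toy contrast calculation for the six cold spheres of a flangeless
# Jaszczak phantom; the ROI mean counts below are illustrative assumptions.
sphere_diameters_mm = [9.5, 12.7, 15.9, 19.1, 25.4, 31.8]
sphere_roi_means = [92.0, 78.0, 60.0, 45.0, 28.0, 15.0]  # counts per pixel
background_mean = 100.0                                   # counts per pixel

for diameter, sphere_mean in zip(sphere_diameters_mm, sphere_roi_means):
    contrast = 100.0 * (background_mean - sphere_mean) / background_mean
    print(f"{diameter:5.1f} mm sphere: {contrast:5.1f}% cold contrast")
```

Larger spheres should recover higher contrast; a fall-off at small diameters reflects the scanner's limited resolution (the partial-volume effect).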
References
External links
ACR Accreditation of Nuclear Medicine and PET Imaging Departments
Nuclear medicine
Quality control tools
Positron emission tomography | Jaszczak phantom | Physics | 471 |
47,157,896 | https://en.wikipedia.org/wiki/Genetics%20of%20infertility | About 10–15% of human couples are infertile, unable to conceive. In approximately in half of these cases, the underlying cause is related to the male. The underlying causative factors in the male infertility can be attributed to environmental toxins, systemic disorders such as, hypothalamic–pituitary disease, testicular cancers and germ-cell aplasia. Genetic factors including aneuploidies and single-gene mutations are also contributed to the male infertility. Patients with nonobstructive azoospermia or oligozoospermia show microdeletions in the long arm of the Y chromosome and/or chromosomal abnormalities, each with the respective frequency of 9.7% and 13%. A large percentage of human male infertility is estimated to be caused by mutations in genes involved in primary or secondary spermatogenesis and sperm quality and function. Single-gene defects are the focus of most research carried out in this field.
NR5A1 mutations are associated with male infertility, suggesting the possibility that these mutations cause the infertility. However, it is possible that these mutations individually have no major effect and contribute to male infertility only in combination with other contributors such as environmental factors and other genomic variants. Conversely, other alleles could reduce the phenotypic effects of impaired NR5A1 proteins, attenuating the expression of abnormal phenotypes so that male infertility is the sole manifestation.
NR5A1 roles in sex development and related disorders
Nuclear receptor subfamily 5 group A member 1 (NR5A1), also known as SF1 or Ad4BP (MIM 184757), is located on the long arm of chromosome 9 (9q33.3). NR5A1 is an orphan nuclear receptor that was first identified in the search for a common regulator of the cytochrome P450 steroid hydroxylase enzyme family. This receptor is a pivotal transcriptional regulator of an array of genes involved in reproduction, steroidogenesis and male sexual differentiation, and also plays a crucial role in adrenal gland formation in both sexes. NR5A1 regulates the Müllerian inhibitory substance by binding to a conserved upstream regulatory element and directly participates in the process of mammalian sex determination through Müllerian duct regression. Targeted disruption of NR5A1 (Ftzf1) in mice results in gonadal and adrenal agenesis, persistence of Müllerian structures, and abnormalities of the hypothalamus and pituitary gonadotropes. Heterozygous animals demonstrate a milder phenotype including an impaired adrenal stress response and reduced testicular size. In humans, NR5A1 mutations were first described in patients with a 46,XY karyotype and disorders of sex development (DSD), Müllerian structures and primary adrenal failure (MIM 612965). After that, heterozygous NR5A1 mutations were described in seven patients with a 46,XY karyotype and ambiguous genitalia and gonadal dysgenesis, but no adrenal insufficiency. Since then, studies have confirmed that mutations in NR5A1 in patients with a 46,XY karyotype cause severe underandrogenisation but no adrenal insufficiency, establishing dynamic and dosage-dependent actions for NR5A1. Subsequent studies revealed that NR5A1 heterozygous mutations cause primary ovarian insufficiency (MIM 612964).
NR5A1 new roles in fertility and infertility
Recently, NR5A1 mutations have been related to human male infertility (MIM 613957). These findings substantially increase the number of NR5A1 mutations reported in humans and show that mutations in NR5A1 can be found in patients with a wide range of phenotypic features, ranging from 46,XY sex reversal with primary adrenal failure to male infertility. Bashamboo et al. (2010) were the first to study nonobstructive infertile men (a non-Caucasian cohort of mixed ancestry, n = 315), reporting missense mutations in the NR5A1 gene at a frequency of 4%. Functional studies of the missense mutations revealed impaired transcriptional activation of NR5A1-responsive target genes. Subsequently, three missense mutations were identified as associated with, and most likely the cause of, male infertility, according to computational analyses; that study, in men of Caucasian German origin (n = 488), indicated a mutation frequency below 1%. In another study, the coding sequence of NR5A1 was analysed in a cohort of 90 well-characterised idiopathic Iranian azoospermic infertile men versus 112 fertile men. Heterozygous NR5A1 mutations were found in 2 of 90 (2.2%) cases. These two patients harboured missense mutations within the hinge region (p.P97T) and ligand-binding domain (p.E237K) of the NR5A1 protein.
Small supernumerary marker chromosomes and infertility
Small supernumerary marker chromosomes (sSMCs) are extra chromosomes consisting of parts of virtually any other chromosome(s). By definition, they are smaller than one of the smaller chromosomes, chromosome 20. sSMCs typically develop in individuals as a result of abnormal chromosomal events occurring in one of their parent's eggs, sperm, or zygotes, but in less common cases are directly inherited from a parent carrier of the sSMC. sSMCs occur in 0.125% of all infertility cases, are 7.5-fold more common in men, and in women are often associated with ovarian failure. The sSMCs associated with infertility can consist of parts of virtually any other chromosome. While only a small percentage of these sSMCs have had their genetic material defined, those that have include sSMCs containing: a) band 11.1 on the long arm of chromosome 15 (notated as (15)q11.1) (this sSMC is associated with premature ovarian failure); b) band 11.2 on the long arm of chromosome 13 (notated as (13)q11.2) (this sSMC is associated with oligoasthenoteratozoospermia, i.e. oligozoospermia [low sperm count], teratozoospermia [presence of sperm with abnormal shapes], and asthenozoospermia [sperm with reduced motility]); c) band 11 on the long arm of chromosome 14 (notated as (14)q11.1) (this sSMC is associated with otherwise uncharacterized infertility); and d) band 11 on the long arm of chromosome 22 (notated as (22)q11) (this sSMC is associated with repeated abortions).
See also
Female infertility
References
Genetics
Infertility | Genetics of infertility | Biology | 1,465 |
54,366,493 | https://en.wikipedia.org/wiki/Skeletocutis%20subvulgaris | Skeletocutis subvulgaris is a species of poroid, white rot fungus in the family Polyporaceae. Found in China, it was described as a new species in 1998 by mycologist Yu-Chen Dai. It was named for its resemblance to Skeletocutis vulgaris. The type collection was made in Hongqi District, Jilin Province, where it was found growing on the rotting wood of Korean pine (Pinus koraiensis).
Description
The fungus has a soft, thin, crust-like fruit body forming strips that measure long by wide; these strips are sometimes joined to make larger patches. The pore surface is whitish, with small pores numbering 6–8 per millimetre. S. subvulgaris has a dimitic hyphal system. Some of the hyphae of the dissepiment edges (the tissue between the pores) are encrusted with spiny crystals. The skeletal hyphae have a distinct lumen, which helps distinguish this species from the similar S. vulgaris. Spores of S. subvulgaris are roughly cylindrical, thin-walled and hyaline, and measure 3.1–4.1 by 1.1–1.6 μm.
References
Fungi described in 1998
Fungi of China
subvulgaris
Taxa named by Yu-Cheng Dai
Fungus species | Skeletocutis subvulgaris | Biology | 280 |
1,048,084 | https://en.wikipedia.org/wiki/Sunwise | Sunwise, sunward or deasil (sometimes spelled deosil), are terms meaning to go clockwise or in the direction of the sun, as seen from the northern hemisphere. The opposite term is widdershins (Lowland Scots), or tuathal (Scottish Gaelic). In Scottish culture, this turning direction is also considered auspicious, while the converse is true for counter-clockwise motion.
Irish culture
During the days of Gaelic Ireland and of the Irish clans, the Psalter known as the Cathach was used as both a rallying cry and protector in battle by the chiefs of Clan O'Donnell. Before a battle it was customary for a chosen monk or holy man (usually attached to Clan McGroarty and in a state of grace) to wear the Cathach and its cumdach, or book shrine, around his neck and then walk three times sunwise around the warriors of Clan O'Donnell.
According to folklorist Kevin Danaher, on St. John's Eve in Ulster and Connaught, it was customary to light a bonfire at sunset and to walk sunwise around the fire while praying the rosary. Those who could not afford a rosary would keep tally by holding a small pebble during each prayer and throwing it into the bonfire as each prayer was completed.
Similar praying of the rosary or other similar prayers while walking sunwise around Christian pilgrimage shrines or holy wells is also traditional in Irish culture during pattern days.
Scottish culture
The term is descriptive of the ceremony observed by the druids of walking round their temples by the south, in the course of their directions, always keeping their temples on their right. This course (diasil or deiseal) was deemed propitious, while the contrary course was perceived as fatal, or at least unpropitious. From this ancient superstition are derived several Gaelic customs which were still observed around the turn of the twentieth century, such as drinking over the left thumb, as Toland expresses it, or according to the course of the sun.
Similarly to the pre-battle use of the Cathach of St. Columba in Gaelic Ireland, the Brecbannoch of St Columba, a reliquary containing the partial human remains of the Saint, was traditionally carried three times sunwise around Scottish armies before they gave battle. The most famous example of this was during the Scottish Wars of Independence, shortly before the Scots under Robert the Bruce faced the English army at the Battle of Bannockburn in 1314.
The use of the sunwise circle was also traditional in the Highlands during Christian pilgrimages in honour of St Máel Ruba, particularly to the shrine where he is said to have established a hermitage upon Isle Maree.
"Deosil" and other spellings
Wicca uses the spelling deosil, which violates the Gaelic orthography principle that a consonant must be surrounded by either broad vowels (a, o, u) or slender vowels (e, i). The Oxford English Dictionary gives precedence to the spelling "deasil", which violates the same principle, but acknowledges "deiseal", "deisal", and "deisul" as well.
Other cultures
This distinction exists in traditional Tibetan religion. Tibetan Buddhists go round their shrines sunwise, but followers of Bonpo go widdershins. The former consider Bonpo to be merely a perversion of their practice, but Bonpo adherents claim that their religion, as the indigenous one of Tibet, was doing this prior to the arrival of Buddhism in the country.
The Hindu pradakshina, the auspicious circumambulation of a temple, is also made clockwise.
See also
Circumambulation
References
Sources
Catholic Church in Ireland
History of Catholicism in Scotland
Irish folklore
Irish mythology
Roman Catholic pilgrimage sites in Ireland
Scottish folklore
Tibet
Orientation (geometry) | Sunwise | Physics,Mathematics | 802 |
4,007,073 | https://en.wikipedia.org/wiki/Gene%20expression%20profiling | In the field of molecular biology, gene expression profiling is the measurement of the activity (the expression) of thousands of genes at once, to create a global picture of cellular function. These profiles can, for example, distinguish between cells that are actively dividing, or show how the cells react to a particular treatment. Many experiments of this sort measure an entire genome simultaneously, that is, every gene present in a particular cell.
Several transcriptomics technologies can be used to generate the necessary data to analyse. DNA microarrays measure the relative activity of previously identified target genes. Sequence-based techniques, like RNA-Seq, provide information on the sequences of genes in addition to their expression level.
Background
Expression profiling is a logical next step after sequencing a genome: the sequence tells us what the cell could possibly do, while the expression profile tells us what it is actually doing at a point in time. Genes contain the instructions for making messenger RNA (mRNA), but at any moment each cell makes mRNA from only a fraction of the genes it carries. If a gene is used to produce mRNA, it is considered "on", otherwise "off". Many factors determine whether a gene is on or off, such as the time of day, whether or not the cell is actively dividing, its local environment, and chemical signals from other cells. For instance, skin cells, liver cells and nerve cells turn on (express) somewhat different genes and that is in large part what makes them different. Therefore, an expression profile allows one to deduce a cell's type, state, environment, and so forth.
Expression profiling experiments often involve measuring the relative amount of mRNA expressed in two or more experimental conditions. This is because altered levels of a specific sequence of mRNA suggest a changed need for the protein coded by the mRNA, perhaps indicating a homeostatic response or a pathological condition. For example, higher levels of mRNA coding for alcohol dehydrogenase suggest that the cells or tissues under study are responding to increased levels of ethanol in their environment. Similarly, if breast cancer cells express higher levels of mRNA associated with a particular transmembrane receptor than normal cells do, it might be that this receptor plays a role in breast cancer. A drug that interferes with this receptor may prevent or treat breast cancer. In developing a drug, one may perform gene expression profiling experiments to help assess the drug's toxicity, perhaps by looking for changing levels in the expression of cytochrome P450 genes, which may be a biomarker of drug metabolism. Gene expression profiling may become an important diagnostic test.
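The "relative amount" compared between two conditions is conventionally summarized as a log fold change (a standard convention rather than a platform-specific formula):

$$\log_2 \mathrm{FC} = \log_2\left(\frac{E_{\text{treated}}}{E_{\text{control}}}\right)$$

where $E$ is the measured mRNA abundance; a value of +1 indicates a two-fold increase and -1 a two-fold decrease.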
Comparison to proteomics
The human genome contains on the order of 20,000 genes which work in concert to produce roughly 1,000,000 distinct proteins. This is due to alternative splicing, and also because cells make important changes to proteins through posttranslational modification after they first construct them, so a given gene serves as the basis for many possible versions of a particular protein. In any case, a single mass spectrometry experiment can identify about 2,000 proteins, or 0.2% of the total. While knowledge of the precise proteins a cell makes (proteomics) is more relevant than knowing how much messenger RNA is made from each gene, gene expression profiling provides the most global picture possible in a single experiment. However, proteomics methodology is improving. In other species, such as yeast, it is possible to identify over 4,000 proteins in just over one hour.
Use in hypothesis generation and testing
Sometimes, a scientist already has an idea of what is going on, a hypothesis, and he or she performs an expression profiling experiment with the idea of potentially disproving this hypothesis. In other words, the scientist is making a specific prediction about levels of expression that could turn out to be false.
More commonly, expression profiling takes place before enough is known about how genes interact with experimental conditions for a testable hypothesis to exist. With no hypothesis, there is nothing to disprove, but expression profiling can help to identify a candidate hypothesis for future experiments. Most early expression profiling experiments, and many current ones, have this form, which is known as class discovery. A popular approach to class discovery involves grouping similar genes or samples together using one of the many existing clustering methods, such as the traditional k-means or hierarchical clustering, or the more recent MCL. Apart from selecting a clustering algorithm, the user usually has to choose an appropriate proximity measure (distance or similarity) between data objects. The figure above represents the output of a two-dimensional clustering, in which similar samples (rows, above) and similar gene probes (columns) were organized so that they would lie close together. The simplest form of class discovery would be to list all the genes that changed by more than a certain amount between two experimental conditions.
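As an illustration of this kind of class discovery, the following minimal Python sketch clusters a small expression matrix hierarchically; the data, sample count, and probe count are placeholders, and the choice of correlation distance with average linkage is just one common convention, not a prescribed method:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
expression = rng.normal(size=(6, 100))   # 6 samples x 100 gene probes (placeholder data)

# Proximity measure between samples; correlation distance is common for expression profiles.
distances = pdist(expression, metric="correlation")

# Agglomerative (hierarchical) clustering with average linkage.
tree = linkage(distances, method="average")

# Cut the tree into two candidate classes and report each sample's assignment.
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)

Cutting the tree at a different level, or swapping in k-means, would give an alternative class structure; the clustering algorithm and the proximity measure are exactly the two choices highlighted above.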
Class prediction is more difficult than class discovery, but it allows one to answer questions of direct clinical significance such as, given this profile, what is the probability that this patient will respond to this drug? This requires many examples of profiles that responded and did not respond, as well as cross-validation techniques to discriminate between them.
Limitations
In general, expression profiling studies report those genes that showed statistically significant differences under changed experimental conditions. This is typically a small fraction of the genome for several reasons. First, different cells and tissues express a subset of genes as a direct consequence of cellular differentiation so many genes are turned off. Second, many of the genes code for proteins that are required for survival in very specific amounts so many genes do not change. Third, cells use many other mechanisms to regulate proteins in addition to altering the amount of mRNA, so these genes may stay consistently expressed even when protein concentrations are rising and falling. Fourth, financial constraints limit expression profiling experiments to a small number of observations of the same gene under identical conditions, reducing the statistical power of the experiment, making it impossible for the experiment to identify important but subtle changes. Finally, it takes a great amount of effort to discuss the biological significance of each regulated gene, so scientists often limit their discussion to a subset. Newer microarray analysis techniques automate certain aspects of attaching biological significance to expression profiling results, but this remains a very difficult problem.
The relatively short length of gene lists published from expression profiling experiments limits the extent to which experiments performed in different laboratories appear to agree. Placing expression profiling results in a publicly accessible microarray database makes it possible for researchers to assess expression patterns beyond the scope of published results, perhaps identifying similarity with their own work.
Validation of high throughput measurements
Both DNA microarrays and quantitative PCR exploit the preferential binding or "base pairing" of complementary nucleic acid sequences, and both are used in gene expression profiling, often in a serial fashion. While high throughput DNA microarrays lack the quantitative accuracy of qPCR, it takes about the same time to measure the gene expression of a few dozen genes via qPCR as it would to measure an entire genome using DNA microarrays. So it often makes sense to perform semi-quantitative DNA microarray analysis experiments to identify candidate genes, then perform qPCR on some of the most interesting candidate genes to validate the microarray results. Other experiments, such as a Western blot of some of the protein products of differentially expressed genes, make conclusions based on the expression profile more persuasive, since the mRNA levels do not necessarily correlate to the amount of expressed protein.
Statistical analysis
Data analysis of microarrays has become an area of intense research. Simply stating that a group of genes were regulated by at least twofold, once a common practice, lacks a solid statistical footing. With five or fewer replicates in each group, typical for microarrays, a single outlier observation can create an apparent difference greater than two-fold. In addition, arbitrarily setting the bar at two-fold is not biologically sound, as it eliminates from consideration many genes with obvious biological significance.
Rather than identify differentially expressed genes using a fold change cutoff, one can use a variety of statistical tests or omnibus tests such as ANOVA, all of which consider both fold change and variability to create a p-value, an estimate of how often we would observe the data by chance alone. Applying p-values to microarrays is complicated by the large number of multiple comparisons (genes) involved. For example, a p-value of 0.05 is typically thought to indicate significance, since it estimates a 5% probability of observing the data by chance. But with 10,000 genes on a microarray, 500 genes would be identified as significant at p < 0.05 even if there were no difference between the experimental groups. One obvious solution is to consider significant only those genes meeting a much more stringent p-value criterion, e.g., one could perform a Bonferroni correction on the p-values, or use a false discovery rate calculation to adjust p-values in proportion to the number of parallel tests involved. Unfortunately, these approaches may reduce the number of significant genes to zero, even when genes are in fact differentially expressed. Current statistics such as Rank products aim to strike a balance between false discovery of genes due to chance variation and non-discovery of differentially expressed genes. Commonly cited methods include the Significance Analysis of Microarrays (SAM); a wide variety of methods are available from Bioconductor, and a variety of analysis packages are available from bioinformatics companies.
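To make the correction concrete, here is a minimal sketch of the Benjamini–Hochberg false discovery rate procedure mentioned above, implemented directly in NumPy; the p-values are illustrative, not from a real experiment:

import numpy as np

def bh_adjust(pvals):
    # Benjamini-Hochberg adjusted p-values for m parallel tests.
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)
    # Enforce monotonicity from the largest p-value downwards.
    adjusted_sorted = np.minimum.accumulate(scaled[::-1])[::-1]
    adjusted = np.empty_like(p)
    adjusted[order] = np.clip(adjusted_sorted, 0.0, 1.0)
    return adjusted

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
print(bh_adjust(pvals))   # genes with adjusted p < 0.05 survive the correction

A Bonferroni correction would instead multiply each p-value by the number of tests, which is far more conservative on a 10,000-gene array.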
Selecting a different test usually identifies a different list of significant genes, since each test operates under a specific set of assumptions and places a different emphasis on certain features in the data. Many tests begin with the assumption of a normal distribution in the data, because that seems like a sensible starting point and often produces results that appear more significant. Some tests consider the joint distribution of all gene observations to estimate general variability in measurements, while others look at each gene in isolation. Many modern microarray analysis techniques involve bootstrapping, machine learning, or Monte Carlo methods.
As the number of replicate measurements in a microarray experiment increases, various statistical approaches yield increasingly similar results, but lack of concordance between different statistical methods makes array results appear less trustworthy. The MAQC Project makes recommendations to guide researchers in selecting more standard methods (e.g. using p-value and fold-change together for selecting the differentially expressed genes) so that experiments performed in different laboratories will agree better.
In contrast to the analysis of differentially expressed individual genes, another type of analysis focuses on the differential expression or perturbation of pre-defined gene sets and is called gene set analysis. Gene set analysis has demonstrated several major advantages over individual gene differential expression analysis. Gene sets are groups of genes that are functionally related according to current knowledge. Therefore, gene set analysis is considered a knowledge-based analysis approach. Commonly used gene sets include those derived from KEGG pathways, Gene Ontology terms, and gene groups that share some other functional annotation, such as common transcriptional regulators. Representative gene set analysis methods include Gene Set Enrichment Analysis (GSEA), which estimates the significance of gene sets based on permutation of sample labels, and Generally Applicable Gene-set Enrichment (GAGE), which tests the significance of gene sets based on permutation of gene labels or a parametric distribution.
Gene annotation
While the statistics may identify which gene products change under experimental conditions, making biological sense of expression profiling rests on knowing which protein each gene product makes and what function this protein performs. Gene annotation provides functional and other information, for example the location of each gene within a particular chromosome. Some functional annotations are more reliable than others; some are absent. Gene annotation databases change regularly, and various databases refer to the same protein by different names, reflecting a changing understanding of protein function. Use of standardized gene nomenclature helps address the naming aspect of the problem, but exact matching of transcripts to genes remains an important consideration.
Categorizing regulated genes
Having identified some set of regulated genes, the next step in expression profiling involves looking for patterns within the regulated set. Do the proteins made from these genes perform similar functions? Are they chemically similar? Do they reside in similar parts of the cell? Gene ontology analysis provides a standard way to define these relationships. Gene ontologies start with very broad categories, e.g., "metabolic process" and break them down into smaller categories, e.g., "carbohydrate metabolic process" and finally into quite restrictive categories like "inositol and derivative phosphorylation".
Genes have other attributes beside biological function, chemical properties and cellular location. One can compose sets of genes based on proximity to other genes, association with a disease, and relationships with drugs or toxins. The Molecular Signatures Database and the Comparative Toxicogenomics Database are examples of resources to categorize genes in numerous ways.
Finding patterns among regulated genes
As regulated genes are categorized in terms of what they are and what they do, important relationships between genes may emerge. For example, we might see evidence that a certain gene creates a protein to make an enzyme that activates a protein to turn on a second gene on our list. This second gene may be a transcription factor that regulates yet another gene from our list. Observing these links we may begin to suspect that they represent much more than chance associations in the results, and that they are all on our list because of an underlying biological process. On the other hand, it could be that if one selected genes at random, one might find many that seem to have something in common. In this sense, we need rigorous statistical procedures to test whether the emerging biological themes are significant or not. That is where gene set analysis comes in.
Cause and effect relationships
Fairly straightforward statistics provide estimates of whether associations between genes on lists are greater than what one would expect by chance. These statistics are interesting, even if they represent a substantial oversimplification of what is really going on. Here is an example. Suppose there are 10,000 genes in an experiment, only 50 (0.5%) of which play a known role in making cholesterol. The experiment identifies 200 regulated genes. Of those, 40 (20%) turn out to be on a list of cholesterol genes as well. Based on the overall prevalence of the cholesterol genes (0.5%) one expects an average of 1 cholesterol gene for every 200 regulated genes, that is, 0.005 times 200. This expectation is an average, so one expects to see more than one some of the time. The question becomes how often we would see 40 instead of 1 due to pure chance.
According to the hypergeometric distribution, one would expect to try about 10^57 times (10 followed by 56 zeroes) before picking 39 or more of the cholesterol genes from a pool of 10,000 by drawing 200 genes at random. Whether one pays much attention to how infinitesimally small the probability of observing this by chance is, one would conclude that the regulated gene list is enriched in genes with a known cholesterol association.
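The figure can be checked with the survival function of the hypergeometric distribution; a minimal Python sketch using the numbers from the example above:

from scipy.stats import hypergeom

M, n, N = 10_000, 50, 200        # genes in total, cholesterol genes, regulated genes drawn
# Probability of drawing 39 or more cholesterol genes purely by chance.
p = hypergeom(M, n, N).sf(38)    # sf(k) = P(X > k), so sf(38) = P(X >= 39)
print(p)                          # on the order of 1e-57, matching the text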
One might further hypothesize that the experimental treatment regulates cholesterol, because the treatment seems to selectively regulate genes associated with cholesterol. While this may be true, there are a number of reasons why making this a firm conclusion based on enrichment alone represents an unwarranted leap of faith. One previously mentioned issue has to do with the observation that gene regulation may have no direct impact on protein regulation: even if the proteins coded for by these genes do nothing other than make cholesterol, showing that their mRNA is altered does not directly tell us what is happening at the protein level. It is quite possible that the amount of these cholesterol-related proteins remains constant under the experimental conditions. Second, even if protein levels do change, perhaps there is always enough of them around to make cholesterol as fast as it can possibly be made, that is, another protein, not on our list, is the rate determining step in the process of making cholesterol. Finally, proteins typically play many roles, so these genes may be regulated not because of their shared association with making cholesterol but because of a shared role in a completely independent process.
Bearing the foregoing caveats in mind, while gene profiles do not in themselves prove causal relationships between treatments and biological effects, they do offer unique biological insights that would often be very difficult to arrive at in other ways.
Using patterns to find regulated genes
As described above, one can identify significantly regulated genes first and then find patterns by comparing the list of significant genes to sets of genes known to share certain associations. One can also work the problem in reverse order. Here is a very simple example. Suppose there are 40 genes associated with a known process, for example, a predisposition to diabetes. Looking at two groups of expression profiles, one for mice fed a high carbohydrate diet and one for mice fed a low carbohydrate diet, one observes that all 40 diabetes genes are expressed at a higher level in the high carbohydrate group than the low carbohydrate group. Regardless of whether any of these genes would have made it to a list of significantly altered genes, observing all 40 up, and none down appears unlikely to be the result of pure chance: flipping 40 heads in a row is predicted to occur about one time in a trillion attempts using a fair coin.
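The quoted odds follow from treating each gene's direction as an independent fair coin, a simplification that ignores correlation between genes; a one-line check:

p_all_up = 0.5 ** 40
print(p_all_up)   # about 9.1e-13, i.e. roughly one chance in a trillion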
For a type of cell, the group of genes whose combined expression pattern is uniquely characteristic to a given condition constitutes the gene signature of this condition. Ideally, the gene signature can be used to select a group of patients at a specific state of a disease with accuracy that facilitates selection of treatments.
Gene Set Enrichment Analysis (GSEA) and similar methods take advantage of this kind of logic but use more sophisticated statistics, because component genes in real processes display more complex behavior than simply moving up or down as a group, and the amount the genes move up and down is meaningful, not just the direction. In any case, these statistics measure how different the behavior of some small set of genes is compared to genes not in that small set.
GSEA uses a Kolmogorov–Smirnov-style statistic to see whether any previously defined gene sets exhibited unusual behavior in the current expression profile. This leads to a multiple hypothesis testing challenge, but reasonable methods exist to address it.
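The published GSEA method is more elaborate (it weights steps by each gene's correlation with the phenotype and assesses significance by permutation), but the core Kolmogorov–Smirnov-style idea can be sketched as a running sum over a ranked gene list; everything below, including the gene names, is illustrative:

import numpy as np

def enrichment_score(ranked_genes, gene_set):
    # Unweighted KS-style running sum: step up at set members, down otherwise.
    hits = np.array([g in gene_set for g in ranked_genes], dtype=float)
    n_hit = hits.sum()
    n_miss = len(ranked_genes) - n_hit
    walk = np.cumsum(hits / n_hit - (1.0 - hits) / n_miss)
    # The enrichment score is the walk's maximum deviation from zero.
    return walk[np.argmax(np.abs(walk))]

ranked = [f"g{i}" for i in range(100)]          # genes ranked by expression change
pathway = {"g1", "g3", "g5", "g8", "g13"}       # hypothetical gene set
print(enrichment_score(ranked, pathway))        # positive: set members cluster near the top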
Conclusions
Expression profiling provides new information about what genes do under various conditions. Overall, microarray technology produces reliable expression profiles. From this information one can generate new hypotheses about biology or test existing ones. However, the size and complexity of these experiments often results in a wide variety of possible interpretations. In many cases, analyzing expression profiling results takes far more effort than performing the initial experiments.
Most researchers use multiple statistical methods and exploratory data analysis before publishing their expression profiling results, coordinating their efforts with a bioinformatician or other expert in DNA microarrays. Good experimental design, adequate biological replication and follow up experiments play key roles in successful expression profiling experiments.
See also
Gene expression profiling in cancer
Spatiotemporal gene expression
Transcriptomics
Splice variant analysis
References
External links
Comparative Transcriptomics Analysis in Reference Module in Life Sciences
Genetics techniques
Molecular genetics
Microarrays | Gene expression profiling | Chemistry,Materials_science,Engineering,Biology | 3,980 |
167,053 | https://en.wikipedia.org/wiki/Unit%20vector | In mathematics, a unit vector in a normed vector space is a vector (often a spatial vector) of length 1. A unit vector is often denoted by a lowercase letter with a circumflex, or "hat", as in (pronounced "v-hat").
The normalized vector û of a non-zero vector u is the unit vector in the direction of u, i.e.,

\hat{\mathbf{u}} = \frac{\mathbf{u}}{\|\mathbf{u}\|},

where ‖u‖ is the norm (or length) of u. The term normalized vector is sometimes used as a synonym for unit vector.
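A minimal NumPy sketch of this normalization:

import numpy as np

u = np.array([3.0, 4.0, 0.0])
u_hat = u / np.linalg.norm(u)    # divide by the norm ||u||
print(u_hat)                     # [0.6 0.8 0. ]
print(np.linalg.norm(u_hat))     # 1.0, as required of a unit vector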
A unit vector is often used to represent directions, such as normal directions.
Unit vectors are often chosen to form the basis of a vector space, and every vector in the space may be written as a linear combination of unit vectors.
Orthogonal coordinates
Cartesian coordinates
Unit vectors may be used to represent the axes of a Cartesian coordinate system. For instance, the standard unit vectors in the direction of the x, y, and z axes of a three dimensional Cartesian coordinate system are

\hat{\mathbf{x}} = (1, 0, 0), \qquad \hat{\mathbf{y}} = (0, 1, 0), \qquad \hat{\mathbf{z}} = (0, 0, 1).
They form a set of mutually orthogonal unit vectors, typically referred to as a standard basis in linear algebra.
They are often denoted using common vector notation (e.g., x in boldface or with an arrow) rather than standard unit vector notation (e.g., x̂). In most contexts it can be assumed that x, y, and z (or their arrow-notation equivalents) are versors of a 3-D Cartesian coordinate system. The notations (î, ĵ, k̂), (x̂1, x̂2, x̂3), (êx, êy, êz), or (ê1, ê2, ê3), with or without hat, are also used, particularly in contexts where i, j, k might lead to confusion with another quantity (for instance with index symbols such as i, j, k, which are used to identify an element of a set or array or sequence of variables).
When a unit vector in space is expressed in Cartesian notation as a linear combination of x, y, z, its three scalar components can be referred to as direction cosines. The value of each component is equal to the cosine of the angle formed by the unit vector with the respective basis vector. This is one of the methods used to describe the orientation (angular position) of a straight line, segment of straight line, oriented axis, or segment of oriented axis (vector).
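In symbols, writing α, β, and γ for the angles the unit vector makes with the x, y, and z axes respectively, the relation described above reads:

\hat{\mathbf{u}} = \cos\alpha\,\hat{\mathbf{x}} + \cos\beta\,\hat{\mathbf{y}} + \cos\gamma\,\hat{\mathbf{z}}, \qquad \cos^2\alpha + \cos^2\beta + \cos^2\gamma = 1,

where the second identity holds because û has length 1.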
Cylindrical coordinates
The three orthogonal unit vectors appropriate to cylindrical symmetry are:
ρ̂, representing the direction along which the distance of the point from the axis of symmetry is measured;
φ̂, representing the direction of the motion that would be observed if the point were rotating counterclockwise about the symmetry axis;
ẑ, representing the direction of the symmetry axis.
They are related to the Cartesian basis x̂, ŷ, ẑ by:

\hat{\boldsymbol{\rho}} = \cos\varphi\,\hat{\mathbf{x}} + \sin\varphi\,\hat{\mathbf{y}}, \qquad \hat{\boldsymbol{\varphi}} = -\sin\varphi\,\hat{\mathbf{x}} + \cos\varphi\,\hat{\mathbf{y}}, \qquad \hat{\mathbf{z}} = \hat{\mathbf{z}}.

The vectors ρ̂ and φ̂ are functions of φ and are not constant in direction. When differentiating or integrating in cylindrical coordinates, these unit vectors themselves must also be operated on. The derivatives with respect to φ are:

\frac{\partial\hat{\boldsymbol{\rho}}}{\partial\varphi} = \hat{\boldsymbol{\varphi}}, \qquad \frac{\partial\hat{\boldsymbol{\varphi}}}{\partial\varphi} = -\hat{\boldsymbol{\rho}}, \qquad \frac{\partial\hat{\mathbf{z}}}{\partial\varphi} = \mathbf{0}.
Spherical coordinates
The unit vectors appropriate to spherical symmetry are: r̂, the direction in which the radial distance from the origin increases; φ̂, the direction in which the angle in the x-y plane counterclockwise from the positive x-axis is increasing; and θ̂, the direction in which the angle from the positive z axis is increasing. To minimize redundancy of representations, the polar angle θ is usually taken to lie between zero and 180 degrees. It is especially important to note the context of any ordered triplet written in spherical coordinates, as the roles of θ̂ and φ̂ are often reversed. Here, the American "physics" convention is used. This leaves the azimuthal angle φ defined the same as in cylindrical coordinates. The Cartesian relations are:

\hat{\mathbf{r}} = \sin\theta\cos\varphi\,\hat{\mathbf{x}} + \sin\theta\sin\varphi\,\hat{\mathbf{y}} + \cos\theta\,\hat{\mathbf{z}}
\hat{\boldsymbol{\theta}} = \cos\theta\cos\varphi\,\hat{\mathbf{x}} + \cos\theta\sin\varphi\,\hat{\mathbf{y}} - \sin\theta\,\hat{\mathbf{z}}
\hat{\boldsymbol{\varphi}} = -\sin\varphi\,\hat{\mathbf{x}} + \cos\varphi\,\hat{\mathbf{y}}
The spherical unit vectors depend on both θ and φ, and hence there are 5 possible non-zero derivatives. For a more complete description, see Jacobian matrix and determinant. The non-zero derivatives are:

\frac{\partial\hat{\mathbf{r}}}{\partial\theta} = \hat{\boldsymbol{\theta}}, \qquad \frac{\partial\hat{\boldsymbol{\theta}}}{\partial\theta} = -\hat{\mathbf{r}}, \qquad \frac{\partial\hat{\mathbf{r}}}{\partial\varphi} = \sin\theta\,\hat{\boldsymbol{\varphi}}, \qquad \frac{\partial\hat{\boldsymbol{\theta}}}{\partial\varphi} = \cos\theta\,\hat{\boldsymbol{\varphi}}, \qquad \frac{\partial\hat{\boldsymbol{\varphi}}}{\partial\varphi} = -\sin\theta\,\hat{\mathbf{r}} - \cos\theta\,\hat{\boldsymbol{\theta}}.
General unit vectors
Common themes of unit vectors occur throughout physics and geometry:
Curvilinear coordinates
In general, a coordinate system may be uniquely specified using a number of linearly independent unit vectors (the actual number being equal to the degrees of freedom of the space). For ordinary 3-space, these vectors may be denoted ê1, ê2, ê3. It is nearly always convenient to define the system to be orthonormal and right-handed:

\hat{\mathbf{e}}_i \cdot \hat{\mathbf{e}}_j = \delta_{ij}, \qquad \hat{\mathbf{e}}_i \times \hat{\mathbf{e}}_j = \sum_k \varepsilon_{ijk}\,\hat{\mathbf{e}}_k,

where δ_ij is the Kronecker delta (which is 1 for i = j, and 0 otherwise) and ε_ijk is the Levi-Civita symbol (which is 1 for permutations ordered as ijk, and −1 for permutations ordered as kji).
Right versor
A unit vector in ℝ³ was called a right versor by W. R. Hamilton, as he developed his quaternions. In fact, he was the originator of the term vector, as every quaternion has a scalar part s and a vector part v. If v is a unit vector in ℝ³, then the square of v in quaternions is −1. Thus by Euler's formula, exp(θv) = cos θ + v sin θ is a versor in the 3-sphere. When θ is a right angle, the versor is a right versor: its scalar part is zero and its vector part v is a unit vector in ℝ³.
Thus the right versors extend the notion of imaginary units found in the complex plane, where the right versors now range over the 2-sphere rather than the pair {i, –i} in the complex plane.
By extension, a right quaternion is a real multiple of a right versor.
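A small numerical check of the claim that the square of a right versor is −1; the Hamilton product is written out directly, and the particular vector part chosen is arbitrary:

import numpy as np

def hamilton_product(q, r):
    # Quaternions represented as (w, x, y, z); standard Hamilton product.
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

v = np.array([0.0, 3.0, 4.0, 0.0])
v = v / np.linalg.norm(v)        # pure unit quaternion, i.e. a right versor
print(hamilton_product(v, v))    # [-1.  0.  0.  0.]: v squared is -1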
See also
Cartesian coordinate system
Coordinate system
Curvilinear coordinates
Four-velocity
Jacobian matrix and determinant
Normal vector
Polar coordinate system
Standard basis
Unit interval
Unit square, cube, circle, sphere, and hyperbola
Vector notation
Vector of ones
Unit matrix
Notes
References
Linear algebra
Elementary mathematics
1 (number)
Vectors (mathematics and physics) | Unit vector | Mathematics | 1,228 |
76,831,877 | https://en.wikipedia.org/wiki/NGC%203978 | NGC 3978 is a large intermediate spiral galaxy with a bar located in the constellation of Ursa Major. It is located 460 million light-years away from the Solar System and was discovered by William Herschel on March 19, 1790, but also observed by John Herschel on April 14, 1831.
NGC 3978 has a luminosity class of II-III and it has a broad H II region which contains regions of ionized hydrogen. In addition, it is categorized as a LINER galaxy by SIMBAD, meaning its nucleus presents an emission spectrum which is characterized by broad lines of weakly ionized atoms.
According to de Vaucouleurs and Corwin, NGC 3978 and NGC 3975 form a galaxy pair with each other.
Supernovae
Two supernovae were discovered in NGC 3978: SN 2003cq and SN 2008I.
SN 2003cq
SN 2003cq was discovered on March 30, 2003, by British astronomer Ron Arbour. It was located 32".0 east and 2".3 south of the nucleus with a magnitude of 17.1. This supernova was Type Ia.
SN 2008I
SN 2008I was discovered by astronomers P. Thrasher, W. Li, and A. V. Filippenko as part of the Lick Observatory Supernova Search (LOSS) on January 2, 2008. It was located 3".7 west and 10".4 north of the nucleus with a magnitude of 19.1. The supernova was Type II, which possibly resulted from the collapse of a massive star.
References
3978
Intermediate spiral galaxies
Ursa Major
037502
037502
06910
Astronomical objects discovered in 1790
Discoveries by William Herschel
2MASS objects
SDSS objects
IRAS catalogue objects | NGC 3978 | Astronomy | 366 |
7,375,449 | https://en.wikipedia.org/wiki/Rough%20number | A k-rough number, as defined by Finch in 2001 and 2003, is a positive integer whose prime factors are all greater than or equal to k. k-roughness has alternately been defined as requiring all prime factors to strictly exceed k.
Examples (after Finch)
Every odd positive integer is 3-rough.
Every positive integer that is congruent to 1 or 5 mod 6 is 5-rough.
Every positive integer is 2-rough, since all its prime factors, being prime numbers, exceed 1.
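A minimal sketch of testing k-roughness by trial division, using the "greater than or equal to k" definition given above (the function names are illustrative):

def smallest_prime_factor(n):
    # Smallest prime factor of n >= 2, by trial division.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def is_k_rough(n, k):
    # True if every prime factor of n is >= k; 1 is vacuously k-rough.
    return n == 1 or smallest_prime_factor(n) >= k

print([n for n in range(1, 30) if is_k_rough(n, 5)])
# [1, 5, 7, 11, 13, 17, 19, 23, 25, 29] -- each congruent to 1 or 5 mod 6, as noted above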
See also
Buchstab function, used to count rough numbers
Smooth number
Notes
References
Finch's definition from Number Theory Archives
"Divisibility, Smoothness and Cryptographic Applications", D. Naccache and I. E. Shparlinski, pp. 115–173 in Algebraic Aspects of Digital Communications, eds. Tanush Shaska and Engjell Hasimaj, IOS Press, 2009, .
The On-Line Encyclopedia of Integer Sequences (OEIS)
lists p-rough numbers for small p:
2-rough numbers: A000027
3-rough numbers: A005408
5-rough numbers: A007310
7-rough numbers: A007775
11-rough numbers: A008364
13-rough numbers: A008365
17-rough numbers: A008366
19-rough numbers: A166061
23-rough numbers: A166063
Integer sequences | Rough number | Mathematics | 302 |
5,839,673 | https://en.wikipedia.org/wiki/Quantitative%20analysis%20%28chemistry%29 | In analytical chemistry, quantitative analysis is the determination of the absolute or relative abundance (often expressed as a concentration) of one, several or all particular substance(s) present in a sample. It relates to the determination of percentage of constituents in any given sample.
Methods
Once the presence of certain substances in a sample is known, the study of their absolute or relative abundance can help in determining specific properties. Knowing the composition of a sample is very important, and several methods have been developed to make it possible, such as gravimetric and volumetric analysis. Gravimetric analysis yields more accurate data about the composition of a sample than volumetric analysis but also takes more time to perform in the laboratory. Volumetric analysis, on the other hand, does not take as much time and can produce satisfactory results. Volumetric analysis can be simply a titration based on a neutralization reaction, but it can also be based on a precipitation or a complex-forming reaction, or on a redox reaction. Each method in quantitative analysis has a general specification: in neutralization reactions, for example, the reaction occurs between an acid and a base, which yields a salt and water, hence the name neutralization. In precipitation reactions the standard solution is in most cases silver nitrate, which is used as a reagent to react with the ions present in the sample and to form a highly insoluble precipitate; precipitation methods are often simply called argentometry. The situation is the same in the two other methods: complex-forming titration is a reaction that occurs between metal ions and a standard solution, which is in most cases EDTA (ethylenediaminetetraacetic acid), while redox titration is carried out between an oxidizing agent and a reducing agent. There are further methods, such as the Liebig, Dumas, Kjeldahl, and Carius methods, for the estimation of organic compounds.
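As a concrete illustration of the volumetric approach, the arithmetic of a simple neutralization titration with 1:1 stoichiometry can be sketched as follows; the concentrations and volumes are invented for the example:

# At the endpoint, moles of acid = moles of base for 1:1 stoichiometry,
# so c_acid * V_acid = c_base * V_base.
c_base = 0.100    # mol/L, standard NaOH solution (illustrative)
v_base = 0.0250   # L of titrant dispensed at the endpoint
v_acid = 0.0200   # L of the unknown acid sample

c_acid = (c_base * v_base) / v_acid
print(f"unknown acid concentration: {c_acid:.3f} mol/L")   # 0.125 mol/L

Gravimetric analysis would instead weigh a precipitate of known composition, trading speed for accuracy as noted above.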
For example, quantitative analysis performed by mass spectrometry on biological samples can determine, by the relative abundance ratio of specific proteins, indications of certain diseases, like cancer.
Quantitative vs. qualitative
The term "quantitative analysis" is often used in comparison (or contrast) with "qualitative analysis", which seeks information about the identity or form of substance present. For instance, a chemist might be given an unknown solid sample. They will use "qualitative" techniques (perhaps NMR or IR spectroscopy) to identify the compounds present, and then quantitative techniques to determine the amount of each compound in the sample. Careful procedures for recognizing the presence of different metal ions have been developed, although they have largely been replaced by modern instruments; these are collectively known as qualitative inorganic analysis. Similar tests for identifying organic compounds (by testing for different functional groups) are also known.
Many techniques can be used for either qualitative or quantitative measurements. For instance, suppose an indicator solution changes color in the presence of a metal ion. It could be used as a qualitative test: does the indicator solution change color when a drop of sample is added? It could also be used as a quantitative test, by studying the color of the indicator solution with different concentrations of the metal ion. (This would probably be done using ultraviolet-visible spectroscopy.)
See also
Microanalysis
Isotope dilution
References
Analytical chemistry | Quantitative analysis (chemistry) | Chemistry | 697 |
54,842,715 | https://en.wikipedia.org/wiki/Interoception | Interoception is the collection of senses providing information to the organism about the internal state of the body. This can be both conscious and subconscious. It encompasses the brain's process of integrating signals relayed from the body into specific subregions—like the brainstem, thalamus, insula, somatosensory, and anterior cingulate cortex—allowing for a nuanced representation of the physiological state of the body. This is important for maintaining homeostatic conditions in the body and, potentially, facilitating self-awareness.
Interoceptive signals are projected to the brain via a diversity of neural pathways, in particular from the lamina I of the spinal cord along the spinothalamic pathway and through the projections of the solitary nucleus, that allow for the sensory processing and prediction of internal bodily states. Misrepresentations of internal states, or a disconnect between the body's signals and the brain's interpretation and prediction of those signals, have been suggested to underlie conditions such as anxiety, depression, panic disorder, anorexia nervosa, bulimia nervosa, posttraumatic stress disorder (PTSD), obsessive compulsive disorder (OCD), attention deficit hyperactivity disorder (ADHD), alexithymia, somatic symptom disorder, and illness anxiety disorder.
The contemporary definition of interoception is not synonymous with the term "visceroception". Visceroception refers to the perception of bodily signals arising specifically from the viscera: the heart, lungs, stomach, and bladder, along with other internal organs in the trunk of the body. This does not include organs like the brain and skin. Interoception encompasses visceral signaling, but more broadly relates to all physiological tissues that relay a signal to the central nervous system about the current state of the body. Interoceptive signals are transmitted to the brain via multiple pathways including the lamina I spinothalamic pathway, the classical viscerosensory pathway, the vagus nerve and glossopharyngeal nerve, chemosensory pathways in the blood, and somatosensory pathways from the skin.
Interoceptive signals arise from many different physiological systems of the body. The most commonly studied system is cardiovascular interoception which is typically measured by directing attention towards the sensation of the heartbeat during various tasks. Other physiological systems integral to interoceptive processing include the respiratory system, gastrointestinal and genitourinary systems, nociceptive system, thermoregulatory system, endocrine and immune systems. Soft cutaneous touch is another sensory signal often included within the interoceptive processing system.
History and etymology
Early to mid-1900s
The concept of interoception was introduced in 1906 by the Nobel Laureate Sir Charles S. Sherrington. He did not use the noun interoception, but did describe as interoceptive those receptors that are within the viscera—what are today called "visceroceptive"—and thus excluded all other receptors and information from the body, which he grouped as either exteroceptive or proprioceptive. In Sherrington's model, exteroceptive receptors were those that received information from outward stimuli, like light, touch, sound, and odor. He classified temperature and nociception as exteroceptive sensations as well, though these are now regarded as having interoceptive qualities. He further divided the internal milieu of the body by its somatic and autonomic functions. And proprioceptors were those found in skeletal tissue that control voluntary movement. For him, interoceptors (a term which has lost prevalence in modern literature) were thus confined to visceral involuntary smooth muscle (e.g. surrounding blood vessels).
Further work on interoceptive processing after Sherrington was delayed for many years owing to the influential claim by John Newport Langley that the autonomic nervous system used only efferent (brain-to-body) signaling to implement its functions.
By the 1950s and 1960s, many investigations of interoceptive processing had been conducted, and once it had become apparent that interoceptive receptors are present in many tissues of the body other researchers began to investigate afferent body-to-brain signals, mainly by conducting animal experiments to see if interoceptive conditioning was possible. Using principles of Pavlovian conditioning, different physiological systems in dogs were perturbed to elicit a conditioned response to food.
For example, in one experiment, dogs' pelvises were distended using infusions of solution when food was presented to them. After rounds of pairing the two, salivation occurred without presenting food once the pelvis was distended. Interoceptive conditioning studies like this illustrated that interoceptive sensations may be important for learned behavior and emotion.
Mid-1900s to 2000
The increased interest in interoception during the late 1950s and the 1960s, as evidenced by the number of papers published, has been referred to as the "biofeedback blip". This was a phase during which many researchers examined humans' ability to gain control over autonomic functions as a method of treatment for various conditions.
Interoception did not gain widespread popularity within the scientific community until the mid- to late-twentieth century. During that period, some researchers chose to use the terms visceroceptor and interoceptor interchangeably, in line with Sherrington's usage, others combined proprioceptive and visceroceptive information into one category—interoception—based on physiological data about the lack of differences in nerve impulses, and still others proposed that interoception includes more than just endogenous (internal) stimuli. Exactly which sensory signals could or should be classified as interoceptive remains the subject of ongoing debate.
During the 1980s, psychophysiologists began to extensively examine cardiovascular interoception, introducing several different experimental tasks for studying the perception of the heartbeat: heartbeat counting, heartbeat tapping, and heartbeat detection. Psychiatrists also began to look at the effects of pharmacological stimulation on the symptoms of panic disorder. All of this further increased researchers' interest in interoception, including the development of theoretical models of the integration of interoceptive information within the body over time.
2000 and on
The twenty-first century has seen a tremendous increase in publications on the topic of interoception, and to a recognition of its multifaceted nature . This has led to the emergence of different ideas about interoception. One contemporary definition widens the concept to encompass "the skin and all that is underneath the skin" and the perception and function of bodily activity to more fully understand psychosomatic processes. In a similar vein, neuroanatomists hoping to find the anatomical basis of interoceptive functioning have stated the existence of a homeostatic pathway from the body to the brain that represents "the physiological status of all tissues in the body," and that this mapping onto the brain provides an individual with subjective feeling states that are critical for human emotion and self-awareness.
For example, interoception is the fundament of the modern view on allostasis and allostatic load. The regulatory model of allostasis claims that the brain's primary role as an organ is the predictive regulation of internal sensations. Predictive regulation is the brain's ability to anticipate needs and prepare to fulfill them before they arise. In this model, the brain is responsible for efficient regulation of its internal milieu.
Interoception is sometimes generally referred to as "the perception of internal body states" although there are many interoceptive processes that are not consciously perceived. Importantly, interoception is made possible through a process of "integrating the information coming from inside the body into the central nervous system". This definition deviates from Sherrington's original proposition, but exemplifies the dynamic and widening breadth of interoception as a concept in modern literature.
Facets of interoception
Although interoception as a term has more recently gained increased popularity, different aspects of it have been studied since the 1950s. These include the features of attention, detection, magnitude, discrimination, accuracy, sensibility, and self-reporting. Despite not using the word interoception specifically, many publications in the physiology and medical fields have focused on understanding interoceptive information processing in different organ systems. Attention describes the ability to observe sensations within the body; it can be directed voluntarily in a "top down" manner, or it can be attracted involuntarily in a "bottom up" manner.
Detection reflects the presence or absence of a conscious report of interoceptive stimuli, like a heartbeat or growling stomach. Magnitude is the intensity of the stimulus, or how strongly the stimuli is felt. Discrimination describes the ability to localize interoceptive stimuli in the body to specific organs and differentiate them from other bodily stimuli that also occur, like distinguishing between a heart which is beating hard from an upset stomach. Accuracy (or sensitivity) refers to how precisely and correctly an individual can monitor specific interoceptive processes.
Self-reporting is itself multifaceted. It describes the ability to reflect on interoceptive experiences occurring over different periods of time, make judgments about them, and describe them. Brain-body interactions can also be studied using neuroimaging techniques to map functional interactions between brain and peripheral signals. Although all of these components of interoception have been studied since the mid-twentieth century, they have not been brought together under the umbrella term "interoception" until more recently.
The term "interoceptive awareness" is also frequently used to encompass any (or all) of the different interoception features that are accessible to conscious self-report. This multifaceted approach offers a unified way of looking at interoceptive functioning and its different features, it clarifies the definition of interoception itself, and it informs structured ways of assessing interoceptive experiences in an individual.
Interoceptive physiology
Cardiovascular system
Cardiac interoception has been widely studied as a method of evaluating interoceptive sensation. This is done using different tasks including heartbeat counting, heartbeat tapping, heartbeat detection and heartbeat attention tasks. Heartbeat counting tasks ask participants to count the number of felt heartbeats during short time periods. Their reported count is then compared with the actual count obtained with an electrocardiogram. This measures the participant's attention to his or her own heartbeat, the accuracy with which that is perceived, and the ability of the participant to report that measurement. However, results can be influenced by the participant's preexisting knowledge of his or her heart rate and an insensitivity to heart rate change.
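One commonly used convention scores the heartbeat counting task by comparing reported and recorded counts per trial; a minimal sketch, with invented trial data (exact scoring conventions vary between studies):

# Interoceptive accuracy, one common convention:
# mean over trials of 1 - |recorded - reported| / recorded.
recorded = [36, 45, 62]   # ECG beat counts per trial (illustrative)
reported = [30, 40, 55]   # participant's counted beats per trial

scores = [1 - abs(r - c) / r for r, c in zip(recorded, reported)]
accuracy = sum(scores) / len(scores)
print(f"interoceptive accuracy: {accuracy:.2f}")   # 1.00 would be perfect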
Heartbeat detection tasks work by providing a participant with a musical tone which is played simultaneously or non-simultaneously with one's heartbeat, asking the participant to report whether the tones are simultaneous with it or not. Heartbeat detection is commonly used because of its ability to discern an individual's performance above chance levels, identifying so-called "good detectors". However, such detection rates among participants for this task are usually only 35%. It also measures the participant's attention, detection, discrimination, accuracy and self-report of the interoceptive process.
Heartbeat attention tasks are the most minimalistic, and involve simply the top-down direction of attention towards an interoceptive sensation such as the heartbeat, breath, or stomach. Perceptions of heartbeat sensations usually occur during times of homeostatic perturbation, such as when the state of the body changes under external or internal influences such as physical exertion or elevated arousal states, e.g., riding a roller coaster, watching a scary movie, public speaking anxiety, or having a panic attack. For this reason, cardiac interoception is also sometimes studied by inducing perturbations of bodily state. This can be done pharmacologically using adrenaline-like drugs, such as isoproterenol, which mimic activation of the sympathetic nervous system, resulting in increased heart rate and respiration rate, similar to the "fight-or-flight" response. This approach provides a physiological basis for understanding psychiatric and neurological disorders that are characterized by heightened sympathetic nervous system activity.
Respiratory and chemoreceptive system
Respiratory perception can differ from other interoceptive physiological signals because of an individual's ability to exert voluntary control over the system with controlled breathing or breathing exercises. This system is often measured using restrictive breathing loads and/or inhalation, which are designed to mimic labored breathing sensations. Dyspnea, or difficulty breathing, is a commonly felt sensation associated with panic attacks; however, because breathing is under partial voluntary control, this domain of interoception usually requires much more elaborate experimental controls to quantify in comparison to cardiac interoception.
Gastrointestinal and genitourinary systems
Common interoceptive sensations related to the gastrointestinal and genitourinary systems are hunger and fullness. These are homeostatic signals that tell an individual when to eat and when to stop eating. The dorsal mid-insula appears to be integral in taste processing during gastrointestinal interoceptive attention tasks. Rectal and bladder distensions are used as a method to perturb the homeostatic environment of the gastrointestinal and genitourinary systems, using placement of balloon catheters which can be inflated to achieve different stimulus intensities. Associative fear learning paradigms have been used to study how innocuous signals might lead to abnormal states of gastrointestinal hypersensitivity and anxiety. Biofeedback therapy has been used for individuals with impaired gastrointestinal interoception, showing positive outcomes for some patients.
Nociceptive system
Nociception refers to the receiving and processing of pain inducing stimuli by the central nervous system. Functional brain imaging studies during painful stimulation of the skin with heated probes, during mechanical compression, and electric shock have suggested that the insular cortex is prominently activated during pain processing. Thus while pain was once thought of as an exteroceptive sensation, based on functional imaging and anatomical evidence it is now understood that it has an interoceptive component.
Thermoregulatory system
Temperature and pain are thought to be represented as "feelings" of coolness and warmness and pleasantness or unpleasantness in the brain. These sensory and affective characteristics of thermoregulation may motivate certain behavioral responses depending on the state of the body (for example, moving away from a source of heat to a cooler space). Such perturbations in the internal homeostatic environment of an organism are thought to be key aspects of a motivational process giving rise to emotional states, and have been proposed to be represented principally by the insular cortex as feelings. These feelings then influence drives when the anterior cingulate cortex is activated.
Endocrine and immune systems
The endocrine and immune systems are necessary body systems that aid in allostasis and homeostatic control. Imbalances in these systems, along with other genetic and social factors, may be linked to interoceptive dysregulation in depression. These increased allostatic changes may cause a hyperawareness of interoceptive signaling and a hypo-awareness of exteroceptive signaling in depression patients.
Affective touch
Affective touch refers to the stimulation of slow, unmyelinated C tactile afferents. This is accompanied by a sense of pleasantness, and has been likened to other interoceptive modalities like thermoregulation and nociception because of the similarities in anatomical function. Soft touch activates the insula rather than the somatosensory cortex, indicating that it has an affective importance absent in Aβ fibers. Since soft touch utilizes a separate pathway, it may have a social relevancy, allowing the body to separate the "noise" of outward stimuli from stimuli that evokes an affective feeling.
Neuroanatomical pathways
Multiple neural pathways relay information integral to interoceptive processing from the body to the brain. These include the lamina I spinothalamic pathway, the visceroceptive pathway, and the somatosensory pathway.
Lamina I spinothalamic pathway
The lamina I spinothalamic pathway is commonly known for carrying information to the brain about temperature and pain, but it has been suggested to more broadly relay all information about the homeostatic condition of the body.
Afferent signals enter the spinal cord at the superficial layer of the dorsal horn
Second order neurons cross the midline of the spinal cord and project up the opposite side, synapsing on the nucleus of the solitary tract, parabrachial nucleus, and periaqueductal gray in the brainstem
Third order neurons in the ventromedial posterior nucleus (VMpo) of the thalamus relay the signal to the dorsal posterior insula
The signal is re-represented on either the right or left side
Visceroceptive pathway
The visceroceptive pathway relays information about visceral organs to the brain.
Afferent signals from the vagus nerve enter the brainstem making synaptic connections with the nucleus of the solitary tract and parabrachial nucleus
The signal is relayed to the ventromedial basal nucleus of the thalamus
Third order neurons send the signal to the posterior insula
Somatosensory pathway
The somatosensory pathway relays information about proprioception and discriminative touch to the brain through different receptors in the skin.
Afferent signals from the mechanoreceptors or proprioceptors enter the spinal cord at the dorsal root ganglia
Second order neurons cross the midline in the medulla, projecting up the opposite side and synapse on third order neurons in the ventral posterior lateral nucleus or ventromedial posterior nucleus of the thalamus
Third order neurons in the thalamus relay the signal to the primary somatosensory cortex in the brain
Cortical processing of interoception
Thalamus
The thalamus receives signals from sympathetic and parasympathetic afferents during interoceptive processing. The ventromedial posterior nucleus (VMpo) is a subregion of the thalamus which receives sympathetic information from lamina I spinothalamic neurons. The human VMpo is much larger than that of primates and sub-primates and is important for processing of nociceptive, thermoregulatory, and visceral sensations. The ventromedial basal nucleus (VMb) receives parasympathetic information from visceral and gustatory systems.
Insular cortex
The insula is critically involved in the processing, integration, and cortical representation of visceral and interoceptive information. Lamina I spinothalamic and vagal afferents project via the brainstem and thalamus to the posterior and mid dorsal insula respectively. From there, information travels to the posterior and mid-insula, which combines visceral and somatosensory information.
The insula is also activated during a variety of exteroceptive and affective tasks. The insula is considered to be a "hub" region because it has an extremely high number of connections with other brain areas, suggesting it may be important for an integration of lower-level physiological information and salience.
Anterior insular cortex
The anterior insular cortex (AIC) is involved in the representation of "cognitive feelings" which arise from the moment-to-moment integration of homeostatic information from the body. These feelings engender self-awareness by creating a sentient being (someone able to feel and perceive) aware of bodily and cognitive processing. Essentially all subjective bodily feelings are associated with activation in the anterior insula: interoceptive feelings come to awareness in the anterior insula.
PET studies of cool feelings revealed that activation that correlated with the subjective ratings of cool temperatures on the hand appeared first in the mid-insula, and then was strongest in the right anterior insula and orbitofrontal cortex. Feelings from the body of heat pain or C-tactile affective touch also produce activation in the posterior, mid-, and anterior insula, and the strongest activation associated with these subjective feelings is in the anterior insula.
One study of heat pain explicitly showed that activation in the dorsal posterior insula correlated with objective painful heat intensity, whereas activation in the right anterior insula correlated with subjective pain intensity ratings. Similarly, subjective ratings of flavors and tastes correlate strongly with activation in the left anterior insula. The available evidence supports the interpretation that feelings from the body are engendered in the anterior insula.
Cytoarchitecture and granulation
The insula contains three major subregions defined by the presence or absence of a granule cell layer: granular, dysgranular (slightly granulated), and agranular. Each of these portions of the insular cortex is important for a different level of functional connectivity. Information from the thalamus is projected to all three regions. Those with increased granulation are considered to be capable of receiving sensory input.
Anterior cingulate cortex
The anterior cingulate cortex (ACC) plays a significant role in motivation and the creation of emotion. An emotion can be seen as comprising both a feeling and a motivation based on that feeling. According to one view, the "feeling" is represented in the insula, while the "motivation" is represented in the ACC. Many interoceptive tasks activate the insula and ACC together, specifically tasks that elicit strong aversive feeling states like pain.
Somatosensory cortex
The somatosensory cortex provides an alternative pathway for sensing interoceptive stimuli. Although not part of the conventional pathway for interoceptive awareness, skin afferents which project to the primary and secondary somatosensory cortices provide the brain with information regarding bodily state. This area of the brain is commonly engaged by gastrointestinal distension and nociceptive stimulation, but it likely plays a role in representing other interoceptive sensations as well.
In one study, a patient with bilateral insula and ACC damage was given isoproterenol as a method of exciting the cardiovascular system. Despite damage to putative interoceptive areas of the brain, the patient was able to perceive his heartbeat with similar accuracy compared to healthy individuals; however, once lidocaine was applied to the patient's chest over the region of maximum cardiac sensation and the test was run again, the patient did not sense any change in heartbeat whatsoever. This suggested that somatosensory information from afferents innervating the skin outside of the heart may provide information to the brain about the heart's pounding through the somatosensory cortex.
Interoception and emotion
The relationship between interoception and emotional experience is an intimate one. In the late 19th century, Charles Darwin noted and discussed the involvement of sensations from the viscera by describing similarities between human and animal reactions to fear in his book, The Expression of Emotions in Man and Animals. Later, William James and Carl Lange developed the James-Lange theory of emotion, which states that bodily sensations provide the critical basis for emotional experience.
The somatic marker hypothesis, proposed by Antonio Damasio, expands upon the James-Lange theory and posits that decisions and the ensuing behaviors are optimally guided by physiological patterns of interoceptive and emotional information. Ensuing models focusing on the neurobiology of feeling states emphasized that the brain's mappings of different physiological body states are the critical ingredients for emotional experience and consciousness. In another model, Bud Craig argues that the intertwining of interoceptive and homeostatic processes is responsible for initiating and maintaining motivational states and engendering human self-awareness.
Interoception and mental illness
Disturbances of interoception occur prominently and frequently in psychiatric disorders. These symptom fluctuations are often observed during the most severe expression of dysfunction, and they figure prominently in diagnostic classification of several psychiatric disorders. A few typical examples are reviewed next.
Panic disorder
Palpitations and dyspnea are hallmarks of panic attacks. Studies have shown that panic disorder patients report a heightened experience of interoceptive sensations, but these studies have failed to clarify whether this is simply due to their systematic bias toward describing such feelings. However, other studies have shown that panic disorder patients feel heartbeat sensations more intensely when the state of the body is perturbed by pharmacological agents, suggesting they exhibit heightened sensitivity to experiencing interoceptive sensations.
Generalized anxiety disorder
Patients with generalized anxiety disorder (GAD) frequently report being bothered by interoceptive feelings of muscle tension, headaches, fatigue, gastrointestinal complaints, and pain.
Posttraumatic stress disorder
Functional neuroimaging studies have shown that posttraumatic stress disorder (PTSD) patients exhibit a decreased activation in the right anterior insula, a region of the brain that is largely responsible for identifying the mismatch between cognitive and interoceptive states. Further, because PTSD patients have shown decreased activation within many nodes of the lamina I homeostatic pathway—a pathway through which the thalamus sends interoceptive information to the anterior insula and anterior cingulate—it has been suggested that PTSD patients experience reduced interoceptive awareness. Approaches such as somatic experiencing use an interoceptive approach to treat PTSD.
Anxiety disorders
The broad consensus of studies investigating the link between interoceptive awareness and anxiety disorders is that people with anxiety disorders experience heightened awareness of and accuracy in identifying interoceptive processes. Functional imaging studies provide evidence that people with anxiety disorders experience heightened interoceptive accuracy, suggested by hyperactivation in the anterior cingulate cortex, a region of the brain associated with interoception, in several different kinds of anxiety disorders. A large-scale study has also suggested that the insula is abnormal across anxiety disorders in general. Other studies have found that interoceptive accuracy is increased in these patients, as evidenced by their superior ability in heartbeat detection tasks in comparison to healthy controls.
Anorexia nervosa
Anorexia nervosa (AN) has been associated with interoceptive disturbances. Patients with AN often develop insensitivity to interoceptive cues of hunger, and yet are highly anxious and report disturbed interoceptive experiences, both inside and out. While AN patients concentrate on distorted perceptions of their body exterior in fear of weight gain, they also report altered physical states within their bodies, such as indistinct feelings of fullness, or an inability to distinguish emotional states from bodily sensations in general (called alexithymia).
Bulimia nervosa
Studies suggest that patients with and recovered from bulimia nervosa (BN) exhibit abnormal interoceptive sensory processing and reduced interoceptive awareness under resting physiological conditions. Specifically, patients with BN report reduced sensitivity to many other kinds of internal and external sensations, exhibiting increased thresholds to heat pain compared to healthy subjects and an increased gastric capacity. Neuroimaging literature suggests a pattern of abnormal interoceptive processing in patients with BN based on increased activity and volume in the insula and anterior cingulate cortex—regions associated with interoception and taste processing—when looking at food.
Major depressive disorder
Major depressive disorder (MDD) has been theoretically linked to interoceptive dysfunction. Studies have shown that women with MDD are less accurate on heartbeat counting tasks than are men with MDD and that, in general, patients with MDD are less accurate at counting heartbeats than are patients with panic or anxiety disorders. However, patients with MDD do not always exhibit reduced cardiac interoceptive accuracy; depressed patients experiencing high levels of anxiety will actually be more accurate on heartbeat detection tasks than depressed patients with lower levels of anxiety.
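Heartbeat counting performance of the kind described above is commonly summarized with a Schandry-style accuracy score, which compares the beats a participant silently counts against the beats actually recorded. The following Python sketch is a minimal illustration of that scoring scheme only; the function name and the example data are hypothetical and are not taken from any of the studies discussed here.

def heartbeat_counting_accuracy(recorded, counted):
    # Mean heartbeat-perception score across trials: 1 = perfect, lower = less accurate.
    scores = [1 - abs(r - c) / r for r, c in zip(recorded, counted)]
    return sum(scores) / len(scores)

# Hypothetical trials: beats actually recorded vs. beats the participant counted.
print(heartbeat_counting_accuracy([70, 65, 80], [55, 60, 66]))  # ~0.84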
Somatic symptom disorders
Patients with somatic symptom disorders score lower on heartbeat detection tasks than healthy controls, suggesting that interoceptive accuracy is poor in psychosomatic disorders. It has also been found that patients with psychosomatic disorders who are anxious or stressed report physical symptom discomfort at lower heart rates during exercise treadmill tests, implying poorer interoceptive distress tolerance in somatic symptom disorders with comorbid psychiatric conditions.
Obsessive compulsive disorder
Results from a study investigating the relationship between obsessive compulsive disorder (OCD) and internal body signals found that patients with OCD were more accurate on a heartbeat perception task than both healthy controls and anxiety patients, suggesting heightened interoceptive awareness in OCD.
Autism spectrum disorder
Patients with autism spectrum disorder (ASD) may have poorer interoceptive awareness than control subjects. It is hypothesized that this decrease in interoceptive accuracy is due to alexithymia, which is often associated with ASD. However, it has also been found that children with ASD actually show greater interoceptive sensitivity than control subjects when measured over a long period of time. Further investigation into the relationship between interoception and ASD is needed in order to fully understand the interoceptive aspect of the disorder.
Current theories of interoceptive processing
Embodied predictive interoception coding (EPIC)
The EPIC model proposes a method of understanding the brain's response to stimuli contrary to the classic "stimulus-response" model. The classical view of information processing is that a peripheral stimulus provides information to the central nervous system, it is processed in the brain, and a response is elicited. The EPIC model deviates from this and proposes that the brain is involved in a process of active inference, that is, assiduously making predictions about situations based on previous experiences. These predictions, when coupled with incoming sensory signals, allow the brain to compute a prediction error. Interoceptive prediction errors signal the occurrence of discrepancies within the body, which the brain attempts to minimize. This can be done by modifying the predictions through brain-related pathways, altering the body position/location in order to better align incoming sensory signals with the prediction, or altering the brain's method of receiving incoming stimuli. Interoceptive prediction error signals are a key component of many theories of interoceptive dysfunction in physical and mental health.
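As a rough illustration of the prediction-error idea described above, the following Python sketch iteratively updates a scalar interoceptive prediction toward incoming sensory signals. It is a didactic caricature under simplified assumptions (a single signal and a fixed learning rate), not a model drawn from the EPIC literature; all names and numbers are illustrative.

# Toy predictive-coding loop: the "brain" maintains a prediction of an
# interoceptive signal (e.g., heart rate) and acts to minimize prediction error.
prediction = 60.0                        # prior expectation (beats per minute)
learning_rate = 0.3                      # how strongly errors update the prediction
signals = [60, 61, 75, 78, 80, 79]       # hypothetical incoming sensory input

for signal in signals:
    error = signal - prediction          # interoceptive prediction error
    prediction += learning_rate * error  # update the prediction to reduce the error
    print(f"signal={signal}  error={error:+.1f}  new prediction={prediction:.1f}")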
Research and treatments
As attention on interoception increases among the scientific community, new research methods and treatment tactics are beginning to emerge. Because there currently is no clear consensus as to what exactly interoception is, or the best ways in which to measure it, most research and questionnaires on the subject only measure individual facets of interoception. The convergence and interrelatedness of the questionnaire items between the constructs have been found to be low.
Ongoing research in interoception has shown the importance of perturbing interoceptive systems. This allows researchers the ability to document the effects of non-baseline states, which occur during times of panic or anxiety. It also provides the participant the ability to gauge the intensity of sensations within the body. This can be done through pharmacological interventions, balloon distensions, or respiratory breathing loads depending on the interoceptive system of interest.
Another research method used to study interoception is specialized flotation environments. Floating removes external stimuli so that individuals can more easily focus on the interoceptive sensations within their bodies. One idea with floating is that over many float sessions, patients with different kinds of disorders may learn to become more attuned or tolerant of their interoceptive sensations not only in the float tank but also in their everyday lives.
Whole body hyperthermia may provide a new treatment technique for major depressive disorder. It is thought that reducing one of the bodily symptoms of depression, which is increased inflammation, using whole body hyperthermia will also reduce depressive feelings represented in the brain. In theory, these techniques will help patients better attune themselves to their interoceptive sensations, allowing them a better understanding of what occurs in their bodies.
Acupuncture is an alternative treatment for many people with anxiety and depression. Typically, it is self-prescribed by patients; however, results are inconclusive on its ability to manage symptoms of depression. Recently, massage therapy has been shown to reduce symptoms of generalized anxiety disorder.
Meditation and mindfulness have been investigated as possible techniques to enhance interoceptive awareness based on their tendency to redirect focus within oneself. Studies show that meditation and mindfulness practices promote attention to interoceptive sensations; indeed, many of the benefits of mindfulness may be attributable to an increase in interoception. However, interoceptive awareness and accuracy in specific domains, such as breath or body monitoring, may be relatively independent of a broad improvement in interoceptive accuracy. Meditation and mindfulness practices have been shown to modify the insula, which is considered central to our interoceptive abilities. Attention is also being given to the subtle body and its function as a CNS map in traditional Tibetan and Indian medicine.
Although a universal definition of interoception has not been reached, research on interoception and psychiatric disorders has shown a link between interoceptive processing and mental disorders. It has been proposed that exposure therapy, a commonly employed treatment for anxiety disorders, may provide a basis for a model of interoceptive exposure therapy that could be incorporated into treatment plans of different psychiatric disorders. One proposal states that multiple interoceptive challenges assessing different physiological systems could provide diagnosticians with the ability to create an "interoceptive profile" for a specific individual, creating a patient-specific treatment plan.
See also
Inflammatory reflex
Proprioception
References
External links
Interoception Library at Center for Open Science
Laureate Institute for Brain Research Interoception Summit 2016 on YouTube
Human body
Human physiology
Human anatomy | Interoception | Physics | 6,987 |
1,220,793 | https://en.wikipedia.org/wiki/Neodymium%28III%29%20chloride | Neodymium(III) chloride or neodymium trichloride is a chemical compound of neodymium and chlorine with the formula NdCl3. This anhydrous compound is a mauve-colored solid that rapidly absorbs water on exposure to air to form a purple-colored hexahydrate, NdCl3·6H2O. Neodymium(III) chloride is produced from minerals monazite and bastnäsite using a complex multistage extraction process. The chloride has several important applications as an intermediate chemical for production of neodymium metal and neodymium-based lasers and optical fibers. Other applications include a catalyst in organic synthesis and in decomposition of waste water contamination, corrosion protection of aluminium and its alloys, and fluorescent labeling of organic molecules (DNA).
Appearance
NdCl3 is a mauve-colored hygroscopic solid whose color changes to purple upon absorption of atmospheric water. The resulting hydrate, like many other neodymium salts, has the interesting property that it appears different colors under fluorescent light; in the chloride's case, light yellow (see picture).
Structure
Solid
The anhydrous NdCl3 features Nd in a nine-coordinate tricapped trigonal prismatic geometry and crystallizes with the UCl3 structure. This hexagonal structure is common for many lanthanide and actinide halides, such as LaCl3, LaBr3, SmCl3, PrCl3, EuCl3, CeCl3, CeBr3, GdCl3, AmCl3 and TbCl3, but not for YbCl3 and LuCl3.
Solution
The structure of neodymium(III) chloride in solution crucially depends on the solvent: in water, the major species are Nd(H2O)83+, and this situation is common for most rare-earth chlorides and bromides. In methanol, the species are NdCl2(CH3OH)6+ and in hydrochloric acid NdCl(H2O)72+. The coordination number of neodymium is 8 in all cases, but the ligand structure differs.
Properties
NdCl3 is a soft paramagnetic solid, which turns ferromagnetic at the very low temperature of 0.5 K. Its electrical conductivity is about 240 S/m and its heat capacity is ~100 J/(mol·K). NdCl3 is readily soluble in water and ethanol, but not in chloroform or ether. Reduction of NdCl3 with Nd metal at temperatures above 650 °C yields NdCl2:
2 NdCl3 + Nd → 3 NdCl2
Heating of NdCl3 with water vapors or silica produces neodymium oxochloride:
NdCl3 + H2O → NdOCl + 2 HCl
2 NdCl3 + SiO2 → 2 NdOCl + SiCl4
Reacting NdCl3 with hydrogen sulfide at about 1100 °C produces neodymium sulfide:
2 NdCl3 + 3 H2S → Nd2S3 + 6 HCl
Reactions with ammonia and phosphine at high temperatures yield neodymium nitride and phosphide, respectively:
NdCl3 + NH3 → NdN + 3 HCl
NdCl3 + PH3 → NdP + 3 HCl
Whereas the addition of hydrofluoric acid produces neodymium fluoride:
NdCl3 + 3 HF → NdF3 + 3 HCl
Preparation
NdCl3 is produced from the minerals monazite and bastnäsite. The synthesis is complex because of the low abundance of neodymium in the Earth's crust (38 mg/kg) and because of the difficulty of separating neodymium from other lanthanides. The process is, however, easier for neodymium than for other lanthanides because of its relatively high content in the mineral – up to 16% by weight, the third highest after cerium and lanthanum. Many variants of the synthesis exist; one can be simplified as follows:
The crushed mineral is treated with hot concentrated sulfuric acid to produce water-soluble sulfates of rare earths. The acidic filtrates are partially neutralized with sodium hydroxide to pH 3–4. Thorium precipitates out of solution as hydroxide and is removed. After that, the solution is treated with ammonium oxalate to convert the rare earths into their insoluble oxalates. The oxalates are converted to oxides by annealing. The oxides are dissolved in nitric acid, which excludes one of the main components, cerium, whose oxide is insoluble in HNO3. Neodymium oxide is separated from the other rare-earth oxides by ion exchange. In this process, rare-earth ions are adsorbed onto a suitable resin by ion exchange with hydrogen, ammonium or cupric ions present in the resin. The rare-earth ions are then selectively washed out by a suitable complexing agent, such as ammonium citrate or nitrilotriacetate.
This process normally yields Nd2O3; the oxide is difficult to directly convert to elemental neodymium, which is often the goal of the whole technological procedure. Therefore, the oxide is treated with hydrochloric acid and ammonium chloride to produce the less stable NdCl3:
Nd2O3 + 6 NH4Cl → 2 NdCl3 + 3 H2O + 6 NH3
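As a back-of-the-envelope illustration of this reaction's stoichiometry, the Python sketch below estimates the mass of ammonium chloride consumed and of NdCl3 produced per 100 g of neodymium oxide. The molar masses are standard values; the function name is our own.

# Stoichiometry of: Nd2O3 + 6 NH4Cl -> 2 NdCl3 + 3 H2O + 6 NH3
M_ND2O3 = 336.48   # g/mol
M_NH4CL = 53.49    # g/mol
M_NDCL3 = 250.60   # g/mol

def chloride_conversion(grams_nd2o3):
    # 6 mol NH4Cl consumed and 2 mol NdCl3 produced per mol Nd2O3.
    mol = grams_nd2o3 / M_ND2O3
    return 6 * mol * M_NH4CL, 2 * mol * M_NDCL3

nh4cl, ndcl3 = chloride_conversion(100.0)
print(f"{nh4cl:.1f} g NH4Cl -> {ndcl3:.1f} g NdCl3")  # ~95.4 g NH4Cl -> ~149.0 g NdCl3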
The NdCl3 thus produced quickly absorbs water and converts to the NdCl3·6H2O hydrate, which is stable for storage and can be converted back into NdCl3 when necessary. Simple rapid heating of the hydrate is not practical for that purpose because it causes hydrolysis, with consequent production of Nd2O3. Therefore, anhydrous NdCl3 is prepared by dehydration of the hydrate either by slowly heating to 400 °C with 4–6 equivalents of ammonium chloride under high vacuum, or by heating with an excess of thionyl chloride for several hours. NdCl3 can alternatively be prepared by reacting neodymium metal with hydrogen chloride or chlorine, though this method is not economical due to the relatively high price of the metal and is used for research purposes only. After preparation, it is usually purified by high-temperature sublimation under high vacuum.
Applications
Production of neodymium metal
Neodymium(III) chloride is the most common starting compound for the production of neodymium metal. NdCl3 is heated with ammonium chloride or ammonium fluoride and hydrofluoric acid, or with alkali or alkaline earth metals, in a vacuum or argon atmosphere at 300–400 °C.
2 NdCl3 + 3 Ca → 2 Nd + 3 CaCl2
An alternative route is electrolysis of molten mixture of anhydrous NdCl3 and NaCl, KCl, or LiCl at temperatures about 700 °C. The mixture melts at those temperatures, even though they are lower than the melting points of NdCl3 and KCl (~770 °C).
Lasers and fiber amplifiers
Although NdCl3 itself does not have strong luminescence, it serves as a source of Nd3+ ions for various light-emitting materials. The latter include Nd-YAG lasers and Nd-doped optical fiber amplifiers, which amplify light emitted by other lasers. The Nd-YAG laser emits infrared light at 1.064 micrometres and is the most popular solid-state laser (i.e. a laser based on a solid medium). The reason for using NdCl3 rather than metallic neodymium or its oxide in the fabrication of fibers is the easy decomposition of NdCl3 during chemical vapor deposition, the process widely used for fiber growth.
Neodymium(III) chloride is a dopant not only of traditional silica-based optical fibers, but of plastic fibers (doped photolime-gelatin, polyimide, polyethylene, etc.) as well. It is also used as an additive in infrared organic light-emitting diodes. In addition, neodymium-doped organic films can act not only as LEDs but also as color filters that improve the LED emission spectrum.
The solubility of neodymium(III) chloride (and other rare-earth salts) in various solvents has led to a new type of rare-earth laser, which uses not a solid but a liquid as the active medium. The liquid containing Nd3+ ions is prepared in the following reactions:
SnCl4 + 2 SeOCl2 → SnCl62− + 2 SeOCl+
SbCl5 + SeOCl2 → SbCl6− + SeOCl+
3 SeOCl+ + NdCl3 → Nd3+(solv) + 3 SeOCl2,
where Nd3+ is in fact the solvated ion with several selenium oxychloride molecules coordinated in the first coordination sphere, that is, [Nd(SeOCl2)m]3+. The laser liquids prepared by this technique emit at the same wavelength of 1.064 micrometres and possess properties, such as high gain and sharpness of the emission, that are more characteristic of crystalline lasers than of Nd-glass lasers. The quantum efficiency of those liquid lasers was about 0.75 relative to the traditional Nd:YAG laser.
Catalysis
Another important application of NdCl3 is in catalysis—in combination with organic chemicals, such as triethylaluminium and 2-propanol, it accelerates polymerization of various dienes. The products include such general purpose synthetic rubbers as polybutylene, polybutadiene, and polyisoprene.
Neodymium(III) chloride is also used to modify titanium dioxide. The latter is one of the most popular inorganic photocatalysts for the decomposition of phenol, various dyes, and other wastewater contaminants. The catalytic action of titanium oxide has to be activated by UV light, i.e. artificial illumination. However, modifying titanium dioxide with neodymium(III) chloride allows catalysis under visible illumination, such as sunlight. The modified catalyst is prepared by a chemical coprecipitation–peptization method, using ammonium hydroxide, from a mixture of TiCl4 and NdCl3 in aqueous solution. This process is used commercially on a large scale, in 1000-liter reactors, for photocatalytic self-cleaning paints.
Corrosion protection
Other applications are being developed. For example, it has been reported that coating aluminium or various aluminium alloys with NdCl3 produces a very corrosion-resistant surface, which resisted immersion in a concentrated aqueous solution of NaCl for two months without signs of pitting. The coating is produced either by immersion in an aqueous solution of NdCl3 for a week or by electrolytic deposition using the same solution. In comparison with traditional chromium-based corrosion inhibitors, NdCl3 and other rare-earth salts are environmentally friendly and much less toxic to humans and animals.
The protective action of NdCl3 on aluminium alloys is based on formation of insoluble neodymium hydroxide. Being a chloride, NdCl3 itself is a corrosive agent, which is sometimes used for corrosion testing of ceramics.
Labeling of organic molecules
Lanthanides, including neodymium, are known for their bright luminescence and are therefore widely used as fluorescent labels. In particular, NdCl3 has been incorporated into organic molecules, such as DNA, which can then be easily traced using a fluorescence microscope during various physical and chemical reactions.
Health issues
Neodymium(III) chloride does not appear to be toxic to humans and animals (its toxicity is roughly comparable to that of table salt). The LD50 (the dose at which there is 50% mortality) for animals is about 3.7 g per kg of body weight (mouse, oral) and 0.15 g/kg (rabbit, intravenous injection). Mild skin irritation occurs upon exposure to 500 mg over 24 hours (Draize test on rabbits). Substances with an LD50 above 2 g/kg are considered non-toxic.
See also
Lanthanoid
Neodymium-doped yttrium lithium fluoride
References
Chlorides
Neodymium(III) compounds
Lanthanide halides | Neodymium(III) chloride | Chemistry | 2,621 |
22,793,769 | https://en.wikipedia.org/wiki/Ecosystem%20management | Ecosystem management is an approach to natural resource management that aims to ensure the long-term sustainability and persistence of an ecosystem's function and services while meeting socioeconomic, political, and cultural needs. Although indigenous communities have employed sustainable ecosystem management approaches implicitly for millennia, ecosystem management emerged explicitly as a formal concept in the 1990s from a growing appreciation of the complexity of ecosystems and of humans' reliance and influence on natural systems (e.g., disturbance and ecological resilience).
Building upon traditional natural resource management, ecosystem management integrates ecological, socioeconomic, and institutional knowledge and priorities through diverse stakeholder participation. In contrast to command and control approaches to natural resource management, which often lead to declines in ecological resilience, ecosystem management is a holistic, adaptive method for evaluating and achieving resilience and sustainability. As such, implementation is context-dependent and may take a number of forms including adaptive management, strategic management, and landscape-scale conservation.
Formulations
The term “ecosystem management” was formalized in 1992 by F. Dale Robertson, former Chief of the U.S. Forest Service. Robertson stated, “By ecosystem management, we mean an ecological approach… [that] must blend the needs of people and environmental values in such a way that the National Forests and Grasslands represent diverse, healthy, productive and sustainable ecosystems.”
A variety of additional definitions of ecosystem management exist. For example, Robert T. Lackey emphasizes that ecosystem management is informed by ecological and social factors, is motivated by societal benefits, and is implemented over a specific timeframe and area. F. Stuart Chapin and co-authors emphasize that ecosystem management is guided by ecological science to ensure the long-term sustainability of ecosystem services, while Norman Christensen and coauthors emphasize that it is motivated by defined goals, employs adaptive practices, and accounts for the complexities of ecological systems. Peter Brussard and colleagues emphasize that ecosystem management balances preserving ecosystem health while sustaining human needs.
As a concept of natural resource management, ecosystem management remains both ambiguous and controversial, in part because some of its formulations rest on contested policy and scientific assertions. These assertions are important for understanding much of the conflict surrounding ecosystem management. For instance, some allege that professional natural resource managers, typically operating from within government bureaucracies and professional organizations, mask debate over controversial assertions by depicting ecosystem management as an evolution of past management approaches.
Principles of ecosystem management
A fundamental concern of ecosystem management is the long-term sustainability of the production of goods and services by ecosystems, as "intergenerational sustainability [is] a precondition for management, not an afterthought." Ideally, there should be clear, publicly stated goals with respect to future trajectories and behaviors of the system being managed. Other important requirements include a sound ecological understanding of the system including ecological dynamics and the context in which the system is embedded. An understanding of the role of humans as components of the ecosystems and the use of adaptive management is also important. While ecosystem management can be used as part of a plan for wilderness conservation, it can also be used in intensively managed ecosystems (e.g., agroecosystems and close to nature forestry).
Core principles and common themes of ecosystem management:
Systems thinking: Management has a holistic perspective rather than focusing on a particular level of biological hierarchy in an ecosystem (e.g., only conserving a specific species or only preserving ecosystem functioning).
Ecological boundaries: Ecological boundaries are clearly and formally defined, and management is place-based.
Ecological integrity: Management is focused on maintaining or reintroducing native biological diversity and on preserving natural disturbance regimes and other key processes that sustain resilience.
Data collection: Broad ecological research and data collection is needed to inform effective management (e.g., species diversity, habitat types, disturbance regimes, etc.).
Monitoring: The impacts of management methods are tracked, allowing for their outcomes to be evaluated and modified, if needed.
Adaptive management: Management is an iterative process in which methods are continuously reevaluated as new scientific knowledge is gained.
Interagency cooperation: As ecological boundaries often cross administrative boundaries, management often requires cooperation among a range of agencies and private stakeholders.
Organizational change: Successful implementation of management requires shifts in the structure and operation of land management agencies.
Humans and nature: Nature and people are intrinsically linked and humans shape, and are shaped by, ecological processes.
Values: Humans play a key role in guiding management goals, which reflect a stage in the continuing evolution of social values and priorities.
History
Pre-industrialization
Sustainable ecosystem management approaches have been used by societies throughout human history. Prior to colonization, indigenous cultures often sustainably managed their natural resources through intergenerational traditional ecological knowledge (TEK). In TEK, cultures acquire knowledge of their environment over time and this information is passed on to future generations through cultural customs including folklore, religion, and taboos. Traditional management strategies vary by region; examples include the burning of the longleaf pine ecosystem by Native Americans in what is today the southeastern United States, the ban of seabird guano harvest during the breeding season by the Inca, the sustainable harvest practices of glaucous-winged gull eggs by the Huna Tlingit, and the Maya milpa intercropping approach (which is still used today).
Post-industrialization
In industrialized Western society, ecosystems have been managed primarily to maximize yields of a particular natural resource. This method for managing ecosystems can be seen in the U.S. Forest Service's shift away from sustaining ecosystem health and toward maximizing timber production to support residential development following World War II. Furthermore, natural resource management has typically assumed a view that each ecosystem has a single best equilibrium and that minimizing variation around this equilibrium results in more dependable, greater yields of natural resources. For example, this perspective informed the long-held belief in forest fire suppression in the United States, which drove a decline in populations of fire-tolerant species and a buildup of fuel, leading to higher intensity fires. Additionally, these approaches to managing natural systems tended to (a) be site- and species-specific, rather than considering all components of an ecosystem collectively, (b) employ a “command and control” approach, and (c) exclude stakeholders from management decisions.
The latter half of the 20th century saw a paradigm shift in how ecosystems were viewed, with a growing appreciation for the importance of ecological disturbance and for the intrinsic link between natural resources and overall ecosystem health. Simultaneously, there was acknowledgment of society's reliance on ecosystem services (beyond provisioning goods) and of the inextricable role human-environment interactions play in ecosystems. In sum, ecosystems were increasingly seen as complex systems shaped by non-linear and stochastic processes, and thus, they could not be managed to achieve single, fully predictable outcomes. As a result of these complexities and often unforeseeable feedback from management strategies, DeFries and Nagendra deemed ecosystem management to be a “wicked problem”. Thus, the outcome of natural resource management's "evolution" over the course of the 20th century is ecosystem management, which explicitly recognizes that technical and scientific knowledge, though necessary in all approaches to natural resource management, are insufficient in themselves.
Stakeholders
Stakeholders are individuals or groups who are affected by or have an interest in ecosystem management decisions and actions. Stakeholders may also have power to influence the goals, policies, and outcomes of management. Ecosystem management stakeholders fall into the following groups based on their diverse concerns:
Stakeholders whose lives are directly tied to the ecosystem (e.g., members of local community)
Stakeholders who are not directly impacted, but have an interest in the ecosystem or its ecosystem services (e.g., NGOs, recreational groups)
Stakeholders concerned with the decision-making processes (e.g., environmental advocacy groups)
Stakeholders funding management plans (e.g., taxpayers, funding agencies)
Stakeholders representing public interest (e.g., public officials)
Strategies to stakeholder participation
The complexity of ecosystem management decisions, ranging from local to international scales, requires the participation of stakeholders with diverse understandings, perceptions, and values of ecosystems and ecosystem services. Due to these complexities, effective ecosystem management is flexible and develops reciprocal trust around issues of common interest, with the objective of creating mutually beneficial partnerships. Key attributes of successful participatory ecosystem management efforts have been identified:
Stakeholder involvement is inclusive, equitable, and focused on trust-building and empowerment.
Stakeholders are engaged early on, and their involvement continues beyond decision and into management.
Stakeholder analysis is performed to ensure parties are appropriately represented. This involves determining the stakeholders involved in the management issue; categorizing stakeholders based on their interest in and influence on the issue; and evaluating relationships between stakeholders.
Stakeholders agree upon the aims of the participatory process from its beginning, and the means and extent of stakeholder participation are case-specific.
Stakeholder participation is conducted through skilled facilitation.
Social, economic, and ecological goals are equally weighed, and stakeholders are actively involved in decision making, which is arrived at by collective consensus.
Stakeholders continually monitor management plan’s effectiveness.
Multidisciplinary data are collected, reflecting multidisciplinary priorities, and decisions are informed by both local and scientific knowledge.
Economic incentives are provided to parties responsible for implementing management plans.
To ensure long-term stakeholder involvement, participation is institutionalized.
Examples of stakeholder participation
Malpai Borderland management:
In the early 1990s, there was ongoing conflict between the ranching and environmentalist communities in the Malpai Borderlands. The former group was concerned about sustaining their livelihoods, while the latter was concerned about the environmental impacts of livestock grazing. The groups found common ground around conserving and restoring rangeland, and diverse stakeholders, including ranchers, environmental groups, scientists, and government agencies, were engaged in management discussions. In 1994, the rancher-led Malpai Borderlands Group was created to collaboratively pursue the goals of ecosystem protection, management, and restoration.
Helge å River & Kristianstads Vattenrike Biosphere Reserve:
In the 1980s, local government agencies and environmental groups noted declines in the health of the Helge å River ecosystem, including eutrophication, bird population declines, and deterioration of flooded meadow areas. There was concern that the Helge å, a Ramsar Wetland of International Importance, faced an imminent tipping point. In 1989, led by a municipal organization, a collaborative management strategy was adopted, involving diverse stakeholders concerned with the ecological, social, and economic facets of the ecosystem. The Kristianstads Vattenrike Biosphere Reserve was established in 2005 to promote the preservation of the ecosystem's socio-ecological services.
Strategies to ecosystem management
Several strategies to implementing the maintenance and restoration of natural and human-modified ecosystem exist. Command and control management and traditional natural resource management are the precursors to ecosystem management. Adaptive management, strategic management, and landscape-level conservation are different methodologies and processes involved in implementing ecosystem management:
Command and control management
Command and control management utilizes a linear problem solving approach, in which a perceived problem is resolved through controlling devices such as laws, threats, contracts, and/or agreements. This top-down approach is used across many disciplines, and it is best suited for addressing relatively simple, well-defined problems, which have a clear cause and effect, and for which there is broad societal agreement as to policy and management goals. In the context of natural systems, command and control management attempts to control nature in order to improve natural resource extractions, establish predictability, and reduce threats. Command and control strategies include the use of herbicides and pesticides to improve crop yields; the culling of predators to protect game bird species; and the safeguarding of timber supply, by suppressing forest fires.
However, due to the complexities of ecological systems, command and control management may result in unintended consequences. For example, wolves were extirpated from Yellowstone National Park in the mid-1920s to reduce elk predation. Long-term studies of wolf, elk, and tree populations since wolf reintroduction in 1995 demonstrate that reintroduction has decreased elk populations, improving tree species recruitment. Thus, by controlling ecosystems to limit natural variation and increase predictability, command and control management often leads to a decline in the resilience of ecological, social, and economic systems, termed the "pathology of natural resource management". In this "pathology", an initially successful command and control practice drives relevant institutions to shift their focus toward control, over time obscuring the ecosystem's natural behavior, while the economy becomes reliant on the system in its controlled state. Consequently, there has been a transition away from command and control management, and increased focus on more holistic adaptive management approaches and on arriving at management solutions through partnerships between stakeholders.
Natural resource management
The term natural resource management is frequently used in relation to a particular resource for human use, rather than the management of a whole ecosystem. Natural resource management aims to fulfill the societal demand for a given resource without causing harm to the ecosystem, or jeopardizing the future of the resource. Due to its focus on natural resources, socioeconomic factors significantly affect this management approach. Natural resource managers initially measure the overall condition of an ecosystem, and if the ecosystem's resources are healthy, the ideal degree of resource extraction is determined, which leaves enough to allow the resource to replenish itself for subsequent harvests. The condition of each resource in an ecosystem is subject to change at different spatial and time scales, and ecosystem attributes, such as watershed and soil health, and species diversity and abundance, need to be considered individually and collectively.
Informed by natural resource management, the ecosystem management concept is based on the relationship between sustainable ecosystem maintenance and human demand for natural resources and other ecosystem services. To achieve these goals, ecosystem managers can be appointed to balance natural resource extraction and conservation over a long-term timeframe. Partnerships between ecosystem managers, natural resource managers, and stakeholders should be encouraged in order to promote the sustainable use of limited natural resources.
Historically, some ecosystems have experienced limited resource extraction and have been able to subsist naturally. Other ecosystems, such as forests, which in many regions provide considerable timber resources, have undergone successful reforestation and consequently, have accommodated the needs of future generations. As human populations grow, introducing new stressors to ecosystems, such as climate change, invasive species, land-use change, and habitat fragmentation, future demand for natural resources is unpredictable. Although ecosystem changes may occur gradually, their cumulative impacts can have negative effects for both humans and wildlife. Geographic information system (GIS) applications and remote sensing can be used to monitor and evaluate natural resources and ecosystem health.
Adaptive management
Adaptive management is based on the concept that the ability to predict future influences and disturbances to an ecosystem is limited and unclear. Therefore, an ecosystem should be managed so that it maintains the greatest degree of ecological integrity, and management practices should be able to change based on new experience and insights. In an adaptive management strategy, hypotheses about an ecosystem and its functioning are formed, and management techniques to test these hypotheses are then implemented. The implemented methods are then analyzed to evaluate whether ecosystem health improved or declined, and further analysis allows for the modification of the methods until they successfully meet the needs of the ecosystem. Thus, adaptive management is an iterative approach, encouraging "informed trial-and-error".
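The iterative cycle just described (hypothesize, implement, monitor, revise) can be sketched as a simple feedback loop. The Python toy below is purely schematic: the health index, response function, and thresholds are invented for illustration and do not represent any real monitoring protocol.

# Schematic adaptive-management cycle: implement methods, monitor outcomes, revise.
target_health = 0.8                      # desired ecosystem health index (0-1)
intervention = 0.2                       # hypothesized management intensity

def monitor(intensity):
    # Toy ecosystem response: health improves with intensity but saturates at 1.0.
    return min(1.0, 0.4 + intensity)

for cycle in range(5):
    health = monitor(intervention)       # implement the methods and track the response
    print(f"cycle {cycle}: intensity={intervention:.2f}, health={health:.2f}")
    if health >= target_health:          # goals met: retain the current practices
        break
    intervention += 0.1                  # otherwise revise the hypothesis and iterate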
This management approach has had mixed success in the fields of ecosystem management, fisheries management, wildlife management, and forest management, possibly because ecosystem managers may not be equipped with the decision-making skills needed to undertake an adaptive management methodology. Additionally, economic, social, and political priorities can interfere with adaptive management decisions. For this reason, for adaptive management to be successful, it must be a social as well as scientific process, focusing on institutional strategies while implementing experimental management techniques.
Strategic management
As it relates to ecosystem management, strategic management encourages the establishment of goals that will sustain an ecosystem while keeping socioeconomic and politically relevant policy drivers in mind. This approach differs from other types of ecosystem management because it emphasizes stakeholders' involvement, relying on their input to develop the best management strategy for an ecosystem. Similar to other methods of ecosystem management, strategic management prioritizes evaluating and reviewing any impacts of management intervention on an ecosystem, and flexibility in adapting management protocols as a result of new information.
Landscape-level conservation
Landscape-level (or landscape-scale) conservation is a method that considers wildlife needs at a broader landscape scale when implementing conservation initiatives. By considering broad-scale, interconnected ecological systems, landscape-level conservation acknowledges the full scope of an environmental problem. Implementation of landscape-scale conservation is carried out in a number of ways. A wildlife corridor, for example, provides a connection between otherwise isolated habitat patches, presenting a solution to habitat fragmentation. These implementations can be found crossing over or under highways to reduce segmentation. In other instances, the habitat requirements of a keystone or vulnerable species are assessed to identify the best strategies for protecting the ecosystem and the species. However, simultaneously addressing the habitat requirements of multiple species in an ecosystem can be difficult, and as a result, more comprehensive approaches have been considered in landscape-level conservation.
In human-dominated landscapes, weighing the habitat requirements of wild flora and fauna versus the needs of humans presents challenges. Globally, human-induced environmental degradation is an increasing problem, which is why landscape-level approaches play an important role in ecosystem management. Traditional conservation methods targeted at individual species may need to be modified to include the maintenance of habitats through the consideration of both human and ecological factors.
See also
Ecosystem-based management
Ecosystem Management Decision Support
Sustainable forest management
Sustainable land management
References
Ecosystems
Natural resource management | Ecosystem management | Biology | 3,633 |
35,467,353 | https://en.wikipedia.org/wiki/Morchella%20prava | Morchella prava is a species of fungus in the family Morchellaceae described as new to science in 2012. It is found in the range 43–50°N across North America, where it fruits from April to June.
References
External links
prava
Edible fungi
Fungi described in 2012
Fungi of North America
Fungus species | Morchella prava | Biology | 67 |
21,353,343 | https://en.wikipedia.org/wiki/Lindeberg%27s%20condition | In probability theory, Lindeberg's condition is a sufficient condition (and under certain conditions also a necessary condition) for the central limit theorem (CLT) to hold for a sequence of independent random variables. Unlike the classical CLT, which requires that the random variables in question have finite variance and be both independent and identically distributed, Lindeberg's CLT only requires that they have finite variance, satisfy Lindeberg's condition, and be independent. It is named after the Finnish mathematician Jarl Waldemar Lindeberg.
Statement
Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space, and $X_k : \Omega \to \mathbb{R}$, $k \in \mathbb{N}$, be independent random variables defined on that space. Assume the expected values $\mathbb{E}[X_k] = \mu_k$ and variances $\operatorname{Var}[X_k] = \sigma_k^2$ exist and are finite. Also let
$$s_n^2 := \sum_{k=1}^{n} \sigma_k^2.$$
If this sequence of independent random variables satisfies Lindeberg's condition:
$$\lim_{n \to \infty} \frac{1}{s_n^2} \sum_{k=1}^{n} \mathbb{E}\left[ (X_k - \mu_k)^2 \cdot \mathbf{1}\{ |X_k - \mu_k| > \varepsilon s_n \} \right] = 0$$
for all $\varepsilon > 0$, where $\mathbf{1}\{\dots\}$ is the indicator function, then the central limit theorem holds, i.e. the random variables
$$Z_n := \frac{\sum_{k=1}^{n} (X_k - \mu_k)}{s_n}$$
converge in distribution to a standard normal random variable as $n \to \infty.$
Lindeberg's condition is sufficient, but not in general necessary (i.e. the inverse implication does not hold in general).
However, if the sequence of independent random variables in question satisfies
$$\max_{k=1,\dots,n} \frac{\sigma_k^2}{s_n^2} \to 0 \quad \text{as } n \to \infty,$$
then Lindeberg's condition is both sufficient and necessary, i.e. it holds if and only if the result of the central limit theorem holds.
Remarks
Feller's theorem
Feller's theorem can be used as an alternative method to prove that Lindeberg's condition holds. Letting $S_n := \sum_{k=1}^{n} X_k$ and for simplicity $\mathbb{E}[X_k] = 0$, the theorem states:
if $\forall \varepsilon > 0$, $\lim_{n \to \infty} \max_{1 \le k \le n} \mathbb{P}(|X_k| > \varepsilon s_n) = 0$, and $S_n / s_n$ converges weakly to a standard normal distribution as $n \to \infty$, then $(X_k)_{k \in \mathbb{N}}$ satisfies the Lindeberg condition.
This theorem can be used to disprove that the central limit theorem holds for $S_n / s_n$ by using proof by contradiction. This procedure involves proving that Lindeberg's condition fails for $(X_k)_{k \in \mathbb{N}}$.
Interpretation
Because the Lindeberg condition implies $\max_{k=1,\dots,n} \sigma_k^2 / s_n^2 \to 0$ as $n \to \infty$, it guarantees that the contribution of any individual random variable $X_k$ ($1 \le k \le n$) to the variance $s_n^2$ is arbitrarily small, for sufficiently large values of $n$.
Example
Consider the following informative example which satisfies the Lindeberg condition. Let $\xi_i$ be a sequence of zero-mean, variance-1 iid random variables and $a_{n,i}$ a non-random triangular array of coefficients satisfying:
$$\max_{1 \le i \le n} \frac{|a_{n,i}|}{\|a_n\|_2} \to 0 \quad \text{as } n \to \infty, \qquad \text{where } \|a_n\|_2^2 := \sum_{i=1}^{n} a_{n,i}^2.$$
Now, define the normalized elements of the linear combination:
$$X_{n,i} := \frac{a_{n,i}\,\xi_i}{\|a_n\|_2},$$
which satisfies the Lindeberg condition:
$$\sum_{i=1}^{n} \mathbb{E}\left[ X_{n,i}^2 \,\mathbf{1}\{ |X_{n,i}| > \varepsilon \} \right] \le \mathbb{E}\left[ \xi_1^2 \,\mathbf{1}\!\left\{ |\xi_1| > \varepsilon\, \frac{\|a_n\|_2}{\max_{1 \le i \le n} |a_{n,i}|} \right\} \right],$$
but $\mathbb{E}[\xi_1^2]$ is finite, so by the dominated convergence theorem (DCT) and the condition on the $a_{n,i}$, the right-hand side goes to 0 for every $\varepsilon > 0$.
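The following numerical sketch (our own construction, using NumPy) illustrates the example above: for coefficients $a_{n,i} = 1/\sqrt{i}$, which satisfy the stated condition, the Monte Carlo estimate of the Lindeberg sum shrinks toward zero as $n$ grows.

import numpy as np

rng = np.random.default_rng(0)
eps = 0.25

for n in (10, 100, 1000, 5000):
    a = 1.0 / np.sqrt(np.arange(1, n + 1))     # max|a_i| / ||a||_2 -> 0 as n grows
    norm = np.linalg.norm(a)
    # iid entries with mean 0 and variance 1 (uniform on [-sqrt(3), sqrt(3)])
    xi = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(2000, n))
    X = a * xi / norm                           # normalized elements X_{n,i}
    lindeberg_sum = np.mean(np.sum(X**2 * (np.abs(X) > eps), axis=1))
    print(n, round(lindeberg_sum, 4))           # estimate decreases toward 0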
See also
Lyapunov condition
Central limit theorem
References
Theorems in statistics
Central limit theorem | Lindeberg's condition | Mathematics | 509 |
5,253,627 | https://en.wikipedia.org/wiki/Sun%20protective%20clothing | Sun protective clothing is clothing specifically designed for sun protection and is produced from a fabric rated for its level of ultraviolet (UV) protection. A novel weave structure and denier (related to thread count per inch) may produce sun protective properties. In addition, some textiles and fabrics employed in the use of sun protective clothing may be pre-treated with UV-inhibiting ingredients during manufacture to enhance their effectiveness.
In addition to special fabrics, sun protective clothing may also adhere to specific design parameters, including styling appropriate to full coverage of the skin most susceptible to UV damage. Long sleeves, ankle-length trousers, knee- to floor-length skirts, knee- to floor-length dresses, and collars are common styles for clothing as a sun protective measure.
Regular clothing can leave the wearer exposed to the damaging effects of UV radiation. However, a number of fabrics and textiles in common use today need no further UV-blocking enhancement based on their inherent fiber structure, density of weave, and dye components, especially darker colors and indigo dyes. Good examples of these fabrics contain full percentages or blends of heavy-weight natural fibers like cotton, linen and hemp or light-weight synthetics such as polyester, nylon, spandex and polypropylene. Natural or synthetic indigo-dyed denim, twill weaves, canvas and satin are also good examples. However, a significant disadvantage is the heat retention caused by heavier-weight and darker-colored fabrics.
As sun protective clothing is usually meant to be worn during warm and humid weather, some UV-blocking textiles and clothing may be designed with ventilated weaves, moisture wicking and antibacterial properties to assist in cooling and breathability.
UPF (ultraviolet protection factor) represents the ratio of sunburn-causing UV without and with the protection of the fabric, similar to SPF (sun protection factor) ratings for sunscreen. While standard summer fabrics have UPF ~6, sun protective clothing typically has UPF ~30, which means that only 1 out of ~30 units of UV will pass through (~3%).
History
Although clothing has been used for protection against solar exposure for thousands of years, modern sun protective clothing was popularized (but not exclusively used) in Australia as an option or adjunct to sunscreen lotions and sunblock creams. Sun protective clothing and UV protective fabrics in Australia now follow a lab-testing procedure regulated by a Commonwealth agency: ARPANSA. This standard was established in 1996 after work by Australian swimwear companies. The British standard was established in 1998 by the National Radiological Protection Board and the British Standards Institute. Using the Australian method as a model, the US standard was formally established in 2001, and now employs a more-stringent testing protocol that includes fabric longevity, abrasion/wear and washability. UPF testing is now widely used on clothing for outdoor activities.
The original UPF rating system was enhanced in the United States by the American Society for Testing and Materials (ASTM) Committee D13.65, at the behest of the Federal Trade Commission (FTC) and the Consumer Product Safety Commission, to qualify and standardize the emerging sun protective clothing and textile industry. When the Food and Drug Administration (FDA) discontinued regulating sun-protective clothing, the Solar Protective Factory (whose CEO chaired the ASTM Committee) took the lead in developing the UPF testing protocols and labeling standards that are presently used in the United States.
In 1992, the FDA reviewed clothing that was being marketed with claims of sun protection (SPF, % UV blockage, or skin cancer prevention). Only one brand of sun protective clothing, Solumbra, was cleared under medical device regulations. The FDA initially regulated sun protective clothing as a medical device, but later transferred oversight for general sun protective clothing to the FTC. The UPF rating system may eventually be adopted by interested apparel/textile/fabric manufacturers as a "value added" program for consumer safety and awareness. Before UPF standards were in place (which directly measure a fabric's ability to block UV radiation), clothing was previously rated using SPF standards (which measure how long a person's skin takes to redden).
Fabric
Factors that affect the level of sun protection provided by a fabric, in approximate order of importance, include weave, color, weight, stretch, and wetness. The less open or more dense the fabric (weave, weight, stretch), the better the protection. Getting a fabric wet reduces the protection by as much as half, except for silk and viscose, which can become more protective when wet. Polyester contains a benzene ring that absorbs UV light. In addition, UV absorbers may be added at various points in the manufacturing process to enhance protection levels. In 2003, chemical company BASF embedded nanoparticles of titanium dioxide into a nylon fabric, which can be used for sun protective clothing that maintains its UV protection when wet.
There is some indication that washing fabrics in detergents containing fabric brighteners, which absorb UV radiation, might increase their protective capability. Studies at the University of Alberta also indicate that darker-colored fabrics offer more protection than lighter-colored fabrics.
While there is some correlation between the percentages of visible light and UV that pass through the same fabric, it is not a strong relationship. With new-technology textiles designed for the sole purpose of UV blocking, it is not always possible to judge the UV protection level simply by holding up the fabric and examining how much visible light passes through.
Provide more protection:
specially manufactured fabrics
cotton viscose fabrics
black or dark blue denim jeans
wool garments
satin-finished silk of any weight
tightly woven Bamboo/Lycra fabric
polyacrylonitrile
100% polyester
shiny polyester blends
tightly woven fabrics
REPREVE fabric
unbleached cotton (most cotton sold is bleached)
bamboo/cotton blend
Provide less protection:
polyester crepe
bleached cotton
viscose
knits
undyed/white jeans
worn/old fabric
UPF rating
A relatively new rating designation for sun protective textiles and clothing is UPF (ultraviolet protection factor), which represents the ratio of sunburn-causing UV measured without and with the protection of the fabric. For example, a fabric rated UPF 30 means that, if 30 units of UV fall on the fabric, only 1 unit will pass through to the skin. A UPF 30 fabric that blocks 29 out of 30 units of UV is therefore blocking 96.7%. Unlike SPF (sun protection factor) measurements that traditionally use human sunburn testing, UPF is measured using a laboratory instrument (spectrophotometer or spectroradiometer) and an artificial light source, and then applying a sunburn weighting curve (erythemal action spectrum) across the relevant UV wavelengths. Theoretically, human SPF testing and instrument UPF testing both generate comparable measurements of a product's ability to protect against sunburn.
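The arithmetic relating a UPF rating to the fractions of UV transmitted and blocked is straightforward; the small Python helper below (our own naming) makes it explicit.

def upf_to_percentages(upf):
    # A UPF of N means 1/N of erythemally weighted UV passes through the fabric.
    transmitted = 100.0 / upf
    return transmitted, 100.0 - transmitted

for rating in (6, 20, 30, 50):
    t, b = upf_to_percentages(rating)
    print(f"UPF {rating}: {t:.1f}% transmitted, {b:.1f}% blocked")
# UPF 30 -> 3.3% transmitted, 96.7% blocked, matching the example above.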
Below is the ASTM standard for sun protective clothing and swimwear:
UPF 15–24: Good UV protection
UPF 25–39: Very good UV protection
UPF 40–50+: Excellent UV protection
According to testing by Consumer Reports, UPF 30+ is typical for protective fabrics, while UPF 20 is typical for standard summer fabrics.
UPF testing protocol
Developed in 1998 by Committee RA106, the testing standard for sun protective fabrics in the United States is the American Association of Textile Chemists and Colorists (AATCC) Test Method 183. This method is based on the original guidelines established in Australia in 1994.
AATCC 183 should be used in conjunction with other related standards including ASTM D 6544 and ASTM D 6603. ASTM D 6544 specifies simulating the life cycle of a fabric so that a UPF test can be done near the end of the fabric's life, when it typically provides the least UV protection. ASTM D 6603 is a consumer format recommended for visible hangtag and care labeling of sun protective clothing and textiles. A manufacturer may publish a test result to a maximum of UPF 50+.
Sun protective clothing and textile/fabric manufacturers are currently a self-regulating industry in North America, prescribed by the AATCC and ASTM methods of testing.
See also
Dress code
Hemline
Rash guard
Sunglasses
Sun hat
Sunlight
Umbrellas
UV index
Notes
References
Safety clothing
Sun tanning | Sun protective clothing | Chemistry | 1,681 |
37,572,903 | https://en.wikipedia.org/wiki/Acer%20CloudMobile%20S500 | Acer CloudMobile S500 is an Android smartphone that was announced in February 2012 and released in September 2012. The Acer CloudMobile S500 tech specifications includes a 4.3 inch IPS Display, Krait dual-core processor running at 1.5 GHz with 1 GB of RAM, 8 MP rear camera and a front-facing HD camera for video calls, face unlock or self-portrait photography. The device runs Android 4.0.4 (ICS), and it is 97% Vanilla OS, has only minor modifications on drop down notifications and on the slide to unlock.
CloudMobile S500
Android (operating system) devices
Touchscreen portable media players
Mobile phones introduced in 2012 | Acer CloudMobile S500 | Technology | 143 |
39,439,535 | https://en.wikipedia.org/wiki/SpaceTEC%20National%20Resource%20Center%20for%20Aerospace%20Technical%20Education | SpaceTEC® is one of the Advanced Technological Education (ATE) Centers funded by the National Science Foundation (NSF) for developing partnerships between academic institutions and industry partners to promote improvement in the education of science and engineering technicians at the undergraduate and secondary school levels. With an emphasis on two-year colleges, the ATE program focuses on the education of technicians for the high-technology fields that drive the world's economies.
Located in Cape Canaveral, Florida, SpaceTEC® supports the education and credentialing of aerospace technicians in six core areas and three advanced disciplines: (1) space vehicle processing activities; (2) aerospace manufacturing; and (3) composite materials. A national consortium of community and technical colleges, universities, business and industry organizations, and government agencies promotes careers and educates candidates for technical employment.
The organization has been accredited by the International Certification Accreditation Council to ISO 17024 standards and offers performance-based examinations that result in industry-driven nationally recognized credentials that reflect the competencies employers demand. Successful candidates can qualify for college credits via transcripts provided by the American Council on Education. The SpaceTEC® national credentialing program has earned a formal Safety Approval by the U.S. Federal Aviation Administration's Office of Commercial Space Transportation.
History
SpaceTEC® was established in 2002 as an NSF National Center of Excellence funded in part by a three-year NSF Advanced Technological Education Program grant and renewed in 2005 for an additional four years. SpaceTEC® was awarded a four-year follow-on grant in 2009 as an NSF National Resource Center, and in April 2013, it received a four-year renewal of its NSF grant. The center is now expanding operations to Science, technology, engineering, and mathematics (STEM) technicians working in technical fields beyond aerospace through its CertTEC® commercial industry credentials.
The original consortium of industry, academia, and government representatives was known as the Community Colleges for Innovative Technology Transfer, a not-for-profit Florida corporation founded in 1994 to provide technician education for geographic information systems. Community Colleges for Innovative Technology Transfer received one of the first National Science Foundation grants for two-year community and technical colleges. The founding partners were all located near NASA or Department of Defense locations, providing a consortium strongly linked to post-secondary education programs for the nation's technical workforce in aerospace and defense. Its credentials are widely recognized in academic circles as well as within the U. S. aerospace industry.
Community Colleges for Innovative Technology Transfer was restructured in 2009 and renamed SpaceTEC Partners, Inc. (SPI) to reflect a growing expansion of activities to technical education programs beyond aerospace. The mission of SPI is to create and implement an industry-driven, government-endorsed technical education process that can be shared with other educational venues. SpaceTEC® programs to educate and credential aerospace technicians have been adopted by educational institutions, NASA and Department of Defense contractors, and for U.S. active duty personnel and veterans.
Most recently, SpaceTEC® has obtained the NASA human spaceflight database of educational and credentialing activities for its NSF National Resource Center and all of its partners and continues to support strong linkages between its industry and education partners.
References
External links
Official website of the SpaceTEC Aerospace Technical Education Center
US Air Force Credentialing Web Site
US Army Credentialing Opportunities On-Line
US Navy Credentialing Opportunities On-Line
Aerospace engineering organizations
Professional titles and certifications
Career and technical education
National Science Foundation | SpaceTEC National Resource Center for Aerospace Technical Education | Engineering | 716 |
2,903,841 | https://en.wikipedia.org/wiki/Xi%20Cancri | Xi Cancri (ξ Cancri, abbreviated Xi Cnc, ξ Cnc) is a spectroscopic binary star system in the zodiac constellation of Cancer. It is visible to the naked eye with an apparent visual magnitude of +5.15. Based upon parallax measurements obtained during the Hipparcos mission, it is roughly 370 light-years distant from the Sun.
The two components are designated Xi Cancri A (formally named Nahn) and B.
Nomenclature
ξ Cancri (Latinised to Xi Cancri) is the system's Bayer designation. The designations of the two components as Xi Cancri A and B derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
Xi Cancri together with Lambda Leonis (Alterf) were the Persian Nahn, "the Nose", and the Coptic Piautos, "the Eye", both lunar asterisms. Nahn was also the name given to Xi Cancri in a 1971 NASA technical memorandum. In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Nahn for the component Xi Cancri A on 1 June 2018 and it is now so included in the List of IAU-approved Star Names.
Properties
At its present distance, the visual magnitude is diminished by an extinction factor of 0.135 due to interstellar dust.
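As a worked illustration of how the quoted extinction enters the distance-magnitude bookkeeping, the Python sketch below derives an approximate absolute magnitude for the system from the values given in this article (apparent magnitude 5.15, roughly 370 light-years, extinction 0.135). Treat it as an order-of-magnitude check rather than a published measurement.

import math

m = 5.15               # apparent visual magnitude
A = 0.135              # interstellar extinction, in magnitudes
d_pc = 370 / 3.2616    # distance: light-years -> parsecs (~113 pc)

mu = 5 * math.log10(d_pc / 10)   # distance modulus (ignoring extinction)
M = m - mu - A                   # absolute magnitude, corrected for extinction
print(f"distance modulus = {mu:.2f}, absolute magnitude ~ {M:.2f}")  # ~5.27, ~-0.26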
Xi Cancri is a single-lined spectroscopic binary star system with an orbital period of 4.66 years, an eccentricity of 0.06, and a semimajor axis of 0.01 arcsecond. The primary, Xi Cancri A, is a yellow G-type giant with an apparent magnitude of +5.70. Its companion, Xi Cancri B, is of magnitude 6.20.
References
G-type giants
Spectroscopic binaries
Cancri, Xi
Cancer (constellation)
Durchmusterung objects
Cancri, 77
078515
044946
3627 | Xi Cancri | Astronomy | 462 |