| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
73,122,468 | https://en.wikipedia.org/wiki/Pebble%20%28social%20network%29 | Pebble (formerly T2) was an American social media platform founded by former Twitter employees Sarah Oh and Gabor Cselle. It provided an authenticated network where users could make posts and interact in communities before shutting down on 1 November 2023.
One day before the shutdown, Cselle launched a Mastodon instance of the same name. It closely resembles the look and feel of the old site.
Background
Prior to founding T2, Cselle oversaw the incubation of new consumer products in Google's since-shuttered Area 120 incubator. Cselle had also been a Group Product Manager at Twitter from 2014 to 2016, where he worked on the consumer product and relaunched Twitter's logged-out homepage and mobile trends. Sarah Oh had previously worked as an executive in Trust and Safety at Twitter and Facebook. On the day Oh was laid off from Twitter, Cselle called her to offer his condolences and to invite her to join T2 in creating a new social media platform.
Cselle announced the development of T2 in November 2022.
In early 2023, T2 hired former Discord Senior Director of Engineering Michael Greer as its chief technology officer.
On 15 September 2023, the platform was rebranded as Pebble.
On 24 October 2023, the platform announced its shutdown on 1 November 2023, approximately one year since it started.
On 30 October 2023, Cselle launched the Mastodon instance pebble.social.
Platform
Pebble was one of several social media platforms conceived as alternatives to X (formerly Twitter) after its takeover by Elon Musk. The platform allowed posts of up to 280 characters. Cselle expressed a desire to keep the platform as similar to the original Twitter as possible. It also emphasized security and safety features such as user authentication.
Pebble's moderation was planned to make use of both human review and artificial intelligence features.
On 25 April 2023, the platform's invite system launched, allowing its current community of around 1,000 users to invite their friends to the service instead of requiring users to join a waitlist. Each member of the platform was allowed up to 5 invites with the ability to request more invites if required. The platform was a web-based app only.
Pebble offered checkmark verification, similar to X. Verification was done through Persona, and a $5 charge was invoiced to offset the cost of verification. Unlike rival X Blue, the payment was one-time.
External links
Official website
References
Defunct social networking services
American social networking websites
Real-time web
Text messaging
Microblogging services
Internet properties disestablished in 2023
Internet properties established in 2022 | Pebble (social network) | Technology | 548 |
71,801,051 | https://en.wikipedia.org/wiki/Operation%20Menai%20Bridge | Operation Menai Bridge is the code name for plans related to the death of King Charles III. The name refers to a suspension bridge in Wales. The plan includes the announcement of his death, the period of official mourning, and the details of his state funeral. Planning for the King's funeral began almost immediately after Charles's accession to the throne upon the death of his mother and predecessor, Queen Elizabeth II.
Background
The death of King George VI was communicated by using the phrase "Hyde Park Corner", to avoid Buckingham Palace switchboard operators learning the news too soon. For Queen Elizabeth The Queen Mother, Operation Tay Bridge was put into motion upon her death. Other code names used were Operation Forth Bridge for Prince Philip, Duke of Edinburgh, and Operation London Bridge for Queen Elizabeth II. Since the latter died at Balmoral Castle in Scotland, Operation Unicorn was also put into effect upon her death.
Post-accession
Following the accession of Charles III, planning for his funeral began "in earnest" on 20 September 2022, the day following the Queen's state funeral. As of 2024, details for Operation Menai Bridge continued to be regularly updated and reviewed, in light of Charles's diagnosis with cancer that year.
A 2024 biography of Charles III by Robert Hardman claimed the King's funeral arrangements had "been upgraded" to Operation London Bridge, mirroring those of his mother. According to the biography, planning for these arrangements began shortly after his coronation in 2023. The biography also claims that the codename Operation Menai Bridge is now being used for William, Prince of Wales, replacing the previously used codename, Operation Clare Bridge.
References
Charles III
Future events
Non-combat military operations involving the United Kingdom
State funerals in the United Kingdom | Operation Menai Bridge | Physics | 358 |
67,227,898 | https://en.wikipedia.org/wiki/Edmond%E2%80%93Ogston%20model | The Edmond–Ogston model is a thermodynamic model proposed by Elizabeth Edmond and Alexander George Ogston in 1968 to describe phase separation of two-component polymer mixtures in a common solvent. At the core of the model is an expression for the Helmholtz free energy
that takes into account terms in the concentration of the polymers up to second order, and needs three virial coefficients and as input. Here is the molar concentration of polymer , is the universal gas constant, is the absolute temperature, and is the system volume. It is possible to obtain explicit solutions for the coordinates of the critical point
,
where represents the slope of the binodal and spinodal in the critical point. Its value can be obtained by solving a third order polynomial in ,
,
which can be done analytically using Cardano's method and choosing the solution for which both and are positive.
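The specific cubic for the Edmond–Ogston critical point is not reproduced above, but the solution technique named here can be illustrated generically. The sketch below is a hedged, generic implementation of Cardano's method for an arbitrary cubic with real coefficients; the coefficients and function name are illustrative assumptions, not the model's actual polynomial.

```python
# Generic sketch of Cardano's method for a cubic a*x**3 + b*x**2 + c*x + d = 0.
# This only illustrates the analytic technique mentioned in the text; the
# Edmond-Ogston polynomial itself is not reproduced here.
import cmath

def cardano(a, b, c, d):
    # Depressed cubic substitution x = t - b/(3a):  t**3 + p*t + q = 0
    p = (3*a*c - b**2) / (3*a**2)
    q = (2*b**3 - 9*a*b*c + 27*a**2*d) / (27*a**3)
    # Cardano's formula: t = u + v with u**3 = -q/2 + sqrt((q/2)**2 + (p/3)**3)
    disc = cmath.sqrt((q/2)**2 + (p/3)**3)
    u = (-q/2 + disc) ** (1/3)                 # principal complex cube root
    omega = complex(-0.5, 3**0.5/2)            # primitive cube root of unity
    roots = []
    for j in range(3):
        uj = u * omega**j
        vj = -p / (3*uj) if uj != 0 else 0     # guard the degenerate p = q = 0 case
        roots.append(uj + vj - b/(3*a))
    return roots

# Roots of (x - 1)(x - 2)(x - 3): approximately 1, 2 and 3 (in some order,
# possibly with tiny imaginary round-off from the complex arithmetic).
print(cardano(1, -6, 11, -6))
```

In the Edmond–Ogston case, the physically meaningful root would then be the one for which both critical concentrations are positive, as stated above.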
The spinodal can be expressed analytically too, and the Lambert W function has a central role to express the coordinates of binodal and tie-lines.
The model is closely related to the Flory–Huggins model.
The model and its solutions have been generalized to mixtures with an arbitrary number of components , with greater than or equal to 2.
References
Polymer chemistry
Solutions
Thermodynamic free energy | Edmond–Ogston model | Physics,Chemistry,Materials_science,Engineering | 263 |
30,338,902 | https://en.wikipedia.org/wiki/Air-wedge%20shearing%20interferometer | The air-wedge shearing interferometer is probably the simplest type of interferometer designed to visualize the disturbance of the wavefront after propagation through a test object. This interferometer is based on utilizing a thin wedged air-gap between two optical glass surfaces and can be used with virtually any light source even with non-coherent white light.
Setup
An air-wedge shearing interferometer has been described in the literature and was employed in a set of experiments described there. This interferometer consists of two optical glass wedges (~2–5°), pushed together and then slightly separated from one side to create a thin air-gap wedge. This air-gap wedge has a unique property: it is very thin (micrometer scale) and has very high flatness (~λ/10).
There are four nearly equal-intensity Fresnel reflections (~4% each for a refractive index of 1.5) from the air-wedge interferometer (Fig. 1):
from the exterior surface of the first glass block
from the interior surface of the first glass block
from the interior surface of the second glass block
from the exterior surface of the second glass block
The angle between beams 1 and 2 and between beams 3 and 4 is not adjustable and depends only on the shape of the glass wedge. The angle between beams 2 and 3 is easily adjusted by varying the air-wedge angle. The distance between the air-wedge and the image plane should be long enough to spatially separate reflection 1 from 2 and reflection 3 from 4. The overlap of beams 2 and 3 in the image plane creates an interferogram.
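The ~4% figure quoted above follows from the normal-incidence Fresnel reflectance of an air–glass interface; a minimal check (the function name is illustrative):

```python
# Normal-incidence Fresnel reflectance R = ((n1 - n2) / (n1 + n2))**2.
# For an air/glass interface with refractive index 1.5 this reproduces the
# ~4% per-surface reflection quoted in the text.
def fresnel_reflectance(n1, n2):
    return ((n1 - n2) / (n1 + n2)) ** 2

print(fresnel_reflectance(1.0, 1.5))  # 0.04 -> ~4% per glass surface
```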
Alignment
To minimize image aberrations, the angle plane of the glass wedges has to be placed orthogonal to the angle plane of the air-wedge. Because the intensity of Fresnel reflections from a glass surface is polarization- and angle-dependent, it is necessary to keep the air-wedge plane nearly perpendicular to the incident beam (±5°) to minimize instrumentally induced intensity variation. This is very important when coupling the air-wedge interferometer to imaging optics. The air-wedge interferometer has a very simple design, requiring only two standard BK7 glass wedges and one mirror holder (Fig. 3).
Applications
Because of its extremely thin air-gap, the air-wedge interferometer has been successfully applied in experiments with femtosecond high-power lasers. Figure 4 shows an interferogram of laser interactions with a helium jet in a vacuum chamber. The probing beam has a ~500 fs duration and a ~1 μm wavelength. The air-wedge interferogram from even this very short-coherence-length laser beam exhibits clear, high-contrast interference lines.
Advantages
The air-wedge shearing interferometer is similar to the classical shearing interferometer but is micrometres thick, can operate with virtually any light source even with non-coherent white light, has an adjustable angular beam split, and uses standard inexpensive optical elements. Replacement of the second glass wedge by a plane-concave lens, will turn the lateral-shearing air-wedge interferometer to a radial-shearing interferometer, which is important for some specific applications.
The principle of interference from the air-wedge between two plane-parallel glass plates is described in a number of elementary optics textbooks. However, this "classical" air-wedge arrangement has never been used for field-visualizing interferometry, owing to the overlap of all four reflected beams in the image plane. The design described in this article eliminates this obstruction and makes the air-wedge interferometer effective for practical field-visualization interferometry.
See also
List of types of interferometers
Shearing interferometer
References
Interferometers | Air-wedge shearing interferometer | Technology,Engineering | 770 |
55,169,632 | https://en.wikipedia.org/wiki/4-Mercapto-4-methyl-2-pentanone | 4-Mercapto-4-methyl-2-pentanone is an aroma compound with the chemical formula C6H12OS . It has a tropical flavor. It is found in Sauvignon wines and is a potent odorant of new-world hops.
References
Ketones
Thiols | 4-Mercapto-4-methyl-2-pentanone | Chemistry | 65 |
693,848 | https://en.wikipedia.org/wiki/Hypergeometric%20identity | In mathematics, hypergeometric identities are equalities involving sums over hypergeometric terms, i.e. the coefficients occurring in hypergeometric series. These identities occur frequently in solutions to combinatorial problems, and also in the analysis of algorithms.
These identities were traditionally found 'by hand'. Several algorithms now exist that can find and prove all hypergeometric identities.
Examples
Definition
There are two definitions of hypergeometric terms, both used in different cases as explained below. See also hypergeometric series.
A term tk is a hypergeometric term if
is a rational function in k.
A term F(n,k) is a hypergeometric term if
is a rational function in k.
There exist two types of sums over hypergeometric terms, the definite and indefinite sums. A definite sum is of the form
The indefinite sum is of the form
Proofs
Although in the past proofs have been found for many specific identities, there exist several general algorithms to find and prove identities. These algorithms first find a simple expression for a sum over hypergeometric terms and then provide a certificate which anyone can use to check and prove the correctness of the identity.
For each of the hypergeometric sum types there exist one or more methods to find a simple expression. These methods also provide the certificate to check the identity's proof:
Definite sums: Sister Celine's Method, Zeilberger's algorithm
Indefinite sums: Gosper's algorithm
The book A = B by Marko Petkovšek, Herbert Wilf and Doron Zeilberger describes the three main approaches mentioned above.
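As a concrete illustration of the indefinite case, the sketch below uses SymPy's implementation of Gosper's algorithm (assumed here to be importable as sympy.concrete.gosper.gosper_sum) on the standard identity that the sum of k·k! from k = 0 to n equals (n+1)! − 1; it is a hedged example, not taken from the book.

```python
# Hedged sketch: Gosper's algorithm finds a closed form for an indefinite
# hypergeometric sum.  The summand k*k! is hypergeometric, since the ratio of
# consecutive terms, (k+1)*(k+1)! / (k*k!) = (k+1)**2 / k, is rational in k.
from sympy import symbols, factorial
from sympy.concrete.gosper import gosper_sum

n, k = symbols('n k', integer=True, nonnegative=True)

closed_form = gosper_sum(k * factorial(k), (k, 0, n))
print(closed_form)             # expected to be equivalent to factorial(n + 1) - 1
print(closed_form.subs(n, 5))  # numeric spot check: 1*1! + 2*2! + ... + 5*5! = 719
```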
See also
Table of Newtonian series
External links
The book "A = B", this book is freely downloadable from the internet.
Special-functions examples at exampleproblems.com
Factorial and binomial topics
Hypergeometric functions
Mathematical identities | Hypergeometric identity | Mathematics | 388 |
25,333,325 | https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20January%2016%2C%202037 | A partial solar eclipse will occur at the Moon's descending node of orbit on Friday, January 16, 2037, with a magnitude of 0.7049. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A partial solar eclipse occurs in the polar regions of the Earth when the center of the Moon's shadow misses the Earth.
A partial eclipse will be visible for parts of Europe, North Africa, the Middle East, and Central Asia.
Images
Animated path
Eclipse details
Shown below are two tables displaying details about this particular solar eclipse. The first table outlines the times at which the Moon's penumbra or umbra attains specific parameters, and the second table describes various other parameters pertaining to this eclipse.
Eclipse season
This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight.
Related eclipses
Eclipses in 2037
A partial solar eclipse on January 16.
A total lunar eclipse on January 31.
A total solar eclipse on July 13.
A partial lunar eclipse on July 27.
Metonic
Preceded by: Solar eclipse of March 30, 2033
Followed by: Solar eclipse of November 4, 2040
Tzolkinex
Preceded by: Solar eclipse of December 5, 2029
Followed by: Solar eclipse of February 28, 2044
Half-Saros
Preceded by: Lunar eclipse of January 12, 2028
Followed by: Lunar eclipse of January 22, 2046
Tritos
Preceded by: Solar eclipse of February 17, 2026
Followed by: Solar eclipse of December 16, 2047
Solar Saros 122
Preceded by: Solar eclipse of January 6, 2019
Followed by: Solar eclipse of January 27, 2055
Inex
Preceded by: Solar eclipse of February 7, 2008
Followed by: Solar eclipse of December 27, 2065
Triad
Preceded by: Solar eclipse of March 18, 1950
Followed by: Solar eclipse of November 18, 2123
Solar eclipses of 2036–2039
Saros 122
Metonic series
Tritos series
Inex series
References
External links
NASA graphics
2037 in science | Solar eclipse of January 16, 2037 | Astronomy | 509 |
5,235,067 | https://en.wikipedia.org/wiki/Coherent%20risk%20measure | In the fields of actuarial science and financial economics there are a number of ways that risk can be defined; to clarify the concept theoreticians have described a number of properties that a risk measure might or might not have. A coherent risk measure is a function that satisfies properties of monotonicity, sub-additivity, homogeneity, and translational invariance.
Properties
Consider a random outcome viewed as an element of a linear space of measurable functions, defined on an appropriate probability space. A functional is said to be a coherent risk measure if it satisfies the following properties:
Normalized
That is, the risk when holding no assets is zero.
Monotonicity
That is, if one portfolio always has better values than another under almost all scenarios, then the risk of the first should be less than the risk of the second. For example, if one position is an in-the-money call option (or otherwise) on a stock, and the other is an in-the-money call option with a lower strike price.
In financial risk management, monotonicity implies a portfolio with greater future returns has less risk.
Sub-additivity
Indeed, the risk of two portfolios together cannot get any worse than adding the two risks separately: this is the diversification principle.
In financial risk management, sub-additivity implies diversification is beneficial. The sub-additivity principle is sometimes also seen as problematic.
Positive homogeneity
Loosely speaking, if you double your portfolio then you double your risk.
In financial risk management, positive homogeneity implies the risk of a position is proportional to its size.
Translation invariance
If is a deterministic portfolio with guaranteed return and then
The portfolio is just adding cash to your portfolio . In particular, if then .
In financial risk management, translation invariance implies that the addition of a sure amount of capital reduces the risk by the same amount.
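The formulas for these axioms did not survive in the text above. For reference, their standard textbook statements, in the common convention where a larger outcome means a better payoff (which matches the verbal descriptions given here), are as follows; the symbols are standard notation, not recovered from this article's original markup.

```latex
% Standard axioms of a coherent risk measure \rho on outcomes X, Y
\rho(0) = 0                                                  % normalization
X \le Y \ \text{a.s.} \;\Rightarrow\; \rho(X) \ge \rho(Y)    % monotonicity
\rho(X + Y) \le \rho(X) + \rho(Y)                            % sub-additivity
\rho(\alpha X) = \alpha\,\rho(X), \quad \alpha \ge 0         % positive homogeneity
\rho(X + a) = \rho(X) - a, \quad a \in \mathbb{R}            % translation invariance
```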
Convex risk measures
The notion of coherence has been subsequently relaxed. Indeed, the notions of Sub-additivity and Positive Homogeneity can be replaced by the notion of convexity:
Convexity
Examples of risk measure
Value at risk
It is well known that value at risk is not a coherent risk measure as it does not respect the sub-additivity property. An immediate consequence is that value at risk might discourage diversification.
Value at risk is, however, coherent, under the assumption of elliptically distributed losses (e.g. normally distributed) when the portfolio value is a linear function of the asset prices. However, in this case the value at risk becomes equivalent to a mean-variance approach where the risk of a portfolio is measured by the variance of the portfolio's return.
The Wang transform function (distortion function) for the value at risk is . The non-concavity of proves the non-coherence of this risk measure.
Illustration
As a simple example to demonstrate the non-coherence of value at risk, consider the 95% VaR over the next year of a portfolio of two defaultable zero-coupon bonds that mature in one year's time, denominated in our numeraire currency.
Assume the following:
The current yield on the two bonds is 0%
The two bonds are from different issuers
Each bond has a 4% probability of defaulting over the next year
The event of default in either bond is independent of the other
Upon default the bonds have a recovery rate of 30%
Under these conditions the 95% VaR for holding either of the bonds is 0 since the probability of default is less than 5%. However if we held a portfolio that consisted of 50% of each bond by value then the 95% VaR is 35% (= 0.5*0.7 + 0.5*0) since the probability of at least one of the bonds defaulting is 7.84% (= 1 - 0.96*0.96) which exceeds 5%. This violates the sub-additivity property showing that VaR is not a coherent risk measure.
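The numbers above can be verified by brute-force enumeration of the default scenarios. The sketch below uses hypothetical variable names and the discrete-VaR convention (smallest loss whose cumulative probability reaches the confidence level), which matches the usage in the example:

```python
# Minimal numerical check of the two-bond example: 95% VaR fails sub-additivity.
from itertools import product

p_default = 0.04            # default probability of each bond
loss_given_default = 0.70   # 1 - recovery rate of 30%
alpha = 0.95

def var(outcomes, level):
    """Smallest loss l such that P(loss <= l) >= level (discrete VaR)."""
    cumulative = 0.0
    for loss, prob in sorted(outcomes):   # (loss, probability) pairs, by loss
        cumulative += prob
        if cumulative >= level:
            return loss
    return max(loss for loss, _ in outcomes)

# Single bond: loses 70% with probability 4%, nothing otherwise.
single_bond = [(0.0, 1 - p_default), (loss_given_default, p_default)]

# 50/50 portfolio of two independent bonds.
portfolio = []
for d1, d2 in product([0, 1], repeat=2):
    prob = (p_default if d1 else 1 - p_default) * (p_default if d2 else 1 - p_default)
    loss = 0.5 * loss_given_default * d1 + 0.5 * loss_given_default * d2
    portfolio.append((loss, prob))

print(var(single_bond, alpha))  # 0.0  -> VaR of each bond alone is zero
print(var(portfolio, alpha))    # 0.35 -> exceeds 0 + 0, violating sub-additivity
```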
Average value at risk
The average value at risk (sometimes called expected shortfall or conditional value-at-risk) is a coherent risk measure, even though it is derived from value at risk, which is not. The domain can be extended to more general Orlicz hearts from the more typical Lp spaces.
Entropic value at risk
The entropic value at risk is a coherent risk measure.
Tail value at risk
The tail value at risk (or tail conditional expectation) is a coherent risk measure only when the underlying distribution is continuous.
The Wang transform function (distortion function) for the tail value at risk is . The concavity of proves the coherence of this risk measure in the case of continuous distribution.
Proportional Hazard (PH) risk measure
The PH risk measure (or Proportional Hazard Risk measure) transforms the hazard rates using a coefficient .
The Wang transform function (distortion function) for the PH risk measure is . The concavity of if proves the coherence of this risk measure.
g-Entropic risk measures
g-entropic risk measures are a class of information-theoretic coherent risk measures that involve some important cases such as CVaR and EVaR.
The Wang risk measure
The Wang risk measure is defined by the following Wang transform function (distortion function) . The coherence of this risk measure is a consequence of the concavity of .
Entropic risk measure
The entropic risk measure is a convex risk measure which is not coherent. It is related to the exponential utility.
Superhedging price
The superhedging price is a coherent risk measure.
Set-valued
In a situation with -valued portfolios such that risk can be measured in of the assets, then a set of portfolios is the proper way to depict risk. Set-valued risk measures are useful for markets with transaction costs.
Properties
A set-valued coherent risk measure is a function , where and where is a constant solvency cone and is the set of portfolios of the reference assets. must have the following properties:
Normalized
Translative in M
Monotone
Sublinear
General framework of Wang transform
Wang transform of the cumulative distribution function
A Wang transform of the cumulative distribution function is an increasing function where and . This function is called distortion function or Wang transform function.
The dual distortion function is .
Given a probability space , then for any random variable and any distortion function we can define a new probability measure such that for any it follows that
Actuarial premium principle
For any increasing concave Wang transform function, we could define a corresponding premium principle :
Coherent risk measure
A coherent risk measure could be defined by a Wang transform of the cumulative distribution function if and only if is concave.
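The distortion functions referred to in the sections above did not survive extraction. Their commonly cited textbook forms, stated here as standard references rather than as text recovered from this article, are:

```latex
% Common distortion functions g: [0,1] -> [0,1] (textbook forms)
g_{\mathrm{VaR},\alpha}(x) = \mathbf{1}\{x \ge 1 - \alpha\}                 % step function, not concave
g_{\mathrm{TVaR},\alpha}(x) = \min\!\left(\tfrac{x}{1-\alpha},\, 1\right)   % concave
g_{\mathrm{PH}}(x)  = x^{1/\gamma}, \quad \gamma \ge 1                      % proportional hazard, concave
g_{\mathrm{Wang}}(x) = \Phi\!\left(\Phi^{-1}(x) + \lambda\right), \quad \lambda \ge 0   % Wang transform, concave
```

The non-concavity of the VaR step function and the concavity of the other three are consistent with the coherence statements made above.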
Set-valued convex risk measure
If, instead of the sublinear property, R is convex, then R is a set-valued convex risk measure.
Dual representation
A lower semi-continuous convex risk measure can be represented as
such that is a penalty function and is the set of probability measures absolutely continuous with respect to P (the "real world" probability measure), i.e. . The dual characterization is tied to spaces, Orlicz hearts, and their dual spaces.
A lower semi-continuous risk measure is coherent if and only if it can be represented as
such that .
See also
Risk metric - the abstract concept that a risk measure quantifies
RiskMetrics - a model for risk management
Spectral risk measure - a subset of coherent risk measures
Distortion risk measure
Conditional value-at-risk
Entropic value at risk
Financial risk
References
Actuarial science
Financial risk modeling | Coherent risk measure | Mathematics | 1,542 |
105,709 | https://en.wikipedia.org/wiki/Aerial%20tramway | An aerial tramway, aerial tram, sky tram, aerial cablecar, aerial cableway, telepherique, or seilbahn is a type of aerial lift which uses one or two stationary cables for support, with a third moving cable providing propulsion. With this form of lift, the grip of an aerial tramway cabin is fixed onto the propulsion cable and cannot be decoupled from it during operation. Aerial tramways usually provide lower line capacities and longer wait times than gondola lifts.
Terminology
Cable car is the usual term in British English, where tramway generally refers to a railed street tramway. In American English, cable car may additionally refer to a cable-pulled street tramway with detachable vehicles (e.g., San Francisco's cable cars). Consequently, careful phrasing is necessary to prevent confusion.
It is also sometimes called a ropeway or even incorrectly referred to as a gondola lift. A gondola lift has cabins suspended from a continuously circulating cable whereas aerial trams simply shuttle back and forth on cables. In Japan, the two are considered as the same category of vehicle and called ropeway, while the term cable car refers to both ground-level cable cars and funiculars. An aerial railway where the vehicles are suspended from a fixed track as opposed to a cable is known as a suspension railway.
Overview
An aerial tramway consists of one or two fixed cables (called track cables), one loop of cable (called a haulage rope), and one or two passenger or cargo cabins. The fixed cables provide support for the cabins while the haulage rope, by means of a grip, is solidly connected to the truck (the wheel set that rolls on the track cables). An electric motor drives the haulage rope which provides propulsion. Aerial tramways are constructed as reversible systems; vehicles shuttling back and forth between two end terminals and propelled by a cable loop which stops and reverses direction when the cabins arrive at the end stations. Aerial tramways differ from gondola lifts in that gondola lifts are considered continuous systems (cabins attached onto a circulating haul cable that moves continuously).
Two-car tramways use a jig-back system: a large electric motor is located at the bottom of the tramway so that it effectively pulls one cabin down, using that cabin's weight to help pull the other cabin up. A similar system of cables is used in a funicular railway. The two passenger or cargo cabins, which carry from 4 to over 150 people, are situated at opposite ends of the loops of cable. Thus, while one is coming up, the other is going down the mountain, and they pass each other midway on the cable span.
Some aerial trams have only one cabin, which lends itself better to systems with small elevation changes along the cable run.
History
The first design of an aerial lift was by Croatian polymath Fausto Veranzio, and the first operational aerial tram was built in 1644 by Adam Wybe in Gdańsk, Poland. It was moved by horses and used to move soil over the river to build defences. It is called the first known cable lift in European history and precedes the invention of steel cables. It is not known how long this lift was used. Germany installed the second cable lift 230 years later, now using iron wire cable.
In mining
Aerial tramways are sometimes used in mountainous regions to carry ore from a mine located high on the mountain to an ore mill located at a lower elevation. Ore tramways were common in the early 20th century at the mines in North and South America. One can still be seen in the San Juan Mountains of the US state of Colorado. Another famous use of aerial tramways was at the Kennecott Copper mine in Wrangell-St. Elias National Park, Alaska.
Other firms entered the mining tramway business, including Otto, Leschen, Breco Ropeways Ltd., Ceretti and Tanfani, and Riblet. A major British contributor was Bullivant, which became a constituent of British Ropes in 1924.
Moving people
At the beginning of the 20th century, the rise of the middle class and the leisure industry allowed for investment in sight-seeing transport. Prior to 1893, a combined goods and passenger carrying cableway was installed at Gibraltar. Initially, its passengers were military personnel. An 1893 industry publication said of a two-mile system in Hong Kong that it "is the only wire tramway which has been erected exclusively for the carriage of individuals" (albeit workmen). After the pioneering public-transport cable car on Mount Ulia (San Sebastián, Spain), built in 1907 by Leonardo Torres Quevedo, and the Wetterhorn Elevator (Grindelwald, Switzerland) in 1908, others followed to the tops of high peaks in the Alps of Austria, Germany and Switzerland. They were much less expensive to build than the earlier rack railways.
One of the first aerial trams was at Chamonix, while others in Switzerland and at Garmisch soon followed. From this, it was a natural transposition to build ski lifts and chairlifts. The first cable car in North America was at Cannon Mountain in Franconia, New Hampshire, in 1938.
Many aerial tramways were built by Von Roll Ltd. of Switzerland, later acquired by Austrian lift manufacturer Doppelmayr. Other German, Swiss, and Austrian firms played an important role in the cable car business: Bleichert, Heckel, Pohlig, PHB (Pohlig-Heckel-Bleichert), Garaventa and Waagner-Biró. Now there are three groups dominating the world market: Doppelmayr Garaventa Group, Leitner Group, and Poma, the last two being owned by one person.
Some aerial tramways have their own propulsion, such as the Lasso Mule or the Josef Mountain Aerial Tramway near Merano, Italy.
Urban transport
While typically used for ski resorts, aerial tramways have come into use in the urban environment. The 1976 Roosevelt Island Tramway in New York City, the 2022 Rakavlit cable car in Haifa, Israel and the 2006 Portland Aerial Tram are examples where this technology has been successfully adapted for public transport.
Telpherage
The telpherage concept was first publicised in 1883 and several experimental lines were constructed. It was designed to compete not with railways, but with horses and carts.
The first commercial telpherage line was in Glynde, which is in Sussex, England. It was built to connect a newly opened clay pit to the local railway station and opened in 1885.
Double deckers
There are aerial tramways with double deck cabins. The Vanoise Express cable car carries 200 people in each cabin at a height of over the Ponturin gorge in France. The Shinhotaka Ropeway carries 121 people in each cabin at Mount Hotaka in Japan. The CabriO cable car to the summit of the Stanserhorn in Switzerland carries 60 persons, with the upper floor accommodating 30 people in the open air.
Records
First – Adam Wybe's construction in Gdańsk (1644). It was the first rope railway with many supports and the biggest built until the end of the 19th century.
Longest (at time of building) and years operated:
1906–1927 Chilecito – Mina La Mejicana, Argentina ( and branch).
1925–1950 Dúrcal – Motril, Spain ( and branch).
1937–1941 Asmara – Massawa, Eritrea ( and branch), technically a Funifor.
1943–1987 Kristineberg-Boliden, Sweden. still working as the Norsjö ropeway.
Second longest:
1959–1986 Moanda – Mbinda, Gabon – Republic of Congo.
Longest over water:
1906 – the same century; Thio, New Caledonia. ship loading.
1941–2006 Forsby-Köping limestone cableway, Sweden. crossing of Hjälmaren strait. 42 km system.
2007 Nha Trang City – Vinpearl Land, Hon Tre Island, Vietnam. Total length 3.3 km.
Longest currently operational:
Norsjö aerial tramway Mensträsk-Bjurfors in Norsjö, Sweden. Passenger tramway, a section of the former 96-km Kristineberg-Boliden industrial ropeway.
12.5 km (7.8 mi) Mérida cable car Mérida, Venezuela.
Grindelwald–Männlichen gondola cableway, Switzerland
Wings of Tatev, Armenia, the world's longest reversible cable car line of one section.
Medeu-Shimbulak tramway near Almaty, Kazakhstan.
Sandia Peak Tramway, reversible tramway in Albuquerque, New Mexico.
Highest lift:
from at Chilecito – Mina La Mejicana, Argentina (drops back to at upper terminal).
Highest lift currently operational:
3188 m (10,459 ft) from 1,577 MSL to 4,765 MSL (5,174 FAMSL to 15,633 FAMSL) Mérida cable car, Venezuela.
Highest station:
Greater than 1935-19?? Aucanquilcha, Chile.
Lowest station:
below sea level Masada cableway, Israel.
Tallest support tower:
Cat Hai – Phu Long cable car, Vietnam.
As mass transit:
The Roosevelt Island Tramway in New York City was the first aerial tramway in North America used by commuters as a mode of mass transit (See Transportation in New York City). Passengers pay with the same farecard used for the New York City Subway.
The Portland Aerial Tram in Portland, Oregon, was opened in January 2007 and became the second public transportation aerial tramway in North America.
In Medellin, Colombia, both the Metro and the recent Metrocable aerial tramway addition can be used while paying a single fare.
Largest rotating cars:
Palm Springs Aerial Tramway in Palm Springs, California.
List of accidents
Despite the introduction of various safety measures (back-up power generators, evacuation plans, etc.) there have been several serious incidents on aerial tramways, some fatal.
August 29, 1961: A military plane split the hauling cable of the Vallée Blanche Aerial Tramway on the Aiguille du Midi in the Mont Blanc massif: six people killed.
July 9, 1974: Ulriksbanen is an aerial tramway in Bergen, Norway, operated by a tow rope, which hauls it, and a carrying rope. On July 9, 1974, as the carriage reached its destination at the top station and just as the carriage operator was about to open the doors, the tow rope broke. The carriage operator was thrown into the back of the vehicle, preventing him from reaching the emergency brake. The carriage began sliding back down the still-intact carrying rope, gathering speed quickly and approaching the first vertical mast about 70 metres away. Because the tow rope was broken, it was no longer taut at the point where it crossed over the mast; as the carriage crossed the mast, the broken tow rope jammed up and caused the carriage to jump off the carrying rope and free-fall straight down towards the ground 15 metres below. The carriage crashed to the ground on a downslope and careened down the mountainside a further 30 metres before it was crushed against some boulders, finally coming to a stop. Four of the eight occupants were killed.
March 9, 1976: In the Italian Dolomites at Cavalese, a cab fell after a rope broke, killing 43. (See 1976 Cavalese cable car crash)
April 15, 1978: In a storm, two carrying ropes of the Squaw Valley Aerial Tramway in California fell from the aerial tramway support tower. One of the ropes partly destroyed the cabin. Four were killed, 32 injured.
June 1, 1990: Nineteen were killed and fifteen injured after a hauling rope broke in the 1990 Tbilisi Cable car accident
February 3, 1998: U.S. Marine Corps EA-6B Prowler jets severed the cable of an aerial ropeway in Cavalese, Italy, killing 20 people. (See Cavalese cable car disaster (1998))
July 1, 1999: Saint-Étienne-en-Dévoluy, France. An aerial tramway car detached from the cable it was traveling on and fell to the valley floor, killing all 20 occupants. The majority were employees and contractors of an international astronomical observatory run by the Institut de Radioastronomie Millémétrique. (See Saint-Étienne-en-Dévoluy cable car disaster)
October 19, 2003: Four were killed and 11 injured when three cars slipped off the cable of the Darjeeling Ropeway.
April 2, 2004: In Yerevan, Armenia on an urban cable car one of the two cabins derailed from the steel track cable and fell to the ground killing five, including two Iranians, and injuring 11 others. The second cabin slammed onto the lower station injuring three people.
October 9, 2004: Crash of a cabin of the Grünberg aerial tramway in Gmunden, Austria. Many injuries.
December 31, 2012: The Alyeska Resort Aerial Tramway was blown sideways while operating in high winds and was impaled on the tower guide, severely damaging the contacting cabin. Only minor injuries were incurred.
December 4, 2018: An exterior panel of the Portland Aerial Tram dropped at least 100 feet (30 m) and struck a pedestrian walking below.
May 23, 2021: 14 people were killed when a cable failed 300 m from the top of the Mottarone mountain.
October 21, 2021: One person died after a cable car cabin became detached from its cable at the Ještěd mountain in Liberec, Czech Republic.
April 12, 2024: One person died and seven people were injured after a cable car cabin hit a pole and burst open in Antalya, Turkey.
Gallery
Cableways in fiction
"Ascension"
Blind Fury
Get Carter – coal spoil conveyor Blackhall Beach near Blackhall Colliery
Electric City (web series)
The Haunting of Tram Car 015 (P. Djèlí Clark)
Hoodwinked!
Kongfrontation
Moonraker (film)
Nighthawks (1981 film)
Night Train to Munich
Nitrome's Skywire games
On Her Majesty's Secret Service (film)
Where Eagles Dare
Zootopia
Kiff (TV series)
See also
Aerial lift
Aerial lift pylon
Blondin (quarry equipment)
Cable car
Cable ferry
Cable transport
Chairlift
COMILOG Cableway in Moanda
Funitel
Funicular
Gondola lift
List of aerial tramways
List of aerial lift manufacturers
List of spans
Riblet Tramway Company
Roosevelt Island Tramway
Ropeway
Skiing
Transport
Transporter bridge
Zip-line
References
External links
Aerial Tramways (worldwide) Lift-Database
Tatever ropeway – the aerial ropeway to the natural and historic treasures of Syunik.
Aerial lifts
Croatian inventions
Scottish inventions
Ski lift types
Vertical transport devices | Aerial tramway | Technology | 3,038 |
6,183,688 | https://en.wikipedia.org/wiki/Demand%20destruction | Demand destruction is a permanent downward shift on the demand curve in the direction of lower demand of a commodity, such as energy products, induced by a prolonged period of high prices or constrained supply. In the context of the oil industry, "demand" generally refers to the quantity consumed (see for example the output of any major industry organization such as the International Energy Agency), rather than any measure of a demand curve as used in mainstream economics. In economics, demand destruction refers to a permanent or sustained decline in the demand for a certain good in response to persistent high prices or limited supply. Because of persistent high prices, consumers may decide that it is not worth purchasing as much of that good, or seek out alternatives as substitutes.
Usage
The term came to some prominence in tandem with the peak oil theory, where demand destruction is the reduction of demand for oil and oil-derived products. The term is used by Matthew Simmons, Mike Ruppert and other prominent proponents of the theory. It is also used in other resource industries, such as mining.
Examples
A familiar illustration of demand destruction is the effect of high gasoline prices on automobile sales. It has been widely observed that when gasoline prices are high enough, consumers tend to begin buying smaller and more efficient cars, gradually reducing per-capita demand for gasoline. If the price rise is caused by a temporary lack of supply and the price subsequently goes back down as supply returns to normal, the quantity of gasoline consumed does not immediately return to its previous level, since the smaller cars that were sold remain in the fleet for some time. Demand has thereby been "destroyed", shifting the demand curve.
The expectation of future prices and their long-term maintenance at non-economic levels for a certain quantity of consumption also affects vehicle decisions. If the price of fuel is so high that marginal consumers cannot afford the same mileage without switching to a more efficient car, then they are forced to sell the less efficient one. An increase of the quantity of such vehicles causes the used market value to fall, which then increases the depreciation expected of a new vehicle, which increases the total cost of ownership of such vehicles, making them less popular.
The coal reserves in some regions are regarded as a stranded asset that may be permanently left in the ground. Competition from low priced natural gas, reduced demand for coal due to emission restrictions and uneconomic export situations each play a part. Environmental legislation that prevents fracking strands potential natural gas reserves.
During the 1000% price increase of natural gas in Europe in 2021-2022, up to 70% of Europe's nitrogen fertilizer production was shut down on various occasions, as gas is the main part of the production cost.
See also
1970s energy crisis
Oil price increases since 2003
Population growth
Stagflation
Supply and demand
References
Demand
Energy economics
Peak resource production
Scarcity | Demand destruction | Environmental_science | 579 |
26,339,274 | https://en.wikipedia.org/wiki/Sexual%20narcissism | Sexual narcissism has been described as an egocentric pattern of sexual behavior that involves an inflated sense of sexual ability and sexual entitlement. In addition, sexual narcissism is the erotic preoccupation with oneself as a superb lover through a desire to merge sexually with a mirror image of oneself. Sexual narcissism is an intimacy dysfunction in which sexual exploits are pursued, generally in the form of extramarital affairs, to overcompensate for low self-esteem and an inability to experience true intimacy. This behavioral pattern is believed to be more common in men than in women and has been tied to domestic violence in men and sexual coercion in couples. Hurlbert argues that sex is a natural biological given and therefore cannot be deemed as an addiction. He and his colleagues assert that any sexual addiction is nothing more than a misnomer for what is actually sexual narcissism or sexual compulsivity.
References
Human sexuality
Narcissism | Sexual narcissism | Biology | 202 |
16,693,588 | https://en.wikipedia.org/wiki/Reticulation%20%28single-access%20key%29 | In biology, a reticulation of a single-access identification key connects different branches of the identification tree to improve error tolerance and identification success. In a reticulated key, multiple paths lead to the same result; the tree data structure thus changes from a simple tree to a directed acyclic graph.
Two forms of reticulation can be distinguished: Terminal reticulation and inner reticulation.
In a terminal reticulation a single taxon or next-level-key is keyed out in several locations in the key. This type of reticulation is normally compatible with any printable presentation format of identification keys and normally does not require special precautions in software used for branching keys.
In an inner reticulation a couplet with further leads can be reached through more than one path. Depending on the software or printable presentation format, this may be more challenging. For the linked (= "parallel" or "bracketed") format, where each lead points to a numbered couplet, inner reticulations present no special challenge. However, for the nested (= "indented") presentation format, where all following couplets immediately follow their lead, a cross-connection to a different subtree in the key requires a special mechanism.
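To illustrate the data-structure point, the sketch below represents a small reticulated key as a directed acyclic graph; the taxa, character states and couplet numbers are entirely hypothetical. Couplet 4 is reachable from two different couplets (an inner reticulation), and one taxon is keyed out in two places (a terminal reticulation).

```python
# Hypothetical single-access key stored as a directed acyclic graph.
# Each couplet maps a lead (an observable character state) either to a
# taxon name (a leaf) or to the number of the next couplet.
key = {
    1: {"leaves compound": 2, "leaves simple": 3},
    2: {"flowers yellow": "Taxon A", "flowers white": 4},
    3: {"stem hairy": "Taxon A",      # terminal reticulation: Taxon A keyed out again
        "stem smooth": 4},            # inner reticulation: couplet 4 is shared
    4: {"fruit a berry": "Taxon B", "fruit a capsule": "Taxon C"},
}

def identify(key, answers, start=1):
    """Follow one path through the key; `answers` maps couplet -> chosen lead."""
    node = start
    while isinstance(node, int):      # keep walking until a taxon (string) is reached
        node = key[node][answers[node]]
    return node

# Two different paths both reach couplet 4 and end at the same taxon.
print(identify(key, {1: "leaves compound", 2: "flowers white", 4: "fruit a berry"}))
print(identify(key, {1: "leaves simple", 3: "stem smooth", 4: "fruit a berry"}))
# both print: Taxon B
```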
Reticulations generally improve the usability of a key, but may also diminish the overall probability of correct identification averaged over all taxa.
References
Taxonomy (biology) | Reticulation (single-access key) | Biology | 294 |
40,147,758 | https://en.wikipedia.org/wiki/Collider%20%28film%29 | Collider (sometimes referred to as Collider World) is a 2013 Irish-Portuguese co-produced drama/science fiction film distributed by Entertainment. The film acts as the core of a transmedia project developed for various platforms.
The film focuses on the possible danger posed by research conducted at CERN. It premiered in London on October 16, 2013, and received an Emmy nomination for Best Digital Production in 2014.
Plot
The Large Hadron Collider research at CERN was on the verge of a breakthrough in regards to black holes. Scientists believed that they would be able to control the micro black holes once created but Peter Ansay, a young genius in quantum physics thought differently. His report brought up astounding conclusions that the current research on the radiation phenomenon was inaccurate and that if they were actually able to create these black holes, the level of radiation would be so significantly lower than anticipated that they would literally implode in on themselves, taking all surrounding matter with them.
Depending on the exact level of radiation, it could be catastrophic. Peter's report was rejected by CERN and his credibility as a scientist was destroyed. On 23 September 2012, Peter breaks into the CERN laboratories to gain access to the Collider and sabotage it, preventing any future experiments. But something goes wrong in a way that Peter could not have predicted, and he is transported by a wormhole to 2018, to a world destroyed by natural disasters and at war with the Unknown. He wakes up in a strangely familiar place, convinced he has somehow been there before. He is now in the future, 2018, and "trapped" inside what seems to be a hotel.
Outside, the atmosphere is unlivable and anyone who ventures out disappears in the shadows, taken by the Unknown. What Peter soon realises is that he is not alone. Five total strangers from different times and places are also in the hotel with him, but unlike him they know nothing about the future or how they got there. Peter keeps getting flashes of memory and a sense of déjà vu linked to the future. If he's right then he has just 36 hours to figure out a way to get to CERN, reverse the wormhole and prevent the apocalypse because when the clock runs out so does Earth's existence.
Cast
Iain Robertson as Peter Ansay
Bella Heesom as Alisha Tate
Marco Costa as Carlos Vera
Lucy Cudden as Fiona Murphy
Jamie Maclachlan as Luke Spencer
as Lucia de Souza
Tie-in material and media
Prior to the film's development, Collider started as an interactive multi-platform project distributed by Entertainment that combines a comic book series, a graphic novel, two mobile video games, webisodes, and an online presence.
Comics
A six-issue comic book series and a graphic novel were written by Mike Garley and illustrated by R H Stewart, Gareth Gowran, Jack Tempest, Will Pickering and Martin Simmonds. The comics are available on iTunes and Google Play.
In addition to the comics, a graphic novel is available on Amazon, the Apple iBookstore, and on paperback edition in major comic stores around Europe.
Games
Two mobile games have been available for iOS and Android platforms.
Collider Quest is the first game in the franchise. This game is about collecting items, solving puzzles and testing memory through multiple levels with increasing difficulty.
The game was released in the summer of 2012 and is available in the iOS App Store and on Google Play; it had also been available on Facebook.
Collider Code Breaker is the second game in the franchise and is available exclusively on iOS devices. The game follows Luke Spencer as he disarms bombs by cracking their codes.
Digital series
An 8-episode digital series that acts as prequel to the film was developed in January 2012 for YouTube and SAPO, gaining more than 1 million viewers. The series was subsequently broadcast on Portuguese television channel SIC Radical.
The series was shot in Lisbon and was written by Catriona Scott and Nuno Bernardo, with Bernardo being the director and Iain Robertson reprising his role as Peter Ansay from the film.
Reception
Rory Cashin of Entertainment.ie gave the film a 1 out of 5.
References
External links
Collider World (official website)
2013 science fiction films
2013 films
Interactive films
Irish science fiction films
Multimedia works
English-language Portuguese films
Portuguese science fiction films
2010s films about time travel
Transmedia storytelling
Apocalyptic films
Films about wormholes
CERN
2010s English-language films
English-language science fiction films | Collider (film) | Technology | 907 |
1,228,679 | https://en.wikipedia.org/wiki/Gyromagnetic%20ratio | In physics, the gyromagnetic ratio (also sometimes known as the magnetogyric ratio in other disciplines) of a particle or system is the ratio of its magnetic moment to its angular momentum, and it is often denoted by the symbol , gamma. Its SI unit is the radian per second per tesla (rad⋅s−1⋅T−1) or, equivalently, the coulomb per kilogram (C⋅kg−1).
The term "gyromagnetic ratio" is often used as a synonym for a different but closely related quantity, the -factor. The -factor only differs from the gyromagnetic ratio in being dimensionless.
For a classical rotating body
Consider a nonconductive charged body rotating about an axis of symmetry. According to the laws of classical physics, it has both a magnetic dipole moment due to the movement of charge and an angular momentum due to the movement of mass arising from its rotation. It can be shown that as long as its charge and mass density and flow are distributed identically and rotationally symmetric, its gyromagnetic ratio is
where is its charge and is its mass.
The derivation of this relation is as follows. It suffices to demonstrate this for an infinitesimally narrow circular ring within the body, as the general result then follows from an integration. Suppose the ring has radius , area , mass , charge , and angular momentum . Then the magnitude of the magnetic dipole moment is
For an isolated electron
An isolated electron has an angular momentum and a magnetic moment resulting from its spin. While an electron's spin is sometimes visualized as a literal rotation about an axis, it cannot be attributed to mass distributed identically to the charge. The above classical relation does not hold, giving the wrong result by the absolute value of the electron's -factor, which is denoted :
where is the Bohr magneton.
The gyromagnetic ratio due to electron spin is twice that due to the orbiting of an electron.
In the framework of relativistic quantum mechanics,
where is the fine-structure constant. Here the small corrections to the relativistic result come from the quantum field theory calculations of the anomalous magnetic dipole moment. The electron -factor is known to twelve decimal places by measuring the electron magnetic moment in a one-electron cyclotron:
The electron gyromagnetic ratio is
The electron -factor and are in excellent agreement with theory; see Precision tests of QED for details.
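As a numerical illustration, the classical ratio q/(2m) can be compared with the electron's spin gyromagnetic ratio g·μB/ħ. The constants below are standard CODATA-style values assumed for this sketch, not figures quoted in the text above.

```python
# Hedged numerical sketch: classical q/(2m) versus the electron spin value g*mu_B/hbar.
elementary_charge = 1.602176634e-19      # C
electron_mass     = 9.1093837015e-31     # kg
hbar              = 1.054571817e-34      # J*s
bohr_magneton     = 9.2740100783e-24     # J/T
g_electron        = 2.00231930436        # |g-factor| of the electron

gamma_classical = elementary_charge / (2 * electron_mass)  # ~8.794e10 rad/(s*T)
gamma_spin      = g_electron * bohr_magneton / hbar        # ~1.761e11 rad/(s*T)

# Since mu_B = e*hbar/(2*m_e), the ratio of the two is exactly the g-factor (~2),
# consistent with the statement above that the spin value is about twice the orbital one.
print(gamma_classical, gamma_spin, gamma_spin / gamma_classical)
```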
Gyromagnetic factor not as a consequence of relativity
Since a gyromagnetic factor equal to 2 follows from Dirac's equation, it is a frequent misconception to think that a -factor 2 is a consequence of relativity; it is not. The factor 2 can be obtained from the linearization of both the Schrödinger equation and the relativistic Klein–Gordon equation (which leads to Dirac's). In both cases a 4-spinor is obtained and for both linearizations the -factor is found to be equal to 2. Therefore, the factor 2 is a consequence of the minimal coupling and of the fact of having the same order of derivatives for space and time.
Physical spin- particles which cannot be described by the linear gauged Dirac equation satisfy the gauged Klein–Gordon equation extended by the term according to,
Here, and stand for the Lorentz group generators in the Dirac space, and the electromagnetic tensor respectively, while is the electromagnetic four-potential. An example for such a particle is the spin companion to spin in the representation space of the Lorentz group. This particle has been shown to be characterized by and consequently to behave as a truly quadratic fermion.
For a nucleus
Protons, neutrons, and many nuclei carry nuclear spin, which gives rise to a gyromagnetic ratio as above. The ratio is conventionally written in terms of the proton mass and charge, even for neutrons and for other nuclei, for the sake of simplicity and consistency. The formula is:
where is the nuclear magneton, and is the -factor of the nucleon or nucleus in question. The ratio equal to , is 7.622593285(47) MHz/T.
The gyromagnetic ratio of a nucleus plays a role in nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI). These procedures rely on the fact that bulk magnetization due to nuclear spins precess in a magnetic field at a rate called the Larmor frequency, which is simply the product of the gyromagnetic ratio with the magnetic field strength. With this phenomenon, the sign of determines the sense (clockwise vs counterclockwise) of precession.
Most common nuclei such as 1H and 13C have positive gyromagnetic ratios. Approximate values for some common nuclei are given in the table below.
Larmor precession
Any free system with a constant gyromagnetic ratio, such as a rigid system of charges, a nucleus, or an electron, when placed in an external magnetic field (measured in teslas) that is not aligned with its magnetic moment, will precess at a frequency (measured in hertz) proportional to the external field:
For this reason, values of , in units of hertz per tesla (Hz/T), are often quoted instead of .
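As a worked example of this relation for the proton, the workhorse nucleus of NMR and MRI: using the nuclear-magneton ratio quoted above (7.622593285 MHz/T) and an assumed proton g-factor of about 5.5857 (a CODATA-style value, not given in this article), the Larmor frequency f = γB/(2π) comes out at the familiar values.

```python
# Hedged worked example: Larmor precession frequency f = gamma * B / (2*pi).
g_proton = 5.5856946893                 # proton g-factor (assumed CODATA-style value)
mu_N_over_h_MHz_per_T = 7.622593285     # MHz/T, as quoted in the text above

gamma_over_2pi = g_proton * mu_N_over_h_MHz_per_T   # ~42.577 MHz/T for 1H

for B in (1.0, 1.5, 3.0):               # typical NMR/MRI field strengths in tesla
    print(B, gamma_over_2pi * B)        # ~42.6, ~63.9, ~127.7 MHz
```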
Heuristic derivation
The derivation of this ratio is as follows: First we must prove the torque resulting from subjecting a magnetic moment to a magnetic field is
The identity of the functional form of the stationary electric and magnetic fields has led to defining the magnitude of the magnetic dipole moment equally well as , or in the following way, imitating the moment of an electric dipole: the magnetic dipole can be represented by a needle of a compass with fictitious magnetic charges on the two poles and vector distance between the poles, under the influence of the magnetic field of the earth. By classical mechanics the torque on this needle is . But, as previously stated, , so the desired formula comes up. Here, is the unit distance vector.
The spinning electron model here is analogous to a gyroscope. For any rotating body the rate of change of the angular momentum equals the applied torque :
Note as an example the precession of a gyroscope. The earth's gravitational attraction applies a force or torque to the gyroscope in the vertical direction, and the angular momentum vector along the axis of the gyroscope rotates slowly about a vertical line through the pivot. In place of a gyroscope, imagine a sphere spinning around the axis with its center on the pivot of the gyroscope, and along the axis of the gyroscope two oppositely directed vectors, both originating in the center of the sphere, pointing upwards and downwards. Replace the gravity with a magnetic flux density.
represents the linear velocity of the pike of the arrow along a circle whose radius is where is the angle between and the vertical. Hence the angular velocity of the rotation of the spin is
Consequently,
This relationship also explains an apparent contradiction between the two equivalent terms, gyromagnetic ratio versus magnetogyric ratio: whereas it is a ratio of a magnetic property (i.e. dipole moment) to a gyric (rotational, from , "turn") property (i.e. angular momentum), it is also, at the same time, a ratio between the angular precession frequency (another gyric property) and the magnetic field.
The angular precession frequency has an important physical meaning: It is the angular cyclotron frequency, the resonance frequency of an ionized plasma being under the influence of a static finite magnetic field, when we superimpose a high frequency electromagnetic field.
See also
Charge-to-mass ratio
Chemical shift
Landé -factor
Larmor equation
Proton gyromagnetic ratio
References
Atomic physics
Nuclear magnetic resonance
Ratios | Gyromagnetic ratio | Physics,Chemistry,Mathematics | 1,626 |
5,532,622 | https://en.wikipedia.org/wiki/Fatty%20acid%20transport%20proteins | Fatty acid transport proteins (FATPs, SLC27, SLC27A) are a family of trans-membrane transport proteins, which allow and enhance the uptake of long chain fatty acids into cells. This subfamily is part of the solute carrier protein family. Within humans this family contains six very homologous proteins, which are expressed in all tissues of the body which use fatty acids:
SLC27A1 (FATP1) Long-chain fatty acid transport protein 1
SLC27A2 (FATP2) Very long-chain acyl-CoA synthetase
SLC27A3 (FATP3) Solute carrier family 27 member 3
SLC27A4 (FATP4) Long-chain fatty acid transport protein 4
SLC27A5 (FATP5) Bile acyl-CoA synthetase
SLC27A6 (FATP6) Long-chain fatty acid transport protein 6
References
Protein families
Transport proteins | Fatty acid transport proteins | Biology | 196 |
551,314 | https://en.wikipedia.org/wiki/Lists%20of%20astronomical%20objects | This is a list of lists, grouped by type of astronomical object.
Solar System
List of Solar System objects
List of gravitationally rounded objects of the Solar System
List of Solar System objects most distant from the Sun
List of Solar System objects by size
Lists of geological features of the Solar System
List of natural satellites (moons)
Lists of small Solar System bodies
Lists of comets
List of meteor showers
Minor planets
List of minor planets
List of exceptional asteroids
List of minor planet moons
List of damocloids
List of centaurs (small Solar System bodies)
List of trans-Neptunian objects
List of unnumbered minor planets
List of dwarf planets
List of possible dwarf planets
Exoplanets and brown dwarfs
Lists of planets
List of nearest exoplanets
List of largest exoplanets
List of brown dwarfs
List of Y-dwarfs
Stars
Lists of stars
List of nearest stars
List of brightest stars
List of hottest stars
List of nearest bright stars
List of most luminous stars
List of most massive stars
List of largest known stars
List of smallest stars
List of oldest stars
List of stars with proplyds
List of variable stars
List of semiregular variable stars
List of stars that dim oddly
List of X-ray pulsars
List of brown dwarfs
List of supernovae
List of supernova remnants
List of gamma-ray bursts
List of white dwarfs
Star constellations
Lists of constellations
Lists of stars by constellation
List of constellations by area
List of IAU designated constellations
Star clusters
List of open clusters
List of globular clusters
List of stellar streams
List of nearby stellar associations and moving groups
Nebulae
Lists of nebulae
List of dark nebulae
List of diffuse nebulae
List of planetary nebulae
List of protoplanetary nebulae
List of largest nebulae
Galaxies
Lists of galaxies
List of galaxies
List of largest galaxies
List of galaxies with richest globular cluster systems
List of nearest galaxies
List of galaxies named after people
List of spiral galaxies
List of polar-ring galaxies
List of ring galaxies
List of quasars
Satellite galaxies
List of satellite galaxies of the Milky Way
List of Andromeda's satellite galaxies
List of Triangulum's suspected satellite galaxies
Galaxy groups and clusters
List of galaxy groups and clusters
List of Abell clusters
List of galaxy superclusters
List of galaxy filaments
List of large quasar groups
Black holes
Lists of black holes
List of black holes
List of most massive black holes
List of nearest known black holes
Other lists
List of voids
List of largest voids
List of largest cosmic structures
List of the most distant astronomical objects
List of neutron stars
List of most massive neutron stars
List of multiplanetary systems
List of resolved circumstellar disks
Astronomical catalogues
List of astronomical catalogues
List of NGC objects
List of NGC objects (1–1000)
List of NGC objects (1001–2000)
List of NGC objects (2001–3000)
List of NGC objects (3001–4000)
List of NGC objects (4001–5000)
List of NGC objects (5001–6000)
List of NGC objects (6001–7000)
List of NGC objects (7001–7840)
List of IC objects
List of Messier objects
List of Caldwell objects
List of Herschel 400 objects
List of Melotte objects
List of Collinder objects
Map of astronomical objects
See also
American Astronomical Society
Outline of astronomy
Lists of astronauts
List of government space agencies
List of planetariums
Lists of space scientists
Lists of spacecraft | Lists of astronomical objects | Physics,Astronomy | 688 |
45,400,612 | https://en.wikipedia.org/wiki/Hyperloop%20One | Hyperloop One, known as Virgin Hyperloop until November 2022, was an American transportation technology company that worked to commercialize high-speed travel utilizing the Hyperloop concept which was a variant of the vacuum train. The company was established on June 1, 2014, and reorganized and renamed on October 12, 2017.
Hyperloop systems were intended to move cargo and passengers at airline speeds but at a fraction of the cost. They were designed to run suspended by magnetic systems in a partially-evacuated tube. The original Hyperloop concept proposed to use a linear electric motor to accelerate and decelerate an air bearing levitated pod through a low-pressure tube. The vehicle was to glide silently at speeds up to with very low turbulence. The system was proposed to be entirely autonomous, quiet, direct-to-destination, and on-demand. It would have been built on elevated structures or in tunnels, free of at-grade crossings and requiring less right of way than high-speed rail or highways.
Virgin Hyperloop made substantive technical changes to Elon Musk's initial proposal and chose not to pursue the Los Angeles–San Francisco notional route that Musk envisioned in his 2013 alpha-design white paper. It demonstrated a form of propulsion technology on May 11, 2016, at its test site in North Las Vegas. It completed a Development Loop (DevLoop) and on May 12, 2017, held its first full-scale test. The test combined Hyperloop components including vacuum, propulsion, levitation, sled, control systems, tube, and structures.
On November 8, 2020, after more than 400 uncrewed tests, the firm conducted the first human trial at a speed of at its test site in Las Vegas, Nevada. However, in February 2022, the company abandoned plans for human-rated travel and instead focused on freight, laying off more than 100 employees, amounting to half its total workforce. In November of that year the company decided to rebrand, reverting to the name Hyperloop One.
It was announced on December 21, 2023, that the company would cease operations on December 31, 2023, due to a number of factors, including financial challenges, high interest rates, waning initial backing and support, and its failure to secure any contracts for building a working hyperloop system; it began selling its assets and laying off remaining employees. According to The Verge, all of its intellectual property would shift to its majority stakeholder, major Dubai port operator DP World.
History
Origins
The idea of trains running in a vacuum has been elaborated many times in the history of science and science fiction. The concept of Hyperloop transportation was first introduced by Robert H. Goddard in 1904.
The recent plans for a version of the vacuum train called Hyperloop emerged from a conversation between Elon Musk and Iranian-American Silicon Valley investor Shervin Pishevar when they were flying together to Cuba on a humanitarian mission in January 2012. Pishevar asked Musk to elaborate on his hyperloop idea, which the industrialist had been mulling over for some time. Pishevar suggested using it for cargo, an idea Musk had not considered, but Musk did say he was considering open-sourcing the concept because he was too busy running SpaceX and Tesla. Pishevar pushed Musk to publish his ideas about the hyperloop, so that Pishevar could study them.
On August 12, 2013, Musk released the Hyperloop Alpha white paper, generating widespread attention and enthusiasm. In the months that followed Pishevar incorporated Hyperloop Technologies, which would later be renamed Hyperloop One, and recruited the first board members, including David O. Sacks, Jim Messina, and Joe Lonsdale. Pishevar also recruited a cofounder, former SpaceX engineer Brogan Bambrogan. The firm set up shop in Bambrogan's garage in Los Angeles in November 2014. By January 2015, the firm had raised $9 million in venture capital from Pishevar's Sherpa Capital and investors such as Formation 8 and Zhen Fund, and was able to move into its current campus in the Los Angeles Arts District. Forbes magazine put the firm on its February 2015 cover, landing the startup many fresh recruits and much new investor interest. In June 2015, Pishevar recruited former Cisco president Rob Lloyd as an investor and, eventually, the company's CEO.
Funding and growth
Between June 2015 and December 2015, the company continued to hire engineers and expand its downtown campus (now up to 75,000 square feet). In December 2015, Hyperloop Tech announced it would hold an open-air propulsion test at a new Test and Safety Site in Nevada. At the time, the company disclosed it had raised $37 million in financing to date and was completing a Series B round of $80m, which they closed on in May 2016. In October 2016, the firm announced that it had raised another $50 million, led by an investment from 8VC and DP World.
The propulsion open-air test, or POAT, was successfully held in North Las Vegas on May 11, 2016. The POAT sled accelerated to 134 mph (216 km/h) in 2.3 seconds, representing a crucial proof of concept. At the time, the renamed Hyperloop One announced it had secured partnerships with global engineering and design firms such as AECOM, SYSTRA, Arup, Deutsche Bahn, General Electric, and Bjarke Ingels.
On November 10, 2016, Hyperloop One released its first system designs in collaboration with the Bjarke Ingels Group.
On October 12, 2017, Hyperloop One and the Virgin Group announced that it developed a strategic investment partnership, resulting in Richard Branson joining the board of directors. The global strategic partnership will focus on passenger and mixed-use cargo service in addition to the creation of a new passenger division. Hyperloop One had raised $295 million on December 18, 2017, and subsequently was renamed Virgin Hyperloop One, and Branson became the chairman of the board of directors. As of May 2019, the company had raised $400 million.
In June 2020, the firm rebranded to Virgin Hyperloop, changing their logo and launching a new website.
In October 2020, West Virginia governor Jim Justice announced that Virgin Hyperloop would be constructing a certification facility on land in Tucker and Grant Counties. About 800 acres owned by Western Pocahontas near Mount Storm was donated to the West Virginia University Foundation, and cooperation was expected from WVU, Marshall University, and the West Virginia Community and Technical College System.
Focus on freight and layoffs
In February 2022, the Financial Times reported that the company laid off more than 100 employees, with the move allowing it to focus on cargo transport instead of passenger travel. In December 2022, a second round of layoffs was reported, focused on the firm's downtown Los Angeles staff and Las Vegas operational team.
While Hyperloop One focused on freight, competitors continued to focus on a mix of freight and passenger travel.
The change in focus put construction of the West Virginia facility in question, until the company admitted in March 2023 that it had been cancelled.
Test pods
XP-1
After Hyperloop One began the construction of DevLoop in October 2016, the company successfully conducted the first full-system test using the levitating chassis without a passenger pod on May 12, 2017. On July 12, 2017, the company revealed images of its first-generation pod prototype to be used at the DevLoop test site in Nevada to test aerodynamics. The system-wide test integrated Hyperloop components including vacuum, propulsion, levitation, sled, control systems, tube, and structures. The company designed and built its first-generation full-scale test pod, named XP-1 (short for "experimental pod one"), to be used in the full-scale pod tests.
XP-1 has the length of , the width of , and the height of . The pod's motor was evolved from 500 motors that were built and tested in order to operate with resiliency in near-vacuum environment. The pod was successfully tested for the first time on July 29, 2017, with the of acceleration to reach the recorded speeds of . The pod achieved 3,151 horsepower during the test inside the depressurized tube with conditions similar to the atmosphere at above sea level.
On August 2, 2017, Hyperloop One successfully tested its XP-1 passenger pod, reaching speeds of up to . It traveled for just over before the brakes kicked in and it rolled to a stop. The XP-1 speed record was broken in August 2017 by WARR Hyperloop during the second Hyperloop Pod Competition with the top speeds of ; however, the pods in the competition were too small to carry passengers.
XP-1 set the world's speed record again during the test in December 2017, reaching . With that test, the company also demonstrated its airlock technology, which allowed the pod to be transferred into the depressurized tube. With this system, the XP-1 pod could be placed in an airlock that took a few minutes to depressurize before it entered the already depressurized tube; otherwise, the pod would have needed to enter the tube and wait for the four-hour depressurization of the entire test tube. In 2018, WARR Hyperloop broke the XP-1 record again in the third Hyperloop Pod Competition, on a longer track.
In the summer of 2019, the company took XP-1 on a roadshow to Ohio, Texas, Kansas, New York, Missouri, North Carolina, and Washington, D.C.
XP-2
For the company's passenger testing, they created a new vehicle, dubbed "experimental pod 2", or XP-2. The vehicle was designed by Bjarke Ingels Group and Kilo Design.
On November 8, 2020, after more than 400 uncrewed tests, the firm conducted the first human trial with Josh Giegel, its co-founder and CTO, and Sara Luchian, Director of Passenger Experience, as the first passengers at a speed of at its DevLoop test site in Las Vegas, Nevada. The test was conducted in a near-vacuum environment of 100 Pascals.
In March 2021, Virgin Hyperloop announced that the vehicle would be on display at the Smithsonian Arts and Industries Building in late 2021.
Following successful passenger testing, Virgin Hyperloop unveiled its commercial vehicle design in January 2021. Designed in collaboration with Seattle-based design firm Teague, each vehicle was planned to seat about 28 passengers while transporting thousands of passengers per hour in convoys.
Funding
Hyperloop One had raised over $485 million as of May 2019. Its investors include Sherpa Capital, Formation 8, 137 Ventures, DP World, Khosla Ventures, Caspian Venture Capital, Fast Digital, Western Technology Investment, Zhen Fund, GE Ventures, and SNCF.
Management
, the board of directors included Richard Branson (chairman), Justin Fishner-Wolfson, Sultan Ahmed Bin Sulayem, Rob Lloyd, Josh Giegel, Bill Shor, Yuvraj Narayan, Anatoly Braverman, and Emily White as a strategic adviser. Former board members include Peter Diamandis, Jim Messina, who as of July 2018 serves as strategic adviser, former Morgan Stanley executive Jim Rosenthal, Joe Lonsdale, the co-founder Shervin Pishevar, who took a leave of absence from Hyperloop One in December 2017 after multiple women accused him of sexual misconduct, and Ziyavudin Magomedov, a Russian billionaire who was arrested on embezzlement charges in 2018.
On November 8, 2018, Sultan Ahmed bin Sulayem succeeded Richard Branson as chairman.
In February 2021, co-founder Josh Giegel was named CEO, before being replaced by CFO Raja Narayanan in October 2021. The firm announced an intent to accelerate scheduled fielding of production systems from the early 2030s to the mid-2020s, and that the planned initial project would transport freight between the cities of Dubai and Abu-Dhabi in the United Arab Emirates.
Planned cooperation
In June 2016 the company announced a memorandum of understanding with the Summa Group and the Russian government to construct a hyperloop in Moscow and has since completed feasibility studies in Moscow and in the Far East.
In August 2016, the firm announced a deal with the world's third largest ports operator, DP World, to develop a cargo offloader system at Jebel Ali in Dubai. On November 8, 2016, the firm announced it had signed a deal with Dubai's Roads and Transport Authority (RTA) to conduct feasibility studies on potential passenger and cargo hyperloop routes in the United Arab Emirates.
By April 2017, the firm had feasibility studies underway in the United Arab Emirates, Finland, Sweden, the Netherlands, Switzerland, Moscow, and the UK. On September 1, 2017, the firm signed a letter of intent with Estonia to cooperate on the Helsinki–Tallinn Tunnel.
In February 2018, the Virgin Group signed an "intent agreement" with the Government of Maharashtra state of India to build a hyperloop transportation system between Mumbai and Pune. In August 2019, the government deemed hyperloop a public infrastructure project and approved the Virgin Hyperloop-DP World Consortium as the Original Project Proponent (OPP), recognizing hyperloop technology alongside other more traditional forms of mass transit. The Principal Scientific Adviser to the Government of India, K. VijayRaghavan, set up a Consultative Group on Future of Transportation (CGFT) to explore the regulatory path for hyperloop.
On July 19, 2018, an Ohio regional planning commission was investigating using hyperloop between airports and potentially between Chicago, Columbus, and Pittsburgh; in May 2020 the commission released the results of their Midwest Connect feasibility study, which found that the route would create $300 billion in overall economic benefits and reduce emission by 2.4 million tons.
In July 2018, Texas officials announced that the state will explore hyperloop technology for a route connecting Dallas, Austin, San Antonio, and Laredo. In June 2019, the firm announced an ongoing collaboration with the Sam Fox School of Washington University in St. Louis to explore proposals for the Missouri Hyperloop. In October 2019, Missouri became the first US state to conduct a hyperloop feasibility study, exploring a route between Kansas City and St. Louis.
In December 2019, the State Government of Punjab, India, signed an MoU with the firm to explore a route connecting the Amritsar-Ludhiana-Chandigarh corridor.
In February 2020, the firm signed a partnership agreement with Saudi Arabia to conduct a pre-feasibility study. In September 2020, Virgin Hyperloop signed a partnership agreement with Bangalore International Airport Limited to conduct a feasibility study for a proposed corridor from BLR Airport.
Hyperloop One Global Challenge
In 2016, the firm launched its Hyperloop One Global Challenge to find the locations for, develop, and construct the world's first hyperloop networks. In January 2017, the firm announced the 35 semifinalist routes (spread over 17 countries) and held a series of events showcasing the semifinalists, Vision for India in February, Vision for America in April and Vision for Europe in June. On September 14, 2017, Hyperloop One announced the 10 winners; they were to be invited to work closely with the firm on viability studies to try to bring their respective loops from proposal to reality.
The ten winning routes that were selected are:
Lawsuits
In July 2016, the CTO and co-founder Brogan BamBrogan left the company, later filing a lawsuit with three other former employees alleging breach of fiduciary duty and misuse of corporate resources. On July 19, 2016, Hyperloop One filed a counterclaim against the four former employees, alleging they staged a failed coup of the company, in the process breaching agreements around fiduciary duty, non-competes, proprietary information, and non-disparagement, as well as intentional interference with contractual relations. On November 18, 2016, both parties agreed to settle the lawsuit. Terms were confidential and not disclosed. BamBrogan and other former Hyperloop One and SpaceX employees went on to found Arrivo, another Hyperloop company (defunct in 2018).
References
External links
Alan James about Baltic Sea Hyperloop One ring
Transformed connections & enhanced cohesion. Example opportunities for Europe
Transformed connections & enhanced cohesion. Example opportunities for Europe, 7 June 2017
2014 establishments in California
2023 disestablishments in California
American companies established in 2014
American companies disestablished in 2023
Hyperloop
Technology companies based in Greater Los Angeles
Technology companies of the United States
Transport companies established in 2014
Transport companies disestablished in 2023
Transportation companies based in California
Transportation companies of the United States
Virgin Group | Hyperloop One | Technology,Engineering | 3,469 |
489,440 | https://en.wikipedia.org/wiki/Curvature%20form | In differential geometry, the curvature form describes curvature of a connection on a principal bundle. The Riemann curvature tensor in Riemannian geometry can be considered as a special case.
Definition
Let G be a Lie group with Lie algebra 𝔤, and P → B be a principal G-bundle. Let ω be an Ehresmann connection on P (which is a 𝔤-valued one-form on P).
Then the curvature form is the 𝔤-valued 2-form on P defined by
Ω = dω + ½[ω ∧ ω] = Dω.
(In another convention, 1/2 does not appear.) Here d stands for the exterior derivative, [· ∧ ·] is defined in the article "Lie algebra-valued form" and D denotes the exterior covariant derivative. In other terms,
Ω(X, Y) = dω(X, Y) + [ω(X), ω(Y)],
where X, Y are tangent vectors to P.
There is also another expression for Ω: if X, Y are horizontal vector fields on P, then
σΩ(X, Y) = −ω([X, Y]) = −[X, Y] + h[X, Y],
where hZ means the horizontal component of Z, on the right we identified a vertical vector field with the Lie algebra element generating it (the fundamental vector field), and σ is the inverse of the normalization factor used by convention in the formula for the exterior derivative.
A connection is said to be flat if its curvature vanishes: Ω = 0. Equivalently, a connection is flat if the structure group can be reduced to the same underlying group but with the discrete topology.
Curvature form in a vector bundle
If E → B is a vector bundle, then one can also think of ω as a matrix of 1-forms, and the above formula becomes the structure equation of E. Cartan:
Ω = dω + ω ∧ ω,
where ∧ is the wedge product. More precisely, if ω^i_j and Ω^i_j denote the components of ω and Ω correspondingly (so each ω^i_j is a usual 1-form and each Ω^i_j is a usual 2-form), then
Ω^i_j = dω^i_j + Σ_k ω^i_k ∧ ω^k_j.
For example, for the tangent bundle of a Riemannian manifold, the structure group is O(n) and Ω is a 2-form with values in the Lie algebra of O(n), i.e. the antisymmetric matrices. In this case the form Ω is an alternative description of the curvature tensor, i.e.
Ω^i_j = ½ R^i_{jkl} e^k ∧ e^l,
using the standard notation for the Riemannian curvature tensor and a coframe e^k.
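A simple worked special case (assuming a complex line bundle with a U(1) connection, so the connection matrix is a single ordinary 1-form, written A here) illustrates the structure equation: the wedge term drops out and the curvature is just the exterior derivative of the connection,
Ω = dω + ω ∧ ω = dA + A ∧ A = dA (since A ∧ A = 0 for a single ordinary 1-form),
which is the familiar relation F = dA expressing the electromagnetic field strength as the curvature of the connection A.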
Bianchi identities
If θ is the canonical vector-valued 1-form on the frame bundle, the torsion Θ of the connection form ω is the vector-valued 2-form defined by the structure equation
Θ = dθ + ω ∧ θ = Dθ,
where as above D denotes the exterior covariant derivative.
The first Bianchi identity takes the form
DΘ = Ω ∧ θ.
The second Bianchi identity takes the form
DΩ = 0
and is valid more generally for any connection in a principal bundle.
The Bianchi identities can be written in tensor notation as:
R_{abmn;ℓ} + R_{abℓm;n} + R_{abnℓ;m} = 0.
The contracted Bianchi identities are used to derive the Einstein tensor in the Einstein field equations, the core of the general theory of relativity.
Notes
References
Shoshichi Kobayashi and Katsumi Nomizu (1963) Foundations of Differential Geometry, Vol.I, Chapter 2.5 Curvature form and structure equation, p 75, Wiley Interscience.
See also
Connection (principal bundle)
Basic introduction to the mathematics of curved spacetime
Contracted Bianchi identities
Einstein tensor
Einstein field equations
General theory of relativity
Chern-Simons form
Curvature of Riemannian manifolds
Gauge theory
Curvature tensors
Differential geometry | Curvature form | Engineering | 647 |
24,145,929 | https://en.wikipedia.org/wiki/C27H40O4 | The molecular formula C27H40O4 (molar mass: 428.60 g/mol, exact mass: 428.2927 u) may refer to:
AM-938
Hydroxyprogesterone caproate (OHPC)
Testosterone hexahydrobenzylcarbonate
Molecular formulas | C27H40O4 | Physics,Chemistry | 69 |
11,570,289 | https://en.wikipedia.org/wiki/Fuscoporia%20gilva | Fuscoporia gilva, commonly known as the oak conk, is a species of fungal plant pathogen which infects several hosts.
Description
The fruit bodies typically grow in rows of horizontal platforms, which grow over several years and sometimes "smear" onto the wood. The caps are usually semicircular with lumpy margins, wide, with zonate colouration ranging from dark brown to light reddish-brown or yellowish at the margin, which is up to 1 cm thick and velvety. There are 5–8 pores per square millimetre. The flesh is tough and corky. The spore print is yellow.
Similar species
Mensularia radiata is usually found on non-oak hardwoods; fresh specimens often exhibit white-tipped pores near the margin.
Uses
In traditional Chinese medicine, it is used to treat stomachaches and cancer; polysaccharides isolated from lab-grown F. gilva have been shown to inhibit the growth of melanoma in a mouse model.
See also
List of apricot diseases
List of black walnut diseases
List of Platanus diseases
List of sweetgum diseases
List of peach and nectarine diseases
List of mango diseases
References
Fungal tree pathogens and diseases
Fungi described in 1822
Taxa named by Lewis David de Schweinitz
Fungus species
Hymenochaetaceae | Fuscoporia gilva | Biology | 273 |
42,063,512 | https://en.wikipedia.org/wiki/Dodd%E2%80%93Bullough%E2%80%93Mikhailov%20equation | The Dodd–Bullough–Mikhailov equation is a nonlinear partial differential equation introduced by Roger Dodd, Robin Bullough, and Alexander Mikhailov.
In 2005, mathematician Abdul-Majid Wazwaz combined the Tzitzeica equation with the Dodd–Bullough–Mikhailov equation into the Tzitzeica–Dodd–Bullough–Mikhailov equation.
The Dodd–Bullough–Mikhailov equation has traveling wave solutions.
References
Graham W. Griffiths and William E. Schiesser, Traveling Wave Analysis of Partial Differential Equations, Academic Press, p. 135
Richard H. Enns and George C. McGuire, Nonlinear Physics, Birkhäuser, 1997
Inna Shingareva and Carlos Lizárraga-Celaya, Solving Nonlinear Partial Differential Equations with Maple, Springer
Eryk Infeld and George Rowlands, Nonlinear Waves, Solitons and Chaos, Cambridge, 2000
Saber Elaydi, An Introduction to Difference Equations, Springer, 2000
Dongming Wang, Elimination Practice, Imperial College Press, 2004
David Betounes, Partial Differential Equations for Computational Science: With Maple and Vector Analysis, Springer, 1998
George Articolo, Partial Differential Equations & Boundary Value Problems with Maple V, Academic Press, 1998
Nonlinear partial differential equations | Dodd–Bullough–Mikhailov equation | Mathematics | 243 |
10,201 | https://en.wikipedia.org/wiki/Exothermic%20process | In thermodynamics, an exothermic process () is a thermodynamic process or reaction that releases energy from the system to its surroundings, usually in the form of heat, but also in a form of light (e.g. a spark, flame, or flash), electricity (e.g. a battery), or sound (e.g. explosion heard when burning hydrogen). The term exothermic was first coined by 19th-century French chemist Marcellin Berthelot.
The opposite of an exothermic process is an endothermic process, one that absorbs energy, usually in the form of heat. The concept is frequently applied in the physical sciences to chemical reactions where chemical bond energy is converted to thermal energy (heat).
Two types of chemical reactions
Exothermic and endothermic describe two types of chemical reactions or systems found in nature, as follows:
Exothermic
An exothermic reaction occurs when heat is released to the surroundings. According to the IUPAC, an exothermic reaction is "a reaction for which the overall standard enthalpy change ΔH° is negative". Some examples of exothermic processes are fuel combustion, condensation and nuclear fission, which is used in nuclear power plants to release large amounts of energy.
Endothermic
In an endothermic reaction or system, energy is taken from the surroundings in the course of the reaction, usually driven by a favorable entropy increase in the system. An example of an endothermic reaction is a first aid cold pack, in which the reaction of two chemicals, or dissolving of one in another, requires calories from the surroundings, and the reaction cools the pouch and surroundings by absorbing heat from them.
Photosynthesis, the process that allows plants to convert carbon dioxide and water to sugar and oxygen, is an endothermic process: plants absorb radiant energy from the sun and use it in an endothermic, otherwise non-spontaneous process. The chemical energy stored can be freed by the inverse (spontaneous) process: combustion of sugar, which gives carbon dioxide, water and heat (radiant energy).
Energy release
Exothermic refers to a transformation in which a closed system releases energy (heat) to the surroundings, expressed by
Q < 0.
When the transformation occurs at constant pressure and without exchange of electrical energy, the heat is equal to the enthalpy change, i.e.
Q = ΔH,
while at constant volume, according to the first law of thermodynamics it equals the change in internal energy (U), i.e.
Q = ΔU.
In an adiabatic system (i.e. a system that does not exchange heat with the surroundings), an otherwise exothermic process results in an increase in temperature of the system.
In exothermic chemical reactions, the heat that is released by the reaction takes the form of electromagnetic energy or kinetic energy of molecules. The transition of electrons from one quantum energy level to another causes light to be released. This light is equivalent in energy to some of the stabilization energy of the chemical reaction, i.e. the bond energy. This released light can be absorbed by other molecules in solution, giving rise to molecular translations and rotations, which is the origin of the classical understanding of heat. In an exothermic reaction, the activation energy (the energy needed to start the reaction) is less than the energy that is subsequently released, so there is a net release of energy.
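As an illustrative sketch of the constant-pressure relation between released heat and enthalpy change, the following Python example estimates the heat released by burning one mole of methane from standard enthalpies of formation; the tabulated values are approximate textbook figures assumed for this example.

```python
# Heat released at constant pressure from standard enthalpies of formation:
# dH_rxn = sum(n * dHf, products) - sum(n * dHf, reactants); heat released = -dH_rxn.
# The dHf values (kJ/mol) below are approximate textbook figures, assumed for illustration.
ENTHALPY_OF_FORMATION = {
    "CH4(g)": -74.8,
    "O2(g)": 0.0,
    "CO2(g)": -393.5,
    "H2O(l)": -285.8,
}

def reaction_enthalpy(reactants, products):
    """Return the standard reaction enthalpy in kJ per mole of reaction as written."""
    side_sum = lambda side: sum(n * ENTHALPY_OF_FORMATION[s] for s, n in side.items())
    return side_sum(products) - side_sum(reactants)

# CH4 + 2 O2 -> CO2 + 2 H2O
dH = reaction_enthalpy({"CH4(g)": 1, "O2(g)": 2}, {"CO2(g)": 1, "H2O(l)": 2})
print(f"Standard reaction enthalpy: {dH:.1f} kJ/mol (negative, so the reaction is exothermic)")
print(f"Heat released at constant pressure: {-dH:.1f} kJ per mole of CH4 burned")
```

The result, roughly -890 kJ/mol, matches the well-known heat of combustion of methane and shows why a negative enthalpy change corresponds to heat flowing out of the system.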
Examples
Some examples of exothermic processes are:
Combustion of fuels such as wood, coal and oil/petroleum
The thermite reaction
The reaction of alkali metals and other highly electropositive metals with water
Condensation of rain from water vapor
Mixing water and strong acids or strong bases
The reaction of acids and bases
Dehydration of carbohydrates by sulfuric acid
The setting of cement and concrete
Some polymerization reactions such as the setting of epoxy resin
The reaction of most metals with halogens or oxygen
Nuclear fusion in hydrogen bombs and in stellar cores (to iron)
Nuclear fission of heavy elements
The reaction between zinc and hydrochloric acid
Respiration (breaking down of glucose to release energy in cells)
Implications for chemical reactions
Chemical exothermic reactions are generally more spontaneous than their counterparts, endothermic reactions.
In a thermochemical reaction that is exothermic, the heat may be listed among the products of the reaction.
See also
Calorimetry
Chemical thermodynamics
Differential scanning calorimetry
Endergonic
Endergonic reaction
Exergonic
Exergonic reaction
Endothermic reaction
References
External links
Observe exothermic reactions in a simple experiment
Thermodynamic processes
Chemical thermodynamics
15,143,713 | https://en.wikipedia.org/wiki/Integrated%20assessment%20modelling | Integrated assessment modelling (IAM) or integrated modelling (IM) is a term used for a type of scientific modelling that tries to link main features of society and economy with the biosphere and atmosphere into one modelling framework. The goal of integrated assessment modelling is to accommodate informed policy-making, usually in the context of climate change though also in other areas of human and social development. While the detail and extent of integrated disciplines varies strongly per model, all climatic integrated assessment modelling includes economic processes as well as processes producing greenhouse gases. Other integrated assessment models also integrate other aspects of human development such as education, health, infrastructure, and governance.
These models are integrated because they span multiple academic disciplines, including economics and climate science and for more comprehensive models also energy systems, land-use change, agriculture, infrastructure, conflict, governance, technology, education, and health. The word assessment comes from the use of these models to provide information for answering policy questions. To quantify these integrated assessment studies, numerical models are used. Integrated assessment modelling does not provide predictions for the future but rather estimates what possible scenarios look like.
There are different types of integrated assessment models. One classification distinguishes between firstly models that quantify future developmental pathways or scenarios and provide detailed, sectoral information on the complex processes modelled. Here they are called process-based models. Secondly, there are models that aggregate the costs of climate change and climate change mitigation to find estimates of the total costs of climate change. A second classification makes a distinction between models that extrapolate verified patterns (via econometrics equations), or models that determine (globally) optimal economic solutions from the perspective of a social planner, assuming (partial) equilibrium of the economy.
Process-based models
Intergovernmental Panel on Climate Change (IPCC) has relied on process-based integrated assessment models to quantify mitigation scenarios. They have been used to explore different pathways for staying within climate policy targets such as the 1.5 °C target agreed upon in the Paris Agreement. Moreover, these models have underpinned research including energy policy assessment and simulate the Shared socioeconomic pathways. Notable modelling frameworks include IMAGE, MESSAGEix, AIM/GCE, GCAM, REMIND-MAgPIE, and WITCH-GLOBIOM. While these scenarios are highly policy-relevant, interpretation of the scenarios should be done with care.
Non-equilibrium models include those based on econometric equations and evolutionary economics (such as E3ME), and agent-based models (such as the agent-based DSK-model). These models typically do not assume rational and representative agents, nor market equilibrium in the long term.
Aggregate cost-benefit models
Cost-benefit integrated assessment models are the main tools for calculating the social cost of carbon, or the marginal social cost of emitting one more tonne of carbon (as carbon dioxide) into the atmosphere at any point in time. For instance, the DICE, PAGE, and FUND models have been used by the US Interagency Working Group to calculate the social cost of carbon and its results have been used for regulatory impact analysis.
This type of modelling is carried out to find the total cost of climate impacts, which are generally considered a negative externality not captured by conventional markets. In order to correct such a market failure, for instance by using a carbon tax, the cost of emissions is required. However, the estimates of the social cost of carbon are highly uncertain and will remain so for the foreseeable future. It has been argued that "IAM-based analyses of climate policy create a perception of knowledge and precision that is illusory, and can fool policy-makers into thinking that the forecasts the models generate have some kind of scientific legitimacy". Still, it has been argued that attempting to calculate the social cost of carbon is useful for gaining insight into the effect of certain processes on climate impacts, as well as for better understanding one of the determinants of international cooperation in the governance of climate agreements.
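The following is a deliberately simplified toy sketch of the mechanics behind a social-cost-of-carbon calculation: discounting an assumed stream of extra damages caused by one additional tonne of CO2. The damage path, decay rate, discount rate, and horizon are invented assumptions chosen only for illustration; they are not calibrated output of DICE, PAGE, FUND, or any other model.

```python
# Toy illustration of a social-cost-of-carbon calculation: discount an assumed
# stream of extra damages caused by emitting one additional tonne of CO2 today.
# Every number here is an invented assumption, not calibrated model output.

def social_cost_of_carbon(years=100, discount_rate=0.03,
                          initial_marginal_damage=2.0, decay=0.01):
    """Present value (in $/tonne) of a slowly decaying marginal-damage stream."""
    scc = 0.0
    for t in range(years):
        damage_t = initial_marginal_damage * (1.0 - decay) ** t  # assumed damage path
        scc += damage_t / (1.0 + discount_rate) ** t             # discount to the present
    return scc

# The result is highly sensitive to the discount rate, one reason SCC estimates vary widely.
for r in (0.015, 0.03, 0.05):
    print(f"discount rate {r:.3f} -> illustrative SCC ${social_cost_of_carbon(discount_rate=r):.0f}/tCO2")
```

Even in this toy form, varying the discount rate alone changes the result severalfold, which mirrors the uncertainty in published social cost of carbon estimates discussed above.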
Integrated assessment models have not been used solely to assess environmental or climate change-related fields. They have also been used to analyze patterns of conflict, the Sustainable Development Goals, trends across issue area in Africa, and food security.
Shortcomings
All numerical models have shortcomings. Integrated assessment models for climate change, in particular, have been severely criticized for problematic assumptions that led to greatly overestimating the cost/benefit ratio for mitigating climate change, while relying on economic models inappropriate to the problem. In 2021, the integrated assessment modeling community examined gaps in what was termed the "possibility space" and how these might best be consolidated and addressed. In an October 2021 working paper, Nicholas Stern argues that existing IAMs are inherently unable to capture the economic realities of the climate crisis in its current state of rapid progress.
Models employing optimisation methodologies have received numerous critiques; a prominent one draws on the ideas of dynamical systems theory, which understands systems as changing with no deterministic pathway or end-state.
This implies a very large, or even infinite, number of possible states of the system in the future with aspects and dynamics that cannot be known to observers of the current state of the system.
This type of uncertainty around future states of an evolutionary system has been referred to as ‘radical’ or ‘fundamental’ uncertainty.
This has led some researchers to call for more work on the broader array of possible futures and calling for modelling research on those alternative scenarios that have yet to receive substantial attention, for example post-growth scenarios.
Notes
References
External links
Integrated Assessment Society
Integrated Assessment Journal
Climate change policy
Environmental science
Environmental social science
Scientific modelling
Management cybernetics | Integrated assessment modelling | Environmental_science | 1,148 |
8,833,112 | https://en.wikipedia.org/wiki/Albert%20Morris | Albert Morris (13 August 1886 in Bridgetown, South Australia – 9 January 1939, Broken Hill New South Wales) was an acclaimed Australian botanist, landscaper, ecologist, conservationist and developer of arid-zone revegetation techniques that featured natural regeneration . Morris is particularly celebrated for his decisive role in the development of the Broken Hill regeneration area, a pioneering arid-zone natural regeneration project. The regeneration area project exhibited standards and principles characteristic of the contemporary environmental repair practice, ecological restoration. The work of Albert Morris, Margaret Morris and their restoration colleagues significantly influenced the development of New South Wales government soil erosion management policies in the 1940s.
First Nations communities
From time immemorial traditional owners, the Wilyakali people, cared for homelands that encompassed the extended Broken Hill and Barrier Ranges region, western New South Wales (hereafter NSW). They maintained relations with the Barkandji (aka Paakantyi) nation, of the Baaka (aka Darling River). From ca.1830 onwards, pastoralists forcibly dispossessed the Barkandji and Wilyakali communities, seizing homelands along the Baaka and steadily extending their influence to more distant regions. As well as being dispossessed of their spiritually significant homelands, First Nations communities of western NSW were for many decades subjected to various hardships: material deprivation; widespread ill health and epidemics; racism; confinement to government reserves and denial of civil liberties. Dedicated government rectification of these injustices only commenced in the latter decades of the twentieth century. In 2015, the Wilyakali community and the Barkandji nation, after eighteen years of challenging and protracted legal proceedings, were successful in establishing their Native title claim to traditional homelands along the Baaka and extensive areas of western NSW. Today, Australian First Nations communities assert that their homelands were never ceded to the Crown.
Early life of Albert and Margaret Morris
Albert was born in Bridgetown, South Australia, to parents Albert Joseph Morris and Emma Jane (Smith). Confronted by the economic depression that gripped South Australia in the late 1880s, Morris's father sought work in the new mines of far western NSW. He moved his family to Thackaringa, and then to nearby Broken Hill, to live. Broken Hill was to become Albert's permanent home.
Early in life, Albert developed a keen interest in plants. Possibly a serious childhood injury to his foot, which prevented him from taking part in the bustle of childhood activity, contributed to his independence and self-containment, and to an increasing interest in botany. However, it is documented that his father, Joe Morris, was an "enthusiastic" botanist and young Albert was his "offsider", so this was a more likely source of his botanical interests, as well as innate talent and an interest in the subject. By the time he was undertaking technical school studies in metallurgy and assaying, Morris had developed a small garden and nursery, and contributed to the cost of his fees by selling plants (pepper trees) that he had grown. Morris took up work with the Central Mine in Broken Hill, eventually becoming chief assayer for the company.
Albert Morris and Ellen Margaret Sayce (1887-1957) were married on 13 April 1909. Margaret (Morris) was a dressmaker, and developed extensive interests and skills in art, botany, conservation and journalism. She was a member of the Society of Friends, (Quakers). Albert's formative years were spent as an Anglican, and "some years" after his marriage he converted to Quakerism. Albert and Margaret, with family assistance, built a cottage in Cornish Street, Railway Town, a western suburb of Broken Hill.
The Broken Hill work of Albert Morris
Erosion and early experiments 1900s
By ca.1900, the previously well vegetated homelands of the Wilyakali community had progressively been exploited by overstocking on pastoralist stations (properties, or ranches), and further devastated by introduced animals such as rabbits, foxes and feral goats. The mining industry and the impacts of people and their stock had resulted in the Broken Hill region being stripped of trees such as Acacia aneura Mulga, Eucalyptus camaldulensis River Red Gum, and soil binding shrubs and ground cover plants. Natural recovery from these detrimental impacts was inhibited by the arid climate, which featured low average rainfall of 250 millimetres or less per annum, long dry periods and high summer temperatures. Exposed to the regular westerly winds, previously well vegetated and stable soils had been transformed into soil-drifts; severe dust storms were common. By the 1920s, these degraded vegetation and soil conditions were regarded as the norm.
As early as 1908, newspaper comments indicated that the sheet erosion around Broken Hill had already begun. Morris described the degraded landscape in these terms:
"The extending country stretched for miles without a vestige of any green thing and each stone or old tin had a streamer of sand tailing out from it. The fences were piled high with sand, inside and out and it looked as if the intended railway lines would just be buried every dusty day, which was every windy day".
Albert and Margaret Morris were concerned about the detrimental impacts that wind erosion was inflicting on the amenity of their fellow citizens in Broken Hill, as houses, gardens, roads and public facilities were often smothered in sand. Albert lamented the loss of indigenous fauna species brought about by the destruction of their natural habitat, and the breakdown of local natural ecosystems and their beauty. He looked for ways to manage these issues.
Several failures at establishing a barrier to the wind blown sand deposits in his exposed garden inspired Morris to search for plants that could be grown in the prevailing tough arid conditions, and which would control erosion by binding the exposed soils. He and Margaret began to acquire expertise with botanical taxonomy and systematics, and by the mid 1920s Albert was corresponding with other Australian botanists. He established a home nursery, purchasing adjoining land and expanding his garden.
Barrier Field Naturalists Club 1920
In 1920, along with Margaret Morris and W.D.K. McGillivray (1868-1933), a local doctor and also a prominent Australian ornithologist and natural scientist, Albert helped establish the Broken Hill based Barrier Field Naturalists Club, serving as its secretary until his death in 1939. Margaret also served on the executive of the club. Members were interested in natural sciences such as botany and geology, and also history, conducting regular field trips and lecture series. Albert and Margaret were prominent members, participating in field trips to the country around Broken Hill, studying and collecting specimens of the indigenous flora and observing the local ecosystems.
As well as Margaret's diverse contributions, it is important to note that throughout the 1920s and 1930s Albert's botanical, conservation, tree plantation and regeneration work were strongly stimulated and supported by the many talented members of the Field Naturalists Club, people such as Dr. William MacGillivray, his son Dr. Ian MacGillivray, Edmund Dow, Maurice Mawby and many others. Morris became widely recognised for his botanical expertise, urban tree plantation work, his propagation and contributions of plants to residents and civic bodies in Broken Hill, and for his firm belief in the possibility of revegetating the barren city landscapes.
The influence of Professor T G Osborn 1920s
University of Adelaide botanist and plant ecologist Professor T G Osborn had been concerned about the degradation of South Australia's arid-zone flora, and the resultant wind erosion, since approximately 1920. At the university's Koonamore research facility, Yunta, he studied the capacity of the flora to naturally regenerate under stock exclosure conditions. Osborn concluded that overstocking on pastoral stations was the primary cause of the vegetation degradation, and that natural regeneration of the flora was possible. He advised pastoralists to carefully manage station stocking levels, and to preserve the indigenous vegetation.
South Australian pastoralists heeded Osborn's research work and advice. From approximately 1930, pastoralists developed "'flora reserves", which were fenced areas that excluded stock and allowed natural regeneration of the indigenous flora. The largest known flora reserve was approximately four hectares (ten acres). Other pastoralists undertook furrowing projects (a form of ploughing), a practice that facilitated the natural regeneration of the flora. Many of these projects were highly successful, and degraded, wind eroded soil-drifts and scalds (areas of eroded, hardened, water impervious soil) were revegetated and stabilised.
Albert Morris was certainly aware of Professor Osborn's Koonamore research work by 1928, as the research was well publicised and Morris had engaged in botanical correspondence with Osborn. Quite possibly Morris visited the Koonamore research facility, as it was located only approximately 250 kilometres from Broken Hill. The restoration work of South Australian pastoralists also received some newspaper publicity, so it is quite possible that Morris's thinking on the restoration of degraded indigenous flora was influenced by the stock exclosure and natural regeneration research and projects conducted in South Australia.
Botany, conservation, restoration 1930s
Morris achieved national and international recognition as an expert on arid-zone Australian flora, and corresponded with many prominent Australian botanists. He, with Margaret, made a collection of about 8000 plant specimens, the bulk of which were donated to the Waite Institute in South Australia in 1944. This collection is now predominantly held by the State Herbarium of South Australia with some specimens held by other state collections, including the Royal Botanic Garden of NSW. He and Margaret were noted for their generosity and hospitality to fellow naturalists and others working at Broken Hill. Among those they befriended was the noted botanist and author Thistle Harris, who worked in Broken Hill as a teacher c.1930.
By 1936 Albert Morris had acquired considerable expertise in the distinct fields of arid-zone tree plantation establishment, and arid-zone natural regeneration. His expertise in natural regeneration was based on the field knowledge that he had acquired on Barrier Field Naturalists Club outings into the surrounding countryside, and his deep botanical knowledge of arid-zone flora species. His own home nursery experiments with sand stabilising plants such as Atriplex spp. saltbushes, further enhanced his regeneration and restoration knowledge. Quite possibly the natural regeneration work of Professor Osborn and the South Australian pastoralists had influenced him. Broad acreage furrowing field trials, conducted in 1935-36 by Morris with local pastoralists on their pastoral stations, facilitated natural regeneration of the indigenous flora and must also have convinced him of the efficacy of natural regeneration as a means of restoring degraded lands.
Albert was also possessed of extensive administrative and communication skills. His professional employment as an assayer involved responsible administrative duties, and he utilised this experience to good effect in his volunteer conservation work. As secretary of the Barrier Field Naturalists Club, he corresponded with and lobbied New South Wales state government ministers and other representatives of industry and government bodies, on conservation and restoration matters. In particular, in 1935, he wrote on behalf of the Barrier Field Naturalists to the New South Wales state government, urging the government to establish a fenced natural regeneration area around Broken Hill. In April 1936, Albert and other field naturalists presented detailed submissions on soil and flora conservation, and stock exclosure and natural regeneration techniques, to the New South Wales Erosion Committee.
Tree plantations, regeneration reserves 1936
Equipped with evidence of the efficacy of stock exclosure and natural regeneration as a means of restoring eroded lands, in May 1936 Albert and club members commenced lobbying the state government to fence two water reservoir sites in Broken Hill, to exclude stock and rabbits and allow the indigenous flora there to naturally regenerate. Due to Albert's persistence, this work was approved in September 1937, and the fencing was done in April 1939, shortly after his death.
However, Albert Morris is best remembered and celebrated for the natural regeneration area that now encircles Broken Hill, a project that is today referred to as the Broken Hill regeneration area. Displaying considerable initiative and management skills, Morris demonstrated to Broken Hill mining executives the botanical feasibility of his plans, and convinced them to financially back the project. The natural regeneration area project was conceived in the winter of 1936, and commenced in the spring of that year.
The Zinc Corporation, another Broken Hill mining company, had developed extensive plans to commence construction in 1936 of a new mine complex on a bare, desert like piece of ground located along the south-west urban fringes of Broken Hill. The company engaged the honorary services of Albert Morris to advise on the establishment of tree plantations adjacent to the proposed new mining, office and residential complex, to protect the complex from sand-drifts and the strong local westerly winds. Construction of these tree plantations, which were to be irrigated with waste water and established by traditional planting methods, but using indigenous Australian vegetation including saltbushes, a method Morris had experimented with, commenced in May, 1936.
The initial fencing of the main tree plantation site facilitated rapid and substantial natural regeneration within the still unplanted, and otherwise bare, fenced enclosure, of native grasses and forbs germinating from seed naturally stored in the soil. Crucially, this regrowth of indigenous vegetation persisted, as a result of foraging livestock and rabbits having been excluded by the new fencing.
The knowledgeable Albert Morris had fully anticipated and predicted the natural regeneration that occurred within the fenced tree plantations adjacent to the new mining complex. As mentioned, he had already observed and confirmed this process in previous broad acreage field trials, and was aware of the ways in which arid-zone indigenous flora seed could be naturally dispersed by wind and stored in the soil, germinate, and thrive after relatively small amounts of rainfall. Although at this time natural regeneration of many indigenous flora species, such as Eucalyptus spp. (often referred to as gum trees), was a familiar concept to many settler Australians, Morris's knowledge of the viability of various arid plant species' seed, and his experience with the natural regeneration capabilities of the indigenous flora communities, were exceptional.
Morris seized on this significant (approx. 22 acres; 9 hectares) demonstration of natural regeneration principles, and convinced the Zinc Corporation mine manager, A J Keast, to obtain the backing of senior Zinc Corp executive, W S Robinson, and other mining companies in Broken Hill, to undertake a new, separate project, the trial fencing of regeneration reserves to the south-west of the city. Morris intended that these reserves would primarily utilise natural regeneration, and limited, targeted amounts of planting, as their primary means of revegetation.
Broken Hill regeneration area 1936-58
Work on the Zinc Corporation mining complex tree plantations continued, but Morris, also in an honorary capacity, was now additionally advising on the new Broken Hill regeneration area project, which consisted of a series of fenced regeneration reserves extending around the south-west perimeter of Broken Hill and covering hundreds of hectares. This work commenced in the spring of 1936 and was completed in February 1937. Further reserves were added between 1937 and 1939. Good rains fell, and substantial revegetation success was achieved across all of the reserves. The entire south and westward aspects of Broken Hill were now protected from wind driven sand-drifts by naturally regenerated indigenous vegetation of the type that naturally occurred on the site. It is important to note though, that the traditional owners of the lands of Broken Hill and the surrounding region, the dispossessed Wilyakali community, appear to have had no opportunities to consider contributing to the development of the regeneration area project, despite their long and deep physical and spiritual connections to these lands. Also, it is unlikely that their Traditional Ecological Knowledge was utilised, either directly or indirectly.
Sadly, Albert Morris died in January 1939, after several months of illness, but he did live to see substantial evidence of the success of his regeneration vision and initiatives. Indeed, the successful vegetation regeneration within the initial set of regeneration reserves was highly praised by the visiting South Australian Erosion Committee in June 1937. Before he died, Albert was also aware that a Broken Hill community progress association had successfully obtained funds from the state government to finance the construction of a regeneration reserve to the south of the city in 1938-39. Unfortunately, Albert did not live to see the beneficial effect that the good rains of 1939 had on the reserves.
The resource demands of the Second World War (1939–45) delayed the development of further regeneration reserves and the encirclement of the city with a protective belt of indigenous flora. During this challenging period Margaret Morris played an important role in the botanical management, study and documentation of the reserves. She successfully promoted their benefits with regular newspaper articles, and authored an influential article in the Australian Journal of Science. In her various articles, Margaret emphasised the natural regeneration of indigenous species, such as Acacia aneura Mulga, that had occurred in the reserves. She wrote of the natural resilience of the regeneration reserves, correctly predicting that they would survive the severe drought of 1940, and was unstinting in her generous acknowledgement of the contributions made by members of the Broken Hill community, the mining industry and Broken Hill Council. The Barrier Field Naturalists Club also continued its involvement with the reserves, with members conducting botanical surveys of the thriving natural flora and advocating for the extension of the regeneration area. The Mine Managers Association of Broken Hill financed the upkeep of the regeneration reserves, and Broken Hill Council managed this work.
The citizens of Broken Hill suffered severely from the effects of the 1940 drought, and further prolonged dry periods in the early to mid-1940s, as enormous dust storms ravaged the city. Due to the success and popularity of the regeneration reserves, from 1946 the city administration lobbied the New South Wales government to complete the encirclement of the city with further regeneration reserves. Three new reserves were fenced to the north and east of Broken Hill between 1950 and 1958, and natural regeneration of the indigenous vegetation occurred. The regeneration reserves created between 1936 and 1958 now primarily comprise the current Broken Hill regeneration area, with minor adjustments having been made over the years.
Natural regeneration
It has in the past, and still is very often mistakenly assumed, that planting techniques were predominantly utilised to initially establish the regeneration reserves, and that the regeneration area project was primarily an exercise in planting. It is correct that the Zinc Corporation tree plantations of 1936-37, quite separate and also small projects relative to the regeneration reserves, and located immediately adjacent to the urban area and piped water resources, were irrigated, and their vegetation established by the manual planting of thousands of trees, along with saltbushes; this was documented at the time. However it is clear from Albert Morris's interest in natural regeneration, as already outlined in this article, and the historical documentation, that the regeneration reserves, as distinct from the tree plantations, primarily and intentionally utilised principles of stock exclosure (fencing to exclude stock) and natural regeneration, and not planting, to achieve the revegetation, with indigenous flora, of the hitherto barren reserves.
Albert Morris was interested in achieving broad-acreage arid-zone revegetation outcomes, both for amenity and conservation purposes, and, as he realised, it would have been impossible to achieve this under the prevailing dry, hot and often drought-stricken conditions by utilising a planting technique. To propagate, manually plant, and then keep hydrated until established the tens of thousands of trees, shrubs, grasses and forbs necessary for such a project, conducted over many hundreds of rugged hectares, would have required extensive seed collection and plant propagation capabilities, and generous personnel resources and funding; it is unlikely that such a project would even be feasible today. There is no evidence of such a large planting project occurring at the time of the establishment of the regeneration reserves. It was clearly Morris's intention that the establishment of vegetation in the regeneration reserves was to be primarily left to the factors associated with natural regeneration: germination of existing, naturally deposited and wind-dispersed seeds of the local flora, the regrowth of established but degraded in-ground rootstocks, and the local rainfall of approximately 250 mm per year. Crucially, fencing around the reserves excluded the livestock and rabbits that had previously decimated this indigenous flora. In fact, University of Sydney researchers Professor Eric Ashby and Ilma Pidgeon were drawn to the project in order to study the spectacular natural regeneration of the indigenous flora that had occurred following exclusion of stock, and concluded that 'fencing the land has restored the vegetation'. Spreading of seed by hand, and the ploughing of moisture-impermeable claypans (also known as scalds), were techniques also contemplated by Morris. Relatively little or no tree or shrub planting was done in order to establish the regeneration reserves, except in an undefined section of regeneration reserve no. 2, which was also irrigated, as this reserve was adjacent to the small and irrigated tree plantation no. 1, now known as Albert Morris Park. Some planting was carried out by community members along water courses and in claypans, and extensive tree planting was carried out along some road verges from approximately 1939.
Ecological restoration
The historical regeneration area project exhibits principles of the contemporary environmental repair concept, ecological restoration. See National standards for the practice of ecological restoration in Australia. Substantial to full restoration of the indigenous flora was aspired to; appropriate levels of site intervention, predominantly in the form of fencing and small amounts of furrowing and planting were adopted, with re-establishment of the indigenous vegetation primarily left to natural regeneration; formal science and local ecological knowledge were utilised; the indigenous flora and fauna were conserved; the residents of Broken Hill came to appreciate and engage with the project. However, as noted, there is no record of traditional owners and Custodians of the regional lands, the Wilyakali community, being presented with opportunities to consider contributing to the project.
Government erosion management policies and legislation 1940s
The Broken Hill regeneration area project and its outcomes significantly influenced the development of NSW state government soil erosion management policies and legislation. NSW Soil Conservation Service (established 1938) director Sam Clayton, and researcher Noel Beadle, were impressed by the successful revegetation outcomes achieved within the regeneration area. Throughout the 1940s, they pushed for and implemented state government land management policies that aimed to revegetate, by stock exclosure and natural regeneration processes, those landscapes of western NSW that were in a degraded vegetation condition, but were still, fortunately, not yet wind or water eroded. To achieve this outcome, Beadle recognised that tree planting programs were completely unfeasible, given the extent of the problem and the arid conditions. Clayton and Beadle also targeted the revegetation of the twenty million hectares of western NSW that were in an eroded condition. To achieve both of these objectives, state legislation was passed in 1949, and stock exclosure and natural regeneration processes were codified as government land management techniques and policies; overstocking was outlawed.
Remembrance and celebration
The regeneration area still encircles Broken Hill today, providing the city with an attractive ring of natural vegetation. Broken Hill City Council manages the regeneration area, with the crucial support of Landcare Broken Hill and members of the Barrier Field Naturalists Club. The regeneration reserves were recognised as cultural heritage items by the New South Wales National Trust in 1991. In 2015 the City of Broken Hill was declared a place of national heritage values by the Australian government. As part of this recognition, Albert's achievements, and the Broken Hill regeneration reserves, were listed as heritage values of the city.
The work of Albert Morris was valued and commemorated by the citizens of Broken Hill. In 1941 an impressive water fountain, dedicated to his memory and funded by public subscription, was installed outside the Technical College, Argent Street, Broken Hill. In 1944 Margaret Morris opened the Albert Morris Memorial Gates, which are now located in Wentworth Road, Broken Hill. The John Scougall Gates, named after Jack Scougall, a foreman of works on the regeneration reserves and later manager of the Zinc Corporation nursery, stand nearby.
A consortium of Australian ecological restoration organisations initiated the Albert Morris Award for an Outstanding Ecological Restoration Project in 2017, to mark the eightieth anniversary of the completion of the first regeneration reserves in 1937. On 22–24 August 2017, the Australian Association of Bush Regenerators, Broken Hill City Council, the Barrier Field Naturalists Club, Landcare Broken Hill and Broken Hill Art Exchange came together with many visitors and local residents in Broken Hill to mark this event, with field trips and an inaugural Albert Morris Ecological Restoration Award dinner. The Award dinner recognised the skills, dedication and community spirit of Albert Morris, Margaret Morris and their many colleagues in the Barrier Field Naturalists Club, the contributions of Broken Hill citizens and community members, and the contributions of the mining industry of Broken Hill, Broken Hill City Council and the New South Wales state government, to the regeneration area project.
At the Award dinner the inaugural Albert Morris Award for an Outstanding Ecological Restoration Project was presented "to the Broken Hill Regeneration Reserves Project itself and all those who made it happen from 1936-1958 and those who are still making it happen". The actual award is a sculpture crafted by Badger Bates, a distinguished Barkandji (Paakantji) artist from Broken Hill. The sculpture is titled 'Regeneration' and is made from the wattle "Dead Finish", Acacia tetragonophylla. The 2018 Award was presented at the Society for Ecological Restoration Australasia Conference, held in Brisbane in September 2018, to Murray Local Land Services, recognising the Murray Riverina Travelling Stock Reserves Project.
The 1930s South Australian work of Albert Morris
In 1932 Essington Lewis, famed manager of the Australian industrial and mining corporation, Broken Hill Proprietary Company (BHP), invited Albert Morris to visit South Australia and investigate the possibility of establishing tree plantations at the company's corporate towns of Whyalla and Iron Knob, for amenity purposes. During a series of visits between 1932 and 1937, Morris (the historical documentation does not record participation by Margaret Morris in the South Australian projects) successfully established an Australian flora plant nursery in Whyalla and developed plantations of Australian flora there and at Iron Knob. He also advised the municipal council of Port Pirie on the possibility of establishing plantations there, although no actual work appears to have been undertaken.
Morris initiated two natural regeneration projects in Whyalla, approximately between 1935 and 1937 (precise dates unknown). At Hummock Hill, fencing of the bare site to exclude dairy cattle led to the regeneration of the indigenous flora. The second project was located on the current site of the Ada Ryan Gardens in Whyalla, and involved the management of invasive beach sand dunes, by fencing to exclude rabbits and cattle, allowing the indigenous flora to recover. By 1939 both projects were being hailed as major successes, with tangible outcomes being evident.
References
AABR News October 2017 "The inaugural Albert Morris Ecological Restoration Award" http://www.aabr.org.au/learn/publications-presentations/aabr-newsletters/
Ardill, Peter J. (2017) "Albert Morris and the Broken Hill regeneration area: time, landscape and renewal". ed. 3. Australian Association of Bush Regenerators (AABR) Sydney. http://www.aabr.org.au/morris-broken-hill/
Ardill, Peter J. (2018) "The South Australian arid zone plantation and natural regeneration work of Albert Morris". Australian Association of Bush Regenerators (AABR) Sydney http://www.aabr.org.au/morris-broken-hill/
Ardill, Peter J.(2022) ‘Rekindling memory of environmental repair responses to the Australian wind erosion crisis of 1930–45: ecologically aligned restoration of degraded arid-zone pastoral lands and the resultant shaping of state soil conservation policies’ (January) The Repair Press Sydney https://ecologicalrestorationhistory.org/articles/
Ardill, Peter & Brodie, Louise ed. (2018) "Albert Morris and the Broken Hill regeneration area. Essays and supplementary materials commemorating and celebrating the history and eightieth anniversary of this project". Australian Association of Bush Regenerators Inc (AABR) Sydney.
Beadle, N.C.W. (1948) "The Vegetation and Pastures of Western New South Wales". Department of Conservation of NSW. Sydney. NSW.
Briggs, Barbara (2017) Bush Regeneration at Broken Hill:‘radical for their time’ "Australasian Plant Conservation" pp. 7–9 26:3 Dec 2017-Feb 2018.
Kennedy, B.E. (1986) 'Morris, Albert (Bert) (1886–1939)', Australian Dictionary of Biography, National Centre of Biography, Australian National University, http://adb.anu.edu.au/biography/morris-albert-bert-7659/text13397, published first in hardcopy 1986, accessed online 18 June 2018.This article was first published in hardcopy in Australian Dictionary of Biography, Volume 10, (MUP), 1986
CHAH (2016): Council of Heads of Australasian Herbaria. Australian National Herbarium Biographical Notes: Morris, Albert (1886 - 1939) http://www.anbg.gov.au/biography/morris-albert.html
Jones David S. (2011). Re-Greening ‘The Hill’: Albert Morris and the transformation of the Broken Hill landscape. Studies in the History of Gardens & Designed Landscapes 31: 181–195.
Jones, David (2016). "Evolution and significance of the regeneration reserve heritage landscape of broken hill: History, values and significance" [online]. Historic Environment, Vol. 28, No. 1, 2016: 40-57. Availability: <https://search.informit.com.au
McDonald, Tein (2017) "Report on the Albert Morris Inaugural Award" in "Australasian Plant Conservation" pp. 9–10 26:3 Dec 2017-Feb 2018.
McDonald, Tein, (2017a) “How do the Broken Hill Regeneration Reserves stand up as an Ecological Restoration project?” AABR News. No. 132, April, 2017. Australian Association of Bush Regenerators. Sydney. Australia. http://www.aabr.org.au/learn/publications-presentations/aabr-newsletters/
McDonald, Tein, (2017b) “Would the Broken Hill Regeneration Reserves meet today’s National Standards?” AABR News. No.134, 2017. Australian Association of Bush Regenerators. Sydney. Australia. http://www.aabr.org.au/learn/publications-presentations/aabr-newsletters/
Morris, A.(1938) "Broken Hill Fights Sand-Drift" in "Plant life of the West Darling", Barrier Field Naturalists Club compiler (Broken Hill, NSW, 1966)
Morris, M. (1939) "Plant Regeneration in the Broken Hill District" The Australian Journal of Science pp. 43–48. October.
Morris, M. (1966) "Biographical Notes" in ″Plantlife of the West Darling" ed. Barrier Field Naturalists Club. Broken Hill
OEH (2021) ‘Bioregions of NSW/A Brief overview of NSW/NSW-Regional History/New South Wales-Aboriginal occupation-Aboriginal occupation of the Western Division’ p. 15 New South Wales Office of Environment and Heritage. NSW Department Planning Industry Environment. Sydney https://www.environment.nsw.gov.au/-/media/OEH/Corporate-Site/Documents/Animals-andplants/ Bioregions/bioregions-of-new-south-wales.pdf
Pearce, Lilian M (2019) ‘Critical Histories for Ecological Restoration’ (Thesis for Doctor of Philosophy Australian National University) https://openresearchrepository.anu.edu.au/handle/1885/173547
Pidgeon, I Ashby, E (1940) ‘Studies in Applied Ecology I. A Statistical Analysis of Regeneration Following Protection from Grazing’ Proceedings of the Linnean Society of NSW 65 123-143 at p. 127 http://www.biodiversitylibrary.org/bibliography/6525#/summary
Robin, L. (2007) How A Continent Created A Nation (University New South Wales Press:Sydney)
Webber Horace, 1992 The Greening of the Hill - Re-vegetation of Broken Hill in the 1930s published by Hyland House
1886 births
1939 deaths
20th-century Australian botanists
History of Broken Hill
Converts to Quakerism
People from Broken Hill, New South Wales
Australian Quakers
Ecological restoration
Landscape ecology | Albert Morris | Chemistry,Engineering | 6,669 |
14,119,425 | https://en.wikipedia.org/wiki/Melanocortin%204%20receptor | Melanocortin 4 receptor (MC4R) is a melanocortin receptor that in humans is encoded by the MC4R gene. It encodes the MC4R protein, a G protein-coupled receptor (GPCR) that binds α-melanocyte stimulating hormone (α-MSH). In mouse models, MC4 receptors have been found to be involved in feeding behaviour, the regulation of metabolism, sexual behaviour, and male erectile function.
Clinical significance
In 2009, two very large genome-wide association studies of body mass index (BMI) confirmed the association of variants about 150 kilobases downstream of the MC4R gene with insulin resistance, obesity, and other anthropometric traits. MC4R may also have clinical utility as a biomarker for predicting individual susceptibility to drug-induced adverse effects causing weight gain and related metabolic abnormalities. Another GWAS performed in 2012 identified twenty SNPs located ~190 Kb downstream of MC4R in association with severe antipsychotic-induced weight gain. This locus overlapped with the region previously identified in the 2009 studies. The rs489693 polymorphism, in particular, sustained a statistically robust signal across three replication cohorts and demonstrated consistent recessive effects. This finding was replicated again by another research group in the following year. In accordance with the above, MC4 receptor agonists have garnered interest as potential treatments for obesity and insulin resistance, while MC4 receptor antagonists have attracted interest as potential treatments for cachexia. The structures of the receptor in complex with the agonist setmelanotide and the antagonist SHU9119 have been determined.
The MC4 receptor agonist bremelanotide (PT-141), sold under the brand name Vyleesi, was approved in the United States as a treatment for low sexual desire in women in 2019. Melanotan II, a synthetic analog of α-MSH, is marketed to the general population for sexual enhancement by internet retailers. PL-6983 and PF-00446687 are under investigation as potential treatments for both female and male sexual dysfunction, including hypoactive sexual desire disorder and erectile dysfunction. The non-selective melanocortin receptor agonist afamelanotide (NDP-α-MSH) has been found to induce brain-derived neurotrophic factor (BDNF) expression in the rodent brain via activation of the MC4 receptor and mediate "intense" neurogenesis and cognitive recovery in an animal model of Alzheimer's disease. MC4 receptor antagonists produce pronounced antidepressant- and anxiolytic-like effects in animal models of depression and anxiety. And agonists of the MC4 receptor such as melanotan II and PF-00446687, via activation of the central oxytocin system, have been found to promote pair bond formation in prairie voles and, due to these prosocial effects, have been suggested as possible treatments for social deficits in autism spectrum disorders and schizophrenia.
In 2008, MC4R mutations were reported to be associated with inherited human obesity. They were found in heterozygotes, suggesting an autosomal dominant inheritance pattern. However, based on other research and observations, these mutations seem to have an incomplete penetrance and some degree of codominance. It has a prevalence of 1.0–2.5% in people with body mass indices greater than 30, making it the most commonly known genetic defect predisposing people to obesity.
In an exome-wide meta-analysis across three cohorts (UKB, GHS and MCPS), rare genetic variants in 16 genes were associated with BMI.
Among the 16 genes, the analysis identified two for which rare mutations are known to cause monogenic obesity: MC4R and PCSK1 (proprotein convertase subtilisin/kexin type 1). One study provides genetic evidence linking rare coding variation to BMI and obesity-related phenotypes.
MC4R gene mutations are associated with early-onset severe obesity. Clinical features suggestive of mutations in the MC4R gene, including the two heterozygous coding mutations C293R and S94N, are:
• Rapid weight gains from early age (the most important feature).
• Development of severe obesity (BMI ≫97th percentile) at early ages, usually <3 years of age.
• Persistent food-seeking behavior, mostly reported from six months of age.
• Parental/siblings anthropometric data: suspect if relatives present normal anthropometric data.
• Tall stature/increased growth velocity (MC4R monogenic diabetes).
Treatment options for the most common form of monogenic obesity are limited. Symptoms of MC4R mutations can be treated with the glucagon-like peptide-1 receptor agonist liraglutide, which causes weight loss by reducing appetite. Liraglutide 3.0 mg daily for 16 weeks was found to reduce weight and lower glucose, and may be a relevant treatment for the most common form of monogenic obesity.
Interactions
The MC4 receptor has been shown to interact with proopiomelanocortin (POMC). POMC is a precursor peptide pro-hormone which is cleaved into several other peptide hormones. All of the endogenous ligands of MC4 are produced by cleaving this one precursor peptide. These endogenous agonists include α-MSH, β-MSH, γ-MSH, and ACTH.
Calcium as a cofactor for ligand binding
GPCRs can bind a wide variety of extracellular ligands, including physiological cations. Biological and pharmacological studies have previously implicated both Ca2+ and Zn2+ in the function of multiple members of the melanocortin receptor family. There is a Ca2+ ion in the agonist-bound structure. The researchers hypothesize that Ca2+ stabilizes the ligand-binding pocket and functions as an endogenous cofactor for the binding of α-MSH to the MC4 receptor. Ca2+ is likely to bind when the receptor is exposed to extracellular Ca2+ concentrations (~1.2 mM in the extracellular space of the central nervous system) but might not be bound intracellularly (intracellular Ca2+ concentration: ~100 nM), thus suggesting a potential regulatory role for Ca2+ in α-MSH–binding dynamics.
Signaling along the phospholipase C pathway can significantly raise the intracellular Ca2+ concentration, and this may constitute positive feedback from signaling of the MC4 receptor or other receptors that result in Ca2+ flux. This discovery highlights the plasticity and multipronged regulation and control of this receptor and will aid in next-generation structure-based drug design of therapeutics for MC4R-related obesity.
Ligands
Agonists
Non-selective
α-MSH
β-MSH
γ-MSH
ACTH
Afamelanotide
Bremelanotide
Melanotan II
Modimelanotide
Setmelanotide – approved by the FDA (as IMCIVREE) as the first-ever therapy for chronic weight management in certain genetic obesities. Setmelanotide is a first-in-class precision medicine designed to directly address the underlying cause of obesities driven by genetic deficits in the melanocortin-4 (MC4) receptor pathway.
Selective
AZD2820
LY-2112688
MK-0493
PF-00446687
PG-931
PL-6983
Ro 27-3225 – also some activity at MC1
THIQ
Antagonists
Non-selective
Agouti-related peptide
Agouti signalling peptide
SHU-8914
SHU-9005
SHU-9119
Selective
HS-014
HS-024
JKC-363
MCL-0020
MCL-0042 – also a serotonin reuptake inhibitor
MCL-0129
ML-00253764
MPB-10
Unknown
Semax
Evolution
Paralogues
Source:
MC5R
MC3R
MC1R
MC2R
S1PR1
LPAR1
GPR12
S1PR3
LPAR2
S1PR2
GPR6
GPR3
LPAR3
GPR119
CNR1
S1PR5
S1PR4
CNR2
See also
Melanocortin receptor
References
Further reading
External links
G protein-coupled receptors
Human proteins | Melanocortin 4 receptor | Chemistry | 1,731 |
76,529,929 | https://en.wikipedia.org/wiki/Rejoyn | Rejoyn is a prescription-only digital therapeutic smartphone app approved by the US FDA for the treatment of major depressive disorder (MDD) in adults ages 22 and up. It is prescribed in conjunction with standard antidepressant medication and professional guidance and support.
Rejoyn was developed by Otsuka America Pharmaceutical Inc., and gained FDA approval as a "medical device" on March 30th, 2024. The smartphone app helps patients with depression using exercises based on cognitive behavioral therapy (CBT) along with timed notifications to keep the patient engaged and in treatment. Randomized controlled trials showed that the Rejoyn app was more effective at relieving depression symptoms compared to a "sham app", a placebo app that required similar effort but was not intended to be helpful. Dr. John Torous, MD, MBI, a psychiatrist at the Beth Israel Deaconess Medical Center in Boston, said that the app seems to pose minimal risks, and is an important step forward in unlocking the power of smartphones in treating psychiatric disorders.
Some experts have signaled that the claims should be taken with caution, since the app was "tested only in a narrow subset of patients" and its benefits were "not statistically significant" according to the study's primary outcome.
Notes
a. MBI stands for Master of Biomedical Informatics.
References
Psychiatry stubs
Psychiatry
Treatment of depression
Mobile applications
Health software
Otsuka Pharmaceutical | Rejoyn | Technology | 291 |
2,418,199 | https://en.wikipedia.org/wiki/Magnum%20%28rocket%29 | The Magnum was a large super-heavy-lift rocket designed by NASA's Marshall Space Flight Center during the mid-1990s. The Magnum, which never made it past the preliminary design phase, would have been a launcher some 96 meters (315 feet) tall, on the scale of the Saturn V and was originally designed to carry a human expedition to Mars. It was to have used two strap-on side boosters, similar to the Space Shuttle Solid Rocket Boosters (SRBs), but using liquid fuel instead. Some designs had the strap-on boosters using wings and jet engines, which would enable them to fly back to the launch area after they were jettisoned in flight. The Magnum was designed to carry around 80 tons of payload into low Earth orbit (LEO).
See also
Shuttle-C
Shuttle-derived vehicle
Shuttle-Derived Heavy Lift Launch Vehicle presented 2009
National Launch System, studied from 1991 to 1993
Constellation program, developed from 2005 to 2009 - cancelled
Space Launch System, developed and built from 2010 onwards
Studied Space Shuttle Variations and Derivatives
References
External links
Information about variants of Magnum
Low Cost Large Core Vehicle Structures Assessment - final report March 1998 re Magnum Launch Vehicle and Liquid Fly Back Booster.
Cancelled space launch vehicles
Space launch vehicles of the United States | Magnum (rocket) | Astronomy | 255 |
36,907,492 | https://en.wikipedia.org/wiki/Fences%20and%20pickets%20model%20of%20plasma%20membrane%20structure | The fences and pickets model of plasma membrane is a concept of cell membrane structure suggesting that the fluid plasma membrane is compartmentalized by actin-based membrane-skeleton "fences" and anchored transmembrane protein "pickets". This model differs from older cell membrane structure concepts, such as the Singer-Nicolson fluid mosaic model and the Saffman-Delbrück two-dimensional continuum fluid model, which view the membrane as more or less homogeneous. The fences and pickets model was proposed to explain observations of molecular traffic made possible by recent advances in single-molecule tracking techniques.
Membrane skeleton fence model
The actin-based membrane skeleton (MSK) meshwork is directly situated on the cytoplasmic surface of the plasma membrane. The membrane skeleton fence, or membrane skeleton corralling, model suggests that this meshwork partitions the plasma membrane into many small compartments with regard to the lateral diffusion of membrane molecules. Cytoplasmic domains collide with the actin-based membrane skeleton, which induces temporary confinement or corralling of transmembrane (TM) proteins in the membrane skeleton mesh. TM proteins can hop between adjacent compartments when the distance between the meshwork and the membrane becomes large enough, or when the meshwork temporarily and locally dissociates. Cytoplasmic molecules located on the inner surface of the plasma membrane also exhibit confinement within actin-based compartments. Recent evidence suggests that different lipid-anchored membrane proteins can undergo dynamic compartmentalization within specific membrane domains, primarily based on their spatially heterogeneous diffusion profiles, even in the absence of actin fences.
Anchored transmembrane protein picket model
The movement of phospholipids, even those located in the outer leaflet of the membrane, is regulated by the actin-based membrane skeleton meshwork. This is surprising, because the membrane skeleton is located on the cytoplasmic surface of the plasma membrane and cannot directly interact with the phospholipids located in the outer leaflet of the plasma membrane.
To explain the hop diffusion of phospholipids, consistently with that of TM proteins, a model named "anchored TM-protein picket model" has been proposed. In this model various TM proteins are anchored to and aligned along the membrane skeleton, and effectively act as rows of pickets against the free diffusion of phospholipids. This is due not only to the steric hindrance effect of these picket proteins, but also to the hydrodynamic-friction-like effects of these immobilized TM protein pickets on the surrounding lipid molecules.
When a TM protein is anchored to the membrane skeleton and immobilized, the viscosity of the fluid around it becomes higher, due to hydrodynamic friction effects at the surface of the immobilized protein. Therefore, when there are many such anchored TM proteins aligned along the membrane-skeleton fence, the compartment boundary becomes difficult for membrane molecules to pass through.
Receptor redistribution and clustering are key steps in many signal transduction pathways. Several reports have indicated the active roles played by the cytoskeleton in inhibiting or enabling the redistribution/clustering of membrane molecules.
Receptor monomers can hop across the inter-compartment boundaries quite readily, but when they form oligomers, their size increases and consequently their hop rate decreases dramatically.
Many receptors and other membrane-associated molecules are temporarily immobilized on actin filaments. This immobilization is often enhanced upon receptor engagement, and constitutes a key step for recruiting the downstream signaling molecule. Meanwhile, the formation of engaged receptor clusters might lead to de novo polymerization of actin filaments at the receptor cluster. As such, the actin-based membrane skeleton might work as a base scaffold for enhancing the interactions between the receptor and the actin-bound downstream molecules and for localized signaling.
The "pickets" and "fences" made of the membrane skeleton and the anchored transmembrane proteins provide the cell with a mechanism for preserving the spatial information of signal transduction in the membrane.
Pickets influence the traffic of both lipids and transmembrane proteins, whereas fences mostly influence only transmembrane proteins. Therefore, transmembrane proteins are corralled by both fences and pickets. In both models, membrane proteins and lipids can hop from a compartment to an adjacent one, probably when thermal fluctuations of the membrane and the membrane skeleton create a space between them large enough to allow the passage of integral membrane proteins, when an actin filament temporarily breaks, and/or when membrane molecules have sufficient kinetic energy to cross the barrier when they are in the boundary region.
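The compartmentalized ("hop") diffusion described by these models can be illustrated with a toy simulation. The sketch below (Python) is a minimal one-dimensional random walk in which a molecule diffuses freely inside a compartment but crosses a fence/picket boundary only with a small probability; the compartment size, step length and hop probability are illustrative assumptions, not measured values.

```python
import random

# Toy 1-D hop-diffusion model: free diffusion inside compartments,
# rare hops across "fence/picket" boundaries.
COMPARTMENT_SIZE = 100.0   # nm, assumed compartment width
STEP = 2.0                 # nm, assumed step length per time step
HOP_PROBABILITY = 0.01     # assumed chance of crossing a boundary when it is hit

def simulate(steps: int, seed: int = 0) -> float:
    """Return the final position of a molecule after `steps` time steps."""
    rng = random.Random(seed)
    x = COMPARTMENT_SIZE / 2.0      # start in the middle of a compartment
    for _ in range(steps):
        trial = x + rng.choice((-STEP, STEP))
        # Does the trial step cross a compartment boundary?
        if int(trial // COMPARTMENT_SIZE) != int(x // COMPARTMENT_SIZE):
            if rng.random() < HOP_PROBABILITY:
                x = trial           # successful hop into the next compartment
            # otherwise the molecule is reflected by the fence/picket row
        else:
            x = trial               # free diffusion within the compartment
    return x

if __name__ == "__main__":
    # Final positions stay near the starting compartment far more often
    # than an unconfined random walk would predict.
    print([round(simulate(10_000, seed=s), 1) for s in range(5)])
```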
References
Cell anatomy
Membrane biology | Fences and pickets model of plasma membrane structure | Chemistry | 998 |
1,657,860 | https://en.wikipedia.org/wiki/Hadwiger%20conjecture%20%28graph%20theory%29 | In graph theory, the Hadwiger conjecture states that if a graph G is loopless and has no K_k minor then its chromatic number satisfies χ(G) < k. It is known to be true for k ≤ 6. The conjecture is a generalization of the four-color theorem and is considered to be one of the most important and challenging open problems in the field.
In more detail, if all proper colorings of an undirected graph G use k or more colors, then one can find k disjoint connected subgraphs of G such that each subgraph is connected by an edge to each other subgraph. Contracting the edges within each of these subgraphs so that each subgraph collapses to a single vertex produces a complete graph K_k on k vertices as a minor of G.
This conjecture, a far-reaching generalization of the four-color problem, was made by Hugo Hadwiger in 1943 and is still unsolved. Bollobás, Catlin and Erdős call it "one of the deepest unsolved problems in graph theory."
Equivalent forms
An equivalent form of the Hadwiger conjecture (the contrapositive of the form stated above) is that, if there is no sequence of edge contractions (each merging the two endpoints of some edge into a single supervertex) that brings a graph G to the complete graph K_k, then G must have a vertex coloring with k − 1 colors.
In a minimal k-coloring of any graph G, contracting each color class of the coloring to a single vertex will produce a complete graph K_k. However, this contraction process does not produce a minor of G because there is (by definition) no edge between any two vertices in the same color class, thus the contraction is not an edge contraction (which is required for minors). Hadwiger's conjecture states that there exists a different way of properly edge contracting sets of vertices to single vertices, producing a complete graph K_k, in such a way that all the contracted sets are connected.
If F_k denotes the family of graphs having the property that all minors of graphs in F_k can be (k − 1)-colored, then it follows from the Robertson–Seymour theorem that F_k can be characterized by a finite set of forbidden minors. Hadwiger's conjecture is that this set consists of a single forbidden minor, K_k.
The Hadwiger number h(G) of a graph G is the size of the largest complete graph that is a minor of G (or equivalently can be obtained by contracting edges of G). It is also known as the contraction clique number of G. The Hadwiger conjecture can be stated in the simple algebraic form χ(G) ≤ h(G), where χ(G) denotes the chromatic number of G.
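For very small graphs, both sides of this inequality can be checked by exhaustive search. The sketch below (Python, using the networkx library purely for graph handling; that library choice is an assumption, not part of the article) computes the chromatic number by trying colorings with increasing numbers of colors, and the Hadwiger number by brute-force search for complete minors via labelled branch sets. It is exponential-time and intended only to illustrate χ(G) ≤ h(G) on tiny examples.

```python
from itertools import product
import networkx as nx  # assumed available; any small graph library would do

def chromatic_number(G):
    """Smallest number of colors in a proper vertex coloring (brute force)."""
    nodes = list(G.nodes)
    for c in range(1, len(nodes) + 1):
        for assignment in product(range(c), repeat=len(nodes)):
            colour = dict(zip(nodes, assignment))
            if all(colour[u] != colour[v] for u, v in G.edges):
                return c
    return len(nodes)

def has_complete_minor(G, k):
    """Brute force: is K_k a minor of G? Branch sets = labelled vertex classes."""
    nodes = list(G.nodes)
    for labels in product(range(k + 1), repeat=len(nodes)):  # label 0 = unused vertex
        parts = [[n for n, l in zip(nodes, labels) if l == i + 1] for i in range(k)]
        if any(not p for p in parts):
            continue                                    # every branch set nonempty
        if any(not nx.is_connected(G.subgraph(p)) for p in parts):
            continue                                    # every branch set connected
        if all(any(G.has_edge(u, v) for u in parts[i] for v in parts[j])
               for i in range(k) for j in range(i + 1, k)):
            return True                                 # all pairs joined by an edge
    return False

def hadwiger_number(G):
    k = 1
    while has_complete_minor(G, k + 1):
        k += 1
    return k

if __name__ == "__main__":
    G = nx.cycle_graph(5)                 # odd cycle: chromatic number 3
    print(chromatic_number(G), hadwiger_number(G))   # prints "3 3", so χ(G) ≤ h(G)
```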
Special cases and partial results
The case k = 2 is trivial: a graph requires more than one color if and only if it has an edge, and that edge is itself a K_2 minor. The case k = 3 is also easy: the graphs requiring three colors are the non-bipartite graphs, and every non-bipartite graph has an odd cycle, which can be contracted to a 3-cycle, that is, a K_3 minor.
In the same paper in which he introduced the conjecture, Hadwiger proved its truth for k ≤ 4. The graphs with no K_4 minor are the series–parallel graphs and their subgraphs. Each graph of this type has a vertex with at most two incident edges; one can 3-color any such graph by removing one such vertex, coloring the remaining graph recursively, and then adding back and coloring the removed vertex. Because the removed vertex has at most two edges, one of the three colors will always be available to color it when the vertex is added back.
The truth of the conjecture for k = 5 implies the four color theorem: for, if the conjecture is true, every graph requiring five or more colors would have a K_5 minor and would (by Wagner's theorem) be nonplanar.
Klaus Wagner proved in 1937 that the case k = 5 is actually equivalent to the four color theorem and therefore we now know it to be true. As Wagner showed, every graph that has no K_5 minor can be decomposed via clique-sums into pieces that are either planar or an 8-vertex Möbius ladder, and each of these pieces can be 4-colored independently of each other, so the 4-colorability of a K_5-minor-free graph follows from the 4-colorability of each of the planar pieces.
Robertson, Seymour and Thomas proved the conjecture for k = 6, also using the four color theorem; their paper with this proof won the 1994 Fulkerson Prize. It follows from their proof that linklessly embeddable graphs, a three-dimensional analogue of planar graphs, have chromatic number at most five. Due to this result, the conjecture is known to be true for k ≤ 6, but it remains unsolved for all k ≥ 7.
For k = 7, some partial results are known: every 7-chromatic graph must contain either a K_7 minor or both a K_{4,4} minor and a K_{3,5} minor.
Every graph with no K_k minor has a vertex with at most O(k√(log k)) incident edges, from which it follows that a greedy coloring algorithm that removes this low-degree vertex, colors the remaining graph, and then adds back the removed vertex and colors it, will color the given graph with O(k√(log k)) colors.
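The greedy procedure just described can be written out directly. The following sketch (Python, with networkx used only to supply an example graph, an assumption made for illustration) repeatedly removes a minimum-degree vertex, and then reinserts the removed vertices in reverse order, giving each the smallest color not used by its already-colored neighbors.

```python
import networkx as nx  # assumed available

def degeneracy_colouring(G):
    """Greedy coloring along a smallest-last (degeneracy) ordering:
    remove a minimum-degree vertex, color the rest, then add it back
    with the smallest color unused by its neighbors."""
    H = G.copy()
    order = []
    while len(H) > 0:
        v = min(H.nodes, key=H.degree)   # a lowest-degree vertex of what remains
        order.append(v)
        H.remove_node(v)
    colour = {}
    for v in reversed(order):            # add vertices back in reverse removal order
        used = {colour[u] for u in G.neighbors(v) if u in colour}
        colour[v] = next(c for c in range(len(G)) if c not in used)
    return colour

if __name__ == "__main__":
    G = nx.petersen_graph()              # 3-regular, so at most 4 colors are used
    c = degeneracy_colouring(G)
    print(max(c.values()) + 1, "colours used")
```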
In the 1980s, Alexander V. Kostochka and Andrew Thomason both independently proved that every graph with no K_k minor has average degree O(k√(log k)) and can thus be colored using O(k√(log k)) colors.
A sequence of improvements to this bound has led to a proof of O(k log log k)-colorability for graphs without K_k minors.
Generalizations
György Hajós conjectured that Hadwiger's conjecture could be strengthened to subdivisions rather than minors: that is, that every graph with chromatic number k contains a subdivision of a complete graph K_k. Hajós' conjecture is true for k ≤ 4, but Catlin found counterexamples to this strengthened conjecture for k ≥ 7; the cases k = 5 and k = 6 remain open. Erdős and Fajtlowicz observed that Hajós' conjecture fails badly for random graphs: in the limit as the number of vertices, n, goes to infinity, the probability approaches one that a random n-vertex graph has chromatic number Θ(n / log n) and that its largest clique subdivision has O(√n) vertices. In this context, it is worth noting that the probability also approaches one that a random graph has Hadwiger number greater than or equal to its chromatic number, so the Hadwiger conjecture holds for random graphs with high probability; more precisely, the Hadwiger number is with high probability proportional to n/√(log n).
It has also been asked whether Hadwiger's conjecture could be extended to list coloring. For k ≤ 4, every graph with list chromatic number k has a K_k minor. However, the maximum list chromatic number of planar graphs is 5, not 4, so the extension fails already for K_5-minor-free graphs. More generally, for every t ≥ 1, there exist graphs whose Hadwiger number is 3t + 1 and whose list chromatic number is 4t + 1.
Gerards and Seymour conjectured that every graph with chromatic number k has a complete graph K_k as an odd minor. Such a structure can be represented as a family of k vertex-disjoint subtrees of G, each of which is two-colored, such that each pair of subtrees is connected by a monochromatic edge. Although graphs with no odd K_k minor are not necessarily sparse, a similar upper bound holds for them as it does for the standard Hadwiger conjecture: a graph with no odd K_k minor has chromatic number O(k√(log k)).
By imposing extra conditions on G, it may be possible to prove the existence of larger minors than K_k. One example is the snark theorem, that every cubic graph requiring four colors in any edge coloring has the Petersen graph as a minor, conjectured by W. T. Tutte and announced to be proved in 2001 by Robertson, Sanders, Seymour, and Thomas.
Notes
References
Graph coloring
Graph minor theory
Conjectures
Unsolved problems in graph theory | Hadwiger conjecture (graph theory) | Mathematics | 1,459 |
73,221,578 | https://en.wikipedia.org/wiki/Leucocoprinus%20russoceps | Leucocoprinus russoceps is a species of mushroom producing fungus in the family Agaricaceae.
Taxonomy
It was described in 1871 by the English botanists and mycologists Miles Joseph Berkeley and Christopher Edmund Broome who classified it as Agaricus (Lepiota) russoceps.
In 1887 it was reclassified as Lepiota russoceps by the Italian mycologist Pier Andrea Saccardo and then as Mastocephalus russoceps in 1891 by the German botanist Otto Kuntze; however, Kuntze's Mastocephalus genus, along with most of Revisio generum plantarum, was not widely accepted by the scientific community of the age, so it remained a Lepiota.
In 1987 it was reclassified as Leucocoprinus russoceps by the mycologist Jörg Raithelhuber.
Description
Leucocoprinus russoceps is a small dapperling mushroom. Cap: 1.5-2.5 cm wide, starting campanulate before flattening and expanding to convex. The surface is yellow-brown to ochre with a pulverulent, powdery coating and striations from the edges. Gills: Pale, 'almost free' and close. Stem: 4 cm long and 1.5 mm thick at the top, with a claviform taper to 4 mm wide at the base. The surface is paler than the cap, sometimes with a slight greenish tint with age, whilst the interior is stuffed with white flesh. The stem ring may disappear. Spores: Smooth, ovate to elliptic with a faint germ pore. 7.2-9 x 4.2-4.6 μm.
Habitat and distribution
The specimens were found growing on the ground in forests in Brazil.
The specimens studied by Berk and Broome were found on the ground in June 1860 in Ceylon (now Sri Lanka).
References
russoceps
Fungi of South America
Fungus species | Leucocoprinus russoceps | Biology | 404 |
52,875,487 | https://en.wikipedia.org/wiki/10K%20resolution | 10K resolution refers to a horizontal display resolution of approximately 10,000 pixels. Unlike 4K UHD and 8K UHD, there are no 10K resolutions defined in the UHDTV broadcast standard. The first 10K displays demonstrated were ultrawide "21:9" screens with a resolution of 10240 × 4320, the same vertical resolution as 8K UHD.
History
On June 5, 2015, Chinese manufacturer BOE showed a 10K display with an aspect ratio of 64:27 (≈21:9) and a resolution of 10240 × 4320.
In November 2016, the Consumer Technology Association published CTA-861-G, an update to their standard for digital video transmission formats. This revision added support for 10240 × 4320, a 10K resolution with an aspect ratio of 64:27 (≈21:9), at up to 120 Hz.
On January 4, 2017, HDMI version 2.1 was officially announced, and was later released on November 28, 2017. HDMI 2.1 includes support for all the formats listed in the CTA-861-G standard, including 10K (10240 × 4320) at up to 120 Hz. HDMI 2.1 specifies a new Ultra High Speed HDMI cable which supports a bandwidth of up to 48 Gbit/s. Display Stream Compression (DSC) 1.2a is used for video formats higher than 8K resolution with 4:2:0 chroma subsampling.
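A rough calculation shows why Display Stream Compression is needed for this format within HDMI 2.1's 48 Gbit/s limit: the uncompressed pixel rate alone far exceeds the link bandwidth. The figures below (Python) ignore blanking intervals and assume 10-bit RGB (30 bits per pixel); both simplifications are assumptions made for illustration.

```python
# Rough uncompressed bandwidth estimate for 10240 x 4320 at 120 Hz.
# Assumptions: 10-bit RGB (30 bits/pixel), no chroma subsampling,
# blanking overhead ignored -- real link rates are somewhat higher.
width, height = 10240, 4320
refresh_hz = 120
bits_per_pixel = 30

bits_per_second = width * height * refresh_hz * bits_per_pixel
print(f"{bits_per_second / 1e9:.1f} Gbit/s uncompressed")        # ~159.3 Gbit/s
print(f"vs. 48 Gbit/s HDMI 2.1 -> compression ratio needed: "
      f"{bits_per_second / 48e9:.1f}x")                          # ~3.3x
```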
10K resolutions are also sometimes seen in the case of gaming, for instance high resolution screenshots in the case of Minecraft with the OptiFine mod.
Cameras
Multiple companies produce photo cameras capable of 10K and higher resolutions, such as Phase One, Fujifilm, Hasselblad, and Sony. Other companies also create sensors capable of 10K resolution, though these are mostly not available to the general public and are often used for scientific or industrial purposes.
Blackmagic Design is the only company producing a video camera capable of filming at resolutions of 10K or higher, with its URSA Mini Pro 12K.
See also
Ultrawide formats
References
External links
– official site
Television technology
Digital imaging
Ultra-high-definition television | 10K resolution | Technology | 447 |
26,998,311 | https://en.wikipedia.org/wiki/BSSN%20formalism | The BSSN formalism (Baumgarte, Shapiro, Shibata, Nakamura formalism) is a formalism of general relativity that was developed by Thomas W. Baumgarte, Stuart L. Shapiro, Masaru Shibata and Takashi Nakamura between 1987 and 1999. It is a modification of the ADM formalism developed during the 1950s.
The ADM formalism is a Hamiltonian formalism that does not permit stable and long-term numerical simulations. In the BSSN formalism, the ADM equations are modified by introducing auxiliary variables. The formalism has been tested for a long-term evolution of linear gravitational waves and used for a variety of purposes such as simulating the non-linear evolution of gravitational waves or the evolution and collision of black holes.
See also
ADM formalism
Canonical coordinates
Canonical gravity
Hamiltonian mechanics
References
Mathematical methods in general relativity
Formalism (deductive) | BSSN formalism | Physics | 188 |
1,866,533 | https://en.wikipedia.org/wiki/Electronic%20filter | Electronic filters are a type of signal processing filter in the form of electrical circuits. This article covers those filters consisting of lumped electronic components, as opposed to distributed-element filters. That is, using components and interconnections that, in analysis, can be considered to exist at a single point. These components can be in discrete packages or part of an integrated circuit.
Electronic filters remove unwanted frequency components from the applied signal, enhance wanted ones, or both. They can be:
passive or active
analog or digital
high-pass, low-pass, band-pass, band-stop (band-rejection; notch), or all-pass.
discrete-time (sampled) or continuous-time
linear or non-linear
infinite impulse response (IIR type) or finite impulse response (FIR type)
The most common types of electronic filters are linear filters, regardless of other aspects of their design. See the article on linear filters for details on their design and analysis.
History
The oldest forms of electronic filters are passive analog linear filters, constructed using only resistors and capacitors or resistors and inductors. These are known as RC and RL single-pole filters respectively. However, these simple filters have very limited uses. Multipole LC filters provide greater control of response form, bandwidth and transition bands. The first of these filters was the constant k filter, invented by George Campbell in 1910. Campbell's filter was a ladder network based on transmission line theory. Together with improved filters by Otto Zobel and others, these filters are known as image parameter filters. A major step forward was taken by Wilhelm Cauer who founded the field of network synthesis around the time of World War II. Cauer's theory allowed filters to be constructed that precisely followed some prescribed frequency function.
Classification by technology
Passive filters
Passive implementations of linear filters are based on combinations of resistors (R), inductors (L) and capacitors (C). These types are collectively known as passive filters, because they do not depend upon an external power supply and they do not contain active components such as transistors.
Inductors block high-frequency signals and conduct low-frequency signals, while capacitors do the reverse. A filter in which the signal passes through an inductor, or in which a capacitor provides a path to ground, presents less attenuation to low-frequency signals than high-frequency signals and is therefore a low-pass filter. If the signal passes through a capacitor, or has a path to ground through an inductor, then the filter presents less attenuation to high-frequency signals than low-frequency signals and therefore is a high-pass filter. Resistors on their own have no frequency-selective properties, but are added to inductors and capacitors to determine the time-constants of the circuit, and therefore the frequencies to which it responds.
The inductors and capacitors are the reactive elements of the filter. The number of elements determines the order of the filter. In this context, an LC tuned circuit being used in a band-pass or band-stop filter is considered a single element even though it consists of two components.
At high frequencies (above about 100 megahertz), sometimes the inductors consist of single loops or strips of sheet metal, and the capacitors consist of adjacent strips of metal. These inductive or capacitive pieces of metal are called stubs.
Single element types
The simplest passive filters, RC and RL filters, include only one reactive element, except for the hybrid LC filter, which is characterized by inductance and capacitance integrated in one element.
L filter
An L filter consists of two reactive elements, one in series and one in parallel.
T and π filters
Three-element filters can have a 'T' or 'π' topology and in either geometries, a low-pass, high-pass, band-pass, or band-stop characteristic is possible. The components can be chosen symmetric or not, depending on the required frequency characteristics. The high-pass T filter in the illustration, has a very low impedance at high frequencies, and a very high impedance at low frequencies. That means that it can be inserted in a transmission line, resulting in the high frequencies being passed and low frequencies being reflected. Likewise, for the illustrated low-pass π filter, the circuit can be connected to a transmission line, transmitting low frequencies and reflecting high frequencies. Using m-derived filter sections with correct termination impedances, the input impedance can be reasonably constant in the pass band.
Multiple-element types
Multiple-element filters are usually constructed as ladder networks. These can be seen as a continuation of the L, T and π designs of filters. More elements are needed when it is desired to improve some parameter of the filter, such as stop-band rejection or the slope of the transition from pass-band to stop-band.
Active filters
Active filters are implemented using a combination of passive and active (amplifying) components, and require an outside power source. Operational amplifiers are frequently used in active filter designs. These can have high Q factor, and can achieve resonance without the use of inductors. However, their upper frequency range is limited by the bandwidth of the amplifiers.
Other filter technologies
There are many filter technologies other than lumped component electronics. These include digital filters, crystal filters, mechanical filters, surface acoustic wave (SAW) filters, thin-film bulk acoustic resonator (TFBAR, FBAR) based filters, garnet filters, and atomic filters (used in atomic clocks).
The transfer function
see also Filter (signal processing) for further analysis
The transfer function of a filter is the ratio of the output signal to that of the input signal as a function of the complex frequency s:

H(s) = Vout(s) / Vin(s)

The transfer function of all linear time-invariant filters, when constructed of lumped components (as opposed to distributed components such as transmission lines), will be the ratio of two polynomials in s, i.e. a rational function of s. The order of the transfer function will be the highest power of s encountered in either the numerator or the denominator.
Classification by topology
Electronic filters can be classified by the technology used to implement them.
Filters using passive filter and active filter technology can be further classified by the particular electronic filter topology used to implement them.
Any given filter transfer function may be implemented in any electronic filter topology.
Some common circuit topologies are:
Cauer topology – passive
Sallen–Key topology – active
Multiple feedback topology – active
State variable topology – active
Biquadratic topology – active
Classification by design methodology
Historically, linear analog filter design has evolved through three major approaches. The oldest designs are simple circuits where the main design criterion was the Q factor of the circuit. This reflected the radio receiver application of filtering as Q was a measure of the frequency selectivity of a tuning circuit. From the 1920s filters began to be designed from the image point of view, mostly being driven by the requirements of telecommunications. After World War II the dominant methodology was network synthesis. The higher mathematics used originally required extensive tables of polynomial coefficient values to be published but modern computer resources have made that unnecessary.
Direct circuit analysis
Low order filters can be designed by directly applying basic circuit laws such as Kirchhoff's laws to obtain the transfer function. This kind of analysis is usually only carried out for simple filters of 1st or 2nd order.
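As an illustration of this direct approach, a first-order RC low-pass filter follows immediately from the voltage-divider relation, giving H(s) = 1/(1 + sRC) and a cutoff frequency of 1/(2πRC). The short sketch below (Python) evaluates that response numerically; the component values are arbitrary examples, not values taken from the article.

```python
import math

# First-order RC low-pass filter analysed directly from the voltage divider:
#   H(s) = (1/sC) / (R + 1/sC) = 1 / (1 + sRC)
R = 1_000.0        # ohms (example value)
C = 100e-9         # farads (example value)
f_cutoff = 1.0 / (2.0 * math.pi * R * C)     # -3 dB point

def magnitude_db(f_hz: float) -> float:
    """Gain of the RC low-pass filter at frequency f_hz, in decibels."""
    w = 2.0 * math.pi * f_hz
    h = 1.0 / complex(1.0, w * R * C)        # H(jw) = 1 / (1 + jwRC)
    return 20.0 * math.log10(abs(h))

print(f"cutoff ~ {f_cutoff:.0f} Hz")                     # ~1592 Hz for these values
for f in (100.0, f_cutoff, 10_000.0):
    print(f"{f:8.0f} Hz: {magnitude_db(f):6.2f} dB")     # ~0 dB, -3 dB, then rolling off
```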
Image impedance analysis
This approach analyses the filter sections from the point of view of the filter being in an infinite chain of identical sections. It has the advantages of simplicity of approach and the ability to easily extend to higher orders. It has the disadvantage that accuracy of predicted responses relies on filter terminations in the image impedance, which is usually not the case.
Network synthesis
The network synthesis approach starts with a required transfer function and then expresses that as a polynomial equation of the input impedance of the filter. The actual element values of the filter are obtained by continued-fraction or partial-fraction expansions of this polynomial. Unlike the image method, there is no need for impedance matching networks at the terminations as the effects of the terminating resistors are included in the analysis from the start.
Here is an image comparing Butterworth, Chebyshev, and elliptic filters. The filters in this illustration are all fifth-order low-pass filters. The particular implementation – analog or digital, passive or active – makes no difference; their output would be the same.
As is clear from the image, elliptic filters are sharper than all the others, but they show ripples on the whole bandwidth.
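A comparison of this kind can be reproduced with standard signal-processing tools. The sketch below (Python with SciPy, assumed to be available) designs fifth-order analog low-pass prototypes of each type and prints their gain at and above the passband edge; the 1 dB ripple and 40 dB stopband figures are example parameters, not values taken from the article.

```python
import numpy as np
from scipy import signal  # SciPy assumed available

# Fifth-order analog low-pass prototypes with a 1 rad/s passband edge.
filters = {
    "Butterworth": signal.butter(5, 1.0, analog=True, output="ba"),
    "Chebyshev I": signal.cheby1(5, 1.0, 1.0, analog=True, output="ba"),
    "Elliptic":    signal.ellip(5, 1.0, 40.0, 1.0, analog=True, output="ba"),
}

w = np.array([1.0, 2.0, 4.0])          # frequencies (rad/s) at which to probe the gain
for name, (b, a) in filters.items():
    _, h = signal.freqs(b, a, worN=w)
    gains = ", ".join(f"{20 * np.log10(abs(x)):6.1f} dB" for x in h)
    print(f"{name:12s} @ {list(w)} rad/s: {gains}")
```

The printout makes the trade-off visible: the elliptic design attenuates fastest just past the passband edge, at the cost of ripple in both bands.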
See also
Analog filter
Audio crossover
Audio filter
Cascaded integrator-comb filter
Comb filter
DSL filter
Electronic low-pass filter
Nyquist filter
RF and microwave filter
Switched-capacitor filter
Tone control circuits
Voltage-controlled filter
Notes and references
Catalog of passive filter types and component values. The Bible for practical electronic filter design.
External links
National Semiconductor AN-779 (TI SNOA224a) application note describing analog filter theory
Fundamentals of Electrical Engineering and Electronics – Detailed explanation of all types of filters
BAW filters (in French; PDF)
Some Interesting Filter Design Configurations & Transformations
Analog Filters for Data Conversion
Electronic circuits | Electronic filter | Chemistry,Engineering | 1,889 |
20,909,910 | https://en.wikipedia.org/wiki/Kasiri | Kasiri, also known as kaschiri and cassava beer, is an alcoholic drink made from cassava by Amerindians in Venezuela, Suriname and Guyana.
The roots of the cassava plant are grated, diluted in water, and pressed in a cylindrical basketwork press to extract the juice. The extracted juice is fermented to produce kasiri. In Brazil and Suriname the cassava roots are chewed and expectorated, a process in which the amylase enzyme in saliva turns the starch into sugars and starts fermentation.
The juice can also be boiled until it becomes a dark viscous syrup called kasripo (cassareep). This syrup has antiseptic properties and is used for flavoring.
See also
List of amylase-induced fermentations
References
Native American cuisine
Surinamese cuisine
Amylase induced fermentation
Guyanese cuisine
Types of beer
Cassava dishes | Kasiri | Chemistry | 190 |
37,606,596 | https://en.wikipedia.org/wiki/Shabaka%20%28window%29 | Shebeke are windows filled with coloured glass, created by Azerbaijani folk craftsmen from small wooden parts without glue or nails.
In the building of the Sheki Khans' Palace, shebeke fills the walls and window openings of the halls and rooms. Geometrically, the shebeke windows fit with the general composition of the main facade of the palace. The continuous stained-glass shebeke windows of the central halls and side rooms overlook the facade of the palace. The replacement of the outer walls of the halls on both floors and of the upper rooms by lifting stained-glass sashes is believed to be a feature of this ceremonial pavilion architecture.
Numerous residential stone houses of the 18th-19th centuries decorated with shebeke could also be found in the city of Shusha.
Shebeke art
It is an intangible cultural heritage with an artistic and constructive form in Middle Eastern architecture and in the decorative arts and crafts. It has been used in the architecture of Azerbaijan since the 11th-12th centuries. A masterpiece of decorative art, a shebeke is a flat panel consisting of small glass pieces assembled by the master with wooden elements that fix each other without glue or nails. The main "secret" of this art lies in the joining of wooden parts with a ledge and an indentation, between which small glass pieces are inserted. The wooden parts are made from solid wood species - boxwood, walnut, beech, and oak. Shebeke patterns, meaning lattice, symbolize the Sun, the energy of life, the eternal flow of time and the infinity of the universe.
Shebeke structure
The surface dimensions of a shebeke can vary from a few square centimetres to several square metres, depending on the functional purpose of the item. Based on their connection with architecture, shebeke works of art can be classified in two directions: part of an architectural structure - a door, window, or staircase; or an individual interior item such as a screen, lamp, chest, or cabinet. Since the foundation of the ornament is made up of precise geometric figures, there is another classification according to composition: "jafari", "sekkiz", "onalty", "gullyabi", "shamsi", "gelu", and also "bendi-rumi".
Masters of art
Bearers of the art of shebeke and its symbols are folk masters. Famous folk craftsmen include Mekhti Mekhtiyev (19th century), Shahbuzla Abuzer Badalov (18th-19th centuries), and Abbasgulu Sheki (19th century). The revival of this art in the twentieth century was facilitated by Abdulhuseyn Babaev (1877-1961) and Ashraf Rasulov (1928-1997).
At present, the development of the art is promoted by Ashraf Rasulov's son Tofig (born 1961) and his grandson Ilgar (born 1990). Soltan Ismailov and Huseyn Mustafazade (Sheki), Jabir Jabbarov (Ordubad), and Rafik Allahverdiyev (Shusha) also stand out in this area.
Restoration
In the 1950s and 1960s, under the direction of Ashraf Rasulov, the Sheki Khans' Palace was restored for the first time. In 2001, the window shebeke of the Juma and Ambaras mosques (17th-18th centuries) in Ordubad were restored under the lead of the master Jabir Jabbarov.
Huseyn Mustafazadeh participated in the restoration of the Palace of Sheki Khans in 2002-2004, under an agreement with the German company "Den Cmalcoh Ege MEK Gembelg GMB". As a result, the doors and windows of the palace were restored, as well as most of the ceiling.
Shebeke art in Azerbaijan
The word “shebeke”, in translation from the Azerbaijani language, means “net”, “lattice”.
On the territory of Azerbaijan, shebeke as an art form was widespread in cities such as Sheki, Shusha, Ordubad, Baku, Ganja, Lankaran, Nakhichevan and Derbent (Russian Federation). The main center of Shebeke art is Sheki, where this tradition is still presented in its pure, classic form. Samples of this kind of art dated with 18th-19th centuries are concentrated here. A classic example of this art form is the Sheki Khans Palace (1762).
Shebeke art and its symbolism differ to some extent across the regions of Azerbaijan. For example, from the point of view of manufacturing technology, the Ordubad craftsmen preferred the simplicity of geometric shapes and an ascetic colour scheme. Nevertheless, in almost all regions of the country, coloured glass is the main material.
Gallery
See also
Stained glass
Culture of Azerbaijan
References
Arts in Azerbaijan
Azerbaijani inventions
Architectural elements
Decorative arts
Glass art
Architecture in Azerbaijan
Islamic glass | Shabaka (window) | Technology,Engineering | 1,025 |
1,380,083 | https://en.wikipedia.org/wiki/International%20Psychopharmacology%20Algorithm%20Project | The International Psychopharmacology Algorithm Project (IPAP) is a non-profit corporation whose purpose is to "enable, enhance, and propagate" use of algorithms for the treatment of some Axis I psychiatric disorders.
Kenneth O Jobson founded the Project. The Dean Foundation provides funding.
IPAP has organized and supported several international conferences on psychopharmacology algorithms. It has also supported the creation of several algorithms based on expert opinion. It is now in the process of creating "evidence-based algorithms", that is, algorithms created by experts and annotated with the evidence that supports them. A schizophrenia algorithm has been created, and one on post-traumatic stress disorder (PTSD) was released in July 2005. A generalized anxiety disorder (GAD) algorithm was released in 2006. Periodic updates of the algorithms are released as the evidence base changes. In addition, the algorithms are being translated into various non-English languages (Chinese, Japanese, Spanish, and Thai) as the availability of translators permits.
References
External links
Psychiatry organizations
Biostatistics | International Psychopharmacology Algorithm Project | Chemistry | 216 |
2,148,734 | https://en.wikipedia.org/wiki/Drum%20replacement | Drum replacement is the practice, in modern music production, of an engineer or producer recording a live drummer and replacing (or adding to) the sound of a particular drum with a pre-recorded sample. For example, a drummer might play a beat, whereupon the engineer might then replace all of the snare hits with the sound of a hand-clap. It is considered by some to be one of the most arcane practices of the modern music production industry and is an example of the considerable influence of computers in modern music, even in genres not strictly classified as "electronic music."
Origins
The practice is an extension of the recording techniques of the 1970s through to the 1980s, wherein the constant search for better or "more perfect" sound led to a variety of techniques being tested, including the extensive use of drum machines. Among these techniques was drum replacement, which was pioneered by producer Roger Nichols while in the studio with Steely Dan in the late '70s, and has grown in both popularity and complexity since. One of the most common uses of this technique is the replacing of every snare hit in a performance (which may or may not sound subjectively "good") with an "ideal" snare drum hit.
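As an illustration of the idea, the sketch below shows a very simplified version of what drum-replacement tools automate: detect the transients (hits) in a recorded drum track and overwrite, or layer, each one with a pre-recorded sample. It is a minimal, hypothetical example using only NumPy; the threshold-based detector, the function names and the synthetic demo data are illustrative assumptions, not how any particular commercial plug-in works.

```python
import numpy as np

def detect_hits(track, sample_rate, threshold=0.5, min_gap_s=0.1):
    """Crude transient detection: indices where the rectified signal rises above
    `threshold` times the track's peak, with a refractory gap so one hit is not
    counted twice."""
    envelope = np.abs(track)
    peak = max(float(envelope.max()), 1e-12)
    min_gap = int(min_gap_s * sample_rate)
    hits, last = [], -min_gap
    for i, value in enumerate(envelope):
        if value > threshold * peak and i - last >= min_gap:
            hits.append(i)
            last = i
    return hits

def replace_hits(track, hits, sample):
    """Overwrite each detected hit with the pre-recorded sample."""
    out = track.copy()
    for i in hits:
        end = min(len(out), i + len(sample))
        out[i:end] = sample[: end - i]
        # use `out[i:end] += sample[: end - i]` instead to layer rather than replace
    return out

if __name__ == "__main__":
    sr = 44_100
    # Synthetic "snare" track: two short, decaying noise bursts one half-second apart.
    track = np.zeros(sr)
    for start in (0, sr // 2):
        track[start:start + 2000] = np.random.randn(2000) * np.exp(-np.linspace(0, 6, 2000))
    ideal_snare = np.hanning(4000) * np.random.randn(4000)
    hits = detect_hits(track, sr)
    print(hits)  # roughly one index per burst
    processed = replace_hits(track, hits, ideal_snare)
```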
Should the decision be made to use drum replacement techniques, the actual implementation of the practice usually falls to an audio engineer during the mixing stage.
Association
Drum replacing is often mentioned, along with autotune, harmonizers, and advanced compressors, as being symptomatic of the "artificial nature" of modern western music by certain critics. Some critics suggest that the practice defeats the purpose of having a live drummer as opposed to a drum machine, since the result is effectively exactly the same as what a drum machine would produce if the drum machine had a custom sample recorded for it by the engineer. Others laud it as one of the subtleties of studio technique, used by engineers to give their craft more complexity in an increasingly automated world.
References
External links
An example of drum replacement software - Slate Digital TRIGGER
An example of drum replacement software - Drumagog
Audio engineering | Drum replacement | Engineering | 422 |
211,177 | https://en.wikipedia.org/wiki/Lodestone | Lodestones are naturally magnetized pieces of the mineral magnetite. They are naturally occurring magnets, which can attract iron. The property of magnetism was first discovered in antiquity through lodestones. Pieces of lodestone, suspended so they could turn, were the first magnetic compasses, and their importance to early navigation is indicated by the name lodestone, which in Middle English means "course stone" or "leading stone",
from the now-obsolete meaning of lode as "journey, way".
Lodestone is one of only a very few minerals that is found naturally magnetized. Magnetite is black or brownish-black, with a metallic luster, a Mohs hardness of 5.5–6.5 and a black streak.
Origin
The process by which lodestone is created has long been an open question in geology. Only a small amount of the magnetite on the Earth is found magnetized as lodestone. Ordinary magnetite is attracted to a magnetic field as iron and steel are, but does not tend to become magnetized itself; it has too low a magnetic coercivity (resistance to changes in magnetization). Microscopic examination of lodestones has found them to be made of magnetite (Fe3O4) with inclusions of maghemite (cubic Fe2O3), often with impurity metal ions of titanium, aluminium, and manganese. This inhomogeneous crystalline structure gives this variety of magnetite sufficient coercivity to remain magnetized and thus be a permanent magnet.
The other question is how lodestones get magnetized. The Earth's magnetic field at 0.5 gauss is too weak to magnetize a lodestone by itself. The leading theory is that lodestones are magnetized by the strong magnetic fields surrounding lightning bolts. This is supported by the observation that they are mostly found near the surface of the Earth, rather than buried at great depth.
History
One of the earliest known references to lodestone's magnetic properties was made by 6th century BC Greek philosopher Thales of Miletus, whom the ancient Greeks credited with discovering lodestone's attraction to iron and other lodestones. The name magnet may come from lodestones found in Magnesia, Anatolia. The ancient Indian medical text Sushruta Samhita describes using magnetic properties of the lodestone to remove arrows embedded in a person's body.
The earliest Chinese literary reference to magnetism occurs in the 4th-century BC Book of the Devil Valley Master (Guiguzi).
In the chronicle Lüshi Chunqiu, from the 2nd century BC, it is explicitly stated that "the lodestone makes iron come or it attracts it." The earliest mention of a needle's attraction appears in a work composed between 20 and 100 AD, the Lunheng (Balanced Inquiries): "A lodestone attracts a needle." In the 2nd century BC, Chinese geomancers were experimenting with the magnetic properties of lodestone to make a "south-pointing spoon" for divination. When it is placed on a smooth bronze plate, the spoon would invariably rotate to a north–south axis. While this has been shown to work, archaeologists have yet to discover an actual spoon made of magnetite in a Han tomb.
Based on his discovery of an Olmec artifact (a shaped and grooved magnetic bar) in North America, astronomer John Carlson suggests that lodestone may have been used by the Olmec more than a thousand years prior to the Chinese discovery. Carlson speculates that the Olmecs, for astrological or geomantic purposes, used similar artifacts as a directional device, or to orient their temples, the dwellings of the living, or the interments of the dead. Detailed analysis of the Olmec artifact revealed that the "bar" was composed of hematite with titanium lamellae of Fe2–xTixO3 that accounted for the anomalous remanent magnetism of the artifact.
"A century of research has pushed back the first mention of the magnetic compass in Europe to Alexander Neckam about +1190, followed soon afterwards by Guyot de Provins in +1205 and Jacques de Vitry in +1269. All other European claims have been excluded by detailed study..."
Lodestones have frequently been displayed as valuable or prestigious objects. The Ashmolean Museum in Oxford contains a lodestone adorned with a gilt coronet that was donated by Mary Cavendish in 1756, possibly to secure her husband's appointment as Chancellor of Oxford University. Isaac Newton's signet ring reportedly contained a lodestone which was capable of lifting more than 200 times its own weight. And in 17th century London, the Royal Society displayed a spherical lodestone (a terrella or 'little Earth'), which was used to illustrate the Earth's magnetic fields and the function of mariners' compasses. One contemporary writer, the satirist Ned Ward, noted how the terrella "made a paper of Steel Filings prick up themselves one upon the back of another, that they stood pointing like the Bristles of a Hedge-Hog; and gave such Life and Merriment to a Parcel of Needles, that they danc'd [...] as if the devil were in them."
References
External links
Lodestone
Ancient Greek technology
Electric and magnetic fields in matter
Iron(II,III) minerals
Oxide minerals | Lodestone | Physics,Chemistry,Materials_science,Engineering | 1,128 |
931,501 | https://en.wikipedia.org/wiki/Ubiquitin%20carboxy-terminal%20hydrolase%20L1 | Ubiquitin carboxy-terminal hydrolase L1 (, ubiquitin C-terminal hydrolase, UCH-L1) is a deubiquitinating enzyme.
Function
UCH-L1 is a member of a gene family whose products hydrolyze small C-terminal adducts of ubiquitin to generate the ubiquitin monomer. Expression of UCH-L1 is highly specific to neurons and to cells of the diffuse neuroendocrine system and their tumors. It is abundantly present in all neurons (accounts for 1-2% of total brain protein), expressed specifically in neurons and testis/ovary.
The catalytic triad of UCH-L1 contains a cysteine at position 90, an aspartate at position 176, and a histidine at position 161 that are responsible for its hydrolase activity.
Relevance to neurodegenerative disorders
A point mutation (I93M) in the gene encoding this protein is implicated as the cause of Parkinson's disease in one German family, although this finding is controversial, as no other Parkinson's disease patients with this mutation have been found.
Furthermore, a polymorphism (S18Y) in this gene has been found to be associated with a reduced risk for Parkinson's disease. This polymorphism has specifically been shown to have antioxidant activity.
Another potentially protective function of UCH-L1 is its reported ability to stabilize monoubiquitin, an important component of the ubiquitin proteasome system. It is thought that by stabilizing the monomers of ubiquitin and thereby preventing their degradation, UCH-L1 increases the available pool of ubiquitin to be tagged onto proteins destined to be degraded by the proteasome.
The gene is also associated with Alzheimer's disease, and required for normal synaptic and cognitive function. Loss of Uchl1 increases the susceptibility of pancreatic beta-cells to programmed cell death, indicating that this protein plays a protective role in neuroendocrine cells and illustrating a link between diabetes and neurodegenerative diseases.
Patients with early-onset neurodegeneration in which the causative mutation was in the UCHL1 gene (specifically, the ubiquitin binding domain, E7A) display blindness, cerebellar ataxia, nystagmus, dorsal column dysfunction, and upper motor neuron dysfunction.
Ectopic expression
Although UCH-L1 protein expression is specific to neurons and testis/ovary tissue, it has been found to be expressed in certain lung-tumor cell lines. This abnormal expression of UCH-L1 is implicated in cancer and has led to the designation of UCH-L1 as an oncogene.
Furthermore, there is evidence that UCH-L1 might play a role in the pathogenesis of membranous glomerulonephritis (mGN), as de novo UCH-L1 expression in podocytes was seen in PHN, the rat model of human mGN. This UCH-L1 expression is thought to induce, at least in part, podocyte hypertrophy.
Protein structure
Human UCH-L1 and the closely related protein UCHL3 have one of the most complicated knot structures yet discovered for a protein, with five knot crossings. It is speculated that a knot structure may increase a protein's resistance to degradation in the proteasome.
The conformation of the UCH-L1 protein may also be an important indication of neuroprotection or pathology. For example, the UCH-L1 dimer has been shown to exhibit the potentially pathogenic ligase activity and may lead to the aforementioned increase in aggregation of α-synuclein. The S18Y polymorphism of UCH-L1 has been shown to be less-prone to dimerization.
Interactions
Ubiquitin carboxy-terminal hydrolase L1 has been shown to interact with COP9 constitutive photomorphogenic homolog subunit 5.
UCH-L1 has also been shown to interact with α-synuclein, another protein implicated in the pathology of Parkinson disease. This activity is reported to be the result of its ubiquityl ligase activity which may be associated with the I93M pathogenic mutation in the gene.
Most recently, UCH-L1 has been demonstrated to interact with the E3 ligase, parkin. Parkin has been demonstrated to bind and ubiquitinylate UCH-L1 to promote lysosomal degradation of UCH-L1.
See also
Ubiquitin carboxyl-terminal esterase L3—the gene UCHL3
Alpha synuclein
Parkinson disease
Proteasome
References
Further reading
External links
EC 3.1.2
Molecular neuroscience | Ubiquitin carboxy-terminal hydrolase L1 | Chemistry | 1,008 |
18,195,323 | https://en.wikipedia.org/wiki/Laminaribiose | Laminaribiose C12H22O11 is a disaccharide which is used notably in the agricultural field and as an antiseptic. It is in general obtained by hydrolysis or by acetolysis of natural polysaccharides of plant origin. It is also a product of the caramelization of glucose.
References
Disaccharides | Laminaribiose | Chemistry | 78 |
59,493,423 | https://en.wikipedia.org/wiki/Alice%20Archenhold | Alice Archenhold (née Markus; 27 August 1874 – 9 February 1943) was a German astronomer whose husband was fellow astronomer Friedrich Simon Archenhold.
Alice Markus was born in Wiesbaden, Germany, and married Friedrich Simon Archenhold in July 1897 and lived in Berlin. They went on to have five children together.
Her sons, Günter, who became an astronomer, and Horst, both fled to England, but Alice was arrested and deported (along with her daughter Hilde) to Theresienstadt concentration camp, in Czechoslovakia, where she died on 9 February 1943.
She is commemorated on her husband's grave at the Zentralfriedhof Friedrichsfelde, Berlin.
In 2010 a street in Treptow-Köpenick was renamed after her as Alice Archenhold Weg.
See also
Photographs of Archenhold family: https://theskywasthelimit.de/en/history/
References
1874 births
Place of birth missing
1943 deaths
20th-century German astronomers
Women astronomers
German people who died in the Theresienstadt Ghetto
Women in Nazi Germany
Jewish German scientists
Jewish women scientists
Jewish astronomers | Alice Archenhold | Astronomy | 232 |
18,678,864 | https://en.wikipedia.org/wiki/V1291%20Aquilae | V1291 Aquilae is a single star in the equatorial constellation of Aquila. It has a yellow-white hue and is dimly visible to the naked eye with an apparent visual magnitude that fluctuates around 5.65. Based on parallax measurements, it is located at a distance of approximately 278 light years from the Sun. The star it is drifting closer with a radial velocity of −22 km/s.
In 1962, Helmut A. Abt and John C. Gloson published data showing that the star's brightness varied. Based on that publication, the star was given its variable star designation, V1291 Aquilae, in 1972.
This is a magnetic chemically peculiar star, or Ap star, with a stellar classification of F0VpSrCrEu, matching an F-type main-sequence star with abundance anomalies of strontium, chromium, and europium in the spectrum. It is a variable star of type Alpha2 Canum Venaticorum that ranges in visual magnitude from 5.61 down to 5.67 with a period of 223.826 days. This is most likely the mean rotational period of the star. V1291 Aquilae was one of the first Ap stars discovered with a period of more than 100 days. It shows a surface magnetic field strength of .
References
External links
HR 7575
Image V1291 Aquilae
F-type main-sequence stars
Ap stars
Alpha2 Canum Venaticorum variables
Aquila (constellation)
Durchmusterung objects
188041
097871
7575
Aquilae, V1291 | V1291 Aquilae | Astronomy | 344 |
47,944,052 | https://en.wikipedia.org/wiki/16-Androstene | 16-Androstenes, or androst-16-enes, are a class of endogenous androstane steroids that includes androstadienol, androstadienone, androstenone, and androstenol, which are pheromones. Some of the 16-androstenes, such as androstenone and androstenol, are odorous, and have been confirmed to contribute to human malodor.
Background
The 16-androstene steroids are most commonly found and produced in the testes of boars, specifically un-castrated male pigs, and are responsible for a foul odor. This odor is typically urine-like or skatole-like and results from high concentrations of 16-androstenes in the boar's adipose tissue; it becomes apparent when the fat is heated during cooking. The 16-androstenes act as pheromones: they are transported in the boar's bloodstream to the salivary glands and are metabolized in the liver, which produces alpha- and beta-androstenol. The 16-androstenes are essential in boar populations because they play a vital role in the mating process, specifically in attracting gilts. The 16-androstene steroids are also a vital subject of study for better understanding the genes and metabolic pathways behind the similarities and differences observed in human axillary odors.
Research Findings
The 16-androstene steroid is a compound of interest in research on steroid-based malodour. Most research on the 16-androstenes has been conducted in boars, often examining the metabolic pathways and genes that are shared by, or differ between, boar breeds; these studies are carried out so that findings in boars can be used to better understand human axillary odors. Research by Gower in 1994 suggested that the 16-androstenes, along with other steroids such as 5α-androstenol and 5α-androstenone, are prevalent in apocrine sweat glands. Later research by Austin and Ellis in 2003 showed, using mass spectrometry (MS) and gas chromatography (GC), that 16-androstene steroids are present on axillary skin, and that axillary bacteria can create 16-androstene steroids from precursors in which the C16 double bond is already present. Other research indicates that 3β-hydroxysteroid dehydrogenase (3β-HSD) plays a vital role in the metabolism of androstenone. In boar adipose tissue, high levels of androstenone were observed when expression of the relevant enzymes at the protein and mRNA level was low, showing a negative correlation. Additionally, some research indicates that the presence of the 16-androstene steroids contributes significantly to the liver's role in phase II conjugation metabolism. Together, these findings illustrate the role that the 16-androstene steroid plays in metabolic pathways and genetics, and may help scientists understand how to diminish boar taint.
Research Methods
A variety of research methods have been used to study the 16-androstene steroid, including PCR, mass spectrometry (MS), gas chromatography (GC), solid-phase chromatography and microarray technology.
See also
Androstane
Androsterone
Estratetraenol
References
Androstanes
Neurosteroids
Pheromones | 16-Androstene | Chemistry | 859 |
8,362,544 | https://en.wikipedia.org/wiki/Bromine%20pentafluoride%20%28data%20page%29 | This page provides supplementary chemical data on bromine pentafluoride.
Material Safety Data Sheet
The handling of this chemical requires notable safety precautions. It is highly recommended that you seek the Material Safety Datasheet (MSDS) for this chemical from a reliable source such as Matheson Trigas, and follow its directions.
Structure and properties
Thermodynamic properties
Spectral data
References
Chemical data pages
Chemical data pages cleanup | Bromine pentafluoride (data page) | Chemistry | 88 |
3,602,041 | https://en.wikipedia.org/wiki/Short%20code | Short codes, or short numbers, are short digit-sequences—significantly shorter than telephone numbers—that are used to address messages in the Multimedia Messaging System (MMS) and short message service (SMS) systems of mobile network operators. In addition to messaging, they may be used in abbreviated dialing.
Short codes are designed to be easier to read and remember than telephone numbers. Short codes are unique to each operator at the technological level. Even so, providers generally have agreements to avoid overlaps. In some countries, such as the United States, some classes of numbers are inter-operator (used by multiple providers or carriers); U.S. inter-operator numbers are called common short codes.
Organisations may set up short codes to encourage users to engage with services such as charity donations, mobile services, ordering ringtones, or television-program voting. Messages sent to a short code can be billed at a higher rate than a standard SMS and may even subscribe a customer to a recurring monthly service that will be added to the customer's mobile-phone bill until the user texts, for example, the word "STOP" to terminate the service.
Short codes and service identifiers (prefix)
Short codes are often associated with automated services. An automated program can handle the response and typically requires the sender to start the message with a command word or prefix. The service then responds to the command appropriately.
In ads or in other printed material where a provider has to provide both a prefix and the short code number, the advertisement will typically follow this format:
Example 1 - Long version: Text Football to 72404 for latest football news.
Example 2 - Short version: football@72404
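As a sketch of the keyword-and-prefix pattern described above, the following minimal dispatcher routes an inbound message to a handler based on its leading command word. The keywords, replies and function names are hypothetical, and real short-code services run on carrier or aggregator platforms with their own APIs; this only illustrates the logic.

```python
# Hypothetical keyword handlers for a short code such as 72404.
HANDLERS = {
    "FOOTBALL": lambda sender: "Latest football news: ...",
    "STOP":     lambda sender: f"{sender} has been unsubscribed.",
    "HELP":     lambda sender: "Reply FOOTBALL for news or STOP to unsubscribe.",
}

def handle_inbound_sms(sender: str, body: str) -> str:
    """Pick the handler matching the leading keyword of the message body."""
    words = body.strip().split()
    keyword = words[0].upper() if words else ""
    handler = HANDLERS.get(keyword, lambda s: "Sorry, that keyword was not recognised.")
    return handler(sender)

print(handle_inbound_sms("+15551234567", "Football"))     # keyword match
print(handle_inbound_sms("+15551234567", "stop please"))  # opt-out request
```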
Regional differences
Albania
Short Codes are five digits in length and start with 5, also are known as short codes for value added service.
Australia
Short codes are six or eight digits in length, starting with the prefix "19" followed by an additional four or six digits. Communications Alliance Ltd and WMC Global are responsible for governing premium and standard rate short codes in Australia. Transactional and subscription services require a double SMS MO opt-in, or a web-based opt-in with an MO reply.
Bangladesh
Codes are five digits in length. Bangladesh Telecommunication Regulatory Commission (BTRC) issues and controls short codes in Bangladesh.
Belgium
Codes are four digits in length.
Botswana
Codes are three digits in length.
Brazil
Codes are five digits in length.
Cambodia
Short Codes are four digits in length and start with 1.
Canada
Canadian Common Short Codes can be five or six digits long. Common Short Codes beginning with a leading '4' are reserved for private use by wireless network operators. Four-digit Common Short Codes are not permitted due to handset incompatibilities. Short code-based messages vary between zero-rated (paid for by campaign), standard rate (user is responsible for standard carrier charges), and premium rate (varies, C$1-10). Canadian Short codes are governed by the Canadian Wireless Telecommunications Association.
In February 2020, CWTA (Canadian Wireless Telecommunications Association) announced that Rogers Wireless will no longer participate in general use mobile codes in the future. A common short code is a code that is shared by more than one brand for multiple or general uses.
Chile
Codes are three and four digits in length.
Czech Republic
Messages sent to/from these short codes are known as Premium Rate SMS. Codes are seven digits in length in the MO direction and five (not billed) or eight (billed) digits in the MT direction, starting with nine, while the two or three trailing digits (depending on billing type, MO or MT) express the price; e.g. an SMS sent to 9090930 is billed at Kč30. The leading three digits are purpose prefixes (908 for micropayments, 909 for adult content and 900 for everything else), and the digits at positions four and five determine the service provider registered with a network operator. There are also other four-digit shortcodes, used by network operators for service-only purposes (operator dependent).
Denmark
Codes are three or four digits in length.
Dominican Republic
Codes are four or five digits in length.
Ethiopia
Codes are four digits in length and start with 8, like 8xxx. Although the telecom sector in Ethiopia is controlled by the government, short code services are outsourced to the private sector. The short codes are used mostly for fundraising, lottery and polling.
European Union
Common EU-wide codes start with 11. Examples include: 118xxx - directory services, 116xxx - emergency helplines. This is in addition to the EU-wide emergency number 112.
Faroe Islands
Codes are four digits in length, beginning with "12" or "19".
Finland
Codes are five or more digits in length, usually five or six.
France
Codes are five digits in length. Starting digits define the cost of the service.
Germany
Codes are four or five digits in length.
Greece
Codes are five digits in length.
Hong Kong
Codes are four to eight digits in length, start with digits 501-509. Emergency number is 992.
Hungary
Codes are four or five digits in length.
India
There are many companies in the Indian market who rent keywords, on a monthly basis, whose characters, on a typical telephone keypad, represent short codes. Short codes are five digits in length and have to start with the digit '5'. The five digits can be extended by a further three digits representing three additional characters. Texts sent to these short codes are commonly referred to as premium rate SMS messages and cost around Rs 1 to Rs 3 per text, depending on the operator as well as the service. Full messages of varying length can be sent, typically ranging from 100 to 500 characters, although not all providers support the whole range.
Indonesia
Codes are four digits in length with Rp2000 premium price.
Republic of Ireland
Short codes are five digits in length, and start with 5. The second digit generally indicates the maximum price, with 0 = completely free, 1 = standard text rate only, 3 = €0.60, and 7 having no maximum. Codes beginning 59 are ostensibly intended for adult services, but few if any of these codes are used.
Italy
In Italy, short codes have no fixed length, ranging from three digits up to five. All short codes that start with the digit "4" are designated by a local telecommunications law for "network services". Widely known short codes are in the 48xxx range, used for commercial ringtones and mobile content downloads.
Korea, South
Codes are generally four to six digits in length, however short codes have no fixed length.
Latvia
In Latvia short codes also have no fixed length, starting from three digits up to five. All 4 digit short codes that start with "118" or 5 digit short codes that start with "1184" are designated to information service providers.
Lithuania
In Lithuania, short codes also have no fixed length, ranging from three digits up to five. All short codes that start with the digit "1" are designated by a local telecommunications law for "network services".
Malaysia
Codes are five digits in length, starting with "2" or "3", with premium pricing from RM0.30 up to RM10.00. Codes are MT billed, so subscription services are allowed. Upon approval of the service description by the mobile operators, dedicated codes generally go live in 4 weeks, and shared codes after 1 week.
Morocco
Codes are four digits in length.
Nepal
Codes are three to four digits in length. Dialing short codes are generally 3 digits, and reserved for public services. SMS shortcodes are used for a range of purposes, and are four digits.
The Netherlands
Codes are four digits in length.
New Zealand
Codes are three to four digits in length.
Nigeria
Codes are four to five digits in length.
Norway
Codes are four to five digits in length.
Pakistan
Codes are three and four digits in length. Users are charged PKR 5 - PKR 25 per SMS sent on short codes. Mobile operators charge a setup fee, monthly fee and fee per keyword for short codes. Short codes usage must abide by the rules set by PTA (Pakistan Telecom Authority).
Panama
Codes are four digits in length.
Poland
Commercial codes are five digits long (1xxxx) and are reachable from both mobile and fixed networks. Calls to short codes - from any type of network - are routed based on the location of the number originating the call; hence, if wishing to reach a particular geographical area, the subscriber might need to prefix the short code with an appropriate area code.
The Philippines
Codes are seven digits in length. The National Telecommunications Commission (NTC) is a regulatory agency providing an environment that ensures reliable, affordable and viable infrastructure and services in information and communications technology (ICT) accessible to all. Although the NTC is ultimately responsible for the governance of premium and non-premium shortcodes in the Philippines, its regulatory guidelines are not comprehensive when applied to shortcodes. Instead, the NTC's guidelines focus more on the carriers and the carriers' technical infrastructure. The NTC's website does not contain any specific information with regard to premium SMS or standard rate SMS. There is relevant documentation for bulk SMS and spam control via the NTC's "AMENDMENT TO THE RULES AND REGULATIONS ON BROADCAST MESSAGING SERVICES"; however, this again is not directly related to premium SMS.
Russia
Codes are four digits in length. The cost of the call or SMS to the short number varies from 1.2 to 300 rubles, depending on the number and the carrier.
Serbia
Codes are four digits in length.
Singapore
Codes are five digits in length.
South Africa
Codes are five digits in length. Short codes will start with either a "3" or "4". For example, 34001 or 42001. Each short code or short code range (a range will generally be 34000 to 34009) are assigned specific tariffs or end user prices (EUP). The tariff charges can range from R0.50 to R30.00 on mobile originated billing and from R0.50 to R50.00 using mobile terminated billing. Due to high costs associated with short code rental many providers offer shared shortcodes, which greatly reduces costs.
Spain
Codes are four digits in length.
Sweden
Codes are five digits in length.
Switzerland
Codes are three to five digits in length (most popular codes are three digits long); codes starting with "6" are reserved for adult services.
Taiwan
Codes are usually four digits in length, starting with digits "19".
Turkey
Codes are four digits in length.
United Kingdom
Codes are usually five, six or seven digits in length, mostly starting with 6, 7 or 8. The range of codes may be expanded in time to use other leading digits such as 4. Shortcodes are often owned by holding companies who then lease them out to service providers and advertisers to promote SMS services, charitable fundraising and marketing promotions such as news alerts, voting and quizzes.
Codes starting 70 are used by charities. Codes starting 72 are used by Society Lotteries. Adult related mobile services must use codes starting 69 or 89. Mobile operators sometimes use proprietary codes (either with a different leading digit, or shorter in length) for operator-specific functions. Depending on the service offered, users may interact with service providers either by calling the number, or by sending and/or receiving a text or MMS message.
Calls to mobile shortcodes may be free, or may be charged per call or at a per minute rate. Where the number can be called from any mobile network, the same charge will apply from all networks.
Messages sent to mobile shortcodes may be charged at a "standard rate", or with an additional premium charge. Where messages incur a "standard rate" charge, this is set by the sender's mobile provider and varies by provider.
Messages received from shortcodes may be free or may incur a premium charge. Messages can be used to deliver additional content, or a URL link that opens the users web browser at a specific web page. For subscription services, the charges may recur on a daily, weekly, monthly or other basis. To stop a subscription based shortcode service text the word 'STOP' to the shortcode number.
The service provider must state the applicable charges alongside the number. Calls and messages to mobile shortcodes do not count towards inclusive allowances or bundles.
Where the benefit passed on to the service provider is more than 10p per call, per minute, or per message, Ofcom's Premium Rate Services Condition defines it as being a Controlled Premium Rate Service (CPRS) and subject to the additional regulation detailed in The Regulation of Premium Rate Services Order 2024.
These services are currently regulated by the Phone-paid Services Authority. From 1 February 2025, Ofcom will regulate these services directly. A number of key PSA staff have already been embedded within Ofcom for some time in preparation for this.
United States
Standard, interoperable short codes in the U.S. are five or six digits long, never start with 1, and only work in the U.S. They are leased by the short code program's registry service provider iconectiv, under a deal with the Common Short Code Administration and CTIA. It costs twice as much to choose a specific code as it does to get one that is randomly assigned. Some carriers assign a subset of their carrier-specific codes to third parties.
"The Short Code Registry maintains a single database of available, reserved and registered short codes. CTIA administers the Common Short Code program, and iconectiv became the official U.S. Short Code Registry service provider in January, 2016. For more information, please see the Short Code Registry’s Best Practices and the Short Code Monitoring Handbook."
Texting "HELP" to a short code causes the short code service to return a message with terms and conditions, support information — consisting of either a toll-free phone number or email address at a minimum — and other information from the leaseholder of the short code. A user can opt-out from receiving any further messages from a short code service by texting "STOP", "END", "QUIT", "CANCEL", or "UNSUBSCRIBE" to the short code; after doing so, one final message confirming the opt-out is sent.
See also
Abbreviated dialing
Vertical service code
References
External links
Australian short code search (Australian Communications and Media Authority)
Common Short Code Administration (U.S.)
Short Code Management Group (U.K.) | Short code | Technology | 2,953 |
71,295,003 | https://en.wikipedia.org/wiki/Bryostigma%20epiphyscium | Bryostigma epiphyscium is a species of lichenicolous fungus in the order Arthoniales. Formerly classified in the genera Arthonia and Conida, it was transferred to the genus Bryostigma in 2020.
It is known to infect the lichen Physcia caesia and other lichens of the genus Physcia.
References
Arthoniomycetes
Fungi of Iceland
Fungi described in 1875
Taxa named by William Nylander (botanist)
Lichenicolous fungi
Fungus species | Bryostigma epiphyscium | Biology | 111 |
6,061,729 | https://en.wikipedia.org/wiki/Fluid%E2%80%93structure%20interaction | Fluid–structure interaction (FSI) is the interaction of some movable or deformable structure with an internal or surrounding fluid flow. Fluid–structure interactions can be stable or oscillatory. In oscillatory interactions, the strain induced in the solid structure causes it to move such that the source of strain is reduced, and the structure returns to its former state only for the process to repeat.
Examples
Fluid–structure interactions are a crucial consideration in the design of many engineering systems, e.g. automobiles, aircraft, spacecraft, engines and bridges. Failing to consider the effects of oscillatory interactions can be catastrophic, especially in structures comprising materials susceptible to fatigue. Tacoma Narrows Bridge (1940), the first Tacoma Narrows Bridge, is probably one of the most infamous examples of large-scale failure. Aircraft wings and turbine blades can break due to FSI oscillations. A reed actually produces sound because the system of equations governing its dynamics has oscillatory solutions. The dynamics of reed valves used in two-stroke engines and compressors are governed by FSI. The act of "blowing a raspberry" is another such example. The interaction between tribological machine components, such as bearings and gears, and lubricant is also an example of FSI. The lubricant flows between the contacting solid components and causes elastic deformation in them during this process. Fluid–structure interactions also occur in moving containers, where liquid oscillations due to the container motion impose substantial forces and moments on the container structure that affect the stability of the container transport system in a highly adverse manner. Another prominent example is the start-up of a rocket engine, e.g. the Space Shuttle main engine (SSME), where FSI can lead to considerable unsteady side loads on the nozzle structure. In addition to pressure-driven effects, FSI can also have a large influence on surface temperatures of supersonic and hypersonic vehicles.
Fluid–structure interactions also play a major role in appropriate modeling of blood flow. Blood vessels act as compliant tubes that change size dynamically when there are changes to blood pressure and velocity of flow. Failure to take this property of blood vessels into account can lead to a significant overestimation of the resulting wall shear stress (WSS). This effect is especially important to take into account when analyzing aneurysms. It has become common practice to use computational fluid dynamics to analyze patient-specific models. The neck of an aneurysm is the most susceptible to changes in WSS. If the aneurysmal wall becomes weak enough, it becomes at risk of rupturing when WSS becomes too high. FSI models show an overall lower WSS compared to non-compliant models. This is significant because incorrect modeling of aneurysms could lead doctors to decide to perform invasive surgery on patients who were not at a high risk of rupture. While FSI offers better analysis, it comes at the cost of greatly increased computational time. Non-compliant models have a computational time of a few hours, while FSI models could take up to 7 days to finish running. This makes FSI models most useful for preventative measures for aneurysms caught early, but unusable for emergency situations where the aneurysm may have already ruptured.
Analysis
Fluid–structure interaction problems and multiphysics problems in general are often too complex to solve analytically and so they have to be analyzed by means of experiments or numerical simulation. Research in the fields of computational fluid dynamics and computational structural dynamics is still ongoing but the maturity of these fields enables numerical simulation of fluid-structure interaction. Two main approaches exist for the simulation of fluid–structure interaction problems:
Monolithic approach: the equations governing the flow and the displacement of the structure are solved simultaneously, with a single solver
Partitioned approach: the equations governing the flow and the displacement of the structure are solved separately, with two distinct solvers
The monolithic approach requires a code developed for this particular combination of physical problems whereas the partitioned approach preserves software modularity because an existing flow solver and structural solver are coupled. Moreover, the partitioned approach facilitates solution of the flow equations and the structural equations with different, possibly more efficient techniques which have been developed specifically for either flow equations or structural equations. On the other hand, development of stable and accurate coupling algorithm is required in partitioned simulations. In conclusion, the partitioned approach allows reusing existing software which is an attractive advantage. However, stability of the coupling method needs to be taken into consideration. This is especially difficult, if the mass of the moving structure is small in comparison to the mass of fluid which is displaced by the structure movement.
In addition, the treatment of meshes introduces other classifications of FSI analysis. For example, methods can be classified as conforming mesh methods and non-conforming mesh methods, or as mesh-based methods and meshless methods.
Numerical simulation
The Newton–Raphson method or a different fixed-point iteration can be used to solve FSI problems. Methods based on Newton–Raphson iteration are used in both the monolithic
and the partitioned approach. These methods solve the nonlinear flow equations and the structural equations in the entire fluid and solid domain with the Newton–Raphson method. The system of linear equations within the Newton–Raphson iteration can be solved without knowledge of the Jacobian with a matrix-free iterative method, using a finite difference approximation of the Jacobian-vector product.
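The sketch below illustrates that idea on a toy problem: the Jacobian-vector product needed by a Krylov solver (here SciPy's GMRES) is approximated by a forward finite difference of the residual, so the Jacobian of the coupled system is never assembled. The two-equation residual and all function names are illustrative assumptions, standing in for a real monolithic FSI residual.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def fd_jacobian_vector_product(residual, x, v, eps=1e-7):
    """Approximate J(x) @ v with a forward difference of the residual,
    so the Jacobian never has to be formed explicitly."""
    norm_v = np.linalg.norm(v)
    if norm_v == 0.0:
        return np.zeros_like(x)
    h = eps * (1.0 + np.linalg.norm(x)) / norm_v
    return (residual(x + h * v) - residual(x)) / h

def newton_krylov_step(residual, x):
    """One matrix-free Newton-Raphson step: solve J(x) dx = -R(x) with GMRES."""
    r = residual(x)
    J = LinearOperator((x.size, x.size),
                       matvec=lambda v: fd_jacobian_vector_product(residual, x, v))
    dx, _ = gmres(J, -r)
    return x + dx

# Toy nonlinear residual standing in for the coupled fluid/structure equations.
residual = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
x = np.array([1.0, 1.0])
for _ in range(8):
    x = newton_krylov_step(residual, x)
print(x)  # converges towards the root [1, 2] of the toy system
```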
Whereas Newton–Raphson methods solve the flow and structural problem for the state in the entire fluid and solid domain, it is also possible to reformulate an FSI problem as a system with only the degrees of freedom in the interface’s position as unknowns. This domain decomposition condenses the error of the FSI problem into a subspace related to the interface. The FSI problem can hence be written as either a root finding problem or a fixed point problem, with the interface’s position as unknowns.
Interface Newton–Raphson methods solve this root-finding problem with Newton–Raphson iterations, e.g. with an approximation of the Jacobian from a linear reduced-physics model. The interface quasi-Newton method with approximation for the inverse of the Jacobian from a least-squares model couples a black-box flow solver and structural solver by means of the information that has been gathered during the coupling iterations. This technique is based on the interface block quasi-Newton technique with an approximation for the Jacobians from least-squares models which reformulates the FSI problem as a system of equations with both the interface’s position and the stress distribution on the interface as unknowns. This system is solved with block quasi-Newton iterations of the Gauss–Seidel type and the Jacobians of the flow solver and structural solver are approximated by means of least-squares models.
The fixed-point problem can be solved with fixed-point iterations, also called (block) Gauss–Seidel iterations, which means that the flow problem and structural problem are solved successively until the change is smaller than the convergence criterion. However, the iterations converge slowly if at all, especially when the interaction between the fluid and the structure is strong due to a high fluid/structure density ratio or the incompressibility of the fluid. The convergence of the fixed point iterations can be stabilized and accelerated by Aitken relaxation and steepest descent relaxation, which adapt the relaxation factor in each iteration based on the previous iterations.
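A minimal sketch of such a partitioned coupling loop is given below, assuming two black-box solvers exchanging interface displacements and tractions. The Aitken update of the relaxation factor follows the standard formula; the toy one-degree-of-freedom "solvers" in the demo are purely illustrative assumptions.

```python
import numpy as np

def coupled_fsi_step(flow_solver, structure_solver, d0,
                     tol=1e-8, max_iter=50, omega0=0.5):
    """Gauss-Seidel (fixed-point) coupling of two black-box solvers within one
    time step, with Aitken dynamic relaxation of the interface displacement."""
    d, omega, r_old = d0.copy(), omega0, None
    for k in range(max_iter):
        traction = flow_solver(d)             # fluid load for current interface position
        d_tilde = structure_solver(traction)  # structural response to that load
        r = d_tilde - d                       # fixed-point residual on the interface
        if np.linalg.norm(r) < tol:
            return d_tilde, k
        if r_old is not None:
            dr = r - r_old
            denom = np.dot(dr, dr)
            if denom > 0.0:
                omega = -omega * np.dot(r_old, dr) / denom  # Aitken update
        d = d + omega * r                     # relaxed interface update
        r_old = r
    return d, max_iter

# Toy 1-DOF demo: without relaxation the plain iteration d <- 2*(1 - 0.8*d) diverges.
flow_solver = lambda d: 1.0 - 0.8 * d
structure_solver = lambda t: 2.0 * t
d, iters = coupled_fsi_step(flow_solver, structure_solver, np.zeros(1))
print(d, iters)  # converges to about 0.769 in a few iterations
```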
If the interaction between the fluid and the structure is weak, only one fixed-point iteration is required within each time step. These so-called staggered or loosely coupled methods do not enforce the equilibrium on the fluid–structure interface within a time step but they are suitable for the simulation of aeroelasticity with a heavy and rather stiff structure.
Several studies have analyzed the stability of partitioned algorithms for the simulation of fluid–structure interaction.
See also
Immersed boundary method
Smoothed particle hydrodynamics
Stochastic Eulerian Lagrangian method
Computational fluid dynamics
Fluid mechanics, fluid dynamics
Structural mechanics, structural dynamics
CFD Online page about FSI
NASA page about a tail flutter test
YouTube movie about flutter of glider wings
Hydroelasticity
Slosh dynamics
Open source codes
solids4Foam, a toolbox for OpenFOAM with capabilities for solid mechanics and fluid solid interactions
oomph-lib
Elmer FSI page
CBC.solve Biomedical Solvers
preCICE Coupling Library
SPHinXsys multi-physics library. It provides C++ APIs for physically accurate simulation and aims to model coupled industrial dynamic systems including fluid, solid, multi-body dynamics and beyond with SPH (smoothed particle hydrodynamics), a meshless computational method using particle discretization.
Academic Codes
Stochastic Immersed Boundary Methods in 3D, P. Atzberger, UCSB
Immersed Boundary Method for Adaptive Meshes in 3D, B. Griffith, NYU.
Immersed Boundary Method for Uniform Meshes in 2D, A. Fogelson, Utah
IFLS, IFL, TU Braunschweig
Commercial Codes
Abaqus Multiphysics Coupling
AcuSolve FSI applications
ADINA FSI homepage
Ansys' FSI homepage
Altair RADIOSS
Autodesk Simulation CFD
Simcenter STAR-CCM+ from Siemens Digital Industries Software
CoLyX - FSI and mesh-morphing from EVEN - Evolutionary Engineering AG
Fluidyn-MP FSI Multiphysics Coupling
COMSOL FSI homepage
MpCCI homepage
MSC Software MD Nastran
MSC Software Dytran
FINE/Oofelie FSI: Fully integrated and strongly coupled for better convergence
LS-DYNA Home Page
Fluidyn-MP FSI: Fluid-Structure Interaction
CompassFEM Tdyn
CompassFEM SeaFEM
Cradle SC/Tetra CFD Software
PARACHUTES FSI HomePage
References
Further reading
Modarres-Sadeghi, Yahya: Introduction to Fluid-Structure Interactions, 2021, Springer Nature, 978-3-030-85882-7, http://dx.doi.org/10.1007/978-3-030-85884-1
Introduces the subject of Fluid-Structure Interactions (FSI) to students and professionals and discusses the major ideas in FSI with the goal of providing the fundamental understanding to the readers who possess limited or no understanding of the subject.
Fluid mechanics
Fluid dynamics | Fluid–structure interaction | Chemistry,Engineering | 2,156 |
70,959,639 | https://en.wikipedia.org/wiki/Midpoint%20theorem%20%28conics%29 | In geometry, the midpoint theorem describes a property of parallel chords in a conic. It states that the midpoints of parallel chords in a conic are located on a common line.
The common line or line segment for the midpoints is called the diameter. For a circle, ellipse or hyperbola the diameter goes through its center. For a parabola the diameter is always perpendicular to its directrix and for a pair of intersecting lines (from a degenerate conic) the diameter goes through the point of intersection.
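For the ellipse this can be verified directly; the short derivation below (a standard computation, included here only as an illustration) shows that the midpoints of all chords of slope m lie on a line through the centre.

```latex
% Chords of slope m of the ellipse x^2/a^2 + y^2/b^2 = 1,
% with endpoints (x_1, y_1), (x_2, y_2) and midpoint (x_0, y_0).
\begin{align*}
  \frac{x_1^2 - x_2^2}{a^2} + \frac{y_1^2 - y_2^2}{b^2} &= 0
    && \text{(difference of the ellipse equation at the endpoints)} \\
  \frac{x_1 + x_2}{a^2} + m\,\frac{y_1 + y_2}{b^2} &= 0
    && \text{(divide by } x_1 - x_2 \text{, with } m = \tfrac{y_1 - y_2}{x_1 - x_2}\text{)} \\
  y_0 &= -\frac{b^2}{a^2 m}\, x_0
    && \text{(since } 2x_0 = x_1 + x_2,\ 2y_0 = y_1 + y_2\text{)}
\end{align*}
% The midpoints therefore lie on a single line through the centre: a diameter.
```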
Gallery (e = eccentricity):
References
David Alexander Brannan, Matthew F. Esplen, Jeremy J. Gray (1999) Geometry Cambridge University Press , pages 59–66
Aleksander Simonic (November 2012) "On a Problem Concerning Two Conics", Crux Mathematicorum, volume 38(9): 372–377
C. G. Gibson (2003) Elementary Euclidean Geometry: An Introduction. Cambridge University Press pages 65–68
External links
Locus of Midpoints of Parallel Chords of Central Conic passes through Center at the Proof Wiki
midpoints of parallel chords in conics lie on a common line - interactive illustration
Conic sections
Theorems in plane geometry | Midpoint theorem (conics) | Mathematics | 250 |
753,403 | https://en.wikipedia.org/wiki/Dawn%20chorus%20%28electromagnetic%29 | The electromagnetic dawn chorus is a phenomenon that occurs most often at or shortly after dawn local time. It is believed to be generated by a Doppler-shifted cyclotron interaction between anisotropic distributions of energetic (> 40 keV) electrons and ambient background VLF noise. These energetic electrons are generally injected into the inner magnetosphere at the onset of the substorm expansion phase. Dawn choruses occur more frequently during magnetic storms.
This phenomenon also occurs during aurorae, when it is termed an auroral chorus.
With the proper radio equipment, dawn chorus can be converted to sounds that resemble, coincidentally, birds' dawn chorus.
See also
Auroral chorus
Dawn chorus (birds)
Hiss (electromagnetic)
Whistler (radio)
"Cluster One," a Pink Floyd track using sferics and dawn chorus as an overture
Notes
Further reading
External links
Natural VLF Radio - Sounds of Space Weather
2018 recording by NASA RBSP (Radiation Belt Storm Probe)
Electrical phenomena
Geomagnetism | Dawn chorus (electromagnetic) | Physics | 202 |
75,922,790 | https://en.wikipedia.org/wiki/Einsteinium%28II%29%20bromide | Einsteinium(II) bromide is a binary inorganic chemical compound of einsteinium and bromine with the chemical formula .
Synthesis
The compound can be prepared via reduction of einsteinium(III) bromide (EsBr3) with hydrogen (H2).
References
Einsteinium compounds
Bromides
Actinide halides | Einsteinium(II) bromide | Chemistry | 50 |
8,387,306 | https://en.wikipedia.org/wiki/Heat%20capacity%20rate | The heat capacity rate is heat transfer terminology used in thermodynamics and different forms of engineering denoting the quantity of heat a flowing fluid of a certain mass flow rate is able to absorb or release per unit temperature change per unit time. It is typically denoted as C, listed from empirical data experimentally determined in various reference works, and is typically stated as a comparison between a hot and a cold fluid, Ch and Cc either graphically, or as a linearized equation. It is an important quantity in heat exchanger technology common to either heating or cooling systems and needs, and the solution of many real world problems such as the design of disparate items as different as a microprocessor and an internal combustion engine.
Basis
A hot fluid's heat capacity rate can be much greater than, equal to, or much less than the heat capacity rate of the same fluid when cold. In practice, it is most important in specifying heat-exchanger systems, wherein one fluid usually of dissimilar nature is used to cool another fluid such as the hot gases or steam cooled in a power plant by a heat sink from a water source—a case of dissimilar fluids, or for specifying the minimal cooling needs of heat transfer across boundaries, such as in air cooling.
Because a fluid's ability to resist a change in temperature itself changes as heat transfer occurs (changing its net average instantaneous temperature), the heat capacity rate is a quantity of interest in designs that have to compensate for the fact that it varies continuously in a dynamic system. Such variation must be taken into account when designing a system for overall behavior under stimuli or likely environmental conditions, and in particular the worst-case conditions encountered under the high stresses imposed near the limits of operability — for example, an air-cooled engine in a desert climate on a very hot day.
If the hot fluid had a much larger heat capacity rate, then when hot and cold fluids went through a heat exchanger, the hot fluid would have a very small change in temperature while the cold fluid would heat up a significant amount. If the cool fluid has a much lower heat capacity rate, that is desirable. If they were equal, they would both change more or less temperature equally, assuming equal mass-flow per unit time through a heat exchanger. In practice, a cooling fluid which has both a higher specific heat capacity and a lower heat capacity rate is desirable, accounting for the pervasiveness of water cooling solutions in technology—the polar nature of the water molecule creates some distinct sub-atomic behaviors favorable in practice.
C = (dm/dt) ⋅ cp

where C = heat capacity rate of the fluid of interest in W⋅K−1,
dm/dt = mass flow rate of the fluid of interest in kg⋅s−1 and
cp = specific heat of the fluid of interest in J⋅kg−1⋅K−1.
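A small numerical sketch of this definition follows. The mass flow rates and specific heats are invented illustrative values (roughly representing hot exhaust gas cooled by water), not data from any reference work; the point is only to show how the two heat capacity rates determine which stream changes temperature more for a given heat duty.

```python
def heat_capacity_rate(mass_flow_kg_s: float, cp_J_per_kgK: float) -> float:
    """C = (dm/dt) * cp, in W/K."""
    return mass_flow_kg_s * cp_J_per_kgK

# Assumed example streams: hot exhaust gas cooled by water in a heat exchanger.
C_hot = heat_capacity_rate(0.8, 1100.0)    # ~880 W/K
C_cold = heat_capacity_rate(0.5, 4186.0)   # ~2093 W/K
C_min, C_max = sorted((C_hot, C_cold))

Q = 50_000.0  # heat duty in W transferred from the hot to the cold stream
print(f"hot-side temperature change:  {Q / C_hot:.1f} K")   # larger change, smaller C
print(f"cold-side temperature change: {Q / C_cold:.1f} K")  # smaller change, larger C
print(f"capacity-rate ratio C_min/C_max: {C_min / C_max:.2f}")
```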
See also
Heat
specific heat
Heat capacity
Heat capacity ratio
Heat equation
Heat transfer coefficient
Latent heat
Specific heat capacity
Specific melting heat
Temperature
Thermodynamics
Thermodynamic (absolute) temperature
Thermodynamic equations
Volumetric heat capacity
References
Fundamentals of Heat and Mass Transfer (6th edition) Incorpera, DeWitt, Bergmann, and Lavine
Heat transfer
Physical quantities
Temporal rates | Heat capacity rate | Physics,Chemistry,Mathematics | 647 |
55,596,625 | https://en.wikipedia.org/wiki/Hywind%20Scotland | Hywind Scotland is the world's first commercial wind farm using floating wind turbines, situated off Peterhead, Scotland.
The farm has five 6 MW Siemens direct-drive turbines on Hywind floating monopiles, with a total capacity of 30 MW. It is operated by Hywind (Scotland) Limited, a joint venture of Equinor (75%) and Masdar (25%).
Equinor (then: Statoil) launched the world's first operational deep-water floating large-capacity wind turbine in 2009, the 2.3 MW Hywind, which cost 400 million NOK (US$71 million, $31/W). The tall tower with a 2.3 MW Siemens turbine was towed from the Åmøy fjord offshore into deep water in the North Sea, off Stavanger, Norway, on 9 June 2009 for a two-year test run; it remains working at the site and has survived high winds and 19 m waves.
In 2015, the company received permission to install the wind farm in Scotland, in an attempt at reducing the cost relative to the original Hywind, in accordance with the Scottish Government's commitment for cost reduction. Manufacturing for the project, with a budgeted cost of NOK2 billion (£152m), started in 2016 in Spain, Norway and Scotland. The turbines were assembled at Stord in Norway in summer 2017 using the Saipem 7000 floating crane, and the finished turbines were moved to near Peterhead. Three suction anchors hold each turbine. Hywind Scotland was commissioned in October 2017.
While cost was reduced compared to the very expensive Hywind One at $31m/MW, it still came with a final capital cost of £264m, or £8.8m/MW, approximately three times the capital cost of fixed offshore windfarms. Measured by unit cost, Hywind's levelized cost of electricity (LCoE) is then £180/MWh ($248/MWh), about three times the typical LCoE of a fixed offshore wind farm at £55/MWh ($75.7/MWh). The high cost is partly compensated by £165.27/MWh from Renewable Obligation Certificates.
In its first 5 years of operation the facility averaged a capacity factor of 54%, at times operating in 10 m waves. By shutting down in the worst conditions, it survived Hurricane Ophelia and then Storm Caroline, with strong wind gusts and waves of 8.2 metres.
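For readers checking the figures, the short calculation below reproduces the capital cost per megawatt and the approximate annual output implied by the stated capacity factor; it uses only the numbers quoted above and standard definitions.

```python
capacity_mw = 30.0          # nameplate capacity of the five turbines
capacity_factor = 0.54      # average reported over the first five years
hours_per_year = 8760

annual_energy_mwh = capacity_mw * capacity_factor * hours_per_year
print(f"approx. annual output: {annual_energy_mwh:,.0f} MWh")  # ~142,000 MWh

capital_cost_gbp_m = 264.0  # final capital cost in GBP millions
print(f"capital cost per MW: £{capital_cost_gbp_m / capacity_mw:.1f}m")  # ~£8.8m/MW
```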
The subsequent 88 MW Hywind Tampen (with concrete floating foundations) became operational at the Snorre and Gullfaks oil fields in Norway in 2023 at a cost of NOK 8 billion or £600m (£6.8/MW).
In May 2024 all 5 turbines were to be towed back to Norway for several months of heavy maintenance to replace the main bearings. All turbines were operating again by October 2024.
See also
Offshore wind power
References
2017 establishments in Scotland
2017 in technology
Equinor
Wind farms in Scotland
Offshore wind farms in the North Sea
Floating wind turbines
Energy infrastructure completed in 2017 | Hywind Scotland | Engineering | 650 |
28,870,123 | https://en.wikipedia.org/wiki/PGC%2039058 | PGC 39058 is a dwarf galaxy located 14 million light-years away in the constellation Draco. The galaxy is faint and obscured by HD 106381, a star in the foreground making it difficult to observe.
References
PGC 039058
039058
PGC 039058
7242
+11-15-037 | PGC 39058 | Astronomy | 70 |
14,598,881 | https://en.wikipedia.org/wiki/Floor%20sanding | Floor sanding is the process of removing the top surfaces of a wooden floor by sanding with abrasive materials.
A variety of floor materials can be sanded, including timber, cork, particleboard, and sometimes parquet. Some floors are laid and designed for sanding. Many old floors are sanded after the previous coverings are removed and suitable wood is found hidden beneath. Floor sanding usually involves three stages: Preparation, sanding, and coating with a protective sealant.
Drum Sander Machines
All modern sanding projects are completed with specialized sanding machines. Drum sander machines come in two versions: 110 V and 220 V floor sanders. 220 V drum sanders are more powerful and remove more wood material than the 110 V machines. Most homeowners who want to refinish their floors themselves use the 110 V version, as these are more readily available at tool rental stores. Belt sanders are preferred because their continuous sandpaper belt design helps prevent sanding machine marks in floors. Feathering is an industry term for handling the machine in such a way as to avoid deep scratch marks at the start and finish of a pass. The belt sander was invented by Eugen Laegler in 1969 in Güglingen, Germany. About 90% of the area can be reached with the belt/drum sander; the remaining 10%, such as edges, corners, under cabinets, and stairs, is sanded with an edge sanding machine. A rotary machine known as a multi-disc sander or buffer is then used for the final sanding steps. Buffers take abrasive discs, which rotate in the same plane as the floor itself. The power of the stripping relies on the weight of the machine, so buffers are useful for surface treatments like buffing, light sanding or stripping old sealants. In belt sanders the abrasive material is fitted and secured tightly between a drum and a tension device. The belt moves vertically, along the grain of the floor surface, which ensures powerful stripping, a good finish and a long-lasting abrasive. In drum sanders the abrasive is fitted just around the drum itself, which is less secure and carries a risk of leaving marks on a newly sanded surface.
A buffing machine is also used in the final stages of wood floor refinishing. This is a rotary machine fitted with fine abrasives which help remove the differences between the marks left by the vertical rotation of the sanding drum and the horizontal rotation of the edging machine's disc. These fine abrasives also help to smooth the final finish by removing minor imperfections on the surface prior to and between re-coatings.
Process
Preparation is the first stage of the wood floor sanding process. All nails which protrude above the boards are punched down, as nails can severely damage the sanding machines. Staples or tacks used to fasten previous coverings (if any) are removed to reduce the possibility of damage. Some brands or types of adhesives which have been used to secure coverings may need to be removed, as some adhesives, oils, and varnishes will clog sandpaper and can even make sanding impossible.
After the floor is prepared, the sanding begins. The first cut is done with coarse-grit sandpaper to remove old coatings and to make the floor flat. The best method when using a drum sander is to start with a lower-grit belt sandpaper. For oak, maple, and ash hardwoods, it is recommended to start with 40 grit, then with each subsequent sanding pass go up in sandpaper grit, e.g. 60, 80, and finish with 100 grit. When wood floor planks are warped, cupped, or significantly uneven, multiple passes may be required. The differences in height between the boards are flattened uniformly. The large sanders are used across the grain of the timber. The most common paper used for the first cut is 40 grit. The areas which cannot be reached by the large sanders are sanded with an edger, using the same grit paper as the rest of the floor. If filling of holes or boards is desired, this is the stage at which it is usually done. 80 grit papers are usually used for the second cut, in which the belt sander is run in line with the grain of the timber. A finishing machine is then used to create the final finish. The grit paper used is a matter of personal preference; however, 100–150 grit papers are usually used.
The sanded floor is coated with polyurethane, oils, or other sealants. Oil-based sealants have a high volatile organic compound content and are highly toxic, so wearing a suitable respirator mask is recommended.
Issues
Sanding removes all patina, and can change the character of old floors. The result does not always suit the character of the building.
Sanding old boards sometimes exposes worm eaten cores, effectively ruining the floor's appearance. This can reduce the sale price, or even cause the floor to require replacement.
Sanding removes material, and timber floors have a limit to how much they can be sanded.
Improper sanding, often caused by using an inferior sanding machine, can lead to 'chatter marks'. These occur when the sander has not been correctly positioned over the area to be sanded; the edge of the sander catches and creates a rippling effect over the wood or parquet floor.
Often these marks can only be discerned after the stain or sealant has been applied.
References
Floors
Wood products
Woodworking | Floor sanding | Engineering | 1,132 |
32,834,850 | https://en.wikipedia.org/wiki/Polymerase-endonuclease%20amplification%20reaction | Polymerase-endonuclease amplification reaction (PEAR) is a DNA amplification technology for the amplification of oligonucleotides. A target oligonucleotide and a tandem repeated antisense probe are subjected to repeated cycles of denaturing, annealing, elongation and cleaving, in which thermostable DNA polymerase elongation and strand slipping generate duplex tandem repeats, and thermostable endonuclease (PspGI) cleavage releases monomeric duplex oligonucleotides.
PEAR has the potential to be a useful tool for:
Large-scale production of oligonucleotides.
PEAR is a minimal DNA replication system, so it can be considered a minimal life system; it is of theoretical interest for studying the origin and evolution of repetitive DNA.
The repetitive DNA products can be transferred directly into cells or organisms to study the function of the repetitive DNA.
References
DNA replication | Polymerase-endonuclease amplification reaction | Biology | 205 |
64,740,708 | https://en.wikipedia.org/wiki/FICD | FIC domain protein adenylyltransferase (FICD) is an enzyme in metazoans possessing adenylylation and deadenylylation activity (also known as (de)AMPylation), and is a member of the Fic (filamentation induced by cAMP) domain family of proteins. AMPylation is a reversible post-translational modification that FICD performs on target cellular protein substrates. FICD is the only known Fic domain encoded by the metazoan genome, and is located on chromosome 12 in humans. Catalytic activity is reliant on the enzyme's Fic domain, which catalyzes the addition of an AMP (adenylyl group) moiety to the substrate. FICD has been linked to many cellular pathways, most notably the ATF6 and PERK branches of the UPR (unfolded protein response) pathway regulating ER homeostasis. FICD is present at very low basal levels in most cell types in humans, and its expression is highly regulated. Examples of FICD include HYPE (Huntingtin Yeast Interacting Partner E) in humans, Fic-1 in C. elegans, and dfic in D. melanogaster.
Structure
The structure of FICD proteins consists of different regions: the SS/TM (signal sequence/transmembrane domain), TPR (tetratricopeptide repeat) domain and fic (filamentation induced by cAMP) domain. The secondary structure is primarily composed of nine α-helices. All FICD proteins share the same catalytic motif in their fic domain, consisting of the amino acid sequence HxFx(D/E)(G/A)N(G/K)R1xxR2, located at the C terminus of the protein. At the N terminus of the protein is an inhibitory α-helix, composed of the motif (S/T)xxxE(G/N). Interaction between the glutamate of the inhibitory α-helix and the second arginine of the fic motif prevents ATP from entering the catalytic cleft of the protein and participating in AMPylation. This auto-inhibition leads to very low activity of FICD proteins in vitro.
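Because both motifs are simple patterns over the single-letter amino-acid alphabet, with "x" standing for any residue, they can be expressed as regular expressions and searched for in a candidate sequence. The Python sketch below is a minimal illustration only; the toy sequence is hypothetical, constructed merely to contain one instance of each motif, and real motif scanning would use curated FICD sequences and dedicated sequence-analysis tools.

```python
import re

# The catalytic Fic motif HxFx(D/E)(G/A)N(G/K)RxxR and the inhibitory-helix motif
# (S/T)xxxE(G/N), written as regular expressions over single-letter amino-acid codes.
FIC_MOTIF = re.compile(r"H.F.[DE][GA]N[GK]R..R")
INHIBITORY_MOTIF = re.compile(r"[ST]...E[GN]")

def find_motifs(sequence: str) -> dict:
    """Return the 0-based start positions of each motif found in a protein sequence."""
    return {
        "fic": [m.start() for m in FIC_MOTIF.finditer(sequence)],
        "inhibitory": [m.start() for m in INHIBITORY_MOTIF.finditer(sequence)],
    }

# Hypothetical toy sequence (not a real FICD sequence) containing one instance of each motif.
toy_sequence = "MKSAAAEGLLKHPFIEGNGRLARQQ"
print(find_motifs(toy_sequence))   # {'fic': [11], 'inhibitory': [2]}
```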
Mutants of FICD have been created which lead to different activity levels in vitro. A mutation of the catalytic histidine in the fic motif to an alanine leads to a complete loss of AMPylation activity by the enzyme. Conversely, mutation of the glutamate in the inhibitory α-helix to a glycine abolishes any auto-inhibition, creating a constitutively active enzyme.
FICD proteins can exist as either a dimer or a monomer, although they generally exist as an asymmetric dimer in solution. Linkage occurs between the fic domains of the two monomers to form the dimer. New crystal structures have recently been published to model the structure of the HYPE monomer.
Mechanism
FICD's AMPylation activity is dependent on the sequence of its fic motif (HxFx(D/E)(G/A)N(G/K)R1xxR2). The basic mechanism of AMPylation involves the addition of an AMP group to a substrate residue containing a hydroxyl group, where the AMP group is taken from an ATP molecule. In the case of FICD proteins, the catalytic histidine acts as a general base, drawing a proton away from the hydroxyl group of the substrate (usually located on a threonine or serine residue). The hydroxyl group, now a nucleophile, will then attack the α-phosphate of ATP, thus attaching the AMP group to the target residue's hydroxyl group. This mechanism requires both the presence of the catalytic histidine and the correct orientation of ATP in the ATP-binding pocket of the FICD protein. Interactions between the secondary arginine (HxFx(D/E)(G/A)N(G/K)R1xxR2) and the γ-phosphate of ATP orient ATP correctly in the pocket so that the proton transfer between the hydroxyl group of the substrate and the α-phosphate of ATP can take place.
One mechanism by which FICD proteins are self-regulated is through interactions of the inhibitory α-helix and the catalytic fic domain. The glutamate found in the inhibitory α-helix motif ((S/T)xxxE(G/N)) interacts with the secondary arginine in the fic motif, which in turn prevents interactions of the γ-phosphate of ATP at that site. Because ATP is unable to orient properly in the ATP binding pocket, AMPylation cannot occur.
FICD proteins are also capable of de-AMPylation activity, the complement to AMPylation. Research suggests that a switch between the dimerization state of FICD may be responsible for the switch between AMPylation and de-AMPylation activity, in which FICD in its monomeric form is responsible for AMPylation, while FICD as a dimer is responsible for de-AMPylation.
Function
FIC proteins are known for their general function of carrying out post-translational modifications on target proteins that are a part of the cell signaling system. The conserved fic domain is involved in the addition of phosphate-containing compounds including AMP, GTP and other nucleoside monophosphates and phosphates. Fic proteins play a vital role in mediating post-translational modifications of host cell proteins that interfere with cytoskeletal, trafficking, signaling or translation pathways such as the UPR pathway. The UPR (unfolded protein response) pathway depends on the activation of transmembrane transducers during ER stress to promote specific downstream effects. In the event of fluctuating unfolded proteins the UPR pathway becomes activated, and this includes the activation of the Hsp70 chaperone BiP. This process incorporates inactive oligomers and reversible AMPylation and de-AMPylation activities. BiP/GRP78 is adenylated by HYPE, reportedly at the specific sites Thr366 and Thr518 in vitro, which induces UPR activation and thereby assists in carrying out the modifications to target proteins needed to maintain ER homeostasis. The expression of HYPE is regulated depending on the magnitude of ER stress.
Expression
Expression of FICD in most species and cell types occurs at very low levels, with human FICD (HYPE) being expressed between 2-20 NX of RNA in human cell lines. In humans, HYPE's under-expression has been linked to decreased ability of the UPR to maintain ER homeostasis.
In Drosophila, knockdown of FICD (dfic in D. melanogaster) causes blindness in flies. This phenotype could be rescued by the expression of wild-type dfic in the glial cells of those flies.
In C. elegans, FICD (FIC-1 in C. elegans) is also expressed at a basal level throughout all cell types in the worm body. FIC-1 also shows no change in level of expression throughout the worm's lifetime. FIC-1 expression has been linked to immunity to P. aeruginosa, as mutants with inactive FIC-1 showed increased susceptibility to the bacteria.
Localization
The site of localization depends on the organism in which a given FIC protein resides. In many cases FICD is situated in the lumen of the endoplasmic reticulum, where it adenylates the Hsp70 chaperone binding immunoglobulin protein (BiP) at Thr-366 and Thr-518. HYPE localizes in the ER via its hydrophobic N-terminus. Drosophila CG9523, or dFic, can be found in the cytosolic region but is also transcriptionally activated during ER stress.
HYPE has two sites of N-glycosylation, at Asn275 and Asn446. FICD is a type II transmembrane protein with the Fic domain facing the ER lumen. Similar results have been shown for dFic through in vitro translation in the presence of microsomes, which indicated N-glycosylation at site Asn288.
Clinical significance
While FICD (HYPE in humans) has not been directly linked to any disease pathways, some of its substrates are known participants in human diseases. BiP (GRP78/HSPA5), a validated substrate of HYPE, has been substantially linked to increased rates of cancer cell survival under proapoptotic conditions. α-syn, a putative substrate for HYPE, is a known factor in the development of Parkinson's disease through the formation of protein aggregates in the brain. HYPE also has other potential roles in the fields of neurodevelopment and neurodegeneration.
References
Enzymes
Post-translational modification | FICD | Chemistry | 1,910 |
1,731,762 | https://en.wikipedia.org/wiki/Solarsoft | Solarsoft is a collaborative software development system created at Lockheed-Martin to support solar data analysis and spacecraft operation activities. It is widely recognized in the solar physics community as having revolutionized solar data analysis starting in the early 1990s. Solarsoft is in active development and use by research groups on all seven continents.
Solarsoft is a store-and-forward system that makes use of rsync, csh and other UNIX tools to distribute the software to a wide variety of platforms. Solarsoft predates CVS and most other collaborative development systems; hence, it does not provide direct support for many features that today would be considered necessary, such as software versioning. The use of Solarsoft has grown to include calibration data and even complete catalog indices for some instruments, as well as the scientific software.
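As an illustration of the store-and-forward mirroring idea only (not of Solarsoft's actual csh-based scripts), the hypothetical Python sketch below pulls a software tree from an upstream rsync mirror; the host name and paths are placeholders, not real Solarsoft endpoints.

```python
import subprocess

# Hypothetical placeholders -- not the real Solarsoft distribution server or install path.
MIRROR_SOURCE = "rsync://mirror.example.org/ssw/"
LOCAL_TREE = "/usr/local/ssw/"

def update_tree(source: str = MIRROR_SOURCE, dest: str = LOCAL_TREE) -> int:
    """One-way mirror of the upstream software tree onto the local machine."""
    # -a preserves permissions and timestamps, -z compresses in transit,
    # --delete removes local files that no longer exist upstream.
    return subprocess.call(["rsync", "-az", "--delete", source, dest])

if __name__ == "__main__":
    update_tree()
```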
Most of the software in the Solarsoft tree pertains to either solar data analysis or specific space missions or observatories such as Yohkoh or SOHO. The vast majority is written in IDL, the most commonly used analysis platform in the solar physics community, though some C, ana, and PDL modules are also available.
External links
Solarsoft @ LMSAL
Solarsoft @ NASA
Physics software
Lockheed Martin | Solarsoft | Physics | 258 |
70,509,308 | https://en.wikipedia.org/wiki/Karen%20Morse%20%28chemist%29 | Karen Dale Williams Morse is an inorganic chemist. She was president of Western Washington University from 1993 until 2008, and was named the Bowman Distinguished Professor in 2014. She is an elected fellow of the American Association for the Advancement of Science.
Education and career
Morse has a B.A. from Denison University (1962), and an M.S. and Ph.D. from the University of Michigan. During her Ph.D. she worked on Lewis acids. Morse joined the faculty of Utah State University in 1968 in the department of chemistry and biochemistry, and subsequently became the department head, the dean, and was named provost in 1989. In 1993 she moved to Western Washington University where she was president until 2008. In 2014, Morse was named the Bowman Distinguished Professor at Western Washington University.
Morse's early research centered on the production and properties of phosphines. She also worked on borohydrides, phosphites, metal–phosphorus compounds, and aryl phosphines. Morse also led the professional training committee at the American Chemical Society, where she expanded the options for recognizing educators who teach chemistry at the undergraduate and high school level.
Selected publications
Awards and honors
Morse was elected a fellow of the American Association for the Advancement of Science in 1986. In 1997 she received the Garvan–Olin Medal for scientific accomplishments by a woman chemist from the American Chemical Society. In 2012 Western Washington University named the chemistry building the Karen W. Morse Hall in recognition of her. In 2021, Utah State University awarded her with an honorary doctorate.
References
Living people
Inorganic chemists
Utah State University faculty
Western Washington University faculty
Heads of universities and colleges in the United States
Fellows of the American Association for the Advancement of Science
Year of birth missing (living people)
Women heads of universities and colleges
21st-century American women scientists
20th-century American chemists
20th-century American women scientists
American women chemists
21st-century American chemists
Academics from Bellingham, Washington
Scientists from Bellingham, Washington | Karen Morse (chemist) | Chemistry | 401 |
60,386,923 | https://en.wikipedia.org/wiki/Sotagliflozin | Sotagliflozin, sold under the brand name Inpefa among others, is a medication used to reduce the risk of death due to heart failure. It is a sodium-glucose cotransporter 2 (SGLT2) inhibitor. It is taken by mouth.
The most common side effect is genital infection in women. Other common side effects include diabetic ketoacidosis, diarrhea, and genital infection in men.
Sotagliflozin was approved for medical use in the European Union in April 2019, as Zynquista, for the treatment for type 1 diabetes, and in the United States in May 2023, to reduce the risk of death due to heart failure. The marketing authorization for sotagliflozin was withdrawn in the EU in August 2022.
Medical uses
In the United States, sotagliflozin is indicated to reduce the risk of cardiovascular death, hospitalization for heart failure, and urgent heart failure visit in adults with heart failure; or type 2 diabetes, chronic kidney disease, and other cardiovascular risk factors. Sotagliflozin is a sodium-glucose co-transporter 1 and 2 inhibitor that reduces both postprandial glucose and insulin levels by delaying intestinal glucose absorption; it also decreases gastric inhibitory polypeptide, while elevations in glucagon-like peptide and peptide YY levels are consistent with local inhibition of intestinal SGLT1. Combining insulin with sotagliflozin 200 mg or 400 mg led to a significant lowering of systolic and diastolic blood pressure and multiple indirect markers of arterial stiffness, including pulse pressure, without changes in pulse rates. It also decreased the incidence of myocardial infarction and stroke, pointing to a potential additional effect of SGLT1 inhibition.
History
The US Food and Drug Administration (FDA) approved sotagliflozin based on evidence from two clinical trials of 11,806 total participants with heart failure or with type 2 diabetes, chronic kidney disease, and other cardiovascular risk factors. The trials were conducted at 322 sites in 32 countries (SOLOIST/NCT03521934) and 750 sites in 42 countries (SCORED/NCT03315143), primarily in Europe, South America, and North America. Both trials were used for the primary determination of the benefits and side effects of the drug. In both trials, participants were randomly assigned to receive either sotagliflozin or placebo by mouth once a day. Neither the participants nor the healthcare providers knew which treatment was being given until after the trial was completed. The benefit of sotagliflozin was evaluated by measuring the number of predefined events (death from cardiovascular causes, need for hospitalization for heart failure, or urgent medical care visit for heart failure) occurring in the patient population receiving sotagliflozin versus placebo.
Society and culture
Legal status
The FDA refused its approval for use in combination with insulin for the treatment of type 1 diabetes. The drug was developed by Lexicon Pharmaceuticals.
In May 2023, the US FDA approved sotagliflozin (Inpefa) to decrease the risk of cardiovascular death, hospitalization for heart failure, and urgent heart failure visit in adults with heart failure or type 2 diabetes, chronic kidney disease, and other cardiovascular risk.
References
Further reading
SGLT2 inhibitors
Withdrawn drugs | Sotagliflozin | Chemistry | 714 |
73,105,923 | https://en.wikipedia.org/wiki/Dimitra%20Markovitsi | Dimitra Markovitsi is a Greek-French photochemist. She is currently an Emeritus Research Director at the French National Center for Scientific Research (CNRS). She pioneered studies on the electronically excited states of liquid crystals and made significant advances in the understanding of processes triggered in DNA upon absorption of UV radiation. The two facets of her work have been the subject of a recent Marie Skłodowska-Curie European training network entitled "Light DyNAmics - DNA as a training platform for photodynamic processes in soft materials."
Early life
Markovitsi was born in Athens in 1954, the daughter of Tryfon and Eleftheria. From 1958 to 1969 she lived in Krya Vrysi, Pella, before returning to Athens, where she earned a degree in chemical engineering from the National Technical University (1978). Thanks to a scholarship from the French government, she relocated to France, where she earned a "Diplôme d'Etudes Approfondies" (equivalent to a Master's degree) on "Energy and Pollution" from the Université Paris VII in 1979 and, later, a Ph.D. in Chemistry from the Louis Pasteur University in Strasbourg (1983).
Research work
Dimitra Markovitsi's areas of research interest include photophysics and photochemistry in the condensed phase, time-resolved optical spectroscopy (absorption, fluorescence), excited states, energy and charge transfer, charge separation, ionization, radical formation, photodamage, UV-induced primary processes in DNA (excited states, intrinsic fluorescence, electron ejection, oxidative damage) and G-quadruplexes.
Markovitsi studied the dimensionality of excitation transport in columnar phases. She discussed the effect of orientational disorder on the electronic excited states and introduced a model based on the exciton theory and quantum chemistry computations.
She published the first studies investigating the effect of structural disorder on the excited states of double helices and guanine quadruplexes. Simultaneously, she explored the behavior of the intrinsic DNA fluorescence from femtoseconds to nanoseconds. She provided evidence of the occurrence of excitation transport between nucleobases and the collective nature of Franck-Condon states. She reported the first spectroscopic investigation on DNA excited states in the UVA region; despite their very poor absorption, such excited states may contribute to the deterioration of the genetic code by solar light, whose UVA intensity is greater than that of UVB and UVC.
She identified an unanticipated phenomenon: low-energy UV radiation can ionize DNA multimers (but not their monomeric components), generating electron holes in the nucleobases. The latter radical species are precursors of oxidative damage and hold promise for nanodevices based on photoconductivity. She demonstrated that the photoionization of guanine quadruplexes can be adjusted by varying their structural parameters.
Markovitsi also investigated DNA reaction dynamics on nanosecond to millisecond timescales. This work focuses on the dimerization of nucleobases and the deprotonation and tautomerization of the guanine radical. Her research revealed the anisotropic character of such events, which are highly dependent on the local DNA environment, rendering the conventional models of chemical kinetics inadequate for describing them.
Markovitsi’s work has been published in collective books, including the “Handbook of Organic Photochemistry and Photobiology”.
Career
While at Strasbourg in 1981, Markovitsi joined the CNRS. Then she relocated to the Paris area, where she worked from 1985 until 2021 in the CEA Paris-Saclay, in joint research Laboratories of the CNRS and the French Alternative Energies and Atomic Energy Commission. From 2001 to 2014, she was the director of the Francis Perrin Laboratory (Laboratoire Francis Perrin). After being appointed Emeritus Research Director, she moved to the Institut de Chimie Physique - Université Paris-Saclay.
Markovitsi served as president of the European Photochemistry Association from 2007 to 2010, and since 2014 she has been the president of the International Foundation for Photochemistry.
Relevant publications
D. Markovitsi, On the Use of the Intrinsic DNA Fluorescence for Monitoring Its Damage: A Contribution from Fundamental Studies. 2024, https://doi.org/10.1021/acsomega.4c02256
D. Markovitsi, Processes triggered in guanine quadruplexes by direct absorption of UV radiation: From fundamental studies toward optoelectronic biosensors, Photochem. Photobiol. 2023, https://doi.org/10.1111/php.13826
Balanikas, E.; Banyasz, A.; Baldacchino, G.; Markovitsi, D. Deprotonation Dynamics of Guanine Radical Cations. Photochem. Photobiol. 2022, 98, 523-531.
Gustavsson, T.; Markovitsi, D. Fundamentals of the Intrinsic DNA Fluorescence. Acc. Chem. Res. 2021, 54, 1226-1235.
Balanikas, E.; Banyasz, A.; Douki, T.; Baldacchino, G.; Markovitsi, D. Guanine Radicals Induced in DNA by Low-Energy Photoionization. Acc. Chem. Res. 2020, 53, 1511–1519.
Banyasz, A.; Vay, I.; Changenet-Barret, P.; Gustavsson, T.; Douki, T.; Markovitsi, D. Base-pairing enhances fluorescence and favors cyclobutane dimer formation induced upon absorption of UVA radiation by DNA. J. Am. Chem. Soc. 2011, 133, 5163-5165.
Ecoffet, C.; Markovitsi, D.; Millie, P.; Lemaistre, J. Electronic excitations in organized molecular systems. A model for columnar aggregates of ionic compounds. Chem. Phys. 1993, 177, 629-643.
Other activities
Dimitra Markovitsi and her husband Gérard Balland translated from Greek into French the historical novel “Σέργιος και Βάκχος” by M. Karagatsis, published in 1959.
References
1954 births
Living people
Photochemists
20th-century women scientists | Dimitra Markovitsi | Chemistry | 1,382 |
16,726,294 | https://en.wikipedia.org/wiki/Centrosolar | Centrosolar, based in Munich, Germany, was one of the leading stock exchange listed solar companies in Europe. The company produced photovoltaic systems for private houses and industrial properties. Centrosolar marketed standard grid-connected systems and non-grid-connected solar power generators.
In 2008, following 10 months of construction, Centrosolar opened a 47,000 square-meter, 150 MW solar module factory in Wismar, Germany. The €23 million facility has created 250 new jobs in Wismar.
In 2014 the company declared bankruptcy. The German production facility was purchased by CS Wismar. Centrosolar's subsidiaries in France and the Netherlands were acquired by the German solar producer Solarwatt and continued business as Solarwatt France SARL and Solarwatt BV.
See also
List of photovoltaics companies
Renewable energy commercialization
References
External links
Centrosolar's Kirsch Says Sales May Beat Estimates
Solar energy companies of Germany
Photovoltaics manufacturers | Centrosolar | Engineering | 201 |
54,632,447 | https://en.wikipedia.org/wiki/Haloplanus%20natans | Haloplanus natans is a halophilic archaeon in the family Halobacteriaceae and the type species of the genus Haloplanus. It was isolated from controlled mesocosms containing a mixture of water from the Dead Sea and the Red Sea.
References
External links
Type strain of Haloplanus natans at BacDive - the Bacterial Diversity Metadatabase
Euryarchaeota
Archaea described in 2007 | Haloplanus natans | Biology | 88 |
1,905,105 | https://en.wikipedia.org/wiki/Flexible-fuel%20vehicle | A flexible-fuel vehicle (FFV) or dual-fuel vehicle (colloquially called a flex-fuel vehicle) is an alternative fuel vehicle with an internal combustion engine designed to run on more than one fuel, usually gasoline blended with either ethanol or methanol fuel, and both fuels are stored in the same common tank. Modern flex-fuel engines are capable of burning any proportion of the resulting blend in the combustion chamber as fuel injection and spark timing are adjusted automatically according to the actual blend detected by a fuel composition sensor. Flex-fuel vehicles are distinguished from bi-fuel vehicles, where two fuels are stored in separate tanks and the engine runs on one fuel at a time, for example, compressed natural gas (CNG), liquefied petroleum gas (LPG), or hydrogen.
The most common commercially available FFV in the world market is the ethanol flexible-fuel vehicle, with about 60 million automobiles, motorcycles and light duty trucks manufactured and sold worldwide by March 2018, and concentrated in four markets, Brazil (30.5 million light-duty vehicles and over 6 million motorcycles), the United States (27 million by the end of 2021), Canada (1.6 million by 2014), and Europe, led by Sweden (243,100). In addition to flex-fuel vehicles running with ethanol, in Europe and the US, mainly in California, there have been successful test programs with methanol flex-fuel vehicles, known as M85 flex-fuel vehicles. There have been also successful tests using P-series fuels with E85 flex fuel vehicles, but as of June 2008, this fuel is not yet available to the general public. These successful tests with P-series fuels were conducted on Ford Taurus and Dodge Caravan flexible-fuel vehicles.
Though technology exists to allow ethanol FFVs to run on any mixture of gasoline and ethanol, from pure gasoline up to 100% ethanol (E100), North American and European flex-fuel vehicles are optimized to run on E85, a blend of 85% anhydrous ethanol fuel with 15% gasoline. This upper limit in the ethanol content is set to reduce ethanol emissions at low temperatures and to avoid cold starting problems during cold weather, at temperatures lower than . The alcohol content is reduced during the winter in regions where temperatures fall below to a winter blend of E70 in the U.S. or to E75 in Sweden from November until March. Brazilian flex fuel vehicles are optimized to run on any mix of E20-E25 gasoline and up to 100% hydrous ethanol fuel (E100). Brazilian flex vehicles were built with a small gasoline reservoir for cold starting the engine when temperatures drop below . An improved flex motor generation was launched in 2009 which eliminated the need for the secondary gas tank.
Terminology
As ethanol FFVs became commercially available during the late 1990s, the common use of the term "flexible-fuel vehicle" became synonymous with ethanol FFVs. In the United States flex-fuel vehicles are also known as "E85 vehicles". In Brazil, the FFVs are popularly known as "total flex" or simply "flex" cars. In Europe, FFVs are also known as "flexifuel" vehicles. Automakers, particularly in Brazil and the European market, use badging in their FFV models with some variant of the word "flex", such as Volvo Flexifuel, or Volkswagen Total Flex, or Chevrolet FlexPower or Renault Hi-Flex, and Ford sells its Focus model in Europe as Flexifuel and as Flex in Brazil. In the US, only since 2008 have FFV models featured a yellow gas cap with the label "E85/Gasoline" written on the top of the cap to differentiate E85s from gasoline-only models.
Flexible-fuel vehicles (FFVs) are based on dual-fuel systems that supply both fuels into the combustion chamber at the same time in various calibrated proportions. The most common fuels used by FFVs today are unleaded gasoline and ethanol fuel. Ethanol FFVs can run on pure gasoline, pure ethanol (E100) or any combination of both. Methanol has also been blended with gasoline in flex-fuel vehicles known as M85 FFVs, but their use has been limited mainly to demonstration projects and small government fleets, particularly in California.
Bi-fuel vehicles. The term flexible-fuel vehicles is sometimes used to include other alternative fuel vehicles that can run with compressed natural gas (CNG), liquefied petroleum gas (LPG; also known as autogas), or hydrogen. However, all these vehicles actually are bi-fuel and not flexible-fuel vehicles, because they have engines that store the other fuel in a separate tank, and the engine runs on one fuel at a time. Bi-fuel vehicles have the capability to switch back and forth from gasoline to the other fuel, manually or automatically. The most common available fuel in the market for bi-fuel cars is natural gas (CNG), and by 2008 there were 9.6 million natural gas vehicles, led by Pakistan (2.0 million), Argentina (1.7 million), and Brazil (1.6 million). Natural gas vehicles are a popular choice as taxicabs in the main cities of Argentina and Brazil. Normally, standard gasoline vehicles are retrofitted in specialized shops, which involve installing the gas cylinder in the trunk and the CNG injection system and electronics.
Multifuel vehicles are capable of operating with more than two fuels. In 2004 GM do Brasil introduced the Chevrolet Astra 2.0 with a "MultiPower" engine built on flex fuel technology developed by Bosch of Brazil, and capable of using CNG, ethanol and gasoline (E20-E25 blend) as fuel. This automobile was aimed at the taxicab market and the switch among fuels is done manually. In 2006 Fiat introduced the Fiat Siena Tetra fuel, a four-fuel car developed under Magneti Marelli of Fiat Brazil. This automobile can run as a flex-fuel on 100% ethanol (E100); or on E-20 to E25, Brazil's normal ethanol gasoline blend; on pure gasoline (though no longer available in Brazil since 1993, it is still used in neighboring countries); or just on natural gas. The Siena Tetrafuel was engineered to switch from any gasoline-ethanol blend to CNG automatically, depending on the power required by road conditions. Another existing option is to retrofit an ethanol flexible-fuel vehicle to add a natural gas tank and the corresponding injection system. This option is popular among taxicab owners in São Paulo and Rio de Janeiro, Brazil, allowing users to choose among three fuels (E25, E100 and CNG) according to current market prices at the pump. Vehicles with this adaptation are known in Brazil as "tri-fuel" cars.
Flex-fuel hybrid electric and flex-fuel plug-in hybrid are two types of hybrid vehicles built with a combustion engine capable of running on gasoline, E-85, or E-100 to help drive the wheels in conjunction with the electric engine or to recharge the battery pack that powers the electric engine. In 2007 Ford produced 20 demonstration Escape Hybrid E85s for real-world testing in fleets in the U.S. Also as a demonstration project, Ford delivered in 2008 the first flexible-fuel plug-in hybrid SUV to the U.S. Department of Energy (DOE), a Ford Escape Plug-in Hybrid, which runs on gasoline or E85. GM announced that the Chevrolet Volt plug-in hybrid, launched in the U.S. in late 2010, would be the first commercially available flex-fuel plug-in capable of adapting the propulsion to several world markets such as the U.S., Brazil or Sweden, as the combustion engine can be adapted to run on E85, E100 or diesel respectively. The Volt was initially expected to be flex-fuel-capable in 2013, but it was not produced. Lotus Engineering unveiled the Lotus CityCar at the 2010 Paris Motor Show. The CityCar is a plug-in hybrid concept car designed for flex-fuel operation on ethanol, or methanol as well as regular gasoline.
In December 2018, Toyota do Brasil announced the development of the world's first commercial hybrid electric car with flex-fuel engine capable of running with electricity and any blend of ethanol fuel and gasoline. The flexible fuel hybrid technology was developed in partnership with several Brazilian federal universities, and a prototype was tested for six months using a Toyota Prius as development mule. Toyota announced plans to start production of a flex hybrid electric car for the Brazilian market in the second half of 2019.
History
The Ford Model T, produced from 1908 through 1927, was fitted with a carburetor with adjustable jetting, allowing use of ethanol, gasoline or kerosene (each by itself), or a combination of the first two of those fuels. Other car manufacturers also provided engines for ethanol fuel use. Henry Ford continued to advocate for ethanol as fuel even during Prohibition. However, cheaper oil caused gasoline to prevail, until the 1973 oil crisis resulted in gasoline shortages and awareness of the dangers of oil dependence. This crisis opened a new opportunity for ethanol and other alternative fuels, such as methanol, gaseous fuels such as CNG and LPG, and also hydrogen. Ethanol, methanol and natural gas were the three alternative fuels that received the most attention for research and development, and government support.
Since 1975, and as a response to the shock caused by the first oil crisis, the Brazilian government implemented the National Alcohol Program -Pró-Álcool- (), a nationwide program financed by the government to phase out automotive fuels derived from fossil fuels in favor of ethanol made from sugar cane. It began with a low blend of anhydrous alcohol with regular gasoline in 1976, and since July 2007 the mandatory blend is 25% of alcohol or gasohol E25. In 1979, and as a response to the second oil crisis, the first vehicle capable of running with pure hydrous ethanol (E100) was launched to the market, the Fiat 147, after testing with several prototypes developed by Fiat, Volkswagen, GM and Ford. The Brazilian government provided three important initial drivers for the ethanol industry: guaranteed purchases by the state-owned oil company Petrobras, low-interest loans for agro-industrial ethanol firms, and fixed gasoline and ethanol prices. After reaching more than 4 million cars and light trucks running on pure ethanol by the late 1980s, the use of E100-only vehicles sharply declined after increases in sugar prices produced shortages of ethanol fuel.
After extensive research that began in the 1990s, a second push took place in March 2003, when the Brazilian subsidiary of Volkswagen launched to the market the first full flexible-fuel car, the Gol 1.6 Total Flex. Several months later it was followed by other Brazilian automakers, and by 2010 General Motors, Fiat, Ford, Peugeot, Renault, Volkswagen, Honda, Mitsubishi, Toyota, Citroën, Nissan and Kia Motors were producing popular models of flex cars and light trucks. The adoption of ethanol flex fuel vehicles was so successful that production of flex cars went from almost 40 thousand in 2003 to 1.7 million in 2007. This rapid adoption of the flex technology was facilitated by the fuel distribution infrastructure already in place, as around 27,000 filling stations countrywide were available by 1997 with at least one ethanol pump, a heritage of the Pró-Álcool program.
In the United States, initial support to develop alternative fuels by the government was also a response to the first oil crisis, and some time later, as a goal to improve air quality. Also, liquid fuels were preferred over gaseous fuels not only because they have a better volumetric energy density but also because they were the most compatible fuels with existing distribution systems and engines, thus avoiding a big departure from the existing technologies and taking advantage of the vehicle and the refueling infrastructure. California led the search of sustainable alternatives with interest focused in methanol. Ford Motor Company and other automakers responded to California's request for vehicles that run on methanol. In 1981, Ford delivered 40 dedicated methanol fuel (M100) Escorts to Los Angeles County, but only four refueling stations were installed. The biggest challenge in the development of alcohol vehicle technology was getting all of the fuel system materials compatible with the higher chemical reactivity of the fuel. Methanol was even more of a challenge than ethanol but much of the early experience gained with neat ethanol vehicle production in Brazil was transferable to methanol. The success of this small experimental fleet of M100s led California to request more of these vehicles, mainly for government fleets. In 1983, Ford built 582 M100 vehicles; 501 went to California, and the remaining to New Zealand, Sweden, Norway, United Kingdom, and Canada.
As an answer to the lack of refueling infrastructure, Ford began development of a flexible-fuel vehicle in 1982, and between 1985 and 1992, 705 experimental FFVs were built and delivered to California and Canada, including the 1.6L Ford Escort, the 3.0L Taurus, and the 5.0L LTD Crown Victoria. These vehicles could operate on either gasoline or methanol with only one fuel system. Legislation was passed to encourage the US auto industry to begin production, which started in 1993 for the M85 FFVs at Ford. In 1996, a new FFV Ford Taurus was developed, with models fully capable of running on either methanol or ethanol blended with gasoline. This ethanol version of the Taurus became the first commercial production of an E85 FFV. The momentum of the FFV production programs at the American car companies continued, although by the end of the 1990s, the emphasis shifted to the FFV E85 version, as it is today. Ethanol was preferred over methanol because there is a large support from the farming community, and thanks to the government's incentive programs and corn-based ethanol subsidies available at the time. Sweden also tested both the M85 and the E85 flexifuel vehicles, but due to agriculture policy, in the end emphasis was given to the ethanol flexifuel vehicles. Support for ethanol also comes from the fact that it is a biomass fuel, which addresses climate change concerns and greenhouse gas emissions, though nowadays these benefits are questioned and depend on the feedstock used for ethanol production and their indirect land use change impacts.
The demand for ethanol fuel produced from field corn in the United States was stimulated by the discovery in the late 90s that methyl tertiary butyl ether (MTBE), an oxygenate additive in gasoline, was contaminating groundwater. Due to the risks of widespread and costly litigation, and because MTBE use in gasoline was banned in almost 20 states by 2006, the substitution of MTBE opened a new market for ethanol fuel. This demand shift for ethanol as an oxygenate additive took place at a time when oil prices were already significantly rising. By 2006, about 50 percent of the gasoline used in the U.S. contained ethanol at different proportions, and ethanol production grew so fast that the US became the world's top ethanol producer, overtaking Brazil in 2005. This shift also contributed to a sharp increase in the production and sale of E85 flex vehicles since 2002.
Flexible-fuel vehicles by country
Brazil
Flexible-fuel technology started being developed by Brazilian engineers near the end of the 1990s. The Brazilian flexible fuel car is built with an ethanol-ready engine and one fuel tank for both fuels. The small gasoline reservoir for starting the engine in cold weather, used in earlier neat ethanol vehicles, was kept to avoid start-up problems in the central and southern regions, where winter temperatures normally drop below . An improved flex motor generation launched in 2009 eliminated the need for this secondary gas reservoir tank. Another improvement was the reduction of fuel consumption and tailpipe emissions, between 10% and 15% as compared to flex motors sold in 2008. In March 2009 Volkswagen do Brasil launched the Polo E-Flex, the first flex fuel model without an auxiliary tank for cold start.
A key innovation in the Brazilian flex technology was avoiding the need for an additional dedicated sensor to monitor the ethanol-gasoline mix, which made the first American M85 flex fuel vehicles too expensive.
Brazilian flex cars are capable of running on just hydrated ethanol (E100), or just on a blend of gasoline with 25 to 27% anhydrous ethanol (the mandatory blend), or on any arbitrary combination of both fuels.
The flexibility of Brazilian FFVs empowers consumers to choose the fuel depending on current market prices. Because ethanol's energy content is close to 34% less per unit volume than that of gasoline, flex cars running on ethanol get a lower mileage than when running on pure gasoline. However, this effect is partially offset by the usually lower price per liter of ethanol fuel. As a rule of thumb, Brazilian consumers are frequently advised by the media to use more alcohol than gasoline in their mix only when ethanol prices are 30% lower or more than gasoline, as ethanol prices fluctuate heavily depending on the result of seasonal sugar cane harvests.
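The 30% rule of thumb follows from the arithmetic of cost per kilometre: if a flex car travels only about 70% as far per litre on hydrous ethanol as on the gasoline blend, ethanol pays off only when its pump price is at most about 70% of the gasoline price. The Python sketch below is a minimal illustration of that comparison; the 0.70 relative-mileage value is the rule-of-thumb approximation, and the prices are hypothetical.

```python
def cheaper_fuel(ethanol_price: float, gasoline_price: float,
                 relative_mileage: float = 0.70) -> str:
    """Pick the cheaper fuel per kilometre for a Brazilian flex-fuel car.

    relative_mileage is the km/L obtained on hydrous ethanol (E100) divided by
    the km/L obtained on the E25/E27 gasoline blend (~0.70 as a rule of thumb).
    """
    ethanol_cost_per_km = ethanol_price / relative_mileage
    gasoline_cost_per_km = gasoline_price          # gasoline mileage normalised to 1
    return "ethanol" if ethanol_cost_per_km < gasoline_cost_per_km else "gasoline"

# Hypothetical pump prices in R$/litre, for illustration only.
print(cheaper_fuel(ethanol_price=3.50, gasoline_price=5.40))  # ethanol  (3.50/0.70 = 5.00 < 5.40)
print(cheaper_fuel(ethanol_price=4.20, gasoline_price=5.40))  # gasoline (4.20/0.70 = 6.00 > 5.40)
```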
In March 2003 Volkswagen do Brasil launched in the market the Gol 1.6 Total Flex, the first commercial flexible fuel vehicle capable of running on any blend of gasoline and ethanol. GM do Brasil followed three months later with the Chevrolet Corsa 1.8 Flexpower, using an engine developed by a joint-venture with Fiat called PowerTrain. Passenger flex-fuel vehicles became a commercial success in the country, and , a total of 15 car manufacturers produce flex-fuel engines for the Brazilian market, dominating all light vehicle segments except sports cars, off-road vehicles and minivans.
The production of flex-fuel cars and light commercial vehicles since 2003 reached the milestone of 10 million vehicles in March 2010. At the end of 2012 registrations of flex-fuel cars and light trucks represented 87% of all passenger and light duty vehicles sold in the country in 2012, and climbed to a 94% market share of all new passenger vehicles sales in 2013. Production passed the 20 million-unit mark in June 2013.
By the end of 2014, flex-fuel cars represented 54% of the Brazilian registered stock of light-duty vehicles, while gasoline only vehicles represented 34.3%. , flex-fuel light-duty vehicle sales totaled 25.5 million units. The market share of flex vehicles reached 88.6% of all light-duty registrations in 2017. , fifteen years after the launch of the first flex fuel car, there were 30.5 million flex cars and light trucks registered in the country, and 6 million flex motorcycles.
The rapid success of flex vehicles was made possible by the existence of 33,000 filling stations with at least one ethanol pump available by 2006, a heritage of the early Pró-Álcool ethanol program. These facts, together with the mandatory use of E25 blend of gasoline throughout the country, allowed Brazil in 2008 to achieve more than 50% of fuel consumption in the gasoline market from sugar cane-based ethanol.
According to two separate research studies conducted in 2009, at the national level 65% of the flex-fuel registered vehicles regularly used ethanol fuel, and the usage climbed to 93% in São Paulo, the main ethanol producer state where local taxes are lower, and prices at the pump are more competitive than gasoline. However, as a result of higher ethanol prices caused by the Brazilian ethanol industry crisis that began in 2009, combined with government subsidies to keep gasoline price lower than the international market value, by November 2013 only 23% flex-fuel car owners were using ethanol, down from 66% in 2009.
One of the latest innovation within the Brazilian flexible-fuel technology is the development of flex-fuel motorcycles. The first flex-fuel motorcycle was launched by Honda in March 2009, the CG 150 Titan Mix. In September 2009, Honda launched a second flexible-fuel motorcycle, the on-off-road NXR 150 Bros Mix. By December 2012 the five available models of flexible-fuel motorcycles from Honda and Yamaha reached a cumulative production of 2,291,072 units, representing 31.8% of all motorcycles manufactured in Brazil since 2009, and 48.2% of motorcycle production in 2012. Flexible-fuel motorcycle production passed the 3 million-unit milestone in October 2013. The 4 million mark was reached in March 2015.
Europe
Sweden
Flexible-fuel vehicles were introduced in Sweden as a demonstration test in 1994, when three Ford Taurus were imported to show the technology existed. Because of the existing interest, a project was started in 1995 with 50 Ford Taurus E85 flexifuel cars in different parts of Sweden: Umeå, Örnsköldsvik, Härnösand, Stockholm, Karlstad, Linköping, and Växjö. From 1997 to 1998 an additional 300 Taurus were imported, and the number of E85 fueling stations grew to 40. Then in 1998 the city of Stockholm placed an order for 2,000 FFVs, open to any car manufacturer willing to produce them. The objective was to jump-start the FFV industry in Sweden. The two domestic car makers Volvo Group and Saab AB refused to participate, arguing that there were no ethanol filling stations in place. However, Ford Motor Company took the offer and began importing the flexifuel version of its Focus model, delivering the first cars in 2001, and selling more than 15,000 FFV Focus by 2005, then representing an 80% market share of the flexifuel market.
In 2005 both Volvo and Saab introduced to the Sweden market their flexifuel models. Saab began selling its 9-5 2.0 Biopower, joined in 2006 by its 9-5 2.3 Biopower. Volvo introduced its S40 and V50 with flexible-fuel engines, joined in late 2006 by the new C30. All Volvo models were initially restricted to the Sweden market, until 2007, when these three models were launched in eight new European markets. In 2007, Saab also started selling a BioPower version of its popular Saab 9-3 line. In 2008 the Saab-derived Cadillac BLS was introduced with E85 compatible engines, and Volvo launched the V70 with a 2.5-litre turbocharged Flexifuel engine.
All flexible-fuel vehicles in Sweden use an E75 winter blend instead of E85 to avoid engine starting problems during cold weather. This blend was introduced in the winter of 2006–07 and E75 is used from November until March. For temperatures below , E85 flex vehicles require an engine block heater. The use of this device is also recommended for gasoline vehicles when temperatures drop below . Another option when extreme cold weather is expected is to add more pure gasoline in the tank, thus reducing the ethanol content below the E75 winter blend, or simply not to use E85 during extreme low temperature spells.
Sweden has achieved the largest E85 flexible-fuel vehicle fleet in Europe, with sharp growth from 717 vehicles in 2001 to 243,136 through December 2014. As of 2008 a total of 70% of all flexifuel vehicles operating in the EU were registered in Sweden. The recent and accelerated growth of the Swedish fleet of E85 flexifuel vehicles is the result of the National Climate Policy in Global Cooperation Bill passed in 2005, which not only ratified the Kyoto Protocol but also sought to meet the 2003 EU Biofuels Directive targets for use of biofuels, and also led to the government's 2006 commitment to eliminate oil imports by 2020.
In order to achieve these goals several government incentives were implemented. Ethanol, like the other biofuels, was exempted from both the CO2 and energy taxes until 2009, resulting in a 30% price reduction at the pump of E85 fuel over gasoline. Furthermore, other demand-side incentives for flexifuel vehicle owners include a bonus to buyers of FFVs, exemption from the Stockholm congestion tax, up to a 20% discount on auto insurance, free parking spaces in most of the largest cities, lower annual registration taxes, and a 20% tax reduction for flexifuel company cars. Also, as part of the program, the Swedish Government ruled that 25% of its vehicle purchases (excluding police, fire and ambulance vehicles) must be alternative fuel vehicles. By the first months of 2008, this package of incentives resulted in sales of flexible-fuel cars representing 25% of new car sales.
On the supply side, since 2005 gasoline fuelling stations selling more than 3 million liters of fuel a year are required to sell at least one type of biofuel, resulting in more than 1,200 gas stations selling E85 by August 2008. Despite the sharp growth of E85 flexifuel cars, by 2007 they represented just two percent of the four million Swedish vehicles. In addition, this law also mandated all new filling stations to offer alternative fuels, and stations with an annual volume of more than one million liters are required to have an alternative fuel pump by December 2009. Therefore, the number of E85 pumps was expected to reach nearly 60% of Sweden's 4,000 filling stations by 2009.
The Swedish-made Koenigsegg Jesko 300, the low downforce version of the Koenigsegg Jesko, is currently the fastest and most powerful flexible fuel vehicle with its turbocharged V8 producing over 1600 hp when running on biofuel, as compared to 1280 hp on 95 octane unleaded gasoline.
Other European countries
Flexifuel vehicles are sold in 18 European countries, including Austria, Belgium, Czech Republic, Denmark, Estonia, Finland, France, Germany, Hungary, Ireland, Italy, the Netherlands, Norway, Poland, Spain, Sweden, Switzerland, and the United Kingdom. Ford, Volvo and Saab are the main automakers offering flexifuel autos in the region.
France
Biofuel cars in general get strong tax incentives in France, including a 0 or 50% reduction on the tax on new vehicles, and a 40% reduction on CO2 tax for new cars. For company cars there is a corporate car tax free for two years and a recovery of 80% of the value added tax (VAT) on E85 vehicles. Also, E85 fuel price is set significantly lower than diesel or gasoline, resulting in E85 at € 0.80, diesel at €1.15, and gasoline at €1.30 per liter, as of April 2007. By May 2008, France had 211 pumps selling E85, even though the government made plans for the installation of up to 500 E85 pumps by year end 2007. French automakers Renault and PSA (Citroen & Peugeot) announced they will start selling FFV cars beginning in the summer 2007.
Germany
Biofuel emphasis in Germany is on biodiesel, and no specific incentives have been granted for E85 flex-fuel cars; however, there is complete exemption of taxes on all biofuels while there is a normal tax of €0.65 per liter of petroleum fuels. The distribution of E85 began in 2005, and with 219 stations as of September 2008, Germany ranks second after Sweden with the most E85 fueling stations in the EU.
As of July 2012 retail prices of E85 was €1.09 per liter, and gasoline was priced at €1.60 per liter (for gasoline RON 95), then providing enough margin to compensate for ethanol's lower fuel economy.
Ford has offered the Ford Focus since August 2005 in Germany, and planned to also offer the Mondeo and other models as FFV versions between 2008 and 2010. The Saab 9-5 and Saab 9-3 Biopower, the Peugeot 308 Bioflex, the Citroën C4 Bioflex, the Audi A5, two models of the Cadillac BLS, and five Volvo models were also available in the German market by 2008. Since 2011, Dacia has offered the Logan MCV with a 1.6l 16v flexfuel engine.
Ireland
Ireland is the third best-selling European market for E85 flex-fuel vehicles, after Sweden and France. Bioethanol (E85) in Ireland is made from whey, a waste product of cheese manufacturing. The Irish government established several incentives, including a 50% discount in vehicle registration taxes (VRT), which can account for more than one third of the retail price of a new car in Ireland (around €6,500). The bioethanol element of the E85 fuel is excise-free for fuel companies, allowing retail prices to be low enough to offset the 25 per cent cut in fuel economy of E85 cars, due to ethanol's lower energy content compared with gasoline. Also, the value added tax (VAT) on the fuel can be claimed back. E85 fuel is available across the country in more than 20 Maxol service stations. In October 2005, the 1.8 Ford Focus FFV became the first flexible-fuel vehicle to be commercially sold in Ireland. Later Ford launched the C-Max and the Mondeo flexifuel models. Saab and Volvo also have E85 models available.
From 1 January 2011 E85 fuel is no longer excise-free in Ireland. Maxol has announced they will not provide E85 when their current supplies have run out.
Spain
The first flexifuel vehicles were introduced in Spain in late 2007, with the acquisition of 80 cars for use in the official Spanish government fleet. At that time the country had only three gas stations selling E85, making it necessary to deploy an official E85 fueling station in Madrid to serve these vehicles. Despite the introduction of several flexifuel models in the Spanish market, by the end of 2008 the problem of inadequate E85 fueling infrastructure persisted, as only 10 gas stations were selling E85 fuel to the public in the entire country.
United Kingdom
The UK government established several incentives for E85 flex-fuel vehicles. These include a fuel duty rebate on E85 fuel of 20 p per liter, until 2010; a £10 to £15 reduction in the vehicle excise duty (VED); and a 2% annual company car tax discount for flex-fuel cars. Despite the small number of E85 pump stations available, limited to the Morrisons supermarket chain stations, most automakers offer the same models in the UK that are available in the European market. In 2005 the Ford Focus Flexi-Fuel became the first flexible-fuel car sold in the UK, though E85 pumps were not opened until 2006. Volvo now offers its flexifuel models S80, S40, C30, V50 and V70. Other models available in the UK are the Ford C-Max Flexi-Fuel, the Saab models 9-5 and 9-3 Flex-Fuel Biopower, and the new Saab Aero X BioPower E100 bioethanol. Despite being introduced around a decade ago, E85 is no longer commercially available in the UK.
United States
, there were more than 21 million E85 flex-fuel vehicles in the United States, up from about 11 million flex-fuel cars and light trucks in operation as of early 2013. The number of flex-fuel vehicles on U.S. roads increased from 1.4 million in 2001, to 4.1 million in 2005, and rose to 7.3 million in 2008.
For the 2011 model year there are about 70 E85-capable vehicles, including sedans, vans, SUVs and pick-up trucks. Many of the models available in the market are trucks and sport-utility vehicles getting less than when filled with gasoline. Actual consumption of E85 among flex-fuel vehicle owners is limited: the U.S. Department of Energy estimated that in 2011 only 862,837 flex-fuel fleet-operated vehicles were regularly fueled with E85. As a result, of all the ethanol fuel consumed in the country in 2009, only 1% was E85 consumed by flex-fuel vehicles.
The E85 blend is used in gasoline engines modified to accept such higher concentrations of ethanol. The fuel injection is regulated through a dedicated sensor that automatically detects the amount of ethanol in the fuel, allowing the engine to adjust both fuel injection and spark timing according to the actual blend in the vehicle's tank. Because ethanol contains close to 34% less energy per unit volume than gasoline, E85 FFVs deliver fewer miles per gallon than they do on gasoline. Based on EPA tests of all 2006 E85 models, the average fuel economy of E85 vehicles was 25.56% lower than on unleaded gasoline.
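The relationship between blend level and expected mileage can be sketched numerically. The following Python snippet is only an illustrative estimate, not an EPA method: it takes the roughly 34% volumetric energy deficit of pure ethanol from the paragraph above, assumes mileage scales with the energy content of the blend, and uses a hypothetical 30 mpg gasoline baseline.

# Rough estimate of the fuel-economy penalty for ethanol blends,
# assuming mileage scales with the volumetric energy content of the fuel.

ETHANOL_ENERGY_VS_GASOLINE = 1.0 - 0.34  # ethanol has ~34% less energy per unit volume

def blend_energy_fraction(ethanol_share: float) -> float:
    """Energy content of a blend relative to pure gasoline (by volume)."""
    return ethanol_share * ETHANOL_ENERGY_VS_GASOLINE + (1.0 - ethanol_share)

def expected_mpg(gasoline_mpg: float, ethanol_share: float) -> float:
    """Mileage expected on a blend, assuming it scales with energy content."""
    return gasoline_mpg * blend_energy_fraction(ethanol_share)

if __name__ == "__main__":
    baseline = 30.0  # hypothetical gasoline mpg for a flex-fuel vehicle
    for label, share in [("E85", 0.85), ("E70 (winter)", 0.70)]:
        mpg = expected_mpg(baseline, share)
        penalty = 100.0 * (1.0 - mpg / baseline)
        print(f"{label}: ~{mpg:.1f} mpg, ~{penalty:.0f}% below gasoline")

For E85 this simple volumetric estimate gives a penalty of about 29%, the same order of magnitude as the 25.56% average measured by the EPA quoted above; real-world results vary with engine calibration and driving conditions.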
The American E85 flex-fuel vehicle was developed to run on any mixture of unleaded gasoline and ethanol, anywhere from 0% to 85% ethanol by volume. Both fuels are mixed in the same tank, and E85 is sold already blended. In order to reduce ethanol evaporative emissions and to avoid problems starting the engine during cold weather, the maximum blend of ethanol was set to 85%. There is also a seasonal reduction of the ethanol content to E70 (called winter E85 blend) in very cold regions, where temperatures fall below during the winter. In Wyoming for example, E70 is sold as E85 from October to May.
E85 flex-fuel vehicles are becoming increasingly common in the Midwest, where corn is a major crop and is the primary feedstock for ethanol fuel production. Regional retail E85 prices vary widely across the US, with more favorable prices in the Midwest region, where most corn is grown and ethanol produced. Depending on the vehicle capabilities, the break-even price of E85 has to be between 25 and 30% lower than gasoline.
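Because the mileage penalty translates directly into a required pump discount, a short calculation shows why a 25–30% break-even gap is cited above. The snippet below is a sketch with hypothetical prices; the only inputs taken from the text are the 25–30% fuel-economy penalty range.

# Break-even E85 price: the cost per mile on E85 must not exceed the cost per mile on gasoline.
# cost per mile = price / mpg, so price_e85_breakeven = price_gas * (mpg_e85 / mpg_gas).

def breakeven_e85_price(gasoline_price: float, mpg_penalty: float) -> float:
    """Highest E85 price (per gallon) that still matches gasoline cost per mile."""
    return gasoline_price * (1.0 - mpg_penalty)

if __name__ == "__main__":
    gas_price = 3.50  # hypothetical gasoline price in $/gallon
    for penalty in (0.25, 0.30):  # 25-30% fuel-economy penalty range from the text
        price = breakeven_e85_price(gas_price, penalty)
        print(f"{penalty:.0%} penalty -> break-even E85 price ${price:.2f}/gal")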
Barriers to widespread adoption
A 2005 survey found that 68% of American flex-fuel car owners were not aware they owned an E85 flex vehicle. This was attributed to the exteriors of flex and non-flex vehicles looking exactly the same, the absence of any sale price difference between them, low consumer awareness of E85, and the initial decision of American automakers not to apply any kind of exterior labeling, so that buyers could be unaware they were purchasing an E85 vehicle. Since 2008, all new FFV models in the US feature a bright yellow gas cap to remind drivers of the E85 capability, as well as proper flex-fuel badging.
Some critics have argued that American automakers have been producing E85 flex models motivated by a loophole in the Corporate Average Fuel Economy (CAFE) requirements, which allows a fuel economy credit for every flex-fuel vehicle sold, whether or not these vehicles are actually fueled with E85 in practice. This loophole might allow the car industry to meet the CAFE fuel economy targets just by spending between and to turn a conventional vehicle into a flex-fuel vehicle, without investing in new technology to improve fuel economy, while saving the potential fines for not achieving that standard in a given model year. The CAFE standards proposed in 2011 for the period 2017–2025 will allow flexible-fuel vehicles to receive extra credit only when carmakers present data proving how much E85 such vehicles have actually consumed.
A major restriction hampering sales of E85 flex vehicles, and fueling with E85, is the limited infrastructure available to sell E85 to the public, with only 2% of motor fuel stations offering E85 by March 2014. There were only 3,218 fueling stations selling E85 to the public in the entire U.S., while about 156,000 retail motor fuel outlets did not offer any ethanol blend. In addition, E85 stations are heavily concentrated in the Corn Belt states. The main constraint on a more rapid expansion of E85 availability is that it requires dedicated storage tanks at filling stations, at an estimated cost of for each dedicated ethanol tank. The Obama Administration set the goal of installing 10,000 blender pumps nationwide by 2015, and to support this target the US Department of Agriculture (USDA) issued a rule in May 2011 including flexible fuel pumps in the Rural Energy for America Program (REAP). This ruling provides financial assistance to fuel station owners to install E85 and blender pumps.
Flex fuel conversion kit
A flex fuel conversion kit allows a conventionally manufactured vehicle to be altered to operate on propane, natural gas, methane gas, ethanol, or electricity; such alterations are classified as aftermarket alternative fuel vehicle (AFV) conversions. All vehicle conversions, except those completed for a vehicle to run on electricity, must meet current applicable U.S. Environmental Protection Agency (EPA) standards.
Latest developments
In 2008, Ford delivered the first flex-fuel plug-in hybrid as part of a demonstration project, a Ford Escape Plug-in Hybrid capable of running on E85 or gasoline. General Motors announced that the new Chevrolet Volt plug-in hybrid, launched in the United States market in December 2010, would be flex-fuel-capable in 2013. General Motors do Brasil announced that it would import five to ten Volts to Brazil during the first half of 2011, both as a demonstration and to lobby the federal government to enact financial incentives for green cars. If successful, GM would adapt the Volt to operate on ethanol fuel, as most new cars sold in Brazil are flex-fuel.
In 2008, Chrysler, General Motors, and Ford pledged to manufacture 50 percent of their entire vehicle line as flexible fuel in model year 2012, if enough fueling infrastructure develops. The Open Fuel Standard Act (OFS), introduced to Congress in May 2011, is intended to promote a massive adoption of flex-fuel vehicles capable of running on ethanol or methanol. The bill requires that 50 percent of automobiles made in 2014, 80 percent in 2016, and 95 percent in 2017, would be manufactured and warranted to operate on non-petroleum-based fuels, which includes existing technologies such as flex-fuel, natural gas, hydrogen, biodiesel, plug-in electric and fuel cell.
, almost half of new vehicles produced by Chrysler, Ford, and General Motors are flex-fuel, meaning roughly one-quarter of all new vehicles sold by 2015 are capable of using up to E85. However, obstacles to widespread use of E85 fuel remain. A 2014 analysis by the Renewable Fuels Association (RFA) found that oil companies prevent or discourage affiliated retailers from selling E85 through rigid franchise and branding agreements, restrictive supply contracts, and other tactics. The report showed independent retailers are five times more likely to offer E85 than retailers carrying an oil company brand.
Other countries
Australia
In January 2007 GM brought UK-sourced Saab 9-5 Biopower E85 flex-fuel vehicles to Australia as a trial, in order to measure interest in ethanol-powered vehicles in the country. Saab Australia placed the vehicles with the fleets of the Queensland Government, the media, and some ethanol producers. E85 is not available widely in Australia, but the Manildra Group provided the E85 blend fuel for this trial.
Saab Australia became the first car maker to produce an E85 flex-fuel car for the Australian market with the Saab 9-5 BioPower. One month later it launched the new 9-3 BioPower, the first vehicle in Australia to give drivers a choice of three fuels (E85, diesel or gasoline), with both automobiles sold at a small premium. Australia's largest independent fuel retailer, United Petroleum, announced plans to install Australia's first commercial E85 fuel pumps, one in Sydney and one in Melbourne.
GM Holden, the Victorian state government, Coskata, Caltex, Veolia Environmental Services and Mitsui have announced a consortium with a co-ordinated plan to build a bio-ethanol plant from household waste for use as E85 fuel. In August 2010 Caltex launched the E85 ethanol fuel called Bio E-Flex, designed for use in the Holden Commodore VE Series II flex-fuel vehicles to be released later in 2010. Caltex Australia plans to begin selling Bio E-Flex in Melbourne from September and expects to have Bio E-Flex available in more than 30 service stations in Melbourne, Sydney, Brisbane, Adelaide and Canberra by the end of October, with plans to increase to 100 metropolitan and regional locations in 2011.
Canada
As part of the North American auto market, by 2007 Canada had 51 models of E85 flex-fuel vehicles available, most from Chrysler, Ford and General Motors, including automobiles, pickup trucks, and SUVs. The country had around 1.6 million E85-capable flex-fuel vehicles on the road by 2014. However, most users are not aware they own an E85 vehicle, as vehicles are not clearly labeled as such; even though newer models have a yellow cap on the fuel filler indicating that the vehicle can handle E85, awareness remains low because there are very few gas stations offering E85. Another major drawback to greater E85 fuel use is that by June 2008 Canada had only three public E85 pumps, all located in Ontario, in the cities of Guelph, Chatham, and Woodstock. E85 fueling is available primarily for fleet vehicles, including 20 government refueling stations not open to the public. The main feedstocks for E85 production in Canada are corn and wheat, and several proposals have been discussed to increase the actual use of E85 fuel in FFVs, such as creating an ethanol-friendly highway or ethanol corridor.
Colombia
In March 2009 the Colombian government enacted a mandate to introduce E85 flexible-fuel cars. The executive decree applies to all gasoline-powered vehicles with engines smaller than 2.0 liters manufactured, imported, and commercialized in the country beginning in 2012, mandating that 60% of such vehicles must have flex-fuel engines capable of running with gasoline or E85, or any blend of both. By 2014 the mandatory quota is 80% and it will reach 100 percent by 2016. All vehicles with engines bigger than 2.0 liters must be E85 capable starting in 2013. The decree also mandates that by 2011 all gasoline stations must provide infrastructure to guarantee availability of E85 throughout the country. The mandatory introduction of E85 flex-fuel vehicles has caused controversy among carmakers, car dealers and gasoline station owners, and even some ethanol producers have complained that the industry is not ready to supply enough ethanol for the new E85 fleet.
India
Union Minister for Road Transport and Highways Nitin Gadkari has emphasised the adoption of alternative fuels that are import substitutes, cost-effective, pollution-free and indigenous, and that discourage the use of petrol or diesel. Addressing the media at an event held on 20 October 2021, he said that the government would ask all vehicle manufacturers to make flex-fuel engines compliant with the Euro VI emission norms within the next six to eight months. Flex-fuel, or flexible fuel, is an alternative fuel made of a combination of gasoline and methanol or ethanol.
Gadkari predicted that the Indian automobile industry would be worth Rs 15 lakh crore within the next 15 years. He said the government was planning to submit an affidavit in the Supreme Court of India to allow the manufacture of flex-fuel engines under the Euro IV emission norms, but that in the meantime it would ask all vehicle manufacturers to make flex-fuel engines (capable of running on more than one fuel) under the Euro VI emission norms within the next 6–8 months.
New Zealand
In 2006 New Zealand began a pilot project with two E85 Ford Focus Flexi-Fuel evaluation cars. The main feedstock used in New Zealand for ethanol production is whey, a by-product of milk production.
Paraguay
Government officials and businessmen from Paraguay began negotiations in 2007 with Brazilian automakers in order to import flex cars that run on any blend of gasoline and ethanol. If successful, Paraguay would become the first destination for Brazilian flex-fuel car exports. In May 2008, the Paraguayan government announced a plan to eliminate import taxes of flex-fuel vehicles and an incentive program for ethanol production. The plan also includes the purchase of 20,000 flex cars in 2009 for the government fleet.
Thailand
In 2006, tax incentives were established in Thailand for the introduction of compressed natural gas (CNG) as an alternative fuel, by eliminating import duties and lowering excise taxes on CNG-compatible cars. Then in 2007, Thai authorities approved incentives for the production of "eco-cars", with the goal of making the country a regional hub for the production of small, affordable and fuel-efficient cars. Seven automakers joined the program: Toyota, Suzuki, Nissan, Mitsubishi, Honda, Tata and Volkswagen. In 2008 the government announced priority for E85, expecting these flex-fuel vehicles to become widely available in Thailand in 2009, three years ahead of schedule. The incentives included cuts in excise tax rates for E85-compatible cars and reductions in corporate taxes for ethanol producers to ensure that E85 fuel supply would be met. This new plan, however, brought confusion and protests from the automakers that had signed up for the "eco-cars", as competition with the E85 flex-fuel cars would negatively affect their ongoing plans and investments, and their production lines would have to be upgraded at high cost to produce flex-fuel cars. They also complained that flex-fuel vehicles are popular in only a few countries around the world, limiting their export potential compared with other engine technologies.
Despite the controversy, the first E85 flexible fuel vehicles were introduced in November 2008. The first two models available in the Thai market were the Volvo S80 and the C30; the S80 is manufactured locally and the C30 is imported. By the time the flex vehicles were introduced there were already two gas stations with E85 fuel available, and during 2009 it was expected that 15 fueling stations in Bangkok would have E85 fuel available. In October 2009 the Mitsubishi Lancer EX was launched, becoming the first mass-production E85 flex-fuel vehicle produced in Thailand.
Comparison among the leading markets
List of currently produced flexible-fuel vehicles
Worldwide
Brazil
Europe
Citroën C4 1.6 BioFlex
Dacia Duster, Dacia Logan, Dacia Sandero
Fiat 500X 1.6 16V E.torQ, Fiat Aegea 1.6 16V E.torQ
Ford Focus, Ford C-MAX, Ford Mondeo, Ford S-Max, Ford Galaxy
Koenigsegg CCXR
Peugeot 307 1.6 BioFlex
Saab 9-5, Saab 9-3
SEAT León 1.6 MPI MultiFuel, SEAT Altea 1.6 MPI MultiFuel, SEAT Altea XL 1.6 MPI MultiFuel
Volvo C30 1.8F FlexiFuel, S40 1.8F FlexiFuel, V50 1.8F FlexiFuel, XC60 (concept), V70 2.0F FlexiFuel, S80 2.0F FlexiFuel
Thailand
Mitsubishi: Lancer Ex 1.8
Honda: Civic FB, Civic FC, City 6th gen, CR-V 4th gen, CR-V 5th gen, HR-V, Accord 9th gen, Accord 10th gen
Mazda: Mazda 3 BM / BP, Mazda CX-5 KE / KF, Mazda CX-3, Mazda CX-30
Toyota: Corolla Altis, C-HR, Camry XV70, Vios, Corolla Cross
Volvo: S60 DRIVe, S80 2.5FT
Ford: Focus 1.5 EcoBoost
Chevrolet: Captiva 2.4 Ecotec E85, Cruze 1.8 Ecotec E85
MG: 3 VTi-TECH HS TGI
Mercedes-Benz: GLA 200
United States
See also
Alternative propulsion
Battery electric vehicle
Bivalent (engine)
Butanol fuel
Clean Cities
Common ethanol fuel mixtures
Ethanol fuel by country
Food vs fuel
Natural gas vehicle
Plug-in hybrid electric vehicle
Vehicle conversion
References
External links
US Department of Energy: Flexible Fuel Vehicles
Fuel technology
Ethanol fuel
Embedded systems | Flexible-fuel vehicle | Technology,Engineering | 9,812 |
23,187,923 | https://en.wikipedia.org/wiki/Vieta%20jumping | In number theory, Vieta jumping, also known as root flipping, is a proof technique. It is most often used for problems in which a relation between two integers is given, along with a statement to prove about its solutions. In particular, it can be used to produce new solutions of a quadratic Diophantine equation from known ones. There exist multiple variations of Vieta jumping, all of which involve the common theme of infinite descent by finding new solutions to an equation using Vieta's formulas.
History
Vieta jumping is a classical method in the theory of quadratic Diophantine equations and binary quadratic forms. For example, it was used in the analysis of the Markov equation back in 1879 and in the 1953 paper of Mills.
In 1988, the method came to the attention of the mathematical olympiad community with the first olympiad problem to have a solution that used it; the problem was proposed for the International Mathematical Olympiad and was assumed to be the most difficult problem on the contest:
Let a and b be positive integers such that ab + 1 divides a^2 + b^2. Show that (a^2 + b^2)/(ab + 1) is the square of an integer.
Arthur Engel wrote the following about the problem's difficulty:
Among the eleven students receiving the maximum score for solving this problem were Ngô Bảo Châu, Ravi Vakil, Zvezdelina Stankova, and Nicușor Dan. Emanouil Atanassov (from Bulgaria) solved the problem in a paragraph and received a special prize.
Standard Vieta jumping
The concept of standard Vieta jumping is a proof by contradiction, and consists of the following four steps:
Assume toward a contradiction that some solution exists that violates the given requirements.
Take the minimal such solution according to some definition of minimality.
Replace one of the integers by a variable x in the formulas, and obtain an equation for which the original integer is a solution.
Using Vieta's formulas, show that this implies the existence of a smaller solution, hence a contradiction.
Example
Problem #6 at IMO 1988: Let a and b be positive integers such that ab + 1 divides a^2 + b^2. Prove that (a^2 + b^2)/(ab + 1) is a perfect square.
Fix some value k that is a non-square positive integer. Assume there exist positive integers (a, b) for which k = (a^2 + b^2)/(ab + 1).
Let (A, B) be positive integers for which k = (A^2 + B^2)/(AB + 1) and such that A + B is minimized, and without loss of generality assume A ≥ B.
Fixing B, replace A with the variable x to yield x^2 - (kB)x + (B^2 - k) = 0. We know that one root of this equation is x_1 = A. By standard properties of quadratic equations, we know that the other root satisfies x_2 = kB - A and x_2 = (B^2 - k)/A.
The first expression for x_2 shows that x_2 is an integer, while the second expression implies that x_2 ≠ 0 since k is not a perfect square. From (x_2^2 + B^2)/(x_2 B + 1) = k > 0 it further follows that x_2 B + 1 > 0, and hence x_2 is a positive integer. Finally, A ≥ B implies that x_2 = (B^2 - k)/A < A, hence x_2 + B < A + B, and thus (x_2, B) is a smaller solution, which contradicts the minimality of A + B.
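The descent step can also be written out in display form. The following LaTeX fragment is only a sketch of the standard argument, using the symbols k, A, B and x_2 introduced in the reconstruction above (the original article's notation may have differed).

\[
x^2 - kBx + (B^2 - k) = 0, \qquad x_1 = A, \qquad x_2 = kB - A = \frac{B^2 - k}{A},
\]
\[
x_2 \in \mathbb{Z}, \qquad x_2 \neq 0 \;(\text{otherwise } k = B^2), \qquad x_2 > 0, \qquad
x_2 = \frac{B^2 - k}{A} < \frac{B^2}{A} \leq A,
\]
so that \((x_2, B)\) is a solution with \(x_2 + B < A + B\), contradicting the minimality of \(A + B\).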
Constant descent Vieta jumping
The method of constant descent Vieta jumping is used when we wish to prove a statement regarding a constant k having something to do with the relation between a and b. Unlike standard Vieta jumping, constant descent is not a proof by contradiction, and it consists of the following four steps:
The equality case is proven so that it may be assumed that a > b.
b and k are fixed and the expression relating a, b and k is rearranged to form a quadratic with coefficients in terms of b and k, one of whose roots is a. The other root, x_2, is determined using Vieta's formulas.
For all (a, b) above a certain base case, show that 0 < x_2 < b < a and that x_2 is an integer. Thus, while maintaining the same k, we may replace (a, b) with (b, x_2) and repeat this process until we arrive at the base case.
Prove the statement for the base case, and as k has remained constant through this process, this is sufficient to prove the statement for all ordered pairs.
Example
Let a and b be positive integers such that ab divides a^2 + b^2 + 1. Prove that a^2 + b^2 + 1 = 3ab.
If a = b, then a^2 must divide 2a^2 + 1, which implies that a^2 divides 1, and hence a = b = 1 and a^2 + b^2 + 1 = 3 = 3ab. So, without loss of generality, assume that a > b.
For any (a, b) satisfying the given condition, let k = (a^2 + b^2 + 1)/(ab) and rearrange and substitute to get x^2 - (kb)x + (b^2 + 1) = 0. One root to this quadratic is a, so by Vieta's formulas the other root may be written as follows: x_2 = kb - a = (b^2 + 1)/a.
The first equation shows that x_2 is an integer and the second that it is positive. Because a > b and they are both integers, a ≥ b + 1, and hence x_2 = (b^2 + 1)/a ≤ (b^2 + 1)/(b + 1); as long as b > 1, we always have (b^2 + 1)/(b + 1) < b, and therefore x_2 < b. Thus, while maintaining the same k, we may replace (a, b) with (b, x_2) and repeat this process until we arrive at the base case.
The base case we arrive at is the case where b = 1. For (a, 1) to satisfy the given condition, a must divide a^2 + 2, which implies that a divides 2, making a either 1 or 2. The first case is eliminated because a ≠ b. In the second case, k = (a^2 + b^2 + 1)/(ab) = (4 + 1 + 1)/2 = 3. As k has remained constant throughout this process of Vieta jumping, this is sufficient to show that for any (a, b) satisfying the given condition, k will always equal 3.
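As a concrete check (an illustrative example not present in the original text), the pair (a, b) = (5, 2) satisfies the condition, and a single Vieta jump brings it down to the base case:

\[
\frac{5^2 + 2^2 + 1}{5 \cdot 2} = \frac{30}{10} = 3, \qquad
x_2 = \frac{b^2 + 1}{a} = \frac{2^2 + 1}{5} = 1,
\]
so \((5, 2)\) is replaced by \((2, 1)\), and indeed \((2^2 + 1^2 + 1)/(2 \cdot 1) = 3\).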
Geometric interpretation
Vieta jumping can be described in terms of lattice points on hyperbolas in the first quadrant. The same process of finding smaller roots is used instead to find lower lattice points on a hyperbola while remaining in the first quadrant. The procedure is as follows:
From the given condition we obtain the equation of a family of hyperbolas that are unchanged by switching x and y, so that they are symmetric about the line y = x.
Prove the desired statement for the intersections of the hyperbolas and the line y = x.
Assume there is some lattice point (x, y) on some hyperbola H, and without loss of generality x < y. Then by Vieta's formulas, there is a corresponding lattice point with the same x-coordinate on the other branch of the hyperbola, and by reflection through y = x a new point on the original branch of the hyperbola is obtained.
It is shown that this process produces lower points on the same branch and can be repeated until some condition (such as x = 0) is achieved. Then by substitution of this condition into the equation of the hyperbola, the desired conclusion will be proven.
Example
This method can be applied to problem #6 at IMO 1988: Let a and b be positive integers such that ab + 1 divides a^2 + b^2. Prove that (a^2 + b^2)/(ab + 1) is a perfect square.
Let k = (x^2 + y^2)/(xy + 1) and fix the value of k. If k = 1, then k is a perfect square as desired. If k = 2, then (x - y)^2 = 2 and there is no integral solution (x, y). When k > 2, the equation x^2 + y^2 - kxy - k = 0 defines a hyperbola H, and (x, y) represents an integral lattice point on H.
If (x, 0) is an integral lattice point on H with x ≥ 0, then (since x is integral) one can see that k = x^2 is a perfect square. This proposition's statement is then true for the point (x, 0).
Now let P = (x, y) be a lattice point on a branch of H with x > 0 and y > 0 (as the previous remark covers the case where one of the coordinates is zero). By symmetry, we can assume that x < y and that P is on the higher branch of H. By applying Vieta's formulas, (x, kx - y) is a lattice point on the lower branch of H. Let y' = kx - y. From the equation for H, one sees that 1 + x y' > 0. Since x > 0, it follows that y' ≥ 0. Hence the point (x, y') is in the first quadrant. By reflection, the point (y', x) is also a point in the first quadrant on H. Moreover, from Vieta's formulas, y y' = x^2 - k, and y' = (x^2 - k)/y. Combining this equation with x < y, one can show that y' < x. The newly constructed point (y', x) is then in the first quadrant, on the higher branch of H, and with smaller x-, y-coordinates than the point we started with.
The process in the previous step can be repeated whenever the point (y', x) has a positive x-coordinate. However, since the x-coordinates of these points form a decreasing sequence of non-negative integers, the process can only be repeated finitely many times before it produces a point (0, y) on the upper branch of H; by substitution, k = y^2 is a square as required.
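A concrete chain of jumps (an illustrative example not present in the original text) can be traced for k = 4, whose hyperbola x^2 + y^2 - 4xy - 4 = 0 passes through the lattice points (8, 30), (2, 8) and (0, 2):

\[
(x, y) = (8, 30): \; y' = 4 \cdot 8 - 30 = 2 \;\Rightarrow\; (2, 8), \qquad
(x, y) = (2, 8): \; y' = 4 \cdot 2 - 8 = 0 \;\Rightarrow\; (0, 2),
\]
and substituting x = 0 gives k = 2^2, a perfect square, as the argument predicts.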
See also
Vieta's formulas
Proof by contradiction
Infinite descent
Markov number
Apollonian gasket
Notes
External links
Vieta Root Jumping at Brilliant.org
Number theory
Diophantine equations | Vieta jumping | Mathematics | 1,518 |
20,367,280 | https://en.wikipedia.org/wiki/Operational%20instruments%20of%20the%20Royal%20Observer%20Corps | The Royal Observer Corps (ROC) was a civil defence organisation operating in the United Kingdom between October 1925 and 31 December 1995, when the Corps' civilian volunteers were stood down. (ROC headquarters staff at RAF Bentley Priory stood down on 31 March 1996). Composed mainly of civilian spare-time volunteers, ROC personnel wore a Royal Air Force (RAF) style uniform and latterly came under the administrative control of RAF Strike Command and the operational control of the Home Office. Civilian volunteers were trained and administered by a small cadre of professional full-time officers under the command of the Commandant Royal Observer Corps; a serving RAF Air Commodore.
This sub article lists and describes the instruments used by the ROC in their nuclear detection and reporting role during the Cold War period.
Initial detection of nuclear bursts on the UK
Atomic Weapons Detection Recognition and Estimation of Yield known as AWDREY was a desk mounted automatic instrument, located at controls, that detected nuclear explosions and indicated the estimated size in megatons. Operating by measuring the intense flashes emitted by a nuclear explosion, together with a unit known as DIADEM which measured Electromagnetic Pulse (EMP), the instruments were tested daily by wholetime ROC officers and regularly reacted to the EMP from lightning strikes during thunderstorms. AWDREY was designed and built by the Atomic Weapons Establishment at Aldermaston and tested for performance and accuracy on real nuclear explosions at the 1957 Kiritimati (or Christmas Island) nuclear bomb test (after being mounted on board a ship). Reports following a reading on AWDREY were prefixed with the codewords "Tocsin Bang".
The Bomb Power Indicator or BPI consisted of a peak overpressure gauge with a dial that would register when the pressure wave from a nuclear explosion passed over the post. When related to the distance of the explosion from the post this pressure would indicate the power of the explosion. Reports following a reading on the BPI were preceded by the codeword "Tocsin".
The Ground Zero Indicator, or GZI or shadowgraph, consisted of four horizontally mounted cardinal compass point pinhole cameras within a metal drum, each 'camera' contained a sheet of photosensitive paper on which were printed horizontal and vertical calibration lines. The flash from a nuclear explosion would produce a mark on one or two of the papers within the drum. The position of the mark enabled the bearing and height of the burst to be estimated. With triangulation between neighbouring posts these readings would give an accurate height and position. The altitude of the explosion was important because a ground or near ground burst would produce radioactive fallout, whereas an air burst would produce only short distance and short lived initial radiations (but no fallout).
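The triangulation described above can be illustrated with a small calculation. The Python sketch below is purely illustrative (the post coordinates and bearings are made up, and this is not an ROC procedure): it intersects the bearing lines reported by two posts to estimate the ground position of a burst.

import math

def intersect_bearings(p1, brg1_deg, p2, brg2_deg):
    """Intersect two bearing lines (degrees clockwise from north) from points p1 and p2 (x = east, y = north)."""
    # Unit direction vectors for each bearing.
    d1 = (math.sin(math.radians(brg1_deg)), math.cos(math.radians(brg1_deg)))
    d2 = (math.sin(math.radians(brg2_deg)), math.cos(math.radians(brg2_deg)))
    # Solve p1 + t*d1 = p2 + s*d2 for t using Cramer's rule.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-9:
        raise ValueError("bearings are parallel; no unique intersection")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

if __name__ == "__main__":
    post_a = (0.0, 0.0)    # hypothetical post locations (km)
    post_b = (20.0, 0.0)
    burst = intersect_bearings(post_a, 45.0, post_b, 315.0)
    print(f"Estimated ground zero: {burst[0]:.1f} km east, {burst[1]:.1f} km north")

With these example values the two bearings cross 10 km east and 10 km north of the first post; in practice the height of burst would be estimated separately from the vertical calibration lines on the photosensitive papers.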
Static measurement of ionising radiation
The Radiac Survey Meter No 2 or RSM was a meter introduced in 1955 which used an ionisation chamber to measure gamma radiation; it could also measure beta radiation by removing the base-plate and opening the beta shield. The meter suffered from a number of disadvantages: it required three different types of batteries, of which two were obsolete and had to be manufactured to special order, and the circuit included a single electrometer valve or tube. The meters were nevertheless favoured because they had been tested on fallout in Australia after the Operation Buffalo nuclear tests, and they remained in use until 1982 by commissioning a manufacturer to produce regular special production runs of the obsolete batteries. Within the ROC the RSM was only used at post sites for three years, being superseded in 1958 by the FSM, after which the RSM was retained only for post attack mobile monitoring missions.
The Fixed Survey Meter or FSM was introduced in 1958. For the first time it was possible to operate the unit from within the Monitoring Post or Group HQ using an external Geiger Muller Probe connected via coaxial cable and mounted to a telescopic rod and protected on the surface by a polycarbonate dome. The FSM used the same obsolete high voltage batteries as the RSM.
In 1985 this instrument was replaced by the PDRM82(F). The PDRM82(F) was the fixed desktop version of the PDRM82. It gave more accurate readings and used standard 'C' cell torch batteries that lasted many times longer, up to 400 hours of operation. The compact and robust instruments were housed in sturdy orange coloured polycarbonate cases and had clear liquid crystal displays. The PDRM82(F) could also be operated from within the Monitoring Post or Group HQ as before, using an external Geiger Muller Probe connected via coaxial cable. The telescopic rod, mounting bracket and polycarbonate dome used by the earlier FSM remained in service and continued to be used with the PDRM82(F).
Portable measurement of radiation during Mobile Monitoring missions
The Radiac Survey Meter No 2 or RSM was a meter introduced in 1955 which measured gamma and beta radiation. Having been superseded within the ROC by the Fixed Survey Meter, the RSM remained in use only for mobile monitoring missions in a post attack period. An image can be seen in the 'Static measurement of ionising radiation' section.
The Radiac Survey Meter, Lightweight, MkVI, produced by the AVO company (the MkIII and MkIV were also available), was issued to the ROC in the mid-to-late 1960s, but was not regarded favourably because it used almost obsolete Mallory batteries; it was an ionisation-type meter that measured gamma radiation.
The PDRM82 or Portable Dose Rate Meter was the standard portable version of the new meters, that were manufactured by Plessey and introduced during the 1980s, giving more accurate readings and using standard 'C' cell torch batteries that lasted many times longer, up to 400 hours of operation. The compact and robust instruments were housed in sturdy orange coloured polycarbonate cases and had clear liquid crystal displays. The Radiac sensor was self-contained within the casing.
Measurement of personal absorptions
The Dosimeter pocket meters were issued to individual observers for measuring their personal levels of radiation absorption during operations. Three different grades of dosimeter were used, depending on ambient radiation levels. The original hand wound and temperamental dosimeter charging units (Charging Unit, Individual, Dosimeter No.1 & No.2) were replaced during the 1980s by a battery operated automatic charging unit (EAL Type N.105A).
See also
Royal Observer Corps
Royal Observer Corps Monitoring Post
Bomb Power Indicator
Ground Zero Indicator
Fixed Survey Meter
Nuclear MASINT US equivalent
References
Operator's Handbook for Meter Survey Radiac No.2 Published A.E.R.E. 1955
Civil Defence Pocket Book, General Information published Home Office (various dates)
Royal Observer Corps
Cold War military equipment of the United Kingdom
Measuring instruments | Operational instruments of the Royal Observer Corps | Technology,Engineering | 1,359 |
863,650 | https://en.wikipedia.org/wiki/List%20of%20Olympic%20medalists%20in%20art%20competitions | There were 146 medalists in the art competitions that were part of the Olympic Games from 1912 until 1948. These art competitions were considered an integral part of the movement by International Olympic Committee (IOC) founder Pierre de Coubertin and necessary to recapture the complete essence of the Ancient Olympic Games. Their absence before the 1912 Summer Olympics, according to journalism professor Richard Stanton, stems from Coubertin "not wanting to fragment the focus of his new and fragile movement". Art competitions were originally planned for inclusion in the 1908 Summer Olympics but were delayed after that edition's change in venue from Rome to London following the 1906 eruption of Mount Vesuvius. By the 1924 Summer Olympics they had grown to be considered internationally relevant and potentially "a milestone in advancing public awareness of art as a whole".
During their first three appearances, the art competitions were grouped into five broad categories: architecture, literature, music, painting, and sculpture. The Dutch Organizing Committee for the 1928 Summer Olympics split these into subcategories in the hopes of increasing participation. Although it was a successful strategy, the 1932 Summer Olympics eliminated several of these subcategories, which led to fewer entries in the broader categories. For the 1936 Summer Olympics, the German government proposed the addition of a film contest to the program, which was rejected.
Following a final appearance at the 1948 Summer Olympics, art competitions were removed from the Olympic program. Planners of the 1952 Summer Olympics opposed their inclusion on logistical grounds, claiming that the lack of an international association for the event meant that the entire onus of facilitation was placed on the local organizing committee. Concerns were also raised about the professionalism of the event, since only amateurs were allowed to participate in the sporting tournaments, and the growing commercialization of the competitions, as artists had been permitted to sell their submissions during the course of the Games since 1928. In 1952 an art festival and exhibition was held concurrent with the Games, a tradition that has been maintained in all subsequent Summer Olympics.
In 1952, art competition medals were removed from the official national medal counts. The IOC does not track medalists in Olympic art competitions in its database and thus the prize winners are only officially recorded in the original Olympic reports. Judges were not required to distribute first, second, and third place awards for every category, and thus certain events lack medalists in these placements. Since participants were allowed multiple submissions, it was also possible for artists to win more than one in a single event, as Alex Diggelmann of Switzerland did in the graphic arts category of the 1948 edition. Diggelmann is tied with Denmark's Josef Petersen, who won second prize three times in literature, for the number of medals captured in the art competitions. Luxembourg's Jean Jacoby is the only individual to win two gold medals, doing so in painting in 1924 and 1928. Of the 146 medalists, 11 were women and only Finnish author Aale Tynni was awarded gold. Germany was the most successful nation, with eight gold, seven silver, and nine bronze medals, although one was won by Coubertin himself, a Frenchman. He submitted his poem Ode to Sport under the pseudonyms Georges Hohrod and Martin Eschbach, as if it were a joint-entry, and won first prize in the 1912 literature category. The original report credits this medal to Germany. Two individuals, Walter W. Winans and Alfréd Hajós, won medals in both athletic and art competitions.
Architecture
Mixed architecture
Mixed architecture, architectural designs
Town planning
Literature
Mixed literature
Dramatic works
Epic works
Lyric and speculative works
Music
Mixed music
Compositions for orchestra
Solo and chorus compositions
Instrumental and chamber
Vocal
Painting
Mixed painting
Drawings and water colors
Engravings and etchings
Graphic works
Paintings
Sculpturing
Mixed sculpturing
Medals
Medals and plaques
Reliefs and medallions
Reliefs
Statues
Statistics
Multiple medalists
Medals per year
References
General
Specific
Notes
Art competitions
Arts awards
Architecture awards
Medalists in art competitions | List of Olympic medalists in art competitions | Engineering | 791 |
5,505,240 | https://en.wikipedia.org/wiki/List%20of%20MeSH%20codes%20%28D12.776%29 | The following is a partial list of the "D" codes for Medical Subject Headings (MeSH), as defined by the United States National Library of Medicine (NLM).
This list continues the information at List of MeSH codes (D12.644). Codes following these are found at List of MeSH codes (D13). For other MeSH codes, see List of MeSH codes.
The source for this content is the set of 2006 MeSH Trees from the NLM.
– proteins
– albumins
– c-reactive protein
– conalbumin
– lactalbumin
– ovalbumin
– avidin
– parvalbumins
– ricin
– serum albumin
– methemalbumin
– prealbumin
– serum albumin, bovine
– serum albumin, radio-iodinated
– technetium tc 99m aggregated albumin
– algal proteins
– amphibian proteins
– xenopus proteins
– amyloid
– amyloid beta-protein
– amyloid beta-protein precursor
– serum amyloid a protein
– serum amyloid p-component
– antifreeze proteins
– antifreeze proteins, type i
– antifreeze proteins, type ii
– antifreeze proteins, type iii
– antifreeze proteins, type iv
– apoproteins
– apoenzymes
– apolipoproteins
– apolipoprotein A
– apolipoprotein A1
– apolipoprotein A2
– apolipoprotein B
– apolipoprotein C
– apolipoprotein E
– aprotinin
– archaeal proteins
– bacteriorhodopsins
– dna topoisomerases, type i, archaeal
– halorhodopsins
– periplasmic proteins
– armadillo domain proteins
– beta-catenin
– gamma catenin
– plakophilins
– avian proteins
– bacterial proteins
See List of MeSH codes (D12.776.097).
– blood proteins
See List of MeSH codes (D12.776.124).
– carrier proteins
See List of MeSH codes (D12.776.157).
– cell cycle proteins
– cdc25 phosphatase
– cellular apoptosis susceptibility protein
– cullin proteins
– cyclin-dependent kinase inhibitor proteins
– cyclin-dependent kinase inhibitor p15
– cyclin-dependent kinase inhibitor p16
– cyclin-dependent kinase inhibitor p18
– cyclin-dependent kinase inhibitor p19
– cyclin-dependent kinase inhibitor p21
– cyclin-dependent kinase inhibitor p27
– cyclin-dependent kinase inhibitor p57
– cyclin-dependent kinases
– cdc2-cdc28 kinases
– cdc2 protein kinase
– cdc28 protein kinase, s cerevisiae
– cyclin-dependent kinase 5
– cyclin-dependent kinase 9
– cyclin-dependent kinase 2
– cyclin-dependent kinase 4
– cyclin-dependent kinase 6
– maturation-promoting factor
– cdc2 protein kinase
– cyclins
– cyclin A
– cyclin B
– cyclin D1
– cyclin E
– tumor suppressor protein p14arf
– cerebrospinal fluid proteins
– colipases
– contractile proteins
– muscle proteins
– actinin
– actins
– actomyosin
– calsequestrin
– capz actin capping protein
– caveolin 3
– cofilin 2
– dystrophin
– dystrophin-associated proteins
– dystroglycans
– sarcoglycans
– myogenic regulatory factors
– myod protein
– myogenic regulatory factor 5
– myogenin
– myoglobin
– myosins
– myosin heavy chains
– myosin light chains
– myosin subfragments
– myosin type i
– myosin type ii
– cardiac myosins
– atrial myosins
– ventricular myosins
– nonmuscle myosin type iia
– nonmuscle myosin type iib
– skeletal muscle myosins
– smooth muscle myosins
– parvalbumins
– profilins
– ryanodine receptor calcium release channel
– tropomodulin
– tropomyosin
– troponin
– troponin c
– troponin i
– troponin t
– cystatins
– cytoskeletal proteins
– adenomatous polyposis coli protein
– catenins
– alpha-catenin
– beta catenin
– gamma catenin
– dystrophin
– dystrophin-associated proteins
– dystroglycans
– intermediate filament proteins
– desmin
– glial fibrillary acidic protein
– keratin
– neurofilament proteins
– vimentin
– microfilament proteins
– actin capping proteins
– capz actin capping protein
– tropomodulin
– actin depolymerizing factors
– cofilin 1
– cofilin 2
– destrin
– actin-related protein 2-3 complex
– actin-related protein 2
– actin-related protein 3
– actinin
– actins
– cortactin
– gelsolin
– myosins
– myosin heavy chains
– myosin light chains
– myosin subfragments
– myosin type i
– myosin type ii
– cardiac myosins
– atrial myosins
– ventricular myosins
– nonmuscle myosin type iia
– nonmuscle myosin type iib
– skeletal muscle myosins
– smooth muscle myosins
– myosin type iii
– myosin type iv
– myosin type v
– profilins
– tropomyosin
– troponin
– troponin c
– troponin i
– troponin t
– wiskott-aldrich syndrome protein family
– wiskott-aldrich syndrome protein
– wiskott-aldrich syndrome protein, neuronal
– microtubule proteins
– dynein atpase
– microtubule-associated proteins
– dynamins
– dynamin i
– dynamin ii
– dynamin iii
– kinesin
– stathmin
– tau proteins
– tubulin
– plakins
– desmoplakins
– plectin
– plakophilins
– spectrin
– talin
– utrophin
– vinculin
– dental enamel proteins
– dietary proteins
– egg proteins, dietary
– conalbumin
– ovalbumin
– avidin
– ovomucin
– phosvitin
– milk proteins
– caseins
– lactalbumin
– lactoglobulins
– lactoferrin
– vegetable proteins
– dna-binding proteins
See List of MeSH codes (D12.776.260).
– dynein atpase
– egg proteins
– conalbumin
– egg proteins, dietary
– ovalbumin
– avidin
– ovomucin
– phosvitin
– vitellins
– vitellogenins
– epididymal secretory proteins
– eye proteins
– arrestins
– arrestin
– crystallins
– alpha-crystallins
– alpha-crystallin a chain
– alpha-crystallin b chain
– beta-crystallins
– beta-crystallin a chain
– beta-crystallin b chain
– delta-crystallins
– epsilon-crystallins
– gamma-crystallins
– omega-crystallins
– tau-crystallins
– zeta-crystallins
– guanylate cyclase-activating proteins
– opsin
– rhodopsin
– recoverin
– rhodopsin kinase
– fanconi anemia complementation group proteins
– brca2 protein
– fanconi anemia complementation group a protein
– fanconi anemia complementation group c protein
– fanconi anemia complementation group d2 protein
– fanconi anemia complementation group e protein
– fanconi anemia complementation group f protein
– fanconi anemia complementation group g protein
– fanconi anemia complementation group l protein
– fetal proteins
– alpha-fetoproteins
– fish proteins
– zebrafish proteins
– flavoproteins
– acetolactate synthase
– acyl-coa dehydrogenase
– acyl-coa dehydrogenase, long-chain
– acyl-CoA oxidase
– apoptosis inducing factor
– butyryl-coa dehydrogenase
– cytochrome-b(5) reductase
– dihydrolipoamide dehydrogenase
– electron-transferring flavoproteins
– electron transport complex i
– electron transport complex ii
– succinate dehydrogenase
– flavodoxin
– glutamate synthase (NADH)
– methylenetetrahydrofolate reductase (nadph2)
– nadh dehydrogenase
– nadph oxidase
– nitrate reductase (nadh)
– nitrate reductase (nad(p)h)
– nitrate reductase (nadph)
– retinal dehydrogenase
– sarcosine oxidase
– thioredoxin reductase (nadph)
– fungal proteins
– saccharomyces cerevisiae proteins
– cdc28 protein kinase, s cerevisiae
– cdc42 gtp-binding protein, saccharomyces cerevisiae
– mcm1 protein
– silent information regulator proteins, saccharomyces cerevisiae
– schizosaccharomyces pombe proteins
– globulins
– lactoglobulins
– lactoferrin
– serum globulins
– alpha-globulins
– alpha 1-antichymotrypsin
– alpha 1-antitrypsin
– alpha-macroglobulins
– antiplasmin
– antithrombin iii
– ceruloplasmin
– haptoglobins
– heparin cofactor ii
– orosomucoid
– progesterone-binding globulin
– retinol-binding proteins
– transcortin
– beta-globulins
– beta-2 microglobulin
– beta-thromboglobulin
– complement factor h
– hemopexin
– plasminogen
– angiostatins
– properdin
– sex hormone-binding globulin
– transferrin
– fibronectins
– immunoglobulins
– antibodies
– antibodies, anti-idiotypic
– antibodies, archaeal
– antibodies, bacterial
– antistreptolysin
– antibodies, bispecific
– antibodies, blocking
– antibodies, catalytic
– antibodies, fungal
– antibodies, helminth
– antibodies, heterophile
– antibodies, monoclonal
– muromonab-cd3
– antibodies, neoplasm
– antibodies, phospho-specific
– antibodies, protozoan
– antibodies, viral
– deltaretrovirus antibodies
– hiv antibodies
– htlv-i antibodies
– htlv-ii antibodies
– hepatitis antibodies
– hepatitis a antibodies
– hepatitis b antibodies
– hepatitis c antibodies
– antigen-antibody complex
– antitoxins
– antivenins
– botulinum antitoxin
– diphtheria antitoxin
– tetanus antitoxin
– autoantibodies
– antibodies, antineutrophil cytoplasmic
– antibodies, antinuclear
– antibodies, antiphospholipid
– antibodies, anticardiolipin
– lupus coagulation inhibitor
– complement c3 nephritic factor
– immunoconglutinins
– immunoglobulins, thyroid-stimulating
– long-acting thyroid stimulator
– rheumatoid factor
– binding sites, antibody
– complementarity determining regions
– hemolysins
– immune sera
– antilymphocyte serum
– immunoconjugates
– immunotoxins
– immunoglobulin allotypes
– immunoglobulin gm allotypes
– immunoglobulin km allotypes
– immunoglobulin isotypes
– immunoglobulin a
– immunoglobulin a, secretory
– secretory component
– immunoglobulin alpha-chains
– immunoglobulin d
– immunoglobulin delta-chains
– immunoglobulin e
– immunoglobulin epsilon-chains
– immunoglobulin g
– immunoglobulin gamma-chains
– immunoglobulin gm allotypes
– long-acting thyroid stimulator
– muromonab-cd3
– rho(d) immune globulin
– immunoglobulin m
– immunoglobulin mu-chains
– immunoglobulins, intravenous
– immunoglobulins, thyroid-stimulating
– insulin antibodies
– isoantibodies
– oligoclonal bands
– opsonin proteins
– plantibodies
– precipitins
– reagins
– gamma-globulins
– tuftsin
– immunoglobulin constant regions
– immunoglobulin fab fragments
– immunoglobulin fc fragments
– cd4 immunoadhesins
– immunoglobulin fragments
– immunoglobulin fab fragments
– immunoglobulin variable region
– complementarity determining regions
– immunoglobulin joining region
– tuftsin
– immunoglobulin fc fragments
– cd4 immunoadhesins
– immunoglobulin constant regions
– immunoglobulin idiotypes
– immunoglobulin subunits
– immunoglobulin heavy chains
– immunoglobulin alpha-chains
– immunoglobulin delta-chains
– immunoglobulin epsilon-chains
– immunoglobulin gamma-chains
– immunoglobulin gm allotypes
– immunoglobulin mu-chains
– immunoglobulin j-chains
– immunoglobulin light chains
– immunoglobulin kappa-chains
– immunoglobulin km allotypes
– immunoglobulin lambda-chains
– secretory component
– immunoglobulin variable region
– complementarity determining regions
– immunoglobulin fab fragments
– immunoglobulin joining region
– paraproteins
– bence jones protein
– cryoglobulins
– myeloma proteins
– pyroglobulins
– receptors, antigen, b-cell
– antigens, cd79
– macroglobulins
– alpha-macroglobulins
– transcobalamins
– thyroglobulin
– glycoproteins
See List of MeSH codes (D12.776.395).
– gtp-binding protein regulators
– gtpase-activating proteins
– chimerin proteins
– chimerin 1
– eukaryotic initiation factor-5
– ras gtpase-activating proteins
– neurofibromin 1
– p120 gtpase activating protein
– rgs proteins
– guanine nucleotide dissociation inhibitors
– guanine nucleotide exchange factors
– eukaryotic initiation factor-2b
– guanine nucleotide-releasing factor 2
– proto-oncogene proteins c-vav
– ral guanine nucleotide exchange factor
– ras guanine nucleotide exchange factors
– ras-GRF1
– son of sevenless proteins
– sos1 protein
– son of sevenless protein, drosophila
– heat-shock proteins
– chaperonins
– chaperonin 10
– groes protein
– chaperonin 60
– groel protein
– heat-shock proteins, small
– hsp20 heat-shock proteins
– hsp30 heat-shock proteins
– hsp40 heat-shock proteins
– hsp47 heat-shock proteins
– hsp70 heat-shock proteins
– hsc70 heat-shock proteins
– hsp72 heat-shock proteins
– hsp110 heat-shock proteins
– hsp90 heat-shock proteins
– helminth proteins
– caenorhabditis elegans proteins
– hemeproteins
– cytochromes
– cytochrome a group
– cytochromes a
– cytochromes a1
– cytochromes a3
– cytochrome b group
– cytochromes b6
– cytochromes b
– cytochromes b5
– cytochrome c group
– cytochromes c
– cytochromes c'
– cytochromes c1
– cytochromes c2
– cytochromes c6
– cytochrome d group
– cytochrome p-450 enzyme system
– aryl hydrocarbon hydroxylases
– aniline hydroxylase
– benzopyrene hydroxylase
– cytochrome p-450 cyp1a1
– cytochrome p-450 cyp1a2
– cytochrome p-450 cyp2b1
– cytochrome p-450 cyp2d6
– cytochrome p-450 cyp2e1
– cytochrome p-450 cyp3a
– camphor 5-monooxygenase
– steroid hydroxylases
– aldosterone synthase
– aromatase
– cholesterol 7 alpha-hydroxylase
– cholesterol side-chain cleavage enzyme
– 25-hydroxyvitamin d3 1-alpha-hydroxylase
– steroid 11-beta-hydroxylase
– steroid 12-alpha-hydroxylase
– steroid 16-alpha-hydroxylase
– steroid 17-alpha-hydroxylase
– steroid 21-hydroxylase
– cytochromes f
– hemocyanin
– hemoglobins
– carboxyhemoglobin
– erythrocruorins
– fetal hemoglobin
– hemoglobin A
– hemoglobin a, glycosylated
– hemoglobin A2
– hemoglobins, abnormal
– hemoglobin C
– hemoglobin E
– hemoglobin H
– hemoglobin J
– hemoglobin M
– hemoglobin, sickle
– methemoglobin
– oxyhemoglobins
– sulfhemoglobin
– leghemoglobin
– methemalbumin
– metmyoglobin
– myoglobin
– immediate-early proteins
– adenovirus E1 proteins
– adenovirus E1A proteins
– adenovirus E1B proteins
– butyrate response factor 1
– early growth response transcription factors
– early growth response protein 1
– early growth response protein 2
– early growth response protein 3
– tristetraprolin
– insect proteins
– drosophila proteins
– glue proteins, drosophila
– omega-agatoxin iva
– vitellogenins
– intercellular signaling peptides and proteins
– angiogenic proteins
– angiopoietins
– angiopoietin-1
– angiopoietin-2
– angiostatic proteins
– angiostatins
– endostatins
– vascular endothelial growth factors
– vascular endothelial growth factor a
– vascular endothelial growth factor b
– vascular endothelial growth factor c
– vascular endothelial growth factor d
– vascular endothelial growth factor, endocrine-gland-derived
– bone morphogenetic proteins
– cytokines
– autocrine motility factor
– chemokines
– beta-thromboglobulin
– chemokines, c
– chemokines, cc
– chemokines, cxc
– chemokines, cx3c
– interleukin-8
– macrophage inflammatory proteins
– macrophage inflammatory protein-1
– monocyte chemoattractant proteins
– monocyte chemoattractant protein-1
– platelet factor 4
– rantes
– growth substances
– hematopoietic cell growth factors
– colony-stimulating factors
– colony-stimulating factors, recombinant
– granulocyte colony stimulating factor, recombinant
– filgrastim
– granulocyte macrophage colony-stimulating factors, recombinant
– erythropoietin
– erythropoietin, recombinant
– epoetin alfa
– granulocyte colony-stimulating factor
– granulocyte colony stimulating factor, recombinant
– filgrastim
– granulocyte-macrophage colony-stimulating factor
– granulocyte macrophage colony-stimulating factors, recombinant
– interleukin-3
– macrophage colony-stimulating factor
– thrombopoietin
– stem cell factor
– interleukins
– interleukin-1
– interleukin-2
– interleukin-3
– interleukin-4
– interleukin-5
– interleukin-6
– interleukin-7
– interleukin-8
– interleukin-9
– interleukin-10
– interleukin-11
– interleukin-12
– interleukin-13
– interleukin-14
– interleukin-15
– interleukin-16
– interleukin-17
– interleukin-18
– transforming growth factor beta
– hepatocyte growth factor
– interferons
– interferon type i
– interferon type i, recombinant
– interferon alfa-2a
– interferon alfa-2b
– interferon alfa-2c
– interferon-alpha
– interferon alfa-2a
– interferon alfa-2b
– interferon alfa-2c
– interferon-beta
– interferon type ii
– interferon-gamma, recombinant
– lymphokines
– interferon type ii
– interleukin-2
– leukocyte migration-inhibitory factors
– lymphotoxin
– macrophage-activating factors
– interferon type ii
– macrophage migration-inhibitory factors
– neuroleukin
– suppressor factors, immunologic
– transfer factor
– monokines
– interleukin-1
– tumor necrosis factor-alpha
– tumor necrosis factors
– lymphotoxin
– tumor necrosis factor-alpha
– ephrins
– ephrin-A1
– ephrin-A2
– ephrin-A3
– ephrin-A4
– ephrin-A5
– ephrin-b1
– ephrin-b2
– ephrin-b3
– interferons
– interferon type i
– interferon type i, recombinant
– interferon alfa-2a
– interferon alfa-2b
– interferon alfa-2c
– interferon-alpha
– interferon alfa-2a
– interferon alfa-2b
– interferon alfa-2c
– interferon-beta
– interferon type ii
– interferon-gamma, recombinant
– nerve growth factors
– brain-derived neurotrophic factor
– ciliary neurotrophic factor
– glia maturation factor
– glial cell line-derived neurotrophic factors
– glial cell line-derived neurotrophic factor
– neurturin
– nerve growth factor
– neuregulins
– neuregulin-1
– neurotrophin 3
– pituitary adenylate cyclase-activating polypeptide
– parathyroid hormone-related protein
– semaphorins
– semaphorin-3a
– somatomedins
– insulin-like growth factor i
– insulin-like growth factor ii
– tumor necrosis factors
– lymphotoxin
– tumor necrosis factor-alpha
– wnt proteins
– wnt1 protein
– wnt2 protein
– intracellular signaling peptides and proteins
See List of MeSH codes (D12.776.476).
– iodoproteins
– thyroglobulin
– iron-regulatory proteins
– iron regulatory protein 1
– iron regulatory protein 2
– lectins
– antigens, cd22
– lectins, c-type
– antigens, cd94
– asialoglycoprotein receptor
– collectins
– mannose-binding lectin
– pulmonary surfactant-associated protein a
– pulmonary surfactant-associated protein d
– calnexin
– calreticulin
– galectins
– galectin-1
– galectin-2
– galectin-3
– galectin-4
– mannose-binding lectins
– mannose-binding lectin
– plant lectins
– abrin
– concanavalin a
– peanut agglutinin
– phytohemagglutinins
– pokeweed mitogens
– ricin
– wheat germ agglutinins
– wheat germ agglutinin-horseradish peroxidase conjugate
– receptors, n-acetylglucosamine
– selectins
– e-selectin
– l-selectin
– p-selectin
– lipoproteins
– chromogranins
– chylomicrons
– lipoprotein(a)
– lipoprotein-X
– lipoproteins, hdl
– lipoproteins, hdl cholesterol
– lipoproteins, ldl
– lipoproteins, ldl cholesterol
– lipoproteins, vldl
– lipoproteins, vldl cholesterol
– platelet factor 3
– vitellogenins
– ldl-receptor related proteins
– ldl-receptor related protein 1
– ldl-receptor related protein 2
– lithostathine
– luminescent protein
– aequorin
– green fluorescent protein
– luciferase
– luciferases, bacterial
– Firefly luciferase
– Renilla luciferase
– membrane proteins
See List of MeSH codes (D12.776.543).
– metalloproteins
– azurin
– ceruloplasmin
– hemocyanin
– hemosiderin
– iron-binding proteins
– ferritin
– apoferritin
– lactoferrin
– nonheme iron proteins
– hemerythrin
– inositol oxygenase
– iron-sulfur proteins
– adrenodoxin
– ferredoxin-nitrite reductase
– ferredoxins
– molybdoferredoxin
– rubredoxins
– iron regulatory protein 1
– iron regulatory protein 2
– electron transport complex i
– nadh dehydrogenase
– electron transport complex ii
– succinate dehydrogenase
– electron transport complex iii
– nitrate reductase (nad(p)h)
– nitrate reductase (nadph)
– lipoxygenase
– arachidonate lipoxygenases
– arachidonate 5-lipoxygenase
– arachidonate 12-lipoxygenase
– arachidonate 15-lipoxygenase
– retinal dehydrogenase
– tyrosine 3-monooxygenase
– transferrin
– metallothionein
– plastocyanin
– mitochondrial proteins
– mitochondrial membrane transport proteins
– mitochondrial adp, atp translocases
– adenine nucleotide translocator 1
– adenine nucleotide translocator 2
– adenine nucleotide translocator 3
– molecular chaperones
– alpha-crystallins
– alpha-crystallin a chain
– alpha-crystallin b chain
– chaperonins
– chaperonin 10
– groes protein
– chaperonin 60
– groel protein
– clusterin
– heat-shock proteins, small
– hsp20 heat-shock proteins
– hsp30 heat-shock proteins
– hsp47 heat-shock proteins
– hsp70 heat-shock proteins
– hsc70 heat-shock proteins
– hsp110 heat-shock proteins
– hsp72 heat-shock proteins
– hsp90 heat-shock proteins
– neuroendocrine secretory protein 7b2
– mutant proteins
– mutant chimeric proteins
– oncogene proteins, fusion
– fusion proteins, bcr-abl
– fusion proteins, gag-onc
– oncogene protein p65(gag-jun)
– oncogene protein tpr-met
– neoplasm proteins
– autocrine motility factor
– fusion proteins, bcr-abl
– myeloma proteins
– oncogene proteins
– oncogene proteins, fusion
– fusion proteins, bcr-abl
– fusion proteins, gag-onc
– oncogene protein p65(gag-jun)
– oncogene protein tpr-met
– oncogene proteins, viral
– adenovirus early proteins
– adenovirus E1 proteins
– adenovirus E1A proteins
– adenovirus E1B proteins
– adenovirus e2 proteins
– adenovirus e3 proteins
– adenovirus e4 proteins
– antigens, polyomavirus transforming
– papillomavirus e7 proteins
– retroviridae proteins, oncogenic
– fusion proteins, gag-onc
– oncogene protein p65(gag-jun)
– gene products, rex
– gene products, tax
– oncogene protein gp140(v-fms)
– oncogene protein p21(ras)
– oncogene protein p55(v-myc)
– oncogene protein pp60(v-src)
– oncogene protein v-akt
– oncogene protein v-cbl
– oncogene protein v-crk
– oncogene protein v-maf
– oncogene proteins v-abl
– oncogene proteins v-erba
– oncogene proteins v-erbb
– oncogene proteins v-fos
– oncogene proteins v-mos
– oncogene proteins v-myb
– oncogene proteins v-raf
– oncogene proteins v-rel
– oncogene proteins v-sis
– proto-oncogene proteins
– cyclin d1
– fibroblast growth factor 4
– fibroblast growth factor 6
– fms-like tyrosine kinase 3
– receptor, fibroblast growth factor, type 3
– muts homolog 2 protein
– myeloid-lymphoid leukemia protein
– proto-oncogene proteins c-abl
– proto-oncogene proteins c-akt
– proto-oncogene proteins c-bcl-2
– proto-oncogene proteins c-bcl-6
– proto-oncogene proteins c-bcr
– proto-oncogene proteins c-cbl
– proto-oncogene proteins c-crk
– proto-oncogene proteins c-ets
– proto-oncogene protein c-ets-1
– proto-oncogene protein c-ets-2
– proto-oncogene protein c-fli-1
– ternary complex factors
– ets-domain protein elk-1
– ets-domain protein elk-4
– proto-oncogene proteins c-fes
– proto-oncogene proteins c-fos
– proto-oncogene proteins c-fyn
– proto-oncogene proteins c-hck
– proto-oncogene proteins c-jun
– proto-oncogene proteins c-kit
– proto-oncogene proteins c-maf
– proto-oncogene proteins c-mdm2
– proto-oncogene proteins c-met
– proto-oncogene proteins c-mos
– proto-oncogene proteins c-myb
– proto-oncogene proteins c-myc
– proto-oncogene proteins c-pim-1
– proto-oncogene proteins c-rel
– proto-oncogene proteins c-ret
– proto-oncogene proteins c-sis
– proto-oncogene proteins c-vav
– proto-oncogene proteins c-yes
– proto-oncogene proteins p21(ras)
– proto-oncogene proteins pp60(c-src)
– raf kinases
– proto-oncogene proteins b-raf
– proto-oncogene proteins c-raf
– RNA-binding protein EWS
– lymphocyte specific protein tyrosine kinase p56(lck)
– receptor, erbb-2
– receptor, erbb-3
– receptor, macrophage colony-stimulating factor
– receptors, thyroid hormone
– thyroid hormone receptors alpha
– thyroid hormone receptors beta
– RNA-binding protein FUS
– stathmin
– wnt1 protein
– wnt2 protein
– tumor suppressor proteins
– adenomatous polyposis coli protein
– brca1 protein
– brca2 protein
– cyclin-dependent kinase inhibitor proteins
– cyclin-dependent kinase inhibitor p15
– cyclin-dependent kinase inhibitor p16
– cyclin-dependent kinase inhibitor p18
– cyclin-dependent kinase inhibitor p19
– cyclin-dependent kinase inhibitor p21
– cyclin-dependent kinase inhibitor p27
– cyclin-dependent kinase inhibitor p57
– kangai-1 protein
– neurofibromin 1
– neurofibromin 2
– pten phosphohydrolase
– retinoblastoma-like protein p107
– retinoblastoma-like protein p130
– retinoblastoma protein
– smad4 protein
– tumor suppressor protein p14arf
– tumor suppressor protein p53
– von hippel-lindau tumor suppressor protein
– wt1 proteins
– nerve tissue proteins
See List of MeSH codes (D12.776.641).
– nuclear proteins
See List of MeSH codes (D12.776.660).
– nucleoproteins
– chromatin
– euchromatin
– heterochromatin
– nucleosomes
– chromosomal proteins, non-histone
– centromere protein b
– high mobility group proteins
– hmgn proteins
– hmgn1 protein
– hmgn2 protein
– hmga proteins
– hmga1a protein
– hmga1b protein
– hmga1c protein
– hmga2 protein
– hmgb proteins
– hmgb1 protein
– hmgb2 protein
– hmgb3 protein
– sex-determining region y protein
– tcf transcription factors
– lymphoid enhancer-binding factor 1
– t cell transcription factor 1
– methyl-cpg-binding protein 2
– deoxyribonucleoproteins
– histones
– protamines
– clupeine
– salmine
– RNA-binding proteins
– butyrate response factor 1
– fragile x mental retardation protein
– host factor 1 protein
– hu paraneoplastic encephalomyelitis antigens
– iron regulatory protein 1
– iron regulatory protein 2
– mrna cleavage and polyadenylation factors
– cleavage and polyadenylation specificity factor
– cleavage stimulation factor
– poly(a)-binding proteins
– poly(a)-binding protein i
– poly(a)-binding protein ii
– polypyrimidine tract-binding protein
– ribonucleoproteins
– heterogeneous-nuclear ribonucleoproteins
– RNA-binding protein FUS
– heterogeneous-nuclear ribonucleoprotein group a-b
– heterogeneous-nuclear ribonucleoprotein group c
– heterogeneous-nuclear ribonucleoprotein d
– heterogeneous-nuclear ribonucleoprotein group f-h
– heterogeneous-nuclear ribonucleoprotein k
– heterogeneous-nuclear ribonucleoprotein l
– heterogeneous-nuclear ribonucleoprotein group m
– heterogeneous-nuclear ribonucleoprotein u
– RNA-binding protein EWS
– ribonuclease p
– ribonucleoproteins, small cytoplasmic
– signal recognition particle
– ribonucleoproteins, small nuclear
– ribonucleoproteins, small nucleolar
– ribonucleoprotein, u1 small nuclear
– ribonucleoprotein, u2 small nuclear
– ribonucleoprotein, u4-u6 small nuclear
– ribonucleoprotein, u5 small nuclear
– ribonucleoprotein, u7 small nuclear
– RNA-induced silencing complex
– vault ribonucleoprotein particles
– rna cap-binding proteins
– eukaryotic initiation factor-4f
– nuclear cap-binding protein complex
– oxidative phosphorylation coupling factors
– peptones
– phosphoproteins
– bcl-associated death protein
– brca1 protein
– caseins
– caveolin 1
– caveolin 2
– cdc2 protein kinase
– cortactin
– crk-associated substrate protein
– dopamine and camp-regulated phosphoprotein 32
– fanconi anemia complementation group a protein
– fanconi anemia complementation group d2 protein
– fanconi anemia complementation group g protein
– focal adhesion kinase 1
– interferon regulatory factor-3
– interferon regulatory factor-7
– paxillin
– phosvitin
– plectin
– smad proteins, receptor-regulated
– smad1 protein
– smad2 protein
– smad3 protein
– smad5 protein
– smad8 protein
– retinoblastoma-like protein p107
– retinoblastoma-like protein p130
– retinoblastoma protein
– stathmin
– synapsins
– tumor suppressor protein p53
– vitellogenins
– photoreceptors, microbial
– bacteriochlorophylls
– bacteriochlorophyll a
– rhodopsins, microbial
– bacteriorhodopsins
– halorhodopsins
– sensory rhodopsins
– photosynthetic reaction center complex proteins
– light-harvesting protein complexes
– cytochrome b6f complex
– cytochromes b6
– cytochromes f
– plastoquinol-plastocyanin reductase
– photosystem i protein complex
– photosystem ii protein complex
– plant proteins
– arabidopsis proteins
– agamous protein, arabidopsis
– deficiens protein
– ferredoxins
– g-box binding factors
– gluten
– gliadin
– leghemoglobin
– periplasmic proteins
– phycocyanin
– phycoerythrin
– phytochrome
– phytochrome a
– phytochrome b
– plant lectins
– abrin
– concanavalin a
– peanut agglutinin
– phytohemagglutinins
– pokeweed mitogens
– ricin
– wheat germ agglutinins
– wheat germ agglutinin-horseradish peroxidase conjugate
– plastocyanin
– soybean proteins
– trypsin inhibitor, bowman-birk soybean
– trypsin inhibitor, kunitz soybean
– trichosanthin
– vegetable proteins
– zein
– polyproteins
– gene products, env (gene)
– gene products, gag (gene)
– fusion proteins, gag-pol
– gene products, pol (gene)
– fusion proteins, gag-pol
– pregnancy proteins
– chorionic gonadotropin
– chorionic gonadotropin, beta subunit, human
– gonadotropins, equine
– placental lactogen
– pregnancy-associated alpha 2-macroglobulins
– pregnancy-associated plasma protein-a
– pregnancy-specific beta 1-glycoproteins
– prions
– prpc proteins
– prpsc proteins
– prp 27-30 protein
– protein hydrolysates
– protein isoforms
– isoenzymes
– protein precursors
– amyloid beta-protein precursor
– angiotensinogen
– fibrinogen
– fibrin fibrinogen degradation products
– fibrinopeptide a
– fibrinopeptide b
– glucagon precursors
– kininogens
– kininogen, high-molecular-weight
– kininogen, low-molecular-weight
– procollagen
– proinsulin
– c-peptide
– pro-opiomelanocortin
– tropoelastin
– protein subunits
– proteolipids
– myelin proteolipid protein
– pulmonary surfactant-associated protein c
– proteome
– protozoan proteins
– merozoite surface protein 1
– pulmonary surfactant-associated proteins
– pulmonary surfactant-associated protein a
– pulmonary surfactant-associated protein b
– pulmonary surfactant-associated protein c
– pulmonary surfactant-associated protein d
– receptors, cytoplasmic and nuclear
– hepatocyte nuclear factor 4
– peroxisome proliferator-activated receptors
– PPAR-alpha
– PPAR-beta
– PPAR-delta
– PPAR-gamma
– receptors, aryl hydrocarbon
– receptors, calcitriol
– receptors, melatonin
– receptors, retinoic acid
– Retinoid X receptors
– Retinoid X receptor alpha
– Retinoid X receptor beta
– Retinoid X receptor gamma
– receptors, steroid
– coup transcription factors
– coup transcription factor i
– coup transcription factor ii
– receptors, androgen
– receptors, estrogen
– estrogen receptor alpha
– estrogen receptor beta
– receptors, estradiol
– receptors, glucocorticoid
– receptors, mineralocorticoid
– receptors, aldosterone
– receptors, progesterone
– receptors, thyroid hormone
– thyroid hormone receptors alpha
– thyroid hormone receptors beta
– receptors, drug
– immunophilins
– cyclophilins
– tacrolimus binding proteins
– tacrolimus binding protein 1a
– receptors, phencyclidine
– recombinant proteins
– colony-stimulating factors, recombinant
– granulocyte colony stimulating factor, recombinant
– filgrastim
– granulocyte macrophage colony-stimulating factors, recombinant
– erythropoietin, recombinant
– epoetin alfa
– interferon type i, recombinant
– interferon alfa-2a
– interferon alfa-2b
– interferon alfa-2c
– interferon-gamma, recombinant
– recombinant fusion proteins
– cd4 immunoadhesins
– vaccines, synthetic
– vaccines, dna
– vaccines, edible
– vaccines, virosome
– reptilian proteins
– ribosomal proteins
– peptide elongation factors
– gtp phosphohydrolase-linked elongation factors
– peptide elongation factor g
– peptide elongation factor tu
– peptide elongation factor 1
– peptide elongation factor 2
– peptide initiation factors
– eukaryotic initiation factors
– eukaryotic initiation factor-1
– eukaryotic initiation factor-2
– eukaryotic initiation factor-2b
– eukaryotic initiation factor 3
– eukaryotic initiation factor-4f
– eukaryotic initiation factor-4a
– eukaryotic initiation factor-4e
 – eukaryotic initiation factor-4g
– eukaryotic initiation factor-5
– prokaryotic initiation factors
– prokaryotic initiation factor-1
– prokaryotic initiation factor-2
– prokaryotic initiation factor-3
– peptide termination factors
– ribosomal protein s6
– salivary proteins
 – LAPP (leech anti-platelet protein) (no MeSH number assigned)
– glue proteins, drosophila
– scleroproteins
– extracellular matrix proteins
– activated-leukocyte cell adhesion molecule
– collagen
– fibrillar collagens
– Type I collagen
– Type II collagen
– Type III collagen
– Type V collagen
– Type XI collagen
– non-fibrillar collagens
– Type IV collagen
– Type VI collagen
– Type VII collagen
– Type VIII collagen
– Type X collagen
– Type XIII collagen
– Type XVIII collagen
– endostatins
– fibril-associated collagens
– Type IX collagen
– Type XII collagen
– procollagen
– tropocollagen
– elastin
– tropoelastin
– fibronectins
– laminin
– tenascin
– vitronectin
– gelatin
– keratin
– reticulin
– selenium-binding proteins
– selenoproteins
– selenoprotein p
– selenoprotein r
– selenoprotein w
– seminal plasma proteins
– prostatic secretory proteins
– prostate-specific antigen
– seminal vesicle secretory proteins
– serpins
– alpha 1-antichymotrypsin
– alpha 1-antitrypsin
– angiotensinogen
– antiplasmin
– antithrombins
– antithrombin iii
– heparin cofactor ii
– hirudins
– complement c1 inactivator proteins
– hsp47 heat-shock proteins
– ovalbumin
– plasminogen inactivators
– plasminogen activator inhibitor 1
– plasminogen activator inhibitor 2
– protein c inhibitor
– thyroxine-binding proteins
– silk
– fibroins
– sericins
– silver proteins
– thioredoxin
– thymosin
– tissue inhibitor of metalloproteinases
– tissue inhibitor of metalloproteinase-1
– tissue inhibitor of metalloproteinase-2
– tissue inhibitor of metalloproteinase-3
– transcription factors
See List of MeSH codes (D12.776.930).
– ubiquitins
– small ubiquitin-related modifier proteins
– sumo-1 protein
– ubiquitin
– polyubiquitin
– ubiquitin C
– viral proteins
– oncogene proteins, viral
– adenovirus early proteins
– adenovirus e1 proteins
– adenovirus e1a proteins
– adenovirus e1b proteins
– adenovirus e2 proteins
– adenovirus e3 proteins
– adenovirus e4 proteins
– antigens, polyomavirus transforming
– retroviridae proteins, oncogenic
– fusion proteins, gag-onc
– oncogene protein p65(gag-jun)
– gene products, rex
– gene products, tax (gene)
– oncogene protein gp140(v-fms)
– oncogene protein p21(ras)
– oncogene protein p55(v-myc)
– oncogene protein pp60(v-src)
– oncogene protein v-maf
– oncogene proteins v-abl
– oncogene proteins v-erba
– oncogene proteins v-erbb
– oncogene proteins v-fos
– oncogene proteins v-mos
– oncogene proteins v-myb
– oncogene proteins v-raf
– oncogene proteins v-rel
– oncogene proteins v-sis
– retroviridae proteins
– gene products, env (gene)
– hiv envelope protein gp41
– hiv envelope protein gp120
– hiv envelope protein gp160
– gene products, gag (gene)
– fusion proteins, gag-onc
– oncogene protein p65(gag-jun)
– fusion proteins, gag-pol
– hiv core protein p24
– gene products, pol (gene)
– fusion proteins, gag-pol
– hiv integrase
– HIV protease
– RNA-directed DNA polymerase
– hiv-1 reverse transcriptase
– retroviridae proteins, oncogenic
– fusion proteins, gag-onc
– oncogene protein p65(gag-jun)
– gene products, rex
– gene products, tax
– oncogene protein gp140(v-fms)
– oncogene protein p21(ras)
– oncogene protein p55(v-myc)
– oncogene protein pp60(v-src)
– oncogene protein v-maf
– oncogene proteins v-abl
– oncogene proteins v-erba
– oncogene proteins v-erbb
– oncogene proteins v-fos
– oncogene proteins v-mos
– oncogene proteins v-myb
– oncogene proteins v-raf
– oncogene proteins v-rel
– oncogene proteins v-sis
– viral nonstructural proteins
– viral regulatory proteins
– gene products, nef
– gene products, rex
– gene products, vif
– gene products, vpu
– immediate-early proteins
– trans-activators
– gene products, rev (HIV)
– gene products, tat
– gene products, tax
– gene products, vpr
– herpes simplex virus protein vmw65
– viral structural proteins
– nucleocapsid proteins
– capsid proteins
– viral core proteins
– gene products, gag
– fusion proteins, gag-pol
– hiv core protein p24
– gene products, pol (gene)
– fusion proteins, gag-pol
– hiv integrase
– HIV protease
– RNA-directed DNA polymerase
– hiv-1 reverse transcriptase
– viral envelope proteins
– gene products, env
– hiv envelope protein gp41
– hiv envelope protein gp120
– hiv envelope protein gp160
– hemagglutinins, viral
– hemagglutinin glycoproteins, influenza virus
– hn protein
– viral fusion proteins
– hiv envelope protein gp41
– viral matrix proteins
– gene products, vpu
– viral tail proteins
The list continues at List of MeSH codes (D13).
D12.776
Proteins
Protein classification | List of MeSH codes (D12.776) | Chemistry,Biology | 10,631 |
2,073,714 | https://en.wikipedia.org/wiki/Daniel%20Pedoe | Dan Pedoe (29 October 1910, London – 27 October 1998, St Paul, Minnesota, USA) was an English-born mathematician and geometer with a career spanning more than sixty years. In the course of his life he wrote approximately fifty research and expository papers in geometry. He is also the author of various core books on mathematics and geometry some of which have remained in print for decades and been translated into several languages. These books include the three-volume Methods of Algebraic Geometry (which he wrote in collaboration with W. V. D. Hodge), The Gentle Art of Mathematics, Circles: A Mathematical View, Geometry and the Visual Arts and most recently Japanese Temple Geometry Problems: San Gaku (with Hidetoshi Fukagawa).
Early life
Daniel Pedoe was born in London in 1910, the youngest of thirteen children of Szmul Abramski, a Jewish immigrant from Poland who found himself in London in the 1890s: he had boarded a cattle boat not knowing whether it was bound for New York or London, so his final destination was one of blind chance. Pedoe's mother, Ryfka Raszka Pedowicz, was the only child of Wolf Pedowicz, a corn merchant and his wife, Sarah Haimnovna Pecheska from Łomża then in Congress Poland (that part of Poland then under Russian control). The family name requires some explanation. The father, Abramski, was one of the Kohanim, a priestly group, and once in Britain, he changed his surname to Cohen. At first, all thirteen children took the surname Cohen, but later, to avoid any potential antisemitism, some of the Cohen children changed their surname to Pedoe, a contraction of their mother's maiden name; this happened while Daniel was at school, aged 12.
"Danny" was the youngest child in a family of thirteen children and his childhood was spent in relative poverty in the East End of London, despite their father being a skilled cabinetmaker. He attended the Central Foundation Boys' School where he was first influenced in his love of geometry by the headmaster Norman M. Gibbins and a textbook by Godfrey and Siddons. While still at school, Pedoe published his first paper, The geometric interpretation of Cagnoli's equation: sin b sin c + cos b cos c cos A = sin B sin C – cos B cos C cos a; it appeared in the Mathematical Gazette in 1929. He was successful at the "ten plus" examination and subsequently won a Scholarship to study mathematics at Cambridge University.
Cambridge and Princeton
During his first three years at Magdalene College, Cambridge, where he was a scholar, Pedoe was tutored in mathematics by Arthur Stanley Ramsey, the father of Frank P. Ramsey. He attended lectures by Ludwig Wittgenstein and Bertrand Russell, although he was unimpressed by the teaching style of either great man. Geometry became his main interest and, advised by Henry Baker, he started work on his doctorate and published several papers. In 1935 he took a break from Cambridge and went to the Institute for Advanced Study at Princeton where he worked with Solomon Lefschetz.
University of Southampton and Freeman Dyson
On his return to England in 1936, Pedoe was appointed as an assistant lecturer in the mathematics department at University College, Southampton. More of his papers were published and, around 1937, he was awarded his PhD on the strength of his thesis, The Exceptional Curves on an Algebraic Surface, which was based on Henry Baker's work on the Italian theory of algebraic surfaces; he was examined by W. V. D. Hodge and Baker at Cambridge.
In the late 1930s, Pedoe married Mary Tunstall, an English geographer, and the couple had a daughter, Naomi, and identical twin sons, Dan and Hugh, born in December 1939.
By 1941, Winchester College had lost several teachers to the army and had become unable to meet its teaching commitments. They requested help and Pedoe was asked to assist with the teaching of mathematics. He taught junior and senior classes (the juniors could be unruly) and in the senior class one of the students was Freeman Dyson who showed enormous early talent and was strongly encouraged by Pedoe with extra work and reading. Their friendship lasted more than fifty more years until Pedoe's death in 1998 and Dyson's list of people who have most influenced him begins "Hardy, Pedoe...".
In 1941 a collaboration with W. V. D. Hodge started which lasted some twelve years and included the writing of the huge three-volume work, Methods of Algebraic Geometry. Although the book was originally designed as a geometric counterpart of G. H. Hardy's A Course of Pure Mathematics it was never intended as a textbook and contains original material. First published in the 1940s, all three volumes were reprinted by Cambridge University Press in 1995.
Pedoe published three more papers in 1942: A remark on a property of a special pencil of quadrics, On some geometrical inequalities and An inequality for two triangles.
Birmingham University and Westfield College
In 1942, Pedoe moved to Birmingham for a lectureship at the University of Birmingham, working mainly in engineering mathematics. The family were not happy there, the local air pollution affected his children and Pedoe did not like the working environment. The professor of mathematical physics at Birmingham was Rudolf Peierls, who was working on the British project to develop the atomic bomb; he suggested to Pedoe that he, Pedoe, should do some war-work. He did so, and worked part-time to improve piston rings so as to emulate German dive-bombing tactics.
In 1947 he moved to Westfield College, part of the University of London, as a reader in mathematics. Once again, he was unhappy, both from domestic and professional points of view: his salary was insufficient for him to afford to buy a family home and he found the working environment to be "a strain".
Khartoum and Singapore
Encouraged by Mary to look abroad, he was appointed as head of the mathematics department at the University of Khartoum in the Sudan and took up the role in 1952 on a trial basis, with leave of absence from Westfield College. When Westfield pressed for a firm decision, he resigned and stayed at Khartoum for seven years: the length of his contract. It was during this period that he wrote many of his books, including The Gentle Art of Mathematics, Circles and a textbook, An Introduction to Projective Geometry.
Pedoe found the time at Khartoum to his taste; there was a comfortable life-style that allowed him to write and the family joined him each Christmas. Eventually, Mary stayed with him permanently, the children remaining in England.
In 1959, he was appointed as head of the mathematics department at the University of Singapore by Sir Alexander Oppenheim.
Purdue and Minnesota
Unwilling to retire at 55, the statutory retirement age in Singapore, he moved again, to Indiana in 1962, to take up a position at Purdue University, near Lafayette. Although the location was somewhat isolated, there was an active social life and he was kept busy. One of the positions he held there was as Senior Mathematician to the Minnesota College Geometry Project, which was to improve geometry teaching in high schools and colleges by making films and writing accompanying books.
After two years at Purdue, he was appointed as a professor at the University of Minnesota where he stayed until he retired in 1980, when he was made professor emeritus. He won a Lester R. Ford Award in 1968.
San Gaku
Pedoe's interest and work continued after his retirement and in 1984 he was approached by Hidetoshi Fukagawa, a high-school teacher in Aichi, Japan. Fukagawa had tried unsuccessfully to interest Japanese academics in San Gaku – Japanese wooden tablets containing geometric theorems which had hung in temples and shrines for around two centuries as offerings to the gods.
A collaboration started which resulted in the publication of the book, Japanese Temple Geometry Problems by the Charles Babbage Research Centre in Canada. The book succeeded in arousing interest in this uniquely Japanese form of mathematics.
Death
Dan Pedoe died in 1998, aged 88, after a long period with failing health. He was survived by his twin sons, Dan and Hugh Tunstall-Pedoe, and six grandchildren.
Archive
A collection of Daniel Pedoe's papers and correspondence throughout his life is to be found at the University of Birmingham archive centre.
Books
Methods of Algebraic Geometry (3 volumes)
Circles (republished as Circles: a Mathematical View)
A Course of Geometry for Colleges and Universities (republished as Geometry: A Comprehensive Course)
A Geometric Introduction to Linear Algebra
Geometry and the Liberal Arts (republished as Geometry and the Visual Arts)
The Gentle Art of Mathematics
See also
Pedoe's inequality
W. V. D. Hodge
Sangaku
References
"In Love with Geometry", Daniel Pedoe, College Mathematics Journal, 1998, Volume 29 pp. 170–188.
External links
1910 births
1998 deaths
Mathematicians from London
People educated at Central Foundation Boys' School
20th-century English mathematicians
Academics of the University of Birmingham
Academics of Westfield College
Academics of the University of Southampton
Alumni of Magdalene College, Cambridge
Institute for Advanced Study visiting scholars
Academic staff of the University of Khartoum
Purdue University faculty
University of Minnesota faculty
Geometers
British textbook writers
British expatriates in Sudan
British expatriates in the United States | Daniel Pedoe | Mathematics | 1,944 |
29,997,501 | https://en.wikipedia.org/wiki/CEPIC | CEPIC, from CEnter of the PICture industry (formerly the Coordination of European Picture Agencies Press Stock Heritage), is a registered European Economic Interest Grouping (EEIG) and international umbrella organization. It represents the interests of 11 national Picture Associations in Europe, as well as further individual Agencies, in EU institutions and international organizations.
History
CEPIC was founded in Berlin in 1993, where it still has its Head Office. In 1997, it received Observer Status at the World Intellectual Property Organization (WIPO). CEPIC was registered in Paris in 1999. In 2006 CEPIC became an associate member of the International Press Telecommunications Council (IPTC). Until 2009 CEPIC was known as the Coordination of European Picture Agencies. The change of name to Centre of the Picture Industry reflects the organisation's shift towards a more global orientation.
Structure
Currently, Picture Agencies and their photographers are represented by CEPIC. Members of CEPIC are mainly small and medium-sized companies, national museums and archives, which promote professional photography and footage in Press, Culture and Illustration.
The 7 members of the CEPIC Committee are elected every 2 years by representatives of the 11 national Picture Associations.
Goals and Activities
CEPIC promotes the exchange of information between Picture Agencies from all over the world with its annual Congress, which takes place in a different European city each year (see below). CEPIC stands for the defense of a balanced market. Contract templates and guidelines for good business relations between photographers, Picture Agencies and users are developed on a regular basis. Together with ICOMP, CEPIC works on the protection of competitive market conditions.
Another main issue is the protection of intellectual property and the payment of photographers. CEPIC speaks up for ethical codes and for the rights of photographers and Picture Agencies.
In 2009, CEPIC issued a critical statement on the US-American Google Book Settlement. This statement was submitted to the European Commission in cooperation with ICOMP, the German Book Trade Association (Börsenverein des Deutschen Buchhandels), the Internet Archive/Open Book Alliance and EBLIDA.
CEPIC is, furthermore, active in the discussion about so-called Orphan Works and supports a sector specific solution. To prevent pictures from becoming orphan works, CEPIC recommends the improvement of legal and technological means of protection.
Members
The 11 national Associations are:
Germany
BVPA – Bundesverband der Pressebild-Agenturen und Bildarchive
France
FNAPPI – Fédération Nationale des Agences de Presse Photos et Information
SAPHIR – Syndicat National des Agences Photographiques d'Information et de Reportage
SNAPIG – Syndicat National des Agences Photographiques d'Illustration Générale
Netherlands
NL image
Portugal
APAAI – Associaco Portuguesa das Agencias e Arquivos de Imagens
Sweden
BLF – Bildleverantörernas Förening
SBF – Svensk Bildbyraförening
Switzerland
SAB – Schweizerische Arbeitsgemeinschaft der Bild-Agenturen und-Archive
Spain
AEAPAF – Asociacion Empresarial de Agencias de Prensa y Archivos Fotograficos
United Kingdom
BAPLA – British Association of Picture Libraries and Agencies
The individual picture agencies are from Andorra, Germany, Finland, France, United Kingdom, Ireland, Israel, Italy, Norway, Austria, Poland, Romania, Russia, Spain, Czech Republic, Turkey, Hungary and the United States.
Annual congress
The annual CEPIC Congress is the largest event for picture agencies in the world. It brings together delegates and some 400 companies from 52 countries on five continents. Experts present reports and discuss Image Copyright, Indexing and Internet marketing, among other issues. Further subjects are the challenge of the New Media, the situation of the Picture Market and its constant changes, as well as new technologies and footage.
See also
GLAM
References
External links
Official Website
1993 establishments in Germany
Metadata
News agencies based in Germany
Organizations established in 1993 | CEPIC | Technology | 814 |
63,427,550 | https://en.wikipedia.org/wiki/George%20Bowes%20%28prospector%29 | George Bowes (died 1606) was an English prospector. He mined for gold in Scotland.
Family background
George was a son of Sir George Bowes of Streatlam and Dorothy Mallory. He married Magdalen Bray, daughter of Sir Edward Bray.
Coal and copper
In 1595 he planned a coal mine on his own estate of Biddick Waterville. In 1602 he was allowed to mine for copper on the Knowsley estate belonging to the Earl of Derby. At Caldbeck near Keswick, in 1602, Bowes and Francis Nedham (a son of George Nedham or Needham) reported on old copper workings and lead which contained a proportion of silver.
Surveying the gold fields in 1603
Bowes became interested in mining for gold in Scotland, probably hearing of the work of George Douglas of Parkhead and of Cornelius de Vos who had strong links with the mines in Keswick. He wrote in 1603 that James VI had invited him to come to Scotland twice before the Union of the Crowns, by means of his uncle, the ambassador Robert Bowes.
He wrote a paper describing his reasons for seeking gold on Crawford Moor. He mentioned he had worked in Cornwall, Devon, Somerset, and at Keswick. Napier of Merchiston had shown him samples of gold ore. At the request of the Privy Council, Godolphin went to Carlisle to meet Bowes and Bevis Bulmer in November 1603.
Bowes wrote to the Earl of Suffolk while he was at Leadhills on 10 December 1603, describing the geology, and mentioned an earlier venture when he tried to form a partnership with Thomas Foulis, who had exclusive rights from James VI, but was discouraged by Queen Elizabeth. In another letter he described a story from an old miner's father of the discovery of a vein of gold in the time of James IV or Regent Albany, ninety years earlier, which they backfilled and hid. Bowes was understandably cautious about these stories, which came from his rival Bevis Bulmer's employees, and men who would be grateful for work in new mining ventures.
Bowes, Napier of Merchiston, Bevis Bulmer and John Brode sent a joint letter from Edinburgh on 29 December 1603 to the Privy Council. They had surveyed the gold mining region according to the king's orders given at Wilton in November. Gold had previously been found by Bulmer in the head of a long stream that descended to the Elvan water, and on Steroc brae that descends to Wanlock water, in the Glengonnar water, and the Crawick water that flows into the Nith. Although much gold had been found, no vein bearing gold had been discovered, despite finds of gold mixed with spar. The Scottish miners had not searched for veins of ore until Bulmer and Thomas Foulis had recently dug for copper. Bulmer and his workers insisted they had not found a vein. The places were remote with no dwelling places, especially for works managers, and timber for houses and lodgings would have to be brought from Leith.
Mining in 1604
He received a grant of £100 in February 1604 from the exchequer to work a mine in Scotland. Bevis Bulmer had also been given £200 in January to work gold in the same district. His letters to Robert Cecil, now Lord Essendon, complained that Thomas Foulis had disrupted his workings by detaining his English timber man. He hoped that Lord Balmerino, Secretary for Scotland, would help him. Bowes stayed at Codrus Cottage, above Wanlock Water.
He wrote in March 1604 of a difficult journey in winter to the works at Wanlock Water. The Venetian ambassador Nicolò Molin reported on his progress in February 1605. He heard that Bowes had told Queen Elizabeth about gold mines in Scotland, but she had arrested him. Bowes had found support from King James and produced 25 ounces of gold but with large costs, and was losing supporters.
On 10 June 1605 he wrote to Robert Cecil, now Earl of Salisbury, from Biddick Waterville about the progress of his search for gold in Scotland near Wanlockhead. He had requested a tent big enough to feed 80 workmen. In 1604 he had found a potential seam of spar and yellowish clay, obtaining a grant of £200 to further the work, and in November began to mine there and drain the pit. He built houses on the lands of Sir Thomas Kirkpatrick of Closeburn. He was not confident of finding gold in any quantity, and on 28 May rode to Edinburgh to report his findings to Alexander Seton, the Lord Chancellor, and the Lord President, Lord Balmerino. Balmerino inspected his work and the works belonging to Bevis Bulmer. Bowes abandoned his efforts in June 1605 due to poor health, and Bevis Bulmer took over his concessions. Bowes died soon after.
In July 1606 his widow, Magdalen Bowes, petitioned the Earl of Salisbury for money and help, mentioning that George Bowes had held the offices of Constable of Raby Castle and Stewardship of the lands of Charles, Earl of Westmorland. During his absence in Scotland the offices were held by William Davenport and Edward Marley who transferred to John Richardson. Bowes had to pay Richardson to get them back. Bowes had mined copper at Keswick and Knowsley in Queen Elizabeth's time, and the efforts had given him bruises and distempers which shortened his life. The Earl of Dorset had given the two offices to their eldest son, also called George Bowes, but others had made difficulties.
Stephen Atkinson wrote that "Mr Bowes" had found a vein of gold on Wanlock Water, which Bevis Bulmer later exploited. Atkinson states this was in the reign of Queen Elizabeth. Previously, gold mines had been opened by Cornelius de Vos and George Douglas of Parkhead. The British Library has a fragmentary description of gold mining in Scotland which may have been written by George Bowes as a petition for funding in 1603.
George's elder brother Robert Bowes was killed in an accident at a copper mine in Keswick in 1610.
Family
The children of George Bowes and Magdalen Bray included:
George Bowes, baptised 6 May 1596.
Robert Bowes, baptised 29 September 1597, married Joan Hutton.
Bray Bowes, baptised 2 January 1603.
Ralph Bowes
Toby Bowes of the Waterside.
William Bowes, baptised 12 August 1599.
Dorothy Bowes, baptised 7 February 1604.
Magdalen Bowes, baptised 4 February 1593.
References
1606 deaths
16th-century English businesspeople
17th-century English people
Gold mines in Scotland
Mining engineers | George Bowes (prospector) | Engineering | 1,354 |
7,466,623 | https://en.wikipedia.org/wiki/Seafloor%20massive%20sulfide%20deposits | Seafloor massive sulfide deposits, or SMS deposits, are modern equivalents of ancient volcanogenic massive sulfide ore deposits, or VMS deposits. The term was coined by mineral explorers to differentiate the modern deposits from the ancient ones.
SMS deposits were first recognized during the exploration of the deep oceans and the mid ocean ridge spreading centers in the early 1960s. Deep ocean research submersibles, bathyspheres and remotely operated vehicles have visited and taken samples of black smoker chimneys, and it has long been recognised that such chimneys contain appreciable grades of Cu, Pb, Zn, Ag, Au and other trace metals.
SMS deposits form in the deep ocean around submarine volcanic arcs, where hydrothermal vents exhale sulfide-rich mineralising fluids into the ocean.
SMS deposits are laterally extensive and consist of a central vent mound around the area where the hydrothermal circulation exits, with a wide apron of unconsolidated sulfide silt or ooze which precipitates upon the seafloor.
Beginning about 2008, technologies were being developed for deep-sea mining of these deposits.
Minerals
Mineralization in submarine magmatic-hydrothermal systems is a product of the chemical and thermal exchange between the ocean, the lithosphere, and the magmas emplaced within it. Different mineral associations precipitate during the typical stages of mineralization that characterize the life span of such systems.
Minerals present in a hydrothermal system or a fossil volcanogenic massive sulfide deposit are deposited passively or reactively. Mineral associations may vary in several ways:
(1) structural zonation – between different mineralized structures, either syngenetic (passive precipitation in chimneys, mounds and stratiform deposits) or epigenetic (structures that correspond to feeder channels, and replacements of host rocks or pre-existing massive sulfide bodies);
(2) horizontal zonation – from proximal to distal associations with respect to venting areas within the same stratigraphic horizon;
(3) vertical zonation – from deep to shallow associations (i.e., stockworks to mounds);
(4) temporal zonation – from early and climactic to late stages of mineralization (dominated by sulfides, and by sulfates or oxides, respectively);
(5) volcano-sedimentary context – depending essentially on the composition of the volcanic rocks and, ultimately, on the tectonomagmatic context.
The most common minerals in ore-bearing associations of volcanogenic massive sulfide deposits (non-metamorphosed or oxidized) and their modern analogues are pyrite, pyrrhotite, chalcopyrite, covellite, sphalerite, galena, tetrahedrite-tennantite, marcasite, realgar, orpiment, proustite-pyrargyrite, wurtzite and stannite (sulfides), Mn oxides, cassiterite, magnetite and hematite (oxides), barite and anhydrite (sulfates), calcite and siderite (carbonates), quartz and native gold; these are differently distributed among the various associations outlined above. The most common hydrothermal alteration assemblages are chloritic (including Mg-rich ones) and phyllic alteration (dominated by "sericite", mostly illite), together with silicification, deep and shallow talcose alteration, and ferruginous (including Fe oxides, carbonates and sulfides) alteration.
Economic importance
Economic extraction of SMS deposits is in the theoretical stage, the greatest complication being the extreme water depths at which these deposits are forming. However, apparent vast areas of the peripheral areas of these black smoker zones contain a sulfide ooze which could, theoretically, be vacuumed up off the seafloor. Nautilus Minerals Inc. (Nautilus) was engaged in commercially exploring the ocean floor for copper, gold, silver and zinc seafloor massive sulphide (SMS) deposits, and mineral extraction from an SMS system. Nautilus' Solwara 1 Project located at 1,600 metres water depth in the Bismarck Sea, Papua New Guinea, was an attempt at the world's first deep-sea mining project, with first production originally expected in 2017. However, the company went bankrupt in 2019 after failing to secure funding for the project.
Known SMS deposits
Deep ocean drilling, seismic bathymetry surveys and deep-sea mineral exploration drilling have delineated several areas worldwide with potentially economically viable SMS deposits, including:
Lau Basin
Kermadec Volcanic Arc
Colville Ridge
Bismarck Sea
Okinawa Trough
North Fiji Basin (see d'Entrecasteaux Ridge)
Red Sea
See also
Hydrothermal circulation
Mid ocean ridge
Ore genesis
RISE project
References
External links
The dawn of deep ocean mining, Steven Scott, Feb. 2006
Bertram, C., A. Krätschell, K. O'Brien, W. Brückmann, A. Proelss, K. Rehdanz (2011). Metalliferous sediments in the Atlantis II Deep – assessing the geological and economic resource potential and legal constraints. Resources Policy 36 (2011), 315–329.
Economic geology
Oceanography
Sedimentary rocks | Seafloor massive sulfide deposits | Physics,Environmental_science | 1,090 |
21,533,375 | https://en.wikipedia.org/wiki/Gemini%20%28constellation%29 | Gemini is one of the constellations of the zodiac and is located in the northern celestial hemisphere. It was one of the 48 constellations described by the 2nd century AD astronomer Ptolemy, and it remains one of the 88 modern constellations today. Its name is Latin for twins, and it is associated with the twins Castor and Pollux in Greek mythology. Its old astronomical symbol is (♊︎).
Location
Gemini lies between Taurus to the west and Cancer to the east, with Auriga and Lynx to the north, Monoceros and Canis Minor to the south, and Orion to the south-west.
In classical antiquity, Cancer was the location of the Sun on the northern solstice (June 21). During the first century AD, axial precession shifted it into Gemini. In 1990, the location of the Sun at the northern solstice moved from Gemini into Taurus, where it will remain until the 27th century AD and then move into Aries. The Sun will move through Gemini from June 21 to July 20 through 2062.
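As a rough consistency check on these dates, a back-of-the-envelope calculation, assuming the standard precession rate of about 50.3 arcseconds per year (one full cycle in roughly 25,800 years), gives

\[
\frac{360^\circ}{25\,800~\mathrm{yr}} \approx 0.014^\circ/\mathrm{yr},
\]

that is, about one degree every 72 years. Between the first century AD and 1990 the solstice point therefore drifted roughly 1900/72 ≈ 26° westward along the ecliptic, comparable to the stretch of ecliptic the Sun crosses during its month-long passage through Gemini.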
Gemini is prominent in the winter skies of the Northern Hemisphere and is visible the entire night in December–January. The easiest way to locate the constellation is to find its two brightest stars, Castor and Pollux, eastward from the familiar V-shaped asterism (the open cluster Hyades) of Taurus and the three stars of Orion's Belt (Alnitak, Alnilam, and Mintaka). Another way is to mentally draw a line from the Pleiades star cluster in Taurus to Regulus, the brightest star in Leo. In doing so, an imaginary line that is relatively close to the ecliptic is drawn, a line which intersects Gemini roughly at the midpoint of the constellation, just below Castor and Pollux.
When the Moon moves through Gemini, its motion can easily be observed in a single night as it appears first west of Castor and Pollux, then aligns, and finally appears east of them.
Features
Stars
The constellation contains 85 stars visible to the naked eye.
The brightest star in Gemini is Pollux, and the second-brightest is Castor. Castor's Bayer designation as "Alpha" arose because Johann Bayer did not carefully distinguish which of the two was the brighter when he assigned his eponymous designations in 1603. Although the characters of myth are twins, the actual stars are physically very different from each other.
α Gem (Castor) is a sextuple star system 52 light-years from Earth, which appears as a magnitude 1.6 blue-white star to the unaided eye. Two spectroscopic binaries are visible at magnitudes 1.9 and 3.0 with a period of 470 years. A wide-set red dwarf star is also a part of the system; this star is an Algol-type eclipsing binary star with a period of 19.5 hours; its minimum magnitude is 9.8 and its maximum magnitude is 9.3.
β Gem (Pollux) is an orange-hued giant star of magnitude 1.14, 34 light-years from Earth. Pollux has an extrasolar planet revolving around it, as do two other stars in Gemini, HD 50554, and HD 59686.
γ Gem (Alhena) is a blue-white hued star of magnitude 1.9, 105 light-years from Earth.
δ Gem (Wasat) is a long-period binary star 59 light-years from Earth. The primary is a white star of magnitude 3.5, and the secondary is an orange dwarf star of magnitude 8.2. The period is over 1000 years; it is divisible in medium amateur telescopes.
ε Gem (Mebsuta), a double star, includes a primary yellow supergiant of magnitude 3.1, nine hundred light-years from Earth. The optical companion, of magnitude 9.6, is visible in binoculars and small telescopes.
ζ Gem (Mekbuda) is a double star, whose primary is a Cepheid variable star with a period of 10.2 days; its minimum magnitude is 4.2 and its maximum magnitude is 3.6. It is a yellow supergiant, 1,200 light-years from Earth, with a radius about 60 times that of the Sun, giving it a volume roughly 220,000 times the Sun's (see the volume calculation following this list). The companion, a magnitude 7.6 star, is visible in binoculars and small amateur telescopes.
η Gem (Propus) is a binary star with a variable component. 380 light-years away, it has a period of 500 years and is only divisible in large amateur telescopes. The primary is a semi-regular red giant with a period of 233 days; its minimum magnitude is 3.9 and its maximum magnitude is 3.1. The secondary is of magnitude 6.
κ Gem is a binary star 143 light-years from Earth. The primary is a yellow giant of magnitude 3.6; the secondary is of magnitude 8. The two are only divisible in larger amateur instruments because of the discrepancy in brightness.
ν Gem is a double star divisible in binoculars and small amateur telescopes. The primary is a blue giant of magnitude 4.1, 550 light-years from Earth, and the secondary is of magnitude 8.
38 Gem, a binary star, is also divisible in small amateur telescopes, 84 light-years from Earth. The primary is a white star of magnitude 4.8 and the secondary is a yellow star of magnitude 7.8.
U Gem is a dwarf nova type cataclysmic variable discovered by J. R. Hind in 1855.
Mu Gem (Tejat) is the Bayer designation for a star in the northern constellation of Gemini. It has the traditional name Tejat Posterior, which means back foot, because it is the foot of Castor, one of the Gemini twins.
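The volume figure quoted for ζ Gem above follows from a simple scaling argument (treating both stars as spheres and taking the quoted radius of about 60 solar radii at face value):

\[
\frac{V_{\zeta\,\mathrm{Gem}}}{V_{\odot}} = \left(\frac{R_{\zeta\,\mathrm{Gem}}}{R_{\odot}}\right)^{3} \approx 60^{3} = 216\,000 \approx 2.2\times 10^{5},
\]

that is, roughly 220,000 times the volume of the Sun.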
Deep-sky objects
M35 (NGC 2168) is a large, elongated open cluster of magnitude 5, discovered in the year 1745 by Swiss astronomer Philippe Loys de Chéseaux. It has an area of approximately 0.2 square degrees, the same size as the full moon. Its high magnitude means that M35 is visible to the unaided eye under dark skies; under brighter skies it is discernible in binoculars. The 200 stars of M35 are arranged in chains that curve throughout the cluster; it is 2800 light-years from Earth. Another open cluster in Gemini is NGC 2158. Visible in large amateur telescopes and very rich, it is more than 12,000 light-years from Earth.
NGC 2392 is a planetary nebula with an overall magnitude of 9.2, located 4,000 light-years from Earth. In a small amateur telescope, its 10th magnitude central star is visible, along with its blue-green elliptical disk. It is said to resemble the head of a person wearing a parka.
The Medusa Nebula is another planetary nebula, some 1,500 light-years distant. Geminga is a neutron star approximately 550 light-years from Earth. Other objects include NGC 2129, NGC 2158, NGC 2266, NGC 2331 and NGC 2355.
Meteor showers
The Geminids is a bright meteor shower that peaks on December 13–14. It has a maximum rate of approximately 100 meteors per hour, making it one of the richest meteor showers. The Epsilon Geminids peak between October 18 and October 29 and have only been recently confirmed. They overlap with the Orionids, which make the Epsilon Geminids difficult to detect visually. Epsilon Geminid meteors have a higher velocity than Orionids.
Mythology
In Babylonian astronomy, the stars Castor and Pollux were known as the Great Twins. The Twins were regarded as minor gods and were called Meshlamtaea and Lugalirra, meaning respectively 'The One who has arisen from the Underworld' and the 'Mighty King'. Both names can be understood as titles of Nergal, the major Babylonian god of plague and pestilence, who was king of the Underworld.
In Greek mythology, Gemini was associated with the myth of Castor and Pollux, the children of Leda and Argonauts both. Pollux was the son of Zeus, who seduced Leda, while Castor was the son of Tyndareus, king of Sparta and Leda's husband. Castor and Pollux were also mythologically associated with St. Elmo's fire in their role as the protectors of sailors. When Castor died, because he was mortal, Pollux begged his father Zeus to give Castor immortality, and he did, by uniting them together in the heavens.
Visualizations
Gemini is dominated by Castor and Pollux, two bright stars that appear relatively close together, encouraging the mythological link between the constellation and twinship. The twin above and to the right (as seen from the Northern Hemisphere) is Castor, whose brightest star is α Gem; it is a second-magnitude star and represents Castor's head. The twin below and to the left is Pollux, whose brightest star is β Gem (more commonly called Pollux); it is of the first magnitude and represents Pollux's head. Furthermore, the other stars can be visualized as two parallel lines descending from the two main stars, making it look like two figures.
H. A. Rey has suggested an alternative to the traditional visualization that connected the stars of Gemini to show twins holding hands. Pollux's torso is represented by the star υ Gem, Pollux's right hand by ι Gem, Pollux's left hand by κ Gem; all three of these stars are of the fourth magnitude. Pollux's pelvis is represented by the star δ Gem, Pollux's right knee by ζ Gem, Pollux's right foot by γ Gem, Pollux's left knee by λ Gem, and Pollux's left foot by ξ Gem. γ Gem is of the second magnitude, while δ and ξ Gem are of the third magnitude. Castor's torso is represented by the star τ Gem, Castor's left hand by ι Gem (which he shares with Pollux), Castor's right hand by θ Gem; all three of these stars are of the fourth magnitude. Castor's pelvis is represented by the star ε Gem, Castor's left foot by ν Gem, and Castor's right foot by μ Gem and η Gem; ε, μ, and η Gem are of the third magnitude. The brightest star in this constellation is Pollux.
Astronomy
In Meteorologica (1 343b30) Aristotle mentions that he observed Jupiter in conjunction with and then occulting a star in Gemini. This is the earliest-known observation of this nature. A study published in 1990 suggests the star involved was 1 Geminorum and the event took place on 5 December 337 BC.
When William Herschel discovered Uranus on 13 March 1781 it was located near η Gem. In 1930 Clyde Tombaugh exposed a series of photographic plates centred on δ Gem and discovered Pluto.
Equivalents
In Chinese astronomy, the stars that correspond to Gemini are located in two areas: the White Tiger of the West (西方白虎, Xī Fāng Bái Hǔ) and the Vermilion Bird of the South (南方朱雀, Nán Fāng Zhū Què).
In some cultures, the twin in Gemini refers to 'the unborn twin' and represents a spiritual or dual self that exists within.
Astrology
The Sun appears in the constellation Gemini from June 21 to July 20. In tropical astrology, the Sun is considered to be in the sign Gemini from May 22 to June 21, and in sidereal astrology, from June 16 to July 16.
See also
Geminga, Gemini gamma-ray source
Gemini in Chinese astronomy
IC 444, reflection nebula
Messier 35 open cluster
Cancer Minor (constellation) - Obsolete constellation inside modern Gemini
References
External links
The Deep Photographic Guide to the Constellations: Gemini
Astrojan Astronomical Picture Collection : The clickable Gemini
WikiSky: Gemini constellation
Ian Ridpath's Star Tales: Gemini
APOD Pictures of Gemini and Deep Sky Objects:
A Spring Sky Over Hirsau Abbey – http://apod.nasa.gov/apod/ap090506.html
The Eskimo Nebula from Hubble – http://apod.nasa.gov/apod/ap090503.html
The Medusa Nebula – http://apod.nasa.gov/apod/ap100612.html
Open Star Clusters M35 and NGC 2158 – http://apod.nasa.gov/apod/ap031215.html
NGC 2266: Old Cluster in the NGC – http://apod.nasa.gov/apod/ap050319.html
Warburg Institute Iconographic Database (medieval and early modern images of Gemini)
Constellations
Northern constellations
Constellations listed by Ptolemy | Gemini (constellation) | Astronomy | 2,691 |
25,428,398 | https://en.wikipedia.org/wiki/Public%20opinion%20on%20climate%20change | Public opinion on climate change is related to a broad set of variables, including the effects of sociodemographic, political, cultural, economic, and environmental factors as well as media coverage and interaction with different news and social media. International public opinion on climate change shows a majority viewing the crisis as an emergency.
Public opinion polling is an important part of studying climate communication and how to improve climate action; evidence of public opinion can help increase decision makers' commitment to act. Surveys and polling to assess opinion have been done since the 1980s, first focusing on awareness but gradually including greater detail about commitments to climate action. More recently, global surveys give much finer data. For example, in January 2021, the United Nations Development Programme published the results of The Peoples' Climate Vote. This was the largest-ever climate survey, with responses from 1.2 million people in 50 countries, and it indicated that 64% of respondents considered climate change to be an emergency, with forest and land conservation being the most popular solutions.
Public surveys
According to a 2015 journal article based on a literature review of thousands of articles related to over two hundred studies covering the period from 1980 to 2014, there was an increase in public awareness of climate change in the 1980s and early 1990s, followed by a period of growing concern, mixed with the rise of conflicting positions, in the later 1990s and early 2000s. This was followed by a period of "declining public concern and increasing skepticism" in some countries in the mid-2000s to late-2000s. From 2010 to 2014, there was a period suggesting "possible stabilization of public concern about climate change".
The 2021 Lloyd's Register Foundation World Risk Poll conducted by Gallup found that 67% of people viewed climate change as a threat to people in their country, which is a slight decrease from 69% in 2019, possibly due to the COVID-19 pandemic and its impact on health and livelihoods being pressing issues. The 2021 poll was conducted in 121 countries and included over 125,000 interviews. The study also revealed that many countries and regions with high experience of disasters related to natural hazards, including those made more frequent and severe by climate change, are also those with low resilience.
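For a sense of the scale of pure sampling error in a poll of this size, a simplified calculation, assuming simple random sampling and ignoring the survey's weighting, clustering and country-level design effects (all of which make the real uncertainty larger), gives a 95% margin of error of about

\[
\mathrm{MoE} \approx 1.96\sqrt{\frac{p\,(1-p)}{n}} \approx 1.96\sqrt{\frac{0.67\times 0.33}{125\,000}} \approx 0.003,
\]

or roughly 0.3 percentage points on the 67% figure, so comparisons between survey waves are limited more by differences in design and context than by raw sampling error.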
A 2021 survey conducted by the Institute of Economic Affairs (IEA) found that 75% of young British respondents (aged 16–24) agreed with the view that climate change was a specifically capitalist problem.
Over 73,000 people speaking 87 different languages across 77 countries were asked 15 questions on climate change for the Peoples' Climate Vote 2024, a public opinion survey on climate change, which was conducted for the UN Development Programme (UNDP) with the University of Oxford and GeoPoll. It showed that 80 percent of people globally want their governments to take stronger action to tackle the climate crisis.
In 2024 Ipsos conducted a survey about the importance of climate issues in elections. It found that among the factors influencing voters' decisions, climate change generally ranks around 10th in importance, well behind other issues, especially inflation. This worried some experts, as around 4 billion people from more than 65 countries, responsible for 40% of emissions, are expected to participate in national elections in 2024.
Results from the European Investment Bank's 2021 climate survey
In the 2021 survey, 91% of Chinese respondents, 73% of Britons, 70% of Europeans and 60% of Americans supported stronger policies for climate change mitigation. 63% of EU residents, 59% of Britons, 50% of Americans and 60% of Chinese respondents are in favor of switching to renewable energy. 18% of Americans are in favor of natural gas as a source of energy. For Britons and EU citizens, nuclear energy is a more popular energy alternative. 69% of EU respondents, 71% of UK respondents, 62% of US respondents and 89% of Chinese respondents support a tax on the items and services that contribute the most to global warming.
In the 2022 edition of the same climate survey, 87% of respondents in the European Union and the United Kingdom agree that their government has moved too slowly to address climate change, compared to 76% and 74%, respectively, in China and the United States. The majority of people polled in the European Union and China (80% and 91%, respectively) think that climate change has an impact on their daily life. Meanwhile, Americans (67%) and Britons (65%) are somewhat less likely to say so. More findings from the survey show that 63% of people in the European Union want energy costs to be dependent on use, with the greatest consumers paying more. This is compared to 83% in China, 63% in the UK and 57% in the US.
Compared to 84% in China, 66% in the United States, and 52% in the United Kingdom, 64% of EU respondents want polluting activities like air travel and SUVs to be taxed more heavily to account for their environmental impact. 84% of EU respondents, along with 88% of Chinese, 83% of British, and 72% of American respondents, believe that a worldwide catastrophe is inevitable if the consumption of products and energy is not lowered in the coming years.
According to the European Investment Bank's climate survey from 2022, 84% of EU respondents stated that if consumption of goods and energy is not significantly reduced in the near future, the negative effects will be irreversible. 63% of EU citizens want energy prices to be based on consumption, with higher costs for those individuals or businesses who use the most energy, and 40% of respondents from the EU believe that their government should lower energy-related taxes in the near future. 87% of EU respondents and 85% of UK respondents believe that their governments are moving too slowly to halt climate change. Few respondents from the UK, EU, and the US believe that their governments will be successful in decreasing carbon emissions by 2030.
According to the 2024 survey conducted in the European Union, 86% of Europeans believe that investing in climate change adaptation can lead to job creation and stimulate the local economy. The same survey also reveals that 80% of respondents, including 89% from southern European countries, have encountered at least one extreme weather event over the last five years. Of these, 55% reported experiencing extreme heat and heatwaves, with higher occurrences in Spain (73%) and Romania (71%). Furthermore, 35% of respondents have been affected by droughts, with the incidence rising to 62% in Romania and 49% in Spain. Additionally, 34% have witnessed heavy storms or hail, with 62% in Slovenia and 49% in Croatia reporting such experiences.
Older surveys (prior to 2013)
In Europe, the notion of human influence on climate gained wide acceptance more rapidly than in the United States and other countries (data from 2007). A 2009 survey found that Europeans rated climate change as the second most serious problem facing the world, between "poverty, the lack of food and drinking water" and "a major global economic downturn". 87% of Europeans considered climate change to be a very serious or serious problem, while ten per cent did not consider it a serious problem.
A 15-nation poll conducted in 2006, by Pew Global found that there "is a substantial gap in concern over global warming—roughly two-thirds of Japanese (66%) and Indians (65%) say they personally worry a great deal about global warming. Roughly half of the populations of Spain (51%) and France (46%) also express great concern over global warming, based on those who have heard about the issue. But there is no evidence of alarm over global warming in either the United States or China—the two largest producers of greenhouse gases. Just 19% of Americans and 20% of the Chinese who have heard of the issue say they worry a lot about global warming—the lowest percentages in the 15 countries surveyed. Moreover, nearly half of Americans (47%) and somewhat fewer Chinese (37%) express little or no concern about the problem."
A 47-nation poll by Pew Global Attitudes conducted in 2007 found, "Substantial majorities in 25 of 37 countries say global warming is a 'very serious' problem."
Matthew C. Nisbet and Teresa Myers' 2007 article—"Twenty Years of Public Opinion about Global Warming"—covered two decades in the United States starting in 1980—in which they investigated public awareness of the causes and impacts of global warming, public policy, scientific consensus on climate change, public support for the Kyoto Accord, and their concerns about the economic costs of potential public policies that responded to climate change. They found that from 1986 to 1990, the proportion of respondents who reported having heard about climate change increased from 39% to 74%. However, they noted "levels of overall understanding were limited".
A 2010 journal article in Risk Analysis compared and contrasted a 1992 survey and a 2009 survey of lay peoples' awareness and opinions of climate change. In 1992, the general public did not differentiate between climate change and the depletion of the ozone layer. Using a mental models methodology, researchers found that while there was a marked increase in understanding of climate change by 2009, many did not accept that global warming was "primarily due to increased concentrations of carbon dioxide in the atmosphere", or that the "single most important source of this carbon dioxide is the combustion of fossil fuels".
Influences on individual opinion
Geographic region
For a list of countries and their opinion see "By region and country", below.
The first major worldwide poll, conducted by Gallup in 2008–2009 in 127 countries, found that some 62% of people worldwide said they knew about global warming. In the industrialized countries of North America, Europe, and Japan, 67% or more knew about it (97% in the U.S., 99% in Japan); in developing countries, especially in Africa, fewer than a quarter knew about it, although many had noticed local weather changes. The survey results suggest that between 2007 and 2010 only 42% of the world's population were aware of climate change and believed that it is caused by human activity. Among those who knew about global warming, there was a wide variation between nations in belief that the warming was a result of human activities.
Adults in Asia, with the exception of those in developed countries, are the least likely to perceive global warming as a threat. In developed Asian countries like South Korea, perceptions of climate change are associated with strong emotional beliefs about its causes. In the western world, individuals are the most likely to be aware and perceive it as a very or somewhat serious threat to themselves and their families; although Europeans are more concerned about climate change than those in the United States. However, the public in Africa, where individuals are the most vulnerable to global warming while producing the least carbon dioxide, is the least aware – which translates into a low perception that it is a threat.
These variations pose a challenge to policymakers, as different countries travel down different paths, making an agreement over an appropriate response difficult. While Africa may be the most vulnerable region and produces the least greenhouse gases, its public is the most ambivalent. The top five emitters (China, the United States, India, Russia, and Japan), who together emit half the world's greenhouse gases, vary in both awareness and concern. The United States, Russia, and Japan are the most aware, at over 85% of the population. Conversely, only two-thirds of people in China and one-third in India are aware. Japan expresses the greatest concern of the five, which translates into support for environmental policies. In China, Russia, and the United States, while awareness varies, a similar proportion of those who are aware express concern. Similarly, those aware in India are likely to be concerned, but India faces challenges spreading this concern to the remaining population as its energy needs increase over the next decade.
An online survey on environmental questions conducted in 20 countries by Ipsos MORI, "Global Trends 2014", shows broad agreement, especially on climate change and if it is caused by humans, though the U.S. ranked lowest with 54% agreement. It has been suggested that the low U.S. ranking is tied to denial campaigns.
A 2010 survey of 14 industrialized countries found that skepticism about the danger of global warming was highest in Australia, Norway, New Zealand and the United States, in that order, correlating positively with per capita emissions of carbon dioxide.
A survey conducted in the European Union in 2024 reveals that 50% of respondents see climate adaptation as a priority for their country in the upcoming years. Individuals residing in southern European nations are notably more concerned, with 65% considering adaptation a priority, which is 15 percentage points higher than the EU average. The same survey found that 28% of respondents in the EU think that the elderly should be prioritized for support in climate change adaptation, and 23% believe that individuals residing in high-risk areas should be the first to receive support.
Education
In countries varying in awareness, an educational gap translates into a gap in awareness. However, an increase in awareness does not always result in an increase in perceived threat. In China, 98% of those who have completed four years or more of college education reported knowing something or a great deal about climate change, while only 63% of those who have completed nine years of education reported the same. Despite the differences in awareness in China, all groups perceive a low level of threat from global warming. In India, those who are educated are more likely to be aware, and the educated there are far more likely to report perceiving global warming as a threat than those who are not educated. In Europe, individuals who have attained a higher level of education perceive climate change as a serious threat. There is also a strong association between education and Internet use: Europeans who use the Internet more are more likely to perceive climate change as a serious threat. However, a survey of American adults found "little disagreement among culturally diverse citizens on what science knows about climate change". In the US, individuals with greater science literacy and education have more polarized beliefs on climate change.
In the states of Washington, California, Oregon, and Idaho, people with more education were more likely to support the building of new fossil fuel power plants than people with less education.
Demographics
In general, there is a substantial variation in the direction in which demographic traits, like age or gender, correlate with climate change concern. While women and younger people tend to be more concerned about climate change in English-speaking constituencies, the opposite is true in most African countries.
Residential demographics affect perceptions of global warming. In China in 2008, 77% of those who lived in urban areas were aware of global warming compared to 52% in rural areas. This trend was mirrored in India with 49% to 29% awareness, respectively.
Among the countries where at least half the population is aware of global warming, those in which a majority believe that global warming is due to human activities tend to have a greater national GDP per unit of energy, that is, greater energy efficiency.
In Europe, individuals under fifty-five are more likely to perceive both "poverty, lack of food and drinking water" and climate change as a serious threat than individuals over fifty-five. Male individuals are more likely to perceive climate change as a threat than female individuals. Managers, white-collar workers, and students are more likely to perceive climate change as a greater threat than house persons and retired individuals.
In the United States, conservative white men are more likely than other Americans to deny climate change. Men are also less likely to believe that climate change is human caused or that there is a consensus message talking about the issues of climate change among scientists. A very similar trend has been documented in Norway, where 63% of conservative men deny anthropogenic climate change compared to just 36% of the general Norwegian population. In Sweden, political conservatism was similarly found to correlate with climate change denial, while in Brazil, climate change denial has been found to be more correlated with gender, with men being significantly more likely to express denialist viewpoints compared to women.
Women are more likely to support egalitarian policies as well as social programs for their community. Although there are differences between men and women on environmental public policy, both are less likely to support policies such as new regulations when the economy is doing poorly.
In Great Britain, a movement by women known as "birthstrikers" advocates refraining from procreation until the possibility of "climate breakdown and civilisation collapse" is averted.
In 2021, a global survey was conducted to understand the opinions of people aged 16-25 about climate change. According to the study, 4 in 10 are hesitant about having children because they fear climate change, and 6 in 10 feel extreme anxiety about the issue. A similar number felt betrayed by older generations and governments.
Age differences
Youth show a deeper understanding and awareness of climate change than adults and older generations. Younger generations typically express more concern about climate change than older generations, and younger demographics hold more negative and pessimistic attitudes towards it. Because the consciousness of different age groups is hard to measure, it is difficult to gauge how much more aware younger generations are than older generations. However, younger demographics also believe at higher rates than older demographics that climate change can be successfully mitigated by taking action, and they are more likely to express interest in acting to help mitigate it.
About 28% of millennials say that they have taken some kind of action to help with climate change, and 40% have used social media to address climate change in some way, along with 45% of Gen Z youth. Younger generations are also more likely to support and vote for climate change policies than older generations.
In the western states of Washington, Idaho, Oregon, and California, older residents are more likely to support policies for building new fossil fuel power plants.
Political identification
Public opinion on climate change can be influenced by who people vote for. Although media coverage influences how some view climate change, research shows that voting behavior influences climate change skepticism. This shows that people's views on climate change tend to align with the people they voted for.
In Europe, opinion is not strongly divided between left and right parties. Although European political parties on the left, including Green parties, strongly support measures to address climate change, conservative European political parties hold similar positions, most notably in Western and Northern Europe. For example, Margaret Thatcher, never a friend of the coal mining industry, was a strong supporter of an active climate protection policy and was instrumental in founding the Intergovernmental Panel on Climate Change and the British Hadley Centre for Climate Prediction and Research. Speeches such as those to the Royal Society on 27 September 1988 and to the UN General Assembly in November 1989 helped to put climate change, acid rain, and general pollution into the British mainstream. After her career, however, Thatcher was less of a climate activist, calling climate action a "marvelous excuse for supranational socialism" and dismissing Al Gore's warnings as "apocalyptic hyperbole". France's center-right President Chirac pushed key environmental and climate change policies in France in 2005–2007. Conservative German administrations (under the Christian Democratic Union and Christian Social Union) in the past two decades have supported European Union climate change initiatives; acid rain regulation, prompted by concern about forest dieback, was initiated under Kohl's archconservative minister of the interior Friedrich Zimmermann. In the period after former President George W. Bush announced that the United States was leaving the Kyoto Treaty, European media and newspapers on both the left and right criticized the move. The conservative Spanish La Razón, the Irish Times, the Irish Independent, the Danish Berlingske Tidende, and the Greek Kathimerini all condemned the Bush administration's decision, as did left-leaning newspapers.
In Norway, a 2013 poll conducted by TNS Gallup found that 92% of those who vote for the Socialist Left Party and 89% of those who vote for the Liberal Party believe that global warming is caused by humans, while the percentage who held this belief is 60% among voters for the Conservative Party and 41% among voters for the Progress Party.
The shared sentiments between the political left and right on climate change further illustrate the divide in perception between the United States and Europe. As an example, conservative German Chancellors Helmut Kohl and Angela Merkel have differed with other parties in Germany only on how to meet emissions reduction targets, not on whether to establish or fulfill them.
A 2017 study found that those who changed their opinion on climate change between 2010 and 2014 did so "primarily to align better with those who shared their party identification and political ideology. This conforms with the theory of motivated reasoning: Evidence consistent with prior beliefs is viewed as strong and, on politically salient issues, people strive to bring their opinions into conformance with those who share their political identity". Furthermore, a 2019 study examining the growing skepticism of climate change among American Republicans argues that persuasion and rhetoric from party elites play a critical role in public opinion formation, and that these elite cues are propagated through mainstream and social media sources.
Those who care about the environment and want change are unhappy with some policies. For example, while cap and trade attracts support, very few people are willing to pay more than 15 dollars per month for a program intended to help the environment. According to a 2015 article published in Environmental Politics, while most Americans were aware of climate change, only 2% of respondents ranked the environment as the top issue in the US.
A 2014–2018 survey of Oklahoma (U.S.) residents found that partisans on the political right have much more unstable beliefs about climate change than partisans on the left. Contradicting previous literature indicating that climate beliefs are firmly held and invariable, the researchers said the results imply that opinions on the right are more susceptible to change.
Individual risk assessment and assignment
According to a 1996 article, the IPCC attempts to orchestrate global (climate) change research to shape a worldwide consensus. However, the consensus approach has been described as more of a liability than an asset in comparison with other environmental challenges. In 2010, an article in Current Sociology said that the linear model of policy-making, based on the premise that the more knowledge we have, the better the political response will be, had not been working and had in the meantime been rejected within sociology.
In a 1999 article, Sheldon Ungar, a Canadian sociologist, compared the different public reactions towards ozone depletion and climate change. Public opinion failed to tie climate change to concrete events that could serve as a threshold or beacon signifying immediate danger. Scientific predictions of a temperature rise of two to three degrees Celsius over several decades do not resonate with people, for example in North America, who experience similar swings during a single day. Because scientists framed global warming as a problem of the future, a liability in the "attention economy", pessimistic outlooks in general and the attribution of extreme weather to climate change have often been discredited or ridiculed (compare Gore effect) in the public arena. While the greenhouse effect per se is essential for life on Earth, the case was quite different with the ozone shield and other metaphors about ozone depletion. The scientific assessment of the ozone problem also had large uncertainties, but the metaphors used in the discussion (ozone shield, ozone hole) resonated better with lay people and their concerns.
The chlorofluorocarbon (CFC) regulation attempts of the end of the 1980s benefited from those easy-to-grasp metaphors and the personal risk assumptions taken from them. As well, the fate of celebrities like President Ronald Reagan, who had skin cancer removal in 1985 and 1987, was of high importance. In the case of public opinion on climate change, no imminent danger is perceived.
It has been hypothesised many times that, no matter how strong the climate knowledge provided by risk analysts, experts and scientists is, risk perception determines agents' ultimate mitigation response. However, recent literature reports conflicting evidence about the actual impact of risk perception on agents' climate response. Rather, studies show no direct perception-response link; the relationship is mediated and moderated by many other factors and depends strongly on the context analysed. Moderating factors considered in the specialised literature include communication and social norms. Yet conflicting evidence has also been observed in the general public regarding the disparity between public communication about climate change and the lack of behavioural change. Likewise, doubts are raised about whether observance of social norms is a predominant factor influencing action on climate change. Moreover, disparate evidence shows that even agents highly engaged in mitigation (engagement being a mediating factor) may ultimately fail to respond.
Ideology and religion
In the United States, ideology is an effective predictor of party identification: conservatives are more prevalent among Republicans, and moderates and liberals among independents and Democrats. A shift in ideology is often associated with a shift in political views. For example, when the number of conservatives rose from 2008 to 2009, the number of individuals who felt that global warming was being exaggerated in the media also rose.
The 2006 BBC World Service poll asked about various policy options to reduce greenhouse gas emissions, including tax incentives for alternative energy research and development, taxes to encourage energy conservation, and reliance on nuclear energy to reduce fossil fuel use. The majority of those asked preferred tax incentives.
As of May 2016, polls have repeatedly found that a majority of Republican voters, particularly young ones, believe the government should take action to reduce carbon dioxide emissions.
After a country hosts the annual Conference of the Parties (COP), its climate legislation tends to increase, which contributes to policy diffusion. There is strong evidence of policy diffusion, in which a policy made in one place is influenced by the policy choices made elsewhere. This can have a positive effect on climate legislation.
Scientific analyses of international survey data show that right-wing orientation and individualism are strongly correlated to climate change denial in the US and other English-speaking countries, but much less in most non-English speaking nations.
Political ideologies are seen as one of the most consistent factors of the support or rejection of climate change public policies.
A person's political ideology is seen to affect their cognitive and emotional appraisals, which in turn shape how they see climate change and whether they believe its dangers will harm them.
Certain religious beliefs like end times theology have also been found to be correlated with climate change denial, though they are not as reliable as predictors of it as political conservatism is.
Income
Income has a strong influence on public opinion regarding policies such as building more fossil fuel power plants or relaxing environmental standards in industry. People with higher incomes in the states of Washington, Oregon, California and Idaho are more likely to support building new power plants and relaxing environmental standards than people with lower incomes.
Economic factors
The development of climate change policies is influenced by economic conditions, which have long been seen as playing an important role in shaping political behavior. Economic issues and environmental issues are often seen as a trade-off, since helping one is believed to harm the other. Accordingly, the environment is usually seen as a less pressing problem than the economy.
Visibility
Personal experience and noticing weather changes due to climate change are likely to motivate people to find solutions and act on them. After experiencing crop failures due to dry spells in Nepal, citizens were more likely to find and adopt adaptive strategies to reduce the vulnerability they face.
Outside of studying the differences in perception of climate change in large geographic areas, researchers have studied the effects of visibility to the individual citizen. In the scientific and academic community, there is an ongoing debate about whether visibility or seeing the effects of climate change literally with one's own eyes is helpful. Though some scientists are dismissive of anecdotal evidence, direct accounts have been studied to better reach local communities and understand their perception of climate change. Climate solutions presented to the public and the private sector have focused on bringing visual learning and practical everyday actions designed to promote further engagement such as community members conducting climate change tours and mapping the trees in their neighborhood.
Risk perception, as opposed to risk assessment, was repeatedly evaluated in these smaller, local studies. In a 2018 study of those residing near the Everglades, a prominent wetland ecosystem in Florida, participation in outdoor recreational activities, along with the elevation of one's residence and its distance from the shoreline, affected support for environmental conservation policy. Frequent beach goers and other outdoor recreational enthusiasts concerned by changing sea levels were cited as likely potential mobilizers. Another 2018 study found that 56% of the recreational fishermen polled in the area said "being able to see other wildlife" was very or extremely important, and 60% reported being "very much concerned" about the health of the Everglades ecosystem. In key American cities, the visibility of water stress and/or proximity to bodies of water increased the strength of water conservation policy in that area. A perceived shrinking water supply, or flooding, can motivate public stances on climate change. However, in arid areas where water is less visible, this raises concerns of weaker policy in locales that truly need it.
Farmers in the Punjab region of Pakistan are now witnessing a significant decrease in rice production due to climate change. Those who rely on agriculture for their livelihood are the most concerned, based on a 2014 study of 450 households. More than half of those households adapted their farming to climate change.
Bodies of water and water scarcity, though very prominent concepts in this field of study, are not the only major factors when weighing the idea of visible climate change. For example, a 2021 study on the citizens' perception of geohazards was conducted in the Veneto region of northeastern Italy as part of the European Project RESPONSe (Interreg Italy–Croatia). Younger people were shown to be more invested in individual environmental impact versus older adults who were concerned about geohazards. The study was split between those who lived in the hinterlands and low coastal areas. Those shown living in the hinterlands were more inclined to be wary of geohazards and their risks. As those areas were said to be more susceptible to natural disasters, the study highlighted a larger awareness of natural hazards by those who historically are more vulnerable due to their proximity. While residents in general were aware due to their closeness to water sources, research also implies that there is translation needed between the framing of climate change and the immediate impacts to their living area for those who did not live in particularly affected areas.
There are other factors in visibility for the individual and personally witnessing climate change. While education has been mentioned and studied as a factor, the subject matter of one's studies has also been investigated as a factor of visibility. In a 2021 study of Portuguese public higher education institutions, students in the natural and environmental sciences were more inclined toward environment-friendly practices such as recycling, and more willing to work for lower salaries at companies that commit to climate change action. Students in the sciences and engineering majors were the least likely to do so. While this result can be attributed partly to initial interest in those areas, most students were said to be concerned about climate change and said that more material about climate change needed to be included in their institution's curriculum. Younger students were more likely to be extremely concerned, although the authors speculated this to be a product of greater social media literacy.
Social media
Across different cultures and languages, the use of social media as a news source is associated with lower levels of climate skepticism. A particular dynamic of social media discussion of climate change is the platform it provides for direct engagement by activists. In a study of the comments sections on YouTube videos relating to climate change, for instance, a core group of users, both climate activists and skeptics, appeared repeatedly across these comments sections, with the majority taking a climate activist standpoint. Although often criticised for reinforcing rather than challenging users' views, social media has also been shown to have a role in cognitive reflection. A study of fora on Reddit highlighted that "while some communities are dominated by particular ideological viewpoints, others are more suggestive of deliberative debate."
Voicing believed wrongdoings
People can find motivation to act in the climate change movement when acting to express disagreement with decisions made by a higher power. At a 2017 Earth Day march, scientists and nonscientists alike joined to speak up to the Trump administration about its actions regarding climate change. In addition, people felt motivated to join the march to defend the use of science for the benefit of the community and the public good.
Understanding of scientific consensus
A scientific consensus on climate change exists, as recognized by national academies of science and other authoritative bodies. However, research has identified substantial geographical variation in the public's understanding of the scientific consensus.
There are marked differences between the opinion of scientists and that of the general public. A 2009 poll in the US found "[w]hile 84% of scientists say the earth is getting warmer because of human activity such as burning fossil fuels, just 49% of the public agrees". A 2010 poll in the UK for the BBC showed "Climate scepticism on the rise". Robert Watson found this "very disappointing" and said "We need the public to understand that climate change is serious so they will change their habits and help us move towards a low carbon economy."
A poll in 2009 regarding the issue of whether "some scientists have falsified research data to support their own theories and beliefs about global warming" showed that 59% of Americans believed it "at least somewhat likely", with 35% believing it was "very likely".
A 2018 study found that individuals were more likely to accept that global temperatures were increasing if they were shown the information in a chart rather than in text.
Media coverage
The popular media in the U.S. gives greater attention to skeptics relative to the scientific community as a whole, and the level of agreement within the scientific community has not been accurately communicated. US popular media coverage differs from that presented in other countries, where reporting is more consistent with the scientific literature. Some journalists attribute the difference to climate change denial being propagated, mainly in the US, by business-centered organizations employing tactics worked out previously by the US tobacco lobby. However, one study suggests that these tactics are less prominent in the media and that the public instead draws its opinions on climate mainly from the cues of political party elites.
The efforts of Al Gore and other environmental campaigns have focused on the effects of global warming and have managed to increase awareness and concern, but despite these efforts, as of 2007, the number of Americans believing humans are the cause of global warming was holding steady at 61%, and those believing that the popular media was understating the issue remained at about 35%. Between 2010 and 2013, the number of Americans who believe the media under-reports the seriousness of global warming has been increasing, and the number who think media overstates it has been falling. According to a 2013 Gallup US opinion poll, 57% believe global warming is at least as bad as portrayed in the media (with 33% thinking media has downplayed global warming and 24% saying coverage is accurate). Less than half of Americans (41%) think the problem is not as bad as media portrays it.
Another cause of climate change denial may be weariness from overexposure to the topic: some polls suggest that the public may have been discouraged by extremism when discussing the topic, while other polls show 54% of U.S. voters believe that "the news media make global warming appear worse than it really is."
A study published in PLOS One in 2024 found that even a single repetition of a claim was sufficient to increase the perceived truth of both climate science-aligned claims and climate change skeptic/denial claims—"highlighting the insidious effect of repetition". This effect was found even among climate science endorsers.
Impacts of public opinion on politics
Public opinion matters for climate change policy because governments need willing electorates and citizens in order to implement policies that address climate change. Further, when perceptions of climate change differ between the populace and governments, communicating risk to the public becomes problematic. Finally, a public that is not aware of the issues surrounding climate change may resist or oppose climate change policies, which is of considerable importance to politicians and state leaders.
Public support for action to forestall global warming is as strong as public support has been historically for many other government actions; however, it is not "intense" in the sense that it overrides other priorities.
A 2017 journal article said that shifts in public opinion in the direction of pro-environmentalism strongly increased the adoption of renewable energy policies in Europe. A 2020 journal article said that countries in which more people believe in human-made climate change tend to have higher carbon prices.
According to a 2011 Gallup poll, the proportion of Americans who believe that the effects of global warming have begun or will begin in a few years rose to a peak in 2008 and then declined, and a similar trend was found regarding the belief that global warming is a threat to their lifestyle within their lifetime. Concern over global warming often drops during economic downturns and national crises such as 9/11, as Americans prioritize the economy and national security over environmental concerns. However, the drop in concern in 2008 is unique compared to other environmental issues. Considered in the context of environmental issues, Americans consider global warming a less critical concern than the pollution of rivers, lakes, and drinking water; toxic waste; fresh water needs; air pollution; damage to the ozone layer; and the loss of tropical rain forests. However, Americans prioritize global warming over species extinction and acid rain issues. Since 2000 the partisan gap has grown as Republican and Democratic views diverge.
By region and country
Climate change opinion is the aggregate of public opinion held by the adult population. Cost constraints often restrict surveys to sampling only one or two countries from each continent or to focusing on only one region. Because of differences in questions, wording, and methods, it is difficult to reliably compare results or to generalize them to opinions held worldwide.
In 2007–2008, the Gallup Poll surveyed individuals from 128 countries in the first comprehensive study of global opinions. The Gallup Organization aggregated opinion from the adult population aged fifteen and older, through telephone or personal interviews, in both rural and urban areas except where the safety of interviewers was threatened and on sparsely populated islands. Personal interviews were stratified by population size or geography, and cluster sampling was achieved through one or more stages. Although error bounds vary, they were all below ±6% with 95% confidence.
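The quoted error bounds follow from standard survey arithmetic. The sketch below is a minimal illustration, not part of the Gallup study: it computes an approximate 95% margin of error for a sample proportion under simple random sampling (a cluster design such as Gallup's typically inflates this somewhat), using a hypothetical proportion and sample size.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample proportion p
    measured on n respondents under simple random sampling."""
    return z * math.sqrt(p * (1.0 - p) / n)

# Hypothetical example: 61% awareness measured in a sample of 1,000 adults.
p, n = 0.61, 1000
print(f"estimate: {p:.0%} +/- {margin_of_error(p, n):.1%}")  # about +/- 3 points
```

Under this approximation the worst case (p = 0.5) only reaches the ±6% ceiling mentioned above when a country sample drops to roughly 270 respondents, which is one way to read the bound.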
Weighting countries to a 2008 World Bank population estimate, 61% of individuals worldwide were aware of global warming, developed countries more aware than developing, with Africa the least aware. The median of people perceiving it as a threat was 47%. Latin America and developed countries in Asia led the belief that climate change was a result of human activities, while Africa, parts of Asia and the Middle East, and countries from the Former Soviet Union led in the opposite. Awareness often translates to concern, although of those aware, individuals in Europe and developed countries in Asia perceived global warming as a greater threat than others.
In January 2021, the UNDP worked with Oxford University to release the world's largest survey of public opinion on climate change. It surveyed 50 countries, spanning all inhabited regions, and a majority of the world's population. Its finding suggested a growing concern for climate change. Overall, 64% of respondents believed climate change was an emergency. This belief was high among all regions, the highest being Western Europe and North America at 72%, and the lowest being Sub-Saharan Africa at 61%. It also identified a link between average income and concern for climate change. In the high income countries, 72% believed it was an emergency. This was 62% for middle income countries and 58% for low income countries. It asked people whether or not they supported 18 key policies over 6 areas, ranging from economy to transport. There was general support for all policy suggestions. For example, 8 of the 10 countries with the highest emissions saw a majority of respondents favor more renewable energy. The general impression was that the public wanted more policies to be implemented, and demanded more from policy makers. Overall, 59% of respondents who believed climate change was an emergency said the world should do 'everything necessary and urgently in response' to the crisis. Conversely, there was remarkably little support among respondents for no policies at all, with the highest being Pakistan at only 5%. The report indicated a widespread public awareness, concern, and desire for greater action among all regions of the world.
Using the Pew Research Center's 2015 Global Attitudes Survey, the journal article entitled Cross-national variation in determinants of climate change concern found that the most consistent predictor of concern about climate change in the 36 countries surveyed was 'commitment to democratic principles'. Believing that free elections, freedom of religion, equal rights for women, freedom of speech, freedom of the press, and lack of Internet censorship were 'very' rather than 'somewhat' important increased the probability of believing climate change is a very serious problem by 7 to 25 percentage points in 26 of the 36 nations surveyed. It was the strongest predictor in 17.
Global North compared to Global South
Awareness about climate change is higher in developed countries than in developing countries. A large majority of people in Indonesia, Pakistan and Nigeria did not know about climate change in 2007, particularly in Muslim-majority countries. There is often awareness about environmental changes in developing countries, but the framework for understanding it is limited. In both developing and developed countries, people similarly believe that poor countries have a responsibility to act on climate change. Since the 2009 Copenhagen summit, concern over climate change in wealthy countries has diminished. In 2009, 63% of people in OECD member states considered climate change to be "very serious", but by 2015, only 48% did. Support for national leadership addressing climate change has also diminished. Of the 21 countries surveyed in GlobeScan's 2015 survey, Canada, France, Spain and the UK are the only ones with a majority of the population supporting further action by their leaders to meet Paris climate accord emission targets. While concern and desire for action has dropped in developed countries, awareness is higher; since 2000, twice as many people connect extreme weather events to human-caused climate change.
While climate change affects the entire planet, opinions about its effects vary significantly among regions of the world. The Middle East has one of the lowest rates of concern in the world, especially compared to Latin America. Europe and Africa have mixed views on climate change but lean towards action by a significant degree. Europeans focus substantially on climate change in comparison to United States residents, who are less concerned than the global median, even as the United States remains the second biggest emitter in the world. Droughts and water shortages are among the biggest fears about the impacts of climate change, especially in Latin America and Africa. Developed countries in Asia have levels of concern about climate change similar to Latin America, which has one of the highest rates of concern. This is surprising as developing countries in Asia have levels of worry similar to the Middle East, one of the areas with the lowest levels. Large emitters such as China usually ignore issues surrounding climate change, as people in China have very low levels of concern about it. The only significant exceptions to this tendency by large emitters are Brazil and India. India is the third-biggest while Brazil is the eleventh-biggest emitter in the world; both have high levels of concern about climate change, similar to much of Latin America.
Africa
People in Africa are relatively concerned about climate change compared to the Middle East and parts of Asia. However, they are less concerned than most of Latin America and Europe. In 2015, 61% of people in Africa considered climate change to be a very serious problem, and 52% believe that climate change is harming people now. While 59% of Africans were worried about droughts or water shortages, only 16% were concerned about severe weather, and 3% are concerned about rising sea levels. By 2007, countries in Sub-Saharan Africa were especially troubled about increasing desertification even as they account for .04% of global carbon dioxide emissions. In 2011, concern in Sub-Saharan Africa over climate change dropped; only 34% of the population considered climate change to be a "very" or "somewhat serious issue". Even so, according to the Pew Research Center 2015 Global Attitudes Survey, some countries were more concerned than others. In Uganda, 79% of people, 68% in Ghana, 45% in South Africa and 40% in Ethiopia considered climate change to be a very serious problem.
In 2022, 51% of African respondents to a survey said climate change is one of the biggest problems they are facing, while 41% named inflation and 39% named access to health care. 76% responded that they prefer renewable energy as the main source of energy, and 3 out of 4 respondents want renewable energy to be prioritized, while 13% prefer fossil fuels.
Latin America
Latin America has a higher percentage of people concerned with climate change than other regions of the world. According to the Pew Research Center 74% consider climate change to be a serious problem and 77% say that it is harming people now, 20 points higher than the global median. The same study showed that 63% of people in Latin America are very concerned that climate change will harm them personally. When looked at more specifically, people in Mexico and Central America are the most worried, with 81.5% believing that climate change is a very serious issue. South America is slightly less anxious at 75% and the Caribbean, at the relatively high rate of 66.7%, is the least concerned. Brazil is an important country in global climate change politics because it is the eleventh largest emitter and unlike other large emitter countries, 86% consider global warming to be a very serious problem. Compared to the rest of the world, Latin America is more consistently concerned with high percentages of the population worried about climate change. Further, in Latin America, 67% believe in personal responsibility for climate change and say that people will have to make major lifestyle modifications.
Europe
Europeans have a tendency to be more concerned about climate change than much of the world, with the exception of Latin America. However, there is a divide between Eastern Europe, where people are less worried about climate change, and Western Europe. A global climate survey by the European Investment Bank showed that climate is the number one concern for Europeans. Most respondents said they were already feeling the effects of climate change. Many people believed climate change can still be reversed with 68% of Spanish respondents believing it can be reversed and 80% seeing themselves as part of the solution.
In Europe, between 88% and 97% of people feel that climate change is happening, and similar ranges agree that climate change is caused by human activity and that its impacts will be bad. Eastern European countries are generally slightly less likely to believe in climate change or its dangers, with 63% saying it is very serious, 24% considering it fairly serious and only 10% saying it is not a serious problem. When asked if they feel a personal responsibility to help reduce climate change, on a scale of 0 (not at all) to 10 (a great deal), Europeans respond with an average score of 5.6. Western Europeans are closer to 7, while Eastern European countries respond with an average of less than 4. When asked if they are willing to pay more to address climate change, 49% of Europeans are willing, yet only 9% have already switched to a greener energy supply. While a large majority of Europeans believe in the dangers of climate change, their feelings of personal responsibility to deal with the issue are much more limited, especially in terms of actions that could already have been taken, such as switching to greener energy as discussed above. 90% of Europeans interviewed for the European Investment Bank Climate Survey 2019 believe their children will be affected by climate change in their everyday lives, and 70% are willing to pay an extra tax to fight climate change.
According to the European Investment Bank's climate survey from 2022, the majority of Europeans believe that the conflict in Ukraine encourages them to conserve energy and lessen their reliance on fossil fuels, with 66% believing that the invasion's effects on the price of oil and gas should prompt actions to speed up the transition to a greener economy. This opinion is shared by respondents from Britain and China, while Americans are divided.
Many people believe that the government should take a role in fostering individual behavioral changes to engage in climate change mitigation. Two-thirds of Europeans (66%) support harsher government measures requiring people to adjust their behavior in order to combat climate change (72% of respondents under 30 would welcome such restrictions).
Several survey studies have found different types of opinions about climate change in society. For example, scholars have described "Global Warming's Five Germanys" or "Global Warming's Six Americas". For Germany, these types include Alarmed Actives, Convinced, Cautious, Disengaged, and Dismissive.
Asia/Pacific
Asia and the Pacific have a tendency to be less concerned about climate change, except small island states, with developing countries in Asia being less concerned than developed countries. In Asia and the Pacific, around 45% of people believe that climate change is a very serious problem and similarly 48% believe that it is harming people now. Only 37% of people in Asia and the Pacific are very concerned that climate change will harm them personally. There is a large gap between developing Asia and developed Asia. Only 31% of developing Asia considers global warming to be a "very" or "somewhat" serious threat and 74% of developed Asia considers global warming to be a serious threat. It could be argued that one reason for this is that people in more developed countries in Asia are more educated on the issues, especially given that developing countries in Asia do face significant threats from climate change. The most relevant views on climate change are those of the citizens in the countries that are emitting the most. For example, in China, the world's largest emitter, 68% of Chinese people are satisfied with their government's efforts to preserve the environment. And in India, the world's third largest emitter, 77% of Indian people are satisfied with their country's efforts to preserve the environment. 80% of Chinese citizens interviewed in the European Investment Bank Climate Survey 2019 believe climate change is still reversible, 72% believe their individual behaviour can make a difference in addressing climate change.
India
A research team led by Yale University's Anthony Leiserowitz conducted an audience segmentation analysis for India in 2011, "Global Warming's Six Indias". The study broke down the Indian public into six distinct audience groups based on climate change beliefs, attitudes, risk perceptions and policy preferences: informed (19%), experienced (24%), undecided (15%), unconcerned (15%), indifferent (11%), and disengaged (16%). While the informed are the most concerned about and aware of climate change and its threats, the disengaged do not care or have no opinion. The experienced believe it is happening or have felt the effects of climate change and can identify it when provided with a short description. The undecided, unconcerned and indifferent all have varying levels of worry, concern and risk perception.
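Audience segmentation of this kind groups respondents by the similarity of their answers across many survey items. The sketch below is only an illustration of the general idea, using k-means clustering on made-up response data; it is not the method or the data of the Yale study, which derived its six segments from a more elaborate analysis of beliefs, attitudes, risk perceptions and policy preferences.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical data: 500 respondents answering 8 survey items on a 1-5 scale
# (belief that warming is happening, perceived risk, policy support, etc.).
responses = rng.integers(1, 6, size=(500, 8)).astype(float)

# Partition respondents into six audience segments by answer similarity.
segmentation = KMeans(n_clusters=6, n_init=10, random_state=0).fit(responses)

for segment in range(6):
    members = responses[segmentation.labels_ == segment]
    print(f"segment {segment}: {len(members)} respondents, "
          f"mean item scores {members.mean(axis=0).round(2)}")
```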
The same survey resulted in a different study, "Climate Change in the Indian Mind", showing that 41% of respondents had either never heard of the term global warming or did not know what it meant, while 7% claimed to know "a lot" about global warming. When provided with a description of global warming and what it might entail, 72% of the respondents agreed that it was happening. The study revealed that 56% of respondents perceived it to be caused by human activities, while 31% perceived it to be caused primarily by natural changes in the environment. 54% agreed that hot days had become more frequent in their local area, compared with 21% of respondents perceiving the frequency of severe storms as having increased. A majority of respondents (65%) perceived a severe drought or flood as having a medium to large impact on their lives. These impacts include effects on drinking water, food supply, health, income and their community. Higher education levels tended to correspond with higher levels of concern or worry regarding global warming and its effects on respondents personally.
41% of the respondents agreed that the government should be doing more to address issues stemming from climate change, with the most support (70%) for a national program to elevate climate literacy. 53% of respondents agreed that protecting the environment is important even at a cost to economic growth, highlighting the tendency of respondents to display egalitarian over individualistic values. Personal experience with climate change risks is an important predictor of risk perception and policy support. Coupled with trust in different sources, mainly scientists and environmental organizations, higher media usage and attention to news are associated with greater policy support, public engagement and belief in global warming.
Pakistan
Middle East
While the increasing severity of droughts and other dangerous realities are and will continue to be a problem in the Middle East, the region has one of the smallest rates of concern in the world. 38% believe that climate change is a very serious problem and 26% believe that climate change is harming people now. Of the four Middle Eastern countries polled in a Pew Global Study, on what is their primary concern, Israel, Jordan, and Lebanon named ISIS, and Turkey stated United States encroachment. 38% of Israel considers climate change to be a major threat to their country, 40% of Jordan, 58% of Lebanon and 53% of Turkey. This is compared to relatively high numbers of residents who believe that ISIS is a major threat to their country ranging from 63% to 97%. In the poll, 38% of the Middle East are concerned about drought and 19% are concerned about long periods of unusually hot weather. 42% are satisfied with their own country's current efforts to preserve the environment.
North America
North America has mixed perceptions of climate change, ranging from Mexico and Canada, which are both more concerned, to the United States, the world's second largest emitter, which is less concerned. Mexico is the most concerned about climate change of the three countries in North America: 90% consider climate change to be a very serious problem and 83% believe that climate change is harming people substantially right now. Canadians are also seriously concerned: 20% are extremely concerned, 30% are definitely concerned, 31% are somewhat concerned and only 19% are not very or not at all concerned about climate change. The United States, the largest emitter in North America and the second largest emitter in the world, has the lowest degree of concern about climate change in North America. While 61% of Americans say they are concerned about climate change, that is 30 percentage points lower than Mexico and 20 points lower than Canada. 41% believe that climate change could affect them personally. Nonetheless, 70% of Americans believe that environmental protections are more important than economic growth, according to a Yale climate opinion study. 76% of US citizens interviewed for the European Investment Bank Climate Survey 2019 believe developed countries have a responsibility to help developing countries address climate change.
United States
In the U.S. global warming is nowadays often a partisan political issue. Republicans tend to oppose action against a threat that they regard as unproven, while Democrats tend to support actions that they believe will reduce global warming and its effects through the control of greenhouse gas emissions.
In the United States, support for environmental protection was relatively non-partisan in the twentieth century. Republican Theodore Roosevelt established national parks whereas Democrat Franklin Delano Roosevelt established the Soil Conservation Service. Republican Richard Nixon was instrumental in founding the United States Environmental Protection Agency, and tried to install a third pillar of NATO dealing with environmental challenges such as acid rain and the greenhouse effect. Daniel Patrick Moynihan was Nixon's NATO delegate for the topic.
This non-partisanship began to erode during the 1980s, when the Reagan administration described environmental protection as an economic burden. Views over global warming began to seriously diverge between Democrats and Republicans during the negotiations that led up to the creation of the Kyoto Protocol in 1997. In a 2008 Gallup poll of the American public, 76% of Democrats and only 41% of Republicans said that they believed global warming was already happening. The opinions of political elites, such as members of Congress, tend to be even more polarized.
One public survey out of Yale University concluded that there are "Six Americas" with respect to public opinion on climate change (or global warming, per the survey). These "Six Americas" are:
Alarmed: These individuals are convinced that climate change is happening, human-caused, and a serious threat. They actively support societal and political actions to address it.
Concerned: While not as convinced or engaged as the Alarmed group, the Concerned individuals still see climate change as a problem and are supportive of actions to address it.
Cautious: This group is uncertain about whether climate change is happening, and if it is, they believe it is a distant threat. They are less likely to support climate-related policies.
Disengaged: The Disengaged have little awareness or understanding of climate change and are not actively paying attention to the issue.
Doubtful: This group is skeptical about climate change, believing that if it is happening, it is due to natural causes and is not a serious concern.
Dismissive: The Dismissive individuals are actively convinced that climate change is not happening or is not a concern. They often reject the scientific consensus on the issue.
The "Six Americas" framework has aided in developing a fuller understanding of climate change perception, behavior, adaptation and belief. In 2016, Shirley Fiske published a report that built on the "Six Americas" framework in order to identify the core cultural models through which Maryland farmers relate to and form opinions about climate change.
The two cultural models Fiske developed are:
"Climate change as natural change": The farmers studied interpreted changes in climate as natural phenomena
"Climate change as environmental change": From this perspective, climate change is a human-driven environmental problem, which requires human action to reverse or mitigate
References
Further reading
External links
Climate change and society
Environmental education
Public opinion | Public opinion on climate change | Environmental_science | 12,097 |
24,151,749 | https://en.wikipedia.org/wiki/C13H21NO3S | {{DISPLAYTITLE:C13H21NO3S}}
The molecular formula C13H21NO3S (molar mass: 271.38 g/mol) may refer to:
2C-T-13 (2,5-dimethoxy-4-(β-methoxyethylthio)phenethylamine)
HOT-7
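The molar mass quoted for this formula above can be reproduced by summing standard atomic weights. A minimal sketch follows; the atomic weights are rounded IUPAC values, so the final digit is approximate.

```python
# Minimal sketch: reproduce the quoted molar mass of C13H21NO3S from rounded
# standard atomic weights (g/mol).
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "S": 32.06}

def molar_mass(composition):
    """Sum of (atomic weight x atom count) over the elements in the formula."""
    return sum(ATOMIC_WEIGHTS[element] * count for element, count in composition.items())

mass = molar_mass({"C": 13, "H": 21, "N": 1, "O": 3, "S": 1})
print(f"{mass:.2f} g/mol")  # roughly 271.38 g/mol, consistent with the value above
```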
Molecular formulas | C13H21NO3S | Physics,Chemistry | 84 |
61,183,442 | https://en.wikipedia.org/wiki/C18H19N |
The molecular formula C18H19N (molar mass: 249.35 g/mol, exact mass: 249.1517 u) may refer to:
Benzoctamine
4-Cyano-4'-pentylbiphenyl
Alpha-D2PV
Molecular formulas | C18H19N | Physics,Chemistry | 75 |
22,813,653 | https://en.wikipedia.org/wiki/Douglas%20West%20%28mathematician%29 | Douglas Brent West is a professor of graph theory at the University of Illinois at Urbana-Champaign. He received his Ph.D. from the Massachusetts Institute of Technology in 1978; his advisor was Daniel Kleitman. He is the "W" in G. W. Peck, a pseudonym for a group of six mathematicians that includes West. He is the editor of the journal Discrete Mathematics.
Selected work
Books
Introduction to Graph Theory - Second edition, Douglas B. West. Published by Prentice Hall 1996, 2001.
Mathematical Thinking: Problem-Solving and Proofs Second edition, John P D'Angelo and Douglas West. Published by Prentice Hall 1999.
Research work
Spanning trees with many leaves, DJ Kleitman, DB West - SIAM Journal on Discrete Mathematics, 1991.
Class of Solutions to the Gossip Problem, Part II, DB West - Discrete Mathematics, 1982.
The interval number of a planar graph: three intervals suffice, ER Scheinerman, DB West - Journal of combinatorial theory. Series B, 1983.
See also
Erdős–Gallai theorem
Necklace splitting problem
References
External links
West's home page at UIUC
Graph theorists
Living people
Massachusetts Institute of Technology School of Science alumni
University of Illinois Urbana-Champaign faculty
1953 births
20th-century American mathematicians
21st-century American mathematicians | Douglas West (mathematician) | Mathematics | 260 |
5,739,636 | https://en.wikipedia.org/wiki/Algebraically%20compact%20group | In mathematics, in the realm of abelian group theory, a group is said to be algebraically compact if it is a direct summand of every abelian group containing it as a pure subgroup.
Equivalent characterizations of algebraic compactness:
The reduced part of the group is Hausdorff and complete in the Z-adic topology.
The group is pure injective, that is, injective with respect to exact sequences where the embedding is as a pure subgroup (this condition is spelled out in the sketch after this list).
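As a minimal sketch of the pure-injectivity condition, in standard notation (not tied to any particular source):

```latex
% A is pure-injective: maps into A extend along pure embeddings.
A \text{ is pure-injective} \iff
\forall\ \text{pure embeddings } \iota : B \hookrightarrow C
\ \text{and all}\ f : B \to A,\ \ \exists\, g : C \to A \ \text{with}\ g \circ \iota = f.
```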
Relations with other properties:
A torsion-free group is cotorsion if and only if it is algebraically compact.
Every injective group is algebraically compact.
Ulm factors of cotorsion groups are algebraically compact.
External links
On endomorphism rings of Abelian groups
Abelian group theory
Properties of groups | Algebraically compact group | Mathematics | 166 |
32,789,895 | https://en.wikipedia.org/wiki/Ronidazole | Ronidazole is an antiprotozoal agent used in veterinary medicine for the treatment of histomoniasis and swine dysentery as well as Trichomonas gallinae, hexamitosis, Giardia, and Cochlosoma in all aviary birds and pigeons. It may also have use for the treatment of Tritrichomonas foetus infection in cats and for the treatment of Clostridioides difficile infection in humans.
References
Antiprotozoal agents
Carbamates
Nitroimidazoles | Ronidazole | Biology | 118 |
55,553,728 | https://en.wikipedia.org/wiki/Callistosporium%20elegans | Callistosporium elegans is a species of fungus known from São Tomé and Príncipe.
References
Callistosporium elegans at mycobank
Agaricales
Fungi described in 2017
Biota of São Tomé and Príncipe
Fungus species | Callistosporium elegans | Biology | 56 |
23,385,308 | https://en.wikipedia.org/wiki/Model%20photosphere | The photosphere denotes those solar or stellar surface layers from which optical radiation escapes. These stellar outer layers can be modeled by different computer programs. Often, calculated models are used, together with other programs, to calculate synthetic spectra for stars. For example, in varying the assumed abundance of a chemical element, and comparing the synthetic spectra to observed ones, the abundance of that element in that particular star can be determined.
As computers have evolved, the complexity of the models has deepened, becoming more realistic in including more physical data and excluding more of the simplifying assumptions. This evolution of the models has also made them applicable to different kinds of stars.
Common assumptions and computational methods
Local Thermodynamic Equilibrium (LTE)
This assumption (LTE) means that each local computational volume is treated as being in thermodynamic equilibrium:
The inflow of radiation is determined by a blackbody spectrum set by the local temperature only. This radiation then interacts with the matter inside the volume.
The number of atoms or molecules occupying different excited energy states is determined by the Maxwell–Boltzmann distribution. This distribution is determined by the atomic excitation energies, and the local temperature.
The number of atoms in different ionization states is determined by the Saha equation. This distribution is determined by the atomic ionization energy, and the local temperature. (This relation and the excitation relation above are written out in the sketch after this list.)
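As a minimal sketch in standard textbook notation (not tied to any particular modeling code), the two relations referred to above are the Boltzmann excitation equation and the Saha ionization equation:

```latex
% Boltzmann excitation: population ratio of two bound levels i, j
\frac{n_j}{n_i} = \frac{g_j}{g_i}\,\exp\!\left(-\frac{E_j - E_i}{kT}\right)

% Saha ionization: number densities of successive ionization stages r, r+1
\frac{n_{r+1}\, n_e}{n_r} =
  \frac{2\,U_{r+1}(T)}{U_r(T)}
  \left(\frac{2\pi m_e k T}{h^2}\right)^{3/2}
  \exp\!\left(-\frac{\chi_r}{kT}\right)
```

Here n_i and n_j are level populations with statistical weights g_i, g_j and excitation energies E_i, E_j; n_r, n_{r+1} and U_r, U_{r+1} are the number densities and partition functions of successive ionization stages; n_e is the electron density, chi_r the ionization energy of stage r, and T the local temperature.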
Plane parallel and spherical atmospheres
A common simplifying assumption is that the atmosphere is plane parallel, meaning that physical variables depend on one space coordinate only: the vertical depth (i.e., one assumes that we see the stellar atmosphere "head-on", ignoring the curved portions towards the limbs). In stars where the photosphere is relatively thick compared to the stellar diameter, this is not a good approximation and an assumption of a spherical atmosphere is more appropriate.
Expanding atmospheres
Many stars lose mass in the form of a stellar wind. Especially for stars which are very hot (photospheric temperatures > 10,000 Kelvin) and very luminous, these winds can be so dense that major parts of the emergent spectrum are formed in an "expanding atmosphere", i.e. in layers that are moving outward with a high speed that can reach a few thousand km/s.
Hydrostatic equilibrium
This means that the star is currently not undergoing any radical changes in structure involving large scale pulsations, flows or mass loss.
Mixing length and microturbulence
This assumption means that the convective motions in the atmosphere are described by the mixing-length theory, modeled as parcels of gas rising and disintegrating. To account for some of the small-scale effects in convective motions, a parameter called microturbulence is often used. The microturbulence corresponds to the motions of atoms or molecules on scales smaller than the photon mean free path.
Different methods of treating opacity
To fully model the photosphere one would need to include every absorption line of every element present. This is not feasible because it would be computationally extremely demanding, and also because not all spectra are fully known. Therefore, one needs to simplify the treatment of opacity. Methods used in photospheric models include:
Opacity sampling (OS)
Opacity sampling means that the radiative transfer is evaluated for a number of optical wavelengths spread across the interesting parts of the spectrum. Although the model would improve with more frequencies included, opacity sampling uses as few as practical, to still get a realistic model, thereby minimizing calculation time.
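A minimal illustrative sketch of the sampling idea follows. Real codes use a fixed, carefully chosen set of sampling frequencies together with full radiative transfer; the random choice of wavelengths and the toy opacity function here are assumptions made only to keep the example short.

```python
import random

def sampled_mean_opacity(opacity_at, wavelengths, n_samples, seed=0):
    """Estimate the mean opacity over a fine wavelength grid by evaluating
    the (expensive) opacity function at only a subset of sampled points."""
    random.seed(seed)
    sample = random.sample(wavelengths, min(n_samples, len(wavelengths)))
    return sum(opacity_at(w) for w in sample) / len(sample)

def toy_opacity(wavelength_nm):
    """Toy opacity: a smooth continuum plus a few narrow absorption 'lines'."""
    continuum = 1.0 / wavelength_nm
    lines = sum(5.0 for center in (400.0, 520.0, 656.0) if abs(wavelength_nm - center) < 0.5)
    return continuum + lines

grid = [300.0 + 0.01 * i for i in range(70001)]   # 300-1000 nm in 0.01 nm steps
print(sampled_mean_opacity(toy_opacity, grid, n_samples=2000))  # cheap estimate of the full-grid mean
```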
Opacity distribution functions (ODF)
In using opacity distribution functions, the spectra are divided into subsections, within which the absorption probabilities are rearranged and simplified to one smooth function. Similar to the opacity sampling method, this is improved by adding more intervals but at the cost of prolonging the computation time.
Different models
There are several different computer codes available modeling stellar photospheres. Some of them are described here and some of them are linked under "External links" below.
ATLAS
The ATLAS code was originally presented in 1970 by Robert Kurucz using the assumption of LTE and hydrostatic and plane parallel atmospheres. Since the source code is publicly available on the web, it has been amended by different persons numerous times over the years and nowadays exists in many versions. There are both plane parallel and spherical versions as well as those using opacity sampling or opacity distribution functions.
MARCS
The MARCS (Model Atmospheres in Radiative and Convective Scheme) code was originally presented in 1975 by Bengt Gustafsson, Roger Bell and others. The original code simulated stellar spectra assuming the atmosphere to be in hydrostatic equilibrium, plane parallel, with convection described by mixing-length theory. The evolution of the code has since involved better modeling of the line opacity (opacity sampling instead of opacity distribution functions), spherical modeling and including an increasing number of physical data.
Nowadays a large grid of different models is available on the web.
PHOENIX
The PHOENIX code is "risen from the ashes" of an earlier code called SNIRIS and mainly developed by Peter Hauschildt (Hamburger Sternwarte) from 1992 onwards; it is regularly updated and made available on the web. It runs in two different spatial configuration modes: the "classic" one-dimensional mode, assuming spherical symmetry, and the three-dimensional mode. It allows for calculations for many different astrophysical objects, i.e. supernovae, novae, stars and planets. It considers scattering and dust and allows for non-LTE computations over many atomic species, plus LTE over atoms and molecules.
PoWR
The PoWR (Potsdam Wolf-Rayet) code is designed for expanding stellar atmospheres, i.e. for stars with a stellar wind. It has been developed since the 1990s by Wolf-Rainer Hamann and collaborators at the Universität Potsdam (Germany) especially for simulating Wolf-Rayet stars, which are hot stars with very strong mass loss. Adopting spherical symmetry and stationarity, the program computes the occupation numbers of the atomic energy states, including the ionization balance, in non-LTE, and consistently solves the radiative transfer problem in the comoving frame. The stellar wind parameters (mass-loss rate, wind speed) can be specified as free parameters, or, alternatively, calculated consistently from the hydrodynamic equation.
As the PoWR code treats the static and expanding layers of the stellar atmosphere consistently, it is applicable to all types of hot stars. The code as such is not yet public, but large sets of models for Wolf-Rayet stars are available on the web.
3D hydrodynamical models
There are efforts to construct models not assuming LTE, and/or computing the detailed hydrodynamic motions instead of hydrostatic assumptions. These models are physically more realistic but also require more physical data such as cross sections and probabilities for different atomic processes. Such models are computationally rather demanding, and have not yet reached a stage of broader distribution.
Applications of Model Photospheres
Model Atmospheres, while interesting in their own right, are frequently used as part of input recipes and tools for studying other astrophysical problems.
Stellar Evolution
As a result of stellar evolution, changes in the internal structure of stars manifest themselves in the photosphere.
Synthetic Spectra
Spectral synthesis programs (e.g. MOOG) often use previously generated model photospheres to describe the physical conditions (temperature, pressure, etc.) through which photons must travel to escape the stellar atmosphere. Together with a list of absorption lines and an elemental abundance table, spectral synthesis programs generate synthetic spectra. By comparing these synthetic spectra to observed spectra of distant stars, astronomers can determine the properties (temperature, age, chemical composition, etc.) of these stars.
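A hedged sketch of the comparison step just described: given a function that produces a synthetic spectrum for a trial abundance, pick the abundance whose synthetic spectrum best matches the observed one in a least-squares sense. The "synthesizer" below is a stand-in (in practice the synthetic flux would come from a spectral synthesis code driven by a model photosphere and a line list), and all numbers are purely illustrative.

```python
def best_fit_abundance(observed_flux, synthesize, trial_abundances):
    """Return the trial abundance whose synthetic spectrum minimizes the
    sum of squared residuals against the observed spectrum."""
    def chi2(abundance):
        return sum((o - s) ** 2 for o, s in zip(observed_flux, synthesize(abundance)))
    return min(trial_abundances, key=chi2)

# Stand-in synthesizer: the depth of one absorption line scales with abundance.
line_profile = (0.0, 0.2, 0.45, 0.2, 0.0)
synthesize = lambda a: [1.0 - a * depth for depth in line_profile]

observed = [1.0, 0.8, 0.55, 0.8, 1.0]
print(best_fit_abundance(observed, synthesize, [0.5, 0.8, 1.0, 1.2]))  # -> 1.0
```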
See also
Stellar structure
References
Gray, 2005, The observation and analysis of stellar photospheres, Cambridge University Press
Gustafsson et al., 1975, A grid of Model Atmospheres for Metal-deficient Giant Stars I, Astronomy and Astrophysics 42, 407-432
Gustafsson et al., 2008, A grid of MARCS model atmospheres for late-type stars, Astronomy and Astrophysics 486, 951-970
Mihalas, 1978, Stellar atmospheres, W.H. Freeman & Co.
Plez, 2008, MARCS model atmospheres, Physica Scripta T133, 014003
Rutten, Radiative transfer in stellar atmospheres
Tatum, Stellar atmospheres
External links
The Kurucz 1993 models
Robert L. Kurucz
The MARCS model
Spectral Models Stars P.Coelho
The MULTI model
The Pandora model
The PHOENIX model
The Tlusty model
The PoWR models for Wolf-Rayet stars
A package of stellar atmospheres software
Collaborative computational project (CCP7)
The Cloudy model (models the light from dilute gas clouds rather than stars)
A list of synthetic spectra on the web
SPECTRUM - a stellar spectral synthesis program
MOOG - a different spectral synthesis program
Stellar astronomy | Model photosphere | Astronomy | 1,866 |
33,862,094 | https://en.wikipedia.org/wiki/Cancer%20survival%20rates | Cancer survival rates vary by the type of cancer, stage at diagnosis, treatment given and many other factors, including country. In general, survival rates are improving, although more so for some cancers than others. Survival can be measured in several ways; median life expectancy has advantages over other measures in terms of its meaning for the people involved, rather than as an epidemiological measure.
However, survival is currently often reported as a 5-year survival rate, which is the percentage of people who live at least five years after being diagnosed with cancer; relative survival rates compare people with cancer to people in the overall population.
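A minimal sketch of the two measures described above (the numbers are placeholders, not published statistics): the observed 5-year survival rate is the fraction of diagnosed patients alive at five years, while the relative survival rate divides that by the expected 5-year survival of a demographically matched general population.

```python
def five_year_survival_rate(alive_at_5_years, diagnosed):
    """Observed 5-year survival: fraction of diagnosed patients alive at 5 years."""
    return alive_at_5_years / diagnosed

def relative_survival_rate(observed_survival, expected_survival_general_population):
    """Relative survival: observed survival divided by the survival expected
    for a comparable group drawn from the general population."""
    return observed_survival / expected_survival_general_population

observed = five_year_survival_rate(alive_at_5_years=820, diagnosed=1000)              # 0.82
print(relative_survival_rate(observed, expected_survival_general_population=0.95))    # roughly 0.86
```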
Several types of cancer are associated with high survival rates, including breast, prostate, testicular and colon cancer. Brain and pancreatic cancers have much lower median survival rates which have not improved as dramatically over the last forty years. Indeed, pancreatic cancer has one of the worst survival rates of all cancers. Small cell lung cancer has a five-year survival rate of 4% according to Cancer Centers of America's Website. The American Cancer Society reports 5-year relative survival rates of over 70% for women with stage 0-III breast cancer with a 5-year relative survival rate close to 100% for women with stage 0 or stage I breast cancer. The 5-year relative survival rate drops to 22% for women with stage IV (metastatic) breast cancer.
In cancer types with high survival rates, incidence is usually higher in the developed world, where longevity is also greater. Cancers with lower survival rates are more common in developing countries. The highest cancer survival rates are in countries such as South Korea, Japan, Israel, Australia, and the United States.
Survival rate trends
In the United States there has been an increase in the 5-year relative survival rate between people diagnosed with cancer in 1975-1977 (48.9%) and people diagnosed with cancer in 2007-2013 (69.2%); these figures coincide with a 20% decrease in cancer mortality from 1950 to 2014. Due to innovation in emerging treatments and cancer prevention strategies, the U.S. cancer death rate declined from 208.3 per 100,000 people in 1982 to 152.6 per 100,000 in 2017.
Lung cancer
In males, researchers suggest that the overall reduction in cancer death rates is due in large part to a reduction in tobacco use over the last half century, estimating that the reduction in lung cancer caused by tobacco smoking accounts for about 40% of the overall reduction in cancer death rates in men and is responsible for preventing at least 146,000 lung cancer deaths in men during the time period 1991-2003.
Breast cancer
The most common cancer among women in the United States is breast cancer (123.7 per 100,000), followed by lung cancer (51.5 per 100,000) and colorectal cancer (33.6 per 100,000), but lung cancer surpasses breast cancer as the leading cause of cancer death among women. Researchers attribute the reduction in breast cancer mortality to improved treatment, including the increased use in adjuvant chemotherapy.
Prostate cancer
The National Institutes of Health (NIH) attributes the increase in the 5-year relative survival rate for prostate cancer (from 69% in the 1970s to 100% in 2006) to earlier screening and diagnosis, to the fact that men who participate in screening tend to be healthier and live longer than the average man, and to testing techniques that can detect slow-growing cancers before they become life-threatening.
Childhood cancer
The most common type of cancer among children and adolescents is leukemia, followed by brain and other central nervous system tumors. Survival rates for most childhood cancers have improved, with a notable improvement in acute lymphoblastic leukemia (the most common childhood cancer). Due to improved treatment, the 5-year survival rate for acute lymphoblastic leukemia has increased from less than 10% in the 1960s to about 90% during the time period 2003-2009.
Improvements in cancer therapy
The improvement in survival rates for many cancers in the last half century is due to improved understanding about the causes of cancer and the availability of new treatment options, which are continually evolving. Where surgery was previously the only option for treatment, cancer is now treated with radiation and chemotherapy, including combination chemotherapy that favors treatment with many drugs over just one. Availability and access to clinical trials has also led to more targeted therapy and improved knowledge of treatment efficacy. There are currently over 60,000 clinical trials related to cancer registered on ClinicalTrials.gov, so novel approaches to cancer treatment are continuing to be developed. The NCI lists over 100 targeted therapies that have been approved for the treatment of 26 different cancer types by the United States Food and Drug Administration.
See also
Cancer prevalence
Cancer survivor
References
External links
Cancer survival rates
SEER*Explorer (US)
National Cancer Institute (US)
Cancer Research UK One and five year survival for various cancers.
Cancer Society 2014 estimated US occurrence and mortality for major cancer types, and by state.
Oncology
Epidemiology
Cancer | Cancer survival rates | Environmental_science | 1,022 |
33,116,840 | https://en.wikipedia.org/wiki/Tissue%20Engineering%20and%20Regenerative%20Medicine%20International%20Society | Tissue Engineering and Regenerative Medicine International Society is an international learned society dedicated to tissue engineering and regenerative medicine.
Background
Regenerative medicine involves processes of replacing, engineering or regenerating human cells, tissues or organs to restore or establish normal function. A major technology of regenerative medicine is tissue engineering, which has variously been defined as "an interdisciplinary field that applies the principles of engineering and the life sciences toward the development of biological substitutes that restore, maintain, or improve tissue function", or "the creation of new tissue by the deliberate and controlled stimulation of selected target cells through a systematic combination of molecular and mechanical signals".
History
Tissue engineering emerged during the 1990s as a potentially powerful option for regenerating tissue and research initiatives were established in various cities in the US and in European countries including the UK, Italy, Germany and Switzerland, and also in Japan. Soon fledgling societies were formed in these countries in order to represent these new sciences, notably the European Tissue Engineering Society (ETES) and, in the US, the Tissue Engineering Society (TES), soon to become the Tissue Engineering Society international (TESi) and the Regenerative Medicine Society (RMS).
Because of the overlap between the activities of these societies and the increasing globalization of science and medicine, discussions of a merger between TESi, ETES and RMS were initiated in 2004, and agreement was reached during 2005 on the formation of the consolidated society, the Tissue Engineering and Regenerative Medicine International Society (TERMIS). Election of officers for TERMIS took place in September 2005, and the by-laws were approved by the Board.
Rapid progress in the organization of TERMIS took place during late 2005 and 2006. The SYIS, Student and Young Investigator Section, was established in January 2006, a website and newsletter were launched, and membership dues procedures were put in place.
Structure and governance
It was determined that each Chapter would have its own Council, the overall activities being determined by the Governing Board, on which each Council was represented, and an executive committee.
Society chapters
At the beginning of the Society, it was agreed that there would be Continental Chapters of TERMIS, initially TERMIS-North America (TERMIS-NA) and TERMIS-Europe (TERMIS-EU), to be joined at the time of the major Shanghai conference in October 2005 by TERMIS-Asia Pacific (TERMIS-AP). It was subsequently agreed that the remit of TERMIS-North America should be expanded to incorporate activity in South America, the chapter officially becoming TERMIS-Americas (TERMIS-AM) in 2012.
Student and Young Investigator Section
The Student and Young Investigator Section of TERMIS (TERMIS-SYIS) brings together undergraduate and graduate students, post-doctoral researchers and young investigators in industry and academia related to tissue engineering and regenerative medicine. It follows the organizational and working pattern of TERMIS.
Activities
Journal
A contract was signed between TERMIS and the publisher Mary Ann Liebert, Inc., which designated the journal Tissue Engineering, Parts A, B, and C as the official journal of TERMIS, with free on-line access for the membership.
Conferences
It was agreed that there would be a World Congress every three years, with each Chapter organizing its own conference in the intervening two years.
Awards
Each TERMIS chapter has defined awards to recognize outstanding scientists and their contributions within the community.
TERMIS-AP
The Excellence Achievement Award has been established to recognize a researcher in the Asia-Pacific region who has made continuous and landmark contributions to the tissue engineering and regenerative medicine field.
The Outstanding Scientist Award has been established to recognize a mid-career researcher in the Asia-Pacific region who has made significant contributions to the TERM field.
The Young Scholar Award has been established to recognize a young researcher in the Asia-Pacific region who has made significant and consistent achievements in the TERM field, showing clear evidence of their potential to excel.
The Mary Ann Liebert, Inc. Best TERM Paper Award has been established to recognize a student researcher (undergraduate/graduate/postdoc) in the Asia-Pacific region who has achieved outstanding research accomplishments in the TERM field.
The TERMIS-AP Innovation Team Award has been established to recognize a team of researchers in the Asia-Pacific region. It aims to recognize successful applications of tissue engineering and regenerative medicine leading to the development of relevant products/therapies/technologies which will ultimately benefit the patients.
TERMIS-EU
The Career Achievement Award recognizes individuals who have made outstanding contributions to the field of TERM and have carried out most of their career in the TERMIS-EU geographical area.
The Mid Terms Career Award was established in 2020 to recognize individuals who are within 10–20 years of obtaining their PhD, with a successful research group and clear evidence of outstanding performance.
The Robert Brown Early Career Principal Investigator Award recognizes individuals that are within 2–10 years after obtaining their PhD, with clear evidence of a growing profile.
Award recipients
Fellows
Fellows of Tissue Engineering and Regenerative Medicine (FTERM) recipients are:
Alini, Mauro
Atala, Anthony
Badylak, Stephen
Cancedda, Ranieri
Cao, Yilin
Chatzinikolaidou, Maria
El Haj, Alicia
Fontanilla, Marta
Germain, Lucie
Gomes, Manuela
Griffith, Linda
Guldberg, Robert
Hellman, Kiki
Hilborn, Jöns
Hubbell, Jeffrey
Hutmacher, Dietmar
Khang, Gilson
Kirkpatrick, C. James
Langer, Robert
Lee, Hai-Bang
Lee, Jin Ho
Lewandowska-Szumiel, Malgorzata
Marra, Kacey
Martin, Ivan
McGuigan, Alison
Mikos, Antonios
Mooney, David
Motta, Antonella
Naughton, Gail
Okano, Teruo
Pandit, Abhay
Parenteau, Nancy
Radisic, Milica
Ratner, Buddy
Redl, Heinz
Reis, Rui L.
Richards, R. Geoff
Russell, Alan
Schenke-Layland, Katja
Shoichet, Molly
Smith, David
Tabata, Yasuhiko
Tuan, Rocky
Vacanti, Charles
Vacanti, Joseph
van Osch, Gerjo
Vunjak-Novakovic, Gordana
Wagner, William
Weiss, Anthony S.
Emeritus
Johnson, Peter
Williams, David
Deceased Fellows
Nerem, Robert
References
External links
TERMIS homepage
Tissue Engineering Journal
Tissue engineering
Medical associations based in the United States
Medical and health organizations based in California
International medical associations | Tissue Engineering and Regenerative Medicine International Society | Chemistry,Engineering,Biology | 1,310 |
21,723,923 | https://en.wikipedia.org/wiki/Reverse%20phase%20protein%20lysate%20microarray | A reverse phase protein lysate microarray (RPMA) is a protein microarray designed as a dot-blot platform that allows measurement of protein expression levels in a large number of biological samples simultaneously in a quantitative manner when high-quality antibodies are available.
Technically, minuscule amounts of (a) cellular lysates, from intact cells or laser capture microdissected cells, (b) body fluids such as serum, CSF, urine, vitreous, saliva, etc., are immobilized on individual spots on a microarray that is then incubated with a single specific antibody to detect expression of the target protein across many samples. A summary video of RPPA is available. One microarray, depending on the design, can accommodate hundreds to thousands of samples that are printed in a series of replicates. Detection is performed using either a primary or a secondary labeled antibody by chemiluminescent, fluorescent or colorimetric assays. The array is then imaged and the obtained data is quantified.
Multiplexing is achieved by probing multiple arrays spotted with the same lysate with different antibodies simultaneously and can be implemented as a quantitative calibrated assay. In addition, since RPMA can utilize whole-cell or undissected or microdissected cell lysates, it can provide direct quantifiable information concerning post translationally modified proteins that are not accessible with other high-throughput techniques. Thus, RPMA provides high-dimensional proteomic data in a high throughput, sensitive and quantitative manner. However, since the signal generated by RPMA could be generated from unspecific primary or secondary antibody binding, as is seen in other techniques such as ELISA, or immunohistochemistry, the signal from a single spot could be due to cross-reactivity. Thus, the antibodies used in RPMA must be carefully validated for specificity and performance against cell lysates by western blot.
RPMA has various uses such as quantitative analysis of protein expression in cancer cells, body fluids or tissues for biomarker profiling, cell signaling analysis and clinical prognosis, diagnosis or therapeutic prediction. This is possible as a RPMA with lysates from different cell lines and or laser capture microdissected tissue biopsies of different disease stages from various organs of one or many patients can be constructed for determination of relative or absolute abundance or differential expression of a protein marker level in a single experiment. It is also used for monitoring protein dynamics in response to various stimuli or doses of drugs at multiple time points. Some other applications that RPMA is used for include exploring and mapping protein signaling pathways, evaluating molecular drug targets and understanding a candidate drug's mechanism of action. It has been also suggested as a potential early screen test in cancer patients to facilitate or guide therapeutic decision making.
Other protein microarrays include forward protein microarrays (PMAs) and antibody microarrays (AMAs). PMAs immobilize individual purified and sometimes denatured recombinant proteins on the microarray that are screened by antibodies and other small compounds. AMAs immobilize antibodies that capture analytes from the sample applied on the microarray. The target protein is detected either by direct labeling or a secondary labeled antibody against a different epitope on the analyte target protein (sandwich approach). Both PMAs and AMAs can be classified as forward phase arrays as they involve immobilization of a bait to capture an analyte. In forward phase arrays, each array is incubated with one test sample such as a cellular lysate or a patient's serum, but multiple analytes in the sample are tested simultaneously. Figure 1 shows a forward (using antibody as a bait in here) and reverse phase protein microarray at the molecular level.
Experimental design and procedure
Depending on the research question or the type and aim of the study, RPMA can be designed by selecting the content of the array, the number of samples, sample placement within micro-plates, array layout, type of microarrayer, correct detection antibody, signal detection method, inclusion of control and quality control of the samples. The actual experiment is then set up in the laboratory and the results obtained are quantified and analyzed. The experimental stages are listed below:
Sample collection
Cells are grown in T-25 flasks at 37 °C and 5% CO2 in an appropriate medium. Depending on the design of the study, after the cells are confluent they can be treated with drugs or growth factors, or irradiated, before the lysis step. For time-course studies, a stimulant is added to a set of flasks concurrently and the flasks are then processed at different time points. For drug-dose studies, a set of flasks is treated with different doses of the drug and all the flasks are collected at the same time.
If an RPMA containing cell fraction lysates from one or more tissues is to be made, laser capture microdissection (LCM) or fine needle aspiration methods are used to isolate specific cells from a region of tissue under the microscope.
Cell lysis
Pellets from cells collected through any of the above means are lysed with a cell lysis buffer to obtain high protein concentration.
Antibody screening
Aliquots of the lysates are pooled and resolved by two-dimensional single lane SDS-PAGE followed by western blotting on a nitrocellulose membrane. The membrane is cut into four-millimeter strips, and each strip is probed with a different antibody. Strips with single band indicate specific antibodies that are suitable for RPMA use. Antibody performance should be also validated with a smaller sample size under identical condition before actual sample collection for RPMA.
RPMA construction
Cell lysates are collected and are serially diluted six to ten times if using colorimetric techniques, or without dilution when fluorometric detection is used (due to the greater dynamic range of fluorescence than colorimetric detection). Serial dilutions are then plated in replicates into a 384- or a 1536-well microtiter plate. The lysates are then printed onto either nitrocellulose or PVDF membrane coated glass slides by a microarrayer such as Aushon BioSystem 2470 or Flexys robot (Genomic solution). Aushon 2470 with a solid pin system is the ideal choice as it can be used for producing arrays with very viscous lysates and it has humidity environmental control and automated slide supply system. That being said, there are published papers showing that Arrayit Microarray Printing Pins can also be used and produce microarrays with much higher throughput using less lysate. The membrane coated glass slides are commercially available from several different companies such as Schleicher and Schuell Bioscience (now owned by GE Whatman www.whatman.com), Grace BioLabs (www.gracebio.com), Thermo Scientific, and SCHOTT Nexterion (www.schott.com/nexterion).
Immunochemical signal detection
After the slides are printed, non-specific binding sites on the array are blocked using a blocking buffer such as I-Block and the arrays are probed with a primary antibody followed by a secondary antibody. Detection is usually conducted with DakoCytomation catalyzed signal amplification (CSA) system. For signal amplification, slides are incubated with streptavidin-biotin-peroxidase complex followed by biotinyl-tyramide/hydrogen peroxide and streptavidin-peroxidase. Development is completed using hydrogen peroxide and scans of the slides are obtained (1). Tyramide signal amplification works as follows: immobilized horseradish peroxidase (HRP) converts tyramide into reactive intermediate in the presence of hydrogen peroxide. Activated tyramide binds to neighboring proteins close to a site where the activating HRP enzyme is bound. This leads to more tyramide molecule deposition at the site; hence the signal amplification.
Lance Liotta and Emanuel Petricoin invented the RPMA technique in 2001 (see history section below), and have developed a multiplexed detection method using near-infrared fluorescent techniques. In this study, they report the use of a dual dye-based approach that can effectively double the number of endpoints observed per array, allowing, for example, both phospho-specific and total protein levels to be measured and analyzed at once.
Data quantification and analysis
Once immunostaining has been performed protein expression must then be quantified. The signal levels can be obtained by using the reflective mode of an ordinary optical flatbed scanner if a colorimetric detection technique is used or by laser scanning, such as with a TECAN LS system, if fluorescent techniques are used. Two programs available online (P-SCAN and ProteinScan) can then be used to convert the scanned image into numerical values. These programs quantify signal intensities at each spot and use a dose interpolation algorithm (DI25) to compute a single normalized protein expression level value for each sample. Normalization is necessary to account for differences in total protein concentration between each sample and so that antibody staining can be directly compared between samples. This can be achieved by performing an experiment in parallel in which total proteins are stained by colloidal gold total protein staining or Sypro Ruby total protein staining. When multiple RPMAs are analyzed, the signal intensity values can be displayed as a heat map, allowing for Bayesian clustering analysis and profiling of signaling pathways. An optimal software tool, custom designed for RPMAs is called Microvigene, by Vigene Tech, Inc.
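A hedged sketch of the normalization step described above (illustrative only; actual analysis pipelines such as P-SCAN or MicroVigene are more involved): each antibody-stained spot intensity is divided by the total-protein signal measured for the same spot in the parallel stain, so that differences in loaded protein do not masquerade as differences in expression.

```python
def normalize_spots(antibody_intensities, total_protein_intensities):
    """Divide each antibody spot signal by the matched total-protein signal."""
    return [
        antibody / total if total > 0 else 0.0
        for antibody, total in zip(antibody_intensities, total_protein_intensities)
    ]

# Two samples with different amounts of total protein loaded:
# after normalization the expression values become directly comparable.
print(normalize_spots([1200.0, 2400.0], [800.0, 1600.0]))  # [1.5, 1.5]
```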
Strengths
The greatest strength of RPMAs is that they allow for high throughput, multiplexed, ultra-sensitive detection of proteins from extremely small numbers of input material, a feat which cannot be done by conventional western blotting or ELISA. The small spot size on the microarray, ranging in diameter from 85 to 200 micrometres, enables the analysis of thousands of samples with the same antibody in one experiment. RPMAs have increased sensitivity and are capable of detecting proteins in the picogram range. Some researchers have even reported detection of proteins in the attogram range. This is a significant improvement over protein detection by ELISA, which requires microgram amounts of protein (6). The increase in sensitivity of RPMAs is due to the miniature format of the array, which leads to an increase in the signal density (signal intensity/area) coupled with tyramide deposition-enabled enhancement. The high sensitivity of RPMAs allows for the detection of low abundance proteins or biomarkers such as phosphorylated signaling proteins from very small amounts of starting material such as biopsy samples, which are often contaminated with normal tissue. Using laser capture microdissection lysates can be analyzed from as few as 10 cells, with each spot containing less than a hundredth of a cell equivalent of protein.
A great improvement of RPMAs over traditional forward phase protein arrays is a reduction in the number of antibodies needed to detect a protein. Forward phase protein arrays typically use a sandwich method to capture and detect the desired protein. This implies that there must be two epitopes on the protein (one to capture the protein and one to detect the protein) for which specific antibodies are available. Other forward phase protein microarrays directly label the samples, however there is often variability in the labeling efficiency for different protein, and often the labeling destroys the epitope to which the antibody binds. This problem is overcome by RPMAs as sample need not be labeled directly.
Another strength of RPMAs over forward phase protein microarrays and western blotting is the uniformity of results, as all samples on the chip are probed with the same primary and secondary antibody and the same concentration of amplification reagents for the same length of time. This allows for the quantification of differences in protein levels across all samples. Furthermore, printing each sample, on the chip in serial dilution (colorimetric) provides an internal control to ensure analysis is performed only in the linear dynamic range of the assay. Optimally, printing of calibrators and high and low controls directly on the same chip will then provide for unmatched ability to quantitatively measure each protein over time and between experiments. A problem that is encountered with tissue microarrays is antigen retrieval and the inherent subjectivity of immunohistochemistry. Antibodies, especially phospho-specific reagents, often detect linear peptide sequences that may be masked due to the three-dimensional conformation of the protein. This problem is overcome with RPMAs as the samples can be denatured, revealing any concealed epitopes.
Weaknesses
The biggest limitation of RPMA, as is the case for all immunoassays, is its dependence on antibodies for detection of proteins. Currently there is a limited but rapidly growing number of signaling proteins for which antibodies exist that give an analyzable signal. In addition, finding the appropriate antibody could require extensive screening of many antibodies by western blotting prior to beginning RPMA analysis. To overcome this issue, two open resource databases have been created to display western blot results for antibodies that have good binding specificity within the expected range. Furthermore, RPMAs, unlike western blots, do not resolve protein fractions by molecular weight. Thus, it is critical that upfront antibody validation be performed.
History
RPMA was first introduced in 2001 in a paper by Lance Liotta and Emanuel Petricoin, who invented the technology. The authors used the technique to successfully analyze the state of a pro-survival checkpoint protein at the microscopic transition stage using laser capture microdissection of histologically normal prostate epithelium, prostate intraepithelial neoplasia, and patient-matched invasive prostate cancer. Since then RPMA has been used in many basic-biology, translational and clinical research studies. In addition, the technique has now been brought into clinical trials for the first time whereby patients with metastatic colorectal and breast cancers are chosen for therapy based on the results of the RPMA. This technique has been commercialized for personalized medicine-based applications by Theranostics Health, Inc.
References
External links
http://www.bio-itworld.com/BioIT_Article.aspx?id=97660
http://discover.nci.nih.gov/abminer
Microarrays | Reverse phase protein lysate microarray | Chemistry,Materials_science,Biology | 3,047 |
69,937 | https://en.wikipedia.org/wiki/Nuclear%20pulse%20propulsion | Nuclear pulse propulsion or external pulsed plasma propulsion is a hypothetical method of spacecraft propulsion that uses nuclear explosions for thrust. It originated as Project Orion with support from DARPA, after a suggestion by Stanislaw Ulam in 1947. Newer designs using inertial confinement fusion have been the baseline for most later designs, including Project Daedalus and Project Longshot.
History
Los Alamos
Calculations for a potential use of this technology were made at the laboratory from the close of the 1940s to the mid-1950s.
Project Orion
Project Orion was the first serious attempt to design a nuclear pulse rocket. A design was formed at General Atomics during the late 1950s and early 1960s, with the idea of reacting small directional nuclear explosives utilizing a variant of the Teller–Ulam two-stage bomb design against a large steel pusher plate attached to the spacecraft with shock absorbers. Efficient directional explosives maximized the momentum transfer, leading to specific impulses in the range of 6,000 seconds, or about thirteen times that of the Space Shuttle main engine. With refinements a theoretical maximum specific impulse of 100,000 seconds (1 MN·s/kg) might be possible. Thrusts were in the millions of tons, allowing spacecraft larger than 8 million tons to be built with 1958 materials.
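To see why specific impulses of this order matter, a rough sketch using the Tsiolkovsky rocket equation follows; the mass ratio is an arbitrary illustrative value, not a figure from the Orion study.

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_seconds, mass_ratio):
    """Tsiolkovsky rocket equation: delta-v = Isp * g0 * ln(m0 / mf)."""
    return isp_seconds * G0 * math.log(mass_ratio)

# Same (illustrative) mass ratio of 5 with a chemical engine vs. a nuclear pulse unit.
print(delta_v(450, 5) / 1000)    # chemical engine: roughly 7 km/s
print(delta_v(6000, 5) / 1000)   # nuclear pulse:   roughly 95 km/s
```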
The reference design was to be constructed of steel using submarine-style construction with a crew of more than 200 and a vehicle takeoff weight of several thousand tons. This single-stage reference design would reach Mars and return in four weeks from the Earth's surface (compared to 12 months for NASA's current chemically powered reference mission). The same craft could visit Saturn's moons in a seven-month mission (compared to chemically powered missions of about nine years). Notable engineering problems that occurred were related to crew shielding and pusher-plate lifetime.
Although the system appeared to be workable, the project was shut down in 1965, primarily because the Partial Test Ban Treaty made it illegal; in fact, before the treaty, the US and Soviet Union had already separately detonated a combined number of at least nine nuclear bombs, including thermonuclear, in space, i.e., at altitudes of over 100 km (see high-altitude nuclear explosions). Ethical issues complicated the launch of such a vehicle within the Earth's magnetosphere: calculations using the (disputed) linear no-threshold model of radiation damage showed that the fallout from each takeoff would cause the death of approximately 1 to 10 individuals. In a threshold model, such extremely low levels of thinly distributed radiation would have no associated ill-effects, while under hormesis models, such tiny doses would be negligibly beneficial. The use of less efficient clean nuclear bombs for achieving orbit and then more efficient, higher yield dirtier bombs for travel would significantly reduce the amount of fallout caused from an Earth-based launch.
One useful mission would be to deflect an asteroid or comet on collision course with the Earth, depicted dramatically in the 1998 film Deep Impact. The high performance would permit even a late launch to succeed, and the vehicle could effectively transfer a large amount of kinetic energy to the asteroid by simple impact. The prospect of an imminent asteroid impact would obviate concerns over the few predicted deaths from fallout. An automated mission would remove the challenge of designing a shock absorber that would protect the crew.
Orion is one of very few interstellar space drives that could theoretically be constructed with available technology, as discussed in a 1968 paper, "Interstellar Transport" by Freeman Dyson.
Project Daedalus
Project Daedalus was a study conducted between 1973 and 1978 by the British Interplanetary Society (BIS) to design an interstellar uncrewed spacecraft that could reach a nearby star within about 50 years. A dozen scientists and engineers led by Alan Bond worked on the project. At the time fusion research appeared to be making great strides, and in particular, inertial confinement fusion (ICF) appeared to be adaptable as a rocket engine.
ICF uses small pellets of fusion fuel, typically lithium deuteride (6Li2H) with a small deuterium/tritium trigger at the center. The pellets are thrown into a reaction chamber where they are hit on all sides by lasers or another form of beamed energy. The heat generated by the beams explosively compresses the pellet to the point where fusion takes place. The result is a hot plasma, and a very small "explosion" compared to the minimum size bomb that would be required to instead create the necessary amount of fission.
For Daedalus, this process was to be run within a large electromagnet that formed the rocket engine. After the reaction, ignited by electron beams, the magnet funnelled the hot gas to the rear for thrust. Some of the energy was diverted to run the ship's systems and engine. In order to make the system safe and energy efficient, Daedalus was to be powered by a helium-3 fuel collected from Jupiter.
Medusa
The Medusa design has more in common with solar sails than with conventional rockets. It was envisioned by Johndale Solem in the 1990s and published in the Journal of the British Interplanetary Society (JBIS).
A Medusa spacecraft would deploy a large sail ahead of it, attached by independent cables, and then launch nuclear explosives forward to detonate between itself and its sail. The sail would be accelerated by the plasma and photonic impulse, running out the tethers as when a fish flees a fisher, generating electricity at the "reel". The spacecraft would use some of the generated electricity to reel itself up toward the sail, constantly smoothly accelerating as it goes.
In the original design, multiple tethers connected to multiple motor generators. The advantage over the single tether is to increase the distance between the explosion and the tethers, thus reducing damage to the tethers.
For heavy payloads, performance could be improved by taking advantage of lunar materials, for example, wrapping the explosive with lunar rock or water, stored previously at a stable Lagrange point.
Medusa performs better than the classical Orion design because its sail intercepts more of the explosive impulse, its shock-absorber stroke is much longer, and its major structures are in tension and hence can be quite lightweight. Medusa-type ships would be capable of a specific impulse of 50,000 to 100,000 seconds (500 to 1000 kN·s/kg).
Medusa became widely known to the public in the BBC documentary film To Mars By A-Bomb: The Secret History of Project Orion. A short film shows an artist's conception of how the Medusa spacecraft works "by throwing bombs into a sail that's ahead of it".
Project Longshot
Project Longshot was a NASA-sponsored research project carried out in conjunction with the US Naval Academy in the late 1980s. Longshot was in some ways a development of the basic Daedalus concept, in that it used magnetically funneled ICF. The key difference was that they felt that the reaction could not power both the rocket and the other systems, and instead included a 300 kW conventional nuclear reactor for running the ship. The added weight of the reactor reduced performance somewhat, but even using LiD fuel it would be able to reach neighboring star Alpha Centauri in 100 years (approx. velocity of 13,411 km/s, at a distance of 4.5 light years, equivalent to 4.5% of light speed).
Antimatter-catalyzed nuclear reaction
In the mid-1990s, research at Pennsylvania State University led to the concept of using antimatter to catalyze nuclear reactions. Antiprotons would react inside the nucleus of uranium, releasing energy that breaks the nucleus apart as in conventional nuclear reactions. Even a small number of such reactions can start the chain reaction that would otherwise require a much larger volume of fuel to sustain. Whereas the "normal" critical mass for plutonium is about 11.8 kilograms (for a sphere at standard density), with antimatter catalyzed reactions this could be well under one gram.
Several rocket designs using this reaction were proposed, some which would use all-fission reactions for interplanetary missions, and others using fission-fusion (effectively a very small version of Orion's bombs) for interstellar missions.
Magneto-inertial fusion
NASA funded MSNW LLC and the University of Washington in 2011 to study and develop a fusion rocket through the NASA Innovative Advanced Concepts NIAC Program.
The rocket uses a form of magneto-inertial fusion to produce a direct thrust fusion rocket. Magnetic fields cause large metal rings to collapse around the deuterium-tritium plasma, triggering fusion. The energy heats and ionizes the shell of metal formed by the crushed rings. The hot ionized metal is shot out of a magnetic rocket nozzle at a high speed (up to 30 km/s). Repeating this process roughly every minute would accelerate or decelerate the spacecraft. The fusion reaction is not self-sustaining and requires electrical energy to explode each pulse. With electrical requirements estimated to be between 100 kW and 1,000 kW (300 kW average), designs incorporate solar panels to produce the required energy.
Foil Liner Compression creates fusion at the proper energy scale. The proof of concept experiment in Redmond, Washington, was to use aluminum liners for compression. However, the ultimate design was to use lithium liners.
Performance characteristics are dependent on the fusion energy gain factor achieved by the reactor. Gains were expected to be between 20 and 200, with an estimated average of 40. Higher gains produce higher exhaust velocity, higher specific impulse and lower electrical power requirements. The table below summarizes different performance characteristics for a theoretical 90-day Mars transfer at gains of 20, 40, and 200.
By April 2013, MSNW had demonstrated subcomponents of the systems: heating deuterium plasma up to fusion temperatures and concentrating the magnetic fields needed to create fusion. They planned to put the two technologies together for a test before the end of 2013.
Pulsed fission-fusion propulsion
Pulsed fission-fusion (PuFF) propulsion is reliant on principles similar to magneto-inertial fusion. It aims to solve the problem of the extreme stress induced on containment by an Orion-like motor by ejecting the plasma obtained from small fuel pellets that undergo autocatalytic fission and fusion reactions initiated by a Z-pinch. It is a theoretical propulsion system researched through the NIAC Program by the University of Alabama in Huntsville. It is in essence a fusion rocket that uses a Z-pinch configuration, but coupled with a fission reaction to boost the fusion process.
A PuFF fuel pellet, around 1 cm in diameter, consists of two components: a deuterium-tritium (D-T) cylinder of plasma, called the target, which undergoes fusion, and a surrounding U-235 sheath, which undergoes fission, enveloped by a lithium liner. Liquid lithium, serving as a moderator, fills the space between the D-T cylinder and the uranium sheath. Current run through the liquid lithium generates a Lorentz force that compresses the D-T plasma by a factor of 10 in what is known as a Z-pinch. The compressed plasma reaches criticality and undergoes fusion reactions. However, the fusion energy gain (Q) of these reactions is far below breakeven (Q < 1), meaning that the reaction consumes more energy than it produces.
In a PuFF design, the fast neutrons released by the initial fusion reaction induce fission in the U-235 sheath. The resultant heat causes the sheath to expand, increasing its implosion velocity onto the D-T core and compressing it further, releasing more fast neutrons. Those again amplify the fission rate in the sheath, rendering the process autocatalytic. It is hoped that this results in a complete burn up of both the fission and fusion fuels, making PuFF more efficient than other nuclear pulse concepts. Much like in a magneto-inertial fusion rocket, the performance of the engine is dependent on the degree to which the fusion gain of the D-T target is increased.
One "pulse" consist of the injection of a fuel pellet into the combustion chamber, its consumption through a series of fission-fusion reactions, and finally the ejection of the released plasma through a magnetic nozzle, thus generating thrust. A single pulse is expected to take only a fraction of a second to complete.
See also
References
External links
G.R. Schmidt, J.A. Bunornetti and P.J. Morton, Nuclear Pulse Propulsion – Orion and Beyond, NASA technical report AlAA 2000-3856, 2000
J. C. Nance, "Nuclear Pulse Propulsion," IEEE Trans. on Nuclear Science 12, 177 (1965) [Reprinted as Ann. N.Y. Acad. Sci. 140, 396 (1966)].
"Nuclear Pulse Space Vehicle Study, Vol III," Report on NASA Contract NAS 8-11053, General Atomics, GA-5009, 19 Sep 64.
F. Dyson, "Death of a Project," Science 149, 141 (1965).
W. H. Robbins and H. B. Finger, H.B., "An Historical Perspective of the NERVA Nuclear Rocket Engine Technology Program", NASA Contractor Report 187154, AIAA-91-3451, July 1991.
Nuclear spacecraft propulsion
Plasma technology and applications | Nuclear pulse propulsion | Physics | 2,740 |
578,348 | https://en.wikipedia.org/wiki/Frot | Frot or frotting (slang for frottage; ) is a sexual practice between men that usually involves direct penis-to-penis contact. The term was popularized by gay male activists who disparaged the practice of anal sex, but has since evolved to encompass a variety of preferences for the act, which may or may not imply particular attitudes towards other sexual activities. This can also be used as some type of foreplay.
Because it is not penetrative, frot has the safe sex benefit of reducing the risk of transmitting HIV/AIDS; however, it still carries the risk of skin-to-skin sexually transmitted infections, such as HPV and pubic lice (crabs), both of which can be transmitted even when lesions are not visible.
It is analogous to tribadism, which is vulva-to-vulva contact between women.
Concept and etymology
The modern definition of frot emerged in a context of a debate about the status of anal sex within the gay male community; some in the anti-anal, pro-frot camp insist that anal sex ought to be avoided altogether. One view argued that the popularity of anal sex would decline, presumably with a corresponding drop in HIV rates, if gay men could somehow be persuaded to stop thinking of anal sex as a "vanilla" practice, but rather as something "kinky" and not-quite-respectable—as was the case in the 1950s and 1960s, when gay men who preferred to do only mutual masturbation and fellatio sometimes used the disparaging slang term brownie queen for aficionados of anal sex.
Gay activist Bill Weintraub began to heavily promote and recommend the gender-specific meaning of "penis-to-penis rubbing" as frot on Internet forums sometime in the late 1990s, and said he coined the term. "I don't use the word 'frottage,' because it is an ersatz French word which can indicate any sort of erotic rubbing," he stated. "Frot, by contrast, is always phallus-to-phallus sex." Weintraub believes that is what actual sex is: genital-genital contact.
Alternative terms for frot include frictation, which can refer to the wider meaning of frottage but also penis-penis sex specifically, as well as sword-fighting, Oxford style, Princeton rub, and Ivy League rub.
Sexual practices
General
Frot can be enjoyable because it mutually and simultaneously stimulates the genitals of both partners as it tends to produce pleasurable friction against the frenulum nerve bundle on the underside of each partner's penile shaft, just below the urinary opening (meatus) of the penis head (glans penis).
Safer sex
Since frot is a non-penetrative sex act, the risk of passing a sexually transmitted infection (STI) that requires direct contact between the mucous membranes and pre-ejaculate or semen is reduced. HIV is among the infections that require such direct contact, and research indicates that there is no risk of HIV transmission via frot. However, frot can still transmit other sexually transmitted infections, such as HPV (which can cause genital warts) and pubic lice (crabs). Vaccines are available against HPV.
Comparison with anal sex and debates
Some gay men, or men who have sex with men (MSM) in general, prefer to engage in frot or other forms of mutual masturbation because they find it more pleasurable or more affectionate than anal sex, to preserve technical virginity, or as safe sex alternatives to anal penetration. This preference has led to some debate in the gay male and MSM community regarding what constitutes "real sex" or the most sensual expression of sexual intimacy. Some frot advocates consider "two genitals coming together by mingling, caressing, sliding" and rubbing to be sex more than other forms of male sexual activity. Other men who have sex with men associate male masculinity with the sexual positions of "tops" and "bottoms" during anal sex.
During anal sex, the insertive partner may be referred to as the top or active partner. The one being penetrated may be referred to as the bottom or passive partner. Those with no strong preference for either are referred to as versatile. Some frot advocates insist that such roles introduce inequality during sexual intimacy, and that frot is "equal" because of mutual genital-genital stimulation. The lack of mutual genital stimulation and role asymmetry has led other frot advocates to denounce anal sex as degrading to the receptive partner. This view of dominance and inequality associated with sex roles is disputed by researchers who state that it is not clear that specific sexual acts are necessarily indicative of general patterns of masculinity or dominance in a gay male relationship, and that, for both partners, anal intercourse can be associated with being masculine. Additionally, some frot advocates, such as Bill Weintraub, are concerned with diseases that may be acquired through anal sex. In a 2005 article in The Advocate, one anal sex opponent said that no longer showing anal sex as erotic would help avoid HIV/AIDS, and opined that some gay men perceived him to be antigay when he was only trying to keep gay and bisexual men alive and healthy.
Gay men, and MSM in general, who prefer anal sex may view it as "[their] version of intercourse" and as "the natural apex of sex, a wonderful expression of intimacy, and a great source of pleasure". Psychologist Walt Odets said, "I think that anal sex has for gay men the same emotional significance that vaginal sex has for heterosexuals." Anal sex is generally viewed as vanilla sex among MSM, and is often thought to be expected, even by MSM who do not prefer the act. "Some people like [anal] because it seems taboo or naughty," stated author and sex therapist Jack Morin. "Some people like the flavor of dominance and submission... some don't."
MSM who defend the essential validity of anal sex have rejected claims made by radical frot advocates. Others have at times disparaged frottage as a makeshift, second-rate form of male/male intimacy—something better left to inexperienced teenagers and "closeted" older men. Odets said, "No one would propose that we initiate a public health measure by de-eroticizing vaginal sex. It would sound like a ridiculous idea. It's no less ridiculous for gay men."
HuffPost contributor and sexologist Joe Kort proposed the term side for gay men who are not interested in anal sex and instead prefer "to kiss, hug and engage in oral sex, rimming, mutual masturbation and rubbing up and down on each other", viewing "sides" as simply another gay male sexual preference akin to being a top, bottom or versatile, and adding that "Whether a man enjoys anal sex or not is no reflection on his sexual orientation, and if he's gay, it doesn't define whether or not he's 'really' having sex."
Prevalence
A 2011 survey of gay and bisexual men by the Journal of Sexual Medicine found that out of over 1,300 different combinations of sexual acts practiced, the most common, at 16% of all encounters, was "holding their partner romantically, kissing partner on mouth, solo masturbation, masturbating partner, masturbation by partner, and genital–genital contact."
Among other animals
Genital–genital rubbing has been observed between males of other animals as well. Among bonobos, frottage frequently occurs when two males hang from a tree limb and engage in penis fencing; it also occurs while two males are in the missionary position.
Frot-like genital rubbing between non-primate males has been observed among bull manatees, in conjunction with "kissing". When engaging in genital–genital rubbing, male bottlenose dolphins often penetrate the genital slit or, less commonly, the anus. Penis-to-penis rubbing is also common among homosexually active mammals.
See also
Intercrural sex
Sex position
References
Further reading
Olivia Judson (2002). Dr. Tatiana's Sex Advice to All Creation.
1990s neologisms
Gay masculinity
LGBTQ slang
Gay male erotica
Human penis
Sexual acts
Male homosexuality
Non-penetrative sex | Frot | Biology | 1,741 |
40,791,915 | https://en.wikipedia.org/wiki/Imaging%20particle%20analysis | Imaging particle analysis is a technique for making particle measurements using digital imaging, one of the techniques defined by the broader term particle size analysis. The measurements that can be made include particle size, particle shape (morphology or shape analysis), and grayscale or color, as well as distributions (graphs) of statistical population measurements.
Description and history
Imaging particle analysis uses the techniques common to image analysis or image processing for the analysis of particles. Particles are defined here per particle size analysis as particulate solids, and thereby not including atomic or sub-atomic particles. Furthermore, this article is limited to real images (optically formed), as opposed to "synthetic" (computed) images (computed tomography, confocal microscopy, SIM and other super resolution microscopy techniques, etc.).
Given the above, the primary method for imaging particle analysis is using optical microscopy. While optical microscopes have been around and used for particle analysis since the 1600s, the "analysis" in the past has been accomplished by humans using the human visual system. As such, much of this analysis is subjective, or qualitative in nature. Even when some sort of quantitative tool is available, such as a measuring reticle in the microscope, it still requires a human to determine and record those measurements.
Beginning in the late 1800s with the availability of photographic plates, it became possible to capture microscope images permanently on film or paper, making measurements easier to acquire by simply using a scaled ruler on the hard copy image. While this significantly speeded up the acquisition of particle measurements, it was still a tedious, labor-intensive process, which not only made it difficult to measure statistically significant particle populations, but also still introduced some degree of human error to the process.
Finally, beginning roughly in the late 1970s, CCD digital sensors for capturing images and computers which could process those images, began to revolutionize the process by using digital imaging. Although the actual algorithms for performing digital image processing had been around for some time, it was not until the significant computing power needed to perform these analyses became available at reasonable prices that digital imaging techniques could be brought to bear in the mainstream. The first dynamic imaging particle analysis system was patented in 1982.
As faster computing resources became available at lowered costs, the task of making measurements from microscope images of particles could now be performed automatically by machine without human intervention, making it possible to measure significantly larger numbers of particles in much less time.
Image acquisition methods
The basic process by which imaging particle analysis is carried out is as follows (a minimal code sketch of these steps appears after the list):
A digital camera captures an image of the field of view in the optical system.
A gray scale thresholding process is used to perform image segmentation, segregating out the particles from the background, creating a binary image of each particle.
Digital image processing techniques are used to perform image analysis operations, resulting in morphological and grey-scale measurements to be stored for each particle.
The measurements saved for each particle are then used to generate image population statistics, or as inputs to algorithms for filtering and sorting the particles into groups of similar types. In some systems, sophisticated pattern recognition techniques may also be employed in order to separate different particle types contained in a heterogeneous sample.
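The following is a minimal sketch, in Python with NumPy and SciPy, of the thresholding, segmentation, and measurement steps listed above. The function name measure_particles, the convention that particles are darker than the background, and the microns_per_pixel calibration parameter are illustrative assumptions rather than part of any particular commercial analyzer.

import numpy as np
from scipy import ndimage

def measure_particles(gray_image, threshold, microns_per_pixel=1.0):
    # gray_image is a 2-D NumPy array of grey levels; threshold and
    # microns_per_pixel are instrument-dependent calibration values.
    # Grey-scale thresholding segregates particles from the background.
    binary = gray_image < threshold
    # Label connected pixel groups; each labelled region is treated as one particle.
    labels, num_particles = ndimage.label(binary)
    measurements = []
    for i in range(1, num_particles + 1):
        mask = labels == i
        area_px = int(mask.sum())
        # Equivalent circular diameter, a common particle-size measure.
        ecd_um = 2.0 * np.sqrt(area_px / np.pi) * microns_per_pixel
        measurements.append({
            "area_um2": area_px * microns_per_pixel ** 2,
            "ecd_um": ecd_um,
            "mean_gray": float(gray_image[mask].mean()),
        })
    return measurements

The list of per-particle records returned by such a routine can then be aggregated into size distributions or passed to filtering and classification logic, as described in the final step above.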
Imaging particle analyzers can be subdivided into two distinct types, static and dynamic, based upon the image acquisition methods. While the basic principles are the same, the methods of image acquisition are different in nature, and each has advantages and disadvantages.
Static imaging particle analysis
Static image acquisition is the most common form. Almost all microscopes can be easily adapted to accept a digital camera via a C mount adaptor. This type of set-up is often referred to as a digital microscope, although many systems using that name are used only for displaying an image on a monitor.
The sample is prepared on a microscope slide which is placed on the microscope stage. Once the sample has been brought into focus, an image can be acquired and displayed on the monitor. If a digital camera or a frame grabber is present, the image can then be saved in digital format, and image processing algorithms can be used to isolate particles in the field of view and measure them.
In static image acquisition, only one field-of-view image is captured at a time. If the user wishes to image other portions of the same sample on the slide, they can use the X-Y positioning hardware (typically composed of two linear stages) on the microscope to move to a different area of the slide. Care must be taken to ensure that two images do not overlap so as not to count and measure the same particles more than once.
The major drawback to static image acquisition is that it is time consuming, both in sample preparation (getting the sample onto the slide with proper dilution if necessary), and in multiple movements of the stage in order to be able to acquire a statistically significant number of particles to count/measure. Computer-controlled X-Y positioning stages are sometimes used in these systems to speed the process up and to reduce the amount of operator intervention, but it is still a time consuming process, and the motorized stages can be expensive due to the level of precision required when working at high magnification.
The major advantages to static particle imaging systems are the use of standard microscope systems and simplicity of depth of field considerations. Since these systems can be made from any standard optical microscope, they may be a lower-cost approach for people who already have microscopes. More important, though, is that microscope-based systems generally have fewer depth-of-field issues than dynamic imaging systems. This is because the sample is placed on a microscope slide, and then usually covered with a cover slip, thus limiting the plane containing the particles relative to the optical axis. This means that more particles will be in acceptable focus at high magnifications.
Dynamic imaging particle analysis
In dynamic image acquisition, large amounts of sample are imaged by moving the sample past the microscope optics and using high-speed flash illumination to effectively "freeze" the motion of the sample. The flash is synchronized with a high shutter speed in the camera to further prevent motion blur. In a dry particle system, the particles are dispensed from a shaker table and fall by gravity past the optical system. In fluid imaging particle analysis systems, the liquid is passed across the optical axis by use of a narrow flow cell.
The flow cell is characterized by its depth perpendicular to the optical axis. In order to keep the particles in focus, the flow depth is restricted so that the particles remain in a plane of best focus perpendicular to the optical axis. This is similar in concept to the effect of the microscope slide plus cover slip in a static imaging system. Since depth of field decreases rapidly with increasing magnification, the depth of the flow cell must be narrowed significantly at higher magnifications.
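For reference, one commonly quoted approximation for the total depth of field of a microscope (a general optics textbook relation, not a specification of any particular flow-cell instrument) is

d_{\mathrm{tot}} \approx \frac{\lambda\, n}{\mathrm{NA}^2} + \frac{n\, e}{M\, \mathrm{NA}}

where λ is the illumination wavelength, n the refractive index of the medium between the objective and the sample, NA the numerical aperture, M the total magnification to the detector, and e the smallest distance the detector can resolve. Because higher-magnification objectives have larger numerical apertures, and NA appears squared in the first denominator, the in-focus depth shrinks quickly, which is why the flow cell must be made correspondingly thinner.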
The major drawback to dynamic image acquisition is that the flow cell depth must be limited as described above. This means that, in general, particles larger in size than the flow cell depth can not be allowed in the sample being processed, because they will probably clog the system. So the sample will typically have to be filtered to remove particles larger than the flow cell depth prior to being evaluated. If it is desired to look at a very wide range of particle size, this may mean that the sample would have to be fractionated into smaller size range components, and run with different magnification/flow cell combinations.
The major advantage to dynamic image acquisition is that it enables acquiring and measuring particles at significantly higher speed, typically on the order of 10,000 particles/minute or greater. This means that statistically significant populations can be analyzed in far shorter time periods than previously possible with manual microscopy or even static imaging particle analysis. In this sense, dynamic imaging particle analysis systems combine the speed typical of particle counters with the discriminatory capabilities of microscopy.
Dynamic imaging particle analysis is used in aquatic microorganism research to analyze phytoplankton, zooplankton, and other aquatic microorganisms ranging from 2 μm to 5 mm in size. Dynamic imaging particle analysis is also used in biopharmaceutical research to characterize and analyze particles ranging from 300 nm to 5 mm in size.
Micro-flow imaging
Micro-flow imaging (MFI) is a particle analysis technique that uses flow microscopy to quantify particles contained in a solution based on size. This technique is used in the biopharmaceutical industry to characterize subvisible particles from approximately 1 μm to >50 μm.
References
Laboratory equipment
Counting instruments
Microscopy | Imaging particle analysis | Chemistry,Mathematics,Technology,Engineering | 1,745 |
69,043,830 | https://en.wikipedia.org/wiki/Buick%20Indy%20V6%20engine | The Buick Indy V6 engine is a powerful turbocharged V-6 Indy car racing internal combustion engine, designed and produced by Buick for use in the CART PPG Indy Car World Series, and later the IRL IndyCar Series, between 1982 and 1997. It shares its architecture and mechanical design with, and is based on, the Buick V6 road car engine. A slightly destroked 3.0-liter V6 engine was also used in the March 85G and March 86G IMSA GTP sports prototypes.
Though the Buick engine never won a CART series race, it did see some success at the Indianapolis 500, which was sanctioned solely by USAC. This was largely due to the fact that USAC permitted the non-overhead cam "stock block" pushrod engines a higher level of turbocharger "boost" (55 inHg) than CART's rules allowed. This made the engine attractive to smaller teams competing in the Indy 500, giving them a chance to compete with the higher budget teams, many of which ran the powerful Ilmor-Chevy or the Cosworth. Though the Buick engine had notorious reliability issues for the 500 miles, it often excelled in qualifying. Pancho Carter won the pole position with a Buick at the 1985 Indianapolis 500, and Gary Bettenhausen was the fastest qualifier in 1991. Roberto Guerrero became the first driver to break the 230 mph barrier in time trials, winning the pole for the 1992 race. Jim Crawford led eight laps and finished 6th in 1988, and Al Unser Sr. notched Buick's best Indy finish with a third in 1992.
Applications
Indy Cars
March 82C
March 83C
March 84C
March 85C
March 86C
March 87C
March 88C
Lola T89/00
Lola T90/00
Lola T91/00
Lola T92/00
Lola T93/00
Lola T95/00
IMSA GTP/Group C sports prototypes
Alba AR3-001
March 85G
March 86G
Alba AR8-001
Alba AR9-001
Alba AR20-01
References
Engines by model
Gasoline engines by model
Buick engines
IndyCar Series
V6 engines | Buick Indy V6 engine | Technology | 436 |
28,219,544 | https://en.wikipedia.org/wiki/Vandenberg%20Launch%20Facility%204 | Vandenberg Space Force Base Launch Facility 04 (LC-04) is a former United States Air Force (USAF) intercontinental ballistic missile (ICBM) launch facility on Vandenberg Space Force Base, California, USA. It was a launch site for the land-based LGM-30 Minuteman missile series.
External links
Vandenberg Space Force Base | Vandenberg Launch Facility 4 | Astronomy | 74 |
24,733,592 | https://en.wikipedia.org/wiki/Tetramer | A tetramer () (tetra-, "four" + -mer, "parts") is an oligomer formed from four monomers or subunits. The associated property is called tetramery. An example from inorganic chemistry is titanium methoxide with the empirical formula Ti(OCH3)4, which is tetrameric in solid state and has the molecular formula Ti4(OCH3)16. An example from organic chemistry is kobophenol A, a substance that is formed by combining four molecules of resveratrol.
In biochemistry, it similarly refers to a biomolecule formed of four units that are either the same (a homotetramer), as in concanavalin A, or different (a heterotetramer), as in hemoglobin. Hemoglobin has 4 similar sub-units while immunoglobulins have 2 very different sub-units. The different sub-units may each have their own activity, such as binding biotin in avidin tetramers, or share a common biological property, such as the allosteric binding of oxygen in hemoglobin.
See also
Cluster chemistry; atomic and molecular clusters
Tetrameric protein
Tetramerium, a genus of plants belonging to the family Acanthaceae
Tetramery (botany), having four parts in a distinct whorl of a plant structure
References
Tetramers (chemistry) | Tetramer | Chemistry | 306 |
50,799,162 | https://en.wikipedia.org/wiki/Bondarzewia%20tibetica | Bondarzewia tibetica is a species of polypore fungus in the family Bondarzewiaceae. Found in Tibet, it was described as new to science in 2016.
References
External links
Fungi described in 2016
Fungi of China
Russulales
Taxa named by Bao-Kai Cui
Fungus species | Bondarzewia tibetica | Biology | 62 |
1,218,281 | https://en.wikipedia.org/wiki/Star%20of%20Life | The Star of Life is a symbol used to identify emergency medical services. It features a blue six-pointed star, outlined by a white border. The middle contains a Rod of Asclepius – an ancient symbol of medicine. The Star of Life can be found on ambulances, medical personnel uniforms, and other objects associated with emergency medicine or first aid. Elevators marked with the symbol indicate the lift is large enough to hold a stretcher. Medical bracelets sometimes use the symbol to indicate one has a medical condition that emergency services should be aware of.
The Star of Life is widely used around the world, but like many international symbols, it has not been adopted everywhere. Its use is restricted to authorized personnel in some countries.
History
The Star of Life originated in the United States. In 1963, the American Medical Association (AMA) designed the Star of Life as a "universal" symbol for medical identification. The AMA did not trademark or copyright the symbol, stating it was being "freely offered" to manufacturers, and also was for use on cards carried by persons with a medical condition.
By way of a 1964 resolution, it was adopted by the World Medical Association "as the universal emergency medical information symbol." The Star of Life was promoted by the American Red Cross and rapidly adopted worldwide.
In 1970, when the American Medical Association's Committee on Emergency Medical Services formed the National Registry of Emergency Medical Technicians (NREMT), the AMA chose the Star of Life to designate nationally certified Emergency Medical Services personnel. In 1973, the NREMT filed for a trademark for the Star of Life logo, under the category of a collective membership mark. This version featured a Star of Life enveloped by a circle. NREMT's trademark was granted in 1975, but "was not renewed and therefore has expired." NREMT subsequently registered a service mark featuring a Star of Life and the words "NATIONAL REGISTRY." This logo's trademark remains active.
Prior to the use of the Star of Life, ambulances in the United States commonly displayed a safety orange-colored cross on a square background. This was only a slight variation from the inverted Swiss flag (✚) used by the International Red Cross and Red Crescent Movement. In 1973, the American Red Cross complained that the traditional orange cross too closely resembled their logo of a red cross on a white background, the usage of which is restricted by the Geneva Conventions. Dr. Dawson Mills, Chief of the EMS Branch, National Highway Traffic Safety Administration in the United States, asked the National Registry of EMTs for permission to use the star as the "national identifier for Emergency Medical Services" in the United States, and in 1977 reported to Congress that it had become the national standard.
Leo R. Schwartz, Chief of the EMS Branch, National Highway Traffic Safety Administration in the United States, modified the Star of Life by adding the six main tasks of Emergency Medical Services and changing the color from red (used by the AMA) to blue. The "blue Star of Life" was recommended for adoption by the United States Department of Health, Education and Welfare on October 25, 1973, and was registered as a certification mark on February 1, 1977 in the name of the National Highway Traffic Safety Administration. Both NREMT's version and the US government's modification omit the white outline around the edge common on many of today's ambulances.
The US government's registration lists uses of the Star of Life, including "emergency medical care" and "emergency medical care vehicles." The US government's registration does not list any international classes of trademark.
Federal standards dictate requirements ambulances in the US must satisfy in order to display the Star of Life. The federal government has given states additional authority to manage the symbol. Private ambulance operators like AMR have trademarked logos featuring an embedded Star of Life.
Symbolism
The six branches of the star represent the six main tasks executed by rescuers all through the emergency chain:
Detection: The first rescuers on the scene, usually untrained civilians or those involved in the incident, observe the scene, understand the problem, identify the dangers to themselves and the others, and take appropriate measures to ensure their safety on the scene (environmental, electricity, chemicals, radiation, etc.).
Reporting: The call for professional help is made and dispatch is connected with the victims, providing emergency medical dispatch.
Response: The first rescuers provide first aid and immediate care to the extent of their capabilities.
On scene care: The EMS personnel arrive and provide immediate care to the extent of their capabilities on-scene.
Care in transit: The EMS personnel proceed to transfer the patient to a hospital via an ambulance or helicopter for specialized care. They provide medical care during the transportation.
Transfer to definitive care: Appropriate specialized care is provided at the hospital.
Global usage
Asia
Ambulances in Taiwan, mainland China, Hong Kong, and Macau display the Star of Life. The symbol is also widely used in Japan, Malaysia, the Philippines, and Singapore. It is less common in South Korea, where ambulances display a green cross and green lights.
West Asia
Most ambulances in Turkey do not use the Star of Life. Ambulances in Iran commonly display the Star of Life. Egyptian Ministry of Health ambulances display the Star of Life on one rear door, and a red crescent on the other. National Ambulance in the United Arab Emirates does not display the Star of Life, instead showing an EKG graphic on the sides and rear. In Saudi Arabia, the Red Crescent Society answers the emergency line, and provides service in vehicles bearing the crescent emblem.
In Israel, Magen David Adom displays a red Jewish star, sometimes shown with a Star of Life. The Jewish star is paired with the red crystal in times of conflict. Israel's other ambulance operator, United Hatzalah, has a logo based on both stars. "The Star of Life is a universal symbol of emergency medical care. The Star of David is our national symbol. Combining these two elements reminds us of the messages that we...focus on." said United Hatzalah's president.
In Palestine, the Palestine Red Crescent Society provides ambulance service in vehicles displaying a red crescent, alongside private operators who often display the Star of Life.
India
Indian Automotive Industry Standard AIS-125 is the National Ambulance Code of India. This document is managed jointly by the Automotive Research Association and the Ministry of Road Transport and Highways. It requires ambulances display several distinctive markings including a reflective Battenburg pattern, the word "AMBULANCE", and the emergency telephone number. The standard states "Displayed on the upper half of the left side should be a 'Star of Life' symbol, with a size of 40cm x 40cm... Displayed on the left back window should be a 'Star of Life' symbol, with a size of 85% of the window".
Central and South America
The Star of Life is used in many Spanish-speaking nations, where it is known by its Spanish translation. In Argentina, SAME, the capital district ambulance service, uses a green and slightly rounded version of the Star of Life in its logo. In Brazil, the Star of Life is known by its Portuguese name Estrela da Vida. A red Star of Life is incorporated into the national emergency service's visual identity standards. Brazil's ABNT Standard NBR 14561 for ambulance design makes direct reference to being based on the American Star of Life vehicle. Ambulances which do not comply with the Brazilian standard are prohibited from displaying the Star of Life or the word “RESGATE” (rescue).
Australia and New Zealand
A few patient transport providers like Ambulance Service Australia use the Star of Life. However, it is far more common to see the Maltese Cross in this region.
Europe
The European Union's ambulance design standard CEN 1789 in section A1 Recognition and Visibility of Ambulances states:With the exception of Red Cross societies or where the "Star of life" is locally registered, a blue reflective "Star of life" emblem (minimum size 500 mm) together with reflective letters, numerals or a symbol identifying the organization and the vehicle, should be applied to the roof of the ambulance...a blue reflective "Star of life" emblem should [also] be applied to the sides and rear of the ambulance.
In Portugal, the Star of Life is referred to by its Portuguese name, Estrela da Vida. In March 1977, the then National Ambulance Service of Portugal (the present INEM, National Medical Emergency Institute) filed a trademark registration on the symbol, which was granted in 1981. The INEM continues to hold a trademark registration in the country, which is used to certify that vehicles are "in accordance with INEM standards" and personnel have "proper preparation". It may also be used on maps and road signs "to indicate the location or access to qualified emergency medical care services".
Belgian EMTs use blue stars; nurses, doctors, and ambulance drivers wear other colors. In the Netherlands, the Star of Life is widely used. The Dutch government owns a trademark on the symbol, alongside the paint scheme used on emergency vehicles.
After Latvia became independent in 1918, ambulance services in the 1920s and 1930s were provided by the Latvian Red Cross and thus displayed the Red Cross symbol. During Soviet control of Latvia, that symbol continued to be used as a general symbol of medicine. After the restoration of independence, the Riga Emergency Medical Service Station called on the government to adopt the Star of Life (sometimes called sniegpārsla, 'snowflake'), and the change was made with effect from 12 January 1995. It is currently used on vehicles, uniforms and medical service buildings.
In Germany, the symbol was registered in 1993 by industry association BKS, an umbrella organization for private rescue services there. In December 2020, another ambulance industry association, DBRD, filed a challenge with the German Patent and Trade Mark Office. The challengers hired a law firm specializing in intellectual property rights to research the symbol's history. This led to allegations the Star of Life had already been in widespread public use since the 1960s, before being registered in Germany (or the US for that matter). The challengers claimed the trademark office seemed not to have been properly informed of these facts, which presumably could have led the original application to have been denied on public domain grounds. The European Union Intellectual Property Office's 2014 rejection of a trademark on a minimally modified Star of Life was also cited. BKS continues to hold the German registration of the symbol.
In the United Kingdom, some NHS ambulances display the Star of Life in addition to the local Ambulance Service emblem. These (latter) emblems have a pale gold six-spoked wheel with a Rod of Asclepius in the foreground. A crown and Maltese Cross, a common EMS emblem in Commonwealth nations, are included in the design.
Gallery
Unicode
The snake-and-staff element of the symbol has a Unicode code point called the Staff of Aesculapius. Unicode has no dedicated code point for the Star of Life.
See also
First aid
Emergency medical technician
Emblems of the International Red Cross and Red Crescent Movement
References
External links
Wikibooks:First Aid
Star of Life from EMS
The San Diego Paramedics: Caduceus
Star symbols
First aid
Certification marks
Emergency medical services
Medical symbols
Symbols introduced in the 1960s
1963 introductions | Star of Life | Mathematics | 2,309 |
23,968,458 | https://en.wikipedia.org/wiki/ERF%20damper | An ERF damper or electrorheological fluid damper, is a type of quick-response active non-linear damper used in high-sensitivity vibration control.
References
Mechanical engineering | ERF damper | Physics,Engineering | 40 |
25,477,045 | https://en.wikipedia.org/wiki/Doctor%27s%20visit | A doctor's visit, also known as a physician office visit or a consultation, or a ward round in an inpatient care context, is a meeting between a patient and a physician to receive health advice or a treatment plan for a symptom or condition, most often at a professional health facility such as a doctor's office, clinic or hospital. According to a survey in the United States, a physician typically sees between 50 and 100 patients per week; this may vary with medical specialty but differs only little by community size, such as metropolitan versus rural areas.
Procedure
The four great cornerstones of diagnostic medicine are anatomy (structure: what is there), physiology (how the structure/s work), pathology (what goes wrong with the anatomy and physiology), and psychology (mind and behavior). In addition, the physician should consider the patient in their 'well' context rather than simply as a walking medical condition. This means the socio-political context of the patient (family, work, stress, beliefs) should be assessed as it often offers vital clues to the patient's condition and further management.
A patient typically presents a set of complaints (the symptoms) to the physician, who then performs a diagnostic procedure, which generally includes obtaining further information about the patient's symptoms, previous state of health, living conditions, and so forth. The physician then makes a review of systems (ROS) or systems inquiry, which is a set of ordered questions about each major body system in order: general (such as weight loss), endocrine, cardio-respiratory, etc. Next comes the actual physical examination and other medical tests; the findings are recorded, leading to a list of possible diagnoses. These will be investigated in order of probability. The next task is to enlist the patient's agreement to a management plan, which will include treatment as well as plans for follow-up. Importantly, during this process the healthcare provider educates the patient about the causes, progression, outcomes, and possible treatments of their ailments, as well as often providing advice for maintaining health.
The physician's expertise comes from knowledge of what is healthy and normal, contrasted with knowledge and experience of other people who have had similar symptoms (unhealthy and abnormal), and from the proven ability to relieve symptoms with medicines (pharmacology) or other therapies about which the patient may initially have little knowledge.
Duration
A survey in the United States came to the result that, overall, a physician sees each patient for 13 to 16 minutes. Anesthesiologists, neurologists, and radiologists spend more time with each patient, with 25 minutes or more. On the other hand, primary care physicians spend a median of 13 to 16 minutes per patient, whereas dermatologists and ophthalmologists spend the least time, with a median of 9 to 12 minutes per patient. Overall, female physicians spend more time with each patient than do male physicians.
For the patient, the time spent at the hospital can be substantially longer due to various waiting times, administrative steps or additional care from other health personnel. Regarding wait time, patients that are well informed of the necessary procedures in a clinical encounter, and the time it is expected to take, are generally more satisfied even if there is a longer waiting time.
Web-based health care
With increasing access to computers and published online medical articles, the internet has increased the ability to perform self-diagnosis instead of going to a professional health care provider. Doctors may be fearful of misleading information and being inundated by emails from patients which take time to read and respond to (time for which they are not paid). About three-quarters of the U.S. population reports having a primary care physician, but the Primary Care Assessment Survey found "a significant erosion" in the quality of primary care from 1996 to 2000, most notably in the interpersonal treatment and thoroughness of physical examinations.
Research and development
Analysis
A study systematically assessed advice given by professional general practitioners, typically in the form of verbal-only consultation, for weight-loss to obese patients. They found it rarely included effective methods, was mostly generic, and was rarely tailored to patients' existing knowledge and behaviours.
The National Institute on Aging has produced a list of "Tips for Talking With Your Doctor" that includes asking "if your doctor has any brochures, fact sheets, DVDs, CDs, cassettes, or videotapes about your health conditions or treatments" – for example if a patient's blood pressure was found to be high, the patient could get "brochures explaining what causes high blood pressure and what [the person] can do about it".
Virtual doctor's visit
Software and health records
See also
House call
Doctor-patient relationship
General medical examination
References
External links
Practice of medicine
General practitioners
Human activities | Doctor's visit | Biology | 987 |
1,280,053 | https://en.wikipedia.org/wiki/History%20of%20radar | The history of radar (where radar stands for radio detection and ranging) started with experiments by Heinrich Hertz in the late 19th century that showed that radio waves were reflected by metallic objects. This possibility was suggested in James Clerk Maxwell's seminal work on electromagnetism. However, it was not until the early 20th century that systems able to use these principles were becoming widely available, and it was German inventor Christian Hülsmeyer who first used them to build a simple ship detection device intended to help avoid collisions in fog (Reichspatent Nr. 165546 in 1904). True radar which provided directional and ranging information, such as the British Chain Home early warning system, was developed over the next two decades.
The development of systems able to produce short pulses of radio energy was the key advance that allowed modern radar systems to come into existence. By timing the pulses on an oscilloscope, the range could be determined and the direction of the antenna revealed the angular location of the targets. The two, combined, produced a "fix", locating the target relative to the antenna. In the 1934–1939 period, eight nations developed independently, and in great secrecy, systems of this type: the United Kingdom, Germany, the United States, the USSR, Japan, the Netherlands, France, and Italy. In addition, Britain shared their information with the United States and four Commonwealth countries: Australia, Canada, New Zealand, and South Africa, and these countries also developed their own radar systems. During the war, Hungary was added to this list. The term RADAR was coined in 1939 by the United States Signal Corps as it worked on these systems for the Navy.
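As a point of reference (a standard textbook relation rather than a formula quoted by any of the developers described here), the range follows from the measured round-trip delay Δt of a pulse echo travelling at the speed of light c:

R = \frac{c\, \Delta t}{2}

The factor of two accounts for the out-and-back path; a delay of 1 ms, for example, corresponds to a target roughly 150 km away.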
Progress during the war was rapid and of great importance, probably one of the decisive factors for the victory of the Allies. A key development was the magnetron in the UK, which allowed the creation of relatively small systems with sub-meter resolution. By the end of hostilities, Britain, Germany, the United States, the USSR, and Japan had a wide variety of land- and sea-based radars as well as small airborne systems. After the war, radar use was widened to numerous fields, including civil aviation, marine navigation, radar guns for police, meteorology, and medicine. Key developments in the post-war period include the travelling wave tube as a way to produce large quantities of coherent microwaves, the development of signal delay systems that led to phased array radars, and ever-increasing frequencies that allow higher resolutions. Increases in signal processing capability due to the introduction of solid-state computers has also had a large impact on radar use.
Significance
The place of radar in the larger story of science and technology is argued differently by different authors. On one hand, radar contributed very little to theory, which was largely known since the days of Maxwell and Hertz. Therefore radar did not advance science, but was instead a matter of technology and engineering. Maurice Ponte, one of the developers of radar in France, made this argument explicitly.
But others point out the immense practical consequences of the development of radar. Far more than the atomic bomb, radar contributed to the Allied victory in World War II. Robert Buderi states that it was also the precursor of much modern technology.
In later years radar was used in scientific instruments, such as weather radar and radar astronomy.
Early contributors
Heinrich Hertz
In 1886–1888 the German physicist Heinrich Hertz conducted his series of experiments that proved the existence of electromagnetic waves (including radio waves), predicted in equations developed in 1862–4 by the Scottish physicist James Clerk Maxwell. In Hertz's 1887 experiment he found that these waves would transmit through different types of materials and also would reflect off metal surfaces in his lab as well as conductors and dielectrics. The nature of these waves being similar to visible light in their ability to be reflected, refracted, and polarized would be shown by Hertz and subsequent experiments by other physicists.
Guglielmo Marconi
Radio pioneer Guglielmo Marconi noticed radio waves were being reflected back to the transmitter by objects in radio beacon experiments he conducted on March 3, 1899, on Salisbury Plain. In 1916 he and British engineer Charles Samuel Franklin used short-waves in their experiments, critical to the practical development of radar. He would relate his findings 6 years later in a 1922 paper delivered before the Institution of Electrical Engineers in London.
Christian Hülsmeyer
In 1904, Christian Hülsmeyer gave public demonstrations in Germany and the Netherlands of the use of radio echoes to detect ships so that collisions could be avoided. His device consisted of a simple spark gap used to generate a signal that was aimed using a dipole antenna with a cylindrical parabolic reflector. When a signal reflected from a ship was picked up by a similar antenna attached to the separate coherer receiver, a bell sounded. During bad weather or fog, the device would be periodically spun to check for nearby ships. The apparatus detected the presence of ships up to , and Hülsmeyer planned to extend its capability to . It did not provide range (distance) information, only warning of a nearby object. He patented the device, called the telemobiloscope, but due to lack of interest by the naval authorities the invention was not put into production.
Hülsmeyer also received a patent amendment for estimating the range to the ship. Using a vertical scan of the horizon with the telemobiloscope mounted on a tower, the operator would find the angle at which the return was the most intense and deduce, by simple triangulation, the approximate distance. This is in contrast to the later development of pulsed radar, which determines distance via two-way transit time of the pulse.
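As an illustrative sketch only (the symbols here are not taken from the patent): with the telemobiloscope mounted at a known height h above the water and the strongest return found at a depression angle θ below the horizontal, simple trigonometry gives the approximate range to the ship as

R \approx \frac{h}{\tan\theta}

which is the triangulation idea described above, in contrast to pulsed radar's two-way transit-time measurement.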
Germany
A radio-based device for remotely indicating the presence of ships was built in Germany by Christian Hülsmeyer in 1904. This has been recognized by the Institute of Electrical and Electronics Engineers as the invention of the first working radar system by inauguration of an IEEE Historic Milestone in October 2019.
Over the following three decades in Germany, a number of radio-based detection systems were developed but none were pulsed radars. This situation changed before World War II. Developments in three leading industries are described.
GEMA
In the early 1930s, physicist Rudolf Kühnhold, Scientific Director at the Kriegsmarine (German navy) Nachrichtenmittel-Versuchsanstalt (NVA—Experimental Institute of Communication Systems) in Kiel, was attempting to improve the acoustical methods of underwater detection of ships. He concluded that the desired accuracy in measuring distance to targets could be attained only by using pulsed electromagnetic waves.
During 1933, Kühnhold first attempted to test this concept with a transmitting and receiving set that operated in the microwave region at 13.5 cm (2.22 GHz). The transmitter used a Barkhausen–Kurz tube (the first microwave generator) that produced only 0.1 watt. Unsuccessful with this, he asked for assistance from Paul-Günther Erbslöh and Hans-Karl Freiherr von Willisen, amateur radio operators who were developing a VHF system for communications. They enthusiastically agreed, and in January 1934, formed a company, Gesellschaft für Elektroakustische und Mechanische Apparate (GEMA), for the effort. From the start, the firm was always called simply GEMA.
Work on a Funkmessgerät für Untersuchung (radio measuring device for research) began in earnest at GEMA. Hans Hollmann and Theodor Schultes, both affiliated with the prestigious Heinrich Hertz Institute in Berlin, were added as consultants. The first apparatus used a split-anode magnetron purchased from Philips in the Netherlands. This provided about 70 W at 50 cm (600 MHz), but suffered from frequency instability. Hollmann built a regenerative receiver and Schultes developed Yagi antennas for transmitting and receiving. In June 1934, large vessels passing through the Kiel Harbor were detected by Doppler-beat interference at a distance of about . In October, strong reflections were observed from an aircraft that happened to fly through the beam; this opened consideration of targets other than ships.
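As background (a standard continuous-wave interference relation, not a figure taken from the GEMA work): the beat arises because the total transmitter-target-receiver path length L changes as the target moves, so with wavelength λ the beat frequency is approximately

f_{\text{beat}} \approx \frac{1}{\lambda} \left| \frac{dL}{dt} \right|

For a transmitter and receiver at essentially the same location this reduces to the familiar 2 v_r / \lambda, where v_r is the target's radial speed.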
Kühnhold then shifted the GEMA work to a pulse-modulated system. A new 50 cm (600 MHz) Philips magnetron with better frequency stability was used. It was modulated with 2-μs pulses at a PRF of 2000 Hz. The transmitting antenna was an array of 10 pairs of dipoles with a reflecting mesh. The wide-band regenerative receiver used Acorn tubes from RCA, and the receiving antenna had three pairs of dipoles and incorporated lobe switching. A blocking device (a duplexer) shut the receiver input when the transmitter pulsed. A Braun tube (a CRT) was used for displaying the range.
The equipment was first tested at a NVA site at the Lübecker Bay near Pelzerhaken. During May 1935, it detected returns from woods across the bay at a range of . It had limited success, however, in detecting a research ship, Welle, only a short distance away. The receiver was then rebuilt, becoming a super-regenerative set with two intermediate-frequency stages. With this improved receiver, the system readily tracked vessels at up to range.
In September 1935, a demonstration was given to the Commander-in-Chief of the Kriegsmarine. The system performance was excellent; the range was read off the Braun tube with a tolerance of 50 meters (less than 1 percent variance), and the lobe switching allowed a directional accuracy of 0.1 degree. Historically, this marked the first naval vessel equipped with radar. Although this apparatus was not put into production, GEMA was funded to develop similar systems operating around 50 cm (500 MHz). These became the Seetakt for the Kriegsmarine and the Freya for the Luftwaffe (German Air Force).
Kühnhold remained with the NVA, but also consulted with GEMA. He is considered by many in Germany as the Father of Radar. During 1933–6, Hollmann wrote the first comprehensive treatise on microwaves, Physik und Technik der ultrakurzen Wellen (Physics and Technique of Ultrashort Waves), Springer 1938.
Telefunken
In 1933, when Kühnhold at the NVA was first experimenting with microwaves, he had sought information from Telefunken on microwave tubes. (Telefunken was the largest supplier of radio products in Germany) There, Wilhelm Tolmé Runge had told him that no vacuum tubes were available for these frequencies. In fact, Runge was already experimenting with high-frequency transmitters and had Telefunken's tube department working on cm-wavelength devices.
In the summer of 1935, Runge, now Director of Telefunken's Radio Research Laboratory, initiated an internally funded project in radio-based detection. Using Barkhausen-Kurz tubes, a 50 cm (600 MHz) receiver and 0.5-W transmitter were built. With the antennas placed flat on the ground some distance apart, Runge arranged for an aircraft to fly overhead and found that the receiver gave a strong Doppler-beat interference signal.
Runge, now with Hans Hollmann as a consultant, continued in developing a 1.8 m (170 MHz) system using pulse-modulation. Wilhelm Stepp developed a transmit-receive device (a duplexer) for allowing a common antenna. Stepp also code-named the system Darmstadt after his home town, starting the practice in Telefunken of giving the systems names of cities. The system, with only a few watts transmitter power, was first tested in February 1936, detecting an aircraft at about distance. This led the Luftwaffe to fund the development of a 50 cm (600 MHz) gun-laying system, the Würzburg.
Lorenz
Since before the First World War, Standard Elektrik Lorenz had been the main supplier of communication equipment for the German military and was the main rival of Telefunken. In late 1935, when Lorenz found that Runge at Telefunken was doing research in radio-based detection equipment, they started a similar activity under Gottfried Müller. A pulse-modulated set called Einheit für Abfragung (DFA – Device for Detection) was built. It used a type DS-310 tube (similar to the Acorn) operating at 70 cm (430 MHz) and about 1 kW power, it had identical transmitting and receiving antennas made with rows of half-wavelength dipoles backed by a reflecting screen.
In early 1936, initial experiments gave reflections from large buildings at up to about . The power was doubled by using two tubes, and in mid-1936, the equipment was set up on cliffs near Kiel, and good detections of ships at and aircraft at were attained.
The success of this experimental set was reported to the Kriegsmarine, but they showed no interest; they were already fully engaged with GEMA for similar equipment. Also, because of extensive agreements between Lorenz and many foreign countries, the naval authorities had reservations concerning the company handling classified work. The DFA was then demonstrated to the Heer (German Army), and they contracted with Lorenz for developing Kurfürst (Elector), a system for supporting Flugzeugabwehrkanone (Flak, anti-aircraft guns).
United Kingdom
In 1915, Robert Watson Watt joined the Meteorological Office as a meteorologist, working at an outstation at Aldershot in Hampshire. Over the next 20 years, he studied atmospheric phenomena and developed the use of radio signals generated by lightning strikes to map out the position of thunderstorms. The difficulty in pinpointing the direction of these fleeting signals using rotatable directional antennas led, in 1923, to the use of oscilloscopes in order to display the signals. The operation eventually moved to the outskirts of Slough in Berkshire, and in 1927 formed the Radio Research Station (RRS), Slough, an entity under the Department of Scientific and Industrial Research (DSIR). Watson Watt was appointed the RRS Superintendent.
As war clouds gathered over Britain, the likelihood of air raids and the threat of invasion by air and sea drove a major effort in applying science and technology to defence. In November 1934, the Air Ministry established the Committee for the Scientific Survey of Air Defence (CSSAD) with the official function of considering "how far recent advances in scientific and technical knowledge can be used to strengthen the present methods of defence against hostile aircraft". Commonly called the "Tizard Committee" after its Chairman, Sir Henry Tizard, this group had a profound influence on technical developments in Britain.
H. E. Wimperis, Director of Scientific Research at the Air Ministry and a member of the Tizard Committee, had read about a German newspaper article claiming that the Germans had built a death ray using radio signals, accompanied by an image of a very large radio antenna. Both concerned and potentially excited by this possibility, but highly skeptical at the same time, Wimperis looked for an expert in the field of radio propagation who might be able to pass judgement on the concept. Watt, Superintendent of the RRS, was now well established as an authority in the field of radio, and in January 1935, Wimperis contacted him asking if radio might be used for such a device. After discussing this with his scientific assistant, Arnold F. 'Skip' Wilkins, Wilkins quickly produced a back-of-the-envelope calculation that showed the energy required would be enormous. Watt wrote back that this was unlikely, but added the following comment: "Attention is being turned to the still difficult, but less unpromising, problem of radio detection and numerical considerations on the method of detection by reflected radio waves will be submitted when required".
Over the following several weeks, Wilkins considered the radio detection problem. He outlined an approach and backed it with detailed calculations of necessary transmitter power, reflection characteristics of an aircraft, and needed receiver sensitivity. He proposed using a directional receiver based on Watt's lightning detection concept, listening for powerful signals from a separate transmitter. Timing, and thus distance measurements, would be accomplished by triggering the oscilloscope's trace with a muted signal from the transmitter, and then simply measuring the returns against a scale. Watson Watt sent this information to the Air Ministry on February 12, 1935, in a secret report titled "The Detection of Aircraft by Radio Methods".
Reflection of radio signals was critical to the proposed technique, and the Air Ministry asked if this could be proven. To test this, Wilkins set up receiving equipment in a field near Upper Stowe, Northamptonshire. On February 26, 1935, a Handley Page Heyford bomber flew along a path between the receiving station and the transmitting towers of a BBC shortwave station in nearby Daventry. The aircraft reflected the 6 MHz (49 m) BBC signal, and this was readily detected by Arnold "Skip" Wilkins using Doppler-beat interference at ranges up to . This convincing test, known as the Daventry Experiment, was witnessed by a representative from the Air Ministry, and led to the immediate authorization to build a full demonstration system. This experiment was later reproduced by Wilkins for the 1977 BBC television series The Secret War episode "To See a Hundred Miles".
Based on pulsed transmission as used for probing the ionosphere, a preliminary system was designed and built at the RRS by the team. Their existing transmitter had a peak power of about 1 kW, and Wilkins had estimated that 100 kW would be needed. Edward George Bowen was added to the team to design and build such a transmitter. Bowen's transmitter operated at 6 MHz (50 m), had a pulse-repetition rate of 25 Hz, a pulse width of 25 μs, and approached the desired power.
Orfordness, a narrow peninsula in Suffolk along the coast of the North Sea, was selected as the test site. Here the equipment would be openly operated in the guise of an ionospheric monitoring station. In mid-May 1935, the equipment was moved to Orfordness. Six wooden towers were erected, two for stringing the transmitting antenna, and four for corners of crossed receiving antennas. In June, general testing of the equipment began.
On June 17, the first target was detected — a Supermarine Scapa flying boat at range. It is historically correct that, on June 17, 1935, radio-based detection and ranging was first demonstrated in Britain. Watson Watt, Wilkins, and Bowen are generally credited with initiating what would later be called radar in this nation.
In December 1935, the British Treasury appropriated £60,000 for a five-station system called Chain Home (CH), covering approaches to the Thames Estuary. The secretary of the Tizard Committee, Albert Percival Rowe, coined the acronym RDF as a cover for the work, meaning Range and Direction Finding but suggesting the already well-known Radio Direction Finding.
Late in 1935, responding to Lindemann's recognition of the need for night detection and interception gear, and realizing existing transmitters were too heavy for aircraft, Bowen proposed fitting only receivers, what would later be called bistatic radar. Frederick Lindemann's proposals for infrared sensors and aerial mines would prove impractical. It would take Bowen's efforts — at the urging of Tizard, who became increasingly concerned about the need — to see air-to-surface-vessel (ASV) radar and, through it, aircraft interception (AI) radar, to fruition.
In 1937, Bowen's team set their crude ASV radar, the world's first airborne set, to detect the Home Fleet in dismal weather. Only in spring 1939, "as a matter of great urgency" after the failure of the searchlight system Silhouette, did attention turn to using ASV for air-to-air interception (AI). Demonstrated in June 1939, AI got a warm reception from Air Chief Marshal Hugh Dowding, and even more so from Churchill. This proved problematic. Its accuracy, dependent on the height of the aircraft, meant that CH was not accurate enough to place an aircraft within its detection range, and an additional system was required. Its wooden chassis had a disturbing tendency to catch fire (even with attention from expert technicians), so much so that Dowding, when told that Watson-Watt could provide hundreds of sets, demanded "ten that work". The Cossor and MetroVick sets were overweight for aircraft use and the RAF lacked night fighter pilots, observers, and suitable aircraft.
In 1940, John Randall and Harry Boot developed the cavity magnetron, which made ten-centimetre-wavelength radar a reality. This device, the size of a small dinner plate, could be carried easily on aircraft and the short wavelength meant the antenna would also be small and hence suitable for mounting on aircraft. The short wavelength and high power made it very effective at spotting submarines from the air.
To aid Chain Home in making height calculations, at Dowding's request, the Electrical Calculator Type Q (commonly called the "Fruit Machine") was introduced in 1940.
The solution to night intercepts would be provided by Dr. W. B. "Ben" Lewis, who proposed a new, more accurate ground control display, the Plan Position Indicator (PPI), a new Ground-Controlled Interception (GCI) radar, and reliable AI radar. The AI sets would ultimately be built by EMI. GCI was unquestionably delayed by Watson-Watt's opposition to it and his belief that CH was sufficient, as well as by Bowen's preference for using ASV for navigation, despite Bomber Command disclaiming a need for it, and by Tizard's reliance on the faulty Silhouette system.
Air Ministry
In March 1936, the work at Orfordness was moved to Bawdsey Manor, nearby on the mainland. Until this time, the work had officially still been under the DSIR, but was now transferred to the Air Ministry. At the new Bawdsey Research Station, the Chain Home (CH) equipment was assembled as a prototype. There were equipment problems when the Royal Air Force (RAF) first exercised the prototype station in September 1936. These were cleared by the next April, and the Air Ministry started plans for a larger network of stations.
Initial hardware at CH stations was as follows: The transmitter operated on four pre-selected frequencies between 20 and 55 MHz, adjustable within 15 seconds, and delivered a peak power of 200 kW. The pulse duration was adjustable between 5 and 25 μs, with a repetition rate selectable as either 25 or 50 Hz. For synchronization of all CH transmitters, the pulse generator was locked to the 50 Hz of the British power grid. Four steel towers supported transmitting antennas, and four wooden towers supported cross-dipole arrays at three different levels. A goniometer was used to improve the directional accuracy from the multiple receiving antennas.
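As an illustrative calculation (not a figure from the original specifications), the low repetition rate left an enormous margin before echoes from one pulse could be confused with those of the next:

```latex
% Maximum unambiguous range for a pulse repetition frequency f_p:
% an echo must return before the following pulse is transmitted.
R_{\max} = \frac{c}{2 f_p} = \frac{3\times10^{8}\ \mathrm{m/s}}{2 \times 50\ \mathrm{Hz}} = 3000\ \mathrm{km}
```

This is far beyond the useful detection range of a CH station, so range ambiguity was not the limiting factor; as noted above, the rate was tied to the 50 Hz mains for synchronization of the transmitters.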
By the summer of 1937, 20 initial CH stations were in check-out operation. A major RAF exercise was performed before the end of the year, and was such a success that £10,000,000 was appropriated by the Treasury for an eventual full chain of coastal stations. At the start of 1938, the RAF took over control of all CH stations, and the network began regular operations.
In May 1938, Rowe replaced Watson Watt as Superintendent at Bawdsey. In addition to the work on CH and successor systems, there was now major work in airborne RDF equipment. This was led by E. G. Bowen and centered on 200-MHz (1.5 m) sets. The higher frequency allowed smaller antennas, appropriate for aircraft installation.
From the initiation of RDF work at Orfordness, the Air Ministry had kept the British Army and the Royal Navy generally informed; this led to both of these forces having their own RDF developments.
British Army
In 1931, at the Woolwich Research Station of the Army's Signals Experimental Establishment (SEE), W. A. S. Butement and P. E. Pollard had examined pulsed 600 MHz (50-cm) signals for detection of ships. Although they prepared a memorandum on this subject and performed preliminary experiments, for undefined reasons the War Office did not give it consideration.
As the Air Ministry's work on RDF progressed, Colonel Peter Worlledge of the Royal Engineer and Signals Board met with Watson Watt and was briefed on the RDF equipment and techniques being developed at Orfordness. His report, "The Proposed Method of Aeroplane Detection and Its Prospects", led the SEE to set up an "Army Cell" at Bawdsey in October 1936. This was under E. Talbot Paris and the staff included Butement and Pollard. The Cell's work emphasized two general types of RDF equipment: gun-laying (GL) systems for assisting anti-aircraft guns and searchlights, and coastal-defense (CD) systems for directing coastal artillery and defending Army bases overseas.
Pollard led the first project, a gun-laying RDF code-named Mobile Radio Unit (MRU). This truck-mounted system was designed as a small version of a CH station. It operated at 23 MHz (13 m) with a power of 300 kW. A single tower supported a transmitting antenna, as well as two receiving antennas set orthogonally for estimating the signal bearing. In February 1937, a developmental unit detected an aircraft at a range of 60 miles (96 km). The Air Ministry also adopted this system as a mobile auxiliary to the CH system.
In early 1938, Butement started the development of a CD system based on Bowen's evolving 200-MHz (1.5-m) airborne sets. The transmitter had a 400 Hz pulse rate, a 2-μs pulse width, and 50 kW power (later increased to 150 kW). Although many of Bowen's transmitter and receiver components were used, the system would not be airborne so there were no limitations on antenna size.
Primary credit for introducing beamed RDF systems in Britain must be given to Butement. For the CD, he developed a large dipole array, high and wide, giving much narrower beams and higher gain. This could be rotated at a speed up to 1.5 revolutions per minute. For greater directional accuracy, lobe switching on the receiving antennas was adopted. As a part of this development, he formulated the first – at least in Britain – mathematical relationship that would later become well known as the "radar range equation".
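The exact expression Butement wrote down is not reproduced here; in its now-standard monostatic form, with $P_t$ the transmitted power, $G$ the transmitting antenna gain, $A_e$ the effective receiving aperture, $\sigma$ the target's radar cross-section, and $P_{\min}$ the smallest detectable echo power, the relationship reads:

```latex
% Standard radar range equation: maximum detection range
R_{\max} = \left( \frac{P_t\, G\, A_e\, \sigma}{(4\pi)^{2}\, P_{\min}} \right)^{1/4}
```

The fourth-root dependence means that large increases in power or antenna gain buy only modest increases in range, which helps explain the attraction of large, high-gain arrays such as the CD antenna.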
By May 1939, the CD RDF could detect aircraft flying as low as and at a range of . With an antenna above sea level, it could determine the range of a 2,000-ton ship at and with an angular accuracy of as little as a quarter of a degree.
Royal Navy
Although the Royal Navy maintained close contact with the Air Ministry work at Bawdsey, they chose to establish their own RDF development at the Experimental Department of His Majesty's Signal School (HMSS) in Portsmouth, Hampshire, on the south coast.
HMSS started RDF work in September 1935. Initial efforts, under R. F. Yeo, were in frequencies between 75 MHz (4 m) and 1.2 GHz (25 cm). All of the work was under the utmost secrecy; it could not even be discussed with other scientists and engineers at Portsmouth. A 75 MHz range-only set was eventually developed and designated Type 79X. Basic tests were done using a training ship, but the operation was unsatisfactory.
In August 1937, the RDF development at HMSS changed, with many of their best researchers brought into the activity. John D. S. Rawlinson was made responsible for improving the Type 79X. To increase the efficiency, he decreased the frequency to 43 MHz (7-metre wavelength). Designated Type 79Y, it had separate, stationary transmitting and receiving antennas.
Prototypes of the Type 79Y air-warning system were successfully tested at sea in early 1938. The detection range on aircraft was between 30 and 50 miles (48 and 80 km), depending on height. The systems were then placed into service in August on the cruiser HMS Sheffield and in October on the battleship HMS Rodney. These were the first vessels in the Royal Navy with RDF systems.
United States
In the United States, both the Navy and Army needed means of remotely locating enemy ships and aircraft. In 1930, both services initiated the development of radio equipment that could meet this need. There was little coordination of these efforts; thus, they will be described separately.
United States Navy
In the autumn of 1922, Albert H. Taylor and Leo C. Young at the U.S. Naval Aircraft Radio Laboratory were conducting communication experiments when they noticed that a wooden ship in the Potomac River was interfering with their signals. They prepared a memorandum suggesting that this might be used for ship detection in a harbor defense, but their suggestion was not taken up. In 1930, Lawrence A. Hyland working with Taylor and Young, now at the U.S. Naval Research Laboratory (NRL) in Washington, D.C., used a similar arrangement of radio equipment to detect a passing aircraft. This led to a proposal and patent for using this technique for detecting ships and aircraft.
A simple wave-interference apparatus can detect the presence of an object, but it cannot determine its location or velocity. That had to await the invention of pulsed radar, and later, additional encoding techniques to extract this information from a CW signal. When Taylor's group at the NRL were unsuccessful in getting interference radio accepted as a detection means, Young suggested trying pulsing techniques. This would also allow the direct determination of range to the target. In 1924, Hyland and Young had built such a transmitter for Gregory Breit and Merle A. Tuve at the Carnegie Institution of Washington for successfully measuring the height of the ionosphere.
Robert Morris Page was assigned by Taylor to implement Young's suggestion. Page designed a transmitter operating at 60 MHz that produced pulses 10 μs in duration, spaced 90 μs apart. In December 1934, the apparatus was used to detect a plane at a distance of one mile (1.6 km) flying up and down the Potomac. Although the detection range was small and the indications on the oscilloscope monitor were almost indistinct, it demonstrated the basic concept of a pulsed radar system. Based on this, Page, Taylor, and Young are usually credited with building and demonstrating the world's first pulsed radar.
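The pulse timing directly determines what such a set can measure. As an illustrative calculation (assuming the 90 μs quoted above is the listening interval between pulses, which is not stated explicitly here):

```latex
% Range from echo delay: the pulse travels out and back at the speed of light c.
R = \frac{c\,\Delta t}{2}

% Echoes can be timed over the 90 us listening interval, so
R_{\max} \approx \frac{(3\times10^{8}\ \mathrm{m/s})(90\times10^{-6}\ \mathrm{s})}{2} \approx 13.5\ \mathrm{km}

% while the 10 us pulse width limits range resolution to roughly
\Delta R \approx \frac{c\,\tau}{2} = \frac{(3\times10^{8})(10\times10^{-6})}{2}\ \mathrm{m} = 1.5\ \mathrm{km}
```

The one-mile detection reported above therefore sat well inside the set's unambiguous range.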
An important subsequent development by Page was the duplexer, a device that allowed the transmitter and receiver to use the same antenna without overwhelming or destroying the sensitive receiver circuitry. This also solved the problem associated with synchronization of separate transmitter and receiver antennas which is critical to accurate position determination of long-range targets.
The experiments with pulsed radar were continued, primarily in improving the receiver for handling the short pulses. In June 1936, the NRL's first prototype radar system, now operating at 28.6 MHz, was demonstrated to government officials, successfully tracking an aircraft at distances up to . Their radar was based on low frequency signals, at least by today's standards, and thus required large antennas, making it impractical for ship or aircraft mounting.
Antenna size is inversely proportional to the operating frequency; therefore, the operating frequency of the system was increased to 200 MHz, allowing much smaller antennas. The frequency of 200 MHz was the highest possible with existing transmitter tubes and other components. The new system was successfully tested at the NRL in April 1937. That same month, the first sea-borne testing was conducted. The equipment was temporarily installed on the USS Leary, with a Yagi antenna mounted on a gun barrel for sweeping the field of view.
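The inverse relationship is simply that of wavelength to frequency; a rough comparison for half-wave dipole elements (representative figures, not the exact NRL antenna dimensions):

```latex
\lambda = \frac{c}{f}, \qquad L_{\text{half-wave dipole}} \approx \frac{\lambda}{2}

% f = 28.6 MHz:  lambda ~ 10.5 m,  dipole ~ 5.2 m
% f = 200 MHz:   lambda ~ 1.5 m,   dipole ~ 0.75 m
```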
Based on success of the sea trials, the NRL further improved the system. Page developed the ring oscillator, allowing multiple output tubes and increasing the pulse-power to 15 kW in 5-μs pulses. A 20-by-23 ft (6 x 7 m), stacked-dipole "bedspring" antenna was used. In laboratory tests during 1938, the system, now designated XAF, detected planes at ranges up to . It was installed on the battleship USS New York for sea trials starting in January 1939, and became the first operational radio detection and ranging set in the U.S. fleet.
In May 1939, a contract was awarded to RCA for production. Designated CXAM, deliveries started in May 1940. The acronym RADAR was coined from "Radio Detection And Ranging". One of the first CXAM systems was placed aboard the USS California, a battleship that was sunk in the Japanese attack on Pearl Harbor on December 7, 1941.
United States Army
As the Great Depression started, economic conditions led the U.S. Army Signal Corps to consolidate its widespread laboratory operations to Fort Monmouth, New Jersey. On June 30, 1930, these were designated the Signal Corps Laboratories (SCL) and Lt. Colonel (Dr.) William R. Blair was appointed the SCL Director.
Among other activities, the SCL was made responsible for research in the detection of aircraft by acoustical and infrared radiation means. Blair had performed his doctoral research on the interaction of electromagnetic waves with solid materials, and naturally gave attention to this type of detection. Initially, attempts were made to detect infrared radiation, either from the heat of aircraft engines or as reflected from large searchlights with infrared filters, as well as from radio signals generated by the engine ignition.
Some success was made in the infrared detection, but little was accomplished using radio. In 1932, progress at the Naval Research Laboratory (NRL) on radio interference for aircraft detection was passed on to the Army. While it does not appear that any of this information was used by Blair, the SCL did undertake a systematic survey of what was then known throughout the world about the methods of generating, modulating, and detecting radio signals in the microwave region.
The SCL's first definitive efforts in radio-based target detection started in 1934 when the Chief of the Army Signal Corps, after seeing a microwave demonstration by RCA, suggested that radio-echo techniques be investigated. The SCL called this technique radio position-finding (RPF). Based on the previous investigations, the SCL first tried microwaves. During 1934 and 1935, tests of microwave RPF equipment resulted in Doppler-shifted signals being obtained, initially at only a few hundred feet distance and later greater than a mile. These tests involved a bi-static arrangement, with the transmitter at one end of the signal path and the receiver at the other, and the reflecting target passing through or near the path.
Blair was evidently not aware of the success of a pulsed system at the NRL in December 1934.
In 1936, W. Delmar Hershberger, SCL's Chief Engineer at that time, started a modest project in pulsed microwave transmission. Lacking success with microwaves, Hershberger visited the NRL (where he had earlier worked) and saw a demonstration of their pulsed set. Back at the SCL, he and Robert H. Noyes built an experimental apparatus using a 75 watt, 110 MHz (2.73 m) transmitter with pulse modulation and a receiver patterned on the one at the NRL. A request for project funding was turned down by the War Department, but $75,000 for support was diverted from a previous appropriation for a communication project.
In October 1936, Paul E. Watson became the SCL Chief Engineer and led the project. A field setup near the coast was made with the transmitter and receiver separated by a mile. On December 14, 1936, the experimental set detected aircraft flying in and out of New York City at ranges up to .
Work then began on a prototype system. Ralph I. Cole headed receiver work and William S. Marks led transmitter improvements. Separate receivers and antennas were used for azimuth and elevation detection. Both the receiving and transmitting antennas used large arrays of dipole wires on wooden frames. The system output was intended to aim a searchlight. The first demonstration of the full set was made on the night of May 26, 1937. A bomber was detected and then illuminated by the searchlight. The observers included the Secretary of War, Harry H. Woodring; he was so impressed that the next day orders were given for the full development of the system. Congress gave an appropriation of $250,000.
The frequency was increased to 200 MHz (1.5 m). The transmitter used 16 tubes in a ring oscillator circuit (developed at the NRL), producing about 75 kW peak power. Major James C. Moore was assigned to head the complex electrical and mechanical design of lobe switching antennas. Engineers from Western Electric and Westinghouse were brought in to assist in the overall development. Designated SCR-268, a prototype was successfully demonstrated in late 1938 at Fort Monroe, Virginia. The production of SCR-268 sets was started by Western Electric in 1939, and it entered service in early 1941.
Even before the SCR-268 entered service, it had been greatly improved. In a project led by Major (Dr.) Harold A. Zahl, two new configurations evolved – the SCR-270 (mobile) and the SCR-271 (fixed-site). Operation at 106 MHz (2.83 m) was selected, and a single water-cooled tube provided 8 kW (100 kW pulsed) output power. Westinghouse received a production contract, and started deliveries near the end of 1940.
The Army deployed five of the first SCR-270 sets around the island of Oahu in Hawaii. At 7:02 on the morning of December 7, 1941, one of these radars detected a flight of aircraft due north at a range of . The observation was passed on to an aircraft warning center where it was misidentified as a flight of U.S. bombers known to be approaching from the mainland. The alarm went unheeded, and at 7:48, the Japanese aircraft first struck at Pearl Harbor.
USSR
In 1895, Alexander Stepanovich Popov, a physics instructor at the Imperial Russian Navy school in Kronstadt, developed an apparatus using a coherer tube for detecting distant lightning strikes. The next year, he added a spark-gap transmitter and demonstrated the first radio communication set in Russia. During 1897, while testing this in communicating between two ships in the Baltic Sea, he took note of an interference beat caused by the passage of a third vessel. In his report, Popov wrote that this phenomenon might be used for detecting objects, but he did nothing more with this observation.
In the years following the 1917 Russian Revolution and the establishment of the Union of Soviet Socialist Republics (USSR or Soviet Union) in 1924, Germany's Luftwaffe had aircraft capable of penetrating deep into Soviet territory. Thus, the detection of aircraft at night or above clouds was of great interest to the Soviet Air Defense Forces (PVO).
The PVO depended on optical devices for locating targets, and had physicist Pavel K. Oshchepkov conducting research in possible improvement of these devices. In June 1933, Oshchepkov changed his research from optics to radio techniques and started the development of a razvedyvatel'naya elektromagnitnaya stantsiya (reconnaissance electromagnetic station). In a short time, Oshchepkov was made responsible for a technical expertise sector of PVO devoted to radiolokatory (radio-location) techniques as well as heading a Special Design Bureau (SKB, spetsialnoe konstruktorskoe byuro) in Leningrad.
Radio-location beginnings
The Glavnoe Artilleriyskoe Upravlenie (GAU, Main Artillery Administration) was considered the "brains" of the Red Army. It not only had competent engineers and physicists on its central staff, but also had a number of scientific research institutes. Thus, the GAU was also assigned the aircraft detection problem, and Lt. Gen. M. M. Lobanov was placed in charge.
After examining existing optical and acoustical equipment, Lobanov also turned to radio-location techniques. For this he approached the Tsentral’naya Radiolaboratoriya (TsRL, Central Radio Laboratory) in Leningrad. Here, Yu. K. Korovin was conducting research on VHF communications, and had built a 50 cm (600 MHz), 0.2 W transmitter using a Barkhausen–Kurz tube. For testing the concept, Korovin arranged the transmitting and receiving antennas along the flight path of an aircraft. On January 3, 1934, a Doppler signal was received by reflections from the aircraft at some 600 m range and 100–150 m altitude.
For further research in detection methods, a major conference on this subject was arranged for the PVO by the Russian Academy of Sciences (RAN). The conference was held in Leningrad in mid-January 1934, and chaired by Abram Fedorovich Ioffe, Director of the Leningrad Physical-Technical Institute (LPTI). Ioffe was generally considered the top Russian physicist of his time. All types of detection techniques were discussed, but radio-location received the greatest attention.
To distribute the conference findings to a wider audience, the proceedings were published the following month in a journal. This included all of the then-existing information on radio-location in the USSR, available (in Russian language) to researchers in this field throughout the world.
Recognizing the potential value of radio-location to the military, the GAU made a separate agreement with the Leningrad Electro-Physics Institute (LEPI), for a radio-location system. This technical effort was led by B. K. Shembel. The LEPI had built a transmitter and receiver to study the radio-reflection characteristics of various materials and targets. Shembel readily made this into an experimental bi-static radio-location system called Bistro (Rapid).
The Bistro transmitter, operating at 4.7 m (64 MHz), produced near 200 W and was frequency-modulated by a 1 kHz tone. A fixed transmitting antenna gave a broad coverage of what was called a radioekran (radio screen). A regenerative receiver, located some distance from the transmitter, had a dipole antenna mounted on a hand-driven reciprocating mechanism. An aircraft passing into the screened zone would reflect the radiation, and the receiver would detect the Doppler-interference beat between the transmitted and reflected signals.
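The beat arises because the total path length from transmitter to aircraft to receiver changes as the aircraft moves. A minimal sketch of the relationship, with the numerical example assuming an illustrative path-length rate of 100 m/s (not a measured value):

```latex
% CW Doppler-interference beat for a bistatic link.
% R_t, R_r: transmitter-to-target and target-to-receiver distances.
f_{\text{beat}} = \frac{1}{\lambda}\left|\frac{d}{dt}\bigl(R_t + R_r\bigr)\right|

% At Bistro's 4.7 m wavelength, a path-length rate of 100 m/s gives
f_{\text{beat}} \approx \frac{100\ \mathrm{m/s}}{4.7\ \mathrm{m}} \approx 21\ \mathrm{Hz}
```

Such a beat reveals a target's presence but, as noted below, gives no direct measurement of its range.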
Bistro was first tested during the summer of 1934. With the receiver up to 11 km away from the transmitter, the set could only detect an aircraft entering a screen at about range and under 1,000 m. With improvements, it was believed to have a potential range of 75 km, and five sets were ordered in October for field trials. Bistro is often cited as the USSR's first radar system; however, it was incapable of directly measuring range and thus could not be so classified.
LEPI and TsRL were both made a part of Nauchno-issledovatelsky institut-9 (NII-9, Scientific Research Institute #9), a new GAU organization opened in Leningrad in 1935. Mikhail A. Bonch-Bruyevich, a renowned radio physicist previously with TsRL and the University of Leningrad, was named the NII-9 Scientific Director.
Research on magnetrons began at Kharkov University in Ukraine during the mid-1920s. Before the end of the decade this had resulted in publications with worldwide distribution, such as the German journal Annalen der Physik (Annals of Physics). Based on this work, Ioffe recommended that a portion of the LEPI be transferred to the city of Kharkov, resulting in the Ukrainian Institute of Physics and Technology (UIPT) being formed in 1930. Within the UIPT, the Laboratory of Electromagnetic Oscillations (LEMO), headed by Abram A. Slutskin, continued with magnetron development. Led by Aleksandr S. Usikov, a number of advanced segmented-anode magnetrons evolved. (It is noted that these and other early magnetrons developed in the USSR suffered from frequency instability, a problem in their use in Soviet radar systems.)
In 1936, one of Usikov's magnetrons producing about 7 W at 18 cm (1.7 GHz) was used by Shembel at the NII-9 as a transmitter in a radioiskatel (radio-seeker) called Burya (Storm). Operating similarly to Bistro, the range of detection was about 10 km, and provided azimuth and elevation coordinates estimated to within 4 degrees. No attempts were made to make this into a pulsed system, thus, it could not provide range and was not qualified to be classified as a radar. It was, however, the first microwave radio-detection system.
While work by Shembel and Bonch-Bruyevich on continuous-wave systems was taking place at NII-9, Oshchepkov at the SKB and V. V. Tsimbalin of Ioffe's LPTI were pursuing a pulsed system. In 1936, they built a radio-location set operating at 4 m (75 MHz) with a peak-power of about 500 W and a 10-μs pulse duration. Before the end of the year, tests using separated transmitting and receiving sites resulted in an aircraft being detected at 7 km. In April 1937, with the peak-pulse power increased to 1 kW and the antenna separation also increased, tests showed a detection range of near 17 km at a height of 1.5 km. Although a pulsed system, it was not capable of directly providing range – the technique of using pulses for determining range had not yet been developed.
Pre-war radio location systems
In June 1937, all of the work in Leningrad on radio-location suddenly stopped. The infamous Great Purge of dictator Joseph Stalin swept over the military high commands and its supporting scientific community. The PVO chief was executed. Oshchepkov, charged with "high crime", was sentenced to 10 years at a Gulag penal labor camp. NII-9 as an organization was saved, but Shembel was dismissed and Bonch-Bruyevich was named the new director.
The Nauchno-issledovatel'skii ispytatel'nyi institut svyazi RKKA (NIIIS-KA, Scientific Research Institute of Signals of the Red Army) had initially opposed research in radio-location, favoring instead acoustical techniques. However, this portion of the Red Army gained power as a result of the Great Purge, and did an about-face, pressing hard for speedy development of radio-location systems. They took over Oshchepkov's laboratory and were made responsible for all existing and future agreements for research and factory production. Writing later about the Purge and its subsequent effects, General Lobanov commented that it led to the development being placed under a single organization, and the rapid reorganization of the work.
At Oshchepkov's former laboratory, work with the 4 m (75 MHz) pulsed-transmission system was continued by A. I. Shestako. Through pulsing, the transmitter produced a peak power of 1 kW, the highest level thus far generated. In July 1938, a fixed-position, bi-static experimental system detected an aircraft at about 30 km range at heights of 500 m, and at 95 km range, for high-flying targets at 7.5 km altitude. The system was still incapable of directly determining the range. The project was then taken up by Ioffe's LPTI, resulting in the development of a mobile system designated Redut (Redoubt). An arrangement of new transmitter tubes was used, giving near 50 kW peak-power with a 10 μs pulse-duration. Yagi antennas were adopted for both transmitting and receiving.
The Redut was first field tested in October 1939, at a site near Sevastopol, a port in Ukraine on the coast of the Black Sea. This testing was in part to show the NKKF (Soviet Navy) the value of early-warning radio-location for protecting strategic ports. With the equipment on a cliff about 160 meters above sea level, a flying boat was detected at ranges up to 150 km. The Yagi antennas were spaced about 1,000 meters; thus, close coordination was required to aim them in synchronization. An improved version of the Redut, the Redut-K, was developed by Aksel Berg in 1940 and placed aboard the light cruiser Molotov in April 1941. Molotov became the first Soviet warship equipped with radar.
At the NII-9 under Bonch-Bruyevich, scientists developed two types of very advanced microwave generators. In 1938, a linear-beam, velocity-modulated vacuum tube (a klystron) was developed by Nikolay Devyatkov, based on designs from Kharkiv. This device produced about 25 W at 15–18 cm (2.0–1.7 GHz) and was later used in experimental systems. Devyatkov followed this with a simpler, single-resonator device (a reflex klystron). At this same time, D. E. Malyarov and N. F. Alekseyev were building a series of magnetrons, also based on designs from Kharkov; the best of these produced 300 W at 9 cm (3 GHz).
Also at NII-9, D. S. Stogov was placed in charge of the improvements to the Bistro system. Redesignated as Reven (Rhubarb), it was tested in August 1938, but was only marginally better than the predecessor. With additional minor operational improvements, it was made into a mobile system called Radio Ulavlivatel Samoletov (RUS, Radio Catcher of Aircraft), soon designated as RUS-1. This continuous-wave, bi-static system had a truck-mounted transmitter operating at 4.7 m (64 MHz) and two truck-mounted receivers.
Although the RUS-1 transmitter was in a cabin on the rear of a truck, the antenna had to be strung between external poles anchored to the ground. A second truck carrying the electrical generator and other equipment was backed against the transmitter truck. Two receivers were used, each in a truck-mounted cabin with a dipole antenna on a rotatable pole extended overhead. In use, the receiver trucks were placed about 40 km apart; thus, with two positions, it would be possible to make a rough estimate of the range by triangulation on a map.
The RUS-1 system was tested and put into production in 1939, then entered service in 1940, becoming the first deployed radio-location system in the Red Army. About 45 RUS-1 systems were built at the Svetlana Factory in Leningrad before the end of 1941, and deployed along the western USSR borders and in the Far East. Without direct ranging capability, however, the military found the RUS-1 to be of little value.
Even before the demise of efforts in Leningrad, the NIIIS-KA had contracted with the UIPT in Kharkov to investigate a pulsed radio-location system for anti-aircraft applications. This led the LEMO, in March 1937, to start an internally funded project with the code name Zenit (a popular football team at the time). The transmitter development was led by Usikov, supplier of the magnetron used earlier in the Burya. For the Zenit, Usikov used a 60 cm (500 MHz) magnetron pulsed at 10–20 μs duration and providing 3 kW pulsed power, later increased to near 10 kW. Semion Braude led the development of a superheterodyne receiver using a tunable magnetron as the local oscillator. The system had separate transmitting and receiving antennas set about 65 m apart, built with dipoles backed by 3-meter parabolic reflectors.
Zenit was first tested in October 1938. In this, a medium-sized bomber was detected at a range of 3 km. The testing was observed by the NIIIS-KA and found to be sufficient for starting a contracted effort. An agreement was made in May 1939, specifying the required performance and calling for the system to be ready for production by 1941. The transmitter was increased in power, the antennas had selsyns added to allow them to track, and the receiver sensitivity was improved by using an RCA 955 acorn triode as the local oscillator.
A demonstration of the improved Zenit was given in September 1940. In this, it was shown that the range, altitude, and azimuth of an aircraft flying at heights between 4,000 and 7,000 meters could be determined at up to 25 km distance. The time required for these measurements, however, was about 38 seconds, far too long for use by anti-aircraft batteries. Also, with the antennas aimed at a low angle, there was a dead zone of some distance caused by interference from ground-level reflections. While this performance was not satisfactory for immediate gun-laying applications, it was the first full three-coordinate radio-location system in the Soviet Union and showed the way for future systems.
Work at the LEMO continued on Zenit, particularly in converting it into a single-antenna system designated Rubin. This effort, however, was disrupted by the invasion of the USSR by Germany in June 1941. In a short while, the development activities at Kharkov were ordered to be evacuated to the Far East. The research efforts in Leningrad were similarly dispersed.
After eight years of effort by highly qualified physicists and engineers, the USSR entered World War II without a fully developed and fielded radar system.
Japan
As a seafaring nation, Japan had an early interest in wireless (radio) communications. The first known use of wireless telegraphy in warfare at sea was by the Imperial Japanese Navy, in defeating the Russian Imperial Fleet in 1904 at the Battle of Port Arthur. There was an early interest in equipment for radio direction-finding, for use in both navigation and military surveillance. The Imperial Navy developed an excellent receiver for this purpose in 1921, and soon most of the Japanese warships had this equipment.
In the two decades between the two World Wars, radio technology in Japan made advancements on a par with that in the western nations. There were often impediments, however, in transferring these advancements into the military. For a long time, the Japanese had believed that they had the best fighting capability of any military force in the world. The military leaders, who were then also in control of the government, sincerely felt that the weapons, aircraft, and ships that they had built were fully sufficient and, with these as they were, the Japanese Army and Navy were invincible. In 1940, Japan joined Nazi Germany and Fascist Italy in the Tripartite Pact.
Technology background
Radio engineering was strong in Japan's higher education institutions, especially the Imperial (government-financed) universities. This included undergraduate and graduate study, as well as academic research in this field. Special relationships were established with foreign universities and institutes, particularly in Germany, with Japanese teachers and researchers often going overseas for advanced study.
The academic research tended toward the improvement of basic technologies, rather than their specific applications. There was considerable research in high-frequency and high-power oscillators, such as the magnetron, but the application of these devices was generally left to industrial and military researchers.
One of Japan's best-known radio researchers in the 1920s–1930s era was Professor Hidetsugu Yagi. After graduate study in Germany, England, and America, Yagi joined Tohoku University, where his research centered on antennas and oscillators for high-frequency communications. A summary of the radio research work at Tohoku University was contained in a 1928 seminal paper by Yagi.
Jointly with Shintaro Uda, one of Yagi's first doctoral students, a radically new antenna emerged. It had a number of parasitic elements (directors and reflectors) and would come to be known as the Yagi-Uda or Yagi antenna. A U.S. patent, issued in May 1932, was assigned to RCA. To this day, this is the most widely used directional antenna worldwide.
The magnetron was also of interest to Yagi. This HF (~10-MHz) device had been invented in 1921 by Albert W. Hull at General Electric, and Yagi was convinced that it could function in the VHF or even the UHF region. In 1927, Kinjiro Okabe, another of Yagi's early doctoral students, developed a split-anode device that ultimately generated oscillations at wavelengths down to about 12 cm (2.5 GHz).
Researchers at other Japanese universities and institutions also started projects in magnetron development, leading to improvements in the split-anode device. These included Kiyoshi Morita at the Tokyo Institute of Technology, and Tsuneo Ito at Tohoku University.
Shigeru Nakajima at Japan Radio Company (JRC) saw a commercial potential of these devices and began the further development and subsequent very profitable production of magnetrons for the medical dielectric heating (diathermy) market. The only military interest in magnetrons was shown by Yoji Ito at the Naval Technical Research Institute (NTRI).
The NTRI was formed in 1922, and became fully operational in 1930. Located at Meguro, Tokyo, near the Tokyo Institute of Technology, first-rate scientists, engineers, and technicians were engaged in activities ranging from designing giant submarines to building new radio tubes. Included were all of the precursors of radar, but this did not mean that the heads of the Imperial Navy accepted these accomplishments.
In 1936, Tsuneo Ito (no relationship to Yoji Ito) developed an 8-split-anode magnetron that produced about 10 W at 10 cm (3 GHz). Based on its appearance, it was named Tachibana (or Mandarin, an orange citrus fruit). Tsuneo Ito also joined the NTRI and continued his research on magnetrons in association with Yoji Ito. In 1937, they developed the technique of coupling adjacent segments (called push-pull), resulting in frequency stability, an extremely important magnetron breakthrough.
By early 1939, NTRI/JRC had jointly developed a 10-cm (3-GHz), stable-frequency Mandarin-type magnetron (No. M3) that, with water cooling, could produce 500-W power. In the same time period, magnetrons were built with 10 and 12 cavities operating as low as 0.7 cm (40 GHz). The configuration of the M3 magnetron was essentially the same as that used later in the magnetron developed by Boot and Randall at Birmingham University in early 1940, including the improvement of strapped cavities. Unlike the high-power magnetron in Britain, however, the initial device from the NTRI generated only a few hundred watts.
In general, there was no lack of scientific and engineering capabilities in Japan; their warships and aircraft clearly showed high levels of technical competency. They were ahead of Britain in the development of magnetrons, and their Yagi antenna was the world standard for VHF systems. It was simply that the top military leaders failed to recognize how the application of radio in detection and ranging – what was often called the Radio Range Finder (RRF) – could be of value, particularly in any defensive role; offense, not defense, totally dominated their thinking.
Imperial Army
In 1938, engineers from the Research Office of Nippon Electric Company (NEC) were making coverage tests on high-frequency transmitters when rapid fading of the signal was observed. This occurred whenever an aircraft passed over the line between the transmitter and receiving meter. Masatsugu Kobayashi, the Manager of NEC's Tube Department, recognized that this was due to the beat-frequency interference of the direct signal and the Doppler-shifted signal reflected from the aircraft.
Kobayashi suggested to the Army Science Research Institute that this phenomenon might be used as an aircraft warning method. Although the Army had rejected earlier proposals for using radio-detection techniques, this one had appeal because it was based on an easily understandable method and would require little developmental cost and risk to prove its military value. NEC assigned Kinji Satake of their Research Institute to develop a system called the Bi-static Doppler Interference Detector (BDID).
For testing the prototype system, it was set up on an area recently occupied by Japan along the coast of China. The system operated between 4.0–7.5 MHz (75–40 m) and involved a number of widely spaced stations; this formed a radio screen that could detect the presence (but nothing more) of an aircraft at distances up to . The BDID was the Imperial Army's first deployed radio-based detection system, placed into operation in early 1941.
A similar system was developed by Satake for the Japanese homeland. Information centers received oral warnings from the operators at BDID stations, usually spaced between 65 and 240 km (40 and 150 mi). To reduce homing vulnerability – a great fear of the military – the transmitters operated with only a few watts power. Although originally intended to be temporary until better systems were available, they remained in operation throughout the war. It was not until after the start of war that the Imperial Army had equipment that could be called radar.
Imperial Navy
In the mid-1930s, some of the technical specialists in the Imperial Navy became interested in the possibility of using radio to detect aircraft. For consultation, they turned to Professor Yagi who was the Director of the Radio Research Laboratory at Osaka Imperial University. Yagi suggested that this might be done by examining the Doppler frequency-shift in a reflected signal.
Funding was provided to the Osaka Laboratory for experimental investigation of this technique. Kinjiro Okabe, the inventor of the split-anode magnetron and who had followed Yagi to Osaka, led the effort. Theoretical analyses indicated that the reflections would be greater if the wavelength was approximately the same as the size of aircraft structures. Thus, a VHF transmitter and receiver with Yagi antennas separated some distance were used for the experiment.
In 1936, Okabe successfully detected a passing aircraft by the Doppler-interference method; this was the first recorded demonstration in Japan of aircraft detection by radio. With this success, Okabe's research interest switched from magnetrons to VHF equipment for target detection. This, however, did not lead to any significant funding. The top levels of the Imperial Navy believed that any advantage of using radio for this purpose was greatly outweighed by enemy intercept and disclosure of the sender's presence.
Historically, warships in formation used lights and horns to avoid collision at night or when in fog. Newer techniques of VHF radio communications and direction-finding might also be used, but all of these methods were highly vulnerable to enemy interception. At the NTRI, Yoji Ito proposed that the UHF signal from a magnetron might be used to generate a very narrow beam that would have a greatly reduced chance of enemy detection.
Development of a microwave system for collision avoidance started in 1939, when funding was provided by the Imperial Navy to JRC for preliminary experiments. In a cooperative effort involving Yoji Ito of the NTRI and Shigeru Nakajima of JRC, an apparatus using a 3-cm (10-GHz) magnetron with frequency modulation was designed and built. The equipment was used in an attempt to detect reflections from tall structures a few kilometers away. This experiment gave poor results, attributed to the very low power from the magnetron.
The initial magnetron was replaced by one operating at 16 cm (1.9 GHz) and with considerably higher power. The results were then much better, and in October 1940, the equipment obtained clear echoes from a ship in Tokyo Bay at a distance of about . There was still no commitment by top Japanese naval officials for using this technology aboard warships. Nothing more was done at this time, but late in 1941, the system was adopted for limited use.
In late 1940, Japan arranged for two technical missions to visit Germany and exchange information about their developments in military technology. Commander Yoji Ito represented the Navy's interest in radio applications, and Lieutenant Colonel Kinji Satake did the same for the Army. During a visit of several months, they exchanged significant general information, as well as limited secret materials in some technologies, but little directly concerning radio-detection techniques. Neither side even mentioned magnetrons, but the Germans did apparently disclose their use of pulsed techniques.
After receiving the reports from the technical exchange in Germany, as well as intelligence reports concerning Britain's success with firing using RDF, the Naval General Staff reversed itself and tentatively accepted pulse-transmission technology. On August 2, 1941, even before Yoji Ito returned to Japan, funds were allocated for the initial development of pulse-modulated radars. Commander Chuji Hashimoto of the NTRI was responsible for initiating this activity.
A prototype set operating at 4.2 m (71 MHz) and producing about 5 kW was completed on a crash basis. With the NTRI in the lead, the firm NEC and the Research Laboratory of Japan Broadcasting Corporation (NHK) made major contributions to the effort. Kenjiro Takayanagi, Chief Engineer of NHK's experimental television station and called "the father of Japanese television", was especially helpful in rapidly developing the pulse-forming and timing circuits, as well as the receiver display. In early September 1941, the prototype set was first tested; it detected a single bomber at and a flight of aircraft at .
The system, Japan's first full Radio Range Finder (RRF – radar), was designated Mark 1 Model 1. Contracts were given to three firms for serial production; NEC built the transmitters and pulse modulators, Japan Victor the receivers and associated displays, and Fuji Electrical the antennas and their servo drives. The system operated at 3.0 m (100 MHz) with a peak-power of 40 kW. Dipole arrays with matte-type reflectors were used in separate antennas for transmitting and receiving.
In November 1941, the first manufactured RRF was placed into service as a land-based early-warning system at Katsuura, Chiba, a town on the Pacific coast about from Tokyo. A large system, it weighed close to 8,700 kg (19,000 lb). The detection range was about for single aircraft and for groups.
Netherlands
Early radio-based detection in the Netherlands was along two independent lines: one a microwave system at the firm Philips and the other a VHF system at a laboratory of the Armed Forces.
The Philips Company in Eindhoven, Netherlands, operated Natuurkundig Laboratorium (NatLab) for fundamental research related to its products. NatLab researcher Klaas Posthumus developed a magnetron split into four elements.
In developing a communication system using this magnetron, C.H.J.A. Staal was testing the transmission by using parabolic transmitting and receiving antennas set side-by-side, both aimed at a large plate some distance away. To overcome frequency instability of the magnetron, pulse modulation was used. It was found that the plate reflected a strong signal.
Recognizing the potential importance of this as a detection device, NatLab arranged a demonstration for the Koninklijke Marine (Royal Netherlands Navy). This was conducted in 1937 across the entrance to the main naval port at Marsdiep. Reflections from sea waves obscured the return from the target ship, but the Navy was sufficiently impressed to initiate sponsorship of the research. In 1939, an improved set was demonstrated at Wijk aan Zee, detecting a vessel at a distance of .
A prototype system was built by Philips, and plans were started by the firm Nederlandse Seintoestellen Fabriek (a Philips subsidiary) for building a chain of warning stations to protect the primary ports. Some field testing of the prototype was conducted, but the project was discontinued when Germany invaded the Netherlands on May 10, 1940. Within the NatLab, however, the work was continued in great secrecy until 1942.
In the early 1920s, there were widespread rumours that a "death ray" was being developed. The Nobel Prize winner Heike Kamerlingh Onnes quickly discounted death rays, but he recommended setting up a committee to investigate physics problems related to weaponry. In November 1924, the Minister of War secretly established the Committee for Physical Armament, chaired by Prof. G.J. Elias. In 1927, the Committee established a laboratory in support of the Netherlands Armed Forces, called the Meetgebouw (Measurements Building), located on the Plain of Waalsdorp.
In 1934, J.L.W.C. von Weiler joined the LFO and, with S.G. Gratama, began research on a 1.25-m (240-MHz) communication system for artillery spotting.
In 1935, while tests were being conducted on this system, a passing flock of birds disturbed the signal. Realizing that this might be a potential method for detecting aircraft, the Minister of War ordered the experiments to continue. Weiler and Gratama set about developing a system for directing searchlights and aiming anti-aircraft guns.
The experimental "electrical listening device" operated at 70 cm (430 MHz) and used pulsed transmission at an RPF of 10 kHz. A transmit-receive blocking circuit was developed to allow a common antenna. The received signal was displayed on a CR tube with a circular time base. This set was demonstrated to the Army in April 1938 and detected an aircraft at a range of . The set was rejected, however, because it could not withstand the harsh environment of Army combat conditions.
The Navy was more receptive. Funding was provided for final development, and Max Staal was added to the team. To maintain secrecy, they divided the development into parts. The transmitter was built at the Delft Technical College and the receiver at the University of Leiden. Ten sets would be assembled under the personal supervision of J.J.A. Schagen van Leeuwen, head of the firm Hazemeijer Fabriek van Signaalapparaten.
The prototype had a peak-power of 1 kW, and used a pulse length of 2 to 3 μs with a 10- to 20 kHz PRF. The receiver was a super-heterodyne type using Acorn tubes and a 6 MHz IF stage. The antenna consisted of 4 rows of 16 half-wave dipoles backed by a 3- by 3-meter mesh screen. The operator used a bicycle-type drive to rotate the antenna and the elevation could be changed using a hand crank.
Several sets were completed, and one was put into operation on the Malieveld in The Hague just before the Netherlands fell to Germany in May 1940. The set worked well, spotting enemy aircraft during the first days of fighting. To prevent capture, operating units and plans for the system were destroyed. Von Weiler and Max Staal fled to England aboard one of the last ships able to leave, carrying two disassembled sets with them. Later, Gratama and van Leeuwen also escaped to England.
France
In 1927, French physicists Camille Gutton and Emile Pierret experimented with magnetrons and other devices generating wavelengths going down to 16 cm. Camille's son, Henri Gutton, was with the Compagnie générale de la télégraphie sans fil (CSF) where he and Robert Warneck improved his father's magnetrons.
In 1934, following systematic studies on the magnetron, the research branch of the CSF, headed by Maurice Ponte, submitted a patent application for a device designed to detect obstacles using continuous radiation of ultra-short wavelengths produced by a magnetron. These were still CW systems and depended on Doppler interference for detection. However, as in most modern radars, the antennas were collocated. The device measured distance and azimuth, though not directly on a screen as in the later "radar" displays (1939). Still, this was the first patent of an operational radio-detection apparatus using centimetric wavelengths.
The system was tested in late 1934 aboard the cargo ship Oregon, with two transmitters working at 80 cm and 16 cm wavelengths. Coastlines and boats were detected from a range of 10–12 nautical miles. The shortest wavelength was chosen for the final design, which equipped the liner Normandie as early as mid-1935 for operational use.
In late 1937, Maurice Elie at SFR developed a means of pulse-modulating transmitter tubes. This led to a new 16-cm system with a peak power near 500 W and a pulse width of 6 μs. French and U.S. patents were filed in December 1939. The system was planned to be sea-tested aboard the Normandie, but this was cancelled at the outbreak of war.
At the same time, Pierre David at the Laboratoire National de Radioélectricité (National Laboratory of Radioelectricity, LNR) experimented with reflected radio signals at about a meter wavelength. Starting in 1931, he observed that aircraft caused interference to the signals. The LNR then initiated research on a detection technique called barrage électromagnétique (electromagnetic curtain). While this could indicate the general location of penetration, precise determination of direction and speed was not possible.
In 1936, the Défense Aérienne du Territoire (Defence of Air Territory) ran tests on David's electromagnetic curtain. In the tests, the system detected most of the entering aircraft, but too many were missed. As the war grew closer, the need for aircraft detection became critical. David realized the advantages of a pulsed system, and in October 1938 he designed a 50 MHz, pulse-modulated system with a peak-pulse power of 12 kW. This was built by the firm SADIR.
France declared war on Germany on September 3, 1939, and there was a great need for an early-warning detection system. The SADIR system was taken to near Toulon, and detected and measured the range of invading aircraft as far as . The SFR pulsed system was set up near Paris where it detected aircraft at ranges up to . However, the German advance was overwhelming and emergency measures had to be taken; it was too late for France to develop radars alone and it was decided that her breakthroughs would be shared with her allies.
In mid-1940, Maurice Ponte, from the laboratories of CSF in Paris, presented a cavity magnetron designed by Henri Gutton at SFR (see above) to the GEC laboratories at Wembley, Britain. This magnetron was designed for pulsed operation at a wavelength of 16 cm. Unlike other magnetron designs to that day, such as the Boot and Randall magnetron (see British contributions above), this tube used an oxide-coated cathode with a peak power output of 1 kW, demonstrating that oxide cathodes were the solution for producing high-power pulses at short wavelengths, a problem which had eluded British and American researchers for years. The significance of this event was underlined by Eric Megaw, in a 1946 review of early radar developments: "This was the starting point of the use of the oxide cathode in practically all our subsequent pulsed transmitting valves and as such was a significant contribution to British radar. The date was the 8th May 1940". A tweaked version of this magnetron reached a peak output of 10 kW by August 1940. It was that model which, in turn, was handed to the Americans as a token of good faith during the negotiations made by the Tizard delegation in 1940 to obtain from the U.S. the resources necessary for Britain to exploit the full military potential of her research and development work.
Italy
Guglielmo Marconi initiated the research in Italy on radio-based detection technology. In 1933, while participating with his Italian firm in experiments with a 600 MHz communications link across Rome, he noted transmission disturbances caused by moving objects adjacent to its path. This led to the development at his laboratory at Cornegliano of a 330-MHz (0.91-m) CW Doppler detection system that he called radioecometro. Barkhausen–Kurz tubes were used in both the transmitter and receiver.
In May 1935, Marconi demonstrated his system to the Fascist dictator Benito Mussolini and members of the military General Staff; however the output power was insufficient for military use. While Marconi's demonstration raised considerable interest, little more was done with his apparatus.
Mussolini directed that radio-based detection technology be further developed, and it was assigned to the Regio Istituto Elettrotecnico e delle Comunicazioni (RIEC, Royal Institute for Electro-technics and Communications). The RIEC had been established in 1916 on the campus of the Italian Naval Academy in Livorno. Lieutenant Ugo Tiberio, a physics and radio-technology instructor at the Academy, was assigned to head the project on a part-time basis.
Tiberio prepared a report on developing an experimental apparatus that he called telemetro radiofonico del rivelatore (RDT, Radio-Detector Telemetry). The report, submitted in mid-1936, included what was later known as the radar range equation. When the work got underway, Nello Carrara, a civilian physics instructor who had been doing research at the RIEC in microwaves, was added to be responsible for developing the RDT transmitter.
Before the end of 1936, Tiberio and Carrara had demonstrated the EC-1, the first Italian RDT system. This had an FM transmitter operating at 200 MHz (1.5 m) with a single parabolic cylinder antenna. It detected by mixing the transmitted and the Doppler-shifted reflected signals, resulting in an audible tone.
The EC-1 did not provide a range measurement; to add this capability, development of a pulsed system was initiated in 1937. Captain Alfeo Brandimarte joined the group and primarily designed the first pulsed system, the EC-2. This operated at 175 MHz (1.7 m) and used a single antenna made with a number of equi-phased dipoles. The detected signal was intended to be displayed on an oscilloscope. There were many problems, and the system never reached the testing stage.
Work then turned to developing higher power and operating frequencies. Carrara, in cooperation with the firm FIVRE, developed a magnetron-like device. This was composed of a pair of triodes connected to a resonant cavity and produced 10 kW at 425 MHz (70 cm). It was used in designing two versions of the EC-3, one for shipboard and the other for coastal defense.
Italy, joining Germany, entered WWII in June 1940 without an operational RDT. A breadboard of the EC-3 was built and tested from atop a building at the Academy, but most RDT work was stopped as direct support of the war took priority.
Others
In early 1939, the British Government invited representatives from the most technically advanced Commonwealth Nations to visit England for briefings and demonstrations on the highly secret RDF (radar) technology. Based on this, RDF developments were started in Australia, Canada, New Zealand, and South Africa by September 1939. In addition, this technology was independently developed in Hungary early in the war period.
Australia: the Radiophysics Laboratory in Australia was established at Sydney University under the Council for Scientific and Industrial Research; John H. Piddington was responsible for RDF development. The first project was a 200-MHz (1.5-m) shore-defense system for the Australian Army. Designated ShD, this was first tested in September 1941, and eventually installed at 17 ports. Following the Japanese attack on Pearl Harbor, the Royal Australian Air Force urgently needed an air-warning system, and Piddington's team, using the ShD as a basis, put the AW Mark I together in five days. It was being installed in Darwin, Northern Territory, when Australia received the first Japanese attack on February 19, 1942. A short time later, it was converted to a light-weight transportable version, the LW-AW Mark II; this was used by the Australian forces, as well as the U.S. Army, in early island landings in the South Pacific.
Canada: The early RDF developments in Canada were at the Radio Section of the National Research Council of Canada. Using commercial components and with essentially no further assistance from Britain, John Tasker Henderson led a team in developing the Night Watchman, a surface-warning system for the Royal Canadian Navy to protect the entrance to the Halifax Harbour. Successfully tested in July 1940, this set operated at 200 MHz (1.5 m), had a 1 kW output with a pulse length of 0.5 μs, and used a relatively small, fixed antenna. This was followed by a ship-borne set designated Surface Warning 1st Canadian (SW1C) with the antenna hand-rotated through the use of a Chevrolet steering wheel in the operator's compartment. The SW1C was first tested at sea in mid-May 1941, but the performance was so poor compared to the Royal Navy's Model 271 ship-borne radar that the Royal Canadian Navy eventually adopted the British 271 in place of the SW1C.
For coastal defense by the Canadian Army, a 200 MHz set with a transmitter similar to the Night Watchman was developed. Designated CD, it used a large, rotating antenna atop a wooden tower. The CD was put into operation in January 1942.
New Zealand: Ernest Marsden represented New Zealand at the briefings in England, and then established two facilities for RDF development – one in Wellington at the Radio Section of the Central NZ Post Office, and another at Canterbury University College in Christchurch. Charles N. Watson-Munro led the development of land-based and airborne sets at Wellington, while Frederick W. G. White led the development of shipboard sets at Christchurch.
Before the end of 1939, the Wellington group had converted an existing 180-MHz (1.6-m), 1 kW transmitter to produce 2-μs pulses and tested it to detect large vessels at up to 30 km; this was designated CW (Coastal Watching). A similar set, designated CD (Coast Defense) used a CRT for display and had lobe-switching on the receiving antenna; this was deployed in Wellington in late 1940. A partially completed ASV 200 MHz set was brought from Britain by Marsden, and another group at Wellington built this into an aircraft set for the Royal New Zealand Air Force; this was first flown in early 1940. At Christchurch, there was a smaller staff and work went slower, but by July 1940, a 430-MHz (70-cm), 5 kW set was tested. Two types, designated SW (Ship Warning) and SWG (Ship Warning, Gunnery), were placed into service by the Royal New Zealand Navy starting in August 1941. In all some 44 types were developed in New Zealand during WWII.
Radar systems were developed from 1939; these were initially New Zealand-made, but later (because of difficulty in sourcing components) British-made. Transportable GCI radar sets were deployed in the Pacific, including one with RNZAF personnel at the American aerodrome at Henderson Field, Guadalcanal in September 1942, where the American SCR 270-B sets could not plot heights and so were inadequate against frequent Japanese night raids. In the first half of 1943 additional New Zealand radar units and staff were sent to the Pacific at the request of COMOSPAC, Admiral Halsey.
South Africa did not have a representative at the 1939 meetings in England, but in mid-September, as Ernest Marsden was returning by ship to New Zealand, Basil F. J. Schonland came aboard and received three days of briefings. Schonland, a world authority on lightning and Director of the Bernard Price Institute of Geophysics at Witwatersrand University, immediately started an RDF development using amateur radio components and the Institute's lightning-monitoring equipment. Designated JB (for Johannesburg), the 90-MHz (3.3-m), 500-W mobile system was tested in November 1939, just two months after its start. The prototype was operated in Durban before the end of 1939, detecting ships and aircraft at distances up to 80 km, and by the next March a system was fielded by anti-aircraft brigades of the South African Defence Force.
Hungary: Zoltán Lajos Bay in Hungary was a Professor of Physics at the Technical University of Budapest as well as the Research Director of Egyesült Izzolampa (IZZO), a radio and electrical manufacturing firm. In late 1942, IZZO was directed by the Minister of Defense to develop a radio-location (rádiólokáció, radar) system. Using journal papers on ionospheric measurements for information on pulsed transmission, Bay developed a system called Sas (Eagle) around existing communications hardware.
The Sas operated at 120 MHz (2.5 m) and was in a cabin with separate transmitting and receiving dipole arrays attached; the assembly was all on a rotatable platform. According to published records, the system was tested in 1944 atop Mount János and had a range of "better than 500 km". A second Sas was installed at another location. There is no indication that either Sas installation was ever in regular service. After the war, Bay used a modified Sas to successfully bounce a signal off the moon.
World War II radar
At the start of World War II in September 1939, both the United Kingdom and Germany knew of each other's ongoing efforts in radio navigation and its countermeasures – the "Battle of the beams". Also, both nations were generally aware of, and intensely interested in, the other's developments in radio-based detection and tracking, and engaged in an active campaign of espionage and false leaks about their respective equipment. By the time of the Battle of Britain, both sides were deploying range and direction-finding units (radars) and control stations as part of integrated air defense capability. However, the German Funkmessgerät (radio measuring device) systems could not assist in an offensive role and were thus not supported by Adolf Hitler. Also, the Luftwaffe did not sufficiently appreciate the importance of British Range and Direction Finding (RDF) stations as part of RAF's air defense capability, contributing to their failure.
While the United Kingdom and Germany led in pre-war advances in the use of radio for detection and tracking of aircraft, there were also developments in the United States, the Soviet Union, and Japan. Wartime systems in all of these nations will be summarized. The acronym RADAR (for RAdio Detection And Ranging) was coined by the U.S. Navy in 1940, and the subsequent name "radar" was soon widely used. The XAF and CXAM search radars were designed by the Naval Research Laboratory, and were the first operational radars in the US fleet, produced by RCA.
When France had just fallen to the Nazis and Britain had no money to develop the cavity magnetron on a massive scale, Churchill agreed that Sir Henry Tizard should offer the cavity magnetron to the Americans in exchange for their financial and industrial help (the Tizard Mission). An early 6 kW version, built in England by the General Electric Company Research Laboratories, Wembley, London (not to be confused with the similarly named American company General Electric), was given to the US government in September 1940. The British magnetron was a thousand times more powerful than the best American transmitter at the time and produced accurate pulses. At the time the most powerful equivalent microwave producer available in the US (a klystron) had a power of only ten watts. The cavity magnetron was widely used during World War II in microwave radar equipment and is often credited with giving Allied radar a considerable performance advantage over German and Japanese radars, thus directly influencing the outcome of the war. It was later described by noted Historian James Phinney Baxter III as "The most valuable cargo ever brought to our shores".
The Bell Telephone Laboratories made a producible version from the magnetron delivered to America by the Tizard Mission, and before the end of 1940, the Radiation Laboratory had been set up on the campus of the Massachusetts Institute of Technology to develop various types of radar using the magnetron. By early 1941, portable centimetric airborne radars were being tested in American and British aircraft. In late 1941, the Telecommunications Research Establishment in Great Britain used the magnetron to develop a revolutionary airborne, ground-mapping radar codenamed H2S, which was partly developed by Alan Blumlein and Bernard Lovell. The magnetron radars used by the US (e.g. H2X) and Britain could spot the periscope of a U-boat.
Post-war radar
World War II, which gave impetus to the great surge in radar development, ended between the Allies and Germany in May 1945, followed by Japan in August. With this, radar activities in Germany and Japan ceased for a number of years. In other countries, particularly the United States, Britain, and the USSR, the politically unstable post-war years saw continued radar improvements for military applications. In fact, these three nations all made significant efforts in bringing scientists and engineers from Germany to work in their weapon programs; in the U.S., this was under Operation Paperclip.
Even before the end of the war, various projects directed toward non-military applications of radar and closely related technologies were initiated. The US Army Air Forces and the British RAF had made wartime advances in using radar for handling aircraft landing, and this was rapidly expanded into the civil sector. The field of radio astronomy was one of the related technologies; although discovered before the war, it immediately flourished in the late 1940s with many scientists around the world establishing new careers based on their radar experience.
Four techniques, highly important in post-war radars, were matured in the late 1940s-early 1950s: pulse Doppler, monopulse, phased array, and synthetic aperture; the first three were known, and even used, during wartime developments, but only reached maturity in the post-war period.
Pulse-Doppler radar (often known as moving target indication or MTI), uses the Doppler-shifted signals from targets to better detect moving targets in the presence of clutter.
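As an illustration (generic numbers, not tied to any particular system), the Doppler shift such systems exploit depends only on the target's radial speed and the operating wavelength:

f_d = \frac{2 v_r}{\lambda}

so a target closing at 300 m/s seen by a 10-cm (3 GHz) radar returns a shift of about 6 kHz, while fixed clutter returns essentially zero shift, which is what the MTI filtering separates.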
Monopulse radar (also called simultaneous lobing) was conceived by Robert Page at the NRL in 1943. With this, the system derives error-angle information from a single pulse, greatly improving the tracking accuracy.
Phased-array radar has the many segments of a large antenna separately controlled, allowing the beam to be quickly directed. This greatly reduces the time necessary to change the beam direction from one point to another, allowing almost simultaneous tracking of multiple targets while maintaining overall surveillance.
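To make the steering principle concrete (the relation below is the standard textbook one, not specific to any system named here): for radiating elements spaced a distance d apart, applying a progressive phase shift

\Delta\phi = \frac{2\pi d}{\lambda}\,\sin\theta

between adjacent elements points the beam an angle \theta off broadside, so re-pointing the beam is a matter of re-computing phases rather than mechanically slewing the antenna.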
Synthetic-aperture radar (SAR), was invented in the early 1950s at Goodyear Aircraft Corporation. Using a single, relatively small antenna carried on an aircraft, a SAR combines the returns from each pulse to produce a high-resolution image of the terrain comparable to that obtained by a much larger antenna. SAR has wide applications, particularly in mapping and remote sensing.
One of the early applications of digital computers was in switching the signal phase in elements of large phased-array antennas. As smaller computers came into being, these were quickly applied to digital signal processing using algorithms for improving radar performance.
Other advances in radar systems and applications in the decades following WWII are far too many to be included herein. The following sections are intended to provide representative samples.
Military radars
In the United States, the Rad Lab at MIT officially closed at the end of 1945. The Naval Research Laboratory (NRL) and the Army's Evans Signal Laboratory continued with new activities in centimeter radar development. The United States Air Force (USAF) – separated from the Army in 1947 – concentrated radar research at their Cambridge Research Center (CRC) at Hanscom Field, Massachusetts. In 1951, MIT opened the Lincoln Laboratory for joint developments with the CRC. While the Bell Telephone Laboratories embarked on major communications upgrades, they continued with the Army in radar for their ongoing Nike air-defense program.
In Britain, the RAF's Telecommunications Research Establishment (TRE) and the Army's Radar Research and Development Establishment (RRDE) both continued at reduced levels at Malvern, Worcestershire, then in 1953 were combined to form the Radar Research Establishment. In 1948, all of the Royal Navy's radio and radar R&D activities were combined to form the Admiralty Signal and Radar Establishment, located near Portsmouth, Hampshire. The USSR, although devastated by the war, immediately embarked on the development of new weapons, including radars.
During the Cold War period following WWII, the primary "axis" of combat shifted to lie between the United States and the Soviet Union. By 1949, both sides had nuclear weapons carried by bombers. To provide early warning of an attack, both deployed huge radar networks of increasing sophistication at ever-more remote locations. In the West, the first such system was the Pinetree Line, deployed across Canada in the early 1950s, backed up with radar pickets on ships and oil platforms off the east and west coasts.
The Pinetree Line initially used vintage pulsed radars and was soon supplemented with the Mid-Canada Line (MCL). Soviet technology improvements made these Lines inadequate and, in a construction project involving 25,000 persons, the Distant Early Warning Line (DEW Line) was completed in 1957. Stretching from Alaska to Baffin Island and covering over , the DEW Line consisted of 63 stations with AN/FPS-19 high-power, pulsed, L-Band radars, most augmented by AN/FPS-23 pulse-Doppler systems. The Soviet Union tested its first Intercontinental Ballistic Missile (ICBM) in August 1957, and in a few years the early-warning role was passed almost entirely to the more capable DEW Line.
Both the U.S. and the Soviet Union then had ICBMs with nuclear warheads, and each began the development of a major anti-ballistic missile (ABM) system. In the USSR, this was the Fakel V-1000, and for this they developed powerful radar systems. This was eventually deployed around Moscow as the A-35 anti-ballistic missile system, supported by radars designated by NATO as the Cat House, Dog House, and Hen House.
In 1957, the U.S. Army initiated an ABM system first called Nike-X; this passed through several names, eventually becoming the Safeguard Program. For this, there was a long-range Perimeter Acquisition Radar (PAR) and a shorter-range, more precise Missile Site Radar (MSR).
The PAR was housed in a -high nuclear-hardened building with one face sloping 25 degrees facing north. This contained 6,888 antenna elements separated in transmitting and receiving phased arrays. The L-Band transmitter used 128 long-life traveling-wave tubes (TWTs), having a combined power in the megawatt range. The PAR could detect incoming missiles outside the atmosphere at distances up to .
The MSR had an , truncated pyramid structure, with each face holding a phased-array antenna in diameter and containing 5,001 array elements used for both transmitting and receiving. Operating in the S-Band, the transmitter used two klystrons functioning in parallel, each with megawatt-level power. The MSR could search for targets from all directions, acquiring them at up to range.
One Safeguard site, intended to defend Minuteman ICBM missile silos near the Grand Forks AFB in North Dakota, was finally completed in October 1975, but the U.S. Congress withdrew all funding after it had been operational for only a single day. During the following decades, the U.S. Army and the U.S. Air Force developed a variety of large radar systems, but the long-serving BTL gave up military development work in the 1970s.
A modern radar developed for the U.S. Navy is the AN/SPY-1. First fielded in 1973, this S-Band, 6 MW system has gone through a number of variants and is a major component of the Aegis Combat System. An automatic detect-and-track system, it is computer controlled using four complementary three-dimensional passive electronically scanned array antennas to provide hemispherical coverage.
Radar signals, traveling with line-of-sight propagation, normally have a range to ground targets limited by the visible horizon, or less than about . Airborne targets can be detected by ground-level radars at greater ranges, but, at best, several hundred miles. Since the beginning of radio, it had been known that signals of appropriate frequencies (3 to 30 MHz) could be "bounced" from the ionosphere and received at considerable distances. As long-range bombers and missiles came into being, there was a need to have radars give early warnings at great ranges. In the early 1950s, a team at the Naval Research Laboratory came up with the Over-the-Horizon (OTH) radar for this purpose.
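The line-of-sight limit mentioned above can be estimated with the usual radio-horizon approximation (which folds in average atmospheric refraction); heights are in metres and the result in kilometres:

d \approx 4.12\left(\sqrt{h_{t}} + \sqrt{h_{r}}\right)

By this rule of thumb, a radar antenna at 25 m sees a sea-level target out to only about 20 km, but an aircraft at 10,000 m out to roughly 430 km; these illustrative figures show why ionospheric "bounce" was attractive for warning at ranges of thousands of kilometres.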
To distinguish targets from other reflections, it was necessary to use a phase-Doppler system. Very sensitive receivers with low-noise amplifiers had to be developed. Since the signal going to the target and returning had a propagation loss proportional to the range raised to the fourth power, a powerful transmitter and large antennas were required. A digital computer with considerable capability (new at that time) was necessary for analyzing the data. In 1950, their first experimental system was able to detect rocket launches away at Cape Canaveral, and the cloud from a nuclear explosion in Nevada distant.
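The fourth-power loss quoted above comes directly from the monostatic radar range equation,

P_r = \frac{P_t\, G_t\, G_r\, \lambda^{2}\, \sigma}{(4\pi)^{3} R^{4}}

where P_t is the transmitted power, G_t and G_r are the antenna gains, \sigma is the target's radar cross-section, and R is the range; doubling the detection range therefore requires roughly sixteen times the power budget, hence the powerful transmitters, large antennas and sensitive receivers noted above.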
In the early 1970s, a joint American-British project, code named Cobra Mist, used a 10-MW OTH radar at Orfordness (the birthplace of British radar), England, in an attempt to detect aircraft and missile launchings over the Western USSR. Because of US-USSR ABM agreements, this was abandoned within two years. In the same time period, the Soviets were developing a similar system; this successfully detected a missile launch at . By 1976, this had matured into an operational system named Duga ("Arc" in English), but known to western intelligence as Steel Yard and called Woodpecker by radio amateurs and others who suffered from its interference – the transmitter was estimated to have a power of 10 MW. Australia, Canada, and France also developed OTH radar systems.
With the advent of satellites with early-warning capabilities, the military lost most of its interest in OTH radars. However, in recent years, this technology has been reactivated for detecting and tracking ocean shipping in applications such as maritime reconnaissance and drug enforcement.
Systems using an alternate technology have also been developed for over-the-horizon detection. Due to diffraction, electromagnetic surface waves are scattered to the rear of objects, and these signals can be detected in a direction opposite from high-powered transmissions. Called OTH-SW (SW for Surface Wave), Russia is using such a system to monitor the Sea of Japan, and Canada has a system for coastal surveillance.
Civil aviation radars
The post-war years saw the beginning of a revolutionary development in Air Traffic Control (ATC) – the introduction of radar. In 1946, the Civil Aeronautics Administration (CAA) unveiled an experimental radar-equipped tower for control of civil flights. By 1952, the CAA had begun its first routine use of radar for approach and departure control. Four years later, it placed a large order for long-range radars for use in en route ATC; these had the capability, at higher altitudes, to see aircraft within 200 nautical miles (370 km). In 1960, it became required for aircraft flying in certain areas to carry a radar transponder that identified the aircraft and helped improve radar performance. Since 1966, the responsible agency has been called the Federal Aviation Administration (FAA).
A Terminal Radar Approach Control (TRACON) is an ATC facility usually located within the vicinity of a large airport. In the US Air Force it is known as RAPCON (Radar Approach Control), and in the US Navy as a RATCF (Radar Air Traffic Control Facility). Typically, the TRACON controls aircraft within a 30 to 50 nautical mile (56 to 93 km) radius of the airport at an altitude between 10,000 and 15,000 feet (3,000 to 4,600 m). This uses one or more Airport Surveillance Radars (ASR-8, 9 and 11, ASR-7 is obsolete), sweeping the sky once every few seconds. These Primary ASR radars are typically paired with secondary radars (Air Traffic Radar Beacon Interrogators, or ATCBI) of the ATCBI-5, Mode S or MSSR types. Unlike primary radar, secondary radar relies upon an aircraft based transponder, which receives an interrogation from the ground and replies with an appropriate digital code which includes the aircraft id and reports the aircraft's altitude. The principle is similar to the military IFF Identification friend or foe. The secondary radar antenna array rides atop the primary radar dish at the radar site, with both rotating at approximately 12 revolutions per minute.
The Digital Airport Surveillance Radar (DASR) is a newer TRACON radar system, replacing the old analog systems with digital technology. The civilian nomenclature for these radars is the ASR-9 and the ASR-11, and AN/GPN-30 is used by the military.
In the ASR-11, two radar systems are included. The primary is an S-Band (~2.8 GHz) system with 25 kW pulse power. It provides 3-D tracking of target aircraft and also measures rainfall intensity. The secondary is a P-Band (~1.05 GHz) system with a peak-power of about 25 kW. It uses a transponder set to interrogate aircraft and receive operational data. The antennas for both systems rotate atop a tall tower.
Weather radar
During World War II, military radar operators noticed noise in returned echoes due to weather elements like rain, snow, and sleet. Just after the war, military scientists returned to civilian life or continued in the Armed Forces and pursued their work in developing a use for those echoes. In the United States, David Atlas, for the Air Force group at first, and later for MIT, developed the first operational weather radars. In Canada, J.S. Marshall and R.H. Douglas formed the "Stormy Weather Group " in Montreal. Marshall and his doctoral student Walter Palmer are well known for their work on the drop size distribution in mid-latitude rain that led to understanding of the Z-R relation, which correlates a given radar reflectivity with the rate at which water is falling on the ground. In the United Kingdom, research continued to study the radar echo patterns and weather elements such as stratiform rain and convective clouds, and experiments were done to evaluate the potential of different wavelengths from 1 to 10 centimetres.
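The Z–R relation referred to above is an empirical power law, Z = a R^{b}; the classic Marshall–Palmer fit for mid-latitude rain is approximately

Z = 200\, R^{1.6}

with Z the reflectivity factor in mm^6 m^{-3} and R the rain rate in mm/h. The coefficients vary with precipitation type, so these values should be read as representative rather than universal.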
Between 1950 and 1980, reflectivity radars, which measure position and intensity of precipitation, were built by weather services around the world. In the United States, the U.S. Weather Bureau, established in 1870 with the specific mission of providing meteorological observations and giving notice of approaching storms, developed the WSR-1 (Weather Surveillance Radar-1), one of the first weather radars. This was a modified version of the AN/APS-2F radar, which the Weather Bureau acquired from the Navy. The WSR-1A, WSR-3, and WSR-4 were also variants of this radar. This was followed by the WSR-57 (Weather Surveillance Radar – 1957), the first weather radar designed specifically for a national warning network. Using WWII technology based on vacuum tubes, it gave only coarse reflectivity data and no velocity information. Operating at 2.89 GHz (S-Band), it had a peak-power of 410 kW and a maximum range of about . AN/FPS-41 was the military designation for the WSR-57.
The early meteorologists had to watch a cathode-ray tube. During the 1970s, radars began to be standardized and organized into larger networks. The next significant change in the United States was the WSR-74 series, beginning operations in 1974. There were two types: the WSR-74S, for replacements and filling gaps in the WSR-57 national network, and the WSR-74C, primarily for local use. Both were transistor-based, and their primary technical difference was indicated by the letter, S band (better suited for long range) and C band, respectively. Until the 1990s, 128 of the WSR-57 and WSR-74 model radars were spread across the country.
The first devices to capture radar images were developed during the same period. The number of scanned angles was increased to get a three-dimensional view of the precipitation, so that horizontal cross-sections (CAPPI) and vertical ones could be performed. Studies of the organization of thunderstorms were then possible for the Alberta Hail Project in Canada and the National Severe Storms Laboratory (NSSL) in the US in particular. The NSSL, created in 1964, began experimentation on dual polarization signals and on Doppler effect uses. In May 1973, a tornado devastated Union City, Oklahoma, just west of Oklahoma City. For the first time, a Dopplerized 10-cm wavelength radar from NSSL documented the entire life cycle of the tornado. The researchers discovered a mesoscale rotation in the cloud aloft before the tornado touched the ground: the tornadic vortex signature. NSSL's research helped convince the National Weather Service that Doppler radar was a crucial forecasting tool.
Between 1980 and 2000, weather radar networks became the norm in North America, Europe, Japan and other developed countries. Conventional radars were replaced by Doppler radars, which in addition to the position and intensity of precipitation could track the relative velocity of the particles in the air. In the United States, the construction of a network consisting of wavelength radars, called NEXRAD or WSR-88D (Weather Service Radar 1988 Doppler), was started in 1988 following NSSL's research. In Canada, Environment Canada constructed the King City station, with a five centimeter research Doppler radar, by 1985; McGill University dopplerized its radar (J. S. Marshall Radar Observatory) in 1993. This led to a complete Canadian Doppler network between 1998 and 2004. France and other European countries switched to Doppler networks by the end of the 1990s to early 2000s. Meanwhile, rapid advances in computer technology led to algorithms to detect signs of severe weather and a plethora of "products" for media outlets and researchers.
After 2000, research on dual polarization technology moved into operational use, increasing the amount of information available on precipitation type (e.g. rain vs. snow). "Dual polarization" means that microwave radiation which is polarized both horizontally and vertically (with respect to the ground) is emitted. Wide-scale deployment is expected by the end of the decade in some countries such as the United States, France, and Canada.
Since 2003, the U.S. National Oceanic and Atmospheric Administration has been experimenting with phased-array radar as a replacement for conventional parabolic antenna to provide more time resolution in atmospheric sounding. This would be very important in severe thunderstorms as their evolution can be better evaluated with more timely data.
Also in 2003, the National Science Foundation established the Engineering Research Center for Collaborative Adaptive Sensing of the Atmosphere, "CASA", a multidisciplinary, multi-university collaboration of engineers, computer scientists, meteorologists, and sociologists to conduct fundamental research, develop enabling technology, and deploy prototype engineering systems designed to augment existing radar systems by sampling the generally undersampled lower troposphere with inexpensive, fast scanning, dual polarization, mechanically scanned and phased array radars.
Mapping radar
The plan position indicator, dating from the early days of radar and still the most common type of display, provides a map of the targets surrounding the radar location. If the radar antenna on an aircraft is aimed downward, a map of the terrain is generated, and the larger the antenna, the greater the image resolution. After centimeter radar came into being, downward-looking radars – the H2S (S-Band) and H2X (X-Band) – provided real-time maps used by the U.S. and Britain in bombing runs over Europe at night and through dense clouds.
Synthetic-aperture radar
In 1951, Carl Wiley led a team at Goodyear Aircraft Corporation (later Goodyear Aerospace) in developing a technique for greatly expanding and improving the resolution of radar-generated images. Called synthetic aperture radar (SAR), an ordinary-sized antenna fixed to the side of an aircraft is used with highly complex signal processing to give an image that would otherwise require a much larger, scanning antenna; thus, the name synthetic aperture. As each pulse is emitted, it is radiated over a lateral band onto the terrain. The return is spread in time, due to reflections from features at different distances. Motion of the vehicle along the flight path gives the horizontal increments. The amplitude and phase of returns are combined by the signal processor using Fourier transform techniques in forming the image. The overall technique is closely akin to optical holography.
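The azimuth-focusing step can be illustrated with a short numerical sketch. All of the parameters below (wavelength, platform speed, range, pulse repetition frequency) are arbitrary illustrative values rather than those of any historical system; the point is only that correlating the recorded pulse-to-pulse phase history of a point target against a reference function built from the known flight geometry, here done with FFTs, collapses the long record into a sharp peak, which is what yields the fine along-track resolution.

import numpy as np

# Illustrative-only parameters (not from any historical SAR)
wavelength = 0.03      # metres, i.e. a 10 GHz carrier
v = 200.0              # platform speed along the flight path, m/s
R0 = 10_000.0          # closest-approach range of a point target, m
prf = 1000.0           # pulse repetition frequency, Hz
n = 2048               # number of pulses across the synthetic aperture

t = (np.arange(n) - n / 2) / prf                 # "slow time" of each pulse
R = np.sqrt(R0**2 + (v * t)**2)                  # hyperbolic range history of the target
phase_history = np.exp(-1j * 4 * np.pi * R / wavelength)
echo = phase_history + 0.5 * (np.random.randn(n) + 1j * np.random.randn(n))  # add receiver noise

# Matched filter (reference function) built from the same known geometry
ref = np.exp(-1j * 4 * np.pi * R / wavelength)

# Correlation via FFTs compresses the long phase history into a sharp azimuth peak
focused = np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(ref)))
print("peak-to-mean magnitude ratio:", float(np.abs(focused).max() / np.abs(focused).mean()))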
Through the years, many variations of the SAR have been made with diversified applications resulting. In initial systems, the signal processing was too complex for on-board operation; the signals were recorded and processed later. Processors using optical techniques were then tried for generating real-time images, but advances in high-speed electronics now allow on-board processes for most applications. Early systems gave a resolution in tens of meters, but more recent airborne systems provide resolutions to about 10 cm. Current ultra-wideband systems have resolutions of a few millimeters.
Other radars and applications
There are many other post-war radar systems and applications. Only a few will be noted.
Radar gun
The most widespread radar device today is undoubtedly the radar gun. This is a small, usually hand-held, Doppler radar that is used to detect the speed of objects, especially trucks and automobiles in regulating traffic, as well as pitched baseballs, runners, or other moving objects in sports. This device can also be used to measure the surface speed of water and continuously manufactured materials. A radar gun does not return information regarding the object's position; it uses the Doppler effect to measure the speed of a target. First developed in 1954, most radar guns operate with very low power in the X or Ku Bands. Some use infrared radiation or laser light; these are usually called LIDAR. A related technology for velocity measurements in flowing liquids or gasses is called laser Doppler velocimetry; this technology dates from the mid-1960s.
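Numerically (illustrative values only), the gun measures the Doppler shift f_d of the return and converts it to radial speed with

v_r = \frac{c\, f_d}{2 f_0}

At an X-band operating frequency of f_0 = 10.5 GHz, a car closing at 30 m/s (about 108 km/h) shifts the return by roughly 2.1 kHz, an audio-frequency difference that simple, low-power electronics can measure directly.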
Impulse radar
As pulsed radars were initially being developed, the use of very narrow pulses was examined. The pulse length governs the accuracy of distance measurement by radar – the shorter the pulse, the greater the precision. Also, for a given pulse repetition frequency (PRF), a shorter pulse results in a higher peak power. Harmonic analysis shows that the narrower the pulse, the wider the band of frequencies that contain the energy, leading to such systems also being called wide-band radars. In the early days, the electronics for generating and receiving these pulses was not available; thus, essentially no applications of this were initially made.
By the 1970s, advances in electronics led to renewed interest in what was often called short-pulse radar. With further advances, it became practical to generate pulses having a width on the same order as the period of the RF carrier (T = 1/f). This is now generally called impulse radar.
The first significant application of this technology was in ground-penetrating radar (GPR). Developed in the 1970s, GPR is now used for structural foundation analysis, archeological mapping, treasure hunting, unexploded ordnance identification, and other shallow investigations. This is possible because impulse radar can concisely locate the boundaries between the general media (the soil) and the desired target. The results, however, are non-unique and are highly dependent upon the skill of the operator and the subsequent interpretation of the data.
In dry or otherwise favorable soil and rock, penetration up to is often possible. For distance measurements at these short ranges, the transmitted pulse is usually only one radio-frequency cycle in duration; with a 100 MHz carrier and a PRF of 10 kHz (typical parameters), the pulse duration is only 10 ns (nanoseconds), leading to the "impulse" designation. A variety of GPR systems are commercially available in back-pack and wheeled-cart versions with pulse-power up to a kilowatt.
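Making the arithmetic explicit: one carrier cycle at 100 MHz lasts T = 1/f = 10 ns, and a pulse of that length gives a free-space two-way range resolution of about

\Delta R \approx \frac{cT}{2} = 1.5\ \text{m}

In soil the wave travels at roughly c/\sqrt{\varepsilon_r}, where \varepsilon_r is the relative permittivity of the ground, so the resolution cell is correspondingly smaller; the exact figures depend strongly on ground conditions and are given here only as generic examples.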
With continued development of electronics, systems with pulse durations measured in picoseconds became possible. Applications are as varied as security and motion sensors, building stud-finders, collision-warning devices, and cardiac-dynamics monitors. Some of these devices are matchbox sized, including a long-life power source.
Radar astronomy
As radar was being developed, astronomers considered its application in making observations of the Moon and other near-by extraterrestrial objects. In 1944, Zoltán Lajos Bay had this as a major objective as he developed a radar in Hungary. His radar telescope was taken away by the conquering Soviet army and had to be rebuilt, thus delaying the experiment. Under Project Diana conducted by the Army's Evans Signal Laboratory in New Jersey, a modified SCR-271 radar (the fixed-position version of the SCR-270) operating at 110 MHz with 3 kW peak-power, was used in receiving echoes from the Moon on January 10, 1946. Zoltán Bay accomplished this on the following February 6.
Radio astronomy also had its start following WWII, and many scientists involved in radar development then entered this field. A number of radio observatories were constructed during the following years; however, because of the additional cost and complexity of involving transmitters and associated receiving equipment, very few were dedicated to radar astronomy. In fact, essentially all major radar astronomy activities have been conducted as adjuncts to radio astronomy observatories.
The radio telescope at the Arecibo Observatory, opened in 1963, was the largest in the world. Owned by the U.S. National Science Foundation and contractor operated, it was used primarily for radio astronomy, but equipment was available for radar astronomy. This included transmitters operating at 47 MHz, 439 MHz, and 2.38 GHz, all with very-high pulse power. It had a 305-m (1,000-ft) primary reflector fixed in position; the secondary reflector was on tracks to allow precise pointing to different parts of the sky. Many significant scientific discoveries were made using the Arecibo radar telescope, including mapping of the surface roughness of Mars and observations of Saturn and its largest moon, Titan. In 1989, the observatory radar-imaged an asteroid for the first time in history.
After an auxiliary and main cable failure on the telescope in August and November 2020, respectively, the NSF announced the decision that it would decommission the telescope through controlled demolition, but that the other facilities on the Observatory would remain operational in the future. However, before the safe decommissioning of the telescope could occur, the remaining support cables from one tower rapidly failed on the morning of December 1, 2020, causing the instrument platform to crash through the dish, shearing off the tops of the support towers, and partially damaging some of the other buildings, though there were no injuries. NSF has stated that it still intends to have the other Observatory facilities operational as soon as possible and is looking at plans to rebuild a new telescope instrument in its place.
Several spacecraft orbiting the Moon, Mercury, Venus, Mars, and Saturn have carried radars for surface mapping; a ground-penetration radar was carried on the Mars Express mission. Radar systems on a number of aircraft and orbiting spacecraft have mapped the entire Earth for various purposes; on the Shuttle Radar Topography Mission, the entire planet was mapped at a 30-m resolution.
The Jodrell Bank Observatory, an operation of the University of Manchester in Britain, was originally started by Bernard Lovell to be a radar astronomy facility. It initially used a war-surplus GL-II radar system operating at 71 MHz (4.2 m). The first observations were of ionized trails in the Geminids meteor shower during December 1945. While the facility soon evolved to become the third largest radio observatory in the world, some radar astronomy continued. The largest (250-ft or 76-m in diameter) of their three fully steerable radio telescopes became operational just in time to radar track Sputnik 1, the first artificial satellite, in October 1957.
See also
Cavity magnetron
History of smart antennas
Klystron
List of German inventions and discoveries
List of World War II electronic warfare equipment
Secrets of Radar Museum
References
Further reading
Blanchard, Yves, Le radar. 1904–2004 : Histoire d'un siècle d'innovations techniques et opérationnelles, éditions Ellipses,(in French)
Bowen, E. G.; "The development of airborne radar in Great Britain 1935–1945", in Radar Development to 1945, ed. by Russell Burns; Peter Peregrinus, 1988,
Bowen, E. G., Radar Days, Institute of Physics Publishing, Bristol, 1987,
Bragg, Michael., RDF1 The Location of Aircraft by Radio Methods 1935–1945, Hawkhead Publishing, 1988,
Brown, Jim, Radar – how it all began, Janus Pub., 1996,
Brown, Louis, A Radar History of World War 2 – Technical and Military Imperatives, Institute of Physics Publishing, 1999,
Buderi, Robert: The invention that changed the world: the story of radar from war to peace, Simon & Schuster, 1996,
Burns, Peter (editor): Radar Development to 1945, Peter Peregrinus Ltd., 1988,
Clark, Ronald W., Tizard, MIT Press, 1965, (An authorized biography of radar's champion in the 1930s.)
Dummer, G. W. A., Electronic Inventions and Discoveries, Elsevier, 1976, Pergamon, 1977,
Erickson, John; "Radio-location and the air defense problem: The design and development of Soviet Radar 1934–40", Social Studies of Science, vol. 2, p. 241, 1972
Frank, Sir Charles, Operation Epsilon: The Farm Hall Transcripts U. Cal. Press, 1993 (How German scientists dealt with Nazism.)
Guerlac, Henry E., Radar in World War II (in two volumes), Tomash Publishers / Am Inst. of Physics, 1987,
Hanbury Brown, Robert, Boffin: A Personal Story of the early Days of Radar and Radio Astronomy and Quantum Optics, Taylor and Francis, 1991,
Howse, Derek, Radar At Sea The Royal Navy in World War 2, Naval Institute Press, Annapolis, Maryland, US, 1993,
Jones, R. V., Most Secret War, Hamish Hamilton, 1978, (Account of British Scientific Intelligence between 1939 and 1945, working to anticipate Germany's radar and other developments.)
Kroge, Harry von, GEMA: Birthplace of German Radar and Sonar, translated by Louis Brown, Inst. of Physics Publishing, 2000,
Latham, Colin, and Anne Stobbs, Radar A Wartime Miracle, Sutton Publishing Ltd, 1996, (A history of radar in the UK during WWII told by the men and women who worked on it.)
Latham, Colin, and Anne Stobbs, The Birth of British Radar: The Memoirs of Arnold 'Skip' Wilkins, 2nd Ed., Radio Society of Great Britain, 2006,
Lovell, Sir Bernard Lovel, Echoes of War – The History of H2S, Adam Hilger, 1991,
Nakagawa, Yasudo; Japanese Radar and Related Weapons of World War II, translated and edited by Louis Brown, John Bryant, and Naohiko Koizumi, Aegean Park Press, 1997,
Pritchard, David., The Radar War Germany's Pioneering Achievement 1904–1945 Patrick Stephens Ltd, Wellingborough 1989,
Rawnsley, C. F., and Robert Wright, Night Fighter, Mass Market Paperback, 1998
Sayer, A. P., Army Radar – Historical Monograph, War Office, 1950
Swords, Seán S., Technical History of the Beginnings of Radar, IEE/Peter Peregrinus, 1986,
Watson, Raymond C., Jr. Radar Origins Worldwide: History of Its Evolution in 13 Nations Through World War II. Trafford Pub., 2009,
Watson-Watt, Sir Robert, The Pulse of Radar, Dial Press, 1959, (no ISBN) (An autobiography of Sir Robert Watson-Watt)
Zimmerman, David., Britain's Shield Radar and the Defeat of the Luftwaffe, Sutton Publishing, 2001,
External links
Radarworld.org: Radar World website
Radarworld.org: "Radar Family Tree" — by Martin Hollmann.
PenleyRadarArchives.org: "Early Radar History – an Introduction" — by Bill + Jonathan Penley (2002).
Fas.org: "Introduction to Naval Weapons Engineering" — Radar fundamentals section.
Foundation Centre for German Communications and Related Technologies: "Christian Hülsmeyer and about the early days of radar inventions" — by Arthur O. Bauer.
Purbeckradar.org: Early radar development in the UK
Hist.rloc.ru: "A History of Radio Location in the USSR"—
Jahre-radar.de: "100 Years of Radar"—
Jahre-radar.de: "The Century of Radar – from Christian Hülsmeyer to Shuttle Radar Topography Mission"—, by Wolfgang Holpp.
World War II
The Radar Pages.uk: "All you ever wanted to know about British air defence radar" — history and details of various British radar systems, by Dick Barrett.
The Radar Pages.uk: Deflating British Radar Myths of World War II — by Maj. Gregory C. Clark (1997).
The Secrets of Radar Museum: "Canada's involvement in WWII Radar"
Navweaps.com: German Radar Equipment of World War II
Articles containing video clips
Radar
Radar | History of radar | Technology,Engineering | 25,121 |
2,891,988 | https://en.wikipedia.org/wiki/Black%20Fork%20Mountain%20Wilderness | The Black Fork Mountain Wilderness Area is located in the U.S. states of Arkansas and Oklahoma. Created by an act of Congress in 1984, the wilderness covers an area of 13,139 acres (53 km²). The Arkansas portion contains and the Oklahoma portion contains . Located within Ouachita National Forest, the wilderness is managed by the U.S. Forest Service. The area is about north of Page, Oklahoma, and about north of Mena, Arkansas.
This infrequently visited wilderness follows the main ridgeline of Black Fork Mountain for 13 miles (21 km) which rises to more than 2,400 feet (731 m). Steep cliffsides provide sanctuary to groves of Dwarf Oak, Serviceberry and Granddaddy Greybeard (known as the fringe tree Chionanthus) which have a few unique species represented here.
There are few trails through the wilderness and none at all in the Oklahoma sections. Visitors should expect difficult hiking conditions and few sources for water as there are only two springs along the higher mountain slopes. Black bears are known to inhabit the wilderness, along with white-tailed deer, bobcat, skunk and pheasant.
The wilderness contains extensive areas of unlogged, old-growth forest. Along the ridge of Black Fork Mountain are several thousand acres of stunted old-growth Post Oak, Shortleaf Pine, and Hickory. There is also a small grove of old-growth sugar maple forest nestled in the folds of the mountain slopes.
U.S. Wilderness Areas do not allow motorized or mechanized vehicles, including bicycles. Although camping and fishing are usually allowed with a proper permit, no roads or buildings can be constructed and there is also no logging or mining, in compliance with the 1964 Wilderness Act. Wilderness areas within National Forests and Bureau of Land Management areas also allow hunting in season.
The mountain was the site of the crash of Texas International Airlines Flight 655 on September 27, 1973, in which 11 persons died.
See also
List of U.S. Wilderness Areas
List of old growth forests
References
External links
Protected areas of Le Flore County, Oklahoma
Protected areas of Polk County, Arkansas
Protected areas of Scott County, Arkansas
Wilderness areas of Arkansas
Wilderness areas of Oklahoma
Old-growth forests
Ouachita National Forest
1984 establishments in Arkansas
Protected areas established in 1984
1984 establishments in Oklahoma | Black Fork Mountain Wilderness | Biology | 472 |
25,181,369 | https://en.wikipedia.org/wiki/HD%20215497%20b | HD 215497 b is an extrasolar planet which orbits the K-type main sequence star HD 215497, located approximately 142 light years away in the constellation Tucana. This planet has at least 6.6 times the mass of Earth. This planet was detected by HARPS on October 19, 2009, together with 29 other planets, including HD 215497 c.
References
Exoplanets discovered in 2009
Exoplanets detected by radial velocity
Terrestrial planets
Super-Earths
Tucana | HD 215497 b | Astronomy | 102 |
68,842,959 | https://en.wikipedia.org/wiki/List%20of%20IBM%20PS/2%20models | The Personal System/2 or PS/2 was a line of personal computers developed by International Business Machines Corporation (IBM). Released in 1987, the PS/2 represented IBM's second generation of personal computer following the original IBM PC series, which was retired following IBM's announcement of the PS/2 in April 1987. Most PS/2s featured the Micro Channel architecture bus—a closed standard which was IBM's attempt at recapturing control of the PC market. However some PS/2 models at the low end featured ISA buses, which IBM included with their earlier PCs and which were widely cloned due to being a mostly-open standard. Many models of PS/2 were made, which came in the form of desktops, towers, all-in-ones, portables, laptops and notebooks.
Notes
Legend
Explanatory notes
Built-in or optional monitors are CRTs unless mentioned otherwise.
The Space Saving Keyboard is a 87-key numpad-less version of the Model M.
The 25 Collegiate, intended for college students, had two 720 KB floppy drives, maxed out the RAM to 640 KB, and came packaged with the official PS/2 Mouse, Windows 2.0, and four blank floppy disks.
Financial workstations came packaged with a 50-key function keypad and were intended for use in banks.
LS models are "LAN Stations": essentially the same as their non-LS counterparts but without floppy drives or hard drives and that connect to networks using Ethernet or Token Ring adapters (in essence, diskless workstations).
Ultimedia models shipped with a microphone and included SCSI CD-ROMs, M-Audio sound adapter cards and volume controls and headphone and microphone jacks at the front of the case.
Array models are PS/2 Servers with support for RAID.
Models
Main line
PS/2 Server
Portables
Related systems
See also
List of third-party Micro Channel computers
List of IBM Personal Computer models
References
IBM lists
IBM PS 2 models | List of IBM PS/2 models | Technology | 415 |
1,041,286 | https://en.wikipedia.org/wiki/Wakame | Wakame (Undaria pinnatifida) is a species of kelp native to cold, temperate coasts of the northwest Pacific Ocean. As an edible seaweed, it has a subtly sweet, but distinctive and strong flavour and satiny texture. It is most often served in soups and salads.
Wakame has long been collected for food in East Asia, and sea farmers in Japan have cultivated wakame since the eighth century (Nara period).
Although native to cold temperate coastal areas of Japan, Korea, China, and Russia, it has established itself in temperate regions around the world, including New Zealand, the United States, Belgium, France, Great Britain, Spain, Italy, Argentina, Australia and Mexico. The Invasive Species Specialist Group has listed the species on its list of 100 worst globally invasive species.
Wakame, as with all other kelps and brown algae, is plant-like in appearance, but is unrelated to true plants, being, instead, a photosynthetic, multicellular stramenopile protist of the SAR supergroup.
Names
The primary common name is derived from the Japanese name (, , , ).
In English, it can also be called sea mustard.
In Chinese, it is called (裙带菜) or (海帶芽)
In French, it is called or ('sea fern').
In Korean, it is called (미역).
Etymology
In Old Japanese, stood for edible seaweeds in general as opposed to standing for algae. In kanji, such as , and were applied to transcribe the word. Among seaweeds, wakame was likely most often eaten, therefore especially meant wakame. It expanded later to other seaweeds like kajime, hirome (kombu), arame, etc. Wakame is derived from + (, lit. 'young seaweed'). If this is a eulogistic prefix, the same as the of tamagushi, wakame likely stood for seaweeds widely in ancient ages. In the Man'yōshū, in addition to and (both are read as ), (, soft wakame) can be seen. Besides, (, lit. 'beautiful algae'), which often appeared in the , may be wakame depending on poems.
History in the West
The earliest appearance in Western documents is probably in Nippo Jisho (1603), as Vacame.
In 1867 the word wakame appeared in an English-language publication, A Japanese and English Dictionary, by James C. Hepburn.
Starting in the 1960s, the word wakame started to be used widely in the United States, and the product (imported in dried form from Japan) became widely available at natural food stores and Asian-American grocery stores, due to the influence of the macrobiotic movement, and in the 1970s with the growing number of Japanese restaurants and sushi bars.
Aquaculture
Japanese and Korean sea-farmers have grown wakame for centuries, and are still both the leading producers and consumers. Wakame has also been cultivated in France since 1983, in sea fields established near the shores of Brittany.
Wild-grown wakame is harvested in Tasmania, Australia, and then sold in restaurants in Sydney; it is also sustainably hand-harvested from the waters of Foveaux Strait in Southland, New Zealand, and freeze-dried for retail and use in a range of products.
Cuisine
Wakame fronds are green and have a subtly sweet flavour and satiny texture. The leaves should be cut into small pieces as they will expand during cooking.
In Japan and Europe, wakame is distributed either dried or salted, and used in soups (particularly miso soup), and salads (tofu salad), or often simply as a side dish to tofu and a salad vegetable like cucumber. These dishes are typically dressed with soya sauce and vinegar, possibly rice vinegar.
Goma wakame, also known as seaweed salad, is a popular side dish at American and European sushi restaurants. Literally translated, it means "sesame seaweed", as sesame seeds are usually included in the recipe.
In Korea, wakame is used to make seaweed soup called miyeok-guk in which wakame is stir-fried in sesame oil and boiled with meat broth.
Health effects
A study conducted at Hokkaido University found that a compound in wakame known as fucoxanthin may help burn fatty tissue in mice and rats. Studies in mice have shown that fucoxanthin induces expression of the fat-burning protein UCP1 that accumulates in fat tissue around the internal organs. Expression of UCP1 protein was significantly increased in mice fed fucoxanthin. Wakame is also used in topical beauty treatments. See also Fucoidan.
Wakame is a rich source of eicosapentaenoic acid, an omega-3 fatty acid. At over 400 mg/(100 kcal) or almost 1 mg/kJ, it has one of the higher nutrient-to-energy ratios for this nutrient, and among the very highest for a vegetarian source. Wakame is a low calorie food. A typical 10–20 g (1–2 tablespoon) serving of wakame contains roughly and provides 15–30 mg of omega-3 fatty acids. Wakame also has high levels of sodium, calcium, iodine, thiamine and niacin.
In Oriental medicine it has been used for blood purification, intestinal strength, skin, hair, reproductive organs and menstrual regularity.
In Korea, miyeok-guk soup is popularly consumed by women after giving birth as sea mustard () contains a high content of calcium and iodine, nutrients that are important for new nursing mothers. Many women consume it during the pregnancy phase as well. It is also traditionally eaten on birthdays for this reason, a reminder of the first food that the mother has eaten and passed on to her newborn through her milk.
Invasive species
Native to cold temperate coastal areas of Japan, Korea, China, and Russia, in recent decades it has become established in temperate regions around the world, including New Zealand, the United States, Belgium, France, Great Britain, Spain, Italy, Argentina, Australia and Mexico. It was nominated one of the 100 worst invasive species in the world. Undaria is commonly initially introduced or recorded on artificial structures, where its r-selected growth strategy facilitates proliferation and spread to natural reef sites. Undaria populations make a significant but inconsistent contribution of food and habitat to intertidal and subtidal reefs. Undaria invasion can cause changes to native community composition at all trophic levels. As well as increasing primary productivity, it can reduce the abundance and diversity of understory algal assemblages, out-compete some native macroalgal species and affect the abundance and composition of associated epibionts and macrofauna, including gastropods, crabs, urchins and fish. Its dense congregation and capability to latch onto any hard surface has caused it to become a major cause of damage to aquaculture apparatus, decreasing efficiency of fishing industries by clogging underwater equipment and fouling boat hulls.
Eradication of wakame within a localized area usually involves getting rid of the plants underwater, often via regular inspection of aquatic environments. Removing the plants underwater without disrupting native flora is accomplished by humans diving underwater and manually removing the reproductive parts of the wakame to reduce its spread. Proper and regular cleaning of underwater apparatus reduces the potential vectors for wakame spores, reducing the spread of the plant.
New Zealand
In New Zealand, Undaria pinnatifida was declared as an unwanted organism in 2000 under the Biosecurity Act 1993. It was first discovered in Wellington Harbour in 1987 and probably arrived as hull fouling on shipping or fishing vessels from Asia. In 2010, a single Undaria pinnatifida plant was discovered in Fiordland, which has since quickly spread from a small clump and localized itself throughout Fiordland.
Wakame is now found around much of New Zealand, from Stewart Island to as far north as the subtropical waters of Karikari Peninsula. It spreads in two ways: naturally, through the millions of microscopic spores released by each fertile organism, and through human mediated spread, most commonly via hull fouling and with marine farming equipment. It is a highly successful and fertile species, which makes it a serious invader. Its capability to grow in dense congregations on any hard surface allows it to outcompete native flora and fauna for sunlight and space. Although the effects of wakame in New Zealand are not fully understood, with the severity varying depending on the location, the negative impact of wakame is projected to be significant against the fishing and tourism industries in Fiordland, as well as overcrowding in popular diving locations.
Even though it is an invasive species, farming of wakame is permitted in already heavily infested areas of New Zealand, as part of a control program established in 2010. In 2012, the government allowed for the farming of wakame in Wellington, Marlborough and Banks Peninsula. Farmers of wakame must obtain permission from Biosecurity New Zealand to access approval of Sections 52 and 53 of the Biosecurity Act 1993, which deal with exceptions to the possession of pests and unwanted creatures. Furthermore, any farmed wakame must only be naturally settled in pre-existing marine farms; mussel farms are a commonly infested area for wakame. As an exceptional case of permitted farming purely as pest control, profiting from wakame is not permitted, with the exception of Ngāi Tahu; the iwi's revenue from harvesting wakame is used to fund further pest control.
United States
The seaweed has been found in several harbors in southern California. In May 2009 it was discovered in San Francisco Bay and aggressive efforts are underway to remove it before it spreads.
See also
Kelp
Kombu
Laverbread
Miyeok guk
References
External links
Wakame Seaweed at About.com
AlgaeBase link
Undaria pinnatifida at the FAO
Undaria pinnatifida at the Joint Nature Conservation Committee, UK
Global Invasive species database
Undaria Management at the Monterey Bay National Marine Sanctuary
Alariaceae
Algae of Korea
Marine biota of Asia
Edible algae
Edible seaweeds
Japanese cuisine terms
Plants described in 1873 | Wakame | Biology | 2,166 |
8,681,935 | https://en.wikipedia.org/wiki/Long-footed%20potoroo | The long-footed potoroo (Potorous longipes) is a small marsupial found in southeastern Australia, restricted to an area around the coastal border between New South Wales and Victoria. It was first recorded in 1967 when an adult male was caught in a dog trap in the forest southwest of Bonang, Victoria. It is classified as vulnerable.
P. longipes is the largest species of Potorous, resembling the long-nosed potoroo, Potorous tridactylus. It is a solitary, nocturnal creature, feeding on fungi, vegetation, and small invertebrates. It differs from P. tridactylus in its larger feet and longer tail.
Current threats to the species include predation by introduced feral cats and foxes, and loss of habitat from logging within its limited range.
Taxonomy
The scientific name of the animal commonly known as the long-footed potoroo is Potorous longipes. Potoroo is the common name for all of the three other species in the genus Potorous, Gilbert's potoroo, P. gilbertii, the broad-faced potoroo, P. platyops, and long-nosed potoroo, P. tridactylus. P. longipes is the largest potoroo, and most resembles P. tridactylus. The species was first recorded in 1967 in the East Gippsland region of Victoria, Australia. The formal description was published in 1980. Remains of the long-footed potoroo were found in predator droppings in 1986.
Description and anatomy
The long-footed potoroo is a very rare marsupial found only in Australia. A potoroo is a small type of kangaroo-like marsupial. It is about the size of a rabbit and, as its common name suggests, it has very long hind feet. These feet have long toes with very strong claws. The species is the largest of the potoroos, with males weighing up to and females . The entire body length is . The tail can be between in length, while the hind foot is . This animal can be differentiated from other potoroos by its long back feet, which are the same length relative to its head. It has an extra footpad called the hallucal pad. The long-footed potoroo hops in a similar fashion to a kangaroo, yet can use its tail to grasp objects. It has a soft, dense coat, with grayish-brown fur that slowly fades into a lighter color on the feet and belly.
Behavior and life history
Habitat and distribution
The long-footed potoroo lives in a range of montane forests and has also been found in warmer temperate rainforest. It occurs where the soil is constantly moist. It spends the daytime sleeping in a nest on the ground in a hidden, sheltered spot. An essential feature of its habitat is dense vegetation cover, which provides protection and shelter from predators. Because the species was not known to science until 1967, it is poorly understood. Its range is very restricted. The main populations are found in Victoria, in the Barry Mountains in the north-east of the state and in East Gippsland in the far east. A smaller population lives north of the Victorian border, in the south-east forests of New South Wales.
Population
The long-footed potoroo is very difficult to find in the wild due to its shy behavior. The National Recovery Plan states that it is unlikely that as many as a few thousand individuals remain in the wild; only a few hundred long-footed potoroos may survive.
Diet
The long-footed potoroo's diet normally consists of up to 91% fruiting fungi found underground. It is known to consume up to 58 different species of fungi. These underground fungi are also called sporocarps or truffles. If necessary, it may also eat fruits, plant material, and soil-dwelling invertebrates. Its jaws have shearing premolars and molars that are rounded at the top, indicating a varied diet.
The long-footed potoroo plays a part in the symbiotic relationship between ectomycorrhizal fungi and trees: it disperses the spores of the fruiting fungi in its fecal material, which helps keep the forest healthy and benefits both the fungi and the trees. The species of fungi eaten in winter and summer are similar, but the proportion of each fungal species varies between seasons and years. The potoroo has a sacculated forestomach in which bacterial fermentation occurs, aiding the breakdown of fungal cell walls.
Behavior and communication
The long-footed potoroo is very shy and elusive, and usually remains hidden from plain sight. It can produce a vocalization, a low "kiss kiss" sound, when stressed or to communicate with its offspring. Although it is a nocturnal species, it may bask in the sun in the early morning. Under normal conditions, males are not aggressive; if provoked, however, they can become aggressive in defending their home range.
Mating, reproduction, and parental care
Breeding can occur all year, yet most young are born in winter, spring, and early summer, when higher rainfall and deep, moist soil full of leaf litter provide a stable food supply; these periods of good conditions allow breeding to occur more readily. When a female is in oestrus, nearby males fight with one another until dominance is established. The species has a monogamous mating system. The gestation period is around 38 days. In captivity, the offspring stay in the mother's pouch for 140 to 150 days and reach sexual maturity at around 2 years old. Females can give birth to up to three young per year, though one or two is most common. After the young leave the pouch, they may stay with their mother for up to 20 weeks until they become independent, and remain in her territory for up to 12 months before leaving. The long-footed potoroo exhibits post-partum oestrus and embryonic diapause.
Movement patterns
The long-footed potoroo moves between different parts of its territory according to the distribution of fungi, so its territory boundaries shift seasonally following the distribution of truffles. Males use a larger home range than females. The species is territorial; the territories of a mated pair can overlap with each other, but not with those of other pairs. The home range of the long-footed potoroo is between 22 and 60 ha in East Gippsland and between 14 and 23 ha in north-eastern Victoria.
Conservation issues
Status
As of 2006, the long-footed potoroo is classified as endangered (EN) on the IUCN Red List, because its extent of occurrence is less than 5,000 km2 and the population across this fragmented range is most likely declining due to predation and to competition for food from introduced pigs. It is listed as an endangered species on Schedule 1 of the New South Wales Threatened Species Conservation Act 1995. It is also considered an endangered species under the Commonwealth Environment Protection and Biodiversity Conservation Act 1999, and endangered under the Victorian Flora and Fauna Guarantee Act 1988.
Threats
Its most serious predators include the red fox, feral cats, and wild dogs, all introduced species. Road construction has greatly disturbed its habitat, and the animals appear to move along these roads and forage there, which exposes them to the risk of being hit by motor vehicles. In Victoria, State Forest holds about half of the long-footed potoroo population. Introduced pigs may be a major competitor for the long-footed potoroo's specialized diet.
Conservation plans
Information on this rare species is sparse, so further studies on its way of life and habitat are needed to conserve it effectively. Research was performed on a small captive population that bred at Healesville Sanctuary in the 1980s and 1990s. Small steps have been taken to increase the population of the long-footed potoroo and to protect it from extinction. In Victorian State Forest, the species is protected through special areas in which logging is monitored or prevented and burning of the forest has been reduced. Introduced predators such as wild dogs, the red fox, and feral cats have also been brought under control, allowing the long-footed potoroo to reclaim its habitat and its numbers to rise again. Conservation measures such as these benefit not only the long-footed potoroo but also other threatened animal species in the area.
2019–2020 Australian bushfires
Over 82% of its habitat was burnt during the 2019–2020 Australian bushfires.
References
External links
Images and movies of long-footed potoroo at ARKive
Foundation for National Parks & Wildlife
Potoroids
Endangered fauna of Australia
Mammals of New South Wales
Mammals of Victoria (state)
EDGE species
Mammals described in 1980 | Long-footed potoroo | Biology | 1,882 |
49,861,676 | https://en.wikipedia.org/wiki/International%20Conference%20on%20Human%E2%80%93Robot%20Interaction | The ACM/IEEE '''International Conference on Human-Robot Interaction (HRI)''' is an annual conference "focusing on human-robot interaction with roots in robotics, psychology, cognitive science, human computer interaction (HCI), human factors, artificial intelligence, organizational behavior, anthropology, and other fields". The conference is a joint undertaking of the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE) organizations.
See also
ACM
ACM SIGAI
IEEE
References
External links
ACM web site
IEEE web site
HRI 2016 conference site
Software engineering conferences | International Conference on Human–Robot Interaction | Engineering | 125 |