| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
60,773,098 | https://en.wikipedia.org/wiki/Clitopaxillus%20alexandri | Clitopaxillus alexandri is a species of fungus in the family Pseudoclitocybaceae. It has been given the recommended English name of Alexander's funnel. Basidiocarps (fruit bodies) are agaricoid and resemble those of Clitocybe species. The species is saprotrophic and is mainly known from Europe.
Description
The pileus (cap) is convex at first becoming umbonate with age, smooth, 50–200 mm in diameter, grey to reddish brown, cracking with age. The lamellae (gills) are decurrent and paler than the pileus. The stipe (stem) is up to 100 mm tall, and yellowish white. The context is whitish with an almond smell. The spore print is white. Microscopically the basidiospores are smooth, ellipsoid, weakly amyloid, and measure 4.5–5.5 × 3.5–4.0 μm.
Similar species
The recently described Clitopaxillus fibulatus is very similar, but differs microscopically in its slightly larger basidiospores and hyphae with more abundant clamp connections. It also differs in having a subarctic and alpine distribution.
Habitat and distribution
The species typically occurs in leaf litter with pine, oak, and cedar. It was originally described from France and is mostly known from southern, western, and central Europe, extending into North Africa.
Conservation
Clitopaxillus alexandri is assessed as "critically endangered" on the Red Data List of Threatened British Fungi, "vulnerable" on the Dutch red list, and "threatened" or "near threatened" on some other European red lists, including those of Germany, Hungary, and Norway.
References
Fungi of Europe
Fungi of Africa
Fungi described in 1873
Fungus species | Clitopaxillus alexandri | [
"Biology"
] | 370 | [
"Fungi",
"Fungus species"
] |
60,773,258 | https://en.wikipedia.org/wiki/Tropifexor | Tropifexor is an investigational drug that acts as an agonist of the farnesoid X receptor (FXR). It was discovered by researchers from Novartis and Genomics Institute of the Novartis Research Foundation. Its synthesis and pharmacological properties were published in 2017. It was developed for the treatment of cholestatic liver diseases and nonalcoholic steatohepatitis (NASH). In combination with cenicriviroc, a CCR2 and CCR5 receptor inhibitor, it is undergoing a phase II clinical trial for NASH and liver fibrosis.
Rats treated orally with tropifexor (0.03 to 1 mg/kg) showed upregulation of the FXR target genes BSEP and SHP, and downregulation of CYP8B1. Its EC50 for FXR is between 0.2 and 0.26 nM, depending on the biochemical assay.
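The reported potency can be put in context with a simple concentration-response sketch. This assumes a one-site model with Hill coefficient 1, which is an illustrative simplification, not tropifexor's measured dose-response curve:

```python
# Sketch: fractional receptor activation under a simple one-site
# (Hill coefficient = 1) model. This is an assumed illustrative model,
# not a claim about tropifexor's actual dose-response shape.
def fractional_activation(conc_nm: float, ec50_nm: float) -> float:
    """Fraction of maximal FXR response at a given agonist concentration."""
    return conc_nm / (conc_nm + ec50_nm)

# At a concentration equal to the EC50, the response is half-maximal
# by construction:
half = fractional_activation(0.26, 0.26)   # 0.5
# At 10x the EC50 the simple model predicts ~91% of maximal response:
high = fractional_activation(2.6, 0.26)
```

By definition of EC50, half-maximal response occurs at that concentration; the 10x case simply shows how quickly a hyperbolic response saturates.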
The patent that covers tropifexor and related compounds was published in 2010.
References
Drugs developed by Novartis
Benzothiazoles
Farnesoid X receptor agonists
Isoxazoles
Tropanes
Carboxylic acids
Trifluoromethyl ethers
Cyclopropyl compounds | Tropifexor | [
"Chemistry"
] | 266 | [
"Carboxylic acids",
"Functional groups"
] |
60,773,452 | https://en.wikipedia.org/wiki/Ruben%20A.%20Stirton | Ruben Arthur Stirton (1901-1966), known to his friends as "Stirt", was an American paleontologist, specializing in mammals, who was active in South America, the United States and Australia. Stirton was closely associated with the University of California Museum of Paleontology, receiving an appointment as curator in 1930 and as its fourth director from 1949 to 1966. His career also saw engaged as a lecturer, associate professorship and then as a professor in 1951, from which time he was director of the University's Department of Paleontology.
Stirton was born in Kansas on 20 August 1901, and graduated from the state's university in the field of zoology. He served as the mammalogist on expeditions led by Donald R. Dickey to El Salvador in the 1920s. His expeditions included a return to El Salvador in the 1940s, as well as another collecting fossils in Colombia. In 1953, he directed his studies to the marsupials of Australia, with the intent of discovering primitive species of marsupials.
His publications mainly dealt with fossilized mammals from the Great Plains, particularly beavers and horses. Other contributions he made included careful and systematic descriptions of fossil specimens including an accurate determination of their geological origin, the use of animal groups to perform stratigraphic correlation, and various studies on evolutionary changes in several families of mammals. Stirton was the leading author of papers that described new taxa, including the Vombatiformes genera Rhizophascolonus and Litokoala, which were published posthumously in 1967.
He died on June 14, 1966, of a heart attack while attending a meeting of the American Society of Mammalogists in southern California. Stirton's students recall him as a popular, lively lecturer, noting his rendition of the call of the Australian dingo as an example of his enthusiasm. In 1979, fellow paleontologist Patricia Vickers-Rich named the prehistoric species Dromornis stirtoni (colloquially known as Stirton's thunderbird), possibly the largest bird ever to have lived, in his honor.
References
1901 births
1966 deaths
American paleontologists
Taxon authorities
Presidents of the Society of Vertebrate Paleontology | Ruben A. Stirton | [
"Biology"
] | 442 | [
"Taxon authorities",
"Taxonomy (biology)"
] |
60,773,981 | https://en.wikipedia.org/wiki/HR%20858 | HR 858 (also known as HD 17926 or TOI-396) is a star with a planetary system located 103 light-years from the Sun in the southern constellation of Fornax. It has a yellow-white hue and is visible to the naked eye, but it is a challenge to see with an apparent visual magnitude of 6.4. The star is drifting further away with a radial velocity of 10 km/s. It has an absolute magnitude of +3.82.
This object is a slightly-evolved F-type main-sequence star with a stellar classification of F6V, which indicates it is generating energy through core hydrogen fusion. It is roughly two billion years old and is spinning with a projected rotational velocity of 8.3 km/s. The star has 1.1 times the mass of the Sun and 1.3 times the Sun's radius. It is radiating 2.3 times the luminosity of the Sun from its photosphere at an effective temperature of 6,201 K.
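The quoted luminosity follows from the radius and effective temperature via the Stefan-Boltzmann law in solar units (T_sun = 5772 K, the IAU nominal solar effective temperature):

```python
# Illustrative check of the quoted luminosity using the Stefan-Boltzmann
# relation in solar units: L/Lsun = (R/Rsun)^2 * (T/Tsun)^4.
T_SUN = 5772.0  # K, IAU nominal solar effective temperature

def luminosity_solar(radius_solar: float, t_eff: float) -> float:
    return radius_solar**2 * (t_eff / T_SUN)**4

L = luminosity_solar(1.3, 6201)  # ~2.25, consistent with the quoted 2.3
```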
A faint co-moving stellar companion, designated component B, has been detected at an angular separation of . This corresponds to a projected separation of . It is a red dwarf star.
Planetary system
In May 2019, HR 858 was announced to have at least 3 exoplanets as observed by the transit method with the Transiting Exoplanet Survey Satellite. All three are orbiting close to the host star and are close in size, each about twice the radius of the Earth. Described as super-Earths by their discovery paper, measurements of their masses suggest that in terms of composition they may be better described as sub-Neptunes. Planets 'b' and 'c' may be in a 3:5 mean-motion resonance.
Further research measured the masses of planets b and d using accurate radial velocities, giving masses of as well as planetary densities of 2.44 and 4.9 g/cm3. The system displays significant transit timing variations (TTVs). The mass of planet c could not be measured using radial velocities, but it is constrained to be less than , and a less reliable value of was measured using TTVs.
Notes
References
External links
in-the-sky.org
F-type main-sequence stars
Planetary systems with three confirmed planets
Binary stars
Fornax
CD-31 1148
017926
013363
0858
M-type main-sequence stars
396 | HR 858 | [
"Astronomy"
] | 497 | [
"Fornax",
"Constellations"
] |
60,775,664 | https://en.wikipedia.org/wiki/Christchurch%20Call%20to%20Action%20Summit | The Christchurch Call to Action Summit (also called the Christchurch Call) was a political summit initiated by then New Zealand Prime Minister Jacinda Ardern that took place on 15 May 2019 in Paris, France, two months after the Christchurch mosque shootings of 15 March 2019. Co-chaired by Ardern and President Emmanuel Macron of France, the summit aimed to "bring together countries and tech companies in an attempt to bring to an end the ability to use social media to organise and promote terrorism and violent extremism". World leaders and technology companies pledged to "eliminate terrorist and violent extremist content online"; 17 countries originally signed the non-binding agreement, with another 31 countries following suit on 24 September the same year. The pledge consists of three sections or commitments: one for governments, one for online service providers, and one for the ways in which the two can work together.
In May 2024, the New Zealand and French Governments agreed to the creation of a new charity called the Christchurch Call Foundation to continue the work of the Christchurch Call.
Signatories
Among the signatories to the pledge are the European Commission, Council of Europe, UNESCO, and the governments of the following countries:
Argentina
Australia
Austria
Belgium
Bulgaria
Canada
Chile
Colombia
Croatia
Cyprus
Czech Republic
Denmark
Estonia
Finland
France
Georgia
Germany
Ghana
Greece
Hungary
Iceland
Indonesia
India
Ireland
Italy
Ivory Coast
Japan
Jordan
Kenya
Republic of Korea
Latvia
Lithuania
Luxembourg
Maldives
Malta
Mexico
Mongolia
The Netherlands
New Zealand
Norway
Peru
Poland
Portugal
Romania
Senegal
Slovakia
Slovenia
Spain
Sri Lanka
Sweden
Switzerland
Tunisia
United Kingdom
United States
The following online service providers, as part of the Global Internet Forum to Counter Terrorism (GIFCT) consortium, also signed the pledge:
Amazon
Dailymotion
Facebook
Google
Microsoft
Qwant
Twitter
YouTube
The United States, under (Republican) President Trump, declined to attend in 2019, expressing concerns that US compliance with the agreement could create conflicts with free-speech protections in the country's Constitution; the United States however did support the summit's "overarching message" and "endorsed its overall goals". On 7 May 2021, White House press secretary Jen Psaki announced that the United States, under (Democratic) President Biden, would be joining the Christchurch Call and participate in a virtual summit on 14 May 2021.
Status history
On 4 April 2023, New Zealand Prime Minister Chris Hipkins appointed Ardern as Special Envoy for the Christchurch Call. Ardern will serve in the role in a voluntary capacity and report to Hipkins.
On 14 May 2024, New Zealand Prime Minister Christopher Luxon and French President Emmanuel Macron agreed that the Christchurch Call would continue as a charity rather than a part of NZ Department of the Prime Minister and Cabinet. The two leaders announced the creation of a new non governmental organisation called the Christchurch Call Foundation, to coordinate the Christchurch Call's "work to eliminate terrorist and violent extremist content online."
Commentary
Bryan Keogh wrote in The Conversation that the summit "has made excellent progress as a first step to change, but we need to take this opportunity to push for systemic change in what has been a serious, long-term problem." InternetNZ CEO Jordan Carter called the summit "a vital first step" to addressing terrorism and violent extremism online, saying that it was "important that governments and online service providers have come together on this issue, to agree real, actionable changes." Jillian York of the Electronic Frontier Foundation praised the Call for asking companies to provide greater transparency regarding their moderation practices, while expressing concerns about how terms such as "terrorism" and "violent extremism" are defined by various governments.
Tom Rogan argued in the Washington Examiner that the Call's goal for governments to work with companies to stop "violent extremist content" would breach Americans' First Amendment rights, using war footage on YouTube as an example of content that could be blocked under this agreement. Nick Gillespie of Reason criticized the summit, writing that "it should be deeply worrying to anyone who believes in free expression that governments and corporations are openly working together to decide what is and is not acceptable speech."
References
External links
2019 in Paris
2019 in international relations
21st-century diplomatic conferences
Christchurch mosque shootings
Counterterrorism
Internet censorship
May 2019 events in France
Social media
France–New Zealand relations
Conferences in Paris
Jacinda Ardern
Emmanuel Macron
2019 conferences | Christchurch Call to Action Summit | [
"Technology"
] | 874 | [
"Computing and society",
"Social media"
] |
60,776,979 | https://en.wikipedia.org/wiki/Polydiketoenamine | Polydiketoenamine (PDK) is a polymer discovered in 2019 that can be recycled over and over without loss of performance. It is obtained from carboxylic acids and polyamides. The compound contains a cross-linked network which gives it the properties of higher performance and chemical resistance. The mechanical reprocessing of PDK is done without degrading its properties or performance. When the compound is under a substantial amount of heat, the total number of bonds remain constant; therefore, under heat bonds will break and make to reform. Researchers at Lawrence Berkeley National Laboratory studied PDK and published the results in Nature Chemistry in April 2019. Submersion in an acidic solution breaks down the polymer to its original monomers and separates the monomers from additives.
See also
Enamine
References
Lawrence Berkeley National Laboratory
Organic polymers | Polydiketoenamine | [
"Chemistry"
] | 171 | [
"Polymer stubs",
"Organic polymers",
"Organic compounds",
"Organic chemistry stubs"
] |
60,777,446 | https://en.wikipedia.org/wiki/Barrel%20plating | Barrel plating is a form of electroplating used for plating a large number of smaller metal objects in one sitting. It consists of a non-conductive barrel-shaped cage in which the objects are placed before being subjected to the chemical bath in which they become plated. An important aspect of the barrel plating process is that the individual pieces establish a bipolar contact with one another — this results in high plating efficiency. However, because of the large amount of surface contact that the pieces have with each other, barrel plating is generally not recommended when precisely engineered or ornamental finishes are required.
Barrel plating began as a practice in the United States during the US Civil War. The harsh chemicals required, however, meant that it had to await the development of non-conductive and chemically resistant plastics, primarily perspex and polypropylene, before it could receive widespread use. By 2004, barrel plating had become widespread: it was estimated that as much as 70% of modern electroplating facilities used barrel plating techniques at that time.
References
Metal plating | Barrel plating | [
"Chemistry"
] | 223 | [
"Metallurgical processes",
"Coatings",
"Metal plating"
] |
60,777,557 | https://en.wikipedia.org/wiki/Value%20tree%20analysis | Value tree analysis is a multi-criteria decision-making (MCDM) implement by which the decision-making attributes for each choice to come out with a preference for the decision makes are weighted. Usually, choices' attribute-specific values are aggregated into a complete method. Decision analysts (DAs) distinguished two types of utility. The preferences of value are made among alternatives when there is no uncertainty. Risk preferences solves the attitude of DM to risk taking under uncertainty. This learning package focuses on deterministic choices, namely value theory, and in particular a decision analysis tool called a value tree.
History
The concept of utility was first used by Daniel Bernoulli in the 1730s (published 1738) in his analysis of the St Petersburg paradox, a specific uncertain gamble. He observed that money alone was not an adequate measure of value: for an individual, the worth of money was a non-linear function. This insight led to the emergence of utility theory, a numerical measure of how much value alternative choices have. With the development of decision analysis, utility played an important role in explaining economic behavior. Utilitarian philosophers such as Bentham and Mill also used it as a tool for building ethical theory. Nevertheless, there was initially no way to measure an individual's utility function, and the theory had little practical importance. Over time, utility theory gradually acquired a solid theoretical foundation; game theory came to be used to explain the behavior of rational agents engaging with others in situations of conflict. In 1944 John von Neumann and Oskar Morgenstern's Theory of Games and Economic Behavior was published. Utility theory has since become one of the key tools that researchers and practitioners from statistics and operations research use to help decision makers facing difficult decisions. Decision analysts distinguish two types of preference: value preferences, which apply under certainty, and risk preferences, which describe the decision maker's attitude towards risk under uncertainty.
Process
The goal of the value tree analysis process is to offer a well-organized way to think and talk about alternatives and to support the subjective judgements that are critical for good decisions. The phases of the value tree analysis process are as follows:
Problem structuring:
defining the decision context
identifying the objectives
generating and identifying decision alternatives
creating a hierarchical model of the objectives
specifying the attributes
Preference elicitation
Recommended decision
Sensitivity analysis
These processes are usually laborious and iterative. For example, structuring the problem, collecting related information, and modeling the DM's preferences often require a lot of work, and the DM's perception of the problem and preferences over outcomes not previously considered may change and evolve during the process.
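The preference elicitation, recommendation and sensitivity analysis phases can be sketched in a few lines. All alternatives, attribute scores and weights below are hypothetical, invented only to show the mechanics of an additive value model:

```python
# Sketch of preference aggregation plus a one-way sensitivity analysis.
# All alternatives, attribute scores (0-1) and weights are hypothetical.
attributes = ["cost", "quality", "safety"]
weights = {"cost": 0.5, "quality": 0.3, "safety": 0.2}  # elicited from the DM
scores = {  # single-attribute value functions already applied
    "option_a": {"cost": 0.9, "quality": 0.4, "safety": 0.6},
    "option_b": {"cost": 0.5, "quality": 0.9, "safety": 0.8},
}

def overall_value(alt: str, w: dict) -> float:
    """Additive value model: V(a) = sum_i w_i * v_i(a)."""
    return sum(w[attr] * scores[alt][attr] for attr in attributes)

def recommend(w: dict) -> str:
    return max(scores, key=lambda alt: overall_value(alt, w))

base = recommend(weights)  # recommendation with the elicited weights

# One-way sensitivity: sweep the weight on "cost", renormalizing the
# remaining weights in a fixed 0.6 : 0.4 ratio, and note where the
# recommendation changes.
for w_cost in (0.2, 0.4, 0.6, 0.8):
    rest = 1.0 - w_cost
    w = {"cost": w_cost, "quality": rest * 0.6, "safety": rest * 0.4}
    print(w_cost, recommend(w))
```

With these made-up numbers the two alternatives score very close under the elicited weights, so the recommendation flips as the cost weight is swept, which is exactly the situation a sensitivity analysis is meant to expose.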
Methodology
The value tree was developed as an effective and essential technique for improving and refining goals and values in several respects. The tree gives a visual representation of problems that were previously available only in verbal form, and it unites separate aspects, thoughts and opinions into a single picture, which brings greater clarity, stimulates creative thinking, and supports constructive communication.
The steps below show how to create a value tree; a running example helps illustrate them:
Step 1: Initial pool
Begin with a free brainstorming of all the values, by which we mean everything related to the decision: the goals and criteria, the demands, and anything else relevant to decision making. Write each value down on a separate piece of paper.
(A) Begin the process with several things:
Essences in your decision
The things that matter
The thing that you are looking for
The thing you want
Your passions, intentions, joys, ambition
The things that bring you joy
The things that you are afraid of
(B) Once you have exhausted your thoughts in this very open phase, consider the following topics to help you come up with comprehensive values, interests, and concerns related to your decision:
Stakeholders
Consider who is affected by the decision and what their values might be. Stakeholders may be family, friends, neighbors, society, offspring or other species, but they can be anyone who might be affected by your decision, whether intentional or not.
Basic human needs:
Physiological value - for example, health and nutrition
Safety value - feel safe
Social values - be loved and respected
Self-realizing value - doing and becoming "fit"
Cognitive value - eager to satisfy curiosity, know, explain and understand
Aesthetic value - experience beauty
Intangible consequences. We are most inclined to ignore intangible consequences, such as:
If you make this choice, how would you feel about yourself?
How do others see you making this choice?
A lack of awareness of these intangible consequences can easily lead to decisions we regret. Moreover, when there is a disagreement between our intuition and a thorough analysis of the decision, we are usually unaware of the underlying intangible consequences.
The pros and cons of the options you have seen:
For each option you can think of, what are its best and worst aspects? These will be values.
Special consideration of costs and risks. We tend to start our planning by thinking about the positive goals we hope to achieve. Considering costs and risks requires extra effort, but considering them is the first step to avoiding them.
Future values
Consider future impacts as well as current impacts. People tend to ignore or discount future consequences.
Imagine your own future, perhaps on your deathbed, reviewing this decision. What is important to you?
Step 2: Clustering
When you run out of new ideas, cluster the existing ones: move the pieces of paper around until similar ideas are gathered together.
Step 3: Labeling
Label each group with a higher-level value that holds its elements together, to make each element clearer.
[Example]
As a simplified example, let us assume that some of the initial values we propose are self-actualization, family, safety, friends and health. Health, safety and self-actualization can be grouped together and labeled SELF, while family and friends can be grouped together and labeled OTHERS.
Step 4: Moving up the tree
See whether these groups can be combined into still larger groups.
[Example]
SELF and OTHERS group into OVERALL VALUE.
Step 5: Moving down the tree
Also see whether these groups can be divided into still smaller sub-groups.
[Example]
SELF-ACTUALIZATION could be divided into WORK and RECREATION.
Step 6: Moving across the tree
Another valid way to bring new ideas into the tree is to ask whether any additional items can be added at a given level (moving across the tree).
[Example]
In addition to FAMILY and FRIENDS, we could add SOCIETY.
The diagram on the right shows the final result of the (still simplified) example. Bold, italic indicates the basic values that were not originally written by us, but were thought of when we tried to fill in the tree.
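The finished example tree can be represented as a nested structure and scored bottom-up with weighted sums. The weights and leaf scores below are hypothetical, chosen only to illustrate the rollup:

```python
# Hypothetical weighted rollup of the example tree from the text.
# Leaves carry a score in [0, 1] for one alternative; internal nodes
# aggregate children as a weighted sum (weights at each level sum to 1).
tree = {
    "OVERALL VALUE": {
        "SELF": {
            "HEALTH": 0.8,
            "SAFETY": 0.9,
            "SELF-ACTUALIZATION": {"WORK": 0.6, "RECREATION": 0.7},
        },
        "OTHERS": {"FAMILY": 0.9, "FRIENDS": 0.8, "SOCIETY": 0.5},
    }
}
weights = {  # hypothetical elicited weights, one entry per internal node
    "OVERALL VALUE": {"SELF": 0.6, "OTHERS": 0.4},
    "SELF": {"HEALTH": 0.5, "SAFETY": 0.3, "SELF-ACTUALIZATION": 0.2},
    "SELF-ACTUALIZATION": {"WORK": 0.5, "RECREATION": 0.5},
    "OTHERS": {"FAMILY": 0.5, "FRIENDS": 0.3, "SOCIETY": 0.2},
}

def score(name, node):
    if not isinstance(node, dict):  # leaf: already a single-attribute value
        return node
    return sum(weights[name][child] * score(child, sub)
               for child, sub in node.items())

overall = score("OVERALL VALUE", tree["OVERALL VALUE"])
```

Scoring a second alternative only requires a second set of leaf values; the tree and weights are shared, which is what makes the comparison consistent.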
Tool
PRIME Decisions
PRIME Decisions is a decision-aiding tool which uses the PRIME method to analyze incomplete preference information. It also offers novel features that support an interactive decision process, including an elicitation tour. PRIME Decisions is seen as an essential catalyst for further applied work, since its practitioners benefit from the explicit recognition of incomplete information.
Web-Hipre
Web-HIPRE, a Java applet, provides support for multiple-criteria decision analysis and offers a common platform for individual and group decision making. Users can easily access the model and work on it simultaneously, at any time. It is also possible to define links to other websites, so that other sorts of information, such as geographic data or media files describing the criteria or alternatives, can be attached to the model, significantly improving the quality of the decision support.
Application
Some indicators obtained by process analysis are of great help in value tree analysis, especially in the value decomposition of internal operation indicators, where the driving indicators of a first-level process indicator are usually the secondary sub-process indicators. For instance, the new product launch cycle (from R&D project to production) is actually driven by two processes in the company: R&D and testing. A standardized R&D and testing process is a key success factor for improving the speed of innovation. To this end, process indicators such as the development cycle, test cycle and sample acceptance are the vital elements which drive the new product launch cycle indicator. Therefore, combining process analysis is of great significance for the decomposition of indicator value, especially for internal operational indicators. The main application areas include the following:
Application on business, production and services
Budget allocation
Allocating the engineering budget for products and projects annually is always a challenge. With value tree analysis, aspects such as strategic fit, which have no natural evaluation measure but may play a significant role in decision-making, can be included in the analysis. Furthermore, explicit modelling of the relevant facts is likely to improve communication, and a basis for justified decisions is provided.
Selection of R&D programs
Because the risk is high in many R&D programs, a good justification may be as essential as the decision itself. Value tree analysis offers a tool to support the reasoning behind the selection of an R&D programme and to model the facts affecting the decision.
Developing and deciding on marketing strategies
For instance, the analysis of new strategies for merchandising gasoline and other products through full-facility service stations.
Application on public policy problems
Analysis of responses to environmental risks
For instance, organization of negotiations between several parties in order to identify compromise regulations for acid rain and identify the objectives of the regulations.
Negotiation for oil and gas leases
Carry out an evaluation report of subcontractors and analyze the criteria which should be used.
Comparisons between alternative energy sources
For instance, organizing a debate about nuclear power, aiding the decision process, and studying value differences between the decision-makers.
Political decisions
Application on medicine
Deciding on the optimal usage and inventory of blood in a blood bank
Helping individuals to understand the risks of different treatments
In addition to decision-making problems, value tree analysis also serves other purposes:
Identifying and reformulating options
Definition of objectives
Providing a common language for communication
Quantification of subjective variables
For instance, a scale which measures the worth of military targets.
Development of value-relevant indices
Application on empirical pilot study variable selection
Because value tree analysis is inexpensive in both cost and computation, it is a good choice for time-sensitive variable selection in empirical pilot healthcare studies. Moreover, it offers a well-structured, strategic process for decision-making, so that pilot study and patient data constraints can be accounted for and value for study stakeholders can be maximized.
Application on Coaching
Value tree analysis helps creative and critical thinking and organizes thoughts in a logical way. Moreover, when a decision comes up, value tree analysis can be an effective way to think about one's core goals and values; afterwards, one can actively look for decision opportunities using the analysis already done.
Software
The software tools of value tree analysis are shown in the picture below:
References
Quality
Reliability engineering
Risk analysis methodologies
Safety engineering | Value tree analysis | [
"Engineering"
] | 2,298 | [
"Safety engineering",
"Systems engineering",
"Reliability engineering"
] |
60,777,609 | https://en.wikipedia.org/wiki/Li%C3%B1%C3%A1n%27s%20flame%20speed | In combustion, Liñán's flame speed provides the estimate of the upper limit for edge-flame propagation velocity, when the flame curvature is small. The formula is named after Amable Liñán. When the flame thickness is much smaller than the mixing-layer thickness through which the edge flame is propagating, a flame speed can be defined as the propagating speed of the flame front with respect to a region far ahead of the flame. For small flame curvatures (flame stretch), each point of the flame front propagates at a laminar planar premixed speed that depends on a local equivalence ratio just ahead of the flame. However, the flame front as a whole do not propagate at a speed since the mixture ahead of the flame front undergoes thermal expansion due to the heating by the flame front, that aids the flame front to propagate faster with respect to the region far ahead from the flame front. Liñán estimated the edge flame speed to be:
where and is the density of the fluid far upstream and far downstream of the flame front. Here is the stoichiometric value () of the planar speed. Due to the thermal expansion, streamlines diverges as it approaches the flame and a pressure builds just ahead of the flame.
The scaling law for the flame speed was verified experimentally In constant density approximation, this influence due to density variations disappear and the upper limit of the edge flame speed is given by the maximum value of .
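As a numerical illustration of this scaling (the flame speed and density ratio below are typical order-of-magnitude values for hydrocarbon-air flames, assumed for illustration rather than taken from the source):

```python
import math

# Illustrative numbers: a stoichiometric planar flame speed of 0.4 m/s
# and an unburnt-to-burnt density ratio of about 7 are typical orders
# of magnitude for hydrocarbon-air flames (assumed, not from the source).
def edge_flame_speed(s_l_st: float, rho_u: float, rho_b: float) -> float:
    """Linan's estimate: U_edge ~ sqrt(rho_u / rho_b) * S_L,st."""
    return math.sqrt(rho_u / rho_b) * s_l_st

u = edge_flame_speed(0.4, 1.2, 1.2 / 7)  # ~1.06 m/s, ~2.6x the planar speed
```

The square-root density factor is why the edge flame can outrun the fastest planar flame: thermal expansion pushes fresh mixture ahead of the front.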
References
Fluid dynamics
Combustion | Liñán's flame speed | [
"Chemistry",
"Engineering"
] | 306 | [
"Piping",
"Chemical engineering",
"Combustion",
"Fluid dynamics"
] |
60,778,719 | https://en.wikipedia.org/wiki/Sakan%20%28plasterwork%29 | Sakan (左官) refers to the plasterwork of Japan. Along with woodblock prints, ukiyo-e, Japanese pottery and porcelain, Sakan is a genre of traditional Japanese craft. It flourished during the Edo period (1603-1868) and continues to be practiced in the present day.
Recently, Japanese artists have been creating modern interpretations of sakan plasterwork as a form of art.
Traditionally, earth, lime, plant fibers, sands and aggregates are the most common constituents of Japanese plaster. It was often used for traditional buildings such as tea houses and storehouses. Plaster can survive both moist and extremely dry environments, making it an ideal material for Japan, where humidity levels vary greatly throughout the year. It can also withstand large earthquakes, up to level 7 on the Japanese seismic intensity scale.
References
Japanese art
Plastering
ja:左官 | Sakan (plasterwork) | [
"Chemistry",
"Engineering"
] | 170 | [
"Building engineering",
"Coatings",
"Plastering"
] |
60,781,915 | https://en.wikipedia.org/wiki/Aviation%20taxation%20and%20subsidies | Types of aviation taxation and subsidies, and implementations, are listed below. Taxation is one of several methods to mitigate the environmental impact of aviation.
Types of taxes
Airport improvement fee, paid by passengers to the airport or government
Air passenger taxes, paid by passengers to the government for environmental reasons; may vary by distance and may include domestic flights
Departure tax, paid by passengers leaving the country to the government (sometimes also applies outside of aviation)
Jet fuel tax, paid by airline companies to the government for the jet fuel (kerosene) they burn
Landing fee, paid by airline companies to the airports they land on
Solidarity tax on airplane tickets (Chirac Tax), paid by passengers to Unitaid, a global health initiative against HIV/AIDS, malaria and tuberculosis
Fuel taxes
According to the Amsterdam-based international environmental organisation Friends of the Earth (2005), aviation pays no tax on fuel, and aviation's expansion is fuelled by this exemption. In the UK, aviation received £9 billion in tax-free benefits in 2003. Friends of the Earth argued that a fuel tax would give an incentive to improve the energy efficiency of operations, and would be a more effective response than emissions trading.
This is not the case in the United States, where FAA excise tax rate tables list aviation fuel taxes of 19.3 cents per gallon on piston aviation gasoline, 21.8 cents per gallon on Jet A for general aviation, and 4.3 cents per gallon on commercial jet fuel. This has been an effective funding source for air traffic control without the complications and expenses of collecting on a per-flight basis. https://www.faa.gov/sites/faa.gov/files/2022-07/ATTF_Excise_Tax_Rate_Structure_CY_2022.pdf
European Union
Historically, EU aviation fuel was tax free and attracted no VAT. Commercial aviation fuel taxation in the EU was banned in 2003 by the Energy Taxation Directive (2003/96/EC), except with bilateral agreements between member states. However, as of 2018, no such agreements exist.
In November 2019, the Finance Ministers of Belgium, Bulgaria, Denmark, France, Germany, Italy, Luxembourg, the Netherlands and Sweden presented a joint statement calling on the European Commission, more specifically European Commissioner for Climate Action Frans Timmermans, to introduce EU-wide taxes on aviation so as to charge the entire aviation industry more for its emissions and pollution, and put all member states on level pegging. Citing the fact that aviation causes around 2.5% of global emissions, the Ministers proposed both uniform air passenger taxes as well as kerosene taxes (both excise duties and VAT). In a September–October 2019 poll conducted by the European Investment Bank (EIB) amongst 28,088 EU citizens from the then 28 member states, 72% said they would support a carbon tax on flights.
As part of its Fit for 55 package proposed in July 2021 by the European Commission, the European Union is planning to gradually introduce a kerosene tax (for both private and commercial flights) between 2023 and 2033 at EUR 10.75 per gigajoule (GJ), while sustainable fuels such as e-kerosene will benefit from a minimum rate of zero.
Austria
Austria introduced a Flight Tax Act (Flugabgabegesetz, FlugAbgG) in April 2011 similar to the German aviation taxation system. In 2013, the fees for short and medium-haul flights were reduced from 8 euros to 7 euros and from 20 euros to 15 euros respectively, and halved again in 2018. According to §5.1 of the Flight Tax Act, the flight tax depends on the distance to the destination airfield per passenger:
for short distances 3.50 euros
for medium distances 7.50 euros
for long distances 17.50 euros
During the COVID-19 pandemic, airlines in Europe had to temporarily cease most operations and had requested a total of 12.8 billion euros in government support by mid-April 2020, according to a Transport & Environment, Greenpeace and Carbon Market Watch report. At the time, Austria was the only country which insisted (through Minister of Climate Action, Environment, Energy, Mobility, Innovation and Technology Leonore Gewessler of the Greens) that a government bailout of its flag carrier (Austrian Airlines, with about 7,000 employees) should be linked to climate targets. On 8 June 2020, the Austrian conservative–green coalition government concluded a support deal for Austrian Airlines (a subsidiary of Lufthansa) for 150 million euros in taxpayer grants, and 300 million euros in banking loans that are to be paid back. This was significantly less than expected (Austrian Airlines had applied for 767 million euros), and came under the following conditions:
All airline tickets got an immediate uniform 12 euro environmental tax. This altered an earlier plan to introduce a flight ticket tax system of 3.50 to 17 euros (depending on the route) in 2021.
All airline tickets cost at least 40 euros in total. This ended the practice of selling tickets as cheap as 10 euros (for example by Ryanair's Austrian subsidiary Lauda, which offered 100,000 tickets for 9.99 euros in 2019; other such cheap airlines in Austria include EasyJet and Wizz Air) in order to discourage flying in general.
For flights less than 350 kilometres away, a special tax of 30 euros must be paid to discourage short flights (an unprecedented environmental measure in the EU).
Airline connections that covered distances that could be travelled within three hours by train were prohibited.
Austrian Airlines had to reduce its emissions by 50% by 2030 (or by 33% by 2030 compared to 2005).
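Taken together, the ticket-pricing conditions above amount to a small piece of arithmetic. The sketch below is illustrative only: the function name is invented, and it assumes the 30-euro short-haul tax counts toward the 40-euro minimum price.

```python
def austrian_ticket_price(base_fare: float, distance_km: float) -> float:
    """Minimum sale price of a ticket under the 2020 bailout conditions.

    Illustrative sketch: adds the flat 12-euro environmental tax,
    adds the 30-euro anti-short-haul tax below 350 km, and enforces
    the 40-euro minimum total ticket price.
    """
    price = base_fare + 12.0   # uniform environmental tax on every ticket
    if distance_km < 350:
        price += 30.0          # special tax to discourage short flights
    return max(price, 40.0)    # tickets may not be sold cheaper than 40 euros

# A formerly 9.99-euro fare can no longer be sold below 40 euros:
print(austrian_ticket_price(9.99, 600))   # → 40.0
print(austrian_ticket_price(50.0, 200))   # → 92.0
```

The 40-euro floor, rather than the flat tax, is what ends the 9.99-euro ticket practice described above.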
Belgium
In 2022, the Belgian government introduced an aviation tax on all airline flights: 10 euros for destinations within 500 kilometres, 2 euros for destinations more than 500 kilometres away but within the EEA, and 4 euros for destinations outside the EEA. The tax is higher for short flights because a shift to trains is more realistic at those distances.
France
On 9 July 2019, French transport minister Élisabeth Borne announced that France would introduce an eco-tax on passengers in 2020. Flights within the EU, including domestic flights, would be taxed 1.5 euros for economy class and 9 euros for business class, while flights out of the EU would be charged 3 euros for economy class and 18 euros for business class. Different rules apply to Corsica and other overseas departments and territories of France. The eco-tax was projected to produce 180 million euros in revenue annually.
On 9 June 2020, during the COVID-19 pandemic in France, economy and finance minister Bruno Le Maire announced a financial support programme for the aerospace sector for 15 billion euros. It included the earlier announced bailout of its flag carrier Air France–KLM at 7 billion euros (comprising a state loan of 3 billion and bank loans of 4 billion), with conditions to transform it into the 'most environmentally friendly airline on the planet'. There were several aims, including the protection of 300,000 direct and indirect jobs (100,000 of which were said to be at risk within 6 months), a gradual recovery of the 34 billion annual trade surplus that the French aviation industry produced, and the goal of developing carbon-neutral air travel by 2035 rather than 2050 (for which the civil aviation research council CORAC would receive €1.5 billion in support over three years).
Germany
Germany's air passenger tax is divided into three categories, with the following rates since 1 April 2020:
Category 1 – Europe, Russia, Turkey, Morocco and Algeria: 12.90 euros
Category 2 – Central Asia, the MENA region (excluding Morocco and Algeria, including Afghanistan and Pakistan), Sahel region: 32.67 euros
Category 3 – Other countries: 58.82 euros
In 2018 Germany applied 19% VAT on domestic airline tickets.
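The three-band schedule above is effectively a lookup table keyed by distance category. The following sketch is illustrative only (the dictionary and function names are invented; the rates are those listed for April 2020):

```python
# Illustrative lookup for Germany's air passenger tax bands
# (rates in euros, as of 1 April 2020).
AIR_PASSENGER_TAX_EUR = {
    1: 12.90,  # Europe, Russia, Turkey, Morocco and Algeria
    2: 32.67,  # Central Asia, MENA (excl. Morocco/Algeria), Sahel
    3: 58.82,  # all other countries
}

def booking_tax(category: int, passengers: int = 1) -> float:
    """Total air passenger tax for a booking in one distance category."""
    return round(AIR_PASSENGER_TAX_EUR[category] * passengers, 2)

print(booking_tax(1))      # → 12.9
print(booking_tax(3, 2))   # → 117.64
```

The same band-lookup pattern applies to the Swedish and Austrian schedules described elsewhere in this article, with different rates and category boundaries.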
Ireland
The Republic of Ireland had an Air Travel Tax from 2009 until April 2014.
Netherlands
On 1 July 2008, the Fourth Balkenende cabinet introduced an aviation tax (vliegbelasting or vliegtaks) of 11.25 euros per ticket for flights within Europe and 45 euros for destinations outside Europe. Due to vehement opposition by the aviation industry and travel agencies, the tax was abolished a year later on 1 July 2009, leading to heavy criticism from academia and environmental organisations. Amsterdam Airport Schiphol claimed it lost 900,000 passengers to airports abroad due to the tax, but Vrije Universiteit Amsterdam economist Eric Bartelsman pointed out that the Great Recession reduced air travel across the world, not just in the Netherlands. Tilburg University economist Lans Bovenberg was more positive, arguing aviation taxes should be implemented simultaneously across the entire EU to be effective, and that taxing jet fuel would be a more effective measure than taxing passengers.
In 2017, the Third Rutte cabinet coalition agreement planned to introduce a new aviation tax of 7 euros on every ticket, regardless of destination, on 1 January 2021. Cargo aircraft would pay a tax based on their weight and noise pollution class: up to 3.85 euros per tonne of cargo, with a lower rate for quieter aircraft. The tax, projected to produce an annual revenue of 200 million euros, has four goals: reducing CO2 emissions, reducing other emissions such as particulates, reducing noise pollution, and preventing a jet fuel tax. The new plan was considered more likely to succeed because the tax is much lower than in 2008, and most neighbouring countries except Belgium had introduced aviation taxes in the preceding years, making tax circumvention by passengers unlikely. The government was aiming to eventually establish a uniform EU-wide aviation tax.
Since 1 January 2021, an air passenger tax of €7.845 per person per flight applies in the Netherlands. The tax does not apply to transfer passengers or to children under the age of 2. In the Tax Plan 2023, the cabinet proposed to increase the air passenger tax by €18.48 from 2023, subject to parliamentary approval. On 1 January 2023, the new tax rate of €26.43 came into force.
Sweden
Sweden introduced a passenger tax for commercial flights of more than ten passengers in April 2018. As of 2020, Swedish aviation taxes for passengers were divided into three categories, depending on the destination:
Category 1 – Europe: 62 Swedish kronor
Category 2 – Russia, Central Asia, the MENA region including Afghanistan and Pakistan, Canada and the United States: 260 Swedish kronor
Category 3 – Other countries: 416 Swedish kronor
In November 2019, the Swedish government of Stefan Löfven (Löfven I Cabinet) proposed to collect an annual aviation tax of around 78 million euros. Under the proposal's conditions, the Swedish aviation industry would remain fully exempt from the Swedish energy tax, carbon dioxide tax and sulphur tax that other companies pay.
Other countries
Australia
Norway
Norway introduced airline passenger fees on 1 June 2016. From 1 June 2016 to 31 March 2020, the fee was 80 Norwegian kroner per passenger. On 1 April 2020, the fee was changed to 75 kroner for passengers with a final destination in Europe and 200 kroner for passengers with a final destination outside Europe. In addition, VAT was added to the tax.
Switzerland
In June 2020, the Swiss Federal Assembly approved a proposal (passed by the Council of States in 2019) to introduce an environmental levy of 30 to 120 Swiss francs per airline ticket 'depending on distance and [travel] class'; nearly half of the proceeds are to flow into a climate fund for emissions-reduction initiatives. The Swiss oil lobby started a campaign and a referendum against the new laws (which included the aviation tax), and in June 2021, 52% of the Swiss voters rejected them, meaning there will not be aviation taxes in the very near future. Parliament continued to reject several parliamentary initiatives on the subject thereafter. Public support for some kind of airline ticket tax remained high, with 72% of respondents in a June 2022 GfS Zurich survey saying they were in favour: 42% of respondents backed a levy of CHF30 ($30.40) for short-haul flights and CHF120 for long-haul flights, while 50% agreed with even higher charges.
United Kingdom
United States
In the United States, most states tax avgas and jet fuel.
Subsidies
Some governments subsidize airports and passenger customs costs within airports.
The EU Commission in 2014 ruled that subsidies Ryanair received from a regional authority a decade ago had to be repaid (€525,000).
In June 2020, Flemish Economy Minister Hilde Crevits decided that trainings for airplane and helicopter pilots would no longer be subsidised in Flanders from 1 July 2020 onwards.
References
Transport economics
Tax
Government finances
Environmental economics | Aviation taxation and subsidies | [
"Environmental_science"
] | 2,616 | [
"Environmental economics",
"Environmental social science"
] |
68,620,375 | https://en.wikipedia.org/wiki/Allosexuality | Allosexuality is the ability to experience sexual attraction. The term is often used to describe persons who are not asexual, or the lack of identification with asexuality. Someone who experiences allosexuality is allosexual, sometimes shortened to allo. Other terms to describe non-asexual people include zedsexual, or simply sexual.
The term does not indicate the target of sexual attraction, meaning allosexual could describe someone who is heterosexual, gay, bisexual, or pansexual, for example. It also does not indicate how often an individual experiences sexual attraction or participates in sex or sexual encounters.
Terminology
The prefix allo- comes from the Greek word állos, meaning "other", "different", or "atypical". It was attached to the suffix -sexual to create a term meaning "a person who experiences sexual attraction towards others". The structure parallels other sexuality terms such as homosexual, heterosexual, bisexual, pansexual, and asexual.
History
In a medicalized context, allosexual has been used in contrast to autosexual to describe sexual attraction towards others or sexual behavior between multiple people. The term was coined by the asexual community as a way to name and discuss the experiences of non-asexual people. It is used to normalize asexuality and provide a term that can be used in conjunction with ace terminology. Allosexuality makes asexuality one sexuality among others, rather than being a deviation from what is simply 'normal'.
Society and culture
Asexuals are estimated to make up 1% or less of the total population and about 1.7% of the LGBT population. Since the majority of people would be classified as allosexual, it is viewed by some as the natural way of being and asexuality as a deviation from this norm. Physical intimacy is considered an essential part of romantic relationships among allosexuals, which can complicate relationships between asexual and allosexual individuals. Allonormativity, or the concept that all humans experience sexual attraction or desire a sexual relationship, can lead to the isolation and marginalization of asexual individuals.
See also
Allonormativity
Analloeroticism
References
External links
21st-century neologisms
Asexuality
Sexuality and society | Allosexuality | [
"Biology"
] | 495 | [
"Behavior",
"Sexuality stubs",
"Sexuality"
] |
68,620,465 | https://en.wikipedia.org/wiki/J%C3%BCrgen%20Czarske | Jürgen W. Czarske is a German electrical engineer and a measurement system technician. He is the director of the TU Dresden Biomedical Computational Laser Systems competence center and a co-opted professor of physics.
Career
Jürgen Czarske grew up on a 20-hectare farm in the small village of Garbek in Schleswig-Holstein, Germany's northernmost state. After graduating from high school with distinction, he studied electrical engineering and physics at the University of Hanover until 1991, and carried out several research internships at Siemens AG in Munich. He received his doctorate summa cum laude in 1995 at the Institute for Metrology in Mechanical Engineering at the University of Hanover with a topic from laser measurement technology. From 1995 to 2004 he worked at the Laser Zentrum Hannover, most recently as head of the measurement technology department. From 1996 to 2001 he worked temporarily at research institutions in Japan and the United States. After completing his habilitation in the field of measurement technology in the mechanical engineering department of the University of Hanover in 2003, he became a C4 professor at the Faculty of Electrical Engineering and Information Technology at TU Dresden in 2004. Czarske has been director of the Institute of Circuits and Systems since 2016 and of the Center for Biomedical Computational Laser Systems (BIOLAS) since 2019. He has been an elected member of the Scientific Society for Laser Technology (WLT e.V.), Erlangen, since 2017, and advisor of the OPTICA-SPIE Student Chapter of TU Dresden (dresdenoptik.de) since 2022. In 2022 he was selected as an outstanding editor for Light: Advanced Manufacturing (LAM) of Nature Publishing, China. Since 2023 he has been a member of the editorial board of Light: Science and Applications.
Research
Czarske is mainly concerned with system technology, whereby ultrasound and laser waves are used. The areas of application of the implemented systems envisaged by Czarske are biomedicine (health), process and production engineering (energy and environment), as well as information system technology (communication). For quality assurance in production, he examines optical in-situ form measurements. Ultrasound-based systems are used for flow measurements in order to investigate crystallization processes. Adaptive optics and wavefront control are pursued by Czarske for multi-dimensional microscopy and for light control in biological tissue. This work is important for optogenetics and medical nanorobots.
Czarske invented the laser Doppler velocity profile sensor, which was successfully transferred to the market in cooperation with the company Intelligent Laser Applications ILA R&D GmbH in Jülich.
The profile sensor beats the Heisenberg limit of conventional laser Doppler systems. To take advantage of its highly resolved measurements in both velocity and position, the sensor has been translated into several application areas such as flow metrology, production technology and process engineering.
Honors and awards
Measurement Technology Award of the (AHMT). The award ceremony took place in September 1996 at the Technical University of Munich.
Berthold Leibinger Innovationspreis (3rd Award), Ditzingen, 7/2008
Senior Member of the Institute of Electrical and Electronics Engineers, 5/2015
OSA Fellow, 10/2015
Fellow of SPIE, 12/2015
Fellow European Optical Society, 8/2016
Full member of the Saxon Academy of Sciences and Humanities (since March 2018)
Joseph Fraunhofer Award/Robert M. Burley Prize of The Optical Society, 9/2019
Laser Instrumentation Award 2020 of IEEE Photonics Society
Fellow of the Institution of Engineering and Technology, 7/2021
Fellow Award (FInstP) of Institute of Physics (IOP), London, UK, 7/2022
SPIE Community Champion 2020, highlighted by SPIE Director Nelufar Mohajeri, WA, USA, 5/2021
SPIE Community Champion 2019 for outstanding volunteerism, awarded by SPIE President John Greivenkamp, Arizona/USA, 1/2020
2022 Chandra S Vikram Award in Optical Metrology of SPIE (The international Society for Optics and Photonics), awarded in San Diego, California, August 2022
References
External links
Fellows of SPIE
Academic staff of TU Dresden
German electrical engineers
University of Hanover alumni
21st-century German engineers
20th-century German engineers
Fellows of Optica (society)
Fellows of the Institution of Engineering and Technology
Senior members of the IEEE
Living people
Year of birth missing (living people) | Jürgen Czarske | [
"Engineering"
] | 910 | [
"Institution of Engineering and Technology",
"Fellows of the Institution of Engineering and Technology"
] |
68,622,078 | https://en.wikipedia.org/wiki/HD%2063513 | HD 63513 (HR 3036) is a solitary star located in the southern circumpolar constellation Volans. It has an apparent magnitude of 6.38, placing it near the max naked eye visibility. The star is situated at a distance of 634 light years but is receding with a heliocentric radial velocity of .
This object is a giant star with characteristics intermediate between spectral types G6 and G8. At present it has 3.14 times the mass of the Sun but has expanded to almost 13 times the Sun's radius. It shines at 102 solar luminosities from its enlarged photosphere at an effective temperature of 5,116 K, which gives it a yellow glow. HD 63513 has an iron abundance 102% that of the Sun, placing it at solar metallicity, and it spins modestly with a projected rotational velocity of .
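The quoted radius, temperature and luminosity can be cross-checked with the Stefan–Boltzmann relation L/L⊙ = (R/R⊙)²(T/T⊙)⁴. The short sketch below is a consistency check only; the solar effective temperature of 5,772 K is an assumed IAU nominal value, not a figure from this article.

```python
T_SUN = 5772.0  # K, IAU nominal solar effective temperature (assumed)

def relative_luminosity(radius_rsun: float, teff_k: float) -> float:
    """L/L_sun from the Stefan-Boltzmann law: (R/R_sun)^2 * (T/T_sun)^4."""
    return radius_rsun**2 * (teff_k / T_SUN) ** 4

# 13 solar radii at 5,116 K gives roughly the quoted ~102 solar luminosities:
print(round(relative_luminosity(13.0, 5116.0)))  # → 104
```

The small discrepancy against the quoted 102 L⊙ is within the rounding of the published radius and temperature.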
References
Volans
G-type giants
063513
037773
Durchmusterung objects
3036
Volantis, 17 | HD 63513 | [
"Astronomy"
] | 208 | [
"Volans",
"Constellations"
] |
68,623,212 | https://en.wikipedia.org/wiki/Principal%20series%20%28spectroscopy%29 | In atomic emission spectroscopy, the principal series is a series of spectral lines caused when electrons move between p orbitals of an atom and the lowest available s orbital. These lines are usually found in the visible and ultraviolet portions of the electromagnetic spectrum. The principal series has given the letter p to the p atomic orbital and subshell.
The lines are absorption lines when the electron gains energy from an s subshell to a p subshell. When electrons descend in energy they produce an emission spectrum. The term principal came about because this series of lines is observed both in absorption and emission for alkali metal vapours. Other series of lines appear in the emission spectrum only and not in the absorption spectrum, and were named the sharp series and the diffuse series based on the appearance of the lines.
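For a hydrogen-like atom the wavenumbers of such a series follow the Rydberg formula; in alkali metals each level is additionally shifted by a quantum defect, which the minimal sketch below deliberately ignores, so it is only a first approximation of a real principal series.

```python
RYDBERG_CM = 109_737.0  # Rydberg constant in cm^-1 (infinite-mass value, approx.)

def series_wavenumber(n_lower: int, n_upper: int) -> float:
    """Wavenumber (cm^-1) of the transition n_upper -> n_lower."""
    return RYDBERG_CM * (1.0 / n_lower**2 - 1.0 / n_upper**2)

# Successive members of a series crowd together toward the series limit
# R / n_lower^2 as n_upper grows:
for n in (2, 3, 4):
    print(round(series_wavenumber(1, n)))
```

In a real alkali spectrum the integers n are replaced by effective quantum numbers n − δ, with different defects δ for the s and p terms; it is this difference that distinguishes the sharp, principal and diffuse series from one another.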
References
Atomic physics
Emission spectroscopy
Absorption spectroscopy | Principal series (spectroscopy) | [
"Physics",
"Chemistry",
"Astronomy"
] | 169 | [
"Spectroscopy stubs",
"Spectrum (physical sciences)",
"Emission spectroscopy",
"Quantum mechanics",
"Absorption spectroscopy",
"Astronomy stubs",
"Atomic physics",
" molecular",
"Atomic",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs",
" and optical physics"
] |
68,623,477 | https://en.wikipedia.org/wiki/Hericium%20cirrhatum | Hericium cirrhatum is a saprotrophic fungus, commonly known as the tiered tooth fungus or spine face. The species is edible and good eating when young. It has a texture not unlike tender meat or fish. The flesh is cream in colour with an attractive smell when young, but it develops a very unpleasant odour in older specimens.
Appearance
The fruit body is bracket-like but without a stem, and the spines usually hang in tiers like icicles. The upper surface is often rough, with sterile spines and scales present. DNA analysis places it in the order Russulales. Each tier can be 5 to 10 cm across and 2 to 3 cm thick, with spines a little over 1 cm long. It tends to occur for only a couple of years at any given site.
Hericium cirrhatum can be mistaken for Hydnum rufescens or Hydnum repandum; however, those species have a smooth cap. Hericium erinaceus is another Red Data List species, with a more obviously spherical fruiting body and much longer spines.
Etymology
The generic name 'Hericium', refers to the fertile spines found in this group and means 'pertaining to a hedgehog'. These spines also gave rise to the species name 'cirrhatum' that translates as 'having tendrils'.
Habitat & distribution
Hericium cirrhatum grows on dead standing hardwood trees, fallen wood or tree stumps of species such as beech (Fagus sylvatica) in old established deciduous woodlands. It has also been recorded on sycamore (Acer pseudoplatanus), birch (Betula spp.), ash (Fraxinus spp.), oak (Quercus robur) and elm (Ulmus spp.), and is found from July to November in Britain. It is vulnerable but was removed from the Red Data List in 2006. As a rare species it is not illegal to pick the fruiting bodies; however, collectors should consider the quantity taken in order to help preserve it.
The NBN Database lists only 176 records in Britain, of which only 11 are confirmed; none are shown in Scotland, although the photographed specimens were found in Kilmaurs, East Ayrshire. It is nowhere common, but records show it to be present in southern England, particularly in the New Forest, and in some parts of central and southern mainland Europe.
References
External links
Video with narration on the Tiered Tooth Fungus
Edible fungi
Fungi described in 1794
Russulales
Fungi of Europe
Fungus species | Hericium cirrhatum | [
"Biology"
] | 514 | [
"Fungi",
"Fungus species"
] |
68,623,746 | https://en.wikipedia.org/wiki/Susanne%20Teschl | Susanne Teschl (née Timischl, born 1971) is an Austrian biomathematician and professor of mathematics at the University of Applied Sciences Technikum Wien in Vienna. She is known for her research on the mathematical modeling of breath analysis.
Education and career
Teschl earned a diploma in mathematical physics at the University of Graz in 1995, and completed her Ph.D. there in 1998. Her dissertation, A Global Model for the Cardiovascular and Respiratory System, was supervised by Franz Kappel.
After working for the Austrian Science Fund, she joined the University of Applied Sciences Technikum Wien in 2001, and headed the Department of Applied Mathematics and Natural Sciences there from 2007 to 2010.
Personal life
Teschl is the daughter of Wolfgang Timischl, an Austrian mathematics teacher and textbook author. Her husband, Gerald Teschl, is a mathematical physicist at the University of Vienna.
References
External links
1971 births
Living people
Applied mathematicians
University of Graz alumni
20th-century Austrian mathematicians
20th-century women mathematicians
21st-century Austrian mathematicians
21st-century women mathematicians | Susanne Teschl | [
"Mathematics"
] | 217 | [
"Applied mathematics",
"Applied mathematicians"
] |
68,624,407 | https://en.wikipedia.org/wiki/CAR-302%2C282 | CAR-302,282 (302282, NSC-263548, α-(3-Methylbut-1-yn-3-enyl)mandelic acid 1-methyl-4-piperidyl ester) is an anticholinergic deliriant drug, invented under contract to Edgewood Arsenal in the 1960s. It is a potent incapacitating agent with an ED50 of 1.2μg/kg and a high central to peripheral effects ratio, and a relatively short duration of action compared to other similar drugs of around 6-10 hours. Despite its favorable properties it was relatively little researched compared to more high profile compounds from the series such as EA-3167 and EA-3580.
See also
CAR-226,086
CAR-301,060
CAR-302,196
References
Deliriants
Muscarinic antagonists
Incapacitating agents
Piperidines
Carboxylate esters
Alkyne derivatives
Tertiary alcohols | CAR-302,282 | [
"Chemistry"
] | 203 | [
"Incapacitating agents",
"Chemical weapons"
] |
68,624,888 | https://en.wikipedia.org/wiki/Bengt%20Aurivillius | Bengt Aurivillius (4 December, 1918 in Linköping – 2 May, 1994 in St. Peter's Parish, Malmöhus County) was a Swedish chemist known for his research in metal and mixed oxides.
Education and career
Aurivillius received his basic scientific education at Stockholm University, where he graduated in 1937 and earned a fil. lic. in 1943. By 1949, he had made important discoveries about mixed-metal oxides that gained prominence in chemistry. He completed his dissertation, "X-ray Examinations of Bismuth Oxifluoride and Mixed Oxides with Trivalent Bismuth", at Stockholm University in 1951. Aurivillius joined the Swedish National Defence Research Institute in 1952, where he worked first as a research engineer and later as a senior researcher. By 1960, Aurivillius was a docent of physical chemistry and acting senior lecturer at Stockholm University. In 1965, he was appointed professor of inorganic chemistry at Lund University, a professorship he held until 1983. During the sixties, he worked in the field of crystallography alongside his wife, Karin Aurivillius.
Scientific research
Aurivillius is known for his studies of bismuth compounds, including bismuth sesquioxide (Bi2O3) and bismuth layer-structured ferroelectrics based on the perovskite oxide structure, later named Aurivillius phases after him. He characterized the ferroelectric properties of these materials, which have become a family of candidate materials for lead-free ceramics.
Personal life
Bengt Aurivillius is a member of the Aurivillius family, his father was the entomologist Christopher Aurivillius. His wife was crystallographer Karin Aurivillius.
See also
Aurivillius phases
References
Academic staff of Stockholm University
20th-century Swedish chemists
1918 births
1994 deaths
Stockholm University alumni
Crystallographers
Inorganic chemists
Academic staff of Lund University
Solid state chemists | Bengt Aurivillius | [
"Chemistry",
"Materials_science"
] | 417 | [
"Crystallographers",
"Crystallography",
"Inorganic chemists",
"Solid state chemists"
] |
68,626,306 | https://en.wikipedia.org/wiki/Computing%20for%20All | The Computing for All plan (Plan Informatique pour Tous – IPT) was a French government plan to introduce computers to the country's 11 million pupils. A second goal was to support national industry. It followed several introductory computer science programs in schools since 1971. The IPT plan was presented to the press on January 25, 1985 by Laurent Fabius, Prime Minister at the time. It aimed to set up, from the start of that school year, more than 120,000 machines in 50,000 schools and to train 110,000 teachers. Its estimated cost was 1.8 billion francs, including 1.5 billion for equipment. The plan was abandoned in 1989.
Description
The selection of industry partners was entrusted to Gilbert Trigano, co-founder of Club Méditerranée, connected with French companies such as Exelvision, Léanord, SMT Goupil, Thomson, Bull, LogAbax, etc. This choice was political because its initiator, Jean-Jacques Servan-Schreiber, had indicated his preference for Macintosh, which would be specially modified for the plan. In return, Apple would install a manufacturing unit in France rather than in Ireland. The agreement negotiated at the highest level with Apple included a complete transfer of technology for an assembly plant with the highest global standards in terms of productivity. But instead Thomson, a nationalized company in difficulty, was chosen. The choice was made without a call for tenders.
The Computing for All plan popularized the Nanoréseau: a RS422 based computer network of modest size (up to 32 workstations, at 500 kbit/s) which included nano-machines (Thomson MO5, Thomson TO7/70 or Thomson MO5NR) and a PC compatible server (most often a Bull Micral 30, but Goupil 3, Léanord Sil'z 16, LogAbax Persona 1600 and CSEE 150 were also used). The PC was equipped with two 5¼ inch floppy disk drives, one used for the operating system (MS-DOS 2.11), the other with data for the Thomsons. The server also gave access to a shared printer.
A later version (NR33) allowed the use of a hard disk by installing the whole system; this allowed a much faster start. All the machines could be controlled remotely (the server in particular thanks to the NR-DOS system) and it was possible to recover a copy of any portion of their memory remotely by an operation called "station looting" (command CLONE in BASIC).
Implementation
The plan was entirely based on the Nanoréseau. Designed with the first 16-bit Bull Micral PCs as the network head, the Nanoréseau was a computing and educational success. Unfortunately, the choice of Thomson's 8-bit MO5 terminals was problematic: intended to develop the French IT sector around the LSE language, Minitel and the light pen, these solutions never became mainstream.
The plan gave many students and their teachers their first access to computers, a first approach to programming (in BASIC or Logo), and the use of a computer with a light pen (the mouse was uncommon at the time). Yet teacher training lasted only 50 hours, and the focus was on programming rather than on the use of software packages.
A few months after the plan was launched, only 10% of teachers used the computer in the classroom. The plan was considered a failure by the general inspectorate.
Thomson's abandonment of computer production in 1989 led to the end of the plan. Institutions wishing to continue teaching IT were faced with the obsolescence of the equipment, and following the start of decentralization in France, modernization costs had to be borne by local authorities (totalling around 6 to 8 billion euros).
References
External links
« L'informatique pour tous », Bulletin de l'EPI, EPI, 37, mars 1985, 23–30
Jean-Pierre Archambault, « 1985, vingt ans après : Une histoire de l'introduction des TIC dans le système éducatif français », Médialog, 54, juin 2005, 42-45
Computer companies of France
Defunct computer hardware companies
Defunct computer systems companies
Thomson computers
History of computing in France
Computer science education in France
Computing for All | Computing for All | [
"Technology"
] | 878 | [
"History of computing",
"History of computing in France"
] |
68,628,027 | https://en.wikipedia.org/wiki/Ensifer%20numidicus | Ensifer numidicus is a nitrogen fixing symbiont of Fabaceae. gram-negative, aerobic, non-spore forming, rod-shaped bacterium of the family Rhizobiaceae. First described in 2010; more biovars have since been isolated and described with ORS 1407 considered the representative organism. Most examples have been found in arid and infra-arid regions of Tunisia.
Host plants
Biovars have been shown to induce nodule formation in a wide variety of symbiosis-competent plant species, including Medicago sativa (cultivated alfalfa), Lotus creticus, Syrian mesquite (Prosopis farcta), lentils (Lens culinaris Medikus ssp.), chickpea (Cicer arietinum) and Argyrolobium uniflorum.
Associated Biovars
Argyrolobium uniflorum: ORS 1407
Cultivated alfalfa (Medicago sativa): ORS 1407
Lotus creticus: PT26, ORS 1410
Cultivated lentils (Lens culinaris): ORS 1444
Chickpea (Cicer arietinum): LBi2
Syrian mesquite (Prosopis farcta): PN14
Known relationships between cultivars
This phylogeny is based on a constrained analysis of the 16S ribosomal RNA gene.
Genome
16S rRNA analysis has found Ensifer numidicus to be closely related to Ensifer medicae and Ensifer garamanticus. Analogous genes between closely related species suggest high levels of horizontal gene transfer. Laboratory inoculation has shown that Ensifer numidicus engages in indeterminate nodulation with host plants in at least some circumstances.
Growth conditions
E. numidicus has been found to grow on yeast–mannitol medium at 28 °C, with an upper limit of 40 °C. Laboratory-cultivated strains have been found to metabolize at least 13 substrates, including dulcitol, D-lyxose, 1-O-methyl α-D-glucopyranoside, 3-O-methyl-D-glucopyranose, D-gluconate, L-histidine, succinate, fumarate, ethanolamine, DL-β-hydroxybutyrate, L-aspartate, L-alanine, and propionate. The species is sensitive to salt concentrations greater than 4%. Because of its similarities to other Ensifer species, it cannot be identified by growth conditions alone and must be differentiated by genetic means.
References
Rhizobiaceae
Symbiosis | Ensifer numidicus | [
"Biology"
] | 559 | [
"Biological interactions",
"Behavior",
"Symbiosis"
] |
68,628,242 | https://en.wikipedia.org/wiki/1%2C2-Dichloro-2-nitrosopropane | 1,2-Dichloro-2-nitrosopropane is a chlorinated nitrosoalkane. It is a deep blue liquid with powerful lachrymatory effects.
See also
Chloropicrin
Trifluoronitrosomethane
Trichloronitrosomethane
References
Nitroso compounds
Organochlorides
Lachrymatory agents
Pulmonary agents | 1,2-Dichloro-2-nitrosopropane | [
"Chemistry"
] | 85 | [
"Lachrymatory agents",
"Pulmonary agents",
"Chemical weapons"
] |
53,055,761 | https://en.wikipedia.org/wiki/Buddy%20%28software%29 | Buddy (also known as Buddy.Works) is a web-based and self-hosted continuous integration and delivery software for Git developers that can be used to build, test, and deploy web sites and applications with code from GitHub, Bitbucket, and GitLab. It employs Docker containers with pre-installed languages and frameworks for builds, alongside DevOps, monitoring and notification actions.
History
Buddy launched as a downloadable VM in May 2015 under the name Meat!. The service was initially free but employed a proprietary license, causing dissatisfaction among web developers. Meat! was rebranded to Buddy in November 2015 and released as a cloud-only service. The on-premises version, nicknamed Buddy GO, was released in September 2016. Switching from a VM to Docker allowed installation on any Linux-based server, including Amazon EC2, DigitalOcean, and Microsoft Azure. Shortly after, the company launched Guides, a dedicated website section with use cases and workflow automation strategies, later republished on Medium, a popular blogging platform. On September 21, 2016, the service was featured on Product Hunt.
Configuration
Configuration is performed by arranging predefined actions into sequences called pipelines. Pipelines can be triggered automatically on push to a branch, manually, or on a recurring schedule. Actions include Docker-based builds, deployment to FTP/SFTP and IaaS services, delivery to version control, SSH scripts, website monitoring, and conditional notifications. Unlike other CI tools such as Jenkins or Travis CI, Buddy does not use YAML files to describe the process, although the company has stated that support for .yml files is in the works.
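The pipeline concept described above — an ordered sequence of actions fired by a trigger, where a failing action halts the run — can be sketched in a few lines. This is an illustrative model only; the Pipeline class and action callables here are hypothetical and are not Buddy's actual API.

```python
# Minimal sketch of a pipeline-of-actions model (hypothetical, not Buddy's API).
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Pipeline:
    trigger: str                                    # "push", "manual", or "recurrent"
    actions: List[Callable[[], bool]] = field(default_factory=list)

    def run(self) -> bool:
        """Execute actions in order; stop at the first failure."""
        for action in self.actions:
            if not action():
                return False
        return True


# Example: a build action followed by a deploy action.
log = []
build = lambda: log.append("build") or True    # append returns None, so "or True"
deploy = lambda: log.append("deploy") or True

pipe = Pipeline(trigger="push", actions=[build, deploy])
print(pipe.run())  # True
print(log)         # ['build', 'deploy']
```

A failing first action would short-circuit the run, mirroring how a broken build stops the rest of a delivery pipeline.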
Version control
Buddy features a native code hosting solution with Git commands such as git log, git show, git blame, and git diff reproduced in the GUI. Other features include a cloud editor with a blame tool and syntax highlighting, push permissions, merge requests, and visual branch management.
Available actions
Buddy supports over 30 pre-configured actions that can be modified with Linux commands:
Languages and frameworks
Angular CLI, Gulp, Grunt, Node.js, Maven, Gradle, PHP, Ruby, Python, Elixir, .NET/.NET Core, Go, Ember CLI
Static site generators
Jekyll, Hexo, Hugo, Middleman
Deployment
FTP, SFTP, FTPS, Heroku, Microsoft Azure, DigitalOcean, Modulus, Shopify, WebDAV, push to Git
Amazon Web Services
Amazon S3, Amazon EC2, AWS Elastic Beanstalk, AWS CodeDeploy, AWS Lambda
Google services
Google Cloud Storage, Google Compute Engine, Google App Engine
DevOps
SSH commands, HTTP requests, Heroku CLI, Docker image build and push to registry (Docker Hub, Amazon ECR, private registry)
Notifications
Email, SMS, Slack, Desktop notifications (Pushbullet, Pushover), Activity stream
Website monitoring
URL request, Ping, TCP port monitoring
References
External links
Official website
Bug and issue tracking software
Build automation
Compiling tools
Computing websites
Continuous integration
Cross-platform software
Java development tools
Internet properties established in 2015
Project hosting websites
Version control
Website monitoring software | Buddy (software) | [
"Technology",
"Engineering"
] | 658 | [
"Software engineering",
"Computing websites",
"Version control"
] |
53,057,447 | https://en.wikipedia.org/wiki/Center%20for%20Humane%20Technology | The Center for Humane Technology (CHT) is a nonprofit organization dedicated to radically reimagining the digital infrastructure. Its mission is to drive a comprehensive shift toward humane technology that supports the collective well-being, democracy and shared information environment. CHT has diagnosed the systemic harms of the attention economy, which it says include internet addiction, mental health issues, political extremism, political polarization, and misinformation. The Center for Humane Technology's work focuses on alerting people to technology's impacts on individuals, institutions, and society; identifying ways to address the consequences of technology; encouraging leaders to take action; and providing resources for those interested in humane technology.
Launched in 2018, the organization gained greater awareness after its involvement in the Netflix original documentary The Social Dilemma, which examined how social media's design and business model manipulates people's views, emotions, and behavior and causes addiction, mental health issues, harms to children, disinformation, polarization, and more. The film was watched by 38 million households in its first month, making it the second-most watched documentary on Netflix.
Background
In 2013, Tristan Harris, then a design ethicist at Google, released a viral presentation titled, "A Call to Minimize Distraction & Respect Users' Attention", a warning about the enormous power tech platforms have over users' attention spans. Harris urged companies to take this responsibility seriously.
A year later at TEDx Brussels, Harris introduced "Time Well Spent", a concept co-created with James Williams and Joe Edelman arguing that technology should be designed in line with users' basic human needs and values, rather than maximizing their time on their devices.
In December 2015, Tristan Harris left his position at Google to focus on alerting the public to the consequences of Silicon Valley's race for attention and culture of growth at all costs.
In 2017, Harris was interviewed for the 60 Minutes episode "Brain Hacking" to discuss how social media companies hijack biology. "Every time I check my phone, I'm playing the slot machine to see, 'What did I get?' This is one way to hijack people's minds and create a habit, to form a habit", Harris said. Later that year, Harris shared his expanded thoughts on the problems with social media platforms and how a handful of companies control billions of minds at TED 2017.
Building on this momentum and recognizing the need for an organization in this space, Tristan Harris, Aza Raskin, and Randima (Randy) Fernando founded the Center for Humane Technology in 2018 to educate the public, advise legislators, train technologists, and more.
Activities
The organization encourages designers and companies to respect users' time and to create products which have, as an end goal, something other than maximizing use of products to sell advertising. There are multiple ways that technology companies try to maximize the use of their products: by using an intermittent variable reward system, causing people to fear missing something important, increasing the desire for social approval, strengthening the need to reciprocate others' gestures, and interrupting individuals' daily activities to alert them of a notification. Harris claims that technology parallels slot machines, in that both use intermittent variable rewards to increase addiction. According to Harris, companies have a responsibility to reduce this effect, through techniques such as increasing the predictability of their designs and eliminating the intermittent variable reward system altogether.
CHT utilizes various mainstream media campaigns and resources, such as their podcast Your Undivided Attention and the documentary The Social Dilemma. The organization also aims to influence tech industry culture and practices through training and educational resources, working groups, and advising executives. In 2022, CHT launched the "Foundations of Humane Technology" course directed at supporting technologists and product leaders who are seeking to build more humane technology. As of June 2022, the course had 10,000 participants globally.
Additionally, CHT briefs policymakers to support the creation of the policy architecture that protects society and rewards humane technologies. Notably, Harris has testified in front of the U.S. Congress in regards to the risk of online deception and the manipulative tactics employed by social media platforms.
Impact
In a 2018 post, Facebook CEO Mark Zuckerberg described feeling a "responsibility to make sure our services aren't just fun to use, but also good for people's well-being", announcing "a major change to how we build Facebook" so that time spent on the site is "time well spent". It has been suggested that this is an allusion to the Time Well Spent movement, and spurred similar initiatives, such as Apple's Screen Time and Google Digital Wellbeing.
In 2019, CHT launched Your Undivided Attention, a podcast exploring the power that technology has over humanity and how we can use it to catalyze a humane future. The podcast has featured guests such as historian Yuval Harari, Taiwanese Digital Minister Audrey Tang, and Nobel Peace Prize winner Maria Ressa. The podcast has been downloaded 10 million times as of November 2021 and is among the most popular technology podcasts.
In 2020, CHT co-founders Tristan Harris, Aza Raskin, and Randima Fernando were featured in the Netflix documentary The Social Dilemma. Following the documentary's debut, Apple CEO Tim Cook referenced the documentary at the Computers, Privacy and Data Protection Conference saying, "It is long past time to stop pretending that this approach doesn't come with a cost – of polarization, of lost trust and, yes, of violence. A social dilemma cannot be allowed to become a social catastrophe."
In 2021, Harris was named as one of Time magazine's 100 leaders shaping the future. Harris and Raskin, on their podcast Your Undivided Attention, were the first to have a long-form interview with Facebook whistleblower Frances Haugen after she was revealed to be the source behind The Facebook Files and subsequent Facebook Papers.
In 2022, Harris gave a speech at SXSW called "The Wisdom Gap", which outlined how technology is both increasing the interconnectedness of our biggest issues and decreasing our ability to respond with wisdom. CHT also launched "The Foundations of Humane Technology", a free online course directed at product designers and technologists that generated more than 10,000 participants in its first several months after launch.
See also
Addiction by Design
Attention economy
Consumtariat
Surveillance capitalism
The Society of the Spectacle
Television consumption
References
Further reading
External links
Consumer organizations in the United States
Digital media use and mental health
Addiction organizations in the United States
Human–computer interaction
Product design | Center for Humane Technology | [
"Engineering"
] | 1,349 | [
"Human–computer interaction",
"Product design",
"Design",
"Human–machine interaction"
] |
53,058,247 | https://en.wikipedia.org/wiki/Sea%20defense%20zone | During World War II, a sea defense zone (Seeverteidigung) was a tactical area in the organization of the Kriegsmarine intended to provide operational command of all German naval forces, within a given geographical area, in the event of actual enemy attack on the coastline of occupied Europe.
History
The first sea defense zones were established in the spring of 1940 to protect the large amount of coastline which Germany had acquired after invading the Low Countries, Denmark, Norway, and France. Originally, commanders of the sea defense zones were known as "coastal commanders" (Küstenbefehlshaber). In the summer of 1940, in preparation for Operation Sea Lion, the Kriegsmarine established seven "sea command sectors" (Seebefehlsstellen), which were commanded by officers ranked Kapitän zur See. All of the sea command sectors had been disestablished by the end of 1941.
Original Sea Command Sectors (1940)
Seebefehlsstelle Antwerpen - Antwerp (Sep 1940 - May 1941)
Seebefehlsstelle Boulogne - Boulogne-sur-Mer (Aug - Oct 1940)
Seebefehlsstelle Dünkirchen - Dunkirk (Aug - Oct 1940)
Seebefehlshaber Le Havre - Le Havre (Aug - Oct 1940)
Seebefehlshaber Rotterdam - Rotterdam (Jun 1940 - Dec 1940)
Seebefehlshaber Ostende - Ostend (Aug - Oct 1940)
Seebefehlshaber West - Calais (Aug 1940 - Mar 1941)
In the spring of 1940, the Kriegsmarine began to reorganize coastal defense under a new position known as Kommandant der Seeverteidigung (Sea Defense Zone Commander). Between 1941 and 1945, the sea defense zones were expanded and contracted, gaining and losing territory to other zones or to the advance of Allied or Soviet Red Army forces. Logistically, the sea defense zones were strictly a Navy command, but they were integrated into the Atlantic Wall, which was generally overseen by the German Army.
Command and control
Sea defense zones were normally commanded by an officer ranked as either Kapitän zur See or Konteradmiral. The sea defense zone commander answered to a Navy regional commander and would take tactical control over all shore forces in a given area should an enemy launch an attack against a segment of German coastline.
The only units permanently assigned to a sea defense zone were naval artillery batteries and anti-aircraft units. These units also maintained their own administrative chain of command in addition to falling under the operational control of a sea defense zone. During an actual enemy attack, the sea defense commander became the direct superior of all Navy units in the zone's geographical area, including all harbor defense units as well as naval infantry regiments. Typically, the sea defense zone commander would appoint as a deputy the commander of a major German port. The defense zone commander himself reported to a naval region commander, who acted in the capacity of a ground forces divisional commander. The ultimate command authority for all sea defense zones rested with the Navy Group commanders.
List of sea defense zones
References
Lohmann W. & Hildebrand H., Die Deutsche Kriegsmarine, Verlag Hans-Henning Podzun, Bad Nauheim (1956)
Notes
Kriegsmarine
Military history of Germany during World War II
Fortifications | Sea defense zone | [
"Engineering"
] | 691 | [
"Fortifications",
"Military engineering"
] |
53,058,348 | https://en.wikipedia.org/wiki/Ukay-ukay | An ukay-ukay ( ), or wagwagan ( ), is a Philippine store where a mix of secondhand and surplus items such as clothes, bags, shoes and other accessories is sold at more affordable prices. Items commonly sold at ukay-ukays are imported from Hong Kong, South Korea, Japan, the United States, and the United Kingdom.
Etymology
The term ukay-ukay is derived from the Cebuano verb ukay, which means "to dig" or "to sift through"; a literal English rendering would be "dig-dig". It is synonymous with the Ilocano verb wagwag, the act of dusting off a piece of clothing by taking hold of one end, snapping it in the air, and shaking the item to dust it off; and with SM, meaning segunda mano (secondhand), which is also a pun on the foremost Philippine retail chain SM.
The term wagwag was more commonly used from the early to late 1990s, as this type of business originated in Baguio City and later spread to the Cordillera Administrative Region and the rest of the northern Philippines. Eventually, it reached the National Capital Region and the rest of the Philippines through Cordilleran traders led by the Colod family of Baguio City. Tagalog and Bisaya speakers had difficulty pronouncing and adopting the term wagwag; hence, they coined the term ukay-ukay in the early 2000s, which was adopted alongside the original term wagwag by the merchant-minded northerners, whose goods are widely popular and accepted by their consumers.
History
The first ukay-ukay was founded in the mid-1990s in Baguio City by Evangeline Dis-iw Colod, often referred to as the Queen of Ukay-Ukay (or Wagwag in the local languages of the northern Philippines, particularly in the highland localities). When calamities frequented the Philippines during those years, the various global branches of the Salvation Army sent secondhand garments and other goods to refugees and victims as humanitarian assistance. This practice continued through the efforts of Colod and her family and associates, who would even go to the extent of visiting Salvation Army branches outside the Philippines to keep it going. Soon enough, the shipped goods, as they piled up, were bought in bulk, which resulted in an oversupply of donated clothes. The Colod family would monetize these donations to help the humanitarian agency cope with its financial standing, then sell the acquired goods to the public at significantly low prices to help those who could barely afford decent clothing in the era after the 1990 earthquake. They used to market it to the low-income bracket as one of the earliest forms of social entrepreneurship, but following ukay-ukay's increase in popularity, relatively richer customers seeking low-priced branded and high-quality goods also patronized ukay-ukay stores.
Nevertheless, ukay-ukay, particularly through the retail operations of the Colod family of Baguio City, has helped families around the country rise from poverty and has created a number of wealthy families whose businesses create jobs and benefit the poor by giving them access to affordable, quality clothing, as the Colod family primarily intended when they started the business. As to the legality of the practice, traders sought to sell goods that were already in the country through legal means; hence, there was no legal impediment to doing so.
Legality
The commercial importation of secondhand clothing into the Philippines has been prohibited since 1966 under Republic Act No. 4653, also known as the "Act to safeguard the health of the Filipino people and maintain the dignity of the nation through the prohibition of the importation of used clothing and rags". It renders direct importation of used clothes illegal, but the selling of goods that are already in the country through legal means is allowed. There have been many calls to review and amend the law to legalize the importation of used clothing for retail by ukay-ukay stores.
See also
Divisoria
Sari-sari store
References
Bibliography
External links
The University of Ukay at Rappler
Culture of the Philippines
Retailing in the Philippines
Infrastructure
Building types
Buildings and structures by type
Urban studies and planning terminology | Ukay-ukay | [
"Engineering"
] | 886 | [
"Construction",
"Buildings and structures by type",
"Infrastructure",
"Architecture"
] |
53,058,380 | https://en.wikipedia.org/wiki/Tania%20Antoshina | Tatyana Antoshina (Russian: Таня Антошина, Татьяна Константиновна Антошина; also transliterated as Tania Antoshina, Tatiana Antoshina, Tanya Antoshina, Tatjana Antoschina, or Tatyana Antoschina; from 1977 to 1997 she bore the surname Машукова, Tatyana Mashukova) (b. 1 May 1956, Krasnoyarsk, Siberia, Russia) is a French-Russian contemporary artist, curator, and PhD in art history, and one of the first participants in the gender movement in Moscow art. In 1991 she completed postgraduate studies and received a PhD in Fine Arts from the Stroganov Moscow State University of Arts and Industry.
Tania Antoshina is one of the most significant Russian female artists since Perestroika. Her work explores the role of women artists in society and in art history and was exhibited in the iconic ‘After the Wall’ exhibition at the Moderna Museet, Stockholm and Hamburger Bahnhof, Berlin; ‘Gender Check’, MUMOK, Vienna; and the 56th Venice Biennale. Her works are in the collections of MUMOK, Vienna; National Museum of Women in the Arts, Washington; Neues Museum Weserburg Bremen; State Russian Museum, St Petersburg; and The Tretyakov Gallery, Moscow.
Antoshina lives and works in Paris.
Collections
MUMOK, Vienna, Austria;
New Museum Weserburg Bremen, Bremen, Germany;
National Museum of Women in the Arts , Washington DC, US;
Corcoran Art Museum, Washington DC, US;
American University Museum, Washington DC, US;
Omi International Arts Center collection, New York, US;
Mint Museum, Charlotte, North Carolina, US;
Casoria Contemporary Art Museum, Naples, Italy;
Olympic Fine Arts Museum, Beijing, China;
Penang State Art Museum, Penang, Malaysia;
Russian Museum, Saint Petersburg, Russia;
Tretyakov Gallery, Moscow, Russia;
National Centre for Contemporary Arts, Moscow, Russia;
Museum of Decorative-Applied and Folk Arts, Moscow, Russia;
Perm Museum of Contemporary Art, Perm, Russia;
Krasnoyarsk Cultural Historical Museum complex, Krasnoyarsk, Russia;
Asia-Pacific Institute of Art & Research, Jeollabuk-do, South Korea;
The Francis J. Greenburger collection, New York;
Kolodzei Art Foundation, New York;
Tony Podesta collection, Washington;
Sir Elton John Collection, London.
Selected exhibitions
Solo shows
TANIA ANTOSHINA : L'ARCHE DE L'ESPACE / TANIA ANTOSHINA : SPACE ARK, Galerie Vallois Modern and Contemporary Art, Paris, 2023
Cold Land. Northern Tales, ZARYA Center for Contemporary Art, Vladivostok, 2017-2018
Reggae Feminism or 88 March, Dukley Art Center, Kotor, Montenegro, 2017
Museum of a Woman, Podgorica Museums & Galleries, Gallery Art, Podgorica, Montenegro, 2015
Cold Land, Krasnoyarsk Museum Center, Krasnoyarsk, Russia, 2014
My Favorite Artists, Galerie Vallois, Paris, France, 2010
Alice and Gagarin, VP Studio, Moscow, Russia, 2010
My Favorite Artists, Mario Mauroner Gallery, Vienna, Austria, 2008
Space Travelers, Guelman Gallery, Moscow, Russia, 2006
Museum of a Woman, White Space Gallery, London, UK, 2004
The Voyeurism of Alice Guy, Guelman Gallery, Moscow, Russia, 2002
Museum of a Woman, Florence Lynch Gallery, NY, US, 2001
April in Moscow, Guelman Gallery, Moscow, Russia, 1999
Museum of a Woman, Guelman Gallery, Moscow, Russia, 1997
Women of Russia, Guelman Gallery, Moscow, Russia
To Moor, Expo 88, Moscow, Russia, 1996
The Hound of Baskervilles, Regina Gallery, Moscow, Russia, 1992
Group shows
International Biennale of Vallauris – Contemporary Creation and Ceramics, Musée Magnelli, Musée de la céramique, Vallauris, France, 2024
DARK ROOM: VIDEOWORDS The fifth special project by Magmart, Torrance Art Museum, Torrance, USA, 2021
Moves Like Walter: New Curators Open the Corcoran Legacy Collection, Katzen Arts Center, American University Museum, Washington, DC, USA, 2019
From Non-Conformism to Feminisms: Russian Women Artists from the Kolodzei Art Foundation, Museum of Russian Art (TMORA), Minnesota, 2018-2019
18-th ASIAN ART BIENNALE BANGLADESH, Bangladesh Shilpakala Academy, National Academy of Fine and Performing Arts, Dhaka, 2018
Women at Work: Subverting the Feminine in Post-Soviet Russia, White Space Gallery, London, 2018
ART RIOT: POST-SOVIET ACTIONISM, Saatchi Gallery, London, 2017-2018
17th Asian Art Biennale Bangladesh, Bangladesh Shilpakala Academy National Art Gallery, Dhaka, Bangladesh, 2016
56 Venice Biennale, State pavilion of Mauritius, 2015
Gender Check, MUMOK, Vienna, 2012
Moscow — NY = Parallel Play, Chelsea Art Museum, NY, 2008
Moscow Biennale, Moscow, Russa, 2007
SIGHT/INSIGHT, Corcoran Art Museum, Washington DC, 2006
Photo London, Royal Academy of Arts, London, 2005
After the wall, National Gallery (Berlin), Berlin, Hamburger Bahnhof, 2001
After the Wall, Ludwig Museum, Budapest, 2000
After the Wall, Moderna Museet, Stockholm, 1999
Honours and awards
Scholarship Residence Center for Contemporary Art "Zarya", Vladivostok, Russia, 2017;
Scholarship Residence Dukley European Art Community, Kotor, Montenegro, 2017;
Scholarship Residence Dukley European Art Community, Kotor, Montenegro, 2015;
Scholarship Residency KRITI Varanasi, India, 2013;
Scholarship Residences MARIPOSA Canary Islands, Spain, 2012;
Olympic Art Gold Medal, Olympic Fine Arts, London, United Kingdom, 2012;
Alternative Prize “Russian Activist Art”, Moscow, Russia, 2012;
Olympic Art Gold Medal, Olympic Fine Arts, Beijing, China, 2008;
Five Rings Prize, Olympic Landscape Sculpture Design Contest, Beijing, China, 2008;
Laureate of the Magmart video festival, Naples, Italy, 2005;
Scholarship Omi International Arts Center, NY, US, 2005;
Winner of the "Silver Camera", Multimedia Art Museum, Moscow, 2005;
Winner of the "Silver Camera", Multimedia Art Museum, Moscow, 2002;
Scholarship of the Yaddo Residence, New York, US, 2001;
Winner of the contest "Modern Russia", Photo Center on Gogol Boulevard, Moscow, Russia, 2001;
Participant of the International Symposium CERAMICS - PAINTING - GRAPHICS Bad Lippspringe, Germany, 1992;
Best Report at the Scientific Conference of Post-Graduate Students and Teachers of the Moscow Institute of Industrial Arts, Moscow, Russia, 1985;
Silver Medal of VDNH for Teaching Work, Moscow, Russia, 1985;
Best Teacher of the Krasnoyarsk Art Academy, Krasnoyarsk, Russia, 1984.
Curatorship projects
The Quest of Power, special project of 6 Moscow Biennale 2015;
Terra Incognita, expedition to South Siberia for collection of ethnic and cultural material, 2014;
V-5, Space as a Presence, in partnership with A.Galenz and G.Kuznetsov, InteriorDAsein, Berlin, 2012;
Sons of the Big Dipper, together with G.Kuznetsov, PROEKT FABRIKA, Moscow, 2011;
Two Museums, in partnership with G.Vysheslavsky, Champino, Velletri, Italy, 1992.
Select publications
“TANIA ANTOSHINA: L'ARCHE DE L'ESPACE”, 2023-04-06, Galerie Robert Vallois, Paris, France;
“ART JUDGMENTS: Art on Trial in Russia after Perestroyka” by Sandra Frimmel (University of Zurich), ISBN 978-1-62273-277-7, Vernon Press, March 2022;
Thomas Deecke, Markus Bulling (2001). ‘’8. Triennale Kleinplastik Fellbach Vor-Sicht, Rück-Sicht’’. Germany, Stadt Fellbach Auflage;
Александр Боровский. «Как-то раз Зевксис с Паррасием... Современное искусство: практические наблюдения». Литрес, 2017. ;
Jonson Lena (2015). ‘‘Art and Protest in Putin's Russia’‘, Taylor and Francis, pp. XII, 207, 227, 240, 261, ;
Viola Hildebrand-Schat (2014). ‘‘Appropriation oder Simulacrum?’‘, p. 229-233, in Guido Isekenmeier, ‘‘Interpiktorialität: Theorie und Geschichte der Bild-Bild-Bezüge’‘, transcript Verlag, ;
‘‘Working with Feminism: Curating and Exhibitions in Eastern Europe, Acta Universitatis Tallinnensis: Artes’’, Angela Dimitrakaki, Katrin Kivimaa, Katja Kobolt, Izabela Kowalczyk, Pawel Leszkowicz, Suzana Milevska, Bojana Pejic, Rebeka Põldsam, Mara Traumane, Airi Triisberg, Hedvig Turai, p. 85. Estonia: Tallinn University Press / Tallinna Ülikooli Kirjastus. , 2012;
Klaus Krüger / Leena Crasemann / Matthias Weiß: (Hgg.) (2011). ‘‘Re-Inszenierte Fotografie’‘. Munich, Germany: Wilhelm Fink Verlag.(2011) ;
Thomas Deecke (2010). ‘‘Leben mit der Kunst’‘. Germany: Nicolaische Verlagsbuchhandlung. ;
Alain Monvoisin (2008). ‘‘DICTIONNAIRE INTERNATIONAL DE LA SCULPTURE MODERNE ET CONTEMPORAINE’‘, p. 26-27. Paris, France, Regard. ;
Julia Tulovsky (2008). ‘’The Claude and Nina Gruen Collection of Contemporary Russian Art’’, pp. 24, 79. Jane Voorhees Zimmerli Art Museum, ;
Matthias Winzen; Nicole Fritz (2007). Bodycheck: Catalog of the 10th Fellbach Triennial of Contemporary Sculpture. Germany: Snoek Verlagsgesellschaft. , p. 292;
Tatiana Smorodinskaya (Russian, Middlebury Coll.), Karen Evans-Romaine (Russian, Ohio Univ.), and Helena Goscilo (Slavic languages, Univ. of Pittsburgh) (ed.) (2007). ‘‘Encyclopedia of contemporary Russian Culture (Encyclopedias of Contemporary Culture Series), pp. 19, 42-43. Abingdon, UK and New York, USA: Routledge. ;
Игорь Кон (2003) Мужское тело в истории культуры p. 378-382; Издательство: СЛОВО/SLOVO ;
АРТ-Конституция (иллюстрированная АРТ – Конституция Российской Федерации, в создании которой принимали участие наиболее актуальные художники начала 21 века). сс. 110, 111, 131, 132, 272, 273. Тексты: Зураб Церетели, Екатерина Деготь, Наталья Колодзей. Москва: Музей современного искусства, 2003. ;
СЛОВАРЬ ГЕНДЕРНЫХ ТЕРМИНОВ / Под ред. А. А. Денисовой / Региональная общественная организация "Восток-Запад: Женские Инновационные Проекты". М.: Информация XXI век, 2002. c. 256;
Renee Baigell, Matthew Baigell (July 1, 2001). ‘’Peeling Potatoes, Painting Pictures: Women Artists in Post-Soviet Russia, Estonia, and Latvia. The First Decade’’. The Dodge Soviet Nonconformist Art publication series, p. 55. New Brunswick, NJ, USA: Jane Voorhees Zimmerli Art Museum: Rutgers. ;
Женщина в обществе: мифы и реалии : сборник статей / редактор-составитель Круминг Л.С. , сс. 1, 4. Москва : Информация - XXI век, 2001. , сс. 1, 4;
Женщина и визуальные знаки : сборник / Ин-т "Открытое общество", сс. 232-234. - М. : Идея-Пресс, 2000. .
Periodicals
“FEMINISM OF THE TENDER KIND Tatyana Antoshina: performative ceramics”, by Anna Tolstova for “Kommersant Weekend”, 01.09.2023;
“ANTOSHINA AND RAIDERS OF THE LOST ARK”, by Alexey Tarkhanov for “Art Focus Now”, 13.04.2023;
This Leads to Fire: From Nonconformism to Global Capitalism, Selections from the Kolodzei Art Foundation Collection, Neuberger Museum of Art at Purchase College, SUNY, 2015;
Google Arts and Culture. Quantum Leap, May - November 2015;
Female Artists and the Nude Male, Part 5, July 14, 2014;
Gesellschaft vor Gericht, Neue Zürcher Zeitung, 5.3.2013;
IWMpost 110 by Institut für die Wissenschaften vom Menschen - Issuu, N110 may – August 2012;
Die Einheit des Universums, taz, BENNO SCHIRRMEISTER, 23.12.2006;
Revolution, Transit Art Space, April 2006;
Hans-Dieter Fronz, KUNSTFORUM, Bd. 171, Juli–August 2004, pp. 370–371, "Na Kurort! Russische Kunst heute", Zurich, 2004;
Pat Simpson, "Peripheralising Patriarchy? Gender and Identity in Post-Soviet Art: A View from the West", Oxford Art Journal, Vol. 27, No. 3 (2004), Oxford University Press, pp. 406, 407, 2004;
Brian Dillon, "Tatiana Antoshina", MODERN PAINTERS, summer 2004, pp. 120–121.
"CAPITAL PERSPECTIVE", Moscow, pp. 5–6, 2002;
Francesca Piovano, "Art-Forum", CVA, issue 33, 2001;
Texte zur Kunst, Band 11, Ausgaben 41–42, Texte zur Kunst GmbH, p. 142, 2001;
Elfi Kreis, "Absurdistan", Kunstzeitung, Nr. 57, Mai 2001;
"Letzte Tage IsKusstwo 2000", Oberbauer, Volksblatt, 24/25.2.2001;
Christoph Wiedemann, "Freie Radikale auf ihrem Weg in den Westen", Seite 18, Süddeutsche Zeitung, Nr. 13, 17.1.2001;
Von Michael Dultz, "Klassische Frauenszenen mit Männern nachgestellt", Die Welt Bayern, 15.01.2001;
Brita Sachs, "Sei frech und zeige deine Katastrophen", Frankfurter Allgemeine Zeitung, Februar 2001;
Rod Mengham, "The refugee aesthetic?", TATE, issue 20, 2000.
External links
Tatyana Antoshina's site
Tatyana Antoshina on ARTFACT
Tatyana Antoshina on ARTNET
Tania Antoshina Sotheby's
Notes and references
1956 births
Living people
Multimedia artists
Women multimedia artists
20th-century Russian women artists
21st-century Russian women artists
People from Krasnoyarsk
Feminist artists
Stroganov Moscow State Academy of Arts and Industry alumni | Tania Antoshina | [
"Technology"
] | 3,745 | [
"Multimedia",
"Multimedia artists"
] |
53,058,507 | https://en.wikipedia.org/wiki/ATATool | ATATool is freeware software that is used to display and modify ATA disk information from a Microsoft Windows environment. The software is typically used to manage host protected area (HPA) and device configuration overlay (DCO) features and is broadly similar to the hdparm for Linux. The software can also be used to generate and sometimes repair bad sectors. Recent versions include support for DCO restore and freeze operations, HPA security (password) operations and simulated bad sectors. ATATool is no longer available for personal download and can only be used for "professional users" like for security researchers.
Usage examples
ATATool must be run with administrator privileges; on Windows Vista and later this requires an elevated command prompt (see User Account Control). The target drive must be connected directly to a physical disk controller: the software will not work with drives attached through an external interface such as USB, including external hard drives.
Display detected Hard Drives:
ATATOOL /LIST
Display information on Hard Drive 1:
ATATOOL /INFO \\.\PhysicalDrive1
Set a volatile HPA of 50GB on Hard Drive 1 (HPA will be lost after a power cycle):
ATATOOL /SETHPA:50GB \\.\PhysicalDrive1
Set a permanent HPA of 50GB on Hard Drive 1 (HPA persists across power cycles):
ATATOOL /NONVOLATILEHPA /SETHPA:50GB \\.\PhysicalDrive1
Remove permanent HPA on Hard Drive 1:
ATATOOL /NONVOLATILEHPA /RESETHPA \\.\PhysicalDrive1
Set DCO to 100GB on Hard Drive 1:
ATATOOL /SETDCO:100GB \\.\PhysicalDrive1
Remove DCO of 100GB on Hard Drive 1:
ATATOOL /RESTOREDCO:100GB \\.\PhysicalDrive1
Make sector 5 bad:
ATATOOL /BADECC:5 \\.\PhysicalDrive1
Make sector 5 not bad:
ATATOOL /FIXECC:5 \\.\PhysicalDrive1
Make sector 10 bad and then not bad again (alternative method):
ATATOOL /BADECCLONG:10 \\.\PhysicalDrive1
ATATOOL /FIXECCLONG:10 \\.\PhysicalDrive1
Warning
Using ATATool can permanently change the disk configuration and may result in permanent data loss by making some sectors of the disk inaccessible. Use ATATool at your own risk.
See also
hdparm
Host protected area (HPA)
Device configuration overlay (DCO)
References
External links
ATATool
AT Attachment
Computer forensics | ATATool | [
"Engineering"
] | 549 | [
"Cybersecurity engineering",
"Computer forensics"
] |
53,059,902 | https://en.wikipedia.org/wiki/Nondeterministic%20constraint%20logic | In theoretical computer science, nondeterministic constraint logic is a combinatorial system in which an orientation is given to the edges of a weighted undirected graph, subject to certain constraints. One can change this orientation by steps in which a single edge is reversed, subject to the same constraints. This is a form of reversible logic in that each sequence of edge orientation changes can be undone.
Reconfiguration problems for constraint logic, which ask for a sequence of moves connecting certain states, connecting all states, or reversing a specified edge, have been proven to be PSPACE-complete. These hardness results form the basis for proofs that various games and puzzles are PSPACE-hard or PSPACE-complete.
Constraint graphs
In the simplest version of nondeterministic constraint logic, each edge of an undirected graph has weight either one or two. (The weights may also be represented graphically by drawing edges of weight one as red and edges of weight two as blue.) The graph is required to be a cubic graph: each vertex is incident to three edges, and additionally each vertex should be incident to an even number of red edges.
The edges are required to be oriented in such a way that at least two units of weight are oriented towards each vertex: there must be either at least one incoming blue edge, or at least two incoming red edges. An orientation can change by steps in which a single edge is reversed, respecting these constraints.
More general forms of nondeterministic constraint logic allow a greater variety of edge weights, more edges per vertex, and different thresholds for how much incoming weight each vertex must have. A graph with a system of edge weights and vertex thresholds is called a constraint graph. The restricted case in which the edge weights are all one or two, each vertex requires two units of incoming weight, and every vertex has three incident edges with an even number of them red, is called an and/or constraint graph.
The reason for the name and/or constraint graphs is that the two possible types of vertex in an and/or constraint graph behave in some ways like an AND gate and OR gate in Boolean logic. A vertex with two red edges and one blue edge behaves like an AND gate in that it requires both red edges to point inwards before the blue edge can be made to point outwards. A vertex with three blue edges behaves like an OR gate, with two of its edges designated as inputs and the third as an output, in that it requires at least one input edge to point inwards before the output edge can be made to point outwards.
Typically, constraint logic problems are defined around finding valid configurations of constraint graphs. Constraint graphs are undirected graphs with two types of edges:
red edges with weight 1
blue edges with weight 2
We use constraint graphs as computation models, where we think of the entire graph as a machine. A configuration of the machine consists of the graph along with a specific orientation of its edges. We call a configuration valid if it satisfies the inflow constraint: each vertex must have an incoming weight of at least 2. In other words, the sum of the weights of the edges oriented towards a given vertex must be at least 2.
We also define a move in a constraint graph to be the action of reversing the orientation of an edge, such that the resulting configuration is still valid.
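As an illustration of these definitions (the encoding below is invented for this sketch, not taken from the literature), the inflow constraint and the set of legal moves can be checked directly:

```python
# Minimal sketch of constraint-graph validity and move enumeration.
# Each edge is (u, v, weight, head), where weight is 1 (red) or 2 (blue)
# and head is the endpoint the edge currently points toward.

def inflow(vertex, edges):
    """Total edge weight currently oriented towards `vertex`."""
    return sum(w for (_, _, w, head) in edges if head == vertex)

def is_valid(edges, internal, min_inflow=2):
    """Check the inflow constraint at every internal vertex.

    `internal` lists the vertices whose constraint must hold; free
    endpoints of a gadget's dangling edges are left unconstrained."""
    return all(inflow(v, edges) >= min_inflow for v in internal)

def legal_moves(edges, internal):
    """Indices of edges whose reversal yields another valid configuration."""
    moves = []
    for i, (u, v, w, head) in enumerate(edges):
        flipped = list(edges)
        flipped[i] = (u, v, w, u if head == v else v)  # reverse edge i
        if is_valid(flipped, internal):
            moves.append(i)
    return moves

# An OR-like vertex x with three blue (weight-2) edges; a, b, c are free ends.
config = [("a", "x", 2, "x"),   # input from a, pointing in
          ("b", "x", 2, "x"),   # input from b, pointing in
          ("x", "c", 2, "c")]   # output, pointing out
print(is_valid(config, {"x"}))      # True: inflow at x is 4
print(legal_moves(config, {"x"}))   # [0, 1, 2]: any single edge may flip
```

Running `legal_moves` from a configuration in which only one input of the or-like vertex points inwards shows that this last inward edge cannot be flipped out, mirroring the behaviour of an OR gate.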
Formal definition of the Constraint logic problem
Suppose we are given a constraint graph, a starting configuration and an ending configuration. This problem asks whether there exists a sequence of valid moves that takes the graph from the starting configuration to the ending configuration. The problem is PSPACE-complete for 3-regular or maximum-degree-3 graphs. The reduction follows from QSAT and is outlined below.
Variants
Planar Non-Deterministic Constraint Logic
The above problem remains PSPACE-complete even if the constraint graph is planar, i.e. the graph can be drawn in such a way that no two edges cross each other. This reduction follows from Planar QSAT.
Edge Reversal
This problem is a special case of the previous one. It asks, given a constraint graph, whether it is possible to reverse a specified edge by a sequence of valid moves. Note that it suffices for the last valid move in the sequence to reverse the desired edge. This problem has also been proven to be PSPACE-complete for 3-regular or maximum-degree-3 graphs.
Constraint Graph Satisfaction
Given an undirected constraint graph, this problem asks whether there exists an orientation of the edges that satisfies the inflow constraints. This problem has been proven to be NP-complete.
Hard problems
The following problems, on and/or constraint graphs and their orientations, are PSPACE-complete:
Given an orientation and a specified edge, testing whether there is a sequence of steps from the given orientation that eventually reverses that edge.
Testing whether one orientation can be changed into another one by a sequence of steps.
Given two edges, each with a specified direction, testing whether the whole graph has two orientations, one with the specified direction on the first edge and the other with the specified direction on the second, that can be transformed into each other by a sequence of steps.
The proof that these problems are hard involves a reduction from quantified Boolean formulas, based on the logical interpretation of and/or constraint graphs. It requires additional gadgets for simulating quantifiers and for converting signals carried on red edges into signals carried on blue edges (or vice versa), which can all be accomplished by combinations of and-vertices and or-vertices.
These problems remain PSPACE-complete even for and/or constraint graphs that form planar graphs. The proof of this involves the construction of crossover gadgets that allow two independent signals to cross each other. It is also possible to impose an additional restriction, while preserving the hardness of these problems: each vertex with three blue edges can be required to be part of a triangle with a red edge. Such a vertex is called a protected or, and it has the property that (in any valid orientation of the whole graph) it is not possible for both of the blue edges in the triangle to be directed inwards. This restriction makes it easier to simulate these vertices in hardness reductions for other problems. Additionally, the constraint graphs can be required to have bounded bandwidth, and the problems on them will still remain PSPACE-complete.
Proof of PSPACE-hardness
The reduction follows from QSAT. In order to embed a QSAT formula, we need to create AND, OR, NOT, UNIVERSAL, EXISTENTIAL, and Converter (to change color) gadgets in the constraint graph. The idea goes as follows:
An AND vertex is a vertex such that it has two incident red edges (inputs) and one blue incident edge (output).
An OR vertex is a vertex such that it has three incident blue edges (two inputs, one output).
The other gadgets can also be created in this manner. The full construction is available on Erik Demaine's website, where it is also explained interactively.
Applications
The original applications of nondeterministic constraint logic used it to prove the PSPACE-completeness of sliding block puzzles such as Rush Hour and Sokoban. To do so, one needs only to show how to simulate edges and edge orientations, and-vertices, or-vertices, and protected-or vertices in these puzzles.
Nondeterministic constraint logic has also been used to prove the hardness of reconfiguration versions of classical graph optimization problems including the independent set, vertex cover, and dominating set, on planar graphs of bounded bandwidth. In these problems, one must change one solution to the given problem into another, by moving one vertex at a time into or out of the solution set while maintaining the property that at all times the remaining vertices form a solution.
Reconfiguration 3SAT
Given a 3-CNF formula and two satisfying assignments, this problem asks whether it is possible to find a sequence of steps that takes us from one assignment to the other, where each step flips the value of a single variable and every intermediate assignment must still satisfy the formula. This problem can be shown PSPACE-complete via a reduction from the nondeterministic constraint logic problem.
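A brute-force sketch of this problem (exponential, so only suitable for tiny formulas; all names here are illustrative) is a breadth-first search over satisfying assignments in which neighbouring assignments differ in exactly one variable:

```python
from collections import deque

def satisfies(assignment, clauses):
    """Each clause is a tuple of literals; literal i > 0 means x_i, i < 0 means NOT x_i."""
    return all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in clauses)

def reconfigurable(n_vars, clauses, start, goal):
    """BFS from `start` to `goal`, flipping one variable per step and
    visiting only satisfying assignments (tuples of booleans)."""
    seen, queue = {start}, deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            return True
        for i in range(n_vars):
            nxt = cur[:i] + (not cur[i],) + cur[i + 1:]
            if nxt not in seen and satisfies(nxt, clauses):
                seen.add(nxt)
                queue.append(nxt)
    return False

# (x1 OR x2): (T,F) and (F,T) are connected via the intermediate (T,T).
print(reconfigurable(2, [(1, 2)], (True, False), (False, True)))  # True
```

By contrast, for the formula (x1 OR NOT x2) AND (NOT x1 OR x2) the two satisfying assignments (T,T) and (F,F) are not connected, since both intermediates violate a clause.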
Sliding-Block Puzzles
This problem asks whether we can reach a desired configuration in a sliding block puzzle given an initial configuration of the blocks. This problem is PSPACE-complete, even if the rectangles are dominoes.
Rush Hour
This problem asks whether we can reach the victory condition of rush hour puzzle given an initial configuration. This problem is PSPACE-complete, even if the blocks have size .
Dynamic Map Labeling
Given a static map, this problem asks whether there is a smooth dynamic labeling. This problem is also PSPACE-complete.
References
PSPACE-complete problems
Computational problems in graph theory
Reversible computing
Logical calculi
Reconfiguration | Nondeterministic constraint logic | [
"Physics",
"Mathematics"
] | 1,826 | [
"Computational problems in graph theory",
"Physical quantities",
"Time",
"Reconfiguration",
"PSPACE-complete problems",
"Mathematical logic",
"Reversible computing",
"Computational mathematics",
"Logical calculi",
"Computational problems",
"Graph theory",
"Mathematical relations",
"Spacetime... |
53,060,391 | https://en.wikipedia.org/wiki/Eta%20Coronae%20Australis | The Bayer designation η Coronae Australis (Eta Coronae Australis) is shared by two stars, in the constellation Corona Australis:
η1 Coronae Australis, HR 7062, HD 173715
η2 Coronae Australis, HR 7068, HD 173861
Corona Australis | Eta Coronae Australis | [
"Astronomy"
] | 77 | [
"Corona Australis",
"Constellations"
] |
53,061,027 | https://en.wikipedia.org/wiki/Luspatercept | Luspatercept, sold under the brand name Reblozyl, is a medication used for the treatment of anemia in beta thalassemia and myelodysplastic syndromes.
The US Food and Drug Administration (FDA) considers it to be a first-in-class medication.
Medical uses
Luspatercept is indicated for the treatment of adults with transfusion-dependent anemia due to very low, low and intermediate-risk myelodysplastic syndromes (MDS) with ring sideroblasts, who had an unsatisfactory response to or are ineligible for erythropoietin-based therapy.
Luspatercept is indicated for the treatment of adults with transfusion-dependent anaemia associated with beta thalassaemia.
Side effects
Possible adverse effects include temporary bone pain, joint pain (arthralgia), dizziness, elevated blood pressure (hypertension) and elevated uric acid levels (hyperuricemia). There was also an increased risk of thrombosis (blood clots) in patients taking luspatercept who have risk factors for thrombosis.
Structure and mechanism
Luspatercept is a recombinant fusion protein derived from human activin receptor type IIb (ActRIIb) linked to a protein derived from immunoglobulin G. It binds ligands of the TGF-β (transforming growth factor beta) superfamily to reduce SMAD signaling. The reduction in SMAD signaling leads to enhanced erythroid maturation.
History
Phase III trials evaluated the efficacy of luspatercept for the treatment of anemia in the hematological disorders beta thalassemia and myelodysplastic syndromes.
It was developed by Acceleron Pharma in collaboration with Celgene.
The U.S. Food and Drug Administration (FDA) granted approval for luspatercept–aamt in November 2019, for the treatment of anemia (lack of red blood cells) in adult patients with beta thalassemia who require regular red blood cell (RBC) transfusions. Luspatercept was approved for medical use in the European Union in June 2020.
The U.S. Food and Drug Administration (FDA) awarded orphan drug status in 2013, and fast track designation in 2015.
Research
Luspatercept is being evaluated for use in adults with non-transfusion dependent beta thalassemia.
References
Drugs developed by Bristol Myers Squibb
Orphan drugs
Recombinant proteins | Luspatercept | [
"Biology"
] | 523 | [
"Recombinant proteins",
"Biotechnology products"
] |
53,061,553 | https://en.wikipedia.org/wiki/Robin%20Marshall | Robin Marshall (born 1940) is an Emeritus professor of Physics & Biology in the School of Physics and Astronomy at the University of Manchester.
Education
Marshall was educated at Ermysted's Grammar School in Skipton and the University of Manchester where he was awarded a Bachelor of Science degree in 1962 followed by a PhD in 1965 for research developing sonic spark chambers and studying pion pair production in pion proton interactions.
Career and research
Marshall is an innovator in the field of high-energy electron–positron annihilation, making many personal contributions. He was the first at the Positron–Electron Tandem Ring Accelerator (PETRA) e+e− collider at the Deutsches Elektronen-Synchrotron (DESY) to determine the electroweak properties of leptons and then quarks. These papers became templates for other experimenters over the next ten years. He performed the definitive analysis of the world's electron–positron data to produce what are now the textbook results for the Quantum Chromodynamics (QCD) 'fine structure' constant and the fermion electroweak interaction parameters. In 1984, he published a novel method for isolating bottom quark events and then used the method to measure the b electroweak properties, showing that it belonged to a weak isospin doublet state, and hence that the top quark must exist. This was one of several significant physics results from PETRA. He was a group leader at Rutherford Appleton Laboratory (RAL) from 1978 to 1992, and in the 1990s led the British involvement in an experiment at the electron–proton collider, Hadron-Elektron-Ringanlage (HERA), at DESY.
Awards and honours
Marshall was elected a Fellow of the Royal Society (FRS) in 1995 and was a Fellow of the Institute of Physics (FInstP) from 1996 to 2018.
In 1997, he was awarded the Max Born Medal and Prize by the German Physical Society.
Publications
Marshall has published a comprehensive history of "Three Centuries of Manchester Physics", in five volumes, covering the scientific, cultural, social and political aspects of the evolution of the subject in the city and its immediate surroundings.
In 2018, he published a book containing letters written mainly by physicists to the Nobel Prize winner William Lawrence Bragg during the First World War, providing fresh insight into the deeds and thoughts of scientists active in the front line of battle.
In 2019, he published a history of the discovery of transmutation in Manchester by Ernest Rutherford in 1919.
He has written one work of fiction "The Nobel Conspiracy".
References
1940 births
Living people
Fellows of the Royal Society
Alumni of the University of Manchester
People educated at Ermysted's Grammar School
English physicists
Academics of the University of Manchester
Experimental physicists | Robin Marshall | [
"Physics"
] | 576 | [
"Experimental physics",
"Experimental physicists"
] |
53,063,316 | https://en.wikipedia.org/wiki/Volatilome | The volatilome (sometimes termed volatolome or volatome) contains all of the volatile metabolites as well as other volatile organic and inorganic compounds that originate from an organism, super-organism, or ecosystem. The atmosphere of a living planet could be regarded as its volatilome. While all volatile metabolites in the volatilome can be thought of as a subset of the metabolome, the volatilome also contains exogenously derived compounds that do not derive from metabolic processes (e.g. environmental contaminants), therefore the volatilome can be regarded as a distinct entity from the metabolome. The volatilome is a component of the 'aura' of molecules and microbes (the 'microbial cloud') that surrounds all organisms.
Odor profile
All volatile metabolites detectable by the human nose are termed an 'odour profile'. The association of altered odour profiles with disease states has long been documented in both eastern and western medicine, and recent advances in robotic sample introduction have increased interest in the volatilome as a source for biomarkers that can be used for non-invasive screening for disease. Volatile profiles can be collected via active or passive sampling and analysis is predominantly undertaken using gas chromatography–mass spectrometry, with a variety of direct or indirect sample introduction techniques.
See also
Electronic nose
References
Omics
Bioinformatics
Odor
Metabolism | Volatilome | [
"Chemistry",
"Engineering",
"Biology"
] | 303 | [
"Biological engineering",
"Bioinformatics",
"Omics",
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
53,063,552 | https://en.wikipedia.org/wiki/Biodiversity%20of%20Colombia | The biodiversity of Colombia is the variety of indigenous organisms in the country with the second-highest biodiversity in the world. As of 2021, around 63,000 species are registered in Colombia, of which 14% are endemic. The country occupies worldwide the first position in number of orchids, birds and butterflies, second position in plants, amphibians and fresh water fish, third place in species of palm trees and reptiles and globally holds the sixth position in biodiversity of mammals.
The country hosts 59 nationally designated protected areas. At the establishment of the most recent addition, Bahía Portete – Kaurrele National Natural Park, Colombian president Juan Manuel Santos said "Biodiversity is to Colombia, what oil is for the Arabs".
In 2020, according to the Colombian Biodiversity Information System, 63,303 species were registered in the country, of which more than 8,800 are considered endemic species. The country occupies the first position in the world in number of orchid and bird species, second in plants, amphibians, butterflies and freshwater fish, third in palm and reptile species, and fourth in mammalian biodiversity.
According to a report by the WWF, half of Colombia's ecosystems are in a critical state of deterioration or in a state of danger. The organization said that environmental degradation is due to oil extraction, mineral and metal extraction and deforestation. Deteriorating ecosystems are threatening the existence of more than a third of Colombia's plants and 50 percent of its animals.
Since 1998, the Humboldt Institute for Biological Resources has been collecting biodiversity samples. As of 2014, 16,469 samples, representing around 2,530 species of 1,289 genera, and 323 families from Colombian biodiversity have been stored in its archives.
Description
Colombia is one of seventeen megadiverse countries in the world. The country, in northwestern South America, contains 311 types of coastal and continental ecosystems. As of the beginning of 2021, between 63,000 and 71,000 species were registered in the country, including 8,803 endemic species (nearly 14% of the total). Colombia has more páramos than any other country; over 60% of this Andean ecosystem lies within Colombian territory, with 18.3% of the national total in the department of Boyacá. Since December 20, 2014, Colombia has hosted 59 protected areas. Biodiversity is highest in the Andean natural region, followed by the Amazon natural region.
The biodiversity of Colombia is at risk, mainly because of habitat loss, urbanisation, deforestation and overfishing. According to a 2001 study, of forested area is lost every year. Around 1,300 species are critically endangered, and 509 species are introduced in Colombia, 22 of which are classified as invasive species. Various plans to address the environmental issues have been proposed. The National System of Protected Areas (SINAP) is the administrator of protected areas.
Biodiversity in numbers
To commemorate the biodiversity of Colombia, the coins of the Colombian peso introduced in 2012 feature a species each.
Natural regions
Colombia is divided into six natural regions.
Caribbean natural region
Andean natural region
Orinoquía natural region
Amazon natural region
Pacific/Chocó natural region
Insular natural region
Biodiversity hotspots
Colombia hosts two biodiversity hotspots; the Tropical Andes and Tumbes–Chocó–Magdalena. The country is part of the World Network of Biosphere Reserves with five biosphere reserves:
Species
Selected fauna
Selected endemic flora
Selected endemic fungi
Panoramas
See also
Biodiversity of the Eastern Hills, Bogotá
Conservation biology
Biodiversity of Thomas van der Hammen Natural Reserve
Biodiversity of Cape Town
Biodiversity of New Caledonia
Biodiversity of New Zealand
Biodiversity of Borneo
Environmental issues in Colombia
Environmental personhood
References
Bibliography
External links
Biodiversidad Colombia - Universidad de La Salle
Colombia: Bajo Caguán-Caquetá Rapid Inventory [PDF]
Colombia: La Lindosa, Capricho, Cerritos Rapid Inventory [PDF]
Environment of Colombia
Colombia | Biodiversity of Colombia | [
"Biology"
] | 850 | [
"Biota by country",
"Wildlife by country"
] |
53,063,682 | https://en.wikipedia.org/wiki/LVDT%20flow%20meter | An LVDT (Linear Variable Differential Transformer) Variable Area Rotameter, is a meter designed to measure the flow rate of a fluid or gas.
Mechanism
The flow meter utilizes a unique combination of a tapered metering cone in series with a piston. The position of the metal piston is sensed by the LVDT circuitry and is then translated into a flow rate. This non-linear signal can be displayed directly or linearized to provide an electrical output.
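As a sketch of that translation step (the calibration numbers below are hypothetical, not manufacturer data), the nonlinear position signal is typically mapped to a flow rate through a calibration table and piecewise-linear interpolation:

```python
# Hypothetical calibration: LVDT piston position (mm) vs. measured flow (L/min).
# A real table comes from calibrating the specific meter against a reference.
CAL_POSITION = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]   # mm
CAL_FLOW     = [0.0, 0.8, 2.1, 4.0, 6.6, 10.0]   # L/min (nonlinear)

def flow_from_position(pos_mm):
    """Piecewise-linear interpolation of the calibration table,
    clamped to the table's end points."""
    if pos_mm <= CAL_POSITION[0]:
        return CAL_FLOW[0]
    if pos_mm >= CAL_POSITION[-1]:
        return CAL_FLOW[-1]
    points = list(zip(CAL_POSITION, CAL_FLOW))
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= pos_mm <= x1:
            return y0 + (y1 - y0) * (pos_mm - x0) / (x1 - x0)

print(flow_from_position(5.0))  # midway between 2.1 and 4.0 -> 3.05
```

In practice the table would be produced by calibrating the meter against a reference flow standard; a denser table reduces interpolation error.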
Benefits
Advantages include the ability to externally measure very low flow rates.
See also
Cyclonic flow meter
References
External links
Corolis Flow Meter
Insertable Flow Meter
Flow meters | LVDT flow meter | [
"Chemistry",
"Technology",
"Engineering"
] | 129 | [
"Measuring instruments",
"Flow meters",
"Fluid dynamics"
] |
53,063,934 | https://en.wikipedia.org/wiki/Timoprazole | Timoprazole is in a class of medications called proton pump inhibitors (PPI) that inhibit gastric acid secretion. While it has never come to market, it was studied early on and is considered to be the "backbone" of the PPI class that succeeded it. This medication has high anti-secretory activity, which flared interest along with its simple structure.
References
Proton-pump inhibitors
Abandoned drugs | Timoprazole | [
"Chemistry"
] | 85 | [
"Drug safety",
"Abandoned drugs"
] |
53,063,942 | https://en.wikipedia.org/wiki/Bedside%20sleeper | A bedside sleeper, also referred to as a sidecar sleeper or bedside bassinet, is a bassinet or baby cot that attaches to the parents' bed, allowing newborns to sleep next to their parents safely. This is a form of safe co-sleeping, and has little risks associated with sudden infant death syndrome, unlike bedsharing. Bedside sleepers are a component of rooming-in, a practice followed in hospitals to keep the baby by the mother's bed, giving her time to establish a stronger bond with her baby.
A bedside sleeper is defined by the United States government as "a rigid frame assembly secured to an adult bed that is intended to provide a sleeping environment for infants." Usually, one wall of the bedside sleeper is lower than the others, which allows the parent to easily reach for the child at night. Most bedside sleepers are multi-mode, meaning that they can be converted into bassinets and/or play yards.
Types
Bedside bassinet
A bedside bassinet tends to have four sides, like a regular baby crib. It can be positioned near the parents' bed as an unattached bedside bassinet, or attached to the bed. This arrangement allows parents to more easily attend to their baby during the night. Because bedside bassinets have four rails, quick, easy access to the occupant can still be limited.
Bedside sleeper or sidecar
A bedside sleeper or sidecar is similar to a bedside bassinet in that it attaches to the parents' bed, but only has three crib walls, which allows the baby to sleep at the same height as the parents, and there is no obstruction to reaching out for the baby. Bedside sleepers allow parents to keep the baby close without it sleeping in the dimensional space of the family bed.
History
Co-sleeping is an ancient practice whereby babies sleep close to their parents and not in a different room, where they can sense another's presence. According to the Natural Child Project, co-sleeping is an unquestioned practice in much of southern Europe, Asia, Africa and Central and South America. However, one of the most common types of co-sleeping is bedsharing, which can be dangerous.
The American Academy of Pediatrics encourages room-sharing (sleeping in the same room but on separate surfaces), but it recommends against bed-sharing with infants, due to instances of SIDS. In a study of 321 SIDS cases, the British Medical Journal indicated that the largest percentage of SIDS cases arose from babies who slept in a different room than the parents, suggesting that co-sleeping on a separate surface is the safest method of infant sleep. Co-sleeping—sleeping with a baby nearby—is gaining popularity in the United States. Bedside sleepers were created to allow parents and babies to gain the benefits of co-sleeping while minimizing instances of SIDS.
Scientific benefits of co-sleeping
Promotes breastfeeding: A 1997 study found that infants who slept near their parents breastfed approximately three times longer during the night than infants who slept separately.
Promotes peaceful sleep: Infants who co-sleep were found to rarely cry during the night compared to infants who slept in a separate room, who startled throughout the night and spent four times more minutes crying than co-sleeping infants.
Decreased risk of sudden infant death syndrome (SIDS): Babies who sleep next to the parents' bed have four times less chance of SIDS.
Concerns
Like other infant sleep products, bedside sleepers may also pose various risks to babies of all shapes and sizes. The main issue that most bedside sleeper users and manufacturers must consider is the risk that a baby might fall into a gap between the bedside sleeper and the adult bed mattress, which could cause entrapment injuries and/or strangulation.
References
Babycare
Beds
Child safety | Bedside sleeper | [
"Biology"
] | 781 | [
"Beds",
"Behavior",
"Sleep"
] |
53,064,660 | https://en.wikipedia.org/wiki/Government%20Institute%20of%20Ceramic%20Technology | Government Institute of Ceramic Technology is statewide institution in Andhra Pradesh. It is located in Gudur in Tirupati Dist. It is established in 1952. Government Institute of Ceramic Technology is an autonomous institute offering Diploma in Ceramic Technology that cater to the changing needs of industry, business and community at large using need based curricular delivered in a dynamic learning environment. AICTE approved full-time programs are offered to candidates selected as per POLYCET conducted by the government of Andhra Pradesh. The polytechnic also maintains relations with accreditations bodies like All India Council for Technical Education (AICTE) and State Board of Technical Education (SBTETAP).
Campus details
The institute is located in Malavya Nagar in Gudur, Tirupati district. It has a 10-acre campus with hostel facilities for students. It offers only a Diploma in Ceramic Technology, a three-and-a-half-year sandwich course that includes one year of in-plant training. The intake for the course is 60. Students are admitted through the AP POLYCET entrance exam, with seats allotted through AP POLYCET counselling.
Campus Activities
Cultural activities are conducted in the second semester of the year for the annual college day function. Various sports and games are also held on the campus grounds, and the institute takes part in the Inter Polytechnic Sports and Games Meet (IPSGM) every year.
Culture of India
Indian traditions
Ceramic art
Ceramic engineering
Education in Andhra Pradesh
1952 establishments in India
Universities and colleges established in 1952 | Government Institute of Ceramic Technology | [
"Engineering"
] | 302 | [
"Ceramic engineering"
] |
53,065,847 | https://en.wikipedia.org/wiki/3D%20body%20scanning | 3D body scanning is an application of various technologies such as structured-light 3D scanner, 3D depth sensing, stereoscopic vision and others for ergonomic and anthropometric investigation of the human form as a point-cloud. The technology and practice within research has found 3D body scanning measurement extraction methodologies to be comparable to traditional anthropometric measurement techniques.
Applications
While the technology is still developing in its application, the technology has regularly been applied in the areas of:
Adapted performance sportswear
Fashion design (e.g. garments, accessories)
3D printed figurines (3D selfies)
3D morphometric evaluation (i.e. for weight-loss purposes)
Ergonomic body measurement
3D body measurement
Body shape classification
Comparison of changes in body positions
However, despite the potential for the technology to have an impact in made-to-measure and mass customisation of items with ergonomic properties, 3D body scanning has yet to reach an early adopter or early majority stage of innovation diffusion. This is in part due to the lack of ergonomic theory relating to how to identify key landmarks on the body morphology. The suitability of 3D body scanning is also context dependent, as the measurements taken and the precision of the machine are highly relative to the task in hand rather than being absolute. Additionally, a key limitation of 3D body scanning has been the upfront cost of the equipment and the skills required to collect data and apply it to scientific and technical fields. However, the utilization of depth cameras on recent smartphones helps reduce the cost of 3D scans; one example is the recent free face scan app available on the Apple App Store. For detailed investigation of changes in body dimensions, high-speed (4D) scanning systems have been developed by 3dMD and the Instituto de Biomecánica de Valencia (IBV). Scanning of moving humans with clothing at high resolution (usually 10–60 Hz) is technically possible, as reported multiple times by Chris Lane, Alfredo Ballester and Yordan Kyosev, but the analysis and application of this data remain challenging. The main worldwide events for scientific exchange in the area of 3D and 4D body scanning are the annual 3DBody.Tech Conference and the Clothing-Body-Interaction conference.
Scanning protocol
Although the process has been established for a considerable amount of time, with international conferences held annually for industry and academics (e.g. the International Conference and Exhibition on 3D Body Scanning Technologies), the protocol and process of how to scan individuals has yet to be universally formalised. However, earlier research has proposed a standardised protocol of body scanning, based on research and practice, demonstrating that non-standardised protocol and posture significantly influence body measurements, including the hip.
The standard scanning protocol, however, produces no measurements that meet the precision of manual measurement methods or ISO 20685:2010 tolerances. But through consecutive scanning and a free algorithm called GRYPHON, 97.5% of measurements meet ISO 20685:2010; a precision increase of 327%.
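The precision gain from consecutive scanning rests on a standard statistical fact: averaging k independent noisy measurements shrinks the typical error by a factor of about the square root of k. The simulation below illustrates that idea only; it is not the GRYPHON algorithm itself, and the girth and noise figures are invented for the sketch.

```python
import random
import statistics

random.seed(0)
TRUE_HIP = 100.0   # hypothetical true hip girth in cm (illustrative value)
NOISE_SD = 0.9     # hypothetical per-scan measurement noise in cm

def scan():
    """One simulated body-scan girth measurement with random error."""
    return random.gauss(TRUE_HIP, NOISE_SD)

# Typical error of a single scan vs. the mean of 4 consecutive scans.
single_err = statistics.mean(abs(scan() - TRUE_HIP) for _ in range(2000))
pooled_err = statistics.mean(
    abs(statistics.mean(scan() for _ in range(4)) - TRUE_HIP)
    for _ in range(2000)
)
# pooled_err comes out at roughly half of single_err (sqrt(4) = 2).
```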
See also
A light stage is equipment used for shape, texture, reflectance and motion capture, often with structured light and a multi-camera setup
Mirrorsize, a 3D body measurement technology
4D scanning
References
Scanner
Computer vision
Anthropometry
Ergonomics
Measurement | 3D body scanning | [
"Physics",
"Mathematics",
"Engineering"
] | 664 | [
"Physical quantities",
"Packaging machinery",
"Quantity",
"Measurement",
"Size",
"Artificial intelligence engineering",
"Computer vision"
] |
53,066,523 | https://en.wikipedia.org/wiki/Energy%20Research%20%26%20Social%20Science | Energy Research & Social Science is a peer-reviewed academic journal covering social science research on energy systems and energy and society, including anthropology, economics, geography, psychology, political science, social policy, sociology, science and technology studies and legal studies. It was established in 2014 and is now among the most highly ranked journals on energy and social sciences. It is published by Elsevier. The editor-in-chief is Benjamin K. Sovacool (Aarhus University and University of Sussex).
Abstracting and indexing
The journal is abstracted and indexed in:
Social Sciences Citation Index
Scopus
See also
Climate change adaptation
Climate change mitigation
Energy policy
Renewable energy
References
External links
Academic journals established in 2014
Sociology journals
Elsevier academic journals
Energy and fuel journals
Monthly journals
English-language journals
Energy research | Energy Research & Social Science | [
"Environmental_science"
] | 160 | [
"Environmental science journals",
"Energy and fuel journals"
] |
53,068,752 | https://en.wikipedia.org/wiki/ISCB%20Innovator%20Award | The ISCB Innovator Award is a computational biology prize awarded annually to leading scientists who are within two decades post-degree, who consistently make outstanding contributions to the field, and who continue to forge new directions.
The prize was established by the International Society for Computational Biology (ISCB) in 2016 and is awarded at the Intelligent Systems for Molecular Biology (ISMB) conference. The inaugural recipient was Serafim Batzoglou.
Laureates
2024 Su-in Lee
2023 Dana Pe'er
2022 Núria López Bigas
2021 - Benjamin J. Raphael
2020 - Xiaole Shirley Liu
2019 -
2018 - M. Madan Babu
2017 - Aviv Regev
2016 - Serafim Batzoglou
Other ISCB prizes
Overton Prize - "for outstanding accomplishment to a scientist in the early to mid stage of his or her career"
ISCB Senior Scientist Award - "members of the computational biology community who are more than 12 to 15 years post-degree and have made major contributions to the field of computational biology through research, education, service, or a combination of the three"
See also
List of biology awards
References
Bioinformatics
Biology awards | ISCB Innovator Award | [
"Technology",
"Engineering",
"Biology"
] | 233 | [
"Science and technology awards",
"Biology awards",
"Bioinformatics",
"Biological engineering"
] |
53,069,003 | https://en.wikipedia.org/wiki/Morinaga%20Milk%20arsenic%20poisoning%20incident | The Morinaga Milk arsenic poisoning incident occurred in 1955 in Japan and is believed to have resulted in the deaths of over 100 infants. The incident occurred when arsenic was inadvertently added to dried milk via the use of an industrial-grade monosodium phosphate additive. Thousands of other infants and individuals were also harmed, many of them suffering lingering health effects.
Events
From June 1955, certain infants in western Japan came down with a strange sickness that was characterized by diarrhea or constipation, vomiting, a swollen abdomen, and a darkening of skin color. All of the infants shared the same characteristic: they were bottle-fed powdered milk, which was eventually discovered to be the Morinaga Milk brand. News coverage of the rash of infants suffering and dying from the illness did not initially mention Morinaga Milk and one news reporter claimed that they were discreetly told to stop feeding their infant Morinaga Milk brand powdered milk after the child fell ill. The company was not named until August of that year.
Lawsuit
According to William R. Cullen, Morinaga Milk showed little interest in studies of the surviving affected infants, which led some to boycott the company's products during the 1960s. The company was brought to trial; however, the Tokushima District Court found it not guilty and denied any recompense for the survivors. This decision was reviewed on appeal by the Takamatsu High Court, which reversed the not-guilty verdict on March 31, 1966. After a final appeal was rejected three years later, the Tokushima District Court found Morinaga Milk's head of factory production guilty and sentenced him to three years in prison.
Long term consequences
Since the poisoning multiple studies have been done on the people who survived the milk poisoning incident. Many have reported that they still suffered chronic health problems and studies have also reported "substantially higher rates of sensory deficits and mental retardation in adolescent survivors of the Morinaga poisonings". A study of them in 2006 showed that many of them still suffered chronic health problems. Arsenic is a neurotoxin, so a disproportionate amount of them had developmental delays, epilepsy, and lower IQ scores. They were also below average height. During the civil suit process, the committee selected to make a ruling against the Morinaga company decided that the aftereffects of the victims were not a product of arsenic poisoning. Instead, they insisted that they were due to some previous illness. The outcome of this was that parents were forced to accept their babies’ misfortune as if it was some kind of natural disaster and take responsibility for ongoing treatment. The committee intentionally tricked the public into believing that the aftereffects were the result of an unfortunate natural disaster rather than a perpetrated crime. In April 1974, the Hikari Foundation was established in order to help the Morinaga poisoning victims. By the end of March 1983, there were 13,396 victims of the Morinaga milk poisonings, and 6,389 of these were in communication with the Hikari Foundation. The work of the Foundation centred mostly on the development of the victims' independence as well as on creating social conditions for that development. The members of the Foundation were mostly parents that had been involved with the protection association.
See also
1858 Bradford sweets poisoning
1900 English beer poisoning
Toxic oil syndrome
1985 diethylene glycol wine scandal
2008 Chinese milk poisoning
Kobayashi red yeast rice scandal
References
1955 in Japan
1955 health disasters
Health disasters in Japan
Mass poisoning
Milk
Scandals in Japan
Arsenic poisoning incidents | Morinaga Milk arsenic poisoning incident | [
"Chemistry",
"Environmental_science"
] | 731 | [
"Biology and pharmacology of chemical elements",
"Toxicology",
"Arsenic poisoning incidents"
] |
53,069,051 | https://en.wikipedia.org/wiki/Merrimack%20Pharmaceuticals | Merrimack Pharmaceuticals, Inc. is a pharmaceutical company based in Cambridge, Massachusetts, United States. They specialize in developing drugs for the treatment of cancer.
Merrimack's first FDA-approved drug was approved in 2015; Onivyde, a liposome encapsulated version of irinotecan is used for treating pancreatic adenocarcinoma. It was approved for use in the European Union the following year.
History
Merrimack was founded by a group of scientists from MIT and Harvard University in 2000.
In 2016, Merrimack had 426 full-time employees, 103 of which had an MD or PhD.
In October 2016, CEO Robert Mulroy resigned and the company announced they would be laying off 20% of its employees. In January 2017, interim CEO Gary Crocker resigned and the board of directors appointed Richard Peters to be president and CEO. Peters previously worked at Sanofi and was a faculty member at Harvard University.
In January 2017, French pharmaceutical company Ipsen announced they would be purchasing Onivyde from Merrimack for approximately $1 billion. Following the close of the deal with Ipsen, Merrimack reduced its headcount by about 80%. By May of 2019, Merrimack planned to lay off its entire staff, including the leadership team.
On November 13, 2018, the statistical programming director Songjiang Wang received "six months in prison and one year supervised release" after a guilty verdict was handed down to Wang by a United States District Judge in July 2018 for securities fraud and conspiracy to commit securities fraud. On December 20, 2019, the United States Securities and Exchange Commission also charged Wang with insider trading.
On May 10, 2024, Merrimack announced that the stockholders at a Special Meeting held that day approved the adoption of a Plan of Dissolution. The Board of Directors declared a liquidating cash dividend in the amount of $15.10 per share, expected to be paid on or about May 17, 2024. Merrimack’s Common Stock would continue to trade on NASDAQ through May 17, 2024 and thereafter delist from NASDAQ on May 20, 2024.
Pipeline
Merrimack has four drugs in clinical development.
MM-302 – HER2 targeting antibody-drug conjugate
MM-121 (seribantumab) – anti-HER3 monoclonal antibody
MM-141 (istiratumab) – IGF-1R and HER3 bispecific monoclonal antibody
MM-151 – anti-EGFR mixture of monoclonal antibody
References
Pharmaceutical companies of the United States
Life sciences industry
Health care companies based in Massachusetts
Pharmaceutical companies established in 2000
Companies based in Cambridge, Massachusetts
2000 establishments in Massachusetts
Companies formerly listed on the Nasdaq
2012 initial public offerings | Merrimack Pharmaceuticals | [
"Biology"
] | 576 | [
"Life sciences industry"
] |
53,069,342 | https://en.wikipedia.org/wiki/Iota%20Fornacis | The Bayer designation ι Fornacis (Iota Fornacis, ι For) is shared by two stars in the constellation Fornax:
ι1 Fornacis
ι2 Fornacis
Fornacis, Iota
Fornax | Iota Fornacis | [
"Astronomy"
] | 50 | [
"Fornax",
"Constellations"
] |
53,069,461 | https://en.wikipedia.org/wiki/Delta%20Gruis | The Bayer designation δ Gruis (Delta Gruis) is shared by two stars in the constellation Grus:
δ1 Gruis
δ2 Gruis
Grus (constellation)
Gruis, Delta | Delta Gruis | [
"Astronomy"
] | 41 | [
"Grus (constellation)",
"Constellations"
] |
53,069,477 | https://en.wikipedia.org/wiki/Sigma%20Gruis | The Bayer designation σ Gruis (Sigma Gruis) refers to 2 distinct star systems in the constellation Grus:
σ1 Gruis
σ2 Gruis
Grus (constellation)
Gruis, Sigma | Sigma Gruis | [
"Astronomy"
] | 42 | [
"Grus (constellation)",
"Constellations"
] |
53,069,847 | https://en.wikipedia.org/wiki/Hambach%20Forest | Hambach Forest () is an ancient forest located near in North Rhine-Westphalia, western Germany, between Cologne and Aachen. It was planned to be cleared as part of the Hambach surface mine by owner RWE AG. There were protests and occupations from 2012 against this, and in 2020 a law was passed to preserve it.
The forest
Hambach Forest is rich in biodiversity and home to 142 species regarded as important for conservation.
The forest has been called "the last remnant of a sylvan ecosystem that has occupied this part of the Rhine River plain between Aachen and Cologne since the end of the last ice age". Only ten percent of Hambach Forest still remains, and the remaining forest is severely threatened by mining for brown coal. Of special interest is the rare Bechstein's bat population, which is strictly protected according to annex II and annex IV of the European Habitats Directive.
An Environmental Impact Assessment study has never been conducted. The in Cologne denied the necessity of such a study in November 2017 because the permission for the mining operations was given in the 1970s, long before Environmental Impact Assessment studies became mandatory.
Lignite mining
The area is part of the Rhenish Lignite Mining Area, and the Hambach surface mine is the largest open pit mine in Germany, as of 2018. RWE AG has owned the land since the 1960s or earlier and held an official permit to clear forests in the area since the 1970s. The company repeatedly argued that Hambach Forest must be cleared to ensure future energy supply. RWE spokesperson Guido Steffen stated “I’ve known Hambach for decades, before the mine. It was a great forest ... It’s a pity it must be logged. We’re not doing this because we have fun logging trees, but out of economic necessity. Germany needs energy.”
The usual process is to excavate the lignite/brown coal beneath it with huge excavator machines and then burn it in steam-electric power generation. The 2018 map to the right shows the extent of excavation activities until then. On this map, Hambach Forest (around abandoned old Morschenich) is not displayed at all, but under a hatching meaning "future area of operations".
First occupations by environmentalists (2012–2014)
Since 2012, Hambach Forest had been a political standpoint for environmentalists who protested against the German energy company RWE AG because of the open-pit Hambach surface mine neighboring the site. At , the mine is the largest of its kind in Europe.
An area within the forest was occupied by those opposing the clearance for lignite extraction. They sought to close the mine and save the remaining sections of the forest which are under threat of being cut down to allow the expansion of the mine.
The first occupation lasted from April to November 2012. A second occupation started in September 2013 and lasted until March 2014, followed by a third occupation from April to October 2014.
Fourth occupation (2015–2018)
The fourth occupation period started in 2015 and lasted until 2018. It involved a settlement with around two dozen tree houses and numerous road barricades. The barricades were erected to prevent mining company and police vehicles from entering.
BUND lawsuit and court order
Cutting seasons last from 1 October until the end of February and usually 70–80 hectares are cleared in each period. The tree cutting operations in the 2017/2018 cutting season ended after just two days in November 2017, after the in Münster ordered a halt. According to BUND, the environmental protection association that filed the corresponding lawsuit, Hambach Forest, with its common oak, hornbeam and lily of the valley populations, is a habitat of type 9160 of annex I of the European Habitats Directive (Council Directive 92/43/EEC of 21 May 1992).
Of special interest in this lawsuit was the Bechstein's bat, which is strictly protected according to annex II and annex IV of the European Habitats Directive.
2018 arrests of activists
On 22 January 2018, nine Hambach Forest activists were arrested for resisting a barricade eviction. All of the arrested refused to give any details about their identities and remained unknown in pretrial detention and also in the courtroom. One activist, who was freed from a lock-on on a tripod, was sentenced to six months on parole after a pretrial detention of 67 days in . Two activists were released after 52 days in pretrial detention in Cologne-Ossendorf jail after a medical examination revealed that they were likely under 21 years old and should therefore be processed under juvenile law.
A 22-year-old activist from Australia joined the occupation in March 2018 and planned to stay for two weeks in order to take part in a treehouse-building workshop. She was arrested on 19 March, one week after her arrival, after she was identified as being part of a group from whom firecrackers were thrown in the direction of police officers. She decided not to give any personal details and tried to stay anonymous. As a consequence, she was taken into pretrial detention (as "Unknown Person III" [UP III]), thereby missing her flight back home to Australia one week later. According to the prosecutor she was identified one day before her trial, which took place on 31 July 2018. She was sentenced to nine months in prison, without parole. She was released after a court hearing on 4 October.
Police clearing the tree houses (September 2018)
On 13 September 2018, a large-scale police operation, initiated by the North Rhine-Westphalian Ministry of Construction, began to evict more than fifty tree houses, some of which had existed for up to six years, on the grounds that they did not comply with fire safety regulation standards.
Protecting the life and limb of the tree house occupants was announced as an important goal of the operation.
Journalist falls and dies
As evictions continued, on 19 September 2018 the 27-year-old artist, blogger and journalist Steffen Meyn fell from a walkway at a height of 15 meters in the treehouse village of Beechtown and died. Meyn had been working on a long-term documentary project about the activities in the Hambach Forest. Immediate resuscitation efforts failed.
The eviction of tree houses was stopped immediately after that incident by Herbert Reul, North Rhine-Westphalia's interior minister. He said "We cannot just proceed as normal — at least I can't."
The eviction proceedings resumed on 23 September 2018. According to a police statement all 86 tree houses had been evicted and destroyed on 2 October 2018.
Police clearance ruled illegal
A Cologne court ruled on 8 September 2021 that the eviction had been carried out under a false pretext and was therefore illegal. According to the court, the police operation, as authorized by the North Rhine-Westphalia state government, had “pretended” to enforce fire protection rules when the real aim was to clear the protest camp.
Fifth occupation (2018–2020)
Court order stops clearance again (October 2018)
On 5 October 2018 the Higher Administrative Court of Münster ruled that the clearance of Hambach Forest by RWE had to stop immediately until evidence brought by BUND could be evaluated. The evidence in question concerned the threat to the local Bechstein's bat population. The final court decision was expected for 2020. Activists started to build new tree houses again.
Large demonstration (October 2018)
On 6 October 2018 there was a large demonstration "" (English: "Save the forest – Stop coal!") near Hambach Forest. It was organized by BUND, Campact, Greenpeace, NaturFreunde Deutschlands (Friends of Nature), the local initiatives and (AbL), and others. Originally planned for people, and with people anticipated before the event was temporarily forbidden two days earlier, there were participants according to the organizers; the police acknowledged some to . The demonstrators celebrated the recent court decision in a peaceful festival atmosphere with many speeches, demanding an end to the use of coal to generate power. Participating speakers included (NaturFreunde Deutschlands), Jens Sannig (pastor), Ulf Allhoff-Cramer (Detmold farmer), (Buirer für Buir), (BUND), Martin Kaiser (Greenpeace), Mamadou Mbodji (NaturFreunde Internationale), Helene Nietert (Camp for Future), (Campact), (BMU), Annalena Baerbock (Bündnis 90/Die Grünen), Bernd Riexinger (Die Linke), Michael Zobel (forest educator), Ingo Bajerke (), (Naturfreunde), and Milan Schwarze (Ende Gelände). Various musicians supported the event with live performances including , Revolverheld, , Die Höchste Eisenbahn, , , Davide Martello, and Piri-Piri.
2019
When Greta Thunberg was awarded the Golden Camera Award in March 2019, she dedicated the prize to those protecting the Hambach Forest. She then visited the site, saying "It makes me incredibly sad, to see all this destruction, in this area that used to be a forest ecosystem, and I feel sorry for the people who have to move."
2020
In January 2020, the preservation of the Hambach Forest was agreed at a top-level meeting of the German government and the four federal states affected by the coal phase-out. The "Roadmap for Coal Phase-out" law was passed in July.
Gallery
See also
Ende Gelände 2017
Ende Gelände 2018
September 2019 climate strikes
Arnold of Arnoldsweiler
Deforestation
Direct action
Environmental protection
Sophienhöhe
Forest in Germany
Forest protection
Akbelen Forest, a forest in Turkey being cut down to make way for a lignite mine
Fossil fuel phase-out
Commission on Growth, Structural Change and Employment
Lützerath bleibt!
References
External links
Finite: The Climate of Change - A 2022 documentary by Rich Felgate about the climate-justice movement, treehouses, activism and, among others, the Hambach Forest [1:39:16]
Autonomism
Environmental issues in Germany
Forests and woodlands of North Rhine-Westphalia
Nonviolent occupation
Old-growth forests
Squats in Germany
Important Bird Areas of Germany | Hambach Forest | [
"Biology"
] | 2,097 | [
"Old-growth forests",
"Ecosystems"
] |
64,271,048 | https://en.wikipedia.org/wiki/Oracle%20complexity%20%28optimization%29 | In mathematical optimization, oracle complexity is a standard theoretical framework to study the computational requirements for solving classes of optimization problems. It is suitable for analyzing iterative algorithms which proceed by computing local information about the objective function at various points (such as the function's value, gradient, Hessian etc.). The framework has been used to provide tight worst-case guarantees on the number of required iterations, for several important classes of optimization problems.
Formal description
Consider the problem of minimizing some objective function (over some domain ), where is known to belong to some family of functions . Rather than direct access to , it is assumed that the algorithm can obtain information about via an oracle , which given a point in , returns some local information about in the neighborhood of . The algorithm begins at some initialization point , uses the information provided by the oracle to choose the next point , uses the additional information to choose the following point , and so on.
To give a concrete example, suppose that the domain is $\mathbb{R}^d$ (the $d$-dimensional Euclidean space), and consider the gradient descent algorithm, which initializes at some point $x_1$ and proceeds via the recursive equation
$x_{t+1} = x_t - \eta \nabla f(x_t)$,
where $\eta$ is some step size parameter. This algorithm can be modeled in the framework above, where given any $x_t$, the oracle returns the gradient $\nabla f(x_t)$, which is then used to choose the next point $x_{t+1}$.
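This oracle view of gradient descent can be sketched in code: wrap a gradient routine so that every query is counted, and charge the algorithm one oracle call per iteration. The counting wrapper and the quadratic test function below are illustrative choices for the sketch, not part of the formal framework.

```python
def first_order_oracle(f_grad):
    """Wrap a gradient function as an oracle that counts its queries."""
    calls = {"n": 0}
    def oracle(x):
        calls["n"] += 1
        return f_grad(x)
    return oracle, calls

def gradient_descent(oracle, x0, eta, steps):
    """Iterate x <- x - eta * grad f(x), using only oracle access to f."""
    x = x0
    for _ in range(steps):
        g = oracle(x)
        x = [xi - eta * gi for xi, gi in zip(x, g)]
    return x

# Test function f(x) = ||x||^2 / 2, so grad f(x) = x; minimizer is the origin.
oracle, calls = first_order_oracle(lambda x: list(x))
x = gradient_descent(oracle, x0=[4.0, -2.0], eta=0.5, steps=20)
# The oracle complexity of this run is the query count: calls["n"] == 20,
# and the iterate x has contracted toward the minimizer at the origin.
```

The query counter is exactly the quantity that oracle complexity lower and upper bounds are stated in terms of: how many such calls any algorithm of this form needs before it reaches the desired optimization criterion.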
In this framework, for each choice of function family and oracle , one can study how many oracle calls/iterations are required, to guarantee some optimization criterion (for example, ensuring that the algorithm produces a point such that for some ). This is known as the oracle complexity of this class of optimization problems: Namely, the number of iterations such that on one hand, there is an algorithm that provably requires only this many iterations to succeed (for any function in ), and on the other hand, there is a proof that no algorithm can succeed with fewer iterations uniformly for all functions in .
The oracle complexity approach is inherently different from computational complexity theory, which relies on the Turing machine to model algorithms, and requires the algorithm's input (in this case, the function) to be represented as strings of bits in memory. Instead, the algorithm is not computationally constrained, but its access to the function is assumed to be constrained. This means that on the one hand, oracle complexity results only apply to specific families of algorithms which access the function in a certain manner, and not to any algorithm as in computational complexity theory. On the other hand, the results apply to most if not all iterative algorithms used in practice, do not rely on any unproven assumptions, and lead to a nuanced understanding of how the function's geometry and the type of information used by the algorithm affect practical performance.
Common settings
Oracle complexity has been applied to quite a few different settings, depending on the optimization criterion, function class , and type of oracle .
In terms of optimization criterion, by far the most common one is finding a near-optimal point, namely making for some small . Some other criteria include finding an approximately-stationary point (), or finding an approximate local minima.
There are many function classes that have been studied. Some common choices include convex vs. strongly-convex vs. non-convex functions, smooth vs. non-smooth functions (say, in terms of Lipschitz properties of the gradients or higher-order derivatives), domains with bounded dimension , vs. domains with unbounded dimension, and sums of two or more functions with different properties.
In terms of the oracle , it is common to assume that given a point , it returns the value of the function at , as well as derivatives up to some order (say, value only, value and gradient, value and gradient and Hessian, etc.). Sometimes, one studies more complicated oracles. For example, a stochastic oracle returns the values and derivatives corrupted by some random noise, and is useful for studying stochastic optimization methods. Another example is a proximal oracle, which given a point and a parameter , returns the point minimizing .
Examples of oracle complexity results
The following are a few known oracle complexity results (up to numerical constants), for obtaining optimization error for some small enough , and over the domain where is not fixed and can be arbitrarily large (unless stated otherwise). We also assume that the initialization point satisfies for some parameter , where is some global minimizer of the objective function.
References
Further reading
Mathematical_optimization | Oracle complexity (optimization) | [
"Mathematics"
] | 911 | [
"Mathematical optimization",
"Mathematical analysis"
] |
64,272,049 | https://en.wikipedia.org/wiki/Cytochrome%20P450%20BM3 | Cytochrome P450 BM3 is a prokaryotic cytochrome P450 enzyme, originally from Bacillus megaterium, that catalyzes the hydroxylation of several long-chain fatty acids at the ω–1 through ω–3 positions. This bacterial enzyme belongs to the CYP family CYP102, with the CYP symbol CYP102A1. This CYP family constitutes a natural fusion between the CYP domain and an NADPH-dependent cytochrome P450 reductase.
References
Cytochrome P450
EC 1.6.2
EC 1.14.14
Prokaryote genes | Cytochrome P450 BM3 | [
"Biology"
] | 134 | [
"Prokaryotes",
"Prokaryote genes"
] |
64,272,492 | https://en.wikipedia.org/wiki/Han%20Jeoung-ae | Han Jeoung-ae (; born 8 January 1965) is a South Korean politician and a three-term parliamentarian representing Gangseo District of Seoul at the National Assembly from 2016. Han also served as Minister of Environment under President Moon Jae-in from 2021 to 2022.
Trade unionist
After graduating from Pusan National University, she worked at the Korea Occupational Safety and Health Agency. After completing her doctoral studies in the UK, she returned to the Agency. In 2005 she became the head of its trade union, and in 2006 she became the deputy chair of the Federation of Korean Trade Unions' public servants bureau. She continued to take leadership roles in the trade unions before entering politics for the 2012 general election. From 2006 to 2011, she took on multiple roles in formulating labor-related policy, serving as a trade union representative on sub-committees of the then Economic and Social Development Commission (now the Economic, Social and Labor Council) and on the highest governing bodies of the National Pension Service and the National Health Insurance Service.
Parliamentarian
In the 2012 general election, she was placed as the number 11 of the party's proportional representation list. In the 2016 general election, she defeated the former Gangseo District mayor from the opposition party.
She has taken multiple roles in her party and its preceding parties including one of members its Supreme Council, its deputy floor leader and its spokesperson as well as deputy chair and senior deputy chair of its Policy Planning Committee.
In 2018 she received Lush Prize in lobbying category for her legislative work in supporting alternative safety testing to animal testing.
In 2020 she was elected as the chair of National Assembly's Health and Welfare Committee responsible for scrutinising Ministry of Health and Welfare, Ministry of Food and Drug Safety and related agencies. In August 2020 newly elected leader of her party, Lee Nak-yeon, appointed her as the chair of party's Policy Planning Committee and she subsequently resigned from the elected chair of the Health and Welfare Committee. In September Lee appointed her and Yang Hyang-ja as the co-deputy chair of party's K-New Deal Committee led by party floor leader Kim Tae-nyeon.
Minister
President Moon Jae-in nominated Han as his next Minister of Environment on 30 December 2020. Democratic Party leader Lee Nak-yon replaced Han with Hong Ik-pyo as chair of the party's Policy Planning Committee.
Education
Han holds two degrees - a bachelor in environmental engineering from Pusan National University and a doctorate in industrial engineering from University of Nottingham. She also completed postgraduate programme in environmental engineering at Pusan National University.
Electoral history
References
Living people
1965 births
Pusan National University alumni
Alumni of the University of Nottingham
Members of the National Assembly (South Korea)
Democratic Party of Korea politicians
21st-century South Korean women politicians
21st-century South Korean politicians
South Korean trade union leaders
Environmental engineers
Women government ministers of South Korea
Environment ministers of South Korea
Women members of the National Assembly (South Korea) | Han Jeoung-ae | [
"Chemistry",
"Engineering"
] | 593 | [
"Environmental engineers",
"Environmental engineering"
] |
64,273,672 | https://en.wikipedia.org/wiki/Mammalian%20vision | Mammalian vision is the process of mammals perceiving light, analyzing it and forming subjective sensations, on the basis of which the animal's idea of the spatial structure of the external world is formed. Responsible for this process in mammals is the visual sensory system, the foundations of which were formed at an early stage in the evolution of chordates. Its peripheral part is formed by the eyes, the intermediate (by the transmission of nerve impulses) - the optic nerves, and the central - the visual centers in the cerebral cortex.
The recognition of visual stimuli in mammals is the result of the joint work of the eyes and the brain. At the same time, a significant part of the visual information is processed already at the receptor level, which makes it possible to greatly reduce the amount of such information received by the brain. Elimination of redundancy in the amount of information is inevitable: while the amount of information delivered to the receptors of the visual system is measured in millions of bits per second (in humans - about 1 bits/s), the capacity of the nervous system to process it is limited to tens of bits per second.
The organs of vision in mammals are, as a rule, well developed, although they are of less importance in mammals' lives than in those of birds: mammals usually pay little attention to immobile objects, so even cautious animals such as a fox or a hare may come close to a human who stands motionless. The size of the eyes in mammals is relatively small; in humans, the eyes account for about 1% of the mass of the head, while in a starling they reach 15%. Nocturnal animals (for example, tarsiers) and animals that live in open landscapes have larger eyes. The vision of forest animals is not as sharp, and in burrowing underground species (moles, gophers, zokors) the eyes are reduced to a greater extent, in some cases (marsupial moles, mole rats, blind mole) even being covered by a skin membrane.
Mammalian eye
Like other vertebrates, the mammalian eye develops from the anterior brain vesicle and has a rounded shape (eyeball).
Literature
Vision by taxon
Mammal anatomy
Animal physiology | Mammalian vision | [
"Biology"
] | 445 | [
"Animals",
"Animal physiology"
] |
64,274,563 | https://en.wikipedia.org/wiki/Quasi-ultrabarrelled%20space | In functional analysis and related areas of mathematics, a quasi-ultrabarrelled space is a topological vector spaces (TVS) for which every bornivorous ultrabarrel is a neighbourhood of the origin.
Definition
A subset B0 of a TVS X is called a bornivorous ultrabarrel if it is a closed, balanced, and bornivorous subset of X and if there exists a sequence (Bi)i≥1 of closed, balanced, and bornivorous subsets of X such that Bi+1 + Bi+1 ⊆ Bi for all i = 0, 1, ....
In this case, (Bi)i≥1 is called a defining sequence for B0.
A TVS X is called quasi-ultrabarrelled if every bornivorous ultrabarrel in X is a neighbourhood of the origin.
Properties
A locally convex quasi-ultrabarrelled space is quasi-barrelled.
Examples and sufficient conditions
Ultrabarrelled spaces and ultrabornological spaces are quasi-ultrabarrelled.
Complete and metrizable TVSs are quasi-ultrabarrelled.
See also
Barrelled space
Countably barrelled space
Countably quasi-barrelled space
Infrabarreled space
Ultrabarrelled space
Uniform boundedness principle#Generalisations
References
Topological vector spaces | Quasi-ultrabarrelled space | [
"Mathematics"
] | 254 | [
"Topological vector spaces",
"Vector spaces",
"Space (mathematics)"
] |
64,275,619 | https://en.wikipedia.org/wiki/TMED5 | Transmembrane emp24 domain-containing protein 5 is a protein that in humans is encoded by the TMED5 gene.
Gene
General properties
TMED5 (transmembrane emp24 domain-containing protein 5) is also known as p28, p24g2, and CGI-100. The human gene spans 30,775 base pairs over 4 exons and 3 introns for transcript variant 1, 5 exons and 4 introns for transcript variant 2, and it is located on the minus strand of chromosome 1, at 1p22.1.
Expression
TMED5 has ubiquitous expression with transcripts detected in 246 tissues. Androgen deprivation led to lower expression in mice splenocytes compared to the control. Human dendritic cells infected with Chlamydia pneumoniae showed an absence of TMED5 expression compared to uninfected dendritic cells which had moderate expression.
mRNA transcript
TMED5 has two coding transcript variants and one non-coding transcript variant produced by alternative splicing. Isoform 1 has 4 exons and encodes a protein of 229 amino acids. Isoform 2 has 5 exons and, owing to an additional exon that causes a frameshift, encodes a protein of 193 amino acids with a shorter C-terminus.
Protein
General properties
TMED5 contains a signal peptide. After cleavage of the signal peptide, TMED5 isoform 1 is composed of 202 amino acids and has a molecular weight of ~23 kDa. The mature form of isoform 2 is composed of 166 amino acids and has a molecular weight of ~19 kDa. Both isoforms have an isoelectric point of approximately 4.6.
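The masses quoted above can be sanity-checked from the residue counts. The sketch below is an illustration (not from the source) that uses the common rule of thumb of ~110 Da per amino-acid residue; the exact mass depends on composition.

```python
# Rough sanity check of the quoted molecular weights, assuming an
# average amino-acid residue mass of ~110 Da (a rule of thumb; the
# true value depends on the actual sequence composition).
AVG_RESIDUE_MASS_DA = 110

def approx_mass_kda(n_residues: int) -> float:
    """Approximate protein mass in kDa from residue count."""
    return n_residues * AVG_RESIDUE_MASS_DA / 1000

isoform1 = approx_mass_kda(202)  # mature isoform 1: 202 residues
isoform2 = approx_mass_kda(166)  # mature isoform 2: 166 residues
print(isoform1, isoform2)  # roughly 22 and 18 kDa
```

The estimates land close to the reported ~23 kDa and ~19 kDa, as expected for an average-composition protein.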
Composition
Compared to the reference set of human proteins, TMED5 has fewer alanine and proline residues but more aspartic acid and phenylalanine residues. TMED5 isoform 1 has one hydrophobic segment that corresponds with its transmembrane region.
Domains and motifs
TMED5 isoform 1 is a single-pass transmembrane protein and is composed of a lumenal domain, one transmembrane (helical) domain, and a cytoplasmic domain.
TMED5 belongs to the emp24/gp25L/p24 (GOLD) family of proteins.
TMED5 contains a di-lysine motif and predicted NLS in its cytoplasmic tail.
Structure
The structure of TMED5 isoform 1 consists of beta strands making up the lumenal region, disparate coil-coiled regions, alpha helices making up the transmembrane domain, and alpha helices making up some of the cytoplasmic domain.
Post-translational modifications
TMED5 has two predicted phosphorylation sites in the cytosolic region, Ser227 and Thr229.
Localization
TMED5's predicted location is in the plasma membrane, with an extracellular N-terminus and intracellular C-terminus. TMED5's localization is predicted to be cytoplasmic, but has been found in some tissues to be located in the nucleus.
Interacting proteins
The following table provides a list of proteins most likely to interact with TMED5. Not shown in the table are Wnt family proteins which are known to interact with the p24 protein family.
Function and clinical significance
TMED5 is a part of the p24 protein family, whose general function is protein trafficking in the secretory pathway. TMED5 is thought to be necessary for the formation of the Golgi ribbon.
Glycosylphosphatidylinositol-anchored proteins (GPI-AP) depend on p24 cargo receptors for transport from the ER to the Golgi. Knockdown of p24γ2 (a mouse ortholog of TMED5) in mice resulted in impaired transport of GPI-AP. The study concluded that the α-helical region of p24γ2 binds GPI which is necessary to incorporate it into COPII transport vesicles.
TMED5 is reported to be necessary for the secretion of Wnt ligands. TMED5 has been found to interact with WNT7B, activating the canonical WNT-CTNNB1/β-catenin signaling pathway. This pathway is linked to numerous cancers because upregulation of the Wnt/β-catenin signaling pathway leads to cytosolic accumulation of β-catenin, promoting cellular proliferation.
Research has identified bladder cancer to have a common chromosomal amplification at 1p21-22 and showed significant upregulation of TMED5.
Evolution
Homology
Paralogs
TMED5 paralogs include TMED1, TMED2, TMED3, TMED4, TMED6, TMED7, TMED8, TMED9, and TMED10. All paralogs share the conserved transmembrane domain and contain the characteristic GOLD domain as included in the emp24/gp25L/p24 family/GOLD family proteins.
Orthologs
TMED5 is found to be conserved in vertebrates, invertebrates, plants and fungi, and there are 243 known organisms that have orthologs with the gene. The following table provides a sample of the ortholog space of TMED5.
References
Proteins | TMED5 | [
"Chemistry"
] | 1,106 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
64,276,248 | https://en.wikipedia.org/wiki/The%20Salisbury%20Poisonings | The Salisbury Poisonings is a fact-based drama television series, starring Anne-Marie Duff, Rafe Spall and Annabel Scholey which portrays the 2018 Novichok poisonings and decontamination crisis in Salisbury, England, and the subsequent Amesbury poisonings. The series was broadcast in three parts on BBC One in June 2020, and has been shown in four parts elsewhere. It was created by Adam Patterson and Declan Lawn for Dancing Ledge Productions.
Synopsis
On 4 March 2018, emergency services receive a call to attend to Sergei and Yulia Skripal who have been found unconscious on a park bench in Salisbury city centre. Medical practitioners are initially puzzled by their illness, and police and the local public health department become involved. A national emergency is precipitated when it is learned that Skripal is a former Russian military intelligence officer who acted as a double agent for the UK's intelligence services during the 1990s and early 2000s. It emerges that he and his daughter were poisoned with a highly potent Novichok agent which was smeared on the front-door handle of their residence. The docudrama also deals with the incidental exposure of several other persons, including a police officer and an uninvolved couple who found a perfume bottle containing the nerve agent which they administered to themselves. At the end of the series, the real-life people involved in the story are pictured returning to the scene, and some film is shown of Dawn Sturgess, the only person to die from their exposure to the Novichok.
Cast
Anne-Marie Duff as Tracy Daszkiewicz
William Houston as Ted Daszkiewicz
Rafe Spall as DS Nick Bailey
Annabel Scholey as Sarah Bailey
Darren Boyd as Supt Dave Minty
Nigel Lindsay as DCC Paul Mills
Amber Agar as DI Lata Mishra
Wayne Swann as Sergei Skripal
Jill Winternitz as Yulia Skripal
Johnny Harris as Charlie Rowley
Barry Aird as Matthew Rowley
MyAnna Buring as Dawn Sturgess
Stella Gonet as Caroline Sturgess
Melanie Gutteridge as Claire Sturgess
Ron Cook as Stan Sturgess
Mark Addy as Ross Cassidy
Clare Burt as Mo Cassidy
Duncan Pow as Dr. James Haslam
Emma Stansfield as Nurse Emma Black
Shereen Martin as Dr. Rebecca Jenner
Jonathan Slinger as Prof. Tim Atkins
Andrew Brooke as Alistair Cunningham
Chris Wilson as Police Officer
Kimberley Nixon as Hannah Mitchell
Michael Shaeffer as Stephen Kemp
Remy Beasley as Georgia
Sophia Ally as Gracie Sturgess
Judah Cousin as Toby Daszkiewicz
Stephanie Gil as Ellie Bailey
Kiera Thompson as Annie Bailey
Episodes
Reception
Writing in The Guardian, Lucy Mangan praised the show's script and direction as being "admirably restrained", and compared the calm actions of its characters facing a "new normal" to the reactions of the public during the COVID-19 pandemic.
Release and distribution
Worldwide distribution is handled by Fremantle. In June 2020 it was announced that AMC signed an agreement with Fremantle to exclusively broadcast the show in the United States. The AMC broadcast is slated to premiere 25 January 2021.
The series was shown over four nights on SBS TV in Australia from 24 August 2020.
In December 2021 the series was re-released on Netflix and Disney Plus.
See also
Poisoning of Sergei and Yulia Skripal
References
External links
2010s in Wiltshire
2020 British television series debuts
2020 British television series endings
2020s British crime drama television series
2020s British television miniseries
BBC crime drama television shows
British English-language television shows
Russia–United Kingdom relations
Salisbury
Science docudramas
Television shows set in Wiltshire
Toxicology in the United Kingdom | The Salisbury Poisonings | [
"Environmental_science"
] | 733 | [
"Toxicology in the United Kingdom",
"Toxicology"
] |
64,276,399 | https://en.wikipedia.org/wiki/ARL6IP6 | ADP ribosylation factor like GTPase 6 interacting protein 6 is a protein that in the humans is encoded by the ARL6IP6 gene. It spans from 152,717,893 to 152,761,253 on the plus strand.
Gene
General properties
ARL6IP6 is also known as Phosphonoformate Immuno-Associated Protein 1. It has 43,361 bases and 11 exons and is located on the long arm of chromosome 2, at 2q23.3, in humans. In humans there are three upstream genes (PRPF40A, FMNL2 and STAM2) and three downstream genes (GALNT13, KCNJ3, NR4A2) that define the identity of this genomic region.
Promoter
Expression
References
Human proteins | ARL6IP6 | [
"Chemistry"
] | 168 | [
"Biochemistry stubs",
"Protein stubs"
] |
64,278,078 | https://en.wikipedia.org/wiki/Metrizable%20topological%20vector%20space | In functional analysis and related areas of mathematics, a metrizable (resp. pseudometrizable) topological vector space (TVS) is a TVS whose topology is induced by a metric (resp. pseudometric). An LM-space is an inductive limit of a sequence of locally convex metrizable TVS.
Pseudometrics and metrics
A pseudometric on a set X is a map d : X × X → [0, ∞) satisfying the following properties:
d(x, x) = 0 for all x ∈ X;
Symmetry: d(x, y) = d(y, x) for all x, y ∈ X;
Subadditivity: d(x, z) ≤ d(x, y) + d(y, z) for all x, y, z ∈ X.
A pseudometric d is called a metric if it satisfies:
Identity of indiscernibles: for all x, y ∈ X, if d(x, y) = 0 then x = y.
Ultrapseudometric
A pseudometric d on X is called an ultrapseudometric or a strong pseudometric if it satisfies:
Strong/Ultrametric triangle inequality: d(x, z) ≤ max{d(x, y), d(y, z)} for all x, y, z ∈ X.
Pseudometric space
A pseudometric space is a pair (X, d) consisting of a set X and a pseudometric d on X such that X's topology is identical to the topology on X induced by d. We call a pseudometric space (X, d) a metric space (resp. an ultrapseudometric space) when d is a metric (resp. an ultrapseudometric).
Topology induced by a pseudometric
If d is a pseudometric on a set X then the collection of open balls
B(z, r) := {x ∈ X : d(x, z) < r},
as z ranges over X and r ranges over the positive real numbers,
forms a basis for a topology on X that is called the d-topology or the pseudometric topology on X induced by d.
Convention: If (X, d) is a pseudometric space and X is treated as a topological space, then unless indicated otherwise, it should be assumed that X is endowed with the topology induced by d.
Pseudometrizable space
A topological space (X, τ) is called pseudometrizable (resp. metrizable, ultrapseudometrizable) if there exists a pseudometric (resp. metric, ultrapseudometric) d on X such that τ is equal to the topology induced by d.
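The axioms above can be spot-checked numerically on a finite sample of points. The following sketch is illustrative only (the function names are hypothetical); passing on a sample does not, of course, prove the axioms hold everywhere.

```python
import itertools

def is_pseudometric_on_sample(d, points, tol=1e-9):
    """Spot-check the pseudometric axioms (d(x, x) = 0, symmetry,
    triangle inequality) on a finite sample of points."""
    for x in points:
        if abs(d(x, x)) > tol:
            return False
    for x, y in itertools.product(points, repeat=2):
        if abs(d(x, y) - d(y, x)) > tol:
            return False
    for x, y, z in itertools.product(points, repeat=3):
        if d(x, z) > d(x, y) + d(y, z) + tol:
            return False
    return True

# A genuine pseudometric on R^2 that is not a metric: it ignores the
# second coordinate, so distinct points can be at distance 0.
d = lambda p, q: abs(p[0] - q[0])
sample = [(0, 0), (1, 5), (1, -3), (2, 2)]
print(is_pseudometric_on_sample(d, sample))  # True
print(d((1, 5), (1, -3)))                    # 0: distinct points at distance 0
```

The second print shows why this d fails the identity of indiscernibles and so induces only a pseudometric topology.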
Pseudometrics and values on topological groups
An additive topological group is an additive group endowed with a topology, called a group topology, under which addition and negation become continuous operators.
A topology on a real or complex vector space is called a vector topology or a TVS topology if it makes the operations of vector addition and scalar multiplication continuous (that is, if it makes into a topological vector space).
Every topological vector space (TVS) is an additive commutative topological group, but not all group topologies on a vector space are vector topologies.
This is because despite making addition and negation continuous, a group topology on a vector space may fail to make scalar multiplication continuous.
For instance, the discrete topology on any non-trivial vector space makes addition and negation continuous but does not make scalar multiplication continuous.
Translation invariant pseudometrics
If X is an additive group then we say that a pseudometric d on X is translation invariant or just invariant if it satisfies any of the following equivalent conditions:
Translation invariance: d(x + z, y + z) = d(x, y) for all x, y, z ∈ X;
d(x, y) = d(x − y, 0) for all x, y ∈ X.
Value/G-seminorm
If X is a topological group then a value or G-seminorm on X (the G stands for Group) is a real-valued map p : X → ℝ with the following properties:
Non-negative: p ≥ 0;
Subadditive: p(x + y) ≤ p(x) + p(y) for all x, y ∈ X, and p(0) = 0;
Symmetric: p(−x) = p(x) for all x ∈ X;
where we call a G-seminorm a G-norm if it satisfies the additional condition:
Total/Positive definite: If p(x) = 0 then x = 0.
Properties of values
If p is a value on a vector space X then:
p(0) = 0 and p(nx) ≤ n p(x) for all x ∈ X and positive integers n.
The set {x ∈ X : p(x) = 0} is an additive subgroup of X.
Equivalence on topological groups
Pseudometrizable topological groups
An invariant pseudometric that doesn't induce a vector topology
Let X be a non-trivial (i.e. X ≠ {0}) real or complex vector space and let d be the translation-invariant trivial metric on X defined by d(x, x) = 0 and d(x, y) = 1 for all x, y ∈ X with x ≠ y.
The topology τ that d induces on X is the discrete topology, which makes (X, τ) into a commutative topological group under addition but does not form a vector topology on X, because (X, τ) is disconnected while every vector topology is connected.
What fails is that scalar multiplication is not continuous on (X, τ).
This example shows that a translation-invariant (pseudo)metric is not enough to guarantee a vector topology, which leads us to define paranorms and F-seminorms.
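The failure of continuity in this example can be made concrete: under the trivial metric, the scaled points s·x stay at distance 1 from the origin even as the scalars tend to 0. The sketch below is a numerical illustration, not part of the source.

```python
# Numerical illustration: under the trivial (discrete) metric, scalar
# multiplication is not continuous at 0, so the metric topology is not
# a vector topology.
def trivial_metric(x, y):
    return 0.0 if x == y else 1.0

x = 1.0
scalars = [1 / 2**n for n in range(1, 6)]  # s_n -> 0
# If scalar multiplication were continuous, d(s_n * x, 0) would tend
# to 0; instead it equals 1 for every n, since s_n * x is never 0.
distances = [trivial_metric(s * x, 0.0) for s in scalars]
print(distances)  # [1.0, 1.0, 1.0, 1.0, 1.0]
```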
Additive sequences
A collection N of subsets of a vector space is called additive if for every N ∈ N, there exists some U ∈ N such that U + U ⊆ N.
All of the above conditions are consequently necessary for a topology to form a vector topology.
Additive sequences of sets have the particularly nice property that they define non-negative continuous real-valued subadditive functions.
These functions can then be used to prove many of the basic properties of topological vector spaces and also show that a Hausdorff TVS with a countable basis of neighborhoods is metrizable. The following theorem is true more generally for commutative additive topological groups.
Assume that always denotes a finite sequence of non-negative integers and use the notation:
For any integers and
From this it follows that if consists of distinct positive integers then
It will now be shown by induction on that if consists of non-negative integers such that for some integer then
This is clearly true for and so assume that which implies that all are positive.
If all are distinct then this step is done, and otherwise pick distinct indices such that and construct from by replacing each with and deleting the element of (all other elements of are transferred to unchanged).
Observe that and (because ) so by appealing to the inductive hypothesis we conclude that as desired.
It is clear that and that so to prove that is subadditive, it suffices to prove that when are such that which implies that
This is an exercise.
If all are symmetric then if and only if from which it follows that and
If all are balanced then the inequality for all unit scalars such that is proved similarly.
Because is a nonnegative subadditive function satisfying as described in the article on sublinear functionals, is uniformly continuous on if and only if is continuous at the origin.
If all are neighborhoods of the origin then for any real pick an integer such that so that implies
If the set of all form basis of balanced neighborhoods of the origin then it may be shown that for any there exists some such that implies
Paranorms
If X is a vector space over the real or complex numbers then a paranorm on X is a G-seminorm (defined above) p on X that satisfies any of the following additional conditions, each of which begins with "for all sequences x• = (xi) in X and all convergent sequences of scalars s• = (si)":
Continuity of multiplication: if s is a scalar and x ∈ X are such that si → s and p(xi − x) → 0, then p(si xi − s x) → 0.
Both of the conditions:
if si → 0 and if x ∈ X is such that p(xi − x) → 0 then p(si xi) → 0;
if p(xi) → 0 then p(s xi) → 0 for every scalar s.
Both of the conditions:
if p(xi) → 0 and si → s for some scalar s then p(si xi) → 0;
if si → 0 then p(si x) → 0 for every x ∈ X.
Separate continuity:
if si → s for some scalar s then p(si x − s x) → 0 for every x ∈ X;
if s is a scalar, x ∈ X, and p(xi − x) → 0, then p(s xi − s x) → 0.
A paranorm p is called total if in addition it satisfies:
Total/Positive definite: p(x) = 0 implies x = 0.
Properties of paranorms
If p is a paranorm on a vector space X then the map d(x, y) := p(x − y) is a translation-invariant pseudometric on X that defines a vector topology on X.
If p is a paranorm on a vector space X then:
the set {x ∈ X : p(x) = 0} is a vector subspace of X;
p(x + n) = p(x) for all x, n ∈ X with p(n) = 0;
If a paranorm p satisfies p(sx) ≤ |s| p(x) for all x ∈ X and scalars s, then p is absolutely homogeneous (i.e. equality holds) and thus p is a seminorm.
Examples of paranorms
If d is a translation-invariant pseudometric on a vector space X that induces a vector topology τ on X (i.e. (X, τ) is a TVS) then the map p(x) := d(x, 0) defines a continuous paranorm on (X, τ); moreover, the topology that this paranorm p defines on X is τ.
If p is a paranorm on X then so is the map q(x) := p(x)/(1 + p(x)).
Every positive scalar multiple of a paranorm (resp. total paranorm) is again such a paranorm (resp. total paranorm).
Every seminorm is a paranorm.
The restriction of a paranorm (resp. total paranorm) to a vector subspace is a paranorm (resp. total paranorm).
The sum of two paranorms is a paranorm.
If p and q are paranorms on X then so is the map r(x) := inf{p(y) + q(z) : x = y + z with y, z ∈ X}. Moreover, r ≤ p and r ≤ q. This makes the set of paranorms on X into a conditionally complete lattice.
Each of the following real-valued maps are paranorms on :
The real-valued maps and are paranorms on
If x• = (xi)i∈I is a Hamel basis of a vector space X then the real-valued map that sends x = Σi∈I si xi ∈ X (where all but finitely many of the scalars si are 0) to Σi∈I √|si| is a paranorm on X, which satisfies p(sx) = √|s| p(x) for all x ∈ X and scalars s.
The function p(x) := |sin(πx)| + min{2, |x|} is a paranorm on ℝ that is not balanced but is nevertheless equivalent to the usual norm on ℝ. Note that the function x ↦ |sin(πx)| is subadditive.
Let X be a complex vector space and let Xℝ denote X considered as a vector space over ℝ. Any paranorm on X is also a paranorm on Xℝ.
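A standard way to produce a bounded paranorm from a given one is the transform q = p/(1 + p). The sketch below is an illustration (not from the source): it numerically spot-checks subadditivity of q for p = |·| on ℝ and shows that q is bounded by 1.

```python
# Illustration: q(x) = p(x) / (1 + p(x)) turns a paranorm p into a
# bounded paranorm. Here p = absolute value on R; we spot-check the
# subadditivity q(x + y) <= q(x) + q(y) on random pairs.
import random

def q(x):
    p = abs(x)
    return p / (1 + p)

random.seed(0)
ok = all(
    q(x + y) <= q(x) + q(y) + 1e-12
    for x, y in ((random.uniform(-10, 10), random.uniform(-10, 10))
                 for _ in range(10_000))
)
print(ok)          # True
print(q(1e9) < 1)  # True: q is bounded by 1
```

Subadditivity holds because t ↦ t/(1 + t) is concave, increasing, and vanishes at 0, hence subadditive; composing it with a subadditive p preserves subadditivity.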
F-seminorms
If X is a vector space over the real or complex numbers then an F-seminorm on X (the F stands for Fréchet) is a real-valued map p : X → ℝ with the following four properties:
Non-negative: p ≥ 0;
Subadditive: p(x + y) ≤ p(x) + p(y) for all x, y ∈ X;
Balanced: p(ax) ≤ p(x) for all x ∈ X and all scalars a satisfying |a| ≤ 1;
This condition guarantees that each set of the form {x ∈ X : p(x) ≤ r} or {x ∈ X : p(x) < r} for some r > 0 is a balanced set.
For every x ∈ X, p(x/n) → 0 as n → ∞.
The sequence (1/n) can be replaced by any positive sequence converging to zero.
An F-seminorm p is called an F-norm if in addition it satisfies:
Total/Positive definite: p(x) = 0 implies x = 0.
An F-seminorm p is called monotone if it satisfies:
Monotone: p(sx) < p(tx) for all non-zero x ∈ X and all real s and t such that |s| < |t|.
F-seminormed spaces
An F-seminormed space (resp. F-normed space) is a pair (X, p) consisting of a vector space X and an F-seminorm (resp. F-norm) p on X.
If (X, p) and (Z, q) are F-seminormed spaces then a map f : X → Z is called an isometric embedding if q(f(x) − f(y)) = p(x − y) for all x, y ∈ X.
Every isometric embedding of one F-seminormed space into another is a topological embedding, but the converse is not true in general.
Examples of F-seminorms
Every positive scalar multiple of an F-seminorm (resp. F-norm, seminorm) is again an F-seminorm (resp. F-norm, seminorm).
The sum of finitely many F-seminorms (resp. F-norms) is an F-seminorm (resp. F-norm).
If p and q are F-seminorms on X then so is their pointwise supremum x ↦ sup{p(x), q(x)}. The same is true of the supremum of any non-empty finite family of F-seminorms on X.
The restriction of an F-seminorm (resp. F-norm) to a vector subspace is an F-seminorm (resp. F-norm).
A non-negative real-valued function on is a seminorm if and only if it is a convex F-seminorm, or equivalently, if and only if it is a convex balanced G-seminorm. In particular, every seminorm is an F-seminorm.
For any 0 < r < 1, the map on ℝ defined by p(x) := |x|^r
is an F-norm that is not a norm.
If L : X → Z is a linear map and if q is an F-seminorm on Z then q ∘ L is an F-seminorm on X.
Let X be a complex vector space and let Xℝ denote X considered as a vector space over ℝ. Any F-seminorm on X is also an F-seminorm on Xℝ.
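The example x ↦ |x|^r with 0 < r < 1 can be verified numerically. The following sketch (illustrative, not from the source) spot-checks that it is subadditive and balanced but not absolutely homogeneous, which is exactly why it is an F-norm and not a norm.

```python
# Spot-check that p(x) = |x|**r with 0 < r < 1 is subadditive and
# balanced but not absolutely homogeneous.
r = 0.5
p = lambda x: abs(x) ** r

# Subadditivity: |x + y|**r <= |x|**r + |y|**r on a few sample points.
assert all(p(x + y) <= p(x) + p(y) + 1e-12
           for x in (-3.0, 0.5, 2.0) for y in (-1.0, 0.25, 4.0))
# Balanced: p(a * x) <= p(x) whenever |a| <= 1.
assert all(p(a * 2.0) <= p(2.0) + 1e-12
           for a in (-1.0, -0.5, 0.0, 0.3, 1.0))
# Not homogeneous: p(4 * 1) = 2 while 4 * p(1) = 4.
print(p(4 * 1.0), 4 * p(1.0))  # 2.0 4.0
```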
Properties of F-seminorms
Every F-seminorm is a paranorm and every paranorm is equivalent to some F-seminorm.
Every F-seminorm p on a vector space X is a value on X. In particular, p(0) = 0 and p(x) = p(−x) for all x ∈ X.
Topology induced by a single F-seminorm
Topology induced by a family of F-seminorms
Suppose that L is a non-empty collection of F-seminorms on a vector space X and for any finite subset F ⊆ L and any r > 0, let
U(F, r) := ⋂p∈F {x ∈ X : p(x) < r}.
The set of all U(F, r) forms a filter base on X that also forms a neighborhood basis at the origin for a vector topology on X denoted by τL. Each U(F, r) is a balanced and absorbing subset of X. These sets satisfy U(F, r/2) + U(F, r/2) ⊆ U(F, r).
τL is the coarsest vector topology on X making each p ∈ L continuous.
τL is Hausdorff if and only if for every non-zero x ∈ X there exists some p ∈ L such that p(x) > 0.
If T is the set of all continuous F-seminorms on (X, τL) then τT = τL.
If S is the set of all pointwise suprema of non-empty finite subsets of L then S is a directed family of F-seminorms and τS = τL.
Fréchet combination
Suppose that p• = (pi)i≥1 is a family of non-negative subadditive functions on a vector space X.
The Fréchet combination of p• is defined to be the real-valued map
p(x) := Σi≥1 pi(x) / (2^i [1 + pi(x)]).
As an F-seminorm
Assume that p• = (pi)i≥1 is an increasing sequence of seminorms on X and let p be the Fréchet combination of p•.
Then p is an F-seminorm on X that induces the same locally convex topology as the family p• of seminorms.
Since p• is increasing, a basis of open neighborhoods of the origin consists of all sets of the form {x ∈ X : pn(x) < r} as n ranges over all positive integers and r ranges over all positive real numbers.
The translation-invariant pseudometric on X induced by this F-seminorm p is
d(x, y) := Σi≥1 pi(x − y) / (2^i [1 + pi(x − y)]).
This metric was discovered by Fréchet in his 1906 thesis for the spaces of real and complex sequences with pointwise operations.
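The Fréchet combination is easy to compute. The sketch below is an illustration (not from the source) using the coordinate seminorms pi(x) = |x[i]| on finite tuples; each term is less than 2^−i, so the sum converges and is bounded by 1.

```python
# Sketch: the Frechet combination
#   p(x) = sum_i p_i(x) / (2**i * (1 + p_i(x)))
# for the coordinate seminorms p_i(x) = |x[i]| on finite tuples.
def frechet_combination(x):
    total = 0.0
    for i, xi in enumerate(x, start=1):
        pi = abs(xi)
        total += pi / (2**i * (1 + pi))
    return total

# The induced translation-invariant pseudometric d(x, y) = p(x - y).
d = lambda x, y: frechet_combination([a - b for a, b in zip(x, y)])

print(frechet_combination([0, 0, 0]))      # 0.0
print(frechet_combination([1, 1, 1]))      # 0.4375, always < 1
print(d([1, 2, 3], [1, 2, 3]))             # 0.0
```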
As a paranorm
If each pi is a paranorm then so is p, and moreover, p induces the same topology on X as the family p• of paranorms.
This is also true of paranorms such as q(x) := Σi≥1 min{1/2^i, pi(x)}.
Generalization
The Fréchet combination can be generalized by use of a bounded remetrization function.
A bounded remetrization function is a continuous non-negative non-decreasing map R : [0, ∞) → [0, ∞) that has a bounded range, is subadditive (meaning that R(s + t) ≤ R(s) + R(t) for all s, t ≥ 0), and satisfies R(s) = 0 if and only if s = 0.
Examples of bounded remetrization functions include arctan t, tanh t, t ↦ min{t, 1}, and t ↦ t/(1 + t).
If d is a pseudometric (respectively, metric) on X and R is a bounded remetrization function then R ∘ d is a bounded pseudometric (respectively, bounded metric) on X that is uniformly equivalent to d.
Suppose that p• = (pi)i≥1 is a family of non-negative F-seminorms on a vector space X, R is a bounded remetrization function, and r• = (ri)i≥1 is a sequence of positive real numbers whose sum is finite.
Then
p(x) := Σi≥1 ri R(pi(x))
defines a bounded F-seminorm that is uniformly equivalent to the p•.
It has the property that for any net x• in X, p(xa) → 0 if and only if pi(xa) → 0 for every i. p is an F-norm if and only if the pi separate points on X.
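As a concrete illustration (not from the source), take R(t) = t/(1 + t) as the bounded remetrization function and d the usual metric on ℝ; then D = R ∘ d is a bounded metric uniformly equivalent to d.

```python
# Illustration: R(t) = t / (1 + t) is a bounded remetrization function,
# so D = R o d is a bounded metric uniformly equivalent to d.
R = lambda t: t / (1 + t)
d = lambda x, y: abs(x - y)
D = lambda x, y: R(d(x, y))

# Spot-check subadditivity R(s + t) <= R(s) + R(t) on a few values
# (it holds in general because R is concave, increasing, and R(0) = 0).
assert all(R(s + t) <= R(s) + R(t) + 1e-12
           for s in (0.0, 0.5, 2.0) for t in (0.1, 1.0, 10.0))

print(D(0, 10**9) < 1)  # True: D is bounded by 1
print(D(3, 3))          # 0.0
```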
Characterizations
Of (pseudo)metrics induced by (semi)norms
A pseudometric (resp. metric) d is induced by a seminorm (resp. norm) on a vector space X if and only if d is translation invariant and absolutely homogeneous, which means that d(sx, sy) = |s| d(x, y) for all scalars s and all x, y ∈ X, in which case the function p(x) := d(x, 0) is a seminorm (resp. norm) and the pseudometric (resp. metric) induced by p is equal to d.
Of pseudometrizable TVS
If (X, τ) is a topological vector space (TVS) (where note in particular that τ is assumed to be a vector topology) then the following are equivalent:
X is pseudometrizable (i.e. the vector topology τ is induced by a pseudometric on X).
X has a countable neighborhood base at the origin.
The topology on X is induced by a translation-invariant pseudometric on X.
The topology on X is induced by an F-seminorm.
The topology on X is induced by a paranorm.
Of metrizable TVS
If (X, τ) is a TVS then the following are equivalent:
X is metrizable.
X is Hausdorff and pseudometrizable.
X is Hausdorff and has a countable neighborhood base at the origin.
The topology on X is induced by a translation-invariant metric on X.
The topology on X is induced by an F-norm.
The topology on X is induced by a monotone F-norm.
The topology on X is induced by a total paranorm.
Of locally convex pseudometrizable TVS
If (X, τ) is a TVS then the following are equivalent:
X is locally convex and pseudometrizable.
X has a countable neighborhood base at the origin consisting of convex sets.
The topology of X is induced by a countable family of (continuous) seminorms.
The topology of X is induced by a countable increasing sequence (pi)i≥1 of (continuous) seminorms (increasing means that pi ≤ pi+1 for all i).
The topology of X is induced by an F-seminorm of the form
p(x) := Σn≥1 pn(x) / (2^n [1 + pn(x)]), where the pn are (continuous) seminorms on X.
Quotients
Let M be a vector subspace of a topological vector space (X, τ).
If X is a pseudometrizable TVS then so is X/M.
If X is a complete pseudometrizable TVS and M is a closed vector subspace of X then X/M is complete.
If X is a metrizable TVS and M is a closed vector subspace of X then X/M is metrizable.
If p is an F-seminorm on X then the map P : X/M → ℝ defined by
P(x + M) := inf{p(x + m) : m ∈ M}
is an F-seminorm on X/M that induces the usual quotient topology on X/M. If in addition p is an F-norm on X and if M is a closed vector subspace of X then P is an F-norm on X/M.
Examples and sufficient conditions
Every seminormed space (X, p) is pseudometrizable with a canonical pseudometric given by d(x, y) := p(x − y) for all x, y ∈ X.
If (X, d) is a pseudometric TVS with a translation-invariant pseudometric d, then p(x) := d(x, 0) defines a paranorm. However, if d is a translation-invariant pseudometric on the vector space X (without the additional condition that (X, d) is a TVS), then p need not be either an F-seminorm nor a paranorm.
If a TVS has a bounded neighborhood of the origin then it is pseudometrizable; the converse is in general false.
If a Hausdorff TVS has a bounded neighborhood of the origin then it is metrizable.
Suppose X is either a DF-space or an LM-space. If X is a sequential space then it is either metrizable or else a Montel DF-space.
If X is a Hausdorff locally convex TVS then X with the strong topology is metrizable if and only if there exists a countable set B of bounded subsets of X such that every bounded subset of X is contained in some element of B.
The strong dual space of a metrizable locally convex space (such as a Fréchet space) is a DF-space.
The strong dual of a DF-space is a Fréchet space.
The strong dual of a reflexive Fréchet space is a bornological space.
The strong bidual (that is, the strong dual space of the strong dual space) of a metrizable locally convex space is a Fréchet space.
If X is a metrizable locally convex space then its strong dual has one of the following properties if and only if it has all of them: (1) bornological, (2) infrabarreled, (3) barreled.
Normability
A topological vector space is seminormable if and only if it has a convex bounded neighborhood of the origin.
Moreover, a TVS is normable if and only if it is Hausdorff and seminormable.
Every metrizable TVS on a finite-dimensional vector space is a normable, locally convex, complete TVS, being TVS-isomorphic to Euclidean space. Consequently, any metrizable TVS that is not normable must be infinite dimensional.
If X is a metrizable locally convex TVS that possesses a countable fundamental system of bounded sets, then X is normable.
If X is a Hausdorff locally convex space then the following are equivalent:
X is normable.
X has a (von Neumann) bounded neighborhood of the origin.
the strong dual space of X is normable.
and if this locally convex space X is also metrizable, then the following may be appended to this list:
the strong dual space of X is metrizable.
the strong dual space of X is a Fréchet–Urysohn locally convex space.
In particular, if a metrizable locally convex space (such as a Fréchet space) is not normable then its strong dual space is not a Fréchet–Urysohn space and consequently, this complete Hausdorff locally convex space is also neither metrizable nor normable.
Another consequence of this is that if X is a reflexive locally convex TVS whose strong dual is metrizable then the strong dual is necessarily a reflexive Fréchet space, X is a DF-space, both X and its strong dual are necessarily complete Hausdorff ultrabornological distinguished webbed spaces, and moreover, the strong dual is normable if and only if X is normable if and only if X is Fréchet–Urysohn if and only if X is metrizable. In particular, such a space X is either a Banach space or else it is not even a Fréchet–Urysohn space.
Metrically bounded sets and bounded sets
Suppose that (X, d) is a pseudometric space and B ⊆ X.
The set B is metrically bounded or d-bounded if there exists a real number R > 0 such that d(x, y) ≤ R for all x, y ∈ B;
the smallest such R is then called the diameter or d-diameter of B.
If B is bounded in a pseudometrizable TVS X then it is metrically bounded;
the converse is in general false but it is true for locally convex metrizable TVSs.
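Why the converse can fail is easy to see with a bounded metric. The following sketch (an illustration, not from the source) uses D(x, y) = |x − y|/(1 + |x − y|) on ℝ: every set has D-diameter at most 1, yet a set of rapidly growing points is not bounded in the TVS (von Neumann) sense.

```python
# Illustration: under the bounded metric D(x, y) = |x - y| / (1 + |x - y|)
# on R, every set is metrically bounded (D-diameter <= 1), yet a set of
# rapidly growing points is not von Neumann bounded.
D = lambda x, y: abs(x - y) / (1 + abs(x - y))

B = [10**k for k in range(6)]  # 1, 10, ..., 100000
diam = max(D(x, y) for x in B for y in B)
print(diam < 1)  # True: B is metrically bounded

# Von Neumann boundedness would require s * B to shrink into any fixed
# neighborhood of 0 as s -> 0; here, even after scaling by s = 1e-3, the
# sample still contains points of large norm.
s = 1e-3
print(max(abs(s * x) for x in B))  # on the order of 100
```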
Properties of pseudometrizable TVS
Every metrizable locally convex TVS is a quasibarrelled space, bornological space, and a Mackey space.
Every complete metrizable TVS is a barrelled space and a Baire space (and hence non-meager). However, there exist metrizable Baire spaces that are not complete.
If X is a metrizable locally convex space, then the strong dual of X is bornological if and only if it is barreled, if and only if it is infrabarreled.
If X is a complete pseudometrizable TVS and M is a closed vector subspace of X then X/M is complete.
The strong dual of a locally convex metrizable TVS is a webbed space.
If $\tau$ and $\nu$ are vector topologies that each make $X$ into a complete metrizable TVS (i.e. an F-space) and if one of them is coarser than the other, then $\tau = \nu$; this is no longer guaranteed to be true if either of these metrizable TVSs is not complete. Said differently, if $(X, \tau)$ and $(X, \nu)$ are both F-spaces but with different topologies, then neither one of $\tau$ and $\nu$ contains the other as a subset. One particular consequence of this is, for example, that if $(X, \|\cdot\|)$ is a Banach space and $(X, \|\cdot\|_2)$ is some other normed space whose norm-induced topology is finer than (or alternatively, is coarser than) that of $(X, \|\cdot\|)$ (i.e. if $\|\cdot\| \le C \|\cdot\|_2$ or if $\|\cdot\|_2 \le C \|\cdot\|$ for some constant $C > 0$), then the only way that $(X, \|\cdot\|_2)$ can be a Banach space (i.e. also be complete) is if these two norms $\|\cdot\|$ and $\|\cdot\|_2$ are equivalent; if they are not equivalent, then $(X, \|\cdot\|_2)$ can not be a Banach space.
As another consequence, if $(X, \|\cdot\|)$ is a Banach space and $(X, \nu)$ is a Fréchet space, then the norm map $\|\cdot\| : (X, \nu) \to \mathbb{R}$ is continuous if and only if the Fréchet space $(X, \nu)$ is the TVS $(X, \|\cdot\|)$ (here, the Banach space $(X, \|\cdot\|)$ is being considered as a TVS, which means that its norm is "forgotten" but its topology is remembered).
A metrizable locally convex space is normable if and only if its strong dual space is a Fréchet–Urysohn locally convex space.
Any product of complete metrizable TVSs is a Baire space.
A product of metrizable TVSs is metrizable if and only if all but at most countably many of these TVSs have dimension $0$.
A product of pseudometrizable TVSs is pseudometrizable if and only if all but at most countably many of these TVSs have the trivial topology.
The dimension of a complete metrizable TVS is either finite or uncountable.
Completeness
Every topological vector space (and more generally, a topological group) has a canonical uniform structure, induced by its topology, which allows the notions of completeness and uniform continuity to be applied to it.
If $X$ is a metrizable TVS and $d$ is a metric that defines $X$'s topology, then it is possible that $X$ is complete as a TVS (i.e. relative to its uniformity) but the metric $d$ is not a complete metric (such metrics exist even for $X = \mathbb{R}$).
Thus, if $X$ is a TVS whose topology is induced by a pseudometric $d$, then the notion of completeness of $X$ (as a TVS) and the notion of completeness of the pseudometric space $(X, d)$ are not always equivalent.
The next theorem gives a condition for when they are equivalent:
If $M$ is a closed vector subspace of a complete pseudometrizable TVS $X$, then the quotient space $X / M$ is complete.
If $M$ is a complete vector subspace of a metrizable TVS $X$ and if the quotient space $X / M$ is complete, then so is $X$. If $X$ is not complete, then $M := X$, a closed but not complete vector subspace of $X$, shows that the completeness of $M$ cannot be weakened to closedness in this statement (since $X / M = \{0\}$ is complete while $X$ is not).
A Baire separable topological group is metrizable if and only if it is cosmic.
Subsets and subsequences
Let $X$ be a separable locally convex metrizable topological vector space and let $C$ be its completion. If $S$ is a bounded subset of $C$ then there exists a bounded subset $R$ of $X$ such that $S \subseteq \operatorname{cl}_C R.$
Every totally bounded subset of a locally convex metrizable TVS $X$ is contained in the closed convex balanced hull of some sequence in $X$ that converges to $0.$
In a pseudometrizable TVS, every bornivore is a neighborhood of the origin.
If $d$ is a translation invariant metric on a vector space $X$, then $d(n x, 0) \le n \, d(x, 0)$ for all $x \in X$ and every positive integer $n.$
If $(x_i)_{i=1}^{\infty}$ is a null sequence (that is, it converges to the origin) in a metrizable TVS, then there exists a sequence $(r_i)_{i=1}^{\infty}$ of positive real numbers diverging to $\infty$ such that $r_i x_i \to 0.$
A subset of a complete metric space is closed if and only if it is complete. If a space $X$ is not complete, then $X$ is a closed subset of $X$ that is not complete.
If $X$ is a metrizable locally convex TVS, then for every bounded subset $B$ of $X$, there exists a bounded disk $D$ in $X$ such that $B \subseteq X_D$, and both $X$ and the auxiliary normed space $X_D$ induce the same subspace topology on $B.$
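As a small worked derivation, the inequality for translation-invariant metrics stated above, $d(nx, 0) \le n\, d(x, 0)$, follows from the triangle inequality applied along the chain $0, x, 2x, \ldots, nx$:

```latex
d(nx, 0) \;\le\; \sum_{k=1}^{n} d\bigl(kx,\, (k-1)x\bigr)   % triangle inequality
         \;=\;  \sum_{k=1}^{n} d(x, 0)                      % translation invariance: d(kx,(k-1)x) = d(x,0)
         \;=\;  n\, d(x, 0).
```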
Generalized series
As described in this article's section on generalized series, for any $I$-indexed family $(r_i)_{i \in I}$ of vectors from a TVS $X$, it is possible to define their sum $\textstyle\sum_{i \in I} r_i$ as the limit of the net of finite partial sums $F \mapsto \textstyle\sum_{i \in F} r_i$, where the domain is directed by $\subseteq.$
If $I = \mathbb{N}$ and $X = \mathbb{R}$, for instance, then the generalized series $\textstyle\sum_{i \in \mathbb{N}} r_i$ converges if and only if $\textstyle\sum_{i=1}^{\infty} r_i$ converges unconditionally in the usual sense (which for real numbers, is equivalent to absolute convergence).
If a generalized series $\textstyle\sum_{i \in I} r_i$ converges in a metrizable TVS, then the set $\{i \in I : r_i \neq 0\}$ is necessarily countable (that is, either finite or countably infinite);
in other words, all but at most countably many $r_i$ will be zero and so this generalized series is actually a sum of at most countably many non-zero terms.
Linear maps
If $F : X \to Y$ is a linear map from a pseudometrizable TVS $X$ into a TVS $Y$ and $F$ maps bounded subsets of $X$ to bounded subsets of $Y$, then $F$ is continuous.
Discontinuous linear functionals exist on any infinite-dimensional pseudometrizable TVS. Thus, a pseudometrizable TVS is finite-dimensional if and only if its continuous dual space is equal to its algebraic dual space.
If $F : X \to Y$ is a linear map between TVSs and $X$ is metrizable then the following are equivalent:
$F$ is continuous;
$F$ is a (locally) bounded map (that is, $F$ maps (von Neumann) bounded subsets of $X$ to bounded subsets of $Y$);
$F$ is sequentially continuous;
the image under $F$ of every null sequence in $X$ is a bounded set, where by definition, a null sequence is a sequence that converges to the origin;
$F$ maps null sequences to null sequences;
Open and almost open maps
Theorem: If $A$ is a complete pseudometrizable TVS, $Y$ is a Hausdorff TVS, and $T : A \to Y$ is a closed and almost open linear surjection, then $T$ is an open map.
Theorem: If $T : X \to Y$ is a surjective linear operator from a locally convex space $X$ onto a barrelled space $Y$ (e.g. every complete pseudometrizable space is barrelled) then $T$ is almost open.
Theorem: If $T : X \to Y$ is a surjective linear operator from a TVS $X$ onto a Baire space $Y$ then $T$ is almost open.
Theorem: Suppose $T : X \to Y$ is a continuous linear operator from a complete pseudometrizable TVS $X$ into a Hausdorff TVS $Y$. If the image of $T$ is non-meager in $Y$ then $T : X \to Y$ is a surjective open map and $Y$ is a complete metrizable space.
Hahn-Banach extension property
A vector subspace $M$ of a TVS $X$ has the extension property if any continuous linear functional on $M$ can be extended to a continuous linear functional on $X.$
Say that a TVS $X$ has the Hahn-Banach extension property (HBEP) if every vector subspace of $X$ has the extension property.
The Hahn-Banach theorem guarantees that every Hausdorff locally convex space has the HBEP.
For complete metrizable TVSs there is a converse:
Theorem (Kalton): Every complete metrizable TVS with the Hahn-Banach extension property is locally convex.
If a vector space $X$ has uncountable dimension and if we endow it with the finest vector topology, then this is a TVS with the HBEP that is neither locally convex nor metrizable.
See also
Notes
Proofs
References
Bibliography
Metric spaces
Topological vector spaces | Metrizable topological vector space | [
"Mathematics"
] | 5,951 | [
"Mathematical structures",
"Vector spaces",
"Topological vector spaces",
"Space (mathematics)",
"Metric spaces"
] |
64,280,109 | https://en.wikipedia.org/wiki/Transport-of-intensity%20equation | The transport-of-intensity equation (TIE) is a computational approach to reconstruct the phase of a complex wave in optical and electron microscopy. It describes the internal relationship between the intensity and phase distribution of a wave.
The TIE was first proposed in 1983 by Michael Reed Teague. Teague suggested using the law of conservation of energy to write a differential equation for the transport of energy by an optical field. This equation, he stated, could be used as an approach to phase recovery.
Teague approximated the amplitude of the wave propagating nominally in the z-direction by a parabolic equation and then expressed it in terms of irradiance and phase:

$$\nabla_{\perp} \cdot \left( I(x, y, z)\, \nabla_{\perp} \phi(x, y, z) \right) = -\frac{2\pi}{\lambda} \frac{\partial I(x, y, z)}{\partial z},$$

where $\lambda$ is the wavelength, $I(x, y, z)$ is the irradiance at point $(x, y, z)$, and $\phi(x, y, z)$ is the phase of the wave. If the intensity distribution of the wave and its spatial derivative can be measured experimentally, the equation becomes a linear equation that can be solved to obtain the phase distribution $\phi$.
For a phase sample with a constant intensity $I_0$, the TIE simplifies to

$$\nabla_{\perp}^{2} \phi = -\frac{2\pi}{\lambda I_0} \frac{\partial I}{\partial z}.$$

It allows measuring the phase distribution of the sample by acquiring a defocused image.
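In the uniform-intensity case, the phase can be recovered numerically with an FFT-based Poisson solver. The sketch below is illustrative, not a standard API: it assumes a pure-phase sample with constant intensity `I0`, periodic boundary conditions, and two images recorded at defocus distances of plus and minus `dz`, from which the axial intensity derivative is estimated by a central difference.

```python
import numpy as np

def solve_tie(I_plus, I_minus, dz, wavelength, I0):
    """Recover the phase of a uniform-intensity (pure-phase) sample from
    two defocused intensity images, via an FFT-based Poisson solver."""
    dIdz = (I_plus - I_minus) / (2 * dz)      # central-difference axial derivative
    k = 2 * np.pi / wavelength
    rhs = -(k / I0) * dIdz                    # this equals the Laplacian of the phase
    ny, nx = rhs.shape
    fy = np.fft.fftfreq(ny)
    fx = np.fft.fftfreq(nx)
    FX, FY = np.meshgrid(fx, fy)
    freq2 = (2 * np.pi) ** 2 * (FX ** 2 + FY ** 2)   # |k_perp|^2 on the grid
    freq2[0, 0] = 1.0                         # placeholder; DC term zeroed below
    phase_hat = np.fft.fft2(rhs) / (-freq2)   # invert the Laplacian in Fourier space
    phase_hat[0, 0] = 0.0                     # the mean phase is unrecoverable
    return np.real(np.fft.ifft2(phase_hat))
```

In practice the division by the Laplacian kernel is usually Tikhonov-regularized rather than simply zeroed at DC, since low spatial frequencies are noise-amplified.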
TIE-based approaches are applied in biomedical and technical applications, such as quantitative monitoring of cell growth in culture, investigation of cellular dynamics and characterization of optical elements. The TIE method is also applied for phase retrieval in transmission electron microscopy.
References
Electron microscopy
Microscopy | Transport-of-intensity equation | [
"Chemistry"
] | 279 | [
"Electron",
"Electron microscopy",
"Microscopy"
] |
64,280,274 | https://en.wikipedia.org/wiki/Bioactive%20glass%20S53P4 | Bioactive glass S53P4 (BAG-S53P4) is a biomaterial consisting of sodium, silicate, calcium and phosphate. S53P4 is osteoconductive and also osteoproductive in the promotion, migration, replication and differentiation of osteogenic cells and their matrix production. In other words, it facilitates bone formation and regeneration (osteostimulation). S53P4 has been proven to naturally inhibit the bacterial growth of up to 50 clinically relevant bacteria strains.
History
The S53P4 bioactive glass has its roots in the bioglass 45S5 developed by Larry Hench in the late 1960s in New York. A couple of decades later, in the 1980s, the compound S53P4 bioactive glass was developed in Turku, Finland. S53P4 was found to be osteostimulative (non-osteoinductive), but it also had one new additional property: the composition of 53% silica and smaller weights of sodium, calcium and phosphorus gave rise to surface reactions in vitro that appeared to inhibit bacterial growth – a material that could not be infected by bacteria was discovered.
Applications
Areas of use include a wide range of indications that require the filling of bone cavities, voids, and gaps as well as the reconstruction or regeneration of bone defects. Several long-term studies have shown that mastoid cavities in both cholesteatoma, old radical cavities, and chronic otitis media can be successfully obliterated with S53P4 bioactive glass.
Clinical application has been gained from several extensive studies where patients with bone infections have been treated. S53P4 has shown promising results in chronic osteomyelitis surgery, septic non-union surgery, segmental defect reconstructions and other infectious complications, such as sternum infections, diabetic foot osteomyelitis and spine infections.
S53P4 has gained clinical experience within spine surgery in spine fusions and spinal deformity surgery.
S53P4 has also been used successfully in the filling of benign bone tumor cavities in both adults and children, sustaining the bone cavity volume long term. Clinical experience has been gained from aneurysmal bone cysts (ABC), simple bone cysts (UBC), enchondroma and nonossifying fibroma (NOF).
Mechanism of action
When S53P4 bioactive glass is implanted into a bone cavity, the glass is activated through a reaction with body fluids. During this activation period, the bioactive glass goes through a series of chemical reactions, creating the ideal conditions for bone to rebuild through osteoconduction.
Na, Si, Ca, and P ions are released.
A silica gel layer forms on the bioactive glass surface.
CaP crystallizes, forming a layer of hydroxyapatite on the surface of the bioactive glass.
Once the hydroxyapatite layer is formed, the bioactive glass interacts with biological entities, i.e. blood proteins, growth factors and collagen. Following this interactive, osteoconductive and osteostimulative process, new bone grows onto and between the bioactive glass structures.
Bioactive glass bonds to bone – facilitating new bone formation.
Osteostimulation begins by stimulating osteogenic cells to increase the remodeling rate of bone.
Radio-dense quality of bioactive glass allows for post-operative evaluation.
In the final transformative phase, the process of bone regeneration and remodeling continues. Over time, the glass is fully remodeled into bone, restoring the patient's natural anatomy.
Bone consolidation occurs.
S53P4 bioactive glass continues to remodel into bone over a period of years.
Inhibition of bacterial growth
The bacterial growth inhibiting properties of S53P4 derive from two simultaneous chemical and physical processes, occurring once the bioactive glass reacts with body fluids. Sodium (Na) is released from the surface of the bioactive glass and induces an increase in pH (alkaline environment), which is not favorable for the bacteria, thus inhibiting their growth. The released Na, Ca, Si and P ions give rise to an increase in osmotic pressure due to an elevation in salt concentration, i.e. an environment where bacteria cannot grow.
References
Biomaterials | Bioactive glass S53P4 | [
"Physics",
"Biology"
] | 905 | [
"Biomaterials",
"Materials",
"Matter",
"Medical technology"
] |
64,280,318 | https://en.wikipedia.org/wiki/Bartol%20Research%20Institute | The Bartol Research Institute (formerly the Bartol Research Foundation) is a scientific research institution at the Department of Physics and Astronomy of the University of Delaware. Its members belong to the faculty of the University of Delaware and perform research in areas such as astroparticle physics, astrophysics, cosmology, particle physics, and space science.
Research
Having a strong research mission, the Bartol Research Institute has counted several renowned physicists among its members, mostly focusing on fundamental science. Starting with its first director, W.F.G. Swann, cosmic rays were and still are one of the main research topics.
With its second director, Martin A. Pomerantz, an Antarctic research program was developed along these lines and is maintained to this day: Bartol scientists contribute to several Antarctic cosmic-ray projects, including balloon-borne cosmic-ray detectors such as ANITA, and ground-based experiments such as neutron monitors and the IceCube Neutrino Observatory.
Furthermore, nuclear physics and high-energy physics belonged to the research portfolio since early on. Today research is done in particular in theoretical particle physics and theoretical as well as experimental particle astrophysics.
Consequently, the Bartol Research Institute is a member of several major international collaborations that run some of the leading experiments in this field, such as VERITAS, the Cherenkov Telescope Array, the Pierre Auger Observatory, and IceCube. In 2012, Qaisar Shafi was appointed the Inaugural Bartol Research Institute Professor of Physics.
Space physics, including plasma and solar physics, is another major research area of the Bartol Research Institute. Among its members is William H. Matthaeus, the current director of the NASA Delaware Space Grant Consortium, who has made key contributions to the field including involvement in the Parker Solar Probe.
Delaware's Space Grant Consortium was founded in 1991 under the leadership of Norman F. Ness. Shortly before Norman Ness became the third Bartol Director, he was elected to the National Academy of Sciences for his seminal contributions to measuring planetary and interplanetary magnetic fields. In particular, he is the principal investigator of the magnetometer of NASA's Voyager program.
Last but not least, the present research portfolio of Bartol also includes various areas of astronomy, in particular, stellar and planetary astrophysics.
Since 1985, the Bartol Research Institute has awarded the Shakti P. Duggal Award to a young scientist in cosmic-ray physics at each occurrence of the biannual International Cosmic Ray Conference.
History
Founded in 1924 by the endowment of Henry W. Bartol at the Franklin Institute in Philadelphia, PA, as the Bartol Research Foundation, it moved to its own building at the Swarthmore College in 1927 where it resided for fifty years.
The research was also supported by grants from the federal government of the USA, and the research topics included nuclear physics, cosmic rays, astrophysics, and the physics and chemistry of surfaces.
The Bartol Research Foundation was also active in public outreach, e.g., by a contribution to the 1939 New York World's Fair.
In 1977 the Bartol Research Foundation relocated to its present location in the Sharp Lab building on the main campus of the University of Delaware in Newark, and later changed its name to the Bartol Research Institute.
The integration of the Bartol Research Institute into the Department of Physics and Astronomy at the University of Delaware was completed in the year 2005.
The Bartol Research Foundation (later the Bartol Research Institute) and its researchers issued numerous scientific publications, and hosted conferences.
List of directors
William Francis Gray Swann (1927–1959)
Martin A. Pomerantz (1959–1987)
Norman F. Ness (1987–2000)
Stuart Pittel (2000–2011)
Stephen Matthew Barr (2011–2019)
Jamie Holder (2019–)
References
External links
University of Delaware
Astrophysics research institutes
Physics research institutes
1924 establishments in Pennsylvania
Research institutes established in 1924 | Bartol Research Institute | [
"Physics"
] | 795 | [
"Astrophysics research institutes",
"Astrophysics"
] |
64,281,190 | https://en.wikipedia.org/wiki/Angela%20Tamagnini | Angela Tamagnini was a pioneer in the use of smallpox vaccination in Portugal. She also became famous for her role in resisting the French invasion of the city of Tomar in the Santarém District of Portugal during the Napoleonic Wars.
Biography
Angela Tamagnini was born in Milan, Italy on 26 October 1770. She moved to Portugal in 1783 together with her uncle, Inácio Francisco Tamagnini, who became the doctor of Queen Maria I.
In 1795, she married António Florêncio de Abreu e Andrade, son of a rich tobacco and soap trader. Her husband died in 1806. She had one son, João. Her great-grandson was Fernando Tamagnini de Abreu e Silva, the commander of the Portuguese Expeditionary Corps, which fought with the Allies during World War I.
During the Peninsular War, in June 1808, the year after the French invasion of Portugal under General Junot, the Portuguese in the northwest of the country rebelled. French troops led by General Margaron were sent to quell the uprising in Tomar. It was clear that the defence of Tomar would be hopeless and Tamagnini, who knew how to speak French, was asked to act as an intermediary between the city and the French in order to negotiate a peaceful surrender, thereby avoiding potential plundering and other atrocities. She succeeded in avoiding destruction of the city, reducing the reparations expected by the French, and saving the lives of three Portuguese friars who were to be executed.
Together with Maria Isabel Wittenhall van Zeller (1749–1819), who was active in the Porto area of Portugal, Tamagnini was a female pioneer in the use of vaccinations against smallpox. Previously the disease had been treated by inoculation, also known as variolation, which involved the deliberate introduction of material from smallpox pustules into the skin. This induced immunity to smallpox but generally also produced a mild form of the infection. Towards the end of the 18th century, the work of Edward Jenner and others showed that cowpox delivered by vaccination to humans could protect against smallpox. Tamagnini ordered everything necessary for the vaccine’s preparation and application from the United Kingdom, and provided it to the Vaccine Institute established in Coimbra by the Royal Academy of Sciences. She, herself, carried out vaccinations in Tomar at her own expense. Tamagnini was appointed a Correspondent of the Vaccine Institute in 1812 but, unlike Wittenhall van Zeller, was not awarded a Gold Medal by the Institute because she failed to provide the necessary data.
Death and legacy
Tamagnini died in Tomar on 2 July 1827. In Tomar, she is honoured by having one of the city’s major roads named after her.
References
1770 births
1827 deaths
Vaccinologists
Smallpox vaccines
19th century in Portugal
People from Tomar
Duchy of Milan people
Immigrants to Portugal | Angela Tamagnini | [
"Biology"
] | 603 | [
"Vaccination",
"Vaccinologists"
] |
64,281,511 | https://en.wikipedia.org/wiki/Quantum%20jump | A quantum jump is the abrupt transition of a quantum system (atom, molecule, atomic nucleus) from one quantum state to another, from one energy level to another. When the system absorbs energy, there is a transition to a higher energy level (excitation); when the system loses energy, there is a transition to a lower energy level.
The concept was introduced by Niels Bohr, in his 1913 Bohr model.
A quantum jump is a phenomenon that is peculiar to quantum systems and distinguishes them from classical systems, where any transitions are performed gradually. In quantum mechanics, such jumps are associated with the non-unitary evolution of a quantum-mechanical system during measurement.
A quantum jump can be accompanied by the emission or absorption of photons; energy transfer during a quantum jump can also occur by non-radiative resonant energy transfer or in collisions with other particles.
In modern physics, the concept of a quantum jump is rarely used; as a rule scientists speak of transitions between quantum states or energy levels.
Atomic electron transition
Atomic electron transitions cause the emission or absorption of photons. Their statistics are Poissonian, and the time between jumps is exponentially distributed. The damping time constant (which ranges from nanoseconds to a few seconds) relates to the natural, pressure, and field broadening of spectral lines. The larger the energy separation of the states between which the electron jumps, the shorter the wavelength of the photon emitted.
In an ion trap, quantum jumps can be directly observed by addressing a trapped ion with radiation at two different frequencies to drive electron transitions. This requires one strong and one weak transition to be excited, coupling the ground state to a short-lived excited state and to a metastable "shelf" state, respectively. The short-lived level allows for constant emission of photons on the strong transition, which can be collected by a camera and/or photomultiplier tube. The shelf state has a relatively long lifetime, which causes an interruption of the photon emission when the electron gets shelved there by the light driving the weak transition. The ion going dark is a direct observation of quantum jumps.
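The exponentially distributed waiting times described above produce a telegraph-like bright/dark fluorescence record. The following sketch simulates such a record; the rates and the function name `simulate_blinking` are illustrative choices, not measured values.

```python
import random

def simulate_blinking(rate_shelve, rate_decay, t_total, seed=0):
    """Simulate the bright/dark 'telegraph' fluorescence of a single ion:
    dwell times in each state are drawn from exponential distributions,
    as expected for quantum jumps between a fluorescing state (left at
    rate `rate_shelve`) and a shelved state (left at rate `rate_decay`)."""
    rng = random.Random(seed)
    t, bright = 0.0, True
    intervals = []                        # list of (duration, was_bright) pairs
    while t < t_total:
        rate = rate_shelve if bright else rate_decay
        dwell = rng.expovariate(rate)     # exponential waiting time before the jump
        intervals.append((min(dwell, t_total - t), bright))
        t += dwell
        bright = not bright               # the quantum jump: toggle the state
    return intervals
```

With these conventions the mean bright dwell time is `1 / rate_shelve` and the mean dark dwell time is `1 / rate_decay`, so a histogram of the dwell times recovers the exponential statistics mentioned in the text.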
Molecular electronic transition
References
Sources
Are there quantum jumps?
«There are no quantum jumps, nor are there particles!» by H. D. Zeh, Physics Letters A172, 189 (1993).
Der Quantensprung im Bohrschen Atommodell – Frühe Quantenphysik (in German: the quantum jump in the Bohr atomic model – early quantum physics)
Der Quantensprung – Die zweifelhafte Karriere eines Fachausdrucks (in German: the quantum jump – the dubious career of a technical term; ZEIT, 1996)
M.B. Plenio and P.L. Knight, The Quantum Jump Approach to Dissipative Dynamics in Quantum Optics; see also Rev. Mod. Phys. 70, 101–144 (1998). (A description of the dynamics of open systems by means of quantum jumps.)
Historisches zum Quantensprung, Sommerfeld und Einstein 1911 (in German: historical notes on the quantum jump; Sommerfeld and Einstein, 1911)
Quantum mechanics
Spectroscopy | Quantum jump | [
"Physics",
"Chemistry"
] | 614 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Theoretical physics",
"Quantum mechanics",
"Spectroscopy"
] |
57,680,270 | https://en.wikipedia.org/wiki/NITROS%20Project | The NITROS (Network for Innovative Training on ROtorcraft Safety) project is an ongoing project which began in November 2016 consisting of 12 Early-Stage Researchers (ESRs). It is funded through the European Union's Marie Skłodowska-Curie Actions (MSCA) research grant which is an Innovative Training Network (ITN) to support European Joint Doctorates (EJD). The collective aim of this specific MSCA scheme is for fostering new skills by means of excellent initial training of researchers.
NITROS aims to train aerospace engineers in Control Engineering, Computational Fluid Dynamics (CFD), Modeling and Simulation, Structural Dynamics and Human perception cognition and action, to address complex solutions for rotorcraft safety. Rotorcraft accident rates remain disproportionately high in comparison with fixed-wing aircraft.
The network is composed of four universities spread over four countries namely: Politecnico di Milano (Italy), Delft University of Technology (Netherlands), University of Liverpool (England) and the University of Glasgow (Scotland). Whilst there are also six international industrial partners involved in helping collaborate: Bristow Helicopters, Civil Aviation Authority, Eurocontrol, Leonardo Helicopter, National Aerospace Laboratory and the Max Planck Institute.
The NITROS project will be presented at the 44th European Rotorcraft Forum in Delft and at the subsequent 45th and 46th Forums, where the 12 individual research projects will be showcased.
Each research project is focused on a problem that affects the safety of the current or innovative rotorcraft configurations.
References
Aerospace engineering
Aviation safety in Europe
College and university associations and consortia in Europe
Engineering university associations and consortia
European Union and science and technology | NITROS Project | [
"Engineering"
] | 343 | [
"Aerospace engineering"
] |
57,680,601 | https://en.wikipedia.org/wiki/Massimo%20Aparo | Massimo Aparo (born 31 July 1953) is an Italian nuclear engineer, who started working as acting deputy director general and head of the Department of Safeguards, after Tero Varjoranta has resigned effective 11 May 2018.
Biography
Massimo Aparo was born in Pistoia. He is a nuclear engineer and graduated from Sapienza University of Rome.
Before joining the International Atomic Energy Agency (IAEA), Aparo worked as director general of an Italian company in the area of radiation detection and monitoring, at the European Space Agency, and at Italy’s former National Committee for Nuclear Energy.
Career
He was appointed Acting Deputy Director General and Head of the Department of Safeguards at the IAEA on 11 May 2018 by Yukiya Amano, Director General of the IAEA. Before this date, he occupied the position of Acting Director of the Office for Verification in Iran.
Aparo has worked in the IAEA Safeguards Department since 1997. He served in the following positions:
Section Head of the Division of Technical and Scientific Services,
Head of the Tokyo Regional Office in the Division of Operations A, and
Head of the Iran Task Force
Before joining the IAEA, Aparo worked as director general of an Italian firm specialized in radiation detection and monitoring. He also has the experience of working in the European Space Agency and at the Italian Nuclear Energy Commission.
Iran Task Force
Aparo, is the leader of an elite unit of the International Atomic Energy Agency known as Iran Task Force. It was created three years ago by IAEA’s director-general, Japanese diplomat Yukiya Amano. The task force consists of around 50 members, including nuclear engineers, chemists, physicists, intelligence data analysts and communication experts.
According to the Jerusalem Post, “The Iran Task Force is part of the IAEA’s Department of Safeguards and Verification, which is in charge of making sure that all its state members properly use nuclear technology and know-how for the declared civilian and peaceful purposes: scientific research,”
Zaporizhzhia Nuclear Power Plant
On 29 August 2022, an IAEA team flew out to investigate the Zaporizhzhia Nuclear Power Plant in Ukraine, which was in the middle of a conflict zone. The overall IAEA team was led by Rafael Grossi, Lydie Evrard and Aparo. No leaks had been reported at the plant before their arrival, but shelling had occurred days before.
See also
Radiation and Nuclear Safety Authority
IAEA safeguards
International Commission on Radiological Protection
International Atomic Energy Agency
Yukiya Amano
Tero Varjoranta
References
Nuclear proliferation
Italian officials of the United Nations
Nuclear power
Nuclear weapons policy
Living people
People associated with nuclear power
1956 births | Massimo Aparo | [
"Physics"
] | 542 | [
"Power (physics)",
"Physical quantities",
"Nuclear power"
] |
57,680,998 | https://en.wikipedia.org/wiki/Matrix%20factorization%20%28recommender%20systems%29 | Matrix factorization is a class of collaborative filtering algorithms used in recommender systems. Matrix factorization algorithms work by decomposing the user-item interaction matrix into the product of two lower dimensionality rectangular matrices. This family of methods became widely known during the Netflix prize challenge due to its effectiveness as reported by Simon Funk in his 2006 blog post, where he shared his findings with the research community. The prediction results can be improved by assigning different regularization weights to the latent factors based on items' popularity and users' activeness.
Techniques
The idea behind matrix factorization is to represent users and items in a lower dimensional latent space. Since the initial work by Funk in 2006 a multitude of matrix factorization approaches have been proposed for recommender systems. Some of the most used and simpler ones are listed in the following sections.
Funk MF
The original algorithm proposed by Simon Funk in his blog post factorized the user-item rating matrix as the product of two lower dimensional matrices, the first one has a row for each user, while the second has a column for each item. The row or column associated to a specific user or item is referred to as latent factors. Note that in Funk MF no singular value decomposition is applied; it is an SVD-like machine learning model.
The predicted ratings can be computed as $\tilde{R} = H W$, where $R \in \mathbb{R}^{\text{users} \times \text{items}}$ is the user-item rating matrix, $H \in \mathbb{R}^{\text{users} \times \text{latent factors}}$ contains the user's latent factors and $W \in \mathbb{R}^{\text{latent factors} \times \text{items}}$ the item's latent factors.
Specifically, the predicted rating user u will give to item i is computed as:

$$\tilde{r}_{ui} = \sum_{f=0}^{\text{n factors}} H_{u,f} W_{f,i}$$
It is possible to tune the expressive power of the model by changing the number of latent factors. It has been demonstrated that a matrix factorization with one latent factor is equivalent to a most popular or top popular recommender (e.g. recommends the items with the most interactions without any personalization). Increasing the number of latent factors will improve personalization, therefore recommendation quality, until the number of factors becomes too high, at which point the model starts to overfit and the recommendation quality will decrease. A common strategy to avoid overfitting is to add regularization terms to the objective function.
Funk MF was developed as a rating prediction problem, therefore it uses explicit numerical ratings as user-item interactions.
All things considered, Funk MF minimizes the following objective function:

$$\underset{H, W}{\operatorname{argmin}} \; \|R - \tilde{R}\|_{\mathrm{F}} + \alpha \|H\| + \beta \|W\|$$

where $\|\cdot\|_{\mathrm{F}}$ is defined to be the Frobenius norm, whereas the other norms might be either Frobenius or another norm depending on the specific recommending problem.
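This objective can be minimized with stochastic gradient descent, iterating over the observed ratings and updating the two factor matrices. The sketch below is a minimal pure-Python illustration of this idea; the function name `funk_mf` and the hyperparameters are illustrative, not Funk's exact implementation.

```python
import random

def funk_mf(ratings, n_users, n_items, k=8, lr=0.05, reg=0.02,
            epochs=1000, seed=0):
    """Funk-style matrix factorization trained with SGD.
    `ratings` is a list of (user, item, rating) triples; returns the
    latent-factor matrices H (n_users x k) and W (k x n_items)."""
    rng = random.Random(seed)
    H = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    W = [[rng.gauss(0, 0.1) for _ in range(n_items)] for _ in range(k)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(H[u][f] * W[f][i] for f in range(k))
            err = r - pred
            for f in range(k):            # gradient step with L2 regularization
                hu, wf = H[u][f], W[f][i]
                H[u][f] += lr * (err * wf - reg * hu)
                W[f][i] += lr * (err * hu - reg * wf)
    return H, W
```

A predicted rating is then the dot product of a user row of `H` with an item column of `W`, matching the prediction rule given above.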
SVD++
While Funk MF is able to provide very good recommendation quality, its ability to use only explicit numerical ratings as user-item interactions constitutes a limitation. Modern day recommender systems should exploit all available interactions both explicit (e.g. numerical ratings) and implicit (e.g. likes, purchases, skipped, bookmarked). To this end SVD++ was designed to take into account implicit interactions as well.
Compared to Funk MF, SVD++ takes also into account user and item bias.
The predicted rating user u will give to item i is computed as:

$$\tilde{r}_{ui} = \mu + b_i + b_u + \sum_{f=0}^{\text{n factors}} H_{u,f} W_{f,i}$$

where $\mu$ refers to the overall average rating over all items and $b_i$ and $b_u$ refer to the observed deviations of item i and user u, respectively, from the average. SVD++ has however some disadvantages, with the main drawback being that this method is not model-based. This means that if a new user is added, the algorithm is incapable of modeling it unless the whole model is retrained. Even though the system might have gathered some interactions for that new user, its latent factors are not available and therefore no recommendations can be computed. This is an example of a cold-start problem, that is the recommender cannot deal efficiently with new users or items, and specific strategies should be put in place to handle this disadvantage.
A possible way to address this cold start problem is to modify SVD++ in order for it to become a model-based algorithm, therefore allowing to easily manage new items and new users.
As previously mentioned in SVD++ we don't have the latent factors of new users, therefore it is necessary to represent them in a different way. The user's latent factors represent the preference of that user for the corresponding item's latent factors, therefore user's latent factors can be estimated via the past user interactions. If the system is able to gather some interactions for the new user it is possible to estimate its latent factors.
Note that this does not entirely solve the cold-start problem, since the recommender still requires some reliable interactions for new users, but at least there is no need to recompute the whole model every time. It has been demonstrated that this formulation is almost equivalent to a SLIM model, which is an item-item model based recommender.
With this formulation, the equivalent item-item recommender would be R̃ = R S = R Wᵀ W. Therefore the similarity matrix S = Wᵀ W is symmetric.
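The fold-in idea behind this model-based reformulation can be sketched as follows; the toy item-factor matrix W and the new user's ratings are illustrative assumptions, not values from the article:

```python
import numpy as np

# Assume a trained item-factor matrix W (factors x items); here a toy stand-in.
W = np.array([[0.9, 0.1, 0.8],
              [0.2, 0.7, 0.1]])

# Sparse interaction vector of a brand-new user (0.0 = unobserved).
r_new = np.array([4.0, 0.0, 0.0])

h_new = r_new @ W.T        # fold-in: estimate the user's latent factors
scores = h_new @ W         # equivalent to one row of R @ W.T @ W
print(scores)              # item 2 scores high: it shares factors with item 0
```

No retraining is needed: the new user's factors are computed on the fly from their ratings and the fixed item factors.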
Asymmetric SVD
Asymmetric SVD aims at combining the advantages of SVD++ while being a model-based algorithm, and is therefore able to consider new users with a few ratings without needing to retrain the whole model. As opposed to the model-based SVD++ above, here the user latent factor matrix H is replaced by Q, which learns the user's preferences as a function of their ratings.
The predicted rating user u will give to item i is computed as:

r̃_{ui} = μ + b_i + b_u + Σ_f (Σ_j r_{uj} Q_{f,j}) W_{f,i}

With this formulation, the equivalent item-item recommender would be R̃ = R S = R Qᵀ W. Since matrices Q and W are different, the similarity matrix S = Qᵀ W is asymmetric, hence the name of the model.
Group-specific SVD
A group-specific SVD can be an effective approach for the cold-start problem in many scenarios. It clusters users and items based on dependency information and similarities in characteristics. Then, once a new user or item arrives, we can assign a group label to it and approximate its latent factors by the group effects of the corresponding group. Therefore, although ratings associated with the new user or item are not necessarily available, the group effects provide immediate and effective predictions.
The predicted rating user u will give to item i is computed as:

r̃_{ui} = Σ_f (H_{u,f} + S_{v_u,f}) (W_{f,i} + T_{f,j_i})

Here v_u and j_i represent the group labels of user u and item i, respectively, which are identical across members of the same group, and S and T are matrices of group effects. For example, for a new user u whose latent factor H_u is not available, we can at least identify their group label v_u, and predict their ratings as:

r̃_{ui} = Σ_f S_{v_u,f} (W_{f,i} + T_{f,j_i})
This provides a good approximation to the unobserved ratings.
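A minimal hedged sketch of this group-effect fallback for a cold-start user; all matrices and group assignments below are toy values chosen purely for illustration:

```python
import numpy as np

# Toy trained quantities (illustrative, not real fitted values):
W = np.array([[0.8, 0.1, 0.5],      # item factors, factors x items
              [0.2, 0.9, 0.4]])
T = np.array([[0.3, -0.2],          # item-group effects, factors x groups
              [0.1,  0.4]])
S = np.array([[1.0, 0.2],           # user-group effects, groups x factors
              [0.1, 0.9]])

item_group = [0, 1, 0]              # group label j_i of each item

def cold_user_scores(user_group):
    """Score every item for a user with no ratings, using only their
    group effect: r_ui = sum_f S[v_u, f] * (W[f, i] + T[f, j_i])."""
    return np.array([S[user_group] @ (W[:, i] + T[:, item_group[i]])
                     for i in range(W.shape[1])])

print(np.round(cold_user_scores(0), 2))
```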
Hybrid MF
In recent years many other matrix factorization models have been developed to exploit the ever-increasing amount and variety of available interaction data and use cases. Hybrid matrix factorization algorithms are capable of merging explicit and implicit interactions, or both content and collaborative data.
Deep-learning MF
In recent years a number of neural and deep-learning techniques have been proposed, some of which generalize traditional Matrix factorization algorithms via a non-linear neural architecture.
While deep learning has been applied to many different scenarios (context-aware, sequence-aware, social tagging, etc.), its real effectiveness when used in a simple collaborative filtering scenario has been called into question. Systematic analysis of publications applying deep learning or neural methods to the top-k recommendation problem, published in top conferences (SIGIR, KDD, WWW, RecSys, IJCAI), has shown that on average less than 40% of articles are reproducible, with as little as 14% in some conferences. Overall, the studies identified 26 articles; only 12 of them could be reproduced, and 11 of those could be outperformed by much older and simpler, properly tuned baselines. The studies also highlight a number of potential problems in today's research scholarship and call for improved scientific practices in that area. Similar issues have also been identified in sequence-aware recommender systems.
See also
Collaborative filtering
Recommender system
References
Collective intelligence
Information systems
Recommender systems | Matrix factorization (recommender systems) | [
"Technology"
] | 1,605 | [
"Information systems",
"Recommender systems",
"Information technology"
] |
57,681,488 | https://en.wikipedia.org/wiki/Stored%20Energy%20at%20Sea | The Stored Energy at Sea (StEnSEA) project is a pump storage system designed to store significant quantities of electrical energy offshore. After research and development, it was tested on a model scale in November 2016. It is designed to link in well with offshore wind platforms and their issues caused by electrical production fluctuations. It works by water flowing into a container, at significant pressure, thus driving a turbine. When there is spare electricity the water is pumped out, allowing electricity to be generated at a time of increased need.
Development history
In 2011, a physics professor at Goethe University Frankfurt and Dr. Gerhard Luther (Saarland University) had the idea of a pump storage system that would be placed on the sea bed. This system would use the high water pressure at great water depths to store energy in hollow bodies.
Shortly after their idea was published on 1 April 2011 in the newspaper Frankfurter Allgemeine Zeitung, a consortium of the Fraunhofer Institute for Energy Economics and Energy System Technology and the construction company Hochtief AG was set up. Together they produced a first preliminary design study, which demonstrated the feasibility of the pump storage concept. Subsequently, the German Federal Ministry for Economic Affairs and Energy supported the development and testing of the new concept.
Physical principle
The functionality of a seawater pressure storage power plant is based on that of conventional pumped-hydro storage plants. A hollow concrete sphere with an integrated pump-turbine is installed on the bottom of the sea. In contrast to well-known pumped-hydro storage plants, the sea surrounding the sphere represents the upper water basin, while the hollow sphere represents the lower water basin. The StEnSea concept uses the high water pressure difference between the hollow sphere and the surrounding sea, which is about 75 bar at the intended installation depth (≈1 bar per 10 meters of depth).
In case of overproduction of adjacent energy sources such as wind turbines or photovoltaic systems, the pump-turbine will be enabled to pump water from the cavity against the pressure into the surrounding sea. An empty hollow sphere means a fully charged storage system. When electricity is needed, water from the surrounding sea is guided through the turbine into the cavity, generating electricity. The higher the pressure difference between hollow sphere and the surrounding sea, the higher the energy yield during discharging. While discharging the hollow sphere a vacuum will be created inside. To avoid cavitation, the pump turbines and all other electrical components are placed in a centrally mounted cylinder. An auxiliary feed pump in the bottom of the cylinder is required to fill the cylinder with water and produces an inside pressure.
"Both pumps require an input pressure above the net positive suction head to avoid cavitation while pumping water from the inner volume into the cylinder or from the cylinder out of the sphere. As the pressure difference for the additional pump is much lower than for the pump turbine the required input pressure is lower as well. The input pressure of both pumps is given by the water column above them. For the additional pump this is the water column in the sphere and for the pump turbine it is the water column in the cylinder."
The maximum capacity for the hollow concrete sphere depends on the total pump-turbine efficiency, the installation depth and the inner volume.
The stored energy is proportional to the ambient pressure in the depths of the sea. Problems considered during the construction of the hollow sphere were choosing a construction-type that withstands the high water-pressure and which is heavy enough to keep the buoyancy force lower than the gravitational force. This resulted in the spherical construction with an inner diameter of 28.6 meter and a 2.72 meter thick wall made of normal watertight concrete.
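As a rough back-of-the-envelope check using standard hydrostatics (the round-trip efficiency figure below is an assumption for illustration, not a project specification), the recoverable energy of one empty sphere is approximately the ambient pressure times the inner volume times the round-trip efficiency:

```python
import math

RHO, G = 1025.0, 9.81          # seawater density (kg/m^3), gravity (m/s^2)

def stored_energy_mwh(depth_m, inner_diameter_m, efficiency=0.8):
    """Approximate recoverable energy of one empty sphere at a given depth."""
    pressure = RHO * G * depth_m                   # Pa (~1 bar per 10 m)
    volume = math.pi / 6 * inner_diameter_m ** 3   # sphere volume, m^3
    return pressure * volume * efficiency / 3.6e9  # J -> MWh

# Full-scale sphere from the article: 28.6 m inner diameter, at ~700 m depth
print(round(stored_energy_mwh(700, 28.6), 1))     # ≈ 19.2 MWh
```

This puts the capacity of one full-scale sphere in the range of a few tens of megawatt-hours, and shows why depth matters: the stored energy grows linearly with installation depth.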
Pilot test
To prove feasibility under real conditions and to acquire measurement data, the Fraunhofer engineers started implementing a pilot project. Hochtief Solutions AG constructed a pilot hollow sphere at a scale of 1:10 out of concrete, with an outer diameter of three meters and an inner volume of eight m3. On 9 November 2016 it was installed in Lake Constance at a depth of 100 meters and tested for four weeks.
During the test phase, the engineers were able to successfully store energy and operate the system in different operating modes. The engineers also studied whether a pressure equalization line to the surface is required; operating without this line would allow a reduction in costs and effort. The pilot test revealed that both operating variants work and would be possible to run.
In the next step, a possible test location in the sea for the carrying out of a demonstration project is to be scrutinized. Then a sphere with the planned demonstration diameter of 30 meters should be built and installed at a suitable location in the sea. Possible places of installation situated near a coast would be for example the Norwegian trench or some Spanish sea areas.
Furthermore, partners from industry financing half of the project must be found in order to receive further public funding from the BMWi, because the total costs for the demonstration project are estimated at a low double-digit million euro amount.
Potential installation sites
The identification of potential installation sites was undertaken in three consecutive steps. First, several criteria describing the quality of a potential location were defined. Besides the installation depth, which is the main factor, variables such as slope, geomorphology, distance to a possible grid connection point as well as to bases for servicing and set-up, marine reserves, and the requirement for power storage in the surroundings were taken into account.
In the following step, specific values were assigned to the hard parameters that are required for the use of the technology. Many of these values were determined in a previous feasibility analysis; a few had to be estimated using comparable applications from different offshore industries. The installation depth of the concrete sphere should be 600–800 m below sea level, with an angle of inclination of less than or equal to 1°. In addition, the next grid connection point must be reachable within one hundred kilometres, as must a base from which maintenance and repair measures can be carried out. Furthermore, an installation base should not be more than 500 km away, and areas with inappropriate geomorphology, for example canyons, were excluded.
Finally, a global location analysis, based on geo-datasets and the above defined restrictions, was carried out with a Geographical Information System (GIS). In order to make a statement about the potential storage capacities, the resulting areas were assigned to the Exclusive Economic Zones (EEZ) of the affected states. Those and the corresponding capacities for storing electricity are displayed in the table below.
Economic assessment of StEnSea
StEnSea is a modular high-capacity energy storage technology. Its profitability depends on the number of installed units (concrete spheres) per facility (which produces scale effects), on the arbitrage realized on the energy market, on the operating hours per year, and on the investment and operating costs.
In the following chart the relevant economic parameters for an economic assessment are pictured. About 800 to 1000 full operation cycles per annum are required.
For the operation and management of a storage farm, personnel expenditure is based on 0.5–2 staff per storage farm, depending on the farm capacity. Labor costs of 70 k€ per year and member of staff are used for the calculation. The price arbitrage is set to 6 €ct per kWh for the economic assessment, resulting from an average electricity purchase price of 2 €ct per kWh and an average sale price of 8 €ct per kWh. This price arbitrage also covers other services such as the provision of positive or negative balancing power, frequency control or reactive power, all of which are not separately considered in the calculations. Planning and approval costs include costs for the site evaluation (as a prerequisite for the permission), power plant certification, as well as project development and management.
Depending on the number of storage units per farm, the unit-specific costs for planning and approval vary from 1.07 million € at 120 units to 1.74 million € at 5 units. The annuities also depend directly on the number of installed units: with 120 units an annuity of 544 k€ can be achieved, while with only 5 installed units a 232 k€ annuity is possible.
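To make the arbitrage arithmetic concrete, a small illustrative calculation: the per-unit capacity and cycle count below are assumed values for the sake of the example, while the 6 €ct/kWh spread comes from the assessment described above. Gross yearly arbitrage revenue per unit is roughly capacity × full cycles × price spread:

```python
def annual_arbitrage_revenue(capacity_mwh, cycles_per_year, spread_ct_per_kwh):
    """Gross yearly arbitrage revenue of one storage unit, in euros."""
    kwh_per_cycle = capacity_mwh * 1000
    return kwh_per_cycle * cycles_per_year * spread_ct_per_kwh / 100

# Illustrative unit: 20 MWh usable capacity, 900 full cycles per year,
# 6 ct/kWh spread (2 ct average purchase, 8 ct average sale price).
print(annual_arbitrage_revenue(20, 900, 6))  # → 1080000.0 (euros)
```

Against this gross revenue stand the investment annuity, personnel, and maintenance costs, which is why the number of units per farm (scale effects) matters so much.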
Ecological effects
Due to the main components of the construction (primarily steel, concrete for the hollow and cables for the connection), this system presents minimal risks to the eco-system. To avoid sea animals being sucked into the turbine a fine meshed grid is installed. In addition, the flow speed of the water rushing into the hollow is kept low.
Media coverage
A video post on the public television station ZDF called the hollow concrete spheres a “possible solution to store solar and wind energy”. The data gained helped to understand the project better. For further tests on a bigger scale, Christian Dick, also a member of the Fraunhofer IEE team, is considering constructing a large concrete sphere in the open sea.
The TV station ZDF nano produced a documentary about the field study StEnSea in Lake Constance (German: Bodensee). Christian Dick was quoted saying that “the ball exactly worked like it was supposed to work”. The most important finding was that an air connection to the surface is not needed, reducing the technical effort significantly. Project leader Matthias Puchta from Fraunhofer IEE said “by pumping out the water we created a nearly total vacuum. Demonstrating that was very exciting, because nobody was able to do that before by using this technology. We showed it works.” For maintenance and possible technical problems, the technology will be located in a cylinder that is easy to recover and maintain with a robotic submarine. After all, this technology could be “a mosaic of our future energy supply".
This opinion was shared by the Swiss radio channel SRF, which reported on the project as a “potentially path-breaking experiment”. Thanks to the successful project in the lake, where energy was fed into a test grid and drawn from it, the team intends to install a concrete ball with a diameter 10 times larger than the pilot project (30 meters). Because Germany's coastal waters are too shallow, the country will not be used for further projects. The Spanish coastline offers good conditions for a long-term project, which should last between three and five years under real-life conditions and is supposed to provide the data for the subsequent commercialization.
Der Spiegel reported that the technology of StEnSea could be also interesting for offshore wind parks. The economically efficient storage of surplus energy is one of the key tasks for the grid and the energy market, as more and more renewables are taken into the system. Therefore, the technology's role in reorganizing the energy system can be crucial.
References
Energy storage
Electric power
Wave power | Stored Energy at Sea | [
"Physics",
"Engineering"
] | 2,213 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
57,682,839 | https://en.wikipedia.org/wiki/Ingle%20Brothers%20Broomcorn%20Warehouse | The Ingle Brothers Broomcorn Warehouse, in Shattuck, Oklahoma, was listed on the National Register of Historic Places in 2009. It is located at 320 NW 1st St., at Oklahoma Avenue, in Shattuck.
It was built in 1909. A photo shows it is a gable-front brick building with a stepped gable.
References
Warehouses in the United States
National Register of Historic Places in Ellis County, Oklahoma
Stepped gables | Ingle Brothers Broomcorn Warehouse | [
"Engineering"
] | 89 | [
"Stepped gables",
"Architecture"
] |
57,686,083 | https://en.wikipedia.org/wiki/NGC%203369 | NGC 3369 is a lenticular galaxy located about 175 million light-years away in the constellation Hydra. NGC 3369 was discovered by astronomer Ormond Stone in 1886 and is an outlying member of the Hydra Cluster.
See also
List of NGC objects (3001–4000)
References
External links
Hydra Cluster
Hydra (constellation)
Lenticular galaxies
3369
32191
Astronomical objects discovered in 1886
Discoveries by Ormond Stone | NGC 3369 | [
"Astronomy"
] | 84 | [
"Hydra (constellation)",
"Constellations"
] |
57,686,906 | https://en.wikipedia.org/wiki/Versatile%20Video%20Coding | Versatile Video Coding (VVC), also known as H.266, ISO/IEC 23090-3, and MPEG-I Part 3, is a video compression standard finalized on 6 July 2020, by the Joint Video Experts Team (JVET) of the VCEG working group of ITU-T Study Group 16 and the MPEG working group of ISO/IEC JTC 1/SC 29. It is the successor to High Efficiency Video Coding (HEVC, also known as ITU-T H.265 and MPEG-H Part 2). It was developed with two primary goalsimproved compression performance and support for a very broad range of applications.
Concept
In October 2015, the MPEG and VCEG formed the Joint Video Exploration Team (JVET) to evaluate available compression technologies and study the requirements for a next-generation video compression standard. The new standard has about 50% better compression rate for the same perceptual quality compared to HEVC, with support for lossless and lossy compression. It supports resolutions ranging from very low resolution up to 4K and 16K as well as 360° videos. VVC supports YCbCr 4:4:4, 4:2:2 and 4:2:0 with 8–10 bits per component, BT.2100 wide color gamut and high dynamic range (HDR) of more than 16 stops (with peak brightness of 1,000, 4,000 and 10,000 nits), auxiliary channels (for depth, transparency, etc.), variable and fractional frame rates from 0 to 120 Hz and higher, scalable video coding for temporal (frame rate), spatial (resolution), SNR, color gamut and dynamic range differences, stereo/multiview coding, panoramic formats, and still-picture coding. Work on high bit depth support (12 to 16 bits per component) started in October 2020 and was included in the second edition published in 2022. Encoding complexity of several times (up to ten times) that of HEVC is expected, depending on the quality of the encoding algorithm (which is outside the scope of the standard). The decoding complexity is about twice that of HEVC.
VVC development has been made using the VVC Test Model (VTM), a reference software codebase that was started with a minimal set of coding tools. Further coding tools have been added after being tested in Core Experiments (CEs). Its predecessor was the Joint Exploration Model (JEM), an experimental software codebase that was based on the reference software used for HEVC.
Like its predecessor, VVC uses motion-compensated DCT video coding. While HEVC supports integer discrete cosine transform (DCT) square block sizes between 4×4 and 32×32, VVC adds support for non-square DCT rectangular block sizes. VVC also introduces several intra-frame prediction modes based on these rectangular DCT blocks to provide improved motion compensation prediction.
History
JVET issued a final Call for Proposals in October 2017, and the standardization process officially began in April 2018 when the first working draft of the standard was produced.
At IBC 2018, a preliminary implementation based on VVC was demonstrated that was said to compress video 40% more efficiently than HEVC.
The content of the final standard was approved on 6 July 2020.
Schedule
October 2017: Call for proposals
April 2018: Evaluation of the proposals received and first draft of the standard
July 2019: Ballot issued for committee draft
October 2019: Ballot issued for draft international standard
6 July 2020: Completion of final standard
Licensing
To reduce the risk of the problems seen when licensing HEVC implementations, for VVC a new group called the Media Coding Industry Forum (MC-IF) was founded. However, MC-IF had no power over the standardization process, which was based on technical merit as determined by consensus decisions of JVET.
Four companies were initially vying to be patent pool administrators for VVC, in a situation similar to the previous AVC and HEVC codecs. Two companies later formed patent pools: Access Advance and MPEG LA (now known as Via-LA).
Access Advance published their licensing fee in April 2021. Via-LA published their licensing fee in January 2022.
Companies known not to be a part of the Access Advance or Via-LA patent pools as of November 2023 are: Apple, Canon, Ericsson, Fraunhofer, Google, Huawei, Humax, Intel, LG, Interdigital, Maxell, Microsoft, Oppo, Qualcomm, Samsung, Sharp and Sony.
Adoption
Content providers
In 2021 MX Player was reported to deliver content in VVC to up to 20% of its mobile customers.
Software
Encoders/decoders
Fraunhofer HHI released a source-available encoder called VVenC and decoder called VVdeC
Fraunhofer Versatile Video Encoder (VVenC)
Fraunhofer Versatile Video Decoder (VVdeC)
VVC VTM reference software
Tencent Media Lab offers a real time decoder and the Tencent Cloud service offers transcoding and streaming in its cloud infrastructure.
uvg266 open source encoder
ffmpeg starting with version 7.0 supports experimental decoding. Version 7.1 elevated support to official status. Support for encoding is currently missing.
LAV Filters, ffmpeg based DirectShow splitter and decoders for Windows, supports demuxing and decoding starting with version 0.79.
OpenVVC, an incomplete open-source VVC decoder library licensed under LGPLv2.1
Players
Spin Digital sells a real time decoder and player for Linux and Windows devices.
Elmedia Player added support in July, 2023.
MPC-HC (clsid2's fork) starting with version 2.2.0.
MPC-BE starting with version 1.7.0.
Zoom Player Steam Edition starting with version v19 beta 6 with the help of LAV Filters v0.79.
Hardware
Broadcast
The Brazilian SBTVD Forum will adopt the MPEG-I VVC codec in its forthcoming broadcast television system, TV 3.0, expected to launch in 2024. It will be used alongside MPEG-5 LCEVC as a video base layer encoder for broadcast and broadband delivery.
The European organization DVB Project, which governs digital television broadcasting standards, announced 24 February 2022 that VVC was now part of its tools for broadcasting.
The DVB tuner specification used throughout Europe, Australia, and many other regions has been revised to support the VVC (H.266) video codec, the successor to HEVC.
See also
AOMedia Video 1 (AV1)
Scalable coding
Notes
References
Further reading
External links
VVC website at the Fraunhofer Heinrich Hertz Institute with source code of: VTM or VVdeC or VVenC
Stand by for ITU H.266 compression
MPEG - Versatile Video Coding
Finalisation of VVC
H.26x
MPEG
Open standards covered by patents
Video codecs
Video compression | Versatile Video Coding | [
"Technology"
] | 1,458 | [
"Multimedia",
"MPEG"
] |
57,687,117 | https://en.wikipedia.org/wiki/Rare-earth%20barium%20copper%20oxide | Rare-earth barium copper oxide (ReBCO) is a family of chemical compounds known for exhibiting high-temperature superconductivity (HTS). ReBCO superconductors have the potential to sustain stronger magnetic fields than other superconductor materials. Due to their high critical temperature and critical magnetic field, this class of materials are proposed for use in technical applications where conventional low-temperature superconductors do not suffice. This includes magnetic confinement fusion reactors such as the ARC reactor, allowing a more compact and potentially more economical construction, and superconducting magnets to use in future particle accelerators to come after the Large Hadron Collider, which utilizes low-temperature superconductors.
Materials
Any rare-earth element can be used in a ReBCO; popular choices include yttrium (YBCO), lanthanum (LBCO), samarium (Sm123), neodymium (Nd123 and Nd422), gadolinium (Gd123) and europium (Eu123), where the numbers among parenthesis indicate the molar ratio among rare-earth, barium and copper.
YBCO
The most famous ReBCO is yttrium barium copper oxide, YBa2Cu3O7−x (or Y123), the first superconductor found with a critical temperature above the boiling point of liquid nitrogen. Its molar ratio of yttrium to barium to copper is 1 to 2 to 3, and it has a unit cell consisting of subunits, which is the typical structure of perovskites. In particular, there are three overlapping subunits, with an yttrium atom at the center of the middle one and a barium atom at the center of each of the others. Therefore, yttrium and barium are stacked according to the sequence [Ba-Y-Ba] along an axis conventionally denoted by c (the vertical direction in the figure at the top right).
The resulting cell has an orthorhombic structure, unlike other superconducting cuprates that generally have a tetragonal structure. All the corner sites of the unit cell are occupied by copper, which has two different coordinates, Cu(1) and Cu(2), with respect to oxygen. It offers four possible crystallographic sites for oxygen: O(1), O(2), O(3), and O(4).
History
Because these kinds of materials are brittle, it was difficult to create wires from them. After 2010, industrial manufacturers started to produce tapes with different layers encapsulating the ReBCO material, opening the way to commercial uses.
In September 2021 Commonwealth Fusion Systems (CFS) created a test magnet with ReBCO tape that handled a current of 40,000 amperes, with a magnetic field of 20 tesla at 20 K. One important innovation was to avoid insulating the tape, saving space and lowering required voltages. Another was the size of the magnet: 10 tons, far larger than any prior experiment. The magnet assembly consisted of 16 plates, called pancakes, each hosting a spiral winding of tape on one side and cooling channels on the other.
In 2023, the National High Magnetic Field Laboratory generated 32 tesla with a ReBCO superconducting magnet. A 40T superconducting magnet is under construction.
See also
Cuprate superconductor
List of superconductors
References
Barium compounds
Copper compounds
High-temperature superconductors
Oxides | Rare-earth barium copper oxide | [
"Chemistry"
] | 739 | [
"Oxides",
"Salts"
] |
57,687,190 | https://en.wikipedia.org/wiki/Nobushige%20Kurokawa | is a Japanese mathematician working in number theory, especially analytic number theory, multiple trigonometric function theory, zeta functions and automorphic forms. He is currently a professor emeritus at Tokyo Institute of Technology.
Books
with Shin-ya Koyama, 多重三角関数論講義 (Lectures on multiple sine functions), 2010. Lecture notes originally from lectures given April–July 1991 at the University of Tokyo.
with Shinya Koyama, Absolute Mathematics, 2010. (Japanese)
Pursuit of the Riemann Hypothesis: ABC to Z, 2012. (Japanese)
Beyond the Riemann Hypothesis: Deep Riemann Hypothesis (DRH), 2013. (Japanese)
Modern trigonometric function theory, 2013. (Japanese)
Principles of Absolute Mathematics, 2016. (Japanese)
The World of Absolute Mathematics: Riemann Hypothesis, Langlands conjecture, Sato conjecture, 2017. (Japanese)
with Shinya Koyama, Introduction to the ABC conjecture, 2018. (Japanese)
References
External links
Journey to the world of absolute mathematics at Tokyo Institute of Technology, March 28, 2017 (video)
20th-century Japanese mathematicians
21st-century Japanese mathematicians
1952 births
Living people
Abc conjecture | Nobushige Kurokawa | [
"Mathematics"
] | 233 | [
"Abc conjecture",
"Number theory"
] |
57,687,305 | https://en.wikipedia.org/wiki/Dissimilar%20friction%20stir%20welding | Dissimilar friction stir welding (DFSW) is the application of friction stir welding (FSW), invented in The Welding Institute (TWI) in 1991, to join different base metals including aluminum, copper, steel, titanium, magnesium and other materials. It is based on solid state welding that means there is no melting. DFSW is based on a frictional heat generated by a simple tool in order to soften the materials and stir them together using both tool rotational and tool traverse movements. In the beginning, it is mainly used for joining of aluminum base metals due to existence of solidification defects in joining them by fusion welding methods such as porosity along with thick Intermetallic compounds. DFSW is taken into account as an efficient method to join dissimilar materials in the last decade. There are many advantages for DFSW in compare with other welding methods including low-cost, user-friendly, and easy operation procedure resulting in enormous usages of friction stir welding for dissimilar joints. Welding tool, base materials, backing plate (fixture), and a milling machine are required materials and equipment for DFSW. On the other hand, other welding methods, such as Shielded Metal Arc Welding (SMAW) typically need highly professional operator as well as quite expensive equipment.
Principle of operation
The mechanism of DFSW is very simple. A rotating tool plunges into the interface of the parent metals, and the heat input generated by friction between the tool shoulder surface and the top surface of the base metals leads to softening of the base materials. In other words, the rotational movement of the tool mixes and stirs the parent metals and creates a softened, pasty mixture. Afterwards, the tool's traverse movement along the interface creates a joint. This results in a final bond that combines both mechanical and metallurgical bonding at the interface. These two bondings are critical in order to achieve proper mechanical properties. Butt and lap designs are the most common joint types in dissimilar friction stir welding (DFSW). Typically, one material is harder than the other. In general, the hard and soft materials are placed on the advancing and retreating sides, respectively, during welding.
Tool Geometry
Tool configuration is an important factor in achieving a sound joint. The tool consists of two parts, the tool shoulder and the tool pin, as shown in the figure below. The tool shoulder generates frictional heat, while the tool pin stirs the softened materials. Various pin and shoulder configurations may be used for DFSW. "Cylindrical", "rectangular", "triangular" and "threaded-cylindrical" are the most common tool pin profiles, while "featureless" and "scrolled" are the most common tool shoulder configurations. Tool material selection depends on the base materials to be joined. For example, hot-working alloy steel is generally used for aluminum/copper joints, while tungsten carbide is common for harder metals, as in titanium/aluminum joints.
Welding Parameters
In DFSW, mechanical properties mainly include tensile strength, hardness, yield strength, and elongation. Selecting optimum welding parameters results in achieving proper mechanical properties of the joint. Tool rotational speed (rpm), tool traverse speed (mm/min), tool tilt angle (degree), tool offset (mm), tool penetration (mm), and tool geometry are the most important welding parameters in DFSW. The tool center is typically placed on the centerline of the joint for similar joints such as aluminum/aluminum or copper/copper joints; in contrast, in DFSW it is shifted towards the softer material, a displacement known as tool offset. This is a significant factor in achieving a joint with smaller welding defects and higher mechanical properties. Generally, the harder and softer materials are placed on the advancing side (AS) and retreating side (RS), respectively. Apart from the tool geometry, which plays a critical role in the final mechanical and metallurgical properties of the weldment, the tool rotational speed and tool offset are considered the most important welding parameters during DFSW.
Heat Generation
A non-consumable rotating tool is plunged into the interface of the parent materials. Frictional heat arising from the tool shoulder during welding plasticizes the parent materials, leading to local plastic deformation. The localized heat generated by the tool arises as follows: at the initial stage, it comes primarily from friction between the plunged pin and the parent materials; once the shoulder touches the top surface, it is mainly produced by friction between the shoulder surface and the top surface of the base metals. The softened materials are then stirred together by the rotating pin, resulting in a solid-state bond. Frigaard et al. showed that tool rotational speed and tool shoulder diameter are the main contributing factors in heat generation.
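As a rough illustration of why shoulder size and rotational speed dominate the heat input, one can use a simple uniform-pressure sliding-friction model (an assumption here, not the Frigaard model itself): integrating the frictional torque over a flat circular shoulder gives Q = (2/3)·π·μ·p·ω·R³. All numeric values below are illustrative, not from a specific experiment.

```python
import math

def shoulder_heat_input(mu, pressure, omega, radius):
    """Frictional heat (W) from a flat circular shoulder.

    Uniform-pressure, pure-sliding model: integrating
    dQ = omega * r * (mu * pressure * 2*pi*r) dr over 0..R
    gives Q = (2/3) * pi * mu * pressure * omega * R**3.
    """
    return (2.0 / 3.0) * math.pi * mu * pressure * omega * radius ** 3

# Assumed values: friction coefficient 0.4, axial pressure 50 MPa,
# 1000 rpm, 10 mm shoulder radius.
omega = 2 * math.pi * 1000 / 60  # rad/s
q = shoulder_heat_input(0.4, 50e6, omega, 0.010)
print(f"heat input ~ {q / 1000:.1f} kW")
```

Note the cubic dependence on the shoulder radius: doubling the radius multiplies the heat input by eight, consistent with the shoulder diameter being a main contributing factor.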
Material Flow
The mechanism of bonding in DFSW rests on two simple concepts. First, the stirred material, a mixed flow of the soft and hard metals, is forged into the interface of the harder material, leading to a strong mechanical bond at the interface. Second, a complementary metallurgical bond is formed at the interface, enhancing the mechanical properties of the joint. Material flow during DFSW depends on various parameters, including the welding process parameters, tool geometry, and base materials; tool geometry is the most important factor in achieving appropriate material flow.
Defects
Welding defects are quite common in DFSW. They include the tunneling defect, fragment defects, cracks, voids, surface cavities or grooves, and excessive flash formation. Among these, the tunneling defect is the most common; it results from improper material flow during welding and is mainly attributed to inappropriate selection of welding parameters, particularly welding speed, rotational speed, tool design and tool penetration, leading to either abnormal stirring or insufficient heat input. The formation of coarse fragments of the harder material within the matrix of the softer material is another typical defect observed only in DFSW. During DFSW, the pasty material generally behaves like a metal matrix composite, with the softer material acting as the matrix and the harder material as the reinforcement. It is therefore important to keep the harder material in relatively small pieces in order to achieve the best material flow, and any factor that causes the formation of large pieces of harder material leads to fragment defects. Tool offset and tool pin design are regarded as the most significant contributing factors in the formation of fragment defects, since large pieces of harder material within the softer matrix disturb the material flow: it is difficult to stir and mix the pasty materials when one of them is not relatively fine. In addition, fragment defects are usually accompanied by other defects such as voids and cracks.
Typical Characteristics
DFSW exhibits various characteristics in terms of hardness distribution, tensile strength, microstructure, the formation of intermetallic compounds, and the formation of a composite structure within the stir zone. The majority of dissimilar joints fabricated by FSW show similar behavior.
Hardness
Since the base materials have different mechanical properties, the hardness distribution is not homogeneous, for two reasons. First, the differing mechanical properties of the base materials, including their hardness, cause inhomogeneity in the weldment. Second, the different microstructures and grain sizes of the welding zones, including the stir zone, TMAZ, and HAZ, result in varying hardness. Moreover, the hardness within the nugget (stir) zone itself is very inhomogeneous because of the formation of onion rings (a composite structure) and IMCs. As a result, dissimilar joints show an inhomogeneous hardness distribution in the nugget zone.
Microstructure
Four different welding zones are typically observed in dissimilar joints made by FSW: the stir zone (SZ) or nugget zone, the thermo-mechanically affected zone (TMAZ), the heat-affected zone (HAZ), and the base metals (BM). The microstructure of the weldment shows remarkable grain refinement in the stir zone along with elongation of the grains in the TMAZ. The intense plastic deformation induced by the tool's rotational and traverse movements accounts for the notable grain refinement in the stir zone. The HAZ presents relatively coarser grains, attributable to its lower cooling rate in comparison with the other welding zones. Some phenomena are typical of dissimilar friction stir welding, including the formation of intermetallic compounds (IMCs) and the appearance of a composite-like structure (CS) in various patterns, notably onion rings, as shown in the figure below. IMCs and the CS enhance the mechanical behavior of the joint depending on their condition, such as the thickness of the IMCs and the distribution pattern of the composite-like structure. Proper selection of welding parameters optimizes the formation of IMCs and the CS, yielding the highest mechanical properties. As pointed out before, rotational speed, welding speed, and tool offset, along with the tool pin design, are the most important factors affecting the mechanical and metallurgical properties in DFSW. Unlike conventional fusion welding methods, which are accompanied by substantially thick interfacial IMCs, forming a controlled interfacial metallurgical bond during DFSW is essential to achieving a sound joint; the IMC layer should be kept in an optimum condition, i.e. thin, uniform, and continuous, to enhance and improve the mechanical properties.
IMCs
IMCs are another typical phenomenon in DFSW. Several criteria on the IMCs must be met in order to achieve a sound joint, concerning their thickness, uniformity, and continuity. The most common IMCs appearing in aluminum/copper joints are Al4Cu9, Al2Cu3, and Al2Cu. IMCs form in two main places: at the interface, and around the edges of the particles dispersed in the nugget zone. Depending on the size of the particles of harder material dispersed in the matrix of the softer material, coarse particles partially transform to IMCs, mostly around their outer edges, while fine particles transform to IMCs completely. The average thickness of the IMC layers is less than 2 micrometers; therefore, particles smaller than about 2 micrometers transform completely to IMCs, enhancing the mechanical properties of the nugget zone.
Tensile Strength
Another important characteristic in DFSW is the final tensile strength, and the majority of dissimilar weldments show a similar trend. Since the two base materials differ, one is softer than the other; in an aluminum-to-copper joint, for example, aluminum is the softer material. The tensile strength of a dissimilar joint is a fraction of the tensile strength of the softer material: the final tensile strength of the weldment is usually less than that of both base materials, but to be acceptable in industry it should generally exceed 70 percent of the tensile strength of the softer material. The fracture behavior of tensile specimens shows that the majority of joints fail at the interface with a brittle fracture, which can be attributed to the IMCs developed there. Although the IMCs can successfully improve the tensile strength, the resulting brittle fracture is one of the outstanding challenges in dissimilar joints fabricated by FSW.
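The 70-percent acceptance criterion described above can be written as a simple joint-efficiency check. The function names and the sample strength values below are illustrative assumptions, not data from a specific study.

```python
def joint_efficiency(uts_joint_mpa: float, uts_softer_mpa: float) -> float:
    """Joint efficiency: joint tensile strength relative to the softer base metal."""
    return uts_joint_mpa / uts_softer_mpa

def is_acceptable(uts_joint_mpa: float, uts_softer_mpa: float,
                  threshold: float = 0.70) -> bool:
    """A DFSW joint is commonly deemed acceptable above ~70% efficiency."""
    return joint_efficiency(uts_joint_mpa, uts_softer_mpa) >= threshold

# Example: a joint at 85 MPa against a softer base metal
# with 110 MPa tensile strength (assumed values).
print(is_acceptable(85.0, 110.0))  # 85/110 ~ 0.77, so acceptable
```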
Formation of composite structure
Because two different materials are present in DFSW, the formation of a composite structure within the nugget zone is inevitable. Typically it appears as onion rings in the nugget (stir) zone of the softer matrix, as shown in the figure below: fine particles of the advancing-side material (the harder material) disperse throughout the stir zone of the retreating-side material (the softer material). This is the main reason for the inhomogeneous hardness distribution in the stir zone.
Challenge
FSW can be an efficient method for joining dissimilar materials, and the outcomes in terms of tensile strength, shear strength, and hardness distribution are promising. However, most joints fracture at the interface, and even those that rupture in the base metals show brittle behavior, i.e. low elongation, which can be attributed to the formation of IMCs. A balance between tensile strength and ductility of the weldment is needed before dissimilar weldments can be used safely in industrial applications: proper ductility and toughness are required for applications that must resist impact and shock loading, and the majority of fabricated weldments are not yet sufficiently tough for them. It is therefore worthwhile to focus current and future work on improving the toughness of the weldments while keeping the tensile strength at a proper value.
References
Welding
Friction
Friction stir welding | Dissimilar friction stir welding | [
"Physics",
"Chemistry",
"Engineering"
] | 2,660 | [
"Mechanical phenomena",
"Physical phenomena",
"Force",
"Friction",
"Physical quantities",
"Welding",
"Surface science",
"Mechanical engineering"
] |
57,687,371 | https://en.wikipedia.org/wiki/Multimodal%20sentiment%20analysis | Multimodal sentiment analysis is a technology for traditional text-based sentiment analysis, which includes modalities such as audio and visual data. It can be bimodal, which includes different combinations of two modalities, or trimodal, which incorporates three modalities. With the extensive amount of social media data available online in different forms such as videos and images, the conventional text-based sentiment analysis has evolved into more complex models of multimodal sentiment analysis, which can be applied in the development of virtual assistants, analysis of YouTube movie reviews, analysis of news videos, and emotion recognition (sometimes known as emotion detection) such as depression monitoring, among others.
As in traditional sentiment analysis, one of the most basic tasks in multimodal sentiment analysis is sentiment classification, which classifies different sentiments into categories such as positive, negative, or neutral. The complexity of analyzing text, audio, and visual features to perform such a task requires the application of different fusion techniques, such as feature-level, decision-level, and hybrid fusion. The performance of these fusion techniques and of the classification algorithms applied is influenced by the type of textual, audio, and visual features employed in the analysis.
Features
Feature engineering, which involves the selection of features that are fed into machine learning algorithms, plays a key role in the sentiment classification performance. In multimodal sentiment analysis, a combination of different textual, audio, and visual features are employed.
Textual features
Similar to the conventional text-based sentiment analysis, some of the most commonly used textual features in multimodal sentiment analysis are unigrams and n-grams, which are basically a sequence of words in a given textual document. These features are applied using bag-of-words or bag-of-concepts feature representations, in which words or concepts are represented as vectors in a suitable space.
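As a minimal sketch of the n-gram features described above (plain Python, no particular NLP toolkit assumed), a bag-of-words representation simply counts contiguous word sequences:

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-word sequences in a token list."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bag_of_words(text, max_n=2):
    """Unigram and n-gram counts: a simple bag-of-words feature representation."""
    tokens = text.lower().split()
    bag = Counter()
    for n in range(1, max_n + 1):
        bag.update(ngrams(tokens, n))
    return bag

features = bag_of_words("the movie was great the acting was great")
print(features["great"])      # unigram count: 2
print(features["was great"])  # bigram count: 2
```

In practice these counts would be mapped to a fixed vocabulary to obtain the vector representation mentioned above.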
Audio features
Sentiment and emotion characteristics are prominent in the different phonetic and prosodic properties contained in audio features. Some of the most important audio features employed in multimodal sentiment analysis are mel-frequency cepstral coefficients (MFCC), spectral centroid, spectral flux, beat histogram, beat sum, strongest beat, pause duration, and pitch. OpenSMILE and Praat are popular open-source toolkits for extracting such audio features.
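To illustrate one of these features, the spectral centroid is the magnitude-weighted mean frequency of a signal's spectrum. The sketch below uses a deliberately naive O(N²) DFT so that it stays dependency-free; a real feature extractor would use an FFT library or a toolkit such as openSMILE.

```python
import math

def dft_magnitudes(signal):
    """Magnitude spectrum for bins 0..N/2 via a naive DFT (illustration only)."""
    n = len(signal)
    mags = []
    for k in range(n // 2 + 1):
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
        im = -sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
        mags.append(math.hypot(re, im))
    return mags

def spectral_centroid(signal, sample_rate):
    """Magnitude-weighted mean frequency (Hz) of the spectrum."""
    mags = dft_magnitudes(signal)
    freqs = [k * sample_rate / len(signal) for k in range(len(mags))]
    return sum(f * m for f, m in zip(freqs, mags)) / sum(mags)

# A pure 100 Hz tone should have its centroid at ~100 Hz.
fs, n = 1000, 200
tone = [math.cos(2 * math.pi * 100 * i / fs) for i in range(n)]
print(round(spectral_centroid(tone, fs)))  # ~100
```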
Visual features
One of the main advantages of analyzing videos rather than texts alone is the presence of rich sentiment cues in visual data. Visual features include facial expressions, which are of paramount importance in capturing sentiments and emotions, as they are a main channel for conveying a person's present state of mind. Specifically, smiles are considered to be among the most predictive visual cues in multimodal sentiment analysis. OpenFace is an open-source facial analysis toolkit available for extracting and understanding such visual features.
Fusion techniques
Unlike traditional text-based sentiment analysis, multimodal sentiment analysis undergoes a fusion process in which data from the different modalities (text, audio, or visual) are fused and analyzed together. Existing approaches to multimodal sentiment analysis data fusion can be grouped into three main categories: feature-level, decision-level, and hybrid fusion; the performance of the sentiment classification depends on which fusion technique is employed.
Feature-level fusion
Feature-level fusion (sometimes known as early fusion) gathers all the features from each modality (text, audio, or visual) and joins them together into a single feature vector, which is eventually fed into a classification algorithm. One of the difficulties in implementing this technique is the integration of the heterogeneous features.
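A feature-level (early) fusion step is essentially vector concatenation. The sketch below assumes each modality's features have already been extracted as numeric lists; the values are toy numbers, which also shows the heterogeneity problem noted above (the features live on very different scales).

```python
def early_fusion(text_feats, audio_feats, visual_feats):
    """Concatenate per-modality feature vectors into one joint vector."""
    return list(text_feats) + list(audio_feats) + list(visual_feats)

# Toy, pre-extracted features (illustrative values only).
joint = early_fusion([0.8, 0.1], [120.0, 0.3, 5.2], [0.9])
print(len(joint))  # 6
# `joint` would then be fed to a single classifier, typically after
# per-feature normalization to handle the differing scales.
```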
Decision-level fusion
Decision-level fusion (sometimes known as late fusion) feeds the data from each modality (text, audio, or visual) independently into its own classification algorithm, and obtains the final sentiment classification by fusing each result into a single decision vector. An advantage of this fusion technique is that it eliminates the need to fuse heterogeneous data, and each modality can utilize its most appropriate classification algorithm.
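Decision-level (late) fusion combines each modality's independent prediction; one common simple rule (assumed here, among other possibilities such as weighted voting) is averaging the per-modality class probabilities:

```python
def late_fusion(*modality_probs):
    """Average per-modality class-probability dicts and pick the top class."""
    classes = modality_probs[0].keys()
    fused = {c: sum(p[c] for p in modality_probs) / len(modality_probs)
             for c in classes}
    return max(fused, key=fused.get), fused

# Toy outputs of three independent classifiers (illustrative values).
label, fused = late_fusion(
    {"positive": 0.7, "negative": 0.1, "neutral": 0.2},  # text classifier
    {"positive": 0.4, "negative": 0.4, "neutral": 0.2},  # audio classifier
    {"positive": 0.6, "negative": 0.3, "neutral": 0.1},  # visual classifier
)
print(label)  # positive
```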
Hybrid fusion
Hybrid fusion is a combination of feature-level and decision-level fusion techniques, which exploits complementary information from both methods during the classification process. It usually involves a two-step procedure wherein feature-level fusion is initially performed between two modalities, and decision-level fusion is then applied as a second step, to fuse the initial results from the feature-level fusion, with the remaining modality.
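The two-step procedure can be sketched by composing the two ideas: early-fuse two modalities, classify the joint vector, then late-fuse that decision with the remaining modality. The classifier here is a stand-in function, not a real model.

```python
def hybrid_fusion(text_feats, audio_feats, visual_probs, joint_classifier):
    """Step 1: feature-level fusion of text + audio; step 2: decision-level
    fusion of the joint prediction with the visual modality's prediction."""
    joint_vector = list(text_feats) + list(audio_feats)  # early fusion
    joint_probs = joint_classifier(joint_vector)         # classify fused vector
    fused = {c: (joint_probs[c] + visual_probs[c]) / 2   # late fusion
             for c in joint_probs}
    return max(fused, key=fused.get)

# A stand-in "classifier": positive score equals the mean feature value.
def toy_classifier(vec):
    score = sum(vec) / len(vec)
    return {"positive": score, "negative": 1 - score}

label = hybrid_fusion([0.9, 0.8], [0.7],
                      {"positive": 0.6, "negative": 0.4},
                      toy_classifier)
print(label)  # positive
```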
Applications
Similar to text-based sentiment analysis, multimodal sentiment analysis can be applied in the development of different forms of recommender systems, such as in the analysis of user-generated videos of movie reviews and general product reviews, to predict the sentiments of customers and subsequently create product or service recommendations. Multimodal sentiment analysis also plays an important role in the advancement of virtual assistants through the application of natural language processing (NLP) and machine learning techniques. In the healthcare domain, multimodal sentiment analysis can be utilized to detect certain medical conditions such as stress, anxiety, or depression. Multimodal sentiment analysis can also be applied in understanding the sentiments contained in video news programs, which is considered a complicated and challenging domain, as sentiments expressed by reporters tend to be less obvious or neutral.
References
Natural language processing
Affective computing
Social media
Machine learning
Multimodal interaction | Multimodal sentiment analysis | [
"Technology",
"Engineering"
] | 1,084 | [
"Machine learning",
"Natural language processing",
"Computing and society",
"Artificial intelligence engineering",
"Natural language and computing",
"Social media"
] |
57,687,722 | https://en.wikipedia.org/wiki/Spinterface | Spinterface is a term coined to indicate an interface between a ferromagnet and an organic semiconductor.
This is a widely investigated topic in molecular spintronics, since interfaces play a huge part in the functioning of a device. In particular, spinterfaces are widely studied in the scientific community because of their hybrid organic/inorganic composition. In fact, the hybridization between the metal and the organic material can be controlled by acting on the molecules, which are more responsive to electrical and optical stimuli than metals. This gives rise to the possibility of efficiently tuning the magnetic properties of the interface at the atomic scale.
History
The field of spintronics, the scientific field that studies spin-dependent electron transport in solid-state devices, emerged in the last decades of the 20th century, first with the observation of the injection of a spin-polarized current from a ferromagnetic metal into a paramagnetic metal, and subsequently with the discovery of tunnel magnetoresistance and giant magnetoresistance. The field then evolved towards spin-orbit related phenomena, such as the Rashba effect. Only more recently has spintronics been extended to the organic world, with the idea of exploiting the weak spin-relaxation mechanisms of molecules in order to use them for spin transport. Research in this field started with hybrid replicas of inorganic spintronic devices, such as spin valves and magnetic tunneling junctions, trying to obtain spin transport in molecular films. Some devices, however, did not behave as expected, for example vertical spin valves displaying a negative magnetoresistance. It was then quickly understood that the molecular layers do not merely play a transport role: they can also act on the spin polarization of the ferromagnet at the interface. Because of this, interest in ferromagnet/organic interfaces rapidly increased in the scientific community and the term "spinterface" was born. Research is currently aimed at building devices with interfaces engineered to tailor the spin injection.
Scientific interest
The shrinking of device sizes and the attention towards low-power applications have led to ever-growing interest in the physics of surfaces and interfaces, which play a fundamental role in the functioning of many devices. The breaking of the bulk symmetry which occurs at a surface leads to different physical and chemical properties, sometimes impossible to find in the bulk material. In particular, when a solid-state material is interfaced with another solid, the terminations of the two different materials influence each other by means of chemical bonds, and the behavior of the interface is highly influenced by the properties of the materials. In spinterfaces, a metal and an organic semiconductor, which display very different electronic properties, are interfaced and usually form a strong hybridization. With the final aim of being able to tune and change the electronic and magnetic behavior of the interface, spinterfaces are studied both by inserting them into spintronic devices and, on a more basic level, by investigating the growth of ultra-thin molecular layers on ferromagnetic substrates with a surface-science approach. The aim of building such interfaces is, on one side, to exploit the spin-polarized character of the electronic structure of the ferromagnet to induce a spin polarization in the molecular layer and, on the other hand, to influence the magnetic character of the ferromagnetic layer by means of hybridization. Combined with the fact that molecules usually have a very high responsivity to stimuli (typically impossible to achieve in inorganic materials), this raises the hope of being able to easily change the character of the hybridization, hence tuning the properties of the spinterface. This could give rise to a new class of spintronic devices in which the spinterface plays a fundamental and active role.
Physics and applications
Organic semiconductors are currently used in various applications, for example OLED displays, which can be flexible, thinner, faster and more power efficient than LCD screens, and organic field-effect transistors, intended for large, low-cost electronic products and biodegradable electronics.
In terms of spintronic applications, there are no available commercial devices yet, but the applied research is headed towards the use of spinterfaces mainly for magnetic tunneling junctions and organic spin valves.
Spin-Filtering
The physical principle mainly exploited at spinterfaces is spin filtering. This is schematized simply in the figure: when one considers the ferromagnet and the organic semiconductor on their own (panel a), the density of states (DOS) of the metal is unbalanced between the two spin channels, with the difference between the up and down DOS at the Fermi level governing the spin polarization of the current flow; the DOS of the organic semiconductor has no unbalance between the spin channels and displays localized energy levels, namely the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO), with zero DOS at the Fermi level. When the two materials are put into contact they influence each other's DOS at the interface: the main effects are a broadening of the molecular orbitals and a possible shift of their energy. These effects are in general spin-dependent, since they arise from the hybridization, which depends strictly on the DOS of the two materials, itself spin-unbalanced in the case of the ferromagnet. By way of example, panel b represents the case of parallel spin polarization of the injected current, while panel c schematizes an antiparallel spin polarization of the current injected in the semiconductor. The injected current will thus be polarized according to the interface DOS at the Fermi level, and since molecules usually have intrinsically weak spin-relaxation mechanisms, molecular layers are excellent candidates for spin-transport applications. With a good choice of materials one is then able to filter the spins at the spinterface.
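The degree of spin filtering can be quantified by the spin polarization P = (D↑ − D↓)/(D↑ + D↓) evaluated at the Fermi level. The DOS values below are illustrative toy numbers, not measured data; they show how hybridization can in principle reduce or even invert the polarization at the interface.

```python
def spin_polarization(dos_up, dos_down):
    """Spin polarization from the two spin-channel DOS at the Fermi level."""
    return (dos_up - dos_down) / (dos_up + dos_down)

# Bare ferromagnet vs. hybridized interface (toy values):
print(spin_polarization(0.7, 0.3))  # bare electrode:      +0.4
print(spin_polarization(0.2, 0.6))  # hybridized interface: -0.5
```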
Magnetic Tunneling Junction
Applied research on spinterfaces is often focused on studying the tunnel magnetoresistance (TMR) in hybrid magnetic tunneling junctions (MTJs). Conventional MTJs are composed of two ferromagnetic electrodes separated by an insulating layer thin enough for electron tunneling events to be relevant. The idea of using spinterfaces consists in replacing the inorganic insulating barrier with an organic one, motivated by the flexibility, low cost and longer spin-relaxation times of molecules and by the possibility of chemically engineering the interfaces. The physical principle behind MTJs is that the tunneling current through the junction depends on the relative orientation of the magnetizations of the ferromagnetic electrodes. In fact, in the Jullière model, the tunneling current that passes through the junction is proportional to the sum of the products of the DOS of the single spin channels:

$$G \propto D_1^{\uparrow} D_2^{\uparrow} + D_1^{\downarrow} D_2^{\downarrow},$$

where $D_i^{\uparrow}$ and $D_i^{\downarrow}$ denote the spin-up and spin-down DOS of electrode $i$ at the Fermi level.
This picture of spin-dependent tunneling is represented in the figure: usually there is a larger tunneling current when the electrode magnetizations are aligned parallel, because in this case the majority-majority term $D_1^{\uparrow} D_2^{\uparrow}$ is much larger than all the other terms, making $G_P > G_{AP}$.
By changing the relative orientation of the magnetization of the electrodes it is possible to control the conductance state of the tunneling junction and use this principle for applications, for example read-heads of hard disk drives and MRAMs.
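In the Jullière model the TMR follows directly from the two electrode spin polarizations, TMR = (G_P − G_AP)/G_AP = 2P₁P₂/(1 − P₁P₂). The sketch below computes it from illustrative DOS values (toy numbers, not experimental data):

```python
def polarization(dos_up, dos_down):
    """Spin polarization of an electrode at the Fermi level."""
    return (dos_up - dos_down) / (dos_up + dos_down)

def julliere_tmr(p1, p2):
    """Jullière TMR from the two electrode spin polarizations."""
    return 2 * p1 * p2 / (1 - p1 * p2)

# Two identical electrodes with 50% polarization (toy values):
p = polarization(0.75, 0.25)              # P = 0.5
print(f"TMR = {julliere_tmr(p, p):.0%}")  # TMR ~ 67%
```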
If an organic material is inserted as the tunneling barrier, the picture becomes more complex, as spin-hybridization-induced polarized states form at the interfaces. These states may affect the tunneling transmission coefficient, which is kept constant in the Jullière model. Barraud et al., in a Nature Physics paper, developed a spin-transport model that takes into account the effect of the spinterface hybridization. They observed that the role of this hybridization in the spin tunneling process is not only relevant but also capable of inverting the sign of the TMR. This opens the door to a new research front aimed at tailoring the properties of spintronic devices through the right combination of ferromagnetic metals and molecules.
Spin Valves
Conventional spin valves are built very similarly to magnetic tunneling junctions; the difference is that the two ferromagnetic electrodes are separated by a non-magnetic metal instead of an insulator. The physical principle exploited in this case is no longer tunneling but electrical resistance.
The spin-polarized current, coming from one ferromagnetic electrode, can travel in a non-magnetic metal for a certain distance, given by the spin diffusion length of that metal. When the current enters another ferromagnetic material, the relative orientation of the magnetization with respect to the first electrode can lead to a change in the resistance of the junction: if the alignment of the magnetizations is parallel, the spin valve will exhibit a low resistance state, while, in the case of antiparallel alignment, reflection and spin flip scattering events give rise to a high resistance state. From these considerations one can define and evaluate the magnetoresistance of the spin valve:
$$\mathrm{MR} = \frac{R_{AP} - R_{P}}{R_{P}},$$

where $R_{AP}$ and $R_{P}$ are respectively the resistances for the antiparallel and parallel alignment.
The usual way of creating the possibility of having both parallel and antiparallel alignment is either pinning one of the electrodes by means of exchange bias or directly using materials with different coercive fields for the two electrodes (pseudo spin valves). The proposed use of spinterfaces in spin valve applications is to interface one of the electrodes with a molecular layer capable of tuning the magnetization properties of the electrode through a change in hybridization. This change of hybridization at the spinterface can in principle be induced both by light (making these systems suitable for ultra-fast applications) and by electric voltages. If this process is reversible, there is the possibility of switching from high to low resistance very effectively, making the devices faster and more efficient.
See also
Spintronics
Spin valve
Tunnel magnetoresistance
Giant magnetoresistance
Molecular electronics
Orbital hybridisation
References
Spintronics | Spinterface | [
"Physics",
"Materials_science"
] | 2,109 | [
"Spintronics",
"Condensed matter physics"
] |
57,687,737 | https://en.wikipedia.org/wiki/Incremental%20deformations | In solid mechanics, the linear stability analysis of an elastic solution is studied using the method of incremental deformations superposed on finite deformations. The method of incremental deformation can be used to solve static, quasi-static and time-dependent problems. The governing equations of the motion are ones of the classical mechanics, such as the conservation of mass and the balance of linear and angular momentum, which provide the equilibrium configuration of the material. The main corresponding mathematical framework is described in the main Raymond Ogden's book Non-linear elastic deformations and in Biot's book Mechanics of incremental deformations, which is a collection of his main papers.
Nonlinear Elasticity
Kinematics and Mechanics
Let $\mathcal{B}_0, \mathcal{B}_a$ be the two regions of three-dimensional Euclidean space occupied by the material at two different instants of time. Let $\boldsymbol{\chi}$ be the deformation which transforms the material from $\mathcal{B}_0$, i.e. the material/reference configuration, to the loaded configuration $\mathcal{B}_a$, i.e. the current configuration. Let $\boldsymbol{\chi}$ be a $C^1$-diffeomorphism from $\mathcal{B}_0$ to $\mathcal{B}_a$, with $\mathbf{x} = \boldsymbol{\chi}(\mathbf{X})$ being the current position vector, given as a function of the material position $\mathbf{X}$. The deformation gradient is given by

$$\mathbf{F} = \operatorname{Grad} \boldsymbol{\chi}(\mathbf{X}) = \frac{\partial \boldsymbol{\chi}}{\partial \mathbf{X}}.$$

Considering a hyperelastic material with an elastic strain energy density $W(\mathbf{F})$, the Piola-Kirchhoff stress tensor is given by $\mathbf{P} = \partial W / \partial \mathbf{F}$.

For a quasi-static problem, without body forces, the equilibrium equation is

$$\operatorname{Div} \mathbf{P} = \mathbf{0},$$

where $\operatorname{Div}$ is the divergence with respect to the material coordinates.

If the material is incompressible, i.e. the volume of every subdomain does not change during the deformation, a Lagrange multiplier $p$ is typically introduced to enforce the internal isochoric constraint $\det \mathbf{F} = 1$, so that the expression of the Piola stress tensor becomes

$$\mathbf{P} = \frac{\partial W}{\partial \mathbf{F}} - p\, \mathbf{F}^{-T}.$$
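As a quick numerical illustration of the isochoric constraint det F = 1, an incompressible uniaxial stretch has deformation gradient F = diag(λ, λ^(−1/2), λ^(−1/2)). The helper below (plain Python, illustrative only, with names chosen here) verifies that the determinant stays 1 for any stretch λ:

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def uniaxial_incompressible(lam):
    """Deformation gradient of an isochoric uniaxial stretch of ratio lam."""
    t = lam ** -0.5  # lateral contraction preserving volume
    return [[lam, 0.0, 0.0],
            [0.0, t, 0.0],
            [0.0, 0.0, t]]

for lam in (1.0, 1.3, 2.0):
    print(abs(det3(uniaxial_incompressible(lam)) - 1.0) < 1e-12)  # True
```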
Boundary conditions
Let $\partial\mathcal{B}_0$ be the boundary of $\mathcal{B}_0$, the reference configuration, and $\partial\mathcal{B}_a$ the boundary of $\mathcal{B}_a$, the current configuration. One defines the subset $\Gamma_D$ of $\partial\mathcal{B}_0$ on which Dirichlet conditions are applied, while Neumann conditions hold on $\Gamma_N$, such that $\partial\mathcal{B}_0 = \Gamma_D \cup \Gamma_N$. If $\mathbf{u}^{*}$ is the displacement vector to be assigned on the portion $\Gamma_D$ and $\mathbf{t}^{*}$ is the traction vector to be assigned on the portion $\Gamma_N$, the boundary conditions can be written in mixed form, such as

$$\mathbf{u} = \mathbf{u}^{*} \ \text{on } \Gamma_D, \qquad \mathbf{P}^{T}\mathbf{N} = \mathbf{t}^{*} \ \text{on } \Gamma_N,$$

where $\mathbf{u} = \mathbf{x} - \mathbf{X}$ is the displacement and the vector $\mathbf{N}$ is the unit outward normal to $\Gamma_N$.
Basic solution
The problem so defined is called a boundary value problem (BVP), and one considers a given solution of it, the basic solution. Since the stress depends nonlinearly on the deformation gradient, this solution is generally not unique, and it depends on the geometrical and material parameters of the problem. One therefore employs the method of incremental deformations in order to detect the existence of an adjacent solution at a critical value of a dimensionless parameter, called the control parameter, which "controls" the onset of the instability. This means that by increasing the value of this parameter, at a certain point new solutions appear; the selected basic solution is then no longer stable. Physically, at a certain point the stored energy of the basic solution, i.e. the integral of the strain energy density over the whole domain, becomes greater than that of the new solutions. To restore equilibrium, the configuration of the material moves to another configuration which has lower energy.
Method of incremental deformations superposed on finite deformations
To apply this method, one superposes a small displacement $\delta\mathbf{x}$ on the finite-deformation basic solution $\mathbf{x}^{0}$, so that

$$\bar{\mathbf{x}} = \mathbf{x}^{0} + \delta\mathbf{x},$$

where $\bar{\mathbf{x}}$ is the perturbed position and $\delta\mathbf{x}$ maps the basic position vector into the perturbed configuration.

In the following, incremental variables are indicated by $\delta(\bullet)$, while perturbed ones are indicated by $\bar{(\bullet)}$.
Deformation gradient
The perturbed deformation gradient is given by

$$\bar{\mathbf{F}} = \mathbf{F} + \delta\mathbf{F} = (\mathbf{I} + \boldsymbol{\Gamma})\,\mathbf{F},$$

where $\boldsymbol{\Gamma} = \operatorname{grad}(\delta\mathbf{x})$, and $\operatorname{grad}$ is the gradient operator with respect to the current configuration.
Stresses
The perturbed Piola stress is given by:
where the colon denotes the contraction between two tensors, here a fourth-order tensor and a second-order tensor. Since the stress depends on the deformation through the deformation gradient, its expression can be rewritten by emphasizing this dependence, such as
If the material is incompressible, one gets
where is the increment in and is called the elastic moduli associated to the pairs .
It is useful to introduce the push-forward of the perturbed Piola stress, defined as
where is also known as the tensor of instantaneous moduli, whose components are:
.
Incremental governing equations
Expanding the equilibrium equation around the basic solution, one gets
Since the basic solution satisfies the equilibrium equation at zero order, the incremental equation can be rewritten as

$$\operatorname{div}\, \delta\mathbf{P}_{0} = \mathbf{0},$$

where $\operatorname{div}$ is the divergence operator with respect to the actual configuration and $\delta\mathbf{P}_{0}$ is the push-forward of the incremental stress.
The incremental incompressibility constraint reads

$$\det \bar{\mathbf{F}} = 1.$$

Expanding this equation around the basic solution, as before, one gets

$$\operatorname{div}(\delta\mathbf{x}) = 0.$$
Incremental boundary conditions
Let $\delta\mathbf{u}^{*}$ and $\delta\mathbf{t}^{*}$ be the prescribed increments of $\mathbf{u}^{*}$ and $\mathbf{t}^{*}$ respectively. Hence, the perturbed boundary conditions are

$$\delta\mathbf{x} = \delta\mathbf{u}^{*} \ \text{on } \Gamma_D, \qquad \delta\mathbf{P}_{0}^{T}\,\mathbf{n} = \delta\mathbf{t}^{*} \ \text{on } \Gamma_N,$$

where $\delta\mathbf{x}$ is the incremental displacement and $\mathbf{n}$ is the unit outward normal.
Solution of the incremental problem
The incremental equations
represent the incremental boundary value problem (BVP) and define a system of partial differential equations (PDEs). The unknowns of the problem depend on the considered case. In the first, compressible case, there are three unknowns: the components of the incremental deformation, which are linked to the perturbed deformation. In the incompressible case, one also has to take into account the increment of the Lagrange multiplier introduced to impose the isochoric constraint.
The main difficulty in solving this problem is transforming it into a form more suitable for an efficient and robust numerical solution procedure. The one used in this area is the Stroh formalism. It was originally developed by Stroh for a steady-state elastic problem and allows the set of four PDEs with the associated boundary conditions to be transformed into a set of first-order ODEs with initial conditions. The number of equations depends on the dimension of the space in which the problem is set. To do this, one applies separation of variables and assumes periodicity in a given direction, depending on the situation considered. In particular cases, the system can be rewritten in a compact form by using the Stroh formalism. Indeed, the system takes the form
where is the vector which contains all the unknowns of the problem, is the only variable on which the rewritten problem depends and the matrix is so-called Stroh matrix and it has the following form
where each block is a matrix whose dimension depends on the dimension of the problem. Moreover, a crucial property of this approach is that , i.e. is the Hermitian transpose of .
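In a typical cylindrical geometry, the compact Stroh form mentioned above can be sketched as follows; the notation ($\boldsymbol{\eta}$ for the vector of unknowns, $\boldsymbol{N}$ for the Stroh matrix and its blocks) is an assumption, chosen to match common usage:

```latex
\frac{\mathrm{d}\boldsymbol{\eta}}{\mathrm{d}r}
  \;=\; \frac{1}{r}\,\boldsymbol{N}(r)\,\boldsymbol{\eta}(r),
\qquad
\boldsymbol{N} \;=\;
\begin{bmatrix}
  \boldsymbol{N}_1 & \boldsymbol{N}_2 \\
  \boldsymbol{N}_3 & \boldsymbol{N}_4
\end{bmatrix},
\qquad
\boldsymbol{N}_4 = \boldsymbol{N}_1^{\dagger},
```

where $\boldsymbol{\eta}$ collects the amplitudes of the incremental displacements and tractions, and the Hermitian relation between the diagonal blocks is the crucial property noted in the text.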
Conclusion and remark
The Stroh formalism provides an optimal form in which to solve a great variety of elastic problems. Optimal means that one can construct an efficient numerical procedure to solve the incremental problem. By solving the incremental boundary value problem, one finds the relations among the material and geometrical parameters of the problem and the perturbation modes by which the wave propagates in the material, i.e. the modes that denote the instability. Everything depends on , the parameter selected as the control parameter.
In this analysis, in a graph of perturbation mode vs control parameter, the minimum value of the perturbation mode represents the first mode at which the onset of the instability can be seen. For instance, in the picture, the first value of the mode at which the instability emerges is around , since the trivial solution does not have to be considered.
See also
Deformation (mechanics)
Elastic instability
Continuum mechanics
References
Elasticity (physics) | Incremental deformations | [
"Physics",
"Materials_science"
] | 1,534 | [
"Deformation (mechanics)",
"Physical phenomena",
"Physical properties",
"Elasticity (physics)"
] |
57,687,782 | https://en.wikipedia.org/wiki/Capillary%20breakup%20rheometry | Capillary breakup rheometry is an experimental technique used to assess the extensional rheological response of low viscous fluids. Unlike most shear and extensional rheometers, this technique does not involve active stretch or measurement of stress or strain but exploits only surface tension to create a uniaxial extensional flow. Hence, although it is common practice to use the name rheometer, capillary breakup techniques should be better addressed to as indexers.
Capillary breakup rheometry is based on the observation of the breakup dynamics of a thin fluid thread, governed by the interplay of capillary, viscous, inertial and elastic forces. Since no external forcing is exerted in these experiments, the fluid thread can spatially rearrange and select its own time scales. Quantitative observations about strain rate, along with an apparent extensional viscosity and the breakup time of the fluid, can be estimated from the evolution of the minimal diameter of the filament. Moreover, theoretical considerations based on the balance of the forces acting in the liquid filament allow one to derive information such as the extent of non-Newtonian behaviour and the relaxation time.
The information obtained in capillary breakup experiments is a very effective tool for quantifying heuristic concepts such as "stringiness" or "tackiness", which are commonly used as performance indices in several industrial operations.
At present, the only commercially available device based on the capillary breakup technique is the CaBER.
Theoretical framework
Capillary breakup rheometry and its recent development are based on the original experimental and theoretical work of Schümmer and Tebel and of Entov and co-workers. Nonetheless, this technique found its origins at the end of the 19th century with the pioneering work of Joseph Plateau and Lord Rayleigh. Their work entailed considerable progress in describing and understanding surface-tension-driven flows and the physics underlying the tendency of falling liquid streams to spontaneously break into droplets. This phenomenon is known as Plateau–Rayleigh instability.
The linear stability analysis introduced by Plateau and Rayleigh can be employed to determine a wavelength for which a perturbation on a jet surface is unstable. In this case, the pressure gradient across the free-surface can cause the fluid in the thinnest region to be "squeezed" out towards the swollen bulges, thus creating a strong uniaxial extensional flow in the necked region.
As the instability grows and strains become progressively larger, the thinning is governed by non-linear effects. Theoretical considerations on the fluid motion suggested that the behaviour approaching the breakup singularity can be captured using self-similarity. Depending on the relative intensity of inertial, elastic and viscous stresses, different scaling laws based on self-similar considerations have been established to describe the trend of the filament profile near breakup throughout the time.
Experimental configurations
Capillary thinning and breakup of complex fluids can be studied using different configurations. Historically, mainly three types of free-surface conformations have been employed in experiments: statically-unstable liquid bridges, dripping from a nozzle under gravity, and continuous jets. Even though the initial evolution of the capillary instability is affected by the type of conformation used, each configuration captures the same phenomenon in the last stages close to breakup, where the thinning dynamics are dominated exclusively by fluid properties.
The different configurations can best be distinguished based on the Weber number, hence on the relative magnitude between the imposed velocity and the intrinsic capillary speed of the considered material, defined as the ratio between the surface tension and the shear viscosity.
In the first geometry, the imposed velocity is zero (We = 0): an unstable liquid bridge is generated by the rapid motion of two coaxial cylindrical plates. The thinning of the capillary bridge is purely governed by the interplay of inertial, viscous, elastic and capillary forces. This configuration is employed in the CaBER device and is at present the most used geometry, thanks to its main advantage of keeping the thinnest point of the filament approximately at the same position.
In dripping configuration, the fluid leaves a nozzle at a very low velocity (We < 1), allowing the formation of a hemispherical droplet at the tip of the nozzle. When the drop becomes sufficiently heavy, gravitational forces overcome surface tension, and a capillary bridge is formed, connecting the nozzle and the droplet. As the drop falls, the liquid filament becomes progressively thinner, to the point in which gravity becomes unimportant (low Bond number) and the breakup is only driven by capillary action. At this stage, the thinning dynamics is determined by the balance between capillarity and fluid properties.
Lastly, the third configuration consists in a continuous jet exiting a nozzle at a velocity higher than the intrinsic capillary velocity
(We > 1). As the fluid leaves the nozzle, capillary instabilities naturally emerge on the jet and the formed filaments progressively thin as they are being convected downstream with the flow, until eventually the jet breaks into separate droplets. The jetting-based configuration is generally less reproducible compared to the former two due to different experimental challenges, such as accurately controlling the sinusoidal disturbance.
Force balance and apparent extensional viscosity
The temporal evolution of the thinnest region is determined by a force balance in the fluid filament. A simplified approximate force balance can be written as
where is the surface tension of the fluid, the strain rate at filament midpoint, the extensional viscosity, and the term in square brackets represents the non-Newtonian contribution to the total
normal stress difference. The stress balance shows that, if gravity and inertia can be neglected, the capillary pressure is counteracted by viscous extensional contribution and by non-Newtonian (elastic) contribution.
Depending on the type of fluid, appropriate constitutive models have to be considered in order to extract the relevant material functions.
Without any consideration of the nature of the tested fluid, it is possible to obtain a quantitative parameter, the apparent extensional viscosity, directly from the force balance between capillary pressure and viscous stresses alone. Assuming an initial cylindrical shape of the filament, the strain rate evolution is defined as
Thus, the apparent extensional viscosity is given by
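As a sketch of how these two quantities are extracted in practice, the finite-difference estimates below apply the standard definitions (strain rate ε̇ = −(2/D)·dD/dt for a cylindrical filament, apparent extensional viscosity η_app = −σ/(dD/dt)) to synthetic diameter data; the surface tension value and the linear decay are illustrative assumptions, not measured data.

```python
import numpy as np

# Synthetic mid-filament diameter trace: linear decay toward breakup at t = 1 s.
# sigma and the decay rate are illustrative assumptions, not measured values.
sigma = 0.06                      # surface tension [N/m]
t = np.linspace(0.0, 0.9, 10)     # time [s]
D_mid = 1e-3 * (1.0 - t)          # mid-filament diameter [m]

dDdt = np.gradient(D_mid, t)      # finite-difference estimate of dD/dt

# Strain rate at the filament midpoint for a cylindrical filament:
#   eps_dot = -(2 / D_mid) * dD/dt
eps_dot = -2.0 * dDdt / D_mid

# Apparent extensional viscosity from the capillary/viscous balance:
#   eta_app = -sigma / (dD/dt)
eta_app = -sigma / dDdt
print(eta_app[0])                 # ~60 Pa·s for this synthetic trace
```

For real data, dD/dt would be estimated from a smoothed fit of the measured diameter rather than raw finite differences.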
Scaling laws
The behaviour of the fluid determines the relative importance of the viscous and elastic terms in resisting the capillary action. Combining the force balance with different constitutive models, several analytical solutions were derived to describe the thinning dynamics. These scaling laws can be used to identify fluid type and extract material properties.
Scaling law for visco-capillary thinning of Newtonian fluids
In the absence of inertia (Ohnesorge number larger than 1) and gravitational effects, the thinning dynamics of a Newtonian fluid are governed purely by the balance between capillary pressure and viscous stresses. The visco-capillary thinning is described by the similarity solution derived by Papageorgiou; the midpoint diameter temporal evolution may be written as:
According to the scaling law, a linear decay of the filament diameter in time and breakup of the filament in the middle are the characteristic fingerprints of visco-capillary breakup. A linear regression of experimental data allows one to extract the time-to-breakup and the capillary speed.
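The regression just described can be sketched as follows, using the Papageorgiou similarity solution D(t) = 0.1418·(σ/η)·(t_b − t) on synthetic data; the capillary speed and breakup time are assumed values chosen only to generate the data.

```python
import numpy as np

# Papageorgiou similarity solution: D_mid(t) = 0.1418 * (sigma/eta) * (t_b - t).
# Synthetic data with an assumed capillary speed and breakup time.
capillary_speed_true = 5e-4       # sigma/eta [m/s] (assumed)
t_b_true = 2.0                    # breakup time [s] (assumed)
t = np.linspace(0.0, 1.8, 20)
D = 0.1418 * capillary_speed_true * (t_b_true - t)

# Linear fit D = a*t + b, then invert the scaling law.
a, b = np.polyfit(t, D, 1)
t_b_fit = -b / a                  # zero crossing of the fit = time-to-breakup
capillary_speed_fit = -a / 0.1418 # recovered sigma/eta
print(t_b_fit, capillary_speed_fit)
```

Both fitted parameters recover the assumed inputs, which is the essence of using the linear decay as a Newtonian fingerprint.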
Scaling law for elasto-capillary thinning of elastic fluids
For non-Newtonian elastic fluids, such as polymer solutions, an elasto-capillary balance governs the breakup dynamics. Different constitutive models were used to model the elastic contribution (Oldroyd-B, FENE-P,...). Using an upper convected Maxwell constitutive model, the self-similar thinning process
is described by an analytical solution of the form
where is the initial diameter of the filament. A linear regression of experimental data allows one to extract the elastic modulus of the polymer in the solution and the relaxation time. The scaling law expresses an exponential decay of the filament diameter in time.
The different forms of the scaling laws for viscoelastic fluids show that their thinning behaviour is very distinct from that of Newtonian liquids. Even the presence of a small amount of flexible polymers can significantly alter the breakup dynamics. The elastic stresses generated by the presence of polymers rapidly increase as the filament diameter decreases. The liquid filament is then progressively stabilized by the growing stresses, and it assumes a uniform cylindrical shape, contrary to the case of visco-capillary thinning where the minimum diameter is localized at the filament midpoint.
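Extracting the relaxation time from the exponential elasto-capillary decay can be sketched as below: ln D is linear in time with slope −1/(3λ), so a linear fit of the logarithm recovers λ. The data are synthetic and the relaxation time is an assumed value.

```python
import numpy as np

# Elasto-capillary thinning: D(t) ∝ exp(-t / (3*lambda)), so ln D is linear
# in t with slope -1/(3*lambda). Data are synthetic; lambda is assumed.
lambda_true = 0.010               # relaxation time [s] (assumed)
D0 = 1e-3                         # initial diameter [m]
t = np.linspace(0.0, 0.05, 25)
D = D0 * np.exp(-t / (3.0 * lambda_true))

slope, _ = np.polyfit(t, np.log(D), 1)  # fit ln D vs t
lambda_fit = -1.0 / (3.0 * slope)       # invert the slope to get lambda
print(lambda_fit)                       # ≈ 0.010 s
```

The same fit applied to CaBER data gives the relaxation time quoted in the Instruments section below; the prefactor of the exponential (which involves the elastic modulus) is deliberately omitted here, since it does not affect the slope.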
Instruments
CaBER
The CaBER (Capillary Breakup Extensional Rheometer) was the only commercially available instrument based on capillary breakup. Based on the experimental work of Entov, Bazilevsky and co-workers, the CaBER was developed by McKinley and co-workers at MIT in collaboration with the Cambridge Polymer Group in the early 2000s. It was manufactured by Thermo Scientific with the commercial name HAAKE CaBER 1.
The CaBER experiments employ a liquid bridge configuration and can be thought of as a quantitative version of a "thumb & forefinger" test.
In CaBER experiments, a small amount of sample is placed between two measurement plates, forming an initial cylindrical configuration. The plates are then rapidly separated over a short predefined distance: the imposed step strain generates an “hour-glass” shaped liquid bridge. The necked sample subsequently thins and eventually breaks under the action of capillary forces.
During the surface-tension-driven thinning process, the evolution of the mid-filament diameter (Dmid(t)) is monitored via a laser micrometre.
The raw CaBER output (Dmid vs time curve) shows different characteristic shapes depending on the tested liquid, and both quantitative and qualitative information can be extracted from it. The time-to-breakup is the most direct qualitative information that can be obtained. Although this parameter does not represent a property of the fluid itself, it is certainly useful for quantifying the processability of complex fluids.
In terms of quantitative parameters, rheological properties such as the shear viscosity and the relaxation time can be obtained by fitting the diameter evolution data with the appropriate scaling laws. The second quantitative information that can be extracted is the apparent extensional viscosity.
Despite the great potential of the CaBER, this technique also presents a number of experimental challenges, mainly related to the susceptibility to solvent evaporation and the creation of a statically-unstable bridge of very low visco-elastic fluids, for which the fluid filament often happens to break already during the stretch phase. Different modifications of the commercial instrument have been presented to overcome these issues. Amongst others: the use of surrounding media different than air and the Slow Retraction Method (SRM).
Other techniques
In recent years a number of different techniques have been developed to characterize fluids with very low visco-elasticity, which commonly cannot be tested in CaBER devices.
In the Cambridge Trimaster, a fluid is symmetrically stretched to form an unstable liquid bridge. This instrument is similar to the CaBER, but the higher imposed stretch velocity of 150 mm/s prevents sample breakup during the stretching step in the case of low visco-elastic samples.
The ROJER (Rayleigh Ohnesorge Jetting Extensional Rheometer) is a jetting-based rheometer, developed on the basis of earlier works of Schümmer and Tebel and Christanti and Walker. This device exploits the spontaneous capillary instabilities developing on a liquid jet issuing from a nozzle to evaluate very short relaxation times. A piezoelectric transducer is used to control the frequency and the amplitude of the imposed perturbation.
The DoS (Dripping-onto-Substrate) technique allows one to characterize the extensional response of a variety of complex fluids, as well as to access very short relaxation times not measurable in CaBER experiments. In DoS experiments, a volume of fluid is deposited on a substrate, so that an unstable liquid bridge is formed between the nozzle and the sessile drop.
The ADMiER (Acoustically-Driven Microfluidic Extensional Rheometer) involves subjecting a sessile droplet resting on a piezoelectric substrate to a rapid pulse of surface acoustic radiation. This elongates the droplet into a filament that then contacts an opposing surface to form a liquid bridge. At this point, the pulse ceases and the unstable liquid bridge thins under capillary action, as in a conventional CaBER device. The chief advantage of ADMiER is that it allows the interrogation of tiny (1 microliter) samples of low-viscosity complex fluids.
Applications
There are many processes and applications that involve free-surface flows and uniaxial extension of liquid filaments or jets. Using capillary breakup rheometry to quantify the dynamics of the extensional response provides an effective tool for controlling processing parameters as well as for designing complex fluids with the required processability.
A list of relevant applications and processes includes:
Ink-jet printing
Atomization
Dispensing and dosing of complex fluids
Electrospinning
PSA
Spray and curtain coating
Fertilizer distribution
See also
Fluid thread breakup
Plateau–Rayleigh instability
Rheometer
Visco-elastic jets
Capillary action
Capillary
References
Rheology
Non-Newtonian fluids | Capillary breakup rheometry | [
"Chemistry"
] | 2,805 | [
"Rheology",
"Fluid dynamics"
] |
57,688,582 | https://en.wikipedia.org/wiki/CMOS%20amplifier | CMOS amplifiers (complementary metal–oxide–semiconductor amplifiers) are ubiquitous analog circuits used in computers, audio systems, smartphones, cameras, telecommunication systems, biomedical circuits, and many other systems. Their performance impacts the overall specifications of the systems. They take their name from the use of MOSFETs (metal–oxide–semiconductor field-effect transistors) as opposite to bipolar junction transistors (BJTs). MOSFETs are simpler to fabricate and therefore less expensive than BJT amplifiers, still providing a sufficiently high transconductance to allow the design of very high performance circuits. In high performance CMOS (complementary metal–oxide–semiconductor) amplifier circuits, transistors are not only used to amplify the signal but are also used as active loads to achieve higher gain and output swing in comparison with resistive loads.
CMOS technology was introduced primarily for digital circuit design. In the last few decades, to improve speed, power consumption, required area, and other aspects of digital integrated circuits (ICs), the feature size of MOSFET transistors has shrunk (the minimum channel length of transistors is reduced in newer CMOS technologies). This phenomenon, predicted by Gordon Moore in 1975 and called Moore's law, states that about every 2 years the number of transistors doubles for the same silicon area of an IC. Progress in memory circuit design is an interesting example of how process advancements have affected the required size and performance in recent decades. In 1956, a 5 MB hard disk drive (HDD) weighed over a ton, while today a drive with 50,000 times more capacity weighing several tens of grams is very common.
While digital ICs have benefited from feature-size shrinking, analog CMOS amplifiers have not gained corresponding advantages, due to the intrinsic limitations of analog design, such as the intrinsic gain reduction of short-channel transistors, which affects the overall amplifier gain. Novel techniques that achieve higher gain also create new problems, like amplifier stability for closed-loop applications. The following sections address both aspects and summarize different methods to overcome these problems.
Intrinsic gain reduction in modern CMOS technologies
The maximum gain of a single MOSFET transistor is called intrinsic gain and is equal to
where is the transconductance and is the output resistance of the transistor. To a first-order approximation, is directly proportional to the channel length of the transistor. In a single-stage amplifier, one can increase the channel length to get higher output resistance and gain, but this also increases the parasitic capacitance of transistors, which limits the amplifier bandwidth. The transistor channel length is smaller in modern CMOS technologies, which makes achieving high gain in single-stage amplifiers very challenging. To achieve high gain, the literature has suggested many techniques. The following sections look at different amplifier topologies and their features.
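For intuition on the magnitudes involved, the long-channel square-law estimate below puts numbers on the intrinsic gain. All element values are illustrative assumptions; in the square-law model the channel-length-modulation parameter scales roughly as 1/L, which reproduces the proportionality between output resistance and channel length mentioned above.

```python
# Square-law (long-channel) estimate of the intrinsic gain g_m * r_o.
# All element values are illustrative assumptions.
I_D = 100e-6          # drain bias current [A]
V_ov = 0.2            # overdrive voltage V_GS - V_TH [V]
lam = 0.1             # channel-length modulation [1/V], roughly proportional to 1/L

g_m = 2.0 * I_D / V_ov        # transconductance [S]
r_o = 1.0 / (lam * I_D)       # output resistance [ohm]
A_intrinsic = g_m * r_o       # simplifies to 2 / (lam * V_ov)
print(A_intrinsic)            # ≈ 100 (about 40 dB)
```

Shrinking L raises lam and thus lowers A_intrinsic, which is the gain-reduction trend the article describes for modern short-channel technologies.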
Single-stage amplifiers
Telescopic, folded cascode (FC), and recycling folded cascode (RFC) are the most common single-stage amplifiers. All these structures use transistors as active loads to provide higher output resistance (= higher gain) and output swing. A telescopic amplifier provides higher gain (due to higher output resistance) and higher bandwidth (due to a smaller non-dominant pole at the cascode node). In contrast, it has limited output swing and difficulty in the implementation of a unity-gain buffer. Although the FC has lower gain and bandwidth, it can provide a higher output swing, an important advantage in modern CMOS technologies with reduced supply voltages. Also, since the DC voltage of the input and output nodes can be the same, it is more suitable for implementation of a unity-gain buffer. The FC has recently been used to implement an integrator in a bio-nano sensor application. Also, it can be used as a stage in multi-stage amplifiers. As an example, the FC is used as the input stage of a two-stage amplifier in the design of a potentiostat circuit, which measures neuronal activities or performs DNA sensing. Also, it can be used to realize a transimpedance amplifier (TIA). A TIA can be used in amperometric biosensors to measure the current of cells or solutions to define the characteristics of a device under test.
In the last decade, circuit designers have proposed different modified versions of the FC circuit. The RFC is one such modified version of the FC amplifier, which provides higher gain, higher bandwidth, and also a higher slew rate in comparison with the FC (for the same power consumption). Recently, the RFC amplifier has been used in a hybrid CMOS–graphene sensor array for subsecond measurement of dopamine. It is used as a low-noise amplifier to implement an integrator.
Stability
In many applications, an amplifier drives a capacitor as a load. In some applications, like switched-capacitor circuits, the value of the capacitive load changes in different cycles. Therefore, it affects the output node time constant and the amplifier frequency response. Stable behavior of the amplifier for all possible capacitive loads is necessary, and the designer must consider this issue when designing the circuit. The designer should ensure that the phase margin (PM) of the circuit is sufficient for the worst case. To have proper circuit behavior and time response, designers usually target a PM of 60 degrees. For higher PM values, the circuit is more stable, but it takes longer for the output voltage to reach its final value. In telescopic and FC amplifiers, the dominant pole is at the output nodes. Also, there is a non-dominant pole at the cascode node. Since the capacitive load is connected to the output nodes, its value affects the location of the dominant pole. This figure shows how the capacitive load affects the location of the dominant pole and stability. Increasing the capacitive load moves the dominant pole toward the origin, and since the unity-gain frequency is (amplifier gain) times it also moves toward the origin. Therefore, the PM increases, which improves stability. So, if we ensure stability of a circuit for a minimum capacitive load, it remains stable for larger load values. To achieve greater than 60 degrees of PM, the non-dominant pole must be greater than
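A small numerical check of this behavior can be sketched with the two-pole approximation below: with the dominant pole at the output, the unity-gain frequency scales as g_m/(2πC_L), so a larger load capacitance lowers f_u relative to the fixed non-dominant pole and increases the PM. The pole and transconductance values are assumptions chosen only for illustration.

```python
import math

def phase_margin(f_u, f_p2):
    # Two-pole approximation: the dominant pole contributes -90 deg at f_u,
    # the non-dominant pole contributes a further -atan(f_u / f_p2).
    return 90.0 - math.degrees(math.atan(f_u / f_p2))

f_p2 = 200e6                 # non-dominant (cascode-node) pole [Hz], assumed
g_m = 1e-3                   # input-pair transconductance [S], assumed
for C_L in (1e-12, 2e-12):   # doubling the load capacitance...
    f_u = g_m / (2.0 * math.pi * C_L)   # ...halves the unity-gain frequency
    print(round(phase_margin(f_u, f_p2), 1))  # PM improves with larger C_L
```

Running this shows the PM rising as C_L grows, which is exactly the single-stage trend described above (stability guaranteed for the minimum load implies stability for larger loads).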
Multi-stage amplifiers
In some applications, like switched-capacitor filters or integrators and different types of analog-to-digital converters, high gain (70–80 dB) is needed, and achieving the required gain is sometimes impossible with single-stage amplifiers. This is more serious in modern CMOS technologies, in which transistors have smaller output resistance due to shorter channel length. To achieve high gain as well as high output swing, multi-stage amplifiers have been invented. To implement a two-stage amplifier, one can use an FC amplifier as the first stage and a common-source amplifier as the second stage. Also, to implement a four-stage amplifier, 3 common-source amplifiers can be cascaded with an FC amplifier. It should be mentioned that to drive large capacitive loads or small resistive loads, the output stage should be class AB. For example, a common-source amplifier with class AB behavior can be used as the final stage of a three-stage amplifier to improve not only the drive capability but also the gain. A class AB amplifier can be used as a column driver in LCDs.
Stability in two-stage amplifiers
Unlike single-stage amplifiers, multi-stage amplifiers usually have 3 or more poles, and if they are used in feedback networks, the closed-loop system is probably unstable. To have stable behavior in multi-stage amplifiers, it is necessary to use a compensation network. The main goal of the compensation network is to modify the transfer function of the system in such a way as to achieve enough PM. So, by use of a compensation network, we should get a frequency response similar to what we showed for single-stage amplifiers. In single-stage amplifiers, the capacitive load is connected to the output node, where the dominant pole occurs, and increasing its value improves the PM. So, it acts like a compensation capacitor (network). To compensate multi-stage amplifiers, a compensation capacitor is usually used to move the dominant pole to a lower frequency to achieve enough PM.
The following figure shows the block diagram of a two-stage amplifier in fully differential and single-ended modes. In a two-stage amplifier, the input stage can be a telescopic or FC amplifier. For the second stage, a common-source amplifier with an active load is a common choice. Since the output resistance of the first stage is much greater than that of the second stage, the dominant pole is at the output of the first stage.
Without compensation, the amplifier is unstable, or at least does not have enough PM. The load capacitance is connected to the output of the second stage, where the non-dominant pole occurs. Therefore, unlike in single-stage amplifiers, increasing the capacitive load moves the non-dominant pole to a lower frequency and deteriorates the PM. Mesri et al. suggested two-stage amplifiers that behave like single-stage amplifiers and remain stable for larger values of capacitive loads.
To have proper behavior, we need to compensate two-stage or multi-stage amplifiers. The simplest way to compensate a two-stage amplifier, as shown in the left block diagram of the figure below, is to connect a compensation capacitor at the output of the first stage and move the dominant pole to lower frequencies. But the realization of a capacitor on a silicon chip requires considerable area. The most common compensation method in two-stage amplifiers is Miller compensation (middle block diagram in the figure below). In this method, a compensation capacitor is placed between the input and output nodes of the second stage. In this case, the compensation capacitor appears times greater at the output of the first stage, and pushes the dominant pole as well as the unity-gain frequency to lower frequencies. Moreover, because of the pole-splitting effect, it also moves the non-dominant pole to higher frequencies. Therefore, it is a good candidate for making the amplifier stable. The main advantage of the Miller compensation method is the reduction of the size of the required compensation capacitor by a factor of . The issue raised by the Miller compensation capacitor is the introduction of a right-half-plane (RHP) zero, which reduces the PM. Fortunately, different methods have been suggested to solve this issue. As an example, to cancel the effect of the RHP zero, a nulling resistor can be used in series with the compensation capacitor (right block diagram of the figure below). Based on the resistor value, we can push the RHP zero to a higher frequency (to cancel its effect on the PM), move it to the LHP (to improve the PM), or even cancel the first non-dominant pole to improve bandwidth and PM. This compensation method has recently been used in amplifier design for a potentiostat circuit. Because of process variation, the resistor value can change by more than 10%, which therefore affects stability. Using a current buffer or voltage buffer in series with the compensation capacitor is another option to get better results.
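The standard first-order design formulas behind Miller compensation can be sketched as follows. The element values are illustrative assumptions, and the expressions are the common textbook approximations, not taken from a specific design in the article.

```python
import math

# First-order Miller-compensation estimates for a two-stage amplifier.
# All element values are illustrative assumptions.
g_m2 = 2e-3               # second-stage transconductance [S]
R1, R2 = 100e3, 50e3      # output resistances of stage 1 and stage 2 [ohm]
C1, C2 = 0.1e-12, 1e-12   # parasitic node capacitances [F]
C_c = 1e-12               # Miller compensation capacitor [F]

# Dominant pole: C_c appears magnified by the second-stage gain at node 1.
p1 = 1.0 / (2.0 * math.pi * R1 * (C1 + C_c * (1.0 + g_m2 * R2)))
# Pole splitting pushes the non-dominant pole out to roughly g_m2 / (C1 + C2).
p2 = g_m2 / (2.0 * math.pi * (C1 + C2))
# Feedforward through C_c introduces an RHP zero at g_m2 / C_c ...
z_rhp = g_m2 / (2.0 * math.pi * C_c)
# ... which a nulling resistor R_z = 1/g_m2 pushes to infinity;
# choosing R_z > 1/g_m2 moves the zero into the LHP instead.
R_z = 1.0 / g_m2
print(p1 < p2 < z_rhp)    # True: poles split, zero beyond the non-dominant pole
```

With these numbers the dominant pole lands in the tens of kHz while the non-dominant pole and the RHP zero sit in the hundreds of MHz, which is the pole-splitting effect described above.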
See also
FET amplifier
List of MOSFET applications
References
Electronic design
Analog circuits
Integrated circuits
MOSFETs
Transistors | CMOS amplifier | [
"Technology",
"Engineering"
] | 2,264 | [
"Computer engineering",
"Electronic design",
"Analog circuits",
"Electronic engineering",
"Design",
"Integrated circuits"
] |
57,688,647 | https://en.wikipedia.org/wiki/Electrochemical%20AFM | Electrochemical AFM (EC-AFM) is a particular type of Scanning probe microscopy (SPM), which combines the classical Atomic force microscopy (AFM) together with electrochemical measurements. EC-AFM allows to perform in-situ AFM measurements in an electrochemical cell, in order to investigate the actual changes in the electrode surface morphology during electrochemical reactions. The solid-liquid interface is thus investigated.
This technique was developed for the first time in 1996 by Kouzeki et al., who studied amorphous and polycrystalline thin films of Naphthalocyanine on Indium tin oxide in a solution of 0.1 M Potassium chloride (KCl). Unlike the Electrochemical scanning tunneling microscope, previously developed by Itaya and Tomita in 1988, the tip is non-conductive and it is easily steered in a liquid environment.
Principles and experimental precautions
The technique consists of an AFM apparatus integrated with a three-electrode electrochemical cell.
The sample serves as the working electrode (WE) and must be conductive. The AFM probe is a "passive" element, as it is unbiased and monitors the surface changes as a function of time when a potential is applied to the sample. Several electrochemical experiments can be performed on the sample, such as cyclic voltammetry, pulse voltammetry, etc. During the potential sweep, current flows through the sample and the morphology is monitored.
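As a minimal sketch of the potential sweep mentioned above, the triangular waveform applied to the working electrode in one cyclic-voltammetry cycle can be generated as below; the potential window and scan rate are arbitrary assumptions, not values from the article.

```python
import numpy as np

# Triangular potential waveform for one cyclic-voltammetry cycle.
# Window and scan rate are arbitrary assumptions.
E_low, E_high = -0.2, 0.8     # potential window vs. reference electrode [V]
scan_rate = 0.05              # sweep rate [V/s]
n = 200                       # samples per half-cycle

E = np.concatenate([np.linspace(E_low, E_high, n),    # forward (anodic) sweep
                    np.linspace(E_high, E_low, n)])   # reverse (cathodic) sweep
t = np.arange(2 * n) * (E_high - E_low) / (scan_rate * (n - 1))  # time axis [s]
print(E.max(), E.min())
```

In an EC-AFM experiment this waveform would be applied by the potentiostat while the AFM records topography, so that each morphology frame can be tagged with the instantaneous electrode potential.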
The electrochemical cell is made of a plastic material resistant to various chemical solvents (e.g. sulfuric acid, perchloric acid, etc.), with good mechanical resistance and low fabrication costs. In order to satisfy these requirements, various materials can be employed, such as polytetrafluoroethylene (PTFE, also known as Teflon). Platinum and Ag/AgCl wires are widely employed as reference electrodes, and platinum wires as counter electrodes.
Since the measurement is performed in a liquid environment, some precautions must be taken. The chosen electrolyte must be transparent, in order to allow the laser beam to reach the sample and be deflected. Since opacity depends on the solute concentration, very dilute solutions should be selected to keep the electrolyte sufficiently transparent. The choice of a suitable electrolyte must also take into account possible corrosion effects on the AFM scanner, which can be affected by strongly acidic solutions. The same problem affects the AFM cantilever. It is preferable to select an AFM tip with a coating resistant to acids, for example gold. The liquid environment adds one more constraint related to the choice of the tip material, as the laser sum registered on the photodiode must be scarcely affected. The change in the refractive index of the solution with respect to air leads to a change in the position of the laser spot, necessitating a repositioning of the photodiode.
Applications
EC-AFM has various applications in which monitoring the electrode surface during electrochemical reactions leads to interesting results.
Among the applications, studies on batteries and on electrode corrosion in acid environments are widespread.
Concerning batteries, studies on lead–acid batteries pointed out the change in morphology during the reduction/oxidation cycles of a CV when an acid electrolyte is used.
Different corrosion effects are widely considered in EC-AFM applications. Different phenomena are studied, from the pitting corrosion of steel to crystal dissolution.
Highly oriented pyrolytic graphite (HOPG) is widely employed as an electrode for EC-AFM. In fact, various surface phenomena are studied, from applications in lithium batteries to anion intercalation leading to blister formation on the electrode surface.
A rather interesting application is EC-AFM dip-pen nanolithography. Recently, SPM-based lithography has gained attention due to its simplicity and precise control of structure and location. A new development of this technique is dip-pen nanolithography (DPN), which uses the AFM technique to deliver organic molecules onto different substrates, such as gold. Using EC-AFM allows one to fabricate metal and semiconductor nanostructures on the WE, gaining high thermal stability and a higher chemical diversity.
Finally, it is possible to perform and study the electrodeposition of different materials on electrodes, from metals (e.g., copper) to polymers such as polyaniline (PANI).
References
Scanning probe microscopy | Electrochemical AFM | [
"Chemistry",
"Materials_science"
] | 909 | [
"Nanotechnology",
"Scanning probe microscopy",
"Microscopy"
] |
57,689,313 | https://en.wikipedia.org/wiki/International%20Council%20of%20the%20Aeronautical%20Sciences | The International Council of the Aeronautical Sciences (ICAS) is a worldwide institution, established as an international forum for individual national aeronautical professional associations.
History
It was formed on 29 January 1957 at a conference in the US. The first ICAS Congress was held in Spain in 1958. Frank Wattendorf, of AGARD, was the first Director.
A second meeting was held in Paris, with Hugh Latimer Dryden of the National Advisory Committee for Aeronautics, and representatives from ONERA (Office National d'Etudes et de Recherches Aérospatiales), the Royal Aeronautical Society (RAeS), the WGL (now the Deutsche Gesellschaft für Luft- und Raumfahrt), the Association Française des Ingénieurs et Techniciens de l'Aéronautique (now the Association Aéronautique et Astronautique de France), and the Aeronautical Research Institute of Sweden.
Congress
It holds a biennial international congress in September. In 1986 it was held in London. In 2000 the congress was held in North Yorkshire. The 2018 Congress was held by Associação Brasileira de Engenharia e Ciências Mecânicas (ABCM) in Brazil. The 2020 congress will be held in Shanghai, China.
Presidents
Raymond Bisplinghoff 1978
Boris Laschka 1986
Paolo Santini 1990
Murray Scott 2013
Christian Mari 2015
Susan Ying 2017
Shinji Suzuki 2017
Dimitri Mavris 2023
Structure
The secretariat of ICAS is at Deutsche Gesellschaft für Luft- und Raumfahrt (DLR) in Bonn. It was first headquartered at the American Institute of Aeronautics and Astronautics (AIAA), the DLR from 1978, the RAeS in 1986, the Nederlandse Vereniging voor Luchtvaarttechniek from 1990, the AAAF (Association Aéronautique et Astronautique de France) from 1997, then Sweden from 2002, and Germany from January 2011.
See also
Council of European Aerospace Societies (CEAS)
Fédération Aéronautique Internationale (FAI)
International Astronautical Federation
International Energy Forum
References
External links
ICAS
ICAS 2020
1957 in aviation
Aerospace engineering organizations
Aviation organisations based in Germany
International aviation organizations
International organisations based in Bonn
Scientific organizations established in 1957
Scientific organisations based in Germany | International Council of the Aeronautical Sciences | [
"Engineering"
] | 457 | [
"Aeronautics organizations",
"Aerospace engineering organizations",
"Aerospace engineering"
] |
57,689,353 | https://en.wikipedia.org/wiki/Learning%20health%20systems | Learning health systems (LHS) are health and healthcare systems in which knowledge generation processes are embedded in daily practice to improve individual and population health. At its most fundamental level, a learning health system applies a conceptual approach wherein science, informatics, incentives, and culture are aligned to support continuous improvement, innovation, and equity, and seamlessly embed knowledge and best practices into care delivery.
The idea was first conceptualized in a 2006 workshop organized by the US Institute of Medicine (now the National Academy of Medicine (NAM)), building on ideas around evidence-based medicine and "practice-based evidence", and on recognition of the persistent gap between evidence generated in the context of biomedical research and the application of that evidence in the provision of care. The need to close this gap was further underscored by the growth of electronic health records (EHR) and other innovations in health information technology and computational power, and the resulting ability to generate data that can lead to better evidence and better outcomes. There has since been increasing interest in the topic, including the creation of the Wiley journal Learning Health Systems.
Cornerstone elements of the LHS include:
generation, application, and improvement of scientific knowledge;
an organizational infrastructure that supports the engagement of communities of patients, healthcare professionals and researchers who collaborate to identify evidence gaps that could be addressed through research in routine healthcare settings;
deployment of computational technologies and informatics approaches that organize and leverage large electronic health data sets, i.e. "big data" for use in research;
quality improvement at the point of care for each patient using new knowledge generated by research.
Other compatible ways of describing the LHS co-exist alongside the NAM definition, including the definition used by AHRQ, the Agency for Healthcare Research and Quality. AHRQ defines a learning health system as "a health system in which internal data and experience are systematically integrated with external evidence, and that knowledge is put into practice. As a result, patients get higher quality, safer, more efficient care, and health care delivery organizations become better places to work.”
In 2023, the NAM established ten core principles of learning health organizations to serve as a unifying touchstone for the field. The principles reflect and build upon the six aims of the seminal "Crossing the Quality Chasm" report published in 2001 (safe, equitable, effective, efficient, timely, and patient-centered), and account for the ways in which health care has evolved since the publication of this 2001 report.
Engaged - Informed engagement, options, and choices for those who are served
Safe - Tested and up-to-date protocols to protect from harm
Effective - Evidence-based services tailored to understanding of each person's goals
Equitable - Parity in opportunity to attain desired health and goals
Efficient - Optimal outcomes for accessible, non-wasteful resources
Accessible - Effective services readily available where and when they are most needed
Measurable - Reliable and valid assessment of consequential activities and outcomes
Transparent - Clear information related to the nature, use, costs, and results of services
Secure - Validated access and use safeguards for digitally-mediated activities
Adaptive - Continuous learning and improvement are integral to organizational culture
History
The NAM’s early efforts to develop the ideas underpinning the LHS began with a series of workshops held from 2006 to 2013. Among several early publications to express the need for a rapid learning health system was a commentary in Health Affairs in 2007 where Lynn Etheredge applied the term “rapid learning health system” in recognition of the opportunity to leverage electronic health records (EHR) to “learn” what works in health care. The series of NAM workshops generated several summary publications on topics under the mantle of the LHS, including publications focused on the digital infrastructure as well as on ethical considerations. In 2013, the workshops culminated in a seminal report, “Best Care at Lower Cost: the Path to Continuously Learning Health Care in America.” Summarizing the efforts to that point, McGinnis and colleagues enumerate key milestones in the evolution of the LHS that include these reports as well as decades-old efforts to generate evidence from routine health care delivery.
Nomenclature may vary in reference to the LHS concept. Some refer to a learning healthcare system, others refer to learning health systems or collaborative learning health systems. The architecture and objectives are similar, irrespective of the label—addressing evidence gaps, harnessing data, and effectively utilizing the best evidence at the point of need. Related concepts include the use of real-world data to generate real-world evidence, and mobilizing computable biomedical knowledge.
Given that the LHS has an expansive definition and scope, many of the early adopters of this approach were health systems that also had embedded research capabilities, such as a formal department or institute. The Veterans Administration Health System, Group Health Cooperative, Kaiser Permanente and Geisinger Health System were among the vanguard organizations who also published insights from their experience of launching formal learning health system activities. Increasingly, academic health systems have taken up the principles and practices espoused by the earliest adopters.
Adoption and spread
Early experiences with deploying the LHS have been instructive and have led to further adoption and spread. The LHS model is being applied in specific medical specialties such as pediatrics and oncology, and further examination of the environment and conditions that support learning have spurred development of increasingly detailed and specialized frameworks that can support further adoption and adaptation based on the needs, features, and capabilities of a particular health system.
Along with a growing body of peer-reviewed publications on the specific experience of different systems as they evolve toward continuous learning, review articles have been published to reflect on the growth of the LHS as a whole. A systematic review by Budrionis observed that the ability to evaluate how well an LHS improves outcomes was not well-explored in the literature. Subsequently, Platt examined progress of theories and implementation of the LHS, Nash focused a review on deployment of the LHS in primary care, and Ellis mapped empirical applications of the LHS. Easterling and colleagues proffer an elaborate taxonomy of LHS elements and use this to describe an LHS-IP, or "Learning Health System In Practice", as a model for health care systems that seek to become an LHS.
The motivations for applying LHS concepts are largely and logically focused on improving the quality of care. Exemplar organizations are numerous and growing and include both community-based health systems and university-based academic health systems/medical centers in the United States:
Atrium Health/Advocate Aurora
Baylor Scott & White Health
Care South Carolina
Cleveland Clinic
Children’s Hospital of Philadelphia
Cincinnati Children's Hospital Medical Center
Denver Health Medical Center
Geisinger Health System
HealthPartners
Indiana University Health
Kaiser Permanente
Mayo Clinic
Medstar Health
Feinberg School of Medicine
NYU Langone Health
SSM Health
Sutter Health
Trillium Healthcare Institute for Better Health (Canada)
University of Alabama at Birmingham School of Medicine
Michigan Medicine
UPMC
Vanderbilt University Medical Center
Veterans Administration Health System
Wake Forest School of Medicine
Washington University School of Medicine
Weill Cornell Medicine
In many cases, these institutions are engaged in research activities such as the HCSRN, Clinical and Translational Science Awards (CTSA), and PCORnet where the LHS concepts are applied. The University of Michigan has also established a formal academic department, the Department of Learning Health Sciences. Alongside these exemplar organizations, related initiatives and consortia have been established in recent years. The Learning Health Community is an umbrella organization that has united many systems and health data organizations to develop shared principles and processes, and foster learning about the applications of technologies in the context of learning systems via a periodic virtual forum (LHS IT Forum). Given their centrality to the generation of health data and information, two of the largest EHR vendors have also created communities to support LHS: Cerner’s Learning Health Network and Epic System's Health Research Network. Still, much of the LHS development has been concentrated in large academic medical centers and health systems with a sizable footprint. Masica notes that nearly 85% of more than 6000 hospitals in the US are categorized as community hospitals, and the ability to develop and implement an LHS may be more challenging due to workforce and other constraints.
Dissemination of the activities and experiences of learning health systems has been an instrumental aspect of their growth and spread. While peer-reviewed literature on the LHS appears in a variety of journals, the creation of Healthcare: the Journal of Delivery Science and Innovation and the Learning Health Systems Journal are dedicated to manuscripts that showcase the experience of those deploying or refining aspects of learning in real-world practices. Each has also published special issues with thematic emphases on LHS-related topics such as embedded research and ethical, legal, and social implications of the LHS. Another marker of the spread of the LHS is its international adoption. Australia, Canada, the United Kingdom and other countries are applying the LHS concepts, offering opportunities to compare and contrast global experiences and develop a richer picture of how the local context, structure of care delivery, and regulatory environment affect the ability to support continuous learning. Patient involvement in the LHS has grown, partly due to the establishment of the Patient-Centered Outcomes Research Institute, continued emphasis on shared decision-making, and the growing recognition of participatory medicine. However, the engagement of patients is not consistent across health systems and there is not a uniform template for patient engagement or approaches to educating patients about the value and significance of the LHS as a model for improving evidence-based care.
Electronic health data as a central component to the LHS
A large proportion of LHS research relies on the use of electronic health records (EHRs) and must navigate the inherent challenges of EHRs. EHRs were primarily created to support billing for clinical services and tracking health insurance claims. Generation of rich real-world clinical data is an essential "byproduct" of this highly transactional purpose of the contemporary EHR.
The LHS leverages a clinical lifecycle. Patient data are collected and can then be amalgamated across multiple patients to identify, define, and analyze a problem or a gap in the application of evidence-based care; these activities are largely driven by healthcare professionals. With the support of computational and statistical technology, analysis of the amalgamated data can yield new evidence. Such knowledge generation can then spur changes in clinical practice, and thus lead to new patient data being collected. This closed loop is the optimum for the LHS. However, dissemination and implementation of new evidence can be operationally and technically challenging in many settings, including in the original health system that identified a problem from its own clinical data.
McLachlan and colleagues (2018) suggest a taxonomy of nine LHS classification types:
Cohort identification looks for patients with similar attributes.
Positive deviance finds examples of better care against a benchmark.
Negative deviance finds examples of sub-optimal care.
Predictive patient risk modeling uses patterns in data to find groups at greater risk of adverse events.
Predictive care risk and outcome models identify situations that are at greater risk of poor care.
Clinical decision support systems use patient algorithms applied to patient data to make specific treatment recommendations.
Comparative effectiveness research determines the most effective treatments.
Intelligent assistance uses data to automate routine processes.
Surveillance monitors data for disease outbreaks or other treatment issues.
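The first two classification types above can be sketched on toy data. This is an illustrative sketch only: the patient records, field names, thresholds, and benchmark value are invented for the example and do not come from any real EHR schema.

```python
# Toy "cohort identification" and "positive deviance" pass over synthetic
# patient records (all values hypothetical).
patients = [
    {"id": 1, "age": 67, "diagnosis": "diabetes", "readmitted": False, "site": "A"},
    {"id": 2, "age": 72, "diagnosis": "diabetes", "readmitted": True,  "site": "A"},
    {"id": 3, "age": 58, "diagnosis": "asthma",   "readmitted": False, "site": "B"},
    {"id": 4, "age": 81, "diagnosis": "diabetes", "readmitted": False, "site": "B"},
]

# Cohort identification: patients sharing the attributes of interest.
cohort = [p for p in patients if p["diagnosis"] == "diabetes" and p["age"] >= 60]

# Positive deviance: sites whose cohort readmission rate beats a benchmark.
BENCHMARK_READMISSION_RATE = 0.40  # hypothetical target
rates = {}
for p in cohort:
    site = rates.setdefault(p["site"], {"n": 0, "readmits": 0})
    site["n"] += 1
    site["readmits"] += p["readmitted"]

positive_deviants = [
    s for s, v in rates.items() if v["readmits"] / v["n"] < BENCHMARK_READMISSION_RATE
]
```

The remaining types (predictive modeling, decision support, surveillance, and so on) build the same kind of pipeline but replace the final aggregation step with statistical or machine-learning models.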
Synergy with other disciplines
The LHS is a multidisciplinary and multi-stakeholder model for improvement, wherein clinical practitioners, health system leaders, data analysts and health IT experts, operations personnel, and researchers bring requisite expertise to bear throughout the cycle of improving health and healthcare. In a complex healthcare environment, sustained engagement of all health system stakeholders is necessary to successfully identify and prioritize evidence gaps, develop suitable interventions, analyze insights from the interventions, and deploy resulting changes. Hence, many disciplines and scientific domains may contribute various types of subspecialty expertise including:
Health Informatics
Clinical Trials
Data Science and Machine Learning
Organizational Psychology
Quality Improvement
Implementation Research
Training
As the LHS has matured, leaders and vanguard organizations have identified the requisite skills needed to lead and develop interventions that support learning. The Agency for Healthcare Research and Quality convened a technical expert panel in 2016 to identify core competencies, which yielded 33 competencies spanning seven domains. These competency domains are (1) Systems Science; (2) Research Questions and Standards of Scientific Evidence; (3) Research Methods; (4) Informatics; (5) Ethics of Research and Implementation in Health Systems; (6) Improvement and Implementation Science; (7) Engagement, Leadership, and Research Management. An 8th domain, Equity and Justice, was added in 2022 and a total of 38 competencies are now identified. These competencies form the backbone of a training program collaboratively funded by AHRQ and PCORI, two US funding agencies that also issue funding opportunities for LHS-related studies. A $40 million funding opportunity for mentored career development awards was issued in 2017 and 11 Centers of Excellence were awarded five years of federal funding in 2018 to support the training of clinician and research scientists to conduct patient-centered outcomes research within LHS.
The LHS Centers of Excellence funded in 2018 were:
A Chicago Center of Excellence in Learning Health Systems Research Training (ACCELERAT), Northwestern University, Chicago, Ill.
CATALyST: Consortium for Applied Training to Advance the Learning Health System with Scholars/Trainees, Kaiser Permanente Washington Research Institute, Seattle, WA.
Learning Health System Scholar Program at Vanderbilt University, Nashville, Tenn.
Leveraging Infrastructure to Train Investigators in Patient-Centered Outcomes Research in Learning Health System (LITI- PCORLHS), Indiana University School of Medicine, Indianapolis, Ind.
Minnesota Learning Health System Mentored Career Development Program (MN-LHS), University of Minnesota, Minneapolis, Minnesota.
Northwest Center of Excellence & K12 in Patient Centered Learning Health Systems Science, Oregon Health and Science University, Portland, Oregon.
PEDSnet Scholars: A Training Program for Pediatric Learning Health System Researchers, Children’s Hospital of Philadelphia, Philadelphia, PA
Stakeholder-Partnered Implementation Research and Innovation Translation (SPIRIT) program, University of California Los Angeles, Los Angeles, California.
The Center of Excellence in Promoting LHS Operations and Research at Einstein/Montefiore (EXPLORE), Albert Einstein College of Medicine, Bronx, N.Y.
Transforming the Generation and Adoption of PCOR into Practice (T-GAPP), University of Pennsylvania, Philadelphia, PA.
University of California-San Francisco Learning Health System K12 Career Development Program, University of California San Francisco, San Francisco, California.
As the funding for the aforementioned Centers of Excellence concludes in 2023, a successor funding opportunity was created by AHRQ and PCORI to fund Learning Health System Embedded Scientist Training and Research (LHS E-STaR) Centers.
Other similar training and fellowship programs have been offered by AcademyHealth via their Delivery System Science Fellowship program, Kaiser Permanente’s Division of Research, and the Veterans Administration via the Seattle-Denver Center of Innovation. Program offerings and emphases vary from institution to institution, but all involve training and professional development in topics related to improving health systems and the ability to generate and learn from evidence. Articles describing multidisciplinary workforce training efforts were published as a supplement to the LHS Journal in 2022, including an experience report summarizing the collective insights from the 11 initially funded Centers of Excellence.
Funding and financial support
Support for learning activities may be derived from federal, philanthropic, and other sources. Examples include the National Institutes of Health and AHRQ (federal) and the Robert Wood Johnson Foundation (philanthropic). The Patient-Centered Outcomes Research Institute (PCORI) has designated the realization of a national learning health system as one of their five national priorities for health, which is indicative of future funding opportunities. Funding provided to personnel within an organization (i.e., a health system) may be designated for internally-directed learning activities with no expectation about developing and publishing generalizable results. In this way, a learning health system may be distinguished from traditional health services or informatics research and may more closely resemble the funding and infrastructure that health systems designate for quality improvement activities. In 2015, the Centers for Medicare and Medicaid Services (CMS) funded the Health Care Payment Learning and Action Network to ascertain what works with respect to alternative health care delivery arrangements; however, reimbursement for learning activities from insurers/payers is not currently a steady avenue for financial support to incentivize health system learning.
Ethical considerations
Bioethics scholars including Faden, Asch, Finkelstein, Morain, and Platt have averred that in a learning health system, consideration should be given to both clinical ethics and research ethics. Faden, Kass and colleagues have put forth an ethics framework for the learning health system that is anchored on seven essential obligations: (1) respecting dignity and rights of all patients; (2) respecting clinical judgment; (3) providing optimal care to every patient; (4) avoiding the introduction of non-clinical burdens and risks; (5) reducing health inequities; (6) ensuring responsible activities are conducted in a way that fosters learning; and (7) contributing to the overall aim of improving quality and value in health care. This framework and several companion articles were published as a special report from the Hastings Center. Subsequent articles by Finkelstein et al, as well as Asch and colleagues seek to use examples of learning activities as a means to describe different approaches to research oversight and compliance. Rigorous deliberations about the approach to informed consent are also germane to the ethics of learning activities in the healthcare context.
See also
Real World Evidence
References
Health informatics
Decision support systems | Learning health systems | [
"Technology",
"Biology"
] | 3,637 | [
"Information systems",
"Medical technology",
"Health informatics",
"Decision support systems"
] |
57,689,682 | https://en.wikipedia.org/wiki/HD%2034989 | HD 34989 is a blue-white main-sequence star of apparent magnitude 5.80 in the constellation of Orion. It is 1700 light-years from the Solar System.
Observation
The star lies in the northern celestial hemisphere but close to the celestial equator, which means it can be observed without difficulty from all inhabited regions of the Earth and is invisible only from the innermost areas of Antarctica. It appears circumpolar only well beyond the Arctic Circle. Its brightness puts it at the limit of naked-eye visibility, so observing it without optical aid requires a clear, and preferably moonless, sky.
The best period for evening observation is between late October and April from both hemispheres; in February (as at J2000) it is in opposition to the Sun. In July and August its direction is close to that of the Sun, so it is above the horizon mostly during daylight hours.
Physical characteristics
It is a blue-white main-sequence star with an absolute magnitude of −0.99; its positive radial velocity indicates that it is moving away from the Solar System.
The star appears wrapped in an extensive nebulosity that shines partly by reflection and partly by emission. The reflection nebula is listed as GN 05.19.0 and the HII region is called Sh2-263; HD 34989 is the ionizing source of this HII region. In carbon monoxide observations, this region corresponds to a circular hole.
Based on SED fitting, the star has an angular diameter of about 0.118 ± 0.026 milliarcseconds.
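Combining the quoted angular diameter with the estimated distance gives a rough physical size. The sketch below uses the 0.118 mas diameter and 1700 light-year distance quoted above, standard conversion constants, and the small-angle approximation; the result is an order-of-magnitude illustration, not a published value.

```python
import math

# Convert the quoted angular diameter and distance into a physical radius.
ANGULAR_DIAMETER_MAS = 0.118        # milliarcseconds (from SED fitting, above)
DISTANCE_LY = 1700                  # light-years (from the article lead)

M_PER_LY = 9.4607e15                # metres per light-year
RAD_PER_MAS = math.pi / (180 * 3600 * 1000)  # radians per milliarcsecond
SOLAR_RADIUS_M = 6.957e8            # IAU nominal solar radius in metres

theta_rad = ANGULAR_DIAMETER_MAS * RAD_PER_MAS
diameter_m = theta_rad * DISTANCE_LY * M_PER_LY  # small-angle approximation
radius_solar = diameter_m / 2 / SOLAR_RADIUS_M
```

This yields a radius of several solar radii, consistent with a B-type main-sequence star.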
References
B-type main-sequence stars
Orion (constellation)
025041
034989
1763
Durchmusterung objects | HD 34989 | [
"Astronomy"
] | 374 | [
"Constellations",
"Orion (constellation)"
] |
57,689,805 | https://en.wikipedia.org/wiki/Pierre%20Monsan | Pierre Monsan (born June 25, 1948, in Prades, Pyrénées-Orientales, France) is a French biochemist and entrepreneur. He is currently Professor emeritus at the Institut national des sciences appliquées de Toulouse (INSA Toulouse, affiliated to the University of Toulouse) and the founding director of the pre-industrial demonstrator Toulouse White Biotechnology (TWB).
Monsan's scientific interests include biocatalysis, biochemical and enzyme engineering. Beyond his academic work, Monsan is co-inventor on numerous patents and co-founded several industrial biotechnology companies.
Education and career
Monsan was educated at INSA Toulouse and the University of Toulouse where he graduated with an engineer degree (Ingénieur diplômé) in Biological Chemistry in 1969. He was then awarded his Doctor-Engineer Degree in 1971 and his PhD degree in 1977 from INSA Toulouse for research on enzyme immobilization. He served as lecturer in the Department of Biochemical Engineering at INSA Toulouse from 1969 and was later promoted to Assistant Professor (1973) and then Professor (1981).
In 1984, Monsan took a leave from Academia and co-founded BioEurope, a startup company specialized in industrial biocatalysis. There he served as CSO from 1984 to 1989, CEO from 1989 to 1993 and CSO again from 1993 to 1999 after the acquisition of the company by the Solabia Group. In 1993, Monsan returned to INSA Toulouse to lead a research group focusing on the discovery, characterisation and molecular engineering of enzymes, including glucansucrases and lipases. He was also appointed Professor at Ecole des Mines-ParisTech in 1993. From 1999 to 2003 he served as head of Department of Biochemical Engineering at INSA Toulouse. In 2012, he founded the pre-industrial demonstrator “Toulouse White Biotechnology” (TWB) with a €20M grant within the framework of the Investing for the Future national program (also called the grand emprunt) and served as its founding director until 2019.
Monsan is presently Professor emeritus at INSA Toulouse and the CEO of Cell-Easy, a start-up specializing in the production of stem cells.
Scientific work
Research by Monsan and his collaborators has focused on biocatalysis, biochemical engineering and enzyme engineering, published in over 230 articles. His fundamental research includes investigation of structure–activity relationships of enzymes (particularly glycoside hydrolases and lipases) and enzyme discovery by functional metagenomics. His more applied research involves biocatalysis in non-conventional (anhydrous) media for the synthesis of chemicals (e.g., chiral resolution to obtain enantiopure compounds), protein engineering (e.g., modification of substrate specificity, enantioselectivity, or thermostability), methods for enzyme immobilization, and bioreactor design and development.
Technology transfer and entrepreneurship
Monsan has been heavily involved in technology transfer throughout his career and is co-inventor of over 60 patents. He has developed several industrial biocatalytic processes for the production of polysaccharides, oligosaccharides and amino acid derivatives. Companies he has co-founded include BioEurope (1984; biocatalytic synthesis of reagents for the food, pharma and nutrition industries; now owned by the Solabia group), Biotrade (1996; waste water treatment) and Genibio (1998, food additives). He is, and has been, a member of the scientific advisory boards of several companies, including Danisco Venture, PCAS, and Deinove.
In 2012, Monsan founded the pre-industrial demonstrator Toulouse White Biotechnology (TWB), an original institute dedicated to technology transfer through a consortium of public and industrial partners. TWB promotes industrial biotechnology and biobased economy through collaborative public/private research and development projects (e.g., THANAPLAST project in partnership with Carbios) and the creation of startups such as EnobraQ (development of yeasts able to metabolize CO2) or Pili (production of bacterial ink).
Awards and memberships
Chaptal Award for Chemical Arts from the French Society for the Promotion of Industry (2000)
Founding member of the French Academy of Technologies (since 2000)
Elected senior member of the Institut Universitaire de France (IUF) in 2003 and re-elected in 2008
Elected member of the Executive Board of the European Federation of Biotechnology (since 2009)
Biocat Award for lifetime achievement from the University of Hamburg (2012)
Knight in the French National Order of Merit (2013)
Docteur Honoris Causa of the University of Liège, Belgium (2015)
Founding Chairman of the French Federation of Biotechnology (since 2015)
Foreign member of the College of Fellows of the American Institute for Medical and Biological Engineering (AIMBE) (since 2016)
Corresponding Member of the French Academy of Agriculture (since 2016)
Enzyme Engineering Award from Engineering Conferences International and Genencor (2017)
Knight in the French Legion of Honour (2017)
References
External links
Website of Toulouse White Biotechnology
TBI - Toulouse Biotechnology Institute
From metabolic engineering to synthetic biology and industrial biotechnology, Pierre Monsan, 25 March 2015, Collège de France (in French)
Interview of Pierre Monsan at the BIOKET Global Conference on Bioeconomy Key Enabling Technologies 6-8 March 2018, Strasbourg, France
1948 births
Living people
Biotechnologists
Knights of the Legion of Honour
French biochemists
20th-century French chemists
Knights of the Ordre national du Mérite
Scientists from Toulouse
University of Toulouse alumni
Academic staff of Mines Paris - PSL | Pierre Monsan | [
"Biology"
] | 1,167 | [
"Biotechnologists"
] |
57,691,027 | https://en.wikipedia.org/wiki/NGC%201250 | NGC 1250 is an edge-on lenticular galaxy located about 275 million light-years away in the constellation Perseus. It was discovered by astronomer Lewis Swift on Oct 21, 1886. NGC 1250 is a member of the Perseus Cluster.
See also
List of NGC objects (1001–2000)
NGC 1277
References
External links
Perseus Cluster
Perseus (constellation)
Lenticular galaxies
1250
02613
012098
+07-07-040
Astronomical objects discovered in 1886
Discoveries by Lewis Swift | NGC 1250 | [
"Astronomy"
] | 110 | [
"Perseus (constellation)",
"Constellations"
] |
57,691,705 | https://en.wikipedia.org/wiki/Ruth%20Cameron%20%28scientist%29 | Ruth Cameron FInstP FIOM3 FREng is a British materials scientist and professor at the University of Cambridge. She is co-director of the Cambridge Centre for Medical Materials, where she studies materials that interact therapeutically with the body. Since October 2020 she has been joint head of the Department of Materials Science and Metallurgy at Cambridge.
Early life and education
Cameron completed her PhD in physics at the University of Cambridge.
Research and career
Cameron's research considers materials that interact therapeutically with the body. She is interested in musculoskeletal repair. Her research covers bioactive biodegradable composites, biodegradable polymers, tissue-engineered scaffolds and surface patterning. Cameron works with Serena Best on collagen scaffolds for the spin-out company Orthomimetics.
In 1993 she joined the Department of Materials Science and Metallurgy, University of Cambridge. Since 2006 she has co-led the Cambridge Centre for Medical Materials with Serena Best. The co-management makes Cameron and Best the first Engineering and Physical Sciences Research Council fellowship to job share. She was a founder member of the Pfizer Institute for Pharmaceutical Materials Science. She is a Fellow of Lucy Cavendish College, Cambridge.
Honours and awards
2017 - United Kingdom Society for Biomaterials President's Prize
2017 - Institute of Materials, Minerals and Mining Griffith Medal & Prize
2019 - Institute of Physics Rosalind Franklin Medal and Prize, for "innovative application of physics to regenerative medicine and pharmaceutical delivery"
2021 - Engineering and Physical Sciences Suffrage Science award
2023 - Fellow of the Royal Academy of Engineering
References
Living people
Alumni of the University of Cambridge
Academics of the University of Cambridge
British materials scientists
British women academics
Fellows of Lucy Cavendish College, Cambridge
Fellows of the Institute of Physics
Year of birth missing (living people)
Place of birth missing (living people)
Women materials scientists and engineers
Fellows of the Institute of Materials, Minerals and Mining
Fellows of the Royal Academy of Engineering
Female fellows of the Royal Academy of Engineering | Ruth Cameron (scientist) | [
"Materials_science",
"Technology"
] | 407 | [
"Women materials scientists and engineers",
"Materials scientists and engineers",
"Women in science and technology"
] |
57,691,766 | https://en.wikipedia.org/wiki/Faberg%C3%A9%20%26%20Cie | Fabergé & Cie was a jewelry firm founded in 1924 in Paris by two of the sons of Peter Carl Fabergé, Alexander Fabergé (1877–1952) and Eugène Fabergé (1874–1960), together with Peter Carl Fabergé's business partner and jewellery designer Andrea Marchetti from the Fabergé store in London, which had closed in 1918.
History
After their father's famous jewelry company in Russia was nationalized by the Bolsheviks in 1918, the brothers moved to Paris and continued to make and sell Fabergé-branded jewelry. They also specialized in the appraisal and repair of historic Fabergé items. Their stamp on the jewels was "FABERGÉ, PARIS". The store was located in one of the city's most high-end shopping areas, at 281 Rue du Faubourg Saint-Honoré. From the 1920s to the 1980s, the German jeweler Victor Mayer produced Fabergé eggs and jewelry for Fabergé Paris; the stamp was "Fabergé Paris, Victor Mayer", followed by the year of production. Alexander Julius Fabergé married his first wife Nina Fabergé (born Belicheva) and had a daughter named Irina Fabergé. With his second wife Johanna Fabergé he had a son, Alexander Cyril Fabergé (1912–1985).
The brand name Fabergé was eventually used by an American company for beauty products: in 1943, Samuel Rubin registered the Fabergé name for perfume in the United States. The trade name Fabergé was not filed in France as a jewelry trademark until 1968. In 1978, a New York lawyer filed suit on behalf of the Fabergé family but lost the case. Until 2001, Fabergé & Cie retained the sole rights to produce and sell Fabergé-brand jewelry in France. The Bulgarian-born prince Charles Lahovary, a nephew of Ion Lahovary and the Greek princess Emma Maurokordatos, was married to the daughter of the Fabergé business partner Andreas Marchetti, who was the last owner of the store when it closed down in 2001.
Notes
Shopping districts and streets in France
Fabergé
Hardstone carving
Vitreous enamel
Fabergé workmasters | Fabergé & Cie | [
"Chemistry"
] | 432 | [
"Coatings",
"Vitreous enamel"
] |
57,691,785 | https://en.wikipedia.org/wiki/Muskingum%20River%20Navigation%20Historic%20District | The Muskingum River Navigation Historic District is a historic district in Ohio's Coshocton, Morgan, Muskingum, and Washington counties, which was listed on the National Register of Historic Places in 2007. The listing includes 12 contributing buildings, 32 contributing structures, and a contributing site.
The "Muskingum River lock system was designated the first Navigation Historic District in the United States by the National Park Service." The Muskingum River Navigation System was also designated as a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in 2001.
It is traversed by the Muskingum River Water Trail.
References
Canals in Ohio
National Register of Historic Places in Coshocton County, Ohio
National Register of Historic Places in Morgan County, Ohio
National Register of Historic Places in Muskingum County, Ohio
National Register of Historic Places in Washington County, Ohio
Historic districts on the National Register of Historic Places in Ohio
Buildings and structures completed in 1816
Historic Civil Engineering Landmarks | Muskingum River Navigation Historic District | [
"Engineering"
] | 195 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
57,691,958 | https://en.wikipedia.org/wiki/NGC%201259 | NGC 1259 is a lenticular galaxy located about 243 million light-years away in the constellation Perseus. The galaxy was discovered by astronomer Guillaume Bigourdan on October 21, 1884 and is a member of the Perseus Cluster.
A type Ia supernova designated as SN 2008L was discovered in NGC 1259 on January 14, 2008.
See also
List of NGC objects (1001–2000)
NGC 1260
References
External links
Perseus Cluster
Perseus (constellation)
Lenticular galaxies
1259
12208
Astronomical objects discovered in 1884
Discoveries by Guillaume Bigourdan | NGC 1259 | [
"Astronomy"
] | 121 | [
"Perseus (constellation)",
"Constellations"
] |
57,693,965 | https://en.wikipedia.org/wiki/Beatriz%20Rold%C3%A1n%20Cuenya | Beatriz Roldán Cuenya (born 1976 in Oviedo) is a Spanish physicist working in surface science and catalysis. Since 2017 she has been director of the Department of Interface Science at the Fritz Haber Institute of the Max Planck Society in Berlin, Germany. Since April 2023, she has also been interim director of the Department of Inorganic Chemistry, also at the Fritz Haber Institute.
Professional career
Roldán Cuenya studied at the University of Oviedo in Spain and received her doctorate degree from the University of Duisburg-Essen in Germany under the supervision of
Werner Keune. As a postdoc she worked at the University of California, Santa Barbara in the group of Eric McFarland and subsequently became professor at the University of Central Florida in Orlando (USA). In 2013, she accepted a Chair Faculty position in Solid State Physics at the Ruhr University Bochum in Germany. She holds two roles at the Fritz Haber Institute - since 2017 as Director of the Department of Interface Science, and since April 2023 as interim Director of the Department of Inorganic Chemistry.
Her main research interests are the synthesis of nanostructured materials with tunable surface properties and the experimental investigation of structure-reactivity relationships in thermal and electro-catalysis using in situ and operando methods. Applications of her work are in the areas of environmental remediation and energy conversion.
Awards and distinctions
2005 NSF-CAREER Award of the American National Science Foundation
2009 Peter Mark Memorial Award, American Vacuum Society
2009 University of Central Florida, Research Incentive Award
2016 Fellow of the Max Planck Society at the Max Planck Institute for Chemical Energy Conversion (Mülheim, Germany)
2016 European Research Council Consolidator Award
2020 Elected Member of the Academia Europaea, the Academy of Europe
2021 ISE-Elsevier Prize for Experimental Electrochemistry of the International Society of Electrochemistry (ISE)
2022 Paul H. Emmett Award from the North American Catalysis Society for Fundamental Catalysis
Publications
I. Zegkinoglou, A. Zendegani, I. Sinev, S. Kunze, H. Mistry, H. S. Jeon, J. Zhao, M. Hu, E. E. Alp, S. Piontek, M. Smialkowski, U.-P. Apfel, F. Körmann, J. Neugebauer, T. Hickel, B. Roldan Cuenya: Operando phonon studies of the protonation mechanism in highly active hydrogen evolution reaction pentlandite catalysts, JACS 2017, 139, 14360.
H. Mistry, Y. Choi, A. Bagger, F. Scholten, C. Bonifacio, I. Sinev, N. J. Divins, I. Zegkinoglou, H. Jeon, K. Kisslinger, E. A. Stach, J. C. Yang, J. Rossmeisl, B. Roldan Cuenya: Enhanced carbon dioxide electroreduction to carbon monoxide over defect rich plasma-activated silver catalysts, Angew. Chem. 2017, 56, 11394.
H. Mistry, A. Varela, C. S. Bonifacio, I. Zegkinoglou, I. Sinev, Y.-W. Choi, K. Kisslinger, E. A. Stach, J. C. Yang, P. Strasser, B. Roldan Cuenya: Highly selective plasma-activated copper catalysts for carbon dioxide reduction to ethylene, Nature Commun. 2016, 7, 12123.
References
External links
Department of Interface Science of the Fritz Haber Institute
Living people
Max Planck Society people
1976 births
Spanish physicists
Members of Academia Europaea
Members of the German National Academy of Sciences Leopoldina
University of Oviedo alumni
University of Duisburg-Essen alumni
Academic staff of Ruhr University Bochum
Max Planck Institute directors
People from Oviedo
Physical chemists
University of Central Florida faculty
Women chemical engineers | Beatriz Roldán Cuenya | [
"Chemistry"
] | 840 | [
"Women chemical engineers",
"Chemical engineers",
"Physical chemists"
] |
57,694,006 | https://en.wikipedia.org/wiki/Delta%20App | Delta was a homegrown Indian networking and support app for the LGBT community, co-founded by Ishaan Sethi.
The application is no longer available on the iOS store. Official Twitter and Instagram accounts have not been updated since 2022. The service appears to be defunct.
About Delta
The app allowed members of the LGBTQ community to find friendly spaces and professionals. It was developed by Ishaan Sethi, who also served as CEO. The impetus for the app was the discrimination the community faced in public, as well as the extortion, sexual abuse, blackmail, and unsolicited sexual advances encountered on gay dating apps. After coming out in 2010 while in the US, Sethi felt isolated on his return to India, which prompted him to start Delta.
Initially, he crowd funded about Rs 10 lakh and raised Rs 2 crore from investors such as Keshav Suri, Truly Madly's Sachin Bhatia, celebrity chef and restaurateur Ritu Dalmia and Vivek Sahni, CEO & Co-Founder, Kama Ayurveda. The app was launched in 2018.
The app had more than 50,000 users. It was more than a dating app: it also served as a space where the LGBTQ community could connect and support each other. The dating feature was known as 'Connect', and there were further features for networking and brand promotion.
References
LGBTQ in India
Mobile social software
Online dating services of India
LGBTQ online dating services | Delta App | [
"Technology"
] | 308 | [
"Mobile software stubs",
"Mobile technology stubs"
] |
51,508,751 | https://en.wikipedia.org/wiki/Amanita%20manginiana | Amanita manginiana, also known as Mangin's false death cap or Chiu's false death cap, is a species of the genus Amanita.
Description
The cap of Amanita manginiana is chestnut brown, darker in the center, with the margin more pallid; it is silky (bearing fine hairs), convex then applanate, fleshy, and has a nonstriate margin. The gills are adnate and white; short gills are present. The stipe is around 5–8 cm high, cylindrical, stuffed, and white, becoming orangish-brown. The bulb is fleshy, globose to ovoid. The ring is membranous, white, superior, and skirt-like. The volva is membranous, limbate, and fulvous-white. The spores are ovoid to subglobose and measure 7–8 × 6 μm, though measurements of around 9.2–10.3 × 7.5–7.8 μm have also been reported. In the collected specimens the spores are reduced to amyloid rubble, unfortunately rendering the material almost entirely useless for study.
This species is very poorly known. Sources report a similar species from China under the name A. manginiana sensu W.F. Chiu.
Edibility
A. manginiana appears to belong with a group of edible species that at the moment are classed in Amanita section Phalloideae, but the edibility of A. manginiana is unknown.
According to China Forestry Culture Collection Center, it is reported to be edible with potential medical use. However, due to close similarity to highly toxic species, consumption is inadvisable.
See also
Amanita
List of Amanita Species
References
manginiana
Taxa named by Narcisse Théophile Patouillard
Fungus species | Amanita manginiana | [
"Biology"
] | 399 | [
"Fungi",
"Fungus species"
] |
51,508,759 | https://en.wikipedia.org/wiki/Brilanestrant | Brilanestrant (INN) (developmental code names GDC-0810, ARN-810, RG-6046, RO-7056118) is a nonsteroidal combined selective estrogen receptor modulator (SERM) and selective estrogen receptor degrader (SERD) that was discovered by Aragon Pharmaceuticals and was under development by Genentech for the treatment of locally advanced or metastatic estrogen receptor (ER)-positive breast cancer.
Development of brilanestrant was discontinued by Roche in April 2017. It reached phase II clinical trials for the treatment of breast cancer prior to the discontinuation of its development.
Mechanism of action
Similarly to tamoxifen, a SERM, brilanestrant shows some capacity to activate the ER in certain contexts and possesses weak estrogenic activity in the rat uterus, and unlike fulvestrant, which is currently the only SERD to have been marketed, brilanestrant is not a steroid and is orally bioavailable and does not need to be administered by intramuscular injection. Brilanestrant has been found to be active in tamoxifen- and fulvestrant-resistant in vitro models of human breast cancer. Side effects observed in clinical studies of brilanestrant thus far have included diarrhea, nausea, and fatigue of mostly mild-to-moderate severity.
Brilanestrant is a structural analogue of etacstil, an earlier combined SERM and SERD that was abandoned in 2001 for commercial reasons.
See also
Bazedoxifene
Elacestrant
References
External links
GDC-0810 (brilanestrant) - AdisInsight
Pipeline - Genentech
Abandoned drugs
Antiestrogens
Chlorobenzene derivatives
Fluorobenzene derivatives
Hormonal antineoplastic drugs
Indazoles
Selective estrogen receptor degraders
Selective estrogen receptor modulators
Triphenylethylenes | Brilanestrant | [
"Chemistry"
] | 413 | [
"Drug safety",
"Abandoned drugs"
] |
51,508,920 | https://en.wikipedia.org/wiki/Elacestrant | Elacestrant, sold under the brand name Orserdu, is a selective estrogen receptor degrader (SERD) used in the treatment of breast cancer. It is taken by mouth.
Elacestrant is an antiestrogen that acts as an antagonist of estrogen receptors, which are the biological targets of endogenous estrogens like estradiol. The most common side effects of elacestrant include body pain, nausea and vomiting, increased serum lipids, elevated liver enzymes, fatigue, decreased hemoglobin, raised creatinine, decreased appetite, diarrhea, headache, constipation, abdominal pain, and hot flashes.
Elacestrant was approved for medical use in the United States in January 2023, and in the European Union in September 2023.
Medical uses
Elacestrant is indicated for the treatment of postmenopausal women or adult men with estrogen receptor (ER)-positive, human epidermal growth factor receptor 2 (HER2)-negative, ESR1-mutated, advanced or metastatic breast cancer with disease progression following at least one other line of endocrine therapy.
Pharmacology
Pharmacodynamics
Elacestrant is an antiestrogen that acts as an antagonist of estrogen receptors, specifically targeting the estrogen receptor alpha (ERα), which is the biological target of endogenous estrogens like estradiol. Additionally, elacestrant is a selective estrogen receptor degrader (SERD), meaning it induces the degradation of ERα.
Pharmacokinetics
Elacestrant has an oral bioavailability of approximately 10%. Its plasma protein binding exceeds 99% and remains independent of concentration. Elacestrant is metabolized in the liver, primarily by the cytochrome P450 enzyme CYP3A4 and to a lesser extent by CYP2A6 and CYP2C9. The elimination half-life of elacestrant is 30 to 50 hours. It is excreted primarily in feces (82%) and to a lesser extent in urine (7.5%).
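As an illustrative aside, the half-life range quoted above can be turned into an expected steady-state accumulation under once-daily dosing using the standard linear-kinetics formula R = 1 / (1 − 2^(−τ/t½)). This is a generic textbook approximation assuming one-compartment, linear kinetics, not elacestrant's labeled pharmacokinetic model, and the function name is ours:

```python
def accumulation_ratio(half_life_h, dosing_interval_h=24.0):
    """Steady-state accumulation ratio R = 1 / (1 - 2**(-tau / t_half))
    for repeated dosing under simple one-compartment linear kinetics.
    A textbook approximation, not elacestrant's labeled PK model."""
    if half_life_h <= 0 or dosing_interval_h <= 0:
        raise ValueError("half-life and dosing interval must be positive")
    return 1.0 / (1.0 - 2.0 ** (-dosing_interval_h / half_life_h))

# With the 30-50 hour half-life range quoted above and once-daily dosing:
r_low = accumulation_ratio(30.0)   # about 2.3-fold accumulation
r_high = accumulation_ratio(50.0)  # about 3.5-fold accumulation
```

Under these assumptions, a longer half-life relative to the 24-hour dosing interval simply means more drug carried over between doses.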
History
The efficacy of elacestrant was evaluated in the EMERALD trial, which was a randomized, open-label, active-controlled, multicenter study involving 478 postmenopausal women and men with ER-positive, HER2-negative advanced or metastatic breast cancer. Among them, 228 participants had ESR1 mutations. Eligible participants had experienced disease progression on one or two prior lines of endocrine therapy, including one line with a CDK4/6 inhibitor, and could have received up to one prior line of chemotherapy in the advanced or metastatic setting.
Participants were randomly assigned in a 1:1 ratio to receive either elacestrant 345 mg orally once daily or investigator's choice of endocrine therapy. The choices for the control arm included fulvestrant, or an aromatase inhibitor. Randomization was stratified based on whether the ESR1 mutation was detected or not, prior treatment with fulvestrant, and presence of visceral metastasis.
The FDA granted the application for elacestrant priority review and fast track designations.
Research
It is a nonsteroidal combined selective estrogen receptor modulator (SERM) and selective estrogen receptor degrader (SERD) (described as a "SERM/SERD hybrid (SSH)") that was discovered by Eisai and was developed by Radius Health and Takeda for the treatment of estrogen receptor (ER)-positive advanced breast cancer. Elacestrant has dose-dependent, tissue-selective estrogenic and antiestrogenic activities, with biphasic weak partial agonist activity at the ER at low doses and antagonist activity at higher doses. It shows agonistic activity on bone and antagonistic activity on breast and uterine tissues. Unlike the SERD fulvestrant, elacestrant readily crosses the blood–brain barrier into the central nervous system, where it can target breast cancer metastases in the brain; it is also orally bioavailable and does not require intramuscular injection.
References
Amines
Antiestrogens
Hormonal antineoplastic drugs
Hydroxyarenes
Tetralins
Selective estrogen receptor degraders
Selective estrogen receptor modulators | Elacestrant | [
"Chemistry"
] | 920 | [
"Amines",
"Bases (chemistry)",
"Functional groups"
] |
51,510,868 | https://en.wikipedia.org/wiki/Sodium%202-hydroxyethyl%20sulfonate | Sodium 2-hydroxyethyl sulfonate (also: sodium isethionate) is the sodium salt of 2-hydroxyethane sulfonic acid (isethionic acid). Owing to its strong polarity and resistance to multivalent ions, it is used as the hydrophilic head group in washing-active surfactants known as isethionates (acyloxyethanesulfonates). It is being studied as a high production volume chemical in the "High Production Volume (HPV) Chemical Challenge Program" of the US Environmental Protection Agency (EPA).
Production
Sodium 2-hydroxyethyl sulfonate is formed by the reaction of ethylene oxide with sodium hydrogen sulfite in aqueous solution:

C2H4O + NaHSO3 → HOCH2CH2SO3Na
To avoid contamination and suppress the formation of by-products (which are difficult to remove) the reaction must be performed under careful control of mass ratios and process conditions. Excess sulfite (SO32−) or bisulfite (HSO3−) lead to an unpleasant odor of the downstream product, higher levels of ethylene glycol or glycol ethers (formed by the hydrolysis and ethoxylation of ethylene oxide) give hygroscopic and greasy surfactants. Concentrated ethylene glycol-containing sodium 2-hydroxyethyl sulfonate solutions can subsequently mostly be freed from ethylene glycol by continuous extraction with e.g. isopropanol (<0.5%). Therefore, in the continuous industrial process an aqueous sodium hydrogen sulfite solution is prepared in a first reactor by mixing a sodium hydroxide solution and sulfur dioxide. In a second reactor the sodium hydrogen sulfite solution is mixed with a slight excess of ethylene oxide to obtain sodium 2-hydroxyethyl sulfonate in almost quantitative yields at elevated temperature and pressure with a precise control of pH. The reaction has to take place under the exclusion of oxygen and under precise control of the stoichiometry of the reactants, the temperature, the pH and the throughput.
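The 1:1 stoichiometry of ethylene oxide and sodium hydrogen sulfite described above can be sketched as a simple mass balance. The 2% ethylene oxide excess used below is an arbitrary illustrative figure (the text only says "a slight excess"), and the helper function is hypothetical:

```python
# Illustrative mass balance for the synthesis reaction above:
#   C2H4O (ethylene oxide) + NaHSO3 -> HOCH2CH2SO3Na
# Molar masses are standard values (g/mol); the 2% ethylene oxide
# excess is an arbitrary example, not a process specification.

M_EO = 44.05        # ethylene oxide, C2H4O
M_NAHSO3 = 104.06   # sodium hydrogen sulfite
M_PRODUCT = 148.11  # sodium 2-hydroxyethyl sulfonate, C2H5NaO4S

def product_mass(nahso3_mass_g, eo_excess=0.02):
    """Return (ethylene oxide feed in g, theoretical product mass in g)
    for a given NaHSO3 charge, taking NaHSO3 as the limiting reagent."""
    n_nahso3 = nahso3_mass_g / M_NAHSO3            # mol of limiting reagent
    eo_feed = n_nahso3 * (1.0 + eo_excess) * M_EO  # slight EO excess
    return eo_feed, n_nahso3 * M_PRODUCT           # 1:1 stoichiometry

# For a 1 kg NaHSO3 charge: roughly 0.43 kg of EO feed, 1.42 kg of product.
eo_g, product_g = product_mass(1000.0)
```

This also makes plain why the excess must stay small: unreacted ethylene oxide hydrolyzes to the ethylene glycol impurities discussed above.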
Properties
Solid sodium 2-hydroxyethyl sulfonate is a colorless, free-flowing, non-hygroscopic solid, which dissolves readily in water and has good biodegradability. Due to the method of synthesis, samples often contain traces of sodium sulfite or sodium hydrogen sulfite, causing aqueous solutions to possess a mildly alkaline pH of about 10.
Use
The main use of sodium 2-hydroxyethyl sulfonate is the production of the isethionate class of surfactants. These are readily foaming and particularly mild, making them suitable for cleaning sensitive skin and are therefore mainly used in baby soaps and shampoos. Because of its pronounced skin compatibility sodium 2-hydroxyethyl sulfonate is added to soaps and liquid skin cleansers with up to 15 parts by weight.
From sodium 2-hydroxyethyl sulfonate the so-called biological buffers such as HEPES, MES, PIPES etc. are easily accessible.
The addition of 2-hydroxyethyl sulfonate to electroplating baths allows higher current densities and lower concentrations than the much more expensive methanesulfonic acid, with improved deposit appearance.
References
Primary alcohols
Sulfonates
Organic sodium salts | Sodium 2-hydroxyethyl sulfonate | [
"Chemistry"
] | 697 | [
"Organic sodium salts",
"Salts"
] |
51,510,936 | https://en.wikipedia.org/wiki/Palmer-Bowlus%20Flume | The Palmer-Bowlus flume is a class of flumes commonly used to measure the flow of wastewater in sewer pipes and conduits. The Palmer-Bowlus flume has a u-shaped cross-section and was designed to be inserted into, or in line with, pipes and u-channels found in sanitary sewer applications.
Because it is a long-throated flume, the point of measurement of the Palmer-Bowlus flume can be anywhere upstream of the throat ramp by a distance greater than D/2 (where D is the flume size). By contrast, the Montana flume has a single, specified point of measurement in the contracting section at which the level is measured. Unlike most other flumes used for open channel flow measurement, the Palmer-Bowlus flume can be calibrated by theoretical analysis.
The general design of the flume is detailed in ASTM D5390: Standard Test Method for Open-Channel Flow Measurement of Water with Palmer-Bowlus Flumes. It is important to note that, unlike the Parshall flume standard, this standard does not set out specific sizes and flow rates, but only general characteristics for the class of flume.
18 sizes of Palmer-Bowlus flumes have been developed, in line with the common pipe sizes to which they would be adapted, from 4 inches to 72 inches. In practice, though, it is uncommon to see Palmer-Bowlus flumes greater than 24 inches in size.
Under average flow conditions, the Palmer-Bowlus flume is accurate to within 3-5%. For lower flow rates - where the depth is low relative to the length of the flume - the accuracy decreases to 5-6%. This error, combined with typical installation / flow meter errors, means that overall site accuracy is somewhat less than other more common flumes.
Free-Flow Characteristics
Flow in the Palmer-Bowlus Flume transitions from a circular bottom section to a raised trapezoidal throat and then back - accelerating sub-critical flow (Fr~0.5) to a supercritical state (Fr>1) to develop the level-to-flow relationship.
The simplified free-flow discharge can be summarized as

Q = C × Ha^n

Where
Q is flow rate
C is the free-flow coefficient for the flume
Ha is the head at the primary point of measurement
n varies with flume size (See Table 1 below)
Note that Palmer-Bowlus flumes are proprietary to each manufacturer / throat configuration. The table presented below is for the most common throat configuration - a trapezoidal ramp - and is simplified for the entire flume flow range. For other throat configurations refer to the manufacturer's flow tables.
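The power-law rating above (flow as the free-flow coefficient times head raised to a size-dependent exponent) can be sketched as follows; since the ratings are proprietary to each manufacturer, the numeric values of C and n below are placeholders for illustration only, not real rating data:

```python
def palmer_bowlus_flow(ha, c, n):
    """Free-flow discharge Q = C * Ha**n for a Palmer-Bowlus flume.

    C and n are not universal: they depend on flume size and on the
    manufacturer's throat geometry, so both must be taken from the
    manufacturer's rating table for the specific flume."""
    if ha <= 0:
        raise ValueError("head Ha must be positive")
    return c * ha ** n

# Placeholder rating values for illustration only (not real data):
q = palmer_bowlus_flow(ha=0.25, c=2.5, n=1.9)
```

In practice a secondary flow meter evaluates exactly this kind of expression continuously from the measured head Ha.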
Free-Flow vs. Submerged Flow
Free-Flow – when there is no “back water” to restrict flow through a flume. Only the single depth (primary point of measurement - Ha) needs to be measured to calculate the flow rate. A free flow also induces a hydraulic jump downstream of the flume.
Submerged Flow – when the water surface downstream of the flume is high enough to restrict flow through the flume, the flume is deemed to be submerged. Submergence transitions for Palmer-Bowlus flumes are quite high (85-90%), but corrections for submerged flow in Palmer-Bowlus flumes have not been published, so it is important to set the flume so that it does not experience submerged flow conditions. Although commonly thought of as occurring at higher flow rates, submerged flow can exist at any flow level, as it is a function of downstream conditions. In natural stream applications, submerged flow is frequently the result of vegetative growth on the downstream channel banks, sedimentation, or subsidence of the flume.
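The submergence criterion described above (downstream-to-upstream head ratio compared against an 85-90% transition) can be expressed as a simple check; the function and its 0.85 default threshold are illustrative assumptions, not a published standard:

```python
def is_submerged(ha, hb, transition=0.85):
    """Compare the submergence ratio Hb/Ha of a flume against a
    transition threshold (Palmer-Bowlus transitions are roughly
    0.85-0.90; the 0.85 default is the conservative end).

    Because no submerged-flow corrections have been published for
    this flume, a True result means the free-flow rating no longer
    applies, not that the reading can be corrected."""
    if ha <= 0:
        raise ValueError("upstream head Ha must be positive")
    return (hb / ha) >= transition

flag_high = is_submerged(ha=0.30, hb=0.27)  # ratio 0.90 -> submerged
flag_low = is_submerged(ha=0.30, hb=0.20)   # ratio ~0.67 -> free flow
```

Measuring Hb as well as Ha is what lets an installation detect that its free-flow assumption has broken down.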
Construction
Unlike other flumes - such as the Parshall, the Palmer-Bowlus flumes is typically only fabricated in two materials:
Fiberglass (wastewater applications due to its corrosion resistance)
Stainless steel (applications involving high temperatures / corrosive flow streams)
Drawbacks
For standard Palmer-Bowlus flumes with the standard trapezoidal throat ramp:
The flume may experience sedimentation / solids drop out upstream of the throat ramp. This is particularly true if the flow rates are low and the solids content is high or the solids heavy.
Unlike other flumes where the design and discharge equations have been standardized, Palmer-Bowlus flume may not be readily programmed into the secondary flow meters commonly used with the flume.
As a long-throated flume, the Palmer-Bowlus flume requires long straight runs upstream - 25 pipe diameters.
References
External links
Pictures of Palmer-Bowlus flumes of various sizes and styles
Water supply infrastructure
Fluid mechanics
Hydraulic structures
Hydrology | Palmer-Bowlus Flume | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 965 | [
"Hydrology",
"Civil engineering",
"Fluid mechanics",
"Environmental engineering"
] |
51,511,566 | https://en.wikipedia.org/wiki/Weatherbird | The Weatherbird is a cartoon character and a single-panel comic. It is printed on the front of the St. Louis Post-Dispatch and has been in the paper continuously since 1901, making it the longest-running American newspaper cartoon and a mascot of the newspaper.
Cartoonists
The Weatherbird, in its long run, has been drawn by just six cartoonists (three of them, by coincidence, named Martin):
Harry B. Martin (1901 – 1903)
Oscar Chopin (1903 – 1910)
S. Carlisle Martin (1910 – 1932)
Amadee Wohlschlaeger (1932 – 1981)
Albert Schweitzer (1981 – 1986)
Dan Martin (1986 – present)
The character first appeared on February 11, 1901. Harry B. Martin originated the character, which was originally called "Dickey Bird" ('dicky-bird' is a generic slang term for any small bird). Martin had originally intended to rotate through just a few versions of the bird – one for rain, one for heat, etc. – but readers asked for a new drawing each day, which he then provided.
Martin later moved to New York where he drew the strips It Happened in Birdland (1907–1909) and Inbad the Tailor (1911–1912, for the New York American). Martin became a golf correspondent and an authority on golf (writing 15 books on the subject) and a founder of the American PGA.
Oscar Charles Chopin (1873 – 1932) inherited the Weatherbird from Martin, drawing it until 1910.
S. Carlisle Martin took over the Weatherbird in 1910. He started the tradition of making the Weatherbird comment on the news in addition to the weather, and started a pattern of six words or less for the bird's comments. He was assisted by Carlos Hurd, and drew the Weatherbird until his death in 1932.
In 1912, the Post-Dispatch began running a full-page, multiple-panel color strip on Sunday, titled "Jinx and the Weather Bird Family", and featuring the Weatherbird (called "George" in the strip), his wife, and their mischievous Katzenjammer Kids-like children in various putatively comical escapades. (Jinx was an imp who observed or initiated the hijinks; the strip was later retitled to just "The Weather Bird Family".) Carlisle Martin drew the strip, but the scripts were by Jean Knott, who later drew and wrote strips in New York. The strip apparently did not last past 1912.
Amadee Wohlschlaeger had the longest tenure as Weatherbird artist: just short of fifty years. Wohlschlaeger was also the Post-Dispatch sports page cartoonist and drew for the Sporting News. Wohlschlaeger recalled that when barely out of his teens, "I was doing sports art for the Post and when Carlisle died, I stayed up all night and drew 12 Weatherbirds so I could put them on the feature editor's desk the next morning. The feature editor grabbed me later in the day and said, 'You've got the job.'" Wohlschlaeger retired in 1981 and lived to the age of 102, dying in 2014.
In his nearly half-century-long tenure, Wohlschlaeger's Weatherbird commented on events such as D-Day, the assassination of John F. Kennedy, and the Apollo 11 Moon landing, but his favorite cartoon appeared on October 2, 1944: it showed the Weatherbird dressed in St. Louis Browns uniform and standing on his head, in honor of the Browns' first and only American League pennant.
Albert Schweitzer drew the first Weatherbirds to appear in color consistently. Schweitzer drew the Weatherbird with pink feathers, although he had appeared darkly shaded before. A long-time Post-Dispatch veteran, his retirement came just five years after he took over the strip.
Dan Martin took over the strip in 1986. He eliminated the Weatherbird's emblematic cigars and drew a bird with a bit more of a beak (previous cartoonists had atrophied the beak to the point of flatness). Martin wrote the book The Story of the First 100 Years of the St. Louis Post-Dispatch Weatherbird.
Other manifestations
The Weatherbird inspired the name of John Hartford's "Weatherbird Reel".
Weatherbird brand shoes for children, using pictures of the Weatherbird in advertising, were offered starting in 1901 by the St. Louis-based Peters Shoe Company, later part of International Shoe which continued to base the brand's image on the Weatherbird until 1932 (the brand itself continued at least through the 1950s).
Two of the original windows from the Peters Shoe Company factory, featuring pictures of the Weatherbird, adorn the Weatherbird Cafe in the St. Louis Post-Dispatch office.
A life-size Weatherbird costume is used by the Post-Dispatch for promotions such as meet-and-greets at local bars.
References
Further reading
External links
Fictional birds
American mascots
Magazine mascots
Bird mascots
St. Louis Post-Dispatch
Gag-a-day comics
1901 comics debuts
American comics characters
Comics about anthropomorphic birds
Fictional characters introduced in 1901
Weather prediction
Weather presenters | Weatherbird | [
"Physics"
] | 1,062 | [
"Weather",
"Weather prediction",
"Physical phenomena"
] |
51,511,707 | https://en.wikipedia.org/wiki/Exotic%20felids%20as%20pets | A pet exotic felid, also called pet wild cat or pet non-domestic cat, is a member of the family Felidae (excluding the house cat and hybrids thereof) kept as an exotic pet.
Definition and differentiations
Hybrids of the domestic cat with non-domestic species (e. g. the Bengal cat or the Savannah cat) are not normally considered wild cats. While this distinction is often overlooked in the media and in the public eye, such cat breeds (especially the F5 and subsequent generations) are much closer to the domestic cat in terms of housing and husbandry requirements, behavior, and legality.
Unlike many other exotic pet species, wild cats usually cannot be kept indoors and require a large outdoor enclosure. This blurs the distinction between a wild cat being kept as an exotic pet and a private animal collection or menagerie. Usually, an enclosure meant for a pet exotic cat is built adjacent to the house in order to give the animal access into the living quarters.
Tame big cats kept by animal trainers (e.g. in circuses, private zoos or the film animal industry) are commonly mistaken for exotic pets. While the husbandry conditions and handling might be similar to a purely private setting, the common definition of a pet only includes animals kept for companionship or pleasure. Professional holders, breeders, or exhibitors do not meet this definition.
History
Exotic felids have a long tradition in human care. The ancient Egyptians kept servals in the same role as the African wildcat (the wild ancestor of the modern house cat). Cheetahs have been kept throughout the world, both as companions and as hunting aides. Caracals have likewise been tamed and trained, primarily by Arabian and Asian rulers. Other large cats were sometimes kept as companions as well, but were mostly limited to menageries owned by royal families; large cats have thus been kept by humans for hundreds of years.
Species kept as exotic pets
In general, small cat species are more commonly kept as exotic pets than larger ones. Big cats are substantially more expensive to maintain, pose a greater danger when being handled in direct contact, and may not always remain handleable when fully grown. This typically limits their keeping to professional animal trainers and zoo settings.
Servals and caracals
Servals and caracals have the longest history as human companions. Part of their popularity can be attributed to the fact that they readily hybridize with domestic cats. The resulting crosses (savannahs and caracats) inherit traits of both the domestic cat and the wild species. Like domestic cats they are sometimes kept as pest controllers.
Lynxes and bobcats
Nearly all species of the genus Lynx (with the exception of the Iberian lynx) are kept as exotic pets. Unlike other small cat species, they are not known to hybridize with the domestic cat.
Ocelots
Ocelots were popular as exotic pets in the 1950s and 1960s. The passage of the Endangered Species Act in the United States effectively ended their keeping outside of zoological facilities due to interstate animal-movement restrictions. Their popularity is also limited by their comparatively high aggression. No hybrids with domestic cats are known.
Pumas
While taxonomically considered small cats, pumas are comparable in size to some of the big cats. They are typically less aggressive and more affectionate than big cats, which has led to some popularity as exotic pets.
Cheetahs
The cheetah has a long history as a human companion. However, difficulties in breeding prevented this species from becoming a widespread exotic pet in modern times.
Leopards
Leopards were originally kept by royalty in Ancient Egypt; today they are popular exotic pets due to their relatively small size, exotic beauty and striking coats featuring rosettes and spots. Black leopards are also kept as pets, like their spotted counterparts.
Tigers
Tigers were once kept only by royalty; today they are popular exotic pets.
Lions
Lions, too, were once kept only by royalty. Today a permit is required to keep one in some areas; in others one can be bought from a pet shop, while elsewhere private ownership is prohibited. Lions are social animals; they accept their owners as part of the pride and form deep bonds.
See also
Exotic pet
Feline Conservation Federation
Lion taming
References
Pets
Wildlife
Cats
Cats as pets
Exotic pets
Animal law
Domesticated animals | Exotic felids as pets | [
"Biology"
] | 877 | [
"Animals",
"Wildlife"
] |
51,512,079 | https://en.wikipedia.org/wiki/ABC%20Software%20Metric | The ABC software metric was introduced by Jerry Fitzpatrick in 1997 to overcome the drawbacks of the lines of code (LOC) metric.
The metric defines an ABC score as a triplet of values that represent the size of a set of source code statements. An ABC score is calculated by counting the number of assignments (A), number of branches (B), and number of conditionals (C) in a program. ABC score can be applied to individual methods, functions, classes, modules or files within a program.
ABC score is represented by a 3-D vector < Assignments (A), Branches (B), Conditionals (C) >. It can also be represented as a scalar value, which is the magnitude of the vector < Assignments (A), Branches (B), Conditionals (C) >, and is calculated as follows:
|ABC| = √(A² + B² + C²)
By convention, an ABC magnitude value is rounded to the nearest tenth.
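As an illustrative sketch (Python chosen only for illustration; the metric itself is language-agnostic), the scalar magnitude and its conventional rounding can be computed as:

```python
import math

def abc_magnitude(assignments: int, branches: int, conditionals: int) -> float:
    """Scalar ABC score: the magnitude of the <A, B, C> vector,
    rounded to the nearest tenth by convention."""
    return round(math.sqrt(assignments ** 2 + branches ** 2 + conditionals ** 2), 1)

# The vector <5, 11, 9> (5 assignments, 11 branches, 9 conditionals),
# used as an example elsewhere in this article:
print(abc_magnitude(5, 11, 9))  # → 15.1 (sqrt(25 + 121 + 81) ≈ 15.066)
```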
History
The concept of measuring software size was first introduced by Maurice Halstead from Purdue University in 1975. He suggested that every computer program consists mainly of tokens: operators and operands. He concluded that a count of the number of unique operators and operands gives us a measure of the size of the program. However, this was not adopted as a measure of the size of a program.
Lines of code (LOC) was another popular measure of the size of a program. LOC was not considered an accurate measure of program size because two programs with identical functionality may have different numbers of lines depending on the style of coding.
Another metric, the function point (FP) metric, was introduced to calculate the number of user input and output transactions. However, function point calculations give information neither about the functionality of the program nor about the routines involved in it.
The ABC metric is intended to overcome the drawbacks of the LOC, FP and token (operation and operand) counts. However, an FP score can also be used to supplement an ABC score.
Though the author contends that the ABC metric measures size, some believe that it measures complexity. The ability of the ABC metric to measure complexity depends on how complexity is defined.
Definition
The three components of the ABC score are defined as following:
Assignment: storage or transfer of data into a variable.
Branches: an explicit forward program branch out of scope.
Conditionals: Boolean or logic test.
Since imperative languages such as C, C++ and Java consist essentially of operations like variable assignments, function calls and condition tests, the ABC score has these three components.
If the ABC vector is denoted as <5, 11, 9> for a subroutine, it means that the subroutine has 5 assignments, 11 branches and 9 conditionals. For standardization purposes, the counts should be enclosed in angle brackets and written in the same order, per the notation <A, B, C>.
It is often more convenient to compare source code sizes using a scalar value. The individual ABC counts are distinct so, per Jerry Fitzpatrick, we consider the three components to be orthogonal, allowing a scalar ABC magnitude to be computed as shown above.
Scalar ABC scores lose some of the benefits of the vector. Instead of computing a vector magnitude, the weighted sum of the vectors may support more accurate size comparison. ABC scalar scores should not be presented without the accompanying ABC vectors, since the scalar values are not the complete representation of the size.
Theory
The specific rules for counting ABC vector values should be interpreted differently for different languages due to semantic differences between them.
Therefore, the rules for calculating ABC vector slightly differ based on the language. We define the ABC metric calculation rules for C, C++ and Java below. Based on these rules the rules for other imperative languages can be interpreted.
ABC rules for C
The following rules give the count of Assignments, Branches, Conditionals in the ABC metric for C:
Add one to the assignment count when:
Occurrence of an assignment operator (=, *=, /=, %=, +=, -=, <<=, >>=, &=, |=, ^=).
Occurrence of an increment or a decrement operator (++, --).
Add one to branch count when:
Occurrence of a function call.
Occurrence of any goto statement which has a target at a deeper level of nesting than the level of the goto.
Add one to condition count when:
Occurrence of a conditional operator (<, >, <=, >=, ==, !=).
Occurrence of the following keywords (‘else’, ‘case’, ‘default’, ‘?’).
Occurrence of a unary conditional operator.
ABC rules for C++
The following rules give the count of Assignments, Branches, Conditionals in the ABC metric for C++:
Add one to the assignment count when:
Occurrence of an assignment operator (excluding constant declarations and default parameter assignments) (=, *=, /=, %=, +=, -=, <<=, >>=, &=, |=, ^=).
Occurrence of an increment or a decrement operator (prefix or postfix) (++, --).
Initialization of a variable or a nonconstant class member.
Add one to branch count when:
Occurrence of a function call or a class method call.
Occurrence of any goto statement which has a target at a deeper level of nesting than the level of the goto.
Occurrence of ‘new’ or ‘delete’ operators.
Add one to condition count when:
Occurrence of a conditional operator (<, >, <=, >=, ==, !=).
Occurrence of the following keywords (‘else’, ‘case’, ‘default’, ‘?’, ‘try’, ‘catch’).
Occurrence of a unary conditional operator.
ABC rules for Java
The following rules give the count of Assignments, Branches, Conditionals in the ABC metric for Java:
Add one to the assignment count when:
Occurrence of an assignment operator (excluding constant declarations and default parameter assignments) (=, *=, /=, %=, +=, -=, <<=, >>=, &=, |=, ^=, >>>=).
Occurrence of an increment or a decrement operator (prefix or postfix) (++, --).
Add one to branch count when
Occurrence of a function call or a class method call.
Occurrence of a ‘new’ operator.
Add one to condition count when:
Occurrence of a conditional operator (<, >, <=, >=, ==, !=).
Occurrence of the following keywords (‘else’, ‘case’, ‘default’, ‘?’, ‘try’, ‘catch’).
Occurrence of a unary conditional operator.
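The counting rules above can be approximated mechanically. The following is a deliberately naive, regex-based sketch for Java-like code (a hypothetical illustration, not a real tool: a correct counter needs a proper parser, since regexes mishandle comments, generics such as List<String>, and many operator ambiguities):

```python
import re

JAVA_KEYWORDS = {"if", "for", "while", "switch", "return", "catch", "synchronized"}

def abc_count(source: str) -> tuple:
    """Rough <A, B, C> vector for a Java-like snippet (illustrative only)."""
    code = re.sub(r'"[^"]*"', '""', source)  # ignore operators inside string literals
    # A: assignment operators, increments and decrements
    assignments = len(re.findall(
        r'\+\+|--|(?:[+\-*/%&|^]|<<|>>>?)=|(?<![=<>!+\-*/%&|^])=(?!=)', code))
    # B: function/method calls (control keywords excluded) and 'new'
    calls = [m.group(1) for m in re.finditer(r'\b(\w+)\s*\(', code)]
    branches = sum(1 for name in calls if name not in JAVA_KEYWORDS)
    branches += len(re.findall(r'\bnew\b', code))
    # C: conditional operators and keywords
    conditionals = len(re.findall(
        r'==|!=|<=|>=|<|>|\?|\belse\b|\bcase\b|\bdefault\b|\btry\b|\bcatch\b', code))
    return assignments, branches, conditionals

snippet = """
int m = a;
if (a < b) {
    m = Math.max(m, b);
}
count++;
"""
print(abc_count(snippet))  # → (3, 1, 1)
```

Here the two `=` occurrences and the `++` give A = 3, the call to `Math.max` gives B = 1 (the `if` keyword is excluded), and the `<` comparison gives C = 1.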
Applications
Independent of coding style
Since the ABC metric counts fundamental operations (data storage, branching and conditional testing) rather than lines of text, it is independent of the user's style of coding.
Project time estimation
An ABC score helps in estimating the amount of time needed to complete a project. This can be done by roughly estimating the ABC score for the whole project and measuring the ABC score produced in a typical day; dividing the former by the latter gives the approximate number of days required for completion.
Bug rate calculation
The bug rate was originally calculated as the number of bugs per LOC. However, LOC is not a reliable measure of program size because it depends on the style of coding. A more accurate way of measuring the bug rate is the number of bugs per ABC score.
Program comparison
Programs written in different languages can be compared with the help of ABC scores because most languages use assignments, branches and conditional statements.
The information on the count of the individual parameters (number of assignments, branches and conditions) can help classify the program as ‘data strong’ or ‘function strong’ or ‘logic strong’. The vector form of an ABC score can provide insight into the driving principles behind the application, whereas the details are lost in the scalar form of the score.
Linear metric
ABC scores are linear, so any file, module, class, function or method can be scored. For example, the (vector) ABC score for a module is the sum of the scores of its sub-modules. Scalar ABC scores, however, are non-linear.
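A quick numeric illustration of this point (Python, with arbitrarily chosen values): vector scores of sub-modules add component-wise, but the magnitude of the sum is not the sum of the magnitudes:

```python
import math

def magnitude(v):
    """Scalar score of an <A, B, C> vector, rounded to the nearest tenth."""
    return round(math.sqrt(sum(x * x for x in v)), 1)

sub1 = (1, 2, 2)   # magnitude 3.0
sub2 = (2, 3, 6)   # magnitude 7.0
module = tuple(a + b for a, b in zip(sub1, sub2))  # vector sum: (3, 5, 8)

print(magnitude(sub1) + magnitude(sub2))  # 10.0
print(magnitude(module))                  # 9.9: scalar scores do not add up
```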
See also
Software complexity
Halstead complexity measures
Cyclomatic complexity
Synchronization complexity
Software metric
References
External links
Applying the ABC Metric to C, C++, and Java
ABC Metric
Size and Complexity Rules
Deciphering Ruby Code Metrics
The ABC metric
Software metrics | ABC Software Metric | [
"Mathematics",
"Engineering"
] | 1,804 | [
"Software engineering",
"Quantity",
"Metrics",
"Software metrics"
] |
51,512,114 | https://en.wikipedia.org/wiki/Zink%20%28printing%29 | Zink (stylised as ZINK, a portmanteau of zero and ink) is a full-color printing technology for digital devices that does not require ink cartridges and prints in a single pass.
The printing technology and its thermal paper are developed by Zink Holdings LLC, a U.S. company, with offices in Edison, New Jersey, and Billerica, Massachusetts, and a manufacturing facility in Whitsett, North Carolina. Zink Holdings makes all the paper, and makes a printer for printing labels and other designs on rolls of Zink zRoll. It licenses its technology to other companies that make compact photo printers, and combined camera / compact photo printers. Key licensees include HP, Lifeprint, Prynt, and C&A Global.
The Zink technology started as a project inside Polaroid Corporation in the 1990s, which spun out Zink as a fully independent company in 2005.
Technology
The paper has several layers: a backing layer with optional pressure sensitive adhesive, heat-sensitive layers with cyan, magenta and yellow dyes in colorless form, and an overcoat.
The color addressing is achieved by controlling the heat pulse length and intensity.
The color-forming layers contain colorless crystals of amorphochromic dyes. These dyes form microcrystals of their colorless tautomers, which convert to the colored form by melting and retain color after resolidification.
The yellow layer is the topmost one, sensitive to short heat pulses of high temperature. The magenta layer is in the middle, sensitive to longer pulses of moderate temperature. The cyan layer is at the bottom, sensitive to long pulses of lower temperature. The layers are separated by thin interlayers, acting as heat insulation, moderating the heat throughout.
Zink Holdings LLC
Zink Holdings LLC is a technology company headquartered in Billerica, Massachusetts (formerly Bedford, Massachusetts), founded in 2005. It develops what it calls "ZINK Zero Ink technology" and "ZINK Paper". Zink's research and development labs and headquarters are in Billerica, with a paper manufacturing plant in Whitsett, North Carolina (using staff and facilities previously used by Konica Minolta).
Zink started as one of two major new technologies being developed inside Polaroid Corporation in Cambridge, Massachusetts, in the 1990s, with 100 researchers working on it. Polaroid Corporation spun out Zink as a fully independent company in 2005, with 50 of its staff moving to it. Zink first unveiled its technology in January 2007, at IDG's DEMO 07 conference.
Zink makes all the paper, along with a printer for printing labels and other designs on rolls of Zink zRoll; and licenses its technology to other companies that make compact photo printers, and combined camera / compact photo printers that print photographs onto mostly 2×3” (about 5×8 cm) sheets of Zink Paper. Alps Electric manufactures the Zink print engines, and Foxconn and Lite-On build Zink-based products for major consumer-products companies. Key licensees include HP, Lifeprint, Prynt, and C&A Global.
Products
References
External links
Behind the Scenes at Zink: Where Color Magic Happens
Computer printers
Instant photography
Polaroid cameras
Printing materials
Coated paper
Photography equipment | Zink (printing) | [
"Physics"
] | 678 | [
"Printing materials",
"Materials",
"Matter"
] |
51,512,448 | https://en.wikipedia.org/wiki/Building%20typology | Building typology refers to classifying and documenting buildings according to their essential characteristics. In architectural discourse, typological classification tends to focus on building function (use), building form, or architectural style. A functional typology collects buildings into groups such as houses, hospitals, schools, shopping centers, etc. A formal typology groups buildings according to their shape, scale, site placement, etc. (Formal building typology is also sometimes referred to as morphology, from the Greek morphē, "form".) Lastly, a stylistic typology borrows from art history and identifies building types by their expressive traits, e.g. Doric, Ionic, Corinthian (subtypes of classical), baroque, rococo, gothic, arts and crafts, international, post-modern, etc.
The three typological practices are interlinked. Namely, each functional type consists of many formal types. For example, the residential functional type may be split into formal categories such as the high rise tower, single family home, duplex, or townhouse. Similarly, while certain stylistic traits may be considered superfluous to a formal building type, style and form are nonetheless related since the conditions (political, economic, technological) that give rise to stylistic traits also enable or encourage certain forms to be expressed. In all three cases the typology serves as a framework for understanding the essential qualities of buildings on conceptually equal footing, apart from their individual, contingent characteristics.
Functional Typology
[More explanation needed.]
See a list of building types by use.
Stylistic Typology
[More explanation needed.]
See a list of architectural styles.
Formal Typology
History
Autonomous building types arose partly from the general Enlightenment predilection for categorization, a prelude to scientific discovery. At first types were intended as ideal models, which could be variously copied. In this sense types were commonly used forms (a basilica, for example), adapted over time in new buildings with quite different uses: from Roman fora to early church forms (St. Peter's Basilica), to 19th century train stations. The fact that these forms are very similar and are derived from each other is an important way of understanding typology: types are evolved over time and therefore can convey a sense of history or cultural continuity. The idea of building types as formal configurations was enhanced by J.N.L. Durand, who developed two important works: the Parallele (1799), a huge, handsome book that reproduced plans, elevations and sections of historic buildings at the same scale. He categorized them by formal types, so that their basic similarities could be recognized. Durand followed up with a second book that manipulated and reconfigured the classical elements of architecture—columns, walls, etc.—to adapt them to new, emerging uses. Durand's system, a language of architecture, demonstrated one essential characteristic of types: a way of designing that was neither entirely free of constraint nor overly prescribed.
Documenting a Formal Building Type
Documenting a formal building type is similar to any typological process insofar as the aim is to identify the minimum number of characteristics which make that type distinct. In a formal typology, building types are usually distinguished by their basic shape, site placement, and scale, but not by their specific architectural style, technology, chronology, geographical location or use. For example, a cursory formal analysis of the townhouse will identify the following "minimum essential formal characteristics." In contrast with single family homes that share no walls with adjacent buildings, the townhouse, or rowhouse, shares both party walls (save the corner lot) with its neighbors. While many variations of this formal type are found around the world, each the product of their local environment (color, material, height, fenestration, etc), they nonetheless share the qualities that individual units are placed side-by-side, between two and five stories, with narrow fronts on deep lots, accessed via separate entrances that are setback minimally from the street.
This procedure can be applied to most buildings. For example, several residential types exist in the US, such as garden apartments, townhouses, and high-rise housing. Each of these may have many subtypes. The brownstones in Harlem are different from the rowhouses in Brooklyn. And the large mansions commonly found on corner lots in many cities are distinct from the smaller houses that were built later in between them, even though both are types of "single family home." Anyone can identify types simply by observing the common buildings in a place. Architectural and urban designers document types more thoroughly by measuring them, dating them, noting similar changes to the type that arise over time, and identifying their recurring locations in the city.
Application to History
Historians, anthropologists, and architectural historians use the documentation of type as a key to other characteristics in a city, for example, events, political control, or economic changes. As theory tells us, when a type evolves over some time, this is an indication that conditions in the city have changed. Anne Moudon documents changes in the types of an Alamo Square neighborhood to tell a kind of architectural, cultural and economic history. She also identifies the block, lot and street pattern as key to typological continuity. Multiple studies using this method have identified important building types, for example Chinese shophouses, Shanghai's Shikumen housing, terrace housing in Great Britain, courtyard buildings in France, and the atrium houses found in many hot climates. Atrium types are also important for mosques, shopping malls, and some hotels.
Application to Building Design
Building types are critical to architects because they are a starting point for designing. One need not reinvent the form if a common building type, say an office building, is wanted. Most architects develop a sense of common building types over time, even without acknowledging their importance. Architects know the approximate dimensions, bulk, site placement, and internal circulation that dictates most types. This allows them to work quickly to determine the parts of the design problem which are unique: material, orientation, structure, specific dimensions, entrance, and so on. One school of thought in Italy, started by Saverio Muratori, recognizes the importance of typology in providing continuity in the city. These architects have been influential in recognizing the role of type for modern architecture, where the newest buildings are encouraged to actively assimilate many typological characteristics, without imitating historical styles.
"A Pattern Language"
A unique example of formal typological classification is A Pattern Language developed by Christopher Alexander. While Alexander does not focus on classifying complete buildings by type, he instead breaks down buildings into their components and then classifies those components by their essential qualities, which he calls "patterns." [More explanation needed.]
Application to Urban Design
Common types are the building blocks of the city. Usually, a neighborhood streets and lots are laid out so that the common type can be built there. This occurs today in suburban subdivisions, but it has been a pattern in history, as well. This combination of types, streets and lots is called an urban tissue, or a plan unit. When studying a city, a designer identifies the common tissue patterns in place and may decide to link to them, imitate them, or otherwise recognize them as an historical artifact. A movement of urban theorists and practitioners in the US, New Urbanism, has identified building typology as a key to defining more user-friendly places. In trying to preserve neighborhoods or building new ones, building types once again become the building blocks of the city, and may be codified in law as form-based codes.
References
Architectural education | Building typology | [
"Engineering"
] | 1,568 | [
"Architectural education",
"Architecture"
] |
51,512,572 | https://en.wikipedia.org/wiki/Rankenheim | Rankenheim is a mansion on Zemminsee in Groß Köris, Brandenburg, approx. 50 kilometres south of Berlin. In Nazi Germany it was used as a training camp for teachers. After 1945 it became a temporary hospital and eventually, during the GDR era, a home for "maladjusted children"; it is now a youth village. The surrounding district of Groß Köris is called Rankenheim.
History
Friedrich Wilhelm Ranke was a Prussian councillor and brother of the historian Leopold von Ranke. From 1843 onward he bought land in Groß Köris and neighbouring communities, where he built, among other things, a brick factory and a bakery, and cut peat. In 1865 he built a mansion, stables and outbuildings on a 15-hectare parcel on the banks of Zemminsee. After Ranke's death on 16 June 1871, the property was transferred to a community of heirs, who sold it. After several changes of ownership, Rankenheim fell to Dresdner Bank.
Third Reich era
In 1935 Dresdner Bank transferred Rankenheim to the "Jubilee Foundation for Education", which repurposed it as a training camp for the "Central Institute of Education". Initially the Institute ran so-called "National Political Courses" for students, as were held in many places in Germany: camp schools for the indoctrination and disciplining of youth. Everyday life in the camps "was strictly regulated and, as with all forms of Nazi camp education, followed a detailed service plan in which military training and ideological indoctrination played a prominent role."
At the same time Rankenheim served as a "Reich seat for teacher instruction" of the Central Institute, which conducted teacher training camps on behalf of the Nazi Ministry of Education. The first of these camps took place in October 1935 under the direction of Hans Reinerth and Alfred Pudelko. In later camps, Rudolf Benze and Bernhard Kummer, among others, contributed their racist ideas, but well-known lecturers such as the Indo-Germanist Kurt Stegmann von Pritzwald also lectured here.
Initially these camps were also held in Essen-Kettwig in western Germany. When Germany went to war, Rankenheim became the sole venue for the teacher camps. The aim of these camps was to retrain all teachers in the National Socialist ideology of education and to reorganize the school system. In multi-day events (usually eight days) with roll calls, fatigue duty, sport, marches (or excursions) and (relatively short) lectures, 80–100 teachers from all over Germany were trained at a time. Topics included subjects such as "military training in math and science classes", whose goal was summarized as follows: "It is always important to focus the students - according to their age and their kind - on the important things for the life and the self-assertion of the German people in its small Lebensraum and thereby evoke their joyful willingness to full commitment to the maintenance of German soil and life."
From autumn 1941 the Prague branch of the Central Institute organized the retraining of Czech teachers, which was likewise carried out (with a higher proportion of ethnic-racist lectures) in teacher camps at Rankenheim.
Due to the war, the library and the archives of the Central Institute were moved to Rankenheim in August 1943; this included the entire holdings of the orthopaedagogy archive in Berlin. From February 13, 1945 Rankenheim was officially the "main office" of the Central Institute, even though operations had practically come to a standstill and were run from Rudolf Benze's apartment in Potsdam.
A very large number of teachers were indoctrinated in these training camps: by 1941, 10,000 teachers had been retrained in over 150 camps.
At the end of the war Rankenheim temporarily served as a hospital. After the liberation from the National Socialist dictatorship, an orphanage was established there in 1945; from 1947 the government operated Rankenheim as a state protectory with 100 beds. The documents of the Central Institute were removed and mostly burned.
GDR-Era
From 1952 "maladjusted" boys were housed in Rankenheim. In the following period the home's designation changed several times in official documents, from "special home" to "auxiliary boarding school" to "home for maladjusted, formable moronic children". The capacity was 75 places; the responsible authority was the administrative district of Königs Wusterhausen.
In November 1965, Rankenheim was incorporated into the "Kombinat der Sonderheime für Psychodiagnostik und pädagogisch-psychologische Therapie" (combine of special homes for psychodiagnostics and pedagogical-psychological therapy) as a special home for "maladjusted" "auxiliary pupils" or those with a "behavioural disorder". Up to 72 boys aged 7 to 15 were housed there in six groups, each group corresponding to a school class. Schooling took place in the group rooms; in 1979 a school with a gymnasium was also built at Rankenheim. The main therapeutic instrument was the milieu: "the influence of external circumstances at the home and the daily routine of the children in care".
In 1988 the combine was transformed into the "Pedagogical Medical Centre" (PMZ), and Rankenheim became an institution of the Bezirk Potsdam.
Die Wende
After the end of the GDR, Rankenheim, like the other former facilities of the combine, was placed under the Brandenburg Ministry of Education, Youth and Sport. In 1992 the Ministry reinstated the "Foundation Great Orphanage Potsdam" with the task of reorganizing the sponsorship of these facilities. For this purpose, the Foundation in 1994 founded the "GFB-gemeinnützige Gesellschaft zur Förderung Brandenburger Kinder und Jugendlicher mbH" (charitable society for the promotion of children and adolescents in Brandenburg), which took over the sponsorship of eight institutions, including Rankenheim.
Rankenheim was rebuilt and conceptually reorganized, and continues as a children's and youth village under German child and youth services law. Today, in addition to 33 residential places, Rankenheim houses an office of the foster-children service and a public school.
References
Andreas Kraas: Lehrerlager 1932–1945. Politische Funktion und pädagogische Gestaltung. Bad Heilbrunn: Klinkhardt 2004. .
Bericht: Aufarbeitung der Heimerziehung der Neuen Bundesländer und der Bundesregierung vom 26. März 2012 (PDF; 2,5 MB)
Markus Vette: Wilhelm Ranke (1804–1871): - Skizzen eines Lebensweges, der mehr als eine Familienangelegenheit Leopold von Rankes ist. Rastenberg: Eugenia Verlag Markus Vette 2014. .
Notes
Child welfare
Total institutions
Education in Nazi Germany
Houses completed in 1865 | Rankenheim | [
"Biology"
] | 1,447 | [
"Behavioural sciences",
"Behavior",
"Total institutions"
] |
51,512,680 | https://en.wikipedia.org/wiki/Psoralea%20glandulosa | Psoralea glandulosa is a herb species in the genus Psoralea found in Peru and Chile in South America and also in the United States.
Psoralea glandulosa was described by Carl Linnaeus and published in Species Plantarum 2: 1075. 1763. Plants of the World Online treats the species as Otholobium glandulosum, which it regards as an unplaced name.
References
Bibliography
Bouton, L. (1857). Trans. Roy. Soc. Arts Mauritius n. s. 1: 1–177 Medicinal plants...
List Based Record (1986). U.S. Soil Conservation Service.
Marticorena, C., y M. Quezada (1985). Gayana, Bot. 42: 1–157 Catálogo de la flora vascular de Chile.
Psoraleeae
Plants described in 1753
Taxa named by Carl Linnaeus
Unplaced names | Psoralea glandulosa | [
"Biology"
] | 186 | [
"Biological hypotheses",
"Controversial taxa",
"Unplaced names"
] |