| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
53,875,120 | https://en.wikipedia.org/wiki/3C-model | The 3C-model of motivation ("3C" stands for the "three components of motivation") was developed by Hugo M. Kehr of UC Berkeley. The 3C-model is an integrative, empirically validated theory of motivation that can be used for systematic motivation diagnosis and intervention.
Main assumptions
The three components head, heart and hand
"3C" stands for the three components of motivation, which can be illustrated as three partially overlapping circles (see Fig. 1). In psychological terminology, the three components are explicit (self-attributed) motives, implicit (unconscious) motives, and perceived abilities. For practical applications, the metaphor "head", "heart" and "hand", which goes back to Johann Heinrich Pestalozzi, is being used.
Head represents our rationally derived intentions, our goals and the commitment to enact a certain action.
Heart represents the emotional sphere; the fun and pleasure associated with an activity; unconscious needs and motives, but also fears and bellyaches underlying an activity.
Hand represents skills and abilities, action-related knowledge and experiences with respect to the activity at hand.
Interplay of the three components
Fulfilment of the components head and heart results in intrinsic motivation: the person is fully concentrated and likes to perform the activity at hand. In this case, it does not matter whether the component hand is also fulfilled: skills and abilities are not a prerequisite for intrinsic motivation.
Optimal motivation results from all three components being fulfilled (represented by the overlap section of the three circles in Figure 1). Here, the person is intrinsically motivated and also has all the skills and abilities needed. This situation is associated with the experience of flow. However, if one of the two components head or heart is not fulfilled (i.e., the person is lacking cognitive support for the activity or experiences unpleasant belly-aches), the person will struggle when performing the activity. This situation may be experienced as "demotivation". Here, willpower (volition) is needed in order to perform the activity and suppress aversion or doubt. Volitional self-control can be momentarily successful – but it also induces a loss of energy and can, in the long run, lead to over-control and health problems.
Two kinds of volition need to be distinguished: Volition Type 1 is needed for tasks which are supported by the head but lack support from the heart. This is the case, for instance, when important, but aversive tasks need to be fulfilled. Volition Type 2 is needed for tasks that are supported by the heart but not the head; such situations may be experienced as temptation or fear.
A lack of support from the component hand requires problem-solving mechanisms to compensate for one's missing skills and abilities, for instance by asking others for help.
Application of the 3C-Model
In practical application, for instance in self-management, in coaching, in leadership training, or in change management, the 3C-model can be used for systematic diagnosis of motivation deficits and intervention.
Motivation diagnosis
For diagnostic purposes, fulfilment of the three components of motivation can be assessed with the so-called 3C-check. The following questions can be used (see Figure 2):
Head: "Is this activity really important to me?"
Heart: "Do I really like this activity?"
Hand: "Am I good at this activity?"
Based on the answers to these questions, appropriate support can be sought (see Figure 3).
Intervention
Interventions based on the 3C-check can best be illustrated with an example. Let us assume a sales manager has performed a 3C-check with her sales representative with respect to a guided sales pitch.
If the 3C-check shows that the sales pitch in use is supported by the components head and heart, but not the component hand (this combination is represented by Section A in Fig. 3), it should first be discussed whether the lack of skill and ability is only subjective (i.e., felt by the sales representative) or also objective. In case of an objective skill and ability deficit (e.g., when the co-worker is not yet familiar with the guidelines or the calculation methods used), measures like training, coaching, or collegial advice may be appropriate. Moreover, other colleagues may be asked to assist the sales representative and help overcome the hand-related obstacle. In case of a subjective skill and ability deficit, the manager could try to increase the sales representative's self-efficacy by providing positive feedback about her employee's prior performance.
The 3C-check might also reveal lacking support by the component head (represented by sector B in Fig. 3): the sales representative may not be convinced that the guided sales pitch in use is instrumental, or she may prefer other sales channels. Here, the supervisor is well advised to try to increase her employee's cognitive support. For instance, she could convince her with arguments, set extrinsic rewards (e.g., a bonus) or solve goal conflicts by reprioritizing goals.
But what if the 3C-check shows that the components head and hand are fulfilled, but not the component heart (represented by sector C in Fig. 3)? That is, what if the sales representative finds the task important and instrumental and has all the required skills and abilities, but still does not like her task?
Perhaps she does not like following the rigid sales pitch guideline, she may not like to visit customers at home, or she may be anxious about receiving a negative response from her customers. Here, her manager is well advised not to ignore her lacking emotional support but to seek possible solutions. She could try to modify the task so that it better matches her employee's underlying motives or to find motive-congruent incentives. Kehr and von Rosenstiel call this "metamotivation". For example, if her employee has a strong need for affiliation, she could try to preferably allocate uncomplicated, friendly customers to her, or to organize the sales pitch as a team event. In addition, the manager could assist her employee in finding a personal vision which matches her employee's motives. These measures, if successful, arouse the component heart and increase one's emotional support. Otherwise, the manager may work with her sales representative to find effective volitional strategies to overcome her motivational barriers. Kehr and von Rosenstiel call this "metavolition". Here, it seems advisable to reduce overcontrol (e.g., negative fantasies, excessive impulse control, too much planning) and rather motivate oneself by reframing (positive fantasies) or by changing aversive framework conditions, for instance by conducting the sales pitch in a neutral environment.
If the 3C-check shows support from all three components (represented by the overlap section of the three circles in Fig. 3), the manager can delegate the task to her co-worker and trust her co-worker's self-management. However, she might still want to keep in touch with her employee and be wary that the boundary conditions could become aversive or her co-worker's motivation might change. Further, the manager should also consider future challenges for her co-worker.
Scientific background
Development
Initially, the 3C-model was published as the "compensatory model of work motivation and volition". The original title referred to one of the central assumptions of the model, namely that volition compensates for insufficient motivation. Because of the potential confusion with "worker compensation", however, the name was changed to "3C-model."
The notion of three independent components of motivation is based on McClelland's differentiation of "motives, skills, and values".
Empirical research
The 3C-model has attracted considerable empirical research conducted at the University of Munich, UC Berkeley, MGSM in Sydney and the Technische Universität München, amongst others. An overview of the research regarding the 3C-model is given by Kehr (2014). Key results are:
Certain education styles are conducive for the development of discrepancies between head and heart, so-called motive discrepancies.
Discrepancies between head and heart impair well-being and lead to burnout.
Discrepancies between head and heart reduce one's willpower.
Fear motives, e.g. fear of rejection, reduce one's willpower and well-being.
Flow results from all three components of the 3C-model being fulfilled.
Footnotes
Literature
Kehr, H. M. (2004). Integrating implicit motives, explicit motives, and perceived abilities: The compensatory model of work motivation and volition. Academy of Management Review, 29(3), 479–499.
External links
Motivate yourself with visions, goals and willpower: Hugo M. Kehr at TEDxTUM (online)
Motivation
Psychological theories
Leadership | 3C-model | Biology | 1,848 |
93,492 | https://en.wikipedia.org/wiki/Distributed%20Component%20Object%20Model | Distributed Component Object Model (DCOM) is a proprietary Microsoft technology for communication between software components on networked computers. DCOM, which originally was called "Network OLE", extends Microsoft's COM, and provides the communication substrate under Microsoft's COM+ application server infrastructure.
The extension of COM into Distributed COM was due to extensive use of DCE/RPC (Distributed Computing Environment/Remote Procedure Calls) – more specifically Microsoft's enhanced version, known as MSRPC.
In terms of the extensions it added to COM, DCOM had to solve the problems of:
Marshalling – serializing and deserializing the arguments and return values of method calls "over the wire".
Distributed garbage collection – ensuring that references held by clients of interfaces are released when, for example, the client process crashes or the network connection is lost.
Combining significant numbers of objects in the client's browser into a single transmission in order to minimize bandwidth utilization.
One of the key factors in solving these problems is the use of DCE/RPC as the underlying RPC mechanism behind DCOM. DCE/RPC has strictly defined rules regarding marshalling and who is responsible for freeing memory.
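From a client's perspective, DCOM preserves the local COM activation model: the client names the remote host in a COSERVERINFO structure, and the returned interface pointer is a proxy whose calls are marshalled over MSRPC. The following is a minimal C++ sketch of that pattern using the standard CoCreateInstanceEx API; the CLSID, hostname, and requested interface are placeholders for illustration, not components described in this article.

```cpp
// Minimal sketch of remote object activation with DCOM (hypothetical
// component and hostname; link against Ole32.lib and Uuid.lib).
#include <windows.h>
#include <objbase.h>

int main() {
    // Initialize COM on this thread.
    HRESULT hr = CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    if (FAILED(hr)) return 1;

    // Name the machine that should host the object.
    COSERVERINFO serverInfo = {};
    serverInfo.pwszName = const_cast<wchar_t*>(L"remote-host.example.com");

    // Ask for one interface on the new object; IID_IUnknown keeps the
    // example generic.
    MULTI_QI mqi = {};
    mqi.pIID = &IID_IUnknown;

    // Placeholder CLSID; a real client would use a registered class ID.
    CLSID clsid = {};

    // The remote machine's service control manager creates the object and
    // DCOM marshals the interface pointer back to this process as a proxy.
    hr = CoCreateInstanceEx(clsid, nullptr, CLSCTX_REMOTE_SERVER,
                            &serverInfo, 1, &mqi);
    if (SUCCEEDED(hr) && SUCCEEDED(mqi.hr)) {
        // Method calls made through mqi.pItf travel over MSRPC; releasing
        // the proxy lets distributed garbage collection reclaim the object.
        mqi.pItf->Release();
    }

    CoUninitialize();
    return 0;
}
```

Failures such as an unreachable host surface as ordinary HRESULT error codes, which is one reason the programming model looks identical for local and remote objects.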
DCOM was a major competitor to CORBA. Proponents of both of these technologies saw them as one day becoming the model for code and service-reuse over the Internet. However, the difficulties involved in getting either of these technologies to work over Internet firewalls, and on unknown and insecure machines, meant that normal HTTP requests in combination with web browsers won out over both of them. Microsoft, at one point, attempted to remediate these shortcomings by adding an extra HTTP transport to DCE/RPC called ncacn_http (Network Computing Architecture connection-oriented protocol).
DCOM was publicly launched as a beta for Windows 95 on September 18, 1996.
DCOM is supported natively in all versions of Windows starting from Windows 95, and in all versions of Windows Server since Windows NT 4.0.
Security improvements
As part of an initiative that began at Microsoft under the Secure Development Lifecycle to re-architect insecure code, DCOM saw some significant security-focused changes in Windows XP Service Pack 2.
In response to a security vulnerability reported by Tencent Security Xuanwu Lab in June 2021, Microsoft released security updates for several versions of Windows and Windows Server, hardening access to DCOM.
Alternative versions and implementations
COMsource is a Unix-based implementation of DCOM, allowing interoperability between different platforms. Its source code is available, along with full documentation, sufficient to use and also to implement an interoperable version of DCOM. COMsource comes directly from the Windows NT 4.0 source code, and includes the source code for a Windows NT Registry Service.
In 1995, Digital and Microsoft announced Affinity for OpenVMS (also known as NT Affinity) which was intended to allow OpenVMS to serve as the persistence layer for Windows NT client-server applications. As part of this initiative, an implementation of the Distributed Component Object Model (DCOM) was added to OpenVMS Alpha. In order to support DCOM, VMS was provided with implementations of the Windows Registry, NTLM authentication, and a subset of Win32 APIs needed to support COM. DCOM was first added to OpenVMS V7.2-1 for the Alpha. A similar implementation of DCOM was added to Digital Unix as part of the AllConnect program.
TangramCOM was a separate project from Wine, focusing on implementing DCOM on Linux-based smartphones.
See also
ActiveX
Dynamic Data Exchange (DDE)
.NET Remoting
OLE for Process Control
References
External links
Distributed Component Object Model Protocol -- DCOM/1.0
The Open Group's COMsource
TangramCOM
Component-based software engineering
Inter-process communication
Windows communication and services
Object models
Object request broker | Distributed Component Object Model | Technology | 800 |
26,424,495 | https://en.wikipedia.org/wiki/Umbo%20%28mycology%29 | An umbo is a raised area in the center of a mushroom cap. Caps that possess this feature are called umbonate. Umbos that are sharply pointed are called acute, while those that are more rounded are broadly umbonate. If the umbo is elongated, it is cuspidate, and if the umbo is sharply delineated but not elongated (somewhat resembling the shape of a human areola), it is called mammilate or papillate.
References
Fungal morphology and anatomy
Mycology | Umbo (mycology) | Biology | 109 |
35,741,623 | https://en.wikipedia.org/wiki/Miguel%20Galuccio | Miguel Matías Galuccio (born April 23, 1968) is an Argentine petroleum engineer and executive. He was appointed CEO of the state energy firm YPF upon its renationalization on May 5, 2012. He is also the CEO of Vista Oil and Gas.
Biography
Galuccio was born in Paraná, Entre Ríos Province, in 1968. He enrolled at the Institute of Technology of Buenos Aires (ITBA) and graduated with a degree in petroleum engineering in 1994.
Miguel Galuccio is a globally accomplished energy industry executive who has served in a variety of key leadership roles in the energy industry.
He is the founder, Chairman and CEO of Vista, the first NYSE-listed independent energy company operating in the world-class Vaca Muerta formation in Argentina.
He serves as an active board member of Schlumberger, the world’s largest oilfield services company, where he also began his international career. He held several senior positions at Schlumberger prior to serving as Chairman and CEO of YPF, from May 2012 to April 2016.
Powered by his entrepreneurial vision and drive, Galuccio transformed a unique idea about energy production and transformation into a disruptive, thriving, results-oriented, safe, efficient and sustainable company, which continues moving forward and exploring new horizons. Today, he dedicates part of his time to Vista’s corporate venture capital arm as a member of Vista’s Investment Committee. Vista’s venture capital arm focuses primarily on investment opportunities in the energy and natural energy transition space.
Following his strong belief in the importance of innovation in the energy industry, Galuccio also co-founded and chairs GridX, a science-based company incubator focused on the creation of, and investment in, biotech startups. GridX selects and brings together promising entrepreneurs and scientists with the goal of generating a virtuous circle of knowledge and ideas. At present, GridX has already formed more than 30 companies targeting different sectors such as health, food, agriculture, energy, and materials, among others.
References
External links
1968 births
Living people
People from Paraná, Entre Ríos
Argentine people of Italian descent
Argentine engineers
Petroleum engineers
Argentine chief executives | Miguel Galuccio | Engineering | 446 |
59,303,035 | https://en.wikipedia.org/wiki/NTCA%20-%20The%20Rural%20Broadband%20Association | NTCA - The Rural Broadband Association (NTCA) is a membership association with the goal of improving communications services in rural America. With a membership comprising over 850 independent rural American telecommunications companies in 46 states, NTCA provides training and employee benefit packages to its members. It also advocates rural issues to legislatures, including universal service, rural infrastructure, cybersecurity, telemedicine, and consumer protection.
History
In 1949, the Rural Electrification Administration (REA) loan program was established to give long-term, low-interest loans to rural telephone systems. In response, the National Rural Electric Cooperative Association (NRECA) created a committee of representatives from emerging joint electric-telephone cooperative organizations. On June 1, 1954, the eight committee members founded the National Telephone Cooperative Association (NTCA) as a separate national organization that would represent telephone cooperatives. In 1956, NTCA successfully advocated for the maintenance of the REA telephone loan program, which the Eisenhower Administration had attempted to terminate. The organization soon entered into an agreement with NRECA to make the NRECA's insurance and benefit programs available to the employees of NTCA member organizations. By the end of 1956, NTCA’s membership had grown from its original eight members to sixty members, with that figure growing to nearly one hundred members during the 1960s. Throughout the 1960s, NTCA worked to improve the availability of financing to its members by supporting the REA and advocating for the creation of a supplemental bank for rural telephone systems. In 1971, Congress established the Rural Telephone Bank to provide financing to rural telephone companies. In 1970 NTCA allowed locally-owned and controlled commercial telecoms to join the association as non-voting members and in 1971 held its first legislative conference. During the 1970s, NTCA’s membership grew to nearly three hundred.
During the 1990s, NTCA participated in the advocacy efforts that led to the Telecommunications Act of 1996, a rewrite of the communications regulations of the United States that would deregulate and increase competition for the broadcasting and telecommunications markets. NTCA also urged the Federal Communications Commission (FCC) to retain effective competition standards for small cable systems and to support the removal of the telephone and cable television cross-ownership ban. In 1994, NTCA established the Foundation for Rural Service, a nonprofit foundation with the goal of spreading awareness of rural issues. In 2002, NTCA changed its name from the National Telephone Cooperative Association to the National Telecommunications Cooperative Association. In 2013, NTCA and the Organization for the Promotion and Advancement of Small Telecommunications Companies (OPASTCO) merged into one organization called NTCA–The Rural Broadband Association. The new organization included NTCA's 580 members and OPASTCO's 372 members.
In 2017, investment in rural infrastructure became a major priority for the federal government. NTCA advocated for including broadband in any infrastructure program.
2019 marked the 50th anniversary of NTCA’s Rural Broadband PAC, previously known as TECO, and the 25th anniversary of the Foundation for Rural Service.
In 2020, the COVID-19 pandemic spread, causing schools to close and teaching to be conducted online, which placed a spotlight on the importance of broadband. This rise in national attention led policymakers to include broadband funding in two key pieces of legislation passed in 2020: the Coronavirus Aid, Relief, and Economic Security (CARES) Act and the Coronavirus Response and Relief Supplemental Appropriations Act.
In 2021, Congress passed the Infrastructure Investment and Jobs Act, which significantly increased funding for broadband deployment in rural areas. Throughout the legislative and rulemaking processes, NTCA advocated for provisions to ensure that funding is used on networks that are future-proof.
In October 2022, the NTCA was the most active group to lobby the FCC, making eight separate filings.
Programs
NTCA has several programs to further its goals of supporting rural telecommunications cooperatives, including Smart Rural Community, the Foundation for Rural Service and CyberShare: The Small Broadband Provider ISAC. Smart Rural Community is a program with the goal of increasing the use of broadband and technology in rural America by providing educational programming, providing grants and giving awards. Gig-Capable Provider certification is a program that certifies telecommunications companies that are capable of delivering gigabit broadband speeds to rural communities. The Foundation for Rural Service spreads awareness of rural issues by distributing educational resources, performing research and giving scholarships to rural students. CyberShare is NTCA’s cybersecurity program. Made specifically for small broadband companies, CyberShare provides high-quality indicators and mitigation strategies for cyber-attacks and gives participants access to daily and weekly reports and a secure web platform. Through its subsidiaries, NTCA also provides insurance and benefits programs to its members, runs a political action committee, and sells telecommunications equipment and technical services at group rates.
See also
Rural Internet in the United States
References
External links
Official site
Digital divide
Internet access
Rural development in the United States | NTCA - The Rural Broadband Association | Technology | 991 |
7,596,042 | https://en.wikipedia.org/wiki/Lawnmower%20Man%202%3A%20Beyond%20Cyberspace | Lawnmower Man 2: Beyond Cyberspace (also subtitled Jobe's War) is a 1996 American science fiction action film written and directed by Farhad Mann, and starring Matt Frewer, Patrick Bergin, Austin O'Brien, and Ely Pouget. It is a sequel to the 1992 film The Lawnmower Man. The film was negatively reviewed by both critics and general audiences.
Plot
The founder of virtual reality, Dr. Benjamin Trace, has lost a legal battle to secure a patent on the most powerful worldwide communications chip ever invented. Touted as the one operating system to control all others, in the wrong hands the "Chiron Chip" has the potential to dominate a society dependent on computers. When corporate tycoon and virtual reality entrepreneur Jonathan Walker takes over the development of the Chiron Chip, he and his team discover Jobe Smith barely alive after the destruction of Virtual Space Industries. After having his face reconstructed and his legs amputated, they hook him up to their database to have him help perfect the Chiron Chip.
Six years later, a now 16-year-old Peter Parkette is a computer hacker living in the subways of a cyberpunk Los Angeles with a group of other runaway teens. While hooked into Cyberspace, Jobe reconnects with Peter and asks him to find Trace for him. Peter locates Trace living in a desert and brings him to his hideout to speak with Jobe. Jobe shows Trace his newly constructed cyber-world and asks about the Egypt link, a hidden Nano routine in the Chiron Chip's design. Trace refuses to tell him, noting that Jobe is insane and would not understand its power. Enraged, Jobe hacks into the subway system's computer to send another train crashing into the one Trace and the teenagers are in, but Trace causes the runaway train to crash into a construction site instead.
Walker and his team at "Virtual Light Industries" plan on announcing the functions of the chip and its virtual city to the public and world leaders, though Walker wants to use them for spying and blackmail. He uses Jobe to deal with anything that could stop him, such as crashing a plane carrying a senator who is opposed to the launch and killing anyone who gets too close to the truth through virtual reality. Trace, Peter, and the others make an attempt to break into Virtual Light to steal the chip but are nearly killed by Jobe before they are rescued by Dr. Cori Platt, Trace's former partner and lover.
After stealing the Chiron Chip, they find it is a decoy. Walker keeps the real chip in his office and the launch of the chip seems inevitable. Jobe begins causing havoc through the chip by accessing credit accounts, ATM machines, and water and power utilities in an attempt to destroy the world so that everyone may join and follow him as a virtual messiah. Walker attempts to stop Jobe but is gunned down by his own security.
The group returns to Virtual Light Industries. Trace explains that the Egypt link is a dam function designed to prevent "ultimate power". Jobe has built around the link without knowing its purpose. Trace and the others confront Jobe in his virtual city, in an attempt to get him roused enough to overpower himself. "Egypt" kicks in, destroying the virtual city and reducing Jobe to his original intellectually disabled persona. Peter goes to see Jobe before a wounded Walker takes Peter hostage in an attempt to bargain for the chip. Jobe distracts Walker long enough for Trace to strike him, causing him to land on exposed wiring that kills him. Peter and the others collect Jobe as they go home.
Cast
Production
The first Lawnmower Man had been New Line Cinema's highest grossing theatrical release of 1992 and a sequel had been initially advertised with the title Lawnmower Man 2: Mindfire on the 1993 VHS releases of the first film. Filming for the sequel commenced in March 1995 in Los Angeles with only Austin O'Brien returning from the original. Pierce Brosnan was initially asked to return as well but was unavailable due to the production of GoldenEye - this led to the hiring of Patrick Bergin as Dr. Benjamin Trace. Original director Brett Leonard was directing Virtuosity at the same time and did not return to helm the sequel to his original film.
Reception
Lawnmower Man 2 was poorly received by critics, with an 18% rating on Rotten Tomatoes, based on 11 reviews, with an average rating of 3.5/10. The plot and characters were generally negatively received, while the visual effects received mixed reviews.
References
External links
1996 films
Films about computing
Films about telepresence
Films about virtual reality
Cyberpunk films
American films about revenge
1996 science fiction films
American science fiction action films
Techno-thriller films
Films directed by Farhad Mann
Films scored by Robert Folk
Films shot from the first-person perspective
1990s English-language films
1990s American films
English-language science fiction films | Lawnmower Man 2: Beyond Cyberspace | Technology | 1,008 |
37,862,072 | https://en.wikipedia.org/wiki/PTQ%20implant | PTQ implant is a type of bio-compatible perianal injectable bulking agent used in urinary and fecal incontinence. The material is a type of silicone, and is injected into the desired area to bulk out the tissues and reduce incontinence symptoms.
It is a hydrogel of polyvinylpyrrolidone. It has been used in Europe.
See also
Implantable bulking agent
References
Implants (medicine)
Incontinence | PTQ implant | Biology | 101 |
76,863,523 | https://en.wikipedia.org/wiki/Pelota%20%28boat%29 | A pelota was an improvised rawhide boat used in South and Central America for crossing rivers. It was similar in some respects to the coracle of the British Isles or the bull boat of North America, but it had little or no wooden framework or internal supporting structure, often relying entirely on the stiffness of the hide and the packing of the cargo to keep it open and afloat. Thus, the hide could be carried about on horseback and set up quickly in an emergency, a commonplace rural skill. The vessel was towed by an animal, or by a human swimmer gripping a cord with the teeth, who had to be careful not to swamp it, women being considered particularly dexterous. Pelotas could convey substantial loads—around a quarter of a ton was common—and even small artillery pieces. They continued to be used well into the 20th century.
Necessity
There were few bridges in these regions and rivers had to be forded or, if too deep, crossed in a boat, which might well be unavailable. To cross a river in an emergency e.g. when swollen by torrential rains or during a military campaign, travellers on horseback had to employ such means as were to hand. They were unlikely to be carrying timber, which in some regions e.g. the treeless pampas might be hard to procure. Ox-hides were common, however, and many travellers were in the habit of carrying one under their saddle. (The native recado is a saddle applied in multiple superposed layers, one of which is a large square sheet of rawhide.)
Botanist Augustin Saint-Hilaire wrote of using one, though he was reluctant to entrust his rich collection of specimens to this means of conveyance.
Construction
A sun-dried rawhide is inherently rather stiff, and tends to curve preferentially with the hairy side outwards. The legs were cut off to form a roughly rectangular structure, which was tied at the four corners to increase its curvature.
In some cases a refinement was employed. Since animal hides were habitually dried by staking them out on the ground, they came with peg-holes along the margins. By passing a cord through these eyelets, the curvature could be further enhanced - like tightening a purse-string. The vessel has been compared to the gigantic water-lily of the Amazon. Martin Dobrizhoffer, a Jesuit missionary in Paraguay, recorded that the four sides were raised "like the upturned brim of a hat", a distance of about 2 spans. Sometimes, a few sticks were inserted for internal support, but this was not usually necessary, or always possible.
If no cowhide was available, one might be procured by slaughtering an animal on the spot and skinning it. Since this hide lacked stiffness, however, it was necessary to vary the construction. The skin was stuffed with, and tied around, a bundle of straw, and only served as a rudimentary float.
Stiffness
If the hide was allowed to get wet through, it tended to become soft and pliable, hence useless. Then it was necessary to dry it out, or to use bracing sticks if these could be found.
Félix de Azara, a Spanish colonial official whose duties compelled him to travel through remote regions and who often used the pelota to cross rivers, complained in his travel diary that torrential rains not only caused flooding but gradually made the pelota useless.
It was said that if a pelota should take too long to cross a river, as might happen if the towing swimmer grew tired, or lost his hold, the hide would soften up and the vessel might sink.
Propulsion
The hide boat was towed across the river, either by a swimmer pulling a cord with his teeth, or by a bullock, or by holding onto the tail of a horse. In the Mato Grosso a second swimmer helped to guide it and push it from behind.
The French traveller de Moussy, who rode very extensively over Argentina, described the practice in his writings.
Several other sources indicate it was a common rural skill.
Professional pelota towers
The Spanish Empire established a mail service linking Buenos Aires in the Atlantic world with Lima in the Pacific. Posts were set up at intervals along this 3,000 mile route where fresh horses could be obtained and there was (very) basic sleeping accommodation. They were often beside rivers. At each place a postmaster was put in charge who was supposed to keep at least 50 horses; he or she got no salary but was rewarded with valuable legal privileges. Private voyagers were encouraged to travel with the mail, being forbidden to take their own horses.
Where the rivers were too deep to be forded the postal service appointed pasadores (passers) whose function was to carry passengers and mail across in pelotas. Pasadores were not allowed to charge much for the mail but were able to recoup themselves from the private travellers. Thus, at some places there were official pelota towers - persons who swam across rivers and pulled the boat with their teeth - whose charges were regulated by law.
The most notable crossing was at the Río del Pasage or Pasaje (today called the Juramento River), which lay on the road between Tucumán and Salta. It could be forded quite easily in the dry season, but when the waters rose it grew wide and deep, with strong currents and eventually, turbulent waves. An artillery officer wrote that the river brought down logs that endangered pelota and swimmer alike; the latter had to be adept to dodge them. He recorded that the service was still functioning in 1833; a bridge would have been much better, but had not been built owing to bureaucratic inertia. At this spot Indian women were celebrated pelota towers. It is not clear when the service ceased to function, but no bridge was built until 1926.
Women
At this pass the local women were reputed the best swimmers, their dextrous handling of the pelotas being "justly admired"; according to Sir Woodbine Parish they were extremely expert at guiding these "frail barks" across the stream; indeed according to the French geographer Martin de Moussy, in that region "they had a monopoly on this singular industry". Likewise, Domingo Faustino Sarmiento, though he grew up in a completely different part of the country, upon reading James Fenimore Cooper, remarked
Sensation
The sensation of crossing a river in a pelota was described by naturalist Alcide d'Orbigny. It took d'Orbigny an hour to reach the opposite bank. Jesuit missionary Florian Paucke, who was towed across by a 15-year old youth, noticed that his flock was careful to see he was evenly balanced first.
The freeboard was small. "He that sits on it must keep his balance well, for on the slightest movement he will find himself underwater". That the user had to keep absolutely still to avoid rolling the contrivance, or risk sinking, was stressed by more than one author.
Utility
The boat served for transporting clothing and gear that one wanted to keep dry, or cargo which must not get wet e.g. ammunition.
It also served to convey those who could not swim, or would not. In the colonial era, Spanish military commanders, though they knew how to swim, held it was beneath their dignity to strip in front of the common soldiery, and were conveyed by pelota; "scorning the assistance of another person, they impel forwards by two forked boughs for oars". The Rodrigues Ferreira expedition to the Mato Grosso drew indigenous people taking their children across in pelotas, propelled by pairs of women.
In tropical Brazil they could be used for rivers that, though fordable, contained dangerous fish.
A pelota could carry two men. Azara wrote that it could easily carry a load of 16 to 25 arrobas (180 to 280 kilos). Azara felt it was safer than a native canoe, and so did Father Dobrizhoffer. Military stores and even (small) artillery pieces could be conveyed across rivers. During the Paraguayan War they were used by a Brazilian reconnaissance expedition in the Pantanal.
As noted, the postal service in colonial and newly independent Argentina appointed official pelota towers. Some of them took the mail across rivers a mile or more across.
General Manuel Belgrano recalled taking a small revolutionary army across the Corriente River in 1811 with nothing but two bad canoes and some pelotas. The river was about a cuadra (80 metres) wide, and unfordable. He noted that most of his men knew how to use a pelota, implying that it was standard rural knowhow.
Not all countrymen knew how to swim, however: it depended on the region. The cavalry troopers of General Paz were from the Province of Corrientes, where everyone did. Crossing a river at night, holding on to the manes or tails of their swimming horses - their arms, ammunition, uniforms and saddles safely dry in pelotas, which they had improvised from rawhide saddle blankets - they surprised and defeated the enemy at the Battle of Caaguazú.
Origins and diffusion
Its origin is uncertain, but it may have pre-dated the Columbian exchange. Some have denied this, arguing that the indigenous peoples lacked domestic animals that could provide a large, strong hide. Another view is that they sewed several small camelid hides together: after the Spanish introduced cattle and horses, this became unnecessary.
James Hornell thought that pelota knowledge spread with human migration along the eastern seaboard of South America and inland. There is evidence it was known to an indigenous people of Patagonia who had never seen a Spaniard.
In the 1890s it could be found in most parts of Brazil, and in Central America.
Even in the 20th century the pelota could be found
Classification
The use of inflated skins as swimming-floats (whence, raft-supports) is ancient, and widely distributed in many human cultures. However the floats are intended to be airtight, for which purpose the skin should be removed from the animal with as few incisions as possible, and must be made wet and pliable.
The pelota, on the other hand, was an open boat that resembled the coracle of the British Isles or the bull boat of North America, but lacked the internal supporting framework of those vessels, except insofar as a few sticks might be added as bracing. (Unlike the pelota, the hide of the bull boat was soft and pliable, and was applied to the supporting wicker framework with hair side facing inwards.) Since the pelota was not a permanent structure but a recourse, it can also be thought of as a skilled voyaging procedure rather than a boat.
In the classification of McGrail, the pelota together with the Mongol hide boat (below) seem to stand in a class of their own, since both were made from a single hide and kept to shape by the internal pressure of the cargo.
Mongolian parallel
The nearest parallel in human culture would appear to be the hun-t'o of medieval Mongolia. Iohannes de Plano Carpini, who went there in 1246, said:
Nomenclature
There is no specific word for this vessel. The Spanish word pelota is general, meaning a round object. It is sometimes specified more precisely as pelota de cuero (hide ball), but this can still mean a football. The Dictionary of the Royal Spanish Academy has the verb pelotear, to cross a river on a pelota, but it can also mean to bounce from place to place, a victim of buck-passing bureaucracy.
In Brazil it was also called pelota, but in Bahia it was called banguê in the indigenous language.
In parts of South America the boat was called a balsa, but this is only a general word that can include a raft or an inflatable lifejacket. Amongst the Chiquitos people of Bolivia it was called natae; amongst the Abipones of the Chaco, ñataċ.
Eponym
The city of Pelotas, Brazil, is thought to have derived its name from the water craft, used in the 18th century for crossing a local stream.
Footnotes
Sources
History of Central America
History of South America
History of transport
Indigenous boats | Pelota (boat) | Physics | 2,541 |
5,801,035 | https://en.wikipedia.org/wiki/Follistatin | Follistatin, also known as activin-binding protein, is a protein that in humans is encoded by the FST gene. Follistatin is an autocrine glycoprotein that is expressed in nearly all tissues of higher animals.
Its primary function is the binding and bioneutralization of members of the TGF-β superfamily, with a particular focus on activin, a paracrine hormone.
An earlier name for the same protein was FSH-suppressing protein (FSP). At the time of its initial isolation from follicular fluid, it was found to inhibit the anterior pituitary's secretion of follicle-stimulating hormone (FSH).
Biochemistry
Follistatin is part of the inhibin-activin-follistatin axis.
Three isoforms, FS-288, FS-300, and FS-315 have been reported. Two, FS-288 and FS-315, are created by alternative splicing of the primary mRNA transcript. FS-300 (porcine follistatin) is thought to be the product of posttranslational modification via truncation of the C-terminal domain from the primary amino-acid chain.
Although FS is ubiquitous, its highest concentration is in the female ovary, followed by the skin.
Follistatin is produced by folliculostellate (FS) cells of the anterior pituitary. FS cells make numerous contacts with the classical endocrine cells of the anterior pituitary including gonadotrophs.
Function
In the tissues activin has a strong role in cellular proliferation, thereby making follistatin the safeguard against uncontrolled cellular proliferation and also allowing it to function as an instrument of cellular differentiation. These roles are vital in tissue rebuilding and repair, and may account for follistatin's high presence in the skin.
In the blood, activin and follistatin are involved in the inflammatory response following tissue injury or pathogenic incursion. The source of follistatin in circulating blood plasma has yet to be determined, but due to its autocrine nature speculation suggests the endothelial cells lining all blood vessels, or the macrophages and monocytes circulating within the whole blood, may be sources.
Follistatin is involved in embryo development. It has inhibitory action on bone morphogenic proteins (BMPs); BMPs induce the ectoderm to become epidermal ectoderm. Inhibition of BMPs allows neuroectoderm to arise from ectoderm, a process which eventually forms the neural plate. Other inhibitors involved in this process are noggin and chordin.
Follistatin and BMPs play a role in folliculogenesis within the ovary. The main role of follistatin in the estrous/menstrual ovary appears to be progression of the follicle from early antral to antral/dominant. It is also involved in the promotion of cellular differentiation of the estrogen-producing granulosa cells (GC) of the dominant follicle into the progesterone-producing large lutein cells (LLC) of the corpus luteum.
Clinical significance
Follistatin is studied for its role in regulation of muscle growth in mice, as an antagonist to myostatin (also known as GDF-8, a TGF superfamily member) which inhibits excessive muscle growth. Lee and McPherron demonstrated that inhibition of GDF-8, either by genetic elimination (knockout mice) or by increasing the amount of follistatin, resulted in increased muscle mass. In 2009, research with macaque monkeys demonstrated that regulating follistatin via gene therapy also resulted in muscle growth and increases in strength.
Increased levels of follistatin, by leading to increased muscle mass of certain core muscular groups, can increase life expectancy in cases of spinal muscular atrophy (SMA) in animal models.
Elevated circulating follistatin levels are also associated with increased risk of type 2 diabetes, early death, heart failure, stroke and chronic kidney disease. It has been demonstrated that follistatin contributes to insulin resistance in type 2 diabetes development and nonalcoholic fatty liver disease (NAFLD). The genetic regulation of follistatin secretion from the liver is via glucokinase regulatory protein (GCKR), as identified by large GWAS studies.
It is also investigated for its involvement in polycystic ovary syndrome (PCOS), in part to resolve debate as to its direct role in this disease.
Sporadic inclusion body myositis, a variant of inflammatory myopathy, involves muscle weakness. In one clinical trial of rAAV1.CMV.huFS344 at a dose of 6 × 10¹¹ vg/kg, walk test results significantly improved versus untreated controls, along with decreased fibrosis and improved regeneration.
ACE-083, a follistatin-based fusion protein, was investigated for the treatment of focal or asymmetric myopathies. Intramuscular ACE-083 increased growth and force production in injected muscle in wild-type mice and mouse models of Charcot-Marie-Tooth disease (CMT) and Duchenne muscular dystrophy, without systemic effects or endocrine disruption.
AAV-mediated FST reduced obesity-induced inflammatory adipokines and cytokines systemically and in synovial fluid. Mice receiving FST therapy were protected from post-traumatic osteoarthritis and bone remodeling from joint injury.
In another mouse study, high dose animals showed significant quadriceps growth.
References
Further reading
External links
Proteins
FOLN domain
KAZAL domain
Myostatin inhibitors | Follistatin | Chemistry | 1,209 |
2,565,082 | https://en.wikipedia.org/wiki/Intrinsically%20photosensitive%20retinal%20ganglion%20cell | Intrinsically photosensitive retinal ganglion cells (ipRGCs), also called photosensitive retinal ganglion cells (pRGC), or melanopsin-containing retinal ganglion cells (mRGCs), are a type of neuron in the retina of the mammalian eye. The presence of an additional photoreceptor was first suspected in 1927 when mice lacking rod and cone cells still responded to changing light levels through pupil constriction; this suggested that rods and cones are not the only light-sensitive tissue. However, it was unclear whether this light sensitivity arose from an additional retinal photoreceptor or elsewhere in the body. Recent research has shown that these retinal ganglion cells, unlike other retinal ganglion cells, are intrinsically photosensitive due to the presence of melanopsin, a light-sensitive protein. Therefore, they constitute a third class of photoreceptors, in addition to rod and cone cells.
Overview
Compared to the rods and cones, the ipRGCs respond more sluggishly and signal the presence of light over the long term. They represent a very small subset (~1%) of the retinal ganglion cells. Their functional roles are non-image-forming and fundamentally different from those of pattern vision; they provide a stable representation of ambient light intensity. They have at least three primary functions:
They play a major role in synchronizing circadian rhythms to the 24-hour light/dark cycle, providing primarily length-of-day and length-of-night information. They send light information via the retinohypothalamic tract (RHT) directly to the circadian pacemaker of the brain, the suprachiasmatic nucleus of the hypothalamus. The physiological properties of these ganglion cells match known properties of the daily light entrainment (synchronization) mechanism regulating circadian rhythms. In addition, ipRGCs could also influence peripheral tissues such as the hair follicle regeneration through SCN-sympathetic nerve circuit.
Photosensitive ganglion cells innervate other brain targets, such as the center of pupillary control, the olivary pretectal nucleus of the midbrain. They contribute to the regulation of pupil size and other behavioral responses to ambient lighting conditions.
They contribute to photic regulation and acute photic suppression of release of the hormone melatonin.
In rats, they play some role in conscious visual perception, including perception of regular gratings, light levels, and spatial information.
Photoreceptive ganglion cells have been isolated in humans, where, in addition to regulating the circadian rhythm, they have been shown to mediate a degree of light recognition in rodless, coneless subjects suffering with disorders of rod and cone photoreceptors. Work by Farhan H. Zaidi and colleagues showed that photoreceptive ganglion cells may have some visual function in humans.
The photopigment of photoreceptive ganglion cells, melanopsin, is excited by light mainly in the blue portion of the visible spectrum (absorption peaks at ~480 nanometers). The phototransduction mechanism in these cells is not fully understood, but seems likely to resemble that in invertebrate rhabdomeric photoreceptors. In addition to responding directly to light, these cells may receive excitatory and inhibitory influences from rods and cones by way of synaptic connections in the retina.
The axons from these ganglia innervate regions of the brain related to object recognition, including the superior colliculus and dorsal lateral geniculate nucleus.
Structure
ipRGC receptor
These photoreceptor cells project both throughout the retina and into the brain. They contain the photopigment melanopsin in varying quantities along the cell membrane, including on the axons up to the optic disc, the soma, and dendrites of the cell. ipRGCs contain membrane receptors for the neurotransmitters glutamate, glycine, and GABA. Photosensitive ganglion cells respond to light by depolarizing, thus increasing the rate at which they fire nerve impulses, which is opposite to that of other photoreceptor cells, which hyperpolarize in response to light.
Results of studies in mice suggest that the axons of ipRGCs are unmyelinated.
Melanopsin
Unlike other photoreceptor pigments, melanopsin has the ability to act as both the excitable photopigment and as a photoisomerase. Unlike the visual opsins in rod cells and cone cells, which rely on the standard visual cycles for recharging all-trans-retinal back into the photosensitive 11-cis-retinal, melanopsin is able to isomerize all-trans-retinal into 11-cis-retinal itself when stimulated with another photon. An ipRGC therefore does not rely on Müller cells and/or retinal pigment epithelium cells for this conversion.
The two isoforms of melanopsin differ in their spectral sensitivity, for the 11-cis-retinal isoform is more responsive to shorter wavelengths of light, while the all-trans isoform is more responsive to longer wavelengths of light.
Synaptic inputs and outputs
Inputs
ipRGCs are both pre- and postsynaptic to dopaminergic amacrine cells (DA cells) via reciprocal synapses, with ipRGCs sending excitatory signals to the DA cells, and the DA cells sending inhibitory signals to the ipRGCs. These inhibitory signals are mediated through GABA, which is co-released from the DA cells along with dopamine. Dopamine has functions in the light-adaptation process by up-regulating melanopsin transcription in ipRGCs and thus increasing the photoreceptor's sensitivity. In parallel with the DA amacrine cell inhibition, somatostatin-releasing amacrine cells, themselves inhibited by DA amacrine cells, inhibit ipRGCs. Other synaptic inputs to ipRGC dendrites include cone bipolar cells and rod bipolar cells.
Outputs
One postsynaptic target of ipRGCs is the suprachiasmatic nucleus (SCN) of the hypothalamus, which serves as the circadian clock in an organism. ipRGCs release both pituitary adenylyl cyclase-activating protein (PACAP) and glutamate onto the SCN via a monosynaptic connection called the retinohypothalamic tract (RHT). Glutamate has an excitatory effect on SCN neurons, and PACAP appears to enhance the effects of glutamate in the hypothalamus.
Other postsynaptic targets of ipRGCs include: the intergeniculate leaflet (IGL), a cluster of neurons located in the thalamus, which plays a role in circadian entrainment; the olivary pretectal nucleus (OPN), a cluster of neurons in the midbrain that controls the pupillary light reflex; the ventrolateral preoptic nucleus (VLPO), which is located in the hypothalamus and is a control center for sleep; and the amygdala.
Function
Pupillary light reflex
Using various photoreceptor knockout mice, researchers have identified the role of ipRGCs in both the transient and sustained signaling of the pupillary light reflex (PLR). Transient PLR occurs at dim to moderate light intensities and is a result of phototransduction occurring in rod cells, which provide synaptic input onto ipRGCs, which in turn relay the information to the olivary pretectal nucleus in the midbrain. The neurotransmitter involved in the relay of information to the midbrain from the ipRGCs in the transient PLR is glutamate. At brighter light intensities the sustained PLR occurs, which involves both phototransduction of the rod providing input to the ipRGCs and phototransduction of the ipRGCs themselves via melanopsin. Researchers have suggested that the role of melanopsin in the sustained PLR is due to its lack of adaptation to light stimuli in contrast to rod cells, which exhibit adaptation. The sustained PLR is maintained by PACAP release from ipRGCs in a pulsatile manner.
Possible role in conscious sight
Experiments with rodless, coneless humans allowed another possible role for the receptor to be studied. In 2007, a new role was found for the photoreceptive ganglion cell. Zaidi and colleagues showed that in humans the retinal ganglion cell photoreceptor contributes to conscious sight as well as to non-image-forming functions like circadian rhythms, behaviour and pupillary reactions. Since these cells respond mostly to blue light, it has been suggested that they have a role in mesopic vision and that the old theory of a purely duplex retina with rod (dark) and cone (light) light vision was simplistic. Zaidi and colleagues' work with rodless, coneless human subjects hence has also opened the door into image-forming (visual) roles for the ganglion cell photoreceptor.
The discovery that there are parallel pathways for vision was made: one classic rod- and cone-based arising from the outer retina, the other a rudimentary visual brightness detector arising from the inner retina. The latter seems to be activated by light before the former. Classic photoreceptors also feed into the novel photoreceptor system, and colour constancy may be an important role as suggested by Foster.
It has been suggested by the authors of the rodless, coneless human model that the receptor could be instrumental in understanding many diseases, including major causes of blindness worldwide such as glaucoma, a disease which affects ganglion cells.
In other mammals, photosensitive ganglia have proven to have a genuine role in conscious vision. Tests conducted by Jennifer Ecker et al. found that rats lacking rods and cones were able to learn to swim toward sequences of vertical bars rather than an equally luminescent gray screen.
Violet-to-blue light
Most work suggests that the peak spectral sensitivity of the receptor is between 460 and 484 nm. Lockley et al. in 2003 showed that 460 nm (blue) wavelengths of light suppress melatonin twice as much as 555 nm (green) light, the peak sensitivity of the photopic visual system. In work by Zaidi, Lockley and co-authors using a rodless, coneless human, it was found that a very intense 481 nm stimulus led to some conscious light perception, meaning that some rudimentary vision was realized.
Discovery
In 1923, Clyde E. Keeler observed that the pupils in the eyes of blind mice he had accidentally bred still responded to light. The ability of the rodless, coneless mice to retain a pupillary light reflex was suggestive of an additional photoreceptor cell.
In the 1980s, research in rod- and cone-deficient rats showed regulation of dopamine in the retina, a known neuromodulator for light adaptation and photoentrainment.
Research continued in 1991, when Russell G. Foster and colleagues, including Ignacio Provencio, showed that rods and cones were not necessary for photoentrainment, the visual drive of the circadian rhythm, nor for the regulation of melatonin secretion from the pineal gland, via rod- and cone-knockout mice. Later work by Provencio and colleagues showed that this photoresponse was mediated by the photopigment melanopsin, present in the ganglion cell layer of the retina.
The photoreceptors were identified in 2002 by Samer Hattar, David Berson and colleagues, where they were shown to be melanopsin expressing ganglion cells that possessed an intrinsic light response and projected to a number of brain areas involved in non-image-forming vision.
In 2005, Panda, Melyan, Qiu, and colleagues demonstrated that the melanopsin photopigment was the phototransduction pigment in ganglion cells. Dennis Dacey and colleagues showed in a species of Old World monkey that giant ganglion cells expressing melanopsin projected to the lateral geniculate nucleus (LGN). Previously only projections to the midbrain (pre-tectal nucleus) and hypothalamus (supra-chiasmatic nuclei, SCN) had been shown. However, a visual role for the receptor was still unsuspected and unproven.
Research
Research in humans
Attempts were made to hunt down the receptor in humans, but humans posed special challenges and demanded a new model. Unlike in other animals, researchers could not ethically induce rod and cone loss either genetically or with chemicals so as to directly study the ganglion cells. For many years, only inferences could be drawn about the receptor in humans, though these were at times pertinent.
In 2007, Zaidi and colleagues published their work on rodless, coneless humans, showing that these people retain normal responses to nonvisual effects of light. The identity of the non-rod, non-cone photoreceptor in humans was found to be a ganglion cell in the inner retina as shown previously in rodless, coneless models in some other mammals. The work was done using patients with rare diseases that wiped out classic rod and cone photoreceptor function but preserved ganglion cell function. Despite having no rods or cones, the patients continued to exhibit circadian photoentrainment, circadian behavioural patterns, melatonin suppression, and pupil reactions, with peak spectral sensitivities to environmental and experimental light that match the melanopsin photopigment. Their brains could also associate vision with light of this frequency. Clinicians and scientists are now seeking to understand the new receptor's role in human diseases and blindness.
Intrinsically photosensitive RGCs have also been implicated in the exacerbation of headache by light during migraine attacks.
See also
Bistratified cell
Melanopsin
Midget cell
Parasol cell
Photoreceptor
References
External links
Melanopsin-expressing, Intrinsically Photosensitive Retinal Ganglion Cells, Webvision, University of Utah, US
ipRGCs Brown University, Rhode Island, US
Human eye anatomy
Histology
Photoreceptor cells
Circadian rhythm
Visual system
Neuroscience | Intrinsically photosensitive retinal ganglion cell | Chemistry,Biology | 3,045 |
39,284,674 | https://en.wikipedia.org/wiki/Plant%20manufactured%20pharmaceuticals | Plant manufactured pharmaceuticals are pharmaceuticals derived from genetically modified plants used as therapeutic compounds. This approach can serve as a replacement for the traditional method of producing such compounds in animal cell culture. Plants can be used to treat and prevent diseases that may once have been deemed incurable. Through biotechnological advances, complex therapeutic proteins can be produced from plant cells. Examples of such therapeutic proteins are found in brands like Enbrel and Remicade for rheumatoid arthritis, and Herceptin, a breast cancer treatment. Plants like tobacco are hosts for protein production for applications such as anemia, hepatitis B and C, hypertension, antimicrobials, and liver disease.
Impact on business and industry
The advancement of plant manufactured pharmaceuticals brings a new type of production to industry. Companies such as ZEA Biosciences are developing cost-effective and scalable pharmaceutical ingredients using plants instead of cell culture. Compared with cell culture, plants offer much larger production capacity, allow a mass quantity of plant hosts to be maintained on site, and can act as bioreactors producing specific antibodies tailored to individual patient needs. Indirectly, demand for the plants used in plant manufactured pharmaceuticals will increase in geographic areas where those plants naturally grow, for instance in developing countries. Increased agricultural demand could help such countries export, form trade alliances with other countries, and benefit from the development of therapies that can control diseases such as cholera and HIV/AIDS.
Ideas of enhanced recovery
Landscape gardens grown for the production of therapeutic proteins could also offer a new setting for patient recovery. Professor Roger Ulrich of Texas A&M University believes that therapeutic gardens can address the spiritual needs of patients and enhance recovery from stress, relieving stress and giving patients a feeling of tranquility during their recovery.
Criticism and awareness
Many corporations are allowed to create genetically modified organisms and secure them through intellectual property rights, creating monopolies, a fact that continues to evoke criticism. Awareness and education are needed for the public to understand how GM plants have helped medical research. For instance, in 1992 a group of American students produced a hepatitis B vaccine from a genetically modified tobacco plant, demonstrating the ability of plants to produce pharmaceutical compounds.
References
Pharmacognosy | Plant manufactured pharmaceuticals | Chemistry | 470 |
85,151 | https://en.wikipedia.org/wiki/Cloacina | Cloacina was a goddess who presided over the Cloaca Maxima ('Greatest Drain'), the main interceptor discharge outfall of the system of sewers in Rome.
Name
The theonym Cloācīna is a derivative of the noun cloāca ('sewer, underground drainage'; cf. cluere 'to purify'), itself from Proto-Italic *klowā-, ultimately from Proto-Indo-European *ḱleuH-o- ('clean'). A cult-title of Venus, Cloācīna may be interpreted as meaning 'The Purifier'.
In later English works, phrases such as "the temple of Cloacina" were sometimes used as euphemisms for the toilet.
Cult
The Cloaca Maxima was said to have been begun by Tarquinius Priscus, one of Rome's Etruscan kings, and finished by another, Tarquinius Superbus: Cloacina might have originally been an Etruscan deity. According to one of Rome's foundation myths, Titus Tatius, king of the Sabines, erected a statue to Cloacina at the place where Romans and Sabines met to confirm the end of their conflict, following the rape of the Sabine women. Tatius instituted lawful marriage between Sabines and Romans, uniting them as one people, ruled by himself and by Rome's founder, Romulus. The peace between Sabines and Romans was marked by a cleansing ritual using myrtle, at or very near an ancient Etruscan shrine to Cloacina, above a small stream that would later be enlarged as the main outlet for Rome's main sewer, the Cloaca Maxima. As myrtle was one of Venus' signs, and Venus was a goddess of union, peace and reconciliation, Cloacina was recognised as Venus Cloacina (Venus the Cleanser). She was also credited with the purification of sexual intercourse within marriage.
The small, circular shrine of Venus Cloacina was situated before the Basilica Aemilia on the Roman Forum and directly above the Cloaca Maxima. Some Roman coins had images of Cloacina's shrine. The clearest show two females, presumed to be deities, each with a bird perched on a pillar. One holds a small object, possibly a flower; birds and flowers are signs of Venus, among other deities. The figures may have represented the two aspects of the divinity, Cloacina-Venus.
References
Bibliography
Further reading
Information on Cloacina
See also
Toilet god
Mefitis
Love and lust goddesses
Roman goddesses
Toilet goddesses
Sewerage | Cloacina | Chemistry,Engineering,Environmental_science | 540 |
859,798 | https://en.wikipedia.org/wiki/Bottled%20gas | Bottled gas is a term used for substances which are gaseous at standard temperature and pressure (STP) and have been compressed and stored in carbon steel, stainless steel, aluminum, or composite containers known as gas cylinders.
Gas state in cylinders
There are four cases: either the substance remains a gas at standard temperature but increased pressure, the substance liquefies at standard temperature but increased pressure, the substance is dissolved in a solvent, or the substance is liquefied at reduced temperature and increased pressure. In the last case the bottle is constructed with an inner and outer shell separated by a vacuum (dewar flask) so that the low temperature can be maintained by evaporative cooling.
Case I
The substance remains a gas at standard temperature and increased pressure, its critical temperature being below standard temperature. Examples include:
air
argon
fluorine
helium
hydrogen
krypton
nitrogen
oxygen
Case II
The substance liquefies at standard temperature but increased pressure. Examples include:
ammonia
butane
carbon dioxide (also packaged as a cryogenic gas, Case IV)
chlorine
nitrous oxide
propane
sulfur dioxide
Case III
The substance is dissolved at standard temperature in a solvent. Examples include:
carbon dioxide in the form of a soft drink
sulfur trioxide in the form of fuming sulfuric acid
nitrogen dioxide in the form of red-fuming nitric acid
hydrogen chloride in the form of muriatic acid
Note: these four are most often found in containers other than metal bottles, and at low pressure, e.g. .
acetylene
Note: Acetylene cylinders contain an inert packing material, which may be agamassan, and are filled with a solvent such as acetone or dimethylformamide. The acetylene is pumped into the cylinder and it dissolves in the solvent. When the cylinder is opened the acetylene comes back out of solution, much like a carbonated beverage bubbles when opened. This is a workaround to acetylene's propensity to explode when pressurized above 200 kPa or liquefied.
Case IV
The substance is liquefied at reduced temperature and increased pressure. These are also referred to as cryogenic gases. Examples include:
liquid nitrogen (LN2)
liquid hydrogen (LH2)
liquid oxygen (LOX)
carbon dioxide (also packaged as a liquefied gas, Case II)
Note: cryogenic gases are typically equipped with some type of 'bleed' device to prevent overpressure from rupturing the bottle and to allow evaporative cooling to continue.
Expansion and volume
The general rule is that one unit volume of liquid will expand to approximately 800 unit volumes of gas at standard temperature and pressure, with some variation due to intermolecular forces and molecule size compared to an ideal gas. Normal high pressure gas cylinders will hold gas at pressures from . An ideal gas pressurised to 200 bar in a cylinder would yield 200 times the cylinder's volume of gas at atmospheric pressure, but real gases will yield a few percent less than that. At higher pressures, the shortfall is greater.
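As a rough illustration of that rule of thumb, the sketch below estimates the volume of gas delivered at atmospheric pressure from a filled cylinder. The 50-litre cylinder size and the compressibility factor Z ≈ 1.05 are assumed values chosen only for illustration and are not figures from this article.

```python
# Minimal sketch, assuming a 50 L cylinder filled to 200 bar and an
# illustrative compressibility factor Z ~ 1.05 (Z depends on the gas,
# the pressure and the temperature; it is not given in this article).
cylinder_volume_l = 50.0      # water capacity of the cylinder, litres
fill_pressure_bar = 200.0     # filling pressure
atm_pressure_bar = 1.013      # atmospheric pressure

# Ideal-gas estimate: delivered volume scales with the pressure ratio.
ideal_volume_l = cylinder_volume_l * fill_pressure_bar / atm_pressure_bar

# Real-gas estimate: dividing by Z > 1 gives the "few percent less"
# shortfall mentioned above.
z = 1.05
real_volume_l = ideal_volume_l / z

print(f"Ideal-gas estimate: {ideal_volume_l:,.0f} L")
print(f"Real-gas estimate : {real_volume_l:,.0f} L")
```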
Special handling considerations
Because the contents are under high pressure and are sometimes hazardous, there are special safety regulations for handling bottled gases. These include chaining bottles to prevent falling and breaking, proper ventilation to prevent injury or death in case of leaks and signage to indicate the potential hazards.
In the United States, the Compressed Gas Association (CGA) sells a number of booklets and pamphlets on safe handling and use of bottled gases. (Members of the CGA can get the pamphlets for free.) The European Industrial Gases Association and the British Compressed Gases Association provide similar facilities in Europe and the United Kingdom.
Nomenclature differences
In the United States, 'bottled gas' typically refers to liquefied petroleum gas. 'Bottled gas' is sometimes used in medical supply, especially for portable oxygen tanks. Packaged industrial gases are frequently called 'cylinder gas', though 'bottled gas' is sometimes used.
The United Kingdom and other parts of Europe more commonly refer to 'bottled gas' when discussing any usage, whether industrial, medical or liquefied petroleum. In contrast, what the United States calls liquefied petroleum gas is known generically in the United Kingdom as 'LPG'; it may be ordered by one of several trade names, or specifically as butane or propane, depending on the required heat output.
Colour coding
Different countries have different gas colour codes but attempts are being made to standardise the colours of cylinder shoulders:
Colours of cylinders for Medical gases are covered by an International Organization for Standardization (ISO) standard, ISO 32; but not all countries use this standard.
Within Europe gas cylinders colours are being standardised according to EN 1089-3, the standard colours applying to the cylinder shoulder only; i.e., the top of the cylinder close to the pillar valve.
In the United States, colour-coding is not regulated by law.
The user should not rely on the colour of a cylinder to indicate what it contains. The label or decal should always be checked for product identification.
European cylinder colours
The colours below are specific shades, defined in the European Standard in terms of RAL coordinates. The requirements are based on a combination of a few named gases, otherwise on the primary hazard associated with the gas contents:
Specific gases
Based on gas properties
Gas mixtures, mostly for diving
Diving cylinders are left unpainted (for aluminium), or painted to prevent corrosion (for steel), often in bright colours, most often fluorescent yellow, to increase visibility. This should not be confused with industrial gases, where a yellow shoulder means chlorine.
See also
References
Notes
Standards
ISO 32: Gas cylinders for medical use—Marking for identification of content.
CEN EN 1089-3: Transportable gas cylinders, Part 3 - Colour Coding.
External links
Virtual Anesthesia Machine - 6 different color codes for medical gas cylinders, hoses and outlets
British Compressed Gases Association – Colour Coding of Cylinders.
Air Products – European Gas Cylinder Identification Chart.
Compressed Gas Association (U.S.)
Gases and Welding Distributors Association (U.S.)
European Industrial Gases Association (E.U.)
British Compressed Gases Association (UK)
Gases
Pressure vessels
Gas technologies
Industrial gases
Fuel containers
Color codes | Bottled gas | Physics,Chemistry,Engineering | 1,279 |
1,624,551 | https://en.wikipedia.org/wiki/Betts%20electrolytic%20process | The Betts electrolytic process is an industrial process for purification of lead from bullion. Lead obtained from its ores is impure because lead is a good solvent for many metals. Often these impurities are tolerated, but the Betts electrolytic process is used when high purity lead is required, especially for bismuth-free lead.
Process description for lead
The electrolyte for this process is a mixture of lead fluorosilicate ("PbSiF6") and hexafluorosilicic acid (H2SiF6) operating at 45 °C (113 °F). Cathodes are thin sheets of pure lead and anodes are cast from the impure lead to be purified. A potential of 0.5 volts is applied. At the anode, lead dissolves, as do metal impurities that are less noble than lead. Impurities that are more noble than lead, such as silver, gold, and bismuth, flake from the anode as it dissolves and settle to the bottom of the vessel as "anode mud." Pure metallic lead plates onto the cathode, with the less noble metals remaining in solution. Because of its high cost, electrolysis is used only when very pure lead is needed. Otherwise pyrometallurgical methods are preferred, such as the Parkes process followed by the Betterton-Kroll process.
History
The process is named for its inventor Anson Gardner Betts who filed several patents for this method starting in 1901.
See also
Processing lead from ore
Lead smelter
Electrochemical engineering
References
External links
Bismuth
Lead
Electrolysis
Metallurgical processes | Betts electrolytic process | Chemistry,Materials_science,Engineering | 344 |
54,184,720 | https://en.wikipedia.org/wiki/Spiroxasone | Spiroxasone (, ) is a synthetic, steroidal antimineralocorticoid of the spirolactone group which was developed as a diuretic and antihypertensive agent but was never marketed. It was synthesized and assayed in 1963. The drug is 7α-acetylthiospirolactone with the ketone group removed from the C17α spirolactone ring. Similarly to other spirolactones like spironolactone, spiroxasone also possesses antiandrogen activity.
References
Abandoned drugs
Acetate esters
Antimineralocorticoids
Pregnanes
Spiro compounds
Spirolactones
Steroidal antiandrogens
Thioesters | Spiroxasone | Chemistry | 158 |
3,938,313 | https://en.wikipedia.org/wiki/Serine%20octamer%20cluster | The Serine octamer cluster in physical chemistry is an unusually stable cluster consisting of eight serine molecules (Ser) implicated in the origin of homochirality. This cluster was first discovered in mass spectrometry experiments. Electrospray ionization of an aerosol of serine in methanol results in a mass spectrum with a prominent ion peak at m/z 841, corresponding to the Ser8+H+ cation. The smaller and larger clusters are virtually absent from the spectrum and therefore the number 8 is called a magic number. The same octamer ions are also produced by rapid evaporation of a serine solution on a hot (200-250 °C) metal surface or by sublimation of solid serine. After production, detection is again by mass spectrometric means. For the discussion of homochirality, these laboratory production methods are designed to mimic prebiotic conditions.
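The m/z 841 assignment can be checked with simple arithmetic, as in the sketch below; the monoisotopic mass of serine and the proton mass used here are standard reference values rather than figures taken from this article.

```python
# Minimal sketch: predicted m/z of the protonated serine octamer [Ser8 + H]+.
# The masses are standard reference values (assumed, not from the article).
serine_monoisotopic_da = 105.0426   # free serine, C3H7NO3
proton_da = 1.00728                 # mass of a proton

# Singly charged cluster, so m/z equals the total mass.
mz_ser8_h = 8 * serine_monoisotopic_da + proton_da
print(f"Predicted m/z of [Ser8 + H]+: {mz_ser8_h:.1f}")   # ~841.3, matching the observed peak
```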
The cluster is not only unusually stable but also unusual because it has a strong homochiral preference. A racemic serine solution produces a minimal amount of cluster, whereas solutions of either single enantiomer form a maximum amount of the corresponding homochiral cluster, D-Ser8 or L-Ser8. In another experiment, cluster formation from a racemic mixture with deuterium-enriched L-serine results in a product distribution with hardly any 50/50 D/L clusters but a preference for either D- or L-enantioenriched clusters.
A model for chiral amplification is proposed whereby enantioenriched clusters are formed from a non-racemic mixture already enriched in L-serine as a result of a mirror-symmetry breaking process. Cluster formation is followed by isolation, and on subsequent dissociation of the cluster a serine solution forms with a higher concentration of L-serine than in the original mixture. A cycle can be maintained in which each turn results in an incremental enrichment in L-serine. Many such cycles eventually result in enantiopure L-serine. This model has been experimentally verified.
Chiral transmission is assumed to take place through so-called substitution reactions of serine clusters. In these reactions, a serine monomer in a cluster can be replaced by another small biologically relevant molecule. For instance Ser8 reacts with glucose (Glc) to the Ser6 + Glc3 + Na+ cluster. Moreover, the cluster of synthetic L-glucose with Ser8 is less abundant than that with the biological D-glucose.
See also
Other magic numbers in chemistry: methane clathrate, Magic angle spinning
Other stable clusters in: aluminium superatoms
References
Physical chemistry
Stereochemistry | Serine octamer cluster | Physics,Chemistry | 544 |
60,389,003 | https://en.wikipedia.org/wiki/RISE%20project | The RISE Project (Rivera Submersible Experiments) was a 1979 international marine research project which mapped and investigated seafloor spreading in the Pacific Ocean, at the crest of the East Pacific Rise (EPR) at 21° north latitude. Using a deep sea submersible (ALVIN) to search for hydrothermal activity at depths around 2600 meters, the project discovered a series of vents emitting dark mineral particles at extremely high temperatures which gave rise to the popular name, "black smokers". Biologic communities found at 21° N vents, based on chemosynthesis and similar to those found at the Galápagos spreading center, established that these communities are not unique. Discovery of a deep-sea ecosystem not based on sunlight spurred theories of the origin of life on Earth.
Location
The RISE expedition took place on the East Pacific Rise spreading center at depths around , at 21° north latitude about south of Baja California, and southwest of Mazatlán, Mexico. The study area at 21° N was selected following results from a series of detailed near-bottom geophysical surveys that were designed to map the geologic features associated with a known spreading center.
Experiments
The project objective was detecting and mapping the sub-seafloor magma chamber that feeds lavas and igneous intrusions that create the oceanic crust and lithosphere in the process of seafloor spreading. The approach comprised many geophysical techniques including seismology, magnetism, crustal electrical properties, and gravity. The major experiment effort though, was seafloor observation and sample collection using the deep submergence submersible ALVIN on the crest of the EPR at depths of 2600 meters or more.
RISE was part of the RITA (Rivera-Tamayo expeditions) project, which included submersible investigations (CYAMEX) at 21° N and at the Tamayo Fracture zone at the mouth of the Gulf of California. The RITA project used the French submersible CYANA on the CYAMEX expeditions. CYANA dives at 21° N occurred in 1978, one year prior to the RISE expedition.
Participants
American, French, and Mexican biologists, geologists, and geophysicists participated in both the RISE and RITA expeditions. The RISE expedition was directed by scientists at the Scripps Institution of Oceanography, part of the University of California, San Diego. Project leaders were Fred Spiess and Ken Macdonald. Woods Hole Oceanographic Institution provided the ALVIN and its support tender the catamaran Lulu. Scripps provided surface survey vessels the Melville and New Horizon. The expedition took place during March to May 1979. The RITA Project was directed by French scientists and was led by Jean Francheteau.
Findings
The major finding of the RISE project was the discovery of very hot hydrothermal fluids emanating from the sea floor from vents at separate locations along the crest of the rise. These were anticipated by the discovery during the CYAMEX expedition a year earlier of massive sulfide mineral deposits on the sea floor at 21°N, which were presumed to be due to hydrothermal activity, although the activity itself was not observed at that time. During RISE dives, the hot vents were found and were marked by mineralized chimneys, about a half-meter in diameter and one to a few meters high, composed of sulfide minerals of zinc, copper and iron. Emitting from the chimneys were black plumes or jets of fine particles of these minerals, giving rise to the popular name "black smokers". Temperatures measured in these jets were 380±30 °C. Several vents of lower temperature emissions were found (<23 °C). These warm vents were similar to those discovered at the Galapagos Spreading Center a few years earlier. Hot vents and black smokers were not found at the Galapagos. Modeling of gravity data measured on the seafloor suggested that much of the upper ocean crust at 21°N was fractured and filled with warm water.
Scientific impact
Massive sulfide deposits have been mined on land in places including Cyprus, Oman and Australia. The discovery of massive sulfide deposits associated with vent fields at spreading centers provided a model for how these deposits formed. It also spurred commercial efforts to mine these deep sea deposits found elsewhere.
Marine geologists were puzzled for years by conductive heat flow data from the seafloor that showed the measured values at spreading centers were too low for theoretical models of seafloor spreading. The convective crustal heat transfer computed for the first time from the vent plumes was estimated to be many-fold the observed conductive heat flow at a spreading center. These observations pointed to the importance of convective heat flow at spreading centers and provided an answer to the low heat flow problem.
The discovery of biological communities at low temperature warm vents at 21°N, populated by a benthic community the same or similar to that discovered at the Galapagos spreading center, established that life forms found at the Galapagos were not unique. Further, the significance of discovering at the Galapagos site and 21°N of a chemosynthetic ecosystem that was not dependent on sunlight, existed at high pressures, and was based on chemicals emitted via volcanism, provided a model for how life could have originated on Earth.
See also
Tanya Atwater
Robert Ballard
Jack Corliss
Rachel Haymon
Miriam Kastner
Bruce P. Luyendyk
Endeavor Hydrothermal Vents
Magic Mountain (vents offshore British Columbia, Canada)
Rivera Plate
References
Further reading
External links
Discovery narrative by WHOI for black smokers
Oceanography
Hydrothermal vents
Pacific Ocean | RISE project | Physics,Environmental_science | 1,124 |
4,700,310 | https://en.wikipedia.org/wiki/Contextual%20inquiry | Contextual inquiry (CI) is a user-centered design (UCD) research method, part of the contextual design methodology. A contextual inquiry interview is usually structured as an approximately two-hour, one-on-one interaction in which the researcher watches the user in the course of the user's normal activities and discusses those activities with the user.
Description
Contextual inquiry defines four principles to guide the interaction:
Context—Interviews are conducted in the user's actual workplace. The researcher watches users do their own work tasks and discusses any artifacts they generate or use with them. In addition, the researcher gathers detailed re-tellings of specific past events when they are relevant to the project focus.
Partnership—User and researcher collaborate to understand the user's work. The interview alternates between observing the user as he or she works and discussing what the user did and why.
Interpretation—The researcher shares interpretations and insights with the user during the interview. The user may expand or correct the researcher's understanding.
Focus—The researcher steers the interaction towards topics which are relevant to the team's scope.
If specific tasks are important, the user may be asked to perform those tasks.
A contextual interview generally has three phases, which may not be formally separated in the interview itself:
The introduction—The researcher introduces him or herself and may request permission to record and start recording. The researcher promises confidentiality to the user, solicits a high-level overview of the user's work, and consults with the user on the specific tasks the user will work on during the interview.
The body of the interview—The researcher observes the work and discusses the observations with the user. The researcher takes notes, usually handwritten, of everything that happens.
The wrap-up—The researcher summarizes what was gleaned from the interview, offering the user a chance to give final corrections and clarifications.
Before a contextual inquiry, user visits must be set up. The users selected must be doing work of interest currently, must be able to have the researcher come into their workplace (wherever it is), and should represent a wide range of different types of users. A contextual inquiry may gather data from as few as 4 users (for a single, small task) to 30 or more.
Following a contextual inquiry field interview, the method defines interpretation sessions as a way to analyze the data. In an interpretation session, 3-8 team members gather to hear the researcher re-tell the story of the interview in order. As the interview is re-told, the team add individual insights and facts as notes. They also may capture representations of the user's activities as work models (defined in the Contextual design methodology). The notes may be organized using an affinity diagram. Many teams use the contextual data to generate in-depth personas.
Contextual inquiries may be conducted to understand the needs of a market and to scope the opportunities. They may be conducted to understand the work of specific roles or tasks, to learn the responsibilities and structure of the role. Or they may be narrowly focused on specific tasks, to learn the details necessary to support that task.
Advantages
Contextual inquiry offers the following advantages over other customer research methods:
The open-ended nature of the interaction makes it possible to reveal tacit knowledge, knowledge about their own work process that users themselves are not consciously aware of. Tacit knowledge has traditionally been very hard for researchers to uncover.
The information produced by contextual inquiry is highly reliable. Surveys and questionnaires assume the questions they include are important. Traditional usability tests assume the tasks the user is asked to perform are relevant. Contextual inquiries focus on the work users need to accomplish, done their way—so it is always relevant to the user. And because it's their own work, the users are more committed to it than they would be to a sample task.
The information produced by contextual inquiry is highly detailed. Marketing methods such as surveys produce high-level information but not the detailed work practice data needed to design products. It is very difficult to get this level of detail any other way.
Contextual inquiry is a very flexible technique. Contextual inquiries have been conducted in homes, offices, hospital OPDs, operating theaters, automobiles, factory floors, construction sites, maintenance tunnels, and chip fabrication labs, among many other places.
Limitations
Contextual inquiry has the following limitations:
Contextual inquiry is resource-intensive. It requires travel to the informant's site, a few hours with each user, and then a few more hours to interpret the results of the interview.
History of the method
Contextual inquiry was first referenced as a "phenomenological research method" in a paper by Whiteside, Bennet, and Holtzblatt in 1988, which lays out much of the justification for using qualitative research methods in design. It was first fully described as a method in its own right by Wixon, Holtzblatt, and Knox in 1990, where comparisons with other research methods are offered. It is most fully described by Holtzblatt and Beyer in 1995.
Contextual inquiry was extended to the full contextual design methodology by Beyer and Holtzblatt between 1988 and 1992. Contextual design was briefly described by them for Communications of the ACM in 1995, and was fully described in Contextual Design in 1997.
Work models as a way of capturing representations of user work during interpretation sessions were first briefly described by Beyer and Holtzblatt in 1993 and then more fully in 1995.
See also
Design research
Ethnography
Scenario
References
Further reading
S. Jones, Learning DECwrite in the Workplace; Using Contextual Inquiry to Articulate Learning. Internal Digital Report: DEC-TR 677, December 1989.
An early use of CI to analyze the use of a software product.
L. Cohen, Quality Function Deployment: How to Make QFD Work for You. Addison-Wesley Publishing Company, Reading, Massachusetts, 1995.
Discusses the use of CI in Quality Function Deployment
D. Wixon and J. Ramey (Eds.), Field Methods Case Book for Product Design. John Wiley & Sons, Inc., NY, NY, 1996.
This book describes the experience of several different practitioners using field methods. Several people who have used Contextual Inquiry and Contextual Design have written chapters describing their experiences. This is a good resource for anyone wanting to adopt customer-centered methods in their own organization. It includes a chapter by Holtzblatt and Beyer describing the whole Contextual Design process.
Nardi, B. Context and Consciousness : Activity Theory and Human-Computer Interaction. Massachusetts Institute of Technology Press, Cambridge, MA, USA ©1995
External links
Contextual inquiry at UsabilityNet
Contextual Interviews at Usability.gov
Getting Started with Contextual Techniques
Human–computer interaction
Inquiry | Contextual inquiry | Engineering | 1,398 |
4,721,358 | https://en.wikipedia.org/wiki/Variable%20gauge | Variable gauge systems allow railway vehicles to travel between two railways with different track gauges. Vehicles are equipped with variable gauge axles (VGA). The gauge is altered by driving the train through a gauge changer installed at the break of gauge which moves the wheels to the gauge desired.
Variable gauge systems exist within the internal network of Spain, and are installed on international links between Spain/France (Spanish train), Sweden/Finland (Swedish train), Poland/Lithuania (Polish train) and Poland/Ukraine (Polish train).
A system for changing gauge without the need to stop is in widespread use for passenger traffic in Spain, for services run on a mix of dedicated high-speed lines (using Standard gauge) and older lines (using Iberian gauge). Similar systems for freight traffic are still in their infancy, as the higher axle weight increases the technological challenge. Although several alternatives exist, including transferring freight, replacing individual wheels and axles, bogie exchange, transporter flatcars or the simple transshipment of freight or passengers, they are impractical, thus a cheap and fast system for changing gauge would be beneficial for cross-border freight traffic.
Alternative names include Gauge Adjustable Wheelsets (GAW), Automatic Track Gauge Changeover Systems (ATGCS/AGCS), Rolling Stock Re-Gauging System (RSRS), Rail Gauge Adjustment System (RGAS), Shifting wheelset, Variable Gauge Rolling Truck, track gauge change and track change wheelset.
Overview
Variable gauge axles help solve the problem of a break-of-gauge without having to resort to dual gauge tracks or transshipment. Systems allow the adjustment between two gauges. No gauge changer designs supporting more than two gauges are used.
Systems
There are several variable gauge axle systems:
Talgo-RD (from Talgo).
The Talgo system has been in revenue service in Portbou and Irun, on the Spanish-French border, since 1969
It is used on the Strizh train (swift) between Moscow and Berlin.
From 2014 for freight wagons up to 22.5 tonne axleloads
CAF-BRAVA (from Construcciones y Auxiliar de Ferrocarriles)
The BRAVA system was originally designed in 1968 by the Vevey Company in Switzerland. The system was originally called the "Vevey axle". The design was subsequently obtained and improved by Construcciones y Auxiliar de Ferrocarriles (CAF).
DB Cargo–Knorr-Bremse, being developed in 2002 for use between Europe and Russia.
DBAG–Rafil Type V for freight (from Rafil for Deutsche Bahn).
Japan Railways RTRI (from the Japan Railway Technical Research Institute) to be used on motorised axles.
PKP SUW 2000 system produced by ZNTK Poznań for Polish State Railways.
Montreux–Lenk im Simmental line, also developed by Prose of Winterthur in 2022 (/). Strictly speaking, this is not a variable gauge axle system; the bogie wheels are individually suspended without a connecting axle, and their gauge can be adjusted. Furthermore, while the gauge is being changed, the height of the body is changed by 200 mm to match the difference in the platform heights on the two different gauge railways comprising the GoldenPass Express.
Compatibility
The variable gauge systems are not themselves all compatible. The SUW 2000 and Rafil Type V systems are interoperable, as are TALGO-RD and CAF-BRAVA.
In 2009, at Roda de Barà near Tarragona, a Unichanger capable of handling four different VGA systems was under development.
International traffic
VGA is particularly important with international railway traffic because gauge changes tend to occur more often at international borders.
Features
Different systems have different limitations; for example, some can be used on carriages and wagons only and are unsuitable for motive power, while others require that rolling stock is unloaded before going through the gauge changer. When one of the gauges is narrow there may not be enough space between the wheels for the brakes, gauge changer and traction motors.
Maximum speed
The maximum speed of the trains equipped with the different technologies varies. Only CAF and Talgo produce high-speed VGA, allowing speeds up to 330 km/h.
Speed changing
The Talgo RD GC changes gauge at a speed of so a train takes only 24 seconds to convert.
Gauge changer
A gauge changer is a device which forces the gauge adjustment in the wheels. Designs consist of a pair of running rails that gradually vary in width between the two gauges, combined with other rails and levers to perform the following steps, using Talgo RD as an example:
Verify that all vehicles in train are suitable for Gauge Change.
Support on – takes weight off lock and on the guide rails.
Unlock.
Move wheels to new position.
Relock.
Support off – put weight back on lock from the guide rails.
Verify correct operation and generate statistics. Use ECPB power and supervisory cables.
In the Spanish Talgo-RD system, a constant spray of water is used to lubricate the metal surfaces, to reduce heat and wear. A Talgo RD gauge changer is long and wide.
Limitations
At present the choice of gauge is limited to two out of three of and broad gauges and . With narrow gauges such as as found at Zweisimmen, Switzerland, there is less room between the wheels for the gauge change mechanism, the traction motors, and the brakes. The diameter of the wheels also limits the axleload to no more than 22.5 tonnes.
Operation
A variable gauge multiple unit, or a train including a variable gauge locomotive (e.g. Talgo 250) and rolling stock, may drive straight across a gauge changer. Normally the locomotive will not be able to change gauge, meaning that it must move out of the way whilst the remainder of the train itself passes through. On the opposite side, a new locomotive of the other gauge will couple to the train.
A Talgo train with a locomotive can drive across a gauge change at 1 axle per second at a speed of about .
A train (or an individual car) can be pushed halfway across the gauge-changer, uncoupled, and then (once far enough across) coupled to the new locomotive and pulled the rest of the way. A long length of wire-rope with hooks on the end means that the process can be asynchronous, with the rope used to bridge across the length of the gauge changer (to temporarily couple the arriving cars and receiving locomotive, although without braking control from the locomotive to the train vehicles).
On long-distance trains in Spain and night trains crossing from Spain into France, the arriving locomotive stops just short of the gauge changer, uncouples and moves into a short siding out of the way. Gravity then moves the train through the gauge changer at a controlled low speed. The new locomotive is coupled onto the front only after the full train has finished passing through the changer.
From 2014 gauge changing systems for freight wagons were being developed.
Countries
Australia
In 1933, as many as 140 inventions were offered to Australian railways to overcome the breaks of gauge between the different states. None was accepted. About 20 of these devices were adjustable wheels/axles of some kind or another, which may be analogous to the modern VGA. VGA systems were mostly intended for broad gauge and standard gauge lines.
Break-of-gauge stations were installed at Port Pirie, Peterborough and Albury; these were fairly manual in operation. The newest installation was at Dry Creek and was of a more automatic design. The Talgo RD design is even more automatic and efficient.
Belarus/Poland
A Talgo gauge changing facility is installed at Brest near the Belarusian-Polish border. It is used by Russian Railways' fast trains connecting Moscow and Berlin.
Orders for 7 Talgo VGA trainsets were placed in 2011. The trains, under the brand "Strizh", have been in service since 2016.
Canada
Variable gauge axles were used for a while on the Grand Trunk Railway in the 1860s in Canada to connect and standard gauge without transshipment. Five hundred vehicles were fitted with "adjustable gauge trucks" but following heavy day-in, day-out use the system proved unsatisfactory, particularly in cold and snowy weather. The system used telescoping axles with wide hubs that allowed the wheels to be squeezed or stretched apart through a gauge-changer, after holding pins had been manually released.
Railway operations over the Niagara Bridge were also complicated.
Finland/Sweden
In 1999, a gauge-changer was installed at Tornio at the Finnish end of the dual-gauge section between Haparanda and Tornio, for use with variable gauge freight wagons. The Tornio gauge changer is a Rafil design from Germany; a similar Talgo-RD gauge changer at the Haparanda end used to exist, but was removed as it required de-icing in winter.
Train ferry traffic operated by SeaRail and arriving from Germany and Sweden by sea used bogie exchange facilities in the Port of Turku.
Georgia
A new gauge changer has been put in place in Akhalkalaki for Baku-Tbilisi-Kars railway. Northwestern end has rails apart, southeastern end has rails apart. Both bogie exchange and variable gauge adapters are provided.
Japan
The "Gauge Change Train" is a project started in Japan in the 1990s to investigate the feasibility of producing an electric multiple unit (EMU) train capable of operating both the Shinkansen high-speed network at and the original network at . See .
The first-generation train was tested from 1998 to 2006, including on the US High-speed Test Track in 2002. The second-generation train, intended to run at a maximum speed of , was test-run in various locations in Japan between 2006 and 2013. A third-generation train has been undergoing reliability trials since 2014 in preparation for potential introduction to service on the planned Kyushu Shinkansen extension to Nagasaki.
Lithuania/Poland
A gauge changing facility of the Polish SUW 2000 system is installed at Mockava north of the Lithuanian-Polish border. VGA passenger trains between Lithuania and Poland were running between October 1999 and May 2005, and VGA goods trains between early 2000s and 2009.
Poland/Ukraine
There are two gauge changing facilities of the Polish SUW 2000 system installed on the Polish-Ukrainian border, one of them in Dorohusk (Poland) on the Warsaw-Kyiv line, another in Mostyska (Ukraine) on the Kraków-Lviv line. On 14 December 2003 VGA passenger trains were introduced between Kraków (Poland) and Lviv (Ukraine) instead of bogie exchange. VGA saves about 3 hours compared to bogie exchange. The trains last ran in 2016.
Spain
Spain is the largest user of variable gauge systems. This is because of the need to connect older mainlines built to Iberian gauge with extensive new high-speed railway lines and connections to France, which use standard gauge. Gauge changers are installed on the two lines to France and at all entrances/exits leading between the high-speed network and older lines. There are also significant lengths of secondary lines but these are not connected to the main network.
In February 2004, RENFE placed orders for:
Forty-five CAF/Alstom 25 kV AC/3 kV DC, variable gauge EMUs for 250 km/h regional services, between October 2006 and May 2009 (€580 million)
Twenty-six 25 kV AC variable gauge trains for long-distance services using two Bombardier power cars and Talgo Series VII trailer cars (€370 million). Gauges involved are and .
Olmedo to Medina del Campo in Valladolid, Spanish test track.
November 2008 – High Speed trainset for Cadiz to Warsaw.
July 2009 – Talgo 250 supplied with Voith Turbo SZH-692 gauge change final drives.
There is also a circular test track in Spain.
Switzerland
Variable gauge bogies are implemented on the Montreux–Gstaad–Zweisimmen–Spiez–Interlaken line. Trains automatically switch from to at Zweisimmen. The bogie has no axles, which allow the bogie half frames holding the wheels on both sides to slide sideways relative to each other. The EV09-Prose gauge changer at Zweisimmen was satisfactorily tested on 19 June 2019. The system, designed to allow operation on both Montreux Oberland Bernois Railway's (MOB) 1000mm gauge line and BLS AG 1435mm gauge infrastructure, was first implemented on 11 December 2022. Moreover, while the gauge is being automatically changed at Zweisimmen, the air spring mounted on the bogie cross member is automatically adjusted by 200 mm to match the body height with the platform height on the MOB or BLS AG portion of the GoldenPass Express.
United Kingdom
John Fowler mentioned in 1886 an attempt by the GWR to develop a "telescopical" axle.
Trams ran between Leeds () and Bradford ( gauge) following a successful trial in 1906 using Bradford tram car number 124. The system was later patented by – GB190601695 (A) of 1906. This system was improved again in patent GB190919655 (A) of 1909 by introducing a locking system acting on the wheel and axle rather than just the wheel rim. This provided a more effective grip where the wheel was free to move along the splined axle.
Comparison with bogie exchange
Time taken
In VGA, the train is pulled through the "adjuster" at about without any need to uncouple the wagons or disconnect (and test) the brake equipment. Alternatively, as the train need not be uncoupled, the locomotive may pull the coupled carriages all together.
See Talgo Gauge Changer.
Locomotives
Steam locomotives are generally not gauge convertible on-the-fly. While diesel locomotives can be bogie exchanged, this is not normally done owing to the complexity of reconnecting cables and hoses. In Australia, some locomotives are transferred between gauges. The transfer might happen every few months, but not for an individual trip.
By 2004, variable gauge electric passenger locomotives were available from Talgo. It is not clear if variable gauge freight locomotives are available.
Electric
L-9202 is an experimental high speed Bo-Bo dual voltage (3 kV DC/25 kV AC) VGA locomotive.
Talgo 250 locomotives were also planned to haul dual-voltage variable-gauge trainsets from Montpellier across the border to Barcelona and Madrid. Two Talgo 250 power cars haul 11 passenger trailer cars.
EMU
Weight
A gauge adjustable bogie complete with wheelsets weighs a total of about one ton/tonne more than a conventional bogie and normally must use disc brakes, which cool more slowly.
History
1915. C. W. Prosser. – Argus
1921. C. R. Prosser. – Argus Friday 8 July 1921
1922. J. Grieve. – Argus 19 July 1922
See also
Alvia
Axle exchange
Bogie exchange
Bradford Corporation Tramways
Dual couplings
Gauge Change Train (Japan)
Interoperability
Ramsey car-transfer apparatus
Strizh
SUW 2000 a form of VGA
References
Further reading
External links
A train axle system with variable gauge wheels, patent EP1112908, assigned to Talgo SA.
Variable gauge bogie for rolling stock, patent EP0873929, assigned to Railway Technical Research Institute.
Railway axle assembly furnished with automatic change of track gauge and adaptable to conventional freight bogies, patent US5787814, assigned to Talgo SA.
Variable-Gauge Wagon Wheelsets – Brief Article, International Railway Journal, July, 1999
European Automatic Track Gauge Changeover Systems, UIC-backed study of different variable gauge systems
Talgo's variable gauge system explained (in French)
Talgo's Variable gauge system in action
Close-up view of MOB's variable bogie in action
UA report
CFR
http://osjd.plaske.ua/en/doklad/wishnevski.doc
Ukraine
Automatic gauge changing systems in Spain
Track gauges
Rolling stock
Vehicle technology
International rail transport | Variable gauge | Engineering | 3,337 |
50,807,880 | https://en.wikipedia.org/wiki/CVSO%2030 | CVSO 30 (PTFO 8-8695) is a suspected binary T Tauri star, located in the constellation Orion about 1,200 light-years from Earth, with one candidate planet called CVSO 30 c. The candidate planet is a gas giant. The star is named after the CIDA Variability Survey of Orion (CVSO) and the Palomar Transient Factory (PTF) and is within the 25 Ori group.
Planetary system
CVSO 30 may have one planet called CVSO 30 c. CVSO 30 c is calculated to have a period of 27,000 years and a semimajor axis of 660 AU.
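Those two figures are roughly consistent with Kepler's third law, as the sketch below shows; the host-star mass of about 0.4 solar masses is an assumed, typical value for a low-mass T Tauri star and is not a figure from this article.

```python
import math

# Minimal sketch: Kepler's third law, P[yr]^2 = a[AU]^3 / M[solar masses].
a_au = 660.0        # semimajor axis quoted for CVSO 30 c
m_star_msun = 0.4   # assumed host-star mass (illustrative value only)

period_yr = math.sqrt(a_au ** 3 / m_star_msun)
print(f"Implied orbital period: {period_yr:,.0f} yr")   # roughly 27,000 yr
```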
Direct imaging of the suspected CVSO 30 c, with a calculated mass equal to 4.7 times that of Jupiter, has been achieved through photometric and spectroscopic high-contrast observations carried out with the Very Large Telescope in Chile, the Keck Observatory in Hawaii and the Calar Alto Observatory in Spain. However, the colors of the object suggest that it may actually be a background star, such as a K-type giant or an M-type subdwarf.
By 2020, the phase of the "dips" attributed to the suspected planet CVSO 30 b had drifted nearly 180 degrees from the expected value, ruling out the existence of the planet. Instead, a rare type of stellar starspot activity with very large starspots is now suspected. CVSO 30 is also suspected to be a stellar binary, with the previously reported planetary orbital period equal to the rotation period of the companion star. Further investigation of the "dips" by 2022 led to the hypothesis of a large gas cloud close to synchronous orbit, since dust at that distance would likely sublimate.
References
Further reading
T Tauri stars
Orion (constellation)
Hypothetical planetary systems
J05250755+0134243 | CVSO 30 | Astronomy | 368 |
37,342,825 | https://en.wikipedia.org/wiki/Raventoxin | Raventoxins are neurotoxins from the venom of the spider Macrothele raveni.
Sources
Raventoxins are toxins from the venom of the spider Macrothele raveni. This is a hairy spider, a member of the genus Macrothele, that can be found in the hilly areas of Ningming County, Guangxi Province in China.
Chemistry
Six different types of raventoxin have been described, named raventoxin-I to VI. Raventoxin-I consists of 43 amino acid residues. It has a molecular mass of 4840.11 Da. The toxin is partially homologous to δ-AcTx-Hv1a and δ-AcTx-Ar1, two toxins derived from Hadronyche versuta and Atrax robustus, respectively.
Raventoxin-II has a molecular weight of 3021.56 Da.
Raventoxin-III is a basic polypeptide, consisting of 29 amino acid residues. It has a molecular mass of 3286.58 Da.
Raventoxin-V has a molecular weight of 3133.48 Da.
Raventoxin-VI consists of 51 amino acid residues, and has a molecular weight of 5371.6 Da.
Target and mode of action
All described raventoxins have been shown to exert a neurotoxic effect. At low concentration, raventoxin-I enhances muscle contraction, suggesting a direct action of the toxin on muscle, whereas at higher concentration it blocks neuromuscular transmission. No other toxins have been shown to act similarly.
The primary structure of raventoxin-III is identical to that of Magi 5 (β-hexatoxin-Mg1a), a toxin found in the venom of the spider Macrothele gigas. Magi 5 binds at site 4 of the alpha subunit of the mammalian voltage-gated sodium channel Nav1.2 (SCN2A). Binding of Magi 5 to the sodium channels shifts both activation and inactivation to more hyperpolarized voltages and slows the recovery from inactivation. Combined, these effects may lead to increased inactivation of the sodium channels at rest, leading to inhibition and blockage of neuromuscular transmission. The blockage is most probably reversible. Magi-5 competes with the scorpion beta-toxin Css IV for binding to the sodium channel at neurotoxin receptor site 4. One other known property of Magi-5 is its binding to site 3 of the insect sodium channel, observed in lepidopteran larvae, which raises the possibility of homology between the molecular structures of the binding site 3 (in insects) and 4 (in mammals).
Raventoxin-VI blocks neuromuscular transmission in a rat phrenic nerve preparation. Intracerebroventricular injection of the toxin leads to paralysis in rat.
Toxicity
Raventoxin-I and raventoxin-III have both been shown to cause excitation, spastic paralysis, gasping, a fast heartbeat and exophthalmos in mice. Only raventoxin-I also increases salivation. Both toxins can cause death in mice when administered in sufficient doses. The LD50 of raventoxin-I is 0.772 mg/kg when injected intra-abdominally in mice.
Raventoxin-I and raventoxin-III are not toxic to cockroaches, but administration of Magi-5 (raventoxin-III) in lepidopteran larvae results in temporary paralysis of the insects.
Raventoxin-II and raventoxin-V also have insecticidal effects.
Therapeutic use
The effect of administering the whole venom of the Macrothele raveni spider has been studied in several diseases, especially in carcinomata. In HeLa cells, it caused necrosis, direct lysis and apoptosis. The antitumor effect of the venom is also seen in a human breast carcinoma cell line, MCF-7, where cytotoxic changes, apoptosis and necrosis were caused by the venom. After administration of the venom to tumor-bearing mice, the tumor size significantly decreased compared to the tumor size in control mice.
References
Ion channel toxins
Neurotoxins
Protein toxins
Spider toxins | Raventoxin | Chemistry | 894 |
39,032,300 | https://en.wikipedia.org/wiki/Flora%20Japonica%20%281834%20book%29 | Flora Japonica is a flora written in Leyden by Bavarian botanist and traveler Philipp Franz von Siebold in collaboration with fellow Bavarian Joseph Gerhard Zuccarini. The work, written in Latin, carries the full title of Flora Japonica; sive, Plantae Quas in Imperio Japonico Collegit, Descripsit, ex Parte in Ipsis Locis Pingendas Curavit.
Begun in 1835 by Siebold and Zuccarini, work continued until 1842. After Zuccarini's death in 1848, Siebold discontinued his involvement with the work, and the materials accrued to the Rijksherbarium in Leyden. After Siebold's death in 1866, F. A. W. Miquel, director of the Rijksherbarium, completed the work.
Siebold was already widely known in Japan for various endeavors, and this work cemented his scientific fame in Europe.
Background
In service of the Dutch East India Company, Siebold was stationed on Dejima, the artificial island next to Nagasaki, which served as then-isolated Japan's gateway to the West. He arrived in 1823, serving as both a physician and botanist, remaining in Japan until 1830. During his stay in the Orient, he started a small botanical garden behind his home and amassed over 1,000 native plants. In a specially built glasshouse he cultivated the Japanese plants to endure the Dutch climate and he sent many herbarium specimens to Europe. Following his return to Europe, he settled in Leyden and began work with Zuccarini on the Flora.
Work
The work was published as 2 volumes in 30 parts, with the first part of volume I published in December 1835. Volume I was completed in June 1841. Parts 1-5 of the second volume were issued between 1842 and 1844, after which work by Siebold stopped. The final 5 parts of the second volume were issued by Miquel in 1870. Copies of the original work are rare, and one fetched US$27,500 at auction in 2013.
References
External links
Text of at Google Books with illustration plates in black and white
Text of Volume I at Gallica with illustration plates in black and white
Text of Volume II at Gallica with illustration plates in black and white
Illustration plates of Flora Japonica in color at Wikimedia Commons
Illustration plates of Flora Japonica in color at www.BioLib.de
Florae (publication)
19th-century books in Latin | Flora Japonica (1834 book) | Biology | 505 |
265,407 | https://en.wikipedia.org/wiki/Isinglass | Isinglass ( ) is a form of collagen obtained from the dried swim bladders of fish. The English word origin is from the obsolete Dutch huizenblaas – huizen is a kind of sturgeon, and blaas is a bladder, or German Hausenblase, meaning essentially the same. The bladders, once removed from the fish, processed, and dried, are formed into various shapes for use.
It is used mainly for the clarification or fining of some beer and wine. It can also be cooked into a paste for specialised gluing purposes.
Although originally made exclusively from sturgeon, especially beluga, in 1795 an invention by William Murdoch facilitated a cheap substitute using cod. This was extensively used in Britain in place of Russian isinglass, and in the US hake was important. In modern British brewing all commercial isinglass products are blends of material from a limited range of tropical fish.
Foods and drinks
Before the inexpensive production of gelatin and other competing products, isinglass was used in confectionery and desserts such as fruit jelly and blancmange.
Isinglass finings are widely used as a processing aid in the British brewing industry to accelerate the fining, or clarification, of beer. It is used particularly in the production of cask-conditioned beers, although many cask ales are available which are not fined using isinglass. The finings flocculate the live yeast in the beer into a jelly-like mass, which settles to the bottom of the cask. Left undisturbed, beer will clear naturally; the use of isinglass finings accelerates the process. Isinglass is sometimes used with an auxiliary fining, which further accelerates the process of sedimentation.
Non-cask beers that are destined for kegs, cans, or bottles are often pasteurised and filtered. The yeast in these beers tends to settle to the bottom of the storage tank naturally, so the sediment from these beers can often be filtered without using isinglass. However, some breweries still use isinglass finings for non-cask beers, especially when attempting to repair bad batches.
Many vegetarians consider beers that are processed with these finings (such as most cask-conditioned ales in the UK) to be unsuitable for vegetarian diets (although acceptable for pescetarians). According to global data in 2018, along with low-calorie beer and gluten-free beer, beers that are acceptable for strict vegetarians are expected to grow in demand in the coming years. The demand increase is attributed to millennial consumers, and some companies have introduced vegetarian friendly options or done away with isinglass use. A beer-fining agent that is suitable for vegetarians is Irish moss, a type of red algae containing the polymer chemical carrageenan. However, carrageenan-based products (used in both the boiling process and after fermentation) primarily reduce hazes caused by proteins, but isinglass is used at the end of the brewing process, after fermentation, to remove yeast. Since the two fining agents act differently (on different haze-forming particles), they are not interchangeable, and some beers use both.
Isinglass finings are also used in the production of kosher wines, although for reasons of kashrut, they are not derived from the beluga sturgeon, because this fish is not kosher. Whether the use of a nonkosher isinglass renders a beverage nonkosher is a matter of debate in Jewish law. Rabbi Yehezkel Landau, in Noda B'Yehuda, first edition, Yore Deah 26, for example, permits such beverages. This is the position followed by many kashrut-observant Jews today.
The similar-sounding names have resulted in confusion between isinglass and waterglass, especially as both have been used to preserve eggs. A solution of isinglass was applied to eggs and allowed to dry, sealing their pores. Waterglass is sodium silicate. Eggs were submerged in solutions of waterglass, and a gel of silicic acid formed, also sealing the pores of the eggshell.
Conservation
Isinglass is also used as an adhesive to repair parchment, stucco and damage to paintings on canvas. Pieces of the best Russian isinglass are soaked overnight to soften and swell the dried material. Next, it is cooked slowly in a double boiler at 45 °C while being stirred. A small amount of gum tragacanth dissolved in water is added to the strained isinglass solution to act as an emulsifier.
When repairing paint that is flaking from parchment, isinglass can be applied directly to an area which has been soaked with a small amount of ethanol. It is typically applied as a very tiny drop that is then guided, with the help of a binocular microscope, under the edges of flaking paint.
It can also be used to coat tissue or goldbeater's skin. On paintings this can be used as a temporary backing to either canvas patches or filler until dried. Here, isinglass is similar to parchment size and other forms of gelatin, but it is unique in that as a dried film the adhesive can be reactivated with moisture. For this use, the isinglass is cooked with a few drops of glycerin or honey.
This adhesive is advantageous in situations where minimal use of water is desired for the parchment as the isinglass can be reactivated with an ethanol-water mixture. It also has a greater adhesive strength than many other adhesives used for parchment repair.
In popular culture
In the musical Oklahoma!, the song "The Surrey With the Fringe on Top" describes the surrey as having "isinglass curtains you can roll right down" although here the term refers to mica, commonly used for windows in vehicle side screens (but totally inflexible).
Mentioned in "The Book of Life" by Deborah Harkness, "her scales fell like isinglass", in reference to the scales of a fire drake named Corra.
References
Further reading
Woods, Chris (1995). "Conservation Treatments for Parchment Documents", Journal of the Society of Archivists, Vol. 16, Issue 2, pp. 221–239.
Chemozyme
Brewing ingredients
Winemaking
Fish products
Food ingredients
Conservation and restoration materials | Isinglass | Physics,Technology | 1,334 |
5,915,049 | https://en.wikipedia.org/wiki/Lindley%27s%20paradox | Lindley's paradox is a counterintuitive situation in statistics in which the Bayesian and frequentist approaches to a hypothesis testing problem give different results for certain choices of the prior distribution. The problem of the disagreement between the two approaches was discussed in Harold Jeffreys' 1939 textbook; it became known as Lindley's paradox after Dennis Lindley called the disagreement a paradox in a 1957 paper.
Although referred to as a paradox, the differing results from the Bayesian and frequentist approaches can be explained as using them to answer fundamentally different questions, rather than actual disagreement between the two methods.
Nevertheless, for a large class of priors the differences between the frequentist and Bayesian approach are caused by keeping the significance level fixed: as even Lindley recognized, "the theory does not justify the practice of keeping the significance level fixed" and even "some computations by Prof. Pearson in the discussion to that paper emphasized how the significance level would have to change with the sample size, if the losses and prior probabilities were kept fixed". In fact, if the critical value increases with the sample size suitably fast, then the disagreement between the frequentist and Bayesian approaches becomes negligible as the sample size increases.
The paradox continues to be a source of active discussion.
Description of the paradox
The result x of some experiment has two possible explanations, hypotheses H0 and H1, and some prior distribution representing uncertainty as to which hypothesis is more accurate before taking into account x.
Lindley's paradox occurs when
The result x is "significant" by a frequentist test of H0, indicating sufficient evidence to reject H0, say, at the 5% level, and
The posterior probability of H0 given x is high, indicating strong evidence that H0 is in better agreement with x than H1.
These results can occur at the same time when H0 is very specific, H1 more diffuse, and the prior distribution does not strongly favor one or the other, as seen below.
Numerical example
The following numerical example illustrates Lindley's paradox. In a certain city 49,581 boys and 48,870 girls have been born over a certain time period. The observed proportion of male births is thus 49,581/98,451 ≈ 0.5036. We assume the fraction of male births is a binomial variable with parameter θ. We are interested in testing whether θ is 0.5 or some other value. That is, our null hypothesis is H0: θ = 0.5, and the alternative is H1: θ ≠ 0.5.
Frequentist approach
The frequentist approach to testing H0 is to compute a p-value, the probability of observing a fraction of boys at least as large as x ≈ 0.5036, assuming H0 is true. Because the number of births is very large, we can use a normal approximation for the fraction of male births, with mean μ = 0.5 and standard deviation σ = √(0.5 × 0.5 / 98,451) ≈ 0.0016, to compute P(x ≥ 0.5036 | H0) ≈ 0.0117.
We would have been equally surprised if we had seen 49,581 female births, i.e. x ≈ 0.4964, so a frequentist would usually perform a two-sided test, for which the p-value would be about 0.0235. In both cases, the p-value is lower than the significance level α = 5%, so the frequentist approach rejects H0, as it disagrees with the observed data.
Bayesian approach
Assuming no reason to favor one hypothesis over the other, the Bayesian approach would be to assign prior probabilities π(H0) = π(H1) = 0.5 and a uniform distribution to θ under H1, and then to compute the posterior probability of H0 using Bayes' theorem:
P(H0 | x) = P(x | H0) π(H0) / [P(x | H0) π(H0) + P(x | H1) π(H1)]
After observing k = 49,581 boys out of n = 98,451 births, we can compute the likelihood of the data under each hypothesis using the probability mass function for a binomial variable:
P(x | H0) = C(n, k) (0.5)^k (0.5)^(n−k) ≈ 1.95 × 10⁻⁴
P(x | H1) = ∫₀¹ C(n, k) θ^k (1 − θ)^(n−k) dθ = C(n, k) B(k + 1, n − k + 1) = 1/(n + 1) ≈ 1.02 × 10⁻⁵
where B is the Beta function.
From these values, we find the posterior probability P(H0 | x) ≈ 0.95, which strongly favors H0 over H1.
The two approaches—the Bayesian and the frequentist—appear to be in conflict, and this is the "paradox".
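The two calculations above can be reproduced with a short script. The following Python sketch is illustrative only, is not part of the original article, and assumes SciPy is available; variable names are chosen here for clarity.

```python
# Illustrative sketch of the frequentist and Bayesian analyses above (assumes SciPy).
from scipy.stats import binom, norm

boys, girls = 49_581, 48_870
n = boys + girls              # 98,451 total births
theta0 = 0.5                  # null hypothesis H0: theta = 0.5

# Frequentist side: normal approximation to the binomial under H0.
mu = n * theta0
sigma = (n * theta0 * (1 - theta0)) ** 0.5
p_one_sided = norm.sf(boys, loc=mu, scale=sigma)     # P(at least 49,581 boys | H0)
print(f"one-sided p-value ~ {p_one_sided:.4f}")      # roughly 0.0117
print(f"two-sided p-value ~ {2 * p_one_sided:.4f}")  # roughly 0.0235

# Bayesian side: P(H0) = P(H1) = 1/2, theta uniform on [0, 1] under H1.
# Integrating the binomial likelihood over theta gives 1 / (n + 1) under H1.
like_h0 = binom.pmf(boys, n, theta0)
like_h1 = 1.0 / (n + 1)
posterior_h0 = like_h0 / (like_h0 + like_h1)
print(f"P(H0 | data)      ~ {posterior_h0:.2f}")     # roughly 0.95
```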
Reconciling the Bayesian and frequentist approaches
Almost sure hypothesis testing
Naaman proposed an adaptation of the significance level to the sample size in order to control false positives, letting the significance level shrink as the sample size grows rather than being held fixed.
At least in the numerical example, letting the significance level fall with the sample size in this way results in a significance level of 0.00318, so the frequentist would not reject the null hypothesis, which is in agreement with the Bayesian approach.
Uninformative priors
If we use an uninformative prior and test a hypothesis more similar to that in the frequentist approach, the paradox disappears.
For example, if we calculate the posterior distribution p(θ | x), using a uniform prior distribution on θ over [0, 1] (i.e. a Beta(1, 1) prior), we find that the posterior is a Beta(k + 1, n − k + 1) distribution.
If we use this to check the probability that a newborn is more likely to be a boy than a girl, i.e. P(θ > 0.5 | x), we find that nearly all of the posterior probability lies above 0.5.
In other words, it is very likely that the proportion of male births is above 0.5.
Neither analysis gives an estimate of the effect size, directly, but both could be used to determine, for instance, if the fraction of boy births is likely to be above some particular threshold.
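A minimal continuation of the sketch above (again illustrative and assuming SciPy) evaluates this posterior probability directly.

```python
# Posterior for theta under a uniform Beta(1, 1) prior is Beta(boys + 1, girls + 1);
# the survival function at 0.5 gives P(theta > 0.5 | data).
from scipy.stats import beta

boys, girls = 49_581, 48_870
posterior = beta(boys + 1, girls + 1)
print(f"P(theta > 0.5 | data) ~ {posterior.sf(0.5):.3f}")   # close to 1
```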
The lack of an actual paradox
The apparent disagreement between the two approaches is caused by a combination of factors. First, the frequentist approach above tests H0 without reference to H1. The Bayesian approach evaluates H0 as an alternative to H1 and finds the first to be in better agreement with the observations. This is because the latter hypothesis is much more diffuse, as θ can be anywhere in [0, 1], which results in it having a very low posterior probability. To understand why, it is helpful to consider the two hypotheses as generators of the observations:
Under H0, we choose θ = 0.5 and ask how likely it is to see 49,581 boys in 98,451 births.
Under H1, we choose θ randomly from anywhere within 0 to 1 and ask the same question.
Most of the possible values for θ under H1 are very poorly supported by the observations. In essence, the apparent disagreement between the methods is not a disagreement at all, but rather two different statements about how the hypotheses relate to the data:
The frequentist finds that H0 is a poor explanation for the observation.
The Bayesian finds that H0 is a far better explanation for the observation than H1.
The ratio of the sex of newborns is improbably 50/50 male/female, according to the frequentist test. Yet 50/50 is a better approximation than most, but not all, other ratios. The hypothesis θ ≈ 0.504 would have fit the observation much better than almost all other ratios, including θ = 0.5.
For example, this choice of hypotheses and prior probabilities implies the statement "if 0.49 < θ < 0.51, then the prior probability of θ being exactly 0.5 is 0.50/0.51 ≈ 98%". Given such a strong preference for θ = 0.5, it is easy to see why the Bayesian approach favors H0 in the face of x ≈ 0.5036, even though the observed value of x lies more than 2σ away from 0.5. The deviation of over 2σ from H0 is considered significant in the frequentist approach, but its significance is overruled by the prior in the Bayesian approach.
Looking at it another way, we can see that the prior distribution is essentially flat with a delta function at θ = 0.5. Clearly, this is dubious. In fact, picturing real numbers as being continuous, it would be more logical to assume that it would be impossible for any given number to be exactly the parameter value, i.e., we should assume P(θ = 0.5) = 0.
A more realistic distribution for θ in the alternative hypothesis produces a less surprising result for the posterior of H0. For example, if we replace H1 with H1: θ = x, i.e., the maximum likelihood estimate for θ, the posterior probability of H0 would be only 0.07 compared to 0.93 for H1 (of course, one cannot actually use the MLE as part of a prior distribution).
See also
Bayes factor
Notes
Further reading
Statistical hypothesis testing
Statistical paradoxes
Bayesian statistics | Lindley's paradox | Mathematics | 1,503 |
19,529,000 | https://en.wikipedia.org/wiki/Algebra%20tile | Algebra tiles are mathematical manipulatives that allow students to better understand ways of algebraic thinking and the concepts of algebra. These tiles have proven to provide concrete models for elementary school, middle school, high school, and college-level introductory algebra students. They have also been used to prepare prison inmates for their General Educational Development (GED) tests. Algebra tiles allow both an algebraic and geometric approach to algebraic concepts. They give students another way to solve algebraic problems other than just abstract manipulation. The National Council of Teachers of Mathematics (NCTM) recommends a decreased emphasis on the memorization of the rules of algebra and the symbol manipulation of algebra in their Curriculum and Evaluation Standards for Mathematics. According to the NCTM 1989 standards "[r]elating models to one another builds a better understanding of each".
Examples
Solving linear equations using addition
The linear equation x − 8 = 6 can be modeled with one positive x tile and eight negative unit tiles on the left side of a piece of paper and six positive unit tiles on the right side. To maintain equality of the sides, each action must be performed on both sides. For example, eight positive unit tiles can be added to both sides. Zero pairs of unit tiles are removed from the left side, leaving one positive x tile. The right side has 14 positive unit tiles, so x = 14.
Solving linear equations using subtraction
The equation x + 7 = 10 can be modeled with one positive x tile and seven positive unit tiles on the left side and 10 positive unit tiles on the right side. Rather than adding the same number of tiles to both sides, the same number of tiles can be subtracted from both sides. For example, seven positive unit tiles can be removed from both sides. This leaves one positive x tile on the left side and three positive unit tiles on the right side, so x = 3.
Multiplying polynomials
When using algebra tiles to multiply a monomial by a monomial, the student must first set up a rectangle where the length of the rectangle is the one monomial and then the width of the rectangle is the other monomial, similar to when one multiplies integers using algebra tiles. Once the sides of the rectangle are represented by the algebra tiles, one would then try to figure out which algebra tiles would fill in the rectangle. For instance, if one had x × x, the only algebra tile that would complete the rectangle would be x², which is the answer.
Multiplication of binomials is similar to multiplication of monomials when using the algebra tiles. Multiplication of binomials can also be thought of as creating a rectangle where the factors are the length and width. As with the monomials, one would set up the sides of the rectangle to be the factors and then fill in the rectangle with the algebra tiles. This method of using algebra tiles to multiply polynomials is known as the area model and it can also be applied to multiplying monomials and binomials with each other. An example of multiplying binomials is (2x + 1) × (x + 2), and the first step the student would take is to set up two positive x tiles and one positive unit tile to represent the length of a rectangle, and then one would take one positive x tile and two positive unit tiles to represent the width. These two lines of tiles would create a space that looks like a rectangle, which can be filled in with certain tiles. In the case of this example the rectangle would be composed of two positive x² tiles, five positive x tiles, and two positive unit tiles. So the solution is 2x² + 5x + 2.
Factoring
In order to factor using algebra tiles, one has to start out with a set of tiles that the student combines into a rectangle; this may require the use of adding zero pairs in order to make the rectangular shape. An example would be x² + 3x + 2, where one is given one positive x² tile, three positive x tiles, and two positive unit tiles. The student forms the rectangle by having the x² tile in the upper right corner; then one has two x tiles on the right side of the x² tile, one x tile underneath the x² tile, and two unit tiles are in the bottom right corner. By placing the algebra tiles to the sides of this rectangle we can determine that we need one positive x tile and one positive unit tile for the length and then one positive x tile and two positive unit tiles for the width. This means that the two factors are x + 1 and x + 2. In a sense this is the reverse of the procedure for multiplying polynomials.
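The worked tile results above can be checked symbolically. The following sketch is not part of the original text and assumes the SymPy library is available.

```python
# Checking the tile examples with SymPy.
from sympy import symbols, expand, factor

x = symbols('x')

# Area model for multiplication: (2x + 1)(x + 2) = 2x^2 + 5x + 2.
print(expand((2*x + 1) * (x + 2)))   # 2*x**2 + 5*x + 2

# Factoring with tiles: x^2 + 3x + 2 = (x + 1)(x + 2).
print(factor(x**2 + 3*x + 2))        # (x + 1)*(x + 2)
```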
References
Sources
Kitt, Nancy A. and Annette Ricks Leitze. "Using Homemade Algebra Tiles to Develop Algebra and Prealgebra Concepts." MATHEMATICS TEACHER 2000. 462-520.
Stein, Mary Kay et al., Implementing Standards-Based Mathematics Instruction. New York: Teachers College Press, 2000.
Larson, Ronald E., Algebra 1. Illinois: McDougal Littell,1998.
External links
The National Library of Virtual Manipulatives
Mathematical manipulatives
Algebra education | Algebra tile | Mathematics | 1,033 |
968,834 | https://en.wikipedia.org/wiki/VO2%20max |
V̇O2 max (also maximal oxygen consumption, maximal oxygen uptake or maximal aerobic capacity) is the maximum rate of oxygen consumption attainable during physical exertion. The name is derived from three abbreviations: "V̇" for volume (the dot over the V indicates "per unit of time" in Newton's notation), "O2" for oxygen, and "max" for maximum and usually normalized per kilogram of body mass. A similar measure is V̇O2 peak (peak oxygen consumption), which is the measurable value from a session of physical exercise, be it incremental or otherwise. It could match or underestimate the actual V̇O2 max. Confusion between the values in older and popular fitness literature is common. The capacity of the lung to exchange oxygen and carbon dioxide is constrained by the rate of blood oxygen transport to active tissue.
The measurement of V̇O2 max in the laboratory provides a quantitative value of endurance fitness for comparison of individual training effects and between people in endurance training. Maximal oxygen consumption reflects cardiorespiratory fitness and endurance capacity in exercise performance. Elite athletes, such as competitive distance runners, racing cyclists or Olympic cross-country skiers, can achieve V̇O2 max values exceeding 90 mL/(kg·min), while some endurance animals, such as Alaskan huskies, have V̇O2 max values exceeding 200 mL/(kg·min).
In physical training, especially in its academic literature, V̇O2 max is often used as a reference level to quantify exertion levels, such as 65% V̇O2 max as a threshold for sustainable exercise, which is generally regarded as more rigorous than heart rate, but is more elaborate to measure.
Normalization per body mass
V̇O2 max is expressed either as an absolute rate in (for example) litres of oxygen per minute (L/min) or as a relative rate in (for example) millilitres of oxygen per kilogram of the body mass per minute (e.g., mL/(kg·min)). The latter expression is often used to compare the performance of endurance sports athletes. However, V̇O2 max generally does not vary linearly with body mass, either among individuals within a species or among species, so comparisons of the performance capacities of individuals or species that differ in body size must be done with appropriate statistical procedures, such as analysis of covariance.
Measurement and calculation
Measurement
Accurately measuring V̇O2 max involves a physical effort sufficient in duration and intensity to fully tax the aerobic energy system. In general clinical and athletic testing, this usually involves a graded exercise test in which exercise intensity is progressively increased while measuring:
ventilation and
oxygen and carbon dioxide concentration of the inhaled and exhaled air.
V̇O2 max is measured during a cardiopulmonary exercise test (CPX test). The test is done on a treadmill or cycle ergometer. In untrained subjects, V̇O2 max is 10% to 20% lower when using a cycle ergometer compared with a treadmill. However, trained cyclists' results on the cycle ergometer are equal to or even higher than those obtained on the treadmill.
The classic V̇O2 max, in the sense of Hill and Lupton (1923), is reached when oxygen consumption remains at a steady state ("plateau") despite an increase in workload. The occurrence of a plateau is not guaranteed and may vary by person and sampling interval, leading to modified protocols with varied results.
Calculation: the Fick equation
V̇O2 may also be calculated by the Fick equation:
V̇O2 = Q × (CaO2 − CvO2), when these values are obtained during exertion at a maximal effort. Here Q is the cardiac output of the heart, CaO2 is the arterial oxygen content, and CvO2 is the venous oxygen content. (CaO2 – CvO2) is also known as the arteriovenous oxygen difference.
The Fick equation may be used to measure V̇O2 in critically ill patients, but its usefulness is low even in non-exerted cases. Using a breath-based VO2 to estimate cardiac output, on the other hand, seems to be reliable enough.
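As a rough numerical illustration of the Fick equation, the following sketch uses hypothetical values rather than measurements from the article.

```python
# Hypothetical values for an endurance athlete at maximal effort.
q_max = 25.0      # cardiac output, litres of blood per minute
ca_o2 = 0.20      # arterial oxygen content, litres of O2 per litre of blood
cv_o2 = 0.05      # venous oxygen content, litres of O2 per litre of blood
body_mass_kg = 70.0

vo2_l_per_min = q_max * (ca_o2 - cv_o2)             # Fick equation
vo2_ml_kg_min = vo2_l_per_min * 1000 / body_mass_kg
print(f"VO2 = {vo2_l_per_min:.2f} L/min = {vo2_ml_kg_min:.0f} mL/(kg*min)")
```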
Estimation using submaximal exercise testing
The necessity for a subject to exert maximum effort in order to accurately measure V̇O2 max can be dangerous in those with compromised respiratory or cardiovascular systems; thus, sub-maximal tests for estimating V̇O2 max have been developed.
The heart rate ratio method
An estimate of V̇O2 max is based on maximum and resting heart rates. In the Uth et al. (2004) formulation, it is given by:
V̇O2 max = 15.3 mL/(kg·min) × (HRmax / HRrest)
This equation uses the ratio of maximum heart rate (HRmax) to resting heart rate (HRrest) to predict V̇O2 max. The researchers cautioned that the conversion rule was based on measurements on well-trained men aged 21 to 51 only, and may not be reliable when applied to other sub-groups. They also advised that the formula is most reliable when based on actual measurement of maximum heart rate, rather than an age-related estimate.
The Uth constant factor of 15.3 is given for well-trained men. Later studies have revised the constant factor for different populations. According to Voutilainen et al. 2020, the constant factor should be 14 in around 40-year-old normal weight never-smoking men with no cardiovascular diseases, bronchial asthma, or cancer.
Every 10 years of age reduces the coefficient by one, as well as does the change in body weight from normal weight to obese or the change from never-smoker to current smoker. Consequently, V̇O2 max of 60-year-old obese current smoker men should be estimated by multiplying the HRmax to HRrest ratio by 10.
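A small sketch of this heart-rate-ratio estimate follows; it is illustrative only, and the choice of constant simply mirrors the populations described above.

```python
def vo2max_hr_ratio(hr_max: float, hr_rest: float, factor: float = 15.3) -> float:
    """Estimate VO2 max in mL/(kg*min) from the HRmax/HRrest ratio (Uth et al. 2004).

    The default factor 15.3 comes from the well-trained men of the original study;
    per the adjustments above, a lower factor (around 14, reduced further with age,
    obesity or smoking) may be more appropriate for other groups.
    """
    return factor * hr_max / hr_rest

# Hypothetical example: HRmax 190 bpm, HRrest 50 bpm.
print(round(vo2max_hr_ratio(190, 50), 1))   # about 58 mL/(kg*min)
```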
Cooper test
Kenneth H. Cooper conducted a study for the United States Air Force in the late 1960s. One of the results of this was the Cooper test in which the distance covered running in 12 minutes is measured. Based on the measured distance, an estimate of V̇O2 max [in mL/(kg·min)] can be calculated by inverting the linear regression equation, giving us:
V̇O2 max = (d12 − 504.9) / 44.73
where d12 is the distance (in metres) covered in 12 minutes.
An alternative equation is:
V̇O2 max = (35.97 × d′12) − 11.29
where d′12 is distance (in miles) covered in 12 minutes.
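A corresponding sketch for the Cooper estimate follows (hypothetical input; the two forms of the equation above are used).

```python
def vo2max_cooper_metres(d12_m: float) -> float:
    """Estimate VO2 max in mL/(kg*min) from the distance run in 12 minutes, in metres."""
    return (d12_m - 504.9) / 44.73

def vo2max_cooper_miles(d12_miles: float) -> float:
    """Same estimate from the distance run in 12 minutes, in miles."""
    return 35.97 * d12_miles - 11.29

# Hypothetical example: 2,800 m (about 1.74 miles) covered in 12 minutes.
print(round(vo2max_cooper_metres(2800), 1))
print(round(vo2max_cooper_miles(2800 / 1609.344), 1))
```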
Multi-stage fitness test
There are several other reliable tests and V̇O2 max calculators to estimate V̇O2 max, most notably the multi-stage fitness test (or beep test).
Rockport fitness walking test
Estimation of V̇O2 max from a timed one-mile track walk (as fast as possible) uses the walk time T in decimal minutes (e.g.: 20:35 would be specified as 20.58), sex, age A in years, body weight W in pounds (lbs), and the 60-second heart rate H in beats-per-minute (bpm) at the end of the mile:
V̇O2 max = 132.853 − (0.0769 × W) − (0.3877 × A) + G − (3.2649 × T) − (0.1565 × H)
where the sex constant G is 6.3150 for males and 0 for females.
Correlation coefficient for the generalized formula is 0.88.
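A similar sketch for this walking-test estimate follows (hypothetical input values; the coefficients are those of the equation given above).

```python
def vo2max_rockport(weight_lb: float, age_yr: float, is_male: bool,
                    walk_time_min: float, end_hr_bpm: float) -> float:
    """Estimate VO2 max in mL/(kg*min) from a timed one-mile walk."""
    sex_const = 6.3150 if is_male else 0.0
    return (132.853 - 0.0769 * weight_lb - 0.3877 * age_yr
            + sex_const - 3.2649 * walk_time_min - 0.1565 * end_hr_bpm)

# Hypothetical example: 165 lb, 40-year-old male, 15.0-minute mile, ending heart rate 130 bpm.
print(round(vo2max_rockport(165, 40, True, 15.0, 130), 1))
```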
Reference values
Men have a V̇O2 max that is 26% higher (6.6 mL/(kg·min)) than women for treadmill and 37.9% higher (7.6 mL/(kg·min)) than women for cycle ergometer on average. V̇O2 max is on average 22% higher (4.5 mL/(kg·min)) when measured using a treadmill compared with a cycle ergometer.
Effect of training
Non-athletes
The average untrained healthy male has a V̇O2 max of approximately 35–40 mL/(kg·min). The average untrained healthy female has a V̇O2 max of approximately 27–31 mL/(kg·min). These scores can improve with training and decrease with age, though the degree of trainability also varies widely.
Athletes
In sports where endurance is an important component in performance, such as road cycling, rowing, cross-country skiing, swimming, and long-distance running, world-class athletes typically have high V̇O2 max values. Elite male runners can consume up to 85 mL/(kg·min), and female elite runners can consume about 77 mL/(kg·min).
Norwegian cyclist Oskar Svendsen holds the record for the highest V̇O2 ever tested with 97.5 mL/(kg·min).
Animals
V̇O2 max has been measured in other animal species. During loaded swimming, mice had a V̇O2 max of around 140 mL/(kg·min). Thoroughbred horses had a V̇O2 max of around 193 mL/(kg·min) after 18 weeks of high-intensity training. Alaskan huskies running in the Iditarod Trail Sled Dog Race had V̇O2 max values as high as 240 mL/(kg·min). Estimated V̇O2 max for pronghorn antelopes was as high as 300 mL/(kg·min).
Limiting factors
The factors affecting V̇O2 may be separated into supply and demand. Supply is the transport of oxygen from the lungs to the mitochondria (combining pulmonary function, cardiac output, blood volume, and capillary density of the skeletal muscle) while demand is the rate at which the mitochondria can reduce oxygen in the process of oxidative phosphorylation. Of these, the supply factors may be more limiting. However, it has also been argued that while trained subjects are probably supply limited, untrained subjects can indeed have a demand limitation.
General characteristics that affect V̇O2 max include age, sex, fitness and training, and altitude. V̇O2 max can be a poor predictor of performance in runners due to variations in running economy and fatigue resistance during prolonged exercise. The body works as a system. If one of these factors is sub-par, then the whole system's normal capacity is reduced.
The drug erythropoietin (EPO) can boost V̇O2 max by a significant amount in both humans and other mammals. This makes EPO attractive to athletes in endurance sports, such as professional cycling. EPO has been banned since the 1990s as an illicit performance-enhancing substance, but by 1998 it had become widespread in cycling and led to the Festina affair as well as being mentioned ubiquitously in the USADA 2012 report on the U.S. Postal Service Pro Cycling Team. Greg LeMond has suggested establishing a baseline for riders' V̇O2 max (and other attributes) to detect abnormal performance increases.
Clinical use to assess cardiorespiratory fitness and mortality
V̇O2 max/peak is widely used as an indicator of cardiorespiratory fitness (CRF) in select groups of athletes or, rarely, in people under assessment for disease risk. In 2016, the American Heart Association (AHA) published a scientific statement recommending that CRF quantifiable as V̇O2 max/peak be regularly assessed and used as a clinical vital sign; ergometry (exercise wattage measurement) may be used if V̇O2 is unavailable. This statement was based on evidence that lower fitness levels are associated with a higher risk of cardiovascular disease, all-cause mortality, and mortality rates. In addition to risk assessment, the AHA recommendation cited the value for measuring fitness to validate exercise prescriptions, physical activity counseling, and improve both management and health of people being assessed.
A 2023 meta-analysis of observational cohort studies showed an inverse and independent association between V̇O2 max and all-cause mortality risk. Every one metabolic equivalent increase in estimated cardiorespiratory fitness was associated with an 11% reduction in mortality. The top third of V̇O2 max scores represented a 45% lower mortality in people compared with the lowest third.
As of 2023, V̇O2 max is rarely employed in routine clinical practice to assess cardiorespiratory fitness or mortality due to its considerable demand for resources and costs.
History
British physiologist Archibald Hill introduced the concepts of maximal oxygen uptake and oxygen debt in 1922. Hill and German physician Otto Meyerhof shared the 1922 Nobel Prize in Physiology or Medicine for their independent work related to muscle energy metabolism. Building on this work, scientists began measuring oxygen consumption during exercise. Key contributions were made by Henry Taylor at the University of Minnesota, Scandinavian scientists Per-Olof Åstrand and Bengt Saltin in the 1950s and 60s, the Harvard Fatigue Laboratory, German universities, and the Copenhagen Muscle Research Centre.
See also
Anaerobic exercise
Arteriovenous oxygen difference
Cardiorespiratory fitness
Comparative physiology
Oxygen pulse
Respirometry
Running economy
Training effect
VDOT
vVO2max
References
Exercise biochemistry
Sports terminology
Respiratory physiology | VO2 max | Chemistry,Biology | 2,589 |
62,033,221 | https://en.wikipedia.org/wiki/Center%20for%20Telematics | The Center for Telematics (commonly known as ZFT by its acronym in German) is a German research institute located in the City of Würzburg in northern Bavaria; although its main research topics are robotics and telematics, it is also among the leading institutes in Bavaria designing and building small satellites (cubesats) focusing on formation flying. In 2018, a German-Israeli research team led by the Center for Telematics received a research prize from the European Research Council to build ten micro-satellites for exploring the clouds and improving global climate models. The Center for Telematics has also collaborated closely with the University of Würzburg on the development of the OBC (On-Board Computer), as well as the attitude determination sensor suite and control system of the UWE-3 and UWE-4 CubeSats, the latter launched on 27 December 2018 as a secondary payload on a Soyuz-2.
Research area
ZFT does research on Telematics, the interdisciplinary integration of telecommunications, automation and information technology (commonly known as IT), which deals with techniques to provide services in remote locations. On this basis, applications can be realized in areas as diverse as industrial remote maintenance, remote control of robots, medicine, aerospace, transport, teleoperations of pico-satellites and remote education; telematics applications in the near future will change the way people do their job and even will change the driving experience.
It is important to analyze the processes that enable on-site technicians and expert personnel concentrated in service centers to work together even if they are located on different continents. The Center for Telematics informs about opportunities for the use of telematics technologies, supporting the development of products, services, and applications for industry and academia. The center also analyzes potential solutions for the support of industrial remote maintenance processes, not only in the automotive industry but also in space applications. Due to this, in 2018 ZFT was selected as the winner of the German Telematics Award in the category "Networked Production" for its software Adaptive Management and Security System in the field of advanced automation technology.
Telematics in space applications: Space Factory
The Center for Telematics also works in conjunction with the German Aerospace Center (abbreviated DLR) on small satellite projects such as Space Factory 4.0, which involves developing robotic assembly of highly modular satellites on an in-orbit platform based on Industrie 4.0 processes. TU Darmstadt, TU Munich and the ZFT are involved in Space Factory 4.0. The objectives of Space Factory 4.0 are to study processes for the rapid production of small satellites on an in-orbit platform, and to analyze and explore the necessary support and ground infrastructure, taking into account Industry 4.0 and Space Guidance (ECSS) standards.
Infrastructure on site
For applied research and performance testing, in cooperation with the academia, there is an infrastructure consisting of a 3-meter satellite tracking antenna, the robot hall, the connected field testing ground, a high precision positioning measurement environment, industrial robotic arms, numerous mobile robots, highly accurate powerful motion simulators and a control console for remote control and remote maintenance tasks. So the center is also the ground station of the experimental satellites of the University of Würzburg.
References
Robotics in Germany
European Research Council grantees
University of Würzburg
Telematics
Würzburg
2005 establishments in Germany | Center for Telematics | Technology | 682 |
23,381,686 | https://en.wikipedia.org/wiki/Morisita%27s%20overlap%20index | Morisita's overlap index, named after Masaaki Morisita, is a statistical measure of dispersion of individuals in a population. It is used to compare overlap among samples (Morisita 1959). This formula is based on the assumption that increasing the size of the samples will increase the diversity because it will include different habitats (i.e. different faunas).
Formula:
CD = 2 Σi xi yi / ((Dx + Dy) X Y)
xi is the number of times species i is represented in the total X from one sample.
yi is the number of times species i is represented in the total Y from another sample.
Dx and Dy are the Simpson's index values for the x and y samples respectively.
S is the number of unique species
CD = 0 if the two samples do not overlap in terms of species, and CD = 1 if the species occur in the same proportions in both samples.
Horn's modification of the index is (Horn 1966):
CH = 2 Σi xi yi / ((Σi xi²/X² + Σi yi²/Y²) X Y)
Note: this index is not to be confused with Morisita's index of dispersion.
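A short sketch computing both indices from two species-count vectors follows. It is illustrative only; here Dx and Dy are implemented in the finite-sample form Σ xi(xi − 1) / (X(X − 1)), and the sample data are made up.

```python
def morisita_overlap(x, y):
    """Morisita's overlap index CD for two samples of per-species counts."""
    X, Y = sum(x), sum(y)
    dx = sum(xi * (xi - 1) for xi in x) / (X * (X - 1))   # Simpson's index, sample x
    dy = sum(yi * (yi - 1) for yi in y) / (Y * (Y - 1))   # Simpson's index, sample y
    return 2 * sum(xi * yi for xi, yi in zip(x, y)) / ((dx + dy) * X * Y)

def morisita_horn(x, y):
    """Horn's (1966) modification of the index."""
    X, Y = sum(x), sum(y)
    denom = (sum(xi**2 for xi in x) / X**2 + sum(yi**2 for yi in y) / Y**2) * X * Y
    return 2 * sum(xi * yi for xi, yi in zip(x, y)) / denom

# Two hypothetical samples over the same three species; they barely overlap,
# so both indices come out close to 0.
sample_a = [12, 3, 0]
sample_b = [0, 4, 10]
print(morisita_overlap(sample_a, sample_b), morisita_horn(sample_a, sample_b))
```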
References
Morisita, M. (1959). "Measuring of the dispersion and analysis of distribution patterns". Memoires of the Faculty of Science, Kyushu University, Series E. Biology. 2: 215–235.
Morisita, M. (1962). "Iδ-Index, A Measure of Dispersion of Individuals". Researches on Population Ecology, 4 (1), 1–7.
Horn, H. S. (1966). Measurement of "Overlap" in comparative ecological studies. The American Naturalist 100:419-424.
Linton, L. R.; Davies, Ronald W.; Wrona, F. J. (1981) "Resource Utilization Indices: An Assessment", Journal of Animal Ecology, 50 (1), 283-292
Ricklefs, Robert E.; Lau, Michael (1980) "Bias and Dispersion of Overlap Indices: Results of Some Monte Carlo Simulations", Ecology, 61 (5), 1019-1024
Garratt, Michael W.; Steinhorst, R. Kirk (1976). "Testing for significance of Morisita’s, Horn’s and related measures of overlap". American Midland Naturalist 96 (1), 245-251
External links
Community Metrics
Masaaki MORISITA
Population ecology
Ecological metrics | Morisita's overlap index | Mathematics | 484 |
15,677,650 | https://en.wikipedia.org/wiki/Framsticks | Framsticks is a 3D freeware Artificial Life simulator. Organisms consisting of physical structures ("bodies") and control structures ("brains") evolve over time against a user's predefined fitness landscape (for instance, evolving for speed), or spontaneously coevolve in a complex environment. Evolution of organisms occurs primarily through artificial selection, where an intelligent selector chooses the selection parameters and mutation rates. The organisms' rate of crossing-over can also be chosen, reflecting the sharing of genes by mating in nature. The simulated organisms have genetic scripts inspired by DNA found in living organisms in nature. A user can isolate a particular organism in the gene pool and edit its genotype. Framsticks allows users to design organisms or manually edit the living genetic code of an organism. Users have the ability to seed the environment with energy orbs that the organisms convert to energy and material. How an organism performs during its lifespan determines the future of the virtual gene pool. Gene pools can be exported and shared.
Bodies
The bodies are made up of various building blocks that are assembled according to a genetic script. Building blocks include: a rotator, hinge, muscle, structure, and receptor.
Brains
The brains are basic neural networks that show up as a network of firing neurons. The genetic script serves as the blueprints for the exact assembly and functioning of the neural network.
World
The world or ‘universe’ can be set to a height field (editable as blocks and/or steep planes), ‘water’, flat terrain, or a combination of these, and can be edited by the user as a map in a simple text format. It has adjustable gravitation and water level.
See also
Digital organism
Artificial life
List of other Alife Simulators
External links
Framsticks home page
Worlds and Organisms sample gallery
Comparison of Different Encodings for Simulated 3D Agents
Artificial life
Artificial life models
Agent-based software | Framsticks | Biology | 388 |
46,669,670 | https://en.wikipedia.org/wiki/Carl%20Sagan%20Institute | The Carl Sagan Institute: Pale Blue Dot and Beyond was founded in 2014 at Cornell University in Ithaca, New York to further the search for habitable planets and moons in and outside the Solar System. It is focused on the characterization of exoplanets and the instruments to search for signs of life in the universe. The founder and current director of the institute is astronomer Lisa Kaltenegger.
The institute, inaugurated in 2014 and renamed on 9 May 2015, collaborates with international institutions on fields such as astrophysics, engineering, earth and atmospheric science, geology and biology with the goal of taking an interdisciplinary approach to the search for life elsewhere in the universe and of the origin of life on Earth.
Carl Sagan was a faculty member at Cornell University beginning in 1968. He was the David Duncan Professor of Astronomy and Space Sciences and director of the Laboratory for Planetary Studies there until his death in 1996.
Research
The main goal of the Carl Sagan Institute is to model atmospheric spectral signatures including biosignatures of known and hypothetical planets and moons to explore whether they could be habitable and how they could be detected. Their research focuses on exoplanets and moons orbiting in the habitable zone around their host stars. The atmospheric characterization of such worlds would allow researchers to potentially detect the first habitable exoplanet. A team member has already produced a "color catalog" that could help scientists look for signs of life on exoplanets.
Bioreflectance spectra catalog
Team scientists used 137 different microorganism species, including extremophiles that were isolated from Earth's most extreme environments, and cataloged how each life form uniquely reflects sunlight in the visible and near-infrared to the short-wavelength infrared (0.35–2.5 μm) portions of the electromagnetic spectrum. This database of individual 'reflection fingerprints' (spectrum) might be used by astronomers as potential biosignatures to find large colonies of microscopic life on distant exoplanets. A combination of organisms would produce a mixed spectrum, also cataloged, of light bouncing off the planet. The method will also be applied to spot vegetation. The goal of the catalog is to provide astronomers with a baseline comparison to help scientists interpret the data that will come back from telescopes like the Nancy Grace Roman Space Telescope and the European Extremely Large Telescope.
Ultraviolet radiation on life forms could also induce biofluorescence in visible wavelengths. An exoplanet orbiting an M-type star with these life forms would glow when exposed to solar flares, allowing it to be detected by the new generations of space observatories.
Other catalogs and models
Institute scientists have catalogued the spectral emissions and albedo of Solar System objects, including all eight planets, nine moons, and two dwarf planets. They have also modeled Earth's atmosphere throughout geological history. Exoplanets with similar conditions to early Earth are considered candidates for emerging life forms.
See also
References
External links
Official Website
Space science organizations
Astrobiology
Exoplanetology
Carl Sagan | Carl Sagan Institute | Astronomy,Biology | 615 |
2,670,015 | https://en.wikipedia.org/wiki/Mu1%20Scorpii |
Mu1 Scorpii (μ1 Scorpii, abbreviated Mu1 Sco, μ1 Sco) is a binary star system in the southern zodiac constellation of Scorpius. The combined apparent visual magnitude of the pair is about magnitude 3, making it one of the brighter members of Scorpius. Based upon parallax measurements, the distance of this system from the Sun is roughly 500 light-years (150 parsecs). This system is a member of the Scorpius–Centaurus association, the nearest OB association of co-moving stars to the Sun.
The primary (Mu1 Scorpii Aa) is formally named Xamidimura, from the Khoekhoe xami di mûra 'the (two) eyes of the lion'.
Properties
Mu1 Scorpii is an eclipsing binary of the Beta Lyrae type. Discovered to be a spectroscopic binary by Solon Irving Bailey in 1896, it was only the third such eclipsing pair to be discovered. This is a semidetached binary system where the secondary is close to filling its Roche lobe, or it may even be overflowing. The two stars revolve around each other in a circular orbit with the components separated by 12.9 times the Sun's radius. Due to occultation of each component by the other, the apparent magnitude of the system decreases by 0.3 and 0.4 magnitudes over the course of the binary's orbit, which takes 34 hours 42.6 minutes to complete.
The primary component is a B-type main sequence star with a stellar classification of B1.5 V. It has 8.3 times the mass of the Sun and 3.9 times the Sun's radius. The secondary is a smaller B-type main sequence star with a classification of about B6.5 V, having 3.6 times the Sun's mass and 4.6 times the radius of the Sun. The effective temperature of the outer atmosphere for each star is 24,000 K for the primary and 17,000 K for the secondary. At these temperatures, the two stars glow with a blue-white hue.
Nomenclature
μ1 Scorpii (Latinised to Mu1 Scorpii) is the system's Bayer designation. The designation of the primary as Mu1 Scorpii Aa derives from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
The pair of stars Mu1 and Mu2 Scorpii are known as the xami di mura 'eyes of the lion' by the Khoikhoi people of South Africa.
In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Xamidimura for the component Mu1 Scorpii Aa on 5 September 2017 (along with Pipirima for the partner of Mu1 Scorpii) and it is now so included in the List of IAU-approved Star Names.
In Chinese, the asterism meaning Tail consists of Mu1 Scorpii, Epsilon Scorpii, Zeta1 Scorpii and Zeta2 Scorpii, Eta Scorpii, Theta Scorpii, Iota1 Scorpii and Iota2 Scorpii, Kappa Scorpii, Lambda Scorpii and Upsilon Scorpii. Consequently, the Chinese name for Mu1 Scorpii itself is "the First Star of Tail".
References
Scorpii, Mu1
Binary stars
Beta Lyrae variables
B-type main-sequence stars
Scorpius
082514
Durchmusterung objects
151890
6247
Scorpius–Centaurus association | Mu1 Scorpii | Astronomy | 842 |
77,919,448 | https://en.wikipedia.org/wiki/ATP-grasp | In molecular biology, the ATP-grasp fold is a unique ATP-binding protein structural motif made of two α+β subdomains that "grasp" a molecule of ATP between them. ATP-grasp proteins have ATP-dependent carboxylate-amine/thiol ligase activity.
Structure
Proteins of the ATP-grasp family have an overall structural configuration organised into three domains referred to as the N-terminal domain (or A-domain), the central domain (or B-domain), and the C-terminal domain (or C-domain).
Function
ATP-grasp enzymes catalyse the ATP-dependent ligation of a carboxylate-containing molecule to an amino or thiol group-containing molecule. The reactions typically involve formation of acylphosphate intermediates. These enzymes are involved in various metabolic pathways including purine biosynthesis, fatty acid synthesis, and gluconeogenesis.
Examples of proteins containing this domain
D-alanine-D-alanine ligase
glutathione synthetase
biotin carboxylase
carbamoyl phosphate synthetase
ribosomal protein S6 modification enzyme (RimK)
urea amidolyase
tubulin-tyrosine ligase
enzymes involved in purine biosynthesis.
Evolution and distribution
The ATP-grasp fold is evolutionarily conserved across different enzyme families and its presence is ubiquitous across prokaryotes and eukaryotes.
Use in research
Researchers have developed several types of inhibitors for these enzymes, including mechanism-based inhibitors, ATP-competitive inhibitors, and non-competitive inhibitors. Some ATP-grasp enzymes are being studied as potential targets for antibiotics and anti-obesity drugs.
References
External links
InterPro: ATP-grasp fold, subdomain 1 (IPR013815)
InterPro: ATP-grasp fold, subdomain 2 (IPR013816)
Protein domains
Protein folds
Protein superfamilies | ATP-grasp | Biology | 399 |
731,779 | https://en.wikipedia.org/wiki/Movie%20projector | A movie projector (or film projector) is an opto-mechanical device for displaying motion picture film by projecting it onto a screen. Most of the optical and mechanical elements, except for the illumination and sound devices, are present in movie cameras. Modern movie projectors are specially built video projectors (see also digital cinema).
Many projectors are specific to a particular film gauge, and not all movie projectors are film projectors, since a film projector by definition requires the use of film.
Predecessors
The main precursor to the movie projector was the magic lantern. In its most common setup it had a concave mirror behind a light source to help direct as much light as possible through a painted glass picture slide and a lens, out of the lantern onto a screen. Simple mechanics to have the painted images moving were probably implemented since Christiaan Huygens introduced the apparatus around 1659. Initially, candles and oil lamps were used, but other light sources, such as the Argand lamp and limelight, were usually adopted soon after their introduction. Magic lantern presentations may often have had relatively small audiences, but the very popular phantasmagoria and dissolving views shows were usually performed in proper theatres, large tents or especially converted spaces with plenty of seats.
Both Joseph Plateau and Simon Stampfer thought of lantern projection when they independently introduced stroboscopic animation in 1833 with a stroboscopic disc (which became known as the phenakistiscope), but neither of them intended to work on projection themselves.
The oldest known successful screenings of stroboscopic animation were performed by Ludwig Döbler in 1847 in Vienna and taken on a tour to several large European cities for over a year. His Phantaskop had a front with separate lenses for each of the 12 pictures on a disc and two separate lenses were cranked around to direct light through the pictures.
Wordsworth Donisthorpe patented ideas for a cinematographic film camera and a film presentation system in 1876. In reply to the introduction of the phonograph and a magazine's suggestion that it could be combined with projection of stereoscopic photography, Donisthorpe stated that he could do even better and announce that he would present such images in motion. His original Kinesigraph camera gave unsatisfactory results. He had better results with a new camera in 1889 but never seems to have been successful in projecting his movies.
Eadweard Muybridge developed his Zoopraxiscope in 1879 and gave many lectures with the machine from 1880 to 1894. It projected images from rotating glass disks. The images were initially painted onto the glass, as silhouettes. A second series of discs, made in 1892–94, used outline drawings printed onto the discs photographically, then colored by hand.
Ottomar Anschütz developed his first Electrotachyscope in 1886. For each scene, 24 glass plates with chronophotographic images were attached to the edge of a large rotating wheel and thrown on a small opal-glass screen by very short synchronized flashes from a Geissler tube. He demonstrated his photographic motion from March 1887 until at least January 1890 to circa 4 or 5 people at a time, in Berlin, other large German cities, Brussels (at the 1888 Exposition Universelle), Florence, Saint Petersburg, New York, Boston and Philadelphia. Between 1890 and 1894 he concentrated on the exploitation of an automatic coin-operated version that was an inspiration for Edison Company's Kinetoscope. From 28 November 1894 to at least May 1895 he projected his recordings from two intermittently rotating discs, mostly in 300-seat halls, in several German cities. During circa 5 weeks of screenings at the old Berlin Reichstag in February and March 1895, circa 7,000 paying visitors came to see the show.
In 1886, Louis Le Prince applied for a US patent for a 16-lens device that combined a motion picture camera with a projector. In 1888, he used an updated version of his camera to film the motion picture Roundhay Garden Scene and other scenes. The pictures were privately exhibited in Hunslet. After investing much time, effort and means in a slow and troublesome development of a definitive system, Le Prince eventually seemed satisfied with the result and had a demonstration screening scheduled in New York in 1890. However, he went missing after boarding a train in France and was declared dead in 1897. His widow and son managed to draw attention to Le Prince's work and eventually he came to be regarded as the true inventor of film (a claim also made for many others).
After years of development, Edison eventually introduced the coin-operated peep-box Kinetoscope movie viewer in 1893, mostly in dedicated parlors. He believed this was a commercially much more viable system than projection in theatres. Many other film pioneers found chances to study the technology of the kinetoscope and further developed it for their own movie projection systems.
The Eidoloscope, devised by Eugene Augustin Lauste for the Latham family, was demonstrated for members of the press on 21 April 1895 and opened to the paying public on May 20, in a lower Broadway store with films of the Griffo-Barnett prize boxing fight, taken from Madison Square Garden's roof on 4 May. It was the first commercial projection.
Nicholas Power opened a repair shop for Edison projectors, studied them, and developed one of the earliest and most successful projectors without excessive flicker.
Max and Emil Skladanowsky projected motion pictures with their Bioscop, a flicker-free duplex construction, throughout November 1895. They started to tour with their motion pictures, but after catching the second presentation of the Cinématographe Lumière in Paris on 28 December 1895, they seemed to choose not to compete. They still presented their motion pictures in several European cities until March 1897, but eventually the Bioscop had to be retired as a commercial failure.
In Lyon, Louis and Auguste Lumière perfected the Cinématographe, a system that took, printed, and projected film. In late 1895 in Paris, their father Antoine Lumière began exhibitions of projected films before the paying public, beginning the general conversion of the medium to projection. They quickly became Europe's main producers with their actualités like Workers Leaving the Lumière Factory and comic vignettes like The Sprinkler Sprinkled (both 1895). Even Edison joined the trend with the Vitascope, a modified Jenkins' Phantoscope, within less than six months.
In the 1910s, a new consumer commodity was introduced aiming at familial activity, the silent home cinema. Hand-cranked tinplate toy movie projectors, also called vintage projectors, were used taking standard 35 mm 8 perforation silent cinema films.
Digital projectors
In 1999, digital cinema projectors were being tried out in some movie theaters. These early projectors played the movie stored on a computer, and sent to the projector electronically. Due to their relatively low resolution (usually only 2K) compared to later digital cinema systems, the images at the time had visible pixels. By 2006, the advent of much higher 4K resolution digital projection reduced pixel visibility. The systems became more compact over time. By 2009, movie theaters started replacing film projectors with digital projectors. In 2013, it was estimated that 92% of movie theaters in the United States had converted to digital, with 8% still playing film. In 2014, numerous popular filmmakers—including Quentin Tarantino and Christopher Nolan—lobbied large studios to commit to purchase a minimum amount of 35 mm film from Kodak. The decision ensured that Kodak's 35 mm film production would continue for several years.
Although usually more expensive than film projectors, high-resolution digital projectors offer many advantages over traditional film units. For example, digital projectors contain no moving parts except fans, can be operated remotely, are relatively compact and have no film to break, scratch or change reels of. They also allow for much easier, less expensive, and more reliable storage and distribution of content. All-electronic distribution eliminates all physical media shipments. There is also the ability to display live broadcasts in theaters equipped to do so.
Physiology
The illusion of motion in projected films is a stroboscopic effect that has conventionally been attributed to persistence of vision and later often to (misinterpretations of) beta movement and the phi phenomenon known from Gestalt psychology. The exact neurological principles are not yet entirely clear, but the retina, nerves and brain create the impression of apparent movement when presented with a rapid sequence of near-identical still images and interruptions that go unnoticed (or are experienced as flicker). A critical part of understanding this visual perception phenomenon is that the eye is not a camera, i.e.: there is no frame rate for the human eye or brain. Instead, the eye-brain system has a combination of motion detectors, detail detectors and pattern detectors, the outputs of all of which are combined to create the visual experience.
The frequency at which flicker becomes invisible is called the flicker fusion threshold, and is dependent on the level of illumination and the condition of the eyes of the viewer. Generally, the frame rate of 16 frames per second (frame/s) is regarded as the lowest frequency at which continuous motion is perceived by humans. This threshold varies across different species; a higher proportion of rod cells in the retina will create a higher threshold level. Because the eye and brain have no fixed capture rate, this is an elastic limit, so different viewers can be more or less sensitive in perceiving frame rates.
It is possible to view the black space between frames and the passing of the shutter by rapidly blinking one's eyes at a certain rate. If done fast enough, the viewer will be able to randomly trap the darkness between frames or the motion of the shutter. This will not work with (now obsolete) cathode-ray tube displays, due to the persistence of the phosphors, nor with LCD or DLP light projectors, because they refresh the image instantly with no blackout intervals as with traditional film projectors.
Silent films usually were not projected at constant speeds, but could vary throughout the show because projectors were hand-cranked at the discretion of the projectionist, often following some notes provided by the distributor. When the electric motor supplanted hand cranking in both movie cameras and projectors, a more uniform frame rate became possible. Speeds ranged from about 18 frame/s on up, sometimes even faster than modern sound film speed (24 frame/s).
16 frame/s, though sometimes used as a camera shooting speed, was inadvisable for projection, due to the risk of the nitrate-base prints catching fire in the projector. Nitrate film stock began to be replaced by cellulose triacetate in 1948. A nitrate film fire and its devastating effect are featured in Cinema Paradiso (1988), a fictional film which partly revolves around a projectionist and his apprentice.
The birth of sound film created a need for a steady playback rate to prevent dialog and music from changing pitch and distracting the audience. Virtually all film projectors in commercial movie theaters project at a constant speed of 24 frame/s. This speed was chosen for both financial and technical reasons. A higher frame rate produces a better-looking picture but costs more as film stock is consumed faster. When Warner Bros. and Western Electric were trying to find the ideal compromise projection speed for the new sound pictures, Western Electric went to the Warner Theater in Los Angeles and noted the average speed at which films were projected there. They set that as the sound speed at which a satisfactory reproduction and amplification of sound could be conducted.
There are some specialist formats (e.g. Showscan and Maxivision) that project at higher rates—60 frames per second for Showscan and 48 for Maxivision. The Hobbit was shot at 48 frames per second and projected at the higher frame rate at specially equipped theaters.
Each frame of regular 24 fps movies is shown twice or more in a process called double-shuttering to reduce flicker.
Principles of operation
Projection elements
As in a slide projector there are essential optical elements:
Light source
Incandescent lighting and even limelight were the first light sources used in film projection. In the early 1900s up until the late 1960s, carbon arc lamps were the source of light in almost all theaters in the world.
The Xenon arc lamp was introduced in Germany in 1957 and in the US in 1963. After film platters became commonplace in the 1970s, Xenon lamps became the most common light source, as they could stay lit for extended periods of time, whereas a carbon rod used for a carbon arc could last for an hour at the most.
Most lamp houses in a professional theatrical setting produce sufficient heat to burn the film should the film remain stationary for more than a fraction of a second. Because of this, absolute care must be taken in inspecting a film so that it should not break in the gate and be damaged, particularly necessary in the era when flammable cellulose nitrate film stock was in use.
Reflector and condenser lens
A curved reflector redirects light that would otherwise be wasted toward the condensing lens.
A positive curvature lens concentrates the reflected and direct light toward the film gate.
Douser
(Also spelled dowser)
A metal or asbestos blade cuts off light before it can get to the film. The douser is usually part of the lamphouse and may be manually or automatically operated. Some projectors have a second, electrically controlled douser that is used for changeovers (sometimes called a changeover douser or changeover shutter). Some projectors have a third, mechanically controlled douser that automatically closes when the projector slows down (called a fire shutter or fire douser), to protect the film if the projector stops while the first douser is still open. Dousers protect the film when the lamp is on but the film is not moving, preventing the film from melting from prolonged exposure to the direct heat of the lamp. It also prevents the lens from scarring or cracking from excessive heat.
Film gate and frame advance
If a roll of film were simply passed continuously between the light source and the lens of the projector, only a blurred series of images sliding from one edge of the screen to the other would be visible. In order to see a clear, apparently moving picture, the moving film must be stopped and held still briefly while the shutter opens and closes.
The gate is where the film is held still prior to the opening of the shutter. This is the case for both filming and projecting movies. A single image of the series of images comprising the movie is positioned and held flat within the gate.
The gate also provides a slight amount of friction so that the film does not advance or retreat except when driven to advance the film to the next image. The intermittent mechanism advances the film within the gate to the next frame while the shutter is closed.
Registration pins prevent the film from advancing while the shutter is open. In most cases the registration of the frame can be manually adjusted by the projectionist, and more sophisticated projectors can maintain registration automatically.
Shutter
It is the gate and shutter together that give the illusion of one full frame being replaced exactly on top of another full frame. The gate holds the film still while the shutter is open. A rotating petal or gated cylindrical shutter interrupts the emitted light during the time the film is advanced to the next frame. The viewer does not see the transition, which tricks the brain into believing a moving image is on screen. Modern shutters are designed with a flicker rate of two times (48 Hz) or sometimes three times (72 Hz) the frame rate of the film, so as to reduce the perception of screen flicker. (See Frame rate and Flicker fusion threshold.) Higher-rate shutters are less light efficient, requiring more powerful light sources for the same light on screen.
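The relationship between frame rate, shutter-blade count, and perceived flicker frequency is simple multiplication. The following Python sketch is illustrative only; the comment about a "comfortable" threshold is an approximation of the flicker fusion range, not a fixed standard.

```python
def flicker_frequency(frame_rate_fps, blades_per_revolution):
    """Light interruptions per second for a shutter that makes one
    revolution per projected frame, with the given number of blades."""
    return frame_rate_fps * blades_per_revolution

# 24 frame/s film with single-, double-, and triple-bladed shutters
for blades in (1, 2, 3):
    print(f"{blades}-bladed shutter: {flicker_frequency(24, blades)} Hz")

# A single-bladed shutter (24 Hz) flickers visibly for most viewers;
# 48 Hz and 72 Hz sit above the typical flicker fusion threshold.
```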
Imaging lens and aperture plate
A projection objective with multiple optical elements directs the image of the film to a viewing screen. Projector lenses differ in aperture and focal length to suit different needs. Different lenses are used for different aspect ratios.
One way that aspect ratios are set is with the appropriate aperture plate, a piece of metal with a precisely cut rectangular hole of the equivalent aspect ratio in the middle. The aperture plate is placed just behind the gate and masks off any light that would fall outside the area intended to be shown. All films, even those in the standard Academy ratio, have extra image on the frame that is meant to be masked off in projection.
Using an aperture plate to accomplish a wider aspect ratio is inherently wasteful of film, as a portion of the standard frame is unused. One solution that presents itself at certain aspect ratios is the two-perf pulldown, where the film is advanced less than one full frame in order to reduce the unexposed area between frames. This method requires a special intermittent mechanism in all film-handling equipment throughout the production process, from the camera to the projector. This is costly, and prohibitively so for some theaters. The anamorphic format uses special optics to squeeze a high aspect ratio image onto a standard Academy frame thus eliminating the need to change the costly precision moving parts of the intermittent mechanisms. A special anamorphic lens is used on the camera to compress the image, and a corresponding lens on the projector to expand the image back to the intended aspect ratio.
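How much of the frame is sacrificed when a wider ratio is simply matted out of a taller frame can be estimated from the two aspect ratios alone. A rough Python sketch; the ratios below are examples chosen for illustration, not exact specifications of any particular format.

```python
def fraction_of_frame_used(frame_ratio, projected_ratio):
    """Fraction of a full frame's area kept when a wider image is
    matted (same width, reduced height) inside that frame."""
    if projected_ratio <= frame_ratio:
        return 1.0  # the image is not wider than the frame
    return frame_ratio / projected_ratio

# Example: matting a 2.39:1 image inside a 1.375:1 Academy-ratio frame
print(f"{fraction_of_frame_used(1.375, 2.39):.0%} of the frame area is used")
# Roughly 58%; the rest of the frame is wasted, which is the
# inefficiency that anamorphic photography avoids.
```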
Viewing screen
In most cases this is a reflective surface which may be either aluminized (for high contrast in moderate ambient light) or a white surface with small glass beads (for high brilliance under dark conditions). A switchable projection screen can be switched between opaque and clear by a safe voltage under 36V AC and is viewable from both sides. In a commercial theater, the screen also has millions of very small, evenly spaced holes in order to allow the passage of sound from the speakers and subwoofer which often are directly behind it.
Film transport elements
Film supply and takeup
Two-reel system
In the two-reel system the projector has two reels: the feed reel, which holds the part of the film that has not been shown, and the takeup reel, which winds up the film that has been shown. In a two-reel projector the feed reel has a slight drag to maintain tension on the film, while the takeup reel is constantly driven through a mechanism with mechanical slip, so that the film is taken up under constant tension and winds smoothly.
The film being wound on the takeup reel is being wound head in, tails out. This means that the beginning (or head) of the reel is in the center, where it is inaccessible. As each reel is taken off of the projector, it must be re-wound onto another empty reel. In a theater setting there is often a separate machine for rewinding reels. For the 16 mm projectors that were often used in schools and churches, the projector could be re-configured to rewind films.
The size of the reels can vary based on the projector, but generally films are divided and distributed on reels holding about 22 minutes of film at 24 frames/s. Some projectors can accommodate considerably longer reels, which minimizes the number of changeovers (see below) in a showing. Certain countries also divide their film reels differently; Russian films, for example, often come on shorter reels, although most projectionists working with changeovers would combine them into longer reels, both to minimize changeovers and to allow sufficient time for threading and any troubleshooting that might be needed.
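Running time and footage of 35 mm film are related by the fact that there are 16 frames to the foot (so 24 frame/s corresponds to 90 feet per minute). A minimal Python sketch, using a 2,000-foot reel purely as an illustrative figure rather than a quoted standard:

```python
FRAMES_PER_FOOT_35MM = 16
FPS = 24

def runtime_minutes(feet):
    """Running time in minutes of a length of 35 mm film at 24 frame/s."""
    return feet * FRAMES_PER_FOOT_35MM / FPS / 60

def footage_for_minutes(minutes):
    """35 mm footage needed for a given running time at 24 frame/s."""
    return minutes * 60 * FPS / FRAMES_PER_FOOT_35MM

print(runtime_minutes(2000))     # ~22.2 minutes on a 2,000 ft reel
print(footage_for_minutes(120))  # a 2-hour feature needs ~10,800 ft
```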
Films are identified as single-reel short subjects, two-reelers (such as some of the early Laurel & Hardy, The Three Stooges, and other comedies), and features, which can take any number of reels (although most are limited to 1½ to 2 hours in length, enabling the theater to have multiple showings throughout the day and evening, each showing with a feature, commercials, and an intermission to allow the audiences to change). For some time (ca. 1930–1960), a typical showing meant a short subject (a newsreel, short documentary, a two-reeler, etc.), a cartoon, and the feature. Some theaters would run movie-based commercials for local businesses, and the state of New Jersey required theaters to project a diagram showing all of the exits.
Changeover systems
Because a single film reel does not contain enough film to show an entire feature, the film is distributed on multiple reels. To prevent having to interrupt the show when one reel ends and the next is mounted, two projectors are used in what is known as a changeover system. At the appropriate point the projectionist would manually stop the first projector, shutting off its light, and start the second projector, which had been readied and was waiting. Later the switching was partially automated, although the projectionist still needed to rewind and mount the bulky, heavy film reels. (35 mm reels as received by theaters came unrewound; rewinding was the task of the operator who received the reel.) The two-reel system, using two identical projectors, was used almost universally in movie theaters before the advent of the single-reel system, for which projectors were built that could accommodate a much larger reel containing an entire feature. Although one-reel long-play systems tend to be more popular with the newer multiplexes, the two-reel system is still in significant use to this day.
As the reel being shown approaches its end, the projectionist looks for cue marks at the upper-right corner of the picture. Usually these are dots or circles, although they can also be slashes. Some older films occasionally used squares or triangles, and sometimes positioned the cues in the middle of the right edge of the picture.
The first cue appears near the end of the reel, eight seconds (at the standard speed of 24 frames per second) before the end of the program on that reel. This cue signals the projectionist to start the motor of the projector containing the next reel. After another seven seconds of film has run, the changeover cue should appear, which signals the projectionist to actually make the changeover. When this second cue appears, the projectionist has one second to make the changeover. If it does not occur within that second, the film will run out and blank white light will be projected on the screen.
Countdown leaders have a START frame twelve feet before the first frame of action. The projectionist positions the start frame in the gate of the projector. When the first cue is seen, the motor of the incoming projector is started; seven seconds later, the end of the leader and the start of the program material on the new reel should just reach the gate as the changeover cue appears.
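The cue timings above can be expressed either in seconds or in film footage; the two are interchangeable at 24 frame/s and 16 frames per foot of 35 mm film. A small Python sketch of the conversion (the footage figures are derived values, not quoted from a standard):

```python
FRAMES_PER_FOOT = 16   # 35 mm film
FPS = 24

def seconds_to_feet(seconds):
    """Footage of 35 mm film that runs through the gate in `seconds`."""
    return seconds * FPS / FRAMES_PER_FOOT

print(seconds_to_feet(8))   # 12.0 ft -- first (motor) cue before reel end
print(seconds_to_feet(7))   # 10.5 ft -- interval until the changeover cue
print(seconds_to_feet(1))   # 1.5 ft  -- time left to make the changeover
```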
On some projectors, the operator would be alerted to the time for a change by a bell that operated when the feed reel rotation exceeded a certain speed (the feed reel rotates faster as the film is exhausted), or based on the diameter of the remaining film (Premier Changeover Indicator Pat. No. 411992), although many projectors do not have such an auditory system.
During the initial operation of a changeover, the two projectors use an interconnected electrical control connected to the changeover button so that as soon as the button is pressed, the changeover douser on the outgoing projector is closed in sync with the changeover douser on the incoming projector opening. If done properly, a changeover should be virtually unnoticeable to an audience. In older theaters, there may be manually operated, sliding covers in front of the projection booth's windows. A changeover with this system is often clearly visible as a wipe on the screen.
Once the changeover has been made, the projectionist unloads the full takeup reel from projector A, moves the now-empty reel (which used to hold the film just unloaded) from the feed spindle to the takeup spindle, and loads reel 3 of the presentation on projector A. When reel 2 on projector B is finished, the changeover switches the live show from projector B back to projector A, and so on for the rest of the show.
When the projectionist removes a finished reel from the projector it is tails out, and needs to be rewound before the next show. The projectionist usually uses a separate rewind machine and a spare empty reel and rewinds the film so it is head out, ready to project again for the next show.
One advantage of this system (at least for the theatre management) was that if a program was running a few minutes late for any reason, the projectionist would simply omit one (or more) reels of film to recover the time.
In the early years, with no automation, errors were far from unknown: these included starting a movie that had not been rewound and getting reels confused, so they were projected in the wrong order. Correcting either of these, assuming that someone could tell that the reels were confused, required a complete stop of both projectors, often turning on the house lights, and a delay of a minute or so while the projectionist corrected the error and restarted a projector. These highly visible gaffes, which embarrassed the theater operators, were eliminated with the single-reel and digital systems.
Single-reel system
There are two widely used single-reel systems (also known as long-play systems) today: the tower system (vertical feed and takeup) and the platter system (non-rewinding; horizontal feed and takeup).
The tower system largely resembles the two-reel system, except that the tower itself is generally a separate piece of equipment used with a slightly modified standard projector. The feed and takeup reels are held vertically behind the projector, on oversized spools holding about 133 minutes of film at 24 frame/s. This large capacity removes the need for a changeover on an average-length feature; all of the reels are spliced together into one giant one. The tower is designed with four spools, two on each side, each with its own motor. This allows the whole spool to be rewound immediately after a showing; the extra two spools on the other side allow one film to be shown while another is being rewound or even made up directly onto the tower. Each spool requires its own motor in order to set proper tension on the film, since it has to travel (relatively) much further between the projector film transport and the spools. As each spool gains or loses film, the tension must be periodically checked and adjusted so that the film can be transported on and off the spools without either sagging or snapping.
In a platter system the individual 20-minute reels of film are also spliced together as one large reel, but the film is then wound onto a horizontal rotating table called a platter. Three or more platters are stacked together to create a platter system. Most of the platters in a platter system will be occupied by film prints; whichever platter happens to be empty serves as the take-up reel to receive the film that is playing from another platter.
The way the film is fed from the platter to the projector is not unlike an eight-track audio cartridge. Film is unwound from the center of the platter through a mechanism called a payout unit which controls the speed of the platter's rotation so that it matches the speed of the film as it is fed to the projector. The film winds through a series of rollers from the platter stack to the projector, through the projector, through another series of rollers back to the platter stack, and then onto the platter serving as the take-up reel.
This system makes it possible to project a film multiple times without needing to rewind it. As the projectionist threads the projector for each showing, the payout unit is transferred from the empty platter to the full platter and the film then plays back onto the platter it came from. In the case of a double feature, each film plays from a full platter onto an empty platter, swapping positions on the platter stack throughout the day.
The advantage of a platter is that the film need not be rewound after each show, which can save labor. Rewinding risks rubbing the film against itself, which can cause scratching of the film and smearing of the emulsion that carries the pictures. The disadvantages of the platter system are that the film can acquire diagonal scratches on it if proper care is not taken while threading film from platter to projector, and the film has more opportunity to collect dust and dirt as long lengths of film are exposed to the air. A clean projection booth kept at the proper humidity is of great importance, as are cleaning devices that can remove dirt from the film print as it plays.
Automation and the rise of the multiplex
The single reel system can allow for the complete automation of the projection booth operations, given the proper auxiliary equipment. Since films are still transported in multiple reels they must be joined together when placed on the projector reel and taken apart when the film is to be returned to the distributor. It is the complete automation of projection that has enabled the modern multiplex cinema – a single site typically containing from 8 to 24 theaters with only a few projection and sound technicians, rather than a platoon of projectionists. The multiplex also offers a great amount of flexibility to a theater operator, enabling theaters to exhibit the same popular production in more than one auditorium with staggered starting times. It is also possible, with the proper equipment installed, to interlock, i.e. thread a single length of film through multiple projectors. This is very useful when dealing with the mass crowds that an extremely popular film may generate in the first few days of showing, as it allows for a single print to serve more patrons.
Feed and extraction sprockets
Smooth wheels with triangular pins called sprockets engage perforations punched into one or both edges of the film stock. These serve to set the pace of film movement through the projector and any associated sound playback system.
Film loop
As with motion picture cameras, the intermittent motion of the gate requires that there be loops above and below the gate in order to serve as a buffer between the constant speed enforced by the sprockets above and below the gate and the intermittent motion enforced at the gate. Some projectors also have a sensitive trip pin above the gate to guard against the upper loop becoming too big. If the loop hits the pin, it will close the dousers and stop the motor to prevent an excessively large loop from jamming the projector.
Film gate pressure plate
A spring-loaded pressure plate aligns the film in a consistent image plane, both flat and perpendicular to the optical axis. It also provides sufficient drag to prevent film motion while the frame is displayed, while still allowing free motion under control of the intermittent mechanism. The plate also has spring-loaded runners that help hold the film in place yet let it advance during the pulldown.
Intermittent mechanism
The intermittent mechanism can be constructed in different ways. For smaller-gauge projectors (8 mm and 16 mm), a pawl mechanism engages the film's sprocket holes on one side, or on both sides. The pawl advances only when the film is to be moved to the next image; as it retreats for the next cycle, it is drawn back and does not engage the film. This is similar to the claw mechanism in a motion picture camera.
In 35 mm and 70 mm projectors, there usually is a special sprocket immediately underneath the pressure plate, known as the intermittent sprocket. Unlike all the other sprockets in the projector, which run continuously, the intermittent sprocket operates in tandem with the shutter, and only moves while the shutter is blocking the lamp, so that the motion of the film cannot be seen. It also moves in a discrete amount at a time, equal to the number of perforations that make up a frame (4 for 35 mm, 5 for 70 mm). The intermittent movement in these projectors is usually provided by a Geneva drive, also known as the Maltese Cross mechanism.
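Because the intermittent sprocket may only move while the shutter blocks the lamp, the pulldown has a strict time budget. The sketch below assumes, purely for illustration, a two-bladed shutter with a 50% open/closed duty cycle; real shutter geometries vary.

```python
def pulldown_budget_ms(fps=24, blades=2, closed_fraction=0.5):
    """Milliseconds available to advance the film to the next frame,
    i.e. the duration of one closed (dark) segment of the shutter."""
    frame_period_ms = 1000.0 / fps
    dark_segments_per_frame = blades          # one dark interval per blade
    return frame_period_ms * closed_fraction / dark_segments_per_frame

print(f"{pulldown_budget_ms():.1f} ms")   # ~10.4 ms per dark interval
# The Geneva drive must complete its 4-perforation (35 mm) or
# 5-perforation (70 mm) advance within one such dark interval.
```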
IMAX projectors use what is known as the rolling loop method, in which each frame is sucked into the gate by a vacuum and positioned by registration pins in the perforations corresponding to that frame.
Types
Projectors are classified by the size of the film used, i.e. the film format. Typical film sizes:
8 mm
Long used for home movies before the video camera, this format uses double-sprocketed 16 mm film. The film is run through the camera once, exposing one side; it is then removed, the takeup and feed reels are switched, and the film is run through a second time to expose the other side. After processing, the 16 mm film is split lengthwise into two 8 mm strips that are spliced together to make a single projectable film with sprocket holes on one side.
Super 8
Developed by Kodak, this film stock uses very small sprocket holes close to the edge, which allows more of the film area to be used for the image and so increases image quality. The unexposed film is supplied in the 8 mm width, not split during processing as the earlier 8 mm format was. Magnetic stripes could be added after development to carry sound, and film could also be pre-striped for direct sound recording in suitably equipped cameras for later projection.
9.5 mm
Film format introduced by Pathé Frères in 1922 as part of the Pathé Baby amateur film system. It was conceived initially as an inexpensive format to provide copies of commercially made films to home users. The format uses a single, central perforation (sprocket hole) between each pair of frames, as opposed to 8 mm film which has perforations along one edge, and most other film formats which have perforations on each side of the image.
It became very popular in Europe over the next few decades and is still used by a small number of enthusiasts today. Over 300,000 projectors were produced and sold, mainly in France and England, and many commercial features were available in the format. The last projectors for this format were produced in the 1960s, but the gauge is still alive today: 16 mm projectors are converted to 9.5 mm, and it is still possible to buy film stock (from the French Color City company).
16 mm
This was a popular format for audio-visual use in schools and as a high-end home entertainment system before the advent of broadcast television. In broadcast television news, 16 mm film was used before the advent of electronic news-gathering. The most popular home content consisted of comedic shorts (typically less than 20 minutes in length in the original release) and bundles of cartoons previously seen in movie theaters. 16 mm enjoys widespread use today as a format for short films, independent features and music videos, being a relatively economical alternative to 35 mm. 16 mm film was also a popular format for the production of TV shows well into the HDTV era.
35 mm
The most common film size for theatrical productions during the 20th century. In fact, the common 35 mm camera, developed by Leica, was designed to use this film stock and was originally intended to be used for test shots by movie directors and cinematographers.
35 mm film is typically run vertically through the camera and projector. In the mid-1950s the VistaVision system presented widescreen movies in which the film moved horizontally, allowing much more film to be used for the image because it avoided the anamorphic reduction of the image to fit the frame width. Because it required special projectors, it was largely unsuccessful as a presentation method, but it remained attractive as a filming and intermediate format, as a source for production printing, and as an intermediate step in special effects work to avoid film granularity, although the latter use has now been supplanted by digital methods.
70 mm
High-end movie productions were often produced in this film gauge in the 1950s and 1960s and many very large screen theaters are still capable of projecting it in the 21st century. It is often referred to as 65/70, as the camera uses film 65 mm wide, but the projection prints are 70 mm wide. The extra five millimeters of film accommodated the soundtrack, usually a six-track magnetic stripe. The most common theater installation would use dual-gauge 35/70 mm projectors.
70 mm film is also used in both the flat and domed IMAX projection system. In IMAX the film is transported horizontally in the film gate, similar to VistaVision.
Some productions intended for 35 mm anamorphic release were also released using 70 mm film stock. A 70 mm print made from a 35 mm negative is significantly better in appearance than an all-35 mm process and allowed for a release with six-track magnetic audio.
The advent of 35 mm prints with digital soundtracks in the 1990s largely supplanted the widespread release of the more expensive 70 mm prints.
Sound
Regardless of the sound format, any sound recorded on the film itself cannot sit alongside the particular frame it belongs to. There is no space for a reader in the gate of the projector head, and the film is not travelling smoothly at the gate position. Consequently, all sound-on-film formats must be offset from the image: the sound reader is usually located above the projector head (for magnetic readers and most digital optical readers) or below it (for analog optical readers and a few digital optical readers).
See the 35 mm film article for more information on both digital and analog methods.
Optical
Optical sound constitutes the recording and reading of amplitude based on the amount of light that is projected through a soundtrack area on a film using an illuminating light or laser and a photocell or photodiode. As the photocell picks up the light in varying intensities, the electricity produced is intensified by an amplifier, which in turn powers a loudspeaker, where the electrical impulses are turned into air vibrations and thus, sound waves. In 16 mm, this optical soundtrack is a single mono track placed on the right side of the projected image, and the sound head is 26 frames after the gate. In 35 mm, this can be mono or stereo, on the left side of the projected image, with the sound head 21 frames after the gate.
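The stated picture-to-sound offsets can be converted into a physical distance along the film and a time interval, given the frames-per-foot figure for each gauge (40 frames per foot for 16 mm, 16 for 35 mm). A hedged Python sketch; the millimetre results are derived from those figures, not quoted specifications:

```python
MM_PER_FOOT = 304.8

def offset(frames, frames_per_foot, fps=24):
    """Physical distance (mm) and time (s) corresponding to an offset
    of `frames` frames between the picture gate and the sound head."""
    mm = frames * MM_PER_FOOT / frames_per_foot
    seconds = frames / fps
    return mm, seconds

print(offset(26, 40))   # 16 mm optical: ~198 mm along the film, ~1.08 s
print(offset(21, 16))   # 35 mm optical: ~400 mm along the film, ~0.88 s
```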
The first form of optical sound was represented by horizontal bands of clear (white) and solid (black) area. The space between solid points represented amplitude and was picked up by the photo-electric cell on the other side of a steady, thin beam of light being shined through it. This variable density form of sound was eventually phased out because of its incompatibility with color stocks. The alternative and ultimately the successor of variable density has been the variable area track, in which a clear, vertical waveform against black represents the sound, and the width of the waveform is equivalent to the amplitude. Variable area does have slightly less frequency response than variable density, but because of the grain and variable infrared absorption of various film stocks, variable density has a lower signal-to-noise ratio.
Optical stereo is recorded and read through a bilateral variable area track. Dolby MP matrix encoding is used to add extra channels beyond the stereo pair. Left, center, right and surround channels are matrix-encoded into the two optical tracks, and decoded using licensed equipment.
In the 1970s and early 1980s, optical sound Super-8 mm copies were produced mainly for airline in-flight movies. Even though this technology was soon made obsolete by video equipment, the majority of small-gauge films used magnetic sound rather than optical sound for a higher frequency range.
Magnetic
Magnetic sound is no longer used in commercial cinema, but between 1952 and the early 1990s (when optical digital movie sound rendered it obsolete) it provided the highest fidelity sound from film because of its wider frequency range and superior signal-to-noise ratio compared to optical sound. There are two forms of magnetic sound in conjunction with projection: double-head and striped.
The first form of magnetic sound was the double-head system, in which the movie projector was interlocked with a dubber playing a 35 mm reel of a full-coat, or film completely coated with magnetic iron-oxide. This was introduced in 1952 with Cinerama, holding six tracks of stereophonic sound. Stereophonic releases throughout 1953 also used an interlocked full-coat for three-channel stereophonic sound.
In interlock, since the sound is on a separate reel, it does not need to be offset from the image. Today, this system is usually used only for very low-budget or student productions, or for screening rough cuts of films before the creation of a final married print. Sync between the two reels is checked with SMPTE leader, also known as countdown leader. If the two reels are synced, there should be one frame of beep sound exactly on the 2 frame of the countdown – two seconds, or 48 frames, before the picture start.
Striped magnetic film is motion picture film in which stripes of magnetic oxide are placed on the film between the sprocket holes and the edge of the film, and sometimes also between the sprocket holes and the image. Each of these stripes carries one channel of the recorded audio. This technique was first introduced in September 1953 by Hazard E. Reeves for CinemaScope. Four tracks are present on the film: Left, Center, Right and Surround. This 35 mm four-track magnetic sound format was used from 1954 through 1982 for roadshow screenings of big-budget feature films.
70 mm, which had no optical sound, used the five millimeters gained between the 65 mm negative and the final release print to place three magnetic tracks outside of the perforations on each side of the film for a total of six tracks. Until the introduction of digital sound, it was fairly common for 35 mm films to be blown up to 70 mm often just to take advantage of the greater number of soundtracks and the fidelity of the audio.
Although magnetic audio was of excellent quality it also had significant disadvantages. Magnetic sound prints were expensive, 35 mm magnetic prints cost roughly twice as much as optical sound prints, whilst 70 mm prints could cost up to 15 times as much as 35 mm prints. Furthermore, the oxide layer wore out faster than the film itself, and magnetic tracks were prone to damage and accidental erasure. Because of the high cost of installing magnetic sound reproduction equipment only a minority of movie theaters ever installed it and the magnetic soundheads needed considerable maintenance to keep their performance up to standard. As a consequence the use of the Cinemascope 35 mm four-track magnetic sound format decreased significantly during the course of the 1960s and received stiff competition from the Dolby SVA optical encoding format. However, 70 mm film continued to be used for prestigious roadshow screenings until the introduction of digital sound on 35 mm film in the early 1990s removed one of the major justifications for using this expensive format.
On certain stocks of Super 8 and 16 mm an iron-oxide sound recording strip was added for the direct synchronous recording of sound which could then be played by projectors with a magnetic sound head. It has since been discontinued by Kodak on both gauges.
Digital
Modern theatrical systems use optical representations of digitally encoded multi-channel sound. An advantage of digital systems is that the offset between the sound and picture heads can be varied and then set with the digital processors. Digital sound heads are usually above the gate. All digital sound systems currently in use have the ability to instantly and gracefully fall back to the analog optical sound system should the digital data be corrupt or the whole system fail.
Cinema Digital Sound (CDS)
Created by Kodak and ORC (Optical Radiation Corporation), Cinema Digital Sound was the first attempt to bring multi-channel digital sound to first-run theaters. CDS was available on both 35 mm and 70 mm films. Film prints equipped with CDS did not have the conventional analog optical or magnetic soundtracks to serve as a back-up in case the digital sound was unreadable. A further disadvantage was that CDS required extra film prints to be made for the theaters equipped to play it. The three formats that followed, Dolby Digital, DTS and SDDS, can co-exist with each other and with the analog optical soundtrack on a single version of the film print. This means that a film print carrying all three of these formats (and the analog optical format, usually Dolby SR) can be played in whichever format the theater is equipped to handle. CDS did not achieve widespread use and ultimately failed. It premiered with the film Dick Tracy and was used with several other films, such as Days of Thunder and Terminator 2: Judgment Day.
Sony Dynamic Digital Sound (SDDS)
SDDS runs on the outside of 35 mm film, between the perforations and the edges, on both edges of the film. It was the first digital system that could handle up to eight channels of sound. The additional two tracks are for an extra pair of screen channels (Left Center and Right Center) located between the 3 regular screen channels (Left, Center and Right). A pair of CCDs located in a unit above the projector reads the two SDDS tracks. The information is decoded and decompressed before being passed along to the cinema sound processor. By default, SDDS units use an onboard Sony Cinema Sound Processor, and when the system is set up in this manner, the theatre's entire sound system can be equalized in the digital domain. The audio data in an SDDS track is compressed in the 20-bit ATRAC2 compression scheme at a ratio of about 4.5:1. SDDS premiered with the film Last Action Hero. SDDS was the least commercially successful of the three competing digital sound systems for 35 mm film. Sony ceased the sale of SDDS processors in 2001–2002.
Dolby Digital
Dolby Digital data is printed in the spaces between the perforations on the soundtrack side of the film, 26 frames before the picture. Release prints with Dolby Digital always include an analog Dolby Stereo soundtrack with Dolby SR noise reduction, so these prints are known as Dolby SR-D prints. Dolby Digital provides 6 discrete channels. In a variant called SR-D EX, the left and right surround channels can be dematrixed into left, right, and back surround, using a matrix system similar to Dolby Pro Logic. The audio data in a Dolby Digital track is compressed with the 16-bit AC-3 compression scheme at a ratio of about 12:1. The images between each perforation are read by a CCD located either above the projector or in the regular analog sound head below the film gate; a digital delay within the processor allows correct lip-sync to be achieved regardless of the position of the reader relative to the picture gate. The information is then decoded, decompressed and converted to analog; this can happen either in a separate Dolby Digital processor that feeds signals to the cinema sound processor, or in digital decoding built into the cinema processor. One disadvantage of this system is that the digital printing must lie entirely within the space between the sprocket holes; if the track is off a bit towards either the top or the bottom, the soundtrack becomes unplayable and a replacement reel has to be ordered.
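The "digital delay within the processor" mentioned above is straightforward to reason about: if the reader scans the soundtrack N frames before that image reaches the picture gate, the processor must hold the audio back by N frame periods so that sound and picture leave the booth in sync. A simplified Python sketch; the offsets are illustrative, and real processors are configured per installation.

```python
def required_delay_ms(reader_offset_frames, fps=24):
    """Delay the sound processor must apply when the soundtrack is read
    `reader_offset_frames` frames ahead of the picture gate."""
    return reader_offset_frames / fps * 1000.0

print(required_delay_ms(26))   # ~1083 ms if the reader is 26 frames ahead
print(required_delay_ms(5))    # ~208 ms for a hypothetical closer reader
```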
In 2006, Dolby discontinued the sale of their external SR-D processor (the DA20), but included Dolby Digital decoding in their CP500 and later CP650 cinema processors.
A consumer version of Dolby Digital is also used on most DVDs, often at higher data rates than the original film. A bit-for-bit version is used on Blu-ray Discs and HD DVDs called Dolby TrueHD. Dolby Digital officially premiered with the film Batman Returns, but it was earlier tested at some screenings of Star Trek VI: The Undiscovered Country.
Digital Theater Systems (DTS)
DTS stores the sound information on separate CD-ROMs supplied with the film. The discs are played in a special, modified computer that syncs up with the film through the use of DTS time code, decompresses the sound, and passes it through to a standard cinema processor. The time code is placed between the optical soundtrack and the picture and is read by an optical LED reader ahead of the gate. The time code is therefore the only sound element that is not offset from the picture within the film itself, though its reader must still sit ahead of the gate, where the film is in continuous motion. Each disc can hold slightly over 90 minutes of sound, so longer films require a second disc. Three types of DTS sound exist: DTS-ES (Extended Surround), an 8-channel digital system; DTS-6, a 6-track digital system; and a now-obsolete 4-channel system. DTS-ES derives a back surround channel from the left surround and right surround channels using Dolby Pro Logic. The audio data in a DTS track is compressed with the 20-bit APTX-100 compression scheme at a ratio of 4:1.
Of the three digital formats currently in use, DTS is the only one that has been used with 70 mm presentations. DTS was premiered on Jurassic Park. Datasat Digital Entertainment, purchaser of DTS's cinema division in May 2008, now distributes Datasat Digital Sound to professional cinemas worldwide.
A consumer version of DTS is available on some DVDs, and was used to broadcast stereo TV prior to DTV. A bit-for-bit version of the DTS soundtrack is on Blu-ray Discs and HD DVDs called DTS-HD MA (DTS-HD Master Audio).
Leaders
Academy leader is placed at the head of film release prints containing information for the projectionist and featuring numbers which are black on a clear background, counting from 11 to 3 at 16-frame intervals (16 frames in 35 mm film = 1 ft). At −12 feet there is a START frame. The numbers appear as a single frame in opaque black leader.
SMPTE leader is placed at the head of film release prints or video masters containing information for the projectionist or video playback tech. The numbers count down in seconds from 8 to 2 at 24-frame intervals ending at the first frame of the 2 followed by 47 film frames of dark gray or black. Each number is held on the screen for 24 frames while an animated sweep-arm moves clockwise behind the number. As the sweep arm moves across the background field, the color changes from light gray to dark gray. Unlike the other numbers, the 2 only appears for one frame.
Usually there's a one-frame audio pop that plays 48 film frames (two seconds at 24 frames per second) before the first frame of action (FFOA). The pop is used to line up and synchronize audio and picture/video during printing processes or postproduction. The pop is in editorial (level) synchronization with the 2 frame on the SMPTE and EBU leader, and with the 3 frame on the Academy leader. On most theatrical release prints, the pop is removed by the laboratory to avoid any accidental playing of it during a screening.
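Locating the sync pop is a small piece of frame arithmetic: it sits two seconds (48 frames at 24 frame/s) before the first frame of action, in level sync with the "2" of the SMPTE leader. A minimal Python sketch with a hypothetical frame count for the first frame of action:

```python
FPS = 24

def two_pop_frame(first_frame_of_action):
    """Frame index of the one-frame sync pop, 2 seconds before FFOA."""
    return first_frame_of_action - 2 * FPS

ffoa = 480                  # hypothetical FFOA, 20 seconds into the reel
print(two_pop_frame(ffoa))  # 432 -- the pop (and the leader's "2") falls here
```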
EBU leader (European Broadcasting Union) is very similar to the SMPTE leader but with some superficial graphical differences.
Types of lenses and screens
Spherical
Most motion picture lenses are of the spherical variety. Spherical lenses do not distort the image intentionally. Used alone for standard and cropped wide-screen projection, and in conjunction with an anamorphic adapter for anamorphic wide-screen projection, the spherical lens is the most common and versatile projection lens type.
Anamorphic
Anamorphic filming uses only special lenses, and requires no other modifications to the camera, projector and intermediate gear. The intended wide-screen image is compressed optically, using additional cylindrical elements within the lens so that when the compressed image strikes the film, it matches the standard frame size of the camera. At the projector a corresponding lens restores the wide aspect ratio to be seen on the screen. The anamorphic element can be an attachment to existing spherical lenses.
Some anamorphic formats utilized a more squarish aspect ratio (1.18:1, vs. the Academy 1.375:1 ratio) on-film in order to accommodate more magnetic or optical tracks. Various anamorphic implementations have been marketed under several brand names, including CinemaScope, Panavision and Superscope, with Technirama implementing a slightly different anamorphic technique using vertical expansion to the film rather than horizontal compression. Large format anamorphic processes included Ultra Panavision and MGM Camera 65 (which was renamed Ultra Panavision 70 in the early 60s). Anamorphic is sometimes called scope in theater projection parlance, presumably in reference to CinemaScope.
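The projected aspect ratio of an anamorphic print is simply the on-film ratio multiplied by the horizontal expansion factor of the projection lens. A small Python sketch; the 2x factor is the common CinemaScope-style squeeze and is used here only as an example, and the second line is purely arithmetic rather than a claim about any historical format.

```python
def projected_ratio(on_film_ratio, squeeze_factor):
    """Aspect ratio on screen after the anamorphic projection lens
    expands the image horizontally by `squeeze_factor`."""
    return on_film_ratio * squeeze_factor

print(projected_ratio(1.18, 2.0))    # ~2.36:1 from the squarish 1.18:1 frame
print(projected_ratio(1.375, 2.0))   # 2.75:1 from a full Academy-ratio frame
```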
Fish eye with dome
The IMAX dome projection method (called OMNIMAX) uses 70 mm film running sideways through the projector to maximize the image area and extreme wide angle lenses to obtain an almost hemispherical image. The field of view is tilted, as is the projection hemisphere, so one may view a portion of the ground in the foreground. Owing to the great area covered by the picture it is not as bright as seen with flat screen projection, but the immersive qualities are quite convincing. While there are not many theaters capable of displaying this format there are regular productions in the fields of nature, travel, science, and history, and productions may be viewed in most large urban regions. These dome theaters are mostly located in large and prosperous science and technology museums.
Wide and deep flat screen
The IMAX flat screen system uses large format film, a wide and deep screen, and close and quite steep stadium seating. The effect is to fill the visual field to a greater degree than is possible with conventional wide-screen systems. Like the IMAX dome, this is found in major urban areas, but unlike the dome system it is practical to reformat existing movie releases to this method. Also, the geometry of the theater and screen are more amenable to inclusion within a newly constructed but otherwise conventional multiple theater complex than is the dome-style theater.
Multiple cameras and projectors
One widescreen development of the 1950s used non-anamorphic projection with three side-by-side synchronized projectors. Called Cinerama, it projected the images onto an extremely wide, curved screen. Seams were said to be visible between the images, but the almost complete filling of the visual field made up for this. The technology had some commercial success as a limited-location (major cities only) exhibition with This is Cinerama, but the only memorable story-telling film made for it was How the West Was Won, widely seen only in its CinemaScope re-release.
While neither a technical nor a commercial success, the business model survives as implemented by the documentary production, limited release locations, and long-running exhibitions of IMAX dome movies.
Three-dimensional
For techniques used to display pictures with a three-dimensional appearance (3D), see the 3D film article for some movie history and the stereoscopy article for technical information.
See also
Film format
List of film formats
Projector (disambiguation) for a directory of projector types
Projectionist
Movietone sound system
Sound follower
References
External links
Collection of restored cinema projectors and lighting by Regal Group, UK.
Film-Tech
American Wide Screen Museum
The story of the DP70 – The Todd-AO Projector
A Cinerama site
List of 3000 movie projectors and cameras
Film and video technology
Projectors
Display devices | Movie projector | Engineering | 11,513 |
26,481,652 | https://en.wikipedia.org/wiki/Foliicolous | Foliicolous refers to the growth habit of certain lichens, algae, fungi, liverworts, and other bryophytes that prefer to grow on the leaves of vascular plants. Foliicolous simply means 'growing upon leaves'; epiphyllous derives from the Greek epi ('on or over') and phyllon ('leaf'), hence 'over leaf', while hypophyllous means 'under leaf'. The microhabitat on the leaf surface is called a phyllosphere.
See also
Epibiont
Epiphytes
Phyllosphere
Epiphytic fungus
Parasitic plant
Epilith
Microbiota
References
External links
Dr. Robert Lücking's Foliicolous Lichen Homepage
Lichens
Biological interactions
Ecology terminology
Plants by habit | Foliicolous | Biology | 160 |
14,352,658 | https://en.wikipedia.org/wiki/EMIEW | EMIEW is a robot developed by Hitachi. Another version has also been made, called EMIEW 2. EMIEW stands for Excellent Mobility and Interactive Existence as Workmate. Two EMIEWs have been made, called Pal and Chum. Hitachi stated that Pal and Chum have a vocabulary of about 100 words, and Pal exhibited these skills by telling reporters: "I want to be able to walk about in places like Shinjuku and Shibuya in the future without bumping into people and cars". Both EMIEWs have a top speed of 6 km/h (matching Honda's ASIMO) and can avoid obstacles.
Specifications
See also
Humanoid robot
References
External links
Social robots
Bipedal humanoid robots
2005 robots
Robots of Japan
Hitachi products | EMIEW | Technology | 156 |
42,430,964 | https://en.wikipedia.org/wiki/Doris%20Kuhlmann-Wilsdorf | Doris Kuhlmann-Wilsdorf (February 15, 1922 – March 25, 2010) was a German metallurgist.
Biography
Doris Kuhlmann-Wilsdorf was born in Bremen, Germany on February 15, 1922, to Adolph Friedrich and Elsa Kuhlmann. She attended the University of Göttingen from 1942 where she received her doctorate in materials science in 1947. Kuhlmann-Wilsdorf continued her research under Sir Nevill Francis Mott at the University of Bristol. She married Heinz Wilsdorf in 1950, with whom she travelled to University of the Witwatersrand to work as a lecturer in the same year.
In 1956 they moved to the United States. In 1957, the University of Pennsylvania School of Engineering and Applied Science appointed Doris Kuhlmann-Wilsdorf, B.S., M.S., Ph.D., a mechanical metallurgist, to the faculty position of Research Associate Professor of Metallurgical Engineering (the present-day department of Materials Science and Engineering), effective July, 1, 1957. She was the first woman to join the standing faculty of the School of Engineering and Applied Science at the University. In 1960, the School reappointed her and changed her title to Associate Professor of Metallurgy. She was therefore the first woman to earn tenure in the School of Engineering and Applied Science.
Just one year later, in July 1961, the School promoted her to Professor of Metallurgical Engineering. She was therefore the first woman to hold a senior professorship at the School of Engineering and Applied Science.
In 1963, however, Professor Kuhlmann-Wilsdorf left Penn to accept an appointment at the University of Virginia as Professor of Engineering Physics, with positions in the Physics and Materials Science departments. She was named university professor of applied science in 1966; she was the first woman named a full professor at the University of Virginia outside the schools of Medicine and Nursing. In 1994 Kuhlmann-Wilsdorf and her husband funded a professorship in their name, and in 2001 former students created a memorial building on the campus in their name.
Kuhlmann-Wilsdorf retired in 2005 and died after a short illness on March 25, 2010, in Charlottesville, Virginia. Her papers are held at the Albert and Shirley Small Special Collections Library at the University of Virginia.
Research
Kuhlmann-Wilsdorf published over 250 papers and has been a consultant to a number of corporations. Her research was primarily in metallurgy and materials science (with her expertise in tribology), known for her design of electrical metalfiber brushes used as sliding electrical contacts. She was a fellow of the American Physical Society and the American Society of Metals.
Honors and awards
Medal for Excellence in Research of the American Society of Engineering Education (1965 and 1966)
Heyn Medal of the German Society of Materials Science (1988)
Society of Women Engineers Achievement Award (1989)
Ragnar Holm Scientific Achievement Award of the Institute of Electrical and Electronics Engineers (1991)
Christopher J. Henderson Inventor of the Year (2001)
Fellow of TMS-AIME (2006)
References
American materials scientists
German materials scientists
Tribologists
1922 births
2010 deaths
German women physicists
American women physicists
German women scientists
University of Göttingen alumni
University of Virginia faculty
Fellows of the American Physical Society
20th-century American physicists
20th-century American women scientists
American women academics
West German emigrants
Immigrants to the United States
Fellows of the Minerals, Metals & Materials Society
21st-century American women | Doris Kuhlmann-Wilsdorf | Materials_science | 713 |
49,222,617 | https://en.wikipedia.org/wiki/VeRoLog | The European Working Group on Vehicle Routing and Logistics Optimization (also, EWG VeRoLog, or simply VeRoLog) is a working group within EURO, the Association of European Operational Research Societies whose objective is to promote the application of operations research models, methods and tools to the field of vehicle routing and logistics, and to encourage the exchange of information among practitioners, end-users, and researchers, stimulating the work on new and important problems with sound scientific methods.
History
VeRoLog is one of the working groups of EURO, the Association of European Operational Research Societies. The Group was founded in 2011 by Daniele Vigo, Marielle Christiansen, Angel Corberan, Wout Dullaert, Richard Eglese, Geir Hasle, Stefan Irnich, Frederic Semet and Maria Grazia Speranza.
Governance
The group is managed by a Coordinator and an Advisory Board including the founding members. The current coordinator is Daniele Vigo.
Membership
The group is suitable for people who are presently engaged in Vehicle Routing and Logistics, either in theoretical aspects or in business, industry or public administration applications. Currently (2015), the group has about 1,500 members from 67 countries.
Conferences
VeRoLog holds conferences on a regular basis (once a year during Summer) and issues every year an award to the best doctoral dissertation on vehicle routing and logistics optimization.
Publications
In most cases, the annual conference is followed by a peer reviewed special issue of an international journal, presenting a selection of the contributions presented at the meeting. Recent special issues appeared on European Journal of Operational Research, and Computers and Operations Research.
A newsletter is emailed to all members every month.
References
Operations research
Working groups
Organizations established in 2011 | VeRoLog | Mathematics | 348 |
5,075,038 | https://en.wikipedia.org/wiki/Grandisol | Grandisol is a natural organic compound with the molecular formula C10H18O. It is a monoterpene containing a cyclobutane ring, an alcohol group, an alkene group and two chiral centers (one of which is quaternary).
Grandisol is a pheromone primarily important as the sex attractant of the cotton boll weevil (Anthonomus grandis), from which it gets its name. It is also a pheromone for other related insects. The cotton boll weevil is an agricultural pest that can cause significant economic damage if not controlled. Grandisol is the major constituent of the mixture known as grandlure, which is used to protect cotton crops from the boll weevil.
Synthesis
Grandisol was first isolated, identified, and synthesized by J. Tumlinson et al. at Mississippi State University in 1969. The most recent and highest yielding synthetic route to grandisol was reported in January 2010 by a group of chemists at Furman University. Though enantioselective syntheses have been reported, racemic grandisol has proven equally effective at attracting boll weevils as the natural enantiomer, rendering moot the need for enantioselective syntheses for agricultural purposes.
References
Insect pheromones
Primary alcohols
Isopropenyl compounds
Cyclobutanes
Monoterpenes | Grandisol | Chemistry | 295 |
37,350,078 | https://en.wikipedia.org/wiki/Hainantoxin | Hainantoxins (HNTX) are neurotoxins from the venom of the Chinese bird spider Haplopelma hainanum. Hainantoxins specifically inhibit tetrodotoxin-sensitive voltage-gated sodium channels, thereby causing blockage of neuromuscular transmission and paralysis. Currently, 13 different hainantoxins are known (HNTX-I – HNTX-XIII), but only HNTX-I, -II, -III, -IV and -V have been investigated in detail.
Sources
HNTX-I, HNTX-III, HNTX-IV and HNTX-V are made by the Chinese bird spider Haplopelma hainanum (=Ornithoctonus hainana, Selenocosmia hainana).
Chemistry
Structure
Hainantoxins I, III, IV and V show high homology, including the presence of three disulfide bonds that form an inhibitor cysteine knot (ICK) motif.
HNTX-I
The main component of the venom of O. hainana is HNTX-I. It has 33 amino acid residues, with a total molecular weight of 3605-3608 Da. HNTX-I contains a short triple-stranded anti-parallel beta-sheet and four beta-turns. The amino acid residues His28 and Asp26 are needed for the bioactivity of HNTX-I.
HNTX-II
HNTX-II has a molecular weight of 4253 Da and contains 37 amino acid residues. The complete amino acid sequence of HNTX-II is NH2-LFECSV SCEIEK EGNKD CKKKK CKGGW KCKFN MCVKV-COOH.
HNTX-III
The structure of HNTX-III consists of 33-35 amino acid residues, which form a beta-sheet with connections between Asp7 and Cys9, Tyr21 and Ser23, and Lys27 and Val30.
HNTX-IV
HNTX-IV has 35 amino acid residues with a total molecular weight of 3989 Da. The first strand consists of an antiparallel beta-sheet. The complete amino acid sequence of HNTX-IV is NH2-ECLGFG KGCNPS NDQCCK SSNLVC SRKHRW CKYEI-CONH2. Lys 27, His28, Arg29 and Lys 32 are the neuroactive amino acid residues.
HNTX-V
HNTX-V consists of 35 amino acid residues. The whole amino acid residue sequence of HNTX-V is NH2-ECLGFG KGCNPS NDQCCK SANLVC SRKHRW CKYEI-COOH. At the active binding site of HNTX-V, Lys27 and Arg 29 are the most important.
Target
Channel
Hainantoxins selectively inhibit tetrodotoxin-sensitive (TTX-S) voltage-gated sodium channels (VGSCs). Voltage-gated Ca2+ channels (VGCCs), tetrodotoxin-resistant (TTX-R) VGSCs and delayed-rectifier potassium channels are not affected. HNTX-III and HNTX-IV are part of the Huwentoxin-I family. Toxins from the Huwentoxin-I family are thought to bind to site 1 on the sodium channels. Other hainantoxins bind at site 3 of the sodium channels. HNTX-I specifically blocks mammalian Nav1.2 and insect para/tipE channels expressed in Xenopus laevis oocytes. HNTX-I is a weak antagonist of vertebrate TTX-S VGSCs, but is more potent on insect VGSCs.
Affinity
For the blockage of sodium channels, electrostatic interactions or hydrogen bonds are needed. Important for the electrostatic interaction is the presence of a positively charged region in the toxin, because the receptor site of the sodium channel contains a lot of negatively charged residues. In HNTX-I, the positively charged residues and a vicinal hydrophobic patch have most influence on the binding to the sodium channels. HNTX-IV has a positively charged patch containing the amino acids Arg26, Lys27, His28, Arg29 and Lys32, of which Lys27, Arg29 and Lys32 are the most important for interaction with the TTX-S VGSCs. HNTX-V also shows an interface of positively charged amino acids that are responsible for the binding with the TTX-S VGSCs, where also Lys27 and Arg29 are the most important. Subtle differences in the positively charged patch can result in altered electrostatic properties, causing altered pharmacological effects.
Table 1: IC50 values of four subgroups of hainantoxins
Mode of action
HNTX-I, HNTX-III, HNTX-IV, and HNTX-V are thought to bind to site 1 of voltage-dependent sodium channels, similar to TTX, and thereby block the channel pore. They do not alter activation and inactivation kinetics. Ion selectivity of the VGSCs is not changed by hainantoxin. The mode of action of HNTX-II is unclear, but is unlikely to involve sodium channels.
Toxicity
Symptoms
Hainantoxins can affect both vertebrates and invertebrates. HNTX-I has no significant effect on insects or rats. HNTX-III and HNTX-IV cause spontaneous contractions of the diaphragm muscle and the vas deferens smooth muscle of the rat. HNTX-III and HNTX-IV are able to paralyze cockroaches, and HNTX-IV can even paralyze rats.
LD50
Intracerebroventricular injection in mice with HNTX-II shows an LD50 of 1.41 μg/g. The intraperitoneal LD50 value of HNTX-IV in mice is 0.2 mg/kg. HNTX-III is 40 times more potent than HNTX-IV.
Therapeutic use
HNTX-III and HNTX-IV have an antagonistic effect on the toxin BMK-I, a toxic protein in the venom of the scorpion Buthus martensii.
References
Neurotoxins
Ion channel toxins
Spider toxins | Hainantoxin | Chemistry | 1,376 |
75,359,782 | https://en.wikipedia.org/wiki/Efinopegdutide | Efinopegdutide (MK-6024) is a dual agonist of the glucagon and GLP-1 receptors. It is being developed by Merck for non-alcoholic fatty liver disease. It was also developed for type 2 diabetes and obesity but these indications were discontinued.
References
Glucagon receptor agonists
GLP-1 receptor agonists
Peptide therapeutics
Drugs developed by Merck & Co. | Efinopegdutide | Chemistry | 92 |
62,457,740 | https://en.wikipedia.org/wiki/Structural%20Ramsey%20theory | In mathematics, structural Ramsey theory is a categorical generalisation of Ramsey theory, rooted in the idea that many important results of Ramsey theory have "similar" logical structures. The key observation is noting that these Ramsey-type theorems can be expressed as the assertion that a certain category (or class of finite structures) has the Ramsey property (defined below).
Structural Ramsey theory began in the 1970s with the work of Nešetřil and Rödl, and is intimately connected to Fraïssé theory. It received some renewed interest in the mid-2000s due to the discovery of the Kechris–Pestov–Todorčević correspondence, which connected structural Ramsey theory to topological dynamics.
History
is given credit for inventing the idea of a Ramsey property in the early 70s. The first publication of this idea appears to be Graham, Leeb and Rothschild's 1972 paper on the subject. Key development of these ideas was done by Nešetřil and Rödl in their series of 1977 and 1983 papers, including the famous Nešetřil–Rödl theorem. This result was reproved independently by Abramson and Harrington, and further generalised by . More recently, Mašulović and Solecki have done some pioneering work in the field.
Motivation
This article will use the set theory convention that each natural number can be considered as the set of all natural numbers less than it: i.e. . For any set , an -colouring of is an assignment of one of labels to each element of . This can be represented as a function mapping each element to its label in (which this article will use), or equivalently as a partition of into pieces.
Here are some of the classic results of Ramsey theory:
(Finite) Ramsey's theorem: for every , there exists such that for every -colouring of all the -element subsets of , there exists a subset , with , such that is -monochromatic.
(Finite) van der Waerden's theorem: for every , there exists such that for every -colouring of , there exists a -monochromatic arithmetic progression of length .
Graham–Rothschild theorem: fix a finite alphabet . A -parameter word of length over is an element , such that all of the appear, and their first appearances are in increasing order. The set of all -parameter words of length over is denoted by . Given and , we form their composition by replacing every occurrence of in with the th entry of .Then, the Graham–Rothschild theorem states that for every , there exists such that for every -colouring of all the -parameter words of length , there exists , such that (i.e. all the -parameter subwords of ) is -monochromatic.
(Finite) Folkman's theorem: for every , there exists such that for every -colouring of , there exists a subset , with , such that , and is -monochromatic.
These "Ramsey-type" theorems all have a similar idea: we fix two integers and , and a set of colours . Then, we want to show there is some large enough, such that for every -colouring of the "substructures" of size inside , we can find a suitable "structure" inside , of size , such that all the "substructures" of with size have the same colour.
What types of structures are allowed depends on the theorem in question, and this turns out to be virtually the only difference between them. This idea of a "Ramsey-type theorem" leads itself to the more precise notion of the Ramsey property (below).
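As a notational aside (not part of the original text), this common pattern is often abbreviated with the arrow notation of Ramsey theory. For sets one writes

n \longrightarrow (m)^{k}_{r}

to mean that every r-colouring of the k-element subsets of an n-element set admits an m-element subset all of whose k-element subsets receive the same colour. In the structural setting the same idea is written C \longrightarrow (B)^{A}_{r}, with the k-element subsets replaced by copies of a fixed structure A inside B and C.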
The Ramsey property
Let be a category. has the Ramsey property if for every natural number , and all objects in , there exists another object in , such that for every -colouring , there exists a morphism which is -monochromatic, i.e. the set
is -monochromatic.
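In common symbolic notation, and as a reconstruction rather than a quotation of the formulas that did not survive extraction, the definition reads: a category \mathbf{C} has the Ramsey property if

\forall k \in \mathbb{N},\ \forall A, B \in \mathrm{Ob}(\mathbf{C})\ \exists C \in \mathrm{Ob}(\mathbf{C}) \text{ such that } \forall\, \chi : \operatorname{Hom}(A, C) \to k\ \ \exists\, g \in \operatorname{Hom}(B, C) \text{ with } \chi \text{ constant on } g \circ \operatorname{Hom}(A, B) = \{\, g \circ f : f \in \operatorname{Hom}(A, B) \,\}.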
Often, is taken to be a class of finite -structures over some fixed language , with embeddings as morphisms. In this case, instead of colouring morphisms, one can think of colouring "copies" of in , and then finding a copy of in , such that all copies of in this copy of are monochromatic. This may lend itself more intuitively to the earlier idea of a "Ramsey-type theorem".
There is also a notion of a dual Ramsey property; has the dual Ramsey property if its dual category has the Ramsey property as above. More concretely, has the dual Ramsey property if for every natural number , and all objects in , there exists another object in , such that for every -colouring , there exists a morphism for which is -monochromatic.
Examples
Ramsey's theorem: the class of all finite chains, with order-preserving maps as morphisms, has the Ramsey property.
van der Waerden's theorem: in the category whose objects are finite ordinals, and whose morphisms are affine maps for , , the Ramsey property holds for .
Hales–Jewett theorem: let be a finite alphabet, and for each , let be a set of variables. Let be the category whose objects are for each , and whose morphisms , for , are functions which are rigid and surjective on . Then, has the dual Ramsey property for (and , depending on the formulation).
Graham–Rothschild theorem: the category defined above has the dual Ramsey property.
The Kechris–Pestov–Todorčević correspondence
In 2005, Kechris, Pestov and Todorčević discovered the following correspondence (hereafter called the KPT correspondence) between structural Ramsey theory, Fraïssé theory, and ideas from topological dynamics.
Let be a topological group. For a topological space , a -flow (denoted ) is a continuous action of on . We say that is extremely amenable if any -flow on a compact space admits a fixed point , i.e. the stabiliser of is itself.
For a Fraïssé structure , its automorphism group can be considered a topological group, given the topology of pointwise convergence, or equivalently, the subspace topology induced on by the space with the product topology. The following theorem illustrates the KPT correspondence:Theorem (KPT). For a Fraïssé structure , the following are equivalent:
The group of automorphisms of is extremely amenable.
The class has the Ramsey property.
See also
Ramsey theory
Fraïssé's theorem
Age (model theory)
References
Category theory
Ramsey theory
Model theory | Structural Ramsey theory | Mathematics | 1,337 |
9,553,738 | https://en.wikipedia.org/wiki/Ensemble%20Kalman%20filter | The ensemble Kalman filter (EnKF) is a recursive filter suitable for problems with a large number of variables, such as discretizations of partial differential equations in geophysical models. The EnKF originated as a version of the Kalman filter for large problems (essentially, the covariance matrix is replaced by the sample covariance), and it is now an important data assimilation component of ensemble forecasting. EnKF is related to the particle filter (in this context, a particle is the same thing as an ensemble member) but the EnKF makes the assumption that all probability distributions involved are Gaussian; when it is applicable, it is much more efficient than the particle filter.
Introduction
The ensemble Kalman filter (EnKF) is a Monte Carlo implementation of the Bayesian update problem: given a probability density function (PDF) of the state of the modeled system (the prior, called often the forecast in geosciences) and the data likelihood, Bayes' theorem is used to obtain the PDF after the data likelihood has been taken into account (the posterior, often called the analysis). This is called a Bayesian update. The Bayesian update is combined with advancing the model in time, incorporating new data from time to time. The original Kalman filter, introduced in 1960, assumes that all PDFs are Gaussian (the Gaussian assumption) and provides algebraic formulas for the change of the mean and the covariance matrix by the Bayesian update, as well as a formula for advancing the mean and covariance in time provided the system is linear. However, maintaining the covariance matrix is not feasible computationally for high-dimensional systems. For this reason, EnKFs were developed. EnKFs represent the distribution of the system state using a collection of state vectors, called an ensemble, and replace the covariance matrix by the sample covariance computed from the ensemble. The ensemble is operated with as if it were a random sample, but the ensemble members are really not independent, as they all share the EnKF. One advantage of EnKFs is that advancing the PDF in time is achieved by simply advancing each member of the ensemble.
Derivation
Kalman filter
Let denote the -dimensional state vector of a model, and assume that it has Gaussian probability distribution with mean and covariance , i.e., its PDF is
Here and below, means proportional; a PDF is always scaled so that its integral over the whole space is one. This , called the prior, was evolved in time by running the model and now is to be updated to account for new data. It is natural to assume that the error distribution of the data is known; data have to come with an error estimate, otherwise they are meaningless. Here, the data is assumed to have Gaussian PDF with covariance and mean , where is the so-called observation matrix. The covariance matrix describes the estimate of the error of the data; if the random errors in the entries of the data vector are independent, is diagonal and its diagonal entries are the squares of the standard deviation (“error size”) of the error of the corresponding entries of the data vector . The value is what the value of the data would be for the state in the absence of data errors. Then the probability density of the data conditional of the system state , called the data likelihood, is
The PDF of the state and the data likelihood are combined to give the new probability density of the system state conditional on the value of the data (the posterior) by the Bayes theorem,
The data is fixed once it is received, so denote the posterior state by instead of and the posterior PDF by . It can be shown by algebraic manipulations that the posterior PDF is also Gaussian,
with the posterior mean and covariance given by the Kalman update formulas
where
is the so-called Kalman gain matrix.
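For reference, the standard Kalman update formulas being referred to can be written as follows (a reconstruction in common notation, since the displayed equations did not survive extraction; here x and P denote the prior mean and covariance, hatted quantities the posterior, H the observation matrix, R the data error covariance and d the data vector):

\hat{x} = x + K\,(d - Hx), \qquad \hat{P} = (I - KH)\,P, \qquad K = P H^{\mathsf{T}} \left( H P H^{\mathsf{T}} + R \right)^{-1}.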
Ensemble Kalman Filter
The EnKF is a Monte Carlo approximation of the Kalman filter, which avoids evolving the covariance matrix of the PDF of the state vector . Instead, the PDF is represented by an ensemble
is an matrix whose columns are the ensemble members, and it is called the prior ensemble. Ideally, ensemble members would form a sample from the prior distribution. However, the ensemble members are not in general independent except in the initial ensemble, since every EnKF step ties them together. They are deemed to be approximately independent, and all calculations proceed as if they actually were independent.
Replicate the data into an matrix
so that each column consists of the data vector plus a random vector from the -dimensional normal distribution . If, in addition, the columns of are a sample from the prior probability distribution, then the columns of
form a sample from the posterior probability distribution. To see this in the scalar case with : Let , and Then
.
The first sum is the posterior mean, and the second sum, in view of the independence, has a variance
,
which is the posterior variance.
The EnKF is now obtained simply by replacing the state covariance in Kalman gain matrix by the sample covariance computed from the ensemble members (called the ensemble covariance), that is:
Implementation
Basic formulation
Here we follow. Suppose the ensemble matrix and the data matrix are as above. The ensemble mean and the covariance are
where
and denotes the matrix of all ones of the indicated size.
The posterior ensemble is then given by
where the perturbed data matrix is as above.
Note that since is a covariance matrix, it is always positive semidefinite and usually positive definite, so the inverse above exists and the formula can be implemented by the Cholesky decomposition. In, is replaced by the sample covariance where and the inverse is replaced by a pseudoinverse, computed using the singular-value decomposition (SVD) .
Since these formulas are matrix operations with dominant Level 3 operations, they are suitable for efficient implementation using software packages such as LAPACK (on serial and shared memory computers) and ScaLAPACK (on distributed memory computers). Instead of computing the inverse of a matrix and multiplying by it, it is much better (several times cheaper and also more accurate) to compute the Cholesky decomposition of the matrix and treat the multiplication by the inverse as solution of a linear system with many simultaneous right-hand sides.
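As an illustration of the basic formulation above, the following is a minimal NumPy sketch of one stochastic (perturbed-observations) analysis step. It is a reading of the formulas in this section rather than a reference implementation: the variable names (X for the prior ensemble, H for the observation matrix, R for the data error covariance, d for the data vector) are chosen here for clarity, and a Cholesky factorisation replaces the explicit inverse, as recommended above.

import numpy as np

def enkf_analysis(X, H, R, d, rng=None):
    """One stochastic (perturbed-observations) EnKF analysis step.

    X : (n, N) prior ensemble, one state vector per column
    H : (m, n) observation matrix
    R : (m, m) data error covariance
    d : (m,)   data vector
    Returns the (n, N) posterior ensemble.
    """
    rng = np.random.default_rng() if rng is None else rng
    N = X.shape[1]
    # Ensemble anomalies; the sample covariance is A A^T / (N - 1), but it is
    # never formed explicitly -- only its product with H^T is needed.
    A = X - X.mean(axis=1, keepdims=True)
    HA = H @ A                                   # anomalies in observation space
    # Perturbed data: each column is d plus a draw from N(0, R)
    D = d[:, None] + rng.multivariate_normal(np.zeros(len(d)), R, size=N).T
    # Innovation covariance H C H^T + R, factored once with Cholesky
    S = HA @ HA.T / (N - 1) + R
    L = np.linalg.cholesky(S)
    W = np.linalg.solve(L.T, np.linalg.solve(L, D - H @ X))   # S^{-1} (D - H X)
    # Posterior ensemble: X + C H^T S^{-1} (D - H X), with C H^T = A (H A)^T / (N - 1)
    return X + (A @ HA.T / (N - 1)) @ W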
Observation matrix-free implementation
Since we have replaced the covariance matrix with ensemble covariance, this leads to a simpler formula where ensemble observations are directly used without explicitly specifying the matrix . More specifically, define a function of the form
The function is called the observation function or, in the inverse problems context, the forward operator. The value of is what the value of the data would be for the state assuming the measurement is exact. Then the posterior ensemble can be rewritten as
where
and
with
Consequently, the ensemble update can be computed by evaluating the observation function on each ensemble member once and the matrix does not need to be known explicitly. This formula holds also for an observation function with a fixed offset , which also does not need to be known explicitly. The above formula has been commonly used for a nonlinear observation function , such as the position of a hurricane vortex. In that case, the observation function is essentially approximated by a linear function from its values at ensemble members.
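A sketch of the same analysis step written with an observation function rather than an explicit observation matrix, as described in this subsection, might look as follows. It is again only an illustration under the stated assumptions; the name h for the observation function is chosen here, and a possibly nonlinear h is handled by using the ensemble of h-values in place of H times the anomalies.

def enkf_analysis_obs_fn(X, h, R, d, rng=None):
    """EnKF analysis step using an observation function h(x) in place of H.

    h maps a length-n state vector to a length-m predicted observation; it may
    be nonlinear, in which case H A is approximated by the spread of h-values.
    """
    rng = np.random.default_rng() if rng is None else rng
    N = X.shape[1]
    HX = np.column_stack([h(X[:, i]) for i in range(N)])    # h applied to each member
    A = X - X.mean(axis=1, keepdims=True)
    HA = HX - HX.mean(axis=1, keepdims=True)                # observation-space anomalies
    D = d[:, None] + rng.multivariate_normal(np.zeros(len(d)), R, size=N).T
    S = HA @ HA.T / (N - 1) + R
    W = np.linalg.solve(S, D - HX)
    return X + (A @ HA.T / (N - 1)) @ W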
Implementation for a large number of data points
For a large number of data points, the multiplication by becomes a bottleneck. The following alternative formula is advantageous when the number of data points is large (such as when assimilating gridded or pixel data) and the data error covariance matrix is diagonal (which is the case when the data errors are uncorrelated), or cheap to decompose (such as banded due to limited covariance distance). Using the Sherman–Morrison–Woodbury formula
with
gives
which requires only the solution of systems with the matrix (assumed to be cheap) and of a system of size with right-hand sides. See for operation counts.
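For reference, the Sherman–Morrison–Woodbury identity invoked here can be stated in its usual general form (a reconstruction in standard notation, not a quotation of the original displayed formula):

(A + UCV)^{-1} = A^{-1} - A^{-1} U \left( C^{-1} + V A^{-1} U \right)^{-1} V A^{-1}.

Applied with A = R and the low-rank ensemble representation of the state covariance, it allows the inverse of H C H^{\mathsf{T}} + R to be built from R^{-1} (cheap when R is diagonal or banded) plus a small system whose size is set by the number of ensemble members.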
Further extensions
The EnKF version described here involves randomization of data. For filters without randomization of data, see.
Since the ensemble covariance is rank deficient (there are many more state variables, typically millions, than the ensemble members, typically less than a hundred), it has large terms for pairs of points that are spatially distant. Since in reality the values of physical fields at distant locations are not that much correlated, the covariance matrix is tapered off artificially based on the distance, which gives rise to localized EnKF algorithms. These methods modify the covariance matrix used in the computations and, consequently, the posterior ensemble is no longer made only of linear combinations of the prior ensemble.
For nonlinear problems, EnKF can create posterior ensemble with non-physical states. This can be alleviated by regularization, such as penalization of states with large spatial gradients.
For problems with coherent features, such as hurricanes, thunderstorms, firelines, squall lines, and rain fronts, there is a need to adjust the numerical model state by deforming the state in space (its grid) as well as by correcting the state amplitudes additively. In 2007, Ravela et al. introduce the joint position-amplitude adjustment model using ensembles, and systematically derive a sequential approximation which can be applied to both EnKF and other formulations. Their method does not make the assumption that amplitudes and position errors are independent or jointly Gaussian, as others do. The morphing EnKF employs intermediate states, obtained by techniques borrowed from image registration and morphing, instead of linear combinations of states.
Formally, EnKFs rely on the Gaussian assumption. In practice they can also be used for nonlinear problems, where the Gaussian assumption may not be satisfied. Related filters attempting to relax the Gaussian assumption in EnKF while preserving its advantages include filters that fit the state PDF with multiple Gaussian kernels, filters that approximate the state PDF by Gaussian mixtures, a variant of the particle filter with computation of particle weights by density estimation, and a variant of the particle filter with thick tailed data PDF to alleviate particle filter degeneracy.
See also
Data assimilation
Particle filter
Recursive Bayesian estimation
References
External links
EnKF webpage
TOPAZ, real-time forecasting of the North Atlantic ocean and Arctic sea-ice with the EnKF
EnKF-C, a compact framework for data assimilation into large-scale layered geophysical models with the EnKF
PDAF – Parallel Data Assimilation Framework – an open-source software for data assimilation providing different variants of the EnKF
Linear filters
Nonlinear filters
Bayesian statistics
Signal estimation
Monte Carlo methods | Ensemble Kalman filter | Physics | 2,204 |
3,916,626 | https://en.wikipedia.org/wiki/Wet-bulb%20temperature | The wet-bulb temperature (WBT) is a temperature that can be measured by a thermometer covered in cloth which has been soaked in water at ambient temperature (a wet-bulb thermometer) and over which air is passed. At 100% relative humidity, the wet-bulb temperature is equal to the air temperature (dry-bulb temperature); at lower humidity the wet-bulb temperature is lower than dry-bulb temperature because of evaporative cooling.
The wet-bulb temperature is defined as the temperature of a parcel of air cooled to saturation (100% relative humidity) by the evaporation of water into it, with the latent heat supplied by the parcel. A wet-bulb thermometer indicates a temperature close to the true (thermodynamic) wet-bulb temperature. The wet-bulb temperature is the lowest temperature that can be reached under current ambient conditions by the evaporation of water only.
Even heat-adapted people cannot carry out normal outdoor activities past a wet-bulb temperature of , equivalent to a heat index of . A reading of – equivalent to a heat index of – is considered the theoretical human survivability limit for up to six hours of exposure.
Intuition
If a thermometer is wrapped in a water-moistened cloth, it will behave differently. The drier and less humid the air is, the faster the water will evaporate. The faster water evaporates, the lower the thermometer's temperature will be relative to air temperature.
Water can evaporate only if the air around it can absorb more water. This is measured by comparing how much water is in the air to the maximum that could be in the air—the relative humidity. 0% means the air is completely dry, and 100% means the air contains all the water it can hold in the present circumstances and it cannot absorb any more water (from any source).
This is part of the cause of apparent temperature in humans. The drier the air, the more moisture it can take up beyond what is already in it, and the easier it is for extra water to evaporate. The result is that sweat evaporates more quickly in drier air, cooling down the skin faster. If the relative humidity is 100%, no water can evaporate, and cooling by sweating or evaporation is not possible.
When relative humidity is 100%, a wet-bulb thermometer can also no longer be cooled by evaporation, so it will read the same as an unwrapped thermometer.
General
The wet-bulb temperature is the lowest temperature that may be achieved by evaporative cooling of a water-wetted, ventilated surface.
By contrast, the dew point is the temperature to which the ambient air must be cooled to reach 100% relative humidity assuming there is no further evaporation into the air; it is the temperature where condensation (dew) and clouds would form.
For a parcel of air that is less than saturated (i.e., air with less than 100 percent relative humidity), the wet-bulb temperature is lower than the dry-bulb temperature, but higher than the dew point temperature. The lower the relative humidity (the drier the air), the greater the gaps between each pair of these three temperatures. Conversely, when the relative humidity rises to 100%, the three figures coincide.
For air at a known pressure and dry-bulb temperature, the thermodynamic wet-bulb temperature corresponds to unique values of the relative humidity and the dew point temperature. It therefore may be used for the practical determination of these values. The relationships between these values are illustrated in a psychrometric chart.
Lower wet-bulb temperatures that correspond with drier air in summer can translate to energy savings in air-conditioned buildings due to:
Reduced dehumidification load for ventilation air
Increased efficiency of cooling towers
Increased efficiency of evaporative coolers
Thermodynamic wet-bulb temperature
The thermodynamic wet-bulb temperature is the temperature a volume of air would have if cooled adiabatically to saturation by evaporation of water into it, all latent heat being supplied by the volume of air.
The temperature of an air sample that has passed over a large surface of liquid water in an insulated channel is the thermodynamic wet-bulb temperature—the air has become saturated by passing through a constant-pressure, ideal, adiabatic saturation chamber.
Meteorologists and others may use the term "isobaric wet-bulb temperature" to refer to the "thermodynamic wet-bulb temperature". It is also called the "adiabatic saturation temperature", though meteorologists also use "adiabatic saturation temperature" to mean "temperature at the saturation level", i.e. the temperature the parcel would achieve if it expanded adiabatically until saturated.
The thermodynamic wet-bulb temperature is a thermodynamic property of a mixture of air and water vapor. The value indicated by a simple wet-bulb thermometer often provides an adequate approximation of the thermodynamic wet-bulb temperature.
For an accurate wet-bulb thermometer, "the wet-bulb temperature and the adiabatic saturation temperature are approximately equal for air-water vapor mixtures at atmospheric temperature and pressure. This is not necessarily true at temperatures and pressures that deviate significantly from ordinary atmospheric conditions, or for other gas–vapor mixtures."
Temperature reading of wet-bulb thermometer
Wet-bulb temperature is measured using a thermometer that has its bulb wrapped in cloth—called a sock—that is kept wet with distilled water via wicking action. Such an instrument is called a wet-bulb thermometer. A widely used device for measuring wet- and dry-bulb temperature is a sling psychrometer, which consists of a pair of mercury bulb thermometers, one with a wet "sock" to measure the wet-bulb temperature and the other with the bulb exposed and dry for the dry-bulb temperature. The thermometers are attached to a swivelling handle, which allows them to be whirled around so that water evaporates from the sock and cools the wet bulb until it reaches thermal equilibrium.
An actual wet-bulb thermometer reads a temperature that is slightly different from the thermodynamic wet-bulb temperature, but they are very close in value. This is due to a coincidence: for a water-air system the psychrometric ratio (see below) happens to be close to 1, although for systems other than air and water they might not be close.
To understand why this is so, first consider the calculation of the thermodynamic wet-bulb temperature.
Experiment 1
In this case, a stream of unsaturated air is cooled. The heat from cooling that air is used to evaporate some water which increases the humidity of the air. At some point the air becomes saturated with water vapor (and has cooled to the thermodynamic wet-bulb temperature). In this case we can write the following balance of energy per mass of dry air:
saturated water content of the air (kgH2O/kgdry air)
initial water content of the air (same unit as above)
latent heat of water (J/kgH2O)
initial air temperature (K)
saturated air temperature (K)
specific heat of air (J/kg·K)
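Written out in the variables just listed (a reconstruction of the balance described above, since the displayed equation did not survive extraction, with the latent heat denoted here by \lambda), the energy balance per unit mass of dry air is

c_p\,\big(T_{\text{initial}} - T_{\text{saturated}}\big) \;=\; \lambda\,\big(w_{\text{saturated}} - w_{\text{initial}}\big),

i.e. the sensible heat given up as the air cools from its initial temperature to the saturation (wet-bulb) temperature supplies the latent heat needed to evaporate the additional water content into it.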
Experiment 2
For the case of the wet-bulb thermometer, imagine a drop of water with unsaturated air blowing over it. As long as the vapor pressure of water in the drop (function of its temperature) is greater than the partial pressure of water vapor in the air stream, evaporation will take place. Initially, the heat required for the evaporation will come from the drop itself.
Instead, as the drop starts cooling, it is now colder than the air, so convective heat transfer begins to occur from the air to the drop. Furthermore, the evaporation rate depends on the difference of concentration of water vapor between the drop-stream interface and the distant stream (i.e. the "original" stream, unaffected by the drop), and on a convective mass transfer coefficient, which is a function of the components of the mixture (i.e. water and air).
After a certain period, an equilibrium is reached: the drop has cooled to a point where the rate of heat carried away in evaporation is equal to the heat gain through convection. At this point, the following balance of energy per interface area is true:
water content of interface at equilibrium (kgH2O/kgdry air) (note that the air in this region is and has always been saturated)
water content of the distant air (same unit as above)
mass transfer coefficient (kg/m2⋅s)
air temperature at distance (K)
water drop temperature at equilibrium (K)
convective heat transfer coefficient (W/m2·K)
Note that:
is the driving force for mass transfer (constantly equal to throughout the entire experiment)
is the driving force for heat transfer (when reaches , the equilibrium is reached)
Let us rearrange that equation into:
Now let's go back to our original "thermodynamic wet-bulb" experiment, Experiment 1. If the air stream is the same in both experiments (i.e. and are the same), then we can equate the right-hand sides of both equations:
Rearranging:
If then the temperature of the drop in Experiment 2 is the same as the wet-bulb temperature in Experiment 1. Due to a coincidence, for the mixture of air and water vapor this is the case, the ratio (called psychrometric ratio) being close to 1.
Experiment 2 is what happens in a common wet-bulb thermometer, meaning that its reading is fairly close to the thermodynamic ("real") wet-bulb temperature.
Experimentally, the wet-bulb thermometer reads closest to the thermodynamic wet-bulb temperature if:
The sock is shielded from radiant heat exchange with its surroundings
Air flows past the sock quickly enough to prevent evaporated moisture from affecting evaporation from the sock
The water supplied to the sock is at the same temperature as the thermodynamic wet-bulb temperature of the air
In practice the value reported by a wet-bulb thermometer differs slightly from the thermodynamic wet-bulb temperature because:
The sock is not perfectly shielded from radiant heat exchange
Air flow rate past the sock may be less than optimum
The temperature of the water supplied to the sock is not controlled
At relative humidities below 100 percent, water evaporates from the bulb, cooling it below ambient temperature. To determine relative humidity, ambient temperature is measured using an ordinary thermometer, better known in this context as a dry-bulb thermometer. At any given ambient temperature, less relative humidity results in a greater difference between the dry-bulb and wet-bulb temperatures; the wet-bulb is colder. The precise relative humidity is determined by reading from a psychrometric chart of wet-bulb versus dry-bulb temperatures, or by calculation.
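As an illustration of what "by calculation" can involve, the following is a minimal Python sketch that solves the Experiment 1 energy balance c_p (T − Tw) = L (w_s(Tw) − w) for the wet-bulb temperature by bisection, using the Magnus approximation for saturation vapour pressure. The constants are typical textbook values and the result is approximate; this is not the procedure of any particular agency or standard.

import math

def saturation_vapor_pressure_hpa(t_c):
    """Magnus approximation for saturation vapour pressure over water (hPa)."""
    return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

def mixing_ratio(vapor_pressure_hpa, pressure_hpa):
    """Mass of water vapour per mass of dry air (kg/kg)."""
    return 0.622 * vapor_pressure_hpa / (pressure_hpa - vapor_pressure_hpa)

def wet_bulb_temperature(t_dry_c, rh_percent, pressure_hpa=1013.25):
    """Thermodynamic wet-bulb temperature (°C) found by bisection on the
    balance c_p * (T - Tw) = L * (w_s(Tw) - w)."""
    c_p = 1005.0       # specific heat of dry air, J/(kg K)
    latent = 2.501e6   # latent heat of vaporisation, J/kg
    e = rh_percent / 100.0 * saturation_vapor_pressure_hpa(t_dry_c)
    w = mixing_ratio(e, pressure_hpa)
    lo, hi = -40.0, t_dry_c          # the wet-bulb temperature lies in this interval
    for _ in range(60):
        tw = 0.5 * (lo + hi)
        w_s = mixing_ratio(saturation_vapor_pressure_hpa(tw), pressure_hpa)
        residual = c_p * (t_dry_c - tw) - latent * (w_s - w)
        # A positive residual means tw is still below the wet-bulb temperature
        if residual > 0:
            lo = tw
        else:
            hi = tw
    return 0.5 * (lo + hi)

# Example: about 30 °C at 50% relative humidity gives a wet-bulb near 22 °C
print(round(wet_bulb_temperature(30.0, 50.0), 1))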
Psychrometers are instruments with both a wet-bulb and a dry-bulb thermometer.
A wet-bulb thermometer can also be used outdoors in sunlight in combination with a globe thermometer (which measures the incident radiant temperature) to calculate the Wet Bulb Globe Temperature (WBGT).
Adiabatic wet-bulb temperature
The adiabatic wet-bulb temperature is the temperature a volume of air would have if cooled adiabatically to saturation and then compressed adiabatically to the original pressure in a moist-adiabatic process. Such cooling may occur as air pressure reduces with altitude, as noted in the article on lifted condensation level.
This term, as defined in this article, may be most prevalent in meteorology.
As the value referred to as "thermodynamic wet-bulb temperature" is also achieved via an adiabatic process, some engineers and others may use the term "adiabatic wet-bulb temperature" to refer to the "thermodynamic wet-bulb temperature". As mentioned above, meteorologists and others may use the term "isobaric wet-bulb temperature" to refer to the "thermodynamic wet-bulb temperature".
"The relationship between the isobaric and adiabatic processes is quite obscure. Comparisons indicate, however, that the two temperatures are rarely different by more than a few tenths of a degree Celsius, and the adiabatic version is always the smaller of the two for unsaturated air. Since the difference is so small, it is usually neglected in practice."
Wet-bulb depression
The wet-bulb depression is the difference between the dry-bulb temperature and the wet-bulb temperature. If there is 100% humidity, dry-bulb and wet-bulb temperatures are identical, making the wet-bulb depression equal to zero in such conditions.
Wet-bulb temperature and health
Living organisms can survive only within a certain temperature range. When the ambient temperature is excessive, many animals cool themselves to below ambient temperature by evaporative cooling (sweat in humans and horses, saliva and water in dogs and other mammals); this helps to prevent potentially fatal hyperthermia due to heat stress. The effectiveness of evaporative cooling depends upon humidity; wet-bulb temperature, or more complex calculated quantities such as wet-bulb globe temperature (WBGT) which also takes account of solar radiation, give a useful indication of the degree of heat stress, and are used by several agencies as the basis for heat stress prevention guidelines.
Given the body's vital requirement to maintain a core temperature of approximately 37°C, a sustained wet-bulb temperature exceeding is likely to be fatal even to fit and healthy people, semi-nude in the shade and next to a fan; at this temperature human bodies switch from shedding heat to the environment, to gaining heat from it. A 2022 study found that the critical wet-bulb temperature at which heat stress can no longer be compensated in young, healthy adults mimicking basic activities of daily life strongly depended on the ambient temperature and humidity conditions, but was 5–10°C below the theoretical limit.
A 2015 study concluded that depending on the extent of future global warming, parts of the world could become uninhabitable due to deadly wet-bulb temperatures. A 2020 study reported cases where a wet-bulb temperature had already occurred, albeit too briefly and in too small a locality to cause fatalities. Severe mortality and morbidity impacts can occur at much lower wet-bulb temperatures due to suboptimal physiological and behavioral conditions; the 2003 European and 2010 Russian heat waves had values no greater than .
In 2018, South Carolina implemented new regulations to protect high school students from heat-related emergencies during outdoor activities. Specific guidelines and restrictions are in place for wet-bulb globe temperatures between and ; wet-bulb globe temperatures of or greater require all outdoor activities to be canceled.
Heat waves with high humidity
On 8 July 2003, Dhahran, Saudi Arabia saw the highest heat index ever recorded at with a temperature of and a dew point.
The 2015 Indian heat wave saw wet-bulb temperatures in Andhra Pradesh reach . A similar wet-bulb temperature was reached during the 1995 Chicago heat wave.
A heat wave in August 2015 saw temperatures of and a dew point of at Samawah, Iraq, and with a dew point of in Bandar-e Mahshahr, Iran. This implied wet-bulb temperatures of about and respectively. The government urged residents to stay out of the sun and drink plenty of water.
Highest recorded wet-bulb temperatures
The following locations have recorded wet-bulb temperatures of or higher. (Weather stations are typically at airports, so other locations in the city may have experienced higher values.)
Climate change
Study results indicate that limiting global warming to 1.5 °C would prevent most of the tropics from reaching the wet-bulb temperature of the human physiological limit of 35 °C.
See also
Atmospheric thermodynamics
Dew point
Heat index
Wet-bulb potential temperature
References
External links
3 ways to get wet-bulb temperatures for engineers
Wet-bulb chart for snow making (Fahrenheit)
Indirect evaporative cooler cools below wet-bulb
Wet-bulb and dew-point calculator from NOAA
Shortcut to calculating wet-bulb
Heat Stress Index Calculation
Atmospheric thermodynamics
Temperature
Meteorological data and networks
es:Temperatura#Temperatura húmeda | Wet-bulb temperature | Physics,Chemistry | 3,443 |
205,718 | https://en.wikipedia.org/wiki/Mimosa%20%28star%29 | Mimosa is the second-brightest object in the southern constellation of Crux (after Acrux), and the 20th-brightest star in the night sky. It has the Bayer designation β Crucis, which is Latinised to Beta Crucis and abbreviated Beta Cru or β Cru. Mimosa forms part of the prominent asterism called the Southern Cross. It is a binary star or a possible triple star system.
Nomenclature
β Crucis (Latinised to Beta Crucis) is the system's Bayer designation. Although Mimosa is at roughly −60° declination, and therefore not visible north of 30° latitude, in the time of the ancient Greeks and Romans it was visible north of 40° due to the precession of equinoxes, and these civilizations regarded it as part of the constellation of Centaurus.
It bore the traditional name Mimosa and the historical name Becrux. Mimosa, which is derived from the Latin for 'actor', may come from the flower of the same name. Becrux is a modern contraction of the Bayer designation. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Mimosa for this star.
In Chinese, (), meaning Cross, refers to an asterism consisting of Acrux, Mimosa, Gamma Crucis, and Delta Crucis. Consequently, Mimosa itself is known as (, .).
Stellar system
Based on parallax measurements, Mimosa is located at a distance of from the Earth. In 1957, German astronomer Wulff-Dieter Heintz discovered that it is a spectroscopic binary with components that are too close together to resolve with a telescope. The pair orbit each other every 5 years with an estimated separation that varies from 5.4 to 12.0 Astronomical Units. The system is only 8 to 11 million years old.
The primary, β Crucis A, is a massive star with about 16 times the Sun's mass. The projected rotational velocity of this star is about . However, the inclination of the pair's orbital plane is only about 10°, which probably means the inclination of the star's rotation axis is also low. This suggests that the azimuthal rotational velocity is quite high, at about . With a radius of about 8.4 times the radius of the Sun, this would mean the star has a rotational period of only about 3.6 days.
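As a rough check on this arithmetic (the equatorial velocity itself is not preserved in the text above, so the value of roughly 120 km/s used here is an illustrative assumption chosen to be consistent with the quoted radius and period):

P \;=\; \frac{2\pi R}{v_{\mathrm{eq}}} \;\approx\; \frac{2\pi \times 8.4 \times 6.96\times10^{5}\ \mathrm{km}}{120\ \mathrm{km\,s^{-1}}} \;\approx\; 3\times10^{5}\ \mathrm{s} \;\approx\; 3.5\ \mathrm{days},

in line with the quoted value of about 3.6 days.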
β Crucis A is a known β Cephei variable, although with an effective temperature of about 27,000 K it is at the hot edge of the instability strip where such stars are found. It has three different pulsation modes, none of which are radial. The periods of all three modes are in the range of 4.03–4.59 hours. Owing to the first application of polarimetry it is the heaviest star with an age determined by asteroseismology. The star has a stellar classification of B0.5 III. While the luminosity class is typically associated with giant stars that have exhausted the supply of hydrogen at their cores, Mimosa's temperature and luminosity imply that it is more likely to be a main sequence star fusing hydrogen into helium in its core. At more than ten times the mass of the Sun, Mimosa has sufficient mass to explode as a supernova, which might occur in roughly 6 million years. The high temperature of the star's outer envelope is what gives the star the blue-white hue that is characteristic of B-type stars. It is generating a strong stellar wind and is losing about per year, or the equivalent of the mass of the Sun every 100 million years. The wind is leaving the system with a velocity of 2,000 km s−1 or more.
The secondary, β Crucis B, may be a main sequence star with a stellar class of B2. In 2007, a third companion was announced, which may be a low mass, pre-main sequence star. The X-ray emission from this star was detected using the Chandra X-ray Observatory. Two other stars, located at angular separations of 44 and 370 arcseconds, are likely optical companions that are not physically associated with the system. The β Crucis system may be a member of the Lower Centaurus–Crux sub-group of the Scorpius–Centaurus association. This is a stellar association of stars that share a common origin.
In culture
Mimosa is represented in the flags of Australia, New Zealand, Samoa and Papua New Guinea as one of five stars making up the Southern Cross. It is also featured in the flag of Brazil, along with 26 other stars, each of which represents a state. Mimosa represents the State of Rio de Janeiro.
A vessel named MV Becrux is used to export live cattle from Australia to customers in Asia. An episode dedicated to the vessel features in the television documentary series Mighty Ships.
References
External links
http://jumk.de/astronomie/big-stars/becrux.shtml
B-type main-sequence stars
B-type giants
Beta Cephei variables
Spectroscopic binaries
Lower Centaurus Crux
Crux
Crucis, Beta
4843
Durchmusterung objects
111123
062434
Stars with proper names | Mimosa (star) | Astronomy | 1,154 |
69,817,964 | https://en.wikipedia.org/wiki/Marta%20Bunster | Marta Cecilia del Carmen Bunster Balocchi is a Chilean scientist, most noted for her work in the fields of biochemistry, biophysics and crystallography. She is also known as one of the main promoters of bioinformatics in her country.
Biography
She began studying biochemistry in 1969 at the University of Concepción, where she spent most of her academic and professional career. She obtained a biochemistry diploma in 1974 for her work on X-ray diffraction of synthetic polypeptides. After obtaining her degree, she moved to Santiago, where she worked at the laboratory of Osvaldo Cori and Aida Traverso in the Faculty of Chemical Sciences of the University of Chile. There, she collaborated in the investigation of the kinetic properties of a potato apyrase. After four months, she returned to Concepción and entered the Doctor of Sciences Program, with a major in chemistry. In 1975, she was appointed as an instructor of biophysics for biochemistry teachers at the Department of Physiology of the Institute of Medical Biological Sciences, precursor of the current Biological Sciences Faculty of the University of Concepción. Bunster obtained her doctoral degree in 1981 for her study of synthetic polymers with pharmacological applications, carried out at the University of Concepción and in the laboratory of George B. Butler at the University of Florida. That year, she returned to Concepción once more and met Hilda Cid, a renowned scientist in the fields of physics and crystallography, who had returned from Sweden after being politically persecuted. During those years, Cid had specialized in crystallographic techniques at Uppsala University, which provided her with the necessary equipment for her studies once she returned to Chile. Together, they established the Molecular Biophysics Laboratory of the Faculty of Biological Sciences and Natural Resources, now the Faculty of Biological Sciences, and began studying new methods for predicting protein structure and folding. Among their first research was the development of a secondary-structure prediction method based on hydrophobicity profiles, which was well received in the region because of its high reliability and low cost and is one of the bases of some modern techniques. In the mid-1990s, coinciding with Cid's retirement, Bunster investigated phycobilisomes, a fluorescent macromolecular light-harvesting system present primarily in cyanobacteria and red algae. This research led to the development of spectroscopic techniques and their application, and allowed a greater understanding of conformational change phenomena from a physical perspective.
Legacy
In the 2000s, driven by the boom of bioinformatics, Bunster dedicated her efforts to consolidating international cooperation in this area, forming in 2002 the Iberoamerican Network for Bioinformatics, later renamed the Iberoamerican Society for Bioinformatics (SoIBio), an institution in which she served as Secretary on its first executive board and in which she remains active to this day.
She was part of the Biological Sciences Doctoral Program from its creation, was one of the founding members and Director of the Master in Biochemistry and Bioinformatics, and was Director of the Biochemistry and Molecular Biology Department from 2014 until her retirement in 2020.
Organizational activity
Bunster has been part of numerous scientific organizations during her career, both in Chile and abroad. Some of them include: Chilean Chemical Society, Chilean Biology Society, Society of Biochemistry and Molecular Biology of Chile, Biophysical Society, International Society for Computational Biology (ISCB), and the Latin American Cristallographic Association (LACA).
Featured publications
Cid, H., Bunster, M., Arriagada, E., & Campos, M. (1982). Prediction of secondary structure of proteins by means of hydrophobicity profiles. FEBS Letters, 150(1), 247–254. https://doi.org/10.1016/0014-5793(82)81344-6.
Cid, H., Vargas, V., Bunster, M., & Bustos, S. (1986). Secondary structure prediction of human salivary proline-rich proteins. FEBS letters, 198(1), 140–144. https://doi.org/10.1016/0014-5793(86)81200-5.
Cid, H., Bunster, M., Canales, M., & Gazitúa, F. (1992). Hydrophobicity and structural classes in proteins. Protein engineering, 5(5), 373–375. https://doi.org/10.1093/protein/5.5.373.
Contreras-Martel, C., Martinez-Oyanedel, J., Bunster, M., Legrand, P., Piras, C., Vernede, X., & Fontecilla-Camps, J. C. (2001). Crystallization and 2.2 Å resolution structure of R-phycoerythrin from Gracilaria chilensis: a case of perfect hemihedral twinning. Acta crystallographica. Section D, Biological crystallography, 57(Pt 1), 52–60. https://doi.org/10.1107/s0907444900015274.
Godoy, F. A., Bunster, M., Matus, V., Aranda, C., González, B., & Martínez, M. A. (2003). Poly-beta-hydroxyalkanoates consumption during degradation of 2,4,6-trichlorophenol by Sphingopyxis chilensis S37. Letters in applied microbiology, 36(5), 315–320. https://doi.org/10.1046/j.1472-765x.2003.01315.x.
Martínez-Oyanedel, J., Contreras-Martel, C., Bruna, C., & Bunster, M. (2004). Structural-functional analysis of the oligomeric protein R-phycoerythrin. Biological Research, 37(4). https://doi.org/10.4067/s0716-97602004000500003.
Tobella, L. M., Bunster, M., Pooley, A., Becerra, J., Godoy, F., & Martínez, M. A. (2005). Biosynthesis of poly-beta-hydroxyalkanoates by Sphingopyxis chilensis S37 and Wautersia sp. PZK cultured in cellulose pulp mill effluents containing 2,4,6-trichlorophenol. Journal of industrial microbiology & biotechnology, 32(9), 397–401. https://doi.org/10.1007/s10295-005-0011-1.
Contreras-Martel, C., Matamala, A., Bruna, C., Poo-Caamaño, G., Almonacid, D., Figueroa, M., Martínez-Oyanedel, J., & Bunster, M. (2007). The structure at 2 Å resolution of Phycocyanin from Gracilaria chilensis and the energy transfer network in a PC-PC complex. Biophysical chemistry, 125(2-3), 388–396. https://doi.org/10.1016/j.bpc.2006.09.014.
Figueroa, M., Hinrichs, M. V., Bunster, M., Babbitt, P., Martinez-Oyanedel, J., & Olate, J. (2009). Biophysical studies support a predicted superhelical structure with armadillo repeats for Ric-8. Protein science, 18(6), 1139–1145. https://doi.org/10.1002/pro.124.
Burgos, C. F., Castro, P. A., Mariqueo, T., Bunster, M., Guzmán, L., & Aguayo, L. G. (2015). Evidence for α-helices in the large intracellular domain mediating modulation of the α1-glycine receptor by ethanol and Gβγ. The Journal of pharmacology and experimental therapeutics, 352(1), 148–155. https://doi.org/10.1124/jpet.114.217976.
Sivakumar, R., Manivel, A., Meléndrez, M., Martínez-Oyanedel, J., Bunster, M., Vergara, C., & Manidurai, P. (2015). Novel heteroleptic ruthenium sensitizers containing carbazole linked 4,5-diazafluorene ligand for dye sensitized solar cells. Polyhedron, 87, 135–140. https://doi.org/10.1016/j.poly.2014.11.008.
Vásquez-Suárez, A., Lobos-González, F., Cronshaw, A., Sepúlveda-Ugarte, J., Figueroa, M., Dagnino-Leone, J., Bunster, M., & Martínez-Oyanedel, J. (2018). The γ33 subunit of R-phycoerythrin from Gracilaria chilensis has a typical double linked phycourobilin similar to β subunit. PLOS ONE, 13(4), e0195656. https://doi.org/10.1371/journal.pone.0195656.
References
Chilean biologists
Chilean biochemists
Biophysicists
Bioinformaticians
Crystallographers
University of Concepción alumni
Academic staff of the University of Concepción
Year of birth missing (living people)
Living people
Chilean women scientists
Women biochemists
Women biophysicists
Women bioinformaticians | Marta Bunster | Chemistry,Materials_science,Biology | 2,150 |
73,916,236 | https://en.wikipedia.org/wiki/Frankenia%20serpyllifolia | Frankenia serpyllifolia, commonly known as bristly sea-heath is a flowering plant in the family Frankeniaceae and grows in New South Wales, South Australia, Queensland and the Northern Territory. It is a small, spreading shrub with pink flowers.
Description
Frankenia serpyllifolia is a small, spreading herb to high and in diameter, covered with short, spreading hairs. The leaves are arranged in opposite pairs, long and wide, oval to oblong-shaped, flat or with the margins curved downward, and exude salt. The flowers are pink and mostly five-petalled, with petals long; they are borne singly in leaf axils or in clusters of 2–70 flowers at the base of leaves or at the ends of stems, and the calyx is long. Flowering occurs mostly in spring.
Taxonomy and naming
Frankenia serpyllifolia was first formally described in 1848 by John Lindley and the description was published in Journal of an Expedition into the Interior of Tropical Australia. The specific epithet (serpyllifolia) means "wild thyme-leaved".
Distribution and habitat
Bristly sea-heath grows on heavy soils or flood plains in South Australia, Queensland, New South Wales and the Northern Territory.
References
serpyllifolia
Halophytes
Caryophyllales of Australia
Flora of South Australia
Flora of the Northern Territory
Flora of Queensland
Flora of New South Wales | Frankenia serpyllifolia | Chemistry | 276 |
2,195,233 | https://en.wikipedia.org/wiki/Patern%C3%B2%E2%80%93B%C3%BCchi%20reaction | The Paternò–Büchi reaction, named after Emanuele Paternò and George Büchi, who established its basic utility and form, is a photochemical reaction, specifically a [2+2] photocycloaddition, which forms four-membered oxetane rings from an excited carbonyl compound reacting with an alkene.
With substrates benzaldehyde and 2-methyl-2-butene the reaction product is a mixture of structural isomers:
Another substrate set is benzaldehyde and furan or heteroaromatic ketones and fluorinated alkenes.
The alternative strategy for the above reaction is called the Transposed Paternò−Büchi reaction.
See also
Aza Paternò−Büchi reaction - the aza-equivalent of the Paternò–Büchi reaction
Enone–alkene cycloadditions - photochemical reaction of an enone with an alkene to give a cyclobutene ring unit
References
Photochemistry
Organic reactions
Name reactions
Oxygen heterocycle forming reactions
Coupling reactions | Paternò–Büchi reaction | Chemistry | 221 |
46,777,441 | https://en.wikipedia.org/wiki/ShareSpace%20foundation | ShareSpace is a non-profit educational foundation focused on the benefits of the STEAM disciplines (science, technology, engineering, arts, and math) for both the individual young person and society as a whole.
History
At its founding by astronaut and lunar pioneer Buzz Aldrin in 1998, ShareSpace was intended to be used for the promotion of space tourism, with the larger goal of encouraging commercial space travel and exploration.
Aldrin himself, however, has documented both the challenges facing this goal and the logjam of approaches that has grown up around it. In consequence, ShareSpace has been relaunched with its current STEAM educational focus. An initial result of the new focus was announced by the foundation in May 2015: a strategic partnership with Destination Imagination, another non-profit dedicated to education, which has participants across the United States and in more than 30 other countries.
A STEAM pioneer
ShareSpace includes the arts as one of the core disciplines which it promotes; thus, it uses the acronym STEAM as opposed to STEM:
Just as the term STEM (science, technology, engineering and math), made its big movement in the 80s, STEAM is doing that now. Buzz Aldrin’s ShareSpace Foundation is a strong supporter in the belief that by incorporating “arts” into the STEM equation even greater results will be achieved by people at all stages of their education.
The game is changing. It isn’t just about math and science anymore. It’s about creativity, imagination and above all, innovation. ShareSpace lights the fire and inspires children to explore the incredible world of science, technology, engineering, math AND arts.
In his role as spokesperson for ShareSpace, Aldrin cites the smartphone as an example of an important technological development in which artistry has played a key role.
The larger context
The Apollo program in which Buzz Aldrin participated stands as one of the great historical triumphs of applied education, and the foundation is also in the unique position of being able to draw on the legacy of an astronaut who is remarkable for his own educational exploits.
Aldrin, for example, is the only one of the early astronaut candidates to have entered the program with a doctorate, an ScD in astronautics from the Massachusetts Institute of Technology (MIT).
For a foundation which encourages young people to seize the reins of their own education, the story of Aldrin's doctoral thesis is also relevant. What was to become the Apollo program had been announced by President John F. Kennedy in 1961. Aldrin wanted to be part of it, and so he chose to write his doctoral thesis on a topic which would prove irresistible to NASA: a method by which astronauts might use primitive "line of sight" techniques to accomplish sophisticated orbital rendezvous maneuvers.
References
External links
The official ShareSpace Foundation web site
Astronautics
Spaceflight
Space tourism
Non-profit organizations based in the United States | ShareSpace foundation | Astronomy | 582 |
14,000,891 | https://en.wikipedia.org/wiki/Provocation%20test | A provocation test, also called a provocation trial or provocation study, is a form of medical clinical trial whereby participants are exposed either to a substance or "thing" that is claimed to provoke a response, to a sham substance or device that should provoke no response, or to a severe exercise, as in Erb's test for low serum calcium. An example of a provocation test, performed on an individual, is a skin allergy test.
See also
Blind experiment
Control group
References
Design of experiments
Clinical pharmacology
Epidemiology | Provocation test | Chemistry,Environmental_science | 113 |
39,439,699 | https://en.wikipedia.org/wiki/Battus%20%28trilobite%29 | Battus is a synonym for several agnostid trilobites, now assigned to other genera.
Etymology
In Greek mythology, Battus is a shepherd who witnessed Hermes stealing Apollo's cattle. Because he broke his promise not to reveal this theft, Hermes turned him to stone.
Taxonomy
The name Battus Barrande, 1846 was not available, since Giovanni Antonio Scopoli had already used Battus in 1777 for a genus of swallowtail butterflies.
Trilobite species previously assigned to Battus
A number of species previously assigned to the genus Battus have since been transferred to other genera:
B. bibullatus = Phalacroma bibullatus
B. cuneiferus = Diplorrhina cuneifera
B. granulatum = Pleuroctenium granulatum
B. integer = Peronopsis integer
B. laevigatus = Lejopyge laevigata
B. nudus = Phalagnostus nudus
B. rex = Condylopyge rex
B. tardus = Trinodus tarda
References
Agnostida
Disused trilobite generic names | Battus (trilobite) | Biology | 231 |
642,136 | https://en.wikipedia.org/wiki/Specularity | Specularity is the visual appearance of specular reflections.
In computer graphics
In computer graphics, it means the quantity used in three-dimensional (3D) rendering which represents the amount of reflectivity a surface has. It is a key component in determining the brightness of specular highlights, along with shininess to determine the size of the highlights.
It is frequently used in real-time computer graphics and ray tracing, where the mirror-like specular reflection of light from other surfaces is often ignored (due to the more intensive computations required to calculate it), and the specular reflection of light directly from point light sources is modeled as specular highlights.
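As a concrete illustration of how such a specularity value is typically used together with shininess, here is a minimal Blinn–Phong-style sketch in Python. The function and variable names are generic choices for this example and are not tied to any particular renderer or API.

import numpy as np

def blinn_phong_specular(normal, light_dir, view_dir, specularity, shininess):
    """Specular highlight intensity for one light source.

    specularity scales the brightness of the highlight, while shininess (the
    exponent) controls its size: larger exponents give tighter highlights.
    All direction vectors are assumed normalised and pointing away from the
    surface point.
    """
    half_vec = light_dir + view_dir
    half_vec = half_vec / np.linalg.norm(half_vec)
    n_dot_h = max(0.0, float(np.dot(normal, half_vec)))
    return specularity * n_dot_h ** shininess

# Viewer looking straight down the normal, light arriving at an angle
n = np.array([0.0, 0.0, 1.0])
l = np.array([0.0, 0.6, 0.8])
v = np.array([0.0, 0.0, 1.0])
print(blinn_phong_specular(n, l, v, specularity=0.2, shininess=8))   # dull surface
print(blinn_phong_specular(n, l, v, specularity=1.0, shininess=64))  # polished surface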
Specular mapping
A materials system may allow specularity to vary across a surface, controlled by additional layers of texture maps.
The early misinterpretation of "Specularity" in computer graphics
Early shaders included a parameter called "Specularity". CG artists, confused by this term, discovered by experimentation that manipulating this parameter would cause a reflected highlight from a light source to appear and disappear, and therefore misinterpreted "specularity" to mean "light highlights". In fact, "specular" is defined in optics as "(of reflected light) directed, as from a smooth, polished surface (opposed to diffuse)". A specular surface is a highly smooth surface. When the surface is very smooth, the reflected highlight is easy to see. As the surface becomes rougher, the reflected highlight gets broader and dimmer. This is a more "diffused" reflection.
In seismology
In the context of seismic migration, specularity is defined as the cosine of the angle made by the surface normal vector and the angle bisector of the angle defined by the directions of the incident and diffracted rays. For a purely specular seismic event the value of specularity should be equal to unity, as the angle between the surface normal vector and the angle bisector should be zero, according to Snell's law. For a diffractive seismic event, the specularity can be less than unity. During seismic migration, one can filter each seismic event according to the value of specularity in order to enhance the contribution of diffractions in the seismic image. Alternatively, the events can be separated into different sub-images according to the value of specularity to produce a specularity gather.
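The definition in this paragraph translates directly into a short computation. The following Python sketch (names chosen for this example, not taken from any migration package) returns the specularity of an event given the surface normal and the directions of the incident and diffracted rays:

import numpy as np

def specularity(normal, incident_dir, diffracted_dir):
    """Cosine of the angle between the surface normal and the bisector of the
    incident and diffracted ray directions (1 for a purely specular event).
    The absolute value guards against the sign convention chosen for the rays."""
    def unit(v):
        v = np.asarray(v, dtype=float)
        return v / np.linalg.norm(v)
    bisector = unit(unit(incident_dir) + unit(diffracted_dir))
    return abs(float(np.dot(unit(normal), bisector)))

# Mirror-like geometry: incident and diffracted rays symmetric about the normal
print(specularity([0, 0, 1], [0.5, 0, 0.866], [-0.5, 0, 0.866]))   # approximately 1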
See also
Specular holography
Reflection mapping
References
3D computer graphics
Geophysics | Specularity | Physics | 516 |
74,349,529 | https://en.wikipedia.org/wiki/NGC%207236 | NGC 7236 is an interacting lenticular galaxy located in the constellation Pegasus. It is located at a distance of about 300 million light years from Earth, which, given its apparent dimensions, means that NGC 7236 is about 150,000 light years across. NGC 7236 forms a pair with NGC 7237 and is a radio galaxy. It was discovered by Albert Marth on August 25, 1864.
NGC 7236 forms a pair with the elliptical galaxy NGC 7237, which lies 35 arcseconds to the southeast. The two galaxies are undergoing a merger and are surrounded by hot gas (a corona) with a temperature of around 1 keV. The total mass of that gas is estimated to be . A smaller elliptical galaxy, NGC 7237C, lies 38 arcseconds southeast of NGC 7237. A faint tail emanates from NGC 7236. It is included in the Atlas of Peculiar Galaxies, in the category diffuse counter-tails. A tail is also visible in X-rays. A dust lane runs across the galaxy.
The galaxy pair is a source of radio waves. The radio emission has a double-lobe structure, with filaments but no jets, while a weak core is identified as the nucleus of NGC 7237. The filaments could be created by the interaction of hot gas with the preexisting radio-emitting plasma. Some bright radio sources are visible within the lobes, but they could be background active galaxies.
One supernova has been observed in NGC 7236: SN 2019krv (type Ia, mag. 18.4).
See also
List of NGC objects (7001–7840)
Gallery
References
External links
NGC 7236 on SIMBAD
Lenticular galaxies
Interacting galaxies
Radio galaxies
Pegasus (constellation)
7236
11958
169
442
068384
Discoveries by Albert Marth
Astronomical objects discovered in 1864 | NGC 7236 | Astronomy | 377 |
39,022,230 | https://en.wikipedia.org/wiki/Buccal%20administration | Buccal administration is a topical route of administration by which drugs held or applied in the buccal () area (in the cheek) diffuse through the oral mucosa (tissues which line the mouth) and enter directly into the bloodstream. Buccal administration may provide better bioavailability of some drugs and a more rapid onset of action compared to oral administration because the medication does not pass through the digestive system and thereby avoids first pass metabolism. Drug forms for buccal administration include tablets and thin films.
As of May 2014, the psychiatric drug asenapine; the opioid drugs buprenorphine, naloxone, and fentanyl; the cardiovascular drug nitroglycerin; the nausea medication prochlorperazine; the hormone replacement therapy testosterone; and nicotine as a smoking cessation aid were commercially available in buccal forms, as was midazolam, an anticonvulsant, used to treat acute epileptic seizures.
Buccal administration of vaccines has been studied, but there are challenges to this approach due to immune tolerance mechanisms that prevent the body from overreacting to immunogens encountered in the course of daily life.
Tablets
Buccal tablets are a type of solid dosage form administered orally in between the gums and the inner linings of the cheek. These tablets, held within the buccal pouch, either act on the oral mucosa or are rapidly absorbed through the buccal mucosal membrane. Since drugs "absorbed through the buccal mucosa bypass gastrointestinal enzymatic degradation and hepatic first-pass effect", prescribing buccal tablets is increasingly common among healthcare professionals.
Buccal tablets serve as an alternative drug delivery in patients where compliance is a known issue, including those who are unconscious, nauseated, or having difficulty in swallowing (i.e. dysphagia). A wide variety of these drugs are available on the market to be prescribed in hospitals and other healthcare settings, including common examples like Corlan, Fentora, and Buccastem.
The most common route for drug transport through the buccal mucosa is the paracellular pathway. Most hydrophilic drugs permeate the cheek linings via the paracellular pathway through the mechanism of passive diffusion, and hydrophobic drugs are transported through the transcellular pathway. This route of administration is beneficial for mucosal administration and transmucosal administration. Buccal tablets are typically formulated through the direct compression of drug, powder mixture, swollen polymer, and other agents that assist in processing.
Buccal tablets offer many advantages in terms of accessibility, ease of administration and withdrawal, and hence may improve patient compliance. Notable drawbacks of buccal tablets include the hazard of choking by involuntarily swallowing the tablet and irritation of the gums. Caution should be exercised along with counselling from medical practitioners before use of these tablets.
Clinical uses and common drug examples
With recent advances in buccal tablets, and in conditions where the conventional oral route (i.e. swallowing of a tablet) cannot be used effectively, some commonly prescribed buccal tablets available in healthcare settings are listed below as examples.
Hydrocortisone
Hydrocortisone is a corticosteroid that is clinically used to relieve the pain and discomfort of mouth ulcers and functions to speed the healing of mouth ulcers. Common side effects include: oral thrush, visual disturbances (e.g. blurry vision), worsening of diabetes, worsening of mouth infections, and allergic reactions (e.g. skin rash). Hydrocortisone is contraindicated in patients hypersensitive to hydrocortisone and those with mouth ulcers caused by dentures or infection as it can worsen the severity of mouth ulcers.
Cautions and remarks include the need to gargle and spit with water once the tablet has fully dissolved, to minimise the risk of oral thrush; prolonged use may lead to withdrawal symptoms; chewing or swallowing the tablet may limit its efficacy and give rise to additional side effects; and caution is needed with CYP3A4 inhibitors.
Fentanyl
Fentanyl is an opioid analgesic used for the treatment of breakthrough pain in cancer patients who are already receiving and/or are tolerant to maintenance opioid therapy for chronic cancer pain. Common side effects include nausea, vomiting, headache, constipation and drowsiness. Fentanyl is contraindicated in patients hypersensitive to fentanyl, opioid non-tolerant patients, the management of acute or postoperative pain, and those with severe hypotension or severe obstructive airway diseases (e.g. COPD).
Cautions include keeping the tablets out of the sight and reach of children; the tablets must not be sucked, chewed or swallowed. Other remarks include caution when administered to patients with hepatic or renal impairment, drug interactions with CYP3A4 inducers and inhibitors, and the fact that co-administration with CNS sedative agents (e.g. antihistamines) will increase CNS side effects.
Prochlorperazine maleate
Prochlorperazine maleate is under the class of antiemetics and antipsychotics. These buccal tablets are administered for the treatment of severe nausea and vomiting associated with migraine, as well as in the management of symptoms of schizophrenia. Side effects typically seen in patients using prochlorperazine maleate tablets include drowsiness, blurred vision, dry mouth, and headache. In rare cases, these tablets may cause serious allergic reactions (i.e. anaphylaxis). Prochlorperazine maleate is contraindicated in certain patient groups, including those hypersensitive to prochlorperazine maleate and those with certain diseases such as glaucoma, epilepsy and Parkinson's disease. The tablets are also avoided in those with hepatic and prostate gland problems.
Special caution is taken in patients with a high risk of blood clots and stroke, along with associated risk factors (e.g. high blood pressure and high cholesterol levels). Those taking prochlorperazine maleate should avoid exposure to direct sunlight due to photosensitivity, and should avoid taking certain drugs that are either sedating or cause dry mouth (e.g. anticholinergics) or that act on the heart (e.g. antihypertensives and anticoagulants). Other remarks include that the tablets are most effective when taken after food and that withdrawal symptoms are possible if they are abruptly stopped.
Mechanism of action
The buccal mucosa, along with the gingival and sublingual mucosa, is part of the oral mucosa. It is composed of non-keratinised tissue. Unlike intestinal and nasal mucosae, it lacks tight junctions and is instead equipped with loose intercellular links of desmosomes, gap junctions and hemidesmosomes. While it is less permeable than the sublingual mucosa, buccal administration is still capable of creating local or systemic effects following drug administration. In the oral cavity, buccal tablets exert their effect by entering the bloodstream directly through the internal jugular vein into the superior vena cava, avoiding the acidic hydrolysis that takes place in the gastrointestinal tract.
There are two major routes for drug transportation through the buccal mucosa: transcellular and paracellular pathways.
Small hydrophobic molecules and other lipophilic compounds mostly move across the buccal mucosa via the transcellular pathway. Drugs are transferred via the transcellular pathway through facilitated diffusion for polar or ionic compounds, diffusion for low-molecular-weight molecules, or transcytosis and endocytosis for macromolecules. The physicochemical properties of the drug (for example, its oil/water partition coefficient, molecular weight, and structural conformation) determine whether the molecules are transported through the transcellular pathway.
As the cell membrane is lipophilic, it is more difficult for drugs that are hydrophilic to permeate the membrane. Hence, the excipients of the formulation and the phospholipid bilayer assist in enhancing the diffusion of hydrophilic compounds (i.e. peptides, proteins, macromolecules).
Generally, small low-molecular-weight hydrophilic compounds diffuse across the buccal epithelium through the paracellular pathway via passive diffusion. The extracellular amphiphilic lipid matrix proves to be a major barrier for macromolecular hydrophilic compounds. After the administration of the buccal tablet, it must transport either through the epithelial layers to achieve its effect on the systemic circulation (systemic effect) or remain at a target site to elicit a local effect.
Benefits and limitations
Benefits
Buccal tablets offer many advantages over other solid dosage forms also intended for oral administration (e.g. enteric-coated tablets, chewable tablets, and capsules).
Buccal tablets can be considered in patients who experience difficulty in swallowing, since these tablets are absorbed into the blood stream between the gum and cheek. Difficulty in swallowing can occur in all age groups, especially in young infants and the elderly community. Buccal tablets are also used in unconscious patients. Additionally, in the case of accidental swallowing of a buccal tablet, adverse effects are minimal as most buccal drugs cannot survive hepatic first-pass metabolism.
Compared to orally ingested capsules and tablets, buccal tablets provide a more rapid onset of action because the oral mucosa is highly vascularised. Buccal tablets are also used in emergency situations because they can exert their effects quickly.
Buccal tablets directly enter the systemic circulation, bypassing the gastrointestinal tract and first-pass metabolism in the liver. As such, patients can take a reduced overall dose to minimise symptoms. In addition, buccal tablets can be removed if adverse reactions appear.
Limitations
In general, many drugs are not suitable to be delivered via the buccal mucosa due to the small dose criteria. Buccal tablets are rarely used in healthcare settings due to unwanted properties that may limit patient compliance, for example, unpleasant taste and irritation of the oral mucosa. These undesired characteristics may lead to accidental swallowing or involuntary expulsion of the buccal tablet. Buccal tablets are also not preferred for drugs that require extended-release.
Absorption of drugs via the buccal membrane may not be suitable for all patients. Due to possible undesirable side effects and loss of drug effectiveness, buccal tablets must not be crushed, chewed, or swallowed under any circumstances. As such, buccal tablets are not always appropriate for patients (e.g. individuals on enteral tube feeding). It is also noted that eating, drinking or smoking should be avoided until the buccal tablet is fully dissolved to prevent drug efficacy changes and concerns of choking.
Formulation and manufacturing
Buccal tablets are dry formulations that attain bioadhesion through dehydrating local mucosal surfaces. Many bioadhesive buccal tablet formulations are created through the direct compression method with a release retardant and swollen polymer, and are designed to either release the drug in a unidirectional or multidirectional manner into the saliva.
Conventional dosage forms are unable to ensure therapeutic drug levels in the circulation and the mucosa for mucosal and transmucosal administration because of the washing effect of saliva, and the mechanical stress of the oral cavity. These two mechanisms act as a physiological removal system that removes the formulation from the mucosa, resulting in a decreased exposure time and unpredictable pharmacological profile of the drug's distribution.
This effect can be countered by prolonging the contact between the active substance from the buccal tablet and the mucosa; to achieve this, the tablet should contain mucoadhesive agents, penetration enhancers, enzyme inhibitors and solubility modifiers.
The mucoadhesive agents assist in the maintenance of prolonged contact between the drug with the absorption site. Penetration enhancers improve the ability of the drug to permeate the mucosa for transmucosal delivery or penetrate into the layers of the epithelium for mucosal delivery. Enzyme inhibitors partake in the protection of the drug from mucosal enzyme degradation, and solubility modifiers increase the solubility of drugs that are poorly absorbed.
See also
Sublabial administration
Sublingual administration
Route of administration
Pharmacology
References
External links
Generex Buccal Morphine and Fentanyl research
Mouth
Routes of administration | Buccal administration | Chemistry | 2,645 |
937,519 | https://en.wikipedia.org/wiki/Samuel%20Butler%20%28novelist%29 | Samuel Butler (4 December 1835 – 18 June 1902) was an English novelist and critic, best known for the satirical utopian novel Erewhon (1872) and the semi-autobiographical novel The Way of All Flesh (published posthumously in 1903 with substantial revisions and published in its original form in 1964 as Ernest Pontifex or The Way of All Flesh). Both novels have remained in print since their initial publication. In other studies he examined Christian orthodoxy, evolutionary thought, and Italian art, and made prose translations of the Iliad and Odyssey that are still consulted.
Early life
Butler was born on 4 December 1835 at the rectory in the village of Langar, Nottinghamshire. His father was Rev. Thomas Butler, son of Dr. Samuel Butler, then headmaster of Shrewsbury School and later Bishop of Lichfield. Dr. Butler was the son of a tradesman and descended from a line of yeomen, but his scholarly aptitude being recognised at a young age, he had been sent to Rugby and Cambridge, where he distinguished himself.
His only son, Thomas, wished to go into the Navy but succumbed to paternal pressure and entered the Anglican clergy, in which he led an undistinguished career, in contrast to his father's. Samuel's immediate family created for him an oppressive home environment (chronicled in The Way of All Flesh). Thomas Butler, states one critic, "to make up for having been a servile son, became a bullying father."
Samuel Butler's relations with his parents, especially with his father, were largely antagonistic. His education began at home and included frequent beatings, as was not uncommon at the time. Samuel wrote later that his parents were "brutal and stupid by nature". He later recorded that his father "never liked me, nor I him; from my earliest recollections I can call to mind no time when I did not fear him and dislike him.... I have never passed a day without thinking of him many times over as the man who was sure to be against me." Under his parents' influence, he was set on course to follow his father into the priesthood.
He was sent to Shrewsbury at age twelve, where he did not enjoy the hard life under its headmaster Benjamin Hall Kennedy, whom he later drew as "Dr. Skinner" in The Way of All Flesh. Then, in 1854, he went up to St John's College, Cambridge, where he obtained a first in Classics in 1858. The graduate society of St John's is named the Samuel Butler Room (SBR) in his honour.
Career
After Cambridge, he went to live in a low-income parish in London 1858–1859 as preparation for his ordination into the Anglican clergy; there he discovered that infant baptism made no apparent difference to the morals and behaviour of his peers and began questioning his faith. This experience would later serve as inspiration for his work The Fair Haven. Correspondence with his father about the issue failed to set his mind at peace, instead inciting his father's wrath. As a result, in September 1859, on the ship Roman Emperor, he emigrated to New Zealand.
Butler went there, like many early British settlers of materially privileged origins, to maximise distance between himself and his family. He wrote of his arrival and life as a sheep farmer on Mesopotamia Station in A First Year in Canterbury Settlement (1863), and he made a handsome profit when he sold his farm, but his chief achievement during his time there consisted of drafts and source material for much of his masterpiece Erewhon.
Erewhon revealed Butler's long interest in Darwin's theories of biological evolution. In 1863, four years after Darwin published On the Origin of Species, the editor of a New Zealand newspaper, The Press, published a letter captioned "Darwin among the Machines", written by Butler, but signed Cellarius. It compares human evolution to machine evolution, prophesying that machines would eventually replace humans in the supremacy of the earth: "In the course of ages we shall find ourselves the inferior race". The letter raises many of the themes now debated by proponents of the technological singularity, for example that computers evolve much faster than humans and that we are racing toward an unknowable future through explosive technological change.
Butler also spent time criticizing Darwin, partly because he (in the shadow of a previous Samuel Butler) believed that Darwin had not sufficiently acknowledged his grandfather Erasmus Darwin's contribution to his theory.
Butler returned to England in 1864, settling in rooms in Clifford's Inn (near Fleet Street), where he lived for the rest of his life. In 1872, the Utopian novel Erewhon appeared anonymously, causing some speculation as to who the author was. When Butler revealed himself, Erewhon made him a well-known figure, more because of this speculation than for its literary merits, which have been undisputed.
He was less successful when he lost money investing in a Canadian steamship company and in the Canada Tanning Extract Company, in which he and his friend Charles Pauli were made nominal directors. In 1874 Butler went to Canada, "fighting fraud of every kind" in an attempt to save the company, which collapsed, reducing his own capital to £2,000.
In 1839 his grandfather Dr. Butler had left Samuel property at Whitehall in Abbey Foregate, Shrewsbury, so long as he survived his own father and his aunt, Dr. Butler's daughter Harriet Lloyd. While at Cambridge in 1857 he sold the Whitehall mansion and six acres to his cousin Thomas Bucknall Lloyd, but kept the remaining land surrounding the mansion. When in the 1870s his old Shrewsbury School proposed to relocate to a site at Whitehall, Butler publicly opposed it and the school ultimately moved elsewhere. His aunt died in 1880 and his father's death in 1886 resolved his financial problems for the last 16 years of his own life. The land at Whitehall was sold for housing development; he laid out and named four roads – Bishop and Canon Streets after his grandfather's and father's clerical titles, Clifford Street after his London home, and Alfred Street in gratitude to his clerk.
Butler indulged himself, holidaying in Italy every summer and while there, producing his works on the Italian landscape and art. His close interest in the art of the Sacri Monti is reflected in Alps and Sanctuaries of Piedmont and the Canton Ticino (1881) and Ex Voto (1888). He wrote a number of other books, including a less successful sequel, Erewhon Revisited. His semi-autobiographical novel, The Way of All Flesh, did not appear in print until after his death, as he considered its tone of satirical attack on Victorian morality too contentious at the time.
Death
Butler died on 18 June 1902, aged 66, in a nursing home in St. John's Wood Road, London. By his wish, he was cremated at Woking Crematorium and by differing accounts, his ashes were dispersed or buried in an unmarked grave.
The Way of All Flesh was published after Butler's death by his literary executor, R. A. Streatfeild, in 1903. This version, however, altered Butler's text in many ways and cut important material. The manuscript was edited by Daniel F. Howard as Ernest Pontifex or The Way of All Flesh (Butler's original title) and published for the first time in 1964.
Sexuality
Butler's sexuality has been the subject of academic speculation and debate. Butler never married, although for years he made regular visits to a woman, Lucie Dumas. Herbert Sussman, having arrived at the conclusion that Butler was homosexual, opined that Butler's sexual association with Dumas was merely an outlet for his "intense same-sex desire". Sussman's theory calls Butler's assumption of "bachelorhood" merely a means to retain middle-class respectability in the absence of matrimony; he observes that there is no evidence of Butler's having any "genital contact with other men", but alleges that the "temptations of overstepping the line strained his close male relationships."
His first significant male friendship was with the young Charles Pauli, son of a German businessman in London, whom Butler met in New Zealand. They returned to England together in 1864 and took neighbouring apartments in Clifford's Inn. Butler had made a large profit from the sale of his New Zealand farm and undertook to finance Pauli's study of law by paying him a regular sum, which Butler continued to do long after the friendship had cooled, until Butler had spent all his savings. On Pauli's death in 1892, Butler was shocked to learn that Pauli had benefited from similar arrangements with other men and had died wealthy, but without leaving Butler anything in his will.
After 1878, Butler became close friends with Henry Festing Jones, whom he persuaded to give up his job as a solicitor to be his personal literary assistant and travelling companion, at a salary of £200 a year. Although Jones kept his own lodgings at Barnard's Inn, the two men saw each other daily until Butler's death in 1902, collaborating on music and writing projects in the daytime, and attending concerts and theatres in the evenings; they also frequently toured Italy and other favorite parts of Europe together. After Butler's death, Jones edited Butler's notebooks for publication and published his own biography of him in 1919.
Another friendship was with Hans Rudolf Faesch, a Swiss student who stayed with Butler and Jones in London for two years, improving his English, before departing for Singapore. Both Butler and Jones wept when they saw him off at the railway station in early 1895, and Butler subsequently wrote an emotional poem, "In Memoriam H. R. F.", instructing his literary agent to offer it for publication to several leading English magazines. However, once the Oscar Wilde trial began in the spring of that year, with revelations of homosexual behaviour among the literati, Butler feared being associated with the widely reported scandal and in a panic wrote to all the magazines, withdrawing his poem.
Some critics, beginning with Malcolm Muggeridge in The Earnest Atheist: A Study of Samuel Butler (1936), concluded that Butler was a sublimated or repressed homosexual and that his lifelong status as an "incarnate bachelor" was comparable to that of his writer contemporaries Walter Pater, Henry James, and E. M. Forster, also thought to be closeted homosexuals.
Philosophy and personal thought
Whether in his satire or fiction, Butler's studies on the evidence for Christianity, his works on evolutionary thought, or in his miscellaneous other writings, a consistent theme runs through, stemming largely from his personal struggle against the stifling of his own nature by his parents, which led him to seek more general principles of growth, development, and purpose: "What concerned him was to establish his nature, his aspirations, and their fulfillment upon a philosophic basis, to identify them with the nature, the aspirations, the fulfillment of all humanity – and, more than that, with the fulfillment of the universe.... His struggle became generalized, symbolic, tremendous."
The form that this search took was principally philosophical and – given the interests of the day – biological: "Satirist, novelist, artist, and critic that he was, he was primarily a philosopher," and in particular, a philosopher who looked for biological foundations for his work: "His biology was a bridge to a philosophy of life, which sought a scientific basis for religion, and endowed a naturalistically conceived universe with a soul." Indeed, "philosophical writer" was ultimately the self-description Butler chose as most fitting to his work.
Homer
Butler developed a theory that the Odyssey came from the pen of a young Sicilian woman, and that the scenes of the poem reflected the coast of Sicily (especially the territory of Trapani) and its nearby islands. He described his evidence for this in The Authoress of the Odyssey (1897) and in the introduction and footnotes to his prose translation of the Odyssey (1900). Robert Graves elaborated on the hypothesis in his novel Homer's Daughter.
Butler argued in a lecture entitled "The Humour of Homer", delivered at The Working Men's College in London, 1892, that Homer's deities in the Iliad are like humans, but "without the virtue", and that he "must have desired his listeners not to take them seriously." Butler translated the Iliad (1898). His other works include Shakespeare's Sonnets Reconsidered (1899), a theory that the sonnets, if rearranged, tell a story about a homosexual affair.
Theology
In a book of essays published after his death, entitled God the Known and God the Unknown, Samuel Butler argued for the existence of a single, corporeal deity, declaring belief in an incorporeal deity to be essentially the same as atheism. He asserted that this "body" of God was, in fact, composed of the bodies of all living things on earth, a belief that may be classed as "panzoism". He later changed his views, and decided that God was composed not only of all living things, but of all non-living things as well. He argued, however, that "some vaster Person [may] loom ... out behind our God, and ... stand in relation to him as he to us. And behind this vaster and more unknown God there may be yet another, and another, and another."
Heredity
Butler argued that each organism was not, in fact, distinct from its parents. Instead, he asserted that each being was merely an extension of its parents at a later stage of evolution. "Birth", he once quipped, "has been made too much of."
Evolution
Butler wrote four books on evolution: Life and Habit; Evolution, Old and New; Unconscious Memory; and Luck, or Cunning, As the Main Means of Organic Modification?. Butler accepted evolution but rejected Darwin's theory of natural selection. In his book Evolution, Old and New (1879), he accused Darwin of borrowing heavily from Buffon, Erasmus Darwin and Lamarck, while playing down these influences and giving them little credit. In 1912, the biologist Vernon Kellogg summed up Butler's views:
Butler, though strongly anti-Darwinian (that is, anti-natural selection and anti-Charles Darwin) is not anti-evolutionist. He professes, indeed, to be very much of an evolutionist, and in particular one who has taken it upon his shoulders to reinstate Buffon and Erasmus Darwin, and, as a follower of these two, Lamarck, in their rightful place as the most believable explainers of the factors and method of evolution. His evolution belief is a sort of Butlerized Lamarckism, tracing back originally to Buffon and Erasmus Darwin.
Historian Peter J. Bowler has described Butler as a defender of neo-Lamarckian evolution. Bowler noted that "Butler began to see in Lamarckism the prospect of retaining an indirect form of the design argument. Instead of creating from without, God might exist within the process of living development, represented by its innate creativity."
Butler's writings on evolution were criticised by scientists. Critics have pointed out that Butler admitted to be writing entertainment rather than science, and his writings were not taken seriously by most professional biologists. Butler's books were negatively reviewed in Nature by George Romanes and Alfred Russel Wallace. Romanes stated that Butler's views on evolution had no basis in science.
Gregory Bateson often mentioned Butler and saw value in some of his ideas, calling him "the ablest contemporary critic of Darwinian evolution". He noted Butler's insight into the efficiencies of habit formation (patterns of behaviour and mental processes) in adapting to an environment:
[M]ind and pattern as the explanatory principles, which, above all, required investigation, were pushed out of biological thinking in the later evolutionary theories, which were developed in the mid-nineteenth century by Darwin, Huxley, etc. There were still some naughty boys, like Samuel Butler, who said that mind could not be ignored in this way – but they were weak voices, and, incidentally, they never looked at organisms. I don't think Butler ever looked at anything except his own cat, but he still knew more about evolution than some of the more conventional thinkers.
Music
In Ernest Pontifex or The Way of All Flesh, protagonist Ernest Pontifex says that he had been trying all his life to like modern music but succeeded less and less as he grew older. On being asked when he considers modern music to have begun, he says, "with Sebastian Bach". Butler liked only Handel, and in a letter to Miss Savage said, "I only want Handel's Oratorios. I would have said and things of that sort, but there are no 'things of that sort' except Handel's." With Henry Festing Jones, Butler composed choral works that Eric Blom characterised as "imitation Handel", although with satirical texts. Two of the works they collaborated on were the cantatas Narcissus (private rehearsal 1886, published 1888), and Ulysses (published posthumously in 1904), both for solo voices, chorus, and orchestra. George Bernard Shaw wrote in a private letter that the music was invested with "a ridiculously complete command of the Handelian manner and technique." Around 1871 Butler was engaged as music critic by The Drawing Room Gazette. From 1890 he took counterpoint lessons with W. S. Rockstro.
Legacy and influence
Butler's friend Henry Festing Jones wrote the authoritative biography: the two-volume Samuel Butler, Author of Erewhon (1835–1902): A Memoir (commonly known as Jones's Memoir), published in 1919, and reissued by HardPress Publishing in 2013. Project Gutenberg hosts a shorter "Sketch" by Jones, first published in 1913 in The Humour of Homer and Other Essays and reissued in its own volume 1921 by Jonathan Cape as Samuel Butler: A Sketch. The most recent biography of Butler is Peter Raby's Samuel Butler: A Biography (Hogarth Press, 1991; University of Iowa Press, 1991).
Butler belonged to no literary school and spawned no followers in his lifetime. He was a serious but amateur student of the subjects he undertook, especially religious orthodoxy and evolutionary thought, and his controversial assertions effectively shut him out from both the opposing factions of church and science that played such a large role in late Victorian cultural life: "In those days one was either a religionist or a Darwinian, but he was neither."
His influence on literature, such as it was, came through The Way of All Flesh, which Butler completed in the 1880s, but left unpublished to protect his family, yet the novel, "begun in 1870 and not touched after 1885, was so modern when it was published in 1903, that it may be said to have started a new school", particularly for its use of psychoanalysis in fiction, which "his treatment of Ernest Pontifex [the hero] foreshadows."
Sue Zemka writes that "Among science fiction writers, The Book of the Machines has a canonical status, for it originates the conceit by which machines develop intelligent capacities and enslave mankind." For example, in Frank Herbert's Dune the "Butlerian Jihad" – "in-universe ancient revolt against 'thinking machines' that resulted in their prohibition" – is named after Butler.
The English novelist Aldous Huxley acknowledged the influence of Erewhon on his novel Brave New World. Huxley's Utopian counterpart to Brave New World, Island, also refers prominently to Erewhon. In From Dawn to Decadence, Jacques Barzun asks, "Could a man do more to bewilder the public?"
Main works
Darwin among the Machines (1863, largely incorporated into Erewhon)
Lucubratio Ebria (1865)
Erewhon, or Over the Range (1872)
Life and Habit (1878). Trubner (reissued by Cambridge University Press, 2009; )
Evolution, Old and New; Or, the Theories of Buffon, Dr. Erasmus Darwin, and Lamarck, as compared with that of Charles Darwin (1879)
Unconscious Memory (1880)
Alps and Sanctuaries of Piedmont and the Canton Ticino (1881)
Luck or Cunning as the Main Means of Organic Modification? (1887)
Ex Voto; An Account of the Sacro Monte or New Jerusalem at Varallo-Sesia. With some notice of Tabachetti's remaining work at the Sanctuary of Crea (1888)
The Life and Letters of Dr. Samuel Butler, Head-Master of Shrewsbury School 1798–1836, and Afterwards Bishop of Lichfield, In So Far as They Illustrate the Scholastic, Religious, and Social Life of England, 1790–1840. By His Grandson, Samuel Butler (1896, two volumes)
The Authoress of the Odyssey (1897)
The Iliad of Homer, Rendered into English Prose (1898)
Shakespeare's Sonnets Reconsidered (1899)
The Odyssey of Homer, Rendered into English Prose (1900)
Erewhon Revisited Twenty Years Later: Both by the Original Discoverer of the Country and by His Son (1901)
The Way of All Flesh (1903), text of original manuscript published as Ernest Pontifex or The Way of All Flesh (1964)
God the Known and God the Unknown (1909). This is a revised edition, posthumously published. R.A. Streatfeild's "Prefatory Note" to it states that the original edition "first appeared in the form of a series of articles which were published in 'The Examiner' in May, June and July, 1879."
The Note-Books of Samuel Butler Selections arranged and edited by Henry Festing Jones (1912)
Further Extracts from the Note-Books of Samuel Butler chosen and edited by A.T. Bartholomew (1934)
Samuel Butler's Notebooks Selections edited by Geoffrey Keynes and Brian Hill (1951)
The Family Letters of Samuel Butler 1841-1886 Selected, Edited and Introduced by Arnold Silver (1962)
The Fair Haven (attributed to 'John Pickard Owen', 1873, new edition 1913, revised and corrected edition 1923; considers inconsistencies between the Gospels)
A First Year in Canterbury Settlement With Other Early Essays (1914)
Selected Essays (1927)
Butleriana, A. T. Bartholomew, ed. (1932). The Nonesuch Press
The Essential Samuel Butler selected with an introduction by G. D. H. Cole (1950)
Quis Desiderio..? with engravings by Phillida Gili (1987) Libanus Press
References
Further reading
G. D. H. Cole (1947), Samuel Butler and The Way of All Flesh. London: Home & Van Thal Ltd.
P. N. Furbank (1948), Samuel Butler (1835 - 1902). Cambridge at the University Press.
Mrs. R. S. Garnett (1926), Samuel Butler and His Family Relations. London and Toronto: J. M. Dent & Sons Limited; New York: E. P. Dutton & Co.
Phyllis Greenacre, M.D. (1963), The Quest for the Father: A Study of the Darwin-Butler Controversy, as a Contribution to the Understanding of the Creative Individual. New York: International Universities Press, Inc.
Felix Grendon (1918), Samuel Butler's God. North American Review, Vol. 208, No. 753, pp. 277–286,
John F. Harris (1916), Samuel Butler, Author of Erewhon: The Man and His Work. London: Grant Richards Ltd
Philip Henderson (1954), Samuel Butler: The Incarnate Bachelor. Bloomington: Indiana University Press
Lee Elbert Holt (1941), Samuel Butler and His Victorian Critics. ELH, Vol. 8, No. 2, pp. 146–159. The Johns Hopkins University Press
Lee Elbert Holt (1964), Samuel Butler. New York: Twayne Publishers, Inc.
Thomas L. Jeffers (1981), Samuel Butler Revalued. University Park: Penn State University Press
C. E. M. Joad (1924), Samuel Butler (1835–1902). London: Leonard Parsons
Joseph Jones (1959), The Cradle of Erewhon: Samuel Butler in New Zealand. Austin: University of Texas Press
Steven Mintz (1983), A Prison of Expectations: The Family in Victorian Culture. New York University Press
Malcolm Muggeridge (1936), The Earnest Atheist: A Study of Samuel Butler. London: Eyre & Spottiswoode
James G. Paradis, ed. (2007), Samuel Butler, Victorian against the Grain: A Critical Overview. University of Toronto Press
Peter Raby (1991), Samuel Butler: A Biography. University of Iowa Press.
Robert F. Rattray (1914), The Philosophy of Samuel Butler. Mind, Vol. 23, No. 91, pp. 371–385
Robert F. Rattray (1935), Samuel Butler: A Chronicle and an Introduction. London: Duckworth
Elinor Shaffer (1988), Erewhons of the Eye: Samuel Butler as Painter, Photographer and Art Critic. London: Reaktion Books
George Gaylord Simpson (1961), Lamarck, Darwin and Butler: Three Approaches to Evolution. The American Scholar, Vol. 30, No. 2, pp. 238–249
Clara G. Stillman (1932), Samuel Butler: A Mid-Victorian Modern. New York: The Viking Press
Basil Willey (1960), Darwin and Butler: Two Views of Evolution. New York: Harcourt, Brace and Company
External links
Official English website for European Sacred Mounts
Darwin Among the Machines
1835 births
1902 deaths
Alumni of St John's College, Cambridge
English satirical novelists
Novelists from London
People educated at Shrewsbury School
People from Bingham, Nottinghamshire
19th-century New Zealand farmers
Victorian novelists
19th-century English novelists
Charles Darwin biographers
English male novelists
Lamarckism
Theistic evolutionists
Translators of Ancient Greek texts
Translators of Homer
English science fiction writers | Samuel Butler (novelist) | Biology | 5,370 |
25,181,469 | https://en.wikipedia.org/wiki/Meiotic%20recombination%20checkpoint | The meiotic recombination checkpoint monitors meiotic recombination during meiosis, and blocks the entry into metaphase I if recombination is not efficiently processed.
Generally speaking, the cell cycle regulation of meiosis is similar to that of mitosis. As in the mitotic cycle, these transitions are regulated by combinations of different gene regulatory factors, the cyclin-Cdk complexes and the anaphase-promoting complex (APC). The first major regulatory transition occurs in late G1, when the start of the meiotic cycle is activated by Ime1, rather than by Cln3/Cdk1 as in mitosis. The second major transition occurs at the entry into metaphase I. The main purpose of this step is to make sure that DNA replication has been completed without error so that spindle pole bodies can separate. This event is triggered by the activation of M-Cdk in late prophase I. The spindle assembly checkpoint then examines the attachment of microtubules at kinetochores, followed by initiation of metaphase I by APC-Cdc20. The distinctive pattern of chromosome separation in meiosis (separation of homologous chromosomes in meiosis I and of sister chromatids in meiosis II) requires particular tension between homologous chromatids and non-homologous chromatids for distinguishing microtubule attachments, and it relies on programmed DNA double-strand breaks (DSBs) and their repair in prophase I. The meiotic recombination checkpoint can therefore be regarded as a form of DNA damage response acting at a specific point in the cell cycle. At the same time, the meiotic recombination checkpoint also makes sure that meiotic recombination does occur in every pair of homologs.
DSB-dependent pathway
The abrupt onset of M-Cdk in late prophase I depends on the positive transcription regulation feedback loop consisting of Ime2, Ndt80 and Cdk/cyclin complex. However the activation of M-Cdk is controlled by the general phosphorylation switch Wee1/Cdc25. Wee1 activity is high in early prophase I and the accumulation of Cdc25 activates M-Cdk by direct phosphorylation and marking Wee1 to be degraded.
Meiotic recombination may begin with a double-strand break, either induced by Spo11 or by other endogenous or exogenous causes of DNA damage, and these DSBs must be repaired before metaphase I. The cell monitors these DSBs via the ATM pathway, in which Cdc25 is suppressed when a DSB lesion is detected. This pathway is the same as the classical DNA damage response and is the best-understood part of the meiotic recombination checkpoint.
DSB-independent pathway
The DSB-independent pathway was proposed when spo11 mutant cells were studied in several species and it was found that these cells could not progress to metaphase I even in the absence of DSBs. The direct purpose of these DSBs is to help with the condensation of chromosomes. Even though the initial homolog pairing in early leptotene involves only random interactions, further progression into presynaptic alignment depends on the formation of double-strand breaks and single-strand transfer complexes. Therefore the unsynapsed chromosomes in spo11 mutant cells can be a target of the checkpoint. An AAA-adenosine triphosphatase (AAA-ATPase) was found to be essential in this pathway, but the mechanism is not yet clear. Some other studies have also drawn attention to sex body formation, and the signaling could be either structure-based or a form of transcription regulation such as meiotic sex chromosome inactivation. Under this cascade, failure to synapse maintains gene expression from the sex chromosomes, and some of the products may inhibit cell cycle progression. Meiotic sex chromosome inactivation only happens in males, which may partially explain why only Spo11 mutant spermatocytes, but not oocytes, fail to transition from prophase I to metaphase I. However, asynapsis does not happen only within the sex chromosomes, and this form of transcriptional regulation remained in question until it was extended to all chromosomes as meiotic silencing of unsynapsed chromatin; the effector gene has not yet been identified.
Meiotic checkpoint protein kinases CHEK1 and CHEK2
The central role in meiosis of human and mouse CHEK1 and CHEK2 and their orthologs in Saccharomyces cerevisiae, Caenorhabditis elegans, Schizosaccharomyces pombe and Drosophila has been reviewed by MacQueen and Hochwagen and Subramanian and Hochwagen. During meiotic recombination in human and mouse, CHEK1 protein kinase is important for integrating DNA damage repair with cell cycle arrest. CHEK1 is expressed in the testes and associates with meiotic synaptonemal complexes during the zygonema and pachynema stages. CHEK1 likely acts as an integrator for ATM and ATR signals and in monitoring meiotic recombination. In mouse oocytes CHEK1 appears to be indispensable for prophase I arrest and to function at the G2/M checkpoint.
CHEK2 regulates cell cycle progression and spindle assembly during mouse oocyte maturation and early embryo development. Although CHEK2 is a downstream effector of the ATM kinase that responds primarily to double-strand breaks, it can also be activated by ATR (ataxia-telangiectasia and Rad3 related) kinase, which responds primarily to single-strand breaks. In mouse, CHEK2 is essential for DNA damage surveillance in female meiosis. The response of oocytes to DNA double-strand break damage involves a pathway hierarchy in which ATR kinase signals to CHEK2, which then activates the p53 and p63 proteins.
In the fruitfly Drosophila, irradiation of germ line cells generates double-strand breaks that result in cell cycle arrest and apoptosis. The Drosophila CHEK2 ortholog mnk and the p53 ortholog dp53 are required for much of the cell death observed in early oogenesis when oocyte selection and meiotic recombination occur.
Meiosis-specific Transcription factor Ndt80
Ndt80 is a meiosis-specific transcription factor required for successful completion of meiosis and spore formation. The protein recognizes and binds to the middle sporulation element (MSE) 5'-C[AG]CAAA[AT]-3' in the promoter region of stage-specific genes that are required for progression through meiosis and sporulation. The DNA-binding domain of Ndt80 has been isolated, and the structure reveals that this protein is a member of the Ig-fold family of transcription factors. Ndt80 also competes with the repressor SUM1 for binding to promoters containing MSEs.
Transitions in yeast
When a mutation inactivates Ndt80 in budding yeast, meiotic cells display a prolonged delay in late pachytene, the third stage of prophase. The cells display intact synaptonemal complexes but eventually arrest in the diffuse chromatin stage that follows pachytene. This checkpoint-mediated arrest prevents later events from occurring until earlier events have been executed successfully and prevents chromosome missegregation.
Role in cell cycle progression
Ndt80 is crucial for the completion of prophase and entry into meiosis I, as it stimulates the expression of a large number of middle meiotic genes. Ndt80 is regulated through transcriptional and post-translational mechanisms (i.e. phosphorylation).
Interaction with Clb1
Ndt80 stimulates the expression of the B-type cyclin Clb-1, which greatly interacts with Cdk1 during meiotic divisions. Active complexes of Clb-1 with Cdk1 play a large role in triggering the events of the first meiotic division, and their activity is restricted to meiosis 1.
Interaction with Ime2
Ndt80 stimulates expression of itself and expression of protein kinase Ime2, both of which feedback to further stimulate Ndt80. This increased amount of Ndt80 protein further enhances the transcription of target genes. Early in meiosis 1, Ime2 activity rises and is required for the normal accumulation and activity of Ndt80. However, if Ndt80 is expressed prematurely, it will initially accumulate in an unmodified form. Ime2 can then also act as a meiosis-specific kinase that phosphorylates Ndt80, resulting in fully activated Ndt80.
Expression of Plk
Ndt80 stimulates the expression of the gene that encodes the polo-like kinase, Plk. This protein is activated in late pachytene and is needed for crossover formation and partial loss of cohesion from chromosome arms. Plk is also both necessary and sufficient to trigger exit from pachytene.
Recombination model
The meiotic recombination checkpoint operates in response to defects in meiotic recombination and chromosome synapsis, potentially arresting cells before entry into the meiotic divisions. Because recombination is initiated by double-strand breaks (DSBs) at certain regions of the genome, entry into meiosis I must be delayed until the DSBs are repaired. The meiosis-specific kinase Mek1 plays an important role in this and, recently, it has been discovered that Mek1 is able to phosphorylate Ndt80 independently of IME2. This phosphorylation, however, is inhibitory and prevents Ndt80 from binding to MSEs in the presence of DSBs.
Roles outside of cell cycle progression
Heterokaryon Incompatibility
Heterokaryon incompatibility (HI) has been likened to a fungal immune system; it is a non-self recognition mechanism that is ubiquitous among filamentous members of the Ascomycota phylum of the kingdom Fungi. Vib-1 is an Ndt80 homologue in Neurospora crassa and is required for HI in this species. It has been found that mutations at the vib-1 locus suppress non-self recognition, and VIB-1 is required for the production of downstream effectors associated with HI, such as extracellular proteases.
Female sexual development
Studies have indicated that Ndt80 homologues also play a role in female sexual development in fungi species other than the more commonly studied Saccharomyces cerevisiae. Mutations in vib-1 have been found to affect the timing and development of female reproductive structures prior to fertilization.
Role in Cancer
Although usually characterized in yeast and other fungi, the DNA-binding domain of Ndt80 is homologous to a number of proteins in higher eukaryotes and the residues used for binding are highly conserved. In humans, the Ndt80 homologue C11orf9 is highly expressed in invasive or metastatic tumor cells, suggesting potential usage as a target molecule in cancer treatment. However, not much progress has been made on this front in recent years.
See also
Cell cycle checkpoint
References
DNA repair | Meiotic recombination checkpoint | Biology | 2,346 |
33,026,985 | https://en.wikipedia.org/wiki/Lactarius%20fuliginellus | Lactarius fuliginellus is a species of fungus in the family Russulaceae. Described as a new species in 1962 by American mycologists Alexander H. Smith and Lexemuel Ray Hesler, the mushroom is found in North America.
See also
List of Lactarius species
References
External links
fuliginosus
Fungi described in 1962
Fungi of North America
Taxa named by Alexander H. Smith
Fungus species | Lactarius fuliginellus | Biology | 88 |
44,300,180 | https://en.wikipedia.org/wiki/Limacella%20guttata | Limacella guttata is a mushroom-forming fungus in the family Amanitaceae. Limacella guttata is found in Europe and North America, where it grows in damp woodlands typically dominated by deciduous plants such as ash, beech, and elm. The specific epithet guttata is derived from Latin, meaning "with droplets".
References
External links
Amanitaceae
Fungi described in 1793
Fungi of Europe
Fungi of North America
Fungus species | Limacella guttata | Biology | 93 |
7,428,060 | https://en.wikipedia.org/wiki/Simdesk | Simdesk, fully known as Simdesk Technologies, Inc., formerly Internet Access Technologies, was a Houston-based software-as-a-service provider of on-demand messaging and collaboration tools for business. It was founded by Ray C. Davis in 1999. Early in the company's history, its software was sold to municipal authorities. The company began to offer Simdesk commercially, direct to small businesses, in March 2006. There were several Simdesk resellers, including KDDI in Japan.
Discontinuation
On May 1, 2008, Simdesk ceased operations, terminating retail hosted services for SMB and individual customers in the United States and Latin America. It was announced that externally hosted services based on Simdesk's platform license would remain in place. (As of 2009, the URLs have gone dark.) Its future direction has not been announced.
References
Further reading
External links
Simdesk homepage
Cache of the Simdesk homepage
KDDI Secure Share
Simdesk at Startup Houston
Simdesk: No comment
BlogHouston
Companies established in 1999
Defunct software companies of the United States
Companies disestablished in 2008
Companies based in Houston | Simdesk | Technology | 238 |
5,072,098 | https://en.wikipedia.org/wiki/Square%20lattice%20Ising%20model | In statistical mechanics, the two-dimensional square lattice Ising model is a simple lattice model of interacting magnetic spins. The model is notable for having nontrivial interactions, yet having an analytical solution. The model was solved by Lars Onsager for the special case that the external magnetic field H = 0. An analytical solution for the general case H ≠ 0 has yet to be found.
Defining the partition function
Consider a 2D Ising model on a square lattice Λ with N sites and periodic boundary conditions in both the horizontal and vertical directions, which effectively reduces the topology of the model to a torus. Generally, the horizontal coupling J and the vertical coupling J* are not equal. With β = 1/(k_B T), where T is the absolute temperature and k_B is the Boltzmann constant, the partition function is

Z_N(K, L) = Σ_{σ} exp( K Σ_{⟨ij⟩_H} σ_i σ_j + L Σ_{⟨ij⟩_V} σ_i σ_j ),

where K = βJ and L = βJ*, the outer sum runs over all spin configurations σ = {σ_i = ±1}, and the two inner sums run over pairs of horizontal and vertical nearest neighbours respectively.
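For small lattices the partition function can be evaluated by brute-force enumeration of all 2^N spin configurations, which is a useful sanity check against the closed-form results below. The following sketch is a minimal illustration under the definition above (periodic boundary conditions, dimensionless couplings K and L); the function name is chosen only for this example:

```python
import itertools
import math

def partition_function(n_rows, n_cols, K, L):
    """Brute-force Z_N(K, L) for an n_rows x n_cols square lattice with
    periodic boundary conditions; K and L are the dimensionless horizontal
    and vertical couplings (beta times the coupling constants)."""
    Z = 0.0
    for spins in itertools.product((-1, 1), repeat=n_rows * n_cols):
        s = [spins[i * n_cols:(i + 1) * n_cols] for i in range(n_rows)]
        arg = 0.0
        for i in range(n_rows):
            for j in range(n_cols):
                arg += K * s[i][j] * s[i][(j + 1) % n_cols]   # horizontal bond
                arg += L * s[i][j] * s[(i + 1) % n_rows][j]   # vertical bond
        Z += math.exp(arg)
    return Z

# Example: a 3 x 3 isotropic lattice at K = L = 0.4
print(partition_function(3, 3, 0.4, 0.4))
```

The cost grows as 2^N, so this is only practical for a handful of sites; it is useful mainly for checking series expansions term by term on tiny lattices.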
Critical temperature
The critical temperature T_c can be obtained from the Kramers–Wannier duality relation. Denoting the free energy per site as f(K, L), one has

βf(K*, L*) = βf(K, L) + (1/2) ln[sinh(2K) sinh(2L)],

where the dual couplings K* and L* are defined by

sinh(2K*) sinh(2L) = 1 and sinh(2L*) sinh(2K) = 1.

Assuming that there is only one critical line in the (K, L) plane, the duality relation implies that this is given by the self-dual condition

sinh(2K) sinh(2L) = 1.

For the isotropic case J = J*, one finds the famous relation for the critical temperature T_c:

k_B T_c / J = 2 / ln(1 + √2) ≈ 2.26918
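The critical couplings follow directly from the self-dual condition, as in this minimal numerical sketch (the function name is illustrative only):

```python
import math

# Isotropic case: sinh(2 K_c) = 1, so K_c = (1/2) * asinh(1) = (1/2) ln(1 + sqrt(2))
K_c = 0.5 * math.asinh(1.0)
print(K_c)            # ~ 0.440687  (critical beta*J)
print(1.0 / K_c)      # ~ 2.269185  (k_B * T_c / J)

def critical_L(K):
    """Anisotropic critical line sinh(2K) sinh(2L) = 1: vertical coupling L
    on the critical line for a given horizontal coupling K."""
    return 0.5 * math.asinh(1.0 / math.sinh(2.0 * K))

print(critical_L(K_c))   # returns K_c again, as expected at the isotropic point
```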
Dual lattice
Consider a configuration of spins σ on the square lattice Λ. Let r and s denote the number of unlike neighbours in the vertical and horizontal directions respectively. Then the summand in Z_N corresponding to σ is given by

e^{K(N − 2s) + L(N − 2r)},

since each of the N horizontal bonds contributes K for a like pair and −K for an unlike pair, and similarly for the N vertical bonds.

Construct a dual lattice Λ_D as depicted in the diagram. For every configuration σ, a polygon is associated to the lattice by drawing a line on the edge of the dual lattice if the spins separated by the edge are unlike. Since, in traversing a path around a vertex of Λ, the spins must change sign an even number of times in order to return to the starting value, every vertex of the dual lattice is connected to an even number of lines in the configuration, defining a polygon.

This reduces the partition function to

Z_N(K, L) = 2 e^{N(K + L)} Σ_{P ⊂ Λ_D} e^{−2Lr − 2Ks},

summing over all polygons P in the dual lattice, where r and s are the number of horizontal and vertical lines in the polygon, with the factor of 2 arising from the inversion of the spin configuration.
Low-temperature expansion
At low temperatures, K and L approach infinity, so that e^{−2K} → 0 and e^{−2L} → 0 as T → 0, so that

Z_N(K, L) = 2 e^{N(K + L)} Σ_{P ⊂ Λ_D} e^{−2Lr − 2Ks}

defines a low-temperature expansion of Z_N(K, L) in the small quantities e^{−2K} and e^{−2L}.
High-temperature expansion
Since σ_i σ_j = ±1, one has

e^{K σ_i σ_j} = cosh K + σ_i σ_j sinh K = cosh K (1 + σ_i σ_j tanh K).

Therefore

Z_N(K, L) = (cosh K cosh L)^N Σ_{σ} Π_{⟨ij⟩_H} (1 + v σ_i σ_j) Π_{⟨ij⟩_V} (1 + w σ_i σ_j),

where v = tanh K and w = tanh L. Since there are N horizontal and N vertical edges, there are a total of 2^{2N} terms in the expansion of the product. Every term corresponds to a configuration of lines of the lattice, by associating a line connecting i and j if the term v σ_i σ_j (or w σ_i σ_j) is chosen in the product. Summing over the configurations, using

Σ_{σ_i = ±1} σ_i^n = 2 if n is even, and 0 if n is odd,

shows that only configurations with an even number of lines at each vertex (polygons) will contribute to the partition function, giving

Z_N(K, L) = 2^N (cosh K cosh L)^N Σ_{P ⊂ Λ} v^r w^s,

where the sum is over all polygons P in the lattice Λ, and r and s are the number of horizontal and vertical lines in the polygon. Since tanh K → 0 and tanh L → 0 as T → ∞, this gives the high-temperature expansion of Z_N(K, L).
The two expansions can be related using the Kramers–Wannier duality.
Exact solution
The free energy per site f in the limit N → ∞ is given as follows. Define the parameter k as

k = 1 / (sinh(2K) sinh(2L)).

The Helmholtz free energy per site can be expressed as

−βf = ln 2 + (1/(8π²)) ∫₀^{2π} ∫₀^{2π} ln[cosh(2K) cosh(2L) − sinh(2K) cos θ₁ − sinh(2L) cos θ₂] dθ₁ dθ₂.

For the isotropic case J = J*, from the above expression one finds for the internal energy per site:

U = −J coth(2K) [1 + (2/π)(2 tanh²(2K) − 1) K(k₁)], with k₁ = 2 sinh(2K) / cosh²(2K),

where K(k₁) denotes the complete elliptic integral of the first kind (not to be confused with the coupling K); and the spontaneous magnetization is, for T < T_c,

M = (1 − k²)^{1/8} = [1 − (sinh(2K) sinh(2L))^{−2}]^{1/8},

and M = 0 for T > T_c.
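These closed-form expressions are straightforward to evaluate numerically. The sketch below (isotropic case) assumes the free-energy integral and the magnetization formula exactly as written above; the crude midpoint rule and the function names are choices made only for this example:

```python
import math

def onsager_free_energy(K, n=200):
    """-beta*f for the isotropic square lattice (L = K), evaluating the double
    integral with a simple midpoint rule on an n x n grid."""
    c2, s2 = math.cosh(2*K), math.sinh(2*K)
    total = 0.0
    for i in range(n):
        t1 = 2*math.pi*(i + 0.5)/n
        for j in range(n):
            t2 = 2*math.pi*(j + 0.5)/n
            total += math.log(c2*c2 - s2*(math.cos(t1) + math.cos(t2)))
    return math.log(2.0) + total * (2*math.pi/n)**2 / (8*math.pi**2)

def spontaneous_magnetization(K):
    """Onsager-Yang result M = (1 - sinh(2K)**-4)**(1/8) for T < T_c, else 0."""
    x = math.sinh(2*K)**(-4)
    return (1.0 - x)**0.125 if x < 1.0 else 0.0

K_c = 0.5*math.asinh(1.0)                   # critical coupling ~ 0.4407
print(onsager_free_energy(0.3))             # above T_c (K < K_c)
print(spontaneous_magnetization(0.3))       # 0.0 above T_c
print(spontaneous_magnetization(0.5))       # ~ 0.911 below T_c
```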
Notes
References
Barry M. McCoy and Tai Tsun Wu (1973), The Two-Dimensional Ising Model. Harvard University Press, Cambridge Massachusetts,
John Palmer (2007), Planar Ising Correlations. Birkhäuser, Boston, .
Statistical mechanics
Exactly solvable models
Lattice models | Square lattice Ising model | Physics,Materials_science | 696 |
701,981 | https://en.wikipedia.org/wiki/Hideki%20Shirakawa | Hideki Shirakawa (born 1936) is a Japanese chemist, engineer, and Professor Emeritus at the University of Tsukuba and Zhejiang University. He is best known for his discovery of conductive polymers. He was co-recipient of the 2000 Nobel Prize in Chemistry jointly with Alan MacDiarmid and Alan Heeger.
Early life and education
Hideki Shirakawa was born in Tokyo, Japan, the second son of a military doctor. He had one elder and one younger brother and sister. Olympic marathon champion Naoko Takahashi is his second cousin-niece. He lived in Manchukuo and Taiwan during childhood. Around third grade, he moved to Takayama, Gifu, which is the hometown of his mother.
Shirakawa graduated from Tokyo Institute of Technology (Tokyo Tech) with a bachelor's degree in chemical engineering in 1961, and his doctorate in 1966. Afterward, he obtained the post of assistant in Chemical Resources Laboratory at Tokyo Tech.
Career
While employed as an assistant at Tokyo Institute of Technology (Tokyo Tech) in Japan, Shirakawa developed polyacetylene, which has a metallic appearance. This result interested Alan MacDiarmid when MacDiarmid visited Tokyo Tech in 1975.
In 1976, he was invited to work in the laboratory of Alan MacDiarmid as a post-doctoral fellow at the University of Pennsylvania. The two developed the electrical conductivity of polyacetylene along with American physicist Alan Heeger.
In 1977 they discovered that doping with iodine vapor could enhance the conductivity of polyacetylene. The three scientists were awarded the Nobel Prize in Chemistry in 2000 in recognition of the discovery. With regard to the mechanism of electric conduction, it is strongly believed that nonlinear excitations in the form of solitons play a role.
In 1979, Shirakawa became an assistant professor in the University of Tsukuba; three years later, he advanced to a full professor. In 1991 he was appointed as Tsukuba's Chief of Science and Engineering Department of Graduate School (until March, 1993), and as Tsukuba's Chief of Category #3 group (until March, 1997).
Research
Shirakawa's research on conductive polymers can be broken down into four main categories: polyacetylene thin film synthesis, the causation of metallic conductivity due to chemical doping, the creation of conjugated (double or triple bonds in a molecule which are separated by a single bond) liquid crystalline polymers, and acetylene polymerization development that used liquid crystals as solvents.
Polyacetylene Synthesis: Polyacetylene was expected to have certain properties, with insolubility making the substance difficult to work with. Dr. Shirakawa found that polyacetylene thin films can be synthesized, and with the thin films, the doctor clarified the molecular and solidified structures of polyacetylene.
Creation of Metallic Conductivity: Dr. Shirakawa found that, when a trace of a halogen such as bromine or iodine is added to thin film polyacetylene, its electric conductivity increases, and it exhibits metallic conductivity. Shirakawa found that partial electron transfer between dopants and π-electrons of polyacetylene can generate metallic conductivity.
Using Liquid Crystals to Develop Acetylene Polymerization: Dr. Shirakawa developed a method for the production of highly conductive polyacetylene thin films which paralleled the polymerization of acetylene. Furthermore, he succeeded in the synthesis of thin films of helical polyacetylene whose chirality is controllable.
'Chirality: a property of asymmetry, meaning a molecule is distinguishable from its mirror image; that is, it cannot be superimposed onto it
Creation of Conjugated Liquid Crystalline Polymers: Dr. Shirakawa created self-oriented, conjugated liquid crystalline polymers by introducing liquid crystalline groups into the side chains of p-conjugated polymers such as polyacetylene. He also macroscopically oriented the polymers with electric or magnetic fields and succeeded in having the molecules electric anisotropy.
Electrical anisotropy: in general, the variation of an electrical property depending on the lateral or vertical direction (x, y, z) in which a current flows.
Recognition
1983 – The Award of the Society of Polymer Science, Japan
2000 – SPSJ Award for Outstanding Achievement in Polymer Science and Technology
2000 – Nobel Prize in Chemistry
2000 – Order of Culture and selected as Person of Cultural Merit
2000 – Professor Emeritus of the University of Tsukuba
2001 – Special Award of the Chemical Society of Japan
2001 – Member of the Japan Academy
2006 – Professor Emeritus of the Zhejiang University
The Nobel Prize
Shirakawa was awarded the 2000 Nobel Prize in Chemistry together with UPenn's physics professor Alan J. Heeger and chemistry professor Alan G. MacDiarmid, "for the discovery and development of conductive polymers". He also became the first Japanese Nobel laureate who did not graduate from one of the National Seven Universities and the second Japanese chemistry Nobel laureate.
Over the years, Shirakawa has expressed that he does not want the Nobel Prizes to receive too much special treatment from mass media (especially the Japanese media). He hopes that many vital areas in fields outside the Nobel Prize categories will also become more widely known.
Relatives
One of his relatives, Hitomi Yoshizawa, is a member of the singing group Morning Musume. He is also related to Naoko Takahashi, the women's marathon gold medalist of the 2000 Summer Olympics.
Public issues
On 6 December 2013, Japan's House of Councillors approved the bill of the State Secrecy Law. Shirakawa and physics Nobel laureate Toshihide Maskawa issued a statement saying that the law:
"threatens the pacifist principles and fundamental human rights established by the constitution and should be rejected immediately...(omitted)...Even in difficult times, protecting the freedom of the press, of thought and expression and of academic research is indispensable."
See also
List of Japanese Nobel laureates
Notes
References
Biographical snapshots: Hideki Shirakawa, Journal of Chemical Education web site.
Dr. SHIRAKAWA Hideki - University of Tsukuba. Retrieved December 9, 2022, from https://www.tsukuba.ac.jp/en/about/history/nobel/shirakawa/
External links
Nobel Lecture on 8 December 2000 The Discovery of Polyacetylene Film: The Dawning of an Era of Conducting Polymers
Official Homepage in Japanese
1936 births
Living people
Polymer scientists and engineers
Japanese scientists
Japanese Nobel laureates
Nobel laureates in Chemistry
People from Gifu Prefecture
Recipients of the Order of Culture
Tokyo Institute of Technology alumni
Academic staff of the University of Tsukuba | Hideki Shirakawa | Chemistry,Materials_science | 1,401 |
697,793 | https://en.wikipedia.org/wiki/Thrust-to-weight%20ratio | Thrust-to-weight ratio is a dimensionless ratio of thrust to weight of a rocket, jet engine, propeller engine, or a vehicle propelled by such an engine that is an indicator of the performance of the engine or vehicle.
The instantaneous thrust-to-weight ratio of a vehicle varies continually during operation due to progressive consumption of fuel or propellant and in some cases a gravity gradient. The thrust-to-weight ratio based on initial thrust and weight is often published and used as a figure of merit for quantitative comparison of a vehicle's initial performance.
Calculation
The thrust-to-weight ratio is calculated by dividing the thrust (in SI units – in newtons) by the weight (in newtons) of the engine or vehicle. The weight (N) is calculated by multiplying the mass in kilograms (kg) by the acceleration due to gravity in metres per second squared (m/s²). The thrust can also be measured in pound-force (lbf), provided the weight is measured in pounds (lb); division using these two values still gives the numerically correct (dimensionless) thrust-to-weight ratio. For a valid comparison of the initial thrust-to-weight ratio of two or more engines or vehicles, thrust must be measured under controlled conditions.
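As a minimal sketch of this calculation (the thrust and mass figures below are hypothetical, chosen only for illustration):

G0 = 9.80665  # standard acceleration due to gravity, m/s^2

def thrust_to_weight_si(thrust_newtons, mass_kg):
    """Thrust-to-weight ratio from thrust in newtons and mass in kilograms."""
    weight_newtons = mass_kg * G0  # weight (N) = mass (kg) x g (m/s^2)
    return thrust_newtons / weight_newtons  # dimensionless

def thrust_to_weight_imperial(thrust_lbf, weight_lb):
    """Thrust-to-weight ratio from thrust in pound-force and weight in pounds.
    Since 1 lbf is the weight of 1 lb under standard gravity, the plain
    ratio is already the correct dimensionless value."""
    return thrust_lbf / weight_lb

# Hypothetical example: 130 kN of static thrust on a 10,000 kg vehicle.
print(thrust_to_weight_si(130_000, 10_000))  # about 1.33, i.e. greater than 1:1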
Because an aircraft's weight can vary considerably, depending on factors such as munition load, fuel load, cargo weight, or even the weight of the pilot, the thrust-to-weight ratio is also variable and even changes during flight operations. There are several standards for determining the weight of an aircraft used to calculate the thrust-to-weight ratio range.
Empty weight - The weight of the aircraft minus fuel, munitions, cargo, and crew.
Combat weight - Primarily for determining the performance capabilities of fighter aircraft, it is the weight of the aircraft with full munitions and missiles, half fuel, and no drop tanks or bombs.
Max takeoff weight - The weight of the aircraft when fully loaded with the maximum fuel and cargo with which it can safely take off.
Aircraft
The thrust-to-weight ratio and lift-to-drag ratio are the two most important parameters in determining the performance of an aircraft.
The thrust-to-weight ratio varies continually during a flight. Thrust varies with throttle setting, airspeed, altitude, air temperature, etc. Weight varies with fuel burn and payload changes. For aircraft, the quoted thrust-to-weight ratio is often the maximum static thrust at sea level divided by the maximum takeoff weight. Aircraft with thrust-to-weight ratio greater than 1:1 can pitch straight up and maintain airspeed until performance decreases at higher altitude.
A plane can take off even if the thrust is less than its weight as, unlike a rocket, the lifting force is produced by lift from the wings, not directly by thrust from the engine. As long as the aircraft can produce enough thrust to travel at a horizontal speed above its stall speed, the wings will produce enough lift to counter the weight of the aircraft.
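This can be made quantitative with a standard steady-flight relation (a textbook result, not a figure from this article): in unaccelerated level flight, lift equals weight and thrust need only balance drag, so the minimum thrust-to-weight ratio required is

$$\left(\frac{T}{W}\right)_{\text{min}} = \frac{D}{L} = \frac{1}{L/D}$$

For example, an aircraft cruising at a lift-to-drag ratio of about 17 needs a thrust-to-weight ratio of only about 0.06 to sustain level flight, which is why the lift-to-drag ratio is the other key performance parameter mentioned above.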
Propeller-driven aircraft
For propeller-driven aircraft, the thrust-to-weight ratio can be calculated as follows in imperial units:

$$\frac{T}{W} = \frac{550\,\eta_p\,\mathrm{hp}}{V \times \mathrm{weight}}$$

where $\eta_p$ is propulsive efficiency (typically 0.65 for wooden propellers, 0.75 for metal fixed-pitch propellers and up to 0.85 for constant-speed propellers), hp is the engine's shaft horsepower, $V$ is true airspeed in feet per second, and weight is in pounds; the factor 550 converts horsepower to foot-pounds-force per second.

The metric formula is:

$$\frac{T}{W} = \frac{\eta_p\,P}{V\,m\,g}$$

where $P$ is engine power in watts, $V$ is true airspeed in metres per second, $m$ is the aircraft mass in kilograms and $g$ is the acceleration due to gravity.
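A short sketch of these formulas (the efficiency, power, speed and weight values are assumed purely for illustration, not taken from any real aircraft):

G0 = 9.80665  # m/s^2

def prop_tw_imperial(eta_p, shaft_hp, true_airspeed_ft_s, weight_lb):
    """Propeller-aircraft thrust-to-weight ratio in imperial units.
    550 ft*lbf/s per horsepower converts shaft power to thrust power."""
    thrust_lbf = 550.0 * eta_p * shaft_hp / true_airspeed_ft_s
    return thrust_lbf / weight_lb

def prop_tw_metric(eta_p, power_watts, true_airspeed_m_s, mass_kg):
    """Propeller-aircraft thrust-to-weight ratio in SI units."""
    thrust_newtons = eta_p * power_watts / true_airspeed_m_s
    return thrust_newtons / (mass_kg * G0)

# Hypothetical light aircraft: 0.80 efficiency, 180 hp, 203 ft/s (about 120 knots), 2,400 lb.
print(prop_tw_imperial(0.80, 180, 203, 2400))  # roughly 0.16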
Rockets
The thrust-to-weight ratio of a rocket, or rocket-propelled vehicle, is an indicator of its acceleration expressed in multiples of gravitational acceleration g.
Rockets and rocket-propelled vehicles operate in a wide range of gravitational environments, including the weightless environment. The thrust-to-weight ratio is usually calculated from initial gross weight at sea level on earth and is sometimes called thrust-to-Earth-weight ratio. The thrust-to-Earth-weight ratio of a rocket or rocket-propelled vehicle is an indicator of its acceleration expressed in multiples of earth's gravitational acceleration, g.
The thrust-to-weight ratio of a rocket improves as the propellant is burned. With constant thrust, the maximum ratio (maximum acceleration of the vehicle) is achieved just before the propellant is fully consumed. Each rocket has a characteristic thrust-to-weight curve, or acceleration curve, not just a scalar quantity.
The thrust-to-weight ratio of an engine is greater than that of the complete launch vehicle, but is nonetheless useful because it determines the maximum acceleration that any vehicle using that engine could theoretically achieve with minimum propellant and structure attached.
For a takeoff from the surface of the earth using thrust and no aerodynamic lift, the thrust-to-weight ratio for the whole vehicle must be greater than one. In general, the thrust-to-weight ratio is numerically equal to the g-force that the vehicle can generate. Take-off can occur when the vehicle's g-force exceeds local gravity (expressed as a multiple of g).
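A small sketch of these two points, that lift-off requires a ratio greater than one and that the ratio improves as propellant is burned (the thrust and mass figures are entirely hypothetical):

G0 = 9.80665  # m/s^2

def rocket_tw(thrust_newtons, mass_kg):
    """Instantaneous thrust-to-(Earth-)weight ratio."""
    return thrust_newtons / (mass_kg * G0)

# Hypothetical rocket: 1.5 MN of constant thrust, 100 t at ignition, 30 t at burnout.
thrust = 1.5e6
for mass_kg in (100_000, 65_000, 30_000):
    ratio = rocket_tw(thrust, mass_kg)
    net_accel_g = ratio - 1.0  # net upward acceleration in g for a vertical, drag-free ascent
    print(mass_kg, round(ratio, 2), round(net_accel_g, 2))
# The ratio rises from about 1.53 at ignition to about 5.10 just before burnout,
# so the vehicle can lift off (ratio > 1) and accelerates harder as propellant is consumed.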
The thrust-to-weight ratio of rockets typically greatly exceeds that of airbreathing jet engines because the comparatively far greater density of rocket fuel eliminates the need for much engineering materials to pressurize it.
Many factors affect thrust-to-weight ratio. The instantaneous value typically varies over the duration of flight with the variations in thrust due to speed and altitude, together with changes in weight due to the amount of remaining propellant, and payload mass. Factors with the greatest effect include freestream air temperature, pressure, density, and composition. Depending on the engine or vehicle under consideration, the actual performance will often be affected by buoyancy and local gravitational field strength.
Examples
Aircraft
Jet and rocket engines
Fighter aircraft
Table for Jet and rocket engines: jet thrust is at sea level
Fuel density used in calculations: 0.803 kg/l
For the metric table, the T/W ratio is calculated by dividing the thrust by the product of the full fuel aircraft weight and the acceleration of gravity.
J-10's engine rating is of AL-31FN.
See also
Power-to-weight ratio
Factor of safety
Notes
References
John P. Fielding. Introduction to Aircraft Design. Cambridge University Press.
Daniel P. Raymer (1989). Aircraft Design: A Conceptual Approach. American Institute of Aeronautics and Astronautics, Inc., Washington, DC.
George P. Sutton & Oscar Biblarz. Rocket Propulsion Elements. Wiley.
External links
NASA webpage with overview and explanatory diagram of aircraft thrust to weight ratio
Jet engines
Rocket engines
Engineering ratios | Thrust-to-weight ratio | Mathematics,Technology,Engineering | 1,327 |
40,769,155 | https://en.wikipedia.org/wiki/Orbiting%20Astronomical%20Observatory%202 | The Orbiting Astronomical Observatory 2 (OAO-2, nicknamed Stargazer) was the first successful space telescope (its predecessor, OAO-1, failed to operate once in orbit), launched on December 7, 1968. An Atlas-Centaur rocket launched it into a nearly circular Earth orbit. Data was collected in the ultraviolet on many sources including comets, planets, and galaxies. It had two major instrument sets facing in opposite directions: the Smithsonian Astrophysical Observatory (SAO) and the Wisconsin Experiment Package (WEP). One discovery was large halos of hydrogen gas around comets, and it also observed Nova Serpentis, a nova discovered in 1970.
Celescope: Smithsonian Astrophysical Observatory
The Smithsonian Astrophysical Observatory, also called Celescope, had four 12 inch (30.5 cm) Schwarzschild telescopes that fed into Uvicons. The Uvicon was an ultra-violet light detector based on the Westinghouse Vidicon. Ultraviolet light was converted into electrons which were in turn converted to a voltage as those electrons hit the detection area of the tube. There has been a Uvicon in the collection of the Smithsonian Institution since 1973.
Various filters, photocathodes, and electronics aided in collecting data in several ultraviolet light passbands. The detectors showed a gradual loss of sensitivity and the experiment was turned off in April 1970. By the time it finished about 10 percent of the sky was observed resulting in a catalog of 5,068 UV stars.
Wisconsin Experiment Package
The Wisconsin Experiment Package had seven different telescopes for ultraviolet observations. For example, there was a nebular photoelectric photometer fed by a 16-inch (40.64 cm) telescope with a six-position filter wheel that unfortunately failed a few weeks after launch.
Construction was supervised by Arthur Code of the University of Wisconsin-Madison. WEP observed over 1200 targets in ultraviolet light before the mission ended in early 1973.
Discoveries
In addition to the Celescope's catalog of UV stars, the WEP observed comet Tago-Sato-Kosaka and found it to be surrounded by a cloud of hydrogen, confirming that the comet was largely made up of water, and detected the 2175-angstrom bump, an increase in UV absorption at that wavelength that is still not fully explained.
Spacecraft bus
The observatory was built in the shape of an octagonal prism. It measured about and weighed .
See also
Orbiting Astronomical Observatory
Orbiting Solar Observatory
References
External links
OAO 2 observations of the Alpha Persei cluster
OAO-2 Info and pics
50th Anniversary Overview of OAO-2 including video
Spacecraft launched in 1968
Orbiting Astronomical Observatory
Ultraviolet telescopes | Orbiting Astronomical Observatory 2 | Astronomy | 538 |
16,036,981 | https://en.wikipedia.org/wiki/Lava%20filter | A lava filter is a biological filter that uses lava stone pebbles as support material on which microorganisms can grow in a thin biofilm. This community of microorganisms, known as the periphyton, breaks down the odor components in the air, such as hydrogen sulfide. The biodegradation is carried out by the bacteria themselves. In order for this to work, sufficient oxygen as well as water and nutrients (for cell growth) must be supplied.
Method
Contaminated air enters the system at the bottom of the filter and passes in an upward direction through the filter. Water is supplied through the surface of the biofilter and trickles down over the lava rock to the bottom, where it is collected. Constant water provisioning at the surface prevents dry-out of the active bacteria in the biofilm and ensures a constant pH value in the filter. It also functions to make nutrients available to the bacteria.
Percolating water collected at the filter bottom contains odour components as well as sulfuric acid from the biological oxidation of hydrogen sulfide. Depending on the process design the collected water is recirculated or subjected to further treatment.
Types of systems
At present, two types of systems are used:
constantly submerged lava filters (for treatment ponds, combined treatment ponds/irrigation reservoirs, ...)
non-submerged lava filters (for wastewater treatment; wastewater is simply sprayed onto the pebbles with this system)
Constantly submerged lava filters
These are constructed from two layers of lava pebbles and a top layer of nutrient-free soil (only at the plants' roots). On top, water-purifying plants (such as Iris pseudacorus and Sparganium erectum) are placed. Usually, lava stone covering around 1/4 of the area of the water body is required to purify the water, and, just as with slow sand filters, a series of herringbone drains is installed (in lava filters these are placed in the bottom layer).
The water-purifying plants used with constantly submerged, planted lava filters (e.g. treatment ponds, self-purifying irrigation reservoirs, ...) include a wide variety of plants, depending on the local climate and geographical location. Plants indigenous to the location are usually chosen, both for environmental reasons and for optimal working of the system. In addition to water-purifying (nutrient-removing) plants, plants that supply oxygen and shade are also added in ecological water catchments and ponds; this allows a complete ecosystem to form. Finally, in addition to plants, locally grown bacteria and non-predatory fish are also added to eliminate pests. The bacteria are usually grown locally by submerging straw in water and allowing bacteria (arriving from the surrounding environment) to colonise it. The plants used (placed over an area of about 1/4 of the water surface) are divided among four separate water-depth zones, namely:
A water-depth zone from 0–20 cm; Iris pseudacorus, Sparganium erectum, ... may be placed here (temperate climates)
A water-depth zone from 40–60 cm; Stratiotes aloides, Hydrocharis morsus-ranae, ... may be placed here (temperate climates)
A water-depth zone from 60–120 cm; Nymphea alba, ... may be placed here (temperate climates)
A submerged water-depth zone; Myriophyllum spicatum, ... may be placed here (temperate climates)
Finally, three types of (non-predatory) fish (surface, middle and bottom swimmers) are chosen, so that the fish can coexist. Examples of the three types of fish (for temperate climates) are:
Surface swimming fish: Leuciscus leuciscus, Leuciscus idus, Scardinius erythrophthalmus
Middle-swimmers: Rutilus rutilus
Bottom-swimming fish: Tinca tinca
See also
Constructed wetland
Treatment pond
Organisms used in water purification
References
Water filters
Appropriate technology
Environmental soil science
DIY culture | Lava filter | Chemistry,Environmental_science | 852 |
43,042,866 | https://en.wikipedia.org/wiki/History%20of%20parks%20and%20gardens%20of%20Paris | Paris today has more than 421 municipal parks and gardens, covering more than three thousand hectares and containing more than 250,000 trees. Two of Paris's oldest and most famous gardens are the Tuileries Garden, created in 1564 for the Tuileries Palace, and redone by André Le Nôtre in 1664; and the Luxembourg Garden, belonging to a château built for Marie de' Medici in 1612, which today houses the French Senate. The Jardin des Plantes was the first botanical garden in Paris, created in 1626 by Louis XIII's doctor Guy de La Brosse for the cultivation of medicinal plants. Between 1853 and 1870, the Emperor Napoleon III and the city's first director of parks and gardens, Jean-Charles Adolphe Alphand, created the Bois de Boulogne, the Bois de Vincennes, Parc Montsouris and the Parc des Buttes Chaumont, located at the four points of the compass around the city, as well as many smaller parks, squares and gardens in the neighborhoods of the city. One hundred sixty-six new parks have been created since 1977, most notably the Parc de la Villette (1987–1991) and Parc André Citroën (1992).
Some of the most notable recent gardens of Paris are not city parks, but parks belonging to museums, including the gardens of the Rodin Museum and the Musée du quai Branly or smaller intimate gardens of the Musée Delacroix or Musée de la Vie romantique.
From Roman times through the Middle Ages
Gardens existed in Paris in Roman times and the Middle Ages, either to produce fruits, vegetables and medicinal herbs; for the meditation of monks; or for the pleasure of the nobility; but no trace remains of the original gardens of the Roman town of Lutetia.
The royal palace on the Île de la Cité had a walled garden located at the southern point of the palace, near where the statue of Henry IV on the Pont Neuf is today. The garden disappeared when the Place Dauphine was built in the early 17th century.
The monasteries on the left bank had extensive gardens and orchards from the Middle Ages until the French Revolution. The Jardin des Plantes was built on land that originally belonged to the Abbey of Saint Victor, and a large part of today's Luxembourg Garden belonged to the neighboring Monastery of the Chartreux. A small modern recreation of a Medieval garden is found today next to the Cluny Museum, the former residence of the Abbot of Cluny.
Renaissance Gardens and Gardens a la Française (1564-1700)
In 1495, King Charles VIII and his nobles imported the Renaissance garden style from Italy after their unsuccessful Italian War of 1494–1498. The new French Renaissance garden was characterized by symmetrical and geometric planting beds or parterres; plants in pots; paths of gravel and sand; terraces; stairways and ramps; moving water in the form of canals, cascades and monumental fountains, and extensive use of artificial grottoes, labyrinths and statues of mythological figures. They also featured a long axis perpendicular to the palace, with bodies of water and a view of the whole garden. They were designed to illustrate the Renaissance ideals of measure and proportion, and to remind viewers of the virtues of Ancient Rome. The French kings imported not only the ideas, but also Italian gardeners, landscape architects, and fountain-makers to create their gardens. The first examples in France were far from Paris, where there was more space for big gardens; the gardens of the royal Château d'Amboise (1496), the Château de Blois (c. 1500), Château de Fontainebleau (1528), and the Château de Chenonceau (1521), with additions by Catherine de' Medici in 1560.
In the mid-17th century, under Louis XIV, the French formal garden, or Jardin à la française, gradually replaced the Renaissance style garden; it was more formal and geometric, and symbolized the dominance of the King over nature. The most famous example was the gardens of Versailles, made by André Le Nôtre beginning in 1661. Le Nôtre remade the Tuileries Gardens in the new style beginning in 1664.
Jardins des Tuileries (1564)
The first royal garden of the Renaissance in Paris was the Jardin des Tuileries, created for Catherine de' Medici in 1564 to the west of her new Tuileries Palace. It was inspired by the gardens of her native Florence, particularly the Boboli Gardens, and made by a Florentine gardener, Bernard de Carnesecchi. The garden was laid out along the Seine, and divided into squares of fruit trees and vegetable gardens divided by perpendicular alleys and by boxwood hedges and rows of cypress trees. Like Boboli, it featured a grotto, with faience "monsters" designed by Bernard Palissy, whom Catherine had assigned to discover the secret of Chinese porcelain.
Under Henry IV, the old garden was rebuilt, following a design of Claude Mollet, with the participation of Pierre Le Nôtre, the father of the famous garden architect. A long terrace was built on the north side, looking down at the garden, and a circular basin was constructed, along with an octagonal basin on the central axis.
In 1664 the garden was remade again by André Le Nôtre in the style of the classic French formal garden, with parterres bordered with low shrubs and bodies of water organized along a wide central axis. He added the Grand Carré around the circular basin at the east end of the garden, and the horseshoe-shaped ramp at the west end, leading to a view of the entire garden.
In 1667, Charles Perrault, the author of Sleeping Beauty and other famous fairy tales, proposed to Louis XIV that the garden be opened at times to public. His proposal was accepted, and the public (with the exception of soldiers in uniform, servants and beggars) were allowed on certain days to promenade in the park.
Cours-la-Reine (Cours-Albert-I) (1616)
The Cours-la-Reine (part of which today is named Cours-Albert-I after the King of Belgium during the First World War) was created by Marie de' Medici, who, like Catherine de' Medici, was nostalgic for her native Florence. It was a long promenade (1.5 kilometers) along the Seine, originally planted with four long rows of elm trees. Built before the Champs-Élysées, it was a popular promenade for the nobility, on foot or on horseback.
Place Royale (now Place des Vosges) (1605–1612)
Place Royale (renamed Place des Vosges in 1800) is a residential square and public park ordered by Henry IV and built between 1605 and 1612. In his ordinance, the King called for "a place to promenade for the residents of Paris who are closely pressed together in their houses." The square was 108 meters long on each side, and lined with houses of the same height and in the same style. The center of the square was empty until 1639, when it was filled with an equestrian statue of Louis XIII. The square was divided into flower beds and lawns by diagonal alleys. The statue was destroyed during the French Revolution, then replaced with a new statue in 1822 during the Restoration. Four fountains were added in 1840.
Place Dauphine (1607) and Square du Vert-Galant (1884)
Place Dauphine was the second planned residential square, after the Place-Royale, built by order of Henry IV. It was located on the southern point of the Île de la Cité, on the site of the garden of the old royal palace. It was named for the future King Louis XIII, and was built in the shape of a triangle, with the point touching the Pont Neuf, which had been finished in 1606. A statue of Henry IV on horseback was placed on the bridge at the entrance to the square in 1614, at the suggestion of his widow, Marie de' Medici. The original statue was destroyed during the French Revolution, but was replaced by a new statue in 1821.
The construction of the bridge joined two small islands to the Île de la Cité; one of these islands, the Île aux Juifs, had been the site where the last Grand Master of the Knights Templar, Jacques de Molay, had been burned at the stake in 1314. The point of the island below the bridge and the statue of Henry IV was turned into a public park in 1884, the Square du Vert-Galant. It took its name from the nickname of Henry IV of France, famous for his romantic affairs.
Jardin des Plantes (1626)
The Jardin des Plantes, originally called the Jardin royal des herbes médicinales, was opened in 1626, under the supervision of Guy de La Brosse, the physician of King Louis XIII. Its original purpose was to provide medicinal plants for the court. It was built on land purchased from the neighboring Abbey of Saint Victor. A labyrinth and belvedere from 1840 can still be seen in the northwest section of the garden. In 1640, it became the first Paris garden to open to the public.
In the eighteenth century, under the French natural scientist Georges-Louis Leclerc, Comte de Buffon, who directed it from 1739 to 1788, the garden was doubled in size by an exchange of land with the Abbey, extending down to the banks of the Seine, and was greatly expanded with the addition of trees and plants brought by French explorers from around the world. One can see today a Robinia tree planted in 1636, and a sophora from 1747.
In 1793, after the Revolution, the royal garden became the National Museum of Natural History, and a zoo was added, with animals brought from the Palace of Versailles. The garden was expanded again, and a school of botany founded. The first greenhouse was built in 1833 by Charles Rohault de Fleury, a pioneer in the use of iron in architecture. The first of the large greenhouses seen today was built by Jules André in 1879; the greenhouse of cactuses by Victor Blavette in 1910; and the tropical greenhouse, 55 meters long, by René Berger in 1937. The alpine garden was added in 1931, and the rose garden and the Jardin des vivaces were added in 1964.
Jardin du Luxembourg (1630)
The Jardin du Luxembourg was created by Marie de' Medici, the widow of Henry IV, between 1612 and 1630. It was placed behind the Luxembourg Palace, an imitation of the Pitti Palace in her native Florence. She began by planting two thousand elm trees, and commissioned a Florentine gardener, Tommaso Francini, to build the terraces and parterres, and the circular basin in the center. The Medici Fountain was probably also the work of Francini, though it is sometimes attributed to Salomon de Brosse, the architect of the palace. After Marie de' Medici's death, the garden was largely neglected. The last royal owner was the Count de Provence, the future King Louis XVIII, who sold the eastern part of the garden for building lots.
After the French Revolution, the government of the French Directory nationalized the large pépinière, or nursery, of the neighboring monastery of Chartreux, and attached it to the garden. In 1862, during the Second Empire, Georges-Eugène Haussmann replanted and restored the gardens, but also took a portion of the nursery in order to make room for two new streets, Rue Auguste Comte and Place André Honnorat. The Medici fountain was moved back to make room for the Rue de Medicis, and the present long basin and the statuary were added to the fountain.
During the reign of Louis-Philippe, who was fond of heroes of French history, the garden was decorated with the statues of the Queens of France and French women saints. During the French Third Republic, the government added the statues of writers, painters, composers, mythical figures, and a miniature of the Statue of Liberty by Frédéric Auguste Bartholdi, bringing the number of statues to more than seventy.
Jardin du Palais-Royal (1629)
The garden of the Palais-Royal was built by Cardinal Richelieu, after he bought the hôtel d'Angennes in 1623 and turned it into his own residence, the Palais-Cardinal. When he died he left it to Louis XIV, who had played in the gardens as a child, and in 1643 it became the Palais-Royal. The garden, designed by Claude Desgots, featured two rows of elm trees, elaborate parterres and ornamental flower beds, statues, fountains and two basins, and a grove of trees at one end. It became the property of Monsieur, the brother of Louis XIV, in 1692, and thereafter belonged to members of Orleans branch of the dynasty. After the death of Louis XIV, it became the residence of the Regent, Philippe II, Duke of Orléans, whose dissolute lifestyle provoked many scandals. The gardens became a popular meeting place for writers, and also a place frequented by prostitutes.
A fire destroyed much of the Palais in 1773, and the owner, Louis Philippe II, Duke of Orléans, decided to turn it into a profit-making establishment. He built an arcade of shops and cafés encircling the garden, with residences above, with a wooden gallery for promenading around the garden. A circus was established in the center for horseback riding. Since it was private property, the police were not permitted access. In the years before and after the Revolution, the garden and surrounding buildings became the meeting place for revolutionaries, for political debate, gambling and higher-class prostitution. The Théâtre-Français, the future Comédie-Française, was established there in 1787. During the Revolution, the garden and arcade were nationalized, and the heads of those guillotined were on several occasions carried on pikes by a crowd around the garden, in front of the diners in the cafés. The owner of the garden, though he supported the Revolution and changed his name to Philippe-Égalité, was guillotined in 1793.
After the Restoration, the new Duke of Orléans, Louis-Philippe, recovered his property and returned it to respectability. He expelled the gambling salons and prostitutes, and had the garden and arcade rebuilt into roughly the way they look today. The Palais-Royal was burned by the Communards in May 1871, along with the Tuileries Palace and other symbols of royalty, and was later rebuilt. During the 20th century, the residences overlooking the garden were the home of many French celebrities, including André Malraux, Jean Cocteau and Colette. Two works of modern sculpture were added in 1986: an arrangement of columns by Daniel Buren, and a fountain-sculpture of steel balls by Pol Bury.
English gardens and follies (1700–1800)
Beginning in the mid-18th century, the French landscape garden began to replace the more formal and geometric jardin à la française. The new style originated in England in the early 18th century as the English landscape garden or Anglo-Chinese garden. It was inspired by the idealized romantic landscapes and the paintings of Hubert Robert, Claude Lorrain and Nicolas Poussin, European ideas about Chinese gardens, and the philosophy of Jean-Jacques Rousseau. The earliest example of the style in France was the Moulin Joli (1754–1772), along the Seine between Colombes and Argenteuil. The most famous was the Hameau de la Reine of Marie Antoinette in the gardens of Versailles (1774–1779).
In the late 18th century the Paris town houses of the French aristocracy, both on the right bank and left bank, usually had gardens in the English style. The largest private gardens in Paris were (and still are) those of the Élysée Palace (1722), now the residence of the President of France, and of the Hôtel Matignon (1725), the official residence of the Prime Minister of France (neither is open to the public).
The folly was a specific kind of Paris park that appeared in the end of the eighteenth century. Privately owned but open to the public, they were designed for both amusement and instruction, filled with architectural models from different parts of the world and different centuries. Most of the early Paris follies had a short life and were divided into building lots: the most famous follies were the Follies of Bouexière (1760), Boutin (1766), Beaujon (1773) and Folie Saint James (1777–1780). The one survivor, much transformed, is the Parc Monceau.
Parc Monceau (1778)
Parc Monceau was established by Philippe d'Orléans, Duke of Chartres, a cousin of King Louis XVI, wealthy, and active in court politics and society. In 1769 he had begun purchasing the land where the park is located. In 1778, he decided to create a public park, and employed the writer and painter Louis Carrogis Carmontelle to design the gardens. In 1778, when the garden opened, Carmontelle was accused of imitating the English garden. In his book of images of the garden published in 1779, he responded: "It is not at all an English garden that we made at Monceau... we reunited in one single garden all times and all places. It is a simple fantasy, the desire to have an extraordinary garden, a pure amusement, and not a desire to imitate a nation, which, in making "natural" gardens, runs a roller over all the lawns and ruins nature." The park contained dozens of fabriques, or constructions, including an Egyptian pyramid, antique sculptures, a Gothic ruin, a Tatar tent, a Dutch windmill, a minaret, a Roman temple, and an enchanted grotto. Visitors to the garden were instructed to follow a certain path, from site to site. The experience was enhanced by park employees in exotic costumes, and camels and other rare animals.
Beginning in 1781, most of the fabriques were removed, and Parc Monceau was remade into a more traditional English landscape garden. During the reign of Napoleon III, the park became the property of the city of Paris, and was made into a public park. Sections of the park were sold for the construction of large new townhouses to help finance the construction of the park. The park was surrounded by a monumental ornamental iron gate and fence and planted with a wide variety of exotic trees, shrubs and flowers. During the French Third Republic, the park was filled with statues of composers and writers. An arcade of the old Hôtel de Ville, burned by the Paris Commune in 1871, was installed in the garden to provide a picturesque ruin. A few traces of the original folly, including the Egyptian pyramid, can still be seen.
Parc de Bagatelle (1778–1787)
The Parc de Bagatelle was created by the Count of Artois, the future King Charles X of France, on a section of the Bois de Boulogne that he had purchased in 1777. He made a wager with his sister-in-law, Marie Antoinette, that he could build a château where she could be entertained in less than three months. Construction of the little château began on 21 September and was finished on 26 November. The chateau was the work of the architect François-Joseph Bélanger, while the garden was made by the Scottish landscape designer Thomas Blaikie. The garden was built at the same time as the Parc Monceau, and, like that garden, it was filled with fabriques and follies, including a hill made of rocks crowned by a "Pavilion of Philosophers" that was half-Gothic and half-Chinese; an obelisk, a bridge surmounted by a pagoda, Gothic ruins, a tatar tent and a spiral labyrinth. All the follies gradually disappeared, and the park became a more traditional English garden. The pavilion on top of the pile of rocks was replaced by a more traditional structure. During the Second Empire, the Pavilion of Eugénie in the rose garden was added, in honor of the frequent visits to the garden of the Empress of Napoleon III.
After the French Revolution, the garden was nationalized and made into a restaurant and a place for balls and festivals. After the restoration of the monarchy, it was returned to the Count of Artois, whose family sold it in 1835 to Richard Seymour-Conway, 4th Marquess of Hertford, an English aristocrat who had settled in Paris. He and his heir, Sir Richard Wallace, purchased additional land, enlarged the garden, and reconstructed it, adding new terraces, lawns, the picturesque pond with lily pads, and many trees, including a giant sequoia, planted in 1845, that is now more than 45 meters high.
In 1905 the heirs of Richard Wallace ceded the park to the City of Paris, which made extensive additions, including an enlarged rose garden, which became the site of the Concours international de roses nouvelles de Bagatelle, the international competition of new roses, in 1907.
Garden of the Rodin Museum (1755)
The Musée Rodin was originally constructed between 1728 and 1731 as a townhouse for Abraham Peyrenc, a wealthy Paris wig-maker. The second owner, the Duchess of Maine, created a long green for lawn-bowling, two covered alleys, and a grove of trees to the left of the entrance. The house and garden were purchased in 1755 by Louis Antoine de Gontaut, the Duke of Biron and a Maréchal of the royal army. He enlarged the garden to the south. Following the design of architect Pierre-François Aubert and gardener Dominique Moisy, the garden became a model of a classic French formal garden; the lawn was divided by a long north–south perspective and made into four sections of flowerbeds around an eighteen-meter basin. The east side of the garden was filled with trees, and they added an orangerie, a Dutch tulip garden, and a vegetable garden at the end. The Duke frequently held elaborate festivities and garden parties there, and often opened the garden to the public.
The Duke died in 1788, on the eve of the Revolution. After the Revolution the house and garden became the property of the Papal legate, then of the Russian Ambassador, and then, in 1820, of a religious order, the Dames du Sacré-Coeur-du-Jésus, and served as a boarding school until 1904. During that time the basin was filled in, the garden was largely left to grow wild, and an orchard of fruit trees was added. The Dames also built a neo-gothic chapel in 1875, which, after the order was dissolved in 1904, became a residence where writers and artists could rent space. The sculptor Auguste Rodin became one of the tenants in 1908. The house and garden were purchased by the French State in 1911. Part of the garden was taken to build the neighboring Lycée Victor Duruy, but Rodin remained as tenant and placed works of his sculpture along the main path. After his death in 1917, it became a museum devoted to his work, which opened in 1919.
The basin was restored in 1927, but otherwise the garden remained much as it had been under the religious order. Beginning in 1993, the garden was redesigned by landscape designer Jacques Sgard both as an open-air gallery to display the works of Rodin, but also to recapture the appearance of an 18th-century formal French residential garden.
Parks and gardens of Napoleon III (1852–1870)
Napoleon III became the first elected President of France by an overwhelming vote in 1848. When he could not run for re-election, he organized a coup d'état in December 1851 and had himself declared Emperor of the French in December 1852. One of his first priorities as Emperor was to build new parks and gardens for Paris, particularly in the neighborhoods far from the center, where the few public parks of the city were all located.
Napoleon III named Georges-Eugène Haussmann his new prefect of the Seine in 1853, and commissioned him to build his new parks. Haussmann assembled a remarkable team: Jean-Charles Adolphe Alphand, the city's first Director of the new Service of Promenades and Plantations; Jean-Pierre Barillet-Deschamps, the city's first gardener-in-chief; Eugène Belgrand, a hydraulic engineer who rebuilt the city's sewers and water supply, and provided the water needed for the parks; and Gabriel Davioud, the city's chief architect, who designed chalets, temples, grottos, follies, fences, gates, lodges, lampposts and other park architecture.
Over the course of seventeen years, Napoleon III, Haussmann and Alphand created new parks and gardens and planted more than six hundred thousand trees, the greatest expansion of Paris green space before or since. They built four major parks in the north, south, east and west of the city, replanted and renovated the historic parks, and added dozens of small squares and gardens, so that no one lived more than ten minutes from a park or square. In addition, they planted tens of thousands of trees along the new boulevards that Haussmann created, reaching out from the center to the outer neighborhoods. The parks of Paris, particularly the Tuileries gardens and the new Bois de Boulogne, provided entertainment and relaxation for all classes of Parisians during the Second Empire.
The Bois de Boulogne (1852–1858)
The Bois de Boulogne was a scrubby forest to the west of the city, where the German, Russian and British occupation armies had camped after Napoleon's defeat and had cut down most of the older trees.
In 1852 Napoleon III had the land transferred from the list of Imperial property to the city of Paris, and bought the plots of private land within the park. He had lived long years in exile in London, and had frequently visited London's Hyde Park, with its serpentine lake and winding paths. It became the model for his first major new park. Thousands of workers began digging artificial lakes and brought boulders from the Forest of Fontainebleau to build an artificial cascade. Belgrand, the hydraulic engineer, built a special conduit from the Ourcq canal, dug artesian wells, and laid 66 kilometers of pipes to provide water for the future lakes, lawns and flowerbeds. Alphand laid out 95 kilometers of new roads, riding trails and paths winding through the park. The gardeners seeded new lawns and planted 420,000 trees.
The park was also intended to provide recreation for the Parisians; besides the roads for carriages and paths for horses and walking, Davioud built twenty-four chalets and pavilions around the park, which served as restaurants, cafes, theaters, and places of entertainment. Twenty hectares were set aside for a garden and zoo, the Jardin d'Acclimatation. In 1857 one corner of the park became the site of the Hippodrome de Longchamp, the city's most important horse racing track. During the winter, the lake became a popular destination for ice-skating. From its opening, the park was full of Parisians of all classes.
The Bois de Vincennes (1860–1865)
The Bois de Vincennes was originally a royal hunting preserve and the site of an important royal residence, the Château de Vincennes, still existing. After Louis XIV moved the royal residence to Versailles, the chateau was neglected. Under Louis XV, the chateau was redesigned, and walking paths were created in the forest. During the French Revolution, the center of the park was turned into a military training ground, with firing ranges for artillery and muskets. During the Restoration, Louis-Philippe took 170 acres of the forest and built barracks and military offices.
In 1860 Napoleon III ceded a large part of the forest to the city of Paris and Alphand began to turn it into a place for relaxation and recreation for the working-class population of eastern Paris. Much of the land in the center of the park was retained by the military, so Haussmann was obliged to buy additional private land around the perimeter of the park, making the Bois de Vincennes much more expensive to build; the Bois de Vincennes cost 12 million francs, while the Bois de Boulogne cost 3.46 million francs. It was slightly larger than the Bois de Boulogne, making it the largest park in the city. As he had done in the Bois de Boulogne, Alphand designed and dug twenty-five hectares of lakes, rivers, waterfalls and grottos. The city's chief gardener, Barillet-Deschamps, planted three hundred hectares of lawns and one hundred forty-eight hectares of flower beds. The hydraulic engineer Belgrand built a water channel from the Marne River, and pumps to hoist the water up thirty-five meters to a lake in the park, which served to water the park and to fill the lakes, streams and cascades. Davioud decorated the new park with fantasy temples, cafés, kiosks and chalets. The park was completed with the addition of a racetrack, the Hippodrome de Vincennes, in 1863; a public firing range for pistols, rifles, and archery; and the Imperial farm, with orchards, fields, sheep and cows, so the urban residents of Paris could see a real farm at work.
The Bois de Vincennes was the site of the cycling events of the 1900 Olympics, in a forty-thousand seat stadium built especially for that event. The Park was also the site of two large Colonial Expositions, in 1907 and 1931, celebrating the peoples and products of France's empire. Several vestiges of the exhibits remain, including the old pavilion of French Cameroon, which was converted into a Buddhist temple and Institute in 1977. The Paris zoo was built for the 1931 exposition, and moved in 1934 to its present location in the east of the park, next to a sixty-five meter high man-made mountain, inhabited by alpine goats.
Parc des Buttes Chaumont (1864–1867)
The Parc des Buttes Chaumont, twenty-seven hectares in size in the north of the city, was an unpromising site for a garden; the soil was very poor and the land bare of vegetation; its original name was "Chauve-mont", or "bald hill". In medieval times, it was close to the site of the gibbet, where the corpses of executed criminals were displayed. From 1789 it served as a sewage dump, and much of the site had been used as a stone quarry.
Alphand began to build in 1864. Two years and one thousand workers were required simply to terrace the site and to bring in two hundred thousand square meters of topsoil. A small railroad line was built to carry the earth. Gunpowder was used to blast the rock, and to sculpt the 50-meter-high central promontory. A lake was dug at the foot of the promontory. Alphand laid out five kilometers of paths and roads, and Belgrand installed pumps and pipes to hoist water from the Ourcq canal to supply the cascades and lake and water the new gardens. Davioud designed a grotto, using the tunnels of the old stone quarry, and a circular temple, based on the Temple of Vesta at Tivoli, to crown the promontory, as well as four bridges to span the lake. The park opened on April 1, 1867, the opening day of the Paris Universal Exposition.
An urban legend says that the bodies of Communards killed during the suppression of the 1871 Paris Commune are entombed inside the old stone quarries in the promontory. In fact, 754 bodies were placed there briefly after the fighting ended, but were buried in the city cemeteries soon afterwards.
Parc Montsouris (1865–1878)
Parc Montsouris was the last of the four large parks created by Napoleon III at the four cardinal points of the compass around Paris. It was precisely south of the exact center of Paris; a monument in the park, placed by Napoleon I, indicated the prime meridian (the Paris meridian), which French maps used until 1911, instead of Greenwich, as the zero degree of longitude. Napoleon III decreed the construction of the park in 1865, but purchasing the land took time, and work did not begin until 1867. Work was also delayed because several hundred corpses that had been placed in the catacombs of Paris, part of which lay under the park, had to be moved. The park was inaugurated in 1869, but was not actually finished until 1878, under Alphand, who continued his work as the Director of Public Works of Paris under the French Third Republic.
Parc Montsouris had all the elements of a classical Second Empire garden in a smaller space: a lake, a cascade, winding paths, a café, a guignol theater, lawns and flower beds. It also had a remarkable folly: the Palais de Bardo, a reduced-scale replica of the summer residence of the beys of Tunis, which had originally been part of the Paris Universal Exposition of 1867. Made of wood and stucco, it was installed in the center of the park, where it served as a weather station, but gradually suffered from vandalism and neglect. It burned down in 1991, and was not replaced.
Gardens of the Belle Epoque and the Universal Expositions (1871–1914)
Napoleon III was captured by the Germans during the Franco-Prussian War of 1870 and the Second Empire was replaced by the French Third Republic. The new government named Jean-Charles Adolphe Alphand the Director of Public Works of Paris, and he continued the work he had begun under the Emperor and Haussmann. He finished Parc Montsouris and several smaller squares, including square Boucicault (now Square Maurice-Gardette) and square d'Anvers (1877). Much of Alphand's abundant energy was devoted to the building of the universal expositions of 1878 and 1889, each of which included extensive gardens. He was in charge of building the Paris Exposition of 1889, including the construction of the Eiffel Tower. It was his last great project before his death in 1891.
The construction of new squares and gardens was carried on during the Third Republic by one of Alphand's protégés, the architect Jean Camille Formigé. While he did not undertake any new large parks on the scale of those of Alphand, he built a series of new squares in the Paris neighborhoods; square Ferdinand Brunot; square Frédéric Lemaître; square Adolphe Chérioux; square du Vert-Galant; square des Epinettes, and the square des Arènes de Lutèce. His most impressive accomplishment was the Serres d'Auteuil (1898), an ensemble of greenhouses which provided flowers, trees and shrubs for all of the parks of Paris.
Jardins du Trocadéro (1878–1937)
The Trocadero had originally been the site of a country house of Catherine de' Medici, then of a monastery, destroyed during the French Revolution. Napoleon I had planned to build a palace for his son there; King Louis XVIII planned to build a monument there to the Battle of Trocadero in 1823. Under Napoleon III, Alphand had built a basin, paths radiating outwards, a large lawn, and a stairway descending from the hill down to the edge of the river.
When the site was chosen in 1876 as part of the grounds of the Paris Universal Exposition of 1878, the architects Gabriel Davioud and Jules Bordais were chosen to construct the Palais de Trocadero, a massive temporary structure in a vaguely Moorish style, with a large rotunda flanked by two towers and curving wings on either side. The gardens, designed by Alphand, occupied the slope from the Palace on the top of the hill down to the Seine. The center of the garden was occupied by a long series of cascades ending in a large basin at the bottom of the hill. The cascade was lined with statues of animals and of female figures representing the continents (the statues now decorate the square next to the Musée d'Orsay). The largest piece of statuary in the garden was the head of the Statue of Liberty, made before the rest of the statue, and displayed in order to raise funds for its completion.
When the Exposition was finished, the gardens were redesigned into an English landscape garden; groves of trees were planted, winding paths laid out, and a stream and grotto were constructed. The gardens remained in place for the Paris Universal Exposition of 1889. For the Paris International Exposition of 1937 the palace was replaced by a modernist structure and the fountains were rebuilt, but the picturesque gardens on the hillside were left as they were. (See parks and gardens of the 1930s below).
Champ-de-Mars (1908–1927)
The Champ de Mars was created in 1765 as a parade ground and training field for the neighboring École Militaire. During the French Revolution it was the site of large patriotic festivals, including the Festival of the Supreme Being conducted by Robespierre in 1794. It was surrounded by a moat and not open to the public until 1860, when Napoleon III filled in the moat and planted trees along the borders, but it still remained the property of the Army. It was the site of the 1867 Paris Universal Exposition, which featured a large domed pavilion in the center, surrounded by a garden, which was itself surrounded by a large oval-shaped gallery. The rest of the Champ de Mars was occupied by exposition halls and extensive landscape gardens, designed by Alphand.
The Champ-de-Mars served again as the main site of the Paris Universal Exposition of 1878. A gigantic palace of glass and iron 725 meters long occupied the center of the park, surrounded by gardens designed by Alphand. For the Paris Universal Exposition of 1889, celebrating the centenary of the French Revolution, Alphand placed the Eiffel Tower in the center, near the monumental Gallery of Machines. The Exposition included a Palace of Fine Arts and a Palace of Liberal Arts. The space around the Eiffel tower and between the galleries and palaces was filled by a large landscape garden, which extended along the axis between the Eiffel Tower and the Seine, and ended at the river at a colossal fountain with a group of allegorical figures, called The City of Paris Illuminates the World with her Torch. The fountain was lit at night by electric lights shining up from the water through plates of colored glass.
In 1889, the Champ-de-Mars was formally transferred from the French army to the City of Paris. It was used once more for the Paris Universal Exposition of 1900, and then, from 1909 until 1927, it was developed into a public park. It was an unusual site; it was the only large park in Paris not enclosed with a fence, and it was crossed by three major boulevards. The huge Gallery of Machines, which occupied much of the site, was demolished in 1909. The park architect was Jean Camille Formigé (1849–1926), a protégé of Alphand. He used the long central axis from the Eiffel Tower to École Militaire to create a formal and symmetrical park in the French style. The long central axis was lined with paths and rows of trees; a basin with fountains was placed in the center; playgrounds were built along the sides. The original gardens from the 1889 exhibition, around the Eiffel Tower, were preserved in their original form and can still be seen. Like other French formal gardens, it was best seen from above, in this case from the top of the Eiffel Tower.
Parks and gardens of the early 20th century
Several small parks were created between 1901 and the beginning of World War II. Square Laurent-Prache was created in 1901 on the north side of the Church of Saint-Germain-des-Pres, on the site of the old Abbey of Saint-Germain, which was destroyed during the French Revolution in 1790. The wall of the church next to the park is decorated with gothic arcades taken from the destroyed Chapel of the Virgin. The centerpiece of the park today is a bronze head made by Pablo Picasso in 1959, in homage to the poet Apollinaire.
Square Félix-Desruelles was built in the same year along the south wall of the church. The little square is dominated by a colorful enameled gateway of the Pavilion of the Sèvres porcelain factory from the Paris Universal Exposition of 1900.
Square René-Viviani, created in 1928, is located next to the church of Saint-Julien-le-Pauvre, across the Seine from the Cathedral of Notre-Dame-de-Paris. Its most famous feature is the oldest living tree in Paris, a robinier, a variety of acacia, which was planted there in 1601 by the botanist Jean Robin. The park also contains a medieval well and fragments of gothic architecture from Notre-Dame Cathedral, taken out during its 19th-century restoration.
Gardens of Sacré-Cœur (1924–1929)
The building of the Basilica of Sacré-Cœur at the top of Montmartre was first proposed after the 1870 defeat of France in the Franco-Prussian War. Montmartre was also the place where the Paris Commune began in March 1871, with the killing of two French generals by mutinous soldiers of the Paris National Guard. Alphand's plan called for a park that would descend eighty meters from the parvis in front of the church to the street at the bottom of the hill. The architect Jean-Camille Formigé designed a park with a dramatic and unobstructed approach to the church from the bottom of the hill; he designed two terraces, connected by stairways and by curving horseshoe-shaped ramps, lined by trees. Formigé's plan also called for a cascade and fountains parallel to the stairways, but these were never built. The work on the church began in the 1880s, but proceeded very slowly, because of the difficulty of anchoring the church to the hillside, site of a former quarry. The Basilica was not dedicated until 1919. Formigé died in 1926, and the work on the gardens was finished by Léopold Bévière, and dedicated in 1929. The original name of the park was Square Willette, but in 2004, under the socialist government of Mayor Bertrand Delanoë, it was renamed Square Louise-Michel, after the anarchist and revolutionary Louise Michel, who had played an active role in the Paris Commune.
Parks and gardens of the 1930s
The 1930s saw an important change in the style of Paris gardens. From 1852 until the end of the 1920s, almost all Paris gardens had been designed by Jean-Charles Adolphe Alphand (1817–1891) and his protégé, Jean Camille Formigé, and they all had a similar picturesque style. Beginning in the 1930s, each Paris garden had a different designer, and the styles were varied. They tended to be more regular and more geometric, more like the classical French formal garden, and made greater use of sculpture, particularly the work of the modernist sculptors of the period. The gardens also tended to be smaller, and were placed in the outer neighborhoods, near the edge of the city.
Several of the new parks were built on land which had been the old fortified zone around the city, a wide strip where no building was allowed, created between 1840 and 1845 by Adolphe Thiers. The land was finally turned over to the city in 1919, and supporters of green space urged that it be turned into a belt of parkland around the city, but instead the government of the Third Republic chose to use much of the land for public housing and industrial sites. Instead of a circular belt of green space, they built a series of small squares, including square du Serment du Koufra (1930) in the 14th arrondissement; square du Docteur-Calmette (1932) in the 15th arrondissement; and square Marcel-Sembat (1931) in the 18th arrondissement.
The most important landscape architects of the period were Léon Azéma, a classically trained artist who won the prestigious Prix de Rome, who designed a dozen squares, including the Parc de la Butte-du-Chapeau-Rouge; and Roger Lardat, who designed a series of squares and also redesigned parts of the Bois de Vincennes and the gardens of the Trocadero.
Other notable Paris parks, gardens and squares of the 1930s are those of the Cité Internationale Universitaire de Paris (1921–1939); Parc Kellermann (1939–1950); Square Saint-Lambert (1933); Square Séverine (1933–1934); Square Sarah Bernhardt and Square Réjane (1936); Parc Choisy (1937); Square René-Le-Gall (1938); and Square Barye (1938).
New Jardins du Trocadéro (1937)
The major Paris architectural and landscape project of the 1930s was the Exposition Internationale des Arts et Techniques dans la Vie Moderne in 1937, on the hill of Chaillot. The old Palace of Trocadero, which had been used in two previous exhibits, was demolished and replaced by a large terrace with a panoramic view of the Seine and Eiffel Tower, and by the modernist white Palais de Chaillot, with two wings which enveloped the top of the hill. The picturesque landscape gardens on the slopes of the hill, built by Jean-Charles Alphand for the 1878 exposition, were preserved. The new gardens were designed by Léon Azéma and the architects Carlu and Boileau. The central element of the garden became a series of cascades, lined with statues, and a long basin containing rows of fountains and two powerful water cannon. The basins, fountains and dramatic lighting at night were designed by Roger-Henri Expert, who also designed the interior decoration on the famous French ocean liner Normandie. Many of the statues from the exposition, by the leading French sculptors of the time, were kept in place after the Exposition, or found new homes in the other new city parks of the period.
Parc de la Butte-du-Chapeau-Rouge (1939)
The Parc de la Butte-du-Chapeau-Rouge, originally known as the Square de la Butte-du-Chapeau-Rouge, in the 19th arrondissement, is one of a series of squares built in the old fortified zone which had surrounded the city since the reign of Louis-Philippe. It was designed by Léon Azéma, and was similar to his plan for the gardens of the Trocadero two years earlier. The broad lawns and winding paths took advantage of the steep slope and served as a showcase for sculpture.
The main architectural features are the buffet d'eau, or cascading fountain, at the lower entrance, crowned by a statue of Eve by sculptor Raymond Couvènges (1938), and a classical portico with sculpture serving as the entrance to a playground. At the high end of the park, two belvederes, reached by winding paths, offer panoramic views of the city.
Parc Kellermann (1939–1950)
Parc Kellermann was built at the same time as the Square de la Butte-du-Chapeau-Rouge, at the southern end of the city, at the edge of the 13th arrondissement. It was slightly larger than Chapeau-Rouge (5.55 hectares compared with 4.68), which entitled it to be called a park rather than a square. It originally served as a site for several of the smaller pavilions of the 1937 Exposition. The park was designed by the architect Jacques Gréber, who was architect-in-chief of the 1937 exposition, and who also had a notable career in the United States, where he designed the Benjamin Franklin Parkway in Philadelphia.
The site of the park had two different levels; much of the park was built in the old bed and banks of the Bièvre river, now covered over. Gréber merged two different styles: the lower part of the park is picturesque, with a lake, stream, false rocks, groves of trees, winding paths and the other characteristic features of a park of the time of Napoléon III. The upper part, bordered by boulevard Kellermann, is a 1930s combination of classicism and modernism, with a cement portico, two brick exedras decorated with bas-relief sculptures in the 1930s style; a large parterre and basin; and long tree-lined alleys. The upper part of the park today offers exceptional views of the city, but also suffers from the noise of the neighboring highway that circles Paris.
Parks and gardens of the late 20th century (1940–1980)
Following the German occupation of Paris in 1940, the priority shifted from making parks to making playing fields and other sports facilities, in keeping with the ideology of Marshal Philippe Pétain and the Vichy regime. In 1939, Paris had twenty hectares of sports fields; in 1941 the Paris government published a plan to build an additional two hundred hectares of sports facilities and playing fields, mostly using the vacant land in the old fortified zone on the edge of the city.
The emphasis given to playgrounds and sports fields continued in the years after the War. The priorities of the successive French governments were the repair of the infrastructure destroyed by the War and building public housing. A number of squares were created, though most of the space was usually devoted to playgrounds rather than gardens. The new parks and squares included Squares Docteurs-Dejerine (1958), Emmanuel-Fleury (1973) and Leon Frapie (1973) in the 20th arrondissement; Squares Emile-Cohl and Georges-Melies (1959) in the 12th arrondissement; the squares around the porte de Champerret in the 17th arrondissement; and the square de la Porte-de-Plaine (1948–1952) in the 15th arrondissement.
Square Andre-Ullmann (1947), in the 17th arrondissement, is one of the typical postwar gardens; symmetrical and austere, it occupies a triangular space, with a pavilion with a rotunda in one corner, two alleys of plane trees, a central green, and bushes and shrubs carved into geometric shapes.
Square Emmanuel Fleury (1973) in the 20th arrondissement, with an area of , is larger than most of the postwar gardens, and, while it has sports fields, including a course for roller sports, it is more in the picturesque Napoleon III style than the other postwar gardens, with rich flower beds, winding paths, groves of trees and kiosks.
Square Sainte-Odile (1976), in the 17th arrondissement, by landscape architect Jean Camand, was one of the first of a new model of gardens which appeared in the 1980s and 1990s; occupying a small space (1.13 hectare), it was divided into different spaces, each with a different style and theme, often radically different; next to a church, it includes a landscape garden in one section; a playground in another, a picturesque butte with a pavilion; a central basin with an abstract sculpture; and a monument to the harpist Lily Laskine.
Parc floral de Paris (1969)
The largest new garden created in Paris in the second part of the 20th century was the Parc floral de Paris, covering , which was built within the Bois de Vincennes in 1969. In 1959 and 1964 that park had been the site of a large international flower show, the Floralies internationales, and the two events had been so popular that the city decided to make a permanent site for flower exhibitions. Land was ceded to the city from military installations within the park, and the new gardens were created under the direction of landscape architect Daniel Collin. The new park was an ensemble of different flower gardens with different themes; a valley of flowers; a garden of contemporary sculptures; a water garden; and a children's garden, as well as pavilions for indoor displays and exhibits of exotic flowers, Japanese bonsai and other botanical attractions. A Garden of the Four Seasons was added in 1979, with flowers in bloom from early spring until the end of autumn. The park also featured an outdoor theater for musical events, and small lakes and fountains.
Jardin Tino-Rossi (1975–1980)
In the 19th century, the site of the Jardin Tino-Rossi, on the quai Saint-Bernard in the 5th arrondissement, had been the place where wine barrels were unloaded from barges for sale at the nearby Halle aux vins. In 1975, the government of President Valéry Giscard d'Estaing decided to make the quai into a promenade, featuring rows of plane trees planted along the quai in the 19th century, and a series of small garden amphitheaters by the edge of the water. In 1980, a more ambitious element was added: an outdoor sculpture garden featuring over fifty works by late 20th-century sculptors, including Alexander Calder, Constantin Brâncuși, and Jean Arp. While the promenade is generally considered a success, the works of sculpture have suffered over time from degradation and vandalism.
Parks of the Mitterrand era (1981–1995)
During the fourteen-year presidency of François Mitterrand, coinciding with the bicentennial of the French Revolution, Paris saw an explosion of major public works projects, including the Opéra Bastille, the Louvre pyramid and underground courtyard, and the new national library. The Mitterrand projects included opening one hundred and fifty new parks, squares and gardens, a larger number than those constructed under Napoleon III, though the total area of the new parks was much smaller. Unlike the Second Empire, when all the new gardens followed the same basic plan and picturesque style, the Mitterrand-era gardens were built by different architects and landscape architects, and offered a wide variety of styles and designs, from miniature recreations of natural wilderness to high-tech. Many of the new parks and gardens, such as La Villette, were built on former industrial sites, and a majority were built in the outer neighborhoods of the city, where the population was most dense. Almost all of the new parks featured works of contemporary art and sculpture.
The landscape architect Bernard Tschumi, who designed the gardens of the Parc de la Villette, tried to explain the philosophy of the new parks in a book entitled The Parks of the 21st Century (1987) : "The conditions of the modern city have made invalid the historic prototype of a park as an image of nature. The park can no longer be conceived as a model of a utopian world in miniature, protected from vulgar reality. Rather than a place of escape, the contemporary park should be seen as an environment defined by the preoccupations of the inhabitants of the city, of their recreation needs and the pleasures defined by the working conditions and cultural aspirations of contemporary urban society."
Parc Georges-Brassens (1984)
Parc Georges-Brassens (15th arrondissement), occupying , is located on the site of the former Vaugirard slaughterhouse and horse market from 1894 to 1897, which were demolished between 1969 and 1979. The design, by architects Ghiulamila and Milliex and landscape architect Daniel Collin, preserved picturesque elements of the original market, including the belfry of the old auction market of the abattoirs of Vaugirard, and the covered horse market, which now serves on weekends as the site of an antiquarian book market. Modern sculptures of horses stand at the entrance to the garden. Corners of the park are occupied by a pre-school and day care center, and a theater. The landscape garden in the center of the park, in the picturesque tradition of Alphand and the Second Empire, has a lake, winding paths, flowerbeds, a rose garden and a garden of aromatic plants. The slope of the park also has a dry cascade of artificial stones, for children to climb.
Parc de Belleville (1988)
The Parc de Belleville, in the 20th arrondissement, was another early Mitterrand-era park. It was designed by architect Francois Debulois and landscape architect Paul Brichet, and built on a steeply-sloping site that covered , on the hill of Belleville, the highest point in the city. The park was located not far from the Parc des Buttes Chaumont, and shared some of the same picturesque elements as that Second-Empire park, including a terrace and belvedere at the top of the park with a panoramic view of the city, winding paths along the hillsides, abundant flowerbeds and groves of trees, and a series of cascades from the top of the hill down to a semi-circular basin, then under Rue Julian Lacroix to a circular basin in another garden in the jardin de Pali-Kao, a miniature park of three thousand square meters opened in 1989. Visitors to the park are invited to sit on the abundant lawns of the park, a practice long discouraged in Paris parks. Like Buttes-Chaumont, the Parc de Belleville originally had a grotto, built into the side of the old stone quarries, but it had to be closed because of vandalism and security concerns.
Parc de la Villette (1987–1991)
The Parc de la Villette was formerly the main slaughterhouse of the city, located in the 19th arrondissement at the intersection of the Canal de l'Ourcq and the Canal Saint-Denis. One structure remains from the old site, the Grande Halle, built in 1867 by Jules de Merindol, a student of Baltard, who had built the famous glass and iron structures of Les Halles. In 1982, an international competition selected landscape architect Bernard Tschumi to design the park. The final design was composed of ten thematic gardens, which Tschumi described as a "cinematic promenade" of different sights and styles.
The Parc de la Villette is more in the category of a high-tech amusement park, like Disneyland or Tivoli Gardens, than a traditional park. Twenty hectares of the fifty-five hectare site are devoted to buildings and structures, including the Cité des Sciences et de l'Industrie, the Cité de la Musique, the Zénith performance hall, a full-sized submarine, and the central landmark, the mirror-surfaced Géode, a geodesic dome. The ten thematic gardens include playgrounds and small peaceful sanctuaries; they include a garden of mirrors, a garden of shadows, a garden of islands, a garden of bamboos, a garden of dunes, and several thematic playgrounds, including one with a slide in the form of a dragon. The gardens also feature works of sculpture by noted artists and sculptors, including Claes Oldenburg and Daniel Buren.
Parc André Citroën (1992)
Parc André Citroën, located on the Seine in the 15th arrondissement, was the site of the Citroën automobile factory from 1915 until the 1970s. The plan for the new park was developed by landscape architects Gilles Clément and Alain Provost, along with architects Patrick Berger, Jean-François Jodry, and Jean-Pierre Viguier. Like most of the Mitterrand-era parks, it combined two very different styles of parks, a recreation park and a picturesque floral park; the centerpiece of the park is a large lawn, 273 by 85 meters, dedicated to recreation, sports and relaxation. The natural and pure garden aspect of the park is expressed by two very large greenhouses in the southeast overlooking the park, one an orangerie and the other displaying plants of the Mediterranean. There is also a series of six small "serial gardens," each associated with a different metal, planet, state of water, and sense; and the "Garden of movement", a meadow of different grasses blown by the wind. A canal frames one side of the large lawn, while the serial gardens, each in its own alcove, enclose the other side.
Promenade plantée (1993)
The Promenade plantée, in the 12th arrondissement, is the most original of Paris parks. The creation of landscape architect Jacques Vergely and architect Philippe Mathieux, it was built ten meters above the street on the abandoned viaduct of the Vincennes railway, which had been built under Napoleon III in 1859. The park extends 4.7 kilometers, from the site of the former Bastille station of the railroad line (which once ran to Verneuil-l'Étang), close to Place de la Bastille, to the boulevard périphérique at the outer edge of the city. The park offers a variety of different landscapes, from bamboo forest to picturesque flower garden, as well as fine views of the city. It is accessed by a number of stairways along its route, and is sometimes enclosed. Preference is given to people strolling, though joggers are allowed, if they do not impede the promenaders. Because of the narrow width of the promenade, bicycles are not permitted.
The Promenade plantée has inspired similar parks in other cities; the High Line in the Chelsea neighborhood of New York City, opened in 2009, and the three-mile Bloomingdale Trail in Chicago.
Jardin Atlantique (1994)
The Jardin Atlantique in the 15th arrondissement, like the Promenade plantée, has a highly unusual site, perched on twelve concrete pillars seventeen meters above street level, atop the roof over the Gare Montparnasse railway station, which connects Paris with the west of France. It was designed by landscape architects Michel Péna and François Brun, and was reportedly the most expensive park built in Paris. Like the other Mitterrand-era parks, it has a central lawn, surrounded by thematic gardens and sprinkled with modern sculpture and fountains. It also has thirty openings, which provide ventilation and light for the train tracks and platforms below, and the announcements of train arrivals and departures can be heard in the park above. The design of the park has a vague resemblance to the deck of an ocean liner, in keeping with the connection of the train station to the Atlantic seaports of Cherbourg and Le Havre.
Because of the weight limitations and limited depth of soil, it has a high proportion of concrete and other structural materials compared to the amount of greenery. Nonetheless, the park has five hundred trees, planted in cubic stone boxes. The thematic gardens include a garden of varied plants moving in the wind; a garden of aquatic plants; a garden of coastal plants; a garden of blue and mauve colored flowers; and the "hall of silence", a meditation garden.
Parc de Bercy (1994–1997)
The site of the Parc de Bercy, alongside the Seine in the 12th arrondissement, was at the edge of the city limits until the time of Napoleon III. It was the site of the wine depot where barrels of wine and spirits were unloaded from barges and taxed before they were delivered to the city. Under the Mitterrand program, the new park was intended as the east-Paris equivalent of the Tuileries Garden, alongside the Seine in the center of the city. The site already had wide avenues lined by two hundred century-old chestnut and plane trees, which, along with several old buildings from the wine depot, were integrated into the new park. Though the park was far from the center of the city, it was next to the new Palais Omnisports indoor sports facility and the Cinémathèque (originally the American Center, designed by Frank Gehry) and linked by a new bridge to the new National Library across the Seine. On the river side, the park was bordered by a high terrace, which blocked the noise of the highway along the river and gave views over both the Seine and the park. The park also has an amphitheater, on the site where a neolithic village was discovered.
The landscaping of the new park was designed by architects Bernard Huet, Madeleine Ferrand, Jean-Pierre Feuges and landscape architects Ian la Caisne and Philippe Raguin. Their design created three separate gardens with different themes, connected by footbridges over the streets that divide them. The western park, near the Palais Omnisports, called Les Prairies, features broad lawns under trees; this part of the park is also used for informal sports such as soccer, skateboarding and rollerblading. The center park is called Les Parterres, and is devoted to serious gardening. It includes an aromatic garden, a rose garden, and a vegetable garden where school groups come to learn about agriculture and gardening. The garden on the east is called Le Jardin romantique, and it has a water theme; it includes a canal, fishponds, cascades, and a pool with water lilies.
Paris parks and gardens of the 21st century
Following the late 20th century tradition of French Presidents constructing new museums and parks to mark their period in office, President Jacques Chirac launched the Musée du quai Branly, devoted to the arts of the Americas, Africa, Asia and Oceania.
In 1991, the banks of the Seine were declared a UNESCO cultural heritage site, and efforts began to make the highways and industrial space that remained along the river into a long promenade. Beginning in 2000, sections of the highways were closed on Sundays for promenades and jogging, and an artificial "beach" with sand and deck chairs was installed in summer. In 2008, during the administration of Mayor Bertrand Delanoë (2001–2014), the city of Paris began to transform portions of the highways built along the left and right banks of the Seine into parks and recreation areas. In 2013, a 2.3 kilometer section of the left-bank highway between the Pont d'Alma and the Musée d'Orsay opened as a permanent promenade, the Promenade des Berges de la Seine.
Gardens of the Musée du quai Branly (2006)
The site of the Musée du quai Branly, on the left bank of the Seine, facing the Palais de Chaillot, and just a hundred meters from the Eiffel Tower, had been occupied by the buildings of the Ministry of Reconstruction and Urbanism. An international competition led to the selection of architect Jean Nouvel to design the new museum. The original proposal for the museum had called for a garden occupying 7,500 square meters of the site. Nouvel increased the gardens to 17,500 square meters, and made a series of different gardens an integral part of the museum. The largest section, the "garden of movement", between the rue de l'Université and the quai Branly, is a composition of small gardens created by landscape architect Gilles Clément, designed to appear wild and to be the opposite of a French classical garden. The other notable feature of the garden is the Mur végétal or "wall of vegetation", designed by Patrick Blanc; a composition of 15,000 plants of 150 different species which cover 800 square meters of the exterior facades of the museum and 150 square meters of the interior walls. The "wall" is renewed and trimmed each year.
Promenade des Berges de la Seine (2013)
In the 19th century and early 20th century, the paved quay of the Left Bank of the Seine between the Pont de l'Alma and the Musée d'Orsay had been used for several international expositions, for boat docks and storage depots, and for cafés and a floating swimming pool. Between 1961 and 1967, highways were built along both banks of the river to relieve traffic congestion in the center of the city. In 1991, the banks of the river were classified as a UNESCO cultural heritage site, and efforts began to turn the highways into parks and promenades. Beginning in 2008, a 2.3 kilometer section of the highway was permanently closed and made into the Promenade des Berges de la Seine, which was dedicated on June 19, 2013. The promenade includes five floating "islands", a total of 1800 square meters in size, placed atop barges, with trees, bushes, flowers and deck chairs. The former highway is lined with spaces for concerts and classes; outdoor exhibit space; playgrounds; a climbing wall; a discothèque under a bridge; and tipis and furnished containers which can be hired for lunches, celebrations or meetings. There are boat docks and several outdoor cafés along the promenade. All the facilities of the park are portable, and can be removed within 24 hours if the waters of the Seine rise too high. The promenade was designed by architect Franklin Azzi, and the islands were created by Jean-Christophe Chobet.
See also
List of parks and gardens in Paris
French Renaissance garden
French formal garden
French landscape garden
Haussmann's renovation of Paris
Paris during the Second Empire
Paris in the Belle Époque
References
Notes and citations
Bibliography
Landscape architecture
Paris
Paris | History of parks and gardens of Paris | Engineering | 14,062 |
30,793,446 | https://en.wikipedia.org/wiki/Charles%20Hershfield | Charles Hershfield, B.Sc., M.A.Sc., F.E.I.C., P.Eng. (1910–1990) was widely recognized by the engineering community for his innovative structural engineering solutions. He served as a senior assistant engineer and lieutenant with the Department of National Defense, was a professor at the University of Toronto, co-founded the North American firm Morrison Hershfield, and was a prolific author. He was a lifelong advocate of education and the engineering profession.
Early life and education
Hershfield's parents, Aaron and Molly Hershfield, left Teofipol, Ukraine, in the mid-1890s for Manitoba, Canada, in search of greater opportunities and in the hope of starting a family. On December 24, 1910, their son Charles Hershfield was born. As a child and later in his teens, Hershfield's interests lay in music, baseball, carpentry, and mechanical engineering. He attended St. John's High School in Winnipeg, Manitoba, and even from a young age highly valued his education.
Hershfield later studied engineering at the University of Manitoba. Working with two of his classmates, H.F. Peters and W. Gruber, Hershfield submitted the thesis "Some Tests of Welded Joints". Hershfield graduated in 1930 with the degree of Bachelor of Science in civil engineering.
During the summer periods as an undergraduate Hershfield worked for the Dominion Bridge Company in Winnipeg. Upon graduation and until 1932, in conjunction with the Winnipeg City Engineer, he was involved with structural design of bridges, viaducts and subways.
Several years later Hershfield continued his education and received the degree of Master of Applied Science from the University of Toronto in 1950. His master's thesis was titled "Series Expansion of Joint Rotations for the Analysis of Rigidly Framed Structures ".
Working life
In 1935 Hershfield moved to Toronto, Ontario and until 1941 was employed with Standard Iron and Steel Works where his skills and services were focused in structural design, estimating, contracting, detailing and supervision of the fabrication and erection of steel structures.
From 1941 to 1943 he was a staff member of the Canadian Department of National Defense, Naval Service, Works and Building Branch as Senior Assistant Engineer, with the rank of Lieutenant. His work during this time included structural design on naval shore establishments including shops, storage facilities, training buildings, and drill halls, of wide variety as to size and materials of construction.
After leaving the naval service in 1943, Hershfield joined the staff of the Department of Civil Engineering at the University of Toronto. Hershfield taught a variety of courses related to structural engineering and supervised many graduate students. He was also principal instructor in structural engineering in the School of Architecture at University of Toronto.
In 1946 Hershfield, along with Carson Morrison, Joe Millman, and Mark Huggins, responded to the post-war building boom by founding the engineering consulting firm Morrison Hershfield Millman and Huggins. The firm today exists under the name Morrison Hershfield, with offices across North America specializing in multidisciplinary engineering and related expertise.
Hershfield retired from teaching at the University of Toronto in 1976 after 31 years of service but continued to work closely with Morrison Hershfield almost up to the time of his death.
Notable projects
Assisted with complex structural design of the roof for the Stratford Shakespeare Festival Theatre, Stratford, Ontario (1957)
Designed the complex structural roofing system for the Ontario Pavilion Building at Expo 67, Montreal, Quebec (1967)
Assisted with the expansion of the Toronto Mount Sinai Hospital, Toronto, Ontario
University of Toronto Medical Science Building, Toronto, Ontario
Awards and achievements
Elected to the grade of Partner with the Engineering Institute of Canada, 1955.
Member of the Publications Committee, for the Engineering Journal, 1970–74.
Elected to Fellow of the Engineering Institute of Canada, 1974.
Consulting Engineer Designation, Association of Professional Engineers, 1974.
Member of the Committee of Examiners, Ontario Association of Architects, 1974–82
Member of the Canadian Standards Association Standards Policy Board, 1977–78.
Requalified as a Designated Consulting Engineer, Association of Professional Engineers, 1979.
Awarded the Engineering Medal by the Association of Professional Engineers Ontario, 1982.
Served as a Structural Engineer on the OAA Committee of Examiners, 1982.
Upon his death, the University of Toronto set up the Charles Hershfield Memorial Scholarship fund to acknowledge Hershfield's significant accomplishments as a structural engineer and professor. The scholarship, established in 1990, is awarded to outstanding graduate students in structural engineering.
Technical papers
Roof Structure of New Theatre for Stratford Shakespearean Festival: a proposal on the Stratford Shakespearean Festival Theatre roof based on analytical methods and techniques used, 1956–1957.
One Cycle Moment Distribution for Structural Analysis: Presented in a meeting to the Engineering Institute of Canada, 1959.
Exploiting the Structural Potentials of Woven Fabrics: A paper regarding an increase in the variety of woven fabrics available and a corresponding increase in the possibilities of using them to great advantage to form parts or all of the structure for certain types of buildings, 1967.
Civil Engineering Education in Canada, Present and Future: Written with colleague G.W. Heinke of the University of Toronto, also a member of the Engineering Institute of Canada. The paper reviews a study conducted in 1968 of the opinions of all U of T department staff, present and former students, practising engineers from industry and government, as well as staff and administrators from a variety of technological institutes. Presented at the 83rd Annual Meeting of the Engineering Institute of Canada, Vancouver, September 1969. Published in the Engineering Journal in January 1970.
See also
Carson Morrison
Morrison Hershfield
External links
Morrison Hershfield Website
University of Toronto Website
CSA Website
Professional Engineers Ontario Website
Ontario Association of Architects Website
References
1910 births
1990 deaths
Fellows of the Engineering Institute of Canada
Academic staff of the University of Toronto
Structural engineers
University of Manitoba alumni
University of Toronto alumni | Charles Hershfield | Engineering | 1,204 |
33,973,727 | https://en.wikipedia.org/wiki/Graduation%20Pledge%20of%20Social%20and%20Environmental%20Responsibility | The Graduation Pledge of Social and Environmental Responsibility is a voluntary pledge made by students graduating from colleges or universities, stating their commitment to social and environment responsibility in their future careers. The pledge was first offered in 1987 at colleges and universities in the United States, and has since spread to some institutions in other countries.
Wording and meaning
The Pledge states: I pledge to explore and take into account the social and environmental consequences of any job I consider and will try to improve these aspects of any organizations for which I work. The purpose of the Pledge is to encourage graduating students to be aware of the social and environmental impact of their employment as they enter the workforce or continue their education, though it is left to students to define for themselves what "socially and environmentally responsible" means.
Students who voluntarily sign the pledge seek out employment that reflects their values, sometimes turning down jobs with which they do not feel comfortable. Others find employment and then work to make changes in their workplaces. Some examples include promoting recycling at their workplace, removing discriminatory language from training materials, working for gender equality in athletics, and helping to convince employers to reject contracts that would clearly have negative impacts on society or the environment.
History
The pledge was first established at Humboldt State University, California, in 1987. It originated at a time when social responsibility became a widely discussed theme, in the wake of the 1978 reconstitution of Physicians for Social Responsibility to involve doctors in policy discussions about nuclear power; the 1981 founding of Computer Professionals for Social Responsibility; and the socially responsible investing movement, which grew out of the anti-apartheid divestment movement. By 1988, students at Stanford, UC Berkeley, UC Santa Cruz, San Francisco State University, and MIT organized graduation pledges for their commencement ceremonies.
Manchester University in Indiana hosted the effort from 1996 to 2007, at which time Bentley University near Boston became the host. The project takes shape in different ways depending on the institution. At some colleges, for example, students sign wallet-sized cards with the pledge written on them that they can keep as a reminder of the commitment they've made. Some institutions put the pledge into the formal program for their commencement ceremonies. At other institutions, pledge signers wear green ribbons or green tassels at commencement to signify their commitment (some schools use a different color of ribbon). At Bentley University the pledge is a "capstone" of its four-year Civic Leadership Program, and at Humboldt State University, the student government funds a student pledge coordinator internship.
Impact
The pledge serves to promote social and environmental responsibility on three different levels: Students and alumni who are making choices about employment; schools including values and citizenship in their curriculum as opposed to just knowledge and skills; and the workplaces and wider society being concerned about issues beyond just the bottom line. The Graduation Pledge Alliance maintains a web site for campus organizers and those who have signed the Pledge.
Extent
Over 100 universities and colleges as well as some high schools use the pledge to some extent. There are several different types of schools involved, including:
Liberal arts colleges such as Manchester University, Juniata College, Muhlenberg College and Hartwick College.
State universities such as the University of Colorado, Boulder and the University of Florida.
Private research universities such as the Massachusetts Institute of Technology and Stanford University.
Private graduate schools such as Antioch University New England.
Faith schools such as Goshen College and Bethel College.
Community Colleges such as Estrella Mountain Community College, Mesa Community College and Broward College.
Professional schools such as the Fashion Institute of Technology.
Schools outside the U.S. such as Trent University and the University of Saskatchewan in Canada and the Chinese Culture University in Taiwan.
Further reading
The Graduation Pledge of Social & Environmental Responsibility: An Effective Tool for Education and Action on Human Rights, by Matt Nicodemus.
References
External links
The Graduation Pledge Alliance website
Environmental education | Graduation Pledge of Social and Environmental Responsibility | Environmental_science | 778 |
30,136,372 | https://en.wikipedia.org/wiki/FANC%20proteins | FANC proteins are a network of at least 15 proteins associated with Fanconi anemia; together they make up the Fanconi anemia (FA) DNA-repair pathway.
History
Fanconi anemia was first described in 1927 by Guido Fanconi, a Swiss pediatrician. It is a chromosome instability syndrome characterized by progressive bone marrow failure and a predisposition to cancer.
Properties
The FA genes that code for the FANC proteins belong to the caretaker group of cancer genes, which prevent the buildup of mutations and chromosome abnormalities. The FANC proteins act together in what is known as the FANC/BRCA pathway.
Components
There are a large number of FANC proteins that participate in the FA pathway. It has a nuclear complex also known as the ‘FA core complex’ which is formed by the interaction of FANCA, FANCB, FANCC, FANCE, FANCF, FANCG, FANCL, FANCM and the accessory proteins (FAAP20, FAAP24, and FAAP100). These accessory proteins are also called Fanconi anemia associated proteins (FAAPs). There is also a group called the anchor complex which consists of FANCM, FAAP24, MHF1 (FAAP16/ CENP-S), and MHF2 (FAAP10/ CENP-X). The FANC proteins that are not a part of the core complex are FANCD1, FANCJ, and FANCN.
Components include:
core protein complex (FANCA, FANCB, FANCC, FANCE, FANCF, FANCG, FANCL, FANCM)
other: FANCD1, FANCD2, FANCI, FANCJ, FANCN, FANCP
Function
They are involved in DNA replication and the damage response. FANC proteins are also responsible for repairing complex DNA interstrand cross-linking lesions and for maintaining genomic stability during DNA replication. DNA cross-linking hinders transcription and replication in the cell, so it is important that the cell has repair mechanisms available at every stage of the cell cycle. There are multiple repair pathways, but the FA pathway is the one that involves the FANC proteins. When a cross-link is detected, the ataxia telangiectasia and Rad3-related (ATR) protein kinase mediates phosphorylation (P) of the FA core complex. This phosphorylated FA core complex is required for successful monoubiquitination of the two components that form the FANCI–D2 complex. Each of the proteins of the FA core complex is needed for this phosphorylation step except for FANCM. When a typical cell senses DNA damage, it targets the monoubiquitinated isoform of FANCI–D2 to the chromatid bearing the DNA damage, that is, the cross-link. Studies have also suggested a connection between the FA DNA repair pathway and stem cell regulation, though the relationship is still unclear. FANC proteins also play a role in redox signaling and in the repair of oxidative DNA damage. Recent studies have examined the FANC protein FANCJ, its enzymatic function, and its roles in repair. Other studies have shown correlations between the FA pathway and post-translational modifications by other ubiquitin-like protein families.
Pathogenesis
A mutation in any of 13 FANC genes can result in Fanconi anemia (FA), a cancer-prone chromosome instability disorder. Fanconi anemia occurs when a biallelic mutation inactivates one of the genes responsible for the replication-stress-associated DNA damage response. Dysfunction of FANC proteins has been associated with a range of conditions, including hypersensitivity of cells to a type of DNA damage known as DNA interstrand cross-links (ICLs) and defective DNA repair. FANC protein mutations have also led to reduced fertility and predisposition to cancers such as breast cancer and myeloid leukaemia. The FANC proteins FANCD1 (BRCA2), FANCJ (BRIP), and FANCN (PALB2) have even been identified as breast cancer susceptibility proteins. Cells lacking the FANC genes that code for these proteins show a hypersensitive phenotype following H2O2 treatment.
Similar/related proteins
FANC proteins are related to BRCA.
FANC proteins are required to promote BLM-mediated anaphase.
FANC proteins also interacts with BRCA1.
FANC proteins also interacts with LIG4.
FANC proteins also interacts with DNA-PKcs.
FANC proteins also interacts with Ku70.
FANC proteins also interacts with Ku80.
FANC proteins also interacts with FAN1.
FANC proteins also interacts with XPF.
FANCC protein interacts with cdc2.
FANCC protein interacts with PKR.
FANCC protein interacts with p53.
FANC protein FANCD1 is also known as BRCA2.
FANC protein FANCJ is also known as BRIP1.
FANC protein FANCN is also known as PALB2.
FANC protein FANCO is also known as RAD51C.
FANC protein FANCP is also known as SLX4.
References
DNA repair
Protein families | FANC proteins | Chemistry,Biology | 1,113 |
16,091,266 | https://en.wikipedia.org/wiki/WR%20104 | WR 104 is a triple star system located about from Earth. The primary star is a Wolf–Rayet star (abbreviated as WR), which has a B0.5 main sequence star in close orbit and another more distant fainter companion.
The WR star is surrounded by a distinctive spiral Wolf–Rayet nebula, often referred to as a pinwheel nebula. The rotational axis of the binary system, and likely of the two closest stars, is directed approximately towards Earth. Within the next few hundred thousand years, the Wolf–Rayet star is predicted to experience a core-collapse supernova with a small chance of producing a long-duration gamma-ray burst.
The possibility of a supernova explosion from WR 104 having destructive consequences for life on Earth has stirred interest in the mass media, and several popular science articles have appeared in the press since 2008. Some articles reject the catastrophic scenario, while others leave it as an open question.
System
The Wolf–Rayet star that produces the characteristic emission line spectrum of WR 104 has a resolved companion and an unresolved spectroscopic companion, forming a triple system.
The spectroscopic pair consists of the Wolf–Rayet star and a B0.5 main sequence star. The WR star is visually 0.3 magnitudes fainter than the main sequence star, although the WR star is typically considered the primary, as it dominates the appearance of the spectrum and is more luminous. The two are in a nearly circular orbit separated by about 2 AU, which would be about one milli-arcsecond at the assumed distance. The two stars orbit every 241.5 days with a small inclination (i.e. nearly face-on).
The visually resolved companion is 1.5 magnitudes fainter than the combined spectroscopic pair and almost one arc-second away. It is thought to be physically associated, although orbital motion has not been observed. From the colour and brightness, it is expected to be a hot main sequence star.
Structure
The rotational axis of the binary system is directed approximately towards Earth at an estimated inclination of 0 to 16 degrees. This provides a fortunate viewing angle for observing the binary system and its dynamics.
Discovered as part of the Keck Aperture Masking Experiment, WR 104 is surrounded by a distinctive dusty Wolf–Rayet nebula over 200 astronomical units in diameter, formed by interaction between the stellar winds of the two stars as they rotate and orbit. The spiral appearance of the nebula has led to the name Pinwheel Nebula being used. The spiral structure of the nebula is composed of dust that would be prevented from forming by WR 104's intense radiation were it not for the star's companion. The region where the stellar winds from the two massive stars interact compresses the material enough for dust to form, and the rotation of the system causes the spiral-shaped pattern. The round appearance of the spiral leads to the conclusion that the system is seen almost pole-on, and a nearly circular orbit with a period of 220 days had been assumed from the pinwheel outflow pattern.
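The mechanism just described can be illustrated with a brief numerical sketch: dust condenses in the wind-collision region, which revolves with the orbit while the dust itself streams radially outward at roughly the wind speed, so dust of different ages traces an Archimedean spiral. This is only an illustration; the outflow speed below is an assumed, typical Wolf–Rayet value, not a measured parameter of WR 104.

import numpy as np

# Illustrative pinwheel sketch (assumed values, not a fit to WR 104).
P_days = 241.5            # orbital period of the inner pair, from the text above
v_wind_km_s = 1200.0      # assumed dust outflow speed, typical of Wolf-Rayet winds
KM_PER_AU = 1.496e8

t = np.linspace(0.0, 2.0 * P_days, 1000)        # dust age in days (about two turns)
theta = 2.0 * np.pi * t / P_days                # angle at which dust of age t was released
r_au = v_wind_km_s * t * 86400.0 / KM_PER_AU    # distance that dust has drifted outward, in AU

# The points (r_au, theta) lie on an Archimedean spiral r = (v_wind * P / 2*pi) * theta.
x, y = r_au * np.cos(theta), r_au * np.sin(theta)
print(f"radial growth per orbital period: {v_wind_km_s * P_days * 86400.0 / KM_PER_AU:.0f} AU")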
WR 104 shows frequent eclipse events as well as other irregular variations in brightness. The undisturbed apparent magnitude is around 12.7, but the star is rarely at that level. The eclipses are believed to be caused by dust formed from expelled material, not by the companion star.
Supernova progenitor
Both stars in the WR 104 system are predicted to end their days as core-collapse supernovae. The Wolf–Rayet star is in the final phase of its life cycle and is expected to turn into a supernova much sooner than the OB star. It is predicted to occur at some point within the next few hundred thousand years. With the relatively close proximity to the Solar System, the question of whether WR 104 will pose a future danger to life on Earth has been raised.
Gamma-ray burst
Apart from a core-collapse supernova, astrophysicists have speculated about whether WR 104 has the potential to cause a gamma-ray burst (GRB) at the end of its life. The companion OB star certainly has the potential, but the Wolf–Rayet star is likely to go supernova much sooner. There remain too many uncertainties and unknown parameters for any reliable prediction, and only sketchy estimates of a GRB scenario for WR 104 have been published.
Wolf–Rayet stars with a sufficiently high spin velocity, prior to going supernova, could produce a long duration gamma ray burst, beaming high energy radiation along its rotational axis in two oppositely directed relativistic jets. Presently, mechanisms for the generation of GRB emissions are not fully understood, but it is considered that there is a small chance that the Wolf–Rayet component of WR 104 may become one when it goes supernova.
Effects on Earth
According to available astrophysical data for both WR 104 and its companion, both stars will eventually be destroyed as highly directional, anisotropic supernovae, producing concentrated radiative emissions in narrow relativistic jets.
Theoretical studies of such supernovae suggest jet formation aligns with the rotational axes of its progenitor star and its eventual stellar remnant, and will preferentially eject matter along their polar axes.
If these jets happen to be aimed towards our solar system, the consequences could significantly harm life on Earth and its biosphere; the actual impact would depend on the amount of radiation received, the number of energetic particles, and the source's distance. Knowing that the inclination of the binary system containing WR 104 is roughly 12° relative to the line of sight, and assuming both stars have their rotational axes similarly orientated, suggests some potential risk. Recent studies suggest these effects pose a "highly unlikely" danger to life on Earth, for which, as stated by Australian astronomer Peter Tuthill, the Wolf–Rayet star would have to undergo an extraordinary string of successive events:
The Wolf–Rayet star would have to generate a gamma-ray burst (GRB); however, these events are mostly associated with galaxies with a low metallicity and have not yet been observed in our Milky Way Galaxy. Some astronomers believe it unlikely that WR 104 will generate a GRB; Tuthill tentatively estimates the probability for any kind of GRB event is around the level of one percent, but cautions more research is needed to be confident.
The rotational axis of the Wolf–Rayet star would have to be pointed in the direction of Earth. The star's axis is estimated to be close to the axis of the binary orbit of WR 104. Observations of the spiral plume are consistent with an orbital pole angle of anywhere from 0 to 16 degrees relative to the Earth, but a spectrographic observation suggests a significantly larger and therefore less dangerous angle of 30°–40° (possibly as much as 45°). Estimates of the "opening angle" of the jet's arc currently range from 2 to 20 degrees. (Note: The "opening angle" is the total angular span of the jet, not the angular span from the axis to one side. Earth would therefore only be in the intersecting path if the actual angle of the star's axis relative to Earth is less than half the opening angle; a simple numerical sketch of this geometry is given after this list.)
The jet would have to reach far enough in order to damage life on Earth. The narrower the jet appears, the farther it will reach, but the less likely it is to hit Earth.
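As noted in the second item above, the beaming question reduces to comparing the viewing inclination with half of the jet's opening angle. The short sketch below simply tabulates that comparison for the ranges of values quoted in this article; it is an illustration of the geometry, not a risk estimate.

def line_of_sight_in_jet(inclination_deg, opening_angle_deg):
    # The "opening angle" is the full angular span of the jet, as in the text,
    # so the line of sight lies inside the cone only if the inclination is at
    # most half of it.
    return inclination_deg <= opening_angle_deg / 2.0

# Inclination estimates quoted above: 0-16 degrees (spiral plume) or 30-45 degrees
# (spectrographic); opening-angle estimates: 2-20 degrees.
for inclination in (0.0, 8.0, 16.0, 30.0, 45.0):
    for opening in (2.0, 20.0):
        inside = line_of_sight_in_jet(inclination, opening)
        print(f"inclination {inclination:4.1f} deg, opening {opening:4.1f} deg -> within beam: {inside}")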
Notes
References
External links
University of Sydney (Keck Observatory) page
Wolf–Rayet stars
Pinwheel nebulae
Sagittarius (constellation)
Sagittarii, V5097
Astronomical objects discovered in 1998
IRAS catalogue objects
Spectroscopic binaries
Triple star systems
B-type main-sequence stars | WR 104 | Astronomy | 1,557 |
25,597,490 | https://en.wikipedia.org/wiki/Solar%20neutrino%20problem | The solar neutrino problem concerned a large discrepancy between the flux of solar neutrinos as predicted from the Sun's luminosity and as measured directly. The discrepancy was first observed in the mid-1960s and was resolved around 2002.
The flux of neutrinos at Earth is several tens of billions per square centimetre per second, mostly from the Sun's core. They are nevertheless difficult to detect, because they interact so weakly with matter that almost all of them pass through the whole Earth unimpeded. Of the three types (flavors) of neutrinos known in the Standard Model of particle physics, the Sun produces only electron neutrinos. When neutrino detectors became sensitive enough to measure the flow of electron neutrinos from the Sun, the number detected was much lower than predicted. In various experiments, the number deficit was between one half and two thirds.
Particle physicists knew that a mechanism, discussed in 1957 by Bruno Pontecorvo, could explain the deficit in electron neutrinos. However, they hesitated to accept it for various reasons, including the fact that it required a modification of the accepted Standard Model. They first looked to the solar model for an adjustment, a possibility that was eventually ruled out. Today it is accepted that the neutrinos produced in the Sun are not massless particles as predicted by the Standard Model but rather mixed quantum states made up of defined-mass eigenstates in different (complex) proportions. That allows a neutrino produced as a pure electron neutrino to change during propagation into a mixture of electron, muon and tau neutrinos, with a reduced probability of being detected by a detector sensitive to only electron neutrinos.
Several neutrino detectors aiming at different flavors, energies, and traveled distance contributed to our present knowledge of neutrinos. In 2002 and 2015, a total of four researchers related to some of these detectors were awarded the Nobel Prize in Physics.
Background
The Sun performs nuclear fusion via the proton–proton chain reaction, which converts four protons into alpha particles, neutrinos, positrons, and energy. This energy is released in the form of electromagnetic radiation, as gamma rays, as well as in the form of the kinetic energy of both the charged particles and the neutrinos. The neutrinos travel from the Sun's core to Earth without any appreciable absorption by the Sun's outer layers.
In the late 1960s, Ray Davis and John N. Bahcall's Homestake Experiment was the first to measure the flux of neutrinos from the Sun and detect a deficit. The experiment used a chlorine-based detector. Many subsequent radiochemical and water Cherenkov detectors confirmed the deficit, including the Kamioka Observatory and Sudbury Neutrino Observatory.
The expected number of solar neutrinos was computed using the standard solar model, which Bahcall had helped establish. The model gives a detailed account of the Sun's internal operation.
In 2002, Ray Davis and Masatoshi Koshiba won part of the Nobel Prize in Physics for experimental work which found the number of solar neutrinos to be around a third of the number predicted by the standard solar model.
In recognition of the firm evidence provided by the 1998 and 2001 experiments "for neutrino oscillation", Takaaki Kajita from the Super-Kamiokande Observatory and Arthur McDonald from the Sudbury Neutrino Observatory (SNO) were awarded the 2015 Nobel Prize for Physics. The Nobel Committee for Physics, however, erred in mentioning neutrino oscillations in regard to the SNO experiment: for the high-energy solar neutrinos observed in that experiment, the relevant flavor conversion is not neutrino oscillation but the Mikheyev–Smirnov–Wolfenstein effect. Bruno Pontecorvo was not included in these Nobel prizes since he died in 1993.
Proposed solutions
Early attempts to explain the discrepancy proposed that the models of the Sun were wrong, i.e. the temperature and pressure in the interior of the Sun were substantially different from what was believed. For example, since neutrinos measure the amount of current nuclear fusion, it was suggested that the nuclear processes in the core of the Sun might have temporarily shut down. Since it takes thousands of years for heat energy to move from the core to the surface of the Sun, this would not immediately be apparent.
Advances in helioseismology observations made it possible to infer the interior temperatures of the Sun; these results agreed with the well established standard solar model. Detailed observations of the neutrino spectrum from more advanced neutrino observatories produced results which no adjustment of the solar model could accommodate: while the overall lower neutrino flux (which the Homestake experiment results found) required a reduction in the solar core temperature, details in the energy spectrum of the neutrinos required a higher core temperature. This happens because different nuclear reactions, whose rates have different dependence upon the temperature, produce neutrinos with different energy. Any adjustment to the solar model worsened at least one aspect of the discrepancies.
Resolution
The solar neutrino problem was resolved with an improved understanding of the properties of neutrinos. According to the Standard Model of particle physics, there are three flavors of neutrinos: electron neutrinos, muon neutrinos, and tau neutrinos. Electron neutrinos are the ones produced in the Sun and the ones detected by the above-mentioned experiments, in particular the chlorine-detector Homestake Mine experiment.
Through the 1970s, it was widely believed that neutrinos were massless and their flavors were invariant. However, in 1968 Pontecorvo proposed that if neutrinos had mass, then they could change from one flavor to another. Thus, the "missing" solar neutrinos could be electron neutrinos which changed into other flavors along the way to Earth, rendering them invisible to the detectors in the Homestake Mine and contemporary neutrino observatories.
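In a simplified two-flavor picture (a common textbook reduction; the real case involves three flavors and, for solar neutrinos, matter effects), Pontecorvo's idea can be written as a short worked derivation. The notation is standard, but the equations below are an illustration of the mechanism rather than the full treatment.

% A flavor state is a superposition of mass eigenstates, with mixing angle theta:
\nu_e = \cos\theta \,\nu_1 + \sin\theta \,\nu_2, \qquad \nu_x = -\sin\theta \,\nu_1 + \cos\theta \,\nu_2

% The mass eigenstates accumulate different quantum phases in flight, so a neutrino
% created as nu_e is still detected as nu_e after a distance L with energy E
% (in natural units) with probability
P(\nu_e \to \nu_e) = 1 - \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2 L}{4E}\right), \qquad \Delta m^2 = m_2^2 - m_1^2

% Averaged over many oscillation lengths, as for neutrinos travelling from the Sun,
% the last factor averages to 1/2, leaving
\langle P(\nu_e \to \nu_e) \rangle = 1 - \tfrac{1}{2}\sin^2(2\theta)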
The supernova 1987A indicated that neutrinos might have mass because of the difference in time of arrival of the neutrinos detected at Kamiokande and IMB. However, because very few neutrino events were detected, it was difficult to draw any conclusions with certainty. If Kamiokande and IMB had high-precision timers to measure the travel time of the neutrino burst through the Earth, they could have more definitively established whether or not neutrinos had mass. If neutrinos were massless, they would travel at the speed of light; if they had mass, they would travel at velocities slightly less than that of light. Since the detectors were not intended for supernova neutrino detection, this could not be done.
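The time-of-flight argument can be made concrete with an order-of-magnitude estimate: a relativistic particle of mass m and energy E lags light by roughly (L/2c)(mc²/E)² over a light-travel time L/c. The distance, mass, and energies below are assumed illustrative values (the SN 1987A distance is roughly 168,000 light-years; a 1 eV neutrino mass is hypothetical), so the numbers only indicate the scale of the effect.

SECONDS_PER_YEAR = 3.156e7

# Assumed illustrative values (not measurements).
distance_light_years = 168_000.0   # approximate distance of SN 1987A
m_c2_eV = 1.0                      # hypothetical neutrino mass-energy

L_over_c = distance_light_years * SECONDS_PER_YEAR   # light-travel time, in seconds
for E_MeV in (10.0, 20.0, 40.0):
    E_eV = E_MeV * 1e6
    delay_s = 0.5 * L_over_c * (m_c2_eV / E_eV) ** 2  # lag behind light, in seconds
    print(f"E = {E_MeV:5.1f} MeV -> lag behind light ~ {delay_s * 1e3:.1f} ms")

# Even a 1 eV neutrino at 10 MeV would arrive only a few tens of milliseconds behind
# light after roughly 168,000 years, which is why very precise timing would be needed.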
Strong evidence for neutrino oscillation came in 1998 from the Super-Kamiokande collaboration in Japan. It produced observations consistent with muon neutrinos (produced in the upper atmosphere by cosmic rays) changing into tau neutrinos within the Earth: Fewer atmospheric neutrinos were detected coming through the Earth than coming directly from above the detector. These observations only concerned muon neutrinos. No tau neutrinos were observed at Super-Kamiokande. The result made it, however, more plausible that the deficit in the electron-flavor neutrinos observed in the (relatively low-energy) Homestake experiment has also to do with neutrino mass.
One year later, the Sudbury Neutrino Observatory (SNO) started collecting data. That experiment aimed at the 8B solar neutrinos, which at around 10 MeV are not much affected by oscillation in either the Sun or the Earth. A large deficit is nevertheless expected due to the Mikheyev–Smirnov–Wolfenstein effect, as had been calculated by Alexei Smirnov in 1985. SNO's unique design employing a large quantity of heavy water as the detection medium was proposed by Herb Chen, also in 1985. SNO could measure the electron neutrinos specifically and all flavors of neutrinos collectively, and hence the fraction of electron neutrinos. After extensive statistical analysis, the SNO collaboration determined that fraction to be about 34%, in perfect agreement with prediction. The total number of detected 8B neutrinos also agrees with the then rough predictions from the solar model.
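As a rough numerical cross-check of the figures above, the sketch below uses an approximate solar mixing angle (sin^2(theta_12) of about 0.31, an assumed illustrative value rather than a quoted measurement) to compare the vacuum-averaged two-flavor survival probability with the MSW-limit value relevant to the high-energy 8B neutrinos observed by SNO.

import math

sin2_theta12 = 0.31                            # assumed approximate solar mixing angle
theta12 = math.asin(math.sqrt(sin2_theta12))

# Vacuum-averaged two-flavor survival probability: 1 - (1/2) sin^2(2 theta).
p_vacuum_avg = 1.0 - 0.5 * math.sin(2.0 * theta12) ** 2

# In the matter-dominated (MSW) limit relevant for high-energy 8B neutrinos,
# the survival probability approaches sin^2(theta_12).
p_msw_limit = sin2_theta12

print(f"vacuum-averaged nu_e survival: {p_vacuum_avg:.2f}")   # about 0.57
print(f"MSW-limit nu_e survival:       {p_msw_limit:.2f}")    # about 0.31, near SNO's ~34%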
References
External links
Solar neutrino data
Solving the Mystery of the Missing Neutrinos
Raymond Davis Jr.'s logbook
Nova – The Ghost Particle
The Solar Neutrino Problem by John N. Bahcall
The Solar Neutrino Problem, by L. Stockman
A set of photos of different Neutrino detectors
John Bahcall's web site
Neutrino problem
Particle physics
Neutrinos | Solar neutrino problem | Physics | 1,854 |
12,687,840 | https://en.wikipedia.org/wiki/Subnet%20%28mathematics%29 | In topology and related areas of mathematics, a subnet is a generalization of the concept of subsequence to the case of nets. The analogue of "subsequence" for nets is the notion of a "subnet". The definition is not completely straightforward, but is designed to allow as many theorems about subsequences to generalize to nets as possible.
There are three non-equivalent definitions of "subnet".
The first definition of a subnet was introduced by John L. Kelley in 1955 and later, Stephen Willard introduced his own (non-equivalent) variant of Kelley's definition in 1970.
Subnets in the sense of Willard and subnets in the sense of Kelley are the most commonly used definitions of "subnet", but neither is equivalent to the concept of "subordinate filter", which is the analog of "subsequence" for filters (they are not equivalent in the sense that there exist subordinate filters whose filter/subordinate–filter relationship cannot be described in terms of the corresponding net/subnet relationship).
A third definition of "subnet" (not equivalent to those given by Kelley or Willard) that is equivalent to the concept of "subordinate filter" was introduced independently by Smiley (1957), Aarnes and Andenaes (1972), Murdeshwar (1983), and possibly others, although it is not often used.
This article discusses the definition due to Willard (the other definitions are described in the article Filters in topology#Non–equivalence of subnets and subordinate filters).
Definitions
There are several different non-equivalent definitions of "subnet" and this article will use the definition introduced in 1970 by Stephen Willard, which is as follows:
If $x_\bullet = (x_a)_{a \in A}$ and $s_\bullet = (s_i)_{i \in I}$ are nets in a set $X$ from directed sets $A$ and $I$, respectively, then $s_\bullet$ is said to be a subnet of $x_\bullet$ (or a Willard-subnet of $x_\bullet$) if there exists a monotone final function
$h : I \to A$
such that $s_i = x_{h(i)}$ for every $i \in I.$
A function $h : I \to A$ is monotone, order-preserving, and an order homomorphism if whenever $i \leq j$ then $h(i) \leq h(j)$, and it is called final if its image $h(I)$ is cofinal in $A.$
The set $h(I)$ being cofinal in $A$ means that for every $a \in A$ there exists some $b \in h(I)$ such that $b \geq a$; that is, for every $a \in A$ there exists an $i \in I$ such that $h(i) \geq a.$
Since the net $x_\bullet$ is the function $x : A \to X$ and the net $s_\bullet$ is the function $s : I \to X$, the defining condition may be written more succinctly and cleanly as either $s = x \circ h$ or $s_\bullet = x_{h(\bullet)}$, where $\circ$ denotes function composition and $x_{h(\bullet)}$ is just notation for the function $x \circ h : I \to X.$
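As a concrete illustration of this definition (a worked instance of our own choosing, not part of the original text), the following display shows one monotone final map and the subnet it induces:

```latex
% Illustrative worked instance of the Willard definition.
% Take A = I = \mathbb{N} with their usual orders and h(i) = 2i.
\[
  h : \mathbb{N} \to \mathbb{N}, \qquad h(i) = 2i .
\]
\[
  i \le j \implies h(i) \le h(j) \quad \text{(monotone)}, \qquad
  \forall a \in \mathbb{N} \;\exists i \in \mathbb{N} : h(i) = 2i \ge a \quad \text{(final)} .
\]
\[
  s_i := x_{h(i)} = x_{2i}, \qquad
  \text{so } (x_2, x_4, x_6, \ldots) \text{ is a Willard-subnet of } (x_n)_{n \in \mathbb{N}} .
\]
```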
Subnets versus subsequences
Importantly, a subnet is not merely the restriction of a net $(x_a)_{a \in A}$ to a directed subset of its domain $A$.
In contrast, by definition, a subsequence of a given sequence $(x_n)_{n \in \mathbb{N}}$ is a sequence formed from the given sequence by deleting some of the elements without disturbing the relative positions of the remaining elements. Explicitly, a sequence $(s_k)_{k \in \mathbb{N}}$ is said to be a subsequence of $(x_n)_{n \in \mathbb{N}}$ if there exists a strictly increasing sequence of positive integers $n_1 < n_2 < n_3 < \cdots$ such that $s_k = x_{n_k}$ for every $k$ (that is to say, such that $s_\bullet = (x_{n_1}, x_{n_2}, \ldots)$). The sequence $(n_k)_{k \in \mathbb{N}}$ can be canonically identified with the function $h : \mathbb{N} \to \mathbb{N}$ defined by $h(k) = n_k.$ Thus a sequence $s_\bullet$ is a subsequence of $x_\bullet$ if and only if there exists a strictly increasing function $h : \mathbb{N} \to \mathbb{N}$ such that $s = x \circ h.$
Subsequences are subnets
Every subsequence is a subnet because if $(x_{n_k})_{k \in \mathbb{N}}$ is a subsequence of $(x_n)_{n \in \mathbb{N}}$ then the map $h : \mathbb{N} \to \mathbb{N}$ defined by $h(k) = n_k$ is an order-preserving map whose image is cofinal in its codomain and satisfies $x_{h(k)} = x_{n_k}$ for all $k.$
Sequence and subnet but not a subsequence
The sequence $(s_i)_{i \in \mathbb{N}} := (x_1, x_1, x_2, x_2, x_3, x_3, \ldots)$ is not a subsequence of $(x_n)_{n \in \mathbb{N}}$ although it is a subnet, because the map $h : \mathbb{N} \to \mathbb{N}$ defined by $h(i) = \lceil i/2 \rceil$ is an order-preserving map whose image is $\mathbb{N}$ and satisfies $s_i = x_{h(i)}$ for all $i.$
While a sequence is a net, a sequence has subnets that are not subsequences. The key difference is that subnets can use the same point in the net multiple times and the indexing set of the subnet can have much larger cardinality. Using the more general definition where we do not require monotonicity, a sequence is a subnet of a given sequence if and only if it can be obtained from some subsequence by repeating its terms and reordering them.
Subnet of a sequence that is not a sequence
A subnet of a sequence is not necessarily a sequence.
For an example, let $I = \{r \in \mathbb{R} : r > 0\}$ be directed by the usual order and define $h : I \to \mathbb{N}$ by letting $h(r) = \lceil r \rceil$ be the ceiling of $r.$ Then $h$ is an order-preserving map (because it is a non-decreasing function) whose image $h(I) = \mathbb{N}$ is a cofinal subset of its codomain. Let $x_\bullet = (x_n)_{n \in \mathbb{N}}$ be any sequence (such as a constant sequence, for instance) and let $s_r := x_{h(r)} = x_{\lceil r \rceil}$ for every $r \in I$ (in other words, let $s_\bullet := x_\bullet \circ h$). This net $(s_r)_{r \in I}$ is not a sequence since its domain $I$ is an uncountable set. However, $(s_r)_{r \in I}$ is a subnet of the sequence $x_\bullet$ since (by definition) $s_r = x_{h(r)}$ holds for every $r \in I.$ Thus $s_\bullet$ is a subnet of $x_\bullet$ that is not a sequence.
Furthermore, the sequence $x_\bullet$ is also a subnet of $(s_r)_{r \in I}$ since the inclusion map $\iota : \mathbb{N} \to I$ (that sends $n \mapsto n$) is an order-preserving map whose image $\iota(\mathbb{N}) = \mathbb{N}$ is a cofinal subset of its codomain and $x_n = s_{\iota(n)} = s_n$ holds for all $n.$ Thus $x_\bullet$ and $(s_r)_{r \in I}$ are (simultaneously) subnets of each other.
Subnets induced by subsets
Suppose $I \subseteq \mathbb{N}$ is an infinite set and $(x_n)_{n \in \mathbb{N}}$ is a sequence. Then $(x_n)_{n \in I}$ is a net on $(I, \leq)$ that is also a subnet of $(x_n)_{n \in \mathbb{N}}$ (take $h : I \to \mathbb{N}$ to be the inclusion map). This subnet in turn induces a subsequence $(x_{n_k})_{k \in \mathbb{N}}$ by defining $n_k$ as the $k$th smallest value in $I$ (that is, let $n_1 := \inf I$ and let $n_k := \inf\{n \in I : n > n_{k-1}\}$ for every integer $k > 1$). In this way, every infinite subset of $\mathbb{N}$ induces a canonical subnet that may be written as a subsequence. However, as demonstrated above, not every subnet of a sequence is a subsequence.
Applications
The definition generalizes some key theorems about subsequences:
A net $x_\bullet$ converges to $x$ if and only if every subnet of $x_\bullet$ converges to $x.$
A net $x_\bullet$ has a cluster point $y$ if and only if it has a subnet $s_\bullet$ that converges to $y.$
A topological space $X$ is compact if and only if every net in $X$ has a convergent subnet (see net for a proof).
Taking $h$ to be the identity map in the definition of "subnet" and requiring $I$ to be a cofinal subset of $A$ leads to the concept of a cofinal subnet, which turns out to be inadequate since, for example, the second theorem above fails for the Tychonoff plank if we restrict ourselves to cofinal subnets.
Clustering and closure
If $x_\bullet = (x_a)_{a \in A}$ is a net in a subset $S \subseteq X$ and if $x$ is a cluster point of $x_\bullet$ then $x \in \operatorname{cl}_X S.$ In other words, every cluster point of a net in a subset belongs to the closure of that set.
If $x_\bullet = (x_a)_{a \in A}$ is a net in $X$ then the set of all cluster points of $x_\bullet$ in $X$ is equal to
$\bigcap_{a \in A} \operatorname{cl}_X\left(x_{\geq a}\right),$ where $x_{\geq a} := \{x_b : b \in A,\ b \geq a\}$ for each $a \in A.$
Convergence versus clustering
If a net $x_\bullet$ converges to a point $x$ then $x$ is necessarily a cluster point of that net. The converse is not guaranteed in general. That is, it is possible for $x$ to be a cluster point of a net $x_\bullet$ but for $x_\bullet$ to not converge to $x.$
However, if $x_\bullet = (x_a)_{a \in A}$ clusters at $x$ then there exists a subnet of $x_\bullet$ that converges to $x.$
This subnet can be explicitly constructed from $(x_a)_{a \in A}$ and the neighborhood filter $\mathcal{N}_x$ at $x$ as follows: make
$I := \{(a, U) \in A \times \mathcal{N}_x : x_a \in U\}$
into a directed set by declaring that $(a, U) \leq (b, V)$ if and only if $a \leq b$ and $U \supseteq V;$
then $\left(x_a\right)_{(a, U) \in I} \to x$ and $\left(x_a\right)_{(a, U) \in I}$ is a subnet of $(x_a)_{a \in A}$ since the map
$h : I \to A, \quad (a, U) \mapsto a$
is a monotone function whose image $h(I)$ is a cofinal subset of $A$, and $x_{h(a, U)} = x_a.$
Thus, a point $x$ is a cluster point of a given net if and only if it has a subnet that converges to $x.$
See also
Notes
Citations
References
Topology | Subnet (mathematics) | Physics,Mathematics | 1,466 |
1,818,270 | https://en.wikipedia.org/wiki/Hamiltonian%20vector%20field | In mathematics and physics, a Hamiltonian vector field on a symplectic manifold is a vector field defined for any energy function or Hamiltonian. Named after the physicist and mathematician Sir William Rowan Hamilton, a Hamiltonian vector field is a geometric manifestation of Hamilton's equations in classical mechanics. The integral curves of a Hamiltonian vector field represent solutions to the equations of motion in the Hamiltonian form. The diffeomorphisms of a symplectic manifold arising from the flow of a Hamiltonian vector field are known as canonical transformations in physics and (Hamiltonian) symplectomorphisms in mathematics.
Hamiltonian vector fields can be defined more generally on an arbitrary Poisson manifold. The Lie bracket of two Hamiltonian vector fields corresponding to functions f and g on the manifold is itself a Hamiltonian vector field, with the Hamiltonian given by the
Poisson bracket of f and g.
Definition
Suppose that $(M, \omega)$ is a symplectic manifold. Since the symplectic form $\omega$ is nondegenerate, it sets up a fiberwise-linear isomorphism
$\omega^\flat : TM \to T^*M$
between the tangent bundle $TM$ and the cotangent bundle $T^*M$, with the inverse
$\omega^\sharp := (\omega^\flat)^{-1} : T^*M \to TM.$
Therefore, one-forms on a symplectic manifold $M$ may be identified with vector fields, and every differentiable function $H : M \to \mathbb{R}$ determines a unique vector field $X_H = \omega^\sharp(\mathrm{d}H)$, called the Hamiltonian vector field with the Hamiltonian $H$, by defining for every vector field $Y$ on $M$,
$\mathrm{d}H(Y) = \omega(X_H, Y).$
Note: Some authors define the Hamiltonian vector field with the opposite sign. One has to be mindful of varying conventions in physical and mathematical literature.
Examples
Suppose that $M$ is a $2n$-dimensional symplectic manifold. Then locally, one may choose canonical coordinates $(q^1, \ldots, q^n, p_1, \ldots, p_n)$ on $M$, in which the symplectic form is expressed as:
$\omega = \sum_i \mathrm{d}q^i \wedge \mathrm{d}p_i,$
where $\mathrm{d}$ denotes the exterior derivative and $\wedge$ denotes the exterior product. Then the Hamiltonian vector field with Hamiltonian $H$ takes the form:
$X_H = \left(\frac{\partial H}{\partial p_i},\, -\frac{\partial H}{\partial q^i}\right) = \Omega\, \mathrm{d}H,$
where $\Omega$ is a $2n \times 2n$ square matrix
$\Omega = \begin{pmatrix} 0 & I_n \\ -I_n & 0 \end{pmatrix},$
and
$\mathrm{d}H = \begin{pmatrix} \partial_{q^i} H \\ \partial_{p_i} H \end{pmatrix}.$
The matrix $\Omega$ is frequently denoted with $J$.
Suppose that $M = \mathbb{R}^{2n}$ is the $2n$-dimensional symplectic vector space with (global) canonical coordinates. Several simple Hamiltonians can be computed directly (a symbolic sketch for the harmonic oscillator follows this list):
If $H = p_i$ then $X_H = \frac{\partial}{\partial q^i}$;
if $H = q^i$ then $X_H = -\frac{\partial}{\partial p_i}$;
if $H = \tfrac{1}{2}\sum_i p_i^2$ then $X_H = \sum_i p_i \frac{\partial}{\partial q^i}$;
if $H = \tfrac{1}{2}\sum_{i,j} a_{ij}\, q^i q^j$ with $a_{ij} = a_{ji}$ then $X_H = -\sum_{i,j} a_{ij}\, q^i \frac{\partial}{\partial p_j}.$
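A short symbolic computation can make the coordinate formula above concrete. The sketch below uses the sympy library and an illustrative harmonic-oscillator Hamiltonian of our own choosing (not an example from the article); it computes the components of the Hamiltonian vector field and checks that the Hamiltonian is constant along its flow.

```python
# Minimal symbolic sketch: the Hamiltonian vector field in one degree of freedom,
# X_H = (dH/dp) d/dq - (dH/dq) d/dp, for the illustrative choice H = (p^2 + q^2)/2.
import sympy as sp

q, p = sp.symbols('q p', real=True)
H = (p**2 + q**2) / 2

# Components of X_H in the coordinate basis (d/dq, d/dp).
X_q = sp.diff(H, p)     # dq/dt =  dH/dp
X_p = -sp.diff(H, q)    # dp/dt = -dH/dq
print(X_q, X_p)         # p  -q  : circular flow in the (q, p) plane

# H is conserved along the flow: its derivative along X_H vanishes identically.
dH_dt = sp.simplify(X_q * sp.diff(H, q) + X_p * sp.diff(H, p))
print(dH_dt)            # 0
```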
Properties
The assignment is linear, so that the sum of two Hamiltonian functions transforms into the sum of the corresponding Hamiltonian vector fields.
Suppose that $(q^1, \ldots, q^n, p_1, \ldots, p_n)$ are canonical coordinates on $M$ (see above). Then a curve $\gamma(t) = (q(t), p(t))$ is an integral curve of the Hamiltonian vector field $X_H$ if and only if it is a solution of Hamilton's equations:
$\dot{q}^i = \frac{\partial H}{\partial p_i}, \qquad \dot{p}_i = -\frac{\partial H}{\partial q^i}.$
The Hamiltonian $H$ is constant along the integral curves, because $\langle \mathrm{d}H, \dot{\gamma} \rangle = \omega(X_H(\gamma), X_H(\gamma)) = 0$. That is, $H(\gamma(t))$ is actually independent of $t$. This property corresponds to the conservation of energy in Hamiltonian mechanics.
More generally, if two functions $F$ and $H$ have a zero Poisson bracket (cf. below), then $F$ is constant along the integral curves of $H$, and similarly, $H$ is constant along the integral curves of $F$. This fact is the abstract mathematical principle behind Noether's theorem.
The symplectic form $\omega$ is preserved by the Hamiltonian flow. Equivalently, the Lie derivative $\mathcal{L}_{X_H} \omega = 0.$
Poisson bracket
The notion of a Hamiltonian vector field leads to a skew-symmetric bilinear operation on the differentiable functions on a symplectic manifold M, the Poisson bracket, defined by the formula
$\{f, g\} = \omega(X_g, X_f) = \mathrm{d}g(X_f) = \mathcal{L}_{X_f} g,$
where $\mathcal{L}_X$ denotes the Lie derivative along a vector field X. Moreover, one can check that the following identity holds:
$X_{\{f, g\}} = [X_f, X_g],$
where the right hand side represents the Lie bracket of the Hamiltonian vector fields with Hamiltonians f and g. As a consequence (a proof at Poisson bracket), the Poisson bracket satisfies the Jacobi identity:
$\{\{f, g\}, h\} + \{\{g, h\}, f\} + \{\{h, f\}, g\} = 0,$
which means that the vector space of differentiable functions on $M$, endowed with the Poisson bracket, has the structure of a Lie algebra over $\mathbb{R}$, and the assignment $f \mapsto X_f$ is a Lie algebra homomorphism, whose kernel consists of the locally constant functions (constant functions if $M$ is connected).
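The Jacobi identity can be verified symbolically on examples. The sketch below is an illustrative check of our own (functions and names chosen arbitrarily), using the canonical bracket in one degree of freedom.

```python
# Illustrative check: for the canonical bracket {f, g} = df/dq * dg/dp - df/dp * dg/dq,
# the Jacobi identity holds; here it is verified symbolically on arbitrary test functions.
import sympy as sp

q, p = sp.symbols('q p', real=True)

def poisson(f, g):
    """Canonical Poisson bracket {f, g} in coordinates (q, p)."""
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

f, g, h = q**2 * p, sp.sin(q) + p**3, q * p   # arbitrary smooth test functions
jacobi = (poisson(poisson(f, g), h)
          + poisson(poisson(g, h), f)
          + poisson(poisson(h, f), g))
print(sp.simplify(jacobi))   # 0
```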
Remarks
Notes
Works cited
See section 3.2.
External links
Hamiltonian vector field on nLab
Hamiltonian mechanics
Symplectic geometry
William Rowan Hamilton | Hamiltonian vector field | Physics,Mathematics | 776 |
11,408,570 | https://en.wikipedia.org/wiki/Latent%20extinction%20risk | In conservation biology, latent extinction risk is a measure of the potential for a species to become threatened.
Latent risk can most easily be described as the difference, or discrepancy, between the current observed extinction risk of a species (typically as quantified by the IUCN Red List) and the theoretical extinction risk of a species predicted by its biological or life history characteristics.
Calculation
Because latent risk is the discrepancy between current and predicted risks, estimates of both of these values are required (See population modeling and population dynamics). Once these values are known, the latent extinction risk can be calculated as Predicted Risk - Current Risk = Latent Extinction Risk.
When the latent extinction risk is a positive value, it indicates that a species is currently less threatened than its biology would suggest it ought to be. For example, a species may have several of the characteristics often found in threatened species, such as large body size, small geographic distribution, or low reproductive rate, but still be rated as "least concern" in the IUCN Red List. This may be because it has not yet been exposed to serious threatening processes such as habitat degradation.
Conversely, negative values of latent risk indicate that a species is already more threatened than its biology would indicate, probably because it inhabits a part of the world where it has been exposed to extreme endangering processes. Species with severely low negative values are usually listed as an endangered species and have associated recovery and conservation plans.
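A minimal numerical sketch of this calculation and its interpretation follows; the species names, risk scores and the 0-to-5 scale are hypothetical placeholders, not values from the IUCN Red List.

```python
# Hypothetical illustration of latent extinction risk: predicted risk minus current
# risk, on an arbitrary 0 (least concern) to 5 (critically endangered) scale.
species_risk = {
    # name: (predicted_risk, current_risk) -- made-up values for illustration
    "species_A": (4.0, 1.0),   # biology suggests high risk, currently listed as low risk
    "species_B": (1.5, 3.5),   # already more threatened than its biology suggests
}

for name, (predicted, current) in species_risk.items():
    latent = predicted - current
    status = ("less threatened than its biology suggests" if latent > 0
              else "more threatened than its biology suggests")
    print(f"{name}: latent risk = {latent:+.1f} ({status})")
```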
Limits
One of the issues associated with latent extinction risk is its difficulty to calculate because of the limited availability of data for predicting extinction risk across large numbers of species. Hence, the only study of latent risk to date has focused on mammals, which are one of the best-studied groups of organisms.
Effects on conservation
A study of latent extinction risk in mammals identified a number of "hotspots" where the average value of latent risk for mammal species was unusually high. This study suggested that these areas represented an opportunity for proactive conservation efforts, because these could become the "future battlegrounds of mammal conservation" if levels of human impact increase. Unexpectedly, the hotspots of mammal latent risk include large areas of Arctic America, where overall mammal diversity is not high, but where many species have the kind of biological traits (such as large body size and slow reproductive rate) that could render them extinction-prone. Another notable region of high latent risk for mammals is the island chain of Indonesia and Melanesia, where there are large numbers of restricted-range endemic species.
Because it is much more cost-effective to prevent species declines before they happen than to attempt to rescue species from the brink of extinction, latent risk hotspots could form part of a global scheme to prioritize areas for conservation effort, together with other kinds of priority areas such as biodiversity hotspots.
References
Ecological metrics
Extinction
Environmental conservation | Latent extinction risk | Mathematics | 590 |
51,724,747 | https://en.wikipedia.org/wiki/Single%20pushout%20graph%20rewriting | In computer science, a single pushout graph rewriting or SPO graph rewriting refers to a mathematical framework for graph rewriting, and is used in contrast to the double-pushout approach of graph rewriting.
References
Further reading
Graph rewriting | Single pushout graph rewriting | Mathematics,Technology | 52 |
44,600,975 | https://en.wikipedia.org/wiki/Equatorial%20plasma%20bubble | Equatorial plasma bubbles are an ionospheric phenomenon near the Earth's geomagnetic equator at night time. They affect radio waves by causing varying delays. They degrade the performance of GPS.
Different times of the year and locations have different frequencies of occurrence. In Northern Australia, the most common times are February to April and August to October, when a plasma bubble is expected every night. Plasma bubbles have dimensions around 100 km. Plasma bubbles form after dark when the sun stops ionising the ionosphere. The ions recombine, forming a lower-density layer. This layer can rise through the more ionized layers above via convection, which makes a plasma bubble. The bubbles are turbulent with irregular edges.
An equatorial plasma bubble could have affected the Battle of Shah-i-Kot by disabling communications from a communications satellite to a helicopter.
On August 27, 2024, China's powerful LARID radar, developed for military purposes such as detecting satellites and distant targets, reportedly detected an equatorial plasma bubble over the Egyptian pyramids.
References
Ionosphere | Equatorial plasma bubble | Physics,Astronomy | 224 |
1,537,992 | https://en.wikipedia.org/wiki/Descent%20direction | In optimization, a descent direction is a vector that points towards a local minimum of an objective function .
Computing by an iterative method, such as line search defines a descent direction at the th iterate to be any such that , where denotes the inner product. The motivation for such an approach is that small steps along guarantee that is reduced, by Taylor's theorem.
Using this definition, the negative of a non-zero gradient is always a descent direction, as $\langle -\nabla f(\mathbf{x}_k), \nabla f(\mathbf{x}_k) \rangle = -\langle \nabla f(\mathbf{x}_k), \nabla f(\mathbf{x}_k) \rangle < 0.$
Numerous methods exist to compute descent directions, all with differing merits, such as gradient descent or the conjugate gradient method.
More generally, if $P$ is a positive definite matrix, then $\mathbf{p}_k = -P \nabla f(\mathbf{x}_k)$ is a descent direction at $\mathbf{x}_k$. This generality is used in preconditioned gradient descent methods.
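A small numerical sketch of these two facts follows; the quadratic objective and the preconditioning matrix are arbitrary illustrative choices, not taken from any particular method.

```python
# Illustrative check: both the negative gradient and a preconditioned direction
# -P grad f satisfy the descent condition <p, grad f> < 0 for positive definite P.
import numpy as np

A = np.diag([1.0, 10.0])            # ill-conditioned quadratic objective (illustrative)

def f(x):
    return 0.5 * x @ A @ x

def grad_f(x):
    return A @ x

x_k = np.array([2.0, 1.0])
g = grad_f(x_k)

P = np.diag([1.0, 0.1])             # positive definite preconditioner (illustrative)
directions = {"-grad f": -g, "-P grad f": -P @ g}

for name, p in directions.items():
    print(name, "is a descent direction:", p @ g < 0)          # True for both
    print("  small step reduces f:", f(x_k + 1e-3 * p) < f(x_k))
```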
See also
Directional derivative
References
Mathematical optimization | Descent direction | Mathematics | 159 |
42,057,374 | https://en.wikipedia.org/wiki/Chem-seq | Chem-seq is a technique that is used to map genome-wide interactions between small molecules and their protein targets in the chromatin of eukaryotic cell nuclei. The method employs chemical affinity capture coupled with massively parallel DNA sequencing to identify genomic sites where small molecules interact with their target proteins or DNA. It was first described by Lars Anders et al. in the January, 2014 issue of "Nature Biotechnology".
Uses of Chem-seq
A substantial number of small-molecule ligands, including therapeutic drugs, elicit their effects by binding specific proteins associated with the genome. Mapping the global interactions of these chemical entities with chromatin in a genome-wide manner could provide insights into the mechanisms by which a small molecule influences cellular functions. When combined with other chromatin analysis techniques such as ChIP-seq, Chem-seq can be utilized to investigate the genome-wide effects of therapeutic modalities and to understand the effects of drugs on nuclear architecture in various biological contexts. In a broader sense, these methods will be useful to enhance our understanding of the therapeutic mechanisms through which small molecules modulate the function and activity of genome-associated proteins. Through the identification of the cellular targets of a drug, it becomes possible to gain an increased understanding of the causes of side effects and toxicity in the early stages of drug development, which should help to reduce the attrition rate in development.
Workflow of Chem-seq
Chem-seq relies on the ability to create a biotinylated version of a small molecule of interest to allow for downstream affinity capture. Chem-seq can be carried out either in vitro or in vivo, although the results from each have proven to be highly similar.
In vivo Chem-seq
During In vivo Chem-seq, cultured cells in medium are treated simultaneously with either a biotinylated version of the small molecule under study or DMSO (as a control) and 1% formaldehyde for the crosslinking of DNA, proteins and small molecules. DNA is then extracted from the cells, sonicated and enriched for regions containing the biotinylated molecule of interest by incubation with streptavidin magnetic beads, which have a very high affinity for biotin. The enriched DNA fraction is then purified, eluted from the beads and subjected to next generation sequencing. Genomic regions enriched in the Chem-seq library relative to the control are associated with the small molecule under study.
In vitro Chem-seq
In vitro Chem-seq begins with the crosslinking of cultured cells in medium with 0.5% formaldehyde. Cell nuclei are then harvested from the cells and their DNA is extracted. This extract is sonicated before being incubated with streptavidin magnetic beads that are bound to a biotinylated form of our compound of interest. This provides an opportunity for the small molecule of interest to interact with its target genomic regions. These genomic regions are then isolated using a magnet and subjected to next generation sequencing and analysis to determine regions enriched for our small molecule of interest.
Sensitivity
Chem-seq was tested on three classes of drugs using MM1.S multiple myeloma cells to:
1) Investigate the genome-wide binding of the bromodomain inhibitor JQ1 to the BET bromodomain family members BRD2, BRD3 and BRD4
2) Map the genomic binding sites of AT7519, an inhibitor of the cyclin dependent kinase CDK9, and
3) Study how the DNA intercalating agent psoralen interacts with genomic DNA in vivo.
In the first two trials, Chem-seq signals occurred at genomic sites occupied by the drugs' corresponding target proteins and were concordant with ChIP-seq results. However, bio-AT7519 produced weaker Chem-seq signals compared to those observed for bio-JQ1. There was also a substantial number of loci that were not co-occupied by bio-AT7519 and its target CDK9, which might be attributed to the weaker signal obtained for bio-AT7519 or to the fact that AT7519 can bind and inhibit other cyclin-dependent kinases such as CDK1, CDK2, CDK4 and CDK5. In a third experiment, Chem-seq was efficient in mapping genomic binding sites of the DNA intercalating agent psoralen and showed that bio-psoralen preferentially binds to the transcription start site of active genes.
Advantages and Limitations
Advantages
Chem-seq is the first method that provides researchers with a way of determining the location of small molecules throughout the genome. It can be used in conjunction with ChIP-seq to cross reference the location of certain drugs with DNA binding proteins, like transcription factors, to discover novel interactions and aid in characterizing the molecular mechanisms through which small molecules affect the genome.
Because it uses next generation sequencing to determine small molecule binding sites, Chem-seq has a very high sensitivity and is compatible with other next generation sequencing based methods.
Previously, another similar technique known as chromatin affinity-precipitation (ChAP) assay was used to map the sites of interaction of metabolic compounds in the yeast genome, but Chem-seq is the first method to assess the genome-wide localization of small molecules in mammalian cells.
Limitations
For Chem-seq to be feasible, the small molecule under study must be amenable to biotinylation without disruption of its natural binding properties. This is simply not possible with certain small molecules and even when it is, the process can require expertise in organic chemistry. Once synthesized, the binding properties of the biotinylated compound must be tested. To date, this has been accomplished by comparing the binding kinetics of the biotinylated and unmodified compounds, a process that requires prior knowledge of the proteins that the compound binds.
The locations of bio-JQ1 throughout the genome, as determined using Chem-seq, are almost identical to the ChIP-seq derived locations of JQ1's known target protein, BRD4. Although this may be viewed as a testament to the accuracy of the method, it also highlights redundancies between the two techniques, especially when target proteins are already known.
References
Protein methods
Molecular biology techniques
Biotechnology
DNA | Chem-seq | Chemistry,Biology | 1,312 |
25,338,858 | https://en.wikipedia.org/wiki/List%20of%20largest%20video%20screens | This is a list of the largest video-capable screens in the world.
See also
Jumbotron
References
Display technology
Technology-related lists
Videoscreens | List of largest video screens | Engineering | 32 |
3,753,857 | https://en.wikipedia.org/wiki/Work%20behavior | Work behavior is the behavior one uses in employment and is normally more formal than other types of human behavior. This varies from profession to profession, as some are far more casual than others. For example, a computer programmer would usually have far more leeway in their work behavior than a lawyer.
People are usually more careful in how they behave around their colleagues than outside work, as many actions intended in jest can be perceived as inappropriate or even harassment in the work environment. In some cases, men may take considerably more care than they ordinarily would so as not to be perceived as sexually harassing.
Work behavior is one of the significant aspects of human behavior. It is how an individual communicates with the other members of the workplace, through both verbal and non-verbal means. For example, trust is a non-verbal behavior that is often reflected in verbal communication at a workplace. It represents one's attitude towards one's team and colleagues. Positive work behavior by an individual leads to higher performance, productivity and better outputs by the team or the individual. From the organizational perspective, it is the most important area on which Human Resource managers should focus.
Sackett and Walmsley (2014) identify the personality attributes most critical for workplace success, as published in Perspectives on Psychological Science. Their research highlights conscientiousness, emotional stability, and agreeableness as the top traits associated with positive job performance and outcomes. This study underscores the significance of these attributes in predicting employee effectiveness and organizational success.
Counterproductive work behavior
Counterproductive work behavior is also a type of work behavior, although most people do not recognize it as such. Counterproductive work behavior consists of acts by employees directed against their organization that harm or undermine its work and production. Examples include passive actions such as failing to work to meet a deadline or faking incompetence. Even when people do not recognize this behavior, it can come to seem normal to them. Some examples of counterproductive behavior are:
Intimate partner violence: Intimate partner violence frequently carries over into the workplace. About 36% to 75% of employed women who experience intimate partner violence report that they have been harassed by a significant other while working. A variety of abusive behaviors is used against victims to hinder their ability to come to work, get their work done, and stay in their current employment. The forms of interference that perpetrators employ include stalking the victim at the work site, harassing the victim, and interfering with the victim's work, for example sabotaging the victim so that they cannot get to work.
Boredom: Jobs that require individuals to do the same task on a daily basis can lead to counterproductive behaviors. Boredom on the job could result in unfavorable work practices such as frequently missing work, lack of concentration, or withdrawal from the task that the person was hired to do, and thus, leading to a decrease in work efficiency.
When people or someone ignore their colleagues while at work.
When people work slowly and the work needs to be done fast.
When people refuse to help their colleagues.
When people refuse to accept a task.
When people show less interest in their work.
When people show destructive behavior against their colleagues.
When people do not appreciate their colleague's success.
These are the examples of counterproductive behavior that people confront in their daily life.
A way to counteract this unproductive behavior is to apply the principle that work behavior is a function of contingent consequences. By addressing what employees value most in their workplace, boredom on the job can be avoided. Competitive compensation, bonuses and merit-based rewards, retirement plans, supplemental training programs and flexible work locations are the top five things that employees value most at their workplace. Recognizing positive and productive behavior in a workplace can be made simpler by using job analysis, which gives a better understanding and evaluation of the typical duties a role involves (see also Industrial & Organizational Assessment).
Sexual harassment in the workplace
Sexual harassment occurs when one individual (whether male or female) takes a sexual interest in another person at work and tries to exploit them. Objectifying the target can lead to feelings of insecurity and pressure to leave the company. One study of 134,200 people reported that 65% of men and 93% of women had been sexually harassed in the workplace, and that work efficiency was affected through job turnover and people calling out sick. The study also showed that sexual harassment could leave people feeling depressed and result in high levels of anxiety and mental and physical stress.
Interactions with colleagues
Effects of verbal abuse
Verbal abuse is mistreatment delivered through spoken language. It can impact productivity in the workplace for both the employee and the employer, and may lead to the employee's resignation, poor quality of work, turnover, and illness. A related form of abuse is mobbing, in which a group of individuals engages in non-physical abusive behavior at work, often expressed as aggressive and unprincipled verbal abuse directed at one person. If this behavior continues, the targeted person may eventually feel pressured to quit their job because of declining performance.
Conflict resolution at work
It is important to resolve issues that arise at work among team members, and conflict resolution plays a large role in this. Handling these issues appropriately helps decrease the harmful influence of all types of conflict by restoring integrity, building success in the workplace and restoring efficiency. Working together on conflict resolution allows conflicts of different types to be settled in a way that is beneficial to the group.
References
Human behavior
Industrial and organizational psychology
Organizational behavior
Workplace | Work behavior | Biology | 1,187 |
200,307 | https://en.wikipedia.org/wiki/Michel%20Rolle | Michel Rolle (21 April 1652 – 8 November 1719) was a French mathematician. He is best known for Rolle's theorem (1691). He is also the co-inventor in Europe of Gaussian elimination (1690).
Life
Rolle was born in Ambert, Basse-Auvergne. Rolle, the son of a shopkeeper, received only an elementary education. He married early and as a young man struggled to support his family on the meager wages of a transcriber for notaries and attorneys. In spite of his financial problems and minimal education, Rolle studied algebra and Diophantine analysis (a branch of number theory) on his own. He moved from Ambert to Paris in 1675.
Rolle's fortune changed dramatically in 1682 when he published an elegant solution of a difficult, unsolved problem in Diophantine analysis. The public recognition of his achievement led to a patronage under minister Louvois, a job as an elementary mathematics teacher, and eventually to a short-termed administrative post in the Ministry of War. In 1685 he joined the Académie des Sciences in a very low-level position for which he received no regular salary until 1699. Rolle was then promoted to a salaried position in the academy, that of pensionnaire géometre. This was a distinguished post because, of the 70 members of the academy, only 20 were paid. He had then already been given a pension by Jean-Baptiste Colbert after he solved one of Jacques Ozanam's problems. He remained there until he died of apoplexy in 1719.
While Rolle's forte was always Diophantine analysis, his most important work was a book on the algebra of equations, called Traité d'algèbre, published in 1690. In that book Rolle firmly established the notation for the nth root of a real number, and proved a polynomial version of the theorem that today bears his name. (Rolle's theorem was named by Giusto Bellavitis in 1846.)
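For reference, the modern statement of the theorem that bears his name can be written as follows (a standard present-day formulation, not Rolle's original polynomial phrasing):

```latex
% Modern statement of Rolle's theorem (standard formulation).
\[
  f \in C[a,b], \quad f \text{ differentiable on } (a,b), \quad f(a) = f(b)
  \;\Longrightarrow\;
  \exists\, c \in (a,b) : f'(c) = 0 .
\]
```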
Rolle was one of the most vocal early antagonists of calculus – ironically so, because Rolle's theorem is essential for basic proofs in calculus. He strove intently to demonstrate that it gave erroneous results and was based on unsound reasoning. He quarreled so vehemently on the subject that the Académie des Sciences was forced to intervene on several occasions.
Among his several achievements, Rolle helped advance the currently accepted size order for negative numbers. Descartes, for example, viewed –2 as smaller than –5. Rolle preceded most of his contemporaries by adopting the current convention in 1691.
Rolle died in Paris. No contemporary portrait of him is known.
Work
Rolle was an early critic of infinitesimal calculus, arguing that it was inaccurate, based upon unsound reasoning, and was a collection of ingenious fallacies, but later changed his opinion.
In 1690, Rolle published Traité d'Algebre. It contains the first published description in Europe of the Gaussian elimination algorithm, which Rolle called the method of substitution. Some examples of the method had previously appeared in algebra books, and Isaac Newton had previously described the method in his lecture notes, but Newton's lesson was not published until 1707. Rolle's statement of the method seems not to have been noticed, insofar as the lesson on Gaussian elimination that was taught in 18th- and 19th-century algebra textbooks owes more to Newton than to Rolle.
Rolle is best known for Rolle's theorem in differential calculus. Rolle had used the result in 1690, and he proved it (by the standards of the time) in 1691. Given his animosity to infinitesimals it is fitting that the result was couched in terms of algebra rather than analysis. Only in the 18th century was the theorem interpreted as a fundamental result in differential calculus. Indeed, it is needed to prove both the mean value theorem and the existence of Taylor series. As the importance of the theorem grew, so did the interest in identifying the origin, and it was finally named Rolle's theorem in the 19th century. Barrow-Green remarks that the theorem might well have been named for someone else had not a few copies of Rolle's 1691 publication survived.
Critique of infinitesimal calculus
In a criticism of infinitesimal calculus that predated George Berkeley's, Rolle presented a series of papers at the French academy, alleging that the use of the methods of infinitesimal calculus leads to errors. Specifically, he presented an explicit algebraic curve, and alleged that some of its local minima are missed when one applies the methods of infinitesimal calculus. Pierre Varignon responded by pointing out that Rolle had misrepresented the curve, and that the alleged local minima are in fact singular points with a vertical tangent.
References
Bibliography
Rolle, Michel (1690). Traité d'Algebre. E. Michallet, Paris.
Rolle, Michel (1691). Démonstration d'une Méthode pour resoudre les Egalitez de tous les degrez.
External links
Michel Rolle Biography
1652 births
1719 deaths
People from Ambert
17th-century French mathematicians
18th-century French mathematicians
History of calculus
Members of the French Academy of Sciences | Michel Rolle | Mathematics | 1,099 |
10,898,092 | https://en.wikipedia.org/wiki/NeutrAvidin | Neutralite Avidin protein is a deglycosylated version of chicken avidin, with a mass of approximately 60,000 daltons. As a result of carbohydrate removal, lectin binding is reduced to undetectable levels, yet biotin binding affinity is retained because the carbohydrate is not necessary for this activity. Avidin has a high pI but NeutrAvidin has a near-neutral pI (pH 6.3), minimizing non-specific interactions with the negatively-charged cell surface or with DNA/RNA. Neutravidin still has lysine residues that remain available for derivatization or conjugation.
Like avidin itself, NeutrAvidin is a tetramer with a strong affinity for biotin (Kd = 10^-15 M). In biochemical applications, streptavidin, which also binds very tightly to biotin, may be used interchangeably with NeutrAvidin.
Avidin immobilized onto solid supports is also used as purification media to capture biotin-labelled protein or nucleic acid molecules. For example, cell surface proteins can be specifically labelled with membrane-impermeable biotin reagent, then specifically captured using a NeutrAvidin support.
References
Bayer, Ed: "The avidin-biotin system", Dept. of Biological Chemistry, Weizmann Institute of Science, Israel
Proteins | NeutrAvidin | Chemistry | 301 |
24,389,211 | https://en.wikipedia.org/wiki/HD%2086264 | HD 86264 is a single star with an exoplanetary companion in the equatorial constellation of Hydra. It is too faint to be readily visible to the naked eye with an apparent visual magnitude of 7.41. The distance to this star, as determined by parallax measurements, is 219 light-years, and it is drifting further away with a radial velocity of +7.4 km/s. A 2015 survey ruled out the existence of any stellar companions at projected distances above 30 astronomical units.
This is an F-type main-sequence star with a stellar classification of F7V. It is about two billion years old with a modest level of chromospheric activity and is spinning with a projected rotational velocity of 13 km/s. The star is larger and more massive compared to the Sun, and it has a higher metallicity – the abundance of elements with a higher atomic number than helium. It is radiating four times the luminosity of the Sun from its photosphere at an effective temperature of 6,616 K.
Planetary system
In August 2009, it was announced that an exoplanet was found in an eccentric orbit around this host star. The extended habitable zone for this star ranges from out to . The planet orbits between and , crossing nearly all of the habitable zone. An estimate of the planet's inclination and true mass via astrometry, though with high error, was published in 2022.
See also
List of extrasolar planets
References
F-type main-sequence stars
Planetary systems with one confirmed planet
Hydra (constellation)
Durchmusterung objects
086264
048780 | HD 86264 | Astronomy | 334 |
54,064,049 | https://en.wikipedia.org/wiki/Phanteks | Phanteks is a Dutch company which mainly produces PC cases, fans, and other case accessories. The company has a base in the United States.
History
Phanteks's first product was the PH-TC14PE, a CPU cooler. Later, Phanteks started producing cases, beginning with the Enthoo Series.
Products
The company produces several different versions of the Enthoo series and a few of the Eclipse series. It also supplies CPU coolers, fans and accessories, liquid cooling blocks, and fittings. The Evolv X case was shown at Computex 2018 and chosen as one of the best products in its category by TechSpot. In partnership with Asetek, the company produced all-in-one liquid coolers.
See also
Personal computer
Computer hardware
Computer cases
References
Computer companies of the Netherlands
Computer enclosure companies
Computer hardware companies | Phanteks | Technology | 175 |
72,235,151 | https://en.wikipedia.org/wiki/Phylogenetic%20reconciliation | In phylogenetics, reconciliation is an approach to connect the history of two or more coevolving biological entities. The general idea of reconciliation is that a phylogenetic tree representing the evolution of an entity (e.g. homologous genes or symbionts) can be drawn within another phylogenetic tree representing an encompassing entity (respectively, species, hosts) to reveal their interdependence and the evolutionary events that have marked their shared history. The development of reconciliation approaches started in the 1980s, mainly to depict the coevolution of a gene and a genome, and of a host and a symbiont, which can be mutualist, commensalist or parasitic. It has also been used for example to detect horizontal gene transfer, or understand the dynamics of genome evolution.
Phylogenetic reconciliation can account for a diversity of evolutionary trajectories of what makes life's history, intertwined with each other at all scales that can be considered, from molecules to populations or cultures. A recent avatar of the importance of interactions between levels of organization is the holobiont concept, where a macro-organism is seen as a complex partnership of diverse species. Modeling the evolution of such complex entities is one of the challenging and exciting direction of current research on reconciliation.
Phylogenetic trees as nested structures
Phylogenetic trees are intertwined at all levels of organization, integrating conflicts and dependencies within and between levels. Macro-organism populations migrate between continents, their microbe symbionts switch between populations, the genes of their symbionts transfer between microbe species, and domains are exchanged between genes.
This list of organization levels is not representative or exhaustive, but gives a view of levels where reconciliation methods have been used.
As a generic method, reconciliation could take into account numerous other levels. For instance, it could consider the syntenic organization of genes, the interacting history of transposable elements and species, the evolution of a protein complex across species. The scale of evolutionary events considered can go from population events such as geographical diversification to nucleotids levels one inside genes, including for instance chromosome levels events inside genomes such as whole genome duplication.
Phylogenies have been used for representing the diversification of life at many levels of organization: macro-organisms, their cells throughout development, micro-organisms through marker genes, chromosomes, proteins, protein domains, and can also be helpful to understand the evolution of human culture elements such as languages or fairy tales.
At each of these levels, phylogenetic trees describe different stories made of specific diversification events, which may or may not be shared among levels. Yet because they are structurally nested (similar to matryoshka dolls) or functionally dependent, the evolution at a particular level is bound to those at other levels.
Phylogenetic reconciliation is the identification of the links between levels through the comparison of at least two associated trees. Originally developed for two trees, reconciliations for more than two levels have been recently constructed (see section Explicit modeling of three or more levels). As such, reconciliation provides evolutionary scenarios that reveal conflict and cooperation among evolving entities. These links may be unintuitive, for instance, genes present in the same genome may show uncorrelated evolutionary histories while some genes present in the genome of a symbiont may show a strong coevolution signal with the host phylogeny. Hence, reconciliation can be a useful tool to understand the constraints and evolutionary strategies underlying the assemblage that forms a holobiont.
Because all levels essentially deal with the same object, a phylogenetic tree, the same models of reconciliation—in particular those based on duplication-transfer-loss events, which are central to this article—can be transposed, with slight modifications, to any pair of connected levels: an "inner", "lower", or "associate" entity (e.g. gene, symbiont species, population) evolves inside an "upper", or "host" one (respectively species, host, or geographical area).
The upper and lower entities are partially bound to the same history, leading to similarities in their phylogenetic trees, but the associations can change over time, become more or less strict or switch to other partners.
History
The principle of phylogenetic reconciliation was introduced in 1979 to account for differences between genes and species-level phylogenies. In a parsimonious setting, two evolutionary events, gene duplication and gene loss were invoked to explain the discrepancies between a gene tree and a species tree. It also described a score on gene trees knowing the species tree and an aligned sequence by using the number of gene duplication, loss, and nucleotide replacement for the evolution of the aligned sequence, an approach still central today with new models of reconciliation and phylogeny inference.
The term reconciliation has been used by Wayne Maddison in 1997, as a reverse concept of "phylogenetic discord" resulting from gene level evolutionary events.
Reconciliation was then developed jointly for the coevolution of host and symbiont and the geographic diversification of species. In both settings, it was important to model a horizontal event linking parallel branches of the upper tree: host switch for host and symbiont, and species dispersal from one area to another in biogeography. Unlike for genes and genomes, the coevolution of host and symbiont and the explanation of species diversification by geography are not always the null hypothesis. A visual depiction of the two phylogenies in a tanglegram can help assess such coevolution, although it has no obvious statistical interpretation.
Character methods, such as Brooks Parsimony Analysis, were proposed to test coevolution and reconstruct scenarios of coevolution. In these methods, one of the trees is forgotten except for its leaves, which are then used as a character evolving on the second tree.
First models for reconciliation, taking explicitly into account the two topologies and using a mechanistic event-based approach, were proposed for host and symbiont and biogeography. Debates followed, as the methods were not yet completely sound but integrated useful information in a new framework.
Costs for each event and a dynamic programming technique considering all pairs of host and symbiont nodes were then introduced into a host and symbiont approach, both of which still underlie most of the current reconciliation methods for host and symbiont as well as for species and genes.
Reconciliation returned to the framework it was introduced in, gene and species. After character models were considered for horizontal gene transfer, a new reconciliation model, following and improving the dynamic programming approach presented for host and symbiont, effectively introduced horizontal gene transfer to gene and species reconciliation on top of the duplication and loss model.
The progressive development of phylogenetic reconciliation was thus possible through exchanges between multiple research communities studying phylogenies at the host and symbiont, gene and species, or biogeography levels. This story and its modern developments have been reviewed several times, generally focusing on specific pairs of levels, with a few exceptions. New developments start to bring the different frameworks together with new integrative models.
Pocket Gophers and their chewing lices: a classical example
Pocket gophers (Geomyidae) and their chewing lice (Trichodectidae) form a well studied system of host and symbiont coevolution. The phylogeny of host and symbiont and the matching of the leaves of their trees are depicted on the left. For the host, O. stands for Orthogeomys, G. for Geomys and T. for Thomomys; for the symbiont, G. stands for Geomydoecus and T. for Thomomydoecus.
Reconciling the two trees means giving a scenario with evolutionary events and matching on the ancestral nodes depicting the coevolution of the two trees. The events considered in this system are the events of the DTL model: duplication, transfer (or host switch), loss, and cospeciation, the null event of coevolution.
Two scenarios were proposed in two studies, using two different frameworks which could be deemed as pre-dynamic programming DTL reconciliation.
In modern DTL reconciliation frameworks, costs are assigned to events. The two scenarios were then shown to correspond to maximum parsimonious reconciliation with different cost assignments.
Scenario A uses 6 cospeciations, 2 duplications, 3 losses and 2 host switches to reconcile the two trees, while scenario B uses 5 cospeciations, 3 duplications, 3 losses and 2 host switches. The cost of a scenario is the sum of the costs of its events. For instance, with a cost of 0 for cospeciation, 2 for duplication, 1 for loss and 3 for host switch, scenario A has a cost of 6x0 + 2x2 + 3x1 + 2x3 = 13 and scenario B a cost of 5x0 + 3x2 + 3x1 + 2x3 = 15, and so according to a parsimonious principle, scenario A would be deemed more likely (scenario A stays more likely as long as the cost of cospeciation is less than the cost of duplication).
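This comparison can be written out in a few lines of code; the event counts and costs below are exactly those of the two scenarios above, and the variable names are ours.

```python
# Cost of a reconciliation scenario = sum over events of (count * cost),
# using the event counts of scenarios A and B and the example costs above.
costs = {"cospeciation": 0, "duplication": 2, "loss": 1, "host_switch": 3}
scenarios = {
    "A": {"cospeciation": 6, "duplication": 2, "loss": 3, "host_switch": 2},
    "B": {"cospeciation": 5, "duplication": 3, "loss": 3, "host_switch": 2},
}

for name, events in scenarios.items():
    total = sum(count * costs[event] for event, count in events.items())
    print(f"scenario {name}: cost {total}")   # A: 13, B: 15 -> A is more parsimonious
```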
Development of Phylogenetic Reconciliation Models
Models and methods used today in phylogeny are the result of several decades of research, made progressively complex, driven by the nature of the data and the quest for biological realism on one side, and the limits and progresses of mathematical and algorithmic methods on the other.
Pre-reconciliation models: characters on trees
Character methods can be used when there is no tree available for one of the levels, but only values for a character at the leaves of a phylogenetic tree for the other level. A model defines the events of character value change, their rate, probabilities or costs. For instance, the character can be the presence of a host on a symbiont tree, the geographical region on a species tree, the number of genes on a genome tree, or nucleotides in a sequence.
Such methods thus aim at reconstructing ancestral characters at internal nodes of the tree.
Although these methods have produced results on genome evolution, the utility of a second tree appears with very simple examples. If a symbiont has recently acquired the ability to spread in a group of species and thus it is present in most of them, character methods will wrongly indicate that the common ancestor of the hosts already had the symbiont. In contrast, a comparison of the symbiont and host trees would show discrepancies revealing horizontal transfers.
The origins of reconciliation: the Duplication Loss model and the Lowest Common Ancestor mapping
Duplication and loss were invoked first to explain the presence of multiple copies of a gene in a genome or its absence in certain species. It is possible with those two events to reconcile any two trees, i.e. to map the nodes and branches of the lower and upper trees, or equivalently to give a list of evolutionary events explaining the discrepancies between the upper tree and the lower tree.
A most parsimonious Duplication and Loss (DL) reconciliation is computed through the Lowest Common Ancestor (LCA) mapping: proceeding from the leaves to the root, each internal node is mapped to the lowest common ancestor of the mapping of its two children.
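A compact sketch of the LCA mapping is given below; the tree encoding (children dictionaries and a leaf-to-species map) and all names are illustrative assumptions of our own, not code from a published tool.

```python
# Minimal LCA (lowest common ancestor) mapping for duplication-loss reconciliation.
# Trees are dicts: internal node -> (left_child, right_child); leaves are absent keys.
species_children = {"S1": ("S2", "C"), "S2": ("A", "B")}        # species tree: ((A,B)S2,C)S1
gene_children = {"g1": ("g2", "g3"), "g2": ("a1", "b1"), "g3": ("a2", "c1")}
leaf_species = {"a1": "A", "b1": "B", "a2": "A", "c1": "C"}     # gene leaf -> species leaf

# Each species node's ancestors (including itself), walking up toward the root.
parent = {child: node for node, kids in species_children.items() for child in kids}
def ancestors(s):
    out = []
    while s is not None:
        out.append(s)
        s = parent.get(s)
    return out

def species_lca(s, t):
    anc_s = ancestors(s)
    return next(a for a in ancestors(t) if a in anc_s)

mapping, events = {}, {}
def lca_mapping(g):
    """Map gene-tree node g to a species-tree node; flag duplications."""
    if g in leaf_species:
        return leaf_species[g]
    left, right = gene_children[g]
    m_left, m_right = lca_mapping(left), lca_mapping(right)
    m = species_lca(m_left, m_right)
    # if a child maps to the same species node, the event is a duplication
    events[g] = "duplication" if m in (m_left, m_right) else "speciation"
    mapping[g] = m
    return m

lca_mapping("g1")
print(mapping)   # {'g2': 'S2', 'g3': 'S1', 'g1': 'S1'}
print(events)    # {'g2': 'speciation', 'g3': 'speciation', 'g1': 'duplication'}
```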
A Markovian model for reconciliation
The LCA mapping in the DL model follows a parsimony principle: no event should be invoked if it is not necessary. However, the use of this principle is debated, and it is commonly accepted that it is more accurate in molecular evolution to fit a probabilistic model, such as a random walk, which does not necessarily produce parsimonious scenarios.
A birth and death Markovian model is such a model that can generate a lower tree "inside" a fixed upper one from root to leaves.
Statistical inference provides a framework to find most likely scenarios, and in that case, a maximum likelihood reconciliation of two trees is also a parsimonious one. In addition, it is possible with such a framework to sample scenarios, or integrate over several possible scenarios in order to test different hypotheses, for example to explore the space of lower trees. Moreover, probabilistic models can be integrated into larger models, as probabilities simply multiply when assuming independence, for instance combining sequence evolution and DL reconciliation.
Introducing horizontal transfer
Host switch, i.e. inheritance of a symbiont from a kin lineage, is a crucial event in the evolution of parasitic or symbiotic relationships between species. This horizontal transfer also models migration events in biogeography and became of interest for the reconciliation of gene and species trees when it appeared that many discrepancies could not simply be explained by duplication and loss and that horizontal gene transfer (HGT) was a major evolutionary process in micro-organisms evolution.
This switching, or horizontal transfer, pattern can also model admixture or introgression.
It is considered in character methods, without information from the symbiont phylogeny.
On top of the DL model, horizontal transfer enables new and very different reconciliation scenarios.
The simple yet powerful dynamic programming approach
The LCA reconciliation method yields a unique solution, which has been shown to be optimal for the problem of minimizing the weighted number of events, whatever the relative weights of duplication and loss. In contrast, with Duplication, horizontal Transfer and Loss (DTL), there can be several equally parsimonious reconciliations. For instance, a succession of duplications and losses can be replaced by a single transfer. One of the first ideas to define a computational problem and approach a resolution was, in a host/symbiont framework, to maximize the number of co-speciations with a heuristic algorithm.
Another solution is to give relative costs to the events and find a scenario that minimizes the sum of the costs of its events.
In the probabilistic model frameworks, the equivalent task consists of assigning rates or probabilities to events and search for maximum likelihood scenarios, or sample scenarios according to their likelihood. All these problems are solved with a dynamic programming approach. This dynamic programming method involves traversing the two trees in a postorder.
Proceeding from the leaves and then going up in the two trees, for each couple of internal nodes (one for each tree), the cost of a most parsimonious DTL reconciliation is computed.
In a parsimony framework, the cost $c(l, u)$ of reconciling a lower subtree rooted at $l$ with an upper subtree rooted at $u$ is initialized for the leaves with their matching:
$c(l, u) = 0$ if leaf $l$ is associated with leaf $u$, and $c(l, u) = \infty$ otherwise.
And then inductively, denoting $l_1, l_2$ the children of $l$, $u_1, u_2$ the children of $u$, and $\sigma, \delta, \tau, \lambda$ the costs associated with speciation, duplication, horizontal transfer and loss, respectively (with $\sigma$ often fixed to 0),
$c(l, u) = \min \begin{cases} \sigma + \min\big(c(l_1, u_1) + c(l_2, u_2),\; c(l_1, u_2) + c(l_2, u_1)\big) & \text{(co-speciation)} \\ \delta + c(l_1, u) + c(l_2, u) & \text{(duplication)} \\ \tau + \min\big(c(l_1, u) + \min_{u'} c(l_2, u'),\; c(l_2, u) + \min_{u'} c(l_1, u')\big) & \text{(transfer)} \\ \lambda + \min\big(c(l, u_1),\; c(l, u_2)\big) & \text{(speciation and loss)} \end{cases}$
The costs $\min_{u'} c(l_1, u')$ and $\min_{u'} c(l_2, u')$, because they do not depend on $u$, can be computed once for all $u$, hence achieving quadratic complexity to compute $c(l, u)$ for all couples of $l$ and $u$. The cost of losses only appears in association with other events because in parsimony, a loss can always be associated with the preceding event in the tree.
The induction behind the use of dynamic programming is based on always progressing in the trees toward the roots. However some combinations of events that can happen consecutively can make this induction ill-defined.
One such combination consists of a transfer followed immediately by a loss in the donor lineage (TL). Restricting the use of this TL event repairs the induction. With unlimited use, it is necessary to use or add other known methods to solve systems of equations, such as fixed point methods or numerical solving of differential equations. In 2016, only two out of seven of the most commonly used parsimony reconciliation programs handled TL events, although their consideration can drastically change the result of a reconciliation.
Unlike LCA mapping, DTL reconciliation typically yields several scenarios of minimal cost, in some cases an exponential number. The strength of the dynamic programming approach is that it enables to compute a minimum cost of coevolution of the input upper and lower tree in quadratic time, and to get a most parsimonious scenario through backtracking.
It can also be transposed to a probabilistic framework to compute the likelihood of coevolution and get a most likely reconciliation, replacing costs with rates, minimums by sums and sums by products.
Moreover, through multiple backtracks, the approach is suitable for enumerating all parsimonious solutions or to sample scenarios, optimal and sub-optimal, according to their likelihood.
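To make the dynamic program concrete, below is a minimal, self-contained Python sketch of the simplified recurrence described above (no transfer-loss events and no time-consistency check on transfers); the tree encodings, event costs and all names are illustrative assumptions, not code from any of the tools cited here.

```python
# Minimal parsimony DTL dynamic program (illustrative sketch only). Trees are dicts
# mapping an internal node to its pair of children; leaves are absent keys.
import math

upper_children = {"H1": ("H2", "C"), "H2": ("A", "B")}        # host / species tree
lower_children = {"g1": ("g2", "c1"), "g2": ("a1", "b1")}     # symbiont / gene tree
leaf_match = {"a1": "A", "b1": "B", "c1": "C"}                # lower leaf -> upper leaf

COSPEC, DUP, TRANSFER, LOSS = 0, 2, 3, 1                      # example event costs

def postorder(children, root):
    order = []
    def walk(node):
        for child in children.get(node, ()):
            walk(child)
        order.append(node)
    walk(root)
    return order

upper_nodes = postorder(upper_children, "H1")
lower_nodes = postorder(lower_children, "g1")

c = {}  # c[(l, u)] = minimal cost of reconciling the subtree at l with the subtree at u
for l in lower_nodes:
    for u in upper_nodes:
        options = []
        if leaf_match.get(l) == u:                             # matched pair of leaves
            options.append(0)
        if l in lower_children:
            l1, l2 = lower_children[l]
            # best receivers; in a real implementation these are precomputed once per l
            best1 = min(c[(l1, v)] for v in upper_nodes)
            best2 = min(c[(l2, v)] for v in upper_nodes)
            options += [DUP + c[(l1, u)] + c[(l2, u)],         # duplication at u
                        TRANSFER + c[(l1, u)] + best2,         # l2 transferred away
                        TRANSFER + c[(l2, u)] + best1]         # l1 transferred away
        if u in upper_children:
            u1, u2 = upper_children[u]
            if l in lower_children:
                options += [COSPEC + c[(l1, u1)] + c[(l2, u2)],   # co-speciation
                            COSPEC + c[(l1, u2)] + c[(l2, u1)]]
            options += [LOSS + c[(l, u1)],                        # speciation and loss
                        LOSS + c[(l, u2)]]
        c[(l, u)] = min(options) if options else math.inf

print(min(c[("g1", u)] for u in upper_nodes))   # 0: two co-speciations reconcile these trees
```

Backtracking through the stored minima would recover an actual event scenario, as described above.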
Estimation of event costs and rates
Dynamic programming per se is only a partial solution and does not solve several problems raised by reconciliation.
Defining a most parsimonious DTL reconciliation requires assigning costs to the different kinds of events (D, T and L). Different cost assignments can yield different reconciliation scenarios, so there is a need for a way to choose those costs.
There is a diversity of approaches to do so. CoRe-PA explores in a recursive manner the space of cost vectors, searching for a good matching with the event frequencies in reconciliations.
ALE uses the same idea in a probabilistic framework to estimate the event rates by maximum likelihood.
Alternatively, COALA is a preprocessing step using approximate Bayesian computation with sequential Monte Carlo: simulation and statistical rejection or acceptance of parameters with successive refinement.
In the parsimony framework, it is also possible to divide the space of possible event costs into areas of costs which lead to the same Pareto optimal solution.
Pareto optimal reconciliations are such that no other reconciliation has a strictly inferior cost for one type of event (duplication, transfer or loss), and less or equal for the others.
It is possible as well to rely on external considerations in order to choose the event costs. For example, the software Angst chooses the costs that minimize the variation of genome size, in number of genes, between parent and children species.
The problem of temporal feasibility
The dynamic programming method works for dated (internal nodes are totally ordered) or undated upper trees. However, with undated trees, there is a temporal feasibility issue. Indeed, a horizontal transfer implies that the donor and the receiver are contemporaneous, therefore implying a time constraint on the tree. In consequence, two horizontal transfers may be incompatible, because they imply contradicting time constraints.
The dynamic programming approach cannot easily check for such incompatibilities. If the upper tree is undated, finding a temporally feasible most parsimonious reconciliation is NP-hard. It is, however, fixed-parameter tractable, which means that there are algorithms whose running time is bounded by an exponential function of the number of transfers in the output scenarios times a polynomial in the input size.
Some solutions imply integer linear programming or branch and bound exploration.
If the upper tree is dated, then there is no incompatibility issue because horizontal transfers can be constrained to never go backward in time. Finding a time-consistent optimal reconciliation can then be solved in polynomial time, or with a further speed-up in RASCAL, which tests only a fraction of node mappings.
Most of the software taking undated trees does not look for temporal feasibility, with the exceptions of Jane, which explores the space of total orders via a genetic algorithm, Notung, which checks feasibility in a post-process, and Eucalypt, which searches inside the set of optimal solutions for time-consistent ones.
Other methods work as supplementary layers to reconciliations, correcting reconciliations or returning a subset of feasible transfers, which can be used to date a species tree.
Expanding phylogenies: Transfers from the dead
In phylogenetics in general, it is important to keep in mind that the extant and ancestral species that are represented in any phylogeny are only a sparse sample of the species that currently exist or ever have existed. This is why one can safely assume that all transfers that can be detected using phylogenetic methods have originated in lineages that are, strictly speaking, absent from a studied phylogeny. Accounting for extinct or unsampled biodiversity in phylogenetic studies can give a better understanding of these processes. Originally, DTL reconciliation methods did not recognize this phenomenon and only allowed for transfer between contemporaneous branches of the tree, hence ignoring most plausible solutions. However, methods working on undated upper trees can be seen as implicitly handling the unknown diversity by allowing transfers "to the future" from the point of view of one phylogeny, that is, the donor is more ancient than the recipient. A transfer to the future can be translated into a speciation to unknown species, followed by a transfer from unknown species.
ALE in its dated version explicitly takes the unknown diversity into account by adding a Moran process of speciation/extinctions of species to the dated birth/death model of gene evolution.
Transfers from the dead are also handled in a parsimonious setting by Tera and ecceTERA, showing that considering these transfers improves the capacity to reconstruct gene trees using reconciliation, and, with a more explicit model in a probabilistic setting, by the undated version of ALE.
The specificity of biogeography: a tree like structure for the "evolution" of areas
In biogeography, some applications of reconciliation approaches consider as an upper tree an area cladogram with defined ancestral nodes. For instance, the root can be Pangaea and the nodes contemporary continents. Sometimes, internal nodes are not ancestral areas but the unions of the areas of their children, to account for the possibility of species evolving along the lower tree to inhabit one or several areas. In this case, the evolutionary events are migration, where one species colonizes a new area, allopatric speciation, or vicariance, equivalent to co-speciation in host/symbiont comparisons.
Even though this approach does not always give a tree (if the unions AB and BC of leaves A, B, C exist, a child can have several parents), and this structure is not associated with time (it is possible for a species to go from A to AB by migration, as well as from AB to A by extinction), reconciliation methods, with events and dynamic programming, can infer evolutionary scenarios between the upper geographical structure and the lower species tree. Diva and Lagrange are two reconciliation models constructing such a tree-like structure and then applying reconciliation, the first with a parsimony principle, the second in a probabilistic framework. Additionally, BioGeoBEARS is a biogeography inference package that reimplemented the DIVA and Lagrange models and allows for new options, like distance-dependent transfers and discussion of statistical model selection.
Graphical output
With two trees and multiple evolutionary events linking them to be represented, visualizing reconciled trees is a challenging but necessary task for making reconciliation studies more accessible. Some reconciliation software includes annotations of the evolutionary events on the lower trees, while other programs and dedicated packages, in DL or DTL, draw the lower tree embedded in the upper one. One difficulty in this regard is the variety of output formats of the different reconciliation software. A common standard, recphyloxml, has been established and endorsed by part of the community, and a viewer is available that is able to display reconciliations in multi-level systems.
Addressing Additional Practical Considerations
Applying DTL reconciliation to biological data raises several problems related to uncertainty and confidence levels of input and output. Concerning the output, the uncertainty of the answer calls for an exploration of the whole solution space. Concerning the input, phylogenetic reconciliation has to handle uncertainties in the resolution or rooting of the upper or lower trees, or even to propose roots or resolutions according to their confidence.
Exploring the space of reconciliations
Dynamic programming makes it possible to sample reconciliations, uniformly among optimal ones or according to their likelihood.
It is also possible to enumerate them in time proportional to the number of solutions, a number which can quickly become intractable (even only for optimal ones). Finding and presenting structure among the multitude of possible reconciliations has been at the center of recent methodological developments, especially for host and symbiont aimed methods. Several works have focused on representing a set of reconciliations in a compact way, from a uniform sample of optimal ones or by constructing a graph summarizing the optimal solutions. This can be achieved by giving support values to specific events based on all optimal (or suboptimal) reconciliations, or with the use of a consensus reconciled tree. In a DL model, it is possible to define a median reconciliation, based on shared events and to compute it in polynomial time.
EMPRess can group similar reconciliations through clustering, with all pairwise distances between reconciliations computable in polynomial time (independently of the number of most parsimonious reconciliations). With the same aim, Capybara defines equivalence classes among reconciliations, efficiently computes representatives for all classes, and outputs with linear delay a given number of reconciliations (first optimal ones, then sub-optimal ones).
The space of most parsimonious reconciliations can be expanded or reduced by increasing or decreasing the allowed distance for horizontal transfers, which is easily done by dynamic programming.
Inferring phylogenetic trees with reconciliation
Reconciliation and input uncertainty
Reconciliation works with two fixed trees, a lower and an upper, both assumed correct and rooted. However, those trees are not first hand data.
The most frequently used data for phylogenetics consists in aligned nucleotidic or proteic sequences.
Extracting DNA, sequencing, assembling and annotating genomes, recognizing homology relationships among genes and producing multiple alignments for phylogenetic reconstruction are all complex processes where errors can ultimately affect the reconstructed tree.
Any topology or rooting error can be misinterpreted and cause systematic bias. For instance, in DL reconciliations, errors on the lower tree bias the reconciliation toward more duplication events closer to the root and more losses closer to the leaves.
On the other hand, reconciliation, as a macro evolutionary model, can work as a supplementary layer to the micro evolutionary model of sequence evolution, resolving polytomies (nodes with more than two children) or rooting trees, or be intertwined with it through integrative models in order to get better phylogenies.
Most of the works in this direction focus on gene/species reconciliations, nevertheless some first steps have been made in host/symbiont, such as considering unrooted symbiont trees or dealing with polytomies in Jane.
Exploring the space of lower trees with reconciliation
Reconciliation can easily take unrooted lower trees as input, which is a frequently used feature because trees inferred from molecular data are typically unrooted. It is possible to test all possible roots, and a careful triple traversal of the unrooted tree allows doing so without additional time complexity. In a duplication-loss model, the roots minimizing the cost are found close to one another, forming a "plateau", a property which does not generalize to DTL.
Reconciliation can also take as input non-binary trees, that is, trees with internal nodes that have more than two children. Such trees can be obtained, for example, by contracting branches with low statistical support. Inferring a binary tree from a non-binary tree according to reconciliation scores is solved in DL with efficient methods. In DTL, the problem is NP-hard. Heuristics and exact fixed-parameter tractable algorithms are possible solutions.
Another way to handle uncertainty in lower trees is to take as input a sample of alternative lower trees instead of a single one. For example, in the paper that gave reconciliation its name, it was proposed to consider all most likely lower trees, and choose from these trees the best one according to their DL costs, a principle also used by TreeFix-DTL.
The sample of lower trees can similarly reflect their likelihood according to the aligned sequences, as obtained from Bayesian Markov chain Monte Carlo methods as implemented for example in Phylobayes. AngST, ALE and ecceTERA use "amalgamation", an extension of the DTL dynamic programming that is able to efficiently traverse a set of alternative lower trees instead of a single tree.
A local search in the space of lower trees guided by a joint likelihood, on the one hand from multiple sequence alignments and on the other hand from reconciliation with the upper tree, is achieved in Phyldog with a DL model and in GeneRax with DTL. In a DL model with sequence evolution and relaxed molecular clock, the lower tree space can be explored with an MCMC. MowgliNNI can modify the input gene tree at poorly supported nodes to increase DTL score, while TreeSolve resolves the multifurcations added by collapsing poorly supported nodes.
Finally, integrative models, mixing sequence evolution and reconciliation, can compute a joint likelihood via dynamic programming (for both reconciliation and gene sequence evolution) and use Markov chain Monte Carlo with a molecular clock to estimate branch lengths, in a DL model, with a relaxed molecular clock, or in a DTL model. These models have been applied in gene/species frameworks, but not yet in host/symbiont or biogeography contexts.
Inferring upper trees using reconciliation
Inferring an upper tree from a set of lower trees is a long-standing question related to the supertree problem. It is particularly interesting in the case of gene/species reconciliation where many (typically thousands of) gene trees are available from complete genome sequences. Supertree methods attempt to assemble a species tree based on sets of trees which may differ in terms of contemporary species sets and topology, but usually without consideration for the biological process explaining these differences. However, some supertree approaches are statistically consistent for the reconstruction of the species tree if the gene trees are simulated under a DL model. This means that if the number of input lower trees generated from the true upper tree via the DL model grows toward infinity, given that there are no additional errors, the output upper tree converges almost surely to the true one. This has been shown in the case of a quartet distance, and with a generalised Robinson Foulds multicopy distance, with better running time but assuming gene trees do not contain bipartitions contradicting the species tree, which seems rare under a DL model.
Reconciliation can also be used for the inference of upper trees. This is a computationally hard problem: already resolving polytomies in a non-binary upper tree with a binary lower one, minimizing a DL reconciliation score, is NP-hard. In particular, reconstructing the species tree giving the best DL cost for several gene trees is NP-hard and 2-approximable. It is called the Gene Duplication problem or, more generally, Gene Tree Parsimony. The problem was seen as a way to detect paralogy in order to get better species tree reconstructions, and there are interesting results on its complexity and on the behaviour of the model with different input sizes, structures and levels of ILS. Multiple solutions exist, with ILP or heuristics, and with the possibility of a deep coalescence score.
ODTL takes as input gene trees and searches a maximum likelihood species tree according to a DTL model, with a hill-climbing search. The approach produces a species tree with internal nodes ordered in time, ensuring time compatibility for the scenarios of transfer among lower trees (see the section "The problem of temporal feasibility" above).
Addressing a more general problem, Phyldog searches for the maximum likelihood species tree, gene trees and DL parameters from multiple family alignments via multiple rounds of local search. It thus performs the exploration of both upper and lower trees at the same time. MixTreEM presents a faster solution.
Limits of the two-level DTL model
A limit to dynamic programming: non independent evolution of children lineages
The dynamic programming framework, like usual birth and death models, works under the hypothesis of independent evolution of children lineages in the lower tree. However, this hypothesis does not hold if the model is complemented with several other documented evolutionary events, such as horizontal transfer with replacement of a homologous gene in the recipient lineage, or gene conversion. Horizontal transfer with replacement is usually modeled by a rearrangement of the upper tree, called Subtree Prune and Regraft (SPR). Reconciling under SPR is NP-hard, even in dated trees, and fixed-parameter tractable regarding the output size.
Another way to model and infer replacing horizontal transfers is through maximum agreement forest, where branches are cut in the lower and upper trees in order to get two identical (or statistically indistinguishable) upper and lower forests. The problem is NP-hard, but several approximations have been proposed.
Replacing transfers can be considered on top of the DL model. In the same vein, gene conversion can be seen as a "replacing duplication". In this latter case, a polynomial algorithm which does not use dynamic programming and is an extension of the LCA method can find all optimal solutions, including gene conversions.
Integrating population levels: failure to diverge and Incomplete Lineage Sorting
In host/symbiont frameworks, a single symbiont species is sometimes associated to several host species. This means that while a speciation or diversification has been observed in the host, the populations are indistinguishable in the symbiont.
This is handled for example by additional polytomies in the symbiont tree, possibly leading to intractable inference problems, because polytomies need to be resolved.
It is also modeled by an additional evolutionary event "failure to diverge" (Jane, Amocoala).
Failure to diverge can be a way to allow "free" host switch in a population, a flow of symbionts between closely related hosts.
Following that vision, Eucalypt considers host switches that are allowed only between closely related hosts. This idea of horizontal flow between close populations can also be applied to gene/species frameworks, with a definition of species based on a gradient of gene flow between populations.
Failure to diverge is one way of introducing population dynamics in reconciliation, a framework mainly adapted to the multi-species level, where populations are supposed to be well differentiated. There are other population phenomena that limit this framework, one of them being deep coalescence of lineages, leading to Incomplete Lineage Sorting (ILS), which is not handled by the DTL model. The multi-species coalescent is a classical model of allele evolution along a species tree, with birth of alleles and sorting of alleles at speciations, which takes into account population sizes and naturally encompasses ILS.
In a reconciliation context, several attempts have been made in order to account for ILS without the complex integration of a population model. For example, ILS can be seen as a possible evolutionary pattern for the gene tree. In that case, children lineages are not independent of one another, leading to intractability results. ILS alone can be handled with LCA, but ILS + DL reconciliation is NP-hard, even without transfers.
Notung handles ILS by collapsing short branches of the species tree into polytomies and allowing ILS as a free diversification of gene trees on those polytomies. ecceTERA bounds the maximum size of connected parts of the species tree where ILS can happen, proposing a fixed-parameter tractable algorithm in that parameter.
ILS and DL can be considered on an upper network instead of a tree. This models in particular introgression, with the possibility to estimate model parameters.
More integrative reconciliation models accounting for ILS have been proposed, including both DL and multispecies coalescent, with DLCoal. It is a probabilistic model with a parsimony translation, proposing two sequential LCA-type heuristics handled via an intermediate locus tree between gene and species.
However, outside of the gene/species reconciliation framework, ILS seems never to be considered in host/symbiont or biogeography studies, for no obvious reason.
Cophylogeny with more than two levels
A striking aspect of reconciliation is the common methodology handling different levels of organization: it is used for comparing domain and protein trees, gene and species trees, hosts and symbiont trees, population and geographic trees. However, now that scientists tend to consider that multi-level models of biological functioning bring a novel and game changing view of organisms and their environment, the question is how to use reconciliation to bring phylogenetics to this holobiont era.
Coevolution of entities at different scales of evolution is at the basis of the holobiont idea: macro-organisms, micro-organisms and their genes all have a different history bound to a common functioning in a single ecosystem. Biological systems like the entanglement of host, symbionts and their genes imply functional and evolutionary dependencies between more than two levels.
Examples of multi level systems with complex evolutionary inter-dependencies
Genes coevolving beyond genome boundaries
The holobiont concept stresses the possibility for genes from different genomes to cooperate and coevolve. For instance, certain genes in a symbiont genome may provide a function to its host, like the production of a vital compound absent from available feeding sources. An iconic example is the case of blood-feeding or sap-feeding insects, which often depend on one or several bacterial symbionts to thrive on a resource that is abundant in sugar but lacks essential amino acids or vitamins. Another example is the association of Fabaceae with nitrogen-fixing bacteria. The compound beneficial to the host is typically produced by a set of genes encoded in the symbiont genome, which, throughout evolution, may be transferred to other symbionts, and/or in and out of the host genome. Reconciliation methods have the potential to reveal evolutionary links between portions of genomes from different species. A search for coevolving genes beyond the boundaries of the genomes in which they are encoded would highlight the basis for the association of organisms in the holobiont.
Horizontal gene transfer routes depend on multiple levels
In intracellular mutualistic symbiont insect systems, multiple occurrences of horizontal gene transfers have been identified, whether from host to symbiont, symbiont to host or symbiont to symbiont.
Transfers of endosymbiont genes involved in nutrition pathways beneficial to the insect host have been shown to occur preferentially if the donor and recipient lineages share the same host. This is also the case in insects with bacterial symbionts providing defensive proteins, or in obligate leaf nodule bacterial symbionts associated with plants. In the human host, gene transfer has been shown to occur preferentially among symbionts hosted in the same organs.
A review of horizontal gene transfers in host/symbiont systems stresses the importance of supporting HGTs with multiple evidence. Notably it is argued that transfers should be considered better supported when involving symbionts sharing a habitat, a geographical area, or the same host. One should, however, keep in mind that most of the diversity of hosts and symbionts is unknown and that transfers may have occurred in unsampled closely related species, hosts or symbionts.
The idea that gene transfer in symbionts is constrained by the host can also be used to investigate the host's phylogenetic history. For instance, based on phylogeographical studies, it is now accepted that the bacterium Helicobacter pylori has been associated with human populations since the origins of the human species. An analysis of the genomes of Helicobacter pylori in Europe suggests that they are issued from a recombination between African and Asian Helicobacter pylori. This strongly implies early contacts between the corresponding human populations.
Similarly, an analysis of HGTs in coronaviruses from different mammalian species using reconciliation methods has revealed frequent contact between viral lineages, which can be interpreted as frequent host switches.
Cultural evolution
The evolution of elements of human culture, for instance languages and folktales, in association with human population genetics, has been studied using concepts from phylogenetics. Although reconciliation has never been used in this framework, some of these studies encompass multiple levels of organization, each represented by a tree or the evolution of a character, with a focus on the coevolution of these levels.
Language trees can be compared with population trees in order to reveal vertically transmitted folktales, via a character model on this language tree. Variants in each folktale's family, languages, genetic diversity, populations and geography can be compared two by two, to link folktale diversification with languages on one side and with geography on the other side. As in genetics with symbionts sharing host promoting HGTs, linguistic barriers can foreclose the transmission of folktales or language elements.
Investigating three-level systems using two-level reconciliation
Multi level reconciliation is not as developed as two-level reconciliation. One way to approach the evolutionary dependencies between more than two levels of organization is to try to use available standard two-level methods to give a first insight into a biological system's complexity.
Multi-gene events: implicit consideration of an intermediate level
At the gene/species tree level, one typically deals with many different gene trees. In this case, the hypothesis that different gene families evolve independently is made implicitly. However, this does not need to be the case. For instance, duplication, transfer and loss can occur for segments of a genome spanning an arbitrary number of contiguous genes. It is possible to consider such multi-gene events using an intermediate guide for lower trees inside the upper one. For instance, one can compute the joint likelihood of multiple gene tree reconciliations with a dated species tree under duplication, loss and whole-genome duplication, or work in a parsimony setting; one definition of the problem is NP-hard. Similarly, the DL framework can be enriched with duplication and loss of chromosome segments instead of a single gene. However, DL reconciliation becomes intractable with that new possibility.
The link between two consecutive genes can also be modeled as an evolving character, subject to gain, loss, origination, breakage, duplication and transfer. The evolution of this link appears as an additional level to species and gene trees, partly constrained by the gene/species tree reconciliation, partly evolving on its own, according to genome organization. It thus models the synteny, or proximity between genes. At another scale, it can as well model the evolution of two domains belonging to a protein.
The detection of "highways of transfers", the preferential acquisition of groups of genes from a specific donor, is another example of non-independence of gene histories. Similarly, multi-gene transfers can be detected. It has also led to methodological developments such as reconciliations using phylogenetic networks, seen as a tree augmented with transfer edges, which can be used to constrain transfers in a DTL model. Networks can also be used to model introgression and incomplete lineage sorting.
Detecting coevolution in multiple pairs of levels
A central question in understanding the evolution of a holobiont is to determine which levels coevolve with one another, for instance among host species, host genes, symbionts and symbiont genes. The multiple inter-dependencies between all levels of evolution can be approached through multiple pairwise comparisons of two evolving entities.
Reconciliation of host and symbiont on one side and geography and symbiont on the other can also help to identify patterns of diversification of host and symbiont that reflect either coevolution or patterns that can be explained by a common geographical diversification.
Similarly, a study used reconciliation methods to differentiate the effect of diet evolution and phylogenetic inertia on the composition of mammalian gut microbiomes. By reconstructing ancestral diets and microbiome composition onto a mammalian phylogeny, the study revealed that both effects contribute but at different time scales.
Explicit modeling of three or more levels
In a model of a multi-level system as host/symbiont/genes, horizontal gene transfers should be more likely between two symbionts of a same host. This is invisible to a two-level gene tree/species tree or host/symbiont reconciliation: in some cases, looking at any combination of two levels can lead to missing an evolutionary scenario which can only be the most likely if the information from the three trees is considered together.
To face the limitations of applying standard two-level reconciliations to systems involving inter-dependencies at multiple levels, a methodological effort has been undertaken in the last decade to construct and use multi-level models. This requires the identification of at least one "intermediate" level between the upper and the lower one.
Pre-reconciliation: characters onto reconciled trees
A first step towards integrated three-level models is to consider phylogenetic trees at two levels and another level represented only with characters at the leaves of one of the trees.
For instance, a reconciliation of host and symbiont phylogenies can be informed by geographic data. Ancestral geographic locations of host and symbiont species obtained through a character inference method can then be used to constrain the host/symbiont reconciliation: ancestral hosts and symbionts can only be associated if they belong to the same geographical location.
At another scale, the evolution at the sub-gene level can be approached with a character method. Here, parts of genes (e.g. the sequences coding for protein domains) are reconciled according to a DL model with a species tree, and the genes they belong to are treated as characters of these parts. Ancestral genes are then reconstructed a posteriori via merges and splits of gene parts.
Two-level reconciliations informed by a third level
As pointed out by several studies mentioned above, an upper level can inform a reconciliation between an intermediate level and a lower one, notably for horizontal transfers.
Three-level models can take these assumptions into account to guide reconciliations between an intermediate tree and lower levels with the knowledge of an upper tree. The model can, for example, give higher likelihoods to reconciliation scenarios where horizontal gene transfers happen between entities sharing the same habitat. This has been achieved for the first time with DTL gene/species reconciliations nested with a DTL reconciliation of gene domains and genes, where different costs for inter- and intra-genome transfers depend on whether or not transfers happen between genes of the same genome.
Note that this model explicitly considers three levels and three trees, but does not yet define a real three-level reconciliation, with a likelihood or score associated. It relies on a sequential operation, where the second reconciliation is informed by the result of the first one.
The reconciliation problem in multi-level models
The next step is to define the score of a reconciliation consisting of three nested trees and to compute, given the three trees, three-level reconciliations according to their score. It has been achieved with a species/gene/domain system, where genes evolve within the species tree with a DL model and domains evolve within the gene/species system with a DTL model, forbidding domain transfers between genes of two different species. Inference involves candidate scenarios with joint scores. Computing the minimum score scenario is NP-hard, but dynamic programming or integer linear programming can offer heuristics. Variations of the problem considering multiple domains are available, and so is a simulation framework.
Inferring the intermediate tree using models of 3-level lower/intermediate/upper reconciliation
Just like two-level reconciliation can be used to improve lower or upper phylogenies, or to help constructing them from aligned sequences, joint reconciliation models can be used in the same manner.
In this vein, a coupled gene/species DL, domain/gene DL and gene sequence evolution model in a Bayesian framework improves the reconstruction of gene trees.
Software
Multiple pieces of software have been developed to implement the various models of reconciliation. The following table does not aim for exhaustiveness but presents a number of software tools aimed at reconciling trees to infer reconciliation scenarios or for related usage, such as correcting or inferring trees, or testing coevolution.
The levels of interest section details the levels for which the software was implemented, even though it is entirely possible, for instance, to use software made for species and gene reconciliation to reconcile hosts and symbionts. Parsimony or probability is the underlying model that is used for the reconciliation.
References
External links
Phylogenetics
Evolutionary biology
NP-complete problems | Phylogenetic reconciliation | Mathematics,Biology | 9,960 |
19,543,375 | https://en.wikipedia.org/wiki/Marion%206360 | Marion 6360, known as "the Captain", was a giant power shovel built by the Marion Power Shovel company. Completed and commissioned on October 15, 1965, it was one of the largest land vehicles ever built, exceeded only by some dragline and bucket-wheel excavators. The shovel originally started work with the Southwestern Illinois Coal Corporation, but the owners were soon bought out by Arch Coal. Everything remained the same at the mine except for the colors, which were changed to red, white, and blue. Like most mining vehicles of extreme size, Marion 6360 required a surprisingly small crew to operate: a total of four, consisting of an operator, an oiler, a welder, and a ground man who looked after the trailing cable.
The shovel worked well for Arch Coal until September 9, 1991, when a fire broke out in the lower works of the shovel. It was caused by a burst hydraulic line that sprayed hot fluid onto an electrical relay panel. This fire caused a great deal of damage to both the lower works and the machine house. Afterwards, engineers from both Arch and Marion Power Shovel surveyed the damage and deemed it too extensive to repair, and the machine was scrapped one year later in the last pit it dug.
The only Marion shovel that compared (in size and scope) to "The Captain" was the Marion 5960-M Power Shovel that worked at Peabody Coal Company's (Peabody Energy) River Queen Surface Mine in Central City, Kentucky. It was named the "Big Digger" and carried a bucket on a boom. It was Marion Power Shovel's second largest machine ever built and the third largest shovel in the world. This "sister shovel" was scrapped in early 1990 in Muhlenberg County, Kentucky.
Specifics
Boom Length:
Bucket Capacity: (double doors)
Dipper Stick Length:
Overall Weight: 12700 tons (11,521,000 kg)
Total Height:
Crawler Height:
Crawler Unit Length:
Individual Crawler Width:
Individual Track weight: 3.5 tons apiece (42 pads per track total)
Clearance Under Shovel: to the first level of the Lower Works
Largest Shovel in the World & Largest Ever Built by Marion
Started Service: 1965
Dismantled: 1992
Power
Build time (site erection) 18 months & 150,000 man hours
See also
Bagger 288
Bagger 293
Big Muskie
The Silver Spade
Big Brutus
Bucyrus-Erie
Bucket-wheel excavator
Dragline
Excavator
Power Shovel
References
External links
Stripping shovels
Engineering vehicles
6360 | Marion 6360 | Engineering | 513 |
1,307,591 | https://en.wikipedia.org/wiki/Tocolytic | Tocolytics (also called anti-contraction medications or labor suppressants) are medications used to suppress premature labor (from Greek τόκος tókos, "childbirth", and λύσις lúsis, "loosening"). Preterm birth accounts for 70% of neonatal deaths. Therefore, tocolytic therapy is provided when delivery would result in premature birth, postponing delivery long enough for the administration of glucocorticoids, which accelerate fetal lung maturity but may require one to two days to take effect.
Commonly used tocolytic medications include β2 agonists, calcium channel blockers, NSAIDs, and magnesium sulfate. These can assist in delaying preterm delivery by suppressing uterine muscle contractions and their use is intended to reduce fetal morbidity and mortality associated with preterm birth. The suppression of contractions is often only partial and tocolytics can only be relied on to delay birth for a matter of days. Depending on the tocolytic used, the pregnant woman or fetus may require monitoring (e.g., blood pressure monitoring when nifedipine is used as it reduces blood pressure; cardiotocography to assess fetal well-being). In any case, the risk of preterm labor alone justifies hospitalization.
Indications
Tocolytics are used in preterm labor, that is, when labor begins too early, before 37 weeks of pregnancy. As preterm birth represents one of the leading causes of neonatal morbidity and mortality, the goal is to prevent neonatal morbidity and mortality by delaying delivery and increasing gestational age, gaining time for other management strategies such as corticosteroid therapy that may help with fetal lung maturity. Tocolytics are considered for women with confirmed preterm labor between 24 and 34 weeks of gestational age and are used in conjunction with other therapies that may include corticosteroid administration, fetal neuroprotection, and safe transfer to facilities.
Types of agents
There is no clear first-line tocolytic agent. Current evidence suggests that first line treatment with β2 agonists, calcium channel blockers, or NSAIDs to prolong pregnancy for up to 48 hours is the best course of action to allow time for glucocorticoid administration.
Various types of agents are used, with varying success rates and side effects. Some medications are not specifically approved by the U.S. Food and Drug Administration (FDA) for use in stopping uterine contractions in preterm labor, instead being used off-label.
According to a 2022 Cochrane review, the most effective tocolytics for delaying preterm birth by 48 hours, and 7 days were the nitric oxide donors, calcium channel blockers, oxytocin receptor antagonists and combinations of tocolytics.
Calcium-channel blockers (such as nifedipine) and oxytocin antagonists (such as atosiban) may delay delivery by 2 to 7 days, depending on how quickly the medication is administered. NSAIDs (such as indomethacin) and calcium channel blockers (such as nifedipine) are the most likely to delay delivery for 48 hours, with the least amount of maternal and neonatal side effects. Otherwise, tocolysis is rarely successful beyond 24 to 48 hours because current medications do not alter the fundamentals of labor activation. However, postponing premature delivery by 48 hours appears sufficient to allow pregnant women to be transferred to a center specialized for management of preterm deliveries, and thus administer corticosteroids for the possibility to reduce neonatal organ immaturity.
The efficacy of β-adrenergic agonists, atosiban, and indomethacin corresponds to decreased odds of delivery within 24 hours (odds ratio (OR) 0.54, 95% confidence interval (CI): 0.32-0.91) and within 48 hours (OR 0.47, 95% CI: 0.30-0.75).
Antibiotics were thought to delay delivery, but no studies have shown any evidence that using antibiotics during preterm labor effectively delays delivery or reduces neonatal morbidity. Antibiotics are used in people with premature rupture of membranes, but this is not characterized as tocolysis.
Contraindications to tocolytics
In addition to drug-specific contraindications, several general factors may contraindicate delaying childbirth with the use of tocolytic medications.
Fetus is older than 34 weeks gestation
Fetus weighs less than 2.5 kg, or has intrauterine growth restriction (IUGR) or placental insufficiency
Lethal congenital or chromosomal abnormalities
Cervical dilation is greater than 4 centimeters
Chorioamnionitis or intrauterine infection is present
Pregnant woman has severe pregnancy-induced hypertension, severe eclampsia/preeclampsia, active vaginal bleeding, placental abruption, a cardiac disease, or another condition which indicates that the pregnancy should not continue.
Maternal hemodynamic instability with bleeding
Intrauterine fetal demise, lethal fetal anomaly, or non-reassuring fetal status
Future direction of tocolytics
Most tocolytics are currently used off-label. Future development of tocolytic agents should be directed toward better efficacy in intentionally prolonging pregnancy, which could potentially result in fewer maternal, fetal, and neonatal adverse effects when delaying preterm childbirth. Tocolytic alternatives worth pursuing include Barusiban, a last-generation oxytocin receptor antagonist, as well as COX-2 inhibitors. Further studies on the use of multiple tocolytics should investigate overall health outcomes rather than solely pregnancy prolongation.
See also
Labor induction
References
Chemical substances for emergency medicine
Obstetric drugs
Obstetrics
Obstetrical procedures
Childbirth | Tocolytic | Chemistry | 1,231 |
73,169,693 | https://en.wikipedia.org/wiki/Grothendieck%20trace%20theorem | In functional analysis, the Grothendieck trace theorem is an extension of Lidskii's theorem about the trace and the determinant of a certain class of nuclear operators on Banach spaces, the so-called -nuclear operators. The theorem was proven in 1955 by Alexander Grothendieck. Lidskii's theorem does not hold in general for Banach spaces.
The theorem should not be confused with the Grothendieck trace formula from algebraic geometry.
Grothendieck trace theorem
Let B be a Banach space with the approximation property, and denote its dual by B′.
⅔-nuclear operators
Let A be a nuclear operator on B; then A is a ⅔-nuclear operator if it has a decomposition of the form
A = ∑_k φ_k ⊗ f_k,
where φ_k ∈ B′ and f_k ∈ B, and
∑_k ‖φ_k‖^(2/3) ‖f_k‖^(2/3) < ∞.
Grothendieck's trace theorem
Let λ_1(A), λ_2(A), … denote the eigenvalues of a ⅔-nuclear operator A, counted with their algebraic multiplicities. If
∑_j |λ_j(A)| < ∞,
then the following equalities hold:
tr A = ∑_j λ_j(A),
and for the Fredholm determinant
det(1 + A) = ∏_j (1 + λ_j(A)).
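In finite dimensions every matrix is a finite-rank, hence nuclear, operator, and both identities reduce to familiar linear-algebra facts; the following small numerical check, an illustration only and not part of the theorem, verifies them for a 2×2 matrix.

```python
# Finite-dimensional sanity check of the two identities above (illustration
# only; the content of the theorem is that they extend to 2/3-nuclear
# operators on Banach spaces with the approximation property).
import numpy as np

A = np.array([[0.2, 1.0],
              [0.0, -0.5]])
eigenvalues = np.linalg.eigvals(A)

# trace equals the sum of eigenvalues
assert np.isclose(np.trace(A), eigenvalues.sum())
# the (finite-rank) Fredholm determinant det(1 + A) equals the product of (1 + eigenvalues)
assert np.isclose(np.linalg.det(np.eye(2) + A), np.prod(1 + eigenvalues))
```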
See also
Literature
References
Theorems in functional analysis
Topological tensor products
Determinants | Grothendieck trace theorem | Mathematics,Engineering | 212 |
926,797 | https://en.wikipedia.org/wiki/Picture-in-picture | Picture-in-picture (PiP) is a feature that can be found in television receivers, personal computers, and smartphones. It consists of a video stream playing within an inset window, freeing the rest of the screen for other tasks.
For televisions, picture-in-picture requires two independent tuners or signal sources to supply the large and the small picture. Two-tuner PiP TVs have a second tuner built in, but a single-tuner PiP TV requires an external signal source, which may be an external tuner, videocassette recorder, DVD player, or cable box. Picture-in-picture is often used to watch one program while waiting for another to start or advertisements to finish.
History
Adding a picture to an existing picture was done long before affordable PiP was available on consumer products. The first PiP was seen on the televised coverage of the 1976 Summer Olympics, where a Quantel digital framestore device was used to insert a close-up picture of the Olympic flame during the opening ceremony. In 1978 Sharp introduced its TV-in-TV "Mr.X" (CT-1804 X) in Japan; exports followed in 1979 under the name "Dualvision" (17D50). In 1980, NEC introduced its "Popvision" television (CV-20T74P) in Japan with a rudimentary picture-aside-picture feature: a separate 6" (15 cm) CRT and tuner complemented the set's main 20" (50 cm) screen. Its price was ¥298,000 MSRP, equal to about $1,200 (at $1 = ¥250), and $1,200 in 1980 had the approximate buying power of $3,000 in 2007.
An early consumer implementation of picture-in-picture was the Multivision set-top box; it was not a commercial success. Later, PiP became available as a feature of advanced television receivers.
The first widespread consumer implementation of picture-in-picture was produced by Philips in 1983 in their high-end television sets. A separate video or RF input was available on the back of the set and displayed in black and white on one of the four corners of the screen. Televisions at the time were still in analog format, and earlier versions of the PiP implemented in analog were too costly. New digital technology allowed the second video signal to be digitized and saved in a digital memory chip, then replayed in a mini version. While the new technology was not good enough for color or full-screen viewing, it did provide a low-cost PiP feature.
The Blu-ray Disc and HD DVD specifications included picture-in-picture, allowing viewers to see content such as the director's commentary on a film they are watching. All the Blu-ray Disc titles in 2006 and 2007 that had a PiP track used two separate HD encodings, with one of the HD encodings including a hard-coded PiP track. Starting in 2008 Blu-ray Disc titles started being released that use one HD and one SD video track which can be combined with a Bonus View or BD-Live player. This method uses less disc space, allowing for PiP to be more easily added to a title. Several studios released Bonus View PiP Blu-ray Disc titles in 2008 such as Aliens vs. Predator: Requiem, Resident Evil: Extinction, V for Vendetta, and War.
In 2011, after DirecTV released the HR34 Home Media Center HD DVR, picture-in-picture was introduced to all HD DVR models onwards; The feature has five options: Upper Left, Upper Right, Lower Right, Lower Left, and Side-by-Side.
Software support
Some streaming video websites similarly minimize a video stream into a small inset when the user browses away from the playback page. Some web browsers (including Google Chrome, Firefox, and Safari) provide APIs or similar functions that allow a playing video to be opened in a pop-up overlay atop other applications.
The mobile operating systems Android (starting with Android 7.0 for Android TV devices and Android 8.0 for other devices) and iOS (starting with iOS 14) similarly provide native APIs for picture-in-picture overlays.
References
Television technology
Television terminology | Picture-in-picture | Technology | 865 |
18,289,561 | https://en.wikipedia.org/wiki/Tegafur | Tegafur is a chemotherapeutic prodrug of 5-fluorouracil (5-FU) used in the treatment of cancers. It is a component of the combination drug tegafur/uracil. When metabolised, it becomes 5-FU.
It was patented in 1967 and approved for medical use in 1972.
Medical uses
As a prodrug to 5-FU it is used in the treatment of the following cancers:
Stomach (when combined with gimeracil and oteracil)
Breast (with uracil)
Gallbladder
Lung (specifically adenocarcinoma, typically with uracil)
Colorectal (usually when combined with gimeracil and oteracil)
Head and neck
Liver (with uracil)
Pancreatic
It is often given in combination with drugs that alter its bioavailability and toxicity such as gimeracil, oteracil or uracil. These agents achieve this by inhibiting the enzyme dihydropyrimidine dehydrogenase (uracil/gimeracil) or orotate phosphoribosyltransferase (oteracil).
Adverse effects
The major side effects of tegafur are similar to fluorouracil and include myelosuppression, central neurotoxicity and gastrointestinal toxicity (especially diarrhoea). Gastrointestinal toxicity is the dose-limiting side effect of tegafur. Central neurotoxicity is more common with tegafur than with fluorouracil.
Pharmacogenetics
The dihydropyrimidine dehydrogenase (DPD) enzyme is responsible for the detoxifying metabolism of fluoropyrimidines, a class of drugs that includes 5-fluorouracil, capecitabine, and tegafur. Genetic variations within the DPD gene (DPYD) can lead to reduced or absent DPD activity, and individuals who are heterozygous or homozygous for these variations may have partial or complete DPD deficiency; an estimated 0.2% of individuals have complete DPD deficiency. Those with partial or complete DPD deficiency have a significantly increased risk of severe or even fatal drug toxicities when treated with fluoropyrimidines; examples of toxicities include myelosuppression, neurotoxicity and hand-foot syndrome.
Mechanism of action
It is a prodrug to 5-FU, which is a thymidylate synthase inhibitor.
Pharmacokinetics
It is metabolised to 5-FU by CYP2A6.
Interactive pathway map
See also
Tegafur/uracil
Tegafur/gimeracil/oteracil
References
Organofluorides
Prodrugs
Pyrimidinediones
Pyrimidine antagonists
Tetrahydrofurans
Fluoropyrimidines
Drugs in the Soviet Union | Tegafur | Chemistry | 625 |
11,178,099 | https://en.wikipedia.org/wiki/Rata%20Die | Rata Die (R.D.) is a system for assigning numbers to calendar days (optionally with time of day), independent of any calendar, for the purposes of calendrical calculations. It was named (after the Latin ablative feminine singular for "from a fixed date") by Howard Jacobson.
Rata Die is somewhat similar to Julian Dates (JD), in that the values are plain real numbers that increase by 1 each day. The systems differ principally in that JD takes on a particular value at a particular absolute time, and is the same in all contexts, whereas R.D. values may be relative to time zone, depending on the implementation. This makes R.D. more suitable for work on calendar dates, whereas JD is more suitable for work on time per se. The systems also differ trivially by having different epochs: R.D. is 1 at midnight (00:00) local time on January 1, AD 1 in the proleptic Gregorian calendar, JD is 0 at noon (12:00) Universal Time on January 1, 4713 BC in the proleptic Julian calendar.
Forms
There are three distinct forms of R.D., defined here in terms of Julian Dates.
Dershowitz and Reingold do not explicitly distinguish between these three forms, using the abbreviation "R.D." for all of them.
Dershowitz and Reingold do not say that R.D. is based on Greenwich time, but on page 10 state that an R.D. with a decimal fraction is called a moment; the function moment-from-jd takes a floating-point Julian Date as its argument and returns that argument minus 1,721,424.5. Consequently, there is no requirement or opportunity to supply a time zone offset.
Fractional days
The first form of R.D. is a continuously-increasing fractional number, taking integer values at midnight local time. It is defined as:
RD = JD − 1,721,424.5
Midnight local time on December 31, year 0 (1 BC) in the proleptic Gregorian calendar corresponds to Julian Date 1,721,424.5 and hence RD 0.
Day Number
In the second form, R.D. is an integer that labels an entire day, from midnight to midnight local time. This is the result of rounding the first form of R.D. downwards (towards negative infinity). It is the same as the relation between Julian Date and Julian Day Number (JDN). Thus:
RD = floor( JD − 1,721,424.5 )
Noon Number
In the third form, the R.D. is an integer labeling noon time, and incapable of labeling any other time of day. This is defined as
RD = JD − 1,721,425
where the R.D. value must be an integer, thus constraining the choice of JD. This form of R.D. is used by Dershowitz and Reingold for conversion of calendar dates between calendars that separate days on different boundaries.
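A minimal Python sketch of the three forms, using the offsets stated above (the function names are illustrative and not taken from any calendar library):

```python
# Illustrative sketch of the three R.D. forms described above; the function
# names are made up for this example and are not from any particular library.
import math

def rd_moment(jd: float) -> float:
    """Form 1: fractional R.D.; integer values fall at midnight local time."""
    return jd - 1_721_424.5

def rd_day_number(jd: float) -> int:
    """Form 2: integer label for a whole day (floor of the fractional form)."""
    return math.floor(jd - 1_721_424.5)

def rd_noon_number(jd: float) -> int:
    """Form 3: integer label for noon; only defined when jd - 1,721,425 is an integer."""
    rd = jd - 1_721_425
    if rd != int(rd):
        raise ValueError("JD must correspond to local noon")
    return int(rd)

# Midnight at the start of January 1, AD 1 (proleptic Gregorian) is JD 1,721,425.5:
assert rd_moment(1_721_425.5) == 1.0
assert rd_day_number(1_721_425.5 + 0.25) == 1   # 6 a.m. on that day
assert rd_noon_number(1_721_426.0) == 1         # noon on that day
```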
See also
Decimal time#Fractional days
Julian date
Lilian date
References
Applied mathematics
Calendars
Calendaring standards | Rata Die | Physics,Mathematics | 652 |
17,806,864 | https://en.wikipedia.org/wiki/Visions%20of%20the%20Future | Visions of the Future is a 2007 documentary television series aired on the BBC Four television channel. The series stars theoretical physicist and futurist Michio Kaku as he documents cutting edge science.
There are three installments in the series.
Episodes
The Intelligence Revolution
The Intelligence Revolution - Kaku explains how he believes artificial intelligence will revolutionize the world. Also, Kaku investigates virtual reality technology and its potential. Controversially, Kaku documents the work of scientists using a combination of artificial intelligence and neuroscience technology to transform a person suffering from major depressive disorder into one who is happy and content at the push of a button.
List of technologies:
Autonomous car
Ubiquitous computing and Internet of things
E-textiles
Head-mounted display
Virtual retinal display
Virtual reality
Augmented reality
Immersive virtual reality
Robotics and artificial intelligence
Cyborgology, Bionics and human enhancement
The Biotech Revolution
The Biotech Revolution - This episode focuses mainly on recent advances in genetics and biotechnology. Amongst other things Kaku documents advances in DNA screening, gene therapy and lab-grown organ transplants.
List of technologies:
Whole genome sequencing and personalized medicine
Genetic engineering
Gene therapy
Designer baby
Cancer Genome Project
Regenerative medicine
Tissue engineering, Printable organs
Cell therapy
Immunomodulation therapy
Life extension
Sirtuin 1
Transhumanism
The Quantum Revolution
The Quantum Revolution - Kaku investigates the advances of quantum physics and the effects it could have on the average human life. Kaku looks at the work of science fiction writers and the way that many concepts conceived for entertainment could in fact become reality. Kaku also speculates about the effects that such technology may have on the future of the human species.
List of technologies:
High-temperature superconductivity
Metamaterial
Carbon nanotube
Space elevator
Nuclear fusion
Nanotechnology
Nanorobotics
Molecular assembler
Quantum teleportation
External links
2007 British television series debuts
2007 British television series endings
BBC high definition shows
BBC television documentaries
2000s British television miniseries
British English-language television shows
Documentary television series about computing
Futurology documentaries | Visions of the Future | Technology | 412 |
1,190,775 | https://en.wikipedia.org/wiki/Debye | The debye (symbol: D) is a CGS unit (a non-SI metric unit) of electric dipole moment named in honour of the physicist Peter J. W. Debye. It is defined as 10−18 statcoulomb-centimetres. Historically, the debye was defined as the dipole moment resulting from two charges of opposite sign but an equal magnitude of 10−10 statcoulomb (generally called e.s.u. (electrostatic unit) in older scientific literature), separated by 1 ångström. This gave a convenient unit for molecular dipole moments.
1 D = 10−18 statC·cm
    = 10−18 cm5/2⋅g1/2⋅s−1
    = 10−10 statC·Å
    ≘ 3.33564 × 10−30 C·m
Typical dipole moments for simple diatomic molecules are in the range of 0 to 11 D. Molecules with symmetry point groups or containing inversion symmetry will not have a permanent dipole moment, while highly ionic molecular species have a very large dipole moment, e.g. gas-phase potassium bromide, KBr, with a dipole moment of 10.41 D. A proton and an electron 1 Å apart have a dipole moment of 4.8 D.
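The 4.8 D figure can be checked by direct arithmetic, as in the following snippet (rounded constants, for illustration only), using the SI equivalent of the debye given in the conversion list above.

```python
# Arithmetic check of the 4.8 D figure for a proton and an electron 1 angstrom
# apart (rounded constants; illustration only).
e = 1.602176634e-19        # elementary charge, in coulombs
d = 1e-10                  # 1 angstrom, in metres
debye_si = 3.33564e-30     # 1 D expressed in coulomb-metres

p = e * d                        # dipole moment in coulomb-metres
print(round(p / debye_si, 2))    # -> 4.8
```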
The debye is still used in atomic physics and chemistry because SI units have until recently been inconveniently large. The smallest SI unit of electric dipole moment is the quectocoulomb-metre, which corresponds closely to 0.3 D.
See also
Buckingham (unit) (CGS unit of electric quadrupole)
Notes
References
Non-SI metric units
Peter Debye
Centimetre–gram–second system of units | Debye | Mathematics | 426 |
1,674,353 | https://en.wikipedia.org/wiki/History%20of%20pseudoscience | The history of pseudoscience is the study of pseudoscientific theories over time. A pseudoscience is a set of ideas that presents itself as science, while it does not meet the criteria to properly be called such.
Distinguishing between proper science and pseudoscience is sometimes difficult. One popular proposal for demarcation between the two is the falsification criterion, most notably contributed to by the philosopher Karl Popper. In the history of pseudoscience it can be especially hard to separate the two, because some sciences developed from pseudosciences. An example of this is the science chemistry, which traces its origins from the protoscience of alchemy.
The vast diversity in pseudosciences further complicates the history of pseudoscience. Some pseudosciences originated in the pre-scientific era, such as astrology and acupuncture. Others developed as part of an ideology, such as Lysenkoism, or as a response to perceived threats to an ideology. An example of this is creationism, which was developed as a response to the scientific theory of evolution.
Despite failing to meet proper scientific standards, many pseudosciences survive. This is usually due to a persistent core of devotees who refuse to accept scientific criticism of their beliefs, or due to popular misconceptions. Sheer popularity is also a factor, as is attested by astrology which remains popular despite being rejected by a large majority of scientists.
19th century
Among the most notable developments in the history of pseudoscience in the 19th century are the rise of Spiritualism (traced in America to 1848), homeopathy (first formulated in 1796), and phrenology (developed around 1800). Another popular pseudoscientific belief that arose during the 19th century was the idea that there were canals visible on Mars. A relatively mild Christian fundamentalist backlash against the scientific theory of evolution foreshadowed subsequent events in the 20th century.
The study of bumps and fissures in people's skulls to determine their character, phrenology, was originally considered a science. It influenced psychiatry and early studies into neuroscience. As science advanced, phrenology was increasingly viewed as a pseudoscience. Halfway through the 19th century, the scientific community had prevailingly abandoned it, although it was not comprehensively tested until much later.
Halfway through the century, iridology was invented by the Hungarian physician Ignaz von Peczely. The theory would remain popular throughout the 20th century as well.
Spiritualism (sometimes referred to as "Modern Spiritualism" or "Spiritism") or "Modern American Spiritualism" grew phenomenally during the period. The American version of this movement has been traced to the Fox sisters who in 1848 began claiming the ability to communicate with the dead. The religious movement would remain popular until the 1920s, when renowned magician Harry Houdini began exposing famous mediums and other performers as frauds (see also Harry Houdini#Debunking spiritualists). While the religious beliefs of Spiritualism are not presented as science, and thus are not properly considered pseudoscientific, the movement did spawn numerous pseudoscientific phenomena such as ectoplasm and spirit photography.
The principles of homeopathy were first formulated in 1796, by German physician Samuel Hahnemann. At the time, mainstream medicine was a primitive affair and still made use of techniques such as bloodletting. Homeopathic medicine by contrast consisted of extremely diluted substances, which meant that patients basically received water. Compared to the damage often caused by conventional medicine, this was an improvement. During the 1830s homeopathic institutions and schools spread across the US and Europe. Despite these early successes, homeopathy was not without its critics. Its popularity was on the decline before the end of the 19th century, though it has been revived in the 20th century.
The supposed Martian canals were first reported in 1877, by the Italian astronomer Giovanni Schiaparelli. The belief in them peaked in the late 19th century, but was widely discredited in the beginning of the 20th century.
The publication of Atlantis: The Antediluvian World by politician and author Ignatius L. Donnelly in 1882, renewed interest in the ancient idea of Atlantis. This highly advanced society supposedly existed several millennia before the rise of civilizations like Ancient Egypt. It was first mentioned by Plato, as a literary device in two of his dialogues. Other stories of lost continents, such as Mu and Lemuria also arose during the late 19th century.
In 1881 the Dutch Vereniging tegen de Kwakzalverij (English: Society against Quackery) was formed to oppose pseudoscientific trends in medicine. It is still active.
20th century
Among the most notable developments in pseudoscience in the 20th century are the rise of Creationism, the demise of Spiritualism, and the first formulation of ancient astronaut theories.
Reflexology, the idea that an undetectable life force connects various parts of the body to the feet and sometimes the hands and ears, was introduced in the US in 1913 as 'zone therapy'.
Creationism arose during the 20th century as a result of various other historical developments. When the modern evolutionary synthesis overcame the eclipse of Darwinism in the first half of the 20th century, American fundamentalist Christians began opposing the teaching of the theory of evolution in public schools. They introduced numerous laws to this effect, one of which was notoriously upheld in the Scopes Trial.
In the second half of the century the Space Race caused a renewed interest in science and worry that the USA was falling behind the Soviet Union. Stricter science standards were adopted and led to the re-introduction of the theory of evolution in the curriculum. The laws against teaching evolution were now ruled unconstitutional, because they violated the separation of church and state. Attempting to evade this ruling, the Christian fundamentalists produced a supposedly secular alternative to evolution, Creationism. Perhaps the most influential publication of this new pseudoscience was The Genesis Flood by young earth creationists John C. Whitcomb and Henry M. Morris.
The dawn of the space age also inspired various versions of ancient astronaut theories. While differences between the specific theories exist, they share the idea that intelligent extraterrestrials visited Earth in the distant past and made contact with then-living humans. Popular authors, such as Erich von Däniken and Zecharia Sitchin, began publishing in the 1960s. Among the most notable publications in the genre is Chariots of the Gods?, which appeared in 1968.
Late in the 20th century several prominent skeptical foundations were formed to counter the growth of pseudosciences. In the US, the most notable of these are, in chronological order, the Center for Inquiry (1991), The Skeptics Society (1992), the James Randi Educational Foundation (1996), and the New England Skeptical Society (1996). The Committee for Skeptical Inquiry, which has similar goals, had already been founded in 1976. It became part of the Center for Inquiry as part of the foundation of the latter in 1991. In the Netherlands Stichting Skepsis was founded in 1987.
21st century
At the beginning of the 21st century, a variety of pseudoscientific theories remain popular and new ones continue to crop up.
The Flat Earth movement holds that the Earth is flat. The belief is often assumed to have existed for thousands of years, but studies show that the modern movement is relatively new, beginning in the 1990s, when the growth of the internet allowed such ideas to spread much more quickly.
Creationism, in the form of Intelligent Design, suffered a major legal defeat in the Kitzmiller v. Dover Area School District trial. Judge John E. Jones III ruled that Intelligent Design is inseparable from Creationism, and its teaching in public schools violates the Establishment Clause of the First Amendment. The trial sparked much interest, and was the subject of several documentaries including the award-winning NOVA production Judgment Day: Intelligent Design on Trial (2007).
The pseudoscientific idea that vaccines cause autism originated in the 1990s, but became prominent in the media during the first decade of the 21st century. Despite a broad scientific consensus against the idea that there is a link between vaccination and autism, several celebrities have joined the debate. Most notable of these is Jenny McCarthy, whose son has autism.
In February 2009, surgeon Andrew Wakefield, who published the original research supposedly indicating a link between vaccines and autism, was reported to have fixed the data by The Sunday Times. A hearing by the General Medical Council began in March 2007, examining charges of professional misconduct. On 24 May 2010, he was struck off the United Kingdom medical register, effectively banning him from practicing medicine in Britain.
The most notable development in the ancient astronauts genre was the opening of Erich von Däniken's Mystery Park in 2003. While the park had a good first year, the number of visitors was much lower than the expected 500,000 a year. This caused financial difficulties, which led to the closure of the park in 2006.
See also
Histories of specific pseudosciences
History of astrology
History of creationism
History of perpetual motion machines
References
Pseudoscience
Pseudoscience | History of pseudoscience | Technology | 1,864 |
872,409 | https://en.wikipedia.org/wiki/Vitruvian%20module | A module (Latin modulus, a measure) is a term that was in use among Roman architects, corresponding to the semidiameter of the column at its base. The term was first set forth by Vitruvius (book iv.3), and was employed by architects in the Italian Renaissance to determine the relative proportions of the various parts of the Classical orders. The module was divided by the 16th century theorists into thirty parts, called minutes, allowing for much greater precision than was thought necessary by Vitruvius, whose subdivision was usually six parts.
When illustrating Palladio, the British architect Isaac Ware (The Four Books of Andrea Palladio's Architecture, London 1738; illustration, right) laid out the Doric order as an exercise in modular construction. The module he selected was a full column diameter taken at the base. He set his columns, 15 modules tall, at an intercolumniation of 5½ modules. His architrave and frieze, without the cornice, are equal to one module.
The tendency in Beaux-Arts architectural training was similarly to adopt the whole columnar diameter as the module when determining the height of the column or entablature or any of their subdivisions.
Thus module can be extended to mean more generally a unitary part that gives the measuring unit for the whole. In education, for example, lessons may be divided into modules.
Notes
References
Architectural theory | Vitruvian module | Engineering | 289 |
46,304,496 | https://en.wikipedia.org/wiki/W75N%28B%29-VLA2 | W75N(B)-VLA2 is a massive protostar located in the Cygnus X region some 4,200 light-years from Earth, about 8 times more massive and 300 times brighter than the Sun, observed in 1996 and 2014 by the Karl G. Jansky Very Large Array (VLA). In 2014 its stellar wind had changed from a compact spherical form to a larger thermal, ionized elliptical one outlining collimated motion, giving critical insight into the very early stages of the formation of a massive star. Being able to observe its rapid growth as it happens (in real time in an astronomical context) is unique, according to Huib van Langevelde of Leiden University, one of the authors of a study of the object.
The authors of the study believe W75N(B)-VLA2 is forming in a dense, gaseous environment, surrounded by a dusty torus. The star intermittently ejects a hot, ionized wind for several years. Initially the wind can expand in all directions and forms a spherical shell; later it hits the dusty torus, which slows it. There is less resistance along the poles of the torus, so the wind moves more quickly there, giving rise to an elongated shape.
References
Cygnus (constellation)
Protostars | W75N(B)-VLA2 | Astronomy | 271 |
2,727,432 | https://en.wikipedia.org/wiki/Omega2%20Cygni | {{DISPLAYTITLE:Omega2 Cygni}}
Omega2 Cygni, Latinized from ω2 Cygni, is the Bayer designation for a solitary star in the northern constellation of Cygnus. It has an apparent visual magnitude of 5.5, which is faintly visible to the naked eye on a dark night. Based upon an annual parallax shift of 8.17 mas, it is located roughly 399 light years from the Sun. At that distance, the visual magnitude is diminished by an extinction factor of 0.08 due to interstellar dust.
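The quoted distance is what the standard parallax relation gives (the distance in parsecs is the reciprocal of the parallax in arcseconds); a small illustrative calculation in Python, using only the figures above and the standard parsec-to-light-year factor, is:

```python
# Distance from annual parallax: d [pc] = 1 / p [arcsec].
parallax_mas = 8.17                      # milliarcseconds, as quoted above
parallax_arcsec = parallax_mas / 1000.0

distance_pc = 1.0 / parallax_arcsec      # ~122.4 pc
distance_ly = distance_pc * 3.26156      # 1 pc ~ 3.26156 light years
print(f"{distance_pc:.1f} pc = {distance_ly:.0f} light years")   # ~399 ly
```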
This is a red giant star on the asymptotic giant branch, with a stellar classification of M2 III. It is a suspected variable star, although the evidence is considered "doubtful or erroneous". If it does exist, the variability is small with an amplitude of 0.05 magnitude and a timescale of around 30 days. There is a 58.3% chance that this star is a member of the Hercules stream.
See also
Omega1 Cygni
References
Suspected variables
M-type giants
Double stars
Cygni, Omega2
Cygnus (constellation)
Durchmusterung objects
Cygni, 46
195774
101243
7851 | Omega2 Cygni | Astronomy | 250 |
59,162,859 | https://en.wikipedia.org/wiki/He%20Jiankui%20genome%20editing%20incident | The He Jiankui genome editing incident is a scientific and bioethical controversy concerning the use of genome editing following its first use on humans by Chinese scientist He Jiankui, who edited the genomes of human embryos in 2018. He became widely known on 26 November 2018 after he announced that he had created the first human genetically edited babies. He was listed in Time magazine's 100 most influential people of 2019. The affair led to ethical and legal controversies, resulting in the indictment of He and two of his collaborators, Zhang Renli and Qin Jinzhou. He eventually received widespread international condemnation.
He Jiankui, working at the Southern University of Science and Technology (SUSTech) in Shenzhen, China, started a project to help people with HIV-related fertility problems, specifically involving HIV-positive fathers and HIV-negative mothers. The subjects were offered standard in vitro fertilisation services and, in addition, use of CRISPR gene editing (CRISPR/Cas9), a technology for modifying DNA. The embryos' genomes were edited to remove the CCR5 gene in an attempt to confer genetic resistance to HIV. The clinical project was conducted secretly until 25 November 2018, when MIT Technology Review broke the story of the human experiment based on information from the Chinese clinical trials registry. Compelled by the situation, he immediately announced the birth of genome-edited babies in a series of five YouTube videos the same day. The first babies, known by their pseudonyms Lulu () and Nana (), are twin girls born in October 2018; a third baby, named Amy, was born from a second pregnancy in 2019. He reported that the babies were born healthy.
His actions received widespread criticism, which included concern for the girls' well-being. After his presentation on the research at the Second International Summit on Human Genome Editing at the University of Hong Kong on 28 November 2018, Chinese authorities suspended his research activities the following day. On 30 December 2019, a Chinese district court found He Jiankui guilty of illegal practice of medicine, sentencing him to three years in prison with a fine of 3 million yuan. Zhang Renli and Qin Jinzhou received prison sentences of two years and 18 months respectively, with fines, and were banned from working in assisted reproductive technology for life.
He Jiankui has been variously referred to as a "rogue scientist", "China's Dr Frankenstein", and a "mad genius". The impact of human gene editing on resistance to HIV infection and other body functions in experimental infants remains controversial. The World Health Organization has issued three reports on the guidelines of human genome editing since 2019, and the Chinese government has prepared regulations since May 2019. In 2020, the National People's Congress of China passed Civil Code and an amendment to Criminal Law that prohibit human gene editing and cloning with no exceptions; according to the Criminal Law, violators will be held criminally liable, with a maximum sentence of seven years in prison in serious cases.
Origin
Since 2016, He Jiankui, then an associate professor at the Southern University of Science and Technology (SUSTech) in Shenzhen, together with Zhang Renli and Qin Jinzhou, had used human embryos in gene-editing research for assisted reproductive medicine. On 10 June 2017, a Chinese couple, an HIV-positive father and an HIV-negative mother, pseudonymously called Mark and Grace, attended a conference held by He at SUSTech. They were offered in vitro fertilisation (IVF) along with gene editing of their embryos so as to develop innate resistance to HIV infection in their offspring. They agreed to volunteer through informed consent, and the experiment was carried out in secrecy. Six other couples with similar fertility problems were subsequently recruited, through a Beijing-based AIDS advocacy group called Baihualin China League. When later examined, the consent forms were found to be incomplete and inadequate. The couples were reported to have agreed to the experiment because, under Chinese rules, HIV-positive fathers are normally not allowed to have children using IVF.
When the place of the clinical experiment was investigated, SUSTech declared that the university was not involved and that He had been on unpaid leave since February 2018, and his department attested that they were unaware of the research project.
Experiment and birth
He Jiankui, the researcher, took sperm and eggs from the couples, performed in vitro fertilisation with the eggs and sperm, and then edited the genomes of the embryos using CRISPR/Cas9. The editing targeted a gene, CCR5, that codes for a protein that HIV uses to enter cells. He was trying to reproduce the phenotype of a specific mutation in the gene, CCR5-Δ32, that few people naturally have and that possibly confers innate resistance to HIV, as seen in the case of the Berlin Patient. However, rather than introducing the known CCR5-Δ32 mutation, he introduced a frameshift mutation intended to make the CCR5 protein entirely nonfunctional. According to He, Lulu and Nana carried both functional and mutant copies of CCR5 given mosaicism inherent in the present state of the art in germ-line editing. There are forms of HIV that use a different receptor instead of CCR5; therefore, the work of He did not theoretically protect Lulu and Nana from those forms of HIV. He used a preimplantation genetic diagnosis process on the embryos that were edited, where three to five single cells were removed, and fully sequenced them to identify chimerism and off-target errors. He says that during the pregnancy, cell-free fetal DNA was fully sequenced to check for off-target errors, and an amniocentesis was offered to check for problems with the pregnancy, but the mother declined. Lulu and Nana were born in secrecy in October 2018. They were reported by He to be normal and healthy.
Revelation
He Jiankui was planning to reveal his experiments and the birth of Lulu and Nana at the Second International Summit on Human Genome Editing, which was to be held at the University of Hong Kong during 27–29 November 2018. However, on 25 November 2018, Antonio Regalado, senior editor for biomedicine at MIT Technology Review, posted on the journal's website about the experiment, based on He Jiankui's applications to conduct a clinical trial that had been posted earlier on the Chinese clinical trials registry. At the time, He refused to comment on the conditions of the pregnancy. Prompted by the publicity, He immediately posted about his experiment and the successful birth of the twins on YouTube in five videos the same day. The next day, the Associated Press published the first formal news report, most likely an account prepared before the publicity. His experiment had received no independent confirmation, and had not been peer reviewed or published in a scientific journal. Soon after He's revelation, the university at which He was previously employed, the Southern University of Science and Technology, stated that He's research was conducted outside of its campus. China's National Health Commission also ordered provincial health officials to investigate his case soon after the experiment was revealed.
Amidst the furore, He was allowed to present his research at the Hong Kong meeting on 28 November under the title "CCR5 gene editing in mouse, monkey, and human embryos using CRISPR–Cas9". During the discussion session, He asserted, "Do you see your friends or relatives who may have a disease? They need help," and continued, "For millions of families with inherited disease or infectious disease, if we have this technology we can help them." In his speech, He also mentioned a second pregnancy under the same experiment. Although no reports disclosed it at the time, the baby appears to have been born around August 2019, and the birth was affirmed on 30 December when the Chinese court returned a verdict mentioning that there were "three genetically-edited babies". The baby was later revealed in 2022 as Amy.
Reactions and aftermath
On the news that Lulu and Nana had been born, the People's Daily announced the experimental result as "a historical breakthrough in the application of gene editing technology for disease prevention." But scientists at the Second International Summit on Human Genome Editing immediately developed serious concerns. Robin Lovell-Badge, head of the Laboratory of Stem Cell Biology and Developmental Genetics at the Francis Crick Institute, who moderated the session on 28 November, recalled that He Jiankui had not mentioned human embryos in the draft summary of the presentation. Lovell-Badge received an urgent message on 25 November through Jennifer Doudna of the University of California, Berkeley, a pioneer of the CRISPR/Cas9 technology, to whom He had confided the news earlier that morning. Because the news had already broken before the day of the presentation, He Jiankui had to be brought in from his hotel by University of Hong Kong security. Nobel laureate David Baltimore, the chair of the summit's organizing committee, was the first to react after He's speech, declaring his horror and dismay at the work.
He did not disclose the parents' names (other than their pseudonyms Mark and Grace) and they did not make themselves available to be interviewed, so their reaction to this experiment and the ensuing controversy is not known. There was widespread criticism in the media and scientific community over the conduct of the clinical project and its secrecy, and concerns raised for the long term well-being of Lulu and Nana. Bioethicist Henry T. Greely of Stanford Law School declared, "I unequivocally condemn the experiment," and later, "He Jiankui’s experiment was, amazingly, even worse than I first thought." Kiran Musunuru, one of the experts called on to review He's manuscript and who later wrote a book on the scandal, called it a "historic ethical fiasco, a deeply flawed experiment".
On the night of 26 November, 122 Chinese scientists issued a statement criticizing his research. They declared that the experiment was unethical, "crazy" and "a huge blow to the global reputation and development of Chinese science". The Chinese Academy of Medical Sciences made a condemnation statement on 5 January 2019 saying that:
A series of investigations was opened by He's university, local authorities, and the Chinese government. On 26 November 2018, SUSTech released a public notification on its website condemning He's conduct, mentioning the key points as:
The research work was conducted off-campus by Associate Professor He Jiankui without reporting to the university and the Department of Biology, and the university and the Department of Biology were unaware of it.
The Academic Committee of the Department of Biology considers that Associate Professor He Jiankui's use of gene editing technology for human embryo research is a serious violation of academic ethics and academic standards.
SUSTech strictly requires scientific research to comply with national laws and regulations and to respect and abide by international academic ethics and academic norms. The university will immediately hire authoritative experts to set up an independent committee to conduct an in-depth investigation, and will publish relevant information after the investigation.
On 28 November 2018, the organising committee of the Second International Summit on Human Genome Editing, led by Baltimore, issued a statement declaring:
At this summit we heard an unexpected and deeply disturbing claim that human embryos had been edited and implanted, resulting in a pregnancy and the birth of twins. We recommend an independent assessment to verify this claim and to ascertain whether the claimed DNA modifications have occurred. Even if the modifications are verified, the procedure was irresponsible and failed to conform with international norms. Its flaws include an inadequate medical indication, a poorly designed study protocol, a failure to meet ethical standards for protecting the welfare of research subjects, and a lack of transparency in the development, review, and conduct of the clinical procedures.
On 29 January 2019, it was learned that U.S. Nobel laureate Craig Mello had interviewed He about his experiment with gene-edited babies. In February 2019, He's claims were reported by NPR News to have been confirmed by Chinese investigators. Around that time, news reports based on newly uncovered documents suggested that the Chinese government may have helped fund the CRISPR babies experiment, at least in part.
Consequences
On 29 November 2018, Chinese authorities suspended all of He's research activities, saying his work was "extremely abominable in nature" and a violation of Chinese law. He was sequestered in a university apartment under some sort of surveillance. On 21 January 2019, He was fired from his job at SUSTech and his teaching and research work at the university was terminated. The same day, the Guangdong Province administration investigated the "gene editing baby incident", which is explicitly prohibited by the state. On 30 December 2019, the Shenzhen Nanshan District People's Court found He Jiankui guilty of illegal practice of medicine, sentencing him to three years in prison with a fine of 3 million yuan.
Among the collaborators, only two were indicted – Zhang Renli of the Guangdong Academy of Medical Sciences and Guangdong General Hospital, received a two-year prison sentence and a 1-million RMB (about US$) fine, and Qin Jinzhou of the Southern University of Science and Technology received an 18-month prison sentence and a 500,000 RMB (about US$) fine. The three were found guilty of having "forged ethical review documents and misled doctors into unknowingly implanting gene-edited embryos into two women." Zhang and Qin were officially banned from working in assisted reproductive technology for life. In April 2022, He was released from prison.
On 26 November 2018, The CRISPR Journal published ahead of print an article by He, Ryan Ferrell, Chen Yuanlin, Qin Jinzhou, and Chen Yangran in which the authors justified the ethical use of CRISPR gene editing in humans. As the news of the CRISPR babies broke, the editors reexamined the paper and retracted it on 28 December, announcing:
[It] has since been widely reported that Dr. He conducted clinical studies involving germline editing of human embryos, resulting in several pregnancies and two alleged live births. This was most likely in violation of accepted bioethical international norms and local regulations. This work was directly relevant to the opinions laid out in the Perspective; the authors' failure to disclose this clinical work manifestly impacted editorial consideration of the manuscript.
Michael W. Deem, an American bioengineering professor at Rice University and He's doctoral advisor, was involved in the research and was present when people involved in the study gave consent. Deem was the only non-Chinese author among the 10 listed in the manuscript submitted to Nature. He came under investigation by Rice University after news of the work was made public; as of 2022, the university had never issued any information on his conduct. Deem resigned from the university in 2020 and went into business, creating a bioengineering and energy consulting company called Certus LLC.
Stanford University also investigated its faculty of He's confidants including William Hurlbut, Matthew Porteus, and Stephen Quake, his main mentor in gene editing. The university's review committee concluded that the accused "were not participants in [He Jiankui’s] research regarding genome editing of human embryos for intended implantation and birth and that they had no research, financial or organizational ties to this research."
In response to He's work, the World Health Organization formed a committee in December 2018 comprising "a global, multi-disciplinary expert panel", the Expert Advisory Committee on Developing Global Standards for Governance and Oversight of Human Genome Editing, "to examine the scientific, ethical, social and legal challenges associated with human genome editing (both somatic and germline)". In 2019, it issued a call to halt all work on human genome editing and launched a global registry to track research in the field. Since 2019 it has issued three reports on recommended guidelines for human genome editing. As of 2021, the committee maintained that while somatic gene therapies have become useful for several diseases, germline and heritable human genome editing still carries risks and should be banned.
In May 2019, the Chinese government prepared gene-editing regulations stressing that anyone found manipulating the human genome by genome-editing techniques would be held responsible for any related adverse consequences. The Civil Code of the People's Republic of China was amended in 2020, adding Article 1009, which states: "any medical research activity associated with human gene and human embryo must comply with the relevant laws, administrative regulations and national regulation, must not harm individuals and violate ethical morality and public interest." It was enacted on 1 January 2021. A 2020 draft of the 11th Amendment to the Criminal Law of the People's Republic of China incorporated three types of crime: the illegal practice of human gene editing, human embryo cloning, and severely endangering the security of human genetic resources, with penalties of imprisonment of up to 7 years and a fine.
As of December 2021, Vivien Marx reported in the Nature Biotechnology article that both children were healthy.
Ethical controversies
Ethics of genome manipulation
Genome manipulations can be done at two levels: somatic (the non-reproductive cells of the body) and germline (sex cells and embryos used for reproduction). The development of CRISPR gene editing enabled both somatic and germline editing (such as in assisted reproductive technology). There is no prohibition on somatic gene editing since the practice is generally covered by the available regulations. Prior to He's affair, there was already concern that it was possible to make genetically modified babies, that such experiments would raise ethical issues because their safety and success had not been established by any study, and that genetic enhancement of individuals would be possible. Pioneer gene-editing scientists had warned in 2015 that "genome editing in human embryos using current technologies could have unpredictable effects on future generations. This makes it dangerous and ethically unacceptable. Such research could be exploited for non-therapeutic modifications." As Janet Rossant of the University of Toronto noted in 2018: "It has also raised ethical concerns, particularly with regard to the possibility of generating heritable changes in the human genome – so-called germline gene editing." In 2017, the National Academies of Sciences, Engineering, and Medicine published a report, "Human Genome Editing: Science, Ethics and Governance", that endorsed germline gene editing in "the absence of reasonable alternatives" for disease management and to "improve IVF procedures and embryo implantation rates and reduce rates of miscarriage." However, the Declaration of Helsinki had stated that early embryo genome-editing for fertility purposes is unethical.
The American Society of Human Genetics had declared in 2017 that the basic research on in vitro human genome editing on embryos and gametes should be promoted but that "At this time, given the nature and number of unanswered scientific, ethical, and policy questions, it is inappropriate to perform germline gene editing that culminates in human pregnancy." In July 2018, the Nuffield Council on Bioethics published a policy document titled Genome Editing and Human Reproduction: Social and Ethical Issues in which it advocated human germline editing saying that it "is not 'morally unacceptable in itself' and could be ethically permissible in certain circumstances" when there are sufficient safety measures. The moral justification created critical debates. The United States National Institutes of Health Somatic Cell Genome Editing Consortium held that it "strictly focused on somatic editing; germline editing is not only excluded as a goal but is also considered to be an unacceptable outcome that should be carefully prevented."
The Chinese law Measures on Administration of Assisted Human Reproduction Technology (2001) prohibits any genetic manipulation of human embryos for reproductive purposes and allows assisted reproductive technology to be performed only by authorized personnel. On 7 March 2017, He Jiankui applied for ethics approval from Shenzhen HarMoniCare Women and Children's Hospital. In the application, He claimed that the genetically edited babies would be immune to HIV infection, in addition to smallpox and cholera, commenting: "This is going to be a great science and medicine achievement ever since the IVF technology which was awarded the Nobel Prize in 2010, and will also bring hope to numerous genetic disease patients." It was approved and signed by Lin Zhitong, the hospital administrator and one-time Director of Direct Genomics, a company established by He. Upon an inquiry, the hospital denied such approval. The hospital's spokesperson declared that there were no records of such ethical approval, saying, "[The] gene editing process did not take place at our hospital. The babies were not born here either." It was later confirmed that the approval certificate was forged.
Sheldon Krimsky of Tufts University reported that "[He Jiankui] is not a medical doctor, but rather received his doctorate in biophysics and did postdoctoral studies in gene sequencing; he lacks training in bioethics." However, He was aware of the ethical issues. On 5 November 2018, He and his collaborators submitted a manuscript on ethical guidelines for reproductive genome editing titled "Draft Ethical Principles for Therapeutic Assisted Reproductive Technologies" to The CRISPR Journal. It was published on 26 November, soon after news of the human experiment broke out. The journal made an inquiry concerning conflicts of interests, which was not disclosed by He. With no justification from He, the journal retracted the paper with a comment that it "was most likely in violation of accepted bioethical international norms and local regulations."
Although there were no specific laws in China on gene editing in humans, He Jiankui violated the available guideline on handling human embryos. According to the Guidelines for Ethical Principles in Human Embryonic Stem Cell Research (2003) of the Ministry of Science and Technology and the National Health Commission of China:
Research in human embryonic stem cells shall be in compliance with the following behavioral norms:
Where blastula is obtained from external fertilization, somatic nucleus transplantation, unisexual duplicating technique or genetic modification, the culture period in vitro shall not exceed 14 days from the day of fecundation or nuclear transplantation.
He Jiankui also attended an important meeting on "The ethics and societal aspects of gene editing" in January 2017 organized by Jennifer Doudna and William Hurlbut of Stanford University. Upon invitation from Doudna, He presented a topic on "Safety of Human Gene Embryo Editing" and later recalled that "There were very many thorny questions, triggering heated debates, and the smell of gunpowder was in the air."
The experiment's consent form, titled "Informed Consent", also contains dubious statements. The aim of the study was presented as "an AIDS vaccine development project", even though the study was not about vaccines. The form contained technical jargon that would be incomprehensible to a layperson. One of the more peculiar statements is that if the participants decided to abort the experiment "in the first cycle of IVF until 28 days post-birth of the baby", they would have to "pay back all the costs that the project team has paid for you. If the payment is not received within 10 calendar days from the issuance of the notification of violation by the project team, another 100,000 RMB (about US$) of fine will be charged." This violates the voluntary nature of participation.
Medical ethics
CRISPR gene editing in humans has the potential for profound social impact, such as the long-term prevention of disease. However, He's human experiments raised ethical concerns because their effects on future generations are unknown. Ethical concerns have been raised relative to the four ethical criteria of autonomy, justice, beneficence, and non-maleficence, first postulated by Tom Beauchamp and James Childress in Principles of Biomedical Ethics.
The ethical principle of autonomy requires that individuals have the ability and comprehensive information to make their own decisions based on their values and beliefs. He violated this by failing to inform patients of potential risks, including off-target mutations that might be a threat to the twins' lives.
Since He had forged the approval certificate attributed to the hospital administrator (a one-time director of Direct Genomics), the procedure was likely "unlawful", which is against the principle of non-maleficence. Off-target mutations can also occur at unintended sites, causing cell death or cell transformation. Sonia Ouagrham-Gormley, an associate professor in the Schar School of Policy and Government at George Mason University, and Kathleen Vogel, a professor in the School for the Future of Innovation in Society at Arizona State University, stated that the procedure was "unnecessary" and "risks the safety of the patients". They criticized He's conduct by pointing out that the prevention of HIV transmission from parents to newborn babies can be safely achieved with existing standard methods, such as sperm washing and caesarean section delivery.
The principle of justice holds that individuals should have the right to receive the same standard of care from medical providers regardless of their social and economic background. Beneficence requires healthcare providers to maximize benefits and put the patients' interests first. He's intervention in the twins' genes cannot be justified, and the risk-benefit balance is unacceptable. He paid the couple $40,000 to ensure that they would keep his operation confidential. This action can be viewed as an inducement, and it violates China's regulations prohibiting genetic manipulation of human gametes, zygotes, and embryos for reproductive purposes, barring HIV carriers from assisted reproductive technologies, and permitting the manipulation of a human embryo for research only within 14 days.
Thus, while genome editing in humans has potential as an effective and cost-efficient method for manipulating genes within living cells, it requires more research and transparent procedures to be ethically justified.
Scientific issues
Effects of mutations
It is an established fact that C-C chemokine receptor type 5 (CCR5) is a protein essential for HIV infection of white blood cells, where it acts as a co-receptor for the virus. A mutation in the gene CCR5 (called CCR5Δ32 because it is specifically a deletion of 32 base pairs on human chromosome 3) confers resistance to HIV. Resistance is strong when the mutation is present in both copies of the gene (homozygous); with only one mutant copy (heterozygous), the protection is weak and slow to act. Not all homozygous individuals are completely resistant. In natural populations, CCR5Δ32 homozygotes are rarer than heterozygotes. In 2007, Timothy Ray Brown (dubbed the Berlin patient) became the first person to be completely cured of HIV infection following a stem cell transplant from a CCR5Δ32 homozygous donor.
He Jiankui overlooked these facts. Two days after Lulu and Nana were born, their DNA was collected from blood samples of their umbilical cords and placentas. Whole genome sequencing confirmed the mutations. However, available sources indicate that Lulu and Nana carry incomplete CCR5 mutations. Lulu carries a mutant CCR5 with a 15-bp in-frame deletion in only one copy of chromosome 3 (a heterozygous allele), while the other chromosome 3 is normal; Nana carries a homozygous mutant gene with a 4-bp deletion and a single-base insertion. He therefore failed to achieve the complete 32-bp deletion. Moreover, Lulu has only a heterozygous modification, which is not known to prevent HIV infection. Because the babies' mutations are different from the typical CCR5Δ32 mutation, it is not clear whether or not they are prone to infection. There are also concerns about an adverse effect called off-target mutation in CRISPR/Cas9 editing, and about mosaicism, a condition in which many genetically different cells develop in the same embryo. Off-target mutations may cause health hazards, while mosaicism may leave HIV-susceptible cells. Fyodor Urnov, a director at the Altius Institute for Biomedical Sciences in Washington, asserted that "This [off-target mutation] is a key problem for the entirety of the embryo-editing field, one that the authors sweep under the rug here," and continued, "They [He's team] should have worked and worked and worked until they reduced mosaicism to as close to zero as possible. This failed completely. They forged ahead anyway."
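One way to see why the natural 32-bp deletion disables the CCR5 protein while Lulu's reported 15-bp deletion leaves the reading frame intact is simple modular arithmetic: an insertion or deletion shifts the downstream codon frame only when its length is not a multiple of three. A minimal illustrative sketch in Python (the deletion lengths come from the text above; the function name is purely illustrative):

```python
# Codons are read three bases at a time, so an indel whose length is not a
# multiple of 3 shifts every downstream codon (a frameshift).
def is_frameshift(indel_length_bp: int) -> bool:
    """Return True if an indel of this length shifts the reading frame."""
    return indel_length_bp % 3 != 0

for label, length in [("CCR5-delta32 (natural mutation)", 32),
                      ("Lulu's reported deletion", 15)]:
    kind = "frameshift" if is_frameshift(length) else "in-frame"
    print(f"{label}: {length}-bp deletion -> {kind}")
# 32 % 3 == 2 -> frameshift (truncated, nonfunctional protein)
# 15 % 3 == 0 -> in-frame  (reading frame preserved; effect uncertain)
```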
He's data on Lulu and Nana's mutation alignments (in Sanger chromatograms) showed three modifications, where two would be expected. In Lulu in particular, the mutation is much more complex than He reported. There were three different combinations of alleles: two normal copies of CCR5; one normal copy and one with a 15-bp deletion; and one normal copy and an unknown large insertion. But George Church of Harvard University, in an interview with Science, explained that off-target mutations may not be dangerous, and that there is no need to reduce mosaicism excessively, saying, "There's no evidence of off-target causing problems in animals or cells. We have pigs that have dozens of CRISPR mutations and a mouse strain that has 40 CRISPR sites going off constantly and there are off-target effects in these animals, but we have no evidence of negative consequences." As to mosaicism, he said, "It may never be zero. We don't wait for radiation to be zero before we do positron emission tomography scans or x-rays."
In February 2019, scientists reported that Lulu and Nana may have inadvertently (or perhaps, intentionally) had their brains altered, since CCR5 is linked to improved memory function in mice, as well as enhanced recovery from strokes in humans. Although He Jiankui stated during the Second International Summit on Human Genome Editing, that he was against using genome editing for enhancement, he also acknowledged that he was aware of the studies linking CCR5 to enhanced memory function.
In June 2019, researchers incorrectly suggested that the purportedly genetically edited humans may have been mutated in a way that shortens life expectancy. Rasmus Nielsen and Wei Xinzhu, both at the University of California, Berkeley, reported in Nature Medicine of their analysis of the longevity of 409,693 individuals from British death registry (UK BioBank) with the conclusion that two copies of CCR5Δ32 mutations (homozygotes) were about 20% more likely than the rest of the population to die before they were 76 years of age. The research finding was widely publicized in the popular and scientific media. However, the article overlooked sampling bias in UK Biobank's data, resulting in an erroneous interpretation, and was retracted four months later.
Rejections from peer-reviewed journals
Scientific works are normally published in peer-reviewed journals, but He failed to do so regarding the birth of gene-edited babies. This was one of the grounds on which He was criticized. It was later reported that He did submit two manuscripts to Nature and the Journal of the American Medical Association, which were both rejected, mainly on ethical issues. He's first manuscript, titled "Birth of Twins After Genome Editing for HIV Resistance", was submitted to Nature on 19 November. He shared copies of the manuscript with the Associated Press, which he also allowed to document his work. In an interview, Hurlbut opined that the condemnation of He's work would have been less harsh if the study had been published, and said, "If it had been published, the publishing process itself would have brought a level of credibility because of the normal scrutiny involved; the data analysis would have been vetted."
The scientific manuscripts of He were revealed when an anonymous source sent them to the MIT Technology Review, which reported them on 3 December 2019.
Related research
The first successful gene-editing of CCR5 in humans was in 2014. A team of researchers at the University of Pennsylvania, Philadelphia, Albert Einstein College of Medicine, New York, and Sangamo BioSciences, California, reported that they modified CCR5 in blood cells (CD4 T cells) using a zinc-finger nuclease and infused the edited cells into 12 individuals with HIV. After the complete treatment, the patients showed decreased viral loads, and in one, HIV disappeared. The result was published in The New England Journal of Medicine.
Chinese scientists have successfully used CRISPR editing to create mutant mice and rats since 2013. The next year they reported a successful experiment in monkeys involving the removal of two key genes (PPAR-γ and RAG1) that play roles in cell growth and cancer development. One of the leading researchers, Yuyu Niu, later collaborated with He Jiankui in 2017 to test CRISPR editing of CCR5 in monkeys, but the outcome was not fully assessed or published. Niu later commented that they "had no idea he was going to do this in a human being." In 2018, his team reported inducing a mutation to produce muscular dystrophy in monkeys using CRISPR editing, and an independent Chinese team simultaneously reported inducing growth retardation the same way. In January 2019, scientists at the Chinese Academy of Sciences reported the creation of five identical cloned gene-edited monkeys, using the same cloning technique that had been used to create the first cloned primates, Zhong Zhong and Hua Hua (announced in 2018), and earlier Dolly the sheep. The mutant monkeys and clones were made to study several medical diseases, not for disease resistance.
The first clinical trial of CRISPR-Cas9 for the treatment of genetic blood disorders started in August 2018. The study was jointly conducted by CRISPR Therapeutics, a Swiss-based company, and Vertex Pharmaceuticals, headquartered in Boston. The first results, announced on 19 November 2019, stated that the first two patients, one with β-thalassemia and the other with sickle cell disease, were treated successfully. Under the same project, a parallel study of 6 individuals with sickle cell disease was also conducted at Harvard Medical School, Boston. In both studies, BCL11A, a gene involved in blood cell formation, was modified in bone marrow extracted from the individuals. Both studies were published simultaneously in The New England Journal of Medicine on 21 January 2021, in two papers. The individuals no longer experienced the symptoms or needed the blood transfusions normally required in such diseases, but the method is arduous and poses a high risk of infection in the bone marrow, on which David Rees at King's College Hospital commented, "Scientifically, these studies are quite exciting. But it's hard to see this being a mainstream treatment in the long term."
In June 2019, Denis Rebrikov at the Kulakov National Medical Research Center for Obstetrics, Gynecology and Perinatology in Moscow announced through Nature that he was planning to repeat He's experiment once he got official approval from the Russian Ministry of Health and other authorities. Rebrikov asserted that he would use a safer and better method than He's, saying, "I think I'm crazy enough to do it." In a subsequent report on 17 October, Rebrikov said that he had been approached by a deaf couple for help. He had already started in vitro experiments to repair a gene that causes deafness, GJB2, using CRISPR.
In 2019, the Abramson Cancer Center of the University of Pennsylvania in the US announced the use of CRISPR technology to edit genes in human immune cells for cancer therapy, and reported the results of the phase I clinical trial in 2020. The study started in 2018 with an official registration in the US clinical trials registry. The report in the journal Science indicates that three individuals in their 60s with advanced refractory cancer, two with a blood cancer (multiple myeloma) and one with a tissue cancer (sarcoma), were treated with their own T cells after CRISPR editing. The experiment was based on engineered T-cell therapy: T cells obtained from the individuals had three genes removed to improve their cancer-fighting ability, and received a receptor gene targeting the antigen NY-ESO-1 (the product of the gene CTAG1B). When the edited cells were introduced back into the individuals, they attacked cancer cells bearing the antigen. Although the results were acclaimed as the first "success of gene editing and cell function" in cancer research and "an important milestone in the development and clinical application of gene-edited effector cell therapy," they were far from curing the diseases. One participant died after the clinical trial, and the other two had recurrent cancer.
A similar clinical trial was reported in 2020 in Nature Medicine by a team of Chinese scientists at Sichuan University and their collaborators. Here they removed only one gene (PDCD1, which produces the protein PD-1) from the T cells of 12 individuals with late-stage lung cancer. The treatment was found to be safe and effective. However, the edited T cells were not fully efficient and disappeared in most individuals, indicating that the treatment was not completely successful.
See also
Designer baby
Human Nature (2019 CRISPR film documentary)
Hwang affair
Unnatural Selection (2019 TV documentary)
Bioethics
References
External links
He's presentation and subsequent panel discussion, at the Second International Summit on Human Genome Editing. 27 November 2018, via Bloomberg Asia's Facebook page
2018 controversies
2018 in biology
2018 in China
Biology controversies
Genome editing
History of HIV/AIDS
Identical twins
He, Jiankui
Medical scandals in China
Science and technology in China
He, Jiankui | He Jiankui genome editing incident | Chemistry,Engineering,Biology | 7,668 |
1,720,307 | https://en.wikipedia.org/wiki/Monopotassium%20phosphate | Monopotassium phosphate (MKP) (also, potassium dihydrogen phosphate, KDP, or monobasic potassium phosphate) is the inorganic compound with the formula KH2PO4. Together with dipotassium phosphate (K2HPO4.(H2O)x) it is often used as a fertilizer, food additive, and buffering agent. The salt often cocrystallizes with the dipotassium salt as well as with phosphoric acid.
Single crystals are paraelectric at room temperature. At temperatures below , they become ferroelectric.
Structure
Monopotassium phosphate can exist in several polymorphs. At room temperature it forms paraelectric crystals with tetragonal symmetry. Upon cooling to it transforms to a ferroelectric phase of orthorhombic symmetry, and the transition temperature shifts up to when hydrogen is replaced by deuterium. Heating to changes its structure to monoclinic. When heated further, MKP decomposes, by loss of water, to potassium metaphosphate, , at .
Manufacturing
Monopotassium phosphate is produced by the action of phosphoric acid on potassium carbonate.
Applications
Fertilizer-grade MKP powder contains the equivalent of 52% P2O5 and 34% K2O, and is labeled NPK 0-52-34. MKP powder is often used as a nutrient source in the greenhouse trade and in hydroponics.
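The 0-52-34 grade follows from the composition of the pure salt, since fertilizer analyses conventionally express phosphorus as P2O5 and potassium as K2O equivalents; a rough check in Python, using standard atomic masses, is:

```python
# Express pure KH2PO4 as oxide equivalents, the convention behind NPK grades.
K, H, P, O = 39.098, 1.008, 30.974, 15.999   # atomic masses in g/mol

m_kh2po4 = K + 2*H + P + 4*O     # ~136.1 g/mol
m_p2o5   = 2*P + 5*O             # ~141.9 g/mol
m_k2o    = 2*K + O               # ~94.2 g/mol

# Each mole of KH2PO4 supplies half a mole of P2O5 and half a mole of K2O.
pct_p2o5 = 0.5 * m_p2o5 / m_kh2po4 * 100   # ~52.2 %
pct_k2o  = 0.5 * m_k2o  / m_kh2po4 * 100   # ~34.6 %
print(f"P2O5: {pct_p2o5:.1f}%  K2O: {pct_k2o:.1f}%")   # consistent with 0-52-34
```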
As a crystal, MKP is noted for its non-linear optical properties. It is used in optical modulators and for non-linear optics such as second-harmonic generation (SHG).
Also of note is KD*P, potassium dideuterium phosphate, which has slightly different properties. Highly deuterated KDP is used in nonlinear frequency conversion of laser light instead of protonated (regular) KDP because replacing protons with deuterons in the crystal shifts the third overtone of the strong OH molecular stretch to longer wavelengths, moving it mostly out of the range of the fundamental line of neodymium-based lasers at approximately 1064 nm. Regular KDP has an absorbance at this wavelength of approximately 4.7–6.3% per cm of thickness, while highly deuterated KDP typically absorbs less than 0.8% per cm.
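To see why that difference matters over the centimetres of crystal typically needed for frequency conversion, one can compare the residual transmission; a rough sketch in Python, treating the quoted per-centimetre losses as uniform and ignoring surface reflections (the 3 cm length is an arbitrary illustrative value):

```python
# Bulk transmission of regular vs. highly deuterated KDP at 1064 nm,
# assuming the stated fractional loss per centimetre applies uniformly.
def transmission(loss_per_cm: float, thickness_cm: float) -> float:
    return (1.0 - loss_per_cm) ** thickness_cm

thickness = 3.0  # cm, illustrative crystal length
for name, loss in [("regular KDP", 0.05), ("deuterated KD*P", 0.008)]:
    print(f"{name}: {transmission(loss, thickness):.1%} transmitted over {thickness} cm")
# regular KDP:     ~85.7% transmitted (the absorbed remainder heats the crystal)
# deuterated KD*P: ~97.6% transmitted
```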
Monopotassium phosphate is also used as an ingredient in sports drinks such as Gatorade and Powerade.
In medicine, monopotassium phosphate is used for phosphate substitution in hypophosphatemia.
Gallery
References
External links
International Chemical Safety Card 1608
EPA: Potassium dihydrogen phosphate Fact Sheet
Potassium Phosphate, a Hydroculture Salt
Second-harmonic generation
Phosphates
Potassium compounds
Acid salts
Nonlinear optical materials
Transparent materials
E-number additives
Inorganic fertilizers | Monopotassium phosphate | Physics,Chemistry | 584 |
32,001,633 | https://en.wikipedia.org/wiki/Dunton%20Technical%20Centre | The Dunton Campus (informally Ford Dunton or Dunton) is a major automotive research and development facility located in Dunton Wayletts, Laindon, England, which is owned and operated by Ford. Ford Dunton houses the main design team of Ford of Europe alongside its Merkenich Technical Centre in Cologne, Germany. With the closure of Ford's Warley site (located in Brentwood, Essex) in September 2019, the staff from the UK division of Ford Credit and Ford's UK Sales and Marketing departments have moved to the Dunton site. As of November 2019, Dunton had around 4,000 staff working at the site.
Location
Ford Dunton is situated at the junction of West Mayne (B148) and the A127 Southend Arterial Road, in Dunton Wayletts in the district of Basildon. An electricity pylon line straddles the site. In front of the building, to the north, is a vehicle test track. To the south is the Southfields Business Park. The site lies in the religious parish of Laindon with Dunton, formerly in Dunton and Bulphan before 1976. Dunton is a small hamlet to the west, with a former church near Dunton Hall. There is a Ford dealership on the B148 on the north-west corner of the site.
To promote health and well-being, walking routes and outdoor natural areas are preserved on the site. There is a picnic area and a pond surrounded by a copse of mixed deciduous trees. The pond is home to many large fish, and the protected snail species Helix pomatia can be found there.
History
Percival Perry, 1st Baron Perry, brought Ford to the UK in 1928.
Construction
Ford Dunton was constructed by George Wimpey for a contracted price of £6.5 million. The total cost of the centre was around £10 million. The centre originally had of space for design work, making it the largest engineering research centre in Europe. Another development site at Aveley had been opened in 1956 which made prototype cars and spare parts, and closed in 2004. Ford's earlier UK design site was at Dagenham and it previously had seven engineering sites around the UK, with five in Essex; these all moved to Dunton.
Ford Dunton was opened by Harold Wilson, then the British prime minister, on 12 October 1967.
1967 to 2000
At the time of its opening, Dunton was assigned responsibility within Ford of Europe for vehicle design, interior styling, chassis and body interior engineering, engine calibration and product planning. Ford's Merkenich Centre in Cologne, Germany was given principal responsibility for body and electrical engineering, base engine design, advanced engine development, exterior styling, homologation, vehicle development (ride, handling, NVH) and transmission engineering. This was a 'systems' approach to the engineering process intended to eliminate the duplication of engineering responsibility within Ford of Europe.
In the late 1960s Dunton worked on an experimental electric car, first shown on 7 June 1967, and called the Ford Comuta.
On 10 May 1971 Peter Walker opened a £1 million engine emissions laboratory at Dunton, the largest of its type in Europe. In November 1974 the world's first automated (computerised) multiple engine (six) test bed was constructed at Dunton, built in co-operation with the engineering department of Queen Mary, University of London. In 1974 a Honeywell 6050 computer was installed at Dunton at a cost of £820,000. The computer was linked to Merkenich and to the Ford test track at Lommel in Belgium. From 1978 Dunton had access to a CDC Cyber 176 computer at the USA base in Dearborn.
Special Vehicle Engineering developed the 4x4 system in the mid 1980s. SVE vehicles had Garrett turbochargers. Many of the RS models had the bodywork made at Karmann in Osnabrück, Germany; the vehicles had Pirelli tyres.
By 1984 staff at Dunton were conducting video-conferences with colleagues at Merkenich, using the ECS-1 satellite, and enabled by British Telecom International.
In the 1980s Ford spent £100m a year on British research. In 1988 the site worked with Prof Paul Shayler of the University of Nottingham mechanical engineering department
The Sierra Sapphire, a £228m development, was launched in February 1987, with Clive Ennos and Andy Jacobson at Dunton. A £10m, 53,000 sq ft R&D Electronics Technical Centre was built from 1987, opening in early 1989, to develop spark plugs, fuel pumps, and engine management systems.
In 1988 Dunton prepared the way for design of the Mondeo (codename CDW27) by pioneering, in collaboration with Merkenich, the Worldwide Engineering Release System (WERS). Dunton at this time was the most advanced automotive development centre in Europe.
New Zetec engines were developed in 1991 under Ian MacPherson, in conjunction with Yamaha.
From 1992 to 1996, 300 engineering jobs were moved to Merkenich. Engine development was largely at Dunton, not Germany. A new four-storey £22m centre was built from 1995.
In 1995 Dunton, in collaboration with the University of Southampton, developed a device which is capable of detecting different types of plastic (for recycling) using the triboelectric effect, including polypropylene, polyethylene, nylon and acrylonitrile butadiene styrene (ABS).
In August 1997 the site developed the 145 mph Mondeo ST24, with a 2.5 litre Duratec V6 engine. On 16 December 1997 Alexander Trotman, Baron Trotman opened a £128 million environmental engine testing facility at Dunton.
2000 to present
In 2003 a Silicon Graphics International (SGI) Reality Centre was constructed at Dunton, incorporating SGI Onyx 3000 visualisation supercomputers, using the InfiniteReality3 graphics rendering system.
In March 2010 Ford announced plans to develop a new generation of environmentally friendly engines and vehicle technologies at Dunton following an announcement by the UK government that it would underwrite £360 million of a £450 million loan to Ford from the European Investment Bank. In July 2010 the new coalition government confirmed that it would honour the loan commitment, and the contract was signed in a ceremony at Dunton attended by the business minister Mark Prisk on 12 July.
In recent years Dunton has been responsible for the development of the ECOnetic range of vehicles, and has contributed to development of the EcoBoost range of engines.
In 2020, during the shortage of medical ventilators caused by COVID-19, a Dunton manufacturing team took part in the VentilatorChallengeUK consortium and made a major contribution to delivering over 10,000 units to the NHS in just 12 weeks.
Visits
In 1971 the site was visited by the Secretary of State for the Environment
In April 1989 Virginia Bottomley visited and drove the new Fiesta
Prince Charles visited for a business meeting on 19 May 1997
Activities
Dunton houses the main design team of Ford of Europe, alongside its Merkenich Technical Centre in Cologne. Dunton currently has responsibility for the design of the Ford Fiesta, the Ford Ka, engines for Ford of Europe (powertrain), commercial vehicles and the interior of Ford of Europe cars. It has facilities to test fifteen cars and around one hundred engines simultaneously. Around 3,000 engineers work at Dunton.
Ford Dunton was also the home of Ford Team RS. As part of Ford's Special Vehicle Engineering section, created by Rod Mansfield, it developed the XR family of 'hot hatch' vehicles, including the Ford Fiesta RS Turbo, which more recently evolved into the RS family of vehicles. Ford also worked notably in this area with Cosworth of Northampton.
Notable staff
Eamonn Martin, winner of the 1993 London Marathon, worked at Dunton
Colin Stancombe, racing driver, worked at Ford SVE
See also
Whitley plant – previously owned by Ford, now part of Jaguar Land Rover
National Engineering Laboratory
References
External links
History of the site at BBC Essex
WikiMapia
Video clips
Environmental Test Laboratory
Solar-powered vehicle built by Cambridge University Eco Racing
News items
Electric cars in December 2009
40th birthday in October 2007
Prince of Wales visits in July 2007
1967 establishments in England
Automotive engineering
Automotive industry in the United Kingdom
Borough of Basildon
Buildings and structures completed in 1967
Buildings and structures in the Borough of Basildon
Engine technology
Engineering research institutes
Ford Motor Company facilities
Ford of Europe
Ford vehicle design
Research and development in the United Kingdom
Road test tracks
Science and technology in Essex | Dunton Technical Centre | Technology,Engineering | 1,741 |
8,193,519 | https://en.wikipedia.org/wiki/Village%20Homes | Village Homes is a planned community in Davis, Yolo County, California. It is designed to be ecologically sustainable by harnessing the energies and natural resources that exist in the landscape, especially stormwater and solar energy.
History
The principal designer of Village Homes was architect Mike Corbett, who began planning in the 1960s; construction proceeded from south to north from the 1970s through the 1980s. Village Homes was completed in 1982 and has attracted international attention from its inception as an early model of an environmentally friendly housing development, including a visit from then-French President François Mitterrand.
Sustainability
The 225 homes and 20 apartment units that now make up the Village Homes community use solar panels for heating, and they are oriented around common areas at the rear of the buildings rather than around the street at the front. All streets run east–west, with all lots positioned north–south. This layout, which enables homes with passive solar designs to make full use of the sun's energy throughout the year, has since become standard practice in Davis and elsewhere. The development also uses natural drainage channels, called bioswales, to collect stormwater that irrigates the common areas and supports the cultivation of edible plants, such as nut and fruit trees and vegetables for consumption by residents, without incurring the cost of using treated municipal water.
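As a rough illustration of why this street and lot orientation helps passive solar designs, the short Python sketch below estimates the direct-beam power falling on a vertical, south-facing window at solar noon on the two solstices, using Davis's approximate latitude. The latitude, irradiance figure and clear-sky assumption are illustrative assumptions for this sketch, not values taken from the Village Homes design.

import math

LATITUDE_DEG = 38.5        # approximate latitude of Davis, CA (assumption)
DIRECT_BEAM_W_M2 = 900.0   # illustrative clear-sky direct-normal irradiance

def noon_altitude(latitude_deg, declination_deg):
    # Solar altitude at solar noon (degrees): 90 - latitude + declination.
    return 90.0 - latitude_deg + declination_deg

def south_window_irradiance(altitude_deg):
    # At solar noon the sun is due south, so the angle between the sun and the
    # normal of a vertical south-facing window equals the solar altitude, and
    # the received direct-beam irradiance scales with cos(altitude).
    return DIRECT_BEAM_W_M2 * math.cos(math.radians(altitude_deg))

for season, declination in (("winter solstice", -23.44), ("summer solstice", 23.44)):
    altitude = noon_altitude(LATITUDE_DEG, declination)
    power = south_window_irradiance(altitude)
    print(f"{season}: noon altitude {altitude:.1f} deg, ~{power:.0f} W/m^2 on a south window")

Under these simplified assumptions the low winter sun delivers roughly three times more direct-beam energy to south-facing glazing than the high summer sun (about 790 versus 230 W/m^2), which is the effect the east–west streets and north–south lots are laid out to exploit; a fuller analysis would also account for diffuse light, shading and roof overhangs.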
References
External links
http://www.villagehomesdavis.org/
Village Homes information on the Davis Wiki.
http://www.context.org/ICLIB/IC35/Browning.htm
https://web.archive.org/web/20070609124520/http://arch.ced.berkeley.edu/vitalsigns/workup/siegel_house/vh_bkgd.html
https://web.archive.org/web/20131110035906/http://www.earthfuture.com/community/villagehomes.asp
Documentary videos about Village Homes
Global Gardener: Urban Permaculture feat. Bill Mollison (1991)
GeoffLawton.com: Food Forest Suburb (2015)
Davis, California
Sustainable communities
Stormwater management
Renewable energy
Permaculture | Village Homes | Chemistry,Environmental_science | 451 |
21,039,403 | https://en.wikipedia.org/wiki/CPK-MB%20test | The CPK-MB test (creatine phosphokinase-MB), also known as CK-MB test, is a cardiac marker used to assist diagnoses of an acute myocardial infarction, myocardial ischemia, or myocarditis. It measures the blood level of CK-MB (creatine kinase myocardial band), the bound combination of two variants (isoenzymes CKM and CKB) of the enzyme phosphocreatine kinase.
In some locations, the test has been superseded by the troponin test. However, the test has recently been improved by measuring the ratio of the CK-MB1 and CK-MB2 isoforms.
The newer test detects different isoforms of the myocardium-specific CK-MB (which differ in their M subunit), whereas the older test detected only the presence of cardiac-related isoenzyme dimers.
Many cases of CK-MB levels exceeding total CK have been reported, especially in newborns with cardiac malformations, particularly ventricular septal defects. Such a reversed ratio can also point towards pulmonary emboli or vasculitis. An autoimmune reaction creating a complex of CK and IgG (macro-CK) should also be taken into consideration.
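For illustration only, the following Python sketch computes the two derived quantities discussed above: the CK-MB relative index (CK-MB as a percentage of total CK) and the CK-MB2/CK-MB1 isoform ratio. The function names and example values are invented for this sketch, and the quoted thresholds are common rules of thumb rather than validated clinical cut-offs, which vary by assay and laboratory.

def relative_index(ck_mb, total_ck):
    # CK-MB activity as a percentage of total CK activity (both in U/L).
    return 100.0 * ck_mb / total_ck

def isoform_ratio(ck_mb2, ck_mb1):
    # Ratio of the tissue isoform CK-MB2 to the plasma-modified CK-MB1.
    return ck_mb2 / ck_mb1

# Hypothetical patient sample (U/L).
total_ck, ck_mb = 400.0, 24.0
ck_mb1, ck_mb2 = 1.0, 2.0

ri = relative_index(ck_mb, total_ck)   # 6.0 %
ratio = isoform_ratio(ck_mb2, ck_mb1)  # 2.0

# Illustrative rules of thumb (not laboratory-validated cut-offs):
# - a relative index above roughly 3-5 % suggests a myocardial rather than a
#   skeletal-muscle source of the CK elevation;
# - an MB2/MB1 ratio above about 1.5 has been used to flag early infarction;
# - a relative index above 100 % (CK-MB exceeding total CK) is physiologically
#   implausible and points to interference such as a CK-IgG (macro-CK) complex.
print(f"relative index = {ri:.1f} %, MB2/MB1 ratio = {ratio:.1f}")

In practice, serial measurements and the clinical picture matter more than any single calculated value.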
See also
Troponin
References
Blood tests
Cardiology | CPK-MB test | Chemistry | 281 |