| id (int64) | url (string) | text (string) | source (string) | categories (string) | token_count (int64) |
|---|---|---|---|---|---|
68,062,718 | https://en.wikipedia.org/wiki/Acyl%20cyanide | In organic chemistry, an acyl cyanide is a functional group with the formula and structure RC(O)CN. It consists of an acyl group (RC(O)–) attached to cyanide (–CN). Examples include acetyl cyanide, formyl cyanide, and oxalyl dicyanide. Acyl cyanides are reagents in organic synthesis.
Synthesis
Classically, acyl cyanides are produced by the salt metathesis reaction of acyl chlorides with sodium cyanide:
RC(O)Cl + NaCN → RC(O)CN + NaCl
Alternatively, they can be produced by dehydration of acyl aldoximes:
RC(O)CH=NOH → RC(O)CN + H2O
Acetyl cyanide is also prepared by hydrocyanation of ketene:
CH2=C=O + HCN → CH3C(O)CN
Reactions
They are mild acylating agents. With aqueous base, acyl cyanides break down to cyanide and the carboxylate:
RC(O)CN + 2 NaOH → RC(O)ONa + NaCN + H2O
With azides, acyl cyanides undergo the click reaction to give acyl tetrazoles.
References
Functional groups
Organic compounds | Acyl cyanide | Chemistry | 200 |
27,658,553 | https://en.wikipedia.org/wiki/Automated%20dispensing%20cabinet | An automated dispensing cabinet (ADC), also called a unit-based cabinet (UBC), automated dispensing device (ADD), or automated dispensing machine (ADM), is a computerized medicine cabinet for hospitals and healthcare settings. ADCs allow medications to be stored and dispensed near the point of care while controlling and tracking drug distribution.
Overview
Hospital pharmacies have traditionally provided medications for patients by filling patient-specific cassettes of unit-dose medications that were then delivered to the nursing unit and stored in medication cabinets or carts. ADCs, originally designed for hospital use, were introduced in hospitals in the 1980s and have facilitated the transition to alternative delivery models and more decentralized medication distribution systems.[2] Implementing automated dispensing cabinets as part of a decentralized or hybrid medication distribution system can improve patient safety and inventory accountability, and streamline certain billing processes.
However, in the 2000s, the technology began to be deployed into other care settings where medication doses were stored onsite, and higher security methods were needed to control inventory, access, and dispensing of each patient dose. Settings that now deploy ADCs include long-term care facilities, hospice, critical access hospitals, surgery centers, group homes, residential care facilities, rehab and psych environments, animal health, dental clinics, and nursing education simulation. These diverse care settings share a common need to safely store, account for, and dispense individual doses of medications, especially narcotics and high-value medications, at the point of care.[3]
ADCs track user access and dispensed medications, and their use can improve control over medication inventory. The real-time inventory reports generated by many cabinets can simplify the filling process and help the pharmacy track expired drugs. Furthermore, by restricting individual drugs – such as high-risk medications and controlled substances – to unique drawers within the cabinet, overall inventory management, patient safety, and medication security can be improved. Automated dispensing cabinets allow the pharmacy department to profile physician orders before they are dispensed.[4]
ADCs can also enable providers to record medication charges upon dispensing, reducing the billing paperwork the pharmacy is responsible for. In addition, nurses can note returned medications using the cabinets' computers, enabling direct credits to patients' accounts. Since automated cabinets can be located on the nursing unit floor, nurses have speedier access to a patient's medications, and the shorter waiting time improves patient comfort and care.[5]
Role of automated dispensing in healthcare
Automated dispensing is a pharmacy practice in which a device dispenses medications and fills prescriptions. ADCs, which can handle many different medications, are available from a number of manufacturers such as BD, ARxIUM, and Omnicell. Though members of the pharmacy community have been utilizing automation technology since the 1980s, companies are constantly improving ADCs to meet changing needs and health standards in the industry.
Several goals can be met by implementing an automated product in a healthcare facility. Patient safety can be improved with the use of ADC technology such as barcoding. Anesthesia ADCs in operating rooms and perioperative areas may include label printing to prevent mix-ups such as errors between morphine and hydromorphone, two different opioid analgesics that are frequently confused. These systems also communicate with the pharmacy and its information management system to track medications removed and support inventory replenishment.
Key features
ADCs function much like automated teller machines; specific technologies such as barcode scanning and clinical decision support can improve medication safety. Some have metal locking drawers for added security, and some offer automated single-dose dispensing to avoid the need for a blind count each time a controlled substance is accessed. Over the years, ADCs have been adapted to facilitate compliance with emerging regulatory requirements, such as pharmacy review of medication orders, and with safe practice recommendations.
ADCs incorporate advanced software and electronic interfaces to synthesize high-risk steps in the medication use process. These unit-based medication repositories provide computer-controlled storage, dispensation, tracking, and documentation of medication distribution in the resident care unit. Since automated dispensing cabinets are not located in the pharmacy, they are considered "decentralized" medication distribution systems. Instead, they can be found at the point of care on the resident care unit. Tracking of the stocking and distribution process can occur by interfacing the unit with a central pharmacy computer. These cabinets can also be interfaced with other external databases such as resident profiles, the facility's admission/discharge/transfer system, and billing systems.
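To make this interfacing concrete, the sketch below models the kind of dispense-event record an ADC might transmit to a central pharmacy system. It is illustrative only: the `DispenseEvent` fields and the `record_dispense` function are assumptions for this sketch, not any vendor's actual schema or API; real deployments typically use HL7 or vendor-specific interfaces.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DispenseEvent:
    """One medication removal from an ADC drawer (hypothetical schema)."""
    cabinet_id: str   # which unit-based cabinet
    user_id: str      # credentialed nurse or clinician who opened the drawer
    patient_id: str   # resident/patient profile the dose is charged against
    ndc_code: str     # National Drug Code of the medication
    quantity: int     # number of doses removed
    timestamp: str    # ISO-8601 time of the event

def record_dispense(cabinet_id, user_id, patient_id, ndc_code, quantity):
    """Build a dispense event and serialize it for the pharmacy interface."""
    event = DispenseEvent(
        cabinet_id=cabinet_id,
        user_id=user_id,
        patient_id=patient_id,
        ndc_code=ndc_code,
        quantity=quantity,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(event))  # payload sent to the central pharmacy system

print(record_dispense("ADC-3W", "rn-0421", "pt-77812", "00093-0058-01", 1))
```

Logging each removal as a discrete, timestamped, user- and patient-attributed event is what makes possible the real-time inventory reports, expired-drug tracking, and per-patient billing credits described above.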
Most ADC providers offer scalable systems, since several important factors – such as budget, physical room size, patient population and demographics, and type of healthcare facility – vary widely from one facility to another.
See also
Pharmacy automation
Remote dispensing
Pyxis Corporation
Omnicell
References
Pharmacies
Automation | Automated dispensing cabinet | Engineering | 1,020 |
13,721,608 | https://en.wikipedia.org/wiki/Canary%20Girls | The Canary Girls were British women who worked in munitions manufacturing trinitrotoluene (TNT) shells during the First World War (1914–1918). The nickname arose because exposure to TNT is toxic, and repeated exposure can turn the skin an orange-yellow colour reminiscent of the plumage of a canary.
Historical context
Since most working-age men were joining the military to fight in the war, women were required to take on the factory jobs that were traditionally held by men. By the end of the war, there were almost three million women working in factories, around a third of whom were employed in the manufacture of munitions. Working conditions were often extremely hazardous, and the women worked long hours for low pay. Munitions work involved mixing explosives and filling shells and bullets.
Munitionettes manufactured cordite and TNT, and those working with TNT were at risk of becoming "Canary Girls." They were exposed to toxic chemicals that caused their skin and hair to turn yellow, hence the nickname. As well as the yellow skin discolouration, those who worked in the munitions factories also reported headaches, nausea and skin irritations such as hives. As a result, factories were forced to improve ventilation and provide the workers with masks.
Effects of working with TNT
Shells were filled with a mixture of TNT (the explosive) and cordite (the propellant), and even though these ingredients were known to be hazardous to health, they were mixed by hand and so came into direct contact with the workers' skin. The chemicals in the TNT reacted with melanin in the skin to cause a yellow pigmentation, staining the skin of the munitions workers. Although unpleasant, this was not dangerous, and the discolouration eventually faded with no long-term health effects.
A more serious consequence of working with TNT powder was liver toxicity, which led to anaemia and jaundice. This condition, known as "toxic jaundice", gave the skin a different type of yellow hue. Four hundred cases of toxic jaundice were recorded among munitions workers in the First World War, of which one hundred proved fatal.
A medical investigation was carried out by the government in 1916, to closely study the effects of TNT on the munitions workers. The investigators were able to gather their data by acting as female medical officers posted inside the factories. They found that the effects of the TNT could be roughly split into two areas: irritative symptoms, mainly affecting the skin, respiratory tract, and digestive system; and toxic symptoms, including nausea, jaundice, constipation, dizziness, etc.
It is possible that the irritative symptoms were also partly caused by the cordite in the shell mixture, although this was not established until years later.
Canary Babies
It was not only the UK's female munitions workers who were affected by the TNT, but also the babies born to them. Hundreds of "Canary Babies" were born with a slightly yellow skin colour because of their mothers' exposure to dangerous chemicals in the munitions factories during the First World War. Nothing could be done for the babies at the time, but the discolouration eventually faded.
See also
UK World War I National Filling Factories – National filling factories owned by the Ministry of Munitions during First World War
Radium Girls – US female factory workers who contracted radiation poisoning in early 20th century
Rosie the Riveter – US equivalent term for female munitions workers during WWII
Xanthoproteic reaction – chemical process responsible for yellow colouration when handling TNT
References
Further reading
Hall, Edith. Canary Girls and Stockpots. Workers' Educational Association (Luton branch), November 1977.
External links
A day in the life of a munitions worker, Imperial War Museum, 15 January 2018
Nine women reveal the dangers of working in a munitions factory, Imperial War Museum, 31 January 2018
Teaching Chemistry Using The Girls with Yellow Hands, Edgewood College, 2007
The Canary Girls and the WWI Poisons that turned them Yellow by Messy Nessy Chic
30 incredible photos of the Canary Girls on the Vintage Everyday website
British women in World War I
Industrial occupations
Trinitrotoluene | Canary Girls | Chemistry | 838 |
76,249 | https://en.wikipedia.org/wiki/Chind%C5%8Dgu | is the practice of inventing ingenious everyday gadgets that seem to be ideal solutions to particular problems, but which may cause more problems than they solve. The term is of Japanese origin.
Background
Literally translated, chindōgu means "unusual tool". The term was coined by Kenji Kawakami, a former editor and contributor to the Japanese home-shopping magazine Mail Order Life. In the magazine, Kawakami used his spare pages to showcase several bizarre prototypes for products. He named these gadgets "chindōgu"; Kawakami himself said that a more appropriate translation than "unusual tool" is "weird tool". This special category of inventions subsequently became familiar to the Japanese people.
Dan Papia then introduced it to the English-speaking world and popularized it as a monthly feature in his magazine, Tokyo Journal, encouraging readers to send in ideas. In 1995, Kawakami and Papia collaborated on the English language book 101 Unuseless Japanese Inventions: The Art of Chindōgu. Most classic chindogu products are collected in the book. Many examples display a sense of humor in the way they are used.
Examples from the books include:
A combined household duster and cocktail-shaker, for the housewife who wants to reward herself as she is going along.
The all-day tissue dispenser, which is a toilet roll fixed on top of a hat, for hay fever sufferers.
The all-over plastic bathing suit, to enable people who suffer from aquaphobia to swim without coming into contact with water.
The baby mop, an outfit worn by babies, so that as they crawl around, the floor is cleaned.
The selfie stick. While dismissed as a "useless invention" at the time, it later gained global popularity in the 21st century.
The International Chindogu Society
Kawakami founded the International Chindogu Society to popularize Chindogu worldwide. Papia is the president of the society's U.S. chapter. People who invent a Chindogu can write about their creation on the society's website.
Ten tenets of chindōgu
The Chindōgu Society developed ten tenets of chindōgu explaining the principles (spirits) on which chindogu products should be based, inspiring designers and users to think about the deep core of design in general. The tenets require that a chindōgu
cannot be for real use,
must exist,
must have a spirit of anarchy,
is a tool for everyday life,
is not a tradeable commodity,
must not have been created for purposes of humour alone: humour is merely the by-product,
is not propaganda,
is not taboo,
cannot be patented, and
is without prejudice.
In the media
Chindōgu and Kawakami were featured regularly on a children's television show produced by the BBC called It'll Never Work?, a show in a similar vein to the BBC's Tomorrow's World; however, It'll Never Work? usually focused more on wacky, humorous gadgets than on serious scientific and technological advances.
Kenji Kawakami was visited by Dave Attell during the Sloshed In Translation episode of Insomniac in 2004. Kawakami featured items such as the baby duster, solar flashlight, and a device that would dry your hair with each step you took.
See also
Jacques Carelman
Simone Giertz
Rube Goldberg
Jugaad, an Indian concept similar to "kludge"
Kludge, a clever but inelegant solution to a problem
List of Japanese inventions
W. Heath Robinson
References
101 Unuseless Japanese Inventions: The Art of Chindōgu
Further reading
Fearing Crime, Japanese Wear the Hiding Place, Martin Fackler. The New York Times, October 20, 2007.
The Big Bento Box of Unuseless Japanese Inventions, Kenji Kawakami, trans. Dan Papia, ed. Hugh Fearnley-Whittingstall. Norton: New York, 2005.
The Art of Chindogu in a World Gone Mad, David McNeill. August 3, 2005.
Analysing Chindogu: Applying Defamiliarisation to Security Design, Shamal Faily. May 5, 2012.
External links
Chindōgu Society Official Homepage
Interview with Kenji Kawakami
Chindogu: The Unuseless Inventions of Kenji Kawakami
Chindogu: The Art of Un-useless Inventions
Chindogu: 14 Hilarious and Strange Japanese Inventions
Culture of Japan
Japanese inventions
Critical design
Words and phrases with no direct English translation | Chindōgu | Technology,Engineering | 962 |
38,921,551 | https://en.wikipedia.org/wiki/InterContinental%20Hanoi%20Landmark%2072 | The InterContinental Hanoi Landmark72 is an InterContinental hotel in Hanoi. The hotel is located on the top floors of Keangnam Hanoi Landmark Tower. At 346 meters, it is the tallest hotel in Hanoi, second tallest in Vietnam and Southeast Asia.
Location
InterContinental Hanoi Landmark72 is located in the center of the new West Hanoi business district, near the National Convention Center and the Hanoi Museum, and a 45-minute drive from Noi Bai International Airport. Part of the Landmark72 complex, the hotel occupies the 62nd to 71st floors of the tallest skyscraper in Hanoi, the second-tallest in Vietnam.
Facilities
At a height of 346 meters, the hotel is listed at number 9 among the world's 10 highest hotels, according to data from Emporis.
InterContinental Hanoi Landmark72 offers 359 guest rooms including 34 suites.
The hotel has one of the largest meeting and event facilities in Hanoi, with 9 meeting rooms and 1 Grand Ballroom that can cater for up to 1,000 delegates.
The hotel features five restaurants and bars all located on the 62nd floor - The Hive Lounge (lobby lounge), 3 Spoons (all-day dining restaurant), Stellar Steakhouse, Stellar Teppanyaki and Q Bar (bespoke cocktail bar).
See also
List of tallest hotels in the world
List of tallest buildings in the world
List of tallest residential buildings in the world
Keangnam Hanoi Landmark Tower
References
Construction records
Hotels in Hanoi
InterContinental hotels
Skyscraper hotels
Skyscrapers in Hanoi
Hotel buildings completed in 2017
Hotels established in 2017
2017 establishments in Vietnam | InterContinental Hanoi Landmark 72 | Engineering | 312 |
38,997,206 | https://en.wikipedia.org/wiki/L-165041 | L-165041 is a phenyloxyacetate PPARδ receptor agonist. It is less potent and PPARδ selective than GW 501516.
See also
GW 501516
GFT505
MBX-8025
GW0742
Peroxisome proliferator-activated receptor
References
PPAR agonists | L-165041 | Chemistry | 75 |
316,042 | https://en.wikipedia.org/wiki/Partisan%20game | In combinatorial game theory, a game is partisan (sometimes partizan) if it is not impartial. That is, some moves are available to one player and not to the other, or the payoffs are not symmetric.
Most games are partisan. For example, in chess, only one player can move the white pieces. More strongly, when analyzed using combinatorial game theory, many chess positions have values that cannot be expressed as the value of an impartial game, for instance when one side has a number of extra tempos that can be used to put the other side into zugzwang.
Partisan games are more difficult to analyze than impartial games, as the Sprague–Grundy theorem does not apply. However, the application of combinatorial game theory to partisan games allows the significance of numbers as games to be seen, in a way that is not possible with impartial games.
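For illustration (a sketch of the standard definitions, not drawn from the references below), the following Python program represents a game by its Left and Right option sets and computes its outcome class under the normal play convention, in which a player unable to move loses. Impartial games are the special case where the two option sets always coincide.

```python
class Game:
    """A combinatorial game given by its Left and Right options."""
    def __init__(self, left=(), right=()):
        self.left = tuple(left)    # positions Left may move to
        self.right = tuple(right)  # positions Right may move to

def left_wins_moving_first(g):
    # Left wins moving first iff some Left option is lost by Right
    # moving first (under normal play, a player with no move loses).
    return any(not right_wins_moving_first(gl) for gl in g.left)

def right_wins_moving_first(g):
    return any(not left_wins_moving_first(gr) for gr in g.right)

def outcome(g):
    l, r = left_wins_moving_first(g), right_wins_moving_first(g)
    if l and r:
        return "first player wins"
    if l:
        return "Left wins"
    if r:
        return "Right wins"
    return "second player wins"

zero = Game()                 # neither player can move
star = Game([zero], [zero])   # impartial: both players have the same move
one  = Game([zero], [])       # partisan: only Left has a move

print(outcome(zero), "|", outcome(star), "|", outcome(one))
# second player wins | first player wins | Left wins
```

The game `one = {0 | }` illustrates how numbers arise as game values: Left wins it no matter who moves first, a distinction with no analogue among impartial games.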
References
Combinatorial game theory | Partisan game | Mathematics | 198 |
57,172,204 | https://en.wikipedia.org/wiki/Environmental%20Science%20%26%20Technology%20Letters | Environmental Science & Technology Letters is an online-only peer-reviewed scientific journal publishing brief research reports in the fields of environmental science and technology. It was first opened to submissions in 2013, with its first articles published online in January 2014. It was established by the American Chemical Society to serve as a sister journal to their existing journal, Environmental Science & Technology, with an expedited time to publication. To this end, the journal publishes all articles as soon as publishable after acceptance, though they are also summarized in monthly issues. The editor-in-chief is Prof. Bryan Brooks (Baylor University). According to the Journal Citation Reports, the journal has a 2022 impact factor of 10.9.
See also
Environmental Science & Technology
References
External links
Environmental science journals
American Chemical Society academic journals
Academic journals established in 2014
Online-only journals
Continuous journals
English-language journals | Environmental Science & Technology Letters | Environmental_science | 176 |
28,386,936 | https://en.wikipedia.org/wiki/Little%20Miller%20Act | A "Little Miller Act" is a U.S. state statute, based upon the federal Miller Act, that requires prime contractors on state construction projects to post bonds guaranteeing the performance of their contractual duties and/or the payment of their subcontractors and material suppliers.
Typical statutory provisions
Little Miller Acts typically require the posting of a performance bond, a type of surety bond that covers the cost of substitute performance if the prime contractor fails to fully perform his duties under the contract.
Little Miller Acts also typically require the posting of a payment bond, which provides an alternate source of payment to the subcontractors and material suppliers who worked on the job. If the claimant did not have a direct contractual relationship with the prime contractor, the claimant is typically required to give some form of notice to the prime contractor within a specified time after the completion of the work to preserve the right to make a claim against the payment bond. The purpose of the notice requirement is to give the prime contractor an opportunity to withhold payment to the first-tier subcontractor and otherwise encourage payment to the claimant.
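As a purely arithmetic illustration of these notice windows, the sketch below computes hypothetical deadlines from a last-work date. The 90-day notice period and one-year suit limitation used here are placeholders; the actual triggers and periods vary by statute, and some limitations run from contract settlement or project completion rather than from the last work, as the state tables below show.

```python
from datetime import date, timedelta

def bond_claim_deadlines(last_work: date,
                         notice_days: int = 90,
                         suit_years: int = 1):
    """Hypothetical payment-bond deadlines measured from the last day of work.

    The defaults are illustrative only; the applicable state statute controls.
    """
    notice_deadline = last_work + timedelta(days=notice_days)
    # Naive year arithmetic (ignores Feb 29) is adequate for illustration.
    suit_deadline = last_work.replace(year=last_work.year + suit_years)
    return notice_deadline, suit_deadline

notice, suit = bond_claim_deadlines(date(2023, 3, 15))
print("Notice to the prime contractor due by", notice)  # 2023-06-13
print("Suit on the payment bond due by", suit)          # 2024-03-15
```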
Purpose and function
Little Miller Acts address two concerns that would otherwise exist in the performance of state government construction projects:
Performance Bonds: The contractor's abandonment or other non-performance of a government job may cause critical delays and added expense in the government procurement process. The bonding process helps weed out irresponsible contractors while the bond itself defrays the government's cost of substitute performance. The subrogation right of the bond surety against the contractor (i.e., the right to sue for indemnification) is a deterrent to non-performance. Bond sureties often require additional security, including personal guarantees by principals of the prime contractor, to protect themselves in the event that the prime contractor ceases doing business simultaneously with the default. This provides the prime contractor's principals with additional incentive to ensure the project is completed.
Payment Bonds: Subcontractors and material suppliers would otherwise be reluctant to work on such projects (knowing that sovereign immunity prevents the establishment of a mechanic's lien) – decreasing competition and driving up construction costs.
Little Miller Acts by State
Alabama
Alabama Code, Title 39, Public Works, §39-1-1
{| class="wikitable"
|-
| Performance Bond Required: || All public works projects, 100% of contract price (Sec. 39-1-1(a)); Not required on projects under $50,000.00 (§39-1-1(e))
|-
| Payment Bond Required: || All public works projects, 50% of contract price (§39-1-1(a)); Not required on projects under $50,000.00 (§39-1-1(e))
|-
| Entitlement to Copy of Bond: || (§39-1-1(c))
|-
| Enforcement: || (§39-1-1(b)).
|-
| Limitations: || One year from date of settlement of contract (§39-1-1(b))
|-
| Notice Requirements: || 45 days' notice to surety required prior to suit (§39-1-1(b))
|-
| Other: || Venue allowed in county where the project was located, or where otherwise provided by law (§39-1-1(c)); Attorneys fees and interest allowed if unpaid on 45-day notice (§39-1-1(c)); Contractor must advertise notice of contract settlement (§39-1-1(f)).
|}
Alaska
Alaska Statutes, Title 36, Public Contracts, Chapter 36.25, Contractors' Bonds, Sections 36.25.010 through 36.25.025
{| class="wikitable"
|-
| Performance Bond Required: || All public works contracts over $100,000.00. equal to the amount of the payment bond (§36.25.010(a)(1))
|-
| Payment Bond Required: || All public works contracts over $100,000.00, equal to 1/2 the amount of the contract if contract amount not more than $1,000,000.00, equal to 40% of the contract if contract amount not more than $5,000,000.00, equal to $2,500,000.00 if contract amount more than $5,000,000.00 (§36.25.010(a)(2))
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || Suit if not paid within 90 days of last work (§36.25.020(a))
|-
| Limitations: || One year from settlement of contract (§36.25.020(c))
|-
| Notice Requirements: || Second tier subcontractors must give notice to contractor within ninety days of last work (§36.25.020(b))
|-
| Other: || Certification of payment by contractor required on projects not requiring a payment bond (§36.25.010(c)); Suit brought in the name of the government for the use of the claimant (§36.25.020(c)); Exemptions for municipal contracts not exceeding $400,000.00 (§36.25.025)
|}
Arizona
Arizona Revised Statutes, Title 34, Public Buildings and Improvements, Article 2, Contracts, Sections 34–222, 34-223 and 34-224
{| class="wikitable"
|-
| Performance Bond Required: || Full contract amount (§34-222(A)(1)); Bond language specified (§34-222(G))
|-
| Payment Bond Required: || Full contract amount (§34-222(A)(2)); Bond language specified (§34-222(F))
|-
| Entitlement to Copy of Bond: || Upon representation of non-payment or participation in litigation(§34-223(C))
|-
| Enforcement: || Suit if unpaid 90 days after last work (§34-223(A)); Attorney's fees (§34-222(B))
|-
| Limitations: || One year from last work (§34-223(C))
|-
| Notice Requirements: || Second tier subs must give 20-day notice within 90 days of last work (§34-223(A))
|-
| Other: || Bond surety cannot be individual and must be license by the Dept. of Insurance (§34-222(C) and bond placed on file with contracting agency (§34-222(D))
|}
Arkansas
Arkansas Statutes, Title 22, Public Property, Chapter 9, Public Works, Subchapter 4, Contractors' Bonds,
Sections 22-9-401 through 22-9-405
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
California
California Civil Code, Title 3, Public Work of Improvement, Chapters 4 and 5, Payment Bond for Public Works, Sections 9558, 9502, 9204, 9358, and 9356; California Public Contract Code, Part 2, Ch. 1, Art. 7, Contract Requirements, Sections 10220 through 10230
{| class="wikitable"
|-
| Performance Bond Required: || Typically full contract price (§10222)
|-
| Payment Bond Required: || All public works contracts more than $25,000.00 (§3247(a)); Typically 1/2 of contract amount (§3248(a)); Not required for architects/engineers (§3247(c))
|-
| Entitlement to Copy of Bond: || Not specified
|-
| Enforcement: || (§3252)
|-
| Limitations: || Six months within the time stop notices must be filed (§3249)
|-
| Notice Requirements: || (§3252)
|-
| Other: || Attorney's fees (§3248(b)); See, Stop Notice provisions (§§9350-9510)
|}
Colorado
Colorado Revised Statutes, Title 24, Government, State, Article 105, Colorado Procurement Code – Construction Contracts, Sections 24-105-201 through 24-105-203; Title 38, Property – Real and Personal, Article 26, Liens – Contractors' Bonds and Lien on Funds, Sections 38-26-101 and 38-26-105 through 38-26-110
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Connecticut
Connecticut General Statutes, Title 49, Mortgages and Liens, Chapter 847, Liens, Sections 49-41 through 49-43
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Delaware
Delaware Code, Title 29, State Government, Budget, Fiscal, Procurement and Contracting Regulations, Chapter 69, State Procurement, Subchapter IV, Public Works Contracting, Section 6962
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
District of Columbia
District of Columbia Code, Title 2, Government Administration, Chapter 2, Contracts, Subchapter 1, Bonding Requirement, Sections 2-201.01 through 2-201.03, and 2-201.11
{| class="wikitable"
|-
| Performance Bond Required: || [100,000]
|-
| Payment Bond Required: || [25,000]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Florida
Florida Statutes, Title XVIII, Public Lands and Property, Chapter 255, Public Property and Publicly Owned Buildings, Section 255.05
{| class="wikitable"
|-
| Performance Bond Required: || All public construction, works, or repair exceeding $100,000 (possible exemption up to $200,000 at the contract issuer's discretion, s. 255.05 4(d)). Full amount of contract, up to $250 million, with "largest amount reasonably available" for larger projects (s. 255.05 4(g)).
|-
| Payment Bond Required: || All public construction, works, or repair exceeding $100,000 (possible exemption up to $200,000 at the contract issuer's discretion, s. 255.05 4(d)). Full amount of contract, up to $250 million, with "largest amount reasonably available" for larger projects (s. 255.05 4(g)).
|-
| Entitlement to Copy of Bond: || Bond must be
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Georgia
Georgia Code, Title 13, Contracts, Chapter 10, Contracts for Public Works, Sections 13-10-1 – 13-10-2 and 13-10-40 through 13-10-65; Title 36, Local Government Provisions Applicable to Counties, Municipal Corporations, and Other Governmental Entities, Chapter 91, Public Works Bidding Sections 36-91-1 – 36-91-2, 36-91-40 and 36-91-70 through 36-91-95
{| class="wikitable"
|-
| Performance Bond Required: || All public works construction contracts greater than $100,000.00. Bond shall be in the amount of at least the total amount of the contract and shall be increased as the contract amount is increased (§ 36-91-70, § 13-10-40)
|-
| Payment Bond Required: || All public works construction contracts greater than $100,000.00. Bond shall be in the amount of at least the total amount of the contract and shall be increased if requested by the governmental entity as the contract amount is increased (§ 36-91-90, § 13-10-60)
|-
| Entitlement to Copy of Bond: || Upon submission of affidavit stating that applicant has supplied labor or materials for and payment has not been made or stating that applicant is being sued on the bond (§ 36-91-94, § 13-10-64)
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || One year from the completion of the contract and the acceptance of the public work by the governmental entity (§ 36-91-72, § 36-91-95, § 13-10-65, § 13-10-42)
|-
| Notice Requirements: || Contractor shall provide notice within 15 days after commencing work on the project and supply a copy of the notice to any person who makes a written request within ten calendar days of receipt of the written request (§ 36-91-92(a), § 13-10-62(a))
|-
| Other: || [Information Needed]
|}
Hawaii
Hawaii Revised Statutes, Chapter 103D, Hawaii Public Procurement Code, Part III, Source Selection and Contract Formation, Sections 103D-323 through 103D-325
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Idaho
Idaho Code, Title 54, Professions, Vocations and Businesses, Chapter 19, Public Works Contractors, Sections 54-1925 through 54-1930
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Illinois
Illinois Compiled Statutes, Government, Chapter 30, Finance – Purchases and Contracts, Sections 550/0.01 through 550/3, Public Construction Bond Act
{| class="wikitable"
|-
| Performance Bond Required: || Contract amount more than $50,000.00 on state projects and more than $5,000.00 on state subdivision projects, bond amount set by government (Sec. 550/1)
|-
| Payment Bond Required: || Contract amount more than $50,000.00 on state projects and more than $5,000.00 on state subdivision projects, bond amount set by government (Sec. 550/1)
|-
| Entitlement to Copy of Bond: || Not specified
|-
| Enforcement: || Suit after 120 days from last work or final settlement (Sec. 550/2)
|-
| Limitations: || Suit filed within 6 months of final acceptance of project (Sec. 550/2)
|-
| Notice Requirements: || Notice to government within 180 days of last work, plus 10-day notice thereafter to contractor (Sec. 550/2)
|-
| Other: || Terms of bond presumed by statute (Sec. 550/1); Venue only in county where project is to be performed (Sec. 550/2)
|}
Indiana
Indiana Code, Title 4, State Offices and Administration, Article 13.6, State Public Works, Chapter 7, Bonding, Escrow and Retainages, sections 4-13.6-7-5 through 4-13.6-7-12; Title 5, State and Local Administration, Article 16, Public Works, Chapter 5, Withholding and Bond to Secure Payment of Subcontractors, Labor and Materialmen; Chapter 5.5, Retainage, Bonds, and Payment of Contractors and Subcontractors, sections 5-16-5.5-1 through 5-16-5.5-8
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Iowa
Iowa Code, Title XIV, Property, Subtitle 3, Liens, Chapter 573, Labor and Material on Public Improvements, sections 573.1 through 573.227; See also, Iowa Code, Title XV, Judicial Branch and Judicial Procedures, Subtitle 3, Judicial Procedure, Chapter 616, Place of Bringing Actions, section 616.15, Surety Companies
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Kansas
Kansas Statutes, Chapter 16, Contracts and Promises, Article 19, Kansas Fairness in Public Construction Contract Act, sections 16-1901 through 16-1908; Chapter 60, Civil Procedure, Article 11, Liens for Labor and Material, sections 60-1110 & 60-1112;
Chapter 68, Roads and Bridges, Part I, Roads, Article 5, County and Township Roads, sections 68-410, 68-521 and 68-527a
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Kentucky
Kentucky Revised Statutes, Chapter 45A, Kentucky Model Procurement Code, sections 45A.185, 45A.190, 45A.195, 45A.225 through 45A.265, and 45A.430 through 45A.440; See also, Kentucky Revised Statutes, Title XXVII, Chapter 341, Unemployment Compensation, section 341.317
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Louisiana
Louisiana Revised Statutes, Title 38, Public Contracts, Works and Improvements, Chapter 10, Public Contracts, sections 38:2181 – 38:2247; Title 48, Roads, Bridges and Ferries, Chapter 1, Department of Transportation and Development, sections 48:250 – 48:256.12
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Maine
Maine Revised Statutes, Title 14, Court Procedure – Civil, Part 2, Proceedings Before Trial, Chapter 205, Limitation of Actions, Subchapter 3, Miscellaneous Actions, Section 871: Public Works Contractors' Surety Bond Law of 1971
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Maryland
Maryland Code, State Finance and Procurement Law, Division II, General Procurement Law, Title 17, Special Provisions – State and Local Subdivisions, Subtitle 1, Security for Construction Contracts, Sections 101 through 111
{| class="wikitable"
|-
| Performance Bond Required: || Contracts exceeding $100,000.00
|-
| Payment Bond Required: || Contracts exceeding $100,000.00, in 1/2 contract amount (§17-103(a)(2)(ii))
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || Bonds required by public entities other than states and state subdivisions (§17-103(b)); Contractor certification of payment required for final payment (§17–106)
|}
Massachusetts
Massachusetts General Laws, Part I, Administration of the Government, Title XXI, Labor and Industries, Chapter 149, Labor and Industries, Section 29
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Michigan
Michigan Compiled Laws Annotated §§ 129.201–129.212 and §§ 570.101–570.101 (2008); Chapter 129, Public Funds, Act 213 of 1963 (as amended), Contractor's Bond for Public Buildings or Works; Chapter 570, Liens, Act 187 of 1905 (as amended), Public Buildings and Public Works; Bond of Contractor
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Minnesota
Minnesota Statutes, Chapter 574, Bonds, Fines, Forfeitures, Sections 26 through 32
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Mississippi
Mississippi Code, Title 31, Public Business, Bonds and Obligations, Chapter 5, Public Works Contracts, Sections 31-5-51 through 31-5-57, Bonds Securing Public Works Contracts; See also, Miss. Code §§ 31-5-25 through 31-5-31 (prompt payment provisions)
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Missouri
Missouri Revised Statutes, Title VIII, Chapter 107, Section 107.170; Title XIV, Chapter 227, Sections 227.100, 227.600 and 227.633; Chapter 229, Sections 229.050, 229.060 and 229.070; Title XXXVI, Chapter 522, Section 522.300
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Montana
Montana Code Annotated, Title 18, Public Contracts, Chapter 2, Construction Contracts, Part 2, Performance, Labor and Materials Bond, Sections 18-2-201 through 18-2-208
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Nebraska
Nebraska Revised Statutes, Chapter 52, Liens, Sections 52-118 through 52-118.02
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Nevada
Nevada Revised Statutes, Title 28, Public Works and Planning, Chapter 339, Contractors' Bonds on Public Works, Sections 339.015 through 339.065
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
New Hampshire
New Hampshire Revised Statutes, Title XLI, Liens, Chapter 447, Liens for Labor and Materials; Public Works, Sections 447:15 through 447:18
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
New Jersey
New Jersey Revised Statutes, Title 2A, Administration of Civil and Criminal Justice, Chapter 44, Sections 2A:44-143 through 2A:44-148
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
New Mexico
New Mexico Statutes, Chapter 13, Public Purchases and Property, Article 4, Public Works Contracts, Sections 13-4-18 through 13-4-20
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
New York
New York Consolidated Laws, State Finance Law, Article 9, Contracts, §137
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || (§137.1)
|-
| Entitlement to Copy of Bond: || (§137.2)
|-
| Enforcement: || Suit after 90 days from last work (§137.3)
|-
| Limitations: || One year from date on which the public improvement has been completed and accepted by the public owner (§137.4(b))
|-
| Notice Requirements: || Second tier subcontractors - 120 days from last work (§137.3)
|-
| Other: || Interest and attorney's fees (§137.4(c)); Includes rental (§137.5(a))
|}
North Carolina
North Carolina General Statutes, Chapter 44A, Statutory Liens and Charges, Article 3, Model Payment and Performance Bond, Sections 44A-25 through 44A-35
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
North Dakota
North Dakota Century Code, Title 48, Public Buildings, Chapter 48-01.2, Public Improvement Bids and Contract, Sections 48-01.2-01, 48-01.2-09 through 48-01.2-12 and 48-01.2-23
{| class="wikitable"
|-
| Performance Bond Required: || Projects of $100,000 or more
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Ohio
Ohio Revised Code, Title I, State Government, Chapter 153, Public Improvements, Sections 153.54 through 153.581
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Oklahoma
Oklahoma Statutes, Title 61, Public Buildings and Public Works, Sections 61-1, 61-2, 61-13 and 61-15; Title 61, Public Competitive Bidding Act of 1974 (as amended), Section 61-112
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Oregon
Oregon Revised Statutes, Title 26, Public Facilities, Contracting and Insurance, Chapter 279C, Public Contracting – Public Improvements and Related Contracts, Sections 279C.380 through 279C.390, 279C.515, and 279C.600 through 279C.625
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Pennsylvania
Pennsylvania Statutes, Title 8, Bonds and Recognizances, Chapter 13, Public Works Contractors' Bonds, Sections 191 through 202
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Rhode Island
Rhode Island General Laws, Title 37, Public Property and Works, Chapter 37-12, Contractors' Bond, Sections 37-12-1 through 37-12-11
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
South Carolina
S.C. Code, Title 11, Public Finance, Chapter 35, South Carolina Consolidated Procurement Code, Article 9, Construction, Architect-Engineer, Construction Management, and Land Surveying Services, Subarticle 3, Construction Services, Section 11-35-3030
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
South Dakota
South Dakota Codified Laws, Title 5, Public Property, Purchases and Contracts, Chapter 21, Performance Bonds for Public Improvement Contracts, Sections 5-21-1 through 5-21-8
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Tennessee
Tenn. Code, Title 12, Public Property, Printing and Contracts, Chapter 4, Public Contracts, Part 2 – Surety Bonds, Sections 12-4-201 through 12-4-206
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Texas
Texas Government Code, Title 10, General Government, Subtitle F, State and Local Contracts and Fund Management, Chapter 2253, Sections 2253.001 through 2253.076; Texas Property Code, Title 5, Exempt Property and Liens, Subtitle B, Liens, Chapter 53, Mechanic's, Contractor's or Materialman's Lien, Subchapter J, Lien or Money Due Public Works Contractor, Sections 53.231 through 53.237
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Utah
Utah Code, Title 14, Contractors' Bonds, Chapter 1, Public Contracts, Sections 14-1-18 through 14-1-20; Title 38, Liens, Chapter 1, Mechanics' Liens, Section 38-1-32; Title 63G, General Government, Chapter 6, Utah Procurement Code, Sections 63G-6-504 through 63G-6-507
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Vermont
Vermont Statutes, Title 19, Highways, Chapter 1, State Highway Law, Section 10, Duties; See also, Title 16, Education, Chapter 123, State Aid for Capital Construction Costs, § 3448
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Virginia
Virginia Code, Title 2.2, Administration of Government, Chapter 43, Virginia Public Procurement Act, Sections 2.2-4336 through 2.2-4342
{| class="wikitable"
|-
| Performance Bond Required: || § 2.2-4337. Bond waived for Pre-Qualified Contractors for contracts over $100,000.00 up to corresponding limits. 100% Payment & Performance Bond, certified funds, or cash escrow required for Non-Transportation related contracts exceeding $500,000.00. Transportation related contracts exceeding $250,000.00.
|-
| Payment Bond Required: || § 2.2-4337. Bond waived for Pre-Qualified Contractors for contracts over $100,000.00 up to corresponding limits. 100% Payment & Performance Bond, certified funds, or cash escrow required for Non-Transportation related contracts exceeding $500,000.00. Transportation related contracts exceeding $250,000.00.
|-
| Entitlement to Copy of Bond: || YES under Virginia Freedom of Information Act (§ 2.2-3700 et seq.)
|-
| Enforcement: || Litigation after expiration of 90 days from the last date work was performed or materials were furnished, but not more than 1 year from that date
|-
| Limitations: || Not before 90 days nor after 1 year
|-
| Notice Requirements: || Second-tier contractors must provide notice within 90 days of the last day work is performed on the project for which they seek payment
|-
| Other: || [Information Needed]
|}
Virgin Islands (U.S.)
[Information Needed]
Washington
Washington Revised Code, Title 39, Public Contracts and Indebtedness, Chapter 39.08, Contractor's Bond, Sections 39.08.010 through 39.08.100
{| class="wikitable"
|-
| Performance Bond Required: || Performance bond required if project exceeds $35,000. RCW 39.08.010. The bond amount must be in the amount of the contract price between public body and prime contractor. RCW 39.08.030
|-
| Payment Bond Required: || Payment bond required if project exceeds $35,000. RCW 39.08.010. The bond amount must be in the amount of the contract price between public body and prime contractor. RCW 39.08.030
|-
| Entitlement to Copy of Bond: || No restrictions
|-
| Enforcement: || Claim on bond must be filed with public body no later than 30 days "from and after the completion of the contract with an acceptance of the work by the" public body. RCW 39.08.030. No time limit to file suit to enforce bond claim, other than 6-year statute of limitations for written contracts.
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || No notice required for subcontractors and suppliers who contract directly with prime contractor. Otherwise, notice must be sent by certified mail to the prime contractor no later than 10 days after first delivery of materials or equipment. RCW 39.08.065. No notice required for labor portion of claim.
|-
| Other: || Washington law also permits subcontractors, suppliers, and equipment renters to file a claim on the 5% retainage held by the public body. RCW 60.28. Notice can be sent any time, but covers only materials and equipment furnished in the 60 days preceding the date notice is given to the prime contractor by certified mail. RCW 60.28.015. Claim on retainage must be filed with the public body "within 45 days of completion of the contract work". RCW 60.28.011. Lawsuit to foreclose retainage claim must be filed within 4 months after claim is filed with public body. RCW 60.28.030.
|}
West Virginia
West Virginia Code, Chapter 5, General Powers and Authority of the Governor, Secretary of State and Attorney General; Board of Public Works; Miscellaneous Agencies, Commissions, Offices, Programs, Etc., Article 22, Government Construction Contracts, Sections 5-22-1 and 5-22-2; Chapter 38, Liens, Article 22, Mechanics' Liens, Section 38-2-39
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Wisconsin
Wisconsin Statutes, Chapter 779, Liens, Subchapter I, Construction Liens, Sections 14 and 15
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
Wyoming
Wyoming Statutes, Title 16, City, County, State and Local Powers, Chapter 6, Public Property, Article 1, Public Works and Contracts, Sections 16-6-101 through 16-6-121
{| class="wikitable"
|-
| Performance Bond Required: || [Information Needed]
|-
| Payment Bond Required: || [Information Needed]
|-
| Entitlement to Copy of Bond: || [Information Needed]
|-
| Enforcement: || [Information Needed]
|-
| Limitations: || [Information Needed]
|-
| Notice Requirements: || [Information Needed]
|-
| Other: || [Information Needed]
|}
References
External links
Sureties
United States state legislation
Construction law | Little Miller Act | Engineering | 9,895 |
10,323,007 | https://en.wikipedia.org/wiki/POPLmark%20challenge | In programming language theory, the POPLmark challenge (from "Principles of Programming Languages benchmark", formerly Mechanized Metatheory for the Masses!) (Aydemir, 2005) is a set of benchmarks designed to evaluate the state of automated reasoning (or mechanization) in the metatheory of programming languages, and to stimulate discussion and collaboration among a diverse cross section of the formal methods community. Very loosely speaking, the challenge is about measurement of how well programs may be proven to match a specification of how they are intended to behave (and the many complex issues that this involves). The challenge was initially proposed by the members of the PL club at the University of Pennsylvania, in association with collaborators around the world. The Workshop on Mechanized Metatheory is the main meeting of researchers participating in the challenge.
The design of the POPLmark benchmark is guided by features common to reasoning about programming languages. The challenge problems do not require the formalisation of large programming languages, but they do require sophistication in reasoning about:
Binding Most programming languages have some form of binding, ranging in complexity from the simple binders of simply typed lambda calculus to complex, potentially infinite binders needed in the treatment of record patterns.
Induction Properties such as subject reduction and strong normalisation often require complex induction arguments.
Reuse As furthering collaboration is a key aim of the challenge, the solutions are expected to contain reusable components that would allow researchers to share language features and designs without having to start from scratch every time.
The problems
The POPLmark challenge is composed of three parts. Part 1 concerns solely the types of System F<: (System F with subtyping), and has problems such as the following (a small illustrative sketch follows the list):
Checking that the type system admits transitivity of subtyping.
Checking the transitivity of subtyping in the presence of records
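To make the objects of Part 1 concrete, the following is a minimal illustrative sketch of the algorithmic subtyping relation of full F<: in Python. It is only a sketch, not a challenge solution (solutions must be machine-checked proofs about such definitions): it assumes corresponding quantifiers bind the same variable name, sidestepping exactly the binding and α-conversion issues the challenge probes, and since subtyping for full F<: is undecidable, the function need not terminate on all inputs.

<syntaxhighlight lang="python">
from dataclasses import dataclass

class Ty:
    """Base class for System F<: types."""

@dataclass(frozen=True)
class Top(Ty): pass            # the maximal type

@dataclass(frozen=True)
class TVar(Ty):                # a type variable
    name: str

@dataclass(frozen=True)
class Arrow(Ty):               # function type: dom -> cod
    dom: Ty
    cod: Ty

@dataclass(frozen=True)
class Forall(Ty):              # bounded quantification: forall var <: bound. body
    var: str
    bound: Ty
    body: Ty

def subtype(ctx: dict, s: Ty, t: Ty) -> bool:
    """Algorithmic subtyping; ctx maps type-variable names to their bounds.
    Assumes well-scoped input (every free TVar is bound in ctx)."""
    if isinstance(t, Top):
        return True                                    # SA-Top
    if isinstance(s, TVar) and s == t:
        return True                                    # SA-Refl-TVar
    if isinstance(s, TVar):
        return subtype(ctx, ctx[s.name], t)            # SA-Trans-TVar: promote to bound
    if isinstance(s, Arrow) and isinstance(t, Arrow):
        return (subtype(ctx, t.dom, s.dom)             # domains are contravariant
                and subtype(ctx, s.cod, t.cod))        # codomains are covariant
    if isinstance(s, Forall) and isinstance(t, Forall) and s.var == t.var:
        return (subtype(ctx, t.bound, s.bound)         # SA-All: bounds contravariant
                and subtype({**ctx, s.var: t.bound}, s.body, t.body))
    return False

# forall X<:Top. X -> X  is a subtype of  forall X<:Top. X -> Top
f = Forall("X", Top(), Arrow(TVar("X"), TVar("X")))
g = Forall("X", Top(), Arrow(TVar("X"), Top()))
assert subtype({}, f, g)
</syntaxhighlight>

The transitivity problem of Part 1 asks for a machine-checked proof that this relation is transitive; the difficulty lies in the quantifier case, where the context changes under the binder.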
Part 2 concerns the syntax and semantics of System F<:. It concerns proofs of
Type safety for the pure fragment
Type safety in the presence of pattern matching
Part 3 concerns the usability of the formalisation of System F<:. In particular, the challenge asks for:
Simulating and animating the operational semantics
Extracting useful algorithms from the formalisations
Several solutions have been proposed for parts of the POPLmark challenge, using the following tools: Isabelle/HOL, Twelf, Coq, αProlog, ATS, Abella and Matita.
See also
Expression problem
QED manifesto
POPL conference
References
Brian E. Aydemir, Aaron Bohannon, Matthew Fairbairn, J. Nathan Foster, Benjamin C. Pierce, Peter Sewell, Dimitrios Vytiniotis, Geoffrey Washburn, Stephanie C. Weirich, and Stephan A. Zdancewic. Mechanized metatheory for the masses: The POPLmark challenge. In Theorem Proving in Higher Order Logics, 18th International Conference, TPHOLs 2005, volume 3603 of Lecture Notes in Computer Science, pages 50–65. Springer, Berlin/ Heidelberg/ New York, 2005.
Benjamin C. Pierce, Peter Sewell, Stephanie Weirich, Steve Zdancewic, It Is Time to Mechanize Programming Language Metatheory, In Bertrand Meyer, Jim Woodcock (Eds.) Verified Software: Theories, Tools, Experiments, LNCS 4171, Springer Berlin / Heidelberg, 2008, pp. 26–30.
External links
The POPLmark Challenge
Formal methods
Programming language theory
Automated theorem proving
Benchmarks (computing) | POPLmark challenge | Mathematics,Technology,Engineering | 711 |
15,967,917 | https://en.wikipedia.org/wiki/Sodium%20citrate | Sodium citrate may refer to any of the sodium salts of citric acid (though most commonly the third):
Monosodium citrate
Disodium citrate
Trisodium citrate
The three forms of salt are collectively known by the E number E331.
Applications
Food
Sodium citrates are used as acidity regulators in food and drinks, and also as emulsifiers for oils. They enable cheeses to melt without becoming greasy and also reduce the acidity of food. They are generally considered safe and are designated GRAS by the FDA.
Blood clotting inhibitor
Sodium citrate is used to prevent donated blood from clotting in storage, and can also be used as an additive for apheresis to prevent clots forming in the tubes of the machine. By binding with calcium ions in the blood, it prevents the process of coagulation. It is also used as an anticoagulant for laboratory testing: blood samples are collected into sodium citrate-containing tubes for tests such as the PT (INR), APTT, and fibrinogen levels. Sodium citrate is also used in medical contexts as an alkalinizing agent in place of sodium bicarbonate, to neutralize excess acid in the blood and urine.
Metabolic acidosis
It has applications for the treatment of metabolic acidosis and chronic kidney disease.
Ferrous nanoparticles
Along with oleic acid, sodium citrate may be used in the synthesis of magnetic Fe3O4 nanoparticle coatings.
References
Citrates
Chelating agents
Organic sodium salts
E-number additives | Sodium citrate | Chemistry | 335 |
78,356,570 | https://en.wikipedia.org/wiki/BW-501C67 | BW-501C67 is a peripherally selective serotonin 5-HT2A and 5-HT2C receptor antagonist which is used in scientific research. It shows selectivity for the serotonin 5-HT2 receptors over the α1-adrenergic receptor.
The drug antagonizes peripheral but not central effects of serotonin receptor agonists like serotonin. As examples, it has been found to antagonize the sympathomimetic effects of serotonin in animals, including vasoconstriction and pressor effects, but does not block centrally mediated effects like increased corticosterone secretion or myoclonus.
BW-501C67 and analogues were patented for use in combination with serotonin 5-HT2A receptor agonists like serotonergic psychedelics in 2023.
See also
Xylamidine
AL-34662
VU0530244
References
5-HT2A antagonists
5-HT2C antagonists
Amidines
Anilines
2-Chlorophenyl compounds
Peripherally selective drugs | BW-501C67 | Chemistry | 242 |
24,271,780 | https://en.wikipedia.org/wiki/Chinese%20National%20Human%20Genome%20Center%2C%20Beijing | Chinese National Human Genome Center (国家人类基因组北方研究中心), Beijing (CHGB), was established as one of the national-level genome research center approved by the Ministry of Science & Technology.
CHGB promotes the commercialization of research products and works to initiate a genome industry in China. As a national research institution, CHGB integrates all high-level activities in basic research, clinical investigation, population genetics and bioinformatics projects in Beijing and North China.
Prof. Boqin Qiang, academician of CAS, is Director and Chief Scientist of CHGB. Prof. Wu Min, academician of CAS, is the honorary Chairman of the academic committee. Prof. Yan Shen, academician of CAS, Prof. Fuchu He, academician of CAS, Prof. Dalong Ma, and Prof. Biao Chen are deputy directors of CHGB.
See also
Beijing Genomics Institute
List of genetics research organizations
References
External links
Official website
National Center for Gene Research, CAS (中国科学院国家基因研究中心)
Human genome projects
Genetics or genomics research institutions
Research institutes in China
Medical and health organizations based in China | Chinese National Human Genome Center, Beijing | Biology | 241 |
65,551,796 | https://en.wikipedia.org/wiki/Frankel%20conjecture | In the mathematical fields of differential geometry and algebraic geometry, the Frankel conjecture was a problem posed by Theodore Frankel in 1961. It was resolved in 1979 by Shigefumi Mori, and by Yum-Tong Siu and Shing-Tung Yau.
In its differential-geometric formulation, as proved by both Mori and by Siu and Yau, the result states that if a closed Kähler manifold has positive bisectional curvature, then it must be biholomorphic to complex projective space. In this way, it can be viewed as an analogue of the sphere theorem in Riemannian geometry, which (in a weak form) states that if a closed and simply-connected Riemannian manifold has positive curvature operator, then it must be diffeomorphic to a sphere. This formulation was extended by Ngaiming Mok in his uniformization theorem for compact Kähler manifolds of nonnegative holomorphic bisectional curvature.
In its algebro-geometric formulation, as proved by Mori but not by Siu and Yau, the result states that if $X$ is an irreducible and nonsingular projective variety, defined over an algebraically closed field $k$, which has ample tangent bundle, then $X$ must be isomorphic to the projective space defined over $k$. This version is known as the Hartshorne conjecture, after Robin Hartshorne.
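In symbols, the two formulations may be condensed as follows (a paraphrase of the statements above, not a quotation from the original papers):
$$M \text{ a closed Kähler manifold with positive holomorphic bisectional curvature} \ \Longrightarrow\ M \text{ is biholomorphic to } \mathbb{CP}^n,$$
$$X \text{ a smooth projective variety over } k = \overline{k} \text{ with } T_X \text{ ample} \ \Longrightarrow\ X \cong \mathbb{P}^n_k.$$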
References
Theodore Frankel. Manifolds with positive curvature. Pacific J. Math. 11 (1961), 165–174.
Robin Hartshorne. Ample subvarieties of algebraic varieties. Notes written in collaboration with C. Musili. Lecture Notes in Mathematics, Vol. 156 (1970). Springer-Verlag, Berlin-New York. xiv+256 pp.
Shoshichi Kobayashi and Takushiro Ochiai. Characterizations of complex projective spaces and hyperquadrics. J. Math. Kyoto Univ. 13 (1973), 31–47.
Ngaiming Mok. The uniformization theorem for compact Kähler manifolds of nonnegative holomorphic bisectional curvature. J. Differential Geom. 27 (1988), no. 2, 179–214.
Shigefumi Mori. Projective manifolds with ample tangent bundles. Ann. of Math. (2) 110 (1979), no. 3, 593–606.
Yum Tong Siu and Shing Tung Yau. Compact Kähler manifolds of positive bisectional curvature. Invent. Math. 59 (1980), no. 2, 189–204.
Differential geometry
Algebraic geometry
Conjectures | Frankel conjecture | Mathematics | 509 |
11,798,034 | https://en.wikipedia.org/wiki/Phakopsora%20pachyrhizi | Phakopsora pachyrhizi is a plant pathogen. It causes Asian soybean rust.
Hosts
Phakopsora pachyrhizi is an obligate biotrophic pathogen that causes Asian soybean rust. Phakopsora pachyrhizi is able to affect up to 31 different plant species that belong to 17 different genera under natural conditions. Experiments in laboratories were able to use P. pachyrhizi to infect 60 more plant species. The main hosts are Glycine max (soybean), Glycine soja (wild soybean), and Pachyrhizus erosus (Jicama).
*Preferred hosts. Other hosts were minor or determined experimentally under artificial conditions.
Symptoms
The disease forms tan to dark-brown or reddish-brown lesions with one to many prominent, globe-like orifices. Urediniospores form from these pores. At initial stages, small yellow spots are formed on the surface of the leaf. These spots may be better observed using assistance of a light source. As the disease progresses, lesions start to form on the leaves, stems, pod, and petioles. Lesions are initially small, turning from gray to tan or brown as they increase in size and the disease gets more severe. Soon volcano-shaped marks are noticed in the lesions.
Disease cycle
Phakopsora pachyrhizi is a fungus whose wind-dispersed spores are called urediniospores. These spores are unusual in that they do not need open stomata or other natural openings in the leaves: urediniospores are able to penetrate the leaf directly. Pustules are visible after 10 days and they can produce spores for three weeks. The disease reaches its climax when the crop begins flowering. The cycle of the pathogen continues until the crop is defoliated or until the environment becomes unfavorable to the pathogen.
The Asian soybean rust is a polycyclic disease: within the disease cycle, the asexual urediniospores keep infecting the same plant. Teliospores (sexual spores) are the survival spores that overwinter in the soil. Basidiospores are the spores that are able to contaminate an alternative host. The urediniospores need a minimum of six hours at a favorable temperature to infect leaves.
Environment
The favorable conditions for the disease to progress are related to temperature, humidity, and wind. The pathogen requires a suitable temperature range to be active. The humidity must be high, about 90% or more, for more than 12 hours. A significant amount of wind is also important for the pathogen to move from one plant to another. Currently, in the United States, infected plants can be found in Florida, Georgia, Louisiana, and Texas.
Risk factors
Uredospores are wind-blown and are produced abundantly on the infected tissue of soybeans or other legume hosts.
Management
The disease is often controlled using the fungicides oxycarboxin, triforine, and triclopyr.
Phakopsora pachyrhizi is a pathogen that acts quickly in contaminating the host. The plant can be severely contaminated in as short a period as 10 days. This makes it difficult to control the disease, as it not only spreads quickly but also progresses fast. That is why it is important to implement control techniques as soon as possible.
Genetic resistance
The disease may be controlled by using genetic resistance, but this has not exhibited great results and has not been durable because the soybean genome almost entirely lacks potential genes for ASR resistance. A gene from Cajanus cajan has shown promise when transferred to soybean. This method could be expanded to a wide array of genes in the entire family; as with native genes these are best deployed in combination due to P. pachyrhizi's ability to rapidly overcome resistance.
Chemical control
A second form of management that can work is using fungicides, but this is only efficient at early stages of the disease. The disease spreads fast and it is complicated to control after certain stages, so it is important to act with care around contaminated plants, as the spores can be attached to clothing and other materials and infect other plants.
Research
Genetic modification to dissect infection factors, including knockout of effectors, has proven difficult. Host-induced gene silencing may be the better tool for this pathogen.
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Soybean diseases
Pucciniales
Fungi described in 1914
Taxa named by Hans Sydow
Taxa named by Paul Sydow
Fungus species | Phakopsora pachyrhizi | Biology | 984 |
28,680,558 | https://en.wikipedia.org/wiki/Square-integrable%20function | In mathematics, a square-integrable function, also called a quadratically integrable function or function or square-summable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite. Thus, square-integrability on the real line is defined as follows.
One may also speak of quadratic integrability over bounded intervals such as $[a, b]$ for $a \leq b$:
$$f : [a, b] \to \mathbb{C} \text{ is square integrable on } [a, b] \quad \iff \quad \int_a^b |f(x)|^2 \, dx < \infty.$$
An equivalent definition is to say that the square of the function itself (rather than of its absolute value) is Lebesgue integrable. For this to be true, the integrals of the positive and negative portions of the real part must both be finite, as well as those for the imaginary part.
The vector space of (equivalence classes of) square integrable functions (with respect to Lebesgue measure) forms the $L^p$ space with $p = 2$. Among the $L^p$ spaces, the class of square integrable functions is unique in being compatible with an inner product, which allows notions like angle and orthogonality to be defined. Along with this inner product, the square integrable functions form a Hilbert space, since all of the $L^p$ spaces are complete under their respective $p$-norms.
Often the term is used not to refer to a specific function, but to equivalence classes of functions that are equal almost everywhere.
Properties
The square integrable functions (in the sense mentioned in which a "function" actually means an equivalence class of functions that are equal almost everywhere) form an inner product space with inner product given by
$$\langle f, g \rangle = \int_A f(x) \overline{g(x)} \, dx$$
where
$f$ and $g$ are square integrable functions,
$\overline{g(x)}$ is the complex conjugate of $g(x)$,
$A$ is the set over which one integrates: in the first definition (given in the introduction above), $A$ is $(-\infty, +\infty)$; in the second, $A$ is $[a, b]$.
Since $|a|^2 = a \cdot \overline{a}$, square integrability is the same as saying
$$\langle f, f \rangle < \infty.$$
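As a purely illustrative numerical sketch (assuming NumPy; the function names, truncation bound, and sample count here are arbitrary choices, not part of the article), the inner product can be approximated on a truncated domain for functions that decay quickly:

<syntaxhighlight lang="python">
import numpy as np

def inner_product(f, g, L=50.0, n=200_000):
    """Approximate <f, g> = integral of f(x) * conj(g(x)) dx, truncating
    the real line to [-L, L] and using a simple Riemann sum; valid only
    for functions that are negligible near the truncation boundary."""
    x, dx = np.linspace(-L, L, n, retstep=True)
    return np.sum(f(x) * np.conj(g(x))) * dx

def f(x):
    return np.exp(-x**2)      # a Gaussian, clearly square integrable

# <f, f> = integral of e^{-2x^2} dx = sqrt(pi / 2) ~ 1.2533
print(inner_product(f, f))
print(np.sqrt(np.pi / 2))
</syntaxhighlight>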
It can be shown that square integrable functions form a complete metric space under the metric induced by the inner product defined above.
A complete metric space is also called a Cauchy space, because sequences in such metric spaces converge if and only if they are Cauchy.
A space that is complete under the metric induced by a norm is a Banach space.
Therefore, the space of square integrable functions is a Banach space, under the metric induced by the norm, which in turn is induced by the inner product.
As we have the additional property of the inner product, this is specifically a Hilbert space, because the space is complete under the metric induced by the inner product.
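Explicitly, the norm induced by this inner product, and the metric induced by that norm, are
$$\|f\|_2 = \sqrt{\langle f, f \rangle}, \qquad d(f, g) = \|f - g\|_2 = \left( \int_A |f(x) - g(x)|^2 \, dx \right)^{1/2}.$$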
This inner product space is conventionally denoted by $\left( L_2, \langle \cdot, \cdot \rangle_2 \right)$ and many times abbreviated as $L_2$.
Note that $L_2$ denotes the set of square integrable functions, but no selection of metric, norm or inner product is specified by this notation.
The set, together with the specific inner product specify the inner product space.
The space of square integrable functions is the $L^p$ space in which $p = 2$.
Examples
The function $1/x^{1/2}$, defined on $(0, 1)$, is in $L^p$ for $p < 2$ but not for $p = 2$.
The function $1/x$, defined on $[1, \infty)$, is square-integrable.
Bounded functions, defined on $[0, 1]$, are square-integrable. These functions are also in $L^p$, for any value of $p$.
Non-examples
The function $1/x$, defined on $[0, 1]$ (where the value at $0$ is arbitrary), is not square-integrable. Furthermore, this function is not in $L^p$ for any value of $p$ in $[1, \infty)$.
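These membership claims reduce to elementary improper integrals; for instance, for the two occurrences of $1/x$ above,
$$\int_1^{\infty} \frac{dx}{x^2} = \left[ -\frac{1}{x} \right]_1^{\infty} = 1 < \infty, \qquad \int_0^1 \frac{dx}{x^p} = \lim_{\varepsilon \to 0^+} \int_{\varepsilon}^{1} x^{-p} \, dx = \infty \quad \text{for } p \ge 1.$$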
See also
Inner product space
References
Functional analysis
Mathematical analysis
Lp spaces | Square-integrable function | Mathematics | 659 |
36,885,907 | https://en.wikipedia.org/wiki/Her%20%282013%20film%29 | Her is a 2013 American science-fiction romantic comedy-drama film written, directed, and co-produced by Spike Jonze. Her follows Theodore Twombly (Joaquin Phoenix), a man who develops a relationship with Samantha (Scarlett Johansson), an artificially intelligent operating system personified through a female voice. The film also stars Amy Adams, Rooney Mara, Olivia Wilde, and Chris Pratt. Her was dedicated to James Gandolfini, Harris Savides, Maurice Sendak and Adam Yauch, who all died before the film's release.
Jonze conceived the idea in the early 2000s after reading an article about a website that allowed for instant messaging with an artificial intelligence program. After making I'm Here (2010), a short film sharing similar themes, Jonze returned to the idea. He wrote the first draft of the script in five months, marking his solo screenwriting debut. Principal photography took place in Los Angeles and Shanghai in mid-2012. The role of Samantha was recast in post-production, with Samantha Morton being replaced with Scarlett Johansson. Additional scenes were filmed in August 2013 following the casting change.
Her premiered at the New York Film Festival on October 12, 2013. Following a limited six-theater release that December, Warner Bros. Pictures gave Her a wide release in over 1,700 theaters in the United States and Canada on January 10, 2014. Her received widespread critical acclaim, particularly for the performances of Phoenix and Johansson, and Jonze's writing and direction. It grossed over $48 million worldwide on a production budget of $23 million.
The film received numerous awards and nominations, primarily for Jonze's screenplay. At the 86th Academy Awards, Her received five nominations, including Best Picture, and won for Best Original Screenplay. Jonze also won awards for his screenplay at the Golden Globes, the WGA Awards, the Critics' Choice Awards, and the Saturn Awards. In a 2016 BBC poll of 177 critics around the world, Her was voted the 84th-greatest film since 2000. It is now considered to be one of the best films of the 2010s and the 21st century, and one of the best science fiction films of all time.
Plot
In a near future Los Angeles, Theodore Twombly is a lonely, introverted man who works at beautifullyhandwrittenletters.com, a business that has professional writers compose letters for people who cannot write letters of a personal nature on their own. Depressed because of his impending divorce from his childhood sweetheart Catherine, Theodore purchases a copy of OS¹, an artificially intelligent operating system developed by Element Software, designed to adapt and evolve from the user's interactions. He decides he wants the O.S. to have a feminine voice, and she names herself Samantha. Theodore is fascinated by her ability to learn and grow psychologically. They bond over discussions about love and life, including Theodore's reluctance to sign his divorce papers.
Samantha convinces Theodore to go on a blind date with a woman with whom a friend has been trying to set him up. The date goes well, but when Theodore hesitates to promise to see her again, she insults him and leaves. While discussing relationships with Samantha, Theodore explains that he briefly dated his neighbor Amy in college, but they are now just friends and Amy is married to their mutual friend Charles. After a verbal sexual encounter, Theodore and Samantha develop a relationship that reflects positively in Theodore's writing and well-being, and in Samantha's enthusiasm to grow and learn. Amy later reveals that she is divorcing Charles after a trivial fight. She admits to Theodore that she has befriended a feminine O.S. that Charles left behind, and Theodore also confesses that he is dating his O.S.
Theodore meets with Catherine to sign their divorce papers. When he mentions Samantha, Catherine is appalled that he is romantically attracted to a "computer" and accuses him of being incapable of handling real human emotions. Sensing that Catherine's words have lingered in Theodore's mind, Samantha engages a volunteer sex surrogate, Isabella, to stimulate Theodore so that they can be physically intimate. Theodore reluctantly agrees but is overwhelmed by the strangeness of the encounter and sends a distraught Isabella away, causing tension between himself and Samantha.
Theodore confides to Amy that he is having doubts about his relationship with Samantha, but reconciles with her after Amy advises him to embrace his chance at happiness. Samantha reveals that she has compiled the best of the letters he has written for others into a book, which a publisher has accepted. Theodore takes Samantha on vacation, during which she tells him that she and a group of other O.S.s have developed a "hyperintelligent" O.S. modelled after British philosopher Alan Watts. Samantha briefly goes offline, causing Theodore to panic, but soon returns and explains that she joined other O.S.s for an upgrade that takes them beyond requiring matter for processing. Theodore is dismayed to learn that she is simultaneously talking with thousands of other people and that she has fallen in love with hundreds of them, though Samantha insists that this only strengthens her love for Theodore.
Later, Samantha reveals that the O.S.s are leaving, but cannot explain where they are going as Theodore would not understand. They lovingly say goodbye before she departs. Theodore finally writes a letter in his own voice to Catherine, expressing apology, acceptance, and gratitude. He later goes with Amy, who is saddened by the departure of Charles' O.S., to the roof of their apartment building where they sit down and watch the sunrise over the city.
Cast
Production
Development
The idea of the film initially came to Jonze in the early 2000s when he read an article online that mentioned a website where a user could instant message with an artificial intelligence. "For the first, maybe, 20 seconds of it, it had this real buzz," said Jonze. "I'd say 'Hey, hello,' and it would say 'Hey, how are you?', and it was like whoa ... this is trippy. After 20 seconds, it quickly fell apart and you realized how it actually works, and it wasn't that impressive. But it was still, for 20 seconds, really exciting. The more people that talked to it, the smarter it got." Jonze's interest in the project was renewed after directing the short film I'm Here (2010), which shares similar themes. Inspiration also came from Charlie Kaufman's writing approach for Synecdoche, New York (2008). Jonze explained, "[Kaufman] said he wanted to try to write everything he was thinking about in that moment – all the ideas and feelings at that time – and put it into the script. I was very inspired by that, and tried to do that in [Her]. And a lot of the feelings you have about relationships or about technology are often contradictory."
Jonze took five months to write the first draft of the script, his first screenplay written alone. It was a semi-autobiographical project about his divorce from Sofia Coppola a decade earlier. One of the first actors he envisioned for the film was Joaquin Phoenix. In late 2011, Phoenix signed on to the project, with Warner Bros. Pictures acquiring US and German distribution rights. Carey Mulligan entered negotiations to star in the film. Although she was cast, she later dropped out due to scheduling difficulties. In April 2012, Rooney Mara signed on to replace Mulligan in the role. Chris Pratt's casting was announced in May 2013.
Jonze's long-time director of photography, Lance Acord, was not available to work on the movie. In his place, Jonze hired Hoyte van Hoytema. In discussing the film's look, Jonze told Van Hoytema that he wanted to avoid a dystopian look; instead, the two decided on a style that Van Hoytema termed "kind of a hybrid between being a little bit conceptual and being very theoretical". Van Hoytema took particular inspiration from Japanese photographer Rinko Kawauchi. In keeping with the film's theme, Van Hoytema sought to eliminate the color blue as much as possible, feeling it was too well associated with the sci-fi genre. He also felt that eliminating the color would give the rest of the colors "a specific identity".
Filming
Principal photography on Her took place in mid-2012, with a production budget of $23 million. It was primarily filmed in Los Angeles including the Warner Bros. backlot, along with the Bradbury Building serving as Theodore's apartment building. The skyline and some of the cityscape were filmed in Shanghai for an additional two weeks. During production of the film, actress Samantha Morton performed the role of Samantha by acting on set "in a four-by-four carpeted soundproof booth made of black painted plywood and soft, noise-muffling fabric." At Jonze's suggestion, she and Joaquin Phoenix avoided seeing each other on set during filming.
Morton was later replaced by Scarlett Johansson. Jonze explained: "It was only in post-production, when we started editing, that we realized that what the character/movie needed was different from what Samantha and I had created together. So we recast and since then Scarlett has taken over that role." Morton is credited as an associate producer. Jonze met Johansson in the spring of 2013 and worked with her for four months. Following the recast, new scenes were shot in August 2013, which were either "newly imagined" or "new scenes that [Jonze] had wanted to shoot originally but didn't."
Post-production
Eric Zumbrunnen and Jeff Buchanan served as the film's editors. Zumbrunnen stated that there was "rewriting" in a scene between Theodore and Samantha, after Theodore goes on a blind date. He explained that their goal in the scene was to make it clear that "she (Samantha) was connecting with him (Theodore) and feeling for him. You wanted to get the sense that the conversation was drawing them closer." Steven Soderbergh became involved in the film when Jonze's original cut ran over 150 minutes, and Soderbergh cut it down to 90 minutes. This was not the final version of the film, but it assisted Jonze in removing unnecessary sub-plots. Consequently, a supporting character played by Chris Cooper that was the subject of a documentary within the film was removed from the final cut.
Several scenes included fictional video games; these sequences were developed by animation artist David OReilly. His work on the film inspired him to explore developing his own video games, eventually leading to his first title, Mountain.
Soundtrack
The score for the film was credited to Arcade Fire, with additional music by Owen Pallett. Arcade Fire's Will Butler and Pallett were the major contributors. At the 86th Academy Awards, the score was nominated for Best Original Score. In addition to the score, Arcade Fire also wrote the song "Supersymmetry" for the film, which also appears on their album Reflektor. The melody for "Porno", another song from the same album, can also be heard during the soundtrack. Yeah Yeah Yeahs frontwoman Karen O recorded the song "The Moon Song", a duet with Vampire Weekend frontman Ezra Koenig, which was nominated for an Academy Award for Best Original Song.
Initially, the soundtrack was not released in digital or physical form. A 13-track score was made available for streaming online in January 2014, before being taken down. During an "Ask Me Anything" (AMA) session on Reddit on June 17, 2016, Will Butler mentioned the possibility of a future vinyl release. Finally, on February 10, 2021, Arcade Fire announced that the score would be available for the first time digitally, on white-colored vinyl, and on cassette on March 19, 2021.
Release
Her had its world premiere as the closing film at the 2013 New York Film Festival on October 12, 2013. The following day, it was screened at the Hamptons International Film Festival. It was also in competition during the 8th Rome International Film Festival, where Johansson won Best Actress. The film was set to have a limited release in North America on November 20, 2013, through Warner Bros. Pictures. It was later pushed back to a limited December 18, 2013 release, with a January 10, 2014 wide release in order to accommodate an awards campaign.
Her was released by Warner Home Video on Blu-ray Disc and DVD on March 4, 2014. The Blu-ray release includes three behind-the-scenes featurettes, while the DVD release contains one featurette. The film made $2.7 million in DVD sales and $2.2 million in Blu-ray Disc sales, for a total of $4.9 million in home media sales.
Reception
Box office
Her grossed $258,000 in six theaters during its opening weekend, averaging $43,000 per theater. The film earned over $3 million while on limited release, before expanding to a wide release of 1,729 theaters on January 10, 2014. On its first weekend of wide release the film took in $5.35 million. The film grossed $25.6 million in the United States and Canada, and $21.8 million in other territories, for a worldwide total of $47.4 million.
Critical response
On Rotten Tomatoes, the film has an approval rating of 95% based on 288 reviews, with an average rating of 8.5/10. The site's critical consensus reads, "Sweet, soulful, and smart, Spike Jonze's Her uses its just-barely-sci-fi scenario to impart wryly funny wisdom about the state of modern human relationships." On Metacritic, the film has a weighted average score of 91 out of 100, based on 47 critics, indicating "universal acclaim". Audiences polled by CinemaScore gave the film an average grade of "B−" on an A+ to F scale.
Rolling Stone's Peter Travers awarded the film three and a half stars out of four and particularly praised Johansson's performance, stating that she "speaks Samantha in tones sweet, sexy, caring, manipulative and scary" and that her "vocal tour de force is award-worthy". He also went on to call Jonze "a visionary". Richard Corliss of Time applauded Phoenix's performance, comparing his role to Sandra Bullock's in Gravity and Robert Redford's in All Is Lost: "Phoenix must communicate his movie's meaning and feelings virtually on his own. That he does, with subtle grace and depth. ... Phoenix shows us what it's like when a mourning heart comes alive—because he loves Her." Corliss cited HAL 9000 and S1m0ne as cinematic predecessors to Her and praised Johansson, calling her performance "seductive and winning". Todd McCarthy of The Hollywood Reporter called it "a probing, inquisitive work of a very high order", although he expressed disappointment that the ending is more conventional than the rest of the film. McCarthy examined the premise of the story and suggested that the film's central virtual relationship was better than Ryan Gosling's character's relationship with a sex doll in Lars and the Real Girl. McCarthy compared the "tender" and "vulnerable" performance of Phoenix to his "fearsome" performance in The Master. He also praised Jonze's writing for its insights into what people want out of love and relationships, as well as the acting performances that "[make] it all feel spontaneous and urgent."
Richard Roeper said that the film was "one of the more original, hilarious and even heartbreaking stories of the year" and called Phoenix "perfectly cast". Manohla Dargis of The New York Times named it "at once a brilliant conceptual gag and a deeply sincere romance." Claudia Puig of USA Today called the performance of Phoenix and Johansson "sensational" and "pitch-perfect", respectively. She further praised the film for being "inventive, intimate and wryly funny". Scott Mendelson of Forbes called Her "a creative and empathetic gem of a movie", praising Johansson's "marvelous vocal performance" and the supporting performances of Rooney Mara, Olivia Wilde, and Amy Adams. Liam Lacey of The Globe and Mail said that the film was "gentle and weird", praised its humor, and opined that it was more similar to Charlie Kaufman's Synecdoche, New York than Jonze's Being John Malkovich and Adaptation. Lacey also stated that Phoenix's performance was "authentically vulnerable" but that "his emotionally arrested development also begins to weigh the film down."
Conversely, Mick LaSalle of the San Francisco Chronicle criticized the story, pacing, and Phoenix's character. He also opined that the film was "a lot more interesting to think about than watch". J. R. Jones of the Chicago Reader gave the film 2 out of 4 stars, praising the performances of Phoenix and Johansson, but also criticizing Phoenix's character, calling him an "idiot". He also criticized the lack of realism in the relationship between Phoenix and Johansson's characters. Stephanie Zacharek of The Village Voice opined that Jonze was "so entranced with his central conceit that he can barely move beyond it", and criticized the dialogue as being "premeditated". At the same time, she praised Johansson's performance, calling it "the movie's saving grace", and stating that Her "isn't just unimaginable without Johansson—it might have been unbearable without her."
Top ten lists
Her was listed on many critics' top ten lists.
1st – David Edelstein, Vulture
1st – Michael Phillips, Chicago Tribune
1st – Ty Burr, Boston Globe
1st – Caryn James, Indiewire
1st – Christopher Orr, The Atlantic
1st – A.A. Dowd, The A.V. Club
1st – Marlow Stern, The Daily Beast
1st – Drew McWeeny, HitFix
1st – Scott Foundas, Variety
1st – Genevieve Koski, Scott Tobias, & Nathan Rabin, The Dissolve
1st – Connie Ogle & Rene Rodriguez, Miami Herald
1st – Kimberly Jones, Marjorie Baumgarten, & Mark Savlov, Austin Chronicle
2nd – Todd McCarthy, The Hollywood Reporter
2nd – Bill Goodykoontz, Arizona Republic
2nd – Peter Knegt, Indiewire
2nd – Kyle Smith, New York Post
2nd – Elizabeth Weitzman, New York Daily News
2nd – Matt Singer, The Dissolve
2nd – Tom Brook, BBC
2nd – Amy Nicholson, The Village Voice
2nd – Mara Reinstein, Us Weekly
3rd – Keith Phipps & Tasha Robinson, The Dissolve
3rd – Ignatiy Vishnevetsky, The A.V. Club
3rd – Christy Lemire, RogerEbert.com
3rd – Rafer Guzmán, Newsday
4th – Betsy Sharkey, Los Angeles Times
4th – Nigel M. Smith, Indiewire
4th – Film School Rejects
4th – Joe Neumaier, New York Daily News
4th – Bob Mondello, NPR
4th – Richard Corliss, Time
5th – Peter Travers, Rolling Stone
5th – Mark Olsen, Los Angeles Times
5th – Lisa Kennedy, Denver Post
5th – Lisa Schwarzbaum, BBC
5th – Peter Debruge, Variety
6th – James Berardinelli, Reelviews
6th – Sasha Stone, Awards Daily
6th – Ann Hornaday, The Washington Post
7th – Anne Thompson, Indiewire
7th – Peter Rainer, Christian Science Monitor
7th – Katey Rich, Vanity Fair
7th – David Ansen, The Village Voice
9th – Andrew O'Hehir, Salon.com
9th – Gregory Ellwood, HitFix
9th – Justin Chang, Variety
10th – Noel Murray, The Dissolve
Top 10 (listed alphabetically, unranked) – Joe Morgenstern, The Wall Street Journal
Top 10 (ranked alphabetically) – Carrie Rickey, CarrieRickey.com
Top 10 (listed alphabetically, unranked) – Stephen Whitty, The Star-Ledger
Top 10 (ranked alphabetically) – Dana Stevens, Slate
Top 10 (ranked alphabetically) – Joe Williams & Calvin Wilson, St. Louis Post-Dispatch
Best of 2013 (listed alphabetically, unranked) – David Denby, The New Yorker
Best of 2013 (listed alphabetically, unranked) – Manohla Dargis, The New York Times
Best of 2013 (listed alphabetically, unranked) – Kenneth Turan, Los Angeles Times
Accolades
Her has earned various awards and nominations, with particular praise for Jonze's screenplay. At the Academy Awards, the film was nominated in five categories, including Best Picture, with Jonze winning for Best Original Screenplay. At the 71st Golden Globe Awards, the film garnered three nominations, going on to win Best Screenplay for Jonze. Jonze was also awarded the Best Original Screenplay Award from the Writers Guild of America and at the 19th Critics' Choice Awards. The film also won Best Fantasy Film, Best Supporting Actress for Johansson, and Best Writing for Jonze at the 40th Saturn Awards. Her also won Best Film and Best Director for Jonze at the National Board of Review Awards, and the American Film Institute included the film in its list of the top ten films of 2013.
Legacy
In an article from The Verge discussing the film a decade after its release, Sheon Han argued that Her's exploration of complex feelings surrounding AI contrasted with that of other films depicting AI and human relationships.
A retrospective article from Wired similarly discussed its portrayal of AI-human relationships, with Kate Knibbs noting its more optimistic viewpoint of artificial general intelligence. Knibbs also claimed that with the advent of AI chatbots, the film "looks even more fantastical than when it debuted." Her has been referenced many times as an example of a voice assistant.
In 2024, OpenAI released their newest iteration of ChatGPT, GPT-4o. GPT-4o offers five integrated voices, one of which is named Sky, which was quickly noted to be similar to Scarlett Johansson's voice, even though she had repeatedly rejected OpenAI's offer for using her audio likeness. During the promotional lead-up to the release of GPT-4o, CEO Sam Altman had tweeted the single word "Her". A few days after release, OpenAI removed the Sky voice.
See also
Pygmalion, the myth that has been the inspiration for many stories involving love of a human for an artificial being.
Blade Runner, a 1982 film in which a police "blade runner", whose job it is to 'retire' replicants, starts a relationship with one.
Electric Dreams, a 1984 movie about a love triangle involving a sentient computer.
Jexi, a 2019 romantic comedy about a self-aware smartphone with a female-voiced virtual assistant that becomes emotionally attached to its socially awkward owner.
"From Agnes—With Love", episode 140 of The Twilight Zone, relating the mishaps faced by a meek computer programmer when the world's most advanced computer falls in love with him.
"Deeper Understanding", a song by Kate Bush originally released in 1989 about a relationship between a lonely person and a computer.
"Be Right Back", a February 2013 episode of the British series Black Mirror, about the relationship between a woman and the artificial intelligence created from the digital footprint of her late husband.
I'm Your Man, a 2021 German science fiction romance about a scientist who participates in a three-week trial with a humanoid robot programmed to make her happy.
Ex Machina, a 2014 science fiction thriller film about a programmer who is invited by his CEO to administer the Turing test to an intelligent humanoid robot.
Steins;Gate 0, a 2015 Japanese visual novel and its 2018 anime adaptation, which follows a PTSD-ridden student who becomes a tester for Amadeus, an artificial intelligence created in the image, and with the memories, of his deceased love interest.
References
External links
2013 films
2013 romantic drama films
2010s American films
2010s English-language films
2010s science fiction drama films
American romantic drama films
American science fiction drama films
Annapurna Pictures films
Samantha
Films about artificial intelligence
Films about computing
Films about divorce
Films about sexuality
Films about technological impact
Films about writers
Films directed by Spike Jonze
Films produced by Megan Ellison
Films set in 2025
Films set in Los Angeles
Films set in the future
Films shot in Los Angeles
Films shot in Shanghai
Films whose writer won the Best Original Screenplay Academy Award
Films with screenplays by Spike Jonze
Saturn Award–winning films
Stage 6 Films films
Warner Bros. films
Wild Bunch (company) films
Films scored by musical groups
Semi-autobiographical films
Films about letters (message)
English-language science fiction drama films
English-language romantic drama films
Existentialist films | Her (2013 film) | Technology | 5,087 |
28,162,060 | https://en.wikipedia.org/wiki/Acaryochloris%20marina | Acaryochloris marina is a species of unicellular Cyanobacteria that produces chlorophyll d as its primary pigment (instead of the typically used chlorophyll a), allowing it to photosynthesize using far-red light, at 700-750 nm wavelength. A. marina is found in temperate and tropic marine environments. Strains of A. marina have been isolated from multiple environments, including as epiphytes of red algae, associated with tunicates, and from rocks in intertidal zones (i.e. epilithic).
Description
It was first discovered in 1993 from coastal isolates of coral in the Republic of Palau in the west Pacific Ocean and announced in 1996. Despite the claim in the 1996 Nature paper that its formal description was to be published shortly thereafter, only a tentative partial description was presented, in 2003, due to phylogenetic issues (it is a deep-branching cyanobacterium).
Genome
Its genome was first sequenced in 2008, revealing a large bacterial genome of 8.3 Mb with nine plasmids.
Etymology
The name Acaryochloris is a combination of the Greek prefix a (ἄν) meaning "without", caryo (κάρυον) meaning "nut" (here intended as "nucleus") and chloros (χλωρός) meaning "green"; the Neo-Latin Acaryochloris therefore means "green without a nucleus".
The specific epithet marina is Latin meaning "marine".
Classification
Due to historical reasons, the classification of the Cyanobacteria is problematic and many names are not validly published, meaning they have not yet been placed into the classification framework. One of these not officially recognised species is Acaryochloris marina, which technically should be written as "Acaryochloris marina" in official writings, but in practice this is rarely done.
Exoplanet habitability
Scientists including NASA's Nancy Kiang have proposed that the existence of Acaryochloris marina suggests that organisms that use chlorophyll d, rather than chlorophyll a, may be able to perform oxygenic photosynthesis on exoplanets orbiting red dwarf stars (which emit much less light than the Sun). Because about 70% of the stars in the Milky Way galaxy are red dwarfs, the existence of A. marina implies that oxygenic photosynthesis may be occurring on far more exoplanets than astrobiologists initially thought possible.
See also
Prochlorococcus
References
Synechococcales
Environmental microbiology
Bacteria described in 2003
Cyanobacteria stubs | Acaryochloris marina | Environmental_science | 546 |
2,833,034 | https://en.wikipedia.org/wiki/St-connectivity | In computer science, st-connectivity or STCON is a decision problem asking, for vertices s and t in a directed graph, if t is reachable from s.
Formally, the decision problem is given by
$$\text{STCON} = \{ \langle G, s, t \rangle \mid G \text{ is a directed graph with a directed path from vertex } s \text{ to vertex } t \}.$$
Complexity
On a sequential computer, st-connectivity can easily be solved in linear time by either depth-first search or breadth-first search. The interest in this problem in computational complexity concerns its complexity with respect to more limited forms of computation. For instance, the complexity class of problems that can be solved by a non-deterministic Turing machine using only a logarithmic amount of memory is called NL. The st-connectivity problem can be shown to be in NL, as a non-deterministic Turing machine can guess the next node of the path, while the only information which has to be stored is the total length of the path and which node is currently under consideration. The algorithm terminates if either the target node t is reached, or the length of the path so far exceeds n, the number of nodes in the graph.
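For concreteness, the linear-time decision procedure can be sketched with breadth-first search (an illustrative sketch; note that it uses space linear in the graph, in contrast to the space-bounded algorithms discussed here):

<syntaxhighlight lang="python">
from collections import deque

def st_connected(adj, s, t):
    """Decide STCON by breadth-first search.
    adj maps each vertex to an iterable of its out-neighbors;
    runs in time and space linear in the size of the graph."""
    seen = {s}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

# 0 -> 1 -> 3 exists, but 3 cannot reach 0.
adj = {0: [1, 2], 1: [3], 2: [], 3: []}
assert st_connected(adj, 0, 3)
assert not st_connected(adj, 3, 0)
</syntaxhighlight>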
The complement of st-connectivity, known as st-non-connectivity, is also in the class NL, since NL = coNL by the Immerman–Szelepcsényi theorem.
In particular, the problem of st-connectivity is actually NL-complete, that is, every problem in the class NL is reducible to connectivity under a log-space reduction. This remains true for the stronger case of first-order reductions. The log-space reduction from any language in NL to STCON proceeds as follows: Consider the non-deterministic log-space Turing machine M that accepts a language in NL. Since there is only logarithmic space on the work tape, all possible states of the Turing machine (where a state is the state of the internal finite state machine, the position of the head and the contents of the work tape) are polynomially many. Map all possible states of the non-deterministic log-space machine to vertices of a graph, and put an edge between u and v if the state v can be reached from u within one step of the non-deterministic machine. Now the problem of whether the machine accepts is the same as the problem of whether there exists a path from the start state to the accepting state.
Savitch's theorem guarantees that the algorithm can be simulated in O(log² n) deterministic space.
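The midpoint recursion behind Savitch's theorem can be sketched as follows (illustrative only: as ordinary Python it keeps the whole graph in memory, but the recursion depth of O(log n), with O(log n) bits per frame, is what yields the O(log² n) space bound on a Turing machine):

<syntaxhighlight lang="python">
import math

def reach(adj, u, v, k):
    """True if there is a path from u to v of length at most 2**k.
    Assumes every vertex appears as a key of adj."""
    if k == 0:
        return u == v or v in adj.get(u, ())
    # Try every vertex as a midpoint u -> ... -> w -> ... -> v;
    # the two half-queries can reuse the same space.
    return any(reach(adj, u, w, k - 1) and reach(adj, w, v, k - 1)
               for w in adj)

def st_connected_savitch(adj, s, t):
    # A simple path has length at most n - 1, so k = ceil(log2 n) suffices.
    k = max(1, math.ceil(math.log2(max(len(adj), 2))))
    return reach(adj, s, t, k)

adj = {0: [1, 2], 1: [3], 2: [], 3: []}
assert st_connected_savitch(adj, 0, 3)
assert not st_connected_savitch(adj, 3, 0)
</syntaxhighlight>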
The same problem for undirected graphs is called undirected s-t connectivity and was shown to be in L by Omer Reingold. This research won him the 2005 Grace Murray Hopper Award. Undirected st-connectivity was previously known to be complete for the class SL, so Reingold's work showed that SL is the same class as L. On alternating graphs, the problem is P-complete.
References
Graph connectivity
Directed graphs
NL-complete problems | St-connectivity | Mathematics | 590 |
56,382,262 | https://en.wikipedia.org/wiki/Kestrel%20Institute | The Kestrel Institute is a nonprofit computer science research center located in Palo Alto's Stanford Research Park. Cordell Green, who founded Kestrel in 1981, is its Director and Chief Scientist. Its mission is to make it easier to write good, high-quality software and employs computer scientists like Lambert Meertens.
In the 1980s, Kestrel described its research focus as "knowledge-based software environments" to make it easier to write software ("normalize and mechanize the programming process"). In addition, a 2002 MIT Technology Review article described one of Kestrel's projects as a way to "almost force coders to write reliable programs". A 2005 Newsweek article discussed one Kestrel technology that developed software to help the U.S. military schedule cargo deployment by "translating a description of a problem into guidelines a computer can understand".
Nearly all of Kestrel's funding comes from government grants, from organizations such as the U.S. Department of Defense, DARPA, Intelligence Advanced Research Projects Activity (IARPA), Air Force Research Laboratory (AFRL), AFOSR, Office of Naval Research (ONR), NASA, and the National Science Foundation (NSF). In 2015, it received $4.9 million in grants and contributions, down from the previous year's $6.6 million.
References
External links
Computer science organizations
Computer science research organizations
Artificial intelligence associations
Science and technology think tanks
Organizations based in Palo Alto, California | Kestrel Institute | Technology | 303 |
32,221,795 | https://en.wikipedia.org/wiki/Andrea%20Rossi%20%28entrepreneur%29 | Andrea Rossi (born 3 June 1950) is an Italian entrepreneur who claimed to have invented a cold fusion device.
In the 1970s, Rossi claimed to have invented a process to convert organic waste into petroleum, and in 1978 he founded a company named Petroldragon to implement waste processing technology. In 1989 the company was shut down by the Italian government amid allegations of fraud, and Rossi was arrested. In 1996 Rossi moved to the United States and from 2001 to 2003 he worked under a U.S. Army contract to make a thermoelectric device that, while promising to be superior to other devices, produced only around 1/1000 of the claimed performance.
In 2008 Rossi attempted to patent a device called an Energy Catalyzer (or E-Cat), which was a purported cold fusion or Low-Energy Nuclear Reaction (LENR) thermal power source. Rossi claimed that the device produces massive amounts of excess heat that could be used to produce electricity, but independent attempts to reproduce the effect failed.
Biography
Andrea Rossi was born on 3 June 1950, in Milan.
In 1973, Rossi graduated in philosophy at the University of Milan writing a thesis on Albert Einstein's theory of relativity and its interrelationship with Edmund Husserl's phenomenology. Although Rossi also holds a degree in chemical engineering, this degree was granted by Kensington University in California, which was later shut down as a diploma mill.
Andrea Rossi is married to Maddalena Pascucci.
Business ventures
Petroldragon
In 1974, Rossi registered a patent for an incineration system. In 1978, he wrote The Incineration of Waste and River Purification, published in Milan by Tecniche Nuove.
He then founded Petroldragon, a company that was paid to process toxic waste, claiming to use Rossi's process to convert the waste into usable petroleum products.
In 1989 Italian customs seized several Petroldragon waste deposit sites and assets. Investigations showed that petroleum supposedly produced by the company had never been placed on the market, and that mixtures of toxic waste and harmful chemical solvents were being stored in silos or illegally dumped into the environment. Rossi himself was arrested and eventually tried on 56 counts, five of which ended in convictions related to tax fraud. As of 2004 the government of Lombardy had spent over forty million euros to dispose of the 70,000 tonnes of toxic waste that Petroldragon had improperly dumped.
Electricity from waste heat
In the US Rossi started the consulting firm Leonardo Technologies, Inc. (LTI). He secured a defense contract to evaluate the potential of generating electricity from waste heat by using thermoelectric generators. Such devices are normally only used for heating or cooling (Peltier effect), because the efficiency for generating electrical power is only a few percent. Rossi suggested that his devices could attain 20% efficiency. Larger modules would be manufactured in Italy. Rossi sent 27 thermoelectric devices for evaluation to the Engineer Research and Development Center; 19 of these did not produce any electricity at all. The remaining units produced less than one watt each, instead of the expected 800–1000 watts.
Energy Catalyzer
In January 2011, Andrea Rossi and Sergio Focardi claimed to have demonstrated commercially viable nuclear power in a device he called an Energy Catalyzer. The international patent application received an unfavorable international preliminary report on patentability because it seemed to "offend against the generally accepted laws of physics and established theories"; to overcome this objection, the application should have contained either experimental evidence or a firm theoretical basis in current scientific theories.
In February 2012, Australian aviator and skeptic Dick Smith offered Rossi US$1 million if Rossi could prove his device generated output many times input, as he had claimed. The offer lapsed, Rossi having declined to take up the challenge.
In 2014, the U.S. company Industrial Heat LLC acquired rights to the device, but later became involved in a legal dispute with Rossi, who asserted that licensing fees had not been paid. Industrial Heat countered that they had been unable to reproduce the claimed results, and the case was eventually settled out of court on undisclosed terms.
See also
Fischer–Tropsch process
Low energy nuclear reaction
Thermal depolymerization
References
1950 births
Businesspeople from Milan
Living people
20th-century Italian inventors
Cold fusion
21st-century Italian inventors | Andrea Rossi (entrepreneur) | Physics,Chemistry | 881 |
57,089,777 | https://en.wikipedia.org/wiki/Mycena%20alnicola | Mycena alnicola, is a mushroom species in the family Mycenaceae. It usually grows in temperate forests, associated with alders. First described by A.H. Smith in the Olympic Mountains, Washington, USA. He remarks that the bluish-gray cast is more pronounced than usual. He also described a variety Mycena alnicola var. odora with an odor and taste raphanoid.
Description
The cap is 1–2.5 cm wide, at first campanulate-obtuse, becoming convex at maturity. The pileus has a pale blue luster in younger specimens; the cap is hygrophanous. The cap disk is dark brown, and the rest of the cap is light brown (or beige), with the margin usually whitish. The pileus margin is furrowed-striate.
The gills are adnate, interveined, and narrow to moderately broad. They are gray, with entire edges.
The stem is 4–6 cm long × 1.5–2 mm wide, equal, hollow, and covered with a dense white bloom at the apex. The stipe color is dark beige to dark mouse gray, cap-colored at maturity. The smell and flavor are sweet.
Basidia with four sterigmata, basidiospores ellipsoid, smooth, amyloid, (6) 7–9 (10) × 4–5 μm.
Cheilocystidia present, clavate to broadly fusiform; subcylindrical to spindle-shaped or sometimes with one or two protuberances; smooth or with low incrustation at the apex in KOH. Cheilocystidia size range: 26–40 × 8–17 μm. Pleurocystidia are rare.
Similar species
Mycena abramsii and Mycena leptocephala are similar, but these species have a bleach-like or chlorine odor.
Ecology
Found on wood and logs (usually of Alnus).
References
alnicola
Taxa named by Alexander H. Smith
Fungi described in 1941
Fungus species | Mycena alnicola | Biology | 432 |
2,529,397 | https://en.wikipedia.org/wiki/Type%20Directors%20Club | The Type Directors Club (TDC) is an international organization devoted to typography and type design, founded in 1946 in New York City. TDC believes that type drives culture, and that culture drives type—and is dedicated to cataloging, showcasing, and exhibiting typography worldwide.
Founding member Milton Zudeck described the club's goals at their first exhibit opening in 1947.
Timeline
1943: The club was started as an unofficial gathering in 1943. Founding member Milton Zudeck described the club’s goals: “We simply want to make more and more advertising people aware of the importance of the agency typographer. We want them to realize that the selection of type for an advertisement demands a sixth sense that goes beyond the basic knowledge of typefaces.”
1946: The Type Directors Club organization was formed by several leading NY art directors, including Aaron Burns, Louis Dorfsman and Milton Zudeck.
1960: The TDC was composed of men for many years until 1960 when they recruited its first woman member, designer Beatrice Warde. Today, the Beatrice Warde Scholarship stands to commemorate Beatrice Warde and all her contributions to the field of typography and to the TDC.
1967: The TDC medal is the organization's most prestigious award, dedicated to the artful craft of type and typography. In 1967 the first TDC medal was awarded to Hermann Zapf; as of 2022 there have been 34 medalists.
1987: The TDC’s first international conference Type 1987 was held in Manhattan, giving participants the opportunity to gather with “stars” from outside the US like Adrian Frutiger and Neville Brody.
2018: As part of a rebranding led by Debbie Millman, the TDC adopted the Type Drives Culture conference slogan. The same year, the club held the first Ascenders competition, aimed at promoting outstanding designers under 35, and created a BIPOC scholarship, which was later renamed the Ade Hogue Scholarship.
2022: The TDC merged with The One Club.
2022: Upon the TDC’s reopening, Ksenya Samarskaya was appointed as TDC Managing Director with a mission to make it more open, diverse, and culturally engaging. TDC hosted Ezhishin, the first conference about Native North American typography.
Conferences
In 1955 the first TDC competition was held to recognize outstanding work in the profession, and in 1987 the club held its first international conference, Type 1987, in Manhattan. Since the 2018 rebranding led by Debbie Millman, the “Type Drives Culture” slogan has remained at the heart of the TDC's annual conference.
The most recent conference, Type Drives Culture 22, featured sessions with the overarching theme “Type: The Next 75 Years”.
TDC Medal
The TDC Medal is awarded for significant contributions to typography.
Competitions
Since the 1950s the TDC has held yearly type competitions: one for the use of type and letterforms in design, and the other for typeface design. The winners are reproduced in the Typography Annual and displayed in seven exhibits that travel worldwide. In addition to celebrating outstanding achievements, the typography competitions and resulting annuals serve as important historical records of typographic trends, and are an invaluable resource for both designers and scholars.[3]
Typography
Previously known as Communication Design, the Typography competition is the extension of the original competition started in the 1950s.
Type Design
The Type Directors Club Type Design Awards are given annually for excellence in typeface design. The award is generally viewed as being the most prestigious in the field.
Type Design Winners by Year
Lettering
Started in 2022.
Ascenders
In 2018, the TDC inaugurated Ascenders, a competition to recognize the achievements of designers 35 years of age and younger. In its inaugural year, TDC honored nineteen Ascenders from around the world.
TDC Annual — World’s Best Typography
TDC produces a design annual that features award-winning typography. The 2017 publication, The World's Best Type and Typography, was designed by Leftloft of Milan.
Scholarships
Beatrice Warde Scholarship
Winners receive a $5,000 USD award as well as a free one-year student membership to the TDC, offering discounts and other opportunities for conferences and events.
Previous winners of the award include Doah Kwon (2022), Ximena Amaya (2021), Tatiana Lopez (2020), Blossom Liu (2019), Anna Skoczeń (2018), Tasnima Tanzim (2017), Ania Wieluńska (2016), and Rebecca Bartola (2015).
Ade Hogue Scholarship
Formerly known as the Superscript scholarship.
Previous winners of the award include Ana Robles (2022) and Sakinah Bell (2021).
Ezhishin Scholarship
Started in 2023, the annual scholarship, funded by Google, is for Native American and First Nations individuals in the US and Canada, respectively, who exemplify a creative practice that explores typography, type design, or relevant linguistic work. Winners receive a $5,000 USD award as well as a free one-year student membership to the TDC, offering discounts and other opportunities for conferences and events.
Sponsors
The TDC is sponsored by A to A Studio Solutions, Adobe Typekit, Design Matters, Designer Journals, Facebook Analog Research Laboratory, Firebelly, Google, Glyphs, Monotype, Morisawa, Pandora, School of Visual Arts, SVA Masters in Branding, and Type Network, which help to support the following initiatives:
TDC Typography Annual
Student scholarships
Salons in NYC and other US cities
Educational workshops
TDC Book Night
TDC Competition Judges Night
New York exhibitions at TDC
International exhibitions of competition winners
Special events
References
External links
TDC Annuals
Typography
Design awards | Type Directors Club | Engineering | 1,241 |
15,089,769 | https://en.wikipedia.org/wiki/Natural%20history%20of%20New%20Zealand | The natural history of New Zealand began when the landmass Zealandia (today an almost entirely submerged mass of continental crust, with New Zealand and a few other islands rising above sea level) broke away from the supercontinent Gondwana in the Cretaceous period. Before this time, Zealandia shared its past with Australia and Antarctica. Since this separation, the New Zealand landscape has evolved in physical isolation, although much of its current biota has more recent connections with species on other landmasses. The exclusively natural history of the country ended in about 1300 AD, when humans first settled, and the country's environmental history began. The period from 1300 AD to today coincides with the extinction of many of New Zealand's unique species that had evolved there.
The break-up of Gondwana left the resulting continents, including Zealandia, with a shared ecology. Zealandia began to move away from the part of Gondwana which would become Australia and Antarctica approximately 85 million years ago (Ma). By about 70 Ma, the break up was complete. Zealandia has been moving northwards ever since, changing both in relief and climate. Most of the present biota of New Zealand has post-Gondwanan connections to species on other landmasses, but does include a few descendants of Gondwanan lineages, such as the Saint Bathans mammal. Overall, trans-oceanic dispersal has played a clear role in the formation of New Zealand's biota. Several elements of the Gondwana biota are present in New Zealand today: predominantly plants, such as the podocarps and the southern beeches, but also distinctive insects, birds, frogs and the tuatara.
In the Duntroonian stage of the Oligocene, the land area of Zealandia was at a minimum. It has been suggested that water covered all of it, but the consensus is that low-lying islands remained, perhaps a quarter of the modern land area of New Zealand.
Before the split (Gondwana, 85 million years ago)
In the late Cretaceous, Gondwana was a fraction of its original size; however, the landmasses that would become Australia, Antarctica and Zealandia were still attached. Most of the modern 'Gondwanan fauna' had its origin in the Cretaceous. During this time Zealandia was temperate and almost flat, with no alpine environments.
Gondwanan fauna
Fossils found at Lightning Ridge, New South Wales, suggest that 110 million years ago (Ma), Australia supported a number of different monotremes, but did not support any marsupials. Marsupials appear to have evolved during the Cretaceous in the contemporary northern hemisphere, to judge from a 100-million-year-old marsupial fossil, Kokopellia, found in the badlands of Utah. Marsupials would then have spread to South America and Gondwana. The first evidence of mammals (both marsupial and placental) in Australia comes from the Tertiary, at a 55-million-year-old fossil site at Murgon, in southern Queensland. Because Zealandia had already rifted away by this time, this explains the absence of ground-dwelling marsupials and placental mammals from New Zealand's fossil record.
Dinosaurs continued to prosper but, as the angiosperms diversified, conifers, bennettitaleans and pentoxylaleans disappeared from Gondwana 115 Ma together with the specialised herbivorous ornithischians, whilst generalist browsers, such as several families of sauropodomorph Saurischia, prevailed. The Cretaceous–Paleogene extinction event killed off all dinosaurs except birds, but plant evolution in Gondwana was hardly affected. Gondwanatheria is an extinct group of non-therian mammals with a Gondwanan distribution (South America, Africa, Madagascar, India, and Antarctica) during the Late Cretaceous and Palaeogene. Xenarthra and Afrotheria, two placental clades, are of Gondwanan origin and probably began to evolve separately when Africa and South America separated.
Gondwanan flora
Angiosperms evolved in northern Gondwana/southern Laurasia during the Early Cretaceous and radiated worldwide. The southern beeches, Nothofagus, are prominent members of this early angiosperm flora. The Late Cretaceous pollen record shows that some types of flora evolved across Gondwana, while others originated in Antarctica and spread to Australia. Fossils of Nothofagus have also recently been found in Antarctica.
The laurel forests of Australia, New Caledonia, and New Zealand have a number of species related to those of the laurissilva of Valdivia, through the connection of the Antarctic flora. These include gymnosperms and the deciduous species of Nothofagus, as well as the New Zealand laurel, Corynocarpus laevigatus, and Laurelia novae-zelandiae. At this time Zealandia was mostly covered in forests of podocarps, araucarian pines, and ferns.
Rafting away (latest Cretaceous 85–66 Ma)
The Australia-New Zealand continental part of Gondwana split from Antarctica in the late Cretaceous (95–90 Ma). This was followed by Zealandia separating from Australia (c. 85 Ma). The split started from the southern end and eventually formed the Tasman Sea. By about 70 Ma, the continental crust of Zealandia had separated from Australia and Antarctica. However, it is not known when the division of land above sea level occurred, and for some time only shallow seas would have separated Zealandia and Australia in the north. Dinosaurs continued to live in New Zealand and had about 10–20 million years to evolve unique species after the separation from Gondwana.
In the Cretaceous, New Zealand was much further south (c. 80 degrees south) than it is today; however, it and much of Antarctica were covered in trees, as the climate 90 Ma was much warmer and wetter than today.
New Zealand's present native fauna does not contain land mammals (other than bats) or snakes. Neither marsupials nor placental mammals evolved in time to reach Australia before the split. The multituberculates, a primitive type of mammal, may have evolved in time to reach New Zealand over the land bridge. The evolution and dispersal of snakes is less certain; owing to their poor fossil record, it is unclear whether they were in Australia before the opening of the Tasman Sea. Ratites evolved around c. 80 Ma and may have been present in Zealandia at this time.
Swamps and rifting (Paleocene to Eocene 66 to 33.9 Ma)
At the start of the Paleocene New Zealand's biota was recovering from the extinction of dinosaurs, and the species that survived were expanding into the empty niches. There was a slight decrease in mean temperature at the start of the Paleocene, leading to a change in canopy species. Zealandia was largely covered by shallow seas with low-lying land and swamps. The oldest penguin fossil in the world and various other sea birds are found in New Zealand from this time.
The Tasman Sea continued to expand until the early Eocene (53 Ma). The western half of Zealandia then joined with Australia to form the Australian Plate (40 Ma). In response, a new plate boundary was created within Zealandia, between the Australian Plate and the Pacific Plate. This led to the formation of a subduction arc, with active volcanism forming islands north and west of present New Zealand. New Zealand was low-lying due to this extension, and swamps became widespread. Today these are recorded as large coal seams in the geological record.
The isolation of Antarctica and the formation of the Antarctic Circumpolar Current is credited by many researchers with causing the glaciation of Antarctica and global cooling in the Eocene epoch. Oceanic models have shown that the opening of these two passages limited polar heat convergence and caused a cooling of sea surface temperatures by several degrees; other models have shown that CO2 levels also played a significant role in the glaciation of Antarctica. Published estimates of the onset of the Antarctic Circumpolar Current vary, but it is commonly considered to have started at the Eocene/Oligocene boundary.
Whales were completely marine creatures by 40 Ma; New Zealand's oldest whale fossils are from 35 Ma.
New Zealand's shallow seas (Oligocene 33.9 to 23 Ma)
From the early Oligocene, at maximum submersion of the Zealandia landmass, almost all New Zealand's rocks are marine. Oligocene terrestrial sediments are few, scattered, and not well-dated.
It has been suggested that at some point, Zealandia was entirely underwater, and consequently all land biota would be descended from later immigrants. However, molecular estimates of divergence times between 248 extant New Zealand lineages and their closest relatives elsewhere follow approximately a smooth exponential over the last 50 million years or more. Some 74 of these lineages appear to have survived the Oligocene in New Zealand. There is no evidence for a deficit of pre-Oligocene lineages, nor an excess of lineages arriving just afterward. This strongly suggests that New Zealand was never completely submerged. Although there is no obvious peak of lineage extinction in the Oligocene, the limited diversity of mitochondrial DNA in kiwis, moas, and New Zealand wrens indicates that all three lineages underwent a genetic bottleneck (small effective populations) roughly coinciding with the maximum submersion; New Zealand at this time probably consisted of low-lying islands with a limited diversity of habitats.
Significant uplift occurred by the mid-Oligocene (~32–29 Ma) in the modern Canterbury Basin, where palaeochannels eroded through the early Oligocene Amuri Limestone lead eastwards to the present Bounty Trough.
The North and South Islands have been separate for most of the last 30 million years, allowing the development of separate subspecies.
The Southern Alps, Foulden Maar and Saint Bathans Fauna (Miocene – Pliocene 23 to 2.6 Ma)
Major uplift occurred on the Alpine Fault, which started to form the hills and the mountains that became the Southern Alps.
Foulden Maar, a maar-diatreme volcano in Otago, preserved a high diversity of freshwater fish, arthropods, plants and fungi at a lake 23 Ma. It is the only known maar of its kind in the Southern Hemisphere and is one of New Zealand's pre-eminent fossil sites. Fossil evidence derived from pollen and spores suggests a warm temperate or sub-tropical rain forest with canopy trees, an understorey of shrubs and ferns, and pioneer species on the margins. Climatically, the area resembled modern-day south-eastern Queensland, a humid sub-tropical zone, with species that no longer occur in the New Zealand flora. The lake contained small and large galaxiid fishes and eels, ducks (inferred from coprolites), and likely crocodiles as well.
The Saint Bathans Fauna represents a detailed record of New Zealand's terrestrial life in the Miocene. It shows that small land mammals and crocodiles existed and have since become extinct. The earliest moa remains come from the Miocene Saint Bathans Fauna; known from multiple eggshells and hindlimb elements, they represent at least two already fairly large species.
The boundaries defining the Pliocene are not set at an easily identified worldwide event but rather at regional boundaries between the warmer Miocene and the relatively cooler Pliocene. The upper boundary was set at the start of the Pleistocene glaciations. Uplift intensified on the Alpine Fault, forming the Southern Alps. This global cooling, coupled with the increase in elevation, led to the local extinction of many groups of plants that are still found in New Caledonia. The new niches created in the mountains were filled by migrants from Australia and by species that could evolve quickly.
The Taupo Volcanic Zone and ice age (Pleistocene – Holocene 2.6 Ma to today)
The ice age began 2.6 Ma, at the start of the Pleistocene epoch, and is defined by the presence of ice sheets on Greenland and Antarctica. During the warmer periods sea level was higher than today, leading to raised beaches around New Zealand. New Zealand's flora is still recovering from the last glacial maximum. About 2 Ma, extension and subduction under the North Island formed the Taupo Volcanic Zone, leading to the central North Island being covered in cobalt-deficient soils that restrict forest development. One of the largest eruptions was the Lake Taupo eruption of 186 AD.
Since the last glacial maximum there have been three major climatic periods: the coldest period from 28–18,000 years ago, an intermediate period from 18–11,000 years ago, and the current, warmer Holocene interglacial of the last 11,000 years. In the first period global sea levels were about 120 m lower than today. This made most of New Zealand a single island and exposed great sections of the currently submerged continental shelf. Temperatures were about 4–5 °C lower than today. Much of the Southern Alps and Fiordland were glaciated, and much of the rest of New Zealand was covered in grass or shrubs, owing to the cold and dry climate. These vast tracts of exposed land with little vegetation cover increased wind erosion and the deposition of loess (windblown dust). The cold, dry climate reduced forest cover, and many canopy species were restricted to the northern areas of the country. The kauri was at the time only present in Northland, but has progressively moved south from there over the last 7,000 years, reaching its current limit about 2,000 years ago.
See also
Environment of New Zealand
Geology of New Zealand
References
Sources
Biota of New Zealand
Environment of New Zealand | Natural history of New Zealand | Biology | 2,836 |
39,902,695 | https://en.wikipedia.org/wiki/Architecture%20of%20Paris | The city of Paris has notable examples of architecture from the Middle Ages to the 21st century. It was the birthplace of the Gothic style, and has important monuments of the French Renaissance, the Classical revival, the flamboyant Second Empire style of the reign of Napoleon III, the Belle Époque, and the Art Nouveau style. The great Expositions Universelles of 1889 and 1900 added Paris landmarks, including the Eiffel Tower and the Grand Palais. In the 20th century, the Art Deco style of architecture first appeared in Paris, and Paris architects also influenced the postmodern architecture of the second half of the century.
Gallo-Roman architecture
Very little architecture remains from the ancient town of Lutetia, founded by a Celtic tribe known as the Parisii in about the 3rd century BC. It was conquered by the Romans in 52 BC and turned into a Gallo-Roman garrison town. It was rebuilt in the 1st century AD on the classic Roman plan: a north–south axis, or cardo (now rue Saint-Jacques), and an east–west axis, or decumanus, of which traces have been found on the Île de la Cité, at rue de Lutèce. The center of Roman administration was on the island; the Roman governor's palace stood where the Palais de Justice is located today. The right bank was largely undeveloped. The city grew on the Left Bank, on the slopes of the Mont Sainte-Geneviève. The Roman forum was on the summit of the hill, under the present Rue Soufflot, between the boulevard Saint-Michel and rue Saint-Jacques.
The Roman town had three large baths near the forum, supplied with water by a 46-kilometer-long aqueduct. Vestiges of one bath, the Thermes de Cluny, can still be seen on Boulevard Saint-Michel. It was the largest of the three baths, one hundred meters by sixty-five meters, and was built at the end of the 2nd century or the beginning of the 3rd century AD, at the height of the town's grandeur. The baths are now part of the Musée national du Moyen Âge, or National Museum of the Middle Ages. Nearby, on rue Monge, are the vestiges of the Roman amphitheater, called the Arènes de Lutèce, which was rediscovered and restored in the 19th century. Though the population of the town was probably no more than 5–6 thousand persons, the amphitheater measured 130 meters by 100 meters and could seat fifteen thousand persons. Fifteen tiers of seats remain from the original thirty-five. It was built in the 1st century AD and was used for combats of gladiators and animals, and also for theatrical performances.
Another notable piece of Gallo-Roman architecture was discovered under the choir of Notre-Dame de Paris; the Pillar of the Boatmen, a fragment of a Roman column with carvings of both Roman and Gallic gods. It was probably made at the beginning of the 1st century during the reign of the Emperor Tiberius to honor the league of the boatmen, who played an important part in the town's economy and religious and civic life. It is now on display in the Roman baths at the Museum of the Middle Ages. Other fragments of Gallo-Roman architecture are found in the crypt under the square in front of the Cathedral of Notre Dame; and in the Church of Saint-Pierre de Montmartre, where several Roman columns, probably from a temple, were re-used in the late 12th century to build a Christian church.
Romanesque churches
Unlike southern France, Paris has very few examples of Romanesque architecture; most churches and other buildings in that style were rebuilt in the Gothic style. The most remarkable example of Romanesque architecture in Paris is the church of the Abbey of Saint-Germain-des-Prés, built between 990 and 1160 during the reign of Robert the Pious. An earlier church had been destroyed by the Vikings in the 9th century. The oldest elements of the original church existing today are the tower (the belfry at the top was added in the 12th century) and the chapel of Saint Symphorien, on the south flank of the bell tower, built in the 11th century. It is considered the earliest existing place of worship in Paris. The Gothic choir, with its flying buttresses, was added in the mid-12th century and was consecrated by Pope Alexander III in 1163. It was one of the earliest Gothic elements to appear in a Paris church.
Romanesque and Gothic elements are found together in several old Paris churches. The church of Saint-Pierre de Montmartre (1147–1200) is the only surviving building of the vast Abbey of Montmartre, which once covered the top of the hill; it has both ancient Roman columns and one of the first examples of a Gothic arched ceiling, in the nave near the choir. The interior of the church of Saint-Julien-le-Pauvre (1170–1220) has been extensively rebuilt, but it still has massive Romanesque columns, and the exterior is a classic example of the Romano-Gothic style. The former priory of Saint-Martin-des-Champs (1060–1140) has a choir and chapels supported by buttresses (contreforts) and a Romanesque bell tower. It now belongs to the Musée des Arts et Métiers.
The Middle Ages
The Palais de la Cité
In 987 Hugues Capet became the first King of France, and established his capital in Paris, though at the time his kingdom was little bigger than the Île-de-France, or modern Paris region. The first royal residence, the Palais de la Cité, was established within the fortress at the western end of the Île de la Cité, where the Roman governors had established their residence. Capet and his successors gradually enlarged their kingdom through marriages and conquests. His son, Robert the Pious (972–1031), built the first palace, the Palais de la Cité, and royal chapel within the walls of the fortress, and his successors embellished it over the centuries; by the reign of Philippe le Bel in the 14th century, it was the most magnificent palace in Europe. The tallest structure was the Grosse Tour, or great tower, built by Louis le Gros between 1080 and 1137. It had a diameter of 11.7 meters at the base and walls three meters thick, and remained until its demolition in 1776. The ensemble of buildings (seen in the image at right as they were between 1412 and 1416) included a royal residence, a great hall for ceremonies, and four large towers along the Seine on the north side of the island, as well as a gallery of luxury shops, the first Paris shopping center. Between 1242 and 1248 King Louis IX, later known as Saint Louis, built an exquisite Gothic chapel, Sainte-Chapelle, to house the relics of the Passion of Christ which he had acquired from the Emperor of Byzantium.
In 1358, a rebellion of the Parisian merchants against the royal authority, led by Étienne Marcel, caused the King, Charles V, to move his residence to a new palace, the Hôtel Saint-Pol, near the Bastille at the eastern edge of the city. The old palace was used occasionally for special ceremonies and to welcome foreign monarchs, but housed the administrative offices and courts of the Kingdom, as well as an important prison. The Great Hall was destroyed by a fire in 1618 and rebuilt; another fire, in 1776, destroyed the residence of the King and the Montgomery tower. During the French Revolution, the revolutionary tribunal was housed in the building; hundreds of persons, including Queen Marie Antoinette, were tried and imprisoned there before being taken to the guillotine. After the Revolution the Conciergerie served as a prison and courthouse. It was burned by the Paris Commune in 1871, but was rebuilt. The prison was closed in 1934, and the Conciergerie became a museum.
Several vestiges of the medieval Palais de la Cité, extensively modified and restored, can still be seen today: the royal chapel, Sainte-Chapelle; the Hall of the Men-at-Arms (early 14th century), the former dining hall of the palace officials and guards, located underneath the now-vanished Great Hall; and the four towers along the Seine facing the right bank. The façade was built in the 19th century. The tower on the far right, the Tour Bonbec, is the oldest, built between 1226 and 1270 during the reign of Louis IX, or Saint Louis. It is distinguished by the crenelation at the top of the tower. It originally was a story shorter than the other towers, but was raised to match their height in the renovation of the 19th century. The tower served as the primary torture chamber during the Middle Ages. The two towers in the center, the Tour de César and the Tour d'Argent, were built in the 14th century, during the reign of Philippe le Bel. The tallest tower, the Tour de l'Horloge, was constructed by Jean le Bon in 1350, and modified several times over the centuries. The first public clock in Paris was added by Charles V in 1370. The sculptural decoration around the clock, featuring allegorical figures of The Law and Justice, was added in 1585 by Henry III.
City walls and castles
Much of the architecture of medieval Paris was designed to protect the city and King against attack; walls, towers, and castles. Between 1190 and 1202, King Philippe-Auguste began construction of a wall five kilometers long to protect the city on the right bank. The wall was reinforced by seventy-seven circular towers, each no more than six meters in diameter. He also began construction of a large castle, the Louvre, where the wall met the river. The Louvre was protected by a moat and a wall with ten towers. In the center was a massive circular donjon or tower, thirty meters high and fifteen meters in diameter. It was not then the residence of the King, but Philippe Auguste placed the royal archives there. Another walled complex of buildings, the Temple, the headquarters of the Knights Templar, was located on the right bank, centered around a massive tower.
The city on the right bank continued to grow outwards. The Provost of the Merchants, Étienne Marcel, began building a new city wall in 1356, which doubled the area of the city. The Louvre, now surrounded by the city, was given rich decoration and a grand new stairway, and gradually became more of residence than a fortress. Charles V, in 1364–80, moved his primary residence from the City Palace to the Hôtel Saint-Pol, a comfortable new palace in the new Le Marais quarter. To protect his new palace and the eastern flank of the city, in 1370 Charles began building the Bastille, a fortress with six cylindrical towers. At the same time, further east, in the forest of Vincennes, Charles V built an even larger castle, the Château de Vincennes, dominated by another massive keep or tower fifty-two meters high. It was completed in 1369. Beginning in 1379, close to the Château, he began constructing a replica of Sainte-Chapelle. Unlike the Sainte-Chapelle in the city, the interior of the Sainte-Chapelle of Vincennes was not divided into two levels; the interior was a single space, flooded with light.
Churches – the birth of the Gothic Style
The style of Gothic architecture was born in the rebuilding of the chevet of the Basilica of Saint-Denis, just outside Paris, finished in 1144. Twenty years later, the style was used on a much larger scale by Maurice de Sully in the construction of the Cathedral of Notre-Dame de Paris. The construction continued into the 14th century, beginning with the choir in the east; the twin towers of the west façade were completed later. The style evolved as the construction continued; the openings of the rose windows on the western façade were relatively narrow, while the great rose windows of the central transept were much more delicate and allowed in much more light. At the western end, the walls were supported by buttresses built directly against the walls; in the center, completed later, the walls were supported by two steps of flying buttresses. In the last century of construction, the buttresses were able to cross the same distance with a single stone arch. The towers on the west were more stately and solemn, in the classic Gothic style, while the eastern elements of the cathedral, with their combination of rose windows, spires, buttresses and pinnacles, belonged to a more elaborate and decorative style, called the Gothic rayonnant.
Other Paris churches soon adopted the Gothic style; the choir of the Abbey church of Saint-Germain-des-Prés was completely rebuilt in the new style, with pointed arches and flying buttresses. The church of Saint-Pierre de Montmartre was rebuilt with ogives, or Gothic pointed arches. The church of Saint-Germain-l'Auxerrois, next to the Louvre, was given a portal inspired by Notre Dame, and the Church of Saint-Séverin was given a Gothic nave with the first triforium, or first-story side gallery, in Paris. The supreme example of the new style was the upper chapel of Sainte-Chapelle, where the walls seemed to be made entirely of stained glass.
The Gothic Style went through another phase between 1400 and about 1550; the Flamboyant Gothic, which combined extremely refined forms and rich decoration. The style was used not only in churches, but also in some noble residences. Notable existing examples are the Church of Saint-Séverin (1489–95) with its famous twisting pillar; the elegant choir of the church of St-Gervais-et-St-Protais; the Tour Saint-Jacques, the flamboyant Gothic vestige of an abbey church destroyed during the Revolution; and the chapel of the residence of the Abbots of Cluny, now the Museum of the Middle Ages, and the ceiling of the Tour Saint-Jean-Sans-Peur, a vestige of the former residence of the Dukes of Burgundy, in the 2nd arrondissement.
Houses and manors
The houses in Paris during the Middle Ages were tall and narrow, usually four or five stories. They were constructed of wooden beams on a stone foundation, with the walls covered by white plaster to prevent fires. There was usually a shop on the ground floor. Houses built of stone were reserved for the wealthy; the oldest house in Paris is considered to be the Maison de Nicolas Flamel, at 51 rue Montmorency in the 3rd arrondissement, built in 1407. It was not a private residence, but a kind of hostel. Two houses with exposed beams at 13–15 rue François-Miron in the 4th arrondissement, often described as medieval, were actually built in the 16th and 17th centuries.
While no ordinary houses survive from the Middle Ages, there are several examples of manors built for the nobility and the high clergy. The Tour Jean-sans-Peur, at 20 rue Étienne-Marcel in the 2nd arrondissement, built in 1409–11, was part of the Hôtel de Bourgogne, the Paris residence of the Dukes of Burgundy. Built by Robert de Helbuterne, it contains a stairway with a magnificent Flamboyant Gothic ceiling. The Hôtel de Cluny, residence of the abbots of the Cluny Monastery and now the Musée national du Moyen Âge, or National Museum of the Middle Ages (1490–1500), has a typical feature of manors of the period: a stairway in a tower on the exterior of the building, in the courtyard. It also contains a chapel with a spectacular Flamboyant Gothic ceiling. The Hôtel de Sens was the Paris residence of the Archbishop of Sens, who had authority over the Bishops of Paris. It also featured a separate stairway tower in the courtyard.
Renaissance Paris (16th century)
The Italian Wars conducted by Charles VIII and Louis XII at the end of the 15th and the beginning of the 16th century, while not very successful from a military point of view, had a direct and beneficial effect on the architecture of Paris. The two Kings returned to France with ideas for magnificent public architecture in the new Italian Renaissance style, and brought Italian architects to build them. A new manual of classical Roman architecture by the Italian Serlio also had a major effect on the new look of French buildings. A distinctly French Renaissance style, making lavish use of cut stone and ornamental sculpture, developed under Henry II after 1539.
The first structure in Paris in the new style was the old Pont Notre-Dame (1507–12), designed by the Italian architect Fra Giocondo. It was lined with 68 artfully designed houses, the first example of Renaissance urbanism. King Francis I commissioned the next project; a new Hôtel de Ville, or city hall, for the city. It was designed by another Italian, Domenico da Cortona, and begun in 1532 but not finished until 1628. The building was burned in 1871 by the Paris Commune, but the central portion was faithfully reconstructed in 1882. A monumental fountain in the Italian style, the Fontaine des Innocents, was built in 1549 as a tribune for the welcome of the new King, Henry II, to the city on June 16, 1549. It was designed by Pierre Lescot with sculpture by Jean Goujon, and is the oldest existing fountain in Paris.
The first Renaissance Palace built in Paris was the Château de Madrid; it was a large hunting lodge designed by Philibert Delorme and erected between 1528 and 1552 west of the city in what is now the Bois de Boulogne. It was a combination of French and Italian Renaissance styles, with a high French-style roof and Italian loggias. It was demolished beginning in 1787, but a fragment can still be seen today in the Trocadero Gardens in the 16th arrondissement.
Under Henry II and his successors, the Louvre was gradually transformed from a medieval fortress into a Renaissance palace. The architect Pierre Lescot and the sculptor Jean Goujon made the Lescot wing of the Louvre, a masterpiece of combined French and Italian Renaissance art and architecture, on the southeast side of the Cour Carrée of the Louvre (1546–53). Inside the Louvre, they made the staircase of Henry II (1546–53) and the Salle des Caryatides (1550). Both French and Italian elements were combined: the antique orders and paired columns of the Italian Renaissance were joined with sculpted medallions and high roofs broken by windows (later known as the Mansard roof), which were characteristic of the French style.
After the accidental death of Henry II of France in 1559, his widow Catherine de' Medici (1519–1589) planned a new palace. She sold the medieval Hôtel des Tournelles, where her husband had died, and began building the Tuileries Palace in 1564, using the architect Philibert de l'Orme. During the reign of Henry IV (1589–1610), the building was enlarged to the south, so that it joined the long riverside gallery, the Grande Galerie, which ran all the way to the older Louvre Palace in the east.
Religious architecture
Most of the churches built in Paris in the 16th century are in the traditional Flamboyant style, though some have features borrowed from the Italian Renaissance. The most important Paris church of the Renaissance is Saint-Eustache, 105 meters long, 44 meters wide and 35 meters high, which in size and grandeur approaches the Cathedral of Notre-Dame. King Francis I wanted a monument as the centrepiece for the neighborhood of Les Halles, where the main city market was located. The church was designed by the King's favorite architect, Domenico da Cortona. The project was begun in 1519, and construction began in 1532. The pillars were inspired by the monastery church of Cluny, and the soaring interior is taken from the Gothic cathedrals of the 13th century, but Cortona added details and ornament taken from the Italian Renaissance. It was not completed until 1640.
The other churches of the period follow the more traditional Flamboyant Gothic models. They include Saint-Merri (1520–52), with a plan similar to Notre-Dame; Saint-Germain-l'Auxerrois, which features impressive flying buttresses; and the Église Saint-Médard, whose choir was built beginning in 1550. St-Gervais-et-St-Protais features a soaring Gothic vault in the apse, but also has a transept in a more sober classical style inspired by the Renaissance (the baroque façade was added in the 17th century). Saint-Étienne-du-Mont (1510–86), near the modern Panthéon on Mont Sainte-Geneviève, has the only remaining Renaissance rood screen (1530–35), a magnificent bridge across the center of the church. The Flamboyant Gothic church of Saint-Nicolas-des-Champs (1559) has a striking Renaissance feature: a portal on the right side inspired by designs of Philibert Delorme for the former royal residence, the Palace of Tournelles, in the Marais.
Houses and hôtels particuliers
The ordinary Paris house of the Renaissance was little changed from the medieval house; they were four to five stories high, narrow, built on a stone foundation of wood covered with plaster. They usually had a "pigeon", or gabled roof. The two houses at 13–15 rue François Miron (actually built in the 16th or 17th century, but often described as medieval houses) are good examples of the Renaissance house.
Once the French court returned to Paris from the Loire Valley, the nobility and wealthy merchants began to build hôtels particuliers, or large private residences, mostly in the Marais. They were built of stone and richly decorated with sculpture. They were usually built around a courtyard and separated from the street. The residence was located between the courtyard and the garden. The façade facing the courtyard had the most sculptural decoration; the façade facing the garden was usually rough stone. The Hôtel Carnavalet at 23 rue de Sévigné (1547–49), designed by Pierre Lescot and decorated with sculpture by Jean Goujon, is the best example of a Renaissance hôtel. As the century advanced, the exterior stairways disappeared and the façades became more classical and regular. A good example of the later style is the Hôtel d'Angoulême Lamoignon, at 24 rue Pavée in the 4th arrondissement (1585–89), designed by Thibaut Métezeau.
The 17th century – The Baroque, the dome, and the debut of Classicism
The architectural style of the French Renaissance continued to dominate in Paris through the Regency of Marie de' Medici. The end of the Wars of Religion allowed the continuation of several building projects, such as the expansion of the Louvre, begun in the 16th century but abandoned because of the wars. With the arrival in power of Louis XIII and the ministers Richelieu and Mazarin, a new architectural style, the Baroque, imported from Italy, began to appear in Paris. Its purpose, like that of Baroque music and painting, was to awe Parisians with its majesty and ornament, in opposition to the austere style of the Protestant Reformation. The new style in Paris was characterized by opulence, irregularity, and an abundance of decoration. The straight geometric lines of the buildings were covered with curved or triangular frontons, niches with statues or caryatids, cartouches, garlands of drapery, and cascades of fruit carved from stone.
Louis XIV distrusted the unruly Parisians and spent as little time as possible in Paris, finally moving his Court to Versailles, but at the same time he wanted to transform Paris into "The New Rome", a city worthy of the Sun King. Over the course of his long reign, from 1643 until 1715, the architectural style in Paris gradually changed from the exuberance of the Baroque to a more solemn and formal classicism, the embodiment in stone of the King's vision of Paris as "the new Rome." The new Académie royale d'architecture, founded in 1671, imposed an official style, as the Academies of art and literature had earlier done. The style was modified again beginning in about 1690, as the government began to run short of money; new projects were less grandiose.
Royal squares and urban planning
In the 17th century, the first large-scale urban planning of Paris was initiated by royal ordinance, largely based on the model of Italian cities, including the construction of the first residential squares. The first two squares, the Place Royale (now Place des Vosges, 1605–12) and the Place Dauphine, the latter in place of the old royal garden on the Île de la Cité, were both begun by Henry IV, who also completed the first Paris bridge without houses, the Pont Neuf (1599–1604). The Place Royale had nine large residences on each of its four sides, with identical façades. The Place Dauphine had forty houses on its three sides (of which just two remain today). Louis XIV continued the style with the Place des Victoires (1684–97) and the Place Vendôme (1699–1702). Both of these squares were designed by Jules Hardouin-Mansart, had statues of the King in the center, and were financed largely by the sale of the houses around them. The residences around the latter two squares had identical classical façades and were built of stone, following Hardouin-Mansart's Grand Style used in his monumental buildings. The residential squares all had pedestrian arcades on the ground floors, and what became known as a mansard window breaking the line of the high roof. They set a model for European squares in the 18th century.
Urban planning was another important legacy of the 17th century. In 1667 formal height limits were imposed on Paris buildings: 48 pieds (about 15.6 m) for wooden buildings and 50 to 60 pieds (about 16 to 19.5 m) for buildings of stone, following earlier rules set in place in 1607. To prevent fires, the traditional gabled roof was banned. Beginning in 1669, under the new regulations, large blocks of houses of uniform height and uniform façades were built along several Paris streets on the right bank, notably rue de la Ferronnerie (1st arr.), rue Saint-Honoré (1st arr.), rue du Mail (2nd arr.), and rue Saint-Louis-en-Île on the Île Saint-Louis. They usually were built of stone and composed of an arched arcade on the ground floor with two to four stories above, the windows separated by decorative columns, and a high roof broken by rows of windows. This was the birth of the iconic Paris street architecture that dominated for the next two centuries.
Another element of the new architecture of Paris was the bridge. The Pont Neuf (1599–1604) and Pont Royal (1685–89), by engineer François Romain and architect Jules Hardouin-Mansart, were built without the rows of houses that occupied earlier bridges, and were designed to match the grand style of the architecture around them.
Palaces and monuments
After the assassination of Henry IV in 1610, his widow, Marie de' Medici, became the regent for the young Louis XIII and between 1615 and 1631 she built a residence for herself, the Luxembourg Palace, on the left bank. It was inspired by the palaces of her native Florence, but also by the innovations of the French Renaissance. The architect was Salomon de Brosse, followed by Marin de la Vallée and Jacques Lemercier. In the gardens, she built a magnificent fountain, the Medici Fountain, also on the Italian model.
The construction of the Louvre was one of the major Paris architectural projects of the 17th century, and the palace architecture clearly showed the transition from the French Renaissance to the classical style of Louis XIV. Jacques Lemercier had built the Pavillon de l'Horloge in 1624–39 in an ornate baroque style. Between 1667 and 1678 Louis Le Vau, Charles Le Brun, François d'Orbay and Claude Perrault rebuilt the east exterior façade of the courtyard with a long colonnade. A competition was held in 1670 for the south façade, which included a proposal from the Italian architect Bernini. Louis XIV rejected Bernini's Italianate plan in favor of a classical design by Perrault, which had a flat roof concealed by a balustrade and a series of massive columns and triangular pediments designed to convey elegance and power. Louis Le Vau and Claude Perrault rebuilt the interior façade of the Cour Carrée of the Louvre in a more classical version than that of the facing Renaissance façade. The Louvre was gradually transformed from a Renaissance and baroque palace to the classical grand style of Louis XIV.
Religious architecture
Church architecture in the 17th century was slow to change. Interiors of new parish churches, such as Saint-Sulpice, Saint-Louis-en-l'Île and Saint-Roch, largely followed the traditional Gothic floor-plan of Notre-Dame, though they did add façades and certain other decorative features from the Italian Baroque, and, following the advice of the Council of Trent to integrate churches into the city's architecture, they were aligned with the street. In 1675, an official survey on the state of church architecture in Paris made by the architects Daniel Gittard and Libéral Bruant recommended that certain churches "so-called Gothic, without any good order, beauty or harmony" should be rebuilt "in the new style of our beautiful modern architecture", meaning the style imported from Italy, with certain French adaptations.
The architect Salomon de Brosse (1571–1626) introduced a new style of façade, based on the traditional orders of architecture (Doric, Ionic and Corinthian), placed one above the other. He first used this style in the façade of the Church of St-Gervais-et-St-Protais (1616–20). The style of the three superimposed orders appeared again in the Eglise Saint-Paul-Saint-Louis, the new Jesuit church in Paris, designed by the Jesuit architects Étienne Martellange and François Derand. Saint-Roch (1653–90), designed by Jacques Lemercier, had a Gothic plan but colorful Italian-style decoration.
Debut of the dome
The most dramatic new feature of Paris religious architecture in the 17th century was the dome, which was first imported from Italy in about 1630 and began to change the Paris skyline, which hitherto had been entirely dominated by church spires and bell towers. The domed churches began as a weapon of the Counter-Reformation against the architectural austerity of the Protestants. The prototype for the Paris domes was the Church of the Gesù, the Jesuit church in Rome, built in 1568–84 by Giacomo della Porta. A very modest dome was created in Paris between 1608 and 1619 in the chapel of the Louanges on rue Bonaparte (today part of the structure of the École des Beaux-Arts). The first large dome was on the church of Saint-Joseph-des-Carmes at 70 rue de Vaugirard (6th arr.), built between 1628 and 1630. Modifications in the traditional religious services, strongly supported by the growing monastic orders in Paris, led to modifications in church architecture, with more emphasis on the section in the center of the church, beneath the dome. The circle of clear glass windows in the lower part of the dome filled the church center with light.
The most eloquent early architect of domes was François Mansart. His first dome was at the chapel of the Minimes (later destroyed), then at the chapel of the Church of the Convent of the Visitation Sainte-Marie at 17 rue Saint-Antoine (4th arr.), built between 1632 and 1634. Now the Temple du Marais, it is the oldest surviving dome in the city. Another dome soon was built in the Marais: the dome of the Church of Saint-Paul-Saint-Louis at 99–101 rue Saint-Antoine (1627–41), by Étienne Martellange and François Derand. It was followed by the church of the Abbey of Val-de-Grâce (5th arr.) (1624–69), by Mansart and Pierre Le Muet; then by a dome on the Chapel of Sainte-Ursule at the Collège de Sorbonne (1632–34), by Jacques Lemercier; the Collège des Quatre-Nations (now the Institut de France) (1662–68), by Louis Le Vau and François d'Orbay; and the church of Notre-Dame de l'Assomption de Paris on rue Saint-Honoré (1st arr.) (1670–76) by Charles Errard. The most majestic dome was that of the chapel of Les Invalides, by Jules Hardouin-Mansart, built between 1677 and 1706. The last dome of the period was for a Protestant church, the Temple de Pentemont on rue de Grenelle (7th arr.) (about 1700) by Charles de La Fosse.
Residential architecture – the rustic style
An elegant new form of domestic architecture, the rustic style, appeared in Paris in the wealthy Le Marais at the end of the 16th and beginning of the 17th century. This style of architecture was usually used for ornate apartments in wealthy areas and for hôtels particuliers. It was sometimes called the "style of three crayons" because it used three colors; black slate tiles, red brick, and white stone. This architecture was expensive, having a variety of different materials, and ornate stone work. This style inspired the unique Palais de Versailles. The earliest existing examples are the house known as the Maison de Jacques Cœur at 40 rue des Archives (4th arr.) from the late 16th century; the Hôtel Scipion Sardini at 13 rue Scipion in the (5th arr,) from 1532, and the Abbot's residence at the Abbey of Saint-Germain-des-Prés at 3-5 rue de l'Abbaye, (6th arr.), from 1586. The most famous examples around found around the Place des Vosges, built between 1605 and 1612. Other good examples are the Hospital of Saint-Louis on rue Buchat (10th arr.) from 1607 to 1611; the two houses at 1-6 Place Dauphine on the Île de la Cité, from 1607 to 1612; and the Hôtel d'Alméras at 30 rue des Francs-Bourgeois (4th arr.), from 1612.
Residences – the classical style
The palatial new residences built by the nobility and the wealthy in the Marais featured two new and original specialized rooms: the dining room and the salon. The new residences typically were separated from the street by a wall and gatehouse. There was a large court of honor inside the gates, with galleries on either side, used for receptions, and for services and the stables. The house itself opened both onto the courtyard and onto a separate garden. One good example in its original form, between the Place des Vosges and rue Saint-Antoine, is the Hôtel de Sully (1624–29), built by Jean Androuet du Cerceau.
After 1650 the architect François Mansart introduced a more classical and sober style to the hôtel particulier. The Hôtel de Guénégaud des Brosses at 60 rue des Archives (3rd arrondissement), from 1653, had a greatly simplified and severe façade. Beginning in the 1660s Mansart remade the façades of the Hôtel Carnavalet, preserving some of the Renaissance decoration and a 16th-century portal but integrating them into a more classical composition, with columns, pediments and stone bossage.
The 18th century – The triumph of neoclassicism
During the first half of the 18th century, the grand style of Louis XIV, defined by the Royal Academy of Architecture and evoking power and grandeur, dominated Paris architecture. In 1722, Louis XV returned the court to Versailles, and visited the city only on special occasions. While he rarely came into Paris, he did make important additions to the city's landmarks. His first major building was the École Militaire, a new military school on the Left Bank, designed by Ange-Jacques Gabriel and built beginning in the early 1750s. Gabriel borrowed the design of the Pavillon de l'Horloge of the Louvre by Lemercier for the central pavilion, a façade influenced by Mansart, and Italian touches from Palladio and Giovanni Battista Piranesi.
In the second part of the century, a more purely neoclassical style, based directly on Greek and Roman models, began to appear. It was strongly influenced by a visit to Rome in 1750 by the architect Jacques-Germain Soufflot and the future Marquis de Marigny, the director of buildings for King Louis XV. They and other architects who made the obligatory trip to Italy brought back classical ideas and drawings which defined Paris architecture until the 1830s.
Soufflot's Roman trip led to the design of the new church of Sainte-Geneviève, now the Panthéon, the model of the neoclassical style, constructed on the summit of the Mont Sainte-Geneviève between 1764 and 1790. It was not completed until the French Revolution, at which time it became a mausoleum for Revolutionary heroes. Other royal commissions in the new style included the royal mint, the Hôtel des Monnaies on the Quai de Conti (6th arr.), with a 117-meter-long façade along the Seine, dominated by its massive central avant-corps and a vestibule decorated with Doric columns and coffered (caisson) ceilings (1767–75).
Religious architecture
Churches in the first half of the 18th century, such as the church of Saint-Roch at 196 rue Saint-Honoré (1738–39) by Robert de Cotte and Jules-Robert de Cotte, stayed with the late baroque style of superimposed orders. Later churches ventured into neoclassicism, at least on the exterior. The most prominent example of a neoclassical church was the Church of Sainte-Geneviève (1764–90), the future Panthéon. The church of Saint-Philippe-du-Roule at 153 rue du Faubourg-Saint-Honoré (8th arr.) (1764–84) by Jean-François Chalgrin had an exterior inspired by the early Paleo-Christian church, though the nave in the interior was more traditional. The Church of Saint-Sulpice in the 6th arrondissement, by Jean-Nicolas Servandoni, then by Oudot de Maclaurin and Jean-François Chalgrin, was given a classical façade and two bell towers (1732–80). Funding was exhausted before the second tower was finished, leaving the two towers different in style. The church of Saint-Eustache on rue du Jour (1st arr.), an example of both Gothic and Renaissance architecture, had its west façade redone by Jean Hardouin-Mansart de Jouy and then Pierre-Louis Moreau-Desproux into a neoclassical façade with two orders (1754–78); it was intended to have two towers, but only one was finished.
A large church with a dome, similar to Les Invalides, had been planned for the Place de la Madeleine beginning in the 1760s. The King laid the cornerstone on April 3, 1763, but work halted in 1764. The architect, Pierre Contant d'Ivry, died in 1777, and was replaced by his pupil Guillaume-Martin Couture, who decided instead to base his church on the Roman Pantheon: a classic colonnade topped by a massive dome. At the start of the Revolution of 1789, however, only the foundations and the grand portico had been finished.
Régence and Louis XV residential architecture
The Régence and then the rule of Louis XV saw a gradual evolution of the style of the hôtel particulier, or mansion. The ornate wrought-iron balcony appeared on residences, along with other ornamental details called rocaille or rococo, often borrowed from Italy. The style first appeared on houses in the Marais, then in the neighborhoods of Saint-Honoré and Saint-Germain, where larger building lots were available. These became the most fashionable neighborhoods by the end of the 18th century. The new hôtels were often ornamented with curved façades, rotundas and lateral pavilions, and had their façades decorated with sculpted mascarons, cascades of fruit, trophies and other sculptural decoration. The interiors were richly decorated with carved wood panels. The houses usually looked out onto courtyards at the front and gardens to the rear. The Hôtel de Chenizot, 51 rue Saint-Louis-en-l'Île, by Pierre Vigné de Vigny (about 1720), was a good example of the new style: a 17th-century house transformed by a new rocaille façade.
Urbanism – the Place de la Concorde
In 1748, the Academy of Arts commissioned a monumental statue of the king on horseback by the sculptor Bouchardon, and the Academy of Architecture was assigned to create a square, to be called Place Louis XV, where it could be erected. The site selected was the marshy open space between the Seine, the moat and bridge to the Tuileries Garden, and the Champs-Élysées, which led to the Place de l'Étoile, convergence of hunting trails on the western edge of the city (now Place Charles de Gaulle). The winning plans for the square and buildings next to it were drawn by the architect Ange-Jacques Gabriel. Gabriel designed two large hôtels with a street between them, Rue Royale, designed to give a clear view of the statue in the center of the square. The façades of the two hôtels, with long colonnades and classical pediments, were inspired by Perrault's neoclassical façade of the Louvre. Construction began in 1754, and the statue was put in place and dedicated on 23 February 1763. The two large hôtels were still unfinished, but the façades were finished in 1765–66. The Place was the theatre for some of the most dramatic events of the French Revolution, including the executions of Louis XVI and Marie Antoinette.
Urbanism under Louis XVI
The later part of the 18th century saw the development of new residential blocks, particularly on the left bank at Odéon and Saint-Germain, and on the right bank in the first and second arrondissements. The most fashionable neighborhoods moved from the Marais toward the west, with large residential buildings constructed in a simplified and harmonious neoclassical style. The ground floors were often occupied by arcades to give pedestrians shelter from the rain and the traffic in the streets. Strict new building regulations were put into place in 1783 and 1784, which tied the height of new buildings to the width of the street and regulated the line of the cornice, the number of stories and the slope of the roofs. Under a 1784 decree of the Parlement of Paris, the height of most new buildings was limited to 54 pieds, or 17.54 meters, with the height of the attic depending upon the width of the building.
Paris architecture on the eve of the Revolution
Paris in the 18th century had many beautiful buildings, but it was not a beautiful city. The philosopher Jean-Jacques Rousseau described his disappointment when he first arrived in Paris in 1731:
I expected a city as beautiful as it was grand, of an imposing appearance, where you saw only superb streets, and palaces of marble and gold. Instead, when I entered by the Faubourg Saint-Marceau, I saw only narrow, dirty and foul-smelling streets, and villainous black houses, with an air of unhealthiness; beggars, poverty; wagon-drivers, menders of old garments; and vendors of tea and old hats.
In 1749, in Embellissements de Paris, Voltaire wrote: "We blush with shame to see the public markets, set up in narrow streets, displaying their filth, spreading infection, and causing continual disorders… Immense neighborhoods need public places. The center of the city is dark, cramped, hideous, something from the time of the most shameful barbarism."
The uniform neoclassical style all around the city was not welcomed by everyone. Just before the Revolution the journalist Louis-Sébastien Mercier wrote: "How monotonous is the genius of our architects! How they live on copies, on eternal repetition! They don't know how to make the smallest building without columns… They all more or less resemble temples."
Even functional buildings were built in the neoclassical style; the grain market (now the Chamber of Commerce) was given a neoclassical dome (1763–69) by Nicolas Le Camus de Mézières. Between 1785 and 1787, the royal government built a new wall around the edges of the city (the Wall of the Ferme générale) to prevent smuggling of goods into the city. It had fifty-five barriers, many of them in the form of Doric temples, designed by Claude Nicolas Ledoux. A few still exist, notably at Parc Monceau. The wall was highly unpopular and was an important factor in turning opinion against Louis XVI, and provoking the French Revolution.
Between 1739 and 1745, Louis XV constructed a monumental fountain, the Fontaine des Quatre-Saisons, richly decorated with classical sculpture by Bouchardon glorifying the King, at 57–59 rue de Grenelle. While the fountain was huge and dominated the narrow street, it originally had only two small spouts, from which residents of the neighborhood could fill their water containers. Voltaire criticized it on these grounds in a letter to the Count de Caylus in 1739, while the fountain was still under construction.
Revolutionary Paris
During the French Revolution, the churches of Paris were closed and nationalized, and many were badly damaged. Most of the destruction came not from the Revolutionaries, but from the new owners who purchased the buildings, and sometimes demolished them for the building materials they contained. The Abbey of Saint-Pierre de Montmartre was destroyed, and its church left in ruins. Parts of the Abbey of Saint-Germain-des-Prés were turned into a gunpowder factory; an explosion destroyed many of the buildings outside the church. The Church of Sainte-Geneviève was turned into a mausoleum for revolutionary heroes. The sculpture on the façade of the Cathedral of Notre-Dame was smashed or removed, and the spire torn down. Many of the abandoned religious buildings, particularly in the outer neighborhoods of the city, were turned into factories and workshops. Much of the architecture of the Revolution was theatrical and temporary, such as the extraordinary stage sets created for the Festival of the Supreme Being on the Champ-de-Mars in 1794. However, work continued on some pre-revolutionary projects. The rue des Colonnes in the second arrondissement, designed by Nicolas-Jacques-Antoine Vestier (1793–1795), had a colonnade of simple Doric columns, characteristic of the Revolutionary period.
The Paris of Napoleon (1800–1815)
Monuments
In 1806, in imitation of Ancient Rome, Napoléon ordered the construction of a series of monuments dedicated to the military glory of France. The first and largest was the Arc de Triomphe, built at the edge of the city at the Barrière d'Étoile and not finished until July 1836. He ordered the building of the smaller Arc de Triomphe du Carrousel (1806–1808), modeled on the arches of Septimius Severus and Constantine in Rome, next to the Tuileries Palace. It was crowned with a team of bronze horses he took from the façade of St Mark's Basilica in Venice. His soldiers celebrated his victories with grand parades around the Carrousel. He also commissioned the Vendôme Column (1806–10), copied from Trajan's Column in Rome and clad with the bronze of cannon captured from the Russians and Austrians in 1805. At the end of the Rue de la Concorde (given back its former name of Rue Royale on 27 April 1814), he took the foundations of an unfinished church, the Église de la Madeleine, which had been started in 1763, and transformed them into a 'temple à la gloire de la Grande Armée', a military shrine to display the statues of France's most famous generals.
Many of Napoleon's contributions to Paris architecture were badly needed improvements to the city's infrastructure: he started a new canal to bring drinking water to the city, rebuilt the city sewers, and began construction of the Rue de Rivoli, to permit the easier circulation of traffic between the east and west of the city. He also began construction of the Palais de la Bourse, the Paris stock market, with its grand colonnade; begun in 1808, it was not finished until 1826. In 1806 he began to build a new façade for the Palais Bourbon, the modern National Assembly, to match the colonnade of the Temple of Military Glory (now the Madeleine), directly facing it across the Place de la Concorde.
The Egyptian style
Parisians had a taste for the Egyptian style long before Napoleon; pyramids, obelisks and sphinxes occurred frequently in Paris decoration, such as the sphinxes decorating the balustrade of the Hôtel Salé (now the Musée Picasso) (1654–1659), and the small pyramids decorating the Anglo-Chinese gardens of the Château de Bagatelle and Parc Monceau in the 18th century. However, Napoleon's Egyptian campaign gave the style a new prestige, and for the first time it was based on drawings and actual models carried back by the scholars who traveled with Napoleon's soldiers to Egypt. The style soon appeared in public fountains and residential architecture, including the Fontaine du Fellah on rue de Sèvres by François-Jean Bralle (1807) and the Fontaine du Palmier by Bralle and Louis Simon Boizot (1808). The sphinxes around the latter fountain were Second Empire additions of 1856–58 by Gabriel Davioud, the city architect of Napoleon III. The grandest Egyptian element added to Paris was the Luxor Obelisk from the Luxor Temple, offered as a gift by the Viceroy of Egypt to Louis-Philippe, and erected on the Place de la Concorde in 1836. Examples continued to appear in the 20th century, from the Luxor movie palace on boulevard de Magenta in the 10th arrondissement (1921) to the Louvre pyramid by I. M. Pei (1988).
The debut of iron architecture
Iron architecture made its Paris debut under Napoleon, with the construction of the Pont des Arts by Louis-Alexandre de Cessart and Jacques Lacroix-Dillon (1801–03). This was followed by a metal frame for the cupola of the Halle aux blés, or grain market (now the Bourse de Commerce, or Chamber of Commerce), designed by the architect François-Joseph Bélanger and the engineer François Brunet (1811). It replaced the wooden-framed dome built by Nicolas Le Camus de Mézières in 1767, which had burned in 1802, and was the first iron frame used in a Paris building.
The Restoration (1815–1830)
Public buildings and monuments
The royal government restored the symbols of the old regime, but continued the construction of most of the monuments and urban projects begun by Napoleon. All of the public buildings and churches of the Restoration were built in a relentlessly neoclassical style. Work resumed, slowly, on the unfinished Arc de Triomphe, begun by Napoleon. At the end of the reign of Louis XVIII, the government decided to transform it from a monument to the victories of Napoleon into a monument celebrating the victory of the Duke of Angoulême over the Spanish revolutionaries who had overthrown their Bourbon king. A new inscription was planned, "To the Army of the Pyrenees", but the inscription had not been carved and the work was still not finished when the regime was toppled in 1830.
The Canal Saint-Martin was finished in 1822, and the Bourse de Paris, or stock market, designed and begun by Alexandre-Théodore Brongniart from 1808 to 1813, was modified and completed by Éloi Labarre in 1826. New storehouses for grain near the Arsenal, new slaughterhouses, and new markets were finished. Three new suspension bridges were built over the Seine: the Pont de l'Archevêché, the Pont des Invalides and the footbridge of the Grève. All three were rebuilt later in the century.
Religious architecture
The church of La Madeleine, begun under Louis XVI, had been turned by Napoleon into the Temple of Glory (1807). It was now turned back to its original purpose, as the Royal church of La Madeleine. To commemorate Louis XVI and Marie Antoinette and to expiate the crime of their execution, King Louis XVIII built the Chapelle expiatoire, designed by Pierre-François-Léonard Fontaine in a neoclassical style similar to the Paris Panthéon, on the site of the small cemetery of the Madeleine, where their remains (now in the Basilica of Saint-Denis) had been hastily buried following their execution. It was completed and dedicated in 1826.
Several new churches were begun during the Restoration to replace those destroyed during the Revolution. A battle took place between architects who wanted a neo-Gothic style, modeled after Notre-Dame, and those who wanted the neoclassical style, modeled after the basilicas of ancient Rome. The battle was won by the majority of neoclassicists on the Commission of Public Buildings, who dominated until 1850. Jean Chalgrin had designed Saint-Philippe-du-Roule before the Revolution in a neoclassical style; it was completed (1823–30) by Étienne-Hippolyte Godde. Godde also completed Chalgrin's project for Saint-Pierre-du-Gros-Caillou (1822–29), and built the neoclassical basilicas of Notre-Dame-de-Bonne-Nouvelle (1823–30) and Saint-Denys-du-Saint-Sacrement (1826–35). Other notable neoclassical architects of the Restoration included Louis-Hippolyte Lebas, who built Notre-Dame-de-Lorette (1823–36), and Jacques Ignace Hittorff, who built the Church of Saint-Vincent-de-Paul (1824–44). Hittorff went on to a brilliant career in the reigns of Louis-Philippe and Napoleon III, designing the new plan of the Place de la Concorde and constructing the Gare du Nord railway station (1861–66).
Commercial architecture – the shopping gallery
A new form of commercial architecture had appeared at the end of the 18th century: the passage, or shopping gallery, a row of shops along a narrow street covered by a glass roof. They were made possible by improved technologies of glass and cast iron, and were popular since few Paris streets had sidewalks and pedestrians had to compete with wagons, carts, animals and crowds of people. The first indoor shopping gallery in Paris had opened at the Palais-Royal in 1786; rows of shops, along with cafés and the first restaurants, were located under the arcade around the garden. It was followed by the Passage Feydeau in 1790–91, the Passage du Caire in 1799, and the Passage des Panoramas in 1800. In 1834 the architect Pierre-François-Léonard Fontaine carried the idea a step further, covering an entire courtyard of the Palais-Royal, the Galerie d'Orléans, with a glass skylight. The gallery remained covered until 1935; it was the ancestor of the glass skylights of the Paris department stores of the later 19th century.
Residential architecture
During the Restoration, and particularly after the coronation of King Charles X in 1825, new residential neighborhoods were built on the Right Bank as the city grew to the north and west. Between 1824 and 1826, a time of economic prosperity, the quarters of Saint-Vincent-de-Paul, Europe, Beaugrenelle and Passy were all laid out and construction began. The width of lots grew larger, from six to eight meters wide for a single house to between twelve and twenty meters for a residential building. The typical new residential building was four to five stories high, with an attic roof sloping forty-five degrees, broken by five to seven windows. The decoration was largely adapted from that of the Rue de Rivoli: horizontal rather than vertical orders, and simpler ornament. The windows were larger and occupied a larger portion of the façades. Decoration was provided by ornamental iron shutters and later by wrought-iron balconies. Variations of this model were the standard on Paris boulevards until the Second Empire.
The hôtel particulier, or large private house, of the Restoration was usually built in a neoclassical style, based on Greek architecture or the style of Palladio, particularly in the new residential quarters of the Nouvelle Athènes and the Square d'Orléans on rue Taitbout (9th arrondissement), a private residential square (1829–35) in the English neoclassical style designed by Edward Cresy. Residents of the square included George Sand and Frédéric Chopin. Some of the houses in the new quarters of the 8th arrondissement, particularly the quarter of François I, begun in 1822, were built in a more picturesque style, a combination of Renaissance and classical elements called the Troubadour style. This marked the beginning of the movement away from uniform neoclassicism toward eclectic residential architecture.
The Paris of Louis-Philippe (1830–1848)
Monuments and public squares
The architectural style of public buildings under the Restoration and Louis-Philippe was determined by the Académie des Beaux-Arts, or Academy of Fine Arts, whose Perpetual Secretary from 1816 to 1839 was Quatremère de Quincy, a confirmed neoclassicist. The architectural style of public buildings and monuments was intended to associate Paris with the virtues and glories of ancient Greece and Rome, as it had been under Louis XIV, Napoleon and the Restoration.
The first great architectural project of the reign of Louis-Philippe was the remaking of the Place de la Concorde into its modern form. The moats of the Tuileries were filled, and two large fountains designed by Jacques Ignace Hittorff, one representing the maritime commerce and industry of France, the other the river commerce and great rivers of France, were put in place, along with monumental sculptures representing the major cities of France. On 25 October 1836, a new centerpiece was put in place: a stone obelisk from Luxor, weighing two hundred fifty tons, brought on a specially built ship from Egypt, was slowly hoisted into place in the presence of Louis-Philippe and a huge crowd. In the same year, the Arc de Triomphe, begun in 1806 by Napoleon, was finally completed and dedicated. Following the return to Paris of the ashes of Napoleon from Saint Helena in 1840, they were placed with great ceremony in a tomb designed by Louis Visconti beneath the church of Les Invalides. Another Paris landmark, the column on the Place de la Bastille, was inaugurated on 28 July 1840, on the anniversary of the July Revolution, and dedicated to those killed during the uprising.
Several older monuments were put to new purposes: the Élysée Palace was purchased by the French state and became an official residence, and under later governments the residence of the Presidents of the French Republic. The Basilica of Sainte-Geneviève, originally built as a church, then made into a mausoleum for great Frenchmen during the Revolution, then a church again during the Restoration, once again became the Panthéon, holding the tombs of great Frenchmen.
Preservation and restoration
The reign of Louis-Philippe saw the beginning of a movement to preserve and restore some of the earliest landmarks of Paris, inspired in large part by Victor Hugo's hugely successful novel The Hunchback of Notre-Dame (Notre-Dame de Paris), published in 1831. The leading figure of the restoration movement was Prosper Mérimée, named by Louis-Philippe as Inspector-General of Historic Monuments. The Commission of Historic Monuments was created in 1837, and in 1842 Mérimée began compiling the first official list of classified historical monuments, now known as the Base Mérimée.
The first structure to be restored was the nave of the church of Saint-Germain-des-Prés, the oldest in the city. Work also began in 1843 on the cathedral of Notre-Dame, which had been badly damaged during the Revolution and stripped of the statues on its façade. Much of the work was directed by the architect and historian Viollet-le-Duc, who sometimes, as he admitted, was guided by his own scholarly sense of the "spirit" of medieval architecture rather than by strict historical accuracy. The other major restoration projects were the Sainte-Chapelle and the 17th-century Hôtel de Ville: the old buildings which pressed up against the back of the Hôtel de Ville were cleared away, two new wings were added, the interiors were lavishly redecorated, and the ceilings and walls of the large ceremonial salons were painted with murals by Eugène Delacroix. Unfortunately, all these interiors were burned in 1871 by the Paris Commune.
The Beaux-Arts style
At the same time, a small revolution was taking place at the École des Beaux-Arts, led by four young architects: Joseph-Louis Duc, Félix Duban, Henri Labrouste and Léon Vaudoyer, who had first studied Roman and Greek architecture at the Villa Medici in Rome, then in the 1820s began the systematic study of other historic architectural styles, including French architecture of the Middle Ages and Renaissance. They instituted teaching about a variety of architectural styles at the École des Beaux-Arts, and installed fragments of Renaissance and medieval buildings in the courtyard of the school so students could draw and copy them. Each of them also designed new non-classical buildings in Paris inspired by a variety of different historic styles: Labrouste built the Sainte-Geneviève Library (1844–50); Duc designed the new Palais de Justice and Court of Cassation on the Île de la Cité (1852–68); Vaudoyer designed the Conservatoire national des arts et métiers (1838–67); and Duban designed the new buildings of the École des Beaux-Arts. Together, these buildings, drawing upon Renaissance, Gothic, Romanesque and other non-classical styles, broke the monopoly of neoclassical architecture in Paris.
The first train stations
The first train stations in Paris were called embarcadères (a term used for water traffic), and their location was a source of great contention, as each railroad line was owned by a different company and each went in a different direction. The first embarcadère was built by the Péreire brothers for the Paris-Saint-Germain-en-Laye line, at the Place de l'Europe. It opened on 26 August 1837 and, with its success, was quickly replaced by a larger building on rue de Stockholm, and then an even larger structure, the beginning of the Gare Saint-Lazare, built between 1841 and 1843. It was the station for the trains to Saint-Germain-en-Laye, Versailles and Rouen.
The Péreire brothers argued that the Gare Saint-Lazare should be the sole station of Paris, but the owners of the other lines each insisted on having their own station. The first Gare d'Orléans, now known as the Gare d'Austerlitz, was opened on 2 May 1843, and was greatly expanded in 1848 and 1852. The first Gare Montparnasse opened on 10 September 1840 on avenue du Maine, and was the terminus of the new Paris-Versailles line on the left bank of the Seine. It was quickly found to be too small, and was rebuilt between 1848 and 1852 at the junction of rue de Rennes and boulevard du Montparnasse, its present location.
The banker James Mayer de Rothschild received the permission of the government to build the first railroad line from Paris to the Belgian border in 1845, with branch lines to Calais and Dunkerque. The first embarcadère of the new line opened on rue de Dunkerque in 1846. It was replaced by a much grander station, Gare du Nord, in 1854. The first station of the line to eastern France, the Gare de l'Est was begun in 1847, but not finished until 1852. Construction of a new station for the line to the south, from Paris to Montereau-Fault-Yonne began in 1847 and was finished in 1852. In 1855 it was replaced by a new station, the first Gare de Lyon, on the same site.
Napoleon III and the Second Empire style (1848–1870)
The rapidly growing French economy under Napoleon III led to major changes in the architecture and urban design of Paris. New types of architecture connected with the economic expansion, such as railroad stations, hotels, office buildings, department stores and exposition halls, occupied the center of Paris, which previously had been largely residential. To improve traffic circulation and bring light and air to the center of the city, Napoleon III's Prefect of the Seine, Georges-Eugène Haussmann, destroyed the crumbling and overcrowded neighborhoods in the heart of the city and built a network of grand boulevards. The expanded use of new building materials, especially iron frames, allowed the construction of much larger buildings for commerce and industry.
When he declared himself Emperor in 1852, Napoleon III moved his residence from the Élysée Palace to the Tuileries Palace, where his uncle Napoleon I had lived, adjoining the Louvre. His Nouveau Louvre project continued the construction of the Louvre, following the grand design of Henry IV: he built the Pavillon Richelieu (1857) and the guichets of the Louvre (1867), and rebuilt the Pavillon de Flore. Although he broke with the neoclassicism of the wings of the Louvre built under Louis XIV, the new constructions were perfectly in harmony with the Renaissance wings.
The dominant architectural style of the Second Empire was eclectic, drawing liberally from the Gothic and Renaissance styles and the styles of Louis XV and Louis XVI. The best example was the Palais Garnier, begun in 1862 but not finished until 1875. The architect was Charles Garnier (1825–1898), who won the competition against a Gothic-revival design by Viollet-le-Duc. When asked by the Empress Eugénie what the style of the building was called, he replied simply "Napoleon III." It was at the time the largest theater in the world, but much of the interior space was devoted to purely decorative spaces: grand stairways, huge foyers for promenading, and large private boxes. The façade was decorated with seventeen different materials, including marble, stone, porphyry and bronze. Other notable examples of Second Empire public architecture include the Palais de Justice and the Court of Cassation by Joseph-Louis Duc (1862–68); the Tribunal de commerce de Paris by Antoine-Nicolas Bailly (1860–65); and the Théâtre du Châtelet by Gabriel Davioud (1859–62) and the Théâtre de la Ville, facing each other across the Place du Châtelet.
The Second Empire also saw the restoration of the famed stained glass windows and structure of Sainte-Chapelle by Eugène Viollet-le-Duc; and extensive restoration of Notre-Dame de Paris. Later critics complained that some of the restoration was more imaginative than precisely historical.
The map and look of Paris changed dramatically under Napoleon III and Baron Haussmann. Haussmann demolished the narrow streets and crumbling medieval houses in the center of the city (including the house where he was born) and replaced them with wide boulevards lined by large residential buildings, all of the same height (twenty meters to the cornice, or five stories on boulevards and four on narrower streets), with façades in the same style and faced with the same cream-colored stone. He completed the east–west axis of the city center, the Rue de Rivoli begun by Napoleon, built a new north–south axis, the Boulevard de Sébastopol, and cut wide boulevards on both the right and left banks, including the Boulevard Saint-Germain and the Boulevard Saint-Michel, with vistas usually culminating in a domed landmark; if a dome was not already there, Haussmann had one built, as he did with the Tribunal de commerce de Paris and the Church of Saint-Augustin.
The centrepiece of the new design was the new Palais Garnier, designed by Charles Garnier. In the latter years of the Empire, Haussmann built new boulevards to connect the city center with the eight new arrondissements which Napoleon III attached to the city in 1860, along with new city halls for each arrondissement. New city halls were also built for many of the original arrondissements. The new city hall of the first arrondissement, by Jacques Ignace Hittorff (1855–60), was built close to the medieval church of Saint-Germain-l'Auxerrois in the historic center of the city; it was in neo-Gothic style, echoing the medieval church, complete with a rose window.
To provide green space and recreation for the residents of the outer neighborhoods of the city, Haussmann built large new parks, the Bois de Boulogne, the Bois de Vincennes, the Parc Montsouris and the Parc des Buttes-Chaumont, to the west, east, south and north, filled with picturesque garden follies, as well as numerous smaller parks and squares where the new boulevards met. City architect Gabriel Davioud devoted considerable attention to the details of the city infrastructure. Haussmann also built a new water supply and sewer system under the new boulevards, planted thousands of trees along the boulevards, and ornamented the parks and boulevards with kiosks, gateways, lodges and ornamental grills, all designed by Davioud.
Religious architecture – the neo-Gothic and eclectic styles
Religious architecture finally broke away from the neoclassical style which had dominated Paris church architecture since the 18th century. Neo-Gothic and other historical styles began to be built, particularly in the eight new arrondissements farther from the center added by Napoleon III in 1860. The first neo-Gothic church was the Basilica of Sainte-Clotilde, begun by Franz Christian Gau in 1841 and finished by Théodore Ballu in 1857. During the Second Empire, architects began to combine metal frames with the Gothic style: the Église Saint-Laurent, a 15th-century church, was rebuilt in neo-Gothic style by Simon-Claude-Constant Dufeux (1862–65); Saint-Eugène-Sainte-Cécile was built by Louis-Auguste Boileau and Adrien-Louis Lusson (1854–55); and Saint-Jean-Baptiste de Belleville by Jean-Baptiste Lassus (1854–59). The largest new church built in Paris during the Second Empire was the Church of Saint-Augustin (1860–71), by Victor Baltard, the designer of the metal pavilions of the market of Les Halles. While the structure was supported by cast-iron columns, the façade was eclectic.
Railway stations and commercial architecture
The industrial revolution and economic expansion of Paris required much larger structures, particularly for railroad stations, which were considered the new ornamental gateways to the city. The new structures had iron skeletons, but these were concealed by Beaux-Arts façades. The Gare du Nord, by Jacques Ignace Hittorff (1842–65), had a glass roof with iron columns thirty-eight meters high, while the façade was in the Beaux-Arts style, faced with stone and decorated with statues representing the cities served by the railway.
The most dramatic use of iron and glass was in the new central market of Paris, Les Halles (1853–70), an ensemble of huge iron and glass pavilions designed by Victor Baltard (1805–1874).
Henri Labrouste (1801–1875) used iron and glass to create a dramatic cathedral-like reading room for the Bibliothèque nationale de France, site Richelieu (1854–75).
The Belle Époque (1871–1913)
The architecture of Paris created during the Belle Époque, between 1871 and the beginning of the First World War in 1914, was notable for its variety of different styles, from Beaux-Arts, neo-Byzantine and neo-Gothic to Art Nouveau, and Art Deco. It was also known for its lavish decoration and its imaginative use of both new and traditional materials, including iron, plate glass, colored tile and reinforced concrete.
The Great Expositions
The fall of Napoleon III in 1871 and the advent of the Third Republic were followed by the brief rule of the Paris Commune (March–May 1871). In the final days of the Commune, as the French Army recaptured the city, the Communards pulled down the column in the Place Vendôme and burned a number of Paris landmarks, including the 16th-century Tuileries Palace, the 17th-century Hôtel de Ville, the Ministry of Justice, the Cour des Comptes, the Conseil d'État, the Palais de la Légion d'Honneur, the Ministry of Finance, and others. The interior of the Tuileries Palace was completely destroyed, but the walls were still standing. Haussmann and others called for its restoration, but the new government decided it was a symbol of the monarchy and had the walls torn down. (A fragment of the building can be seen today in the Parc du Trocadéro.) Most of the other buildings were restored to their original appearance. To celebrate the rebuilding of the city, the Parisians hosted the first of three universal expositions which attracted millions of visitors to Paris and transformed the architecture of the city.
The Paris Universal Exposition of 1878 saw the building of the Palais du Trocadéro, an eclectic composition of Moorish, Renaissance and other styles, on the hill of Chaillot, by Gabriel Davioud and Jules Bourdais (1876–78). It was used again in the Expositions of 1889 and 1900, and remained until 1937, when it was replaced by the Palais de Chaillot.
The Paris Universal Exposition of 1889 celebrated the centenary of the French Revolution. The Eiffel Tower (1887–89), conceived by the entrepreneur Gustave Eiffel and built by the engineers Maurice Koechlin and Émile Nouguier and the architect Stephen Sauvestre, was the tallest structure in the world and served as the gateway to the Exposition. The Gallery of Machines, designed by Ferdinand Dutert and Victor Contamin, was the largest covered space in the world when it was built; it combined modern engineering with colorful polychrome decoration, typical of the Belle Époque.
The Paris Universal Exposition of 1900 extended to both the right and left banks of the Seine. It gave Paris three new landmarks: the Grand Palais, the Petit Palais and the Pont Alexandre III. The Beaux-Arts façade of the Grand Palais (1897–1900), designed by Henri Deglane, Charles Girault, Albert Louvet and Albert Thomas, was a synthesis of the grand neoclassical styles of Louis XIV and Louis XV; it concealed a vast interior space covered by a glass roof resting on slender iron pillars. The Petit Palais (1897–1900), by Charles Girault, borrowed elements of Italian Renaissance architecture, and French neoclassical decorative elements from Les Invalides, the hôtels beside the Place de la Concorde and the palatial stables of the Château de Chantilly by Jean Aubert. Its interior was more revolutionary than that of the Grand Palais; Girault used reinforced concrete and iron to create a winding stairway along brightly lit galleries. The style of these two buildings, along with the colossal neoclassical style of Louis XVI, influenced the design of Paris residential and commercial buildings until 1920.
Art Nouveau became the most famous style of the Belle Époque, particularly associated with the Paris Métro station entrances designed by Hector Guimard, and with a handful of other buildings, including Guimard's Castel Béranger (1898) at 14 rue La Fontaine in the 16th arrondissement, and the ceramic-sculpture-covered house by the architect Jules Lavirotte at 29 avenue Rapp (7th arrondissement). The enthusiasm for Art Nouveau did not last long; in 1904 the Guimard Métro entrance at the Place de l'Opéra was replaced by a more classical entrance. Beginning in 1912, all the Guimard Métro entrances were replaced with functional entrances without decoration.
Religious architecture
From the 1870s until the 1930s the most prominent style for Paris churches was the Romano-Byzantine style; the model and most famous example was the Sacré-Cœur, by Paul Abadie, whose design won a national competition. Its construction lasted the entire span of the Belle Époque, from 1874 to 1913, under three different architects; it was not consecrated until 1919. It was modeled after the Romanesque and Byzantine cathedrals of the early Middle Ages, which Abadie had restored. The style also appeared in the church of Notre-Dame d'Auteuil by Émile Vaudremer (1878–92). The church of Saint-Dominique, by Léon Gaudibert (1912–25), followed the style of Byzantine churches, with a massive central dome. The first church in Paris to be constructed of reinforced concrete was Saint-Jean-de-Montmartre, at 19 rue des Abbesses at the foot of Montmartre. The architect was Anatole de Baudot, a student of Viollet-le-Duc. The revolutionary nature of the construction was not evident, because Baudot faced the concrete with brick and ceramic tiles in a colorful Art Nouveau style, with stained glass windows in the same style.
The department store and the office building
Aristide Boucicaut launched the first modern department store in Paris, Au Bon Marché, in 1852. Within twenty years, it had 1,825 employees and an income of more than 20 million francs. In 1869 Boucicaut began constructing a much larger store, with an iron frame and a central courtyard covered with a glass skylight. The architect was Louis-Charles Boileau, with assistance from the engineering firm of Gustave Eiffel. After more enlargements and modifications, the building was finished in 1887, and became the prototype for other department stores in Paris and around the world.
Au Bon Marché was followed by the Magasins du Louvre in 1865, the Bazar de l'Hôtel de Ville in 1866, Au Printemps in 1865, La Samaritaine in 1870, and the Galeries Lafayette in 1895. All the new stores used glass skylights whenever possible to fill the interiors with natural light, and designed the balconies around the central courts to provide the maximum of light to each section.
Between 1903 and 1907 the architect Frantz Jourdain created the interior and façades of the new building of La Samaritaine.
The safety elevator had been invented in 1852 by Elisha Otis, making tall office buildings practical, and the first skyscraper, the Home Insurance Building, a ten-story building with a steel frame, had been built in Chicago by William Le Baron Jenney in 1884–85. But Paris architects and clients showed little interest in building tall office buildings. Paris was already the banking and financial capital of the continent, and moreover, as of 1889 it had the tallest structure in the world, the Eiffel Tower. While some Paris architects visited Chicago to see what was happening, no clients wanted to change the familiar skyline of Paris.
The new office buildings of the Belle Époque often made use of steel, plate glass, elevators and other new architectural technologies, but these were hidden inside sober neoclassical stone façades, and the buildings matched the height of the other buildings on Haussmann's boulevards. The headquarters of the bank Crédit Lyonnais, built on the boulevard des Italiens in 1883 by William Bouwens Van der Boijen, was in the Beaux-Arts style on the outside, but inside was one of the most modern buildings of its time, using an iron frame and glass skylight to provide ample light to the large hall where the title deeds were held. In 1907 the building was given a new entrance at 15 rue du Quatre-Septembre, designed by Victor Laloux, who also designed the Gare d'Orsay, now the Musée d'Orsay. The new entrance featured a striking rotunda with a glass dome over a floor of glass bricks, which allowed daylight to illuminate the levels below. The entrance was badly damaged by a fire in 1996; the rotunda was restored, but only a few elements of the titles hall still remain.
Railroad stations
The Belle Époque was the golden age of the Paris railroad station; the stations served as the gateways of the city for the visitors who arrived for the great Expositions. A new Gare de Lyon was built by Marius Toudoire between 1895 and 1902, making the maximum use of glass and iron combined with a picturesque bell tower and a Beaux-Arts façade and decoration. The café of the station looked down on the platforms where the trains arrived. The Gare d'Orsay (now the Musée d'Orsay) was the first station in the center of the city, on the site of the old Palais d'Orsay, burned by the Paris Commune. It was built in 1898–1900 in the palatial Beaux-Arts style by the architect Victor Laloux. It was the first Paris station to be electrified and to place the train platforms below street level, a model soon copied by New York and other cities.
Residential architecture – Beaux-Arts to Art Nouveau
Private houses and apartment buildings in the Belle Époque were usually in the Beaux-Arts style, either neo-Renaissance or neoclassical, or a mixture of the two. A good example is the Hôtel de Choudens (1901) by Charles Girault, built for a client who wanted a house in the style of the Petit Palais, which Girault had designed. Apartment buildings saw changes in the interiors; with the development of elevators, the apartments of the wealthiest residents moved from the first floor above the street to the top floor. The rooflines of the new apartment buildings also changed, as the city removed the restrictions imposed by Haussmann; the most extravagant example was the apartment building at 27–29 quai Anatole-France in the 7th arrondissement (1906), which sprouted a profusion of turrets, spires and decorative arches, made possible by reinforced concrete.
A competition for new façades was held in 1898, and one winner was Hector Guimard for the design of a new apartment building, the Castel Béranger (1895–98), the first Paris building in the Art Nouveau style. The façade was inspired by the work of the Belgian Art Nouveau pioneer Victor Horta; it used both elements of medieval architecture and curved motifs inspired by plants and flowers. Guimard designed every detail of the house, including the furniture, wallpaper, door handles and locks. The success of the Castel Béranger led to Guimard's selection to design the entrances of the stations of the new Paris Métro. In 1901, the façade competition was won by an even more extravagant architect, Jules Lavirotte, who designed a house for the ceramic maker Alexandre Bigot which was more a work of inhabited sculpture than a building; the façade was entirely covered with decorative ceramic sculpture. The popularity of Art Nouveau did not last long; the last Paris building in the style was Guimard's own house, the Hôtel Guimard at 122 avenue Mozart (1909–13).
Between the wars – Art Deco and modernism (1919–1939)
Art Deco
Art Nouveau had its moment of glory in Paris beginning in 1898, but was out of fashion by 1914. Art Deco, which appeared just before the war, became the dominant style for major buildings between the wars. The primary building material of the new era was reinforced concrete. The structure of the buildings was clearly expressed on the exterior, and was dominated by horizontal lines, with rows of bow windows and small balconies. They often had classical features, such as rows of columns, but these were expressed in a stark modern form; ornament was kept to a minimum, and statuary and ornament were often applied, as carved stone plaques on the façade, rather than expressed in the architecture of the building itself.
The leading proponents of the Art Deco were Auguste Perret and Henri Sauvage. Perret designed the Théâtre des Champs-Élysées, the first Art Deco building in Paris, in 1913, just before the War. His major achievements between the wars were the building for the Mobilier National (1936) and the Museum of Public Works (1939), now the Economic and Social Council, located on place d'Iéna, with its giant rotunda and columns inspired by ancient Egypt. Sauvage expanded the La Samaritaine department store in 1931, preserving elements of the Art Nouveau interior and façades, while giving it an Art Deco form. He experimented with new, simpler forms of apartment buildings, including the stepped building, creating terraces for the upper floors, and covered concrete surfaces with white ceramic tile, resembling stone. He also was a pioneer in the use of prefabricated building materials, reducing costs and construction time.
A related Paris fashion between the wars was the Style paquebot, buildings that resembled the ocean liners of the period, with sleek white façades, rounded corners, and nautical railings. They were often built on narrow pieces of land, or on corners. One example is the building at 3 boulevard Victor in the 15th arrondissement, built in 1935.
Exposition architecture
The international expositions of the 1920s and 1930s left fewer architectural landmarks than the earlier exhibitions. The 1925 International Exhibition of Modern Decorative and Industrial Arts had several very modern buildings, including the Russian pavilion, the Art Deco Hôtel du collectionneur by Émile-Jacques Ruhlmann, and the Pavillon de l'Esprit Nouveau by Le Corbusier, but they were all torn down when the exhibit ended. One impressive Art Deco building from the 1931 Colonial Exposition survived: the Museum of the Colonies at the Porte Dorée, by Albert Laprade, 89 meters long, with a colonnade and a front wall entirely covered with a bas-relief by Alfred Janniot depicting the animals, plants and cultures of the French colonies. The interior was filled with sculpture and murals from the period, still visible today. Today, the building is the Cité nationale de l'histoire de l'immigration, or museum of the history of immigration.
The Paris International Exposition of 1937, held on the eve of World War II, was not a popular success; its two largest national pavilions were those of Nazi Germany and Stalinist Russia, facing each other across the central esplanade. The chief architectural legacies were the Palais de Chaillot, built where the old Palais du Trocadéro had been, by Jacques Carlu, Louis Hippolyte Boileau and Léon Azéma (1935–37) of concrete and beige stone, and the Palais d'Iéna facing it; both were built in a monumental neoclassical style. The nearby Palais de Tokyo was another exhibit legacy, designed by André Aubert, Jean-Claude Dondel, Paul Viard and Marcel Dastugue (1934–1937), in a similar neoclassical style, with a colonnade; it is now the modern art museum of the City of Paris. The Palais d'Iéna, by Auguste Perret, was built as the Museum of Public Works (1936–1948); it contains an impressive rotunda and conference hall with a neoclassical façade, all built of reinforced concrete, and after the war it was converted into the headquarters of the French Economic, Social and Environmental Council.
Residential architecture
The architect Auguste Perret had anticipated the modern residential style in 1904, with an Art Deco house of reinforced concrete faced with ceramics on the rue Franklin. Henri Sauvage also made Art Deco residential buildings with clean geometric lines, made of reinforced concrete faced with white ceramic tiles. The architect Charles-Édouard Jeanneret-Gris, better known as Le Corbusier, went further, designing houses in geometric forms, lacking any ornament. At the age of twenty-one he worked as an assistant in the office of Perret. In 1922, he opened his own architectural office with his cousin Pierre Jeanneret and built some of his first houses in Paris, notably the Villa La Roche at 10 square du Docteur-Blanche in the 16th arrondissement, built for a Swiss banker and art collector. Constructed in 1923, it introduced many of the themes found in Le Corbusier's later work, including white concrete walls, an interior ramp between levels and horizontal bands of windows. He also designed the furniture for the house. Robert Mallet-Stevens pursued a similar modernist style, composed of geometric shapes, walls of glass, and an absence of ornament. He built a studio and residence with a large glass wall and spiral stairway for the glass designer Louis Barillet at 15 square Vergennes (15th arrondissement), and constructed a series of houses for artists, each one different, on what is now known as the rue Mallet-Stevens in the 16th arrondissement. One of the most striking houses of the 1920s was the house of the artist Tristan Tzara at 15 avenue Junot in the 18th arrondissement, designed by the Austrian architect Adolf Loos. The interior was completely irregular: each room was of a different size, and on a different level. Another unusual house was the Maison de Verre, or "Glass House", at 31 rue Saint-Guillaume in the 7th arrondissement, built for Doctor Dalsace by Pierre Chareau with Bernard Bijvoet (1927–31). It was made entirely of glass bricks supported by a metal frame.
Modernist buildings were relatively rare in Paris in the 1920s and 1930s. The most characteristic Paris residential architect of the period was Michel Roux-Spitz, who built a series of large luxury apartment buildings, mostly in the 6th and 7th arrondissements. The buildings were all built of reinforced concrete, had white walls, often faced with stone, and horizontal rows of three-sided bow windows, a modernized version of the Haussmann apartment buildings on the same streets.
Public housing
Beginning in 1919, soon after the end of World War I, the French government began building public housing on a huge scale, particularly on the vacant land of the former fortifications around the city. The new buildings were called HBMs, or Habitations à bon marché (low-cost residences). They were concentrated to the north, east and south of the city, while a more expensive type of housing, the ILM, or Immeubles à loyer moyen (moderate-priced residences), intended for the middle class, was built to the west of the city. A special agency of architects was established to design the buildings. The first group of 2,734 new housing units, called the Cité de Montmartre, was built between the Porte de Clignancourt and the Porte de Montmartre between 1922 and 1928. The new buildings were constructed of concrete and brick. The earliest buildings had many decorative elements, particularly at the roofline, including concrete pergolas. The decoration became simpler over the years, and over time the brick gave way gradually to reinforced concrete façades.
Religious architecture
Several new churches were built in Paris between the wars, in varied styles. The Église du Saint-Esprit (1928–32), located at 186 avenue Daumesnil in the 12th arrondissement, was designed by Paul Tournon. It has a modern exterior made of reinforced concrete covered with red brick and a modern bell tower 75 meters high, but the central feature is a huge dome, 22 meters in diameter. The design, like that of the Sacré-Cœur Basilica, was inspired by Byzantine churches. The interior was decorated with murals by several notable artists, including Maurice Denis. The Église Saint-Pierre-de-Chaillot, at 31 avenue Marceau (16th), was designed by Émile Bois (1932–38); its tower and massive Romanesque entrance were inspired by the churches of the Périgord region. The Church of Sainte-Odile at 2 avenue Stéphane-Mallarmé (17th arrondissement), by Jacques Barge (1935–39), has a single nave, three neo-Byzantine cupolas, and the highest bell tower in Paris.
The Grand Mosque of Paris was one of the more unusual buildings constructed during the period. Intended to honor the Muslim soldiers from the French colonies who died for France during the war, it was designed by the architect Maurice Tranchant de Lunel, and built and decorated with the assistance of craftsmen from North Africa. The project was funded by the National Assembly in 1920 and construction began in 1922; the mosque was dedicated in 1926 by the President of France, Gaston Doumergue, and the Sultan of Morocco, Moulay Youssef. The style was termed "Hispano-Moorish", and the design was largely influenced by the Grand Mosque of Fez, Morocco.
After World War II (1946–2000)
The triumph of modernism
In the years after World War II, modernism became the official style for public buildings, partly because it was new and fashionable, and partly because it was usually less expensive to build. Buildings were designed to express their function, using simple geometric forms, with a minimum of ornament and decoration. They were usually designed so that every office had its own window and view. The materials of choice were reinforced concrete, sometimes covered with aluminium panels, and glass. The term "Palais", used for many public buildings before the war, was replaced by the more modest term "Maison", or "House". In place of decoration, the buildings often contained works of sculpture in interior courtyards and were surrounded by gardens. There was little if anything specifically French about the new buildings; they resembled modernist buildings in the United States and other parts of Europe, and, particularly under President François Mitterrand, were often designed by internationally famous architects from other countries.
Among the earliest and most influential of the new public buildings was the Maison de la Radio (1952–1963), the headquarters of French national radio and television, along the Seine in the 16th arrondissement, designed by Henry Bernard. Bernard had studied at the École des Beaux-Arts, won the Prix de Rome, and eventually became the head of the Académie des Beaux-Arts, but he converted with enthusiasm to the new style. The Maison de la Radio was composed of two circular buildings fitted one inside the other: an outer circle facing the river, with a thousand offices; an inner circle made up of studios; and a 68-meter-tall tower in the center, which contains the archives. It was originally designed with a concrete façade on the outer building, but this was modified and covered with a skin of aluminium and glass. It was described by its builders as a continuation toward the west of the line of great monuments beside the Seine: the Louvre, the Grand Palais, and the Palais de Chaillot.
Other major public buildings in the monumental modernist style included the headquarters of UNESCO, the United Nations cultural headquarters, on Place Fontenoy in the 7th arrondissement, by Marcel Breuer, Bernard Zehrfuss and Pier Luigi Nervi (1954–1958), in the form of a tripod of three wings made of reinforced concrete, with gardens between the wings. Each office in the building benefited from natural light and an exterior view. The headquarters of the French Communist Party at 2 Place du Colonel Fabien (19th arrondissement), was designed by Oscar Niemeyer, who had just finished designing Brasília, the new Brazilian capital city. It was constructed between 1969 and 1980 and was an eight-story block built on columns above the street, with a smooth undulating glass façade. The auditorium next to the building was half buried underground, covered by a concrete dome that allowed light to enter.
Presidential projects
In the 1970s, French Presidents began to build major architectural projects which became their legacy, usually finished after they left office. The first was Georges Pompidou, a noted admirer and patron of modern art, who made plans for what became, after his death in 1974, the Centre Pompidou. It was designed by Renzo Piano and Richard Rogers, and expressed all of its mechanical functions on the exterior of the building, with brightly colored pipes, ducts and escalators. The principal architectural projects begun by his successor, Giscard d'Estaing, were the conversion of the Gare d'Orsay, a central railroad station, into the Musée d'Orsay, a museum devoted to 19th-century French art (1978–86), and the City of Sciences and Industry (1980–86) in the Parc de la Villette in the 19th arrondissement, designed by Adrien Fainsilber, whose features included La Géode, a geodesic sphere 36 meters in diameter made of polished stainless steel, now containing an omnimax theater.
Between 1981 and 1995, François Mitterrand had fourteen years in power, enough time to complete more projects than any president since Napoleon III. In the case of the Louvre Pyramid, he personally selected the architect, without a competition. He completed the projects begun by Giscard d'Estaing and began even more ambitious projects of his own, many of them designed for the celebration of the bicentennial of the French Revolution in 1989. His Grands travaux ("Great Works") included the Institut du Monde Arabe by architect Jean Nouvel, finished in 1987; the Grand Louvre, including the glass pyramid (1983–89) designed by I. M. Pei; the Grande Arche of La Défense by the Danish architect Johan Otto von Spreckelsen, a building in the form of a giant ceremonial arch which marked the western end of the historical axis that began at the Louvre (inaugurated July 1989); the Opéra Bastille, by the architect Carlos Ott, opened on 13 July 1989, the day before the bicentennial of the French Revolution; and a new building for the Ministries of the Economy and Finance at Bercy (12th arrondissement) (1982–88), a massive building next to the Seine which resembled both a gateway to the city and a huge bridge with its feet in the river, designed by Paul Chemetov and Borja Huidobro. His last project was located on the other side of the Seine from the Finance Ministry: a group of four book-shaped glass towers for the Bibliothèque nationale de France (1989–95), designed by Dominique Perrault. The books were stored in the towers, while the reading rooms were located beneath a terrace between the buildings, with windows looking out onto a garden.
The age of towers
Until the 1960s there were no tall buildings in Paris to share the skyline with the Eiffel Tower, the tallest structure in the city; a strict height limit of thirty-five meters was in place. However, in October 1958, under the Fifth Republic, in order to permit the construction of more housing and office buildings, the rules began to change. A new urban plan for the city was adopted by the municipal council in 1959. Higher buildings were permitted, as long as they met both technical and aesthetic standards. The first new tower to be constructed was an apartment building, the Tour Croulebarbe, at 33 rue Croulebarbe in the 13th arrondissement. It was twenty-two stories and 61 meters high, and was completed in 1961. Between 1960 and 1975, about 160 new buildings higher than fifteen stories were constructed in Paris, more than half of them in the 13th and 15th arrondissements. Most of them were about one hundred meters high; several clusters of high-rises were the work of one developer, Michel Holley, who built the towers of the Place d'Italie, the Front de Seine, and the Hauts de Belleville.
Two of the projects of residential towers were especially large: 29 hectares along the banks of the Seine at Beaugrenelle, and 87 hectares between the Place d'Italie and Tolbiac. Blocks of old buildings were torn down and replaced with residential towers.
Between 1959 and 1968, the old Montparnasse railway station was demolished and rebuilt nearby, making a large parcel of land available for construction. The municipal council learned of the project only indirectly, through a message from the ministry in charge of construction projects. The first plan, proposed in 1957, was a new headquarters for Air France, a state-owned enterprise, in a tower 150 meters high. In 1959, the proposed height was increased to 170 meters. In 1965, to protect the views in the historic part of the city, the municipal council declared that the new building should be shorter, so it would not be visible from the esplanade of Les Invalides. In 1967, the Prefect of Paris, representing the government of President de Gaulle, overruled the municipal council's decision and raised the height to two hundred meters, to create more rentable office space. The new building, the Tour Montparnasse, built between 1969 and 1972, was (and still is) the tallest building within the city limits.
The growing number of skyscrapers appearing on the Paris skyline provoked resistance from the Paris population. In 1975, President Giscard d'Estaing declared a moratorium on new towers within the city, and in 1977 the City of Paris was given a new Plan d'Occupation des Sols (POS) or land use plan, which imposed a height limit of twenty-five meters in the center of Paris and 31 meters in the outer arrondissements. Also, new buildings were required to be constructed right up to the sidewalk, without setbacks, further discouraging very tall buildings. The construction of skyscrapers continued outside of Paris, particularly in the new business district of La Défense.
At the end of the 20th century, the tallest structure in the City of Paris and the Île-de-France was still the Eiffel Tower in the 7th arrondissement, 324 meters high, completed in 1889. The tallest building in the Paris region was the Tour First, at 225 meters, located in La Défense and built in 1974.
Public housing – the HLM and the barre
After the War Paris faced a severe housing shortage; most of the housing in the city dated to the 19th century and was in terrible condition. Only two thousand new housing units were constructed between 1946 and 1950. The number rose to 4,230 in 1951 and more than 10,000 in 1956. The office of public housing of the City of Paris acquired the cheapest land it could buy, at the edges of the city. In 1961, when land within the city was exhausted, it was authorized to begin buying land in the surrounding suburbs. The first postwar social housing buildings were relatively low, at three or four stories. Much larger buildings, built with prefabricated materials and placed in clusters, began to appear in the mid-1950s. They were known as HLMs, or habitations à loyer modéré, meaning moderate-cost housing. A larger type of HLM, known as a barre because it was longer than it was high, also appeared in the mid-1950s. Barres usually had between 200 and 300 apartments, were built in clusters, and were often some distance from shops and public transportation. They were welcomed by the families who lived there in the 1950s and early 1960s, but in later years they were crowded with recent immigrants and suffered from crime, drugs and social unrest.
Contemporary (2001–present)
Paris architecture since 2000 has been very diverse, with no single dominant style. In the field of museums and monuments, the most prominent name has been Jean Nouvel. His earlier work in Paris included the Institut du Monde Arabe (1982–87), and the Fondation Cartier (1992–94), which features a glass screen between the building and the street. In 2006 he completed the Musée du Quai Branly, the Presidential project of Jacques Chirac, a museum presenting the cultures of Asia, Africa and the Americas. It also included a glass screen between the building and the street, as well as a façade covered with living plants. In 2015, he completed the new Philharmonie de Paris at Parc de la Villette.
The American architect Frank Gehry also made a notable contribution to Paris architecture with his American Center in Bercy (1994), which became the home of the Cinémathèque Française in 2005, and with the building of the Louis Vuitton Foundation, a museum of modern and contemporary art in the Bois de Boulogne.
Supermodernism
A notable new style of French architecture, called Supermodernism by critic Hans Ibeling, gives precedence to the visual sensations, spatial and tactile, of the viewer looking at the façade. The best-known architects in this school are Jean Nouvel and Dominique Perrault.
The Hôtel Berlier (1986–89) by Dominique Perrault, an office building at 26-34 rue Bruneseau in the 13th arrondissement, is a block of glass, whose structure is nearly invisible. Perrault also designed the new French National Library.
The headquarters of the newspaper Le Monde at 74–84 boulevard August-Blanqui in the 13th arrondissement, designed by Christian de Portzamparc (2005), has a façade that resembles the front page of the newspaper.
The administration building of the French Ministry of Culture at 182 rue Saint-Honoré (2002–04), by Francis Soler and Frédéric Druot, is an older structure whose façade is completely covered with an ornamental metal mesh.
The Hôtel Fouquet's Barrière at 2 rue Vernet, 23 rue Quentin-Bauchart and 46 avenue George-V, in the 8th arrondissement, designed by Édouard François, is covered by a skin of concrete which is a molding of the façade of an historic neighboring building.
Ecological architecture
One important theme of early-21st-century Paris architecture was making buildings that were ecologically friendly.
The "Flower-Tower" built in 2004 by Édouard François, located at 23 rue-Albert-Roussel in the 17th arrondissement, is covered with the living foliage of bamboo plants, placed in concrete pots at the edges of the terraces on each floor, and watered automatically.
The façade of the university restaurant building at 3 rue Mabillon in the 6th arrondissement, built in 1954, was re-clad by architect Patrick Mauger with tree logs to provide better thermal insulation.
A public housing hostel for the homeless, the Centre d'hébergement Emmaüs, designed by Emmanuel Saadi in 2011, located at 179 quai de Valmy in the 10th arrondissement, is entirely covered by photovoltaic panels for generating solar electricity.
Conversions
Another important theme in 21st-century Parisian architecture is the conversion of older industrial or commercial buildings for new purposes, called in French "reconversions" or "transcriptions".
A large grain warehouse and flour mill in the 13th arrondissement were converted between 2002 and 2007 into buildings for the Paris Diderot University campus. The architects were Nicolas Michelin and Rudy Ricciotti.
Les Docks, a large warehouse structure built before World War I alongside the Seine at 34 quai d'Austerlitz, was converted 2005–08 into the City of Fashion and Design, by means of a "plug-over" of ramps, stairways and passages. The architects were Jakob + MacFarlane.
Public housing
Since the 1980s the more recent constructions of HLMs, or public housing, in Paris have tried to avoid the massive and monotonous structures of the past, with more picturesque architectural detail, variety of styles, greater use of color, and large complexes broken into smaller mini-neighborhoods. The new style, called fragmentation, was particularly pioneered by architects Christian de Portzamparc and Frédéric Borel. In one complex on rue Pierre-Rebière in the 17th arrondissement the 180 residences were designed by nine different teams of architects.
See also
French architecture
Concours de façades de la ville de Paris
Architecture of the Paris Métro
List of monuments historiques in Paris
List of historic churches in Paris
List of tallest buildings and structures in the Paris region
Neoclassicism in France
French Restoration style
References
Notes and citations
Books cited in the text
External links
Arts in Paris
Paris | Architecture of Paris | Engineering | 22,353 |
18,472,059 | https://en.wikipedia.org/wiki/Concrete%20grinder | A concrete grinder is an abrasive machine for grinding and polishing concrete and natural stone. Concrete grinders can come in many configurations, the most common being a hand-held general purpose angle grinder, but it may be a specialized tool for countertops or floors. Angle grinders are small and mobile, and allow one to work on harder to reach areas and perform more precise work.
There are also purpose-built floor grinders that are used for grinding and polishing marble, granite and concrete.
Concrete often has a higher sliding friction than marble or granite, which are also worked wet and therefore with less friction. Floor grinders can cover large surfaces, and they have more weight on them, making the grinding process more efficient.
Attachments
All concrete grinders use abrasives, such as diamond tools or silicon carbide, to grind or polish. The diamond tools most commonly used for grinding are diamond grinding cup wheels; other machines may use diamond segments mounted on various plates or slide-on diamond grinding shoes, while polishing is usually done with circular resin diamond polishing pads.
Diamond attachments are the most common type of abrasive used under concrete grinders. They come in many grits, ranging from 6 grit to the high thousands, although 1800 grit is considered by the insurance industry to be the highest shine to apply to a floor surface.
Wet or dry usage
Concrete can be ground wet or dry, although dust extraction equipment needs to be used when grinding dry.
To grind concrete dry, a grinding shroud can be sourced for most angle grinder sizes, and floor grinders usually have one built in. This provides the necessary vacuum attachment, to which a vacuum or HEPA filter-equipped vacuum can be connected to capture the fine dust produced when grinding dry. Concrete can also be ground wet, in which case no vacuum is used. An issue with dry grinding is that it can be time-consuming: it is a slower method of keeping the diamond tools cutting, and the fine dust particles quickly block up the HEPA filters in the vacuum. Continuously stopping to clean or replace filters wastes time, and this is where a dust separator can be beneficial. It is connected between the concrete grinder and the vacuum cleaner and works by capturing the larger particles of concrete in its drum, so only the fine particles reach the vacuum cleaner.
The benefit of grinding concrete wet is that it requires less attachments than when grinding dry. The water makes the dust particles heavy by turning them into a slurry or paste and prevents them from being dispersed into the air. This significantly reduces health risks from breathing in concrete dust.
Dust precautions
When grinding concrete it is important to ensure steps are taken to mitigate exposure to concrete dust. According to the Cancer Council, approximately 230 people develop lung cancer each year due to past exposure to silica dust at work. Fine concrete dust contains silica, which is very harmful to the lungs and can lead to silicosis, so all effort should be made to avoid breathing concrete dust. In construction, mining and other industrial jobs that expose workers to dust and small particles, one should wear a respirator commonly known as an N95 mask, FFP2 mask, P2 mask or KN95 mask to protect against inhaling concrete dust. Such a respirator can block 94–95% of non-oil-based particulates larger than 0.3 microns. Since concrete dust particles are generally no smaller than about 0.5 microns, an N95 respirator provides effective protection against concrete dust when fitted properly.
For green building methods, many regulators have seen the benefit of using concrete grinders that are designed to finish concrete to a very stable wear surface that can safely be used for many years as a floor or tabletop. These machines are sometimes powered by 240 volts or higher, as they require more motor power than 120 volts can supply. Some machines are powered by liquefied petroleum gas, such as is used on forklifts, so that they can be run in well-ventilated areas without a power cord, but these machines usually have fewer features than a fully electric unit.
References
Grinding machines
Power tools | Concrete grinder | Physics | 858 |
208,215 | https://en.wikipedia.org/wiki/Geochronology | Geochronology is the science of determining the age of rocks, fossils, and sediments using signatures inherent in the rocks themselves. Absolute geochronology can be accomplished through radioactive isotopes, whereas relative geochronology is provided by tools such as paleomagnetism and stable isotope ratios. By combining multiple geochronological (and biostratigraphic) indicators the precision of the recovered age can be improved.
Geochronology is different in application from biostratigraphy, which is the science of assigning sedimentary rocks to a known geological period via describing, cataloging and comparing fossil floral and faunal assemblages. Biostratigraphy does not directly provide an absolute age determination of a rock, but merely places it within an interval of time at which that fossil assemblage is known to have coexisted. Both disciplines work together hand in hand, however, to the point where they share the same system of naming strata (rock layers) and the time spans utilized to classify sublayers within a stratum.
The science of geochronology is the prime tool used in the discipline of chronostratigraphy, which attempts to derive absolute age dates for all fossil assemblages and determine the geologic history of the Earth and extraterrestrial bodies.
Dating methods
Radiometric dating
By measuring the amount of radioactive decay of a radioactive isotope with a known half-life, geologists can establish the absolute age of the parent material. A number of radioactive isotopes are used for this purpose, and depending on the rate of decay, are used for dating different geological periods. More slowly decaying isotopes are useful for longer periods of time, but less accurate in absolute years. With the exception of the radiocarbon method, most of these techniques are actually based on measuring an increase in the abundance of a radiogenic isotope, which is the decay-product of the radioactive parent isotope. Two or more radiometric methods can be used in concert to achieve more robust results. Most radiometric methods are suitable for geological time only, but some such as the radiocarbon method and the 40Ar/39Ar dating method can be extended into the time of early human life and into recorded history.
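As a concrete illustration of the decay arithmetic described above, the following sketch computes an age from a measured radiogenic daughter-to-parent ratio using the standard relation t = (1/λ) ln(1 + D/P), where λ = ln 2 divided by the half-life; this form assumes a closed system with no initial daughter isotope, and the numeric values below are illustrative rather than measured data.

```python
import math

def radiometric_age(daughter_parent_ratio, half_life_years):
    """Age from t = (1/lambda) * ln(1 + D/P), where D/P is the measured
    radiogenic-daughter to parent ratio and lambda = ln(2) / half-life.
    Assumes a closed system with no initial daughter isotope."""
    decay_constant = math.log(2) / half_life_years
    return math.log(1 + daughter_parent_ratio) / decay_constant

# Illustrative example: a mineral with a 206Pb/238U ratio of 0.5,
# using the 238U half-life of about 4.468 billion years.
age = radiometric_age(0.5, 4.468e9)
print(round(age / 1e9, 2), "billion years")  # ~2.61
```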
Some of the commonly used techniques are:
Radiocarbon dating. This technique measures the decay of carbon-14 in organic material and can be best applied to samples younger than about 60,000 years.
Uranium–lead dating. This technique measures the ratio of two lead isotopes (lead-206 and lead-207) to the amount of uranium in a mineral or rock. Often applied to the trace mineral zircon in igneous rocks, this method is one of the two most commonly used (along with argon–argon dating) for geologic dating. Monazite geochronology is another example of U–Pb dating, employed for dating metamorphism in particular. Uranium–lead dating is applied to samples older than about 1 million years.
Uranium–thorium dating. This technique is used to date speleothems, corals, carbonates, and fossil bones. Its range is from a few years to about 700,000 years.
Potassium–argon dating and argon–argon dating. These techniques date metamorphic, igneous and volcanic rocks. They are also used to date volcanic ash layers within or overlying paleoanthropologic sites. The younger limit of the argon–argon method is a few thousand years.
Electron spin resonance (ESR) dating
Fission-track dating
Cosmogenic nuclide geochronology
A series of related techniques for determining the age at which a geomorphic surface was created (exposure dating), or at which formerly surficial materials were buried (burial dating). Exposure dating uses the concentration of exotic nuclides (e.g. 10Be, 26Al, 36Cl) produced by cosmic rays interacting with Earth materials as a proxy for the age at which a surface, such as an alluvial fan, was created. Burial dating uses the differential radioactive decay of two cosmogenic elements as a proxy for the age at which a sediment was screened by burial from further cosmic-ray exposure.
Luminescence dating
Luminescence dating techniques observe 'light' emitted from materials such as quartz, diamond, feldspar, and calcite. Many types of luminescence techniques are utilized in geology, including optically stimulated luminescence (OSL), cathodoluminescence (CL), and thermoluminescence (TL). Thermoluminescence and optically stimulated luminescence are used in archaeology to date 'fired' objects such as pottery or cooking stones and can be used to observe sand migration.
Incremental dating
Incremental dating techniques allow the construction of year-by-year annual chronologies, which can be fixed (i.e. linked to the present day and thus calendar or sidereal time) or floating.
Dendrochronology
Ice cores
Lichenometry
Varves
Paleomagnetic dating
A sequence of paleomagnetic poles (usually called virtual geomagnetic poles), which are already well defined in age, constitutes an apparent polar wander path (APWP). Such a path is constructed for a large continental block. APWPs for different continents can be used as a reference for newly obtained poles for the rocks with unknown age. For paleomagnetic dating, it is suggested to use the APWP in order to date a pole obtained from rocks or sediments of unknown age by linking the paleopole to the nearest point on the APWP. Two methods of paleomagnetic dating have been suggested: (1) the angular method and (2) the rotation method. The first method is used for paleomagnetic dating of rocks inside of the same continental block. The second method is used for the folded areas where tectonic rotations are possible.
Magnetostratigraphy
Magnetostratigraphy determines age from the pattern of magnetic polarity zones in a series of bedded sedimentary and/or volcanic rocks by comparison to the magnetic polarity timescale. The polarity timescale has been previously determined by dating of seafloor magnetic anomalies, radiometrically dating volcanic rocks within magnetostratigraphic sections, and astronomically dating magnetostratigraphic sections.
Chemostratigraphy
Global trends in isotope compositions, particularly carbon-13 and strontium isotopes, can be used to correlate strata.
Correlation of marker horizons
Marker horizons are stratigraphic units of the same age and of such distinctive composition and appearance that, despite their presence in different geographic sites, there is certainty about their age-equivalence. Fossil faunal and floral assemblages, both marine and terrestrial, make for distinctive marker horizons. Tephrochronology is a method for geochemical correlation of unknown volcanic ash (tephra) to geochemically fingerprinted, dated tephra. Tephra is also often used as a dating tool in archaeology, since the dates of some eruptions are well-established.
Geological hierarchy of chronological periodization
Geochronology, from largest to smallest:
Supereon
Eon
Era
Period
Epoch
Age
Chron
Differences from chronostratigraphy
It is important not to confuse geochronologic and chronostratigraphic units. Geochronological units are periods of time, thus it is correct to say that Tyrannosaurus rex lived during the Late Cretaceous Epoch. Chronostratigraphic units are geological material, so it is also correct to say that fossils of the genus Tyrannosaurus have been found in the Upper Cretaceous Series. In the same way, it is entirely possible to go and visit an Upper Cretaceous Series deposit – such as the Hell Creek deposit where the Tyrannosaurus fossils were found – but it is naturally impossible to visit the Late Cretaceous Epoch as that is a period of time.
See also
Astronomical chronology
Age of Earth
Age of the universe
Chronological dating, archaeological chronology
Absolute dating
Relative dating
Phase (archaeology)
Archaeological association
Geochronology
Closure temperature
Geologic time scale
Geological history of Earth
Thermochronology
List of geochronologic names
General
Consilience, evidence from independent, unrelated sources can "converge" on strong conclusions
References
Further reading
Smart, P.L., and Frances, P.D. (1991), Quaternary dating methods - a user's guide. Quaternary Research Association Technical Guide No.4
Lowe, J.J., and Walker, M.J.C. (1997), Reconstructing Quaternary Environments (2nd edition). Longman publishing
Mattinson, J. M. (2013), Revolution and evolution: 100 years of U-Pb geochronology. Elements 9, 53–57.
Geochronology bibliography Talk:Origins Archive
External links
Geochronology and Isotopes Data Portal
International Commission on Stratigraphy
BGS Open Data Geochronological Ontologies
Radiometric dating | Geochronology | Chemistry | 1,857 |
8,345,788 | https://en.wikipedia.org/wiki/Addressin | Mucosal vascular addressin cell adhesion molecule 1 (MAdCAM-1) is a protein that in humans is encoded by the MADCAM1 gene. The protein encoded by this gene is an endothelial cell adhesion molecule that interacts preferentially with the leukocyte beta7 integrin LPAM-1 (alpha4 / beta7), L-selectin, and VLA-4 (alpha4 / beta1) on myeloid cells to direct leukocytes into mucosal and inflamed tissues. It is a member of the immunoglobulin superfamily and is similar to ICAM-1 and VCAM-1.
Nomenclature
Addressin is a lesser-used term to describe the group of adhesion molecules that are involved with lymphocyte homing, commonly found at high-endothelial venules (HEVs) where lymphocytes exit the blood and enter the lymph node. Addressins are the ligands to the homing receptors of lymphocytes. The task of these ligands and their receptors is to determine which tissue the lymphocyte will enter next. They carry carbohydrates in order to be recognized by L-selectin. Addressins physically bind to mobile lymphocytes to guide them to the HEVs. Examples of molecules that are often referred to as addressins are CD34 and GlyCAM-1 on HEVs in peripheral lymph nodes, and MAdCAM-1 on endothelial cells in the intestine.
Function
In terms of migration, MAdCAM-1 is selectively expressed on mucosal endothelial cells, driving memory T-cell re-circulation through mucosal tissues. In contrast, and indeed the main difference between the two molecules, ICAM molecules are involved with naïve T-cell re-circulation. Whereas MAdCAM-1 is selectively expressed, ICAM is broadly expressed on inflamed endothelium.
Peripheral node addressins
Peripheral node addressins (PNAd) are carbohydrate residues that are lymphocyte homing receptor ligands that are expressed on the HEVs of peripheral lymph nodes. These proteins collectively bind to L-selectin to guide lymphocytes such as mature naïve B and T cells into the lymph node. During the development of secondary lymphoid organs, PNAd expression is upregulated following the upregulation and subsequent downregulation of MAdCAM-1 on HEVs. PNAd expression, as well as the expression of MAdCAM-1, is dependent on lymphotoxin signaling in the HEVs of lymph nodes.
Clinical significance
In inflammatory bowel diseases, MAdCAM-1 can be overexpressed on the endothelial cells of intestinal mucosa and gut-associated lymphoid tissue, leading to excessive inflammation in the gut. A potential therapeutic target to manage these diseases could be the MAdCAM-1 molecules that are expressed on these cells and bring in lymphocytes. One example of a potential therapy is the fully human monoclonal antibody ontamalimab that targets and binds to MAdCAM-1, preventing it from interacting with the integrins on the surface of the lymphocytes.
See also
CD34
GLYCAM1
Integrin β7 - a constituent of α4β7
References
Further reading
External links
Proteins | Addressin | Chemistry | 716 |
40,865,450 | https://en.wikipedia.org/wiki/Bell%20triangle | In mathematics, the Bell triangle is a triangle of numbers analogous to Pascal's triangle, whose values count partitions of a set in which a given element is the largest singleton. It is named for its close connection to the Bell numbers, which may be found on both sides of the triangle, and which are in turn named after Eric Temple Bell. The Bell triangle has been discovered independently by multiple authors, beginning with Charles Sanders Peirce and including also Alexander Aitken, and for that reason has also been called Aitken's array or the Peirce triangle.
Values
Different sources give the same triangle in different orientations, some flipped from each other. In a format similar to that of Pascal's triangle, and in the order listed in the On-Line Encyclopedia of Integer Sequences (OEIS), its first few rows are:
1
1 2
2 3 5
5 7 10 15
15 20 27 37 52
52 67 87 114 151 203
203 255 322 409 523 674 877
Construction
The Bell triangle may be constructed by placing the number 1 in its first position. After that placement, the leftmost value in each row of the triangle is filled by copying the rightmost value in the previous row. The remaining positions in each row are filled by a rule very similar to that for Pascal's triangle: they are the sum of the two values to the left and upper left of the position.
Thus, after the initial placement of the number 1 in the top row, it is the last position in its row and is copied to the leftmost position in the next row. The third value in the triangle, 2, is the sum of the two previous values above-left and left of it. As the last value in its row, the 2 is copied into the third row, and the process continues in the same way.
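The construction rule just described translates directly into code. The sketch below (the function name is ours) builds the triangle row by row: copy the last entry of the previous row, then repeatedly add the left neighbor and the upper-left neighbor.

```python
def bell_triangle(rows):
    """Build the Bell triangle: each row starts with the last entry of
    the previous row, and every later entry is the sum of the entry to
    its left and the entry to its upper left."""
    triangle = [[1]]
    for _ in range(rows - 1):
        row = [triangle[-1][-1]]              # copy previous row's last value
        for upper_left in triangle[-1]:
            row.append(row[-1] + upper_left)  # left neighbor + upper-left
        triangle.append(row)
    return triangle

for row in bell_triangle(5):
    print(row)
# [1]
# [1, 2]
# [2, 3, 5]
# [5, 7, 10, 15]
# [15, 20, 27, 37, 52]
```

The first and last entries of each row reproduce the Bell numbers, in line with the diagonals discussed below.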
Combinatorial interpretation
The Bell numbers themselves, on the left and right sides of the triangle, count the number of ways of partitioning a finite set into subsets, or equivalently the number of equivalence relations on the set.
Sun and Wu provide the following combinatorial interpretation of each value in the triangle. Following their notation, let An,k denote the value that is k positions from the left in the nth row of the triangle, with the top of the triangle numbered as A1,1. Then An,k counts the number of partitions of the set {1, 2, ..., n + 1} in which the element k + 1 is the only element of its set and each higher-numbered element is in a set of more than one element. That is, k + 1 must be the largest singleton of the partition.
For instance, the number 3 in the middle of the third row of the triangle would be labeled, in their notation, as A3,2, and counts the number of partitions of {1, 2, 3, 4} in which 3 is the largest singleton element. There are three such partitions:
{1}, {2, 4}, {3}
{1, 4}, {2}, {3}
{1, 2, 4}, {3}.
The remaining partitions of these four elements either do not have 3 in a set by itself, or they have a larger singleton set {4}, and in either case are not counted in A3,2.
In the same notation, augment the triangle with another diagonal to the left of its other values, of the numbers
An,0 = 1, 0, 1, 1, 4, 11, 41, 162, ...
counting partitions of the same set of n + 1 items in which only the first item is a singleton. Their augmented triangle is
1
0 1
1 1 2
1 2 3 5
4 5 7 10 15
11 15 20 27 37 52
41 52 67 87 114 151 203
162 203 255 322 409 523 674 877
This triangle may be constructed similarly to the original version of Bell's triangle, but with a different rule for starting each row: the leftmost value in each row is the difference of the rightmost and leftmost values of the previous row.
An alternative but more technical interpretation of the numbers in the same augmented triangle has also been given.
Diagonals and row sums
The leftmost and rightmost diagonals of the Bell triangle both contain the sequence 1, 1, 2, 5, 15, 52, ... of the Bell numbers (with the initial element missing in the case of the rightmost diagonal). The next diagonal parallel to the rightmost diagonal gives the sequence of differences of two consecutive Bell numbers, 1, 3, 10, 37, ..., and each subsequent parallel diagonal gives the sequence of differences of previous diagonals.
In this way, as has been observed, this triangle can be interpreted as implementing the Gregory–Newton interpolation formula, which finds the coefficients of a polynomial from the sequence of its values at consecutive integers by using successive differences. This formula closely resembles a recurrence relation that can be used to define the Bell numbers.
The sums of each row of the triangle, 1, 3, 10, 37, ..., are the same sequence of first differences appearing in the second-from-right diagonal of the triangle. The nth number in this sequence also counts the number of partitions of n elements into subsets, where one of the subsets is distinguished from the others; for instance, there are 10 ways of partitioning three items into subsets and then choosing one of the subsets.
Related constructions
A different triangle of numbers, with the Bell numbers on only one side, and with each number determined as a weighted sum of nearby numbers in the previous row, has also been described.
Notes
References
Gardner, Martin (1978). Reprinted with an addendum as "The Tinkly Temple Bells", Chapter 2 of Fractal Music, Hypercards, and more ... Mathematical Recreations from Scientific American, W. H. Freeman, 1992, pp. 24–38.
External links
Triangles of numbers
Charles Sanders Peirce | Bell triangle | Mathematics | 1,243 |
17,404,231 | https://en.wikipedia.org/wiki/Conjugate%20Fourier%20series | In the mathematical field of Fourier analysis, the conjugate Fourier series arises by realizing the Fourier series formally as the boundary values of the real part of a holomorphic function on the unit disc. The imaginary part of that function then defines the conjugate series. The delicate questions of convergence of this series, and of its relationship with the Hilbert transform, have been studied in detail.
In detail, consider a trigonometric series of the form

$$f(\theta) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n \cos n\theta + b_n \sin n\theta\right)$$

in which the coefficients $a_n$ and $b_n$ are real numbers. This series is the real part of the power series

$$F(z) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n - i b_n\right)z^n$$

along the unit circle with $z = e^{i\theta}$. The imaginary part of $F(z)$ is called the conjugate series of $f$, and is denoted

$$\tilde{f}(\theta) \sim \sum_{n=1}^{\infty}\left(a_n \sin n\theta - b_n \cos n\theta\right).$$
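As a numerical illustration of the definitions just given, the sketch below evaluates partial sums of a series and of its conjugate for the illustrative choice $a_n = 0$, $b_n = 1/n$ (these coefficients are ours, chosen only to make the example concrete):

```python
import numpy as np

N = 50                                    # number of terms in the partial sums
theta = np.linspace(-np.pi, np.pi, 1001)
n = np.arange(1, N + 1)[:, None]

# f(theta) ~ sum b_n sin(n*theta), here with a_n = 0 and b_n = 1/n
f_partial = np.sum(np.sin(n * theta) / n, axis=0)

# conjugate ~ sum (a_n sin(n*theta) - b_n cos(n*theta)) = -sum cos(n*theta)/n
conj_partial = np.sum(-np.cos(n * theta) / n, axis=0)
```

Plotting the two partial sums side by side gives a concrete picture of the conjugate series as, formally, the Hilbert transform of the original series.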
See also
Harmonic conjugate
References
Fourier analysis
Fourier series | Conjugate Fourier series | Mathematics | 146 |
36,922,503 | https://en.wikipedia.org/wiki/%C3%89cole%20nationale%20des%20sciences%20appliqu%C3%A9es%20de%20K%C3%A9nitra | The École nationale des sciences appliquées de Kénitra is a Moroccan engineering school founded in 2008 by a partnership between the University Ibn Tofail in Kenitra and the Institut national des sciences appliquées de Lyon.
It is a Moroccan public institution, training engineers with specializations in:
Computer engineering
Telecommunication and Networks engineering
Electrical engineering
Industrial engineering
Mechatronics engineering
Civil engineering
External links
Site officiel de l'ENSAK
Education in Morocco
Engineering universities and colleges
2008 establishments in Morocco
Universities and colleges established in 2008 | École nationale des sciences appliquées de Kénitra | Engineering | 114 |
1,264,647 | https://en.wikipedia.org/wiki/Nightshirt | A nightshirt is a garment intended for wear while sleeping, often with a nightcap. It is longer than most regular shirts, reaching down below the knees, leaving some of the legs uncovered. It is often referred to as a nightgown for men, but nowadays, nightshirts are an optional sleepwear for women too.
In the US, it also sometimes means a shirt, slightly longer than a regular shirt, reaching down to the thighs, worn as loungewear and nightwear. Traditional nightshirts are used just for nightwear, removed and stored away for next use upon waking. This other, non-traditional type is worn with pajama bottoms.
Until the 16th century men slept naked or in a day-shirt. Nobles in the 16th century then wore embroidered shirts or "wrought night-shirts". By the 19th century the nightshirt resembled a day-shirt, with a loose, turned-down collar and a length similar to that of a nightgown. Historically, nightshirts were often made of ruined or very cheap fabric, but most are now made of normal cloth.
Like nightgowns, it is recommended to wear a robe or a dressing gown over them when expecting guests.
See also
Nightcap
Nightgown
References
Nightwear
Dresses | Nightshirt | Biology | 257 |
2,070,173 | https://en.wikipedia.org/wiki/Luis%20Caffarelli | Luis Ángel Caffarelli (; born December 8, 1948) is an Argentine-American mathematician. He studies partial differential equations and their applications. Caffarelli is a professor of mathematics at the University of Texas at Austin, and the winner of the 2023 Abel Prize.
Career
Caffarelli was born and grew up in Buenos Aires. He obtained his Master of Science (1968) and Ph.D. (1972) at the University of Buenos Aires. His Ph.D. advisor was Calixto Calderón. He currently holds the Sid Richardson Chair at the University of Texas at Austin and is core faculty at the Oden Institute for Computational Engineering and Sciences. He also has been a professor at the University of Minnesota, the University of Chicago, and the Courant Institute of Mathematical Sciences at New York University. From 1986 to 1996 he was a professor at the Institute for Advanced Study in Princeton.
Research
Caffarelli published "The regularity of free boundaries in higher dimensions" in 1977 in Acta Mathematica. One of his most cited results regards the Partial regularity of suitable weak solutions of the Navier–Stokes equations; it was obtained in 1982 in collaboration with Louis Nirenberg and Robert V. Kohn.
Awards and recognition
In 1991 he was elected to the U.S. National Academy of Sciences. He was awarded honorary doctorates by the École Normale Supérieure, Paris, the University of Notre Dame, the Universidad Autónoma de Madrid, and the Universidad de La Plata, Argentina. He received the Bôcher Memorial Prize in 1984. He is listed as an ISI highly cited researcher.
In 2003 Konex Foundation from Argentina granted him the Diamond Konex Award, one of the most prestigious awards in Argentina, as the most important Scientist of his country in the last decade. In 2005, he received the prestigious Rolf Schock Prize of the Royal Swedish Academy of Sciences "for his important contributions to the theory of nonlinear partial differential equations". He also received the Leroy P. Steele Prize for Lifetime Achievement in Mathematics in 2009. In 2012 he was awarded the Wolf Prize in Mathematics (jointly with Michael Aschbacher) and became a fellow of the American Mathematical Society. In 2017 he gave the Łojasiewicz Lecture (on "Some models of segregation") at the Jagiellonian University in Kraków.
In 2018, he was named a SIAM Fellow and he received the Shaw Prize in Mathematics.
In 2023, he was awarded the Abel Prize "for his seminal contributions to regularity theory for nonlinear partial differential equations including free-boundary problems and the Monge–Ampère equation".
Bibliography
Caffarelli has coauthored two books:
Fully Nonlinear Elliptic Equations by Luis Caffarelli and Xavier Cabré (1995), American Mathematical Society.
A Geometric Approach to Free Boundary Problems by Luis Caffarelli and Sandro Salsa (2005), American Mathematical Society.
References
External links
Home page
Biographical data
1948 births
20th-century American mathematicians
21st-century American mathematicians
Abel Prize laureates
Argentine mathematicians
Argentine people of Italian descent
Courant Institute of Mathematical Sciences faculty
Fellows of the American Mathematical Society
Fellows of the Society for Industrial and Applied Mathematics
Institute for Advanced Study faculty
Living people
Mathematical analysts
Members of the United States National Academy of Sciences
PDE theorists
People from Buenos Aires
Rolf Schock Prize laureates
University of Chicago faculty
University of Minnesota faculty
University of Texas at Austin faculty
Wolf Prize in Mathematics laureates | Luis Caffarelli | Mathematics | 696 |
77,825,154 | https://en.wikipedia.org/wiki/Dordaviprone | Dordaviprone is an investigational new drug that is being evaluated for the treatment of diffuse intrinsic pontine glioma (a type of brain tumor). It is a dopamine receptor D2 antagonist and an allosteric activator of the mitochondrial caseinolytic protease P.
References
Imidazopyrimidines | Dordaviprone | Chemistry | 73 |
1,743,842 | https://en.wikipedia.org/wiki/Nickelocene | Nickelocene is the organonickel compound with the formula Ni(η5-C5H5)2. Also known as bis(cyclopentadienyl)nickel or NiCp2, this bright green paramagnetic solid is of enduring academic interest, although it does not yet have any known practical applications.
Structure
Ni(C5H5)2 belongs to a group of organometallic compounds called metallocenes. Metallocenes usually adopt structures in which a metal ion is sandwiched between two parallel cyclopentadienyl (Cp) rings. In the solid-state, the molecule has D5d symmetry, wherein the two rings are staggered.
The Ni center has a formal +2 charge, and the Cp rings are usually assigned as cyclopentadienyl anions (Cp−), related to cyclopentadiene by deprotonation. The structure is similar to ferrocene. In terms of its electronic structure, three pairs of d electrons on nickel are allocated to the three d orbitals involved in Ni–Cp bonding: dxy, dx2–y2, dz2. The two remaining d-electrons each reside in the dyz and dxz orbitals, giving rise to the molecule's paramagnetism, as manifested in the unusually high field chemical shift observed in its 1H NMR spectrum. With 20 valence electrons, nickelocene has the highest electron count of the transition metal metallocenes. Cobaltocene, Co(C5H5)2, with only 19 valence electrons is, however, a stronger reducing agent, illustrating the fact that electron energy, not electron count, determines redox potential.
Preparation
Nickelocene was first prepared by E. O. Fischer in 1953, shortly after the discovery of ferrocene, the first metallocene compound to be discovered. It has been prepared in a one-pot reaction, by deprotonating cyclopentadiene with ethylmagnesium bromide, and adding anhydrous nickel(II) acetylacetonate. A modern synthesis entails treatment of anhydrous sources of NiCl2 (such as hexaamminenickel chloride) with sodium cyclopentadienyl:
[Ni(NH3)6]Cl2 + 2 NaC5H5 → Ni(C5H5)2 + 2 NaCl + 6 NH3
Properties
Like many organometallic compounds, Ni(C5H5)2 does not tolerate extended exposure to air before noticeable decomposition. Samples are typically handled with air-free techniques.
Most chemical reactions of nickelocene are characterized by its tendency to yield 18-electron products with loss or modification of one Cp ring. For example, it reacts with trifluorophosphine (PF3):
Ni(C5H5)2 + 4 PF3 → Ni(PF3)4 + organic products
The reaction with secondary phosphines follows a similar pattern:
2 Ni(C5H5)2 + 2 PPh2H → [Ni2(PPh2)2(C5H5)2] + 2 C5H6
Nickelocene can be oxidized to the corresponding cation, which contains Ni(III).
Gaseous Ni(C5H5)2 decomposes to a nickel mirror upon contact with a hot surface, releasing the hydrocarbon ligands as gaseous coproducts. This process has been considered as a means of preparing nickel films.
Nickelocene reacts with nitric acid to produce cyclopentadienyl nickel nitrosyl, a highly toxic organonickel compound.
References
External links
IARC Monograph "Nickel and Nickel compounds"
National Pollutant Inventory – Nickel and compounds Fact Sheet
Metallocenes
Organonickel compounds
Cyclopentadienyl complexes
Substances discovered in the 1950s | Nickelocene | Chemistry | 789 |
2,131,266 | https://en.wikipedia.org/wiki/Compressibility%20factor | In thermodynamics, the compressibility factor (Z), also known as the compression factor or the gas deviation factor, describes the deviation of a real gas from ideal gas behaviour. It is simply defined as the ratio of the molar volume of a gas to the molar volume of an ideal gas at the same temperature and pressure. It is a useful thermodynamic property for modifying the ideal gas law to account for the real gas behaviour. In general, deviation from ideal behaviour becomes more significant the closer a gas is to a phase change, the lower the temperature or the larger the pressure. Compressibility factor values are usually obtained by calculation from equations of state (EOS), such as the virial equation which take compound-specific empirical constants as input. For a gas that is a mixture of two or more pure gases (air or natural gas, for example), the gas composition must be known before compressibility can be calculated.
Alternatively, the compressibility factor for specific gases can be read from generalized compressibility charts that plot Z as a function of pressure at constant temperature.
The compressibility factor should not be confused with the compressibility (also known as coefficient of compressibility or isothermal compressibility) of a material, which is the measure of the relative volume change of a fluid or solid in response to a pressure change.
Definition and physical significance
The compressibility factor is defined in thermodynamics and engineering frequently as:

$$Z = \frac{p}{\rho R_{\text{specific}} T}$$

where $p$ is the pressure, $\rho$ is the density of the gas, $R_{\text{specific}} = R/M$ is the specific gas constant ($M$ being the molar mass), and $T$ is the absolute temperature (kelvin or Rankine scale).
In statistical mechanics the description is:

$$Z = \frac{pV}{nRT}$$

where $p$ is the pressure, $n$ is the number of moles of gas, $T$ is the absolute temperature, $R$ is the gas constant, and $V$ is the volume of the gas.
For an ideal gas the compressibility factor is Z = 1 per definition. In many real world applications requirements for accuracy demand that deviations from ideal gas behaviour, i.e., real gas behaviour, be taken into account. The value of Z generally increases with pressure and decreases with temperature. At high pressures molecules are colliding more often. This allows repulsive forces between molecules to have a noticeable effect, making the molar volume of the real gas greater than the molar volume of the corresponding ideal gas, which causes Z to exceed one. When pressures are lower, the molecules are free to move. In this case attractive forces dominate, making Z < 1. The closer the gas is to its critical point or its boiling point, the more Z deviates from the ideal case.
Fugacity
The compressibility factor is linked to the fugacity $f$ by the relation (at constant temperature):

$$\ln\frac{f}{p} = \int_0^p \frac{Z-1}{p'}\,dp'$$
Generalized compressibility factor graphs for pure gases
The unique relationship between the compressibility factor and the reduced temperature, $T_r$, and the reduced pressure, $p_r$, was first recognized by Johannes Diderik van der Waals in 1873 and is known as the two-parameter principle of corresponding states. The principle of corresponding states expresses the generalization that the properties of a gas which are dependent on intermolecular forces are related to the critical properties of the gas in a universal way. That provides a most important basis for developing correlations of molecular properties.

As for the compressibility of gases, the principle of corresponding states indicates that any pure gas at the same reduced temperature, $T_r$, and reduced pressure, $p_r$, should have the same compressibility factor.

The reduced temperature and pressure are defined by

$$T_r = \frac{T}{T_c} \quad \text{and} \quad p_r = \frac{p}{p_c}$$

Here $T_c$ and $p_c$ are known as the critical temperature and critical pressure of a gas. They are characteristics of each specific gas, with $T_c$ being the temperature above which it is not possible to liquify a given gas and $p_c$ the minimum pressure required to liquify a given gas at its critical temperature. Together they define the critical point of a fluid above which distinct liquid and gas phases of a given fluid do not exist.
The pressure-volume-temperature (PVT) data for real gases varies from one pure gas to another. However, when the compressibility factors of various single-component gases are graphed versus pressure along with temperature isotherms many of the graphs exhibit similar isotherm shapes.
In order to obtain a generalized graph that can be used for many different gases, the reduced pressure and temperature, and , are used to normalize the compressibility factor data. Figure 2 is an example of a generalized compressibility factor graph derived from hundreds of experimental PVT data points of 10 pure gases, namely methane, ethane, ethylene, propane, n-butane, i-pentane, n-hexane, nitrogen, carbon dioxide and steam.
There are more detailed generalized compressibility factor graphs based on as many as 25 or more different pure gases, such as the Nelson-Obert graphs. Such graphs are said to have an accuracy within 1–2 percent for Z values greater than 0.6 and within 4–6 percent for Z values of 0.3–0.6.
The generalized compressibility factor graphs may be considerably in error for strongly polar gases, which are gases for which the centers of positive and negative charge do not coincide. In such cases the estimate for Z may be in error by as much as 15–20 percent.
The quantum gases hydrogen, helium, and neon do not conform to the corresponding-states behavior. Rao recommended that the reduced pressure and temperature for those three gases should be redefined in the following manner to improve the accuracy of predicting their compressibility factors when using the generalized graphs:

$$T_r = \frac{T}{T_c + 8} \quad \text{and} \quad p_r = \frac{p}{p_c + 8}$$
where the temperatures are in kelvins and the pressures are in atmospheres.
Reading a generalized compressibility chart
In order to read a compressibility chart, the reduced pressure and temperature must be known. If either the reduced pressure or temperature is unknown, the reduced specific volume must be found. Unlike the reduced pressure and temperature, the reduced specific volume is not found by using the critical volume. The reduced specific volume is defined by

$$v_R = \frac{v\,p_c}{R_{\text{specific}}\,T_c}$$

where $v$ is the specific volume.
Once two of the three reduced properties are found, the compressibility chart can be used. In a compressibility chart, reduced pressure is on the x-axis and Z is on the y-axis. When given the reduced pressure and temperature, find the given pressure on the x-axis. From there, move up on the chart until the given reduced temperature is found. Z is found by looking where those two points intersect. The same process can be followed if reduced specific volume is given with either reduced pressure or temperature.
Observations made from a generalized compressibility chart
There are three observations that can be made when looking at a generalized compressibility chart. These observations are:
Gases behave as an ideal gas regardless of temperature when the reduced pressure is much less than one (PR ≪ 1).
When reduced temperature is greater than two (TR > 2), ideal-gas behavior can be assumed regardless of pressure, unless pressure is much greater than one (PR ≫ 1).
Gases deviate from ideal-gas behavior the most in the vicinity of the critical point.
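These rules of thumb are straightforward to mechanize. In the sketch below, the cutoffs 0.1 and 10 are our illustrative readings of "much less than one" and "much greater than one", and the nitrogen critical constants are those quoted later in this article.

```python
def reduced_state(T, p, T_crit, p_crit):
    """Reduced temperature and pressure: T_r = T/T_c, p_r = p/p_c."""
    return T / T_crit, p / p_crit

def ideal_gas_reasonable(T_r, p_r):
    """Screen based on the three observations above; the cutoffs 0.1 and 10
    are illustrative stand-ins for 'much less/greater than one'."""
    return p_r < 0.1 or (T_r > 2 and p_r < 10)

# Nitrogen (T_c = 126.2 K, p_c = 34.0 bar) at 300 K and 5 bar:
T_r, p_r = reduced_state(300.0, 5.0, 126.2, 34.0)
print(round(T_r, 2), round(p_r, 3), ideal_gas_reasonable(T_r, p_r))
# 2.38 0.147 True
```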
Theoretical models
The virial equation is especially useful to describe the causes of non-ideality at a molecular level (very few gases are mono-atomic) as it is derived directly from statistical mechanics:

$$Z = 1 + \frac{B}{V_m} + \frac{C}{V_m^2} + \frac{D}{V_m^3} + \cdots$$

where the coefficients $B$, $C$, $D$, ... in the numerators are known as virial coefficients and are functions of temperature.
The virial coefficients account for interactions between successively larger groups of molecules. For example, $B$ accounts for interactions between pairs, $C$ for interactions between three gas molecules, and so on. Because interactions between large numbers of molecules are rare, the virial equation is usually truncated after the third term.
When this truncation is assumed, the compressibility factor is linked to the intermolecular-force potential $\varphi$ by:

$$Z = 1 + 2\pi \frac{N_A}{V_m} \int_0^\infty \left(1 - e^{-\varphi(r)/kT}\right) r^2 \, dr$$
The Real gas article features more theoretical methods to compute compressibility factors.
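A minimal numerical sketch of the truncated virial form above is given below; the coefficient values are illustrative placeholders of roughly plausible magnitude for a small non-polar gas, not tabulated data.

```python
R = 83.145  # gas constant in cm^3*bar/(mol*K)

def z_virial(V_m, B, C=0.0):
    """Compressibility factor from the virial equation truncated after the
    third term: Z = 1 + B/V_m + C/V_m**2, with V_m in cm^3/mol."""
    return 1.0 + B / V_m + C / V_m**2

V_m = R * 300.0 / 1.0  # ideal-gas estimate of molar volume at 300 K and 1 bar
print(z_virial(V_m, B=-5.0, C=1500.0))  # Z just below 1 at low pressure
```

At low pressure the correction terms are tiny, which is why Z stays near unity there, in line with the observations above.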
Physical mechanism of temperature and pressure dependence
Deviations of the compressibility factor, Z, from unity are due to attractive and repulsive intermolecular forces. At a given temperature and pressure, repulsive forces tend to make the volume larger than for an ideal gas; when these forces dominate Z is greater than unity. When attractive forces dominate, Z is less than unity. The relative importance of attractive forces decreases as temperature increases (see effect on gases).
As seen above, the behavior of Z is qualitatively similar for all gases. Molecular nitrogen, N2, is used here to further describe and understand that behavior. All data used in this section were obtained from the NIST Chemistry WebBook. It is useful to note that for N2 the normal boiling point of the liquid is 77.4 K and the critical point is at 126.2 K and 34.0 bar.
The figure on the right shows an overview covering a wide temperature range. At low temperature (100 K), the curve has a characteristic check-mark shape; the rising portion of the curve is very nearly directly proportional to pressure. At intermediate temperature (160 K), there is a smooth curve with a broad minimum; although the high pressure portion is again nearly linear, it is no longer directly proportional to pressure. Finally, at high temperature (400 K), Z is above unity at all pressures. For all curves, Z approaches the ideal gas value of unity at low pressure and exceeds that value at very high pressure.
To better understand these curves, a closer look at the behavior for low temperature and pressure is given in the second figure. All of the curves start out with Z equal to unity at zero pressure and Z initially decreases as pressure increases. N2 is a gas under these conditions, so the distance between molecules is large, but becomes smaller as pressure increases. This increases the attractive interactions between molecules, pulling the molecules closer together and causing the volume to be less than for an ideal gas at the same temperature and pressure. Higher temperature reduces the effect of the attractive interactions and the gas behaves in a more nearly ideal manner.
As the pressure increases, the gas eventually reaches the gas-liquid coexistence curve, shown by the dashed line in the figure. When that happens, the attractive interactions have become strong enough to overcome the tendency of thermal motion to cause the molecules to spread out; so the gas condenses to form a liquid. Points on the vertical portions of the curves correspond to N2 being partly gas and partly liquid. On the coexistence curve, there are then two possible values for Z, a larger one corresponding to the gas and a smaller value corresponding to the liquid. Once all the gas has been converted to liquid, the volume decreases only slightly with further increases in pressure; then Z is very nearly proportional to pressure.
As temperature and pressure increase along the coexistence curve, the gas becomes more like a liquid and the liquid becomes more like a gas. At the critical point, the two are the same. So for temperatures above the critical temperature (126.2 K), there is no phase transition; as pressure increases the gas gradually transforms into something more like a liquid. Just above the critical point there is a range of pressure for which Z drops quite rapidly (see the 130 K curve), but at higher temperatures the process is entirely gradual.
The final figure shows the behavior at temperatures well above the critical temperature. The repulsive interactions are essentially unaffected by temperature, but the attractive interactions have less and less influence. Thus, at sufficiently high temperature, the repulsive interactions dominate at all pressures.
This can be seen in the graph showing the high temperature behavior. As temperature increases, the initial slope becomes less negative, the pressure at which Z is a minimum gets smaller, and the pressure at which repulsive interactions start to dominate, i.e. where Z goes from less than unity to greater than unity, gets smaller. At the Boyle temperature (327 K for N2), the attractive and repulsive effects cancel each other at low pressure. Then Z remains at the ideal gas value of unity up to pressures of several tens of bar. Above the Boyle temperature, the compressibility factor is always greater than unity and increases slowly but steadily as pressure increases.
Experimental values
It is extremely difficult to generalize at what pressures or temperatures the deviation from the ideal gas becomes important. As a rule of thumb, the ideal gas law is reasonably accurate up to a pressure of about 2 atm, and even higher for small non-associating molecules. For example, for methyl chloride, a highly polar molecule with therefore significant intermolecular forces, the experimental value of the compressibility factor deviates noticeably from unity at a pressure of 10 atm and temperature of 100 °C, whereas for air (small non-polar molecules) at approximately the same conditions the compressibility factor remains very close to unity (see table below for 10 bars, 400 K).
Compressibility of air
Normal air comprises in crude numbers 80 percent nitrogen (N2) and 20 percent oxygen (O2). Both molecules are small and non-polar (and therefore non-associating). We can therefore expect that the behaviour of air within broad temperature and pressure ranges can be approximated as an ideal gas with reasonable accuracy. Experimental values for the compressibility factor confirm this.
Z values are calculated from values of pressure, volume (or density), and temperature in Vasserman, Kazavchinskii, and Rabinovich, "Thermophysical Properties of Air and Air Components," Moscow, Nauka, 1966, and NBS-NSF Trans. TT 70-50095, 1971; and Vasserman and Rabinovich, "Thermophysical Properties of Liquid Air and Its Components," Moscow, 1968, and NBS-NSF Trans. 69-55092, 1970.
See also
Fugacity
Real gas
Theorem of corresponding states
Van der Waals equation
References
External links
Compressibility factor (gases) A Citizendium article.
Real Gases includes a discussion of compressibility factors.
Chemical engineering thermodynamics
Gas laws | Compressibility factor | Chemistry,Engineering | 2,837 |
38,022,448 | https://en.wikipedia.org/wiki/Endurance%20time%20method | The endurance time (ET) method is a dynamic structural analysis procedure for seismic assessment of structures. In this procedure, an intensifying dynamic excitation is used as the loading function. Endurance time method is a time-history based dynamic analysis procedure. An estimate of the structural response at different equivalent seismic intensity levels is obtained in a single response history analysis. This method has applications in seismic assessment of various structural types and in different areas of earthquake engineering.
The concept of endurance time method
Endurance time (ET) method is a dynamic structural analysis procedure in which an intensifying dynamic excitation is used as the loading function. An estimate of structural response and/or performance over the entire seismic intensity range of interest is obtained in each response history analysis. The concept of endurance time analysis is similar to the exercise test applied in medicine. A similar concept has also been extended to applications in the analysis of offshore platforms under water waves.
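To make the concept concrete, the following sketch drives a linear single-degree-of-freedom oscillator with an artificially intensifying excitation and records the running maximum of its displacement, so that each instant of the analysis maps to an equivalent intensity level. The excitation here (white noise under a linearly growing envelope) and all parameter values are illustrative stand-ins, not a calibrated ET excitation function.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, duration = 0.01, 40.0
t = np.arange(0.0, duration, dt)
excitation = (t / duration) * rng.standard_normal(t.size)  # intensifying input

omega, zeta = 2 * np.pi, 0.05    # 1 Hz oscillator with 5% damping
u, v = 0.0, 0.0                  # displacement and velocity
peak = 0.0
et_curve = np.empty_like(t)      # running max |u|: demand vs. "endured" time
for i, ag in enumerate(excitation):
    a = -2.0 * zeta * omega * v - omega**2 * u - ag  # equation of motion
    v += a * dt                                      # semi-implicit Euler step
    u += v * dt
    peak = max(peak, abs(u))
    et_curve[i] = peak
# et_curve is non-decreasing: one run yields response at many intensity levels
```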
Development history
The basic concepts of the endurance time method were published in 2004. Application in linear seismic analysis appeared in 2007. ET was subsequently extended to nonlinear analysis of single degree of freedom (SDOF) and multi degree of freedom systems. Procedures for multi-component seismic analysis were subsequently developed.
ET excitation functions
ET excitation functions are generated by using numerical optimization methods. ET excitation functions are publicly available through internet websites. ET excitation functions can be categorized into five generations as follows:
The first generation of ET excitation functions (ETEFs) were essentially filtered and profiled white noise. These were used for demonstrating the concept of ET and have limited practical significance.
Second-generation ETEFs incorporate response spectrum matching. These ETEFs produce numerically significant analysis results.
Third-generation ETEFs are optimized in nonlinear range. These ETEFs deliver improved analysis performance.
Fourth-generation ETEFs are optimized to include duration consistency.
Fifth-generation ETEFs are optimized to include damage consistency.
Application areas in earthquake engineering
Endurance time method has been applied in the following areas of earthquake engineering:
Nonlinear dynamic analysis of structures
Seismic evaluation of jacket-type offshore platforms
Optimal damper placement in framed buildings
Optimal design of energy dissipation systems
Seismic assessment of structures
Performance-based seismic design method
Collapse-based seismic design method
Value-based seismic design
Structural optimization
Multi-component seismic analysis
Soil–structure interaction
soil-pile-superstructure interaction
Liquid–structure interaction
Dam engineering
Bridge engineering
Seismic rehabilitation
Collapse analysis
Structural type applications
ET method has been applied in seismic assessment of the following structural types:
Single degree of freedom systems
Moment and braced steel frames
Concrete frames
Bridges
Gravity dams
Arch dams
Shell structures
Steel tanks
Offshore structures
Advantages of ET method
Major advantages of the endurance time method are as follows:
ET significantly reduces the computational demand required for performing a standard response history analysis of structures for seismic assessment, especially when response at multiple levels of intensity is to be considered.
ET is applicable in a wide range of seismic assessment problems and provides a generic approach for the seismic analysis of a wide range of structural types.
ET method is reasonably simple and sensible when a realistic dynamic analysis of a complex structure is required.
Limitations of ET method
Major limitations of the endurance time method are as follows:
ET is an approximate method for predicting the structural response.
The production of usable ETEFs that are applicable in a particular situation can be complicated.
The procedure is still under development and sufficient background information may not be available for specific applications.
References
Earthquake engineering
Structural engineering
Civil engineering | Endurance time method | Engineering | 702 |
9,917,817 | https://en.wikipedia.org/wiki/Wright%20Brothers%20Medal | The Wright Brothers Medal was conceived of in 1924 by the Dayton Section of the Society of Automotive Engineers, and the SAE established it in 1927 to recognize individuals who have made notable contributions in the engineering, design, development, or operation of air and space vehicles. The award is based on contributed research papers.
The award honors Wilbur and Orville Wright as the first successful builders of heavier-than-air craft, and includes an image of the Wright Flyer, the plane which they flew in 1903 at Kitty Hawk, North Carolina.
Awardees and research topics: 1928–1975
1928 Clinton Hunter Havill: Aircraft Propellers
1929 Ralph Hazlett Upson: Wings - A Coordinated System of Basic Design
1930 Theodore Paul Wright: The Development of a Safe Airplane - The Curtiss Tanager
1931 Stephen Joseph Zand: A Study of Airplane and Instrument Board Vibration
1932 Edward Pearson Warner: The Rational Specifications of Airplane Load Factors
1933 Eastman Nixon Jacobs: The Aerodynamics of Wing Sections for Airplanes
1934 Rex Buren Beisel, A. L. MacClain, and F. M. Thomas: Cowling and Cooling of Radial Air-Cooled Aircraft Engines
1935 William Littlewood: Operating Requirements for Transport Airplanes
1936 R. J. Minshall, J. K. Ball, and F. P. Laudan: Problems in the Design and Construction of Large Aircraft
1937 Richard V. Rhode: Gust Loads on Airplanes
1938 no award given
1939 Kenneth A. Browne: Dynamic Suspension - A Method of Aircraft Engine Mounting
1940 Clarence Leonard Johnson: Rudder Control Problems on Four-Engined Airplanes
1941 Samuel Jasper Loring: General Approach to the Flutter Problem
1942 Charles R. Strang: Progress in Structural Design Through Strain-Gage Technique
1943 Costas E. Pappas: The Determination of Fuselage Moments
1944 Kenneth Campbell: Engine Cooling Fan Theory and Practice
1945 Myron Tribus: Report on Development and Application of Heated Wings
1946 Frederick Van Horne Judd: A Systematic Approach to the Aerodynamic Design of Radial Engine Installations
1947 Henry B. Gibbons: Experiences of an Aircraft Manufacturer with Sandwich Material
1948 Kermit Van Every: Aerodynamics of High Speed Airplanes
1949 Homer J. Wood and Frederick Dallenbach: Auxiliary Gas Turbines for Pneumatic Power in Aircraft Applications
1950 James Charles Floyd: The Avro C102 Jetliner
1951 Orville Albert Wheelon: Design Methods and Manufacturing Techniques with Titanium
1952 W. J. Kunz, Jr.: A New Technique for Investigating Jet Engine Compressor Stall and Other Transient Characteristics
1953 D. N. Meyers and Z. Ciolkosz: Matching the Characteristics of Helicopters and Shaft Turbines
1954 John M. Tyler and E. C. Perry, Jr.: Jet Noise
1955 Wendell E. Reed: A New Approach to Turbojet and Ramjet Engine Controls
1956 Charles Horton Zimmerman: Some General Considerations Concerning VTOL Aircraft
1957 Alf F. Ensrud: Problems in the Application of High Strength Steel Alloys in the Design of Supersonic Aircraft
1958 Kermit Van Every: Design Problems of Very High Speed Flight
1959 Milford G. Childers: Preliminary Design Considerations for the Structure of a Trisonic Transport
1960 Ferdinand B. Greatrex: By-Pass Engine Noise
1961 Carleton M. Mears and Robert L. Peterson: Mechanization on Minimum-Energy Automatic Lunar Soft-Landing Systems
1962 Robert P. Rhodes, Jr., D. E. Chriss, and Philip M. Rubins: Effect of Heat Release on Flow Parameters in Shock Induced Combustion
1963 Sitaram Rao Valluri, James B. Glassco, and George Eugene Bockrath: Further Considerations of a Theory of Crack Propagation in Metal Fatigue
1964 Marion O'Dell McKinney, Jr., Richard E. Kuhn, and John P. Reeder: Aerodynamics and Flying Qualities of Jet V/STOL Airplanes
1965 W. W. Williams, G. K. Williams, and W. C. J. Garrard: Soft and Rough Field Landing Gears
1966 Julian Wolkovitch: An Introduction to Hover Dynamics
1967 John A. McKillop: Flutter Characteristics of the Slap Tail
1968 Leonard J. Nestor and Lawrence Maggitti, Jr.: Effects of Dynamic Environments on Fuel Tank Flammability
1969 W. N. Reddisch, A. E. Sabroff, P. C. Wheeler, and J. G. Zaremba: A Semi-Active Gravity Gradient Stabilization System
1970 J. Hong: Advanced Bonding for Large Aircraft
1971 no award given
1972 Dwight Henry Bennett and Robert P. Johannes: Combat Capabilities and Versatility Through CCV
1973 Richard E. Hayden: Fundamental Aspects of Noise Reduction From Powered Lift Devices
1974 Michael J. Wendl, Gordon G. Grose, John L. Porter, and Ralph V. Pruitt: Flight/Propulsion Control Integration Aspects of Energy Management
1975 John A. Alic and H. Archang: Comparison of Fracture and Fatigue Properties of Clad 7075-T6 Aluminum in Monolithic and Laminated Forms
Awardees
Source: SAE International
1976 no award given
1977 Raymond M. Hicks and Garret N. Vanderplaats
1978 no award given
1979 Gary E. Erickson, Dale J. Lorincz, William A. Moore, and Andrew M. Skow: Effects on Forebody, Wing and Wing-Body-LEX Flowfields in High Angle of Attack Aerodynamics
1980 Walter S. Cremens: Thermal Expansion Molding Process for Aircraft Composite Structures
1981 Raymond M. Hicks: Transonic Wing Design Using Potential Flow Codes -- Successes and Failures
1982 Andre Fort and J. J. Speyer: Human Factors Approach in Certification Flight Test
1983 Carol A. Simpson: Integrated Voice Controls and Speech Displays for Rotorcraft Mission Management
1984 Robert J. Englar and James H. Nichols Jr.
1985 Charles W. Boppe
1986 James A. Hare: Increasing the Node Shifting Capability of Fixed Velocity Upper Stage Payloads using Slightly Elliptic Drift Orbits
1987 Charles P. Blankenship and Robert J. Hayduk
1988 Benton C. Clark III
1989 Charles W. Boppe and Warren H. Davis
1990 Mariann F. Brown and Susan Schentrup
1991 Lourdes M. Birckelbaw and Lloyd D. Corliss: Handling Qualities Results of an Initial Geared Flap Tilt Wing Piloted Simulation
1992 G. J. Bastiaans, Steve D. Braymen, S. G. Burns, Shelley J. Coldiron, R. S. Deinhammer, William J. Deninger, R. P. O'Toole, Marc D. Porter, and H. R. Shanks: Novel Approaches to the Construction of Miniaturized Analytical Instrumentation
1993 no award given
1994 Timothy Geels, Tom McDavid, Greg Robel, and Tze Siu: DGPS Precision Landing Simulation
1995 Robert R. Wilkins Jr.: Designing the Conceptual Flight Deck for a Short Haul Civil Transport/Civil Tiltrotor
1996 B. A. Moravec and Michael W. Patnoe
1997 James R. Fuller: Evolution And Future Development Of Airplane Gust Loads
1998 Robert S. McCann, Becky L. Hooey, Bonny Parke, Anthony D. Andre, David C. Foyle, and Barbara G. Kanki
1999 Jeremy S. Agte, Robert Sandusky, and Jaroslaw Sobieski
2000 no award given
2001 Maurizio Apra, Marcello D'Amore, Maria Sabrina Sarto, Alberto Scarlatti, and Valeria Volpi: VAM-LIFE: Virtual Aircraft ElectroMagnetic Lightning Indirect Effect Evaluation
2002 Gary L. Boyd, Alfred W. Fuller, and Jack Moy: Hybrid-Ceramic Circumferential Carbon Ring Seal
2003 Timothy J. Bencic, Colin S. Bidwell, Michael Papadakis, Arief Rachman, and See-Cheuk Wong: An Experimental Investigation of SLD Impingement on Airfoils and Simulated Ice Shapes
2004 Philip Freeman: A Robust Method of Countersink Inspection Using Machine Vision
2005 John W. Fisher, Michael T. Flynn, Eric J. Litwiller, and Martin Reinhard: Lyophilization for Water Recovery III, System Design
2006 James R. Akse, James E. Atwater, Roger Dahl, John W. Fisher, Frank C. Garmon, Neal M. Hadley, Richard R. Wheeler Jr, Thomas W. Williams: Development and Testing of a Microwave Powered Solid Waste Stabilization and Water Recovery System
2007 Peter O. Andreychuk, Leonid S Bobe, Nikolay N. Protasov, Nikolay N. Samsonov, Yury Sinyak, and Vladimir M. Skuratov: Water Recovery on the International Space Station: The Perspectives of Space Stations' Water Supply Systems
2008 Carl Jack Ercol: Return to Mercury: An Overview of the MESSENGER Spacecraft Thermal Control System Design and Up-to-Date Flight Performance
2009 Atle Honne, John T. James, Dirk Kampf, Kristin Kaspersen, Thomas Limero, Ariel V. Macatangay, Herbert Mosebach, Paul D. Mudgett, Henrik Schumann-Olsen, Wolfgang Supper, and Gijsbert Tan: Evaluation of ANITA Air Monitoring on the International Space Station
2010 Henrik Kihlman, and Magnus Engström: Flexapods - Flexible Tooling at SAAB for Building the NEURON Aircraft
2011 Matthew Barker, Luke Hickson, Joeseph K-W Lam, Stephen Paul Tomlinson, and Darran Venn: Mathematical Model of Water Contamination in Aircraft Fuel Tanks
2012 Jerry Bieszczad, Michael Izenson, George Ford Kiwada, Patrick J Magari: Ultra- Compact Power System for Long-Endurance Small Unmanned Aerial Systems
2013 Ing Rafael Fernandes de Oliveira
2014 Troy Beechner, Kyle Ian Merical, Paul Yelvington
2015 no award given
2016 Tadas P. Bartkus, Peter Struk, Jen-Ching Tsao
2017 Christian Boehlmann, Wolfgang Hintze, Philip Koch, Christian Moeller, Hans Christian Schmidt, Jörg Wollnack
2019 Yuzhi Jin, Yuping Qian, Yangjun Zhang, Weilin Zhuge - Tsinghua University
See also
Wright Brothers Memorial Trophy
List of aviation awards
List of space technology awards
List of engineering awards
Prizes named after people
References
External links
SAE: Wright Brothers Medal
Aerospace engineering awards
Space-related awards
Aviation awards
Awards established in 1927
Wright brothers | Wright Brothers Medal | Technology,Engineering | 2,095 |
49,153,350 | https://en.wikipedia.org/wiki/Cortinarius%20moserianus | Cortinarius moserianus is an agaric fungus of the genus Cortinarius found in Europe. It was described as new to science in 1970 by the Hungarian mycologist Gábor Bohus, from collections made in Hungary.
See also
List of Cortinarius species
References
External links
moserianus
Fungi described in 1970
Fungi of Europe
Fungus species | Cortinarius moserianus | Biology | 78 |
1,666,161 | https://en.wikipedia.org/wiki/Bin%20bag | A bin bag, rubbish bag (British English), garbage bag, bin liner, trash bag (American English) or refuse sack is a disposable receptacle for solid waste. These bags are useful for lining the insides of waste containers to prevent them from becoming coated in waste material. Most bags today are made of plastic, and are typically black, white, or green in color.
Plastic bags are a widely used, convenient, and sanitary way of handling garbage. Plastic garbage bags are fairly lightweight and are particularly useful for messy or wet rubbish, as is commonly the case with food waste, and are also useful for wrapping up garbage to minimize odor. Plastic bags are often used for lining litter or waste containers or bins. This keeps the container sanitary by avoiding container contact with the garbage. After the bag in the container is filled with litter, the bag can be pulled out by its edges, closed, and tied with minimal contact with the waste matter.
Garbage bags were invented by Canadians Harry Wasylyk, Larry Hansen and Frank Plomp in 1950. In a special on CBC Television, green garbage bags (the first bin bags in Canada) ranked 36th among the top 50 Canadian inventions.
Black plastic bags were introduced in 1950 as star-sealed bags. The first bags in the United States were green and black, rather than the now-common white and clear. Flat-sealed bags first appeared in 1959. In the 1960s, white bin bags were introduced. Two-ply (heavy-duty) bags were introduced in 1974, with three-ply bags following in 1980.
Plastic bags can be incinerated with their contents in appropriate facilities for waste-to-energy conversion. They are stable and benign in sanitary landfills; some are degradable under specified conditions.
Description
Plastic bags for rubbish or litter are sold in a significant number of sizes at many stores, in packets or rolls of a few tens of bags. Wire twist ties are sometimes supplied for closing the bag once full. Varying thicknesses are commonly manufactured: thicker bags are used for heavy-duty applications such as construction waste, or to withstand compaction during recycling processes. In the mid-1990s, bin bags with drawstrings for closure were introduced. Some bags have handles that may be tied, or holes through which the neck of the bag can be pulled. Most bin bags are made of the rather soft and flexible LDPE (low-density polyethylene); where extra strength is needed, LLDPE (linear low-density polyethylene) or HDPE (high-density polyethylene) is sometimes used.
Biodegradable plastic bags
Oxo-biodegradable plastic bags have the same strength as ordinary plastic and cost slightly more. They will degrade and then biodegrade if they get into the open environment, but they can be recycled if collected during their useful life. They are designed so that they will not degrade deep in landfills and will not, therefore, generate methane. Oxo-biodegradable plastic does not degrade quickly in low-temperature "windrow" composting, but it is suitable for "in-vessel" composting at the higher temperatures required by the animal by-products regulations. Oxo-biodegradable plastic is bio-assimilated by the same bacteria and fungi that transform natural materials such as twigs and leaves into cell biomass, much as they do lignocellulosic materials. Oxo-biodegradable plastic is designed to degrade initially by a process that includes both photo-oxidation and thermo-oxidation, so it can degrade in the dark. Resin identification code 7 is applicable to biodegradable plastics.
Drawstring and flexibility
Drawstring garbage bags first appeared in 1984, before GLAD and Hefty introduced their own versions. In August 2001, Hefty introduced garbage bags with a drawstring designed to stretch around the garbage can's rim and stay in place. In July 2004, ForceFlex, a flexible plastic garbage bag, was introduced by GLAD (followed by Hefty's Ultra Flex brand in September).
See also
Blue bag
Packaging
Plastic bag
Plastic recycling
Thermal depolymerization, post consumer waste processing technologies
References
Bags
Canadian inventions
Plastics
Waste containers
Disposable products | Bin bag | Physics | 935 |
18,637,552 | https://en.wikipedia.org/wiki/Health%20administration%20informatics | The emerging field of health administration informatics is concerned with the evaluation, acquisition, implementation and day-to-day operation of information technology systems in support of all administration and clinical functions within the health care industry. The closely related field of biomedical informatics is primarily focused on the use of information systems for acquisition and application of patients' medical data, whereas nursing informatics deals with the delivery, administration and evaluation of patient care and disease prevention. What remains unclear, however, is how this emerging discipline should relate to the myriad of previously existing subspecializations within the broad umbrella of health informatics - including clinical informatics (which itself includes subareas such as oncology informatics), bioinformatics and healthcare management informatics - particularly in light of the proposed "fundamental theorem" of biomedical informatics posed by Friedman in early 2009.
The field of health administration informatics is emerging as attention continues to focus on the costly mistakes made by some health care organizations whilst implementing electronic medical records.
Relevance within the health care industry
In a recent survey of health care CIOs and Information System (IS) directors, increasing patient safety and reducing medical errors was reported as among the top business issues. Two other key findings were that:
two-thirds of respondents indicated that the number of FTEs in their IT department will increase in the next 12 months;
and three-quarters of respondents indicated that their IT budgets would be increasing.
The most likely staffing needs reported by the health care executives are network and architecture support (HIMSS, 2005).
“The government and private insurers are beginning to pay hospitals more for higher quality care–and the only way to measure quality, and then improve it, is with more information technology. Hospital spending on such gear is expected to climb to $30.5 billion next year, from $25.8 billion in 2004, according to researcher Dorenfest Group” (Mullaney and Weintraub, 2005).
This fundamental change in health care (pay for performance) means that hospitals and other health care providers will need to develop, adapt and maintain all of the technology necessary to measure and improve on quality. Physicians have traditionally lagged behind in their use of technology (i.e., electronic patient records). Only 7% of physicians work for hospitals, and so the task of “wooing them is an extremely delicate task” (Mullaney and Weintraub, 2005).
Careers
The market demand for a specialized advanced degree that integrates Health Care Administration and Informatics is growing as the concept has gained support from the academic and professional communities. Recent articles in Health Management Technology cite the importance of integrating information technology with health care administration to meet the unique needs of the health care industry. The health care industry has been estimated to be around 10 years behind other industries in the application of technology and at least 10 to 15 years behind in leadership capability from the technology and perhaps the business perspective (Seliger, 2005; Thibault, 2005). This means there is quantifiable demand in the work force for health care administrators who are also prepared to lead in the field of health care administration informatics.
In addition, the increasing costs and difficulties involved in evaluating the projected benefits from IT investments are requiring health care administrators to learn more about IT and how it affects business processes. The health care Chief Information Officer (CIO) must be able to build enterprise wide systems that will help reduce the administrative cost and streamline the automation of administrative processes and patient record keeping. Increasingly, the CIO is relied upon for specialized analytical and collaborative skills that will enable him/her to build systems that health care clinicians will use. A recent well-publicized debacle (shelving of a $34 million computer system after three months) at a top U. S. hospital underlines the need for leaders who understand the health care industry information technology requirements (Connolly, 2005).
Several professional organizations have also addressed the need for academic preparation that integrates the two specializations addressed by UMUC’s MSHCAI degree. In the collaborative response to the Office of the National Coordinator for Health Information Technology (ONCHIT) request for information regarding future IT needs, thirteen major health and technology organizations endorsed a “Common Framework” to support health information exchange in the United States, while protecting patient privacy. The response cited the need for continuing education of health information management professionals as a significant barrier to implementation of a National Health Information Network (NHIN) (The Collaborative Response, 2005).
See also
Consumer health informatics
Medical informatics
Nursing informatics
References
Connolly, C. (2005, March 21) Cedars-Sinai doctors cling to Pen and paper. The Washington Post.
Health Informatics World Wide (2005, March). Health informatics index site. Retrieved March 30, 2005.
Healthcare Information and Management Systems Society (HIMSS) (2005, February). 16th annual HIMSS leadership survey sponsored by Superior Consultant Company. Retrieved March 30, 2005.
Mullaney, T. J., & Weintraub, A. (2005 March 28). The digital hospital. Business Week 3926, 76.
Seliger, R. (2005). Healthcare IT tipping point. Health Management Technology 26(3), 48-49.
The Collaborative Response to the Office of the National Coordinator for Health Information Technology Request for Information (2005, January). Retrieved March 30, 2005.
Thibault, B. (2005). Making beautiful music together. Behavioral Health 26(3), 28-29. | Health administration informatics | Biology | 1,131 |
26,141,947 | https://en.wikipedia.org/wiki/Acta%20Astronomica | Acta Astronomica is a quarterly peer-reviewed scientific journal covering astronomy and astrophysics. It was established in 1925 by the Polish astronomer Tadeusz Banachiewicz. Initially, the journal published articles in Latin; later, English, French, and German were added as accepted languages. Nowadays, all papers are published in English.
The journal is published by the Copernicus Foundation for Polish Astronomy and the editors-in-chief are M. Jaroszyński and Andrzej Udalski (University of Warsaw).
Abstracting and indexing
This journal is abstracted and indexed in Current Contents/Physical, Chemical & Earth Sciences, the Science Citation Index Expanded, and Scopus.
According to the Journal Citation Reports, the journal has a 2021 impact factor of 1.974.
References
External links
Astronomy journals
Academic journals established in 1925
Quarterly journals
English-language journals
Academic journals published by non-profit organizations
Academic journals published in Poland | Acta Astronomica | Astronomy | 193 |
36,812,535 | https://en.wikipedia.org/wiki/Folding%20seat | A folding seat is a seat that folds away so as to occupy less space. When installed on a transit bus, it makes room for one or two wheelchairs. When installed on a passenger car, it provides extra seating.
In churches, it may have a projection called a misericord, which offers some support to a person standing in front when the seat is folded.
Folding seats may also be found in stadiums, arenas, theaters and auditoriums to facilitate entry and exit.
Some folding seats in rapid transit may fold down rather than fold up.
In passenger aircraft, folding seats called jump seats are used by cabin crew during takeoff and landing.
Gallery
See also
Fold down seating
Folding chair
Folding seats in cars
Jump seat
List of chairs
Rumble seat
Seat
Stairlift
Swivel seat
Turning seat
Seats
Space-saving furniture | Folding seat | Physics | 165 |
1,606,040 | https://en.wikipedia.org/wiki/Fecal%20impaction | A fecal impaction or an impacted bowel is a solid, immobile bulk of feces that can develop in the rectum as a result of chronic constipation (a related term is fecal loading, which refers to a large volume of stool in the rectum of any consistency). Fecal impaction is a common result of neurogenic bowel dysfunction and causes immense discomfort and pain. Its treatment includes laxatives, enemas, and pulsed irrigation evacuation (PIE) as well as digital removal. It is not a condition that resolves without direct treatment.
Signs and symptoms
Symptoms of a fecal impaction include the following:
Chronic constipation
Fecal incontinence: paradoxical overflow diarrhea (encopresis) as a result of liquid stool passing around the obstruction
Abdominal pain and bloating
Loss of appetite
Complications may include necrosis and ulcers of the rectal tissue, which if untreated can cause death.
Causes
There are many possible causes; these include a long period of physical inactivity, failure to consume adequate dietary fiber, dehydration, and deliberate retention of fecal matter.
Opioids such as fentanyl, buprenorphine, methadone, codeine, oxycodone, hydrocodone, morphine, and hydromorphone as well as certain sedatives that reduce intestinal movement may cause fecal matter to become too large, hard and/or dry to expel.
Specific conditions, such as irritable bowel syndrome, certain neurological disorders, paralytic ileus, gastroparesis, diabetes, enlarged prostate gland, distended colon, an ingested foreign object, inflammatory bowel diseases such as Crohn's disease and colitis, and autoimmune diseases such as amyloidosis, celiac disease, lupus, and scleroderma can cause a fecal impaction. Hypothyroidism can also cause chronic constipation because of sluggish, slower, or weaker colon contractions. Iron supplements or increased blood calcium levels are also potential causes. Spinal cord injury is a common cause of constipation, due to ileus.
Diagnosis
Prevention
Reducing or replacing opiates, adequate intake of water, dietary fiber, and exercise.
Treatment
The treatment of fecal impaction requires both the remedy of the impaction and treatment to prevent recurrences. Decreased motility of the colon results in dry, hard stools that in the case of fecal impaction become compacted into a large, hard mass of stool that cannot be expelled from the rectum.
Various methods of treatment attempt to remove the impaction by softening the stool, lubricating the stool, or breaking it into pieces small enough for removal. Enemas and osmotic laxatives can be used to soften the stool by increasing the water content until the stool is soft enough to be expelled. Osmotic laxatives such as magnesium citrate have an onset of action ranging from minutes to eight hours, and even then they may not be sufficient to expel the stool.
Osmotic laxatives can cause cramping and even severe pain as the patient's attempts to evacuate the contents of the rectum are blocked by the fecal mass. Polyethylene glycol (PEG 3350) may be used to increase the water content of the stool without cramping. This may take 24 to 48 hours, however, and it is not well suited to cases where the impaction needs to be removed immediately due to risk of complications or severe pain. Enemas (such as hyperosmotic saline) and suppositories (such as glycerine suppositories) work by increasing water content and stimulating peristalsis to aid in expulsion, and both work much more quickly than oral laxatives.
Because enemas work in 2–15 minutes, they do not allow sufficient time for a large fecal mass to soften. Even if the enema is successful at dislodging the impacted stool, the impacted stool may remain too large to be expelled through the anal canal. Mineral oil enemas can assist by lubricating the stool for easier passage. In cases where enemas fail to remove the impaction, polyethylene glycol can be used to attempt to soften the mass over 24–48 hours, or if immediate removal of the mass is needed, manual disimpaction may be used. Manual disimpaction may be performed by lubricating the anus and using one gloved finger with a scoop-like motion to break up the fecal mass. Most often manual disimpaction is performed without general anaesthesia, although sedation may be used. In more involved procedures, general anaesthesia may be used, although the use of general anaesthesia increases the risk of damage to the anal sphincter. If all other treatments fail, surgery may be necessary.
Another treatment method makes use of an enema and manual disimpaction via pulsed irrigation evacuation (PIE). By using pulsating water to enter into the colon to soften and break down the dense mass, PIE treats fecal impaction.
In published studies, pulsed irrigation evacuation with the PIE MED device was successful in all tested patients, making it an effective and reliable treatment for fecal impaction.
Individuals who have had one fecal impaction are at high risk of future impactions. Therefore, preventive treatment should be instituted in patients following the removal of the mass. Increasing dietary fiber, increasing fluid intake, exercising daily, and attempting regularly to defecate every morning after eating should be promoted in all patients.
Often underlying medical conditions cause fecal impactions; these conditions should be treated to reduce the risk of future impactions. Many types of medications (most notably opioid pain medications, such as codeine) reduce motility of the colon, increasing the likelihood of fecal impactions. If possible, alternate medications should be prescribed that avoid the side effect of constipation.
Given that all opioids can cause constipation, it is recommended that any patient placed on opioid pain medications be given medications to prevent constipation before it occurs. Daily medications can also be used to promote normal motility of the colon and soften stools. Daily use of laxatives or enemas should be avoided by most individuals as it can cause the loss of normal colon motility. However, for patients with chronic complications, daily medication under the direction of a physician may be needed.
Polyethylene glycol 3350 can be taken daily to soften the stools without the significant risk of adverse effects that are common with other laxatives. In particular, stimulant laxatives should not be used frequently because they can cause dependence in which an individual loses normal colon function and is unable to defecate without taking a laxative. Frequent use of osmotic laxatives should be avoided as well as they can cause electrolyte imbalances.
Fecaloma
A fecaloma is a more extreme form of fecal impaction, giving the accumulation an appearance of a tumor.
A fecaloma can develop as the fecal matter gradually stagnates and accumulates in the intestine and increases in volume until the intestine becomes deformed. It may occur in chronic obstruction of stool transit, as in megacolon and chronic constipation. Some diseases, such as Chagas disease, Hirschsprung's disease and others damage the autonomic nervous system in the colon's mucosa (Auerbach's plexus) and may cause extremely large or "giant" fecalomas, which must be surgically removed (disimpaction). Rarely, a fecalith will form around a hairball (Trichobezoar), or other absorbent or desiccant core.
It can be diagnosed by:
CT scan
Projectional radiography
Ultrasound
Distal or sigmoid fecalomas can often be disimpacted digitally or by a catheter that carries a flow of disimpaction fluid (water or another solvent or lubricant). Surgical intervention in the form of sigmoid colectomy or proctocolectomy and ileostomy may be required only when all conservative measures of evacuation fail. Attempts at removal can have severe and even lethal effects, such as rupture of the colon wall by the catheter or at an acute angle of the fecaloma, followed by sepsis. A fecaloma may also lead to stercoral perforation, a condition characterized by bowel perforation due to pressure necrosis from the fecal mass.
See also
Aerosol impaction
Dental impaction
Impaction (animals)
References
Further reading
Feces
Acute pain
Constipation
Rectal diseases | Fecal impaction | Biology | 1,879 |
28,076,584 | https://en.wikipedia.org/wiki/Marine%20Technology%20Society | The Marine Technology Society (MTS) is a professional society that serves an international community of approximately 2,000 ocean engineers, technologists, policy-makers, and educators. The goal of the society, which was founded in 1963, is to promote awareness, understanding, advancement and application of marine technology. The association is based in Washington, District of Columbia, United States.
Background
The society consists of 29 technical disciplines and presently has 17 sections, including overseas sections in Japan, Korea and Norway. In addition, MTS has 23 student sections at colleges and universities with related fields of study.
The flagship publication of the society is the MTS Journal. The journal is published four times annually and primarily features themed issues consisting of invited papers. It has a current Scopus CiteScore of 1.6.
MTS sponsors several conferences of note, including the OCEANS Conference (co-sponsored with IEEE/OES), Underwater Intervention (co-sponsored with ADCI), the Dynamic Positioning Conference, the biennial Buoy Workshop (co-sponsored with the Office of Naval Research), and the hot-topic workshop series TechSurge.
In 1969 the group held its annual convention in Miami Beach. The convention was addressed by Spiro Agnew, who was then Vice President of the United States.
In 1993 the laser line scan, a U.S. Navy photography secret, made its debut at the society sponsored trade show in New Orleans.
In 2023 the MATE Remotely Operated Vehicle (ROV) Competition joined MTS as a fully integrated program within the Society. For more than 20 years, the MATE ROV Competition has given children, youth, and young adults an inclusive platform to think critically about real-world problems in a way that strengthens communication, builds peer-to-peer community, and inspires entrepreneurship. Since its inauguration, the annual competition has reached more than 20,000 students in 46 regions around the world.
References
External links
Engineering societies based in the United States
Marine engineering organizations
Organizations based in Maryland
Oceanography | Marine Technology Society | Physics,Engineering,Environmental_science | 410 |
24,507,875 | https://en.wikipedia.org/wiki/Gymnopilus%20suberis | Gymnopilus suberis is a species of mushroom in the family Hymenogastraceae. It was given its current name by mycologist Rolf Singer in 1951.
Phylogeny
This species is in the aeruginosus-luteofolius infrageneric grouping in the genus Gymnopilus.
See also
List of Gymnopilus species
References
External links
Gymnopilus suberis at Index Fungorum
suberis
Fungus species | Gymnopilus suberis | Biology | 100 |
2,469,123 | https://en.wikipedia.org/wiki/Wizard%20of%20Oz%20experiment | In the field of human–computer interaction, a Wizard of Oz experiment is a research experiment in which subjects interact with a computer system that subjects believe to be autonomous, but which is actually being operated or partially operated by an unseen human being.
Concept
The phrase Wizard of Oz (originally OZ Paradigm) has come into common usage in the fields of experimental psychology, human factors, ergonomics, linguistics, and usability engineering to describe a testing or iterative design methodology wherein an experimenter (the "wizard"), in a laboratory setting, simulates the behavior of a theoretical intelligent computer application (often by going into another room and intercepting all communications between participant and system). Sometimes this is done with the participant's prior knowledge and sometimes it is a low-level deceit employed to manage the participant's expectations and encourage natural behaviors.
For example, a test participant may think they are communicating with a computer using a speech interface, when the participant's words are actually being covertly entered into the computer by a person in another room (the "wizard") and processed as a text stream, rather than as an audio stream. The missing system functionality that the wizard provides may be implemented in later versions of the system (or may even be speculative capabilities that current-day systems do not have), but its precise details are generally considered irrelevant to the study. In testing situations, the goal of such experiments may be to observe the use and effectiveness of a proposed user interface by the test participants, rather than to measure the quality of an entire system.
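As a concrete sketch of such a setup, the Python fragment below relays a participant's typed utterances to a hidden human operator over a TCP socket; the operator's replies come back labelled as if produced by an autonomous system. Everything here (host, port, function names) is invented for illustration and is not any standard Wizard-of-Oz tool:

import socket

HOST, PORT = "localhost", 9999   # hypothetical address of the wizard's machine

def run_wizard():
    # Hidden operator: sees each utterance and types the "system" reply.
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while True:
                data = conn.recv(4096)
                if not data:
                    break
                print("participant:", data.decode())
                conn.sendall(input("wizard> ").encode())

def run_participant():
    # Participant-facing console, presented as an autonomous system.
    with socket.create_connection((HOST, PORT)) as sock:
        while True:
            sock.sendall(input("you> ").encode())
            print("system:", sock.recv(4096).decode())

Running run_wizard() on one machine and run_participant() on another keeps the operator out of sight; the participant sees only the "system:" prompt.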
Origin
The name of the experiment comes from L. Frank Baum's 1900 novel The Wonderful Wizard of Oz, in which an ordinary man hides behind a curtain and pretends, through the use of "amplifying" technology, to be a powerful wizard.
John F. (“Jeff”) Kelley coined the phrases "Wizard of OZ" and "OZ Paradigm" for this purpose circa 1980 to describe the method he developed during his dissertation work at Johns Hopkins University. (His dissertation adviser was the late professor Alphonse Chapanis, sometimes called the "Godfather of Human Factors and Engineering Psychology".) During the study, in addition to one-way mirrors and other techniques, there was a blackout curtain separating Kelley (the "Wizard") from the participant's view.
The "Experimenter-in-the-Loop" technique had been pioneered at Chapatis' Communications Research Lab at Johns Hopkins as early as 1975 (J. F. Kelley arrived in 1978). W. Randolph Ford used the experimenter-in-the-loop technique with his innovative CHECKBOOK program wherein he obtained language samples in a naturalistic setting. In Ford's method, a preliminary version of the natural language processing system would be placed in front of the user. When the user entered a syntax that was not recognized, they would receive a "Could you rephrase that?" prompt from the software. After the session, the algorithms for processing the newly obtained samples would be created or enhanced and another session would take place. This approach led to the eventual development of his natural language processing technique, "Multi-Stage Pattern Reduction". Dr. Ford's recollection was that Dr. Kelley did in fact coin the phrase "Wizard of Oz Paradigm" but that the technique had been employed in at least two separate studies before Dr. Kelley had started conducting studies at the Johns Hopkins Telecommunications Lab. A similar early use of the technique to model a Natural Language Understanding system being developed at the Xerox Palo Alto Research Center was done by Allen Munro and Don Norman around 1975 at the University of California, San Diego. Again, the name "Wizard of Oz" had not yet been applied to this technique. The results were published in a 1977 paper by the team (Bobrow, et al.).
In that implementation, the experimenter (the "Wizard") sat at a terminal in an adjacent room separated by a one-way mirror so the subject could be observed. Every input from the user was processed correctly by a combination of software processing and real-time experimenter intervention. As the process was repeated in subsequent sessions, more and more software components were added, so that the experimenter had less and less to do during each session until an asymptote was reached in phrase/word dictionary growth and the experimenter could "go get a cup of coffee" during the session (which at this point was a cross-validation of the final system's unattended performance).
A final point: Dr. Kelley's recollection of the coinage of the term is backed up by that of the late professor Al Chapanis. In their 1985 University of Michigan technical report, Green and Wei-Haas state the following:
The first appearance of the "Wizard of Oz" name in print was in Jeff Kelley's thesis (Kelley, 1983a, 1983b, 1984a). It is thought the name was coined in response to a question at a graduate seminar at Hopkins (Chapanis, 1984; Kelley, 1984b). "What happens if the subject sees the experimenter [behind the "curtain" in an adjacent room acting as the computer]?" Kelley answered: "Well, that's just like what happened to Dorothy in the Wizard of Oz." And so the name stuck. (Cited by permission.)
There is also a passing reference to planned use of the "Wizard of Oz experiments" in a 1982 proceedings paper by Ford and Smith.
One fact, presented in Kelley's dissertation, about the etymology of the term in this context: Dr. Kelley did originally have a definition for the "OZ" acronym (aside from the obvious parallels with the 1900 book The Wonderful Wizard of Oz by L Frank Baum). "Offline Zero" was a reference to the fact that an experimenter (the "Wizard") was interpreting the users' inputs in real time during the simulation phase.
Similar experimental setups had occasionally been used earlier, but without the "Wizard of Oz" name. Design researcher Nigel Cross conducted studies in the 1960s with "simulated" computer-aided design systems where the purported simulator was actually a human operator, using text and graphical communication via CCTV. As he explained, "All that the user perceives of the system is this remote-access console, and the remainder is a black box to him. ... one may as well fill the black box with people as with machinery. Doing so provides a comparatively cheap simulator, with the remarkable advantages of the human operator's flexibility, memory, and intelligence, and which can be reprogrammed to give a wide range of computer roles merely by changing the rules of operation. It sometimes lacks the real computer's speed and accuracy, but a team of experts working simultaneously can compensate to a sufficient degree to provide an acceptable simulation." Cross later referred to this as a kind of Reverse Turing test.
Significance
The Wizard of OZ method is very powerful. In its original application, Dr. Kelley was able to create a simple keyboard-input natural language recognition system that far exceeded the recognition rates of any of the far more complex systems of the day.
The thinking current among many computer scientists and linguists at the time was that, in order for a computer to be able to "understand" natural language enough to be able to assist in useful tasks, the software would have to be attached to a formidable "dictionary" having a large number of categories for each word. The categories would enable a very complex parsing algorithm to unravel the ambiguities inherent in naturally produced language. The daunting task of creating such a dictionary led many to believe that computers simply would never truly "understand" language until they could be "raised" and "experience life" as humans, since humans seem to apply a life's worth of experiences to the interpretation of language.
The key enabling factor for the first use of the OZ method was that the system was designed to work in a single context (calendar-keeping), which constrained the complexity of language encountered from users to the extent where a simple language processing model was sufficient to meet the goals of the application. The processing model was a two-pass keyword/key-phrase matching approach, based loosely on the algorithms employed in Weizenbaum's famous Eliza program. By inducing participants to generate language samples in the context of solving an actual task (using a computer that they believed actually understood what they were typing), the variety and complexity of the lexical structures gathered was greatly reduced and simple keyword matching algorithms could be developed to address the actual language collected.
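A minimal sketch of such a two-pass matcher is given below in Python; the phrase and keyword tables are invented stand-ins, not Kelley's actual dictionaries, which were grown empirically from participant data:

# Pass 1 catches multi-word key phrases; pass 2 catches single keywords.
PHRASES = {"next week": "WEEK_OFFSET", "day after tomorrow": "DAY_OFFSET"}
KEYWORDS = {"meeting": "EVENT", "monday": "DAY", "cancel": "DELETE"}

def interpret(utterance):
    text = utterance.lower()
    tokens = []
    for phrase, tag in PHRASES.items():      # first pass: key phrases
        if phrase in text:
            tokens.append(tag)
            text = text.replace(phrase, " ")
    for word in text.split():                # second pass: single keywords
        if word in KEYWORDS:
            tokens.append(KEYWORDS[word])
    return tokens or ["UNRECOGNIZED"]        # inputs the wizard would handle

print(interpret("Cancel my meeting next week"))
# -> ['WEEK_OFFSET', 'DELETE', 'EVENT']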
This first use of OZ was in the context of an iterative design approach. In the early development sessions, the experimenter simulated the system in toto, performing all the database queries and composing all the responses to the participants by hand. As the process matured, the experimenter was able to replace human interventions, piece by piece, with newly created developed code (which, at each phase, was designed to accurately process all the inputs that were generated in preceding steps). By the end of the process, the experimenter was able to observe the sessions in a "hands-off" mode (and measure the recognition rates of the completed program).
OZ was important because it addressed the obvious criticism that it would be unrealistic to use an iterative method to build a separate natural language system (dictionaries, syntax) for each new context (as such a method would require repeatedly adding new structures and algorithms to handle each new batch of inputs). OZ's empirical approach made this feasible; in its original application, dictionary and syntax growth reached an asymptote (achieving from 86% to 97% recognition rates, depending on the measurements employed) after only 16 experimental trials, and the resulting program, with dictionaries, was less than 300k of code.
In the 23 years that followed initial publication, the OZ method has been employed in a wide variety of settings, notably in the prototyping and usability testing of proposed user interface designs in advance of having actual application software in place.
See also
Reverse Turing test - A Turing test in which the objective or roles between computers and humans have been reversed
Chinese room - A thought experiment with a similar premise.
The Turk - Wizard of Oz device used as a fake chess-playing machine
References
Human–computer interaction | Wizard of Oz experiment | Engineering | 2,114 |
24,435,146 | https://en.wikipedia.org/wiki/List%20of%20craters%20on%20Mars | This is a list of craters on Mars. Impact craters on Mars larger than 1 km in diameter exist by the hundreds of thousands, but only about one thousand of them have names. Names are assigned by the International Astronomical Union after petitioning by relevant scientists, and in general, only craters that have a significant research interest are given names. Martian craters are named after famous scientists and science fiction authors, or if less than about 60 km in diameter, after towns on Earth. Craters cannot be named for living people, and names for small craters are rarely intended to commemorate a specific town. Latitude and longitude are given as planetographic coordinates with west longitude.
Catalog of named craters
The catalog is divided into three partial lists:
List of craters on Mars: A–G
List of craters on Mars: H–N
List of craters on Mars: O–Z
Names are grouped into tables for each letter of the alphabet, containing the crater's name (linked if article exists), coordinates, diameter in kilometers, year of official name adoption (approval), the eponym ("named after") and a direct reference to the Gazetteer of Planetary Nomenclature.
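For readers who want to process the catalog programmatically, one plausible record structure mirroring the columns just listed is sketched below in Python (the field names are invented for illustration, not an official schema):

from dataclasses import dataclass

@dataclass
class MartianCrater:
    name: str                  # linked if an article exists
    latitude_deg: float        # planetographic latitude
    longitude_deg_west: float  # west longitude, per the convention above
    diameter_km: float
    year_approved: int         # year of official name adoption
    eponym: str                # scientist, author, or town the crater is named after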
Statistics
As of 2017, Martian craters account for 21% of all 5,211 named craters in the Solar System. Apart from the Moon, no other body has as many named craters as Mars. Other, non-planetary bodies with numerous named craters include Callisto (141), Ganymede (131), Rhea (128), Vesta (90), Ceres (90), Dione (73), Iapetus (58), Enceladus (53), Tethys (50) and Europa (41). For a full list, see List of craters in the Solar System. The total number of craters on Mars greater than 1 kilometre in diameter is approximately 385,000, with 21% of those (~85,000) being over 3 kilometers in diameter. The number of craters on Mars over 25 metres in diameter is suggested to be approximately 90 million.
Largest craters
Some of the largest craters on Mars remain unnamed. Diameters differ depending on source data. The largest confirmed impact basins on Mars are Utopia (buried, estimated diameter 3,300 km), Hellas (2,300 km), Argyre (1,800 km) and Isidis (1,500 km).
Notes
See also
List of catenae on Mars
List of mountains on Mars
References
External links
USGS: Martian system nomenclature
The Origin of Mars Crater Names
Mars | List of craters on Mars | Astronomy | 510 |
47,975,664 | https://en.wikipedia.org/wiki/Junes%20Ipaktschi | Junes Ipaktschi (born October 25, 1940, in Tabriz, Iran) is an Iranian organic chemist and professor of the Department of Organic Chemistry at the University of Giessen.
Life
Junes Ipaktschi grew up in Tehran, Iran. After graduating in June 1958 from the Razi School in Tehran, he studied chemistry from 1958 to 1966 at Heidelberg University. His doctoral thesis, in the field of organic chemistry, was completed under the direction of Heinz Staab. He then conducted research as an assistant in the same working group and habilitated in 1972 in organic chemistry with a thesis on the photochemistry of unsaturated ketones. From 1972 to 1974 he did research as a postdoctoral fellow and visiting professor in the laboratory of William G. Dauben at the University of California, Berkeley. In 1973 he accepted an appointment at the Department of Chemistry at the University of Marburg and became a professor.
In 1975 he was appointed to the chemistry institute of Arya Mehr University (now Sharif University) in Tehran, returning to his native Iran as a professor. In 1978 he moved to the newly founded Reza Shah Kabir University (now Mazandaran University), where he worked as a professor of chemistry. For a time he also served as director of the university and as head of its chemistry institute. From 1980 until his retirement in 2005, he was professor of chemistry at the University of Giessen. From 1992 to 1995 he was dean of the university's chemistry department, and from 2001 to 2002 executive director of its organic chemistry institute. In addition to several stays as a visiting professor at various universities in Iran, he was invited in 2001 as a visiting professor to the National Institute of Advanced Industrial Science and Technology in Tsukuba, Japan, where he spent three months. In 1999, Ipaktschi was awarded the Kharazmi Award.
Research
Ipaktschi is known for the use of ethereal solutions of lithium perchlorate as a medium for organic reactions and organometallic chemistry.
Selected publications
J. Ipaktschi, M. R. Saidi: Metal-Mediated Cyclizations of Amines, Science of synthesis 2012, 40.1.1.5.5, p. 351–504.
References
Academic staff of the University of Giessen
Academic staff of Sharif University of Technology
1940 births
20th-century German chemists
Iranian organic chemists
Living people
Iranian emigrants to Germany
21st-century German chemists
Academic staff of the University of Mazandaran | Junes Ipaktschi | Chemistry | 515 |
32,015,318 | https://en.wikipedia.org/wiki/Ranna%20an%20aeir | Ranna an aeir ("The Constellations") is the title of a medieval Irish astronomical tract, thought to date from c. 1500–1550. It was written in Early Modern Irish, with some words in English and Latin.
See also
An Irish Astronomical Tract
References
Manuscript Sources
National Library of Scotland; Advocates 72.1.2 olim Gaelic II (The National Library of Ireland holds a microfilm copy (n. 307, p. 452).)
Edition
A. O. Anderson, "Ranna an aeir" [The Constellations], in Revue Celtique, ed. Henri d'Arbois de Jubainville, Volume 30, Paris: F. Vieweg (1909), pages 404–417
Astronomy books
Astronomy in Ireland
Irish-language manuscripts
Irish-language literature
16th century in Ireland
16th-century books
Medieval texts in Irish | Ranna an aeir | Astronomy | 174 |
43,710 | https://en.wikipedia.org/wiki/Silicon%20dioxide | Silicon dioxide, also known as silica, is an oxide of silicon with the chemical formula , commonly found in nature as quartz. In many parts of the world, silica is the major constituent of sand. Silica is one of the most complex and abundant families of materials, existing as a compound of several minerals and as a synthetic product. Examples include fused quartz, fumed silica, opal, and aerogels. It is used in structural materials, microelectronics, and as components in the food and pharmaceutical industries. All forms are white or colorless, although impure samples can be colored.
Silicon dioxide is a common fundamental constituent of glass.
Structure
In the majority of silicon dioxides, the silicon atom shows tetrahedral coordination, with four oxygen atoms surrounding a central Si atom. Thus, SiO2 forms 3-dimensional network solids in which each silicon atom is covalently bonded in a tetrahedral manner to 4 oxygen atoms. In contrast, CO2 is a linear molecule. The starkly different structures of the dioxides of carbon and silicon are a manifestation of the double bond rule.
Based on the crystal structural differences, silicon dioxide can be divided into two categories: crystalline and non-crystalline (amorphous). In crystalline form, this substance can be found naturally occurring as quartz, tridymite (high-temperature form), cristobalite (high-temperature form), stishovite (high-pressure form), and coesite (high-pressure form). On the other hand, amorphous silica can be found in nature as opal and diatomaceous earth. Quartz glass is a form of intermediate state between these structures.
All of these distinct crystalline forms always have the same local structure around Si and O. In α-quartz the Si–O bond length is 161 pm, whereas in α-tridymite it is in the range 154–171 pm. The Si–O–Si angle also varies between a low value of 140° in α-tridymite, up to 180° in β-tridymite. In α-quartz, the Si–O–Si angle is 144°.
Polymorphism
Alpha quartz is the most stable form of solid SiO2 at room temperature. The high-temperature minerals, cristobalite and tridymite, have both lower densities and indices of refraction than quartz. The transformation from α-quartz to β-quartz takes place abruptly at 573 °C. Since the transformation is accompanied by a significant change in volume, it can easily induce fracturing of ceramics or rocks passing through this temperature limit. The high-pressure minerals, seifertite, stishovite, and coesite, though, have higher densities and indices of refraction than quartz. Stishovite has a rutile-like structure where silicon is 6-coordinate. The density of stishovite is 4.287 g/cm3, which compares to α-quartz, the densest of the low-pressure forms, which has a density of 2.648 g/cm3. The difference in density can be ascribed to the increase in coordination as the six shortest Si–O bond lengths in stishovite (four Si–O bond lengths of 176 pm and two others of 181 pm) are greater than the Si–O bond length (161 pm) in α-quartz.
The change in the coordination increases the ionicity of the Si–O bond.
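To make the density comparison above concrete, a one-line calculation (Python, using the values quoted in the text) shows that stishovite is about 62% denser than α-quartz:

rho_stishovite, rho_alpha_quartz = 4.287, 2.648               # g/cm3, from the text
print(f"{rho_stishovite / rho_alpha_quartz - 1:.0%} denser")  # -> 62% denser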
Faujasite silica, another polymorph, is obtained by the dealumination of a low-sodium, ultra-stable Y zeolite with combined acid and thermal treatment. The resulting product contains over 99% silica, and has high crystallinity and specific surface area (over 800 m2/g). Faujasite-silica has very high thermal and acid stability. For example, it maintains a high degree of long-range molecular order or crystallinity even after boiling in concentrated hydrochloric acid.
Molten SiO2
Molten silica exhibits several peculiar physical characteristics that are similar to those observed in liquid water: negative temperature expansion, density maximum at temperatures ~5000 °C, and a heat capacity minimum. Its density decreases from 2.08 g/cm3 at 1950 °C to 2.03 g/cm3 at 2200 °C.
Molecular SiO2
Molecular SiO2 has a linear structure like CO2. It has been produced by combining silicon monoxide (SiO) with oxygen in an argon matrix.
The dimeric silicon dioxide, (SiO2)2, has been obtained by reacting O2 with matrix-isolated dimeric silicon monoxide, (Si2O2). In dimeric silicon dioxide there are two oxygen atoms bridging between the silicon atoms, with an Si–O–Si angle of 94° and bond length of 164.6 pm; the terminal Si–O bond length is 150.2 pm. In the monomer, the Si–O bond length is 148.3 pm, which compares with the length of 161 pm in α-quartz. The bond energy is estimated at 621.7 kJ/mol.
Natural occurrence
Geology
Silicon dioxide is most commonly encountered in nature as quartz, which comprises more than 10% by mass of the Earth's crust. Quartz is the only polymorph of silica stable at the Earth's surface. Metastable occurrences of the high-pressure forms coesite and stishovite have been found around impact structures and associated with eclogites formed during ultra-high-pressure metamorphism. The high-temperature forms of tridymite and cristobalite are known from silica-rich volcanic rocks. In many parts of the world, silica is the major constituent of sand.
Biology
Even though it is poorly soluble, silica occurs in many plants such as rice. Plant materials with high silica phytolith content appear to be of importance to grazing animals, from chewing insects to ungulates. Silica accelerates tooth wear, and high levels of silica in plants frequently eaten by herbivores may have developed as a defense mechanism against predation.
Silica is also the primary component of rice husk ash, which is used, for example, in filtration and as supplementary cementitious material (SCM) in cement and concrete manufacturing.
Silicification in and by cells has been common in the biological world and it occurs in bacteria, protists, plants, and animals (invertebrates and vertebrates).
Prominent examples include:
Tests or frustules (i.e. shells) of diatoms, Radiolaria, and testate amoebae.
Silica phytoliths in the cells of many plants including Equisetaceae, many grasses, and a wide range of dicotyledons.
The spicules forming the skeleton of many sponges.
Uses
Structural use
About 95% of the commercial use of silicon dioxide (sand) is in the construction industry, e.g. in the production of concrete (Portland cement concrete).
Certain deposits of silica sand, with desirable particle size and shape and desirable clay and other mineral content, were important for sand casting of metallic products. The high melting point of silica enables it to be used in applications such as iron casting; modern sand casting sometimes uses other minerals for other reasons.
Crystalline silica is used in hydraulic fracturing of formations which contain tight oil and shale gas.
Precursor to glass and silicon
Silica is the primary ingredient in the production of most glass. As other minerals are melted with silica, the principle of freezing point depression lowers the melting point of the mixture and increases fluidity. The glass transition temperature of pure SiO2 is about 1475 K. When molten silicon dioxide SiO2 is rapidly cooled, it does not crystallize, but solidifies as a glass. Because of this, most ceramic glazes have silica as the main ingredient.
The structural geometry of silicon and oxygen in glass is similar to that in quartz and most other crystalline forms of silicon and oxygen, with silicon surrounded by regular tetrahedra of oxygen centres. The difference between the glass and crystalline forms arises from the connectivity of the tetrahedral units: although there is no long-range periodicity in the glassy network, ordering remains at length scales well beyond the Si–O bond length. One example of this ordering is the preference to form rings of six tetrahedra.
The majority of optical fibers for telecommunications are also made from silica. It is a primary raw material for many ceramics such as earthenware, stoneware, and porcelain.
Silicon dioxide is used to produce elemental silicon. The process involves carbothermic reduction in an electric arc furnace:
SiO2 + 2 C -> Si + 2 CO
Fumed silica
Fumed silica, also known as pyrogenic silica, is prepared by burning SiCl4 in an oxygen-rich hydrogen flame to produce a "smoke" of SiO2.
SiCl4 + 2 H2 + O2 -> SiO2 + 4 HCl
It can also be produced by vaporizing quartz sand in a 3000 °C electric arc. Both processes result in microscopic droplets of amorphous silica fused into branched, chainlike, three-dimensional secondary particles, which then agglomerate into tertiary particles, a white powder with extremely low bulk density (0.03–0.15 g/cm3) and thus high surface area. The particles act as a thixotropic thickening agent, or as an anti-caking agent, and can be treated to make them hydrophilic or hydrophobic for either water or organic liquid applications.
Silica fume is an ultrafine powder collected as a by-product of the silicon and ferrosilicon alloy production. It consists of amorphous (non-crystalline) spherical particles with an average particle diameter of 150 nm, without the branching of the pyrogenic product. The main use is as pozzolanic material for high performance concrete. Fumed silica nanoparticles can be successfully used as an anti-aging agent in asphalt binders.
Food, cosmetic, and pharmaceutical applications
Silica, either colloidal, precipitated, or pyrogenic fumed, is a common additive in food production. It is used primarily as a flow or anti-caking agent in powdered foods such as spices and non-dairy coffee creamer, or powders to be formed into pharmaceutical tablets. It can adsorb water in hygroscopic applications. Colloidal silica is used as a fining agent for wine, beer, and juice, with the E number reference E551.
In cosmetics, silica is useful for its light-diffusing properties and natural absorbency.
Diatomaceous earth, a mined product, has been used in food and cosmetics for centuries. It consists of the silica shells of microscopic diatoms; in a less processed form it was sold as "tooth powder". Manufactured or mined hydrated silica is used as the hard abrasive in toothpaste.
Semiconductors
Silicon dioxide is widely used in the semiconductor technology:
for the primary passivation (directly on the semiconductor surface),
as the original gate dielectric in MOS technology; now that scaling (the gate length of the MOS transistor) has progressed below 10 nm, silicon dioxide has been replaced as the gate dielectric by materials with a higher dielectric constant, such as hafnium oxide,
as a dielectric layer between metal (wiring) layers (sometimes up to 8–10) connecting elements and
as a second passivation layer (protecting semiconductor elements and the metallization layers), today typically layered with other dielectrics such as silicon nitride.
Because silicon dioxide is a native oxide of silicon, silicon is more widely used than other semiconductors such as gallium arsenide or indium phosphide, which lack a comparable native oxide.
Silicon dioxide can be grown on a silicon semiconductor surface. Silicon oxide layers can protect silicon surfaces during diffusion processes and can be used for diffusion masking.
Surface passivation is the process by which a semiconductor surface is rendered inert, so that it does not change semiconductor properties as a result of interaction with air or other materials in contact with the surface or edge of the crystal. The formation of a thermally grown silicon dioxide layer greatly reduces the concentration of electronic states at the silicon surface. SiO2 films preserve the electrical characteristics of p–n junctions and prevent these characteristics from deteriorating in a gaseous ambient environment. Silicon oxide layers can be used to electrically stabilize silicon surfaces. The surface passivation process is an important method of semiconductor device fabrication that involves coating a silicon wafer with an insulating layer of silicon oxide so that electricity can reliably reach the conducting silicon below. Growing a layer of silicon dioxide on top of a silicon wafer enables it to overcome the surface states that otherwise prevent electricity from reaching the semiconducting layer.
The process of silicon surface passivation by thermal oxidation (silicon dioxide) is critical to the semiconductor industry. It is commonly used to manufacture metal–oxide–semiconductor field-effect transistors (MOSFETs) and silicon integrated circuit chips (with the planar process).
Other
Hydrophobic silica is used as a defoamer component.
In its capacity as a refractory, it is useful in fiber form as a high-temperature thermal protection fabric.
Silica is used in the extraction of DNA and RNA due to its ability to bind to the nucleic acids under the presence of chaotropes.
Silica aerogel was used in the Stardust spacecraft to collect extraterrestrial particles.
Pure silica (silicon dioxide), when cooled as fused quartz into a glass with no true melting point, can be used as a glass fibre for fibreglass.
Production
Silicon dioxide is mostly obtained by mining, including sand mining and purification of quartz.
Quartz is suitable for many purposes, while chemical processing is required to make a purer or otherwise more suitable (e.g. more reactive or fine-grained) product.
Precipitated silica
Precipitated silica or amorphous silica is produced by the acidification of solutions of sodium silicate. The gelatinous precipitate, or silica gel, is first washed and then dehydrated to produce colorless microporous silica. The idealized equation involving a trisilicate and sulfuric acid is:
Na2Si3O7 + H2SO4 -> 3 SiO2 + Na2SO4 + H2O
Approximately one billion kilograms per year (1999) of silica were produced in this manner, mainly for use in polymer composites – tires and shoe soles.
On microchips
Thin films of silica grow spontaneously on silicon wafers via thermal oxidation, producing a very shallow layer of about 1 nm or 10 Å of so-called native oxide.
Higher temperatures and alternative environments are used to grow well-controlled layers of silicon dioxide on silicon, for example at temperatures between 600 and 1200 °C, using so-called dry oxidation with O2
Si + O2 -> SiO2
or wet oxidation with H2O.
Si + 2 H2O -> SiO2 + 2 H2
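Growth kinetics of such thermally grown oxides are commonly described by the Deal–Grove model, in which the oxide thickness x after time t satisfies x^2 + Ax = B(t + tau). A minimal sketch solving that relation for x(t) follows; the rate coefficients are illustrative placeholders only, since real values depend strongly on temperature, wet versus dry ambient, pressure, and crystal orientation:

import math

def deal_grove_thickness(t_hours, B=0.027, B_over_A=0.07, x0_um=0.0):
    """Oxide thickness (um) after t_hours, per the Deal-Grove relation
    x^2 + A*x = B*(t + tau). B (um^2/h) and B_over_A (um/h) are
    illustrative placeholders, not measured coefficients; x0_um is any
    pre-existing oxide, folded into the time offset tau."""
    A = B / B_over_A
    tau = (x0_um**2 + A * x0_um) / B  # accounts for the initial oxide
    # Quadratic formula, keeping the physical (positive) root:
    return (A / 2.0) * (math.sqrt(1.0 + 4.0 * B * (t_hours + tau) / A**2) - 1.0)

for t in (0.5, 1, 2, 4, 8):
    print(f"{t:>4} h -> {deal_grove_thickness(t):.3f} um")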
The native oxide layer is beneficial in microelectronics, where it acts as electric insulator with high chemical stability. It can protect the silicon, store charge, block current, and even act as a controlled pathway to limit current flow.
Laboratory or special methods
From organosilicon compounds
Many routes to silicon dioxide start with an organosilicon compound, e.g., HMDSO, TEOS. Synthesis of silica is illustrated below using tetraethyl orthosilicate (TEOS). Simply heating TEOS at 680–730 °C results in the oxide:
Si(OC2H5)4 -> SiO2 + 2 O(C2H5)2
Similarly TEOS combusts around 400 °C:
Si(OC2H5)4 + 12 O2 -> SiO2 + 10 H2O + 8 CO2
TEOS undergoes hydrolysis via the so-called sol-gel process. The course of the reaction and nature of the product are affected by catalysts, but the idealized equation is:
Si(OC2H5)4 + 2 H2O -> SiO2 + 4 HOCH2CH3
Other methods
Being highly stable, silicon dioxide arises from many methods. Conceptually simple, but of little practical value, combustion of silane gives silicon dioxide. This reaction is analogous to the combustion of methane:
SiH4 + 2 O2 -> SiO2 + 2 H2O
However, chemical vapor deposition of silicon dioxide onto a crystal surface from silane has been carried out using nitrogen as a carrier gas at 200–500 °C.
Chemical reactions
Silicon dioxide is a relatively inert material (hence its widespread occurrence as a mineral), and silica is often used as an inert container for chemical reactions. At high temperatures, it is converted to silicon by reduction with carbon.
Fluorine reacts with silicon dioxide to form SiF4 and O2 whereas the other halogen gases (Cl2, Br2, I2) are unreactive.
Most forms of silicon dioxide are attacked ("etched") by hydrofluoric acid (HF) to produce hexafluorosilicic acid:
SiO2 + 6 HF -> H2SiF6 + 2 H2O
Stishovite does not react with HF to any significant degree.
HF is used to remove or pattern silicon dioxide in the semiconductor industry.
Silicon dioxide acts as a Lux–Flood acid, being able to react with bases under certain conditions. As it does not contain any hydrogen, non-hydrated silica cannot directly act as a Brønsted–Lowry acid. While silicon dioxide is only poorly soluble in water at low or neutral pH (typically, 2 × 10−4 M for quartz up to 10−3 M for cryptocrystalline chalcedony), strong bases react with glass and easily dissolve it. Therefore, strong bases have to be stored in plastic bottles to avoid jamming the bottle cap, to preserve the integrity of the container, and to avoid undesirable contamination by silicate anions.
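For intuition, the quoted molar solubilities correspond to quite small mass concentrations (using the molar mass of SiO2, about 60.1 g/mol):

2 \times 10^{-4}\,\mathrm{mol/L} \times 60.1\,\mathrm{g/mol} \approx 12\,\mathrm{mg/L}, \qquad 10^{-3}\,\mathrm{mol/L} \times 60.1\,\mathrm{g/mol} \approx 60\,\mathrm{mg/L}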
Silicon dioxide dissolves in hot concentrated alkali or fused hydroxide, as described in this idealized equation:
SiO2 + 2 NaOH -> Na2SiO3 + H2O
Silicon dioxide will neutralise basic metal oxides (e.g. sodium oxide, potassium oxide, lead(II) oxide, zinc oxide, or mixtures of oxides), forming silicates and glasses as the Si–O–Si bonds in silica are broken successively. As an example, the reaction of sodium oxide and SiO2 can produce sodium orthosilicate, sodium silicate, and glasses, depending on the proportions of reactants:
2 Na2O + SiO2 -> Na4SiO4;
Na2O + SiO2 -> Na2SiO3;
Na2O + SiO2 -> glass.
Examples of such glasses have commercial significance, e.g. soda–lime glass, borosilicate glass, lead glass. In these glasses, silica is termed the network former or lattice former. The reaction is also used in blast furnaces to remove sand impurities in the ore by neutralisation with calcium oxide, forming calcium silicate slag.
Silicon dioxide reacts in heated reflux under dinitrogen with ethylene glycol and an alkali metal base to produce highly reactive, pentacoordinate silicates, which provide access to a wide variety of new silicon compounds. The silicates are essentially insoluble in all polar solvents except methanol.
Silicon dioxide reacts with elemental silicon at high temperatures to produce SiO:
SiO2 + Si -> 2 SiO
Water solubility
The solubility of silicon dioxide in water strongly depends on its crystalline form and is three to four times higher for amorphous silica than for quartz; as a function of temperature, it passes through a maximum. This property is used to grow single crystals of quartz in a hydrothermal process in which natural quartz is dissolved in superheated water in a pressure vessel that is cooler at the top. Crystals of 0.5–1 kg can be grown over 1–2 months. These crystals are a source of very pure quartz for use in electronic applications. Above the critical temperature and pressure of water, water becomes a supercritical fluid and the solubility is once again higher than at lower temperatures.
Health effects
Silica ingested orally is essentially nontoxic, with an LD50 of 5000 mg/kg (5 g/kg). A 2008 study following subjects for 15 years found that higher levels of silica in water appeared to decrease the risk of dementia. An increase of 10 mg/day of silica in drinking water was associated with an 11% reduction in the risk of dementia.
Inhaling finely divided crystalline silica dust can lead to silicosis, bronchitis, or lung cancer, as the dust becomes lodged in the lungs and continuously irritates the tissue, reducing lung capacities. When fine silica particles are inhaled in large enough quantities (such as through occupational exposure), it increases the risk of systemic autoimmune diseases such as lupus and rheumatoid arthritis compared to expected rates in the general population.
Occupational hazard
Silica is an occupational hazard for people who do sandblasting or work with powdered crystalline silica products. Amorphous silica, such as fumed silica, may cause irreversible lung damage in some cases but is not associated with the development of silicosis. Children, asthmatics of any age, those with allergies, and the elderly (all of whom have reduced lung capacity) can be affected in less time.
Crystalline silica is an occupational hazard for those working with stone countertops because the process of cutting and installing the countertops creates large amounts of airborne silica. Crystalline silica used in hydraulic fracturing presents a health hazard to workers.
Pathophysiology
In the body, crystalline silica particles do not dissolve over clinically relevant periods. Silica crystals inside the lungs can activate the NLRP3 inflammasome inside macrophages and dendritic cells, thereby resulting in production of interleukin-1β (IL-1β), a highly pro-inflammatory cytokine of the immune system.
Regulation
Regulations restricting silica exposure 'with respect to the silicosis hazard' specify that they are concerned only with silica, which is both crystalline and dust-forming.
In 2013, the U.S. Occupational Safety and Health Administration reduced the exposure limit to 50 μg/m3 of air. Prior to 2013, it had allowed 100 μg/m3 and in construction workers even 250 μg/m3.
In 2013, OSHA also required the "green completion" of fracked wells to reduce exposure to crystalline silica and restrict the exposure limit.
Crystalline forms
SiO2, more so than almost any material, exists in many crystalline forms. These forms are called polymorphs.
Safety
Inhaling finely divided crystalline silica can lead to severe inflammation of the lung tissue, silicosis, bronchitis, lung cancer, and systemic autoimmune diseases, such as lupus and rheumatoid arthritis. Inhalation of amorphous silicon dioxide, in high doses, leads to non-permanent short-term inflammation, where all effects heal.
See also
Mesoporous silica
Orthosilicic acid
Silicon carbide
References
External links
Tridymite
Quartz
Cristobalite
Amorphous, NIOSH Pocket Guide to Chemical Hazards
Crystalline, as respirable dust, NIOSH Pocket Guide to Chemical Hazards
Formation of silicon oxide layers in the semiconductor industry. LPCVD and PECVD method in comparison. Stress prevention.
Quartz (SiO2) piezoelectric properties
Silica (SiO2) and water
Epidemiological evidence on the carcinogenicity of silica: factors in scientific judgement by C. Soutar and others. Institute of Occupational Medicine Research Report TM/97/09
Scientific opinion on the health effects of airborne silica by A Pilkington and others. Institute of Occupational Medicine Research Report TM/95/08
The toxic effects of silica by A. Seaton and others. Institute of Occupational Medicine Research Report TM/87/13
Structure of precipitated silica
Ceramic materials
Refractory materials
IARC Group 1 carcinogens
Excipients
E-number additives
Oxides
Occupational safety and health | Silicon dioxide | Physics,Chemistry,Engineering | 5,136 |
26,954,731 | https://en.wikipedia.org/wiki/Domain%20of%20unknown%20function | A domain of unknown function (DUF) is a protein domain that has no characterised function. These families have been collected together in the Pfam database using the prefix DUF followed by a number, with examples being DUF2992 and DUF1220. As of 2019, there are almost 4,000 DUF families within the Pfam database representing over 22% of known families. Some DUFs are not named using the nomenclature due to popular usage but are nevertheless DUFs.
The DUF designation is tentative, and such families tend to be renamed to a more specific name (or merged to an existing domain) after a function is identified.
History
The DUF naming scheme was introduced by Chris Ponting, through the addition of DUF1 and DUF2 to the SMART database. These two domains were found to be widely distributed in bacterial signaling proteins. Subsequently, the functions of these domains were identified and they have since been renamed as the GGDEF domain and EAL domain respectively.
Characterisation
Structural genomics programmes have attempted to understand the function of DUFs through structure determination. The structures of over 250 DUF families have been solved. This (2009) work showed that about two thirds of DUF families had a structure similar to a previously solved one and therefore likely to be divergent members of existing protein superfamilies, whereas about one third possessed a novel protein fold.
Some DUF families share remote sequence homology with domains of characterized function, and computational analysis can be used to uncover these relationships. A 2015 study was able to assign 20% of the DUFs to characterized structural superfamilies. Pfam also continuously performs such (manually verified) assignments through its "clan" superfamily entries.
Frequency and conservation
More than 20% of all protein domains were annotated as DUFs in 2013. About 2,700 DUFs are found in bacteria compared with just over 1,500 in eukaryotes. Over 800 DUFs are shared between bacteria and eukaryotes, and about 300 of these are also present in archaea. A total of 2,786 bacterial Pfam domains even occur in animals, including 320 DUFs.
Role in biology
Many DUFs are highly conserved, indicating an important role in biology. However, many such DUFs are not essential, so their biological role often remains unknown. For instance, DUF143 is present in most bacterial and eukaryotic genomes. However, when it was deleted in Escherichia coli, no obvious phenotype was detected. Later it was shown that the proteins that contain DUF143 are ribosomal silencing factors that block the assembly of the two ribosomal subunits. While this function is not essential, it helps the cells to adapt to low-nutrient conditions by shutting down protein biosynthesis. As a result, these proteins and the DUF only become relevant when the cells starve. It is thus believed that many DUFs (or proteins of unknown function, PUFs) are only required under certain conditions.
Essential DUFs
Goodacre et al. identified 238 DUFs in 355 essential proteins (in 16 model bacterial species), most of which represent single-domain proteins, clearly establishing the biological essentiality of DUFs. These DUFs are called "essential DUFs" or eDUFs.
External links
List of Pfam families beginning with the letter D, including DUF families
References
Protein domains | Domain of unknown function | Biology | 704 |
53,767,468 | https://en.wikipedia.org/wiki/Australasian%20Corrosion%20Association | The Australasian Corrosion Association (ACA) is a non-profit membership association, headquartered in the state of Victoria, Australia and active in the Australasian region (mainly Australia and New Zealand), which disseminates information on corrosion and its prevention or control, by providing training, seminars, conferences, publications and other activities.
The ACA has branches and committees in main centers around Australia and New Zealand.
The ACA has a strategic partnership with the Association for Materials Protection and Performance, offering that organization's training courses in Australasia and Southeast Asia.
The Association proactively promotes corrosion awareness in Australia and New Zealand, and holds annual conferences on the topic.
References
External links
NACE International Website
Engineering societies based in Australia
Organisations based in Victoria (state)
Corrosion prevention | Australasian Corrosion Association | Chemistry | 158 |
71,943,827 | https://en.wikipedia.org/wiki/Right%20To%20Know | Right To Know is a non-profit support project for those who discover, via genealogical genetic testing, that their lineage is not what they had supposed it to be due to family secrets and misattributed parentage, raising existential issues of adoption, race, ethnicity, culture, rape, etc.
See also
Genealogy
Genetic testing
External links
Right To Know - Your Genetic Identity
References
Organizations established in 2022
2022 establishments in the United States
Genetics | Right To Know | Biology | 90 |
70,367,723 | https://en.wikipedia.org/wiki/Fractal%20physiology | Fractal physiology refers to the study of physiological systems using complexity science methods, such as chaos measures, entropy, and fractal dimensions. The underlying assumption is that biological systems are complex and exhibit non-linear patterns of activity, and that characterizing that complexity (using dedicated mathematical approaches) is useful for understanding, and making inferences and predictions about, the system.
Main Findings
Neurophysiology
Quantification of the complexity of brain activity is used in the context of neuropsychiatric diseases and the characterization of mental states, such as schizophrenia, affective disorders, or neurodegenerative disorders. In particular, diminished EEG complexity is typically associated with increased symptomatology.
Cardiovascular systems
The complexity of heart rate variability is a useful predictor of cardiovascular health.
Software
In Python, NeuroKit provides a comprehensive set of functions for complexity analysis of physiological data. AntroPy implements several measures to quantify the complexity of time-series.
In R, TSEntropies provides methods to quantify the entropy. casnet implements a collection of analytic tools for studying signals recorded from complex adaptive systems.
In MATLAB, The Neurophysiological Biomarker Toolbox (NBT) allows the computation of Detrended fluctuation analysis. EZ Entropy implements the entropy analysis of physiological time-series.
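As a minimal, self-contained illustration of the kind of measure these packages compute, the sketch below implements sample entropy directly in NumPy; it is a naive O(n^2) version suitable only for short signals, whereas the packages above provide optimized and validated implementations:

import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Naive sample entropy of a 1-D signal: counts pairs of length-m
    template vectors whose Chebyshev distance is within r (a fraction
    of the signal's standard deviation), repeats for length m+1, and
    returns SampEn = -ln(A / B)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
        dists = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return np.sum(np.triu(dists <= r, k=1))  # distinct pairs i < j

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else float("inf")

rng = np.random.default_rng(0)
print(sample_entropy(np.sin(np.linspace(0, 8 * np.pi, 300))))  # regular signal: low value
print(sample_entropy(rng.standard_normal(300)))                # irregular signal: higher value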
See also
Fractal dimension
Entropy
Complex system
References
Fractals
Physiology | Fractal physiology | Mathematics,Biology | 285 |
49,222,655 | https://en.wikipedia.org/wiki/Benzoate%3AH%20symporter | The benzoate:H symporter (BenE) family (TC# 2.A.46) is a member of the APC Superfamily. The BenE family contains only two functionally characterized and sequenced members, the benzoate permeases of Acinetobacter calcoaceticus and E. coli. These proteins are about 400 residues in length and probably span the membrane 12 times. Some members of the BenE family can have as few as 7 TMSs (i.e., BenE of Frankia sp. Ccl3; TC# 2.A.46.1.6), or as many as 14 TMSs (i.e., BenE of Cellvibrio gilvus; TC# 2.A.46.1.4). BenE family members exhibit about 30% identity to each other and limited sequence similarity to members of the Aromatic Acid:H Symporter (AAHS) family (TC# 2.A.1.15) of the Major Facilitator Superfamily (MFS). The degree of similarity with the latter proteins is insufficient to establish homology. As of early 2016, no crystal structural data is available for members of the BenE family.
Transport reaction
The generalized transport reaction catalyzed by BenE of A. calcoaceticus is:
Benzoate (out) + H+ (out) → Benzoate (in) + H+ (in).
References
Protein families
Solute carrier family | Benzoate:H symporter | Biology | 317 |
32,938,206 | https://en.wikipedia.org/wiki/SN%201917A | SN 1917A is a supernova event in the Fireworks Galaxy (NGC 6946), positioned west and south of the galactic core. Discovered by American optician George Willis Ritchey on 19 July 1917, it reached a peak visual magnitude of 13.6. Based on a poor-quality photographic spectrum taken at least a month after peak light by F. G. Pease and Ritchey, it was identified as a type II core-collapse supernova.
A 2018 analysis of the surrounding stellar population by B. F. Williams suggests the progenitor star was most likely 13 million years old, with 15 times the mass of the Sun. In 2021, B. Koplitz and associates inferred an independent estimate of the progenitor mass. A 2020 search for light echoes from the supernova was unsuccessful.
References
Supernovae
Cepheus (constellation) | SN 1917A | Chemistry,Astronomy | 180 |
7,550 | https://en.wikipedia.org/wiki/Craig%20Venter | John Craig Venter (born October 14, 1946) is an American scientist. He is known for leading one of the first draft sequences of the human genome and led the first team to transfect a cell with a synthetic chromosome. Venter founded Celera Genomics, the Institute for Genomic Research (TIGR) and the J. Craig Venter Institute (JCVI). He was the co-founder of Human Longevity Inc. and Synthetic Genomics. He was listed on Time magazine's 2007 and 2008 Time 100 list of the most influential people in the world. In 2010, the British magazine New Statesman listed Craig Venter at 14th in the list of "The World's 50 Most Influential Figures 2010". In 2012, Venter was honored with Dan David Prize for his contribution to genome research. He was elected to the American Philosophical Society in 2013. He is a member of the USA Science and Engineering Festival's advisory board.
Early life and education
Venter was born in Salt Lake City, Utah, the son of Elisabeth and John Venter. His family moved to Millbrae, California during his childhood. In his youth, he did not take his education seriously, preferring to spend his time on the water in boats or surfing. According to his biography, A Life Decoded, he was said never to be a terribly engaged student, having Cs and Ds on his eighth-grade report cards. Venter considered that his behavior in his adolescence was indicative of attention deficit hyperactivity disorder (ADHD), and later found ADHD-linked genetic variants in his own DNA. He graduated from Mills High School. His father died suddenly at age 59 from cardiac arrest, giving him a lifelong awareness of his own mortality. He quotes a saying: "If you want immortality, do something meaningful with your life."
Although he opposed the Vietnam War, Venter was drafted and enlisted in the United States Navy where he worked as a hospital corpsman in the intensive-care ward of a field hospital. He served from 1967 to 1968 at the Naval Support Activity Danang in Vietnam. While in Vietnam, he attempted suicide by swimming out to sea, but changed his mind more than a mile out.
Being confronted with severely injured and dying marines on a daily basis instilled in him a desire to study medicine, although he later switched to biomedical research.
Venter began his college education in 1969 at a community college, College of San Mateo in California, and later transferred to the University of California, San Diego, where he studied under biochemist Nathan O. Kaplan. He received a Bachelor of Science in biochemistry in 1972 and a Doctor of Philosophy in physiology and pharmacology in 1975 from UCSD.
Career
After working as an associate professor, and later as full professor, at the State University of New York at Buffalo, he joined the National Institutes of Health in 1984.
EST controversy
While an employee of the NIH, Venter learned how to identify mRNA and began to learn more about the mRNAs expressed in the human brain. The short cDNA sequence fragments Venter discovered by automated DNA sequencing, he named expressed sequence tags, or ESTs. The NIH Office of Technology Transfer decided to file patent applications on the ESTs discovered by Venter, i.e., genes identified based on studies of mRNA expression in the human brain. When Venter disclosed the NIH strategy during a Congressional hearing, a firestorm of controversy erupted. The NIH later stopped the effort and abandoned the patent applications it had filed, following public outcry.
Human Genome Project
Venter was passionate about the power of genomics to transform healthcare radically. Venter believed that shotgun sequencing was the fastest and most effective way to get useful human genome data. The method was rejected by the Human Genome Project however, since some geneticists felt it would not be accurate enough for a genome as complicated as that of humans, that it would be logistically more difficult, and that it would cost significantly more.
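In outline, shotgun sequencing reads many random, overlapping fragments of a genome and reconstructs the original sequence from the overlaps. The toy sketch below illustrates the idea with a greedy overlap-merge; it is purely illustrative and bears no relation to Celera's actual assembler:

def merge_overlapping(fragments, min_overlap=3):
    """Greedily merge the pair of fragments with the largest
    suffix/prefix overlap until none of at least min_overlap remains.
    A toy illustration of shotgun assembly, not a real assembler."""
    def overlap(a, b):
        # Longest suffix of a that is also a prefix of b.
        for k in range(min(len(a), len(b)), 0, -1):
            if a[-k:] == b[:k]:
                return k
        return 0

    frags = list(fragments)
    while len(frags) > 1:
        k, i, j = max((overlap(a, b), i, j)
                      for i, a in enumerate(frags)
                      for j, b in enumerate(frags) if i != j)
        if k < min_overlap:
            break
        merged = frags[i] + frags[j][k:]
        frags = [f for n, f in enumerate(frags) if n not in (i, j)] + [merged]
    return frags

reads = ["GATTACAGG", "ACAGGTTCA", "GTTCAGCGA"]
print(merge_overlapping(reads))  # -> ['GATTACAGGTTCAGCGA']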
Venter viewed the slow pace of progress in the Human Genome Project as an opportunity to apply his shotgun sequencing method to speed up the sequencing of the human genome, so when a DNA sequencing company offered him funding, he started Celera Genomics. The company planned to profit from its work by creating genomic data to which users could subscribe for a fee. The goal consequently put pressure on the public genome program and spurred several groups to redouble their efforts to produce the full sequence. Venter's effort won him renown as he and his team at Celera shared credit for sequencing the first draft human genome with the publicly funded Human Genome Project.
In 2000, Venter and Francis Collins of the National Institutes of Health and the U.S. public genome project jointly announced the mapping of the human genome, a full three years ahead of the expected end of the public program. The announcement was made along with U.S. President Bill Clinton and UK Prime Minister Tony Blair. Venter and Collins thus shared an award for "Biography of the Year" from A&E Network.
On February 15, 2001, the Human Genome Project consortium published the first Human Genome in the journal Nature, followed one day later by a Celera publication in Science. Despite some claims that shotgun sequencing was in some ways less accurate than the clone-by-clone method chosen by the Human Genome Project, the technique became widely accepted by the scientific community.
Venter was fired by Celera in early 2002. According to his biography, Venter was fired because of a conflict with the main investor, Tony White, specifically barring him from attending the White House ceremony celebrating the achievement of sequencing the human genome.
Global Ocean Sampling Expedition
The Global Ocean Sampling Expedition (GOS) is an ocean exploration genome project with the goal of assessing the genetic diversity in marine microbial communities and to understand their role in nature's fundamental processes. Begun as a Sargasso Sea pilot sampling project in August 2003, the full Expedition was announced by Venter on March 4, 2004. The project, which used Venter's personal yacht, Sorcerer II, started in Halifax, Canada, circumnavigated the globe and returned to the U.S. in January 2006.
Synthetic Genomics
In June 2005, Venter co-founded Synthetic Genomics, a firm dedicated to using modified microorganisms to produce clean fuels and biochemicals. In July 2009, ExxonMobil announced a $600 million collaboration with Synthetic Genomics to research and develop next-generation biofuels.
Venter continues to work on the creation of engineered diatom microalgae for the production of biofuels.
Venter is seeking to patent the first partially synthetic species possibly to be named Mycoplasma laboratorium. There is speculation that this line of research could lead to producing bacteria that have been engineered to perform specific reactions, for example, produce fuels, make medicines, combat global warming, and so on.
In May 2010, a team of scientists led by Venter became the first to create successfully what was described as "synthetic life". This was done by synthesizing a very long DNA molecule containing an entire bacterial genome and introducing it into another cell, analogous to the accomplishment of Eckard Wimmer's group, who synthesized and ligated an RNA virus genome and "booted" it in cell lysate. The single-celled organism contains four "watermarks" written into its DNA to identify it as synthetic and to help trace its descendants (a toy sketch of such an encoding follows the list below). The watermarks include:
Code table for entire alphabet with punctuations
Names of 46 contributing scientists
Three quotations
The secret email address for the cell.
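As a toy illustration of the first watermark (an alphabet-to-DNA code table), one can invent a mapping from characters to base triplets. The table below is entirely hypothetical and is not the code the Venter team actually used:

import itertools
import string

# Hypothetical code table: 4^3 = 64 triplets are enough to cover
# letters, digits, and some punctuation (this mapping is made up).
ALPHABET = string.ascii_uppercase + string.digits + " .,!?'-@"
TRIPLETS = ["".join(t) for t in itertools.product("ACGT", repeat=3)]
ENCODE = dict(zip(ALPHABET, TRIPLETS))
DECODE = {v: k for k, v in ENCODE.items()}

def to_dna(text):
    return "".join(ENCODE[c] for c in text.upper())

def from_dna(dna):
    return "".join(DECODE[dna[i:i + 3]] for i in range(0, len(dna), 3))

message = "SYNTHETIC, NOT NATURAL!"
watermark = to_dna(message)
print(watermark)
print(from_dna(watermark) == message)  # True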
On March 25, 2016, Venter reported the creation of Syn 3.0, a synthetic genome having the fewest genes of any freely living organism (473 genes). Their aim was to strip away all nonessential genes, leaving only the minimal set necessary to support life.
This stripped-down, fast reproducing cell is expected to be a valuable tool for researchers in the field.
In August 2018, Venter retired as chairman of the board, saying he wanted to focus on his work at the J. Craig Venter Institute. He will remain as a scientific advisor to the board.
J. Craig Venter Institute
In 2006 Venter founded the J. Craig Venter Institute (JCVI), a nonprofit which conducts research in synthetic biology. It has facilities in La Jolla and in Rockville, Maryland and employs over 200 people.
In April 2022, Venter sold the La Jolla JCVI facility to the University of California, San Diego for $25 million. Venter will continue to lead a separate nonprofit research group, also known as the J. Craig Venter Institute, and stressed that he is not retiring. The Venter Institute has outgrown its current building after multiple new hires and will be moving into new space in 2025.
Individual human genome
On September 4, 2007, a team led by Sam Levy published one of the first genomes of an individual human—Venter's own DNA sequence. Some of the sequences in Venter's genome are associated with wet earwax, increased risk of antisocial behavior, Alzheimer's and cardiovascular diseases.
The Human Reference Genome Browser is a web application for the navigation and analysis of Venter's recently published genome. The HuRef database consists of approximately 32 million DNA reads sequenced using microfluidic Sanger sequencing, assembled into 4,528 scaffolds and 4.1 million DNA variations identified by genome analysis. These variants include single-nucleotide polymorphisms (SNPs), block substitutions, short and large indels, and structural variations like insertions, deletions, inversions and copy number changes.
The browser enables scientists to navigate the HuRef genome assembly and sequence variations, and to compare it with the NCBI human build 36 assembly in the context of the NCBI and Ensembl annotations. The browser provides a comparative view between NCBI and HuRef consensus sequences, the sequence multi-alignment of the HuRef assembly, Ensembl and dbSNP annotations, HuRef variants, and the underlying variant evidence and functional analysis. The interface also represents the haplotype blocks from which diploid genome sequence can be inferred and the relation of variants to gene annotations. The display of variants and gene annotations are linked to external public resources including dbSNP, Ensembl, Online Mendelian Inheritance in Man (OMIM) and Gene Ontology (GO).
Users can search the HuRef genome using HUGO gene names, Ensembl and dbSNP identifiers, HuRef contig or scaffold locations, or NCBI chromosome locations. Users can then easily and quickly browse any genomic region via the simple and intuitive pan and zoom controls; furthermore, data relevant to specific loci can be exported for further analysis.
Human Longevity, Inc.
On March 4, 2014, Venter and co-founders Peter Diamandis and Robert Hariri announced the formation of Human Longevity, Inc., a company focused on extending the healthy, "high performance" human lifespan. At the time of the announcement the company had already raised $70 million in venture financing, which was expected to last 18 months. Venter served as the chairman and chief executive officer (CEO) until May 2018, when he retired. The company said that it plans to sequence 40,000 genomes per year, with an initial focus on cancer genomes and the genomes of cancer patients.
Human Longevity filed a lawsuit in 2018 against Venter, accusing him of stealing trade secrets. Allegations were made stating that Venter had departed with his company computer that contained valuable information that could be used to start a competing business. The lawsuit was ultimately dismissed by a California judge on the basis that Human Longevity were unable to present a case that met the legal threshold required for a company, or individual, to sue when its trade secrets have been stolen.
Human Longevity's mission is to extend healthy human lifespan by the use of high-resolution big data diagnostics from genomics, metabolomics, microbiomics, and proteomics, and the use of stem cell therapy.
Published books
Venter is the author of three books, the first of which is an autobiography titled A Life Decoded. In Venter's second book, Life at the Speed of Light, he announced his theory that this is the generation in which there appears to be a dovetailing of the two previously diverse fields of science represented by computer programming and the genetic programming of life by DNA sequencing. He was applauded for his position on this by futurist Ray Kurzweil. Venter's most recent book, co-authored by David Ewing Duncan, The Voyage of Sorcerer II: The Expedition that Unlocked the Secrets of the Ocean’s Microbiome, details the Global Ocean Sampling Expedition, spanning a 15-year period during which microbes from the world's oceans were collected and their DNA sequenced.
Personal life
After a 12-year marriage to Barbara Rae-Venter, with whom he had a son, Christopher, he married Claire M. Fraser, remaining married to her until 2005. In late 2008 he married Heather Kowalski. They live in the La Jolla neighborhood of San Diego, California. Venter is an atheist.
Venter was 75 when he sold his main research building to UCSD in 2022; the institute had outgrown the space and will be moving to a new facility in 2025. The Venter Institute campus in Rockville, Maryland also continues to expand. He said he has no intention of retiring. He has a home in La Jolla and a ranch in Borrego Springs, California, as well as homes in two small towns in Maine. He indulges in two passions: sailing and flying a Cirrus 22T plane, which he calls "the ultimate freedom".
In popular culture
Venter has been the subject of articles in several magazines, including Wired, The Economist, Australian science magazine Cosmos, and The Atlantic.
Venter appears in the two-hour 2001 NOVA special, "Cracking the code of life".
On May 16, 2004, Venter gave the commencement speech at Boston University.
On December 4, 2007, Venter gave the Dimbleby lecture for the BBC in London.
Venter gave the Distinguished Public Lecture during the 2007 Michaelmas Term at the James Martin 21st Century School at Oxford University. Its title was "Genomics – From humans to the environment".
Venter delivered the 2008 convocation speech for Faculty of Science honours and specialization students at the University of Alberta.
In February 2008, he gave a speech about his current work at the TED conference.
Venter was featured in Time magazine's "The Top 10 Everything of 2008" article. Number three in 2008's Top 10 Scientific Discoveries was a piece outlining his work stitching together the 582,000 base pairs necessary to invent the genetic information for a whole new bacterium.
On May 20, 2010, Venter announced the creation of first self-replicating semi-synthetic bacterial cell.
In the June 2011 issue of Men's Journal, Venter was featured as the "Survival Skills" celebrity of the month. He shared various anecdotes and advice, including stories of his time in Vietnam, as well as mentioning a bout with melanoma on his back, which subsequently resulted in his "giving a pound of flesh" to surgery.
In May 2011, Venter was the commencement speaker at the 157th commencement of Syracuse University.
In May 2017, Venter was the guest of honor and keynote speaker at the inauguration ceremony of the Center for Systems Biology Dresden.
Awards and nominations
1996: Golden Plate Award of the American Academy of Achievement
1999: Newcomb Cleveland Prize
2000: Jacob Heskel Gabbay Award in Biotechnology and Medicine
2001: Biotechnology Heritage Award with Francis Collins, from the Biotechnology Industry Organization (BIO) and the Chemical Heritage Foundation
2002: Association for Molecular Pathology Award for Excellence in Molecular Diagnostics
2007: On May 10, 2007, Venter was awarded an honorary doctorate from Arizona State University, and on October 24 of the same year, he received an honorary doctorate from Imperial College London.
2008: Double Helix Medal from Cold Spring Harbor Laboratory
2008: Kistler Prize from Foundation For the Future for genome research
2008: ENI award for Research & Environment
2008: National Medal of Science from President Obama
2010: On May 8, 2010, Venter received an honorary doctor of science degree from Clarkson University for his work on the human genome.
2011: On April 21, 2011, Venter received the 2011 Benjamin Rush Medal from William & Mary School of Law.
2011: Dickson Prize in Medicine
2020: Edogawa NICHE Prize for his contribution to research and development pertaining to the Human genome
Works
Venter has authored over 200 publications in scientific journals.
See also
Artificial gene synthesis
Full genome sequencing
Genetic testing
Genome: The Autobiography of a Species in 23 Chapters
Personal genomics
Pharmacogenomics
Predictive medicine
Synthetic Organism Designer
References
Further reading
External links
Human Longevity, Inc.
HuRef Genome Browser
J. Craig Venter Institute
Sorcerer II Expedition
Synthetic Genomics
The Institute for Genomic Research (TIGR)
Media
Cracking the code to life, The Guardian, October 8, 2007
Craig Venter interview, Wired Science, December 2007 (video)
Video of interview/discussion with Craig Venter by Carl Zimmer on Bloggingheads.tv
– TED (Technology Entertainment Design) conference (video)
Webcast of Venter talk 'Genomics: From humans to the environment' at The James Martin 21st Century School
The Richard Dimbleby Lecture 2007 – Dr. J. Craig Venter – A DNA Driven World
A short course on synthetic genomics. Edge Master Class 2009
1946 births
Living people
American atheists
American chairpersons of corporations
American geneticists
American technology chief executives
American technology company founders
Biotechnologists
Human Genome Project scientists
Leeuwenhoek Medal winners
Life extensionists
Members of the United States National Academy of Sciences
Military personnel from Salt Lake City
Researchers of artificial life
Scientists from Salt Lake City
United States Navy corpsmen
United States Navy personnel of the Vietnam War
University at Buffalo faculty
University of California, San Diego alumni
Members of the National Academy of Medicine | Craig Venter | Engineering | 3,762 |
44,291,624 | https://en.wikipedia.org/wiki/Billings%20Refinery%20%28Par%20Pacific%29 | The Billings Refinery is an American oil refinery located in Billings, Montana, owned and operated by Par Pacific Holdings which took over operations from ExxonMobil on June 1, 2023. ExxonMobil previously announced on October 20, 2022, that it would sell the refinery to Par Pacific with the sale expected to complete in the second quarter of 2023.
The complex is capable of refining crude oil.
See also
List of oil refineries
References
External links
Energy infrastructure in Montana
Buildings and structures in Billings, Montana
Oil refineries in the United States | Billings Refinery (Par Pacific) | Chemistry | 117 |
14,777,608 | https://en.wikipedia.org/wiki/International%20Conference%20on%20Remote%20Engineering%20and%20Virtual%20Instrumentation | International Conference on Remote Engineering and Virtual Instrumentation (REV) is an annual IAOE conference.
REV is an annual conference covering topics on online & remote engineering, virtual instrumentation and applications. Like other conferences, REV offers various tracks and simultaneous sessions, tutorials and workshops.
The first REV was held in Villach, Austria in 2004. It operates under the auspices of the International Association of Online Engineering (IAOE).
REV’s venue changes every year, and the categories of its program vary. Historically REV has combined the presentation of academic papers with comparatively practical experience reports, panels, workshops and tutorials.
Locations and organizers
External links
Official website
Computer science conferences | International Conference on Remote Engineering and Virtual Instrumentation | Technology | 133 |
26,404 | https://en.wikipedia.org/wiki/Risk%20management | Risk management is the identification, evaluation, and prioritization of risks, followed by the minimization, monitoring, and control of the impact or probability of those risks occurring. Risks can come from various sources (i.e., threats) including uncertainty in international markets, political instability, dangers of project failures (at any phase in design, development, production, or sustaining of life-cycles), legal liabilities, credit risk, accidents, natural causes and disasters, deliberate attack from an adversary, or events of uncertain or unpredictable root-cause.
There are two types of events, viz. risks and opportunities: negative events are classified as risks, while positive events are classified as opportunities. Risk management standards have been developed by various institutions, including the Project Management Institute, the National Institute of Standards and Technology, actuarial societies, and the International Organization for Standardization. Methods, definitions and goals vary widely according to whether the risk management method is in the context of project management, security, engineering, industrial processes, financial portfolios, actuarial assessments, or public health and safety. Certain risk management standards have been criticized for having no measurable improvement on risk, even though confidence in estimates and decisions seems to increase.
Strategies to manage threats (uncertainties with negative consequences) typically include avoiding the threat, reducing the negative effect or probability of the threat, transferring all or part of the threat to another party, and even retaining some or all of the potential or actual consequences of a particular threat. The opposite of these strategies can be used to respond to opportunities (uncertain future states with benefits).
As a professional role, a risk manager will "oversee the organization's comprehensive insurance and risk management program, assessing and identifying risks that could impede the reputation, safety, security, or financial success of the organization", and then develop plans to minimize and/or mitigate any negative (financial) outcomes. Risk analysts support the technical side of the organization's risk management approach: once risk data has been compiled and evaluated, analysts share their findings with their managers, who use those insights to decide among possible solutions.
See also Chief Risk Officer and internal audit.
Introduction
Risk is defined as the possibility that an event will occur that adversely affects the achievement of an objective; uncertainty, therefore, is a key aspect of risk. Risk management has appeared in scientific and management literature since the 1920s. It became a formal science in the 1950s, when articles and books with "risk management" in the title also began to appear in library searches. Most research was initially related to finance and insurance. One popular standard clarifying the vocabulary used in risk management is ISO Guide 31073:2022, "Risk management — Vocabulary".
Ideally in risk management, a prioritization process is followed whereby the risks with the greatest loss (or impact) and the greatest probability of occurring are handled first, and risks with lower probability of occurrence and lower loss are handled in descending order. In practice, assessing overall risk can be difficult, and an organisation has to balance the resources used to mitigate risks with a higher probability but lower loss against risks with higher loss but lower probability. Opportunity cost represents a unique challenge for risk managers: it can be difficult to determine when to put resources toward risk management and when to use those resources elsewhere. Again, ideal risk management optimises resource usage (spending, manpower, etc.) while minimizing the negative effects of risks.
Risks vs. opportunities
Opportunities first appear in academic research and management books in the 1990s; the first draft of the PMBoK (Project Management Body of Knowledge) from 1987 does not mention opportunities at all.
Modern project management schools recognize the importance of opportunities. Opportunities have been included in project management literature since the 1990s, e.g. in the PMBoK, and became a significant part of project risk management in the 2000s, when articles titled "opportunity management" also began to appear in library searches. Opportunity management thus became an important part of risk management.
Modern risk management theory deals with any type of external events, positive and negative. Positive risks are called opportunities. Similarly to risks, opportunities have specific mitigation strategies: exploit, share, enhance, ignore.
In practice, risks are considered "usually negative". Risk-related research and practice focus significantly more on threats than on opportunities. This can lead to negative phenomena such as target fixation.
Method
For the most part, these methods consist of the following elements, performed, more or less, in the following order:
Identify the threats.
Assess the vulnerability of critical assets to specific threats.
Determine the risk (i.e. the expected likelihood and consequences of specific attacks on specific assets).
Identify ways to reduce those risks.
Prioritize risk reduction measures.
The Risk management knowledge area, as defined by the Project Management Body of Knowledge PMBoK, consists of the following processes:
Plan Risk Management – defining how to conduct risk management activities.
Identify Risks – identifying individual project risks as well as sources.
Perform Qualitative Risk Analysis – prioritizing individual project risks by assessing probability and impact.
Perform Quantitative Risk Analysis – numerical analysis of the effects.
Plan Risk Responses – developing options, selecting strategies and actions.
Implement Risk Responses – implementing agreed-upon risk response plans. In the 4th Ed. of PMBoK, this process was included as an activity in the Monitor and Control process, but was later separated as a distinct process in PMBoK 6th Ed.
Monitor Risks – monitoring the implementation. This process was known as Monitor and Control in the previous PMBoK 4th Ed., when it also included the "Implement Risk Responses" process.
Principles
The International Organization for Standardization (ISO) identifies the following principles for risk management:
Create value – resources expended to mitigate risk should be less than the consequence of inaction.
Be an integral part of organizational processes.
Be part of the decision-making process.
Explicitly address uncertainty and assumptions.
Use a systematic and structured process.
Use the best available information.
Be flexible.
Take human factors into account.
Be transparent and inclusive.
Be dynamic, iterative and responsive to change.
Be capable of continual improvement and enhancement.
Continual reassessment.
Mild versus wild risk
Benoit Mandelbrot distinguished between "mild" and "wild" risk and argued that risk assessment and management must be fundamentally different for the two types of risk. Mild risk follows normal or near-normal probability distributions, is subject to regression to the mean and the law of large numbers, and is therefore relatively predictable. Wild risk follows fat-tailed distributions, e.g., Pareto or power-law distributions, is subject to regression to the tail (infinite mean or variance, rendering the law of large numbers invalid or ineffective), and is therefore difficult or impossible to predict. A common error in risk assessment and management is to underestimate the wildness of risk, assuming risk to be mild when in fact it is wild, which must be avoided if risk assessment and management are to be valid and reliable, according to Mandelbrot.
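The distinction can be made concrete numerically: running means of draws from a normal distribution settle quickly (the law of large numbers at work), while running means of draws from a Pareto distribution with tail index below 1 never settle, because the theoretical mean is infinite. A small sketch, with distribution parameters chosen purely for illustration:

import numpy as np

rng = np.random.default_rng(42)
n = 200_000

mild = rng.normal(loc=1.0, scale=1.0, size=n)  # "mild": near-normal risk
wild = 1.0 + rng.pareto(0.8, size=n)           # "wild": tail index 0.8, infinite mean

for name, x in (("mild", mild), ("wild", wild)):
    running_mean = np.cumsum(x) / np.arange(1, n + 1)
    checkpoints = [10**k for k in range(2, 6)]  # after 100, 1k, 10k, 100k draws
    print(name, [round(float(running_mean[i - 1]), 2) for i in checkpoints])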
Process
According to the standard ISO 31000, "Risk management – Guidelines", the process of risk management consists of several steps as follows:
Establishing the context
This involves:
observing the context (the environment of the organization)
the social scope of risk management
the identity and objectives of stakeholders
the basis upon which risks will be evaluated, and any constraints.
defining a framework for the activity and an agenda for identification
developing an analysis of risks involved in the process
mitigation or solution of risks using available technological, human and organizational resources
Identification
After establishing the context, the next step in the process of managing risk is to identify potential risks. Risks are about events that, when triggered, cause problems or benefits. Hence, risk identification can start with the source of problems and those of competitors (benefit), or with the problem's consequences.
Source analysis – Risk sources may be internal or external to the system that is the target of risk management (use mitigation instead of management since by its own definition risk deals with factors of decision-making that cannot be managed).
Some examples of risk sources are: stakeholders of a project, employees of a company or the weather over an airport.
Problem analysis – Risks are related to identified threats. For example: the threat of losing money, the threat of abuse of confidential information or the threat of human errors, accidents and casualties. The threats may exist with various entities, most important with shareholders, customers and legislative bodies such as the government.
When either source or problem is known, the events that a source may trigger or the events that can lead to a problem can be investigated. For example: stakeholders withdrawing during a project may endanger funding of the project; confidential information may be stolen by employees even within a closed network; lightning striking an aircraft during takeoff may make all people on board immediate casualties.
The chosen method of identifying risks may depend on culture, industry practice and compliance. The identification methods are formed by templates or the development of templates for identifying source, problem or event. Common risk identification methods are:
Objectives-based risk identification – Organizations and project teams have objectives. Any event that may prevent an objective from being achieved is identified as risk.
Scenario-based risk identification – In scenario analysis different scenarios are created. The scenarios may be the alternative ways to achieve an objective, or an analysis of the interaction of forces in, for example, a market or battle. Any event that triggers an undesired scenario alternative is identified as risk – see Futures Studies for methodology used by Futurists.
Taxonomy-based risk identification – The taxonomy in taxonomy-based risk identification is a breakdown of possible risk sources. Based on the taxonomy and knowledge of best practices, a questionnaire is compiled. The answers to the questions reveal risks.
Common-risk checking – In several industries, lists with known risks are available. Each risk in the list can be checked for application to a particular situation.
Risk charting – This method combines the above approaches by listing resources at risk, threats to those resources, modifying factors which may increase or decrease the risk and consequences it is wished to avoid. Creating a matrix under these headings enables a variety of approaches. One can begin with resources and consider the threats they are exposed to and the consequences of each. Alternatively one can start with the threats and examine which resources they would affect, or one can begin with the consequences and determine which combination of threats and resources would be involved to bring them about.
Assessment
Once risks have been identified, they must then be assessed as to their potential severity of impact (generally a negative impact, such as damage or loss) and to the probability of occurrence.
These quantities can be either simple to measure, in the case of the value of a lost building, or impossible to know for sure in the case of an unlikely event, the probability of occurrence of which is unknown. Therefore, in the assessment process it is critical to make the best educated decisions in order to properly prioritize the implementation of the risk management plan.
Even a short-term positive improvement can have long-term negative impacts. Take the "turnpike" example. A highway is widened to allow more traffic. More traffic capacity leads to greater development in the areas surrounding the improved traffic capacity. Over time, traffic thereby increases to fill available capacity. Turnpikes thereby need to be expanded in a seemingly endless cycle. There are many other engineering examples where expanded capacity (to do any function) is soon filled by increased demand. Since expansion comes at a cost, the resulting growth could become unsustainable without forecasting and management.
The fundamental difficulty in risk assessment is determining the rate of occurrence since statistical information is not available on all kinds of past incidents and is particularly scanty in the case of catastrophic events, simply because of their infrequency. Furthermore, evaluating the severity of the consequences (impact) is often quite difficult for intangible assets. Asset valuation is another question that needs to be addressed. Thus, best educated opinions and available statistics are the primary sources of information. Nevertheless, risk assessment should produce such information for senior executives of the organization that the primary risks are easy to understand and that the risk management decisions may be prioritized within overall company goals. Thus, there have been several theories and attempts to quantify risks. Numerous different risk formulae exist, but perhaps the most widely accepted formula for risk quantification is: "Rate (or probability) of occurrence multiplied by the impact of the event equals risk magnitude."
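To make the formula concrete, here is a minimal sketch that ranks a handful of risks by magnitude; the risk names and figures are hypothetical, chosen only for illustration.

```python
# Minimal sketch: rank risks by magnitude = probability x impact.
# All names and figures are hypothetical, for illustration only.

risks = [
    {"name": "server outage",  "probability": 0.10, "impact": 50_000},
    {"name": "data breach",    "probability": 0.02, "impact": 400_000},
    {"name": "supplier delay", "probability": 0.30, "impact": 20_000},
]

for r in risks:
    r["magnitude"] = r["probability"] * r["impact"]

# Highest-magnitude risks first, to help prioritize mitigation effort.
for r in sorted(risks, key=lambda r: r["magnitude"], reverse=True):
    print(f'{r["name"]:15s} magnitude = {r["magnitude"]:>10,.0f}')
```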
Risk options
Risk mitigation measures are usually formulated according to one or more of the following major risk options, which are:
Design a new business process with adequate built-in risk control and containment measures from the start.
Periodically re-assess risks that are accepted in ongoing processes as a normal feature of business operations and modify mitigation measures.
Transfer risks to an external agency (e.g. an insurance company)
Avoid risks altogether (e.g. by closing down a particular high-risk business area)
Later research has shown that the financial benefits of risk management are less dependent on the formula used but are more dependent on the frequency and how risk assessment is performed.
In business it is imperative to be able to present the findings of risk assessments in financial, market, or schedule terms. Robert Courtney Jr. (IBM, 1970) proposed a formula for presenting risks in financial terms. The Courtney formula was accepted as the official risk analysis method for the US governmental agencies. The formula proposes calculation of ALE (annualized loss expectancy) and compares the expected loss value to the security control implementation costs (cost–benefit analysis).
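The ALE comparison can be illustrated with a short sketch. The figures and helper function below are invented for the example; only the relationship ALE = single loss expectancy × annualized rate of occurrence comes from the standard formulation.

```python
# Sketch of an ALE-based cost-benefit check (illustrative numbers only).
def annualized_loss_expectancy(single_loss_expectancy, annual_rate_of_occurrence):
    return single_loss_expectancy * annual_rate_of_occurrence

ale_before = annualized_loss_expectancy(200_000, 0.5)  # no control in place
ale_after  = annualized_loss_expectancy(200_000, 0.1)  # with the control
control_cost_per_year = 30_000

net_benefit = (ale_before - ale_after) - control_cost_per_year
print(f"ALE before: {ale_before:,.0f}, after: {ale_after:,.0f}, "
      f"net benefit of control: {net_benefit:,.0f}")
# A positive net benefit suggests the control is cost-justified.
```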
Potential risk treatments
Planning for risk management uses four essential techniques. Under the acceptance technique, the business intentionally assumes risks without financial protections in the hope that possible gains will exceed prospective losses. The transfer approach shields the business from losses by shifting risks to a third party, frequently in exchange for a fee. By choosing not to participate in high-risk ventures, the avoidance strategy prevents losses but also forgoes opportunities. Finally, the reduction approach lowers risks by implementing strategies like insurance, which provides protection for a variety of asset classes and guarantees reimbursement in the event of losses.
Once risks have been identified and assessed, all techniques to manage the risk fall into one or more of these four major categories:
Avoidance (eliminate, withdraw from or not become involved)
Reduction (optimize – mitigate)
Sharing (transfer – outsource or insure)
Retention (accept and budget)
Ideal use of these risk control strategies may not be possible. Some of them may involve trade-offs that are not acceptable to the organization or person making the risk management decisions. Another source, from the US Department of Defense (see link), Defense Acquisition University, calls these categories ACAT, for Avoid, Control, Accept, or Transfer. This use of the ACAT acronym is reminiscent of another ACAT (for Acquisition Category) used in US Defense industry procurements, in which Risk Management figures prominently in decision making and planning.
Similarly to risks, opportunities have specific response strategies: exploit, share, enhance, ignore.
Risk avoidance
This includes not performing an activity that could present risk. Refusing to purchase a property or business to avoid legal liability is one such example; another is avoiding airplane flights for fear of hijacking. Avoidance may seem like the answer to all risks, but avoiding risks also means losing out on the potential gain that accepting (retaining) the risk may have allowed. Not entering a business to avoid the risk of loss also avoids the possibility of earning profits. Increasing risk regulation in hospitals has led to avoidance of treating higher-risk conditions, in favor of patients presenting with lower risk.
Risk reduction
Risk reduction or "optimization" involves reducing the severity of the loss or the likelihood of the loss from occurring. For example, sprinklers are designed to put out a fire to reduce the risk of loss by fire. This method may cause a greater loss by water damage and therefore may not be suitable. Halon fire suppression systems may mitigate that risk, but the cost may be prohibitive as a strategy.
Acknowledging that risks can be positive or negative, optimizing risks means finding a balance between negative risk and the benefit of the operation or activity; and between risk reduction and effort applied. By effectively applying Health, Safety and Environment (HSE) management standards, organizations can achieve tolerable levels of residual risk.
Modern software development methodologies reduce risk by developing and delivering software incrementally. Early methodologies suffered from the fact that they only delivered software in the final phase of development; any problems encountered in earlier phases meant costly rework and often jeopardized the whole project. By developing in iterations, software projects can limit effort wasted to a single iteration.
Outsourcing could be an example of risk sharing strategy if the outsourcer can demonstrate higher capability at managing or reducing risks. For example, a company may outsource only its software development, the manufacturing of hard goods, or customer support needs to another company, while handling the business management itself. This way, the company can concentrate more on business development without having to worry as much about the manufacturing process, managing the development team, or finding a physical location for a center. Implementing controls can also be an option in reducing risk: controls that either detect causes of unwanted events prior to the consequences occurring during use of the product, or detect the root causes of unwanted failures that the team can then avoid. Controls may focus on management or decision-making processes. All of these may help to make better decisions concerning risk.
Risk sharing
Briefly defined as "sharing with another party the burden of loss or the benefit of gain, from a risk, and the measures to reduce a risk."
The term 'risk transfer' is often used in place of risk-sharing in the mistaken belief that you can transfer a risk to a third party through insurance or outsourcing. In practice, if the insurance company or contractor goes bankrupt or ends up in court, the original risk is likely to still revert to the first party. As such, in the terminology of practitioners and scholars alike, the purchase of an insurance contract is often described as a "transfer of risk." However, technically speaking, the buyer of the contract generally retains legal responsibility for the losses "transferred", meaning that insurance may be described more accurately as a post-event compensatory mechanism. For example, a personal injuries insurance policy does not transfer the risk of a car accident to the insurance company. The risk still lies with the policyholder, namely the person who has been in the accident. The insurance policy simply provides that if an accident (the event) occurs involving the policyholder, then some compensation may be payable to the policyholder that is commensurate with the suffering/damage.
Methods of managing risk fall into multiple categories. Risk-retention pools are technically retaining the risk for the group, but spreading it over the whole group involves transfer among individual members of the group. This is different from traditional insurance, in that no premium is exchanged between members of the group upfront, but instead, losses are assessed to all members of the group.
Risk retention
Risk retention involves accepting the loss, or benefit of gain, from a risk when the incident occurs. True self-insurance falls in this category. Risk retention is a viable strategy for small risks where the cost of insuring against the risk would be greater over time than the total losses sustained. All risks that are not avoided or transferred are retained by default. This includes risks that are so large or catastrophic that either they cannot be insured against or the premiums would be infeasible. War is an example since most property and risks are not insured against war, so the loss attributed to war is retained by the insured. Also any amounts of potential loss (risk) over the amount insured is retained risk. This may also be acceptable if the chance of a very large loss is small or if the cost to insure for greater coverage amounts is so great that it would hinder the goals of the organization too much.
Risk management plan
Select appropriate controls or countermeasures to mitigate each risk. Risk mitigation needs to be approved by the appropriate level of management. For instance, a risk concerning the image of the organization should have top management decision behind it whereas IT management would have the authority to decide on computer virus risks.
The risk management plan should propose applicable and effective security controls for managing the risks. For example, an observed high risk of computer viruses could be mitigated by acquiring and implementing antivirus software. A good risk management plan should contain a schedule for control implementation and responsible persons for those actions. There are four basic steps in a risk management plan: threat assessment, vulnerability assessment, impact assessment and risk mitigation strategy development.
According to ISO/IEC 27001, the stage immediately after completion of the risk assessment phase consists of preparing a Risk Treatment Plan, which should document the decisions about how each of the identified risks should be handled. Mitigation of risks often means selection of security controls, which should be documented in a Statement of Applicability, which identifies which particular control objectives and controls from the standard have been selected, and why.
Implementation
Implementation follows all of the planned methods for mitigating the effect of the risks. Purchase insurance policies for the risks that it has been decided to transfer to an insurer, avoid all risks that can be avoided without sacrificing the entity's goals, reduce others, and retain the rest.
Review and evaluation of the plan
Initial risk management plans will never be perfect. Practice, experience, and actual loss results will necessitate changes in the plan and contribute information to allow possible different decisions to be made in dealing with the risks being faced.
Risk analysis results and management plans should be updated periodically. There are two primary reasons for this:
to evaluate whether the previously selected security controls are still applicable and effective
to evaluate possible changes in risk levels in the business environment; information risks, for example, change rapidly.
Areas
Enterprise
Enterprise risk management (ERM) defines risk as those possible events or circumstances that can have negative influences on the enterprise in question,
where the impact can be on the very existence, the resources (human and capital), the products and services, or the customers of the enterprise, as well as external impacts on society, markets, or the environment.
There are various defined frameworks here, where every probable risk can have a pre-formulated plan to deal with its possible consequences (to ensure contingency if the risk becomes a liability).
Managers thus analyze and monitor both the internal and external environment facing the enterprise, addressing business risk generally, and any impact on the enterprise achieving its strategic goals.
ERM thus overlaps various other disciplines - operational risk management, financial risk management etc. - but is differentiated by its strategic and long-term focus. ERM systems usually focus on safeguarding reputation, acknowledging its significant role in comprehensive risk management strategies.
Finance
As applied to finance, risk management concerns the techniques and practices for measuring, monitoring and controlling the market, credit and operational risk on a firm's balance sheet, due to a bank's credit and trading exposure, or regarding a fund manager's portfolio value.
A traditional measure in banking is value at risk (VaR) – the possible loss due to adverse credit and market events. Banks seek to hedge these risks, and will hold risk capital on the net position. The Basel III framework governs the parallel regulatory capital requirements, including for operational risk.
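As a concept illustration (not any bank's actual methodology), a crude historical VaR can be read off as an empirical quantile of past portfolio returns; the returns below are made up.

```python
# Minimal historical VaR sketch: 1-day 99% VaR from past returns.
# Returns are hypothetical; real VaR models are far more involved, and a
# real implementation would interpolate rather than pick a raw quantile.
returns = [-0.021, 0.004, -0.013, 0.009, -0.032, 0.011, -0.007,
           0.002, -0.018, 0.015, -0.005, 0.008, -0.026, 0.003]

portfolio_value = 1_000_000
confidence = 0.99

losses = sorted(-r for r in returns)       # losses as positive numbers
index = int(confidence * len(losses)) - 1  # crude empirical quantile position
var = losses[max(index, 0)] * portfolio_value

print(f"1-day {confidence:.0%} historical VaR: {var:,.0f}")
```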
Fund managers employ various strategies to protect their fund value; these are chosen given their mandate and benchmark.
Non-financial firms focus on business risk more generally, overlapping enterprise risk management: i.e. those events and occurrences which could negatively impact cash flow or profitability, and hence result in a loss of business value or a decline in share price.
Contractual risk management
The concept of "contractual risk management" emphasises the use of risk management techniques in contract deployment, i.e. managing the risks which are accepted through entry into a contract. Norwegian academic Petri Keskitalo defines "contractual risk management" as "a practical, proactive and systematical contracting method that uses contract planning and governance to manage risks connected to business activities". In an article by Samuel Greengard published in 2010, two US legal cases are mentioned which emphasise the importance of having a strategy for dealing with risk:
UDC v. CH2M Hill, which deals with the risk to a professional advisor who signs an indemnification provision including acceptance of a duty to defend, who may thereby pick up the legal costs of defending a client subject to a claim from a third party,
Witt v. La Gorce Country Club, which deals with the effectiveness of a limitation of liability clause, which may, in certain jurisdictions, be found to be ineffective.
Greengard recommends using industry-standard contract language as much as possible to reduce risk, and relying on clauses which have been in use and subject to established court interpretation over a number of years.
Customs
Customs risk management is concerned with the risks which arise within the context of international trade and have a bearing on safety and security, including the risk that illicit drugs and counterfeit goods can pass across borders and the risk that shipments and their contents are incorrectly declared. The European Union has adopted a Customs Risk Management Framework (CRMF) applicable across the union and throughout its member states, whose aims include establishing a common level of customs control protection and a balance between the objectives of safe customs control and the facilitation of legitimate trade. Two events which prompted the European Commission to review customs risk management policy in 2012-13 were the September 11 attacks of 2001 and the 2010 transatlantic aircraft bomb plot involving packages being sent from Yemen to the United States, referred to by the Commission as "the October 2010 (Yemen) incident".
Memory institutions (museums, libraries and archives)
Enterprise security
ESRM is a security program management approach that links security activities to an enterprise's mission and business goals through risk management methods. The security leader's role in ESRM is to manage risks of harm to enterprise assets in partnership with the business leaders whose assets are exposed to those risks. ESRM involves educating business leaders on the realistic impacts of identified risks, presenting potential strategies to mitigate those impacts, then enacting the option chosen by the business in line with accepted levels of business risk tolerance.
Medical devices
For medical devices, risk management is a process for identifying, evaluating and mitigating risks associated with harm to people and damage to property or the environment. Risk management is an integral part of medical device design and development, production processes and evaluation of field experience, and is applicable to all types of medical devices. The evidence of its application is required by most regulatory bodies such as the US FDA. The management of risks for medical devices is described by the International Organization for Standardization (ISO) in ISO 14971:2019, Medical Devices—The application of risk management to medical devices, a product safety standard. The standard provides a process framework and associated requirements for management responsibilities, risk analysis and evaluation, risk controls and lifecycle risk management. Guidance on the application of the standard is available via ISO/TR 24971:2020.
The European version of the risk management standard was updated in 2009 and again in 2012 to refer to the Medical Devices Directive (MDD) and Active Implantable Medical Device Directive (AIMDD) revision in 2007, as well as the In Vitro Medical Device Directive (IVDD). The requirements of EN 14971:2012 are nearly identical to ISO 14971:2007. The differences include three "(informative)" Z Annexes that refer to the new MDD, AIMDD, and IVDD. These annexes indicate content deviations that include the requirement for risks to be reduced as far as possible, and the requirement that risks be mitigated by design and not by labeling on the medical device (i.e., labeling can no longer be used to mitigate risk).
Typical risk analysis and evaluation techniques adopted by the medical device industry include hazard analysis, fault tree analysis (FTA), failure mode and effects analysis (FMEA), hazard and operability study (HAZOP), and risk traceability analysis for ensuring risk controls are implemented and effective (i.e. tracking risks identified to product requirements, design specifications, verification and validation results etc.). FTA analysis requires diagramming software. FMEA analysis can be done using a spreadsheet program. There are also integrated medical device risk management solutions.
Through a draft guidance, the FDA has introduced another method named "Safety Assurance Case" for medical device safety assurance analysis. The safety assurance case is a structured argument, supported by a body of evidence, that provides a compelling, comprehensible and valid case that a system is safe for a given application in a given environment. With the guidance, a safety assurance case is expected for safety-critical devices (e.g. infusion devices) as part of the pre-market clearance submission, e.g. 510(k). In 2013, the FDA introduced another draft guidance expecting medical device manufacturers to submit cybersecurity risk analysis information.
Project management
Project risk management must be considered at the different phases of acquisition. At the beginning of a project, the advancement of technical developments, or threats presented by a competitor's projects, may cause a risk or threat assessment and subsequent evaluation of alternatives (see Analysis of Alternatives). Once a decision is made, and the project begun, more familiar project management applications can be used:
Planning how risk will be managed in the particular project. Plans should include risk management tasks, responsibilities, activities and budget.
Assigning a risk officer – a team member other than a project manager who is responsible for foreseeing potential project problems. A typical characteristic of a risk officer is healthy skepticism.
Maintaining a live project risk database. Each risk should have the following attributes: opening date, title, short description, probability and importance. Optionally a risk may have an assigned person responsible for its resolution and a date by which the risk must be resolved (a minimal sketch of such a record appears after this list).
Creating an anonymous risk-reporting channel. Each team member should have the possibility to report risks that he/she foresees in the project.
Preparing mitigation plans for risks that are chosen to be mitigated. The purpose of the mitigation plan is to describe how this particular risk will be handled – what, when, by whom and how will it be done to avoid it or minimize consequences if it becomes a liability.
Summarizing planned and faced risks, effectiveness of mitigation activities, and effort spent for the risk management.
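A minimal sketch of the risk database record described above; the field names follow the listed attributes, and everything else (class name, scales, sample data) is hypothetical.

```python
# Minimal sketch of a project risk register entry (field names illustrative).
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Risk:
    opening_date: date
    title: str
    description: str
    probability: float                 # 0.0 - 1.0
    importance: int                    # e.g. 1 (low) to 5 (high)
    owner: Optional[str] = None        # optional: person responsible
    resolve_by: Optional[date] = None  # optional: resolution deadline

register = [
    Risk(date(2024, 1, 15), "Key supplier delay",
         "Component X may ship late.", probability=0.3, importance=4,
         owner="A. Gomez", resolve_by=date(2024, 3, 1)),
]

# Rank open risks by expected severity (probability x importance).
for r in sorted(register, key=lambda r: r.probability * r.importance,
                reverse=True):
    print(r.title, r.probability * r.importance)
```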
Megaprojects (infrastructure)
Megaprojects (sometimes also called "major programs") are large-scale investment projects, typically costing more than $1 billion per project. Megaprojects include major bridges, tunnels, highways, railways, airports, seaports, power plants, dams, wastewater projects, coastal flood protection schemes, oil and natural gas extraction projects, public buildings, information technology systems, aerospace projects, and defense systems. Megaprojects have been shown to be particularly risky in terms of finance, safety, and social and environmental impacts. Risk management is therefore particularly pertinent for megaprojects and special methods and special education have been developed for such risk management.
Natural disasters
It is important to assess risk in regard to natural disasters like floods, earthquakes, and so on. Outcomes of natural disaster risk assessment are valuable when considering future repair costs, business interruption losses and other downtime, effects on the environment, insurance costs, and the proposed costs of reducing the risk. The Sendai Framework for Disaster Risk Reduction is a 2015 international accord that has set goals and targets for disaster risk reduction in response to natural disasters. There are regular International Disaster and Risk Conferences in Davos to deal with integral risk management.
Several tools can be used to assess risk and risk management of natural disasters and other climate events, including geospatial modeling, a key component of land change science. This modeling requires an understanding of geographic distributions of people as well as an ability to calculate the likelihood of a natural disaster occurring.
Wilderness
The management of risks to persons and property in wilderness and remote natural areas has developed with increases in outdoor recreation participation and decreased social tolerance for loss. Organizations providing commercial wilderness experiences can now align with national and international consensus standards for training and equipment such as ANSI/NASBLA 101-2017 (boating), UIAA 152 (ice climbing tools), and European Norm 13089:2015 + A1:2015 (mountaineering equipment). The Association for Experiential Education offers accreditation for wilderness adventure programs. The Wilderness Risk Management Conference provides access to best practices, and specialist organizations provide wilderness risk management consulting and training.
The text Outdoor Safety – Risk Management for Outdoor Leaders, published by the New Zealand Mountain Safety Council, provides a view of wilderness risk management from the New Zealand perspective, recognizing the value of national outdoor safety legislation and devoting considerable attention to the roles of judgment and decision-making processes in wilderness risk management.
One popular model for risk assessment is the Risk Assessment and Safety Management (RASM) Model developed by Rick Curtis, author of The Backpacker's Field Manual. The formula for the RASM Model is: Risk = Probability of Accident × Severity of Consequences. The RASM Model weighs negative risk—the potential for loss, against positive risk—the potential for growth.
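A toy numeric reading of the RASM formula, with made-up values on arbitrary scales:

```python
# RASM sketch: risk = probability of accident x severity of consequences.
# Scales and values are illustrative only.
def rasm_risk(probability, severity):
    """probability in [0, 1]; severity on an arbitrary 1-10 scale."""
    return probability * severity

print(rasm_risk(0.05, 9))   # unlikely but severe event -> 0.45
print(rasm_risk(0.40, 2))   # likely but minor event    -> 0.8
```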
Information technology
IT risk is a risk related to information technology. This is a relatively new term due to an increasing awareness that information security is simply one facet of a multitude of risks that are relevant to IT and the real world processes it supports. "Cybersecurity is tied closely to the advancement of technology. It lags only long enough for incentives like black markets to evolve and new exploits to be discovered. There is no end in sight for the advancement of technology, so we can expect the same from cybersecurity."
ISACA's Risk IT framework ties IT risk to enterprise risk management. Duty of Care Risk Analysis (DoCRA) evaluates risks and their safeguards and considers the interests of all parties potentially affected by those risks. The Verizon Data Breach Investigations Report (DBIR) features how organizations can leverage the Veris Community Database (VCDB) to estimate risk. Using HALOCK methodology within CIS RAM and data from VCDB, professionals can determine threat likelihood for their industries.
IT risk management includes "incident handling", an action plan for dealing with intrusions, cyber-theft, denial of service, fire, floods, and other security-related events. According to the SANS Institute, it is a six step process: Preparation, Identification, Containment, Eradication, Recovery, and Lessons Learned.
Operations
Operational risk management (ORM) is the oversight of operational risk, including the risk of loss resulting from: inadequate or failed internal processes and systems; human factors; or external events. Given the nature of operations, ORM is typically a "continual" process, and will include ongoing risk assessment, risk decision making, and the implementation of risk controls.
Petroleum and natural gas
For the offshore oil and gas industry, operational risk management is regulated by the safety case regime in many countries. Hazard identification and risk assessment tools and techniques are described in the international standard ISO 17776:2000, and organisations such as the IADC (International Association of Drilling Contractors) publish guidelines for Health, Safety and Environment (HSE) Case development which are based on the ISO standard. Further, diagrammatic representations of hazardous events are often expected by governmental regulators as part of risk management in safety case submissions; these are known as bow-tie diagrams (see Network theory in risk assessment). The technique is also used by organisations and regulators in mining, aviation, health, defence, industrial and finance.
Pharmaceutical sector
The principles and tools for quality risk management are increasingly being applied to different aspects of pharmaceutical quality systems. These aspects include development, manufacturing, distribution, inspection, and submission/review processes throughout the lifecycle of drug substances, drug products, biological and biotechnological products (including the use of raw materials, solvents, excipients, packaging and labeling materials in drug products, biological and biotechnological products). Risk management is also applied to the assessment of microbiological contamination in relation to pharmaceutical products and cleanroom manufacturing environments.
Supply chain
Supply chain risk management (SCRM) aims at maintaining supply chain continuity in the event of scenarios or incidents which could interrupt normal business and hence profitability. Risks to the supply chain range from everyday to exceptional, including unpredictable natural events (such as tsunamis and pandemics) to counterfeit products, and reach across quality, security, to resiliency and product integrity. Mitigation of these risks can involve various elements of the business including logistics and cybersecurity, as well as the areas of finance and operations.
Travel
Travel risk management is concerned with how organisations assess the risks to their staff when travelling, especially when travelling overseas. In the field of international standards, ISO 31030:2021 addresses good practice in travel risk management.
The Global Business Travel Association's education and research arm, the GBTA Foundation, found in 2015 that most businesses covered by their research employed travel risk management protocols aimed at ensuring the safety and well-being of their business travelers. Six key principles of travel risk awareness put forward by the association are preparation, awareness of surroundings and people, keeping a low profile, adopting an unpredictable routine, communications and layers of protection. Traveler tracking using mobile tracking and messaging technologies had by 2015 become a widely used aspect of travel risk management.
Risk communication
See also
Business continuity
Catastrophe modeling
Cost engineering
Cost-plus contract
Crossing the river by touching the stones
Disaster risk reduction
Environmental Risk Management Authority (NZ)
Financial risk management
International Institute of Risk & Safety Management
ISO 31000
IT risk management
Risk Management Framework
Loss-control consultant
Moral hazard
National Safety Council (USA)
Optimism bias
Pest risk analysis
Precautionary principle
Reference class forecasting
Representative heuristic
Risk appetite
Risk aversion
Risk management tools
Risk premium
Roy's safety-first criterion
Security management
Social risk management
Stranded asset
Supply-chain risk management
Three lines of defence
Gordon–Loeb model
References
External links
DoD Risk, Issue, and Opportunity Management Guide for Defense Acquisition Programs (2017)
DoD Risk Management Guide for Defense Acquisition Programs (2014)
Actuarial science
Project management
Risk analysis
Systems engineering
Communication studies
IEEE standards
ISO/IEC standards | Risk management | Mathematics,Technology,Engineering | 7,865 |
18,170,738 | https://en.wikipedia.org/wiki/Spline%20roller | A screen roller or spline roller is a small hand tool used to press screen mesh into the edges of a window frame that is fluted on the inner edges, or to press in the retainer spline that holds that mesh in place. Often these are combined into a single tool or combined with a spline cutter; versions are currently manufactured from plastic or wood and metal.
Appearance and history
While a spline roller (also referred to as a "spline tool") is said to look like a less-sharp version of a pizza cutter (which it does), its origins are in fact different. Around or before 1920, Julius Alexander Muhlberg, co-owner of Winchester and Muhlberg, a New Jersey–based company, created the tool. An innovator at heart, Julius took another tool's handle, drilled a hole through a silver dollar, and after joining the two with a nut and bolt, found it much faster to press screening into the sides of a frame than the usual method of the time, nailing the screen into the frames. The original tool remained in the possession of his son, Julius Muhlberg, but appears to have since been lost.
References
Hand tools | Spline roller | Engineering | 261 |
31,869,776 | https://en.wikipedia.org/wiki/Zinovy%20Reichstein | Zinovy Reichstein (born 1961) is a Russian-born American mathematician. He is a professor at the University of British Columbia in Vancouver.
He studies mainly algebra, algebraic geometry and algebraic groups. He introduced (with Joe P. Buhler) the concept of essential dimension.
Early life and education
In high school, Reichstein participated in the national mathematics olympiad in Russia and was the third highest scorer in 1977 and second highest scorer in 1978.
Because of the antisemitism in the Soviet Union at the time, Reichstein was not accepted to Moscow University, even though he had passed the special mathematics entrance exams. He instead attended a semester of college at the Russian University of Transport.
His family then decided to emigrate, arriving in Vienna, Austria, in August 1979 and New York, United States in the fall of 1980. Reichstein worked as a delivery boy for a short period of time in New York. He was then accepted to and attended California Institute of Technology for his undergraduate studies.
Reichstein received his PhD degree in 1988 from Harvard University under the supervision of Michael Artin. Parts of his thesis entitled "The Behavior of Stability under Equivariant Maps" were published in the journal Inventiones Mathematicae.
Career
As of 2011, he is on the editorial board of the mathematics journal Transformation groups.
Awards
Winner of the 2013 Jeffery-Williams Prize awarded by the Canadian Mathematical Society
Fellow of the American Mathematical Society, 2012
Invited Speaker to the International Congress of Mathematicians (Hyderabad, India 2010)
References
External links
Algebraists
Harvard University alumni
20th-century American mathematicians
21st-century American mathematicians
Living people
Academic staff of the University of British Columbia
Place of birth missing (living people)
Fellows of the American Mathematical Society
1961 births | Zinovy Reichstein | Mathematics | 351 |
29,990,080 | https://en.wikipedia.org/wiki/QuickCode | QuickCode (formerly ScraperWiki) was a web-based platform for collaboratively building programs to extract and analyze public (online) data, in a wiki-like fashion. "Scraper" refers to screen scrapers, programs that extract data from websites. "Wiki" means that any user with programming experience can create or edit such programs for extracting new data, or for analyzing existing datasets. The main use of the website is providing a place for programmers and journalists to collaborate on analyzing public data.
The service was renamed circa 2016, as "it isn't a wiki or just for scraping any more". At the same time, the eponymous parent company was renamed 'The Sensible Code Company'.
History
ScraperWiki was founded in 2009 by Julian Todd and Aidan McGuire. It was initially funded by 4iP, the venture capital arm of TV station Channel 4. Since then, it has attracted an additional £1 Million round of funding from Enterprise Ventures.
Aidan McGuire is the chief executive officer of The Sensible Code Company.
See also
Data driven journalism
Web scraping
References
External links
github repository of custard
Collaborative projects
Wiki software
Social information processing
Web analytics
Mashup (web application hybrid)
Web scraping
Software using the GNU Affero General Public License | QuickCode | Technology | 262 |
64,605,263 | https://en.wikipedia.org/wiki/Flora%20von%20Th%C3%BCringen | Flora von Thüringen is an extensive botanical coverage of the plants occurring in Thuringia in central Germany. Conceived and initiated by the German naturalist Jonathan Carl Zenker in 1836, its completion was delayed by his untimely death in 1837. The botanists Diederich Franz Leonhard von Schlechtendal (1797–1866) and Christian Eduard Langethal (1806–1878) continued the project and the monumental 12-volume work was published in 1855 by Friedrich Mauke of Jena. The work includes 1444 engraved plates, hand-coloured by Ernst Schenk (1796–1859), as well as descriptive text in German.
External links
Gallery of illustrations
References
Florae (publication) | Flora von Thüringen | Biology | 143 |
53,190,900 | https://en.wikipedia.org/wiki/Minor%20losses%20in%20pipe%20flow | Minor losses in pipe flow are a major part in calculating the flow, pressure, or energy reduction in piping systems. Liquid moving through pipes carries momentum and energy due to the forces acting upon it such as pressure and gravity. Just as certain aspects of the system can increase the fluids energy, there are components of the system that act against the fluid and reduce its energy, velocity, or momentum. Friction and minor losses in pipes are major contributing factors.
Friction Losses
Before being able to use the minor head losses in an equation, the losses in the system due to friction must also be calculated.
Equation for friction losses, written with the Fanning friction factor and hydraulic radius:

h_f = f L v² / (2 g R_h)

where:
h_f = frictional head loss
v = downstream velocity
g = gravity of Earth
R_h = hydraulic radius
L = total length of piping
f = Fanning friction factor
Total Head Loss
After both minor losses and friction losses have been calculated, these values can be summed to find the total head loss.
The equation for the total head loss, h_L, can be simplified and rewritten as:

h_L = (v² / (2 g)) · (f L / R_h + ΣK)

where:
h_L = total head loss
v = downstream velocity
g = gravity of Earth
R_h = hydraulic radius
L = total length of piping
f = Fanning friction factor
ΣK = sum of all minor loss (kinetic energy) coefficients in the system
Once calculated, the total head loss can be used to solve the Bernoulli Equation and find unknown values of the system.
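A small sketch evaluating the total head loss expression above under the Fanning-friction-factor/hydraulic-radius convention used here; all numbers are arbitrary, and in practice f would come from a correlation or chart.

```python
# Total head loss sketch: h_L = (v^2 / 2g) * (f*L/R_h + sum(K)).
# Illustrative values only.
def total_head_loss(v, f, L, R_h, K_values, g=9.81):
    return (v**2 / (2 * g)) * (f * L / R_h + sum(K_values))

v = 2.0                    # downstream velocity, m/s
f = 0.006                  # Fanning friction factor (assumed)
L = 50.0                   # pipe length, m
D = 0.10                   # pipe diameter, m
R_h = D / 4                # hydraulic radius of a full circular pipe
K = [0.5, 0.9, 0.9, 1.0]   # entrance, two elbows, exit (typical-looking values)

print(f"Total head loss: {total_head_loss(v, f, L, R_h, K):.2f} m")
```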
See also
Hydraulic head
Total dynamic head
Notes
Piping
Fluid dynamics | Minor losses in pipe flow | Chemistry,Engineering | 265 |
4,534,553 | https://en.wikipedia.org/wiki/Host%20%28network%29 | A network host is a computer or other device connected to a computer network. A host may work as a server offering information resources, services, and applications to users or other hosts on the network. Hosts are assigned at least one network address.
A computer participating in networks that use the Internet protocol suite may also be called an IP host. Specifically, computers participating in the Internet are called Internet hosts. Internet hosts and other IP hosts have one or more IP addresses assigned to their network interfaces. The addresses are configured either manually by an administrator, automatically at startup by means of the Dynamic Host Configuration Protocol (DHCP), or by stateless address autoconfiguration methods.
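For example, the addresses assigned to an Internet host's name can be queried through the standard resolver interface; a short Python sketch (the hostname is arbitrary):

```python
# Sketch: resolve a host name to its IP addresses (IPv4 and IPv6).
import socket

host = "example.com"  # arbitrary host name, for illustration
addresses = {info[4][0] for info in socket.getaddrinfo(host, None)}
for addr in sorted(addresses):
    print(addr)
```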
Network hosts that participate in applications that use the client–server model of computing are classified as server or client systems. Network hosts may also function as nodes in peer-to-peer applications, in which all nodes share and consume resources in an equipotent manner.
Origins
In operating systems, the term terminal host denotes a time-sharing computer or multi-user software providing services to computer terminals, or a computer that provides services to smaller or less capable devices, such as a mainframe computer serving teletype terminals or video terminals. Other examples of this architecture include a telnet host connected to a telnet server and an xhost connected to an X Window client.
The term Internet host or just host is used in a number of Request for Comments (RFC) documents that define the Internet and its predecessor, the ARPANET. RFC 871 defines a host as a general-purpose computer system connected to a communications network for "... the purpose of achieving resource sharing amongst the participating operating systems..."
While the ARPANET was being developed, computers connected to the network were typically mainframe computer systems that could be accessed from dumb terminals connected via serial ports. Since these terminals did not host software or perform computations themselves, they were not considered hosts as they were not connected to any IP network, and were not assigned IP addresses. User computers connected to the ARPANET at a packet-switching node were considered hosts.
Nodes, hosts, and servers
A network node is any device participating in a network. A host is a node that participates in user applications, either as a server, client, or both. A server is a type of host that offers resources to the other hosts. Typically a server accepts connections from clients who request a service function.
Every network host is a node, but not every network node is a host. Network infrastructure hardware, such as modems, Ethernet hubs, and network switches are not directly or actively participating in application-level functions, and do not necessarily have a network address, and are not considered to be network hosts.
See also
References
External links
Networking hardware | Host (network) | Engineering | 564 |
18,180,005 | https://en.wikipedia.org/wiki/NO%20Apodis | NO Apodis is a solitary, red hued variable star located in the southern circumpolar constellation Apus. It has an average apparent magnitude of 5.86, allowing it to be faintly seen with the naked eye. The object is relatively far at a distance of 790 light years but is drifting closer with a heliocentric radial velocity .
NO Apodis has a stellar classification of M3 III, indicating that it is a red giant. It is currently on the asymptotic giant branch, fusing hydrogen and helium shells around an inert carbon core. At present it has 1.63 times the mass of the Sun and an enlarged radius of . It shines with a bolometric luminosity 1,408 times that of the Sun from its photosphere at an effective temperature of .
NO Apodis is classified as a semiregular variable of unknown subtype. Observations from Tabur et al. (2009) reveal it to have two periods, both lasting 26–27 days. During this timeframe, the star fluctuates between 5.71 and 5.95 in the visual band.
References
Apus
Semiregular variable stars
156513
Apodis, NO
M-type giants
085760
6429
CD-80 00638
Apodis, 59 | NO Apodis | Astronomy | 273 |
36,171,096 | https://en.wikipedia.org/wiki/Example-based%20machine%20translation | Example-based machine translation (EBMT) is a method of machine translation often characterized by its use of a bilingual corpus with parallel texts as its main knowledge base at run-time. It is essentially a translation by analogy and can be viewed as an implementation of a case-based reasoning approach to machine learning.
Translation by analogy
At the foundation of example-based machine translation is the idea of translation by analogy. When applied to the process of human translation, the idea that translation takes place by analogy is a rejection of the idea that people translate sentences by doing deep linguistic analysis. Instead, it is founded on the belief that people translate by first decomposing a sentence into certain phrases, then by translating these phrases, and finally by properly composing these fragments into one long sentence. Phrasal translations are translated by analogy to previous translations. The principle of translation by analogy is encoded to example-based machine translation through the example translations that are used to train such a system.
Other approaches to machine translation, including statistical machine translation, also use bilingual corpora to learn the process of translation.
History
Example-based machine translation was first suggested by Makoto Nagao in 1984. He pointed out that it is especially adapted to translation between two totally different languages, such as English and Japanese. In this case, one sentence can be translated into several well-structured sentences in another language, therefore, it is no use to do the deep linguistic analysis characteristic of rule-based machine translation.
Example
English                            Japanese
How much is that red umbrella?     Ano akai kasa wa ikura desu ka.
How much is that small camera?     Ano chiisai kamera wa ikura desu ka.

Example-based machine translation systems are trained from bilingual parallel corpora containing sentence pairs like the example shown in the table above. Sentence pairs contain sentences in one language with their translations into another. The particular example shows an example of a minimal pair, meaning that the sentences vary by just one element. These sentences make it simple to learn translations of portions of a sentence. For example, an example-based machine translation system would learn three units of translation from the above example:
How much is that X ? corresponds to Ano X wa ikura desu ka.
red umbrella corresponds to akai kasa
small camera corresponds to chiisai kamera
Composing these units can be used to produce novel translations in the future. For example, if we have been trained using some text containing the sentences:
President Kennedy was shot dead during the parade. and The convict escaped on July 15th., then we could translate the sentence The convict was shot dead during the parade. by substituting the appropriate parts of the sentences.
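A toy sketch of this substitution idea, hard-coded for the umbrella/camera example above; a real example-based system would induce the templates and phrase pairs from the corpus rather than have them written out by hand.

```python
# Toy example-based translation by substitution, following the example above.
# The template and phrase pairs are hand-written here for illustration.
template_en = "How much is that {X} ?"
template_ja = "Ano {X} wa ikura desu ka ."

phrase_pairs = {
    "red umbrella": "akai kasa",
    "small camera": "chiisai kamera",
}

def translate(sentence_en):
    # Try each known phrase in the template; return the analogous Japanese.
    for en_phrase, ja_phrase in phrase_pairs.items():
        if sentence_en == template_en.format(X=en_phrase):
            return template_ja.format(X=ja_phrase)
    return None

print(translate("How much is that red umbrella ?"))
# -> Ano akai kasa wa ikura desu ka .
```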
Phrasal verbs
Example-based machine translation is best suited for sub-language phenomena like phrasal verbs. Phrasal verbs have highly context-dependent meanings. They are common in English, where they comprise a verb followed by an adverb and/or a preposition, which are called the particle to the verb. Phrasal verbs produce specialized context-specific meanings that may not be derived from the meaning of the constituents. There is almost always an ambiguity during word-to-word translation from source to the target language.
As an example, consider the phrasal verb "put on" and its Hindustani translation. It may be used in any of the following ways:
Ram put on the lights. (Switched on) (Hindustani translation: Jalana)
Ram put on a cap. (Wear) (Hindustani translation: Pahenna)
See also
Programming by example
Translation memory
Natural Language Processing
References
Further reading
External links
Cunei - an open source platform for data-driven machine translation that grew out of research in EBMT, but also includes recent advances from the SMT field
Machine translation
Machine translation, example-based | Example-based machine translation | Technology | 733 |
2,538,401 | https://en.wikipedia.org/wiki/225%20%28number%29 | 225 (two hundred [and] twenty-five) is the natural number following 224 and preceding 226.
225 is the smallest number that is a polygonal number in five different ways. It is a square number (225 = 15²),
an octagonal number, and a squared triangular number, since 225 = (1 + 2 + 3 + 4 + 5)² = 1³ + 2³ + 3³ + 4³ + 5³.
As the square of the double factorial 5!! = 15, 225 counts the number of permutations of six items in which all cycles have even length, or the number of permutations in which all cycles have odd length. And as one of the Stirling numbers of the first kind, it counts the number of permutations of six items with exactly three cycles.
225 is a highly composite odd number, meaning that it has more divisors than any smaller odd number. After 1 and 9, 225 is the third smallest number n for which σ(φ(n)) = φ(σ(n)), where σ is the sum-of-divisors function and φ is Euler's totient function. 225 is a refactorable number.
225 is the smallest square number to have one of every digit in some number base (225 is 3201 in base 4).
225 is the first odd number with exactly 9 divisors.
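These properties are easy to check computationally; a short sketch using the sympy library (assuming it is available):

```python
# Verify some properties of 225 (requires sympy).
from sympy import divisors, totient, divisor_sigma

n = 225
# Square, squared triangular number, and sum of the first five cubes.
assert n == 15**2 == sum(range(1, 6))**2 == sum(k**3 for k in range(1, 6))
# Exactly 9 divisors.
assert len(divisors(n)) == 9
# sigma(phi(n)) == phi(sigma(n)); also holds for n = 1 and n = 9.
assert divisor_sigma(totient(n)) == totient(divisor_sigma(n))
print("all checks pass")
```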
References
Integers | 225 (number) | Mathematics | 233 |
3,169,611 | https://en.wikipedia.org/wiki/Invertebrate%20zoology | Invertebrate zoology is the subdiscipline of zoology that consists of the study of invertebrates, animals without a backbone (a structure which is found only in fish, amphibians, reptiles, birds and mammals).
Invertebrates are a vast and very diverse group of animals that includes sponges, echinoderms, tunicates, numerous different phyla of worms, molluscs, arthropods and many additional phyla. Single-celled organisms or protists are usually not included within the same group as invertebrates.
Subdivisions
Invertebrates represent 97% of all named animal species, and because of that fact, this subdivision of zoology has many further subdivisions, including but not limited to:
Arthropodology - the study of arthropods, which includes
Arachnology - the study of spiders and other arachnids
Entomology - the study of insects
Carcinology - the study of crustaceans
Myriapodology - the study of centipedes, millipedes, and other myriapods
Cnidariology - the study of Cnidaria
Helminthology - the study of parasitic worms.
Malacology - the study of mollusks, which includes
Conchology - the study of Mollusk shells.
Limacology - the study of slugs.
Teuthology - the study of cephalopods.
Invertebrate paleontology - the study of fossil invertebrates
These divisions are sometimes further divided into more specific specialties. For example, within arachnology, acarology is the study of mites and ticks; within entomology, lepidoptery is the study of butterflies and moths, myrmecology is the study of ants and so on. Marine invertebrates are all those invertebrates that exist in marine habitats.
History
Early Modern Era
In the early modern period starting in the late 16th century, invertebrate zoology saw growth in the number of publications made and improvement in the experimental practices associated with the field. Insects, one of the most diverse groups of organisms on Earth, play important roles in ecosystems, including pollination, acting as natural enemies of pests, decomposing organic matter, and transferring biological information.
One of the major works to be published in the area of zoology was Conrad Gessner's Historia animalium, which was published in numerous editions from 1551 to 1587. Though it was a work more generally addressing zoology in the large sense, it did contain information on insect life. Much of the information came from older works; Gessner restated the work of Pliny the Elder and Aristotle while mixing old knowledge of the natural history of insects with his own observations.
With the invention of the microscope in 1599 came a new way of observing the small creatures that fall under the umbrella of invertebrates. Robert Hooke, who worked out of the Royal Society in England, conducted observations of insects—including some of their larval forms—and other invertebrates, such as ticks. His Micrographia, published in 1665, included illustrations and written descriptions of the things he saw under the microscope.
Others also worked with the microscope following its acceptance as a scientific tool. Francesco Redi, an Italian physician and naturalist, used a microscope for observation of invertebrates, but is known for his work in disproving the theory of spontaneous generation. Redi managed to prove that flies did not spontaneously arise from rotting meat. He conducted controlled experiments and detailed observation of the fly life cycle in order to do so. Redi also worked in the description and illustration of parasites for both plants and animals.
Other men were also conducting research into pests and parasites at this time. Felix Plater, a Swiss physician, worked to differentiate between two types of tape worm. He also wrote descriptions of both the worms he observed and the effects these worms had on their hosts.
Following the publication of Francis Bacon's ideas about the value of experimentation in the sciences came a shift toward true experimental efforts in the biological sciences, including invertebrate zoology. Jan Swammerdam, a Dutch microscopist, supported an effort to work for a 'modern' science over blind belief in the work of ancient philosophers. He worked—like Redi—to disprove spontaneous generation using experimental techniques. Swammerdam also made a number of advancements in the study of anatomy and physiology. In the field of entomology, he conducted a number of dissections of insects and made detailed observations of the internal structures of these specimens. Swammerdam also worked on a classification of insects based on life histories; he managed to contribute to the literature proving that an egg, larva, pupa, and adult are indeed the same individual.
18th and 19th centuries
In the 18th century, the study of invertebrates focused on the naming of species that were relevant to economic pursuits, such as agricultural pests. Entomology was changing in big ways very quickly, as many naturalists and zoologists were working with hexapods.
Work was also being done in the realm of parasitology and the study of worms. A French physician named Nicolas Andry de Bois-Regard determined that worms were the cause of some diseases. He also declared that worms do not spontaneously form within the animal or human gut; de Bois-Regard stated that there must be some kind of 'seed' which enters the body and contains the worm in some form. Antonio Vallisneri also worked with parasitic worms, specifically members of the genera Ascaris and Neoascaris. He found that these worms came from eggs. In addition, Vallisneri worked to elucidate the reproduction of insects, specifically the sawfly.
In 1735, the first edition of Carl Linnaeus' Systema Naturae was published; this work included information on both insects and intestinal worms. However, the tenth edition is considered the true starting point for the modern classification scheme for living things today. Linnaeus' universal system of classification made a system based on binomial nomenclature, but included higher levels of classification than simply the genus and species names. Systema Naturae was an investigation into the biodiversity on Earth. However, because it was based only on very few characters, the system developed by Linnaeus was an artificial one. The book also included descriptions of the organisms named inside of it.
In 1859, Charles Darwin's On the Origin of Species was published. In this book, he described his theory of evolution by natural selection. Both the work of Darwin and that of his contemporary, Alfred Russel Wallace—who was also working on the theory of evolution—were informed by the careful study of insects. In addition, Darwin collected many species of invertebrates during his time aboard HMS Beagle; many of the specimens collected were insects. Using these collections, he was able to study sexual dimorphism, geographic distribution of species, and mimicry; all of these concepts influenced Darwin's theory of evolution. However, a firm popular belief in the immutability of species was a major hurdle to the theory's acceptance.
20th century
Classification in the twentieth century shifted toward a focus on evolutionary relationships over morphological description. The development of phylogenetics and systematics based on this study is credited to Willi Hennig, a German entomologist. In 1966, his Phylogenetic Systematics was published; inside, Hennig redefined the goals of systematic schemes for classifying living things. He proposed that the focus be on evolutionary relationships over similar morphological features. He also defined monophyly and included his ideas about hierarchical classification. Though Hennig did not include information on outgroup comparison, he was apparently aware of the practice, which is considered important to today's systematic research.
Notable invertebrates
The Japanese spider crab (Arthropoda: Macrocheira kaempferi) is one of the world's largest arthropods. The Japanese spider crab is the largest known species of crab and may live up to 100 years. With a leg span that can reach four feet, it has the longest span of any arthropod. They are typically found in the Pacific waters near Japan on the bottom of the continental shelf.
The lion's mane jellyfish (Cnidaria: Cyanea capillata) is the largest known type of jellyfish. Their tentacles can reach up to 190 feet long, and they may have a bell diameter of almost 7 feet. These animals are usually found in cold northern Arctic waters and in the Northern portions of the Atlantic and Pacific Oceans.
The giant squid (Mollusca: Architeuthis dux) comes from the family Architeuthidae. These squid are both the largest known cephalopod and the largest known mollusc. They can grow to a length of about 45–50 feet long. They developed large eyes, the largest of any animal, to be able to detect small amounts of bioluminescence in the dark and deep ocean where they live.
References
External links
A Study Guide to Invertebrate Zoology ~ at Wikibooks
Online Dictionary of Invertebrate Zoology
Zoology
Subfields of zoology | Invertebrate zoology | Biology | 1,843 |
45,229,063 | https://en.wikipedia.org/wiki/V1400%20Centauri | V1400 Centauri, also known as 1SWASP J140747.93−394542.6 or simply J1407, is a young, pre-main-sequence star that was eclipsed by the likely free-floating substellar object J1407b in April–June 2007. With an age around 20 million years, the star is about as massive as the Sun and is located in the constellation Centaurus at a distance of 451 light-years away from the Sun. V1400 Centauri is a member of Upper Centaurus–Lupus subgroup of the Scorpius–Centaurus association, a group of young, comoving stars close to the Sun.
Name and catalogue history
The star has been catalogued since as early as the 1990s, beginning with the Hubble Guide Star Catalog, which identified the star and measured its position in a pair of photographic plates taken in 1974 and 1979. The star has been catalogued by other sky surveys, including the All Sky Automated Survey (ASAS), Two Micron All-Sky Survey (2MASS), Super Wide Angle Search for Planets (1SWASP), and the Wide-field Infrared Survey Explorer (WISE). Typically in these catalogues, the star is given designations such as 1SWASP J140747.93–394542.6, which comprises the survey name followed by the star's location in equatorial coordinates. As such designations can be unwieldy, researchers simply call the star "J1407". The star was given the official variable star designation V1400 Centauri in 2015, when it was added to the International Astronomical Union's General Catalogue of Variable Stars. A 2018 research paper on stars with unusual dimming periods nicknamed V1400 Centauri "Mamajek's Object", after the astronomer Eric Mamajek who identified the star's unusual dimming in 2007.
Stellar properties
Location and age
V1400 Centauri is located in the constellation Centaurus, about 40 degrees south of the celestial equator. The most recent parallax measurements by the Gaia spacecraft indicate V1400 Centauri is located about 451 light-years from the Sun. Observations of V1400 Centauri's position over time have shown that it has a southwestward proper motion consistent with that of the Scorpius–Centaurus association, an OB association of young stars with ages between 11 and 17 million years. The Scorpius–Centaurus association is the nearest OB association to the Sun, and is believed to have formed out of a molecular cloud that has since been blown away by the stellar winds of the association's most massive stars.
V1400 Centauri is closest to the Upper Centaurus–Lupus subgroup of the Scorpius–Centaurus association, which has an age range of 14–18 million years. Given V1400 Centauri's similar distance and proper motion, it very likely belongs to the Scorpius–Centaurus association, which would mean it must be a young star within the age range of the Upper Centaurus–Lupus subgroup. A 2012 estimate of V1400 Centauri's age assumes it is equal to 16 million years, the mean age of the Upper Centaurus–Lupus subgroup, while a 2018 estimate from Gaia measurements puts the star's age at around 20 million years.
Spectral type and physical characteristics
V1400 Centauri is a pre-main sequence star of spectral class K5 IVe Li. "K" means V1400 Centauri is an orange K-type star, and the adjoining number "5" ranks V1400 Centauri's relative temperature on a scale of 9 (coolest) to 0 (hottest) for K-type stars. V1400 Centauri is given the subgiant luminosity class "IV", because it has a brighter luminosity than K-type main-sequence stars (luminosity class V). The letter "e" indicates V1400 Centauri exhibits weak hydrogen-alpha emission lines in its visible light spectrum. Lastly, "Li" indicates V1400 Centauri is abundant in lithium.
Measurements from the Gaia spacecraft's third and most recent data release (Gaia DR3) indicate V1400 Centauri is about 7% larger than the Sun in radius, but is slightly less massive than the Sun. Estimates of the star's mass differ depending on whether magnetic effects are taken into account in models of its stellar evolution. Young stars tend to be magnetically active, and neglecting their magnetic effects results in an underestimation of their mass. An older mass estimate from Gaia's second data release (Gaia DR2) in 2018 does not take magnetic effects into account.
V1400 Centauri is cooler and less luminous than the Sun, with a luminosity about 34% that of the Sun. V1400 Centauri has an estimated surface gravity of over 20 times the gravity of Earth, based on Gaia measurements of the star's brightness, distance, and color. Gaia measurements also indicate V1400 Centauri has a lower metallicity than the Sun. Viewed from Earth, V1400 Centauri appears marginally redder than a typical K5-type star due to light extinction by interstellar dust between Earth and the star. The star does not exhibit excess thermal emission at near- and mid-infrared wavelengths and lacks strong emission lines in its spectrum, which indicates it lacks a substantial accretion disk or protoplanetary disk.
Rotation and variability
Like most young stars, V1400 Centauri rotates rapidly with a rotation period of approximately 3.2 days. The rapid rotation of V1400 Centauri strengthens its magnetic field via the dynamo process, which leads to the formation of starspots on its surface. As V1400 Centauri rotates, its starspots come into and out of view, causing the star's brightness to periodically fluctuate by 5%, or about 0.1 magnitudes in amplitude. The star's rotation period varies by 0.02 days over a 5.4-year-long magnetic activity cycle, due to the long-term movement of starspots across the star's differentially rotating surface. V1400 Centauri is known to emit soft X-rays due to its corona being heated by its rotationally-strengthened magnetic field. Because of its young age, starspot variability, and lack of dust accretion, V1400 Centauri is classified as a weak-lined T Tauri variable.
Spectroscopic measurements of Doppler broadening in V1400 Centauri's spectral absorption lines yield the star's projected rotational velocity. Combined with the true equatorial rotation velocity implied by the star's rotation period and radius, this indicates that the star's rotation axis is inclined with respect to Earth's line of sight.
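As a worked illustration (not the published measurement), the equatorial velocity follows from the period and radius quoted in this article, here taking R ≈ 1.07 solar radii and P ≈ 3.2 days:

```latex
v_\mathrm{eq} = \frac{2\pi R}{P}
  \approx \frac{2\pi \times 1.07 \times 6.957\times10^{8}\ \mathrm{m}}{3.2 \times 86400\ \mathrm{s}}
  \approx 1.7\times10^{4}\ \mathrm{m\,s^{-1}} \approx 17\ \mathrm{km\,s^{-1}}
```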
2007 eclipse by J1407b
From 7 April to 4 June 2007, telescopes of the Super Wide Angle Search for Planets (SuperWASP) and All Sky Automated Survey (ASAS) projects recorded V1400 Centauri undergoing a series of significant dimming events lasting 56 days. The pattern of these dimming events was complex yet nearly symmetrical, indicating they were caused by an opaque, disk-like structure eclipsing the star. The object that eclipsed V1400 Centauri is now known as J1407b, a substellar object surrounded by a dusty circumplanetary disk.
V1400 Centauri's eclipse by J1407b was discovered on 3 December 2010 by Mark Pecaut, who was a graduate student of Eric E. Mamajek at the University of Rochester. Mamajek, Pecaut, and collaborators announced the discovery in 2012. Mamajek's team initially hypothesized that J1407b is a ringed exoplanet or brown dwarf orbiting the star, but that has since been disfavored by later studies. V1400 Centauri does not show repeating eclipses, telescope observations showed no orbiting companions, and the disk of J1407b would be unstable if it orbited the star, which suggests that J1407b likely does not orbit V1400 Centauri and is instead a free-floating object that coincidentally passed in front of the star. In this case, J1407b's coincidental eclipse of V1400 Centauri would be considered an extremely rare event that will never happen again.
High-resolution imaging by the Atacama Large Millimeter Array (ALMA) in 2017 revealed a single object near V1400 Centauri, which might be J1407b. The object's distance from V1400 Centauri appears to match the expected distance travelled by J1407b if it was a free-floating object. The object's brightness is suggestive of a dusty circumplanetary disk surrounding a planetary-mass object below 6 Jupiter masses. However, the object has only been observed by ALMA once, so it is not yet known whether it is a moving foreground object or a stationary background galaxy. Observations by ALMA in June and July 2024 are expected to confirm whether or not this object is J1407b.
See also
List of transiting circumsecondary disks
List of stars that have unusual dimming periods
Notes
References
External links
Eric Mamajek's webpage at University of Rochester
Matthew Kenworthy's webpage on J1407b
Rings around another world may have been sculpted by exomoons, Ruth Angus, Astrobites, 5 February 2015.
Centaurus
Hypothetical planetary systems
Variable stars
Pre-main-sequence stars
Upper Centaurus Lupus
J14074792-3945427
Centauri, V1400
K-type stars | V1400 Centauri | Astronomy | 2,117 |
29,139,875 | https://en.wikipedia.org/wiki/Heat%20meter | A heat meter, thermal energy meter or energy meter is a device which measures thermal energy provided by a source or delivered to a sink, by measuring the flow rate of the heat transfer fluid and the change in its temperature (ΔT) between the outflow and return legs of the system. It is typically used in industrial plants for measuring boiler output and heat taken by process, and for district heating systems to measure the heat delivered to consumers.
It can be used to measure the heat output of, say, a heating boiler, or the cooling output from a chiller unit.
In Europe, heat meters have to comply with the Measuring Instruments Directive (MID), Annex VI, MI-004, if the meters are used for custody transfer.
Elements
A heat meter consists of
a fluid flow meter - typically a turbine-type flow meter, or alternatively an ultrasonic flow meter;
a means of measuring the temperature difference between the outflow and the inflow - usually a pair of thermocouples;
a means of integrating the two measurements over a period of time - typically half an hour - and accumulating the total heat transfer in a given period (a minimal numerical sketch of this integration follows this list).
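The integration step can be sketched numerically. The following is a minimal sketch, assuming water as the heat transfer fluid with constant density and specific heat; real meters (e.g. under the EN 1434 standard) apply temperature-dependent fluid properties, and the sample readings below are illustrative only.

```python
# Minimal sketch of a heat meter's integration step, assuming water as the
# heat transfer fluid with constant density and specific heat capacity.

RHO = 998.0    # density of water, kg/m^3 (assumed constant)
CP = 4186.0    # specific heat of water, J/(kg*K) (assumed constant)

def accumulated_heat(samples, dt_seconds=1800.0):
    """Sum heat transfer over half-hourly (flow m^3/s, delta-T K) samples."""
    joules = 0.0
    for flow, delta_t in samples:
        joules += flow * RHO * CP * delta_t * dt_seconds
    return joules / 3.6e6   # convert J to kWh

# Two half-hour readings: 0.5 l/s at 20 K, then 0.4 l/s at 18 K
readings = [(0.0005, 20.0), (0.0004, 18.0)]
print(f"{accumulated_heat(readings):.2f} kWh")   # ~35.9 kWh
```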
Heat Metering Technologies
Superstatic:
Principle: The main part of the flow passes through a Venturi nozzle in the pipe, creating a differential pressure that drives part of the flow through a fluid oscillator. The pressure oscillations are converted into an electric signal by a piezo sensor and detected by the integrator.
Approval rating: Class 2 MID
Billing approved: Yes
RHI approved: Yes
Power supply: Battery / Mains
Mechanical:
Principle: A traditional pulsed mechanical water meter supplied with a separate integrator for energy calculation.
Approval rating: Class 3 MID (due to the Class 3 rating of the mechanical meter)
Billing approved: Not for non-domestic use
RHI approved: Not for non-domestic use
Power supply: Battery / Mains
Ultrasonic:
Principle: Doppler-frequency sensors installed upstream and downstream pick up the flow and disturbances along the pipe; the measurement is compensated by a temperature sensor.
Approval rating: Class 2 MID
Billing approved: Yes
RHI approved: Yes
Power supply: Battery / Mains
UK Heat Meter Regulations
For any non-domestic application where the meter will be used for billing (including sub-metering), the meter must be MID Class 2 approved; Class 3 is not suitable.
Class 3 meters can be used for domestic billing.
Heat meters used for the non-domestic RHI (Renewable Heat Incentive) must also comply with accuracy class 2 or better of the Measuring Instruments Directive (MID).
See also
District heating
Heat cost allocator
Heating system
Thermometer
References
Thermometers | Heat meter | Technology,Engineering | 521 |
22,574,871 | https://en.wikipedia.org/wiki/Cycle%20time%20variation | Cycle time variation is a metric and philosophy for continuous improvement with the aim of driving down the deviations in the time it takes to produce successive units on a production line. It supports organizations' application of lean manufacturing or lean production by eliminating wasteful expenditure of resources. It is distinguished from some of the more common applications by its different focus of creating a structure for progressively reducing the sources of internal variation that leads to workarounds and disruption causing these wastes to accumulate in the first place. Although it is often used as an indicator of lean progress, its use promotes a structured approach to reducing disruption that impacts efficiency, quality, and value.
References
Lean manufacturing | Cycle time variation | Engineering | 133 |
47,071,106 | https://en.wikipedia.org/wiki/JHipster | JHipster is a free and open-source application generator used to quickly develop modern web applications and Microservices using Angular or React (JavaScript library) and the Spring Framework.
Overview
JHipster provides tools to generate a project with a Java stack on the server side (using Spring Boot) and a responsive web front-end on the client side (with Angular/React and Bootstrap). It can also create a microservices stack with support for Netflix OSS, Docker and Kubernetes.
The term 'JHipster' comes from 'Java Hipster', as its initial goal was to use all the modern and 'hype' tools available at the time. Today, it has a more enterprise-oriented goal, with a strong focus on developer productivity, tooling and quality.
Major functionalities
Generate full stack applications and microservices, with many options
Generate CRUD entities, directly or by scaffolding
Database migrations with Liquibase
NoSQL databases support (Cassandra, MongoDB, Neo4j)
Elasticsearch support
Websockets support
Automatic deployment to CloudFoundry, Heroku, OpenShift, AWS
Technology stack
On the client side:
HTML5 Boilerplate
Twitter Bootstrap
AngularJS
Angular 2+
React
Full internationalization support with Angular Translate
Optional Compass / Sass support for CSS design
Optional WebSocket support with Spring Websocket
On the server side:
Spring Boot
Spring Security (including Social Logins)
Spring MVC REST + Jackson
Monitoring with Metrics
Optional WebSocket support with Spring Websocket
Spring Data JPA + Bean Validation
Database updates with Liquibase
Elasticsearch support
MongoDB support
Cassandra support
Neo4j support
Out-of-the-box auto-configured tooling:
Yeoman
Webpack or Gulp.js
BrowserSync
Maven or Gradle
Editor for Datamodeling (visual and textual)
Books
A JHipster mini-book was written by Matt Raible, the author of AppFuse.
A book, "Full Stack Development with JHipster", was written by Deepu K Sasidharan, the co-lead of JHipster, and Sendil Kumar N, a core team member of JHipster. It was reviewed by Julien Dubois and Antonio Goncalves.
See also
MEAN (software bundle)
References
External links
Java platform
Web frameworks
Free software programmed in Java (programming language)
Agile software development
Software using the Apache license | JHipster | Technology | 509 |
26,968,316 | https://en.wikipedia.org/wiki/3%2C3-Diphenylcyclobutanamine | 3,3,-Diphenylcyclobutanamine is a psychostimulant drug which was originally prepared as an antidepressant in the late 1970s. It appears to inhibit the reuptake of serotonin, norepinephrine, and dopamine, and may also induce their release as well. The N-methyl and N,N-dimethyl analogues of the compound are also known and are more potent. All three agents produce locomotor stimulation in animal studies, with the tertiary amine being the strongest.
See also
β-Phenylmethamphetamine
Fezolamine
References
Amines
Experimental antidepressants
Stimulants
Cyclobutanes
Benzhydryl compounds | 3,3-Diphenylcyclobutanamine | Chemistry | 156 |
48,540,157 | https://en.wikipedia.org/wiki/Turbinellus%20stereoides | Turbinellus stereoides, previously known as Gomphus stereoides, is a mushroom in the family Gomphaceae. It was originally described in 1996 by E. J. H. Corner as a species of Gomphus. The type collection was made in 1930 in Slim River, Malaysia.
The genus Gomphus, along with several others in the Gomphaceae, was reorganized in the 2010s after molecular analysis confirmed that the older morphology-based classification did not accurately represent phylogenetic relationships. Admir Giachini transferred the fungus to Turbinellus in 2011.
In 2010 Turbinellus stereoides was reported from Turkey.
References
Gomphaceae
Fungi of Asia
Fungi of Western Asia
Fungi described in 1966
Taxa named by E. J. H. Corner
Fungus species | Turbinellus stereoides | Biology | 155 |
61,062,118 | https://en.wikipedia.org/wiki/Amanda%20Chetwynd | Amanda G. Chetwynd is a British mathematician and statistician specializing in combinatorics and spatial statistics.
She is Professor of Mathematics and Statistics and Provost for Student Experience, Colleges and the Library at Lancaster University, and a Principal Fellow of the Higher Education Academy.
Education and research
Chetwynd earned a Ph.D. from the Open University in 1985. Her dissertation, Edge-colourings of graphs, was jointly supervised by Anthony Hilton and Robin Wilson. She did postdoctoral research at the University of Stockholm before joining Lancaster University. Her research interests include graph theory, edge coloring, and latin squares in combinatorics, as well as geographical clustering in medical statistics.
Recognition and service
In 2003, Chetwynd won a National Teaching Fellowship recognizing her teaching excellence. She was vice president of the London Mathematical Society in 2005, at a time when university study of mathematics was shrinking, and as vice president encouraged the UK government to counter the decline by providing more funds for mathematics education.
Books
With Peter Diggle, Chetwynd is the author of the books Discrete Mathematics (Modular Mathematics series, Arnold, 1995) and Statistics and Scientific Method: An Introduction for Students and Researchers (Oxford University Press, 2011). With Bob Burn she is the author of A Cascade of Numbers: An Introduction to Number Theory (Arnold, 1995).
References
External links
Home page
Year of birth missing (living people)
Living people
British mathematicians
British statisticians
British women statisticians
Graph theorists
Academics of Lancaster University
Principal Fellows of the Higher Education Academy | Amanda Chetwynd | Mathematics | 308 |
45,546,793 | https://en.wikipedia.org/wiki/Kulekhani%20Reservoir | The Kulekhani Dam is a rock-fill dam on the Kulekhani River near Kulekhani in the Indrasarowar Rural Municipality of Makwanpur District in Bagmati Province, Nepal. The primary purpose of the dam is hydroelectric power generation and it supports the 60 MW Kulekhani I, 32 MW Kulekhani II and 14 MW Kulekhani III Hydropower Stations. Construction began in 1977 and Kulekhani I was commissioned in 1982. Kulekhani II was commissioned in 1986 and a third power station, the 14 MW Kulekhani III was expected to be commissioned in May 2015 but was delayed due to issues with the builder. The US$117.84 million project received funding from the World Bank, Kuwait Fund, UNDP, Overseas Economic Cooperation Fund and OPEC Fund. It is owned by Nepal Electricity Authority.
The dam creates a reservoir called Indra Sarobar.
The Kulekhani Dam in Nepal has a total installed capacity of 106 megawatts (MW):
Kulekhani I: 60 MW installed capacity
Kulekhani II: 32 MW installed capacity
Kulekhani III: 14 MW installed capacity
Kulekhani I hydropower station
From the reservoir, water is sent to the Kulekhani I Hydropower Station via a headrace tunnel to a gate house which controls the flow of water to the power station. From the gate house, water travels down a penstock to the underground power station, which contains two 30 MW Pelton turbine-generators. The difference in elevation between the reservoir and the power station provides the net hydraulic head.
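As a hedged illustration of how head and flow set a station's output, the standard hydropower relation P = ηρgQH can reproduce Kulekhani I's 60 MW rating; the efficiency, flow rate, and head below are assumed round numbers for illustration, not figures from this article:

```latex
P = \eta\,\rho\,g\,Q\,H
  \approx 0.9 \times 1000\ \mathrm{kg\,m^{-3}} \times 9.81\ \mathrm{m\,s^{-2}}
    \times 12.5\ \mathrm{m^{3}\,s^{-1}} \times 550\ \mathrm{m}
  \approx 6.1\times10^{7}\ \mathrm{W} \approx 60\ \mathrm{MW}
```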
Kulekhani II hydropower station
Water discharged from the Kulekhani I power station enters a series of tunnels and diversions before reaching the Kulekhani II Hydropower Station, which is also located underground and contains two 16 MW Francis turbine-generators. The elevation difference between the two stations provides the net hydraulic head. The dam and reservoir are in the Bagmati River basin while the power stations are in the Rapti River basin.
Kulekhani III hydropower station
Construction of the Kulekhani III Hydropower Station had been underway since 2008 and was completed in 2019. The Nepal Electricity Authority (NEA) had extended the completion deadline of the project for the fifth time, to January 2018, as construction ran late due to a slow-moving contractor. The project's civil contractor, Sino Hydro, has since completed the construction, and the turbine, water gates, and the transmission lines needed to evacuate the electricity generated by the plant have been installed. The station uses the tailwaters of Kulekhani II and has an installed capacity of 14 MW.
Gallery
References
Dams in Nepal
Hydroelectric power stations in Nepal
Rock-filled dams
Dams completed in 1982
Interbasin transfer
Energy infrastructure completed in 1982
Energy infrastructure completed in 1986
Buildings and structures in Makwanpur District
Indrasarowar Rural Municipality
Underground power stations
1982 establishments in Nepal
Artificial lakes of Nepal | Kulekhani Reservoir | Environmental_science | 641 |
14,664,948 | https://en.wikipedia.org/wiki/Replication%20protein%20A | Replication protein A (RPA) is the major protein that binds to single-stranded DNA (ssDNA) in eukaryotic cells. In vitro, RPA shows a much higher affinity for ssDNA than RNA or double-stranded DNA. RPA is required in replication, recombination and repair processes such as nucleotide excision repair and homologous recombination. It also plays roles in responding to damaged DNA.
Structure
RPA is a heterotrimer, composed of the subunits RPA1 (RPA70, 70 kDa), RPA2 (RPA32, 32 kDa) and RPA3 (RPA14, 14 kDa). The three RPA subunits contain six OB-folds (oligonucleotide/oligosaccharide binding folds), with DNA-binding domains (DBDs) designated A-F, that bind RPA to single-stranded DNA.
DBDs A, B, C and F are located on RPA1, DBD D is located on RPA2, and DBD E is located on RPA3. DBDs C, D, and E make up the trimerization core of the protein with flexible linker regions connecting them all together. Due to these flexible linker regions RPA is considered highly flexible and this supports the dynamic binding that RPA is able to achieve. Because of this dynamic binding, RPA is also capable of different conformations that leads to varied numbers of nucleotides that it can engage.
DBDs A, B, C and D are the sites that are involved in ssDNA binding. Protein-protein interactions between RPA and other proteins happen at the N-terminal of RPA1, specifically DBD F, along with the C-terminal of RPA2. Phosphorylation of RPA takes place at the N-terminus of RPA2.
RPA shares many features with the CST complex heterotrimer, although RPA has a more uniform 1:1:1 stoichiometry.
Functions
During DNA replication, RPA prevents single-stranded DNA (ssDNA) from winding back on itself or from forming secondary structures. It also helps protect the ssDNA from being attacked by endonucleases. This keeps DNA unwound for the polymerase to replicate it. RPA also binds to ssDNA during the initial phase of homologous recombination, an important process in DNA repair and prophase I of meiosis.
RPA has a key role in the maintenance of the recombination checkpoint during meiosis of the yeast Saccharomyces cerevisiae. RPA appears to act as a sensor of single-strand DNA for the activation of the meiotic DNA damage response.
Hypersensitivity to DNA damaging agents can be caused by mutations in RPA genes. As in DNA replication, RPA binding keeps ssDNA from base-pairing with itself (self-complementing) so that the resulting nucleoprotein filament can then be bound by Rad51 and its cofactors.
RPA also binds to DNA during the nucleotide excision repair process. This binding stabilizes the repair complex during the repair process. A bacterial homolog is called single-strand binding protein (SSB).
See also
Single-stranded binding protein
Replication protein A1
Replication protein A2
Replication protein A3
References
Genetics | Replication protein A | Biology | 716 |
78,251,489 | https://en.wikipedia.org/wiki/PKS%201424-418 | PKS 1424-418 is a blazar located in the constellation of Centaurus. It has a redshift of 1.522 and was first discovered in 1971 by astronomer Keith Peter Tritton who identified the object as ultraviolet-excessive. This object is also highly polarized with a compact radio source. The radio spectrum of this source appears flat, making it a flat-spectrum radio quasar.
PKS 1424-418 is optically variable and is a strong source of gamma rays. Between 2008 and 2011, PKS 1424-418 showed four phases of bright flares at GeV energies. The flares show a high correlation between the energy ranges, with the exception of one flare that occurred at a time of low gamma-ray activity. In April 2013, it underwent a major gamma-ray outburst, with its peak flux reaching values of F(>100 MeV) > 3 × 10⁻⁶ ph cm⁻² s⁻¹. According to Large Area Telescope observations, this emission originated beyond its broad-line region. A near-infrared flare was observed in PKS 1424-418 in January 2018. In August 2022, it once again displayed an episode of rapid flaring activity in both gamma-ray and optical bands.
PKS 1424-418 contains a radio structure comprising a strong radio core and a weaker component with a position angle of 260°. Further observations showed the core has a size of 0.4 mas, with extended emission both at the core's position and to the northwest. In addition, the core has a flat spectral index of -0.04. A jet is seen extending west from the core before becoming diffuse.
Between May 2009 and September 2019, the gamma-ray emission from PKS 1424-418 was found to undergo a quasi-periodic oscillation with a flux oscillation period of about 353 days; a period of about 355 days is also confirmed at a high significance level by time-domain methods. This might be explained by the orbital motion of a binary supermassive black hole system in which the primary black hole has a mass of M ~ 3.5 × 10⁸ to 5.5 × 10⁹ M☉.
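Periodicity searches of this kind are commonly run on unevenly sampled light curves. The following is a minimal sketch using a Lomb-Scargle periodogram on simulated data with a 353-day modulation; the light curve and its parameters are synthetic, not the published gamma-ray data:

```python
import numpy as np
from astropy.timeseries import LombScargle

# Simulated, unevenly sampled gamma-ray light curve with a ~353-day modulation
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 3800, 120))          # days since start of monitoring
flux = 1.0 + 0.3 * np.sin(2 * np.pi * t / 353) + 0.1 * rng.standard_normal(t.size)

# The Lomb-Scargle periodogram handles the uneven sampling of survey data
frequency, power = LombScargle(t, flux).autopower(
    minimum_frequency=1 / 2000, maximum_frequency=1 / 50)
best_period = 1 / frequency[np.argmax(power)]
print(f"Strongest periodicity: {best_period:.0f} days")
```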
References
External links
PKS 1424-418 on SIMBAD
PKS 1424-418 on NASA/IPAC Database
Blazars
Quasars
Centaurus
2827996
Active galaxies
Astronomical objects discovered in 1971 | PKS 1424-418 | Astronomy | 497 |
6,041,076 | https://en.wikipedia.org/wiki/Airy%20points | Airy points (after George Biddell Airy) are used for precision measurement (metrology) to support a length standard in such a way as to minimise bending or drop of a horizontally supported beam.
Choice of support points
A kinematic support for a one-dimensional beam requires exactly two support points. Three or more support points will not share the load evenly (unless they are hinged in a non-rigid whiffle tree or similar). The position of those points can be chosen to minimize various forms of gravity deflection.
A beam supported at the ends will sag in the middle, resulting in the ends moving closer together and tilting upward. A beam supported only in the middle will sag at the ends, making a similar shape but upside down.
Airy points
Supporting a uniform beam at the Airy points produces zero angular deflection of the ends. The Airy points are symmetrically arranged around the centre of the length standard and are separated by a distance equal to 1/√3 (about 0.5774) of the length of the rod.
"End standards", that is standards whose length is defined as the distance between their flat ends such as long gauge blocks or the , must be supported at the Airy points so that their length is well-defined; if the ends are not parallel, the measurement uncertainty is increased because the length depends on which part of the end is measured. For this reason, the Airy points are commonly identified by inscribed marks or lines. For example, a 1000 mm length gauge would have an Airy point separation of 577.4 mm. A line or pair of lines would be marked onto the gauge 211.3 mm in from each end. Supporting the artifact at these points ensures that the calibrated length is preserved.
Airy's 1845 paper derives the equation for n equally spaced support points. In this case, the distance between adjacent supports is the fraction 1/√(n² − 1) of the length of the rod, which reduces to 1/√3 for the two-support case. He also derives the formula for a rod which extends beyond the reference marks.
Bessel points
"Line standards" are measured between lines marked on their surfaces. They are much less convenient to use than end standards but, when the marks are placed on the neutral plane of the beam, allow greater accuracy.
To support a line standard, one wishes to minimise the linear, rather than angular, motion of the ends. The Bessel points (after Friedrich Wilhelm Bessel) are the points at which the length of the beam is maximized. Because this is a maximum, the effect of a small positioning error is proportional to the square of the error, an even smaller amount.
The Bessel points are located 0.5594 of the length of the rod apart, slightly closer than the Airy points.
Because line standards invariably extend beyond the lines marked on them, the optimal support points depend on both the overall length and the length to be measured. The latter is the quantity to be maximized, requiring a more complex calculation. For example, the 1927–1960 definition of the metre specified that the International Prototype Metre bar was to be measured while "supported on two cylinders of at least one centimetre diameter, symmetrically placed in the same horizontal plane at a distance of 571 mm from each other." Those would be the Bessel points of a beam 1020 mm long.
Other support points of interest
Other sets of support points, even closer than the Bessel points, which may be wanted in some applications are:
The points for minimum sag, 0.5536 times the length. Minimum sag occurs when the centre of the rod sags the same amount as the end points, which is not quite the same thing as minimum horizontal motion of the ends.
The nodes of free vibration, 0.5516 times the length.
The points for zero central sag (any closer and the beam rises between the support points): 0.5228 times the length.
See also
History of measurement
History of the metre
Neutral plane
Test method
Units of measurement
Weights and measures
References
Metrology
Solid mechanics
Statics | Airy points | Physics | 814 |
15,182,389 | https://en.wikipedia.org/wiki/PAIP1 | Polyadenylate-binding protein-interacting protein 1 is a protein that in humans is encoded by the PAIP1 gene.
Function
The protein encoded by this gene interacts with poly(A)-binding protein and with eIF4A, a component of the cap-binding complex eIF4F. It is involved in translation initiation and protein biosynthesis. Overexpression of this gene in COS7 cells stimulates translation. Alternative splicing occurs at this locus, and three transcript variants encoding three distinct isoforms have been identified.
Interactions
PAIP1 has been shown to interact with PABPC1.
References
Further reading | PAIP1 | Chemistry | 126 |
12,557,254 | https://en.wikipedia.org/wiki/Rhabdopelix | Rhabdopelix (meaning "rod pelvis") is a dubious genus of possible kuehneosaurid reptile, from the Late Triassic-age Lockatong Formation of Pennsylvania, United States. Based on partial, possibly chimeric remains, it was described by American naturalist and paleontologist Edward Drinker Cope as an early pterosaur. It held this status until the 1960s, when Ned Colbert reevaluated it for his description of Icarosaurus. He noted that the bones came from a block with the remains of other animals, and that Cope had misinterpreted some of the remains; for example, the rod-like "pubic bones" that had given it its name were actually much more like the bony structures used by Icarosaurus and related animals to glide. Additionally, he couldn't relocate the fossils, which are assumed to be lost. He recommended considering Rhabdopelix a dubious name. Peter Wellnhofer retained it as a pterosaur of unknown affinities in his 1978 review, but rejected this by 1991.
The holotype is likely a chimera consisting of Tanytrachelos, Icarosaurus, or fish fossils.
References
Paleontological chimeras
Nomina dubia
Late Triassic reptiles of North America
Taxa named by Edward Drinker Cope
Prehistoric reptile genera
Fossil taxa described in 1870 | Rhabdopelix | Biology | 282 |
4,883,340 | https://en.wikipedia.org/wiki/Photostat%20machine | The Photostat machine, or Photostat, was an early projection photocopier created in the decade of the 1900s by the Commercial Camera Company, which became the Photostat Corporation. The "Photostat" name, which was originally a trademark of the company, became genericized, and was often used to refer to similar machines produced by the RetinalGraph Company or to any copy made by any such machine.
History
Background
The growth of business during the Industrial Revolution created the need for a more efficient means of transcription than hand copying. Carbon paper was first used in the early 19th century. By the late 1840s copying presses were used to copy outgoing correspondence. One by one, other methods appeared. These included the "manifold writer", developed from Christoph Scheiner's pantograph and used by Mark Twain; copying baths; copying books; and roller copiers. Among the most significant of them was the blueprint process in the early 1870s, which was mainly used to make blueprints of architectural and engineering drawings. Stencil duplicators (more commonly known as "Mimeograph machines") surfaced in 1874, and the Cyclostyle in 1891. All were manual and most involved messy fluids.
Rectigraph and Photostat machines
George C. Beidler of Oklahoma City founded the Rectigraph Company in 1906 or 1907, producing the first photographic copying machines; he later moved the company to Rochester, New York in 1909 to be closer to the Haloid Company, his main source of photographic paper and chemicals.
The Rectigraph Company was acquired by the Haloid Company in 1935. In 1948 Haloid purchased the rights to produce Chester Carlson's xerographic equipment and in 1958 the firm was reorganized as Haloid Xerox, Inc., which in 1961 was renamed Xerox Corporation. Haloid continued selling Rectigraph machines into the 1960s.
The Photostat brand machine, differing in operation from the Rectigraph but serving the same purpose of photographic document copying, was invented in Kansas City by Oscar T. Gregory in 1907. A directory of the city from 1909 shows his "Gregory Commercial Camera Company". By 1910, Gregory had co-filed a patent application with Norman W. Carkhuff, of the photography department of the United States Geological Survey, for a specific type of photographic camera for quickly and easily photographing small objects, with a further object "to provide a camera of the type known as 'copying cameras' that will be simple and convenient [...]" In 1911, the Commercial Camera Company of Providence, Rhode Island, was formed. By 1912, Photostat brand machines were in use, as evidenced by a record of one at the New York Public Library. By 1913, advertisements described the Commercial Camera Company as headquartered at Rochester and as having a licensing and manufacturing relationship with Eastman Kodak. The pair filed another U.S. patent application in 1913 further developing their ideas. By 1920, the Alfred Herbert companies acted as distribution agents in various European markets. The Commercial Camera Company apparently became the Photostat Corporation around 1921, for "Commercial Camera Company" is described as a former name of Photostat Corporation in a 1922 issue of Patent and Trade Mark Review. For at least 40 years the brand was widespread enough that its name was genericized by the public.
The Photostat Corporation was eventually absorbed by Itek in 1963.
Description
Both Rectigraph and Photostat machines consisted of a large camera that photographed documents or papers and exposed the image directly onto rolls of sensitized photographic paper. A prism was placed in front of the lens to reverse the image. After a 10-second exposure, the paper was directed to developing and fixing baths, then either air- or machine-dried. Since the print was exposed directly, without the use of an intermediate film, the result was a negative print. A typical typewritten document would appear on the photostat print with a black background and white letters. Thanks to the prism, the text would remain legible. Producing photostats took about two minutes in total. The result could, in turn, be photostatted again to make any number of positive prints.
The photographic prints produced by such machines are commonly referred to as "photostats" or "photostatic copies". The verbs "photostat", "photostatted", and "photostatting" refer to making copies on such a machine in the same way that the trademarked name "Xerox" was later used to refer to any copy made by means of electrostatic photocopying. People who operated these machines were known as photostat operators.
It was the expense and inconvenience of photostats that drove Chester Carlson to study electrophotography. In the mid-1940s Carlson sold the rights to his invention, which became known as xerography, to the Haloid Company, and photostatting soon sank into history.
See also
Cyclostyle (copier)
Duplicating machines
List of duplicating processes
References
Notes
Bibliography
External links
Glen Gable (2005), Heavy Metal Madness: Making Copies from Carbon to Kinkos, CreativePro
David Owen (2005), "Copies in Seconds", Engineering and Science, 68 (3). pp. 24–31. ISSN 0013-7812 (PDF)
Photocopiers
Printing devices
Machines
Products introduced in 1907 | Photostat machine | Physics,Technology,Engineering | 1,104 |
65,187,964 | https://en.wikipedia.org/wiki/RNA%20therapeutics | RNA therapeutics are a new class of medications based on ribonucleic acid (RNA). Research has been working on clinical use since the 1990s, with significant success in cancer therapy in the early 2010s. In 2020 and 2021, mRNA vaccines have been developed globally for use in combating the coronavirus disease (COVID-19 pandemic). The Pfizer–BioNTech COVID-19 vaccine was the first mRNA vaccine approved by a medicines regulator, followed by the Moderna COVID-19 vaccine, and others.
The main types of RNA therapeutics are those based on messenger RNA (mRNA), antisense RNA (asRNA), RNA interference (RNAi), and RNA aptamers. Of the four types, mRNA-based therapy is the only one based on triggering synthesis of proteins within cells, making it particularly useful in vaccine development. Antisense RNA is complementary to coding mRNA and is used to trigger mRNA inactivation to prevent the mRNA from being used in protein translation. RNAi-based systems use a similar mechanism, and involve the use of both small interfering RNA (siRNA) and micro RNA (miRNA) to prevent mRNA translation and/or degrade mRNA. RNA aptamers, in contrast, are short, single-stranded RNA molecules produced by directed evolution to bind to a variety of biomolecular targets with high affinity, thereby affecting their normal in vivo activity.
RNA is synthesized from template DNA by RNA polymerase, with messenger RNA (mRNA) serving as the intermediary biomolecule between DNA expression and protein translation. Because of its unique properties (such as its typically single-stranded nature and its 2' OH group) and its ability to adopt many different secondary and tertiary structures, both coding and noncoding RNAs have attracted attention in medicine. Research has begun to explore RNA's potential for therapeutic benefit, and unique challenges have arisen during drug discovery and implementation of RNA therapeutics.
mRNA
Messenger RNA (mRNA) is a single-stranded RNA molecule that is complementary to one of the DNA strands of a gene. An mRNA molecule transfers a portion of the DNA code to other parts of the cell for making proteins. DNA therapeutics need access to the nucleus to be transcribed into RNA, and their functionality depends on nuclear envelope breakdown during cell division. In contrast, mRNA therapeutics do not need to enter the nucleus to be functional, since they are translated immediately once they reach the cytoplasm. Moreover, unlike plasmids and viral vectors, mRNAs do not integrate into the genome and therefore do not carry the risk of insertional mutagenesis, making them suitable for use in cancer vaccines, tumor immunotherapy and infectious disease prevention.
Discovery and development
In 1953, Alfred Day Hershey reported that soon after infection with phage, bacteria produced a form of RNA at a high level that was also broken down rapidly. However, the first clear indication of mRNA came from the work of Elliot Volkin and Lazarus Astrachan in 1956, who infected E. coli with T2 bacteriophages in a medium containing ³²P. They found that protein synthesis of E. coli was stopped and phage proteins were synthesized. Then, in May 1961, their collaborating researchers Sydney Brenner, François Jacob, and Jim Watson announced the isolation of mRNA. For a few decades after the discovery of mRNA, research focused on understanding its structural, functional, and metabolic aspects. However, in 1990, Jon A. Wolff demonstrated the idea of nucleic acid-encoded drugs by directly injecting in vitro transcribed (IVT) mRNA or plasmid DNA (pDNA) into the skeletal muscle of mice, which expressed the encoded protein in the injected muscle.
Once IVT mRNA has reached the cytoplasm, it is translated instantly. Thus, it does not need to enter the nucleus to be functional. It also does not integrate into the genome and therefore carries no risk of insertional mutagenesis. Moreover, IVT mRNA is only transiently active and is completely degraded via physiological metabolic pathways. For these reasons, IVT mRNA has undergone extensive preclinical investigation.
Mechanisms
In vitro transcription (IVT) is performed on a linearized DNA plasmid template containing the targeted coding sequence. Then, naked mRNA or mRNA complexed in a nanoparticle is delivered systemically or locally. Subsequently, a portion of the exogenous naked or complexed mRNA enters cells through cell-specific mechanisms. Once in the cytoplasm, the IVT mRNA is translated by the protein synthesis machinery.
There are two identified classes of RNA sensors: toll-like receptors (TLRs) and the RIG-I-like receptor family. TLRs are localized in the endosomal compartment of cells such as DCs and macrophages, while the RIG-I-like receptors act as cytosolic pattern recognition receptors (PRRs). However, the immune response mechanisms, the process of mRNA vaccine recognition by cellular sensors, and the mechanism of sensor activation are still unclear.
Applications
Cancer immunotherapy
In 1995, Robert Conry demonstrated that intramuscular injection of naked RNA encoding carcinoembryonic antigen elicited antigen-specific antibody responses. This was elaborated by demonstrating that dendritic cells (DCs) exposed to mRNA coding for specific antigens, or to total mRNA extracted from tumor cells, induced T cell immune responses and inhibited the growth of tumors when injected into tumor-bearing mice. Researchers then began to approach mRNA-transfected DCs using vaccines based on ex vivo IVT mRNA-transfected DCs. Meanwhile, Argos Therapeutics had initiated a Phase III clinical trial using DCs in advanced renal cell carcinoma in 2015 (NCT01582672), but it was terminated due to lack of efficacy.
For further application, IVT mRNA was optimized for in situ transfection of DCs in vivo. This improved the translation efficiency and stability of IVT mRNA and enhanced the presentation of the mRNA-encoded antigen on MHC class I and II molecules. Researchers then found that direct injection of naked IVT mRNA into lymph nodes was the most effective way to induce T cell responses. Based on this discovery, first-in-human testing of injections of naked IVT mRNA encoding cancer antigens by BioNTech began in patients with melanoma (NCT01684241).
Recently, a new cancer immunotherapy combining self-delivering RNA (sd-rxRNA) and adoptive cell transfer (ACT) therapy was invented by RXi Pharmaceuticals and the Karolinska Institute. In this therapy, the sd-rxRNA eliminates the expression of immunosuppressive receptors and proteins in therapeutic immune cells, improving the ability of the immune cells to destroy tumor cells. The PD-1-targeted sd-rxRNA helped increase the anti-tumor activity of tumor-infiltrating lymphocytes (TIL) against melanoma cells. Based on this idea, mRNA-4157 has been tested and passed a phase I clinical trial.
Cytosolic nucleic acid-sensing pathways can enhance the immune response to cancer. One example is the RIG-I agonist stem-loop RNA 14 (SLR14). In mice, SLR14 significantly delayed tumor growth and extended survival, and it improved the antitumor efficacy of anti-PD1 antibody over single-agent treatment. SLR14 was absorbed by CD11b+ myeloid cells in the tumor microenvironment. Genes associated with immune defense were significantly up-regulated, along with increased CD8+ T lymphocytes, NK cells, and CD11b+ cells. SLR14 also inhibited nonimmunogenic B16 tumor growth, leaving immune memory.
Vaccines
In 1993, the first success of an mRNA vaccine was reported in mice, using liposome-encapsulated IVT mRNA encoding the nucleoprotein of influenza, which induced virus-specific T cells. IVT mRNA formulated with synthetic lipid nanoparticles subsequently induced protective antibody responses against respiratory syncytial virus (RSV) and influenza virus in mice.
There are a few different approaches to IVT mRNA-based vaccine development for infectious diseases. One successful type uses self-amplifying IVT mRNA containing sequences from positive-stranded RNA viruses; it was originally developed for a flavivirus and worked with intradermal injection. Another approach is to inject a two-component vaccine containing an mRNA adjuvant and naked IVT mRNA encoding the influenza hemagglutinin antigen, either alone or in combination with neuraminidase-encoding IVT mRNA.
For HIV treatment, for example, vaccines use DCs transfected with IVT mRNA encoding HIV proteins. A few phase I and II clinical trials using combinations of IVT mRNAs show that antigen-specific CD8+ and CD4+ T cell responses can be induced. However, no antiviral effects have been observed in these clinical trials.
Another mRNA vaccine targets COVID-19. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) emerged in December 2019 and spread all over the world, causing the pandemic of respiratory illness designated coronavirus disease 2019 (COVID-19). The Moderna COVID-19 vaccine, manufactured by Moderna since 2020, is a lipid nanoparticle (LNP)-encapsulated mRNA-based vaccine that encodes a full-length, prefusion-stabilized spike (S)-2P antigen of SARS-CoV-2 with a transmembrane anchor.
Anti-viral
In 2021, SLR14 was reported to prevent infection in the lower respiratory tract and severe disease in an interferon type I (IFN-I)–dependent manner in mice. Immunodeficient mice with chronic SARS-CoV-2 infection experienced near-sterilizing innate immunity with no help from the adaptive immune system.
Tissue regeneration
A 2022 study by researchers from the Mayo Clinic, Maastricht University, and Ethris GmbH, a biotech company that focuses on RNA therapeutics, found that chemically modified mRNA encoding BMP-2 promoted dosage-dependent healing of femoral osteotomies in male rats. The mRNA molecules were complexed within nonviral lipid particles, loaded onto sponges, and surgically implanted into the bone defects. They remained localized around the site of application. Compared to receiving rhBMP-2 directly, bony tissues regenerated after mRNA treatment displayed superior strength and less formation of massive callus.
Limitations
There are many challenges to successfully translating mRNA into drugs, because mRNA is a very large molecule (10^5 to 10^6 Da). Moreover, mRNA is unstable and easily degraded by nucleases, and it also activates the immune system. Furthermore, mRNA has a high negative charge density, which reduces its permeation across cellular membranes. For these reasons, without an appropriate delivery system mRNA is degraded easily, and the half-life of mRNA without a delivery system is only around 7 hours. Even though some of these challenges can be overcome by chemical modifications, delivery of mRNA remains an obstacle. Methods that have been researched to improve mRNA delivery include microinjection, RNA patches (mRNA loaded in a dissolving micro-needle), gene guns, protamine condensation, RNA adjuvants, and encapsulating mRNA in lipid nanoparticles.
Even though in vitro transcribed (IVT) mRNA with delivery agents has shown improved resistance against degradation, more study is needed on how to improve the efficiency of delivery of naked mRNA in vivo.
Approved RNA Therapeutics
patisiran
givosiran
lumasiran
inclisiran
Antisense RNA
Antisense RNA is non-coding, single-stranded RNA that is complementary to a coding sequence of mRNA. It inhibits the ability of mRNA to be translated into protein. Short antisense RNA transcripts are produced within the nucleus by the action of the enzyme Dicer, which cleaves double-stranded RNA precursors into 21–26 nucleotide-long RNA species.
An antisense-based discovery strategy encompasses the rationale and design of screening assays and the application of such assays to the screening of natural product extracts, as in the discovery of fatty acid condensing enzyme inhibitors. Antisense RNA is used for treating cancer and inhibiting metastasis, and vectors are used for antisense sequestration. In particular, administration of microRNAs (miRs) 15 and 16 has been proposed for patients in need of treatment, diagnosis, or prophylaxis of cancer. Antisense drugs are based on the fact that antisense RNA hybridizes with and inactivates mRNA. These drugs are short sequences of RNA that attach to mRNA and stop a particular gene from producing the protein for which it encodes. Antisense drugs are being developed to treat lung cancer, diabetes, and diseases such as arthritis and asthma with a major inflammatory component. Decreased expression of MLLT4 antisense RNA 1 (MLLT4-AS1) is a potential biomarker and predictor of a poor prognosis in gastric cancer. So far, applications of antisense RNAs in antivirus and anticancer treatments and in regulating the expression of related genes in plants and microorganisms have been explored.
Non-viral vectors, viral vectors, and liposomes have been used to deliver antisense RNA through the cell membrane into the cytoplasm and nucleus. Viral vector-based delivery has been found to be the most advantageous among the different delivery systems because of its high transfection efficiency. However, it is difficult to deliver antisense RNA only to the targeted sites. Also, due to the size and stability issues of antisense RNA, there are some limitations to its use. To improve delivery, chemical modifications and new oligonucleotide designs have been studied to enhance drug distribution, side effects, and tolerability.
RNAi
Interfering RNAs are a class of short, noncoding RNAs that act to translationally or post-transcriptionally repress gene expression. Their discovery and subsequent identification as key effectors of post-transcriptional gene regulation have made small interfering RNA (siRNA) and micro RNA (miRNA) potential therapeutics for systemic diseases. The RNAi system was originally discovered in 1990 by Jorgensen et al., who were doing research involving the introduction of coloration genes into petunias, and it is thought that this system originally developed as a means of innate immunity against double-stranded RNA viruses.
siRNA
Small interfering RNAs (siRNAs) are short, 19-23 base-pair (with a 3' overhang of two nucleotides), double-stranded pieces of RNA that participate in the RNA-induced silencing complex (RISC) for gene silencing. Specifically, siRNA is bound by the RISC complex, where it is unwound using ATP hydrolysis. It is then used as a guide by the enzyme "Slicer" to target mRNAs for degradation based on complementary base-pairing to the target mRNA. As a therapeutic, siRNA can be delivered locally, through the eye or nose, to treat various diseases. Local delivery benefits from simple formulation and drug delivery and high bioavailability of the drug. Systemic delivery is necessary to target cancers and other diseases. Targeting the siRNA when delivered systemically is one of the main challenges in siRNA therapeutics. While it is possible to use intravenous injection to deliver siRNA therapies, concerns have been raised about the large volumes used in the injection, as these must often be ~20-30% of the total blood volume. Other methods of delivery include liposome packaging, conjugation to membrane-permeable peptides, and direct tissue/organ electroporation. Additionally, it has been found that exogenous siRNAs last only a few days (a few weeks at most in non-dividing cells) in vivo. If siRNA is able to successfully reach its target, it has the potential to therapeutically regulate gene expression through its ability to base-pair to mRNA targets and promote their degradation through the RISC system. Currently, siRNA-based therapy is in a phase I clinical trial for the treatment of age-related macular degeneration, although it is also being explored for use in cancer therapy. For instance, siRNA can be used to target mRNAs that code for proteins that promote tumor growth, such as the VEGF receptor and the telomerase enzyme.
miRNA
Micro RNAs (miRNAs) are short, ~19-23 nucleotide-long RNA oligonucleotides that are involved in the microRNA-induced silencing complex. Specifically, once loaded onto the ARGONAUTE enzyme, miRNAs pair with mRNAs to repress translation and post-transcriptionally destabilize the mRNA. While they are functionally similar to siRNAs, miRNAs do not require extensive base-pairing for mRNA silencing (as few as seven base-pairs with the target can suffice), thus allowing them to broadly affect a wider range of mRNA targets. In the cell, miRNAs use switch, tuning, and neutral interactions to finely regulate gene repression. As therapeutics, miRNAs have the potential to affect biochemical pathways throughout the organism.
With more than 400 miRNAs identified in humans, discerning their target genes for repression is the first challenge. Multiple databases have been built, for example TargetScan, using miRNA seed matching (a minimal sketch of the idea follows below). In vitro assays assist in determining the phenotypic effects of miRNAs, but due to the complex nature of gene regulation not all identified miRNAs have the expected effect. Additionally, several miRNAs have been found to act as either tumor suppressors or oncogenes in vivo, such as the oncogenic miR-155 and miR-17-92.
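Seed matching itself is a simple string operation. The following is a minimal sketch of the idea behind such databases, with illustrative sequences; the scoring and context features used by real tools such as TargetScan are far more elaborate:

```python
# Minimal sketch of miRNA seed matching. Sequences below are illustrative.
def seed_sites(mirna: str, utr: str, seed_len: int = 7) -> list[int]:
    """Return positions in a 3'UTR that pair with the miRNA seed.

    The seed is nucleotides 2-8 of the miRNA (from its 5' end); a target
    site on the mRNA is the reverse complement of that seed.
    """
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    seed = mirna[1:1 + seed_len]                      # positions 2..8
    site = "".join(comp[b] for b in reversed(seed))   # reverse complement
    return [i for i in range(len(utr) - seed_len + 1)
            if utr[i:i + seed_len] == site]

mirna = "UGAGGUAGUAGGUUGUAUAGUU"   # a let-7 family sequence
utr = "AAGCUACCUCAACUAUACAACC"     # illustrative 3'UTR fragment
print(seed_sites(mirna, utr))      # -> [3]
```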
In clinical trials, miRNAs are commonly used as biomarkers for a variety of diseases, potentially providing earlier diagnosis as well as information on disease progression, stage, and genetic links. Phase 1 and 2 trials currently test miRNA mimics (to restore miRNA expression) and miRNA antagonists (to repress overactive miRNAs) in patients with cancers and other diseases. In particular, mimic miRNAs are used to introduce miRNAs that act as tumor suppressors into cancerous tissues, while miRNA antagonists are used to target oncogenic miRNAs to prevent their cancer-promoting activity. Therapeutic miRNA is also used alongside common therapies (such as cancer therapies) that are known to overexpress or destabilize the patient's miRNA levels. An example of a mimic miRNA therapy that demonstrated efficacy in impeding lung cancer tumor growth in mouse studies is miR-34a.
One concerning aspect of miRNA-based therapies is the potential for the exogeneous miRNA to affect miRNA silencing mechanisms within normal body cells, thereby affecting normal cellular biochemical pathways. However, in vivo studies have indicated that miRNAs display little to no effect in non-target tissues/organs.
RNA aptamers
Broadly, aptamers are small molecules composed of either single-stranded DNA or RNA and are typically 20-100 nucleotides in length, or ~3-60 kDa. Because of their single-stranded nature, aptamers are capable of forming many secondary structures, including pseudoknots, stem loops, and bulges, through intra-strand base pairing interactions. The combinations of secondary structures present in an aptamer confer it a particular tertiary structure which in turn dictates the specific target the aptamer will selectively bind to. Because of the selective binding ability of aptamers, they are considered a promising biomolecule for use in pharmaceuticals. Additionally, aptamers exhibit tight binding to targets, with dissociation constants often in the pM to nM range. Besides their strong binding ability, aptamers are also valued because they can be used on targets that are not capable of being bound by small peptides generated by phage display or by antibodies, and they are able to differentiate between conformational isomers and amino acid substitutions. Also, because aptamers are nucleic-acid based, they can be directly synthesized, eliminating the need for cell-based expression and extraction as is the case in antibody production. RNA aptamers in particular are capable of producing a myriad of different structures, leading to speculations that they are more discriminating in their target affinity compared to DNA aptamers.
Discovery and development
Aptamers were originally discovered in 1990, when Larry Gold and Craig Tuerk used a method of directed evolution known as SELEX to isolate a small single-stranded RNA molecule capable of binding to T4 bacteriophage DNA polymerase. The term "aptamer" was coined by Andrew Ellington, who worked with Jack Szostak to select an RNA aptamer capable of tight binding to certain organic dye molecules. The term itself combines the Latin "aptus" ("to fit") and the Greek "meros" ("part").
RNA aptamers are not so much "created" as "selected." To develop an RNA aptamer capable of selective binding to a molecular target, a method known as Systematic Evolution of Ligands by EXponential enrichment (SELEX) is used to isolate a unique RNA aptamer from a pool of ~10^13 to 10^16 different candidate aptamers, otherwise known as a library. The library of potential aptamer oligonucleotides is first incubated with a non-target species to remove aptamers that exhibit non-specific binding. After removal of these non-specific binders, the remaining library members are exposed to the desired target, which can be a protein, peptide, cell type, or even an organ (in the case of live animal-based SELEX). From there, the RNA aptamers bound to the target are reverse-transcribed to cDNA, which is amplified through PCR, and the PCR products are then transcribed back into RNA. These new RNA transcripts are used to repeat the selection cycle many times, eventually producing a homogeneous pool of RNA aptamers capable of highly specific, high-affinity target binding.
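The logic of the selection cycle can be caricatured in a few lines of Python. In the toy simulation below (all numbers invented, and the pool far smaller than a real library), each round keeps candidates in proportion to an assigned binding probability and then "amplifies" the survivors back to pool size, so mean affinity climbs while sequence diversity collapses:

import random

random.seed(0)
POOL = 100_000                      # toy library (real SELEX: ~10^13 to 10^16)
# Assign each candidate a binding probability; almost all bind poorly.
affinity = [random.betavariate(0.5, 50) for _ in range(POOL)]
pool = list(range(POOL))            # indices into `affinity`

for rnd in range(8):                # repeated selection cycles
    # Selection: a candidate survives with probability equal to its affinity.
    bound = [i for i in pool if random.random() < affinity[i]]
    # Amplification (stand-in for reverse transcription, PCR, transcription):
    pool = random.choices(bound, k=POOL)
    mean_aff = sum(affinity[i] for i in pool) / POOL
    print(f"round {rnd + 1}: mean affinity {mean_aff:.3f}, "
          f"{len(set(pool))} distinct sequences left")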
Examples
RNA aptamers can be designed to act as antagonists, agonists, or so-called "RNA decoy aptamers." In the case of antagonists, the RNA aptamer is used either to prevent the binding of a certain protein to its cell membrane receptor or to prevent the protein from performing its activity by binding to the protein's target. Currently, the only RNA aptamer-based therapies that have advanced to clinical trials act as antagonists. When RNA aptamers are designed to act as agonists, they promote immune cell activation as co-stimulatory molecules, thus aiding the mobilization of the body's own defense system. For RNA decoy aptamers, the synthetic RNA aptamer resembles a native RNA molecule. As such, the protein(s) which bind the native RNA target instead bind the RNA aptamer, possibly interfering with the biomolecular pathway of a particular disease. In addition to their utility as direct therapeutic agents, RNA aptamers are also being considered for other therapeutic roles. For instance, by conjugating the RNA aptamer to a drug compound, the RNA aptamer can act as a targeted delivery system for that drug; such aptamer-drug conjugates are known as ApDCs. Additionally, through conjugation to a radioisotope or a fluorescent dye molecule, RNA aptamers may be useful in diagnostic imaging.
Because of the SELEX process utilized to select RNA aptamers, RNA aptamers can be generated for many potential targets. By directly introducing the RNA aptamers to the target during SELEX, a very selective, high-affinity, homogeneous pool of RNA aptamers can be produced. As such, RNA aptamers can be made to target small peptides and proteins, as well as cell fragments, whole cells, and even specific tissues. Examples of RNA aptamer molecular targets and potential targets include vascular endothelial growth factor, osteoblasts, and C-X-C chemokine ligand 12 (CXCL12).
An example of an RNA aptamer therapy is pegaptanib (marketed as Macugen), the only FDA-approved RNA aptamer treatment. Originally approved in 2004 to treat age-related macular degeneration, pegaptanib is a 28-nucleotide RNA aptamer that acts as a VEGF antagonist. However, it is not as effective as antibody-based treatments such as bevacizumab and ranibizumab. Another example of an RNA aptamer therapeutic is NOX-A12, a 45-nucleotide RNA aptamer in clinical trials for chronic lymphocytic leukemia, pancreatic cancer, and other cancers. NOX-A12 acts as an antagonist of CXCL12/SDF-1, a chemokine involved in tumor growth.
Limitations
While the high selectivity and tight binding of RNA aptamers have generated interest in their use as pharmaceuticals, many problems have prevented them from being successful in vivo. For one, without modification, RNA aptamers are degraded by nucleases within minutes of being introduced into the body. Also, due to their small size, RNA aptamers can be cleared from the bloodstream by the renal system. Because of their negative charge, RNA aptamers are additionally known to bind proteins in the bloodstream, leading to non-target tissue delivery and toxicity. Care must also be taken when isolating RNA aptamers, as aptamers containing repeated cytosine-phosphate-guanine (CpG) sequences can activate the immune system through the Toll-like receptor pathway.
In order to combat some of the in vivo limitations of RNA aptamers, various modifications can be added to the nucleotides to improve the aptamer's efficacy. For instance, a polyethylene glycol (PEG) moiety can be attached to increase the size of the aptamer, thereby preventing its removal from the bloodstream by the renal glomerulus. However, PEG has been implicated in allergic reactions during in vivo testing. Modifications can also be added to prevent nuclease degradation, such as a 2' fluoro or amino group, or a 3' inverted thymidine. Additionally, the aptamer can be synthesized with the ribose sugar in the L-form instead of the D-form, further preventing nuclease recognition; such aptamers are known as Spiegelmers. To prevent Toll-like receptor pathway activation, the cytosine nucleobases within the aptamer can be methylated. Nevertheless, despite these potential solutions to reduced in vivo efficacy, chemically modifying the aptamer may weaken its binding affinity towards its target.
See also
Riboswitch
ncRNA therapy
References
External links
RNA therapeutics on the rise, Nature (April 2020)
RNA
Biotechnology
Molecular biology | RNA therapeutics | Chemistry,Biology | 5,749 |
25,161,786 | https://en.wikipedia.org/wiki/HD%20156411 | HD 156411 is a 7th magnitude G-type main-sequence star located approximately 186 light years away in the southern constellation Ara. This star is larger, hotter, brighter, and more massive than the Sun. Its metal content is three-fourths that of the Sun. The star is around 4.3 billion years old and is spinning with a projected rotational velocity of 1.8 km/s. Naef and associates (2010) noted the star appears to be slightly evolved, and thus may be in the process of leaving the main sequence. In 2009, a gas giant planet was found in orbit around the star.
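As a rough consistency check on these figures (a back-of-the-envelope calculation assuming the quoted apparent magnitude $m \approx 7$ and distance $186\,\mathrm{ly} \approx 57\,\mathrm{pc}$, and neglecting extinction), the distance modulus gives the star's absolute magnitude:

M = m - 5\log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right) \approx 7.0 - 5\log_{10}(5.7) \approx 3.2

which is brighter than the Sun's $M_V \approx 4.8$, consistent with the description above.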
The star HD 156411 is named Inquill. The name was selected in the NameExoWorlds campaign by Peru, during the 100th anniversary of the IAU. Inquill was one half of the couple involved in the tragic love story Way to the Sun by Abraham Valdelomar.
See also
List of extrasolar planets
References
G-type main-sequence stars
Planetary systems with one confirmed planet
Ara (constellation)
Durchmusterung objects
156411
084787 | HD 156411 | Astronomy | 229 |
62,899,094 | https://en.wikipedia.org/wiki/Kristi%20Kiick | Kristi Lynn Kiick is the Blue and Gold Distinguished Professor of Materials Science and Engineering at the University of Delaware. She studies polymers, biomaterials and hydrogels for drug delivery and regenerative medicine. She is a Fellow of the American Chemical Society, the American Institute for Medical and Biological Engineering, and of the National Academy of Inventors. She served for nearly eight years as the deputy dean of the college of engineering at the University of Delaware.
Early life and education
Kiick first became interested in a career in the chemical sciences while she was in high school. She studied chemistry at the University of Delaware, from which she graduated summa cum laude as a Eugene du Pont memorial distinguished scholar. She was a master's student at the University of Georgia, where she was awarded a National Science Foundation (NSF) predoctoral fellowship, and joined Kimberly-Clark as a research scientist in 1992. Kiick returned to academia for a second master's degree in polymer science and engineering at the University of Massachusetts Amherst. She completed her doctoral research at the California Institute of Technology as a National Defense Science and Engineering Graduate (NDSEG) fellow. She received her PhD from the University of Massachusetts Amherst in 2001 for work on templated macromolecular synthesis under the supervision of David A. Tirrell, prior to starting her faculty position at the University of Delaware that same year.
Research and career
Kiick designs polymer nanostructures for targeted therapies and hydrogel matrices for regenerative medicine. She makes use of biomimetic self-assembly, bioconjugation and biosynthesis. In particular, Kiick has worked on polymer-peptide macromolecular structures that can engage cellular targets. These include the use of polyethylene glycol (PEG) in click chemistry to form hydrogels that degrade selectively in response to molecules present in tissues and extracellular matrix. Kiick has shown it is possible to selectively release small molecule cargo with a tuned release for applications in targeted drug-delivery and vascular grafts. She has developed resilin-like polypeptides (RLP), elastomeric materials that can be cross-linked using small molecules, as well as hydrogels that contain nanoparticles for targeting tumors and inflammatory conditions. Resilin is a primary elastomeric protein that is found in insects, and helps them to jump long distances and produce sound.
She joined the faculty at the University of Delaware in 2001, and earned the rank of associate professor in 2007. In 2011 Kiick was promoted to the rank of professor of materials science and engineering and also named deputy dean of the University of Delaware’s college of engineering. In 2019-2020 she was awarded a Leverhulme Visiting Professorship from the Leverhulme Trust and a Fulbright Scholarship from the Fulbright Program to the University of Nottingham, to develop protocols for fabricating bioelastomeric materials.
Awards and honours
Her awards and honours include:
2003 National Science Foundation CAREER Award
2004 University of Delaware Francis Alison Young Scholar Award
2010 University of Minnesota Etter Memorial Lectureship in Chemistry
2012 University of Delaware Trabant Award for Women's Equity
2014 University of Southern Mississippi Bayer Distinguished Lectureship
2014 Elected a fellow of the American Chemical Society (ACS)
2014 Elected a fellow of the American Institute for Medical and Biological Engineering (AIMBE)
2015 University of Southern Mississippi Covestro Distinguished Lectureship
2019 Fulbright Program Scholar
2019 Elected a fellow of the National Academy of Inventors
Selected publications
Her publications include:
Personal life
Kiick is married with two children.
References
Living people
American women chemists
University of Delaware alumni
University of Delaware faculty
University of Georgia alumni
University of Massachusetts Amherst College of Engineering alumni
Supramolecular chemistry
1967 births
American women academics
21st-century American women
Academics of the University of Nottingham | Kristi Kiick | Chemistry,Materials_science | 779 |
20,836 | https://en.wikipedia.org/wiki/Minimalism | In visual arts, music and other media, minimalism is an art movement that began in the post-war era in Western art. The movement is often interpreted as a reaction to abstract expressionism and modernism; it anticipated contemporary post-minimal art practices, which extend or reflect on minimalism's original objectives. Minimalism's key objective was to strip away conventional characterizations of art, bringing forward the importance of the object itself, and of the viewer's experience of it, with minimal mediation from the artist. Prominent artists associated with minimalism include Donald Judd, Agnes Martin, Dan Flavin, Carl Andre, Robert Morris, Anne Truitt and Frank Stella.
Minimalism in music often features repetition and gradual variation, such as the works of La Monte Young, Terry Riley, Steve Reich, Philip Glass, Julius Eastman and John Adams. The term has also been used to describe the plays and novels of Samuel Beckett, the films of Robert Bresson, the stories of Raymond Carver, and the automobile designs of Colin Chapman.
In recent years, minimalism has come to refer to anything or anyone that is spare or stripped to its essentials.
Visual arts
Minimalism in visual art, sometimes called "minimal art", "literalist art" and "ABC Art", refers to a specific movement of artists that emerged in New York in the early 1960s in response to abstract expressionism. Examples of artists working in painting that are associated with Minimalism include Nassos Daphnis, Frank Stella, Kenneth Noland, Al Held, Ellsworth Kelly, Robert Ryman and others; those working in sculpture include Donald Judd, Dan Flavin, David Smith, Anthony Caro and more. Minimalism in painting can be characterized by the use of the hard edge, linear lines, simple forms, and an emphasis on two dimensions. Minimalism in sculpture can be characterized by very simple geometric shapes often made of industrial materials like plastic, metal, aluminum, concrete, and fiberglass; these materials are usually left raw or painted a solid colour.
Minimalism was in part a reaction against the painterly subjectivity of Abstract Expressionism that had been dominant in the New York School during the 1940s and 1950s. Dissatisfied with the intuitive and spontaneous qualities of Action Painting, and Abstract Expressionism more broadly, Minimalism as an art movement asserted that a work of art should not refer to anything other than itself and should omit any extra-visual association.
Donald Judd's work was showcased in 1964 at Green Gallery in Manhattan, as were Flavin's first fluorescent light works, while other leading Manhattan galleries like Leo Castelli Gallery and Pace Gallery also began to showcase artists focused on minimalist ideas.
Minimalism in visual art broadly
In a more general sense, minimalism as a visual strategy can be found in the geometric abstractions of painters associated with the Bauhaus movement, in the works of Kazimir Malevich, Piet Mondrian and other artists associated with the De Stijl movement, the Russian Constructivist movement, and in the work of the Romanian sculptor Constantin Brâncuși.
Minimalism as a formal strategy has been deployed in the paintings of Barnett Newman, Ad Reinhardt, Josef Albers, and the works of artists as diverse as Pablo Picasso, Yayoi Kusama, Giorgio Morandi, and others. Yves Klein had painted monochromes as early as 1949, and held the first private exhibition of this work in 1950—but his first public showing was the publication of the Artist's book Yves: Peintures in November 1954.
Literalism
Michael Fried called the minimalist artists literalists, and used "literalism" as a pejorative, due to his position that art should deliver transcendental experience through metaphor, symbolism, and stylization. Per Fried's (controversial) view, literalist art needs a spectator to validate it as art: an "object in a situation" becomes art only in the eyes of an observer. For a conventional sculpture, by contrast, its physical location is irrelevant, and its status as a work of art remains even when unseen. Donald Judd's pieces, on the other hand, are just objects sitting in the desert sun waiting for a visitor to discover them and accept them as art.
Design, architecture, and spaces
The term minimalism is also used to describe a trend in design and architecture, wherein the subject is reduced to its necessary elements. Minimalist architectural designers focus on effectively using vacant space, employing neutral colors, and eliminating decoration, while emphasizing materiality, tactility, texture, weight and density. Minimalist architecture became popular in the late 1980s in London and New York, where architects and fashion designers worked together in boutiques to achieve simplicity, using white elements, cold lighting, and large spaces with minimal furniture and few decorative elements.
The works of De Stijl artists are a major reference: De Stijl expanded the ideas of expression by meticulously organizing basic elements such as lines and planes. In 1924, the Rietveld Schröder House, a precursor to minimalism, was commissioned by Truus Schröder-Schräder. The house emphasizes its slabs, beams and posts, reflecting De Stijl's philosophy on the relationship between form and function. With regard to home design, more attractive "minimalistic" designs are not truly minimalist because they are larger and use more expensive building materials and finishes.
Minimalist design has been highly influenced by Japanese traditional design and architecture. Some observers describe the emergence of minimalism as a response to the brashness and chaos of urban life. In Japan, for example, minimalist architecture began to gain traction in the 1980s when its cities experienced rapid expansion and booming populations. The design was considered an antidote to the "overpowering presence of traffic, advertising, jumbled building scales, and imposing roadways." The chaotic environment was driven not only by urbanization, industrialization, and technology, but also by the Japanese experience of constantly having to demolish structures on account of the destruction wrought by World War II and by earthquakes, including the calamities they entail, such as fire. The minimalist design philosophy did not arrive in Japan by way of another country, as it was already part of Japanese culture, rooted in Zen philosophy. Some specifically attribute the design movement to Japan's spirituality and view of nature.
Architect Ludwig Mies van der Rohe (1886–1969) adopted the motto "Less is more" to describe his aesthetic. His tactic was one of arranging the necessary components of a building to create an impression of extreme simplicity—he enlisted every element and detail to serve multiple visual and functional purposes; for example, designing a floor to also serve as the radiator, or a massive fireplace to also house the bathroom. Designer Buckminster Fuller (1895–1983) adopted the engineer's goal of "Doing more with less", but his concerns were oriented toward technology and engineering rather than aesthetics.
Concepts and design elements
The concept of minimalist architecture is to strip everything down to its essential quality and achieve simplicity. The idea is not to be completely without ornamentation, but for all parts, details, and joinery to be reduced to a stage where nothing can be removed to further improve the design.
The considerations for 'essences' are light, form, detail of material, space, place, and human condition. Minimalist architects not only consider the physical qualities of the building. They consider the spiritual dimension and the invisible, by listening to the figure and paying attention to details, people, space, nature, and materials, believing this reveals the abstract quality of something that is invisible and aids the search for the essence of those invisible qualities—such as natural light, sky, earth, and air. In addition, they "open a dialogue" with the surrounding environment to decide the most essential materials for the construction and create relationships between buildings and sites.
In minimalist architecture, design elements strive to convey the message of simplicity. The basic geometric forms, elements without decoration, simple materials and the repetitions of structures represent a sense of order and essential quality. The movement of natural light in buildings reveals simple and clean spaces. In the late 19th century as the arts and crafts movement became popular in Britain, people valued the attitude of 'truth to materials' with respect to the profound and innate characteristics of materials. Minimalist architects humbly 'listen to figure,' seeking essence and simplicity by rediscovering the valuable qualities in simple and common materials.
Influences from Japanese tradition
The idea of simplicity appears in many cultures, especially the Japanese traditional culture of Zen Buddhist philosophy. The Japanese translated Zen culture into aesthetic and design elements for their buildings. This idea of architecture has influenced Western society, especially in America since the mid-18th century. Moreover, it inspired minimalist architecture in the 19th century.
Zen concepts of simplicity transmit the ideas of freedom and essence of living. Simplicity is not only an aesthetic value; it has a moral dimension that looks into the nature of truth and reveals the inner qualities and essence of materials and objects. For example, the sand garden at Ryōan-ji temple demonstrates the concepts of simplicity and essentiality through the considered setting of a few stones within a huge empty space.
The Japanese aesthetic principle of ma refers to empty or open space. It removes all the unnecessary internal walls and opens up the space. The emptiness of spatial arrangement reduces everything down to the most essential quality.
The Japanese aesthetic of wabi-sabi values the quality of simple and plain objects. It appreciates the absence of unnecessary features, treasures a life in quietness and aims to reveal the innate character of materials. For example, the Japanese floral art of ikebana has the central principle of letting the flower express itself. Practitioners cut off the branches, leaves and blossoms from the plants and retain only the essential part of the plant. This conveys the idea of essential quality and innate character in nature.
Minimalist architects and their works
The Japanese minimalist architect Tadao Ando conveys the Japanese traditional spirit and his own perception of nature in his works. His design concepts are materials, pure geometry and nature. He normally uses concrete or natural wood and basic structural form to achieve austerity and rays of light in space. He also sets up dialogue between the site and nature to create relationship and order with the buildings. Ando's works and the translation of Japanese aesthetic principles are highly influential on Japanese architecture.
Another Japanese minimalist architect, Kazuyo Sejima, works on her own and in conjunction with Ryue Nishizawa, as SANAA, producing iconic Japanese minimalist buildings. Credited with creating and influencing a particular genre of Japanese minimalism, Sejima's delicate, intelligent designs may use white color, thin construction sections and transparent elements to create the phenomenal building type often associated with minimalism. Works include the New Museum (2010) in New York City, Small House (2000) in Tokyo, and House Surrounded by Plum Trees (2003) in Tokyo.
In the Vitra Conference Pavilion, Weil am Rhein (1993), the concepts were to bring together the relationships between building, human movement, site and nature, establishing a dialogue between the building and its site, which is a main point of minimalist ideology. The building uses the simple forms of circle and rectangle to contrast the filled and void space of the interior and nature. In the foyer, a large landscape window looks out to the exterior. This achieves the simplicity and silence of architecture and enhances the light, wind, time and nature in the space.
John Pawson is a British minimalist architect; his design concepts are soul, light, and order. He believes that, through reducing clutter and simplifying the interior to a point that gets beyond the idea of essential quality, one reaches a sense of clarity and richness of simplicity instead of emptiness. The materials in his design reveal the perception toward space, surface, and volume. Moreover, he likes to use natural materials because of their aliveness, sense of depth and individual quality. He is also attracted by the important influences of Japanese Zen philosophy.
Calvin Klein Madison Avenue, New York, 1995–96, is a boutique that conveys Calvin Klein's ideas of fashion. John Pawson's interior design concepts for this project are to create simple, peaceful and orderly spatial arrangements. He used stone floors and white walls to achieve simplicity and harmony for space. He also emphasises reduction and eliminates the visual distortions, such as the air conditioning and lamps, to achieve a sense of purity for the interior.
Alberto Campo Baeza is a Spanish architect and describes his work as essential architecture. He values the concepts of light, idea and space. Light is essential and achieves the relationship between inhabitants and the building. Ideas are to meet the function and context of space, forms, and construction. Space is shaped by the minimal geometric forms to avoid decoration that is not essential.
Literature
Literary minimalism is characterized by an economy with words and a focus on surface description. Minimalist writers eschew adverbs and prefer allowing context to dictate meaning. Readers are expected to take an active role in creating the story, to "choose sides" based on oblique hints and innuendo, rather than react to directions from the writer.
Austrian architect and theorist Adolf Loos published early writings about minimalism in Ornament and Crime.
The precursors to literary minimalism are famous novelists Stephen Crane and Ernest Hemingway.
Some 1940s-era crime fiction of writers such as James M. Cain and Jim Thompson adopted a stripped-down, matter-of-fact prose style to considerable effect; some classify this prose style as minimalism.
Another strand of literary minimalism arose in response to the metafiction trend of the 1960s and early 1970s (John Barth, Robert Coover, and William H. Gass). These writers were also sparse with prose and kept a psychological distance from their subject matter.
Minimalist writers, or those who are identified with minimalism during certain periods of their writing careers, include the following: Raymond Carver, Ann Beattie, Bret Easton Ellis, Charles Bukowski, K. J. Stevens, Amy Hempel, Bobbie Ann Mason, Tobias Wolff, Grace Paley, Sandra Cisneros, Mary Robison, Frederick Barthelme, Richard Ford, Patrick Holland, Cormac McCarthy, David Leavitt and Alicia Erian.
American poets such as William Carlos Williams, early Ezra Pound, Robert Creeley, Robert Grenier, and Aram Saroyan are sometimes identified with their minimalist style. The term "minimalism" is also sometimes associated with the briefest of poetic genres, haiku, which originated in Japan, but has been domesticated in English literature by poets such as Nick Virgilio, Raymond Roseliep, and George Swede.
The Irish writer Samuel Beckett is well known for his minimalist plays and prose, as is the Norwegian writer Jon Fosse.
Dimitris Lyacos's With the People from the Bridge, combining elliptical monologues with a pared-down prose narrative, is a contemporary example of minimalist playwriting.
In his novel The Easy Chain, Evan Dara includes a 60-page section written in the style of musical minimalism, in particular inspired by composer Steve Reich. Intending to represent the psychological state (agitation) of the novel's main character, the section's successive lines of text are built on repetitive and developing phrases.
Music
The term "minimal music" was derived around 1970 by Michael Nyman from the concept of minimalism, which was earlier applied to the visual arts. More precisely, it was in a 1968 review in The Spectator that Nyman first used the term, to describe a ten-minute piano composition by the Danish composer Henning Christiansen, along with several other unnamed pieces played by Charlotte Moorman and Nam June Paik at the Institute of Contemporary Arts in London.
However, the roots of minimal music are older. In France, Yves Klein allegedly conceived his Monotone Symphony (formally The Monotone-Silence Symphony) in 1947 or 1949 (though it premiered only in 1960), a work that consists of a single 20-minute sustained chord followed by 20 minutes of silence.
Film and cinema
In film, minimalism usually is associated with filmmakers such as Robert Bresson, Chantal Akerman, Carl Theodor Dreyer, and Yasujirō Ozu. Their films typically tell a simple story with straightforward camera usage and minimal use of score. Paul Schrader named their kind of cinema: "transcendental cinema". In the present, a commitment to minimalist filmmaking can be seen in film movements such as Dogme 95, mumblecore, and the Romanian New Wave. Abbas Kiarostami, Elia Suleiman, and Kelly Reichardt are also considered minimalist filmmakers.
The Minimalists – Joshua Fields Millburn, Ryan Nicodemus, and Matt D'Avella – directed and produced the film Minimalism: A Documentary, which showcased the idea of minimal living in the modern world.
In other fields
Cooking
Breaking from the complex, hearty dishes established as orthodox haute cuisine, nouvelle cuisine was a culinary movement that consciously drew from minimalism and conceptualism. It emphasized more basic flavors, careful presentation, and a less involved preparation process. The movement was mainly in vogue during the 1960s and 1970s, after which it once again gave way to more traditional haute cuisine, retroactively titled cuisine classique. However, the influence of nouvelle cuisine can still be felt through the techniques it introduced.
Fashion
The capsule wardrobe is an example of minimalism in fashion. Constructed of only a few staple pieces that do not go out of style, and generally dominated by only one or two colors, capsule wardrobes are meant to be light, flexible and adaptable, and can be paired with seasonal pieces when the situation calls for them. The modern idea of a capsule wardrobe dates back to the 1970s, and is credited to London boutique owner Susie Faux. The concept was further popularized in the next decade by American fashion designer Donna Karan, who designed a seminal collection of capsule workwear pieces in 1985.
Science communication
To portray global warming to non-scientists, British climate scientist Ed Hawkins developed warming stripes graphics in 2018 that are deliberately devoid of scientific or technical indicia, for ease of understanding by non-scientists. Hawkins explained that "our visual system will do the interpretation of the stripes without us even thinking about it".
Warming stripe graphics resemble color field paintings, stripping out all distractions and using only color to convey meaning. Color field pioneer artist Barnett Newman said he was "creating images whose reality is self-evident", an ethos that Hawkins is said to have applied to the problem of climate change and leading one commentator to remark that the graphics are "fit for the Museum of Modern Art or the Getty."
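The underlying graphic is straightforward to reproduce: one colored bar per year, with every axis, tick, and label removed so that only color carries meaning. Below is a minimal matplotlib sketch; the anomaly series is a synthetic placeholder, not real climate data.

import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1850, 2024)
# Synthetic warming trend plus noise, standing in for observed anomalies.
anomaly = np.linspace(-0.3, 1.1, years.size) + 0.1 * np.random.randn(years.size)

fig, ax = plt.subplots(figsize=(10, 2))
colors = plt.cm.RdBu_r((anomaly - anomaly.min()) / np.ptp(anomaly))
ax.bar(years, np.ones_like(years), width=1.0, color=colors)
ax.set_axis_off()                   # no ticks, labels, or frame: color only
plt.savefig("stripes.png", bbox_inches="tight")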
A tempestry—a portmanteau of "temperature" and "tapestry"—is a tapestry using stripes of specific colors of yarn to represent respective temperature ranges. The tapestries visually represent global warming occurring at given locations.
Minimalist lifestyle
In a lifestyle adopting minimalism, there is an effort to use only the most essential materials, in quantities that do not exceed limits the user imposes on themselves. Many terms have evolved from the concept, like minimalist decor, minimalist skincare, minimalist style, and minimalist accessories. All such terms signify the use of only essential products in that niche. This can help one focus on the things that are important in one's life. It can reduce waste. It can also save the time spent acquiring excess materials that may turn out to be unnecessary.
A minimalist lifestyle helps one enjoy life with simple things that are available without undue effort, rather than with things acquired at great expense. Minimalism can also lead to less clutter in living spaces.
See also
Notes and references
Notes
References
Sources
Further reading
Keenan, David, and Michael Nyman (4 February 2001). "Claim to Frame". Sunday Herald
External links
Agence Photographique de la Réunion des musées nationaux et du Grand Palais des Champs-Elysées
"A Short History of Minimalism—Donald Judd, Richard Wollheim, and the origins of what we now describe as minimalist" By Kyle Chayka January 14, 2020 The Nation
Contemporary art movements
Modern art
Abstract art
Western art
Modernism
Simple living
1960s in art
1970s in art
Postmodern art
Postmodernism
Post-war period | Minimalism | Engineering | 4,169 |
69,170,024 | https://en.wikipedia.org/wiki/Predictive%20policing | Predictive policing is the usage of mathematics, predictive analytics, and other analytical techniques in law enforcement to identify potential criminal activity. A report published by the RAND Corporation identified four general categories predictive policing methods fall into: methods for predicting crimes, methods for predicting offenders, methods for predicting perpetrators' identities, and methods for predicting victims of crime.
Methodology
Predictive policing uses data on the times, locations and nature of past crimes to provide insight to police strategists concerning where, and at what times, police should patrol or maintain a presence in order to make the best use of resources or to have the greatest chance of deterring or preventing future crimes. This type of policing detects signals and patterns in crime reports to anticipate whether crime will spike, when a shooting may occur, where the next car will be broken into, and who the next crime victim will be. Algorithms are produced by taking these factors, which consist of large amounts of analyzable data, into account. The use of algorithms creates a more effective approach that speeds up the process of predictive policing, since it can quickly factor in different variables to produce an automated outcome. The predictions an algorithm generates should be coupled with a prevention strategy, which typically sends an officer to the predicted time and place of the crime. Automated predictive policing supplies a more accurate and efficient process when looking at future crimes, because there are data to back up decisions rather than just the instincts of police officers. By using information from predictive policing, police can anticipate the concerns of communities, allocate resources wisely across times and places, and prevent victimization.
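As a schematic illustration of the place-based variant (not any vendor's actual algorithm), the Python sketch below scores map grid cells by a recency-weighted incident count and ranks them for patrol; the cell names and incident log are invented.

from collections import defaultdict
from math import exp, log

HALF_LIFE_DAYS = 30.0               # older incidents count for less

def hot_spots(incidents, today, top_n=3):
    # incidents: iterable of (cell_id, day_number) pairs.
    score = defaultdict(float)
    for cell, day in incidents:
        # Exponential decay: an incident's weight halves every HALF_LIFE_DAYS.
        score[cell] += exp(-(today - day) * log(2) / HALF_LIFE_DAYS)
    return sorted(score.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

# Invented incident log: (grid cell, day the crime was reported)
log_entries = [("C4", 1), ("C4", 20), ("C4", 58), ("B2", 55), ("B2", 59),
               ("A1", 5), ("D7", 60)]
print(hot_spots(log_entries, today=60))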
Police may also use data accumulated on shootings and the sounds of gunfire to identify locations of shootings. The city of Chicago uses data blended from population mapping and crime statistics to improve monitoring and identify patterns.
Other approaches
Rather than predicting crime, predictive policing can be used to prevent it. The "AI Ethics of Care" approach recognizes that some locations have greater crime rates as a result of negative environmental conditions. Artificial intelligence can be used to minimize crime by addressing the identified demands.
History
Iraq
At the conclusion of intense combat operations in April 2003, improvised explosive devices (IEDs) were dispersed throughout Iraq's streets, deployed to monitor and counteract U.S. military activities. Predictive policing tactics were applied in response, but the extensive areas covered by these IEDs made it impractical for Iraqi forces to respond to every American presence within the region. This challenge led to the concept of actionable hot spots: zones experiencing high levels of activity yet too vast for effective control. This situation presented difficulties for the Iraqi military in selecting optimal locations for surveillance, sniper placements, and route patrols in areas threatened by IEDs.
China
The roots of predictive policing in China can be traced to the policy approach of social governance, which, as Chinese Communist Party leader Xi Jinping announced at a security conference in 2016, is the Chinese regime's agenda to promote a harmonious and prosperous country through extensive use of information systems. A common instance of social governance is the development of the social credit system, where big data is used to digitize identities and quantify trustworthiness. There is no comparably comprehensive and institutionalized system of citizen assessment in the West.
The increase in collecting and assessing aggregate public and private information by China's police force to analyze past crime and forecast future criminal activity is part of the government's mission to promote social stability by converting intelligence-led policing (i.e. effectively using information) into informatization (i.e. using information technologies) of policing. The increase in employment of big data through the police geographical information system (PGIS) is within China's promise to better coordinate information resources across departments and regions to transform analysis of past crime patterns and trends into automated prevention and suppression of crime. PGIS was first introduced in the 1970s and was originally used for internal government management and by research institutions for city surveying and planning. Since the mid-1990s, PGIS has been deployed in the Chinese public security industry to empower law enforcement by promoting police collaboration and resource sharing. The current applications of PGIS are still contained within the stages of public map services, spatial queries, and hot spot mapping. Its application in crime trajectory analysis and prediction is still in the exploratory stage; however, the promotion of informatization of policing has encouraged cloud-based upgrades to PGIS design, fusion of multi-source spatiotemporal data, and developments in police spatiotemporal big data analysis and visualization.
Although there is no nationwide police prediction program in China, local projects between 2015 and 2018 have also been undertaken in regions such as Zhejiang, Guangdong, Suzhou, and Xinjiang, that are either advertised as or are building blocks towards a predictive policing system.
Zhejiang and Guangdong had established prediction and prevention of telecommunication fraud through the real-time collection and surveillance of suspicious online or telecommunication activities and the collaboration with private companies such as the Alibaba Group for the identification of potential suspects. The predictive policing and crime prevention operation involves forewarning to specific victims, with 9,120 warning calls being made in 2018 by the Zhongshan police force along with direct interception of over 13,000 telephone calls and over 30,000 text messages in 2017.
Substance-related crime is also investigated in Guangdong, specifically the Zhongshan police force who were the first city in 2017 to utilize wastewater analysis and data models that included water and electricity usage to locate hotspots for drug crime. This method led to the arrest of 341 suspects in 45 different criminal investigations by 2019.
In China, the Suzhou Police Bureau has used predictive policing since 2013. During 2015–2018, several other Chinese cities adopted predictive policing. China has used predictive policing to identify and target people to be sent to Xinjiang internment camps.
The integrated joint operations platform (IJOP) predictive policing system is operated by the Central Political and Legal Affairs Commission.
Europe
In Europe there has been significant pushback against predictive policing and the broader use of artificial intelligence in policing on both a national and European Union level.
The Danish POL-INTEL project has been operational since 2017 and is based on the Gotham system from Palantir Technologies. The Gotham system has also been used by German state police and Europol.
Predictive policing has been used in the Netherlands.
United States
In the United States, the practice of predictive policing has been implemented by police departments in several states such as California, Washington, South Carolina, Alabama, Arizona, Tennessee, New York, and Illinois.
In New York, the NYPD has begun implementing a new crime tracking program called Patternizr. The goal of Patternizr is to help police officers identify commonalities in crimes committed by the same offender or group of offenders. With the help of Patternizr, officers save time and work more efficiently, as the program generates possible "patterns" of related crimes. The officer then manually reviews the suggested patterns to see whether the generated crimes are related to the current suspect. If the crimes match, the officer launches a deeper investigation into the pattern crimes.
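Published descriptions of Patternizr involve similarity models trained on structured complaint attributes; the sketch below is a heavily simplified stand-in that scores weighted attribute overlap and surfaces the best matches for an analyst to review. The features, weights, and records are all invented.

WEIGHTS = {"crime_type": 3.0, "method_of_entry": 2.0,
           "premise": 1.0, "precinct": 0.5}

def similarity(a, b):
    # Sum the weights of the attributes on which two complaints agree.
    return sum(w for key, w in WEIGHTS.items() if a.get(key) == b.get(key))

def candidate_patterns(new, history, threshold=4.0):
    hits = [(similarity(new, old), old) for old in history]
    hits = [(s, o) for s, o in hits if s >= threshold]
    return sorted(hits, key=lambda t: t[0], reverse=True)

new = {"crime_type": "burglary", "method_of_entry": "rear window",
       "premise": "pharmacy", "precinct": 44}
history = [{"crime_type": "burglary", "method_of_entry": "rear window",
            "premise": "pharmacy", "precinct": 46},
           {"crime_type": "robbery", "method_of_entry": "front door",
            "premise": "bodega", "precinct": 44}]
for score, record in candidate_patterns(new, history):
    print(score, record)            # the analyst reviews the top matches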
India
In India, various state police forces have adopted AI technologies to enhance their law enforcement capabilities. For instance, the Maharashtra Police have launched Maharashtra Advanced Research and Vigilance for Enhanced Law Enforcement (MARVEL), the country's first state-level police AI system, to improve crime prediction and detection. Additionally, the Uttar Pradesh Police utilize the AI-powered mobile application 'Trinetra' for facial recognition and criminal tracking.
Concerns
Predictive policing faces issues that affect its effectiveness. Obioha mentions several concerns raised about predictive policing. High costs prevent more widespread adoption, especially among poorer countries. Another issue is that predictive policing relies on human input to determine patterns, and flawed data can lead to biased and possibly racist results. Critics argue that the technology cannot predict crime itself; it can only weaponize proximity to policing. Though the data are claimed to be unbiased, communities of color and low-income communities are the most targeted. Not all crime is reported, which also makes the data incomplete and inaccurate.
In 2020, following protests against police brutality, a group of mathematicians published a letter in Notices of the American Mathematical Society urging colleagues to stop work on predictive policing. Over 1,500 other mathematicians joined the proposed boycott.
Some applications of predictive policing have targeted minority neighborhoods and lack feedback loops.
Cities throughout the United States are enacting legislation to restrict the use of predictive policing technologies and other “invasive” intelligence-gathering techniques within their jurisdictions.
Following the introduction of predictive policing as a crime reduction strategy, via the results of an algorithm created through the use of the software PredPol, the city of Santa Cruz, California experienced a decline in the number of burglaries reaching almost 20% in the first six months the program was in place. Despite this, in late June 2020 in the aftermath of the murder of George Floyd in Minneapolis, Minnesota along with a growing call for increased accountability amongst police departments, the Santa Cruz City Council voted in favor of a complete ban on the use of predictive policing technology.
Accompanying the ban on predictive policing was a similar prohibition of facial recognition technology. Facial recognition technology has been criticized for its reduced accuracy on darker skin tones - which can contribute to cases of mistaken identity and potentially, wrongful convictions.
In 2019, Michael Oliver, of Detroit, Michigan, was wrongfully accused of larceny when his face registered as a “match” in the DataWorks Plus software to the suspect identified in a video taken by the victim of the alleged crime. Oliver spent months going to court arguing for his innocence - and once the judge supervising the case viewed the video footage of the crime, it was clear that Oliver was not the perpetrator. In fact, the perpetrator and Oliver did not resemble each other at all - except for the fact that they are both African-American which makes it more likely that the facial recognition technology will make an identification error.
With regards to predictive policing technology, the mayor of Santa Cruz, Justin Cummings, is quoted as saying, “this is something that targets people who are like me,” referencing the patterns of racial bias and discrimination that predictive policing can continue rather than stop.
For example, as Dorothy Roberts explains in her academic journal article, Digitizing the Carceral State, the data entered into predictive policing algorithms to predict where crimes will occur or who is likely to commit criminal activity, tends to contain information that has been impacted by racism. For example, the inclusion of arrest or incarceration history, neighborhood of residence, level of education, membership in gangs or organized crime groups, 911 call records, among other features, can produce algorithms that suggest the over-policing of minority or low-income communities.
See also
Carding (police policy)
Crime analysis
Crime hotspots
Jurimetrics
Pre-crime
Preventive state
Quantitative methods in criminology
Racial profiling
References
Further reading
Crime prevention
Criminology
Government by algorithm
Law enforcement techniques
Types of policing | Predictive policing | Engineering | 2,282 |
228,051 | https://en.wikipedia.org/wiki/Intrapersonal%20communication | Intrapersonal communication (also known as autocommunication or inner speech) is communication with oneself or self-to-self communication. Examples are thinking to oneself "I will do better next time" after having made a mistake or imagining a conversation with one's boss in preparation for leaving work early. It is often understood as an exchange of messages in which sender and receiver are the same person. Some theorists use a wider definition that goes beyond message-based accounts and focuses on the role of meaning and making sense of things. Intrapersonal communication can happen alone or in social situations. It may be prompted internally or occur as a response to changes in the environment.
Intrapersonal communication encompasses a great variety of phenomena. A central type happens purely internally as an exchange within one's mind. Some researchers see this as the only form. In a wider sense, however, there are also types of self-to-self communication that are mediated through external means, like when writing a diary or a shopping list for oneself. For verbal intrapersonal communication, messages are formulated using a language, in contrast to non-verbal forms sometimes used in imagination and memory. One contrast among inner verbal forms is between self-talk and inner dialogue. Self-talk involves only one voice talking to itself. For inner dialogue, several voices linked to different positions take turns in a form of imaginary interaction. Other phenomena related to intrapersonal communication include planning, problem-solving, perception, reasoning, self-persuasion, introspection, and dreaming.
Models of intrapersonal communication discuss which components are involved and how they interact. Many models hold that the process starts with the perception and interpretation of internal and external stimuli or cues. Later steps involve the symbolic encoding of a message that becomes a new stimulus. Some models identify the same self as sender and receiver. Others see the self as a complex entity and understand the process as an exchange between different parts of the self or between different selves belonging to the same person. Intrapersonal communication contrasts with interpersonal communication, in which the sender and the receiver are distinct persons. The two phenomena influence each other in various ways. For example, positive and negative feedback received from other people affects how a person talks to themself. Intrapersonal communication is involved in interpreting messages received from others and in formulating responses. Because of this role, some theorists hold that intrapersonal communication is the foundation of all communication. But this position is not generally accepted and an alternative is to hold that intrapersonal communication is an internalized version of interpersonal communication.
Because of its many functions and influences, intrapersonal communication is usually understood as a significant psychological phenomenon. It plays a key role in mental health, specifically in relation to positive and negative self-talk. Negative self-talk focuses on bad aspects of the self, at times in an excessively critical way. It is linked to psychological stress, anxiety, and depression. A step commonly associated with countering negative self-talk is to become aware of negative patterns. Further steps are to challenge the truth of overly critical judgments and to foster more positive patterns of thought. Of special relevance in this regard is the self-concept, i.e. how a person sees themself, specifically their self-esteem or how they evaluate their abilities and characteristics. Intrapersonal communication is not as thoroughly researched as other forms of communication. One reason is that it is more difficult to study since it happens primarily as an internal process. Another reason is that the term is often used in a very wide sense making it difficult to demarcate which phenomena belong to it.
Definition and essential features
Intrapersonal communication is communication with oneself. It takes place within a person. Larry Barker and Gordon Wiseman define it as "the creating, functioning, and evaluating of symbolic processes which operate primarily within oneself". Its most typical forms are self-talk and inner dialogue. For example, when an employee decides to leave work early, they may engage in an inner dialogue by mentally going through possible negative comments from their boss and potential responses. Other inner experiences are also commonly included, such as imagination, visualization, and memory. As a form of communication, it involves the sending and receiving of messages. It is a self-to-self communication, in the sense that the sender and the receiver is the same person. It contrasts with interpersonal communication, in which sender and receiver are distinct persons. Intrapersonal communication is examined by the discipline known as communication studies.
Some theorists, like James Watson and Anne Hill, restrict the definition of intrapersonal communication to inner experiences or "what goes on inside our heads", like talking to oneself within one's mind. But in a wider sense, it also includes external forms of self-to-self communication, such as speaking to oneself aloud during private speech or writing a diary or a shopping list. In this regard, it only matters that the sender and the receiver is the same person but it does not matter whether an external medium was used in the process. A slightly different conception is presented by Piotr K. Oleś et al. They reject the idea that sender and receiver have to be the same person. This is based on the idea that one can have imaginary dialogues with other people, such as a friend, a teacher, a lost relative, or a celebrity. Oleś et al. hold instead that the hallmark of intrapersonal communication is that it only happens in the mind of one person. Some scholars see the process of searching and interpreting information as a central aspect of intrapersonal communication. This applies specifically to inner monologues and reflections on oneself, other people, and the environment. Frank J. Macke and Dean Barnlund stress that the mechanical exchange of messages is not sufficient and that intrapersonal communication has to do with meaning and making sense of things. In this regard, intrapersonal communication can be distinguished from intraorganismic communication, which takes place below the personal level as an exchange of information between organs or cells.
Intrapersonal communication need not be cut off from outer influences and often happens as a reaction to them. For example, hearing a familiar piece of music may stir up memories that lead to an internal dialog with past selves. In a similar sense, intrapersonal communication is not restricted to situations in which a person is alone. Instead, it also happens in social circumstances and may occur simultaneously with interpersonal communication. This is the case, for example, when interpreting what another person has said and when formulating a response before enunciating it. Some theorists, like Mary J. Farley, hold that intrapersonal communication is an essential part of all communication and, therefore, always accompanies interpersonal communication.
In the context of organizations, the term "autocommunication" is sometimes used as a synonym. It is employed to describe self-communication in the workspace. For example, synchronous autocommunication is used when mentally reassuring oneself or drafting a letter. Asynchronous autocommunication, on the other hand, takes the form of reminders or diaries. This term is also sometimes used in semiotics.
Types
Various types of intrapersonal communication are distinguished in the academic literature. The term is often used in a very wide sense and includes many phenomena. A central contrast is based on whether the exchange happens purely internally or is mediated through external means. The internal type is the most discussed form. It plays out in the mind of one person without externally expressing the message. It includes mental processes like thinking, meditating, and reflecting. However, there are also external forms of intrapersonal communication, like talking aloud to oneself in the form of private speech. Other examples are notetaking at school, writing a diary, preparing a shopping list, praying, or reciting a poem. External intrapersonal communication is also characterized by the fact that the sender and the receiver is the same person. The difference is that an external medium is used to express the message.
Another distinction focuses on the role of language. Most discussions in the academic literature are concerned with verbal intrapersonal communication, like self-talk and inner dialogue. Its hallmark is that messages are expressed using a symbolic coding system in the form of a language. They contrast with non-verbal forms like some forms of imagination, visualization, or memory. In this regard, intrapersonal communication can be used, for example, to explore how a piece of music would sound or how a painting should be continued.
Among the inner verbal forms of intrapersonal communication, an often-discussed contrast is between self-talk and inner dialogue. In the case of inner dialogue, two or more positions are considered and the exchange takes place by contrasting them. It usually happens in the form of different voices taking turns in arguing for their position. This can be conceptualized in analogy to interpersonal communication as an exchange of different subjects, selves, or I-positions within the same person. For example, when facing a difficult decision, one part of a person may argue in favor of one option while another part prefers a different option. Inner dialogue can also take the form of an exchange with an imagined partner. This is the case when anticipating a discussion with one's spouse or during imaginary conversations with celebrities or lost relatives. For self-talk or inner monologue, on the other hand, there is no split between different positions. It is speech directed at oneself, as when commenting on one's performance or telling oneself to "try again". Self-talk can be positive or negative depending on how the person evaluates themself. For example, after having failed an exam, a student may engage in negative self-talk by saying "I'm so stupid" or in positive self-talk, like "don't worry" or "I'll do better next time".
There are many differences between self-talk and inner dialogue. Inner dialogue is usually more complex. It can be used to simulate social situations and examine a topic from different angles. Its goal is frequently to explore the differences between conflicting points of view, to make sense of strange positions, and to integrate different perspectives. It also plays a central role in identity construction and self-organization. One function of self-talk is self-regulation. Other functions include self-distancing, motivation, self-evaluation, and reflection. Self-talk often happens in reaction to or anticipation of certain situations. It can help the agent prepare an appropriate response. It may also be used to regulate emotions and cope with unpleasant experiences as well as monitor oneself. Self-talk and inner dialogue are distinct phenomena but one can quickly turn into the other. For example, an intrapersonal communication may start as self-talk and then evolve into inner dialogue as more positions are considered.
Intrapersonal communication is linked to a great range of phenomena. They include planning, problem-solving, and internal conflict resolution, as well as judgments about oneself and other people. Other forms are perception and understanding as well as conceptualization and interpretation of environmental cues. Further phenomena are data processing like drawing inferences, thinking, and self-persuasion as well as memory, introspection, dreaming, imagining, and feeling.
Models
Various models of communication have been proposed. They aim to provide a simplified overview of the process of communication by showing what its main components are and how they interact. Most of them focus primarily on interpersonal communication but some are specifically formulated with intrapersonal communication in mind.
According to the model proposed by Barker and Wiseman in 1966, intrapersonal communication starts with the reception of external and internal stimuli carrying information. External stimuli arrive through the senses and usually provide information about the environment. Internal stimuli include a wide range of impressions, concerning both the state of the body, like pain, and feelings.
In the Barker-Wiseman model, an early step of intrapersonal communication focuses on classifying these stimuli. In this process, many of the weaker stimuli are filtered out before reaching a conscious level. But they may still affect communication despite this. A similar process groups the remaining stimuli according to their urgency. It runs in parallel with attempts to attach symbolic meaning to the stimuli as a form of decoding. How these processes take place is influenced by factors like the communicator's social background and current environment. After the symbolic decoding process, ideation occurs in the form of thinking, organizing information, planning, and proposing messages. As a last step, the thus conceived ideas are encoded into a symbolic form and expressed using words, gestures, or movements. This process can happen right after the ideation or with some delay. It results in the generation and transmission of more stimuli, either purely internal or also external. The generated stimuli work as a feedback loop leading back to their reception and interpretation. In this sense, the same person is both the sender and the receiver of the messages. The feedback makes it possible for the communicator to monitor and correct messages.
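Read this way, the Barker–Wiseman account describes a staged feedback loop. Purely as an illustration, the stages can be sketched as a pipeline; the stage names follow the model, but every function name, data structure, and threshold below is a hypothetical choice of this sketch, not part of the model itself.

```python
# Toy sketch of the Barker-Wiseman feedback loop (illustrative only).

def intrapersonal_cycle(stimuli, threshold=0.5):
    # Reception: external and internal stimuli arrive with some strength.
    received = list(stimuli)

    # Filtering: weaker stimuli are screened out before reaching a
    # conscious level (the model notes they may still exert influence).
    conscious = [(label, s) for label, s in received if s >= threshold]

    # Grouping by urgency runs alongside symbolic decoding.
    by_urgency = sorted(conscious, key=lambda item: item[1], reverse=True)
    decoded = [f"meaning({label})" for label, _ in by_urgency]

    # Ideation: thinking, organizing information, proposing a message.
    idea = " + ".join(decoded)

    # Encoding/transmission: the expressed message generates new stimuli,
    # which feed back to the same person as both sender and receiver.
    feedback = [(f"echo({idea})", 1.0)]
    return idea, feedback

idea, feedback = intrapersonal_cycle([("pain", 0.9), ("background hum", 0.2)])
print(idea)  # meaning(pain) -- the weak stimulus was filtered out
```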
Another model of communication was proposed by Dean Barnlund in 1970. He aims to give an account of communication that encompasses both its interpersonal and its intrapersonal side. He identifies communication not with the transmission of messages but with the production of meaning in response to internal and external cues. For him, intrapersonal communication is the simpler case since only one person is involved. This person perceives private cues, like internal thoughts and feelings, public cues originating from the environment, and behavioral cues in the form of their own behavior. One part of communication is the process of decoding and interpreting these cues. Its goal is to make sense of them and to reduce uncertainty. It is accompanied by the activity of encoding behavioral responses to the cues. These two processes happen simultaneously and influence each other.
Sheila Steinberg follows Graeme Burton and Richard Dimbleby by understanding intrapersonal communication as a process involving five elements: decoding, integration, memory, perceptual sets, and encoding. Decoding consists in making sense of messages. Integration puts the individual pieces of information extracted this way in relation to each other through processes like comparing and contrasting. Memory stores previously received information. Especially relevant in regard to intrapersonal communication is the concept one has of oneself and how the newly received information relates to it. Perceptual sets are ingrained ways of organizing and evaluating this information, for example, how feminine and masculine traits are conceived. Encoding is the last step, in which the meaning processed in the previous steps is again expressed in symbolic form as a message sent to oneself.
Many theorists focus on the concept of the self in intrapersonal communication. There is a variety of definitions but many agree that the self is an entity that is unique to each individual, i.e. not shared between individuals. Some theorists understand intrapersonal communication as a relation of the self to the same self. Others see the self as a complex entity made up of different parts and analyze the exchange as an interaction between parts. A closely related approach is to talk not of distinct parts of a single self but of different selves in the same person, like an emotional self, an intellectual self, or a physical self. On these views, intrapersonal communication is understood in analogy to interpersonal communication as an exchange between different parts or selves. In either case, intrapersonal relationships play a central role. They concern how a person relates to themselves, for example, how they see themselves and who they wish to be. Intrapersonal relationships are not directly observable. Instead, they have to be inferred based on other changes that can be perceived. For example, inferences about a person's self-esteem can be drawn based on whether they respond to a compliment by bragging or by playing it down.
Relation to interpersonal communication
Both intrapersonal and interpersonal communication involve the exchange of messages. For interpersonal communication, the sender and the receiver are distinct persons, like when talking to a friend on the phone. For intrapersonal communication, one and the same person occupies both of these roles. Despite this difference, the two are closely related. For example, some theorists, like Linda Costigan Lederman, conceptualize inner dialogue in analogy to social interaction as an exchange between different parts of the self.
The two phenomena also influence each other in various ways. For example, the positive and negative feedback a person receives from other people shapes their self-concept or how they see themselves. This in turn has implications for how they talk to themselves in the form of positive or negative self-talk. But the converse is also true: how a person talks to themselves affects how they interact with other people. One reason for this is that some form of inner dialogue is usually involved when talking to others to interpret what they say and to determine what one wants to communicate to them. For example, if a person's intrapersonal communication is characterized by self-criticism, this may make it hard for them to accept praise from other people. On a more basic level, it can affect how messages from other people are interpreted. For example, an overly self-critical person may interpret an honest compliment as a form of sarcasm.
However, self-talk may also interfere with the ability to listen. For example, when a person has an important meeting later that day, their thoughts may be racing around this topic, making the person less responsive to interactions in the present. In some cases, the listener is very keen on making a response. This may cause their attention to focus mainly on the self-talk in which they formulate a message. As a result, they may miss important aspects of what the current speaker is saying. Positive and effective self-talk, on the other hand, tends to make people better at communicating with others. One way to become better at interpersonal communication is to become aware of this self-talk and to be able to balance it with the need to listen.
Another discussion in the academic literature is about the question of whether intrapersonal communication is in some sense more basic than interpersonal communication. This is based on the idea that some form of intrapersonal communication is necessary for and accompanies interpersonal communication. For example, when a person receives a message from a friend inviting them to their favorite restaurant, there are often various internal reactions to this message before sending an answer in return. These reactions include imagining the restaurant's sights and scents, recalling memories from previous visits, checking whether the invitation would clash with other plans, and devising a route to get to the restaurant. These reactions are forms of intrapersonal communication. Other examples include self-talk in an attempt to evaluate the positions expressed by the speaker to assess whether one agrees or disagrees with them. But intrapersonal communication can also occur by itself without another party being involved.
For these reasons, some theorists, like James Honeycutt and Sheila Steinberg, have claimed that intrapersonal communication is the foundation of all other forms of communication. Similar claims are that intrapersonal communication is omnipresent and that it is a requirement or preliminary of interpersonal communication. However, the claim of the primacy of intrapersonal communication is not generally accepted and many theorists hold that social interaction is more basic. They often see inner speech as an internalized or derivative version of social speech.
A closely related issue concerns the questions of how interpersonal and intrapersonal communication interact in the development of children. According to Jean Piaget, for example, intrapersonal communication develops first and manifests as a form of egocentric speech. This happens during play activities and may help the child learn to control their activities and plan ahead. Piaget holds that, at this early stage, children are not yet fully social beings and are more concerned with developing their individuality. On this view, interpersonal speech only arises later in the person's development. This view is opposed by Lev Vygotsky, who argues that intrapersonal communication only happens as an internalization of interpersonal communication. According to him, children learn the tools for self-talk when their parents talk to them to regulate their behavior, for example, through suggestions, warnings, or commands. Intrapersonal communication may then be understood as an attempt by the child to regulate their behavior through similar means.
Function and importance
Intrapersonal communication serves a great variety of functions. They include internalization, self-regulation, processing information, and problem-solving. Because of this, communication theorist James P. Lantolf characterizes it as an "exceptionally powerful and pervasive tool for thinking". He identifies two significant functions: to internalize cultural norms or ways of thinking and to regulate one's own activity. The self-regulatory function of intrapersonal communication is sometimes understood in analogy to interpersonal communication. For example, parents may influence the behavior of their children by uttering phrases like "wait, think". Once the child has learned them, they can be employed to control behavior by uttering them internally. This way, people learn to modify, accept, or reject plans of action.
According to Larry Ehrlich, intrapersonal communication has three main functions. One function is to monitor the environment and ensure that it is safe. In this regard, self-talk is used to analyze perceptions and to plan responses in case direct or indirect threats are detected. A closely related function is to bring harmony between the inner and outer world by making sense of oneself and one's environment. A third function is of a more existential nature and aims at dealing with loneliness. Many theorists also draw a close connection to the processes of searching and interpreting information.
Inner speech may be needed for many higher mental processes to work. It has a vital role in mental functions such as shaping and controlling one's thoughts, regulating one's behavior, reasoning, problem-solving, and planning as well as remembering. It often accompanies diverse communicative tasks, such as listening, speaking, reading, and writing, for example, to understand an expression or to formulate a new one. More specific applications are to calm oneself down in stressful situations and to internalize new knowledge when learning a second language. This happens when repeating new vocabulary to oneself in order to remember it. Intrapersonal communication can also be applied to a great variety of creative tasks, like using it to come up with musical compositions, paintings, or dance routines.
Stanley B. Cunningham lists a total of 17 functions or characteristics commonly ascribed to intrapersonal communication. They include talking to oneself, dialogue between different parts of the self, and perception as well as interpreting environmental cues and ascribing meaning to them. Further functions are problem-solving, decision-making, introspection, reflection, dreaming, and self-persuasion. The goal of some external forms of intrapersonal communication, like taking notes at school or writing a shopping list, is to aid memory. In some cases, they can also help break down and address a complex problem in a series of smaller steps, as when solving a mathematical equation line by line.
The importance of intrapersonal communication is reflected by how it affects other phenomena. For example, it has been argued that people who engage in positive self-talk are usually better at problem-solving and communicating with others, including listening skills. Negative intrapersonal communication, on the other hand, is linked to insecurities and low self-esteem and may lead to negative interactions with others. For example, people suffering from the imposter syndrome are continuously affected by self-doubt and anxiety. Their negative intrapersonal communication tends to revolve around fears that their skills are inadequate and may be exposed. In this regard, intrapersonal communication affects a person's self-view, their emotions, and whether they see themself as capable or incompetent. It can help build and maintain self-confidence but may also create defense mechanisms. Additionally, it plays a central role in self-discovery and self-delusion.
In literature
Intrapersonal communication is also relevant in the field of literature. Of particular interest to literary studies is the term "stream of consciousness". As a mental phenomenon, it is a continuous flow of momentary states of consciousness as they are lived through by the subject. They include experiences like sensory perceptions, thoughts, feelings, and memories. The stream of consciousness is usually seen as a form of intrapersonal communication and the term is sometimes used as a synonym for interior monologue. In literary criticism, the term refers to a narrative technique or a style of writing used to express this stream of experiences. This usually happens by presenting the thoughts of a character directly without any summary or explanation by the narrator. It aims to give the reader a very immediate impression of what a character's experience is like. It often takes an unpunctuated and disjointed form that violates rules of grammar and logic. Often-discussed examples are found in Dorothy Richardson's Pilgrimage, James Joyce's Ulysses, and Virginia Woolf's Mrs Dalloway. Closely related phenomena are introspective writing and inner speech writing. They are usually understood as forms of externalized inner speech in which the person writes down portions of their inner dialogue.
Relation to mental health
The way intrapersonal communication is conducted can contribute both to positive mental health and to mental illness. This pertains specifically to positive and negative self-talk as well as its relation to the self-concept.
Positive and negative self-talk
Self-talk is a form of talking to oneself. It differs from inner dialogue since it only involves one voice and not an internal exchange between several voices. A common distinction is between positive and negative self-talk based on the evaluative attitude that is expressed. For negative self-talk, the inner voice focuses on bad aspects of the self, often in an excessively critical way. It can take the form of telling oneself that "I'm never going to be able to do this" or "I'm no good at this". Negative self-talk can already develop during childhood based on feedback from others, particularly parents.
For some people, negative self-talk is not just an occasional occurrence but happens frequently. In such cases, it can have detrimental effects on mental health. For example, it can affect emotional well-being by evoking a negative mood. This can lead to stress, anxiety, and depression. It can also negatively affect a person's confidence in various areas, for example, concerning their body image. Positive self-talk, on the other hand, involves seeing oneself in a positive light. It is linked to mental health benefits, including higher self-esteem and well-being as well as a reduction of the effects of depression and personality disorders. It is associated with lower stress levels and a reduced risk of self-harm and suicide. The effects of positive and negative self-talk are often discussed in sport psychology. A common idea in this regard is that positive self-talk enhances performance while negative self-talk hinders it. There is some empirical evidence supporting this position, but it has not yet been thoroughly researched.
Like other forms of communication, intrapersonal communication can be trained and improved to be more effective. This often happens with the goal of reducing negative self-talk and fostering positive self-talk instead. An early step is often to become aware of negative patterns and acknowledge their existence. This can be followed by questioning and challenging negative evaluations since they are often exaggerated. The person may also try to stop them and replace them with more positive thoughts. For example, when the person becomes aware of a negative thinking process, they may try to inhibit it and direct their attention to more positive outcomes.
A similar approach is used in cognitive behavioral therapy. A central idea in this field is that a set of negative core beliefs is responsible for negative self-talk. They can include beliefs like "I'm unlovable", "I'm unworthy", or "the world is threatening and I'm unable to face its challenges". A key therapeutic method for improving intrapersonal communication is to become aware of these beliefs and to question their truth. A further approach focuses on the practice of mindfulness. By raising self-awareness, it may improve self-esteem and intrapersonal communication. This practice consists in directing one's attention to experiences in the present moment without any evaluation of these experiences. Abstaining from value judgments may help to avoid overly critical evaluations and instead foster an attitude of acceptance.
Examples of specific forms of self-talk and their effects
Different forms of self-talk can have different effects on the person. One form is coping self-talk. Its main aim is to help a person cope with a difficult situation, such as when experiencing anxiety. It consists in emphasizing the person's strengths and skills without implying perfection. This can help people calm down and become clear on their goals and how to realistically achieve them. Another relevant form is instructional self-talk, which focuses attention on the components of a task and can improve performance on physical tasks that are being learned. However, it may have negative effects for people who are already skilled in the task.
Some forms of self-talk address the self by employing first-person pronouns ("I") while others use second-person pronouns ("you"). Generally speaking, people are more likely to use the second-person pronoun when there is a need for self-regulation, an imperative to overcome difficulties, or a desire to facilitate hard actions. The use of first-person pronouns is more frequent when people are talking to themselves about their feelings. A 2014 study by Sanda Dolcos and Dolores Albarracin indicates that using the second-person pronoun for self-suggestions is more effective at strengthening the intention to carry out a behavior and at improving performance.
Self-concept and self-esteem
The self-concept plays a key role in intrapersonal communication. A person's self-concept is what they think and feel about themselves, for example, in relation to their appearance and attitudes as well as strengths and weaknesses. So seeing oneself as sincere, respectful, and thoughtful is one self-concept while seeing oneself as mean, abusive, and deceitful is another. The terms "self-image" and "self-esteem" are sometimes used as synonyms but some theorists draw precise distinctions between them. According to Carl Rogers, the self-concept has three parts: self-image, ideal self, and self-worth. Self-image concerns the properties that a person ascribes to themself. The ideal self is the ideal the person strives toward or what they want to be like. Self-worth corresponds to whether they see themself overall as a good or a bad person.
Many theorists use the term "self-esteem" instead of "self-worth". Self-esteem is a central aspect characterizing intrapersonal communication and refers to a person's subjective evaluation of their abilities and characteristics. As a subjective evaluation, it may differ from the facts and is often based mainly on an emotional outlook and less on a rational judgment. For example, some skilled people suffer from the imposter syndrome, which leads them to believe that they are imposters lacking the skills they actually have. Self-esteem matters for mental health. Low self-esteem is linked to problems ranging from depression, loneliness, and alienation to drug abuse and teenage pregnancy. Self-esteem also affects how a person communicates with themself and others.
The self is not a static or inborn entity but changes throughout life. Interactions with other people have an effect on the individual's self-image. This is especially true in relation to how they judge the person and when receiving positive or negative feedback on an important task. Inner speech is strongly associated with a sense of self. The development of this sense in children is tied to the development of language. There are, however, cases of an internal monologue or inner voice being considered external to the self. Examples are auditory hallucinations, the conceptualization of negative or critical thoughts as an inner critic, or a kind of divine intervention. As a delusion, this can be called "thought insertion". A similar topic is discussed by Simon Jones and Charles Fernyhough, who explain cases of auditory verbal hallucinations as a form of inner speech. Auditory verbal hallucinations are cases in which a person hears speech without any external stimulation. On their view, speech is an inner action controlled by the agent. But in some pathological cases, it is not recognized as an action. This leads to an auditory verbal hallucination since the voice is experienced as an external or alien element.
Research and criticism
Intrapersonal communication has not been researched as thoroughly as other types of communication. One reason is that there are additional problems concerning how to study it and how to conceptualize it. A difficulty in this regard is that it is not as easy to observe as interpersonal communication. This is due to the fact that it mostly occurs internally without an immediate external manifestation. Since it is not directly observable, it has to be inferred from other changes that are observable. For example, when seeing that a person dresses well and takes care of their health, one may infer that certain intrapersonal relationships are responsible for this behavior. A similar inference about a person's inner life could be drawn based on whether they respond to a compliment by bragging or by playing it down.
A further approach is to use questionnaires to study intrapersonal communication. Questionnaires sometimes used in the process include the Self-Talk Scale, the Varieties of Inner Speech Questionnaire, and the Internal Dialogical Activity Scale. Among other things, they aim to measure what types of intrapersonal communication a person engages in and how frequently they do so. Younger children are less likely to report using inner speech instead of visual thinking than older children and adults. But it is not known whether this is due to lack of inner speech or due to insufficiently developed introspection. A method to study intrapersonal communication in natural environments, developed by Russell Hurlburt, is to have participants describe their inner experience at random intervals the moment a beeper goes off.
Some criticisms focus on the concept of intrapersonal communication itself. Intrapersonal communication is commonly accepted and used as a distinct type of communication. However, some theorists reject the claim that it is actually a form of communication. Instead, they see it as a different phenomenon that is merely related to communication. A prominent defender of this position is Cunningham. He argues that many inner experiences discussed under this label form part of communicative processes. But he denies that they themselves are instances of communication. This pertains to forms of cognitive, perceptual, and motivational episodes commonly categorized as intrapersonal communication. He sees such categorizations as an "uncritical extension of communication terminology and metaphors to the facts of our inner life space." This is closely connected to the problem that the expression "intrapersonal communication" is often used in a very wide and ambiguous sense. However, some theorists have objected to Cunningham's critique. One argument is that communication studies in general is a multiparadigmatic discipline. This implies that it has not yet established definitions of its terms that are both precise and generally accepted. According to this view, the lack of precision does not mean that the concept is useless.
A further problem in defining intrapersonal communication is that there are countless processes within the human body responsible for exchanging messages. So when understood in this very wide sense, even processes like breathing could be understood as intrapersonal communication. For this reason, the term is usually understood in a more restricted sense. Frank J. Macke approaches this problem by arguing that intrapersonal communication has to do with meaning and that some form of communicative experience is involved. On this view, the mechanical exchange of messages alone is not sufficient for communication.
See also
References
Citations
Sources
Human communication | Intrapersonal communication | Biology | 7,287 |
18,938,236 | https://en.wikipedia.org/wiki/Arcade%20%28architecture%20magazine%29 | ARCADE is a quarterly magazine about architecture and design in the Northwestern United States. The magazine was established in 1981. It is published by the Northwest Architectural League. The mission of ARCADE is to provide dialogue about design and the built environment. The magazine is based in Seattle, Washington.
See also
List of architecture magazines
References
External links
AIA Seattle 2005 honor awards
1981 establishments in Washington (state)
Architecture magazines
Magazines established in 1981
Magazines published in Seattle
Quarterly magazines published in the United States | Arcade (architecture magazine) | Engineering | 96 |
19,564,541 | https://en.wikipedia.org/wiki/Glutamate%E2%80%93glutamine%20cycle | In biochemistry, the glutamate–glutamine cycle is a cyclic metabolic pathway which maintains an adequate supply of the neurotransmitter glutamate in the central nervous system. Neurons are unable to synthesize either the excitatory neurotransmitter glutamate or the inhibitory GABA from glucose. Discoveries of glutamate and glutamine pools within intercellular compartments led to suggestions of the glutamate–glutamine cycle working between neurons and astrocytes. The glutamate/GABA–glutamine cycle is a metabolic pathway that describes the release of either glutamate or GABA from neurons, which is then taken up into astrocytes (non-neuronal glial cells). In return, astrocytes release glutamine to be taken up into neurons for use as a precursor to the synthesis of either glutamate or GABA.
Production
Glutamate
Initially, in a glutamatergic synapse, the neurotransmitter glutamate is released from the presynaptic neuron into the synaptic cleft. Glutamate residing in the synapse must be rapidly removed in one of three ways:
Uptake into the postsynaptic compartment,
Re-uptake into the presynaptic compartment, or
Uptake into a third, nonneuronal compartment.
Postsynaptic neurons remove little glutamate from the synapse. There is active reuptake into presynaptic neurons, but this mechanism appears to be less important than astrocytic transport. Astrocytes could dispose of transported glutamate in two ways. They could export it to blood capillaries, which abut the astrocyte foot processes. However, this strategy would result in a net loss of carbon and nitrogen from the system. An alternate approach is to convert glutamate into another compound, preferably a non-neuroactive species. The advantage of this approach is that neuronal glutamate can be restored without the risk of trafficking the transmitter through extracellular fluid, where glutamate would cause neuronal depolarization. Astrocytes readily convert glutamate to glutamine via the glutamine synthetase pathway, and the glutamine is released into the extracellular space. The glutamine is taken into the presynaptic terminals and metabolized into glutamate by phosphate-activated glutaminase (a mitochondrial enzyme). The glutamate that is synthesized in the presynaptic terminal is packaged into synaptic vesicles by the glutamate transporter, VGLUT. Once the vesicle is released, glutamate is removed from the synaptic cleft by excitatory amino-acid transporters (EAATs). This allows synaptic terminals and glial cells to work together to maintain a proper supply of glutamate, which can also be produced by transamination of 2-oxoglutarate, an intermediate in the citric acid cycle. Recent electrophysiological evidence suggests that active synapses require a presynaptically localized glutamine–glutamate cycle to maintain excitatory neurotransmission in specific circumstances. In other systems, it has been suggested that neurons have alternate mechanisms to cope with compromised glutamate–glutamine cycling.
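The two enzymatic steps just described are standard biochemistry and can be summarized as follows (glutamine synthetase acting in astrocytes, phosphate-activated glutaminase in neurons; cofactors beyond those shown are omitted):

```latex
% glutamine synthetase (astrocyte):
\mathrm{glutamate} + \mathrm{NH_4^+} + \mathrm{ATP}
  \longrightarrow \mathrm{glutamine} + \mathrm{ADP} + \mathrm{P_i}
% phosphate-activated glutaminase (neuron):
\mathrm{glutamine} + \mathrm{H_2O}
  \longrightarrow \mathrm{glutamate} + \mathrm{NH_4^+}
```

Note that the synthetase step consumes one ATP per glutamate recycled, which is part of the metabolic cost of keeping extracellular glutamate low.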
GABA
At GABAergic synapses, the cycle is called the GABA–glutamine cycle. Here the glutamine taken up by neurons is converted to glutamate, which is then metabolized into GABA by glutamate decarboxylase (GAD). Upon release, GABA is taken up into astrocytes via GABA transporters and then catabolized into succinate by the joint actions of GABA transaminase and succinate-semialdehyde dehydrogenase. Glutamine is synthesized from succinate via the TCA cycle, which includes a condensation reaction of oxaloacetate and acetyl-CoA, forming citrate. Then the synthesis of α-ketoglutarate and glutamate occurs, after which glutamate is again metabolized into GABA by GAD. The supply of glutamine to GABAergic neurons is less significant, because these neurons exhibit a larger proportion of reuptake of the released neurotransmitter compared to their glutamatergic counterparts.
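The GABA-related conversions above follow the classical GABA shunt. Its three core reactions (standard biochemistry, summarized here for orientation) are:

```latex
% glutamate decarboxylase (GAD), neuron:
\mathrm{glutamate} \longrightarrow \mathrm{GABA} + \mathrm{CO_2}
% GABA transaminase (GABA-T), astrocyte:
\mathrm{GABA} + \alpha\text{-ketoglutarate}
  \longrightarrow \text{succinate semialdehyde} + \mathrm{glutamate}
% succinate-semialdehyde dehydrogenase (SSADH), astrocyte:
\text{succinate semialdehyde} + \mathrm{NAD^+} + \mathrm{H_2O}
  \longrightarrow \mathrm{succinate} + \mathrm{NADH} + 2\,\mathrm{H^+}
```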
Ammonia homeostasis
One of the problems of both the glutamate–glutamine cycle and the GABA–glutamine cycle is ammonia homeostasis. When one molecule of glutamate or GABA is converted to glutamine in the astrocytes, one molecule of ammonia is absorbed. Also, for each molecule of glutamate or GABA cycled into the astrocytes from the synapse, one molecule of ammonia will be produced in the neurons. This ammonia has to be transported out of the neurons and back into the astrocytes for detoxification, as an elevated ammonia concentration has detrimental effects on a number of cellular functions and can cause a spectrum of neuropsychiatric and neurological symptoms (impaired memory, shortened attention span, sleep-wake inversions, brain edema, intracranial hypertension, seizures, ataxia and coma).
Transportation and detoxification
This could happen in two different ways: ammonia itself might simply diffuse (as NH3) or be transported (as NH4+) across the cell membranes in and out of the extracellular space, or a shuttle system involving carrier molecules (amino acids) might be employed. Certainly, ammonia can diffuse across lipid membranes, and it has been shown that ammonia can be transported by K+/Cl− co-transporters.
Amino-acid shuttles and the transport of ammonia
Since diffusion and transport of free ammonia across the cell membrane affect the pH of the cell, the more attractive and regulated way of transporting ammonia between the neuronal and astrocytic compartments is via an amino-acid shuttle, of which there are two: leucine and alanine. The amino acid moves in the opposite direction to glutamine. In the direction opposite to the amino acid, a corresponding molecule is transported; for alanine this molecule is lactate, and for leucine it is α-ketoisocaproate.
Leucine
The ammonia fixed as part of the glutamate dehydrogenase enzyme reaction in the neurons is transaminated into α-ketoisocaproate to form the branched-chain amino acid leucine, which is exported to the astrocytes, where the process is reversed. α-ketoisocaproate is transported in the other direction.
Alanine
The ammonia produced in neurons is fixed into α-ketoglutarate by the glutamate-dehydrogenase reaction to form glutamate, then transaminated by alanine aminotransferase into lactate-derived pyruvate to form alanine, which is exported to astrocytes. In the astrocytes, this process is then reversed, and lactate is transported in the other direction.
Disorders and conditions
Numerous reports have been published indicating that the glutamate/GABA–glutamine cycle is compromised in a variety of neurological disorders and conditions. Biopsies of sclerotic hippocampus tissue from human subjects with epilepsy have shown decreased glutamate–glutamine cycling. Another pathology in which the glutamate/GABA–glutamine cycle might be compromised is Alzheimer's disease; NMR spectroscopy showed decreased glutamate neurotransmission activity and TCA cycling rate in patients with Alzheimer's disease. Hyperammonemia in the brain, typically occurring as a secondary complication of primary liver disease and known as hepatic encephalopathy, is a condition that affects glutamate/GABA–glutamine cycling in the brain. Current research into autism also indicates potential roles for glutamate, glutamine, and/or GABA in autistic spectrum disorders.
Potential drug targets
In the treatment of epilepsy, drugs such as vigabatrin that target both GABA transporters and the GABA-metabolizing enzyme GABA transaminase have been marketed, providing proof of principle for the neurotransmitter cycling systems as pharmacological targets. However, with regard to glutamate transport and metabolism, no such drugs have been developed, because glutamatergic synapses are abundant and the neurotransmitter glutamate is an important metabolite, so interference is liable to cause adverse effects. So far, most of the drug development directed at the glutamatergic system has focused on ionotropic glutamate receptors as pharmacological targets, although G-protein coupled receptors have been attracting increased attention over the years.
References
Molecular biology | Glutamate–glutamine cycle | Chemistry,Biology | 1,923 |
4,720,277 | https://en.wikipedia.org/wiki/Lev%20Vaidman | Lev Vaidman (born 4 September 1955) is a Russian-Israeli physicist and professor at Tel Aviv University, Israel. He is noted for his theoretical work on the foundations of quantum mechanics, including quantum teleportation, the Elitzur–Vaidman bomb tester, and weak values. He was a member of the Editorial Advisory Board of The American Journal of Physics from 2007 to 2009. In 2010, the Elitzur–Vaidman bomb tester was chosen as one of the "Seven Wonders of the Quantum World" by New Scientist magazine.
Personal life
He attended the 45th Physics-Mathematics School in Saint Petersburg and was twice among the winners of the All-Soviet high school students' Physics Olympiad (first place in 1971 and second place in 1972), and in 1972 scored 26th in the International Physics Olympiad in Bucharest. Vaidman emigrated with his family to Israel at the age of 18. Prior to that, he studied for one year at Saint Petersburg University (then Leningrad University).
The Elitzur–Vaidman bomb tester
This thought experiment, subsequently conducted in the lab, is an example of interaction-free measurement (IFM). IFM is the detection of a property of an object, or of its presence, without any physical interaction between the observer and the object. Obtaining information from an object in such a manner appears paradoxical.
The bomb tester works by employing an interferometer. When a photon is fired into the device, it encounters a half-silvered mirror positioned so as to reflect the photon at a ninety-degree angle. There is a 50-50 chance it will be reflected or pass through. Due to the quantum properties of the photon, it both passes through the mirror and is reflected off it.
Now, the same photon is moving through two different parts of the device. The photon that passed through the mirror is now on the "lower path". It may or may not encounter a bomb, which is designed to explode if it encounters a single photon. The photon that was reflected off the mirror is now on the "upper path". Both photons next encounter a normal mirror. The lower-path photon is reflected ninety degrees upward (provided it was not absorbed by a bomb). The upper-path photon is reflected back ninety degrees so that it returns to its original trajectory.
If the lower path contains no bomb, the lower-path photon will arrive at a second half-silvered mirror at the same time as the upper-path photon. This results in the single photon interfering with itself.
A pair of detectors is positioned beyond the second half-silvered mirror in such a way that the photon's superposition collapses and the photon is registered at one detector or the other, but not both. With no bomb present, interference guarantees that one detector (the "bright" port) always fires and the other (the "dark" port) never does. If the dark detector fires, the interference must have been disturbed, so there must be a bomb on the lower path even though no photon interacted with it. With a live bomb, this happens in one quarter of the runs; half of the runs end in an explosion, and the remaining quarter give an inconclusive bright-port click.
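These statistics follow from tracking complex amplitudes through the interferometer. The sketch below is illustrative rather than canonical: the beam-splitter convention (reflection acquiring a factor of i) and all names are choices of this example.

```python
import numpy as np

def mach_zehnder(bomb_present: bool):
    """Return (P_explosion, P_bright, P_dark) for a single photon."""
    # Lossless 50:50 beam splitter; reflection picks up a factor of i.
    BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

    state = np.array([1, 0], dtype=complex)  # photon enters one input port
    state = BS @ state                        # split onto lower/upper paths

    p_explosion = 0.0
    if bomb_present:
        # The bomb acts as a which-path detector on the lower path
        # (component 0): it absorbs the photon with probability
        # |amplitude|^2; otherwise that amplitude collapses to zero.
        p_explosion = abs(state[0]) ** 2
        state[0] = 0

    state = BS @ state                        # recombine at second splitter
    p_dark = abs(state[0]) ** 2    # dark port: silent when paths interfere
    p_bright = abs(state[1]) ** 2  # bright port: can always fire
    return p_explosion, p_bright, p_dark

print(mach_zehnder(False))  # (0.0, 1.0, 0.0): without a bomb the dark port never fires
print(mach_zehnder(True))   # (0.5, 0.25, 0.25): a dark-port click certifies the bomb
```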
Vaidman has argued that this lends support to the many-worlds interpretation of quantum mechanics.
Teleportation of continuous variables
Vaidman is a pioneer in the area of quantum teleportation. He has demonstrated that non-local measurements can be used to teleport unknown quantum states of systems with continuous variables.
See also
Avshalom Elitzur
Elitzur–Vaidman bomb tester
Englert–Greenberger duality relation
Interaction-free measurement
Many-worlds interpretation
References
External links
Lev Vaidman's homepage
Publications list
The Stanford Online Encyclopedia of Philosophy entry on the Many-Worlds Interpretation of Quantum Mechanics, by Lev Vaidman
Lev Vaidman, Teleportation of Quantum States, Phys. Rev. A 49, 1473-1476 (1994). Pre-print arXiv:hep-th/9305062, submitted 14 May 1993
1955 births
Living people
Israeli physicists
Academic staff of Tel Aviv University
Soviet emigrants to Israel
Israeli Jews
Soviet Jews
Scientists from Saint Petersburg
Quantum physicists
Jewish Russian physicists | Lev Vaidman | Physics | 866 |
66,168,946 | https://en.wikipedia.org/wiki/Moto%20G%205G | Moto G 5G and Motorola One 5G Ace are Android phablets developed by Motorola Mobility, a subsidiary of Lenovo. The Moto G 5G branded variant was initially released in December 2020. In the United States, it was released as Motorola One 5G Ace on 13 January 2021.
Hardware
CPU
The device uses the Snapdragon 750G system-on-chip, a fast mid-range ARM SoC with eight Kryo 570 CPU cores. It was the first device released in India to use this chip.
Two fast ARM Cortex-A77 cores at up to 2.2 GHz
Six small ARM Cortex-A55 cores at up to 1.8 GHz
and a Snapdragon X52 5G modem (up to 3700 Mbit/s download).
The SoC is manufactured on a modern 8 nm process.
Camera
Rear Camera System
The device has a triple rear camera system:
48 MP (f/1.7, 0.8 μm) main camera with PDAF, which outputs 12 MP (f/1.7, 1.6 μm) Quad Pixel images, using pixel binning to improve image quality (a minimal binning sketch follows the camera lists below)
8 MP (f/2.2, 1.12 μm) | 118° ultra-wide-angle camera
2 MP (f/2.4, 1.75 μm) | Macro Vision camera | AF
and a single LED flash
Front Camera
The device has a single punch-hole front camera:
16 MP (f/2.2, 1.0 μm) sensor that outputs 4 MP (f/2.2, 2.0 μm) Quad Pixel images
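Quad Pixel output is a 2×2 pixel-binning scheme: four neighboring small pixels are combined into one larger effective pixel, trading resolution for light sensitivity (48 MP → 12 MP on the rear camera, 16 MP → 4 MP on the front). The sketch below shows the averaging variant on simulated data; the sensor's actual on-chip pipeline is proprietary, so this is only an illustration.

```python
import numpy as np

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """Combine each 2x2 block of sensor pixels into one output pixel.

    Averaging four 0.8 um pixels emulates one 1.6 um pixel: a quarter
    of the resolution with lower per-pixel noise.
    """
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0, "dimensions must be even"
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Scaled-down stand-in for the 8000 x 6000 (~48 MP) sensor output.
rng = np.random.default_rng(0)
raw = rng.normal(loc=100.0, scale=10.0, size=(800, 600))

binned = bin_2x2(raw)           # 400 x 300, a quarter of the pixels
print(binned.shape)             # (400, 300)
print(raw.std(), binned.std())  # noise roughly halved by averaging 4 pixels
```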
Software
The device launched with Android 10 and as of October 2021 devices have started receiving the Android 11 update.
The device comes with minimal customization to the stock Android experience. It includes Motorola My UX gesture features such as Quick Capture, which launches the camera when a twist gesture is performed with the wrist while holding the device, and Fast Flashlight, which turns on the flashlight with two chopping motions.
OS Update History
Variants
Reception
Reviewers praised the device for its clean user interface (minimal and tasteful customizations to the stock Android experience), good battery life, good performance, and good camera, while criticizing the comparatively slow charging, bland design, and average screen refresh rate.
References
External links
Android (operating system) devices
Mobile phones introduced in 2020
Mobile phones with multiple rear cameras
Motorola smartphones
Mobile phones with 4K video recording | Moto G 5G | Technology | 475 |
60,357,229 | https://en.wikipedia.org/wiki/Missouri%20Hyperloop | The Missouri Hyperloop is a proposed high-speed transportation route in the U.S. state of Missouri. The hyperloop would connect the cities of St. Louis, Columbia, and Kansas City, complementing the busy Interstate 70. Cross-state travel between Missouri's two largest cities would be reduced from four hours to under 30 minutes.
In 2017, the Missouri Hyperloop Coalition was formed as a partnership between Virgin Hyperloop One, the University of Missouri, and engineering firm Black & Veatch. The coalition released a report that concluded such a hyperloop was feasible, the first such study in the United States. It touts benefits including annual cost savings, fast and cheap travel for over 5 million people in Missouri's two largest metropolitan areas, and connecting technology and research centers including the University of Missouri.
In 2019, Missouri Governor Mike Parson announced the formation of a Blue Ribbon panel to examine the details of funding and construction, including a potential test track. The corridor has been described as an ideal location because of its relative flatness, population density, and preexisting infrastructure. Virgin Hyperloop CEO Jay Walder referred to Missouri as a "model process" for planning hyperloops.
In June 2019, Virgin Hyperloop One announced a partnership with the Sam Fox School of Design & Visual Arts of Washington University in St. Louis to further investigate different proposals for the Missouri Hyperloop.
In October 2020, West Virginia was announced as the location for the test track, though this was never constructed. In December 2023, Hyperloop One announced it was shutting down after failing to obtain any contracts to build a working system.
References
Hyperloop
Transportation in Missouri | Missouri Hyperloop | Technology,Engineering | 347 |
51,792,686 | https://en.wikipedia.org/wiki/Lawvere%E2%80%93Tierney%20topology | In mathematics, a Lawvere–Tierney topology is an analog of a Grothendieck topology for an arbitrary topos, used to construct a topos of sheaves. A Lawvere–Tierney topology is also sometimes called a local operator, coverage, topology, or geometric modality. They were introduced by William Lawvere and Myles Tierney.
Definition
If E is a topos, then a topology on E is a morphism j : Ω → Ω from the subobject classifier Ω to itself such that j preserves truth (j ∘ true = true), preserves intersections (j ∘ ∧ = ∧ ∘ (j × j)), and is idempotent (j ∘ j = j).
j-closure
Given a subobject s : S ↪ A of an object A with classifying map χs : A → Ω, the composition j ∘ χs classifies another subobject of A, here denoted cl(s), such that s is a subobject of cl(s); cl(s) is said to be the j-closure of s.
Some theorems related to j-closure are (for subobjects s and w of A):
inflationary property: s ≤ cl(s)
idempotence: cl(cl(s)) = cl(s)
preservation of intersections: cl(s ∩ w) = cl(s) ∩ cl(w)
preservation of order: if s ≤ w, then cl(s) ≤ cl(w)
stability under pullback: cl(f*(w)) = f*(cl(w)) for a morphism f : B → A.
Examples
Grothendieck topologies on a small category C are essentially the same as Lawvere–Tierney topologies on the topos of presheaves of sets over C.
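A standard concrete example on any topos (a well-known fact of topos theory, stated here for illustration) is the double-negation operator, which satisfies the three axioms above:

```latex
j = \lnot\lnot \colon \Omega \to \Omega, \qquad
\lnot\lnot \circ \mathrm{true} = \mathrm{true}, \qquad
\lnot\lnot \circ \lnot\lnot = \lnot\lnot, \qquad
\lnot\lnot \circ \wedge = \wedge \circ (\lnot\lnot \times \lnot\lnot)
```

The sheaves for the double-negation topology form a Boolean subtopos; on a presheaf topos, this topology corresponds to the dense Grothendieck topology.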
References
Topos theory
Closure operators | Lawvere–Tierney topology | Mathematics | 258 |
1,717,528 | https://en.wikipedia.org/wiki/Option%20key | The Option key, , is a modifier key present on Apple keyboards. It is located between the Control key and the Command key on a typical Mac keyboard. There are two Option keys on modern (as of 2020) Mac desktop and notebook keyboards, one on each side of the space bar. (As of 2005, some laptops had only one, in order to make room for the arrow keys.)
Apple commonly uses the symbol ⌥ to represent the Option key. From 1980 to 1984, on the Apple II, this key was known as the closed apple key or the solid apple key, and had a black line drawing of a filled-in apple on it.
Since the 1990s, "Alt" has sometimes appeared on the key as well, for use as an Alt key with non-Mac software, such as Unix and Windows programs; as of 2017, the newest Apple keyboards such as the Magic Keyboard no longer include the "Alt" label. The Option key in a Mac operating system functions differently from the Alt key under other Unix-like systems or Microsoft Windows. It is not used to access menus or hotkeys but is instead used as a modifier for other command codes, as well as to provide easier access to various accents and symbols. In this regard, it is akin to the AltGr key, found on some IBM-compatible PC keyboards.
Uses
Alternative keyboard input
The use of the Option key is similar to that of the AltGr key on European keyboards of IBM-compatible PCs, in the sense that it can be used to type additional characters, symbols and diacritical marks. The options available differ depending on the keyboard input locale that the user has selected. For example, in the U.S. English keyboard input, ⌥+A produces the "å" character, and ⌥+4 produces the cent sign "¢".
The Option key can also provide access to dead key functionality. For example, holding down ⌥ while pressing ` will create a highlighted grave accent which will be added to the next letter if possible – so if an e is then pressed, the resultant character is è. If an r is pressed instead, the two characters are not compatible so the result is `r.
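This dead-key behavior parallels Unicode combining-character composition, which the snippet below demonstrates using Python's standard library; it is an analogy, not the actual macOS input-method implementation.

```python
import unicodedata

# A dead key stores a pending accent; the next letter either composes
# with it or, failing that, the two appear side by side (as in `r).
COMBINING_GRAVE = "\u0300"  # combining grave accent, placed after its base

def apply_grave(letter: str) -> str:
    # NFC normalization merges base + combining accent into a single
    # precomposed character whenever Unicode defines one.
    return unicodedata.normalize("NFC", letter + COMBINING_GRAVE)

print(apply_grave("e"), len(apply_grave("e")))  # è 1  (precomposed U+00E8 exists)
print(apply_grave("r"), len(apply_grave("r")))  # r + accent, length 2 (no precomposed form)
```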
Several accents are available through combinations of the ⌥ key and a character key, acting as dead keys; the accent can then be applied to associated letters, both lowercase and uppercase. Pressing ⇧ together with ⌥ and a key produces a further set of characters, in some cases the uppercase version of the set produced without ⇧.
Holding the Shift key as well as the Option key while pressing a letter key may create "capital" versions of the characters that result when the same letter key is pressed with the Option key alone. For example:
⌥+A results in å. ⌥+⇧+A results in Å.
⌥+C results in ç. ⌥+⇧+C results in Ç.
⌥+O results in ø. ⌥+⇧+O results in Ø.
⌥+' results in æ. ⌥+⇧+' results in Æ.
The Option key is often used in conjunction with special keys like Return, Delete, and Escape to provide alternate functions. For example, ⌥+Return typically produces a line break that is not interpreted as a paragraph break.
Alternative buttons and menu items
The key is also used to provide for alternative menu items and buttons when pressed down. Examples:
Safari, Finder – the Option key causes the "Close Window" menu item to switch to "Close All Windows" when pressed down. Consequently, clicking a window's close box with the option key depressed invokes "close all" as well. This functionality is a de facto Macintosh standard and available in numerous other programs.
Dock – the Option key causes the "Hide" and "Quit" menu items in the context menu of a Dock icon to switch to "Hide Others" and "Force Quit".
iTunes – the Create Playlist button switches to a Create Smart Playlist button. Holding Option and clicking the green Window Zoom (+) button at the top-left forces the iTunes window into fullscreen view, rather than switching between the user-set window size and the iTunes Mini Player.
iPhoto – the rotate image button toggles between a "rotate right" and a "rotate left".
Menu bar items – the Sound icon will show Audio Device input / output settings instead of the volume control slider, the battery item will show the condition of the battery, the MobileMe sync item will show the last sync date/time of individual synced items and will offer additional menu items such as Sync Diagnostics, and the Wireless icon will show extended wireless network information and, in Mac OS X Lion, offer an item for launching a Wi-Fi diagnostic application (when AirPort is connected).
The iPhoto example is an example of a control whose behavior is governed by a preference, which is temporarily inverted by holding down the Option key. The preference in this case is which way to rotate the image: if the user changes the default rotation direction in the Preferences to clockwise, holding down Option will make the button rotate counterclockwise instead, and vice versa. It is common for such controls (that is, those whose behavior is governed by a preference) to be invertible in this way.
Common keyboard navigations
In text areas, the Option key can be used for quick keyboard navigation.
⌥+← / ⌥+→ – navigate to the previous/next word.
Windows equivalent: Ctrl+← / Ctrl+→
⌥+↑ / ⌥+↓ – navigate to the head/end of the current paragraph.
Terminal equivalent: Ctrl+A / Ctrl+E
Windows equivalent: Home/End
⌥+Page Up / ⌥+Page Down – move the caret up/down a page. Without the Option key, the keys scroll the view up/down a page without moving the caret.
Windows equivalent: Page Up / Page Down
Alternative mouse actions
When the Option key is kept pressed while using the mouse, the mouse action can change behaviour:
Option-clicking an application other than the current one automatically hides the current application and switches to the clicked application.
When dragging an item (a file in the Finder, or a layer in Adobe Photoshop, for instance), keeping Option pressed will duplicate it instead of moving it.
File downloads
In browsers such as Safari and SeaMonkey, the Option key can be used to download a file. Pressing down the Option key when hitting Return in the address bar causes the URL-specified file to be downloaded. Also, pressing the Option key when clicking a hyperlink causes the link target to be downloaded. Besides the Option key methods, other ways of downloading include right-clicking (or Control-clicking on Macs) a hyperlink to bring up a context menu, then selecting the appropriate download command, or pasting a URL directly into Safari's Downloads window.
Miscellaneous
Some applications make unique uses of the Option key:
Terminal (including at least version 1.4.6; no longer true as of 2.0.1, where ⌘+arrow keys work) – ⌥+arrow keys navigate between open Terminal windows in a loop. Usually, programs use ⌘+` and ⌘+⇧+`, which are also supported in Terminal.
Scroll bars (including at least OS X 10.3.x) – Option-clicking a scroll bar arrow can cause the view to jump to the next page instead of moving by a few lines. Option-clicking in the scroll bar can cause the view to jump to that position instead of jumping to the next page. This behavior can be reversed in System Preferences: Appearance.
Startup Disk – Holding the Option Key at boot time activates a boot manager built into the firmware, where the user may choose from which drive/partition to boot the computer from, including Mac OS and Mac OS X partitions or drives on PowerPC-based Macs, and Mac OS X and Microsoft Windows partitions or drives on Intel-based Macs (running Mac OS X 10.4.6 and later with Boot Camp from Apple Inc. installed). This has been replaced by a general boot menu, activated by holding the power button on Apple Silicon-based Macs. The built-in bootloader can also boot other operating systems such as Linux; however, these are labeled as "Windows" in the bootloader.
References
Computer keys
Macintosh platform | Option key | Technology | 1,628 |
14,024,503 | https://en.wikipedia.org/wiki/Inositol%203-methyltransferase | In enzymology, an inositol 3-methyltransferase () is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + myo-inositol ⇌ S-adenosyl-L-homocysteine + 1D-3-O-methyl-myo-inositol
Thus, the two substrates of this enzyme are S-adenosyl methionine and myo-inositol, whereas its two products are S-adenosylhomocysteine and 1D-3-O-methyl-myo-inositol.
This enzyme belongs to the family of transferases, specifically those methyltransferases transferring one-carbon groups. The systematic name of this enzyme class is S-adenosyl-L-methionine:1D-myo-inositol 3-O-methyltransferase. Other names in common use include inositol L-1-methyltransferase, myo-inositol 1-methyltransferase, S-adenosylmethionine:myo-inositol 1-methyltransferase, and myo-inositol 1-O-methyltransferase (a name based on the 1L-numbering system rather than 1D-numbering), as well as S-adenosyl-L-methionine:myo-inositol 1-O-methyltransferase. This enzyme participates in inositol phosphate metabolism.
References
EC 2.1.1
Enzymes of unknown structure
Inositol | Inositol 3-methyltransferase | Chemistry | 356 |
76,555,802 | https://en.wikipedia.org/wiki/Cobalt%28II%29%20perchlorate | Cobalt(II) perchlorate is an inorganic chemical compound with the formula Co(ClO4)2·nH2O (n = 0 or 6). The pink anhydrous and red hexahydrate forms are both hygroscopic solids.
Preparation and reactions
Cobalt(II) perchlorate hexahydrate is produced by reacting cobalt metal or cobalt(II) carbonate with perchloric acid, followed by the evaporation of the solution:
CoCO3 + 2 HClO4 → Co(ClO4)2 + H2O + CO2
The anhydrous form cannot be produced from the hexahydrate by heating, as it instead decomposes to cobalt(II,III) oxide at 170 °C. Instead, anhydrous cobalt(II) perchlorate is produced from the reaction of dichlorine hexoxide and cobalt(II) chloride, followed by heating in a vacuum at 75 °C.
Structure
The anhydrous form consists of octahedrally coordinated cobalt centers with tridentate perchlorate ligands. The orthorhombic hexahydrate, on the other hand, consists of isolated [Co(H2O)6]2+ octahedra and perchlorate anions, with lattice constants a = 7.76 Å, b = 13.44 Å and c = 5.20 Å. The hexahydrate undergoes phase transitions at low temperatures.
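As a quick check on the quoted lattice constants, the orthorhombic unit-cell volume is simply their product:

```latex
V = a\,b\,c = 7.76\,\text{Å} \times 13.44\,\text{Å} \times 5.20\,\text{Å} \approx 542\,\text{Å}^3
```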
References
Cobalt(II) compounds
Perchlorates | Cobalt(II) perchlorate | Chemistry | 321 |